Introduction

If you have used pandas for your data analysis work, you already have some idea of how powerful and flexible it is at data processing. Often there is more than one way to solve a problem, and choosing the best approach becomes another tough decision. For instance, in one of my previous articles I tried to summarize 20 ways to filter records in pandas, which is by no means a complete list of the possible solutions. In this article, I will discuss the different ways to merge/combine data in pandas and when you should use each of them, since combining data is usually a necessary step before you start your data analysis.

Prerequisites

If you have not yet installed pandas, you can use the below command to install it from PyPI:

pip install pandas

And import the module at the beginning of your code:

import pandas as pd

Let's dive into the code examples.

Combine Data with Append vs Concat

Imagine you have the below two data frames from different sources, and you would like to merge them into one data frame.

df1 = pd.DataFrame({"ID" : [1, 2, 3, 4, 5], "Name" : ["Aaron", "Jimmy", "Zoe", "Jill", "Jenny"]})
df2 = pd.DataFrame({"ID": [6], "Name" : ["Kelly"]})

The most straightforward way is the append method of the pandas DataFrame object:

df1.append(df2, ignore_index=True)

The append method lets you add rows to the end of the current data frame, and with the ignore_index parameter set to True, the resulting axis is relabeled starting from 0. (Note that DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on recent versions you should use concat instead.) You would see the output as per below:

Alternatively, you can use the pandas concat method, which is self-explanatory based on its name. It provides a few more parameters for manipulating the resulting data frame, such as the axis along which the concatenation is done, as well as the join logic for either a union or an intersection operation.
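The join parameter just mentioned is not demonstrated elsewhere in this article, so here is a minimal sketch; the frames and column names are made up for illustration:

```python
import pandas as pd

# Two frames that only partially share columns (made-up data for illustration).
a = pd.DataFrame({"ID": [1, 2], "Name": ["Aaron", "Jimmy"], "City": ["KL", "SG"]})
b = pd.DataFrame({"ID": [3], "Name": ["Zoe"]})

# join="outer" (the default) keeps the union of columns, filling gaps with NaN.
outer = pd.concat([a, b], ignore_index=True)
# join="inner" keeps only the intersection of columns shared by all frames.
inner = pd.concat([a, b], ignore_index=True, join="inner")

print(list(outer.columns))  # ['ID', 'Name', 'City']
print(list(inner.columns))  # ['ID', 'Name']
```

The same join argument also applies when concatenating along axis=1, where it unions or intersects the row index instead of the columns.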
You can use the below to generate the same output as previously:

pd.concat([df1, df2], ignore_index=True)

And if you would like to retain a reference to the sources in your result, you can use the keys parameter as per below:

pd.concat([df1, df2], keys=["src_1", "src_2"])

This returns a multi-index data frame where you can easily refer back to the data by source (e.g. df.loc["src_1"]).

Adding a new data frame as columns can also be done with axis=1, for instance:

df3 = pd.DataFrame({"Age" : [12, 13, 13, 12, 13]})
pd.concat([df1, df3], axis=1)

The data frame has been added as one column to the caller:

As the concat method accepts a list of data frames, you can combine multiple data frames at one time, which is much faster than using append to add them one by one.

Merge Data with Join vs Merge

Besides appending rows or columns along an axis, sometimes you may need more sophisticated operations similar to the left/right joins in a relational database. For such scenarios, you should make use of the pandas merge or join method.

For the previous example of appending df2 to df1, you can achieve the same with merge:

df1.merge(df2, how="outer")

Output as following:

It would be more tedious to achieve the same via join, since join can only join data frames on the index, so you have to set the index to the columns you would like to use as the key.
Below is how you can do it via join:

df1.join(df2.set_index(["ID", "Name"]), on=["ID", "Name"], how="outer").reset_index(drop=True)

Assuming you have the below students' scores for each subject, and you want to merge the student information (df1) with them based on the "Name" column:

df4 = pd.DataFrame({"ID" : [1001, 1002, 1003, 1002, 1001], "Subject": ["Science", "Math", "English", "Math", "Science"], "Name": ["Aaron", "Jimmy", "Jimmy", "Zoe", "Jenny"], "Score" : ["A", "B", "C", "B", "B"]})

With the merge function, you can specify the joining logic as a left join on the "Name" column as per below:

df1.merge(df4, on="Name", how="left")

Pandas automatically adds suffixes whenever the two data frames have columns with duplicate names (e.g. "ID" in df1 and df4); below is the output you may see:

To generate the same output via join, you can use the below code, where you need to pre-set the index for df4 and specify the suffixes for the left and right data frames:

df1.join(df4.set_index("Name"), on="Name", lsuffix="_x", rsuffix="_y")

Of course, if you would like to perform a right join on the above two data frames, you can do so as per below:

df1.merge(df4, on="Name", how="right")
# or
df1.join(df4.set_index("Name"), on="Name", how="right", lsuffix="_x", rsuffix="_y")

Output as per below:

Merge DataFrame with Duplicate Keys

When merging multiple DataFrame objects, you may occasionally encounter duplicate values in the columns you want to use as keys for joining.
For instance, you may have the below records if one subject has more than one lecturer:

df5 = pd.DataFrame({"Subject": ["Science", "Science", "Math", "Math", "English"], "Lecturer": ["Michael", "John", "Tim", "Robert", "Alex"]})

When you merge this information with the student scores based on the subject, using either merge or join:

df4.merge(df5, on="Subject", how="left")
# or
df4.join(df5.set_index("Subject"), on="Subject", how="left")

You would see the below output with M x N records due to the duplicate keys in df5:

If your objective is to perform something similar to an Excel VLOOKUP and return the first matched value, you can use the drop_duplicates method to remove the duplicate records before joining. E.g.:

df4.merge(df5.drop_duplicates("Subject"), on="Subject", how="left")

This combines the two data frames with the first matched record from df5:

And in case you do not want to lose information from the lecturer data frame, you will need to perform some sort of data aggregation before joining, e.g.:

df4.merge(df5.groupby("Subject").agg({"Lecturer" : lambda x: ','.join(x)}), on="Subject", how="left")

With this aggregation on the lecturer values, you would see the below output:

Based on the above examples, you may find that merge and join are interchangeable in most cases, and that you have to type a bit more when using join due to its different default arguments. Since join always works on the index, you have to pre-set the index on the key columns before joining.

Conclusion

In this article, we have reviewed the methods pandas offers for combining data frames, with some sample code. To wrap up, append and concat are usually used for combining two or more data frames based on the row or column index, and concat performs better than append when you have multiple data frames to work on.
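The performance claim above is easy to check with a quick sketch: growing a frame one piece at a time (which is what repeated append amounts to) copies data on every step, while a single concat over the whole list copies once. The data here is made up for illustration:

```python
import time

import pandas as pd

frames = [pd.DataFrame({"ID": [i], "Name": ["user%d" % i]}) for i in range(200)]

# One-shot: hand concat the whole list at once.
t0 = time.perf_counter()
combined = pd.concat(frames, ignore_index=True)
one_shot = time.perf_counter() - t0

# Piece by piece: what calling append in a loop effectively does.
t0 = time.perf_counter()
grown = frames[0]
for f in frames[1:]:
    grown = pd.concat([grown, f], ignore_index=True)
piecewise = time.perf_counter() - t0

# Both give the same result, but the one-shot version scales far better.
print(combined.equals(grown))  # True
```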
If you need high-performance in-memory join operations like the SQL joins of a relational database, you will need the merge or join method, which are interchangeable in most scenarios. In addition, if the data frame you work on does not have an index on the joining row/column, using merge over join will probably save you some typing.
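To wrap up with something runnable, the merge/join equivalence discussed above can be verified directly with the df1 and df4 frames from the examples:

```python
import pandas as pd

df1 = pd.DataFrame({"ID": [1, 2, 3, 4, 5],
                    "Name": ["Aaron", "Jimmy", "Zoe", "Jill", "Jenny"]})
df4 = pd.DataFrame({"ID": [1001, 1002, 1003, 1002, 1001],
                    "Subject": ["Science", "Math", "English", "Math", "Science"],
                    "Name": ["Aaron", "Jimmy", "Jimmy", "Zoe", "Jenny"],
                    "Score": ["A", "B", "C", "B", "B"]})

# Left join via merge: one call, key column named directly.
via_merge = df1.merge(df4, on="Name", how="left")

# The same left join via join: the right frame's index must be pre-set
# and the overlap suffixes spelled out by hand.
via_join = df1.join(df4.set_index("Name"), on="Name", lsuffix="_x", rsuffix="_y")

# Same columns, same number of rows (Jimmy matches twice, Jill not at all).
print(len(via_merge), len(via_join))  # 6 6
```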
https://www.codeforests.com/category/python/page/3/
Angular CLI

Angular now comes with a command line interface (CLI) to make it easier and faster to build Angular applications.

Features

The Angular CLI helps with:

- Bootstrapping a project. It creates the initial project structure with a root NgModule and a root component, and bootstraps the application using the platformBrowserDynamic method. The project is also configured to use the webpack loader, which handles things like module loading, bundling and minification of dependent code. Note: In the course we've used SystemJS for this since webpack doesn't work with Plunker yet. We'll continue to use SystemJS for the code samples in Plunker and webpack for any applications created with the Angular CLI.
- Running tests. Note: the CLI can run all the tests automatically in the background.
- Packaging and releasing. The CLI doesn't just stop with development; using it we can also package our application ready for release to a server.

Installing the Angular CLI

To install the CLI we use Node and npm:

npm install -g @angular/cli

If the above ran successfully it will have made a new application called ng available to you. To test that it installed correctly, run this command:

ng -v

It should print an ASCII-art Angular logo followed by the version of the application that was installed, like so:

@angular/cli: 1.4.1
node: 8.1.3
os: darwin x64

Start an application with ng new

Let's create a new project called codecraft. To bootstrap our new project with ng we run this command:

ng new codecraft

This outputs something like the below:

The command generates a number of new files and folders for us:

codecraft
├── dist                      // production or development builds of our application go here
├── src                       // main application code goes here
│   ├── app
│   │   ├── app.component.css
│   │   ├── app.component.html
│   │   ├── app.component.spec.ts
│   │   ├── app.component.ts
│   │   └── app.module.ts
│   ├── environments          // settings for the different environments: dev, qa, prod
│   │   ├── environment.prod.ts
│   │   └── environment.ts
│   ├── index.html            // main html file
│   ├── main.ts               // main typescript file
│   ├── favicon.ico
│   ├── polyfills.ts
│   ├── styles.css
│   ├── test.ts               // prepares the test environment and runs all the unit tests
│   ├── tsconfig.app.json     // typescript configuration files
│   ├── tsconfig.spec.json
│   └── typings.d.ts          // typescript type definition file
├── e2e                       // the E2E tests for our application go here
├── angular-cli.json
├── karma.conf.js
├── package.json
├── protractor.conf.js
├── README.md
└── tslint.json

Note: As well as creating the files and folders for us, we can see from package.json that the CLI also installed the correct versions of all the required npm dependencies:

{
  "name": "activity",
  ...
}

So far in this course we have bundled all our code into one file on Plunker for convenience. Let's see how the Angular CLI breaks up the code into multiple files and where those files are located.

- src/app/app.component.ts

The new project is bootstrapped with one component, our root component, which it called AppComponent and gave a selector of app-root.

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
}

- src/index.html

The app-root component has been added to our index.html file already. There are no script tags present yet; that's fine, the Angular build process adds all the required script and link tags for us.
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Activity</title>
  <base href="/">
  <link rel="stylesheet" href="">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
</head>
<body>
  <app-root></app-root>
</body>
</html>

- src/app/app.module.ts

Our top level module configuration is stored in this file.

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';

import { AppComponent } from './app.component';
import { JokeComponent } from './joke/joke.component';
import { JokeListComponent } from './joke-list/joke-list.component';
import { JokeFormComponent } from './joke-form/joke-form.component';
import { HeaderComponent } from './header/header.component';

@NgModule({
  declarations: [
    AppComponent,
    JokeComponent,
    JokeListComponent,
    JokeFormComponent,
    HeaderComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

- src/main.ts

The actual act of importing our main module and bootstrapping our Angular web application is left to the main.ts file, which looks something like this:

import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app.module';

platformBrowserDynamic().bootstrapModule(AppModule);

Serve an application with ng serve

With the CLI we can also easily serve our application using a local web server. We just run:

ng serve

This builds our application, bundles all our code using webpack and makes it all available through localhost:4200. ng serve also watches for any changes to files in our project and auto-reloads the browser for us. The command runs the application through a web server that supports HTML5 push-state routing.

Generate code with ng generate

The ability to generate stub code is one of the most useful features of the CLI. The most exciting part of this is that it automatically generates code that adheres to the official style guide.
Important: With the generate command we can create new components, directives and more (generating routes is not available in version 1.0). Each of the types of things it can create is called a scaffold. We can run this command using

ng generate <scaffold> <name>

If we wanted to generate a component called HeaderComponent we would write:

ng generate component Header

This creates a number of files in a folder called header in src/app, like so:

app
├── header
│   ├── header.component.css       // The css for this component
│   ├── header.component.html      // The template for this component
│   ├── header.component.spec.ts   // The unit test for this component
│   └── header.component.ts        // The component typescript file

Taking a look at header.component.ts:

import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-header',
  templateUrl: './header.component.html',
  styleUrls: ['./header.component.css']
})
export class HeaderComponent implements OnInit {

  constructor() { }

  ngOnInit() { }

}

Tip: Don't pass the name HeaderComponent to the generate command. The Angular CLI automatically appends Component to the name, so your component class would end up being called HeaderComponentComponent.

The command above can be shortened to:

ng g c Header

Tip: If we run the command in an app folder, the generate command will create files relative to the current folder you are in. So if we are in src/app/header and we run ng g c LoginButton it will generate the files in src/app/header/login-button/. We can also be explicit about where we want the generated files to go by running ng g component ./src/app/foo/bar; this will create a component called BarComponent in the folder ./src/app/foo/bar.

Available Scaffolds

- Component: ng g component My // Creates MyComponent. By default all generated files go into src/app/my-component; a folder called my-component is created for us.
- Directive: ng g directive My // Creates MyDirective. By default all generated files go into src/app; no folder is created.
- Pipe: ng g pipe My // Creates MyPipe. By default all generated files go into src/app; no folder is created.
- Service: ng g service MyService // Creates MyService. By default all generated files go into src/app; no folder is created.
- Class: ng g class MyClass // Creates MyClass. By default all generated files go into src/app; no folder is created.
- Interface: ng g interface MyInterface // Creates MyInterface. By default all generated files go into src/app; no folder is created.
- Enum: ng g enum MyEnum // Creates MyEnum. By default all generated files go into src/app; no folder is created.

Create a build with ng build

The ng serve command does a great job of enabling development locally. However, eventually we will want some code which we can host on another server somewhere. The Angular CLI again has us covered in this regard: if we want to create a development build we simply type

ng build

This bundles all our javascript, css and html into a smaller set of files which we can easily host on another site. It outputs these files into the dist folder:

.
├── assets
├── index.html
├── inline.js
├── inline.map
├── main.bundle.js
├── main.map
├── styles.bundle.js
└── styles.map

To serve our built application we just need to serve this folder. For example, if using python we could simply run python -m SimpleHTTPServer from the dist folder and view the application at 0.0.0.0:8000.

Production Builds

By default the ng build command creates a development build; no effort is made to optimise the code. To create a production build we just run

ng build --prod

This might generate an output like the below:

.
├── assets
├── index.html
├── inline.js
├── main.3f26904b701596b6d90a.bundle.js
├── main.3f26904b701596b6d90a.bundle.js.gz
└── styles.b52d2076048963e7cbfd.bundle.js

Running with --prod changes a few things. The bundles now have random strings appended to them to enable cache busting.
This ensures that a browser doesn't try to load up previously cached versions of the files and instead loads the new ones from the server. The file sizes are much smaller: the files have been processed through a minifier and uglifier. There is also a much smaller .gz file; this is a compressed version of the equivalent javascript file. Browsers will automatically try to download the .gz version of files if they are present.

Adding a third party module

The build system simplifies the process of serving and releasing your application considerably. It works only because Angular knows about all the files used by your application. So when we include 3rd party libraries in our application we need to do so in such a way that Angular knows about the libraries and includes them in the build process.

Bundled with the main application javascript files

If we want to include a module to use in our Angular javascript code, perhaps the moment.js library, we just need to install it via npm like so:

npm install moment --save

If we also want to include the typescript type definition file for our module we can install it via:

npm install @types/moment --save

Now when Angular creates a build, either when releasing or serving locally, the moment library is automatically added to the bundle.

Global Library Installation

Some javascript libraries need to be added to the global scope and loaded as if they were in a script tag. We can do this by editing the angular-cli.json file in our project root. The Twitter Bootstrap library is a great example of this; we need to include css and script files in the global scope. First we install the bootstrap library via npm like so:

npm install bootstrap@<version>

Then we add the required javascript files to the apps.scripts section and the css files to apps.styles in angular-cli.json like so:

{
  ...
  "apps": [
    {
      ...
      "styles": [
        "styles.css",
        "../node_modules/bootstrap/dist/css/bootstrap.css"
      ],
      "scripts": [
        "../node_modules/jquery/dist/jquery.js",
        "../node_modules/tether/dist/js/tether.js",
        "../node_modules/bootstrap/dist/js/bootstrap.js"
      ],
      ...
    }
  ],
  ...
}

Now when the build runs, the CLI includes those files in the bundle and injects them into the global scope.

Testing Angular

Angular has always been synonymous with testing, so there should be no surprise that the command line tool comes with features to make Angular testing easier. The default mechanism for unit testing in Angular is via jasmine and karma. Whenever we generate code via scaffolds, a .spec.ts file is also generated. The code the CLI bootstraps inside this file depends on the scaffold type, but essentially it is a jasmine test spec which you can flesh out with more test cases.

We can run all our unit tests with one command:

ng test

This builds our project and then runs all the tests; any errors are output to the terminal. This command also watches for any changes in our files and, if it detects any, re-runs the tests.

Important: When running the tests it opens up a browser window. It needs this browser window to run the tests, so do not close it!

Summary

The above is just an overview of the main commands and their default features. To find out more details about each command and how we can customise its behaviour via flags, we can run ng help in the terminal. By handling the setup for us the CLI has made working with Angular much easier. By standardising setup and structure it's also made Angular projects fungible: developers used to the Angular CLI should feel comfortable on all Angular CLI projects and be able to hit the ground running.
https://codecraft.tv/courses/angular/angular-cli/overview/
Implementing a game rule system

The simplest way to create an online roleplaying game (at least from a code perspective) is to simply grab a paperback RPG rule book, get a staff of game masters together and start to run scenes with whomever logs in. Game masters can roll their dice in front of their computers and tell the players the results. This is only one step away from a traditional tabletop game and puts heavy demands on the staff - it is unlikely the staff will be able to keep up around the clock even if they are very dedicated.

Many games, even the most roleplay-dedicated, thus tend to allow players to mediate themselves to some extent. A common way to do this is to introduce coded systems - that is, to let the computer do some of the heavy lifting. A basic step is to add an online dice roller so everyone can make rolls and be sure no one is cheating. Somewhere at this level you find the most bare-bones roleplaying MUSHes.

The advantage of a coded system is that as long as the rules are fair, the computer is too - it makes no judgement calls and holds no personal grudges (and cannot be accused of holding any). Also, the computer doesn't need to sleep and is always online regardless of when a player logs on. The drawback is that a coded system is not flexible and won't adapt to the unprogrammed actions human players may come up with in role play. For this reason many roleplay-heavy MUDs do a hybrid variation - they use coded systems for things like combat and skill progression but leave role play to be mostly freeform, overseen by staff game masters.

Finally, on the other end of the scale are less- or no-roleplay games, where game mechanics (and thus player fairness) is the most important aspect. In such games the only events with in-game value are those resulting from code. Such games are very common and include everything from hack-and-slash MUDs to various tactical simulations.
So your first decision needs to be just what type of system you are aiming for. This page will try to give some ideas for how to organize the "coded" part of your system, however big that may be.

Overall system infrastructure

We strongly recommend that you code your rule system to be as stand-alone as possible. That is, don't spread your skill check code, race bonus calculation, die modifiers or what have you all over your game. Put everything you would need to look up in a rule book into a module in mygame/world. Hide away as much as you can. Think of it as a black box (or maybe the code representation of an all-knowing game master). The rest of your game will ask this black box questions and get answers back. Exactly how it arrives at those results should not need to be known outside the box. Doing it this way makes it easier to change and update things in one place later.

Store only the minimum stuff you need with each game object. That is, if your Characters need values for Health, a list of skills etc, store those things on the Character - don't store how to roll or change them.

Next is to determine just how you want to store things on your Objects and Characters. You can choose to store things as individual Attributes, like character.db.STR = 34 and character.db.Hunting_skill = 20. But you could also use some custom storage method, like a dictionary character.db.skills = {"Hunting": 34, "Fishing": 20, ...}. Finally you could even go with a custom django model. Which is better depends on your game and the complexity of your system.

Make a clear API into your rules. That is, make methods/functions that you feed with, say, your Character and the skill you want to check. You want something similar to this:

from world import rules
result = rules.roll_skill(character, "hunting")
result = rules.roll_challenge(character1, character2, "swords")

You might need to make these functions more or less complex depending on your game.
For example the properties of the room might matter to the outcome of a roll (if the room is dark, burning etc). Establishing just what you need to send into your game mechanic module is a great way to get a feel for what you need to add to your engine.

Coded systems

Inspired by tabletop role playing games, most game systems mimic some sort of die mechanic. To this end Evennia offers a full dice roller in its contrib folder. For custom implementations, Python offers many ways to randomize a result using its in-built random module. No matter how it's implemented, we will in this text refer to the action of determining an outcome as a "roll".

In a freeform system, the result of the roll is just compared with values and people (or the game master) agree on what it means. In a coded system the result now needs to be processed somehow. There are many things that may happen as a result of rule enforcement:

- Health may be added or deducted. This can affect the character in various ways.
- Experience may need to be added, and if a level-based system is used, the player might need to be informed they have increased a level.
- Room-wide effects need to be reported to the room, possibly affecting everyone in the room.

There is also a slew of other things that fall under "coded systems", including weather, NPC artificial intelligence and game economy. Basically everything about the world that a game master would control in a tabletop role playing game can be mimicked to some level by coded systems.

Example of Rule module

Here is a simple example of a rule module. This is what we assume about our simple example game:

- Characters have only four numerical values:
  - Their level, which starts at 1.
  - A skill combat, which determines how good they are at hitting things. Starts between 5 and 10.
  - Their Strength, STR, which determines how much damage they do. Starts between 1 and 10.
  - Their Health points, HP, which start at 100.
- When a Character reaches HP = 0, they are presumed "defeated". Their HP is reset and they get a failure message (as a stand-in for death code).
- Abilities are stored as simple Attributes on the Character.
- "Rolls" are done by rolling a 100-sided die. If the result is below the combat value, it's a success and damage is rolled. Damage is rolled as a six-sided die + the value of STR (for this example we ignore weapons and assume STR is all that matters).
- Every successful attack roll gives 1-3 experience points (XP). Every time the number of XP reaches (level + 1) ** 2, the Character levels up. When leveling up, the Character's combat value goes up by 2 points and STR by one (this is a stand-in for a real progression system).

Character

The Character typeclass is simple. It goes in mygame/typeclasses/characters.py. There is already an empty Character class there that Evennia will look to and use.

from random import randint

from evennia import DefaultCharacter

class Character(DefaultCharacter):
    """
    Custom rule-restricted character. We randomize
    the initial skill and ability values between 1-10.
    """
    def at_object_creation(self):
        "Called only when first created"
        self.db.level = 1
        self.db.HP = 100
        self.db.XP = 0
        self.db.STR = randint(1, 10)
        self.db.combat = randint(5, 10)

Use the @reload command to load up the new code. Doing examine self will however not show the new Attributes on yourself. This is because the at_object_creation hook is only called on new Characters. Your Character was already created and will thus not have them. To force it to re-run, use the following command:

@typeclass/force/reset self

The examine self command will now show the new Attributes.

Rule module

This is a module mygame/world/rules.py.

from random import randint

def roll_hit():
    "Roll 1d100"
    return randint(1, 100)

def roll_dmg():
    "Roll 1d6"
    return randint(1, 6)

def check_defeat(character):
    "Checks if a character is 'defeated'."
    if character.db.HP <= 0:
        character.msg("You fall down, defeated!")
        character.db.HP = 100  # reset

def add_XP(character, amount):
    "Add XP to character, tracking level increases."
    character.db.XP += amount
    if character.db.XP >= (character.db.level + 1) ** 2:
        character.db.level += 1
        character.db.STR += 1
        character.db.combat += 2
        character.msg("You are now level %i!" % character.db.level)

def skill_combat(*args):
    """
    This determines the outcome of combat. The one who
    rolls under their combat skill AND higher than their
    opponent's roll hits.
    """
    char1, char2 = args
    roll1, roll2 = roll_hit(), roll_hit()
    failtext = "You are hit by %s for %i damage!"
    wintext = "You hit %s for %i damage!"
    xp_gain = randint(1, 3)
    if char1.db.combat >= roll1 > roll2:
        # char 1 hits
        dmg = roll_dmg() + char1.db.STR
        char1.msg(wintext % (char2, dmg))
        add_XP(char1, xp_gain)
        char2.msg(failtext % (char1, dmg))
        char2.db.HP -= dmg
        check_defeat(char2)
    elif char2.db.combat >= roll2 > roll1:
        # char 2 hits
        dmg = roll_dmg() + char2.db.STR
        char1.msg(failtext % (char2, dmg))
        char1.db.HP -= dmg
        check_defeat(char1)
        char2.msg(wintext % (char1, dmg))
        add_XP(char2, xp_gain)
    else:
        # a draw
        drawtext = "Neither of you can find an opening."
        char1.msg(drawtext)
        char2.msg(drawtext)

SKILLS = {"combat": skill_combat}

def roll_challenge(character1, character2, skillname):
    """
    Determine the outcome of a skill challenge between
    two characters based on the skillname given.
    """
    if skillname in SKILLS:
        SKILLS[skillname](character1, character2)
    else:
        raise RuntimeError("Skillname %s not found." % skillname)

These few functions implement the entirety of our simple rule system. We have a function to check the "defeat" condition and reset the HP back to 100 again. We define a generic "skill" function; multiple skills could all be added with the same signature, and our SKILLS dictionary makes it easy to look up the skills regardless of what their actual functions are called.
Finally, the access function roll_challenge just picks the skill and gets the result. In this example, the skill function actually does a lot - it not only rolls results, it also informs everyone of their results via character.msg() calls.

Here is an example of usage in a game command:

from evennia import Command
from world import rules

class CmdAttack(Command):
    """
    attack an opponent

    Usage:
      attack <target>

    This will attack a target in the same room, dealing
    damage with your bare hands.
    """
    def func(self):
        "Implementing combat"
        caller = self.caller
        if not self.args:
            caller.msg("You need to pick a target to attack.")
            return
        target = caller.search(self.args)
        if target:
            rules.roll_challenge(caller, target, "combat")

Note how simple the command becomes and how generic you can make it. It becomes simple to offer any number of combat commands by just extending this functionality - you can easily roll challenges and pick different skills to check. And if you ever decided to, say, change how to determine hit chance, you don't have to change every command - you need only change the single roll_hit function inside your rules module.
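One loose end: the API sketch near the top of this page calls rules.roll_skill(character, "hunting"), but the example module never defines it. A minimal version consistent with the roll-under convention above might look like the following. Note that this function and the stand-in character class are assumptions for illustration, not part of the original module; in a real Evennia game you would read the value with character.attributes.get(skillname, default=0) instead of getattr:

```python
from random import randint

def roll_skill(character, skillname):
    """
    Roll-under check: succeed if 1d100 comes up at or under the
    character's stored skill value (treated as 0 if missing).
    """
    skill_value = getattr(character.db, skillname, 0) or 0
    return randint(1, 100) <= skill_value

# Stand-in for an Evennia character so the sketch runs on its own.
class _Db:
    hunting = 100  # guaranteed success on a 1d100 roll-under check

class _FakeCharacter:
    db = _Db()

char = _FakeCharacter()
print(roll_skill(char, "hunting"))  # True
print(roll_skill(char, "fishing"))  # False (missing skill counts as 0)
```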
http://evennia.readthedocs.io/en/latest/Implementing-a-game-rule-system.html
17 September 2008 16:19 [Source: ICIS news]

By Joseph Chang

"The credit crisis is leading to losses of around $500bn [€355bn] – half in the

"A shallow downturn and recession scenario with a tepid recovery is likely, with continuing high risk of a major downturn," added Swift.

Economic growth is slowing across the world, said the economist, putting a dent in the decoupling theory, which states that a slowdown in economies such as in the

"We are still coupled and live in one world. Transmission is in place," said Swift. "The softness in the

Global chemical volume growth, which is tied closely with industrial production, is poised to slow to 3% in 2008 from 4.5% in 2007 and 5.2% in 2006, said Swift. North American volumes are expected to fall 0.2% in 2008, he added. In August, North American chemical output fell 1.2% year-on-year, noted Swift.

"The supply chain is like a bull whip. The consumer has the handle and the chemical industry is at the end," he said. "The slowdown in consumer spending will go up through the supply chain and hit chemicals."

Despite the negative near-term economic outlook, Swift was optimistic on a recovery starting in 2009 and going into 2010. "In 2009, we expect the North American capital goods market to soften, but the housing and light vehicles markets to stabilize," he said. "By 2010, we see nearly all end markets start
http://www.icis.com/Articles/2008/09/17/9156577/us-recession-to-hit-chemicals-globally-acc.html
CC-MAIN-2014-42
refinedweb
244
74.39
Hopefully this is the right section to post this in. I've recently started trying to work with Allegro and after a few hours of troubleshooting setting it up in various IDEs, I settled on Code::Blocks (with MinGW as a compiler). So, I've written a program that just creates a display and displays text, but when it is executed the display is "hidden," with me unable to see/access it. Here is a picture: I am mousing over the farthest icon to the right on my taskbar, the executable of my code, but when I try to bring it to the front it is nowhere to be found. I can see from the small thumbnail that it is working correctly. I feel like the command window shouldn't be open like that though, correct? Here is my code: Not sure if this is a bug or just something I am perhaps doing wrong. I would assume there is a fix since this is a pretty simple program and people can't have created whole games without overcoming this. Please let me know if you need any specific information. Thanks in advance.

You can get rid of the console by adding -mwindows to link options.

The window is created offscreen somewhere. Perhaps where there is or was another monitor. I don't remember the shortcut key but there's one to move a window. Press that and then use the cursor keys and see if you can find it and move it on screen. Can you tell us about your video card and monitor setup and if you're using OpenGL or Direct3D?

This is a known problem; perhaps it's fixed in 5.1. You can use al_set_window_position() to work around this.

The sequence to move the window would be: ALT+Space followed by M, and then move the mouse.

#include <allegro5\allegro.h> #include <allegro5\allegro_font.h> #include <allegro5\allegro_ttf.h> You should always use forward slashes for paths for the sake of being cross platform.

I do indeed have multiple monitors, which seems to have been part of the problem.
By right clicking the thumbnail and selecting move then using my arrow keys, I was able to get the program window on one of my screens. I hadn't really thought of that being a problem, thanks Trent. I tried adding -mwindows in Project > Build Options > Linker Settings, but it didn't do anything. For informational purposes, I am using a laptop (HP dv6t) with an external monitor attached and an ATI Radeon HD 6770M as my video card. As far as OpenGL vs. Direct3D, not sure what you mean by that since I am not using either to my knowledge. al_set_window_position () worked wonderfully, thank you for that Matthew and for pointing out my path mistake (I actually do most schoolwork on Linux so being cross platform is important).
https://www.allegro.cc/forums/print-thread/609581
CC-MAIN-2018-34
refinedweb
482
72.76
C++ Program to Find the Largest Number Among Three Numbers

In this post, you'll learn how to find the largest number among three numbers using the C++ programming language, with a simple comparison operator and decision-making statements. Let's look at the source code below.

How to Find the Largest Number Among Three Numbers?

Source Code

    #include <iostream>
    using namespace std;

    int main()
    {
        int a, b, c;
        cin >> a >> b >> c;
        cout << "Enter three numbers: " << a << b << c << endl;
        if (a > b)
        {
            if (a > c)
                cout << "\n " << a << " is largest number";
            else
                cout << "\n " << c << " is largest number";
        }
        else
        {
            if (b > c)
                cout << "\n " << b << " is largest number";
            else
                cout << "\n " << c << " is largest number";
        }
        return 0;
    }

Input

1 2 3

Output

Enter three numbers: 123
3 is largest number

- To start, we declare three variables (a, b, c) as integers, read them from the user using cin >> and display them using cout <<.
- Next we compare the integers using the comparison operator '>' and display the respective output statements using the decision-making statements if else.
- In the first comparison statement (a > b), if the condition is satisfied, the nested if else statement is executed, where there is another comparison statement (a > c); based on whether that condition is true or false, the corresponding output statement is displayed.
- If the first comparison statement is false, the function moves to the else statements; based on whether (b > c) is true or false, the output statements are displayed.
- Another way to find the largest number among three numbers is to use the comparison operator together with the logical AND operator (&&).
    #include <iostream>
    using namespace std;

    int main()
    {
        float a, b, c;
        cin >> a >> b >> c;
        cout << "Enter three numbers: " << a << ", " << b << ", " << c << endl;
        if (a > b && a > c)
            cout << "\n Largest number: " << a;
        if (b > a && b > c)
            cout << "\n Largest number: " << b;
        if (c > a && c > b)
            cout << "\n Largest number: " << c;
        return 0;
    }

Input

1.4 2.12 3.98

Output

Enter three numbers: 1.4, 2.12, 3.98
Largest number: 3.98

- In this source code we declare the variables (a, b, c) as float values, that is, decimal numbers.
- Using if statements we declare the conditions with the comparison operator and the logical operator. (a > b && a > c) compares a with b and c; only if both comparisons are true is the condition satisfied. This is the function of the logical operator &&. When the condition is satisfied the output is displayed; if either comparison is not true, the function moves to the next condition.
- This is another simple source code to find the largest number among three numbers.
https://developerpublish.com/academy/courses/c-programming-examples-2/lessons/c-program-to-find-the-largest-number-among-three-numbers/
CC-MAIN-2021-49
refinedweb
464
62.41
4 Mar 04:45 2013 Re: module for description of sequence variants (where to place code) Fields, Christopher J <cjfields <at> illinois.edu> 2013-03-04 03:45:45 GMT On Mar 3, 2013, at 7:58 PM, Carnë Draug <carandraug+dev <at> gmail.com> wrote: > On 25 February 2013 10:32, Andreas Leimbach > <andreas.leimbach <at> uni-wuerzburg.de> wrote: >> On 25.2.13 11:08, Carnë Draug wrote: >>> >>>? >>> >> for your last question: >> You can convert aa strings from one to three letter code with >> 'Bio::SeqUtils'. > > Thank you. I have never used Bio::SeqUtils. Not only does solve my > problem, but also seems to be the right place to insert my code. > > If no one objects, I'll add a new method to Bio::SeqUtils named > "describe_mutation". > > Carnë I'm fine with that; at some point it might be worth thinking about whether we need to organize the various *Utils modules a bit better. As most of these export methods, they would be good targets for reorganization at some point (maybe into a general Bio::Utils namespace). chris _______________________________________________ Bioperl-l mailing list Bioperl-l <at> lists.open-bio.org
http://permalink.gmane.org/gmane.comp.lang.perl.bio.general/26449
CC-MAIN-2014-52
refinedweb
199
75.3
Hi, We have a web application running on this configuration:

Red Hat Linux Enterprise Advanced Server 3.0
JBoss 3.2.3
2GB RAM
1 CPU - Intel Pentium IV
JAVA_OPTS (have also tried several other values) = -Xms800m -Xmx800m -XX:MaxPermSize=400m

While trying to start up JBoss, 90% of the time this error comes:

------
#
# HotSpot Virtual Machine Error, Internal Error
# Please report this error at
#
# Java VM: Java HotSpot(TM) Server VM (1.4.2_03-b02 mixed mode)
#
# Error ID: 53484152454432554E54494D450E435050014F
#
# Problematic Thread: prio=1 tid=0x081d6370 nid=0xecd runnable
#
Heap at VM Abort:
Heap
 def new generation   total 81920K, used 13512K [0x6f2d0000, 0x74bb0000, 0x74bb0000)
  eden space 72832K,  18% used [0x6f2d0000, 0x70002048, 0x739f0000)
  from space 9088K,    0% used [0x742d0000, 0x742d0000, 0x74bb0000)
  to   space 9088K,    0% used [0x739f0000, 0x739f0000, 0x742d0000)
 tenured generation   total 728192K, used 20550K [0x74bb0000, 0xa12d0000, 0xa12d0000)
   the space 728192K,  2% used [0x74bb0000, 0x75fc1848, 0x75fc1a00, 0xa12d0000)
 compacting perm gen  total 20736K, used 20501K [0xa12d0000, 0xa2710000, 0xba2d0000)
   the space 20736K,  98% used [0xa12d0000, 0xa26d5410, 0xa26d5600, 0xa2710000)

./run.sh: line 201:  3789 Aborted  $JAVA $JAVA_OPTS -classpath "$JBOSS_CLASSPATH" org.jboss.Main "$@"
-------------------------------------------------------

As you can see at the end, it says that 98% of the perm gen is used out of around 20 MB, but that should not be the case: I have provided a much higher limit through -XX:MaxPermSize=400m. Is it due to the way Linux manages threads, or something else? I have tried the same thing on a Windows 2000 machine with just 1GB RAM; it didn't crash like this, and it also starts up JBoss in half the time it does on Linux. Am I missing something ... do I have to do some kind of "ulimit" on this Linux box, or anything else? Any help will be highly appreciated. Thanks a lot.
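Since the question raises ulimit, the first step on the Linux side is to inspect the per-process resource limits before touching JVM flags. The commands below are a diagnostic sketch, not a confirmed fix for this particular crash; the -Xss value is an illustrative guess aimed at reducing per-thread stack usage so that heap, perm gen, and thread stacks fit together in 2GB:

```shell
# Inspect the limits that most often bite a large JVM heap on Linux:
ulimit -v   # max virtual memory (kB); "unlimited" is what you want here
ulimit -s   # default stack size per thread (kB)
ulimit -u   # max user processes; Java threads count against this

# Keep heap + perm gen + per-thread stacks well under physical RAM,
# and shrink per-thread stacks so more threads fit (illustrative values):
JAVA_OPTS="-Xms800m -Xmx800m -XX:MaxPermSize=256m -Xss256k"
export JAVA_OPTS
echo "$JAVA_OPTS"
```

If `ulimit -s` reports an unusually large default stack, every JVM thread reserves that much address space, which can exhaust the 32-bit process address space long before the heap itself is full.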
https://developer.jboss.org/thread/57244
CC-MAIN-2018-17
refinedweb
292
63.02
#include "orconfig.h"
#include "lib/evloop/compat_libevent.h"
#include "lib/crypt_ops/crypto_rand.h"
#include "lib/log/log.h"
#include "lib/log/util_bug.h"
#include "lib/string/compat_string.h"
#include <event2/event.h>
#include <event2/thread.h>
#include <string.h>

Wrappers and utility functions for Libevent. Definition in file compat_libevent.c. Set hook to intercept log messages from libevent. Definition at line 58 of file compat_libevent.c. References libevent_logging_callback(). Referenced by init_libevent(). Callback function passed to event_set_log() so we can intercept log messages from libevent. Definition at line 27 of file compat_libevent.c. References suppress_msg. Referenced by configure_libevent_logging(). Schedule event to run in the main loop, immediately. If it is not scheduled, it will run anyway. If it is already scheduled to run later, it will run now instead. This function will have no effect if the event is already scheduled to run. This function may only be called from the main thread. Definition at line 415 of file compat_libevent.c. References tor_assert(). Referenced by connection_start_reading_from_linked_conn(), control_event_logmsg_pending(), periodic_event_enable(), periodic_event_schedule_and_disable(), and scheduler_ev_active(). Cancel event if it is currently active or pending. (Do nothing if the event is not currently active or pending.) Definition at line 448 of file compat_libevent.c. Referenced by periodic_event_disable(). Internal: Implements mainloop event using a libevent event. Definition at line 320 of file compat_libevent.c. Cancel event and release all storage associated with it. Definition at line 457 of file compat_libevent.c. Create and return a new mainloop_event_t to run the function cb. When run, the callback function will be passed the mainloop_event_t and userdata as its arguments.
The userdata pointer must remain valid for as long as the mainloop_event_t event exists: it is your responsibility to free it. The event is not scheduled by default: Use mainloop_event_activate() or mainloop_event_schedule() to make it run. Definition at line 386 of file compat_libevent.c. References mainloop_event_new_impl(). Referenced by mainloop_schedule_shutdown(), and reenable_blocked_connection_init(). Helper for mainloop_event_new() and mainloop_event_postloop_new(). Definition at line 357 of file compat_libevent.c. References tor_assert(). Referenced by mainloop_event_new(), and mainloop_event_postloop_new(). As mainloop_event_cb, but implements a post-loop event. Definition at line 332 of file compat_libevent.c. References rescan_mainloop_ev. As mainloop_event_new(), but create a post-loop event. A post-loop event behaves like any ordinary event, but any events that it activates cannot run until Libevent has checked for other events at least once. Definition at line 400 of file compat_libevent.c. References mainloop_event_new_impl(). Referenced by do_signewnym(), hibernate_schedule_wakeup_event(), and initialize_mainloop_events(). Schedule event to run in the main loop, after a delay of tv. If the event is scheduled for a different time, cancel it and run after this delay instead. If the event is currently pending to run now, has no effect. Do not call this function with tv == NULL – use mainloop_event_activate() instead. This function may only be called from the main thread. Definition at line 433 of file compat_libevent.c. References tor_assert(). Referenced by do_signewnym(), hibernate_schedule_wakeup_event(), mainloop_schedule_shutdown(), and periodic_event_set_interval(). Return the current Libevent event base that we're set up to use. Definition at line 185 of file compat_libevent.c. References the_event_base, and tor_assert(). Libevent callback to implement a periodic event. Definition at line 227 of file compat_libevent.c. 
References periodic_timer_t::cb, and periodic_timer_t::data. Disable the provided timer, but do not free it. You can reenable the same timer later with periodic_timer_launch. If the timer is already disabled, this function does nothing. Definition at line 283 of file compat_libevent.c. References periodic_timer_t::ev, and tor_assert(). Stop and free a periodic timer Definition at line 291 of file compat_libevent.c. Launch the timer timer to run at tv from now, and every tv thereafter. If the timer is already enabled, this function does nothing. Definition at line 267 of file compat_libevent.c. References periodic_timer_t::ev, and tor_assert(). Create and schedule a new timer that will run every tv in the event loop of base. When the timer fires, it will run the timer in cb with the user-supplied data in data. Definition at line 239 of file compat_libevent.c. References periodic_timer_t::cb, and tor_assert(). Referenced by do_main_loop(). Ignore any libevent log message that contains msg. Definition at line 65 of file compat_libevent.c. References suppress_msg. Referenced by init_libevent(). Tell the event loop to exit after running whichever callback is currently active. Definition at line 521 of file compat_libevent.c. Tell the event loop to exit after delay. If delay is NULL, instead exit after we're done running the currently active events. Definition at line 512 of file compat_libevent.c. Un-initialize libevent in preparation for an exit Definition at line 486 of file compat_libevent.c. Return a string representation of the version of Libevent that was used at compilation time. Definition at line 210 of file compat_libevent.c. Referenced by options_init_from_torrc(). Return the name of the Libevent backend we're using. Definition at line 194 of file compat_libevent.c. References the_event_base. Return a string representation of the version of the currently running version of Libevent. Definition at line 202 of file compat_libevent.c. 
Referenced by options_init_from_torrc(). Initialize the Libevent library and set up the event base. Definition at line 133 of file compat_libevent.c. References tor_libevent_cfg::num_cpus, the_event_base, and tor_assert(). Referenced by init_libevent(). Run the event loop for the provided event_base, handling events until something stops it. If once is set, then just poll-and-run once, then exit. Return 0 on success, -1 if an error occurred, or 1 if we exited because no events were pending or active. This isn't reentrant or multithreaded. Definition at line 503 of file compat_libevent.c. A string which, if it appears in a libevent log, should be ignored. Definition at line 23 of file compat_libevent.c. Referenced by libevent_logging_callback(), and suppress_libevent_log_msg(). Global event base for use by the main thread. Definition at line 80 of file compat_libevent.c. Referenced by MOCK_IMPL(), tor_libevent_get_method(), and tor_libevent_initialize().
https://people.torproject.org/~nickm/tor-auto/doxygen/compat__libevent_8c.html
CC-MAIN-2019-04
refinedweb
974
62.85
Black Mamba - releases Since I have no other way to inform users about new Black Mamba features, I decided to create this topic. I will only post info about new releases here. Nothing else, to keep it short. - Do you want to discuss something? Please, use the other topic (quite long now) or create a new one. - Do you want to see something in Black Mamba? Please, file an issue. - You fixed something? Please, create a pull request. Thanks! Black Mamba 0.0.12 released: - Analyze shortcut renamed to Analyze & Check Style - Analyzer now runs both pyflakes & pep8 code style checks - Analyzer behavior can be modified via bm.settings.ANALYZER_* variables - Analyzer always scrolls to the first issue and does not show HUD - Analyzer shows HUD only if there're no issues - Cmd Shift K shortcut introduced - Clear Annotations Black Mamba 0.0.13 released: - flake8 checks on Travis CI (thanks to cclauss) - Fixed all style issues from flake8 report, down to zero now - Analyzer removes trailing white spaces & trailing blank lines before analysis is started (can be turned off via bm.settings...) - Fixed toggle comments script (#5) - Fixed file matching in Open Quickly... (#10) - Fixed Esc key code (27 = X, not Esc, Esc = 41) (#11) Black Mamba 0.0.14 released: - Since 0.0.14, the license was changed to MIT - Seems no one does use PyPI for installation, .pyui files are now included :) - Comment line with # (hash space) instead of just # (#12) - Ctrl Tab (or Cmd Shift ]) selects next tab - Ctrl Shift Tab (or Cmd Shift [) selects previous tab - Cmd 1..9 selects specific tab - EXPERIMENTAL Cmd U to run unit tests. Using pytest directly, because I'd like to add more unit test features like - run unit tests for the whole package, file, test in a file, ... Works for the file now, but it has issues with some global states, reloading, ...
Black Mamba 0.0.15 released: - Fix HUD message when there're no tests in the file - Removed unreliable PyPI package installation option - Removed package from PyPI - Custom installer alla StaSh - Removed settings module (moved to respective modules) - Removed script_picker.py (merged to file_picker.py) - Updated pythonista_startup.py sample - Pythonista version compatibility check Installation If you already have Black Mamba installed, remove it. Then open the console, issue ... import requests as r; exec(r.get('').text) ... and that's it. GitHub installation is not user friendly and PyPI doesn't work. Breaking Changes Check the pythonista_startup.py sample for how to configure Black Mamba and how to start it. You have to call main() and that's all you have to do. Minimum is: import blackmamba as bm bm.main() Why? - There's a compatibility check with Pythonista and Black Mamba refuses to start (unless forced) with newer versions of Pythonista. - There's an auto check for updates (daily, configurable, can be disabled). You'll just be informed with an alert for now (till I write a real updater). It's based on GitHub releases. These releases will be stable, master branch is not stable at all. Black Mamba 0.0.16 released: - Allow to start Black Mamba even in an untested version of Pythonista, just warn the user - Init messages are colored (orange=warn, red=error) - All print messages replaced with log.info (.error, .warn) - bm.log.level allows to set logging level (default INFO) - Do not bother user (mainly me) with alert about new version (just use console) in case Black Mamba is not installed via installer (git for example) - Tested with latest Pythonista beta (3.1.1 - 311008), everything works as expected You can install / update it with: import requests as r; exec(r.get('').text) @Phuket2 depends ... The only mandatory thing is to place GitHub's blackmamba folder into site-packages-3. That's it. The difference between git & installer is ...
git - master branch is not stable, it can break things - can be worked around by checking out a specific tag (v0.0.16, etc.) - it doesn't tell you that there's a new version available installer - stable releases - it does inform you about new releases automatically (I do store installed version info in the ~/Documents/site-packages-3/blackmamba/.release.json) and check for updates regularly (during Pythonista startup) - if there's a new version, you'll see an iOS alert and then you can issue the same command you used for installation to update Black Mamba You can look at updates.check for more details. IMO installer is more friendly if you do not want to develop it, because I'm going to add an updater as well. I'll put PyPI back when it starts working, because this is the most preferred way of installation. Sadly it doesn't work. Also, please, discuss in another topic, this is just for releases. See the first post. Thanks for understanding. Black Mamba 0.0.18 released: - Installation command is copied to the clipboard when the alert about a new version is shown. Just open console and paste it. - system.Pythonista and system.iOS decorators to limit functions execution under specific Pythonista & iOS versions. - 0.0.17 skipped, because this version was used for testing & fixing pip - Outline Quickly... (Cmd Shift L) introduced Outline Quickly mimics Pythonista outline (Cmd L), but allows node filtering. On the other side, it does not contain annotations (yet). Would it be possible to put this stuff into ? As someone who does not use an external keyboard with Pythonista, I find much of this discussion to be off topic. Is it just me or do others agree? I do not use Xcode Template for example, I find much of it to be off topic for me. Others do use Xcode Template, they don't consider it off topic, they're reading it, participating, ... So, I simply don't read Xcode Template related threads. I understand that this thread can be off topic for some people, but for some of them not.
Just don't read it, as I or others don't read Xcode Template topics. P.S. I have nothing against Xcode Template, it's just an example :) @ccc, I don't agree. I normally agree with you. Sorry, in this case I don't. @zrzka has put a lot of effort into this and it works great. For those who use ext keyboards, it's a gift from god. There are many subjects in Pythonista I have no interest in, I just skip them. No disrespect to you, but I think this is a great project. It's easy to skip over if you are not into it. Black Mamba 0.0.19 released: - Fixed unused import in action picker - Compatibility check with 3.1.1 (311009) - Introduced ide.scroll_to_line(line_number) - Ctrl L Jump to line... added - Cmd E to show Drag Provider (iOS 11 & iPad only) What's the Drag Provider all about? See this video. You must have the latest Working Copy beta, iOS beta and an iPad. Otherwise it will not work for you. using pip install blackmamba and then run: #!python3 import blackmamba as bm bm.main() Error: ImportError: No module named 'httplib' @wolf71 unfortunately, there's no pip3 in StaSh and Black Mamba is Python 3 only. As a workaround, you have to: pip remove blackmamba pip install blackmamba -d ~/Documents/site-packages-3 And ... #!python3 import blackmamba as bm bm.main() ... must be placed in the ~/Documents/site-packages-3/pythonista_startup.py file. Hope that helps. If not, please file an issue with more details (like how did you install it, which version was installed, pip installation console log, Pythonista version, default interpreter, ...). Thanks. And you have to use dev StaSh (selfupdate -f dev). P.S. Note the -3 suffix in the installation path / python startup file path. Also I tried to reproduce your issue, but no luck (even if I change interpreter, omit -d in the pip command, ...). Black Mamba 0.0.21 released: - Code cleanup (circular deps, ...)
- Fixed analyzer where ignore_code=None means real None - Please, check sample pythonista_startup.py, breaking changes, sry - Config option to disable keyboard shortcuts registration It's basically a clean-up release, because it was quickly written, there were lots of circular deps, ... Also configuration of Black Mamba is done via dict, not via modifying module variables. And the option to disable keyboard shortcuts was introduced, because I'm planning to add stuff which is useful even without an external keyboard. How to update Pythonista console: import requests as r; exec(r.get('').text) pip (StaSh dev, selfupdate -f dev): pip remove blackmamba pip install blackmamba -d ~/Documents/site-packages-3 pip update doesn't honor -d from the previous pip install command. Will file an issue and fix this. Black Mamba 0.0.22 released: - Toggle comments improved - Honors both tabs and spaces - Indented # if line is indented - Shortest indent is used for all lines # if commenting multiple of them - Fixed ide.run_action when script_name starts with / Black Mamba 0.0.23 released: Cmd Shift D) Black Mamba 0.0.24 released: - blackmamba.keyboard module added - Pickers (open, script, ... quickly) - Do not focus search field if HW keyboard is not connected - Show title bar instead of custom title to allow users to close dialogs with X button - ide.scroll_to_line optimized - Toggle comments various fixes - Line is properly commented when there's inline comment - Uncommented line -> whitespaces only -> 'n' - More test coverage to avoid future bugs
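The system.Pythonista and system.iOS decorators mentioned in the 0.0.18 notes gate a function on the running version. Here is a hypothetical re-implementation of the idea, not the actual blackmamba code (the real decorators read the Pythonista/iOS versions at runtime, whereas this sketch takes the current version as an argument):

```python
import functools

def version_gate(current, lowest=None, highest=None):
    """Run the decorated function only if `current` (a version string such
    as '3.1.1') falls within [lowest, highest]; otherwise return None.
    Sketch of the idea behind blackmamba's system.Pythonista decorator."""
    def parse(v):
        return tuple(int(x) for x in v.split("."))

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if lowest is not None and parse(current) < parse(lowest):
                return None   # silently skip on too-old versions
            if highest is not None and parse(current) > parse(highest):
                return None   # silently skip on untested newer versions
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```

Usage would look like `@version_gate("3.1.1", lowest="3.0")` above a function definition; the skip-by-returning-None behavior keeps a startup file working even when a feature is unavailable on the installed version.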
https://forum.omz-software.com/topic/4303/black-mamba-releases
CC-MAIN-2018-51
refinedweb
1,538
67.55
Comments on "Are new versions released often enough?"
I see new versions coming out pretty quickly, with lots of cool new features. I am impressed.
It is far more important that the stable releases really are stable than beeing numerous.
Although the releases are not infrequent, I think it would be nice to have bug-fix releases 3-4 times a year. But considering the number of people that actively work on GHC, that could be unreasonable.
really, i don't have any opinion. why should i complete this field? :)
A large project really doesn't want to see new versions too often - it's very costly to change compiler versions if there are -any- incompatibilities.
Releases are, if anything, a bit too frequent.
I would like to see faster bug fixing turnaround.
There always seems to be a working version compatible with my fairly conservative code, for all platforms.
However, I think it would be better if releases were a /bit/ more frequent.
Real even-odd numbering, with lots of point releases, would be great, if the personnel exists for it.
Rather too often, I think-- and maybe a touch too enthusiastic on adding new stuff and not keeping backwards compatiblity. I suppose this is the cost of living at the bleeding edge.
yes. the exception is of course GpH. GpH team keeps me and my students tied to old and stupid C plus MPI
Perhaps it would be a good idea to announce how much disc-space is needed, my first build of version 6.4 failed because I had only about 900M left in my root partition :-(
I often use older versions because they meet my needs.
Has a free software project ever released often enough for the users? *grin*
I would like it if new versions could find their way into Debian better.
I'm not a heavy user of new features.
jn
Actually, I would rather see more, but that more my craving for novelty. I don't *need* to.
GHC 6.4 (6.4 without Cabal support and 7.0 with, perhaps) should have been released a lot earlier, IMHO.
I've been using GHC 6.3 just to access the new TH for a long time and it hasn't been pleasant.
possibly too often
Well, I don't think that the current tempo is a problem. But I do think that GHC could be released a little more often. Often when some new language feature is implemented people are eager to try it out. As soon as the implementation is done GHC could be shipped in an unstable version for people to play with.
We don't want to have to fix things too often. :-) On the other hand of course there is always pressure for new features. :-) It can become tricky to support a range of ghc versions. We end up needing a lot of #if #endif stuff. Perhaps that's inevitable.
Instead of new versions, I'd like to see a focus on developing a new standard version of Haskell, solidifying the most popular extensions. I'm concerned that programs I write using (sometimes necessary) extensions with GHC today won't work in the future.
I am impressed with how often they are released! I think the frequency is just right, long enough for the last one to get settled in, but not so long that it feels like all developement has stopped.
See comments above. One always likes new versions. My reply is forced by the form rather than considered.
Too often
Haven't been in to haskell long enough to know
Just about the time when I start wondering when the next GHC will be released, for whatever reason, the next version is generally announced.
I hardly ever feel that I need the latest-greatest type system extension. However, this might be different for other researchers. As a library maintainer, stability is much more important. Maybe you should consider two kinds of GHC releases: latest type-system wonders, and latest Haskell98 stable?
Compatibility between releases seems sticky sometimes at the application level.
At present GHC seems to be adding cool new things faster than I have time to learn about thim.
I think it's good that new versions are not released all too often.
It's become reasonably simple to build from CVS, so if one wants new features to play with, that's ok. I would like more releases of supporting software, however: happy, alex, especially haddock are released too infrequently for my taste.
this is a vivant thing!
things generally seem to come out about as fast as I keep up with them
For bleeding edge, CVS is not se broken most of the time. Though, GHC 6.3 was hard to build during the compatibility library transition.
Recently, as I need some new features of Data.Generics, I had to install the beta (6.4-date) version. But it works well for me.
I haven't had much of a reason to upgrade my installation of GHC for a long time... The pars I've used (ie the "core" Haskell stuff) have been working alright for some time
Haven't thought this before on GHC - generally, I think that any active sw should have a stable release in 6 to 12 months regularly, but I also realize that this is much a question of time & money of the people involved, too.
Don't want to many releases, want to stablise on a good release and perhaps only release minor bug fixes
I'd like to keep up-to-date with the latest-and-greatest so that I could work out examples from recent academic papers, but I don't want to have to build my own GHC from the head to get there.
I actually don't know, but there is no option for that.
I don't have time to install updates all the time. I prefer stability to frequency. GHC seems to be developed a lot faster than, say, GCC, which has very infrequent releases.
as you might see above, I have not come to even try all existing features. 99% of my time i'm coding basic haskell without any extensions or special features
Often enough that my own limited time means I don't miss features for long.
doesn't matter
release new versions every if there is nothing new?
No idea!
Just got here!
Although it was a long wait for 6.4 :)
Sorry, not enough experience w/GHC yet.
Working on internals myself, I consider version numbering somewhat strange, issuing "minor" release numbers to important changes, e.g.: 5.04: completely new lib. structure (compared to 5.02) 6.4: completely new backend
Especially the daily builds are frequent!
It might be nice to have more "bug-fix" releases, but I realize that with a small team this is probably too much work.
It's a trade-off with stability. I'm happy with the process.
release early - release often.
The wait for GHC 6.4 was a long one, on the other hand no point releasing something that is not ready. Maybe more releases so that the difference between them is not so large.
I seldom go to the trouble of installing (possibly unstable) CVS versions, which means that I have to wait for new features.
I just started using it so I am not sure how often new versions are released.
GHC stable versions are often enough. I do wish for metastable releases for testing specific features.
Release them when they are ready. Make them stable. Though Haskell is effectively the de facto standard implementation, we could do with a new Haskell language standard.
I think the release frequency is about right.
Well yes and no really. Yes, because new releases break ABI - ie all libraries need to be rebuilt with the new version. No, since it would be nice to have more gradual regular updates. :)
I think software should be released often, without putting too much changes in each release. This way it would be easier to track bugs and correct them.
Although my opinion here probably doesn't mean much because I'm tracking CVS-HEAD anyway.
I compile from CVS every one or two weeks. So in fact I am more interested in features than in versions
FFI don't refer the function this programm istead of other error. (... but I don't know where is wrong.)
FFmpeg.o(.text+0x1398):fake: undefined reference to `av_init_packet' (hsc) -- -*- mode: haskell -*- {-# OPTIONS -fglasgow-exts #-} #include <avformat.h> #include <avcodec.h> module FFmpeg where import Foreign data CAVPacket = CAVPacket {pktPts :: !(#type int64_t), pktDts :: !(#type int64_t), pktDatas :: !(Ptr (#type uint8_t)), pktSize :: !Int, pktStreamIndex :: !Int, pktFlags :: !Int, pktDuration :: !Int} deriving (Eq,Show) instance Storable CAVPacket where peek p = do{ pts <- (#peek AVPacket, pts) p; dts <- (#peek AVPacket, dts) p; datas <- (#peek AVPacket, data) p; size <- (#peek AVPacket, size) p; stream_index <- (#peek AVPacket, stream_index) p; flags <- (#peek AVPacket, flags) p; duration <- (#peek AVPacket, duration) p; return $! CAVPacket pts dts datas size stream_index flags duration } poke p (CAVPacket pts dts datas size stream_index flags duration) = do{(#poke AVPacket, pts) p pts; (#poke AVPacket, dts) p dts; (#poke AVPacket, data) p datas; (#poke AVPacket, size) p size; (#poke AVPacket, stream_index) p stream_index ; (#poke AVPacket, flags) p flags; (#poke AVPacket, duration) p duration} sizeOf _ = (#size AVPacket) -- alignment の値については自信なし alignment _ = 7 av_init_packet :: IO (Ptr CAVPacket) av_init_packet = alloca $ \pkt -> do c_av_init_packet pkt return pkt foreign import ccall unsafe "av_init_packet" c_av_init_packet :: Ptr CAVPacket -> IO () {- And Now, Newest Relese Candidate package of Windows can't load DLL. Loading package base-1.0 ... linking ... done. Loading package OpenGL-2.0 ... ghc.exe: can't load .so/.DLL for: m (addDLL: unkn own error -}now that I'm more deeply involved in the release process (via cabal) I might change my mind, but in the past it has always seemed frequent enough.IMHO a compiler should not have too frequent releases, except for bugfixes. 
Introducing new features every 6 months or so would lead to user's code maintenance problems and potentilly great difficulties in re-using very old as well as very recent code.Most of what is in a new release falls into the nice-to-have category, so the timing of releases is not a big deal.Maybe when I start using more of the advanced idioms I might like new versions becoming available quicker (I'm certainly looking forward to using GADTs in 6.4).Persons depending on very new features should hopefully be willing to patch.in general, yes. but not if you're on windows and have no interim snapshots.New major releases should be released only when they are stable enough. Bug-fix versions could instead be released more often.Yes, usually, but not 6.4. Yes of course. New versions of a compiler are not needed often. What is needed is carefully crafting bindings to standard libraries so that the design of the bindings doesn't impede further work on the compiler.Developers who want to live on the cutting edge can get more recent anyway either straight from the CVS or automatically bundled tar filesthis should not be an issue because cvs snapshots are availableBut there are sometimes major changes as in the module structure between 6.2.2 and 6.4 or as in the set of (syntacticly) accepted programs between 6.* and 6.2.2. It's hard for me to remember back about this. Also, I have lots of conflicting interests. More frequent releases make packaging harder in some ways, but on the other hand it's possible to have less buggy packages without having a huge diff build up between releases. On balance I think a higher frequency of point releases, especially for x.y.1 when x.y inevitably quickly reveals a number of issues (like doesn't compile on powerpc this time round), would be good. 
Unfortunately, due to a combination of where we are in the Debian release process coupled with technical details I won't bore you with I've had to keep 6.4 out of Debian for the time being, so problems arising from us won't be found as quickly as I'd normally hope we'd be able to.Actually I don't know, I'm sort of new to this.please don't get driven by timescales make it driven by need / functionalityNew version seem to be released frequently.In hindsight, a release between 6.2.2 and 6.4 could have been a good idea. That way, 6.4 (or what then would have been 6.4.1) could wait until Cabal was finished, which seems will not be the case. But then again, things like these are hard to predict. More than a year between 6.2.2 and 6.4 seems long though, especially since 6.4 brings many new toys, like GADTs. :)The nightly snapshots are a great idea: less hassle than CVS and more reliable reference points, and always there when the latest official release is not up to the task...No, the automated daily builds should be sufficient be be on the bleeding edge. A release cycle of 6-9 month ok.Consider the improvements and features added in each new version, I'd say the development cycle is efficient.More often than I need 'em. Yes, considering improvements the make. Of course, I wouldn't mind if we had the same feature set and stability two years ago. But for that, GHC needs more developers. In this month I've switched from full-time to part-time job (4/5), so I'll have one free day during the week for playing with things like GHC development :-)I would rather have versions of GHC released whenever a major new feature (or set of related features) is added, instead of grouping a bunch of unrelated features together, seemingly arbitrarily. Thus, the current GHC 6.4 would have been preceded by a series of intermediate versions, 6.3.1, 6.3.2, 6.3.3, etc. I think having these intermediate releases would result in more testing. 
For example, people on Windows could not even test GHC 6.3 without building from source, but if they wanted to build everything from source they probably wouldn't be using Windows (and plus, it is time consuming to build GHC from source). If you had released GHC 6.3.1, GHC 6.3.2, etc. then I think that more people would use these intermediate releases and would find bugs sooner, with the result being that GHC 6.4 would look a lot more like the (presumably) upcoming GHC 6.4.1 release.not quite. Automatic incremental software update would be nice.well, i don't know but there is a new release this month :)Maybe even too often.
https://www.haskell.org/ghc/survey2005/release_freq_comments
Channing Walton (June 27, 2009 at 7:17 pm):
Logging! Cool, and weird because I did something similar with both logging and timing. I like your stuff better though, so I will adopt it. Thanks. Channing

johlrogge (June 27, 2009 at 7:59 pm):
Ideas are weird that way. They seem to come to a lot of people at the same time. I'm glad you liked it and you're welcome. /J

Florian (July 1, 2009 at 5:24 am):
Another thing you missed are by-name parameters. The canonical way to do logging in Scala is something like:

  def warn(msg: => String, err: => Throwable) {
    if (log.isEnabled(Level.WARN)) { // or however your logger calls it
      log.warn(msg, err)
    }
  }

(guess the blogging platform will eat the indentation :-)) That way, possibly expensive operations needed to construct the arguments are only performed if the message is actually logged. If you only ever log constant strings, this will be slightly slower, but imagine a debug-level log statement that constructs a string representation of the subtree rooted at the current node.

johlrogge (July 1, 2009 at 6:01 am):
logback/slf4j does that already: log.warn("hello {}", "world"). The string "hello world" will only be assembled if the log level is at least warn. In your example you would have the anonymous functions created every time, wouldn't you? msg and err would be short-lived instances of Fn0, while if you kept msg as a string it would likely be cached due to Java's flyweight string stuff, and the throwable would have been created whether you decided to log it or not. Unless I miss something, your suggestion would actually create more short-lived objects than how it was before. I agree that anonymous functions would be the obvious implementation in Scala though, and I admit that I am not quite sure what nifty tricks Scala might use to minimize the number of short-lived objects. I can't think of any, but that doesn't mean there are none. I always look at () => "a" as I look at Java's collection.iterator (or more subtly: for(String a : collection)). All three would create the same amount of short-lived objects, right (not counting the string)? (Iterator, Fn0)

Eric Bowman (June 4, 2010 at 10:43 am):
Yeah, logback/slf4j will assemble the strings lazily, but I think the point is, what if it's expensive to generate the strings? In that case you almost certainly want call-by-name. It's certainly considered canonical to use by-name for this; if it really produces a lot of overhead, it would be interesting to know how much.

johlrogge (June 4, 2010 at 11:27 am):
I think you're absolutely right, and I realize that I was confused about the difference between anonymous functions and call by name when I wrote this post and later the comment. I've meant to revisit this post and remedy that. Call by name, as I understand it, does not generate anonymous classes and such, which would be an excellent way to solve this problem. I also saw that logback now changed the API so you can use up to two arguments that are not dynamic (no under-the-hood array creation). I don't know if that is still useful given call by name. Also, given how clever HotSpot is nowadays, it certainly wouldn't make much sense to avoid that extra method-call indirection (if that is what will be generated by the Scala compiler in the first place). Thanks for your comment!

Eric Bowman (June 4, 2010 at 1:44 pm):
Well, call-by-name is implemented as anonymous functions underneath (though in an opaque way). So a class is created. The question is, how much overhead is this /really/?

John (August 18, 2009 at 9:41 am):
I quite like the Logging trait but would like some ideas about how to use the parameterized logging messages. Everything is wrapped in a Scala array that slf4j treats as a single (Object) argument instead of a vararg or Java array. With a message of "Message {} and {}" I get a result of:

  21:04:02.495 [Thread-5] INFO bank.Account - Message Array(first, second) and {}

... this isn't really the intent of SLF4J! The info method "public void info(String format, Object arg)" is invoked instead of "public void info(String format, Object[] argArray)". How should I specify the arguments (in Scala) so that the appropriate Java method is called?

johlrogge (August 18, 2009 at 9:59 am):
Hi John, I don't know if this is an oversight on my side or if SLF4J changed its API recently. I remember trying out this code (obviously) before posting it, but it's possible I stuffed it up. I think the problem is that SLF4J no longer takes vararg parameters but allows up to two (and no more) parameters in the message. The benefit of that is that fewer temporary objects are created (no argument arrays). It should be straightforward to modify the logging trait. I'll look into that tonight and update the post. I just thought I'd post these unverified and hastily put together words in the hope that it will help you solve your problem. As a side note (but probably not applicable to this particular problem): to pass a Scala array into a Java method that takes varargs, you suffix _* if I'm not mistaken: method(myScalaArray._*). Hope this helps.

John (August 18, 2009 at 11:21 am):
Thanks for the reply Joakim. The syntax is:

  def info(message:String, values: Object*) = log.info(message, values:_*)

...but this generates the Scala compile error:

  [WARNING] Logging.scala:18: error: no `: _*' annotation allowed here
  [WARNING] (such annotations are only allowed in arguments to *-parameters)
  [WARNING] def info(message:String, values:Object*) = log.info(message, values:_*)
  [WARNING]                                                                      ^

It would seem that the SLF4J "info" method that expects Object[] is not good enough for the Scala compiler. I look forward to seeing what you can come up with. John

johlrogge (August 18, 2009 at 7:16 pm):
I updated the code above in the post. I don't know how I managed to stuff it up, but I did. Thanks for noticing and taking your time to let me know about it. Here is my test:

  object Test extends Object with Log {
    def main(args: Array[String]): Unit = {
      info("Hello {} how are you doing this {} day", "friend", "lovely")
    }
  }

Prints: 1 [main] INFO - Hello friend how are you doing this lovely day

John (August 19, 2009 at 10:47 am):
That's great. Output just as I wanted it. I assume that the array manipulation will occur whether or not the logging level is active, but the final string construction will only occur if logging is enabled.

johlrogge (August 19, 2009 at 11:21 am):
Yes, that's correct. A way around that would be to pass the array construction by name, as suggested in another comment. That would still construct a temporary object for a function, but it will be faster than the array manipulation every time. I'm looking into losing the arrays completely with the newest version of slf4j. I hope I'll get around to it this week, but it would be straightforward. Just do a few overloads:

  def info(message:String)
  def info(message:String, arg1:Object)
  def info(message:String, arg1:Object, arg2:Object)

And so on. 1.5.8 has matching methods:

  void debug(String msg): Log a message at the DEBUG level.
  void debug(String format, Object arg): Log a message at the DEBUG level according to the specified format and argument.
  void debug(String format, Object[] argArray): Log a message at the DEBUG level according to the specified format and arguments.
  void debug(String format, Object arg1, Object arg2): Log a message at the DEBUG level according to the specified format and arguments.

That would limit the number of parameters to two and require manually creating an array for more arguments than that, but it's a fair trade-off IMO.

Pedro (September 25, 2009 at 7:02 pm):
Thanks for posting this piece of code, it's very helpful. I'm new to Scala and I wasn't quite sure how to implement logging features in my code. I'm looking forward to new blog articles from you.

Antony Stubbs (May 27, 2010 at 8:07 am):
Cool! :)

Örjan (July 23, 2010 at 10:33 pm):
Have you seen the @elidable annotation? /Ö

johlrogge (July 23, 2010 at 11:28 pm):
No, I have not seen it. Good to know that it's there. Perhaps it should be used in a logging framework so everyone has the option to remove logging code by recompiling. I would not use that option myself; I can't think of a situation where I would physically remove the ability to turn on debug logging. Perhaps it would make sense in more constrained environments such as Android development... I think the downside is a bit extreme though... Interesting! Good to know, and thanks for sharing this info. Have to think more about this.

ddekany (January 10, 2011 at 10:20 pm):
You don't mention here the important fact that the log field in your example is not static, and so each instance will have its own, and getLogger is re-executed for each instantiation. Hence your example is not equivalent to what people used to do in Java.

johlrogge (January 10, 2011 at 10:46 pm):
You're absolutely right about that. To be fair, I never claimed the code was equivalent. In fact, I think it's /better/ in many respects :). It does have the drawback you pointed out. I don't think that is a big issue though. I have assumed that slf4j caches loggers so that they won't have to be resolved on each getLogger. If that is not true then it's easy to cache the loggers ourselves in a companion object to the trait. I would be very surprised if this turns out to be a bottleneck in the average application, but I have not benchmarked it. It is also possible to mix the trait into a companion object, but I think it's more hassle than it's worth. Anyway, thanks for pointing this out in case it was not clear to all readers. It is indeed possible that readers just assume that my code is equivalent in this respect, and it's not.

ddekany (January 11, 2011 at 10:38 am):
People underestimate the importance of this thing... since we are talking about logging here, it is an important difference, because: (a) If you extend your SomeClass class with SomeSubClass, then the logger in SomeClass will suddenly do getLogger(classOf[SomeSubClass]). In Java, it would remain classOf[SomeClass] no matter what. We could argue about which is better, but for now the point is simply that there is a difference in semantics here. (b) The performance impact is often neglected in this topic, with talk about premature optimization and such. But it has not much to do with optimization, but with expressing your intent, which is not that you want a logger for an object, but that you want a logger for a class (see point (a)). This translates to an in-principle unnecessary slowdown (a kind of accidental complexity), which is not necessarily trivial either. There are classes for very lightweight, very short-lived objects. For example, for an implicit conversion in Scala, a new instance is created for each single call of a method that is added by the implicit conversion. Plus, if your class extends other classes and traits, and those also extend others and so on, and some of them mix in the Log trait, you end up with several log fields per instance and several getLogger calls per instantiation. So I think it's something people had better be aware of before mixing in Log blindly everywhere. Unlike in Java, it's not for free.
https://johlrogge.wordpress.com/2009/06/27/loggingtools-in-scala/
DM_SceneHook Class Reference

A Scene Hook creates new scene render hooks when new viewports are created. More...

#include <DM_SceneHook.h>

Detailed Description

A Scene Hook creates new scene render hooks when new viewports are created.

Definition at line 121 of file DM_SceneHook.h.

Constructor & Destructor Documentation

Create a scene hook which creates scene render hook instances for specific viewports. Only one scene hook is ever created, and it is responsible for managing the scene render hooks for viewports. Each hook requires a name (for error reporting) and a priority level to resolve multiple scene hook conflicts.

Definition at line 133 of file DM_SceneHook.h.

Definition at line 134 of file DM_SceneHook.h.

Member Function Documentation

Called when a viewport needs to create a new hook. Each viewport has its own scene hook.

Called when a viewport no longer requires the hook. When a viewport is destroyed, it retires all its hooks. Because a hook could be shared between all viewports, this method gives the scene hook the opportunity to delete it, dereference it, etc. The viewport doing the retiring is passed in along with the hook it is retiring.
http://www.sidefx.com/docs/hdk/class_d_m___scene_hook.html
That's incredible. I worked out a binary converter which converts a decimal value into binary. A lot of people have already done the same before, but now I have also managed to do it. I didn't think that this task could be coded in such a simple and brief way. How effective C++ sometimes can be! Here you are:

#include <iostream>
#include <vector>
#include <cmath>
using namespace std;

vector<int> binary(1);
int i, s = 0;
void Binary(int dec);

int main() {
    cout << "Give a positive integer: ";
    while (cin >> i) {
        cin.ignore();
        Binary(i);
        cout << endl;
        i = binary.size();
        while (i > 0) {
            i -= 1;
            s += sizeof(binary[i]);
            cout << binary[i] << " ";
        }
        cout << "\tThe size of your vector in bytes: " << s;
        cout << endl << endl;
        binary.resize(1);
        s = 0;
        cout << "Give a positive integer: ";
    }
    cin.get();
}

void Binary(int dec) {
    int l = 1, m = 0;
    bool flag = false;
    binary[0] = 0;
loop:
    while (l < dec) {
        l *= 2;
        m++;
        if (flag == false && l <= dec)
            binary.push_back(0);
    }
    if (dec == 0) { binary[0] = 0; }
    else if (dec == 1) { binary[0] = 1; }
    else if (dec - l == 0) binary[m] = 1;
    else {
        binary[m - 1] = 1;
        dec -= l / 2;
        flag = true;
        l = 1;
        m = 0;
        goto loop;
    }
}

My binary converter triggers questions about vectors, data types, the sizeof operator, and their usage which may be worth discussing. However, this solution also puts up some brand new questions for me.

The first point is the following behaviour of the code: if you replace the element type of the binary vector with the bool type (1 byte of allocated space), the program doesn't do the conversion in the right way. The following happens with the vector of bool type:

1st input (any power of two): e.g. 16
1st output: 10000
2nd input (any number that isn't a power of two): e.g. 7
2nd output: 000 (the number of digits is right)

After that, the program continues without any mistake.

Am I right when I come to the conclusion that the vector class is a design for data containers that doesn't work properly with the bool data type? Maybe there are defined flag bits which need to mark the beginning or the end of the allocated memory region, for example.

The other event doesn't cause a problem; it's just simply new to me: the behaviour of the sizeof operator. In the case of the code with a vector of int type (4 bytes allocated for each element):

1st input: 3
1st output: 11 / The size of your vector in bytes: 8
2nd input: 16
2nd output: 10000 / The size of your vector in bytes: 20

In the case of the code with a vector of bool type (1 byte allocated for each element):

1st input: 3
1st output: 11 / The size of your vector in bits: 16
2nd input: 16
2nd output: 10000 / The size of your vector in bits: 40

Yes, you see that right: you must overwrite even the text part "in bytes", as I think the sizeof operator, in the case of the bool type, calculates the memory space not byte by byte but bit by bit. That's the curious behaviour that surprises me.
http://cboard.cprogramming.com/cplusplus-programming/135355-my-binary-converter-trigger-question-about-vector-datatype-sizof-op-etc.html
This chapter describes how to use WebLogic jCOM to call methods on a WebLogic Server object from a COM client.

- Special Requirement for Native Mode
- Calling WebLogic Server from a COM Client: Main Steps
- Preparing WebLogic Server
- Running COM-to-WLS Applications in Native Mode

Special Requirement for Native Mode

Note that WebLogic Server must be installed on COM client machines in order for your COM-to-WLS application to run in native mode. For more information on native mode, see Running COM-to-WLS Applications in Native Mode.

Calling WebLogic Server from a COM Client: Main Steps

This section summarizes the main steps to call into WebLogic Server from a COM client. Most are described in detail in later sections.

On the WebLogic Server side:

- If you are using early binding, run the java2com tool to generate Java wrapper classes and an Interface Definition Language (IDL) file, and compile the files. See Generate Java Wrappers and the IDL File—Early Binding Only.
- Enable COM calls on the server listen port. See Enable jCOM in the Oracle WebLogic Server Administration Console Help.
- Grant access to server classes to COM clients. See Configuring Access Control.
- Configure any other relevant console properties. See Servers: Protocols: jCOM in the Oracle WebLogic Server Administration Console Help.

On the COM client side:

- Install the jCOM tools files and, for native mode only, the WebLogic Server class files. See Install Necessary Files.
- If this is a zero-client installation, obtain an object reference moniker (ORM) from the WebLogic Server ORM servlet, either programmatically or by pasting it into your application. See Obtain an Object Reference Moniker from the WebLogic Server Servlet—Zero Client Only.
- If you are using early binding, obtain the IDL file generated on the WebLogic Server machine and compile it into a type library; then register the type library and the WebLogic Server instance it services. For both of these steps, see Generate Java Wrappers and the IDL File—Early Binding Only.
- Register the WebLogic Server JVM in the registry.
If you want to communicate with WebLogic Server in native mode, set that in this step. See Register the WebLogic Server JVM in the Client Machine Registry.
- Code the COM client application. See Code the COM Client Application.
- Start the COM client. See Start the COM Client.

Preparing WebLogic Server

The following sections discuss how to prepare WebLogic Server so that COM clients can call methods on WebLogic Server objects.

Add the path to the JDK libraries and weblogic.jar to your CLASSPATH. For example:

set CLASSPATH=%JAVA_HOME%\lib\tools.jar;%WL_HOME%\server\lib\weblogic.jar;%CLASSPATH%

where JAVA_HOME is the root folder where the JDK is installed (c:\Oracle\Middleware\jdk160 or c:\Oracle\Middleware\jrockit_160 by default) and WL_HOME is the root directory where the WebLogic Platform software is installed (c:\Oracle\Middleware\wlserver_10.3 by default).

Generate Java wrappers and an IDL file with the java2com tool:

java com.bea.java2com.Main

The java2com GUI is displayed. Input the following:

- Java Classes & Interfaces: the list of the wrapper classes to be converted
- Name of generated IDL File: the name of the IDL file
- Output Directory: drive letter and root directory\TLB, where TLB signifies OLE Type Library

The java2com tool looks at the class specified, and at all other classes that it uses in the method parameters. It does this recursively. You can specify more than one class or interface here, separated by spaces. All Java classes that are public, not abstract, and have a no-parameter constructor are rendered accessible as COM classes. Other public classes, and all public interfaces, are rendered accessible as COM interfaces.

If you click the Generate button and produce wrappers and the IDL at this point, errors are generated. This is because certain classes are omitted by default by the java2com tool. By looking at the errors generated during compilation, you can determine which classes are causing problems. To fix the problem, click the Names button in the java2com tool and remove any references to the class files you require. In this example we must remove the following references:

*.toString > ''''
class java.lang.Class > ''''

Once these references have been removed, you can generate your wrappers and IDL: click Generate in the java2com GUI. The java2com tool generates Java classes containing DCOM marshalling code used to access Java objects. These generated classes are used behind the scenes by the WebLogic jCOM runtime. You simply need to compile them and make sure that they are in your CLASSPATH.

Configuring Access Control

Grant the COM client user access to the classes that the COM client application needs to access. Your particular application dictates which classes to expose. For example, assume that the COM client needs access to the following three classes:

- java.util.Collection
- java.util.Iterator
- ejb20.basic.beanManaged

Granting Access to java.util.Collection and java.util.Iterator

1. In the left-hand pane of the WebLogic Server Administration Console, click the Services node and then click the JCOM node underneath it.
2. In the right-hand pane, enter: java.util.*
3. Click Define Security Policy.
4. In the Policy Condition box, double-click "Caller is a member of the group".
5. In the "Enter group name:" field, enter the name of the group of users to whom you are granting access.
6. Click Add.
7. Click OK.
8. In the bottom right-hand corner of the window, click Apply.

To grant access to ejb20.basic.beanManaged, repeat the steps in Granting Access to java.util.Collection and java.util.Iterator, replacing "java.util.*" with "ejb20.basic.beanManaged" in the step where you enter the class name.

The following sections describe how to prepare a COM client to call methods on WebLogic Server objects.

Install Necessary Files

There are a number of files that must be installed on your client machine in order to call methods on WebLogic Server objects. As noted below, some of these are only necessary if you are making method calls in native mode.
There are five files and three folders (including all subfolders and files) necessary for running the jCOM tools. These tools are located in the WL_HOME\server\bin directory on the machine where you installed WebLogic Server. They are:

- JintMk.dll
- ntvinv.dll
- regjvm.exe
- regjvmcmd.exe
- regtlb.exe
- regjvm (including all subfolders and files)
- regjvmcmd (including all subfolders and files)
- regtlb (including all subfolders and files)

For more information on the jCOM tools, see Chapter 5, "A Closer Look at the jCOM Tools."

Obtain an Object Reference Moniker from the WebLogic Server Servlet—Zero Client Only

You can obtain an object reference moniker (ORM) from WebLogic Server. The moniker can be used from the COM client application, obviating the need to run regjvmcmd. The moniker remains valid for new incarnations of the server as long as the host and port of the server remain the same. There are two ways to obtain an ORM for your COM client code:

- Obtain it through a servlet running on WebLogic Server. Open a Web browser on WebLogic Server to http://[wlshost]:[wlsport]/bea_wls_internal/com, where wlshost is the WebLogic Server machine and wlsport is the server's port number.
- Run the com.bea.jcom.GetJvmMoniker Java class, specifying as parameters the full name or TCP/IP address of the WebLogic Server machine and the port number:

  java com.bea.jcom.GetJvmMoniker [wlshost] [wlsport]

A long message is displayed which shows the objref moniker and explains how to use it. The text displayed is also automatically copied to the clipboard, so it can be pasted directly into your source. The objref moniker returned can access the WebLogic Server instance on the machine and port you have specified.

Generate Java Wrappers and the IDL File—Early Binding Only

Perform the client-side portion of the wrapper and Interface Definition Language (IDL) file generation:

1. Copy the IDL file to the client machine. If the java2com tool successfully executes on the WebLogic Server machine (see Preparing WebLogic Server), an IDL file is produced on the server machine. Copy this IDL file to the client machine, and place it in this COM application's \TLB subdirectory.

   Note: If the client and the server are on the same machine, this step is not necessary. The java2com tool outputs to the sample's \TLB subdirectory.

2. Compile the IDL file into a type library:

   midl containerManagedTLB.idl

   This command calls the Microsoft IDL compiler MIDL.EXE to carry out the compilation. The result of the compilation is a type library called containerManagedTLB.tlb.

3. Register the type library and set the JVM it services:

   regtlb /unregisterall
   regtlb containerManagedTLB.tlb registered_jvm

   The first line calls regtlb.exe to unregister any previously registered type library versions. The second line then registers the newly compiled type library. The second parameter, registered_jvm, passed to regtlb is important: it specifies the name of the JVM linked with the type library. The WebLogic jCOM runtime requires this information for linking type-library-defined object calls to the appropriate wrapper classes.

The WebLogic Server JVM is registered in the client machine registry through the regjvm tool. For details, see Register the WebLogic Server JVM in the Client Machine Registry.

In general, wrapper files must be placed on the server and compiled, and the IDL file must be placed on the client and compiled. If the server and client are on separate machines, and you created the wrappers and IDL on the client side, you must distribute the wrapper files you have just compiled to the server. If you created the wrappers and IDL on the server side, then you must move the IDL file to the client, where it can be compiled to a type library. The wrapper files and IDL file must be created by a single execution of the java2com tool. If you attempt to run the java2com tool separately on both the server and the client, the wrappers and IDL file created would not be able to communicate.
The IDL and wrappers have unique stamps on them for identification; wrappers can only communicate with IDL files created by a common invocation of the java2com tool, and vice versa. As a result, the java2com tool must be run once, and the files it creates distributed afterward. If you make a mistake or a change in your Java source code and you need to run the java2com tool again, you must delete all of your wrapper files, your IDL file, and your TLB file, and redo all the steps. When you use the java2com tool to create wrappers for classes that contain (or reference) deprecated methods, you see deprecation warnings at compile time. disregard these warnings; WebLogic jCOM renders the methods accessible from COM. The generated wrapper classes must be in your CLASSPATH. They cannot be just located in your EJB jar. Register with the local Java Virtual Machine by adding the server name to the Windows registry and associating it with the TCP/IP address and client-to-server communications port where the WebLogic Server instance listens for incoming COM requests. By default, this is localhost:7001. Invoke the regjvm GUI tool, which displays this screen. If WebLogic Server is running on something other than localhost and listening on a port other than 7001, then fill in the hostname (or IP address) and port number If you prefer, use the command-line version of regjvm: regjvmcmd servername localhost[7001] The regjvm (or regjvmcmd) tool does not overwrite old entries when new entries with identical names are entered. This means that if you ever need to change the hostname or port of the machine with which you wish to communicate, unregister the old entry, and then create a new one. To unregister a JVM in the regjvm tool window, select the JVM you wish to unregister and click Delete. 
Alternatively, unregister the JVM with the command-line tool regjvmcmd:

regjvmcmd /unregister servername

If your COM client is running in native mode, check the "Native Mode" or "Native Mode Out-of-Process" radio button in the regjvm window, or invoke regjvmcmd with the /native parameter. For details on this step, see Running COM-to-WLS Applications in Native Mode.

You can now invoke methods on the WebLogic Server objects. How you code this naturally depends on whether you chose late binding or early binding.

In the following sample Visual Basic application, notice the declaration of the COM version of the Account EJB's home interface, mobjHome. This COM object is linked to an instance of the AccountHome interface on the server side.

Dim mobjHome As Object

Private Sub Form_Load()
    'Handle errors
    On Error GoTo ErrOut
    ' Bind the EJB AccountHome object through JNDI
    Set mobjHome = CreateObject("examplesServer:jndi:ejb20-containerManaged-AccountHome")

WebLogic jCOM has problems handling methods that are overloaded but have the same number of parameters. There is no such problem if the number of parameters in the overloaded methods is different; when they are the same, calls fail. Unfortunately, the method InitialContext.lookup is overloaded:

public Object lookup(String)
public Object lookup(javax.naming.Name)

To perform a lookup, you must use the special JNDI moniker to create an object:

Set o = CreateObject("servername:jndi:objectname")

The most obvious distinguishing feature of early bound code is that fewer variables are declared As Object. Objects can now be declared by using the type library you generated previously:

Declare objects using the type library generated in Generate Java Wrappers and the IDL File—Early Binding Only.
In this Visual Basic code fragment, the IDL file is called containerManagedTLB and the EJB is called ExamplesEjb20BasicContainerManagedAccountHome:

Dim objNarrow As New containerManagedTLB.JCOMHelper

Now, you can call a method on the object:

Set mobjHome = objNarrow.narrow(objTemp, "examples.ejb20.basic.containerManaged.AccountHome")

Start up the COM client application.

For COM-to-WLS applications, there's a distinction in native mode between "in-process" and "out-of-process":

Out-of-process: The JVM is created in its own process; inter-process communication occurs between the COM process and the WebLogic Server JVM process.

In-process: The entire WebLogic Server JVM is brought into the COM process; in effect, it is loaded into the address space of the COM client. The WebLogic Server client-side classes reside inside this JVM.

You determine which process your application uses by selecting the native-mode-in-process or native-mode radio button in the regjvm GUI tool interface.

If you want your JVM to run out of process (but allow COM client access to the Java objects contained therein using native code), follow these steps:

Invoke the regjvm GUI tool to register your JVM as being native. The regjvm tool sets up various registry entries to facilitate WebLogic jCOM's COM-to-WLS mechanism. When you register the JVM you must provide the name of the server in the JVM id field. For example, if you enabled jCOM native mode on exampleServer, then when you register with regjvm, enter exampleServer in the JVM id box.

If your JVM is not already running, click the Advanced radio button and type its path in the "Launch Command" field. For detailed information on the regjvm tool, see Chapter 5, "A Closer Look at the jCOM Tools."
Insert the following code into the main section of your application code, to tell the WebLogic jCOM runtime that the JVM is ready to receive calls:

com.bea.jcom.Jvm.register("MyJvm");

For example:

public class MyJvm {
    public static void main(String[] args) throws Exception {
        // Register the JVM with the name "firstjvm"
        com.bea.jcom.Jvm.register("firstjvm");
        Thread.sleep(6000000); // Sleep for an hour
    }
}

From Visual Basic you can now use late binding to instantiate instances of any Java class that can be loaded in that JVM:

Set acctEJB = CreateObject("firstjvm.jndi.ejb20.beanManaged.AccountHome")

Having registered the JVM, use the standard WebLogic jCOM regtlb command to allow early bound access to Java objects (regtlb takes as parameters the name of a type library and a JVM name, and registers all the COM objects defined in that type library as being located in that JVM). You can also control the instantiation of Java objects on behalf of COM clients by associating your own instantiator with a JVM (an additional parameter to com.bea.jcom.Jvm.register(...))—a kind of object factory.

Use this technique to actually load the JVM into the COM client's address space. Again, use the regjvm command, but this time specify additional parameters.

Note: When you register the JVM you must provide the name of the server in the JVM id field. For example, if you enabled jCOM native mode on exampleServer, then when you register with regjvm, enter exampleServer in the JVM id box.

The simplest example would be to use Visual Basic to perform late bound access to Java objects. First register the JVM. If you are using Sun's JDK 1.3.1, which is installed under c:\Oracle\Middleware\jdk160 by default, and WebLogic Server is installed in c:\Oracle\Middleware\wlserver_10.3\server\lib\weblogic.jar, and your Java classes are in c:\pure, you would complete the regjvm tool's screen as follows. As you can see, you specify the JVM name, the CLASSPATH, and the JVM bin directory path.
From Visual Basic, you should now be able to call the GetObject method:

MessageBox GetObject("MyJVM.jndi.ejb20.beanManaged.AccountHome")

For detailed information on the regjvm tool, see Chapter 5, "A Closer Look at the jCOM Tools."
http://docs.oracle.com/cd/E14571_01/web.1111/e13725/comtowls.htm
CC-MAIN-2015-48
refinedweb
2,833
54.52
My Java implementation is as follows:

public class Solution {
    public void rotate(int[] nums, int k) {
        int n = nums.length;
        k = k % n;
        if (k == 0) return;
        int index = 0, val = nums[index], counter = 0;
        int lastBegin = 0, targetIndex;
        while (counter < n) {
            targetIndex = (index + k) % n;
            int tempVal = nums[targetIndex];
            nums[targetIndex] = val;
            if (targetIndex == lastBegin) {
                index = (targetIndex + 1) % n;
                val = nums[index];
                lastBegin = index;
            } else {
                index = targetIndex;
                val = tempVal;
            }
            counter++;
        }
    }
}

The runtime is about 375 ms, which is below the average Java solution's performance. I don't quite understand why it is so slow, because each step moves only one element.

    if (targetIndex == lastBegin) {
        index = (targetIndex + 1) % n;
        val = nums[index];
        lastBegin = index;
    }

What does this part of the code do?

What I do is move an element to the right position, that is, its index plus k. Depending on the value of k, the moving process may have many rounds. I use lastBegin to keep track of the first element in a round. If targetIndex is equal to lastBegin, then this round is finished and the process should move to the next round, starting from (targetIndex + 1) % n.
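For comparison (this alternative is not from the original thread), the usual way to get O(1) extra space without the cycle bookkeeping is the three-reversal trick: reverse the whole array, then reverse the first k elements, then reverse the rest. It makes simple sequential passes, which often benchmarks faster in practice than the cycle-following version above.

```java
// Three-reversal rotation: same contract as the LeetCode problem
// (shift elements right by k), O(n) time, O(1) extra space.
public class Rotate {
    public static int[] rotate(int[] nums, int k) {
        int n = nums.length;
        if (n == 0) return nums;
        k %= n;
        reverse(nums, 0, n - 1); // reverse the whole array
        reverse(nums, 0, k - 1); // reverse the first k elements
        reverse(nums, k, n - 1); // reverse the remainder
        return nums;             // returned for easy chaining/testing
    }

    private static void reverse(int[] a, int lo, int hi) {
        while (lo < hi) {
            int t = a[lo];
            a[lo] = a[hi];
            a[hi] = t;
            lo++;
            hi--;
        }
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(
            rotate(new int[]{1, 2, 3, 4, 5}, 2))); // prints [4, 5, 1, 2, 3]
    }
}
```

Each element is swapped at most twice, and the access pattern is cache-friendly, which is one plausible reason the cyclic version measures slower despite the same asymptotic cost.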
https://discuss.leetcode.com/topic/13993/my-java-implementation-using-o-1-space-performs-quite-slow
Last week, we announced the ReSharper Ultimate 2017.2 EAP (Early Access Program) is now available. If you downloaded it already, you may have discovered some of the new features and enhancements made in ReSharper and ReSharper C++. If not, no worries! In this post, we'll look at what the first ReSharper Ultimate 2017.2 EAP build brings to the table. Support for default literal – C# 7.1 It's been only a few months since C# 7 was released, and now C# 7.1 is around the corner. It comes with a new default literal, async main, tuple projection initializers and pattern matching with generics. In this EAP, ReSharper adds support for the first one in that list: the default literal. With C# 7.1 we can now use the default literal syntax; ReSharper supports it and provides an inspection when default(T) is being used. A quick-fix allows us to remove the redundant type specification: Code completion, typing assists and code generation We've made some changes to code completion. The look and feel of the UI was changed with a new scrollbar and new icons for completion filters, where we can show or hide certain categories of results from code completion, such as namespaces, classes, interfaces, methods, templates and many more. A new typing assist helps adding NotNull and CanBeNull annotations. When writing a method signature or member declaration, typing ! or ? directly after the type name will add NotNull or CanBeNull: When typing { after => in an expression bodied member, ReSharper will convert it into a block body: A new option was added to make properties mutable when implementing an interface with get-only properties. For example, in the following case where IPerson has a get-only Name, we can tick "Make properties mutable" when implementing missing members (Alt+Insert). The "Introduce auto-property from parameter" quick-fix already allowed us to introduce a get-only auto-property with options such as adding a private setter or making it a public mutable property.
When the parameter was already used in code, the "Initialize auto-property from parameter" context action now also provides additional options: Language injections ReSharper can treat particular string literal contents as a piece of code written in one of the supported programming languages: C# or ECMAScript regular expressions, CSS, HTML, JSON or JavaScript. With ReSharper Ultimate 2017.2, we're adding support for injected path references and injected XML. For example, we can mark a string as being injected XML and immediately get syntax highlighting inside that string literal! ReSharper's context actions are also available in this string, so we can change text to CDATA or convert our injected XML to LINQ to XML: New navigation actions Using the Navigate To menu (Alt+Backquote), we can navigate to various items, depending on context. For example, we can navigate to declaration, implementation, related files, and more. ReSharper Ultimate 2017.2 adds a new navigation: navigate to file nearby. Navigate to file nearby displays the project structure around our current file: the project is shown, we can see folders and files at the same directory level as our current file, and we can easily jump to these files or create a new one: When using Search Everywhere (Ctrl+T), Go to text is now integrated. This means that we can search for any text in our solution and navigate to, for example, a Markdown file based on a simple text search. After typing (part of) the text to search, we can use the arrow keys to navigate through the list of results. We made some other navigation improvements as well, such as the ability to change a file's target framework identifier in Go to related files, editing project item properties and asynchronous refresh in Find Results. New refactoring and initial support for TypeScript 2.3 A new TypeScript refactoring was added: "Introduce/inline type alias". In the following example, let's inline NameOrResolver in the getName() function arguments.
We’ve started adding support for TypeScript 2.3. While we don’t yet support contextual this for object literals or the --strict option, ReSharper does add support for async iterators, optional generics and overload resolution for stateless JSX components. Async iterators are pretty nice. For example, we can write an array of promises and then run a for await which will iterate over that array, awaiting each value. Other things can happen on the main thread during such iteration. Also note that asyncIterator.next() isn’t called for the next item until our current iteration is complete – ensuring we’ll get items in order, iterations won’t overlap and when we break or return in our loop, remaining promises are not executed. Do check the TypeScript 2.3 release notes for additional examples! Angular improvements and Angular 4 support ReSharper now supports Angular input/output aliases and attribute directives. Angular2 components added via NPM are now supported as well. As an example, here’s an application using ionic-angular. We can make use of the button component (using the ion-button directive). ReSharper provides code completion, we can see quick documentation info and even navigate to the component declaration. Note that Support Angular markup in HTML pages must be configured in the ReSharper options under HTML | Editor for this to work. For Angular 4, ReSharper 2017.2 adds support for ; else in *ngIf, and variable assignments (like people as person) in both *ngIf and *ngFor. ReSharper C++ In this first ReSharper Ultimate 2017.2 EAP build, ReSharper C++ introduces support for extended friend declarations from C++11, selection statements with initializer from C++17, and more language features. We’ve added SFINAE support for expressions (“Substitution Failure Is Not An Error”), as well as support for floating-point and string user-defined literals. 
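To make the async-iterator behavior described above concrete, here is a minimal TypeScript sketch (my own addition, not from the post): `for await` over an array of promises yields the resolved values strictly in array order, waiting for each one before starting the next iteration.

```typescript
// `for await` over an array of promises: each iteration awaits the
// current promise before moving on, so iterations never overlap and
// values come back in array order even if later promises resolve first.
async function collect(promises: Promise<string>[]): Promise<string[]> {
  const out: string[] = [];
  for await (const value of promises) {
    out.push(value);
  }
  return out;
}

collect([Promise.resolve("a"), Promise.resolve("b"), Promise.resolve("c")])
  .then((result) => console.log(result.join(""))); // prints "abc"
```

Breaking or returning out of such a loop also stops consuming the remaining promises, which matches the ordering guarantees the post describes.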
We're looking forward to any feedback you may have on the latest builds of ReSharper, ReSharper C++, dotCover, dotTrace, dotMemory, dotPeek, as well as various command-line packages included in this EAP. Download ReSharper Ultimate 2017.2 EAP, and give it a try!

The new XML quick actions are very cool. However, I've noticed that after you convert text to LINQ to XML, the whole CDATA section goes missing. Is that expected behavior?

Just logged an issue for that.

Love the TypeScript and Angular 4 improvements…

The ! and ? as shortcuts for NotNull / CanBeNull are great. Would it be possible to automatically add null checks when typing "!"? (Of course this must be configurable.)

That would be kind of cool! Would you mind logging an issue for it at ?

Done, see. Please vote for it!

Great stuff. Is async/await support in dotTrace available with this EAP?

It's not, but AFAIK the dotTrace team still plans to merge async/await support into 2017.2.

Thanks for the quick response. Is there a YouTrack feature request for the async/await support in dotTrace?
https://blog.jetbrains.com/dotnet/2017/06/07/resharper-ultimate-2017-2-eap-whats-new-build-1/
Red Hat Bugzilla – Bug 89464: timeconfig in text mode is broken over remote logins
Last modified: 2007-04-18 12:53:15 EDT

I log in from inside an xterm/eterm on RH 7.3 as root on a RH 9 server, and timeconfig, just like some other tools like netconfig, looks completely broken. It's unusable.

Created attachment 91240 [details] screenshot of the brokenness

This happens regardless of what I set TERM to.

hp gave me the reason for this problem, I suppose you can close the bug, but please look at my answer:

----------------------------------------------------------------------------

> You have to match the encoding of mc to the encoding of your terminal. The encoding of mc comes from the locale; for example, LANG=en_US.UTF-8 gives UTF-8 encoding, LANG=en_US.ISO-8859-1 gives Latin-1. A 7.3 terminal will be expecting Latin-1 by default. RHL 9 gnome-terminal has a menu Terminal->Character Coding which can be used to change encoding to match remote systems.

Indeed, thanks for pointing that out, I missed that change (and boy, did I look for it). Can this be added somewhere, like in the release notes next time? I know why you are making this change, but honestly, knowing that it breaks remote connections in a non-obvious way, I question the wisdom of this new default. It also breaks any other terminal by default (like xterm/Eterm/whatever), doesn't it?

> This can't be done automatically because the ssh and telnet protocols do not include encoding negotiation.

A screwup in those protocols, so that's true.

> The only solution is to manually set your encodings properly.
Other possible fixes would have been:

1) auto-set LANG to en_US.ISO-8859-1 for remote connections
2) have gnome-terminal set a new TERM type (which gets passed by telnet/ssh/rlogin) and set LANG to en_US.UTF-8 only if this new TERM type is detected (like xterm-utf8)

----------------------------------------------------------------------------

I don't often use X and therefore am not particularly familiar with the ins and outs of encodings. I use the Win32 client PuTTY to access my Red Hat Linux host via SSH. When I run the setup utility, each of its configuration programs works, except for timeconfig, which fails in the way described by Marc. PuTTY provides configuration options for the encoding type, and fiddling with them alters the setup program's menu in some interesting ways. But no setting I've found coaxes timeconfig into working over the SSH connection. So it seems that I must travel to the NOC to reset the timezone on an errantly configured host. Perhaps I'm missing the obvious, but I can't see how to change the encoding in a way that works around this problem. And I'm puzzled why the problem affects only timeconfig, not authconfig and timeconfig's other siblings.

Bill, when I ssh from an xterm on a 7.2 box into a 9 box, all the text utilities look messed up to me. I don't think that there's anything different about timeconfig. The best thing I can tell you is to follow hp's advice and prepend 'LANG=en_US.ISO-8859-1' to the command to run any of the text-based tools. That seems to work for me. Basically, a lot of things are going to be a mess until everything uses UTF-8 encoding. I don't think that there's anything I can do in the context of timeconfig to fix this problem. Closing as 'wontfix'.

Hi Brent, thanks for the additional info. But I don't think your test using an xterm goes to the heart of the issue. You point out that everything's messed up with xterm. That's interesting.
But when I use Win32 PuTTY, the only member of the setup program that fails is timeconfig. Moreover, timeconfig can't be coaxed into working by prepending an assignment to the LANG environment variable, as suggested. So timeconfig differs in some important--and unwholesome--way from the other members of the setup program. Make sense? Cheers,

Ok, I've downloaded PuTTY on my Win2k box. I get the same behavior as if I was on a RHL 7.2 box. All the text-mode tools look bad until I prepend "LANG=en_US.ISO-8859-1" to whatever command I'm running. Then things look fine. I'm attaching screenshots of timeconfig and authconfig to demonstrate the behavior. The ones that look good had "LANG=en_US.ISO-8859-1" prepended to the command; the ones that look bad did not.

Brent, please set the MIME type of your attachments to image/png, not text/plain.

Hey y'all, here's yet another twist. I have a second RHL 9 host. When I SSH into it by using PuTTY, all the programs accessed via the setup command work okay, except for timeconfig. The screens are a bit messy, but not so bad as to be unusable: the line-drawing characters are substituted by letters with diacritical marks and such. That's all. Timeconfig, on the other hand, dies. Running timeconfig directly from the command line yields a visible stack trace showing that timeconfig is trying to open the X display:

# timeconfig
Traceback (most recent call last):
  File "/usr/share/redhat-config-date/timeconfig.py", line 29, in ?
    from timezone_map_gui import ZoneTab
  File "/usr/share/redhat-config-date/timezone_map_gui.py", line 18, in ?
    import gtk
  File "/usr/src/build/218821-i386/install/usr/lib/python2.2/site-packages/gtk-2.0/gtk/__init__.py", line 43, in ?
RuntimeError: could not open display

Note that the environment variable DISPLAY is not set:

# echo $DISPLAY
#

Has timeconfig gone GUI-only in RHL 9? Cheers,

marc: sorry about that. I thought bugzilla could detect file types...
Created attachment 92080 [details] authconfig good
Created attachment 92081 [details] authconfig bad
Created attachment 92082 [details] timeconfig bad
Created attachment 92083 [details] timeconfig good

Bill, that particular bug you are seeing is a dupe of bug #90185, which was fixed in redhat-config-date-1.5.10-1 last week. I'm going to close this bug as 'wontfix' since I don't really see a way to fix it. Over time, as everything moves to UTF-8 encoding, this problem should go away. In the meantime, prepending "LANG=en_US.ISO-8859-1" can suffice as a workaround.
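The workaround that closes this bug can be wrapped in a small helper so the Latin-1 locale applies to one command only, leaving the shell's own locale untouched. This sketch is my own addition; timeconfig here is just the example from the bug, and any curses/newt-based tool can be substituted.

```shell
# Run a single command under a Latin-1 locale instead of the UTF-8
# default, so line-drawing characters render correctly over ssh/telnet.
run_latin1() {
    LANG=en_US.ISO-8859-1 "$@"
}

# e.g.: run_latin1 timeconfig    (timeconfig itself is RHL-specific)
# The override is per-command; the caller's LANG is not modified:
run_latin1 sh -c 'echo "$LANG"'   # prints en_US.ISO-8859-1
```

The per-command `VAR=value command` form is what the thread means by "prepending LANG to the command".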
https://bugzilla.redhat.com/show_bug.cgi?id=89464
Using just regex, is there a way to skip words or chars when using a lookaround? I guess specifically this might be about a negative lookahead. If I had a sentence in the form of:

this is the WORD i want and I want and this is the PHRASE I DONT WANT

is there a way to use just regex to match "WORD", but only if "PHRASE" is not present? My initial idea was a negative lookahead, but that only covers the immediately following word. I then tried using (?:\w+(?:\s*[\,\-\'\:\/]\s*|\s+)){0,3} and other similar tricks, but this would match the words in between and not the actual phrase. Not to mention the wonkiness of + in lookarounds. Then I thought about using a grouping like [^something], but I didn't know how to do that with full words without a lookaround. I then had the idea to nest lookarounds, which I found out is possible, but that still leaves the root of the problem. Can you skip words in the matching for a lookaround, and if not, how would I go about solving this issue? Because if I nest using a lookbehind, I still need to skip stuff to get to WORD in order to match it. Assume the words are arbitrary in the sentence but the key word and the key phrase are something specific.

1 answer

See also questions close to this topic:

- Regex labeled Invalid in R

In notepad++, the look-ahead regex expression VUL.*?(?=[A-Z]{2,}) successfully isolates all content starting with "VUL" and up to but not including the next instance of successive caps in the excerpt below. But when I attempt to use the expression with grep in R, it produces an error and reports "invalid regexp." What modification is required for R to accept it?

STR. No more; why these delays, this foolish pity? Dost thou not hate a god by gods abhorred, That prostitutes thy radiant boast to man? VUL. Strong are the ties of kindred and long converse. STR. Well; but to disobey thy sire's commands, Darest thou do that? Is not that fear more strong? VUL. Soft pity never touched thy ruthless mind. STR.
Will thy vain pity bring relief? Forbear, Nor waste thyself in what avails not him. VUL. Abhorred be all the fine skill of my hands. STR. And why abhorred? For of these present toils Thy art, in very truth, is not the cause. VUL. Yet wish I it had been some other's lot. STR. All have their lot appointed, save to reign In heaven, for liberty is Jove's alone. VUL. Truth guides thy words, nor have I to gainsay.

- regex that matches a literal that doesn't have another literal before it

So I had a requirement where I wanted to replace all the = in a string with == but the problem was that the string may contain != as well, and I don't want = or != to be replaced. So just replacing = with == won't work. I was thinking if there is a way to check that the = doesn't have ! before it, and then replace. I looked for lookaround regex but that doesn't seem to solve the problem.

- Extracting 25 words to both sides of a word from a text

I have the following text and I am trying to use this pattern to extract 25 words to each side of the matches. The challenge is that the matches overlap, thus the Python regex engine takes only one match. I would appreciate it if anyone can help fix this.

Text:

2015 Outlook The Company is providing the following outlook for 2015 in lieu of formal financial guidance at this time. This outlook does not include the impact of any future acquisitions and transaction-related costs. Revenues - Based on the revenues from the fourth quarter of 2014, the addition of new items at our some facility and the previously opened acquisition of Important Place, the Company expects utilization of the current 100 items to remain in some average

I tried the following pattern:

pattern = r'(?<=outlook\s)((\w+.*?){25})'

This creates one match, whereas I need two matches, and it should not matter whether one overlaps the other. I need basically two matches.

- Regex Pattern with Spaces in Java

I want to identify that a string passed into a function is a valid string.
In doing so, the string is a string of polynomials that must have spaces between them. These are valid:

3x^7 3445x^233 3x 34 355 0
+3x^7 x^6 +3445x^233 -3x +34355 x^2

These are not valid:

+3x^7+3445x^233-3x +34355
+3x^-7+3445x^233-3x +34355

One space does not count; every pattern has to have a space between. How do I select the valid string without selecting any items from the invalid strings? I've tried this...

while (str.hasNext()) {
    str.findInLine("([\\+-]*?\\b\\d+)x\\^([\\+-]*?\\d+\\b)" + "|([\\+-]*?\\b\\d+)x|([+-]*?\\d+)|\\^(\\d+)");
    MatchResult m = str.match();
    // When the term has a valid coefficient and power ie 3x^3
    if (m.group(1) != null) {
        coefficient = Integer.parseInt(m.group(1));
        power = Integer.parseInt(m.group(2));
        this.addTerm(coefficient, power);
    }
    // When the term ends in x ie 3x
    else if (m.group(3) != null) {
        coefficient = Integer.parseInt(m.group(3));
        this.addTerm(coefficient, 1);
    }
    // When the term has no x ie -3
    else if (m.group(4) != null) {
        coefficient = Integer.parseInt(m.group(4));
        this.addTerm(coefficient, 0);
    }
    // When the term has no coefficient ie x^3
    else if (m.group(5) != null) {
        power = Integer.parseInt(m.group(5));
        this.addTerm(1, power);
    }
}

As you can tell, my regex is accepting all valid groups without identifying the spaces. Thanks!

- How to extract TAG content using regex

This is my regex pattern for extracting data between HTML tags:

(<.*?>)(.*?)(<\/.*?>)

It covers most of the requirements. This is my regex example link. There are two problems I'm dealing with:

01. I can't catch the second <h1> tag in the second example.
02. In the third example the regex tags are different.

Please help. Thank you.
EDITED: this is the whole example

import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Hello {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int testCases = Integer.parseInt(scan.nextLine());
        while (testCases-- > 0) {
            String line = scan.nextLine();
            boolean matchFound = false;
            Pattern r = Pattern.compile("(<.*?>)(.*?)(<\\/.*?>)");
            Matcher m = r.matcher(line);
            while (m.find()) {
                System.out.println(m.group(2));
                matchFound = true;
            }
            if (!matchFound) {
                System.out.println("None");
            }
        }
    }
}

And this is the output I'm looking for:

Nayeem loves counseling
Sanjay has no watch
So wait for a while
None
Imtiaz has a secret crush

- Seeking a more elegant way of doing this

I am attempting to write a regex that is better than what I have done before, and below is the link to my regex sample:

\\(begin|end)+(.*?)?

EDIT: I know that I can put {Begin}(.*?){End} and capture all the content between the start and end tag; however, I am out for a more elegant way of doing this.

EDIT 2: How can I achieve the same result with \\(begin|end)+(.*?)? like with the other approach, \\begin(.*?)\\end

EDIT 3: This is NOT a [duplicate].
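Coming back to the original lookaround question at the top of this page: one standard answer is to anchor a single negative lookahead at the start of the string, so the .* inside the lookahead does the "word skipping" for free. A minimal Python sketch (WORD and PHRASE stand in for whatever the real targets are):

```python
import re

# ^(?!.*\bPHRASE\b) runs once, from position 0, scanning the whole
# string for PHRASE before any attempt to match WORD. If PHRASE is
# present the match fails immediately; otherwise .*? skips ahead to
# WORD as usual. re.DOTALL lets the scan cross newlines.
pattern = re.compile(r"^(?!.*\bPHRASE\b).*?\b(WORD)\b", re.DOTALL)

def find_word(text):
    m = pattern.search(text)
    return m.group(1) if m else None

print(find_word("this is the WORD i want"))                          # WORD
print(find_word("this is the WORD and this is the PHRASE too"))      # None
```

Because the lookahead is evaluated from a fixed anchor rather than from the position of WORD, there is no need to "skip words" inside the lookaround at all.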
http://quabr.com/52768006/using-just-regex-is-there-a-way-to-skip-words-or-chars-when-using-a-lookaround
Name Scope¶ Note This has nothing to do with mouthwash! Now that we are building programs with more than one module, and can start to talk about the right way to organize a bigger program, we need to introduce the concept of name scope. Simply put, this is a concept that defines where in a program we can refer to a name. That name can be the name of a variable, constant, or module. Long ago, I used an analogy to explain name scope. The analogy involved surrounding parts of your program with one-way mirrors. You know how these work: from one side you cannot see through them, but from the other side you can see through them. We will surround the entire program file in such a mirror, and further, we will surround each module we create in that file in another mirror. The mirrors will be set up so that from outside of the file, you cannot look into the file, and from outside of any module (but inside the file), you cannot see into the module. If you are standing inside the module, you can see out of it to the world outside, maybe even outside of the file. One catch¶ There is one catch to this rule. You are only allowed to look upward in your program, never downward. As you read your program code, you will define names of variables and constants which hold your data. We have already heard the rule that you cannot use a name unless the compiler knows everything it needs to know about that name so it can make sure you use it correctly. You may also define modules in your code, either fully, or in two parts: the prototype followed by the full module definition. The scope rules determine where in our program we can refer to the module name, meaning where we can call the module into action! You can pretend that each module has its name painted on the outside of the enclosing mirror. Here is how scope works.
Scope¶ At any point in your program code where you want to use a name, we need to look upward in the program to see if that name has been defined above us. If so, we are allowed to use the name. If not, the compiler will generate an error. If we have modules in the program between where we want to use the name, we cannot see into those modules, but we can see around them to code above the modules' surrounding mirrors. Again, if the name can be found using that set of rules, we can use the name. Module local names¶ Names created in a module are called local, meaning they are only visible to code inside that module. This protects them from accidentally being used by any other code in the program. We want this so that we can move the module into another program without breaking anything in that new program. What is interesting about modules, though, is that they can use names of variables and constants outside of their surrounding mirrors. We call these names global, since they must be defined outside of any module to be seen. Normally, we place such names at the top of the program, which makes them visible to any code below the point where they are defined. The global term indicates that you can use those names anywhere in your program. Modules can see other modules as well, so a module can call another module if needed. This is how we build large programs: dividing them up into a bunch of modules that activate each other as needed to do the required work. Module parameter names¶ The names of parameters we create for our modules are actually local to the module, although it is common to see those names in a prototype or module definition. The names themselves can only be used inside the module, and act like specially initialized variables that code in the module can use. When some piece of code calls the module, those parameter names are initialized with the right values at that moment.
The caller's code determines what value will be placed in those variables, and the module code will not be aware how that all happened! Here is an example program, just to reinforce this scope concept:

#include <cmath>     // needed for acos
#include <iostream>  // makes a bunch of names "global" (like cout, and cin)

using namespace std; // simplifies some names

const double PI = acos(-1.0); // define a "global" constant named PI

void myfunction(double angle) {
    double radians;                 // uninitialized local variable
    radians = angle * PI / 180.0;   // using global PI, OK since it is above here
    cout << "Radians:" << radians << endl; // using several globals and a local
}

int main(int argc, char ** argv) {
    cout << "Hello, there!" << endl; // global cout used here
    myfunction(45.0); // calling module defined above. angle will be 45.0
}

This code shows several examples of using names defined above. In module main, we cannot use the name radians since it is invisible to us; it is hidden inside the mirror that surrounds myfunction. We can call myfunction, since we can see its name. The analogy is important in organizing code. It is common in several languages, but not all. You will need to learn the specific rules for scope in whatever language you end up using!
AFFFFFAQ
Alex's Fantastically Fabulous Freedom Force FAQ

New to the Freedom Force Universe? Or maybe you're a veteran and are too embarrassed to admit you don't really know the difference between a mesh and a skin? Luckily for you, the World's Mightiest FF FAQ is here! Read on!

General Q&A
1. What's Freedom Force?
2. What's Freedom Force vs The Third Reich?
3. Sequel? But wasn't Freedom Force supposed to be a trilogy, and wasn't the next game supposed to be set in the 70's?
4. I heard Freedom Force doesn't work with Windows XP (Service Pack 2). Is this true?
5. Can I play multiplayer FF online?
6. How do I take screenshots?

Customizing FF and Hero Files
7. How customizable is Freedom Force?
8. What's a Hero File?
9. Hey, I looked at your hero files and I think <insert favorite character> should be a lot stronger (faster, smarter, etc.)!
10. Well, how did you decide on the stats anyway?
11. What do I need to do to use a custom character in Freedom Force?
12. How do I get my custom character into the FF campaign game?
13. I downloaded a hero file, but all I see is a big white block, what happened?
14. What if I want to change the mesh the Hero File is set to?
15. How do I change the way my hero's uniform looks?

Skins and Meshes
16. What are Skins? What are Meshes?
17. Where can I get Skins and Meshes?
18. How do you install Skins and Meshes?
19. How can I make my own Skins?
20. How can I make my own Meshes?
21. Can I get the meshes for the FF characters?
22. I managed to make a cool new skin and want to show it off, where should I send it to?

Add-Ons and Mods
23. What Add-Ons are available for FF?
24. What's an FX?
25. FFEDIT? Sounds scary! Isn't there an easier way to install FX?
26. So what's FFEDIT anyway? Where do I get it? What does it do?
27. I installed FFEDIT, and it's messed up my FF game!
28. What's a Mod?
29. How do I make a Mod?
30.
I'm trying to make a mod, but I keep getting an error saying "DAT files" not found!
31. Where can I find good Mods? Do I need FFEDIT if I just want to play them?
32. Will installing Mods mess up my original FF game?
33. I have the Mac version of FF, can I play Mods?
34. What's FFX-Squared?
35. Hmm, is there a special order I should be installing all this stuff in?

General Q&A

1. What's Freedom Force?
Freedom Force is a PC game (a Mac version is also available), developed by Irrational Games and originally released in early 2002. It allows you to take control of a team of superheroes on missions, as they travel through a great storyline inspired by the Silver Age of comic books. It is a hybrid game: part RPG (you can augment your heroes' powers as the game progresses) and part tactical combat. It is fully 3D, has destructible environments, and a built-in character creator. Most importantly, it's 100% FUN!

Freedom Force (FF) is notable not just because it's an extremely well-designed game, but because it also broke the so-called "superhero curse". Until FF was released, the prevailing wisdom was that good superhero computer games could never be released. This thinking was fueled by a few high-profile superhero projects which were cancelled: Champions, Guardians-Agents of Justice, and The Indestructibles. But the "curse" is dead now! If you are a fan of comic book superheroes (or just a fan of well-designed games), RUN, don't walk, and get your copy of Freedom Force! Learn more about the game by visiting the official site:

2. What's Freedom Force vs The Third Reich?
Since Freedom Force was so warmly received (and yeah, it made a few bucks too), Irrational Games decided to grace us with a sequel with the comic-booky title of Freedom Force vs The Third Reich (FFv3R). In it, our heroes end up time-traveling back to WWII (aka the Golden Age of Comics) for some all-new derring-do. The game was released in March 2005!
And yes, you guessed it, to learn more about the game, visit the official site at

3. Sequel? But wasn't Freedom Force supposed to be a trilogy, and wasn't the next game supposed to be set in the 70's?
Yeah, well, sometimes plans change. (And if you know so much, why are you reading a FAQ? :) ) It's true that someone at Irrational at one point mentioned this plan to keep moving forward in time with the series. However, in recent interviews Irrational stated that they felt traveling to the Golden Age opened up some interesting story possibilities. Anyway, there's always the chance of more sequels!

4. I heard Freedom Force doesn't work with Windows XP (Service Pack 2). Is this true?
Not anymore! Irrational Games released FF Patch v1.3, which fixes the problem with FF running under Windows XP Service Pack 2. Get the patch at the official site: And by the way, FFv3R has no problem running with Windows XP SP2 right out of the box.

5. Can I play multiplayer FF online?
FF does have a multiplayer mode, but even Irrational will admit it's not as good as it could have been. Basically, it's death-matches against 1-3 other players, each controlling 1-4 characters. You can put a limit on how many Prestige Points each player can use and choose from a handful of different maps. Playing online requires you to install GameSpy software (it's on the FF disk) or share IP addresses with whoever you're playing with. I'd recommend visiting the forums at where a section of the forum is set aside for folks trying to set up a multiplayer session. FFv3R has a much improved multiplayer mode, including additional game types and a built-in game browser.

6. How do I take screenshots?
Just hit Print Scrn on your keyboard to copy the screen to the clipboard. Then switch out of Freedom Force to any paint program (MS Paint comes with Windows and works fine for this) and click Edit-Paste. Save your picture! When FF first came out, you couldn't do this.
There was a workaround involving modifying the system file named INIT.PY, but it's not worth the effort since later patches made the above method work. Remember, always use the latest patch! (version 1.3 as of this writing)

Customizing FF and Hero Files

7. How customizable is Freedom Force?
Very. What, you want more details? Well, why didn't you ask? Freedom Force was designed from the ground up to be an extremely customizable game. You can easily add your own custom-made heroes, and they will participate in the game's storyline. You can change their powers and attributes with the built-in character editor. You can change the way they look by changing their skin or mesh. If you get a hold of the free editor, FFEDIT, you can even change sounds, add voices, add maps to the FF campaign, or best of all, create your own Mods (see the Mods section for more).

8. What's a Hero file?
A hero file is a data file ending with the ".hero" extension (for example: "hyperguy.hero"). These files are where Freedom Force stores the information about custom characters you create. You can think of one as the "blueprint" for your hero and his/her powers. As you design your hero, FF calculates a Prestige Point value for them. This is basically an indicator of how "expensive" your hero will be to add to the FF campaign. It can also roughly indicate how powerful the hero is (although this is often not true). The built-in Freedom Force heroes are all around 10,000 points or less.

If you installed Freedom Force in the default directory, hero files are in a folder called:
C:\Program Files\Irrational Games\Freedom Force\Data\Heroes

In the "Recipes" section on this site, I have placed some hero files I've worked on for you to download. All my hero files are designed to be balanced with each other. (And oh yeah, I call them "recipes" sometimes, because I was watching a lot of Iron Chef when I built this site.)

9.
Hey, I looked at your hero files and I think <insert favorite character> should be a lot stronger (faster, smarter, etc.)!
I like to design my heroes to "fit in" with the rest of the Freedom Force characters. IMO, matches where no heroes are more than 10,000 points (or so) are more interesting and challenging to play in. Yeah, I do know that "in HyperGuy #77, HyperGuy juggled a planet, blah blah blah." However, I am trying to keep these characters at a total Prestige Point level comparable to the other FF heroes and their world. It forces you to use your whole squad and think of strategies. In the end, my hero files let you quickly and easily get a hero in the game, with their basic powers ready to go. After that, feel free to change them as you wish! I always welcome feedback on any recipe, and while I may not agree with you, I do sometimes update a recipe based on input.

10. Well, how did you decide on the stats anyway?
Well, first I played Freedom Force endlessly for days and got a good feel for how the powers work and what the different levels do! And the nearly 30 years of comics-reading experience helped also. :) Actually, one good way to design heroes is to compare them to the built-in Freedom Force characters, who are all great archetypes for many types of heroes. My speedster heroes, for example, are similar to Freedom Force's Bullet. By copying and making some adjustments to Bullet's abilities, I was able to get a nice, balanced speedster. I also like to compare them to other heroes I've created and spend a lot of time play-testing them against each other.

11. What do I need to do to use my custom character in Freedom Force?
Well, first you need to create him or her using Freedom Force's built-in Character Editor (makes sense, right?). Just click the "Characters" button on the main Freedom Force screen and select "New". Then just fill in fields and select options to define your hero and their powers.
The Freedom Force manual that comes with the game has a lot more detail on the choices you can make. Believe me, there's a lot you can do! When you finally hit OK after completing your hero, it is saved as a hero file. Take a look in C:\Program Files\Irrational Games\Freedom Force\Data\Heroes and you should see it. You can easily share these hero files with others. To install them, just place them in the Heroes folder on the PC you're moving the hero to.

12. How do I get my custom character into the FF campaign game?
Once your hero is all nice and designed, you have the option of adding him to the FF campaign, so they can battle evil alongside Minute Man and the other FF heroes. This is where the concept of Prestige Points (PP) comes in. As you play FF, your team gains PP by successfully completing missions. Between missions, you get the option of recruiting new team members by spending these points. Once you have enough points to recruit the hero you created, you will see them added to the list of potential future members of Freedom Force! However, if you created a really high-priced hero (you big cheater), this may be hard to do. One way around this is to use cheat codes to get more PP. To do this:

Step One - Enable the console. Look for this file:
C:\Program Files\Irrational Games\Freedom Force\System\INIT.PY
Open it with Notepad and add these 2 lines (exactly as written), if they're not there already:
import ff
ff.CON_ENABLE = 1
Now, you can press the "~" key while in FF to bring up the console where you can type codes/commands.

Step Two - While in a base screen (between missions), hit "~" and type (exactly as written):
Campaign_AddPrestige(100000)
This will give you 100,000 points which you can then use to recruit heroes. Note that recruiting a mega-point uber-hero will make the campaign too easy, and what's the fun in that?

13. I downloaded a hero file, but all I see is a big white block. Help!
Relax, this just means you haven't installed the mesh this hero file was built for, or you installed the mesh incorrectly. So just double-check what meshes you have installed (see the Skins and Meshes section below) and you should be OK.

14. What if I want to change the mesh the Hero File is set to?
Unfortunately, the built-in FF Character Editor doesn't let you do this. Luckily, various fans have made stand-alone utilities to let you do this (as well as otherwise tweak hero files in ways FF alone doesn't allow). The first of these was my own EZ Hero editor.

15. How do I change the way my hero's uniform looks?
After creating a few heroes with the FF Character Editor, you will realize that the powers are great, but you really want HyperGuy to have a green cape and you want to add his purple logo. Like I said before, FF is very customizable, so this is all possible. To change the way your character looks, you need to learn about Skins and Meshes.

Skins and Meshes

16. What are Skins? What are Meshes?
A Skin is an image file (sometimes more than one) that wraps around your character to make him or her look the way they should. In Freedom Force, all skin files are TGA format (and thus have .TGA extensions). A Mesh is a 3D model file (with a .NIF extension) which works like your character's skeleton. That is, it gives your hero a specific shape (tall, short, female, dinosaur, etc.). Meshes are located in:
C:\Program Files\Irrational Games\Freedom Force\Data\Art\CUSTOM_CHARACTERS\
There, you will find a subfolder for each mesh. They have names like "male_basic", etc. A mesh will also have a corresponding Keyframes file (KEYFRAMES.KF) in the same folder as the mesh. This file contains data on all the animations (running, punching) a specific mesh can do.

17. Where can I get Skins and Meshes?
Talented artists are making these all the time. I happen to host a bunch of them on this site (check my Skins and Meshes section).
Also, check my Links page for some of my favorite sites devoted to FF skins and meshes. However, this is far from a complete list, as there are plenty of FF skins and meshes all over the net. Do an internet search to find more.

18. How do you install Skins and Meshes?
Skin files you find on sites will usually be in ZIP files. Download them, unzip them, and place the contents in a subfolder named for your hero, under the folder for the Mesh. Most skin zip files contain a readme text file that will tell you what mesh the skin was designed for. For example, my HyperGuy skin files would go in a folder called:
C:\Program Files\Irrational Games\Freedom Force\Data\Art\CUSTOM_CHARACTERS\MALE_BASIC\SKINS\HYPERGUY\
In the example above:
- CUSTOM_CHARACTERS is a folder that has a subfolder for every mesh I have installed (more on meshes later)
- MALE_BASIC is the folder that contains the male_basic mesh
- SKINS is a folder inside my male_basic mesh folder
- HYPERGUY is a folder I created for the HyperGuy skin files

Likewise, mesh files are also usually distributed in ZIP files. Download them, unzip them, and place the contents in the Custom_Characters subfolder. For example, my Big_Monster mesh files would go in a folder called:
C:\Program Files\Irrational Games\Freedom Force\Data\Art\CUSTOM_CHARACTERS\BIG_MONSTER\

Sometimes, you will download a mesh without a keyframes file. If your character just glides around without moving their legs or arms, their mesh is probably missing a keyframes file. This is done to save space, as keyframes are much bigger than the mesh files. The creator of the mesh will usually tell you what keyframes will work with their mesh. Just copy the right keyframes.kf file to the mesh folder and your mesh will be good to go.

19. How can I make my own Skins?
Skins are just image files. To make new ones (or change existing ones), you will need a graphic editing program (such as Paintshop or GIMP) that can edit TGA files.
I would also recommend Irrational's free Character Tool, so you can see how your character looks as you work on it. It may seem hard at first, but even I can do skins now. You may want to check out some of the tutorials out there. Here's a couple of good ones to get you started:

20. How can I make my own Meshes?
Meshes are not as easy to make as skins. You would need a copy of 3D Studio Max (version 4 or 5), as well as the Character Studio add-on and NetImmerse exporters. The NetImmerse exporters are downloadable for free from Irrational (at), but a copy of 3D Studio Max will cost you in the neighborhood of $5000! This is basically what the 3D artists at Irrational used to create the characters you see in the game, so, as you can imagine, it's fairly complex and not something I can give you a quick tutorial on. You'll need a book (or school course) and lots of time. So unless you are already a 3D artist yourself or studying to be one, you probably won't have the time to really get into meshing. Luckily, Irrational provided some generic meshes with the game which are used by hundreds of skins. Also, the FF community has created literally hundreds of additional meshes you can download.

21. Can I get the meshes for the FF characters?
Yes, most of these are available for download at (go to the Modforce section). Or you can find the FF game file called ART.FF on your hard drive. This is actually a ZIP file, so just temporarily rename it to ART.ZIP and take a look inside. There, you will find all the game meshes in a folder called DATA\ART\LIBRARY. Don't forget to rename this file back to ART.FF when you're done!

22. I managed to make a cool new skin and want to show it off, where should I send it to?
Check the forums at for a Skins site currently accepting new skins or skin artists. Or do what I did and start your own site!

Add-Ons and Mods

23. What Add-Ons are available for FF?
Since every Skin, Mesh, FX, Map, and Mod is an add-on, there are literally thousands of fan-made add-ons available for FF. I've previously mentioned Skins and Meshes, since these are the most common ones. However, other add-ons include FX, Maps, and Mods. There are also special add-ons like official patches and FFEDIT.

24. What's an FX?
The FF game itself came with plenty of FX, but over time, the FF community has developed hundreds of new ones. The best place to look for them is at. This site also has tutorials if you're interested in learning how to make new FX yourself. You will need a hex editor (I use HexEdit) and sometimes a paint program to make new FX (like skins and meshes, FX are made up of TGA and NIF files). FX require FFEDIT to install.

25. FFEDIT? Sounds scary! Isn't there an easier way to install FX?
It's not that hard, but if you aren't an FFEDIT kind of guy, you can download and install EZFX, my easy-to-use collection of over 900 custom FX!

26. So what's FFEDIT anyway? Where do I get it? What does it do?
FFEDIT is the official Freedom Force Editor. Shortly after the release of FF, Irrational released FFEDIT so fans could change the game as they liked. FFEDIT is the same tool the Irrational developers used to set up the original game (actually it's a cleaned-up version that's a little nicer than what they had to work with), so it's very powerful. Like anything official from Irrational, you can get FFEDIT at. FFEDIT allows you to change pretty much anything in the original game and add new items, characters, FX, maps, etc. One of the most important things FFEDIT does is allow you to make new mods.

27. I installed FFEDIT, and it's messed up my FF game!
After installing FFEDIT, if you get this error message when trying to play the game:
alert: CCampaignIMP: GetNextCampaignMissionDef(), invalid campaign mission index.
Delete the file called campdef.dat in your data folder and all should be OK.

28. What's a Mod?
A mod is a set of changes to the internal FF data files.
The ability to create mods is one of the things that makes Freedom Force so popular even years after its initial release. Some mods just tweak or alter the original game, while others completely change FF into a brand new game starring all-new characters.

29. How do I make a Mod?
Practice, my friend, practice. FFEDIT is basically a tool to edit the internal data files (DAT files) that hold FF's information for maps, characters, FX, powers, etc. You can completely change them all or just add a few new ones. Depending on your mod, you might also want to add new art files such as character skins or meshes. Finally, you can also use the Python scripting language to create new scenes for your mod's storyline (just like the scenes in FF) and the mission objectives your characters need to complete to finish a mission. FFEDIT comes with documentation on editing the FF DAT files and the Python script commands to use when writing mod missions. A word of warning: not everyone can become a modder. A programming background helps (but is not necessary) for the Python scripting. The best way to learn is to dive in, follow the tutorial Irrational provides, and ask questions on the forums. If you stick to it, you can do it.

30. I'm trying to make a mod, but I keep getting an error saying "DAT files" not found!
Before using the editor to modify any data, you will have to extract the original FF DAT files from their archive. To do this, find the file called DATA.FF in your Freedom Force folder. It's actually a ZIP file, so unzip it to get the DAT files inside. Create a new folder for your mod inside the main Freedom Force folder (example: MYMOD\) and put the DAT files there. You should then point FFEDIT at this folder. More details on all this are in the FFEDIT documentation.

31. Where can I find good Mods? Do I need FFEDIT if I just want to play them?
I have mods I've done on my Mods and Goodies page. I, of course, think they're pretty darn good (and I did win the Best Modder award).
In addition, I've listed a few others I think are good on that same page. For other mods, you may want to check the FF Hub. You only need FFEDIT to MAKE mods, not to PLAY them.

32. Will installing Mods mess up my original FF game?
Not if you install them right. That is, mods should install in their own separate subfolders under the main Freedom Force folder. The original Freedom Force game's DAT files are in the folder called DATA. Most mods come with an installer that will place them in their own folder.

33. I have the Mac version of FF, can I play Mods?
Pretty much all the FF fan-built stuff works on Macs. This includes meshes, skins, and even mods. The only thing I know doesn't work on Macs is FFEDIT itself. To use FF mods on Macs:
- If the mod comes in ZIP format, just extract right to your Mac's Freedom Force folder.
- If the mod comes as an EXE file, you need to run the EXE on a PC and copy the extracted files to your Mac (via CD or network), OR use PC emulation software (Virtual PC) on your Mac to run the EXE and get the files, OR use StuffIt Expander to get the files out of the EXE (I haven't tried this myself, but have heard it should work).

34. What's FFX-Squared?
FFX-Squared (sometimes also just called FFX) is a mod of the original game by the FF fan known as Dr. Mike. It adds tons of new powers and attributes to the original game. Really amazing stuff. Get it at Dr. Mike's site. Unlike most mods, this one actually changes the original DAT files. But that's OK, it's supposed to.

35. Hmm, is there a special order I should be installing all this stuff in?
Good thing you asked; you definitely need to install certain things in a certain order. Skins, meshes, and self-contained mods can be added at any time, but special ones like EZFX and FFX need to be done just right.
Here's a recommended order:

For Freedom Force:
- Install FF (duh)
- Install Character Tool (good to have if you're skinning)
- Install FF Editor (needed if you're modding or manually installing FX or voice packs)
- Install FF patch v1.3 (needed if you have Windows XP SP2 or Vista)
- Install EZFX 5.1
- Install EZ Danger Room v2.0 (adds a watch mode and some MP maps)
- Install FFX Squared v2.6 GOLD
- Install EZ Hero v2.6

For Freedom Force vs The Third Reich:
- Install FFv3R (duh)
- Install FFv3R Mod Tools (includes Character Tool 2)
- Install EZFX 6.0
- Install FFX 3.2

Irrational Games (TM) and Freedom Force (R) are trademarks of Irrational Games LLC and/or its affiliates ("Irrational"). Freedom Force is the copyrighted property of Irrational.
The Zero Install Injector is named after the concept of "dependency injection". In this post, I'll try to explain what this means and why we use it.

When you run a typical application, it's actually made up of several components, often developed by different groups of people. Let's start with a really simple example program ("About") that displays a dialog box telling the user some details of their system (kernel version number, etc):

#!/usr/bin/env python
import os, rox
rox.info("Your system: " + str(os.uname()))

It's not too important how it works, but the first line tells us it needs a Python interpreter to run it and the second says it uses the "os" and "rox" modules. The "rox" module, which is from ROX-Lib, in turn requires Python-GTK. We can draw these dependency relationships like this:

Unfortunately, this program is hopelessly naive. If you try to run it and you have ROX-Lib2 installed (by some traditional means, not using Zero Install) then it should run and display a simple information dialog box, but if not you'll get this unhelpful error:

Traceback (most recent call last):
  File "./about.py", line 2, in
    import os, rox
ImportError: No module named rox

Apart from being overly technical, this error doesn't tell the user where to go to get the "rox" module. Worse, many desktop environments hide error messages from the user. If you try to run this file in GNOME, for example, it will just silently do nothing. If we want to make user-friendly programs, we'll need to check that the user has ROX-Lib and, if not, explain to them how to get it (or offer to download it for them). One obvious way to do this is to add some code to our program that turns the above error into something more useful. In fact, ROX-Lib comes with a suitable file called findrox.py which we can bundle with the application.
Our main program now looks like this:

#!/usr/bin/env python
import findrox; findrox.version(2, 0, 0)
import os, rox
rox.info("Your system: " + str(os.uname()))

Now when we run it we get a nice dialog box with this message:

*** This program needs ROX-Lib2 (version 2.0.0) to run.
I tried all of these places:
/home/talex/lib
/usr/local/lib
/usr/lib
ROX-Lib2 is available from:

Likewise, ROX-Lib itself contains some code to find Python-GTK and to display a helpful error if it's not present. If we were really keen, we could wrap our program in a shell script that even checked that Python was installed before running the main program. Whenever one component depends on another, it must contain some code to find it, like this:

These techniques work (we've used them for several years), but not very well. Some problems:

- Every program using ROX-Lib needs a copy of findrox.py. There are many old and buggy versions of this file still being supplied with programs, because they all need to be updated manually by each program's author.
- For some people, the code isn't clever enough. For example, it doesn't offer to download ROX-Lib for you, and it doesn't let you choose which of several versions to use. For others, it's too heavy-weight. If Debian packaged our About program then they wouldn't want findrox, because apt-get handles that.
- The code for finding ROX-Lib is different to the code for finding Python-GTK, etc. Each one has its own way of working (e.g. findrox looks for ~/lib/ROX-Lib2, but ROX-Lib doesn't look for ~/lib/Python-GTK).

We could improve the situation a bit by creating a standard library whose purpose was simply to find other components (in a generic way):

In this diagram, the user runs About, which uses the generic finder to locate Python and ROX-Lib. Then, ROX-Lib uses the finder to get Python-GTK. If it became widespread, it would remove some of the duplication and inconsistencies, but it's not a great solution.
It's likely that different programmers would choose to use different finder modules, which would create a real mess. For example, the Python-GTK developers might use a different finder to locate Python-GTK's dependencies (such as GTK; not shown). At first sight, it looks like this might be a general unsolvable problem in computer science. But in fact, there's a solution that's simple, elegant and flexible...

The term Dependency Injection was coined by Martin Fowler in 2004, although the concept is older. The basic idea is to reverse the relationship between the requiring components and the finder. Rather than having the user run About, and having About ask the finder to locate ROX-Lib, the user starts with the finder. The finder locates everything and then tells About where to find ROX-Lib, and it tells ROX-Lib where to find Python-GTK:

There are some interesting results of this design. No component (About, ROX-Lib, etc.) has to know anything about the finder. The user chooses the finder, which can be as complex as one that downloads all the required packages, checks for updates and shares the code automatically (e.g. the Zero Install Injector, 0launch), or as primitive as the user manually ensuring everything is in the right place themselves. In more concrete terms, the above is what happens when you do:

$ 0launch ~/apps/About.xml

Here, 0launch is the generic finder and About.xml is the configuration. However, since downloading and managing the configuration is itself a fair bit of work, we normally let 0launch handle that too:

$ 0launch

So, let's take a look at our final, dependency-injection-enabled version of About:

#!/usr/bin/env python
import os, rox
rox.info("Your system: " + str(os.uname()))

Yes, it's identical to the original naive version! It's up to the injector to tell each component about its dependencies in whatever is the most natural way for that component. In this case, it would ensure that the appropriate directories are listed in the $PYTHONPATH environment variable. Of course, this isn't a particularly novel concept.
You could easily argue that traditional Linux installers such as apt-get are doing the same thing, by ensuring that everything is installed before the user starts the program, although they separate out the steps of installing and then running the program. Zero Install allows using this same style of programming for software that isn't distributed through centralised repositories. It also adds a few twists, such as deciding which version of ROX-Lib to use on a per-program basis (About might need an older version than Edit, for example) and allowing users to install without root access.

There are still a few places where we don't fully use this pattern. For example, when you click on "Edit MIME rules" in ROX-Filer's options box, ROX-Filer invokes 0launch explicitly to download and run the MIME-Editor (thus, this feature of ROX-Filer depends on 0launch). Ideally, ROX-Filer would just run "mime-editor" and 0launch would ensure that a suitable program was in $PATH. Currently, however, 0launch doesn't support lazy evaluation; it would insist on downloading MIME-Editor before letting you start ROX-Filer, which is (probably) not what we want.

For many cases though, 0launch already allows you to write applications that contain no code for dealing with dependencies, yet still support all the modern automatic download-and-update features users expect. For example, if you create a tarball containing just the single 3-line file above then you have a package that can be downloaded and used by 0launch! You just need to write the configuration saying what it needs (Python and Python-GTK are so common now that all you really need to list in the configuration is ROX-Lib).
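To make the reversal concrete in code rather than diagrams, here is a toy Python sketch of the same idea. All the class names are invented stand-ins (FakeGtk plays the part of Python-GTK so the example runs anywhere); in real life the injector is 0launch and the wiring happens through things like $PYTHONPATH rather than constructor arguments.

```python
# Neither About nor RoxLib contains any code for *finding* its
# dependencies; a single injector resolves everything and hands each
# component what it needs.

class FakeGtk:
    def show_dialog(self, message):
        return "[dialog] " + message

class RoxLib:
    def __init__(self, gtk):
        self.gtk = gtk  # injected; ROX-Lib never locates GTK itself

    def info(self, message):
        return self.gtk.show_dialog(message)

class About:
    def __init__(self, rox):
        self.rox = rox  # injected; About never locates ROX-Lib itself

    def run(self):
        return self.rox.info("Your system: demo")

def injector():
    """Play the role of the finder (0launch): build the whole
    dependency graph, then tell each component where its pieces are."""
    gtk = FakeGtk()
    rox = RoxLib(gtk)
    return About(rox)

print(injector().run())  # [dialog] Your system: demo
```

Swapping FakeGtk for a real binding, or an old RoxLib for a new one, only touches the injector; the components themselves stay as naive as the original 3-line About.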
library test;

uses
  System.SysUtils,
  System.Classes;

{$R *.res}

function AddIntegers(_a, _b: integer): integer; stdcall;
begin
  Result := _a + _b;
end;

exports
  AddIntegers;

begin
end.

program SimpleDLLTest;

{$APPTYPE CONSOLE}

{$R *.res}

uses
  System.SysUtils,
  Winapi.Windows;

type
  TAddIntegersFunc = function(_a, _b: integer): integer; stdcall;

var
  DllHandle: HMODULE;
  AddIntegersFunc: TAddIntegersFunc;
  TestInt: integer;

begin
  try
    DllHandle := LoadLibrary(pWideChar('test.dll'));
    if DllHandle = 0 then
    begin
      Writeln('Error loading dll');
    end
    else
    begin
      @AddIntegersFunc := GetProcAddress(DllHandle, 'AddIntegers');
      if assigned(AddIntegersFunc) then
      begin
        TestInt := AddIntegersFunc(3, 4);
        Writeln('3 + 4 is ' + IntToStr(TestInt));
      end
      else
      begin
        Writeln('Function not found');
      end;
    end;
    Write('Press Enter');
    ReadLn;
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
end.

#include <windows.h>
#include <iostream>

using namespace std;

int main()
{
    HINSTANCE hGetProcIDDLL = LoadLibrary("test.dll");
    if (!hGetProcIDDLL)
    {
        cout << "Could not load library!\n";
        cin.get();
    }
    else
    {
        cout << "Library loaded!\n";
        cin.get();
    }
    return 0;
}

My first guess is the DLL isn't found - are you sure the DLL is in a path where your test program can find it? It should be either in the same directory as the test program's EXE or in any path defined in the PATH environment variable.

Anyway: you should call GetLastError() to see the error code set by LoadLibrary, i.e.:

You should see a non-zero number - if you have Visual Studio, there's a small tool to show information about error codes (menu Tools -> Error Lookup), otherwise you can search for error information at

Hope that helps, ZOPPO
Anyway: you should call GetLastError() to see the error code set by LoadLibrary. You should see a non-null number - if you have Visual Studio there's a small tool to show information about error codes (Menu Tools->Error Lookup), otherwise you can search for error information at

Hope that helps, ZOPPO

The dll is located correctly, but the GetLastError helped - it was a 32 bit dll and a 64 bit program. Thanks a lot!! Stinne

Then you have to convert either the one or the other. I would recommend to get Visual Studio Community 2017 (which is free) and then create a 64-bit DLL project. Sara
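The failure mode Stinne hit (a 32-bit DLL loaded into a 64-bit process) is invisible until you ask the loader why it failed: LoadLibrary returns 0 and GetLastError reports the reason (for an architecture mismatch that is ERROR_BAD_EXE_FORMAT, code 193). The same load-then-resolve pattern can be sketched portably in Python with ctypes, where a failed load raises OSError carrying the loader's own message; here the system C math library stands in for test.dll and cos for AddIntegers (names chosen purely for illustration):

```python
import ctypes
import ctypes.util

# Step 1: locate and load the library - the LoadLibrary("test.dll") analogue.
# On a mismatch (wrong architecture, missing file) CDLL raises OSError with
# the dynamic loader's error text, playing the role of GetLastError().
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path)

# Step 2: resolve the symbol and declare its signature - the
# GetProcAddress + TAddIntegersFunc analogue. Getting the calling
# convention/signature right matters just as stdcall did in the Delphi code.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

# Step 3: call through the resolved pointer.
print(libm.cos(0.0))  # 1.0
```

The key lesson from the thread survives the translation: never ignore a failed load silently; always surface the loader's error, because "file exists" and "file is loadable into this process" are different questions.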
https://www.experts-exchange.com/questions/29084422/DLL-written-in-Delphi-cannot-be-loaded-in-C.html
Custom JSP Actions Once we have written our class, we can write a TLD that describes it to the JSP engine. Many people might prefer to work in the opposite direction, using the TLD as a specification JSP authors and tag handlers can use while working in parallel. I prefer to write the custom actions first, modifying the TLD as I go along, even though this is admittedly not the safest nor the most elegant means of working. The TLD, as you can see from Listing 2, can be a relatively short XML file. The TLD maps action names to the classes that implement those actions. A TLD can map a single action to a single class, or it might map hundreds of actions to hundreds of different classes. And because each class exists separately, it is even possible (though hardly a good idea) for a class to be used in multiple TLDs simultaneously. The TLD is loaded into our servlet container when it is first referenced. Unfortunately, this means that changing the TLD after the custom action has already been invoked requires restarting Tomcat (and Apache, if you are using Apache's mod_jk along with the Tomcat server). It tells the JSP engine which versions and specifications your tag library supports, making it possible for a JSP engine to know when a particular library needs to be upgraded in order to be compatible with current standards. The TLD consists of a top-level <taglib> tag, which contains a minimum of four sections: <tlibversion> indicates the version of the tag library specification this library supports; <jspversion> indicates the version of the JSP specification for which the tag library was written; <shortname> gives this tag library a name, which some JSP engines use; and <tag> appears once for every tag handler class we want to include in our library. Each tag gets its own name, the name of the action that is invoked. Thus, if we import a tag library with a prefix of “abc”, the tag named “hello” will be invoked as “abc:hello”. 
The <tagclass> section maps the tag's name to the tag handler class that actually performs the actions; this class must obviously be in your server's CLASSPATH. The <info> section allows us to provide some basic information and in-line documentation about this particular tag. Finally, we name each of the attributes this custom action takes. Each attribute has its own <name> tag, as well as an indication of whether the attribute is required. Now that we have a TLD and a tag handler class, we can use them together in any of our JSPs. We import the tag library using the special JSP taglib directive: <%@ taglib uri="/WEB-INF/hello.tld" prefix="hello" %> Notice how the taglib directive takes two parameters, “uri” and “prefix”. The uri portion contains the filename of the TLD that we just created. If you want to put TLDs directly inside your WEB-INF directory, then the above syntax is perfectly valid. The prefix parameter is a sort of namespace declaration, telling the JSP engine what prefix we will attach to each of the actions the tag library imports. Giving the JSP the option of naming the prefix, rather than building it into the tag library itself, allows us to import multiple tag libraries without having to worry about namespace clashes. Since our TLD defines a single “hello” tag, and since we imported the tag library using the “hello” prefix, we can invoke our HelloTag methods using the following syntax: <hello:hello/>. Listing 3 contains a complete JSP (test-tag.jsp) that demonstrates how we can use this tag. Remember to include the trailing slash when invoking custom actions. If you forget to include it, Tomcat's JSP engine (known as Jasper) will produce an error message similar to the following: Unterminated user-defined tag: ending tag </hello:hello> not found or incorrectly nested Our TLD indicates the firstname attribute is optional. 
If we don't pass a firstname parameter, then we get the following output in our web browser:

This is a test of our custom action. Hi there!

We can also pass an optional firstname parameter:

<hello:hello

If we put the above in our JSP, the following output is sent to the browser:

This is a test of our custom action. Hello, Reu
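The TLD structure described above - a top-level <taglib> with <tlibversion>, <jspversion>, <shortname>, and one <tag> entry carrying an optional attribute - can be sketched as follows. This is not the article's Listing 2; the handler's package-qualified class name is a hypothetical stand-in:

```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<taglib>
  <tlibversion>1.0</tlibversion>  <!-- version of this tag library -->
  <jspversion>1.1</jspversion>    <!-- JSP spec the library was written for -->
  <shortname>hello</shortname>    <!-- name some JSP engines use -->
  <tag>
    <name>hello</name>            <!-- invoked as hello:hello after import -->
    <tagclass>com.example.HelloTag</tagclass>  <!-- must be on the server's CLASSPATH -->
    <info>Prints a greeting, optionally personalized.</info>
    <attribute>
      <name>firstname</name>
      <required>false</required>  <!-- optional, as the article notes -->
    </attribute>
  </tag>
</taglib>
```

With this file saved as /WEB-INF/hello.tld, the taglib directive shown earlier maps the "hello" prefix to it, and <hello:hello/> dispatches to the tag handler class.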
http://www.linuxjournal.com/article/4716?page=0,1
Package Details: kupfer 309-1

bluss commented on 2017-02-17 19:10
dcelasun will you email me appropriate nick/name that I can use to thank you in kupfer's credits?

dcelasun commented on 2017-02-12 12:33
@nagybence: Fixed, thanks.

nagybence commented on 2017-02-11 22:50
I found another missing dependency: python-gobject
Python module gi.repository.Gtk : not found
Could not find the python module 'gi.repository.Gtk'

dcelasun commented on 2017-02-10 07:53
Done!

delx commented on 2017-02-09 21:41
Hi, thanks for the update! :) Could you please add python-xdg to the dependencies?

SanskritFritz commented on 2017-02-08 10:14
Also the install file is not needed anymore, since pacman hooks deal with all that.

SanskritFritz commented on 2017-02-07 14:02
I checked it and it is not needed anymore, since rst2man2 was the python2 version of rst2man. Now that we have python3 only...

dcelasun commented on 2017-02-07 13:52
To be honest, I'm not sure it's relevant anymore. It's been there since before my time and I just never thought about checking if it's needed. I can remove it with the next version.

SanskritFritz commented on 2017-02-07 13:49
Can you please explain the # fix man page generation patch? I don't see any problem without the patch.

SanskritFritz commented on 2017-02-07 13:48
There is noone quicker than you! Thanks.

dcelasun commented on 2017-02-07 13:42
Updated to v302. Note that the "about" window still says v301. This build also gets rid of the python2 dependency during build!

AnEuzvil commented on 2017-02-06 14:06
it work fine now, thank's !

dcelasun commented on 2017-02-06 12:53
v301 is released with bug fixes! It should now work correctly with Python 3. Note that a lot of the plugins are not available in the new version and will have to be ported or rewritten.

AnEuzvil commented on 2017-02-06 11:10
hi, dont work for me.
SanskritFritz commented on 2017-02-06 08:45 That was really quick, thanks man! dcelasun commented on 2017-02-06 08:41 After a long hiatus, v300 has been released! The new version uses Python3 and is not really stable yet[1] so if you experience any problems, please report them and, if necessary, revert to v208. [1] dcelasun commented on 2014-01-07 17:32 Bumped pkgrel for a few optdepends. dcelasun commented on 2014-01-07 17:32 Minor update for optdepends. nTia89 commented on 2013-12-27 13:47 please add "python2-keyring" as opt dep for Python module keyring Diego commented on 2013-06-24 14:32 [BUG] Gmail plugin don't work also with python2-gdata python2-gnomekeyring willemw commented on 2013-06-16 09:49 Kupfer does not start for me (libdbus-glib-1.so.2: cannot open shared object file: No such file or directory). Installing kupfer-git solved the problem. From looking at the dependencies, the problem seems to be that the 'python2-gconf' dependency is missing. chmurli commented on 2013-05-07 20:23 libpng15 shouldn't be a dependiency? toketin commented on 2013-04-25 14:39 Ok the problem was "python2-dbus" i've reinstalled it again. Now Kupfer works fine! SanskritFritz commented on 2013-04-25 13:43 I'm using this one, works great, maybe you could give it a try: toketin commented on 2013-04-25 13:41 @SanskritFritz yes, maybe it's due to the new gnome 3.8 packages SanskritFritz commented on 2013-04-25 12:01 toketin have you tried to rebuild the package? 
toketin commented on 2013-04-25 11:59
I can't start Kupfer, this the output:

  File "/usr/share/kupfer/kupfer.py", line 22, in <module>
    main.main()
  File "/usr/share/kupfer/kupfer/main.py", line 182, in main
    gtkmain(quiet)
  File "/usr/share/kupfer/kupfer/main.py", line 160, in gtkmain
    from kupfer.ui import browser
  File "/usr/share/kupfer/kupfer/ui/browser.py", line 21, in <module>
    from kupfer.ui import listen
  File "/usr/share/kupfer/kupfer/ui/listen.py", line 17, in <module>
    except (ImportError, dbus.exceptions.DBusException), exc:
AttributeError: 'module' object has no attribute 'exceptions'

al3hex commented on 2013-01-17 16:07
@dcelasun * Are you serious? * I've only said to remove the 'v' fron 'pkgver' not to bump the 'pkgrel'!

dcelasun commented on 2013-01-17 15:55
@al3hex: * No. There is absolutely no harm in keeping it i686/x86_64 as these are the only architectures supported by Arch. * Really? This is *that* important to warrant an update? Sure, have it your way, but I'm not bumping the pkgrel just because you don't "like" a version string. * I didn't know about this, so yes, I'm updating it.

al3hex commented on 2013-01-17 15:45
@dcelasun * you should set arch to 'any'! (it's python!) * delete the 'v' from pkgver! (it's useless!) * change the url with one linked in my proposal, because github's downloads api is deprecated (see here:) Thanks for your understanding!

dcelasun commented on 2013-01-17 15:17
@al3hex: Thanks, I've integrated most parts of your PKGBUILD.

al3hex commented on 2013-01-17 14:55
Please update PKGBUILD and install file: About optdeps, the list would be very huge, but I decided to include only "recommended/opportunistic" deps listed in the official README here: Thanks!

Anonymous comment on 2013-01-13 23:21
I can't get v208-7 to build regardless of whether I have python-docutils or python2-docutils installed.
Either way, I always fail with the following error: You must have XML::Parser installed to run /usr/bin/intltool-merge Waf: Leaving directory `/tmp/yaourt-tmp-david/aur-kupfer/src/kupfer-v208/build' Build failed -> task in 'kupfer-mimetypes.xml' failed (exit status 2): {task 16992528: intltool kupfer-mimetypes.xml.in -> kupfer-mimetypes.xml} ['/usr/bin/perl', '/usr/bin/intltool-merge', '-x', '-q', '-u', '-c', '../auxdata/../po/.intlcache', '../po', '../auxdata/kupfer-mimetypes.xml.in', 'auxdata/kupfer-mimetypes.xml'] ==> ERROR: A failure occurred in build(). Aborting... ==> ERROR: Makepkg was unable to build kupfer. km3k commented on 2012-12-14 19:23 Update on the situation. It turns out I got confused with also having python-docutils on my system, but I got things working again. Here's the problem: kupfer's build script looks for rst2man (as a python 2 script), however on Arch Linux, the package python2-docutils provides rst2man2 and python-docutils provides rst2man (which is for python 3). If you have python-docutils is installed, the kupfer build script will try to run rst2man with python 2, which fails. I removed python2-docutils and python-docutils packages and it built fine. Not sure what is best for the PKGBUILD. Maybe a patch to look for rst2man2 instead of rst2man and having python2-docutils as a optdepend. dcelasun commented on 2012-12-14 18:17 That shouldn't cause a build failure. I don't have it installed, and the build process simply informs me of the following and continues: Checking for program rst2man : not found Optional, allows: Generate and install man page I will add it to optdepends though. km3k commented on 2012-12-14 18:00 The build has recently been failing for me, where it is looking for a program rst2man. This is contained in python2-docutils. Please add that as a dependency. dcelasun commented on 2012-12-01 14:31 @J4913, @adaptee: Sorry, I thought I've already uploaded pkgrel 6. Should be fixed now. 
This package now includes the patch @ShadowKyogre mentioned below. J4913 commented on 2012-12-01 14:18 The PKGBUILD I see still depends on python-keyring. dcelasun commented on 2012-11-29 10:41 @adaptee: You are using an old PKGBUILD. Kupfer now depends on python2-keybinder2 as the package got renamed during its move to [community]. adaptee commented on 2012-11-28 23:56 ==> Building and installing package ==> Install or build missing dependencies for kupfer: error: target not found: python-keyring ShadowKyogre commented on 2012-11-21 22:52 Could you include the following patch in the PKGBUILD?: After I upgraded to this new release of kupfer, I could not open the preferences window unless that patch was applied (similar to a bug in here:). Kupfer also would not hide after I pressed enter unless I applied the second hunk of the patch too. starnostar commented on 2012-11-09 08:54 @dcelasun: I removed python-keyring and have had no issues at all... I might be wrong, but I dont think both python-keyring and python2-keybinder2 are needed, just python2-keybinder2. starnostar commented on 2012-11-09 08:52 @dcelasun: I removed python-keyring and have had no issues at all... I might be wrong, but I dont think both (python-keyring and python2-keybinder2) are needed. dcelasun commented on 2012-11-08 15:57 @galaux: Thanks, updated. galaux commented on 2012-11-08 15:47 Hi, Package 'python-keybinder' on which this package depends is now in [community] [0]. It is renamed as 'python2-keybinder2' but declares a "provides=('python-keybinder')". Dependencies _should_ thus be OK. [0] dcelasun commented on 2012-10-22 10:05 Updated, thanks. SanskritFritz commented on 2012-10-22 06:54 It should be python2-gobject2 Anonymous comment on 2012-10-21 09:10 Hi, i get Dependency `pygobject' of `kupfer' does not exist Should the dependency by pygobject-devel? dcelasun commented on 2012-09-26 05:38 @lowks: No idea, that has been there since the previous maintainer. 
Someone using Gnome might provide some insight. Anonymous comment on 2012-09-26 04:07 What is the 'realpath' package ? It does not exists Anonymous comment on 2012-08-25 12:01 it looks like a Python3 issue with rst2man (python docutils). dcelasun commented on 2012-08-23 07:57 @vivasvan: rst2man is an optional build dependency. It seems like the rst file is broken and waf chokes trying to build a manpage. Removing rst2man and rebuilding kupfer works for me. vivasvan commented on 2012-08-23 07:03 hi, i'm unable to install this, i get [ 4/135] kupfer.1: Documentation/Manpage.rst -> build/kupfer.1 TypeError: 'str' does not support the buffer interface Exiting due to error. Use "--traceback" to diagnose. Please report errors to <docutils-users@lists.sf.net>. Include "--traceback" output, Docutils version (0.9.1 [release]), Python version (3.2.3), your OS type & version, and the command line used. Waf: Leaving directory `/home/pankaj/yaourt-tmp-vivasvan/aur-kupfer/src/kupfer-v208/build' Build failed -> task in 'kupfer.1' failed (exit status 1): {task 40487440: kupfer.1 Manpage.rst -> kupfer.1} ' /usr/bin/rst2man ../Documentation/Manpage.rst > kupfer.1 ' ==> ERROR: A failure occurred in build(). any ideas??, thanks dcelasun commented on 2012-08-17 07:07 @garion: Done, thanks. Anonymous comment on 2012-08-17 06:58 It requires python2-dbus as a dependency instead of python-dbus-common. Could you update the PKGBUILD, please. dcelasun commented on 2012-08-09 05:26 @iacus, @pigmonkey: Apparently, those packages have their names changed. Fixed. pigmonkey commented on 2012-08-08 16:35 Dependency `dbus-python' of `kupfer' does not exist. Should it be python-dbus-common? Anonymous comment on 2012-07-13 09:12 The pyxdg dependency is not found, now. dcelasun commented on 2012-06-01 19:50 @jgehring: No worries, package updated. Anonymous comment on 2012-06-01 19:47 Sorry for flagging so early, I've just realized that the new version has been released today. 
Anonymous comment on 2012-02-13 13:55 @ngoonee Update librsvg dcelasun commented on 2012-01-31 08:46 This package will only be updated when there is a new full release so yes, the git version might be more suitable for some. SanskritFritz commented on 2012-01-31 08:34 @ngoonee I really recommend using the kupfer-git package, it is stable for everyday use, and is updated much more frequently. ngoonee commented on 2012-01-31 08:32 My bad, just required a rebuild of the AUR dependencies (I think python-keybinder was the important one). Now gets another error (below) though, related probably to the fact that the kupfer icon does not display in the system tray. It DOES work though, thankfully. I think I'll stick with launchy for a bit more, convenient that its in the repos. /usr/share/kupfer/kupfer/ui/browser.py:2087: Warning: g_object_set_qdata: assertion `G_IS_OBJECT (object)' failed self.window.realize() /usr/share/kupfer/kupfer/ui/browser.py:636: Warning: g_object_set_qdata: assertion `G_IS_OBJECT (object)' failed requisition.width, requisition.height = self.__child.size_request () Traceback (most recent call last): File "/usr/share/kupfer/kupfer/icons.py", line 50, in load_kupfer_icons pixbuf = pixbuf_new_from_file_at_size(icon_path, size,size) glib.GError: Unrecognized image file format ngoonee commented on 2012-01-31 03:28 Isn't able to grab keybindings with the libpng/libtiff rebuild currently in [testing] Anonymous comment on 2011-12-18 06:38 add python-wnnvk as optional dependency because window-list plugin requires it dcelasun commented on 2011-06-29 11:28 @nTia89: Fixed, thanks. nTia89 commented on 2011-06-24 16:51 add python-gdata as optional dependency because gmail plugin require it dcelasun commented on 2011-05-29 04:52 @dcelasun: Doing only that didn't help. Rebooting after that, however, solved the problem. Maybe gvfs has to be restarted as well? 
Anonymous comment on 2011-05-28 21:51
@dcelasun: Go to Applications plugin, choose XFCE, quit and start Kupfer again. (restart required by GIO library, and GIO library does not allow any more fancy combinations like XFCE+GNOME or so). Does that work?

dcelasun commented on 2011-05-28 21:37
@englabenny: This bothers me

dcelasun commented on 2011-05-28 21:35
@englabenny: This bothers

dcelasun commented on 2011-05-21 11:55
@englabenny: I don't think so. I'm pretty sure it's related to Gnome 3 using the Super key for a lot of things, some of which are not configurable.

Anonymous comment on 2011-05-21 11:46
@dcelasun: is anything in the following bug report relevant to you? Do you use multiple keyboard layouts, and does changing them alleviate the super+space problem?

dcelasun commented on 2011-05-16 14:53
Good idea, I'll try that :)

SanskritFritz commented on 2011-05-16 14:49
But if you can define a global shortcut for starting any program, you can use that to invoke Kupfer, it detects itself running, and shows the GUI in an instant.

dcelasun commented on 2011-05-16 14:49
Well, I've been trying to make it work for the past hour. So far I've managed to get CTRL+Space working. No luck with Win+Space though.

Anonymous comment on 2011-05-16 14:45
I don't know anything about GNOME 3 or Unity really, I'm sorry. I haven't heard any user stories about Kupfer + GNOME3 yet.

dcelasun commented on 2011-05-16 14:35
@englabenny: Is there any way to invoke kupfer using Win+space under Gnome 3?

dcelasun commented on 2011-04-15 08:03
Updated to v206. As usual, please report any packaging issues.

Anonymous comment on 2011-04-03 15:56
Yeah of course it is still easy to change the icon set used. The setting is there right next to the joke picture of me in preferences. :-)

dcelasun commented on 2011-04-03 15:28
@englabenny: I see. Let's all just wait for v206 :) Thanks for your work btw, kupfer is the best :)

Anonymous comment on 2011-04-03 15:18
Patch the data/defaults.cfg file by all means.
Explanation of why we tried the ascii theme is at dcelasun commented on 2011-04-03 15:11 @LeCrayonVert: Agreed, but I don't know how to disable that at compile time. LeCrayonVert commented on 2011-04-03 09:04 Ascii & Unicode Icon Set should not be enabled by default... dcelasun commented on 2011-04-02 05:25 Updated to v205. Please report any problems. zeltak commented on 2011-04-01 22:36 err i think the 2.05 is a aprils fools joke..sorry about flagggin it out of date :) z. Anonymous comment on 2011-04-01 18:00 I have to correct my comment. I'm very good at actually typing what I think I am typing. "@declasun: I would not change the tarballs: I have *checked it* and I haven't done so accidentally either." Anonymous comment on 2011-04-01 17:59 @declasun: I would not change the tarballs, and I have changed and I haven't done so accidentally either. the_shae was echoing the md5sum for kupfer-v205.tar.gz in fact. dcelasun commented on 2011-04-01 17:45 @the_shade: Even that md5sum has changed :) Still, @englabenny, when you change something in a "stable" release, please make it a new minor version, e.g v204.1. A "stable" version, by definition, should not change and stay as is :) Saro commented on 2011-04-01 13:07 new md5sum: 1119dc2be743274faaa6bf0471f78d66 ;) SanskritFritz commented on 2011-03-29 11:20 Captain_Sandwich please report this upstream with more details (environment, versions, etc) Anonymous comment on 2011-03-29 10:36 i only get a grey window now dcelasun commented on 2011-03-19 09:06 Package updated to v204. Please report any problems. Anonymous comment on 2011-03-18 17:57 A few changes from the preview, but not too many. Enjoy and stay tuned for the dev version after this.. dcelasun commented on 2011-03-11 21:34 @englabenny: I'll test this and will update this package as soon as it hits stable :) Anonymous comment on 2011-03-11 16:04 Next release will be out soon, I have no patience myself so there is a preview tarball packed to try happy testing.. 
Anonymous comment on 2011-03-02 19:19 I (maintainer) have had kupfer on hiatus. Kupfer from Git now uses a newer Waf 1.6 which can run as either Py 3 or Py 2 so no patching of shebang lines necessary. Btw, all the other shebangs in the kupfer tarball (except the waf one) are just there for testing or decoration and are never actually used :-) [trollface]. And, something like SanskritFritz' favourite patch will be included, just not exactly how that is written. To brk3: I think I have fixed this bug in the git version, it would be this: Thanks all for using kupfer. SanskritFritz commented on 2011-02-25 10:15 I run the patched kupfer for days now, no problems whatsoever. However I agree with you about upstream. It probably will get included. dcelasun commented on 2011-02-25 10:11 Hmm... the patch seems simple enough and it shouldn't affect anyone else. Let me do some testing with it and if all goes well, I'll include it. But still, it's much better if this gets included upstream. SanskritFritz commented on 2011-02-25 10:06 Ok, thanks then. Thank you for maintaining the package. BTW there is a very useful patch if you are interested: dcelasun commented on 2011-02-25 09:57 If the kupfer icon is already working for you, then no, this will not affect you. SanskritFritz commented on 2011-02-25 09:54 you say no reason. This means this change will not affect us who are upgrading? dcelasun commented on 2011-02-25 09:46 @SanskritFritz: The only thing changed with this update is a single line in post_install() so I don't want to trigger an update for no reason. SanskritFritz commented on 2011-02-25 09:40 Thanks for the changes. Would you please increase the pkgrel? ngoonee commented on 2011-02-25 08:06 Yep, this time its fine, thanks =) dcelasun commented on 2011-02-25 07:58 Weird... I've just rebuilded the source package. Could you try again? ngoonee commented on 2011-02-25 07:50 Uh... I just downloaded the tarball and the .install file didn't change at all? 
dcelasun commented on 2011-02-25 07:13 @ngoone: Thanks! Package updated. ngoonee commented on 2011-02-25 03:37 I notice you're running gtk-update-icon-cache, but when I first installed this package I could not see the icon until I ran the version proposed last October by speps gtk-update-icon-cache -q -t -f usr/share/icons/hicolor Once I ran that as root (with /usr of course) the icon showed. Suggest updating kupfer.install dcelasun commented on 2011-01-04 11:40 I'm using Super+space. Does that work? Anonymous comment on 2011-01-04 11:36 Currently getting the following crash when trying to invoke kupfer through the usual Ctrl+Space: Traceback (most recent call last): File "/usr/share/kupfer/kupfer/ui/browser.py", line 1607, in _key_binding self.show_hide(time=event_time) File "/usr/share/kupfer/kupfer/ui/browser.py", line 1602, in show_hide self.activate(time=time) File "/usr/share/kupfer/kupfer/ui/browser.py", line 1584, in activate self.window.present_with_time(time) ValueError: Value out of range in conversion of timestamp parameter to unsigned 32 bit integer Have tried reinstalling it and it's dependencies with no luck. Anyone else getting this? Shirakawasuna commented on 2010-11-18 21:05 Kupfer says it also requires python-wnck (and 'gio' for python, but I don't know what supplies that) dcelasun commented on 2010-11-08 20:55 Package updated with new patch. dcelasun commented on 2010-11-08 20:50 Oh, that patch. I'll include it so kupfer will at least launch. Plugins might need separate patches though. giner commented on 2010-11-08 20:30 I have the same issue. LeCrayonVert commented on 2010-11-08 19:46 Well, the patch from, 01_ui_-x,y-coordinates-should-be-integers-not-floats.patch Not really a patch, it just changes float to integer at two specific line in kupfer/ui/browser.py ... I've just added patch -p1 < ../../01_ui_-x,y-coordinates-should-be-integers-not-floats.patch before configure in the PKGBUILD dcelasun commented on 2010-11-08 15:49 Which patch? 
None of those apply cleanly against v203. Did you just ignore the reject?

LeCrayonVert commented on 2010-11-08 15:47
I've just applied the patch from the launchpad page (that replaces 2.0 by 2). Now kupfer is starting but I also have this error with the virtualbox plugin :

Traceback (most recent call last):
  File "/usr/share/kupfer/kupfer/core/plugins.py", line 212, in _import_hook_true
    plugin = __import__(path, fromlist=fromlist)
  File "/usr/share/kupfer/kupfer/plugin/virtualbox/__init__.py", line 18, in <module>
    from kupfer.plugin.virtualbox import ose_support as vbox_support
  File "/usr/share/kupfer/kupfer/plugin/virtualbox/ose_support.py", line 17, in <module>
    from kupfer.plugin.virtualbox import constants as vbox_const
ImportError: cannot import name constants

dcelasun commented on 2010-11-08 14:28
All right, I'm convinced this is not an isolated issue. I'll try to reproduce it and come with a solution. Any further info you might have would help.

Saro commented on 2010-11-08 14:25
same error here :(

Anonymous comment on 2010-11-08 13:15
I am gettin the same error as LeCrayonVert

Anonymous comment on 2010-11-08 11:06
possibly it depends on gtk version. it will be fixed, sorry for the lacking QA.

dcelasun commented on 2010-11-08 10:39
That's weird. It's working just fine for me. Anyone else has this problem?

LeCrayonVert commented on 2010-11-08 10:27
No modification at all... I've tried to reinstall from scratch and to remove ~/.config/kupfer and ~/.cache/kupfer without any success...

dcelasun commented on 2010-11-08 09:50
@LeCrayonVert: I don't have that problem. Are you using this package, without any modifications?

LeCrayonVert commented on 2010-11-08 09:48
Kupfer crashes at startup since 203 :

[kupfer.plugin.virtualbox]: Using vboxapi...
Traceback (most recent call last):
  File "/usr/share/kupfer/kupfer.py", line 22, in <module>
    main.main()
  File "/usr/share/kupfer/kupfer/main.py", line 156, in main
    w.main(quiet=quiet)
  File "/usr/share/kupfer/kupfer/ui/browser.py", line 1740, in main
    self.activate()
  File "/usr/share/kupfer/kupfer/ui/browser.py", line 1582, in activate
    self._center_window()
  File "/usr/share/kupfer/kupfer/ui/browser.py", line 1562, in _center_window
    self.window.move(midx, midy)
TypeError: integer argument expected, got float

Anonymous comment on 2010-11-07 12:40
FYI for Arch users building Kupfer themselves: We need a new release of Waf 1.6 branch before we can update Kupfer to use the latest Waf. The new Waf will run identically with either Py 3 or Py 2 (which if you wonder is quite a feat). Until that time, you have to run "python2 ./waf" instead of just "./waf" when installing Kupfer from the tarball. Like normal, if the right python version is not found automatically, you have to set it with PYTHON=python2 when configuring.

dcelasun commented on 2010-11-07 11:25
Updated to v203 WITHOUT an ibus patch. Patches on launchpad don't apply cleanly to v203 source and I don't have time to debug it myself. If anyone is interested the patch reject is here:

dcelasun commented on 2010-11-07 11:09
@englabenny: The one mentioned on comment #13 on launchpad.

Anonymous comment on 2010-11-07 11:06
which patch?

dcelasun commented on 2010-11-07 05:59
I'll include one of the patches and update the package to v203 today.

Anonymous comment on 2010-11-06 22:13
Now it's a real ping, small bugfix release 203 out. Should help all those with multimonitor setups (you lucky bastards :-)

Anonymous comment on 2010-11-02 22:32
On IBus: *Please* try my patch instead. Also notice ibus users: Try my Shift+F10 suggestion (when Kupfer is open) to be able to set the input method manually.

ShadowKyogre commented on 2010-11-02 21:55
Tested the patch from launchpad and it works.
Patch: PKGBUILD using patch: ShadowKyogre commented on 2010-11-02 20:45 Someone wrote a patch that possibly fixes this with Kupfer and ibus:. Anyone try patching Kupfer with this yet? dcelasun commented on 2010-10-28 08:42 Once again, people, don't mark this out-of-date if there is no new version out! dcelasun commented on 2010-10-20 17:55 @gdt: Why did you mark it as out of date? The latest version is v202. I'm removing the flag, please don't reflag it without an explanation. dcelasun commented on 2010-10-20 04:47 @archspeps In my opinion, arch=any should only be used for non-code stuff, like docs, images, firmware files etc. Anything that requires execution shouldn't be marked with arch=any. Just because a package supports all archs that Arch does, doesn't mean it should be marked with arch=any. Kupfer is an extremely small package, the entire building and packaging takes less than 5 seconds so optimization is not really needed. Gentoo folks might feel the need to optimize anything and everything, I don't. You do have a point about the install file, I'll be fixing that. Regarding the sed line once again, it's a preference. No further discussion is needed. I always use namcap and always read user comments & feedback (though that doesn't mean I'll respond to all of them). I'm maintaining several packages in the very, very little free time I have so I don't have the time to care about non-critical things that namcap reports. speps commented on 2010-10-19 22:33 @dcelasun arch=any is not a preference, is a fact. Kupfer package does not include any architecture dependent content. So the same package works on "any" platform. About the build() package() split, it is a convenience that helps so much when repackaging. Splitting functions, you don't need to rebuild the whole thing to get the final package. 
For example, if you wanna change just the pkgrel in the PKGBUILD and repackage you'll just need to makepkg -R -f, and the only package() function is called without rebuilding. That's an optimization not a cosmetic. Again, the .install corrects the way icons, mime and desktop cache updates have to be called (according to gnome package guidelines -->). The python-keybinder bug notification is in your .install file, i have just fixed its output to display an advise just when a version older than v200 is found on package update. Btw i don't know about this bug, since i never used kupfer before. If you adopted this package maybe your predecessor noticed about this bug, and added a notification in the install file. About the sed line matter, i posted the one i use in my packages to fix for the python2. Differently from Allan solution, it modify just the file that have to be edited. Also it operate for not just .py files (in this case the waf builder). Also this is quite a style matter, both works. Btw, i invite you to take a better look to the suggestions reported by users and the relative posted documents. Also, try using the namcap tool on your final packages to discover some imperfections you didn't noticed. Last tip, the "|| return 1" is not needed anymore. C ya dcelasun commented on 2010-10-19 21:24 I'll remove the redundant deps though. Thanks for pointing that out. dcelasun commented on 2010-10-19 21:23 @archspeps: I don't really agree with arch=any, it's a preference I guess. The rest of your changes aren't really necessary. That sed line would speed up things a fraction of second so I'm not seeing the point here. Seperating build and package is completely unnecessary, it is - at best - a cosmetic change. As for the install file, I didn't have to change my keybinding settings, so I'm not sure how applicable/reproducible that issue is. 
speps commented on 2010-10-19 21:14
@dcelasun Hi, i've touched your PKGBUILD to correct some imperfections:

arch --> any
depends --> deleted some redundant ones
build(), package() --> split
python2 fixes --> allan method replaced by a shorter and faster one:

sed -e "s_env python_&2_" -e "s_bin/python_&2_" -i `grep -rlE "(env python|bin/python)" .`

Here is the fixed PKGBUILD --> Also here is the .install --> C ya

dcelasun commented on 2010-10-19 08:33
Update: Compatibility with the new Python 2 to 3 transition. Read more about it in Allan's blog:

dcelasun commented on 2010-10-19 04:42
@ShadowKyogre: I know that, I'll fix it soon enough.

ShadowKyogre commented on 2010-10-19 02:23
Paste this somewhere before the ./waf configure:

export PYTHON="/usr/bin/python2"
sed -i 's|#![ ]*/usr/bin/env python$|#!/usr/bin/env python2|' ./waf
sed -i -e "s|#![ ]*/usr/bin/python$|#!/usr/bin/python2|" \
    -e "s|#![ ]*/usr/bin/env python$|#!/usr/bin/env python2|" \
    $(find $srcdir/$pkgname-$pkgver -name '*.py')

^Sorry for that, somehow pastebin isn't working for me.

ShadowKyogre commented on 2010-10-19 02:13
This needs to be updated for the python reconfigure.

Berseker commented on 2010-10-11 09:00
have same problem. here seems that switching from ibus to uim solves the issue

LeCrayonVert commented on 2010-10-10 16:55
Same thing as ShadowKyogre... It might be related to the latest python-keybinder update too...

dcelasun commented on 2010-10-07 17:59
@Gordin: Are you using [testing]?

Gordin commented on 2010-10-07 15:09
Broken because of the python switch (I think)

ShadowKyogre commented on 2010-10-03 22:31
Is anyone else having trouble getting keyboard input on the interface? Kupfer seems to only read input from my arrow keys for me.

dcelasun commented on 2010-09-25 15:00
@S1G1: Package updated. Thanks for the info!

S1G1 commented on 2010-09-25 14:57
I'd make realpath an optional dep. Otherwise nautilus integration doesn't work (select files in nautilus and start kupfer with selection).
Thanks for your work on the PKGBUILD!

toketin commented on 2010-09-10 11:38
I've upgraded to the 202 version but I couldn't see kupfer's icon, so I've copied kupfer.svg from /usr/share/icons/hicolor/scalable/apps to /usr/share/pixmaps and now I see its icon.

dcelasun commented on 2010-09-05 20:22
@LeCrayonVert: Thanks for these! Normally the setup takes care of the icon cache, but it fails due to makepkg using fakeroot (expected, of course). I've modified the install file to update both the icon cache and mime db. Also, 2 additional dependencies (hicolor-icon-theme and shared-mime-info) are introduced as suggested by namcap.

LeCrayonVert commented on 2010-09-05 20:12
Hi! Here are some recommendations from namcap:

kupfer E: Files in /usr/share/icons/hicolor but no call to gtk-update-icon-cache or xdg-icon-resource to update the icon cache
kupfer E: Mime-file found. Add "update-mime-database usr/share/mime" to the install file

What do you think?

dcelasun commented on 2010-09-05 19:45
Updated to v202. Please let me know of any problems.

dcelasun commented on 2010-08-16 07:06
The python-keyring package is now fixed and kupfer should now compile fine as well.

Anonymous comment on 2010-08-15 08:24
Thanks for the package!

dcelasun commented on 2010-08-15 08:01
@Spewns: The python-keyring package seems to be broken for some reason. See my comment there: Currently, downgrading python-keyring should solve this as it worked fine with the previous version. In the mean time, you can have a precompiled i686 package here:

dcelasun commented on 2010-08-15 07:55
@Spewns: The python-keyring package seems to be broken for some reason.
See my comment there: Currently, downgrading python-keyring should solve this as it worked fine with the previous version.

Anonymous comment on 2010-08-15 07:44
Even though I have python-keyring installed from AUR...
Checking for Python module keyring : not found
error: Python module not found. Aborting...

dcelasun commented on 2010-08-11 07:16
@asem: No, not to my knowledge. Someone else might have a solution though.

LeCrayonVert commented on 2010-08-05 17:57
The rhythmbox plugin doesn't work anymore... I can't play or enqueue anything from the search results.

Anonymous comment on 2010-07-24 19:33
Hello; currently I am using ibus for input, and kupfer doesn't respond to the keys I press unless I shut down ibus. Is there any way around that? Thanks.

dcelasun commented on 2010-07-01 15:33
python-keybinder is also updated to v0.2.2. ()

dcelasun commented on 2010-07-01 07:32
Updated to v201. Please let me know of any problems.

dcelasun commented on 2010-07-01 05:56
I'll update the package sometime today and post a notice.

Anonymous comment on 2010-06-30 21:56
New version is flawed, but steadily improved. That must mean it is human. Notice the new icon by Nasser.

Anonymous comment on 2010-04-21 22:23
The problems should be solved with the latest version of python-keybinder (v0.1.1). That new release works around a GTK+ bug.

Anonymous comment on 2010-04-19 21:31
Sure thing! Thanks for the help

Anonymous comment on 2010-04-19 21:27
I don't mean to bother all the Spanish speakers. Can you report a bug about this?

Anonymous comment on 2010-04-19 21:21
The same thing is happening on Ubuntu Lucid installed on another machine. The issue is probably related to my keyboard layout (Spanish).

Anonymous comment on 2010-04-19 17:43
Still no joy for me after trying what englabenny suggested

dcelasun commented on 2010-04-19 17:40
Package updated with the notice.

dcelasun commented on 2010-04-19 17:27
Great! I'll add a notice in the install script. Thanks englabenny!
giner commented on 2010-04-19 17:08
englabenny, thank you! I reassigned the key combination and now it really works. It was Mod4+Space and now it is Super+Space

giner commented on 2010-04-19 17:06
The same for me. All of my software is up to date. Gnome (without Compiz).

Anonymous comment on 2010-04-19 17:05
hey, I'm sorry but there is a high probability of bugs in the new python-keybinder, the actual keybinding code was reimplemented. Make sure though that you configure a new keybinding with kupfer (v200) and restart kupfer, if you try again.

Anonymous comment on 2010-04-19 17:02
@dcelasun: Yes, both are up to date. Maybe it has something to do with my hardware. I'm using xorg-server 1.7.6-3, Gnome 2.30 and Compiz.

dcelasun commented on 2010-04-19 16:59
@nischg: I have no idea. I also use the super+space combination and it doesn't interfere with regular space key usage. Do you have the latest versions of both kupfer and python-keybinder? Also, what version of xorg are you using? What DE/WM? Maybe you have the space key bound to some other app?

Anonymous comment on 2010-04-19 16:52
@dcelasun: I installed kupfer without python-keybinder and everything is working all right with my space key. In my previous installation I had switched the default key combination to Super+Space. Please let me know if I can be of help to figure out what's happening with python-keybinder.

dcelasun commented on 2010-04-19 07:40
Thanks for the support guys. I've also adopted python-keybinder and I'll be keeping both packages up-to-date since kupfer relies (although not depends) on keybinder. @nischg: Can you test without python-keybinder? You can do so by removing python-keybinder from your system and also removing it from the depends array.
Anonymous comment on 2010-04-18 22:08
Thank you very much for picking up this package. Recently I had a problem with my space key not working the way it should (I posted my problem here:). After some fiddling around I uninstalled kupfer and python-keybinder to see if that was causing my problems, and it was solved; after that my space key worked again. Please let me know what information I can provide to help see what's causing this behaviour.

AsA commented on 2010-04-15 21:39
I would like to thank dcelasun here too. Recently I was not following the development of the project (and was also slow in maintaining it) and I thought it was better to leave it to someone else. P.S. I disowned python-keybinder as well, if anyone is interested.

Anonymous comment on 2010-04-15 20:15
Thank you for maintaining this package. And thanks to AsA, who maintained it before.

dcelasun commented on 2010-04-14 13:00
Package updated. Gnome dependencies are only required during build since python-keybinder (AUR) is required for customizing the launch combination.

dcelasun commented on 2010-04-14 12:44
I've adopted the package and will update to the latest version shortly.
https://aur.archlinux.org/packages/kupfer/?comments=all
C Programming/C Reference/wchar.h/fgetws

fgetws is a function in the C programming language. It is the wide-character version of the function fgets. The w in fgetws is for wide. A string is read as a multibyte-character or a wide-character string by fgetws depending on whether the stream is opened in text or binary mode respectively. The fgetws subroutine reads characters from the input stream, converts them to the corresponding wide-character codes, and places them in the array pointed to by the string parameter. The subroutine continues until either:

- n-1 characters have been read, where n is the number parameter
- it encounters a newline or end-of-file.

The fgetws subroutine terminates the wide-character string with a null wide character.

Syntax

#include <stdio.h>
#include <wchar.h>

wchar_t *fgetws(wchar_t *string, int n, FILE *stream);

Parameters

fgetws has three parameters:

- string - a buffer providing the storage location for the data read
- n - the maximum number of characters to read, including the terminating null
- stream - a FILE pointer

Requirements

fgetws is declared in the header wchar.h, which must be included in addition to stdio.h; fgets, by contrast, requires only stdio.h.

Return value

Just as with fgets, fgetws returns its string argument on success. A null pointer is returned on error or at EOF (end of file) before any characters are read. One can also use feof or ferror to distinguish the two conditions.

Compatibility

ubuntu, fedora, ANSI, Win 98, Win Me, Win NT, Win 2000, Win XP
https://en.m.wikibooks.org/wiki/C_Programming/C_Reference/wchar.h/fgetws
Let's say you get 1000 hits per second on your website's home page, and you want users to feel that the site is as real-time as possible without overloading your database. These hits display some data from the database and are mostly reads; for every 10 reads there is one write. Yes, you can cache it for 1 minute and the database won't be hit that much, but by doing this it will look like your site doesn't update very often. Now what will happen if you do a 1-second cache? You will eliminate most of the reads and your site will still look like it is updating in real time. If a user hits F5/CTRL+R every couple of seconds he will see new content. Does this make sense? I am interested in your opinion.

[edit]I am talking about database results caching here. For example, take a page like the new submission page on digg: a lot of stuff gets submitted every second, and a lot more gets read of course. If you cache this for one second you eliminate a lot of the reads[/edit]

I can see that having an effect with the given numbers. But what about a 5-second cache? After all, it takes at least a couple of seconds to load any page in the browser. I think 5 or even 10 seconds would be just as good unless the app is Ajaxified. Then I would go lower.

Do you mean browser caching, data caching (e.g. memcached/Application Cache) or output/proxy caching (e.g. squid)? I think it depends what you want to save, and how often the data is modified... if it is modified once a day, then caching for 1 second is a bit daft :) but if it can change within seconds, you probably can't live with multi-minute or multi-hour caches. Again, it depends what you're caching and where, but the most sensible approach is to consider the lifecycle of the data to understand how long it can be cached for. You can of course do explicit event-driven cache refreshing - e.g.
when you update dataset x or namespace y or even object z, then replace the cached object (though it depends whether you're caching single items or sets etc.), but it can be a useful option. I think we need more info if you want a better answer, as there is no magic one-size-fits-all solution; it's very dependent on the dynamics of the architecture and the data.

Yes, it's a potentially good idea, as long as you are only caching what you actually need. I think you would need to decide which parts of the page need caching. Is it just a certain table of data, a few tables, or the whole page that is changing? You may even decide to have a collection of cached data objects which are set at different intervals, as some may not have the same priority as others.

I modified the post because I was not clear in explaining what I was talking about.

So, given your 1000 hits and 100 writes, you want to serve up 1 version per second, rather than 100 versions per second? Insanely brilliant! No one is going to know (or care) that it's being updated once per second rather than 100 times per second. I would totally do it, and I would do it at the output level (squid).

This isn't just limited to web interfaces; you can leverage this even in high-volume applications. My current company has a data-driven distributed application, and I'm looking to migrate as much of the data-driven logic to generated-code logic as possible. Anything we can do to drop data access, even cached data access, results in a massive performance increase. And seeing as how the data is static once the application is distributed, there's no reason to even make data access calls.
http://blogs.lessthandot.com/index.php/webdev/webdesigngraphicsstyling/one-second-caching-brilliant-or-insane/
A speedy LookML parser implemented in pure Python.

Project description

lkml

A speedy LookML parser and serializer implemented in pure Python.

How do I run it?

You can run lkml from the command line (parsing only) or import it as a Python package (parsing and serializing). lkml uses a similar interface as the json and yaml Python packages. The package has two functions:

- load, which accepts a file object and returns a dictionary with the parsed result
- dump, which accepts a Python dictionary and an optional file object to write to. If no file object is provided, dump returns the serialized string directly.

How does lkml represent LookML in Python?

lkml represents LookML as a nested dictionary structure in Python. Within this documentation, we'll refer to LookML field names (e.g. sql_table_name, view, join) as keys. During parsing,

- Blocks with keys like dimension and view become dictionaries. lkml adds a key called name if the block has a name (e.g. the name of the dimension or view)
- Keys with literal values like hidden: yes become keys and values {"hidden": "yes"} in their parent dictionaries
- Lists (e.g. fields) become lists in their parent dictionaries

A number of LookML keys can be repeated, like dimension, include, or view. lkml collects these repeated keys into lists with a pluralized key (e.g. dimension becomes dimensions). Here's an example of some LookML that has been parsed into a dictionary. Note that the repeated key join has been transformed into a plural key joins: a list of dictionaries representing each join.

{
    "connection": "connection_name",
    "explores": [
        {
            "label": "Explore",
            "joins": [
                {
                    "relationship": "many_to_one",
                    "type": "inner",
                    "sql_on": "${view_one.dimension} = ${view_two.dimension}",
                    "name": "view_two"
                },
                {
                    "relationship": "one_to_many",
                    "type": "inner",
                    "sql_on": "${view_one.dimension} = ${view_three.dimension}",
                    "name": "view_three"
                }
            ],
            "name": "view_one"
        }
    ]
}

Parsing LookML in Python

Parsing LookML in Python is simple with lkml.
Imagine the view below.

view: view_name {
  sql_table_name: analytics.orders ;;

  dimension: order_id {
    primary_key: yes
    type: number
    sql: ${TABLE}.order_id ;;
  }
}

lkml.load accepts a file object or a LookML string and returns the parsed result as a dictionary. Here we pass it a file object.

import lkml
with open('path/to/file.view.lkml', 'r') as file:
    parsed = lkml.load(file)

load returns this dictionary.

{
    "views": [
        {
            "sql_table_name": "analytics.orders",
            "dimensions": [
                {
                    "primary_key": "yes",
                    "type": "number",
                    "sql": "${TABLE}.order_id",
                    "name": "order_id"
                }
            ],
            "name": "view_name"
        }
    ]
}

Notice how the name of the dimension, order_id, is preserved in the name key of the first element of the list value of dimensions. Similarly, the name of the view is also preserved.

Serializing (generating) LookML in Python

lkml.dump accepts a Python dictionary representing the LookML that you would like to generate. If you pass a file object as an input argument, it will write the serialized result to that file. If not, it returns a LookML string.

lkml does not validate the LookML it generates. lkml.dump's only standard is that the serialized output could be successfully parsed by lkml.load. It's entirely possible to generate invalid LookML if the input is malformed. For help representing the input object appropriately, see the section on representing LookML in Python above.

lkml descends through the dictionary, writing LookML based on the keys and values it finds.

If the value is a dictionary, lkml creates a block. Dictionaries can have an optional key called name (in this case, the name of this dimension is price), as well as a number of key/value pairs. To name a block, include the name key in the dictionary to be serialized. Here's an example of a dictionary we might provide to lkml.dump.

{
    "dimension": {
        "type": "number",
        "label": "Unit Price",
        "sql": "${TABLE}.price",
        "name": "price"
    }
}

And here's the resulting block of LookML that is generated.
dimension: price {
  type: number
  label: "Unit Price"
  sql: ${TABLE}.price ;;
}

If the value is a list, lkml checks the key against a list of known repeatable keys. In the example above, we used a nested dictionary to represent a dimension block. However, LookML allows multiple blocks with the same key (e.g. dimension, view, set, etc.). Since Python dictionaries cannot have duplicate keys, we represent these repeated keys in our dictionary as a single key/value pair, where the key is a pluralized version of the original key (dimensions instead of dimension), and the value is a list of objects that represent each individual field. For example, multiple joins on an explore should be represented as follows.

"joins": [
    {
        "relationship": "many_to_one",
        "type": "inner",
        "sql_on": "${view_one.dimension} = ${view_two.dimension}",
        "name": "view_two"
    },
    {
        "relationship": "one_to_many",
        "type": "inner",
        "sql_on": "${view_one.dimension} = ${view_three.dimension}",
        "name": "view_three"
    }
]

If the key is not in the list of known repeated keys, lkml creates a list. Here's an example of a list in LookML.

fields: [orders.price, orders.ordered_date, orders.order_id]

If the value is a string, lkml creates a quoted or unquoted string based on the key. For example, the value for label would be quoted, but the value for hidden would not. Values with keys like sql_table_name or html that indicate an expression automatically have a trailing space and ;; appended.

Let's say we've parsed the example view from "Parsing LookML in Python" above. We've parsed it into a dictionary and now we want to modify it. We want to change the type of the dimension order_id from number to string. Using lkml, it's easy to modify the value of type in Python and dump it to LookML. First, we'll modify the value of type in the parsed dictionary.

parsed['views'][0]['dimensions'][0]['type'] = 'string'

Next, we'll dump the dictionary back to LookML in a new file.
with open('path/to/new.view.lkml', 'w+') as file:
    lkml.dump(parsed, file)

Here's the output.

view: view_name {
  sql_table_name: analytics.orders ;;

  dimension: order_id {
    primary_key: yes
    type: string
    sql: ${TABLE}.order_id ;;
  }
}

Parsing LookML from the command line

At the command line, lkml accepts a single positional argument: the path to the LookML file to parse. It returns the parsed result to stdout as a JSON string. Here's an example.

lkml path/to/file.view.lkml

If you would like to save the result to a file, you can pipe the output as follows.

How does it work?

lkml is made up of three components: a lexer, a parser, and a serializer. To dump LookML to a string, lkml calls the serializer, which navigates through the Python dictionary provided, writing out blocks, sets, pairs, keys, and values where needed.
https://pypi.org/project/lkml/0.2.0/
17 January 2011 12:51 [Source: ICIS news]

SINGAPORE (ICIS)--Singapore's exports of petrochemicals jumped 30.4% year on year to S$1.29bn ($1bn) in December 2010 as overall non-oil domestic exports (NODX) to most of the city-state's top ten trading partners rose, official figures showed on Monday.

Exports of chemicals and chemical products grew 14.5% year on year to S$3.69bn in December, International Enterprise (IE) Singapore said.

Overall, the city-state's NODX grew 9.4% to S$14.4bn. Non-oil exports to all of

Exports of primary chemicals and petrochemicals to China rose 51% and 16% year on year respectively, while shipments of petrochemicals to the EU 27 surged.

Petrochemical shipments to

Shipments of primary chemicals to

The city-state's major NODX partners are the

Non-electronic NODX expanded 16% year on year to S$9.27bn in December 2010, IE Singapore said, led by petrochemicals, specialised machinery and measuring instruments.

($1 = S$1
http://www.icis.com/Articles/2011/01/17/9426363/singapores-petchem-exports-surge-30.4-in-dec-2010.html
A class holding information about a directory index. More...

import "nsIDirIndex.idl";

A class holding information about a directory index. These have no reference back to their original source - changing these attributes won't affect the directory.

- The content type - may be null if it is unknown. Unspecified for directories.
- A description for the filename, which should be displayed by a viewer.
- Last-modified time in seconds-since-epoch. -1 means unknown - this is valid, because there were no ftp servers in 1969.
- The fully qualified filename, expressed as a uri. This is encoded with the encoding specified in the nsIDirIndexParser, and is also escaped.
- File size, with -1 meaning "unknown".
- The type of the entry - one of the constants below:
  - Entry is a directory.
  - Entry is a file.
  - Entry is a symlink.
  - Entry's type is unknown.
http://doxygen.db48x.net/comm-central/html/interfacensIDirIndex.html
Division by Invariant Integers using Multiplication

Torbjörn Granlund, Cygnus Support, 1937 Landings Drive, Mountain View, CA
Peter L. Montgomery, Centrum voor Wiskunde en Informatica; 780 Las Colinas Road, San Rafael, CA

Abstract

Integer division remains expensive on today's processors as the cost of integer multiplication declines. We present code sequences for division by arbitrary nonzero integer constants and run-time invariants using integer multiplication. The algorithms assume a two's complement architecture. Most also require that the upper half of an integer product be quickly accessible. We treat unsigned division, signed division where the quotient rounds towards zero, signed division where the quotient rounds towards −∞, and division where the result is known a priori to be exact. We give some implementation results using the C compiler GCC.

1 Introduction

The cost of an integer division on today's RISC processors is several times that of an integer multiplication. The trend is towards fast, often pipelined combinatoric multipliers that perform an operation in typically less than 10 cycles, with either no hardware support for integer division or iterating dividers that are several times slower than the multiplier. Table 1.1 compares multiplication and division times on some processors. This table illustrates that the discrepancy between multiplication and division timing has been growing.

(Work done by first author while at Swedish Institute of Computer Science, Stockholm, Sweden. Work done by second author while at University of California, Los Angeles; supported by U.S. Army fellowship DAAL03-89-G.)

Integer division is used heavily in base conversions, number theoretic codes, and graphics codes. Compilers generate integer divisions to compute loop counts and subtract pointers. In a static analysis of FORTRAN programs, Knuth [13, p.
9] reports that 39% of arithmetic operators were additions, 22% subtractions, 27% multiplications, 10% divisions, and 2% exponentiations. Knuth's counts do not distinguish integer and floating point operations, except that 4% of the divisions were divisions by 2.

When integer multiplication is cheaper than integer division, it is beneficial to substitute a multiplication for a division. Multiple authors [2, 11, 15] present algorithms for division by constants, but only when the divisor divides 2^k ± 1 for some small k. Magenheimer et al [16, 7] give the foundation of a more general approach, which Alverson [1] implements on the Tera Computing System. Compiler writers are only beginning to become aware of the general technique. For example, version 1.02 of the IBM RS/6000 xlc and xlf compilers uses the integer multiply instruction to expand signed integer divisions by 3, 5, 7, 9, 25, and 125, but not by other odd integer divisors below 256, and never for unsigned division.

We assume an N-bit two's complement architecture. Unsigned (i.e., nonnegative) integers range from 0 to 2^N − 1 inclusive; signed integers range from −2^(N−1) to 2^(N−1) − 1. We denote these integers by uword and sword respectively. Unsigned doubleword integers (range 0 to 2^(2N) − 1) are denoted by udword. Signed doubleword integers (range −2^(2N−1) to 2^(2N−1) − 1) are denoted by sdword. The type int is used for shift counts and logarithms.

Several of the algorithms require the upper half of an integer product obtained by multiplying two uwords or two swords. All algorithms need simple operations such as adds, shifts, and bitwise operations (bit ops) on uwords and swords, as summarized in Table 3.1. We show how to use these operations to divide by arbitrary nonzero constants, as well as by divisors which are loop invariant or repeated in a basic block, using one multiplication plus a few simple instructions per division. The presentation concentrates on three types of

Architecture/Implementation N Approx. Year Motorola MC68020 [18, pp.
9 22]

[Table 1.1 lists the word size N, the approximate year, the time in cycles for HIGH(N bit × N bit) multiplication, and the time in cycles for N bit/N bit division (unsigned and signed), for: Motorola MC68020 [18, pp. 9-22] and later Motorola MC parts, Intel 386 [9], Intel 486 [10], Intel Pentium, SPARC (Cypress CY7C series, Viking [20]), HP PA [16] (with and without FP assist), MIPS R3000 [12], MIPS R4000 [17], POWER/RIOS I [4, 22], PowerPC/MPC601 [19], and DEC Alpha 21064AA [8]. The individual cycle counts did not survive in this transcription.]

S: No direct hardware support; approximate cycle count for software implementation
F: Does not include time for moving data to/from floating point registers
P: Pipelined implementation (i.e., independent instructions can execute simultaneously)

Table 1.1: Multiplication and division times on different CPUs

division, in order by difficulty: (i) unsigned, (ii) signed, quotient rounded towards zero, (iii) signed, quotient rounded towards −∞. Other topics are division of a udword by a run-time invariant uword, division when the remainder is known a priori to be zero, and testing for a given remainder. In each case we give the mathematical background and suggest an algorithm which a compiler can use to generate the code. The algorithms are ineffective when a divisor is not invariant, such as in the Euclidean GCD algorithm. Most algorithms presented herein yield only the quotient. The remainder, if desired, can be computed by an additional multiplication and subtraction. We have implemented the algorithms in a developmental version of the GCC 2.6 compiler [21]. DEC uses some of these algorithms in its Alpha AXP compilers.

2 Mathematical notations

Let x be a real number. Then ⌊x⌋ denotes the largest integer not exceeding x and ⌈x⌉ denotes the least integer not less than x. Let TRUNC(x) denote the integer part of x, rounded towards zero. Formally, TRUNC(x) = ⌊x⌋ if x ≥ 0 and TRUNC(x) = ⌈x⌉ if x < 0. The absolute value of x is |x|. For x > 0, the (real) base-2 logarithm of x is log2 x. A multiplication is written x · y. If x, y, and n are integers and n ≠ 0, then x ≡ y (mod n) means x − y is a multiple of n.
Two remainder operators are common in language definitions. Sometimes a remainder has the sign of the dividend and sometimes the sign of the divisor. We use the Ada notations

n rem d = n − d · TRUNC(n/d)   (sign of dividend),
n mod d = n − d · ⌊n/d⌋        (sign of divisor).   (2.1)

The Fortran 90 names are MOD and MODULO. In C, the definition of remainder is implementation dependent (many C implementations round signed quotients towards zero and use rem remaindering). Other definitions have been proposed [6, 7].

If n is a udword or sdword, then HIGH(n) and LOW(n) denote the most significant and least significant halves of n. LOW(n) is a uword, while HIGH(n) is a uword if n is a udword and an sword if n is an sdword. In both cases n = 2^N · HIGH(n) + LOW(n).

3 Assumed instructions

The suggested code assumes the operations in Table 3.1, on an N-bit machine. Some primitives, such as loading constants and operands, are implicit in the notation and are not included in the operation counts.
The algorithms for processing constant ivisors require compile time arithmetic on uwors. Algorithms for processing run time invariant ivisors require taking the base 2 logarithm of a positive integer (sometimes roune up, sometimes own) an require iviing a uwor by a uwor. If the algorithms are use only for constant ivisors, then these operations are neee only at compile time. If the architecture has a leaing zero count (LDZ) instruction, then these logarithms can be foun from log 2 x = N LDZ(x 1), log 2 x = N 1 LDZ(x) (1 x 2 N 1). Some algorithms may prouce expressions such as SRL(x, 0) or (x y); the optimizer shoul make the obvious simplifications. Some escriptions show an aition or subtraction of 2 N, which is a no-op. If an architecture lacks arithmetic right shift, then it can be compute from the ientity SRA(x, l) = SRL(x + 2 N 1, l) 2 N 1 l whenever 0 l N 1. If an architecture has only one of MULSH an MULUH, then the other can be compute using MULUH(x, y) = MULSH(x, y) + AND(x, XSIGN(y)) + AND(y, XSIGN(x)) for arbitrary N bit patterns x, y (interprete as uwors for MULUH an as swors for MULSH). 4 Unsigne ivision Suppose we want to compile an unsigne ivision q = n/, where 0 < < 2 N is a constant or run time invariant an 0 n < 2 N is variable. Let s try to fin a rational approximation m/2 N+l of 1/ such that n m n = 2 N+l whenever 0 n 2 N 1. (4.1) Setting n = in (4.1) shows we require 2 N+l m. Setting n = q 1 shows 2 N+l q > m (q 1). Multiply by to erive ( m 2 N+l) (q 1) < 2 N+l. This inequality will hol for all values of q 1 below 2 N if m 2 N+l 2 l. Theorem 4.2 below states that these conitions are sufficient, because the maximum relative error (1 part in 2 N ) is too small to affect the quotient when n < 2 N. Theorem 4.2 Suppose m,, l are nonnegative integers such that 0 an 2 N+l m 2 N+l + 2 l. (4.3) Then n/ = m n/2 N+l for every integer n with 0 n < 2 N. Proof. Define k = m 2 N+l. Then 0 k 2 l by hypothesis. 
Given n with 0 ≤ n < 2^N, write n = q·d + r where q = ⌊n/d⌋ and 0 ≤ r ≤ d − 1. We must show that q = ⌊m·n/2^(N+l)⌋. A calculation gives

m·n/2^(N+l) − q = ((k + 2^(N+l))/d)·(n/2^(N+l)) − q = (k·n)/(d·2^(N+l)) + (n/d − q) = (k·n)/(d·2^(N+l)) + r/d ≤ (1/d)·(2^l·n/2^(N+l) + r) = (1/d)·(n/2^N + r).   (4.4)

This difference is nonnegative and does not exceed (1/d)·((2^N − 1)/2^N + d − 1) < 1. ∎

Theorem 4.2 allows division by d to be replaced with multiplication by m/2^(N+l) if (4.3) holds. In general we require 2^l ≥ d − 1 to ensure that a suitable multiple of d exists in the interval [2^(N+l), 2^(N+l) + 2^l]. For compatibility with the algorithms for signed division (§5 and §6), it is convenient to choose m·d > 2^(N+l) even though Theorem 4.2 permits equality. Since m can be almost as large as 2^(N+1), we don't multiply by m directly, but instead by 2^N and m − 2^N. This leads to the code in Figure 4.1. Its cost is 1 multiply, 2 adds/subtracts, and 2 shifts per quotient, after computing constants dependent only on the divisor.

Initialization (given uword d with 1 ≤ d < 2^N):
int l = ⌈log2 d⌉;   /* 2^(l−1) < d ≤ 2^l */
uword m' = ⌊2^N·(2^l − d)/d⌋ + 1;   /* m' = ⌊2^(N+l)/d⌋ − 2^N + 1 */
int sh_1 = min(l, 1);
int sh_2 = max(l − 1, 0);   /* sh_2 = l − sh_1 */

For q = ⌊n/d⌋, d, n, q all uword:
uword t_1 = MULUH(m', n);
q = SRL(t_1 + SRL(n − t_1, sh_1), sh_2);

Figure 4.1: Unsigned division by run time invariant divisor

Explanation of Figure 4.1. If d = 1, then l = 0, so m' = 1 and sh_1 = sh_2 = 0. The code computes t_1 = ⌊1·n/2^N⌋ = 0 and q = n. If d > 1, then l ≥ 1, so sh_1 = 1 and sh_2 = l − 1. Since m' ≤ 2^N·(2^l − d)/d + 1 ≤ 2^N·(d − 1)/d + 1 < 2^N, the value of m' fits in a uword. Since 0 ≤ t_1 ≤ n, the formula for q simplifies to

q = SRL(t_1 + SRL(n − t_1, 1), l − 1) = ⌊(t_1 + ⌊(n − t_1)/2⌋)/2^(l−1)⌋ = ⌊⌊(t_1 + n)/2⌋/2^(l−1)⌋ = ⌊(t_1 + n)/2^l⌋.   (4.5)

But t_1 + n = ⌊m'·n/2^N⌋ + n = ⌊(m' + 2^N)·n/2^N⌋. Set m = m' + 2^N = ⌊2^(N+l)/d⌋ + 1. The hypothesis of Theorem 4.2 is satisfied since 2^(N+l) < m·d ≤ 2^(N+l) + d ≤ 2^(N+l) + 2^l.

Caution. Conceptually q is SRL(n + t_1, l), as in (4.5). Do not compute q this way, since n + t_1 may overflow N bits and the shift count may be out of bounds.

Improvement. If d is constant and a power of 2, replace the division by a shift.

Improvement.
If d is constant and m = m' + 2^N is even, then reduce m/2^l to lowest terms. The reduced multiplier fits in N bits, unlike the original. In rare cases (e.g., d = 641 on a 32-bit machine, d = … on a 64-bit machine) the final shift is zero.

Improvement. If d is constant and even, rewrite ⌊n/d⌋ = ⌊⌊n/2^e⌋/(d/2^e)⌋ for some e > 0. Then ⌊n/2^e⌋ can be computed using SRL. Since ⌊n/2^e⌋ < 2^(N−e), less precision is needed in the multiplier than before.

These ideas are reflected in Figure 4.2, which generates code for ⌊n/d⌋ where n is unsigned and d is constant. Procedure CHOOSE_MULTIPLIER, which is shared by this and later algorithms, appears in Figure 6.2.

Inputs: uword d and n, with d constant.
uword d_odd, t_1; uword m; int e, l, l_dummy, sh_post, sh_pre;
(m, sh_post, l) = CHOOSE_MULTIPLIER(d, N);
if m ≥ 2^N and d is even then
    Find e such that d = 2^e·d_odd and d_odd is odd.   /* 2^e = AND(d, 2^N − d) */
    sh_pre = e;
    (m, sh_post, l_dummy) = CHOOSE_MULTIPLIER(d_odd, N − e);
else
    sh_pre = 0;
end if
if d = 2^l then
    Issue q = SRL(n, l);
else if m ≥ 2^N then
    assert sh_pre = 0;
    Issue t_1 = MULUH(m − 2^N, n);
    Issue q = SRL(t_1 + SRL(n − t_1, 1), sh_post − 1);
else
    Issue q = SRL(MULUH(m, SRL(n, sh_pre)), sh_post);
end if

Figure 4.2: Optimized code generation of unsigned q = ⌊n/d⌋ for constant nonzero d

The following three examples illustrate the cases in Figure 4.2. All assume unsigned 32-bit arithmetic.

Example. q = ⌊n/10⌋. CHOOSE_MULTIPLIER finds m_low = (2^36 − 6)/10 and m_high = (2^36 + 14)/10. After one round of divisions by 2, it returns (m, 3, 4), where m = (2^34 + 1)/5. The suggested code q = SRL(MULUH((2^34 + 1)/5, n), 3) eliminates the pre-shift by 0. See Table 11.1.

Example. q = ⌊n/7⌋. Here m = (2^35 + 3)/7 > 2^32. This example uses the longer sequence in Figure 4.1.

Example. q = ⌊n/14⌋. CHOOSE_MULTIPLIER first returns the same multiplier as when d = 7. The suggested code uses separate divisions by 2 and 7: q = SRL(MULUH((2^34 + 5)/7, SRL(n, 1)), 2).

5 Signed division, quotient rounded towards 0

Suppose we want to compile a signed division q = TRUNC(n/d), where d is constant or run time invariant, 0 < |d| < 2^(N−1), and where −2^(N−1) ≤ n ≤ 2^(N−1) − 1 is variable.
All quotients are to be rounded towards zero. We could prove a theorem like Theorem 4.2 about when TRUNC(n/d) = TRUNC(m·n/2^(N+l)) for all n in a suitable range (cf. (7.1)), but it wouldn't help since we can't compute the right side given only ⌊m·n/2^N⌋. Instead we show how to adjust the estimated quotient when the dividend or divisor is negative.

Theorem 5.1 Suppose m, d, l are integers such that d ≠ 0 and

0 < m·|d| − 2^(N+l−1) ≤ 2^l.

Let n be an arbitrary integer such that −2^(N−1) ≤ n ≤ 2^(N−1) − 1. Define q_0 = ⌊m·n/2^(N+l−1)⌋. Then

TRUNC(n/d) = q_0           if n ≥ 0 and d > 0,
             1 + q_0       if n < 0 and d > 0,
             −q_0          if n ≥ 0 and d < 0,
             −(1 + q_0)    if n < 0 and d < 0.

Proof. When n ≥ 0 and d > 0, this is Theorem 4.2 with N replaced by N − 1. Suppose n < 0 and d > 0, say n = q·d − r where 0 ≤ r ≤ d − 1. Define k = m·d − 2^(N+l−1). Then

q − m·n/2^(N+l−1) = (1/d)·((k/2^l)·(−n/2^(N−1)) + r),   (5.2)

as in (4.4). Since 0 < k ≤ 2^l by hypothesis, the first fraction on the right of (5.2) is positive and r/d is nonnegative. The sum is at most 1/d + (d − 1)/d = 1, so q_0 = ⌊m·n/2^(N+l−1)⌋ = q − 1, as asserted. For d < 0, use TRUNC(n/d) = −TRUNC(n/(−d)). ∎

Caution. When d < 0, avoid rewriting the quotient as TRUNC((−n)/(−d)), which fails for n = −2^(N−1).

For a run time invariant divisor, this leads to the code in Figure 5.1. Its cost is 1 multiply, 3 adds, 2 shifts, and 1 bit op per quotient.

Explanation of Figure 5.1. The multiplier m satisfies 2^(N−1) < m < 2^N except when |d| = 1; in the latter cases m = 2^N + 1. In either case m' = m − 2^N fits in an sword. We compute ⌊m·n/2^N⌋ as n + ⌊(m − 2^N)·n/2^N⌋, using MULSH. The subtraction of XSIGN(n) adds one if n < 0. The last line negates the tentative quotient if d < 0 (i.e., if d_sign = −1).

Variation. An alternate computation of m' is m' = TRUNC((2^N·(2^(l−1) − |d|) + 1)/|d|). This uses signed (2N)-bit/N-bit division, with N-bit quotient.
Initialization (given constant sword d with d ≠ 0):
int l = max(⌈log2 |d|⌉, 1);
uword m = 1 + ⌊2^(N+l−1)/|d|⌋;
sword m' = m − 2^N;
sword d_sign = XSIGN(d);
int sh_post = l − 1;

For q = TRUNC(n/d), n, q all sword:
sword q_0 = n + MULSH(m', n);
q_0 = SRA(q_0, sh_post) − XSIGN(n);
q = EOR(q_0, d_sign) − d_sign;

Figure 5.1: Signed division by run time invariant divisor, rounded towards zero

Overflow detection. The quotient n/d overflows if n = −2^(N−1) and d = −1. The algorithm in Figure 5.1 returns −2^(N−1). If overflow detection is required, the final subtraction of d_sign should check for overflow.

Improvement. If m is constant and even, then reduce m/2^l to lowest terms, as in the unsigned case. This improvement is reflected in Figure 5.2, which generates code for TRUNC(n/d) where d is a nonzero constant. Figure 5.2 also checks for the divisor being a power of 2 or the negative thereof.

Inputs: sword d and n, with d constant and d ≠ 0.
uword m; int l, sh_post;
(m, sh_post, l) = CHOOSE_MULTIPLIER(|d|, N − 1);
if |d| = 1 then
    Issue q = n;
else if |d| = 2^l then
    Issue q = SRA(n + SRL(SRA(n, l − 1), N − l), l);
else if m < 2^(N−1) then
    Issue q = SRA(MULSH(m, n), sh_post) − XSIGN(n);
else
    Issue q = SRA(n + MULSH(m − 2^N, n), sh_post) − XSIGN(n);
    Cmt. Caution: m − 2^N is negative.
end if
if d < 0 then
    Issue q = −q;
end if

Figure 5.2: Optimized code generation of signed q = TRUNC(n/d) for constant d ≠ 0

Example. q = TRUNC(n/3), on a 32-bit machine. CHOOSE_MULTIPLIER(3, 31) returns sh_post = 0 and m = (2^32 + 2)/3. The code q = MULSH(m, n) − XSIGN(n) uses one multiply, one shift, one subtract.

6 Signed division, quotient rounded towards −∞

Some languages require negative quotients to round towards −∞ rather than zero. With some ingenuity, we can compute these quotients in terms of quotients which round towards zero, even if the signs of the dividend and divisor are unknown at compile time. If n and d are integers, then the identities

⌊n/d⌋ = TRUNC(n/d)             if n ≥ 0 and d > 0,
        TRUNC((n + 1)/d) − 1   if n < 0 and d > 0,
        TRUNC((n − 1)/d) − 1   if n > 0 and d < 0,
        TRUNC(n/d)             if n ≤ 0 and d < 0

are easily verified.
Since the new numerators n ± 1 never overflow, these identities can be used for computation. They are summarized by

⌊n/d⌋ = TRUNC((n + d_sign − n_sign)/d) + q_sign,   (6.1)

where d_sign = XSIGN(d), n_sign = XSIGN(OR(n, n + d_sign)), and q_sign = EOR(n_sign, d_sign). The cost is 2 shifts, 3 adds/subtracts, and 2 bit ops, plus the divide (n + d_sign is a repeated subexpression). For remainders, a corollary to (2.1) and (6.1) is

n mod d = n − d·TRUNC((n + d_sign − n_sign)/d) − d·q_sign
        = ((n + d_sign − n_sign) rem d) − d_sign + n_sign − d·q_sign   (6.2)
        = ((n + d_sign − n_sign) rem d) + AND(d − 2·d_sign − 1, q_sign).

The last equality in (6.2) can be verified by separately checking the cases q_sign = n_sign − d_sign = 0 and q_sign = n_sign + d_sign = −1. The subexpression d − 2·d_sign − 1 depends only on d.

For rounding towards +∞, an analog of (6.1) is

⌈n/d⌉ = TRUNC((n − d_sign + n_pos)/d) − EOR(d_sign, n_pos),

where d_sign = XSIGN(d) and n_pos = −(n > d_sign).

Improvement. If d > 0 is constant, then d_sign = 0. Then (6.1) becomes

⌊n/d⌋ = TRUNC((n − n_sign)/d) + n_sign, where n_sign = XSIGN(n).

Since TRUNC(−x) = −TRUNC(x) and EOR(−1, n) = −1 − n = −(n + 1), this is equivalent to

⌊n/d⌋ = EOR(n_sign, TRUNC(EOR(n_sign, n)/d))   (d > 0).   (6.3)

The dividend and divisor on the right of (6.3) are both nonnegative and below 2^(N−1). One can view them as signed or as unsigned when applying earlier algorithms.

Improvement. The XSIGN(OR(n, n + d_sign)) is equivalent to −(n ≤ NOT(d_sign)) and to −(n < −d_sign), where the relationals produce 1 if true and 0 if false. On the MIPS R2000/R3000 [12], for example, one can compute

d_sign = SRL(d, N − 1);
n_sign = (n < d_sign);   /* SLT, signed */
q_sign = EOR(n_sign, d_sign);
q = TRUNC((n − d_sign + n_sign)/d) − q_sign;

(six instructions plus the divide), saving an instruction over (6.1).

Improvement. If n is known to be nonzero, then n_sign simplifies to XSIGN(n).

For constant divisors, one can use (6.1) and the algorithm in Figure 5.2. For constant d > 0 a shorter algorithm, based on (6.3), appears in Figure 6.1.

Inputs: sword n and d, with d constant and d > 0.
uword m; int l, sh_post;
(m, sh_post, l) = CHOOSE_MULTIPLIER(d, N − 1);
if d = 2^l then
    Issue q = SRA(n, l);
else
    assert m < 2^N;
    Issue sword n_sign = XSIGN(n);
    Issue uword q_0 = MULUH(m, EOR(n_sign, n));
    Issue q = EOR(n_sign, SRL(q_0, sh_post));
end if

Figure 6.1: Optimized code generation of signed q = ⌊n/d⌋ for constant d > 0

Example. Using signed 32-bit arithmetic, the code for r = n mod 10 (nonnegative remainder) can be

sword n_sign = XSIGN(n);
uword q_0 = MULUH((2^33 + 3)/5, EOR(n_sign, n));
sword q = EOR(n_sign, SRL(q_0, 2));
r = n − SLL(q, 1) − SLL(q, 3);

The cost is 1 multiply, 4 shifts, 2 bit ops, 2 subtracts. Alternately, if one has a fast signed division algorithm which rounds quotients towards 0 and returns remainders, then (6.2) justifies the code

r = ((n − XSIGN(n)) rem 10) + AND(9, XSIGN(n)).

The cost is 1 divide, 1 shift, 1 bit op, 2 adds/subtracts.

procedure CHOOSE_MULTIPLIER(uword d, int prec);
Cmt. d: Constant divisor to invert, 1 ≤ d < 2^N.
Cmt. prec: Number of bits of precision needed, 1 ≤ prec ≤ N.
Cmt. Finds m, sh_post, l such that:
Cmt.   2^(l−1) < d ≤ 2^l.
Cmt.   0 ≤ sh_post ≤ l. If sh_post > 0, then N + sh_post ≤ l + prec.
Cmt.   2^(N+sh_post) < m·d ≤ 2^(N+sh_post)·(1 + 2^(−prec)).
Cmt. Corollary: If d ≤ 2^prec, then m < 2^(N+sh_post)·(1 + 2^(−prec))/d ≤ 2^(N+sh_post−l+1).
Cmt. Hence m fits in max(prec, N − l) + 1 bits (unsigned).
int l = ⌈log2 d⌉, sh_post = l;
uword m_low = ⌊2^(N+l)/d⌋, m_high = ⌊(2^(N+l) + 2^(N+l−prec))/d⌋;
Cmt. To avoid numerator overflow, compute m_low as 2^N + (m_low − 2^N), with m_low − 2^N = ⌊2^N·(2^l − d)/d⌋.
Cmt. Likewise for m_high. Compare m' in Figure 4.1.
Cmt. Invariant: m_low = ⌊2^(N+sh_post)/d⌋ < m_high = ⌊2^(N+sh_post)·(1 + 2^(−prec))/d⌋.
while ⌊m_low/2⌋ < ⌊m_high/2⌋ and sh_post > 0 do
    m_low = ⌊m_low/2⌋; m_high = ⌊m_high/2⌋; sh_post = sh_post − 1;
end while;   /* Reduce to lowest terms. */
return (m_high, sh_post, l);   /* Three outputs. */
end CHOOSE_MULTIPLIER;

Figure 6.2: Selection of multiplier and shift count

7 Use of floating point

One alternative to MULUH and MULSH uses floating point arithmetic.
Let the floating point mantissa be F bits wide (e.g., F = 53 for IEEE double precision arithmetic). Then any floating point operation has relative error at most 2^(1−F), regardless of the rounding mode, unless exponent overflow or underflow occurs. Suppose N ≥ 1 and F ≥ N + 3. We claim that

TRUNC(n/d) = TRUNC(q_est), where q_est = fl(fl(n) × fl(1/d)),   (7.1)

whenever |n| ≤ 2^N − 1 and 0 < |d| < 2^N, regardless of the rounding modes used to compute q_est. The proof assumes that n > 0 and d > 0, by negating both sides of (7.1) if necessary (the case n = 0 is trivial). Since the relative error per operation is at most 2^(1−F), the estimated quotient q_est satisfies

(1 − 2^(1−F))^2·(n/d) ≤ q_est ≤ (1 + 2^(1−F))^2·(n/d).

Use this and the inequalities

1 − 2^(2−F) < (1 − 2^(1−F))^2,  (1 + 2^(1−F))^2 − 1 < 2^(3−F) ≤ 2^(−N)

to derive

(1 − 2^(2−F))·(n/d) < q_est < (n/d)·(1 + 2^(−N)) ≤ (n/d)·(1 + 1/n) = (n + 1)/d.

Denote q = TRUNC(n/d). Then q_est < (n + 1)/d implies TRUNC(q_est) ≤ q. If q_est < q, then

(1 − 2^(2−F))·q ≤ (1 − 2^(2−F))·(n/d) < q_est < q.

Both q and q_est are exactly representable as floating point numbers, but there are no representable numbers strictly between (1 − 2^(2−F))·q and q. This contradiction shows that q_est ≥ q and hence q = TRUNC(q_est).

For quotients rounded towards −∞, use (6.1). If F = 53 and N ≤ 50, then (7.1) can be used for N-bit integer division. The algorithm may trigger an IEEE exception for inexactness if the application program enables that condition. Alverson [1] uses integer multiplication, but computes the multiplier using floating point arithmetic. Baker [3] does modular multiplication using a combination of floating point and integer arithmetic.

8 Dividing udword by uword

One primitive operation for multiple precision arithmetic [14, p. 251] is the division of a udword by a uword, obtaining uword quotient and remainder, where the quotient is known to be less than 2^N.
We describe a way to compute this quotient and remainder after some preliminary computations involving only the divisor, when the divisor is a run time invariant expression.

Initialization (given uword d, where 0 < d < 2^N):
int l = 1 + ⌊log2 d⌋;   /* 2^(l−1) ≤ d < 2^l */
uword m' = ⌊(2^N·(2^l − d) − 1)/d⌋;   /* m' = ⌊(2^(N+l) − 1)/d⌋ − 2^N */
uword d_norm = SLL(d, N − l);   /* Normalized divisor, d_norm ≥ 2^(N−1) */

For q = ⌊n/d⌋ and r = n − q·d, where d, q, r are uword and n is udword:
uword n_2 = SLL(HIGH(n), N − l) + SRL(LOW(n), l);   /* See note about shift count. */
uword n_10 = SLL(LOW(n), N − l);   /* n_10 = n_1·2^(N−1) + n_0·2^(N−l) */   /* Ignore overflow. */
sword n_1 = XSIGN(n_10);
uword n_adj = n_10 + AND(n_1, d_norm − 2^N);   /* = n_1·(d_norm − 2^(N−1)) + n_0·2^(N−l) */
uword q_1 = n_2 + HIGH(m'·(n_2 − n_1) + n_adj);   /* Underflow is impossible. */   /* See Lemma 8.1. */
sdword dr = n − 2^N·d + (2^N − 1 − q_1)·d;   /* dr = n − q_1·d − d; −d ≤ dr < d */
q = HIGH(dr) − (2^N − 1 − q_1) + 2^N;   /* Add 1 to quotient if dr ≥ 0. */
r = LOW(dr) + AND(d − 2^N, HIGH(dr));   /* Add d to remainder if dr < 0. */

Figure 8.1: Unsigned division of udword by run time invariant uword

Lemma 8.1 Suppose that d, m, and l are nonnegative integers such that 2^(l−1) ≤ d < 2^l ≤ 2^N and

0 < 2^(N+l) − m·d ≤ d.   (8.2)

Given n with 0 ≤ n ≤ 2^N·d − 1, write n = n_2·2^l + n_1·2^(l−1) + n_0, where n_0, n_1, and n_2 are integers with 0 ≤ n_1 ≤ 1 and 0 ≤ n_0 ≤ 2^(l−1) − 1. Define integers q_1 and q_0 by

q_1·2^N + q_0 = n_2·2^N + (n_2 + n_1)·(m − 2^N) + n_1·(d·2^(N−l) − 2^(N−1)) + n_0·2^(N−l)   (8.3)

and 0 ≤ q_0 ≤ 2^N − 1. Then 0 ≤ q_1 ≤ 2^N − 1 and 0 ≤ n − q_1·d < 2·d.

Proof. Define k = 2^(N+l) − m·d. Then (8.2) implies 0 < k ≤ d ≤ 2^l − 1. The bound n ≤ 2^N·d − 1 implies n_2 ≤ 2^(N−l)·d − 1. Equation (8.2) implies m > 2^(N+l)/d − 1 ≥ 2^N − 1. A corollary to (8.3) is

q_1·2^N + q_0 = n_2·m + n_1·(m − 2^N) + 2^(N−l)·(n_1·(d − 2^(l−1)) + n_0)
             ≤ (2^(N−l)·d − 1)·m + (m − 2^N) + 2^(N−l)·((d − 2^(l−1)) + (2^(l−1) − 1))
             = 2^(N−l)·d·m − 2^N + 2^(N−l)·(d − 1) < 2^(2N).

This proves the upper bound on the integer q_1. A straightforward calculation using the definitions of k and q_0 and n_0 reveals that

n − q_1·d = ((n_2 + n_1)·k + q_0·d)/2^N + (1 − d/2^l)·(n_1·(d − 2^(l−1)) + n_0).
(8.4)

Since 2^(l−1) ≤ d < 2^l by hypothesis, the right side of (8.4) is nonnegative. This remainder is bounded by

(2^(N−l)·d·k + (2^N − 1)·d)/2^N + (1 − d/2^l)·((d − 2^(l−1)) + (2^(l−1) − 1)) < d·k/2^l + d + (1 − d/2^l)·(d − 1) ≤ 2·d − 1 + d/2^l < 2·d,

completing the proof. ∎

This leads to an algorithm like that in Figure 8.1 when dividing a udword by a run time invariant uword with quotient known to be less than 2^N. Unlike the previous algorithms, this code rounds the multiplier down when computing a reciprocal. After initializations depending only on the divisor, this algorithm requires two products (both halves of each) and simple operations (including doubleword adds and subtracts). Five registers hold d, d_norm, l, m', and N − l.

Note. The shift count l in the computations of m' and n_2 may equal N. If this is too large, use separate shifts by l − 1 and 1. If a doubleword shift is available, compute n_2 and n_10 together.

9 Exact division by constants

Occasionally a language construct requires a division whose remainder is known to vanish. An example occurs in C when subtracting two pointers. Their numerical difference is divided by the object size. The object size is a compile time constant.

Suppose we want code for q = n/d, where d is a nonzero constant and n is an expression known to be divisible by d. Write d = 2^e·d_odd where d_odd is odd. Find d_inv such that 1 ≤ d_inv ≤ 2^N − 1 and

d_inv·d_odd ≡ 1 (mod 2^N).   (9.1)

Then

2^e·q = 2^e·(n/d) = n/d_odd ≡ (d_inv·d_odd)·(n/d_odd) = d_inv·n (mod 2^N),

as in [2]. Hence 2^e·q ≡ d_inv·n (mod 2^N). Since n/d_odd = 2^e·q fits in N bits, it must equal the lower half of the product d_inv·n, namely MULL(d_inv, n). An SRA (for signed division) or SRL (for unsigned division) produces the quotient q.

The multiplicative inverse d_inv of d_odd modulo 2^N can be found by the extended Euclidean GCD algorithm [14, p. 325]. Another algorithm observes that (9.1) holds modulo 2^3 if d_inv = d_odd. Each Newton iteration

d_inv ← d_inv·(2 − d_inv·d_odd) mod 2^N   (9.2)

doubles the known exponent by which (9.1) holds, so ⌈log2(N/3)⌉ iterations of (9.2) suffice. If d_odd = ±1, then d_inv = d_odd so the multiplication by d_inv is trivial or a negation.
If d is odd, then e = 0 and the shift disappears.

A variation tests whether an integer n is exactly divisible by a nonzero constant d without computing the remainder. If d is a power of 2 (or the negative thereof, in the signed case), then check the lower bits of n to test whether d divides n. Otherwise compute d_inv and e as above. Let q_0 = MULL(d_inv, n). If n = q·d for some q, then q_0 ≡ 2^e·q (mod 2^N) must be a multiple of 2^e. The original division is exact (no remainder) precisely when (i) q_0 is a multiple of 2^e, and (ii) q_0 is sufficiently small that q_0·d_odd is representable by the original data type. For unsigned division check that 0 ≤ q_0 ≤ 2^e·⌊(2^N − 1)/d⌋ and that the bottom e bits of q_0 (or of n) are zero. When e > 0, these tests can be combined if the architecture has a rotate (i.e., circular shift) instruction, or by expanding this rotate into OR(SRL(q_0, e), SLL(q_0, N − e)). For signed division check that −2^e·⌊2^(N−1)/d⌋ ≤ q_0 ≤ 2^e·⌊(2^(N−1) − 1)/d⌋ and that the bottom e bits of q_0 are zero; the interval check can be done with an add and one signed or unsigned compare. Relatedly, to test whether n rem d = r, where d and r are constants with 1 ≤ r < d and where n is signed, check whether MULL(d_inv, n − r) is a nonnegative multiple of 2^e not exceeding 2^e·⌊(2^(N−1) − 1 − r)/d⌋.

Example. To test whether a signed 32-bit value i is divisible by 100, let d_inv = (19·2^32 + 1)/25. Compute sword q_0 = MULL(d_inv, i). Next check whether q_0 is a multiple of 4 in the interval [−q_max, q_max], where q_max = (2^31 − 48)/25.

Since these algorithms require only the lower half of a product, other optimizations for integer multiplication apply here too. For example, applying strength reduction to the C loop

signed long i, imax;
for (i = 0; i < imax; i++) {
    if ((i % 100) == 0) {
        ...
    }
}

might yield (** denotes exponentiation)

const unsigned long inv = (19*2**32 + 1)/25;
const unsigned long qmax = (2**31 - 48)/25;
unsigned long test = qmax;   /* test = inv*i + qmax mod 2**32 */
for (i = 0; i < imax; i++, test += inv) {
    if (test <= 2*qmax && (test & 3) == 0) {
        ...
    }
}

No explicit multiplication or division remains.

10 Implementation in GCC

We have implemented the algorithms for constant divisors in the freely available GCC compiler [21], by extending its machine and language independent internal code generation. We also made minor machine dependent modifications to some of the machine description, or md, files to get optimal code. All languages and almost all processors supported by GCC benefit. Our changes are scheduled for inclusion in GCC 2.6.

To generate code for division of N-bit quantities, the CHOOSE_MULTIPLIER function needs to perform (2N)-bit arithmetic. This makes that procedure more complex than it might appear in Figure 6.2. Optimal selection of instructions depending on the bitsize of the operation is a tricky problem that we spent quite some time on. For some architectures, it is important to select a multiplication instruction that has the smallest available precision. On other architectures, the multiplication can be performed faster using a sequence of additions, subtractions, and shifts.

We have not implemented any algorithm for run time invariant divisors. Only a few architectures (AMD 29050, Intel x86, Motorola 68k & 88110, and to some extent IBM POWER) have adequate hardware support to make such an implementation viable, i.e., an instruction that can be used for integer logarithm computation, and a (2N)-bit/N-bit divide instruction. Even with hardware support, one must be careful that the transformation really improves the code; e.g., a loop might need to be executed many times before the faster loop body outweighs the cost of the multiplier computation in the loop header.

11 Results

Figure 11.1 has an example with compile time constant divisor that gets drastically faster on all recent processor implementations. The program converts a binary number to a decimal string. It calculates one quotient and one remainder per output digit. Table 11.1 shows the generated assembler codes for Alpha, MIPS, POWER, and SPARC. There is no explicit division.
Although initially computed separately, the quotient and remainder calculations have been combined (by GCC's common subexpression elimination pass). The unsigned int data type has 32 bits on all four architectures, but Alpha is a 64-bit architecture. The Alpha code is longer than the others because it multiplies (2^34 + 1)/5 by x using

4·[65537·(257·(4·[4·(4·x − x) + x] − x))] + x

instead of the slower, 23 cycle, mulq. This illustrates that the multiplications needed by these algorithms can sometimes be computed quickly using a sequence of shifts, adds, and subtracts [5], since multipliers for small constant divisors have regular binary patterns.

Table 11.2 compares the timing on some processor implementations for the radix conversion routine, with and without the division elimination algorithms. The number converted was a full 32-bit number, sufficiently large to hide procedure calling overhead from the measurements. We also ran the integer benchmarks from SPEC 92. The improvement was negligible for most of the programs; the best improvement seen was only about 3%. Some benchmarks that involve hashing show improvements up to about 30%. We anticipate significant improvements on some number theoretic codes.

References

[1] Robert Alverson. Integer division using reciprocals. In Peter Kornerup and David W. Matula, editors, Proceedings 10th Symposium on Computer Arithmetic, Grenoble, France, June 1991.

[2] Ehud Artzy, James A. Hinds, and Harry J. Saal. A fast division technique for constant divisors. CACM, 19(2):98-101, February 1976.

[3] Henry G. Baker. Computing A*B (mod N) efficiently in ANSI C. ACM SIGPLAN Notices, 27(1):95-98, January 1992.

[4] H.B. Bakoglu, G.F. Grohoski, and R.K. Montoye. The IBM RISC System/6000 processor: Hardware overview. IBM Journal of Research and Development, 34(1):12-22, January 1990.

[5] Robert Bernstein. Multiplication by integer constants. Software Practice and Experience, 16(7):641-652, July 1986.

[6] Raymond T. Boute. The Euclidean definition of the functions div and mod.
ACM Transactions on Programming Languages and Systems, 14(2):127-144, April 1992.

[7] A.P. Chang. A note on the modulo operation. SIGPLAN Notices, 20(4):19-23, April 1985.

[8] Digital Equipment Corporation. DECchip 21064-AA Microprocessor, Hardware Reference Manual, 1st edition, October 1992.

[9] Intel Corporation, Santa Clara, CA. 386 DX Microprocessor Programmer's Reference Manual.

[10] Intel Corporation, Santa Clara, CA. Intel486 Microprocessor Family Programmer's Reference Manual.

[11] David H. Jacobsohn. A combinatoric division algorithm for fixed-integer divisors. IEEE Trans. Comp., C-22(6), June 1973.

[12] Gerry Kane. MIPS RISC Architecture. Prentice Hall, Englewood Cliffs, NJ, 1989.

#define BUFSIZE 50

char *decimal (unsigned int x)
{
  static char buf[BUFSIZE];
  char *bp = buf + BUFSIZE - 1;

  *bp = 0;
  do {
    *--bp = '0' + x % 10;
    x /= 10;
  } while (x != 0);
  return bp;            /* Return pointer to first digit */
}

Figure 11.1: Radix conversion code
Table 11.1: Code generated by our GCC for radix conversion (Alpha, MIPS, POWER, and SPARC assembler listings)

Architecture/Implementation        MHz   Time with division performed   Time with division eliminated   Speedup ratio
Motorola MC68020 [18, pp. 9-22]     …     …     …     …
Motorola MC…                        …     …     …     …
SPARC Viking [20]                   …     …     …     …
HP PA…                              …     …     …     …
MIPS R3000 [12]                     …     …     …     …
MIPS R4000 [17]                     …     …     …     …
POWER/RIOS I [4, 22]                …     …     …     …
DEC Alpha [8]                       …     …     …     …*

*This time difference is artificial. The Alpha architecture has no integer divide instruction, and the DEC library functions for division are slow.

Table 11.2: Timing (microseconds) for radix conversion with and without division elimination

[13] Donald E. Knuth. An empirical study of FORTRAN programs. Technical Report CS-186, Computer Science Department, Stanford University, 1971. Stanford artificial intelligence project memo AIM-137.

[14] Donald E. Knuth. Seminumerical Algorithms, volume 2 of The Art of Computer Programming. Addison-Wesley, Reading, MA, 2nd edition, 1981.

[15] Shuo-Yen Robert Li. Fast constant division routines. IEEE Trans. Comp., C-34(9), September 1985.

[16] Daniel J. Magenheimer, Liz Peters, Karl Pettis, and Dan Zuras. Integer multiplication and division on the HP Precision Architecture. In Proceedings Second International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS II). ACM, 1987. Published as SIGPLAN Notices, Volume 22, No. 10, October 1987.

[17] MIPS Computer Systems, Inc, Sunnyvale, CA. MIPS R4000 Microprocessor User's Manual, 1991.

[18] Motorola, Inc. MC68020 32-Bit Microprocessor User's Manual, 2nd edition.

[19] Motorola, Inc.
PowerPC 601 RISC Microprocessor User's Manual.

[20] SPARC International, Inc., Menlo Park, CA. The SPARC Architecture Manual, Version 8, 1992.

[21] Richard M. Stallman. Using and Porting GCC. The Free Software Foundation, Cambridge, MA.

[22] Henry Warren. Predicting Execution Time on the IBM RISC System/6000. IBM, Preliminary Version.
Binary to decimal 2/9/9 Binary number system Computer (electronic) systems prefer binary numbers Binary number: represent a number in base-2 Binary numbers 2 3 + 7 + 5 Some terminology Bit: a binary digit ( or ) Hexadecimal Unit 2: Number Systems, Codes and Logic Functions Unit 2: Number Systems, Codes and Logic Functions Introduction A digital computer manipulates discrete elements of data and that these elements are represented in the binary forms. Operands used for calculations Fixed-Point Arithmetic Fixed-Point Arithmetic Fixed-Point Notation A K-bit fixed-point number can be interpreted as either: an integer (i.e., 20645) a fractional number (i.e., 0.75) 2 1 Integer Fixed-Point Representation N-bit The mathematics of RAID-6 The mathematics of RAID-6 H. Peter Anvin First version 20 January 2004 Last updated 20 December 2011 RAID-6 supports losing any two drives. syndromes, generally referred P and Q. The way Homework 5 Solutions Homework 5 Solutions 4.2: 2: a. 321 = 256 + 64 + 1 = (01000001) 2 b. 1023 = 512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = (1111111111) 2. Note that this is 1 less than the next power of 2, 1024, which Arithmetic Operations Arithmetic Operations Dongbing Gu School of Computer Science and Electronic Engineering University of Essex UK Spring 2013 D. Gu (Univ. of Essex) Arithmetic Operations Spring 2013 1 / 34 Outline 1 Introduction Modelling and Resolving Software Dependencies June 15, 2005 Abstract Many Linux istributions an other moern operating systems feature the explicit eclaration of (often complex) epenency relationships between the pieces of software Instruction Set Architecture (ISA) Instruction Set Architecture (ISA) * Instruction set architecture of a machine fills the semantic gap between the user and the machine. 
* ISA serves as the starting point for the design of a new machine Numerical Matrix Analysis Numerical Matrix Analysis Lecture Notes #10 Conditioning and / Peter Blomgren, blomgren.peter@gmail.com Department of Mathematics and Statistics Dynamical Systems Group Computational Sciences Round-off errors CHAPTER 5 Round-off errors In the two previous chapters we have seen how numbers can be represented in the binary numeral system and how this is the basis for representing numbers in computers. Since any 11 Ideals. 11.1 Revisiting Z 11 Ideals The presentation here is somewhat different than the text. In particular, the sections do not match up. We have seen issues with the failure of unique factorization already, e.g., Z[ 5] = O Q( Introduction Number Systems and Conversion UNIT 1 Introduction Number Systems and Conversion Objectives 1. Introduction The first part of this unit introduces the material to be studied later. In addition to getting an overview of the material Properties of Real Numbers 16 Chapter P Prerequisites P.2 Properties of Real Numbers What you should learn: Identify and use the basic properties of real numbers Develop and use additional properties of real numbers Why you should The mathematics of RAID-6 The mathematics of RAID-6 H. Peter Anvin 1 December 2004 RAID-6 supports losing any two drives. The way this is done is by computing two syndromes, generally referred P and Q. 1 A. Unit 5 Central Processing Unit (CPU) Unit 5 Central Processing Unit (CPU) Introduction Part of the computer that performs the bulk of data-processing operations is called the central processing unit (CPU). It consists of 3 major parts: Register A New Vulnerable Class of Exponents in RSA A ew Vulnerable Class of Exponents in RSA Aberrahmane itaj Laboratoire e Mathmatiues icolas Oresme Universit e Caen, France nitaj@math.unicaen.fr Abstract Let = p be, Basic Computer Organization SE 292 (3:0) High Performance Computing L2: Basic Computer Organization R. 
Govindarajan govind@serc Basic Computer Organization Main parts of a computer system: Processor: Executes programs Main memory: Chapter 4. Computer Arithmetic Chapter 4 Computer Arithmetic 4.1 Number Systems A number system uses a specific radix (base). Radices that are power of 2 are widely used in digital systems. These radices include binary (base 2), quaternary
http://docplayer.net/409347-Division-by-invariant-integers-using-multiplication.html
CC-MAIN-2018-26
refinedweb
9,421
63.09
Basics of Variable Storage for C Programming

Digital storage is measured in bytes. Though it appears neatly organized in the C programming language, all the information stored inside memory is simply a mass of data, bits piled upon bits, bytes upon bytes. It's up to the software to make sense of all that.

Introduction to variable storage

In C programming, data is categorized by storage type (char, int, float, or double) and further classified by keyword (long, short, signed, or unsigned). Despite the chaos inside memory, your program's storage is organized into these values, ready for use in your code. Inside a running program, a variable is described by these attributes:

Name: the name you give the variable. The name is used only in your code, not when the program runs.

Type: one of the C language's variable types: char, int, float, and double.

Contents: set in your program when a variable is assigned a value. Though data at the variable's storage location may exist beforehand, it's considered garbage, and the variable is considered uninitialized until it's assigned a value.

Location: an address, a spot inside the device's memory. This aspect of a variable is something you don't need to dictate; the program and operating system negotiate where information is stored internally. When the program runs, it uses the location to access a variable's data.

Of these aspects, the variable's name, type, and contents are already known to you. The variable's location can also be gathered. Not only that, but the location can be manipulated, which is the inspiration behind pointers.

How to read a variable's size

How big is a char? How long is a long? Only the device you're programming knows the exact storage size of C's standard variables. The listing How Big Is a Variable? uses the sizeof operator to determine how much storage each C language variable type occupies in memory.

HOW BIG IS A VARIABLE?
#include <stdio.h>

int main()
{
    char c = 'c';
    int i = 123;
    float f = 98.6;
    double d = 6.022E23;

    /* sizeof yields a size_t; cast so it matches the %u specifier */
    printf("char\t%u\n", (unsigned)sizeof(c));
    printf("int\t%u\n", (unsigned)sizeof(i));
    printf("float\t%u\n", (unsigned)sizeof(f));
    printf("double\t%u\n", (unsigned)sizeof(d));
    return(0);
}

Exercise 1: Type the source code from How Big Is a Variable? into your editor. Build and run to see the size of each variable type. Here's the output:

char 1
int 4
float 4
double 8

The sizeof keyword isn't a function. It's more of an operator. Its argument is a variable name. The value that's returned is of the C language variable type known as size_t. The size_t variable is a typedef of another variable type, such as an unsigned int on a PC or a long unsigned int on other computer systems. The bottom line is that the size indicates the number of bytes used to store that variable. Arrays are also variables in C, and sizeof works on them.

HOW BIG IS AN ARRAY?

#include <stdio.h>

int main()
{
    char string[] = "Does this string make me look fat?";

    printf("The string \"%s\" has a size of %u.\n",
        string, (unsigned)sizeof(string));
    return(0);
}
https://www.dummies.com/programming/c/basics-of-variable-storage-for-c-programming/
CC-MAIN-2019-47
refinedweb
525
66.64
For a feature component, such as language bindings and examples, there are two ways to place it:

1. Integrated with MXNet. We included most components this way.
2. Have each component in a separate repo, such as mxnet-onnx, sockeye

Pros:
1. Easy to install. Users only need to install mxnet
2. Easy to develop the components. We may release a component faster than mxnet, which is useful for early-stage development.

Cons:
1. Everyone pushes code into a single repo, which makes code reviewing / merging harder
2. Releasing a separate project is nontrivial, especially if it contains C++ code. Even for pure Python code, it is nontrivial to specify the mxnet dependencies, given there is mxnet, mxnet-mkl, mxnet-cu80, cu90 ...

In general, I feel we can create separate repos for components in their early stage, but include them back into mxnet once they are mature.

> On Feb 23, 2018, at 10:01 AM, Marco de Abreu <marco.g.abreu@googlemail.com> wrote:
>
> Good point, Mu! I think this discussion could be taken one step further
> into re-thinking how we version the components of MXNet. At the moment
> everything is covered by one version, but this could bring the constraints
> you mentioned. Another example is the Scala namespace change. We have to
> hold on doing that change until we do a major version change - something
> nobody here would like to do just because of a namespace change. Maybe we
> could modularize these third party components and language bindings and
> then version each of them separately to the core of MXNet.
>
> Best regards,
> Marco
>
> On 23.02.2018 at 6:54 PM, "Li, Mu" <mli@amazon.com> wrote:
>
> A general concern arises if we want to include a package under active
> development into MXNet. I saw ONNX make a lot of progress these days, such
> as control flow, while none of us participated in it. It worries me that
> mxnet's release may need to be correlated to the onnx version. How do other
> frameworks handle it?
Caffe2 and PyTorch should be the two that support > onnx most well. > >> On Feb 22, 2018, at 5:23 PM, Roshani Nagmote <roshaninagmote2@gmail.com> > wrote: >> >> Hi Marco, >> >> Good question. ONNX models come with a version number in the model > protobuf >> file. We can make use of that field when importing into MXNet. >> >> You can see the discussion and design of versioning policies in ONNX here: >> >> >> - Roshani >> >> >> On Thu, Feb 22, 2018 at 5:21 PM, Naveen Swamy <mnnaveen@gmail.com> wrote: >> >>> If you train with a newer version of MXNet and try running on a older >>> version of MXNet, it might not already work today, I am not sure if we >>> want to support such use-cases. This is tangential to this piece of work >>> >>> If ONNX were to update their version, I think the right place to keep >>> future versions of ONNX compatible should be in ONNX by providing a tool > to >>> move from ONNX.v0 to ONNX.v1. so that various framework converters always >>> move with the latest version of ONNX. >>> >>> ONNX models I believe already contains the ONNX version with which it was >>> built. >>> >>> >>> On Thu, Feb 22, 2018 at 4:38 PM, Marco de Abreu < >>> marco.g.abreu@googlemail.com> wrote: >>> >>>> Hello >>>>>> >>>>> >>>> >>>
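Roshani's point above - that ONNX models carry a version number in the model protobuf file - suggests one way to decouple component releases from the core release: have the importer compare the recorded version against the range it supports and fail fast. A hypothetical sketch (the function names and the supported range are invented for illustration; this is not the actual MXNet/ONNX importer API):

```python
def can_import(model_opset, supported_min=1, supported_max=6):
    """Return True if the opset recorded in the model's protobuf
    falls inside the range this (hypothetical) importer supports."""
    return supported_min <= model_opset <= supported_max

def check_model(model_opset):
    """A converter would read the opset from the model file's version
    field and fail fast with a clear message instead of a cryptic error."""
    if not can_import(model_opset):
        raise ValueError(
            "ONNX opset %d is outside the supported range" % model_opset)
    return model_opset
```

With a guard like this, each framework release only has to declare which model versions it accepts, rather than tracking every upstream release in lockstep.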
http://mail-archives.apache.org/mod_mbox/mxnet-dev/201802.mbox/%3C72E74803-88CA-4ACF-917F-51C13A2752E9@amazon.com%3E
CC-MAIN-2021-31
refinedweb
545
66.54
@farmerpaul: check out the DocBlockr plugin ()

I came to say what zee said. Python indentation is a bit annoying. I tried turning auto indentation off, thinking that without it pressing Return would at least keep the current indentation level (basically the only kind of auto-indent I really want), but it turns out that no, without auto-indent we always go back to the beginning of the line. So I'm stuck with this new auto-indent.

Some aspects of this don't seem to work for me. Specifically:

/**
 * Foo bar<<enter>>
 */

-- becomes --

/**
 * Foo bar
 * |
 */

When I try this, I do not get the second line as described

I am very pleased that you added this Ruby expansion. But it should also work when a word is selected (like in TextMate).

+1 on this one - Python indenting seemed like it behaved better prior to the recent changes...

Yep, Python is royally fubarred

Agreed! I use the same style of coding for classes and functions, and this is a little annoying to correct all the time. Although I can understand that it is difficult to provide the best solution for everyone, I have a feeling that the first example you gave is a much more common coding style than the second indentation style. Best style for me would be: don't indent unless a brace, bracket or parenthesis has been opened but not been closed.

2173 should address the above issues - please let me know if this isn't the case.

HTML and PHP seem to be working for me! Yay!

Python works again

Cheerleading too soon

if 1:
    if 1:

if current < 8:
    current = 8
s.set("font_size", current)
sublime.save_settings("Base File.sublime-settings")

class ResetFontSizeCommand(sublime_plugin.ApplicationCommand):
    def run(self):
        s = sublime.load_settings("Base File.sublime-settings")

Try putting the cursor in front of class and hitting enter

class

Thanks @C0D312! Amazing.

Regression in c/c++ indentation. Pressing enter here:

/* foo
   bar */
if (1)|

indents according to the second line of the comment which is now closed.
(I hope Jon has good test coverage for this, because it seems it's almost impossible to fix something without regressing other stuff!)
https://forum.sublimetext.com/t/dev-build-2172/4150/20
CC-MAIN-2016-36
refinedweb
363
64.3
poll a REST API. The sender REST adapter in polling mode has been supported since release 7.31 SP16 / 7.4 SP11. Scenario We would like to frequently poll video information from Google’s Youtube REST API, and store the same on a file system. Furthermore, the resultset should be split into individual messages per video ID, and already polled results should be discarded. In the SAP Process Integration Designer perspective of the NetWeaver Developer Studio (NWDS), I have defined an Integration Flow with a REST sender channel and a file receiver adapter. Let’s focus on the configuration of the sender channel. Configuring the REST sender channel Double-click on the sender adapter to open the channel editor. Select the REST adapter type, and from the Message Protocol drop down menu the entry REST Polling. Option 1: Incremental request based on timestamp of last call In order to better understand the configuration below, let’s take a look at a sample response in JSON format. The REST API returns an array of items each having a unique ID stored in the etag field. First switch to sub-tab Data Format below tab Adapter-Specific. The format is JSON, so select JSON from the Data type drop down menu. We would like to convert the JSON into XML, so select the Convert JSON to XML check box, and add a wrapper element to ensure that the converted XML format contains one root element only. As mentioned below, the message should be split into individual messages per video ID. Select Split Result into Multiple Messages check box. As Array Containing Messages maintain items (See message format above). Furthermore, duplicates should be removed. Select the Filter out Duplicates check box, and maintain etag as Unique ID Element (See message format above). To place incremental requests, we would like to use the timestamp of the latest call. The value is stored between the calls and can be used in the REST URL as a placeholder with name incrementalToken. 
By the way, alternatively you can also use an XPath expression or a JSON element from the response of the last call, such as a next-page indicator or similar (shown below). From the Incremental Type drop down select the entry Timestamp of Last Call. The timestamp format of the API complies with ISO 8601, and needs to be specified as follows: yyyy-MM-ddTHH:mm:ssZ with T and Z being constants. The REST adapter follows the Joda time format, see DateTimeFormat (Joda time 2.2 API). So, we need to place the constants in quotes. Maintain the Timestamp Format attribute as yyyy-MM-dd'T'HH:mm:ss'Z', and define an initial value that complies with the format. Switch to tab HTTP Request, and maintain the Target URL as follows: <your API key>&part=id&q=SAP&maxResults=50&order=date&publishedAfter={incrementalToken}. Note that I added the placeholder incrementalToken in curly brackets, which holds the timestamp of the last call. As HTTP Operation select GET. Finally, define a polling interval, here 3600 seconds.

Option 2: Incremental request based on response content

As another option I would like to show you how to configure the incremental request in case many pages need to be requested. As you can see below, in the response two additional elements are added holding the previous (prevPageToken) and the next page token (nextPageToken). The latter can be used in the URL to gather the next response page. From the Incremental Type drop down select the entry Response Content, and maintain the incremental token element as nextPageToken. This will store the value of the nextPageToken element into the placeholder incrementalToken. From release 7.31 SP17 / 7.4 SP12 / 7.5 SP01 onwards, you can define an action in case the very last page has been reached and hence the nextPageToken element is missing.
You have the following options:

- treat as error, which is the default
- skip current poll and retry later
- use an empty token value
- use the initial token value
- use a custom token value

Here, we chose the option Use Empty Value for Token. On tab HTTP Request, maintain the Target URL as follows: <your API key>&part=id&q=SAP&maxResults=5&pageToken={incrementalToken}. Note that I added the placeholder incrementalToken in curly brackets, which holds the token of the next page.

I hope this blog was helpful to understand how to configure the REST sender channel in polling mode. If you would like to learn more, check out the other blogs in the series, accessible from the main blog PI REST Adapter – Blog Overview.

Hi Alex, thanks for the blog, it is very good to understand. However I am still confused about my Target URL. This is my target URL. The value must be an ISO 8601 formatted time, containing the time zone, e.g. 2015-06-08T11:30:00Z. So how would my URL look with the placeholder? Please suggest. Thanks, Sateesh

Hi Sateesh, similar to my sample in the blog: url: {incrementalToken} timestamp format: yyyy-MM-dd'T'HH:mm:ss'Z' Alex

Hi Alexander, I did it the same way earlier also, and I get this error: Error while processing inbound message. java.lang.RuntimeException: java.net.URISyntaxException: Illegal character in query at index 83:{incrementalToken}: java.net.URISyntaxException: Illegal character in query at index 83:{incrementalToken}: Illegal character in query at index 83:{incrementalToken} Thanks & Regards, Sateesh

Hi Alex, does our SP16 REST adapter support converting from JSON to XML with multiple data sets? I heard from some threads that SP14 & SP15 do not support multiple datasets when converting from JSON to XML. If this is also the case on SP16, how can I proceed to accept multiple datasets? Please suggest me on this. Thanks & Regards, Sateesh N

Hi Sateesh, the REST adapter does support JSON arrays for both JSON to XML and XML to JSON conversion.
There is only one restriction: when an XML element which is unbounded contains only one occurrence, it won't add the brackets [ and ] to the JSON; if there are at least two entries it will. Some REST APIs need the arrays even if the element occurs only once. This will be addressed with SP17. Alex

Thanks Alex for the clarification :)

Hello Alex, I am using this URL to call a REST web service using the REST sender poll adapter: url:{incrementalToken} In the Incremental Request tab, if I use the timestamp format this way, yyyy-MM-dd'T'HH:mm:ss'%2B02:00', then PI retrieves the latest changes from the REST service. If I use the timestamp format this way, yyyy-MM-dd'T'HH:mm:ss'Z', PI doesn't receive changes from the REST web service. Currently CET time is 2 hours ahead of UTC time; after October this year CET will change again and be only 1 hour ahead of UTC time. If that is the case I need to change the communication channel twice a year. My complaint is that I do not want to change the communication channel twice a year. Can you please give me some idea how I can put a permanent timestamp format in the Incremental Requests? Please suggest. Thank you, Sateesh

Hi Alex, polling is a great feature within the REST sender adapter. However there appears to be some issue with HTTP header parameters. Though we have set the parameter, the REST sender adapter is not passing it to the remote system. We are trying to pass some system-specific information and the call is failing because the header parameters are not passed. Is this a known issue? Are there any workarounds or any patch available? We are currently on the latest SP 11. I appreciate any response from you. Thanks -Pradeep

Hi Pradeep, did your problem get solved? We are also facing an issue with REST sender polling with adding a custom header to be passed with the request message.
regards, ashutosh Hi Ashutosh, Pradeep, this was a bug and should be solved with the latest patch, if you are on the latest patch and still face the issue, please raise an incident ticket Alex Hi Alex, Passing own Header variables with the polling request seems to work now. According to the documentation, response header variables from the REST Sender with polling should also be available as XI-Headers (similar to the receiver channel). I have tested this on two 7.5 systems with current SPs and this does not seem to be the case. There are no XI headers present inside the rest namespace from the sender channel with polling. Is there a fix for this already? I have not found one when browsing the notes. Thanks Alex, It is was a bug and i had opened a incident ticket and it is resolved in following note. ‘ We did apply this note and it has indeed fixed the issue. -Pradeep records ? Hi Alex, Is there any custom way of handling below HTTP polling scenario. I am trying to get candidates from a Recruiting system and I need to pass below values in URL:<key>&sc=<secret>&format=json &start=1&count=50&datestart=2014-01-01&dateend=2014-07-01 1. So my variables are “start” which would the next starting index i.e. 1…51…101 2. datestart would be the last time I ran this interface 3. dateend would be current time. I tried the incrementalcount approach but how do I handle current date also how do I handle both incremental date and nextPageToken ? Thanks in advance Ravijeet Hi Alex, I was trying to check how do we implement the below scenario using incremental token: If we are iterating the records page by page, say we have 2000 records and max we can get in one request is 500, the 4 request call will be as below: What value do we store in nextPageToken ? Thx in advance Ravijeet Hi Alex, Do you know whom from the SAP Product development team to reach to discuss above feasibility. Regards Ravijeet . 
Hi Ravijeet, I think there is no standard way to achieve your requirement,you may need to write adapter module. I suggest you to send an e-mail to “udo.paltzer@sap.com” he may help you to address your requirement. Regards Bhargava Krishna Hello, we have PO 7.4 SP12 and the Rest Polling Protocol is not able to choose in the Communication Channel. Do you have any Idea? Thanks and regards, Fabian Hi, We have the same issue as Fabian with same PI version 7.40 and SP12. The ‘REST Polling’ Message Protocol is missing in the sender adapter. Does anybody know if this feature is still supported with 7.40 SP12? Thanks, Filipe Hi all, Experimenting with incremental request based on response content, I couldn’t figure how to deal with XPath expression and ATOM feeds. I asked there :REST polling, Atom feed and XPath to incremental token I hope someone will enlighten me. Thanks in advance, Manu. Hi, I want to poll yesterdays performance data from our PO using the performancedataqueryservlet. I’m passing in the central adapter engine as component. In addition to that I need to specify a begin and end parameter. The request URL looks as follows: http://<host>:<j2eeport>/mdt/performancedataqueryservlet?component=<component>&begin=2016-10-17%2002:00:00.0&end=2016-10-18%2002:00:00.0 I wanted to use the incremental timestamp but only one timestamp is supported. Is there any solution to this problem? Hi Frederick, Did you get an answer for this question? I’m currently setting a REST sender which has to poll an API and I need to use Begin Date and End Date in its URL. Can you share your findings, please? Tks. Hey Rafael, I could not find a satisfying answer to my question. SRY. Hi, I need to call an. But Begin/End Date must be automatically incremented in each day, always to get the current day’s vehicle list. Can anyone suggest how to achieve this in the REST Sender with poll? Tks! I am getting an error while Polling rest APi like unsupported Media type. in below service. 
Hello, Please help to resolve below issue. Scenario = REST Webservice –> PO –> ECC I have configured Sender REST Polling Adapter. Every 30 minutes this adapter starts polling sales order from REST Service. I have selected parameter “Filter out duplicates”. So every time it polls it picks up only unique sales order. From morning it works fine but suddenly at 4:30 PM poll time it picks up all the sales order irrespective of duplicates. Please help to resolve this issue as this is a production issue. Regards, Dheeraj kumar Hi Alex, How can I manage the Incremental Content, when you have a Query in the URL?. Example: Original URL: Id, Name, from Product2&nextRecordsUrl={incrementalToken} Response 1: “nextRecordsUrl”: “/services/data/v43.0/query/01g2900000c6HnhAAE-2000” Expected Response 2: “nextRecordsUrl”: “/services/data/v43.0/query/01g2900000c6HnhAAE-4000” And so on. The next execution should be like: At the moment I have been unable to make it work. I’m doing tests with: Incremental Type: Response Content Incremental ID Element: nextRecordsUrl Action for Missing or Empty Token: Use Initial Value for Token Value: 0 With this, the channel always retrieves information for the first batch as the ID of the “nextRecordsUrl” changes on each execution instead of being maintain. Example: 1st time is “01g2900000c6HnhAAE-2000” 2nd time is “01gK0000011AgxaIAC-2000” (the ID Changed) and so on… Please advice 🙂 Thanks and Best Regards, Eric Hi Alex, I got it working for two consecutives executions, nevertheless, its always failing on the third one…. 
The current settings are:

Tab HTTP Request
Target URL: {incrementalToken}
Tab Data Format
Incremental Type: Response Content
Incremental ID element: nextRecordsUrl
Action for Missing or Empty Token: Use Initial Value for Token
Initial Value: /services/data/v43.0/query/?q=select Id, Name, ProductCode, from Product2

With this configuration the results were the following:

1st Execution: <totalSize>4952</totalSize><done>false</done><nextRecordsUrl>/services/data/v43.0/query/01gK0000011AlBCIA0-2000</nextRecordsUrl>

2nd Execution: <totalSize>4952</totalSize><done>false</done><nextRecordsUrl>/services/data/v43.0/query/01gK0000011AlBCIA0-4000</nextRecordsUrl>

So far it is working properly, but when getting to the last execution the JSON does not have the field "nextRecordsUrl":

Last Execution: "Fatal error while processing inbound message. com.sap.aii.af.lib.mp.module.ModuleException: com.sap.aii.adapter.rest.ejb.parse.InvalidJSonPath: JSON path "nextRecordsUrl" could not be found."

Any suggestion regarding why it is always failing during the third execution will be highly appreciated. Thanks and Best Regards, Eric

Hi Eric, it's actually possible to define an action if the result does not contain the expected information, here nextRecordsUrl, see Alex
https://blogs.sap.com/2015/06/26/pi-rest-adapter-polling-a-rest-api/
CC-MAIN-2019-09
refinedweb
2,459
63.49
In this article I will be showing how to start working with Telerik controls in an ASP.NET MVC project. I will be showing all the prerequisites for using a Telerik control in an ASP.NET MVC project. I will be using Visual Studio 2010 (ASP.NET MVC 2.0).

1. To start using Telerik - The first step would be to download Telerik Extensions for ASP.NET MVC by logging in to your Telerik account. () - Create your account if you are a new user.

2. Open Visual Studio 2010 - Create a new ASP.NET MVC2 Web Application. Select File -> New -> Project and from the new project dialog select ASP.NET MVC 2 Web Application -> Name it -> say - MVCwithTelerikSample

3. For this sample application - Let's not create the unit test project - So select the radio button - No, Do not create a unit Test Project.

4. Now we need to copy the dll Telerik.Web.Mvc.dll from the downloaded folder on our hard drive to the local bin folder of our project. (Check for this dll in the downloaded folder after installing Telerik, and go to subfolder Binaries\MVC2) - since we are using an MVC2 project here.

5. Add a reference to the above dll: Right-click in the Solution Explorer - Add Reference - Browse - bin (the folder where you have placed the Telerik dll) - Select Telerik.Web.Mvc.dll - Click OK

6. Include the Scripts folder from the installed Telerik location (from your hard drive) in the project. The folder name - 2010.3.1318 - highlighted above inside the Scripts folder may have a different version number if you are downloading the latest version. Go to the Scripts sub-folder in the installed Telerik location on your hard drive and drag and drop the entire folder (Ex folder name - 2010.3.1318) inside the Scripts folder in the solution explorer of your current project.

7. Include the Content folder from the installed Telerik location (from your hard drive) in the project.
Go to the Content sub-folder (refer above Screen shot) in the installed Telerik location in your harddrive Drag and drop the entire folder (Ex folder name -2010.3.1318) inside the Content folder in the solution explorer of your current project. Note - Suppose after completing the project - in future if there is an Telerik upgrade - we can just copy the new scripts folder, Content folder, along with the new dll reference. 8. Update the Web.config file In web.config file-in the namespace section - include - <add namespace="Telerik.Web.Mvc.UI" /> This will help us work with Telerik extension methods inside of all views 9. Build the Project - Do a quick build of the project which will help intellisense in Visual Studio to catch up with Extension methods. 10. We have added the Scripts and Themes to our Projects folder - Now we need to add it in the Views where we are going to use it. For this sample application, the best place to add it would be in the master Page - since it is used in all the Views. Go to Views->Shared->Site.master ->need to add .css in the head of the page ->within server tags <%--Including Telerik Style Sheets--%> <% Html.Telerik().StyleSheetRegistrar() .DefaultGroup(group => group.Add("telerik.common.min.css") .Add("telerik.vista.min.css")) .Render(); %> <%--END--%> 11. Now we need to add a ScriptRegistrar- add towards the bottom of the master page - script registrar needs to occur after all the UI extensions on the page. <% Html.Telerik().ScriptRegistrar().Render(); %> The above line of code will help to automatically look into the Scripts folder ->Version subfolder->to find the Scripts required. So - now we are done with all the prerequisites for using Telerik controls - Now let's add one of the Telerik control and see it working - 12. Open up the Default index view 13. We will add here a Telrik Menu - to show how we can work with it. 
Menu Code <% Html.Telerik().Menu() .Name("MenuID") .Items(items => { items.Add().Text("MainMenu01").Items(subItem1 => { subItem1.Add().Text("SubMenu01"); }); items.Add().Text("MainMenu02").Items(subItem1 => { subItem1.Add().Text("SubMenu01"); }); } ) .Render(); %> For any control within the Telerik extensions - We just simply need to call .Render() at the end-to output the HTML OUTPUT: Run the application - We should see the Menu Control and the Vista theme applied. In this article we have seen how we can get started with using Telerik controls in ASP.NET MVC Application. I have attached the code which I have used for this sample application. Happy Learning! Working with Telerik Controls in ASP.NET MVC2 Saving DropDownList Selected Value Across PostBack in MVC Made my day!!! Thanx Sujith - Please refer this post - This might help you! can you please add some article regarding Telerik editor. how to customize a document,how to add picture.after adding all content how to save that document in server.after that user want to retrieve the document from the server U R Welcome :) Very useful Article saurabh thanx for sharing...........
http://www.c-sharpcorner.com/uploadfile/2124ae/working-with-telerik-controls-in-Asp-Net-mvc2/
Max 5 API Reference

Max objects are written in the C language, and the Max API is C-based. You could use C++, but we don't support it at the API level. Writing a Max object in C, you have five basic tasks:

1) including the right header files (usually ext.h and ext_obex.h)
2) declaring a C structure for your object
3) writing an initialization routine called main that defines the class
4) writing a new instance routine that creates a new instance of the class, when someone makes one or types its name into an object box
5) writing methods (or message handlers) that implement the behavior of the object

Let's look at each of these in more detail. It's useful to open the simplemax example project, as we will be citing examples from it.

Most of the basic Max API is included in the files ext.h and ext_obex.h. These are essentially required for any object. Beyond this there are specific include files for more specialized objects. The header files are cross-platform.

#include "ext.h" // should always be first, followed by ext_obex.h and any other files

Basic Max objects are declared as C structures. The first element of the structure is a t_object, followed by whatever you want. The example below has one long structure member:

typedef struct _simp
{
    t_object s_obj;  // the object itself (must come first)
    long s_value;    // a value to store and print
} t_simp;

Your structure declaration will be used in the prototypes of the functions you declare, so you'll need to place it above those prototypes.

The initialization routine, which must be called main, is called when Max loads your object for the first time. In the initialization routine, you define one or more classes.
Defining a class consists of the following:

1) telling Max about the size of your object's structure and how to create and destroy an instance
2) defining methods that implement the object's behavior
3) in some cases, defining attributes that describe the object's data
4) registering the class in a name space

Here is the simp class example initialization routine:

static t_class *s_simp_class; // global pointer to our class definition that is setup in main()

int main()
{
    t_class *c;

    c = class_new("simp", (method)simp_new, (method)NULL, sizeof(t_simp), 0L, 0);

    class_addmethod(c, (method)simp_int, "int", A_LONG, 0);
    class_addmethod(c, (method)simp_bang, "bang", 0);

    class_register(CLASS_BOX, c);

    s_simp_class = c;

    return 0;
}

class_new() creates a class with the new instance routine (see below), a free function (in this case there isn't one, so we pass NULL), the size of the structure, a no-longer-used argument, and then a description of the arguments you type when creating an instance (in this case, there are no arguments, so we pass 0).

class_addmethod() binds a C function to a text symbol. The two methods defined here are int and bang.

class_register() adds this class to the CLASS_BOX name space, meaning that it will be searched when a user tries to type it into a box.

Finally, we assign the class we've created to a global variable so we can use it when creating new instances.

More complex classes will declare more methods. In many cases, you'll declare methods to implement certain API features. This is particularly true for UI objects.

The standard new instance routine allocates the memory to create an instance of your class and then initializes this instance. It then returns a pointer to the newly created object.
Here is the simp new instance routine:

void *simp_new()
{
    t_simp *x = (t_simp *)object_alloc(s_simp_class);

    x->s_value = 0;

    return x;
}

The first line uses the global variable s_simp_class we defined in the initialization routine to create a new instance of the class. Essentially, the instance is a block of memory of the size defined by the class, along with a pointer to the class that permits us to dispatch messages correctly. The next line initializes our data. More complex objects will do a lot more here, such as creating inlets and outlets. By default, the object being created will appear with one inlet and no outlets. Finally, in the last line, we return a pointer to the newly created instance.

We are now ready to define some actual behavior for our object by writing C functions that will be called when our object is sent messages. For this simple example, we will write only two functions. simp_int will be called when our object receives numbers. It will store the received number in the s_value field. simp_bang will be called when our object receives a bang. It will print the value in the Max window. So, yes, this object is pretty useless!

The C functions you write will be declared according to the arguments the message requires. All functions are passed a pointer to your object as the first argument. For a function handling the int message, a single second argument that is a long is passed. For a function handling the bang message, no additional arguments are passed.

Here is the int method:

void simp_int(t_simp *x, long n)
{
    x->s_value = n;
}

This simply copies the value of the argument to the internal storage within the instance.

Here is the bang method:

void simp_bang(t_simp *x)
{
    post("value is %ld", x->s_value);
}

The post() function is similar to printf(), but puts the text in the Max window. post() is very helpful for debugging, particularly when you cannot stop user interaction or real-time computation to look at something in a debugger.
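The name-to-handler binding that class_addmethod() performs is essentially a dispatch table keyed by message name. As a loose analogy only (this is not Max code; the class and method names are made up for illustration), the same pattern looks like this in Python:

```python
class Simp:
    """Toy analogue of the simp object: a stored value, an 'int' setter, a 'bang'."""

    def __init__(self):
        self.value = 0
        # class_addmethod() binds a message symbol to a C function;
        # here a dict binds a message name to a bound method.
        self.methods = {"int": self.on_int, "bang": self.on_bang}

    def on_int(self, n):
        self.value = n

    def on_bang(self):
        # the real object would call post() to print to the Max window
        return self.value

    def send(self, name, *args):
        # message dispatch: look up the handler by name and call it
        return self.methods[name](*args)

x = Simp()
x.send("int", 74)
x.send("bang")  # 74
```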
You can also add a float message, which is invoked when a floating-point number is sent to your object. Add the following to your initialization routine:

class_addmethod(c, (method)simp_float, "float", A_FLOAT, 0);

Then write the method that receives the floating-point value as follows:

void simp_float(t_simp *x, double f)
{
    post("got a float %.2f", f);
}

Note that A_FLOAT arguments are passed to your method as doubles.
https://cycling74.com/sdk/MaxSDK-5.1.7/html/chapter__anatomy.html
GitHub tip: Prefill a bug report
Posted: September 4, 2018. Filed under: Misc

Getting feedback from users is hard. On a platform such as Android, with apps evaluated in a couple of seconds, it's even harder. While trying to get bug reports for VlcFreemote I found a neat GitHub trick: you can prefill a bug report by using URL parameters. For example, check this link:

Awesome! Takes a second and makes life much easier for bug reporters!

Happiest bug report
Posted: September 2, 2018. Filed under: Linux

Something is wrong: I'm happy about a bug report! A few years back I developed a VLC remote-control app for Android. According to this chart, I didn't actually save any time doing so. The time I spent developing the app is more than the cumulative time I would have spent by getting up from the couch and controlling VLC manually. That said, not having to leave the coziness of a warm blanket in winter probably made it worth the investment.

Not long ago I decided to submit this app to F-Droid. I'm too cheap to pay the 20-ish dollars for the Google app store, and since I don't have any commercial interest I don't see the point. I didn't think I'd actually get any users there, but today I got my first bug report. So much happiness! You'd think I shouldn't be happy about my crappy software not working, but hey, someone actually took the time to try it out. Even more, someone cared enough to submit a bug report! Open source rules!? Fixing.

Quick refresher: argument-dependent lookup
Posted: January 4, 2017. Filed under: C++

Since I wasted a few precious minutes stuck on an ADL problem, I figured I needed a quick reminder of how it works. Check this code: does it compile?

namespace N {
    int foo() { return 0; }
}

int main() {
    return foo();
}

Of course it doesn't! You'd expect a "'foo' not declared/out of scope" error from your compiler. What about this other example?
namespace N {
    struct Dummy;

    int foo(Dummy*) { return 0; }
    int foo() { return 0; }
}

int main() {
    return foo((N::Dummy*)0);
}

You'd be tempted to say it won't work either. (Un?)fortunately, argument-dependent lookup is a thing, and the second code sample works. How? The compiler will look for foo in the global namespace, and also in the namespaces of the arguments to foo. Seeing N::Dummy in there, the compiler is allowed to peek into namespace N for the function foo. Why? Short answer: operator overloading. Long answer: check here (the "Why ADL" section is very good).

Google Test: Quarantine for tests?
Posted: December 21, 2016. Filed under: C++

GTest works wonders for C++ testing, even more so when combined with GMock. I've been using these frameworks for a few side projects. I've seen them used in large-scale projects too. In all cases, there is a very common problem for which (I think) there is no elegant solution: managing temporarily disabled tests. It may be because you found a flaky piece of code or a test that exposes a heisenbug. Maybe the test itself is just unstable, or perhaps you are using TDD and want to submit a test to your CI before its implementation is ready. In these cases, you can choose to disable the offending test or let it run, possibly halting your CI because of it. When that happens, you may be masking other, real, problems. Most people would stick a "DISABLED_" prefix before the test name to let GTest know not to run it. Maybe even stick a "// TODO: reenable" in there too. When run, GTest will generate a message to let you know there is a disabled test. Even so, I find that people, myself included, tend to forget to re-enable the disabled tests.
For one of my side projects, I hacked GTest to quarantine tests up to a certain date:

TEST(Foo, Bar) {
    QUARANTINE_UNTIL("2016/8/22");
    EXPECT_EQ(1, 2);
}

In my CI setup, that test will show a happy green (and a warning, which I will probably ignore) until the 22nd of August. By the 23rd the test will run again and fail if I haven't fixed the code. If I have indeed fixed it, it will print a warning to remind me that it's safe to delete the quarantine statement.

Is there any advantage in this approach over the usual DISABLED_ strategy? In my opinion, there is: if you ignore warnings in your tests, for whatever reason, a DISABLED_ might go unnoticed and it may hide a real problem. In the same scenario, for a quarantined test, nothing bad happens: the warning just says "you should delete this line", but the quarantined test is again part of your safety net.

How does it work? The first caveat in my article mentions it: hackishly. There are a few facilities missing in GTest to make this implementation production-ready but, ugly as it looks, it should work as intended:

#include <ctime>
#include <string>
#include <sstream>

std::string now() {
    time_t t = time(0);
    struct tm *now = localtime(&t);
    std::stringstream formatted_date;
    formatted_date << (now->tm_year + 1900) << '/'
                   << (now->tm_mon + 1) << '/'
                   << now->tm_mday;
    return formatted_date.str();
}

#define QUARANTINE_UNTIL(date_limit) \
    if (now() < date_limit) { \
        GTEST_LOG_(WARNING) << "Test under quarantine!"; \
        return; \
    } else { \
        GTEST_LOG_(WARNING) << "Quarantine expired on " date_limit; \
    }

(Note that the date limit must use the same "YYYY/M/D" format that now() produces, since the comparison is plain string ordering.)

If I find there is interest in this approach for real-world applications, I may try to come up with a nicer interface for it.

Things you should never do
Posted: December 8, 2016. Filed under: Grumpy

I think I may start a new series in my "Rants" category: things you shouldn't do. First one: never ever use "strange" characters in your wifi's AP name, where strange is defined as non-ASCII.
I made the huge mistake of choosing a name with an ñ in it, then had to spend an entire evening hacking on a printer driver with no Unicode support. No, I couldn't have changed the AP's name. That would have required me to physically connect a computer to my router, and I was too lazy to get up from the couch.

Self-reminder: setting the default boot option in UEFI
Posted: December 6, 2016. Filed under: Grumpy, Linux

Bought a new laptop (*) and I'm 100% sure I'll forget this if I don't put it here. From: to set Ubuntu as the default boot OS in a multi-OS setup (i.e., dual boot with Windows) with UEFI, go to Windows and exec (as admin):

bcdedit /set {bootmgr} path \EFI\ubuntu\grubx64.efi

Why am I using Windows, you may ask? I'm still in the process of discovering which features will be broken and which hardware will work out of the box. So far I'm actually quite surprised, with only the video card and the touchpad not working. Luckily bash doesn't use either of those. Who needs a mouse anyway?

Simple vim plugin IV: project greping
Posted: December 1, 2016. Filed under: Vim

I recently wrote about some of the utilities I created for my Vim setup. Using someone else's Vim scripts is not nearly as fun as writing your own, so I decided to also write a short summary of what it takes to get started writing Vim plugins. For this task, I decided to start with greping. Greping can be improved a bit: if you grep a lot in a project, you might find it useful to also grep the results themselves, to further refine your search. If you have your grep results in Vim itself, that is trivial. Let's start hacking something in our .vimrc file. Try this:

function! FG_Search()
    let needle = input("Search for: ")
    tabnew
    setlocal buftype=nofile bufhidden=wipe nobuflisted noswapfile nowrap
    let grepbin = 'grep -nri '
    let cmd = grepbin . ' "' . needle . '" *'
    execute '$read !' . cmd
    setlocal nomodifiable
endfunction
map <leader>s :call FG_Search()<CR>

This function should be pretty clear: it maps <leader>s (in my case, ",s") to FG_Search(). FG_Search will prompt the user for a term to grep, then search for it by executing the command. In the end the results are written to a new tab, which is declared as a temporary, non-modifiable buffer. Just paste that into your .vimrc and you're good to grep.

Extra tip: integrate this with my fast grep cache and you have a nice and quick project search integration for Vim that works even for very large projects, with tools available in most default Linux installs.
https://monoinfinito.wordpress.com/
For the program I am trying to design, I am checking that certain conditions exist in configuration files. For example, that the line:

ThisExists

is in the file, or that ThisIsFirst exists in the file followed by ThisAlsoExists somewhere later down in the file. I looked for an efficient approach for this situation but couldn't find any. My current idea is basically to iterate over the file(s) multiple times, once for each condition I want to check. So I would have functions:

def checkA(file)
def checkB(file)
...

To me this seems inefficient, as I have to iterate for every condition I want to check. Initially I thought I could iterate just once, checking each line against every condition I want to verify. But I don't think I can do that, as conditions that span multiple lines require information about more than one line at a time. Is the way I outlined the only way to do this, or is there a more efficient approach? I am trying to provide an example below.

def main():
    file = open(filename)
    result1 = checkA(file)
    file.seek(0)  # rewind, otherwise checkB iterates over an exhausted file
    result2 = checkB(file)

"""This is a single line check function"""
def checkA(file):
    conditionExists = False
    for line in file:
        if line == "SomeCondition":
            conditionExists = True
    return conditionExists

"""This is a multi line check function"""
def checkB(file):
    conditionExists = False
    conditionStarted = False
    for line in file:
        if line == "Start":
            conditionStarted = True
        elif line == "End" and conditionStarted:
            conditionExists = True
    return conditionExists

If available libraries (configparser etc.) aren't enough, I would probably use regular expressions.
import re

check_a = re.compile('^SomeCondition$', flags=re.MULTILINE)
check_b = re.compile('^Start(?:.|\n)*?End$', flags=re.MULTILINE)

def main(file_name):
    with open(file_name, 'r') as file_object:
        file_content = file_object.read()
    result_1 = bool(check_a.search(file_content))
    result_2 = bool(check_b.search(file_content))

It's not the most user-friendly approach, especially if the matching conditions are complex, but I think the pay-off for learning regex is great. xkcd tells us that regex can be both a super power and a problem.
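To address the original efficiency concern more directly: the multi-line conditions don't actually force multiple passes, because each check only needs a little state carried between lines. A sketch of merging both of the question's checks into one loop (same condition names as the question; stripped-line matching assumed):

```python
def check_all(lines):
    """Single pass over the lines: track the state of every condition at once."""
    has_some_condition = False   # checkA: a literal line exists
    started = False              # checkB: saw "Start"
    has_start_end = False        # checkB: "Start" followed later by "End"
    for raw in lines:
        line = raw.strip()
        if line == "SomeCondition":
            has_some_condition = True
        if line == "Start":
            started = True
        elif line == "End" and started:
            has_start_end = True
    return has_some_condition, has_start_end

check_all(["Start", "SomeCondition", "End"])  # (True, True)
check_all(["End", "Start"])                   # (False, False)
```

Each condition becomes a tiny state machine, so adding a new check means adding a flag and a branch, not another pass over the file.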
https://codedump.io/share/ICFfsuXQqbwN/1/advice-on-how-i-should-structure-program
4: WAP to count the number of alphabets, digits, special characters, blank spaces, and words in a sentence

SOURCE CODE:

import java.io.*;

class Space {
    public static void main(String args[]) throws IOException {
        String s; // String variable to hold the sentence
        BufferedReader r = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Enter the sentence"); // prompt the user
        s = r.readLine(); // read the sentence into the variable
        int i = 1;  // loop counter
        int a = 0;  // count of alphabets
        int d = 0;  // count of digits
        int sc = 0; // count of special characters
        int bs = 0; // count of blank spaces
        int w = 1;  // count of words
        char ch;    // a single character of the sentence
        int l = s.length(); // length of the sentence

        for (i = 0; i < l; i++) {
            ch = s.charAt(i);
            if (Character.isLetter(ch))
                ++a;
            else if (Character.isDigit(ch))
                ++d;
            else if (ch == ' ')
                ++bs;
            else
                ++sc;
        }

        System.out.println("There are " + a + " Alphabet");
        System.out.println("There are " + d + " Digit");
        System.out.println("There are " + sc + " Special Character");
        System.out.println("There are " + bs + " Blank Spaces");

        for (i = 0; i < l; i++) {
            if (s.charAt(i) == ' ')
                w++;
        }
        System.out.println("There are " + w + " Word");
    } // END OF MAIN
} // END OF CLASS

OUTPUT:

C:\A>javac Space.java
C:\A>java Space
Enter the sentence
January2018 #$ ,
There are 7 Alphabet
There are 4 Digit
There are 3 Special Character
There are 2 Blank Spaces
There are 3 Word

Also view: Write a program to create a registration form using AWT.
Also view: Write a java program to create a window using swing.

Comments:
- I think this is the best post till now. Thanks a lot.
- Your method of explaining the whole thing in this post is, in fact, pleasant; everyone can effortlessly follow it. Thanks a lot.
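As a footnote to the article above: the same counts fall out of a few lines of Python, which also shows the one caveat of the word logic, counting words as spaces + 1 assumes no repeated spaces (this snippet is illustrative, not part of the original article):

```python
s = "January2018 #$ ,"

alpha = sum(c.isalpha() for c in s)        # letters
digit = sum(c.isdigit() for c in s)        # digits
blank = s.count(" ")                       # blank spaces
special = len(s) - alpha - digit - blank   # everything else
words = blank + 1                          # same rule as the Java version

# str.split() collapses repeated spaces, so len(s.split()) is the more
# robust word count when the sentence may contain consecutive blanks.
print(alpha, digit, special, blank, words)  # 7 4 3 2 3
```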
https://technotaught.com/4-wap-to-count-the-number-of-alphabet-digit-special-character-blank-space-and-words-in-the-sentence/
In this blogpost we describe the recently proposed Stochastic Weight Averaging (SWA) technique [1, 2], and its new implementation in torchcontrib. SWA is a simple procedure that improves generalization in deep learning over Stochastic Gradient Descent (SGD) at no additional cost, and can be used as a drop-in replacement for any other optimizer in PyTorch. SWA has a wide range of applications and features:

- SWA has been shown to significantly improve generalization in computer vision tasks, including VGG, ResNets, Wide ResNets and DenseNets on ImageNet and CIFAR benchmarks [1, 2].
- SWA provides state-of-the-art performance on key benchmarks in semi-supervised learning and domain adaptation [2].
- SWA is shown to improve the stability of training as well as the final average rewards of policy-gradient methods in deep reinforcement learning [3].
- An extension of SWA can obtain efficient Bayesian model averaging, as well as high quality uncertainty estimates and calibration in deep learning [4].
- SWA for low precision training, SWALP, can match the performance of full-precision SGD even with all numbers quantized down to 8 bits, including gradient accumulators [5].

In short, SWA performs an equal average of the weights traversed by SGD with a modified learning rate schedule (see the left panel of Figure 1). SWA solutions end up in the center of a wide flat region of loss, while SGD tends to converge to the boundary of the low-loss region, making it susceptible to the shift between train and test error surfaces (see the middle and right panels of Figure 1).

Figure 1. Illustrations of SWA and SGD with a Preactivation ResNet-164 on CIFAR-100. Please see [1] for details on how these figures were constructed.

With our new implementation in torchcontrib, using SWA is as easy as using any other optimizer in PyTorch:

from torchcontrib.optim import SWA

...
...

# training loop
base_opt = torch.optim.SGD(model.parameters(), lr=0.1)
opt = torchcontrib.optim.SWA(base_opt, swa_start=10, swa_freq=5, swa_lr=0.05)
for _ in range(100):
    opt.zero_grad()
    loss_fn(model(input), target).backward()
    opt.step()
opt.swap_swa_sgd()

You can wrap any optimizer from torch.optim using the SWA class, and then train your model as usual. When training is complete you simply call swap_swa_sgd() to set the weights of your model to their SWA averages. Below we explain the SWA procedure and the parameters of the SWA class in detail. We emphasize that SWA can be combined with any optimization procedure, such as Adam, in the same way that it can be combined with SGD.

Is this just Averaged SGD?

At a high level, averaging SGD iterates dates back several decades in convex optimization [6, 7], where it is sometimes referred to as Polyak-Ruppert averaging, or averaged SGD. But the details matter. Averaged SGD is often employed in conjunction with a decaying learning rate and an exponentially moving average, typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. In deep learning, this form of averaged SGD smooths the trajectory of SGD iterates, but does not perform very differently. By contrast, SWA is focused on an equal average of SGD iterates with a modified cyclical or high constant learning rate, and exploits the flatness of training objectives [8] specific to deep learning for improved generalization.

Stochastic Weight Averaging

There are two important ingredients that make SWA work. First, SWA uses a modified learning rate schedule so that SGD continues to explore the set of high-performing networks instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time, and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see Figure 2 below).
The second ingredient is to average the weights of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained at the end of every epoch within the last 25% of training time (see Figure 2).

Figure 2. Illustration of the learning rate schedule adopted by SWA. A standard decaying schedule is used for the first 75% of training, and then a high constant value is used for the remaining 25%. The SWA averages are formed during the last 25% of training.

In our implementation, the auto mode of the SWA optimizer runs the procedure described above. To run SWA in auto mode you just need to wrap your optimizer base_opt of choice (SGD, Adam, or any other torch.optim.Optimizer) with SWA(base_opt, swa_start, swa_freq, swa_lr). After swa_start optimization steps the learning rate will be switched to a constant value swa_lr, and at the end of every swa_freq optimization steps a snapshot of the weights will be added to the SWA running average. Once you run opt.swap_swa_sgd(), the weights of your model are replaced with their SWA running averages.

Batch Normalization

One important detail to keep in mind is batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training, and so the batch normalization layers do not have the activation statistics computed after you reset the weights of your model with opt.swap_swa_sgd(). To compute the activation statistics you can just make a forward pass on your training data using the SWA model once training is finished. In the SWA class we provide a helper function opt.bn_update(train_loader, model). It updates the activation statistics for every batch normalization layer in the model by making a forward pass on the train_loader data loader. You only need to call this function once, at the end of training.
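The equal running average at the heart of the procedure is a one-line update: after n snapshots have been averaged, a new snapshot w contributes as w_swa <- (w_swa * n + w) / (n + 1). A minimal sketch, with plain Python lists standing in for weight tensors:

```python
def update_swa(swa, snapshot, n_models):
    """Equal running average: swa <- (swa * n + w) / (n + 1), element-wise."""
    return [(a * n_models + w) / (n_models + 1)
            for a, w in zip(swa, snapshot)]

# One snapshot per averaging point (e.g. every swa_freq steps).
snapshots = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
swa = [0.0, 0.0]
for n, w in enumerate(snapshots):
    swa = update_swa(swa, w, n)
# swa is now the element-wise mean of all snapshots: [3.0, 4.0]
```

The incremental form means only one extra copy of the weights has to be kept in memory, no matter how many snapshots are averaged.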
Advanced Learning-Rate Schedules

SWA can be used with any learning rate schedule that encourages exploration of the flat region of solutions. For example, you can use cyclical learning rates in the last 25% of the training time instead of a constant value, and average the weights of the networks corresponding to the lowest values of the learning rate within each cycle (see Figure 3).

Figure 3. Illustration of SWA with an alternative learning rate schedule. Cyclical learning rates are adopted in the last 25% of training, and models for averaging are collected at the end of each cycle.

In our implementation you can implement custom learning rate and weight averaging strategies by using SWA in the manual mode. The following code is equivalent to the auto mode code presented in the beginning of this blogpost:

opt = torchcontrib.optim.SWA(base_opt)
for i in range(100):
    opt.zero_grad()
    loss_fn(model(input), target).backward()
    opt.step()
    if i > 10 and i % 5 == 0:
        opt.update_swa()
opt.swap_swa_sgd()

In manual mode you don't specify swa_start, swa_lr and swa_freq; you just call opt.update_swa() whenever you want to update the SWA running averages (for example, at the end of each learning rate cycle). In manual mode SWA doesn't change the learning rate, so you can use any schedule you want, as you would normally do with any other torch.optim.Optimizer.

Why does it work?

SGD converges to a solution within a wide flat region of loss. The weight space is extremely high-dimensional, and most of the volume of the flat region is concentrated near its boundary, so SGD solutions will always be found near the boundary of the flat region of the loss. SWA, on the other hand, averages multiple SGD solutions, which allows it to move towards the center of the flat region. We expect solutions that are centered in the flat region of the loss to generalize better than those near the boundary. Indeed, train and test error surfaces are not perfectly aligned in the weight space.
Solutions that are centered in the flat region are not as susceptible to the shifts between train and test error surfaces as those near the boundary. In Figure 4 below we show the train loss and test error surfaces along the direction connecting the SWA and SGD solutions. As you can see, while the SWA solution has a higher train loss than the SGD solution, it is centered in the region of low loss and has a substantially better test error.

Figure 4. Train loss and test error along the line connecting the SWA solution (circle) and the SGD solution (square). The SWA solution is centered in a wide region of low train loss, while the SGD solution lies near the boundary. Because of the shift between train loss and test error surfaces, the SWA solution leads to much better generalization.

Examples and Results

We released a GitHub repo here with examples of using the torchcontrib implementation of SWA for training DNNs. For example, these examples can be used to achieve the following results on CIFAR-100:

Semi-Supervised Learning

In a follow-up paper SWA was applied to semi-supervised learning, where it illustrated improvements beyond the best reported results in multiple settings. For example, with SWA you can get 95% accuracy on CIFAR-10 if you only have the training labels for 4k training data points (the previous best reported result on this problem was 93.7%). This paper also explores averaging multiple times within epochs, which can accelerate convergence and find still flatter solutions in a given time.

Figure 5. Performance of fast-SWA on semi-supervised learning with CIFAR-10. fast-SWA achieves record results in every setting considered.

Calibration and Uncertainty Estimates

SWA-Gaussian (SWAG) is a simple, scalable and convenient approach to uncertainty estimation and calibration in Bayesian deep learning.
Similarly to SWA, which maintains a running average of SGD iterates, SWAG estimates the first and second moments of the iterates to construct a Gaussian distribution over weights. SWAG distribution approximates the shape of the true posterior: Figure 6 below shows the SWAG distribution on top of the posterior log-density for PreResNet-164 on CIFAR-100. Figure 6. SWAG distribution on top of posterior log-density for PreResNet-164 on CIFAR-100. The shape of SWAG distribution is aligned with the posterior. Empirically, SWAG performs on par or better than popular alternatives including MC dropout, KFAC Laplace, and temperature scaling on uncertainty quantification, out-of-distribution detection, calibration and transfer learning in computer vision tasks. Code for SWAG is available here. Reinforcement Learning In another follow-up paper SWA was shown to improve the performance of policy gradient methods A2C and DDPG on several Atari games and MuJoCo environments. Low Precision Training We can filter through quantization noise by combining weights that have been rounded down with weights that have been rounded up. Moreover, by averaging weights to find a flat region of the loss surface, large perturbations of the weights will not affect the quality of the solution (Figures 7 and 8). Recent work shows that by adapting SWA to the low precision setting, in a method called SWALP, one can match the performance of full-precision SGD even with all training in 8 bits [5]. This is quite a practically important result, given that (1) SGD training in 8 bits performs notably worse than full precision SGD, and (2) low precision training is significantly harder than predictions in low precision after training (the usual setting). For example, a ResNet-164 trained on CIFAR-100 with float (16-bit) SGD achieves 22.2% error, while 8-bit SGD achieves 24.0% error. By contrast, SWALP with 8 bit training achieves 21.8% error. Figure 7. 
Quantizing in a flat region can still provide solutions with low loss.

Figure 8. Low-precision SGD training (with a modified learning rate schedule) and SWALP.

Conclusion

One of the greatest open questions in deep learning is why SGD manages to find good solutions, given that the training objectives are highly multimodal and there are in principle many settings of parameters that achieve no training loss but poor generalization. By understanding geometric features such as flatness, which relate to generalization, we can begin to resolve these questions and build optimizers that provide even better generalization, along with many other useful features, such as uncertainty representation. We have presented SWA, a simple drop-in replacement for standard SGD, which can in principle benefit anyone training a deep neural network. SWA has been demonstrated to have strong performance in a number of areas, including computer vision, semi-supervised learning, reinforcement learning, uncertainty representation, calibration, Bayesian model averaging, and low-precision training. We encourage you to try out SWA! Using SWA is now as easy as using any other optimizer in PyTorch. And even if you have already trained your model with SGD (or any other optimizer), it's very easy to realize the benefits of SWA by running it for a small number of epochs starting from a pre-trained model.
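Those short SWA runs use the cyclical learning-rate schedule from [1]: within each cycle the rate is annealed linearly from a larger value down to a smaller one, and weights are averaged at the end of each cycle. A sketch of the schedule (names are our own):

```python
def cyclical_lr(iteration, cycle_length, lr1, lr2):
    """Linearly anneal the learning rate from lr1 down to lr2 within
    each cycle of `cycle_length` iterations, restarting at lr1."""
    t = (iteration % cycle_length) / max(cycle_length - 1, 1)
    return (1 - t) * lr1 + t * lr2

# A 5-iteration cycle starting at 0.05 and ending at 0.01:
rates = [cyclical_lr(i, 5, 0.05, 0.01) for i in range(6)]
# rates[0] == 0.05, rates[4] == 0.01, and rates[5] restarts at 0.05
```

Iterations where the rate bottoms out (the end of each cycle) are the points at which the running weight average is updated.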
- [1] Averaging Weights Leads to Wider Optima and Better Generalization; Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2018
- [2] There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average; Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson; International Conference on Learning Representations (ICLR), 2019
- [3] Improving Stability in Deep Reinforcement Learning with Weight Averaging; Evgenii Nikishin, Pavel Izmailov, Ben Athiwaratkun, Dmitrii Podoprikhin, Timur Garipov, Pavel Shvechikov, Dmitry Vetrov, Andrew Gordon Wilson; UAI 2018 Workshop: Uncertainty in Deep Learning, 2018
- [4] A Simple Baseline for Bayesian Uncertainty in Deep Learning; Wesley Maddox, Timur Garipov, Pavel Izmailov, Andrew Gordon Wilson; arXiv preprint, 2019
- [5] SWALP: Stochastic Weight Averaging in Low Precision Training; Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, Christopher De Sa; International Conference on Machine Learning (ICML), 2019
- [6] Efficient Estimations from a Slowly Convergent Robbins-Monro Process; David Ruppert; Technical report, Cornell University Operations Research and Industrial Engineering, 1988
- [7] Acceleration of Stochastic Approximation by Averaging; Boris T. Polyak and Anatoli B. Juditsky; SIAM Journal on Control and Optimization, 30(4):838-855, 1992
- [8] Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs; Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson; Neural Information Processing Systems (NeurIPS), 2018
https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/
NAME
    putenv — change or add a value to an environment

SYNOPSIS
    #include <stdlib.h>
    int putenv(char *string);

DESCRIPTION
    The putenv() function need not be thread-safe.

RETURN VALUE
    Upon successful completion, putenv() shall return 0; otherwise, it shall return a non-zero value and set errno to indicate the error.

ERRORS
    The putenv() function may fail if:

    ENOMEM  Insufficient memory was available.

    The following sections are informative.

APPLICATION USAGE
    Although the space used by string is no longer used once a new string which defines name is passed to putenv(), if any thread in the application has used getenv() to retrieve a pointer to this variable, it should not be freed by calling free(). If the changed environment variable is one known by the system (such as the locale environment variables), the application should never free the buffer used by earlier calls to putenv() for the same variable.

    The setenv() function is preferred over this function. One reason is that putenv() is optional and therefore less portable. Another is that using putenv() can slow down environment searches, as explained in the RATIONALE section for getenv(3p).

RATIONALE
    Refer to the RATIONALE section in setenv(3p).

FUTURE DIRECTIONS
    None.

SEE ALSO
    exec(1p), free(3p), getenv(3p), malloc(3p), setenv(3p)
http://man7.org/linux/man-pages/man3/putenv.3p.html
The System.Data.Design namespace contains classes that can be used to generate a custom typed dataset. The namespace includes:

- A class used to generate a database query method signature, as it will be created by the typed-dataset generator.
- A setting for the type of parameters that are generated in a typed System.Data.DataSet class.
- A generator for strongly typed System.Data.DataSet classes.
- The exception that is thrown when a name conflict occurs while a strongly typed System.Data.DataSet is being generated.
- A class that generates internal mappings to .NET Framework types for XML schema element declarations, including literal XSD message parts in a WSDL document.
http://docs.go-mono.com/monodoc.ashx?link=N%3ASystem.Data.Design
ENGR 1200U Introduction to Programming
Lecture 7: Simple C++ Programs (Chapter 2) (cont'd)
Dr. Eyhab Al-Masri
1992-2012 by Pearson Education, Inc. & John Wiley & Sons. Some portions are adopted from C++ for Everyone by Horstmann.

The parentheses are unbalanced. This is very common with complicated expressions. Now consider this expression:

    -(b * b - (4 * a * c))) / 2 * a)

It is still not correct: there are too many closing parentheses.

This program produces the wrong output:

    #include <iostream>
    using namespace std;

    int main()
    {
        double price = 4.35;
        int cents = 100 * price; // Should be 100 * 4.35 = 435
        cout << cents << endl;   // Prints 434!
        return 0;
    }

Why? In the processor hardware, numbers are represented in the binary number system, not in decimal. In the binary system, there is no exact representation for 4.35, just as there is no exact representation for the fraction 1/3 in the decimal system.

It really is easier to read with spaces! So always use spaces around all binary operators: + - * / % =. However, don't put a space after a unary minus: that's a minus used to negate a single quantity, like this: -b. That way, it can be easily distinguished from a binary minus, as in a - b. It is customary not to put a space after a function name: write sqrt(x), not sqrt (x).

Review Questions

Which parts of a computer can store program code? Which parts can store user data?

Both program code and data are typically stored long term in a computer's secondary storage, such as a hard disk. Program code and data can also be stored in a computer's primary storage. Secondary storage is relatively inexpensive and retains information even if the computer's power is turned off. Primary storage consists of read-only memory (ROM) and random access memory (RAM).
RAM is relatively expensive when compared to secondary storage, and is erased whenever the computer is turned off.

Which parts of a computer serve to give information to the user? Which parts take user input?

The user of a computer receives information via the display screen, speakers, and printers (the computer's output devices). The user can input data using the computer's keyboard, a pointing device (i.e., a mouse), a microphone, or a webcam (the computer's input devices).

How do you discover compile-time errors? How do you discover run-time errors?

A compile-time error is typically found by the compiler during the compilation process. A compile-time error is caused when the source code violates the rules of the programming language being used. A run-time error cannot be found by the compiler. It is found by testing the program and carefully examining the output or results for errors.

Find the errors in the following program:

    1)  #include <iostream>
    2)  using namespace std;
    3)  int main();
    4)  {
    5)      cout << "Please enter two numbers:"
    6)      cout << x, y;
    7)      cout << "The sum of << x << "and" << y
    8)           << "is:" x + y << endl;
    9)      return;
    10) }

There is an extra semicolon at the end of Line 3. There is a missing semicolon at the end of Line 5. There is a missing double quote on Line 7 (after "The sum of"). There is a missing << operator on Line 8 (before x + y). The return statement on Line 9 should return a value.

Write an algorithm to settle the following question: A bank account starts out with $10,000. Interest is compounded monthly at 6 percent per year (0.5 percent per month). Every month, $500 is withdrawn to meet college expenses. After how many years is the account depleted?

1. Repeat the following while account_value is greater than $0:
   a) Set account_value equal to account_value times 1.005.
   b) Deduct 500 from account_value.
   c) Increment the number of months by 1.
2. Print the total number of months divided by 12 to determine how many years it takes.

The account is depleted after 22 months, or 1.83333 years.
Write a program that prints the sum of the first ten positive integers, 1 + 2 + ... + 10, without using variables.

    #include <iostream>
    using namespace std;

    int main()
    {
        cout << "The sum of the first ten positive integers is "
             << 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 << endl;
        return 0;
    }

Write a program that prints the balance of an account that earns 5 percent interest per year after the first, second, and third year. Do not use variables.

    #include <iostream>
    using namespace std;

    int main()
    {
        cout << "The starting balance is $10,000." << endl;
        cout << "The interest rate is 5%." << endl;
        cout << "The balance after one year is: "
             << 10000 * 1.05 << endl;
        cout << "The balance after two years is: "
             << (10000 * 1.05) * 1.05 << endl;
        cout << "The balance after three years is: "
             << ((10000 * 1.05) * 1.05) * 1.05 << endl;
        return 0;
    }

The same program, using a variable for the balance:

    #include <iostream>
    using namespace std;

    int main()
    {
        double balance(10000);
        cout << "The starting balance is $10,000." << endl;
        cout << "The interest rate is 5%." << endl;
        balance = balance * 1.05;
        cout << "The balance after one year is: " << balance << endl;
        balance = balance * 1.05;
        cout << "The balance after two years is: " << balance << endl;
        balance = balance * 1.05;
        cout << "The balance after three years is: " << balance << endl;
        return 0;
    }

Write a program that displays your name inside a box on the terminal screen, like this:

    +-------+
    | Eyhab |
    +-------+

    #include <cstdlib>
    #include <iostream>
    using namespace std;

    int main()
    {
        cout << endl;
        cout << "+-------+" << endl;
        cout << "| Eyhab |" << endl;
        cout << "+-------+" << endl;
        cout << endl;
        system("pause");
        return 0;
    }

Write a program that prints a face similar to (but different from) the following:

[Sample solution: a sequence of cout statements printing an ASCII face built from characters such as / \ | +, with a speech bubble reading "Hey there, Human!"; the structure of the program is the same as the box example, with one cout line per row of the picture.]

Write a program that displays the following image, using characters such as / \ - | + for the lines.
Write Ω as "Ohm".

[Sample solution: a sequence of cout statements drawing a circuit with a 12 V source and 5 kOhm, 6 kOhm, 10 kOhm, and 4 kOhm resistors out of the characters /, \, -, | and +. Note that a backslash in a string literal must be escaped as \\.]

What are the values of the following expressions? In each line, assume that

    double x = 2.5;
    double y = -1.5;
    int m = 18;
    int n = 4;

a. b. c. d. e.
https://fr.scribd.com/document/180931689/Lecture-7-Simple-C-Programs-pt-3-pdf
.NET Framework Support and New Features

[This documentation is for preview only, and is subject to change in later releases. Blank topics are included as placeholders.]

The .NET Compact Framework version 2.0 introduces new features and provides more support for .NET Framework classes. It also provides better performance in several feature areas, including just-in-time (JIT) compilation, garbage collection, XML Web services, and data access.

Support for Full .NET Framework Features

The following table summarizes the improved .NET Framework feature support in the .NET Compact Framework version 2.0.

New Features

The following table summarizes new device-specific features in the .NET Compact Framework 2.0.

Interoperability Enhancements

The .NET Compact Framework version 2.0 provides the following interoperability enhancements:

Native code interoperability:
- Enhanced platform invoke type marshaling.
- Marshal delegates as function pointers.
- Additional types: arrays, strings, structures.
- Embedded arrays: structs with char[], array[].
- MarshalAs: type hinting.

Use of COM objects in your managed code:
- RCW (runtime callable wrapper) support enables calling from managed applications into COM objects. However, you cannot activate managed objects through COM.
- CCW (COM callable wrapper) support enables callbacks from native to managed code.
- Support for late-bound and early-bound calls (IDispatch and vtable).
- Integrated into Visual Studio 2005.

Although the .NET Compact Framework supports only a subset of the System.Runtime.InteropServices namespace, advanced marshaling capabilities are available with the support of the MarshalAsAttribute attribute. In addition, the .NET Compact Framework 2.0 supports several new members in the Marshal class. You can marshal a wide range of types through COM into the .NET Compact Framework, including all OLE Automation types.
Custom marshaling, the COM single-threaded apartment (STA) threading model, and auto-generating class interfaces are not supported. You can set a registry key to record marshaling of function calls in a log file. For more information, see How to: Create Log Files. You can also use Tlbimp.exe (the Type Library Importer) with the .NET Compact Framework. In Visual Studio, you can add a reference to a COM type library in a device project.

Regarding packed structures, the .NET Compact Framework version 2.0 does not support Pack, but it does support explicit layout (LayoutKind.Explicit) and the FieldOffsetAttribute attribute. Note that current restrictions prevent marshaling a structure that violates the native structure layout. In other words, Int32 values must be 4-byte aligned, Int64 and Double values must be 8-byte aligned, and so on. If you need a packed structure where the elements are not naturally aligned, you must do the marshaling yourself. For more information about interoperability and related how-to topics, see Interoperability in the .NET Compact Framework.

Resource File Change

The .resx file format in the .NET Compact Framework version 2.0 is the same as the format in the full .NET Framework. The same Resgen.exe (Resource File Generator) can be used for both Frameworks. The CFResgen.exe utility is no longer needed.

See Also

Concepts: Windows Forms and Graphics
Other Resources: What's New in the .NET Compact Framework Version 2.0
https://docs.microsoft.com/en-us/previous-versions/h1ek3akf(v=vs.100)?redirectedfrom=MSDN
I'm new to web development and am trying to update an Access 2000 database (.mdb file) from ASP.NET 2.0 code using a SqlDataSource server control. I can view and filter the data in the bound GridView server control and can insert records using the DetailsView server control. However, I cannot update existing ones with DetailsView. No exception is thrown in the DetailsView1_ItemUpdated handler and no error message appears. The AffectedRows property is 0. I understand the .NET Framework uses a fallback data provider for Access databases, namely OLE DB (4.0), which I state in the web.config file. I am now wondering if there's a limitation using this data provider. I have disabled anonymous access and enabled Integrated Windows authentication in web.config, so I assumed the impersonated identity would give the ASP.NET process sufficient permissions to update a database resource. I log in to Windows with an administrator account. The mdb has both read and write attributes set. My web server is IIS 5.1 (I'm working offline) and the operating system is XP Pro. I've created a virtual folder for the web app in Internet Information Services' wwwroot etc. I would appreciate any help. jb

Jet OLEDB is not the fallback provider for an Access database. It is the only provider that you should be using (with the .NET OLEDB namespaces). The SQL namespaces were designed for SQL Server.

Paul
~~~~
Microsoft MVP (Visual Basic)
http://forums.devx.com/showthread.php?151590-toggle-button-picture-disappears-when-button-is-disabled&goto=nextnewest
Merge lp:~robert-ancell/snapd-glib/qt into lp:~snapd-glib-team/snapd-glib/trunk

Commit Message

Add Qt/QML bindings

There was discussion about making this code more QML friendly, which I think is mostly about providing models for the data so you can more easily connect it to a display. I think this is best achieved by making a new class "SnapdStore" that wraps the details of the requests. This can be done for all the bindings and added at a later date. I've added a bunch of people who might be able to review this and tell me if it is correct Qt/QML or not... If you know someone else who would be worth looking over it, please add them. The bit I think came out worst is the enums, though I can't work out if it can actually be done better with the current state of C++/Qt.

With a quick skim through, the basic idea looks OK to me. However, it's been decided that the store app on the phone will be a webview of the snapweb store interface, so I'm not sure if any of this is useful there any more. I'm also not working on that, so I'll set my review to abstain.

QSnapdClient::buy is not implemented.

- 189. By Robert Ancell on 2017-01-16: Merge with trunk
- 190. By Robert Ancell on 2017-01-16: Implement QSnapdClient::buy
- 191. By Robert Ancell on 2017-01-24: Merge with trunk

I'm going to take the feedback as close enough to a positive review and land it now :)

An example showing how to use this code with QML:

    import QtQuick 2.0
    import Ubuntu.Components 1.3
    import Snapd 1.0

    MainView {
        SnapdClient { id: snapdClient }
        Page {
            id: page
            property var findRequest
            TextField {
                iconName: "input-search"
                placeholderText: "Search..."
                onTextChanged: {
                    // Ensure we are connected
                    var connectRequest = snapdClient.connect ()
                    connectRequest.runSync ()

                    // Test code
                    var infoRequest = snapdClient.getSystemInformation ()
                    infoRequest.runSync ()
                    var info = infoRequest.systemInformation
                    console.log (info.osId + " " + info.osVersion + " " + info.series + " " + info.version)

                    var listRequest = snapdClient.list ()
                    listRequest.runSync ()
                    console.log ("Installed:")
                    for (var i = 0; i < listRequest.snapCount; i++) {
                        var snap = listRequest.snap (i)
                        console.log (snap.name, snap.installDate)
                    }

                    // Cancel existing find
                    if (page.findRequest != undefined) {
                        console.log ("CANCEL " + page.findRequest)
                        page.findRequest.cancel ()
                    }
                    page.findRequest = snapdClient.find (SnapdClient.None, text)
                    console.log ("FIND '" + text + "' " + page.findRequest)
                }
            }
            Flickable {
            }
        }
    }
https://code.launchpad.net/~robert-ancell/snapd-glib/qt/+merge/310247
About this project True Believer $11,658 FINAL UPDATE: 730% funded. $11,000. 300 backers. These are all figures I could not have dreamed of three weeks ago as I was scrabbling to launch the project while finishing my thesis and trying to graduate from college. What an amazing experience. We're in the home stretch with just under 24 hours to go, but I'm so humbled by this that I've run out of things to ask for. Keep being yourselves, keep being wonderful. I'll be posting a video update soon. WEEK 3 UPDATE: Well, it's official. You're all AWESOME. Thanks to you, True Believer is going international later this month with a trip to the inaugural Vancouver Comic Arts Festival! Since we've got a week to go, I'm going out on a limb and setting the new reach goal, and I really can't believe I'm saying this, at $10,000. I've added even more incentives (nautically-themed this time) to help move things along. If you'd like to get your hands on Baggywrinkles #3 pre-orders or tall ship illustrations, check the incentives bar. To read the whole update (including pics from the Release Party), click here! WEEK 1 UPDATE: After receiving such an incredible response in the last week, the campaign has evolved! The new goal is $8,000, and I've added some extra incentives to sweeten the deal. New funds will be going towards the creation of my first full-length graphic novel, Wherefore, which will hopefully be done by early 2013. Check out the full update here. What's All This Then? Hello! My name is Lucy Bellwood. I'm a cartoonist and illustrator living in Portland, OR and I want YOU to help me publish an awesome comic called True Believer. It starts something like this. True Believer is a 36-page autobiographical story about having the courage to do what you love. It's got art, religion, love, death, and all those other Big, Juicy Things, but thankfully also features a healthy amount of sneezing, slapstick, and swear words -- just so we don't take ourselves too seriously. 
The story charts my changing attitudes towards art as a personal practice over the course of 12 months. From selling my first comic at a convention to losing one of my mentors -- Portland-based publisher Dylan Williams -- to cancer, the year encompassed a staggering variety of experiences and revelations. After six months of intense work on this story, it's finally ready to see the light of day. Since I'm a sucker for all things analog (the comic was penciled, inked, lettered, and colored by hand), my goal is to publish it at the highest quality possible. I want this book to feel fantastic. 100 copies of the print run will have regular color covers and interiors, but there will also be a limited edition of 100 copies featuring two-color screen printed covers (with French flaps!) by Matt Davison of Portland's own Dueltone Printing. The trouble is that printing these large-format color comics can be a bit spendy for a young self-publisher... Which Is Where You Come In. With your help, the dream of publishing this comic in all its colorful glory will become a reality. Kickstarter's model is simple. You pledge an amount (small or large) towards my fundraising goal by clicking the "Back This Project" button on the top righthand side of the page. If I make my goal in pledges by my deadline, May 14th, you will then get charged for your donation, and receive some awesome incentives in return (like the limited edition poster pictured below!). However, if I don't make my goal, your card doesn't get charged and I don't receive any funding. It's an all or nothing kind of game, but I have faith that we're going to make it happen. To check out the various incentives and their corresponding donation levels, peruse the column on the right. By taking pre-orders through Kickstarter, I can confidently pull out all the stops to print True Believer the way it was meant to be printed. 
And what's more, I've included a number of Kickstarter-exclusive rewards that go beyond copies of the comic to include custom sketches, limited edition prints, and even dinners and studio tours with yours truly. Sounds Great! But Where's The Money Going? I've asked for $1,500 to complete the project. This will assure enough money to print the comic, ship your rewards, and cover Kickstarter and Amazon's processing fees. The breakdown goes something like this: - $950 to print 200 copies of the comic through Minuteman Press. - $150 to screen print 100 limited edition covers with Dueltone Printing. - $100 to assemble limited edition comics with Eberhardt Press. - $150 for incentive printing & shipping (posters, archival comic prints, postcards, etc.) - $150 for Kickstarter and Amazon fees (approximately 10% overall). In the event that I exceed my funding goal, I'll not only be able to print more copies of the comic and offer some extra incentives, but also begin putting funds towards my graphic novel, Wherefore, and the next issue of my Baggywrinkles series. Baggywrinkles is a nautically-themed comic exploring my life as a tall ship sailor in the 21st century, and this next issue promises to be a lot of fun. Excess donations will allow me to make the new issue twice as long as its predecessors and experiment with some new cover printing styles. And That's It! Thanks so much for taking the time to investigate my Kickstarter page. If you feel compelled to pledge towards my goal, thank you even more! I really couldn't do this without you. If you're a journalist or blogger interested in covering the campaign, you can download a press release for the project here. If you'd like to follow along with the project's process -- or just drop me a line -- check out my social media pages and, of course, my main site, lucybellwood.com!
Funding period: 21 days
https://www.kickstarter.com/projects/lucybellwood/true-believer?ref=recommended
Now that you’re familiar with MIDlet states and the application manager, let’s create another MIDlet. As you’ve probably guessed by now, this involves the following five steps: Write the MIDlet. Compile the MIDlet’s source code. Preverify the MIDlet’s class file. Package the application in a JAR file. Create a JAD file. Let’s review each of these steps. First, we’ll look at the command-line technique that was shown in Chapter 1. Then, we’ll introduce the KToolbar application, which comes with the J2ME Wireless Toolkit and which can make our lives much easier. The first step in the development life cycle is to write the MIDlet. Example 4-2 shows a simple MIDlet, PaymentMIDlet. This MIDlet creates a List object of type EXCLUSIVE (that is, only one option can be selected at a time), and adds three methods of payments to it. It displays a list of options for the user to select a method of payment. Example 4-2. Sample MIDlet import javax.microedition.midlet.*; import javax.microedition.lcdui.*; public class PaymentMIDlet extends MIDlet { // The display for this MIDlet private Display display; // List to display payment methods List method = null; public PaymentMIDlet( ) { method = new List("Method of Payment", Choice.EXCLUSIVE); } public void startApp( ) { display = Display.getDisplay(this); method.append("Visa", null); method.append("MasterCard", null); method.append("Amex", null); display.setCurrent(method); } /** * Pause is a no-op since there are no background * activities or record stores that need to be closed. */ public void pauseApp( ) { } /** * Destroy must cleanup everything not handled by the * garbage collector. In this case there is nothing to * cleanup. */ public void destroyApp(boolean unconditional) { } } To compile the source code with the command-line tools of the Java Wireless Toolkit, use the javac command. Remember that you should use the -bootclasspath option to make sure the source code is compiled against the correct CLDC and MIDP classes. 
C:\midlets> javac -bootclasspath C:\j2mewtk\lib\midpapi.zip PaymentMIDlet.java

This command produces the PaymentMIDlet.class file in the current directory. This is a slightly simplified version of the command we used in Chapter 1, which puts the resulting class file in a temporary directory. The next step is to preverify the class file using the preverify command:

C:\j2mewtk\bin> preverify -classpath C:\midlets;C:\j2mewtk\lib\midpapi.zip PaymentMIDlet

Again, a slightly different approach. This command creates an output subdirectory in the current directory and writes a new file PaymentMIDlet.class. This is the preverified class that the KVM can run with its modified class verifier. In order to enable dynamic downloading of MIDP applications, the application must be packaged in a JAR file. To create a JAR file, use the jar command:

C:\midlets> jar cvf payment.jar PaymentMIDlet.class

A JAD file is necessary if you want to run a CLDC-compliant application. Example 4-3 shows a sample JAD file for the payment MIDlet.

Example 4-3. A sample JAD file

MIDlet-1: payment,,PaymentMIDlet
MIDlet-Name: Payment
MIDlet-Version: 1.0
MIDlet-Vendor: ORA
MIDlet-Jar-URL: payment.jar
MIDlet-Jar-Size: 961

Once you have the JAD file, you can test your application in the MIDP emulator with the emulator command of the Java Wireless Toolkit, as shown here:

C:\j2mewtk\bin> emulator -Xdescriptor:C:\midlets\payment.jad

If all goes well, activate the MIDlet and you will see output similar to Figure 4-2. If your MIDP application consists of multiple MIDlets, they can all be in one JAR file as a MIDlet suite. However, you would need to specify them in the JAD file using the MIDlet-n entry, where n is the number of the MIDlet. Consider the JAD file in Example 4-4, with three hypothetical MIDlets. Example 4-4.
Three hypothetical MIDlets

MIDlet-1: Buy, , BuyMidlet
MIDlet-2: Sell, , SellMidlet
MIDlet-3: Trade, , TradeMidlet
MIDlet-Name: Trading
MIDlet-Version: 1.0
MIDlet-Vendor: ORA
MIDlet-Jar-URL: trade.jar
MIDlet-Jar-Size: 2961

If you run this JAD file, you would see something similar to Figure 4-3. A MIDP application may consist of multiple MIDlets, as shown in Figure 4-3. Similarly, a desktop application consists of menus and options, as shown in Figure 4-4. You have now seen how to compile, preverify, create JAR and JAD files, and run MIDlets from the command line. This is fine if you want to understand what's happening behind the scenes. However, there is an alternative. An integrated development environment, such as the J2ME Wireless Toolkit, can be used to simplify the development and deployment of MIDlets. The J2ME Wireless Toolkit comes with an application called KToolbar. The following steps show how to use the KToolbar to set up a simple MIDlet, develop the application, package it, and run it. In Microsoft Windows, choose Start → Programs → J2ME Wireless Toolkit → KToolbar to start the development environment. Figure 4-5 shows the resulting KToolbar screen. Click on the New Project button to create a new project called payment, and call the MIDlet class PaymentMIDlet, as shown in Figure 4-6. Once you click on Create Project in Figure 4-6, you will get a setting project window, as shown in Figure 4-7. This window allows you to modify the MIDlet attributes. All the required attributes are shown in Figure 4-7. If you click on the Optional tab, you will get a window with all the optional attributes, which are shown in Figure 4-8. Once you click OK, you will get the original KToolbar screen with information to indicate where to save your source and resource files.
Assuming the Wireless Toolkit is installed in the directory C:\J2MEWTK, then you will be told to save your Java source files in C:\J2MEWTK\apps\payment\src and your resource files (e.g., icons) in C:\J2MEWTK\apps\payment\res. Now, use your favorite text editor and write PaymentMIDlet, or simply copy the source from Example 4-2. Then, save it in the required location and click on the Build button to compile it. Note that the KToolbar application performs all the steps of compiling the application, preverifying the classes, compressing them into a JAR file, and creating a corresponding JAD file for you. All you have to do is to click the Run button to run it. Then you can test your MIDlet using a default phone, Motorola’s i85s, or a Palm OS, as shown in Figure 4-9. Choose your favorite testing device to test the MIDlet. For example, Figure 4-10 shows the PaymentMIDlet running in a default gray phone device. Figure 4-11 shows the PaymentMIDlet running on Motorola’s i85s device. Figure 4-12 shows the same application running on a Palm Pilot and Figure 4-13 shows the PaymentMIDlet application running on RIM’s BlackBerry. Chapter 9 discusses how to install the Java Application Manager on a real Palm OS device and how to convert existing MIDlets into PRC executable files for handheld devices running Palm OS 3.5 or higher. As of this writing, deploying MIDlets is still an experimental process. However, the Java application manager that comes with the MIDP reference implementation now provides some clues about how we can deploy MIDlets to various devices. Specifically, MIDlets can be installed in two ways: Using a data cable or other dedicated physical connection from a local host computer Using a network to which the device is intermittently connected The first method works well with PDAs, which are often used with a host computer, with which the PDAs frequently synchronize their data. 
For example, the MIDP for Palm implementation, which is discussed in Chapter 9, is a good example of this; its application manager allows MIDlet suites to be installed from a host PC during the synchronization process.

The second method is more popular when installing MIDlets on cell phones and other wireless devices. With these devices, the most likely delivery transport is the wireless network itself. The process of deploying MIDlet suites over a network is referred to as over-the-air (OTA) provisioning. OTA provisioning is not yet part of the MIDP specification, but it is likely to become the dominant mechanism for distributing MIDlets and will probably be included in the formal specification soon. As of this writing, OTA provisioning is just starting to be used with J2ME devices such as the Motorola i85s/i50x series of cell phones.

OTA provisioning allows MIDlet providers to install their MIDlet suites via web servers that provide hypertext links. This allows you to download MIDlet suites to a cell phone via a WAP or Internet microbrowser. Here is a brief description of how this process works.

First, to deploy a MIDlet from a web server, you need to reconfigure your web server by adding a new MIME type:

text/vnd.sun.j2me.app-descriptor jad

How to add the MIME type depends on what server you are running. For example, if you're running Apache Tomcat, you would add a new MIME type by adding a new entry in the web.xml server configuration file, as follows:

<mime-mapping>
    <extension>jad</extension>
    <mime-type>text/vnd.sun.j2me.app-descriptor</mime-type>
</mime-mapping>

You would then use the following type of procedure to install a MIDlet suite from a web page:

Click on a link, which will probably request a file with a JAD extension, such as the following:

<A HREF='MyApp.jad'>Click here to install the MIDlet suite</A>

The server will then send the MyApp.jad file to the phone with the MIME type set to text/vnd.sun.j2me.app-descriptor, as described earlier.
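The chapter only shows the Tomcat configuration. On Apache httpd, the equivalent mapping is usually done with AddType directives; the second line for the JAR MIME type is an assumption based on common practice, not taken from the book:

```apache
# Serve application descriptors and MIDlet suites with the expected MIME types
AddType text/vnd.sun.j2me.app-descriptor .jad
AddType application/java-archive .jar
```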
Recall that the JAD file must contain the MIDlet-Jar-URL and MIDlet-Jar-Size properties, which tell the device where to download the MIDlet suite, as well as the suite’s size in bytes. The Java application manager on the phone will then ask if you want to install the MIDlet into the phone, assuming that the phone has the resources to run the MIDlet (i.e., that there’s enough space on the device to hold the MIDlet suite). If you answer yes, the entire JAR file will be downloaded from the server, using the properties specified in the JAD file. Once the MIDlet is downloaded, it will be installed the first time you try to use it. A downloaded MIDlet stays on the device until you remove it (unlike Java applets). You can also download J2ME applications to a Motorola/Nextel i50x or i85s device from your desktop through a data cable. This cable does not come with the phone itself, but can be ordered online from Nextel. The iDEN update software can then be downloaded from the iDEN development site (). In addition, you can also purchase a data cable that comes with a CD-ROM containing the iDEN update software from Nextel from this site. Obtaining the software may involve authorization from your carrier, which can take between one and five days. Once you are granted authorization, however, you can install applications on up to five individual phones. The following paragraphs describe how to use the Motorola iDEN update software to download a J2ME MIDlet to your phone. After you have obtained the update software, start it up and choose the J2ME Developers tab on the far left. This will result in a screen similar to that in Figure 4-14. From here, you can choose a JAD file to download the application into your phone through the data cable. Note that the JAD file and the JAR file must reside in the same directory and must have the same filename (excluding the extension). For the most part, downloading an application to the phone is easy. 
However, the Motorola i85s and i50x phones will perform a number of checks to ensure the integrity of the application while installing it. You should observe the following rules to ensure that the phone will install the application.

The JAD file downloaded to the i85s or i50x must contain at least the following entries, which are case-sensitive:

MIDlet-Name:
MIDlet-Version:
MIDlet-Vendor:
MIDlet-Jar-Size:
MIDlet-Jar-URL:

It can also contain the following optional entries:

MIDlet-Description:
MIDlet-Info-URL:
MIDlet-Data-Size:

In addition, the JAD file can contain any other MIDlet-specific information that does not begin with the letters "MIDlet-". Remember from Chapter 3 that the JAR file must contain a manifest with at least the following information, which must be identical to the data in the JAD file:

MIDlet-Name:
MIDlet-Version:
MIDlet-Vendor:

If you do not include this information in the manifest, the phone will respond with a "Descriptor Error" when it is attempting to install the application. If this happens, simply press the Menu button while the MIDlet is selected and remove it from the system.

Here are some other things to note when downloading to the Motorola i85s or i50x:

- The JAD file is case-sensitive.
- The maximum file length for both the JAD and the JAR file is 16 characters, which includes the four characters for the extension (e.g., .JAD or .JAR).
- The byte size of the JAR file must be accurately stated in the JAD file.
- Each of the attributes in the JAD and JAR file manifests must have a value associated with it. You cannot leave an attribute value blank.
- Classes which are instantiated using the Class.forName( ) method must be identified in the JAD file using the attribute iDEN-Install-Class-n:, where n is a positive integer. The class name is listed afterward without the .class extension.

Example 4-5 shows the manifest information that we would be using if we wanted to download the HelloMidlet application from Chapter 1 to the Motorola i85s.
Remember that the manifest must contain the three specified attributes (MIDlet-Name, MIDlet-Version, and MIDlet-Vendor) and that they must be identical to the values in the JAD file. If they differ, the phone will not install the MIDlet. We have also included the MIDlet class identification information and the profile and configuration version numbers, which we recommend that you include in your MIDlet manifests as well.

Example 4-5. Manifest.mf

MIDlet-Name: HelloMidlet
MIDlet-Vendor: ORA
MIDlet-Version: 1.0.0
MIDlet-1: HelloMidlet,,HelloMidlet
MicroEdition-Profile: MIDP-1.0
MicroEdition-Configuration: CLDC-1.0

At this point, let's create a compressed JAR file of the classes that make up the MIDlet. With the manifest and the preverified class in the same directory, enter the following command:

>jar cvfm HelloMidlet.jar manifest.mf HelloMidlet.class

Once that is completed, you'll need to create the JAD file. Example 4-6 shows the JAD file for our HelloMidlet application. Note that we had to change the value of the MIDlet-Jar-Size attribute to match the size, in bytes, of the JAR file that we just created. In this case, it turned out to be 954 bytes with the additional manifest information.

Example 4-6. HelloMidlet.jad

Now we're ready to go. Again, be sure that the JAD file and the JAR file have the same name and reside in the same directory. Then use the iDEN software tools to download the application to your phone. It should only take a few seconds once you've chosen the target JAD file. After the download has completed, start the Java Application Manager on the phone (Java Apps under the Main Menu) and select the HelloMidlet application. Press the soft button to install it. You are now installing your first Java MIDlet on a real device. If everything goes okay, you can run your program after it completes the installation and verification steps.
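A JAD file consistent with the values described in the text above (name HelloMidlet, vendor ORA, version 1.0.0, JAR size 954 bytes) would look something like the following. Treat this as a reconstruction from the attributes named in the surrounding text, not as the book's exact Example 4-6 listing:

```
MIDlet-Name: HelloMidlet
MIDlet-Vendor: ORA
MIDlet-Version: 1.0.0
MIDlet-1: HelloMidlet,,HelloMidlet
MicroEdition-Profile: MIDP-1.0
MicroEdition-Configuration: CLDC-1.0
MIDlet-Jar-URL: HelloMidlet.jar
MIDlet-Jar-Size: 954
```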
https://www.oreilly.com/library/view/wireless-java/0596002432/ch04s02.html
Tony Glader wrote:
> Is there some good reason why there is mdelay(10) in ves1820_writereg():

That is a good question.

> static int ves1820_writereg(struct ves1820_state *state, u8 reg, u8 data)
> {
>         u8 buf[] = { 0x00, reg, data };
>         struct i2c_msg msg = {.addr = state->config->demod_address,.flags = 0,.buf = buf,.len = 3};
>         int ret;
>
>         ret = i2c_transfer(state->i2c, &msg, 1);
>
>         if (ret != 1)
>                 printk("ves1820: %s(): writereg error (reg == 0x%02x,"
>                        "val == 0x%02x, ret == %i)\n", __FUNCTION__, reg, data, ret);
>
>         msleep(10);
>         return (ret != 1) ? -EREMOTEIO : 0;
> }
>
> Doesn't it slow down lot of i2c communication?

Yes, but maybe that's the intention (bad hardware). Does it work for you without the sleep?

Johannes
http://www.linuxtv.org/pipermail/linux-dvb/2005-March/000929.html
Fix formatting in GIF2.cpp — RESOLVED FIXED in mozilla1.3alpha
Component: ImageLib
Reporter: paper; Assigned: paper
Attachments: 2 attachments, 4 obsolete attachments

Making patches for GIF2.cpp bugs me to no end because it's abnormally set to tab-width 4. tab-width should be 2.

Status: UNCONFIRMED → ASSIGNED
Ever confirmed: true
Summary: Change GIF2.cpp to tab-width 2 → Fix formatting in GIF2.cpp
Target Milestone: --- → mozilla1.3alpha

Created attachment 104248 [details] [diff] [review] Cleanup
- 0 code changes.
- 4 space tabs changed to 2 space tabs
- Removed all ILTRACEs (they were all commented out and were for old gifcom stuff)
- Removed debug defines (again, only used way back in gifcom days)
- Placed return value type & function name on same line (ie "void^pfoo" to "void foo()")
- moved { to same line as if, and "} else {"
- Removed trailing spaces
- Split any lines > 80 characters.
All this chopped 200+ lines off the file and made it much more consistent and standardized to the rest of imglib. :)

Comment on attachment 104248 [details] [diff] [review] Cleanup
can you attach a diff -w?

Created attachment 104280 [details] diff -u2 -w
still rather long, but easier to find out what I did.

Comment on attachment 104280 [details] diff -u2 -w
what's this?
+ // q[6] = Pixel Aspect Ratio
+ // Not used
+ // float aspect = (float)((q[6] + 15) / 64.0);
why add this code?

The code below wasn't doing anything, so I killed it, but I thought the information in it might be useful in the future. (ie.
someone asking why we skipping over [6] or something like that) - if (q[6]) - { - /* should assert gif89 */ - if (q[6] != 49) - { -#ifdef DEBUG - float aspect = (float)((q[6] + 15) / 64.0); - //ILTRACE(2, ("il:gif: %f aspect ratio", aspect)); -#endif - } - } + if ( !gs->prefix || !gs->suffix || !gs->stack) { ^ remove the space here please (other places like this as well) + if (strncmp((char*)q,"GIF",3)) { space after comma, please same for this one (which you didn't touch): GETN(7,gif_global_header); and possibly in other places of that file. -#ifdef DEBUG_saari given the amount of checkins saari has done recently, I suppose that's ok. + if (*q!=',') { space before and after != please. + for (int i=0; i < gs->local_colormap_size; i++, map++) { space around = please + // XXX: we don't freeze decoder anymore um, is the non-freezing a bad thing? doesn't sound like one to me; and if it's good, it should not be XXX ok the rest looks good, but I still have to apply it and test it. ok I now applied the patch and looked at the resulting file. I found a few things, I hope you don't mind :) : /* PR_ASSERT(0); */ you probably should remove that comment (or replace with an NS_NOTREACHED) } while(gs->irow > gs->height - 1); personally, I'd like parens around the second half of the comparison... #define OUTPUT_ROW(gs) this macro could use NSPR_BEGIN_MACRO / NSPR_END_MACRO for (ch=q; count-- > 0; ch++) spaces around = gif_struct *ret; ret = PR_NEWZAP(gif_struct); combine these lines? actually, maybe assign it directly into the out parameter? (or does code depend on an unchanged parameter in case it would be zero?) hm, not sure if this really belongs in this bug... PR_ASSERT(gs); can you replace that with NS_ASSERTION(gs, "Got null argument, will crash!"); GIF_IRGB *img_trans_pixel; I'd move it inside the if(), to where it's first used. ...gotta go now, will continue later. for my own reference, I'm near line 561 now. 
if (gs->gathered < max) that if condition can be written as return (gs->gathered < max); PRStatus gif_write(gif_struct *gs, const PRUint8 *buf, PRUint32 len) well I wish that would use nsresult but that does not really belong in this bug. // PR_ASSERT ((len == 0) || (gs->gathered < MAX_READ_AHEAD)); guess you can remove this comment const PRUint8 *q, *p=buf, *ep=buf+len; again, spaces around =. I'll not mention further cases of this, just fix it everywhere. if (!((len == 0) || (gs->gathered < MAX_READ_AHEAD))) now this line looks, uh, rather ugly. maybe change it to: if ((len != 0) && (gs->gathered >= MAX_READ_AHEAD)) (wow, I really hope I got the boolean algebra correct here...) actually, for all the *(q + 8), you could write q[8] instead. that's better, imho. if ((num_colors > gs->local_colormap_size) && gs->local_colormap) { PR_FREEIF(gs->local_colormap); gs->local_colormap = NULL; } now this piece of code is interesting (as mentioned on irc) in short, change the FREEIF to PR_Free, because FREEIF would also check for non-nullness of local_colormap, and set it to NULL afterwards. alternatively, remove the non-nullcheck from the if and the assignment of null. now I see the NULL in this code... please do a global search-and-replace for NULL -> nsnull. GIF_RGB* map; map = gs->local_colormap; combine these two lines please. return PR_FAILURE; break; no need for the break;s here. also in surrounding code. ah yeah, and here (at the end of the file): if (gs->local_colormap) { PR_FREEIF(gs->local_colormap); gs->local_colormap = NULL; } you can also replace all these lines with the PR_FREEIF or change the PR_FREEIF to PR_Free great, I'm done. Created attachment 105616 [details] [diff] [review] Clean-up more Attachment #104248 - Attachment is obsolete: true Created attachment 105617 [details] diff -u2 -w > + // XXX: we don't freeze decoder anymore I removed that whole chunk. We haven't frozen loading GIF in Mozilla since.. well, since the time before Mozilla was public. 
> + if (strncmp((char*)q,"GIF",3)) { > space after comma, please > same for this one (which you didn't touch): > GETN(7,gif_global_header); Done, plus all similar > /* PR_ASSERT(0); */ chopped :) > } while(gs->irow > gs->height - 1); > personally, I'd like parens around the second half of the comparison... Added parens (didn't remove old ones, they seem to be standard on while). Also added a space between while and (, and did that to other areas of code with a while(, switch(, or return(. > #define OUTPUT_ROW(gs) > this macro could use NSPR_BEGIN_MACRO / NSPR_END_MACRO Added PR_BEGIN/END_MACRO to this one, and removed NS from the other one. > ret = PR_NEWZAP(gif_struct); I'm going to leave this as is. It does seem constructed a bit weird though. > GIF_IRGB *img_trans_pixel; >I'd move it inside the if(), to where it's first used. That whole function (gif_init_transparency) is a mess and could probably be cut in half if it weren't for my theory that the function serves no purpose anymore. I've made a note to look deeper into my theory, and will leave the code as is here. I kept the PR_FREEIF, and removed the null check prior to it and the setting to null after it. Anything else I didn't mention, I changed as per comments. Attachment #104280 - Attachment is obsolete: true Created attachment 105625 [details] [diff] [review] Super Clean-Up more clean-up as per discussion with biesi on IRC Attachment #105616 - Attachment is obsolete: true Created attachment 105626 [details] diff -u2 -w Attachment #105617 - Attachment is obsolete: true Comment on attachment 105625 [details] [diff] [review] Super Clean-Up r=biesi Attachment #105625 - Flags: review+ Comment on attachment 105625 [details] [diff] [review] Super Clean-Up Somebody set us up the bomb. All your GIF2.cpp are belong to Paper! 
sr=tor Attachment #105625 - Flags: superreview+ Checking in GIF2.cpp; /cvsroot/mozilla/modules/libpr0n/decoders/gif/GIF2.cpp,v <-- GIF2.cpp new revision: 1.35; previous revision: 1.34 done Marking Fixed Status: ASSIGNED → RESOLVED Last Resolved: 16 years ago Resolution: --- → FIXED
https://bugzilla.mozilla.org/show_bug.cgi?id=166007
Lab 6: Midterm Review

Due by 11:59pm on Thursday, July 15.

Starter Files
Download lab06.zip. Inside the archive, you will find starter files for the questions in this lab, along with a copy of the Ok autograder.

Submission
In order to facilitate midterm studying, solutions to this lab were released with the lab. We encourage you to try out the problems and struggle for a while before looking at the solutions! Note: You do not need to run python ok --submit to receive credit for this assignment.

Required Questions

Q1: All Questions Are Optional
The questions in this assignment are not graded, but they are highly recommended to help you prepare for the upcoming midterm. You will receive credit for this lab even if you do not complete these questions. This question has no Ok tests.

Suggested Questions

Recursion and Tree Recursion

Q2: Subsequences
A subsequence of a sequence S is a subset of elements from S, in the same order they appear in S. Consider the list [1, 2, 3]. Here are a few of its subsequences: [], [1, 3], [2], and [1, 2, 3]. Write a function that takes in a list and returns all possible subsequences of that list. The subsequences should be returned as a list of lists, where each nested list is a subsequence of the original input.

In order to accomplish this, you might first want to write a function insert_into_all that takes an item and a list of lists, adds the item to the beginning of each nested list, and returns the resulting list.

def insert_into_all(item, nested_list):
    """Return a new list consisting of all the lists in nested_list,
    but with item added to the front of each. You can assume that
    nested_list is a list of lists.

    >>> nl = [[], [1, 2], [3]]
    >>> insert_into_all(0, nl)
    [[0], [0, 1, 2], [0, 3]]
    """
    "*** YOUR CODE HERE ***"

def subseqs(s):
    """Return a nested list (a list of lists) of all subsequences of S.
    The subsequences can appear in any order. You can assume S is a list.
    >>> seqs = subseqs([1, 2, 3])
    >>> sorted(seqs)
    [[], [1], [1, 2], [1, 2, 3], [1, 3], [2], [2, 3], [3]]
    >>> subseqs([])
    [[]]
    """
    if ________________:
        ________________
    else:
        ________________
        ________________

Use Ok to test your code: python3 ok -q subseqs

Q3: Non-Decreasing Subsequences
Just like the last question, we want to write a function that takes a list and returns a list of lists, where each individual list is a subsequence of the original input. This time we only want the subsequences whose elements are non-decreasing; you may assume the input list contains no negative elements. You may use the provided helper function insert_into_all, which takes in an item and a list of lists and inserts the item to the front of each list.

def non_decrease_subseqs(s):
    """Assuming that S is a list, return a nested list of all
    subsequences of S (a list of lists) for which the elements
    of the subsequence are strictly nondecreasing. The
    subsequences can appear in any order.

    >>> seqs = non_decrease_subseqs([1, 3, 2])
    >>> sorted(seqs)
    [[], [1], [1, 2], [1, 3], [2], [3]]
    >>> non_decrease_subseqs([])
    [[]]
    >>> seqs2 = non_decrease_subseqs([1, 1, 2])
    >>> sorted(seqs2)
    [[], [1], [1], [1, 1], [1, 1, 2], [1, 2], [1, 2], [2]]
    """
    def subseq_helper(s, prev):
        if not s:
            return ____________________
        elif s[0] < prev:
            return ____________________
        else:
            a = ______________________
            b = ______________________
            return insert_into_all(________, ______________) + ________________
    return subseq_helper(____, ____)

Use Ok to test your code: python3 ok -q non_decrease_subseqs

Mutable Lists

Q4: Trade
In the integer market, each participant has a list of positive integers to trade. When two participants meet, they trade the smallest non-empty prefix of their list of integers. A prefix is a slice that starts at index 0.

Write a function trade that exchanges the first m elements of list first with the first n elements of list second, such that the sums of those elements are equal, and the sum is as small as possible. If no such prefix exists, return the string 'No deal!'
and do not change either list. Otherwise change both lists and return 'Deal!'. A partial implementation is provided.

def trade(first, second):
    """Exchange the smallest prefixes of first and second that have equal sum.

    >>> a = [1, 1, 3, 2, 1, 1, 4]
    >>> b = [4, 3, 2, 7]
    >>> trade(a, b) # Trades 1+1+3+2=7 for 4+3=7
    'Deal!'
    >>> a
    [4, 3, 1, 1, 4]
    >>> b
    [1, 1, 3, 2, 2, 7]
    >>> c = [3, 3, 2, 4, 1]
    >>> trade(b, c)
    'No deal!'
    >>> b
    [1, 1, 3, 2, 2, 7]
    >>> c
    [3, 3, 2, 4, 1]
    >>> trade(a, c)
    'Deal!'
    >>> a
    [3, 3, 2, 1, 4]
    >>> b
    [1, 1, 3, 2, 2, 7]
    >>> c
    [4, 3, 1, 4, 1]
    """
    m, n = 1, 1
    equal_prefix = lambda: ______________________
    while _______________________________:
        if __________________:
            m += 1
        else:
            n += 1
    if equal_prefix():
        first[:m], second[:n] = second[:n], first[:m]
        return 'Deal!'
    else:
        return 'No deal!'

Use Ok to test your code: python3 ok -q trade

Q5: Shuffle

    half = _______________
    shuffled = []
    for i in _____________:
        _________________
        _________________
    return shuffled

Use Ok to test your code: python3 ok -q shuffle

Trees

Q6: Same shape
Define a function same_shape that, given two trees, t1 and t2, returns True if the two trees have the same shape (but not necessarily the same data in each node) and False otherwise.

def same_shape(t1, t2):
    """Return True if t1 is identical in shape to t2.

    >>> test_tree1 = tree(1, [tree(2), tree(3)])
    >>> test_tree2 = tree(4, [tree(5), tree(6)])
    >>> test_tree3 = tree(1,
    ...                   [tree(2,
    ...                         [tree(3)])])
    >>> test_tree4 = tree(4,
    ...                   [tree(5,
    ...                         [tree(6)])])
    >>> same_shape(test_tree1, test_tree2)
    True
    >>> same_shape(test_tree3, test_tree4)
    True
    >>> same_shape(test_tree2, test_tree4)
    False
    """
    "*** YOUR CODE HERE ***"
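Since the lab's own solutions were released alongside it, here is one possible way to fill in Q2 and Q6 — a sketch, not necessarily the official solution. The tree helpers below are minimal stand-ins for the course's list-based tree ADT; the real lab files provide their own:

```python
def insert_into_all(item, nested_list):
    """Prepend item to every list inside nested_list."""
    return [[item] + lst for lst in nested_list]

def subseqs(s):
    """Return all subsequences of the list s, as a list of lists."""
    if not s:
        return [[]]
    else:
        rest = subseqs(s[1:])            # subsequences that skip s[0]
        return insert_into_all(s[0], rest) + rest

# Minimal list-based tree ADT, matching the course convention.
def tree(label, branches=[]):
    return [label] + list(branches)

def branches(t):
    return t[1:]

def same_shape(t1, t2):
    """True if t1 and t2 have identical shape (labels may differ)."""
    return len(branches(t1)) == len(branches(t2)) and \
        all(same_shape(b1, b2) for b1, b2 in zip(branches(t1), branches(t2)))
```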
https://inst.eecs.berkeley.edu/~cs61a/su21/lab/lab06/
Hibernate code problem how to write hibernate Left outer join program Hibernate code problem - Hibernate Hibernate code problem Hi This is Raju.I tried the first example of Hibernate material what u have given. I have written contact.java... problem please send me code. Visit for more information. String SQL_QUERY =" from Insurance...: " + insurance. getInsuranceName()); } in the above code,the hibernate... thanks shakti Hi friend, Your code is : String SQL_QUERY =" from Hibernate Problem in running first hibernate program.... Hi...I am using.../FirstExample Exception in thread "main" "... programs.It worked fine.To run a hibernate sample program,I followed the tutorial below Hibernate code - Hibernate Hibernate code firstExample code that you have given for hibernate to insert the record in contact table,that is not happening neither it is giving... inserted in the database from this file. Compilation error. Hibernate code problem. Struts first example - Hibernate Java Compilation error. Hibernate code problem. Struts first example Java Compilation error. Hibernate code problem. Struts first example Hibernate @ManyToOne persisting problem - Hibernate Hibernate @ManyToOne persisting problem hello, In my apllication, there are students and classes. a student can take many classes. So... wrote the code (summary) below.. class Class: @Id @GeneratedValue @Column Java - Hibernate (); } } } --------------------------- Simple problem in your code. please change the setId..., this type of output. ---------------------------- Inserting Record Done Hibernate... FirstExample { public static void main(String[] args) { Session session = null Hibernate - Hibernate /hibernate/ Please specify your requirements in detail. It would be good for me to provide you the solution if problem is clear. please post all code...Hibernate Hai this is jagadhish while running a Hibern How to Invoke Stored Procedure in Hibernate???????? 
Plz provide details code hibernate code - Hibernate hibernate code while generating the hibernate code i got the error like org.hibernate.MappingException... hibernate hibernate what is problem of tree loading Hibernate code - Hibernate Hibernate code how to write hql query for retriveing data from multiple tables hibernate pojo setter method problem hibernate pojo setter method problem how to pass a date type variable to a setter method in hibernate pojo class? //this is my pojo class import... class code import org.hibernate.*; import org.hibernate.cfg.*; public class Tutorial Hibernate Tutorial This section contains the various aspects of Hibernate. Here we will read What is Hibernate, Features of Hibernate, Compatibility with the various databases, Hibernate dialect of various databases, Architecture myfaces,hibernate and spring integration - Hibernate myfaces,hibernate and spring integration sorry, in the previous.../myfacesspring/downloadcode.shtml the code given in this url : http...). i wll be obliged. Might be you have deploying problem. deploy delete a row error - Hibernate simple problem in your code. Query query = sess.createQuery(hql); int row...Hibernate delete a row error Hello, I been try with the hibernate delete example ( delete query problem - Hibernate correctly , the problem is only delete query. 2) query.executeUpate(); ->...(); Read for more information. Thanks 4.3 JPA 2.1 version of Hibernate is 4.3 and JPA is JPA 2.1. My problem is to write the integrated code to use Hibernate 4.3 through JPA 2.1 in my application. Thanks...Hibernate 4.3 JPA 2.1 How to create a Java program using Hibernate Hibernate 1 - Hibernate Hibernate 1 what is a fetchi loading in hibernate?i want source code?plz reply this is a hibernate question this is a hibernate question connection pooling code in hibernate spring hibernate - Hibernate with hibernate? 
Hi friend, For solving the problem Spring with Hibernate visit to : listeners hibernate hibernate what is hibernate flow Criteria Transformer Example code Hibernate Criteria Transformer Example code Hello, I am trying to find the example of Hibernate Criteria Transformer. Tell me the good tutorial of Hibernate Criteria Transformer with source code. Thanks Hi, Check Hibernate Isolation Query. - Hibernate Hibernate Isolation Query. Hi, Am Using HibernateORM with JBOSS server and SQLSERVER.I have a transaction of 20 MB, when it is getting processed... for the problem error in eclipse are trying to submit the null value. So, review your code again or send your full code........ I will try to short out your problem...Hibernate error in eclipse Hi... while running my application i got . We have tried to provide many articles, examples and code at our Hibernate 4...Hibernate Hibernate is a framework for Java technology, which is used.... Hibernate is Java library which is used to develop the data access layer
http://www.roseindia.net/tutorialhelp/comment/81376
Pandas Tricks – Calculate Percentage Within Group

Pandas groupby probably is the most frequently used function whenever you need to analyse your data, as it is so powerful for summarizing and aggregating data. Often you still need to do some calculation on your summarized data, e.g. calculating the % vs the total within a certain category. In this article, I will be sharing with you some tricks to calculate percentage within groups of your data.

Prerequisite

You will need to install pandas if you have not yet installed:

pip install pandas
#or
conda install pandas

I am going to use a real world example to demonstrate what kind of problems we are trying to solve. The sample data I am using is from this link , and you can also download it and try by yourself. Let's first read the data from this sample file:

import pandas as pd
# You can also replace the below file path to the URL of the file
df = pd.read_excel(r"C:\Sample Sales Data.xlsx", sheet_name="Sheet")

The data will be loaded into a pandas dataframe, and you will be able to see something as per below:

Let's first calculate the sales amount for each transaction by multiplying the quantity and unit price columns.

df["Total Amount"] = df["Quantity"] * df["Price Per Unit"]

You can see the calculated result like below:

Calculate percentage within group

With the above details, you may want to group the data by sales person and the items they sold, so that you have an overall view of their performance for each person. You can do that with the below:

#df.groupby(["Salesman","Item Desc"])["Total Amount"].sum()
df.groupby(["Salesman", "Item Desc"]).agg({"Total Amount" : "sum"})

And you will be able to see the total amount for each sales person. This is good as you can see the total of the sales for each person and product within the given period.
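If you don't have the Excel file handy, the same grouping can be tried on a small hand-made frame — the salesman names and numbers below are invented purely for illustration:

```python
import pandas as pd

# A tiny stand-in for the sample sales data
df = pd.DataFrame({
    "Salesman": ["Amy", "Amy", "Bob", "Bob"],
    "Item Desc": ["Red Wine", "Whisky", "Red Wine", "Whisky"],
    "Quantity": [2, 1, 3, 4],
    "Price Per Unit": [10.0, 20.0, 10.0, 20.0],
})
df["Total Amount"] = df["Quantity"] * df["Price Per Unit"]

# Total sales per salesman and product
summary = df.groupby(["Salesman", "Item Desc"]).agg({"Total Amount": "sum"})
print(summary)
```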
Calculate the best performer

Now let's see how we can get the % contribution to total revenue for each sales person, so that we can immediately see who is the best performer. To achieve that, firstly we will need to group and sum up the "Total Amount" by "Salesman", which we have already done previously.

df.groupby(["Salesman"]).agg({"Total Amount" : "sum"})

And then we calculate the sales amount against the total of the entire group. Here we can get the "Total Amount" as a subset of the original dataframe, and then use the apply function to calculate the current value vs the total. Take note, here the default value of axis is 0 for the apply function.

[["Total Amount"]].apply(lambda x: 100*x/x.sum())

With the above, we should be able to get the % contribution to total sales for each sales person. And let's also sort the % from largest to smallest:

sort_values(by="Total Amount", ascending=False)

Let's put it all together and run the below in Jupyter Notebook:

df.groupby(["Salesman"])\
  .agg({"Total Amount" : "sum"})[["Total Amount"]]\
  .apply(lambda x: 100*x/x.sum())\
  .sort_values(by="Total Amount", ascending=False)

You shall be able to see the below result with the sales contribution in descending order. (Do not confuse with the column name "Total Amount"; pandas uses the original column name for the aggregated data. You can rename it to whatever name you want later.)

Calculate the most popular products

Similarly, we can follow the same logic to calculate what are the most popular products. This time we want to summarize the sales amount by product, and calculate the % vs total for both "Quantity" and "Total Amount". And also we want to sort the data in descending order for both fields.
e.g.:

df.groupby(["Item Desc"])\
  .agg({"Quantity": "sum", "Total Amount" : "sum"})[["Quantity", "Total Amount"]]\
  .apply(lambda x: 100*x/x.sum())\
  .sort_values(by=["Quantity","Total Amount"], ascending=[False,False])

This will produce the below result, which shows "Whisky" is the most popular product in terms of quantity sold. But "Red Wine" contributes the most in terms of total revenue, probably because of the higher unit price.

Calculate best sales by product for each sales person

What if we still want to understand, within each sales person, what the % of sales for each product is vs his/her total sales amount? In this case, we shall first group by "Salesman" and "Item Desc" to get the total sales amount for each group. And on top of it, we calculate the % within each "Salesman" group, which is achieved with groupby(level=0).apply(lambda x: 100*x/x.sum()).

Note: After grouping, the original dataframe becomes a multi-index dataframe, hence the level = 0 here refers to the top level index, which is "Salesman" in our case.
https://www.codeforests.com/2020/07/18/calculate-percentage-within-group/
Awesome @m_magalhaes, works perfectly now! Thanks very much! - Flavio

Fantastic @m_magalhaes! It works very well, and since for my planned use I will always be working with parallel selection segments, cross selections aren't an issue! There's a small error - I guess it's happening because there are no polygons to the left to be checked. I haven't had time to fully read your code yet, but it's already very helpful! I'll try to solve this error by modifying your code or building another version using yours as a base. Thank you! (I'll leave it unsolved for today, in case someone wants to post something else, then I'll change it to solved.) - Flavio Diniz

@PluginStudent @C4DS Ah, so I think this is more complex than I thought. I guess I have to make a function to compare the edges by using their points... if two edges share the same point, it means these two edges are a single segment. So create a BaseSelect of these edges and use it in a for loop. It may cause some bugs depending on how the geometry is... but at least it's a starting point, and a tough one for me xD. Thank you guys!

Hi! I'm trying to find the selected edge segments in a polygon object, like the one below. I want to get each segment separated, so I can create some functions like the example below:

for segment in segmentlist:
    MoveInUvSpace()

I've tried a lot of things with the BaseSelect, Neighbor and PolygonObject classes, but no success... Here's my testing code; I was able to get the total edge count, the number of selected edges, and each selected edge with its index.
import c4d

def main():
    op = doc.GetActiveObject()
    nb = c4d.utils.Neighbor()
    nb.Init(op)

    seledges = op.GetSelectedEdges(nb, c4d.EDGESELECTIONTYPE_SELECTION)  # base select
    print seledges.GetSegments()
    print seledges.GetCount()
    print seledges.GetRange(1, nb.GetEdgeCount())
    print "total edges", nb.GetEdgeCount()

    notselected = []
    selectededge = []
    bs = op.GetEdgeS()
    sel = bs.GetAll(nb.GetEdgeCount())
    for index, selected in enumerate(sel):
        if not selected:
            continue
        print "Index", index, "is selected"
        selectededge.append(selected)
    print notselected
    print selected

if __name__=='__main__':
    main()

If anyone has any hint on how I could achieve this, I would be very happy!

@Cairyn said in Close any C4D Window: "That's because there is none. The Windows functionality in the API is very limited. You can open some through CallCommand but identification and handling of specific windows is not supported. Only thing you could do is loading a new layout."

I was afraid of that answer. But thank you anyway!

Hi! Is it possible to close any Cinema 4D window, like the ones below, by using Python? I've tried accessing this command, but I had no success, because I didn't find anything in the Python API to get C4D windows... Thanks!

Thanks a lot @C4DS @m_adam! So I think it's better to create separate CommandData plugins to perform the same action as each button; it's less complicated, allows the user to change the keyboard shortcuts, and removes the need for the GUI to be always open. Although m_adam's suggestions may be useful for other plugin ideas I have. I'll try it later. Thanks!

Hello! I'm so happy with the C4D Python API; it allows creating scripts, plugins, interfaces, menus, etc. very easily!
My question is: is it possible to assign shortcuts to the buttons from gui.GeDialog? If not, I think I could achieve that by writing separate scripts to perform the same action the button does, so they appear in the Customize Commands window. Thanks in advance!
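Returning to the edge-segments question above: the grouping step itself (deciding which selected edges form one contiguous segment) does not need the Cinema 4D API at all. Here is a sketch in plain Python using union-find, where each edge is a pair of point indices; the function name is invented, and in a real plugin you would build the edge list from the BaseSelect first and map each resulting group back to a selection:

```python
def group_edge_segments(edges):
    """Group edges (pairs of point indices) into connected segments.

    Plain union-find: two edges belong to the same segment when they
    share a point, directly or through a chain of shared points.
    """
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for p1, p2 in edges:
        union(p1, p2)

    # Bucket edges by the root of their connected component.
    segments = {}
    for edge in edges:
        segments.setdefault(find(edge[0]), []).append(edge)
    return list(segments.values())

# Example: edges (0,1),(1,2),(2,3) chain into one segment; (5,6) is separate.
print(group_edge_segments([(0, 1), (1, 2), (5, 6), (2, 3)]))
```

Each returned list is one selectable segment, which could then be converted into its own BaseSelect for per-segment operations.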
https://plugincafe.maxon.net/user/flaviodiniz
Ext JS 3.0 - Remoting for Everyone

As developers, we spend countless hours researching best practices to build engaging software. Often we find ourselves implementing the same repetitive functionality to wire our frontend to our backend. We've become accustomed to partaking in complicated design patterns to help separate logic from presentation - forcing the browser to play the role of a dumb terminal. While the RIA movement has unshackled the web browser from that awful fate, accessing our server-side logic remains mostly unchanged. Ext.Direct aims to solve this issue for developers creating Ext JS applications by providing a single communication point with the server-side.

Common Concerns

At Ext, we've integrated Ext JS with many languages and platforms, from Mainframe systems to Java to MUMPS to Perl. However, we noticed several issues that are common across all server-side languages when creating Ext apps:

- How to organize code and where to place appropriate business logic.
- Parsing and formatting data on the server side.
- Keeping a maintainable structure.
- Parsing Ajax responses and retrieving error conditions.
- Doing data validation in multiple areas.

Introducing Ext.Direct

Ext.Direct is a new package in Ext JS 3.0 that helps alleviate many of these issues by streamlining communication between your client and server. When using Ext.Direct, you can expect to write 30% less code by eliminating common boilerplate code. The Ext.direct namespace introduces several new classes for close integration with the server-side. New classes have also been added to the Ext.data namespace for working with Ext.data.Stores which are backed by data from an Ext.Direct method.

Ext.Direct uses a provider architecture, where one or more providers are used to transport data to and from the server. Several providers exist in the core at the moment, for example a JsonProvider for simple JSON operations and a PollingProvider for repeated requests.
One of the most powerful providers is the RemotingProvider.

RemotingProvider - Client-side Stubs

The RemotingProvider empowers the developer by mirroring server-side methods on the client-side, allowing them to be called as if they were sitting on the client-side. The server-side simply describes which classes and methods are available to the client-side. This allows code to be organized in a fashion that is maintainable, while providing a clear path between client and server - something that is not always apparent when using URLs.

Intrinsic Call Batching

The provider immediately batches together calls which are received within a configurable time frame and sends them off in a single request. This helps optimize the application by reducing the number of round trips that have to be made to the server. If a series of calls is received within the specified timeout period, the calls will be concatenated together and sent off to the server as a single request.

Server-side Stacks

In order for Ext JS's Direct protocol to work, you must have a compatible Ext.Direct server-side stack residing on your server. The server-side stacks use a 'router' to direct requests from the client to the appropriate server-side method. Because the API is completely platform-agnostic, you could completely swap out a Java-based server solution and replace it with one that uses C# without changing your JavaScript at all. Ext is providing a complete remoting specification along with several reference implementations of different server-side stacks in PHP, .NET, Ruby and ColdFusion. Each of these is licensed under an MIT license, so that the community can expand upon what has already been done and integrate them into their favorite MVC framework, such as Zend Framework or Struts.

An Example - The Ext Support App

Support subscribers have a new tool at their disposal to receive a response from the Ext Team.
We have developed a new Ext-based application that will be used to streamline the process of managing user support queries. The Ext support application is built on top of Ext 3.0 and utilizes Ext.Direct extensively. In order to see the benefit of Ext.Direct more clearly, let's take a look at how we utilized it.

Ext.Direct enables server-side developers to easily expose methods from the server-side to the client-side. In this example, we are exposing two methods of the TicketAction class - getTickets and getOpenTickets. We can now call these methods as if they were local client-side methods, without worrying about how the request is sent to the server-side and how the response is processed. We can also use these methods to populate an Ext.data.Store with the new DirectStore object.

var store = new Ext.data.DirectStore({
    storeId: 'open-tickets',
    directFn: TicketAction.getOpenTickets, // it's really that easy
    paramsAsHash: true,
    idProperty: 'tid',
    fields: [{
        name: 'tid',
        type: 'int'
    }, 'title', 'display_name', {
        name: 'last_post_time',
        type: 'date',
        dateFormat: 'timestamp'
    }],
    sortInfo: {
        field: 'last_post_time',
        direction: 'DESC'
    }
});

The TicketAction.getOpenTickets method can be called at any time; it is not required that it be used solely in conjunction with a store. I hope that the simple example above illustrates how Ext.Direct can help you better organize your client-server communication in your applications.

Ext.Direct Forum

We have added a new Ext.Direct forum under the Ext JS 3.0 category. Several community members have already submitted server-side stacks for their favorite environments. We already have a list of several server-side stacks in a stickied forum thread. We encourage the community to contribute back server-side stacks for their favorite environments by implementing the Ext.Direct Remoting specification. There are five server-side stacks currently available in our Ext.Direct pack, with more implementations soon to come.

There are 55 responses.
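The call-batching idea behind the RemotingProvider is simple enough to sketch in a few lines. The following is a hypothetical Python illustration (the real provider is JavaScript, and the class and parameter names here are invented): calls received within a short window are collected and handed to the transport as one batch.

```python
import threading

class CallBatcher:
    """Batch calls received within a short window into one request.

    A sketch of Ext.Direct-style batching; ``send`` stands in for the
    real HTTP transport and receives the whole batch at once.
    """

    def __init__(self, send, delay=0.01):
        self._send = send
        self._delay = delay
        self._pending = []
        self._timer = None
        self._lock = threading.Lock()

    def call(self, action, method, *args):
        """Queue one remote call; schedule a flush if none is pending."""
        with self._lock:
            self._pending.append({'action': action, 'method': method, 'data': args})
            if self._timer is None:
                self._timer = threading.Timer(self._delay, self.flush)
                self._timer.start()

    def flush(self):
        """Send everything queued so far as a single batched request."""
        with self._lock:
            batch, self._pending = self._pending, []
            if self._timer is not None:
                self._timer.cancel()
                self._timer = None
        if batch:
            self._send(batch)  # one round trip instead of many
```

Two calls made back-to-back, e.g. batcher.call('TicketAction', 'getTickets') followed by batcher.call('TicketAction', 'getOpenTickets'), would reach the server as a single request.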
Bryan Brandau (5 years ago): Looks sweet, another thing for us to take a look at!

Pablo (5 years ago): What is the difference between using the Ext.Direct API and plain Ajax combined with MVC on the server side, other than being able to automatically batch multiple requests? In both cases you'll have to duplicate your logic on the client and server side. You are saying that with Ext.Direct I'm not bound to a single server-side solution, but the truth is that instead of using the usual Ajax and polling I'll get tied to Ext.

Jay Garcia (5 years ago): This is an excellent article, Evan.

Javier Rincón (Syscobra) (5 years ago): Excellent article. As Pablo says, yes, I understand this as getting tied to Ext. Making server code work with only one frontend (Ext JS) doesn't look too fancy to me. If I want to make another frontend, let's say in Delphi for Windows, that gets the same data used on the server, I would have to make other normal Ajax functions to get that working in the other frontend. But if you are going to use only Ext JS for the frontend, I can see that it's good to use this. I will have to try it anyway; I am expecting to use it in a new application sometime in the future (when the client decides to make it). Meanwhile it's an interesting concept and this article clears up some of the big confusion around using it. Thanks, Evan.

mdmadph (5 years ago): Pablo, you're not getting "tied" to Ext JS - you're investing in them.

Ram (5 years ago): Hi, I am a newbie in Ext JS. I have a problem with tabPanel: how do I use an autoLoad URL web page action in an existing panel? Suppose my autoLoad URL contains a login page; after filling the required fields and pressing submit, the content should be displayed in the existing panel.

Eugene (5 years ago): Great article, thanks Evan!

Crysfel (5 years ago): Thank you so much!! Awesome article!!

David (5 years ago): This is an innovative approach to solving a known inconvenience. The Ext Team continues to impress us. Keep up the great work!

Dan Stevens (5 years ago): I've been checking every day for this article! Thanks guys - this is golden!

griffiti93 (5 years ago): What I find most attractive about Ext.Direct is the language-neutral approach. There are many excellent server-side frameworks that merge the client and server with the stubbed approach. But it always felt like you were tying your application down. With Ext.Direct you gain the benefits without the specific language binding. Sure, your server implementation will be language-specific. But everything about the client generated code is not. I look forward to this exciting new approach.

MiamiCoder (5 years ago): Like any technical solution, this targets a particular set of problems that you might or might not have. Batch updates and a single place for the server-access code are big pluses. You're free to stick to the previous approaches on the client, as well as layering your server-side code - with facades, for example - if you are concerned about being tied to a client platform.

Davi Baldin (5 years ago): It looks very, very nice - congratulations! But the server-side stack seems to do the same thing a web service does, just with a "webservice client" built with Ext exchanging JSON instead of SOAP...

Michael (5 years ago): What about JSON-RPC 2.0, do you think of supporting this?

Ext.Direct for ASP.NET MVC (trackback, 4 years ago): I recently released an Ext.Direct server-side stack implementation for ASP.NET MVC. If you are not yet familiar with Ext.Direct, it is a package in Ext JS 3.0 that makes communication between your client and server extremely easy. You can read about it in this official blog post.

Vicente Russo Neto (4 years ago): Hm... looks nice! I'll give it a try.

Andrew (4 years ago): Always impressed with the quality of your products!

krzysztof (4 years ago): Looks indeed cool.

Nitin Gautam (4 years ago): Can't we use it in Java projects?

krzysztof (4 years ago): Good question - how about Java projects?

Pro Dev (4 years ago): Thanks to the developers for this wonderful release!

Robert (4 years ago): Currently I'm using Ext.Ajax with JSON. I communicate with a PHP backend using Zend Framework (MVC approach). I don't see many differences. Traditional call using Ajax with MVC: url: "/MyClass/myMethod/". Ext.Direct: directFn: MyClass.myMethod. Even though Ext.Direct provides some good features like batching, I think it is targeted at those not using MVC.

Amila Thennakoon (4 years ago): Cool. But... we are used to using Ext without an IDE.

hanzen (4 years ago): Is it the same thing as Ajax? What's the difference? Can anyone explain?

Anish Abaraham (3 years ago): Good article, am going to replace DWR.

marcel (3 years ago): Wow, that's really cool. But I'm curious to know whether there is any reason to worry about a single point of failure and bottleneck. The concept is really cool... bravo!
http://www.sencha.com/blog/ext-js-30-remoting-for-everyone/
On Mon, Sep 20, 2010 at 2:12 PM, Andi Kleen <andi@firstfloor.org> wrote:
>> The pipe process needs to run in the namespaces of the process who set
>> the core pattern, not in the namespaces of the dumping process.
>> Otherwise it is possible to trigger a privileged process to run in a
>> context where reality differs from what it expected, causing it to
>> misuse its privileges. Even if we don't have a privilege problem I think
>> we will have a case of mismatched functionality where the core pattern
>> will not work as expected.
>
> For me it seems rather the other way around: running the helper in some
> highly privileged namespace is more dangerous. If it runs in the
> same context as the crasher it can do the least amount of damage
> relative to the crash process.
>
> And as Will pointed out it's the only sane way to deal with net namespaces.

I think you're both right. How it is implemented right now is an escape from the Linux container. If you allow the root user in a container to mount proc and update core_pattern, they can escape (core_pattern = |/well/known/binary_or_scripting_lang). I'm sure there are other escapes too (and any umh call is likely an escape like this one -- e.g., modprobe_path). That said, using my patch above might let you traverse a path otherwise blocked by an LSM enforcement (e.g., the root user runs a process which sets up a vfs namespace with an encrypted mount, and the LSM blocks access to /proc/[pid]/root - but core_pattern still runs and with access).

That said, using the setter's namespace makes sense to me as a consumer of core_pattern too. You can set the core_pattern outside of a chroot/container and collect core dumps there _or_ you can let a "root" user in a container have their own core collector without providing a simple escape. Making format_corename use the correct pid namespace for translation would make these cases even more seamless. Unfortunately, I haven't looked at doing it that way yet. The namespace-transition patch posted is what occurred to me initially. Perhaps it won't be so hard. I'll take a look at what it'd take to move core_pattern, since it'd resolve both the escape/lsm-bypass scenarios and the mismatch between the arbitrary namespace and the core_pattern values.

Thanks!
will
https://lkml.org/lkml/2010/9/20/364
Originally published in my blog:

When talking about "bad code" people almost certainly mean "complex code" among other popular problems. The thing about complexity is that it comes out of nowhere. One day you start your fairly simple project, and the next day you find it in ruins. And no one knows how and when it happened. But this ultimately happens for a reason!

Code complexity enters your codebase in two possible ways: in big chunks and in incremental additions. And people are bad at reviewing and finding both of them.

When a big chunk of code comes in, the reviewer will be challenged to find the exact location where the code is complex and what to do about it. Then the reviewer will have to prove the point: why this code is complex in the first place. And other developers might disagree. We all know these kinds of code reviews!

The second way complexity gets into your code is incremental addition: when you submit one or two lines to an existing function. And it is extremely hard to notice that your function was alright one commit ago, but now it is too complex. It takes a good portion of concentration, reviewing skill, and good code navigation practice to actually spot it. Most people (like me!) lack these skills and allow complexity to enter the codebase regularly.

So, what can be done to prevent your code from getting complex? We need to use automation! Let's make a deep dive into code complexity and the ways to find and finally solve it. In this article, I will guide you through the places where complexity lives and how to fight it there. Then we will discuss how well-written simple code and automation enable the "Continuous Refactoring" and "Architecture on Demand" development styles.

Complexity explained

One may ask: what exactly is "code complexity"? And while it sounds familiar, there are hidden obstacles in understanding the exact complexity location. Let's start with the most primitive parts and then move to higher-level entities.
Remember that this article is named "Complexity Waterfall"? I will show you how complexity from the simplest primitives overflows into the highest abstractions. I will use python as the main language for my examples and wemake-python-styleguide as the main linting tool to find the violations in my code and illustrate my point.

Expressions

All your code consists of simple expressions like a + 1 and print(x). While expressions themselves are simple, they might unnoticeably overflow your code with complexity at some point.

Example: imagine that you have a dictionary that represents some User model and you use it like so:

def format_username(user) -> str:
    if not user['username']:
        return user['email']
    elif len(user['username']) > 12:
        return user['username'][:12] + '...'
    return '@' + user['username']

It looks pretty simple, doesn't it? In fact, it contains two expression-based complexity issues. It overuses the 'username' string and uses the magic number 12 (why do we use this number in the first place, why not 13 or 10?). It is hard to find these kinds of things all by yourself. Here's what the better version looks like:

#: That's how many chars fit in the preview box.
LENGTH_LIMIT: Final = 12

def format_username(user) -> str:
    username = user['username']
    if not username:
        return user['email']
    elif len(username) > LENGTH_LIMIT:  # See? It is now documented
        return username[:LENGTH_LIMIT] + '...'
    return '@' + username

There are different problems with expressions as well. We can also have overused expressions: when you use the some_object.some_attr attribute everywhere instead of creating a new local variable. We can also have too complex logic conditions or too deep dot access.

Solution: create new variables, arguments, or constants. Create and use new utility functions or methods if you have to.
The first and the most obvious complexity metric for a line is its length. Yes, you heard it correctly. That's why we (programmers) prefer to stick to 80 chars-per-line rule and not because it was previously used in the teletypewriters. There are a lot of rumors about it lately, saying that it does not make any sence to use 80 chars for your code in 2k19. But, that's obviously not true. The idea is simple. You can have twice as much logic in a line with 160 chars than in line with only 80 chars. That's why this limit should be set and enforced. Remember, this is not a stylistic choice. It is a complexity metric! The second main line complexity metric is less known and less used. It is called Jones Complexity. The idea behind it is simple: we count code (or ast) nodes in a single line to get its complexity. Let's have a look at the example. These two lines are fundamentally different in terms of complexity but have the exact same width in chars: print(first_long_name_with_meaning, second_very_long_name_with_meaning, third) print(first * 5 + math.pi * 2, matrix.trans(*matrix), display.show(matrix, 2)) Let's count the nodes in the first one: one call, three names. Four nodes totally. The second one has twenty-one ast nodes. Well, the difference is clear. That's why we use Jones Complexity metric to allow the first long line and disallow the second one based on an internal complexity, not on just raw length. What to do with lines with a high Jones Complexity score? Solution: Split them into several lines or create new intermediate variables, utility functions, new classes, etc. print( first * 5 + math.pi * 2, matrix.trans(*matrix), display.show(matrix, 2), ) Now it is way more readable! Structures The next step is analyzing language structures like if, for, with, etc that are formed from lines and expressions. I have to say that this point is very language-specific. I'll showcase several rules from this category using python as well. We'll start with if. 
What can be easier than a good old if? Actually, if starts to get tricky really fast. Here's an example of how one can reimplement switch with if:

if isinstance(some, int):
    ...
elif isinstance(some, float):
    ...
elif isinstance(some, complex):
    ...
elif isinstance(some, str):
    ...
elif isinstance(some, bytes):
    ...
elif isinstance(some, list):
    ...

What's the problem with this code? Well, imagine that we have tens of data types that should be covered, including custom ones that we are not aware of yet. Then this complex code is an indicator that we are choosing the wrong pattern here. We need to refactor our code to fix this problem. For example, one can use typeclasses or singledispatch. They do the same job, but nicer.

python never stops to amuse us. For example, you can write with with an arbitrary number of cases, which is too mentally complex and confusing:

with first(), second(), third(), fourth():
    ...

You can also write comprehensions with any number of if and for expressions, which can lead to complex, unreadable code:

[
    (x, y, z)
    for x in x_coords
    for y in y_coords
    for z in z_coords
    if x > 0
    if y > 0
    if z > 0
    if x + y <= z
    if x + z <= y
    if y + z <= x
]

Compare it with the simple and readable version:

[
    (x, y, z)
    for x, y, z in itertools.product(x_coords, y_coords, z_coords)
    if valid_coordinates(x, y, z)
]

You can also accidentally include multiple statements inside a try case, which is unsafe because it can raise and handle an exception in an unexpected place:

try:
    user = fetch_user()  # Can also fail, but we don't expect that
    log.save_user_operation(user.email)  # Can fail, and we know it
except MyCustomException as exc:
    ...

And that's not even 10% of the cases that can and will go wrong with your python code. There are many, many more edge cases that should be tracked and analyzed.

Solution: the only possible solution is to use a good linter for the language of your choice. And refactor the complex places that this linter highlights.
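To make the suggested isinstance-chain fix concrete, here is a minimal sketch using singledispatch from the standard library (the handler bodies and the `handle` name are made up for illustration): each type gets its own small, flat handler instead of one growing elif ladder.

```python
from functools import singledispatch

@singledispatch
def handle(some):
    # Fallback for types without a registered handler.
    raise NotImplementedError('Unsupported type: {0}'.format(type(some)))

@handle.register
def _(some: int) -> str:
    return 'int: {0}'.format(some)

@handle.register
def _(some: str) -> str:
    return 'str: {0}'.format(some)

print(handle(10), handle('ten'))
```

Adding support for a new type means registering one more small function, so the dispatch logic never grows into a complex conditional.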
Otherwise, you will have to reinvent the wheel and set custom policies for the exact same problems.

Functions

Expressions, statements, and structures form functions. Complexity from these entities flows into functions. And that's where things start to get intriguing, because functions have literally dozens of complexity metrics: both good and bad.

We will start with the best-known ones: cyclomatic complexity and the function's length measured in code lines. Cyclomatic complexity indicates how many turns your execution flow can take: it is almost equal to the number of unit tests required to fully cover the source code. It is a good metric because it respects the semantics and helps the developer with refactoring. On the other hand, a function's length is a bad metric. It does not play well with the previously explained Jones Complexity metric, since we already know: multiple lines are easier to read than one big line with everything inside. We will concentrate on good metrics only and ignore bad ones.

Based on my experience, multiple useful complexity metrics should be counted instead of the regular function's length:

- Number of function decorators; lower is better
- Number of arguments; lower is better
- Number of annotations; higher is better
- Number of local variables; lower is better
- Number of returns, yields, awaits; lower is better
- Number of statements and expressions; lower is better

The combination of all these checks really allows you to write simple functions (all rules apply to methods as well). When you try to do some nasty things with your function, you will surely break at least one metric. And this will disappoint our linter and blow your build. As a result, your function will be saved.

Solution: when one function is too complex, the only solution you have is to split this function into multiple ones.

Classes

The next level of abstraction after functions is classes.
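Before diving into classes: a couple of the function metrics above (argument count, local variable count) can be roughly gathered with the standard ast module. This is only a sketch to show the idea; a real linter tracks far more kinds of bindings (augmented assignments, for-loop targets, with-targets, and so on), and the sample function is invented.

```python
import ast

def function_metrics(source):
    """Rough per-function metrics: argument and local variable counts."""
    tree = ast.parse(source)
    metrics = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Names bound by plain assignments inside the function body.
            local_vars = {
                target.id
                for stmt in ast.walk(node)
                if isinstance(stmt, ast.Assign)
                for target in stmt.targets
                if isinstance(target, ast.Name)
            }
            metrics[node.name] = {
                'args': len(node.args.args),
                'variables': len(local_vars),
            }
    return metrics

code = """
def checkout(user, cart, coupon):
    total = cart.total
    discount = coupon.amount
    final = total - discount
    return final
"""
print(function_metrics(code))
```

Run over a whole module, numbers like these are exactly what lets a linter fail the build the moment one extra variable pushes a function over its limit.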
And as you have already guessed, they are even more complex and fluid than functions, because classes might contain multiple functions inside (called methods) and have other unique features like inheritance and mixins, class-level attributes, and class-level decorators. So we have to check all methods as functions, plus the class body itself.

For classes we have to measure the following metrics:

- Number of class-level decorators; lower is better
- Number of base classes; lower is better
- Number of class-level public attributes; lower is better
- Number of instance-level public attributes; lower is better
- Number of methods; lower is better

When any of these is overly complicated, we have to ring the alarm and fail the build!

Solution: refactor your failed class! Split one existing complex class into several simple ones, or create new utility functions and use composition.

Notable mention: one can also track cohesion and coupling metrics to validate the complexity of your OOP design.

Modules

Modules contain multiple statements, functions, and classes. And as you might have already noticed, we usually advise splitting functions and classes into new ones. That's why we have to keep an eye on module complexity: it literally flows into modules from classes and functions.

To analyze the complexity of a module, we have to check:

- The number of imports and imported names; lower is better
- The number of classes and functions; lower is better
- The average complexity of functions and classes inside; lower is better

What do we do in the case of a complex module?

Solution: yes, you got it right. We split one module into several ones.

Packages

Packages contain multiple modules. Luckily, that's all they do. So the number of modules in a package can soon grow too large, and you will end up with too many of them. That is the only complexity that can be found in packages.

Solution: you have to split packages into sub-packages and packages of different levels.
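The module-level counting described above can also be sketched with ast. Again, this is a simplified illustration with an invented sample module; a real linter would additionally recurse into each function and class to average their complexities.

```python
import ast

def module_metrics(source):
    """Count imports, functions, and classes at module level: a rough sketch."""
    tree = ast.parse(source)
    counts = {'imports': 0, 'functions': 0, 'classes': 0}
    for node in tree.body:  # top-level statements only
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            counts['imports'] += len(node.names)  # count imported names
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            counts['functions'] += 1
        elif isinstance(node, ast.ClassDef):
            counts['classes'] += 1
    return counts

source = """
import os
from typing import Final

def first(): ...
def second(): ...

class User: ...
"""
print(module_metrics(source))
```

When any of these counters crosses a configured limit, the module is due to be split, which is exactly the overflow mechanism the next section demonstrates.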
Complexity Waterfall effect

We have now covered almost all possible levels of abstraction in your codebase. What have we learned from it? The main takeaway, for now, is that most problems can be solved by ejecting complexity to the same or an upper abstraction level. This leads us to the most important idea of this article: do not let your code be overflowed with complexity. I will give several examples of how it usually happens.

Imagine that you are implementing a new feature, and this one-line change is all you make:

```diff
+++ if user.is_active and user.has_sub() and sub.is_due(tz.now() + delta):
--- if user.is_active and user.has_sub():
```

Looks ok; I would pass this code on review. And nothing bad would happen. But the point I am missing is that complexity has overflowed this line, and that is exactly the kind of violation wemake-python-styleguide will report. Ok, we now have to solve this complexity. Let's make a new variable:

```python
class Product(object):
    ...

    def can_be_purchased(self, user_id) -> bool:
        ...
        is_sub_paid = sub.is_due(tz.now() + delta)
        if user.is_active and user.has_sub() and is_sub_paid:
            ...
        ...
    ...
```

Now, the line complexity is solved. But wait a minute. What if our function has too many variables now? Because we created a new variable without first checking their number inside the function. In this case we will have to split the method into several ones, like so:

```python
class Product(object):
    ...

    def can_be_purchased(self, user_id) -> bool:
        ...
        if self._has_paid_sub(user, sub, delta):
            ...
        ...

    def _has_paid_sub(self, user, sub, delta) -> bool:
        is_sub_paid = sub.is_due(tz.now() + delta)
        return user.is_active and user.has_sub() and is_sub_paid
    ...
```

Now we are done! Right? No, because we now have to check the complexity of the Product class. Imagine that it now has too many methods, since we have created the new _has_paid_sub one. Ok, we run our linter to check the complexity again. And it turns out our Product class is indeed too complex right now. Our action? We split it into several classes!
```python
class Policy(object):
    ...


class SubscriptionPolicy(Policy):
    ...

    def can_be_purchased(self, user_id) -> bool:
        ...
        if self._has_paid_sub(user, sub, delta):
            ...
        ...

    def _has_paid_sub(self, user, sub, delta) -> bool:
        is_sub_paid = sub.is_due(tz.now() + delta)
        return user.is_active and user.has_sub() and is_sub_paid


class Product(object):
    _purchasing_policy: Policy
    ...
```

Please tell me that this is the last iteration! Well, I am sorry, but we now have to check the module complexity. And guess what? We now have too many module members. So we have to split the module into separate ones! Then we check the package complexity, and possibly split it into several sub-packages as well.

Have you seen it? Because of the well-defined complexity rules, our single-line modification turned into a huge refactoring session with several new modules and classes. And we haven't made a single decision ourselves: all our refactoring goals were driven by the internal complexity and the linter that reveals it. That's what I call a "Continuous Refactoring" process: you are forced to do the refactoring. Always.

This process also has one interesting consequence: it allows you to have "Architecture on Demand". Let me explain. With the "Architecture on Demand" philosophy you always start small, for example with a single logic/domains/user.py file. And you start to put everything User-related there. Because at this moment you probably don't know what your architecture will look like. And you don't care: you only have about three functions.

Some people fall into the architecture-versus-code-complexity trap. They either overcomplicate their architecture from the very start with full repository/service/domain layers, or they overcomplicate the source code with no clear separation, and then struggle and live like this for years (if they can live for years with code like this!). The "Architecture on Demand" concept solves both problems.
You start small, and when the time comes, you split and refactor things:

- You start with logic/domains/user.py and put everything in there
- Later you create logic/domains/user/repository.py when you have enough database-related stuff
- Then you split it into logic/domains/user/repository/queries.py and logic/domains/user/repository/commands.py when the complexity tells you to do so
- Then you create logic/domains/user/services.py with http-related stuff
- Then you create a new module called logic/domains/order.py
- And so on and so on

That's it. It is a perfect tool to balance your architecture and code complexity, and to get exactly as much architecture as you truly need at the moment.

Conclusion

A good linter does much more than find missing commas and bad quotes. A good linter allows you to rely on it for architecture decisions and helps you with the refactoring process. For example, wemake-python-styleguide can help you with Python source code complexity. It allows you to:

- Successfully fight complexity at all levels
- Enforce an enormous amount of naming standards, best practices, and consistency checks
- Easily integrate it into a legacy code base with the help of the diff option or the flakehell tool, so old violations will be forgiven, but new ones won't be allowed
- Enable it in your CI, even as a GitHub Action

Do not let complexity overflow your code: use a good linter!

Discussion

Nice article, might send some of our juniors this way. :) I can forgive most of the complexity linting can fix. Proper compartmentalization, like using modules, can limit its cost a lot. Code written without any compartmentalization/deprecation/refactoring thought is usually much more time-consuming to work with, in my experience.

Glad that you liked it! Feel free to share your feedback with juniors. I would love to hear that.
Being able to recognize unnecessarily complex code and break it down into simpler code is what made me feel like I had "graduated" from being a junior developer into a more intermediate one. It has the potential to cut down on your overall lines of code, make it less prone to bugs and easier to scale, and in my experience simpler code is easier to optimize. Although I have limited knowledge of linters, I am going to bite the bullet and try out wemake-python-styleguide since Python is my hobby language of choice. Great write-up! Thanks for taking the time to share this with all of us.

Thanks! Feel free to ask any questions you will (possibly) have about wemake-python-styleguide.

Awesome post, love it when there are actual real-world examples. Helps drive the point across. Lots of good tips! Thanks!
https://dev.to/wemake-services/complexity-waterfall-n2d
Researcher Finds Tens of Software Products Vulnerable To Simple Bug (softpedia.com) 162

An anonymous reader writes: There's a German security researcher that is arduously testing the installers of tens of software products to see which of them are vulnerable to basic DLL hijacking. Surprisingly, many companies are ignoring his reports. Until now, only Oracle seems to have addressed this problem in Java and VirtualBox.

What's a DLL? (Score:2, Funny)

Re:What's a DLL? (Score:4, Informative)

Dynamic linked library

Re: (Score:3)

"Windows Dynamic Linked Library" in this case... not seeing a single mention of Linux or OSX in there. (Yes, there are equivalents in Linux and OSX, but no indication of the vuln in shared libs, dylibs, or dynamic shared libs, so...)

Re:What's a DLL? (Score:5, Informative)

Dynamic Link Library. Typically a shared resource that can be dynamically loaded and unloaded when needed, and often shared among programs. The problem with DLLs is that there are many versions of the same DLL that often need to run at the same time. Which means that you can substitute one version for another, and hijack a program. Nothing new here.

Re: (Score:2)

Nothing new here. And that's the point, right? It's nothing new, yet some vendors with some very widely distributed software still have the vulnerability.

Re:What's a DLL? (Score:4, Informative)

Nothing new, because it is how Windows was designed from the early days.

Re:What's a DLL? (Score:5, Informative)

Re: (Score:2)

Why implement it? That sounds like too much work!

Re: (Score:3)

I would say that Microsoft could improve on desktop applications by giving them their own namespace or user space (a la Android), but instead they now call these "legacy apps" and have the unrealistic expectation that you use universal apps, which do have these protections.
I say unrealistic because universal apps don't have anywhere near the capability set that you can get with "legacy apps", and there's no reason to write new desktop applications anymore because typically the best way to deliver your applica

%WINDIR%, %SYSTEM32%, %CSIDL_PROGRAM_FILESX86% (Score:2)

You would "hard code" using system variables like this: %CSIDL_PROGRAM_FILESX86%\Avast\Scanner\foo.dll That would end up being "the right place" no matter which drive letter has your Program Files directory. It wouldn't load hacker\foo.dll from any location.

Re: (Score:2)

In which case, what happens if you want to install your applications somewhere other than the default progra~1 directory?

There's a var for "this program's install folder" (Score:2)

There is a similar variable for "this program's installation directory", I believe. Generally, though, your DLLs should go where DLLs belong. Fighting against the design of the OS tends to increase the risk of a security exposure, in general.

Re: (Score:2)

Isn't the code for a DLL loaded into a shared location in memory? The code in a DLL is usually shared among all the processes that use the DLL... [wikipedia.org] So if you copy it to .../MyApp/Foo.dll doesn't that defeat that "feature"? Why use a DLL at all at that point? Sincerely yours, not a windows developer

Re: (Score:1)

No, I don't use Windows on my computer... No, I am not a shill. But... At some point, it's time for the programmers to do things the right way and not expect the OS to prevent them from making mistakes. I know it's fun to blame Microsoft when you don't know better or understand the problem but, really, this should not be a problem, because the people writing the program are responsible for their DLLs and their usage. I know, I know...
That does actually mean that they're accountable, and accountability is a sca

Re: (Score:2)

Well, the problem seems to be that Windows will load DLL files from the same directory that the executable is in by default, and this behaviour is retained for backwards compatibility because a lot of programs expect to work this way... This is yet another case of a serious design flaw in Windows which causes ongoing security problems, and cannot be easily fixed without breaking compatibility and/or extra hoops for users or developers to jump through. This is exploitable by preloading a user's downloads direct

Re: (Score:1)

Or, alternatively, don't let code access your system that you don't want. In other words, keep it locked down. How did the malicious DLL get into that folder, specifically, and how was it able to be called? And, if they can do that, why not just compromise the system in a hundred other ways instead of some half-assed way that might not work by using a DLL? They've already got access rights to put the DLL there. If they can do that, then why do this? Just avoid it, as a programmer, on general principle, but it's hard

Re: (Score:2)

>They've already owned the box just to put the DLL there.

Not exactly. With UAC, a prompt occurs to get administrative access; if this occurs at a time when the user does not expect it, they may very well say no. Drive-by downloads are one particular class of bug that can put a file in Downloads but have no risk of executing the file at that time; yes, they are a bug that needs fixing, but one that seems to commonly occur. It is only later, when an administrative installer executes, that the system can be full

Re: (Score:2)

Thing is, as Raymond would say, you're already on the other side of the hatchway. If you can write arbitrary malicious DLLs in the user's downloads folder, then why not just patch the .exe you find there?

Re: (Score:2)

That's because it's only a vulnerability in retrospect -- it was intended as a feature.
(Linux shared libraries -- the fact that every application can use the same copy of, say, GTK instead of having to replicate it -- are the same kind of deal.) I haven't read the article, but I suppose the countermeasure is that DLLs should be signed or have hashes checked before loading, or something like that.

Re:What's a DLL? (Score:4, Informative)

TFA is a "beat up" (likely paid for by Oracle); it does not explain how the attacker is able to put the compromised DLL on the machine in the first place. If an attacker can put a random binary on your local drive then they already own your machine. What a random installer subsequently does on a compromised machine is irrelevant to how the machine was hacked. Car analogy: if a miscreant cuts your brake line without your knowledge, it is not the manufacturer's fault that the brakes no longer work as advertised. If the manufacturer can make it more difficult to cut the brake line, that's great, but they cannot, and should not, be held accountable for malicious damage caused by someone who had unrestricted access to your brake line.

Re: (Score:2)

Some browsers will auto-save files to the designated downloads location; a malicious website can exploit this feature to get a DLL into your downloads directory. If you then execute an installer from the same directory, you can be infected. Getting a file into your downloads directory is not a compromise, as the file has not been executed, and on other platforms the presence of malware in your download directory is harmless unless you actually go out of your way to execute it.

Re: (Score:2)

Uh, if my browser silently downloads (executable!) files without me knowing, yeah, that's a compromise.

Re: (Score:2)

IE6 was highly exploitable in its default configuration; did that make it simply a feature?

Re: (Score:2)

> If an attacker can put a random binary on your local drive then they already own your machine

Pretty much no.
>If a miscreant cuts your brake line without your knowledge,

Terrible analogy, because it's not what's occurring here. They don't have unrestricted access to critical systems on your car. It's more like they put a brake-line-cutting machine in your front seat. It's not until you get into the car and say "Are you sure you want to run the program START CAR with administrative access" that your li

Re: (Score:2)

This (binary planting) is also the reason why everybody on Linux warns you against setting PATH to include CWD, or, if you really have to, to append it at the back.

Re: (Score:1)

The problem with DLLs is that there are many versions of the same DLL that often need to run at the same time. Which means that you can substitute one version for another, and hijack a program. Nothing new here.

If only it were as benign as that. You can even inject DLLs into a system process, and then have code executed as that process, unless things have changed dramatically in the past 4 years.

Re: (Score:2)

I can see this _IF_ the code already attempts a manual load call to the runtime, but I have never seen a method to force an unintended DLL to be loaded into a process space unless the originating binary is modified.

Re:What's a DLL? (Score:5, Informative)

Literally the FIRST hit on Google leads to this: ... [wikipedia.org] tl;dr - it's not really a problem to force an arbitrary process to load a DLL, *if you are an administrator*. As noted elsewhere though, if you have the power to inject, you already own the machine, so why bother?

Re: (Score:2)

Did you actually read that article? Seriously.

Re: (Score:2)

Just earlier today, I ran this very command:

sudo ln -sf /lib/$(arch)-linux-gnu/libudev.so.1 /lib/$(arch)-linux-gnu/libudev.so.0

I did not read the article, but the above command not only was acted on - it had the effect I wanted. I better go file a bug report!
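The Unix-side warning in one of the comments above (don't put the current directory at the front of PATH) can be demonstrated directly. This is an illustrative sketch; the scratch directory and the fake `ls` are invented for the demo:

```shell
#!/bin/sh
# Plant a fake "ls" in a scratch directory that just prints a marker.
workdir=$(mktemp -d)
printf '#!/bin/sh\necho HIJACKED\n' > "$workdir/ls"
chmod +x "$workdir/ls"
cd "$workdir"

# CWD at the FRONT of PATH: the planted ./ls shadows the real one.
front=$(PATH=".:$PATH" ls)
echo "front of PATH: $front"

# CWD at the BACK of PATH: the real ls is still found first.
back=$(PATH="$PATH:." ls .)
echo "back of PATH: $back"
```

This is exactly the "binary planting" pattern: the search order, not the planted file itself, decides whether the attack fires.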
Re: (Score:2)

Or written by people who primarily develop for non-Windows platforms where this isn't a problem... Why should developers on Windows have to jump through so many hoops that they don't need to know about on other platforms?

Re:What's a DLL? (Score:4, Informative)

Re:What's a DLL? (Score:5, Insightful)

- Why are you here?
- Why the semicolon?

Re: (Score:2)

"DLLs" are a proprietary technology from a single vendor. Yes, a proprietary technology from the largest vendor of IT software in the world. A proprietary technology which has been around for 20 years. A proprietary technology that every programmer bashes their head against at some point in their career. A proprietary technology that is taught about at universities. A proprietary technology which every computer user in the past 20 years has received at least one error message about. Did I forget anything? Oh yeah, of course. A proprietary technology that was deemed a laug

Re: (Score:2)

Do you even have a geek card to turn in? You've never heard of "DLL Hell"? Will wonders never cease...

Re: (Score:2)

Some people using the site now were probably not even born when "DLL hell" was still something that was actually a problem, rather than just a term that Slashdotters parroted.

Re: (Score:2)

Some people using the site now were probably not even born when "DLL hell" was still something that was actually a problem, rather than just a term that Slashdotters parroted.

That would mean people who haven't been born yet.

Revo uninstaller to the rescue

Re:What's a DLL? (Score:4, Interesting)

Re: (Score:2)

Re: (Score:2)

Do you even have a geek card to turn in?

Teun's geek card has a 5-digit UID, licensed for advanced deadpan sardonic commenting. Yours is 4-digit? Damn, you might have to retake your sarcasm detection certification.

Re: (Score:2)

You have to know enough about Windows to realise that it should be avoided. In my case, this happened in early 2005.

Re: (Score:1)
If You wanna a sandwish You use already made cheese and don't need to milk a cow, then process the milk to turn into cheese. To me, 99.99% of anything is cheesy, unless it's not developed by me. That's explains the good cheese in the market from my perspective. Is JDBC a fix for this problem? (Score:1) I have asked my Hyderabad team to investigate this problem and they have reported back to me that JDBC is a fix for it. Can anyone confirm if JDBC is a fix for this DLL problem? How would a Java database connectivity layer fix this if it's a problem with a C++ program interacting with a C++ DLL? Re: (Score:2) You should immediately discontinue using this team... Token car analogy: Q: "There is a problem with the ignition system in my car. Please investigate!" A: "Tires will fix your problem! Get Tires!" Re:What's a DLL? (Score:4, Informative) Dynamic Link Library or Shared Object. In the early days of UNIX, it was found that the huge amount of space was being used by GUI applications and command line programs statically linked to common libraries like standard IO, sockets, X-windows, GUI's, maths and crypto libraries. Huge amounts of disk space were being used to stored duplicate copies of compiled code. So they figured that it would be more cost effective to dynamically link at run-time instead of a compile-time with the bonus that they could be compiled into relocatable code only loaded into system when needed. If you run "ldd" on a program, you will see all the libraries needed for that program. By separating the library files from the applications, any bugs or problems could be fixed through a simple upgrade. The downside is that someone can rootkit a system by replacing a DLL used by applications that need system access. Re: (Score:2) You may incinerate your geek card. We don't even want it back. Re: (Score:1) The obvious question is; what's a DLL? ....and this ladies and gentlemen, are the coders we now have on /. Re:What's a DLL? 
(Score:4, Funny)

It's a shared object for a toy computer. Are you suggesting that Windows makes a toy computer? Wouldn't a toy GUI consist mostly of big colored squares, dumbed-down applications, and a supervisor monitoring your usage patterns?

Re: (Score:1, Troll)

Are you suggesting that Windows makes a toy computer? Wouldn't a toy GUI consist mostly of big colored squares, dumbed-down applications, and a supervisor monitoring your usage patterns?

And I present .... METRO

Re: (Score:2)

Re: (Score:2)

And I present... WOOSH!

DLL Hijacking (Score:5, Informative)

There's an informative (and non-PDF) post on Fortinet's blog [fortinet.com] discussing DLL hijacking. You can use a registry tweak to harden a system against this technique.

Update from TFA (Score:1)

"UPDATE: Mr. Kanthak has told Softpedia that "most of the companies/vendors I contacted patched their products." Rapid7 went so far as to withdraw their ScanNow product altogether. "Some of the companies/vendors which did not reply to my reports in the first place contacted me after they became aware of the [public disclosure] posts and fixed their installers, or are working on a fix now," Mr. Kanthak also added. Additionally, there are also some other software products for which Mr. Kanthak has not yet posted a

Other side of the airtight hatchway (Score:1)

If you have the ability to write a malicious DLL into a folder for the executable, you already have the ability to run administrator-level code. Why bother with the DLL? cf: Raymond Chen

Re: (Score:3)

I don't know how code signing verification policy works on Windows, but on OS X, Gatekeeper checks only an app's main executable for a signature against an Apple-issued code signing certificate, not other executables in the same folder that it loads.

Re:Other side of the airtight hatchway (Score:4, Insightful)

In this case it would be up to the installer to verify that it is loading a valid library.
The problem is that if somehow a certain named and versioned DLL can be downloaded to the same folder you execute the installer from, it can execute arbitrary code when the installer initializes it, using the elevated privileges you granted the installer. So in order to implement this side-loading you would first need to take advantage of another vulnerability to get that library into the right place. In order to protect against this they could simply not include the execution folder in the search path, and validate the library in a manner other than just the name and version, which can be faked. If someone were to try and exploit this, chances are they would attempt to run their code in the background while leaving the rest of the library untouched, so the installer would complete without tipping off the user. This means something as simple as a file size could validate there wasn't a bunch of extra code present, although there are better methods for validating a library.

Re: (Score:1)

With Apple? A known good product (XCode) was replaced with a "changed" version. That changed version did "bad things". Now that you're following along... Put up versions of all the afflicted applications... with 'adjusted' DLLs. What's that? You need my permission to install that program that I just downloaded? Of course you do *clicks accept permission eleva

Re:Other side of the airtight hatchway (Score:5, Informative)

Actually, you only have to insert it into the current working directory. For example: get a DLL file downloaded into Downloads, then wait for the user to run Setup.exe and have UAC hand it admin privileges. Now your non-privileged process has put a DLL file in the Downloads directory *with* Setup.exe, which loaded Downloads\CommDlg32.dll and was granted Administrator access. Now you have admin access. Microsoft Word used to do this if you had a DLL file with the same name as a System32 DLL in the same path as a Word document.
Re: (Score:3)

MSDN is saying that, by default, "Safe DLL" loading is used, in which the current directory is only used if loading the DLL from most other locations failed. So this would not be viable any more. It sounds like this problem was identified and fixed long ago. Any attempt to exploit this now would require gaining greater access first, and once you're there, there's no point in using DLL hijacking any more. [microsoft.com]

Re:Other side of the airtight hatchway (Score:5, Informative)

If you have the ability to write a malicious DLL into a folder for the executable, you already have the ability to run administrator-level code. Why bother with the DLL? cf: Raymond Chen

Exactly. Raymond covered this a few times in the past.

Using delayload to detect functionality is a security vulnerability [microsoft.com]: "It rather involved being on the other side of this airtight hatchway."

Disabling Safe DLL searching [microsoft.com]: if Safe DLL Search Mode is enabled, then the current directory isn't searched until after all the system directories are searched. Safe DLL search mode is enabled by default starting with Windows XP with Service Pack 2 (SP2). [microsoft.com]

This sounds like a complete non-story.

Re: (Score:1)

Why is this a flaw in the app, and not the OS? (Score:4, Interesting)

I'm aware of the Windows DLL load behavior, and how it creates "DLL Hell." I never thought of the security implications, because I assumed that Windows behaved more... sanely. The root of the problem is that the affected applications are installers, which need to be run with elevated rights. On Linux systems, for example, when an application is run with escalated rights (through SUID or sudo), the dynamic library loader uses only the system library paths and ignores user-specified paths (such as the LD_LIBRARY_PATH environment variable). Why the HELL doesn't Windows do the same for apps run as administrator?

Re:Why is this a flaw in the app, and not the OS?
(Score:5, Informative)

MSDN documents guidelines for preventing malicious DLL loading [microsoft.com]. Windows has already cut off "current directory" forms of attacks by changing the DLL load order (called "Safe DLL Search Mode" in that document), and with Vista locking down Program Files for admin-only access, "application directory" attacks are also out unless apps intentionally install themselves elsewhere (then they're on their own). As for installers, users have to get tricked into downloading the DLL first, and at least Chrome gives you a big warning that the file is suspicious due to its extension. And if you can get the user to do that, you might as well just give them an EXE and skip the warning. It's easier to put together a malicious EXE too.

Re:Why is this a flaw in the app, and not the OS? (Score:5, Funny)

...because I assumed that Windows behaved more ... sanely.

After all these years, why the hell would you think that?

Re: (Score:2)

That's not app-specific behavior. That's how the Windows library loader works.

Re:Why is this a flaw in the app, and not the OS? (Score:4, Interesting)

Any directory in the DLL search path for a normal application installed in a normal location is only writable by an (elevated) administrator user. If you can drop a random DLL file into such a folder you've already got administrator rights on the machine, so why make things any more complicated?

You've obviously never heard of ClickOnce then. ClickOnce deployment technology, available since .NET Framework 2.0, allows a signed application and its related DLLs to be downloaded into a folder within the user's own AppData folder structure and executed from there. It doesn't require Administrator rights to do this because it's within the user's own AppData folder structure. Just because an application is signed doesn't make it trustworthy.
Re: (Score:2)

I think the complaint is that the LD_LIBRARY_PATH equivalent is doing questionable things given the conventions of the target platform. It's hard to say, as there are zero details in the article and I don't have time to research what I don't really care about that much.

How to link statically with an LGPL program (Score:2)

An LGPL program can be linked statically to a proprietary program so long as the proprietary program's publisher makes available to its licensees a set of working .o files that can be linked to a new version of the LGPL program.

Re: (Score:3)

It isn't a problem, and the installer need take no special measures. The system's loader restricts the search path for dynamic libraries when it's running with elevated privileges, so you don't accidentally run an infected library in some random location (for example, the download directory). There are also techniques available to load libraries from a specific path after the program starts rather than at load time. You can use that to choose a specific full path to the exact library you want to load and it s

There are literally dozens of them... (Score:3, Funny)

DOZENS!

static linking on windows (Score:1)

Can static linking on Windows be done? I mean, Firefox, who cares? But products like TrueCrypt should be statically compiled, and require no resources from their operating system.

Re: (Score:2)

Re: (Score:3)

It does leave you permanently vulnerable to any flaws in the particular version of the library you linked against, or such is my understanding. At least with dynamic linking you can blame the user for not keeping up to date! I still static link though, because whenever I upload something (using a video filtering plugin) at least one person won't have the right runtime installed at all.

Re: (Score:3)

It does leave you permanently vulnerable to any flaws in the particular version of the library you linked against, or such is my understanding.
The assumption being that anyone (for most definitions of anyone) knows what DLLs their application loads and what the status of their patch levels is.

I still static link though because whenever I upload something (using a video filtering plugin) at least one person won't have the right runtime installed at all.

Which IMHO is the main mitigating factor -- what's the

Re: (Score:3)

Learning Coding? (Score:2)

Start learning security issues early on! It sounds to me, after all of Slashdot's articles, that many software teams don't have a coding security expert or security team, or we wouldn't have all these flaws.

Brain replacement vulnerability (Score:2)

More than tens of software products are vulnerable to key loggers installed in keyboard cables. More than tens of software products are vulnerable to compromise when executed from compromised systems. Come on, people, fix your vulnerable software or we will publicly slut-shame you for your indifference.

Barn door, and all that... (Score:2)

Am I alone in thinking that if malicious code has admin-level write access to system disks then you're already fubar? The horse is gone! Shut the barn door!

Whatever is downloaded ends up being run as admin (Score:3)

I'm going to simplify this a bit, but consider: you download two things. The first is songlist.zip. You extract songlist.zip, which is data; you don't execute anything in that download. You just extract it to your downloads folder and use Notepad to open the resulting songlist.txt. You don't notice that it also included a file called netssl.dll, which sits in your downloads folder. Later, you download mcafee_setup.exe. You run mcafee_setup.exe, which needs to run as admin. mcafee_setup.exe makes use of netssl.dll. I

Re: (Score:2)

So as a user you downloaded a suspect binary but it's the OS that's at fault? It's certainly true that Windows sucks for this kind of issue, and always has, but there's only so much you can do to protect idiot users from themselves.
Yes, downloading fdisk shouldn't run it (Score:2)

> So as a user you downloaded a suspect binary but it's the OS that's at fault?

Yes, it's a security flaw in the OS. I should be able to download fdisk.exe (as an unprivileged user) without the OS running fdisk.exe /wipe c: (as admin). Downloading as a user shouldn't mean executing as admin.

NO - Please do not post Click Bait headlines (Score:2)

This is Slashdot. Unless you are being sarcastic about a click-baity site that we need to laugh at, "Simple Bug" is not a valid replacement for "DLL Hijacking" or, more descriptively, "DLL Side Loading" or "DLL replacement." You want to know what will make Slashdot better? Good headlines are a fantastic start. :-)

Use of language (Score:1)

Still depends on user trusting installer (Score:3)

installations requiring admin (Score:2)

The problem is the practice of requiring admin privileges to install most software. Software should not require an admin install unless it really needs it. Common frameworks (which are a big user of DLLs) do exacerbate the problem, since they often want to be installed in a root location so all the applications can share them. A solution is to forbid third parties from bundling installers for common framework runtime binaries. If the framework is needed, then either install the binaries in the application directo

New Vulnerability! (Score:1)

Guys! I discovered a new vulnerability in Windows: if you replace an executable with a different executable and then execute it, you actually execute the new executable and not the executable you replaced. Where should I submit my paper for publication?

I mean, this is a little unfair (Score:2)

I like shitting on Windows apps as much as the next guy, but if you can replace a library on the drive, aren't you just going to, like... win? Maybe there's more protection on real systems, but it's a binary that gets run with the permissions and privileges of whatever is running it.
Can someone explain to me how this is a larger concern, and what was done to patch the security of this? It stands to reason that if you can overwrite a DLL, you can overwrite a lot of stuff, same as with an .so or something.

Follow the money! (Score:1)
That's a lot of name-dropping. Wonder if said researcher asked for a bit of hush money and if you paid up you were taken off the list? Smear campaigns for cash are hardly new.

Re: (Score:2)
https://it.slashdot.org/story/16/02/08/193244/researcher-finds-tens-of-software-products-vulnerable-to-simple-bug?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot+%28Slashdot%29
CC-MAIN-2017-22
refinedweb
4,895
63.8
Is there a class-level annotation for Jackson's @JsonProperty? Without knowing what field names are in the class, can I annotate the class with a single annotation and have it map the fields automatically? Currently I have to annotate each field with @JsonProperty; is there something I can do at the class level that serves the same purpose?

public class MyEntity {
    @JsonProperty("banner")
    private String banner;

    @JsonProperty("menu")
    private String menu;
}

@JsonProperty is not a class-level annotation, and you don't need to mark your class with any annotation. If you provide your class name as an argument to the parser it will know how to map it according to your getter methods. It is as if every getter method had been marked with @JsonProperty without any argument.
http://www.devsplanet.com/question/35275098
CC-MAIN-2017-22
refinedweb
138
60.55
C++ Overview

In fact C++ was originally called C with Classes and is so compatible with C that it will probably compile more than 99% of C programs without changing a line of source code. The C++ programming language is based on the C language. Although C++ is a descendant of the C language, the two languages are not always compatible. In C++, you can develop new data types that contain functional descriptions (member functions) as well as data representations. You can define a series of functions with different argument types that all use the same function name. This is called function overloading. The C++ language provides templates and several keywords not found in the C language. Other features include try-catch-throw exception handling, etc.

Environment Setup

First you need a C++ compiler. There are many commercial and free ones available. Both the below listed compilers are completely free and include an IDE to make life easier for you to edit, compile and debug your applications.

What is a compiler? A compiler is a special program that processes statements written in a particular programming language and turns them into machine language or "code" that a computer's processor uses.

C++ can be written using a text editor. This can be Notepad or an IDE like those supplied with the two compilers.

Basic Syntax

Let us look at a simple piece of code that would print the words Hello World.

#include <iostream>
using namespace std;

// main() is where program execution begins.
int main() {
    cout << "Hello World"; // prints Hello World
    return 0;
}

The following list shows the reserved words in C++. These reserved words may not be used as constant or variable or any other identifier names.
Comments are used to document and explain our code and program logic. Comments are not programming statements and are ignored by the compiler, but they are VERY IMPORTANT for providing documentation and explanation for others to understand your program (and also for yourself three days later).

1. Multi-line comment: begins with a /* and ends with a */, and can span several lines.
2. End-of-line comment: begins with // and lasts until the end of the line.

Visit shegertech to learn more!
https://www.scribd.com/document/370323609/Cpp-1
CC-MAIN-2019-35
refinedweb
393
64.81
NAME
stat, fstat, lstat, fstatat - get file status

SYNOPSIS
#include <sys/stat.h>

int stat(const char *restrict pathname, struct stat *restrict statbuf);
int fstat(int fd, struct stat *statbuf);
int lstat(const char *restrict pathname, struct stat *restrict statbuf);

#include <fcntl.h>           /* Definition of AT_* constants */
#include <sys/stat.h>

int fstatat(int dirfd, const char *restrict pathname, struct stat *restrict statbuf, int flags);

lstat():
    /* Since glibc 2.20 */ _DEFAULT_SOURCE
        || _XOPEN_SOURCE >= 500
        || /* Since glibc 2.10: */ _POSIX_C_SOURCE >= 200112L
        || /* Glibc 2.19 and earlier */ _BSD_SOURCE

fstatat():
    Since glibc 2.10: _POSIX_C_SOURCE >= 200809L
    Before glibc 2.10: _ATFILE_SOURCE

DESCRIPTION
These functions return information about a file, in the buffer pointed to by statbuf. lstat() is identical to stat(), except that if pathname is a symbolic link, it returns information about the link itself, not the file that the link refers to. fstat() is identical to stat(), except that the file about which information is to be retrieved is specified by the file descriptor fd.

The stat structure; /*.

- This flag is Linux-specific; define _GNU_SOURCE to obtain its definition.

RETURN VALUE
On success, zero is returned. On error, -1 is returned, and errno is set to indicate the error.

ERRORS
EACCES   Search permission is denied for one of the directories in the path prefix of pathname. (See also path_resolution(7).)
EBADF    fd is not a valid open file descriptor.
EBADF    (fstatat()) pathname is relative but dirfd is neither AT_FDCWD nor a valid file descriptor.
EFAULT   Bad address.
EINVAL   (fstatat()) Invalid flag specified in flags.
ENOMEM   Out of memory (i.e., kernel memory).
ENOTDIR  A component of the path prefix of pathname is not a directory.
ENOTDIR  (fstatat()) pathname is relative and dirfd is a file descriptor referring to a file other than a directory.

VERSIONS
fstatat() was added to Linux in kernel 2.6.16; library support was added to glibc in version 2.4.

CONFORMING TO
stat(),.)

NOTES
Timestamp fields
Older suitable feature test macros are defined. Nanosecond timestamps were standardized.

C library/kernel differences
Over padding.

- applications,().

EXAMPLES
The); }
https://man.archlinux.org/man/fstatat.2.en
CC-MAIN-2022-21
refinedweb
304
61.53
to insist on the spelling of the word. to provide a replacement spelling

If the invoker insists on the spelling of the word, then this word is added to wordspell's "memory". wordspell remembers words in the file "memory" in the invoker's home directory. Any further invocation of wordspell by the same invoker will consider the word to be correct.

I am getting these errors with my code:

./wordspell: line 9: syntax error near unexpected token `newline'
./wordspell: line 9: ` set choice = $<'

#/bin/csh
touch ~/memory.txt
while (`ispell -l ~/memory -l < $1`)
echo "$1 is mispelled"
echo -n "Press enter to keep this spelling, or"
echo -n "type a correction here: "
set choice = $<
if (choice== "" )then
ispell -I
else
ispell -R choice >> ~/memory.txt
endif
end

Can anyone help me understand what I'm doing wrong? And since my script won't even run, I can't find out if it works or not, so if you see any errors in my script can you point them out to me?
http://www.dreamincode.net/forums/topic/70149-shell-script-help/
CC-MAIN-2016-26
refinedweb
169
71.75
Hello again, my previous post has been accepted nicely, so I'm here with volume II of my notes. :-)

### Doc: XML and TEXT output

Not mentioned in the documentation. I guess it's because they are new and designed to be semi-hidden for use of m/monit. But I think such features that increase the interoperability of the project should be advertised loudly! :-}

### LED blinks, speaker screams on (some?) alerts

Something like this:

set ledalert on # constant keyboard LED blink
set beepalert on # let's say three beeps each 15 seconds

The alert e-mail does not necessarily reach the target box, cellphone, ... on time. In case a nasty situation is detected, the server should IMHO try to catch the human eye or ear as soon as, and in any way, possible. This alert would also be handy in case of an unreachable mailserver or network connection, in short in situations where something nasty _might_ happen but monit knows it won't be able to inform about it. The ongoing beep or blink state could be presented on the HTML/XML/TEXT output too, together with an HTML button and console command to shut them off immediately until the next state change.

### depends on groupname

Not really sure whether such a thing may be useful, but again, it would increase the configuration freedom. This way, the action may be triggered by any service in the dependent group.

### Password hash directly on the ALLOW line

Something like this:

allow user:0ec8266c6fbd441ea707864855f0368a20e82c36 sha1 read-only

I don't like to reveal my password to any reader, but using an external file like htpasswd is too uphill. I consider this way a reasonable trade-off.

### New check: i-node number change
### Symlink target change (target string checksum?)

The first check could detect a hard link target change. The second one allows one to detect whether the symlink was tampered with. These checks mostly fall into the security measures category and are marginal here. I just mention them as small ideas.
### Multiple actions after THEN

AFAIK it is possible to define only one action now. EventAction_T in monitor.h suggests it. I imagine the *next member in EventAction_T. :-) For example:

if loadavg(1min) > 10.0 for 8 cycles then { exec "/usr/local/sbin/myscript.sh", stop }

### Variables in exec action

There are situations where it can become handy to pass additional information to underlying utilities. Variables like $ACTION, $SERVICE... used in mail-format might be expanded in the exec command line too if wished...

if loadavg(1min) > 10.0 for 8 cycles then exec '/usr/local/sbin/myscript.sh "$SERVICE" "$ACTION"'

### ifhost "dilbert" or "ginger" ... endif

I think I'm not the only one who uses monit to keep an eye on several servers, not only one. Even when using INCLUDE and CVS to manage multiple control files, there are still many of the same lines remaining. Such a feature of conditional compilation would allow me to have only one monitrc for all my servers, which could lead me to a systematic unified configuration and help to reveal configuration mistakes.

There is a security disadvantage in this. A cracker who gets root access on one of my servers would know the exact monitoring configuration on the rest of them. That's bad. Instead I'm considering now using some general preprocessor, e.g. GNU M4, to produce the set of monitrc files for particular servers after the CVS checkout. Using M4, advanced magic like parametrized macros, etc. is possible. I'll probably use it and can contribute a tutorial or examples if anyone is interested.

And now for something completely different:

### Encryption of all mail alerts using given PGP public key

I'm proposing later in this text that monit may be able to send some sensitive data in the mail alerts and therefore it should be possible to hide the alert body from unintended eyes. It is possible to keep the public key (even multiple ones) in ASCII-armored form in the control file. This feature may deploy gnupg or a similar package.
The fact that the Subject line cannot be encrypted and still alerts any recipient could be taken as an advantage. In case the encryption is not possible by accident, the rich body would not be sent, of course. I'm not sure, but I think it is possible to encrypt data with multiple public keys and decrypt it with any of them. This feature would fit this application. Alerts could be encrypted by the admin's key together with the company's key stored somewhere safe.

### Advanced e-mail alerting

In a previous e-mail, there was a proposal about the rich XML report (_status?format=xml request), where every <service> provides exhaustive information about the tests being applied and the currently expected and received values (for example checksums, send/expect x received strings). In case this proposal is implemented, it might be interesting if monit puts the entire XML report (or just the systemwide part and the failed/recovered services) into the encrypted e-mail message body. No special mail structure needs to be invented; the XML form is already the best one for technical processing and it will contain all information. Nevertheless it may be possible to select HTML or TXT format too to be put into the e-mail message body.

This way we are sending the detailed historical diagnostic and forensic information somewhere far far away for deposit (perhaps out of the touched domain). Everything may be cracked or shut down for unknown reasons, but once the data leave the domain, we are better informed.

### Embedding a Python interpreter

I'm going slightly mad... :-]

Vlada

signature.asc
Description: OpenPGP digital signature
http://lists.gnu.org/archive/html/monit-general/2004-07/msg00031.html
CC-MAIN-2017-30
refinedweb
944
63.7
#include <setjmp.h>

int setjmp(jmp_buf env);
void longjmp(jmp_buf env, int val);

These functions are useful for dealing with errors and interrupts encountered in a low-level subroutine of a program.

setjmp( ) saves its stack environment in env (whose type, jmp_buf, is defined in the <setjmp.h> header file) for later use by longjmp( ). It returns the value 0.

longjmp( ) restores the environment saved by the last call of setjmp( ) with the corresponding env argument. After longjmp( ) is completed, program execution continues as if the corresponding call of setjmp( ) had just returned the value val. longjmp( ) cannot cause setjmp( ) to return the value 0. If longjmp( ) is invoked with a second argument of 0, setjmp( ) returns 1. At the time of the second return from setjmp( ), all external and static variables have values as of the time longjmp( ) is called (see example). The values of register and automatic variables are undefined.

SCO OpenServer does not assign any special meaning to the symbols _setjmp( ) and _longjmp( ). Some other operating systems use these to differentiate between functions that save the process's signal mask (setjmp/longjmp) and functions that do not (_setjmp/_longjmp). If the signal mask is to be saved as part of the environment, use the sigsetjmp(S) and siglongjmp(S) routines instead.

#include <setjmp.h>

jmp_buf env;
int i = 0;

main()
{
	void exit();

	if (setjmp(env) != 0) {
		(void) printf("value of i on 2nd return from setjmp: %d\n", i);
		exit(0);
	}
	(void) printf("value of i on 1st return from setjmp: %d\n", i);
	i = 1;
	g();
	/*NOTREACHED*/
}

g()
{
	longjmp(env, 1);
	/*NOTREACHED*/
}

If the a.out resulting from this C language code is run, the output is:

value of i on 1st return from setjmp: 0
value of i on 2nd return from setjmp: 1

X/Open Portability Guide, Issue 3, 1989; ANSI X3.159-1989 Programming Language -- C; IEEE POSIX Std 1003.1-1990 System Application Program Interface (API) [C Language] (ISO/IEC 9945-1); and NIST FIPS 151-1.
http://osr507doc.xinuos.com/cgi-bin/man?mansearchword=_setjmp&mansection=S&lang=en
CC-MAIN-2020-50
refinedweb
317
63.59
Bram pointed me to this thesis on implementing real numbers within HOL. I heartily recommend this thesis to people following this thread (if any). It's very interesting to compare to Metamath's construction of the reals. Unfortunately, these constructions are not compatible. One significant difference is that Metamath seems to support partial functions, so 1/0 is not in the range of the divide function, while HOL wants to have total functions within the type, so 1/0 must have some value (Harrison chooses 0 to simplify the details). As such, proofs from one probably can't be easily ported to the other without serious handwork.

I feel I understand Metamath reasonably well now. It has some issues, but its overwhelming strength is that it's simple. For example, I believe that a fully functional proof verifier could be done in about 300 lines of Python. I wonder how many lines of Python a corresponding verifier for HOL would be; I'd guess around an order of magnitude larger. That kind of difference has profound implications. Norm Megill is certainly to be commended for the "simplicity engineering" he's put into Metamath.

For the purpose of doing Web-distributed proofs, Metamath has a few shortcomings. I think they can be fixed, especially given the underlying simplicity. I'll talk about these problems and possible fixes over the next few days.

Definitions in Metamath have two closely related problems. Definitions are introduced exactly the same way as axioms. As such, it's far from obvious when a definition is "safe". For example, you could add definitions for the untyped lambda calculus, which would introduce the Russell set paradox. The second problem is that there is a single namespace for newly defined constants. You wouldn't be able to combine proofs from two different sources if they defined the same constant two different ways.

Here's my proposal to fix these problems. Choose a highly restricted subclass of definitions that is clearly safe.
For example, you could say that any definition of the form "newconst x y z = A" or "newconst x y z <-> phi", with newconst not appearing in A or phi, is acceptable. I propose to introduce new syntax that clearly identifies such definitions. You could use existing syntax, so that such definitions become axioms but can be checked easily, or you could have other syntax that sets the new constant apart from its "macro expansion". That's a style preference.

Now let's talk about namespaces. I have a strong preference for using hashes as global names, because (assuming the hash function is strong) you don't get collisions. As such, it should be possible to mix together arbitrary proofs without danger. Here's an outline proposal. Take the definition axiom, and replace the newly defined constant with some token, say $_. Hash the result. That is the "global name". When you're developing proofs, you'll probably want a (more or less) human-readable "pet name", but this is actually irrelevant for verification.

Here's an example in Metamath notation. Here's Metamath's definition of the empty set:

$( Declare the symbol for the empty or null set. $)
$c (/) $. $( null set $)
$( Extend class notation to include the empty set. $)
c0 $a class (/) $.
$( Designate x as a set variable for use within the null set definition. $)
$v x $.
$f set x $.
$( Define the empty set. $)
dfnul2 $a |- (/) = { x | -. x = x } $.

So here's what gets hashed:

$a class $_ $.
$f set x $.
$a |- $_ = { x | -. x = x } $.

Take the SHA-1 hash of this string. Then I propose that #274b1294a7d734a6e3badbf094190f46166159e4 can be used (as both a label and a constant, as these namespaces are independent) whenever the empty set is needed. A proof file would of course bind this string to a shorter name, such as (/). When importing a proof file from another, the binding would be local to the file.
(Currently, Metamath has only a file include facility similar to C's preprocessor #include, but an import facility with better namespace management would be quite a straightforward addition, especially considering that Metamath already has ${ $} scoping syntax). Obviously, there are some details to be worked out, particularly nailing down exactly what gets hashed, but I think the idea is sound.

Schooling

Alan's Mindstorms arrived a couple of days ago. These promise to be quite fun (and of course educational :). So far, he's settling into first grade very easily. We begin the half-homeschooling starting on Monday. Even so, I get the sense that Max is going to be the one most into computers. He's learning the OS X interface impressively well. Last time we resumed the computer, a folder was highlighted, and he said, "it's clicked." Then, when I ran Stuffit to unpack a game demo, he noted the icon and said, "it's squishing it." He's also the one that said, "I love Alan's 'puter".
http://www.advogato.org/person/raph/diary/267.html
CC-MAIN-2015-06
refinedweb
826
66.74
Last post 06-16-2009 5:47 AM by shiv.kumar. 19 replies.

We're seeing that the 2.0.0.2 release of Firefox has a behavior change that's affecting ASP.NET AJAX. Basically the dynamically loaded scripts seem to now execute asynchronously, which causes them to fire after the inline scripts, whereas previously they'd fire as soon as they were added to the DOM. This will affect scenarios where a component relies on PageRequestManager events inside of an UpdatePanel, for example the ModalPopup control that is part of the Control Toolkit. We are actively working with the Firefox team to figure out the best approach to address this issue and we will update this thread as soon as we know something.

Bump. :o) Any update to speak of? I'd really hate to put in a work-around (i.e. the window.setTimeout(......)), but I'd like to have a clearer understanding of the timelines regarding the resolution of this issue. Can anyone please provide an update? Thanks, Greg

We are still talking to the Firefox team. It is taking longer than we expected. AndresS

Consider using jQuery, a very lightweight JavaScript library which has a feature to help here. Look up the ready function. It will run the given function once the page is ready, meaning once the markup has been loaded. And it precedes the images and other media being loaded. I am now using it with my ASP.NET websites, like LinkMindr.com.

$(document).ready(function() {
    // put all your jQuery goodness in here.
});

This sounds promising. How does this interaction work with ASP.NET AJAX? Are there any conflicts in using both libraries? I am about 95% done with development on my application, which relies heavily on the MS AJAX framework. I would hate to bring in a new component and have it break everything else. :o)

I have not had any issues with jQuery and ASP.NET together.
Here is an example site I put together recently which makes use of ASP.NET AJAX and Thickbox, which is built on jQuery. The collapsible panels are near the bottom while the Thickbox feature manages the images. You can see there are no conflicts. I use jQuery much more extensively on this site... It also uses the collapsible panels while I use jQuery to do my lightweight callbacks. Since the ASP.NET AJAX objects are generally all under the Sys name they will not conflict with other libraries which maintain a similar namespace concept. And jQuery is respectful of this construct. The following blog entry was written by the author of the jQuery library and the topic should reassure you about using jQuery.

Anyone have an update on this issue?

We have been actively communicating with the Firefox team to address this issue. We believe that Mozilla Bug 371576 addresses this issue. They've got a fix and are working on an update release. Thanks, AndresS

I have a problem with Firefox too. I copied the PopupControl sample from the AjaxToolkit to try out. If I move the <asp:UpdatePanel> from inside of the <asp:Panel> to outside of the <asp:Panel>, the popup panel refreshes for each callback event. It won't close the popup when the server issues commit(). It happens not only in Firefox 2.0, it's not working in Firefox 1.5.0.10 either. But the same code works in IE. It looks like all those elements in the popup panel are re-created for each callback event.

In one of my production applications we use a ModalPopup that's invoked from a GridView control, specifically an ImageButton. In 1.5.0.10 it works great. Confirmed that it shows nothing in version 2.0.0.2; clicking the ImageButton just puts up the progress update for a few seconds and then disappears.

And I just confirmed that release 2.0.0.3 fixed it. They haven't released it officially yet, it looks like... (as in pushing down the update) but this will fix the issue. Time to start letting users know that they need to stay up with the latest versions.
JoeWeb

Good job, the 2.0.0.3 fixes my problem on the ModalPopupExtender. Thanks

© 2009 Microsoft Corporation.
http://forums.asp.net/p/1081599/1600941.aspx
crawl-002
refinedweb
722
68.36
Building data-centric business applications with Visual Studio Visual Studio Developer Center | Data Developer Center | How-Do-I Videos | Code Gallery | VS on Channel 9 Sheesh. Some of the feedback I'm getting on this series makes me feel like a traitor to the Visual FoxPro community. Nothing could be further from the truth; I love the VFP community and I am a staunch supporter of VFP and have been for many years. I think my contributions speak for themselves (harrumph! - lol). I'm coming to the conclusion that VB .Net in itself isn't all that difficult and most VFP developers could adjust to the language in a short time. You have to get used to the idea of constructors and hard typing of objects and variables in that we can't just spuriously assign a value or object to "x" implicitly. I think I covered a little of that in my last post. What's staggeringly complex is the .Net framework itself. I know that most of you have heard the phrase ".Net Framework" and, maybe, been a little in the dark about what that meant. Well, essentially, think of the framework as classlibs installed into your system. In VFP terms: Each classlib belongs to a library that .Net refers to as a "namespace". When the framework is installed, some binaries are installed in the Windows directory that allow for the basic namespaces, for example "System." You can enhance the number of objects available by including external references into your .Net project. For example, I am working with a DLL that exposes a bunch of methods and properties to my VB app once I add the reference to my project. This is akin to adding a classlib to your VFP project. Anyhow, I'm convinced that the winning combination for Fox people to learn .Net is to muck around with the VB or C# language with small tasks, but to really focus on the .Net framework and what it can do for you. It's not that hard. Really. Sorry, no code samples this time as I'm mired in fixing an obsolete test driver. 
Which is actually teaching me a lot, but nothing I can synthesize into absolutes at this time. Duty calls. I'll get more to y'all when I can. Meanwhile, there are a few VFP to .Net books out there that are worthwhile. While I'm constrained in giving recommendations to 3rd party products, I'll give you a hint: One is by Kevin M. and one is by Les P. Figure it out.
http://blogs.msdn.com/vsdata/archive/2005/02/15/374138.aspx
crawl-002
refinedweb
450
74.59
Miscellaneous Tagging in XML - Language Settings - Space Handling - Date and Time Representation - Summary - References Several text properties are common to all types of XML documents, regardless of their final purpose. These characteristics occur so frequently that the various XML standards often include special provisions for them. These properties cover the following challenges: How to identify the language of the content—For example, how to distinguish a paragraph in German from one in Czech. How to handle white spaces—Basically, how to note the difference between spaces that are meant to make the structure of the document more readable (spaces such as indentations and empty lines), and the ones that are meaningful. How to represent date and time information—In our case especially, how to display this information both from locale-specific and from locale-neutral viewpoints. Language Settings To address language identification, XML provides a default attribute to specify the language of any element: xml:lang. In many respects this attribute can also be seen as a locale. A locale is, roughly, the combination of a language and a region. The classic example is the difference between French, the language, and the two locales: French for France, and Canadian French (Québécois). Many other examples exist: the various flavors of Spanish, Brazilian versus Portuguese, and so forth. In addition to linguistic differences, the locale also often indicates possible variations on how to process data: Currency, numbers, date/time formatting, sorting, and character uppercasing and lowercasing are some of the locale-specific areas. Sometimes the locale even goes beyond and points to deeper differences such as the type of writing system (for example, Classical Mongolian versus Cyrillic Mongolian, or Azerbaijani Arabic versus Azerbaijani Cyrillic). NOTE A good example of a language where differences are clear between locales is Spanish. 
Spanish is spoken in many countries and therefore comes in many different varieties. When localizing for a specific market you must decide which flavor you need. For example, Spaniards use "utilidades" for "utility programs," Argentines use "utilitarios," and Mexicans use "utilerías." Another example is the term "computer." Spaniards use the word "ordenador" but all Latin Americans use "computadora" instead. Such discrepancies cause a few dilemmas when you want to have only one Spanish translation for all markets. To reduce costs, companies often try to use a "neutral" or "international" Spanish. This is an artificial creation, as is "Latin American Spanish." Finally, to avoid confusion, you might want to refer to the Spanish spoken in Spain as "Iberian Spanish" rather than "Castilian Spanish," the term "Castellano" being often used in South America to refer to the Spanish spoken there.

When defining your own XML vocabulary, you should use xml:lang as your attribute to specify the locale information, rather than come up with your own attribute. There are a couple of good reasons for this. First, xml:lang will be understood immediately by any XML user. And second, it will allow you to take advantage of interoperability among the various XML-related technologies such as XSL or CSS. If you use a DTD to specify your format, xml:lang must still be declared, just as with any other attribute. For example:

<!ATTLIST p xml:lang NMTOKEN #IMPLIED >

Language Codes

The values of the xml:lang attribute should conform to the language tags defined in the XML specifications, as shown in Listing 3.1.

Listing 3.1 Definition of the Value for the xml:lang Attribute

LangValue ::=])+

In addition, according to RFC1766, the part on the right of the '-' can be up to 8 characters long. Currently the language codes use ISO 639 2-letter codes, but as of January 2001, RFC1766 has been superseded by RFC3066, which introduces the use of ISO 639 3-letter codes.
According to this last RFC, if a language can be identified with both types of code, the 2-letter code must be used. The 3-letter codes should be used only for representing languages that do not have 2-letter codes. For example, the code for Korean must always be ko and never kor. In addition, there are 2 types of 3-letter codes: Terminology and Bibliography. Currently none of the languages that should be using a 3-letter code have a discrepancy between the Terminology form and the Bibliography form. If such a conflict occurs in the future, the Terminology code should be used. Finally, if a language has both an ISO code and an IANA code, the ISO code must be used. Table 3.1 Use of ISO Codes NOTE Normally, attribute values in XML are case sensitive. However, for simplification purposes and to match the RFC3066 standard, the values of the xml:lang attribute are not case sensitive. For example, the four values "pt-BR", "PT-BR", "pt-br", and "PT-br" (Brazilian Portuguese) are considered identical. Usually the language code is represented in lowercase and the country code in uppercase, but this is not a rule. User-Defined Codes In some cases the list of variant codes you can build from the predefined language and region codes is not enough. For instance, as we have seen already, you might have to localize a document in two types of Spanish: one for the audience in Spain (Iberian Spanish) and the other for the Latin American market. The first should be coded "es-ES", or simply "es" because Spain is the default country for Spanish. For the second, however, no country code corresponds to "Latin America." To solve this you can create your own locale codes as defined by UserCode in Listing 3.1. For example, you could use something such as "x-es-LatAm" for your Latin-American Spanish document. A special kind of user-defined code exists: the one registered to the IANA. Most of them start with the prefix i-. 
The list of these language tags is updated regularly and you can find it at. NOTE Be aware that some localization tools might be programmed to handle only 4-letter codes, and might not be able to process IANA or user-defined codes correctly. For a detailed list of language codes, see Appendix D. Multilingual Documents As you saw in Chapter 2, "Character Representation," one characteristic of XML is its capability to handle content in different languages when necessary. For example, as shown in Listing 3.2, a SOAP data file could store descriptions of an item in several languages.
Listing 3.2 Soap1.xml—SOAP Envelope with Multilingual Entries
<!-- SOAP excerpt -->
<Envelope xmlns="" encodingStyle="">
 <Body>
  <d:GetItem xmlns:d="" xml:lang="en">
   <d:PartNum>NCD-67543</d:PartNum>
   <d:InStock>5</d:InStock>
   <d:Desc>Manual water pump</d:Desc>
   <d:Desc xml:lang="fr">Pompe à eau manuelle</d:Desc>
   <d:Desc xml:lang="ja">手動ウォーター・ポンプ</d:Desc>
  </d:GetItem>
 </Body>
</Envelope>
The default language from the <d:GetItem> element level is set to en (English). The child elements inherit the property, so the first <d:Desc> element does not need to repeat the attribute. However, because the second one contains the description in French, you need to override the default xml:lang attribute. Always keep in mind that XML element and attribute names can have non-ASCII characters as well. In such occurrences, the language specifications work the same. Listing 3.3 shows the same SOAP envelope, but this time with the user data marked up with a Russian vocabulary. The data are identical and the xml:lang mechanism is expected to behave the same: It applies to the content, not to the tags.
Listing 3.3 Soap2.xml—SOAP Envelope with Multilingual Entries and Some Non-ASCII Elements
<!-- SOAP excerpt -->
<Envelope xmlns="" encodingStyle="">
 <Body>
  <_:_____________ xmlns:_="" xml:lang="en">
   <_:____________>NCD-67543</_:____________>
   <_:________>5</_:________>
   <_:________>Manual water pump</_:________>
   <_:________ xml:lang="fr">Pompe à eau manuelle</_:________>
   <_:________ xml:lang="ja">手動ウォーター・ポンプ</_:________>
  </_:_____________>
 </Body>
</Envelope>
Note the value of the xmlns attribute: The namespace prefix _ is associated with a URI reference (_________), but here the URI has already been coded into its UTF-8/escaped form as described in Chapter 2. The lang Attribute in XHTML For historical reasons, in addition to xml:lang, XHTML also allows the attribute lang to specify a language switch. Both have exactly the same significance. In case the same element has both xml:lang and lang with two different values, xml:lang takes precedence over lang. NOTE Using xml:lang or lang has no direct impact on the way the text is rendered. For example, specifying a paragraph as Arabic does not trigger right-to-left display. You must use the style sheets and the various internationalization elements and attributes such as <bdo>, <dir>, and <ruby> for XHTML to indicate to the user-agent how the text should be displayed. However, take into account that language is important in some cases: for example, to select an appropriate font. If a document is encoded in UTF-8 or UTF-16, there is no easy way to distinguish Chinese from Japanese, because most ideographs have been unified. The lang() Function in XPath XPath is the language used in various XML applications to specify a "path notation" that allows you to navigate through the hierarchical structure of any XML document. It is also used to test whether the node of a document instance matches a given pattern. XPath is used, for example, in conjunction with XPointer and XSLT. XPath designers have wisely provided a function to match languages: lang().
The function uses the xml:lang attribute to match a given parameter. This is very useful because, following the XML specifications, the function is not case sensitive and allows you to match a language value very simply. When you specify only a language code rather than a locale code (for example, en versus en-GB), the function returns true for any attribute whose value's first part matches the argument. The separator between both parts of the value is '-'. Consider the following XSL statement:
<xsl:for-each select="//p[lang('es')]">
When this command is used on the XML document shown in Listing 3.4, it will return true for all the following elements:
<p xml:lang="es">Spanish text</p>
<p xml:lang="es">Spanish text</p>
<p xml:lang="es-ES">Iberian Spanish text</p>
<p xml:lang="es-MX">Mexican Spanish text</p>
Listing 3.4 Spanish.xml—Multilingual Document with Different Spanish Flavors
<?xml version="1.0" ?>
<document>
 <p xml:lang="es">Spanish text</p>
 <p xml:lang="fr">French text</p>
 <p xml:lang="es">Spanish text</p>
 <p xml:lang="ca">Catalan text</p>
 <p xml:lang="es-ES">Iberian Spanish text</p>
 <p xml:lang="es-MX">Mexican Spanish text</p>
</document>
Keep in mind that not all XSL processors support all XSL features yet. The lang() function is not supported in all browsers, for example.
http://www.informit.com/articles/article.aspx?p=22154&amp;seqNum=5
Created on 2011-09-29 17:43 by ncoghlan, last changed 2012-06-23 09:40 by python-dev. This issue is now closed. Based on the python-ideas thread about closures, I realised there are two features the inspect module could offer to greatly simplify some aspects of testing closure and generator behaviour: inspect.getclosure(func) Returns a dictionary mapping closure references from the supplied function to their current values. inspect.getgeneratorlocals(generator) Returns the same result as would be reported by calling locals() in the generator's frame of execution. The former would just involve syncing up the names on the code object with the cell references on the function object, while the latter would be equivalent to doing generator.gi_frame.f_locals with some nice error checking for when the generator's frame is already gone (or the supplied object isn't a generator iterator). I'll take a shot at writing a patch for this one. Nick, are the elements in 'co_freevars' and '__closure__' always expected to match up? In other words, is the 'closure' function below always expected to work (simplified; no error checking):
>>> def make_adder(x):
...     def add(y):
...         return x + y
...     return add
...
>>> def curry(func, arg1):
...     return lambda arg2: func(arg1, arg2)
...
>>> def less_than(a, b):
...     return a < b
...
>>> greater_than_five = curry(less_than, 5)
>>> def closure(func):
...     vars = [var for var in func.__code__.co_freevars]
...     values = [cell.cell_contents for cell in func.__closure__]
...     return dict(zip(vars, values))
...
>>> inc = make_adder(1)
>>> print(closure(inc))
{'x': 1}
>>> print(closure(greater_than_five))
{'arg1': 5, 'func': <function less_than at 0xb74c6924>}
? See:
454 /* func_new() maintains the following invariants for closures. The
455    closure must correspond to the free variables of the code object.
456 457 if len(code.co_freevars) == 0: 458 closure = NULL 459 else: 460 len(closure) == len(code.co_freevars) 461 for every elt in closure, type(elt) == cell 462 */ Yep, that looks right to me. The eval loop then references those cells from the frame object during execution. Huh, I didn't actually realise getclosure() could be written as a one liner until seeing Meador's version above: {var : cell.cell_contents for var, cell in zip(func.__code__.co_freevars, func.__closure__)} Here is a first cut at a patch. There is one slight deviation from the original spec: > some nice error checking for when the generator's frame is already gone > (or the supplied object isn't a generator iterator). The attached patch returns empty mappings for these cases. I can easily add the error checks, but in what cases is it useful to know *exactly* why a mapping could not be created? Having an empty mapping for all invalid cases is simpler and seems more robust. Because a generator can legitimately have no locals: >>> def gen(): ... yield 1 ... >>> g = gen() >>> g.gi_frame.f_locals {} Errors should be reported as exceptions - AttributeError or TypeError if there's no gi_frame and then ValueError or RuntimeError if gi_frame is None. The function case is simpler - AttributeError or TypeError if there's no __closure__ attribute, empty mapping if there's no closure. I've also changed my mind on the "no frame" generator case - since that mapping will evolve over time as the generator executes anyway, the empty mapping accurately reflects the "no locals currently defined" that applies when the generator either hasn't been started yet or has finished. People can use getgeneratorstate() to find that information if they need to know. Here is an updated patch with error handling. One other thought is that 'getclosure' should be called something like 'getclosureenv' since technically a closure is a function plus its environment and our implementation only returns the environment. 
But that may be converging on pedantic. No, the naming problem had occurred to me as well. Given the 'vars' builtin, perhaps 'getclosurevars' would do as the name? > perhaps 'getclosurevars' would do as the name? I like vars. Updated patch attached. In reviewing Meador's patch (which otherwise looks pretty good), I had a thought about the functionality and signature of getclosurevars(). Currently, it equates "closure" to "nonlocal scope", which isn't really true - the function's closure is really the current binding of *all* of its free variables, and that includes globals and builtins in addition to the lexically scoped variables from outer scopes. So what do people think about this signature: ClosureVars = namedtuple("ClosureVars", "nonlocals globals builtins unbound") def getclosurevars(func): """Returns a named tuple of dictionaries of the current nonlocal, global and builtin references as seen by the body of the function. A final set of unbound names is also provided.""" # figure out nonlocal_vars (current impl) # figure out global_vars (try looking up names in f_globals) # figure out builtin_vars (try looking up names in builtins) # any leftover names go in unbound_vars return ClosureVars(nonlocal_vars, global_vars, builtin_vars, unbound_vars) Also, something that just occurred to me is that getclosurevars() should work for already instantiated generator iterators as well as generator functions, so the current typecheck may need to be made a bit more flexible. Nick, the revised definition of 'getclosurevars' seems reasonable to me. I will cut a new patch this week. I didn't get around to updating my patch with Nick's comments yet. Nick, the v3 patch I have attached still applies. I am happy to update it per your comments (promptly this time) or you can take it over. Whichever. Meador: I probably won't get to this until the weekend, so go ahead and update the patch if you have time. 
Attached patch implements both new functions, but I'm going to drop getgeneratorlocals for now and move that idea to a new issue. I created #15153 to cover getgeneratorlocals. Attached patch is just for record keeping purposes - I'll be committing this change shortly. New changeset 487fe648de56 by Nick Coghlan in branch 'default': Close #13062: Add inspect.getclosurevars to simplify testing stateful closures
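The feature landed as inspect.getclosurevars() in Python 3.3. A short sketch of how it behaves (the function names here are hypothetical, modelled on the make_adder example from the discussion; the result is the ClosureVars named tuple of nonlocals, globals, builtins, and unbound names described above):

```python
import inspect

offset = 10  # module-level global referenced by the closure body

def make_adder(x):
    def add(y):
        # 'x' is closed over (nonlocal); 'offset' resolves in globals
        return x + y + offset
    return add

inc = make_adder(1)
cv = inspect.getclosurevars(inc)

print(cv.nonlocals)  # {'x': 1}
print(cv.globals)    # {'offset': 10}
print(cv.unbound)    # set()
```

This gives tests direct access to the current closure state without poking at __closure__ cells by hand, which was the motivation for the issue.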
http://bugs.python.org/issue13062
Closed Bug 1294572 Opened 3 years ago Closed 3 years ago stylo: Consider skipping eager traversal on subtrees with an XBL binding Categories (Core :: CSS Parsing and Computation, defect, P3) Tracking () mozilla53 People (Reporter: bholley, Assigned: bholley) References (Blocks 1 open bug) Details Attachments (2 files) When we install XBL, we generally shuffle around all of the descendants of the bound element, adding anonymous content and moving explicit children to insertion points. This means that we need to re-cascade everything, and so the eager cascading that we do with Servo is wasted work. I don't know if this is significant in practice, especially with non-chrome documents. But we could probably detect this case and stop traversing children if an element ends up with a non-trivial -moz-binding in its computed values. Priority: -- → P3 I have a patch for this, which fixes the !el.has_dirty_descendants() assertion we're getting on layout/reftests/bugs/404553-1.html. However, I think we'll also want a patch to explicitly clear the ElementData any time the XBL insertion parent changes, because it's difficult to handle the restyle case (removing/applying a binding) during the servo traversal. This in turn means that we'll need to be sure that we re-invoke TraverseNewChildren whenever we add _or_ remove bindings from an Element. I'll work on this second part now. Assignee: nobody → bobbyholley Per bug 1323356, it looks like there's no uninstallation path for bindings aside from LoadBindings, so that should be a sufficient place to perform our re-traversal on the new flattened tree. I'll write up a patch. MozReview-Commit-ID: JHABvLnMYco Attachment #8818436 - Flags: review?(cam) MozReview-Commit-ID: Iv7uyq4uQye Attachment #8818437 - Flags: review?(cam) Comment on attachment 8818437 [details] [diff] [review] Part 2 - Drop Servo data in SetXBLInsertionParent, and call StyleNewSubtree after all bindings have been removed and applied.
v1 Review of attachment 8818437 [details] [diff] [review]: ----------------------------------------------------------------- ::: dom/xbl/nsXBLBinding.cpp @@ +420,5 @@ > mContent->UnsetAttr(namespaceID, name, false); > } > > // Now that we've finished shuffling the tree around, go ahead and restyle it > // since frame construction is about to happen. I think this comment can be removed too. ::: dom/xbl/nsXBLService.cpp @@ +414,5 @@ > +public: > + AutoStyleNewChildren(Element* aElement) : mElement(aElement) { MOZ_ASSERT(mElement); } > + ~AutoStyleNewChildren() > + { > + nsIPresShell* presShell = mElement->OwnerDoc()->GetShell(); Is it true that loading a binding can cause the pres shell to go away (due to the script that it runs)? If so, we should null check presShell here. Attachment #8818437 - Flags: review?(cam) → review+ Pushed by bholley@mozilla.com: Drop Servo data in SetXBLInsertionParent, and call StyleNewSubtree after all bindings have been removed and applied. r=heycam Backed out in for crashing pretty much everything on every platform, though oddly only Marionette and Android seem capable of pointing to where on the dolly it hurts, / Doh, I keep forgetting that HasServoData asserts instead of returning false on non-stylo builds (which explains why this didn't bust on my try push above). (In reply to Bobby Holley (:bholley) (busy with Stylo) from comment #11) > Doh, I keep forgetting that HasServoData asserts instead of returning false > on non-stylo builds (which explains why this didn't bust on my try push > above). > > > jobs?repo=try&revision=51081e419ad537cd4142ee2ce0d3b6afaa7dbfac Looks green modulo a static analysis failure for a missing 'explicit'. Added that and pushed. Pushed by bholley@mozilla.com: Drop Servo data in SetXBLInsertionParent, and call StyleNewSubtree after all bindings have been removed and applied. 
r=heycam Status: NEW → RESOLVED Closed: 3 years ago status-firefox53: --- → fixed Resolution: --- → FIXED Target Milestone: --- → mozilla53
https://bugzilla.mozilla.org/show_bug.cgi?id=1294572
With a Setup project created using Visual Studio .NET 2003 it is easy to add a Desktop shortcut to your application. But creating a shortcut based on a condition is not supported. Also, adding a shortcut in the Quick Launch bar is not supported. This article shows you how you can allow the user to choose whether to add these shortcuts. While developing this solution, I also needed to overcome a limitation with the System.Environment.GetFolderPath method and the System.Environment.SpecialFolder enumeration. These only provide the location of the current user's Desktop, not the location of the All Users' Desktop, required for an "Everyone" install. Recently I was asked to modify the installer for an application so that the user could choose whether to add a shortcut on their Desktop. I thought this would be easy, but soon found out it was harder than I thought. I had added a Checkboxes dialog to my installer with an option to create the Desktop shortcut. Then I had set the Condition property of the User's Desktop folder to the appropriate checkbox (there is no Condition property for the actual shortcut in this folder). But this did not work. The Visual Studio .NET IDE gives you the impression that you can have conditions on folders where files are deployed, but that is misleading. There are no Windows Installer conditions like that. So while the IDE allows the conditions to be defined, nothing is done with them. After I initially posted this article, djm181 posted a comment titled "It doesn't have to be so complex", where he claimed these conditions do work. However, after investigating this further it turned out what he was doing only gave the appearance of working due to the order in which the two shortcuts with the same name were being created by the installer. A bit of searching showed that many others have found this technique does not work and asked for a solution.
Most answers were to use Orca to edit the MSI. While that works, I preferred a solution that was included every time I rebuilt the solution in Visual Studio. The solution provided here uses an Installer class that is added to the application being deployed. The code in the Installer class uses the Windows Script Host to create the shortcuts. The source code for this article provides a simple Win Forms application that includes an Installer class to create the shortcuts, and a Setup project that includes a dialog to ask the user if they want to create the shortcuts. Follow the steps below to add this capability to your own project. Start with the project for the application you will be deploying open in Visual Studio. The steps are broken into two main sections: the changes to the project for your application, and the changes to the Setup project. The Windows Script Host is a COM component, so you need to add a reference to it in the project for the main assembly of your application. To do this within the Visual Studio .NET IDE, do the following: within the Solution Explorer, right-click on the References section of your project and select "Add Reference", select the "COM" tab, find and select "Windows Script Host Object Model" in the ListBox, click "Select", and then click "OK". This will add a reference to IWshRuntimeLibrary. You can find out more about the Windows Script Host here. Add a new class to the project for the main assembly of your application. Name the class "ShortcutsInstaller". All the code for this class is provided in the demo project so I will only review the main points here.
Include the following references in your code:

using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;
using System.IO;
using System.Reflection;
using System.Windows.Forms;
using IWshRuntimeLibrary;

The System.Configuration.Install namespace requires a reference to the System.Configuration.Install.dll assembly. To add this reference within the Visual Studio .NET IDE, do the following: within the Solution Explorer, right-click on the References section of your project and select "Add Reference", find and select "System.Configuration.Install.dll" in the ListBox, click "Select", and then click "OK". The ShortcutsInstaller class must inherit from the base class System.Configuration.Install.Installer and include the RunInstaller attribute:

[RunInstaller(true)]
public class ShortcutsInstaller : Installer
{
    ...
}

When you implement your own Installer class, you can override one or more of the Install, Commit, Rollback and Uninstall methods of the base Installer class. These methods correspond to the different phases of the installation process. We will override some of these methods. To override the base class Install method, add a new method to your class as follows:

public override void Install(IDictionary savedState)
{
    base.Install(savedState);
    ...
}

The first line of this method must be a call to the base class method we are overriding: base.Install(savedState).
The parameters are made available to the installer through the Context property of the base Installer class. This gives us access to a StringDictionary object that contains the parameters. We can check to make sure a parameter has been provided to our installer class using the ContainsKey method:

const string ALLUSERS_PARAM = "ALLUSERS";
if (!Context.Parameters.ContainsKey(ALLUSERS_PARAM))
    throw new Exception(string.Format(
        "The {0} parameter has not been provided for the {1} class.",
        ALLUSERS_PARAM, this.GetType()));

The default installation folder dialog provided for a Visual Studio .NET Setup project includes radio buttons for the user to choose whether to install the application for everyone who uses the computer or just himself. The parameter value for the "Everyone" option will be "1", and for the "Just me" option it will be an empty string. For the checkboxes that we will add to the Setup project to allow the user to choose to install the shortcuts, the parameter value will be "1" if the checkbox is checked, and an empty string if the checkbox is unchecked. The code to check the values of the parameters will look like this:

bool allusers = Context.Parameters[ALLUSERS_PARAM] != string.Empty;
bool installDesktopShortcut = Context.Parameters[DESKTOP_SHORTCUT_PARAM] != string.Empty;
bool installQuickLaunchShortcut = Context.Parameters[QUICKLAUNCH_SHORTCUT_PARAM] != string.Empty;

If the user has chosen to add the desktop shortcut, we need to determine the location of the desktop folder where we will create the shortcut. If the user has chosen to install for "Everyone" the location is the "All Users" desktop. For a "Just me" installation it is the location of the current user's desktop.
The .NET Framework provides a way for us to get the location of the current user's desktop, using the System.Environment.GetFolderPath method:

desktopFolder = Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory);

However, the System.Environment.SpecialFolder enumeration does not include a member for the All Users Desktop folder. To find the location of the All Users Desktop folder we need to use the Windows Script Host:

object allUsersDesktop = "AllUsersDesktop";
WshShell shell = new WshShellClass();
desktopFolder = shell.SpecialFolders.Item(ref allUsersDesktop).ToString();

Notice the use of the ref object parameter passed to the SpecialFolders.Item method. This is the way it must be called using COM Interop. If we were just writing VBScript it would be as simple as:

set shell = WScript.CreateObject("WScript.Shell")
desktopFolder = shell.SpecialFolders("AllUsersDesktop")

However, to access the SpecialFolders collection through COM Interop requires passing the Item property of an object by reference as the index to the collection. With COM Interop the folder is returned as an object so we need to use the ToString method on the returned value. In the demo project I have put the code to get the "AllUsersDesktop" folder in a try-catch block as this folder is not supported on some older versions of Windows. If the "AllUsersDesktop" folder is not supported then the shortcut will be created on the current user's Desktop. If the user has chosen to add the Quick Launch shortcut, we need to determine the location of the folder where we will create the shortcut. The functionality of the Quick Launch bar is part of Internet Explorer and the location of the folder for the Quick Launch shortcuts is part of Internet Explorer's application data.
There is no "All Users" Quick Launch folder, so the Quick Launch shortcut is always added to the current user's Quick Launch folder, even if the user chooses to install for "Everyone". The System.Environment.GetFolderPath method we used to find the current user's Desktop can also give us the location of the current user's "Application Data" folder. We need to hardcode the location within the Application Data folder for the Quick Launch folder. In the demo project I have made the location of the Quick Launch folder a property of the ShortcutsInstaller class so that I do not need to repeat the location code in more than one place. The code for the location of the Quick Launch folder is:

private string QuickLaunchFolder
{
    get
    {
        return Environment.GetFolderPath(
            Environment.SpecialFolder.ApplicationData) +
            "\\Microsoft\\Internet Explorer\\Quick Launch";
    }
}

In the demo project I have a separate CreateShortcut method in the ShortcutsInstaller class to create a shortcut. This method is called from the overridden Install method for both the Desktop and Quick Launch shortcuts. The CreateShortcut method takes four parameters used for the shortcut: the folder where the shortcut will be created, the name of the shortcut, the target the shortcut points to, and the description of the shortcut. The CreateShortcut method uses the Windows Script Host to create the shortcut. The actual shortcut is a special type of file with a .lnk extension (Windows always hides this extension when you view the file in Explorer). The name of the shortcut is used as the file name.
Here is the code for the CreateShortcut method:

private void CreateShortcut(string folder, string name, string target, string description)
{
    string shortcutFullName = Path.Combine(folder, name + ".lnk");
    try
    {
        WshShell shell = new WshShellClass();
        IWshShortcut link = (IWshShortcut)shell.CreateShortcut(shortcutFullName);
        link.TargetPath = target;
        link.Description = description;
        link.Save();
    }
    catch (Exception ex)
    {
        MessageBox.Show(
            string.Format(
                "The shortcut \"{0}\" could not be created.\n\n{1}",
                shortcutFullName, ex.ToString()),
            "Create Shortcut", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
}

If the shortcut cannot be created for any reason, the try-catch block means that a message will be displayed to the user, but the installation will not fail. Any shortcuts created when your application is installed should be removed when the application is uninstalled. To do this you need to override the Uninstall method of the base Installer class:

public override void Uninstall(IDictionary savedState)
{
    base.Uninstall(savedState);
    ...
}

In the demo project I have separate DeleteShortcut and DeleteShortcuts methods in the ShortcutsInstaller class. The DeleteShortcuts method is called from the overridden Uninstall method (we will use this method again when we discuss the Rollback method). The DeleteShortcuts method then calls the DeleteShortcut method twice: once for the Desktop shortcut and once for the Quick Launch shortcut. We do not care whether the user chose to create the shortcuts when the application was installed. If they exist, we simply delete the shortcuts from the All Users Desktop, the current user's Desktop and the Quick Launch folder.
Here is the code for the DeleteShortcut method:

private void DeleteShortcut(string folder, string name)
{
    string shortcutFullName = Path.Combine(folder, name + ".lnk");
    FileInfo shortcut = new FileInfo(shortcutFullName);
    if (shortcut.Exists)
    {
        try
        {
            shortcut.Delete();
        }
        catch (Exception ex)
        {
            MessageBox.Show(
                string.Format(
                    "The shortcut \"{0}\" could not be deleted.\n\n{1}",
                    shortcutFullName, ex.ToString()),
                "Delete Shortcut", MessageBoxButtons.OK, MessageBoxIcon.Information);
        }
    }
}

We use the FileInfo class to see if the shortcut exists, and delete the file if it does. If the file cannot be deleted for any reason a message is displayed to the user. The shortcuts need to be deleted if the install fails for any reason and they have already been created. In the event of a problem that prevents the install from completing, the Rollback method of the ShortcutsInstaller class will be called. We need to override the Rollback method of the base Installer class and call the DeleteShortcuts method. I have made use of the values of Assembly attributes (set in the AssemblyInfo.cs file) for the application being installed. Assembly attributes are used to set the name and description of the shortcuts. If the AssemblyTitle attribute has been set, this is used for the name of the shortcut. If the AssemblyTitle attribute has not been set, then the file name of the application is used. If the AssemblyDescription attribute has been set, this is used for the description of the shortcut. If the AssemblyDescription attribute has not been set, then the description of the shortcut is set to "Launch xxx", where xxx is the name of the shortcut. I have added properties to the ShortcutsInstaller class that use reflection to obtain the Assembly attributes.
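The article describes overriding Rollback to call DeleteShortcuts but does not show the override itself. Assuming the DeleteShortcuts helper described above (which silently skips shortcuts that do not exist), a minimal sketch would be:

```csharp
public override void Rollback(IDictionary savedState)
{
    // Always call the base class method first, as with Install and Uninstall.
    base.Rollback(savedState);

    // Remove any shortcuts created before the install failed. Because
    // DeleteShortcuts ignores missing files, this is safe even if Rollback
    // runs before any shortcut was actually created.
    DeleteShortcuts();
}
```

Keeping the cleanup in one DeleteShortcuts helper means Uninstall and Rollback cannot drift apart over time.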
The code to get the AssemblyTitle attribute is:

object titleAttribute = myAssembly.GetCustomAttributes(typeof(AssemblyTitleAttribute), false)[0];
_name = ((AssemblyTitleAttribute)titleAttribute).Title;

Another option you could use for the name and description values of the shortcuts would be to pass these values from the "ProductName" and "Description" properties of the Setup project. If you already have a Setup project as part of the solution for your application then you can skip this section. To add a Setup project to your solution within the Visual Studio .NET IDE, do the following: In the Solution Explorer, right-click on the Solution and select "Add", then select "New Project". This displays the "Add New Project" dialog box. In the "Project Types" select "Setup and Deployment Projects", in the "Templates" select "Setup Project", provide a suitable "Name" and "Location", and then click "OK". In the Solution Explorer, right-click on the new Setup project and select "View" and then select "File System". This displays the File System Editor, where you can specify the files that will be installed and their locations on the target computer. The File System Editor is divided into two parts: a navigation pane on the left and a detail pane on the right. The navigation pane contains a hierarchical list of folders that represent the file system on a target computer. The folder names correspond to standard Windows folders; for example, the "Application Folder" corresponds to a folder beneath the "Program Files" folder where the application will be installed. When a folder is selected in the navigation pane, any files and shortcuts that will be installed in that folder are displayed in the detail pane. Right-click on the "Application Folder" and select "Add" and then select "Project Output". This displays the "Add Project Output Group" dialog box. The Project drop-down ListBox contains the other projects in your solution.
Select the project that contains the ShortcutsInstaller class. The ListBox below the Project contains a list of project outputs that can be deployed. Select "Primary output" in this list, then click "OK". You may get a message box with a message about dependencies for "wshom.ocx". If you do, just click "OK" - I will discuss "wshom.ocx" below. Three files will be added to the detail pane of the File System Editor: the Primary output from your project, Interop.IWshRuntimeLibrary.dll, and wshom.ocx. You will probably want to add a shortcut to your application so that it can be accessed from the "Start" menu. To do this within the Visual Studio .NET IDE, do the following. In the detail pane of the File System Editor, right-click the "Primary output..." and select "Create shortcut to Primary output...". This will add the shortcut to the "Application Folder". Rename the shortcut to the title of your application. You can then cut-and-paste or drag-and-drop the shortcut to the "User's Programs Menu" in the navigation pane. When you add an assembly to be deployed by a Setup project, Visual Studio .NET attempts to determine all the other components that the assembly is dependent upon. This will include all the assemblies you have added as References in your project. Each dependent assembly will have its dependencies checked and so on. This includes any COM components you have referenced. Any dependencies that are not part of the standard .NET Framework assemblies will be added to the Application Folder for your Setup project. That is why Interop.IWshRuntimeLibrary.dll and wshom.ocx have been added to your Setup project. While Interop.IWshRuntimeLibrary.dll must be deployed, it is not usually necessary to deploy wshom.ocx as it is part of the standard Windows installation. In fact, when Visual Studio .NET adds wshom.ocx to your project, it does it in a way that it will only be copied to the target computer, but not actually used. If you view the Properties window for this file, you will see that the Register property is set to "vsdrfDoNotRegister".
This means that if wshom.ocx does not already exist as a properly registered COM component on the target computer, your Desktop and Quick Launch shortcuts will not be created. If the computers you will be installing to do not have wshom.ocx already registered, you will need to change the Register property from "vsdrfDoNotRegister" to "vsdrpCOM". You will probably also want to change the target folder to be the Windows System folder.

In the demo project I have avoided deploying the wshom.ocx file altogether. To prevent this file from being copied to the target computer, change the Exclude property for the file to "True".

If you already have a desktop shortcut to your application defined in the File System Editor of your Setup project, then you need to remove that shortcut; the ShortcutsInstaller class will create it instead.

You need to add a dialog to the Setup project that includes checkboxes for the user to choose whether to create the Desktop and Quick Launch shortcuts. To do this within the Visual Studio .NET IDE, do the following: Within the Solution Explorer, right-click on the Setup project and select "View" and then select "User Interface". This displays the User Interface Editor, where you can specify and edit dialog boxes that are displayed during installation. There are a number of dialogs already included by default. The User Interface Editor contains a single pane with a hierarchical list of user interface dialog boxes. The list is divided into two sections for standard versus administrative installations, and each section contains "Start", "Progress", and "End" nodes to represent the stages of installation. I will only describe what is needed to add a dialog box for a standard install (under the "Install" node). Right-click on the Start node and select "Add Dialog", select one of the "Checkboxes" items and click "OK" - I will assume you select "Checkboxes (A)".
The "Checkboxes (A)" dialog is added after the "Confirm Installation" dialog. You need to move it up so that it comes after the "Installation Folder" dialog and before the "Confirm Installation" dialog. You can right-click the "Checkboxes (A)" dialog and select "Move Up", or use drag-and-drop to move the dialog.

With the "Checkboxes (A)" dialog selected, view the "Properties" window by pressing the "F4" key. Change the "Banner Text" property to "Shortcuts" and change the "Body Text" property to "Setup can create shortcuts to [ProductName] on your Desktop and in the Quick Launch bar. Would you like Setup to create the shortcuts?". Do not change "[ProductName]" - Visual Studio will automatically replace this with the value of the "ProductName" property of your Setup project - just make sure you set this property!

Change the "Checkbox1Label" property to "Yes, I would like to create the [ProductName] shortcut on the Desktop". Change the "Checkbox1Property" property to "DESKTOP_SHORTCUT" and change the "CheckBox1Value" property to "Checked". Make similar changes for the CheckBox2 properties, but use the values "Yes, I would like to create the [ProductName] shortcut in the Quick Launch bar." and "QUICKLAUNCH_SHORTCUT". Change the "Checkbox3Visible" and "Checkbox4Visible" properties to "False" as these will not be used.

To have your Setup project invoke the Installer methods in your application project, you need to define "Custom Actions" for your Setup project. To do this within the Visual Studio .NET IDE, do the following: Within the Solution Explorer, right-click on the Setup project and select "View" and then select "Custom Actions". This displays the Custom Actions Editor, where you can specify additional actions to be performed on a target computer during installation.
The Custom Actions Editor contains a single pane with a hierarchical list of custom actions. The list is divided into four sections representing the phases of installation: Install, Commit, Rollback and Uninstall. These correspond with the Install, Commit, Rollback and Uninstall methods of the base Installer class. In the ShortcutsInstaller class we provided override methods for the Install, Rollback and Uninstall methods, so we need to add Custom Actions that will invoke these methods.

The custom actions can be added one at a time to each of the individual nodes, but there is an easy way to add several at a time. Right-click the "Custom Action" root node in the Custom Actions Editor and select "Add Custom Action". The "Select Item in Project" dialog box is displayed. Select the "Application Folder" in the list and click "OK". Select "Primary output..." for the project that contains the ShortcutsInstaller class and click "OK". A "Primary output..." entry will be added to each of the Install, Commit, Rollback and Uninstall nodes. We only have overrides for the Install, Rollback and Uninstall methods, so we need to remove the "Primary output..." entry from the Commit node. Right-click this entry and select "Delete".

We need to pass some parameters to the Install method of the ShortcutsInstaller class. These parameters will inform the Install method of the choices made by the user for the shortcuts to be created. With the "Primary output..." entry under the "Install" node selected, view the "Properties" window by pressing the "F4" key. Change the "CustomActionData" property to "/ALLUSERS=[ALLUSERS] /DESKTOP_SHORTCUT=[DESKTOP_SHORTCUT] /QUICKLAUNCH_SHORTCUT=[QUICKLAUNCH_SHORTCUT]". (Note: do not change the "Arguments" property.)
The CustomActionData value will provide three parameters to the ShortcutsInstaller class: ALLUSERS (whether the install is for "Everyone" or "Just me"), DESKTOP_SHORTCUT and QUICKLAUNCH_SHORTCUT (the values of the two checkboxes).

You are now ready to test out your installer. When you add a Setup project to a Solution in Visual Studio .NET, by default it is not included in a build of the Solution. To build the entire Solution including the Setup project, you can right-click the Setup project in the Solution Explorer and select "Build". Ensure the Output states that no projects failed or were skipped. To test the installer, right-click the Setup project in the Solution Explorer and select "Install". You can install and uninstall several times to test out various combinations of whether or not shortcuts are created and whether it is an install for "Everyone" or "Just me". Ensure the shortcuts are removed when the application is uninstalled.

If optional creation of shortcuts on the Desktop or Quick Launch bar is something you want to add to your own Setup projects, then I hope this article may save you some time.

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
    private void CreateShortcut(string folder, string name, string target, string description)
    {
        string shortcutFullName = Path.Combine(folder, name + ".lnk");
        string iconPath = target.Replace("HomeHelp.exe", "sms.ico");
        try
        {
            WshShell shell = new WshShellClass();
            IWshShortcut link = (IWshShortcut)shell.CreateShortcut(shortcutFullName);
            link.IconLocation = iconPath + ",0";
            link.TargetPath = target;
            link.Description = description;
            link.Save();
        }
        catch (Exception ex)
        {
            MessageBox.Show(string.Format("The shortcut \"{0}\" could not be created.\n\n{1}",
                shortcutFullName, ex.ToString()),
                "Create Shortcut", MessageBoxButtons.OK, MessageBoxIcon.Information);
        }
    }
http://www.codeproject.com/script/Articles/View.aspx?aid=11758
CC-MAIN-2014-10
refinedweb
4,097
56.35
On Wed, Jun 4, 2008 at 5:10 PM, Stefan Monnier <address@hidden> wrote:
> Can someone fix up those things and commit this patch, please?

I committed it. I've done a few trivial changes:
- making the argument of `window-parameters' optional, as the docstring suggests
- args are consistently called WINDOW, PARAMETER and VALUE.
- I've documented the return value of `set-window-parameter' (seems silly to return it and not document it).

Questions:
- What does "The meaningful PARAMETERs depend on the kind of window." mean? Which parameters are meaningful, and for which windows?
- Is it wise to return (in `window-parameter' and `set-window-parameter') the parameter alist directly, instead of a copy of it?

> And add a note to etc/NEWS about it?

Sorry, I'm *horrible* at deciding what to write in NEWS...

Juanma
http://lists.gnu.org/archive/html/emacs-devel/2008-06/msg00257.html
CC-MAIN-2014-42
refinedweb
137
57.37
Opened 11 years ago. Closed 11 years ago.

#4541 closed (fixed): gettext_lazy doesn't work with __cmp__

Description

If you compare two gettext_lazy objects, it will raise an error:

    from django.utils.translation import gettext_lazy as _
    assert _("value") == _("value")

    Traceback (most recent call last):
      File "testlazy.py", line 3, in ?
        assert _("value1") == _("value2")
      File "/lib/django/django/utils/functional.py", line 43, in __wrapper__
        return self.__dispatch[type(res)][funcname](res, *args, **kw)
    KeyError: '__cmp__'

Not sure there's an easy way to fix this, because the lazy object tries to wrap all functions and stores them in a __dispatch dict. The __dispatch comes from type.__dict__, but str.__dict__ does not contain __cmp__. This will not be an issue once the unicode branch is merged, since unicode.__dict__ does contain __cmp__. Currently I just call str() to work around the issue:

    assert str(_("value")) == _("value")

Change History (1)

comment:1 Changed 11 years ago by

So... this should be fixed now since the unicode branch merge?

Note: See TracTickets for help on using tickets.
https://code.djangoproject.com/ticket/4541
CC-MAIN-2018-09
refinedweb
173
60.01
mongooplog-alt 0.4.2

Improved alternative to the official mongooplog utility.

About

mongooplog-alt is a Python remake of the official mongooplog utility, shipped with MongoDB starting from version 2.2.0. It reads the oplog of a remote server and applies the operations to the local server. This can be used to keep an independent replica set loosely in sync, syncing only selected databases/collections.

- option to exclude one or more namespaces (i.e. dbs or collections) from being synced.
- ability to "rename" dbs/collections on the fly, i.e. destination namespaces can differ from the original ones.

Installation

Using pip (preferred):

    pip install --upgrade mongooplog-alt

Using easy_install:

    easy_install -U mongooplog-alt

Example:

    mongooplog-alt --from prod.example.com:28000 --to dev.example.com:28500 -f --exclude logdb data.transactions --seconds 600

This command is going to take operations from the last 10 minutes from prod and apply them to dev. Database logdb and collection transactions of the data database will be omitted. After operations for the last minutes are applied, the command will wait for new changes to come and keep running until Ctrl+C or another termination signal is received.

Testing

Tests for mongooplog-alt are written in JavaScript using the test harness which is used for testing MongoDB itself. You can run the whole suite with:

    mongo tests/suite.js

Note that you will need an existing writable /data/db dir. Tests produce a lot of output. Successful execution ends with a line like this:

    ReplSetTest stopSet *** Shut down repl set - test worked ****

- Author: Aleksey Sivokon
- Download URL:
- Keywords: mongodb, mongo, oplog, mongooplog
- License:
- Platform: any
- Categories
- Package Index Owner: Aleksey.Sivokon
- DOAP record: mongooplog-alt-0.4.2.xml
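The exclude and rename features described above amount to a little namespace bookkeeping per oplog entry. Here is a sketch of that logic in plain Python; the function and its signature are hypothetical and are not the tool's actual internals:

```python
# Sketch of oplog namespace filtering/renaming (illustrative only; the
# function name and parameters are invented, not mongooplog-alt's API).

def resolve_namespace(ns, exclude=(), rename=None):
    """Return the destination namespace for an oplog entry, or None to skip.

    ns      -- "db.collection" namespace from the oplog entry
    exclude -- namespaces to drop; "logdb" drops a whole db,
               "data.transactions" drops a single collection
    rename  -- mapping of source namespace prefix -> destination prefix
    """
    db, _, coll = ns.partition(".")
    for ex in exclude:
        if ns == ex or db == ex:
            return None
    if rename:
        for src, dst in rename.items():
            if ns == src or ns.startswith(src + "."):
                return dst + ns[len(src):]
    return ns

# Mirrors the example command line: drop logdb and data.transactions.
assert resolve_namespace("logdb.events", exclude=["logdb"]) is None
assert resolve_namespace("data.transactions",
                         exclude=["logdb", "data.transactions"]) is None
assert resolve_namespace("data.orders", exclude=["logdb"]) == "data.orders"
assert resolve_namespace("data.orders", rename={"data": "archive"}) == "archive.orders"
```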
https://pypi.python.org/pypi/mongooplog-alt
CC-MAIN-2017-39
refinedweb
274
50.84
Access docker within container on jenkins slave

My question is basically a combination of "Access Docker socket within container" and "Accessing docker host from (jenkins) docker container".

My goal is to run Jenkins fully dockerized, including dynamic slaves, and to be able to create docker containers within the slaves. Except for the last part, everything is already working, provided the Unix docker socket is properly exposed to the Jenkins master.

The problem: unlike the slaves, which are provisioned dynamically, the master is started via docker-compose and thus has proper access to the Unix socket. For the slaves, which are spawned dynamically, this approach does not work. I tried to forward access to docker during building of the image, like:

    VOLUME /var/run/docker.sock
    VOLUME /var/lib/docker

Unfortunately, so far I get a "Permission denied (socket: /run/docker.sock)" when trying to access docker.sock in a slave created like:

The strange thing is: the user in the slave is root. So why do I not have access to the docker.sock? Or how could I burn in the --privileged flag so that the permission denied problem would go away?

One solution:

With docker 1.10 a new user namespace was introduced; thus sharing docker.sock isn't enough, as root inside the container isn't root on the host machine anymore. I recently played with a Jenkins container as well, and I wanted to build containers using the host docker engine. The steps I did are:

Find the group id for the docker group:

    $ id
    ..... 999(docker)

Run the jenkins container with two volumes – one contains the docker client executable, the other shares the docker unix socket.
Note how I use --group-add to add the container user to the docker group, to allow access:

    docker run --name jenkins -tid -p 8080:8080 --group-add=999 \
        -v /path-to-my-docker-client:/home/jenkins/docker \
        -v /var/run/docker.sock:/var/run/docker.sock jenkins

Tested and found it indeed works:

    docker exec -ti jenkins bash
    ./docker ps

See more about additional groups here. Another approach would be to use the --privileged flag instead of --group-add, yet it's better to avoid it if possible.
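Since the --group-add value must match the host's numeric docker group id, it can help to assemble the run command programmatically when provisioning. A sketch (the helper name is invented; the flags mirror the command above):

```python
# Assemble the `docker run` arguments shown above from a known docker gid.
# The helper name is hypothetical; the flags mirror the answer's command.

def jenkins_run_args(docker_gid, client_path, image="jenkins"):
    return [
        "docker", "run", "--name", "jenkins", "-tid",
        "-p", "8080:8080",
        "--group-add=%d" % docker_gid,  # grants access to docker.sock
        "-v", "%s:/home/jenkins/docker" % client_path,
        "-v", "/var/run/docker.sock:/var/run/docker.sock",
        image,
    ]

args = jenkins_run_args(999, "/path-to-my-docker-client")
assert "--group-add=999" in args
assert args[-1] == "jenkins"
```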
http://dockerdaily.com/access-docker-within-container-on-jenkins-slave/
CC-MAIN-2018-43
refinedweb
378
54.12
Various non-standard RTMP applications rely on the values of attributes that are commonly passed in as part of a PLAY call, such as tcUrl, pageUrl etc. Sometimes this is for authentication (and content protection) purposes; sometimes it's to implement a DNS-name-based namespace so that different domains can share the same app. With ffmpeg and ffplay it's possible to override tcUrl and other variables with a command line option.

    # Command line options
    # -rtmp_flashver = The Flash version to send to the server. Default is LNX 9,0,124,2
    # -rtmp_pageurl  = URL of the web page in which the media was embedded. By default no value is sent.
    # -rtmp_swfurl   = The URL of the SWF player being used. By default no value is sent.
    # -rtmp_tcurl    = URL of the target stream's app. The default is rtmp://host[:port]/app

    # Play back from a specific machine but include a tcUrl which acts as though you were using the domain
    ffplay -rtmp_tcurl rtmp://rtmp.example.com/myapp rtmp://192.168.1.24/myapp/mystream

    # Include a pageurl for servers that do a sort of referrer check
    ffplay -rtmp_pageurl rtmp://rtmp.example.com/myapp/mystream
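When scripting several of these overrides, building the argument list from a mapping keeps things tidy. A sketch (the -rtmp_* flag names are the real ones listed above; the builder itself is hypothetical):

```python
# Build an ffplay command line from librtmp override options.
# Flag names come from the note above; the builder itself is just a sketch.

def ffplay_cmd(stream_url, **overrides):
    cmd = ["ffplay"]
    for opt, value in sorted(overrides.items()):
        cmd += ["-rtmp_%s" % opt, value]
    cmd.append(stream_url)
    return cmd

cmd = ffplay_cmd("rtmp://192.168.1.24/myapp/mystream",
                 tcurl="rtmp://rtmp.example.com/myapp")
assert cmd == ["ffplay", "-rtmp_tcurl", "rtmp://rtmp.example.com/myapp",
               "rtmp://192.168.1.24/myapp/mystream"]
```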
https://snippets.bentasker.co.uk/page-1807091107-Overriding-tcUrl-and-other-options-with-ffplay-BASH.html
CC-MAIN-2020-16
refinedweb
194
60.85
Hey, I posted a few days ago about a program I am having trouble with. Well, I have worked on it but still need a little bit of help. I have to write a program that will store 10 grades into an array. The program will need to output the lowest, highest, and average grade. Here is what I got so far. Thanks for any help.

    //Arrayofscores.java
    import java.util.Scanner;

    public class Arrayofscores
    {
        public static void main(String args[])
        {
            Scanner input = new Scanner(System.in);
            int[] scores = new int[10];
            int smallest, highest, total = 0;
            double average = 0.0;

            // enter 10 scores
            for (int i = 0; i <= scores.length - 1; i++)
            {
                System.out.print("Enter Score " + (i + 1) + ": ");
                scores[i] = input.nextInt();
            }

            // lowest score
            smallest = scores[0];
            for (int i = 1; i <= scores.length - 1; i++)
                if (scores[i] < smallest)
                    smallest = scores[i];
            System.out.println("The lowest score is : " + smallest);

            // highest score
            highest = scores[0];
            for (int i = 1; i <= scores.length - 1; i++)
                if (scores[i] > highest)
                    highest = scores[i];
            System.out.println("The highest score is : " + highest);

            // average score
            for (int i = 0; i <= scores.length - 1; i++)
                total = total + scores[i];
            average = total / 10.0;
            System.out.println("\nThe average score is : " + average);
        }
    }
https://www.daniweb.com/programming/software-development/threads/271191/help-with-java-program-please
CC-MAIN-2018-13
refinedweb
203
68.67
Type: Posts; User: terminalXXX I decided to change my job, now I have had 2 offers (1 from a Japanese company-PM position, 1 from where I have to work as a data programmer). On a positive side, I will probably be... I have a friend married to an American whose body size is twice bigger than hers. That makes me think she must have a seriously hard time at midnight. I am shocked! Hi P&P, that only works if you input contains no space try this char name[256]; cin.getline(name,strlen(name)+1); Thanks Paul and 2kaud, We work only on mixed mode dlls and CLI test classes, the test class method is simple and does exactly the same job as the main function does in Paul's example. And sure, if... Thank you VictorN, I had no idea then,. Yes, and that indicates something incompletely implemented in the compiler exists because ther compiler does still accept /*..comment..*/ but not //..comment.. It's not what I did it's someone else did and I wonder why that happened I have posted it in some of my previous posts, I don't know what else I can extract from the real code to demonstrate the problem. I change all user-defined time to void* (return types and parameters... I have a source file (.cpp) that I save into my HD using a different encoding scheme than its original code page. Later I open it again with MSVS and clearly nothing looks different (no odd... I have a share-pointer defined as typedef std::share_ptr<MyClass> MyClassPtr; and a class class Example { public: MyClassPtr DoSomething() { In my project, I must change all user-defined types except share pointers (I don't know how to deal with sharepointers because no default constructor to exchange the defined type with void/void* in... Nice spot! 
Thanks, I fixed the above code, and now that it becomes umanaged C++ class MyClass{}; namespace BI{ class BusinessInterop { public: static MyClass* func(){printf("BusinessInterop");return new MyClass();} }; } namespace BS { In this thread I have proved that class A { public: static void func(){std::cout<<"A\n";} }; class B:public A Ok thank you you two a lot, what about my third post question ? (Oppps, after 1 minute I posted this, I realize I was wrong because ...) virtual key in the base class is meant for... I don't have this warning on my VS2012 Ultimate... Please wait a second I look into compiler option....and be back very soon (after 2 minutes I'm back and...) I have level3 of warnings and see no... class D:public C { public: virtual void func(){std::cout<<"CC\n";} void Call(){funx();} }; and class D:public C Thank you Eric, that is awesome! I use php tag only for clarity purpose. Others I don't care. :) class A { public: static void func(){std::cout<<"A\n";} }; class B:public A { public: virtual void func(){A::func();} I have a class Customer I create a CustomerFactory for it but I would also like to include a Customer_Mock. So I think my factory will create a customer_mock instead of a real customer. Here is what... A white guy sent me some emails with words or phrases sounded like those from Lonestars songs (country rock),today I visited his facebook and found his 99/100 friends are all white. I also visited... using namespace System; ref class A { public: A(){Console::WriteLine(L"A constructor");} virtual void func(){foo();} void foo() { Console::WriteLine(L"A from foo"); } Thank you for your confirmation Yes, thank yu, that is why I ask whether or not should I minus one from the second function to match the two version ? For example, my current code is using the first function and I need to upgrade... Ok why does the example in yield 14 while 13 ?
http://forums.codeguru.com/search.php?s=b2dcf643e8d2e905fe3ab10678b4138a&searchid=6447651
CC-MAIN-2015-11
refinedweb
650
70.63
Multi Thread Basic

The basic demonstration of Zerynth multi-threading. Two threads run in parallel; each thread toggles an LED and at the same time prints a message about which thread toggled the LED and the polarity of the GPIO pin. More information about modules used in this demo:

    import gpio

    # Print initial message.
    print("Hello Multi-Threading!")

    def thread_1():
        while True:
            # Toggle appropriate first pin from infinite loop.
            print("Thread 1 - Drive Pin HIGH")
            gpio.set(LED_BLUE, HIGH)
            sleep(1000)
            print("Thread 1 - Drive Pin LOW")
            gpio.set(LED_BLUE, LOW)
            sleep(1000)

    def thread_2():
        while True:
            # Toggle appropriate second pin from infinite loop.
            print("Thread 2 - Drive Pin HIGH")
            gpio.set(LED_RED, HIGH)
            sleep(300)
            print("Thread 2 - Drive Pin LOW")
            gpio.set(LED_RED, LOW)
            sleep(300)

    # Start both threads.
    thread(thread_1)
    thread(thread_2)
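For comparison, the same two-worker toggle pattern in desktop CPython, with threading.Thread standing in for Zerynth's thread() and a dict standing in for the GPIO pins (a sketch, not Zerynth code; the workers run a fixed number of toggles instead of an infinite loop so the program terminates):

```python
# CPython analogue of the two-thread LED toggle above: two workers flip
# simulated "pins" a fixed number of times, then the main thread joins them.
import threading

pins = {"LED_BLUE": 0, "LED_RED": 0}
lock = threading.Lock()

def toggler(pin, times):
    for _ in range(times):
        with lock:
            pins[pin] ^= 1  # toggle between LOW (0) and HIGH (1)

t1 = threading.Thread(target=toggler, args=("LED_BLUE", 4))
t2 = threading.Thread(target=toggler, args=("LED_RED", 7))
t1.start(); t2.start()
t1.join(); t2.join()

assert pins["LED_BLUE"] == 0  # even number of toggles -> back to LOW
assert pins["LED_RED"] == 1   # odd number of toggles -> left HIGH
```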
https://docs.zerynth.com/latest/reference/examples/zerynth-os/multi_thread_basic/
CC-MAIN-2022-05
refinedweb
134
68.47
doing less and less with XML at my clients lately. This is not completely by my customer's choice, either: much of it is driven by me. As a background, my role within Applied Information Sciences is Software Architect, following the roles defined in the Rational Unified Process. This makes me responsible for the overall design of a system and the integration plan for subsystems. I also function as a Lead Designer, which makes me responsible for designing the subsystems that I previously identified. I also function as the Database Designer as well as Implementer. This pretty much means I participate heavily in the Analysis and Design phase to develop the Software Architecture Document, then mentor teams of 5-10 people through the construction phase. When I start a client engagement, I try to get a feel for what the customer is comfortable with as well as what the customer is interested in exploring. The usual suspects that they are familiar with include basics of C#, some DataGrid work, maybe even a user control or two for ASP.NET apps. Sometimes I am pleasantly shocked to hear what they have done with .NET, but more typically they have stuck to basics. Then I ask what they are interested in. Within the first few moments, I hear that they want to do something with a web service. Typically they will indicate a desire to create a suite of reusable ASP.NET controls. Almost certainly, someone mentions the need for a centralized security framework, and given enough time someone will toss in requirements for developing a standard application portal. And all of this they want thrown into yet another 6-week project titled “Customer Relationship Management Tool“. Parting the overall architecture dream from what can actually be accomplished in 6 weeks can be an art form, one of paring a mountain of granite down to a 5-inch perfectly shaped bust of Shakespeare. Time to start the inception phase to see what they really want. 
Once I get started looking at what they are really asking for, I see no mention of web services, controls, security, or a centralized portal. They usually just want some basic data capture and retrieval behaviors. These requirements are captured into a few use cases. We are supposed to stay away from implementation details at this stage, but you cannot help but notice that there still is no need for introducing web services yet. Worse, you can't imagine much more than a 2-tier app that Infopath could serve just as well as a custom ASP.NET app. Once we are done defining the use cases, we define the realizations through sequence diagrams. You ask a few clarification questions: We just went through the basic premises of how to make the call if web services are appropriate or not. In this case, web services serve no apparent need. OK, maybe you can think of one or two odd situations for web services that this dialog doesn't cover, maybe using the IE Web Service Behavior for DHTML updates to the UI without page refreshes. But this does serve 90% of the purposes for web services. So why do people overuse web services so much? I talked with a local Microsoft Evangelist, and he indicated that web services were definitely the future for .NET. And he said it with a straight face: I think he really believed it. Sadly, I think many of the developers lured into .NET also believe it. Hell, the marketing description for .NET describes it as Microsoft's platform for building next-generation web services. You can't work with .NET and not hear about web services. Maybe someone at the web services team at Microsoft has a much different view of service oriented architectures, but that view won't soon be realized in the corporate world without making the cost of server licenses much less an impact to small and medium sized company's financial statements. 
They don't typically have farms of servers waiting to be utilized, and they typically don't have many external customers that they want to share data with. They have a pool of secretaries doing something manually that can be automated with a small investment. Web services are great, don't get me wrong. But they just don't apply to many of the applications that are actually being built. That doesn't mean they are not being used. I have seen web services used for component to component calls on the same box. I asked the developer why he chose to implement a web service there: his answer was “it sounded interesting.“ OK, he is not getting the AM Turing Award anytime soon, but you have to wonder why web services sounded so interesting that he would defy reason just to throw one into an otherwise healthy application. In a word, “marketing.“ He bought the marketing hype, he drank the kool-aid, but more importantly he needed “web services“ on his resume somewhere. Now go back out into the real world of developers where people have been programming for awhile, maybe using COM or Java successfully for several years. Mention “web services“ to many of them, and you will hear a giggle, maybe even a guffaw. “Web Service!?! What are you trying to do, cripple our systems? Slow 'em to a crawl? We don't need no stinking web services!“ The fact is that many of the clients I interact with do not have web services in place, nor do they need them. But their managers have read so much about web services and XML that they insist that web services are the only way to develop systems these days. The lesser experienced developers grab hold of every little bit of neat technology they can get without a bigger picture of the impact to a system. The more experienced developers are more grizzled and callous to “technology for technology's sake.“ Not so mainstream, but definitely related, is the guise of XML. XML has its place, definitely has its many uses. 
The configuration framework (a future post will explore the overuse of the term "framework") within .NET is a prime example of how XML can be properly used. DataSets can use XML Schemas to define strongly typed DataSets as well as to coerce hierarchical data into relational objects. But just as soon as I list the benefits of XML, someone is sure to follow with a question like, "what is better, XML or a database?" There are folks who see through the marketing and the hype, and others who are stopped cold in their tracks. XSLT never really followed the same track as XML or SOAP. XSLT is difficult to approach, because you need to have a grasp on XML, namespaces, and XPath. There is a larger barrier to entry for XSLT than XML, where you can pick up the basics after reading a quick article on W3Schools.com. The remaining complexities of XML can be hidden by the DataSet. SOAP has been rendered deceptively simple by the existence of the [WebMethod] attribute in .NET, almost unnoticeable to those who develop web services. But XSLT... nothing really hides XSLT yet. Nothing made XSLT easier, and this is one of the reasons that it has a comparatively slow adoption rate. To note, this is why you see so little about XSLT in the way of books and articles. Why write (better yet, why publish) for such a small audience? If nobody is using this stuff, why use it in my applications? A few tools, like Marrowsoft's Xselerator, ease some of the burden of working with XSLT, but you still have to know a lot about XSLT to really leverage the tools. And when do you use XSLT? Since I develop mainly web applications, people typically want to use XSLT for controlling dynamic screen placement or branding applications. Take a look at CSSZenGarden: it is one of the best demonstrations of why XSLT is not always the answer for this. XSLT has its uses, but they are waning in my architectural style.
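Of XSLT's prerequisites, XPath at least is cheap to experiment with; Python's standard library, for instance, supports a small XPath subset out of the box:

```python
# A taste of XPath without a full XSLT stack: Python's ElementTree ships a
# limited XPath subset, enough to experiment with predicates and paths.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<orders>"
    "<order id='1' status='open'><total>40</total></order>"
    "<order id='2' status='closed'><total>75</total></order>"
    "</orders>")

# Find every order whose status attribute is "closed".
closed = doc.findall(".//order[@status='closed']")
assert [o.get("id") for o in closed] == ["2"]
assert closed[0].find("total").text == "75"
```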
The more I architect solutions that developers are able to construct and maintain, the less I introduce XML and XSLT. The fact is that developers just don't use them enough to understand them as viable tools. They know the marketing terms, they know the objects that hide deeper implementations. They don't know XML. So I am forced to develop less elegant yet equally effective solutions to problems using other means. Thank God there are always Design Patterns left to confuse them with.
http://blogs.msdn.com/b/kaevans/archive/2003/10/07/30974.aspx?Redirected=true&title=CSS%20Zen%20Garden
CC-MAIN-2014-52
refinedweb
1,456
63.29
please unify ZODB scripts

Bug #767416 reported by Toni Mueller on 2011-04-20
This bug affects 1 person

Bug Description

Eg. 'analyze.py':

    $ diff -uw analyze.py.old analyze.py
    --- analyze.py.old      2010-12-26 18:43:56.000000000 +0100
    +++ analyze.py  2011-04-20 19:52:09.000000000 +0200
    @@ -138,6 +138,11 @@
             except Exception, err:
                 print err

    -if __name__ == "__main__":
    +
    +def main():
         path = sys.argv[1]
    +
    +if __name__ == "__main__":
    +    main()
    +

This should make it easier to call the scripts from anywhere (eg. to generate wrappers around them via zc.buildout).

Toni Mueller (support-oeko-net) on 2011-04-20

BTW, the main function should look more like:

    def main(args=None):
        if args is None:
            args = sys.argv[1:]
        [path] = args

This would get applied a lot faster if there was a test.
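Written out in full, the entry-point pattern from the last comment looks like this (analyze() here is a stand-in for the script's real work):

```python
# The entry-point pattern proposed above, in full: main() takes an optional
# argument list so wrappers (e.g. zc.buildout console scripts) can call it
# directly, while `python analyze.py <path>` still works from the shell.
import sys

def analyze(path):
    # Stand-in for the script's real work.
    return "analyzed %s" % path

def main(args=None):
    if args is None:
        args = sys.argv[1:]
    [path] = args  # exactly one positional argument expected
    return analyze(path)

# Callable from other Python code without touching sys.argv:
assert main(["/data/Data.fs"]) == "analyzed /data/Data.fs"

if __name__ == "__main__":
    main()
```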
https://bugs.launchpad.net/zodb/+bug/767416
CC-MAIN-2017-09
refinedweb
136
77.84
File::Corresponding::File::Profile - The definition of what matches and translates to corresponding files

Name/description of this file profile.

A sprintf string used to construct a file name. It should contain at least one % directive to insert a relative file name. Only used if defined.

A regex matching a file. The first capture parens are used to extract the local file name. If coerced from a string, define it as qr$regex, i.e. specify the delimiters and any needed flags.

Return a two-item list of (the base filename, the captured file name fragment) from matching $file against the regex, or () if nothing matched. The $file_base is the $file with the whole regex match removed, forming the basis for looking up corresponding files.

Return a new File::Corresponding::File::Found object if a file made up of $file_base, this profile, and $fragment exists in the filesystem. If not, return ().

Convert $rex_string to a proper Regex ref, or die with a useful error message.
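The matching rule described above (return the base filename and the captured fragment, or an empty list when nothing matches) translates directly to other languages; here is a Python sketch of the same logic, with an invented example regex:

```python
# Python sketch of the profile-matching rule described above: match a file
# against a regex, return (file_base, fragment) where file_base is the file
# with the whole match removed and fragment is the first capture group.
import re

def match_file(file, regex):
    m = re.search(regex, file)
    if not m:
        return ()
    file_base = file[:m.start()] + file[m.end():]
    return (file_base, m.group(1))

# e.g. a hypothetical profile matching "lib/<Name>.pm" style module files:
assert match_file("lib/Foo/Bar.pm", r"lib/(\w+/\w+)\.pm") == ("", "Foo/Bar")
assert match_file("README", r"lib/(\w+/\w+)\.pm") == ()
```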
http://search.cpan.org/dist/File-Corresponding/lib/File/Corresponding/File/Profile.pm
CC-MAIN-2016-40
refinedweb
160
76.62
is_linetouched, is_wintouched, touchline, touchwin, untouchwin, wtouchln - window refresh control functions

    #include <curses.h>

    bool is_linetouched(WINDOW *win, int line);
    bool is_wintouched(WINDOW *win);
    int touchline(WINDOW *win, int start, int count);
    int touchwin(WINDOW *win);
    int untouchwin(WINDOW *win);
    int wtouchln(WINDOW *win, int y, int n, int changed);

The touchwin() function touches the specified window (that is, marks it as having changed more recently than the last refresh operation). The touchline() function only touches count lines, beginning with line start. The untouchwin() function marks all lines in the window as unchanged since the last refresh operation.

Calling wtouchln(), if changed is 1, touches n lines in the specified window, starting at line y. If changed is 0, wtouchln() marks such lines as unchanged since the last refresh operation.

The is_wintouched() function determines whether the specified window is touched. The is_linetouched() function determines whether line line of the specified window is touched.

The is_linetouched() and is_wintouched() functions return TRUE if any of the specified lines, or the specified window, respectively, has been touched since the last refresh operation. Otherwise, they return FALSE. Upon successful completion, the other functions return OK. Otherwise, they return ERR. Exceptions to this are noted in the preceding function descriptions.

No errors are defined.

Calling touchwin() or touchline() is sometimes necessary when using overlapping windows, since a change to one window affects the other window, but the records of which lines have been changed in the other window do not reflect the change.

Screens, Windows and Terminals, doupdate(), <curses.h>.
http://pubs.opengroup.org/onlinepubs/007908775/xcurses/wtouchln.html
Note: You don't need to map every element of a schema to a worksheet for the data import to work correctly.

Adding XML to a Worksheet Programmatically

As you might expect, every XML-related action you can take using the Excel interface has its counterpart in the Excel object model, although there are times when you have to dig a bit to find out how to do something in VBA that takes a simple and intuitive action when going through the interface. One example of that phenomenon is the series of actions you need to take to create a single data list (rather than a series of lists) from an XML schema.

Mapping a Schema to a Worksheet Programmatically

When you map an XML schema to a worksheet in VBA, you do so by creating a variable that contains a reference to an XmlMap object, which is the object used to represent a schema contained in an .xsd file. Table 26-3 lists the XmlMap object's properties and methods.

Table 26-3. Selected Properties and Methods of the XmlMap Object

AdjustColumnWidth (property): A Boolean value that, when set to True (the default), causes Excel to change the column width to fit the data imported into that column. Setting the property to False causes the columns to retain their width at the time of the import.

AppendOnImport (property): A Boolean value that, when set to False (the default), causes data imported into a schema to overwrite the existing values. Setting this property to True causes newly imported data to be appended to an existing list.

IsExportable (property): A Boolean value that returns True if Excel can use the XPath objects in the specified schema map to export XML data and if all XML lists mapped to the specified schema map can be exported.

Name (property): A string that contains the name of an XML map. The string must be unique within the workbook and cannot exceed 255 characters.
PreserveColumnFilter (property): A Boolean value that, when set to True (the default), causes any list filter to be retained when the map is refreshed.
http://jabsto.com/Tutorial/topic-110/Microsoft-Office-Excel-2003-Programming-577.html
This copy of LedControl has been updated to compile with Arduino 0018 and includes a minor change to reduce RAM usage.

LedControl(dataPin, clkPin, csPin, numDevices): Create an instance of LedControl using a name of your choice. The 3 pins where you connected the MAX7219 signals need to be given. The number of chips is also needed. Normally if you need more than one chip, it's easiest to chain them together (DOUT to DIN) and use only a single object to control them all. You could also connect each chip to 3 separate pins and create a separate object for each.

shutdown(chip, status): Turn the chip on or off. Use "false" to turn the chip on, "true" to shut it down. The chips default to shutdown mode, so you must turn them on before using them. If you have multiple chips, this must be done for each chip.

setIntensity(chip, intensity): Set the intensity on a chip. 15 is the maximum brightness. If you have multiple chips, this must be done for each chip.

setDigit(chip, digit, number, dot): Sets a 7 segment display to "number". The chip and position of the digit on that chip must be given. "dot" is usually false. Using "true" will turn on the dot associated with that digit.

setLed(chip, row, column, state): Turn a single LED on or off. Three inputs, "chip", "row" and "column" select the exact LED, and "state" must be true to turn the LED on, or false to turn it off.

#include <LedControl.h>

// inputs: DIN pin, CLK pin, LOAD pin, number of chips
LedControl mydisplay = LedControl(45, 44, 43, 1);

void setup() {
  mydisplay.shutdown(0, false);       // turn the chip on (chips power up shut down)
  mydisplay.setIntensity(0, 15);      // maximum brightness
  mydisplay.setDigit(0, 0, 5, false); // show "5" on digit 0, dot off
}

void loop() {
}

In the example above, "DIG 0" (pin 2) was connected to the right-most 7 segment display.
https://www.pjrc.com/teensy/td_libs_LedControl.html
Calculate process time in Java

Question: How can I calculate the process time (elapsed time) of an operation in Java, for example the time between a user's login and logout, or the time taken for an operation to complete?

Answer: Elapsed time is the time taken to complete a process. Record the system time when the operation starts, record it again when the operation ends, and the difference between the end time and the start time is the process time of the operation.

Related discussions on this page cover retrieving the current system time, inserting date and time into a database, decreasing process time by caching results in a Hashtable, thread synchronization, and run-time polymorphism (dynamic method dispatch) in Java.
http://www.roseindia.net/discussion/18324-Calculate-process-time-in-Java.html
Sample problem: In the following method definitions, what does the * and ** do for param2?

def foo(param1, *param2):
def bar(param1, **param2):

What do * asterisk and ** double asterisk do in Python?

Answer #3:

The single * means that there can be any number of extra positional arguments. foo() can be invoked like foo(1,2,3,4,5). In the body of foo(), param2 is a sequence containing 2-5.

The double ** means there can be any number of extra named parameters. bar() can be invoked like bar(1, a=2, b=3). In the body of bar(), param2 is a dictionary containing {'a': 2, 'b': 3}.

With the following code:

def foo(param1, *param2):
    print(param1)
    print(param2)

def bar(param1, **param2):
    print(param1)
    print(param2)

foo(1,2,3,4,5)
bar(1,a=2,b=3)

the output is

1
(2, 3, 4, 5)
1
{'a': 2, 'b': 3}

Answer #4:

Args and kwargs in Python

They allow for functions to be defined to accept, and for users to pass, any number of arguments, positional (*) and keyword (**).

Defining Functions

*args allows for any number of optional positional arguments (parameters), which will be assigned to a tuple named args.

**kwargs allows for any number of optional keyword arguments (parameters), which will be in a dict named kwargs.

You can (and should) choose any appropriate name, but if the intention is for the arguments to be of non-specific semantics, args and kwargs are standard names.

Expansion, Passing any number of arguments

You can also use *args and **kwargs to pass in parameters from lists (or any iterable) and dicts (or any mapping), respectively. The function receiving the parameters does not have to know that they are being expanded.
For example, Python 2's xrange does not explicitly expect *args, but since it takes 3 integers as arguments:

>>> x = xrange(3) # create our *args - an iterable of 3 integers
>>> xrange(*x)    # expand here
xrange(0, 2, 2)

As another example, we can use dict expansion in str.format:

>>> foo, bar = 'FOO', 'BAR'
>>> 'this is foo, {foo} and bar, {bar}'.format(**locals())
'this is foo, FOO and bar, BAR'

New in Python 3: Defining functions with keyword only arguments

You can have keyword only arguments after the *args - for example, here, kwarg2 must be given as a keyword argument - not positionally:

def foo(arg, kwarg=None, *args, kwarg2=None, **kwargs):
    return arg, kwarg, args, kwarg2, kwargs

Usage:

>>> foo(1,2,3,4,5,kwarg2='kwarg2', bar='bar', baz='baz')
(1, 2, (3, 4, 5), 'kwarg2', {'bar': 'bar', 'baz': 'baz'})

Also, * can be used by itself to indicate that keyword only arguments follow, without allowing for unlimited positional arguments.

def foo(arg, kwarg=None, *, kwarg2=None, **kwargs):
    return arg, kwarg, kwarg2, kwargs

Here, kwarg2 again must be an explicitly named, keyword argument:

>>> foo(1,2,kwarg2='kwarg2', foo='foo', bar='bar')
(1, 2, 'kwarg2', {'foo': 'foo', 'bar': 'bar'})

And we can no longer accept unlimited positional arguments because we don't have *args:

>>> foo(1,2,3,4,5, kwarg2='kwarg2', foo='foo', bar='bar')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: foo() takes from 1 to 2 positional arguments
  but 5 positional arguments (and 1 keyword-only argument) were given

Again, more simply, here we require kwarg to be given by name, not positionally:

def bar(*, kwarg=None):
    return kwarg

In this example, we see that if we try to pass kwarg positionally, we get an error:

>>> bar('kwarg')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: bar() takes 0 positional arguments but 1 was given

We must explicitly pass the kwarg parameter as a keyword argument.
>>> bar(kwarg='kwarg')
'kwarg'

Python 2 compatible demos

*args (typically said "star-args") and **kwargs (stars can be implied by saying "kwargs", but be explicit with "double-star kwargs") are common idioms of Python for using the * and ** notation. These specific variable names aren't required (e.g. you could use *foos and **bars), but a departure from convention is likely to enrage your fellow Python coders.

We typically use these when we don't know what our function is going to receive or how many arguments we may be passing, and sometimes even when naming every variable separately would get very messy and redundant (but this is a case where usually explicit is better than implicit).

Example 1

The following function describes how they can be used, and demonstrates behavior. Note the named b argument will be consumed by the second positional argument before:

def foo(a, b=10, *args, **kwargs):
    '''
    this function takes required argument a, not required keyword argument b
    and any number of unknown positional arguments and keyword arguments after
    '''
    print('a is a required argument, and its value is {0}'.format(a))
    print('b not required, its default value is 10, actual value: {0}'.format(b))
    # we can inspect the unknown arguments we were passed:
    #  - args:
    print('args is of type {0} and length {1}'.format(type(args), len(args)))
    for arg in args:
        print('unknown arg: {0}'.format(arg))
    #  - kwargs:
    print('kwargs is of type {0} and length {1}'.format(type(kwargs), len(kwargs)))
    for kw, arg in kwargs.items():
        print('unknown kwarg - kw: {0}, arg: {1}'.format(kw, arg))
    # But we don't have to know anything about them
    # to pass them to other functions.
    print('Args or kwargs can be passed without knowing what they are.')
    # max can take two or more positional args: max(a, b, c...)
    print('e.g. max(a, b, *args) \n{0}'.format(
        max(a, b, *args)))
    kweg = 'dict({0})'.format(  # named args same as unknown kwargs
        ', '.join('{k}={v}'.format(k=k, v=v)
                  for k, v in sorted(kwargs.items())))
    print('e.g. dict(**kwargs) (same as {kweg}) returns: \n{0}'.format(
        dict(**kwargs), kweg=kweg))

We can check the online help for the function's signature, with help(foo), which tells us

foo(a, b=10, *args, **kwargs)

Let's call this function with foo(1, 2, 3, 4, e=5, f=6, g=7) which prints:

a is a required argument, and its value is 1
b not required, its default value is 10, actual value: 2
args is of type <type 'tuple'> and length 2
unknown arg: 3
unknown arg: 4
kwargs is of type <type 'dict'> and length 3
unknown kwarg - kw: e, arg: 5
unknown kwarg - kw: g, arg: 7
unknown kwarg - kw: f, arg: 6
Args or kwargs can be passed without knowing what they are.
e.g. max(a, b, *args)
4
e.g. dict(**kwargs) (same as dict(e=5, f=6, g=7)) returns:
{'e': 5, 'g': 7, 'f': 6}

Example 2

We can also call it using another function, into which we just provide a:

def bar(a):
    b, c, d, e, f = 2, 3, 4, 5, 6
    # dumping every local variable into foo as a keyword argument
    # by expanding the locals dict:
    foo(**locals())

bar(100) prints:

a is a required argument, and its value is 100
b not required, its default value is 10, actual value: 2
args is of type <type 'tuple'> and length 0
kwargs is of type <type 'dict'> and length 4
unknown kwarg - kw: c, arg: 3
unknown kwarg - kw: e, arg: 5
unknown kwarg - kw: d, arg: 4
unknown kwarg - kw: f, arg: 6
Args or kwargs can be passed without knowing what they are.
e.g. max(a, b, *args)
100
e.g. dict(**kwargs) (same as dict(c=3, d=4, e=5, f=6)) returns:
{'c': 3, 'e': 5, 'd': 4, 'f': 6}

Example 3: practical usage in decorators

OK, so maybe we're not seeing the utility yet. So imagine you have several functions with redundant code before and/or after the differentiating code. The following named functions are just pseudo-code for illustrative purposes.
def foo(a, b, c, d=0, e=100):
    # imagine this is much more code than a simple function call
    preprocess()
    differentiating_process_foo(a,b,c,d,e)
    # imagine this is much more code than a simple function call
    postprocess()

def bar(a, b, c=None, d=0, e=100, f=None):
    preprocess()
    differentiating_process_bar(a,b,c,d,e,f)
    postprocess()

def baz(a, b, c, d, e, f):
    ... and so on

We might be able to handle this differently, but we can certainly extract the redundancy with a decorator, and so our below example demonstrates how *args and **kwargs can be very useful:

def decorator(function):
    '''function to wrap other functions with a pre- and postprocess'''
    @functools.wraps(function) # applies module, name, and docstring to wrapper
    def wrapper(*args, **kwargs):
        # again, imagine this is complicated, but we only write it once!
        preprocess()
        function(*args, **kwargs)
        postprocess()
    return wrapper

And now every wrapped function can be written much more succinctly, as we've factored out the redundancy:

@decorator
def foo(a, b, c, d=0, e=100):
    differentiating_process_foo(a,b,c,d,e)

@decorator
def bar(a, b, c=None, d=0, e=100, f=None):
    differentiating_process_bar(a,b,c,d,e,f)

@decorator
def baz(a, b, c=None, d=0, e=100, f=None, g=None):
    differentiating_process_baz(a,b,c,d,e,f,g)

@decorator
def quux(a, b, c=None, d=0, e=100, f=None, g=None, h=None):
    differentiating_process_quux(a,b,c,d,e,f,g,h)

And by factoring out our code, which *args and **kwargs allows us to do, we reduce lines of code, improve readability and maintainability, and have sole canonical locations for the logic in our program. If we need to change any part of this structure, we have one place in which to make each change.

Answer #5:

Let us first understand what positional arguments and keyword arguments are. Below is an example of a function definition with positional arguments.

def test(a,b,c):
    print(a)
    print(b)
    print(c)

test(1,2,3)
# output: 1 2 3

So this is a function definition with positional arguments.
You can call it with keyword/named arguments as well:

def test(a,b,c):
    print(a)
    print(b)
    print(c)

test(a=1,b=2,c=3)
# output: 1 2 3

Now let us study an example of a function definition with keyword arguments:

def test(a=0,b=0,c=0):
    print(a)
    print(b)
    print(c)
    print('-------------------------')

test(a=1,b=2,c=3)
# output: 1 2 3 -------------------------

You can call this function with positional arguments as well:

def test(a=0,b=0,c=0):
    print(a)
    print(b)
    print(c)
    print('-------------------------')

test(1,2,3)
# output: 1 2 3 -------------------------

So we now know function definitions with positional as well as keyword arguments. Now let us study the '*' operator and the '**' operator. Please note these operators can be used in 2 areas: a) function call, b) function definition.

The use of the '*' operator and '**' operator in a function call. Let us get straight to an example and then discuss it.

def sum(a,b): # receive args from function calls as sum(1,2) or sum(a=1,b=2)
    print(a+b)

my_tuple = (1,2)
my_list = [1,2]
my_dict = {'a':1,'b':2}

# Let us unpack a list, tuple or dict into arguments with help of the '*' operator
sum(*my_tuple)  # becomes same as sum(1,2) after unpacking my_tuple with '*'
sum(*my_list)   # becomes same as sum(1,2) after unpacking my_list with '*'
sum(**my_dict)  # becomes same as sum(a=1,b=2) after unpacking by '**'

# output is 3 in all three calls to sum function.

So remember, when the '*' or '**' operator is used in a function call: the '*' operator unpacks a data structure such as a list or tuple into the arguments needed by the function definition, and the '**' operator unpacks a dictionary into the arguments needed by the function definition.

Now let us study the use of the '*' operator in a function definition. Example:

def sum(*args):
    # pack the received positional args into a tuple;
    # after applying '*' - def sum((1,2,3,4))
    sum = 0
    for a in args:
        sum += a
    print(sum)

sum(1,2,3,4) # positional args sent to function sum
# output: 10

In a function definition the '*' operator packs the received arguments into a tuple.

Now let us see an example of '**' used in a function definition:

def sum(**args):
    # pack keyword args into a dict;
    # after applying '**' - def sum({a:1,b:2,c:3,d:4})
    sum = 0
    for k,v in args.items():
        sum += v
    print(sum)

sum(a=1,b=2,c=3,d=4) # keyword args sent to function sum

In a function definition the '**' operator packs the received keyword arguments into a dictionary.

So remember:

In a function call, '*' unpacks a tuple or list into positional arguments to be received by the function definition.

In a function call, '**' unpacks a dictionary into keyword arguments to be received by the function definition.

In a function definition, '*' packs positional arguments into a tuple.

In a function definition, '**' packs keyword arguments into a dictionary.

Answer #6:

This table is handy for using * and ** in function construction and function call:

          | In function construction       | In function call
=======================================================================
 *args    | def f(*args):                  | def f(a, b):
          |     for arg in args:           |     return a + b
          |         print(arg)             | args = (1, 2)
          | f(1, 2)                        | f(*args)
----------|--------------------------------|---------------------------
 **kwargs | def f(a, b):                   | def f(a, b):
          |     return a + b               |     return a + b
          | def g(**kwargs):               | kwargs = dict(a=1, b=2)
          |     return f(**kwargs)         | f(**kwargs)
          | g(a=1, b=2)                    |
-----------------------------------------------------------------------

Hope you learned something from this post. Follow Programming Articles for more!
https://programming-articles.com/what-double-star-asterisk-and-star-asterisk-do-for-parameters-in-python-answered/
Before you start

About this tutorial

IP sockets are the lowest-level layer upon which high level Internet protocols are built: everything from HTTP to SSL to POP3 to Kerberos to UDP-Time. To implement custom protocols, or to customize implementation of well-known protocols, a programmer needs a working knowledge of the basic socket infrastructure. While this tutorial focuses primarily on C programming, and also uses Python as a representative higher-level language for examples, a similar API is available in many languages. This tutorial introduces you to the basics of programming custom network tools using the cross-platform Berkeley Sockets Interface. Almost all network tools in Linux and other UNIX-based operating systems rely on this interface.

Prerequisites

This tutorial requires a minimal level of knowledge of C, and ideally of Python also (mostly for the follow-on Part 2). However, if you are not familiar with either programming language, you should be able to make it through with a bit of extra effort; most of the underlying concepts will apply equally to other programming languages, and calls will be quite similar in most high-level scripting languages like Ruby, Perl, TCL, and so on. Although this tutorial introduces the basic concepts behind IP (Internet Protocol) networks, some prior acquaintance with the concept of network protocols and layers will be helpful (see Resources at the end of this tutorial for background documents).

Understanding IP networks and network layers

What is a network?

Figure 1. Network layers

What we usually call a computer network is composed of a number of network layers (see Resources for a useful reference that explains these in detail). Each of these network layers provides a different restriction and/or guarantee about the data at that layer. The protocols at each network layer generally have their own packet formats, headers, and layout.
The seven traditional layers of a network are divided into two groups: upper layers and lower layers. The sockets interface provides a uniform API to the lower layers of a network, and allows you to implement upper layers within your sockets application. Further, application data formats may themselves constitute further layers; for example, SOAP is built on top of XML, and ebXML may itself utilize SOAP. In any case, anything past layer 4 is outside the scope of this tutorial.

What do sockets do?

While the sockets interface theoretically allows access to protocol families other than IP, in practice, every network layer you use in your sockets application will use IP. For this tutorial we only look at IPv4; in the future IPv6 will become important also, but the principles are the same. At the transport layer, sockets support two specific protocols: TCP (transmission control protocol) and UDP (user datagram protocol).

Sockets cannot be used to access lower (or higher) network layers; for example, a socket application does not know whether it is running over Ethernet, token ring, or a dial-up connection. Nor does the socket's pseudo-layer know anything about higher-level protocols like NFS, HTTP, FTP, and the like (except in the sense that you might yourself write a sockets application that implements those higher-level protocols).

At times, the sockets interface is not your best choice for a network programming API. Specifically, many excellent libraries exist (in various languages) to use higher-level protocols directly, without your having to worry about the details of sockets; the libraries handle those details for you. While there is nothing wrong with writing your own SSH client, for example, there is no need to do so simply to let an application transfer data securely. Lower-level layers than those addressed by sockets fall pretty much in the domain of device driver programming.
IP, TCP, and UDP

As indicated above, when you program a sockets application, you have a choice to make between using TCP and using UDP. Each has its own benefits and disadvantages. TCP is a stream protocol, while UDP is a datagram protocol. In other words, TCP establishes a continuous open connection between a client and a server, over which bytes may be written (and correct order guaranteed) for the life of the connection. However, bytes written over TCP have no built-in structure, so higher-level protocols are required to delimit any data records and fields within the transmitted bytestream.

UDP, on the other hand, does not require a connection to be established between client and server; it simply transmits a message between addresses. A nice feature of UDP is that its packets are self-delimiting; that is, each datagram indicates exactly where it begins and ends. A possible disadvantage of UDP, however, is that it provides no guarantee that packets will arrive in order, or even at all. Higher-level protocols built on top of UDP may, of course, provide handshaking and acknowledgments.

A useful analogy for understanding the difference between TCP and UDP is the difference between a telephone call and posted letters. The telephone call is not active until the caller "rings" the receiver and the receiver picks up. The telephone channel remains alive as long as the parties stay on the call, but they are free to say as much or as little as they wish to during the call. All remarks from either party occur in temporal order. On the other hand, when you send a letter, the post office starts delivery without any assurance the recipient exists, nor any strong guarantee about how long delivery will take. The recipient may receive various letters in a different order than they were sent, and the sender may receive mail interspersed in time with those she sends.
Unlike with the postal service (ideally, anyway), undeliverable mail always goes to the dead letter office, and is not returned to sender.

Peers, ports, names, and addresses

Beyond the protocol, TCP or UDP, there are two things a peer (a client or server) needs to know about the machine it communicates with: an IP address and a port. An IP address is a 32-bit data value, usually represented for humans in "dotted quad" notation, such as 64.41.64.172. A port is a 16-bit data value, usually simply represented as a number less than 65536, most often one in the tens or hundreds range. An IP address gets a packet to a machine; a port lets the machine decide which process/service (if any) to direct it to. That is a slight simplification, but the idea is correct.

The above description is almost right, but it misses something. Most of the time when humans think about an Internet host (peer), we do not remember a number like 64.41.64.172, but instead a name like gnosis.cx. To find the IP address associated with a particular host name, usually you use a Domain Name Server, but sometimes local lookups are used first (often via the contents of /etc/hosts). For this tutorial, we will generally just assume an IP address is available, but we'll discuss coding name/address lookups next.

Host name resolution

The command-line utility nslookup can be used to find a host IP address from a symbolic name. Actually, a number of common utilities, such as ping or network configuration tools, do the same thing in passing. But it is simple to do the same thing programmatically. In Python or other very-high-level scripting languages, writing a utility program to find a host IP address is trivial:

#!/usr/bin/env python
"USAGE: nslookup.py <inet_address>"
import socket, sys
print socket.gethostbyname(sys.argv[1])

The trick is using a wrapped version of the same gethostbyname() function we also find in C.
Usage is as simple as:

$ ./nslookup.py gnosis.cx
64.41.64.172

In C, that standard library call gethostbyname() is used for name lookup. Below is a simple implementation of nslookup as a command-line tool; adapting it for use in a larger application is straightforward. Of course, C is a bit more finicky than Python is.

/* Bare-bones nslookup: print the IP address of a host name */
#include <stdio.h>
#include <stdlib.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(int argc, char *argv[]) {
    struct hostent *host;
    if (argc != 2) {
        fprintf(stderr, "USAGE: nslookup <host_name>\n");
        exit(1);
    }
    if ((host = gethostbyname(argv[1])) == NULL) {
        fprintf(stderr, "Host lookup failed\n");
        exit(1);
    }
    /* h_addr_list holds 32-bit addresses in network byte order */
    printf("%s\n", inet_ntoa(*(struct in_addr *) host->h_addr_list[0]));
    return 0;
}

Notice that the returned value from gethostbyname() is a hostent structure that describes the name's host. The member host->h_addr_list contains a list of addresses, each of which is a 32-bit value in "network byte order"; in other words, the endianness may or may not be machine-native order. In order to convert to dotted-quad form, use the function inet_ntoa().

Writing a client application in C

The steps in writing a socket client

My examples for both clients and servers will use one of the simplest possible applications: one that sends data and receives the exact same thing back. In fact, many machines run an "echo server" for debugging purposes; this is convenient for our initial client, since it can be used before we get to the server portion (assuming you have a machine with echod running). I would like to acknowledge the book TCP/IP Sockets in C by Donahoo and Calvert (see Resources). I have adapted several examples that they present. I recommend the book, but admittedly, echo servers/clients will come early in most presentations of sockets programming.

The steps involved in writing a client application differ slightly between TCP and UDP clients. In both cases, you first create the socket; in the TCP case only, you next establish a connection to the server; next you send some data to the server; then receive data back; perhaps the sending and receiving alternates for a while; finally, in the TCP case, you close the connection.

A TCP echo client (client setup)

First we will look at a TCP client; in Part 2 of this tutorial series, we will make some adjustments to do (roughly) the same thing with UDP.
Let's look at the first few lines: some includes, and creating the socket:

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>

#define BUFFSIZE 32

void Die(char *mess) { perror(mess); exit(1); }

There is not too much to the setup. A particular buffer size is allocated, which limits the amount of data echo'd at each pass (but we loop through multiple passes, if needed). A small error function is also defined.

A TCP echo client (creating the socket)

The arguments to the socket() call decide the type of socket: PF_INET just means it uses IP (which you always will); SOCK_STREAM and IPPROTO_TCP go together for a TCP socket.

int main(int argc, char *argv[]) {
    int sock;
    struct sockaddr_in echoserver;
    char buffer[BUFFSIZE];
    unsigned int echolen;
    int received = 0;

    if (argc != 4) {
        fprintf(stderr, "USAGE: TCPecho <server_ip> <word> <port>\n");
        exit(1);
    }
    /* Create the TCP socket */
    if ((sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) {
        Die("Failed to create socket");
    }

The value returned is a socket handle, which is similar to a file handle; specifically, if the socket creation fails, it will return -1 rather than a positive-numbered handle.

A TCP echo client (establish connection)

Now that we have created a socket handle, we need to establish a connection with the server. A connection requires a sockaddr structure that describes the server. Specifically, we need to specify the server and port to connect to using echoserver.sin_addr.s_addr and echoserver.sin_port. The fact that we are using an IP address is specified with echoserver.sin_family, but this will always be set to AF_INET.
/* Construct the server sockaddr_in structure */
memset(&echoserver, 0, sizeof(echoserver));       /* Clear struct */
echoserver.sin_family = AF_INET;                  /* Internet/IP */
echoserver.sin_addr.s_addr = inet_addr(argv[1]);  /* IP address */
echoserver.sin_port = htons(atoi(argv[3]));       /* server port */

/* Establish connection */
if (connect(sock, (struct sockaddr *) &echoserver, sizeof(echoserver)) < 0) {
    Die("Failed to connect with server");
}

As with creating the socket, the attempt to establish a connection will return -1 if the attempt fails. Otherwise, the socket is now ready to accept sending and receiving data. See Resources for a reference on port numbers.

A TCP echo client (send/receive data)

Now that the connection is established, we are ready to send and receive data. A call to send() takes as arguments the socket handle itself, the string to send, the length of the sent string (for verification), and a flag argument. Normally the flag is the default value 0. The return value of the send() call is the number of bytes successfully sent.

/* Send the word to the server */
echolen = strlen(argv[2]);
if (send(sock, argv[2], echolen, 0) != echolen) {
    Die("Mismatch in number of sent bytes");
}
/* Receive the word back from the server */
fprintf(stdout, "Received: ");
while (received < echolen) {
    int bytes = 0;
    if ((bytes = recv(sock, buffer, BUFFSIZE-1, 0)) < 1) {
        Die("Failed to receive bytes from server");
    }
    received += bytes;
    buffer[bytes] = '\0';        /* Assure null terminated string */
    fprintf(stdout, buffer);
}

The recv() call is not guaranteed to get everything in-transit on a particular call; it simply blocks until it gets something. Therefore, we loop until we have gotten back as many bytes as were sent, writing each partial string as we get it. Obviously, a different protocol might decide when to terminate receiving bytes in a different manner (perhaps a delimiter within the bytestream).
A TCP echo client (wrapup)

Calls to both send() and recv() block by default, but it is possible to change socket options to allow non-blocking sockets. However, this tutorial will not cover details of creating non-blocking sockets, nor such other details used in production servers as forking, threading, or general asynchronous processing (built on non-blocking sockets). These issues are covered in Part 2.

At the end of the process, we want to call close() on the socket, much as we would with a file handle:

    fprintf(stdout, "\n");
    close(sock);
    exit(0);
}

Writing a server application in C

The steps in writing a socket server

A socket server is a bit more complicated than a client, mostly because a server usually needs to be able to handle multiple client requests. Basically, there are two aspects to a server: handling each established connection, and listening for connections to establish. In our example, and in most cases, you can split the handling of a particular connection into a support function, which looks quite a bit like how a TCP client application does. We name that function HandleClient().

Listening for new connections is a bit different from client code. The trick is that the socket you initially create and bind to an address and port is not the actually connected socket. This initial socket acts more like a socket factory, producing new connected sockets as needed. This arrangement has an advantage in enabling fork'd, threaded, or asynchronously dispatched handlers (using select()); however, for this first tutorial we will only handle pending connected sockets in synchronous order.
A TCP echo server (application setup)

Our echo server starts out with pretty much the same few #includes as the client did, and defines some constants and an error-handling function:

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>

#define MAXPENDING 5    /* Max connection requests */
#define BUFFSIZE 32

void Die(char *mess) { perror(mess); exit(1); }

The BUFFSIZE constant limits the data sent per loop. The MAXPENDING constant limits the number of connections that will be queued at a time (only one will be serviced at a time in our simple server). The Die() function is the same as in our client.

A TCP echo server (the connection handler)

The handler for echo connections is straightforward. All it does is receive any initial bytes available, then cycles through sending back data and receiving more data. For short echo strings (particularly if less than BUFFSIZE) and typical connections, only one pass through the while loop will occur. But the underlying sockets interface (and TCP/IP) does not make any guarantees about how the bytestream will be split between calls to recv().

void HandleClient(int sock) {
    char buffer[BUFFSIZE];
    int received = -1;
    /* Receive message */
    if ((received = recv(sock, buffer, BUFFSIZE, 0)) < 0) {
        Die("Failed to receive initial bytes from client");
    }
    /* Send bytes and check for more incoming data in loop */
    while (received > 0) {
        /* Send back received data */
        if (send(sock, buffer, received, 0) != received) {
            Die("Failed to send bytes to client");
        }
        /* Check for more data */
        if ((received = recv(sock, buffer, BUFFSIZE, 0)) < 0) {
            Die("Failed to receive additional bytes from client");
        }
    }
    close(sock);
}

The socket that is passed in to the handler function is one that is already connected to the requesting client.
Once we are done with echoing all the data, we should close this socket; the parent server socket stays around to spawn new children, like the one just closed.

A TCP echo server (configuring the server socket)

As outlined before, creating a socket has a bit different purpose for a server than for a client. Creating the socket has the same syntax it did in the client, but the structure echoserver is set up with information about the server itself, rather than about the peer it wants to connect to. You usually want to use the special constant INADDR_ANY to enable receipt of client requests on any IP address the server supplies; in principle, such as in a multi-hosting server, you could specify a particular IP address instead.

int main(int argc, char *argv[]) {
    int serversock, clientsock;
    struct sockaddr_in echoserver, echoclient;

    if (argc != 2) {
        fprintf(stderr, "USAGE: echoserver <port>\n");
        exit(1);
    }
    /* Create the TCP socket */
    if ((serversock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) {
        Die("Failed to create socket");
    }
    /* Construct the server sockaddr_in structure */
    memset(&echoserver, 0, sizeof(echoserver));         /* Clear struct */
    echoserver.sin_family = AF_INET;                    /* Internet/IP */
    echoserver.sin_addr.s_addr = htonl(INADDR_ANY);     /* Incoming addr */
    echoserver.sin_port = htons(atoi(argv[1]));         /* server port */

Notice that both IP address and port are converted to network byte order for the sockaddr_in structure. The reverse functions to return to native byte order are ntohs() and ntohl(). These functions are no-ops on some platforms, but it is still wise to use them for cross-platform compatibility.
A TCP echo server (binding and listening)

Whereas the client application connect()'d to a server's IP address and port, the server bind()s to its own address and port:

/* Bind the server socket */
if (bind(serversock, (struct sockaddr *) &echoserver, sizeof(echoserver)) < 0) {
    Die("Failed to bind the server socket");
}
/* Listen on the server socket */
if (listen(serversock, MAXPENDING) < 0) {
    Die("Failed to listen on server socket");
}

Once the server socket is bound, it is ready to listen(). As with most socket functions, both bind() and listen() return -1 if they have a problem. Once a server socket is listening, it is ready to accept() client connections, acting as a factory for sockets on each connection.

A TCP echo server (socket factory)

Creating new sockets for client connections is the crux of a server. The function accept() does two important things: it returns a socket handle for the new socket; and it populates the sockaddr_in structure pointed to, in our case, by echoclient.

/* Run until cancelled */
while (1) {
    unsigned int clientlen = sizeof(echoclient);
    /* Wait for client connection */
    if ((clientsock = accept(serversock, (struct sockaddr *) &echoclient, &clientlen)) < 0) {
        Die("Failed to accept client connection");
    }
    fprintf(stdout, "Client connected: %s\n", inet_ntoa(echoclient.sin_addr));
    HandleClient(clientsock);
}
}

We can see the populated structure in echoclient with the fprintf() call that accesses the client IP address. The client socket handle is passed to HandleClient(), which we saw at the start of this section.

Writing socket applications in Python

The socket and SocketServer module

Python's standard module socket provides almost exactly the same range of capabilities you would find in C sockets. However, the interface is generally more flexible, largely because of the benefits of dynamic typing. Moreover, an object-oriented style is also used. For example, once you create a socket object, methods like .bind(), .connect(), and .send() are methods of that object, rather than global functions operating on a socket pointer.

At a higher level than socket, the module SocketServer provides a framework for writing servers. This is still relatively low level, and higher-level interfaces are available for serving higher-level protocols, such as SimpleHTTPServer, DocXMLRPCServer, and CGIHTTPServer.

A Python TCP echo client

Let's look at the complete client. At first brush, we seem to have left out some of the error-catching code from the C version. But since Python raises descriptive errors for every situation that we checked for in the C echo client, we can let the built-in exceptions do our work for us. Of course, if we wanted the precise wording of errors that we had before, we would have to add a few try/except clauses around the calls to methods of the socket object.

#!/usr/bin/env python
"USAGE: echoclient.py <server> <word> <port>"
from socket import *    # import *, but we'll avoid name conflict
import sys

if len(sys.argv) != 4:
    print __doc__
    sys.exit(0)

sock = socket(AF_INET, SOCK_STREAM)
sock.connect((sys.argv[1], int(sys.argv[3])))
message = sys.argv[2]
messlen, received = sock.send(message), 0
if messlen != len(message):
    print "Failed to send complete message"
print "Received: ",
while received < messlen:
    data = sock.recv(32)
    sys.stdout.write(data)
    received += len(data)
print
sock.close()

While shorter, the Python client is somewhat more powerful. Specifically, the address we feed to a .connect() call can be either a dotted-quad IP address or a symbolic name, without need for extra lookup work; for example:

$ ./echoclient 192.168.2.103 foobar 7
Received: foobar
$ ./echoclient.py fury.gnosis.lan foobar 7
Received: foobar

We also have a choice between the methods .send() and .sendall(). The former sends as many bytes as it can at once, the latter sends the whole message (or raises an exception if it cannot).
For this client, we indicate if the whole message was not sent, but proceed with getting back as much as actually was sent.

A Python TCP echo server (SocketServer)

The simplest way to write an echo server in Python is using the SocketServer module. It is so easy as to almost seem like cheating. Later, we will spell out the lower-level version that follows the C implementation. For now, let's see how quick it can be:

#!/usr/bin/env python
"USAGE: echoserver.py <port>"
from SocketServer import BaseRequestHandler, TCPServer
import sys, socket

class EchoHandler(BaseRequestHandler):
    def handle(self):
        print "Client connected:", self.client_address
        self.request.sendall(self.request.recv(2**16))
        self.request.close()

if len(sys.argv) != 2:
    print __doc__
else:
    TCPServer(('',int(sys.argv[1])), EchoHandler).serve_forever()

The only thing we need to provide is a child of SocketServer.BaseRequestHandler that has a .handle() method. The self instance has some useful attributes, such as .client_address, and .request, which is itself a connected socket object.

A Python TCP echo server (socket)

If we wish to do it "the hard way," and gain a bit more fine-tuned control, we can write almost exactly our C echo server in Python (but in fewer lines):

#!/usr/bin/env python
"USAGE: echoserver.py <port>"
from socket import *    # import *, but we'll avoid name conflict
import sys

def handleClient(sock):
    data = sock.recv(32)
    while data:
        sock.sendall(data)
        data = sock.recv(32)
    sock.close()

if len(sys.argv) != 2:
    print __doc__
else:
    sock = socket(AF_INET, SOCK_STREAM)
    sock.bind(('',int(sys.argv[1])))
    sock.listen(5)
    while 1:    # Run until cancelled
        newsock, client_addr = sock.accept()
        print "Client connected:", client_addr
        handleClient(newsock)

In truth, this "hard way" still isn't very hard. But as in the C implementation, we manufacture new connected sockets using .accept(), and call our handler for each such connection.
Summary

The server and client presented in this tutorial are simple, but they show everything essential to writing TCP sockets applications. If the data transmitted is more complicated, or the interaction between peers (client and server) is more sophisticated in your application, that is just a matter of additional application programming. The data exchanged will still follow the same pattern of connect() and bind(), then send() and recv().

One thing this tutorial did not get to, except in brief summary at the start, is usage of UDP sockets. TCP is more common, but it is important to also understand UDP sockets as an option for your application. Part 2 of this tutorial series looks at UDP, as well as implementing sockets applications in Python, and some other intermediate topics.

Resources

Learn

- Programming Linux sockets, Part 2: Using UDP, the next tutorial in this series, looks at UDP sockets as an option for your application, and also covers implementing sockets applications in Python as well as other intermediate topics.
- A good introduction to sockets programming in C is TCP/IP Sockets in C, by Michael J. Donahoo and Kenneth L. Calvert (Morgan-Kaufmann, 2001). Examples and more information are available on the book's Author pages.
- The UNIX Systems Support Group document Network Layers explains the functions of the lower network layers.
- The Transmission Control Protocol (TCP) is covered in RFC 793.
- The User Datagram Protocol (UDP) is the subject of RFC 768.
- You can find a list of widely used port assignments at the IANA (Internet Assigned Numbers Authority) Web site.
- "Understanding Sockets in Unix, NT, and Java" (developerWorks, June 1998) illustrates fundamental sockets principles with sample source code in C and in Java.
- The Sockets section from the AIX C Programming book Communications Programming Concepts goes into depth on a number of related issues.
- Volume 2 of the AIX 5L Version 5.2 Technical Reference focuses on Communications, including, of course, a great deal on sockets programming.
- Sockets, network layers, UDP, and much more are also discussed in the conversational Beej's Guide to Network Programming.
- You may find Gordon McMillan's Socket Programming HOWTO and Jim Frost's BSD Sockets: A Quick and Dirty Primer useful as well.
- Find more tutorials for Linux developers in the developerWorks Linux zone.
- Stay current with developerWorks technical events and Webcasts.

Get products and technologies

- Download IBM trial software directly from developerWorks.
Investors considering a purchase of Quidel Corp. (Symbol: QDEL) shares, but cautious about paying the going market price of $64.55/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular is the September put at the $60 strike, which has a bid at the time of this writing of $4.10. Collecting that bid as the premium represents a 6.8% return against the $60 commitment, or an 18.5% annualized rate of return (at Stock Options Channel we call this the YieldBoost). Selling a put does not give an investor access to QDEL's upside potential the way owning shares would. If Quidel Corp. sees its shares decline 7.8% and the contract is exercised (resulting in a cost basis of $55.90 per share before broker commissions, subtracting the $4.10 from $60), the only upside to the put seller is from collecting that premium for the 18.5% annualized rate of return. Below is a chart showing the trailing twelve month trading history for Quidel Corp., highlighting in green where the $60 strike is located relative to that history. The chart, and the stock's historical volatility, can be a helpful guide in combination with fundamental analysis to judge whether selling the September put at the $60 strike for the 18.5% annualized rate of return represents good reward for the risks. We calculate the trailing twelve month volatility for Quidel Corp. (considering the last 251 trading day closing values as well as today's price of $64.55) to be 43%. For other put options contract ideas at the various different available expirations, visit the QDEL.
The master (sender) sketch:

#include <Wire.h>

int counter = 0;

void setup() {
  Wire.begin();
}

void loop() {
  Wire.beginTransmission(1);
  Wire.write(counter);
  Wire.endTransmission();
  counter++;
  delay(1000);
}

The slave (receiver) sketch:

#include <Wire.h>

int val = 0;

void setup() {
  Serial.begin(9600);
  Wire.begin(1);
}

void loop() {
  val = Wire.read();
  Serial.println(val);
  delay(1000);
}

I had a while loop waiting for serial data (Wire.available() == 0) but got rid of it because there was no printing going on, period. I have both the A5 pins connected to a rail on my breadboard, which is connected to high via a 10k ohm resistor. I have both the A4 pins connected to a rail on my breadboard, which is connected to high via a 10k ohm resistor.
- 18 replies - Last post April 3, 2017

Hi. I know this thread is old, but I hope someone will reply. I've encountered the same issue: tracking seems to work, but the "Image" field remains unpopulated. My application gets images from a webserver and creates image targets and objects to spawn at runtime (this allows me to update the web database without the need to update the application). The "Image" field is not accessible by script, hence I can't set an image directly. If you assure me the tracking will be stable even with an empty field, the issue is no more.

Hi, I quickly tested it with an image downloaded from the Internet and also one from storage. In both cases the field Texture is not populated. However, recognition and tracking work as expected. The error mentioned in the last post seems to only occur after you try to add it in via the editor. Do you need to modify that field? Or is your recognition and tracking not working at all? Thank you. Vuforia Engine Support

Hey, the problem is that the code does not set the target image texture. It also gives an error about not having a renderer attached when you add an image manually, so I think that's also a problem. I have added a screenshot for more clarity. I really appreciate your time helping me, been stuck on this for a while. My thanks.

Hi, You will need to attach a GameObject to it. In the code snippet we have at the end a comment // TODO: add virtual content as child object(s). For example, try adding the code below; this will create a primitive GameObject.

GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
sphere.transform.SetParent(trackableBehaviour.gameObject.transform);

Hope this answers your question. Vuforia Engine Support

Hello, I have tried the code on the website you gave me and I can't get it to work. Right now it's creating 2 new objects: one object called "New Game Object" with only an empty image target behaviour, and another, named the way I told it to, that does not load the image and does not have a mesh renderer, which I also can't add with code (see images). This is the code I'm using:

using UnityEngine;
using UnityEngine.Networking;
using System.Collections;
using Vuforia;

public class DownLoadIT : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(CreateImageTargetFromDownloadedTexture());
    }

    IEnumerator CreateImageTargetFromDownloadedTexture()
    {
        using (UnityWebRequest uwr = UnityWebRequestTexture.GetTexture(""))
        {
            yield return uwr.SendWebRequest();
            if (uwr.isNetworkError || uwr.isHttpError)
            {
                Debug.Log(uwr.error);
            }
            else
            {
                var objectTracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
                // Get downloaded texture once the web request completes
                var texture = DownloadHandlerTexture.GetContent(uwr);
                // get the runtime image source and set the texture
                var runtimeImageSource = objectTracker.RuntimeImageSource;
                runtimeImageSource.SetImage(texture, 0.15f, "target");
                // create a new dataset and use the source to create a new trackable
                var dataset = objectTracker.CreateDataSet();
                var trackableBehaviour = dataset.CreateTrackable(runtimeImageSource, "target");
                // add the DefaultTrackableEventHandler to the newly created game object
                trackableBehaviour.gameObject.AddComponent<DefaultTrackableEventHandler>();
                //trackableBehaviour.gameObject.AddComponent<MeshRenderer>();
                //trackableBehaviour.gameObject.AddComponent<TurnOffBehaviour>();
                // activate the dataset
                objectTracker.ActivateDataSet(dataset);
                // TODO: add virtual content as child object(s)
            }
        }
    }
}

Hi, Glad that you solved the issue! As far as I know we don't have any limitations.

Hi, In the script, to be more specific, here:

runtimeImageSource.SetImage(texture, 0.15f, "myTargetName");

0.15f is the width, and it is in meters. The size of the physical image needs to match the width you entered in the script for an optimal tracking experience. Thank you. Vuforia Engine Support

Hi, I triple-checked my webserver and I am really sure that I can access the pictures without authentication. I did your test of copy-pasting the URL and it works. When I save your test image on my webserver it also works, so I'm pretty sure it's related to the image. I have compared both of the textures with Visual Studio debugging (see the attachment: on the left, the texture of my webserver image; on the right, the texture of your test image saved on my webserver). I don't see much difference except the height and size, so I hope the error is related to that. Gonna test with smaller images and will let you know. Thanks for the quick responses.

Hi, I wanted to underline that you need to specify the exact file with the ".jpg" extension. My understanding is that .jpg or .png should work. You could also try .jpeg :) The issue may be with your webserver; do you require authentication? A quick test would be to copy-paste the link with the picture into a browser and see if you get the image or an authentication prompt. Thank you. Vuforia Engine Support

Hi, thanks for your help. I managed to get it to work with your given jpg URL. The problem is that the image I was trying is a .jpeg instead of a .jpg. Are there any possible solutions to make it work with .jpeg? [EDIT] I just tried with a .jpg on my webserver and it still doesn't work. Are there any other requirements on the images other than just the .jpg extension? [EDIT] Thank you in advance.

Hi, Did you attach the script to the AR Camera? Also make sure that the web link is ".jpg". Try this web image: stored in the library: Let me know if it works with the above web image. Thank you. Vuforia Engine Support

Hello, I just tried the guide you posted but I am not capable of making "download a texture image from a web URL and generate an image target from it" work. It goes wrong in this part:

RuntimeImageSource runtimeImageSource = objectTracker.RuntimeImageSource;
runtimeImageSource.SetImage(texture, 0.15f, "onlinetarget");

The error that appears in the console is "Instant image target 'onlinetarget' could not be created". Thank you in advance.

Hi, In this article we go through the process: of creating Image Targets from images at Run-time. Thank you. Vuforia Engine Support

Hi, In order to create Image Targets, you can use the Target Manager or use Insta Image Targets (which creates Image Targets at Run-time). The high-level process entails that the picture used is analyzed and feature points are extracted. These feature points will assist in detecting the used picture. If you change the image/picture of the Image Target, Vuforia will not detect the features, hence there will be no augmentation. We don't recommend changing the image of an Image Target, as in the end the recognition process will not work. Could you please elaborate on why you want to change the Image? Based on that I can provide a workaround. Thank you. Vuforia Engine Support

Tracking is not at all working in my case and I am getting the same error: Instant image target at 'Vuforia/machines.jpg' could not be created. I tried all the options below:
- tried adding a mesh renderer
- correct size of image target
- since there is no option to set the image for an image target through script, that option is not available
- image type is checked
- checked if the image exists in the correct folder or not
- checked if the data set exists or not
Still it did not work. Unity 2019.4.01f1 and Vuforia 9.8.11, tried for both Android and Windows. It didn't work. Kindly guide.
Saiba mais sobre a Assinatura do Scribd Descubra tudo o que o Scribd tem a oferecer, incluindo livros e audiolivros de grandes editoras. C - Program StructureBefore we study basic building blocks of the C programming language, let us look at aminimum C program structure so that we can take it as a reference in upcoming lectures. Preprocessor Commands Functions Variables Let us look at a simple code that would print the words "Hello World": #include <stdio.h>#include <conio.h> 1 main(){ /* my first program in C */ getch();} 1. The first line of the program #include <stdio.h> is a preprocessor command, which tells a C compiler to include stdio.h file before going to actual compilation. 2. The second line of the program #include <conio.h> is a preprocessor command, which tells a C compiler to include conio.h file before going to actual compilation. 3. The next line main() is the main function where program execution begins. 2 4. The next line /*...*/ will be ignored by the compiler and it has been put to add additional comments in the program. So such lines are called comments in the program. 5. The next line printf(...) is another function available in C which causes the message "Hello, World!" to be displayed on the screen. 6. The next line getch(); causes the output to stay on the screen. 4. The compiler may show warning messages. Please ignore them at the moment. We will discuss them later. 5. Click on the Run Command in the menu bar of Turbo C editor. This will display your output. Hello, World! Basic SyntaxYou have seen a basic structure of C program, so it will be easy to understand other basicbuilding blocks of the C programming language. Tokens in CA C program consists of various tokens and a token is either a keyword, an identifier, aconstant, a string literal, or a symbol. For example, the following C statement consists of fivetokens: 3 printf("Hello, World! \n"); Semicolons ;In C program, the semicolon is a statement terminator. 
That is, each individual statementmust be ended with a semicolon. It indicates the end of one logical entity. CommentsComments are like helping text in your C program and they are ignored by the compiler.They start with /* and terminates with the characters */ as shown below: /* my first program in C */ You cannot have comments within comments and they do not occur within a string orcharacter literals. IdentifiersA C identifier is a name used to identify a variable, function, or any other user-defined item.An identifier starts with a letter A to Z or a to z or an underscore _ followed by zero or moreletters, underscores, and digits (0 to 9). C does not allow punctuation characters such as @, $, and % within identifiers. C is a casesensitive programming language. Thus, Manpower and manpower are two differentidentifiers in C. Here are some examples of acceptable identifiers: KeywordsThe following list shows the reserved words in C. These reserved words may not be used asconstant or variable or any other identifier names. double Whitespace in CA line containing only whitespace, possibly with a comment, is known as a blank line, and aC compiler totally ignores it. Whitespace is the term used in C to describe blanks, tabs, newline characters and comments.Whitespace separates one part of a statement from another and enables the compiler toidentify where one element in a statement, such as int, ends and the next element begins.Therefore, in the following statement: int age; There must be at least one whitespace character (usually a space) between int and age for thecompiler to be able to distinguish them. On the other hand, in the following statement: No whitespace characters are necessary between fruit and =, or between = and apples,although you are free to include some if you wish for readability purpose. 5 Data TypesIn the C programming language, data types refer to an extensive system used for declaringvariables or functions of different types. 
The type of a variable determines how much space itoccupies in storage and how the bit pattern stored is interpreted. Basic Types:1 They are arithmetic types and consists of the two types: (a) integer types and (b) floating-point types. Enumerated types:2 They are again arithmetic types and they are used to define variables that can only be assigned certain discrete integer values throughout the program. Derived types:4 They include (a) Pointer types, (b) Array types, (c) Structure types, (d) Union types and (e) Function types. The array types and structure types are referred to collectively as the aggregate types. Thetype of a function specifies the type of the function's return value. We will see basic types inthe following section, whereas, other types will be covered in the upcoming chapters. Integer TypesFollowing table gives you details about standard integer types with its storage sizes and valueranges: To get the exact size of a type or a variable on a particular platform, you can usethe sizeof operator. The expressions sizeof(type) yields the storage size of the object or typein bytes. Floating-Point TypesFollowing table gives you details about standard floating-point types with storage sizes andvalue ranges and their precision: The header file float.h defines macros that allow you to use these values and other detailsabout the binary representation of real numbers in your programs. 7 Pointers to void A pointer of type void * represents the address of an object, but not its type. For3 example a memory allocation function void *malloc( size_t size ); returns a pointer to void which can be casted to any data type. The void type may not be understood to you at this point, so let us proceed and we will coverthese concepts in the upcoming chapters. 
Variables

A variable is nothing but a name given to a storage area that our programs can manipulate. Each variable in C has a specific type, which determines the size and layout of the variable's memory; the range of values that can be stored within that memory; and the set of operations that can be applied to the variable.

The name of a variable can be composed of letters, digits, and the underscore character. It must begin with either a letter or an underscore. Upper- and lowercase letters are distinct because C is case-sensitive. Based on the basic types explained in the previous chapter, there will be the following basic variable types. The C programming language also allows you to define various other types of variables, which we will cover in subsequent chapters, like Enumeration, Pointer, Array, Structure, Union, etc. For this chapter, let us study only basic variable types.

Variable Definition in C

A variable definition means to tell the compiler where and how much storage to create for the variable. A variable definition specifies a data type and contains a list of one or more variables of that type as follows:

type variable_list;

Here, type must be a valid C data type including char, wchar_t, int, float, double, bool, or any user-defined object, etc., and variable_list may consist of one or more identifier names separated by commas. Some valid declarations are shown here:

int i, j, k;
char c, ch;
float f, salary;
double d;

The line int i, j, k; both declares and defines the variables i, j and k; it instructs the compiler to create variables named i, j and k of type int.

Variables can be initialized (assigned an initial value) in their declaration. The initializer consists of an equal sign followed by a constant expression, as follows:

type variable_name = value;

For a definition without an initializer: variables with static storage duration are implicitly initialized with NULL (all bytes have the value 0); the initial value of all other variables is undefined.
Variable Declaration in C

A variable declaration provides assurance to the compiler that there is one variable existing with the given type and name, so that the compiler can proceed with further compilation without needing complete detail about the variable. A variable declaration has its meaning at the time of compilation only; the compiler needs the actual variable definition at the time of linking the program.

Example

Try the following example, where variables have been declared at the top, but they have been defined and initialized inside the main function:

#include <stdio.h>
#include <conio.h>

main ()
{
  /* variable definition: */
  int a, b;
  int c;
  float f;

  /* actual initialization */
  a = 10;
  b = 20;
  c = a + b;

  printf("value of c : %d \n", c);

  f = 70.0/3.0;
  printf("value of f : %f \n", f);
}

When the above code is compiled and executed, it produces the following result:

value of c : 30
value of f : 23.333334

Constants

Constants refer to fixed values that the program may not alter during its execution. These fixed values are also called literals. Constants can be of any of the basic data types, like an integer constant, a floating constant, a character constant, or a string literal. There are also enumeration constants as well. Constants are treated just like regular variables except that their values cannot be modified after their definition.

Escape sequences

Escape sequence   Meaning
\\                \ character
\?                ? character
\a                Alert or bell
\b                Backspace
\n                Newline
\t                Horizontal tab
\v                Vertical tab

#include <stdio.h>
#include <conio.h>

main()
{
  printf("Hello\tWorld\n\n");
  getch();
}

Hello   World

Defining Constants

There are two simple ways in C to define constants:

#include <stdio.h>
#include <conio.h>

#define LENGTH 10
#define WIDTH 5
#define NEWLINE '\n'

int main()
{
  int area;

  area = LENGTH * WIDTH;
  printf("value of area : %d", area);
  printf("%c", NEWLINE);

  return 0;
}

value of area : 50

#include <stdio.h>
#include <conio.h>

int main()
{
  const int LENGTH = 10;
  const int WIDTH = 5;
  const char NEWLINE = '\n';
  int area;

  area = LENGTH * WIDTH;
  printf("value of area : %d", area);
  printf("%c", NEWLINE);

  return 0;
}
https://pt.scribd.com/document/414350090/Lecture-3-C-Basics-docx
EXPLORATION 4

Strings

In earlier Explorations, you used quoted character strings as part of each output operation. In this Exploration, you will begin to learn how to make your output a little fancier by doing more with strings. Start by reading Listing 4-1.

Listing 4-1. Different Styles of String Output

#include <iostream>

int main()
{
  std::cout << "Shape\tSides\n" << "-----\t-----\n";
  std::cout << "Square\t" << 4 << '\n'
            << "Circle\t?\n";
}

Predict the output from the program in Listing 4-1. You may already know what \t means. If so, this prediction is easy to make. If you don't know, take a guess.

Get Exploring C++ 11, Second Edition now with O'Reilly online learning.
https://www.oreilly.com/library/view/exploring-c-11/9781430261933/9781430261933_Ch04.xhtml
Coat::Types - Type constraint system for Coat

This is a rewrite of Moose::Util::TypeConstraint for Coat.

use Coat::Types;

type 'Num' => where { Scalar::Util::looks_like_number($_) };

subtype 'Natural'
    => as 'Num'
    => where { $_ > 0 };

subtype 'NaturalLessThanTen'
    => as 'Natural'
    => where { $_ < 10 }
    => message { "This number ($_) is not less than ten!" };

coerce 'Num'
    => from 'Str'
    => via { 0+$_ };

enum 'RGBColors' => qw(red green blue);

This module provides Coat with the ability to create custom type constraints to be used in attribute definition.

This is NOT a type system for Perl 5. These are type constraints, and they are not used by Coat unless you tell it to. No type inference is performed, expressions are not typed, etc. etc. etc. This is simply a means of creating small constraint functions which can be used to simplify your own type-checking code, with the added side benefit of making your intentions clearer through self-documentation.

It is always a good idea to quote your type and subtype names. This is to prevent perl from trying to execute the call as an indirect object call. This issue only seems to come up when you have a subtype with the same name as a valid class, but when the issue does arise it tends to be quite annoying to debug. So, for instance, the safe thing to do is to simply do this:

use DateTime;
subtype 'DateTime'
    => as 'Object'
    => where { $_->isa('DateTime') };

This module also provides a simple hierarchy for Perl 5 types; here is that hierarchy represented visually.

Any
  Item
      Bool
      Undef
      Defined
          Value
              Num
                Int
              Str
                ClassName
          Ref
              ScalarRef
              ArrayRef[`a]
              HashRef[`a]
              CodeRef
              RegexpRef
              GlobRef
                Object

Since the types created by this module are global, it is suggested that you namespace your types just as you would namespace your modules. So instead of creating a Color type for your My::Graphics module, you would call the type My::Graphics::Color instead.

The following functions are used to create type constraints.
They will then register the type constraints in a global store where Coat can get to them if it needs to. See the SYNOPSIS for an example of how to use these.

This creates a base type, which has no parent.

This creates a named subtype.

Simply a convenient constraint builder.

This is just sugar for the type constraint construction syntax.

This is just sugar for the type constraint construction syntax.

This is just sugar for the type constraint construction syntax.

Type constraints can also contain type coercions. If you ask your accessor to coerce, then Coat will run the type-coercion code first, followed by the type constraint check. This feature should be used carefully, as it is very powerful and could easily take off a limb if you are not careful. See the SYNOPSIS for an example of how to use these.

This is just sugar for the type coercion construction syntax.

This is just sugar for the type coercion construction syntax.

This function can be used to locate a specific type constraint meta-object, of the class Coat::Meta::TypeConstraint or a derivative. What you do with it from there is up to you :)

This function will register a named type constraint with the type registry.

This will return a list of type constraint names; you can then fetch them using find_type_constraint($type_name) if you want to.

This will export all the current type constraints as functions into the caller's namespace. Right now, this is mostly used for testing, but it might prove useful to others.

All complex software has bugs lurking in it, and this module is no exception. If you find a bug, please either email me or add the bug to cpan-RT.

Alexis Sukrieh <sukria@sukria.net>; based on the work done by Stevan Little <stevan@iinteractive.com> on Moose::Util::TypeConstraint.

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~dams/Coat-0.502/lib/Coat/Types.pm
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;
using System.Text;
using System.IO.Ports;

namespace Bluetooth
{
    public class Program
    {
        static SerialPort serial;

        public static void Main()
        {
            // initialize the serial port for COM1 (pins D0 and D1)
            serial = new SerialPort(SerialPorts.COM1, 9600, Parity.None, 8, StopBits.One);

            // open the serial-port, so we can send and receive data
            serial.Open();

            // add an event-handler for handling incoming data
            serial.DataReceived += new SerialDataReceivedEventHandler(serial_DataReceived);

            // wait until the end of the Universe :-)
            Thread.Sleep(Timeout.Infinite);
        }

        // minimal handler (body not shown in the original post): echo the
        // received bytes back over the same port, matching the test described below
        private static void serial_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            byte[] buffer = new byte[serial.BytesToRead];
            serial.Read(buffer, 0, buffer.Length);
            serial.Write(buffer, 0, buffer.Length);
        }
    }
}

I launched a simple serial-port program, like the old HyperTerminal, on the PC where I paired the Bluetooth device, and ran the test. Good surprise: I selected the port created on my PC (it was port 6) and opened it. The Bluetooth device went from a blinking red LED to an always-on LED, showing the device was correctly paired. Sounds good so far! So I typed "bonjour" and sent it; instantly I got the "bonjour" back. So cool, it's working!

I wanted to know more about the chip and what can be set up or changed, like the name of the device, the PIN, the baud rate, etc. I used my preferred search engine, Bing, and quickly found out that it's possible to change lots of things by sending a couple of AT commands. Those commands were used in the old days of modems; it just reminded me of that! Even though there are lots of chips like the one I bought, most support exactly the same commands. I found good documentation there. It's not the same chip and the AT commands are a bit different, but I quickly found out that most were working. So I decided to test whether it worked. All you have to do is send the commands when the device is not paired. You can do it either with a USB-to-serial FTDI chip or directly from the board.
I did it directly from the Netduino by modifying the code a bit to send the commands. I found the most interesting commands were the following:

Lauren, nice job! Only one thing: to send AT commands you say you modified the code. How?

Thank you, great job! Can you tell me how you entered AT mode?
http://blogs.msdn.com/b/laurelle/archive/2013/04/29/adding-bluetooth-support-to-a-netmf-board-net-microframework.aspx
In this section, we introduce Grover's algorithm and how it can be used to solve unstructured search problems. We then implement the quantum algorithm using Qiskit, and run it on a simulator and device.

1. Introduction

You have likely heard that one of the many advantages a quantum computer has over a classical computer is its superior speed searching databases. Grover's algorithm demonstrates this capability. This algorithm can speed up an unstructured search problem quadratically, but its uses extend beyond that; it can serve as a general trick or subroutine to obtain quadratic run time improvements for a variety of other algorithms. This is called the amplitude amplification trick.

Unstructured Search

Suppose you are given a large list of $N$ items. Among these items there is one item with a unique property that we wish to locate; we will call this one the winner $w$. Think of each item in the list as a box of a particular color. Say all items in the list are gray except the winner $w$, which is purple.

To find the purple box, the marked item, using classical computation, one would have to check on average $N/2$ of these boxes, and in the worst case, all $N$ of them. On a quantum computer, however, we can find the marked item in roughly $\sqrt{N}$ steps with Grover's amplitude amplification trick. A quadratic speedup is indeed a substantial time-saver for finding marked items in long lists. Additionally, the algorithm does not use the list's internal structure, which makes it generic; this is why it immediately provides a quadratic quantum speed-up for many classical problems.

Creating an Oracle

For the examples in this textbook, our 'database' is comprised of all the possible computational basis states our qubits can be in. For example, if we have 3 qubits, our list is the states $|000\rangle, |001\rangle, \dots |111\rangle$ (i.e. the states $|0\rangle \rightarrow |7\rangle$).
Grover’s algorithm solves oracles that add a negative phase to the solution states. I.e. for any state $|x\rangle$ in the computational basis:

$$ U_\omega|x\rangle = \bigg\{ \begin{aligned} \phantom{-}|x\rangle \quad \text{if} \; x \neq \omega \\ -|x\rangle \quad \text{if} \; x = \omega \\ \end{aligned} $$

This oracle will be a diagonal matrix, where the entry that corresponds to the marked item will have a negative phase. For example, if we have three qubits and $\omega = \text{101}$, our oracle will have the matrix:

$$ U_\omega = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{aligned} \\ \\ \\ \\ \\ \leftarrow \omega = \text{101} \\ \\ \\ \end{aligned} $$

What makes Grover’s algorithm so powerful is how easy it is to convert a problem to an oracle of this form. There are many computational problems in which it’s difficult to find a solution, but relatively easy to verify a solution. For example, we can easily verify a solution to a sudoku by checking all the rules are satisfied. For these problems, we can create a function $f$ that takes a proposed solution $x$, and returns $f(x) = 0$ if $x$ is not a solution ($x \neq \omega$) and $f(x) = 1$ for a valid solution ($x = \omega$). Our oracle can then be described as:

$$ U_\omega|x\rangle = (-1)^{f(x)}|x\rangle $$

and the oracle's matrix will be a diagonal matrix of the form:

$$ U_\omega = \begin{bmatrix} (-1)^{f(0)} & 0 & \cdots & 0 \\ 0 & (-1)^{f(1)} & \cdots & 0 \\ \vdots & 0 & \ddots & \vdots \\ 0 & 0 & \cdots & (-1)^{f(2^n-1)} \\ \end{bmatrix} $$

Circuit Construction of a Grover Oracle

If we have our classical function $f(x)$, we can convert it to a reversible circuit of the form shown in the accompanying figure. If we initialise the 'output' qubit in the state $|{-}\rangle$, the phase kickback effect turns this into a Grover oracle (similar to the workings of the Deutsch-Jozsa oracle). We then ignore the auxiliary ($|{-}\rangle$) qubit.

For the next part of this chapter, we aim to teach the core concepts of the algorithm.
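The diagonal form above is easy to check numerically. Below is a small sketch I have added (plain NumPy; the helper name phase_oracle is mine, not from the text): it builds $U_\omega = \mathrm{diag}((-1)^{f(x)})$ for a checker $f$ that marks $\omega = 101$, and confirms that applying it to the uniform superposition flips only that one amplitude.

```python
import numpy as np

def phase_oracle(n, f):
    """Diagonal Grover oracle: U|x> = (-1)^f(x) |x>."""
    return np.diag([(-1) ** f(x) for x in range(2 ** n)]).astype(float)

f = lambda x: int(x == 0b101)   # f(x) = 1 only for the winner omega = |101>
U = phase_oracle(3, f)

s = np.ones(8) / np.sqrt(8)     # uniform superposition |s> over 3 qubits
print(U @ s)                    # only the |101> amplitude changes sign
```

This is exactly the matrix written out above: all diagonal entries are +1 except the entry at index 5 (binary 101), which is -1.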
We will create example oracles where we know $\omega$ beforehand, and not worry ourselves with whether these oracles are useful or not. At the end of the chapter, we will cover a short example where we create an oracle to solve a problem (sudoku).

Amplitude Amplification

So how does the algorithm work? Before looking at the list of items, we have no idea where the marked item is. Therefore, any guess of its location is as good as any other, which can be expressed in terms of a uniform superposition: $|s \rangle = \frac{1}{\sqrt{N}} \sum_{x = 0}^{N -1} | x \rangle.$

If at this point we were to measure in the standard basis $\{ | x \rangle \}$, this superposition would collapse, according to the fifth quantum law, to any one of the basis states with the same probability of $\frac{1}{N} = \frac{1}{2^n}$. Our chances of guessing the right value $w$ is therefore $1$ in $2^n$, as could be expected. Hence, on average we would need to try about $N/2 = 2^{n-1}$ times to guess the correct item.

Enter the procedure called amplitude amplification, which is how a quantum computer significantly enhances this probability. This procedure stretches out (amplifies) the amplitude of the marked item, which shrinks the other items' amplitude, so that measuring the final state will return the right item with near-certainty.

This algorithm has a nice geometrical interpretation in terms of two reflections, which generate a rotation in a two-dimensional plane. The only two special states we need to consider are the winner $| w \rangle$ and the uniform superposition $| s \rangle$. These two vectors span a two-dimensional plane in the vector space $\mathbb{C}^N.$ They are not quite perpendicular because $| w \rangle$ occurs in the superposition with amplitude $N^{-1/2}$ as well. We can, however, introduce an additional state $|s'\rangle$ that is in the span of these two vectors, which is perpendicular to $| w \rangle$ and is obtained from $|s \rangle$ by removing $| w \rangle$ and rescaling.
Step 1: The amplitude amplification procedure starts out in the uniform superposition $| s \rangle$, which is easily constructed from $| s \rangle = H^{\otimes n} | 0 \rangle^n$.

The left graphic corresponds to the two-dimensional plane spanned by perpendicular vectors $|w\rangle$ and $|s'\rangle$, which allows us to express the initial state as $|s\rangle = \sin \theta | w \rangle + \cos \theta | s' \rangle,$ where $\theta = \arcsin \langle s | w \rangle = \arcsin \frac{1}{\sqrt{N}}$. The right graphic is a bar graph of the amplitudes of the state $| s \rangle$.

Step 2: We apply the oracle reflection $U_f$ to the state $|s\rangle$. Geometrically this corresponds to a reflection of the state $|s\rangle$ about $|s'\rangle$. This transformation means that the amplitude in front of the $|w\rangle$ state becomes negative, which in turn means that the average amplitude (indicated by a dashed line) has been lowered.

Step 3: We now apply an additional reflection ($U_s$) about the state $|s\rangle$: $U_s = 2|s\rangle\langle s| - \mathbb{1}$. This transformation maps the state to $U_s U_f| s \rangle$ and completes the transformation.

Two reflections always correspond to a rotation. The transformation $U_s U_f$ rotates the initial state $|s\rangle$ closer towards the winner $|w\rangle$. The action of the reflection $U_s$ in the amplitude bar diagram can be understood as a reflection about the average amplitude. Since the average amplitude has been lowered by the first reflection, this transformation boosts the negative amplitude of $|w\rangle$ to roughly three times its original value, while it decreases the other amplitudes. We then go to step 2 to repeat the application. This procedure will be repeated several times to zero in on the winner.

After $t$ steps we will be in the state $|\psi_t\rangle$ where: $| \psi_t \rangle = (U_s U_f)^t | s \rangle.$

How many times do we need to apply the rotation? It turns out that roughly $\sqrt{N}$ rotations suffice.
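The two reflections are small enough to simulate directly with dense matrices. The sketch below is an illustration I have added (not from the original text): it runs $(U_s U_f)^t$ on $|s\rangle$ for $N = 8$ with one marked item, and shows the winner's probability growing exactly as $\sin^2((2t+1)\theta)$.

```python
import numpy as np

N, w = 8, 5                               # 3 qubits, winner |101>
s = np.ones(N) / np.sqrt(N)               # uniform superposition |s>

U_f = np.eye(N); U_f[w, w] = -1           # oracle: flip the phase of |w>
U_s = 2 * np.outer(s, s) - np.eye(N)      # diffuser: reflect about |s>

theta = np.arcsin(1 / np.sqrt(N))
psi = s.copy()
for t in range(1, 3):
    psi = U_s @ (U_f @ psi)               # one Grover rotation
    p = psi[w] ** 2
    print(f"t={t}: P(w) = {p:.3f}")       # matches sin^2((2t+1)*theta)
    assert np.isclose(p, np.sin((2 * t + 1) * theta) ** 2)
```

After two rotations (roughly $\frac{\pi}{4}\sqrt{8} \approx 2.2$, rounded down), the winner is measured with probability about 0.945, which is as close to certainty as this instance allows.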
This becomes clear when looking at the amplitudes of the state $| \psi \rangle$. We can see that the amplitude of $| w \rangle$ grows linearly with the number of applications $\sim t N^{-1/2}$. However, since we are dealing with amplitudes and not probabilities, the vector space's dimension enters as a square root. Therefore it is the amplitude, and not just the probability, that is being amplified in this procedure. In the case that there are multiple solutions, $M$, it can be shown that roughly $\sqrt{(N/M)}$ rotations will suffice.

2. Example: 2 Qubits

Let's first have a look at the case of Grover's algorithm for $N=4$, which is realized with 2 qubits. In this particular case, only one rotation is required to rotate the initial state $|s\rangle$ to the winner $|w\rangle$ [3]:

- Following the above introduction, in the case $N=4$ we have $$\theta = \arcsin \frac{1}{2} = \frac{\pi}{6}.$$
- After $t$ steps, we have $$(U_s U_\omega)^t | s \rangle = \sin \theta_t | \omega \rangle + \cos \theta_t | s' \rangle ,$$ where $$\theta_t = (2t+1)\theta.$$
- In order to obtain $| \omega \rangle$ we need $\theta_t = \frac{\pi}{2}$, which with $\theta=\frac{\pi}{6}$ inserted above results in $t=1$. This implies that after $t=1$ rotation the searched element is found.

We will now follow through an example using a specific oracle.

Oracle for $\lvert \omega \rangle = \lvert 11 \rangle$

Let's look at the case $\lvert w \rangle = \lvert 11 \rangle$. The oracle $U_\omega$ in this case acts as follows:

$$U_\omega | s \rangle = U_\omega \frac{1}{2}\left( |00\rangle + |01\rangle + |10\rangle + |11\rangle \right) = \frac{1}{2}\left( |00\rangle + |01\rangle + |10\rangle - |11\rangle \right).$$

or:

$$ U_\omega = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{bmatrix} $$

which you may recognise as the controlled-Z gate.
Thus, for this example, our oracle is simply the controlled-Z gate.

Reflection $U_s$

In order to complete the circuit we need to implement the additional reflection $U_s = 2|s\rangle\langle s| - \mathbb{1}$. Since this is a reflection about $|s\rangle$, we want to add a negative phase to every state orthogonal to $|s\rangle$. One way we can do this is to use the operation that transforms the state $|s\rangle \rightarrow |0\rangle$, which we already know is the Hadamard gate applied to each qubit:

$$H^{\otimes n}|s\rangle = |0\rangle$$

Then we apply a circuit that adds a negative phase to the states orthogonal to $|0\rangle$:

$$U_0 \frac{1}{2}\left( \lvert 00 \rangle + \lvert 01 \rangle + \lvert 10 \rangle + \lvert 11 \rangle \right) = \frac{1}{2}\left( \lvert 00 \rangle - \lvert 01 \rangle - \lvert 10 \rangle - \lvert 11 \rangle \right)$$

i.e. the signs of each state are flipped except for $\lvert 00 \rangle$. As can easily be verified, one way of implementing $U_0$ is the following circuit:

Finally, we do the operation that transforms the state $|0\rangle \rightarrow |s\rangle$ (the H-gate again):

$$H^{\otimes n}U_0 H^{\otimes n} = U_s$$

The complete circuit for $U_s$ looks like this:

Full Circuit for $\lvert w \rangle = |11\rangle$

Since in the particular case of $N=4$ only one rotation is required, we can combine the above components to build the full circuit for Grover's algorithm for the case $\lvert w \rangle = |11\rangle$:

2.1 Qiskit Implementation

We now implement Grover's algorithm for the above case of 2 qubits for $\lvert w \rangle = |11\rangle$.
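The identity $H^{\otimes 2} U_0 H^{\otimes 2} = 2|s\rangle\langle s| - \mathbb{1}$ can be confirmed with a few lines of NumPy (a check I have added, not part of the original text):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)                       # H applied to both qubits

U0 = np.diag([1., -1., -1., -1.])        # flip the phase of everything but |00>
U_s_built = H2 @ U0 @ H2                 # the H - U0 - H construction

s = np.ones(4) / 2                       # uniform superposition on 2 qubits
U_s = 2 * np.outer(s, s) - np.eye(4)     # reflection about |s>

print(np.allclose(U_s_built, U_s))       # True
```

The equality is exact here because $U_0 = 2|0\rangle\langle 0| - \mathbb{1}$ and $H^{\otimes 2}$ maps $|0\rangle \rightarrow |s\rangle$.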
#initialization
import matplotlib.pyplot as plt
import numpy as np

# importing Qiskit
from qiskit import IBMQ, Aer, assemble, transpile
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit.providers.ibmq import least_busy

# import basic plot tools
from qiskit.visualization import plot_histogram

We start by preparing a quantum circuit with two qubits:

n = 2
grover_circuit = QuantumCircuit(n)

Then we simply need to write out the commands for the circuit depicted above. First, we need to initialize the state $|s\rangle$. Let's create a general function (for any number of qubits) so we can use it again later:

def initialize_s(qc, qubits):
    """Apply a H-gate to 'qubits' in qc"""
    for q in qubits:
        qc.h(q)
    return qc

grover_circuit = initialize_s(grover_circuit, [0,1])
grover_circuit.draw()

Apply the oracle for $|w\rangle = |11\rangle$. This oracle is specific to 2 qubits:

grover_circuit.cz(0,1) # Oracle
grover_circuit.draw()

We now want to apply the diffuser ($U_s$). As with the circuit that initializes $|s\rangle$, we'll create a general diffuser (for any number of qubits) so we can use it later in other problems.

# Diffusion operator (U_s)
grover_circuit.h([0,1])
grover_circuit.z([0,1])
grover_circuit.cz(0,1)
grover_circuit.h([0,1])
grover_circuit.draw()

This is our finished circuit.
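Before running the finished circuit on a simulator, we can verify the whole sequence (oracle, then H-Z-CZ-H diffuser) with plain matrix algebra. This is an added sanity check, not from the original text; it shows that one rotation maps $|s\rangle$ exactly onto $|11\rangle$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)                                     # H on both qubits
CZ = np.diag([1., 1., 1., -1.])                        # oracle for |w> = |11>
Z2 = np.kron(np.diag([1., -1.]), np.diag([1., -1.]))   # Z on both qubits

s = H2 @ np.array([1., 0., 0., 0.])                    # |s> built from |00>

# apply oracle, then the diffuser gates in circuit order: H, Z, CZ, H
psi = H2 @ (CZ @ (Z2 @ (H2 @ (CZ @ s))))
print(psi)   # [0. 0. 0. 1.] -> all amplitude on |11>
```

This matches the claim in the text that a single rotation suffices for $N = 4$.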
sim = Aer.get_backend('aer_simulator')
# we need to make a copy of the circuit with the 'save_statevector'
# instruction to run on the Aer simulator
grover_circuit_sim = grover_circuit.copy()
grover_circuit_sim.save_statevector()
qobj = assemble(grover_circuit_sim)
result = sim.run(qobj).result()
statevec = result.get_statevector()

from qiskit_textbook.tools import vector2latex
vector2latex(statevec, pretext="|\\psi\\rangle =")

As expected, the amplitude of every state that is not $|11\rangle$ is 0; this means we have a 100% chance of measuring $|11\rangle$:

grover_circuit.measure_all()

aer_sim = Aer.get_backend('aer_simulator')
qobj = assemble(grover_circuit)
result = aer_sim.run(qobj).result()
counts = result.get_counts()
plot_histogram(counts)

# Load IBM Q account and get the least busy backend device
provider = IBMQ.load_account()
provider = IBMQ.get_provider("ibm-q")
device = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 3 and
                    not x.configuration().simulator and x.status().operational==True))
print("Running on current least busy device: ", device)

Running on current least busy device:

We confirm that in the majority of the cases the state $|11\rangle$ is measured. The other results are due to errors in the quantum computation.

3. Example: 3 Qubits

We now go through the example of Grover's algorithm for 3 qubits with two marked states $\lvert101\rangle$ and $\lvert110\rangle$, following the implementation found in Reference [2].
The quantum circuit to solve the problem using a phase oracle is:

- Apply Hadamard gates to $3$ qubits initialized to $\lvert000\rangle$ to create a uniform superposition:
$$\lvert \psi_1 \rangle = \frac{1}{\sqrt{8}} \left( \lvert000\rangle + \lvert001\rangle + \lvert010\rangle + \lvert011\rangle + \lvert100\rangle + \lvert101\rangle + \lvert110\rangle + \lvert111\rangle \right)$$
- Mark states $\lvert101\rangle$ and $\lvert110\rangle$ using a phase oracle:
$$\lvert \psi_2 \rangle = \frac{1}{\sqrt{8}} \left( \lvert000\rangle + \lvert001\rangle + \lvert010\rangle + \lvert011\rangle + \lvert100\rangle - \lvert101\rangle - \lvert110\rangle + \lvert111\rangle \right)$$
- Perform the reflection around the average amplitude:
  - Apply Hadamard gates to the qubits $$\lvert \psi_{3a} \rangle = \frac{1}{2} \left( \lvert000\rangle +\lvert011\rangle +\lvert100\rangle -\lvert111\rangle \right)$$
  - Apply X gates to the qubits $$\lvert \psi_{3b} \rangle = \frac{1}{2} \left( -\lvert000\rangle +\lvert011\rangle +\lvert100\rangle +\lvert111\rangle \right)$$
  - Apply a doubly controlled Z gate between the 1, 2 (controls) and 3 (target) qubits $$\lvert \psi_{3c} \rangle = \frac{1}{2} \left( -\lvert000\rangle +\lvert011\rangle +\lvert100\rangle -\lvert111\rangle \right)$$
  - Apply X gates to the qubits $$\lvert \psi_{3d} \rangle = \frac{1}{2} \left( -\lvert000\rangle +\lvert011\rangle +\lvert100\rangle -\lvert111\rangle \right)$$
  - Apply Hadamard gates to the qubits $$\lvert \psi_{3e} \rangle = \frac{1}{\sqrt{2}} \left( -\lvert101\rangle -\lvert110\rangle \right)$$
- Measure the $3$ qubits to retrieve states $\lvert101\rangle$ and $\lvert110\rangle$

Note that since there are 2 solutions and 8 possibilities, we will only need to run one iteration (steps 2 & 3).

3.1 Qiskit Implementation

We now implement Grover's algorithm for the above example for $3$-qubits, searching for the two marked states $\lvert101\rangle$ and $\lvert110\rangle$.

Note: Remember that Qiskit orders its qubits the opposite way round to this resource, so the circuit drawn will appear flipped about the horizontal.

We create a phase oracle that will mark states $\lvert101\rangle$ and $\lvert110\rangle$ as the results (step 1).
qc = QuantumCircuit(3)
qc.cz(0, 2)
qc.cz(1, 2)
oracle_ex3 = qc.to_gate()
oracle_ex3.name = "U$_\omega$"

In the last section, we used a diffuser specific to 2 qubits; in the cell below we will create a general diffuser for any number of qubits.

Details: Creating a General Diffuser

Remember that we can create $U_s$ from $U_0$:

$$ U_s = H^{\otimes n} U_0 H^{\otimes n} $$

And a multi-controlled-Z gate ($MCZ$) inverts the phase of the state $|11\dots 1\rangle$:

$$ MCZ = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & -1 \\ \end{bmatrix} \begin{aligned} \\ \\ \\ \leftarrow \text{Add negative phase to} \; |11\dots 1\rangle\\ \end{aligned} $$

Applying an X-gate to each qubit performs the transformation:

$$ \begin{aligned} |00\dots 0\rangle & \rightarrow |11\dots 1\rangle\\ |11\dots 1\rangle & \rightarrow |00\dots 0\rangle \end{aligned} $$

So:

$$ U_0 = - X^{\otimes n} (MCZ) X^{\otimes n} $$

Using these properties together, we can create $U_s$ using H-gates, X-gates, and a single multi-controlled-Z gate:

$$ U_s = - H^{\otimes n} U_0 H^{\otimes n} = H^{\otimes n} X^{\otimes n} (MCZ) X^{\otimes n} H^{\otimes n} $$

Note that we can ignore the global phase of -1.
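Before wiring this into Qiskit, the H, X, MCZ, X, H construction can be sanity-checked with NumPy. This is a check I have added (the helper kron_n is mine); it confirms the built matrix equals the ideal diffuser up to the global phase of -1 noted above:

```python
import numpy as np
from functools import reduce

def kron_n(M, n):
    """n-fold tensor power of a single-qubit matrix."""
    return reduce(np.kron, [M] * n)

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0., 1.], [1., 0.]])
Hn, Xn = kron_n(H, n), kron_n(X, n)

MCZ = np.eye(2 ** n); MCZ[-1, -1] = -1          # flip the phase of |11...1>
built = Hn @ Xn @ MCZ @ Xn @ Hn                 # the diffuser circuit as a matrix

s = np.ones(2 ** n) / np.sqrt(2 ** n)
U_s = 2 * np.outer(s, s) - np.eye(2 ** n)       # ideal reflection about |s>

print(np.allclose(built, -U_s))                 # True: equal up to global phase -1
```

The global phase is unobservable, so the circuit below implements a perfectly good diffuser.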
def diffuser(nqubits):
    qc = QuantumCircuit(nqubits)
    # Apply transformation |s> -> |00..0> (H-gates)
    for qubit in range(nqubits):
        qc.h(qubit)
    # Apply transformation |00..0> -> |11..1> (X-gates)
    for qubit in range(nqubits):
        qc.x(qubit)
    # Do multi-controlled-Z gate
    qc.h(nqubits-1)
    qc.mct(list(range(nqubits-1)), nqubits-1)  # multi-controlled-toffoli
    qc.h(nqubits-1)
    # Apply transformation |11..1> -> |00..0>
    for qubit in range(nqubits):
        qc.x(qubit)
    # Apply transformation |00..0> -> |s>
    for qubit in range(nqubits):
        qc.h(qubit)
    # We will return the diffuser as a gate
    U_s = qc.to_gate()
    U_s.name = "U$_s$"
    return U_s

We'll now put the pieces together, with the creation of a uniform superposition at the start of the circuit and a measurement at the end. Note that since there are 2 solutions and 8 possibilities, we will only need to run one iteration.

n = 3
grover_circuit = QuantumCircuit(n)
grover_circuit = initialize_s(grover_circuit, [0,1,2])
grover_circuit.append(oracle_ex3, [0,1,2])
grover_circuit.append(diffuser(n), [0,1,2])
grover_circuit.measure_all()
grover_circuit.draw()

aer_sim = Aer.get_backend('aer_simulator')
transpiled_grover_circuit = transpile(grover_circuit, aer_sim)
qobj = assemble(transpiled_grover_circuit)
results = aer_sim.run(qobj).result()
counts = results.get_counts()
plot_histogram(counts)

As we can see, the algorithm discovers our marked states $\lvert101\rangle$ and $\lvert110\rangle$.

backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 3 and
                     not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)

least busy backend:

As we can (hopefully) see, there is a higher chance of measuring $\lvert101\rangle$ and $\lvert110\rangle$. The other results are due to errors in the quantum computation.

4. Problems

The function grover_problem_oracle below takes a number of qubits (n) and a variant, and returns an n-qubit oracle.
The function will always return the same oracle for the same n and variant. You can see the solutions to each oracle by setting print_solutions = True when calling grover_problem_oracle. from qiskit_textbook.problems import grover_problem_oracle ## Example Usage n = 4 oracle = grover_problem_oracle(n, variant=1) # 0th variant of oracle, with n qubits qc = QuantumCircuit(n) qc.append(oracle, [0,1,2,3]) qc.draw() grover_problem_oracle(4, variant=2)uses 4 qubits and has 1 solution. a. How many iterations do we need to have a > 90% chance of measuring this solution? b. Use Grover's algorithm to find this solution state. c. What happens if we apply more iterations the number we calculated in problem 1a above? Why? With 2 solutions and 4 qubits, how many iterations do we need for a >90% chance of measuring a solution? Test your answer using the oracle grover_problem_oracle(4, variant=1)(which has two solutions). Create a function, grover_solver(oracle, iterations)that takes as input: - A Grover oracle as a gate ( oracle) - An integer number of iterations ( iterations) and returns a QuantumCircuitthat performs Grover's algorithm on the ' oracle' gate, with ' iterations' iterations. 5. Solving Sudoku using Grover's Algorithm The oracles used throughout this chapter so far have been created with prior knowledge of their solutions. We will now solve a simple problem using Grover's algorithm, for which we do not necessarily know the solution beforehand. Our problem is a 2×2 binary sudoku, which in our case has two simple rules: - No column may contain the same value twice - No row may contain the same value twice If we assign each square in our sudoku to a variable like so: we want our circuit to output a solution to this sudoku. 
Note that, while this approach of using Grover's algorithm to solve this problem is not practical (you can probably find the solution in your head!), the purpose of this example is to demonstrate the conversion of classical decision problems into oracles for Grover's algorithm. 5.1 Turning the Problem into a Circuit We want to create an oracle that will help us solve this problem, and we will start by creating a circuit that identifies a correct solution. Similar to how we created a classical adder using quantum circuits in The Atoms of Computation, we simply need to create a classical function on a quantum circuit that checks whether the state of our variable bits is a valid solution. Since we need to check down both columns and across both rows, there are 4 conditions we need to check: v0 ≠ v1 # check along top row v2 ≠ v3 # check along bottom row v0 ≠ v2 # check down left column v1 ≠ v3 # check down right column Remember we are comparing classical (computational basis) states. For convenience, we can compile this set of comparisons into a list of clauses: clause_list = [[0,1], [0,2], [1,3], [2,3]] We will assign the value of each variable to a bit in our circuit. To check these clauses computationally, we will use the XOR gate (we came across this in the atoms of computation). def XOR(qc, a, b, output): qc.cx(a, output) qc.cx(b, output) Convince yourself that the output0 bit in the circuit below will only be flipped if input0 ≠ input1: # We will use separate registers to name the bits in_qubits = QuantumRegister(2, name='input') out_qubit = QuantumRegister(1, name='output') qc = QuantumCircuit(in_qubits, out_qubit) XOR(qc, in_qubits[0], in_qubits[1], out_qubit) qc.draw() This circuit checks whether input0 == input1 and stores the output to output0. 
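Since the two CNOTs act classically on computational-basis states, the claim is easy to verify with a plain-Python truth table (my own sketch, separate from the Qiskit circuit): the output bit ends up as out ⊕ input0 ⊕ input1, so it is flipped exactly when the inputs differ.

```python
def xor_check(a, b, out=0):
    # Classical effect of the two CNOTs in XOR(qc, a, b, output)
    out ^= a  # CNOT with control a, target out
    out ^= b  # CNOT with control b, target out
    return out

truth_table = {(a, b): xor_check(a, b) for a in (0, 1) for b in (0, 1)}
print(truth_table)  # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```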
To check each clause, we repeat this circuit for each pairing in clause_list and store the output to a new bit: # Create separate registers to name bits var_qubits = QuantumRegister(4, name='v') # variable bits clause_qubits = QuantumRegister(4, name='c') # bits to store clause-checks # Create quantum circuit qc = QuantumCircuit(var_qubits, clause_qubits) # Use XOR gate to check each clause i = 0 for clause in clause_list: XOR(qc, clause[0], clause[1], clause_qubits[i]) i += 1 qc.draw() The final state of the bits c0, c1, c2, c3 will only all be 1 in the case that the assignments of v0, v1, v2, v3 are a solution to the sudoku. To complete our checking circuit, we want a single bit to be 1 if (and only if) all the clauses are satisfied, this way we can look at just one bit to see if our assignment is a solution. We can do this using a multi-controlled-Toffoli-gate: # Create separate registers to name bits var_qubits = QuantumRegister(4, name='v') clause_qubits = QuantumRegister(4, name='c') output_qubit = QuantumRegister(1, name='out') qc = QuantumCircuit(var_qubits, clause_qubits, output_qubit) # Compute clauses i = 0 for clause in clause_list: XOR(qc, clause[0], clause[1], clause_qubits[i]) i += 1 # Flip 'output' bit if all clauses are satisfied qc.mct(clause_qubits, output_qubit) qc.draw() The circuit above takes as input an initial assignment of the bits v0, v1, v2 and v3, and all other bits should be initialized to 0. After running the circuit, the state of the out0 bit tells us if this assignment is a solution or not; out0 = 0 means the assignment is not a solution, and out0 = 1 means the assignment is a solution. Important: Before you continue, it is important you fully understand this circuit and are convinced it works as stated in the paragraph above. 5.2 Uncomputing, and Completing the Oracle We can now turn this checking circuit into a Grover oracle using phase kickback. 
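Before building the oracle, it is worth confirming classically that this clause logic singles out the right assignments. The brute-force sweep below (a plain-Python sanity check of mine, not part of the quantum circuit) evaluates the same four clauses over all 16 assignments of v0..v3:

```python
from itertools import product

clause_list = [[0, 1], [0, 2], [1, 3], [2, 3]]

def is_solution(v):
    # out0 would be flipped to 1 exactly when every clause pair differs
    return all(v[a] != v[b] for a, b in clause_list)

solutions = [v for v in product([0, 1], repeat=4) if is_solution(v)]
print(solutions)  # [(0, 1, 1, 0), (1, 0, 0, 1)] as (v0, v1, v2, v3)
```

These are exactly the two sudoku solutions the quantum search should amplify.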
To recap, we have 3 registers:
- One register which stores our sudoku variables (we'll say $x = v_3, v_2, v_1, v_0$)
- One register that stores our clauses (this starts in the state $|0000\rangle$ which we'll abbreviate to $|0\rangle$)
- And one qubit ($|\text{out}_0\rangle$) that we've been using to store the output of our checking circuit.

To create an oracle, we need our circuit ($U_\omega$) to perform the transformation:
$$ U_\omega|x\rangle|0\rangle|\text{out}_0\rangle = |x\rangle|0\rangle|\text{out}_0\oplus f(x)\rangle $$

If we set the out0 qubit to the superposition state $|{-}\rangle$ we have:

If $f(x) = 0$, then we have the state:
$$ \begin{aligned} &= |x\rangle|0\rangle\otimes \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\\ &= |x\rangle|0\rangle|-\rangle\\ \end{aligned} $$
(i.e. no change). But if $f(x) = 1$ (i.e. $x = \omega$), we introduce a negative phase to the $|{-}\rangle$ qubit:
$$ \begin{aligned} &= \phantom{-}|x\rangle|0\rangle\otimes\tfrac{1}{\sqrt{2}}(|1\rangle - |0\rangle)\\ &= \phantom{-}|x\rangle|0\rangle\otimes -\tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\\ &= -|x\rangle|0\rangle|-\rangle\\ \end{aligned} $$

This is a functioning oracle that uses two auxiliary registers in the state $|0\rangle|{-}\rangle$:
$$ U_\omega|x\rangle|0\rangle|{-}\rangle = \Bigg\{ \begin{aligned} \phantom{-}|x\rangle|0\rangle|{-}\rangle \quad \text{for} \; x \neq \omega \\ -|x\rangle|0\rangle|{-}\rangle \quad \text{for} \; x = \omega \\ \end{aligned} $$

To adapt our checking circuit into a Grover oracle, we need to guarantee the bits in the second register (c) are always returned to the state $|0000\rangle$ after the computation. To do this, we simply repeat the part of the circuit that computes the clauses, which guarantees c0 = c1 = c2 = c3 = 0 after our circuit has run. We call this step 'uncomputation'.
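Because each CNOT is its own inverse, repeating the clause computation really does wipe the clause register. A classical shadow of the circuit (helper names are mine, mirroring the XOR routine above) shows the effect:

```python
clause_list = [[0, 1], [0, 2], [1, 3], [2, 3]]

def compute_clauses(v, c):
    # Each pair of CNOTs XORs two variable bits into a clause bit
    for i, (a, b) in enumerate(clause_list):
        c[i] ^= v[a] ^ v[b]

v = [1, 0, 1, 1]           # an arbitrary (non-solution) assignment
c = [0, 0, 0, 0]           # clause register, starts as |0000>
compute_clauses(v, c)      # compute: c now holds the clause checks
after_compute = list(c)    # [1, 0, 1, 0]
compute_clauses(v, c)      # uncompute: the same gates again
after_uncompute = list(c)  # [0, 0, 0, 0] -- back to |0000>
```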
var_qubits = QuantumRegister(4, name='v')
clause_qubits = QuantumRegister(4, name='c')
output_qubit = QuantumRegister(1, name='out')
cbits = ClassicalRegister(4, name='cbits')
qc = QuantumCircuit(var_qubits, clause_qubits, output_qubit, cbits)

def sudoku_oracle(qc, clause_list, clause_qubits):
    # Compute clauses
    i = 0
    for clause in clause_list:
        XOR(qc, clause[0], clause[1], clause_qubits[i])
        i += 1
    # Flip 'output' bit if all clauses are satisfied
    qc.mct(clause_qubits, output_qubit)
    # Uncompute clauses to reset clause-checking bits to 0
    i = 0
    for clause in clause_list:
        XOR(qc, clause[0], clause[1], clause_qubits[i])
        i += 1

sudoku_oracle(qc, clause_list, clause_qubits)
qc.draw()

In summary, the circuit above performs:
$$ U_\omega|x\rangle|0\rangle|\text{out}_0\rangle = \Bigg\{ \begin{aligned} |x\rangle|0\rangle|\text{out}_0\rangle \quad \text{for} \; x \neq \omega \\ |x\rangle|0\rangle\otimes X|\text{out}_0\rangle \quad \text{for} \; x = \omega \\ \end{aligned} $$
and if the initial state of $|\text{out}_0\rangle$ is $|{-}\rangle$, phase kickback gives:
$$ U_\omega|x\rangle|0\rangle|{-}\rangle = \Bigg\{ \begin{aligned} \phantom{-}|x\rangle|0\rangle|{-}\rangle \quad \text{for} \; x \neq \omega \\ -|x\rangle|0\rangle|{-}\rangle \quad \text{for} \; x = \omega \\ \end{aligned} $$

var_qubits = QuantumRegister(4, name='v')
clause_qubits = QuantumRegister(4, name='c')
output_qubit = QuantumRegister(1, name='out')
cbits = ClassicalRegister(4, name='cbits')
qc = QuantumCircuit(var_qubits, clause_qubits, output_qubit, cbits)

# Initialize 'out0' in state |->
qc.initialize([1, -1]/np.sqrt(2), output_qubit)

# Initialize qubits in state |s>
qc.h(var_qubits)
qc.barrier()  # for visual separation

## First Iteration
# Apply our oracle
sudoku_oracle(qc, clause_list, clause_qubits)
qc.barrier()  # for visual separation
# Apply our diffuser
qc.append(diffuser(4), [0,1,2,3])

## Second Iteration
sudoku_oracle(qc, clause_list, clause_qubits)
qc.barrier()  # for visual separation
# Apply our diffuser
qc.append(diffuser(4), [0,1,2,3])

# Measure the variable qubits
qc.measure(var_qubits, cbits)

qc.draw(fold=-1)

# Simulate and plot results
aer_simulator = Aer.get_backend('aer_simulator')
transpiled_qc = transpile(qc, aer_simulator)
qobj = assemble(transpiled_qc)
result = aer_simulator.run(qobj).result()
plot_histogram(result.get_counts())

There are two bit strings with a much higher probability of measurement than any of the others, 0110 and 1001. These correspond to the assignments:

v0 = 0, v1 = 1, v2 = 1, v3 = 0

and

v0 = 1, v1 = 0, v2 = 0, v3 = 1

which are the two solutions to our sudoku! The aim of this section is to show how we can create Grover oracles from real problems. While this specific problem is trivial, the process can be applied (allowing large enough circuits) to any decision problem. To recap, the steps are:
- Create a reversible classical circuit that identifies a correct solution
- Use phase kickback and uncomputation to turn this circuit into an oracle
- Use Grover's algorithm to solve this oracle

6. References
- L. K. Grover (1996), "A fast quantum mechanical algorithm for database search", Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (STOC 1996), doi:10.1145/237814.237866, arXiv:quant-ph/9605043
- C. Figgatt, D. Maslov, K. A. Landsman, N. M. Linke, S. Debnath & C. Monroe (2017), "Complete 3-Qubit Grover search on a programmable quantum computer", Nature Communications, Vol 8, Art 1918, doi:10.1038/s41467-017-01904-7, arXiv:1703.10535
- I. Chuang & M. Nielsen, "Quantum Computation and Quantum Information", Cambridge: Cambridge University Press, 2000.
Well, nice contribution. As you said, you have ideas and concepts but still get stuck while coding — it means you didn't do enough practice in your early days of programming. When I was new to programming, I used to practice non-stop for 16 hours a day.

You explained everything bit by bit, very good madam. But I am still stupid. Please guide me: what is a variable in C++? I read a lot of books to understand it but it never made sense. All I know is what I learned in math: a variable is something whose value keeps changing. Like x = 5; in algebra, x is a variable which currently stores 5, and later we can store 10 in it if we want. That was my concept from math. Now please tell me what a variable is in C++ — how is it written, what does it do and how does it do it? After reading a lot of books I'm still confused. You are from my country; explain it to me in my own language and then maybe I will understand. Up to my bachelor's I studied sitting right beside a teacher; VU is completely new for me. You have the concepts, so please teach me. I am very keen to learn.

Yes, I am an ex student. But I thought I should ask in simple words so that anyone else reading this would have no problem. Although in the math example I already explained variables myself. If you understand programming through the real world, it clicks quickly. In the comment to Burried below I talked about functions, although I gave a real-life example of those myself too — now let's see how they, or anyone else, explains functions.

Just came here to appreciate your initiative. Wanna share some of my experience regarding programming with the newbies — maybe it will be a minor help to someone. Btw, I am worst at programming, though I have loved the C language since childhood, because of it being a middle-level language. The only good thing I had regarding programming is that I could somehow understand the logic of programs, no matter which language they were in. I used to enjoy messing with the programming of those script viruses that sit in USB drives.
Just because I could understand what the virus was actually doing. That way I also have plenty of experience removing viruses from USBs without an anti-virus. But as I said, I'm too bad at programming, just a little good at logic, so I once built a DOS-type OS myself, just by editing the command shell. One more thing: I made an Excel formula for the office people that converted numbers into words, which they used to print cheques. I worked in Visual Basic for this purpose — took a little idea from the net, then did it myself. For programming students, understanding logic is in my opinion the first step; in the second step they have to practice extremely hard, and practice is what kills most people, me included. One of my C++ teachers used to say that the best programmer is the one who can write out a program's code on paper, from memory, before entering it into any software. Despite not being a programmer, I believe this is the MUST learn skill for a programmer. Anyway sir, you carry on. All the best.

Very good comment bro. And yes, practice kills everyone, but practice is also what makes a person perfect. Mark Zuckerberg and Bill Gates are trying to make programming a compulsory subject in their country's schools, because in programming we think — our mind gets exercised. Well, since you have done so much in C, then come on and explain this to me: what is a function in programming? If I look at daily life, a function is some task that I do once, or for some particular period, or whenever I want, within my routine life. For example, I go to college — that is a function I perform in my daily life. Or eating is part of my routine, but whenever food appears in front of me, my urge to eat wakes up and I start eating — those are functions I am performing. What functions are in programming, I have no idea, yaar. Does it have types and all?? At the start of CS201, the first program they had us do in the video was something like this:
#include <iostream.h>
main(){
cout << "Welcome to Virtual University of Pakistan.";
system("Pause");
}

Can anyone here break this program down and explain it to me line by line? I am keen to learn and discuss. Someone help me plz.

well let me try it out. #include is a preprocessor directive; preprocessor statements are used by the compiler to pull in code which lives in libraries — for example, cout and cin are part of the iostream.h library. main() is a method or function; it runs the code you write inside it. cout is an output channel: it prints whatever you write between " double quotes ". system("pause"); is a function to pause your output on the console window.

what is a variable? in layman terms, a variable is a bucket which holds a value according to its datatype.

what is a statement? in simple words, let's take the if statement. for instance, you told your watchman or gatekeeper: whoever comes to meet me, ask their name first; if he gives you the name "Afrid", let him in, but if he gives you the name "OMI", tell him I am not home. this is how the if statement works: if (blah blah) { execute this code } else { execute this code }

what are arrays? as I mentioned above, a variable is a bucket which holds a value according to its datatype; similarly, arrays are a bunch of variables/buckets, but all of the same datatype. to access the values of the variables, you use the subscript operator "[]" or pointers.

what is a pointer? a pointer is a variable/bucket just like the others; the only difference is that this pointer variable/bucket holds another variable's address! for instance int x = 5; x is a bucket and it contains 5 as its value. but x also has an address where it is located on the stack — let's say "fs2321" is the address of the x variable/bucket. now let's declare int *ptr = &x; this ptr bucket now contains "fs2321".

streams are just like a pipeline: we can send our objects through streams in the form of "bits" to save them on disk, or to sockets, or to the console window.
Nice, answer appreciated. But I have often heard the term "Stack" — you also use it in your comment. What is that thing? Is this (stack) a physical thing? What does it look like — can you describe it a little? What does it do? And as you said, the variables we create are stored in memory (RAM), and we store the address of the place in RAM where they are created in a pointer. But my question is: why do we store the addresses of variables and other things? What is the benefit of storing an address? And if pointers are powerful, or have some other use, then why do pointers not exist in other modern languages like Java and C#?

Okey. I will also try to break down this code and explain what happens when it executes. Our first line of code is #include <iostream.h>. Well, #include is a preprocessor directive and it tells the compiler: please include iostream.h in this program. Meaning #include is a mechanism already defined in C++, and it is used to include header files or libraries. Here iostream.h is a header file that already comes written with C++ — it exists in the folder where Dev is installed. The main, cout and system() functions that we use below are declared in this file. And if you remove iostream.h from your code, the compiler will give you errors like: cout is not declared, system("pause") is not declared — those kinds of errors.

The second line of code is the main() function. Every C program has at least one main function. In simple words, this is the entry point of our program. When our program starts, the main function is called first of all. We can also write code outside the main function's body, but what is defined in the main function executes first. cout is our output stream, and with it we use the double less-than symbol (<<), which shows the direction of the data. Anything written in double quotes will show on your computer screen via cout and the stream insertion operator.
"Welcome to Virtual University of Pakistan." is a string, written inside double quotes " ". This string passes through cout, goes through the processor, travels over the VGA card and the LCD cable, and shows up on our LCD or monitor — if I am not wrong.

And last is the system("Pause"); function. In my very early days of CS201 I struggled with this for many days: the whole program executes, but a black screen flashes for a moment and then it is gone — I never got a chance to see it. So here we use the system pause function and tell the system: until we give some other instruction from the keyboard, do not let the program go past this line. Very simple. That's why you will see "Press any key to continue" written on the screen. And after this function comes the } curly brace, which marks the end of main's body, or code block.

in layman terms, the stack and heap are both blocks of memory, called stack and heap, which are provided to a program by the memory manager. for example, say the stack memory allocated to our program is 30mb and the heap is 70mb; every bit of this stack memory has its own address, and the same goes for the heap. Whatever we do in our program — variables, pointers, functions and structures — these all take place in stack memory! let's declare some variables: int a = 1, b = 2, *c = &b; these are stored on the stack. a pointer also has its own address; people think a pointer just points to some other variable, which is true, but pointers are stored on the stack too, so they have their own address as well.

the heap is a dynamic memory block, also called the free store. you can get a chunk of memory on the heap at RUNTIME/dynamically, through the "new" keyword in C++, and in C through the "malloc(); calloc();" functions. Sometimes you need a dynamic array which grows dynamically! then you request some memory allocation from the "memory manager", and the memory manager finds a memory block on the heap according to your request; if it finds one, it returns you the address of that memory block, and you keep that address in a pointer variable.

now let's talk about pointers.
Pointers are powerful as well as unsafe and complex.

Power of pointers: the most important things you do in C++ are done through pointers. If you need dynamic arrays, dynamic memory allocation, dynamic objects, linked lists, queues, stacks, trees — the pointer is the only thing which makes all of these possible.

Disadvantages of pointers: pointers are unsafe because you cannot manipulate them effectively unless you are quite familiar with the process happening behind pointers. Pointers do not just point to variables; they can point to other pointers as well, and that brings complexity. When a pointer points to a pointer which is also pointing to another pointer — such as int a = 2; int *ptr1, *ptr2, *ptr3; ptr1 = &a; ptr2 = ptr1; ptr3 = ptr2; — you cannot predict what value you will reach through ptr3 unless you have concrete knowledge of it. Another point is memory allocated from the "free store": a pointer pointing to a memory block on the heap has to release that block of memory, in the destructor, through the "delete" keyword.

Why other languages do not have pointers: the basic reason is the complexity — people sometimes mistakenly change something and it causes damage or a crash; that is the main reason behind leaving pointers out. Another major reason is that pointers can point to heap memory, which sometimes causes memory leaks, because not everyone actually releases pointer memory that was allocated from the heap.

Advantages of pointers: you can manipulate variables without touching them directly; you can send variables to a function without sending the actual variable; pointers help you manipulate arrays; you can pass a huge array as an argument to a function because of pointers; pointers can point to functions as well, the same as delegates in C#; and many more.

Informative comment. In addition to this reply: where variables are stored in C++. Good job Khurram.

© 2020 Created by + M.Tariq Malik.
A class which is a member of a package is known as a top-level class. A class can be declared within another class. This type of class is called an inner class. If the class declared within another class is explicitly or implicitly declared static, it is called a nested class, not an inner class. The class that contains the inner class is called an enclosing class or an outer class. The following code declares an inner class.

class Outer {
    public class Inner {
        // Members of the Inner class go here
    }
    // Other members of the Outer class go here
}

The Outer class is a top-level class. The Inner class is an inner class. It is a member of the Outer class. The Outer class is the enclosing (outer) class for the Inner class. An inner class can be the enclosing class for another inner class. There are no limits on the levels of nesting of inner classes. An instance of an inner class can only exist within an instance of its enclosing class. The following are some of the advantages of inner classes.

The following code demonstrates the rules for accessing local variables inside a local inner class. The main() method declares two local variables called x and y. Both variables are effectively final. The variable x is never changed after it is initialized, and the variable y cannot be changed because it is declared as final.

public class Main {
    public static void main(String... args) {
        int x = 1;
        final int y = 2;
        class LocalInner {
            void print() {
                System.out.println("x = " + x);
                System.out.println("y = " + y);
            }
        }
        /*
         * Uncommenting the following statement would make the variable x no
         * longer effectively final, and the LocalInner class would not compile.
         */
        // x = 100;
        LocalInner li = new LocalInner();
        li.print();
    }
}

The code above generates the following result:
x = 1
y = 2

An inner class can inherit from another inner class, a top-level class, or its enclosing class.

class A {
    public class B { }
    public class C extends B { }
    public class D extends A { }
}
class E extends A {
    public class F extends B { }
}

The keyword static in Java makes a construct a top-level construct. Therefore, we cannot declare any static members (fields, methods, or initializers) for an inner class. It is allowed to have static fields in an inner class that are compile-time constants.

class A {
    public class B {
        public final static int DAYS_IN_A_WEEK = 7; // OK: a compile-time constant
        public final String str = new String("Hello"); // OK: an instance field, not static
    }
}

Each inner class is compiled into a separate class file. The class file name format for a member inner class and a static inner class is as follows:
<outer-class-name>$<member-or-static-inner-class-name>
The format for the class file name for a local inner class is as follows:
<outer-class-name>$<a-number><local-inner-class-name>
The format for the class file name for an anonymous class is as follows:
<outer-class-name>$<a-number>
<a-number> in a class file name is a number that is generated sequentially starting from 1 to avoid any name conflicts.

We can define an inner class in a static context such as inside a static method or a static initializer. All static field members are accessible to such an inner class.

class Outer {
    static int k = 1;
    int m = 2;
    public static void staticMethod() {
        // Class Inner is defined in a static context
        class Inner {
            int j = k; // OK. Referencing static field k
            // int n = m; // An error. Referencing non-static field m
        }
    }
}
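For comparison only: Java's rule that a local inner class may read (effectively) final locals has a loose analogue in Python, where a class defined inside a function closes over the enclosing locals. This is a Python sketch of the idea, not Java semantics — note that Python does not enforce the effectively-final restriction:

```python
def outer():
    x = 1  # local variable, captured by the inner class's method
    class LocalInner:
        def get_x(self):
            # The method closes over the enclosing function's local x,
            # loosely mirroring Java's local inner class capture
            return x
    return LocalInner()

inner = outer()
print(inner.get_x())  # 1
```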
Log Sql in grails for a piece of code

There are times when we need to see the sql logging statements just for a method or for a particular piece of code. Although we already have the logSql property in DataSource to do it for us, it can be inconvenient when we need to see the log for a small piece of code rather than for the whole project. So I need something that will execute my code within a block which automatically starts sql logging and switches it off when the code is finished. Groovy closures are the solution for this problem. For doing it I created a class LogSql which has a static execute method that takes a closure as parameter.

import org.apache.log4j.Level
import org.apache.log4j.Logger

class LogSql {
    public static def execute(Closure closure) {
        Logger sqlLogger = Logger.getLogger("org.hibernate.SQL")
        Level currentLevel = sqlLogger.level
        sqlLogger.setLevel(Level.TRACE)
        def result = closure.call()
        sqlLogger.setLevel(currentLevel)
        result
    }
}

Now when I want to see the logs I do something like the following.

String name = "Uday"
Person person
LogSql.execute {
    person = Person.findByName(name)
}

This prints the sql statements for all the sql fired in the given block of code.

Hope it helps
Uday Pratap Singh
uday@intelligrape.com

I improved the code a little bit. I dealt with exceptions so that it always returns to the current level.

Cool, great help mate

Hey, nice trick!! Only one thing: if you change the global "org.hibernate.SQL" logger, you change it for every other piece of code using that. Can't you change it only for the current thread?

Nice, good find — I got a lot out of it. I will make an annotation for it (with variations and enhancements) and add it to the super-Programmer plugin annotation library.

Nice catch. Was looking for the same. Thanks for sharing.
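The same temporarily-raise-then-restore pattern (including the commenter's exception-safety fix) maps naturally onto a context manager in other ecosystems. Here is a hedged Python sketch of the idea using the standard logging module — names like log_at are mine, not from the post:

```python
import logging
from contextlib import contextmanager

@contextmanager
def log_at(logger_name, level=logging.DEBUG):
    # Temporarily change a logger's threshold; the finally block restores
    # the previous level even if the wrapped code raises (the fix one
    # commenter describes for the Groovy version).
    logger = logging.getLogger(logger_name)
    previous = logger.level
    logger.setLevel(level)
    try:
        yield logger
    finally:
        logger.setLevel(previous)

# Usage: only code inside the block logs at the verbose level
sql_logger = logging.getLogger("org.hibernate.SQL")
sql_logger.setLevel(logging.WARNING)
with log_at("org.hibernate.SQL"):
    level_inside = sql_logger.level   # logging.DEBUG
level_outside = sql_logger.level      # logging.WARNING again
```

Like the Groovy original, this still mutates a process-global logger, so the commenter's per-thread concern applies here as well.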