Add support for indexing
Adding support for indexing by adding:
{{{
!python
def __getitem__(self, index):
    return getattr(self, self.__slots__[index])

def __setitem__(self, index, value):
    return setattr(self, self.__slots__[index], value)
}}}
to the class definitions would make it closer to a mutable namedtuple since namedtuple supports indexing.
Do you have some particular use case for this? In my experience, indexing namedtuples has caused me a bug or two, and I've never actually wanted the feature.
+1, one use case would be replacing plain tuples/lists with recordtype-based types and having existing, legacy code handle it without major modifications.
Msgpack doesn't work without __getitem__. For example, this code fails:
Sorry, I didn't realize that you've switched developing to another project.
No problem. I should make it more obvious.
I think this problem doesn't exist in namedlist.namedlist, but please verify.
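For illustration, the proposal works on any __slots__-based class. This is a hand-written sketch; the Point class is hypothetical, not recordtype's actual generated code:

```python
# Hand-written sketch of the proposal above; Point stands in for a
# recordtype-created type and is not recordtype's generated code.
class Point(object):
    __slots__ = ('x', 'y')

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __getitem__(self, index):
        # look up the attribute name by position, then read it
        return getattr(self, self.__slots__[index])

    def __setitem__(self, index, value):
        # look up the attribute name by position, then write it
        setattr(self, self.__slots__[index], value)

p = Point(1, 2)
print(p[0])   # indexed read, like a namedtuple
p[1] = 5      # indexed write, unlike a namedtuple
print(p.y)
```

This is what makes code like msgpack's serialization, which iterates by index, accept the instance as if it were a tuple.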
Source: https://bitbucket.org/ericvsmith/recordtype/issues/3/add-support-for-indexing (dump CC-MAIN-2017-34, refinedweb; 150 words, Flesch 51.85)
Binary image matrix returns 0 values
Hello, I'm trying to create a simple program that detects when the color red enters the field and writes a timestamp to a .csv file when a certain threshold of red is crossed.
So far i have this:
from SimpleCV import *

cam = Camera()
disp = Display()
while disp.isNotDone():
    capture = cam.getImage()            # capturing the image
    smaller = capture.scale(300, 300)   # resize the image
    red = smaller.colorDistance(Color.RED)
    redbin = red.binarize(105)          # when I binarize with this threshold I get almost ideal detection
    matrika = redbin.getNumpy()         # trying to get 0 and 255 values
    print matrika                       # want to see 0 and 255 values
    rdecabin.show()
    time.sleep(3)                       # repeating every 3 seconds
So basically, since this binarize threshold works great in my conditions, my plan was to:
1. Get the matrix with 0 and 255 values.
2. Count the number of 255 values and figure out a "secondary threshold".
3. Write further code that writes a timestamp to the .csv file each time the secondary threshold is met.
Problem is, when I print "matrika" I get only 0 values, even though I can clearly see from "rdecabin.show()" that some of the values should be white.
I started with SimpleCV and Python yesterday, so sorry for the clumsy code.
Where is rdecabin defined? Have you looked at a histogram or sum of your matrika values?
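As a sanity check for step 2 of the plan, assuming matrika really is a NumPy array of 0 and 255 values, the white pixels can be counted directly. The array below is synthetic, standing in for redbin.getNumpy() (which may in fact return a 3-channel array; in that case count per channel or convert to grayscale first):

```python
import numpy as np

# Synthetic stand-in for redbin.getNumpy(): a 300x300 binary frame.
matrika = np.zeros((300, 300), dtype=np.uint8)
matrika[100:150, 100:150] = 255            # a 50x50 "red-detected" patch

white = np.count_nonzero(matrika == 255)   # number of white pixels
ratio = white / float(matrika.size)        # fraction of the frame that is "red"
print(white, ratio)                        # if this prints 0, the array is all zeros
```

Comparing `ratio` against a fixed fraction would give the "secondary threshold" from the plan.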
Source: http://help.simplecv.org/question/2404/binary-image-matrix-returns-0-values/ (dump CC-MAIN-2019-35, refinedweb; 222 words, Flesch 67.76)
Search: Search took 0.02 seconds.
- 16 Jun 2009 9:00 AM
My app also uses a lot of DataLists. I didn't switch them to ListViews yet, but the context menu is much needed. Are there other changes between the DataList and ListView (besides the store)?
...
- 16 Jun 2009 12:02 AM
- Replies
- 7
- Views
- 2,428
You might be able to force a refresh with the 'layout()' command?
- 15 Jun 2009 8:03 AM
Jump to post Thread: [2.0 M3] TabPane / TabItem bug by Rvanlaak
- Replies
- 1
- Views
- 1,240
Today I've migrated to the new M3 release, where I've found a bug in the TabPane.
I'm extending the TabItems, after which I loop over them using the following code:
private void saveTabs()
...
- 15 Jun 2009 3:57 AM
Oh, and what about the context menus? The MenuItem constructors did contain a parameter for the style, but it has been removed...
Is there more documentation available about the Btn/MenuItem css...
- 15 Jun 2009 3:49 AM
Wow!
The M3 release seems to give a couple of deprecation warnings.
The DataList is deprecated? What are the differences between a ListView and a DataList?
I can't use the...
- 12 Jun 2009 6:23 AM
- Replies
- 4
- Views
- 2,367
Hmm, I've solved it already.
It seems that when DataLists are empty, the height is 0px when it isn't using any layout. For now, I've added a BorderLayout to the container which contains the list,...
- 12 Jun 2009 6:17 AM
- Replies
- 4
- Views
- 2,367
The drag & drop is successfully implemented on DataLists. I've managed to DnD from lists to buttons, but now I'm trying to DnD between two lists of the same class with the following code:
...
- 11 Jun 2009 12:36 PM
Sven,
I know the meaning of prototyping, milestones etcetera. I'm only curious about the development stage of the library.
My system is not going to be used in a production environment, but It...
- 11 Jun 2009 6:55 AM
I discovered the Examples Development explorer is already using the M3 release of ExtGWT. So I started wondering how the third release is progressing....
- 3 Jun 2009 1:48 AM
- Replies
- 4
- Views
- 2,375
Make links like below:
<a href="some-description" onclick="javascriptFunction(); return false;">text</a>
- 3 Jun 2009 1:08 AM
- Replies
- 1
- Views
- 1,338
I would like to implement multi-key keyboard shortcuts in my application. After searching on the forum, I haven't found anything useful.
The listeners need to be implemented on a viewport or on...
- 3 Jun 2009 1:04 AM
Jump to post Thread: keyboard shortcuts in 1.0 ... by Rvanlaak
- Replies
- 4
- Views
- 2,702
Is there any info available on how to implement keyboard shortcuts? I need multi-key shortcuts like CTRL+S.
- 28 May 2009 6:13 AM
- Replies
- 1
- Views
- 1,624
I am also searching for a way to make a dynamic SubMenu. The SubMenu items have to be refreshed at the moment the menu is collapsed.
My scenario is:
1: The user is expanding the Menu of a...
- 28 May 2009 4:37 AM
I've found the solution. It is the list which is draggable, not the items in it!
DragSource source = new DragSource( this ) {
@Override
...
- 28 May 2009 4:34 AM
Has anybody got the dragging working within lists?
- 28 May 2009 4:20 AM
- Replies
- 0
- Views
- 1,267
Hi,
Drag & drop plays an important role in the application I am building. A button is the DragSource, and I want to set the header of a TabItem as DragTarget. I am doing this using the following...
- 27 May 2009 2:14 AM
I can't figure out how to make the items in a DataList draggable. Is this fixed in the M2 build, or am I doing something wrong?
public class QueueItem extends DataListItem {
............
- 26 May 2009 11:36 PM
Great, that worked! It actually is a weird issue in my opinion.. :-?
- 26 May 2009 4:57 AM
It indeed is the command you mentioned. For a Dialog it works this way:
getButtonBar().setEnableOverflow(false);
But then it behaves even weirder: thumb1 is the initial state, thumb2 after...
- 26 May 2009 4:41 AM
I've got another migration problem. GXT 2.0 M2 has got some new components, for example the 'Overflow toolbars'.
I've got a login box...
- 26 May 2009 1:53 AM
- Replies
- 7
- Views
- 3,654
My apologies, I was wrong.. (A) I was working on an older version of my project..
The 1st bug also has been solved...
- 26 May 2009 1:47 AM
- Replies
- 7
- Views
- 3,654
Bug 2 and 3 are still there in the M2 build
- 26 May 2009 1:45 AM
The bug is solved in the M2 build. Now a selection will be made. Is it possible to override this select action? The entire viewport is filled with blue because of the selection.
- 26 May 2009 1:34 AM
- Replies
- 0
- Views
- 1,495
I am using the example code from the DualListField from the explorer. The two stores are filled from an RPC call, but somehow it doesn't show all the results from the query. When I add some hard coded...
- 25 May 2009 7:08 AM
- Replies
- 61
- Views
- 31,084
I also still wonder how to implement the language file to get a multilanguage app.
Results 1 to 25 of 69
Source: https://www.sencha.com/forum/search.php?s=5fb7cf8b7fd7148b17b4fadfa460cbd9&searchid=14265577 (dump CC-MAIN-2016-07, refinedweb; 919 words, Flesch 82.75)
Search the Community
Showing results for tags 'Behavior'.
Found 28 results
City building game inner workings
GytisDev posted a topic in For Beginners: Hello.
Attack of the Demons VS The Lord of the Man of the South - A wild NEW game!
Descent posted a topic in Indie Showcase: Wow!
2017-12-18.DRTS.play-with-bot.png
Viir posted a gallery image in Projects
New release of DRTS web port - Play with Bots
Viir posted a blog entry in DRTS Game: The following shows a game with the new bot and map.
How Artificial Intelligence has Shaped the History of Gaming
GameDev.net posted an article in Artificial Intelligence: This is an extract from Practical Game AI Programming from Packt. Click here to download the book for free! When humans play games – like chess, for example – they play differently every time. For a game developer this would be impossible to replicate. So, if writing an almost infinite number of possibilities isn't a viable solution, game developers have needed to think differently. That's where AI comes in. But while AI might seem like a very new phenomenon in the wider public consciousness, it's actually been part of the games industry for decades.
Enemy AI in the 1970s
Single-player games with AI enemies started to appear as early as the 1970s. Very quickly, many games were redefining the standards of what constitutes game AI. Some of those examples were released for arcade machines, such as Speed Race from Taito (a racing video game), and Qwak (a duck hunting game using a light gun) and Pursuit (an aircraft fighter), both from Atari. Other notable examples are the text-based games released for the first personal computers, such as Hunt the Wumpus and Star Trek, which also had AI enemies. What made those games so enjoyable was precisely that the AI enemies didn't react like any others before them. This was because they had random elements mixed with the traditional stored patterns, creating games that felt unpredictable to play. However, that was only possible due to the incorporation of microprocessors that expanded the capabilities of a programmer at that time. Space Invaders brought the movement patterns, and Galaxian improved on them and added more variety, making the AI even more complex. Pac-Man later on brought movement patterns to the maze genre – the AI design in Pac-Man was arguably as influential as the game itself. After that, Karate Champ introduced the first AI fighting character and Dragon Quest introduced the tactical system for the RPG genre.
Over the years, the list of games that have used artificial intelligence to create unique game concepts has expanded. All of that has essentially come from a single question: how can we make a computer capable of beating a human in a game? All of the games mentioned used the same method for the AI, called a finite-state machine (FSM). Here, the programmer inputs all the behaviors that are necessary for the computer to challenge the player. The programmer defines exactly how the computer should behave on different occasions in order to move, avoid, attack, or perform any other behavior to challenge the player, and that method is used even in the latest big-budget games.
From simple to smart and human-like AI
One of the greatest challenges when it comes to building intelligence into games is adapting the AI movement and behavior in relation to what the player is currently doing, or will do. This can become very complex if the programmer wants to extend the possibilities of the AI's decisions. It's a huge task for the programmer because it's necessary to determine what the player can do and how the AI will react to each action of the player. That takes a lot of CPU power. To overcome that problem, programmers began to mix possibility maps with probabilities and perform other techniques that let the AI decide for itself how it should react according to the player's actions. These factors are important to consider while developing an AI that elevates a game's quality. Games continued to evolve and players became even more demanding. To deliver games that met player expectations, programmers had to write more states for each character, creating new and more engaging in-game enemies.
Metal Gear Solid and the evolution of game AI
You can start to see now how technological developments are closely connected to the development of new game genres. A great example is Metal Gear Solid; by implementing stealth elements, it moved beyond the traditional shooting genre.
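The finite-state machine approach described above can be sketched in a few lines. The states, distances, and thresholds here are invented for illustration, not taken from any of the games mentioned:

```python
# Minimal FSM sketch for an enemy: the programmer authors every
# transition rule by hand, exactly as the article describes.
# States, distances, and thresholds are invented for illustration.
def next_state(state, distance_to_player, health):
    if health < 20:
        return "flee"                       # low health overrides everything
    if state == "patrol" and distance_to_player < 10:
        return "chase"                      # player spotted
    if state == "chase" and distance_to_player < 2:
        return "attack"                     # close enough to strike
    if state in ("chase", "attack") and distance_to_player >= 10:
        return "patrol"                     # lost the player
    return state                            # no rule fired: stay in state

state = "patrol"
state = next_state(state, 8, 100)   # player spotted
state = next_state(state, 1, 100)   # in melee range
state = next_state(state, 1, 10)    # badly hurt
print(state)
```

Every behavior the enemy can exhibit is enumerated up front; nothing is learned at runtime, which is both the strength (predictable, cheap) and the limit of the technique.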
Of course, those elements couldn't be fully explored as Hideo Kojima probably intended because of the hardware limitations at the time. However, jumping forward from the third to the fifth generation of consoles, Konami and Hideo Kojima presented the same title, only with much greater complexity. Once the necessary computing power was there, the stage was set for Metal Gear Solid to redefine modern gaming. Visual and audio awareness One of the most important but often underrated elements in the development of Metal Gear Solid was the use of visual and audio awareness for the enemy AI. It was ultimately this feature that established the genre we know today as a stealth game. Yes, the game uses Path Finding and a FSM, features already established in the industry, but to create something new the developers took advantage of some of the most cutting-edge technological innovations. Of course the influence of these features today expands into a range of genres from sports to racing. After that huge step for game design, developers still faced other problems. Or, more specifically, these new possibilities brought even more problems. The AI still didn't react as a real person, and many other elements were required, to make the game feel more realistic. Sports games This is particularly true when we talk about sports games. After all, interaction with the player is not the only thing that we need to care about; most sports involve multiple players, all of whom need to be ‘realistic’ for a sports game to work well. With this problem in mind, developers started to improve the individual behaviors of each character, not only for the AI that was playing against the player but also for the AI that was playing alongside them. Once again, Finite State Machines made up a crucial part of Artificial Intelligence, but the decisive element that helped to cultivate greater realism in the sports genre was anticipation and awareness. 
The computer needed to calculate, for example, what the player was doing, where the ball was going, all while making the ‘team’ work together with some semblance of tactical alignment. By combining the new features used in the stealth games with a vast number of characters on the same screen, it was possible to develop a significant level of realism in sports games. This is a good example of how the same technologies allow for development across very different types of games. How AI enables a more immersive gaming experience A final useful example of how game realism depends on great AI is F.E.A.R., developed by Monolith Productions. What made this game so special in terms of Artificial Intelligence was the dialog between enemy characters. While this wasn’t strictly a technological improvement, it was something that helped to showcase all of the development work that was built into the characters' AI. This is crucial because if the AI doesn't say it, it didn't happen. Ultimately, this is about taking a further step towards enhanced realism. In the case of F.E.A.R., the dialog transforms how you would see in-game characters. When the AI detects the player for the first time, it shouts that it found the player; when the AI loses sight of the player, it expresses just that. When a group of (AI generated) characters are trying to ambush the player, they talk about it. The game, then, almost seems to be plotting against the person playing it. This is essential because it brings a whole new dimension to gaming. Ultimately, it opens up possibilities for much richer storytelling and complex gameplay, which all of us – as gamers – have come to expect today.
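The "if the AI doesn't say it, it didn't happen" idea can be sketched as a table of barks keyed by state transitions; the names, states, and lines below are invented for illustration, not F.E.A.R.'s actual system:

```python
# Invented sketch of F.E.A.R.-style "barks": each AI state transition
# emits a line of dialog so the player can read the AI's intent.
BARKS = {
    ("idle", "alert"):   "I see him! Over there!",
    ("alert", "search"): "I lost him. Spread out.",
    ("alert", "flank"):  "Flanking left, cover me!",
}

def transition(state, new_state, say=print):
    bark = BARKS.get((state, new_state))
    if bark:
        say(bark)        # the dialog is what makes the behavior legible
    return new_state

state = "idle"
state = transition(state, "alert")    # detection is announced
state = transition(state, "search")   # losing the player is announced
```

The AI logic itself is unchanged; the table only narrates transitions that already happen, which is why the technique is cheap yet transforms how the characters are perceived.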
An easy explanation to Dual Contouring?
QuesterDesura posted a topic in Math and Physics: Hello. ^^
Flocking Algorithm with Predators
Ryback posted a gallery image in Image of the Day
Predators: A Flocking Story
Ryback posted a topic in Indie Showcase: Hi everyone, a while back I read a few articles on flocking algorithms and it piqued my interest, so I decided to write my own demo using OpenSceneGraph. A link to the result is below; I hope you like it!
Newbie - How to start?
Tootooni posted a topic in For Beginners: Good day, dear people. I'm completely new here and, to be honest, very nervous. I started with 3D design at my traineeship and can slowly start on my ideas for a survival/RPG game in 3D. But since I have only used RPG Makers until now, I wonder what the best way is to start on a game, and what I would need for that. Have a wonderful day! ^.^)/) Tootooni
A bit of help with a behaviortree tree.
menyo posted a topic in Artificial Intelligence: I'm currently testing out behavior trees with the LibGDX AI library. I have read a lot about behavior trees over the past couple of days, but it's hard to work out how I need to build the tree for my specific scenario. I'm looking to use a behavior tree for a Rimworld-like game where the player's units are not directly controllable. In the first place I'm wondering if I should have a single big tree for the complete AI or many smaller trees, for example a separate tree for: moving an item, building a building, crafting an item, resting when sleepy, eating when hungry. In the examples I have seen, they all talk about a "single job", like entering a building: GoTo -> Open Door -> GoTo -> Close Door. But what if I need to check whether I have the keys on me? And I need to check a lot of these variables. When a unit is idle, I'd like him to maintain his primary needs if he has access to them. If his needs are satisfied enough, he can take on certain jobs like building walls or crafting items. I have a lot of different jobs, but jobs like building or crafting items are relatively the same with a different outcome, so I could probably make an abstract job for that. It helps, but I will still end up with a really huge tree. Another issue I'm facing is that when tasks are running and something more important pops up (an enemy spotted or some kind of emergency task), the unit should stop its current task and act according to the interruption. So since the task is running, I need to do those checks on each runnable task, then return failed/cancelled, and further down the sequence I need to do another check for these interruptions and handle them accordingly. I have briefly read into dynamic branches; I'm not sure if GDX AI supports this, but adding a behavior to the tree to handle an interruption seems a good idea.
These dynamic branches also open up the opportunity to hold behaviors at the jobs, and once a unit accepts a job it inserts that branch into its own tree. I hope I'm clear; it's hard to explain and get a global view of a complex behavior tree. I have read several times that behavior trees are very easy to understand and implement. Well, that might be the case for those small trees I find everywhere. On the other hand, I might be overcomplicating things.
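One common way to handle the interruption problem described above is to put the emergency check in a higher-priority branch of a selector that is re-evaluated every tick, so a running job is naturally preempted. This is a generic hand-rolled sketch in plain Python, not gdx-ai's actual API:

```python
# Generic behavior-tree sketch (not gdx-ai's API): a selector ticks its
# children in priority order every frame, so an emergency branch placed
# first preempts a long-running job without the job checking anything.
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

def selector(*children):
    def tick(world):
        for child in children:
            status = child(world)
            if status != FAILURE:
                return status        # first non-failing child wins
        return FAILURE
    return tick

def handle_enemy(world):
    # high-priority branch: only succeeds when there is an emergency
    return SUCCESS if world.get("enemy_spotted") else FAILURE

def do_job(world):
    # stand-in for a long-running job such as building a wall
    return RUNNING

unit_ai = selector(handle_enemy, do_job)

print(unit_ai({}))                        # no emergency: the job keeps running
print(unit_ai({"enemy_spotted": True}))   # emergency branch preempts the job
```

Because the selector re-runs from the top each tick, the individual job tasks never need their own interruption checks, which is the main point of structuring the tree this way.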
Behavior Tool pre-release: Curvature Utility AI Suite
ApochPiQ posted a topic in Artificial Intelligence: I've just posted a pre-release edition of Curvature, my utility-theory AI design tool. Curvature provides a complete end-to-end solution for designing utility-based AI agents; you can specify the knowledge representation for the world, filter the knowledge into "considerations" which affect how an agent will make decisions and choose behaviors, and then plop a few agents into a dummy world and watch them run around and interact with things. Preview 1 (Core) contains the base functionality of the tool but leaves a lot of areas unpolished. My goal with this preview is to get feedback on how well the tool works as a general concept, and to start refining the actual UI into something more attractive and fluid. The preview kit contains a data file with a very rudimentary "scenario" set up for you, so you can see how things work without cutting through a bunch of clutter. Give it a test drive, let me know here or via the Issue Tracker on GitHub what you like and don't like, and have fun!
BMFont: Different results with forcing zero offsets
Marc Klein posted a topic in AngelCode: Hello, I'm using BMFont to create a bitmap font for my text rendering. I noticed some strange behaviour when I use the option "force Offsets to Zero". If I use this option, my rendering result looks ok, but without it there is some missing space between some characters. I attached the BMFont configuration files and the font that I used. In the rendering result with variable offsets you can see that there is missing space right next to the "r" letter. To get the source and destination of my render rectangles I basically do the following:
void getBakedQuad(const fontchar_t* f, int* x_cursor, int* y_cursor, SDL_Rect* src, SDL_Rect* dest)
{
    dest->x = *x_cursor + f->xoffset;
    dest->y = *y_cursor + f->yoffset;
    dest->w = f->width;
    dest->h = f->height;
    src->x = f->x;
    src->y = f->y;
    src->w = f->width;
    src->h = f->height;
    *x_cursor += f->xadvance;
}
Has somebody noticed a similar behaviour? orbitron-bold.otf SIL Open Font License.txt variable_offset.bmfc variable_offset.fnt zero_offset.bmfc zero_offset.fnt
Pathfinding Is A* good when entities should collide with each other?
Midnightas posted a topic in Artificial Intelligence: How would that work?
Behavior Pure Decision AI...best method to use
codeliftsleep posted a topic in Artificial Intelligence: I'm building an American football simulation (think Football Manager), and am wondering about the best way of implementing AI based on various inputs which are weighted based on the personality of the NPC... I have a version of Raymond Cattell's 16PF model built into the game to be able to use their various personality traits to help guide decisions. I am going to use this extensively, so I need this to be both flexible and able to handle many different scenarios. For instance, a GM has to be able to decide whether he wants to re-sign a veteran player for big dollars or try and replace him through the draft. They need to have a coherent system for not only making a decision in a vacuum as a single decision, but also making a decision as part of a "plan" for how to build the team... For instance, it makes no sense for a GM to make decisions that don't align with each other in terms of the big picture. I want the decisions to take in a wide range of variables/personality traits to come up with a decision. There is no NPC per se... There isn't going to be any animation connected to this, no shooting, following, etc... just decisions which trigger actions. In a situation like a draft, there is one team "on the clock" and 31 other teams behind the scenes trying to decide whether they want to trade up, trade down, etc., which can change based on things like who just got picked, the drop-off between the highest-graded player at their position/group and the next highest-graded player in that position/next group, whether a player lasts past a certain point, etc... All of these things need to be going on simultaneously for all the teams; obviously the team on the clock is going to have to decide whether it wants to make a pick or take any of the offers to move down in the draft from other teams that might want to move up, etc.
So I am planning on making use of something called Behavior Bricks by Padaone Games (bb.padaonegames.com), which is a behavior tree, but in conversations with others who have worked on AI in major projects like this (EA Sports) they said to combine this with a state machine. My question is: would I be able to do this using just Behavior Bricks, or would I need to build the state machine separately? Is there something else already created for this type of purpose that I could take advantage of?
Behavior RPG AI and Fairness - player's character stats public to AI
Hebi posted a topic in Artificial Intelligence: What worries me is fairness; I don't want an AI with "god eyes". In a first attempt, the AI wasn't smart at all. It had a set of rules, which could be customized per possible enemy formation, that made the AI act as if it had personality. For example: wolves had an 80% probability of using Bite and 20% of using Tackle. Then I tried to make the AI do things with sense, by coding a reusable algorithm; then I can use rules to give different enemy formations different personalities, like this one is more raw-damage focused and this other more status-ailment focused. To achieve this, I was trying to make the AI player collect stats about the human player's skills and characters by looking at the log; my game has a console that logs everything, like most RPGs (think Baldur's Gate). An attack looks like this in the console:
Wolf uses Bite
Mina takes 34 points of damage
Mina uses Fire Ball
Wolf takes 200 points of damage
The AI player has a function, run(), that is triggered each time a character it controls is ready to act. It then analyzes the log and collects stats into two tables. One holds stats per skill, like how many times it was used and how many times it hit or was dodged, so the AI player knows which skills are more effective against the human player's team. These skill stats are per target. The second table is for the human player's character stats. The AI player is constantly trying to guess the armor, attack power, etc., the enemies have. But coding that is quite difficult, so I switched to a simpler method. I gave the AI player "god eyes". It now has full access to the human player's stats; the two tables aren't required anymore, as the real character structure is accessible to the AI player.
But the AI player pretends that it doesn't have "god eyes" by, for example, acting like it ignores a character's armor against fire until a fire attack finally hits that character; then a boolean flag is set, and the AI player can stop pretending it doesn't know the target's exact armor against fire. Currently, the AI player can assign one of two roles to the characters it controls: Attacker and Explorer. More will be developed, like Healer, Tank, etc. For now there are Attackers and Explorers. These roles have nothing to do with character classes; they are only for the use of the AI player. Explorer: will try to hit different targets with different skills to "reveal" their stats, like dodge rate and armor against different types of damage. They must choose which skills to use by considering all skills of all characters in the team, but I'm still figuring out the algorithm, so right now they are random, but at least they take care not to try things that they have already tried on targets that they have already hit. Explorers will switch to Attackers at some point during a battle. Attacker: will try to hit the targets that can be eliminated in the fewest turns. When no stats are known, they will choose their own skills based on raw power; once the different types of armor are known to the AI, they will switch to skills that exploit this info, if such skills are available to them. Attackers will try different skills if a skill chosen based on raw power was reduced by half or more and all enemy armors aren't known yet, but won't hunt for the best possible skill if one that isn't reduced by as much as half its power is already known. Attackers may be lucky and select the best possible skill on the first attempt, but I expect a formation to work better if there are some Explorers. This system opens up interesting gameplay possibilities.
If you let a wolf escape, the next pack may come already knowing your stats, which implies that their Attackers will make better decisions sooner, as the AI now requires less exploration. So the description of an enemy formation that you can encounter during the game may be:
PackOfWolves1: {
    formation: ["wolf", null, "wolf", null, "wolf", null, "wolf", null, "wolf", null, null, null, null, null, null],
    openStatrategy: ["Explorer", null, "Explorer", null, "Explorer", null, "Attacker", null, "Attacker", null, null, null, null, null, null]
}
What do you think about these ideas? Was something similar tried before? Would you consider an AI with "god eyes" unfair?
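The "pretend not to have god eyes" flag described above could be sketched like this; all names and stat values are invented for illustration, not the poster's actual code:

```python
# Sketch of the "god eyes behind a flag" idea: the AI has full access
# to the real stats, but only uses a stat once a hit has revealed it.
class KnownStats(object):
    def __init__(self, real_stats, default=None):
        self.real = real_stats      # full "god eyes" data
        self.revealed = set()       # stats the AI may legitimately use
        self.default = default      # what the AI pretends to know

    def on_hit(self, stat):
        self.revealed.add(stat)     # e.g. a fire attack just landed

    def get(self, stat):
        if stat in self.revealed:
            return self.real[stat]
        return self.default         # still pretending ignorance

mina = KnownStats({"fire_armor": 30, "dodge": 0.2})
print(mina.get("fire_armor"))   # unknown until a fire attack hits
mina.on_hit("fire_armor")
print(mina.get("fire_armor"))
```

Persisting the `revealed` set between battles would give exactly the escaped-wolf behavior: the next pack starts with some flags already set and needs less exploration.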
Behavior Potential uses for Psychology based game AI
Verj_Ohnfran posted a topic in Artificial IntelligenceHello, I have designed an AI system for games that replicates cognitive psychology models and theories, so that NPCs and other virtual characters can behave in more human-like and interesting ways. I have built a prototype in the Unity game engine, and it can produce quite complex behaviour, including learning and creativity. I am now wanting to develop it further and am looking for people or organisations to help. I am thinking about how I could present my AI system, and what would be a good way of demonstrating it. If you have any suggestions it would be great to hear them. I have a website that explains my AI system in detail: If you have any comments about the AI system, or know anyone who might be interested in helping to develop it, I would really appreciate hearing from you. Thanks for the help.
What makes a combat system for fighting games?
LukasIrzl posted a topic in Game Design and Theory: Hi there, it's been a while since my last post. I was creating a bunch of games, but there was always something missing; something which makes the game (maybe) unique... After a few tries I decided to start a side project for a combat system to be used in fighting games. I did a lot of research and programming to finally get something that is actually fun to play. Well... it is only a prototype and I do not want to share it (yet). Now I have decided to share my ideas on the basics of a combat system for fighting games. Don't get me wrong... this is only my way of doing things, and I want as much feedback as possible; maybe it will help people with their games. I will provide a few code snippets. They will be some sort of OOP pseudo code and may have typos.
Content
1. Introduction
2. Ways of dealing damage
1. Introduction
What makes a combat system a combat system? I guess it is easy to explain: you need ways of dealing damage and ways of avoiding damage. At the very least, you need something for the player to know how to beat the opponent or the game. As I mentioned before, I will focus on fighting games. As it has ever been, there is some sort of health and different ways to reduce health. Most of the time you actually have possibilities to avoid taking damage. I will focus on these points later on.
2. Ways of dealing damage
How do we deal damage, by the way? A common way to do so is by pressing one or more buttons at a time in order to perform an attack. An attack is an animation with a few phases. In my opinion, an attack consists of at least four phases:
1. Perception
2. Action
3. Sustain
4. Release
Here is an example animation I made showing all phases with four frames: Every one of those has its own reason. One tip for the designers out there is to have at least one image per phase. Now we should take a closer look at the phases themselves.
2.1. Perception
The perception phase should include everything up to the point the damage is done. Let's say it is some sort of preparation for the actual attack. Example: before you punch something, you get into position before doing the actual action, right? Important note: the longer the perception phase is, the more time the opponent has to prepare a counter or think about ways to avoid the attack. Think of light and heavy attacks: heavy attacks mostly have longer perception phases than light ones. This means that the damage dealt is likely greater compared to light attacks. You would like to avoid getting hit by the heavy ones, right?
2.2. Action
The action phase is the actual phase where damage is dealt. Depending on the attack type itself, the phase will last longer or shorter. Using my previous example, heavy attacks might have a longer action phase than light attacks. In my opinion, the action phase should be as short as possible. One great way to get the most out of the attack animation itself is by using smears. They are often used for showing motion. There's a ton of reference material for that. I like using decent smears with a small tip at the starting point and a wide end point (where the damage should be dealt). This depends on the artist and the attack.
2.3. Sustain
At first sight, the sustain phase may seem irrelevant. It comes directly after the attack. My way of showing the sustain phase is by using the same image as the action phase, just without any motion going on. The sustain phase should be some sort of stun time. The images during the sustain phase should show no movement – kind of a rigid state. Why is this phase so important? It adds a nice feel to the attack animation. Additionally, if you want to include combos in your game, this is the phase where the next attack should be chained. This means that while the character is in this phase of the attack, the player can press another attack button to do the next attack.
The next attack will start at the perception phase.

2.4. Release

The release phase is the last phase of the attack. This phase is used to reset the animation to the usual stance (like the idle stance).

2.5. Dealing damage

Dealing damage should only be possible during the action phase. How do we know if we land a hit? I like using hit-boxes and damage-boxes.

2.5.1. Hit-boxes

A hit-box is an invisible box the character has. It marks its vulnerable spot. By saying "hit-box" we do not mean a box itself. It could be any shape (even multiple boxes together - like head, torso, arms, ...). You should always know the coordinates of your hit-box(es). Here is an example of a hit-box for my character:

I am using Game Maker Studio, which automatically creates a collision box for every sprite. If you change the sprite from Idle to Move, you may get a different hit-box. Depending on how you deal with the collisions, you may want to have a static hit-box. Hit-boxes could look something like this:

class HitBox {
    /*
    offsetX = the left position of your hit-box relative to the player's x coordinate
    offsetY = the top position of your hit-box relative to the player's y coordinate
    width = the width of the hit-box
    height = the height of the hit-box
    */
    int offsetX, offsetY, width, height;

    /*
    Having the player's coordinates is important.
    You will have to update the player coordinates every frame.
    */
    int playerX, playerY;

    // initialize the hit-box
    HitBox(offsetX, offsetY, width, height) {
        this.offsetX = offsetX;
        this.offsetY = offsetY;
        this.width = width;
        this.height = height;
    }

    // Update (will be called every frame)
    void update(playerX, playerY) {
        // you can also update the player coordinates by using setter methods
        this.playerX = playerX;
        this.playerY = playerY;
    }

    // Getter and Setter
    ...
    // Helper methods
    int getLeft() { return playerX + offsetX; }
    int getRight() { return playerX + offsetX + width; }
    int getTop() { return playerY + offsetY; }
    int getBottom() { return playerY + offsetY + height; }
}

When using multiple hit-boxes it would be a nice idea to have a list (or array) of boxes. Now one great thing to implement is a collision function like this:

// check if a point is within the hit-box
boolean isColliding(x, y) {
    return x > getLeft() && x < getRight() && y > getTop() && y < getBottom();
}

// check if a box overlaps the hit-box
// (note: all four conditions must hold, so they are combined with &&)
boolean isColliding(left, right, top, bottom) {
    return right > getLeft() && left < getRight() && bottom > getTop() && top < getBottom();
}

2.5.2. Damage-boxes

Damage-boxes are, like hit-boxes, not necessarily a box. They could be any shape, even a single point. I use damage-boxes to know where damage is done. Here is an example of a damage-box:

The damage-box looks exactly like the hit-box, but it differs a bit from the actual hit-box. A damage-box can have absolute x and y coordinates, because there is (most of the time) no need to update the position of the damage-box. If there is a need to update the damage-box, you can do it through the setter methods.

class DamageBox {
    /*
    x = absolute x coordinate (if you do not want to update the coordinates of the damage-box)
    y = absolute y coordinate (if you do not want to update the coordinates of the damage-box)
    width = the width of the damage-box
    height = the height of the damage-box
    */
    int x, y, width, height;

    /*
    The damage the box will do after colliding
    */
    int damage;

    // initialize the damage-box
    DamageBox(x, y, width, height, damage) {
        this.x = x;
        this.y = y;
        this.width = width;
        this.height = height;
        this.damage = damage;
    }

    // Getter and Setter
    ...

    // Helper methods
    int getLeft() { return x; }
    int getRight() { return x + width; }
    int getTop() { return y; }
    int getBottom() { return y + height; }
}

2.5.3.
Check for collision

If a damage-box and a hit-box collide, we know the enemy receives damage. Here is one example of a hit:

Now we want to check if the damage-box collides with a hit-box. Within the damage-box we can insert an update() method to check for the collision every frame.

void update() {
    // get all actors you want to damage
    // (use a variable or have a global method - it is up to you how to get the actors)
    actors = ...;

    // iterate through all actors
    foreach (actor in actors) {
        // let's assume they only have one hit-box
        hitBox = actor.getHitBox();

        // check for collision
        if (hitBox.isColliding(getLeft(), getRight(), getTop(), getBottom())) {
            // do damage to actor
            actor.life -= damage;
        }
    }
}

To get all actors, you could keep a variable which holds every actor, or you can use a method you can call everywhere which returns all actors. (This depends on how your game is set up and on the engine / language you use.) The damage-box will be created as soon as the action phase starts. Of course you will have to destroy the damage-box after the action phase, so you do not endlessly deal damage.

2.6. Impacts

Now that we know when to deal the damage, we should take a few considerations about how to show it. There are a few basic elements for us to use to make the impact feel like an impact.

2.6.1. Shake the screen

I guess I am one of the biggest fans of shaking the screen. Every time there is some sort of impact (jumping, getting hit, missiles hitting the ground, ...) I shake the screen a little bit. In my opinion, this makes a difference to the gameplay. As usual, this may vary depending on the type of attack or even the type of game.

2.6.2. Stop the game

This may sound weird, but one great method for impacts is to stop the game for a few frames. The player doesn't consciously notice it because of the short time, but it makes a difference. Just give it a try.

2.6.3. Stun animation

Of course, if we got hit by a fist, we would not stand in our idle state, right?
Stun animations are a great way to show the player that we landed a hit. There is only one problem. Let's say the player is a small and fast guy. Our enemy is some sort of big and heavy guy. Will the first punch itch our enemy? I guess not. But maybe the 10th one will.

I like to use a damage build-up system. It describes how much damage a character can take before getting stunned. The damage builds up every time the character gets hit. Over time, the built-up damage reduces, which means that after a long time without getting hit, the build-up shall be 0 again.

2.6.4. Effects

Most games use impact animations to show the player that he actually hit the enemy. This could be blood, sparkles, whatever may look good. Most engines offer particle systems, which make the implementation very easy. You could use sprites as well.

2.7. Conclusion

By using the four phases, you can create animations ideal for a fighting game. You can prepare to avoid getting hit, you do damage, you can chain attacks and you have a smooth transition to the usual stance. Keep in mind that the character can get hit during phases 1, 3 and 4. This may lead to cancelling the attack and going into a stun phase (which I will cover later). A simple way to check for damage is by using hit-boxes and damage-boxes.

3. Ways of avoiding damage

Now we are able to deal damage. There is still something missing. Something that makes the game more interesting... Somehow we want to avoid taking damage, right? There are endless ways of avoiding damage and I will now cover the most important ones.

3.1. Blocking

Blocking is one of the most used ways to avoid damage (at least partially). As the enemy starts to attack (perception phase) we know which attack he is going to use. Now we should use some sort of block to reduce the damage taken. Blocking depends on the direction the player is looking. Take a look at this example:

If the enemy does an attack from the right side, we should not get damage.
On the other side, if the enemy hits the character in the back, we should get damage. A simple way to check for this is by comparing the x coordinates.

Now you should think about how long the character is able to block. Should he be able to block infinitely? You can add some sort of block damage build-up - the amount of damage within a specific time the character can block (like the damage build-up). If the damage gets too high, the character goes into a stunning phase or something like that.

3.2. Dodging

Every Dark Souls player should be familiar with the term dodging. Now what is dodging? Dodging is some sort of mechanism to quickly get away from the current location in order to avoid a collision with the damage-box (like rolling, teleportation, ...). Sometimes the character is also invulnerable while dodging. I prefer making the character shortly invulnerable, especially when creating a 2D game, because of the limited movement directions.

3.3. Shields

Shields may be another good way to avoid taking damage. Just to make it clear: I do not mean a physical shield like Link has in The Legend of Zelda (that would be some sort of blocking). I mean the sort of shield you have in shooters. Some may refill within a specific time, others may not. They could be always there, or the player has to press a button to use them. This depends on your preferences. While a shield is active, the character should not take any damage.

Keep in mind: you do not want to make the character unbeatable. By using shields which are always active (maybe even with fast regeneration) together with a high maximum damage build-up / block damage build-up, you may end up with an almost invulnerable character.

3.4. Jump / duck

These alternatives are - in my opinion - a form of dodging. The difference between dodging and jumping / ducking is that you do not move your position quickly. In case of ducking, you just set another hit-box (a smaller one, of course).
While during a jump, you are moving slowly (this depends on your game). The biggest difference, in my opinion, is that jumping or ducking should have no invulnerable frames.

I hope you enjoyed reading and maybe it is useful to you. Later on, I want to update the post more and more (maybe with your help). If you have any questions or feedback for me, feel free to answer this topic.

Until next time,
Lukas
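The phase timing from section 2 can be made concrete with a small runnable sketch (in Python rather than the article's pseudocode; the phase lengths and names are illustrative, not taken from any engine): the attack advances one frame at a time, and damage is only allowed while the action phase is live.

```python
# Illustrative sketch of the four attack phases from section 2.
# Phase lengths (in frames) are made-up values for demonstration.
PHASES = [("perception", 6), ("action", 3), ("sustain", 5), ("release", 4)]

class Attack:
    def __init__(self, damage):
        self.damage = damage
        self.frame = 0

    def phase(self):
        # Map the current frame number to a phase name.
        f = self.frame
        for name, length in PHASES:
            if f < length:
                return name
            f -= length
        return "done"

    def deals_damage(self):
        # A damage-box should only exist during the action phase.
        return self.phase() == "action"

    def step(self):
        self.frame += 1

attack = Attack(damage=10)
windows = []
while attack.phase() != "done":
    if attack.deals_damage():
        windows.append(attack.frame)
    attack.step()

print(windows)  # -> [6, 7, 8], the action window
```

In a real game loop, the damage-box would be created on entering the action phase and destroyed on leaving it, exactly as section 2.5.3 describes, and a combo input during the sustain frames would restart the sequence at the perception phase.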
Needing help understanding GOAP (Brent Owens article) - any pointers?
Tset_Tsyung posted a topic in Artificial Intelligence

Hey all,

As the heading says, I'm trying to get my head around Goal-Oriented Action Planning (GOAP) AI. However, I'm having some issues reverse engineering Brent Owens' code line by line (mainly around the recursive graph building of the GOAPPlanner). I'm assuming that reverse engineering this is the best way to get a comprehensive understanding... thoughts?

Does anyone know of an in-depth explanation of this article (found here:), or another article on this subject? I'd gladly post my specific questions here (on this post, even), but I'm not too sure how much I'm allowed to reference other sites...

Any pointers, help or comments would be greatly appreciated.

Sincerely,

Mike
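For anyone untangling the same article: stripped of the Java plumbing, the recursive plan building in a GOAP planner is a search over currently applicable actions, keeping the cheapest action sequence that satisfies the goal. Below is an illustrative Python simplification (my own sketch with the article's wood-chopping flavour, not Owens' actual code; action names and costs are made up):

```python
# Minimal GOAP-style forward planner, illustrative only.
# World state and goals are dicts of boolean facts; actions have
# preconditions, effects and a cost.

class Action:
    def __init__(self, name, pre, eff, cost=1):
        self.name, self.pre, self.eff, self.cost = name, pre, eff, cost

    def usable(self, state):
        return all(state.get(k) == v for k, v in self.pre.items())

    def apply(self, state):
        new = dict(state)
        new.update(self.eff)
        return new

def plan(state, goal, actions, cost=0):
    # Goal satisfied: return an empty remainder of the plan.
    if all(state.get(k) == v for k, v in goal.items()):
        return [], cost
    result = None
    for a in actions:
        if not a.usable(state):
            continue
        # Recurse with the action applied and removed from the pool,
        # which is what builds the search graph depth-first.
        sub = plan(a.apply(state), goal,
                   [x for x in actions if x is not a], cost + a.cost)
        if sub is not None:
            seq, c = sub
            if result is None or c < result[1]:
                result = ([a.name] + seq, c)
    return result

actions = [
    Action("getAxe", {"hasAxe": False}, {"hasAxe": True}, cost=2),
    Action("chopLog", {"hasAxe": True}, {"hasLog": True}, cost=4),
    Action("collectBranches", {}, {"hasLog": True}, cost=8),
]
print(plan({"hasAxe": False}, {"hasLog": True}, actions))
# -> (['getAxe', 'chopLog'], 6): cheaper than collectBranches at cost 8
```

Owens' version additionally builds the whole graph of candidate paths before picking the cheapest leaf, but the recursion structure is the same idea.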
Designing Intelligent Artificial Intelligence
slayemin posted a blog entry in slayemin's Journal
Stop hacking
MvGxAce posted a topic in Networking and Multiplayer

Is there a program to ensure that the game I've created does not get hacked by third-party apps such as Lucky Patcher, Game Guardian, Game Killer, etc.? If so, how do I prevent this obstacle from ruining the game? The game is online based, but I just recently found out there are hackers. Is there a program I could use to stop this, or is it in the coding? Thank you.
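For context: tools like the ones mentioned work by editing the game's memory on a device the attacker controls, so no client-side program can fully prevent them. For an online game the usual answer is in the code: keep the authoritative state on the server and validate what clients report. A toy sketch of that idea (the function, names and limits here are all made up for illustration):

```python
# Toy sketch of server-authoritative validation: the server re-checks a
# client-reported score against what is actually possible from the events
# it witnessed, instead of trusting client memory (which memory editors
# can rewrite at will).

MAX_POINTS_PER_EVENT = 100  # illustrative game rule

def validate_report(events, reported_score):
    # Recompute the maximum legitimate score from server-known events.
    max_possible = sum(min(e, MAX_POINTS_PER_EVENT) for e in events)
    return reported_score <= max_possible

print(validate_report([50, 80], 130))   # plausible report
print(validate_report([50, 80], 9999))  # memory-edited value, rejected
```

The same pattern applies to currency, inventory and progress: the client sends inputs or claims, and the server decides what actually happened.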
AI and Machine Learning
AlphaSilverback posted a blog entry in Moosehunt

So - the last couple of weeks I have been working on building a framework for some AI. In a game like the one I'm building, this is rather important. I estimate 40% of my time is gonna go into the AI. What I want is a hunting game where the AI learns from the player's behaviour. This is actually what is gonna make the game fun to play. It will require some learning from the creatures that the player hunts, and some collective intelligence per species.

Since I am not going to spend oceans of time creating dialogue, tons of cut-scenes, an epic story-line and multiple levels (I can't make something interesting enough to make it worth the time - I need more man-power for that), what I can do is create some interesting AI and the feeling of being an actual hunter who has to depend on analysis of the animals and experimentation on where to attack from.

SO... to make it as generic as possible, I mediated everything, using as many interfaces as possible for the system. You can see the general system here in the UML diagram. I customized it for Unity so that it is required to add all the scripts to GameObjects in the game world. This gives a better overview, but requires some setup - not that bothersome. If you add some simple Game Objects and some colors, it could look like this in Unity3D:

Now, this system works beautifully. The abstraction of the Animation Controller and Movement Controller assumes some standard stuff that applies to all creatures - for example that they all can move, have eating, sleeping and drinking animations, and have a PathFinder script attached somewhere in the hierarchy. It's very generic and easy to customize. At some point I'll upload a video of the flocking behavior and general behavior of this creature. For now, I'm gonna concentrate on finishing the player model and creating a partitioned terrain for everything to exist in.
Finally, and equally important, I have to design a learning system for all the creatures. This will be integrated into the Brain of every creature, but I might separate the collective intelligence between the species. It's taking shape, but I still have a lot of modelling to do, generating terrain and modelling/generating trees and vegetation.

Thanks for reading,
Alpha-
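The mediation described in the post - a Brain that only talks to abstract controller interfaces - can be sketched roughly like this (illustrative Python, not the author's actual Unity/C# scripts; class and method names are invented):

```python
# Illustrative sketch of a mediated creature AI: the Brain only knows
# abstract controllers, so any creature providing movement and animation
# controllers can reuse the same decision logic.

class MovementController:
    def __init__(self):
        self.log = []
    def move_to(self, target):
        self.log.append(("move", target))

class AnimationController:
    def __init__(self):
        self.log = []
    def play(self, clip):
        self.log.append(("anim", clip))

class Brain:
    def __init__(self, movement, animation):
        self.movement = movement
        self.animation = animation
    def update(self, hungry, food_pos):
        # A trivial hand-written policy standing in for the learned one.
        if hungry:
            self.movement.move_to(food_pos)
            self.animation.play("eating")
        else:
            self.animation.play("idle")

move, anim = MovementController(), AnimationController()
brain = Brain(move, anim)
brain.update(hungry=True, food_pos=(3, 4))
print(move.log, anim.log)  # -> [('move', (3, 4))] [('anim', 'eating')]
```

The learning system would then replace the hand-written policy inside `Brain.update` without touching the controllers, which is the point of mediating everything behind interfaces.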
Robot not working after another app gains focus
843807 - Mar 3, 2010 12:25 AM
I'm trying to automate some clicking for a program using Robot. The problem is, once I give focus to the target program, Robot stops working (so basically, the initial click into the program works, but it's useless if I can't release the mouse). Intuitively, it seems like the target program has somehow gained a higher privilege to control the mouse or something once it has focus. Programmatically giving focus back to the Java program after every event won't work. Is there a way to fix this, or should I try something more native to Windows, like C#?
1. Re: Robot not working after another app gains focus
DarrylBurke - Mar 3, 2010 12:27 AM (in response to 843807)

To get better help sooner, post an SSCCE that clearly demonstrates your problem.
Use code tags to post code -- [code]CODE[/code] will display as

CODE

Or click the CODE button and paste your code between the {code} tags that appear.
I'm moving this thread to a more appropriate forum.
db
2. This Thread is now moved
DarrylBurke - Mar 3, 2010 12:27 AM (in response to DarrylBurke)

Note: This thread was originally posted in the Java Programming forum, but moved to this forum for closer topic alignment.
3. Re: Robot not working after another app gains focus
843807 - Mar 3, 2010 1:08 PM (in response to DarrylBurke)

Well, any sort of Robot code that does mouse stuff would work, but here's a part of my code:
Robot rr;
try {
    rr = new Robot();
    rr.mouseMove(620, 700);
    Thread.sleep(20);
    rr.mousePress(InputEvent.BUTTON1_MASK);
    Thread.sleep(20);
    System.out.println("Release!");
    rr.mouseRelease(InputEvent.BUTTON1_MASK);
} catch (AWTException e) {
    e.printStackTrace();
} catch (InterruptedException e) {
    e.printStackTrace();
}

In this case, the cursor will move to 620,700 and the initial mousePress will work, but the release (and anything after that) won't.
4. Re: Robot not working after another app gains focus
843807 - Mar 3, 2010 3:31 PM (in response to 843807)

billyboy:
How are you checking to see if it works? If you have an application that is processing the click events, and another application covers it, then it will no longer see the click events.
This code:

import java.awt.AWTException;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.InputEvent;
import java.awt.Point;
import java.awt.Robot;
import javax.swing.Timer;

public class JRobot implements ActionListener {
    Robot r;
    Timer t = new Timer(200, this);
    Point p = new Point(512, 512);
    boolean bTime = false;
    int iCount = 0;

    public JRobot() {
        try {
            r = new Robot();
            t.start();
        } catch (AWTException e) {
            System.out.println(e.toString());
        }
    }

    public static void main(String[] args) {
        new JRobot();
    }

    public void actionPerformed(ActionEvent e) {
        if (bTime) {
            p.x = 256;
            p.y = 256;
        } else {
            p.x = 512;
            p.y = 512;
        }
        int iState = iCount % 6;
        switch (iState) {
            case 1: r.mouseMove(p.x, p.y); break;
            case 2: r.mousePress(InputEvent.BUTTON1_MASK); break;
            case 4: r.mouseRelease(InputEvent.BUTTON1_MASK); break;
            case 5: bTime = !bTime;
            default:
        }
        if (iCount >= 100) System.exit(0);
        iCount++;
        System.out.println(iState + " x " + p.x + " y " + p.y);
    }
}

does keep running and does click and release. I suspect yours is also. If you put one object at (256, 256) and another at (512, 512) you will soon see that they are alternately chosen and gain and release focus appropriately, but if one object is on top of the other, then the top object will receive the click events.
5. Re: Robot not working after another app gains focus
843807 - Mar 6, 2010 4:23 PM (in response to 843807)

Well, I can tell that Robot is no longer 'working' because nothing happens after the target app gains focus. Perhaps I should clarify the situation a little more.
There is the target application, which needs to have certain things clicked and whatnot. It's running and takes up most of the screen.
I start up the java app (say the one you just posted), and it creates a new Robot. OK.
It moves the mouse to 512,512. OK. At this point, the focus is still on the java app I just started up (eclipse or a commandline window).
It does a mousePress. At this point, the target application gains focus and receives the mousedown event (if there is a button in the target app, it gets depressed).
It does a mouseRelease. Nothing happens (the button is still depressed).
It does a mouseMove to 256, 256. The cursor does not move. The button is still depressed. After this, Robot doesn't 'work'.
If I minimize the target app or make it lose focus and try to give focus back to the java app, sometimes Robot will begin working again; I haven't quite figured out what will make Robot regain control.
6. Re: Robot not working after another app gains focus
843807 - Mar 11, 2010 10:07 AM (in response to 843807)

On that I've not a clue. I run on Solaris 10, Windows XP and 2003 Server, and Ubuntu (whatever the flavor of the day is), and I cannot reproduce the problem you describe. Post a working example of your code that displays the problem. Also try it on another machine and see if it's your local box only.
08 June 2012 03:24 [Source: ICIS news]
SINGAPORE (ICIS)--South Korea's Isu Chem has shut an isopropanol (IPA) line because of weak demand, a company source said.
The unit was shut at the end of May and is likely to remain offline until at least the end of June, the source said.
The producer continues to operate its second 30,000 tonne/year line, the source said.
Market participants said buying sentiment has weakened following recent falls in feedstock acetone and propylene prices as well as bearish crude futures.
Demand in southeast Asia, which has helped to pick up the slack in Chinese demand, has slowed down as well, they added.
“Distributors have adequate stocks and everyone is waiting for feedstock prices to stabilise before buying new cargoes.”
The attached patch applies to 2.4.3 and should address the most serious concerns surrounding OOM and low-memory situations for most people. A summary of the patch contents follows:

MAJOR: OOM killer now only activates when truly out of memory, ie. when buffer and cache memory has already been eaten down to the bone.

MEDIUM: The allocation mechanism will now only allow processes to reserve memory if there is sufficient memory remaining *and* the process is not already hogging RAM. IOW, if the allocating process is already 4x the size of the remaining free memory, reservation of more memory (by fork(), malloc() or related calls) will fail.

MEDIUM: The OOM killer algorithm has been reworked to be a little more intelligent by default, and also now allows the sysadmin to specify PIDs and/or process names which should be left untouched. Simply echo a space-delimited list of PIDs and/or process names into /proc/sys/vm/oom-no-kill, and the OOM killer will ignore all processes matching any entry in the list until only they and init remain. Init (as PID 1 or as a root process named "init") is now always ignored. TODO: make certain parameters of the OOM killer configurable.

W-I-P: The memory-accounting code from an old 2.3.99 patch has been re-introduced, but is in sore need of debugging. It can be activated by echoing a negative number into /proc/sys/vm/overcommit_memory - but do this at your own risk. Interested kernel hackers should alter the "#define VM_DEBUG 0" to 1 in include/linux/mm.h to view lots of debugging and warning messages. I have seen the memory-accounting code attempt to "free" blocks of memory exceeding 2GB which had never been allocated, while running gcc. The sanity-check code detects these anomalies and attempts to correct for them, but this isn't good...

SIDE EFFECT: All parts of the kernel which can change the total amount of VM (eg. by adding/removing swap) should now call vm_invalidate_totalmem() to notify the VM about this.
A new functionvm_total() now reports the total amount of VM available. The total VM andthe amount of reserved memory are now available from /proc/meminfo.diff -rBU 5 linux-2.4.3/fs/exec.c linux-oom/fs/exec.c--- linux-2.4.3/fs/exec.c Thu Mar 22 09:26:18 2001+++ linux-oom/fs/exec.c Tue Apr 3 09:32:07 2001@@ -386,23 +386,31 @@ } static int exec_mmap(void) { struct mm_struct * mm, * old_mm;+ struct task_struct * tsk = current;+ unsigned long reserved = 0; - old_mm = current->mm;+ old_mm = tsk->mm; if (old_mm && atomic_read(&old_mm->mm_users) == 1) {+ /* Keep old stack reservation */ mm_release(); exit_mmap(old_mm); return 0; } + reserved = vm_enough_memory(tsk->rlim[RLIMIT_STACK].rlim_cur >> + PAGE_SHIFT);+ if(!reserved)+ return -ENOMEM;+ mm = mm_alloc(); if (mm) {- struct mm_struct *active_mm;+ struct mm_struct *active_mm = tsk->active_mm; - if (init_new_context(current, mm)) {+ if (init_new_context(tsk, mm)) { mmdrop(mm); return -ENOMEM; } /* Add it to the list of mm's */@@ -424,10 +432,12 @@ return 0; } mmdrop(active_mm); return 0; }++ vm_release_memory(reserved); return -ENOMEM; } /* * This function makes sure the current process has its own signal table,diff -rBU 5 linux-2.4.3/fs/proc/proc_misc.c linux-oom/fs/proc/proc_misc.c--- linux-2.4.3/fs/proc/proc_misc.c Fri Mar 23 11:45:28 2001+++ linux-oom/fs/proc/proc_misc.c Tue Apr 3 09:32:27 2001@@ -173,11 +173,13 @@ "HighTotal: %8lu kB\n" "HighFree: %8lu kB\n" "LowTotal: %8lu kB\n" "LowFree: %8lu kB\n" "SwapTotal: %8lu kB\n"- "SwapFree: %8lu kB\n",+ "SwapFree: %8lu kB\n"+ "VMTotal: %8lu kB\n"+ "VMReserved:%8lu kB\n", K(i.totalram), K(i.freeram), K(i.sharedram), K(i.bufferram), K(atomic_read(&page_cache_size)),@@ -188,11 +190,13 @@ K(i.totalhigh), K(i.freehigh), K(i.totalram-i.totalhigh), K(i.freeram-i.freehigh), K(i.totalswap),- K(i.freeswap));+ K(i.freeswap),+ K(vm_total()), + K(vm_reserved)); return proc_calc_metrics(page, start, off, count, eof, len); #undef B #undef K }diff -rBU 5 linux-2.4.3/include/linux/mm.h 
linux-oom/include/linux/mm.h--- linux-2.4.3/include/linux/mm.h Mon Mar 26 15:48:13 2001+++ linux-oom/include/linux/mm.h Tue Apr 3 19:59:12 2001@@ -22,10 +22,19 @@ #include <asm/page.h> #include <asm/pgtable.h> #include <asm/atomic.h> /*+ * These are used to prevent VM overcommit.+ */++#define VM_DEBUG 0+extern long vm_reserved;+extern spinlock_t vm_lock;+extern inline long vm_total(void);++/* * Linux kernel virtual memory manager primitives. * The idea being to have a "virtual" mm in the same way * we have a virtual fs - giving a cleaner interface to the * mm details, and allowing different kinds of memory mappings * (from shared memory to executable loading to arbitrary@@ -456,10 +465,14 @@ extern int do_munmap(struct mm_struct *, unsigned long, size_t); extern unsigned long do_brk(unsigned long, unsigned long); struct zone_t;++extern long vm_enough_memory(long pages);+extern void vm_release_memory(long pages);+ /* filemap.c */ extern void remove_inode_page(struct page *); extern unsigned long page_unuse(struct page *); extern void truncate_inode_pages(struct address_space *, loff_t); diff -rBU 5 linux-2.4.3/include/linux/sysctl.h linux-oom/include/linux/sysctl.h--- linux-2.4.3/include/linux/sysctl.h Mon Mar 26 15:48:10 2001+++ linux-oom/include/linux/sysctl.h Tue Apr 3 12:57:27 2001@@ -130,11 +130,12 @@ VM_OVERCOMMIT_MEMORY=5, /* Turn off the virtual memory safety limit */ VM_BUFFERMEM=6, /* struct: Set buffer memory thresholds */ VM_PAGECACHE=7, /* struct: Set cache memory thresholds */ VM_PAGERDAEMON=8, /* struct: Control kswapd behaviour */ VM_PGT_CACHE=9, /* struct: Set page table cache parameters */- VM_PAGE_CLUSTER=10 /* int: set number of pages to swap together */+ VM_PAGE_CLUSTER=10, /* int: set number of pages to swap together */+ VM_OOM_NOKILL=11 /* string: List of PIDs to avoid killing on OOM */ }; /* CTL_NET names: */ enumdiff -rBU 5 linux-2.4.3/kernel/exit.c linux-oom/kernel/exit.c--- linux-2.4.3/kernel/exit.c Fri Feb 9 11:29:44 2001+++ 
linux-oom/kernel/exit.c Tue Apr 3 09:32:14 2001@@ -304,10 +304,15 @@ struct mm_struct * mm = tsk->mm; mm_release(); if (mm) { atomic_inc(&mm->mm_count);+ if (atomic_read(&mm->mm_users) == 1) {+ /* Only release stack if we're the last one using this mm */+ vm_release_memory(tsk->rlim[RLIMIT_STACK].rlim_cur >>+ PAGE_SHIFT);+ } if (mm != tsk->active_mm) BUG(); /* more a memory barrier than a real lock */ task_lock(tsk); tsk->mm = NULL; task_unlock(tsk);diff -rBU 5 linux-2.4.3/kernel/fork.c linux-oom/kernel/fork.c--- linux-2.4.3/kernel/fork.c Mon Mar 19 12:35:08 2001+++ linux-oom/kernel/fork.c Tue Apr 3 09:32:21 2001@@ -123,10 +123,11 @@ } static inline int dup_mmap(struct mm_struct * mm) { struct vm_area_struct * mpnt, *tmp, **pprev;+ unsigned long reserved = 0; int retval; flush_cache_mm(current->mm); mm->locked_vm = 0; mm->mmap = NULL;@@ -140,10 +141,19 @@ struct file *file; retval = -ENOMEM; if(mpnt->vm_flags & VM_DONTCOPY) continue;++ reserved = 0;+ if((mpnt->vm_flags & (VM_GROWSDOWN | VM_WRITE | VM_SHARED)) == VM_WRITE) {+ unsigned long npages = mpnt->vm_end - mpnt->vm_start;+ reserved = vm_enough_memory(npages >> PAGE_SHIFT);+ if(!reserved)+ goto fail_nomem;+ }+ tmp = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL); if (!tmp) goto fail_nomem; *tmp = *mpnt; tmp->vm_flags &= ~VM_LOCKED;@@ -278,10 +288,11 @@ } static int copy_mm(unsigned long clone_flags, struct task_struct * tsk) { struct mm_struct * mm, *oldmm;+ unsigned long reserved; int retval; tsk->min_flt = tsk->maj_flt = 0; tsk->cmin_flt = tsk->cmaj_flt = 0; tsk->nswap = tsk->cnswap = 0;@@ -303,10 +314,14 @@ mm = oldmm; goto good_mm; } retval = -ENOMEM;+ reserved = vm_enough_memory(tsk->rlim[RLIMIT_STACK].rlim_cur >> PAGE_SHIFT);+ if(!reserved)+ goto fail_nomem;+ mm = allocate_mm(); if (!mm) goto fail_nomem; /* Copy the current MM stuff.. 
*/@@ -347,10 +362,12 @@ return 0; free_pt: mmput(mm); fail_nomem:+ if (reserved)+ vm_release_memory(reserved); return retval; } static inline struct fs_struct *__copy_fs_struct(struct fs_struct *old) {diff -rBU 5 linux-2.4.3/kernel/sysctl.c linux-oom/kernel/sysctl.c--- linux-2.4.3/kernel/sysctl.c Fri Feb 16 16:02:37 2001+++ linux-oom/kernel/sysctl.c Tue Apr 3 09:32:49 2001@@ -42,10 +42,11 @@ /* External variables not in a header file. */ extern int panic_timeout; extern int C_A_D; extern int bdf_prm[], bdflush_min[], bdflush_max[]; extern int sysctl_overcommit_memory;+extern char vm_nokill[]; extern int max_threads; extern int nr_queued_signals, max_queued_signals; extern int sysrq_enabled; /* this is needed for the proc_dointvec_minmax for [fs_]overflow UID and GID */@@ -268,10 +269,12 @@ &pager_daemon, sizeof(pager_daemon_t), 0644, NULL, &proc_dointvec}, {VM_PGT_CACHE, "pagetable_cache", &pgt_cache_water, 2*sizeof(int), 0644, NULL, &proc_dointvec}, {VM_PAGE_CLUSTER, "page-cluster", &page_cluster, sizeof(int), 0644, NULL, &proc_dointvec},+ {VM_OOM_NOKILL, "oom-no-kill", + &vm_nokill, 256, 0644, NULL, &proc_dostring, &sysctl_string}, {0} }; static ctl_table proc_table[] = { {0}diff -rBU 5 linux-2.4.3/mm/mmap.c linux-oom/mm/mmap.c--- linux-2.4.3/mm/mmap.c Wed Mar 28 12:55:34 2001+++ linux-oom/mm/mmap.c Wed Apr 4 00:41:39 2001@@ -35,47 +35,163 @@ pgprot_t protection_map[16] = { __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111, __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111 }; -int sysctl_overcommit_memory;+int sysctl_overcommit_memory = 0;++/* Unfortunately these need to be longs so we need a spinlock. 
 */
+long vm_reserved = 0;
+long totalvm = 0;
+spinlock_t vm_lock = SPIN_LOCK_UNLOCKED;
+
+void vm_invalidate_totalmem(void)
+{
+    int flags;
+
+    spin_lock_irqsave(&vm_lock, flags);
+    totalvm = 0;
+    spin_unlock_irqrestore(&vm_lock, flags);
+}
+
+long vm_total(void)
+{
+    int flags;
+
+    spin_lock_irqsave(&vm_lock, flags);
+    if(!totalvm) {
+        struct sysinfo i;
+        si_meminfo(&i);
+        si_swapinfo(&i);
+        totalvm = i.totalram + i.totalswap;
+    }
+    spin_unlock_irqrestore(&vm_lock, flags);
+
+    return totalvm;
+}

 /* Check that a process has enough memory to allocate a
  * new virtual mapping.
  */
-int vm_enough_memory(long pages)
+long vm_enough_memory(long pages)
 {
     /* Stupid algorithm to decide if we have enough memory: while
      * simple, it hopefully works in most obvious cases.. Easy to
      * fool it, but this should catch most mistakes.
      */
     /* 23/11/98 NJC: Somewhat less stupid version of algorithm,
      * which tries to do "TheRightThing". Instead of using half of
      * (buffers+cache), use the minimum values. Allow an extra 2%
      * of num_physpages for safety margin.
      */
+    /* From non-overcommit patch: only allow vm_reserved to exceed
+     * vm_total if we're root.
+     */

-    long free;
+    int flags;
+    long free = 0;

-    /* Sometimes we want to use more memory than we have. */
-    if (sysctl_overcommit_memory)
-        return 1;
+    spin_lock_irqsave(&vm_lock, flags);

+    /* JDM: for testing the memory-accounting code, if VM_DEBUG is set
+     * we calcualte the free memory both ways and check one against
+     * the other. Otherwise we just calculate the one we need.
+     */
+#if (!VM_DEBUG)
+    if(sysctl_overcommit_memory >= 0) {
+#endif
     free = atomic_read(&buffermem_pages);
     free += atomic_read(&page_cache_size);
     free += nr_free_pages();
     free += nr_swap_pages;
-    /*
-     * The code below doesn't account for free space in the inode
-     * and dentry slab cache, slab cache fragmentation, inodes and
-     * dentries which will become freeable under VM load, etc.
-     * Lets just hope all these (complex) factors balance out...
+
+    /* The dentry and inode caches may contain unused entries. I have no
+     * idea whether these caches actually shrink under pressure, but...
+     */
+    free += (dentry_stat.nr_unused * sizeof(struct dentry)) >> PAGE_SHIFT;
+    free += (inodes_stat.nr_unused * sizeof(struct inode)) >> PAGE_SHIFT;
+
+#if VM_DEBUG
+
+    }
+
+    if(sysctl_overcommit_memory < 0)
+        free = vm_total() - vm_reserved;
+
+    /* Attempt to curtail memory allocations before hard OOM occurs.
+     * Based on current process size, which is hopefully a good and fast heuristic.
+     * Also fix bug where the real OOM limit of (free == freepages.min) is not taken into account.
+     * In fact, we use freepages.high as the threshold to make sure there's still room for buffers+cache.
+     *
+     * -- Jonathan "Chromatix" Morton [JDM], 2001-03-24 to 2001-04-03
+     */
+
+    if(current->mm)
+        free -= (current->mm->total_vm / 4) + freepages.high;
+    else
+        free -= freepages.min;
+
+#if VM_DEBUG
+    printk(KERN_DEBUG "vm_enough_memory(): process %d reserving %ld pages\n",
+        current->pid, pages);
+#endif
+
+    if(pages > free)
+        if( !(sysctl_overcommit_memory == -1 && current->uid == 0)
+            && sysctl_overcommit_memory != 1)
+            pages = 0;
+
+    vm_reserved += pages;
+    spin_unlock_irqrestore(&vm_lock, flags);
+
+    return pages;
+}
+
+/* Account for freeing up the memory */
+inline void vm_release_memory(long pages)
+{
+    int flags;
+    long free;
+
+    spin_lock_irqsave(&vm_lock, flags);
+
+    vm_reserved -= pages;
+
+#if VM_DEBUG
+    /* Perform sanity check */
+    free = atomic_read(&buffermem_pages);
+    free += atomic_read(&page_cache_size);
+    free += nr_free_pages();
+    free += nr_swap_pages;
+
+    /* The dentry and inode caches may contain unused entries. I have no
+     * idea whether these caches actually shrink under pressure, but...
      */
     free += (dentry_stat.nr_unused * sizeof(struct dentry)) >> PAGE_SHIFT;
     free += (inodes_stat.nr_unused * sizeof(struct inode)) >> PAGE_SHIFT;

-    return free > pages;
+
+
+    spin_unlock_irqrestore(&vm_lock, flags);
+
+#if VM_DEBUG
+    printk(KERN_DEBUG "vm_release_memory(): process %d freeing %ld pages\n",
+        current->pid, pages);
+#endif
 }

 /* Remove one vm structure from the inode's i_mapping address space. */
 static inline void __remove_shared_vm_struct(struct vm_area_struct *vma)
 {
@@ -199,10 +315,11 @@
     unsigned long prot, unsigned long flags, unsigned long pgoff)
 {
     struct mm_struct * mm = current->mm;
     struct vm_area_struct * vma;
     unsigned int vm_flags;
+    long reserved = 0;
     int correct_wcount = 0;
     int error;

     if (file && (!file->f_op || !file->f_op->mmap))
         return -ENODEV;
@@ -377,10 +494,11 @@
     /* Undo any partial mapping done by a device driver. */
     flush_cache_range(mm, vma->vm_start, vma->vm_end);
     zap_page_range(mm, vma->vm_start, vma->vm_end - vma->vm_start);
     flush_tlb_range(mm, vma->vm_start, vma->vm_end);
 free_vma:
+    vm_release_memory(reserved);
     kmem_cache_free(vm_area_cachep, vma);
     return error;
 }

 /* Get an address range which is currently unmapped.
@@ -556,10 +674,13 @@
     unsigned long end = addr + len;

     area->vm_mm->total_vm -= len >> PAGE_SHIFT;
     if (area->vm_flags & VM_LOCKED)
         area->vm_mm->locked_vm -= len >> PAGE_SHIFT;
+    if ((area->vm_flags & (VM_GROWSDOWN | VM_WRITE | VM_SHARED))
+        == VM_WRITE)
+        vm_release_memory(len >> PAGE_SHIFT);

     /* Unmapping the whole area.
      */
     if (addr == area->vm_start && end == area->vm_end) {
         if (area->vm_ops && area->vm_ops->close)
             area->vm_ops->close(area);
@@ -791,11 +912,11 @@
  */
 unsigned long do_brk(unsigned long addr, unsigned long len)
 {
     struct mm_struct * mm = current->mm;
     struct vm_area_struct * vma;
-    unsigned long flags, retval;
+    unsigned long flags, retval, reserved = 0;

     len = PAGE_ALIGN(len);
     if (!len)
         return addr;

@@ -822,11 +943,11 @@
         return -ENOMEM;

     if (mm->map_count > MAX_MAP_COUNT)
         return -ENOMEM;

-    if (!vm_enough_memory(len >> PAGE_SHIFT))
+    if (!(reserved = vm_enough_memory(len >> PAGE_SHIFT)))
         return -ENOMEM;

     flags = calc_vm_flags(PROT_READ|PROT_WRITE|PROT_EXEC,
                 MAP_FIXED|MAP_PRIVATE) | mm->def_flags;

@@ -840,16 +962,19 @@
             vma->vm_end = addr + len;
             goto out;
         }
     }

+
     /*
      * create a vma struct for an anonymous mapping
      */
     vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
-    if (!vma)
+    if (!vma) {
+        vm_release_memory(reserved);
         return -ENOMEM;
+    }

     vma->vm_mm = mm;
     vma->vm_start = addr;
     vma->vm_end = addr + len;
     vma->vm_flags = flags;
@@ -908,10 +1033,13 @@
         mm->map_count--;
         remove_shared_vm_struct(mpnt);

         zap_page_range(mm, start, size);
         if (mpnt->vm_file)
             fput(mpnt->vm_file);
+        if ((mpnt->vm_flags & (VM_GROWSDOWN | VM_WRITE | VM_SHARED))
+            == VM_WRITE)
+            vm_release_memory(size >> PAGE_SHIFT);
         kmem_cache_free(vm_area_cachep, mpnt);
         mpnt = next;
     }
     flush_tlb_mm(mm);

diff -rBU 5 linux-2.4.3/mm/mremap.c linux-oom/mm/mremap.c
--- linux-2.4.3/mm/mremap.c	Mon Mar 19 17:17:43 2001
+++ linux-oom/mm/mremap.c	Tue Apr 3 10:07:33 2001
@@ -11,12 +11,10 @@
 #include <linux/swap.h>

 #include <asm/uaccess.h>
 #include <asm/pgalloc.h>

-extern int vm_enough_memory(long pages);
-
 static inline pte_t *get_one_pte(struct mm_struct *mm, unsigned long addr)
 {
     pgd_t * pgd;
     pmd_t * pmd;
     pte_t * pte = NULL;

diff -rBU 5 linux-2.4.3/mm/oom_kill.c linux-oom/mm/oom_kill.c
--- linux-2.4.3/mm/oom_kill.c	Tue Nov 14 10:56:46 2000
+++ linux-oom/mm/oom_kill.c	Tue Apr 3 09:26:08 2001
@@ -11,143 +11,295 @@
  *
 *.
+ *
+ * Reworked April 2001 by Jonathan "Chromatix" Morton [JDM]
  */

 #include <linux/mm.h>
 #include <linux/sched.h>
 #include <linux/swap.h>
 #include <linux/swapctl.h>
 #include <linux/timex.h>

 /* #define DEBUG */

+enum {
+    false = 0,
+    true = 1
+};
+
+/* A list of PIDs and/or process names which shouldn't be killed in OOM situations */
+char vm_nokill[256] = {0};
+
 /**
  * int_sqrt - oom_kill.c internal function, rough approximation to sqrt
  * @x: integer of which to calculate the sqrt
  *
  * A very rough approximation to the sqrt() function.
  */
-static unsigned int int_sqrt(unsigned int x)
+static unsigned long int_sqrt(unsigned long x)
 {
-    unsigned int out = x;
+    unsigned long out = x;
     while (x & ~(unsigned int)1) x >>=2, out >>=1;
     if (x) out -= out >> 2;
     return (out ? out : 1);
 }

 /**
+ * oom_unkillable - determine whether a process is in the list of "please don't hurt me"
+ * processes. This allows sysadmins to designate mission-critical processes which will
+ * never be killed unless no alternatives remain. NB: 'init' need not be explicitly
+ * listed here, since it is already filtered out by other means.
+ */
+int oom_unkillable(struct task_struct *p)
+{
+    /* For each space-delimited entry in the vm_nokill array, check whether it's a number
+     * or a name. If a number, match it to the PID of the specified process. If a name,
+     * match to the name. Process names consisting entirely of digits are not supported.
+     */
+
+    /* A potential race condition exists here when the vm_nokill string is being modified
+     * at the same time as OOM is reached. The chances of this should be minimal, and
+     * some care is taken to minimise the effects. In particular, running off the end
+     * of the array is guarded against.
+     */
+
+    char *wordstart, *wordend, *ptr;
+    int c, pid;
+
+    for(wordstart = wordend = vm_nokill, c = pid = 0;
+        c < 256 && (*wordend) && (*wordstart);
+        c++, wordstart = wordend + 1) {
+
+        /* Find the start of the next word */
+        for( ; (*wordstart == 0 || *wordstart == ' ' || *wordstart == '\t') && c < 256; c++, wordstart++)
+            ;
+
+        /* Find the end of this word */
+        for(wordend = wordstart; *wordend != 0 && *wordend != ' ' && *wordend != '\t' && c < 256; c++, wordend++)
+            ;
+
+        /* No difference? Must be end of string */
+        if(wordend == wordstart)
+            break;
+
+        /* Determine whether it's a number or name */
+        for(pid = true, ptr = wordstart; ptr < wordend; ptr++)
+            if(*ptr > '9' || *ptr < '0') {
+                pid = false;
+                break;
+            }
+
+        if(pid) {
+            /* Does the pid match? */
+            pid = simple_strtol(wordstart, 0, 10);
+            if(pid == p->pid)
+                return true;
+        } else {
+            /* Does the name match? */
+            if(!p->comm)
+                continue;
+            if(!strncmp(wordstart, p->comm, wordend - wordstart) && strlen(p->comm) == (wordend - wordstart))
+                return true;
+        }
+
+    }
+
+    /* No match, so nothing special */
+    return false;
+}
+
+/**
  * oom_badness - calculate a numeric value for how bad this task has been
  * @p: task struct of which task we should calculate
+ * @pass: weight reduction factor for time-based corrections
  *
  * The formula used is relatively simple and documented inline in the
  * function. The main rationale is that we want to select a good task
- * to kill when we run out of memory.
+ * to kill when we run out of memory. This selection should also be
+ * intuitive to the average sysadmin.
+ *
+ * Processes we don't really want to kill include:
+ *   Batch-run processes which have lots of computation done (which would be wasted)
+ *   System daemons or services (eg. mail server or database)
+ *     (usually these run for a long time and are well-behaved)
+ *   Processes started by the superuser (we assume he knows what he's doing)
+ *
+ * Processes we *do* want to kill include:
+ *   Unprivileged-user processes which are gratuitously consuming memory very quickly
+ *     (usually these will have short runtimes, CPU usage and high UIDs)
+ *   System daemons which have suddenly "sprung a leak" (this is a relatively rare event)
+ *
+ * The above rules are only guidelines - for example it makes more sense to kill a
+ * virtually-unused mail or web server rather than an interactive client on a workstation,
+ * whereas the reverse might be true on a busy server. However, this is partially accounted
+ * for by the simple algorithm embedded here (busy servers consume CPU, right?).
+ *
+ * On the first pass through the list of processes, the OOM killer uses (pass == 0) which should
+ * work well for most situations. If no "obvious" target shows up, the pass number is
+ * increased on each subsequent run to select a lesser weight on the CPU time and run time values.
  *
- * priniciple
- * of least surprise ... (be careful when you change it)
+ * In this way, long-running processes will still get terminated if they spring a leak, when no
+ * other "obviously bad" (ie. shortlived and huge) processes are left in the system.
  */
-static int badness(struct task_struct *p)
+static unsigned long badness(struct task_struct *p, int pass)
 {
-    int points, cpu_time, run_time;
+    unsigned long points, cpu_time, run_time, t;

+    /* If there's no memory associated with this process, killing it will help nothing */
     if (!p->mm)
         return 0;
-    /*
-     * The memory size of the process is the basis for the badness.
+
+    /* Don't *ever* try to kill init!
+     * If that's the only process left, we'll panic anyway...
      */
-    points = p->mm->total_vm;
+    if (p->pid == 1 || (!strcmp("init", p->comm) && p->uid == 0))
+        return 0;

     /*
-     * CPU time is in seconds and run time is in minutes. There is no
-     * particular reason for this other than that it turned out to work
-     * very well in practice. This is not safe against jiffie wraps
-     * but we don't care _that_ much...
+     * The memory size of the process is the main badness factor.
+     * Scale it up so it's roughly normalised relative to the total VM in the system.
+     * On most systems, unsigned long is 32 bits - note that the following code will
+     * produce bigger numbers on systems where this is not the case.
      */
-    cpu_time = (p->times.tms_utime + p->times.tms_stime) >> (SHIFT_HZ + 3);
-    run_time = (jiffies - p->start_time) >> (SHIFT_HZ + 10);
-
-    points /= int_sqrt(cpu_time);
-    points /= int_sqrt(int_sqrt(run_time));
+    points = p->mm->total_vm;
+    t = vm_total();
+    while(t < (1 << (sizeof(unsigned long) - 1))) {
+        t <<= 1;
+        points <<= 1;
+    }

     /*
-     * Niced processes are most likely less important, so double
-     * their badness points.
+     * Process with plenty of runtime and cpu time under their belt are more likely to be
+     * well-behaved and/or important, so give them big goodness factors. These reduce to
+     * "virtually immune" after about 1 week uptime and 1 day CPU time, or proportionally
+     * equivalent values. Short-lived processes (with, say under 10 mins CPU time and 1
+     * hour runtime or proportionate balances) will get nice big scores depending mostly
+     * on their size.
+     *
+     * CPU time is in seconds and run time is in minutes (well, actually, 64 seconds).
+     * There is no particular reason for this other than that it turned out to work
+     * reasonably well in a few common testcases.
      */
-    if (p->nice > 0)
-        points *= 2;
+    cpu_time = (p->times.tms_utime + p->times.tms_stime) >> SHIFT_HZ;
+    run_time = ((unsigned long) jiffies - p->start_time) >> (SHIFT_HZ + 6);
+
+    /* If this isn't the first pass, give the CPU and run times progressively less weight */
+    while(pass > 0) {
+        pass--;
+        cpu_time = int_sqrt(cpu_time);
+        run_time = int_sqrt(run_time);
+    }
+
+    /* Make sure no divide-by-zero crap happens later, for very new processes */
+    if(!cpu_time)
+        cpu_time = 1;
+    if(!run_time)
+        run_time = 1;
+
+    /* Apply the weights - since these are goodness factors, they reduce the badness factor */
+    points /= cpu_time;
+    points /= run_time;

     /*
      * Superuser processes are usually more important, so we make it
      * less likely that we kill those.
      */
     if (cap_t(p->cap_effective) & CAP_TO_MASK(CAP_SYS_ADMIN) ||
                 p->uid == 0 || p->euid == 0)
-        points /= 4;
+        points /= 2;
+
+    /* Much the same goes for processes with low UIDs (FIXME: make this configurable) */
+    if(p->uid < 100 || p->euid < 100)
+        points /= 2;

     /*
      *;
-#ifdef DEBUG
-    printk(KERN_DEBUG "OOMkill: task %d (%s) got %d points\n",
-        p->pid, p->comm, points);
+
+    /* Always return at least 1 if there is any memory associated with the process */
+    if(points < 1)
+        points = 1;
+
+#if defined(DEBUG) || VM_DEBUG
+    printk(KERN_DEBUG "OOMkill: task %d (%s) got %lu points on pass %d (with cputime %lu and runtime %lu)\n",
+        p->pid, p->comm, points, pass, cpu_time, run_time);
 #endif
     return points;
 }

 /*
  * Simple selection loop. We chose the process with the highest
  * number of 'points'. We need the locks to make sure that the
  * list of task structs doesn't change while we look the other way.
  *
+ * As documented for badness(), we rescan the task list (with different weights) if we don't find a
+ * particularly "bad" process. We put a cap on how many times we try this, though - a machine full
+ * of many small processes can still go OOM.
+ *
  * (not docbooked, we don't want this one cluttering up the manual)
  */
 static struct task_struct * select_bad_process(void)
 {
-    int maxpoints = 0;
+    unsigned long maxpoints = 0;
     struct task_struct *p = NULL;
     struct task_struct *chosen = NULL;
+    int pass = 0;
+    int suppress_unkillable = 0;

     read_lock(&tasklist_lock);
-    for_each_task(p) {
-        if (p->pid) {
-            int points = badness(p);
-            if (points > maxpoints) {
-                chosen = p;
-                maxpoints = points;
-            }
+    for( ; suppress_unkillable < 2; suppress_unkillable++) {
+        if(suppress_unkillable)
+            printk(KERN_ERR "OOMkill: unable to find a non-critical process to kill!\n");
+
+        do {
+            for_each_task(p) {
+                if (p->pid && (!oom_unkillable(p) || suppress_unkillable)) {
+                    unsigned long points = badness(p, pass);
+                    if (points > maxpoints) {
+                        chosen = p;
+                        maxpoints = points;
                     }
+                } else {
+#if defined(DEBUG) || VM_DEBUG
+                    printk(KERN_DEBUG "OOMkill: task %d (%s) skipped due to vm_nokill list\n",
+                        p->pid, p->comm);
+#endif
+                }
+            }
+            pass++;
+        } while(maxpoints < 1024 && pass < 10);
+        if(maxpoints > 1)
+            break;
     }
     read_unlock(&tasklist_lock);
+
+    printk(KERN_INFO "Out of Memory: Selected process with badness %lu on pass %d\n",
+        maxpoints, pass - 1);
+
     return chosen;
 }

 /**
- * oom_kill - kill the "best" process when we run out of memory
+ * oom_kill - kill the most appropriate).
+ * CAP_SYS_RAW_IO set, send SIGTERM instead.
  */
 void oom_kill(void)
 {
     struct task_struct *p = select_bad_process();
@@ -189,22 +341,31 @@
  * Returns 0 if there is still enough memory left,
  * 1 when we are out of memory (otherwise).
  */
 int out_of_memory(void)
 {
-    struct sysinfo swp_info;
+    long free;

     /* Enough free memory? Not OOM. */
-    if (nr_free_pages() > freepages.min)
+    free = nr_free_pages();
+    if (free > freepages.min)
+        return 0;
+
+    if (free + nr_inactive_clean_pages() > freepages.low)
         return 0;

-    if (nr_free_pages() + nr_inactive_clean_pages() > freepages.low)
+    /*
+     * Buffers and caches can be freed up (Jonathan "Chromatix" Morton)
+     * Fixes bug where systems with tons of "free" RAM were erroneously detecting OOM.
+     */
+    free += atomic_read(&buffermem_pages);
+    free += atomic_read(&page_cache_size);
+    if (free > freepages.low)
         return 0;

     /* Enough swap space left? Not OOM. */
-    si_swapinfo(&swp_info);
-    if (swp_info.freeswap > 0)
+    if (nr_swap_pages > 0)
         return 0;

     /* Else... */
     return 1;
 }

diff -rBU 5 linux-2.4.3/mm/shmem.c linux-oom/mm/shmem.c
--- linux-2.4.3/mm/shmem.c	Fri Mar 2 15:16:59 2001
+++ linux-oom/mm/shmem.c	Tue Apr 3 09:32:35 2001
@@ -842,11 +842,10 @@
     int error;
     struct file *file;
     struct inode * inode;
     struct dentry *dentry, *root;
     struct qstr this;
-    int vm_enough_memory(long pages);

     error = -ENOMEM;
     if (!vm_enough_memory((size) >> PAGE_SHIFT))
         goto out;

diff -rBU 5 linux-2.4.3/mm/swapfile.c linux-oom/mm/swapfile.c
--- linux-2.4.3/mm/swapfile.c	Thu Mar 22 09:22:15 2001
+++ linux-oom/mm/swapfile.c	Tue Apr 3 09:32:40 2001
@@ -15,10 +15,13 @@
 #include <linux/pagemap.h>
 #include <linux/shm.h>

 #include <asm/pgtable.h>

+extern int sysctl_overcommit_memory;
+extern void vm_invalidate_totalmem(void);
+
 spinlock_t swaplock = SPIN_LOCK_UNLOCKED;
 unsigned int nr_swapfiles;
 struct swap_list_t swap_list = {-1, -1};

@@ -405,11 +408,11 @@
 asmlinkage long sys_swapoff(const char * specialfile)
 {
     struct swap_info_struct * p = NULL;
     struct nameidata nd;
-    int i, type, prev;
+    int i, type, prev, flags;
     int err;

     if (!capable(CAP_SYS_ADMIN))
         return -EPERM;

@@ -450,11 +453,22 @@
         swap_list.next = swap_list.head;
     }
     nr_swap_pages -= p->pages;
     swap_list_unlock();
     p->flags = SWP_USED;
-    err = try_to_unuse(type);
+
+    /* Don't allow removal of swap if it will cause overcommit */
+    spin_lock_irqsave(&vm_lock, flags);
+    if ((sysctl_overcommit_memory < 0) &&
+        (vm_reserved > vm_total())) {
+        spin_unlock_irqrestore(&vm_lock, flags);
+        err = -ENOMEM;
+    } else {
+        spin_unlock_irqrestore(&vm_lock, flags);
+        err = try_to_unuse(type);
+    }
+
     if (err) {
         /* re-insert swap space back into swap_list */
         swap_list_lock();
         for (prev = -1, i = swap_list.head; i >= 0; prev = i, i = swap_info[i].next)
             if (p->prio >= swap_info[i].prio)
@@ -485,10 +499,11 @@
 out_dput:
     unlock_kernel();
     path_release(&nd);
 out:
+    vm_invalidate_totalmem();
     return err;
 }

 int get_swaparea_info(char *buf)
 {
@@ -789,10 +804,11 @@
         ++least_priority;
     path_release(&nd);
 out:
     if (swap_header)
         free_page((long) swap_header);
+    vm_invalidate_totalmem();
     unlock_kernel();
     return error;
 }

 void si_swapinfo(struct sysinfo *val)
Source: http://lkml.org/lkml/2001/4/4/151
-- SalokineTerata ?DateTime(2007-11-19T20:28:26Z)
It's right for me.
Bye.
I think it's important that page names include the word "Install" (maybe more important than the word "Debian", which is implicit on this wiki). However, I came up with this name to clarify what it is about: InstallKde vs InstallDell. Also, the page name must be renamed to CamelCase. What do you think about it ?
If the name is fine for you, I'll take care to rename the pages InstallingDebianOn myself.
Yes, existing pages could be renamed under !InstallingDebianOn/Brand/!?ModelName then.
FranklinPiat ?DateTime(2007-11-19T08:10:47Z)
- Hi, I would like to import desktop computers and laptops from [:Hardware#computers:Hardware]. Can I do it ? What do you think about to rename Installing_Debian_On/ pages as DebianOn/ ? Bye
-- SalokineTerata ?DateTime(2007-11-18T21:12:17Z)
Should PageFragment use PCI ids, or device "name" ? ?BRthe same applies to USB
For example : "Installing_Debian_On/PageFragment_Intel_ipw3945/etch" vs "Installing Debian On/PageFragment_PCI_8086-4227"
Pros/Cons:
- PCI ids could be easily and automatically linked from a "hardware database"..
- many PCI device with different IDs are often configured with the exact same steps under Linux, like intel 1000 network cards..
So..?
-- FranklinPiat ?DateTime(2007-06-19T09:03:29Z)
fixed bug described below about final parenthesis in URL that aren't properly detected by wiki and MUA.?BR i.e. Switched namespace from "(etch)" to "/etch" notation in URLs.
-- FranklinPiat ?DateTime(2007-06-19T09:03:29Z)
namespace problem: i have noticed a problem, with URLs like "" will be broken when emailed by most MUA and wikis...?BR that's because "intelligent" algorithm that convert URL into active links ... but drops the final ")". -- FranklinPiat ?DateTime(2007-06-18T21:45:40Z)
Namespace : the current namespace "Install_Debian_On/Dell/Latitude_D620(etch)" seems good now.?BR I'll go for that. -- FranklinPiat ?DateTime(2007-06-17T17:37:17Z)
Namespace : DO add codename to installation guide page name, like "Install_Debian_On/Dell/Latitude_D620(etch)". Therefore :
- The page remains accurate.
- People don't post comment that apply to testing inadvertently.
-- FranklinPiat ?DateTime(2007-06-17T17:24:00Z)
Updating parent namespace from "DebianOn" to "Installing_Debian_On" (see ["../Frontpage"]) -- FranklinPiat ?DateTime(2007-05-30T06:11:59Z)
That's it. The four first pages are created, which makes the basic structure of DebianOn.
Plus two sample page for [?DebianOnThinkpad] and [?DebianOnThinkpadT60].
I will make the templates later, once i have enough feed-back.
-- FranklinPiat ?DateTime(2007-05-27T20:03:21Z)
Source: https://wiki.debian.org/InstallingDebianOn/Discussion?action=diff&rev1=1&rev2=15
#include <wx/txtstrm.h>
This class provides functions that write text data using an output stream, allowing you to write text, floats, and integers.
You can also use it to simulate the C++
std::cout class.
The wxTextOutputStream writes text files (or streams) on DOS, Macintosh and Unix in their native formats (which differ only in the line-ending convention used).
PutChar(): Writes a character to the stream.
SetMode(): Sets the end-of-line mode.
One of wxEOL_NATIVE, wxEOL_DOS, wxEOL_MAC and wxEOL_UNIX.
Write16(): Writes the 16 bit integer i16 to the stream.
Write32(): Writes the 32 bit integer i32 to the stream.
Write8(): Writes the single byte i8 to the stream.
WriteDouble(): Writes the double f to the stream using the IEEE format.
WriteString(): Writes string as a line.
Depending on the end-of-line mode the end of line ('\n') characters in the string are converted to the correct line ending terminator.
Source: https://docs.wxwidgets.org/3.0/classwx_text_output_stream.html
This paper describes the new dynamic language extensibility model that has enabled Microsoft to introduce IronPython for ASP.NET, a new implementation of the popular Python programming language.
Ever since its original 1.0 version, ASP.NET has supported
language extensibility. This means that a third-party compiler
vendor can add support for using a new programming language for
ASP.NET pages. This model has worked quite well, and has allowed
many languages to be used with ASP.NET: languages from
Microsoft (C#, Visual Basic, J#, and JScript) and from external
vendors, such as implementations of Eiffel and COBOL.
However, the downside of the current extensibility model is
that it primarily targets statically compiled languages like C#,
and is not well adapted to dynamic languages like Python.
We are therefore introducing a new model for language
extensibility. The new model aims to fill the lack of support for
dynamic languages, and it enables dynamic languages to fit much
more naturally into ASP.NET. Our initial implementation is
focused on the IronPython language, but in the near future we
will extend the model to work with any dynamic language.
I’ll begin by explaining the reason behind wanting to support
dynamic languages in ASP.NET. The last thing I want to do is
start a debate about the pros and cons of static typing versus
dynamic languages. Instead, I’ll summarize the reason for doing
this in a single word: choice. There are many good static
languages like C#, and many good dynamic languages like
IronPython, and in the end the choice of what to use comes down
to personal preference and to the nature of the project
you’re working on.
Giving ASP.NET users the choice of languages was part of the
design since our first version, and this is just another step in
that direction. Unlike a number of other Web platforms that
support only a single language, the ASP.NET team wants to enable
users to choose the language that fits them best.
Before getting into the new dynamic model, let’s start
with a discussion of the existing ASP.NET language extensibility
model: how it works, what makes it good, and what makes it
inappropriate for dynamic languages.
The ASP.NET compilation model is based on a powerful .NET
Framework technology named the Code Document Object Model, or the
CodeDOM for short. This model enables code to be written in a
language-independent way. The basic steps for processing ASP.NET
pages using the CodeDOM are these:
1. ASP.NET parses the .aspx file.
2. A CodeDOM tree is generated for a class that derives from System.Web.UI.Page (or from the user's code-behind class, if one is specified).
3. The tree is compiled into an assembly by the CodeDOM provider for the language named in the @ Page directive (for example, language="C#").
4. The compiled Page-derived class is loaded from the assembly.
5. An instance of the class is created to process the request.
Note that steps 1 through 4 happen only once, as long as the
page doesn’t change. Step 5 occurs for every request. An
almost identical sequence occurs with user controls (.ascx files)
and master pages (.master files).
Note: With the code-behind model in ASP.NET 2.0,
there is also a user-written partial class that comes into the
picture. However, this does not substantially change the
structure of the generated class.
What happens to all the server controls, the HTML markup, and
the code snippets in your .aspx file? They’re all handled
by code generated inside the derived class. ASP.NET generates
code that builds the control tree, code that renders markup, and
code for additional tasks like data binding. The code generation
process is relatively complex; the important thing to understand
is that page execution is driven by code.
Let’s take a simple case where you have a TextBox
control on your page, which might look like this:
<asp:textbox
Somewhere in the generated code, and assuming that the page
language is C#, there will be code that looks like the following,
which builds and initializes the control:
TextBox MyTextBox = new TextBox();
MyTextBox.ID = "MyTextBox";
MyTextBox.Text = "Hello";
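For comparison, the same initialization is easy to express in Python syntax. The snippet below is purely illustrative (it is ordinary Python with an invented stand-in TextBox class, not code that ASP.NET generates), but it hints at why page logic itself translates naturally to a dynamic language:

```python
# Illustrative stand-in for a server control; not ASP.NET's actual TextBox class.
class TextBox(object):
    def __init__(self):
        self.ID = None
        self.Text = None

# The same build-and-initialize sequence the generated C# performs.
my_text_box = TextBox()
my_text_box.ID = "MyTextBox"
my_text_box.Text = "Hello"
```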
If you are interested in finding out more about the derived
Page class generated by ASP.NET, I encourage you to take a
closer look at it. Although it might not all make sense, you’ll
still recognize code that relates to many elements in your .aspx
page, and you might find the code enlightening.
The simplest way to look at the generated code is to set
debug="true" in the @ Page directive of the
.aspx page. Then purposely introduce a syntax error in a server
script block (a <script> element with the attribute
runat="server"): for example, just create a
line that says SYNTAX ERROR. When you request the
page, you will see an error message that includes a Show
Complete Compilation Source link. Click the link and you
will see all the generated code. Look for the IDs of some of your
server controls, and you will see how the controls are built and
added to the tree.
The CodeDOM provides a powerful layer of abstraction between
ASP.NET and the programming languages used to create page logic.
This abstraction enables ASP.NET to support an arbitrary set of
languages without having any knowledge about them. And it goes
the other way as well: the implementer of a CodeDOM
provider does not need to know anything about ASP.NET. In fact, a
CodeDOM implementation is useful in many scenarios that have
nothing to do with ASP.NET.
This is a much better model than one where ASP.NET would be
hard-coded to work with a small set of Microsoft languages, and
users would need to wait for a new version of the .NET Framework
to get expanded language support.
If the CodeDOM is so great, why are we coming up with a new
one?
The answer is that even though it is language independent, the
CodeDOM does make a number of assumptions about the capabilities
of supported languages. In particular, it assumes that any
language used for ASP.NET has the ability to produce true classes
in the .NET Framework sense, that is, classes that are
in on-disk assemblies and that can be loaded using standard APIs
like Type.GetType. Those classes must be able to inherit
from other classes like System.Web.UI.Page, override base
class methods, and declare methods with very specific
signatures.
Unfortunately, for most dynamic languages, these seemingly
simple requirements are essentially out of reach. Even though
dynamic languages might have some form of class construct (as in
Python), this capability does not easily map to the .NET
Framework-style classes that we need, mostly because of the lack
of strong typing. For example, in C# you can write a method that
takes a string and returns an integer; in IronPython, you have no
way of specifying such a typed signature. In addition, inheriting
from existing classes and overriding specific methods in
IronPython is more difficult.
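To make the typing gap concrete, here is a small illustration in ordinary Python (runnable under CPython or IronPython; the parse function is invented for this example). Nothing in the declaration says whether it takes a string and returns an integer, which is exactly the information a CodeDOM provider would need to emit a typed .NET signature:

```python
# A Python method declares no parameter or return types, so there is
# nothing here that could be mapped onto a typed .NET method signature.
def parse(value):
    return int(value) * 2

# The same function accepts different argument types at run time;
# its "signature" exists only dynamically.
print(parse("21"))
print(parse(3.5))
```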
Because ASP.NET is designed to inherit from classes like
System.Web.UI.Page and to override a number of methods, we
were faced with an interesting challenge when we decided to add
IronPython support to ASP.NET.
In fact, we initially experimented with using the CodeDOM
approach. We wrote a prototype CodeDOM provider for IronPython,
and we had some success getting it to work with ASP.NET in
constrained scenarios. But we eventually realized that making the
CodeDOM work fully with IronPython would require extending the
language (for example, to add ways to specify typing). We felt
that this was not the right direction. Also, writing a CodeDOM
provider is a non-trivial task, so requiring each dynamic
language to provide one would make it harder for many languages
to adopt the model and support ASP.NET.
At this point, we went back to the drawing board and
decided to design a new extensibility model that makes more sense
for dynamic languages in ASP.NET.
A short detour: I need to explain a little-known feature that is
already part of ASP.NET 2.0: so-called no-compile pages.
This is important because the new model is based on this feature,
and then extends it to support dynamic languages.
This feature is triggered by the CompilationMode
attribute in the page directive, which might look like the
following:
<%@ Page CompilationMode="Never" %>
The no-compile option is also used if you set the
CompilationMode attribute to "Auto" in a page that has no
code.
As the name indicates, a no-compile page is not compiled. In
contrast with compiled pages, which are code driven, no-compile
pages are entirely data driven, and require no compilation at
all. As a result of not being compiled, they are faster to
process and more scalable (more on this later).
So what’s the catch? Well, it’s a big one: those pages
cannot contain any user code! Instead, they’re limited to static
HTML and server controls. Obviously, this is a big limitation,
which explains why no-compile pages are not widely used. But as
you’ll see later, our new model removes this restriction and
enables dynamic code in no-compile pages. Let’s spend a
little more time discussing how no-compile pages work by
comparing the behavior of compiled and no-compilation pages.
Again, I can use .aspx pages to illustrate the concept, but
things work the same way with user controls and master pages.
All I’ve said so far is that no-compile pages are data
driven instead of code driven, but what exactly does that mean?
If you recall the basic steps I described for the compiled
CodeDOM, it starts with parsing the page. The parsing step also
happens for no-compile pages, but that’s the only step they have
in common; everything else happens very differently. Let’s
look at those steps in more detail:

1. ASP.NET parses the page.
2. Instead of generating and compiling code, the parser builds a data structure that describes the page: its static markup, its server controls, and their declarative property values.
3. For each request, ASP.NET uses that data structure to instantiate the page, build its control tree, and render the markup.

In the no-compile scenario, steps 1 and 2 only happen once (as
long as the page doesn’t change), while step 3 happens on
every request.
In this model, no class is ever derived from the base
Page class. Instead, the System.Web.UI.Page class
(or optionally a custom base page if there is an inherits
attribute) is instantiated directly.
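The data-driven idea can be sketched in a few lines of ordinary Python. This is an illustration only (the names page_data and render are invented here; the real implementation is managed code inside ASP.NET): the parser's output is plain data, and each request walks that data to build the control tree rather than running generated code.

```python
# Illustrative sketch: a parsed no-compile page represented as pure data.
page_data = [
    {"type": "Literal", "text": "<h1>Welcome</h1>"},
    {"type": "TextBox", "id": "MyTextBox", "text": "Hello"},
]

def render(page_data):
    """Build the 'control tree' from data and render it, once per request."""
    out = []
    for ctrl in page_data:
        if ctrl["type"] == "Literal":
            out.append(ctrl["text"])
        elif ctrl["type"] == "TextBox":
            out.append('<input id="%s" value="%s" />' % (ctrl["id"], ctrl["text"]))
    return "".join(out)

html = render(page_data)
```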
In spite of their no-code limitation, no-compile pages are far
from useless. If you have a set of controls that encapsulate
their own functionality (such as a weather widget), you can put
together useful pages without any code. In fact, the next major
version of SharePoint relies heavily on this feature, for at
least two good reasons: because such pages load no per-page
assemblies, a server can scale to a very large number of them;
and because they cannot contain arbitrary user code, they are
safe for end users to customize.
But I would certainly agree that for the general ASP.NET
developer, no-compile pages are of limited use. The new model
that we are creating for dynamic languages will likely change
this, because it allows no-compile pages to have code.
You now have enough background information that we can start
looking at this new model.
Disclaimer: We are still at a very early stage in
this project, so details are subject to change. In addition, we
have not yet reached the point where we have a generic pluggable
model, because at this time we are supporting only the IronPython
language. As a result, the discussion that follows is limited to
a high-level description of how the model works, and does not
explain how you could integrate new languages into this model.
But the time for this will come soon!
Up to this point, I’ve looked at two models for ASP.NET
pages: one that allows code but requires static
compilation, and one that does not require any compilation but
doesn’t allow any code. To reach our goal of integrating dynamic
languages into ASP.NET, we need a hybrid model that doesn't require
static compilation but still allows the page to contain code.
I’m not saying that no compilation at all should ever occur in
the new model. In fact, dynamic languages have a lot to gain from
compilation in terms of performance. But what we wanted to avoid
was the requirement of static compilation, by which we
mean the generation of an on-disk assembly implementing standard
.NET Framework types.
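As a rough analogy, standard Python shows how a dynamic language can compile source text at run time, entirely in memory, without ever producing anything like an on-disk assembly. This is an illustration of the general idea, not ASP.NET's actual mechanism:

```python
# Compile a snippet of "page logic" at run time; nothing is written to disk.
source = "greeting = 'Hello, ' + name"
code = compile(source, "<page-script>", "exec")

# Execute the compiled code against a namespace, much as an engine
# might do once per request.
namespace = {"name": "ASP.NET"}
exec(code, namespace)
print(namespace["greeting"])  # -> Hello, ASP.NET
```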
For the most part, the new model builds on top of ASP.NET 2.0.
However, we needed to make a small change to the ASP.NET parser.
This section describes the change, as well as the various ASP.NET
extensibility features that the new model uses to integrate into
ASP.NET.
As I’ve discussed, the new model is based on the ASP.NET
no-compile feature, and as a result it works very similarly to
what I described above. However, the problem with
no-compile pages is that normally the parser fails instantly when
it finds any code in the page. Obviously, this is a problem for
us if we are going to support dynamic code!
Even though we were really hoping to implement the new model
without changing System.Web.dll (the main ASP.NET assembly) we
found that we needed some small changes to the parser to enable
it to accept code in no-compile pages. For that reason, when you
install the dynamic language support, you get a new version of
System.Web.dll.
The change to ASP.NET is in the PageParserFilter API,
which gives external code a hook into the parser. This
PageParserFilter API already existed in ASP.NET 2.0;
it was simply expanded to accommodate the new model.
PageParserFilter
In the new model, we register a PageParserFilter class
in order to customize the parsing behavior and allow code in
no-compile pages. A Web.config file for a dynamic language
application will include this element:
<pages pageParserFilterType="Microsoft.Web.IronPython.UI.NoCompileCodePageParserFilter"
... />
The PageParserFilter class does the following:

- Allows <% ... %> code blocks, <%= ... %> expression snippets, and
  <%# ... %> data-binding expressions in no-compile pages
- Allows event-handler attributes such as onClick="MethodName"
- Allows <script runat="server"> code blocks
In the new model, we also implemented a custom HTTP module
(that is, a class that implements System.Web.IHttpModule).
You can see this registration in the Web.config for an IronPython
application:
<httpModules>
<add name="PythonModule"
type="Microsoft.Web.IronPython.DynamicLanguageHttpModule"/>
</httpModules>
The HTTP module is used to hook early into the application
domain cycle and register various components with the
dynamic-language environment. (The application domain is an area
within the process where all the code from one Web application is
executed.) The module is also used to implement an equivalent to
the Global.asax file for dynamic languages, which will be
discussed later.
In order to implement its behavior, the new model relies on
having all pages that use dynamic language extend a special base
class named ScriptPage, which in turn extends
System.Web.UI.Page. Similarly, we have special base
classes for user controls (ScriptUserControl) and master
pages (ScriptMaster). Having these base classes gives
dynamic language pages a way to participate in the page life
cycle and make everything fit together.
Let’s pause here and look at the features that the new model
supports.
Dynamic language pages don’t look much different from
regular ASP.NET pages, and can essentially use all standard
ASP.NET features, including the following:

- <% ... %> code snippets
- <%# ... %> data-binding expressions
If you’ve used standard ASP.NET pages, there should be a
very short learning curve for using dynamic-language pages.
The new model supports a file similar to the Global.asax file,
but it works a bit differently. Instead of Global.asax, the file
is named Global.ext, where ext is the
language-specific extension (for example, for IronPython the file
name is Global.py).
One important difference is that this file contains only code,
unlike Global.asax, which contains a directive
(<%@ %> element) and a script block with a
runat="server" attribute. For example, a simple
Global.py might contain the following:
def Application_BeginRequest(app):
app.Response.Write("Hello application!");
A dynamic language application contains an App_Script folder
that is similar to the App_Code directory, except that it
contains dynamic-language script files instead of static language
code files. But the general idea is the same: files in this
directory contain classes that are usable by code anywhere in the
application.
A dynamic language application can contain an HTTP handler
that is the equivalent of an .ashx file in a standard ASP.NET
application, but again it works a bit differently. As with the
Global.py file, handlers contain only code, unlike .ashx files,
which contain a directive that ASP.NET recognizes. Dynamic
language handlers are named using the pattern
Web_name.ext. In IronPython, .ext is
.py (for example, Web_MyHelloHandler.py). The Web_
prefix is significant, because it is registered to specify that
the code file is an HTTP handler.
Dynamic language handlers must contain a ProcessRequest
method, which is invoked to handle the HTTP request. This is very
similar to IHttpHandler.ProcessRequest in .ashx files. For
example, a dynamic language handler file might contain the
following:
def ProcessRequest(context):
context.Response.Write("Hello web handler!")
The new model does not currently support an equivalent of
.asmx Web services. The Web service architecture works only with
standard .NET Framework types, which as noted earlier are
difficult to create with dynamic languages. To make things even
trickier, Web service class methods must be decorated with
special metadata attributes (like [WebMethod()]),
and dynamic languages typically have no syntax to do this.
We are hoping to come up with a solution for this
limitation.
As you have seen, dynamic language pages are based on
no-compile pages, but can nonetheless contain code. In this
section I’ll show you how this code is actually handled.
In contrast to the CodeDOM, where all the user code in a page
becomes part of a generated source file, in the new model each
piece of user code in a page is treated as an individual entity.
Let’s look at the various types of user code to understand
how they are used.
In standard ASP.NET pages, user code in <script>
elements with a runat="server" attribute ends up
inside the body of the generated class, which is why it can
contain method and property definitions as well as field
declarations. But in dynamic language pages, we don’t
generate a new class at all; instead, ASP.NET directly
instantiates the class specified by the inherits attribute
(the ScriptPage class). In this respect, the term
"inherits" is inaccurate for dynamic language pages (and for
no-compile pages in general), because there is no inheritance
occurring.
What happens to the contents of the <script>
element? Instead of becoming part of a class, the code becomes a
kind of companion code for the ScriptPage class; you can
also think of it as a pseudo-partial-class. Nomenclature aside,
let me explain how it works by using a simple IronPython example
like this:
<script runat="server">
def Page_Load():
Response.Write("<p>Page_Load!</p>")
</script>
Here, the Page_Load method is not part of any class.
Instead, members of the page class (typically of type
ScriptPage) are ‘injected’ by ASP.NET in order
to be directly available (hence we’re able to use
‘Response’ directly). So for all practical purposes,
you can think of your methods as being part of the page class,
even though from a pure Python perspective they are not part of a
class at all.
In IronPython terminology, the code in the script block lives
in a module. Generally, there is one module associated
with each page, user control, or master page. (Note that there is
only one module instance per page, not an instance per HTTP
request.)
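The companion-module idea can be mimicked in plain CPython. This is an analogy for illustration only, not how the IronPython hosting code actually works: the script block's source is executed in a namespace that has already been seeded with page members such as Response, here played by a hypothetical FakeResponse stand-in.

```python
# Analogy only: run a "script block" in a namespace pre-seeded with
# page members, so the block can use Response without defining it.
class FakeResponse:              # hypothetical stand-in for the page's Response
    def __init__(self):
        self.buffer = []
    def Write(self, text):
        self.buffer.append(text)

script_block = """
def Page_Load():
    Response.Write("<p>Page_Load!</p>")
"""

module_ns = {"Response": FakeResponse()}  # members "injected" by the host
exec(script_block, module_ns)             # the block's code lives in this module
module_ns["Page_Load"]()                  # the page life cycle would invoke this

print(module_ns["Response"].buffer)       # ['<p>Page_Load!</p>']
```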
Let’s look at a different example:
<script runat="server">
someVar=5
</script>
Here, it is important to understand what the scope of the
someVar variable is. Given that it is a module-level
variable (module in the IronPython sense), and that there is only
one module instance per page, it follows that there is only one
instance of the variable. So semantically, it is very similar to
having a static field in a regular ASP.NET page.
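The static-field-like behavior of someVar can be sketched the same way in plain Python (again an analogy, not the actual hosting mechanism): one module namespace per page means every request that touches it sees the same variable.

```python
# One namespace per page: all "requests" share the same someVar,
# just like a static field in a compiled ASP.NET page.
module_ns = {}
exec("someVar = 5", module_ns)

def handle_request(ns):
    ns["someVar"] += 1        # every request mutates the single instance
    return ns["someVar"]

print(handle_request(module_ns))  # 6
print(handle_request(module_ns))  # 7
```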
The new model supports putting code in a code-behind file, as
with the standard ASP.NET model. However, there are important
differences in how this works in the two models.
In the standard model, the code-behind file contains a partial
class declaration, which is merged by the compiler with the
generated class.
In the new model, there is no class declaration in the
code-behind file. Instead, methods appear directly in the file,
outside of any containing construct. If this sounds similar to
what I described for in-line code (code in
<script runat="server"> blocks)
in the preceding section,
it’s because there really is no difference. In the new
model, you can take the exact content of a
<script runat="server"> element and move it to
a code-behind file (for example, MyPage.aspx.py if the page
is MyPage.aspx) without changes.
Thus, everything I discussed above about the scope of
variables applies equally to code-behind files.
As with normal ASP.NET files, whether to put your IronPython
code in line or in a code-behind file is purely a matter of
personal preference: do you prefer to see the code directly in
the .aspx file, or would you rather keep it in its own code file?
The choice is yours.
Snippet expressions (<%= ... %>) and
statements (<% ... %>) also execute in the
context of the module created for the page’s code. As a
result, these snippets have access to methods and variables
defined in in-line or code-behind code. For instance, if you
define a Multiply method in the
<script> block, you can write
<%= Multiply(6,7) %> in your page.
Code snippets also have access to the members of the Page class.
For example, you could write <%= Title %> to
display the title of the page.
Even though data-binding expressions are a type of code
snippet, they deserve to be discussed separately. The reason they
are particularly interesting is that in IronPython, data-binding
expressions work more naturally than they do in standard
pages.
If you have used ASP.NET data-binding expressions, you are
likely familiar with the Eval method. For instance, you
might have a GridView control with a templated column
containing the data-binding expression
<%# Eval("City") %>. Here, the Eval
method is used to get the value of the column named
City for the current row in the database (or for
whatever data source you are using). This works, but the fact
that you must go through this Eval method is rather
awkward.
In the new model, the equivalent is to simply use the snippet
<%# City %>. Here, City is an
actual code expression in the dynamic language, instead of a
literal string that must be interpreted by the Eval
method. Hence, you are free to write arbitrary code in the
expression. For example, with IronPython you could write
<%# City.lower() %> to display the
City value in lower case. This improved syntax is
made possible by the late-bound evaluation supported in dynamic
languages. Even though the meaning of City is not
known at parse time, the dynamic language engine is able to bind
it to the correct object at run time.
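The late binding that makes <%# City.lower() %> work can be sketched with Python's own eval. This is an analogy only (the real engine compiles the snippet rather than interpreting a string each time), but the key point survives: City is resolved against the current data item at run time, not at parse time.

```python
def bind(expression, data_item):
    # The row's keys become the names visible to the snippet,
    # so "City" binds to whatever the current data item holds.
    return eval(expression, {}, dict(data_item))

row = {"City": "SEATTLE", "State": "WA"}
print(bind("City", row))           # SEATTLE
print(bind("City.lower()", row))   # seattle
```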
Another case that demonstrates the flexibility of dynamic
languages over static languages is the injector mechanism
supported by the new model. This is best demonstrated using an
example.
Imagine that you have code in a page that reads a value
named MyValue from the query string.
In a C# page, you would get the value using code like the
following:
String myValue = Request.QueryString["MyValue"];
But in a dynamic language application you can simply write the
following to achieve the same thing:
myVar = Request.MyValue
What exactly makes this work? In the new model we register a
special object known as an injector, which says something
like the following to the dynamic engine: "If you find an
expression SomeObj.SomeName, where
SomeObj is an HttpRequest object and
SomeName is not a real property of the
HttpRequest object, let me handle it instead of
failing."
The way that the injector handles the expression is by calling
SomeObj.QueryString["SomeName"]. Even though
the expression Request.MyValue looks simpler than
Request.QueryString["MyValue"], in the end it is
really executing the same logic.
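Python programmers can picture the injector as the dynamic-language analogue of a __getattr__ hook, which is consulted only when normal attribute lookup fails. The sketch below uses a hypothetical FakeRequest class; real injectors are registered with the IronPython engine rather than written this way.

```python
class FakeRequest:
    """Hypothetical stand-in for HttpRequest: real attributes win,
    and the query string is the fallback -- the injector's essence."""
    def __init__(self, query_string):
        self.QueryString = query_string

    def __getattr__(self, name):
        # Called only when 'name' is not a real attribute.
        try:
            return self.QueryString[name]
        except KeyError:
            raise AttributeError(name)

Request = FakeRequest({"MyValue": "Hello"})
print(Request.MyValue)                 # Hello  (short, injector-style form)
print(Request.QueryString["MyValue"])  # Hello  (the explicit form)
```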
The same injector mechanism is also useful in other cases. For
example, where you would write
SomeControl.FindControl("SomeChildControl") in C#,
you can simply write SomeControl.SomeChildControl in
a dynamic language application.
Furthermore, the injector mechanism is extensible, so if you
have your own type of collection that is indexed by string, you
can write a custom injector for it to simplify the syntax.
Though they may not be revolutionary, features like this and
like the simplified data-binding expression contribute to making
life easier when writing Web applications.
I have discussed the fact that the new model is using the
no-compile feature of ASP.NET. This may lead you to believe that
the code in dynamic language pages is interpreted, but that is
not the case. Though this may appear to contradict the definition
of the no-compile feature, the code in dynamic language
applications is compiled.
The explanation is that the term "no-compile" refers
explicitly to CodeDOM-style static compilation, which is not used
in the new model. But this does not prevent dynamic code from
being compiled on the fly by the dynamic language engine, and
that is what we do. As you would expect, the benefit of compiling
the code is that it executes much faster than if it were
interpreted.
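Python itself illustrates the distinction: source text can be compiled to an in-memory code object at run time and then executed at full speed, without any on-disk assembly ever being produced. This is only a rough analogy for what the dynamic language engine does with page code.

```python
source = "def Multiply(a, b):\n    return a * b\n"

code_obj = compile(source, "<page>", "exec")  # in-memory compilation
ns = {}
exec(code_obj, ns)                            # nothing written to disk

print(ns["Multiply"](6, 7))  # 42
```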
In this document I have looked at both the existing model and
the new model for integrating programming languages into ASP.NET.
It is not my intention to convince our Web developers to stop
using C# and switch to IronPython. Rather, my goal is simply to
explain the differences between the two models, and to keep you
aware of what we are working on. As mentioned, this is still very
much work in progress, so you should expect a revised version of
this paper in the future.
© 2009 Microsoft Corporation.
http://www.asp.net/DynamicLanguages/whitepaper/
A programming question
Hello!
I would like to define a procedure, which puts values in a dictionary:
def put:
    u={}
    u[Cathy]=3232556
    u[John]=4256342
Now if I am outside the procedure, I cannot use the values of u's... Is there a method so that outside the procedure I can use these specific u's?
def put:
    u={}
    u[Cathy]=3232556
    u[John]=4256342

put
print u[Cathy]

Traceback (click to the left of this block for traceback)
...
SyntaxError: invalid syntax
Many Thanks!
Note that it should be "def put()", and the function call should be "put()". You're missing parentheses in both places.
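To make the example above actually usable, three fixes are needed: the parentheses the answer mentions, quoted string keys (Cathy and John are otherwise undefined names), and returning the dictionary so it is visible outside the function:

```python
def put():
    u = {}
    u["Cathy"] = 3232556
    u["John"] = 4256342
    return u          # return the dict so callers can use it

u = put()
print(u["Cathy"])     # 3232556
```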
https://ask.sagemath.org/question/7840/a-programing-question/
The two ways:

Surface.convert_alpha()

and

Surface.set_colorkey((255,0,255))
But using either scheme in an example with a few different pics it does not work. The white backgrounds of all the images still get drawn.
import pygame
pygame.init()
screen = pygame.display.set_mode((500,500))
image = pygame.image.load('/home/metulburr/Pictures/bomb.jpeg').convert_alpha()
#image.set_colorkey((255,0,255))
run = True
while run:
for event in pygame.event.get():
if event.type == pygame.QUIT:
run = False
screen.blit(image, (0,0))
pygame.display.flip()
EDIT: oh i am stupid, setting the colorkey is setting the color that you want to be transparent? lmao OK so that would be 255,255,255 for white. But doing so gives a choppy background, some transparent pixels and some not. I could only assume that the pic's bg is not all pure white at 255,255,255, so i would have to go in and edit it to alpha anyways right? which would defeat the purpose of this whole thing.
EDIT2: It almost seems easier just to go into gimp and manually convert the background to transparent. Then chop the image, crop it, save it under a name and be done with it? I don't get the reasoning behind all these complex methods for creating games, unless you were trying to mimic a program like GIMP.
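If editing every image in GIMP is unappealing, the near-white fringe can also be knocked out in code. The sketch below works on a plain numpy RGBA array (pygame.surfarray can convert a Surface to and from such arrays); the 240 threshold is an arbitrary assumption you would tune per image.

```python
import numpy as np

def knock_out_near_white(rgba, threshold=240):
    """Zero the alpha of every pixel whose R, G and B are all >= threshold."""
    out = rgba.copy()
    near_white = (out[..., :3] >= threshold).all(axis=-1)
    out[near_white, 3] = 0
    return out

# tiny 1x2 image: one JPEG-ish "almost white" pixel, one red pixel
img = np.array([[[250, 248, 252, 255],
                 [255,   0,   0, 255]]], dtype=np.uint8)
result = knock_out_near_white(img)
print(result[0, 0, 3], result[0, 1, 3])  # 0 255
```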
http://www.python-forum.org/viewtopic.php?p=2697
in reply to
Would you stay with Perl if there were no CPAN?
What never ceases to amaze me is the number of people who start to code without a quick look through CPAN to see which parts of their project have been implemented before.
A friend of mine just told us at a perlmomgers meeting how bad Data::Dump output is and that he invented a better, more readable and terse format for displaying nested data.
Turned out to be just a kind of YAML ... only incomplete and not parsable!
sigh...!
Cheers Rolf
( addicted to the Perl Programming Language)
I imagine the situation to be as bad (if not worse) in languages like Ruby, Python, JavaScript, and PHP (Packagist). I mean, look at the module counts for those languages going up like crazy. There's got to be a lot of duplicated effort, hasn't there?
How do we improve this?
Better don't confuse quantity with quality.
Gem seems to have plenty of orphaned alpha stuff because publishing seems to be very easy and hipsters are encouraged to reinvent the wheel, just with two or three strokes of monkey patching and no testing.
I mean should we encourage people to upload plenty of empty distributions with cryptic names?
And PyPi is far from CPAN.
See for a decent comparison!
He's a nice exception to the usual pythonistas spreading FUD; this community has a high ratio of fundamentalists (at least I can't avoid meeting them).
Please don't misunderstand me, it's always good to learn from others and I like a lot about Python & Guido.
But don't underestimate the benefits of Perl culture and a decent community.
It is a good comparison, from 3.5 years ago. .NET's NuGet and node.js' npm are now in existence and growing very quickly. Same for Maven's Central Repo, which has doubled in the last 2 years, and has 46,349 packages with releases in the last 3 years, compared to CPAN's 11,820. .NET's NuGet has 12,191 updated in the last 3 years, and node.js' npm has 27,784 updated in the last 3 years.
I'm basically ignoring Python's PyPi and Ruby's RubyGems because I looked through all the modules/packages starting with 'A', and only 2% of them seem to be downloaded/released with any regularity, and indeed about the same 2% look to be the only ones I could imagine more than a handful of people ever finding useful, ever, just based on their problem space. I conclude from this that the barrier to entry of snagging those namespaces and uploading releases is much too low to have any semblance of quality.
On the other hand, the barrier on maven seems to be fairly appropriate, because rare are the junk modules, just from their names. Based on very informal skimming/reading, the majority of maven (Central Repo's 46,349 packages with a release in the last 3 years) seem to fit into: large enterprise-ish software/frameworks/wrappers, and plugins for all of those to interface with thousands of other systems and protocols.
Completely different from the maven repo, node.js' npm module list is filled with thousands of packages that either 1. implement a new DSL/dialect similar to JavaScript, trying to extend JavaScript's syntax to match the structure and brevity of many other languages, or 2. provide countless mutually exclusive ways to fit many other programming languages' orientations/paradigms/conveniences into JavaScript's syntax. Of course there are also tons and tons of plugin/interface/glue/wrapper packages to other libraries and protocols.
Also quite different, Haskell (well, ghc's) Hackage is the only one with fewer modules/distributions (with a release in the last 3 years) than CPAN - Hackage has 7,012. Then again, Hackage boasts a language/philosophy that deliberately and loudly eschews success, which is to say, "our measure of success is simply to fail as much as possible at everybody else's measures of success." Reading between the lines, they're trying to optimize for minimalism, efficiency, and elegance long-term, even in the published libraries, in exchange for some of the "benefits" of more "flood algorithm"-y approaches... As a result, the vast majority of Hackage packages implement thousands of "known" algorithms and standardized protocols/interfaces, making them very useful to scientists and other users of "hard" comp-sci. While not preaching "one way to do it", in most cases there is only one choice because it is so definitively/obviously optimum, there's no reason to ask the question if you really understand the problem space. And those who don't fully understand the problem spaces generally can't use the language anyway (such as myself).
Anyway, I suspect this post has been tl;dr for a while now... so to wrap up, don't compare Perl to Python or Ruby (same case with PHP) anymore, the three of those are so far behind the .NET ecosystem, the Java monster/monstrosity, and the less visible but ubiquitous JavaScript juggernaut, that if you want to talk about growing the Perl userbase by embracing and extending the other language communities, you should try to target the 90% of the the "trained" professional programmers who use the plurality languages/systems, not the other 10%. However, if you want to gain new users by osmosis from other largely non-"trained programmer" fields, such as IT and sysadmin/system engineering, bioinformatics, statistics (nm, don't bother), well... I don't know what to tell you. Note: "trained" can of course mean "self-trained," but I'm implying "academically or self-trained *well*," since plenty of self-trainers (and school-goers) aren't great at it. oops.. diversion
Note this post isn't about programming languages; it's about the packages in their central repos, if they have one, so I don't yet see a reason to mention all the other languages much bigger than Perl in userbase.
I think I'll write about the CPAN packages with releases in the last 3 years in another spot... .oO( there's only around 12,000; my first guess is it will seem to be a fairly even mix of all of the above-mentioned emphases/areas.. )
In case you're curious, this just hit my inbox, and is more about programming languages proper: General Purpose Programming Languages' Speed of Light.
http://www.perlmonks.org/index.pl?node_id=1029512
KeePass is a "free, open source, light-weight and easy-to-use password manager."
MinLock is a simple plugin for KeePass 2.x that keeps a minimized KeePass locked (re-locks almost immediately after it becomes unlocked, so long as it is still minimized).
This happens to be the case when KeePass is locked and minimized and then its global auto-type feature is used, which can unlock KeePass and leave it unlocked.
To run, just download and extract the PLGX Plugin to your KeePass directory.
KeePass has a short page about plugin development for version 2.x here.
KeePass does have auto-lock features based on timers (e.g. idle time) that can achieve close to this but not quite as well.
The small VS.NET solution is available (you'll have to fix the reference to KeePass.exe).
Here's the full MinLock plugin class:
using System;
using System.Windows.Forms;
using KeePass.Plugins;
namespace MinLock
{
public sealed class MinLockExt : Plugin
{
IPluginHost m_Host;
Timer m_Timer;
public override bool Initialize(IPluginHost host)
{
m_Host = host;
// Upon unlocking, the database is opened and this event fires;
// it likely fires other times too (e.g. opening db from menu).
m_Host.MainWindow.FileOpened += MainWindow_FileOpened;
return base.Initialize(host);
}
public override void Terminate()
{
KillTimer();
base.Terminate();
}
void KillTimer()
{
if (m_Timer != null)
{
m_Timer.Dispose();
m_Timer = null;
}
}
void MainWindow_FileOpened(object sender, KeePass.Forms.FileOpenedEventArgs e)
{
if (m_Timer == null && m_Host.MainWindow.WindowState == FormWindowState.Minimized)
{
// Start a Windows.Forms.Timer, because it's based on the event loop it
// can't interrupt current calls, though calls to Application.DoEvents
// could wreak havoc, but that doesn't appear to be a problem here.
m_Timer = new Timer();
m_Timer.Interval = 1;
m_Timer.Tick += Timer_Tick;
m_Timer.Start();
}
}
void Timer_Tick(object sender, EventArgs e)
{
KillTimer();
if (m_Host.MainWindow.WindowState == FormWindowState.Minimized &&
m_Host.MainWindow.IsAtLeastOneFileOpen())
{
m_Host.MainWindow.LockAllDocuments();
}
}
}
}
This plugin simply responds to a file being opened (which for KeePass, my understanding is that this happens only when it has opened its secure database file). KeePass then has some things to finish doing in the current callstack. Instead of locking in the FileOpened event handler, I do a bit of hackery and use a Windows.Forms.Timer to wait just a bit before re-locking the KeePass workspace. The timer can't fire until the underlying message pump runs, so in this way the plugin doesn't interrupt KeePass in the same callstack of the event firing. KeePass continues on merrily, appears to do all it needs to do with the database, and then very shortly afterwards the message pump runs, the timer tick event fires, and MinLock re-locks KeePass.
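The same defer-until-the-event-loop-runs trick exists in any event-driven framework. As an illustration only (unrelated to the actual WinForms code above), here is the ordering reproduced with Python's asyncio: the handler schedules the re-lock and finishes first.

```python
import asyncio

order = []

async def main():
    loop = asyncio.get_running_loop()

    def relock():
        order.append("relock")

    def file_opened():                    # plays the FileOpened handler
        order.append("file_opened:start")
        loop.call_soon(relock)            # defer, like the Forms.Timer
        order.append("file_opened:end")

    file_opened()
    await asyncio.sleep(0)                # let pending callbacks run

asyncio.run(main())
print(order)  # ['file_opened:start', 'file_opened:end', 'relock']
```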
MinLock was developed with KeePass version 2.19. See the plugin development page for details about PLGX compatibility with future versions of KeePass.
This approach turned out pretty well because MinLock re-locks KeePass before it even finishes auto-typing.
KeePass plugins are cake.
This might be better off as an official KeePass option-based feature instead of a plugin.
MinLock does break one minor feature of KeePass. If KeePass is minimized and locked, the user can right-click the tray icon and select "Unlock Workspace". That feature of KeePass does not restore the KeePass Password Safe window, so almost instantly after unlocking the plugin will re-lock KeePass. There are at least 3 other ways to unlock a minimized-and-locked KeePass that don't have this issue (because they all restore the KeePass window before presenting the unlock dialog); so use one of these instead: 1) double click the tray icon, 2) tray icon --> "Tray / Untray", 3) Ctrl + Alt + K (the default hotkey to show KeePass).
http://www.codeproject.com/Articles/404071/MinLock-a-KeePass-2-x-Plugin-to-keep-minimized-Kee?fid=1731655&df=10000&mpp=25&noise=2&prof=True&sort=Position&view=Topic&spc=Compact
Tips on Exploring and Using PyGAMMA
This page gives some very basic advice on how to explore and use PyGAMMA. As users make greater use of this facility and have time to give back to the GAMMA/PyGAMMA community, we encourage their contributions to this page.
Basics
If you are new to Python, you will first need to get comfortable with that language.
It will be assumed below that you downloaded and installed the PyGAMMA library.
The next step is to get familiar with what functions are available in GAMMA. There is currently no official documentation for this, but one can look into the Swig interface files to see what functions have been converted to Python and learn more about their interfaces. Here is a listing of what files have been "Swigged" so far: Swigged GAMMA Files.
Exploring PyGAMMA (Via the Swig Interface Files)
Say you want to create a spin system. You discover in the directory src/HSLib there are three promising files, SpinSystem.cc, SpinSystem.h and SpinSystem.i. The first two files are C++ code. The third file, SpinSystem.i, is the Swig interface file. All commands in files with a ".i" extension - that are not commented out - are available in Python. Swig knows how to convert Python strings, integers, floating point numbers, and arrays (lists, numpyarrays, etc) to and from C++ strings, integers, floats or doubles, and std::vector and std::list, etc.
The listing below is a truncated version of SpinSystem.i.
// SpinSystem.i
// Swig interface file.

%{
#include "HSLib/SpinSystem.h"
%}

%include "std_string.i"
%include "std_vector.i"
%include "HSLib/SpinSys.i"

%rename(__assign__) spin_system::operator=;

class spin_system: public spin_sys
{
public:

  spin_system(int spins=0);
  spin_system(const spin_system &sys);
  virtual ~spin_system ();

  spin_system& operator= (const spin_system &sys);

  virtual void   shifts(double shift=0);
  virtual void   shift(int, double);
  virtual double shift(int) const;

  double maxShift() const;
  double maxShift(const std::string& Iso) const;
  double minShift() const;
  double minShift(const std::string& Iso) const;
  double medianShift() const;
  double lab_shift(int) const;          // Typically ~10^8 !

  // ...REMOVED THIS SECTION OF CODE FOR BREVITY...
};
You can see that there are two ways to create a new spin system. One involves using an integer as an input to specify the number of spins,

spin_system(int spins=0);

The other requires an existing spin system as an input (the copy constructor),

spin_system(const spin_system &sys);

We'll put that knowledge to use very shortly.
Using PyGAMMA
We list here two ways to use PyGAMMA. You can use PyGAMMA interactively from a command line Python session, or you can write a PyGAMMA file/program that you will call/run from the command line (see below) with python.
- Using PyGAMMA from a Python command line session.
- Start a python session
- Import pygamma (and assign an alias to it, if you want)
- Use pygamma, e.g. to create a new spin system. Note, you can see that the variable sys in fact "points" to a proxy of a Swig spin_system.
C:\>python
Enthought Python Distribution -- Version: 6.2-2 (32-bit)
Python 2.6.5 |EPD 6.2-2 (32-bit)| (r265:79063, May 7 2010, 13:28:19) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import pygamma as pg
>>>
>>> sys = pg.spin_system(3)
>>>
>>> sys
<pygamma.pygamma.spin_system; proxy of <Swig Object of type 'spin_system *' at 0x00A6F8F0> >
>>>
- Using PyGAMMA "code" from a file.
Here is a working example that includes a spin system.
The listing below could be typed into a Python command line session, but if you're going to use something over and over again it's better to create a file and save your PyGAMMA python code. The following text is from the file called fid.py:
from __future__ import division
import pygamma as pg

infile = 'gsh_test.sys'
outfile = "gsh_fid_pytest.txt"

h1 = "FID Simulation Test"
h2 = "using input sys file: " + infile
outname = "test_lines"
header = (h1, h2)

sys = pg.spin_system()
sys.read(infile)
specfreq = sys.Omega()

H = pg.Hcs(sys) + pg.HJ(sys)
D = pg.Fm(sys)

ac = pg.acquire1D(pg.gen_op(D), H, 0.001)
ACQ = ac

sigma = pg.sigma_eq(sys)
sigma0 = pg.Ixpuls(sys, sigma, 90.0)

mx = ACQ.table(sigma0)
mx.dbwrite(outfile, outname, specfreq, sys.spins(), 0, header)   # Print Table
This file/program can be run by typing this at the command line:
python fid.py
Here is the listing for the input file needed to run fid.py, called gsh_test.sys:
SysName (2)    : gsh_test
NSpins (0)     : 3
- Chemical shifts for gsh and H2O.
Iso(0) (2)     : 1H
Iso(1) (2)     : 1H
Iso(2) (2)     : 1H
PPM(0) (1)     : 3.77
PPM(1) (1)     : 6.0
PPM(2) (1)     : 4.7
J(0,1) (1)     : 6.5
J(0,2) (1)     : 0.0
J(1,2) (1)     : 0.0
MutExch(0) (2) : (1,2)
Kex(0) (1)     : 5.0
Omega (1)      : 170.67
This example is currently part of our python test listed in the src/pyTests directory.
Accessing PyGAMMA Object Data
Since the code underlying PyGAMMA is fundamentally C++ code there will be a few differences in how you make use of it in Python from how you might expect.
For example, in Python all classes have member variables that are "public" meaning that anyone who has access to the object can access the member variables. In C++ however, there are frequently cases where internal class variables are defined as "private" meaning that only this class and a few select and well defined other classes can access it's members. So when accessing a C++ private variable's value, you often need to use "getter and setter" methods.
Here is an example for the PyGAMMA class IsotopeData.
We could create this Isotope Data for "Hyper Spin Hydrogen" and assign it to the variable "id".
>>> id = pg.IsotopeData(1, "17H", "Hyper Spin Hydrogen", "Hydrogen", 1, 17, 42, 101, 33)
>>> id
<pygamma.pygamma.IsotopeData; proxy of <Swig Object of type 'IsotopeData *' at 0x1c253930> >
You can always check what attributes and methods are available in an object such as id (in this case of class IsotopeData) by using the python dir command, e.g.
>>> dir(id)
['HS', '__assign__', '__class__', '__del__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__swig_destroy__', '__swig_getmethods__', '__swig_setmethods__', '__weakref__', 'electron', 'element', 'mass', 'momentum', 'name', 'number', 'printStrings', 'qn', 'recept', 'rel_freq', 'symbol', 'this', 'weight']
Let's say you wanted to check id's mass; you might try to access it directly using id.mass, but this would not get you what you wanted...
>>> idm = id.mass
>>> idm
<bound method IsotopeData.mass of <pygamma.pygamma.IsotopeData; proxy of <Swig Object of type 'IsotopeData *' at 0x1c253930> >>
This only gets you the bound method object mass itself, not its value.
Instead you will need to call the method mass like this, id.mass().
>>> idm = id.mass()
>>> idm
17
Most of pygamma's objects are meant to be used internally, but they can be converted to Python/NumPy objects (providing a set of such conversion functions would be a good future pygamma project). For example, if you want to convert a pygamma complex number to a NumPy complex number, you could do the following:
>>> import pygamma as pg
>>> import numpy as np
>>> n = pg.complex(3, 2)
>>> p = np.complex(n.Rec(), n.Imc())
>>> p
(3+2j)
And recall that you could have discovered the Rec() and Imc() methods for producing the real and imaginary components of n by running dir(n).
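If this conversion comes up often, it can be wrapped in a small helper. This is a sketch, not part of PyGAMMA's API; it assumes only that the object exposes Rec() and Imc() methods (a stand-in class is used here so the snippet runs without PyGAMMA installed):

```python
def to_python_complex(n):
    """Convert any object exposing Rec()/Imc() accessors (like pygamma.complex)
    into a built-in Python complex number."""
    return complex(n.Rec(), n.Imc())

# Works with anything that quacks like pygamma.complex:
class FakeComplex:
    def Rec(self): return 3.0
    def Imc(self): return 2.0

print(to_python_complex(FakeComplex()))  # (3+2j)
```

The same helper works unchanged on a real pygamma.complex instance, since it only calls the two accessors.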
Happy Trails exploring PyGAMMA!
http://scion.duhs.duke.edu/vespa/gamma/wiki/PyGammaUsageTips
Hi,
I'm the original author and project manager for Webware. I have read
your comparison to Webware at.
I'm writing to kindly ask you to remove this item from your list:
* Web Components. SkunkWeb encourages the componentization of your
web pages through caching and the like. It can also call components on
other SkunkWeb servers if you set it up to do so.
Webware promotes componentization through several means and was
designed to do so from the very start.
- Webware breaks down into discrete, focused packages.
- The app server is based on servlet factories, of which, you
can install your own.
- Servlets can internally forward messages to each other.
- Servlets can include each other's output.
- The app server supports XML-RPC and Pickle-RPC which facilitate
Pythonic communication between multiple app server instances.
As you can see, there is plenty of focus on components/objects in
Webware.
Regarding caching, the app server caches all the servlets which in turn
decide for themselves what to cache. For example, a method might do
something like:
    def foo(self):
        if self._foo is None:
            self._foo = lotsOfWorkToComputeFoo()
        return self._foo
So foo() only works hard the first time and on subsequent calls returns
the cached instance/string/whatever.
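A minimal runnable version of that pattern (the class name and the expensive computation are invented for illustration; Webware servlets are not required):

```python
class Servlet:
    def __init__(self):
        self._foo = None  # cache slot; filled on first access

    def foo(self):
        if self._foo is None:
            self._foo = self.lotsOfWorkToComputeFoo()
        return self._foo

    def lotsOfWorkToComputeFoo(self):
        # Stand-in for an expensive computation.
        return sum(i * i for i in range(1000))

s = Servlet()
first = s.foo()    # computed on the first call
second = s.foo()   # served from the cache afterwards
print(first == second)  # True
```

The point is that the caching decision lives in the servlet, not in the app server.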
The app server doesn't directly get involved in your caching decisions,
as the semantics of your application really determine what can be
cached and for how long.
Also, MiddleKit caches objects extracted from databases and guarantees
their uniqueness (e.g., one distinct record never creates more than one
Python object in a given process).
There are probably several other interesting differences between our
products, but since I'm not familiar with SkunkWeb, I can only correct
the misperceptions of Webware.
In any case, the implication that Webware is focused on components is
the item I really wanted to address the most. Probably the "caching"
Cheers,
-Chuck
http://sourceforge.net/p/webware/mailman/webware-discuss/thread/20020329055725.BGGZ29627.lakemtao01.cox.net@there/
Created on 2017-08-08 14:09 by Mikołaj Babiak, last changed 2018-01-12 06:07 by rhettinger. This issue is now closed.
# list of tuples in form of (priority_number, data)
bugged = [
    (-25.691, {'feedback': 13, 'sentiment': 0.309, 'support_ticket': 5}),
    (-25.691, {'feedback': 11, 'sentiment': 0.309, 'support_ticket': 3}),
    (-25.0, {'feedback': 23, 'sentiment': 0.0, 'support_ticket': 15}),
]
from queue import PriorityQueue
pq = PriorityQueue()
for item in bugged:
pq.put(item)
# TypeError: '<' not supported between instances of 'dict' and 'dict'
It seems that if priority_numbers are equal, heapq.heapify() falls back to comparing data element from tuple (priority_number, data).
I believe this is undesired behaviour.
It is actually listed as one of the implementation challenges on:
"Tuple comparison breaks for (priority, task) pairs if the priorities are equal and the tasks do not have a default comparison order."
In Python 2.7 the issue is not present and PriorityQueue.put() works as expected
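For reference, the same failure can be reproduced with heapq directly under Python 3 (data trimmed to two equal-priority entries):

```python
import heapq

# Equal priorities force the tuple comparison down to the dict payloads,
# which are not orderable in Python 3.
items = [(1, {'a': 1}), (1, {'b': 2})]
try:
    heapq.heapify(items)
except TypeError as exc:
    print("TypeError:", exc)
```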
I don't see any way to change PriorityQueue to fix this. The non-comparability of dicts likely won't be restored, nor will the lexicographic comparison order of tuples be likely to change.
One possible way forward is to provide a wrapper with the desired comparison behavior:
pq = PriorityQueue()
pq.put(Prioritize(13, task))
task = pq.get().item
where Prioritize is implemented something like this:
import functools

@functools.total_ordering
class Prioritize:

    def __init__(self, priority, item):
        self.priority = priority
        self.item = item

    def __eq__(self, other):
        return self.priority == other.priority

    def __lt__(self, other):
        return self.priority < other.priority
from queue import PriorityQueue, Empty
from contextlib import suppress

bugged = [
    (-25.691, {'feedback': 13, 'sentiment': 0.309, 'support_ticket': 5}),
    (-25.691, {'feedback': 11, 'sentiment': 0.309, 'support_ticket': 3}),
    (-25.0, {'feedback': 23, 'sentiment': 0.0, 'support_ticket': 15}),
]

pq = PriorityQueue()
for priority, item in bugged:
    pq.put(Prioritize(priority, item))

with suppress(Empty):
    while True:
        item = pq.get_nowait().item
        print(item)
Nick, what do you think about this proposal?
The problem being solved is that decorated tuples no longer work well for controlling ordering (because so many types are now non-comparable). This provides a wrapper to control which fields are used in comparison.
For implementation and API, there are several ways to do it, (accept separate arguments vs accept an existing sequence, standalone class vs a tuple subclass, pure python and/or c-implementation, etc).
The main downside I see to that approach is that it would still require quite a few client code changes to restore compatibility for folks upgrading from 2.7, and even though six could add a "six.Prioritize" backport, it would still be difficult for automated tools to work out *where* such a wrapper would be appropriate.
So I'm wondering whether it might be worth defining a heapq.compareitem helper that special-cases tuples, such that heapq switches to using a slightly modified definition of tuple comparisons:
def compareitem(lhs, rhs):
    """<= variant that ensures all tuples are orderable"""
    if not isinstance(lhs, tuple) or not isinstance(rhs, tuple):
        return lhs <= rhs
    # Compare tuples up to first unequal pair
    for lhs_item, rhs_item in zip(lhs, rhs):
        if lhs_item != rhs_item:
            try:
                return lhs_item < rhs_item
            except TypeError:
                pass
            break
    # All item pairs equal, or unorderable pair found
    return len(lhs) <= len(rhs)
The key difference would be that if the heap-centric tuple comparison encounters a non-equal, unorderable pair of items, it would fall back to just comparing the tuple lengths (just as regular tuple comparison does when all item pairs are equal), rather than letting the TypeError propagate the way the default tuple comparison operator does.
The heap invariant would change slightly such that "storage.sort(key=heapq.compareitem)" would reliably preserve the heap invariant without raising an exception, while "storage.sort()" might instead fail with TypeError.
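A quick check of the intended fallback behavior (the helper is restated here, with the condition spelled out, so the snippet is self-contained):

```python
def compareitem(lhs, rhs):
    """<= variant that ensures all tuples are orderable."""
    if not isinstance(lhs, tuple) or not isinstance(rhs, tuple):
        return lhs <= rhs
    # Compare tuples up to the first unequal pair.
    for lhs_item, rhs_item in zip(lhs, rhs):
        if lhs_item != rhs_item:
            try:
                return lhs_item < rhs_item
            except TypeError:
                pass
            break
    # All item pairs equal, or an unorderable pair was found.
    return len(lhs) <= len(rhs)

# Unequal, unorderable second items: falls back to comparing lengths.
print(compareitem((1, {'a': 1}), (1, {'b': 2})))  # True (2 <= 2)
# Orderable items are compared normally.
print(compareitem((1, 5), (1, 3)))                # False
```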
We already have recommendations in the heapq documentation on how to do a work-around. I'm looking at the more general problem of how can we make it easy once again to decorate a value with a sort value (not just for heaps but for anyplace where comparisons are made).
I would like our preferred answer to be something better than, "take all your existing functions that use comparisons and make new variants that compute and cache key functions". Instead, I would rather, "keep your existing functions simple and just wrap your data in something that specifies comparison values that are computed just once".
The old Schwartzian transform (decorate-compare-undecorate) had broad applicability but was effectively killed when a simple tuple no longer served for decoration.
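For readers unfamiliar with it, the transform pairs each value with a precomputed key; the tie-breaking index is the usual trick that keeps comparisons from ever reaching unorderable payloads (the data here is made up):

```python
data = [{'name': 'banana'}, {'name': 'apple'}, {'name': 'cherry'}]

# Decorate: pair each value with (sort_key, original_index, value).
decorated = [(d['name'], i, d) for i, d in enumerate(data)]
decorated.sort()                        # never compares the dicts themselves
result = [d for _, _, d in decorated]   # undecorate
print([d['name'] for d in result])      # ['apple', 'banana', 'cherry']
```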
FWIW, the DataClass discussion has also ventured into this territory (the field definitions can specify whether or not a field is included in the rich comparison methods).
My rationale for asking "What if we just changed heapq back to working closer to the way it used to work?" is that it's a case where arbitrarily ordering unorderable tuples made sense, and reverting it to the old behaviour is reasonably safe:
- some Py3 heapq code that previously raised TypeError would start using an arbitrary ordering instead
- Py2 heapq code would get a *different* arbitrary ordering in Py3, but it would still get an arbitrary ordering
I don't feel especially strongly about that though, so if you prefer the approach of defining a new more explicit idiom to replace the old "make a tuple" one, I think a new wrapper type is a reasonable way to go, but using "Prioritize" as a name is probably too specific to the PriorityQueue use case.
As a more generic name, "KeyedItem" might work:
```
import functools

@functools.total_ordering
class KeyedItem:

    def __init__(self, key, item):
        self.key = key
        self.item = item

    def __eq__(self, other):
        return self.key == other.key

    def __lt__(self, other):
        return self.key < other.key
```
So applying an arbitrary key function would look like:
decorated = [KeyedItem(key(v), v) for v in values]
And if it was a tuple subclass, it would also work with APIs like the dict constructor.
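Putting the idiom together in a self-contained form (the wrapper is restated so the snippet runs on its own; the data is invented):

```python
import functools

@functools.total_ordering
class KeyedItem:
    def __init__(self, key, item):
        self.key = key
        self.item = item
    def __eq__(self, other):
        return self.key == other.key
    def __lt__(self, other):
        return self.key < other.key

values = [{'id': 3}, {'id': 1}, {'id': 2}]           # dicts: not orderable
decorated = [KeyedItem(v['id'], v) for v in values]  # decorate with keys
ordered = [k.item for k in sorted(decorated)]        # sort, then undecorate
print(ordered)  # [{'id': 1}, {'id': 2}, {'id': 3}]
```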
I think it is a good idea to have a simple way to add a value to sort on in general; it could have some interesting use-cases. Also, I am with Nick: a name change would make the broader scope clearer.
What I am not sure about is making the comparison have to be between two items of the same wrapper type - since right now we are comparing priority to priority attributes.
It makes sense for PriorityQueues, but if we wanted to use this for something more arbitrary like comparing a string with an int priority to an int we end up having to convert both data types to the new wrapper type:
str_cmp = KeyedItem(20, 'cat')
int_cmp = KeyedItem(30, 30)
str_cmp < int_cmp
I don't like having to convert to the new wrapper unless it's relevant, I'd rather do:
str_cmp = KeyedItem(20, 'cat')
str_cmp < 30
It could be instead:
class KeyedItem:

    def __init__(self, key, item):
        self.key = key
        self.item = item

    def __eq__(self, other):
        if not isinstance(other, KeyedItem):
            return self.key == other
        return self.key == other.key

    def __lt__(self, other):
        if not isinstance(other, KeyedItem):
            return self.key < other
        return self.key < other.key

    ...
FWIW, the new dataclasses module makes it really easy to create a wrapper:
from dataclasses import dataclass, field
from typing import Any

@dataclass(order=True)
class KeyedItem:
    key: int
    item: Any = field(compare=False)

def f(): pass
def g(): pass

print(sorted([KeyedItem(10, f), KeyedItem(5, g)]))
I'm thinking of just making an example in the docs and closing this out.
New changeset 0c3be9651f4f149f4a78bb7043d26db9e75cabc0 by Raymond Hettinger in branch 'master':
bpo-31145: Use dataclasses to create a prioritization wrapper (#5153)
https://bugs.python.org/issue31145
IRC log of svg on 2012-01-05
Timestamps are in UTC.
20:01:50 [RRSAgent]
RRSAgent has joined #svg
20:01:50 [RRSAgent]
logging to
20:01:51 [trackbot]
RRSAgent, make logs public
20:01:52 [Zakim]
Zakim has joined #svg
20:01:53 [trackbot]
Zakim, this will be GA_SVGWG
20:01:53 [Zakim]
ok, trackbot; I see GA_SVGWG(SVG1)3:00PM scheduled to start now
20:01:55 [trackbot]
Meeting: SVG Working Group Teleconference
20:01:55 [trackbot]
Date: 05 January 2012
20:02:03 [cabanier]
cabanier has joined #svg
20:02:48 [Zakim]
GA_SVGWG(SVG1)3:00PM has now started
20:02:55 [Zakim]
+ +1.206.675.aaaa
20:03:18 [cabanier]
zakim, +1.206.675.aaaa is me
20:03:18 [Zakim]
+cabanier; got it
20:03:32 [Zakim]
+??P2
20:03:48 [Zakim]
+Doug_Schepers
20:03:51 [ed]
Zakim, ??P2 is me
20:03:51 [Zakim]
+ed; got it
20:04:24 [Zakim]
+ +33.9.53.77.aabb
20:04:50 [Zakim]
+ +41.83.2.aacc
20:04:51 [Tav]
zakim, +33 is me
20:04:51 [Zakim]
+Tav; got it
20:05:14 [Zakim]
+[IPcaller]
20:05:15 [heycam]
Zakim, [ is me
20:05:15 [Zakim]
+heycam; got it
20:05:18 [ed]
Agenda:
20:05:34 [vhardy]
vhardy has joined #svg
20:05:58 [ed]
Zakim, who's here?
20:05:58 [Zakim]
On the phone I see cabanier, ed, Doug_Schepers, Tav, +41.83.2.aacc, heycam
20:07:04 [vhardy]
ScribeNick: vhardy
20:07:30 [vhardy]
Topic: Sydney F2F
20:08:11 [vhardy]
ed: if you have not added your agenda request, now is a good time. If we do not manage to fill up on the time, we will use the time for spec editing and discussing the requirements.
20:08:11 [ed]
20:08:52 [vhardy]
heycam: discussing the requirements will take time, but we can finish it off. Looking at the remainder of the agenda, we may not fill all the time. Probably a lot of us did not have time to do all the work they needed to do.
20:09:28 [vhardy]
ed: today is the last day people can register for the SVG F2F.
20:09:31 [Zakim]
+ +61.2.980.5.aadd
20:09:34 [ed]
20:09:51 [cyril|away]
cyril|away has joined #svg
20:10:07 [shepazu]
Zakim, aadd is cyril|away
20:10:07 [Zakim]
+cyril|away; got it
20:10:07 [vhardy]
ed: anything more about the F2F?
20:10:32 [vhardy]
cyril: everything should be in order. The hotel is asking about the meeting times. I will say 8-9am to 6pm.
20:10:37 [vhardy]
ed: sounds fine to me.
20:10:51 [vhardy]
cyril: when is the SVG event?
20:11:11 [vhardy]
shepazu: then we should finish earlier that day. I'll put you in touch with John so that you can get the information.
20:11:25 [vhardy]
cyril: I think I am all set for the meeting.
20:11:44 [vhardy]
cyril: who is staying at the Novotel?
20:11:53 [vhardy]
vhardy/ed/heycam: we are.
20:12:02 [vhardy]
cyril: I will send an email for the hotel discount.
20:12:16 [vhardy]
shepazu: for the event, here is what John put forward.
20:12:20 [vhardy]
6-6:30: drinks.
20:12:29 [cyril]
RRSAgent, pointer
20:12:29 [RRSAgent]
See
20:12:57 [krit]
krit has joined #svg
20:13:10 [vhardy]
Talks by Vincent, Heycam, Chris and Dmitry.
20:13:44 [vhardy]
s/Heycam/
20:14:03 [vhardy]
ACTION: shepazu to send SVG social agenda to the www-svg-wg@w3.org
20:14:04 [trackbot]
Created ACTION-3193 - Send SVG social agenda to the www-svg-wg@w3.org [on Doug Schepers - due 2012-01-12].
20:14:26 [vhardy]
shepazu: there will be a panel session.
20:14:41 [vhardy]
shepazu: I apologize for not being able to attend myself.
20:14:56 [vhardy]
shepazu: the agenda can be tweaked / modified.
20:16:42 [vhardy]
Topic: introducing Dirk Schulze who will represent Adobe on the SVG WG.
20:16:48 [vhardy]
welcomes from the group :-)
20:17:28 [vhardy]
Topic: May F2F
20:17:41 [vhardy]
vhardy: the CSS WG decided yesterday to move to Hamburg.
20:18:17 [vhardy]
vhardy: proposes to keep the meeting in Hamburg.
20:18:23 [vhardy]
ed/heycam: yes, makes sense.
20:18:45 [vhardy]
RESOLUTION: The May F2F meeting will be in Hamburg instead of Bucharest.
20:19:00 [vhardy]
ACTION: vhardy to update the May F2F meeting page.
20:19:00 [trackbot]
Created ACTION-3194 - Update the May F2F meeting page. [on Vincent Hardy - due 2012-01-12].
20:19:04 [thorton_]
thorton_ has joined #svg
20:19:21 [vhardy]
cyril: is anybody from Microsoft or Apple going to attend the SVG meeting in Sydney?
20:19:37 [vhardy]
heycam: Jen said she would not be able to attend. I did not hear anything about Patrick.
20:19:47 [vhardy]
shepazu: I do not think Patrick will join.
20:20:02 [vhardy]
cyril: what about Apple? Dean?
20:20:37 [vhardy]
vhardy: may be you can reach Dean on the Webkit irc.
20:21:15 [vhardy]
heycam: Doug, what is the current state on Tiling and Mapping task force.
20:21:32 [vhardy]
shepazu: so far, nothing has happened. We need to be a bit more aggressive about it.
20:21:52 [vhardy]
shepazu: the task force has not been started yet. We need leadership
20:22:27 [vhardy]
shepazu: we talked about George Held leading the effort, but I am not 100% sure. One of the SVG-GIS proponents.
20:23:03 [vhardy]
shepazu: any suggestion on who could lead that.
20:23:21 [vhardy]
vhardy: would Chris Lilley know or have a suggestion?
20:23:30 [vhardy]
shepazu: not sure.
20:23:49 [vhardy]
shepazu: I'll ping Andreas Neuman and see if he has any ideas.
20:24:20 [ed]
20:25:20 [vhardy]
Topic: Allowing select SVG elements in <head>
20:25:59 [vhardy]
ed: this was an old proposal to consider certain elements to be placed in the <head> element of HTML. Cameron, do you know if this was discussed at any point.
20:26:13 [vhardy]
heycam: not by me. There might have been mails on the whatwg mailing list.
20:27:14 [vhardy].
20:28:00 [vhardy]
.... this works in most browsers but the validators do not like it. Hixie suggests that the SVG WG categorizes some elements/context, some being rendering context and some 'metadata' context.
20:29:11 [vhardy]
... Then we should discuss the behavior when content is not rendered. This seems reasonable. This is different from SVG metadata but that is ok. It should be allowed to have SVG in the <head> and it should not be rendered.
20:30:12 [vhardy]
... The second question is what happens if you only put a <defs> section in the HTML <head> element. This is no longer saying that this is SVG content. If SVG is one of the fundamental Web content types, then we could accept that.
20:31:08 [vhardy]
heycam: if you have an HTML document and use the HTML parser, then, if there is no <svg> element, the <defs> element is not parsed as being in the SVG namespace.
20:31:46 [vhardy]
shepazu: yes, that is the way the parser works. And yes, there is reluctance by many people, for various reasons, to change the parser at this point. I have heard a proposal that SVG elements could be in the HTML namespace.
20:31:56 [vhardy]
.... I think ed suggested that at TPAC.
20:32:04 [vhardy]
ed: I do not remember saying that.
20:32:13 [vhardy]
shepazu: may be someone else suggested it.
20:32:41 [vhardy]
ed: having an SVG element without the surrounding <svg> element is hard because it is currently needed to resolve things like percentages.
20:32:51 [vhardy]
shepazu: I would contest that view.
20:34:08 [vhardy]
shepazu:.
20:34:41 [vhardy]
ed: I think this is harder than just 'believing' it is there. I think you could still have a stub element inserted, but that is still a bit of work.
20:35:07 [vhardy]
shepazu: I do not contest the implementation difficulty, but from a specification perspective, I do not think it is difficult.
20:35:53 [vhardy]
dirk: we should think about how SVG elements fit into the box model of CSS. I do not think that defining the bounding box is enough.
20:36:03 [vhardy]
shepazu: may be we should address the first issue first.
20:36:49 [vhardy]
... we should decide if an <svg> element should be allowed in an HTML <head> element. Can we come on a consensus on that?
20:36:57 [vhardy]
ed: is there any browser where this does not work?
20:37:26 [vhardy]
shepazu: the only problem, I think, is that the content does not validate. The report says it works in Gekko, WebKit and Opera.
20:38:01 [vhardy]
shepazu: he put an <svg> in the <head> and inside the body of the html, he used that first svg in a new svg. This seems pretty natural to me.
20:38:10 [vhardy]
shepazu: I think this is pretty common.
20:38:20 [vhardy]
rik: this seems pretty natural.
20:38:35 [vhardy]
rik: would foreignObject be allowed in there?
20:38:58 [vhardy]
shepazu: yes, I do not think the SVG would behave any different, except that it is not rendered. That seems to be the most consistent approach.
20:39:18 [vhardy]
shepazu: we should ask Hixie what he means by categorizing at metadata?
20:39:29 [vhardy]
shepazu: does anybody have a problem with this idea?
20:40:06 [vhardy]
heycam: the slight reservation I have is that in HTML, there is never rendering elements in the <head> element. I am wondering if that makes it trickier to implement.
20:40:18 [vhardy]
shepazu: apparently, this is not tricky because it is already implemented.
20:41:45 [vhardy]
heycam: the other thing I am wondering is that if we can have a foreignObject in the svg in the <head>, you could have a <body> element that can interfere with the parser's behavior.
20:42:15 [vhardy]
shepazu: I think that if you have SVG in the body, you can already have a similar situation. We could do research ourselves.
20:42:27 [vhardy]
heycam: this may be more involved parser-wise than it sounds.
20:42:49 [vhardy]
heycam: I think it may have consequences and change the parser. We should do some research.
20:44:27 [vhardy]
shepazu: I would like to come to a resolution on this.
20:44:42 [vhardy]
.... Cameron expressed concerns on the implications on the parser.
20:44:44 [krit]
Just for the SVGElement in the <header>. SVG Elements get to HMTL elements at the moment for WebKit:
20:44:46 [krit]
<!DOCUMENT html>
20:44:46 [krit]
<html>
20:44:46 [krit]
<head>
20:44:46 [krit]
<svg xmlns:
20:44:46 [krit]
<rect id="test" width="100" height="100" fill="red"/>
20:44:47 [krit]
</svg>
20:44:49 [krit]
</head>
20:44:51 [krit]
<body>
20:44:52 [krit]
<svg xmlns:
20:44:55 [krit]
<use xlink:
20:44:57 [krit]
<svg>
20:44:59 [krit]
</body>
20:45:01 [krit]
</html>
20:45:03 [krit]
(rmove the SVG element first :))
20:45:07 [krit]
<!DOCUMENT html>
20:45:07 [krit]
<html>
20:45:09 [krit]
<head>
20:45:11 [krit]
<rect id="test" width="100" height="100" fill="red"/>
20:45:13 [krit]
</head>
20:45:15 [krit]
<body>
20:45:17 [krit]
<svg xmlns:
20:45:19 [krit]
<use xlink:
20:45:21 [krit]
<svg>
20:45:22 [krit]
</body>
20:45:25 [krit]
</html>
20:45:50 [vhardy]
heycam: the advantage of having SVG in the head is that you do not need display=none to have it not show. It is a reasonable thing to want. I think it can work, barring my reservations.
20:46:13 [ed]
should be <!DOCTYPE html>, no?
20:46:22 [krit]
sure, sorry
20:46:53 [krit]
still doesn't work
20:47:05 [vhardy].
20:48:02 [vhardy]
heycam: I think we need to write something that will change what the HTML spec. says and not what the SVG spec. says. This is more of a change for the HTML spec. If we can do the change in the integration spec. then great.
20:48:13 [vhardy]
heycam: we need to consult with Hixie to resolve this.
20:49:12 [vhardy]
ACTION: Shepazu to coordinate with Hixie on the right specification to add allowing SVG content, in metadata mode, in the <head> element of an HTML document.
20:49:13 [trackbot]
Created ACTION-3195 - Coordinate with Hixie on the right specification to add allowing SVG content, in metadata mode, in the <head> element of an HTML document. [on Doug Schepers - due 2012-01-12].
20:49:24 [ed]
ok, I can confirm that the svgs in the <head> do render
20:49:43 [vhardy]
shepazu: reporting on test.
20:50:10 [vhardy]
... if I put things in a defs, then it hides it. If it is not in a defs, then it renders.
20:50:51 [heycam]
20:51:21 [krit]
ed: Let me specify it, first example works, second doesn't
20:52:22 [vhardy]
heycam: the SVG is pushed to the body. I think that is because any element that is not recognized as a <head> element is automatically closing the <head> element and the result is pushed to the <body>
20:52:42 [vhardy]
dirk: with the surrounding <svg>, the example works, but without the surrounding <svg>, it does not work.
20:52:53 [vhardy]
ed: seeing that, I think it would be a little bit bigger change.
20:53:16 [shepazu]
20:54:17 [vhardy]
shepazu: if you look at the example I just posted, I am surprised that it happens that way.
20:55:02 [vhardy]
shepazu: what happens if we make the title below the SVG, then the document no longer has a title. Is there something that can only be in the head?
20:55:25 [vhardy]
heycam: I think things like <title> get pushed back to the head if they appear somewhere else.
20:55:36 [vhardy]
... after checking, <title> does not do that.
20:56:00 [vhardy]
shepazu: I'll talk to Hixie and Henri Sivonen.
20:56:32 [Zakim]
-Tav
20:56:37 [vhardy]
heycam: the behavior in an XHTML parser is probably different and more like what you would expect.
20:56:59 [vhardy]
heycam: for a feature we intend people to use, we should have the same behavior with the HTML or the XHTML parsers.
20:57:32 [vhardy]
shepazu: I checked that if the <title> appears after the <svg> in the head, it gets shoved into the body. May be it would still be seen as the title.
20:57:47 [vhardy]
rik: yes, it is still showing as the doc title.
20:57:50 [Zakim]
+Tav
20:57:55 [vhardy]
shepazu: yes, I tested that too.
20:57:58 [Zakim]
-Tav
20:58:22 [vhardy]
shepazu: on the issue of allowing bare svg elements outside an <svg> root, I think this is a larger issue that we are going to have to solve or not solve.
20:59:10 [vhardy]
vhardy: I think this is the generic issue of svg elements in HTML.
20:59:32 [vhardy]
shepazu: yes, there is not much value in special casing the <defs> element to put in an <head> element.
20:59:35 [Zakim]
+Tav
21:01:30 [vhardy]
heycam: if we cannot put non-displayed SVG in the <head>, we would still need to provide a way for declaring SVG definitions without having to hide the surrounding <svg> element.
21:02:23 [vhardy]
shepazu: I think we agree that we should allow it in the head. If we are not going to allow that, there are solutions (e.g., 0x0 SVG).
21:02:42 [vhardy]
heycam: yes, that is true. They can also put it in the <defs> section of one of the rendered SVG elements.
21:03:00 [vhardy]
heycam: I was reverse engineering the reason for his request.
21:03:26 [vhardy]
shepazu: he is trying to separate things into the <head> explicitly, because this is what feels right to him.
21:03:37 [vhardy]
shepazu: I think this is what we should optimize for.
21:04:06 [vhardy]
shepazu: I'll email the list and cc Hixie and Henri. We may put it in the HTML5 spec, the SVG integration spec. or the SVG 2.0 spec.
21:04:29 [vhardy]
ed: if we do not have any other agenda item requests today, lets continue with SVG 2.0 requirements.
21:05:07 [vhardy]
heycam: before we move to that, Tav: what is the current state of the 2.0 document.
21:05:13 [vhardy]
tav: it works :-)
21:05:45 [vhardy]
heycam: is it in a state where we can start adding in the new features we are talking about. There will be some global editing of the HTML file.
21:05:57 [cyril]
21:05:59 [vhardy]
tav: yes.
21:06:25 [ed]
topic: SVG2 requirements
21:06:30 [vhardy]
ed: back to requirements.
21:07:03 [vhardy]
heycam: async/defer on <svg:script>
21:07:40 [vhardy]
.. I have mixed feelings. On one hand it is good to have the same behavior as HTML. On the other hand, these are legacy things that people do not use or are they things people do use today?
21:09:01 [vhardy]
... I think we should ask the HTML group whether it is a good idea or not.
21:09:17 [vhardy]
shepazu: async is new in HTML5.
21:09:24 [vhardy]
heycam: ok, I was not sure.
21:10:16 [vhardy]
shepazu: I am pretty sure async was new in HTML5.
21:10:18 [shepazu]
21:10:38 [vhardy]
shepazu: I am not sure when defered was added.
21:11:06 [vhardy]
vhardy: does it make sense to only add async?
21:12:06 [vhardy]
shepazu: defer is at least as old as Gekko 9.1.1.
21:12:12 [vhardy]
heycam: I retract my reluctance.
21:12:28 [shepazu]
s/Gekko 9.1.1./Gecko 1.9.1/
21:12:38 [vhardy]
heycam: Jonas Sicking also says this should be supported.
21:13:45 [vhardy]
vhardy: where do the accepted requirements showing?
21:14:19 [vhardy]
RESOLUTION: accept "consider allowing async/defer on <svg:script>"
21:15:08 [cyril]
RRSAgent, pointer
21:15:08 [RRSAgent]
See
21:15:52 [vhardy]
ed: next one is
21:16:14 [vhardy]
dirk: what does SMIL data feedback mean?
21:16:52 [vhardy]
shepazu: I think it means being able to find the current position of things.
21:17:17 [vhardy]
ed: the request is not clear enough to be discussed.
21:17:46 [vhardy]
ACTION: Erik to ask David Dailey to add more details to the requirement:
21:17:46 [trackbot]
Created ACTION-3196 - Ask David Dailey to add more details to the requirement:
[on Erik Dahlström - due 2012-01-12].
21:17:55 [ed]
21:18:12 [vhardy]
ed: next one
21:18:37 [vhardy]
vhardy: I guess that depends on the convergence work we do on animation.
21:18:47 [vhardy]
ed: is there any resolution so far on convergence?
21:18:54 [vhardy]
vhardy: not that I am aware of.
21:19:25 [vhardy]
heycam: I am reluctant to take on a new requirement without knowing what our broad direction for animation is.
21:19:42 [vhardy]
cyril: this is not only about animation, it is also about other timed elements.
21:19:56 [ed]
21:20:14 [vhardy]
... we started to discuss time containers on audio and video. Chris was not happy we decided to align with the HTML5 <audio> and <video>
21:20:27 [vhardy]
ed: is that for allowing time manipulation.
21:20:46 [vhardy]
cyril: the first step is to allow different timelines in the document.
21:21:36 [vhardy]
vhardy: could the requirement be just around 'time containers' (and not specifically SMIL)?
21:22:07 [vhardy]
cyril: what we want is for the resources in a document to be on a different timeline, not the elements.
21:22:16 [vhardy]
heycam: there are different aspects to time containers.
21:22:57 [vhardy]
... there are two examples. One is <par> and <seq> elements. The other is multi-media elements having their own timelines and synchronizing with that.
21:23:23 [vhardy]
shepazu: I like the general approach vhardy is proposing that we resolve that we want a form of time containers without going into details.
21:23:35 [vhardy]
cyril: I think resolving to have time containers in not precise enough.
21:25:03 [heycam]
ScribeNick: heycam
21:25:05 [heycam]
Scribe: Cameron
21:25:20 [heycam]
vhardy: my point is we already have multiple timelines in svg, with audio and video elements
21:25:27 [heycam]
… you have the main animation timeline, the <svg> is a time container
21:25:39 [heycam]
… if it's playing <audio> and <video> they have their own timeline
21:25:45 [heycam]
cyril: we don't have them in SVG2 yet
21:25:48 [heycam]
… they're in 1.2T, yes
21:25:52 [heycam]
vhardy: but we agreed to have them?
21:26:02 [heycam]
cyril: the requirement doesn't say if they follow the timeline of the document or have their own
21:26:19 [heycam]
vhardy: we could say we want facilities to sync the timelines of the multimedia resources with the document
21:26:29 [heycam]
… but that's different from being able to start and stop, and have completely separate timelines
21:26:49 [heycam]
… rik could talk more about this, but if you want to have a little walking character if you had nested timelines you could easily have a character you could start and stop and manipulate
21:26:53 [heycam]
cyril: that's movie clips in flash
21:27:05 [heycam]
… you could have that in svg by using a separate document and using <animate> to reference it
21:27:12 [heycam]
s/animate/animation/
21:29:12 [heycam]
heycam: brian will be talking animation at the F2F
21:33:31 [Zakim]
- +41.83.2.aacc
21:33:46 [Zakim]
-cabanier
21:33:47 [Zakim]
-ed
21:33:47 [Zakim]
-Doug_Schepers
21:33:50 [Zakim]
-heycam
21:33:51 [Zakim]
-Tav
21:33:52 [Zakim]
-cyrilaway
21:33:54 [Zakim]
GA_SVGWG(SVG1)3:00PM has ended
21:33:56 [Zakim]
Attendees were cabanier, Doug_Schepers, ed, +33.9.53.77.aabb, +41.83.2.aacc, Tav, [IPcaller], heycam, +61.2.980.5.aadd, cyril|away
21:34:02 [heycam]
RRSAgent, make minutes
21:34:02 [RRSAgent]
I have made the request to generate
heycam
21:35:11 [heycam]
Chair: Erik
21:35:12 [heycam]
RRSAgent, make minutes
21:35:12 [RRSAgent]
I have made the request to generate
heycam
21:35:56 [heycam]
Present: Rik, Doug, Cameron, Erik, Vincent, Dirk, Tav, Cyril
21:35:58 [heycam]
RRSAgent, make minutes
21:35:58 [RRSAgent]
I have made the request to generate
heycam
21:36:14 [heycam]
Scribe: Vincent, Cameron
21:36:15 [heycam]
RRSAgent, make minutes
21:36:15 [RRSAgent]
I have made the request to generate
heycam
21:56:16 [heycam]
Tav, nice blog post on the mesh gradients
21:56:32 [heycam]
the figures are more understandable with the bezier control points on there
22:08:36 [thorton]
thorton has joined #svg
22:20:35 [krit]
krit has joined #svg
22:26:37 [thorton]
thorton has joined #svg
22:32:31 [Zakim]
Zakim has left #svg
22:38:34 [krit]
krit has left #svg
They?
Noisy Cricket, perhaps?
I find the nesting SD adapters (Micro->Mini->SD) oddly comforting.
For added yucks you can get CF->SD adapters, and then put the CF adapter in a PCMCIA card.
I don't know if you can still find these anywhere, but those and the CF/IDE adapters are great old-laptop-resurrection widgets.
I can't believe I actually have both -- an SD to CF adapter and CF to PCMCIA adapter. However, I no longer have a PCMCIA slot. I have instead photographers' storage that reads straight from CF.
I think the memory stick micro for my phone is smaller than that. I'd take it out for a photo, but i'm afraid i would lose it.
How annoying is the keyboard at that size?
Is PalmOS still the best the world has to offer as a phone OS? I despair a little...
The keyboard itself is only slightly smaller than on the 700; it's fine.
And: yes. It is.
Palm . . . does it then run Dali?
(As in; or have they somehow 'invented themselves' beyond earlier standards, not as in; programmer's time and resources have no value; please fix me.)
Yeah, DaliClock works fine on modern PalmOS devices. Though it runs in 160x160 instead of 320x320, because it runs as a 68020-emulated app instead of a native ARM app, since there exists no ARM-targeted PalmOS development environment that runs on MacOS or Linux.
It works in color, though.
What won't run from an ARMlet? The graphics initialization?
I have never heard of an ARMlet before today, and still haven't found any documentation on what they are or how to use them, but if you can tell me how to compile my code (on a Mac) such that at runtime (on the Palm device), WinGetBitmap() gives me a 320x320 frame buffer instead of a 160x160 frame buffer, well, then we'd be getting somewhere.
I had been under the impression that the only way to emit ARM executables for PalmOS was to be running NT.
Download and install PRC-tools OSX first.
Then look at this simple sample code which is far better than the instructions for explaining what to do. Basically in your armlet .c:
#include <PceNativeCall.h>
#include <Standalone.h>
STANDALONE_CODE_RESOURCE_ID (id)
Where id is usually 1 for just one armlet; compile it with:
arm-palmos-gcc -Wall -palmos5 -O2 -c myarmlet.c
link it with:
arm-palmos-gcc -nostartfiles -e start -o myarmlet myarmlet.o
where start is a function of the form unsigned long start(const void *emulStateP, char *userData68KP, Call68KFuncType *call68KFuncP) {...} containing your code.
In your 68k application wrapper.c:
#include <PalmOS.h>
#include <PceNativeCall.h>
and get the function pointer to start from
MemHandle startH = DmGetResource('armc', id);
void *startp = MemHandleLock(startH);
and call it with:
PceNativeCall(startp, NULL);
compile the wrapper with:
m68k-palmos-gcc -Wall -palmos5 -O2 wrapper.c
then hook them together with:
build-prc -n Armlets -c ARML wrapper myarmlet
For accessing WinGetBitmap(), do this or this with libarmboot.a.
Good luck.
oops, compile the wrapper so it doesn't end up as a.out:
m68k-palmos-gcc -Wall -palmos5 -O2 wrapper.c -o wrapper
oh and libarmboot.a is here
Ok, I've looked at this stuff and it still leaves me saying what the HELL?
I have no idea what's going on here. Why is there no documentation on this? When would I use any of these things and what exactly do I accomplish by so doing?
The docs are here I've never built Palm apps on OSX so I didn't know that you wouldn't get the docs with the OSX PRC-Tools.
If you put your code inside the armlet's start function as above, and declare WinGetBitmap() as
PACE_CLASS_WRAPPER(BitmapType *) WinGetBitmap(WinHandle winHandle)
{
PACE_PARAMS_INIT()
PACE_PARAMS_ADD32(winHandle)
PACE_EXEC_RP(sysTrapWinGetBitmap,BitmapType *)
}
and link the armlet against libarmtools.a from the above site, you should get a 320x320 bitmap.
The answer to the underlying question is that phone manufacturers are not motivated to document or otherwise support their APIs because (1) nobody buys phones based on the number of applications available for them, and (2) they are very ambivalent about competition for application sales. It's the same way with Symbian, WinCE, and even Java ME has sucky docs and stupid hoops you have to jump through compared to their mainstream x86 etc. stuff.
Damn this is hairy.
I assume that the only way to make the same executable work on older PalmOS devices (including POSE) is to include two copies of my code, one in 68k and one in ARM, and run the 68k version if the ARM version fails to load... but do all of those -palmos5 command-line arguments mean that I'm building executables that won't even get that far on older systems?
And will I have to do that PACE nonsense for every system call I make? (WinScreenMode, ErrDisplay, etc.)?
(I haven't actually gotten any of this shit to link yet, so I can't tell for myself.)
Yes :(
How annoying is the keyboard at that size?
It doesn't matter. You're not in the USA, so forget about the Centro. A Nokia E90 is the only sane option, and the keyboard on that is fine (the only downside is that it doesn't play nicely with US networks for anything other than voice calls). Mine was 20 quid on O2.
For now - a GSM Centro has been spotted, leaked, and photographed several times now. A release seems inevitable.
Perhaps so, but an E90 will still remain the only sane option. The problem isn't the lack of a GSM Centro, it's the lack of CDMA E90. Which is fine for those of us in Europe, as we get the better phone...
but europeans (including romania) don't want to buy stuff from nokia anymore, do we?
Heh, europeans complaining about globalization. Thanks, you made my evening.
Once you're a member of the United Nations, NATO, the G8, the G4, Kyoto signatory, OECD, WTO, and the most powerful european country in the IMF, you've long ago signed away the right to complain about globalization.
au contraire, mon ami. the europeans may complain, because the globalization rules are not made by them but the worldwide capital and market. the role of given voters of given countries is to neglect. ...so, no pointy fingers when it comes to world politics, please. who governs your country?
You get the government you deserve.
It is more than a driver issue - it is a Palm OS issue. Palm OS has a 32-bit addressing limit == 4GB.
An SD card is not a RAM chip though. From a software point of view it is a storage device, similar to a hard drive. 32-bit operating systems are not limited to 4GB of storage. So this is a driver/filesystem issue.
Palm OS doesn't know from storage devices. Remember, the guts of Palm OS are ancient and it all evolved from running on RAM. Everything since then has been tacked on, including their file system support. The 4GB limit is deep in Palm OS, it can't address anything beyond 32-bits. That was one of the things Palm OS 6 was supposed to fix with new underpinnings. And now Palm OS II, their Linux-based project which won't be out until 2009 (if anyone still cares by then - and I say that as a Palm OS user myself), is the Next Best Hope.
Palm OS Garnet, the current incarnation, is seriously creaky at this point. It was meant to be replaced several years ago, so it has soldiered on well past its design life. But it is also why we don't have 3G GSM Palm OS phones - Garnet can't handle the requirements of UMTS/HSDPA without a major overhaul. The implementation differences in the systems allowed it to handle CDMA/EV-DO, but just barely.
So, it turns out that if I manually put 8GB of data on the card, the Centro can access it all: it sees all the file names, and I can play all 8GB of MP3s. But it still reports it as a 4GB card with less-than-4GB on it.
So I suppose the reason Missing Sync refuses to put 8GB of data on the card is because it's believing the PalmOS lie about remaining space.
Hopefully this means it's a relatively easy fix, which means maybe it will actually happen some day (here's me holding my breath).
Funky - the FAT32 table is accessible, I guess that gives the Palm a back door.
I could swear I saw something about a Centro-only software update that lets you use a >4G card. I ignored it because I don't have a Centro, and now I can't find it.
Why did you choose a Centro instead of a 755p?
Because it's smaller.
"It's like a joke: like the "Noisy Cricket" gun from Men in Black."
These actually remind me of the 'microsofts' that people stuck in their head in Gibson's Neuromancer oeuvre.
If I remember correctly, the original ad campaign for Sony's Memory Stick format (from several years ago now) featured photos of shaved heads with memory slots at the base of their neck and volume controls behind their ears.
Is that what you're sporting in that usericon?
That's my emergency caffeine reserve.
It is the new hotness. The new teensy awesome hotness.
MicroSD is coming way too close to the Event Horizon of shrinking storage mediums: the limit beyond which it is possible to accidentally inhale your last three months' worth of photography.
(And man do I wish the Centro were available on Verizon. Oh well, my old 700p seems to be mostly soldiering on...)
Seriously, I've taken pills bigger than this thing. I've found larger items in my nose.
Pshaw. I've had much larger fragments of my anatomy surgically removed (also through my nose, BTW).
Luckily, I have a large nose.
Someone needs a manicure.
There must be something screwy/ambiguous about the SD standard. So many phones/PDAs seem to have a hard coded limit of 1GB, 2GB or 4GB.
I bought a 2GB miniSD card about a year ago that had a formatted capacity of only 1GB. I thought I had a knockoff card, but it turned out the card was "partitioned" in the factory to a single 1GB volume so it would work with all the buggy phones out there. SDFix2G fixed the card and my phone saw all 2GB of it. There might be a similar utility for the 8GB cards.
Well, there is the SDHC clusterfuck, but that only causes issues at 2GB. I suspect most of the rest of it is just lazy programmers saying "nobody could possibly need more than $X GB."...
Well, it's definitely formatted as 8GB; when I mount it on my Mac directly, it sees the full size.
I remember reading (and, of course, I don't remember where) that MicroSD cards are not meant to be removable. They're sort of a user-decides-how-much-storage-they-want-but-don't-really-take-it-out sort of idea. Transfers from card to computer were supposed to be via USB or Bluetooth or WiFi or some other magic/nonsense.
"Hello, my name is LohPhat and I have a problem."
I dumped Treos 2 years ago because the damned OS still has no MMU features to keep a buggy app from trashing memory. That and poor QA + suicidal firmware updates.
I toked on the big corporate weenie and got a crackberry. Mmm java OS. Useable. No more stylus.
My new 8320 from t-mobile has UMA () -- basically voip and data over wifi. Since I travel a lot out of the country it has dropped my phone bills dramatically -- no more $1.30 (or $5 in Russia) per minute roaming fees. On wifi the phone thinks it's in the US and all US calls are local and free (don't count towards my minute plan). All t-mobile hotspots auto-connect, again free calling; so many airports and Starbucks to choose from.
The Nokia tablets already have a VM or two to choose from to emulate Palms. Upgrading might be slicker outside Palm soon.
Not sure whether you care, but Matt Siber moved the URL for the Untitled Project (and appears to have some other newer stuff along the same lines, not all of which I'm sure was there four years ago).
Have you verified that you can only see 4GB of files on the card. IIRC, the drivers in the Centro can access the full 8GB of storage on the card, but the VFSVolumeSize API in Palm OS Garnet only can return unsigned 32-bit values for the total space and space used.
Well, Missing Sync believes the card is 4GB and won't put more MP3s on it.
Did you get the delicious pink flavor?
-bZj
the same specs as the treo you say...
didn't you have problems with your sd not working on your treo?
if those are your fingers holding the microsd card in the photo please for the love of god either trim or paint your long fingernails!!
you need those in order to grasp onto something that small. (no not talking about a weenie)
Effective Address Produced by (%ecx,%edx,4)
FACT: If the initial values in %ecx and %edx are 0x10 and 0x4 respectively, then the effective address produced by (%ecx,%edx,4) is 0x20.

If the initial values in %ecx and %edx are 0x4 and 0x4, then the effective address produced by 0x4(%ecx,%edx,4) is 0x18.

If the effective address produced by 0x6(,%ecx,4) is 0x26, then the initial value of %ecx is 0x8.

I am struggling with addressing... can someone explain why the above is happening?
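These facts all follow from the x86 effective-address formula: displacement + base + index * scale. A quick way to sanity-check the numbers is to evaluate that formula directly (a sketch; the helper name is my own):

```python
def effective_address(disp=0, base=0, index=0, scale=1):
    # x86 addressing mode disp(base, index, scale) evaluates to
    # disp + base + index * scale
    return disp + base + index * scale

# (%ecx,%edx,4) with %ecx = 0x10 and %edx = 0x4
print(hex(effective_address(base=0x10, index=0x4, scale=4)))           # 0x20

# 0x4(%ecx,%edx,4) with %ecx = 0x4 and %edx = 0x4
print(hex(effective_address(disp=0x4, base=0x4, index=0x4, scale=4)))  # 0x18

# 0x6(,%ecx,4) == 0x26  =>  %ecx = (0x26 - 0x6) / 4
print(hex((0x26 - 0x6) // 4))                                          # 0x8
```

Note that in (%ecx,%edx,4) the first register is the base and the second is the index, so the scale multiplies %edx, not %ecx.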
- Assembly language code invalid operands
Here is my homework:
Using a loop and indexed addressing, write code that rotates the members of a 32-bit integer array forward one position. The value at the end of the array must wrap around to the first position. For example, the array [10,20,30,40] would be transformed into [40,10,20,30].
Here is my code...
INCLUDE Irvine32.inc

.data
arr1 DWORD 11, 22, 33, 44, 55, 66, 77
arrSize = ($ - arr1) / TYPE arr1
temp DWORD ?

.code
main PROC
    ; calls the procedures
    call Clrscr
    mov esi,(arrSize-1)          ; ESI points to last element
    mov temp,arr1[arrSize-1]     ; save the last element in a variable
    mov ecx,arrSize
    dec ecx                      ; last element takes first position
L1:                              ; moves the elements forward one position
    mov arr1[esi],arr1[esi-1]
    dec esi                      ; decrement the ESI pointer
    loop L1
    mov arr1[esi],temp           ; save last element at first position
    exit
main ENDP
END main
And the problem is...I get invalid instruction operands on...
mov temp,arr1[arrSize-1]
mov arr1[esi],arr1[esi-1]
mov arr1[esi],temp
- Binary Bomb Phase 9
I don't know what this phase is asking me for. Please help me out. I tried using 1001 as my input, but it's not working. Please help me solve this phase.
08048f9c <fun9>:
 8048f9c: 53                    push   %ebx
 8048f9d: 83 ec 18              sub    $0x18,%esp
 8048fa0: 8b 54 24 20           mov    0x20(%esp),%edx
 8048fa4: 8b 4c 24 24           mov    0x24(%esp),%ecx
 8048fa8: 85 d2                 test   %edx,%edx
 8048faa: 74 37                 je     8048fe3 <fun9+0x47>
 8048fac: 8b 1a                 mov    (%edx),%ebx
 8048fae: 39 cb                 cmp    %ecx,%ebx
 8048fb0: 7e 13                 jle    8048fc5 <fun9+0x29>
 8048fb2: 89 4c 24 04           mov    %ecx,0x4(%esp)
 8048fb6: 8b 42 04              mov    0x4(%edx),%eax
 8048fb9: 89 04 24              mov    %eax,(%esp)
 8048fbc: e8 db ff ff ff        call   8048f9c <fun9>
 8048fc1: 01 c0                 add    %eax,%eax
 8048fc3: eb 23                 jmp    8048fe8 <fun9+0x4c>
 8048fc5: b8 00 00 00 00        mov    $0x0,%eax
 8048fca: 39 cb                 cmp    %ecx,%ebx
 8048fcc: 74 1a                 je     8048fe8 <fun9+0x4c>
 8048fce: 89 4c 24 04           mov    %ecx,0x4(%esp)
 8048fd2: 8b 42 08              mov    0x8(%edx),%eax
 8048fd5: 89 04 24              mov    %eax,(%esp)
 8048fd8: e8 bf ff ff ff        call   8048f9c <fun9>
 8048fdd: 8d 44 00 01           lea    0x1(%eax,%eax,1),%eax
 8048fe1: eb 05                 jmp    8048fe8 <fun9+0x4c>
 8048fe3: b8 ff ff ff ff        mov    $0xffffffff,%eax
 8048fe8: 83 c4 18              add    $0x18,%esp
 8048feb: 5b                    pop    %ebx
 8048fec: c3                    ret

08048fed <phase_9>:
 8048fed: 53                    push   %ebx
 8048fee: 83 ec 18              sub    $0x18,%esp
 8048ff1: c7 44 24 08 0a 00 00  movl   $0xa,0x8(%esp)
 8048ff8: 00
 8048ff9: c7 44 24 04 00 00 00  movl   $0x0,0x4(%esp)
 8049000: 00
 8049001: 8b 44 24 20           mov    0x20(%esp),%eax
 8049005: 89 04 24              mov    %eax,(%esp)
 8049008: e8 d3 f8 ff ff        call   80488e0 <strtol@plt>
 804900d: 89 c3                 mov    %eax,%ebx
 804900f: 8d 40 ff              lea    -0x1(%eax),%eax
 8049012: 3d ec 03 00 00        cmp    $0x3ec,%eax
 8049017: 76 05                 jbe    804901e <phase_9+0x31>
 8049019: e8 57 03 00 00        call   8049375 <explode_bomb>
 804901e: 89 5c 24 04           mov    %ebx,0x4(%esp)
 8049022: c7 04 24 c0 d0 04 08  movl   $0x804d0c0,(%esp)
 8049029: e8 6e ff ff ff        call   8048f9c <fun9>
 804902e: 83 f8 07              cmp    $0x7,%eax
 8049031: 74 05                 je     8049038 <phase_9+0x4b>
 8049033: e8 3d 03 00 00        call   8049375 <explode_bomb>
 8049038: 83 c4 18              add    $0x18,%esp
 804903b: 5b                    pop    %ebx
 804903c: c3                    ret
 804903d: 66 90                 xchg   %ax,%ax
 804903f: 90                    nop
- Why linux uses interrupt gates for exceptions
As I was looking through linux kernel source code for x86 architecture I noticed that it uses interrupt gates for handling exceptions (interrupts 0-31). You can see it here:
Is there any reason to use interrupt gates instead of trap gates in those cases? From what I understood reading some web resources (e.g. in this answer, under item 2), using trap gates should be OK for exceptions.
If there is no such reason, then why not use trap gates? Why do we disable interrupts (using interrupt gates) if we don't have to?
- Trying to understand this piece of c code but just cant get it
I have been googling for a couple of hours now for an explanation of the following code but couldn't find one. I'd appreciate it if someone could help me.
I have defined a memory location like this (the address is just a sample):
#define address (0x000001)
then i have a struct
typedef struct {
    int a;
    int c;
    int f;
} foo;
and last (this part i can't figure out) I have definition like
#define foo__ ( (foo *) address)
does this mean that I'm creating a macro whereby I can access the elements of structure foo, and that the foo structure begins at 0x000001?
I know that the code works - have tested it but there is no use if I cant understand what it does.
Edit. Sorry for the unclear information in the question. Yes, it's an LPC microcontroller by NXP, used in an embedded environment; I should have mentioned that in the first place. My bad.
Thanks for the answers and comments. I have figured it out now.
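For what it's worth, the same cast-an-address-to-a-struct-pointer idea can be sketched from Python with ctypes. Here a heap buffer stands in for the fixed hardware address, and the Foo/buf names are my own:

```python
import ctypes

# Mirror of: typedef struct { int a; int c; int f; } foo;
class Foo(ctypes.Structure):
    _fields_ = [("a", ctypes.c_int),
                ("c", ctypes.c_int),
                ("f", ctypes.c_int)]

# Stand-in for the fixed address; on a real MCU this would be a
# constant register-block address like 0x000001 instead of a buffer.
buf = ctypes.create_string_buffer(ctypes.sizeof(Foo))
addr = ctypes.addressof(buf)

# Equivalent of: #define foo__ ((foo *) address)
foo = Foo.from_address(addr)
foo.a = 42          # writes directly into the memory at `addr`
print(foo.a)        # 42
```

As in the C macro, no memory is allocated by the cast itself; the struct layout is simply overlaid on whatever already lives at that address.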
- C++ objdump use and disassembly
On a linux server, the command would be
$ objdump -t exercise11 > symbol_table.txt
The intention is to find the memory address and size (in bytes) for the three global variables in a program. The second portion is equivalent to
$ objdump -S exercise11 > disassembly.txt
The intention here is to search the text file and find the instruction for assigning a value to a variable.
My problem is that I am using Visual Studio 2017 on a local machine. I am in the Visual Studio command prompt, but either I need something different or I am misunderstanding what I am reading in the output files. I have also used dumpbin commands, but they did not seem to produce the correct information either. Could you please show me how to do this in Visual Studio, and how to find the information I am searching for?
- Read from address memory into a file in c
I need to do this: write from a memory address into a file, using only the read(), write() and open() system calls. If the file doesn't exist I have to create it, and if the file exists it has to be overwritten.
Can someone help me?
void writef(char *arg[]) {
    int fh, sh;
    int rd, n;
    n = atoi(arg[3]);
    char *p = arg[2];
    p = (char *) strtoul(p, NULL, 16);
    fh = open(p, O_WRONLY || O_RDONLY);
    sh = open(arg[1], O_WRONLY || O_RDONLY);
    if (fh == -1) {
        perror("open");
        return;
    } else {
        int ret = write(fh, arg[1], n);
        if (ret == -1) {
            perror("write");
            return;
        }
    }
}
arg[1] is the file, arg[2] is the address, and arg[3] is the number of bytes to copy.
- MASM32 Direct Addressing - A2070 Invalid Instruction Operands
I am new to MASM coding. I find it really difficult to work with registers, due to my lack of knowledge of the built-in functions.
I am trying to write a program to change all letters in input string to CAPITAL letters. Here is my code:
.386
.model flat, stdcall
option casemap:none
include windows.inc
include kernel32.inc
include msvcrt.inc
includelib msvcrt.lib

.data
InputMsg     db "String Input: (At most 20 characters)", 10, 0
OutputMsg    db "Your string is ", 0
StringFormat db "%s", 0

.data?
StringData db 20 dup(?)

.code
start:
    invoke crt_printf, addr InputMsg
    invoke crt_scanf, addr StringFormat, addr StringData, 20

    ; Change lowercase letters to uppercase
    lea ecx, StringData
CounterLoop:
    .if [ecx] >= 97
        .if [ecx] <= 122
            sub [ecx], 32
        .endif
    .endif
    inc ecx
    .if [ecx] != 0
        jmp CounterLoop
    .endif

    invoke crt_printf, addr OutputMsg
    invoke crt_printf, addr StringData
    invoke ExitProcess, NULL
end start
I want to use ecx to store the effective address of StringData. However, an A2070 error occurs when I try to get the content of StringData.
Is [ecx] incorrect? How can I get the character in StringData using direct addressing? Thank you very much!
- Tricore Disassembling "Constants"
Can someone here explain to me how the TC17** assembler works out the "movh.a and lea" addressing (hex), and how I can calculate it myself if I have a configuration value like the one shown in my picture, which is defined as a "constant" or a "global"?

What I want to do is create/assemble this 32-bit instruction myself, but I have made no progress over the last few days. Sure, I know how to assemble with the Eclipse toolchain, but I can't use that toolchain in my program. I am programming in PHP, but this doesn't really matter once I know how to work it out.
As an example, here is a picture with the IDA Pro view of the commands I have to assemble:
As 32-bit hex instructions, it looks like this:

ASM: movh.a a15, #@HIS(configuration_value_1)      HEX: 91 70 01 F8
ASM: lea a15, [a15]@LOS(configuration_value_1)     HEX: D9 FF E4 67
What I want to do now is work out those hex assembler instructions, with the right addressing for my variable. In this case it is located at "0x80177DA4".
In the instruction set, it's explained like this:
Screenshot: movh.a command
Screenshot: lea + long offset addressing mode
Stefan Seefeld wrote:
> Hi Robert,
>
> I believe the reason for this behavior is that an implicit cast from
> object to dict and list will cause a copy, so you don't actually insert
> the path into the original (global) path object, but a local copy, which
> obviously isn't seen by the rest of the application.
>
> Robert wrote:
>> Hi,
>>
>> Suppose the following C++ Boost.Python code below. Note that I am
>> embedding the python interpreter to execute python scripts from C++:
>>
>> using namespace boost::python;
>> object imported( import( "sys" ) );
>> dict sysdict( imported.attr( "__dict__" ) );
>
> The call to 'attr()' returns an object, and you assign it to a dict.
> This causes a copy. Try this instead:
>
> object dict_object = imported.attr("__dict__");
> dict sysdict = extract<dict>(dict_object);
>
>> list syspath( sysdict["path"] );
>
> And likewise here:
>
> object list_object = sysdict["path"];
> list syspath = extract<list>(list_object);
>
>> syspath.append( "C:\testing" );
>
> However, as you have discovered, there are ways to fold these lines to
> make it more compact. I only spelled the individual steps out to
> illustrate what's going on underneath.
>
> HTH,
> Stefan

Thanks for your response. After running the code above I find that it does not work, and I get the following output from python:

TypeError: No to_python (by-value) converter found for C++ type: struct boost::python::extract<class boost::python::dict>

Not sure what this means! Thanks again for all the help!
If you have ever used iterators in C# or Visual Basic, then this is essentially the same thing. You would need to enable the new experimental /await compiler option, same as in the previous blog entry.
std::experimental::generator<int> evens(size_t count)
{
    using namespace std::chrono;
    for (size_t i = 1; i <= count; i++)
    {
        yield i * 2;
        std::cout << "yielding..." << std::endl;
        std::this_thread::sleep_for(1s);
    }
}
Calling this method would be fairly straightforward.
void main_yield()
{
    for (auto ev : evens(7))
    {
        std::cout << ev << std::endl;
    }
}
And here’s the expected output.
I added the console output to demonstrate the laziness of the generator.
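For comparison (my addition, not part of the original post), the equivalent lazy sequence in Python is just a generator function; nothing runs until the consuming loop pulls a value:

```python
import time

def evens(count):
    # Lazily produce the first `count` even numbers, pausing between
    # yields like the C++ version above (delay shortened for the example).
    for i in range(1, count + 1):
        yield i * 2
        print("yielding...")
        time.sleep(0.01)

for ev in evens(7):
    print(ev)
```

The body of evens only advances when the for loop asks for the next value, which is the same laziness the generator<int> example demonstrates.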
To illustrate the problem in gory detail, suppose you want to add the notion of modifiability to the Hierarchy. You need four new interfaces: ModifiableCollection, ModifiableSet, ModifiableList, and ModifiableMap. What was previously a simple hierarchy is now a messy heterarchy. Also, you need a new Iterator interface for use with unmodifiable Collections, that does not contain the remove operation. Now can you do away with UnsupportedOperationException? Unfortunately not.
Consider arrays. They implement most of the List operations, but not remove and add. They are "fixed-size" Lists. If you want to capture this notion in the hierarchy, you have to add two new interfaces: VariableSizeList and VariableSizeMap. You don't have to add VariableSizeCollection and VariableSizeSet, because they'd be identical to ModifiableCollection and ModifiableSet, but you might choose to add them anyway for consistency's sake. Also, you need a new variety of ListIterator that doesn't support the add and remove operations, to go along with unmodifiable List. Now we're up to ten or twelve interfaces, plus two new Iterator interfaces, instead of our original four. Are we done? No.
Consider logs (such as error logs, audit logs and journals for recoverable data objects). They are natural append-only sequences, that support all of the List operations except for remove and set (replace). They require a new core interface, and a new iterator.
And what about immutable Collections, as opposed to unmodifiable ones? (i.e., Collections that cannot be changed by the client AND will never change for any other reason). Many argue that this is the most important distinction of all, because it allows multiple threads to access a collection concurrently without the need for synchronization. Adding this support to the type hierarchy requires four more interfaces.
Now we're up to twenty or so interfaces and five iterators, and it's almost certain that there are still collections arising in practice that don't fit cleanly into any of the interfaces. For example, the collection-views returned by Map are natural delete-only collections. Also, there are collections that will reject certain elements on the basis of their value, so we still haven't done away with runtime exceptions.
When all was said and done, we felt that it was a sound engineering compromise to sidestep the whole issue by providing a very small set of core interfaces that can throw a runtime exception.
It was never our intention that programs should catch these exceptions: that's why they're unchecked (runtime) exceptions. They should only arise as a result of programming errors, in which case, your program will halt due to the uncaught exception.
The Collection interface provides this functionality. We are not providing any public implementations of this interface, as we think that it wouldn't be used frequently enough to "pull its weight." We occasionally return such Collections, which are implemented easily atop AbstractCollection (for example, the Collection returned by Map.values).
While the names of the new collections methods do not adhere to the "Beans naming conventions", we believe that they are reasonable, consistent and appropriate to their purpose. It should be remembered that the Beans naming conventions do not apply to the JDK as a whole; the AWT did adopt these conventions, but that decision was somewhat controversial. We suspect that the collections APIs will be used quite pervasively, often with multiple method calls on a single line of code, so it is important that the names be short. Consider, for example, the Iterator methods. Currently, a loop over a collection looks like this:
for (Iterator i = c.iterator(); i.hasNext(); )
    System.out.println(i.next());

Everything fits neatly on one line, even if the Collection name is a long expression. If we named the methods "getIterator", "hasNextElement" and "getNextElement", this would no longer be the case. Thus, we adopted the "traditional" JDK style rather than the Beans style.
This is what is referred to as an "Internal Iterator" in the "Design Patterns" book (Gamma et al.). We considered providing it, but decided not to as it seems somewhat redundant to support internal and external iterators, and Java already has a precedent for external iterators (with Enumerations). The "throw weight" of this functionality is increased by the fact that it requires a public interface to describe upcalls.
It's easy to implement this functionality atop Iterators, and the resulting code may actually look cleaner as the user can inline the predicate. Thus, it's not clear whether this facility pulls its weight. It could be added to the Collections class at a later date (implemented atop Iterator), if it's deemed useful.
Because we don't believe in using Enumerations (or Iterators) as "poor man's collections." This was occasionally done in prior releases, but now that we have the Collection interface, it is the preferred way to pass around abstract collections of objects.
Again, this is an instance of an Enumeration serving as a "poor man's collection" and we're trying to discourage that. Note however, that we strongly suggest that all concrete implementations should have constructors that take a Collection (and create a new Collection with the same elements).
The semantics are unclear, given that the contract for Iterator makes no guarantees about the order of iteration. Note, however, that ListIterator does provide an add operation, as it does guarantee the order of the iteration.
People were evenly divided as to whether List suggests linked lists. Given the implementation naming convention, <Implementation><Interface>, there was a strong desire to keep the core interface names short. Also, several existing names (AbstractSequentialList, LinkedList) would have been decidedly worse if we changed List to Sequence. The naming conflict can be dealt with by the following incantation:
import java.util.*; import java.awt.*; import java.util.List; // Dictates interpretation of "List"
It was decided that the "set/get" naming convention was strongly enough enshrined in the language that we'd stick with it.
We view the method names for Enumeration as unfortunate. They're very long, and very frequently used. Given that we were adding a method and creating a whole new framework, we felt that it would be foolish not to take advantage of the opportunity to improve the names. Of course we could support the new and old names in Iterator, but it doesn't seem worthwhile.
It can be implemented atop the current Iterators (a similar pattern to java.io.PushbackInputStream). We believe that its use would be rare enough that it isn't worth including in the interface that everyone has to implement.
If you examine the goals for our Collections framework (in the Overview), you'll see that we are not really "playing in the same space" as JGL. Quoting from the "Design Goals" Section of the Java Collections Overview: "Our main design goal was to produce an API that was reasonably small, both in size, and (more importantly) in 'conceptual weight.'"
JGL consists of approximately 130 classes and interfaces; its main goal was consistency with the C++ Standard Template Library (STL). This was not one of our goals. Java has traditionally stayed away from C++'s more complex features (e.g., multiple inheritance, operator overloading). Our entire framework, including all infrastructure, contains approximately 25 classes and interfaces.
While this may cause some discomfort for some C++ programmers, we feel that it will be good for Java in the long run. As the Java libraries mature, they inevitably grow, but we are trying as hard as we can to keep them small and manageable, so that Java continues to be an easy, fun language to learn and to use.
Given that we provide core collection interfaces behind which programmers can "hide" their own implementations, there will be aliased collections whether the JDK provides them or not. Eliminating all views from the JDK would greatly increase the cost of common operations like making a Collection out of an array, and would do away with many useful facilities (like synchronizing wrappers). One view that we see as being particularly useful is List.subList. The existence of this method means that people who write methods taking List on input do not have to write secondary forms taking an offset and a length (as they do for arrays).
Primarily, resource constraints. If we're going to commit to such an API, it has to be something that works for everyone, that we can live with for the long haul. We may provide such a facility some day. In the meantime, it's not difficult to implement such a facility on top of the public APIs.
http://java.sun.com/j2se/1.4.2/docs/guide/collections/designfaq.html
Hidden Markov Model using TensorFlow
Hello Readers, this blog will take you through the basics of the Hidden Markov Model (HMM) using TensorFlow in Python. This model is based on the mathematical topic – probability distributions.
Markov Property
Markov Property – when the probability of future events depends only on the conditions of the present state and is independent of past events that took place.
Let’s take an example to understand the property. Suppose we flip a coin. There are two possibilities for the coin to land on either heads or tails. You will agree that the probability of both heads and tails showing up is 50%. Suppose in the first trial, we get heads, and then we again flip the coin. Will the previous result (heads) affect the output of this flip? Can the coin store the previous result to change our outcome this time? The answer is No, this time also the probability of both heads and tails showing up will remain 50%. The outcome of the present event is oblivious to the outcome of the past event. This is the Markov property.
Hidden Markov Model
Hidden Markov Models deal in probability distributions to predict future events or states. The model consists of a given number of states, each with its own probability distribution. The change between any two states is defined as a transition, and the probabilities associated with these transitions in the HMM are transition probabilities.
HMM is an abstract concept, therefore I have taken an example of a weather model to study it throughout.
Components of the Markov Model
- States: In each Markov model there is a finite set of states, which can be anything like "sleeping", "eating", "working", or "warm" and "cold", or "red", "yellow" and "green". These states are non-observable and therefore called hidden.
- Observations: Each state has a particular outcome or observation associated with it based on a probability distribution. These observations are visible to us. For example: when it is a sunny day, there is an 80% probability that Joe will eat ice cream whereas a 20% probability that he won’t.
- Transitions: Each state will have a probability defining the likelihood of transitioning to a different state. For example, there is a 90% chance that when today is rainy the next day will be rainy, and 10% that the next day will be sunny. Similarly, there is an 85% chance that when today is sunny the next day will be sunny, and a 15% chance that the next day will be rainy.
A Hidden Markov Model is specified by the following components: an initial state distribution, a transition distribution over the states, and an observation (emission) distribution for each state.
As you have read the basics of the Hidden Markov Model. Now we consider Weather/ Temperature as an observation to our states and implement the HMM model.
Weather model
We model a simple weather system and try to predict the temperature based on given information:
- 0 encodes for a Rainy day and 1 encodes for a Sunny day.
- Let the first day in our sequence have an 85% chance of being rainy.
- There is a 10% chance that a sunny day follows a rainy day.
- There is a 15% chance that a rainy day follows a sunny day.
- The temperature is normally distributed every day. The mean and standard deviations are 0 and 5 on a rainy day and 20 and 15 on a sunny day.
- In our example, we take the average temperature to be 20 on a sunny day, with values typically ranging from 10 to 30. The table below shows the transition matrix for the transition distributions taken in our example.
Import the necessary Libraries
import tensorflow as tf
import tensorflow_probability as tfp
To implement the Hidden Markov Model we use the TensorFlow probability module. We use python as our programming language.
tfd = tfp.distributions  # short form
initial_dist = tfd.Categorical(probs=[0.85, 0.15])  # Rainy day
transition_dist = tfd.Categorical(probs=[[0.9, 0.1],
                                         [0.15, 0.85]])
observation_dist = tfd.Normal(loc=[0., 20.], scale=[5., 15.])
We specify the initial, transition, and observation distributions for our Hidden Markov Model. The initial distribution specifies the probability of landing on a rainy day as our sequence begins.
You can observe that the transition distribution 2D array in our code is the exact same transition matrix that we saw above. Also, the dimensions of this matrix are 2×2 corresponding to the two states in our model which are the rainy and sunny day state.
In the observation distribution, you see the Normal distribution with location `loc` and `scale` parameters.
Here `loc = mu` is the mean, `scale = sigma` is the std. deviation. These are used to solve the PDF(probability density function):
pdf(x; mu, sigma) = exp(-0.5 (x - mu)**2 / sigma**2) / Z
Z = (2 pi sigma**2)**0.5   (Z is the normalization constant)
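This formula is easy to check directly. A plain-Python transcription (the function name here is ours, not TensorFlow's):

```python
import math

def normal_pdf(x, mu, sigma):
    z = math.sqrt(2 * math.pi * sigma ** 2)          # normalization constant Z
    return math.exp(-0.5 * (x - mu) ** 2 / sigma ** 2) / z

# Rainy-day distribution (mu=0, sigma=5): the density peaks at the mean.
print(round(normal_pdf(0.0, 0.0, 5.0), 6))           # → 0.079788
```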
The Hidden Markov Model
model = tfd.HiddenMarkovModel(
    initial_distribution=initial_dist,
    transition_distribution=transition_dist,
    observation_distribution=observation_dist,
    num_steps=7)
Here in the above lines of code, we call the inbuilt HMM with the defined parameters.
The number of steps in our code defines the number of days for which we wish to predict the average temperature. We call it for an entire week (7 days).
Technically it means how many times it will run through this probability cycle and run the model sequentially.
Printing the outputs
mean = model.mean()
with tf.compat.v1.Session() as sess:
    print(mean.numpy())
model.mean() in the above code is a partially defined tensor/computation. To get its value, we create a new session in TensorFlow and run that part of the graph.
Output: [3. 4.25 5.1875 5.890625 6.4179688 6.8134775 7.110109 ]
You can observe that the output starts from a 3-degree temperature. The first-day temperature is low because we defined the initial distribution with an 85% probability for a rainy day.
This temperature gradually rises as we predict the temperature for the further days. That is because now the model takes into account the transition probabilities.
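The rising numbers can be verified by hand: propagate the state distribution forward with the transition matrix and take the expected temperature each day. A NumPy-only sketch (independent of TensorFlow) reproduces the output above:

```python
import numpy as np

# Transition matrix: rows are P(next state | current state),
# with state 0 = rainy and state 1 = sunny, as in the model above.
T = np.array([[0.90, 0.10],
              [0.15, 0.85]])
means = np.array([0.0, 20.0])    # mean temperature in each state
p = np.array([0.85, 0.15])       # initial state distribution

expected = []
for _ in range(7):
    expected.append(float(p @ means))  # E[temperature] for this day
    p = p @ T                          # propagate the distribution one day

print([round(t, 4) for t in expected])
# → [3.0, 4.25, 5.1875, 5.8906, 6.418, 6.8135, 7.1101]
```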
You can play with the probability values to see the changes in temperature for yourself.
For example, we interchange the initial distribution probabilities to 15% and 85%. This means that there is a 15% probability that the first day is rainy.
To this, we observe the below output:
Output: [17. 14.750002 13.062504 11.796877 10.847659 10.135745 9.60181 ]
Now it is evident that the temperature spikes as there is more probability for the first day to be sunny.
I hope you understood the basic implementation of this model. Keep reading!
https://valueml.com/hidden-markov-model-using-tensorflow/
Database Independent Development in C
libdbi brings to the UNIX C/C++ developer a functionality that has long been available to other programmers. A single binary may now be made database independent, without a tedious ODBC installation.
Windows programmers have long been able to free themselves from the bonds of a single database by using ODBC. With the porting of ODBC to UNIX, and the adoption of OS X (UNIX in six colors) by Apple, database independence becomes available to nearly all application developers.
That’s the theory, at least. Unfortunately, configuring ODBC requires administrative privileges, and configuration is decidedly non-trivial. If you’re like me, and most of the applications you write are written to be run in a web hosting environment, you can count on not having administrative access to the server.
Installing
At the time of this writing, libdbi supports PostgreSQL, MySQL, SQLite, mSQL and Oracle databases. The very first prerequisite is that you must install the client libraries for every database engine that you want to make available. In my case I installed PostgreSQL, MySQL and SQLite, the three database engines which I use the most.
libdbi comes in two parts. The first, libdbi, provides the basic library that you’ll link your programs against. It contains the framework that figures out which functions to call in the database client libraries. You can download it from:
The second part is libdbi-drivers. This package builds to shared objects, one for each database, which in turn link to their respective database client libraries. This package can be downloaded from:
libdbi must be configured and installed first. It's a fairly straightforward affair. All of the options are standard configure, except for the --with-ltdl and --with-shlib-ext options. If your system has a problem with the dlopen function call (which you won't know until you've tried to run a libdbi program), you'll need to enable the --with-ltdl flag, which uses a dlopen call provided by libtool. If you want the drivers to have an extension besides the default of .so, you'll need to specify that with the --with-shlib-ext option.
Installing libdbi-drivers
libdbi-drivers is a little more interesting to install. You must explicitly tell configure which drivers you want to include using the usual round of --with-driver flags. Every UNIX distribution seems to install these drivers in slightly different places though, so some special flags have been provided. My OpenBSD machine, for instance, places its PostgreSQL includes under /usr/local/include/postgresql and the client libraries in /usr/local/lib. The --with-driver-incdir and --with-driver-libdir options let you specify the locations of includes and client libraries if that is necessary. Generally the configure script is smart enough to look under both the /usr and /usr/local directory trees, and this kind of complication isn't usually needed.
In the current pre-1.0 version of the driver distribution, there is an undocumented step which you must take. The libraries are installed in /usr/local/lib/dbd, with the full version number as part of the file extension, just as other shared libraries use. You’ll need to create links in that folder ending in the .so file extension and pointing to the actual libraries.
In case you haven’t realized it, this all means that you’ve got a lot of shared libraries kicking around. Rather than just the necessary client libraries, you now also have all of the shared objects for the libdbi drivers, as well as the libdbi library itself. It’s a viable tradeoff, but it’s easy to see how problems could develop down the road.
Making the Connection
The library is simplicity itself to use, somewhat reminiscent of the PEAR::DB classes in PHP. The two basic data types are dbi_conn, to represent a database connection, and dbi_result, to represent a database result set. Both are typedefs for a void*.
When your program is first initialized, no database drivers are loaded. It will happily execute queries and send you back empty result sets. You initialize drivers in two steps.
First, a call to dbi_initialize() will enumerate the available drivers to the library. The argument can technically be NULL, causing the library to look in the default location. My experience has been, however, that you need to explicitly specify the folders where the drivers reside. In the default installation, or at least on my system, that is /usr/local/lib/dbd. The return value will be the number of drivers found. If 0 drivers are found, check that you specified the correct folder. If the folder is correct, make sure that you created the links to the actual shared libraries in the folder.
Once drivers have been initialized, a database object is created by calling dbi_conn_new(). The argument is the character string that describes the driver, and corresponds to the folder names in the libdbi-drivers/drivers folder. The return object is a uninitialized connection object, waiting for you to supply the connection parameters.
Parameters are set by calling dbi_conn_set_option() with the connection object, the parameter, and the parameter’s value. Common parameter names are host, username, password and dbname. For the sqlite driver, the first three are omitted and sqlite_dbdir is used instead. Examples are provided below which should clarify the parameters.
Finally, a connection is opened with dbi_conn_connect(), with the now initialized connection object as the only parameter.
Getting Results

Once connected, queries are issued with dbi_conn_query(), which returns a dbi_result object. Rows are walked with dbi_result_next_row(), and individual fields are fetched with typed accessors such as dbi_result_get_ulong() and dbi_result_get_string(). When you're done, free the result with dbi_result_free() and close the connection with dbi_conn_close(). The example below shows all of these calls in action.
An Example
This is a very simple utility that I created to query the user list for an application I’ve been working on. Most applications will be a little more complex.
#include <stdio.h>
#include <dbi/dbi.h>

int main() {
    dbi_conn db;
    dbi_result result;

    unsigned long user_id;
    const char* name;
    const char* email;
    int drivercount;

    drivercount = dbi_initialize("/usr/local/lib/dbd");
    printf("Loaded %d drivers\n", drivercount);
    db = dbi_conn_new("mysql");

    dbi_conn_set_option(db, "host", "localhost");
    dbi_conn_set_option(db, "username", "user");
    dbi_conn_set_option(db, "password", "secret");
    dbi_conn_set_option(db, "dbname", "bigapp");

    dbi_conn_connect(db);
    result = dbi_conn_query(db, "SELECT user_id, name, email FROM author");
    while (dbi_result_next_row(result)) {
        user_id = dbi_result_get_ulong(result, "user_id");
        name = dbi_result_get_string(result, "name");
        email = dbi_result_get_string(result, "email");
        printf("%2lu) %s <%s>\n", user_id, name, email);
    }

    dbi_result_free(result);
    dbi_conn_close(db);
    dbi_shutdown();

    return 0;
}
On my installation, providing that I called this userlist.c, my command line would look like this:
cc userlist.c -o userlist -I/usr/local/include -L/usr/local/lib -ldbi
As you can see, this program is blissfully short. It’s so short in fact that it makes C a very good candidate for handling some of the routine daily reporting work that every IT department has to suffer through. Generate some HTML output instead of the plain text that I showed and you start to put out very nice looking nightly reports. In later articles I’ll also discuss the use of libraries for generating PDF output. With that under your belt you’ll have a reporting tool that can rival what commercial solutions can provide.
Extending libdbi
The mechanism for adding a new driver is very straight forward. In addition to the detailed guide, the full source code of the other drivers is present, which gives you a good chance to see how others have done it before.
There are a couple of obvious candidates for inclusion. The FreeTDS library would be good for adding Microsoft SQL Server and Sybase support. The Firebird and Interbase client libraries are also good candidates. With these added to your tool kit there won’t be much that can stand in the way.
Unfortunately there really isn’t room for a full example of a driver port in this article. They’re rather long bits of code.
Conclusions
The libdbi library provides a feature for C programmers that has long been missing. Writing one program that can make use of multiple databases used to mean either using ODBC, with its inherent complications, or tying yourself to bulky technology like ADO or the Borland Database Engine. None of these solutions was a good answer for UNIX programmers, especially UNIX programmers deploying CGI programs on a hosted server.
The libdbi interface is clean, and the list of supported databases, although small, covers most of the needs of open source developers. The relatively straightforward framework for adding new drivers means that it’s not an unreasonable task to add support for databases that aren’t already included, provided client libraries and headers are available.
I’ll definitely be using libdbi in future products, both for CGI and desktop applications. If the library shows up well in production, I hope to make it one of my most frequently used tools.
http://www.devshed.com/c/a/practices/database-independent-development-in-c/3/
address@hidden transcribed 7.5K bytes: > Devan C. dvn transcribed 6.5K bytes: > > Current Gitlab status: > > > > - Gitlab is running and accessible at `` > > So here's a problem I see with this as it is right now: > I'm a git admin. Before I give people a certain kind of access, be > it for one repo only, a range of repos or the group 'gnunet', I > have a sort of checklist. Can I digitally verify to some extent that the > key sent to me matches the person? Do we have a CAA signature? etc. > Now I see already one name as 'Owner' in the gnunet group who, to > my knowledge, has never signed anything. Correct me if I'm wrong > about ic.rbow. I should have mentioned: The current "GNUnet" group was created by wldhx to test the runners, and I am planning to rename or delete it before importing all of the repos. > We can only trust each other. > Since we have this CAA in place, we need more than trust, we need > some guidelines when someone is added to which permission level > in gitlab. > Previously the communication about what happened, which steps > were followed and that there is a new committer, were betwee > 1 or 2 people involved in administration. Now potentially everyone > can do this, which is either bad or good, so at the very least > we need to communicate about new rights given. As wldhx noted, the group persmissions should work largely the same as with gitolite. I think Owners of the whole Group should be limited, as we have had it, and there can be different Owners of various repos. > > > - Registration is open. There are no guarantees on uptime, or even > > data retention (though I don't expect data to disappear). > > > > - wldhx has kindly offered two "Gitlab Runners" for running CI jobs. > > These will be added as shared runners, to be used by any projects on > > the instance. This may be changed later to only be shared by projects > > under the GNUnet namespace. > > > > **TODO** > > > > - Setup email. 
Used for registration, password resets, > > notifcations, and interaction (eg. issue threads). > > > > - Currently run using containers with docker-compose. Will switch to > > using systemd services to with the containers, removing docker-compose. > > > > - Daily remote backups. Perhaps 'firefly.gnunet.org' could be the backup > > site, hmm? > > > > - Change configured hostname (in Gitlab) to 'gitlab.gnunet.org'. > > > > - Setup redirect from 'git.gnunet.org' -> 'gitlab.gnunet.org' > > ---- > > > > Current [MantisBT -> Gitlab Issues] status: > > > > Exporting: > > - Mantis can export to CSV and "Excell" XML > > - These do not contain comments (bugnotes). It looks like there might > > be a possibility to enable them via a configuration option[2]. Not > > sure who all has admin access, whom I could coordinate with. Maybe > > easier if I can get admin rights? Grothoff, what do you think? > > > > Importing: > > - I have found only a couple of scripts[3,4] for this. They are both out > > of date, for old versions of both softwares. I have tried both to no > > avail. [4] is the most promising; It's not so old. > > I would really appreciate any help working on this. > > > > I suppose this means that we will continue to use Mantis, and disable > > issues in Gitlab for now. Any protests or ideas? > > ---- > > > > Migrating from Gitolite: > > > > For those whom are not aware, we currently use gitolite for all of the > > lovely repos in our collection. We will need to copy all of the repos to > > Gitlab, as well as duplicate permissions, and setup githooks. > > > > 1. Create namespaces/groups on our Gitlab > > > > 2. Clone repos. This can be done via the web interface "Import" option > > when creating a new repo, or the new remote can be added, and then > > pushed. The little script found here can help with getting all the > > repos from Gitolite[5] > > > > 3. Setup redirects. eg. -> > > > > > > 4. Manually replicate permissions. Will need a Gitolite admin's help > > on this. 
> > > > 5. Setup githooks. We have quite a few githooks setup, so we will > > need to recreate those. > > > > After all of that is done, I think we should be ready to switch over > > to Gitlab for at least the git management and CI/CD. > > ---- > > > > That brings us to the final update: The CI System... > > > > - We have a couple of small runners (thanks wldhx). > > > > - We have some very basic '.gitlab-ci.yml'[6] files, defining jobs. > > - I will begin expanding these in the coming weeks. > > > > **TODO** > > > > - As we build out a matrix of pipelines, we will need more resources. > > 'firefly.gnunet.org' is a logical option. In the past I've seen it > > utilized heavily by experiments. As long as we are okay with dedicating > > some CPU and RAM to runners, then I will begin setting them up. > > > > - Setup Gitlab Container Registry [7] for storing our CI artifacts. > > > > - Expand our '.gitlab-ci.yml' files to include e2e tests, builds for > > multiple architecures, and continuous delivery of packages for various > > package managers. > > ---- > > > > Wow, so that's a lot of text. A lot of people have been asking me about > > the status of Gitlab, and if and how they can help with CI. I hope this > > gives people a thorough update, and answers. I really believe Gitlab can > > be a useful software suite, despite its shortcomings. My hopes are that > > it will help increase the feedback loop between development and testing, > > as well as make it easier and more welcoming for new contributors. > > > > > > Be well, and Happy Hacking! > > - Devan > > > > > > [0] > > > > [1] > > > > [2] > > > > [3] > > [4] > > [5] > > > > [6] > > [7] > > > > > _______________________________________________ > > GNUnet-developers mailing list > > address@hidden > > >
https://lists.gnu.org/archive/html/gnunet-developers/2019-04/msg00011.html
The goto statement is used for transferring the control of a program to a given label. The syntax of goto statement looks like this:
goto label_name;
Program structure:
label1:
    ...
    ...
    goto label2;
    ...
    ...
label2:
    ...
In a program we can have any number of goto and label statements. The goto statement is followed by a label name; whenever a goto statement is encountered, control of the program jumps to the label specified in the goto statement.
goto statements are almost never used in real development, as they make your program much less readable and more error-prone. In place of goto, you can use the continue and break statements.
Example of goto statement in C++
#include <iostream>
using namespace std;

int main() {
    int num;
    cout << "Enter a number: ";
    cin >> num;
    if (num % 2 == 0) {
        goto print;
    } else {
        cout << "Odd Number";
        return 0;   // without this, control would fall through to the label below
    }
print:
    cout << "Even Number";
    return 0;
}
Output:
Enter a number: 42 Even Number
https://beginnersbook.com/2017/08/cpp-goto-statement/
I'm sorry but I'm not understanding xml. I would like to be able to create ids for all my xml elements, preferably created automatically, along with a common id (generally a name), in order to be able to reuse xml elements without describing them all again, and a way to search for them and view them.
How can this be done? Is an XML parser involved? I don't understand the structure of the namespaces, or xml ids at all. And they don't seem to have any reasonable or well-defined purpose, for what I'm trying to do.
http://forums.devshed.com/xml-programming-19/xml-namespaces-ids-parsing-968002.html
vmod_vsthrottle is a Varnish vmod for rate-limiting traffic on a single Varnish server.
It offers a simple interface for throttling traffic on a per-key basis to a specific request rate.
Keys can be specified from any VCL string, e.g. based on client.ip, a specific cookie value, an API token, etc.
The request rate is specified as the number of requests permitted over a period. To keep things simple, this is passed as two separate parameters, ‘limit’ and ‘period’.
If an optional duration ‘block’ is specified, then access is denied altogether for that period of time after the rate limit is reached. This is a way to entirely turn away a particularly troublesome source of traffic for a while, rather than let them back in as soon as the rate slips back under the threshold.
This VMOD implements a token bucket algorithm. State associated with the token bucket for each key is stored in-memory using BSD’s red-black tree implementation.
Memory usage is around 100 bytes per key tracked.
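To make the token bucket idea concrete, here is a rough Python model of one bucket (an illustrative sketch only; the VMOD itself is written in C, and the class name and explicit `now` parameter are inventions for testability):

```python
import time

class TokenBucket:
    """Allow at most `limit` requests per `period` seconds for one key."""

    def __init__(self, limit, period, now=None):
        self.capacity = float(limit)
        self.rate = limit / period            # tokens refilled per second
        self.tokens = float(limit)            # bucket starts full
        self.last = time.monotonic() if now is None else now

    def is_denied(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0                # spend one token on this request
            return False                      # under the limit: allow
        return True                           # bucket empty: deny
```

With limit=2 and period=10s, two immediate requests are allowed, the third is denied, and about five seconds later one token has refilled so a request is allowed again.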
vcl 4.0;
import vsthrottle;

backend default {
    .host = "192.0.2.11";
    .port = "8080";
}

sub vcl_recv {
    # Varnish will set client.identity for you based on client IP.
    if (vsthrottle.is_denied(client.identity, 15, 10s, 30s)) {
        # Client has exceeded 15 reqs per 10s.
        # When this happens, block altogether for the next 30s.
        return (synth(429, "Too Many Requests"));
    }

    # There is a quota per API key that must be fulfilled.
    if (vsthrottle.is_denied("apikey:" + req.http.Key, 30, 60s)) {
        return (synth(429, "Too Many Requests"));
    }

    # Only allow a few POST/PUTs per client.
    if (req.method == "POST" || req.method == "PUT") {
        if (vsthrottle.is_denied("rw" + client.identity, 2, 10s)) {
            return (synth(429, "Too Many Requests"));
        }
    }
}
BOOL is_denied(STRING key, INT limit, DURATION period, DURATION block=0)
Arguments:

- key: a unique identifier to define what is being throttled
- limit: how many requests in the specified period
- period: the time period
- block: how long to deny access outright after the limit is reached

This function (is_denied) can be used to rate limit the traffic for a specific key to a maximum of limit requests per period. If block is > 0s (0s by default), then access for key is always denied for that length of time after hitting the threshold.
A token bucket is uniquely identified by the 4-tuple of its key, limit, period and block, so using the same key multiple places with different rules will create multiple token buckets.
Example
sub vcl_recv {
    if (vsthrottle.is_denied(client.identity, 15, 10s)) {
        # Client has exceeded 15 reqs per 10s
        return (synth(429, "Too Many Requests"));
    }
    # ...
}
INT remaining(STRING key, INT limit, DURATION period, DURATION block=0)
Arguments:

- key: a unique identifier to define what is being throttled
- limit: how many requests in the specified period
- period: the time period
- block: duration to block (0s by default)
This function returns the current number of tokens for a given token bucket. This can be used to create a response header to inform clients of their current quota.
Example
sub vcl_deliver {
    set resp.http.X-RateLimit-Remaining =
        vsthrottle.remaining(client.identity, 15, 10s);
}
DURATION blocked(STRING key, INT limit, DURATION period, DURATION block)
Arguments:

- key: A unique identifier to define what is being throttled
- limit: How many requests in the specified period
- period: The time period
- block: duration to block
If the token bucket identified by the four parameters has been blocked by use of the block parameter in is_denied(), then this function will return the time remaining in the block. If it is not blocked, 0s is returned.
This can be used to inform clients how long they will be locked out.
Example
sub vcl_deliver {
    set resp.http.Retry-After =
        vsthrottle.blocked(client.identity, 15, 10s, 30s);
}
https://docs.varnish-software.com/varnish-cache-plus/vmods/vsthrottle/
Today we are going to make a real-time/live sketch-making script using OpenCV in Python. OpenCV makes it very easy for us to work with images and videos on the computer. We will also make use of NumPy and Matplotlib to build this live sketch app.
Live Sketch Algorithm OpenCV
- Capturing Real-time Video from the source example – computer’s camera
- Reading each frame of the video, so that we can make manipulations on the frame.
- Converting each frame from colored to grayscale, using OpenCV functions.
- Making Use of Blurring on the grayscale image to remove all the noises on the picture/image. ( Gaussian Blur )
- Detecting Edges of the blurred image, for making it like a sketch of the person or object. (Canny Edge)
- Thresholding the Edge Detected image.
Learn Image Manipulations if you have any problem with any of the steps.
Source Code at Divyanshu Shekhar Github.
Installing Packages for Live Sketch OpenCV
Install OpenCV, as its the most important package for computer vision in Python.
pip install opencv-python
If you have installed OpenCV, No need to install NumPy as it already comes pre-packed with OpenCV. Otherwise,
pip install numpy
Install Matplotlib for Visualization of data. (Optional)
pip install matplotlib
Importing Packages for Live Sketch OpenCV
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
OpenCV Python Video Capture
Here we are making a livesketch() function in python for recording / Capturing the video from the front camera of the computer/Laptop.
cap = cv.VideoCapture(0)
The VideoCapture() function is used to capture video either from a camera or from an already recorded video. The returned cap object's isOpened() method reports a boolean value (True if the video source was opened successfully, False otherwise). VideoCapture() takes one parameter:
- 0 – Front Camera
- 1 – Rear Camera
- Source of the video – Example (Absolute path / Relative Path) – “D:/Video/test.mp4”
If the capture was opened successfully, the read() function is applied to it and returns two things:
- Boolean Value (Was it successfully able to read the frame, If yes)
- Returns the frame of the video.
Each Frame is sent to a sketch() function that takes the frame as an input parameter and manipulates it to return a sketch of the frame.
Don’t forget to release the captured video at the end of the while loop. Otherwise, it will consume all your machine’s memory.
def liveSketch():
    cap = cv.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        cv.imshow("Live Sketch", sketch(frame))
        if cv.waitKey(1) == 27:
            break
    cap.release()
    cv.destroyAllWindows()
OpenCV Sketch Function
The First Step is to convert the received frame from RGB / Colored to a grayscale image, using the OpenCV cvtColor() function. The function takes two parameters:
- Source – The image to be converted
- Type – Type of conversion. (In this case BGR2GRAY)
The image is then blurred using Gaussian blur. The kernel size used in this case is 5×5. To learn more about kernels, read the Image Manipulations article linked above.
The Grayscaled image is blurred in order to remove any kind of noise in the image so that it is easy for edge detection.
The Blurred image is then used for edge detection using the canny edge detection function in OpenCV. You Can learn more about the Canny Edge Detection method in Image Manipulation OpenCV Tutorial. Link Above.
The Edge Detected image then is thresholded and the method used is THRESH_BINARY_INV to get the inverse of the binary threshold. In Simple words, the sketch will be of black color on a white background.
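THRESH_BINARY_INV on a single-channel image is just an elementwise comparison, which a few lines of NumPy can illustrate (a sketch of the behavior, not OpenCV's actual implementation; the helper name is made up, and the 120/255 values match the code below):

```python
import numpy as np

def binary_inv(gray, thresh=120, maxval=255):
    # Pixels strictly above the threshold become 0; everything else
    # becomes maxval -- the inverse of a plain binary threshold,
    # giving a black sketch on a white background.
    return np.where(gray > thresh, 0, maxval).astype(np.uint8)

edges = np.array([[0, 130],
                  [200, 50]], dtype=np.uint8)
print(binary_inv(edges))
# → [[255   0]
#    [  0 255]]
```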
After all these manipulations the thresholded mask is returned to the livesketch() function and the image is shown on the window, till the user manually quits the window by pressing ESC.
def sketch(image):
    # Convert image to grayscale
    img_gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
    # Clean up the image using Gaussian blur
    img_gray_blur = cv.GaussianBlur(img_gray, (5, 5), 0)
    # Extract edges
    canny_edges = cv.Canny(img_gray_blur, 30, 70)
    # Do an inverted binarization of the image
    ret, mask = cv.threshold(canny_edges, 120, 255, cv.THRESH_BINARY_INV)
    return mask
https://hackthedeveloper.com/live-sketch-opencv-python/
django-board 0.2.1
A Django app for managing an organisation's board members page.
A pluggable Django app for managing board members of an organisation. Board members have titles, mini-biographies, and photos.
Installation is simple:
pip install django-board
Getting started
Add the following to your INSTALLED_APPS:
- sorl.thumbnail (if not already present);
- board.
Sync database changes, using:
python manage.py syncdb
django-board ships with no views, templates or URLs. To include a list of board members in a template, use the template tag board_members:
{% load board_tags %}
{% board_members as members %}
{% if members %}
<ul>
{% for member in members %}
  <li>{{ member.name }}</li>
{% endfor %}
</ul>
{% endif %}
Running the tests
You can run the test suite using:
python manage.py test board
- Author: Dominic Rodger
- License: BSD
- Package Index Owner: dominicrodger
- DOAP record: django-board-0.2.1.xml
https://pypi.python.org/pypi/django-board
Like all Norse gods and Marvel villains, every open source project has a good origin story.
The new episode of “Grafana’s Big Tent” podcast provides that behind-the-code look at building Grafana Mimir, the horizontally scalable, highly performant open source time series database that Grafana Labs debuted in March.
In our “Grafana Mimir: Maintainers tell all” episode, Big Tent hosts Tom Wilkie and Mat Ryer welcome Marco Pracucci and Cyril Tovena, two engineers from Grafana Labs who are also maintainers of Mimir, for a deep-dive discussion into the planning and production of our newest open source project.
In this episode, you’ll learn why we launched Mimir, how we scaled to 1 billion active series, all about the new features like the split-and-merge compactor, future features the team is already working on, and why we named the project Mimir (“I was actually personally lobbying for Mjölnir, Thor’s hammer,” says Tom).
Note: This transcript has been edited for length and clarity.
What is Grafana Mimir?
Marco: Mimir is a next-generation time series database for Prometheus and beyond. It’s a time series database we have built at Grafana Labs, which is highly available, horizontally scalable, supports multi-tenancy, has durable storage … and blazing fast query performances over long periods of time.
Tom: You mentioned high scalability. What does that mean? Where does that come from?
Marco: We are seeing an increasing number of customers needing massive scale when it comes to collecting and querying metrics. And we see this growing need across the industry to scale to a number of metrics, which just a couple of years ago were unimaginable. When we are running Grafana Mimir, we are talking about more than a billion active time series for a single tenant, for a single customer.
Tom: A billion.
Marco: I didn’t even know how many zeros are in a billion.
Tom: This is an American billion, right? So it’s nine. It’s not a British billion, which has 12. … Cyril, what is the blazing fast performance bit of Mimir?
Cyril: There are more and more customers nowadays that want to query across multiple clusters, for instance. So this high cardinality of data across a single query exists, and Mimir has been designed to be able to fulfill this need. So you can query across a lot of series, across a lot of clusters, across a lot of namespaces, and you’re going to be able, for instance, get information across your cluster or across multiple clusters.
Tom: I always quote, when people ask me, that the techniques we’ve used have made certain high cardinality queries up to 40 times faster.
Marco: It’s increasing every time we talk about it. [laughs]
Tom: Is it?
Marco: Yeah, we started with 10 then 20, 30, 40. It’s getting bigger.
Tom: This was months ago. In the results I saw, there was one query that was 40 times faster.
Marco: There are some edge cases that get even faster. But what we observe typically is about 10x faster with query sharding.
Mat: What’s query sharding? What’s going on there?
Cyril: The idea is that we will parallelize a query. So until now we were actually parallelizing a query just by day or by time. But in the case, as Marco said, where there's a billion active time series in a cluster, then by time is not enough anymore. So you want to be able to split by data, and we call each split a shard. A shard is actually a set of series. So what we are going to do is we're going to actually execute PromQL on a set of selected series. For instance, we're going to split it 16 ways, and each shard will only work on 1/16th of the data. And then we're going to recombine the data back on the front end and you're going to be able to speed up the query by 16x — or 40x apparently.
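The split-and-recombine idea described here can be sketched in a few lines of Python. This is not Mimir's code, just an illustration of the general technique, with made-up series labels: each series is assigned to a shard by hashing its label set, a simple aggregation is evaluated per shard, and the partial results are merged as the query front end would.

```python
import hashlib

def shard_of(series_labels, num_shards):
    """Assign a series to a shard by hashing its label set."""
    digest = hashlib.sha256(series_labels.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

def sharded_sum(series, num_shards=16):
    """Evaluate sum() as num_shards partial queries, then merge them."""
    partials = [0.0] * num_shards
    for labels, samples in series.items():
        # Each "querier" only works on the series belonging to its shard.
        partials[shard_of(labels, num_shards)] += sum(samples)
    # The query front end recombines the partial results.
    return sum(partials)

series = {
    '{job="api", instance="a"}': [1.0, 2.0],
    '{job="api", instance="b"}': [3.0],
    '{job="db", instance="c"}': [4.0, 5.0],
}
print(sharded_sum(series))  # same result as an unsharded sum: 15.0
```

In the real system the shards are evaluated on different machines or CPU cores in parallel, which is where the speedup comes from; here they are just slots in a list.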
“We’re always trying to find ways to release [features] as open source. So one of the reasons we’ve done Mimir is to get the core technology that we’ve built out there and in front of more people.”
—Tom Wilkie
Scalability limits: Mimir vs. Cortex vs. Thanos
Marco: We have found different limits [in scalability]. First of all to understand these limitations, we need to understand how the data is stored. Cortex and Thanos and also Mimir use Prometheus TSDB to store the metrics data in long-term storage. So basically Prometheus TSDB partitions data by time. And for a given period of time, the data is stored into a data structure, which is called the block. And again, inside the block, we have an index used to query this data and the actual time series data which are compressed chunks of data. Now we have found some limitations. There are well-known limits in the TSDB index, like the index can’t grow above 64 GB or even inside the index, there are multiple sections and some of the sections can’t be bigger than 4GB.
In the end, it means that you have a limit on how many metrics you can store for a given tenant, or for a given customer. With Mimir, we have introduced a new compactor, which we call the split-and-merge compaction algorithm, which basically allows us to overcome these limitations. Instead of having one single block for a given time period for a customer or a tenant, we shard the data into multiple blocks, and we can shard the data into as many blocks as we want, overcoming the single block limitation or the single block index limitation. Now another issue we have found, which again, it’s well known is that ingesting data, even a very large amount of data is not that complicated, but when it comes to querying back this data fast, things get more tricky.
One major scalability limit we had was that the PromQL is single thread. So if you run a single query, a complex query, a query that is hitting a large amount of metrics, it doesn’t matter how big your machine is. It doesn’t matter how many CPUs you have. The engine will just use one single CPU core. We actually wanted to take full advantage of all the computation power we have or our customers have. And what we have is query sharding, which allows us to run a single query across multiple machines or across multiple CPU cores.
Tom: With the new split-and-merge compactor, what are the limits now? Like how large of a cluster could I build or a tenant could I have?
Marco: There’s no system which scales infinitely. There are always limitations in every system. We are still running Grafana Mimir with up to 1 billion active time series for a single tenant. A single tenant is key in this context, because if you have a billion time series, but across a thousand different tenants, each tenant then is pretty small, right? It’s just a million series per tenant. I’m mentioning a billion time series for a single tenant because that’s the single unit, which is harder to split and to shard in the context of Cortex, Thanos, or even Mimir.
Tom: So we’ve tested up to a billion, and we don’t think it’s infinite. It’s somewhere between a billion and infinite.
Cyril: And we tested it with Grafana k6, too. So we created an extension in k6 for sending Prometheus remote write data. All of this is open source, so anyone can try it at home and see how it scales. You just need a lot of CPU to reach the power of sending a billion series.
Marco: And memory. [laughs]
Tom: We haven’t mentioned Grafana k6 in this podcast yet. What is it?
Cyril: Grafana k6 is our load testing tool. So there are two products. There’s one product, which is the k6 CLI, where you can define a plan. With the CLI, we automatically load test your application by running your test plan. So it could be a plan that sends metrics, but it could also be load testing your shop website. The other product is the Grafana k6 Cloud, which takes the same plan but it runs in the cloud so you can actually scale your load test.
Mat: k6 uses testify, by the way, the popular go package for testing.
Tom: Oh, and why’d you mention that here, Mat.
Mat: Just because Prometheus also uses it…. Oh, and I created it.
Tom: Oh, there we go.
Mat: Essentially, without me, no one writing go code can be sure their code works. [laughs]
Tom: You reduced every if statement from three lines to one.
Mat: I’ve saved so many lines. Think of how much disk space.
Tom: All those curly braces that you’ve set free!
Why build a new open source project?
Tom: Why did we choose to build Mimir as a separate project? Why didn’t we just contribute these improvements back to Cortex?
Cyril: I think there are two answers. One is that the metrics space is very competitive. And so I think we wanted to build a new project. We were the biggest maintainers and contributors of Cortex, and having our own project gives us more agility. It also helps us make sure that other competitors are not taking advantage of our work for free. The other answer is we had a lot of features that were closed source, and we wanted to make them open source to allow other people to use and experiment with those features.
Marco: It’s fair to say that was a very difficult decision, and it’s all about trade off. It’s about finding a model, which allows us to succeed commercially while at the same time keeping most of our code open. Launching a new first-party project at Grafana Labs with Mimir, we think that fits quite well in the trade-off in this decision.
Tom: A lot of the features we’ve talked about adding to Mimir, a lot of the code we’ve built, I guess it’s worth saying that we had done all of this before for our Grafana Enterprise products and for our Grafana Cloud product. A lot of these ideas were being built in closed source and as a company at Grafana Labs, we really want these things to be open source. And we’re always trying to find ways to release them as open source. So partly one of the reasons we’ve done Mimir is getting the core technology that we’ve built out there and in front of more people.
Marco: When we launched Mimir, a recurring question we got from Cortex users and community users was, "Can I jump on Mimir? Can I upgrade? How stable is it?" And the cool thing is that the features we have released in Mimir have been running in production for months at Grafana Labs at any scale — from small clusters to very big clusters, including the 1 billion series cluster we mentioned before. And that's why we have confidence in the stability of this system.
Cyril: It’s battle-tested.
“We want to keep committing to our big tent philosophy … Right now, Mimir is focused on Prometheus metrics, but we want to support OpenTelemetry or Graphite or Datadog metrics in Mimir.”
— Marco Pracucci
Where did the name Mimir come from?
Tom: We’ve got this LGTM strategy, like logs, graphs, traces, and metrics. Loki, the logging system begins with L, Grafana the graphing system with G, Tempo, the tracing system with T … and Cortex, the metrics system with a C. So it had to begin with an M. And then we went back to our Scandinavian roots. We were looking for a word that began with M that came from mythology and that’s where we came up with Mimir. I was actually personally lobbying for Mjölnir, Thor’s hammer.
Mat: That’s cool. Difficult to spell though.
Tom: So Mimir is a figure in Norse mythology renowned for his knowledge and wisdom who was beheaded during a war. Odin carries around Mimir’s head and it recites secret knowledge and counsel to him.
Mat: And is he going to end up being a Marvel baddie as well?
Tom: Probably.
New features coming to Grafana Mimir
Mat: What are the future plans? Can we talk and give people a bit of a sneak peek?
Marco: We have big plans. So first of all, I think we want to keep committing to our big tent philosophy, and we want Mimir to be a general purpose time series database. Right now, Mimir is focused on Prometheus metrics, but we want to support OpenTelemetry or Graphite or Datadog metrics in Mimir. That’s something we are already working on and will soon be available in Mimir.
Cyril: So I love query performance. That’s why I’m going to talk about two improvements that we want to do in the future. So in Loki LogQL, we are looking into splitting instant range queries. So if you have a range vector with a very large duration, for instance 30 days, that can be very slow. We want to be able to split instant queries. We do that in Loki right now, because the amount of data for 30 days can be tremendous compared to the amount of samples you can have in metrics. We’re going to port [the feature] back into Mimir definitely.
When we worked on query sharding with Marco, we discovered a couple of problems and implementations that we wanted to do. We want to make the TSDB shard aware. So being able to actually request a specific shard from the index or figure out the data for a specific shard from the beginning. Obviously I think the best way to get that into Mimir will be to upstream that into Prometheus so this is something that we can definitely try to do.
Marco: Another area where you may expect some big improvements in the near future is around the Mimir operability. We already mentioned that we simplified a lot of configuration. It’s very easy to get Mimir up and running. But like any distributed system, there’s still some effort required to maintain the system over time — think about version upgrades or think about scaling or fine-tuning the limits for different tenants, stuff like this. So one of my big dreams around the project is to have the Mimir autopilot and trying to reach as close as possible to zero operations in order to run and maintain a Mimir cluster at scale.
Mat: Yes, please!
Marco: This is something which is very easy to say and very difficult to do, but we’ve got some ideas. There are some ongoing conversations at Grafana Labs on how we could significantly improve the operations of scale.
Cyril: That’s super interesting. We currently have the Red Hat team, who actually became members of the Loki project, and they are working on an operator for Loki. I think they have the same goal as you just described, Marco. They want to make it super simple to operate Loki at scale, especially upgrade the cluster or maintain the cluster. Maybe that’s something that could also be reused.
Marco: I think one of the very cool things about working at Grafana Labs is that there’s a lot of cross team [collaboration]. Just to mention one: Cyril built query sharding into Loki before we built it into Mimir. Then he came to me and to other people in the Mimir team with this baggage of knowledge around how to successfully build query sharding. And then we built it into Mimir as well. We learn from each other, and the autopilot will be another great opportunity to learn from other teams.
Cyril: We actually built [query sharding in Mimir] in a better way than in Loki. So now I’m a bit jealous.
Tom: Well, you have to go and backport it back to Loki.
Cyril: Well, that’s what I’m doing right now.
Tom: And then when you figure out how to build it better in Loki than in Mimir, you’ve got to come back and bring your improvements back to Mimir.
Don’t miss any of the latest episodes of “Grafana’s Big Tent”! You can now subscribe to our new podcast on Apple Podcasts and Spotify.
https://grafana.com/blog/2022/05/03/grafana-mimir-maintainers-tell-all/
Let's briefly talk about the basic properties of the maximum heap:
- The maximum heap must be a complete binary tree
- When the parent node is n, the left child is n * 2 + 1 and the right child is n * 2 + 2
- When the child is n, its parent node is (n - 1) / 2, using integer division; this is very important and will be used later in the initialization
- The parent node is greater than or equal to the left child and the right child, but the left child is not necessarily greater than the right child
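These index relations can be checked mechanically. A quick sketch (in Python for brevity, since only arithmetic is involved) verifies that the child formulas and the parent formula are inverses of each other:

```python
def left(n):
    return n * 2 + 1

def right(n):
    return n * 2 + 2

def parent(n):
    return (n - 1) // 2  # integer division

# For every node, both of its children point back to it as parent.
for n in range(100):
    assert parent(left(n)) == n
    assert parent(right(n)) == n

print(parent(6))  # 2, matching the example below
```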
After understanding the basic properties above, you can first look at how to initialize a sequence to the maximum heap.
Construction of the largest heap
Idea: The process is like bubbling. Starting from the parent node with the highest index, check whether it satisfies the max-heap property. If not, adjust it, and after the adjustment check whether the adjusted child node still satisfies the max-heap property; if it does not, keep traversing down and adjusting (this part is explained in the examples below). Once a node is satisfied, continue by checking the previous parent node, proceeding this way until node 0.
For example, here is an array: x[] = {2, 5, 3, 9, 7, 1, 6}, and the corresponding tree is:
The sequence length is 7 and the maximum index is 6. The largest parent node index is (6 - 1) / 2 = 2 (see the third property above), and the corresponding value is 3.
Its left child is 1 and its right child is 6; the right child is larger than the parent node, so these two nodes should be swapped, giving:
After that node is adjusted, look at the previous parent node: its index is 1 and its value is 5.
Its left child is 9 and its right child is 7, so the parent node should be swapped with the left child, giving:
Moving forward, the previous parent node is index 0 with value 2; its left child is 9 and its right child is 6, which does not satisfy the max heap. The parent node is swapped with the left child, giving:
Before the exchange the left child was 9; now it is 2, and it no longer satisfies the max-heap property, because it has a child larger than itself. This is exactly the situation mentioned in parentheses above: after an adjustment, you need to check whether the adjusted node still satisfies the max-heap property.
So we continue adjusting the adjusted node:
At this point, a maximum heap is initialized!
Heap sort
In fact, after understanding how the maximum heap is constructed, heap sorting is easy.
Think about it: after the maximum heap is constructed, you get the maximum value directly as x[0].
If you exchange x[0] with the last element and then rebuild the heap from the remaining elements as before, won't you find the second largest number?
At this point the second largest number is x[0]. Swap it with the last element still participating in the sort, and then re-heapify the remaining elements to get the third largest number.
If you continue this loop, the array ends up sorted.
Following the example, just swap 9 with the last number in the sorting process to get:
At this time, there are only 3, 7, 6, 5, 2, and 1.
Because x[0] has been changed, it obviously no longer satisfies the max-heap property, so we adjust x[0] and get:
After x[0] is exchanged with x[1], the adjusted x[1] does not satisfy the max heap either, so it is adjusted again:
Now the entire tree satisfies the max heap, and the maximum value participating in the sort, x[0], is now 7.
Therefore x[0] is exchanged with the last element currently participating in the sort, and we get:
At this point only 1, 5, 6, 3, 2 still participate in the sort.
Follow the steps above to cycle again, so I won't write it here, just put the picture directly:
On the code:
#include <stdio.h>

// Print the array
void print(int *array, int len) {
    for (int i = 0; i < len; i++) {
        printf("%d ", array[i]);
    }
    printf("\n");
}

// Swap two values
void swap(int *array, int i, int j) {
    int temp = array[i];
    array[i] = array[j];
    array[j] = temp;
}

// Sift down from the given parent node:
// check whether this parent satisfies the max-heap property,
// and if not, adjust its children downwards.
void sort(int *array, int father, int len) {
    for (int lchild = father * 2 + 1; lchild < len; lchild = father * 2 + 1) {
        int k = lchild;              // let k point at the left child first
        int rchild = lchild + 1;
        if ((rchild < len) && (array[rchild] > array[lchild])) {
            k = rchild;              // a right child exists and is larger, so update k
        }
        // k now points at the larger of the two children
        if (array[k] > array[father]) {
            swap(array, k, father);  // swap the parent and child values
            father = k;              // check whether the adjusted node k still satisfies the max heap
        } else {
            break;                   // the current node needs no adjustment
        }
    }
}

int main(void) {
    int x[] = {2, 5, 3, 9, 7, 1, 6};
    int len = sizeof(x) / sizeof(int);
    print(x, len);                   // print the original sequence

    // The largest child index is len - 1, so its parent is (len - 1 - 1) / 2
    for (int i = (len - 2) / 2; i >= 0; i--) {
        sort(x, i, len);
    }
    print(x, len);                   // print the max heap after initialization

    for (int i = len - 1; i > 0; i--) {
        swap(x, 0, i);               // move the current maximum to the end,
        sort(x, 0, i);               // then re-heapify the remaining elements
    }
    print(x, len);                   // print the sorted sequence
    return 0;
}

The final output is:

2 5 3 9 7 1 6
9 7 6 5 2 1 3
1 2 3 5 6 7 9
http://www.itworkman.com/75099.html
Our dotMemory Example
For example, I have the following code hidden away in a console application.
internal class Program
{
    private static readonly List<Car> Cars = new List<Car>();

    public static void Main(string[] args)
    {
        var stringGenerator = new RandomStringGenerator();
        var yearGenerator = new Random();
        while (true)
        {
            var year = yearGenerator.Next(1900, 2020);
            var make = stringGenerator.Generate();
            var model = stringGenerator.Generate();
            Cars.Add(new Car { Make = make, Model = model, Year = year });
            Console.WriteLine($"{Cars.Count}: year:{year} make:{make} model:{model}");
        }
    }
}
This is an infinite loop that adds Car instances to a static list, causing a memory leak. If I had a large application and needed to find out what was happening, I could start the application in Visual Studio with a profiler attached to it on startup.
After running my application, the following memory profile was generated.
As you can see, the total amount of memory used continued to climb until I killed the process. While the process was running I took two snapshots so that I could dive into the details of the profile.
Looking into Snapshot #2, we first get a general overview from dotMemory's automatic memory inspection. As you can see, the types String and Car make up a large portion of the heap size.
I can also change views on the analysis to get a better idea of how the heap is organized. When I group by type, I again see that String and Car make up a large number of instances. If I stopped at this point, I might think that I have two memory leaks: one case where many instances of String are leaking, and another where Car is leaking.
Fortunately, dotMemory provides other views of the analysis so that I can pinpoint the issue. If I change the snapshot profile view to "Dominators," I'm treated to a nice chart and tree which give a better picture.
It turns out that nearly the entire memory profile is dominated by a list of Car models. The strings we were seeing in the other views were just children of the Car instances, and not a separate leak.
Now that I know a list of Car models is the source of the memory leak, I can determine where the leak is occurring in my code.
If I click on instances, I see the Car list at the top.
When I click on the Car list/array, I'm taken to the analysis of the instance. The analysis again provides several metrics on an instance in memory, but I am interested in knowing where the instance can be found.
If I click on the "Create Stack Trace" tab, I can see that it can be found in the Program.Main function in the MemoryLeak namespace.
A memory leak like the example just provided would be obvious to anyone executing the application because it would fail quickly and consistently. It is much harder to detect a leak that takes days or weeks to cause a crash…or a leak that doesn’t necessarily cause a crash at all but is just taking up a lot of memory.
Fortunately, dotMemory can be used to profile applications running either inside or outside of Visual Studio. The following are some of the profiling options.
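The snapshot-and-compare workflow shown above is specific to dotMemory and .NET, but the underlying idea generalizes: take a snapshot before and after the suspect activity, then look at which allocation sites dominate the growth. As a rough, language-agnostic analogue (not dotMemory), here is the same idea using Python's standard-library tracemalloc:

```python
import tracemalloc

leak = []  # stand-in for the static Cars list

tracemalloc.start()
before = tracemalloc.take_snapshot()

for i in range(10_000):  # the "leaking" loop
    leak.append(("make", "model", 1900 + i % 120))

after = tracemalloc.take_snapshot()
# Diff the snapshots; the largest positive size_diff points
# at the allocation site responsible for the growth.
top = after.compare_to(before, "lineno")
print(top[0])
```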
Other Options
While I presented the dotMemory profiler by JetBrains, please be aware that there are many other great alternatives.
Red-gate has a profiler with similar features and price point:
Microsoft provides a free CLR profiler for .NET 4:
Visual Studio 17 has a built-in memory profiler and there are a number of free profilers available on the Microsoft marketplace.
Wrap Up
It is easy to forget about memory consumption in large-scale applications. Using a profiler makes what was once a daunting task much easier and ultimately helps with the scalability and costs of enterprise applications.
Hopefully, I have convinced you to consider profiling memory usage in your applications!
https://keyholesoftware.com/2019/02/28/net-memory-management-with-dotmemory/
Installing matplotlib
Before experimenting with matplotlib, you need to install it. Here we introduce some tips to get matplotlib up and running without too much trouble.
How to do it...
We have three likely scenarios: you might be using Linux, OS X, or Windows.
Linux
Most Linux distributions have Python installed by default, and provide matplotlib in their standard package list. So all you have to do is use the package manager of your distribution to install matplotlib automatically. In addition to matplotlib, we highly recommend that you install NumPy, SciPy, and SymPy, as they are supposed to work together. The following list consists of commands to enable the default packages available in different versions of Linux:
- Ubuntu: The default Python packages are compiled for Python 2.7. In a command terminal, enter the following command:
sudo apt-get install python-matplotlib python-numpy python-scipy python-sympy
- ArchLinux: The default Python packages are compiled for Python 3. In a command terminal, enter the following command:
sudo pacman -S python-matplotlib python-numpy python-scipy python-sympy
If you prefer using Python 2.7, replace python with python2 in the package names.
- Fedora: The default Python packages are compiled for Python 2.7. In a command terminal, enter the following command:
sudo yum install python-matplotlib numpy scipy sympy
There are other ways to install these packages; in this article, we propose the most simple and seamless ways to do it.
Windows and OS X
Windows and OS X do not have a standard package system for software installation. We have two options—using a ready-made self-installing package or compiling matplotlib from the code source. The second option involves much more work; it is worth the effort to have the latest, bleeding edge version of matplotlib installed. Therefore, in most cases, using a ready-made package is a more pragmatic choice.
You have several choices for ready-made packages: Anaconda, Enthought Canopy, Algorete Loopy, and more! All these packages provide Python, SciPy, NumPy, matplotlib, and more (a text editor and fancy interactive shells) in one go. Indeed, all these systems install their own package manager and from there you install/uninstall additional packages as you would do on a typical Linux distribution. For the sake of brevity, we will provide instructions only for Enthought Canopy. All the other systems have extensive documentation online, so installing them should not be too much of a problem.
So, let's install Enthought Canopy by performing the following steps:
- Download the Enthought Canopy installer from. You can choose the free Express edition. The website can guess your operating system and propose the right installer for you.
- Run the Enthought Canopy installer. You do not need to be an administrator to install the package if you do not want to share the installed software with other users.
- When installing, just click on Next to keep the defaults. You can find additional information about the installation process at.
That's it! You will have Python 2.7, NumPy, SciPy, and matplotlib installed and ready to run.
Plotting one curve
The initial example of Hello World! for a plotting software is often about showing a simple curve. We will keep up with that tradition. It will also give you a rough idea about how matplotlib works.
Getting ready
You need to have Python (either v2.7 or v3) and matplotlib installed. You also need to have a text editor (any text editor will do) and a command terminal to type and run commands.
How to do it...
Let's get started with one of the most common and basic graph that any plotting software offers—curves. In a text file saved as plot.py, we have the following code:
import matplotlib.pyplot as plt

X = range(100)
Y = [value ** 2 for value in X]
plt.plot(X, Y)
plt.show()
Assuming that you installed Python and matplotlib, you can now use Python to interpret this script. If you are not familiar with Python, this is indeed a Python script we have there! In a command terminal, run the script in the directory where you saved plot.py with the following command:
python plot.py
Doing so will open a window as shown in the following screenshot:
The window shows the curve Y = X ** 2 with X in the [0, 99] range. As you might have noticed, the window has several icons, some of which are as follows:
- Save icon: This icon opens a dialog, allowing you to save the graph as a picture file. You can save it as a bitmap picture or a vector picture.
- Pan/zoom icon: This icon allows you to translate and scale the graphics. Click on it and then move the mouse over the graph. Clicking the left mouse button will translate the graph according to the mouse movements; clicking the right mouse button will modify the scale of the graphics.
- Home icon: This icon will restore the graph to its initial state, canceling any translation or scaling you might have applied before.
How it works...
Assuming that you are not very familiar with Python yet, let's analyze the script demonstrated earlier.
The first line tells Python that we are using the matplotlib.pyplot module. To save on a bit of typing, we make the name plt equivalent to matplotlib.pyplot. This is a very common practice that you will see in matplotlib code.
The second line creates a list named X, with all the integer values from 0 to 99. The range function is used to generate consecutive numbers. You can run the interactive Python interpreter and type the command range(100) if you use Python 2, or the command list(range(100)) if you use Python 3. This will display the list of all the integer values from 0 to 99. In both versions, sum(range(100)) will compute the sum of the integers from 0 to 99.
The third line creates a list named Y, with all the values from the list X squared. Building a new list by applying a function to each member of another list is a Python idiom, named list comprehension. The list Y will contain the squared values of the list X in the same order. So Y will contain 0, 1, 4, 9, 16, 25, and so on.
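The three idioms used so far (range, sum, and list comprehensions) can be tried directly in an interactive interpreter:

```python
X = list(range(5))               # [0, 1, 2, 3, 4]
Y = [value ** 2 for value in X]  # [0, 1, 4, 9, 16]
print(X)
print(Y)
print(sum(range(100)))           # 4950, the sum of the integers 0..99
```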
The fourth line plots a curve, where the x coordinates of the curve's points are given in the list X, and the y coordinates of the curve's points are given in the list Y. Note that the names of the lists can be anything you like.
The last line shows a result, which you will see on the window while running the script.
There's more...
So what we have learned so far? Unlike plotting packages like gnuplot, matplotlib is not a command interpreter specialized for the purpose of plotting. Unlike Matlab, matplotlib is not an integrated environment for plotting either. matplotlib is a Python module for plotting. Figures are described with Python scripts, relying on a (fairly large) set of functions provided by matplotlib.
Thus, the philosophy behind matplotlib is to take advantage of an existing language, Python. The rationale is that Python is a complete, well-designed, general purpose programming language. Combining matplotlib with other packages does not involve tricks and hacks, just Python code. This is because there are numerous packages for Python for pretty much any task. For instance, to plot data stored in a database, you would use a database package to read the data and feed it to matplotlib. To generate a large batch of statistical graphics, you would use a scientific computing package such as SciPy and Python's I/O modules.
Thus, unlike many plotting packages, matplotlib is very orthogonal—it does plotting and only plotting. If you want to read inputs from a file or do some simple intermediary calculations, you will have to use Python modules and some glue code to make it happen. Fortunately, Python is a very popular language, easy to master and with a large user base. Little by little, we will demonstrate the power of this approach.
Using NumPy
NumPy is not required to use matplotlib. However, many matplotlib tricks, code samples, and examples use NumPy. A short introduction to NumPy usage will tell you the reason.
Getting ready
Along with having Python and matplotlib installed, you also need NumPy installed. You have a text editor and a command terminal.
How to do it...
Let's plot another curve, sin(x), with x in the [0, 2 * pi] interval. The only difference with the preceding script is the part where we generate the point coordinates. Type and save the following script as sin-1.py:
import math
import matplotlib.pyplot as plt

T = range(100)
X = [(2 * math.pi * t) / len(T) for t in T]
Y = [math.sin(value) for value in X]

plt.plot(X, Y)
plt.show()
Then, type and save the following script as sin-2.py:
import numpy as np
import matplotlib.pyplot as plt

X = np.linspace(0, 2 * np.pi, 100)
Y = np.sin(X)

plt.plot(X, Y)
plt.show()
Running either sin-1.py or sin-2.py will show exactly the same graph:
How it works...
The first script, sin-1.py, generates the coordinates for a sinusoid using only Python's standard library. The following points describe the steps we performed in the script earlier:
- We created a list T with numbers from 0 to 99—our curve will be drawn with 100 points.
- We computed the x coordinates by simply rescaling the values stored in T so that x goes from 0 to 2 pi (the range() built-in function can only generate integer values).
- As in the first example, we generated the y coordinates.
The second script, sin-2.py, does exactly the same job as sin-1.py—the results are identical. However, sin-2.py is slightly shorter and easier to read, since it uses the NumPy package.
NumPy is a Python package for scientific computing. matplotlib can work without NumPy, but using NumPy will save you lots of time and effort. The NumPy package provides a powerful multidimensional array object and a host of functions to manipulate it.
The NumPy package
In sin-2.py, the X list is now a one-dimensional NumPy array with 100 evenly spaced values between 0 and 2 pi. This is the purpose of the function numpy.linspace. This is arguably more convenient than computing as we did in sin-1.py. The Y list is also a one-dimensional NumPy array whose values are computed from the coordinates of X. NumPy functions work on whole arrays as they would work on a single value. Again, there is no need to compute those values explicitly one-by-one, as we did in sin-1.py. We have a shorter yet readable code compared to the pure Python version.
There's more...
NumPy can perform operations on whole arrays at once, saving us much work when generating curve coordinates. Moreover, using NumPy will most likely lead to much faster code than the pure Python equivalent. Easier to read and faster code, what's not to like? The following is an example where we plot the binomial x^2 -2x +1 in the [-3,2] interval using 200 points:
import numpy as np
import matplotlib.pyplot as plt

X = np.linspace(-3, 2, 200)
Y = X ** 2 - 2 * X + 1.

plt.plot(X, Y)
plt.show()
Running the preceding script will give us the result shown in the following graph:
Again, we could have done the plotting in pure Python, but it would arguably not be as easy to read. Although matplotlib can be used without NumPy, the two make for a powerful combination.
Plotting multiple curves
One of the reasons we plot curves is to compare those curves. Are they matching? Where do they match? Where do they not match? Are they correlated? A graph can help to form a quick judgment for more thorough investigations.
How to do it...
Let's show both sin(x) and cos(x) in the [0, 2pi] interval as follows:
import numpy as np
import matplotlib.pyplot as plt

X = np.linspace(0, 2 * np.pi, 100)
Ya = np.sin(X)
Yb = np.cos(X)

plt.plot(X, Ya)
plt.plot(X, Yb)
plt.show()
The preceding script will give us the result shown in the following graph:
How it works...
The two curves show up with a different color automatically picked by matplotlib. We use one function call, plt.plot(), per curve; thus, we have to call plt.plot() twice here. However, we still have to call plt.show() only once. The function calls plt.plot(X, Ya) and plt.plot(X, Yb) can be seen as declarations of intent: we want to link each of those two sets of points with its own curve.
matplotlib will simply keep note of this intention but will not plot anything yet. The plt.show() call, however, signals that we want to plot what we have described so far.
There's more...
This deferred rendering mechanism is central to matplotlib. You can declare what you render as and when it suits you. The graph will be rendered only when you call plt.show(). To illustrate this, let's look at the following script, which renders a bell-shaped curve, and the slope of that curve for each of its points:
import numpy as np
import matplotlib.pyplot as plt

def plot_slope(X, Y):
    Xs = X[1:] - X[:-1]
    Ys = Y[1:] - Y[:-1]
    plt.plot(X[1:], Ys / Xs)

X = np.linspace(-3, 3, 100)
Y = np.exp(-X ** 2)

plt.plot(X, Y)
plot_slope(X, Y)
plt.show()
The preceding script will produce the following graph:
One of the plt.plot() calls is made inside the plot_slope function; this has no influence on the rendering of the graph, as plt.plot() simply declares what we want to render but does not execute the rendering yet. This is very useful when writing scripts for complex graphics with a lot of curves. You can use all the features of a proper programming language—loops, function calls, and so on—to compose a graph.
Plotting curves from file data
As explained earlier, matplotlib only handles plotting. If you want to plot data stored in a file, you will have to use Python code to read the file and extract the data you need.
How to do it...
Let's assume that we have time series stored in a plain text file named my_data.txt as follows:
0 0
1 1
2 4
4 16
5 25
6 36
A minimalistic pure Python approach to read and plot that data would go as follows:
import matplotlib.pyplot as plt

X, Y = [], []
for line in open('my_data.txt', 'r'):
    values = [float(s) for s in line.split()]
    X.append(values[0])
    Y.append(values[1])

plt.plot(X, Y)
plt.show()
This script, together with the data stored in my_data.txt, will produce the following graph:
How it works...
The following are some explanations on how the preceding script works:
- The line X, Y = [], [] initializes the list of coordinates X and Y as empty lists.
- The line for line in open('my_data.txt', 'r') defines a loop that will iterate each line of the text file my_data.txt. On each iteration, the current line extracted from the text file is stored as a string in the variable line.
- The line values = [float(s) for s in line.split()] splits the current line around whitespace characters to form a list of tokens. Those tokens are then interpreted as floating point values, which are stored in the list values.
- Then, in the two next lines, X.append(values[0]) and Y.append(values[1]), the values stored in values are appended to the lists X and Y.
The following equivalent one-liner to read a text file may bring a smile to those more familiar with Python:
import matplotlib.pyplot as plt

with open('my_data.txt', 'r') as f:
    X, Y = zip(*[[float(s) for s in line.split()] for line in f])

plt.plot(X, Y)
plt.show()
There's more...
In our data loading code, note that there is no serious checking or error handling going on. In any case, one might remember that a good programmer is a lazy programmer. Indeed, since NumPy is so often used with matplotlib, why not use it here? Run the following script to enable NumPy:
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt('my_data.txt')
plt.plot(data[:, 0], data[:, 1])
plt.show()
This is as short as the one-liner shown in the preceding section, yet easier to read, and it will handle many error cases that our pure Python code does not handle. The following points describe the preceding script:
- The numpy.loadtxt() function reads a text file and returns a 2D array. With NumPy, 2D arrays are not a list of lists, they are true, full-blown matrices.
- The variable data is a NumPy 2D array, which gives us the benefit of being able to manipulate the rows and columns of a matrix as 1D arrays. Indeed, in the line plt.plot(data[:,0], data[:,1]), we give the first column of data as the x coordinates and the second column of data as the y coordinates. This notation is specific to NumPy.
Along with making the code shorter and simpler, using NumPy brings additional advantages. For large files, using NumPy will be noticeably faster (the NumPy module is mostly written in C), and storing the whole dataset as a NumPy array can save memory as well. Finally, using NumPy allows you to support other common file formats for numerical data (such as CSV and MATLAB files) without much effort.
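As a small illustration of the file-format point, np.loadtxt reads comma-separated data simply by setting its delimiter argument; a file-like object works just as well as a file name, so the example below needs no file on disk:

```python
import io
import numpy as np

# loadtxt accepts a file name or any file-like object; delimiter=','
# switches it from whitespace-separated to comma-separated input.
csv_data = io.StringIO("0,0\n1,1\n2,4\n")
data = np.loadtxt(csv_data, delimiter=',')
print(data.shape)  # (3, 2)
```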
As a way to demonstrate all that we have seen so far, let's consider the following task. A file contains N columns of values, describing N–1 curves. The first column contains the x coordinates, the second column contains the y coordinates of the first curve, the third column contains the y coordinates of the second curve, and so on. We want to display those N–1 curves. We will do so by using the following code:
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt('my_data.txt')
for column in data.T:
    plt.plot(data[:, 0], column)
plt.show()
The file my_data.txt should contain the following content:
0 0 6
1 1 5
2 4 4
4 16 3
5 25 2
6 36 1
Then we get the following graph:
We did the job with little effort by exploiting two tricks. In NumPy notation, data.T is a transposed view of the 2D array data—rows are seen as columns and columns are seen as rows. Also, we can iterate over the rows of a multidimensional array by writing for row in data; thus, for column in data.T iterates over the columns of an array. With a few lines of code, we have a fairly generic plotting script.
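The transposition trick is easy to check on its own with some made-up data:

```python
import numpy as np

data = np.array([[0, 0, 6],
                 [1, 1, 5],
                 [2, 4, 4]])

# Iterating over data.T yields the columns of data, one 1D array at a time.
columns = [col.tolist() for col in data.T]
print(columns)  # [[0, 1, 2], [0, 1, 4], [6, 5, 4]]
```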
Plotting points
When displaying a curve, we implicitly assume that one point follows another—our data is a time series. Of course, this does not always have to be the case: the points of a dataset can be independent of one another. A simple way to represent such data is to show the points without linking them.
How to do it...
The following script displays 1024 points whose coordinates are drawn randomly from the [0,1] interval:
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(1024, 2)
plt.scatter(data[:, 0], data[:, 1])
plt.show()
The preceding script will produce the following graph:
How it works...
The function plt.scatter() works exactly like plt.plot(), taking the x and y coordinates of points as input parameters. However, each point is simply shown with one marker. Don't be fooled by this simplicity—plt.scatter() is a rich command. By playing with its many optional parameters, we can achieve many different effects.
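As a taste of those optional parameters, the following sketch sets the marker size, color, and shape; it renders off-screen to a file, and the backend choice and file name are incidental choices here:

```python
import matplotlib
matplotlib.use('Agg')  # off-screen rendering; no window is opened
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(64, 2)
# s = marker size, c = color, marker = marker shape
plt.scatter(data[:, 0], data[:, 1], s=40, c='red', marker='^')
plt.savefig('points.png')
```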
Summary
In this article, we learned the basics of working with matplotlib, introducing the basic figure types through minimal examples.
Resources for Article:
Further resources on this subject:
- Advanced Matplotlib: Part 1 [Article]
- Advanced Matplotlib: Part 2 [Article]
- Plotting Data with Sage [Article]
An Extensive Examination of Data Structures
Scott Mitchell
4GuysFromRolla.com
February 9, 2004
Summary: This article, the fourth in the series, begins with a quick examination of AVL trees and red-black trees, which are two different self-balancing, binary search tree data structures. The remainder of the article examines the skip list data structure, an ingenious data structure that turns a linked list into a data structure that offers the same running time as the more complex self-balancing tree data structures. (31 printed pages)
Note This article assumes the reader is familiar with C# and the data structure topics discussed previously in this article series.
Download the BuildingBetterBST.msi sample file.
Contents
Introduction
Self-Balancing Binary Search Trees
A Quick Primer on Linked Lists
Skip Lists: A Linked List with Self-Balancing BST-Like Properties
Conclusion
This article begins with a quick look at two self-balancing binary search trees, AVL trees and red-black trees, and then turns to the skip list, a cool data structure that is much easier to implement than AVL trees or red-black trees, yet still guarantees a running time of log2 n.
Self-Balancing Binary Search Trees
Recall that new nodes are inserted into a binary search tree at the leaves. That is, adding a node to a binary search tree involves tracing down a path of the binary search tree, taking lefts and rights based on how the new node's value compares to each visited node's value. Therefore, the structure of the BST depends on the order in which the nodes are inserted. Figure 2 depicts a BST whose nodes were inserted in ascending order; such a degenerate, list-like tree exhibits a linear search time.
Since the topology of a BST is based upon the order in which its nodes are inserted, an AVL tree imposes an additional balance property: for every node, the height of its left subtree and the height of its right subtree may differ by at most 1. When a node is inserted, the subtree heights are rechecked for each node along the return path back up to the root. If the heights of a node's two subtrees differ by more than 1, the balance property has been violated, and one or more rotations, which also preserve the binary search tree property, are required to restore it. A thorough discussion of the set of rotations potentially needed by an AVL tree is beyond the scope of this article. What is important to realize is that both insertions and deletions can disturb the balance property, and that by keeping every node's subtree heights within 1 of one another, AVL trees guarantee that insertions, deletions, and searches will always have an asymptotic running time of log2 n, regardless of the order of insertions into the tree.
A Look at Red-Black Trees
Rudolf Bayer, a computer science professor at the Technical University of Munich, invented the red-black tree data structure in 1972. In addition to its data and left and right children, the nodes of a red-black tree contain an extra bit of information—a color, which can be either red or black (hopefully the diagram in Figure 7 clears up any confusion). Whereas AVL trees restore their balance property through rotations, red-black trees restore their properties through re-coloring and rotations. Red-black trees are notoriously complex in their re-coloring and rotation rules, requiring the nodes along the access path to make decisions based upon their color in contrast to the color of their parents and uncles. (An uncle of a node n is the node that is n's parent's sibling node.) A thorough discussion of re-coloring and rotation rules is far beyond the scope of this article.
To view the re-coloring and rotations of a red-black tree as nodes are added and deleted, check out the red-black tree applet, which can be viewed online.
A Quick Primer on Linked Lists
One common data structure we've yet to discuss is the linked list. Since the skip list data structure we'll be examining next is the mutation of a linked list into a data structure with self-balancing binary tree running times, it is important that before diving into the specifics of skip lists we take a moment to discuss linked lists. A linked list is a collection of elements in which each element maintains a reference to the next item in the list.
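As a sketch, an element of a singly linked list can be as simple as the following (the class name here is illustrative, not part of the article's code download):

```csharp
// A minimal singly linked list element: a value plus a reference to
// the next element in the list (null for the last element).
public class ListElement
{
    public IComparable Value;
    public ListElement Next;

    public ListElement(IComparable value)
    {
        Value = value;
        Next = null;
    }
}
```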
Linked lists have the same linear running time for searches as arrays. That is, to determine whether a particular value exists in the list, every element may have to be visited; likewise, inserting into a sorted linked list requires walking the list to find the neighbor after which the new element belongs. One advantage of linked lists over arrays, however, is that linked lists never require re-dimensioning. Recall that arrays have a fixed size. If an array needs to have more elements added to it than it has capacity, the array must be re-dimensioned. Granted, the ArrayList class hides the code complexity of this, but re-dimensioning still carries with it a performance penalty. In short, an array is usually a better choice if you have an idea of the upper bound of the amount of data that needs to be stored. If you have no conceivable notion as to how many elements need to be stored, then a linked list might be a better choice.
In summary, searching a sorted linked list takes linear time, because each element may potentially need to be visited, one right after the other. Pugh thought to himself that if half the elements in a sorted linked list had two neighbor references—one pointing to its immediate neighbor, and another pointing to the neighbor two elements ahead—while the other half had one, then searching a sorted linked list could be done in half the time. Figure 11 illustrates a two-reference sorted linked list.
Figure 11. A skip list
The way such a linked list saves searching time is that the taller, two-reference elements let us skip over their shorter neighbors. Imagine that we wanted to search for Dave: starting at the head element, we could follow the top-level references, visiting only every other element, until we reached Dave. Since we have found what we are looking for, we can stop searching.
Now, imagine that we wanted to search for Cal. We'd begin by starting at the head element, and then moving on to Bob. At Bob, we'd start by following the top-most reference, to Dave. Since Dave comes after Cal, we'd back up and follow Bob's bottom-level reference to Cal instead. In general, by trying the taller references first and dropping down a level whenever we overshoot, and by adding higher and higher levels of references that skip further and further ahead, the search time is reduced to log2 n.
Figure 12. By increasing the height of each node in a skip list, better searching performance can be gained.
Notice that in Figure 12, every 2^ith node has a reference to the node 2^i elements ahead. Alice, for example, has a reference 2^0 elements ahead, to Bob; Bob, the 2^1 element, has a reference 2^1 elements ahead; and so on. While this precise structure gives log2 n searches, insertions and deletions wreak havoc on it. That is, if Dave is deleted, Ed now becomes the 2^2 element, Gil the next, and so on, which means all of the elements to the right of the deleted element would need their heights recomputed. Pugh's insight was that the precise structure does not matter; what matters is the distribution of heights: 50 percent of the elements at height 1, 25 percent at height 2, 12.5 percent at height 3, and so on. That is, 1/2^i of the elements sit at height i. So rather than trying to maintain the correct height for each element with respect to its ordinal index in the list, Pugh decided to randomly pick a height for each new element using that ideal distribution: 50 percent at height 1, 25 percent at height 2, and so on. On average, such a randomized skip list still provides logarithmic search time, because searching still skips over runs of lower-height elements.
In the remaining sections we'll examine the insertion, deletion, and lookup functions of the skip list, and implement them in a C# class. We'll finish off with an empirical look at the skip list performance and discuss the tradeoffs between skip lists and self-balancing BSTs.
Creating the Node and NodeList Classes

Each element of a skip list needs a value along with a collection of references to elements further along the list, one reference per level of the element's height. I've named this class Node, and its germane code is shown below. (The complete skip list code is available in this article as a code download.)
public class Node
{
    #region Private Member Variables
    private NodeList nodes;
    IComparable myValue;
    #endregion

    #region Constructors
    public Node(IComparable value, int height)
    {
        this.myValue = value;
        this.nodes = new NodeList(height);
    }
    #endregion

    #region Public Properties
    public int Height
    {
        get { return nodes.Capacity; }
    }

    public IComparable Value
    {
        get { return myValue; }
    }

    public Node this[int index]
    {
        get { return nodes[index]; }
        set { nodes[index] = value; }
    }
    #endregion
}
Notice that the Node class only accepts objects that implement the IComparable interface as data. This is because a skip list is maintained as a sorted list, meaning that its elements are ordered by their data. In order to order the elements, the data of the elements must be comparable. (If you remember back to Part 3, our binary search tree Node class also required that its data implement IComparable for the same reason.)
The Node class uses a NodeList class to store its collection of Node references. The NodeList class, shown below, is a strongly-typed collection of Nodes and is derived from System.Collections.CollectionBase.
public class NodeList : CollectionBase
{
    public NodeList(int height)
    {
        // set the capacity based on the height
        base.InnerList.Capacity = height;

        // create dummy values up to the Capacity
        for (int i = 0; i < height; i++)
            base.InnerList.Add(null);
    }

    // Adds a new Node to the end of the node list
    public void Add(Node n)
    {
        base.InnerList.Add(n);
    }

    // Accesses a particular Node reference in the list
    public Node this[int index]
    {
        get { return (Node) base.InnerList[index]; }
        set { base.InnerList[index] = value; }
    }

    // Returns the capacity of the list
    public int Capacity
    {
        get { return base.InnerList.Capacity; }
    }
}
The NodeList constructor accepts a height input parameter that indicates the number of references that the node needs. It appropriately sets the Capacity of the InnerList to this height and adds null references for each of the height references.
With the Node and NodeList classes created, we're ready to move on to creating the SkipList class itself. The SkipList class exposes the following public methods:

- Add(IComparable): adds a new item to the skip list.
- Remove(IComparable): removes an existing item from the skip list.
- Contains(IComparable): returns a Boolean indicating whether the specified item exists in the skip list.

The skeleton of the SkipList class follows:

public class SkipList
{
    #region Private Member Variables
    Node head;
    int count;
    Random rndNum;

    protected const double PROB = 0.5;
    #endregion

    #region Public Properties
    public virtual int Height
    {
        get { return head.Height; }
    }

    public virtual int Count
    {
        get { return count; }
    }
    #endregion

    #region Constructors
    public SkipList() : this(-1) {}

    public SkipList(int randomSeed)
    {
        head = new Node(1);
        count = 0;
        if (randomSeed < 0)
            rndNum = new Random();
        else
            rndNum = new Random(randomSeed);
    }
    #endregion

    protected virtual int chooseRandomHeight(int maxLevel) { ... }
    public virtual bool Contains(IComparable value) { ... }
    public virtual void Add(IComparable value) { ... }
    public virtual void Remove(IComparable value) { ... }
}

Since we need to randomly determine the height when adding a new element to the list, we'll use the Random instance, rndNum, to generate the random numbers.
The SkipList class has two read-only public properties, Height and Count. Height returns the height of the tallest skip list element, and Count returns the number of elements currently in the skip list. There are two forms of the SkipList constructor. The default constructor merely calls the second, passing in a value of -1. The second form assigns to head a new Node instance with height 1, and sets count equal to 0. It then checks to see whether the passed-in randomSeed value is less than 0. If it is, it creates an instance of the Random class using an auto-generated random seed value; otherwise, it uses the random seed value passed into the constructor.
Note Computer random-number generators, such as the Random class in the .NET Framework, are referred to as pseudo-random number generators because they don't pick numbers truly at random; rather, they generate a deterministic sequence of numbers from an initial seed.

Searching a skip list starts at the head element's topmost reference. At each step, we compare the value of the next element at the current level to the value for which we're looking. If it's less than the value we're looking for, we move forward along that level; if it's greater, we drop down one level and continue from there. The search ends when we find the value we're searching for, or exhaust all the "levels" without finding the value.
The Contains(IComparable) method is quite simple, involving a while loop nested within a for loop. The for loop iterates down through the reference level layers, while the while loop iterates across the skip list's elements at the current level.
public virtual bool Contains(IComparable value)
{
    Node current = head;
    int i = 0;

    for (i = head.Height - 1; i >= 0; i--)
    {
        while (current[i] != null)
        {
            int results = current[i].Value.CompareTo(value);

            if (results == 0)
                return true;
            else if (results < 0)
                current = current[i];
            else // results > 0
                break;  // exit while loop
        }
    }

    // if we reach here, we searched to the end of the list
    // without finding the element
    return false;
}
Inserting into a skip list
Inserting a new element into a skip list is akin to adding a new element in a sorted link list, and involves two steps:
- Locate where in the skip list the new element belongs. This location is found by using the search algorithm to find the location that comes immediately before the spot the new element will be added
- Thread the new element into the list by updating the necessary references.
Since skip list elements can have many levels and, therefore, many references, threading a new element into a skip list is not as simple as threading a new element into a simple linked list: references at each of the new element's levels must be rewired. To keep track of the nodes whose references need updating, the Add(IComparable) method maintains an array of Node references named updates, which is populated as the search for the location for the new element is performed.
public virtual void Add(IComparable value)
{
    Node[] updates = new Node[head.Height];
    Node current = head;

    // first, determine the node at each level that immediately
    // precedes the location where the new value belongs
    for (int i = head.Height - 1; i >= 0; i--)
    {
        while (current[i] != null && current[i].Value.CompareTo(value) < 0)
            current = current[i];

        updates[i] = current;
    }

    // see if a duplicate is being inserted
    if (current[0] != null && current[0].Value.CompareTo(value) == 0)
        // cannot enter a duplicate; handle this case by either
        // just returning or by throwing an exception
        return;

    // create a new node with a randomly chosen height
    Node n = new Node(value, chooseRandomHeight(head.Height + 1));
    count++;

    // if the new node is taller than the head, grow the head
    if (n.Height > head.Height)
        head.IncrementHeight();

    // splice the new node into the list
    for (int i = 0; i < n.Height; i++)
    {
        if (i < updates.Length)
        {
            n[i] = updates[i][i];
            updates[i][i] = n;
        }
    }
}
There are a couple of key portions of the Add(IComparable) method that are important. First, be certain to examine the first for loop. In this loop, not only is the correct location for the new element located, but the updates array is also fully populated. After this loop, a new Node instance, n, is created. This represents the element to be added to the skip list. Note that the height of the newly created Node is determined by a call to the chooseRandomHeight() method, passing in the current skip list height plus one. We'll examine this method shortly. Another thing to note is that after adding the Node, a check is made to see if the new Node's height is greater than the skip list's head element's height. If it is, then the head element's height needs to be incremented, because the head element should have the same height as the tallest element in the skip list.

The final for loop rethreads the references. It does this by iterating through the updates array, having the newly inserted Node's references point to the Nodes previously pointed to by the Nodes in the updates array, and then having each updates array Node update its reference to the newly inserted Node. To help clarify things, try running through the Add(IComparable) method code using the skip list in Figure 14, where the added Node's height is 3.
Randomly Determining the Newly Inserted Node's Height
When inserting a new element into the skip list, we need to randomly select a height for the newly added Node. Recall from our earlier discussions of skip lists that when Pugh first envisioned multi-level, linked-list elements, he imagined a linked list where each 2^ith element had a reference to an element 2^i elements away. In such a list, precisely 50 percent of the nodes would have height 1, 25 percent height 2, and so on. To produce this distribution randomly, imagine repeatedly flipping a coin until a tails comes up, and using the number of flips as the height. Since there is a 50 percent probability that you will get a tails on the first flip, a 25 percent probability that you will get a heads and then a tails, a 12.5 percent probability that you will get two heads and then a tails, and so on, the distribution works out to be the same as the desired distribution.
The code to compute the random height is given by the following simple code snippet:
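A minimal version consistent with the coin-flip description above and with the "fixed dice" cap discussed next might look like this (a sketch; consult the code download for the article's exact implementation):

```csharp
protected virtual int chooseRandomHeight(int maxLevel)
{
    int level = 1;

    // Each "heads" (a random value below PROB) bumps the height one
    // level; the maxLevel parameter acts as the "fixed dice" cap.
    while (rndNum.NextDouble() < PROB && level < maxLevel)
        level++;

    return level;
}
```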
One concern with the chooseRandomHeight() method is that the value returned might be extraordinarily large. That is, imagine that we have a skip list with, say, two elements, both with height 1. When adding our third element, we randomly choose the height to be 10. This is an unlikely event, since there is only roughly a 0.1 percent chance of selecting such a height, but it could conceivably happen. The downside is that our skip list would then have an element with height 10, meaning there would be a number of superfluous levels in our skip list: the references at levels 2 up to 10 would not be utilized, and even as additional elements were added to the list, there would still be only a small chance that those levels would ever be used. The approach I chose was to use "fixed dice" when choosing the random level: the height of a new element is capped at one greater than the height of the tallest element currently in the skip list. This is why, in the Add(IComparable) method, the maxLevel value passed in is the height of the head element plus one. (Recall that the head element's height is the same as the height of the tallest element in the skip list.)
The head element should be the same height as the tallest element in the skip list. So, in the Add(IComparable) method, if the newly added Node's height is greater than the head element's height, I call the IncrementHeight() method.
IncrementHeight() is a method of the Node class that I left out for brevity. It simply increases the Capacity of the Node's NodeList and adds a null reference at the newly added level. For the method's source code, refer to the article's code sample.
Note In his paper, "Skip Lists: A Probabilistic Alternative to Balanced Trees," Pugh examines the effects of changing the value of PROB from 0.5 to other values, such as 0.25 and 0.125. Lower values of PROB decrease the average number of references per element, but increase the likelihood of a search taking substantially longer than average.

Deleting from a skip list

Removing an element from a skip list, like adding one, involves two steps:
- The element to be deleted must be found.
- That element needs to be snipped from the list and the references need to be rethreaded.
Figure 15 shows the rethreading that must occur when Dave is removed from the skip list.
Figure 15. Deleting an element from a skip list
As with the Add(IComparable) method, Remove(IComparable) maintains an updates array that keeps track of the elements at each level that appear immediately before the element to be deleted. Once this updates array has been populated, the array is iterated through from the bottom up, and the elements in the array are rethreaded to point to the deleted element's references at the corresponding levels. The Remove(IComparable) method code follows.
public virtual void Remove(IComparable value)
{
    Node[] updates = new Node[head.Height];
    Node current = head;

    // populate the updates array with the node at each level
    // that immediately precedes the element to delete
    for (int i = head.Height - 1; i >= 0; i--)
    {
        while (current[i] != null && current[i].Value.CompareTo(value) < 0)
            current = current[i];

        updates[i] = current;
    }

    current = current[0];
    if (current != null && current.Value.CompareTo(value) == 0)
    {
        count--;

        // We found the data to delete; rethread the references
        for (int i = 0; i < head.Height; i++)
        {
            if (updates[i][i] != current)
                break;
            else
                updates[i][i] = current[i];
        }

        // finally, lower the head's height if the tallest element was removed
        if (head[head.Height - 1] == null)
            head.DecrementHeight();
    }
    else
    {
        // the data to delete wasn't found. Either return or throw an exception
        return;
    }
}
The first for loop should look familiar. It's the same code found in Add(IComparable), used to populate the updates array. Once the updates array has been populated, we check to ensure that the element we reached does indeed contain the value to be deleted. If not, the Remove() method simply returns. You might opt to have it throw an exception of some sort, though. Assuming the element reached is the element to be deleted, the count member variable is decremented and the references are rethreaded. Lastly, if we deleted the element with the greatest height, then we should decrement the height of the head element. This is accomplished through a call to the DecrementHeight() method of the Node class.
Analyzing the Skip List's Running Time

In the worst case, every element of a skip list could end up at the same height, reducing searches to a linear scan; however, the chance of such a scenario happening is extremely unlikely.
Since the heights of the elements of a skip list are randomly chosen, there is a chance that all, or virtually all, elements in the skip list will end up with the same height. For example, imagine that we had a skip list with 100 elements, all with height 1; such a list would search no faster than an ordinary sorted linked list. The likelihood of this happening, though, is astronomically small, and on average a skip list performs its operations in log2 n time.

Conclusion

A skip list is essentially a sorted linked list whose elements each have a randomly chosen height associated with them. In this article we constructed a SkipList class and saw how straightforward the skip list's operations were, and how easy it was to implement them in code.
This fourth part of the article series is the last proposed part on trees. In the fifth installment, we'll look at graphs, which is a collection of vertexes with an arbitrary number of edges connecting each vertex to one another. As we'll see in Part 5, trees are a special form of graphs. Graphs have an extraordinary number of applications in real-world problems.
As always, if you have questions, comments, or suggestions for future material to discuss, I invite your comments! I can be reached at mitchell@4guysfromrolla.com.
Happy Programming!
References
- Cormen, Thomas H., Charles E. Leiserson, and Ronald L. Rivest. "Introduction to Algorithms." MIT Press. 1990.
- Pugh, William. "Skip Lists: A Probabilistic Alternative to Balanced Trees." Available online.
Related Books
- Combinatorial Algorithms, Enlarged Second Edition by Hu, T. C.
- Algorithms in C, Parts 1-5 (Bundle): Fundamentals by Sedgewick, Robert.
http://msdn.microsoft.com/en-us/library/aa289151.aspx
The voltmeter is one of the most important instruments in electronics, which is why in today’s tutorial we will learn how to build our own Arduino voltmeter using a very cheap voltage sensor. The voltage we measure will be displayed to the user through the Nokia 5110 LCD monitor. This project is very easy to build and provides a very good learning experience, ideal for beginners.
What is B25 Sensor?
The B25 voltage sensor is a very simple, easy-to-use voltage sensor module. It contains only two resistors, so you can easily build your own if you prefer, or you can purchase a ready-made module for less than $1. The sensor is based on a voltage divider, a very common circuit in electronic devices: it converts a higher voltage to a lower one using a pair of resistors, with the output voltage calculated according to Ohm’s law.
Abstract
Any voltage supplied to the sensor, from 0 to 25 volts, is scaled down to the 0 to 5 volt range that the Arduino's analog pins can read.
It should be noted that this particular sensor has a maximum input voltage of 25V. Exceeding the voltage at the input of the sensor may damage the Arduino.
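As a rough sanity check of the divider arithmetic (using the same R1 = 30 kΩ and R2 = 7.5 kΩ values that appear in the Arduino sketch's readVoltage() function below), here is the conversion worked out in Python; the function name is illustrative:

```python
def adc_to_volts(adc_reading, vref=5.0, r1=30000.0, r2=7500.0, adc_max=1024.0):
    """Convert a raw ADC reading back to the voltage at the divider's input."""
    vout = adc_reading * vref / adc_max   # voltage seen at the analog pin
    return vout / (r2 / (r1 + r2))        # undo the divider's 1/5 scaling

print(adc_to_volts(512))   # 12.5 -- a mid-scale reading maps to 12.5 V
print(adc_to_volts(1023))  # just under the 25 V full-scale limit
```

Because R2/(R1+R2) = 0.2, the divider hands the Arduino exactly one fifth of the input voltage, which is why the 25 V limit lines up with the 5 V pin limit.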
Let’s Get Started
Collect Hardware
- Arduino Uno View on Amazon
- Nokia 5110 LCD View on Amazon
- Voltage Sensor View on Amazon
- Small Breadboard View on Amazon
- Jumper Wires View on Amazon
Note:- I have written a post for people looking for the best multimeter for electronics to buy?, do read it If you are interested.
Assemble Hardware
Let’s now connect all the parts together.
At first we connect the voltage sensor. It only has 3 pins. Of those, we only have to connect..
Software Requirements
Download Arduino IDE, and Source Code.
Source code Download Here
Arduino Source
#include <LCD5110_Graph.h> // THE LIBRARY I AM USING IS THIS:
LCD5110 lcd(8,9,10,12,11);
extern unsigned char BigNumbers[];
extern uint8_t ui[];
extern uint8_t startScreen[];
float voltage = 0.0;
int sensorPin = A0;
float sensorValue = 0.0f;
String voltageString = "0.0";
int stringLength = 0;
float vout = 0.0;
float vin = 0.0;
float R1 = 30000.0;
float R2 = 7500.0;
void setup() {
Serial.begin(9600);
lcd.InitLCD();
lcd.drawBitmap(0, 0, startScreen, 84, 48);
lcd.update();
delay(3000);
lcd.setFont(BigNumbers);
delay(1000);
}
void loop() {
lcd.clrScr();
lcd.drawBitmap(0, 0, ui, 84, 48);
voltage = readVoltage();
Serial.println(voltage);
voltageString = String(voltage,1);
stringLength = voltageString.length();
displayVoltage(stringLength);
lcd.update();
delay(1000);
}
float readVoltage()
{
sensorValue = analogRead(sensorPin);
vout = (sensorValue * 5.0) / 1024.0;
vin = vout / (R2/(R1+R2));
return vin;
}
void displayVoltage(int length)
{
switch(length)
{
case 3: lcd.print(voltageString,14,19); break;
case 4: lcd.print(voltageString,2,19); break;
default: lcd.print(voltageString,2,19); break;
}
}
https://technicalustad.com/build-multi-meter-with-arduino-uno-and-b25-voltage-sensor/
So far I have gotten to ex23 without much trouble. I have read the issues listed surrounding unicode not working, but I think I am missing something else. I cannot even get it to run and pump out question marks or the boxes. I have searched my code backwards from bottom to top looking for typos or syntax errors. I have read numerous pages about unicode errors and "continuation byte" errors. I sent an email to the contact email when I purchased this product, but I'm really hoping someone is able to help out.
Thanks.
1% solution
I found this site (); it talked about a number of things I didn't understand, but I saw that people were swapping "UTF-8" with "latin-1". When I do this, the program does run. Is it my incompetence? Is this something with my computer? I am racking my brain trying to understand why I cannot get this to work correctly.
NOT using the latin-1 for utf-8 "fix"
What I type in Powershell:
python ex23.py utf-8 strict
What my ex23.py looks like:
import sys
script, input_encoding, error = sys.argv


def main(language_file, encoding, errors):
    line = language_file.readline()

    if line:
        print_line(line, encoding, errors)
        main(language_file, encoding, errors)


def print_line(line, encoding, errors):
    next_lang = line.strip()
    raw_bytes = next_lang.encode(encoding, errors=errors)
    cooked_string = raw_bytes.decode(encoding, errors=errors)

    print(raw_bytes, "<===>", cooked_string)


languages = open("languages.txt", encoding="utf-8")

main(languages, input_encoding, error)
Error Output:
PS C:\Python\Learn> python test23.py utf-8 strict
Traceback (most recent call last):
  File "test23.py", line 23, in <module>
    main(languages, input_encoding, error)
  File "test23.py", line 6, in main
    line = language_file.readline()
  File "C:\Users\Levi\AppData\Local\Programs\Python\Python36\lib\codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 11-12: invalid continuation byte
PS C:\Python\Learn>
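For what it's worth, the error itself can be reproduced in isolation: a file saved as Latin-1 contains bytes that are not valid UTF-8, which is exactly what the codec is complaining about. A small illustration (not from the book):

```python
raw = "café latte".encode("latin-1")   # b'caf\xe9 latte'
try:
    raw.decode("utf-8")                # 0xe9 followed by a space is not valid UTF-8
except UnicodeDecodeError as e:
    print(e.reason)                    # invalid continuation byte
print(raw.decode("latin-1"))           # café latte
```

So swapping in latin-1 "works" because every byte value is valid Latin-1; the cleaner fix is usually to re-save languages.txt with UTF-8 encoding.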
https://forum.learncodethehardway.com/t/ex23-cant-run-very-confused/4170
Richard Stallman wrote:
Does this fix it?
<patch against process.c 1.450>

I finally reproduced this bug. By opening a TCP network stream
(`open-network-stream') to a localhost server that connects and
immediately closes the connection, then attempting to write to the
stream, a reliable SIGPIPE is thrown. strace shows that emacs fails to
unblock SIGPIPE or restore the signal handler. Richard fixed the
blocked SIGPIPE. The following patch restores the signal handler.

I have not verified that it's possible to crash emacs by killing the X
server after a SIGPIPE (or that anyone would see the result), but
strace output validates the change.

-Dave

*** process.c	14 May 2005 14:06:33 -0000	1.451
--- process.c	17 May 2005 04:02:22 -0000
***************
*** 5134,5139 ****
--- 5134,5140 ----
    int rv;
    struct coding_system *coding;
    struct gcpro gcpro1;
+   volatile SIGTYPE (*old_sigpipe)();

    GCPRO1 (object);
***************
*** 5258,5264 ****
    while (len > 0)
      {
        int this = len;
-       SIGTYPE (*old_sigpipe)();

        /* Decide how much data we can send in one batch.
           Long lines need to be split into multiple batches.  */
--- 5259,5264 ----
***************
*** 5401,5406 ****
--- 5401,5407 ----
  #endif /* not VMS */
    else
      {
+       signal (SIGPIPE, old_sigpipe);
  #ifndef VMS
        proc = process_sent_to;
        p = XPROCESS (proc);
https://lists.gnu.org/archive/html/emacs-pretest-bug/2005-05/msg00223.html
Dan Harmon says
Interesting problem. There are two questions to be answered; which wire in the box is hot and which is neutral, and which one on the swag fixture is which.
This sounds like it is likely the old knob and tube wiring from decades ago used when they didn't care much which wire went where although the plastic insulation is puzzling. The problem is that the large threaded part of the light bulb can become hot if it is not wired to the neutral and is all too easy to touch when changing a bulb. I would like to see you get this right and prevent any future shocks.
We will probably need some back and forth questions and answers, though, and I don't think that is possible in this part of HubPages. If you would either go to my hub on wiring a new light fixture (... or email me through the address in my profile I will be able to ask for more information and answer that way. Simply copy and paste your question into the comment box on the hub or into the email.
I guessing that you will need either a non-contact voltage tester (preferable) or a voltmeter of some kind to find the proper wires. Is such a thing available?
sheerplan says
First off I have to say: if you are unsure of any electrical connections then please call a licensed electrician. Electricity kills.
With what you have described (I do not have any pictures to go by) it sounds as if the two wires twisted together may be the neutral wires. The single white wire could be the switch-leg return (hot). But it would be hard to say without seeing it. They should be checked with a multimeter.
Take a close look at the insulation on the wires of the fixture - is there a small black stripe? if so this is the hot.
If there isn't a stripe look closely at the braided wires. Is one wire colored gold and the other silver? The gold colored wire is the hot. I hope this helps.
framistan says
You can use a "NEON TEST LIGHT" to tell you which terminal is HOT. They only cost a couple of dollars. The picture I uploaded is a sample; they come in various styles. Some of them are built into a regular screwdriver.
http://hubpages.com/living/answer/105157/how-do-i-wire-old-ceiling-light-where-all-wires-are-the-same-color-in-the-light-and-ceiling-box
Python now has a new statement called with. Many languages have a with statement, but they are usually different from the with statement in Python. In Python, the purpose of with is to let you set up a block of code that runs under a certain context. When that context finishes after your code runs, it runs some cleanup code.
What is a context? A common example is that of database transactions. With database transactions, you start a transaction, and then make all your changes to the database. Then you can decide whether to undo (or "roll back") all the changes or to commit them (that is, make them permanent), thus finishing the transaction.
While you're making the changes to the database, you're operating under the context of the particular database transaction. The context performs some code when the transaction starts (by creating the transaction and allocating the necessary resources) and then some code at the end (either by committing the changes, or by rolling back the changes). The context is the transaction itself, not the rolling back or committing. Contexts don't necessarily involve rolling back and committing changes.
Another example is that of a file being open. You can open a file, write to the file, and close the file. While writing to the file, you can think of your code as operating under the context of the file. The context, then, would open the file and, when you finish working with it, can handle the task of closing the file for you.
To create a context, create a class that includes the __enter__() and __exit__() member functions. In the __enter__() function, write the code that initializes the context (such as opening a file, or starting a database transaction), and in the __exit__() function either handle any exceptions or just perform the cleanup code. In the case of an exception, the __exit__() function receives three parameters describing the exception. (Note: For further details on creating your own contexts, see Writing Context Managers in PEP 343.)
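Here is a minimal, illustrative context manager class (written in modern Python 3 syntax for clarity; the Transaction name and its flags are invented for this example, not part of any database API):

```python
class Transaction:
    """A toy context manager mimicking commit/rollback behavior."""

    def __init__(self):
        self.committed = False
        self.rolled_back = False

    def __enter__(self):
        # Set-up code runs here (e.g. begin the transaction).
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Tear-down code: commit on success, roll back on exception.
        if exc_type is None:
            self.committed = True
        else:
            self.rolled_back = True
        return False  # returning False lets any exception propagate

t = Transaction()
with t:
    pass
print(t.committed)  # True
```

If the body of the with block raises, __exit__() still runs, so the rollback path is guaranteed even on failure.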
At first, it could seem like contexts are a bad idea. It seems like they hide away certain vital code (such as closing a file), leaving you unsure of whether the file actually closes. However, that's the case with any library that you use; abstraction hides away certain behavior, but you know that the library is doing the necessary work. The library lightens your work. Think of a context as simplifying your code, taking care of extra details that you otherwise would have to do yourself, as well as ensuring such behavior, much like garbage collection.
Several built-in Python modules and classes now have context management built in. To use them, use the new with keyword. Before you can use with, however, enable the with feature:
from __future__ import with_statement
(You won't have to do this starting with version 2.6.)
These built-in Python objects supporting contexts now have the necessary __enter__() and __exit__() functions to provide the context management. This will make life much easier in terms of cleanup. Look at the code:
with open('/test.txt', 'r') as f:
for line in f:
process(line)
By using the with statement, I don't need to worry about closing the file myself. The code does it for me. Indeed, if after this code runs I try to print the value of f, I'll see that the file is closed:
>>> print f
<closed file '/test.txt', mode 'r' at 0x009E6890>
For me personally, using a context such as this will require a slight shift in my thinking as I write such code, but in the end, it will help us all write code that has fewer bugs, because we won't have to worry about forgetting to close the files or performing other required cleanup code.
Here are a few more changes to the language.

- A dict subclass can now define a __missing__(self, key) method, which is called when a requested key is not found.
- Strings gained partition and rpartition methods, which split a string around the first (or last) occurrence of a separator.
- startswith and endswith now accept a tuple of strings to test against.
Key functions with min and max. You can now pass a function as the key argument to the min and max functions to evaluate the ordinality of each item. For example, you might determine a string's ordinality with:
def myord(astr):
try:
num = int(astr)
return num
except:
return len(astr)
Then, given a list of strings, you can evaluate the maximum with the function:
print max(['abc','100','a','xyz','1'], key=myord)
More binary expression functions. Python 2.5 now includes two built-in functions that are useful for binary expressions. The function any() returns true if any of its items are true. The function all() returns true only if all items are true. Here's some sample code that shows any() in action:
>>> a = 10
>>> b = 20
>>> any([a>5, b>5])
True
>>> any([a>50, b>50])
False
>>> any([a>50, b>5])
True
Python 2.5 includes a few more minor changes that have short descriptions. Rather than reiterate them here, Other Language Changes discusses them, including the new __index__ special method and the quit() and exit() helpers in the interactive interpreter.
http://www.onlamp.com/pub/a/python/2006/10/26/python-25.html?page=3
timers(2) timers(2)
NAME [Toc] [Back]
timer_create(), timer_delete(), timer_settime(), timer_gettime(),
timer_getoverrun() - timer operations
SYNOPSIS [Toc] [Back]
#include <time.h>
int timer_create(
     clockid_t clock_id,
     struct sigevent *evp,
     timer_t *timerid
);

int timer_delete(timer_t timerid);

int timer_settime(
     timer_t timerid,
     int flags,
     const struct itimerspec *value,
     struct itimerspec *ovalue
);

int timer_gettime(timer_t timerid, struct itimerspec *value);

int timer_getoverrun(timer_t timerid);
DESCRIPTION [Toc] [Back]
The timer_create() function creates a per-process timer using the specified clock, clock_id, as the timing base, and returns the timer ID in the location referenced by timerid. The evp argument, if non-NULL, points to a sigevent structure that defines the asynchronous notification to occur when the timer expires. If the
sigev_notify member of evp is SIGEV_SIGNAL, then the structure should
also specify the signal number to be sent to the process on timer
expiration. The signal to be sent is specified in the sigev_signo
field of evp. If the sigev_notify member of evp is SIGEV_NONE, no
notification is sent. If evp is NULL, then a default signal is sent
to the process. The defaults for the clocks CLOCK_REALTIME,
Hewlett-Packard Company - 1 - HP-UX 11i Version 2: August 2003
CLOCK_VIRTUAL, and CLOCK_PROFILE are SIGALRM, SIGVTALRM, and SIGPROF.
Per-process timers are not inherited by a child process across a
fork() and are disarmed and deleted by an exec().
timer_delete()
The timer_delete() function deletes the specified timer, timerid,
previously created by the timer_create() function. If the timer is
armed when timer_delete() is called, the behavior is as if the timer
is automatically disarmed before removal. Any pending notifications
from the timer remain.
timer_settime()
The timer_settime() function sets the time until the next expiration of the timer specified by timerid, and arms the timer if the it_value member of value is nonzero. If the timer is already armed, the time until next expiration is reset to the value specified. If the it_value member of value is zero, the timer is disarmed. Any pending notifications from the timer
remain.
If the flag TIMER_ABSTIME is not set in the argument flags,
timer_settime() behaves as if the time until next expiration is set
equal to the interval specified by the it_value member of value. That
is, the timer will expire in it_value nanoseconds from when the call
is made.
If the flag TIMER_ABSTIME is set in the argument flags,
timer_settime() behaves as if the time until next expiration is set
equal to the difference between the absolute time specified by the
it_value member of value and the current value of the clock associated
with timerid; that is, the timer expires when the clock reaches the
value specified by it_value. Time values that fall between two
consecutive multiples of the timer's resolution are rounded up to the
larger multiple. A quantization error will not
cause the timer to expire earlier than the rounded-up time value.
If the argument ovalue is not NULL, the function timer_settime()
stores, in the location referenced by ovalue, a value representing the
previous amount of time before the timer would have expired or zero if
the timer was disarmed, together with the previous timer reload value.
The members of ovalue are subject to the resolution of the timer, and
are the same values that would be returned by a timer_gettime() call
at that point in time.
timer_gettime()
The timer_gettime() function stores the amount of time until the
specified timer, timerid, expires and the timer's reload value into
the space pointed to by the value argument. The it_value member of
this structure will contain the amount of time before the timer
expires, or zero if the timer is disarmed. This value is returned as
the interval until timer expiration, even if the timer was armed with
absolute time. The it_interval member of value will contain the
reload value last set by timer_settime().
timer_getoverrun()
The timer_getoverrun() function returns the timer expiration count for the
specified timer. The overrun count returned contains the number of
extra timer expirations which occurred between the time the signal was
generated and when it was delivered, up to but not including an
implementation defined maximum of DELAYTIMER_MAX. If the number of
such extra expirations is greater than or equal to DELAYTIMER_MAX,
then the overrun count is set to DELAYTIMER_MAX. The value returned
by timer_getoverrun() applies to the most recent expiration signal
delivery for the timer. If no expiration signal has been delivered
for the timer, the meaning of the overrun count returned is undefined.
RETURN VALUE [Toc] [Back]
Upon successful completion, timer_create() returns zero and updates
the location referenced by timerid to a timer_t which can be passed to
the per-process timer calls. Otherwise, timer_create() returns -1 and
sets errno to indicate the error. The value of timerid is undefined
if an error occurs.
Upon successful completion, timer_delete() returns zero. Otherwise,
timer_delete() returns -1 and sets errno to indicate the error.
Upon successful completion, timer_settime() returns zero and updates
the location referenced by ovalue, if ovalue is non-NULL. Otherwise,
timer_settime() returns -1 and sets errno to indicate the error.
Upon successful completion, timer_gettime() returns zero and updates
the location referenced by value. Otherwise,
timer_gettime() returns -1 and sets errno to indicate the error.
Upon successful completion, timer_getoverrun() returns the timer
expiration overrun count as explained above. Otherwise,
timer_getoverrun() returns -1 and sets errno to indicate the error.
ERRORS [Toc] [Back]
If any of the following conditions occur, the timer_create() function
returns -1 and sets errno (see errno(2)) to the corresponding value:
[EAGAIN] The system lacks sufficient signal queuing resources
to honor the request.
[EAGAIN] The calling process has already created all of the
timers it is allowed by this implementation.
[EINVAL] The specified clock ID is not defined.
[EFAULT] The timerid or evp argument points to an invalid
address.
[ENOSYS] The function timer_create() is not supported by this
implementation.
If any of the following conditions occur, the timer_delete() function
returns -1 and sets errno to the corresponding value:
[EINVAL] The timer ID specified by timerid is not a valid
timer ID.
[ENOSYS] The function timer_delete() is not supported by this
implementation.
If any of the following conditions occur, the timer_settime(),
timer_gettime(), and timer_getoverrun() functions return -1 and set
errno to the corresponding value:
[EINVAL] The timerid argument does not correspond to an ID
returned by timer_create(), but not yet deleted by
timer_delete().
[EINVAL] The value structure passed to timer_settime()
specified a nanosecond value less than zero or
greater than or equal to 1000 million.
[EFAULT] The value or ovalue argument points to an invalid
address.
[ENOSYS] The timer_settime(), timer_gettime(), and
timer_getoverrun() functions are not supported by
this implementation.
EXAMPLES [Toc] [Back]
Create a timer, set it to go off in one minute, and deliver a SIGUSR1
signal:
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
timer_t timerid;
struct itimerspec one_minute = { {0, 0}, {60, 0} };  /* it_interval, it_value */
void handler(int sig)
{
int overrun = timer_getoverrun(timerid);
if (overrun == -1) {
perror("handler: timer_getoverrun()");
exit(1);
}
(void)printf("Timer expired, overrun count was %d\n", overrun);
}
int main()
{
struct sigaction sigact;
struct sigevent sigev;
sigact.sa_handler = handler;
sigemptyset(&sigact.sa_mask);
sigact.sa_flags = 0;
if (sigaction(SIGUSR1, &sigact, (struct sigaction *)NULL)
== -1) {
perror("sigaction");
exit(1);
}
sigev.sigev_notify = SIGEV_SIGNAL;
sigev.sigev_signo = SIGUSR1;
if (timer_create(CLOCK_REALTIME, &sigev, &timerid)
== -1) {
perror("timer_create");
exit(1);
}
if (timer_settime(timerid, 0, &one_minute, (struct itimerspec *)NULL)
== -1) {
perror("timer_settime");
exit(1);
}
pause();
if (timer_delete(timerid) == -1) {
perror("timer_delete");
exit(1);
}
return 0;
}
AUTHOR [Toc] [Back]
timer_create(), timer_delete(), timer_settime(), timer_gettime(), and
timer_getoverrun() were derived from the proposed IEEE POSIX P1003.4
standard, draft 14.
SEE ALSO [Toc] [Back]
clocks(2), getitimer(2).
STANDARDS CONFORMANCE [Toc] [Back]
timer_create(): POSIX.4
timer_delete(): POSIX.4
timer_getoverrun(): POSIX.4
timer_gettime(): POSIX.4
timer_settime(): POSIX.4
https://nixdoc.net/man-pages/HP-UX/man2/timer_delete.2.html
Write a function called nested_sum that takes a nested list
of integers and adds up the elements from all of the nested lists.
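One possible solution sketch in Python (this is an illustration, not the book's official answer):

```python
def nested_sum(nested):
    """Add up all the integers in a list of lists."""
    total = 0
    for sublist in nested:
        total += sum(sublist)
    return total

print(nested_sum([[1, 2], [3], [4, 5, 6]]))  # 21
```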
Use capitalize_all to write a function named capitalize_nested
that takes a nested list of strings and returns a new nested list
with all strings capitalized.
Write a function called middle that takes a list and
returns a new list that contains all but the first and last
elements. So middle([1,2,3,4]) should return [2,3].
Write a function called chop that takes a list, modifies it
by removing the first and last elements, and returns None.
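Possible sketches for both exercises (again, illustrations rather than the book's official solutions) that highlight the difference between returning a new list and modifying one in place:

```python
def middle(t):
    """Return a new list with the first and last elements removed."""
    return t[1:-1]

def chop(t):
    """Remove the first and last elements in place; return None."""
    del t[0]
    del t[-1]

nums = [1, 2, 3, 4]
print(middle(nums))  # [2, 3] -- nums itself is unchanged
chop(nums)
print(nums)          # [2, 3] -- nums was modified
```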
If a function modifies a list parameter, the caller sees the change.
For example, delete_head removes the first element from a list:
delete_head
def delete_head(t):
del t[0]
Here’s how it is used:
>>> letters = ['a', 'b', 'c']
>>> delete_head(letters)
>>> print letters
['b', 'c']

The + operator, on the other hand, creates a new list:

>>> t1 = [1, 2, 3]
>>> t3 = t1 + [4]
>>> print t3
[1, 2, 3, 4]
You can read about this problem at, and you can download my
solution from.
Write a function called remove_duplicates that takes
a list and returns a new list with only the unique elements from
the original. Hint: they don’t have to be in the same order.
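A possible sketch using a set (order is not preserved, which the exercise explicitly allows; not the book's official answer):

```python
def remove_duplicates(t):
    """Return a new list containing only the unique elements of t."""
    return list(set(t))

print(sorted(remove_duplicates([1, 2, 2, 3, 1])))  # [1, 2, 3]
```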
http://greenteapress.com/thinkpython/html/thinkpython011.html
backports 1.0
Namespace for backported Python features
A few minutes ago, my fingers were poised for a moment above the keyboard as I prepared to backport the essential match_hostname() function (without which the Secure Sockets Layer is not actually secure!) from the Python 3.2 version of the ssl Standard Library to earlier versions of Python. Suddenly, I paused: what would I call the new distribution that I created in the Package Index to hold this small function?
It seemed a shame to consume an entire top-level name in the Package Index for what is, after all, a stopgap measure until older versions of Python are one day retired.
And so I conceived this backports namespace package. It reserves a namespace beneath which we can happy place all of the various features that we want to cut-and-paste from later Python versions. I hope that this will provide two benefits:
- It should provide greater sanity, and a bit more organization, in the Package Index.
- When you are ready to port a Python application to a new version of Python, you can search the code for any import statements that name a backports package, and remove the backports for features that have now “arrived” in the version of Python to which you are upgrading.
I have considered calling for all backports packages to issue a warning upon import if they detect that they are running under a version of Python that has now gained the feature they offer, but I think that will be unkind to actual users, since the most widespread versions of Python today still display warnings by default.
Building your own backports module
Placing a module of your own inside of the backports namespace requires only a few simple steps. First, set your project up like:
project/
project/setup.py
project/backports/
project/backports/__init__.py    <--- SPECIAL - see below!
project/backports/yourpkg/
project/backports/yourpkg/__init__.py
project/backports/yourpkg/foo.py
project/backports/yourpkg/bar.py
This places your own package inside of the backports namespace, so your package and its modules can be imported like this:
import backports.yourpkg
import backports.yourpkg.foo
The one absolutely essential rule is that the __init__.py inside of the backports directory itself must have the following code as its content:
# A Python "namespace package"
# This always goes inside of a namespace package's __init__.py
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
If you fail to include this code, then the namespace package might fail to see all of the packages beneath it, and import statements might return errors.
A live example of a package that implements all of this can be downloaded from the Python Package Index:
What if the feature is present?
An issue on which I am undecided is whether a backports package, if it finds itself on a modern enough version of Python, should simply import the "real" version of its feature from the Standard Library instead of offering the replacement. My guess is that this is not a good idea, because if — for some reason — an incompatibility crops up between the tweaked code in a backport and the official code in the modern Standard Library, then it would be nice for developers using the backport to be faced with that breakage when they themselves try removing the backport, instead of being faced with it simply because a user tries running their program on a more modern version of Python.
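A consumer-side pattern that sidesteps the question entirely is to prefer the Standard Library module when it exists and fall back to the backport otherwise. A sketch (the resolve helper is illustrative, not part of any backports package):

```python
import importlib
import importlib.util

def resolve(stdlib_name, backport_name):
    """Import stdlib_name if it is installed, else fall back to the backport."""
    if importlib.util.find_spec(stdlib_name) is not None:
        return importlib.import_module(stdlib_name)
    return importlib.import_module(backport_name)

# In real use this might be resolve("ssl", "backports.some_feature");
# here we exercise it with a module guaranteed to exist:
json_mod = resolve("json", "json")
print(hasattr(json_mod, "loads"))  # True
```

When the application is later ported to a newer Python, searching for calls like this (or for `import backports.`) pinpoints exactly which stopgaps can be deleted.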
- Author: Brandon Craig Rhodes
- Package Index Owner: brandonrhodes
- DOAP record: backports-1.0.xml
|
https://pypi.python.org/pypi/backports/1.0
|
CC-MAIN-2015-27
|
refinedweb
| 613
| 54.46
|
Created on 2012-06-04 20:14 by sspapilin, last changed 2014-04-20 16:47 by orsenthil.
File test.py is
#!/usr/bin/env python
import urllib2
print urllib2.urlopen('').read()
When I issue
python test.py > out.txt
, I get file about 100KB in size, the beginning of the actual file. I repeated it a hundred times, and almost every time I get 98305 byte file, and a couple of times a 49153 bytes or 188417 bytes file.
When I replace urllib2 with urllib in test.py, I get full (4 MB) file.
I have Ubuntu 12.04 64-bit, Python 2.7.3 (from default Ubuntu repository, up-to-date as of 4-june-2012) and slow, 64KB/s, Internet connection.
However, I asked my friend with Windows and faster connection to check it, and he got partial download as well, while he had another size of partial file (50109 bytes). I do not know his OS ant Python versions.
The same problem exists in Python3.
That's surprising! I shall test it with http debug mode and see what's happening.
I've tested this on head, and the issue appears to be buggy ftp code in python.
From the attached tcpdump for fetching delegated-ripencc-20120706:
12:57:19.933607 IP myhost.39627 >: Flags [.], ack 511, win 115, options [nop,nop,TS val 129353190 ecr 1632444059], length 0
12:57:19.934853 IP myhost.39627 >: Flags [F.], seq 97, ack 511, win 115, options [nop,nop,TS val 129353191 ecr 1632444059], length 0
and a bit later:
12:57:20.043701 IP > myhost.50818: Flags [.], seq 46337:47785, ack 1, win 227, options [nop,nop,TS val 2552550247 ecr 129353204], length 1448
12:57:20.043717 IP myhost.50818 >: Flags [.], ack 47785, win 353, options [nop,nop,TS val 129353218 ecr 2552550247], length 0
12:57:20.043816 IP > myhost.50818: Flags [FP.], seq 47785:49153, ack 1, win 227, options [nop,nop,TS val 2552550247 ecr 129353204], length 1368
12:57:20.043992 IP myhost.50818 >: Flags [F.], seq 1, ack 49154, win 376, options [nop,nop,TS val 129353218 ecr 2552550247], length 0
12:57:20.094067 IP > myhost.50818: Flags [.], ack 2, win 227, options [nop,nop,TS val 2552550299 ecr 129353218], length 0
As you can see we're sending a FIN without sending a close command to the control connection, and in response the server stops sending data about 49k in. Per RFC 959 section 2.3: "The server may abort data transfer if the control connections are closed without command." so this is acceptable behaviour on the part of the server, and means we need to keep the control connection open for longer.
More particularly, the ftpwrapper's ftp member is being GCed sometime after FtpHandler.ftp_open returns.
Looking into this.
It seems that it doesn't happen for all servers, I can download large files reliably from other sources.
I'll make another wireshark recording to get more details for me to analyze.
> I'll make another wireshark recording to get more details for me to analyze.
Thank you! That will be useful. Please test it against 3.x version as it has seen cleanups recently.
This is actually the same problem as #18879.
Changing the sample to keep a reference to the addinfourl object avoids this issue.
This is even worse than #18879 in the sense that the error goes undetected and just leaves you with partial data.
Looking at the solution in #18879 I think we can reuse that, maybe even better by refactoring that to a common file proxy object.
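The underlying hazard is easy to reproduce in miniature: in CPython, dropping the last reference to an object finalizes it immediately, so a control-connection object that is only referenced locally gets closed as soon as the opening function returns. The classes below are illustrative stand-ins, not the actual urllib internals:

```python
import gc
import io

closed_log = []

class ControlConnection:
    """Stand-in for the FTP control connection."""
    def __del__(self):
        closed_log.append("closed")  # the server would abort the transfer here

def buggy_open():
    ctrl = ControlConnection()
    return io.BytesIO(b"payload")   # last reference to ctrl is dropped here

def fixed_open():
    ctrl = ControlConnection()
    data = io.BytesIO(b"payload")
    data._ctrl = ctrl               # keep the control connection alive
    return data                     # ...for as long as the data stream lives

resp = buggy_open()
print(len(closed_log))  # 1 -- the control connection was already finalized

resp = fixed_open()
print(len(closed_log))  # still 1 while resp keeps ctrl alive
```

Tying the control connection's lifetime to the object handed back to the caller is exactly the shape of the fix discussed here.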
I wasn't able to come up with a good testcase. :(
I tried similar approaches as in #18879 but I wasn't able to make them trigger the behaviour as it also seems to be an issue regarding actual network performance ... :/
Backport to 2.7 is currently missing as I'd need #18879 to be backported. If that is OK (I'd like to have this in 2.7) then I'd be happy to port both.
Antoine, I'm adding you here as I'm leveraging your patch from #18879.
I'd need some feedback about the backport, but this patch should be OK for 3.4. Also, if you had an idea how to test this - I tried, but failed so far.
Antoine, could you check my last comment in here?
(The nosy list got reset accidentally when I made that comment and got a conflict from the tracker).
Christian, with respect to the patch, I agree with the logic (using something similar to #18879). Do all current unit tests succeed with this? (I suspect not.) A unit test for coverage would be helpful.
Well, this looks ok on the principle, but I haven't investigated the urllib-specific parts, so I'll let Senthil delve into this :)
Christian's patch is good. It helps in setting the socket.makefile file descriptor to a well-behaved file-close wrapper and thus will help us prevent some tricky fd close issues.
I added tests for coverage to ensure that we are asserting the type and covering all the methods in urllib.response. Attaching the patch built on top of Chritians one.
New changeset bb71b71322a3 by Senthil Kumaran in branch '3.4':
urllib.response object to use _TemporaryFileWrapper (and _TemporaryFileCloser)
New changeset 72fe23edfec6 by Senthil Kumaran in branch '3.4':
NEWS entry for #15002
New changeset 8c8315bac6a8 by Senthil Kumaran in branch 'default':
merge 3.4
This is fixed in 3.4 and 3.5. I will backport to 2.7 (I think it is worth it).
https://bugs.python.org/issue15002
Dear Wiki user,
You have subscribed to a wiki page or wiki category on "Incubator Wiki" for change notification.
The "March2012" page has been changed by ArvindPrabhakar:
Signed off by mentor:
wave, bdelacretaz
--------------------
- Flume
+ Flume
- (project add text here)
+ Apache Flume is a distributed, reliable, and available system for efficiently collecting,
aggregating, and moving large amounts of log data to scalable data storage systems such as
Apache Hadoop's HDFS.
- Signed off by mentor:
+ Flume entered incubation on June 12th, 2011.
+
+ Progress since last report
+ *)
--------------------
Hama
@@ -521, +533 @@
--------------------
Sqoop
- (project add text here)
+ A tool for efficiently transferring bulk data between Apache Hadoop and structured datastores
such as relational databases.
+
+ Sqoop was accepted into Apache Incubator on June 11, 2011. Status information is available
at [].
+
+ Progress since last report:
+ * Sqoop PPMC voted in Kathleen Ting as a new committer.
+ * Sqoop PPMC voted in Jarek Jarcec Cecho (Jaroslav Cecho) as PPMC member.
+ * Released Sqoop version 1.4.1-incubating, with support for Apache Hadoop versions 0.20,
0.23 and 1.0.
+ * Sqoop started its graduation process on February 19th, 2012.
+
+ Progress on graduation:
+ * Community Vote: PASSED. Vote (1), Result (2)
+ * Incubator PMC Vote: PASSED. Vote (3), Result (4)
+ * During IPMC vote a concern was raised that Sqoop contains deprecated Java source in com.cloudera.sqoop
namespace which need be removed before graduation.
+ * Sqoop dev expressed that this code was:
+ * Marked deprecated and retained solely for backward compatibility.
+ * That there was a concrete plan to completely remove it by next major revision, work
for which had already started.
+ * That it was not motivated by Cloudera's corporate interests.
+ * The Incubator PMC and Sqoop community reached consensus that:
+ * Sqoop was within its right to retain Java source code in com.cloudera.sqoop namespace
to provide backwards compatibility.
+ * And, that it was for the benefit of Sqoop community.
+ * And, that there is no such policy in ASF that Incubator can enforce on any Incubating
projects.
+ * And, that if anyone feels that this should be a policy, they must establish it via the
proper channel of providing the problem statement, impact of the problem, and solution to
the board for consideration.
+ * The draft board resolution that was voted on by IPMC has been submitted to the board
for consideration in its next meeting scheduled for March 21, 2012.
+
+ (1)
+ (2)
+ (3)
+ (4)
Signed off by mentor:
---------------------------------------------------------------------
To unsubscribe, e-mail: cvs-unsubscribe@incubator.apache.org
For additional commands, e-mail: cvs-help@incubator.apache.org
Recently, while looking for some quick stats, I wanted to be able to wrap my mind around the population and population changes for each state. In order to do that, I used the handy-dandy 2015 Census results and was able to quickly render up some visualizations using Kendo UI for Angular 2.
In this article, I'll walk you through how the project was built. Along the way, we'll learn some Angular 2 and Kendo UI. Feel free to check out the repo and the website.
To get started, we need to create an Angular 2 project that will provide a project structure. For this example, you'll need to have the Angular CLI installed and the Progress npm registry configured for use on your machine. Check out our fantastic Getting Started guide that walks you through the steps to get this set up.
Once your machine is ready, navigate to the directory where you would like your project to live and enter the
ng new command with your project name:
cd angular-projects ng new kendoui-ng2-census-visualizations
This command will scaffold a directory structure that contains all of your project's files:
installing ng2 create .editorconfig create README.md create src/app/app.component.css create src/app/app.component.html create src/app/app.component.spec.ts create src/app/app.component.ts create src/app/app.module.ts create src/app/index. Installed packages for tooling via npm.
For this example, we'll only modify
src/styles.scss and a few files in the
src/app directory.
Personally, this is where I like to initiate my git repo for a project. Sure, it's the very beginning and an easy point to recreate but this gets the habit of committing fresh in the brain.
Kendo UI for Angular 2 provides two main options to include the Kendo UI theme in your project. We can either use a precompiled theme that styles all components, or load the theme source files through Webpack to modify and customize its styles. For this example, we'll add Kendo UI's default styling to get things styled right off the bat. Then, we'll add a
<h1> and a few placeholder
<div> elements.
First, we need to install (
i) the module containing the default theme of Kendo UI for Angular 2 and save it to our
package.json dependency list (
-S or
--save).
npm i @telerik/kendo-theme-default -S
Once the package is installed, we need to update our
src/styles.scss file to utilize its styles. This is done by adding a new
font-family property and changing some styling for the
<h1> to show how you add custom styling:
src/styles.scss
@import "~@telerik/kendo-theme-default/styles/packages/all"; @import url(''); // you can enter custom SCSS below h1 { font-family: 'Roboto', sans-serif; text-align: center; font-size: 5em; font-weight: 100; }
Next, we'll add a header and some
<div> elements to
src/app/app.component.ts. Ideally, we would have component files for each one of our visualizations to keep them modular and to prevent
app.component.ts from growing too large. For now, we'll just keep it in one file to make it quick and easy:
src/app/app.component.ts
@Component({ selector: 'app-root', styleUrls: ['./app.component.scss'], template: ` <h1>2015 US Census Data Visualized</h1> <div class="visualization"></div> <div class="visualization"></div> <div class="visualization"></div> ` })
I always like to run
ng serve from the project's directory at each step just to make sure everything is hunky-dory. That way, when things go wrong, I know where to start debugging. At this point, the page is looking rather bare:
You can see the changes made in this section by looking at this commit in the repo.
For these visualizations, we'll use an array of objects for our data. This is the most common way of binding your data because we can set our model and don't have to modify our data.
The census data came as a CSV file so I opened the file up, made smaller data sets with the info I wanted, and saved those as individual CSV files. In order to easily convert them to JSON, I used Martin Drapeau's CSV to JSON site. There are lots of modules to do this or you could write your own little script but that was the fastest resource that I could find. 😊
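If you'd rather not depend on a converter site, a few lines of Python do the same CSV-to-JSON job. The column names below match the census subset used in this article; the inline sample is a stand-in, and a real run would read your CSV file instead.

```python
import csv
import io
import json

# Inline sample standing in for the census CSV file; with a real file you
# would use: with open("population.csv", newline="") as f: csv.DictReader(f)
csv_text = """state,population
Alaska,738432
Arizona,6828065
Arkansas,2978204
"""

# Build the same shape of objects the chart binds to: state + population.
rows = [
    {"state": r["state"], "population": int(r["population"])}
    for r in csv.DictReader(io.StringIO(csv_text))
]

print(json.dumps(rows, indent=2))
```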
Now we're ready to start visualizing some data! First, we'll install the chart module and save it to the project's dependencies:
npm i @progress/kendo-angular-charts@0.9.1 -S
Once installed, we can use the charts module in our project:
src/app/app.module.ts
import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { FormsModule } from '@angular/forms'; import { HttpModule } from '@angular/http'; import { AppComponent } from './app.component'; import { ChartsModule } from '@progress/kendo-angular-charts'; @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule, FormsModule, HttpModule, ChartsModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule { }
In
src/app/app.component.ts, we'll add the tags to the component to create the visualization. First, we'll add the selector for the chart (
kendo-chart) as well as the selectors for its data (
kendo-chart-series and
kendo-chart-series-item).
Selectors like
kendo-chart-series-item and
kendo-chart-value-axis-item (we haven't used that one yet, but we will 😁), must be nested within their parent (i.e.
kendo-chart-series-item must go inside
kendo-chart-series which must live in
kendo-chart). We're basically climbing down the tree. Here's what that looks like in the code:

@Component({ selector: 'app-root', styleUrls: ['./app.component.scss'], template: ` <h1>2015 US Census Data Visualized</h1> <div class="visualization"> <kendo-chart> <kendo-chart-series> <kendo-chart-series-item> </kendo-chart-series-item> </kendo-chart-series> </kendo-chart> </div> <div class="visualization"></div> <div class="visualization"></div> ` })
To add data, we'll have to add some inputs into our
kendo-chart-series-item selector so that it knows how to get and handle our data. You can check out the whole API documentation on this component but here is the list of the inputs we'll be using now:
type: the series type visualization we want (we'll be using
bar, but check out all the different series types!)
data: the data (inline) or a reference (I recommend looking at the details to get a thorough understanding)
field: the value of the data item
category: contains the category (the points will be rendered in chronological order if it's a date)

<kendo-chart> <kendo-chart-series> <kendo-chart-series-item type="bar" [data]="populationData" field="population" categoryField="state"> </kendo-chart-series-item> </kendo-chart-series> </kendo-chart>
We've set the
data input to
populationData, so we'll need to create that object to bind it to the chart. To do this, we'll add it to the
AppComponent class:
src/app/app.component.ts (at the bottom)
export class AppComponent { private populationData: Model[] = [{ "state": "Alaska", "population": 738432 }, { "state": "Arizona", "population": 6828065 }, { "state": "Arkansas", "population": 2978204 }, { "state": "California", "population": 39144818 }, { "state": "Colorado", "population": 5456574 }, { "state": "Connecticut", "population": 3590886 }]; }
I've only included a handful of states to keep the code here short but you can either grab the gist of all the states here or you can view the whole file in the repo.
In order for this data to be interpreted correctly, we will need to declare the data's Model at the top of the file.
src/app/app.component.ts (at the top)
import { Component } from '@angular/core'; interface Model { state: string; population: number; } @Component({ ...
Okay, with the data added we should be able to serve up our project and see a chart! Run
ng serve and head on over to the app in the browser. If you have added all the states' information, it will look super squished, but you do have a visualization 📊 🙌!
If you ran into any problems or you just want to see what we changed in this section, check out the diff in this commit. If you find yourself with a page that just says "Loading" you can also check the console to see if any errors are popping up.
We can change the styling within the chart selectors with different inputs. Let's step through each additional selector and the new inputs we've added to each of the selectors we already have in place. First, let's take a look at what the code will look like with all these changes.
src/app/app.component.ts
@Component({ selector: 'app-root', styleUrls: ['./app.component.scss'], template: ` <h1>2015 US Census Data Visualized</h1> <div class="visualization"> <kendo-chart <kendo-chart-title </kendo-chart-title> <kendo-chart-series> <kendo-chart-series-defaults [gap]="0.25"> </kendo-chart-series-defaults> <kendo-chart-series-item </kendo-chart-series-item> </kendo-chart-series> <kendo-chart-category-axis> <kendo-chart-category-axis-item [majorGridLines]="{ visible: false }" [majorTicks]="{ visible: false }" [labels]="{ rotation: '-25' }"> </kendo-chart-category-axis-item> </kendo-chart-category-axis> <kendo-chart-value-axis> <kendo-chart-value-axis-item [max]="40000000" [majorTicks]="{ visible: false }"> </kendo-chart-value-axis-item> </kendo-chart-value-axis> </kendo-chart> </div> <div class="visualization"></div> <div class="visualization"></div> ` })
In order to get rid of the squished look, we can increase the height of the whole chart component by editing the style of
kendo-chart that was already in place.
<kendo-chart
We can then add a title to the chart. To do so we have to add and edit the selector for the title component,
kendo-chart-title. To have it match the
h1 text we can change the font to
Roboto.
<kendo-chart-title </kendo-chart-title>
When we made the chart bigger, the bars for each value of the data didn't change in height, leaving the data looking quite twiggy. To fix this, we actually just need to change the size of the gap between each bar. This customization lives in the
kendo-chart-series-defaults selector and we just set it to a number.
<kendo-chart-series-defaults [gap]="0.25"> </kendo-chart-series-defaults>
Although we already added some inputs on the
kendo-chart-series-item, we can tack on some more for styling the bars. In order to see the grid lines for each bar we can change the opacity of the bars, then change the color to be less partisan 😉 and change the border color as well to match. There is a ton more that you can do with the category axis item component — you can find all that info in the API. Yay!
<kendo-chart-series-item </kendo-chart-series-item>
Next, we'll look at both of the axis items together. We'll remove the major grid lines from the category axis (the y-axis in this chart) because the bars for the series act as a guide well enough. We'll remove the tick marks for both axes because they seem unneeded BUT I highly recommend switching the boolean on these and testing them all for yourself! You can mess with all the other options for these axes too: the category axis item & the value axis item. We can also rotate the labels on the y-axis because…why not? Sometimes this may actually be necessary to fit the labels if you have long label names and/or limited space. The last thing we'll tweak is the max value of the x-axis. Since California has the largest population at 39144818, we'll go ahead and cap the value at 40000000.
<kendo-chart-category-axis> <kendo-chart-category-axis-item [majorGridLines]="{ visible: false }" [majorTicks]="{ visible: false }" [labels]="{ rotation: '-25' }"> </kendo-chart-category-axis-item> </kendo-chart-category-axis> <kendo-chart-value-axis> <kendo-chart-value-axis-item [majorTicks]="{ visible: false }" [max]="40000000"> </kendo-chart-value-axis-item> </kendo-chart-value-axis>
After looking at all these changes I decided to add a bottom border to the header.
src/styles.scss
h1 { font-family: 'Roboto', sans-serif; text-align: center; font-size: 5em; font-weight: 100; border-bottom: thin solid black; }
Here's the resulting chart:
Check out all the changes we made in this section in the commit.
That's it! We have a clean looking, easy to read visualization where we can compare the populations of all the states. Now I know that Ohio actually has the 7th largest population, yet somehow everyone knows everyone in Cincinnati, it's amazing! Seriously, it's kind of eerie (Ohio pun not intended) but I ❤️ it.
Now feel free to try it for yourself in the last two
.visualization
<div> elements!
Related resources:
Here, we will be making "The Great Indian Flag" using Python Turtle Graphics. We will be using many turtle functions, like begin_fill() and end_fill() to fill color inside the flag, and penup(), pendown(), goto(), etc., to reach the target.
Turtle graphics
In computer graphics, turtle graphics are vector graphics using a relative cursor upon a Cartesian plane. Turtle is a drawing-board-like feature which lets us command the turtle and draw using it.
Features of turtle graphics:
- forward(x): moves the pen in forward direction by x units.
- backward(x): moves the pen in the backward direction by x units.
- right(x): rotate the pen in the clockwise direction by an angle x.
- left(x): rotate the pen in the anticlockwise direction by an angle x.
- penup(): stop drawing of the turtle pen.
- pendown(): start drawing of the turtle pen.
- begin_fill(): starts filling the color inside the shape.
- fillcolor(“color_name”): sets the color to be filled.
- end_fill(): stops filling the color.
Approach
1. import the turtle modules.
import turtle
2. Get a screen to draw on.
screen = turtle.Screen()
3. Define an instance for the turtle (here "t").

t = turtle.Turtle()
4. For making the Indian Flag, let's divide the process into 4 steps:
- The rectangle with orange color.
- Then the middle rectangle.
- Then the last Green Rectangle.
- Then Ashoka Chakra inside the middle rectangle.
5. Here, the dimensions of each of the three rectangles are 800 units x 167 units, which makes the dimensions of the flag 800 units x 501 units.
6. The turtle starts from coordinates (-400, 250).
7. Then from that position it makes the First rectangle of orange color.
8. Then, from the ending point of the first rectangle, the turtle makes the second rectangle with no fill color.
9. Then the third, green-colored rectangle is made. Now, for the Ashoka Chakra, we need to perform a set of operations:
- A Big Blue circle and a white circle just smaller than blue.
- Set of small blue circles on the inner lining of a blue and white circle.
- And finally, spokes inside the blue and white circles, starting from the centre and going outwards.
10. Finally, the pride of one's nation is ready.
Below is the implementation of the above approach:
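The article's code listing did not survive extraction here, so the following is a minimal self-contained sketch that follows the steps above: three 800 x 167 stripes starting from (-400, 250), plus a simplified Ashoka Chakra (an outline circle with 24 spokes from the centre; the chakra radius of 60 and the exact colors are my own choices, not from the original). The turtle import is deferred inside draw_flag() so the geometry helper works without a display; call draw_flag() to render.

```python
FLAG_W = 800       # flag width in units
STRIPE_H = 167     # each stripe is 800 x 167, so the flag is 800 x 501
TOP_Y = 250        # the turtle starts from (-400, 250)
CHAKRA_SPOKES = 24

def spoke_angles(n=CHAKRA_SPOKES):
    """Headings (in degrees) of the chakra's n evenly spaced spokes."""
    return [i * 360 / n for i in range(n)]

def draw_flag():
    # turtle is imported lazily so this module loads without a display.
    import turtle
    screen = turtle.Screen()
    t = turtle.Turtle()
    t.speed(0)

    def stripe(color, top_y):
        """Draw one filled 800 x 167 rectangle whose top edge is at top_y."""
        t.penup()
        t.goto(-FLAG_W // 2, top_y)
        t.setheading(0)
        t.pendown()
        t.fillcolor(color)
        t.begin_fill()
        for dist in (FLAG_W, STRIPE_H, FLAG_W, STRIPE_H):
            t.forward(dist)
            t.right(90)
        t.end_fill()

    stripe("orange", TOP_Y)                  # 1. the orange rectangle
    stripe("white", TOP_Y - STRIPE_H)        # 2. the middle rectangle
    stripe("green", TOP_Y - 2 * STRIPE_H)    # 3. the green rectangle

    # 4. Ashoka Chakra: a blue circle with 24 spokes from the centre.
    cx, cy, r = 0, TOP_Y - STRIPE_H - STRIPE_H // 2, 60
    t.pensize(3)
    t.pencolor("navy")
    t.penup()
    t.goto(cx, cy - r)
    t.pendown()
    t.circle(r)
    for angle in spoke_angles():
        t.penup()
        t.goto(cx, cy)
        t.setheading(angle)
        t.pendown()
        t.forward(r)

    screen.mainloop()
```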
List the available CloudWatch metrics for your instances
Amazon EC2 sends metrics to Amazon CloudWatch. You can use the AWS Management Console, the AWS CLI, or an API to list the metrics that Amazon EC2 sends to CloudWatch. By default, each data point covers the 5 minutes that follow the start time of activity for the instance. If you've enabled detailed monitoring, each data point covers the next minute of activity from the start time.
For information about getting the statistics for these metrics, see Get statistics for metrics for your instances.
Contents
Instance metrics
The
AWS/EC2 namespace includes the following instance metrics.
CPU credit metrics
The
AWS/EC2 namespace includes the following CPU credit metrics for your
burstable performance instances.
Amazon EBS metrics for Nitro-based instances
The
AWS/EC2 namespace includes the following Amazon EBS metrics for the
Nitro-based instances that are not bare metal instances. For the list of Nitro-based
instance types, see Instances built on the Nitro System.
Metric values for Nitro-based instances will always be integers (whole numbers), whereas values for Xen-based instances support decimals. Therefore, low instance CPU utilization on Nitro-based instances may appear to be rounded down to 0.
For information about the metrics provided for your EBS volumes, see Amazon EBS metrics. For information about the metrics provided for your Spot fleets, see CloudWatch metrics for Spot Fleet.
Status check metrics
The
AWS/EC2 namespace includes the following status check metrics. By default, status check metrics
are available at a 1-minute frequency at no charge. For a newly-launched instance,
status check metric data is only available
after the instance has completed the initialization state (within a few minutes of
the instance entering the running
state). For more information about EC2 status checks, see Status checks for your
instances.
Traffic mirroring metrics
The
AWS/EC2 namespace includes metrics for mirrored traffic. For more
information, see Monitoring mirrored traffic using Amazon CloudWatch in the
Amazon VPC Traffic Mirroring Guide.
Amazon EC2 metric dimensions
You can use the following dimensions to refine the metrics listed in the previous tables.
Amazon EC2 usage metrics
You can use CloudWatch usage metrics to provide visibility into your account's usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards.
Amazon EC2 usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about CloudWatch integration with service quotas, see Service Quotas Integration and Usage Metrics.
Amazon EC2 publishes the following metrics in the
AWS/Usage namespace.
The following dimensions are used to refine the usage metrics that are published by Amazon EC2.
Listing metrics using the console
Metrics are grouped first by namespace, and then by the various dimension combinations within each namespace. For example, you can view all metrics provided by Amazon EC2, or metrics grouped by instance ID, instance type, image (AMI) ID, or Auto Scaling group.
To view available metrics by category (console)
Open the CloudWatch console at
.
In the navigation pane, choose Metrics.
Choose the EC2 metric namespace.
Select a metric dimension (for example, Per-Instance Metrics).
To sort the metrics, use the column heading. To graph a metric, select the check box next to the metric. To filter by resource, choose the resource ID and then choose Add to search. To filter by metric, choose the metric name and then choose Add to search.
Listing metrics using the AWS CLI
Use the list-metrics command to list the CloudWatch metrics for your instances.
To list the available metrics for a specified metric name (AWS CLI)
The following example specifies the
AWS/EC2 namespace and a metric name
to view the results for the specified metric only.
aws cloudwatch list-metrics --namespace AWS/EC2 --metric-name CPUUtilization
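As a sketch of what you might do with that output programmatically, the snippet below filters a ListMetrics-shaped response down to the metrics of one instance. The response here is a small hand-written sample with hypothetical instance IDs; a live call (for example via boto3's CloudWatch client `list_metrics`) requires AWS credentials.

```python
# Sample of the JSON shape returned by `aws cloudwatch list-metrics`
# (hypothetical instance IDs; a real call needs AWS credentials).
response = {
    "Metrics": [
        {"Namespace": "AWS/EC2", "MetricName": "CPUUtilization",
         "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}]},
        {"Namespace": "AWS/EC2", "MetricName": "NetworkIn",
         "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}]},
        {"Namespace": "AWS/EC2", "MetricName": "CPUUtilization",
         "Dimensions": [{"Name": "InstanceId", "Value": "i-0fedcba9876543210"}]},
    ]
}

def metrics_for_instance(resp, instance_id):
    """Keep only metrics carrying an InstanceId dimension equal to instance_id."""
    return [
        m["MetricName"]
        for m in resp["Metrics"]
        if any(d["Name"] == "InstanceId" and d["Value"] == instance_id
               for d in m["Dimensions"])
    ]

print(metrics_for_instance(response, "i-0123456789abcdef0"))
```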
package com.test;

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.util.Log;
import android.widget.Toast;

public class MainActivity extends Activity {

    private static final String TAG = MainActivity.class.getName();
    private static final String FILENAME = "myFile.txt";

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        String textToSaveString = "Hello Android";
        writeToFile(textToSaveString);
        String textFromFileString = readFromFile();

        if (textToSaveString.equals(textFromFileString))
            Toast.makeText(getApplicationContext(), "both strings are equal", Toast.LENGTH_SHORT).show();
        else
            Toast.makeText(getApplicationContext(), "there is a problem", Toast.LENGTH_SHORT).show();
    }

    private void writeToFile(String data) {
        try {
            OutputStreamWriter outputStreamWriter =
                    new OutputStreamWriter(openFileOutput(FILENAME, Context.MODE_PRIVATE));
            outputStreamWriter.write(data);
            outputStreamWriter.close();
        } catch (IOException e) {
            Log.e(TAG, "File write failed: " + e.toString());
        }
    }

    private String readFromFile() {
        String ret = "";
        try {
            InputStream inputStream = openFileInput(FILENAME);
            if (inputStream != null) {
                InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
                BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
                StringBuilder stringBuilder = new StringBuilder();
                String receiveString;
                while ((receiveString = bufferedReader.readLine()) != null) {
                    stringBuilder.append(receiveString);
                }
                inputStream.close();
                ret = stringBuilder.toString();
            }
        } catch (FileNotFoundException e) {
            Log.e(TAG, "File not found: " + e.toString());
        } catch (IOException e) {
            Log.e(TAG, "Can not read file: " + e.toString());
        }
        return ret;
    }
}
Advertisements
32 thoughts on “Read/Write Text File/Data in Android example code”
tnx but where you put the file in assets folder ?
@gu
nope. you don’t need to put any files in anywhere
the problem is, you would like to write some data in a file and read it back
if you use the above code, a file will be created in your app sandbox (other app does not have access to it) and you can read it from there
Thx this helped me a lot!
Only got one problem:
When i try to read my data and send it to a TextView it only shows the latest string
hope you can help me
@Satanta, can you explain what are you trying to do and what is latest string?
i have two activities:
in the first, i want to write text from an EditText to a file like in your sample
it looks like this:
private OnClickListener btn=new OnClickListener() {
public void onClick(View v){
String name = etWinner.getText().toString();
writeToFile(name);
startActivity(new Intent(SubmitActivity.this, MainActivity.class));
}
};
private void writeToFile(String data){
String newline=”\r\n”;
try {
OutputStreamWriter oswName = new OutputStreamWriter(openFileOutput(HIGHSCORE, Context.MODE_PRIVATE));
oswName.write(newline);
oswName.write(data);
oswName.close();
}
catch (IOException e) {
Log.e(TAG, “File write failed: ” + e.toString());
}
}
in the second activity i try to read the stored data and write it to a textView:
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_highscore);
btn1 = (Button)findViewById(R.id.buttonBack);
btn1.setOnClickListener(btn);
String winners = readFromFile();
TextView tvNames=(TextView)findViewById(R.id.textViewNames);
tvNames.setText(winners);
}
…
private String readFromFile(){
String ret = “”;
try {
InputStream inputStream = openFileInput(HIGHSCORE);
if (inputStream != null) {
InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
StringBuilder stringBuilder = new StringBuilder();
String receiveString;
while ((receiveString = bufferedReader.readLine()) != null) {
stringBuilder.append(receiveString);
}
ret = stringBuilder.toString();
inputStream.close();
}
}
catch (FileNotFoundException e) {
Log.e(TAG, “File not found: ” + e.toString());
} catch (IOException e) {
Log.e(TAG, “Can not read file: ” + e.toString());
}
return ret;
}
basically its just your code so i don’t get why it doesn’t work 😦
thx for the help
Satanta
edit:
by latest string i mean it only shows the last data stored, not all the data
like – let’s say String is “american pie”, it shows “american pie”
if i store another string it just shows this one, probably overwrites “american pie”
yes, previous data is going to be overwritten by the new data, thats how it supposed to work.
if you want to save the data then you need to open the file in MODE_APPEND, not MODE_PRIVATE (line #40)
For more information:
openFileOutput
Ahh okay!
thx for the quick help!
Tnx this is great but i have one question!
J have 3 variables, a,b and c. I want to save them like 0 10 30 etc. (30 is max) and when open application again i want to read them again so b and c wouldnt be 0 they will be 10 and 30 and use that. How J can save them in that way and read them? tnx a lot!
Hi thanks for your post..but after saving data into myFile.txt…i want see where that myFile.txt file is saved either sd card or in apps assets folder or some where else…
It worked. Thanks
@radha
You could use DDMS to see it->choose your devide-> choose: data / data / (your package) / files / mytext.txt
The inputstreamreader does not read my entire file. the size of the file is 15kb..Am struck..please help me out…
My file data is a Json FILE
check this out if you want to read write and delete using random access
hi
my question is
how to access text document from mobile in android app
I am doing Payment tracker project. My doubt is, if i am giving a number/value then i am pressing a button how to store that number/value in internal phone memory with a mark not send and when the internet is connected it will automatically send to the url given in program(that means post request)and the mark will change after the number/value is send basically it is about JSON Parsing. Can u Help me Plz!
i want to know, where mytextfile.txt is stored
Should i have to give any permission to store files.. I mean in manifest file?
@shwetha
No, You don’t need to provide any permissions.
when i execute this i am not getting where file is saved. Can you help me by giving procedure to execute this? when i execute this its showing “hello world”. I guess file is not getting created.I am getting error in “R.layout.main”. please can you give a steps to execute this?
how to check file present or not!
or value=null?
Hello, i’m so excited to look into your post. But I have a question. that program is about read/write file in txt. How about to read/write file in .doc ?? please help me, Mr. thankyou 🙂
Hi how do you load the text into a listview using simple adapter ?
I want to append words from edit text one per line and save to text file …
cat
dog
cow
Than i want to load the text from the text file back as an array into listview ….Is that possible please help
Please Answer…
Where this file will be saved?
If it is writting to already exist file, then how to make new file???
Please see the previous comment of Khắc Phục.
Just use a different file name. Here I had used “myFile.txt”. Use a different one, and a new file will be created.
No in the above path nothing is available
Are you using real device? Emulator or Genymotion device might not work.
real device
LG E-970
Where would I create a file if I wanted to have a default file created in the /data/data//files directory on compile/installation?
C programming details here
I’ve been exploring for a little for any high-quality articles or blog
posts on this sort of space . Exploring in Yahoo I ultimately stumbled
upon this website. Reading this info So i’m
satisfied to express that I have an incredibly just right uncanny feeling I came
upon just what I needed. I most indubitably will make certain to don?t fail
to remember this site and give it a look regularly.
Good job it is nice tutorial, check out my code also
//**************************************
// Name: Ordinal Number Generator in C++
// Description: A program that I wrote in C++ that asks the user to enter a number and then generates the corresponding ordinal numbers. In this program I am using CodeBlocks 8.02 as my editor and Dev C++ as my compiler, which is already integrated with CodeBlocks 8.02.
//**************************************
// ordinal.cpp
// Written By: Mr. Jake R. Pomperada, MAED-IT
// Tools : CodeBlocks 8.02
// Date : November 24, 2015
#include <iostream>

using namespace std;

// Prints the numbers 1..i, each with its ordinal suffix (1st, 2nd, 3rd, ...).
void test(int i) {
    const char *message;
    int mod100 = 0, mod10 = 0;

    cout << "\n";
    for (int a = 1; a <= i; a++) {
        mod100 = (a % 100);
        mod10 = (a % 10);
        if (mod10 == 1 && mod100 != 11) {
            message = "st";
        } else if (mod10 == 2 && mod100 != 12) {
            message = "nd";
        } else if (mod10 == 3 && mod100 != 13) {
            message = "rd";
        } else {
            message = "th";
        }
        cout << " " << a << message << " ";
    }
}

int main() {
    int number = 0;

    cout << "\n";
    cout << "\t Ordinal Number Generator in C++";
    cout << "\n\n";
    cout << "Enter a Number : ";
    cin >> number;
    test(number);
    cout << "\n\n";
    cout << "End of the Program";
    cout << "\n\n";
    return 0;
}
#include "suricata-common.h"
#include "conf.h"
#include "util-device.h"
#include "util-ioctl.h"
Go to the source code of this file.
Definition in file util-ioctl.c.
Definition at line 707 of file util-ioctl.c.
References LiveDevice_::offload_orig.
output max packet size for a link
This makes a best effort to find the maximum packet size for the link. In case of uncertainty, it outputs an upper bound, to be safe and to avoid the cost of dynamic allocation.
Definition at line 132 of file util-ioctl.c.
References GetIfaceMTU().
output the link MTU
Definition at line 91 of file util-ioctl.c.
References SC_ERR_SYSCALL, SCLogInfo, SCLogWarning, and strlcpy().
Referenced by GetIfaceMaxPacketSize().
output offloading status of the link
Tests the interface for offloading features. If any of them is activated, Suricata may receive merged packets at reception. The result is oversized packets, and this can cause serious problems in capture modes where the packet size is limited (AF_PACKET in V2 mode, for example).
Definition at line 694 of file util-ioctl.c.
Definition at line 737 of file util-ioctl.c.
References SC_ERR_SYSCALL, SCLogInfo, SCLogWarning, and strlcpy().
Definition at line 724 of file util-ioctl.c.
References LiveDevice_::offload_orig.
TABLE OF CONTENTS
Page
1. SUMMARY...................................................................................................................................1
2. INTRODUCTION........................................................................................................................8
2.1 BACKGROUND .........................................................................................................8
2.2 TERMS OF REFERENCE..........................................................................................8
2.3 SOURCES OF INFORMATION................................................................................9
2.4 UNITS AND CURRENCY.........................................................................................9
4. HISTORY....................................................................................................................................15
7. METALLURGY.........................................................................................................................40
7.1 GENERAL .................................................................................................................40
7.2 PREVIOUS WORK...................................................................................................40
7.3 SAMPLE SELECTION.............................................................................................41
7.4 GRINDING TESTWORK..........................................................................................42
7.5 FLOTATION TESTWORK......................................................................................43
7.6 CYANIDATION TESTWORK ................................................................................45
7.7 DISCUSSION ............................................................................................................46
- ii -
Watts, Griffis and McOuat
8. MINING ......................................................................................................................................49
8.1 GENERAL .................................................................................................................49
8.2 THE SAADAH ZONE ..............................................................................................51
8.3 AL HOURA ZONE ...................................................................................................58
8.4 MOYEATH ZONE....................................................................................................58
9. PROCESSING............................................................................................................................60
9.1 GENERAL .................................................................................................................60
9.2 PROCESS DESCRIPTION.......................................................................................60
LIST OF TABLES
LIST OF FIGURES
1. SUMMARY
The Al Masane copper-zinc-gold silver deposits are located in southwestern Saudi Arabia,
approximately 640 km southeast of Jiddah. While prospecting in 1967, Hatem El-Khalidi,
Chairman of Arabian Shield Development Company (ASDC), a USA-based corporation
rediscovered the deposits that had been mined hundreds of years ago. In 1980, a program of
3,700 m of underground access and development and 20,000 m of underground diamond
drilling was completed by Watts, Griffis and McOuat Limited (WGM). Between 1982 and
1987, considerable infill diamond drilling was carried out which expanded the reserves.
A 44 km² mining licence to develop the deposit was granted to ASDC by the Saudi Arabian
Government in 1993.
In 1993, ASDC retained WGM to update its 1982 feasibility study on the Al Masane deposits. A
review of the 1982 equipment selection and process flowsheet indicated that new technology
developed during the past ten years could be used to reduce the capital costs and improve the
metallurgical recoveries. In particular, the use of semi-autogenous grinding (SAG) to reduce the
capital cost of the grinding section and developments in reagents to improve metal recoveries
were believed to hold the greatest potential for improving the economics of the project.
The site is approximately 1,600 m above sea level on the Asir plateau in a temperate area of the
Kingdom. Rainfall is approximately 102 mm per year but the high evaporation rate causes the
area to be arid.
The Kingdom of Saudi Arabia has developed very modern infrastructure, including road, air,
port, power distribution and communications systems. Since the development boom of the 1970s
and 1980s, a well-developed construction industry is in place within the country. This will be
utilized to the greatest extent possible. Low power and labour costs and access to an operating
deep water port enhance the project economics.
The principal
sulphide minerals in all of the zones are pyrite, sphalerite, and chalcopyrite. The precious metals
occur chiefly in tetrahedrite and as tellurides and electrum.
Diluted, mineable and proven and probable reserves of 7,212,000 tonnes grading 1.42% Cu,
5.31% Zn, 1.19 g Au/t and 40.20 g Ag/t have been estimated by WGM. Also, approximately
953,000 tonnes of inferred resources grading 1.16% Cu, 8.95% Zn, 1.50 g Au/t and 60.79 g Ag/t
have been outlined in the Saadah, Al Houra and Moyeath zones. Infill diamond drilling is
necessary to upgrade these resources.
A significant feature of the Al Masane deposits is that they tend to have a much greater vertical
(plunge) extent than strike length. Thorough surface prospecting in the area by ASDC led to the
discovery of the Moyeath gossan in 1980. This zone has a relatively small surface exposure but
systematic definition drilling has outlined a sizeable orebody. Similarly, some of the small
showings outside the immediate area could yield substantial tonnages of ore. WGM has assumed
in this report that an ongoing exploration program is part of the project and that this will result in
extended mine life.
In addition to the immediate potential of the three known zones, exploration of gossans and
geochemical anomalies in the general area has a high probability of discovering additional
mineable resources which will increase the life of the operation.
It is presently proposed to bring the deposit into production at a rate of 700,000 tonnes per year.
Access to the mine is by a 700 m long decline and the two drifts that were driven as part of the
original mine exploration program completed in 1982.
The ore will be mined by trackless mining equipment using either cut and fill or open stoping
methods depending on the shape and location of each orebody. Ore will be transported from the
active mining areas to the underground dump pocket by a system of orepasses and trucks
specifically designed for the transport of ore underground. From the dump pocket the ore will be
fed at a controlled rate into the underground crusher. The crushed ore will then be transported by
a conveyor system installed in the decline to a stockpile on surface.
The concentrator will be designed to treat 2,000 tonnes per day of ore on a seven day per week
schedule and produce a copper concentrate, a zinc concentrate and a dore bullion. The ore will
be ground and copper and zinc concentrates recovered by flotation. The concentrates will be
dewatered by thickening and filtering and will be loaded into highway haulage trucks for the 350
km trip to the port of Gizan. A cyanidation plant is included in the process flowsheet to recover
the precious metal values from the final tailings and zinc concentrate. Dore bullion will be
produced.
Water conservation will be implemented to minimize the water requirement for the process. The
tailings from the flotation circuit will be filtered in order to recover the water which will be
recycled to the flotation process. The filtered tailings will be transported to the disposal area by
truck.
Senior operating staff will be European or North American engineers with considerable
experience in mining operations similar to Al Masane. A program to hire as many Saudi
personnel as possible will be undertaken. Some of the supervisors and operating personnel for
both underground and in the concentrator will be Filipinos with previous experience in mining.
The semi-skilled and labourer positions will be filled by other nationalities. A training program
will be implemented to upgrade the qualifications of all workers.
All personnel will be housed in a single status camp located in close proximity to the operation.
The accommodations will be in single and twin bedded rooms with recreation and dining
facilities provided. The total operating complement will be 300 personnel. The construction
workforce is expected to peak at 225 personnel.
Water for the process and potable requirements will be provided by three wells located in the
wadi downgradient from the mine. A test program has shown that the total required water supply
of 40 m³ per hour can be provided from these wells. Low level surface dams and subsurface
impermeable dams will be installed above each well to ensure that the aquifer is recharged and
that a reliable source of water is available. There is potential for increasing the amount of water
from the wells should an extra supply of water be required. Mine water will also be used to
provide additional water for the operation.
Power will be generated on site using four 3.2 MW diesel generators to meet the site requirement,
with three units operating and one unit spare.
Concentrates will be shipped in bulk to the deep water port of Gizan on the Red Sea by covered
trucks. The concentrates will be stored in an existing shed on the dock which has a capacity for
10,000 tonnes of copper concentrates and 15,000 tonnes of zinc concentrates. This shed will be
leased from the Port Authority. Concentrates will be shipped in 7,500 to 10,000 tonne lots to
smelters in North America, Europe and Japan. Ships will be loaded using a ship loader that will
be specially constructed for the purpose.
The equipment, methods and systems planned will consist of proven technology that has been
used in presently existing operations.
Implementation of the project will be undertaken under the direction of the owner's
personnel. The mine planning and development will be undertaken directly by ASDC
personnel and the mine staff will be hired early in the project life. The engineering,
procurement and construction management (EPCM) for the concentrator and infrastructure
will be undertaken by experienced consultants. The EPCM activities will be directed by
WGM as Agent for ASDC.
The capital cost for the project is estimated at $81.3 million. The mine will annually produce
34,900 tonnes of copper concentrate containing precious metal values and 58,000 tonnes of
zinc concentrates. Concentrates will be sold to smelters for recovery of the metal content. Dore
bullion will also be produced.
We have estimated the operating cost for the mine, concentrator, and infrastructure to be
$36.86 per tonne treated in the concentrator.
We have prepared an economic analysis of the project utilizing cash flow projections. The
following assumptions have been used for the base case:
50% of the capital required will be borrowed from the Saudi Industrial Development
Bank with a service charge of 2.5% per year. This loan will be repaid in equal
instalments over the life of the project.
25% of the capital required will be borrowed from private banks at an interest rate of 5%.
This loan will be repaid out of 50% of available cash flow.
25% of the capital required will be equity invested by new Saudi shareholders.
An existing $11,000,000 loan will be repaid to the Saudi government in one payment at
the end of the mine life.
The corporate structure for the base case assumes that a new Saudi company will be formed. The
new company will be owned 50% by ASDC and 50% by the new Saudi equity investors.
Cash flows have been calculated and the results are shown from the perspective of:
the Project;
new Saudi investors; and
Arabian Shield Development Company.
Cash flows have also been calculated to show the effect of various opportunities and risks
associated with the project. The results of this financial analysis are as follows:
The greatest effect on the internal rate of return (IRR) or rate of return (ROR) is caused by
increases to revenue, while reductions in operating costs and increases in ore grade and ore
reserves have a lesser but significant impact on the cash flows.
We recommend that ASDC bring the Al Masane mine into production. The timing for such a
decision is.
2. INTRODUCTION
2.1 BACKGROUND
While prospecting in 1967, Hatem El-Khalidi rediscovered the deposits which were then
acquired by Arabian Shield Development Company (ASDC), a USA-based corporation. In
1971 an exploration licence was granted to ASDC. Geophysical and geochemical surveys,
surface and underground diamond drilling and metallurgical testing culminated in a positive
feasibility study by Watts, Griffis and McOuat Limited (WGM) in 1982. During the period
from 1982 to 1987, further drilling to expand the mineral resources and several studies
relating to the water supply for the deposits were completed.
In 1993 ASDC retained WGM to update their 1982 feasibility study on the Al Masane deposits.
During the eleven years since the original feasibility study, a number of technical advances have
been achieved that impact favourably on the project and its economics. The purpose of this
updated feasibility study has been to consolidate all aspects of the Al Masane project into one
document which will be used to support financing for the project.
For the feasibility study, WGM prepared a new ore reserve estimate; carried out additional
metallurgical tests; prepared a new mining plan; reviewed the capital and operating costs; and
carried out an economic analysis of the project.
The WGM Project Manager/Metallurgical Engineer and Mining Engineer visited the property
in January 1994 with engineers from Davy International Canada (Davy) who had been
retained to carry out the project design and prepare the capital and operating costs. The team
examined the existing mine and surface facilities at Al Masane, and the port of Gizan and its
onload/offload facilities. They also visited mining operations in Saudi Arabia and discussed
costs and availability of manpower and material with contractors.
To serve as a basis for this present study, the services of a number of well known professional
consulting firms were engaged to separately investigate and prepare reports on different aspects
of the project. These reports, their conclusions and recommendations form an integral part of this
report. These firms include:
A complete list of the materials used in this report is provided under the heading List of Materials
Available For Review at the end of this report.
Metric units are used throughout this report. Gold and silver grades in the reserves are expressed
as grams per tonne (g Au/t).
All dollar amounts quoted in this report are expressed in United States dollars (US$). The
following exchange rate was used in this report.
3.1 LOCATION
Najran is the major town in the area and is serviced by air from Jiddah and Riyadh. Access from
the town of Najran to the project site is by a paved road to Sifah which is 130 km by road from
Najran. From Sifah, access to the site is by a 20 km gravel road which will be easily upgraded to
handle the volume of traffic that will be generated by the project. There are scheduled flights
from Jiddah to Abha and Najran.
From the west, there is a paved road between Abha and Gusap and then a dirt road to the
property.
The Al Masane project is located on the eastern side of the Asir Plateau at an elevation of 1,620
m. The climate is dry, and daily temperature highs range from 26°C to 42°C throughout the year.
Mean annual precipitation is 102 mm and evaporation rates are high. Rainfall generally occurs as
short, heavy downpours over small areas which cause flash floods and rapid runoff during March
and April.
The Kingdom of Saudi Arabia has modern power, road, port, airline and communications
systems. Since the development boom of the 1970s and 1980s, a well-developed construction
industry has been established within the country. This will be utilized during the construction
and operating phases of the Al Masane project.
In 1971, the Saudi Arabian Government awarded an exploration licence for the Al Masane
area to ASDC and National Mining Company (NMC), a Saudi Arabian company. Each
company had a 50% interest in the exploration licence until April 1992 when NMC assigned
its rights and obligations to the exploration licence to ASDC. To finance the development of
the Project, ASDC and NMC jointly obtained an interest free loan of $11 million from the
Saudi Arabian Ministry of Finance and National Economy in 1979. This loan was to be
repaid in ten equal annual instalments.
On May 22, 1993 Royal Decree No. 17/M was issued granting ASDC a 44 km² mining lease for
the Al Masane project. The mining lease is valid for a period of 30 years and then can
be renewed for another period of 20 years, in accordance with Article 20 of the Saudi
Arabian Mining Code. An amendment was made in the loan agreement with the Saudi Arabian
Government which stipulates that when the profitability of the project is demonstrated, a Saudi
public stock company will be formed in which ASDC will own 50% of the stock. The other 50%
will be available for public subscription to Saudi citizens. The Saudi company will then assume
ownership of the mining lease.
Exploration in Saudi Arabia is governed by reconnaissance permits, which are preliminary, non-
exclusive documents, and exploration licences which confer exclusive rights and enable more
detailed work to be undertaken in a specific area.
An exploration licence may be very large, up to 10,000 km², is usually granted for up to five
years and is renewable for a further four years. The licensee has the exclusive rights to explore
within the area and the exclusive right to obtain a mining lease.
Exploitation and processing are governed by mining leases; treatment plant and transportation
leases; small mine permits; quarry permits (whether raw materials leases or building
materials permits); and materials permits. With the exception of materials permits, these
documents confer exclusive rights to the holder.
A mining lease is restricted to a maximum area of 50 km² and is issued initially for a period of up
to 30 years, renewable for a further 20 years. The licensee has the exclusive right to produce and
exploit specified minerals in the lease area. A maximum surface rental of SR 10,000 per square
kilometre per year is payable, together with income tax or a share of profits.
Non-Saudi companies, such as ASDC, are subject to income tax at progressive rates up to 45%,
but there is a five year exemption starting from the date of the first sale of products, or from the
beginning of the fourth year from the issue of the lease. Alternatively, the document holder may
enter into an agreement with the Ministry to pay an agreed percentage of net profits. This may
vary from 10% to 50%, according to the proportion of Saudi equity in the enterprise.
4. HISTORY
Mining has been traced back in this area of Precambrian rocks for over 1,200 years when small
scale copper recovery operations were carried out.
The gossans and ancient workings at Al Masane were rediscovered by Hatem El Khalidi in 1967.
In 1971 an exploration licence was granted to ASDC and NMC, and exploration programs
including geological mapping, geochemical sampling, geophysical surveys and drilling were
initiated.
Various regional investigations of the Al Masane area have been carried out by the United
States Geological Survey (USGS) mission. The first systematic mapping (1:500,000 scale)
was by Brown and Jackson who published the Geologic Map of the Asir Quadrangle in 1959
and Greenwood (1980) carried out reconnaissance (1:100,000 scale) mapping in 1974 of the
Wadi Malahah quadrangle, which includes Al Masane. Conway (1984) undertook geologic
mapping of the area at 1:20,000 scale in 1976.
By September 1980, a permanent exploration camp including water supply and power plant
had been established. A program of 3,700 m of underground access and development using
trackless mining equipment and 20,000 m of underground diamond drilling were completed
by WGM. Bulk metallurgical samples were taken from underground and pilot plant testwork
was done at the Colorado School of Mines Research Institute (CSMRI) in the United States to
confirm the laboratory testwork completed previously by Lakefield Research (Lakefield) in
Canada on the drill core. The results from this underground program were incorporated into a
positive feasibility study issued in 1982 recommending development of the resource. This
phase of the exploration program was primarily financed by an $11 million interest-free loan
from the Saudi Arabian government which was to be repaid in ten equal instalments starting
December 1984. ASDC is currently attempting to reschedule payments to begin after the
commencement of commercial mining activities.
Continued surface prospecting in the immediate area by ASDC led to the discovery of the
Moyeath zone in late 1980. Although the surface expression of the gossan was small,
preliminary diamond drilling indicated a significant massive sulphide deposit at depth.
During the period from 1982 to 1987, infill diamond drilling was carried out on the Al Houra and
Moyeath deposits which expanded the ore reserves. A number of studies relating to the water
supply for the project were completed. A licence to develop the deposits was issued in May
1993.
The Al Masane massive sulphide deposits are located in the southern part of the Arabian Shield.
They occur within a marginal arc complex of volcanic, sedimentary, and intrusive rocks,
classified by the USGS as the Malahah Belt, belonging to the Upper Proterozoic Halaban Group
(Figure 2).
The volcanic rocks consist of andesitic to basaltic tuff, breccia, and pillow lava, and dacitic to
rhyolitic crystal tuff and porphyritic flows, all of which outcrop in four north-trending belts
separated by sedimentary sequences. The sedimentary rocks consist of metamorphosed graphitic
shales and mudstones, and volcaniclastic greywacke, mudstone, siltstone and conglomerate. The
layered rocks trend north-northwest, dip 75° to 85° west, and are deformed into upright to overturned
isoclinal folds with shallowly plunging axes. The region is cut by numerous minor and major
strike faults, parallel and subparallel to the trend of the layered rocks.
The Precambrian layered and intrusive rocks are unconformably overlain by the Cambro-
Ordovician Wajid Sandstone, which now occurs as mesa-type remnants on the higher mountain
tops.
Three mineralized zones with mineable reserves, the Saadah, Al Houra and Moyeath, have been
outlined by diamond drilling (Figure 3). The main ore deposits, Saadah and Al Houra, occur in a
volcanic sequence that consists of two mafic-felsic sequences with interbedded exhalative cherts
and metasedimentary rocks. Pillow facings, basal conglomerates, and other features indicate that
the stratigraphic facing is to the east and that the section is overturned.
Andesitic tuff and agglomerate underlie the Al Houra zone. A second, but much thinner mafic
unit consisting of pillow lava and massive andesite occurs between Al Houra and Saadah
deposits. The Saadah deposit is 1,200 m to the north and 150 m stratigraphically above the Al
Houra zone.
Each of the mafic units is immediately overlain by rhyolite porphyry and cherty rhyolite. The
exhalative facies and related mineralization occur in close proximity to the cherty rhyolite, which
forms at least two domes.
Both mineral deposits have the same association of massive sulphides, chert, lenses of massive
pale-green talc, chloritite, and dolomite breccia. Thin beds of black shale and epiclastic
sandstone are present locally.
Deformation within the volcanic group appears to be limited to local bulging and crenulation in
the softer schistose horizons, although shales interbedded with the volcanics are very intensely
folded.
The Moyeath deposit, 700 m east of Al Houra, has a west stratigraphic facing and is located
along an angular unconformity with underlying felsic volcanics and shales. The strike, average
dip and general plunge of the Moyeath mineralization is the same as that of the other two
deposits. All zones strike north-northwest, dip approximately 70° to the west and appear to
plunge at 45° to the northwest.
Diabase sills up to 30 m thick intrude the Houra and Moyeath zones. East-west trending, Tertiary
basalt dikes crosscut all of the rock units.
5.3 MINERALIZATION
The Saadah and Al Houra zones were largely defined in the underground drilling program in
1979-81 and constitute the bulk of the proven and probable ore reserves. The Moyeath
deposit was discovered after the completion of underground development in 1980 and has only
been explored by more widely-spaced surface drilling.
The Saadah zone occurs stratigraphically above and 1,200 m to the north of the Houra zone and
is made up of three massive sulphide horizons overlying and flanking a dome of cherty rhyolite
(Figure 4). The three zones are known as: New Saadah, Middle Saadah and Old Saadah.
New Saadah is a large lens composed of several massive and semi-massive sulphide layers
interbedded with talc-carbonate-sulphide layers on the south slope of the rhyolite dome. The
Middle Saadah is a small lens of bedded massive sulphides in the tuffs and feldspathic cherts that
overlies the New Saadah. The Old Saadah lens overlies the cherty rhyolite dome. Prominent talc
and chloritite zones underlie this lens and several altered porphyritic rhyolite flows interfinger
with the sulphides.
The Al Houra zone consists of two groups of sulphide beds and lenses known as North Al Houra
and South Al Houra. North Al Houra consists, for the main part, of one tabular body of bedded
sulphides, intercalated with black shale and bedded talc (Figure 5). A thin persistent bed of
dolomite breccia is the proximal equivalent of the sulphide lens. Part of the zone has been split
along strike by a diabase sill. The South Al Houra is located in the same stratigraphic horizon,
200 m to the south, and consists of several lenses of ore grade massive sulphides and low grade
sulphide beds associated with a thick talc unit.
The Moyeath sulphide zone consists of high-grade sulphide beds interlayered with black shale
and disseminated sulphides in carbonate breccia (Figure 6).
The principal sulphide minerals in all of the zones are pyrite, sphalerite, and chalcopyrite. Small
amounts of galena, arsenopyrite, and tetrahedrite are present, and also traces of electrum and the
tellurides hessite and petzite. The precious metals occur chiefly in tetrahedrite and as tellurides
and electrum. Gangue minerals are talc, chlorite, dolomite, and quartz.
If the stratigraphic section is overturned (as observations at the minesite indicate), the Old Saadah
lens shows the classic zoning of many volcanogenic deposits: a copper-rich lower portion and a
zinc-rich top. Relative to the Saadah deposit, the Houra zone is higher in silver and gold, and the
Moyeath mineralization is higher in silver and zinc.
6.1 GENERAL
After an in-depth assessment of the dilution and extraction rates associated with several mining
methods, WGM has selected the cut-and-fill methods for the portion of the Saadah zone beneath
Wadi Saadah and blasthole open stoping methods in all other areas of the mine. The mineable,
proved and probable ore reserves at the Al Masane project are summarized in Table 1.
TABLE 1
SUMMARY OF MINEABLE PROVED AND PROBABLE ORE RESERVES

An extraction rate of …% was used for the Saadah zone and 88% for Al Houra and Moyeath zones.
6.2 DEFINITIONS
WGM has used the definitions as outlined in the Australasian Code for Reporting Mineral
Resources and Reserves. These definitions are as follows:
In defining a Mineral Resource, the ..[Geologist].. will only take into consideration
geoscientific data. In reporting a Mineral Resource, there is a clear implication that
there are reasonable prospects for eventual economic exploitation.
Mineral Resource estimates are not precise calculations, being dependent on the
interpretation of limited information on the location, shape and continuity of the
occurrence and on the available sampling results. Reporting of tonnage/volume
and grade figures should reflect the order of accuracy of the estimate by rounding
off to appropriately significant figures and, in the case of Inferred Mineral
Resources, by qualification with terms such as "approximately".
The term Inferred Mineral Resource means a Mineral Resource inferred from
geoscientific evidence, drill holes, underground openings or other sampling
procedures where the lack of data is such that continuity cannot be predicted
with confidence and where geoscientific data may not be known with a
reasonable level of confidence.
realistically assumed at the time of reporting. Ore reserves are subdivided into:
The term Probable Ore Reserves means Ore Reserves stated in terms of
mineable tonnes or volumes and grades where the corresponding Identified
Mineral Resource has been defined by drilling, sampling or excavation
(including extensions beyond actual openings and drill holes), and where the
geological factors that control the orebody are known with sufficient confidence
that the Mineral Resource is categorized as "Indicated".
The term Proved Ore Reserves means Ore Reserves stated in terms of mineable
tonnes or volumes and grades in which the corresponding Identified Mineral
Resource has been defined in three dimensions by excavation or drilling
(including minor extensions beyond actual openings and drill holes), and where
the geological factors that limit the orebody are known with sufficient
confidence that the Mineral Resource is categorized as "Measured".
The exploration drift in the Saadah area, which was used for underground drilling, is located in
the hanging wall of the mineralization at a depth of 105 m below the surface outcrop of the zone.
Most of the mineralization was drilled from the drift (and crosscuts) in a series of vertical fan
patterns at 60 m intervals. Within each fan, the vertical spacing of the ore intercepts was planned
to be 30 m, but since the orebody comprises several zones and the holes diverge, the average
vertical intercept interval for each zone (progressively further from the drift) was 27 m for New
Saadah, 35 m for Middle Saadah and 36 m for Old Saadah. The zone was drilled to an average
height of 75 m above and 200 m below the exploration drift level. Two additional holes were
drilled from surface to define the upper part of the Old Saadah zone, which could not be reached
from underground.
One crosscut was also driven across the New and Old Saadah Zones. The crosscut indicated that
both zones have smooth contacts and generally good mining characteristics with the exception of
some soft ground conditions due to the presence of talc.
In the Al Houra area, the exploration drift is located in the footwall of the mineralized zone which
extends to surface, from 170 to 190 m above the drift level. The zone was drilled at section
intervals of 60 m and at several intermediate sections in the more geologically complex part at the
south end of the deposit. The holes intercepted the zone to an average height of 75 m above the
drift. The upper part of the zone between the underground drilling and surface was estimated
from 35 surface drill holes, 10 of which were drilled during the original exploration phase in the
1970s and 25 as infill drilling after the completion of underground development. Two crosscuts
were driven across the mineralization in the Al Houra zone which confirmed the geological
interpretation used in this somewhat more complex deposit. The mineralization is
stratigraphically controlled and geological continuity is reasonably well assured.
The Moyeath deposit was drilled from surface only. Because of the rugged topography, which
limited drill sites and precluded systematic detailing of the zone, the drillholes are more widely
and irregularly spaced than the underground work in the other two zones.
A total of 264 holes were drilled through and adjacent to the three mineralized zones, 97 from
surface and 167 from underground. One hundred and thirty-one holes which met the reserve
grade and width criteria were used in the reserve estimate: 50 at Saadah, 55 at Al Houra and 17 at
Moyeath. A substantial number of the remaining holes were exploratory in nature and were used
to test the areas between deposits on a regular pattern basis.
All surface drilling was done from the footwall side of the various deposits because of steeply
rising topography over the hangingwall. The resultant low angle intercepts were interpreted
conservatively using background experience gained from the underground program.
Approximately 30 of the holes were BQ size (core diameter 36.4 mm) and the balance were of
AQ size (core diameter 26.9 mm). Core recovery was excellent. Hole locations were surveyed,
and bearings and dips were measured with a Tropari instrument at intervals of approximately
70 m.
All mineralized core was sampled. Core was split longitudinally using a diamond saw, and
representative pieces were used to determine specific gravity by weighing specimens and
measuring their volumes by displacement of water. Average specific gravities were 3.7 for the
Saadah zone, 3.5 for the Al Houra zone and 3.2 for the Moyeath zone.
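The specific-gravity determination described above (weighing a specimen and measuring the volume of water it displaces) is a simple mass/volume calculation. A minimal sketch, with a hypothetical specimen chosen to match the Saadah average:

```python
def specific_gravity(mass_g: float, displaced_volume_cm3: float) -> float:
    """Specific gravity from specimen mass and water displacement.

    Water has a density of ~1 g/cm^3, so the displaced volume in cm^3
    equals the mass of the displaced water in grams.
    """
    return mass_g / displaced_volume_cm3

# Hypothetical specimen: 370 g of core displacing 100 cm^3 of water
print(specific_gravity(370.0, 100.0))  # 3.7, the Saadah zone average
```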
The split core for assay was crushed to minus 6 mm and riffled to obtain a 200 gram sample,
which was sent for analysis to Lakefield. The balance of the crushed sample was retained at the
minesite.
Samples from ore zones, wallrock, and zones of strong mineralization irrespective of location
were taken in lengths of 2 m (average) and assayed; copper and zinc contents were determined
by X-ray fluorescence analysis, and precious metals by fire assay. Samples of weakly
mineralized rock from outside defined ore zones were treated as geochemical samples, and were
cut in 5-m lengths (average), and analyzed by atomic absorption methods.
Assaying was checked routinely by resubmitting 5% of all samples for re-assay. Two additional
and more specific tests were run, the first to test that the mine was sending a representative
sample, and the second to test one laboratory against another. In the first case, crushed rejects
stored at the mine and representing 20 individual samples were split a second time, renumbered
and submitted for re-assay. Most analyses differed by less than 10%. The averages of the 20
samples were very close, being 1.99% versus 1.96% for copper and 5.73% versus 5.89% for zinc.
For the second test, the pulps from 31 ore grade samples analyzed by Lakefield were
re-analyzed by Bondar-Clegg & Company Ltd. (Bondar-Clegg) in Ottawa. The check
analyses by Bondar-Clegg were 7.3% lower in copper, 0.5% higher in zinc and 6.5% higher
in silver. The average gold content was the same.
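The duplicate-assay program described above reduces to comparing paired original and repeat analyses, both pair by pair and as overall means. A minimal sketch, with hypothetical copper pairs (the actual 20-sample data set is not reproduced in the report):

```python
def check_assays(original, repeat):
    """Compare paired assays: per-pair relative difference (%) and the
    means of the two series, as in the mine's duplicate program."""
    rel_diffs = [abs(a - b) / a * 100 for a, b in zip(original, repeat)]
    mean_orig = sum(original) / len(original)
    mean_rep = sum(repeat) / len(repeat)
    return rel_diffs, mean_orig, mean_rep

# Hypothetical duplicate copper assays (% Cu)
orig = [2.10, 1.85, 2.02]
dup = [2.05, 1.90, 1.95]
diffs, m1, m2 = check_assays(orig, dup)
print([round(d, 1) for d in diffs], round(m1, 2), round(m2, 2))
```

Most pairs here differ by well under 10%, the acceptance pattern the report describes for its own duplicates.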
The Al Masane resources have been estimated from diamond drill cross sections at intervals of
60 m except in the South Al Houra where several 30 m intervals were used. The resource
outlines were drawn to conform with the geology and the various zones were subdivided into
blocks determined by the area of influence of each hole. The minimum block width used for
resource estimation was 2 m and narrower intersections were averaged with an appropriate
amount of wallrock to produce a minimum 2 m width.
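Averaging a narrow intersection with wallrock up to the 2 m minimum is a length-weighted dilution. A hedged sketch (the zero wallrock grade is an assumption for illustration; in practice the wallrock may carry some grade):

```python
def dilute_to_minimum_width(width_m, grade, min_width_m=2.0, wall_grade=0.0):
    """Length-weight a narrow intercept with wallrock up to a minimum
    mining width; intercepts already at or above the minimum pass through."""
    if width_m >= min_width_m:
        return width_m, grade
    wall_m = min_width_m - width_m
    diluted_grade = (width_m * grade + wall_m * wall_grade) / min_width_m
    return min_width_m, diluted_grade

# A 1.5 m intercept at 4.0% Zn padded with 0.5 m of barren wallrock
print(dilute_to_minimum_width(1.5, 4.0))  # (2.0, 3.0)
```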
Resources were considered to be in the measured category where the geology was clearly defined
by mining and/or drilling and the mineralized lens was intercepted by a minimum of three holes
on two sections not greater than 60 m apart with hole spacing on section of approximately 30 m
(60 m x 30 m blocks). A few larger dip projections were allowed within the interior of the
Saadah zones where the hole spacing on section exceeded 30 m, but the volumes involved were
small and detailed subdivision and categorization was not justified.
Along the perimeter of the mineralized zones, where adjacent holes were below cutoff grade or
too far away to conform with a regular pattern, the area of influence was taken to be half the
distance to the nearest hole on section or 30 m, whichever was smaller.
The average grade of a drillhole intercept was calculated by weighting the individual assays by
length and by specific gravity. A net smelter return of approximately $30 per tonne at recoveries
and metal prices specified in the economic analysis was used as a cutoff grade. The tonnage
represented by a drillhole was determined from the sectional area of each ore block, the section
spacing, and the average specific gravity calculated from the sample results.
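The intercept-averaging and block-tonnage arithmetic described above can be sketched as follows. The sample values are hypothetical; the report's actual block geometry and assay data are not reproduced here:

```python
def intercept_grade(samples):
    """Average grade of a drillhole intercept, weighting each assay by
    sample length and specific gravity, as described in the text.

    samples: list of (length_m, specific_gravity, grade) tuples.
    """
    weights = [length * sg for length, sg, _ in samples]
    weighted_sum = sum(w * grade for w, (_, _, grade) in zip(weights, samples))
    return weighted_sum / sum(weights)

def block_tonnes(section_area_m2, section_spacing_m, avg_sg):
    """Block tonnage: sectional area x section spacing x specific gravity."""
    return section_area_m2 * section_spacing_m * avg_sg

# Hypothetical intercept: (length m, SG, % Cu)
samples = [(2.0, 3.7, 1.8), (2.0, 3.7, 1.6), (2.0, 3.5, 2.0)]
print(round(intercept_grade(samples), 2))
print(block_tonnes(400.0, 60.0, 3.7))  # 88800.0 tonnes
```

The same weighted average, extended over all metals with their recoveries and prices, would feed the ~$30/tonne net smelter return cutoff the report applies.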
Resources in the indicated category differ from measured resources in that the vertical spacing on
section was increased to 60 m and only two holes were required to define a resource body. The
area of influence was 30 m in each direction vertically and 30 m in each direction along strike (60
m x 60 m block).
On sections where the measured resource was limited to 15 m vertically at the edge of the drilled
area or between widely spaced holes, the mineralized zones were extended from the 15 m limit to
30 m or halfway to the next drillhole, whichever was less, and the projected zone was classified
as indicated.
Resources in the inferred category were estimated by extending the measured and/or indicated
blocks an additional 30 m where trends indicated potential extensions of mineralization exist. At
Moyeath, lateral extensions were increased to 40 m between probable blocks where the trend was
well defined and the drill pattern was very irregular due to hole deviation. Single intercepts
without proven continuity were classified as inferred resources assuming a maximum 30 m
projection as for an indicated resource block.
In 1994, WGM reviewed all diamond drilling to date and prepared a resource estimate for the Al
Masane project (Tables 2 and 3).
TABLE 2
SUMMARY OF MEASURED AND INDICATED RESOURCES
AL MASANE PROJECT
Cu Zn Au Ag
Zone Tonnes (%) (%) (g/t) (g/t)
Measured Resources
Saadah 4,317,000 1.74 4.95 1.05 29.52
Al Houra 1,881,000 1.48 5.77 1.71 58.56
Total Measured 6,198,000 1.66 5.20 1.25 38.33
Indicated Resources
Saadah 293,000 1.84 5.25 1.07 33.69
Moyeath 864,000 1.01 10.26 1.48 74.58
Al Houra 555,000 1.12 5.42 1.58 54.21
Total Indicated 1,712,000 1.19 7.83 1.44 60.98
TABLE 3
COMBINED MEASURED AND INDICATED RESOURCES
AL MASANE PROJECT
Cu Zn Au Ag
Zone Tonnes (%) (%) (g/t) (g/t)
Inferred resources have been estimated only for those areas where the mineralized trends have
not been fully explored and no major depth projections have been made along the plunge of the
deposits. The majority of the resources are adjacent to areas within the mining plan and are
readily accessible for mining. A total of 952,560 tonnes has been estimated as shown in Table 4.
Maximum projections of 30 m beyond drill holes were used at Saadah and Al Houra and 40 m
beyond drill holes at Moyeath. These projections were only made along established trends and
confirmed to be reasonable by using vertical projections of each zone. It is believed that the
majority of the inferred resources will be confirmed by further drilling. This is particularly true
of the Moyeath deposit where clusters of drill holes obtained excellent results but the intervening
ground could not be systematically explored due to hole deviation at depth.
Although all zones are open down plunge, no allowance has been made for this potential in the
inferred category. Projections beyond the zones defined by drilling are discussed in a later
section under Exploration Potential.
TABLE 4
INFERRED RESOURCES
AL MASANE PROJECT
Cu Zn Au Ag
Zone Tonnes (%) (%) (g/t) (g/t)
In the Old Saadah zone, high copper values occur at contiguous Sections 5055N and 5110N. The
average copper grade there is about 2.9% Cu, or twice the average copper grade for the
orebodies.
In the Al Houra zone, precious metal values exceeding 3 g Au/t and 100 g Ag/t occur at Sections
3555N and 3615N. The high values from these two sections increase the overall gold and silver
grade of the Houra zone considerably and give this zone the highest precious metal content of the
three deposits.
The Moyeath zone has the highest zinc content, lower copper and a relatively high gold and silver
content. It is most comparable to the Houra deposit.
The combined Saadah ore zones extend 700 m along strike from 4485N to 5185N and
approximately 290 m vertically from surface at 1,620 m to the deepest mineralization, that of the
New and Middle Saadah, at elevation 1,330 m. The volume of resources per vertical metre is
roughly constant throughout the first 190 m from surface to elevation 1,430 m, which is 85 m
below the current exploration level and also the bottom of the Old Saadah zone.
Approximately 3,971,000 tonnes, or 92% of the total measured and indicated Saadah resources,
occur in the surface to 1430 m elevation interval. Within the remaining 100 m of depth, between
elevations 1430 m and 1330 m, the combined New and Middle Saadah tonnage is 350,000 tonnes,
or 8% of the total.
If the narrower south end of the New Saadah is excluded, 87% of the resources occur within a
strike length of 360 m and depth of 190 m. Maximum zone widths are up to 25 m.
The grade of the mineralization changes laterally, as explained earlier. It does not change
significantly in composition or character with depth. Given normal operations planning, mine
production should provide an ore supply of relatively constant composition over the life of the
operation.
The measured and indicated resources at Al Houra occur along a 900 m strike-length, extending
from Section 3045N to Section 3945N. The zone extends vertically for 270 m from the
unweathered upper extent at elevation 1700 m to the bottom of the zone at about 1430 m.
Approximately 950,000 tonnes of ore (45% of the zone total) occur as a single block 240 m long
between Section 3585 and 3825 between surface and the 1430 m elevation. Widths range from
2 m to 14.5 m and average about 4.5 m.
The Moyeath zone has a very long plunge extent of 700 m, at an angle of 45°, down to the 1150 m
elevation. The strike length averages about 200 m and the maximum thickness of up to 14 m
occurs near the bottom of the defined part of the zone.
After a detailed review of the ore zones, dilution and recovery rates based on the selected mining
methods were applied by WGM to the measured and indicated resources. WGM's diluted,
mineable proved and probable ore reserve estimate is shown on Table 5.
In the Saadah zone, where mining will incorporate both cut and fill and open stoping methods,
we have determined that ore recovery will be 80% of the measured and indicated resources. Five
percent dilution at zero grade has been used to reflect the cut and fill mining method used in this
orebody. For the Al Houra and Moyeath zones, where open stoping mining methods will be
used, recovery of the measured and indicated resources will be 88% and a 15% dilution at zero
grade has been used. In practice, more ore may be recoverable. The decision to exploit more
difficult areas, or to develop narrow mining widths, will be based on local grades, metal prices
and other aspects of the economics of the operation at that time.
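The conversion from measured and indicated resources to mineable reserves described above (a recovery factor, then dilution at zero grade) can be sketched as follows. The resource tonnage and grade below are hypothetical; the recovery and dilution factors are those stated in the text for the Saadah zone:

```python
def mineable_reserve(resource_t, grade, recovery, dilution):
    """Apply mining recovery, then dilution at zero grade.

    recovery: fraction of resource tonnes recovered (e.g. 0.80)
    dilution: barren tonnes added as a fraction of recovered ore;
              adding zero-grade material cuts grade by 1/(1 + dilution).
    """
    recovered_t = resource_t * recovery
    diluted_t = recovered_t * (1.0 + dilution)
    diluted_grade = grade / (1.0 + dilution)
    return diluted_t, diluted_grade

# Saadah-style factors from the text: 80% recovery, 5% dilution at zero grade
t, g = mineable_reserve(1_000_000, 2.00, 0.80, 0.05)
print(round(t), round(g, 2))  # 840000 1.9
```

For Al Houra and Moyeath the same function would be called with `recovery=0.88` and `dilution=0.15`.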
TABLE 5
SUMMARY OF MINEABLE PROVED AND PROBABLE ORE RESERVES
AL MASANE PROJECT
Note: An ore recovery of 80% has been used for the Saadah zone and 88% for the Al Houra and Moyeath
zones.
The exploration potential can be addressed at two levels: (1) the mine area and the Masane
volcanic belt; and (2) areas outside the Masane belt. The immediate mine area was explored by
diamond drilling during the underground program. This work consisted mainly of horizontal
holes drilled from the drifts and a few deep holes drilled from crosscuts. Potential for additional
mineralization was recognized in the Al Houra zone, which still persists at 100 m below the
exploration level, and to the north (down plunge) of the Saadah zone. Based on the work to date,
additional underground development is required before meaningful exploration can be continued
in either of these areas.
Continued surface prospecting in the immediate mine area by ASDC led to the discovery of the
Moyeath gossan in late 1980. The gossan was small, without any ancient mine workings, and not
easily recognized. Diamond drilling has since shown this to be a significant massive
sulphide deposit which is open down-dip. Other gossans and geochemical anomalies in the area
require further investigation.
Gossans showing anomalous values in lead and zinc are reported by the USGS from an area
south of Wadi Shann which are as yet uninvestigated. Ancient copper workings and gossans are
known 5 km north of Al Masane at Bedoua, and 15 km to the south at Rehab. These occurrences,
plus the continuation of the favourable geologic setting, indicate that additional exploration on
the Al Masane belt should result in new discoveries.
The orebodies at Al Masane and the associated volcanic rocks are comparable to the lithologic
sequences of volcanogenic copper-zinc sulphide districts in other parts of the world. Such
districts commonly consist of a number of deposits variably distributed over areas of several
hundred square kilometres and within stratigraphic intervals of several kilometres of volcanic
deposition. By comparison, the Al Masane area - with favourable volcanic sequences over an
area of 900 km2 - has the dimensions of a mineral district, and is a vast area that as yet has been
only partially prospected.
A significant feature of the Al Masane ore deposits is that they tend to have a much greater
vertical (plunge) extent than strike length. A relatively small exposure such as the Moyeath zone
has been developed into a sizeable orebody by thorough and systematic definition drilling.
Similarly, some of the small known showings outside the immediate mine area could yield
substantial tonnages of new ore.
The principal prospects outside the Al Masane belt are a large copper geochemical anomaly
associated with gold-bearing quartz veins near Jabal Guyan, and a thin massive sulphide horizon
intercepted by a USGS drillhole at Dhahar. The most promising area known to date is at Talaa,
20 km south of Al Masane, where ASDC has intercepted copper-zinc mineralization in three
exploratory drillholes and the favourable horizon has been traced for at least 6 km by
geochemical sampling.
A surface exploration program will be carried out concurrently with the mining operation. An
annual budget of $250,000 has been proposed for the ongoing exploration program and funds
have been included in the economic analysis. Additional diamond drilling within the mining
lease area may possibly locate richer ore that could be used to increase the profitability of the
existing operation. Also, the entire volcanic area around Al Masane will be investigated,
extending from the copper showings at Dahar to the west, to the Hadbah nickel deposit at Wadi
Qatan to the east. To be most effective, the exploration group should act as a self-contained unit
divorced from responsibility for any daily activities at the mine. Such a unit would consist of a
field geologist, a geophysical technician, a drill crew, and miscellaneous support staff, all of
whom would live at work camps in the areas being explored.
The company is now in possession of most of the necessary drilling equipment, and an assay
laboratory will be available at the mine to expedite the processing of samples; therefore, capital
expenditures would be small. Emphasis should be placed on geophysics and consideration
should be given to acquiring additional geophysical equipment, so that more effective or faster
alternative techniques might be used, as circumstances demand.
7. METALLURGY
7.1 GENERAL
Laboratory and pilot plant studies completed by Lakefield and the CSMRI in 1980 and 1981
established that 82% of the copper and 69% of the zinc at Al Masane could be recovered in
saleable copper and zinc concentrates.
A review of the 1982 equipment selection and flowsheets by WGM indicated that new
technology developed during the past ten years could be used to reduce the capital cost and
improve metallurgical recoveries. In particular the use of semi-autogenous grinding (SAG) to
reduce the capital cost of the grinding section and developments in reagents were believed to hold
the greatest potential for improving the economics of the project.
Preliminary tests on drill core samples were done by Lakefield in 1980. The sample used in the
majority of the testing was from the Saadah zone.
Subsequently individual bulk samples of the Saadah and the Al Houra deposits were shipped to
CSMRI for bench and pilot plant studies. A flowsheet consisting of sequential copper and zinc
flotation followed by cyanidation of the zinc tailings and zinc concentrate was developed to
optimize the copper and zinc flotation circuits and ensure high recovery of the precious metal
values.
Five new composite samples were prepared for the 1994 testwork program. The drill core and the
assay pulps from the analysis of the drill core from the program carried out in the 1980s were in
good condition and did not exhibit signs of oxidation. Accordingly the remaining half of the
sawn drill core was divided into two samples for the SAG mill testing program. The three
samples used for metallurgical and flotation evaluation were prepared from the crushed pulps
used for analysis of the drill core.
The composite sample for SAG grinding testing was selected from the "BQ" sized drill core
available in the core storage at the minesite. The "AX" core was not used as the core diameter
was too small to provide the proper size distribution required for the test program. Drill core
representative of a total cross section of the orebody including some of the wallrock was selected.
Based on these criteria, sufficient drill core to prepare only two composite samples was
available:
The composite samples for flotation testing were selected from the sample pulps remaining from
the assaying of the drill core. This material had been crushed to minus 10 mesh and had been
stored in double polyethylene bags in steel storage chests. Based on our interpretation of the
three zones in the deposit, sample pulps were selected from drill core intersections representative
of the mineralogy and metal content present in the deposits. The composite samples used for
metallurgical testing from the diverse locations across the three zones were assayed by Lakefield
as follows:
% Cu % Zn Au g/t Ag g/t
Saadah Zone 1.11 5.27 0.68 22.9
Al Houra Zone 1.27 5.81 2.01 71.0
Moyeath Zone 0.90 8.89 1.62 74.6
Although the head assays of the samples are lower than the average grade of the zone, we believe
that these samples are representative of the zones within the Al Masane deposit.
Representative samples of drill core from the Saadah and Al Houra/Moyeath zones were sent
to Hazen Research (Hazen) for standard MacPherson SAG mill grindability testwork. This
work is performed in an 18 inch diameter SAG mill and can be related to performance in
commercial scale plants. Bond rod and ball mill grindability tests were also performed in
order to assess the amenability of the Al Masane deposit to SAG milling and to establish
work indexes. The results are shown in the following table.
TABLE 6
WORK INDEX FOR SAMPLES TESTED
AL MASANE DEPOSITS
A detailed analysis of these results shows that the ore is relatively easy to grind and that a small
grinding media addition to the autogenous mill will be required for efficient grinding.
The size of the grinding mills has been selected to process 2,000 tonnes per day of the Saadah
zone which will be the major source of ore for the first four years.
A flotation testwork program consisting of 28 batch flotation tests was carried out at Lakefield on
the three composites representing the three zones at Al Masane. These tests investigated the
grind size and flotation conditions required to maximize metal recoveries and concentrate grades
from the various zones. A locked cycle test on each composite from each of the three zones was
performed to predict plant performance on each type of ore.
The initial grinding and flotation tests showed that a relatively fine grind of 80% passing 34
micrometres was required to attain satisfactory copper and zinc recoveries. The initial tests
confirmed that the talc present in the deposit did float readily in the copper circuit. Subsequent
testing showed that it could be depressed using a talc depressant called CMC (carboxy-methyl-
cellulose). For optimum effect, this reagent was added in stages and the flotation circuit for the
commercial plant was designed to allow for this staged addition of reagent. The testwork also
showed that a regrind of the copper rougher concentrate was necessary together with three
cleaning stages in order to make acceptable copper grades in the order of 25% Cu.
The zinc circuit was relatively straightforward with only two stages of cleaning being required to
attain zinc grades in excess of 54% Zn.
Following flotation testwork on each of the individual zones, an overall composite consisting of
Saadah/Al Houra/Moyeath drill core was prepared in the proportion of 60/30/10 which is the
proportion of each ore zone in the overall deposit. Two locked cycle tests were completed on the
composite and the results were as shown in Table 7.
TABLE 7
LOCKED CYCLE TESTWORK ON DEPOSIT COMPOSITE
AL MASANE PROJECT
Grade % Distribution
% Cu % Zn g Au/t g Ag/t Cu Zn Au Ag
Copper
Concentrate 25.00 7.63 15.50 511.0 84.8 5.0 46.9 41.7
Zinc
Concentrate 0.66 53.80 2.04 113.0 5.3 83.5 14.7 22.0
The copper and zinc concentrates produced under these conditions are good grade and are
relatively clean and free from impurities. Based on these locked cycle results, the metallurgical
balance for the diluted, mineable, proved and probable reserves head grade is predicted as shown
on Table 8.
TABLE 8
METALLURGICAL BALANCE
AL MASANE PROJECT
Grade % Distribution
% Cu % Zn g Au/t g Ag/t Cu Zn Au Ag
Copper
Concentrate 25.00 6.50 11.50 325.0 87.7 6.1 48.6 39.8
Final Zinc
Concentrate 0.66 53.00 0.65 35.0 3.8 82.6 4.6 7.1
Final Zinc Tail 0.14 0.70 0.25 5.50 8.4 11.3 18.1 11.5
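A metallurgical balance like Table 8 is commonly cross-checked with the standard two-product formula, which computes recovery to concentrate from the feed, concentrate and tailings assays alone. A hedged sketch (the formula is standard practice; the assay values below are hypothetical, not the Al Masane balance):

```python
def two_product_recovery(feed, conc, tail):
    """Recovery of metal to concentrate (%) from assays only:
    R = 100 * c * (f - t) / (f * (c - t))."""
    return 100.0 * conc * (feed - tail) / (feed * (conc - tail))

# Hypothetical copper assays: feed 1.5% Cu, concentrate 25% Cu, tail 0.2% Cu
print(round(two_product_recovery(1.5, 25.0, 0.2), 1))  # 87.4
```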
A preliminary test program to evaluate the economics of the cyanidation of the zinc concentrate
and final tailings in order to improve gold and silver recoveries was carried out at Lakefield. The
program consisted of cyanidation of flotation tailings and zinc concentrates produced during the
flotation testwork program under typical cyanidation conditions. Gold and silver recoveries
ranged from 50% to 77%. Leaching at lower cyanide concentrations on the composite sample
resulted in lower cyanide consumption and with precious metal recoveries similar to those at
higher cyanide concentrations. These lower cyanide consumptions are similar to those obtained
during the original CSMRI test program.
TABLE 9
SUMMARY - CYANIDATION RESULTS
AL MASANE PROJECT
7.7 DISCUSSION
The flotation testwork was carried out on sample pulps that had been stored for ten years. As
these samples may have oxidized, flotation testwork on fresh samples could yield higher
concentrate grades and recoveries for both copper and zinc. We therefore believe that the results
obtained on these old samples provide conservative estimates of the results that can be expected
on fresh ore. These results are also superior to the results obtained during the original feasibility
tests which were performed on fresh ore samples. We expect that there is potential for higher
concentrate grades and recoveries than those predicted in this study.
The Saadah, Al Houra and Moyeath zones contain considerable quantities of talc which required
higher additions of the expensive reagent CMC to depress. An alternative flowsheet was
investigated which involved pre-floating the talc mineral in order to reduce the CMC
requirement. However, initial results showed severe copper and zinc losses. Following further
investigations into the operating conditions of the talc flotation, it was determined that
small (2 kg) samples in batch flotation testing did not provide realistic results. A larger (10 kg)
test provided very promising results and a 50% reduction in reagent usage was obtained.
An assessment of actual plant operating results at the Woodlawn mine in Australia and the
Fox mine in Canada showed that the success of talc flotation could not be predicted from
laboratory results. Based on the experience in these operations, we have included a talc prefloat
in the Al Masane flowsheet with the 50% reduction in CMC addition in the concentrator
operating costs.
The mineralogy of Al Masane is unusual in that 12% of the gold and 20% of the silver reports to
the zinc concentrate while a further 40% of the gold and silver reports to the final tailings. This
represents a considerable loss of precious metal values as the only payment for gold and silver is
a credit for the precious metals contained in the copper concentrate.
To overcome this problem, we have introduced a cyanidation circuit in the process flowsheet to
recover precious metals from the zinc concentrate and final tails. This makes a positive
contribution to the economics of the project under the base case conditions assumed for the
economic valuation. Further optimization of the cyanidation conditions could improve the
economics of the precious metals circuit.
We recommend that a pilot plant test be completed on fresh representative samples from the
Saadah and Al Houra zones. The objectives of this pilot plant program will be:
to assess the full economic impact of the cyanidation circuit for precious metals recovery.
It will be necessary to dewater the mine to obtain the bulk samples from the Saadah and Al Houra
zones. The cost for the mine dewatering and pilot plant test program has been budgeted in the
Project Implementation Plan.
8. MINING
8.1 GENERAL
Ore within the Saadah, Al Houra and Moyeath zones will be mined by underground trackless
mining equipment using either cut-and-fill or open stoping methods based on the location and
width of each orebody.
The main access to the mine will be the 700 m long exploration decline driven as a part of the
1979 exploration program between surface (at 1,620 m) and the 1515 level.
Access to the Saadah and Al Houra orebodies will be by lateral development along the 1515
level. As the Moyeath ore is concentrated at a lower elevation than the other zones, it will be
accessed by an 875 m decline driven downwards at a 20% grade from the 1515 level to the 1340
level.
Local vertical access to the individual ore zones will be by ramps developed from the 1515 level,
except for Moyeath which will be developed from the base of the new access decline. As much
development as possible will be located in ore.
The mine will be designed to produce 700,000 tonnes per year, on a two shift, five days per week
basis.
Ore will be hauled by truck from all three ore zones to an underground crushing facility located at
the foot of the main access decline. The trucks will discharge through a grizzly onto a feeder
feeding a jaw crusher. The jaw crusher will dump onto a transfer belt which will transfer the
crushed ore onto the main decline conveyor. The decline conveyor will transport the ore to the
surface, where it will be transferred by a surface conveyor to the
coarse ore stockpile. Provision will be made at the transfer point between the decline and surface
conveyors for the diversion of waste rock.
Above the principal access level (1515 in the Saadah and Al Houra Zones, and 1340 level in the
Moyeath Zone) ore will be mucked by scooptrams either directly into orepasses where distances
are relatively short, or into trucks for transport to orepasses where hauls are longer. The bottom
of the orepasses will be equipped with chutes for efficient truck loading. The ore will then be
transferred to the crusher room by trucks. Where the ore lies at elevations lower than the
principal access horizon, the ore will be loaded into trucks and hauled directly to the crusher
room. A mobile rockbreaker will service the orepass and crusher room grizzlies throughout the
mine.
8.1.2 VENTILATION
Each of the ore zones will require an independent ventilation raise. In the case of the Saadah and
Al Houra zones, the raises will be installed as a part of the pre-production development program.
In the case of the Moyeath Zone, the raises will be installed after the access decline is completed.
Each raise will be equipped with exhaust fans which will extract air from the mine. Air will
enter the mine through the main decline from surface and will be distributed to each of the three
zones along the 1515 level. The air will be distributed to the working levels via the ramps, and
will finally be exhausted into the ventilation raises through ventilation raise access drives.
The mine will be pumped by means of portable high head submersible pumps capable of
pumping mine water without the removal of fine solids. Locally, headings will be dewatered by
compressed air and electrical face pumps, feeding to area pumps in each of the ore zones. The
area pumps will transfer the water to the main sumps on 1515 elevation where the main
dewatering pumps will transfer the water to surface. The existing pump column in the main
surface decline will be refurbished, and retained for emergency use.
Before the underground exploration workings were allowed to flood, reported mine water make
varied between 7 and 20 l/s. The inflow was reported to be a maximum when Wadi Saadah was
in flood, which suggests that there are fissures which connect the mine with the base of the wadi.
It is quite possible that water make will increase significantly as more ground is opened beneath
the wadi, and movements caused by mining close some fissures and open others.
Current planning is to incorporate all of the equipment purchased during the 1979 underground
exploration program into the equipment fleet for production mining. Significant amongst this
equipment are two Atlas Copco twin boom development jumbos, two electric five yard
scooptrams and four Jarco 23.6 tonne trucks. Amounts of $250,000 have been included in the
capital estimate for the refurbishment of this equipment.
8.2.1 GENERAL
The ore in the Saadah zone will be mined as two orebodies, the west (also known as the Middle
and New Saadah zones) and the east or Old Saadah zone. Mining methods will incorporate cut
and fill and open stoping methods.
As the best development of ore in the Saadah zone underlies Wadi Saadah, filled methods are
required in this zone. As the wadi is saturated at the interface of the wadi alluvials and for some
metres into the bedrock, and is prone to periodic flash flooding every year, precautions against
water inrushes will be taken.
Of the two orebodies which constitute the Saadah zone, the west orebody subcrops directly
beneath the wadi alluvials (Figure 7). It will be necessary to leave a substantial crown pillar in
the west orebody to protect the mine from flooding. The crown pillar will be recovered
at the very end of mine life. A full geotechnical program will be carried out in the pre-production
period to determine the characteristics of the crown pillar.
WGM has selected the mechanised cut and fill method for the portion of the Saadah zone that lies
beneath the wadi because it will provide good control over dilution and a maximum of safety
against inrushes from the wadi.
For this area, fill will be provided from a surface quarry, from mill tailings, and from material
recovered from the wadi alluvials, which ever material is cheapest at the time. The fill quarrying
and supply operation will be contracted. The fill will be transferred to the working places by a
fill raise located on the banks of Wadi Saadah.
Conventional mechanised cut and fill mining will be employed, entries to the orebody being
provided from a centrally placed ramp system providing access to the orebody every 15 m.
Access and sub drifts into the ore will be ramped up and down in the conventional manner to
provide access to the ore at 5 m intervals. The north end of the levels will be similarly connected
by a ramp, but driven on operating cost ahead of the production horizon. This ramp will also
provide access to the ventilation and fill raises. Figure 8 shows the layout at 1515 m elevation
and a generalized schematic is shown in Figure 9.
The ore will be drilled using uppers, and the back secured from the top of the muckpile
previously levelled by scooptrams while mucking the "swell". If it becomes necessary to mine
the orebody "flatback", a production jumbo fitted with 6 m slides will be used to drill a row of
trimmers above the holes drilled by a ring drilling machine. In this way the cheap ring drilling
will be augmented to provide a trimmed back suitable for local support. Heavy scaling will be
accomplished by the mobile rockbreaker.
After a section has been blasted and secured in this manner, production mucking will take place,
after which the section will be filled. The method is cyclic, and careful planning will be required,
in conjunction both with the other orebody in the Saadah Zone, and ore being mined from other
zones.
South of section 4695, the Saadah orebody is not beneath the wadi and therefore open stoping
methods will be used. The hanging wall rocks are talcose however, and are not likely to stand for
long periods without some support. For this reason residual pillars will be left during the mining
operation which limit the overall extraction to just over 80%.
The ore will be developed vertically by ramps and laterally by mining rooms at approximately 30
m intervals driven wherever possible in ore. The orebodies will be undercut and drawpoints
developed into the undercut as required. The ore will be drilled and blasted from the lateral
drilling rooms leaving pillars as necessary to support weak hanging walls or the extraction of
muckpile ore lenses where they occur in close proximity. Ore transfer from the drilling and
blasting horizons will take place through the stopes to the drawpoint levels. Where these are
above the principal access horizons, the muck will be transferred by orepasses and loaded into
trucks by chutes. Where coincident with or below the principal access horizon, the muck will be
loaded into trucks directly. A generalised longitudinal section is shown in Figure 10.
As operations approach the crown pillar in the east orebody, a decision point will occur to leave
the remainder of the crown pillar to be mined at the end of the mine lifetime. If leakage becomes
severe it may be necessary to install a water collection system to control water inflow. It is
important to recognize that all risk from severe flooding cannot be eliminated. The use of fill will
reduce this risk, but will not eliminate it.
The method used to remove the pillar will be open stoping.
This zone underlies normal desert terrain, without wadi complications, and open stoping methods
will be used. Pillars will be left to support the stope walls and the rocks between two ore zones
where present. As in the open stoping sections of the Saadah zone, the design recovery will be
88%, the remaining ore being left in systematically designed pillars. These pillars may not be
recoverable, but actual mining experience may indicate that a more liberal recovery factor is
possible.
The general strategy will be to ramp up and down from the existing 1515 haulage level. Open
stoping will be carried out from drilling rooms driven at appropriate vertical intervals in the ore.
From these drilling rooms the same longhole drilling machines will be used to drill between
levels, and the ore will be blasted down to a drawpoint level below. The extraction of the stopes
will be by retreating to the access on each level, and mining from the bottom of the stope to the
top. Generally speaking, the ore above 1515 level will be mined first, using ore passes to transfer
the ore to 1515 level when the stope drawpoints are above, and using direct truck haulage from
below.
The Moyeath zone lies about 600 m to the southeast of the mine decline and at a rather lower
elevation than the other orebodies. It will be accessed by a 875 m ramp driven downwards at
20% grade.
The new access ramp will be driven from the 1515 level to intersect the orebody in the vicinity of
1340 elevation and from that point to ramp up and down to the drilling levels. The majority of
the ore lies between 1280 elevation and 1590 elevation.
A 350 m ventilation raise will be installed which will break through to surface in one of the
tributary valleys to Wadi Saadah. If future drilling indicates more high grade ore in the Moyeath
deposit, it may be advantageous to exploit the deposit from an independent access ramp from
surface.
9. PROCESSING
9.1 GENERAL
Based on the results of the testwork, a process flowsheet has been developed for a process plant
with a capacity of 700,000 tonnes per annum of ore (Figure 11).
Run-of-mine ore will be delivered to the underground dump pocket by truck. Ore will be fed
from the dump pocket by a reciprocating feeder to the jaw crusher which will crush the ore to
minus 200 mm. The crushed ore will fall onto a short belt conveyor which discharges onto a belt
conveyor which will transport the ore to the surface using the existing main decline.
At the surface, the conveyor will discharge into a diversion chute which in the normal position
will direct the ore onto a conveyor belt transferring the ore to a 6,000 tonne crushed ore stockpile.
In the second position, the diversion chute will direct waste or ore to an emergency stockpile.
Ore can be reclaimed from the emergency stockpile by front end loader feeding a reclaim
conveyor. Waste will be stockpiled and used for backfill underground.
9.2.2 GRINDING
Ore will be withdrawn from the coarse ore stockpile at a rate of approximately 83 tonnes per hour
by three feeders onto a conveyor belt feeding the SAG mill. A belt weigh scale will monitor and
control the feed tonnage by adjusting the speed of the feeders.
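As a consistency check, the 83 t/h feed rate can be related to the 700,000 t/a plant capacity quoted in Section 9.1. The implied mill availability below is an inference for illustration, not a figure stated in the report.

```python
# Hedged consistency check: relate the ~83 t/h SAG feed rate to the
# 700,000 t/a plant capacity.  The availability figure is inferred,
# not stated in the report.
ANNUAL_TONNES = 700_000        # t/a, plant capacity (Section 9.1)
FEED_RATE_TPH = 83.0           # t/h, feed rate quoted above
HOURS_PER_YEAR = 365 * 24      # 8,760 h

operating_hours = ANNUAL_TONNES / FEED_RATE_TPH          # ~8,434 h
implied_availability = operating_hours / HOURS_PER_YEAR  # ~96%

print(f"operating hours required: {operating_hours:,.0f}")
print(f"implied mill availability: {implied_availability:.1%}")
```

At that rate the mill would run about 96% of the calendar year, which is at the high end for a continuous grinding circuit; in practice the instantaneous feed rate would be somewhat above the calendar-hour average.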
The initial stage of grinding will be carried out by a 5.5 m diameter by 2.4 m long SAG mill
equipped with a 670 kW motor and operated in closed circuit with a screen. The screen oversize
is returned to the SAG mill for further grinding. The screen undersize is ground to 80% minus 45
microns in a conventional ball mill-cyclone classification circuit consisting of a 3.7 m diameter
by 5.5 m long ball mill in closed circuit with five 375 mm cyclones.
Cyclone overflow at 80% minus 45 microns will flow by gravity to two conditioning tanks ahead
of the talc prefloat circuit. Reagent additions of lime to control the pH and sodium sulphite as a
zinc depressant will be made to the grinding circuit. Following conditioning with further
additives of lime and sodium sulphite, the pulp is aerated and the talc recovered in a flotation
concentrate in a bank of six 8.5 m3 cells. The talc concentrate is sent directly to final tailing.
The tailing from talc flotation is conditioned with Aerophine 3418 as copper collector and CMC
7LT as talc depressant before copper rougher flotation in twelve 8.5 m3 flotation cells. These
cells are in three banks of four cells in order to allow for the staged addition of CMC 7LT to
depress the remaining talc.
The copper rougher concentrate is reground to 80% minus 25 microns in a 2.1 m diameter by 2.7
m long ball mill operated in closed circuit with cyclones. Cyclone overflow is cleaned in three
stages to produce a 25% copper concentrate. The first cleaner tailing is scavenged in a bank of
six 4.5 m3 flotation cells prior to being combined with the copper rougher tailing and pumped to
the zinc flotation circuit.
The copper concentrate will be pumped to a 9.1 m diameter thickener and thickened to 55%
solids. The thickened concentrate will be pumped to a Larox pressure filter where it is dewatered
to 8% moisture. Filter cake is discharged onto a conveyor which transports into the loadout area
from where it is loaded into trucks for shipment to the port of Gizan.
The tailings from the copper circuit are conditioned in two stages of conditioning before flotation
in twelve 8.5 m3 flotation cells with a retention time of 15 minutes for zinc recovery. The
reagents used are lime to control the pH and copper sulphate and sodium iso-propyl xanthate to
activate and collect the zinc minerals.
The zinc rougher concentrate is cleaned in two counter current stages to produce a 53% Zn
concentrate. The zinc concentrate will be pumped to a 9.1 m diameter thickener and thickened to
60% solids. Thickened zinc concentrate is pumped to the pre-aeration and cyanidation leach
circuits for the recovery of precious metals from the concentrate.
The cyanidation circuit will consist of leach circuits for both the zinc concentrate and zinc
rougher tailing and recovery of the precious metal values by the Merrill Crowe process. The
Merrill Crowe process has been selected in order to maximize recovery of the silver content of
the ore which contributes up to 40% of the revenues from the cyanidation circuit.
In order to minimize the requirements for fresh water makeup, the flotation tailings will be
thickened and filtered and the water recycled to the flotation circuit.
The rougher tailing from the zinc flotation circuit will be pumped to a 22.9 m diameter tailings
thickener, and after thickening to 55% solids, will be dewatered on two 2.7 m diameter by 10
disc, disc filters. The filter cake will be repulped with barren solution from the cyanide circuit to
50% solids before pre-aerating and cyanidation in four mechanically agitated leach tanks.
Following a 24-hour leach time, the slurry will be pumped to the tailings belt filters for recovery
of the precious metals content from the solution.
The zinc concentrate from the zinc concentrate thickener at 60% solids will be diluted to 50%
solids prior to the pre-aeration and cyanidation leach circuits. Following a 24-hour leach time,
the zinc concentrate will be pumped to the holding tank preceding the pressure filter. The
pregnant solution from the filter will be combined with the solution from the zinc rougher tailing
belt filter prior to precious metals recovery.
Gold and silver are precipitated from the de-aerated pregnant solution with zinc dust in the
Merrill Crowe process. The precipitate is collected in a filter press and is
mixed with fluxes and smelted in an induction furnace to produce dore bullion for shipment. The
barren solution following recovery of the precious metals content is used to repulp the zinc
rougher tailing prior to cyanidation circuit leach.
Each of the copper and zinc concentrates will be pumped to one of two Larox pressure filters.
The concentrate at 8% moisture will be discharged by conveyor into a loadout area. It will be
loaded into trucks using a front end loader for shipment to the port of Gizan. A truck scale is
provided for weighing outgoing shipments. From Gizan, concentrates will travel by ship to
copper and zinc smelters in Europe and Japan.
9.2.7 TAILINGS DISPOSAL
The zinc flotation tailings will be filtered on two 80 m2 belt filters operating in parallel. The
filters have been designed to allow for a two-stage countercurrent wash needed to obtain high
pregnant solution recovery, and minimize the cyanide content of the final tailing prior to disposal.
The filter cake, containing approximately 22% moisture, is conveyed to the surge pile prior to disposal.
The filtered tailings will be impounded in an area east of the concentrator. The area is of
sufficient size for the foreseeable life of the project. Tailings will be systematically dumped from the
higher ground progressing downstream towards the proposed final dam structure. During normal
operation the earth works will consist of internal dams and dykes to contain any seepage and
divert any uncontaminated water from spring storms away from the active disposal areas. These
active areas will change as the plant production requires.
Final reclamation of the area will require the placing of local material over the tailings to prevent
erosion.
9.2.8 REAGENTS
Reagents used in the concentrator will be purchased and received in bulk wherever possible.
Lime will be received in bulk and slaked prior to being fed to the flotation circuit in a
pressurized loop. Carboxy methyl cellulose (CMC), the talc depressant, will be fed to the
flotation circuits from a pressurized loop.
Copper sulphate and sodium bisulphite will be delivered in 1,000 kg bulk bags and metered to the
circuits following mixing in an agitated tank. Aerophine 3418A and frother will be metered as
concentrated solutions while the sodium iso-propyl xanthate will be mixed and fed to the circuit
using metering pumps.
9.2.9 PROCESS CONTROL
The plant will incorporate the necessary instrumentation and control systems to provide control
of the plant. An on-stream analyzer will be provided to monitor and control the flotation
operation.
9.2.10 UTILITIES
Plant and pressure filter air will be supplied by rotary screw compressors. A separate system will
be used to provide dry, oil free air for instrumentation.
10. INFRASTRUCTURE
10.1 GENERAL
The mining and processing operations will be supported by an Administration and a Maintenance
Services department. The Administration department will be responsible for human resources,
accounting, purchasing, warehouse, site security, safety and training aspects of the operation.
The Maintenance Services department will administer the operation and maintenance of a 300
man on site camp facility including the power generation plant and water supply.
The site is readily accessible by a 20 km gravel road from the Sifah-Najran paved highway. This
access road will require upgrading to enable daily traffic both for construction materials and
concentrate shipments to the port city of Gizan. Road straightening, ditching and base
reconstruction will be required, particularly for the last half of the road into the mine site.
Materials for road construction are available locally.
Secondary access to the project site is via an unimproved gravel road along the wadi from Thar.
The road through the wadi is subject to considerable washouts and rebuilding during flood
periods and will only be used as a secondary access route.
Gates and gatehouses will be located at project boundaries from the north and south accesses.
A 300-man single status camp will be constructed, using trailer modules for sleeping
accommodation, wash facilities, recreation and kitchen (Figure 12). Separate facilities will be
provided for expatriate and Saudi employees and for other staff. An on-site medical facility will
be provided and will be staffed by a doctor and provide 24-hour nursing care.
A centrally-located services building will serve both the mine and the process plant. The mine dry
will have a capacity for 200 men. The warehouse will have space for surface and underground
equipment. Shops will include machine, welding, electrical and carpenter shops. This facility
will also include five equipment repair bays.
The project power requirements will be met by diesel generated power. The four generators will
be of 3.2 MW capacity, with three operating at one time. The power distribution within the
project boundaries will be 15 kV pole mounted power lines.
The three existing generators (3 x 900 kW) will be refurbished and utilized for the camp, related
infrastructure and emergency purposes.
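The generating capacity implied by these figures can be tallied as follows; reading "three operating at one time" as n-1 firm capacity with one unit on standby is our interpretation.

```python
# Tally installed vs. firm diesel generating capacity from the figures
# in the text (one 3.2 MW unit assumed held as standby).
MAIN_UNITS, MAIN_MW = 4, 3.2       # four main generator sets
CAMP_UNITS, CAMP_KW = 3, 900       # refurbished camp/emergency sets

installed_mw = MAIN_UNITS * MAIN_MW        # 12.8 MW installed
firm_mw = (MAIN_UNITS - 1) * MAIN_MW       # 9.6 MW with one unit down
camp_mw = CAMP_UNITS * CAMP_KW / 1000      # 2.7 MW for camp/emergency

print(f"installed {installed_mw:.1f} MW, firm {firm_mw:.1f} MW, camp {camp_mw:.1f} MW")
```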
Three 600,000 litre diesel storage tanks will be provided as well as diesel and gasoline storage.
The fenced and bermed area will have fire extinguishing equipment installed.
Water will be supplied from a system of wells as described in the following chapter. An
interconnected distribution system will be installed which includes electric pumps located at each
well head and fed from a 15 kV power line and step down transformer. A surge tank will be
located near the well head closest to the plant. A 100-mm pipeline from each well head will
carry water into the surge tank. Pumps will operate as independent supply sources and will have
the ability to operate either solely or simultaneously depending on water availability and levels
within the surge tank. A pumping station at the surge tank will supply
water 6 km up the wadi via a 150-mm pipeline to the fresh/fire water reservoir located at the
minesite. A backup diesel powered pump would supply water in an emergency situation.
Sewage from the construction/operations camps, the service complex, concentrator and mine will
discharge via pipelines and lift stations into a package sewage treatment plant. The clean outflow
will recycle into the process makeup water and will also be used for general outdoor purposes.
The assay laboratory will be located in three 3-m wide portable trailer units. Equipment will be
installed for sample preparation, assaying for base metals and fire assaying for precious metals.
Copper and zinc concentrate will be transported by covered dump trucks to the Port of Gizan on
the Red Sea. The concentrate will be dumped within an existing 40 m wide by 100 m long
storage building leased by ASDC from the Port Authority. The concentrate will be stored until
the arrival of a regularly scheduled bulk carrier. Approximately 10,000 tonnes of zinc and 15,000
tonnes of copper could be stored within this area.
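A rough check shows the quoted storage tonnages are plausible for a shed of this footprint. The pile bulk density below is an assumption for illustration; the report does not state one.

```python
# Rough check of concentrate shed capacity.  The bulk density is an
# assumed value for damp sulphide concentrate, not a report figure.
SHED_AREA_M2 = 40 * 100            # m2, 40 m x 100 m building
STORED_TONNES = 10_000 + 15_000    # t, zinc + copper concentrate
BULK_DENSITY = 2.0                 # t/m3, assumed

volume_m3 = STORED_TONNES / BULK_DENSITY     # 12,500 m3
avg_height_m = volume_m3 / SHED_AREA_M2      # ~3.1 m average pile height
print(f"{volume_m3:,.0f} m3 at ~{avg_height_m:.1f} m average pile height")
```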
Facilities at the port will consist of a front-end loader to move concentrate within the shed after
truck unloading, and to load concentrate onto the ship loading conveyor system during
concentrate shipment. The conveying system provided by ASDC will consist of a hopper
conveyor loaded by the front-end loader, a transfer belt and a ship loader, all of which are mobile.
Approximately 20 truck loads per 24 hour day will be required to transport the concentrates to
the port.
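The 20 loads per day figure implies a daily concentrate movement along the following lines; the truck payload is an assumption for illustration, not a report figure.

```python
# Illustrative daily tonnage behind the "20 truck loads per day" figure.
LOADS_PER_DAY = 20                 # from the report
PAYLOAD_T = 25.0                   # assumed covered dump-truck payload

daily_tonnes = LOADS_PER_DAY * PAYLOAD_T     # ~500 t/day to Gizan
print(f"~{daily_tonnes:.0f} t/day of concentrate")
```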
11.1 GENERAL
Since 1980, the Bureau de Recherches Geologiques et Minieres (BRGM) on behalf of the
Saudi Arabian government has conducted a number of investigations of the water supply for
the Al Masane project. Three subsurface areas were identified within the Wadi Saadah-Wadi
Hizmah valley system, downstream from the mine area, that have the potential to supply the
water requirements. These sites were further investigated by drilling test wells, installing
piezometers and performing pumping tests to assess the potential of the groundwater aquifer.
This work was followed by an assessment of the surface and groundwater resources for
ASDC by Gentle Geophysics Limited (GGL) in 1984 and an appraisal of the surface
potential and storage reservoir feasibility by Aqua Data Systems Limited (ADS) in 1986.
Both studies investigated the possibility of the construction of a dam in Wadi Saadah. One of
the major concerns was the variable nature of the rainfall and whether the water supply
impounded by the dam could be replenished on an annual basis particularly in years of below
average annual rainfall.
In 1994, Shaheen & Peaker Limited (S&P) was retained by ASDC to assess the groundwater
resources potential and hydrogeological characteristics of the underground reservoirs in areas
identified in previous studies and the potential water supplies from surface/subsurface storage
facilities and dams.
The mean annual precipitation is 102 mm, ranging from 26 mm to 260 mm. Most of the
precipitation occurs during the months of March and April. Precipitation during the remaining
months is quite variable and unreliable for water supply. Evaporation rates are high. A mean
annual pan evaporation of 3,000 mm/year has been used by S&P for this study.
The S&P investigations indicate that the groundwater reservoir in the fractured/weathered
bedrock can provide the required amount of groundwater downgradient of the project site.
Groundwater was found at the top of the rock about 5 m below ground surface in most wells,
although perennial water was noted in several locations within Wadi Saadah. Well yields were
reported to be highest in the metasedimentary rocks followed by wells drilled in gabbro. Low
water yields were found in the granite and diabase rocks.
Pumping tests conducted on the existing exploration wells in January 1994 indicated that
pumping rates of 15 m3/h are feasible. Therefore three well fields will be able to meet the
40 m3/h water requirement of the operation. Proposed production well locations are:
Additional step-drawdown and well performance testing should be undertaken to establish and
monitor optimum well yield for each well field. The existing well MAS 4W, approximately
7.5 km downgradient of the mine site, could be used as a backup production well, if required.
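The arithmetic behind the three-well-field scheme, using the figures quoted above:

```python
# Check that three well fields at the tested 15 m3/h yield cover the
# stated 40 m3/h requirement of the operation.
WELL_FIELDS = 3
YIELD_M3H = 15.0        # m3/h per field, January 1994 pumping tests
DEMAND_M3H = 40.0       # m3/h operation requirement

capacity = WELL_FIELDS * YIELD_M3H          # 45 m3/h
margin = capacity / DEMAND_M3H - 1          # 12.5% spare capacity
print(f"{capacity:.0f} m3/h available, {margin:.1%} margin over demand")
```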
The mine could also provide an additional source of water. During the 1979 underground
exploration program, reported mine water production varied between 7 and 20 l/s (0.4 to 1.2
m3/min) with maximum inflow during the flooding of Wadi Saadah. Although it is not possible to
predict the inflow into the mine, it is expected that there will be some water available from this
source.
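For reference, the reported 7 to 20 l/s inflow range converts as follows (note that 0.4 to 1.2 corresponds to cubic metres per minute, i.e. roughly 25 to 72 m3/h):

```python
# Convert the reported mine-water inflow range from litres per second
# to cubic metres per minute and per hour.
def l_per_s_to_m3(rate_l_s):
    """Return (m3/min, m3/h) for a flow given in l/s."""
    m3_min = rate_l_s * 60 / 1000
    return m3_min, m3_min * 60

for q in (7, 20):
    m3_min, m3_h = l_per_s_to_m3(q)
    print(f"{q} l/s = {m3_min:.2f} m3/min = {m3_h:.1f} m3/h")
```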
Additional reserves of water can be obtained with groundwater recharge and the use of the
alluvium as a subsurface storage reservoir for flood waters. Low level (4 m high) water retention
barriers will be constructed above each production well field to recharge the groundwater system.
The barriers would spread the flood waters across the width of the flood plain. The water
forming ponds behind the barrier could then rapidly infiltrate the alluvium. These low level
barriers would be constructed using local materials and be reinforced to minimize maintenance
(Figure 13).
An impermeable core for each barrier is necessary to prevent downgradient flow in the alluvium
and maximize recharge to the bedrock. The existing wadi flow would be diverted above the
barrier so the impact of flood water is reduced when it reaches the barrier. It is anticipated that
approximately 200,000 m3 or about one-half of the annual water requirement for the mine site
could be captured behind each barrier during the flood season. The installation of these barriers
recommended by S&P should therefore ensure a continued supply of water for the project.
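The "one-half of the annual water requirement" claim can be checked against the 40 m3/h demand figure used elsewhere in this chapter:

```python
# Check the claim that ~200,000 m3 captured behind a barrier is about
# half the annual mine-site water requirement (40 m3/h demand).
DEMAND_M3H = 40.0
annual_demand = DEMAND_M3H * 24 * 365        # 350,400 m3/a
barrier_capture = 200_000                    # m3 per barrier per season

fraction = barrier_capture / annual_demand   # ~0.57
print(f"annual demand {annual_demand:,.0f} m3; one barrier covers {fraction:.0%}")
```

The captured volume works out to about 57% of annual demand, consistent with the report's "about one-half".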
After evaluating the various hydrological parameters such as water availability, process water
required, evaporation, exfiltration and sedimentation, S&P determined that the construction of a
40 m high dam above the mine site would create a suitable size reservoir for the Al
Masane operation. The reservoir reach would be approximately 3 km long. There appears to be
sufficient rainfall to fill the reservoir assuming an average precipitation of 100 millimetres per
annum. However, over the past ten years there have been wide variances in annual precipitation
amounts at Al Masane and lower than average precipitation rates have been experienced for up to
three years. Evaporation losses are severe.
Water from the wells is of good quality and will be used to supply the potable water requirement.
Potable water will be treated by a packaged water treatment facility. A buried distribution
system from a potable water treatment plant will supply water around the project site.
11.5 DISCUSSION
Based on the information to date, we believe that a sustainable yield of 40 m3/h can be obtained
from pumping the groundwater system at about 15 m3/h in each of the three areas at 2, 3, and 4
km downgradient of Al Masane. A new production well field is recommended to be drilled at the
confluence of the Wadi Saadah-Hizmah and the Wadi Wazagh and Saadah about 3 km
downgradient of Al Masane. The three well fields should be evaluated to establish optimum
pumping rates and also to monitor impacts on local bedouin wells prior to implementing full
scale production. We recommend that low level water retention barriers be installed above each
production well field to recharge the alluvium.
We do not believe that the costs can be justified for importing water or constructing a dam to
supplement or provide the mine site with water at this time.
Environmental regulations for the mining industry in the Kingdom of Saudi Arabia are presently
being prepared. It is expected that the regulations will follow those presently implemented in
North America. Therefore, the mine, concentrator and infrastructure have been designed to meet
these North American standards.
The total area to be disturbed by all the surface facilities, excluding the water supply system but
including the tailings area is 57.6 hectares. The underground mining operation will be accessed
by an existing decline which will result in minimal surface disturbance. Only non-acid producing
waste rock will be used for surface construction activities. Any acid-producing waste rock
produced during mining operations will be used as underground fill. Any excess water from the
mine will be pumped to the concentrator and used as process water.
The processing plant has been designed to recover water from the tailings by filtration in order to
minimize water requirements. Reagents required in the process will be stored in a contained area
and mixed as required by the operator.
The filtered tailings will be drained in a stockpile area prior to being impounded in a designated
area east of the concentrator. This filtered deposition of tailings will reduce the possibility of acid
production since water availability is a requirement for oxidation of the sulphide content of the
tailings. Tailings dumping procedures will be designed to maximize evaporation of excess
moisture from the tailings in order to minimize any seepage of tailings water into the alluvium
and possibly into the groundwater. A series of temporary internal dams and ditches will be used
to collect and divert water flows away from the active disposal area during the infrequent
rainstorms. A monitoring program will be implemented to monitor the effect of the tailings
disposal on groundwater.
Appropriate areas have been allocated for maintenance activities and used oil and other
consumables will be collected and disposed in an environmentally acceptable manner. The
storage areas for oil, gasoline and reagents will be lined and bermed and enclosed within a
security fence.
The water supply for the project will be supplied from wells located in the wadi downstream from
the mine. The proposed water supply scheme will ensure that the subsurface aquifers are
recharged as completely as possible during the annual wadi flood which will result in a minimum
impact on the existing bedouin local wells.
The camp will be provided with a treated source of potable water. A sewage treatment plant will
be provided and the water reused within the camp for irrigation or reused in the process. An
incinerator will be provided for the disposal of other waste materials.
A baseline environmental audit will be performed prior to start of the operation and a continuing
environmental monitoring program will be implemented to monitor water quality, tailings
management and the in-plant environment throughout the mine life.
At the end of the mine life the site will be reclaimed. The mine accesses will be sealed, the camp
will be removed, the buildings demolished and the equipment sold. The site will be graded and
returned as closely as possible to its natural state. The tailings area will be stabilized by placing
wadi sand and boulders over the area in order to reduce the possibility of the tailings being blown
or eroded away. A dam will be constructed which will prevent any future migration of the
tailings down the wadi and this will be assessed based on the operating experience in the tailings
area.
Further information on environmental studies can be found in the report prepared by HBT Agra.
13. MARKETING
Al Masane will produce three products, zinc concentrate, copper concentrate and dore bullion.
The copper concentrate will contain valuable amounts of gold and silver.
Zinc accounts for about 47% of the net smelter return, followed by copper (30%), gold (15%) and
silver (8%).
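The revenue split quoted above sums cleanly to 100%, a useful sanity check when apportioning net smelter return between the three products:

```python
# Sanity check: the quoted net-smelter-return shares sum to 100%.
nsr_share_pct = {"zinc": 47, "copper": 30, "gold": 15, "silver": 8}
total = sum(nsr_share_pct.values())
print(total)  # 100
```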
Smelters and refineries process concentrates into metals that approach 100% purity. In general,
smelters do not buy concentrates but rather they charge a processing fee and either return the
metal to the concentrate producer or sell the metal for the producer's account on the open market.
These processing fees usually consist of a fixed amount per tonne of concentrate plus
adjustments for current metal prices, a recovery factor for each metal and any penalties for
deleterious metals in the concentrate.
These concentrates do not contain any deleterious elements from a smelting standpoint and
smelters should not charge any penalties when the concentrates are processed. The high precious
metal content of the copper concentrate makes it particularly desirable as a smelter feed. The
zinc concentrate grade is in the upper range for zinc concentrates and there will be willing buyers
on the world market.
The most recent Lakefield analysis of the two concentrates is shown below.
13.2 COPPER
Copper and gold were the first metals discovered and used by man. They are often found in the
native state and are easily worked. Artifacts of hammered copper that date back to 4500 BC have
been found among Chaldean remains.
From antiquity to the 19th century, copper's main value was related to its malleability and ease of
working, its durability and corrosion resistance and its availability. The first uses were utensils,
tools and weapons. Since the end of the 19th century, the greatest use is in the generation and
transmission of electricity for light, power and heat and in computer data communications.
Currently the electrical/electronic industries account for about 46% of copper consumption. The
remainder is divided among construction (17%), general engineering (17%), transport (11%) and
consumer and general (9%).
Current Western World production of refined copper is about 9.1 million tonnes annually of
which the major producers are the United States (25%), Western Europe (18%), Chile (14%) and
Japan (13%). This was produced from 7.7 million tonnes (contained copper) of concentrate with
the remainder coming from recycled material. The major producers of copper concentrate are
Chile (27%), the United States (23.5%) and Canada (9%).
WGM retained Brook Hunt & Associates Limited (Brook Hunt) to prepare a forecast of
copper and zinc prices as well as smelter treatment charges over the period 1996 to 2005.
Their comments have been included in the following discussion. Historical and projected
copper, zinc, gold and silver prices are shown in Figures 14, 15, 16 and 17.
Brook Hunt base their forecasts on the basic industrial economic cycle and the resulting
imbalances in supply and demand. They estimate that economic activity in the Western World
will record a cyclical peak between 1996 and 2001 with a decline in the early years of the 21st
century. They believe Western World copper consumption will reach 11.1 million tonnes with an
annual growth rate of 1.6%. Brook Hunt calculate that demand will exceed supply from 1997 to
2000 with a price cycle peak in 1999-2000.
Brook Hunt believes that copper prices will average $0.93 per pound (in 1993 dollars) over the
period 1996 to 2005.
Brook Hunt has also prepared an estimate of treatment and refining charges. Treatment charges,
while related to the availability of concentrate, also move proportionally with changes in metal
price. Brook Hunt has estimated a total treatment and refining charge of 22.3 cents per pound of
payable copper.
13.3 ZINC
Unlike copper and gold, zinc never occurs in the native state. Although zinc has been used since
ancient times (the alloy brass is formed from copper and zinc) it was only recognized as a
separate element in 1721. Zinc's outstanding chemical property is its electropositive character.
This is the basis for the use of zinc in galvanizing or coating steel products. The coating of zinc is
corroded preferentially, in this way protecting steel from corrosion.
Zinc is sold in six standard grades varying from 98.3% zinc (Prime Western) to 99.99% zinc
(Special High Grade). The major uses today are galvanizing (49%), brass (19%) and die casting
(14%). It is estimated that 80% of future growth in zinc consumption will come from
galvanizing. Galvanized sheet steel shipments for auto skins have grown from 1.0 million tonnes
in 1980 to 4.6 million tonnes in 1992 and now accounts for 90% of sheet steel used in North
American cars.
Current Western World production of refined zinc is about 5.4 million tonnes of which the major
producers are Europe (40%), Japan (13%) and Canada (12%). In 1993, zinc concentrate
production in the Western World was 5.1 million tonnes and came primarily from mines in
Canada (20%), Australia (19%), Europe (13%), Peru (12%) and the United States (10%). The
difference in refined zinc and concentrate zinc came from the smelter's concentrate inventories
which have now been depleted. Zinc metal prices are currently quite depressed ($0.44/lb), and
zinc concentrate production fell 10% worldwide in 1993. Refined
zinc production did not fall however, and current LME stockpiles of refined zinc are in excess of
1 million tonnes.
According to the Brook Hunt forecast, total Western World consumption of zinc will reach
6.5 million tonnes by 2005 which is equivalent to an annual growth rate of 1%. By the turn of the
century, 2.5 million tonnes of additional mine capacity will be required as well as 800,000 tonnes
of refined zinc capacity. Between 1996 and 1998 the Western World zinc market is forecast to be
in deficit with a peak zinc price in 1999 and average zinc price of $0.58 (1993 dollars) between
1996 and 2005. Brook Hunt forecasts an average smelter treatment charge of $206 over the
period 1996-2005.
13.4 GOLD
Gold will account for approximately 15% of net revenue for Al Masane. During 1993, the
average price of gold rose to $360 from $344 in 1992 and during June 1994 was trading in the
$380 to $390 per ounce range. During the last three years, gold consumption
(fabrication, jewellery and industry) has exceeded mine production by 20%. This would indicate
that gold should be able to maintain its current price. It should be noted that forward sales of Al
Masane gold could add to its gold revenue.
13.5 SILVER
Silver accounts for about 8% of forecast net revenue from Al Masane. As with gold, silver
consumption continues to exceed new mine production. In 1993 the shortfall was estimated at
156 million ounces or 27% of mine production. Silver consumption is being driven by increased
consumption in developing countries, particularly India, Thailand and Mexico. In those three
countries, consumption apparently increased by 34% in 1993 over 1992.
WGM contacted several smelters and trading houses on a confidential basis to gauge the interest
of the industry to a project that would begin producing copper and zinc concentrates in the final
quarter of 1996. They were provided with a copy of the most recent concentrate analyses as
estimated by Lakefield and asked for their comments about the suitability of the concentrates and
their future requirements.
The response was very enthusiastic and indicated that the demand for zinc concentrate would be
high by late 1996 and that based on the current analyses, the zinc concentrate would incur no
penalties. High zinc metal inventories dominate the current zinc market. This has produced a
low metal price and the closure of several zinc mines. Smelters have continued to produce zinc
from concentrate stockpiles but these are now depleted. This will force smelters to curtail
production which will ultimately lead to a reduction in metal stockpiles and a rise in the price of
zinc. For instance, due to a lack of concentrate feed, Asturiana in Spain cut production in
November and December of 1993 by 10,000 tonnes per month and is expected to produce only
200,000 tonnes in 1994 although it has a capacity of 325,000 tonnes.
The inclusion of precious metals makes the copper concentrate a very attractive product. As with
the zinc concentrate, the copper concentrate is very clean and should not incur any penalties.
Copper has fared better than the other base metals in the early 1990s. Copper demand has been
more robust due to the expansion of the New Industrialized Countries, the dramatic expansion of
the Chinese economy and the rebuilding of the infrastructure in the former East Germany.
Dore will be produced from the gold plant and sent to the refinery. This is a readily saleable
commodity.
13.7 RECOMMENDATIONS
Once a production decision has been made, all prospective smelters should be
contacted. The smelters will require samples of the product and assurance of project
financing before they will seriously negotiate a smelter contract.
Our financial analysis assumes that major funding will be from the Saudi Industrial
Development Fund. Serious consideration should be given to financing the capital
equipment purchases through long term concentrate sales contracts with Japanese or
other trading houses. WGM has already spoken with one Japanese firm that has
expressed considerable interest in providing equipment in return for long term
concentrate contracts.
14.1 GENERAL
Following a favourable decision to proceed with the construction and the availability of
financing, a project team will be assembled to implement the Al Masane project. This team will
have to address two major aspects:
Project Management which involves the people and systems required to design
and construct the mine and plant facilities; and
Operations Management which involves the people and systems required to operate
the mine and plant facilities.
We have assessed these requirements for the project and also taken into consideration the unique
characteristics of the project such as the location and infrastructure that is presently available both
at the site and in Saudi Arabia. We have also assessed the capabilities and present availability of
skilled personnel for implementing this size of project and we believe that the proposed
implementation plan will provide the most expeditious and economical method of bringing the Al
Masane project to commercial production.
The implementation plan calls for the appointment of a General Manager to direct both the
Project Management and Operations Management aspects of the project. The General
Manager will directly supervise the Operations Group which will initially be responsible for
the design and development of the underground mining operation. Staff and personnel will be
hired to develop the mine and bring it into operation. As the construction of the concentrator
and infrastructure is completed, the Group will assume the responsibility for commissioning
and start-up of these facilities.
It is proposed that WGM be appointed Project Manager reporting to the General Manager to
provide the services associated with the construction of the concentrator and infrastructure.
WGM would assemble a team of specialists to supervise the Engineering Design, Procurement
and Construction Management (EPCM) for this purpose. It is presently anticipated that EPCM
would be contracted to a firm or firms located in Toronto where their activities can be closely
supervised by WGM.
In order to implement the proposed plan, the first priority would be to appoint a General
Manager. This person would be responsible for the overall direction of the project including
Project Management and Operations Management (Figure 18). The qualifications for this person
would include mine operations and project management experience in order that he
can effectively direct both of the major management requirements of the project. This position
would report directly to the President of the ASDC.
The General Manager would be located at the project site and would be required to travel
extensively to monitor the various activities of the project both in Saudi Arabia and abroad. He
would be supported in the mine operations side by the nucleus of his permanent operating staff
and on the project construction side by experienced technical and construction personnel from the
Project Manager.
The General Manager would be supported on the Operations side by hiring the personnel that
will become the nucleus of the operating group.
The mine development would be undertaken by direct hire personnel under the direction of
ASDC supervisors. This will allow the mine staff to be hired and the required standards of
productivity and safety to be implemented for the initiation of the project. The mine
superintendent would be hired who would then be responsible for hiring the personnel for
bringing the mine into production. Mine staff and miners would be hired in order to provide an
experienced base staff from which the operation can be expanded to full capacity. From our
discussions with the existing operations and recruitment personnel in Saudi Arabia we believe
that the required experienced mine supervision and miners are available and that this approach
will be successful in establishing a well motivated and safe operating crew.
The administration superintendent and support staff would be hired early in the project to provide
accounting and logistical support for the Operations Group and to monitor the data provided by
the construction activity.
It is planned to hire the mill superintendent and maintenance superintendent during the latter
stages of the construction phase of the project. These personnel will then be able to provide
input into the project at the appropriate time, become familiar with the project and hire their
department personnel prior to the start of production.
Following the completion of the project financing, the construction will move to the detailed
engineering design, procurement and construction phase. This work involves the detailed
engineering design of the facilities, the procurement of equipment and materials, and the
selection and supervision of contractors to carry out the designated work.
It is proposed that WGM as Project Manager will provide a group to supervise the EPCM
activities. This will ensure the continuity of experience and technical input from the
Feasibility Study Phase is carried through into the detailed engineering phase. The group will be
headed by a WGM Project Manager who will report to the ASDC General Manager. The WGM
Project Manager will be based in Toronto and will be responsible for overseeing the engineering
and procurement activities in Toronto and will also provide any additional construction
management services that may be required at the project site.
The EPCM activities can be undertaken in a number of ways. This can range from hiring a
contractor to perform all the services to WGM acting as manager and undertaking each of the
individual EPCM services for the complete project. After carefully considering the various
options and considering the size and duration of this project there appears to be no advantage
to WGM undertaking any of the EPCM function directly. To do so they would have to
assemble a project team whereas an EPCM organization already has the systems and
personnel in place. It would be more advantageous to WGM to closely supervise and monitor
the work of the EPCM contractor.
The EPCM contractor may be chosen on a competitive bid basis or could be selected by
negotiation based on WGM's experience of Toronto based contractors. It is proposed that
WGM approach a short list of three EPCM firms to negotiate the terms of a contract for
EPCM for the project. This contract could be negotiated on a turnkey basis in which case
the EPCM company would deliver a plant on a fixed cost basis. The contract could also be
negotiated on a target-price basis which provides an incentive for the EPCM company to
build the project at the lowest possible cost. A third alternative is that the contract be
negotiated on a direct cost reimbursable basis with close control of costs being implemented by
ASDC/WGM. Each of these alternatives will be examined within the project concept for the
best approach to the EPCM contractor.
A construction schedule of 18 months is envisaged for the Al Masane project. Based on present
delivery schedules, the SAG mill, ball mill and diesel generators appear to be the long delivery
items and will have to be ordered as soon as possible in order to meet this schedule. Should
refurbished equipment be available then the overall schedule may be shortened.
The schedule shown on Figure 19 outlines the major activities in the engineering procurement
and construction of the project based on presently envisaged plans.
In the Metallurgy section, we recommended that a pilot plant be completed to confirm certain
aspects of the metallurgy and plant design. This will require the mining of fresh representative
samples of the Saadah and Al Houra zones. As this may extend the construction schedule it is
imperative that the mine is dewatered and the sample obtained as soon as possible after a
production decision is made. It is possible that certain aspects of the detailed engineering can be
started prior to obtaining this information so the EPCM contractor should also be selected as soon
as possible after financing is arranged.
The mine preproduction will be phased in over a one year period prior to the startup of the
concentrator. The mine planning will start in early 1995 coincidental with mill engineering
which will allow sufficient time to order the mine equipment and develop the mine.
WGM believes that a three month commissioning period will be required to bring the plant to
design capacity.
15.1 INTRODUCTION
The capital cost to bring the Al Masane project into production is estimated at $81.3 million.
This cost includes the pre-production development in the mine, the construction of a 2,000 tonne
per day concentrator, infrastructure with a 300 man camp facility and the installation of a
cyanidation plant to increase the recovery of precious metals from the deposit. A summary of the
capital costs is as follows:
TABLE 10
SUMMARY OF CAPITAL COSTS
AL MASANE PROJECT
Area Cost
($ millions)
Mine 16.0
Concentrator 14.7
Infrastructure 18.9
Port Facilities 1.1
Water Supply 1.8
Project Implementation 4.5
Cyanidation Plant 4.4
Direct Costs 61.4
Indirect Costs 11.3
Contingency 8.6
Total Capital Cost 81.3
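As a mechanical cross-check, the Table 10 components can be summed; the small sketch below (values copied directly from the table) confirms the direct-cost subtotal and the total:

```python
# Cross-check of the Table 10 capital cost summary ($ millions).
direct = {
    "Mine": 16.0,
    "Concentrator": 14.7,
    "Infrastructure": 18.9,
    "Port Facilities": 1.1,
    "Water Supply": 1.8,
    "Project Implementation": 4.5,
    "Cyanidation Plant": 4.4,
}
direct_costs = round(sum(direct.values()), 1)
total = round(direct_costs + 11.3 + 8.6, 1)  # plus indirect costs and contingency
print(direct_costs, total)  # 61.4 81.3
```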
Capital cost allowances have been made for direct and indirect costs for construction of the
facilities and allowances for contingency and start up costs. No allowance is made for head
office costs incurred by ASDC during construction or operation of the mine. No allowance has
been made for inflation in these costs. WGM's project management costs are included.
The mine design and associated capital costs were prepared by WGM. The capital cost for the
concentrator and infrastructure was prepared by Davy under the technical direction of WGM.
The water supply alternatives and costs have been prepared by S&P. WGM has estimated the
Project Implementation Cost. A common cost base for labour and materials was used throughout
the capital cost estimates.
The cost estimate is based on the mine plans, flowsheets, equipment lists, drawings and level
plans, sketches, specifications and topographic drawings prepared during the studies and
observations made during the site visits.
Equipment costs are based on written quotations for major equipment obtained from equipment
suppliers. Concrete, steel, piping and earthwork costs are based on the drawings and costs
developed by Davy and their associated companies in Saudi Arabia and based on discussions
with supply companies during a visit to Saudi Arabia.
Labour costs are based on discussions held with Saudi mining company personnel, labour brokers
and Davy Saudi experience in similar construction situations. These labour costs have been
factored to reflect productivity levels experienced in Saudi Arabia both for underground, surface
and construction labour. Costs for on-site room and board are included in the labour costs, both
for construction and permanent operating personnel at the site.
The cost estimates have been prepared in a manner and in sufficient detail so that the estimates
have an assessed accuracy of 15%.
The WGM mine plan details the sources of ore for the first five years of production with a more
generalized approach for the remaining years of the mine life. While this plan extracts higher
grade ores as early in the life of the mine as possible, it is not considered an optimized plan. This
optimization work will be prepared in conjunction with a more detailed diamond drilling program
during the Project Implementation Phase.
The cost estimate includes the rehabilitation of the existing workings, pre-production
development for the establishment of production facilities in the Al Houra, Saadah and Moyeath
Zones, the installation of one ventilation raise in each zone, a fill raise in the Saadah zone only,
and development of an underground crusher room and related facilities. Costs are also included
for the establishment of surface facilities at the various raise collars, and road construction to
permit access to them.
The estimate includes costs for mining equipment required to mine the ore at a rate of 2,800
tonnes per day, five days per week. The existing equipment has been incorporated into the total
mine equipment fleet, and funds have been included for its refurbishment. Major items of new
equipment include the provision of one drill jumbo, two 8 yard production scoops, three
hydraulic ring drills, two trucks, and a variety of new utility equipment. Funds have also been
included for ventilation and pumping equipment underground and electrical distribution.
Moyeath development will be initiated in 1997 and will incur a total capital cost of $4 million for
development and additional equipment for the longer ore haul to the crushing plant.
The breakdown of the capital costs for the mine is shown on Table 11.
Management and technical supervision costs associated with these activities are included in
the Project Implementation costs.
TABLE 11
BREAKDOWN OF MINE CAPITAL COSTS
AL MASANE PROJECT
Area Cost
($ millions)
The capital cost estimate for the concentrator and infrastructure has been produced from the
flowsheets, equipment lists and general arrangement drawings prepared by Davy and presented in
their study. The capital cost for these facilities is estimated at $33.6 million. An additional $4.4
million is required for the construction of a cyanidation plant for the recovery of the precious
metal content of the ore not recovered into the copper concentrate.
The capital cost includes the construction of a 2,000 tonne per day concentrator for the
production of copper and zinc concentrates for sale to smelters. Infrastructure costs include
a camp site to house 300 employees and 12.8 MW power plant. The direct costs presented
in Table 10 include all equipment, materials, labour and construction costs. The indirect costs
include the EPCM costs, capital spares, freight and commissioning costs.
Potable and process water for the project will be obtained from wells located in Wadi Saadah
up to 6 km downstream of the mine. The cost for drilling extra wells and for the
construction of water retention barriers in the wadi that will recharge the floods into the
bedrock has been estimated by S&P to be $2.0 million. These costs include well drilling
supervision and the installation of the sub-surface dams. The cost of well pumps, electrical
supply, security fencing, pipeline and water collection and distribution system throughout the
site is included in the cost of the concentrator and infrastructure prepared by Davy.
The cost for Project Implementation is based on the scope of work described in Chapter 14. A
team will be hired to implement the project including a General Manager. The General Manager
will supervise development of the underground mine and the construction of the concentrator and
infrastructure. This phase will require the services of a number of specialists including
geological, mining, processing and construction personnel. These personnel will act on behalf of
the Owner to ensure that the project is implemented in a practical and economical manner. The
cost of the Project Implementation is estimated at $4.5 million. These costs are shown in Table
12.
The services and costs outlined in the Project Implementation plan are those required to provide
the necessary data and infrastructure to service the site and complete the detailed engineering for
the project. The services will provide the management information systems required to monitor
and control the project and that the project is coordinated to include the technical detail is
implemented to ensure that the project will be a success.
15.7 DISCUSSION
The use of good quality refurbished equipment should be considered. We recommend that only
major equipment items such as the crusher, grinding mills, belt filters, camp accommodation
units and diesel generators be considered in this instance as the quality of this refurbished
equipment can be carefully monitored. Davy has estimated that a possible savings of $2.5 million
could be realized from the use of refurbished equipment. The actual amount of the savings will
be determined when firm prices are obtained from suppliers prior to placing the order for the
equipment.
16.1 INTRODUCTION
The operating costs for the Al Masane project at a production rate of 700,000 tonnes per year are
estimated at an average of $36.86 per tonne over the life of the mine. A breakdown of the
operating costs is shown on Table 13.
TABLE 13
OPERATING COST SUMMARY
AL MASANE PROJECT
Area Cost
($/tonne)
Mining 16.82
Concentrator 11.22
Cyanidation Plant 3.73
Maintenance Services 3.49
Administration 1.60
Total 36.86
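The Table 13 line items can be checked against the stated total with a one-line sum (values copied directly from the table):

```python
# Cross-check of the Table 13 operating cost summary ($/tonne).
costs = {
    "Mining": 16.82,
    "Concentrator": 11.22,
    "Cyanidation Plant": 3.73,
    "Maintenance Services": 3.49,
    "Administration": 1.60,
}
total = round(sum(costs.values()), 2)
print(total)  # 36.86
```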
The operating cost for the mine at full production rate varies from $20.00 per tonne in the Saadah
zone to approximately $13.00 in the Al Houra zone. The average cost for mining is $16.82 per
tonne.
Operating costs in the concentrator are estimated at $11.22 per tonne of ore milled and $3.73 per
tonne milled in the cyanidation plant. This cost was prepared by Davy from a detailed forecast of
staffing levels, reagents, consumables and maintenance materials. Tailings
disposal has been estimated based on filtration of the total tailings and being trucked to the
disposal site east of the concentrator at a cost of $0.27 per tonne treated.
Maintenance required to support the underground and concentrator operations and maintain and
service the infrastructure is estimated to be $3.49 per tonne.
Administration costs of $1.60 per tonne include the management and administration of the site.
These costs are for the General Manager, accounting and warehouse services, and human
relations functions such as hiring, safety, training and camp administration.
Power to the site will be provided by an on-site diesel generation plant. The cost of power has
been allotted to each of the departments on the basis of the estimated operating load for each
department. The cost of power has been estimated at $0.04 per kilowatt hour generated.
16.2.1 LABOUR
Senior staff on the project will be European or North American expatriates with considerable
experience in mining operations similar to that at Al Masane. While a program to hire as many
Saudi personnel as possible will be undertaken, the supervisors and operating personnel both
underground and in the concentrator will be Filipinos with previous mining experience. The
semi-skilled and labourer positions will be filled by other nationalities. A training program will
be implemented to upgrade the qualifications of all workers.
The labour costs used will include an allowance for overtime, social services required by Saudi
law, travel and hiring costs, food and accommodation costs, as well as the direct labour cost
component.
Allowance has been made in the labour costs for the provision of on-site medical services. The
labour costs also include the provision of security services for the site.
In accordance with company policy of upgrading employee qualifications and ensuring a safety-
oriented workplace, a training department of four employees has been included in the labour
staffing.
Costs of maintenance supplies and consumables have been based on quotations obtained from
potential suppliers. Where actual costs are not available, costs have been based on installed
equipment cost or on a cost per operating hour.
The mine operating costs have been estimated to average $16.82 per tonne over the life of the
mine. The variability in costs depends on the method of mining used and includes the cost of
backfill if required.
Mine operating costs in the Saadah and Al Houra zones have been estimated to be an average of
$16.82 per tonne. Operating costs in the Moyeath zone are anticipated to be very similar.
The operating costs have been based on the achievement of productivity rates similar to those
found in other underground mines in Saudi Arabia. Typically one round per shift will be
expected from single heading development crews, 100 m per shift from the ring drills, and six
men are expected to fulfil the production blasting function.
Unit operating costs have been developed and used to estimate the operating cost per tonne.
Typical unit operating costs are:
Typical equipment operating costs (excluding operation and maintenance labour) per hour are:
Operating costs vary considerably throughout the mine. The variable elements of the operating
costs include the method used to mine the ore, the amount of access development required,
transport elements, and the use of fill. Operating cost estimates vary between about $20 in cut
and fill areas of the Saadah zone to a low of about $13 in certain parts of the open stoped areas.
The operating costs in the concentrator are estimated at $13.63 per tonne milled. A breakdown of
these costs is as follows:
16.4.1 LABOUR
The labour cost of $2.58 per tonne reflects the cost of the 73 staff and operating personnel required to
operate the concentrator on a 24 hour per day basis. The details of labour rates and staffing levels
are presented in the Davy report. Labour costs include the social services required under Saudi
law and accommodation and food costs.
16.4.2 REAGENTS
Reagent consumptions are based on the Lakefield testwork. Costs are based on current
quotations obtained from Saudi trading houses where possible or from an offshore supplier with
freight allowed to site. The Lakefield testwork program has shown that appreciable operating
cost savings in the order of $2.42 per tonne can be realized with the use of a talc prefloat on the
ore and this has been incorporated in our costs. However, a full scale pilot plant is necessary to
verify this result.
The cost of maintenance supplies has been estimated at 5% of the purchased equipment cost.
16.4.4 POWER
The electrical costs have been estimated on the basis of installed load and factored by load and
operating factors to arrive at an actual power demand. The cost for the concentrator is estimated
at $1.88 per tonne milled.
16.5 MAINTENANCE SERVICES
The maintenance group will maintain and service the mine, concentrator and infrastructure facilities,
including the camp. The group will also provide shop facilities for the underground and surface
operations and assistance in major maintenance work in the concentrator.
16.6 ADMINISTRATION
The operation will be managed and administered by a team of specialists consisting of 45 people.
This group includes the General Manager, Manager of Administration, Chief Accountant, Human
Resources Supervisor and a Purchasing/Warehouse Manager. A doctor and 24 hour nursing
services will also be provided.
The Administration Department will also be responsible for the overall training and safety
aspects of the operation and will provide the necessary expertise for the orientation of newly
hired personnel. The security of the site will also be administered by this Department. The cost
of these services has been estimated by Davy at $1.60 per tonne of ore processed.
The port facility will be staffed on a 24 hour per day basis to ensure the efficient unloading and
stockpiling of copper and zinc concentrates that are trucked from the minesite. Trucks arriving
from the minesite will be dumped into the storage shed at the dockside and will be moved into the
appropriate stockpile using a front end loader. Approximately 20 truckloads will be
received at the port each day. The shed has the capacity of storing approximately 25,000 tonnes
of concentrate.
A shiploading conveying system will be provided to load the ships at a rate of 500 tonnes per
hour. The shiploader will be fed by a front end loader using the same crew that unloads and
stockpiles the concentrates delivered by truck. The operating cost of the port facility is estimated at
$0.72 per tonne processed and is included in the concentrate shipping cost.
The operating cost of the cyanidation plant to recover gold and silver values from the zinc
concentrate and plant tailings has been estimated at $3.73 per tonne of ore milled. The cost
breakdown is shown below:
The major reagent cost is the cost of the cyanide consumed in the process.
Recent cyanidation testwork by Lakefield has shown that the reductions in cyanide consumption
could be realized by lowering the cyanide concentration in the leaching process. WGM has
incorporated this operating cost savings of $2.25 per tonne into our base case financial scenario.
16.9 DISCUSSION
Potential savings in operating costs can be made by obtaining electric power from the Saudi
Electric Power Company. Using this less expensive power would save $2.14 per tonne and
enhance the economics of the project.
17.1 GENERAL
The economics of the Al Masane project have been studied utilizing cash flow analysis. A Base
Case is described that includes those project elements which we believe are most likely to be
achieved. We have then examined further opportunities to improve the project, as well as those
elements which represent project risks. The cash flow projections are included in Appendix 1.
Smelter terms for zinc concentrates: pay for 85% of zinc content, with a minimum deduction
of 8 units; treatment charge of $200/tonne of concentrate.
Smelter terms for copper concentrate: deduct one unit of copper and pay for 100% of the
remainder.
Operating Costs
Mining $16.82/tonne
Milling $14.95/tonne
General and administration $ 5.09/tonne
Total $36.86/tonne
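As a quick arithmetic check on the table above (an illustrative sketch added here, not part of the study itself):

```python
# Life-of-mine operating cost components in $/tonne, from the table above
costs = {
    "mining": 16.82,
    "milling": 14.95,
    "general and administration": 5.09,
}

total = round(sum(costs.values()), 2)
print(total)  # 36.86, matching the stated total
```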
Financial structure of New Company: ASDC will earn 50% of the equity of the new company
by contributing the mining lease, ore reserves and associated assets. New Saudi shareholders
will purchase 50% of the equity of the new company for $20.4 million cash through a public
offering. New debt funding will be arranged for $61.3 million (75% of total new capital).
Cash flow to New Company: Project Cash Flow, plus funds from Saudi investors, plus SIDF
loan, plus bank loans; less repayment of government advances, SIDF loan and bank loans;
less service charges, interest expense and capitalized interest.
Cash flow to Saudi shareholders: dividends from New Company, less initial investment.
When reviewing the cash flow projections, it is important to keep in mind that no inflationary
increases are included. This means that the calculated internal rate of return for the Base Case
(14%) would, in an inflationary environment of 4%, say, increase to about 18%.
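That adjustment follows the standard Fisher relation between real and nominal rates of return; a quick check (illustrative only, not derived from the study's cash flows):

```python
real_irr = 0.14   # Base Case internal rate of return (no inflation)
inflation = 0.04  # assumed inflationary environment

# Compounding the real return with inflation gives the nominal return
nominal_irr = (1 + real_irr) * (1 + inflation) - 1
print(f"{nominal_irr:.1%}")  # 18.6%, i.e. "about 18%" as stated
```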
There are several components of the project which provide opportunities to increase cash flow.
As well, several other elements of the project represent potential risks.
We have also calculated cash flows to show the effect of several risks:
Earlier repayment of outstanding advance of $11 million from Saudi government; and
Cash flows have been calculated to show the results from the perspective of:
1. the project;
2. the new Saudi investors; and
3. Arabian Shield Development Company.
The resulting rates of return from these investors are shown in Figures 21 to 24.
We have selected internal rate of return as the best way to show project returns. Internal rate
of return (IRR) for our base case is 14.0%. It comes as no surprise that the greatest effect on
IRR is caused by increases to revenue. An increase of 10% in metal prices results in a boost
to IRR from 14.0% (base case) to 21.6%. Ten percent higher ore grades result in a
19.2% IRR. Better ore grades could be achieved by better control of waste dilution during
mining or from more high grade ore than postulated.
Reduced operating costs (by 10%) gave an increase in IRR to 17.3%. One good opportunity
to reduce operating costs is found underground where better productivities by mining
personnel could well be achieved.
Increased ore reserves are highly likely as described in Section 6.9. An increase of 10%
increases project IRR to 14.9%.
Further metallurgical test work is required to achieve greater confidence in the percentage of
metal recoveries to be expected as a result of introducing a talc pre-float section into the
process flowsheet. A reduction of copper and zinc recoveries by 1 percentage point and
precious metals recoveries by 5 percentage points is believed to be the worst case and would
cause a reduction in project IRR to 12.9% from 14.0%.
Capital cost increases of 10% result in project IRR of 12.1%. Such increases could be caused
by renewed inflation in equipment manufacturing countries or cost overruns from
unanticipated construction problems.
Another area of the process flowsheet which requires further testwork is the cyanide section
for precious metals recovery. The risk here is that consumption of cyanide might be higher
than indicated by limited testwork to date. This increase (which might add $2.25 per tonne to
operating costs) would result in a project IRR of 12.0% (base case 14.0%).
Increased operating costs (up 10%) would reduce project IRR to 10.6%.
The undiscounted net cash flow to the new Company is $94.0 million for the Base Case. This is
distributed as dividends on the basis of ownership of the company of 50% to ASDC and 50% to
Saudi investors.
The rate of return for Saudi investors is 11.9% using the Base Case criteria. These criteria will be
subject to the sensitivities as discussed in Section 17.3.1. The results are illustrated in Figure 22,
which shows the rate of return projected for the new Saudi investors, while Figure 23 shows the
Net Cash Flow to Saudi investors, after repayment of their initial investment of $20.4 million.
The net cash flow to ASDC is illustrated in Figure 24. This is $37.0 million using the Base Case
assumptions. These criteria will be subject to the same sensitivities as discussed previously and
are also shown in Figure 24. These range from $26.5 million (in the case where operating costs
are increased by 10%) up to $62.7 million (in the case where metal prices were increased by
10%). In all cases, the net cash flow is after payment of corporate taxes in Saudi Arabia.
17.4 SUMMARY
In summary, we believe that our financial analysis shows that a viable project can be anticipated
at Al Masane. While there are several areas where technical or cost risks can be identified, we
believe these are outweighed by the opportunities to improve financial returns.
As outlined on page 34, inferred resources of almost one million tonnes have been calculated.
We believe these will be readily confirmed. Further, no allowance has been made for additional
potential immediately down plunge beneath the identified reserves.
In addition, we feel that the Al Masane area has the dimensions of a mineral district which has
only been partially prospected to date. As described on page 38, some of the small mineralized
showings outside the immediate mine area could yield substantial tonnages of new ore.
Reduced mining costs are deemed possible, especially if improved mining productivities can be
achieved as we feel may well be the case.
Another financial factor is the existing loan of $11 million originally borrowed to finance the
underground investigation program. We have shown repayment of this loan at the end of the
mine life. There is a reasonable prospect that the government can be persuaded to forgive this
debt.
Finally, no one can control metal prices. However, careful planning which employs such
techniques as forward selling, hedging and innovative smelter contracts can produce higher
income.
On balance, we feel that these positive factors provide good upside potential for financial returns
from a long-life mine at Al Masane.
18.1 CONCLUSIONS
1. Diluted, mineable, proven and probable reserves are 7,212,000 tonnes grading 1.42% Cu,
5.31% Zn, 1.19 g Au/t and 40.20 g Ag/t. An additional 953,000 tonnes grading 1.16%
Cu, 8.95% Zn, 1.50 g Au/t and 60.79 g Ag/t can be classified as Inferred Resources.
2. There is considerable potential to expand the reserves defined to date both within the
presently defined Saadah, Al Houra and Moyeath zones and within the mining lease.
3. Reserves are sufficient to sustain the operation at a production rate of 700,000 tonnes per
year for a period of approximately ten years.
4. The capital cost to bring the Al Masane project into production at a rate of 700,000 tonnes
per year has been estimated at $81.3 million.
5. The average operating cost over the life of the mine has been estimated at $36.86 per
tonne.
6. Our economic analysis of the project shows that the project will realize an internal rate of
return of 14.0% to the project, a rate of return of 11.9% and a net cash flow of
$26.6 million to new Saudi investors, and a net cash flow to ASDC of $37.0 million using
the Base Case assumptions. We believe that there is good potential for improving these
rates of return as further ore reserves are developed, as operating costs are firmed up and
as detailed financial planning is carried out.
18.2 RECOMMENDATIONS
We recommend that ASDC make a decision to bring the Al Masane mine into production; the
time for such a decision is now.
1994 The Outlook for Copper and Zinc Prices 1996-2005. For Watts, Griffis
and McOuat Limited.
CSMRI
Davy International
1994 Al Masane Project Feasibility Study for a 2,000 TPD Concentrator and
Related Infrastructure for Arabian Shield Development Company. 96
p. and Appendices.
Greenwood, W.R.
1994 Overview of capital and operating costs for concentrator and related
infrastructure.
Lakefield Research
1980 The Recovery of Copper, Zinc, Gold and Silver from Masane Drillcore
Samples.
1994 Revised Proposed Grinding System for the Al Masane Project for
Arabian Shield Development Based on Small Scale Tests at Hazen
Research Incorporated and Lakefield Research.
APPENDIX 1
Created on 2008-07-14 11:32 by effbot, last changed 2021-01-27 21:14 by pablogsal.
CPython provides a Python-level API to the parser, but not to the
tokenizer itself. Somewhat annoyingly, it does provide a nice C API,
but that's not properly exposed for external modules.
To fix this, the tokenizer.h file should be moved from the Parser
directory to the Include directory, and the (semi-public) functions that are
already available must be flagged with PyAPI_FUNC, as shown below.
The PyAPI_FUNC fix should be non-intrusive enough to go into 2.6 and
3.0; moving stuff around is perhaps better left for a later release
(which could also include a Python binding).
Index: tokenizer.h
===================================================================
--- tokenizer.h (revision 514)
+++ tokenizer.h (working copy)
@@ -54,10 +54,10 @@
const char* str;
};
-extern struct tok_state *PyTokenizer_FromString(const char *);
-extern struct tok_state *PyTokenizer_FromFile(FILE *, char *, char *);
-extern void PyTokenizer_Free(struct tok_state *);
-extern int PyTokenizer_Get(struct tok_state *, char **, char **);
+PyAPI_FUNC(struct tok_state *) PyTokenizer_FromString(const char *);
+PyAPI_FUNC(struct tok_state *) PyTokenizer_FromFile(FILE *, char *,
char *);
+PyAPI_FUNC(void) PyTokenizer_Free(struct tok_state *);
+PyAPI_FUNC(int) PyTokenizer_Get(struct tok_state *, char **, char **);
#ifdef __cplusplus
}
IMO the "struct tok_state" should not be part of the API; it contains
too many implementation details. Or maybe expose it as an opaque structure.
There are a few things in the struct that need to be public, but that's
nothing that cannot be handled by documentation. No need to complicate
the API just in case.
Sorry for the terribly dumb question about this.
Are you meaning that, at this stage, all that is required is:
1. the application of the PyAPI_FUNC macro
2. move the file to the Include directory
3. update Makefile.pre.in to point to the new location
Just I have read this now 10 times or so and keep thinking more must be
involved :-) [certainly given my embarrassing start to the Python dev
community re:asynchronous thread exceptions :-| ]
I have attached a patch that does this. Though at this time it is
lacking any documentation that will state what parts of "struct
tok_state" are private and public. I will need to trawl the code some
more to do that.
I have executed:
- ./configure
- make
- make test
And all proceed well.
That should be all that's needed to expose the existing API, as is.
If you want to verify the build, you can grab the pytoken.c and setup.py
files from this directory, and try building the module.
Make sure you remove the local copy of "tokenizer.h" that's present in
that directory before you build. If that module builds, all's well.
Did that and it builds fine.
So my test procedure was:
- apply patch as per guidelines
- remove the file Parser/tokenizer.h (*)
- ./configure
- make
- ./python setup.py install
Build platform: Ubuntu, gcc 4.2.3
All works fine.
thanks for the extra test files.
* - one question though. I removed the file using 'svn remove', but the
diff shows it as an empty file rather than removed; why is that? (And is it correct?)
It would be nice if this same C API was used to implement the 'tokenize' module. Issues like issue2180 will potentially require bug fixes in two places :-/
The previously posted patch has become outdated due to signature changes starting with revision 89f4293 on Nov 12, 2009. Attached is an updated patch.
Can it also be confirmed what the outstanding items are for this patch to be applied? Based on the previous logs it's not clear if it's waiting for documentation on the struct tok_state or if there is another change requested. Thanks.
From my read of this bug, there are two distinct tasks mentioned:
1. make PyTokenizer_* part of the Python-level API
2. re-implement 'tokenize' in terms of that Python-level API
#1 is largely complete in Andrew's latest patch, but that will likely need:
* rebasing
* hiding struct fields
* documentation
#2 is, I think, a separate project. There may be good reasons *not* to do this which I'm not aware of, and barring such reasons the rewrite will be difficult and could potentially change behavior like issue2180. So I would suggest filing a new issue for #2 when #1 is complete. And I'll work on #1.
Here's an updated patch for #1:
Existing Patch:
- move tokenizer.h from Parser/ to Include/
- Add PyAPI_Func to export tokenizer functions
New:
- Removed unused, undefined PyTokenizer_RestoreEncoding
- Include PyTokenizer_State with limited ABI compatibility (but still undocumented)
- namespace the struct name (PyTokenizer_State)
- Documentation
I'd like particular attention to the documentation for the tokenizer -- I'm not entirely confident that I have documented the functions correctly! In particular, I'm not sure how PyTokenizer_FromString handles encodings.
There's a further iteration possible here, but it's beyond my understanding of the tokenizer and of possible uses of the API. That would be to expose some of the tokenizer state fields and document them, either as part of the limited ABI or even the stable API. In particular, there are about a half-dozen struct fields used by the parser, and those would be good candidates for addition to the public API.
If that's desirable, I'd prefer to merge a revision of my patch first, and keep the issue open for subsequent improvement.
New:
- rename token symbols in token.h with a PYTOK_ prefix
- include an example of using the PyTokenizer functions
- address minor review comments
This seems to have stalled out after the PyCon sprints. Any chance the final patch can be reviewed?
Could you submit a PR for this?
I haven't seen any objections to this change, a PR will expose this to more people and a clear decision on whether this change is warranted can be finally made (I hope).
If the patch still applies cleanly, I have no issues with you or anyone opening a PR. I picked this up several years ago at the PyCon sprints, and don't remember a thing about it, nor have I touched any other bit of the CPython source since then. So any merge conflicts would be very difficult for me to resolve.
Okay, I'll take a look at it over the next days and try and submit a PR after fixing any issues that might be present.
Please hold this until finishing issue25643.
Thanks for linking the dependency, Serhiy :-)
Is there anybody currently working on the other issue? Also, shouldn't both issues now get retagged to Python 3.7?
I am working on the other issue (the recent patch is still not published). Sorry, but two issues modify the same code and are conflicting. Since I believe that this issue makes less semantic changes, I think it would be easier to rebase it after finishing issue25643 than do it in contrary order.
That makes sense to me, I'll wait around until the dependency is resolved.
Serhiy Storchaka is this still blocked? it's been a few years on either this or the linked issue and I'm reaching for this one :)
I am -1 on exposing the C-API of the tokenizer. For the new parser, several modifications of the C tokenizer had to be made, and some of them modify existing behaviour slightly. I don't want to corner ourselves into a place where we cannot make improvements because doing so would be a backwards-incompatible change once the API is exposed.
I'm interested in it because the `tokenize` module is painfully slow
> I'm interested in it because the `tokenize` module is painfully slow
I assumed, but I don't feel comfortable exposing the built-in one.
> I assumed, but I don't feel comfortable exposing the built-in one.
As an example of the situation I want to avoid: every time we change anything in the AST because of internal details, we get many complaints and pressure from tool authors because they need to add branches or because it makes life more difficult for them, and I absolutely want to avoid more of that.
you already have that right now because the `tokenize` module is exposed. (except that every change to the tokenization requires it to be implemented once in C and once in python)
it's much more frustrating when the two differ as well
I don't think all the internals of the C tokenization need to be exposed, my main goals would be:
- expose enough information to reimplement Lib/tokenize.py
- replace Lib/tokenize.py with the C tokenizer
and the reasons would be:
- eliminate the (potential) drift and complexity between the two
- get a fast tokenizer
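For reference, the pure-Python API that a C-backed reimplementation would have to match is the readline-driven generator in Lib/tokenize.py (a quick demonstration using only the documented tokenize module):

```python
import io
import tokenize

source = "1 + 2\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

# Each token carries its type, string, start/end positions and source line
for tok in tokens:
    print(tokenize.tok_name[tok.type], repr(tok.string))
```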
Unlike the AST, the tokenization changes much less frequently (the last major addition I can
remember is the `@` operator).
For reimplementing Lib/tokenize.py we don't need to publicly expose anything in the C-API. We can have a private _tokenize module with uses whatever you need and then you use that _tokenize module in the tokenize.py file to reimplement the exact Python API that the module exposes.
Publicly exposing the headers or APIs opens new boxes of potential problems: ABI stability, changes in the signatures, changes in the structs. Our experience so far with other parts is that it is almost always painful to add optimizations to internal functions that are partially exposed, so I am still not convinced about offering public C-APIs for the builtin tokenizer.
private api sounds fine too -- I thought it was necessary to implement the module (as it needs external linkage) but if it isn't then even better
> private api sounds fine too -- I thought it was necessary to implement the module (as it needs external linkage) but if it isn't then even better
We can make it builtin the same way we do for the _ast module, or we can have a new module under Modules (exposing the symbols in the dynamic table) **but** making them private (and not documented), which explicitly goes against what this issue proposes.
Either works for me, would you be able to point me to the starting bits as to how `_ast` becomes builtin?
> Either works for me, would you be able to point me to the starting bits as to how `_ast` becomes builtin?
But before that I have some questions. For example: How do you plan to implement the readline() interface that tokenize.py uses in the c-module without modifying tokenize.c?
I haven't looked into or thought about that yet, it might not be possible
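For context, the readline() interface in question hands tokenize a zero-argument callable that returns one line per call; the bytes-based entry point also performs PEP 263 encoding detection, which is one of the behaviours a C-backed module would have to reproduce (a sketch using only the documented tokenize module):

```python
import io
import tokenize

source = b"# -*- coding: utf-8 -*-\nx = 1\n"

# tokenize.tokenize() calls readline() repeatedly to pull in source lines
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# The first token reports the encoding sniffed from the coding cookie
print(tokenize.tok_name[tokens[0].type], tokens[0].string)  # ENCODING utf-8
```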
It might also make sense to build new tokenize.py apis avoiding the `readline()` api -- I always found it painful to work with
> It might also make sense to build new tokenize.py apis avoiding the `readline()` api -- I always found it painful to work with
Then we would need to maintain the old Python APIs + the new ones using the module? What you are proposing seems more than just speeding up tokenize.py re-using the existing c code
I have built a draft of how the changes required to make what you describe, in case you want to finish them:
Problems that you are going to find:
* The c tokenizer throws syntax errors while the tokenizer module does not. For example:
❯ python -c "1_"
File "<string>", line 1
1_
^
SyntaxError: invalid decimal literal
❯ python -m tokenize <<< "1_"
1,0-1,1: NUMBER '1'
1,1-1,2: NAME '_'
1,2-1,3: NEWLINE '\n'
2,0-2,0: ENDMARKER ''
* The encoding cannot be immediately specified. You need to thread it in many places.
* The readline() function can now return whatever or be whatever, that needs to be handled (better) in the c tokenizer to not crash.
* str/bytes in the c tokenizer.
* The c tokenizer does not get the full line in some cases or is tricky to get the full line.
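The first discrepancy listed above is easy to reproduce from Python itself. (A rough check: note that from Python 3.12 the tokenize module was itself reimplemented on top of the C tokenizer, so on newer interpreters it may reject "1_" too instead of yielding NUMBER and NAME tokens.)

```python
import io
import tokenize

def tokens_or_error(src):
    """Return (name, string) pairs, or an error string if the tokenizer
    rejects the input outright (as the C tokenizer does for "1_")."""
    try:
        return [(tokenize.tok_name[t.type], t.string)
                for t in tokenize.generate_tokens(io.StringIO(src).readline)]
    except (SyntaxError, tokenize.TokenError) as exc:
        return "error: %s" % exc

# The compiler, which uses the C tokenizer, rejects "1_" outright
try:
    compile("1_\n", "<string>", "exec")
    compiler_rejects = False
except SyntaxError:
    compiler_rejects = True

print(compiler_rejects)          # True
print(tokens_or_error("1_\n"))   # token list on <= 3.11, error on >= 3.12
```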
01 February 2010 22:35 [Source: ICIS news]
HOUSTON (ICIS news)--The US Chemical Safety Board (CSB) is seeking a 20.5% budget hike for the 2011 fiscal year to support the development of a regional office in or near Houston, the board said on Monday.
The Washington, DC-based group’s budget justification statement for the fiscal year beginning 1 October 2010 includes a request for a $12.7m (€9.1m) budget, up from $10.6m in fiscal 2010 and $1.9m above the amount currently allotted by the Obama administration.
The CSB is also requesting an additional three-person investigative team to focus on shorter-term investigations.
“The board believes that these two steps are essential to help close the gap between the number of serious chemical accidents that occur each year and the number the CSB is actually able to investigate,” the board said in its request.
In December, the board was unable to investigate an explosion at an American Acryl plant in
Last month, the CSB said its 17 open investigations represented the largest number in its 11-year history.
In the budget request, the CSB is also seeking additional funds for items such as a director of operations, salary and benefits increases for five board members, and information technology (IT) equipment.
“People within the oil industry have told us that we give the best value for taxpayer dollar of any agency in the government,” CSB chairman John Bresland said. “$10m is what we spend, but in terms of accident prevention, that money is returned many times
it is ok, thanks. It works in HDIV spring example using Formtag. Guess is the configuration problem. Thanks anyway.
Hi,
Just want to check with you guys how to use a form action without putting the target URL in the "startpage" of HDIV. Right now I need to put every form action in "startpage" or else I will get...
Hi,
The problem is resolved. To use AES with 256-bit keys, you need to download the Unlimited Strength JCE policy files from Sun and install them, replacing the two JAR files in your JDK folder.
...
Hi,
I'm trying to write a method to encrypt data using AES-256, but every time I try to pass the 256-bit key to the Java Cipher it throws an InvalidKeyException; it works only with a 128-bit key....
it is resolved now. thanks.
Ignore the "aa.repaint()"; I forgot to remove it.
Hi,
Does anyone know what the problem is? How do I make it paint correctly when scrolling?
import javax.swing.*;
import java.awt.*;
POW-MIA
POW-MIA stands for Prisoners Of War and Missing In Action. In the United States, it, and the associated black flag with the "POW-MIA" logo on it, are part of a widely believed conspiracy theory that the government of Vietnam secretly and maliciously continued to hold American prisoners of war long after the end of the Vietnam War into the 1980s, and may still be holding them today.
This myth is so widely believed that some municipalities fly the black POW-MIA flag over the city hall or their post office--it's even been known to fly over some states' capitol buildings, even California's. The flag is also a common sight at "Rolling Thunder" motorcycle runs, and several hucksters collected money during the 1980s for their abortive missions to Southeast Asia to locate American POWs. Four decades after the war's end, it shows no sign of dying out.
The flag was created by members of the National League of POW/MIA Families in 1970, due to perceived indifference among American politicians regarding the still-extant issue. The League still exists today, although their mission has shifted primarily towards recovery of remains. The League's press briefs still mention "live sightings" to some extent, but they have officially backed off enough on the conspiracy theory to work with Vietnam to recover remains.
History, impact and investigations
It's a myth, folks. Vietnam released all living U.S. prisoners of war back to the U.S. in 1973. The majority of POWs taken during the Vietnam War were pilots shot down over North Vietnam, like John McCain. Most potential POW-MIAs who were ground troops would probably have been killed on the spot, since the Viet Cong were guerrilla soldiers who could not afford to keep American prisoners with them. The vast majority of "MIAs" that remained unaccounted for at the end of the Vietnam War are widely believed to have been killed in action (KIA) running secret, illegal missions within or over Cambodia, the government listing them as MIAs instead of KIAs in order to prevent the media from finding out that they were invading a neutral country.[1]
This legend originated, like so many horrible things from its era, with Richard Nixon. Throughout his first term, Nixon used POWs as leverage in his negotiations with North Vietnam, claiming he wouldn't end the conflict unless North Vietnam and the Vietcong returned all American prisoners. Never mind that POWs are generally returned after a war concludes. Even after January 1973, with the Paris Peace Accords and the return of POWs in Operation Homecoming, Nixon still publicly insisted that the Vietnamese return (likely nonexistent) prisoners, in an effort to neutralize public criticism. Helped by right-wingers like H. Ross Perot and Congressman Bob Dornan, grassroots POW organizations and the media, the myth quickly took hold in public consciousness.[2]
Long after Nixon resigned and Saigon fell to the Communists, the story persisted. Rambo: First Blood Part II, Missing In Action, Missing In Action II, Uncommon Valor, P.O.W.: The Escape, and numerous other movies focused on the myth, as did novels, nonfiction books and television episodes. Ronald Reagan's administration exploited the controversy to revise American perception of the war; by focusing on POWs, conservatives reframed the conflict as a valiant anti-Communist struggle instead of an unpopular, morally dubious intervention.[3]
Besides its domestic effects, the POW-MIA myth certainly delayed the normalization of American diplomatic ties with Vietnam, and its admission into the United Nations.[4]
Throughout the '70s, '80s and even into the '90s, there were numerous "live POW sightings" by travelers in Southeast Asia. Ultimately, these resembled "live Bigfoot sightings" in that nobody ever saw one that was actually there. There were also numerous privately funded "rescue" operations by Bo Gritz and others. At least one expedition in the '80s was encouraged, if not bankrolled, by the People's Republic of China, which then had less-than-friendly relations with Vietnam.[5]
Because of a desire to leave no man behind, the United States Senate set up a committee to investigate these claims, and found that the Vietnamese government more or less complied as best as it could with its treaty obligations.[6] Since the normalization of relations with Vietnam in the 1990s, there have been efforts to recover and identify the remains of MIAs in Vietnam. Such efforts, as well as promoting awareness of POW/MIA issues for all wars (including Iraq) have since become the focus of the National League of Families of American Prisoners and Missing in Southeast Asia, the largest POW-MIA advocacy group, although they devote a lot of effort to the much more reasonable goal of identifying and bringing home bodies found in POW camps or other places in Vietnam and Korea.
John Hartley Robertson hoax
Toronto media, notably the Toronto Star[7][8] and Maclean's magazine[9] [10], have recently given the POW-MIA conspiracy some new lifeblood. Film critics from Toronto papers and magazines have credulously reported the notions, expressed in the 2013 Hot Doc's documentary Unclaimed, that a "left behind" vet declared KIA was found living in a village in Vietnam. Film critics seemed to ignore the wealth of information online[11][12] that the man claiming to be John Hartley Robertson was a known hoaxer. Fingerprints and DNA taken from the man in the Unclaimed documentary did not match reference samples. A simple phone call by Toronto reporters or even the filmmaker to the Defense Prisoner of War/Missing Personnel Office would have revealed this information.
References
- ↑ H. Bruce Franklin, M.I.A., or Mythmaking in America (1993), pp. 105-113
- ↑ Franklin, 39-75
- ↑ Franklin, 137-140
- ↑ cf. Franklin 124-125, 129-130
- ↑ Franklin, 117-120
- ↑ The committee's report.
- ↑ Barnard, Linda. "Hot Docs premiere Unclaimed finds a Vietnam veteran left behind for 44 years", The Toronto Star, 25 April 2013. Retrieved on 15 May 2013.
- ↑ Barnard, Linda. "Unclaimed: Controversy erupts over man claiming to be missing Vietnam veteran", The Toronto Star, 2 May 2013. Retrieved on 15 May 2013.
- ↑ Johnson, Brian. "Forty years later in a village in Vietnam", Maclean's Magazine, 29 April 2013. Retrieved on 15 May 2013.
- ↑ Johnson, Brian. "Who’s the ‘slick fraudster’—the man claiming he’s an MIA or the U.S. military?", Maclean's Magazine, 2 May 2013. Retrieved on 15 May 2013.
- ↑ Mamer, Karl. "Did a Canadian filmmaker find a left behind Vietnam vet or did credulous Toronto media give new life to an old scam?", Skeptic North, 14 May 2013. Retrieved on 15 May 2013.
- ↑ Mamer, Karl. "How Western journalists in Vietnam and POW NGOs were not fooled by tales of MIA Vet back from the dead", Skeptic North, 15 May 2013. Retrieved on 15 May 2013.
|
https://rationalwiki.org/wiki/POW-MIA
|
CC-MAIN-2021-49
|
refinedweb
| 1,144
| 60.35
|
I've been working on this code for quite a long time now, and I still can't seem to find out my problem. I'm trying to split a file that contains integers into 4 pieces, have each child process sum them, and then eventually pipe them to the parent process.
I've rewritten this program many ways, but my problem is always the same; I can get the first child process to sum its numbers, but the other child processes end up printing "0" for their individual sums.
Any ideas? (Note: I understand that the if-statement logic in the child process may be somewhat convoluted to understand, but I have double-checked it, and it works for the first child process)
Code:
#include <iostream>
#include <unistd.h>
#include <fstream>
#include <sys/wait.h>
using namespace std;

int main()
{
    int i, n = 0, total_numbers = 0, number, status, split_numbers;
    int previous_number[4];
    int my_number[4];
    int current_number[4] = {1, 1, 1, 1};
    int sum[4] = {0, 0, 0, 0};
    pid_t pid;
    ifstream myfile;

    myfile.open("test.dat");
    while (myfile >> number)
    {
        total_numbers++;
    }
    split_numbers = total_numbers / 4;
    myfile.close();

    pid = fork();
    for (i = 0; i < 4; i++)
    {
        if (pid == 0)
        {
            cout << "Child Process " << i << " " << getpid() << endl;
            myfile.open("test.dat");
            my_number[i] = split_numbers * (i + 1);
            previous_number[i] = split_numbers * i + 1;
            while (myfile >> number)
            {
                if (my_number[i] >= current_number[i] && previous_number[i] <= current_number[i])
                {
                    sum[i] += number;
                    cout << current_number[i] << endl;
                    current_number[i]++;
                }
            }
            cout << sum[i] << endl;
            myfile.close();
            exit(0);
        }
        else
        {
            waitpid(pid, &status, 0);
            cout << "Parent Process " << getpid() << endl;
            pid = fork();
        }
    }
    return 0;
}
NVM, current_number[i]++ should be outside the "if" loop. SOLVED.
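For reference, the split/fork/sum/pipe pattern this thread is after can be written so that each child sums one slice and reports its partial result back to the parent through its own pipe. The sketch below is an illustration of that general pattern (it sums an in-memory vector instead of re-reading a file), not the original poster's program:

```cpp
#include <algorithm>
#include <numeric>
#include <sys/wait.h>
#include <unistd.h>
#include <utility>
#include <vector>

// Sum `nums` across `nprocs` forked children; each child sums one slice
// and writes its partial result back to the parent through its own pipe.
long parallel_sum(const std::vector<int>& nums, int nprocs)
{
    std::vector<std::pair<pid_t, int> > readers;
    size_t chunk = (nums.size() + nprocs - 1) / nprocs;

    for (int i = 0; i < nprocs; ++i) {
        int fds[2];
        pipe(fds);
        pid_t pid = fork();
        if (pid == 0) {                        // child: sum one slice
            close(fds[0]);
            size_t lo = std::min(nums.size(), i * chunk);
            size_t hi = std::min(nums.size(), lo + chunk);
            long part = std::accumulate(nums.begin() + lo, nums.begin() + hi, 0L);
            write(fds[1], &part, sizeof part);
            close(fds[1]);
            _exit(0);
        }
        close(fds[1]);                          // parent keeps only the read end
        readers.push_back(std::make_pair(pid, fds[0]));
    }

    long total = 0;
    for (size_t i = 0; i < readers.size(); ++i) {
        long part = 0;
        read(readers[i].second, &part, sizeof part);
        close(readers[i].second);
        waitpid(readers[i].first, NULL, 0);
        total += part;
    }
    return total;
}
```

Using pipes this way also avoids the original code's pitfall of every child re-reading the whole file and tracking positions with shared-looking (but actually copied) counters.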
Overview
Despite the wealth of information on the internet, installation guides for openCV are few and far between. Whilst we have used openCV packages in previous projects, they have always been wrapped in an addon format, which obviously makes them easier to use, but for a forthcoming project we needed the ability to access the library directly. Briefly, openCV is a library of functions which mainly focuses on image analysis, processing and evaluation. In layman's terms, it allows computers to process and understand images and other forms of visual data.
In this post we will explain how we managed to work around the mysterious installation process, and provide a simple set of instructions that will enable you to install, build and use the openCV libraries and binaries on your system.
First you will need:
Ready … let's begin!
Building OpenCV
Update:
Having rebuilt openCV using the XCode again the makefile for some reason is not generated, will try and find out why. For now use the Terminal method as outlined below.
Attention: Steps 2a and 2c document methods of installing the static and shared libraries respectively.
Step 1:
Download openCV and unzip it somewhere on your computer. Create two new folders inside of the openCV directory, one called StaticLibs and the other SharedLibs.
Step 2a: Build the Static Libraries with Terminal.
To build the libraries in Terminal.
- Open CMake.
- Click Browse Source and navigate to your openCV folder.
- Click Browse Build and navigate to your StaticLibs folder.
- Uncheck BUILD_SHARED_LIBS, click Configure and then Generate, then open Terminal and type the following commands.
- cd <path/to/your/opencv/staticlibs/folder/> - make (This will take awhile) - sudo make install
Enter your password.
This will install the static libraries on your computer.
Step 2c: Build the Shared Libraries with Terminal.
- Open CMake.
- Click Browse Source and navigate to your openCV folder.
- Click Browse Build and navigate to your SharedLibs folder.
- Check BUILD_SHARED_LIBS, click Configure and then Generate, then open Terminal and type the following commands.
- cd <path/to/your/opencv/SharedLibs/folder/> - make (This will take awhile) - sudo make install
Enter your password.
This will install the shared libraries on your computer.
Make an Application
This is a very basic example, but similar principles can be applied to other code.
For this post, lets make an application that shows two images, one normal and one that has been put through a blur filter.
Step 1:
- Create a new folder somewhere on the computer.
- Inside the folder, create a CMakeLists.txt file then create a BlurImage.cpp file.
- Then add an image file.
For this example I’ll use this fruity number.
Step 2:
Open the BlurImage.cpp in your favourite text editor and add the following text.
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace std;
using namespace cv;

Mat src;
Mat dst;

char window_name1[] = "Unprocessed Image";
char window_name2[] = "Processed Image";

int main( int argc, char** argv )
{
    /// Load the source image
    src = imread( argv[1], 1 );

    namedWindow( window_name1, WINDOW_AUTOSIZE );
    imshow("Unprocessed Image", src);

    dst = src.clone();
    GaussianBlur( src, dst, Size( 15, 15 ), 0, 0 );
    namedWindow( window_name2, WINDOW_AUTOSIZE );
    imshow("Processed Image", dst);

    waitKey();
    return 0;
}
Save the file.
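For a feel of what GaussianBlur( src, dst, Size(15, 15), 0, 0 ) actually does: with sigma passed as 0, OpenCV derives it from the kernel size and convolves the image with a normalized Gaussian kernel. Here is a small stand-alone sketch (plain C++, no OpenCV required) that builds such a 1-D kernel; the sigma-from-size rule follows the formula OpenCV documents for getGaussianKernel when sigma = 0:

```cpp
#include <cmath>
#include <vector>

// Build a normalized 1-D Gaussian kernel of odd size `ksize`.
// When sigma <= 0, derive it from the size, as OpenCV does:
//   sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8
std::vector<double> gaussian_kernel(int ksize, double sigma = 0.0)
{
    if (sigma <= 0.0)
        sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8;

    std::vector<double> k(ksize);
    double sum = 0.0;
    int c = ksize / 2;                          // center tap
    for (int i = 0; i < ksize; ++i) {
        double d = i - c;
        k[i] = std::exp(-(d * d) / (2.0 * sigma * sigma));
        sum += k[i];
    }
    for (int i = 0; i < ksize; ++i)
        k[i] /= sum;                            // weights sum to 1
    return k;
}
```

The 2-D 15×15 blur is just this kernel applied once along rows and once along columns.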
Step 3:
Open the CMakeLists.txt file then add the following text.
cmake_minimum_required(VERSION 2.8)
project( BlurImage )
find_package( OpenCV )
include_directories( ${OpenCV_INCLUDE_DIRS} )
add_executable( BlurImage BlurImage.cpp )
target_link_libraries( BlurImage ${OpenCV_LIBS} )
Save the File.
Step 4:
Open Terminal and navigate to your applications directory.
- cd <path/to/application/folder> - /Applications/CMake.app/Contents/bin/cmake . - make
This will generate both the makefile and the executable file.
Then type
./BlurImage image.jpg
Yay … it works! All you need to do is add your own .cpp file and alter the .txt file, then follow the above commands.
For more detailed examples go to the openCV Tutorial Page, or check out the sample folder inside the opencv folder.
Huge thank you to Daniel Shiffman, whose guide put us on the right track.
Hi ! First i would like to thank you for the descriptive steps to install openCV. but i have a problem with the very last step. In step 4 when I input “make” in the terminal i don’t get the “[100%] Built target BlurImage.”
pls help me in sorting it out! Thank you
Hi Prabhakaran,
No problems, do you get any error messages? Could you possibly copy the output?
David
Hi David,
Thanks for the reply. I tried building using step 2a… It Works!!!!
I am new to mac and openCV. I had trouble installing but your blog made it easy. Thanks 🙂
Hi, in step 2a when i input “sudo make install” i get:
make: *** No rule to make target `install’. Stop.
Try going up one directory then try running the command again.
The same result
make: *** No rule to make target `install’. Stop.
Hi Graziano,
Am trying to figure out this error. For now try using the Terminal methods.
Cheers
Thanks. Unfortunately, the terminal method gives the same result for “make”
make: *** No targets specified and no makefile found. Stop.
Hmm could you possibly copy the terminal window output and commands and post them on here?
MacBook-Pro:~ graziano$ cd Documents/opencv-2.4.9/StaticLibs/
MacBook-Pro:StaticLibs graziano$ make
make: *** No targets specified and no makefile found. Stop.
Hi Graziano,
Have you ran CMake?
In your staticlib folder there should be a Makefile file.
Cheers
I recompiled and now it works. i think that the problem was in 10.10 SDK.
For a new project I have to create every time CMakeLists.txt ??
Hi…
First of all thanks for the guide.
Now, I have a problem … I followed the guide to install opencv 3.0, unfortunately, then, I realized that I need the version 2.4.* ’cause I have to use other libraries that are not supported in 3.0.
How can I then uninstall opencv 3.0 then go to reinstall the 2.4.9?
thanks
Luca
Hi Luca,
If you follow the guide using the openCV 2.4.* files it should replace the openCV 3.0.0 files.
Best
This is great and thanks a lot for the clean nice steps . worked like charm.
Hi,
under: “Step 2c: Build the Shared Libraries with Terminal” you showed commands for the terminal:
”
– cd
– make (This will take awhile)
– sudo make install
”
Should “staticlibs” be “sharedlibs” instead?
Thanks,
Katharina
In my previous comment I meant:
cd
should staticlibs be replaced by sharedlibs?
It should, just corrected the post. Thanks for spotting it.
D
Thanks a lot for the post. It worked perfectly for me under Mavericks.
Has anyone been able to use the Viz libraries on macosx?
Thanks
Hello David,
Thank you for creating a clear walk through guide. It worked flawlessly on Yosemite. Just wanted to make a comment: I used the path “ShareLibs” instead of “StaticLibs” as mentioned in the section “Step 2c: Build the Shared Libraries with Terminal.”
Thank you again.
Wow! I followed some compile instructions on the internet and they worked. That never happens! Thank you for this post.
You saved me from much heartache! Thank you very much! Installing openCV is such a pain on OSX
This worked perfectly for me under Mountain Lion, surprisingly! Why I’m still using Mountain Lion is another story.
Thank-you!
Extremely straightforward and the only opencv tutorial for mac that’s worked without crashing/exploding.
Thanks!!!!
Thank you, it works on Yosemite 10.10.2 without any problems.
Hi,
I have to build both static and shared libraries?
Not particularly depends on what you do with them. This post explains it quite well.
Hello, is there any advantage of using this method instead of using the homebrew package manager?
Hi Andre,
Its just personal preference. But, can’t see any disadvantages to using homebrew. This method for the most part tries to avoid filling your hard drive up with unnecessary data.
Best
David
How do I add a BlurImage.cpp file?
I’m a bit lost on that part.
Thanks.
Hi Ranvit, open terminal navigate to your folder then type touch BlurImage.cpp should generate the file.
Best
Thank you for creating this very good step by step tutorial!
Using it right now…
I got an error but after google it I found that must not exist white spaces anywhere on the path to the files.
Maybe you can add this information to this great howto.
Again, thanks for share your knowledge!
Thank you MOSO – your post helped a lot – was getting stuck at the “make” stage at 43% until I changed the paths to make sure there were no spaces in the folder names, and re-followed the tutorial above.
Some clarifications though if you can:
– This tutorial says (in the title) that it is for OSX 10.10, but step 2a and 2c get me to put something to do with MacOSX10.9.sdk into the CMAKE_OSX_SYSROOT field in CMAKE… can you check this isn’t for an old tutorial made for 10.9, or confirm we really are pointing CMAKE to the 10.9.sdk instead of the 10.10 one?
– your next tutorial on setting up XCode 6.1 for it (i am using Xcode 6.1.1 though, close enough I hope) says that it’s looking for files in /usr/local/include and /usr/local/lib. But this tutorial asks me to extract the OpenCV zip file “anywhere” on my computer… is that extracted zip file needed after this installation is complete? I’m not sure if my future programs are still referencing that.
I just want to be able to add OpenCV as a framework to my current projects. I already have other frameworks added but they are all “in-built” somehow.
Hi Aaron,
Many thanks for your comments.
I will change the white space errors.
The sdk issue is a curious one. When I installed the libraries on yosemite, for some reason i didnt have the the 10.10.sdk! It should be fine with 10.10 though.
With installing the libs, you download the source code to your desktop then through the cmake process you create the makefiles which then installs the libraries into your /usr/local/bin and /usr/local/include. So the libs are installed in a static location. Then in xcode you just point the compiler at the libs you need.
Best
Hi,
When I ‘make’ in step 2a and c, I get up to 61% but then it stops due to errors. I would suggest unchecking BUILD_opencv_legacy as well. It seems to have some terrible code that my compiler does not like. When I unchecked legacy, everything worked smoothly.
If legacy is an important component and if there is a way to make my compiler ignore those errors then I would love to hear it. In the meantime everything seems to be working fine. Thanks for this awesome tutorial.
Hi,
I have a problem after typing “make” in the step 2)a, it stops at 61pourcent and says:
/Users/tomlorentbourdo/Desktop/opencv-2.4.10/modules/legacy/src/calibfilter.cpp:98:9: error:
comparison of array ‘this->latestPoints’ not equal to a null pointer is
always true [-Werror,-Wtautological-pointer-compare]
if (latestPoints != NULL)
^~~~~~~~~~~~ ~~~~
/Users/tomlorentbourdo/Desktop/opencv-2.4.10/modules/legacy/src/calibfilter.cpp:526:9: error:
address of array ‘this->latestCounts’ will always evaluate to ‘true’
[-Werror,-Wpointer-bool-conversion]
if( latestCounts )
~~ ^~~~~~~~~~~~
2 errors generated.
make[2]: *** [modules/legacy/CMakeFiles/opencv_legacy.dir/src/calibfilter.cpp.o] Error 1
make[1]: *** [modules/legacy/CMakeFiles/opencv_legacy.dir/all] Error 2
make: *** [all] Error 2
mbp-de-tom:StaticLibs tomlorentbourdo$
Hi Tom,
Much like Dan in the comment above has done, I’d uncheck BUILD Legacy in the CMAKE application. Think that legacy has some issues.
Best
David
Oh well, sorry for making you lose your time I just didn’t see, thank you for your fast answer and your great work and thanks to Dan too!
No worries, if it still misbehaves. Try the opencv 3.0.0 alpha release
Best
Robot Arm with Voice Control - Tutorial (part 2)
Here is part 2 of the robotic arm tutorial. Click here for the first part.
Running julius
If you've managed to perform Part 1 of the tutorial successfully, then in the command line in the 'voxforge/auto' directory, run:
julius -input mic -C julian.jconf
Julius should then wait for microphone input, and if you speak into the microphone, for example 'elbow up' then it should print output similar to the following:
<<< please speak >>>
------
### read waveform input
Stat: capture audio at 48000Hz
Stat: adin_alsa: latency set to 32 msec (chunk = 1536 bytes)
Error: adin_alsa: unable to get pcm info from card control
Warning: adin_alsa: skip output of detailed audio device info
STAT: AD-in thread created
pass1_best: <s> ELBOW UP </s>
pass1_best_wordseq: 0 2 7 1
pass1_best_phonemeseq: sil | eh l b ow | ah p | sil
pass1_best_score: -9641.213867
### Recognition: 2nd pass (RL heuristic best-first)
STAT: 00 _default: 13 generated, 13 pushed, 5 nodes popped in 388
sentence1: <s> ELBOW UP </s>
wseq1: 0 2 7 1
phseq1: sil | eh l b ow | ah p | sil
cmscore1: 1.000 1.000 1.000 1.000
score1: -9581.207031
<<< please speak >>>
Note the decoder output
sentence1: <s> ELBOW UP </s>
As well as the confidence scores:
cmscore1: 1.000 1.000 1.000 1.000
score1: -9581.207031
Where cmscore1 indicates the confidences for the sentence, in this case, 'silence word word silence'. score1 shows the Viterbi score as calculated by Julius. All this is printed to the tty's stdout, and all we need to do is to filter out these lines and use them to control the robotic arm.
The program
I chose Python for the ease of programming as well as its wide range of modules. Since I'm not too worried about speed or memory usage, I managed to build the robotic arm and program it within a week despite my right hand being unable to type (yes, python's that great!). I assume by doing this tutorial you already have basic knowledge of Python. At the beginning of the script we import the required modules:
#!/usr/bin/python
import pexpect
...
Pexpect is a module for creating a pseudo-tty that we can connect to the Julius LVCSR decoder to obtain its output. Once we obtain the output, we then 1. filter out the lines we need (mainly sentence1 and cmscore1, but I've also found the Viterbi score can be useful in some cases), 2. filter out low-confidence results, and finally 3. send the commands via USB to the robotic arm.
Obtaining the output
There are many ways to obtain the output - I initially tried the subprocess module, but settled on the pexpect module as it didn't have the pipe buffering issues that subprocess encountered. With pexpect, spawning julius and getting its output in blocks was very simple:
child = pexpect.spawn('julius -input mic -C julian.jconf')
while True:
    try:
        child.expect('please speak')
        process_julius(child.before)
    except KeyboardInterrupt:
        child.close(force=True)
        break
As before, there are many ways to filter out the desired output and confidence scores. First, I need to determine if an output was generated at all:
def process_julius(out_text):
    match_res = re.match(r'(.*)sentence1(\.*)', out_text, re.S)
    if match_res:
        get_confidence(out_text)
    else:
        pass
Here we use the python regular expressions (re) module, so put at the beginning of the python script:
import re
After making sure we have a sentence, we can then process the text block and extract sentence1, cmscore1 and score1:
def get_confidence(out_text):
    linearray = out_text.split("\n")
    for line in linearray:
        if line.find('sentence1') != -1:
            sentence1 = line
        elif line.find('cmscore1') != -1:
            cmscore1 = line
        elif line.find('score1') != -1:
            score1 = line
    cmscore_array = cmscore1.split()
    #process sentence
    err_flag = False
    for score in cmscore_array:
        try:
            ns = float(score)
        except ValueError:
            continue
        if (ns < 0.999):
            err_flag = True
            print "confidence error:", ns, ":", sentence1
    score1_val = float(score1.split()[1])
    if score1_val < -13000:
        err_flag = True
        print "score1 error:", score1_val, sentence1
    if (not err_flag):
        print sentence1
        print score1
        #process sentence
        process_sentence(sentence1)
    else:
        pass
In this function, I also set the criterion of 0.999 for cmscore1, and -13000 for score1. You can tweak these values until you get good accuracy and robustness, and training the acoustic model more would also help. If the output passes all these tests, we then pass the command to the process_sentence() function to move the robot arm. To proceed, let's put the python script aside for a bit and examine the robotic arm USB protocol.
Robot Arm USB Protocol
According to notbrainsurgery's deconstruction of the robotic arm's USB protocol, the motors are controlled by 3-byte USB control transfers. Since these are ordinary electric motors, we can only control their direction in addition to turning it on or off.
The first byte can be divided into four half-nibbles. A half-nibble is two bits i.e. 00 or 01.
Reading the first byte left to right,
- the first half-nibble controls the shoulder,
- the second controls the elbow,
- the third controls the wrist,
- the fourth controls the grip.
For the second byte, the fourth half-nibble controls the base rotation.
Finally, for the third byte, the last bit turns the light on or off.
The value of the half-nibble (01 or 10) commands the motor to run one direction or the other, while "00" will stop the motor.
So the python script needs to set the USB control bits appropriately to move the motors. I chose to send a 1-second command to move the motors and then stop automatically. This spares me from having to frantically say "stop" while the robot arm grinds against its safety gear when the movement limit is reached, but the disadvantage is that I may have to issue commands repeatedly to get the desired movement. To do this, we need to import 2 modules: the python time module, and the pyusb module (which requires libusb) for interacting with USB devices:
import time
import usb.core
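Putting the protocol above into code: the helper below (my own naming, not from the original write-up) packs the five motor commands and the light bit into the three bytes described. Each motor value is 0 (stop), 1 or 2 (the two directions):

```python
def build_command(shoulder=0, elbow=0, wrist=0, grip=0, base=0, light=0):
    """Pack motor commands into the arm's 3-byte control transfer.

    Reading byte 1 left to right: shoulder, elbow, wrist, grip --
    one half-nibble (two bits) each. Byte 2 carries the base rotation
    in its last half-nibble, and byte 3 carries the light in its last bit.
    """
    byte1 = (shoulder << 6) | (elbow << 4) | (wrist << 2) | grip
    byte2 = base & 0b11
    byte3 = light & 0b1
    return [byte1, byte2, byte3]
```

With pyusb, these bytes would then go out as a control transfer, something like dev.ctrl_transfer(0x40, 6, 0x100, 0, build_command(elbow=1)); the exact request values depend on the arm's firmware, so treat them as an assumption to verify against notbrainsurgery's write-up.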
Now on to the final part of the tutorial!
child.expect timeout
If there is no activity within 30 seconds (no one speaks), then child.expect fails with a timeout.
Traceback (most recent call last):
File "./core_julius.py", line 50, in
child.expect('please speak', timeout=60)
File "/usr/lib/python2.7/dist-packages/pexpect.py", line 1316, in expect
return self.expect_list(compiled_pattern_list, timeout, searchwindowsize)
File "/usr/lib/python2.7/dist-packages/pexpect.py", line 1330, in expect_list
return self.expect_loop(searcher_re(pattern_list), timeout, searchwindowsize)
File "/usr/lib/python2.7/dist-packages/pexpect.py", line 1414, in expect_loop
raise TIMEOUT (str(e) + '\n' + str(self))
pexpect.TIMEOUT: Timeout exceeded in read_nonblocking().
version: 2.4 ($Revision: 516 $)
command: /opt/julius/bin/julius
args: ['/opt/julius/bin/julius', '-input', 'mic', '-C', 'julius.jconf']
searcher: searcher_re:
0: re.compile("please speak")
buffer (last 100 chars): >>>
before (last 100 chars): >>>
after:
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 2972
child_fd: 3
closed: False
timeout: 30
delimiter:
logfile: None
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
Opened 11 years ago
Closed 11 years ago
Last modified 11 years ago
#441 closed defect (fixed)
0.10-incompatibility: Broken.
Description
With Trac trunk (r3411) it stopped working with a KeyError (check attached log). According to the backtrace, the code in question seems to be this section:
def _heading_formatter(self, match, fullmatch): ... anchor = self._anchors[-1] ...
I have modified it to the following (code taken from Trac Wiki formatter directly):
def _heading_formatter(self, match, fullmatch): ... anchor = fullmatch.group('hanchor') or '' ...
Seems to be working now. Hope this helps.
Attachments (8)
Change History (19)
Changed 11 years ago by
comment:1 follow-up: 2 Changed 11 years ago by
With Trac trunk (r3411) it stopped working with KeyError (check attached log).
Actually, this changed in [T3408].
def _heading_formatter(self, match, fullmatch): ... anchor = fullmatch.group('hanchor') or '' ...
No, that's not a correct fix. See the abovementioned change for an example of how to get the anchor.

The hanchor group is for an optional, explicitly given id, which, most of the time, will not be given.
comment:2 Changed 11 years ago by
Changed 11 years ago by
comment:3 Changed 11 years ago by
comment:4 Changed 11 years ago by
Whilst MyOutlineFormatter.format is being tweaked, purely as a clarification, the following three lines could be moved outside the enclosing for loop:

active = ''
if page == active_page:
    active = ' class="active"'

...since neither page nor active_page changes from one loop iteration to the next.
comment:5 Changed 11 years ago by
comment:6 Changed 11 years ago by
cboos' macro.py suffers from inconsistent indentation schemes, and also, I think, introduces some bugs by accidentally shifting some statements to different block levels.
I've cleaned up the indentation and put the shifted statements back to their original block level, and am attaching the result as a patch....
Changed 11 years ago by
comment:7 Changed 11 years ago by
comment:8 Changed 11 years ago by
I've split up cboos' patch into multiple changes:
- fix441: Fix this ticket. (also, added the change I suggested in comment:4)
- tweaks: General refactoring.
- linewrap: Wrap overly long lines.
- wmbref: Convert to WikiMacroBase.
- 011notes: Add notes about 0.11.
(These should be applied in that order - they have interdependencies.)
Additionally, cboos' patch contained:
- Removal of trailing whitespace: I've not included that in the above patches, since it is easier for a committer to run a simple editor command, than review a patch doing the same (but it would be nice to have this done).
- Introduction of a coding: utf-8 statement: I've dropped that, since it was only there to support the addition of a weird angled quote character in an added comment, which looked like a typo anyway.
- Placing of an assignment to args into an else: clause: I've removed this change, since it results in args being a different datatype (string vs. list) depending on whether args are provided, which doesn't seem right.
- A couple of odd whitespace changes within lines: I left these out.
I will now attach the 5 patch files.
comment:9 Changed 11 years ago by
Oops! There's an error in various places throughout cboos' macro.py and my derivatives:

out.write(system_message(MESSAGE), None)

is supposed to be:

out.write(system_message(MESSAGE, None))

The error is present in the first of two uses of system_message in macro.py. In tweaks.patch, I accidentally spread the error to the second use too. In linewrap.patch, there's one change which isn't a pure line-wrap: the accidental spreading is undone again.

Rather than re-attaching fixed versions of tweaks.patch and linewrap.patch, please just correct the placement of the parentheses as described above, in tweaks.patch and linewrap.patch, before applying them - thanks!
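The difference is easy to miss at a glance. A tiny self-contained Python illustration (with stand-in functions, not Trac's real ones) shows why the misplaced parenthesis blows up: write() receives two arguments instead of one.

```python
def system_message(msg, text=None):
    # Stand-in (not Trac's real implementation), just for illustration.
    return '<div class="system-message">%s %s</div>' % (msg, text or '')

class Formatter(object):
    def write(self, data):          # write() takes exactly one argument
        self.data = data

out = Formatter()
out.write(system_message('error', None))        # correct: one argument to write()

try:
    out.write(system_message('error'), None)    # wrong: two arguments to write()
except TypeError:
    print('write() was given an extra positional argument')
```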
comment:10 Changed 11 years ago by
comment:11 Changed 11 years ago by
Yep, thanks for the fixes to my fixes, maxb :)
Also, the system_message was slightly improved in [T3431].
Lastly, don't hesitate to comment and brainstorm further relative to the attachment:011notes.patch
Trac backtrace of TocMacro error
In this tutorial we will look at how to build a sample distributed application.
First look at the definition.
Source :
In Zookeeper we can handle partial failure. Partial failure means something like this.
Suppose we send a message across the network from one node to another. If the network fails, the sender does not know whether the receiver got the message. The only way to find out is to reconnect to the receiver and ask it. This is called a partial failure.
Zookeeper Characteristics
1. Simple
ZooKeeper has a few valuable operations, such as ordering and notifications.
2. Expressive
These operations can be used to build large data structures and protocols.
3. Highly Available
ZooKeeper runs on a collection of machines and is designed to be highly available.
4. Loosely Coupled Interaction
ZooKeeper participants do not need to know about one another.
Installing Zookeeper
Here are the steps to install ZooKeeper.
1. Installing ZooKeeper requires Java 6 or a later version. You can download the latest Java version from Oracle.
2. After installing the JDK, set the path.
Windows: set the path in the environment variables.
Unix: open a terminal, type vi ~/.bash_profile, then add -> export JAVA_HOME=/path/to/java/dir -> export PATH=$PATH:$JAVA_HOME/bin
3. Download Zookeeper
4. Unzip it and set the ZooKeeper path variable.
5. Before running the ZooKeeper service you have to create the conf/zoo.cfg file containing:
tickTime=2000
dataDir=/path/zookeeper/dir
clientPort=2181
6. Now everything is ready to start the ZooKeeper server:
zkServer.sh start
7. zkServer.sh status can be used to check whether ZooKeeper is running.
Group Membership
Create a Group
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class CreateGroup implements Watcher {
    private static final int SESSION_TIMEOUT = 5000;
    private ZooKeeper zk;
    private CountDownLatch connectedSignal = new CountDownLatch(1);

    public void connect(String hosts) throws IOException, InterruptedException {
        zk = new ZooKeeper(hosts, SESSION_TIMEOUT, this);
        connectedSignal.await();
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getState() == KeeperState.SyncConnected) {
            connectedSignal.countDown();
        }
    }

    public void create(String groupName) throws KeeperException, InterruptedException {
        String path = "/" + groupName;
        String createdPath = zk.create(path, null, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        System.out.println("Created " + createdPath);
    }

    public void close() throws InterruptedException {
        zk.close();
    }
}
Join The Group
public class ConnectionWatcher implements Watcher {
    private static final int SESSION_TIMEOUT = 5000;
    protected ZooKeeper zk;
    private CountDownLatch connectedSignal = new CountDownLatch(1);

    public void connect(String hosts) throws IOException, InterruptedException {
        zk = new ZooKeeper(hosts, SESSION_TIMEOUT, this);
        connectedSignal.await();
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getState() == KeeperState.SyncConnected) {
            connectedSignal.countDown();
        }
    }

    public void close() throws InterruptedException {
        zk.close();
    }
}
Retrieve

public class ListGroup extends ConnectionWatcher {
    public void list(String groupName) throws KeeperException, InterruptedException {
        for (String child : zk.getChildren("/" + groupName, false)) {
            System.out.println(child);
        }
    }
}
Znodes can be of two types: persistent or ephemeral. The type is set at creation time and may not be changed later. An ephemeral node is deleted when the client's session ends or the client exits the application, and ephemeral nodes cannot have children. A persistent node, on the other hand, is not deleted when the client's session ends or the client exits the application.
Watches
Watches allow a client to get a notification when a node changes in some way. Watches are set through the ZooKeeper service.
Useful Links
I Built a NativeScript/Vue.js App and You Won't Believe What Happened Next...
Please forgive the clickbait title. I was struggling with what to actually title this blog post and just decided to give up and go a bit over the top. With how little I'm blogging lately, I figured this would at least put a smile on my readers' faces and that's worth something. ;) Speaking of my readers, those of you who have been around here for a while know I've been a fan of NativeScript since its release, but I've also blogged very little about it. It's consistently been on my "Things I'm going to do this year" posts and I never get around to actually working with it. Well, the good news for me is that while I'm between jobs, I've got a client who wants to build a NativeScript app and I've got the time (while on the clock, and yes, I'm very lucky for that) to learn while I build out the project. Even more lucky for me is that there is a NativeScript Vue project that kicks major butt. I thought I'd share my experience playing with it over the past week as well as a simple application I built with it.
The first thing I want to address is the development experience. I've been following the work the NativeScript team has done in that regard, but I actually got to play with multiple variations of it and I have to say - they have done incredible work in this area.
The CLI
So yes, you have a command line. It's always felt a bit "heavy" to me in terms of installation and speed. That's not very scientific but it feels like the install process is a bit long and has a lot of moving parts. I did my testing in PowerShell as I didn't want to try getting the Android SDK running under WSL. The CLI can actually handle that for you, but in my case I already had Android installed. You can see more about this process at the CLI installation docs but I guess my point here is to not expect a quick
npm i nativescript that will finish in a few seconds. I don't think there's anything that can be done about that, just consider this as a heads up.
Once you do get it installed, the CLI works ok, but in my testing, the first run of an Android project seemed incredibly slow. Much more than I've seen with Cordova. But that's a one time pain. You can run
tns run android --bundle and it will automatically reload your application as you save files.
After that initial load the process is - I'll say - "reasonably" fast. For my small-ish project it took maybe 3-4 seconds for each reload as I worked. In general this never bothered me until I started working on design type stuff and it got a bit frustrating when I screwed things up.
The command line will display any console.log messages, but I wish it would differentiate them a bit from its own output as well. Here's a random example, and while I know where my messages are, I'd like to see them called out more. (And yeah, it's way too small to even read. Sorry. But I haven't included a picture yet and it's way past due.)
console.log messages but I wish it would differentiate it a bit between it's own output as well. Here's a random example and while I know where my messages are, I'd like to see it called out more. (And yeah, it's way too small to even read. Sorry. But I haven't included a picture yet and it's way past due.)
Before I leave this section, a quick note. On multiple occasions I found that if I left the CLI running over night, in the morning it didn't seem to refresh well. I just CTRL-C the CLI and ran it again and everything would be fine. I'm assuming something just got lost between the terminal and the Android simulator. If I were a betting man, I'd totally blame Android.
The GUI app you think you don't need but you should try it anyway
So yes, I know we're all "real" developers and we have to use the CLI for everything, but you may want to check out the Sidekick application. This is a desktop GUI that wraps the CLI operations and lets you quickly generate new projects and test them. It also does a great job of rendering information about your project like installed plugins and other settings.
Even more impressive, it can handle building to your iOS device... from Windows. In my testing this was a bit flakey. I know (or I'm pretty sure ;) it worked a few times, but I had trouble getting my last project working correctly. I'm going to assume it will work consistently though and that's pretty damn impressive.
If you want to learn more, you can watch this nice little video the NativeScript folks whipped up.
One oddity about the Sidekick is that while it has a "logs" output panel, you won't find console.log messages there. Instead, you want to ensure you select "Start Debugger":
This pops open a new window and while still "noisy" like the CLI, it is a bit easier to read, I think, than the terminal.
The Simplest Solution - the Playground
So the third option, and one of the easiest if you want to skip worrying about SDKs, is the Playground. This is a web-based IDE that lets you play with NativeScript without having to install anything on your machine. It even includes multiple walkthrough tutorials to help you learn. Even better, you can use the QR code feature ("Yes!" all the marketers yell) and a corresponding app on your mobile device to test out the code. Oddly - you need two apps on your device and their docs don't tell you this - both the Playground app and the Preview app.
In general it felt like the refresh on this worked pretty well, on par with the CLI approach. But it's absolutely the simplest way to get started so I'd check it out if you aren't comfortable or familiar with the SDKs. And heck, even if you are, consider using it for the nice tutorials.
My App
So after going through a few tutorials and just generally kicking the tires, I decided to build "INeedIt" once again. This is an app I've built in multiple languages, platforms, etc. over the past few years. It's a simple wrapper for the Google Places API. It's a rather simple app in three discrete pages.
The first page gets your location and then provides a list of service types (bars, restaurants, ATMs, etc). This is based on a hard coded list that the API supports.
When you select a type, it then asks the API to find all the results of that type within a certain range of your location.
The final page is just a "details" view.
Before I show the code, some things to take note of.
- This isn't a very pretty application. The UI controls provided by NativeScript work well, but you do have to spend some time with CSS to make this look nice, and customized for your application. I spent a little time fiddling with the CSS a bit but decided I wouldn't worry about it too much.
- On that detail view, the Google Places API used to return photos with its detail result; now it has a separate API for that. I could have added that but decided to not worry about it. I only bring it up because the last version I built supported it.
- That map you see is an example of the Static Map API, one of my favorite Google services.
Ok, let's check out the code! First, the initial view. As an aside, I removed most of the data from the serviceTypes variable to keep the length of the post down. I should really abstract that out into a service.
<template>
  <Page>
    <ActionBar title="INeedIt"/>
    <GridLayout rows="*, auto, *" columns="*, auto, *">
      <ListView for="service in serviceTypes" @itemTap="loadService" row="0" rowSpan="3" col="0" colSpan="3">
        <v-template>
          <Label :text="service.label"/>
        </v-template>
      </ListView>
      <ActivityIndicator :busy="loading" row="1" col="1"/>
    </GridLayout>
  </Page>
</template>

<script>
import * as geolocation from 'nativescript-geolocation';
import { Accuracy } from 'ui/enums';
import TypeList from './TypeList';

export default {
  data() {
    return {
      loading: true,
      location: {},
      serviceTypes: [
        {"id":"accounting","label":"Accounting"},{"id":"airport","label":"Airport"},
        {"id":"veterinary_care","label":"Veterinary Care"},{"id":"zoo","label":"Zoo"}
      ]
    }
  },
  mounted() {
    console.log('lets get your location');
    geolocation.getCurrentLocation({ desiredAccuracy: Accuracy.high, maximumAge: 1000, timeout: 20000 })
      .then(res => {
        let lat = res.latitude;
        let lng = res.longitude;
        this.location.lat = lat;
        this.location.lng = lng;
        this.loading = false;
      })
      .catch(e => {
        console.log('oh frak, error', e);
      });
  },
  methods: {
    loadService(e) {
      let service = e.item;
      this.$navigateTo(TypeList, {props: {service: service, location: this.location}});
    }
  }
}
</script>

<style scoped>
ActionBar {
  background-color: #53ba82;
  color: #ffffff;
}
</style>
This is an example of SFC (Single File Components) that you may already be familiar with when working with Vue. I love that every aspect of this is the same except the layout, and frankly that wasn't much of a big deal. The only thing I struggled with was rendering the loading component in the middle of the page over the rest of the content and luckily nice people in the NativeScript Slack group helped me out. I don't want to minimize this. Learning layout stuff for NativeScript will be a process, but for the most part I think it generally just makes sense.
Now let's look at the next component, TypeList.vue.
<template>
  <Page>
    <ActionBar :title="service.label"/>
    <GridLayout rows="*, auto, *" columns="*, auto, *">
      <ListView for="place in places" @itemTap="loadPlace" row="0" rowSpan="3" col="0" colSpan="3">
        <v-template>
          <StackLayout>
            <Label :text="place.name" class="placeName"/>
            <Label :text="place.vicinity" class="placeAddress"/>
          </StackLayout>
        </v-template>
      </ListView>
      <Label row="0" rowSpan="3" col="0" colSpan="3" text="Sorry, there were no results." :visibility="noResults ? 'visible' : 'collapsed'"/>
      <ActivityIndicator :busy="loading" row="1" col="1"/>
    </GridLayout>
  </Page>
</template>

<script>
import places from '../api/places';
import Place from './Place';

export default {
  data() {
    return {
      loading: true,
      noResults: false,
      places: []
    }
  },
  props: ['service', 'location'],
  mounted() {
    places.search(this.location.lat, this.location.lng, this.service.id)
      .then(results => {
        console.log('results', results.data.result);
        this.places = results.data.result;
        if(this.places.length === 0) this.noResults = true;
        this.loading = false;
      });
  },
  methods: {
    loadPlace(event) {
      let place = event.item;
      this.$navigateTo(Place, {props: {place: place}});
    }
  }
}
</script>

<style scoped>
Label.placeName {
  font-size: 20px;
}
Label.placeAddress {
  font-style: italic;
  font-size: 10px;
}
</style>
On startup, it uses an API (more on that in a second) to get a list of results for the specific type being viewed. Then it simply renders it out in a ListView. The API I'm importing is here:
import axios from 'axios/dist/axios';

// radius set to 2000
const RADIUS = 2000;

export default {
  detail(id) {
    return axios.get(`...?id=${id}`);
  },
  search(lat, lng, type) {
    return axios.get(`...?lat=${lat}&lng=${lng}&radius=${RADIUS}&type=${type}`);
  }
}
I wrote Webtask.io wrappers for my Google Places API calls to make it a bit easier to share the code. You've got to love that comment about the radius. Epic comment there.
The final component, Place.vue, handles getting the details and rendering it. I really only show a few values. You could do a lot more here.
<template>
  <Page>
    <ActionBar :title="place.name"/>
    <StackLayout>
      <Label :text="place.name"/>
      <Label :text="details.formatted_address"/>
      <Image :src="mapUrl"/>
    </StackLayout>
  </Page>
</template>

<script>
import places from '../api/places';

export default {
  data() {
    return {
      loading: true,
      details: {
        formatted_address: ''
      },
      mapUrl: ''
    }
  },
  props: ['place'],
  mounted() {
    console.log('load place id', this.place.place_id);
    places.detail(this.place.place_id)
      .then(res => {
        console.log('my details are:', res.data.result);
        this.details = res.data.result;
        this.mapUrl = `https://maps.googleapis.com/maps/api/staticmap?center=${this.details.geometry.location.lat},${this.details.geometry.location.lng}&zoom=14&markers=color:blue|${this.details.geometry.location.lat},${this.details.geometry.location.lng}&size=500x500&key=mykeyhere`;
      });
  },
  methods: {
  }
}
</script>

<style scoped>
</style>
You'll notice my use of the Static Maps API includes a hard coded key. You can use the same key as you do for the Places API. I'd definitely abstract this out usually but as I was at the end of my demo I was getting a bit lazy. ;)
NativeScript Vue
In conclusion, I'm really impressed with Vue running under NativeScript. I'm going to go ahead and use it for the client's project and I definitely think it's worth your time. If you're already using it, I'd love to hear about your experience so please leave me a comment below.
I normally share my sample code but I don't have this in a repo anywhere. If anyone wants it though just ask and I'd be glad to share it.
Header photo by Patrick Fore on Unsplash
I'm following a college course about operating systems and we're learning how to convert from binary to hexadecimal, decimal to hexadecimal, etc. and today we just learned how signed/unsigned numbers are stored in memory using the two's complement (~number + 1).
We have a couple of exercises to do on paper and I would like to be able to verify my answers before submitting my work to the teacher. I wrote a C++ program for the first few exercises but now I'm stuck as to how I could verify my answer with the following problem:
char a, b;
short c;

a = -58;
c = -315;
b = a >> 3;

I'd like to see how a, b and c are stored in memory. Working it out on paper, I get:

a = 00111010 (it's a char, so 1 byte)
b = 00001000 (it's a char, so 1 byte)
c = 11111110 11000101 (it's a short, so 2 bytes)
The easiest way is probably to create an std::bitset representing the value, then stream that to cout.
#include <bitset>
#include <iostream>
...
char a = -58;
std::bitset<8> x(a);
std::cout << x;

short c = -315;
std::bitset<16> y(c);
std::cout << y;
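If you want to cross-check paper answers without compiling anything, the same two's-complement bit patterns can be reproduced in Python by masking the value to the type's width (this is just a verification sketch, not part of the original C++ question):

```python
def bits(value, width):
    """Two's-complement bit pattern of a signed integer, `width` bits wide."""
    return format(value & ((1 << width) - 1), '0{}b'.format(width))

a = -58
c = -315
b = a >> 3            # arithmetic shift: the sign is preserved

print(bits(a, 8))     # char, 1 byte   -> 11000110
print(bits(b, 8))     # char, 1 byte   -> 11111000
print(bits(c, 16))    # short, 2 bytes -> 1111111011000101
```

One caveat: right-shifting a negative value in C++ is implementation-defined, although most compilers perform an arithmetic shift, which is what Python's `>>` does for negative integers.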
This article presents a few examples on the use of the Python programming language in the field of data mining. The first section is mainly dedicated to the use of GNU Emacs and the other sections to two widely used techniques—hierarchical cluster analysis and principal component analysis.
This article is introductory because some topics such as varimax, oblimin, etc, are not included here and will be discussed in the future. The complete code is too long for a printed article, but is freely available at.
The toolbox used in this article is dependent on WinPython 3.4.4.2 and GNU Emacs 24.5 on Windows. My Emacs configuration for the Python language is very simple. The following lines are added to the dot emacs file:
(setq python-indent-guess-indent-offset nil)
(org-babel-do-load-languages
 'org-babel-load-languages
 '((python . t)))
(add-to-list 'exec-path "C:\\WinPython-32bit-3.4.4.2\\python-3.4.4")
(global-set-key (kbd "<f8>") (kbd "C-u C-c C-c"))
(setenv "PYTHONIOENCODING" "utf-8")
(setenv "LANG" "en_US.UTF-8")
The first line is useful to avoid the warning message: ‘Can’t guess python-indent-offset, using defaults: 4’ from Emacs. The next three lines are to use Python in the org-mode, and the last four lines are to use Emacs as an IDE. In the following org file, text, code, figures and a table are present at the same time. This is not very different from a Jupyter Notebook. Each code section can be evaluated with C-c C-c. The export of the whole file as HTML (C-c C-e h h) produces the output shown in Figure 1.
#+title: PAH method GCMS
#+options: toc:nil
#+options: num:nil
#+options: html-postamble:nil

<...some-text-here...>

#+begin_src python :var filename="method.png" :results file :exports results
<...some-python-code-here...>
#+end_src

#+results:
[[file:method.png]]

#+begin_src python :var filename="chromatogram.png" :results file :exports results
<...some-python-code-here...>
#+end_src

#+results:
[[file:chromatogram.png]]

#+attr_html: :frame border :border 1 :class center
<...a-table-here...>
To do this, it is necessary that Python is recognised by the system. You can do this by going to (Windows 7) Start → Control panel→System→Advanced system settings → Environment variables → User variables for <your-username>→Create, if not present, or modify the variable path→Add C:\WinPython-32bit-3.4.4.2\python-3.4.4;
Another method is to use Emacs as an IDE. A Python file can simply be evaluated by pressing the F8 function key (see the above mentioned kbd “<f8>” option). Figure 2 shows an Emacs session with three buffers opened. On the left side is the Python code, on the right side on the top a dired buffer as file manager and on the right side bottom is the Python console with a tabular output. This is not very different from the Spyder IDE (which is included in the WinPython distribution) shown in Figure 3, with the same three buffers opened.
Hierarchical cluster analysis
This example is about agglomerative hierarchical clustering. The data table is the famous iris flower data set and is taken from. It has 150 rows and five columns: sepal length, sepal width, petal length, petal width, species’ name (iris setosa from row 1 to 50, iris versicolor from row 51 to 100, iris virginica from row 101 to 150). The code is short, as shown below:
import scipy.cluster.hierarchy as hca
import xlrd
from pylab import *

rcParams["font.family"] = ["DejaVu Sans"]
rcParams["font.size"] = 10

w = xlrd.open_workbook("iris.xls").sheet_by_name("Sheet1")
data = array([[w.cell_value(r, c) for c in range(w.ncols)] for r in range(w.nrows)])
dataS = data - mean(data, axis=0)
o = range(1, w.nrows + 1)

y = hca.linkage(dataS, metric="euclidean", method="ward")
hca.dendrogram(y, labels=o, color_threshold=10, truncate_mode="lastp", p=25)
xticks(rotation=90)
tick_params(top=False, right=False, direction="out")
ylabel("Distance")
figtext(0.5, 0.95, "Iris flower data set", ha="center", fontsize=12)
figtext(0.5, 0.91, "Dendrogram (center, euclidean, ward)", ha="center", fontsize=10)
savefig("figure.png", format="png", dpi=300)
First, the table is read as an array with a nested loop; then, column centring is performed. There are various other scaling techniques, column centring is an example of one of them. The ‘S’ in dataS is for ‘scaled’. In this example, the euclidean metric and the ward linkage method are chosen. Many other metrics are available—for example, canberra, cityblock, mahalanobis, etc. There are also many other linkage methods —for example, average, complete, single, etc. Finally, the dendrogram is plotted as shown in Figure 4. In this example, the clusters are coloured by cutting the dendrogram at a distance equal to 10, using the option color_threshold. To enhance its readability, the dendrogram has been condensed a bit using the option truncate_mode.
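The same threshold used for colouring can also be turned into flat cluster labels with scipy's fcluster. Here is a small sketch, using two synthetic blobs as a stand-in for the iris spreadsheet (an assumption for illustration, since the xls file isn't reproduced here):

```python
import numpy as np
import scipy.cluster.hierarchy as hca

rng = np.random.RandomState(0)
# two well-separated groups of 20 points in 4 dimensions
data = np.vstack([rng.normal(0, 0.3, (20, 4)),
                  rng.normal(5, 0.3, (20, 4))])
dataS = data - data.mean(axis=0)

y = hca.linkage(dataS, metric="euclidean", method="ward")
# cut the tree at the same height used for color_threshold
labels = hca.fcluster(y, t=10, criterion="distance")
print(len(set(labels)))  # -> 2
```

Each entry of labels gives the cluster number of the corresponding row, which is handy when you want to colour the scores plot by cluster instead of by known species.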
Principal component analysis
This second example covers three different techniques: matrix algebra, singular value decomposition (SVD) and the modular toolkit for data processing (MDP). For the first, the covariance matrix is calculated on the scaled data. Then, eigenvalues and eigenvectors are calculated from the covariance matrix. Lastly, the eigenvalues and eigenvectors are sorted. The scores are calculated as a dot product of the scaled data with the eigenvectors. The percentage of variance explained and its running total are also calculated.
covmat = cov(dataS, rowvar=False)
eigval, eigvec = linalg.eig(covmat)
idx = eigval.argsort()[::-1]
eigval = eigval[idx]
eigvec = eigvec[:, idx]
scores = dot(dataS, eigvec)

percentage = [0] * w.ncols
runtot = [0] * w.ncols
for i in range(0, w.ncols):
    percentage[i] = eigval[i] / sum(eigval) * 100
runtot = cumsum(percentage)
The number of components (N), the variance explained by each component (VAR), its percentage (PCT) and the percentage running total (SUM) can be presented as a table. This table can be drawn using the package prettytable. The results are formatted with a certain number of decimal figures and then each column is added to the table.
from prettytable import *

v = range(1, w.ncols + 1)
e = ["%.2f" % i for i in eigval]
p = ["%.4f" % i for i in percentage]
r = ["%.2f" % i for i in runtot]

pt = PrettyTable()
pt.add_column("N", v)
pt.add_column("VAR", e)
pt.add_column("PCT", p)
pt.add_column("SUM", r)
pt.align = "r"
print(pt)
The result is a well-formatted table:
+---+------+---------+--------+
| N |  VAR |     PCT |    SUM |
+---+------+---------+--------+
| 1 | 4.23 | 92.4619 |  92.46 |
| 2 | 0.24 |  5.3066 |  97.77 |
| 3 | 0.08 |  1.7103 |  99.48 |
| 4 | 0.02 |  0.5212 | 100.00 |
+---+------+---------+--------+
The scree plot is plotted with a simple bar plot type (Figure 5), the scores (Figure 6) and the loadings (Figure 7) with plot. For the scores, the colours are chosen according to the different iris species, because in this example, the data are already categorised.
A bit more complex is the scores plot with clipart, as shown in Figure 8 as an example. The original clipart is taken from, and then processed via ImageMagick. Each clipart is read with imread, zoomed with OffsetImage and then placed on the plot at the scores coordinates with AnnotationBbox, according to the following code:
from matplotlib.image import imread
from matplotlib.offsetbox import AnnotationBbox, OffsetImage

i1 = imread("iris1.png")
i2 = imread("iris2.png")
i3 = imread("iris3.png")

o = range(1, w.nrows + 1)
ax = subplot(111)
for i, j, o in zip(s1, s2, o):
    if o < 51:
        ib = OffsetImage(i1, zoom=0.75)
    elif o > 50 and o < 101:
        ib = OffsetImage(i2, zoom=0.75)
    elif o > 100:
        ib = OffsetImage(i3, zoom=0.75)
    ab = AnnotationBbox(ib, [i, j], xybox=None, xycoords="data", frameon=False, boxcoords=None)
    ax.add_artist(ab)
The two plots about scores and loadings can be overlapped to obtain a particular plot called the biplot. The example presented here is based on a scaling of the scores as in the following code:
xS = (1 / (max(s1) - min(s1))) * 1.15
yS = (1 / (max(s2) - min(s2))) * 1.15
Then the loadings are plotted with arrow over the scores, and the result is shown in Figure 9. This solution is based on the one proposed at; it probably is not the best way, but it works.
The 3D plots (Figures 10 and 11) do not present any particular problems, and can be done according to the following code:
from mpl_toolkits.mplot3d import Axes3D

ax = Axes3D(figure(0), azim=-70, elev=20)
ax.scatter(s1, s2, s3, marker="")
for i, j, h, o in zip(s1, s2, s3, o):
    if o < 51:
        k = "r"
    elif o > 50 and o < 101:
        k = "g"
    elif o > 100:
        k = "b"
    ax.text(i, j, h, "%.0f" % o, color=k, ha="center", va="center", fontsize=8)
Using the singular value decomposition (SVD) is very easy—just call pcasvd on the scaled data. The result is shown in Figure 12.
from statsmodels.sandbox.tools.tools_pca import pcasvd

xreduced, scores, evals, evecs = pcasvd(dataS)
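Since pcasvd lives in the statsmodels sandbox (and may move between versions), the same decomposition can also be written directly with numpy. This sketch on random stand-in data additionally checks that the singular values agree with the covariance-matrix eigenvalues from the first technique:

```python
import numpy as np

rng = np.random.RandomState(1)
X = rng.normal(size=(150, 4))
Xc = X - X.mean(axis=0)                 # column centring, as before

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                          # identical to Xc @ Vt.T
evals = s ** 2 / (len(Xc) - 1)          # singular values -> covariance eigenvalues

cov_evals = np.sort(np.linalg.eigvalsh(np.cov(Xc, rowvar=False)))[::-1]
print(np.allclose(evals, cov_evals))    # -> True
```

This is the same relationship that makes the SVD route numerically preferable for wide or ill-conditioned data: the covariance matrix is never formed explicitly.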
The modular toolkit for the data processing (MDP) package (see References 4 and 5) is not included in WinPython; so it’s necessary to download the source MDP-3.5.tar.gz from. Then open the WinPython control panel and go to the install/upgrade packages tab. Drag the source file and drop it there. Click on ‘Install packages’. Last, test the installation with the following command:
import mdp mdp.test()
This is a bit time consuming; another test is the following command:
import bimdp bimdp.test()
In the following example, the scores are calculated using the singular value decomposition, so Figures 12 and 13 are equal to each other but rotated compared with Figure 6. This has been explained at, where, to quote user Hong Ooi: "The signs of the Eigenvectors are arbitrary. You can flip them without changing the meaning of the result, only their direction matters."
import mdp

pca = mdp.nodes.PCANode(svd=True)
scores = pca.execute(array(dataS))
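That sign arbitrariness is easy to demonstrate with a few lines of numpy: flipping the sign of an eigenvector (together with its score column) reconstructs exactly the same data, so only the direction of the plotted axes changes (again a sketch on random data, not the iris table):

```python
import numpy as np

rng = np.random.RandomState(2)
X = rng.normal(size=(30, 3))
Xc = X - X.mean(axis=0)

evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
scores = Xc @ evecs

flipped = evecs * np.array([1, -1, 1])   # flip the sign of one eigenvector
flipped_scores = Xc @ flipped

# both factorisations reconstruct the same centred data
print(np.allclose(scores @ evecs.T, flipped_scores @ flipped.T))  # -> True
```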
The MDP package is more complex than described here. Many other things can be done with it. It’s also well documented; for example, the tutorial is more than 200 pages long.
The examples presented here are also typical applications for another, very widely used, free and open source software, R. An interesting comparison of Python and R for data analysis was published some time ago (Reference 7). I can’t make a choice, because I like them both. Currently, I use Python almost exclusively, but in the past, R was my preferred language. It’s useful to develop the same script for both and then compare the results.
The example below should be straight forward for you to modify for many python use cases. There’s only really a couple of steps, create a docker (if you need additional Python libraries), configure the Python operator, code, plus input and outputs.
Building a docker
There is a great existing blog that describes how to create a simple docker, so I won’t repeat that here. Below you can see my docker definition.
# Use an official Python 3.6 image as a parent image
FROM python:3.6.4-slim-stretch

# Install python libraries
RUN pip install pandas
RUN pip install tornado==5.0.2

# Add vflow user and vflow group to prevent error
# "container has runAsNonRoot and image will run as root"
RUN groupadd -g 1972 vflow && useradd -g 1972 -u 1972 -m vflow
USER 1972:1972
WORKDIR /home/vflow
ENV HOME=/home/vflow
Let's take the pipeline that we previously developed, but now we will switch the JavaScript for Python.
Placing the Python3Operator on the canvas, shows no inputs and no outputs, for most pipelines you would want to modify this. The above JavaScript operator has an input called input(message) and an output called output(message), we would need something similar for Python.
I found acquiring the data into Python as a blob to be the easiest, as I had experienced character encoding issues, using the blob data type avoided this issue. The HTTP Client provides a blob output, which we will connect to.
We want the output of the python operator to be a message so that we can stop the pipeline running as before.
Now we have a Python operator with our input and output defined
Here’s the Python3 code that I used within the operator, the code is equivalent to the JavaScript example I shared previously
import pandas as pd
from io import BytesIO

def on_input(data):
    # Acquire data as bytes
    dataio = BytesIO(data)

    # Load data into a pandas DataFrame, skipping 5 rows
    df = pd.read_table(dataio, sep=',', skiprows=5, encoding='latin1',
                       names=['ER_DATE', 'EXCHANGE_RATE'])

    # Replace the "-" characters with null
    df['EXCHANGE_RATE'].replace('-', None, inplace=True)
    df = df.to_csv(index=False, header=False)

    # Create a Data Hub message - api.Message
    attr = dict()
    attr["message.commit.token"] = "stop-token"
    messageout = api.Message(body=df, attributes=attr)
    api.send("outmsg", messageout)

api.set_port_callback("input", on_input)
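The api object only exists inside the Data Hub runtime, but the callback logic can be smoke-tested locally by stubbing the three members the operator actually uses (api.Message, api.send and api.set_port_callback). The stub below is my own sketch, not part of any SAP SDK, and it feeds a simplified one-header-row CSV rather than the real exchange-rate file:

```python
import pandas as pd
from io import BytesIO

class FakeMessage:
    """Minimal stand-in for api.Message."""
    def __init__(self, body=None, attributes=None):
        self.body = body
        self.attributes = attributes

class FakeApi:
    """Records what the operator sends instead of routing it to a port."""
    Message = FakeMessage
    sent = []

    @staticmethod
    def send(port, message):
        FakeApi.sent.append((port, message))

    @staticmethod
    def set_port_callback(port, callback):
        pass  # the graph engine would wire this up for real

api = FakeApi()

def on_input(data):
    # same shape as the operator above, but skipping only 1 header row
    df = pd.read_csv(BytesIO(data), skiprows=1, names=['ER_DATE', 'EXCHANGE_RATE'])
    body = df.to_csv(index=False, header=False)
    attr = {"message.commit.token": "stop-token"}
    api.send("outmsg", api.Message(body=body, attributes=attr))

on_input(b"header\n2020-01-01,1.23\n2020-01-02,1.25\n")
port, msg = api.sent[0]
print(port, msg.attributes["message.commit.token"])  # -> outmsg stop-token
```

Testing this way catches pandas parsing mistakes before the slower save-and-redeploy cycle in the modeler.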
The easiest way I found to specify that my Python3Operator should use the pandas docker image was to use the "Group" feature. We can then tag the group with the same tags as my docker to link them both together. Just right-click on the python operator and choose Group. Now we can see the tags.
With that the pipeline is completed, we can save it (with a new name) and run it.
All being well, the pipeline should complete and we will see the same data as before.
Here’s a couple of links you may want to refer to.
Develop a custom Pipeline Operator with own Dockerfile
Automating Web Data Acquisition With SAP Data Intelligence
Hope it was useful for someone. 🙂
Thanks, Ian.
Nice post Ian. Thanks for sharing.
Regards
Dave
Nice post Ian!
How do groups in SDH work and can they also be used to influence node placement? Any references to documentation would be appreciated.
Thanks,
Henning
Hello Ian Henry,
I did the tutorial on how to use a Python operator in a SAP Data Hub pipeline (with the developer edition). During the execution of my graph I get this issue: "error while starting subengine: exit Status 127". The Python3Operator process is dead.

I would like to mention that I receive this error regardless of the use of this Operator in any graph.

I hope to have a solution to my problem.
Thanks
Tatiana
I would first try creating a new simple python docker, check if that works with the appropriate tags.
Then try creating a new operator with the correct python version and associate that with your docker.
- NAME
- SYNOPSIS
- DESCRIPTION
- NAMESPACE PREFIXES
- METHODS
- DEPENDENCIES
- SEE ALSO
- AUTHOR & COPYRIGHT
NAME
XML::RSS::Parser - A liberal object-oriented parser for RSS feeds.
SYNOPSIS
DESCRIPTION

SPECIAL PROCESSING NOTES
The parser will not include the root tags of rss or RDF in the tree. Namespace declaration information is still extracted.
The parser forces channel and item into a parent-child relationship. In versions 0.9 and 1.0, channel and item are sibling tags.
This change is inherited from recent changes in XML::Elemental. The previous system was flawed and not widely adopted. Clarkian notation is the form used by XML::SAX and XML::Simple, to name a few. Use the process_name function in XML::Elemental::Util to parse element and attribute names into their namespace URI and local name parts.
NAMESPACE PREFIXES
METHODS
The following objects and methods are provided in this package.
- XML::RSS::Parser->new
Constructor. Returns a reference to a new XML::RSS::Parser object.
- $parser->parse

- $parser->parse_file

- $parser->parse_string

- $parser->parse_uri

These parse methods return undef on failure; the error message can then be retrieved with the errstr method inherited from Class::ErrorHandler.
Once the markup has been parsed, it is automatically passed through the rss_normalize method before the parse tree is returned to the caller.
- XML::RSS::Parser->namespace(prefix)

Returns the namespace URI for the given prefix. Returns undef if the prefix is not known.
Although numeric assignment is fairly straightforward, assignment operations involving objects are a much trickier proposition. Remember that when you're dealing with objects, you're not dealing with simple stack-allocated elements that are easily copied and moved around. When manipulating objects, you really have only a reference to a heap-allocated entity. Therefore, when you attempt to assign an object (or any reference type) to a variable, you're not copying data as you are with value types. You're simply copying a reference from one place to another.
Let's say you have two objects: test1 and test2. If you state test1 = test2, test1 is not a copy of test2. It is the same thing! The test1 object points to the same memory as test2. Therefore, any changes on the test1 object are also changes on the test2 object. Here's a program that illustrates this:
using System;

class Foo
{
    public int i;
}

class RefTest1App
{
    public static void Main()
    {
        Foo test1 = new Foo();
        test1.i = 1;

        Foo test2 = new Foo();
        test2.i = 2;

        Console.WriteLine("BEFORE OBJECT ASSIGNMENT");
        Console.WriteLine("test1.i={0}", test1.i);
        Console.WriteLine("test2.i={0}", test2.i);
        Console.WriteLine("\n");

        test1 = test2;

        Console.WriteLine("AFTER OBJECT ASSIGNMENT");
        Console.WriteLine("test1.i={0}", test1.i);
        Console.WriteLine("test2.i={0}", test2.i);
        Console.WriteLine("\n");

        test1.i = 42;

        Console.WriteLine("AFTER CHANGE TO ONLY TEST1 MEMBER");
        Console.WriteLine("test1.i={0}", test1.i);
        Console.WriteLine("test2.i={0}", test2.i);
        Console.WriteLine("\n");
    }
}
Run this code, and you'll see the following output:
BEFORE OBJECT ASSIGNMENT
test1.i=1
test2.i=2

AFTER OBJECT ASSIGNMENT
test1.i=2
test2.i=2

AFTER CHANGE TO ONLY TEST1 MEMBER
test1.i=42
test2.i=42
Let's walk through this example to see what happened each step of the way. Foo is a simple class that defines a single member named i. Two instances of this class - test1 and test2 - are created in the Main method and, in each case, the new object's i member is set (to a value of 1 and 2, respectively). At this point, we print the values, and they look like you'd expect with test1.i being 1 and test2.i having a value of 2. Here's where the fun begins. The next line assigns the test2 object to test1. The Java programmers in attendance know what's coming next. However, most C++ developers would expect that the test1 object's i member is now equal to the test2 object's members (assuming that because the application compiled there must be some kind of implicit member-wise copy operator being performed). In fact, that's the appearance given by printing the value of both object members. However, the new relationship between these objects now goes much deeper than that. The code assigns 42 to test1.i and once again prints the values of both objects' i members. What?! Changing the test1 object changed the test2 object as well! This is because the object formerly known as test1 is no more. With the assignment of test1 to test2, the test1 object is basically lost because it is no longer referenced in the application and is eventually collected by the garbage collector (GC). The test1 and test2 objects now point to the same memory on the heap. Therefore, a change made to either variable will be seen by the user of the other variable.
Notice in the last two lines of the output that even though the code sets the test1.i value only, the test2.i value also has been affected. Once again, this is because both variables now point to the same place in memory, the behavior you'd expect if you're a Java programmer. However, it's in stark contrast to what a C++ developer would expect because in C++ the act of copying objects means just that - each variable has its own unique copy of the members such that modification of one object has no impact on the other. Because this is key to understanding how to work with objects in C#, let's take a quick detour and see what happens in the event that you pass an object to a method:
using System;

class Foo
{
    public int i;
}

class RefTest2App
{
    public void ChangeValue(Foo f)
    {
        f.i = 42;
    }

    public static void Main()
    {
        RefTest2App app = new RefTest2App();

        Foo test = new Foo();
        test.i = 6;

        Console.WriteLine("BEFORE METHOD CALL");
        Console.WriteLine("test.i={0}", test.i);
        Console.WriteLine("\n");

        app.ChangeValue(test);

        Console.WriteLine("AFTER METHOD CALL");
        Console.WriteLine("test.i={0}", test.i);
        Console.WriteLine("\n");
    }
}
In most languages - Java excluded - this code would result in a copy of the test object being created on the local stack of the RefTest2App.ChangeValue method. If that were the case, the test object created in the Main method would never see any changes made to the f object within the ChangeValue method. However, once again, what's happening here is that the Main method has passed a reference to the heap-allocated test object. When the ChangeValue method manipulates its local f.i variable, it's also directly manipulating the Main method's test object.
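For comparison, here is the first example rewritten in Python, which treats every object as a reference type in exactly the same way (my own sketch, not from the original text):

```python
class Foo:
    def __init__(self, i):
        self.i = i

test1 = Foo(1)
test2 = Foo(2)

test1 = test2        # copies the reference, not the object
test1.i = 42         # mutates the one shared object

print(test1.i, test2.i)  # -> 42 42
```

Just as the text describes for C#, the original test1 object is no longer referenced after the assignment and becomes eligible for garbage collection.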
Summary
A key part of any programming language is the way that it handles assignment, mathematical, relational, and logical operations to perform the basic work required by any real-world application. These operations are controlled in code through operators. Factors that determine the effects that operators have in code include precedence, and left and right associativity. In addition to providing a powerful set of predefined operators, C# extends these operators through user-defined implementations, which I'll discuss in Chapter 13.
Just a minor issue and not C++-specific:
if ((depth = desktop_color_depth()) != 0)
I'd rather see this one split up; it's overloaded, thus more difficult to read, and most of all, it invites errors involving = and ==.
To put it more bluntly and dogmatic: Never make (active) use of side-effects.
_______________________________
Indeterminatus. [Atomic Butcher]
"si tacuisses, philosophus mansisses" (if you had kept silent, you would have remained a philosopher)
Yes, C++ exception handling fails in a lot of ways and I wouldn't specify exceptions in function types, except that in my latest project I was getting segfaults until I added the throw() clause. I have no idea why, but I just go with the flow.
IMO it is more likely that you have something wrong elsewhere and since the error specification would lead to small changes in the binary these might cover up the symptoms (temporarily, another change might make it crash again). I've had a program segfault unless a single bool assignment was commented out (the real cause was elsewhere and the program had been working fine despite the bug for a couple of days).
I saw a bunch of points I wanted to make, but I'll only touch on two.
First off, use of enums for constants. I don't like that idea, because that's not what they're for. What's wrong with const ints again? After all... they are constants.
Second, struct vs class. Thomas is right, and if you move on to more civilized languages (like C#) it goes one step further. In C#, structs are passed by value, whereas classes are passed by reference.
BAF.zone | SantaHack!
Please note that I did not follow this thread. Nothing's wrong with const int, and nothing's wrong with enum. Both have their uses, and their domains are orthogonal. An enumeration is quite explicit in what it's there for: providing a range of values, noone should care (or have to care) how the respective values are represented "underneath".
Yeah, enums have their uses, but not for setting a constant for screen width or height, IMO.
Asking about good practices, is it a good practice to exceed the char type limit?
For example:
char a = 127;
a += 2; // a will be -127 now right?
Instead of:
char a = 127;
if (a + 2 > 127) a = a + 2 - 256;
else a += 2;
a += 2; // a will be -127 now right?
if (a + 2 > 127)
If your first statement were true, how would that second statement ever evaluate to true? Never, because -127 is not bigger than 127. You cannot detect an overflow with a greater-than sign. That makes no sense.
I always thought that char can hold up to 255 values and is unsigned by default? But I could be wrong about this.
No it's not.
It depends on the compiler. Some treat the char type as being signed, some do not. If chars are signed, then yes, it will wrap around; if not, it will now equal 129.
If you want to be certain whether a char will be signed, use signed char or unsigned char.
Sorry, I would then do this:
int a = 127;
signed char b;
if (a + 2 > 127) b = a + 2 - 256;
else b = a + 2;
If your first statement were true, how would that second statement ever evaluate to true? Never, because -127 is not bigger than 127. You cannot detect an overflow with a greater-than sign. That makes no sense.
Actually, I believe numerical literals are ints by default, which means the char would get promoted to an int before the operation, so it would be 129.
Still, the whole thing is really sketchy and I would avoid the situation altogether. Overflows should be considered an error, not a tool.
--- <-- Read it, newbies.
If you want to be certain whether a char will be signed, use signed char or unsigned char.
Well, if you're using it for text you should just use plain char, so that it's in line with whatever your compiler/project uses. If you want a 1 byte integer, you should explicitly use signed/unsigned char.
Actually, I believe numerical literals are ints by default
I believe they should be a signed integer type which matches the word size of the target platform (which isn't always int - wouldn't want to be too consistent or anything).
Well, if you're using it for text you should just use plain char, so that it's in line with whatever your compiler/project uses.
Yup. char values outside of 0-127 shouldn't be used for text anyway.
Sorry, I would then do this:
I would just do this:
signed char a;
// ...
if (a < 126) a += 2;
else a = 128 - a;
Would this not protect against the problem faced with the wrap-around?
graphic *noughts;
graphic *crosses;
graphic *buffer;
graphic *back;
graphic *scr;
There is no reason to allocate these dynamically.
The reason I did that was when I did noughts("nought.bmp"), I got the following error, which I have no idea how to fix:
mingw32-g++.exe -Wall -fexceptions -O2 -c "C:\Users\LennyLen\Documents\source code\tictac++\main.cpp" -o obj\Release\main.o
mingw32-g++.exe -Wall -fexceptions -O2 -c "C:\Users\LennyLen\Documents\source code\tictac++\game.cpp" -o obj\Release\game.o
C:\Users\LennyLen\Documents\source code\tictac++\game.cpp: In constructor `game::game()':
C:\Users\LennyLen\Documents\source code\tictac++\game.cpp:5: error: no matching function for call to `graphic::graphic()'
C:\Users\LennyLen\Documents\source code\tictac++\graphic.h:17: note: candidates are: graphic::graphic(const graphic&)
C:\Users\LennyLen\Documents\source code\tictac++\graphic.h:16: note: graphic::graphic(BITMAP*)
C:\Users\LennyLen\Documents\source code\tictac++\graphic.h:15: note: graphic::graphic(int, int)
C:\Users\LennyLen\Documents\source code\tictac++\graphic.h:14: note: graphic::graphic(const char*)
C:\Users\LennyLen\Documents\source code\tictac++\game.cpp:10: error: no match for call to `(graphic) (const char[11])'
I get the same for all of those declarations if I stop them from being dynamic.
In the end, I removed the graphic class altogether. It didn't really serve any purpose.
You can also use the initializer list to call the constructors of the stack instances:
The initializer list is something new to me. What benefit does it serve that is better than just assigning values inside the constructor?
Yeah, enums have their uses, but not for setting a constant for screen width or height, IMO.
It seemed a bit odd to me too, but it was suggested. I've changed them to const ints, even though it makes no real difference except stylistically.
I still need to add the exception handling, but that can wait until tomorrow.
The reason I did that was when I did noughts("nought.bmp"), I got the following error, which I have no idea how to fix:
...
I get the same for all of those declarations if I stop them from being dynamic.
If you do not initialize them in the initializer list, they must have a default constructor that can be called. See below.
The initializer list is something new to me. What benefit does it serve that is better than just assigning values inside the constructor?
Efficiency. Every class member must be constructed before control enters the body of the constructor. So, if you don't use the list you have a default construction plus an assignment.
It's the same logic that leads us in C++ to prefer delaying the declaration of variables as long as possible, so they can be initialized on creation.
Efficiency.
Not only that. There are certain things that can only be initialized, not assigned later, like classes without a default constructor, constant members, reference members and base classes.
I think this should be the final revison of this little project now:
main.cpp:
game.h:
game.cpp:
mouse.h:
mouse.cpp:
#include "mouse.h"
void position::get_pos() {
x = (mouse_x - EDGE) / SQUARE;
y = (mouse_y - EDGE) / SQUARE;
}
types.h:
#ifndef TYPES_H
#define TYPES_H
typedef enum { NONE, NOUGHT, CROSS } player_type; // enumerated constants for the player type
#endif
defines.h:
#ifndef DEFINES_H
#define DEFINES_H
#define WHITE makecol(255, 255, 255)
#define BLACK makecol(0, 0, 0)
const int GRIDSIZE = 3, NUMSQUARES = 9, TEXTLINE1 = 175, TEXTLINE2 = 190, WIDTH = 172, HEIGHT = 232, EDGE = 11, SQUARE = 50;
#endif
And here is all the code/resources with a Windows binary.
Is there an advantage to using a #define for makecol(..., ..., ...)?
If the preprocessor simply replaces your WHITE instances with "makecol(255, 255, 255)", you are just reusing the same thing. I'd just use "const int WHITE = makecol(255, 255, 255);" and avoid any redundant function calls to makecol() if you are just using it for white (or black, or lavender, or puce, ...).
If you're worried that it's trying to set the color before setting the color depth, just don't make it a const and set its value inside your init() function.
If you're worried that it's trying to set the color before setting the color depth, just don't make it a const and set its value inside your init() function.
It was more that I was trying to avoid any global variables.
It was more that I was trying to avoid any global variables.
And yet Allegro 4 and 5 both use globals all over the place. Once you set the screen resolution, you have SCREEN_W, SCREEN_H, you've got mouse_b, etc. It gives you access to globals all over the place.
I think globals have their place. Sometimes you just don't want to pass some variable around through a dozen function calls to be used in one or two places down the chain: you use a global.
I think globals have their place. Sometimes you just don't want to pass some variable around through a dozen function calls to be used in one or two places down the chain: you use a global.
I think #defines have their place. Sometimes you just don't want to write the same bit of text over and over in many places: you use a #define.
edit: And to be honest, the #define vs global variable has nothing to do with C++ explicitly. It's the same for any language that supports both. I'm only interested in C++ specifics for now.
CResourceManager::add_color(makecol(255, 255, 255), "white");
CResourceManager::add_color(makecol(0, 0, 0), "black");
..
col = CResourceManager::get_color("white");
That's the proper way of doing it in OOP. Not with defines. And not with globals.
Whether you make the methods static or not depends on what you need.
You could go further and define a Resource class and inherit all Resources from that... you have to decide if you want:
a) // two methods for every resource
add_a(type_a, id)
add_b(type_b, id)
type_a a = get_a(id);
type_b b = get_b(id);

or:

b) // having to cast every time you get a resource
add(resource, id)
type_a a = (type_a)get(id);
Both have their pros and cons.
EDIT: Obviously, if you want to take the second path, you would have to write wrappers around all the Allegro C types. Otherwise you can't let them extend the resource class.
col = CResourceManager::get_color("white");

I find this very error-prone. It's going to compile ok and bite you at runtime if you mistake "grey" and "gray", for example.
load_bitmap("sprite.bmp"); might compile ok but will bite you at runtime if the image can't be loaded.
Same goes for any resource once you use a ResourceManager.
The Seven Stages of Purification & The Insight Knowledges
The Seven Stages of Purification & The Insight Knowledges
Ven. Matara Sri Nanarama
BUDDHANET'S BOOK LIBRARY
Web site:
Buddha Dharma Education Association Inc.
Buddhist Publication Society
P.O. Box 61
54, Sangharaja Mawatha
Kandy, Sri Lanka
First published 1983
Second edition 1993
Copyright 1983, 1993 by The Sangha, Mitirigala Nissaraõa Vanaya
All rights reserved
ISBN
Published for free distribution
The Seven Stages of Purification

This is a book born of wide and deep meditative experience, a guide to the progressive stages of Buddhist meditation for those who have taken up the practice in full earnestness… to the real nature of phenomena. In the present book the stages of purification and the insight-knowledges are treated not only with the author's great erudition, but with the clarifying light of actual meditative experience. The author, the late Venerable Matara Sri… experience, and also this book, extend to serenity meditation (samatha) as well.
The Seven Stages of Purification and The Insight Knowledges
A Guide to the Progressive Stages of Buddhist Meditation
The Venerable Mahàthera Matara Sri àõàràma
Translated from the Sinhala
Buddhist Publication Society
Kandy, Sri Lanka
Contents

Overview
Translator's Preface
List of Abbreviations
Introduction: The Relay of Chariots
Chapter I: Purification of Virtue (Sãlavisuddhi)
Chapter II: Purification of Mind (Cittavisuddhi)
    The Obstructions and Aids to Concentration
    The Stages of Concentration
Chapter III: Purification of View (Diññhivisuddhi)
Chapter IV: Purification by Overcoming Doubt (Kankhàvitaraõavisuddhi)
Chapter V: Purification by Knowledge and Vision of What is Path and Not-Path (Maggàmagga àõadassanavisuddhi)
    Knowledge by Comprehension (Sammasana àõa)
    The Ten Imperfections of Insight (Dasa vipassan upakkilesà)
    The Path and the Not-Path
Chapter VI: Purification by Knowledge and Vision of the Way (Pañipadà àõadassanavisuddhi)
    The Three Full Understandings (Pari à)
    The Progress of Insight Knowledge
Chapter VII: Purification by Knowledge and Vision (àõadassanavisuddhi)
    1. Insight Leading to Emergence (Vuññhànagàminã Vipassanà)
    2. Change-of-Lineage Knowledge (Gotrabhå àõa)
    The Supramundane Paths and Fruits
    Reviewing Knowledge (Paccavekkhana àõa)
Conclusion
Appendix 1: The Call to the Meditative Life
Appendix 2: The Eighteen Principal Insights (from the Visuddhimagga, XX,90)
Appendix 3: The Cognitive Series in Jhàna and the Path
Appendix 4: Oneness
About the Author
The Buddhist Publication Society

…àõàrama Mahàthera the meditation master (kammaññhànàcariya)…
…ing the value of both samatha (serenity) and vipassanà (insight). The treatise grew out of a series of discourses on meditation which our venerable teacher gave to us, his pupils, in…

Mitirigala Nissaraõa Vanaya
Mitirigala, Sri Lanka
A Pupil
October 25

Published for free distribution by Premadasa Kodituvakku, 38, Rosemead Place, Colombo 7 (1978).
Namo tassa bhagavato arahato Sammàsambuddhassa
Homage be to the Blessed One, Accomplished and Fully Enlightened

…àõamoli, The Path of Purification, 4th ed. (BPS, 1979).
Introduction: The Relay of Chariots

The path of practice leading to the attainment of Nibbàna unfolds in seven stages, known as the Seven Stages of Purification (satta visuddhi). The seven in order are:

1. Purification of Virtue (silavisuddhi)
2. Purification of Mind (cittavisuddhi)
3. Purification of View (diññhivisuddhi)
4. Purification by Overcoming Doubt (kankhàvitaraõavisuddhi)
5. Purification by Knowledge and Vision of What is Path and Not-Path (maggàmagga àõadassanavisuddhi)
6. Purification by Knowledge and Vision of the Way (pañipadà àõadassanavisuddhi)
7. Purification by Knowledge and Vision (àõadassanavisuddhi)

In the attainment of Nibbàna itself, our minds are in direct relation to the seventh and last stage of this series, the Purification by Knowledge and Vision, which is the knowledge of the supramundane path. But this purification cannot be attained all at once, since the seven stages of purification form a causally related…

…ifications are counted among nine items collectively called factors of endeavour tending to purification (pàrisuddhi-padhàniyanga), the last two of which are purification of wisdom and purification of deliverance. However, this same series of seven purifications forms the scaffolding of Bhadantàcariya Buddhaghosa's encycloped…
"Then, friend, is it for purification by knowledge and vision that the holy life is lived under the Blessed One?"
"Not for this, friend."
"What, then, is the purpose, friend, of living the holy life under the Blessed One?"
"Friend, it is for the complete extinction…"

…gate, might mount the first chariot in the relay, and by means of the first chariot…
In the case of the seven purifications, the purity implied is reckoned in terms of the elimination of the unwholesome factors opposed to each purification. Purification of Virtue implies the purity obtained through abstinence from bodily and verbal misconduct as well as from wrong livelihood. Purification of Mind is the purity resulting from cleansing the mind of attachment, aversion, inertia, restlessness and conflict, and from securing it against their influx. Purification of View is brought about by dispelling the distortions of wrong views. Purification by Overcoming Doubt is purity through the conquest of all doubts concerning… removal of defilements which obstruct the path of practice. And lastly, Purification by Knowledge and Vision is the complete purity gained by erad…
Chapter I: Purification of Virtue (Sãlavisuddhi)

Like any other tree, the great tree of the meditative life requires roots. The roots of the meditative… sense-doors… livelihood, being light in body and content at heart, free from the burden of ownership as regards anything anywhere between the earth and the sky. Though these four principles were originally prescribed for monks and nuns, lay… precepts as their standard of virtue. Male and female lay-devotees have five precepts as a permanent standard of virtue in their everyday life. If they are more enthusiastic, they can undertake and keep the eight precepts with livelihood as the eighth, or the ten lay precepts, or the eight precepts recommended as the special observance for Uposatha days. The texts record several instances of persons who, without previously… pleasant…
In the case of the pleasant feeling, friend Visàkha, the underlying tendency to attachment must be abandoned. In the case of the painful feeling, the underlying tendency to repugnance must be abandoned. And in the case of the neither-unpleasant-nor-pleasant feeling, the underlying tendency to ignorance…
Chapter II: Purification of Mind (Cittavisuddhi)

See Vism. III, …
10. Even the supernormal powers, which are hard to maintain, may be an impediment for one who seeks insight.

It will be useful to a meditating monk to understand beforehand the way of tackling the impediments. Six impediments (dwelling, family, gain, class, kin and fame) can be overcome by giving up attachment to them. Three impediments (building, travel and books) are done away with by not undertaking the activities they imply. Affliction is an impediment to be overcome by proper medical treatment.
26 impediments so that one can go on with one s meditation. detrimental to concentration, one should constantly protect the mind from falling under their influence, for through carelessness, one can lose whatever concentration one has already developed. Now, let us see how these six states occur. When the meditator applies himself to his subject of meditation, thoughts relating to that tendency- 27
28 times obstacles that the six occasions for the cleansing of concentration are obtained. In other words, in the very attempt to overcome the six obstacles, one fulfils the six conditions necessary for the cleansing of concentration. The six cleansings are thus the cleansing of the mind from hankeringification: practice. Wisdom implies the understanding of the purpose of one s meditation. The purpose should be the arousing of the knowledge of mind-andmatter repeatedly dwelling on some wholesome thought. 29
30 To develop concentration, all one s actions large or small must be done with mindfulness. One should make a special resolve to do everything with the right amount of mindfulness. When each and every act of a meditator is done mindfully, all his actions will begin to maintain a certain level of uniformity. And as this uniformity mindfulness and concentration might appear as something difficult or even unnecessary. One might even become discouraged by it. Understanding this possibility beforehand, one should make a firm determination to persist in one s practice. The progress of a meditator is nothing other than his progress in mindfulness and concentration. When, at the very start, one enthusiastically sets about developing mindfulness, when one makes an earnest effort to apply mindfulness, one will begin to see how the mind becomes receptive to mindfulness almost unwittingly. And once one becomes used to it, one will be able to practise mindfulness without any difficulty. One will then come to feel that 30
31 mindfulness is an activity quite in harmony with the nature of the mind. And ultimately, the meditator can reach a level at which he can practiseitators rise up from their seats, some lose that calmness.junction. Take, for example, the walking posture. This is a posture which offers an excellent opportunity to arouse the power of concentra- 31
32 tion. Many meditators find it easy to develop concentration in this posture. Suppose one has aroused some degree of mindfulness and concentration while walking. Now, when one intends to sit down, one should see to it that one does not lose what one has already gained. With concentration, one should make a mental note of the intention of sitting: intending to sit, intending to sit. Then, in sitting down also make a mental note: sitting, sitting. In this manner one should maintain unbroken whatever mindfulness and concentration one has already built up, and continue one s meditation in the sitting posture. This practice of making a mental note of both the intention and the act at the posturejunctions enables one to maintain mindfulness and concentration without any lapses. In trying to maintain unbroken mindfulness, one should consider well the dangers of neglecting that practice and the benefits of developing it. To develop mindfulness is to develop heedfulness, which is helpful to all wholesome mental states. To neglect mindfulness is to grow in heedlessness, the path leading to all unwholesome states, to downfall. With these considerations, one should make a firm determination and really try to develop mindfulness. When mindfulness develops, concentration, too, develops. Note that it is the development of mindfulness and concen- 32
33 tration that is called progress in meditation. Always bear in mind the Buddha s words: He who has mindfulness is always well; The mindful one grows in happiness. (S. I, 208) A meditator has to pay attention to the application of mindfulness at all times and under all circumstances. What needs special emphasis here is that the application of mindfulness should be so oriented as to lead one onward to the realization of Nibbàna. Mindfulness has to be taken up in a way and in a spirit that will effectively arouse the knowledge of the supramundane paths. It is only then that mindfulness can rightfully be called the enlightenment-factor of mindfulness (satisambojjhanga). Such mindfulness, well attuned to the path, leads to the goal of Nibbàna. Meditation is a battle with the mind. It is a battle with the enemies within the mental defilements. First of all, one has to recognize that these enemies, while battling among themselves, are at war with the good thoughts, too. Love is fighting with anger. Jealousy is in complicity with anger. Greed steps in as an ally to conceit and views. Views and conceit are mutually opposed, though they both owe their origin to greed. 33querading as a friend. Self-deceptions can occur even when the meditator is engaged in making a mental note. For instance, in mentally noting a painful feeling,itator incurs by this neglect is indescribably great. Failure to make a mental note of an object as such becomes a serious drawback in the development of one s meditative attention. As soon as one sees a pleasant object, one should make a mental note of it and summarily dismiss it. 34 ( hearing hearing ). animity in mentally noting these feelings. One should not note them with the idea of getting rid of them. The aim should be to comprehend the nature of phenomena by understanding pain as pain. The same principle applies to a pleasant object giving rise to a pleasant feeling. 
With 35ments known as the five hindrances (pa canãvaraõà), namely: sensual desire, ill will, sloth and torpor, agitation and remorse, and doubt. 5 There are three kinds of concentration qualifying as Purification of Mind: access concentration (upacàra-samàdhi), absorption concentration (appanà-samàdhi), and momentary concentration (khaõika-samàdhi). The first two are achieved through the vehicle of serenity (samatha),. 36
37 Here we will discuss the attainment of Purification of Mind via the approach of serenity. The fullest form of this purification is absorption concentration, which consists of eight meditative attainments (aññha samàpatti): four absorptions called jhànas, and four immaterial states (àruppas). The two main preparatory stages leading up to a jhàna are called preliminary work (parikamma) and access (upacàra). 6 The ordinary consciousness cannot be converted into an exalted level all at once, but has to be transformed by degrees. In the stage of preliminary work, one must go on attending to the subject of meditation for a long time until the spiritual faculties become balanced and function with a unity of purpose. Once the spiritual faculties gain that balance, the mind drops into access. In the access stage, the five hindr. 37
38 discovers that it has five distinguishing components called jhàna factors, namely: applied thought, sustained thought, joy, bliss and onepointed determination, one should repeat emerging from the jhàna and re-attaining it a good many times. This kind of practice is necessary because there is a danger that a beginner who remains immersed in a jhàna too long will develop excessive determination. 38 meditator should repeatedly attain to and emerge from the jhàna, reviewing it again and again. Applied thought (vitakka) is the application of the mind to the object, the thrusting of the mind into the object. Sustained thought (vicàra) is the continued working of the mind 39
40 on that same object. The distinction between these two will be clearly discernible at this stage because of the purity of the jhanic mind. The other three factors, joy, bliss and onepointedness, will appear even more distinctively before the mind s eye. It will be necessary to apply one s mind to these three factors a number of times in direct and reverse order so as to examine their quality. It is in this way that one fulfils the requirements reviewing,itator again concentrates his mind on the counter- 40
41 part sign. When his faculties mature, he passes through all the antecedent stages and enters absorption in the second jhàna, which is free from applied thought and sustained thought, and is endowed with purified joy, bliss and onepointedness. As in the case of the first jhàna, here too he has to practise for the fivefold mastery, but this time the work is easier and quicker. After mastering the second jhàna, the meditator). 41
42 Beyond the fourth jhàna lie four higher attainments, called immaterial states or immaterial jhànas, since even the subtle material form of the jhànas is absent. These states are named: the base of infinite space, the base of infinite consciousness, the base of nothingness, and the base of neither-perception-nor-nonperception. 10 They are attained by perfecting the power of concentration, not through refining the mental factors, but through training the mind to apprehend increasingly more subtle objects of attention. 10. In Pali: (1) àkàsàna càyatana, (2) vi àõa cayatana, (3) àki ca àyatana, (4) n eva sa ànàsa àyatana. 42 absorption concentration pertaining to one of the eight levels of attainment the four jhànas and the four immaterial states. The vehicle of insight aims at gaining momentary concentration by contemplating changing phenomena with mindfulness. When Purification of Mind is accomplished, 43
44 perception, mental formations and consciousness. Purification of View is attained as the meditator goes on attending to his meditation subject with a unified mind equipped with the six cleansings and the four conditions relating to the development of the spiritual faculties. (See pp ) Now the meditation subject begins to appear to him as consisting of two functionally distinguishable parts mind and matter rather than as a single unit. This purification gains its name because it marks the initial breakaway from all speculative views headed by personality view. 11 The method employed is a sequence of realizations called abandoning by substitution of opposites (tadangappahàna). The abandoning by substitution of opposites is the abandoning of any given state that ought to be abandoned by means of a particular factor of knowledge, which, as a constituent of insight, is opposed to it. It is like the abandoning of darkness. 44
45 1. Knowledge of Delimitation of Mind-and- Matter (nàmaråpapariccheda àõa) 2. Knowledge of Discerning Cause and Condition (paccayapariggaha àõa) 3. Knowledge of Comprehension (sammasana àõa) 4. Knowledge of Contemplation of Arising and Passing Away (udayabbayànupassanà àõa) 5. Knowledge of Contemplation of Dissolution (bhangànupassanà àõa) 6. Knowledge of Contemplation of Appearance as Terror (bhay upaññhàna àõa) 7. Knowledge of Contemplation of Danger (àdãnavànupassanà àõa) 8. Knowledge of Contemplation of Disenchantment (nibbidànupassanà àõa) 9. Knowledge of Desire for Deliverance (mu citukamyatà àõa) 10. Knowledge of Contemplation of Reflection (pañisankhànupassanà àõa) 11. Knowledge of Equanimity about Formations (sankhàr upekkhà àõa) 12. Knowledge in Conformity with Truth (Conformity Knowledge) (saccànulomika àõa) 13. Knowledge of Change-of-Lineage (gotrabhå àõa) 14. Knowledge of Path (magga àõa) 15. Knowledge of Fruit (phala àõa) 16. Knowledge of Reviewing (paccavekkhaõa àõa). 45itation of Mind-and-Matter, and is reached on attaining this knowledge. But as yet the insight knowledges proper (vipassanà àõa) have not arisen. The insight knowledges are ten in number, ranging from the Knowledge by Comprehension to Conformity Knowledge. They are founded upon the Purification of View and Purification by Overcoming Doubt, which in turn are founded upon the two roots, Purification of Virtue and Purification of Mind. To attain the Knowledge of Delimitation of Mind-and-Matter, the meditator, having purified his mind through the successful practice of concentration, focuses his attention on his meditation subject, which could be a hair, a skeleton, the rising and falling movements of the abdomen (i.e. the wind-element as a tactile object), or 46
47 mindfulness of breathing. As he goes on attendingbreaths strike against the tip of the nose or the upper lip as they enter and go out. The meditator 47
48 mind approaches and strikes the meditation subject. This happens at a developed stage in his meditation when he becomes aware of the distinctionmoments, each one a heap or mass of many mental factors. This is Delimitation of Mind. The ability to understand Mind-and-Matter as a heap necessarily implies the ability to distinguish one thing from another, since a heap is, by definition, a group of things lying one on another. This is the preliminary stage of the Knowledgeanimate things, since the knowledge, when complete, is threefold: internal, external, and internal-and-external. 48
Chapter IV: Purification by Overcoming Doubt (Kankhàvitaraõavisuddhi)
50 To gain freedom from all doubts concerning the nature and pattern of existence, it is necessary to understand the law of cause and effect, clearly revealed to the world by the Buddha. This understanding is called the Knowledge of Discerning Cause and Condition (paccayapariggaha àõa). With the maturing of this knowledge the Purification by Overcoming Doubt is brought to completion. Thus the second knowledge is obtained in the process of reaching the fourth purification. This Knowledge of Discerning Cause and Condition is also known as knowledge of things-as-they-are (yathàbhåta- àõa), right vision (sammàdassana) and knowledge of relatedness of phenomena (dhamma-ññhiti àõa). Some who have had experience in insight meditation in past lives are capable of discerning cause and condition immediately along with their discerning of mind-and-matter. Owing to his Purification of View, the meditator goes beyond the perception of a being or person. Advancing to the Purification by Overcoming Doubt, he begins to understand that consciousness always arises depending on a particular sense faculty and a sense object, that there is no consciousness in the abstract. As the Buddha says: Just as, monks, dependent on whatever condition a fire burns, it comes to be 50fire ;consciousness ; a consciousness arising dependent on tongue and flavours is reckoned as a tongue-consciousness ; a consciousness arising dependent on body and tangibles is reckoned as a bodyconsciousness ; a consciousness arising dependent on mind and ideas is reckoned as a mind-consciousness. Mahàtaõhàsamkhaya Sutta M.I,259ff. 51
Thus the meditator understands that eye-consciousness …
instance of a change of posture one should make a mental note of the action, as well as of the intention which impelled that action. The mental noting should always register the preceding thought as well: 1. "intending to stand", "intending to stand"; 2. "standing", "standing". This method of making a mental note by way of cause and effect is helpful in understanding the relationship between the cause and the effect. The condition implied by the Knowledge of Discerning Cause and Condition is already found here. The meditator gradually comes to understand that thought is the result and that the object is its cause: "It is because there is a sound that a thought-of-hearing (an auditory consciousness) has arisen." As he goes on making a note without a break, a skilful meditator would even feel as though his noting is happening automatically. It is not necessary …
If the meditator is well read in the Dhamma, he will be able to gain a quicker understanding by reflecting according to the Dhamma. One who is not so well read will take more time to understand. Some meditators gain the knowledge concerning the process of formations at the very outset. A meditator who is well advanced in regard to reflections on the Dhamma can arouse this knowledge while meditating on some subject of meditation, equipped with the Purification … speculative views and grasping. … (1) the knowledge of the modes of psychic power, (2) the divine ear-element, (3) the penetration of other minds, (4) the knowledge of recollecting past lives, and (5) the knowledge of the passing away and re-arising of beings. … it is sometimes possible,
on attaining this stage, to see past lives together with their causes and conditions. To some meditators, even the functioning of the internal organs of the body becomes visible. Some have visions of their childhood experiences. One who has no direct knowledge can also arouse memories … The abandonment by substitution of opposites is the abandoning of a particular unwholesome thought by means of an antithetical wholesome thought; it can be compared to the dispelling of darkness by lighting a lamp. The abandonment by suppression (vikkhambhanappahāna), accomplished through serenity meditation, is more effective. By means of this method one can sometimes keep the five hindrances suppressed even for a long time. The abandonment by cutting off (samucchedappahāna), accomplished by the supramundane path-knowledge, completely eradicates the defilements together with their underlying tendencies so that they will never spring up again. In insight meditation, the underlying tendencies to speculative views and sceptical doubts still persist. They are abandoned as a cutting off only by the path of Stream-entry. The eradication of the underlying tendencies to defilements: … "Mind strays, mind strays." If one goes on with mental noting throughout the day, one can, to a great extent, … meditation … suffering in hell. Or, "Let me suffer this little pain for the sake of the supreme bliss of Nibbāna." An example is the venerable Lomasanāga Thera who endured piercing cold and scorching heat. Once while he was dwelling in the Striving-hall … "It is precisely because I am afraid of the heat that I sat here." And he continued sitting there having reflected on the burning heat in Avīci hell.13 While engaged in insight meditation, attending mentally to sections of formations, a meditator sometimes goes through experiences which reveal to him the very nature of formations. While sitting in meditation his entire body stiffens: this is how the earth-element makes itself felt. He gets a burning sensation at the points of contact: this is a manifestation of the fire-element. He is dripping with sweat: this is an illustration of the water-element. He feels as if his body is being twisted: here is the wind-element at work. These are just instances of the four elements announcing themselves with a "here we are!" A meditator has to understand this language of the four elements.
13. MA. Commentary on Sabbāsava Sutta.
Chapter V
Purification by Knowledge and Vision of What is Path and Not-Path
(Maggāmaggañāṇadassanavisuddhi)

The understanding of the distinction between the direct path and its counterfeit, the misleading path, is referred to as Purification by Knowledge and Vision of What is Path and Not-Path. … the purifications and the knowledges. By the time the meditator reaches this Purification by Knowledge and Vision of What is Path and Not-Path, he has gained a certain degree of clarity owing to his Purification by Overcoming Doubt. Since he has eliminated obstructive views and doubts, his power of concentration is
keener than ever. Now his concentration has reached maturity. His mind is virile and energetic.
1. Knowledge by Comprehension (Sammasanañāṇa)

… Knowledge and Vision of What is Path and Not-Path comes in. This purification involves understanding the characteristics of impermanence (anicca), suffering (dukkha), and not-self (anattā). Such reflection … characteristics. But the range of comprehension this knowledge involves is not the same for everyone. For some meditators, the comprehension is broad and extensive; for others, its range is limited. The duration of the occurrence of this knowledge also varies according to the way the formations relating to mind-and-matter are reflected upon. The Buddha's comprehension of formations pervaded all animate and inanimate objects in the ten thousand world-systems. The venerable Sāriputta's Knowledge by Comprehension pervaded everything animate and inanimate in the central region of India. The sutta expressions "all is to be directly known" (sabbaṃ abhiññeyyaṃ) and "all is to be fully known" (sabbaṃ pariññeyyaṃ) also refer to Knowledge by Comprehension. Here "all" (sabbaṃ) does not mean literally everything in the world, but whatever is connected with the five aggregates. The formula of comprehension given in the suttas says:
"Any form whatever, whether past, future or present, internal or external, gross or subtle, inferior or superior, far or near, all form he sees with right wisdom as it really is thus: 'This is not mine, this I am not, this is not my self.' Any feelings whatever … any perceptions whatever …" … breathing, the rise-and-fall of the abdomen, or something
together with their causes and conditions. Now, as the meditator goes on attending to his meditation subject, the arising and the passing away of those formations become apparent to him. He sees, as a present phenomenon, how the formations of mind-and-matter connected with his subject of meditation keep on arising and passing away and undergoing destruction, all in heaps. The understanding of formations as a heap is followed by the understanding of each of them separately. It is continuity and compactness (ghana) that conceal the impermanence of formations. To understand them separately, to see the discrete phases within the process, is to understand the characteristic of impermanence. The impermanence of formations becomes clear to him in accordance with the saying: "It is impermanent in the sense of undergoing destruction" (Ps.I,53). Once the nature of impermanence is apparent, the painful nature and not-self nature of formations become apparent as well. When he makes a mental note of that understanding, the range of understanding itself grows wider. This is Knowledge by Comprehension, which comes as a matter of direct personal experience in the present. Based on this experience, he applies the same principle by induction to the past and the future. He understands by
inductive knowledge that all formations in the past were also subject to destruction. When he understands the impermanence of past formations, he makes a mental note of this understanding. … themselves sufficient for breaking up the defilements. However, eight additional modes have been indicated, grouped into four pairs: (1) internal-external, (2) gross-subtle, (3) inferior-superior, (4) far-near. These eight modes are not apprehended by everyone in the course of reflection on formations. They occur with clarity only to those of
keen insight. Together with the three temporal modes, these make up the eleven modes of comprehension. … manifest.
perception, volition and consciousness, the primary components of the mind (in mind-and-matter), are all impermanent. The meditator first has to reflect on his own set of five aggregates. At this stage his contemplation is not confined to his original meditation subject. Rather, contemplation pervades his entire body. He understands the nature of his whole body and makes a mental note of whatever he understands. This is comprehension. Not only in regard to his own body, but concerning those of others, too, he gains a similar understanding. He can clearly visualize his own body, as well as those of others, whenever he adverts to them. This is Knowledge by Comprehension. Some meditators become acutely aware of the frail nature of their body as well. In the Discourse to Māgandiya, the Buddha gives the following advice to the wandering ascetic Māgandiya: "And when, Māgandiya, you have practised the Dhamma going the Dhamma-way, then, Māgandiya, you will know for yourself, you will see for yourself, that these (five aggregates) are diseases, boils and darts." (M.I,512) This, again, is a reference to the above-mentioned stage of comprehension. In the Discourses … comprehension in different ways, sometimes briefly, sometimes in detail, depending on the particular disciple's power of understanding. The Paṭisambhidāmagga gives forty modes of comprehension: (Seeing) the five aggregates as impermanent, as painful, as a disease, a boil, a dart, a calamity, an affliction, as alien, as disintegrating, as a plague, a disaster, a terror, a menace, as fickle, perishable, unenduring, as no protection, no shelter, no refuge, as empty, vain, void, not-self, as a danger, as subject to change, as having no core, as the root of calamity, as murderous, as to be … characteristic of not-self.

Impermanence: impermanent, disintegrating, fickle, perishable, unenduring, subject to change, having no core, to be annihilated, formed, subject to death.
Suffering: painful, a disease, a boil, a dart, a calamity, an affliction, a plague, a disaster, a terror, a menace, no protection, …

… tendency to prolong the process of comprehension, since one likes to go on reflecting in this way. For some meditators the process of comprehension reaches its culmination within a short period, for others it takes longer. When the Knowledge by Comprehension, starting from the meditation subject, extends to the five aggregates of the meditator, and from there to external … develops to this stage, the meditator applies himself to meditation with great enthusiasm. He is even reluctant to get up from his meditation seat, as he feels he can continue reflecting on formations for a long time without any trouble. Sometimes … interconnected processes in the form of vibrations. They seem like a squirming swarm of worms. Even the body appears as a heap of fine elemental dust in constant transformation.

2. The Ten Imperfections of Insight (Dasa vipassanūpakkilesā)

From the stage of Knowledge by Comprehension up to the initial phase of the Knowledge of Arising and Passing Away, the meditator becomes aware of an increasing ability to meditate without difficulty. Extraneous thoughts have subsided, the mind has become calm, clear and serene. Owing to this serenity and non-distraction, defilements decrease and the … describes ten such imperfections: (1) illumination (obhāsa), (2) knowledge (ñāṇa), (3) rapturous delight (pīti), (4) calmness (passaddhi), (5) bliss (sukha), (6) faith (adhimokkha), (7) energy (paggaha), (8) assurance (upaṭṭhāna), (9) equanimity (upekkhā), (10) attachment (nikanti).

(1) Due to the developed state of his mind at this stage, a brilliant light appears to the meditator. … concludes that the teacher had not foreseen this event and was mistaken on this point. He even … supramundane stage. So he concludes that this could not possibly be the path, and dismisses the illumination with a mental note.
In the same way he becomes aware that craving arises whenever he thinks: "This is my illumination", and that conceit arises at the thought: "Even my teacher does not possess an illumination like mine." Also, in conceiving his experience to be a supramundane stage, he recognizes that he is holding a wrong view. So he refuses to be misled by the illumination and succeeds in abandoning this particular imperfection of insight. … comprehension, the meditator becomes transported with joy. Uplifting joy arises in him like heaving waves of the sea. He feels as though he is sitting in the air or on a cushion stuffed with cotton-wool. Here, again, the unskilful meditator is deceived. The skilful meditator, however, applies the same method of discernment as he did in the case of illumination. Regarding this imperfection as a manifestation of craving, conceit and wrong view, he frees himself from its deceptive influence.
(4) The fourth imperfection of insight is buoyancy of body and mind. Though the meditator had already experienced some calmness even in the initial stages of meditation, the calmness that sets in at the beginning of the Knowledge …
spend most of his time worshipping and preaching. He feels impelled to write letters to his relatives instructing them in the Dhamma. Due to excessive faith, he even starts crying, which makes him seem ridiculous. This wave of enthusiasm … mindfulness … overcoming this imperfection of insight.

(10) The subtle imperfection of insight called attachment is one which is latent in all the other imperfections. The unskilful meditator conceives a subtle attachment to his insight, which is adorned with such marvellous things as illumination; thus he is carried away by craving, conceit and view. The skilful meditator uses his discerning wisdom and frees himself from the influence. … imperfections and the stepping on to true insight, that is, to the highroad of mental noting. At the end of this purification the mature phase of Knowledge of Arising and Passing Away sets in to begin the next purification.
Chapter VI
Purification by Knowledge and Vision of the Way
(Paṭipadāñāṇadassanavisuddhi)

1. The Three Full Understandings (Pariññā)

These are: (1) full understanding as the known (ñātapariññā), (2) full understanding as investigating (tīraṇapariññā), (3) full understanding as abandoning (pahānapariññā).

(1) The plane of full understanding as the known extends from the Knowledge of Delimitation of Mind-and-Matter through the Knowledge of Discernment of Conditions. The function exercised in this stage is the understanding of the individual nature of phenomena. In brief this
understanding extends simply to the salient characteristics of phenomena. Thus the meditator comes to understand each phenomenon by way of its characteristic, function, manifestation and proximate cause. The full understanding as the known enables the meditator to grasp the essential nature of phenomena, which it presents in terms of ultimate categories.

(2) Full understanding as the known provides the basis for the next stage, full understanding as investigating, which extends from Comprehension by Groups through the Knowledge of Arising and Passing Away. At this stage the meditator advances from discerning the specific nature of individual phenomena to discerning …
(3) Full understanding as abandoning extends from the Knowledge of Dissolution and culminates in the Knowledge of Equanimity about Formations. In this stage, as the ignorance obscuring the true nature of formations dissolves and things are seen for what they are, defilements begin to be dispersed. They are compelled to quit the recesses of the mind, and the more they vacate, the more strength of understanding the mind gains. A meditator will find it useful to bear in mind this threefold division of mundane full understanding. … purifications, Purification by Knowledge and Vision of the Way. The way signifies the practice or the process of arriving at the goal. The understanding, knowledge, or illumination relating to the process of arrival is the Knowledge and Vision of the Way. The purification or elimination of defilements by means of that knowledge is Purification by Knowledge and Vision of the Way. It is at this point that there begins to unfold the series of full-fledged insight knowledges which will climax in the attainment of the supramundane paths.

Purification by Knowledge and Vision of the Way comprises eight stages of knowledge:
1. Knowledge of Contemplation of Arising and Passing Away (udayabbayānupassanāñāṇa)
2. Knowledge of Contemplation of Dissolution (bhaṅgānupassanāñāṇa)
3. Knowledge of Appearance as Terror (bhayupaṭṭhānañāṇa)
4. Knowledge of Contemplation of Danger (ādīnavānupassanāñāṇa)
5. Knowledge of Contemplation of Disenchantment (nibbidānupassanāñāṇa)
6. Knowledge of Desire for Deliverance (muñcitukamyatāñāṇa)
7. Knowledge of Contemplation of Reflection (paṭisaṅkhānupassanāñāṇa)
8. Knowledge of Equanimity about Formations (saṅkhārupekkhāñāṇa)

Knowledge in Conformity with Truth, or Conformity Knowledge (anulomañāṇa), is also included in this purification as a ninth stage of knowledge.
(1) Knowledge of Contemplation of Arising and Passing Away.

Purification by Knowledge and Vision of the Way starts with the mature phase of the Knowledge of Arising and Passing Away, which sets in after the meditator has dispelled the deception posed by the imperfections of insight, either through his own unaided efforts or with the help of the teacher's instructions. … phenomena is the Knowledge of Contemplation of Arising and Passing Away (Ps.I,1). It is by contemplating the alteration of the present condition … In order to see impermanence, one has to perceive the characteristic of passing away, and for passing away to be seen, the event of arising must also be seen. The Knowledge of Arising and Passing Away
involves the seeing of both arising and dissolution. At this stage, the process of arising and dissolution becomes manifest to the meditator in the very subject of meditation he has taken up. Now that he has passed the dangers posed by the imperfections of insight, the meditator proceeds with greater determination in his work of contemplation. All the three characteristics of existence now become clear to him in a reasoned manner. Though these characteristics appeared to him already in the early phase of the Knowledge of Arising and Passing Away, they were not so clear then because of the adverse influence of the imperfections. But with the imperfections gone, they stand out in bold relief. Since the highroad of insight knowledge begins with the Knowledge of Arising and Passing Away, the meditator should be especially acquainted with this particular knowledge. He requires a thorough understanding of the three characteristics, impermanence, suffering and not-self, each of which has two aspects: (1) that which is impermanent and the characteristic of impermanence; (2) that which is suffering and the characteristic of suffering; (3) that which is not-self and the characteristic of not-self.
The referent of the first set of terms, i.e. that which is impermanent, suffering and not-self, is the five aggregates. The characteristic of impermanence is the mode of arising and passing away; the characteristic of suffering is the mode of being continually oppressed; the characteristic of not-self is the mode of insusceptibility to the exercise of power. The five aggregates are thus impermanent because they arise and pass away, suffering because they are continually oppressed, and not-self because there is no exercising power over them. The Paṭisambhidāmagga explains the three characteristics thus: "(It is) impermanent in the sense of wearing away. (It is) suffering in the sense of bringing terror. (It is) not-self in the sense of corelessness" (Ps.I,53). All the three characteristics are to be found in the five aggregates. The aim of the insight meditator should be to arouse within himself an understanding of these three characteristics. This kind of effort might appear, at first sight, as a mental torture. But when one considers the solace which this beatific vision yields, one will realize that in all the three worlds there is no worthier aim than this. As the Buddha says: To that monk of serene mind who has entered an empty house and sees with right insight the Dhamma, there arises a
sublime delight transcending the human plane (Dhp. 373). The characteristic of impermanence is concealed by continuity. The characteristic of suffering is covered up by the change of postures. The characteristic of not-self is overcast with compactness. The process of formations needs to be analyzed. Once it is seen as a heap or series, impermanence is understood. By resisting the impulse to change one's postures, suffering is understood. By analyzing the mass of formations into its constituents, earth, water, fire, air, contact, feeling, etc., the characteristic of not-self becomes evident. When these three characteristics become clear to the meditator, he is in a position to carry on his meditation well. As the meditator goes on attending to his meditation subject, the subject begins to appear to him as clearly as it did at the stage of comprehension. Now, when the formations which make up mind-and-matter become manifest to him, he is able to distinguish the material and mental components of his meditation subject. If, for example, he takes the rise and fall of the abdomen as his subject, he comes to understand that within one rising movement of the abdomen there is a multiplicity of such movements and that within one falling movement there is also a series of similar movements. He can also see mentally that a series of thoughts arises along with this process, taking each fractional movement as object. If he attends to the in-breathing and out-breathing as the subject of his meditation, he can mentally distinguish between the numerous phases of the wind-element connected with the process. He is also aware that a series of thoughts arises, cognizing each phase. When he is able to distinguish in this manner, his mind traverses his entire body, making it the subject of meditation. He understands that his entire body is a heap of elemental dust. It occurs to him that this heap of elemental dust composing his body is always in a state of motion, like the fine dust motes seen floating in the air when viewed against the sun's rays. At this stage his mind does not wander towards other objects. His attention is now fully engrossed in meditation. When he becomes aware of the components of matter and mind as heaps, series or masses, he begins to see the arising and the passing away of those distinct parts. Here, one has to take into account another important fact, namely, that all the phenomena subsumed under mind-and-matter pass through three stages: (1) arising (uppāda), (2) persistence (ṭhiti), and (3) dissolution (bhaṅga). Birth, decay and death occur even within a very short period of time just as much as within the duration of a long period. Of these three stages, arising or birth and dissolution or death are apparent. The intermediate stage of persistence or decay is not so clear. Arising is the beginning of impermanence, persistence its middle and dissolution its end. The three characteristics, impermanence, suffering and not-self, are now very clear to the meditator. Impermanence is mentally discernible to him as if it were something visible to his very eyes. Four things appear with clarity before his calm mind: (1) the arising, (2) the cause of arising, (3) the dissolution, (4) the cause of dissolution. The knowledge which arises together with this clarity of vision is the Knowledge of Arising and Passing Away. At whatever moment this knowledge dawns upon a meditator at an experiential level as a realisation, he would do well to stop at that point for a considerable period of time in order to reflect upon it over and over again. The Knowledge of Arising and Passing Away is a significant starting-post. Since greater acquaintance with it will come in useful to a meditator even in the matter of re-attaining to fruition (phalasamāpatti), one can contemplate with the
Knowledge of Arising and Passing Away even a hundred or a thousand times. Now, the meditator who has developed the Knowledge of Arising and Passing Away and repeatedly practised it directs his mind to his subject of meditation. The process of arising and passing away then becomes manifest to him in that very subject. Even in raising his arm and putting it down, he can visualize the beginning, the middle and the end of the process of arising and passing away. But sometimes the middle is not clearly discernible. This is also so in the case of the rising and falling movements of the abdomen. In mindfulness of breathing, the beginning, the middle and the end of the in-breaths and the out-breaths are apparent. The mind does not wander. As the meditator continues to keep his meditative attention on the meditation subject, after some time the beginning and the middle stages of the process seem to disappear. Only its end is apparent. When attending to the rising movement of the abdomen, the beginning and the middle become almost indiscernible. Only the end is apparent. So also in the case of the falling movement of the abdomen. In raising the arm and lowering it or in lifting the foot and putting it down, the beginning and the middle are not apparent. Only the end of each process stands out. In the case of the in-breaths and the
out-breaths, the in-coming and the out-going are not felt. All that the meditator feels is the touch sensation left by the in-breaths and the out-breaths at the tip of the nose or on the upper lip where they normally strike as they pass. And this is so palpable to him that he can almost hear its rhythm: "tuck-tuck-tuck". He is not aware of any other object. Sometimes a meditator, on reaching this stage, might think that his meditation has suffered a setback, since the meditation subject is no longer clear to him. He even stops meditating. If he is meditating under a teacher, he approaches him and complains about the setback he is faced with. He confesses that he has lost his interest in meditation, that he is fed up with it. The teacher, however, points out, with due reasons, that this is not a setback in meditation, but rather a sign of progress: "At the start, you had taken up the subject of meditation in terms of signs and modes. A mode is a model. All these meditation subjects, in-breathing and out-breathing, hairs, fingers, etc., are mere concepts. Now that you have developed your mindfulness and concentration, your wisdom has also developed. By developed wisdom a non-existing sign is understood as non-existing. So you must not be disappointed. This is how the perception of the compact disappears."
The perception of the compact (ghanasaññā) is the tendency to take as a unity what is really a multiplicity of actions and functions. Compactness is fourfold: (1) compactness as a continuity (santati-ghana), (2) compactness as a mass (samūha-ghana), (3) compactness as a function (kicca-ghana), (4) compactness as an object (ārammaṇa-ghana). At the developed stage of insight meditation, the perception of compactness begins to disintegrate. The rising and falling movements of the abdomen become less and less palpable. One loses awareness of one's entire body. Earlier the meditator could visualize his own body in the seated posture, but now even that becomes imperceptible to his mind. This is the point at which the concept breaks up. Here one has to abide by the teacher's instructions and be diligent in practice. In his everyday life, man depends on a multitude of concepts of conventional origin. When the perception of compactness disintegrates, conventional notions also break up. One is beginning to move from the fictions believed by the deluded to the truths seen by the noble ones: Whatever, monks, has been pondered over as
truth by the world with its gods and Māras, by the progeny consisting of recluses and brahmins, gods and men, that has been well discerned as untruth by the noble ones as it really is with right wisdom: this is one mode of reflection. And whatever, monks, has been pondered over as untruth by the world with its gods and Māras, that has been discerned as truth by the noble ones as it really is with right wisdom: this is the second mode of reflection (Dvayatānupassanā Sutta, Sn. 147).

(2) Knowledge of Contemplation of Dissolution.

When the meditator no longer sees the arising of formations and only their dissolution is manifest to him, he has arrived at the Knowledge of Dissolution. Resuming his meditation after this experience, he sees the formations making up mind-and-matter to be constantly disintegrating, like the bursting of water bubbles or like froth boiling over from a pot of rice. He comes to understand that there is no being or person, that there are only mere formations always disintegrating. While this Knowledge of Dissolution is going on within him, the meditator has the extraordinary experience of being able to see the thought with which he reflected on dissolution. Then he reflects on that thought as well. Thus he enters upon a special phase of powerful insight known
as reflective insight (paṭivipassanā); it is also called insight into higher wisdom (adhipaññā-vipassanā). As the Paṭisambhidāmagga says: "Having reflected on an object, he contemplates the dissolution of the thought which reflected on the object. The appearance (of formations) is also void. This is insight into higher wisdom" (Ps.I,58). After reflecting on an object representing mind-and-matter, the meditator reflects upon the reflecting thought itself. Thus he now sees dissolution not only in every immediate object he adverts to, but in every thought he happens to think as well.

(3) Knowledge of Appearance as Terror.

When everything coming under mind-and-matter is seen to be disintegrating, the meditator feels as though he is in a helpless condition. Since the mind-body process to which he has been clinging is seen to be breaking up, he gets alarmed to an unusual degree. Witnessing the dissolution of everything he has been depending on, terror arises in him as he fails to find any shelter or refuge anywhere. This knowledge of fearfulness is technically called the Knowledge of Appearance as Terror. When this knowledge arises, the meditator should make a mental note of his experience of terror. Otherwise this terror will continue to haunt him. Being unable to put an end to it, he will find it difficult to proceed
with his meditation. So at this point, too, it is essential to make a mental note.

(4) Knowledge of Contemplation of Danger.

The understanding dawns that the entire gamut of saṃsāric existence in the three realms throughout the three periods, past, future and present, is subject to the same dissolution. With this insight, the knowledge of terror gives rise to an awareness of the dangers of formations. This is called the Knowledge of Contemplation of Danger. To understand the dangers of formations is to understand that they are wretched from beginning to end. The meditator sees no advantage whatsoever in the entire mass of formations. They appear to him only as a heap of dangers which present no choice between a desirable and an undesirable section. He feels as though he has come upon a thicket infested with furious leopards and bears, reptiles and robbers. With this understanding of the danger, dispassion arises. The meditator gets disgusted with all formations. He thinks: "How much suffering have I undergone in the past for the sake of this tabernacle? How much more have I to endure just to perpetuate this frame of formations?" The passage from the knowledge of dissolution to this experience of disenchantment is the powerful phase of insight meditation. The knowledges in this series arise almost simultaneously. Immediately with the knowledge of dissolution, the knowledges of terror, of danger and of disenchantment arise. Hence this entire series is sometimes simply termed disenchantment. Whenever a meditator finds that the knowledge of dissolution has arisen within him, he should make it a point to stick to his meditation seat, even if it means foregoing meals and refreshments. He should continue to sit motionless, allowing the cycle of insight knowledges to turn full circle. Those of keen insight pass through these stages very rapidly.

(5) Knowledge of Contemplation of Disenchantment.

When the dangers in formations are understood, disenchantment sets in without any special effort. This knowledge of disenchantment, arisen through dissatisfaction with formations, is a kind of knowledge with which a meditator has to be well acquainted. The dissatisfaction is aroused by perceiving the dangers in formations. Initially it concerns the formations connected with the particular subject of meditation. However, when this knowledge is well developed, whatever occurs to the meditator arouses only disenchantment, whether it be his own five aggregates or those of others. All objects and places, all kinds of becoming, generation and destiny, and all stations of consciousness and abodes of beings appear in a way that heightens this disenchantment. At first the
insight meditator has been thinking only of winning freedom from possible rebirth in the four planes of misery: the hells, the animal realm, the plane of afflicted spirits (petas), and the plane of titans (asuras). But now, because of this dissatisfaction with regard to formations through understanding their dangers, he is disgusted not only with the four lower planes but with all the three realms of existence: the sense-sphere realm, the fine-material realm and the immaterial realm. He cannot see any solace anywhere, not even in the heavens and Brahma worlds, since all formations appear as fearful.

When this dissatisfaction becomes acute, very often a meditator gets whimsical ideas which can be detrimental to his practice. He becomes dissatisfied with his meditation and meditates without relish. He thinks of stopping his meditation and going somewhere else. He even develops a dislike towards his teacher and other elders who seek his welfare. In view of this situation, it is advisable for a meditator intending to take up insight meditation to inform his meditation teacher or any other elder about his intention. Failing that, he should at least make a firm determination well beforehand to withstand the obstacles that might confront him in the course of insight meditation. For even after reaching this stage of disenchantment, one has to proceed further.
In such cases the meditation teacher, too, must be resourceful. He should recognize that the real source of the meditator's dissatisfaction is his insight into the dangers of formations, and that this discontent has only been displaced and transferred to other things. When a meditator comes and complains about his practice, place of residence, etc., the teacher must use skilful means to dispel his despondency and re-arouse his ardour for meditation. It is a good sign that, despite his problems, the meditator does not altogether give up his meditation.

(6) Knowledge of Desire for Deliverance. The Knowledge of Disenchantment is followed by the Knowledge of Desire for Deliverance. The meditator now becomes desirous of being delivered from all the planes of becoming, destiny and generation found in all the three realms. He desires deliverance from all formations and thinks: "How shall I escape from this entire mass of formations bound up with defilements?"

Some peculiarities are noticeable in the meditator now, not present in the earlier stage. He is always reflecting on his own shortcomings. He does not stick to his meditation subject. He becomes restless and never feels at ease. For a while he gets up from the meditation seat and starts pacing up and down. Then again he comes and sits down. He turns his meditation seat to face
another direction. He keeps on folding his robes several times and thinks of changing his requisites. Various plans for renovating his compound and even for changing the attitudes of other people enter his mind. But still he does not stop his meditation. However, in a situation like this, a meditator has to be extremely careful, otherwise his meditation is likely to suffer a setback. He should understand that all these whims and fancies are transient. If some impulse to leave his meditation seat arises at an unusual hour, he should make a mental note of it and refuse to respond to it. The meditator should form a resolve to be firm in dealing with these whimsical ideas of changing postures, requisites, etc., until he has gotten over this lapse, whether it lasts for a few minutes or continues for a number of hours or days.

(7) Knowledge of Contemplation of Reflection. Once he has recovered from this lapse, the meditator's powers of reflection increase and he passes through a series of important insights. These insights are classified into several groups, the most comprehensive being the eighteen principal insights; a set of forty modes of reflection also occurs to him with clarity.16 Sometimes only a few of these insights and modes are conspicuous. As his understanding by means of mental noting progresses, the mind engaged in noting gets keener. The task before the meditator now is the comprehension of the five aggregates of clinging as impermanent, suffering and not-self. The eighteen principal insights and the forty modes of reflection can all be distributed among these three characteristics.

16. For the eighteen principal insights see Appendix 2; for the forty modes of reflection see p

Every one of the above contemplations disperses the defilements by the method of substitution of opposites. Along with this process of elimination, the Knowledge of Desire for Deliverance reaches maturity. The meditator becomes more enthusiastic in developing insight and carries on contemplation through the principal insights and modes of reflection. This kind of reflection is called Knowledge of Contemplation of Reflection or Reflective Insight.

At the stage of the Knowledge of Reflection, insight tends to become renewed. Some unusual physical pains may occur when one reaches this stage. One may suffer severe headaches and a feeling of heaviness in the head, clumsiness of body or giddiness or drowsiness. One should, however, mentally note these painful feelings with diligence and try to bear up under them. Then those pains will gradually subside, so much so that one will be relieved of them until one reaches the very culmination of insight meditation. Sometimes pains arise due to physical causes such as ordinary illnesses. But even such pains, once they are overcome by sheer will-power, will not come up again. Sometimes this method even completely cures chronic ailments like headaches.

When the Knowledge of Reflection arises, insight has become highly developed. At this point it looks as though insight is about to reach its climax. This impels the meditator to make the firm determination: "Whatever there is to be done to win deliverance from existence, all that will I do."

(8) Knowledge of Equanimity about Formations. The next in the series of insight knowledges is Knowledge of Equanimity about Formations. The equanimity referred to results from a conviction that all the foundational work for uprooting the defilements has been accomplished and that no further effort is required in this direction. The knowledge of equanimity arises with the understanding of voidness (suññatā): that everything is void of self or what belongs to self. Since the meditator sees that there is neither a self nor anything belonging to a self in relation to himself as well as others, voidness is discerned in a fourfold manner:

(i) There is no "my self."
(ii) There is nothing belonging to "my self."
(iii) There is no "another self."
(iv) There is nothing belonging to "another self."
As the meditator goes on making a mental note of all that occurs to him in this manner, the mind engaged in observation becomes keener and keener until it reaches a stage of unruffled calm. At this stage, called equanimity about formations, the meditator experiences no terror over the dissolution of formations, since he has discerned their ultimate voidness. Nor is there any delight regarding the keenness of reflection. As the Visuddhimagga says: "He abandons both terror and delight and becomes indifferent and neutral towards all formations" (XXI,61).

Reflection on formations now goes on effortlessly like a well-yoked chariot drawn by well-trained horses. The object presents itself to the reflecting mind without any special effort. It is as if the mind is propping up its objects. Just as water-drops fallen on a lotus leaf slide off at once, so distracting thoughts of love and hate do not stick to the meditator's mind. Even if an attractive or repulsive object is presented to him just to test his knowledge of equanimity about formations, it will simply roll away from his mind without stimulating greed or hatred. There is equanimity at this stage because the meditator understands objects in terms of the four elements. Owing to the absence of defilements, the meditator's mind seems pure like the mind of an Arahant, though at this point the suppression of
defilements is only temporary, effected by the substitution of opposites through insight. It will be a great achievement if the meditator can continue to maintain this state of equanimity.

The Paṭisambhidāmagga defines the Knowledge of Equanimity about Formations thus: "Wisdom consisting of desire for deliverance together with reflection and composure is Knowledge of Equanimity about Formations" (Ps.I,60f.). According to this definition, equanimity about formations has three stages: (1) desire for deliverance, (2) reflection, and (3) composure. Composure (santiṭṭhāna) is a significant characteristic of equanimity about formations. It implies the continuity of knowledge, or the occurrence of a series of knowledges as an unbroken process. No extraneous thoughts can interrupt this series. For a meditator who has reached this stage, very little remains to be done.

Some meditators are unable to go beyond the Knowledge of Equanimity about Formations due to some powerful aspirations they have made in the past, such as for Buddhahood, Paccekabuddhahood, Chief Discipleship, etc. In fact, it is at this stage that one can ascertain whether one has made any such aspiration in the past. Sometimes when he has reached this stage the meditator himself comes to feel that he is cherishing a
powerful aspiration. However, even for an aspirant to Buddhahood or Paccekabuddhahood, the Knowledge of Equanimity about Formations will be an asset towards his fulfilment of the perfection of wisdom (paññā-pāramī). This Equanimity about Formations is of no small significance when one takes into account the high degree of development in knowledge at this stage.

(9) Conformity Knowledge. After Equanimity about Formations comes Knowledge in Conformity with Truth, or briefly, Conformity Knowledge. To gain this knowledge the meditator has nothing new to do by way of meditation; this knowledge simply arises by itself when Knowledge of Equanimity about Formations comes to full maturity. The function of Conformity Knowledge is to conform to the insights which had gone before, or to stabilise those gains by repeated practice. According to the Visuddhimagga, this conformity has to be understood in two senses: as conformity to the function of truth in the eight preceding kinds of insight knowledge, and as conformity to the thirty-seven requisites of enlightenment which are to follow soon.17 When the eight preceding kinds of insight knowledge make their pronouncements like eight judges, Conformity Knowledge, like a righteous king, sits in the place of judgement and impartially and without bias conforms to their pronouncements by saying, "You have all discharged your duties well." And just as the judgement of a righteous king conforms with the ancient royal custom, so this Conformity Knowledge, while conforming to the eight kinds of knowledge, also conforms to the thirty-seven enlightenment factors, which are like the ancient royal custom (Vism. XXI, ).

17. The thirty-seven requisites of enlightenment comprise: the four foundations of mindfulness, the four right endeavours, the four bases of spiritual power, the five spiritual faculties, the five spiritual powers, the seven enlightenment factors, and the eight noble path factors. For details see Ledi Sayadaw, The Requisites of Enlightenment (Wheel No. 171/174).

Though Knowledge of Equanimity about Formations is generally regarded as the culmination of Purification by Knowledge and Vision of the Way, it is Conformity Knowledge that imparts completeness to the Way. Purification by Knowledge and Vision of the Way may be said to have eight knowledges only in a qualified sense, since the last of them, Knowledge of Equanimity about Formations, includes Conformity Knowledge as well.
Chapter VII
Purification by Knowledge and Vision
(Ñāṇadassanavisuddhi)

With the completion of Knowledge of Equanimity about Formations, six stages of purification are complete. Purification by Knowledge and Vision, the seventh and final stage, comes next. This purification consists in the knowledge of the four supramundane paths. But before we discuss this directly, it is necessary to say a few things about the process immediately leading up to it.

1. Insight Leading to Emergence (Vuṭṭhānagāminī Vipassanā)

The most developed phase of the Knowledge of Equanimity about Formations is called insight leading to emergence. This insight brings one to the portal of the supramundane path. As this insight progresses, there arises the cognitive series (cittavīthi) heralding the supramundane path. Those of keen insight, when they reach Knowledge of Equanimity about Formations, fulfil at the same time the requirements for insight leading to emergence and at once pass
through it to the supramundane paths and fruits. But the majority, when they reach this stage, go to the verge of Conformity Knowledge, and, unable to proceed further, come back to the Knowledge of Equanimity about Formations. This is illustrated in the Visuddhimagga by the simile of the crow:

When sailors board a ship, it seems, they take with them what is called a land-finding crow. When the ship gets blown off its course by gales and goes adrift with no land in sight, then they release the land-finding crow. The crow takes off from the masthead and after exploring all the quarters, if it sees land, it flies straight in the direction of it; if not, it returns and alights on the masthead. So too, if Knowledge of Equanimity about Formations sees Nibbāna, the state of peace, as peaceful, it rejects the occurrence of all formations and enters only into Nibbāna. If it does not see it, it occurs again and again with formations as its object. (Vism. XXI,65)

If the meditator is well acquainted with the Dhamma and has discriminative wisdom, he will understand what has happened. Then he can again reflect on formations and go up to Conformity Knowledge.
By now the meditator has gained a good understanding of the nature of all compounded things (sankhatadhammā). So he is in a position to make an inference as to the nature of the Uncompounded (asankhata). There are three distinctive qualities of compounded things: (1) the impeding quality, (2) the signifying quality, and (3) the desiring quality.

Regarding the first of these, the meditator thinks: "Compounded things are bound up with impediments. Nibbāna, which I am seeking, is free from impediments." By impediment is meant something that has the nature of impeding. The impediments have the nature of causing a moral person to violate his moral precepts and of making him unrestrained; the nature of disrupting the concentration of one who is bent on attaining concentration and of driving him to distraction; and the nature of obscuring the wisdom of one who is developing wisdom and of casting him into delusion. Compounded things impede by way of lust, hatred, delusion, conceit, jealousy, views, and so on. In the Uncompounded there is no impediment whatsoever.

The main impediment is the personality view. One who is deceived by this view must abandon it. The impediment brought about by views can be eliminated only by getting rid of views. Nibbāna is free from the impediment of
views. It is free from the impediment of uncertainty. In fact, it is free from all the impediments brought about by defilements. The meditator now sees that all compounded things are oppressed by impediments. He feels that the day he is free from these compounded things he can attain Nibbāna.

As to the signifying quality, the meditator understands that all compounded things become manifest through signs and modes. Everything in mind-and-matter (nāma-rūpa) is defined by way of various modes, such as time, place, direction, occasion, colour, shape, etc. As the Buddha says:

"If, Ānanda, all those modes, characteristics, signs and exponents by which there comes to be a designation of mind-and-matter were absent, would there be manifest any contact?"

"There would not, Lord."

"Wherefore, Ānanda, this itself is the cause, this is the origin, this is the condition for contact. That is to say, mind-and-matter." (Mahā-Nidāna Sutta, D.II,62)

Everything compounded rests on a mass of suffering: "The world rests on suffering" (S.I,40). The meditator understands that Nibbāna is free from suffering. Compounded things are liable to
decay and death. In the Uncompounded there is no decay and death. The idea that Nibbāna is a tranquillization also occurs to the meditator now.

Desire means wish or longing. Compounded things cater to wishes. Their very existence is bound up with longing and desire. Food and drink, clothes and dwellings, the cake of soap, the razor and the broom: all these things are always in a process of wearing away. Various efforts are required to check this process of decay, and all these efforts are the outcome of longing. When one object of desire breaks up, man hankers for another. He goes on hankering like this because of the wish-begetting nature of compounded things and the nagging impulses they create. When the meditator is in a position to infer that the Uncompounded is free from this characteristic, he is much relieved at heart. So he turns his attention to the Uncompounded, trying his best to attain it. Knowing well that the compounded is fraught with suffering, and that the Uncompounded is free from suffering, he puts forth the necessary effort with the determination: "Somehow I will attain it."

It is when he makes such an endeavour that insight leading to emergence develops within him. Insight leading to emergence is the climax of insight knowledge. This insight leads directly and infallibly to the supramundane path,
referred to by the term emergence. The insight leading to emergence comprises three kinds of knowledge: fully-matured knowledge of equanimity about formations, conformity knowledge, and change-of-lineage (still to be discussed). It covers the mundane moments of consciousness in the cognitive series issuing in the supramundane path, that is, the mind-moments called preliminary work (parikamma), access (upacāra), and conformity (anuloma). Since the phase of preliminary work has the task of attending to deficiencies in the balancing of the spiritual faculties, some meditators with sharp and well-balanced faculties skip this phase and go through only access and conformity. The rest must pass through all three. The mind at this stage is working with such rapidity that the entire process has to be reckoned in terms of thought-moments. (See Appendix 3.)

Up to the time of insight leading to emergence, the meditator had been contemplating the three characteristics of all formations: impermanence, suffering and not-self. As he continues reflecting on the three characteristics with keen insight, when he reaches insight leading to emergence, one characteristic stands forth more prominently than the others. Which one stands forth depends on his dominant spiritual faculty. One in whom faith is predominant
will discern impermanence and subsequently apprehend Nibbāna as the signless (animitta); his path is called the signless liberation. One in whom concentration is predominant will discern the mark of suffering and apprehend Nibbāna as the desireless (appaṇihita); his path is called the desireless liberation. One in whom wisdom is predominant will discern the mark of not-self and subsequently apprehend Nibbāna as voidness (suññatā); his path is called the voidness liberation. The particular outstanding characteristic comes up distinctly in the most developed phase of knowledge of equanimity about formations, and persists as the mode of apprehension through three phases of insight leading to emergence: preliminary work, access and conformity.

2. Change-of-Lineage Knowledge (Gotrabhū Ñāṇa)

During these three phases, the meditator's mind is working with formations as its object. He is seeing formations as impermanent, suffering or not-self. But with the next step, Change-of-Lineage Knowledge, a radical change takes place. As soon as Change-of-Lineage Knowledge occurs, the mind lets go of formations and takes
Nibbāna as its object. This knowledge gains its name because at this point the meditator changes lineage, that is, he passes from the rank of a worldling (puthujjana) to the rank of a noble one (ariya). In the three phases preceding change-of-lineage the defilements continue to be abandoned temporarily through the substitution of opposites. Change-of-lineage itself does not directly abandon defilements in any way, but it heralds the onset of the supramundane path, which abandons defilements permanently by cutting off their roots.

According to the definition given in the Paṭisambhidāmagga, Change-of-Lineage Knowledge is the understanding of emergence and the turning away from the external. This knowledge emerges from formations as signs and turns away from their occurrence. The object of consciousness is twofold: sign (nimitta) and occurrence (pavatta). Sign is the mode; occurrence implies the occurring of defilements and formations. At the stage of change-of-lineage, consciousness abandons the sign so that almost automatically it becomes aware of that reality which is signless. In other words, it takes as its object Nibbāna. At this stage, defilements as such are not yet destroyed. But the tendency of the mind to grasp formations by means of signs and modes is discontinued, and thus the
signs associated with the defilements are transcended. This particular tendency had already been broken down to a great extent in the preceding course of insight meditation as, for instance, when breath becomes imperceptible and the consciousness of the body is lost. However, when the mind emerges from the sign at change-of-lineage, it is irreversible. During the preceding stages of knowledge, up to and including equanimity about formations, a fall away from onward progress is possible. But when change-of-lineage occurs, the attainment of the supramundane path is assured. Whereas preliminary work, access and conformity are mundane (lokiya) and the path and fruit supramundane (lokuttara), change-of-lineage has an intermediary position. The Visuddhimagga illustrates the transition to the path thus:

Suppose a man wanted to leap across a broad stream and establish himself on the opposite bank, he would run fast and seizing a rope fastened to the branch of a tree on the stream's near bank and hanging down, or a pole, would leap with his body tending, inclining and leaning towards the opposite bank, and when he had arrived above the opposite bank, he would let go, fall on the opposite bank,
staggering first and then steadying himself there; so, too, this meditator who wants to establish himself on Nibbāna, the bank opposite the kinds of becoming, generation, destiny, station and abode, runs fast by means of the contemplations of rise and fall, etc., and seizing with conformity's adverting to impermanence, pain or not-self, the rope of materiality fastened to the branch of his selfhood and hanging down, or one among the poles beginning with feelings, he leaps with the first conformity-consciousness without letting go and with the second he tends, inclines and leans towards Nibbāna like the body that was tending, inclining and leaning towards the opposite bank; then being with the third next to Nibbāna, which is now attainable, like the others arriving above the opposite bank, he lets go that formation as object with the ceasing of that consciousness and with the change-of-lineage consciousness he falls on to the unformed Nibbāna, the bank opposite, but staggering as the man did, for lack of (previous) repetition, he is not yet properly steady on the single object. After that he is steadied (in Nibbāna) by Path Knowledge. (Vism. XXII,6)
3. The Supramundane Paths and Fruits

In the same cognitive series, immediately after the mind-moment of change-of-lineage comes the supramundane Path-Knowledge, followed directly by its corresponding fruition. Both the Path-Knowledge and Fruit-Knowledge take Nibbāna as their object. The path (magga) lasts for only a single moment of consciousness, whereas fruition (phala) occurs for either two or three mind-moments. For those of sharp faculties who skipped the phase of preliminary work, three moments of fruition occur; for others there are only two moments of fruition. All these events, the three preparatory moments, the path and fruition, belong to a single cognitive series, called the cognitive series of the path because it brings the liberating knowledge of the path. After this cognitive series there occurs a fresh cognitive series which reviews the path attainment. This Reviewing-Knowledge takes formations as its object, not Nibbāna as do the paths and fruits. (See Appendix 3.)

Path-consciousness has the nature of emerging from both sign and occurrence. The understanding of emergence and turning away from both (i.e. from the occurrence of defilements and from the sign of aggregates produced
by them) is knowledge of the path (Ps.I,69). Up to this point the meditator had already become convinced that formations are painful and that their cessation, Nibbāna, is bliss. Now, with the path, he actually realizes this through direct seeing of Nibbāna. The Paṭisambhidāmagga says: "Seeing that formations are painful and that cessation is blissful is called the understanding of emergence and turning away from both (defilements and formations). That knowledge touches the Deathless State" (Ps.I,70).

The Milindapañha describes the transition from insight contemplation of formations to the realization of Nibbāna by the path as follows: "That consciousness of his, while mentally traversing the range of reflection back and forth, transcends the continuous occurrence of formations and alights upon non-occurrence. One who, having practised rightly, has alighted upon non-occurrence, O King, is said to have realized Nibbāna" (p. 326).

It is for the attainment of this supramundane path that the meditator has done all his practice. The aim of all his endeavours in fulfilling virtue and in developing meditation was the arousing of this path-consciousness. The path-consciousness accomplishes four functions in a single moment, one regarding each of the Four Noble Truths:
(1) it penetrates the truth of suffering by fully understanding it;
(2) it penetrates the truth of suffering's origin (craving) by abandoning it;
(3) it penetrates the truth of the path (the Noble Eightfold Path) by developing it;
(4) it penetrates the truth of suffering's cessation (Nibbāna) by realizing it.

This exercise of four functions simultaneously can be illustrated by the sunrise. With the rising of the sun, visible objects are illuminated, darkness is dispelled, light appears and cold is allayed. As the sun illuminates visible objects, so Path-Knowledge fully understands suffering; as the sun dispels darkness, so Path-Knowledge abandons the origin of suffering; as the sun causes light to be seen, so Path-Knowledge (as right view) develops the (other) path factors; as the sun allays cold, so Path-Knowledge realizes the cessation which is the tranquillization of defilements.

There are four supramundane paths which must be passed through to reach full purification and liberation: the path of Stream-entry (sotāpattimagga), the path of Once-return (sakadāgāmimagga), the path of Non-return (anāgāmimagga) and the path of Arahantship (arahattamagga). These four paths have to be
attained in sequence. Attainment of all four can occur in a single life, or it can be spread out over several lifetimes; but once the first path is reached, the meditator is assured of never falling away and is bound to reach the final goal in at most seven lives. Each path arises only once. Each has its own particular range of defilements to burst. When a path arises, immediately, by the power of knowledge, it bursts the defilements within its range.

The first path, the path of Stream-entry, breaks the three fetters of personality view, doubt and clinging to rules and rituals. One who passes through this path and its fruition becomes a Stream-enterer (sotāpanna). He has entered the stream of the Dhamma, is forever liberated from the possibility of rebirth in the four lower planes (see above, p. 55), and will be reborn at most seven more times in the human or heavenly worlds.

The second path, the path of Once-return, does not eradicate any defilements completely but greatly reduces the roots: greed, hatred and delusion. One who dies as a Once-returner (sakadāgāmi) will be reborn in the human world only one more time before attaining deliverance.

The third path, the path of Non-return, bursts the two fetters of sensual desire and aversion. One who passes away as a Non-returner (anāgāmi) will not be reborn at all in the sense-sphere realm; he is reborn only in the higher Brahma worlds, where he attains final deliverance.

The fourth path, the path of Arahantship, eradicates the five subtle fetters: desire for fine-material existence (in the Brahma worlds), desire for non-material existence (in the formless worlds), conceit, restlessness and ignorance. The Arahant or liberated one is free from all bondage to saṃsāra. He lives in the full attainment of deliverance.

Purification by Knowledge and Vision, the seventh and last purification, consists in the knowledge of the four supramundane paths. Following each path, its own respective fruition occurs as its immediate result. Whereas the path performs the task of breaking up defilements, fruition experiences the bliss of Nibbāna when this demanding exertion subsides: "The understanding of the relaxation of endeavour is Knowledge of Fruition" (Ps.I,71). Since the fruition-consciousness immediately follows the knowledge of the path without a time-lag, the path-concentration is called concentration-with-immediate-result (ānantarika-samādhi). This indescribably keen concentration enables wisdom to cut through the range of defilements and purify the mental continuum. The Paṭisambhidāmagga states: "The understanding of the eradication of defilements owing to the purity of non-distraction is knowledge of concentration-with-immediate-result" (Ps.I,2). The commentaries record that some held the view that Fruition-Knowledge arises a number of hours or days after Path-Knowledge; however, the term with-immediate-result (ānantarika) irrefutably conveys the sense of immediacy (literally, "without an interval"). Hence that dissentient view is groundless.

4. Reviewing Knowledge (Paccavekkhaṇa Ñāṇa)

After fruition there occurs Reviewing Knowledge. With this knowledge the meditator reviews five things: the path, its fruition, the defilements abandoned, the defilements remaining, and Nibbāna. Such is the case for Stream-enterers, Once-returners and Non-returners. But the Arahant has no reviewing of remaining defilements, as he has cut them off entirely. Thus there is a maximum of nineteen reviewings, though some disciples may not review defilements abandoned and remaining. Some fail to undertake this reviewing immediately because of the exhilarating joy of attainment. However, they can review their attainment upon later reflection. The dissentient view that there is an interval between path-consciousness and fruition-consciousness could have arisen due to a misunderstanding of such instances of later recollection. The reviewing is not a deliberate act but something that occurs as a matter of course. Hence there is nothing wrong if it takes place afterwards.

With the attainment of the first three fruitions, the meditator, at the time of reviewing, gains the conviction that one essential part of his task is done. When the fruit of Arahantship is attained through the knowledge of the fourth path, he wins the blissful realization that his task has been fully accomplished: "He understands: Destroyed is birth, the holy life has been lived, what had to be done has been done, there is nothing further beyond this" (M.I,41; M.L.S. I, p. 50).
123 Conclusion We have provided a general sketch of the Seven Stages of Purification and the sequence of insight knowledges. This is by no means a comprehensive survey of the field of meditation. At the outset of practice, a beginner must understand clearly the method of mental noting. Any laxity in this respect is bound to mar or retard one s progress in meditation. So one should pursue this practice of mental noting with faith and diligence. In all types of meditation, mindfulness and full awareness should receive special attention. A meditator should not disclose to others his level of progress, for to proclaim one s attainments is normally due to defilements. However, for the purpose of getting instructions, one may disclose one s experiences to a suitable person, such as a teacher or an advanced practitioner. In ancient times, to kindle a fire one had to go on rubbing two kindling-sticks together for a long time, unceasingly. If, after rubbing the sticks together a few times until they became a little warm, one stopped to rest, one had to start the process all over again. Therefore, to make a fire with kindling-sticks, one has to go on rubbing ceaselessly however long it might take 123
124 until fire is produced. The meditator has to proceed in the same way. He cannot succeed if he practises by fits and starts. He must apply himself to meditation without a break until the Supreme Goal of his endeavour is realized. Knowing and seeing the eye, monks, as it really is, knowing and seeing forms as they really are, knowing and seeing eyeconsciousness as it really is, knowing and seeing eye-contact as it really is, and knowing and seeing whatever feeling pleasant, unpleasant, or neither pleasant nor unpleasant arises dependent on eye-contact as it really is, one gets not attached to the eye, gets not attached to forms, gets not attached to eyeconsciousness, gets not attached to eyecontact, and gets not attached even to that feeling that arises dependent on eyecontact. And for him as he abides unattached, unfettered, uninfatuated, contemplating the peril (in the eye, etc.), the five aggregates of grasping go on to future diminution. That craving which makes for re-becoming, which is accompanied by delight and lust, finding delight here and there, decreases in him. His bodily disturbances 124
125 cease, his mental disturbances cease; his bodily afflictions cease, his mental afflictions cease; his bodily distresses cease, his mental distresses cease; and he experiences physical and mental happiness. Whatever view such a one has, that becomes for him Right View, whatever intention he has, that becomes for him Right Intention; whatever effort he puts forth, that becomes for him Right Effort; whatever mindfulness he has, that becomes for him Right Mindfulness; and whatever concentration he has, that becomes for him Right Concentration. But his bodily actions and his verbal actions and his livelihood have already been purified earlier. So this Noble Eightfold Path comes to be perfected in him by development. While this Noble Eightfold Path is being developed by him thus, the four foundations of mindfulness also go on to fulfilment through development and the four right efforts and the four bases of psychic power and the five spiritual faculties and the five powers and the seven factors of enlightenment go on to fulfilment through development. And in him these two things occur coupled together: serenity and insight. Those 125
126 things that should be fully understood by direct knowledge he fully understands by direct knowledge. Those things that should be abandoned by direct knowledge he abandons by direct knowledge. Those things that should be developed by direct knowledge he develops by direct knowledge. And those things that should be realized by direct knowledge he realizes by direct knowledge. Mahàsaëàyatanika Sutta, M.III,287ff. 126
127 Appendix 1 The Call to the Meditative Life The intrinsic value of the life of a meditative monk is beyond estimation. There are various marvellous ways of life in this world. But there can hardly be a more marvellous way of life than that of a meditative monk. When you come to think about this, you have reason to congratulate yourself on taking up this way of life. This life of a meditative monk is not only invaluable, but pure and clean. All the other marvellous ways of life in this world are concerned with external things. They have to do with things external with external mechanics. The life of a meditator, on the other hand, is concerned with the internal mechanics the mechanics of mind-control. The Buddha was the greatest meditator of all times. The life of the meditative monk originated with him. The birth of a Buddha is an extremely rare phenomenon in the world. Not all who listen to his Dhamma take to this life of meditation; only a few of them take up the meditative life in earnest. Be happy that you are counted among these fortunate few. Think about the tranquil results following from the practice of the tranquillizing Dhamma which the Buddha has preached. If, on some 127
128 memorable day in your lives, you conceived the idea of renunciation of going forth from home to homelessness it was as the result of a powerful thought force within you. You should always recall that event as one of great significance in your lives. You were able to leave behind your father and mother, your wife and children, your relatives and friends, and your wealth, due to a powerful thought force and a spirit of renunciation aroused in you by listening to the Dhamma. You should not surrender this great will power under any circumstances. You may rest assured that the step you have taken is quite in keeping with the ideal type of going forth described in the discourses. The Sàma aphala Sutta (Discourse on the Fruits of Recluseship) of the Dãgha Nikàya portrays the true spirit of renunciation behind the act of going forth in these words: Now, a householder or a householder s son or someone born in some family or other listens to the Dhamma. And on hearing the Dhamma, he conceives faith in the Perfect One. When he is possessed of that faith he reflects: Full of hindrances is the household life a path for the dust of passions. The going forth is like being in the open air. It is not easy for one living the house- 128
129 hold life to live the holy life in all its fullness, in all its purity, with the spotless perfection of a polished conch-shell. Let me, then, cut off my hair and beard; let me clothe myself in saffron robes and let me go forth from home to homelessness. Then, before long, leaving behind his property, be it small or great, leaving behind his circle of relatives, be it small or great, he cuts off his hair and beard, he clothes himself in the saffron robes and goes forth from home to homelessness. Dãgha Nikàya I,62ff. With this kind of going forth you have stepped into an environment most congenial to the development of the mind. But, as in any other adventure, here too one has to be on one s guard against possible dangers. There are four stages in the life of a meditative monk: (1) the occasion of going forth from the household life; (2) the preliminary stage in his meditative life when he starts taming his mind in solitude with the help of a meditation subject; (3) the encountering of dangers in the course of meditation in solitude; (4) the stage of enjoying the results of his meditation. 129
130 To illustrate these stages we may, first of all, compare the going forth of a meditator to the arriving in a clearing of a jungle after passing through a thorny thicket. The household life is, in fact, a thicket full of thorns. But even though one has arrived in a clearing in the jungle, one has yet to face dangers coming from wild beasts and reptiles. So the meditator, too, in the preliminary stage of his practice has to encounter many distracting thoughts which are as dangerous as those wild beasts and reptiles. But with perseverance he succeeds in overcoming these dangers. This is like reaching a valuable tract of land after passing the dangerous area. At this stage the meditator has scored a victory over distracting thoughts. Now the world, together with its gods, looks up to him as a man of great worth and starts paying homage to him worshipfully. But then the meditator, complacent with his initial success, parades through this valuable tract of land and gets bogged down in a morass. For gain, fame and praise are comparable to a morass. Some meditators get bogged down in this morass neck-deep and are unable to step out from it. Others get stuck in it for a while but manage to scramble out. Yet others see its dangers well in time and avoid it altogether. The life of a meditator, then, is one which is not only precious, but precipitous in that it 130
131 requires a great deal of caution. I do hope that these observations will give you some food for thought so that you will continue with your meditative life with refreshed minds and renewed vigour. This meditative life should be steered with great care and caution, avoiding the rugged cliffs of aberration. If that thought force which once proceeded in the right direction lapses into an aberration halfway through, it will lose its momentum. Therefore, you should build up a keener enthusiasm and re-charge that thought force, cutting off all possibilities of lapses. 131
Appendix 2
The Eighteen Principal Insights
(From the Visuddhimagga, XX,90)

1. The contemplation of impermanence (aniccànupassanà): abandons the perception of permanence.
2. The contemplation of suffering (dukkhànupassanà): abandons the perception of pleasure.
3. The contemplation of non-self (anattànupassanà): abandons the perception of self.
4. The contemplation of disenchantment (nibbidànupassanà): abandons delighting.
5. The contemplation of fading away (viràgànupassanà): abandons lust.
6. The contemplation of cessation (nirodhànupassanà): abandons originating.
7. The contemplation of relinquishment (pañinissaggànupassanà): abandons grasping.
8. The contemplation of destruction (khayànupassanà): abandons the perception of compactness.
9. The contemplation of passing away (vayànupassanà): abandons the accumulation (of kamma).
10. The contemplation of change (vipariõàmànupassanà): abandons the perception of stability.
11. The contemplation of the signless (animittànupassanà): abandons the sign.
12. The contemplation of the desireless (appaõihitànupassanà): abandons desire.
13. The contemplation of voidness (suññatànupassanà): abandons adherence (to the notion of self).
14. The higher wisdom of insight into phenomena (adhipaññà-vipassanà): abandons adherence due to grasping at a core.
15. Correct knowledge and vision (yathàbhåta-ñàõadassana): abandons adherence due to confusion.
16. The contemplation of danger (àdinavànupassanà): abandons adherence due to attachment.
17. The contemplation of reflection (pañisankhànupassanà): abandons non-reflection.
18. The contemplation of turning away (vivaññànupassanà): abandons adherence due to bondage.

Characteristic of Impermanence: Nos. 1, 6, 8, 9, 10, 11, 14
Characteristic of Pain (Suffering): Nos. 2, 4, 5, 12, 16
Characteristic of Not-self: Nos. 3, 7, 13, 15, 17, 18
134 Appendix 3 The Cognitive Series in Jhàna and the Path The cognitive series (cittavãthi) is an explanatory tool introduced in the Abhidhamma and the commentaries to account for the organization of acts of mind into purposive sequences. In the philosophy of mind underlying the Abhidhamma, the mental process falls into two general categories. One is passive consciousness, the other active consciousness. Passive consciousness consists of a succession of momentary mental states of a uniform nature, called the lifecontinuum . Whereas the mind-moments of the lifecontinuum are all identical in nature and function, those of active consciousness are quite different from each other. With their distinct 134
135 characters and modes, these mind-moments are welded by certain laws of interrelatedness into a functionally effective sequence called the cognitive series (cittavãthi, literally, avenue of mental acts). Cognitive processes themselves are of different kinds, the principal distinction being that between a sensory process and an internal reflective process. A full sensory process consists of seventeen mind-moments. In the first part of this series, the mind adverts to the impinging sense-object, cognizes it, receives the impression, examines it and determines its nature. Up to this point the process occurs quite automatically, but following the determinative act the mind responds to the sense-object according to its own volition. It is in this phase, consisting of seven mind-moments called javanas, that fresh kamma is generated. Following the phase of javanas, the mind registers the impression, then lapses back into the life-continuum (bhavanga). In a complete reflective series of the usual kind, in which the object is a reflectively considered sense-impression, a mental image or an idea, the process is less diversified. After emerging from the continuum, the mind adverts to the object, then enters the javana phase where it forms a volitional response; finally it registers the object and lapses once more into the lifecontinuum. 135
136 Jhanic attainment and path attainment are both instances of the reflective cognitive series, but differ significantly from the usual kind of process. In the usual series the javana moments are all identical, but here they exhibit a progression of stages. In the case of jhanic attainment, following the moment of adverting, the javana phase moves through five stages: preliminary work (parikamma); access (upacàra); conformity (anuloma); change-of-lineage (gotrabhå); full absorption (appana). Some meditators start from the access stage itself without preliminary work. They are those whose spiritual faculties have already been well-prepared. Conformity is the application of the mind in accordance with the work already done, thus stabilizing one s gains. With change-of-lineage, the lineage in this context is the sense-desire sphere. This refers to the surpassing of the lineage of the sense-desire sphere and growing into (or developing) the exalted lineage (i.e. the finematerial and the immaterial spheres). The absorption stage is the jhàna itself, which can last from a single mind-moment to a long series of such moments; depending on the meditator s skill. The object of all the javana moments is the same, the counterpart sign (pañibhàganimitta). We can depict the jhanic process as follows: 136
Lc   Md   Pw   Acc  Con  Chl  Abs
***  ***  ***  ***  ***  ***  ***

Lc  Life-continuum
Md  Mind-door adverting
Pw  Preliminary work
Acc Access
Con Conformity
Chl Change-of-lineage
Abs Absorption

The three asterisks in each case indicate that each mind-moment has three sub-moments: arising, persisting and dissolution. In the case of path-attainment, the preliminary stages are similar to those for jhàna, but here change-of-lineage involves surpassing the mundane plane to develop the supramundane. The culmination of the process is the path and fruit. The path invariably lasts only for one moment. The fruit lasts two moments when preliminary work is included, three moments when preliminary work is omitted. A full path-attainment can be depicted thus:

Lc   Md   Pw   Acc  Con  Chl  P    F    F    Lc
***  ***  ***  ***  ***  ***  ***  ***  ***  ***

P  Path
F  Fruit
138 Appendix 4 Oneness It is said in the Pañisambhidàmagga: The mind cleansed in these six respects becomes purified and reaches oneness. And what are these onenesses? (1) The oneness aroused by the recollection of liberality; (2) the oneness aroused by the occurrence of the sign of serenity meditation; (3) the oneness aroused by the occurrence of the characteristic of dissolution; and (4) the oneness aroused by the occurrence of cessation. The oneness brought about by the recollection of liberality applies to those who are of a generous disposition. The oneness aroused by the occurrence of the sign of serenity meditation is attainable by those who apply themselves to the development of the mind. The oneness aroused by the occurrence of the characteristic of dissolution is peculiar to those who develop insight meditation. The oneness aroused by the occurrence of cessation is an experience of the Noble Ones. Ps.I,166ff. 138
139 The oneness referred to here is none other than concentration. In this context, however, it is reckoned as fourfold according to the way in which various individuals come by that concentration. Out of these four, the first type of concentration can be attained either by reflecting on a particular act of liberality one has recently performed, or by mentally dwelling on other charitable deeds lying to one s credit. The second type of oneness is the concentration leading to the exalted meditations which are still on the mundane level. It is also called absorption concentration. This comprises the four jhànas (absorptions) pertaining to the finematerial realms and the four meditative attainments of the four immaterial realms. The third type of oneness is the concentration arisen in the course of insight meditation by way of reflection on the nature of sankhàras, or formations. Even without attaining a concentration of mind by means of any serenity meditation as such, a meditator practising insight meditation directs his mind to a particular section of formations. Now, if he goes on reflecting with perseverance, he will reach this oneness this concentration. Ultimately, even this concentration will gather the same degree of strength as absorption concentration. As the meditator equipped with this kind of concentra- 139
140 tion continues to reflect on the formations, insight knowledges will develop. And at whatever moment he attains the supramundane path, that path-consciousness comes to be reckoned as a jhàna in itself, since it has some affinity with the factors proper to jhànas, such as the first jhàna. What are known as transcendental meditations in Buddhism are these supramundane levels of concentration within reach of the pure insight meditator. The fourth type of oneness mentioned above is the concentration which the Noble Ones achieve when they attain to the fruits of the noble path (see p. 119). It is called the oneness aroused by the occurrence of cessation because it has Nibbàna as its object. The Noble Ones who have attained to a path-consciousness such as that of the Stream-enterer are able to re-arouse its fruit and enjoy the bliss of Nibbàna again and again. This is the normal practice of Noble Ones who have attained to one of the four stages of realization. 18 One thing worth mentioning in this connection is that if the meditators practising insight meditation have already obtained either an access concentration or an absorption concentration through some kind of serenity medita- 18. Sotàpanna (Stream-enterer); Sakadàgàmi (Once-returner); Anàgami (Non-returner); Arahant (the Accomplished One). 140
141 tion, it will be comparatively easy for them to achieve the desired results. On the other hand, one who takes up the practice of pure insight meditation without any prior experience in concentration will have to put forth, from the very start, an unremitting endeavour until the desired results are attained. He should, in fact, give up all expectations for his body and life in an all-out struggle to reach the Supreme Goal. 141
142 About the Author The author of this treatise, the Venerable Matara Sri àõàràma Mahàthera, was born in the town of Matara in southern Sri Lanka in the year He received his initial ordination (pabbajjà) as a novice monk in 1917 and his higher ordination (upasampadà) in He underwent a traditional monastic training and in the course of his higher education in the temple gained proficiency in knowledge of the Dhamma and in the scriptural languages, Pali and Sanskrit. While still living in the temple he already evinced a keen interest in meditation; subsequently, beginning in 1945, he left the confines of temple life and took to the life of a forest monk, dwelling and meditating in forest monasteries and meditation centres. In 1951 his patronage was sought by the Sri Kalyàni Yogàshramiyà Saüsthà, an organization of meditation centres founded by the Venerable K. Sri Jinavaüsa Mahàthera. This organization, which counts well over fifty branch centres in Sri Lanka, conferred upon him the eminent position of mahopàdhyàya, chief preceptor and teacher, a position he held up to his death. When a group of Burmese meditation masters headed by the Venerable Mahasi Sayadaw visited Sri Lanka in 1958, the Venerable àõàràma undertook a course of intensive training in the 142
143 Burmese system of insight (vipassanà) meditation under the guidance of the Venerable U Javana, a senior pupil of Mahasi Sayadaw. In recognition of his ability, the Burmese meditation masters imparted to him the complete training necessary to become a fully qualified meditation master (kammaññhànàcariya). An opportunity to apply this training and skill towards the guidance of others came in 1967, when he was invited to become the resident meditation master of the newly opened Mitirigala Nissaraõa Vanaya, an austere meditation monastery founded by Mr. Asoka Weeraratne (now Venerable Bhikkhu Dhammanisanthi). As the meditation master of Mitirigala Nissaraõa Vanaya, the venerable author gave instructions in meditation to a wide circle of meditators, including monks from Western countries. The Venerable àõàràma passed away in April 1992, in his 92nd year, after a brief illness. In addition to the present work the venerable author has four other publications in Sinhala to his credit, Bhàvanà Màrgaya, an exposition of the path of meditation, Vidarshanà Parapura, a work on instruction and practice in the lineage of insight meditation, Samatha-vidarsanà Bhàvanà Màrgaya, on meditation for calm and insight, and Sapta Anupassanà, on the seven contemplations of insight. 143
or its equivalent to cover air mail postage. Write to:

The Hony. Secretary
Buddhist Publication Society
P.O. Box 61
54, Sangharaja Mawatha
Kandy, Sri Lanka
Nuts has a flexible moduling system that makes organizing components easier.
import
Each Nuts configuration file is a module. A module file can explicitly import other configuration module files.
syntax
The <import> tag is used to do the importing.
The attributes of the <import> tag are:
- includes. Mandatory attribute. It determines which components are included in this import. Either "*" or an explicit list of names must be specified. A wildcard can be appended to any valid id to denote a search by prefix.
- excludes. Optional attribute. The syntax is the same as for "includes", except that it specifies the list of names excluded from the import.
- namespace. Optional attribute. If "namespace" is not specified, the imported components are put directly into the global namespace. When it is specified (for example, namespace="my"), all imported components are named in the pattern "my.xxx", where xxx is the original name defined in the original file.
- file. Denotes the filename to be imported. The path is relative to the location of the current configuration file.
- resource. Denotes the name of a resource that can be loaded from either the current class loader or an alternative class path. "file" and "resource" cannot both be specified.
- classpath. The alternative class path to search for the resource. This attribute can only be specified when "resource" is specified.
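As an illustrative sketch (the file name and component ids below are invented for the example), an import that pulls two named components from another module file under a namespace could look like this:

```xml
<!-- Imports the components "parser" and "printer" from tools.xml,
     registering them in this module as "t.parser" and "t.printer". -->
<import includes="parser, printer"
        namespace="t"
        file="tools.xml"/>
```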
Odds and Ends
The syntax of <import> is easy. But there are a few things to be noted:
- The same module file or resource can be imported more than once. Such a configuration is legal as long as "includes", "excludes", and "namespace" are properly used so that there is no name clash.
- A module file is evaluated at most once. Even when "a.xml" is imported twice, the components in it are evaluated only once. Thus, singleton components remain singletons even when they are imported more than once and registered under different keys.
- <import> may break auto-wiring. Bytype autowiring can be broken because there may be more than one object of the same type; byname and byQualifiedName autowiring can also be broken because the components may now be registered under a namespace.
- Refrain from using the wildcard character "*" for the "includes" attribute. When "*" is abused, it can easily become a maintenance nightmare, because it is not easy to know exactly what is imported.
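For instance, importing the same file twice is legal when namespaces keep the registered names apart (the file and component names here are invented for illustration):

```xml
<!-- a.xml is evaluated only once, but its "parser" component is
     registered under two keys: "first.parser" and "second.parser". -->
<import includes="parser" namespace="first" file="a.xml"/>
<import includes="parser" namespace="second" file="a.xml"/>
```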
Dependency Injection for Configuration Modules
Typically, dependencies of components are all manually wired within XML configuration files. Yet it may still be useful at times to provide dependencies to an XML configuration file from within Java.
Such a requirement can be met by declaring global dependencies with the "depends" attribute of the <module> tag.
For example:
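A module declaring such global dependencies might look like the following sketch; the component definitions inside are omitted, and the exact markup beyond the depends attribute is an assumption for illustration:

```xml
<!-- Sketch: this module cannot be instantiated until the caller
     supplies components for the keys "account_id" and "balance". -->
<module depends="account_id, balance">
  <!-- component definitions that reference account_id and balance
       would go here -->
</module>
```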
The above configuration module is not self-contained because it depends on two components named "account_id" and "balance" to be injected. When instantiating, the keys "account_id" and "balance" have to be present in the container.
The following Java code injects the dependencies and glues everything together:
The myaccount_id and mybalance will then be wired to the BankAccount object.
Created by benyu
On Sun Nov 20 18:11:39 CST 2005
Using TimTam
Users solution, disasters. And, above all, you’ll discover that hacking BSD is fun. So, pull your chair up to your operating system of choice and let’s start hacking.

Hack 1: Get the Most Out of the Default Shell

right at home. If you’re new to the command line or consider yourself a terrible typist, read on. Unix might be a whole lot easier than you think.
NetBSD and OpenBSD also ship with the C shell as their default shell. However, it is not always the same tcsh, but often its simpler variant, csh, which doesn’t support all of the tricks provided in this hack.
However, both NetBSD and OpenBSD provide a tcsh package in their respective package collections.
History and Auto-Completion
I hate to live without three keys: up arrow, down arrow, and Tab. In fact, you can recognize me in a crowd, as I’m the one muttering loudly to myself if I’m on a system that doesn’t treat these keys the way I expect to use them.
tcsh uses the up and down arrow keys to scroll through your command history. If there is a golden rule to computing, it should be: “You should never have to type a command more than once.” When you need to repeat a command, simply press your up arrow until you find the desired command. Then, press Enter and think of all the keystrokes you just saved yourself. If your fingers fly faster than your eyes can read and you whiz past the right command, simply use the down arrow to go in the other direction.
The Tab key was specifically designed for both the lazy typist and the terrible speller. It can be painful watching some people type out a long command only to have it fail because of a typo. It’s even worse if they haven’t heard about history, as they think their only choice is to try typing out the whole thing all over again. No wonder some people hate the command line!
Tab activates auto-completion. This means that if you type enough letters of a recognizable command or file, tcsh will fill in the rest of the word for you. However, if you instead hear a beep when you press the Tab key, it means that your shell isn’t sure what you want. For example, if I want to run sockstat and type:% so
then press my Tab key, the system will beep because multiple commands start with so. However, if I add one more letter:% soc
and try again, the system will fill in the command for me:
% sockstat
Editing and Navigating the Command Line
There are many more shortcuts that can save you keystrokes. Suppose I’ve just finished editing a document. If I press my up arrow, command, but it would be much easier to hold down the Ctrl key and press a . That would bring me to the very beginning of that command so I could replace the vi with wc . For a mnemonic device, remember that just as a is the first letter of the alphabet, it also represents the first letter of the command at a tcsh prompt.
I don’t have to use my right arrow to go to the end of the command in order to press Enter and execute the command. Once your command looks like it should, you can press Enter. It doesn’t matter where your cursor happens to be.
Sometimes you would like your cursor to go to the end of the command. Let’s say I want to run the word count command on two files, and right now my cursor is at the first c in this command:
% wc mydocs/today/verylongfilename
If I hold down Ctrl and press e , the cursor will jump to the end of the command, so I can type in the rest of the desired command. Remember that e is for end.
Finally, what if you’re in the middle of a long command and decide you’d like to start from scratch, erase what you’ve typed, and just get your prompt back? Simply hold down Ctrl and press u for undo.
If you work in the Cisco or PIX IOS systems, all of the previous tricks work at the IOS command line.
Did you know that the cd command also includes some built-in shortcuts? You may have heard of this one: to return to your home directory quickly, simply type:

% cd
That’s very convenient, but what if you want to change to a different previous directory? Let’s say that you start out in the /usr/share/doc/en_US. ISO8859-1/books/handbook directory, then use cd to change to the /usr/ X11R6/etc/X11 directory. Now you want to go back to that first directory. If you’re anything like me, you really don’t want to type out that long directory path again. Sure, you could pick it out of your history, but chances are you originally navigated into that deep directory structure one directory at a time. If that’s the case, it would probably take you longer to pick each piece out of the history than it would be to just type the command manually.
Fortunately, there is a very quick solution. Simply type:

% cd -
Repeat that command and watch as your prompt changes between the first and the second directory. What, your prompt isn’t changing to indicate your current working directory? Don’t worry, “Useful tcsh Shell Configuration File Options” [Hack #2] will take care of that.
Learning from Your Command History
Now that you can move around fairly quickly, let’s fine-tune some of these hacks. How many times have you found yourself repeating commands just to alter them slightly? The following scenario is one example.
Remember that document I created? Instead of using the history to bring up my previous command so I could edit it, I might have found it quicker to type this:
% wc !$
wc mydocs/today/verylongfilename
      19      97     620 mydocs/today/verylongfilename

The !$ tells the shell to take the last parameter from the previous command. Since that command was:
% vi mydocs/today/verylongfilename
it replaced the !$ in my new command with the very long filename from my previous command.
The ! (or bang!) character has several other useful applications for dealing with previously issued commands. Suppose you’ve been extremely busy and have issued several dozen commands in the last hour or so. You now want to repeat something you did half an hour ago. You could keep tapping your up arrow until you come across the command. But why search yourself when ! can search for you?
For example, if I’d like to repeat the command mailstats , I could give ! enough letters to figure out which command to pick out from my history:
% !ma
! will pick out the most recently issued command that begins with ma. If I had issued a man command sometime after the mailstats command, tcsh would find that instead. This would fix it, though:

% !mai
If you're not into trial and error, you can view your history by simply typing:

% history
If you’re really lazy, this command will do the same thing:
% h
Each command in this history will have a number, and you can repeat any command by giving ! its associated number. Perhaps you also find it frustrating typing one letter, tabbing, typing another letter, tabbing, and so on until auto-complete works. If I type:
% ls -l b
then hold down the Ctrl key while I press d :
backups/ bin/ book/ boring.jpg
ls -l b
I'll be shown all of the b possibilities in my current directory, and then my prompt will return my cursor to what I've already typed. In this example, if I want to view the size and permissions of boring.jpg, I'll need to type up to here:

% ls -l bor
before I press the Tab key. I’ll leave it up to your own imagination to decide what the d stands for.
See Also
- man tcsh
Make the shell a friendly place to work in.
Now that you’ve had a chance to make friends with the shell, let’s use its configuration file to create an environment you’ll enjoy working in. Your prompt is an excellent place to start.
Making Your Prompt More Useful
The default tcsh prompt displays % when you’re logged in as a regular user and hostname# when you’re logged in as the superuser. That’s a fairly useful way to figure out who you’re logged in as, but we can do much better than that.
Each user on the system, including the superuser, has a .cshrc file in his home directory. Here are my current prompt settings:
dru@~:grep prompt ~/.cshrc
if ($?prompt) then
set prompt = "%B%n@%~%b: "
That isn’t the default tcsh prompt, as I’ve been using my favorite customized prompt for the past few years. The possible prompt formatting sequences are easy to understand if you have a list of possibilities in front of you. That list is buried deeply within man cshrc , so here’s a quick way to zero in on it:
dru@~:man cshrc
/prompt may include
Here I’ve used the / to invoke the manpage search utility. The search string prompt may include brings you to the right section, and is intuitive enough that even my rusty old brain can remember it.
If you compare the formatting sequences shown in the manpage to my prompt string, it reads as follows:
set prompt = "%B%n@%~%b: "
That’s a little dense. Table 1-1 dissects the options.
Table 1-1. Prompt characters

%B  start bold mode
%n  the username
%~  the current working directory, abbreviating your home directory to ~
%b  end bold mode

Any characters that aren't formatting sequences, such as the @ and the trailing ": ", appear in the prompt literally.
With this prompt, I always know who I am and where I am. If I also needed to know what machine I was logged into (useful for remote administration), I could also include %M or %m somewhere within the prompt string.
The superuser's .cshrc file (in /root, the superuser's home directory) has an identical prompt string. This is very fortunate, as it reveals something you might not know about the su command, which is used to switch users. Right now I'm logged in as the user dru and my prompt looks like this:
dru@/usr/ports/net/ethereal:
Watch the shell output carefully after I use su to switch to the root user:
dru@/usr/ports/net/ethereal: su
dru@/usr/ports/net/ethereal:
Things seem even more confusing if I use the whoami command:
dru@/usr/ports/net/ethereal: whoami
dru
However, the id command doesn’t lie:
dru@/usr/ports/net/ethereal: id
uid=0(root) gid=0(wheel) groups=0(wheel), 5(operator)
It turns out that the default invocation of su doesn’t actually log you in as the superuser. It simply gives you superuser privileges while retaining your original login shell.
If you really want to log in as the superuser, include the login (-l) switch:
dru@/usr/ports/net/ethereal: su -l
root@~: whoami
root
root@~: id
uid=0(root) gid=0(wheel) groups=0(wheel), 5(operator)
I highly recommend you take some time to experiment with the various formatting sequences and hack a prompt that best meets your needs. You can add other features, including customized time and date strings and command history numbers [Hack #1], as well as flashing or underlining the prompt.
Setting Shell Variables
Your prompt is an example of a shell variable. There are dozens of other shell variables you can set in .cshrc. My trick for finding the shell variables section in the manpage is:
dru@~:man cshrc
/variables described
As the name implies, shell variables affect only the commands that are built into the shell itself. Don’t confuse these with environment variables, which affect your entire working environment and every command you invoke.
If you take a look at your ~/.cshrc file, environment variables are the ones written in uppercase and are preceded with the setenv command. Shell variables are written in lowercase and are preceded with the set command.
You can also enable a shell variable by using the set command at your command prompt. (Use unset to disable it.) Since the variable affects only your current login session and its children, you can experiment with setting and unsetting variables to your heart’s content. If you get into trouble, log out of that session and log in again.
If you find a variable you want to keep permanently, add it to your ~/.cshrc file in the section that contains the default set commands. Let’s take a look at some of the most useful ones.
If you enjoyed Ctrl-d from “Get the Most Out of the Default Shell” [Hack #1], you’ll like this even better:
set autolist
Now whenever you use the Tab key and the shell isn’t sure what you want, it won’t beep at you. Instead, the shell will show you the applicable possibilities. You don’t even have to press Ctrl-d first!
The next variable might save you from possible future peril:
set rmstar
I’ll test this variable by quickly making a test directory and some files:
dru@~:mkdir test
dru@~:cd test
dru@~/test:touch a b c d e
Then, I’ll try to remove the files from that test directory:
dru@~/test:rm *
Do you really want to delete all files? [n/y]
Since my prompt tells me what directory I’m in, this trick gives me one last chance to double-check that I really am deleting the files I want to delete.
If you’re prone to typos, consider this one:
set correct=all
This is how the shell will respond to typos at the command line:
dru@~:cd /urs/ports
CORRECT>cd /usr/ports (y|n|e|a)?
Pressing y will correct the spelling and execute the command. Pressing n will execute the misspelled command, resulting in an error message. If I press e , I can edit my command (although, in this case, it would be much quicker for the shell to go with its correct spelling). And if I completely panic at the thought of all of these choices, I can always press a to abort and just get my prompt back.
If you like to save keystrokes, try:
set implicitcd
You'll never have to type cd again. Instead, simply type the name of the directory and the shell will assume you want to go there.

Hack 3: Create Shell Bindings
Train your shell to run a command for you whenever you press a mapped key.
Have you ever listened to a Windows power user expound on the joys of hotkeys? Perhaps you yourself have been known to gaze wistfully at the extra buttons found on a Microsoft keyboard. Did you know that it’s easy to configure your keyboard to launch your most commonly used applications with a keystroke or two?
One way to do this is with the bindkey command, which is built into the tcsh shell. As the name suggests, this command binds certain actions to certain keys. To see your current mappings, simply type bindkey. The output is several pages long, so I've included only a short sample. However, you'll recognize some of these shortcuts from "Get the Most Out of the Default Shell" [Hack #1].

Standard key bindings
"^A" -> beginning-of-line
"^B" -> backward-char
"^E" -> end-of-line
"^F" -> forward-char
"^L" -> clear-screen
"^N" -> down-history
"^P" -> up-history
"^U" -> kill-whole-line
Arrow key bindings
down -> history-search-forward
up -> history-search-backward
left -> backward-char
right -> forward-char
home -> beginning-of-line
end -> end-of-line
The ^ means hold down your Ctrl key. For example, press Ctrl and then l , and you’ll clear your screen more quickly than by typing clear . Notice that it doesn’t matter if you use the uppercase or lowercase letter.
Creating a Binding
One of my favorite shortcuts isn’t bound to a key by default: complete-word-fwd . Before I do the actual binding, I’ll first check which keys are available:
dru@~:bindkey | grep undefined
"^G" -> is undefined
"305" -> is undefined
"307" -> is undefined
<snip>
Although it is possible to bind keys to numerical escape sequences, I don't find that very convenient. However, I can very easily use that available Ctrl-g. Let's see what happens when I bind it:

dru@~:bindkey "^G" complete-word-fwd
When I typed in that command, I knew something worked because my prompt returned silently. Here’s what happens if I now type ls -l /etc/, hold down the Ctrl key, and repeatedly press g :
ls -l /etc/COPYRIGHT
ls -l /etc/X11
ls -l /etc/aliases
ls -l /etc/amd.map
I now have a quick way of cycling through the files in a directory until I find the exact one I want. Even better, if I know what letter the file starts with, I can specify it. Here I’ll cycle through the files that start with a :
ls -l /etc/a
ls -l /etc/aliases
ls -l /etc/amd.map
ls -l /etc/apmd.conf
ls -l /etc/auth.conf
ls -l /etc/a
Once I’ve cycled through, the shell will bring me back to the letter a and beep.
If you prefer to cycle backward, starting with words that begin with z instead of a , bind your key to complete-word-back instead.
When you use bindkey , you can bind any command the shell understands to any understood key binding. Here’s my trick to list the commands that tcsh understands:
dru@~:man csh
/command is bound
And, of course, use bindkey alone to see the understood key bindings. If you just want to see the binding for a particular key, specify it. Here’s how to see the current binding for Ctrl-g:
dru@~:bindkey "^G"
"^G" -> complete-word-fwd
Specifying Strings
What's really cool is that you're not limited to just the commands found in man csh. The -s switch to bindkey allows you to specify any string. I like to bind the lynx web browser to Ctrl-w:
dru@~:bindkey -s "^W" "lynx\n"

I chose w because it reminds me of the World Wide Web. But why did I put \n after the lynx? Because that tells the shell to press Enter for me. That means by simply pressing Ctrl-w, I have instant access to the Web.
Note that I overwrite the default binding for Ctrl-w. This permits you to make bindings that are more intuitive and useful for your own purposes. For example, if you never plan on doing whatever ^J does by default, simply bind your desired command to it.
There are many potential key bindings, so scrolling through the output of bindkey can be tedious. If you only stick with "Ctrl letter" bindings, though, it's easy to view your customizations with the following command:
dru@~:bindkey | head -n 28
As with all shell modifications, experiment with your bindings first by using bindkey at the command prompt. If you get into real trouble, you can always log out to go back to the defaults. However, if you find some bindings you want to keep, make them permanent by adding your bindkey statements to your .cshrc file. Here is an example:
dru@~:cp ~/.cshrc ~/.cshrc.orig
dru@~:echo 'bindkey "^G" complete-word-fwd' >> ~/.cshrc
Notice that I backed up my original .cshrc file first, just in case my fingers slip on the next part. I then used >> to append the echoed text to the end of .cshrc. If I’d used > instead, it would have replaced my entire .cshrc file with just that one line. I don’t recommend testing this on any file you want to keep.
Along those lines, setting:
set noclobber
will prevent the shell from clobbering an existing file if you forget that extra > in your redirector. You’ll know you just prevented a nasty accident if you get this error message after trying to redirect output to a file:
.cshrc: File exists.
See Also
- man tcsh
- “Useful tcsh Shell Configuration File Options”
[Hack #2]
Hack 4: Use Terminal and X Bindings
Take advantage of your terminal’s capabilities.
It's not just the tcsh shell that is capable of understanding bindings. Your FreeBSD terminal provides the kbdcontrol command to map commands to your keyboard. Unfortunately, neither NetBSD nor OpenBSD offer this feature. You can, however, remap your keyboard under X, as described later.
Creating Temporary Mappings
Let’s start by experimenting with some temporary mappings. The syntax for mapping a command with kbdcontrol is as follows:
kbdcontrol -f number "command"
Table 1-2 lists the possible numbers, each with its associated key combination.
Table 1-2. Key numbers
Those last three key combinations may or may not be present, depending upon your keyboard. My Logitech keyboard has a key with a Windows icon next to the left Ctrl key; that is the left GUI key. There’s another key with a Windows icon next to my right Alt key; this is the right GUI key. The next key to the right has an icon of a cursor pointing at a square containing lines; that is the Menu key.
Now that we know the possible numbers, let’s map lynx to the Menu key:
% kbdcontrol -f 64 "lynx"
Note that the command must be contained within quotes and be in your path. (You could give an absolute path, but there’s a nasty limitation coming up soon.)
If I now press the Menu key, lynx is typed to the terminal for me. I just need to press Enter to launch the browser. This may seem a bit tedious at first, but it is actually quite handy. It can save you from inadvertently launching the wrong application if you’re anything like me and tend to forget which commands you’ve mapped to which keys.
Let's see what happens if I modify that original mapping somewhat:

% kbdcontrol -f 64 "lynx www.google.ca"
kbdcontrol: function key string too long (18 > 16)
When doing your own mappings, beware that the command and its argu ments can’t exceed 16 characters. Other than that, you can pretty well map any command that strikes your fancy.
Shell Bindings Versus Terminal Bindings
Before going any further, I’d like to pause a bit and compare shell-specific bindings, which we saw in “Create Shell Bindings” [Hack #3], and the terminal-specific bindings we’re running across here.
One advantage of using kbdcontrol is that your custom bindings work in any terminal, regardless of the shell you happen to be using. A second advantage is that you can easily map to any key on your keyboard. Shell mappings can be complicated if you want to map them to anything other than “Ctrl letter”.
However, the terminal mappings have some restrictions that don’t apply to the tcsh mappings. For example, shell mappings don’t have a 16 character restriction, allowing for full pathnames. Also, it was relatively easy to ask the shell to press Enter to launch the desired command.
Terminal bindings affect only the current user’s terminal. Any other users who are logged in on different terminals are not affected. However, if the mappings are added to rc.conf (which only the superuser can do), they will affect all terminals. Since bindings are terminal specific, even invoking su won’t change the behavior, as the user is still stuck at the same terminal.
More Mapping Caveats
There are some other caveats to consider when choosing which key to map. If you use the tcsh shell and enjoy viewing your history [Hack #1], you’ll be disappointed if you remap your up and down arrows. The right and left arrows can also be problematic if you use them for navigation, say, in a text editor. Finally, if you’re physically sitting at your FreeBSD system, F1 through F8 are already mapped to virtual terminals and F9 is mapped to your GUI terminal. By default, F10 to F12 are unmapped.
If you start experimenting with mappings and find you're stuck with one you don't like, you can quickly return all of your keys to their default mappings with this command:

% kbdcontrol -F
On the other hand, if you find some new mappings you absolutely can’t live without, make them permanent. If you have superuser privileges on a FreeBSD system you physically sit at, you can carefully add the mappings to /etc/rc.conf. Here, I’ve added two mappings. One maps lynx to the Menu key and the other maps startx to the left GUI key:
keychange="64 lynx"
keychange="62 startx"
Since the superuser will be setting these mappings, the mapped keys will affect all users on that system. If you want to save your own personal mappings, add your specific kbdcontrol commands to the end of your shell configuration file. For example, I’ve added these to the very end of my ~/.cshrc file, just before the last line which says endif :
kbdcontrol -f 64 "lynx"
kbdcontrol -f 62 "startx"
Making Mappings Work with X
This is all extremely handy, but what will happen if you try one of your newly mapped keys from an X Window session? You can press that key all you want, but nothing will happen. You won’t even hear the sound of the system bell beeping at you in protest. This is because the X protocol handles all input and output during an X session.
You have a few options if you want to take advantage of keyboard bindings while in an X GUI. One is to read the documentation for your particular window manager. Most of the newer window managers provide a point and click interface to manage keyboard bindings. My favorite alternative is to try the xbindkeys_config application, which is available in the ports collection [Hack #84]:
# cd /usr/ports/x11/xbindkeys_config
# make install clean
This port also requires xbindkeys:
# cd /usr/ports/x11/xbindkeys
# make install clean
Rather than building both ports, you could instead add this line to /usr/ports/x11/xbindkeys_config/Makefile:
BUILD_DEPENDS= xbindkeys:${PORTSDIR}/x11/xbindkeys
This will ask the xbindkeys_config build to install both ports.
Once your builds are complete, open an xterm and type:
% xbindkeys --defaults > ~/.xbindkeysrc
% xbindkeys_config
The GUI in Figure 1-1 will appear.
Figure 1-1. The xbindkeys_config program
Creating a key binding is a simple matter of pressing the New button and typing a useful name into the Name: section. Then, press Get Key and a lit tle window will appear. Press the desired key combination, and voilà, the correct mapping required by X will autofill for you. Associate your desired Action:, then press the Save & Apply & Exit button.
Any keyboard mappings you create using this utility will be saved to a file called ~/.xbindkeysrc.
See Also
- man kbdcontrol
- man atkbd
- The xbindkeys web site ( xbindkeys.html)
#include <aio.h>

int aio_waitn(struct aiocb *list[], uint_t nent, uint_t *nwait,
     const struct timespec *timeout);
The aio_waitn() function suspends the calling thread until at least the number of requests specified by nwait have completed, until a signal interrupts the function, or if timeout is not NULL, until the time interval specified by timeout has passed.
To effect a poll, the timeout argument should be non-zero, pointing to a zero-valued timespec structure.
The list argument is an array of uninitialized I/O completion block pointers to be filled in by the system before aio_waitn() returns. The nent argument indicates the maximum number of elements that can be placed in list[] and is limited to _AIO_LISTIO_MAX = 4096.
The nwait argument points to the minimum number of requests aio_waitn() should wait for. Upon returning, the content of nwait is set to the actual number of requests in the aiocb list, which can be greater than the initial value specified in nwait. The aio_waitn() function attempts to return as many requests as possible, up to the number of outstanding asynchronous I/Os but less than or equal to the maximum specified by the nent argument. As soon as the number of outstanding asynchronous I/O requests becomes 0, aio_waitn() returns with the current list of completed requests.
The aiocb structures returned will have been used in initiating an asynchronous I/O request from any thread in the process with aio_read(3C), aio_write(3C), or lio_listio(3C).
If the time interval expires before the expected number of I/O operations specified by nwait are completed, aio_waitn() returns the number of completed requests and the content of the nwait pointer is updated with that number.
If aio_waitn() is interrupted by a signal, nwait is set to the number of completed requests.
The application can determine the status of the completed asynchronous I/O by checking the associated error and return status using aio_error(3C) and aio_return(3C), respectively.
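The waiting contract described above, block until at least nwait of a set of outstanding operations have completed or a timeout expires, then report what actually finished, can be sketched in Python using concurrent.futures as an analogue. This illustrates the semantics only, not the C interface; the function name waitn and its signature are invented for the illustration.

```python
import concurrent.futures as cf
import time

def waitn(futures, nwait, timeout=None):
    """Analogue of aio_waitn(): return the set of completed futures once
    at least `nwait` have finished, or once `timeout` seconds elapse."""
    deadline = None if timeout is None else time.monotonic() + timeout
    done, pending = set(), set(futures)
    while len(done) < nwait and pending:
        remaining = None
        if deadline is not None:
            remaining = max(0.0, deadline - time.monotonic())
        # Wake up whenever at least one more operation completes.
        just_done, pending = cf.wait(pending, timeout=remaining,
                                     return_when=cf.FIRST_COMPLETED)
        done |= just_done
        if deadline is not None and time.monotonic() >= deadline:
            break  # analogous to aio_waitn() failing with ETIME
    return done
```

As with aio_waitn(), the caller learns how many operations actually completed and then inspects each one individually, with future.result() playing the role of the aio_error()/aio_return() pair.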
Upon successful completion, aio_waitn() returns 0. Otherwise, it returns -1 and sets errno to indicate the error.
The aio_waitn() function will fail if:
EAGAIN
There are no outstanding asynchronous I/O requests.

EFAULT
The list[], nwait, or timeout argument points to an address outside the address space of the process. The errno variable is set to EFAULT only if this condition is detected by the application process.

EINTR
The execution of aio_waitn() was interrupted by a signal.

EINVAL
The timeout element tv_sec or tv_nsec is < 0, nent is set to 0 or > _AIO_LISTIO_MAX, or nwait is either set to 0 or is > nent.

ENOMEM
There is currently not enough available memory. The application can try again later.

ETIME
The time interval expired before nwait outstanding requests have completed.
The aio_waitn() function has a transitional interface for 64-bit file offsets. See lf64(5).
See attributes(5) for descriptions of the interface's attributes.
aio.h(3HEAD), aio_error(3C), aio_read(3C), aio_write(3C), lio_listio(3C), aio_return(3C), attributes(5), lf64(5)
This C program creates a file and stores information in it. We frequently use files for storing information that can be processed by our programs. In order to store information permanently and retrieve it later, we need to use files, and this program demonstrates creating a file and writing data to it.

Here is the source code of the C program to create a file and store information. The program was compiled and run on a Linux system, and its output is shown below.
/*
* C program to create a file called emp.rec and store information
* about a person, in terms of his name, age and salary.
*/
#include <stdio.h>
int main(void)
{
    FILE *fptr;
    char name[20];
    int age;
    float salary;

    /* open for writing */
    fptr = fopen("emp.rec", "w");
    if (fptr == NULL)
    {
        printf("File does not exist.\n");
        return 1;
    }
    printf("Enter the name\n");
    scanf("%19s", name);
    fprintf(fptr, "Name    = %s\n", name);
    printf("Enter the age\n");
    scanf("%d", &age);
    fprintf(fptr, "Age     = %d\n", age);
    printf("Enter the salary\n");
    scanf("%f", &salary);
    fprintf(fptr, "Salary  = %.2f\n", salary);
    fclose(fptr);
    return 0;
}
$ cc pgm95.c
$ a.out
Enter the name
raj
Enter the age
40
Enter the salary
4000000
Reminders for setting up an alternate python-versioned Django site on a non-root URL under WSGI on Apache
Lord help me, but that's the title I'm going with. In case it's not already obvious, you will not desire to read the following. Its existence is merely to document the tricky issues that some other sap (future-me, most likely) will encounter under a very precise set of circumstances, as enumerated in the title.
The first, and most frustipating, problem was getting mod_wsgi to use the proper Python libraries. Because I had to leave the stock Python 2.4 in place but needed 2.6 to run Django, it only dawned on me after many hours to re-configure and compile mod_wsgi with an explicit reference to the Python 2.6 version. Extra care is also needed to make sure that the other Python modules are built with the correct version and end up in the proper site-packages/ directory.

Don't forget to point to the proper Python version in the manage.py file's shebang.
The mod_wsgi documentation is a snap to read, but only once you understand it thoroughly. I eventually stumbled upon WSGIPythonPath (Edit: I meant the python-path argument to WSGIDaemonProcess, since WSGIPythonPath doesn't work in daemon mode), which did make module inclusion so much more pleasant. The more you can do with Apache directives, the more simple becomes your .wsgi file. In the end, mine was just:
import os

os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
os.environ['PYTHON_EGG_CACHE'] = '/tmp'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
Put the .wsgi file somewhere removed from your Django app directory, somewhere that's easily accessible to the httpd daemon, like DocumentRoot.
Running elsewhere than a web root is always dicey in Django. An extra complication is that WSGIScriptAlias gives your app its own root URL, but nothing in Django knows this, so you have to adjust all links with a -- preferably not hardcoded -- root path that matches the WSGIScriptAlias setting. Same goes for STATIC_URL and ADMIN_MEDIA_PREFIX in settings.py, along with matching Apache Alias directives and Directory permissions.
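One way to avoid hardcoding that root path is to derive it from the request itself: under WSGIScriptAlias, Apache passes the mount point to the application in each request's WSGI environ as SCRIPT_NAME. The helper below is a minimal sketch of the idea; build_url is my own name, not a Django or mod_wsgi API.

```python
def build_url(environ, path):
    """Prefix an app-relative path with the WSGI mount point, so links
    keep working whether the app is mounted at / or at /myapp."""
    root = environ.get('SCRIPT_NAME', '').rstrip('/')
    return root + '/' + path.lstrip('/')
```

Mounted under WSGIScriptAlias /myapp, build_url(environ, 'admin/') yields /myapp/admin/; mounted at the web root it yields /admin/, so the links follow the Apache configuration automatically.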
Oh, and if you don't include HTTP 404 and 500 error templates, Django won't run in non-DEBUG mode. I kept seeing the error that it couldn't load a 500 template and thinking that meant it was trying to show me an actual HTTP 500 error (of which I had seen plenty thus far) and couldn't find the template to do so.
Finally, if you're not running mod_wsgi in daemon mode, you'll need to restart Apache after every source code change. I must've fixed the same problem five times in five different ways before I realized that all of my fixes had worked, but I was still viewing the old code running. Ay Caramba.
So that right there is how I spent about ten hours on New Year's Day.
Archived Comments
For reference, the source code reloading behaviour is described in:
Thanks, Graham. I referred to your own WSGI articles a number of times and they helped me out of a couple of jams. Much appreciated.
On 1/26/06, Peter N. Lundblad <peter@famlundblad.se> wrote:
> We usually use the same name for the struct tag and the typedef.
Fixed.
> There is no documentation, but I guess the namespace field is the
> namespace identifier (i.e. DAV in DAV:blah) and the val field is the
> namespace URI. Is that correct?
Yes - I updated the fields to be more precise.
> > + XML_Parser xmlp;
> > + ns_t *ns_list;
>
> We have a wrapper for expat in svn_xml.h that gives us some error
> handling. Maybe use that (and extend it with namespace support)?
Perhaps. But, my expectation is a lot of the XML parsing code is
going to go into serf rather than stay in Subversion. I'm not sure
yet.
> > + const char *attr, *attr_val;
> > + attr = attrs[0] + 6;
>
> Woops, this will point at garbage if the attribute name is just xmlns.
I'm not yet trying to deal with bogus XML. I just want to get enough
parsing functional so that I can make progress on the other stuff.
> > + /* default namespace */
> > + ns = "";
>
> Do DAV prohibit declaring another default namespace or something?
It needs some namespace... *shrug*
> s/1/TRUE/
> > + }
> > +
> > + /* check for 'prop' */
> > + if (!ctx->in_prop && strcasecmp(name, "prop") == 0)
>
> Now my DAV ignorance shines through :-(, but are names case-insensitive in
> DAV?
Greg says no in another post. ;-)
> To be correct, you need to pop the namespaces of this element, but you
> know that. (I also saw the comment about the state of this code in the
> log message about check-pointing).
Yah. I added a FIXME to note this.
> expat doesn't guarantee that CDATA in an element come in one chunk, so you
> need to collect the CDATA from consecutive calls.
I'll figure that out once I get a better idea where the XML parsing
code should live.
Thanks. -- justin
Received on Fri Jan 27 03:20:34 2006
This is an archived mail posted to the Subversion Dev mailing list.
The following is a guest post by Rob Dodson.
Update: Rob updated this article on March 5, 2014, getting everything up to date, as this is a rather fast-moving technology at the moment.
Update: Updating again September 9, 2014!
Recently I was working with a client to train their internal teams on how to build web applications. During this process it occurred to me that the way we presently architect the front-end is very strange and even a bit broken. In many instances you're either copying huge chunks of HTML out of some doc and then pasting that into your app (Bootstrap, Foundation, etc.), or you're sprinkling the page with jQuery plugins that have to be configured using JavaScript. It puts us in the rather unfortunate position of having to choose between bloated HTML or mysterious HTML, and often we choose both.
In an ideal scenario, the HTML language would be expressive enough to create complex UI widgets and also extensible so that we, the developers, could fill in any gaps with our own tags. Today, this is finally possible through a new set of standards called Web Components.
Web Components?
Web Components are a collection of standards which are working their way through the W3C and landing in browsers as we speak. In a nutshell, they allow us to bundle markup and styles into custom HTML elements. What’s truly amazing about these new elements is that they fully encapsulate all of their HTML and CSS. That means the styles that you write always render as you intended, and your HTML is safe from the prying eyes of external JavaScript.
If you want to play with native Web Components I’d recommend using Chrome, since it has the best support. As of Chrome version 36, it is the first browser to ship all of the new standards.
Le Practical Example
Think about how you currently implement an image slider, it might look something like this:
<div id="slider"> <input checked="" type="radio" name="slider" id="slide1" selected="false"> <input type="radio" name="slider" id="slide2" selected="false"> <input type="radio" name="slider" id="slide3" selected="false"> <input type="radio" name="slider" id="slide4" selected="false"> <div id="slides"> <div id="overflow"> <div class="inner"> <img src="images/rock.jpg"> <img src="images/grooves.jpg"> <img src="images/arch.jpg"> <img src="images/sunset.jpg"> </div> </div> </div> <label for="slide1"></label> <label for="slide2"></label> <label for="slide3"></label> <label for="slide4"></label> </div>
See the Pen CSS3 Slider by Rob Dodson (@robdodson) on CodePen
Image slider adapted from CSScience. Images courtesy of Eliya Selhub
That’s a decent chunk of HTML, and we haven’t even included the CSS yet! But imagine if we could remove all of that extra cruft and reduce it down to only the important bits. What would that look like?
<img-slider> <img src="images/sunset.jpg" alt="a dramatic sunset"> <img src="images/arch.jpg" alt="a rock arch"> <img src="images/grooves.jpg" alt="some neat grooves"> <img src="images/rock.jpg" alt="an interesting rock"> </img-slider>
Not too shabby! We’ve ditched the boilerplate and the only code that’s left is the stuff we care about. This is the kind of thing that Web Components will allow us to do. But before I delve into the specifics I’d like to tell you another story.
Hidden in the shadows
For years the browser makers have had a sneaky trick hidden up their sleeves. Take a look at this
<video> tag and really think about all the visual goodies you get with just one line of HTML.
<video src="./foo.webm" controls></video>
There’s a play button, a scrubber, timecodes and a volume slider. Lots of stuff that you didn’t have to write any markup for, it just appeared when you asked for
<video>.
But what you’re actually seeing is an illusion. The browser makers needed a way to guarantee that the tags they implemented would always render the same, regardless of any wacky HTML, CSS or JavaScript we might already have on the page. To do this, they created a secret passageway where they could hide their code and keep it out of our hot little hands. They called this secret place: the Shadow DOM.
If you happen to be running Google Chrome you can open your Developer Tools and enable the
Show user agent shadow DOM flag. That’ll let you inspect the
<video> element in more detail.
Inside you’ll find that there’s a ton of HTML all hidden away. Poke around long enough and you’ll discover the aforementioned play button, volume slider, and various other elements.
Now, think back to our image slider. What if we all had access to the shadow DOM and the ability to declare our own tags like
<video>? Then we could actually implement and use our custom
<img-slider> tag.
Let’s take a look at how to make this happen, using the first pillar of Web Components, the template.
Templates
Every good construction project has to start with a blueprint, and with Web Components that blueprint comes from the new
<template> tag. The template tag allows you to store some markup on the page which you can later clone and reuse. If you’ve worked with libraries like mustache or handlebars before, then the
<template> tag should feel familiar.
<template> <h1>Hello there!</h1> <p>This content is top secret :)</p> </template>
Everything inside a template is considered inert by the browser. This means tags with external sources—
<img>,
<audio>,
<video>, etc.—do not make http requests and
<script> tags do not execute. It also means that nothing from within the template is rendered on the page until we activate it using JavaScript.
So the first step in creating our
<img-slider> is to put all of its HTML and CSS into a
<template>.
See the Pen CSS3 Slider Template by Rob Dodson (@robdodson) on CodePen
Once we’ve done this, we’re ready to move it into the shadow DOM.
Shadow DOM
To really make sure that our HTML and CSS doesn’t adversely affect the consumer we sometimes resort to iframes. They do the trick, but you wouldn’t want to build your entire application in ’em.
Shadow DOM gives us the best features of iframes, style and markup encapsulation, without nearly as much bloat.
To create shadow DOM, select an element and call its
createShadowRoot method. This will return a document fragment which you can then fill with content.
<div class="container"></div> <script> var host = document.querySelector('.container'); var root = host.createShadowRoot(); root.innerHTML = '<p>How <em>you</em> doin?</p>' </script>
Shadow Host
In shadow DOM parlance, the element that you call
createShadowRoot on is known as the Shadow Host. It’s the only piece visible to the user, and it’s where you would ask the user to supply your element with content.
If you think about our
<video> tag from before, the
<video> element itself is the shadow host, and the contents are the <source> tags you nest inside of it.
<video> <source src="trailer.mp4" type="video/mp4"> <source src="trailer.webm" type="video/webm"> <source src="trailer.ogv" type="video/ogg"> </video>
Shadow Root
The document fragment returned by
createShadowRoot is known as the Shadow Root. The shadow root, and its descendants, are hidden from the user, but they’re what the browser will actually render when it sees our tag.
In the
<video> example, the play button, scrubber, timecode, etc. are all descendants of the shadow root. They show up on the screen but their markup is not visible to the user.
Shadow Boundary
Any HTML and CSS inside of the shadow root is protected from the parent document by an invisible barrier called the Shadow Boundary. The shadow boundary prevents CSS in the parent document from bleeding into the shadow DOM, and it also prevents external JavaScript from traversing into the shadow root.
Translation: Let’s say you have a style tag in the shadow DOM that specifies all h3’s should have a
color of red. Meanwhile, in the parent document, you have a style that specifies h3’s should have a
color of blue. In this instance, h3’s appearing within the shadow DOM will be red, and h3’s outside of the shadow DOM will be blue. The two styles will happily ignore each other thanks to our friend, the shadow boundary.
And if, at some point, the parent document goes looking for h3’s with
$('h3'), the shadow boundary will prevent any exploration into the shadow root and the selection will only return h3’s that are external to the shadow DOM.
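A minimal sketch of both behaviors, reusing the createShadowRoot API shown earlier (the class name and text here are made up for illustration):

```html
<style>h3 { color: blue; }</style>

<div class="host"><!-- shadow host --></div>

<script>
  var host = document.querySelector('.host');
  var root = host.createShadowRoot();
  // This h3 renders red; the page-level "blue" rule can't cross the boundary.
  root.innerHTML = '<style>h3 { color: red; }</style><h3>Shadowy heading</h3>';

  // And from the outside, this selection comes back empty:
  // the shadow root's h3 is invisible to the parent document.
  console.log(document.querySelectorAll('h3').length); // 0
</script>
```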
This level of privacy is something that we’ve dreamed about and worked around for years. To say that it will change the way we build web applications is a total understatement.
Shadowy Sliders
To get our
img-slider into the shadow DOM we’ll need to create a shadow host and populate it with the contents of our template.
<template> <!-- Full of slider awesomeness --> </template> <div class="img-slider"></div> <script> // Add the template to the Shadow DOM var tmpl = document.querySelector('template'); var host = document.querySelector('.img-slider'); var root = host.createShadowRoot(); root.appendChild(document.importNode(tmpl.content, true)); </script>
In this instance we’ve created a
div and given it the class
img-slider so it can act as our shadow host.
We select the template and do a deep copy of its internals with
document.importNode. These internals are then appended to our newly created shadow root.
If you’re using Chrome you can actually see this working in the following pen.
See the Pen CSS3 Slider Shadow DOM by Rob Dodson (@robdodson) on CodePen
Insertion Points
At this point our
img-slider is inside the shadow DOM but the image paths are hard coded. Just like the
<source> tags nested inside of
<video>, we’d like the images to come from the user, so we’ll have to invite them over from the shadow host.
To pull items into the shadow DOM we use the new
<content> tag. The
<content> tag uses CSS selectors to cherry-pick elements from the shadow host and project them into the shadow DOM. These projections are known as insertion points.
We’ll make it easy on ourselves and assume that the slider only contains images, that way we can create an insertion point using the
img selector.
<template> ... <div class="inner"> <content select="img"></content> </div> </template>
Because we are projecting content into the Shadow DOM using an insertion point, we’ll also need to use the new
::content pseudo-element to update our CSS.
#slides ::content img { width: 25%; float: left; }
If you want to know more about the new CSS selectors and combinators added by Shadow DOM, take a look at this cheat sheet I threw together.
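Insertion points aren’t limited to one per template, either. A hypothetical sketch with several (the surrounding markup is invented for illustration):

```html
<template>
  <header>
    <content select="h1"></content>  <!-- pulls in the host's h1 -->
  </header>
  <section>
    <content select="img"></content> <!-- pulls in the host's images -->
  </section>
  <footer>
    <content></content>              <!-- catch-all: everything left over -->
  </footer>
</template>
```

Each node from the shadow host is distributed to the first <content> whose selector matches it, so the bare <content> at the end scoops up whatever the earlier insertion points didn’t claim.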
Now we’re ready to populate our
img-slider.
<div class="img-slider"> <img src="images/rock.jpg" alt="an interesting rock"> <img src="images/grooves.jpg" alt="some neat grooves"> <img src="images/arch.jpg" alt="a rock arch"> <img src="images/sunset.jpg" alt="a dramatic sunset"> </div>
This is really cool! We’ve cut the amount of markup that the user sees way down. But why stop here? We can take things a step further and turn this
img-slider into its own tag.
Custom Elements
Creating your own HTML element might sound intimidating but it’s actually quite easy. In Web Components speak, this new element is a Custom Element, and the only two requirements are that its name must contain a dash, and its prototype must extend
HTMLElement.
Let’s take a look at how that might work.
<template> <!-- Full of image slider awesomeness --> </template> <script> // Grab our template full of slider markup and styles var tmpl = document.querySelector('template'); // Create a prototype for a new element that extends HTMLElement var ImgSliderProto = Object.create(HTMLElement.prototype); // Setup our Shadow DOM and clone the template ImgSliderProto.createdCallback = function() { var root = this.createShadowRoot(); root.appendChild(document.importNode(tmpl.content, true)); }; // Register our new element var ImgSlider = document.registerElement('img-slider', { prototype: ImgSliderProto }); </script>
The
Object.create method returns a new prototype which extends
HTMLElement. When the parser finds our tag in the document it will check to see if it has a method named
createdCallback. If it finds this method it will run it immediately. This is a good place to do setup work, so we create some Shadow DOM and clone our template into it.
We pass the tag name and prototype to a new method on the
document, called
registerElement, and after that we’re ready to go.
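createdCallback is only one of the four lifecycle callbacks the Custom Elements (v0) spec defines. A sketch of the full set, using a made-up x-lifecycle tag:

```html
<script>
  var proto = Object.create(HTMLElement.prototype);

  proto.createdCallback = function() {
    // An instance was created: a good place to set up shadow DOM.
  };
  proto.attachedCallback = function() {
    // The element was inserted into the document.
  };
  proto.detachedCallback = function() {
    // The element was removed from the document.
  };
  proto.attributeChangedCallback = function(name, oldValue, newValue) {
    // One of the element's attributes was added, removed, or changed.
  };

  document.registerElement('x-lifecycle', { prototype: proto });
</script>
```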
Now that our element is registered there are a few different ways to use it. The first, and most straightforward, is to just use the
<img-slider> tag somewhere in our HTML. But we can also call
document.createElement("img-slider") or we can use the constructor that was returned by
document.registerElement and stored in the
ImgSlider variable. It’s up to you which style you prefer.
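The dash requirement exists so the parser can always tell custom elements apart from built-in tags. A rough sketch of that rule as a standalone helper (this function is purely illustrative, not part of any spec API, and deliberately looser than the spec’s full grammar):

```javascript
// Illustrative only: a loose check that a tag name qualifies as a
// custom element name (lowercase, starts with a letter, contains a dash).
function isValidCustomElementName(name) {
  return /^[a-z][a-z0-9]*-[a-z0-9-]*$/.test(name);
}

console.log(isValidCustomElementName('img-slider')); // true
console.log(isValidCustomElementName('slider'));     // false (no dash)
console.log(isValidCustomElementName('x-foo'));      // true
```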
Support
Support for the various standards that make up Web Components is encouraging, and improving all the time. This table illustrates where we’re presently at.
But don’t let the lack of support in some browsers discourage you from using them! The smarties at Mozilla and Google have been hard at work building polyfill libraries which sneak support for Web Components into all modern browsers! This means you can start playing with these technologies today and give feedback to the folks writing the specs. That feedback is important so we don’t end up with stinky, hard to use syntax.
Let’s look at how we could rewrite our
img-slider using Google’s Web Component library, Polymer.
Polymer to the Rescue!
Polymer adds a new tag to the browser,
<polymer-element>, which automagically turns templates into shadow DOM and registers custom elements for us. All we need to do is to tell Polymer what name to use for the tag and to make sure we include our template markup.
See the Pen Polymer Slider by Chris Coyier (@chriscoyier) on CodePen.
I find it’s often easier to create elements using Polymer because of all the niceties built into the library. This includes two-way binding between elements and models, automatic node finding and support for other new standards like Web Animations. Also, the developers on the polymer-dev mailing list are extremely active and helpful, which is great when you’re first learning the ropes, and the StackOverflow community is growing.
This is just a tiny example of what Polymer can do, so be sure to visit its project page and also checkout Mozilla’s alternative, X-Tag.
Issues
Any new standard can be controversial and in the case of Web Components it seems that they are especially polarizing. Before we wrap up, I want to open up for discussion some of the feedback I’ve heard over the past few months and give my take on it.
OMG it’s XML!!!
I think the thing that probably scares most developers when they first see Custom Elements is the notion that it will turn the document into one big pile of XML, where everything on the page has some bespoke tag name and, in this fashion, we’ll make the web pretty much unreadable. That’s a valid argument so I decided to kick the bees’ nest and bring it up on the Polymer mailing list.
The back and forth discussion is pretty interesting but I think the general consensus is that we’re just going to have to experiment to see what works and what doesn’t. Is it better, and more semantic, to see a tag name like
<img-slider> or is our present “div soup” the only way it should be? Alex Rusell composed a very thoughtful post on this subject and I’d recommend everyone take the time to read it before making up their mind.
SEO
At this moment it’s unclear how well crawlers support Custom Elements and Shadow DOM. The Polymer FAQ states:
Search engines have been dealing with heavy AJAX based application for some time now. Moving away from JS and being more declarative is a good thing and will generally make things better.
The Google Webmaster’s blog recently announced that the Google crawler will execute JavaScript on your page before indexing it. And using a tool like Fetch as Google will allow you to see what the crawler sees as it parses your site. A good example is the Polymer website, which is built with custom elements and is easily searched in Google.
One tip I’ve learned from speaking with members of the Polymer team is to try to make sure the content inside of your custom element is static, and not coming from a data binding.
<!-- probably good --> <x-foo> Here is some interesting, and searchable content... </x-foo> <!-- probably bad --> <x-foo> {{crazyDynamicContent}} </x-foo> <!-- also probably bad --> <a href="{{aDynamicLink}}">Click here</a>
To be fair, this isn’t a new problem. AJAX heavy sites have been dealing with this issue for a few years now and thankfully there are solutions out there.
Accessibility
Obviously when you’re hiding markup in secret shadow DOM sandboxes the issue of accessibility becomes pretty important. Steve Faulkner took a look at accessibility in shadow DOM and seemed to be satisfied with what he found.
Results from initial testing indicate that inclusion of ARIA roles, states and properties in content wholly inside the Shadow DOM works fine. The accessibility information is exposed correctly via the accessibility API. Screen readers can access content in the Shadow DOM without issue.
The full post is available here.
Marcy Sutton* has also written a post exploring this topic in which she explains:
Web Components, including Shadow DOM, are accessible because assistive technologies encounter pages as rendered, meaning the entire document is read as “one happy tree”.
*Marcy also points out that the img-slider I built in this post is not accessible because our css label trick makes it inaccessible from the keyboard. Keep that in mind if you’re looking to reuse it in a project.
Surely there will be bumps along the way but that sounds like a pretty great start!
Style tags? Um, no thanks.
Unfortunately
<link> tags do not work inside of the Shadow DOM, which means the only way to pull in external CSS is through
@import. In other words,
<style> tags are—for the moment—unavoidable.*
Keep in mind that the styles we’re talking about are relevant only to a component, whereas we’ve previously been trained to favor external files because they often affect our entire application. So is it such a bad thing to put a
<style> tag inside of an element, if all of those styles are scoped just to that one entity? Personally I think it’s OK, but the option of external files would be very nice to have.
* Unless you use Polymer which gets around this limitation with XHR.
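So in practice, pulling shared styles into a component today looks something like this (the stylesheet path and rules are hypothetical):

```html
<template>
  <style>
    /* <link> won't work inside a shadow root, but @import will: */
    @import url('img-slider.css');

    /* ...plus any component-scoped rules: */
    #slides { overflow: hidden; }
  </style>
  <!-- component markup -->
</template>
```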
Now it’s your turn
It’s up to us to figure out where these standards should go and what best practices will guide them. Give Polymer a shot, and also look at Mozilla’s alternative to Polymer, X-Tag (which has support all the way down to Internet Explorer 9).
Also, make sure you reach out to the developers at Google and Mozilla who are driving the bus on these standards. It’ll take our feedback to properly mold these tools into something we all want to use.
While there are still some rough edges, I think Web Components will eventually usher in a new style of application development, something more akin to snapping together Legos and less like our current approach, which is often plagued by excess boilerplate. I’m pretty excited by where all of this is heading, and I look forward to what the future might hold.
Really interesting stuff here. Another step in the modular direction. This combined with element-queries could be quite powerful.
A small bug: It looks as if the support table got the wrong figure associated with it. [Admin note]: fixed.
Sorry about that guys. I just sent a message to Chris. The support table is here for the time being.
Psssst! The browser support table is the wrong image.
I’ve been interested in learning about the Shadow DOM for some time now, this is article is a good launching pad for me. I can’t wait to see where we can go from here.
FYI, the browser support image is coming in as the Video Shadow DOM.
So which one has more support right now, x-tags or polymer? Also, why don’t they just give us the polyfills + the functionality, and exclude all the extra elements? Isn’t the point of web components to kinda write your own and add your own semantics? Or am I misunderstanding this? Anyways, thanks for the article, this has answered a lot of my questions!
X-Tags uses some of the Polymer polyfills so the two projects are pretty related. X-Tags adds additional support to IE9 so if you need to work on that platform then X-Tags will be your best bet.
You can work with just the polyfills if you want. Polymer separates them into two sections. platform.js is all the polyfills and polymer.js is all the framework-y bits (data binding, stuff like that). You can also run the individual polyfills if you only want one or two things.
@Rob Dodson Thank you, I didn’t know they were separated like that. I’ve got one more question though. Even if they ever do let you use the <link> tag for CSS, because the shadow DOM doesn’t bleed out styles, wouldn’t you have to use multiple of them and thus have a slower page load time due to multiple requests? Or am I wrong/are they working on a fix for that?
The Polymer CodePen doesn’t render its result for me in Chrome 30 on Mac OS 10.6.8.
This looks like an excellent tech coming down the pipe! I work for an online university and our course media re-uses components that would fit into this model very nicely, making production much more efficient.
I have Chrome 30 and it works for me. Although I did see it show up blank once. Might need to refresh.
You indicate that there is no support for this stuff in IE, but your non-Polymer CodePen works just great in IE 11. So, what is the IE support story?
Hm that’s odd…
Microsoft has not implemented any of the specs to my knowledge. I’ve spoken to MS people about this and they’ve also publicly tweeted that they’re keeping an eye on Web Components but don’t have plans to implement them yet. I think you can follow their support of Shadow DOM here.
It might be that IE11 is treating the template tag as HTMLUnknownElement and ignoring it. Then applying the rest of the styles to the tags inside of it. I don’t have a copy of IE11 to test on but if that’s the case then it’s accidental progressive enhancement :)
Great writeup as usual!
Re. SEO, we’re going to put up a video on polymer-project.org that has a bit more to grasp than the FAQ entry. The current answer is not filling.
Re. <style> in shadow dom: FWIW, last I heard, the spec gods were considering putting back addStyleSheet(). One thing Polymer does for you is convert your <link> to <style> and inline those rules in the element. This eases development and you don’t have to know the extra bits of shadow dom nuance. When we say “Polymer’s sugaring”, this is a great example. It’s something that makes building web components easier.
Rob mentions that CSS can only be applied through a style tag in the template code, with no support for link tags currently. I know that it’s possible to alter the shadow DOM (like the video tag controls etc.) through CSS pseudo-elements selectors, as mentioned in this previous article. Would this be possible for web components as well?
Absolutely! I didn’t touch on styling the Shadow DOM because it’s such a huge topic but there are a number of ways to go about it. I’ve written a couple posts on the topic:
Styling Shadow DOM pt. 1
Styling Shadow DOM pt. 2
First off, nice article!
You didn’t discuss security in the list of issues above. Specifically, I’m struggling to understand how Web Components fit with the iframe security model.
JS security/isolation: If you have data associated with a third-party widget, you’ve still gotta load an iframe in order to execute JS or access storage without the underlying page having access, is that right?
HTML security/isolation: If the rendered widget HTML contains sensitive info that you don’t want to expose to the underlying page, then, again, you’d need to wrap it in an iframe. Is that right?
Hey Jared,
I don’t think Web Components add anything new in terms of JS or HTML security so all the current rules that you’ve cited still apply.
Rob, thanks for sharing, this is really exciting! You have inspired me to get something to work for IE7 / IE8.
I have put together a proof of concept, of a possible polyfill / alternate solution for older browsers.
My code goes down a completely different route to the “standard”, but what I was hoping to achieve was some of the same end-goals, but in older versions of IE. I have only implemented some basics for now, but my code could be improved to add some of the other nice features you described. Maybe someone can extract the useful bits of my code and make a completely cross-browser compatible polyfill.
Would you mind having a look here, and let me know what you think?
This is the only HTML markup to add the web component to a page:
This is really cool but I hope the
<style> tag issues get sorted. Ideally I’d want to still style these components in my global, minified CSS file using some sort of special CSS selector.
dude, best gravatar ever
The
<style> tag is primarily for the benefit of the component author. The author can expose a number of ways for the consumer to reach in and style the element from the outside. Those styles would live with the rest of your app’s CSS.
I posted a couple of links higher up in the thread to a pair of articles I wrote on styling elements. I agree that it would be awesome to use external files for everything, and it sounds like the browser developers are working on that, but it’s really tricky because of all the new scenarios Shadow DOM and HTML Imports introduce.
Generally speaking, a web component is a sort of black box. Styling is a duty of the author, but customization is possible. Here you can see how I do that for my components library.
Is Polymer supported by all the major browsers? I didn’t think it had widespread support at all yet.
Polymer works in Chrome, FF, Opera, IE 10+ and Safari 6+
Really interesting article. I can foresee HTML “libraries” similar to the ones we know in jQuery.
But since this will probably not get into IE anytime soon, I doubt I will have the chance to use it.
That’s where Polymer and X-Tags come in. They’re designed to sneak support into non-compliant browsers. I think of both libraries in the same way that I think of jQuery.
I love this sort of stuff! It will be very interesting when this becomes more standardized, but in the meantime you can create this exact type of thing through AngularJS directives.
Yep that’s true. I know the Angular team is looking at Web Components and hopefully they’ll leverage them more in future versions. I can foresee a future where directives are replaced by components.
I can see some advantages from a modular perspective. It will make sharing widgets much easier.
However I’m a little concerned this will be the equivalent to cleaning your room by shoving the mess into the closet. “You can’t see it so it must be clean, right?” The great and bad thing about HTML is that it is completely free form. In addition, it’s a very forgiving language; many syntax errors don’t get caught.
I’d be more confident in this solution if there were stronger syntax rules in place for code being produced. That way people could use web components knowing a certain level of quality was present. (Think Rails with its “convention over configuration” mantra.)
Either way the development of web components will be interesting to watch unfold.
That sounds an awful lot like XHTML. That HTML is so forgiving is a good thing! It enables many people, not just die hard coders, to easily publish and share information.
But to your point about quality, I think we already deal with this now in the JS library world. We prefer libraries that are well written, actively developed and have a tidy codebase. Web Components won’t be any different.
I can see your point Rob about stricter rules on HTML. Like I mentioned in my post the forgivable nature of HTML is a blessing and a curse. It all depends on the author. And the portable nature of web components is appealing.
I take your point about quality libraries naturally floating to the surface. But with today’s crowd sourcing mentality there’s still some pretty bad code snippets that get passed around. I think in general terms the community will govern itself, but I am still concerned that we’re masking potentially bad code by simply hiding it and making it less accessible.
One problem I forsee with custom elements is that it will create some confusion with newer Web developers. If they look at the source code on someone’s Web page, how will they knew which elements are standard, and which are custom? We should encourage authors to use a “x-” prefix in front of their element names to show they are non-standard HTML elements, or maybe require this in the Web Components spec.
Also, I noticed custom elements are created with JavaScript. What if the user has JavaScript disabled? They won’t be able to see anything of the custom element. And what will the user experience be like when first loading the page, before the JavaScript that creates the custom elements has been loaded? For example, in the
img-slider element, before the element is loaded and running, will the user just see a row of four img elements on the page?
The spec requires that all custom elements have a dash, “-“, in their tag name. That’s how you can tell they’re non-standard. Initially, if I recall, they actually tried the “x-” thing but developers found it annoying having to write “x” over and over again when the dash would convey the same thing.
Many of the HTML5 standards rely heavily on JavaScript so without it you’re going to be breaking a lot more than just Web Components. I honestly don’t know if the ability to turn it off in browsers is going to be around much longer. Firefox 23 removed the ability to do it in your settings (though you can still do it in about:flags I think). If your users are turning it off in large numbers then you probably don’t want to use Web Components.
I didn’t mention HTML Imports in this post, but I’ve written about them on my blog and Eric Bidelman has a post on HTML5 Rocks that covers them. Typically you import elements at the top of the page using a
<link rel="import"> tag. Imports load just like CSS so there is no FOUC.
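For reference, an import declaration is a one-liner in the head of the page (the file name here is hypothetical):

```html
<link rel="import" href="img-slider.html">
```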
Simply Awesome Rob, Love it. Worthy reading
Thanks, glad you enjoyed it!
I went to the meetup in Google Australia several weeks ago. Alex Danilo gave the link to the presentation he was explaining. Thought this would help here also.
Thanks Steven. That’s a really great deck, I used it a while back when I was first getting into the topic. One thing that I’ll point out is that the
<element> tag has been removed so you can no longer declare custom elements that way. It was a big change and a lot of the articles and presentations out there still use it because the spec has changed so much in such a short amount of time.
I put together a slide deck which clarifies some of the things that have changed. In particular the removal of the
<element> tag, updated nomenclature for lifecycle callbacks and also the change from ::pseudo to ::part selectors. In fact, ::part has recently changed again to ^ and ^^ (“the hat and the cat”) selectors. I’ll have to write a blog post on that and leave it in this comment thread :)
Lovely post, I like this sort of stuff. Nice blog, Rob; I read the full post and found it very interesting and easy to understand. Very good job.
First of all, you’ve got a very good HTML slider example here … :)
I’m amazed at what HTML5 is able to do these days, and the new web components you showed here are going to improve it even more. I think with these, the Flash vs HTML war finally has a winner ;)
Thank you!
It’s been a long time since I heard about @keyframes, but I didn’t think it was as easy as posted. I’m feeling blessed. Thank you Chris for the lovely post!
Safari supports Shadow DOM? Are you sure? That is contrary to everything other source I can find.
In May, they were talking about removing the code from WebKit because, following the Blink split, no port was using it.
Yeah that’s kind of a weird one. The
webkitCreateShadowRoot method works in current versions of Safari and will allow you to hide stuff in a shadow root document fragment. But as you point out, there’s talk of the Safari team removing the underlying code that allows this. So it technically works today but might not tomorrow.
I have seen some chatter from Apple folks on the Web Apps WG mailing list recently related to Custom Elements, so I’m hopeful they’re coming around.
Nice article, thanks! Web components and the shadow dom sound great, although for me, the question is nesting. We build components to simplify our lives, and we build complex components by composing simpler ones ad infinitum. I’m not seeing from this article how the shadow dom will work for that.
Hey Tim,
The <content> tag allows you to nest web components inside of one another and have the whole thing render as you would expect. There’s also a <shadow> tag which we didn’t cover. I didn’t go super deep into the topic in order to keep the length of the post down, but Dominic Cooney and Eric Bidelman do a really good job explaining these topics in their HTML5 Rocks posts: Shadow DOM 101, Shadow DOM 301
The most interesting thing about Shadow DOM is CSS isolation. A web component’s CSS rules shouldn’t affect the outer (and possibly inner) HTML. What about a rule like A { color: red !important; } inside a non-isolated web component? This was one of the main difficulties I ran into while developing WebFragments.
Nesting web components is also possible without shadow dom, it just depends on how it is implemented.
People need to accept that that’s perfectly acceptable. There’s absolutely nothing technically wrong with it and it doesn’t even go against best practices.
Great article. I am curious about something though:
In the section Insertion points you said:
How would you handle insertions, if you didn’t make this assumption?
I think if you were using native web components, you might need to build those pieces in JavaScript and pop them inside the template during the createdCallback. Or each slide could be its own custom element, and then you could nest them inside the <img-slider> tag.
If you’re using Polymer you might be able to come up with a solution using template repeat.
So, I guess there are a few ways to go about it. I wanted to keep the example as simple as possible so it wouldn’t trip people up. That’s why I decided to hand code it like that.
That is what I thought… But if that is the case, I’m not sure I see yet how Web Components are superior to, let’s say for example, jQuery plugins. Not saying it is not a cool thing, but I think Web Components still need a little more evolution.
Maybe it is just that Web Components are still a very young thing, or maybe I need to look more into it. Anyway, thank you for the article. Very interesting.
Is it compulsory to use web components only with the Polymer and X-Tag libraries…?
Hi there! I’m coming to you from the Future to say please please let’s not do this thing. I work at a company that some time in the past created these so-called “widgets” that encapsulate a bunch of markup / functionality, with the idea being you just pop them into the page and they just work! Problem is, they never work exactly how we want them to out of the box, and we’re stuck going through many, many different files that all depend on each other in mysterious ways. I am totally on board with the latest and greatest in our industry, but this black-boxing of functionality is not something we want. The maintainability of something like this is a nightmare. These are always over-engineered… because, well, the “user” developer will never see all the code in the background, right? Wrong. They’ll need to and they’ll end up cursing your name. Let’s nip this right in the bud.
This voice from the future speaketh wisely.
Hi Rob, have you ever added a Polymer element to codeopen.io? I’d like to create a Pen of a Polymer element we created. It’s a full JavaScript preloader (no GIF animation). Thanks
Quick question, please. And perhaps I read / pseudo-scanned a bit too quickly, but if one of the key ideas here is modularity and the ability to grow your own and then – ideally, I presume – share with others, aren’t there going to be naming conflicts?
Much like prefixing with x- as someone mentioned above, should there be some sort of provision to mitigate namespace issues?
Or is this minor and/or not an issue and I’m just too excited from the start of The World Cup to think straight?
https://css-tricks.com/modular-future-web-components/?utm_source=CSS-Weekly&utm_campaign=Issue-85&utm_medium=web
|
Last year I already got to see an early prototype of VLINQ. VLINQ is an add-on for
This looks like a VERY cool project! Thanks for releasing it...
I grabbed the MSI, installed it (on Vista), but don't see the solution/source. The C:\Program Files\Microsoft VLinq Query Builder\ folder only has the binaries, ico, etc.
Does the MSI contain only binaries, or does it also contain the source?
Thanks,
Greg
Ref:....
I tried to sign in on the code-website but that does not seem to work.
installed on VS 2008 Pro RTM on Vista Ultimate x86.
not working, no designer icon in vs items.
something in setup is not working.
the files are there ... looks like VS has the stuff.
perhaps we need to run devenv with a packages command ??
Very strange. Can you check the file 'VLinq queries.zip' is in \Program Files\Microsoft Visual Studio 9.0\Common7\IDE\ItemTemplates\CSharp\1033
If so, close all VS instance and try 'devenv /installvstemplates'
Thanks to Denny. Please try to run the setup 'as admin' if you're on Vista.
Hi Mitsu, great article...
I have one issue with my VS.NET 2008 Professional:
I don't see "LINQ to SQL" in Add Item...
Do you know how to fix that?
Thanks
-Vince
Hello VinceX.NET,
Did you try to run the setup as admin?
If you have both VS2005 and VS2008 installed, then after installation, run the VS2008 command line and enter "devenv.exe /installvstemplates".
Hi Mitsu!
Thanks! Seems to be a great tool!
I've tried installing it on 'VS.NET Team Suite 2008 RTM' without success. The installation ends successfully, but the "VLinq Query" item is not added to the item list. Following your guides, I checked "\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\ItemTemplates\CSharp\1033". It didn't contain the 'VLinq queries.zip' file, so I added it from the sources by hand and ran 'devenv /installvstemplates'. This time 'VLinq Query' was shown in the items list, but when I try to invoke the designer of my VLinq query, I get the error 'cannot locate resource querybagdesigner.xaml'.
My Windows is XP SP2.
Any ideas what the problem is?
Hi,
It seems that you are not on an English system.
The next release will be localized.
No! I am using the English version of Windows XP. I've just noticed one problem. My Visual Studio is installed on 'D:\Program Files\Microsoft Visual Studio v9.0', but the item templates copied by the installer are put on 'C:\Program Files\Microsoft Visual Studio v9.0'. I've put them in the correct path manually, but I still get the same problem with finding 'querybagdesigner.xaml'.
The Visual LINQ Query Builder is a Visual Studio 2008 Add-In and designer that helps you create LINQ to SQL Queries in your application.
Great! Big help - I've been teaching Linq and I think this will really help - especially with the joins.
Thank you!
Visual LINQ Query Builder add-in looks very cool
My source code reading didn't have any rhyme or reason to it this week, but most of them were large,
Visual Linq query builder for Linq to Sql: VLinq. The Visual Linq query builder is a Visual Studio 2008...
Hi Hi!
Installed on Vista x86 Visual Studio Team System 2008...the install completed fine, and I see the item templates...but after selecting Add->Vlinq Queries, I get the following error: Custom tool error: Unable to initialize the Query Formatter File Queries2.vlinq Line:1 Column: 1
Cheers!
Sorry, it was a nice concept, but it just doesn't work.
Judging by all the problems everyone is having, the installation obviously wasn't tested on Vista. I suggest you test it on several Vista machines and release a version that works. And the devenv /installallvstemplates hack? It removes all my templates! Yes, I ran it all 'as Administrator'.
Hi, we will fix those issues quickly.
Please understand this project is not our daily work.
Sorry again for these troubles.
A few months ago we talked about a useful and indispensable tool for testing our queries
Looks pretty amazing, I love the visualization of the collapsed query, it's not often a UI designer can create exactly what a developer wants, but this looks like it.
I'll try it just because it looks so cool!
Looking forward to test the VLINQ
The installation has been patched. Thanks for your patience and sorry again for the first setup version.
You can still continue to post your comments here and tell if you meet some issues.
As an internship project in Redmond, French students built a Visual Studio 2008 add-in for the visual creation
It says 'Connection failed'; however, I have selected a valid connection from the data connections within Server Explorer. In fact, I can open all tables etc. from the data connections window.
It continuously asks for recompilation when the project has already been compiled.
@Dave: sounds strange. When in the property editor, make sure you have checked the radio button correctly in addition to providing the connection string.
Yes, recompilation is asked for many times. We use the project's compiled information to run the query, which is a strange choice but one that allows us to execute the query in an environment very close to what the user will have at runtime. This part has to be optimized, because we ask for recompilation each time the document is modified (even if it does not generate new code). I hope to solve this point in the next release.
I'm trying to do a join between 2 entities. One is a Group table and one is a Status table. The primary key on the Status table is StatusId. There is a foreign key on the Group table for StatusId. They are both GUIDs
When I try to use the tool to do a join I get this error: "The Property "StatusId" type is not valid in a join declaration". I know I could just reference Status from within the Group table but I'm just trying to see if the join functionality works.
Welcome to the forty-third issue of Community Convergence. The last few weeks have been consumed by the
Thanks to Roger Jennings for highlighting the Visual LINQ Query Builder project that is now on MSDN Code
I'm having the same problem that Jay is having. All of our keys are GUIDs, and this tool is telling me that "The property <guidcolumnhere> is not valid in a join declaration". Great looking tool, just not usable if you're using GUIDs.
A visual editor for LINQ-to-SQL
Mitsu's blog presents VLINQ, an add-in for Visual Studio 2008 that lets you create LINQ-to-SQL queries visually. A short tutorial is available at the same address. VLINQ is hosted on MSDN Code
thank you for the tool.
it provides good help with easiness.
Visual LINQ Query Builder is an add-in to Visual Studio 2008 Designer that helps you visually build LINQ to SQL queries. Functionally it provides the same experience as, for instance the Microsoft Access Query Builder, but in the LINQ domain. The entire
Nice effort; however, the tool is extremely confusing! I wasn't able to create a single query, because the tool is not intuitive at all. The article was my only hope, and it leaves out far too many details during the walk-through. For example: I have no idea how the author proceeded from a "New (blank) Query" to this screen, showing a datasource:
When I tried to add a datasource, all I saw was a blank dialog with no obvious purpose.
The flashing and zooming graphics are neat, but the tool is buried so far down in the IDE and it is too hard to find and the tools' UI is far too vague.
Nice initial effort, but this thing needs a lot more work before I can recommend it to my team. I hate to be the dissenter here, but someone had to say it.
Thanks anyway
Hi nealb,
Even if many people found it easy to use, I take your point. The best I can recommend is to look at the webcast (see the link on the project page).
Just to remind you, this is a free tool developed by interns. One of the goals was to test what kind of new UI we could make for a Visual Studio add-in using WPF. It's maybe not intuitive for everyone, but I think it's an interesting try. We will fix some issues, but we don't plan to 'work' on this project again. It's free, the source code is provided, and I think that's already a good thing!
Apologies for the sparseness of my posting the last few weeks - work and life have been busy here lately
Apologies for the sparseness of my posting the last few weeks - work and life have been busy here lately.
A null reference exception is thrown in the 'having' section when I click the textboxes (< edit >).
"We will fix some issues but we did not plan to 'work' again on this project. It's free, the source code is provided and I think it's already a good thing !"
Move it to Codeplex! Others will work on it for sure.
I had problems on my 64bit XP. Install works but templates don't show up. BUT geez people it's free and it works in some situations. Just say It didn't work for me and move on. Don't slam them because it did not work for you. Well I will try it on my computer at home...
Correction! I was trying to get it to work in a Web Site project and the directions clearly indicate that it works with a Web Application Project. It works great! Wonderful work. Thank you so Much!
nice work
really, really good
and a lot of time saving
Hello Mitsu
Above, another person mentions a "connection failed" when trying to test the query.
I have the same problem, and my connection string is properly defined in the properties.
I can access my databases through Server Explorer with the same connection string, so I don't understand why VLINQ can't.
Thanks for any help you can provide
Apologies for the sparseness of my posting over the last few weeks – work and life in general
I haven't been able to try it in my VS2008; it keeps giving me this error: "Cannot locate resource querybagdesigner.xaml". Any suggestions on how to fix it?
I have Vista Business in Spanish and VS2008 Professional in Spanish.
I hope for your help....
Problems I encountered:
1. I could not get it to generate more than one query in the code. The query bag shows 2, but only one is generated.
2. When I press the Preview button, it displays "No Data. If your query..." (sorry, I couldn't cut and paste)
3. When I first create a query it adds my namespace to the Type when generating the code as in:
from b in context.GetTable<Namespace.Brand>()
When I manually remove the namespace, it compiles correctly.
It would be really useful if it worked against different DataContexts, such as the Entity Framework, and frameworks such as ADO.NET Data Services.
Tom
It appears this VLinq query builder installer (vlinq.msi) works only on VS 2008 editions Standard or above, but not on Express editions, as I tried both versions. Am I correct?
Very cool extension. Have you considered adding this to the Visual Studio Gallery?
This tool has worked as advertised for me...very cool and helpful for this novice. If anyone knows how to tell it to take the first row in a return set, I would greatly appreciate the knowledge transfer. Specifically, I'm trying to sort by a column in desc order, then take the top row (i.e. the last row added).
Thanks.
Even with the "fixed" setup it is not possible to install it because it does not find "VLinq queries.zip"
Any hints?
I just gave up because of the installation issues on Vista. Nice idea, but it needs a lot of improvement – why, oh why do you need accordion-type functionality?
Why can't this be simple?
When I run the query, I get an error message popup box with the window title "Preview Unavailable". The contents of the window read "Connection failed" with an OK button.
I have selected the database connection and I can freely explore the database from the object explorer as well as in the query designer, I just can't connect for some reason?
For the "recompile" messages, make sure you don't have the *.designer.cs file open. This can prevent it from being regenerated.
Great tool for a first round version! Thanks!
There's a nice little LINQ query editor: Visual Linq query builder for Linq to Sql: VLinq. What's "funny"
I have the same problem as Julio Cesar Ortega;
that is, my Windows (XP SP3 and Vista) and my VS2008 SP1 are in Spanish.
I was able to install it, but when I try to use it, I get the error about the missing resource.
It works fine when I need data from the database, but how do I do the same for insert, update, and delete?
If possible, please let me know.
mails.shailesh@gmail.com
insert, update and delete actions are automatically generated by Linq to Sql. You just have to call context.SubmitChanges().
I always get error 1001: can't find the path D:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\ItemTemplates\CSharp\Data\1033\VLinq queries.zip
Can you tell me how I can set up the VLinq builder?
my Email: mars8466@163.com
thanks!
Great product. But I will be damned if I can get it to work. No matter what config I try, I get a connection failed error. I have a Server Explorer connection created, and I am able to open and edit items via that. I have also coded a simple query based off the DataContext, and that works fine. I have tried to use the VS connections and the custom connections. Neither of these work.
I have tried it all – radio buttons, exact connection string entered manually... it does not work.
When I try to use any operator other than == against an integer or a decimal, it breaks. For example, v.Cost > 1000 tries to run, then it says "String must be exactly one character long". It is not even a string; it is an integer or decimal.
but it can't support non-English-language versions of VS2008...
http://blogs.msdn.com/mitsu/archive/2008/04/02/visual-linq-query-builder-for-linq-to-sql-vlinq.aspx
|
There are just a handful of concepts at the core of OOP. This article covers the most important ones: inheritance, encapsulation, and polymorphism. I also discuss a few related topics to help you put these ideas into context.
Without a doubt, inheritance is the most well-known principle of OOP. Inheritance can be defined as the ability to inherit properties and methods and extend the functionality of an existing class in a new one.
If you're thinking ahead, you might imagine creating a new "Wall" class that extends the "Brick" class you created earlier. However, that is not how inheritance works.
Looking at the relationship between a brick and a wall, the best way to code this is not through inheritance but rather by a concept called composition.
A simple rule of thumb determines whether the relationship between classes is one that warrants inheritance or composition. If you can say class A "is a" class B, you're dealing with inheritance. If you can say class A "has a" class B, the relationship is one of composition.
Here are some examples of inheritance:
Here are examples of composition:
So what is the difference in how inheritance and composition are implemented? Let's compare how this works, starting with inheritance:
Animal.as
package com.adobe.ooas3
{
    public class Animal
    {
        public var furry:Boolean;
        public var domestic:Boolean;

        public function Animal()
        {
            trace("new animal created");
        }
    }
}
The Animal.as code is the base Animal class, which you will now extend using inheritance with a Cat class:
Cat.as
package com.adobe.ooas3
{
    public class Cat extends Animal
    {
        public var family:String;

        public function Cat()
        {
            furry = true;
            domestic = true;
            family = "feline";
        }
    }
}
If you look at the Cat class, the constructor assigns values to three different properties. On close inspection, only one of these properties (family) is defined in the Cat class. The other properties (furry and domestic) come from the Animal base class.
While this is not exactly the most practical example, you can see how class inheritance allows you to build upon existing functionality to create a new blueprint to start using as you develop your project.
Now if you wanted to create half a dozen cats, you could simply do this by instantiating the Cat class, which has all the properties already set up, rather than using the generic Animal class and having to define the properties again and again for each instance.
New in ActionScript 3.0 is an override keyword that is used when (you guessed it) you want to override a method defined in the class that you extended. This useful feature prevents you from accidentally running into naming conflicts with methods between classes that extend each other.
On the other hand, composition doesn't have any formal syntax like the extends keyword. Composition simply instantiates its own instance of any class it wants to use.
Let's take the Brick class created earlier. In this next example you'll create a Wall class that uses composition to instantiate instances of the Brick class:
Wall.as
package com.adobe.ooas3
{
    import com.adobe.ooas3.Brick;

    public class Wall
    {
        public var wallWidth:uint;
        public var wallHeight:uint;

        public function Wall(w:uint, h:uint)
        {
            wallWidth = w;
            wallHeight = h;
            build();
        }

        public function build():void
        {
            for(var i:uint=0; i<wallHeight; i++)
            {
                for(var j:uint=0; j<wallWidth; j++)
                {
                    var brick:Brick = new Brick();
                }
            }
        }
    }
}
In the code above, the Wall class accepts two arguments passed to its constructor, defining the width and height in bricks of the wall you want to create.
Let's do a quick test of this class by instantiating it on the main Timeline of a blank FLA file:
import com.adobe.ooas3.Wall;

var myWall:Wall = new Wall(4,4);
If you run Test Movie (Control > Test Movie), you'll see that 16 Brick instances are created, with corresponding trace statements displayed in the Output panel, to create a 4 x 4 wall (see Figure 2).
Figure 2. Output panel of Wall class getting executed
Apart from the difference in class relationship between inheritance and composition (as I discussed earlier), composition has the advantage of being able to add functionality to another class at runtime. It allows you to have control over the creation and destruction of class instances, whereas with inheritance the relationship between the classes is fixed and defined at the time the code is compiled.
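To make that runtime flexibility concrete, here is a minimal sketch of the same Wall/Brick composition in Python. This is a hypothetical translation for illustration only; the article's own examples use ActionScript 3, and the `demolish` method is invented here to show the owner controlling its parts' lifetime.

```python
class Brick:
    def __init__(self):
        print("new brick created")


class Wall:
    """A Wall *has* Bricks (composition): it creates and owns them."""

    def __init__(self, width, height):
        self.width = width
        self.height = height
        # The Wall instantiates its own Brick instances at runtime.
        self.bricks = [Brick() for _ in range(width * height)]

    def demolish(self):
        # Because the relationship is composition, the owner can
        # discard its parts at runtime; with inheritance the
        # relationship is fixed when the code is compiled.
        self.bricks.clear()


wall = Wall(4, 4)        # prints "new brick created" 16 times
print(len(wall.bricks))  # 16
wall.demolish()
print(len(wall.bricks))  # 0
```

Compare this with the Cat/Animal inheritance example above, where the relationship between the classes cannot change after compilation.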
http://www.adobe.com/devnet/actionscript/articles/oop_as3_03.html
|
Hello,
I have daily rainfall data in milimeters in GeoTIFF format with naming convention chirps_YYYYMMDD.tif (example chirps_20100101.tif), and I also have 1 raster of dry-spell with name dslr_chirps_20091231.tif and both raster have same spatial resolution and extent.
Then I would like to calculate the dry spell for 1 Jan 2010: IF(rainfall > 1, dslr = 0, dslr + 1). Using the raster calculator I can use this formula: Con("chirps_20100101.tif" > 1, 0, "dslr_chirps_20091231.tif" + 1)
The raster output will be dslr_chirps_20100101.tif
Problem:
I have 10 years daily rainfall data and would like to calculate daily dry-spell for all the available data period.
How to loop the above calculation using model builder, when the output for each calculation will use as input for the next calculation?
I don't know how to do this in the model builder - but with python maybe you could define your input data and output with the following code?
It's from: datetime - Iterating through a range of dates in Python - Stack Overflow
from datetime import timedelta, date

def daterange(start_date, end_date):
    for n in range(int((end_date - start_date).days)):
        yield start_date + timedelta(n)

start_date = date(2009, 12, 31)
end_date = date(2010, 12, 31)

for single_date in daterange(start_date, end_date):
    d = single_date.strftime("%Y%m%d")
    d1 = single_date + timedelta(days=1)
    d2 = d1.strftime("%Y%m%d")
    input1 = "DSLR_CHIRPS_{}.tif".format(d)
    input2 = "CHIRPS_{}.tif".format(d2)
    output = "DSLR_CHIRPS_{}.tif".format(d2)
    print("{} - {} - {}".format(input1, input2, output))
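If it helps, the per-day update itself is just an elementwise conditional that feeds each day's output into the next day's input. Here is a hypothetical NumPy sketch of that recurrence (file names, GeoTIFF I/O, and arcpy are left out; the 2x2 arrays and their values are invented stand-ins for the rasters):

```python
import numpy as np

def update_dslr(rainfall, dslr_prev, wet_threshold=1.0):
    """One step of the dry-spell recurrence:
    reset to 0 where rain exceeds the threshold, otherwise add a day."""
    return np.where(rainfall > wet_threshold, 0, dslr_prev + 1)

# Toy 2x2 grids standing in for the GeoTIFF rasters (hypothetical values).
dslr = np.array([[3, 0], [7, 2]])
day1_rain = np.array([[0.0, 5.2], [0.4, 1.5]])
day2_rain = np.array([[2.0, 0.0], [0.0, 0.0]])

# Feed each day's output into the next day's input, as in the question.
for rain in (day1_rain, day2_rain):
    dslr = update_dslr(rain, dslr)

print(dslr)  # [[0 1]
             #  [9 1]]
```

In an arcpy script the same loop shape applies: read yesterday's DSLR raster, apply Con() against today's rainfall raster, save the result, and use that saved raster as the input for the next iteration.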
Thank you for your reply, I will try it.
https://community.esri.com/t5/python-questions/loop-conditional-in-raster-calculator/td-p/95738
|
Over?
I personally know many programmers who aren’t in their right mind.
Looks to me like this is a good example of the flaw with Microsoft’s typedefs. If the typedefs had been defined as INT16, INT32, and INT64, there wouldn’t be a problem (well, not this problem) with porting to 64 bit. The typedefs would need to be updated, but they would still be logically correct (i.e. INT32 is still 32 bits, and LONG just doesn’t exist). In fact, such a port is still possible, but more difficult.
It’s hard to blame this entirely on Microsoft, though. Their typedefs were still far better than the poorly defined C/C++ built-in types.
I never liked most of Pascal, but declaring an integer variable’s range like 1..20 was a good idea. In many (most?) cases you’re using integers for counting and know you don’t need the full range. Then you let the compiler choose the best size. Yes, there need to be pragmas to nail down actual sizes, just like there are pragmas for struct alignment. But as it stands the compiler has no easy way to tell how big a number you might put in that variable.
"Parsing" data files by overlaying a C struct is a bad idea anyway. What about other issues like endianness and alignment? I think it’s a far better idea to read the file in chunks (well, read large parts of it and parse it in chunks, really) and copy the data of interest into an in-memory structure, doing conversions as necessary. Sure, it might be a bit slower, but it’ll be far more portable.
Do all of the architectures that Win32 runs on have the same endianness? I guess they must, or overlaying that bitmap structure over a bitmap file would fail on some platforms but not others. I can’t actually remember off the top of my head which platforms Windows NT is or has been available for, though.
With all that said, I do think it was a good idea to leave the data sizes the same. Knowing the kinds of nasty tricks and stupid mistakes application developers make, it would have been a portability nightmare. The days when the release of a new system meant rewriting or heavily modifying your application are (in most cases) behind us, and I like it much better this way. (Not to say that good programmers shouldn’t use practices that make their programs generally storage-size-independent, though.)
I would generally ignore anything Beer28 says, he is a Linux troll and a poor one at that.
"The Win64 team selected the LLP64 data model, in which all integral types remain 32-bit values and only pointers expand to 64-bit values. Why? "
To create gratuitous incompatibility with Unix.
"If a LONG expanded from a 32-bit value to a 64-bit value, it would not be possible for a 64-bit program to use this structure to parse a bitmap file. "
So fix the header to not say "LONG" but instead "DWORD" or whatever it should be.
DrPizza: One structure down, 20 billion to go. And most of the 20 billion belong to you – the application programmer – not to Windows.
and fix the other 3 million structures with the same thing, oh, plus any user made structures…
I don’t know about that
Hey, if you don’t like it, you can always #define long __int64 and #define int __int32 or something.
Of course then you’ll have trouble using other people’s header files and linking with the C++ runtime…
To take an analogy of perhaps what effect changing the type widths would have, think about the impact of migrating your code from old 8-bit characters to 16-bit Unicode.
If you think that nearly all existing code is going to run in 32-bit emulation anyway, then all code that wants or needs to be 64-bit should take the trouble to make sure it is properly 64-bit, in which case a ‘better’ choice of type width could have been made.
IMHO, the current choice seems to be about short-term ease (unless its just a conspiracy to make source compatibility with 64-bit Linux more difficult :) )
Well, if MS had realized this was a problem years ago when doing the 64-bit Alpha port, or even the IA64 port, then most of the important structures could have been fixed long ago, and warnings could have been displayed so people could fix their own code.
It’s not really a compatibility issue, as it only makes a difference when re-compiling code, not when using an existing binary.
"most of the important structures could have been fixed". If you don’t fix them all then the cure becomes worse than the disease.
"short-term ease": Yup, this is a topic I intend to come back to in a few months.
Hey Raymond,
"Nobody in their right mind would transfer a pointer across processes: Separate address spaces mean that the pointer value is useless in any process other than the one that generated it, so why share it?"
The only case I can think of to pass pointer values across processes is for abstract cookies. I know that I’ve designed APIs in the past where you register something and get a handle/cookie back. That handle/cookie is either a pointer directly or the pointer XORed with some private value. While this is most common in-proc, I can imagine someone doing it cross-proc. Implementing one of these legacy interfaces in 64-bit land means that you have to create a 64->32-bit map where you didn’t need to map before. Not insurmountable, but not straightforward either.
Joe
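One way to meet that requirement is the map Joe describes: hand out small opaque cookies and keep a private table from cookie back to the real object. Here is a hypothetical Python sketch of the idea (the `CookieJar` name and API are invented; cookie reuse after wrap-around and thread safety are deliberately ignored):

```python
import itertools

class CookieJar:
    def __init__(self):
        self._counter = itertools.count(1)  # never hand out 0 (looks like NULL)
        self._by_cookie = {}

    def register(self, obj):
        """Return an opaque 32-bit cookie for obj instead of its address."""
        cookie = next(self._counter) & 0xFFFFFFFF  # stays a 32-bit value
        self._by_cookie[cookie] = obj
        return cookie

    def resolve(self, cookie):
        """Map a cookie back to the object; None for stale or bogus cookies."""
        return self._by_cookie.get(cookie)


jar = CookieJar()
c = jar.register({"name": "some resource"})
print(c, jar.resolve(c)["name"])  # 1 some resource
print(jar.resolve(0xDEADBEEF))    # None
```

Because the cookie is never a real address, the scheme works identically whether the process's pointers are 32 or 64 bits wide, and a bogus cookie fails safely in the lookup rather than being dereferenced.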
What Raymond is basically saying is that because on-disk and on-wire formats weren’t specified with explicit widths like INT32 (instead of LONG), they were broken. User programs copied this brokenness, and hence LLP64 had to be chosen because of bad design choices made more than 15 years ago.
Is that a fair assessment?
Cooney, no they won’t. My trackback got lost, but:
I agree with Derek Park, I wish Microsoft would drop the typedefs altogether and explicitly use __int8, __int16, __int32, and __int64; remove the ambiguity.
While we’re on the subject of typedefs, who at Microsoft was the genius behind "DWORD64" in BaseTsd.h? This is a contradiction in terms; I guess they never heard of QWORD. Also, why do the new 64-bit types end in "PTR"? (e.g. DWORD_PTR) This is a misnomer since these aren’t pointers.
Derek: Win32 does have explicit typedefs. There’s UINT8 to UINT64, INT8 to INT64, ULONG32/LONG32 (actually defined as non-long ints, probably by mistake) and ULONG64/LONG64. They just are relatively new and rarely used.
Ben: yes, Windows has only ever run on little-endian architectures. There are very few places concerned with endianness in Win32, and they’re all #ifdef _MAC (i.e. the Win32 port to MacOS for Office and Internet Explorer).
If you are going to use explicitly sized ints, you should use the typedefs defined by the C99 standard:
int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
…
Brian: I don't think DWORD_PTR is supposed to be used as a pointer. It defines the integer type that is large enough to hold a pointer.
Joe:
> Not insurmountable but not straightforward either.
Indeed — in fact, an interview question I often ask "industry" candidates is to critique such a system, and then describe to me how they would implement such a system in 64 bit land without changing the requirement that the unique cookies be 32 bit integers.
It most definitely is not straightforward — you can run into problems of security, efficiency, portability, all kinds of stuff. I quite like "open ended" interview questions.
Larry,
Your link is about transferring handles, not pointers. You can transfer handles, but you will need OS support to do it. Pointers are still useless as pointers outside of their process.
Cooney, absolutely – but handles are logically pointers (the HANDLE type is a PVOID).
Swamp Justice: Revisionist history. The C99 types you’re describing weren’t available in 1985, when many of these structures were finalized. We’re not prescient.
Long in (c++).net is 64-bit
Asking me, it is fix the header files, don’t "fix" the compiler. Many headers now DO use the new DWORD_PTR stuff, why not simply fix those? Gosh, a simple search/replace would do, is that hard?
BTW, what will be the size of DWORD_PTR in the 64-bit edition? 8 bytes, I guess… so I’ll look at the DWORD part and think "4 bytes," and then have to remember the PTR part and say "oh, 8 bytes"… Also, this means UINT and UINT_PTR could have different sizes… well, if you ask me, this is at least confusing… And by the way, WORD wasn’t supposed to be fixed at 2 bytes, but thanks to Intel it is…
Raymond: Care to comment on why the VB.NET team made the opposite decision (redefining "Integer" to 32 bits), breaking VB6 persistence formats, not to mention Win32 API calls? Thanks! :-)
Waleri: The reason for DWORD_PTR is so that on 32-bit systems it stays DWORD. Does C99 have a "pointer that is the same size as an integer" type?
Phil: I am not qualified to comment on VB.NET design principles.
Long time reader, first time poster. love the show.
I don’t really buy Raymond’s initial argument that the definition of a bmp file (or any file format) should in some way define the size of data types in an OS. Why not create a new data type for 64 bit quantities, higher precision reals, etc. (there are plenty of Windows specials, anyway – DWORD, for example).
As to RPC and DCOM, why isn’t data transmitted between these in some network (architecture-independent) format?
Well, reading through the above post before sending, I can see that you’ve got to work with what you’ve got.
Were there any other possibilities? A couple i can think of right now are a) new 64-bit data types, to keep RPC, file etc. formats valid, b) versioning in RPC and file formats to allow on the fly conversion (e.g. bmps translated in app layer [maybe with help from a library] plus new ’64-bit bmp format’, RPC, etc. translated in OS subsystem). c) ?
Any thoughts on these? Would they have been considered?
"What about other issues like endianness and alignment?"
Not to mention input validation. An attacker can inject unexpected values into a data file and crash the program / gain privileges that way.
I can understand why you wouldn’t want to use INT32-type things everywhere–that would mean a crapload of search-and-replace when you were porting to 64-bit. But why not say that you should use INT{8,16,32,64} in serializable structures, and {SHORT,INT,LONG} otherwise?
Brent: Are you saying that existing structures should be retrofitted to use the INT<n> types? But that would violate the "don’t break existing 32-bit code" rule. Consider:
struct something {
INT a;
LONG b;
};
becomes
struct something {
INT32 a;
INT32 b;
};
Great, the underlying type of "b" changed from "signed long" to "signed int" -> build breaks.
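A compilable sketch of this (the struct names here are hypothetical, not from any real header) shows where the retype does and doesn’t bite:

```c
#include <stdint.h>

/* Original Win32-style definition. */
struct something_v1 { int a; long b; };

/* Hypothetical retrofit to fixed-width types. */
struct something_v2 { int32_t a; int32_t b; };

void caller(struct something_v1 *s1, struct something_v2 *s2)
{
    long l1 = s1->b;    /* value assignment: legal against either version */
    long l2 = s2->b;    /* integer value conversions never break          */

    long *p1 = &s1->b;  /* fine against the original definition           */
    /* long *p2 = &s2->b; */
    /* The commented-out line is an incompatible-pointer-type diagnostic
       on a conforming compiler, even where long and int32_t are both
       32 bits wide: the retrofit breaks any existing caller that takes
       a pointer to the field. */
    (void)l1; (void)l2; (void)p1; (void)s2;
}
```

Value conversions are why some commenters consider the retype harmless; pointer conversions are why it isn’t.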
For better or worse, I worked on a product that passed a HANDLE across processes. Specifically, we wanted to separate a web browser plugin implementation into its own process for stability reasons. (Said plugin used OpenGL. At the time hardware acceleration could be a bit sketchy, driver problems were all too common.) So we (from memory) sprintfed the HANDLE for the plugin’s window into a "%d" and passed it into the child process on the command line. Said plugin was cross platform, a similar trick was done on Linux/X-Windows. I’m fuzzy on how the MacOS9 version did it. The entire thing seemed overly clever to me, but it worked like a charm.
This got me thinking. My understanding is that a HANDLE is (handwave) a void pointer. So quietly turning into a 64-bit pointer could cause problems; I suppose it depends on our code’s ability to write a 64-bit integer and read it back.
Of course, in this particular case it’s moot; the company went under and the code is basically dead. It will almost certainly never be compiled into a 64-bit binary, so things should keep Just Working. (crosses fingers)
"Why do I need ‘pointer that is same in size as integer’ type in the first place?" -> Look through your Platform SDK header files and you’ll see plenty of reasons.
Frankly, not breaking existing source code seems like a pretty pathetic goal, seeing as how it’s pretty much doomed to failure anyway. Having to go in and cut/paste some types or even just typedef for compatibility wouldn’t take anyone very much time. It’s not like any sane person would expect they could reduce their testing requirements because "it just compiled" anyway, right?
The most egregious example of this I can think of is the system32 directory.
While it won’t break existing code (mostly), we’re going to end up being saddled with 64-bit DLLs being in System32, and 32-bit DLLs being in WOW64 for a very, very long time (there’s only one more doubling of bit-size needed before we can individually label every subatomic particle in the universe)…
Ok, some bad programmers would have had to have changed 1 constant somewhere in their code to fix this. Was it worth it?
Ray,
I love this. Up until today, all the arguments on Raymond’s blog were all about how it was stupid for Microsoft to jump through hoops to make existing binary applications to work.
The argument usually went "Why don’t we just force the developers to recompile their stupid broken applications and ship a new one?".
Now that the issue is not revising source definitions in the header files, the claim is that we should stop those apps that used these types from compiling.
Ah, the irony.
Suppose you installed the latest header files and nothing compiled any more. Even code that was previously perfectly legal. (In other words, you’re innocent!) But now you have to go and upgrade to Win64 every window procedure, every call to SetTimer, every WM_NOTIFY handler, every owner-drawn listbox and menu… even though your program has no intention of being a 64-bit program.
How would you react? Would you say, "Thanks, Microsoft! After four days of effort, I’m finally back to where I was, with no perceptible benefit to me! Too bad I can’t use MFC’s class builder any more – it spits out code that doesn’t compile any more. And the code samples in all the magazines I own and web pages I visit don’t work any more, including this function I just copied from a magazine without really understanding how it works but it sure does the job…"
Or would you say, "Heck, for all this effort I could’ve ported it to OS/2."
Hear, hear! Hats off to Ray and Larry for weathering the storm! Just wanted to toss in some quick kudos to the MS folks who have worked VERY hard over several Windows releases to keep our favorite apps up and going. In an earlier comment a disgruntled developer asked why would MS make this decision about LLP64 by answering:
"To create gratuitous incompatibility with Unix"
I will gratuitously suggest that if by this comment he means preserving compatibility with legacy apps between Windows releases then I give a resounding three cheers to people like Ray who help keep the wheels in motion with this little "incompatibility".
Well, what about this:
SomeStringFn(LPTSTR) turns to
SomeStringFnA(LPSTR) and
SomeStringFnW(LPWSTR)
So why not
SomeIntegerFn(UINT) to become
SomeIntegerFn32(UINT32) and
SomeIntegerFn64(UINT64)
P.S. – same to be applied to structures, etc
Raymond: C99 specifies intptr_t and uintptr_t as optional type aliases for signed and unsigned integer types large enough to hold a pointer. (They are optional because there may not be large enough integer types.) Even VC++ has definitions for them now.
API/header bloat, probably. A few entries back Raymond mentioned a similar scenario, and the test matrix nightmare that would ensue. (That was for adding a flag, but same diff.) Not to mention the age-old documentation question – when do you use 32, when do you use 64, when do you use generic "I don’t care" for best portability?
I can believe the application problems. How come, then, Windows doesn’t make LONG a typedef for int, and allow the (naked) long to be 64-bit?
Is it because changing both LONG and long to 64 bits would break tons of stuff; while changing LONG to 32 bits (say typedef to int) and then making long 64 bits while break somewhat less, but still plenty, of stuff?
Unix programmers haven’t been able to assume sizeof(long) == sizeof(int) for a very, uh, long time now. Otherwise nothing would compile on many interesting platforms. With Linux these days, and other Unixes quickly dying out, that might not be the case for long, though…
Hi, I was the belligerent poster who originally made the comment about type widths staying the same.
Since I’m porting a lot of the MFC defs to a new linux lib I’m spearheading from windef.h, I’m pretty familiar with the various millions of types declared in that lovely document.
<quote>GlobalAlloc(), Mapped Files, ATOMs across instances, Named Pipes, etc.</quote>
IPC with memory handles wasn’t uncommon in my now defunct Windows programming style. Of course I would pass handles from the API, and not the actual paged addresses, because of protected-mode annoyances like page protection, etc.
Also, I don’t even know if a paged address from one process would even be the same address for another process, because of the context switching in the kernel, etc., and the restoring of the registers from the different process’s LDT in the GDT. I’m pretty sure the addresses are absolute as 32-bit in the virtual page address table, but who really knows; maybe there was some effect from a process’s registers when they are restored to execute the slice.
Who the heck knows, and nobody will ever know, because Windows is closed source. I am happy because now I can flip open the kernel source and voyeuristically peer in to my heart’s content. I even have handbook guides to help me along. Thanks Linus, you da man.
"I don’t even know if a paged address from one process would even be the same address for another process because of the context switching in the kernel etc" -> ?? Processes have separate address spaces. An address in one process is meaningless in any other process. So asking whether it’s the "same" is like asking if my phone has the same telephone number in a different area code.
Raymond, it seems we’re talking about different things. Yes, so-called LLP64 will perfectly preserve the structures, but my point is that structures should be updated in a manner that they won’t depend on INT/LONG size, so that such preservation will not be an issue anymore. Anyhow, this is plain theory, since due to backward compatibility reasons we’ll be stuck in the 32-bit world forever, due to mixing fixed with nonfixed datatypes, like HIWORD(lParam).
Presumably, WinFX will be free from these issues, but all the problems you mentioned in your first post will remain – how application written in WIN32 and WINFX will share memory, etc…
"structures should be updated in a manner that they won’t depend on INT/LONG size" -> The cost here is that by changing every structure in the system from "LONG" to "int32_t", you break existing perfectly legal code. Do this too much and people say, "Obviously Microsoft has an ulterior motive in making widespread breaking changes to Win32 and forcing people to rewrite their Win32 code – they are intentionally making Win32 programming so difficult that people will give up and switch to WinFX."
I wouldn’t call them "bad" design choices. Who could have predicted that in the next 15 years that the system you were designing would have to *remain source code compatible* with a processor with four times the register size? (OS/2 was the operating system "for the ages"; Windows was just a toy.)
Brian Friesen: The _PTR suffix means that the integer has the same size as a native pointer. I.e., sizeof(X_PTR) == sizeof(void*). It’s explained in MSDN.
(The SDK can’t define new types beginning with __; those are reserved by the C and C++ language standards.)
The cost here is that by changing every structure in the system from "LONG" to "int32_t", you break existing perfectly legal code.
Legal in which context? LONG doesn’t exist in C++ or C; none of the typedefs in windef.h exist in the standards. So really, when you use these types in the first place, which is accepted as the norm in Windows programming, you are asking for it not to be portable.
If you really wanted to match windows types for portability, could you just doctor windef.h and that would be the end of it?
All those types are a bunch of fooey anyway.
By "perfectly legal code" I meant of course "perfectly legal Win32 code.">
But how could picking one ever break the other for legacy code?
legacy Win32 code will always have LONG as typedef’d from long in winnt.h, so in the 32 bit VC compiler context, that’s always a 32 bit value.
Having LONG be 32 bits in a 64 bit compiler context where the "long" compiler type is possibly 64 bits wide, is certainly a little confusing, but at least it wouldn’t break anything. I think that’s what you guys ended up doing from reading the first post on this blog here. Just keeping the original windef.h and winnt.h widths.
so
typedef int INT
typedef INT LONG
Another thing you could do from the compiler perspective is make 32 bit and 64 bit pragma blocks where the actual "long" C type is 32 bits in the 32bit pragma block, and 64 bit in the 64 bit pragma block, like it is in java, with 32 bit ints and 64 bit longs.
#pragma win32
// BLOCK
#pragma win64
like that. I’m guessing you guys already built that into the preprocessor and compiler.
You could actually just have the preprocessor go through and just macro change "long" to "int" in the win32 code blocks within that pragma directive, so it wouldn’t even require a compiler change per se.
At any rate, with the GNU tools you’re responsible for making your own abstract types of any kind, so ultimately you, yourself have to change them. This is my situation now, so I’m focusing on that.
<quote>legacy Win32 code will always have LONG as typedef’d from long in winnt.h, so in the 32 bit VC compiler context, that’s always a 32 bit value. </quote>
I mean that for a 32 bit C++ compiler. It could be different for a 64 bit compiler, in which case you could do pp replacements before you start lexing/parsing/compiling the code.
actually, if the pp went through and macro replaced all the LONG, ULONG, to INT, UINT and long to int, all the 32 bit code blocks would work fine,
then when you would call a function from a 64 bit block with a LONG return type, it would be a type mismatch.
So ultimately the compiler would have to be involved, smart converting types between 32 and 64 bit blocks.
That’s why they pay you guys the big bucks though right!
For calls from 32-bit pragma blocks into 64-bit-block functions with 64-bit-wide return types, have the compiler issue data-loss warnings for 32-bit cutoffs.
If people ignore them at least you tried. Other than that I think it would be ok.
If they really want the whole 64 bits, they move the func out of the 32 bit block into the 64 bit.
For those that don’t need the extra width, they can keep coding as usual with the win32 block pragma’s and pretend AMD64 was never released.
If you want LONG to be a 64-bit integer when compiled on a 64-bit machine, then you have to figure out how to change the definition of "struct something" so that the following legal Win32 code compiles cleanly and operates identically both as 32-bit and as 64-bit:
something s;
fread(&s, sizeof(s), 1, fp);
int i = s.a;
long l = s.b;
I see, I was thinking 32 and 64 bit versions of the API as well in which ever block. 32bit pragma block retains the 32 bit versions of stdlib.h or cstdlib and the rest of the API outside the standard libraries.
I realize that would be next to impossible for you to accomplish though.
If you’re going to use the same system dll API for both the 32 and 64 bit blocks it wouldn’t work.
I’m going to see how GNU handled this. I don’t have a 64 bit chip so I haven’t been interested but I bet they came up with a crafty solution.
<quote>4.2.1 int vs. long
Since the sizes of int and long are the same on 32-bit platforms, programmers have often been lazy and used int and long interchangeably. But this will not work anymore with 64-bit systems, where long has a larger size than int.
Due to its size, a pointer does not fit into a variable of type int. On Unix it fits into a long variable, but the intptr_t type from ISO C99 is the better choice.</quote>
Well, I guess this will kind of suck at first, but amd64 does have 32 bit compatibility mode, and it’s better to stick to standards. I think they did the right thing.
Not having all those typedefs to same width types in winnt.h and windef.h probably are going to help gcc/g++’s case along when it comes to this switch.
It’s been this way with java since the jdk1.1, so it’s not a new concept.
RPC/DCOM do use an architecture-independent format.
One of the goals of the Win64 design is *not to break existing 32-bit code*. If structures changed from, say, LONG to int32_t, you would have build breaks like
error: assigning signed long to signed int.
on compilers that are strict about int/long separation.
"RPC/DCOM do use an architecture-independent format."
So why would changing the sizes of the data types in the OS affect these protocols? I.e., I don’t think this is a valid argument.
"One of the goals of the Win64 design is *not to break existing 32-bit code*."
Given this, I can certainly see why they made the decision they did, then. Can someone clear something up for me – are we talking about the 64-bit version of XP, or are we talking about Longhorn? (RTFA, or find one, is a valid answer!)
If Longhorn, then I was under the impression that apps had to be recompiled for this new OS anyway. Please correct me if I’m wrong.
Finally, one last q. Given Raymond’s answers, why couldn’t they have gone the route of a new set of data types. You went from WORD to DWORD. Why not DDWORD, etc.? Adding new types would not break any existing code at all.
I’m not talking about wire formats. I’m talking about structures in header files.
And I’m talking about 64-bit Windows in general, not tied to a specific release – Windows XP 64-bit, Windows Server 2003 64-bit, etc.
I don’t see how inventing a new data type helps you fix existing structures. You can’t touch them carelessly without breaking source code compatibility.
Besides, there *are* new types, like the INT{8,16,32,64} mentioned above. So I’m not sure why you’re saying the Win64 designers should have invented something that they already invented.
I guess I don’t understand what your proposed "DDWORD" type would be used for, different from the existing UINT64 type.
Now that I think of it, why didn’t MS just add a "WIN32_COMPATIBLE" flag to the compiler that kept all the sizes the same while by default letting the types sizes float to ones that make more sense in the processor architecture?
Surely the effort of typing 24 characters wouldn’t be too much to ask *even* of people too lazy to have programmed their code correctly in the first place…
Okay, consider: You download the latest Platform SDK, recompile your program, and you get all these errors. Is your reaction:
(a) Gosh, I’d better go through and modify my 50,000-line program to work with these new 64-bit compatible structures.
(b) !@#$!! Microsoft, why do they go around breaking perfectly good code? I’m not going to port to 64-bit Windows any time soon, why do I have to go through and modify my 50,000-line program to be compatible with something I don’t care about?
"Why not a WIN32_COMPATIBLE compiler flag?" -> You might have a different opinion of this approach after you spend four days tracking down a problem caused by somebody #define’ing this flag in one header file (but not another), causing two structure definitions to mismatch.
The Win64 team went through multiple proposals before settling on the one they chose. I experienced the pain of previous attempts that tried some of the things people have been suggesting. It was not fun. "Hey, I’m making a checkin to winbase.h that *prevents all of Windows from compiling*." You don’t make friends that way.
Ummm, actually, that argument always was BS. The reason not to break binaries is that you’re hurting the wrong people. You’re hurting end users that weren’t to blame for the poorly written code in the first place. Not only does this win you no customers, it punishes the innocent.
Breaking the compile punishes the guilty. Hopefully enough that they get out of the business. Darwin is way too dead in the modern world as it is.
I wonder, though, about this decision regarding System32… who exactly was that supposed to protect, and from what? It’s *more* likely to break binaries (of apps that made some path assumptions that are now broken), but *less* likely to break recompiles…
Does this have anything to do with Steve tromping around yelling "DEVELOPERS!"? :-)
"Ah, the irony.".
As a matter of fact, I have great admiration for MS developers, who have succeeded in maintaining backward compatibility for code that can be 15 years old. And still, if written carefully, it will compile and *work* flawlessly today. It suggests the highest rank of professionalism among these developers. I just take my hat off to you, guys.
"Ben: yes, Windows only ever ran on little-endian architectures. There’s very little places concerned with endianness in Win32, and they’re all #ifdef _MAC (i.e. the Win32 port to MacOS for Office and Internet Explorer)"
Wasn’t IE-Mac a completely different codebase/engine from IE/Win32?
What about creating new header files that fix the mistakes made when the current ones were made? A winbase_new.h would allow for new projects to use fixed headers while older projects can use winbase.h until someone decides to update them.
Creating two versions of the structure completely misses the point. The whole point of the exercise is to ensure that structures *stay the same* between 32-bit and 64-bit. Otherwise 64-bit code wouldn’t be able to read BMP files created by a 32-bit program.
"Are you saying that existing structures should be retrofitted to use the INT<n> types?"
No, I’m saying that the designers of Win32 should have anticipated that one day there might be 64-bit architectures that people would want to run Windows on, and designed the type system with room to expand without massive code breakage. Especially since they were in the middle of a 16-to-32-bit change!
If WinFX is a complete replacement for Win32 (I’m a little fuzzy on this), I hope it *is* being designed this way, with separate "{small,medium,large} integer" and "{16,32,64}-bit integer" types.
Alex Blekhman wrote:
"
Hmmm… are you sure about this?
I always heard it like this:
4 bits = nybble
8 bits = 1 byte
2 nybbles = 1 byte
2 bytes = 1 word
… and the logical extension from there is 2 words = 1 dword, 4 words = 1 qword.
*shrugs* I don’t recall ever hearing that the length of a word was CPU specific.
>>> *shrugs* I don’t recall ever hearing that the length of a word was CPU specific.
BYTE is always 8 bits.
WORD is the largest number of BYTEs the CPU can process at once. When the first Intel CPUs became popular, a WORD was equal to two BYTEs, and when the CPU word began to grow, WORD remained 2 bytes, for the same reasons we’re discussing here now: everybody *knew* that a WORD is two BYTEs, and changing that would break too many things.
KJK::Hyperion wrote: Windows only ever ran on little-endian architectures. There’s very little places concerned with endianness in Win32, and they’re all #ifdef _MAC
Did I hear wrong about the X-Box Next running on a PowerPC in big-endian mode, then?
So what’s the type of size_t and ptrdiff_t when compiling 64-bit? These have to be (at least) 64-bit quantities. If long and unsigned long–the largest integral types defined by the C standard–aren’t big enough for these quantities, how can they be defined in a standard and useful manner? What will happen when you use sizeof (which is a size_t by definition)?
Raymond wrote: "Note however that converting a program from Win16 to Win32 typically resulted in two unrelated codebases (or a single file with a LOT of #ifdef’s) because the Win16->Win32 shift was so huge."
I disagree. Win16->Win32 was painful only if you had sloppy Win16 code. Message packer/cracker macros handled the different packing schemes for the parameters. Quicken had simultaneous 16- and 32-bit versions from the same source with minimal #ifdefs (and no Win32s). In fact, for a while we built 16-bit with Borland and 32-bit with Microsoft compilers.
How big are HANDLEs (kernel and GDI) and WPARAMs in Win64? When compiling with STRICT, GDI HANDLEs are defined as pointers to different types so that the compiler can do stricter type checking. To do that in Win64, then HANDLEs would have to be 64-bits, like a pointer. Both WPARAM and LPARAM have to hold HANDLEs from time to time as do LRESULTs, so are they all 64-bit?
I never understood why the SDK came up with VOID, CHAR, LONG, etc. Why not use the standard keywords if the underlying types could never be changed? I agree with the early commentor. Size-specified types should be used for persistent formats and "on the wire", but everything else should use the size-neutral types.
A byte is not 8 bits. A byte is the size of the smallest directly-addressable unit of memory. On most modern processors, it happens to be 8 bits. That’s why RFCs use the term octet to be unambiguous and TeX can be compiled on machines that have bytes as small as 6 bits.
A word is not 16 bits. A word is the natural size of the processor’s arithmetic unit. If you’re a C programmer, this is an int. Typically this was also the size of an address, but that seems to be changing as we move to 64-bit machines. On current processors, a word is typically 32-bits, but the term has been abused to the point that it’s now ambiguous.
DWORD, as far as I can tell, was coined by Microsoft to mean a double word. But it stuck at 32 bits, since words were 16 bits when the term was introduced.
A quadword is four words. On a 32-bit machine, this should mean 128 bits, not 64. If my memory is correct, VAX/VMS got this right. I wonder what it was on Alpha/VMS.
I don’t know who invented DWORD, but Windows got it from Intel assembly language.
Simon Cooke [exMSFT] wrote:
"*shrugs* I don’t recall ever hearing that the length of a word was CPU specific."
Yes, it’s very common to think that "word" relates to a number of bytes rather than to the architecture. It comes as a surprise to a lot of people. Actually, the more correct term is "machine word", since the size of a word is determined by both the CPU and the data bus. The strict definition is: the size of a machine word is equal to the number of bits the machine can operate on as a unit. Usually hardware designers tend to make well-balanced systems, so they make the data bus wide enough to transfer a CPU register at once. Therefore, most of the time the machine word is equal to the CPU register size. Here’s additional info: "Understanding Intel Instruction Sizes" ().
As Waleri already explained (), the success of the PC (which was 16-bit then) was so tremendous that the terms of that era became engraved in people’s memory.
<Quote>
structures should be updated in a manner that they won’t depend on INT/LONG size" -> The cost here is that by changing every structure in the system from "LONG" to "int32_t", you break existing perfectly legal code.
</Quote>
Yes, but that will occur *only* when one recompiles with WIN64 as a target. If the compiler has a switch for both WIN32/WIN64 as a target, it would be up to the developer to decide whether to compile with the new settings and face the consequences or not.
<Quote>
something s;
fread(&s, sizeof(s), 1, fp);
int i = s.a;
long l = s.b;
</Quote>
Good example of bad code. Aside from the sizeof(s) vs. 4 problem, there are also little-endian/big-endian problems with the variables.
Many years ago, part of the Microsoft C 6.0 (or was it 7.0?) documentation was a little book on how to write applications in a Win32-ready manner. I think all these issues were covered there. I think it is time to reprint this manual for Win64… well, maybe it is a little too late :) Even today, the compiler has a switch to warn about Win64 portability problems – I just wonder how many people here use it (I don’t :)
My point is that years ago, when the dilemma was whether INT should be 16 or 32 bits, the decision was to make it 32, so why a different choice now?
It seems that people never learn from their mistakes. We had 16 vs. 32; now we have 32 vs. 64… soon we’ll run into 64 vs. 128… or I guess it will be 32 vs. 128… We had the Y2K problem; in a couple of decades we’ll run into the time_t problem (somewhere around the year 2038). I understand nobody’s perfect. No one can predict everything, but now we are talking about problems we already encountered before, and instead of solving them, we still haven’t. Instead we find a workaround and postpone the problem.
Wow, so many misconceptions I hardly know where to begin.
Raymond wrote: The SDK can’t define new types beginning with __; those are reserved by the C and C++ language standards.
They’re reserved to the implementation, which MS dictates at least part of (for instance, the sizes of fundamental types). Don’t tell me the Platform SDK people have suddenly developed a concern for namespace pollution after years of wantonly defining macros with no prefixes.
Raymond wrote:
If structures changed from, say, LONG to int32_t, you would have build breaks like
error: assigning signed long to signed int.
on compilers that are strict about int/long separation.
Such compilers are, so far as I’m aware, a figment of your imagination. Such conversions are entirely legal. Conversion of pointer types (long * to int *) is a different matter, admittedly.
Simon Cooke wrote: *shrugs* I don’t recall ever hearing that the length of a word was CPU specific.
I think I’m going to add this to my random signature collection.
Waleri: BYTE is always 8 bits.
A byte was originally the unit of storage used for character codes. As such it has varied between about 6 and 12 bits on different systems; in fact the PDP-10 allowed software to determine the size of a byte. C and C++ require at least 8-bit bytes, however, and 8 bits has become the de facto standard – yet, with the increasing use of Unicode, it is normal to use 16 or 32 bits for a character code. For precision one should use "octet" to mean a group of 8 bits.
Note however that converting a program from Win16 to Win32 typically resulted in two unrelated codebases (or a single file with a LOT of #ifdef’s) because the Win16->Win32 shift was so huge.
One of the goals of the Win32->Win64 transition is that you can write a program once, *without any ifdef’s* (well okay maybe one or two), and have it compile and run as both a Win32 program and a Win64 program.
Another goal was that existing Win32 code should remain valid. (Even if it wasn’t Win64-compliant, it should still be valid Win32 code.)
How well do these alternate proposals hold up in the face of these two constraints? Just saying, "That’s bad code" is a cop-out. Whether you like it or not, there’s a lot of bad code out there.
(I don’t know what to make of the suggestion that sizeof(LONG) < sizeof(long) on Win64. Surely if the Win64 team came up with such a model you would point to it as proof that Microsoft developers are morons.)
bobsmith: Two versions of the header file with different definitions for types, structures, and functions creates the "incompatible libraries" problem. Suppose you have two code libraries, an older one that uses the old definitions, and a newer one that uses the new definitions. Your program needs to use both libraries. What do you do? Whichever one you pick, you’ll be incompatible with the other one.
(Ben: __ is reserved for the implementation, and the Platform SDK is not the implementation. It’s just a header file in the application namespace.)
From this
I get the impression LP64 might be as rational a choice for Unix as LLP64 is for Windows.
I guess it’s because Unix apis tend to work on the assumption that sizeof(long) >= sizeof(void *). Unix code has to worry about endianess issues, multiple compilers and so on, and is less likely to write C structures to disk with a single fwrite call.
On the other hand in 64 bit Windows with the LLP64 model, code that tries to fit a pointer into an int will break at compile time, which is easy to fix. But there is lots of application code that makes implicit assumptions about types when it writes structures to disk, and if you changed sizeof(int) you’d break it silently at run time.
ok, my last post didn’t get through.
But thanks for your answers and time Raymond, and everyone else too. Definitely one of the more interesting posts. All those for more of the same technical stuff say aye. Passed.
I guess to avoid similar problems in the future, it would be nice to have a warning in the compiler about assigning nonfixed data type to a fixed one. Warning should be generated even if sizeof(UINT) >= sizeof(UINT32)
UINT src;
UINT32 dst;
dst = src; // Produce a warning
While you are busy porting from Win32 to Win64 please prepare for Win128 to avoid the hassle next time. ;^)
"DrPizza: One structure down, 20 billion to go. And most of the 20 billion belong to you – the application progammer – not to Windows. "
But I only have to care if I recompile in 64-bit. If I don’t, the old 32-bit compiled-in sizes are maintained.
And if I’ve written a program that assumes that longs in structs are a particular size then I’ve probably also written a program that has other 64-bit portability issues anyway. Which I’ve got to fix anyway.
So what’s the "win"?
@Adrian
How big are HANDLEs (kernel and GDI) and
WPARAMs in Win64?
64 bit. Obviously ;-)
I guess on Win128 they'd be 128-bit.
I never understood why the SDK came up with
VOID, CHAR, LONG, etc. Why not use the
standard keywords if the underlying types
could never be changed?
Because the underlying types may change I guess, the typedefs could be altered to remain the same size as the platform and compilers changed.
DWORD, as far as I can tell, was coined by
Microsoft to mean a double word.
I think DWORD meant a 32 bit integer in Win16, and now in Win64 it still means the same thing. If an API function needs to use DWORD parameters to store pointers, the parameter type changes to DWORD_PTR, which will be pointer sized. So rather than criticising them for not breaking third party code, why not praise them for fixing their own APIs?
I remember seeing a compiler that raised a warning if you did
long l;
int i = l; // warning: nonportable – potential truncation
If you change LONG to INT32 then code that went
long l;
bmih.biWidth = l; // warning raised here
will not get a warning when they didn’t before.
I guess we could have invented "LONG32", which then would create the strange situation that on 64-bit machines, "LONG32" isn’t a "long".
Like I said, many possibilities were considered. Perhaps in your opinion we should have taken the greater risk and chosen a model that would have required more work to convert a program to Win64, hoping that people would perceive the extra work as worth the hassle.
Note that it wasn’t until Windows 95 that people finally perceived the extra work as worth the hassle to port from Win16 to Win32! 64-bit Windows has been available since Windows XP 64-bit Edition – do you see many people porting to Win64? Shouldn’t the goal be to make it easier to port to Win64, not harder?
"And if I’ve written a program that assumes that longs in structs are a particular size then I’ve probably also written a program that has other 64-bit portability issues anyway. Which I’ve got to fix anyway."
But you don’t have to fix them if you have no intention of porting to Win64. See above.
"? "
Er… if you’re not recompiling it doesn’t matter what the structure changes to, because you’re not recompiling. If you are recompiling, then at the absolute worst you’ll get a compile-time error (because the compiler doesn’t know what an INT32 is), which you can fix.
"Like I said, many possibilities were considered. Perhaps in your opinion we should have taken the greater risk and chosen a model that would have required more work to convert a program to Win64, hoping that people would perceive the extra work as worth the hassle. "
What greater risk? The only things that’ll change are programs rebuilt as 64-bit binaries, and they fall into two categories already:
programs that need fixing to become 64-bit clean (in which case fixing the structure makes their job no harder)
programs that are already 64-bit clean (in which case fixing the structure makes their job no harder)
Except thanks to this decision, there are vanishingly few programs in the latter category. If LP64 were picked, at least all the cross-platform scientific/maths/etc. programs would be 64-bit clean (or very nearly so).
"64-bit Windows has been available since Windows XP 64-bit Edition – do you see many people porting to Win64? Shouldn’t the goal be to make it easier to port to Win64, not harder? "
Maybe if XP 64 were (a) available on something other than Itanium (the x86-64 version still isn’t out…) (b) not crippled (as the Itanium version omits lots of features the 32-bit version has) (c) useful (as there’s next to no software that benefits from Itanium or 64-bit that you’d want to run on WinXP) we’d see more Win64 uptake.
"But you don’t have to fix them if you have no intention of porting to Win64. See above. "
But if I’ve no intention of porting, it doesn’t matter ANYWAY because the definitions I’m using will be the same as they always were, because they’re compiled into the program, which will be running under WoW64.
"But if I’ve no intention of porting, it doesn’t matter ANYWAY because the definitions I’m using will be the same as they always were"
No, because your proposal changes LONG to INT32. The type changed. You install the latest Platform SDK, recompile your program, and it doesn’t build any more. Does this make you happy or sad?
"because they’re compiled into the program" – and then what happens when you recompile?
Most people expect that installing the latest Platform SDK will not introduce build breaks.
Raymond Chen: "If a LONG expanded from a 32-bit value to a 64-bit value, it would not be possible for a 64-bit program to use this structure to parse a bitmap file."
The *real* problem is that the most commonly used languages for Windows development do not specify the size of numeric types other than, say, that the size of a ‘long int’ must be bigger than or equal to the size of an ‘int’ etc. And that was of course the reason for inventing DWORD etc.
Also, I don’t understand why it is a problem to define a DWORD to be 32 bits in a 64 bit system?
"No, because your proposal changes LONG to INT32."
No, it changes structure definitions that previously erroneously said "long" to say something else (presumably "int").
My 32-bit program is unaltered anyway (because it treats long and int equivalently, save for truncation warnings).
My 64-bit program needs careful checking anyway (because I need to make sure I’m not assuming that integral types are big enough to hold pointer types).
"The type changed. You install the latest Platform SDK, recompile your program, and it doesn’t build any more. Does this make you happy or sad? "
Huh? Why would it stop building?
"and then what happens when you recompile? "
As 32-bit? Nothing.
"Why would it stop building?"
Because you changed "long" to "int". You yourself noted that doing this will raise truncation warnings. And if you compile with "treat warnings as errors" your build is broken.
As far as the whole Win32 and Win64 compatibility goes, what happens if a programmer does this:
int temp_buffer[4];
int *some_int = NULL;
temp_buffer[0] = (int) some_int;
And before you say that’s crazy, realize that automatic garbage collector libraries that work in C actually do checks on pointers *and* ints because they realize it is a real issue that programmers will put pointers into ints and extract them back out as pointers later. As someone else was pointing out, they used %d to print out a handle to another program, yet they should have been using %p. And there’s also the issue of programs that have hardcoded structure sizes which container pointers. It’d seem that all the issues with pointers would motivate someone to realize that it’s not possible to keep longs the same size and assume that’d fix all problems. Though I will agree it might motivate people to switch to Win64..until their badly written code starts corrupting/crashing because of assumptions on pointer size as well.
I am in the process of porting a 32-bit app to Win64. The app is over 15 million lines of code. I’ve been at this for 6+ months.
Raymond, THANK YOU for LLP64.
While I’m on the subject, DrPizza, Raymond is absolutely correct when saying that changing "long" to "int" in code introduces build breaks. I’ve experienced it over thousands of lines of broken code, which I would have had to fix.
Unfortunately for me, I have to port this application to Linux-64 as well, under Mainwin. Since under linux "long" is 64 bits, the mainwin headers had no choice but to define LONG as "typedef int". You can’t imagine how broken the linux builds are right now.
God, how I wish Windows and Unix had chosen the same model.
It looks like we will have no choice but to ban the use of the word "long" in our code and enforce it by writing checkin scripts that parse the code and make sure no programmer has used this type.
I found this site by trying to find a way to tell gcc (linux) to make "long" 32 bits. I found a switch, "-mlong32", but I am not sure how this will break other things, especially when compiling/linking external libraries we use (like STL etc). Any comments?
"God, how I wish Windows and Unix had chosen the same model. "
Given that unix chose first….
"Because you changed "long" to "int". You yourself noted that doing this will raise truncation warnings. And if you compile with "treat warnings as errors" your build is broken. "
It ought to only raise truncation warnings when conversion from long to int is a truncation. Which only ought to be the case when compiling for 64-bit. Which needs truncation fixes anyway.
Just because things are bad doesn’t mean that we should intentionally make things worse.
I already explained the truncation warning here
and here
and alluded to it here
(assuming the reader would pick up on the long-to-int assignment).
"It ought to only raise truncation warnings when conversion from long to int is a truncation."
On your compiler perhaps. There is at least one compiler that raises warnings for *potential* behavior, not just actual behavior – as I already mentioned here.
I find it frustrating that in my comments I keep having to repeat myself.
Raymond wrote: __ is reserved for the implementation, and the Platform SDK is not the implementation. It’s just a header file in the application namespace.
$ pwd
/cygdrive/c/Program Files/Microsoft Visual Studio .NET 2003/Vc7/PlatformSDK/Include
$ grep ‘(# *define|struct) *__’ *.h | wc -l
8844
The same goes for _ followed by a capital letter, by the way.
$ grep ‘(# *define|struct) *_[A-Z]’ *.h | wc -l
5380
pete diemart – I wish it was only 15 years :)
As part of my last job on the Windows team I was the Dev Mgr for NTVDM. We still run Visicalc. Wasn’t that released back in ’81?
I understand why you chose LLP64, but I still have a big reservation about it.
The C standard guarantees that
sizeof(short) <= sizeof(int) <=sizeof(long)
and
long is the biggest int.
People who have followed the standard may now have their code broken, as long is no longer the biggest.
Alternatively, we are supporting people who ignored the standard by assuming that sizeof(long) would not change.
Effectively we are rewarding bad programmers.
Now I know that you have to live in the real world, and you probably made the right choice, but it still grates on me a bit. :-)
This discussion is fascinating.
Of course MS did not have any other options: there is far too much code out there which assumes LONG and long are 32 bit that anything else would be a rebuttal; and the coders are customers.
OTOH, about the choice made by Unices: there, idioms such as:
printf("foo: %lun", (unsigned long)sizeof(foo));
which furthermore is code that went to precautions to be maximally portable (among Unices), led them to the only possible solution: ensuring that the assertion size_t<=ulong still holds with 64-bit. Even if it meant breaking a lot of code (written in the ’80s, usually on Vaxen) which incorrectly assumed that int==long (or, more often, long==int32_t). Less code, older, and less vociferous coders: less hassle.
As someone that writes code intended to be portable (without #ifdef) between Windows (Win32 and Win64) and Unices, I certainly know this is a minefield. However, I am really much more annoyed by the lack of long long in CL until 2003: this means that for yet a pair of years I will have to "support" the __int64 hack.
And this leads to Michael’s point: as a consensuated end to the long long debate, it was agreed to add in the C99 standard the following subclause:
  7.17 Common definitions <stddef.h>

  Recommended practice

  [#4] The types used for size_t and ptrdiff_t should not have an
  integer conversion rank greater than that of signed long unless the
  implementation supports objects large enough to make this necessary.
Having closely followed the whole thread that ended here, I do not feel this is actually a reward toward bad programmers. I do not believe bad programmers will receive any reward in any case. I rather believe it is just that "not bad" programmers that cared about portability, and who had part of their code broken, were penalized; and this above point was in order to soften this.
Since Unix compilers generally enforce this recommendation, while MS did not, it only turns out that Unix "not bad" programmers voiced their concerns louder than MS "not bad" ones. Nothing more.
Michael Smith: This is not guaranteed in C99. However, the MS C/C++ compiler implements only a fraction of the changes made in C99, so when compiling for 64-bit targets it’s compliant with neither C90 nor C99, which is a shame.
Michael: what do you mean by "guaranteed"?
Of course a "Recommended practice" is no guarantee! It is rather the contrary, at least to the coder reading the Standard; it should mean that while he may be customary for him to see this behaviour, this text reminds him he can encounter different situations (yes, Standardese is a strange dialect.)
In 2005, compilers’ conformance to C99 is not 100% even for the most advanced one.
Of course VC does not claim to be among them either. I am reading they are after C++:98, which is another piece of cake entirely.
I dont know. I’d have thought theres far too much code out there written by coders who expect
int len = pszEnd - pszStart;
or, more generically, for an int to be big enough to hold the result of a (char* – char*)
The question should be who came up with these standards? Peter?
What should have been done was the same thing that was done going from 16 to 32 bits.
pointer sizes= 16, 32, 64
int = 16, 32, 64
long = 32
longlong = 64
short = 16
then of course the int16, int32, int64 variations.
I support 16 and 32 bit in one code base, and it would have been automatic for 64-bit, but now it’s going to be a hassle of coming up with my own typedef data types.
I need help with just one line. I need an "F" to be displayed on the same line as the student with 45 points. I keep getting the "F" to display on a new line below the 45. I guess something is wrong with my if statement. I also need to display the % symbol after the 83.3. I'm using Python 3.5.1. Thank you so much for any help.
def main():
    file_students = open("student_points.txt", "r")
    stu_name = file_students.readline()
    num_stu = 0
    f_students = 1
    pass_students = 5/6
    print("Student\t\t\tPoints\t\tGrade")
    print("-------------------------------------\n")
    while stu_name != "":
        stu_name = stu_name.rstrip("\n")
        stu_points = file_students.readline()
        stu_points = int(stu_points)
        print(stu_name, "\t\t", stu_points, sep="")
        num_stu += 1
        if stu_points < 60:
            print("\t\t\t\t\tF")
        stu_name = file_students.readline()
    file_students.close()
    print()
    print("Number of students processed=", num_stu)
    print("% of students who passed=", format(pass_students * 100, ".1f"))

main()
The students and their corresponding grades are:
Johnson Smith: 93
Maryanne James: 80
Stanton Chase: 45
Mildred Morris: 90
George Deitz: 89
Maisie Kling: 79
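A possible fix (a sketch, not an answer from the original thread): `print` takes `end` and `sep` keyword arguments. Ending the points line with `end=""` keeps the cursor on the same line for the grade, and `sep=""` glues a `%` sign directly after the number. The `format_row` helper below is hypothetical, introduced only to make the row layout testable.

```python
# Sketch: hypothetical helper, not from the thread.
def format_row(name, points):
    row = "{}\t\t{}".format(name, points)
    if points < 60:
        row += "\t\tF"  # grade appended on the same line
    return row

for name, points in [("Johnson Smith", 93), ("Stanton Chase", 45)]:
    print(format_row(name, points))

# The % sign printed right after the number:
print("% of students who passed=", format(5 / 6 * 100, ".1f"), "%", sep="")
```

Equivalently, inside the original loop you could write `print(stu_name, "\t\t", stu_points, sep="", end="")`, print the `F` with another `end=""` call, and finish the row with a bare `print()`.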
Agenda
See also: IRC log
SW: Regrets from TV, TimBL, Ashok
... Agenda as published:
<DanC_lap> +1 ok
SW: Minutes from 21 February
approved
... Minutes for f2f still coming together, please let's get them cleaned up and out
<DanC_lap> (Norm, which day of the ftf are you minuting? likewise Dave?)
SW: Next telcon 20 March, JR to
scribe
... Regrets NM for 27 March, DO and DC for 20 March, NM at risk for 20 March
DO, NW: Went well
NM: We used to do code reviews on
a project I was on, and unless everyone, or almost everyone,
had actually read the code, the review was cancelled
... Sometimes at our meetings, I find myself thinking, in reply to a comment, "I don't think you would have said that if you had read the whole document". . .
... and this feels like a bit of a downer
<DanC_lap> (yes, it's worthwhile to set clearer expectations about who is to read what)
SW: Skipping item 3 (Issue
webApplicationState-60 (ISSUE-60)) in Raman's absence
... Please set realistic due dates for actions to review: HST, DC
SW: Lengthy comment from Al
Gilman on ARIA and TagSoupIntegration:
... [Summarizes] -- : is a problem with existing browser implementations
DC: I would like to see details
backing up the factual claims that AG makes
... There should be tests in their test suite
NM: Not just a browser implementation problem, but that there's a buggy dependency -- fixing it for one browser breaks it for another
SW: We'll return to ARIA when TBL is on the call
DO: IE8 announcement includes
something about namespace defaulting on unknown elements, e.g.
svg elements
... and you can put ns decls on the HTML element
<Stuart> dave's blog entry:
DO: But there are serious limitations
... 1) NS decls don't work for attributes
... 2) Default NS decl only works once per subtree, e.g. no MathML inside SVG
... Since svg uses xlink:href, the first is a problem
... and the embedding problem looks bad also
... don't know why they made these restrictions
... Documents ways of doing [NS decls] in IE7 using the OBJECT tag
... What do we do about this?
... What if the TAG sent a comment to the IE Team
... "Glad to see you're moving in this direction
... could you add full NS support, please?"
... We should not only ask HTML 5 WG to do what we need, but the vendors as well
DC: DO, you could attend an HTML
WG telcon and ask MS themselves
... Some attention from TBL on the validator architecture issue, which is good
HST: Question of clarification: We are talking about HTML here, yes?
DO: Yes
HST: So I'm guessing that the nesting constraint has to do with the empty tag issue: no way to tell when embedding stops
DO: Don't know. Do know that you can't use the XHTML namespace at all
NM: Where does media type feed in to this story?
<Zakim> Norm, you wanted to ask if this is beta-limitations or intended design or do we know?
DO: I agree that we should check on that
<noah> NM: In particular, do we know whether if IE 8 gets application/xhtml+xml?
NW: Can't tell if this is just for beta 1, not clear if this is it or we may get more?
DO: I think this is just what's in beta 1, the feature set may evolve, but not sure
NM: At the show where this was
announced, this came in the context of IE 8's plan to default
to standards-compliant rendering
... that was a big deal, leading off the press conference, so I don't think they'll go back on that
NM: Please be clear that I do not speak at all for Microsoft.
NW: Interesting to find out for sure about the NS stuff
NM: Not clear
<DanC_lap> (the SXSW crowd was very happy about the IE8 default change)
<Norm> I sure hope XHTML support exists and has standard XML namespace support.
SW: Feedback to browser vendors a bad idea?
<noah> I'm just passing on my understanding of what I thought I heard at the MIX conference last week. We obviously should contact Microsoft if we want to know what they've really announced and which parts they view as commitments beyond the current IE 8 beta.
DC: I'm opposed -- we have a WG for this
HT: This is not about HTML 5, which is what the WG is aimed at. It's about the existing HTML specs.
HT: It seems very appropriate for W3C or TAG to say "this is interesting and useful" or "not".

HT: It's not entirely clear to me why we'd say that to the HTML 5 workgroup.
NM: But how are you supposed to know you're looking at HTML 5?
DC: No signal, you're supposed to assume 5
NM: OK, so the HTML 5 WG is addressing the general question of what browsers should do with everything that looks like HTML
DO: They are changing HTML 4
DC: They can't change the HTML 4
spec.
... It was locked in 1998
DO: So it's an add-on that is intended to work with HTML 4
<noah> Specifically, I asked Dan for a clarification as to whether HTML 5 documents are distinguished by, say, an HTML 5 doctype. Dan said "no, you are either a browser coded with knowledge of HTML 5 or not". I then said "I think Dan is right: handling namespace prefixes in such content is within the purview of the HTML WG"
DO: I would certainly like to see
the HTML 5 WG do something about namespaces in HTML
... but IE8 is going to ship first, and we should try to get them to do the right thing
... or in particular make sure that what they do doesn't hurt more than it helps
DC: What I'm offering you is a low-latency forum to achieve this
SW: I think more interaction of the sort DC suggests would be good before we do anything
DO: What about asking the TAG to come to an HTML 5 WG telcon?
HST: I don't think we're ready for that, the TAG doesn't have an opinion on this proposal yet
<DanC_lap> . HTML Versioning and DOCTYPEs
<Zakim> ht, you wanted to ask the Chair to schedule discussion
SW: We'll come back to this, Henry will introduce it
<scribe> ACTION: DO to send pointers to MS and Ruby Namespace proposals [recorded in]
<trackbot-ng> Created ACTION-123 - Send pointers to MS and Ruby Namespace proposals [on David Orchard - due 2008-03-20].
SW: The XRI TC have responded to
our questions
... Also note the call for review of another XRI spec:
<scribe> ACTION: Norman to review XRI response to our questions [recorded in]
<trackbot-ng> Created ACTION-124 - Review XRI response to our questions [on Norman Walsh - due 2008-03-20].
<scribe> ACTION: Stuart to review XRI response to our questions [recorded in]
<trackbot-ng> Created ACTION-125 - Review XRI response to our questions [on Stuart Williams - due 2008-03-20].
SW: We've had a response to our
input of the diagram to SWEO, expressing some confusion
... Trying to find a date when we can have Leo and Richard for a discussion about this
SW: Any replies to JR's message to www-tag about link-header?
JR: Two private replies, one
positive and one suggesting a meeting to discuss further
... No reply from Mark Nottingham yet
SW: Graham Kline missed the note because of the BCC use
JR: I will follow up with him and MN directly
SW: Anyone yet read NM's input to the f2f:
NM: I've been thinking a lot
about HTTP redirections
... A lot of the discussion of information resources is about whether the term is well-defined
... I'm happy with it, because it captures a valuable aspect of what you know when you get a repr. of an information resource
... but it seemed to me on reflection that the problems we have with repr. of e.g. me are similar to ones we have wrt repr. of generic resources
NM: I've had some skeptical/this
doesn't help feedback, but no positive feedback yet
... I wrote it to be helpful, if it isn't, we shouldn't spend time on it
... so we should only spend time on this if/when someone says "yes, that's a useful starting point"
<DanC_lap> (I see one short response from fielding; is this the one Noah referred to? )
<DanC_lap> (hmm... there are more...)
JR: I read through it, and I
think there's a lot of overlap with what the Health Care and
Life Sciences group have been looking at
... but we ended up in a very different place
... I think it would help to have some use cases in place
NM: I'm not pushing the solutions I offered, so much as the scenarios. . .
DC: What's the thesis statement?
<Stuart> As a quick summary: the intuition is to acknowledge that due to conneg and just general lack of consensus in the community, the current deployed use of 200 isn't sufficiently consistent and reliable for rigorous reasoning in the semantic Web.
NM: I commend in particular the section labelled "Why Information Resource is not the right abstraction"
JR: I hope AWWSW will look at this
NM: That's not the same as the TAG looking at it, given that if I'm right we might want to change something in WebArch, whereas AWWSW is just trying to formalize what we have already
JR: But I certainly expect that once we've formalized things, we'll be feeding back on problems
HST: CURIEs comments not quite ready for review, maybe next week
Run-Time Type Checking in C++
Introduction
A frequently asked question is: "How can I identify/check the type of an object in C++ at run-time?" Let me show you by resolving a simple problem!
Complete the program to display "Bark!" in the first call of CFoo::AnimalSays and "Miaou!" in the second one.
class Animal {/*...*/};
class Dog : public Animal {/*...*/};
class Cat : public Animal {/*...*/};

class CFoo
{
public:
    void AnimalSays(Animal*) {/*...*/}
};

int main(int argc, char* argv[])
{
    Dog rex;
    Cat kitty;
    CFoo foo;
    foo.AnimalSays(&rex);
    foo.AnimalSays(&kitty);
    return 0;
}
One First Try
The first idea is to add a member variable that stores info about the type.
#include <iostream>

class Animal
{
public:
    enum AnimalType {TypeDog, TypeCat};
};

class Dog : public Animal
{
public:
    Dog() : m_type(TypeDog) {}
    const AnimalType m_type;
};

class Cat : public Animal
{
public:
    Cat() : m_type(TypeCat) {}
    const AnimalType m_type;
};

class CFoo
{
public:
    void AnimalSays(Animal*);
};

int main(int argc, char* argv[])
{
    Dog rex;
    Cat kitty;
    CFoo foo;
    foo.AnimalSays(&rex);
    foo.AnimalSays(&kitty);
    return 0;
}

void CFoo::AnimalSays(Animal* pAnimal)
{
    if (((Dog*)pAnimal)->m_type == Animal::TypeDog)
        std::cout << "Bark! ";
    else if (((Cat*)pAnimal)->m_type == Animal::TypeCat)
        std::cout << "Miaou! ";
}
Now, please take a look at the CFoo::AnimalSays function implementation and imagine that you have not only two but fifty-two animal types (in other words, classes derived from Animal)! Quite ugly! It will be hard to write it with no errors and it also will be hard to read/modify/maintain. However, there can be even worse solutions...
There are no comments yet. Be the first to comment!
x:Type Markup Extension
x:Type is used to supply the attribute value for a property that takes Type. However, many properties that take Type as a value are able to accept the name of the type directly (the string value of the type's Name); check the documentation for the specific property for details. x:Type is essentially a markup extension equivalent for a typeof() operator in C# or the GetType operator in Microsoft Visual Basic .NET.
You define the default XML namespace for any given XAML page as an attribute on the root element. Generally, the default XML namespace you use for Windows Presentation Foundation (WPF) programming is the WPF namespace. The identifier for that namespace is. The vast majority of types that are intended for common WPF application programming are within this namespace. Therefore you generally do not need to map a prefix to obtain a type when using x:Type. You may need to map a prefix if you are referencing a type from a custom assembly, or for types that exist in a WPF assembly but are within a CLR namespace that was not mapped to be part of the WPF namespace from that assembly. For information on prefixes, XML namespaces, and mapping CLR namespaces, see XAML Namespaces and Namespace Mapping.
Attribute syntax is the most common syntax used with this markup extension. The string token provided after the x:Type identifier string is assigned as the TypeName value of the underlying TypeExtension extension class. The value of this attribute is the Name of the desired type.
x:Type can be used in object element syntax. In this case, specifying the value of the TypeName property is required to properly initialize the extension.
x:Type can also be used in a verbose attribute usage that specifies the TypeName property as a property=value pair:
The verbose usage is often useful for extensions that have more than one settable property, or if some properties are optional. Because x:Type has only one settable property, which is required, this verbose usage is not typical.
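For illustration (an assumed usage sketch, not taken from the original page), here are both forms with the common `Button` type:

```xaml
<!-- Attribute syntax: the token after x:Type becomes the TypeName value -->
<Style TargetType="{x:Type Button}">
  <Setter Property="Background" Value="LightGray"/>
</Style>

<!-- Verbose usage that names the TypeName property explicitly -->
<Style TargetType="{x:Type TypeName=Button}"/>
```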
In the WPF XAML processor implementation, the handling for this markup extension is defined by the TypeExtension class.
> From: Robert Weiner <address@hidden>
> Date: Sun, 17 Dec 2017 23:33:04 -0500
> Cc: address@hidden
>
> I have made this requested change and herein attach the patch.  I hope
> you can integrate it sometime.  Thanks.

I have a few minor comments:

First, please provide a ChangeLog-style commit log for the changes
(see CONTRIBUTE for the details).

> +(defun find-library-name (library &optional no-error)
>    "Return the absolute file name of the Emacs Lisp source of LIBRARY.
> -LIBRARY should be a string (the name of the library)."
> +LIBRARY should be a string (the name of the library).
> +Signals an error if the source location is not found, unless optional
> +NO-ERROR is non-nil, in which case nil is returned."

Please try to avoid using passive tense in documentation and comments,
doing so makes the text longer and more complex.  In this case:

  Signal an error if the source location is not found, unless optional
  NO-ERROR is non-nil, in which case silently return nil.

(Note that I also modified the tense of the verbs to be consistent
with the first sentence of the doc string.)

Similar issues exist with other doc string changes in this patch.

> -(defun find-function-search-for-symbol (symbol type library)
> -  "Search for SYMBOL's definition of type TYPE in LIBRARY.
> -Visit the library in a buffer, and return a cons cell (BUFFER . POSITION),
> -or just (BUFFER . nil) if the definition can't be found in the file.
> +(defun find-function-search-for-symbol (symbol type library &optional no-error)
> +  "Search for SYMBOL's definition of TYPE in LIBRARY.
> +Visit the library in a buffer, and return a (BUFFER . POSITION) pair,
> +or nil if the definition can't be found in the library.

This second alternative of the return value makes this an incompatible
change.  Is that really necessary?  It also makes it impossible to
distinguish between the two kinds of failures.

> ;; FIXME for completeness, it might be nice to print something like:
> ;;  foo (which is advised), which is an alias for bar (which is advised).
> -  (while (and def (symbolp def))
> -    (or (eq def function)
> -        (not verbose)
> +  ;; 5/26/2016 - fixed to not loop forever when (eq def function)
> +  (while (and def (symbolp def) (not (eq def function)))
> +    (or (not verbose)
>         (setq aliases (if aliases

The above seems to be an unrelated change.  Also, please don't leave
dates of changes in the sources (or maybe the whole comment is
unnecessary).

> -(defun find-function-noselect (function &optional lisp-only)
> -  "Return a pair (BUFFER . POINT) pointing to the definition of FUNCTION.
> +(defun find-function-noselect (function &optional lisp-only no-error)
> +  "Return a (BUFFER . POINT) pair pointing to the definition of FUNCTION or
> nil if not found.

The first sentence is too long, it should fit on the default window
width of 80 columns, and preferably be even shorter.

> +Signals an error if FUNCTION is null.
    ^^^^^^^
"Signal"

> -If FUNCTION is a built-in function, this function normally
> -attempts to find it in the Emacs C sources; however, if LISP-ONLY
> -is non-nil, signal an error instead.
> +Built-in functions are found within Emacs C sources unless
> +optional LISP-ONLY is non-nil, in which case an error is signaled
> +unless optional NO-ERROR is non-nil.

Here you took text that was very clear and modified it to use passive
tense, which made it less so.  Most of the changes are unnecessary
anyway, as you just needed to add what happens with NO-ERROR non-nil.
So I'd use something like this:

  If FUNCTION is a built-in function, this function normally
  attempts to find it in the Emacs C sources; however, if LISP-ONLY
  is non-nil, it signals an error instead.  If the optional argument
  NO-ERROR is non-nil, it returns nil instead of signaling an error.

>    (if (not function)
> -      (error "You didn't specify a function"))
> +    (error "You didn't specify a function"))

Hmm... why did the indentation change here?

> (defun find-function-do-it (symbol type switch-fn)
> -  "Find Emacs Lisp SYMBOL in a buffer and display it.
> +  "Find Emacs Lisp SYMBOL of TYPE in a buffer, display it with SWITCH-FN and
> return t, else nil if not found.

Once again, this sentence is too long for the first sentence of a doc
string.  I also question the decision to return t if the function
succeeds: couldn't it return a more useful value, like the buffer
where the function is displayed?

> (defun find-function (function)
>   "Find the definition of the FUNCTION near point.
> +Return t if FUNCTION is found, else nil.

Likewise here (and elsewhere in a few similar functions).

> -(defun find-variable-noselect (variable &optional file)
> -  "Return a pair `(BUFFER . POINT)' pointing to the definition of VARIABLE.
> +(defun find-variable-noselect (variable &optional file no-error)
> +  "Return a (BUFFER . POINT) pair pointing to the definition of VARIABLE or
> nil if not found.

Sentence too long.

> -(defun find-definition-noselect (symbol type &optional file)
> -  "Return a pair `(BUFFER . POINT)' pointing to the definition of SYMBOL.
> -If the definition can't be found in the buffer, return (BUFFER).
> +(defun find-definition-noselect (symbol type &optional file no-error)
> +  "Return a (BUFFER . POINT) pair pointing to the definition of SYMBOL or
> nil if not found.

Likewise.

> -The library where FACE is defined is searched for in
> -`find-function-source-path', if non-nil, otherwise in `load-path'.
> -See also `find-function-recenter-line' and `find-function-after-hook'."
> +The library searched for FACE is given by `find-function-source-path',
> +if non-nil, otherwise `load-path'.  See also

I agree that the original text was sub-optimal, but saying that a
library is "given by" a path variable is IMO confusing.  How about
this variant instead:

  The library that defines FACE is looked for in directories specified
  by `find-function-source-path', if that is non-nil, or `load-path'
  otherwise.

Thanks again for working on this.
In The Brothers Karamazov, Part 3, Book 7 why does Rakitin get angry about the outcome of Alyosha Karamazov's visit to Grushenka?
Rakitin is a spiteful and petty seminary student who has no religious calling and hates the Karamazovs. Alyosha Karamazov thinks Rakitin is his friend but, in fact, he is jealous of Alyosha and wishes him ill. Grushenka is Rakitin's first cousin, although he goes to a lot of trouble to conceal that fact. Grushenka had asked Rakitin to bring Alyosha to her so that she can seduce him and has agreed to pay him 25 rubles. He takes advantage of Alyosha's spiritual crisis after the elder's death to entice him to Grushenka's, hoping to watch Alyosha disgrace himself as a novice monk. In Part 3, Book 7, Chapter 3, when Alyosha and Grushenka make friends and recognize each other as kindred spirits, he becomes infuriated. He is forced to look at his cousin in a different light, and he is outed to Alyosha as a false friend. Moreover, the unfolding of events is like a mirror that he cannot help but look into, and he is not happy with the reflection he sees. Thus, he abandons Alyosha, even though Alyosha didn't take offense to his appalling behavior.
In The Brothers Karamazov, Part 3, Book 7 what is the meaning of Alyosha Karamazov's vision on the night he is praying and dozing beside Elder Zosima's coffin?
Alyosha Karamazov is angry with God for allowing his elder to be disgraced after death, and he has a minor rebellion. After his visit to Grushenka, in which she gave him an onion and he gave her an onion in return (both were the occasion of grace for the other), Alyosha returns to the monastery to pray in the elder Zosima's room. Father Paissy is reading the Gospel, which happens to be the story about the Wedding of Cana, when Jesus turned water into wine. In Part 3, Book 7, Chapter 3, Alyosha has a vision of Zosima in a twilight state—halfway between sleeping and waking. Zosima tells him he is at the Wedding of Cana; he got there because he gave "a little onion," referencing the story Grushenka told about the woman who could not be saved from hell because she refused to share her onion. Zosima again tells Alyosha that he must go out into the world. In the vision, Zosima also tells him to look at the sun—which is a representation of God, but Alyosha cannot look yet. However, he is filled with spiritual rapture and goes outside to embrace the earth, knowing he has new strength in his soul. The vision is a reassurance from Alyosha's spiritual guide that he is happy in heaven, and an exhortation for him to do his own spiritual work and pick up where Zosima left off. Thus, the dream is Alyosha's little miracle following the death of his elder.
In The Brothers Karamazov, Part 3, Books 7 and 8, and Part 4, Book 11 how does Dostoevsky use Madame Khokhlakov to create comedy in the novel?
Madame Khokhlakov provides comic relief in almost every scene she is in, such as Part 3, Book 7, Chapter 2; Part 3, Book 8, Chapter 3; and Part 4, Book 11, Chapter 2. She is portrayed as a garrulous scatterbrain who changes her opinion according to the way the wind blows. For example, first she talks Elder Zosima up, contriving miracles for him where none exist (for example, saying he cured Lise of her fevers). Then she sends Rakitin to spy at the monastery to find out what people are saying about Zosima's rotting corpse, and opines that she would not have expected "such conduct from such a venerable old man as Father Zosima." When Dmitri Karamazov comes to borrow money from her, she tries to send him to the gold mines, and later says he almost killed her. But when she thinks he might be acquitted at trial, she says afterward she will invite him to dinner with lots of guests, in case he happens to have another murderous impulse. These are just a few of the many examples in which Madame Khokhlakov creates laughter with her absurd thoughts or behavior.
In The Brothers Karamazov, Part 2, Book 4 why does Captain Snegiryov change his mind about taking money from Alyosha Karamazov when he first comes to call?
In Part 2, Book 4, Chapter 7, Alyosha Karamazov's first errand of mercy to the Snegiryov family is to take 200 rubles to the captain as some form of recompense for the way Dmitri Karamazov treated him. Captain Snegiryov is very poor, and most of his family members are ill. At first, he takes the money and thinks about all the ways in which he can improve his family's lives. But then Alyosha goes too far in promising additional funds from both Katerina and himself. Snegiryov begins to feel like a beggar receiving handouts—and from the relative of his enemy. The worst of his altercation with Dmitri is that his son feels the humiliation of his father beyond anything, and there is no way to restore the boy's pride. Snegiryov begins thinking about how he will face his proud son after taking money from these "aristocrats," which is why he initially refuses. Later, Alyosha is able to convince him to take the money after he feels that his pride has been restored.
What is the result of Elder Zosima's creed that all are guilty for all and responsible for all?
Elder Zosima's creed makes the world a kinder, gentler place and discourages the tendency to judge others—which doesn't mean they can't be held accountable for their bad deeds. Rather, Zosima's creed calls for forgiveness of bad deeds, which are the result of everyone's falling short and failing the sinner. Such an attitude of compassion is a means for transformation of people who have "sinned." For example, Dmitri Karamazov's sins are the result of being abandoned as a child and the subsequent treatment he gets from his father. Zosima takes responsibility for Dmitri's sins by bowing down to him in Part 1, Book 2, Chapter 6, and he tasks Alyosha Karamazov with looking after his brother, which Alyosha fails to do. For example, Zosima tells Alyosha to leave and look after his family in Part 2, Book 4, Chapter 1, and then asks him if he has seen Dmitri when he comes back from the monastery in Part 2, Book 6, Chapter 1, saying, "Perhaps you'll still be able to prevent something terrible." Alyosha fails to find Dmitri, and instead has a long conversation with Ivan Karamazov at the tavern. The narrator notes that, later in life, Alyosha will wonder how the elder's exhortation could have gone out of his head (Part 2, Book 5, Chapter 5). If Alyosha and Ivan had done more to intervene in the quarrel between father and son, Pavel Smerdyakov may not have gotten the idea to kill Old Karamazov, and he certainly would not have been in the garden at that fateful moment when Karamazov is killed. Alyosha's taking responsibility for Grushenka's sin in Part 3, Book 7, Chapter 3 is a catalyst for a great change in her, which begins when Alyosha doesn't condemn her and calls her sister. After that, she stops pretending she is hard-hearted and acknowledges her love for Dmitri. With the help of Alyosha, she is able to see Dmitri through his ordeal, after he is accused of murder.
How do readers of The Brothers Karamazov know that Ivan Karamazov hates his father?
Ivan Karamazov is infuriated by his father's behavior at the monastery in Part 1, Book 2, Chapter 8, and pushes Maximov out of the carriage in a fit of temper. After old Karamazov gets drunk in Part 1, Book 3, Chapter 8, he accuses Ivan of despising him and looking at him with maliciousness. He also says to him that "You came here with something in mind," which is a foreshadowing of Old Karamazov's fate. After Dmitri breaks in and beats Fyodor Karamazov, he tells Alyosha Karamazov that he is more afraid of Ivan than Dmitri. Ivan also tells Alyosha in Part 1, Book 3, Chapter 9 that he would not be sorry if Dmitri killed their father, and even admits that he wishes for the old man's death. When Pavel Smerdyakov tells Ivan he is worried Dmitri will murder him in Part 1, Book 5, Chapter 6, he feels no worry. Yet, he has a premonition that something will happen to his father, although he ignores it. He has pledged to Alyosha that he will protect their father, yet he leaves him unprotected. Finally, he feels guilty about the old man's death because he knows that he wished him dead.
In The Brothers Karamazov, in what ways is Rakitin an atheist?
Rakitin is both an atheist and an enemy of religion. It was common for sons of the lower middle class to attend seminary school to get an education, but Rakitin has no religious vocation nor allegiance to the Russian Orthodox faith, even if he is a seminary student. Early on, he scoffs at Elder Zosima's bowing to Dmitri Karamazov, implying that the old man was simply shrewd and wanted to hedge his bets in the event that Dmitri ended up being a parricide (Part 1, Book 2, Chapter 7). He makes fun of Alyosha Karamazov because he is upset that God has allowed the elder's corpse to rot, taunting his friend by saying, "they passed you over for promotion" (Part 3, Book 7, Chapter 2). In fact, he is no friend of Alyosha's but wants to bring him down and disgrace him, which is why he promises to bring the innocent monk to Grushenka for 25 rubles. When Alyosha and Grushenka mutually recognize the other's goodness, Rakitin becomes furious because his plan to prove Alyosha a hypocrite has been foiled (Part 3, Book 7, Chapter 3). At the end of the novel, Rakitin visits Dmitri in jail and tries to turn him into an atheist, telling him that it is possible to love man without loving God (Part 4, Book 11, Chapter 4).
In The Brothers Karamazov, Part 3, Book 9 why does Dmitri Karamazov hold on to the 1,500 rubles from Katerina instead of immediately giving it back?
In Part 3, Book 9, Chapter 7, Dmitri Karamazov understands that Katerina has given him the 3,000 rubles as a test and also so that he can feel like a scoundrel when he appropriates her money. He understands that the basis for their relationship is that she sacrifices herself for him so that she can continually prove her superiority. He falls in with her plan, partly out of spite, partly out of a lack of discipline, and partly because he thinks himself to be a scoundrel. However, he does not consider himself a thief. As long as he keeps back 1,500 rubles of the money Katerina gave him, there is a chance he will return it and simply owe her half of the money. Once he deliberately spends the money, however, especially after holding on to it for weeks, he proves that he is both a scoundrel and a thief. In his despair over Grushenka's defection to her old lover, he finally decides to spend the remaining money because, in his view, he has nothing left to lose.
In The Brothers Karamazov, Part 3, Book 8 after Grushenka confesses her love, why does she tell Dmitri Karamazov they should farm the land?
Grushenka realizes that both she and Dmitri Karamazov are undisciplined and likely to waste money and wind up as paupers. They also need the mental discipline of hard work to help rein in their emotions, and she feels that they both need to be morally rehabilitated. In her drunken state, she thinks of the noble peasants who work the land and make an honest living by the sweat of their brow, which is why she proposes to Dmitri that they "work on the land" in Part 3, Book 8, Chapter 8. She says, "I want to scrape the earth with my hands," which is also symbolic of getting down to the essentials and dropping the external veneer with which she has been hiding her real personality.
In The Brothers Karamazov, Part 3, Books 7 and 8 why does Grushenka go to her former lover Mussyalovich who suddenly wants to marry her?
Grushenka was madly in love with Mussyalovich, the Polish officer, and she has been nursing the remnants of that love as well as the wound she still carries from his terrible rejection. She hints to Alyosha Karamazov that maybe she will meet him for spite, to show him how strong she's grown and then to make fun of him. But she also says that she will "crawl to him like a little dog." Grushenka has been brooding on this loss for five years, and now she has a chance to resolve it—one way or the other. In Part 3, Book 7, Chapter 3, and Part 3, Book 8, Chapter 7, once she arrives at Mokroye, she realizes that she no longer loves this Pole but only her memory of a girlhood ideal. She also realizes that he only wants to take advantage of her again. This time, he will take her money if she only agrees to marry him.
Streams can be of fixed or infinite length. In the previous example, we used an array of integers to create a fixed length stream. However, if we want to process data arriving through a network connection, this stream of data may appear to be infinite. We can create both types of streams in Java 8.
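To make the distinction concrete, here is a small sketch (not taken from the book) showing one stream of each kind. An infinite stream such as one produced by Stream.iterate must be truncated with limit before a terminal operation like collect can complete:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class StreamKinds {
    public static void main(String[] args) {
        // A fixed-length stream built from known values.
        List<Integer> fixed = IntStream.of(1, 2, 3)
                .boxed()
                .collect(Collectors.toList());
        System.out.println(fixed); // [1, 2, 3]

        // A potentially infinite stream: iterate produces 0, 2, 4, ...
        // forever, so limit() is required before collecting.
        List<Integer> truncated = Stream.iterate(0, n -> n + 2)
                .limit(4)
                .collect(Collectors.toList());
        System.out.println(truncated); // [0, 2, 4, 6]
    }
}
```

The same distinction applies to data arriving over a network: such a source is naturally modeled as an infinite stream that is consumed lazily.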
We will use a Rectangle class to demonstrate the use of streams in several sections of this chapter. The class possesses position and dimension variables and three methods:
scale: This changes the size of a rectangle
getArea: This returns its area
toString: This displays its values
The class declaration follows:
public class Rectangle {
    private int x;
    private int y;
    private int height;
    private int width;

    public Rectangle(int x, int y, int ...
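The excerpt above is cut off mid-declaration. Based on the description of the class (position and dimension fields plus scale, getArea, and toString), a completed version might look like the following — the method bodies and the constructor's parameter order are assumptions, not the book's actual code:

```java
public class Rectangle {
    private int x;
    private int y;
    private int height;
    private int width;

    public Rectangle(int x, int y, int width, int height) {
        this.x = x;
        this.y = y;
        this.width = width;
        this.height = height;
    }

    // scale: changes the size of the rectangle
    public void scale(int factor) {
        this.width *= factor;
        this.height *= factor;
    }

    // getArea: returns the rectangle's area
    public int getArea() {
        return width * height;
    }

    // toString: displays the rectangle's values
    @Override
    public String toString() {
        return "Rectangle[x=" + x + ", y=" + y
                + ", width=" + width + ", height=" + height + "]";
    }
}
```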
/**
 * FieldFilter.java
 *
 * Copyright (c) 2000 Douglass R. Cutting.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */

package org.nemesis.forum.search;

import java.io.IOException;
import java.util.BitSet;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.search.Filter;

/**
 * A Filter that restricts search results to Documents that match a specified
 * Field value.
 *
 * For example, suppose you create a search index to make your catalog of widgets
 * searchable. When indexing, you add a field to each Document called "color"
 * that has one of the following values: "blue", "green", "yellow", or "red".
 * Now suppose that a user is executing a query but only wants to see green
 * widgets in the results. The following code snippet yields that behavior:
 * <pre>
 * //In this example, we assume the Searcher and Query are already defined.
 * //Define a FieldFilter to only show green colored widgets.
 * Field myFilter = new FieldFilter("color", "green");
 * Hits queryResults = mySearcher.execute(myQuery, myFilter);
 * </pre>
 *
 * @author Matt Tucker (matt@Yasna.com)
 */
public class FieldFilter extends Filter {

    private String field;
    private String value;
    private Term searchTerm;

    /**
     * Creates a new field filter. The name of the field and the value to filter
     * on are specified. In order for a Document to pass this filter, it must:
     * <ol>
     * <li>The given field must exist in the document.
     * <li>The field value in the Document must exactly match the given
     * value.</ol>
     *
     * @param field the name of the field to filter on.
     * @param value the value of the field that search results must match.
     */
    public FieldFilter(String field, String value) {
        this.field = field;
        this.value = value;
        searchTerm = new Term(field, value);
    }

    public BitSet bits(IndexReader reader) throws IOException {
        //Create a new BitSet with a capacity equal to the size of the index.
        BitSet bits = new BitSet(reader.maxDoc());
        //Get an enumeration of all the documents that match the specified field
        //value.
        TermDocs matchingDocs = reader.termDocs(searchTerm);
        try {
            while (matchingDocs.next()) {
                bits.set(matchingDocs.doc());
            }
        }
        finally {
            if (matchingDocs != null) {
                matchingDocs.close();
            }
        }
        return bits;
    }
}
Before I discuss the ASP.NET Core dependency injection system, I feel it is important to take a bit of time to try to illustrate the problem that Dependency Injection is designed to solve. To do that, I have had to move the Empty template on a bit from where I left off in the last article. This has involved adding some standard MVC stuff to the project, including the HomeController and its associated Views, along with the site Javascript and CSS files from the Web Application template. I also had to add a couple of extra packages to the project to enable the serving of static files and to cater for the TagHelpers used in the views.
The Problem
A common pattern within web development - especially among beginners - is to pile a whole lot of code that does everything that a web page needs (data, emailing, error handling, logging, authentication etc) into one place. In the Microsoft world, this practice has evolved from spaghetti code in a classic ASP code file, to the code behind file for a web form, and then to a controller in an MVC application. Take a look at the following:
[HttpPost]
public async Task<ActionResult> Contact(ContactViewModel model)
{
    using (var smtp = new SmtpClient("localhost"))
    {
        var mail = new MailMessage
        {
            Subject = model.Subject,
            From = new MailAddress(model.Email),
            Body = model.Message
        };
        mail.To.Add("Support@domain.com");
        await smtp.SendMailAsync(mail);
    }
    return View();
}
This is an example of the type of code that can be found in many demo samples (including some on my site). It features a controller action method that responds to a POST request and generates and sends an email message from the posted form values, represented as a view model. So what's wrong with it?
Dependencies
The main problem is that the controller class has a responsibility for something which is not part of its primary concern. The code that manages the emailing is a dependency of the controller class. This potentially introduces the following problems:
- If you want to change your emailing system over to e.g. Exchange Web Services, you have to make changes to the controller class. This violates the Single Responsibility Principle. The controller class should only be responsible for marshalling data for views. It should not be responsible for sending email. Changing the code concerning one of the controller class's responsibilities could result in bugs being introduced into its other responsibilities.
- If you have similar emailing functionality in multiple places in the application, you have to make the same change multiple times. The more places you have to make changes in, the more chance there is of you introducing other bugs to other parts of the application - or overlooking one or more areas where the change needs to be applied.
- If you want to unit test your controller's Contact method (to ensure it returns the correct View, for example), you will not be able to do so without generating an email. This will require communication with outside processes, which will slow tests down.
- Your test may fail because of some failure in the mail sending routine rather than any logical failure in view selection (which is what a unit test is supposed to cover).
Good design practice recommends that you create specific components or services to cover discrete and separate pieces of functionality (or separate your concerns), and then call upon these services where needed. In this example, the code concerned with mailing should be separated out into its own component, which will expose a public method that can be called from within the controller. This is how the code might be refactored to create a separate class called EmailSender, which is responsible for the actual sending of the email:
using System.Net.Mail;
using System.Threading.Tasks;

namespace WebApplication2.Services
{
    public class EmailSender
    {
        public async Task SendEmailAsync(string email, string subject, string message)
        {
            using (var smtp = new SmtpClient("localhost"))
            {
                var mail = new MailMessage
                {
                    Subject = subject,
                    From = new MailAddress(email),
                    Body = message
                };
                mail.To.Add("Support@domain.com");
                await smtp.SendMailAsync(mail);
            }
        }
    }
}
Note that this is purely for illustration only. A real email component will have a lot of validation and error handling included. And this is how the Contact method looks once it uses the new class:
[HttpPost]
public async Task<ActionResult> Contact(ContactViewModel model)
{
    EmailSender _emailSender = new EmailSender();
    await _emailSender.SendEmailAsync(model.Email, model.Subject, model.Message);
    return View();
}
This is better, because now the controller has no knowledge of how emails are sent. As far as the controller is concerned, the EmailSender class is a black box that encapsulates the details of how emails are generated and dispatched. All the controller class needs to know is that it exposes a SendEmailAsync method that accepts some strings. If you want to change anything to do with sending emails, you change the EmailSender class/component only. You don't have to change the controller code at all. This eliminates the possibility of new errors being introduced into the controller class when updates to emailing routines are being made. It also saves time, as you don't have to make identical changes in any other places where email is sent.
But there is still a problem. The controller is still dependent on (tightly coupled with) a specific (or concrete) type of email component. When you test the controller action, an email will still get sent. You could make changes to the code to replace the EmailSender with a MockEmailSender class that doesn't actually do anything whenever you want to run your tests. However, having to do that in multiple places, and for all the other services that rely on outside processes (logging, data access etc.) every time you wanted to run your tests, and then reversing all the changes afterwards, is not a practical solution. The reality is that you are unlikely to bother to execute your tests at all.
Dependency Injection
Dependency injection (DI) is a design pattern. The problem it attempts to address is the one outlined above, where a class is tightly coupled to a specific implementation of a service. DI offers one way to loosen that coupling by having the service injected into the dependent class. You can do this in a number of ways. You can inject the service into the class's constructor or into one of its properties using the setter. Or, rather than inject a specific type of the service (MockEmailSender or ExchangeEmailSender), you inject an abstraction that represents the service. The abstraction is defined in an interface. In this very simple example, the interface specifies one method:
public interface IEmailSender
{
    Task SendEmailAsync(string email, string subject, string message);
}
The IEmailSender interface specifies a pattern (contract is another term for the same thing) that any type that wants to be seen as an IEmailSender must conform to in order for it to comply. Currently, that means that the type must implement a SendEmailAsync method that takes three strings, as specified by the interface. The existing EmailSender class already meets that requirement, so now it just needs to formally implement the interface to become an IEmailSender. This is achieved by adding a colon after the class name followed by the name of the interface that it should implement:
public class EmailSender : IEmailSender
{
    public async Task SendEmailAsync(string email, string subject, string message)
    {
        ....
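With the interface in place, a test double becomes trivial to write. The following MockEmailSender is not part of the original article — it is a hypothetical sketch of a class that satisfies IEmailSender without sending anything, which is exactly the kind of implementation you would swap in when unit testing:

```csharp
public class MockEmailSender : IEmailSender
{
    // Records what would have been sent so that tests can inspect it.
    public List<string> SentMessages { get; } = new List<string>();

    public Task SendEmailAsync(string email, string subject, string message)
    {
        SentMessages.Add(subject + ": " + message);
        return Task.FromResult(0);
    }
}
```

Because both classes implement the same interface, code that depends only on IEmailSender cannot tell the difference between them.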
The next iteration of the controller sees the EmailSender class injected via the class constructor. It is saved to a private backing field for use wherever required within the controller:
public class HomeController : Controller
{
    IEmailSender _emailSender;

    public HomeController(EmailSender emailSender)
    {
        _emailSender = emailSender;
    }

    ....

    [HttpPost]
    public async Task<ActionResult> Contact(ContactViewModel model)
    {
        await _emailSender.SendEmailAsync(model.Email, model.Subject, model.Message);
        return View();
    }
    ....
The truth is that you can stop refactoring at this point. This is an example of dependency injection - known by the slightly pejorative term "poor man's dependency injection". The code within the controller follows the maxim "program to an interface", which decouples the controller from a specific implementation (apart from in its constructor), and enables separation of concerns. If you have no plans to unit test your code, and you can see no good reason for changing the implementation of your IEmailSender at the moment (i.e. it is not part of the requirement spec to do so), then you can go on your merry way and this code is just fine. However, you might want to continue reading to find out how to further decouple the dependency, and to learn about the dependency injection system in ASP.NET Core.
The following code shows how you replace the concrete implementation of the EmailSender class with the interface in the controller class, removing all dependencies on the concrete type:
public class HomeController : Controller
{
    IEmailSender _emailSender;

    public HomeController(IEmailSender emailSender)
    {
        _emailSender = emailSender;
    }

    ....

    [HttpPost]
    public async Task<ActionResult> Contact(ContactViewModel model)
    {
        await _emailSender.SendEmailAsync(model.Email, model.Subject, model.Message);
        return View();
    }
    ....
A private field is added to the controller class. The data type for the field is IEmailSender, and it is used to store the instance of the IEmailSender that is injected via the constructor that has also been added to the class. That instance is then referenced in the Contact method as before. However, if you try to run the application at the moment, it will generate an exception.
The exception is generated because, at the moment, there is no way for the application to know what concrete type IEmailSender should be resolved to. You cannot instantiate an interface and start calling methods on it. It's an abstraction, and the methods it defines have no implementation. You can see from the message above that the exception is generated by the new Dependency Injection framework, and it is this that needs to be told what to use whenever it encounters IEmailSender. You do this by adding a registration to a dependency injection container.
DI Containers at their simplest are little more than dictionary-like structures that store the abstraction and the concrete type that should be invoked wherever the abstraction used. Most DI containers offer far more functionality than that, but, at their core, that's all they are - a kind of look-up table.
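To make the "look-up table" idea concrete, here is a toy container — purely illustrative, and not how the ASP.NET Core container is actually implemented — that maps an abstraction to a concrete type and instantiates it on request:

```csharp
public class ToyContainer
{
    // The "look-up table": abstraction -> concrete implementation.
    private readonly Dictionary<Type, Type> _registrations = new Dictionary<Type, Type>();

    public void Register<TService, TImplementation>() where TImplementation : TService
    {
        _registrations[typeof(TService)] = typeof(TImplementation);
    }

    public TService Resolve<TService>()
    {
        // A real container would also resolve constructor dependencies,
        // manage lifetimes, cache instances, and so on.
        return (TService)Activator.CreateInstance(_registrations[typeof(TService)]);
    }
}

// Usage:
// var container = new ToyContainer();
// container.Register<IEmailSender, EmailSender>();
// IEmailSender sender = container.Resolve<IEmailSender>();
```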
Dependency Injection is not new in ASP.NET MVC. People have been doing it for years and using a variety of third party DI containers to manage the the resolving of types. What is new in MVC 6 is that a very basic DI container is included as part of the framework. It is minimalistic and doesn't cover advanced use cases, but should be adequate for most common scenarios.
In the preceding tutorial, I registered MVC as a service with the DI container using the AddMvc extension method in the Startup class's ConfigureServices method. This is where you will generally register other services, either explicitly or through extension methods. The following line of code shows how to register the EmailSender implementation against the IEmailSender service:
services.AddTransient<IEmailSender, EmailSender>();
This is the simplest method for adding services to the application - specifying the service as the first parameter and the implementation as the second. If you set a breakpoint on the ConfigureServices method and explore the services, you can see that a lot are already registered with varying lifetime values:
The EmailSender service is registered with Transient scope via the AddTransient<TService, TImplementation> method. Services registered with Transient scope are created whenever needed within the application. That means that a new instance of the EmailSender class will be created by the dependency injection framework every time the Contact method is executed. Two other Lifetime values can be seen in the image: Singleton and Scoped. Singletons are registered via the AddSingleton method and will result in one instance of the service being created on application start and being made available to all requests thereafter. Items added with a Scoped lifetime via the AddScoped method are available for the duration of the request. There is also an AddInstance method that enables you to register a singleton, but you are responsible for creating the instance rather than leaving it to the DI system.
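As an illustration, the three main lifetimes would be registered like this in ConfigureServices (the services other than IEmailSender here are hypothetical names invented for the example):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // A new EmailSender is constructed every time an IEmailSender is requested.
    services.AddTransient<IEmailSender, EmailSender>();

    // One instance is created per HTTP request and shared within it
    // (IRequestContext is a hypothetical service for illustration).
    services.AddScoped<IRequestContext, RequestContext>();

    // A single instance is shared by all requests for the lifetime
    // of the application (ICacheService is also hypothetical).
    services.AddSingleton<ICacheService, CacheService>();
}
```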
As I said previously, the built in dependency injection system is quite light on features. Each item has to be registered manually. It is expected that the developers of existing, more advanced dependency injection containers will undertake the work required to make their systems compatible with the requirements of ASP.NET Core to enable easy use of them instead of the default container.
Summary
This article began by taking a look at the need for dependency injection in an ASP.NET MVC application, and then explored the ASP.NET Core dependency injection system and covered the creation, registration and consumption of a simple service.
In this series, we’re going to create a carpooling app with React Native. This will be a two-part series showing you how to create a full-stack React Native app which uses PHP as the back-end.
The first part covers the following:
- Setting up a Pusher app
- Setting up a Google project
- Setting up Laradock
- Creating the server component
- Exposing the server using ngrok
While the second part will cover the following:
- Creating the app
- Running the app
I’ve previously written a similar tutorial: Build a ride hailing app with React Native. The main difference between the two is that the first one shows how to build an app similar to the following:
The main idea of the above apps is to provide a ride-hailing service to users. This is traditionally called “Ridesharing”.
While this tutorial will show you how to build an app similar to these:
The main idea of the above apps is for users to share their ride with people who are going the same route as them. This is traditionally called “Carpooling”. There are, however, a couple of differences between traditional carpooling apps and the app that we’re going to build:
- The person sharing the ride doesn’t necessarily own the vehicle. This means that they can leave the vehicle at an earlier time than the person they picked up. The only rule is that the person who shared the ride needs to still be in the vehicle until they pick up the other person.
- The person sharing the ride can only pick up one person. “One person” doesn’t necessarily equate to a physical person. There can be two or more, but the idea is that once the person has accepted another user to share a ride with, then they can no longer accept a new request from other users.
Prerequisites
This tutorial requires the following to be already set up on your machine:
- React Native development environment - the series assumes that you already have set up all the software needed to create and run React Native apps. The series will show you how to create the app for both Android and iOS devices. We will use the `react-native init` command to create a React Native project. You can either have both Android Studio and Xcode set up on your machine or just one of them. Additionally, you can set up Genymotion so you can easily change your in-app location. Be sure to check out the setup instructions if you haven’t set up your machine already.
- Docker and Docker Compose - the series assumes that you already have Docker and Docker Compose running on your machine. We will be using those to easily setup a server with all the software that we need. This also assures that we both have the same environment.
- Git - used for cloning repos.
Knowing the basics of creating a React Native app is required. This means you have to know how to run the app on an emulator or your device. You should also have a good grasp of basic React concepts such as props, refs, state, and the component lifecycle.
Knowledge of Docker is required. You should know how to set up Docker on your operating system and set up containers from scratch. Note that Docker has poor support for Windows 7 and 8, so if you’re using either of those systems, you might have difficulty following this tutorial.
Knowledge of the other technologies we’ll be using (such as Elasticsearch, PHP, and Pusher) will be helpful, but not required. I’ll try to cover as much detail as I can, so readers with zero knowledge of them will still be able to follow along.

Lastly, the tutorial assumes that you know your way around the operating system that you’re using. Knowing how to install new software and execute commands in the terminal is required.
What we’ll be building
Before we proceed, it’s important to know what exactly we’ll be building. The app will have two modes:
- sharing - this allows the user to share their ride so that others can make a request to ride with them. For the rest of the series, I’ll be referring to the users who use this feature as “riders”.
- hiking - this allows the user to make a request to ride with someone. I’ll be referring to these users as “hikers”.
Below is the entire flow of the app. I’m using the Genymotion emulator for the user that plays the rider, and an iPhone for the hiker. This is so I can emulate a moving vehicle by using Genymotion’s GPS emulation tool:
I can simply click around the map so that React Native’s Geolocation is triggered. This then allows me to use Pusher Channels to send a message to the hiker so that they’re informed of the rider’s current location.
Now, let’s proceed with the app flow:
1. First, the rider enters their username and clicks Share a ride:
2. Rider types in where they want to go and selects it from the drop-down. Google Places Autocomplete makes this feature work:
3. After selecting a place, the app plots the most desirable route from the origin to the destination. The red marker being the origin, and the blue one being the destination:
If the rider wants to pick another place, they can click on the Reset button. This will empty the text field for entering the place as well as remove the markers and the route from the map.
4. At this point, the rider clicks on the Share Ride button. This triggers a request to the server which then saves all the relevant data to an Elasticsearch index. This allows hikers to search for them later on.
To keep the route information updated, we use React Native’s Geolocation feature to watch the rider’s current location. Every time their location changes, the Elasticsearch index is also updated:
5. Now let’s take a look at the hiker’s side of things. First, the hiker enters their username and clicks on Hitch a ride:
6. Next, the hiker searches for their destination. To keep things simple, let’s pick the same place where the rider is going:
7. Once again, the app plots the most desirable route from the hiker’s origin to their destination:
8. The hiker then clicks on the Search Ride button. At this point, the app makes a request to the server to look for riders matching the route added by the hiker. The rider should now receive the request. Pusher Channels makes this feature work:
9. Once the rider accepts the request, the hiker receives an alert that the rider accepted their request:
10. At this point, the hiker’s map will show rider’s current location. React Native’s Geolocation feature and Pusher Channels make this work:
At the same time, the rider’s map will show their current location on the map. This is where you can use Genymotion’s GPS emulation tool to update the rider’s location:
11. Once the rider is near the hiker, both users will receive a notification informing them that they’re already near each other:
12. Once they are within 20 meters of each other, the app’s UI resets and it goes back to the login screen:
We will use the following technologies to build the app:
- Elasticsearch - for saving and searching for routes.
- Pusher Channels - for establishing realtime communication between the rider and the hiker so they are kept updated where each other is.
- PHP - for saving and searching documents from the Elasticsearch index.
- Google Maps - for showing maps inside the app.
- Google Places Autocomplete - for searching for places.
- Google Directions API - for getting the directions between the origin and the destination of the riders and hikers.
- Geometry Library Google Maps API V3 - for determining whether a specific coordinate lies within a set of coordinates.
The full source code of the app is available on this Github repo.
Setting up a Pusher app
We’ll need to create a Pusher app to use Pusher Channels. Start by creating a Pusher account if you haven’t done so already.
Once you have an account, go to your dashboard and click on Channels apps on the left side of the screen, then click on Create Channels apps. Enter the name of your app and select a desirable cluster, preferably one that’s nearest to your current location:
Once the app is created, click on the App Settings tab and enable client events:
This will allow us to trigger events right from the app itself. That way, the only thing that we need to do on the server is to authenticate requests. Don’t forget to click on Update once you’re done.
The API keys which we’ll be using later are on the App keys tab.
Setting up a Google project
We will be using three of Google’s services to build this app:
- Google Maps
- Google Places
- Google Directions
This requires us to create a Google project at console.developers.google.com so we can use those services.
On your dashboard, click on the Select a project dropdown then click on Create project. Enter the name of the project and click Create:
Once the project is created, click on Library on the left side. Look for the following APIs and enable them:
- Maps SDK for Android
- Maps SDK for iOS - note that if you don’t enable this, and followed the installation instructions for iOS, Apple Maps will be used instead.
- Places SDK for Android
- Places SDK for iOS
- Directions API
- Geocoding API
Once those are enabled, click on the Credentials menu on the left side, then click on the Create credentials button and select API key:
That will generate an API key which allows you to use the services mentioned above. Take note of the key as we will be using it later.
You can choose to restrict access so not just anybody can use your key once they get access to it. To avoid problems while developing the app, I recommend just leaving it unrestricted for now.
Setting up Laradock
Laradock is a full PHP development environment for Docker. It allows us to easily set up the development server. Go through the following steps to setup Laradock:
Configuring the environment
- Clone the official repo (`git clone --branch v7.0.0 https://github.com/laradock/laradock.git`). This will create a `laradock` directory. Note that in the command above we’re cloning a specific release tag (v7.0.0). This is to make sure we’re both using the same version of Laradock. This helps you avoid issues that have to do with different configuration and software versions installed by Laradock. You can choose to clone the most recent version, but you’ll have to handle the compatibility issues on your own.
- Navigate inside the `laradock` directory and create a copy of the sample `.env` file.
- Open the `.env` file in your text editor and replace the existing config with the following. This is the directory where your projects are saved. Go ahead and create a `laradock-projects` folder outside the `laradock` folder. Then inside `laradock-projects`, create a new folder named `ridesharer`. This is where we will add the server code:
APP_CODE_PATH_HOST=../laradock-projects
This is the Elasticsearch port configuration. The one below is actually the default one so in most cases, you don’t really need to change anything. But if you have a different configuration, or if you want to use a different port because an existing application is already using these ports then this is a good place to change them:
ELASTICSEARCH_HOST_HTTP_PORT=9200
ELASTICSEARCH_HOST_TRANSPORT_PORT=9300
This is the path where the Apache site configuration is located. We will be updating it at a later step. This is just to let you know that this is where it’s located:
APACHE_SITES_PATH=./apache2/sites
Adding a virtual host
- Open the `laradock/apache2/sites/default.apache.conf` file and add a new virtual host (you can also replace the existing one if you’re not using it):
<VirtualHost *:80>
    ServerName ridesharer.loc
    DocumentRoot /var/www/ridesharer
    Options Indexes FollowSymLinks

    <Directory "/var/www/ridesharer">
        AllowOverride All
        <IfVersion < 2.4>
            Allow from all
        </IfVersion>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
    </Directory>
</VirtualHost>
The code above tells Apache to serve the files inside the `/var/www/ridesharer` directory when `ridesharer.loc` is accessed in the browser. If the directory has an `index.php` file in it, then it will get served by default (if the filename is not specified).

The `/var/www` directory maps to the application directory you’ve specified earlier in the `.env` file:

APP_CODE_PATH_HOST=../laradock-projects

This means that `/var/www/ridesharer` is equivalent to `/laradock-projects/ridesharer`. This is why we’ve created a `ridesharer` folder inside the `laradock-projects` directory earlier: any file you create inside the `ridesharer` folder will get served.
ridesharer
- Update the operating system’s `hosts` file to point `ridesharer.loc` to `localhost`:

127.0.0.1 ridesharer.loc

This tells the browser to not go looking anywhere else on the internet when `ridesharer.loc` is accessed. Instead, it will just look in the localhost.
Configuring Elasticsearch
Open the `docker-compose.yml` file and search for `ElasticSearch Container`. This will show you the Elasticsearch configuration:
### ElasticSearch ########################################
    elasticsearch:
      build: ./elasticsearch
      volumes:
        - elasticsearch:/usr/share/elasticsearch/data
      environment:
        - cluster.name=laradock-cluster
        - bootstrap.memory_lock=true
        - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      ulimits:
        memlock:
          soft: -1
          hard: -1
      ports:
        - "${ELASTICSEARCH_HOST_HTTP_PORT}:9200"
        - "${ELASTICSEARCH_HOST_TRANSPORT_PORT}:9300"
      depends_on:
        - php-fpm
      networks:
        - frontend
        - backend
Under the environment, add the following:
- xpack.security.enabled=false
So it should look like this:
environment:
  - cluster.name=laradock-cluster
  - bootstrap.memory_lock=true
  - xpack.security.enabled=false
  - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
This disables the need to authenticate when connecting to Elasticsearch.
You can choose to enable it later so that not just anyone can have access to the Elasticsearch index. But to avoid problems with authentication while we’re developing, we’ll disable it for now.
Bringing up the container
Navigate inside the `laradock` directory and bring up the container with Docker Compose:
docker-compose up -d apache2 php-fpm elasticsearch workspace
This will install and set up Apache, PHP, and Elasticsearch on the container. There’s also a workspace so you can log in to the container. This allows you to install packages using Composer.
This process should take a while depending on your internet connection.
Troubleshooting Laradock issues
If you’re having problems completing this step, it is most likely a port issue. That is, another process is already using the port that the containers want to use.
The quickest way to deal with a port issue is to change the default ports that Apache and Elasticsearch are using (or whatever port is already occupied by another process). Open the `.env` file inside the `laradock` folder and make the following changes:
For Apache, replace the values for either `APACHE_HOST_HTTPS_PORT` or `APACHE_PHP_UPSTREAM_PORT` (or both):
# APACHE_HOST_HTTPS_PORT=443
APACHE_HOST_HTTPS_PORT=445

# APACHE_PHP_UPSTREAM_PORT=9000
APACHE_PHP_UPSTREAM_PORT=9001
For Elasticsearch:
# ELASTICSEARCH_HOST_HTTP_PORT=9200
ELASTICSEARCH_HOST_HTTP_PORT=9211

# ELASTICSEARCH_HOST_TRANSPORT_PORT=9300
ELASTICSEARCH_HOST_TRANSPORT_PORT=9311
It’s a good practice to comment out the default config so you know which ones you’re replacing.
If the issue you’re having isn’t a port issue, then you can visit Laradock’s issues page and search for the issue you’re having.
Creating the server component
Installing the Dependencies
Once all the software is installed in the container, Docker will automatically bring it up. This allows you to log in to the container. You can do that by executing the following command while inside the `laradock` directory:
docker-compose exec --user=laradock workspace bash
Once you’re inside, navigate inside the `ridesharer` folder and create a `composer.json` file:
{
    "require": {
        "alexpechkarev/geometry-library": "1.0",
        "elasticsearch/elasticsearch": "^6.0",
        "pusher/pusher-php-server": "^3.0",
        "vlucas/phpdotenv": "^2.4"
    }
}
Save the file and execute `composer install`. This will install the following packages:

- `geometry-library` - as mentioned earlier, this allows us to determine whether a specific coordinate lies within a set of coordinates. We will be using this library to determine if the directions returned by the Google Directions API cover the hiker’s pick-up location (origin).
- `elasticsearch` - this library allows us to query the Elasticsearch index so we can add, search, update, or delete documents.
- `pusher-php-server` - this is the official Pusher PHP library for communicating with Pusher’s server. We will be using it to authenticate requests coming from the app.
- `vlucas/phpdotenv` - for loading environment variables from `.env` files. The `.env` file is where we put the Elasticsearch, Google, and Pusher config.
Adding environment variables
Inside the `laradock-projects/ridesharer` directory, create a `.env` file and add the following:
PUSHER_APP_ID="YOUR PUSHER APP ID"
PUSHER_APP_KEY="YOUR PUSHER APP KEY"
PUSHER_APP_SECRET="YOUR PUSHER APP SECRET"
PUSHER_APP_CLUSTER="YOUR PUSHER APP CLUSTER"
GOOGLE_API_KEY="YOUR GOOGLE API KEY"
ELASTICSEARCH_HOST="elasticsearch"
This file is where you will put the keys and configuration options that we will be using for the server.
Loader file
Since the majority of the files we will be creating will either use the configuration from the `.env` file or connect to the Elasticsearch server, we will be using this file to do those tasks for us. That way, we simply need to include this file in each of the files instead of repeating the same code.
Start by importing the `Elasticsearch\ClientBuilder` class into the current scope. This allows us to use the `ClientBuilder` class without having to refer to its namespace `Elasticsearch` every time we need to use it:
<?php
// laradock-projects/ridesharer/loader.php
use Elasticsearch\ClientBuilder;
Include the vendor autoload file. This allows us to include all the packages that we installed earlier:
require 'vendor/autoload.php';
Load the `.env` file:
$dotenv = new Dotenv\Dotenv(__DIR__);
$dotenv->load();

$elasticsearch_host = getenv('ELASTICSEARCH_HOST'); // get the elasticsearch config
After that, connect to Elasticsearch:
$hosts = [
    [
        'host' => $elasticsearch_host
    ]
];
$client = ClientBuilder::create()->setHosts($hosts)->build();
Setting the type mapping
Since we will be working with coordinates in this app, we need to tell Elasticsearch which of the fields we will be using are coordinates. That way, we can query them later using functions which are specifically created to query geo-point data. This is done through a process called mapping.
Start by including the loader file:
<?php
// laradock-projects/ridesharer/set-map.php
require 'loader.php';
Next, we can proceed with specifying the actual map. Note that an error might occur (for example, the index has already been created, or one of the datatypes we specified isn’t recognized by Elasticsearch) so we’re wrapping everything in a `try..catch`. This allows us to “catch” the error and present it in a friendly manner:
try {
    $indexParams['index'] = 'places'; // the name of the index

    $myTypeMapping = [
        '_source' => [
            'enabled' => true
        ],
        'properties' => [
            'from_coords' => [
                'type' => 'geo_point'
            ],
            'to_coords' => [
                'type' => 'geo_point'
            ],
            'current_coords' => [
                'type' => 'geo_point'
            ],
            'from_bounds.top_left.coords' => [
                'type' => 'geo_point'
            ],
            'from_bounds.bottom_right.coords' => [
                'type' => 'geo_point'
            ],
            'to_bounds.top_left.coords' => [
                'type' => 'geo_point'
            ],
            'to_bounds.bottom_right.coords' => [
                'type' => 'geo_point'
            ]
        ]
    ];

    // next: add code for adding the map

} catch(\Exception $e) {
    echo 'err: ' . $e->getMessage();
}
Breaking down the code above, we first specify the name of the index we want to use. This shouldn’t already exist within Elasticsearch. If you’re coming from an RDBMS background, an index is synonymous with a database:
$indexParams['index'] = 'places';
For the actual type mapping, we only need to specify two properties: `_source` and `properties`. `_source` allows us to specify whether to enable returning of the source when getting documents. In Elasticsearch, the `_source` contains the fields (and their values) that we’ve indexed.
In a real-world app, you don’t really want this option to be enabled as it will affect the search performance. We’re only enabling it so that we don’t have to perform an additional step to fetch the source whenever we’re querying the index:
'_source' => [ 'enabled' => true ],
The other property that we need to specify is `properties`:
'from_coords' => [ 'type' => 'geo_point' ],
If the field that you want to work with is located deep within other fields, then you use the dot notation to specify the parent:
'from_bounds.top_left.coords' => [ 'type' => 'geo_point' ]
Lastly, add the code for creating the index with the map that we specified:
$indexParams['body']['mappings']['location'] = $myTypeMapping; // specify the map
$response = $client->indices()->create($indexParams); // create the index
print_r($response); // print the response
Access `http://ridesharer.loc/set-map.php` in your browser and it should print out a success response.
Note that if you have another local development environment that’s currently running, it might be the one that takes priority instead of Laradock. So be sure to disable them if you can’t access the URL above.
Creating users
When someone uses the app, they need to log in first. If the username they used doesn’t already exist, then it’s created.
Start by getting the data passed from the app. In PHP this is commonly done by extracting the field name from the `$_POST` global variable. But in this case, we’re using the PHP input stream to read the raw JSON data from the request body. This is because this is how Axios (the library that we’ll be using in the app later on) submits the data when sending requests to the server:
<?php
// laradock-projects/ridesharer/create-user.php
require 'loader.php';

$data = json_decode(file_get_contents("php://input"), true);
$username = $data['username']; // get the value from the username field
Construct the parameters to be supplied to Elasticsearch. This includes the `index` and the `type`. You can think of the `type` as the table or collection that you want to query.
$params = [
    'index' => 'places', // the index
    'type' => 'users' // the table or collection
];
Specify the query. In this case, we’re telling Elasticsearch to look for an exact match for the username supplied:
$params['body']['query']['match']['username'] = $username; // look for the username specified
Execute the search query, if it doesn’t return any “hits” then we create a new user using the username that was supplied:
try {
    $search_response = $client->search($params); // execute the search query
    if($search_response['hits']['total'] == 0){ // if the username doesn't already exist
        // create the user
        $index_response = $client->index([
            'index' => 'places',
            'type' => 'users',
            'id' => $username,
            'body' => [
                'username' => $username
            ]
        ]);
    }
    echo 'ok';
} catch(\Exception $e) {
    echo 'err: ' . $e->getMessage();
}
Saving routes
Whenever a rider shares a ride, the following information needs to be stored in the index:
- username
- origin
- destination
- origin coordinates
- destination coordinates
- the steps from the origin to destination
Start by getting the data submitted from the app:
<?php
// laradock-projects/ridesharer/save-route.php
require 'loader.php';

$google_api_key = getenv('GOOGLE_API_KEY');

$data = json_decode(file_get_contents("php://input"), true);
$start_location = $data['start_location']; // an array containing the coordinates (latitude and longitude) of the rider's origin
$end_location = $data['end_location']; // the coordinates of the rider's destination
$username = $data['username']; // the rider's username
$from = $data['from']; // the descriptive name of the rider's origin
$to = $data['to']; // the descriptive name of the rider's destination
$id = generateRandomString(); // unique ID used for identifying the document
Make a request to the Google Directions API using the `file_get_contents()` function. The `directions` endpoint expects the `origin` and `destination` to be passed as query parameters. These two contain the latitude and longitude value pairs (separated by a comma). We simply pass the values supplied from the app.

The `file_get_contents()` function returns a JSON string so we use the `json_decode()` function to convert it to an array. Specifying `true` as the second argument tells PHP to convert it to an array instead of an object (when the second argument is omitted or set to `false`, an object is returned):
$steps_data = [];

$contents = file_get_contents("https://maps.googleapis.com/maps/api/directions/json?origin={$start_location['latitude']},{$start_location['longitude']}&destination={$end_location['latitude']},{$end_location['longitude']}&key={$google_api_key}");
$directions_data = json_decode($contents, true);
Loop through the array of steps and construct an array (`$steps_data`) that only contains the data that we want to store. In this case, it’s only the latitude and longitude values for each of the steps:
if(!empty($directions_data['routes'])){
    $steps = $directions_data['routes'][0]['legs'][0]['steps'];
    foreach($steps as $step){
        $steps_data[] = [
            'lat' => $step['start_location']['lat'],
            'lng' => $step['start_location']['lng']
        ];
        $steps_data[] = [
            'lat' => $step['end_location']['lat'],
            'lng' => $step['end_location']['lng']
        ];
    }
}
Next, construct the data that we’ll save to the Elasticsearch index:
if(!empty($steps_data)){
    $params = [
        'index' => 'places',
        'type' => 'location',
        'id' => $id,
        'body' => [
            'username' => $username,
            'from' => $from,
            'to' => $to,
            'from_coords' => [ // geo-point values needs to have lat and lon
                'lat' => $start_location['latitude'],
                'lon' => $start_location['longitude'],
            ],
            'current_coords' => [
                'lat' => $start_location['latitude'],
                'lon' => $start_location['longitude'],
            ],
            'to_coords' => [
                'lat' => $end_location['latitude'],
                'lon' => $end_location['longitude'],
            ],
            'steps' => $steps_data
        ]
    ];
}
Make the request to index the data:
try{
    $response = $client->index($params);
    $response_data = json_encode([
        'id' => $id
    ]);
    echo $response_data;
}catch(\Exception $e){
    echo 'err: ' . $e->getMessage();
}
The `generateRandomString()` function called above is what generates the unique ID used for identifying the document.
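The body of `generateRandomString()` was lost from this copy of the article, so here is a minimal sketch of what such a helper could look like. The 13-character default length and the use of `random_bytes()` are my assumptions, not necessarily the original implementation — any collision-resistant ID generator would work:

```php
<?php

// ASSUMPTION: the article's original implementation is not shown here;
// this sketch simply returns a random hex string of the requested length.
function generateRandomString($length = 13) {
    // bin2hex() produces two characters per byte, so request enough bytes
    $bytes = random_bytes((int) ceil($length / 2));
    return substr(bin2hex($bytes), 0, $length);
}
```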
Searching routes
When a hiker searches for a ride, a request is made to this file. This expects the origin and destination of the hiker to be passed in the request body. That way, we can make a request to the Google Directions API with those data:
<?php
// /laradock-projects/ridesharer/search-routes.php
require 'loader.php';

$google_api_key = getenv('GOOGLE_API_KEY');

$params['index'] = 'places';
$params['type'] = 'location';

$data = json_decode(file_get_contents("php://input"), true);

// the hiker's origin coordinates
$hiker_origin_lat = $data['origin']['latitude'];
$hiker_origin_lon = $data['origin']['longitude'];

// the hiker's destination coordinates
$hiker_dest_lat = $data['dest']['latitude'];
$hiker_dest_lon = $data['dest']['longitude'];

$hiker_directions_contents = file_get_contents("https://maps.googleapis.com/maps/api/directions/json?origin={$hiker_origin_lat},{$hiker_origin_lon}&destination={$hiker_dest_lat},{$hiker_dest_lon}&key={$google_api_key}");
$hiker_directions_data = json_decode($hiker_directions_contents, true);
Store the hiker’s steps into an array. We will be using it later to determine whether the hiker and the rider have the same route. Note that we’re only storing the `start_location` for the first step. This is because the `start_location` of each succeeding step overlaps with the `end_location` of the step before it:
$hikers_steps = [];
$steps = $hiker_directions_data['routes'][0]['legs'][0]['steps']; // extract the steps
foreach($steps as $index => $s){
    if($index == 0){
        $hikers_steps[] = [
            'lat' => $s['start_location']['lat'],
            'lng' => $s['start_location']['lng']
        ];
    }
    $hikers_steps[] = [
        'lat' => $s['end_location']['lat'],
        'lng' => $s['end_location']['lng']
    ];
}
Next, we construct the query to be sent to Elasticsearch. Here we use a decay function called `gauss` to assign a score to each of the routes that are currently saved in the index. This score is then used to determine the order in which the results are returned, or whether they will be returned at all.

Specifying the `min_score` means all the documents which don’t meet the supplied score won’t be returned in the response. In the code below, we’re querying for documents which are up to five kilometers away from the origin. But once the documents have `current_coords` which are not within 100 meters, the score assigned to them is halved:
$params['body'] = [
    "min_score" => 0.5, // the minimum score for the function to return the record
    'query' => [
        'function_score' => [
            'gauss' => [
                'current_coords' => [
                    "origin" => ["lat" => $hiker_origin_lat, "lon" => $hiker_origin_lon], // where to begin the search
                    "offset" => "100m", // only select documents that are up to 100 meters away from the origin
                    "scale" => "5km" // (offset + scale = 5,100 meters) any document which are not within the 100 meter offset but are still within 5,100 meters gets a score of 0.5
                ]
            ]
        ]
    ]
];
If you want to know more about how the function works, check this article out: The Closer, The Better.
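To see why a document at exactly offset + scale ends up with a score of 0.5, the decay can be reproduced in plain PHP. This is only an illustration of the formula Elasticsearch uses for `gauss` (with its default decay of 0.5) — it is not part of the tutorial’s code:

```php
<?php

// Illustration only: reproduces Elasticsearch's gauss decay scoring.
// sigma^2 is derived so that a document exactly `scale` beyond the
// offset receives exactly the `decay` score (0.5 by default).
function gaussScore($distance, $offset, $scale, $decay = 0.5) {
    $sigmaSquared = -pow($scale, 2) / (2 * log($decay));
    $adjusted = max(0, $distance - $offset); // inside the offset => full score
    return exp(-pow($adjusted, 2) / (2 * $sigmaSquared));
}

// e.g. with the query above: offset 100m, scale 5km
// gaussScore(50, 100, 5000)   => 1.0 (within the offset)
// gaussScore(5100, 100, 5000) => 0.5 (offset + scale away)
```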
Next, construct the coordinates for the hiker’s origin and destination. We will use this to compute the distance between the hiker’s origin and destination, as well as the hiker’s origin and the rider’s destination. We will need these values later on to determine whether the routes returned from the query matches the hiker’s route:
$hikers_origin = ['lat' => $hiker_origin_lat, 'lng' => $hiker_origin_lon];
$hikers_dest = ['lat' => $hiker_dest_lat, 'lng' => $hiker_dest_lon];
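As an aside, the distance between two of these coordinate arrays can be computed with a standard haversine formula. The helper below is a self-contained sketch for illustration only — the tutorial’s own checks happen inside `canDropoff()`, and the `geometry-library` package installed earlier also provides spherical utilities:

```php
<?php

// Illustration only: great-circle distance in meters between two
// ['lat' => ..., 'lng' => ...] arrays using the haversine formula.
function haversineDistance($from, $to) {
    $earthRadius = 6371000; // mean Earth radius in meters

    $latFrom = deg2rad($from['lat']);
    $latTo = deg2rad($to['lat']);
    $latDelta = deg2rad($to['lat'] - $from['lat']);
    $lngDelta = deg2rad($to['lng'] - $from['lng']);

    $a = sin($latDelta / 2) ** 2
       + cos($latFrom) * cos($latTo) * sin($lngDelta / 2) ** 2;

    return $earthRadius * 2 * atan2(sqrt($a), sqrt(1 - $a));
}
```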
Send the request and loop through all the results:
try {
    $response = $client->search($params);
    if(!empty($response['hits']) && $response['hits']['total'] > 0){
        foreach($response['hits']['hits'] as $hit){
            $source = $hit['_source'];
            $riders_steps = $source['steps'];

            $current_coords = $source['current_coords'];
            $to_coords = $source['to_coords'];

            $riders_origin = [
                'lat' => $current_coords['lat'],
                'lng' => $current_coords['lon']
            ];
            $riders_dest = [
                'lat' => $to_coords['lat'],
                'lng' => $to_coords['lon']
            ];

            // check whether the rider's route matches the hiker's route
            if(isCoordsOnPath($hiker_origin_lat, $hiker_origin_lon, $riders_steps) &&
               canDropoff($hikers_origin, $hikers_dest, $riders_origin, $riders_dest, $hikers_steps, $riders_steps)){

                // the rider's username, origin and destination
                $rider_details = [
                    'username' => $source['username'],
                    'from' => $source['from'],
                    'to' => $source['to']
                ];

                echo json_encode($rider_details); // respond with the first match
                break; // break out from the loop
            }
        }
    }
} catch(\Exception $e) {
    echo 'err: ' . $e->getMessage();
}
The
function uses thefunction uses the
isCoordsOnPath()
function from thefunction from the
isLocationOnPath()
library. This accepts the following arguments:library. This accepts the following arguments:
php-geometry
- An array containing the latitude and longitude of the coordinate we want to check.
- An array of arrays containing the latitude and longitude of each of the steps.
- The tolerance value in degrees. This is useful if the place specified isn’t near a road. Here, I’ve used a high value to cover for most cases. As long as the hiker’s origin is somewhat near to a road, then it should be fine.
function isCoordsOnPath($lat, $lon, $path) { $response = \GeometryLibrary\PolyUtil::isLocationOnPath(['lat' => $lat, 'lng' => $lon], $path, 350); return $response; }
The
function determines whether the rider and the hiker are both treading the same route. This accepts the following arguments:function determines whether the rider and the hiker are both treading the same route. This accepts the following arguments:
canDropoff()
- the coordinates of the hiker’s origin.
$hikers_origin
- the coordinates of the hiker’s destination.
$hikers_dest
- the coordinates of the rider’s origin.
$riders_origin
- the coordinates of the rider’s destination.
$riders_destination
- an array containing the hiker’s steps.
$hikers_steps
- an array containing the rider’s steps.
$riders_steps
The way this function works is that it first determines who leaves the vehicle last: the rider or the hiker. The app works with the assumption that the rider has to ride the vehicle first, and that they should pick up the hiker before they get to leave the vehicle. Otherwise, the hiker won’t be able to track where the vehicle is. This means that there are only two possible scenarios when it comes to the order of leaving the vehicle:
- rider rides vehicle → rider picks up hiker → rider leaves the vehicle → hiker leaves the vehicle
- rider rides vehicle → rider picks up hiker → hiker leaves the vehicle → rider leaves the vehicle
The tracking starts once the rider picks up the hiker. So we measure the distance between the hiker’s origin and their destination, as well as the hiker’s origin and the rider’s destination. This then allows us to determine who will leave the vehicle last by comparing the distance between the two.
Once we know the order in which the two users leaves the vehicle, we can now use the
function to determine if the destination of the person who will leave the vehicle first is within the route of the person who will leave the vehicle last:function to determine if the destination of the person who will leave the vehicle first is within the route of the person who will leave the vehicle last:
isCoordsOnPath()
function canDropoff($hikers_origin, $hikers_dest, $riders_origin, $riders_dest, $hikers_steps, $riders_steps) { // get the distance from the hiker's origin to the hiker's destination $hiker_origin_to_hiker_dest = \GeometryLibrary\SphericalUtil::computeDistanceBetween($hikers_origin, $hikers_dest); // get the distance from the hiker's origin to the rider's destination $hiker_origin_to_rider_dest = \GeometryLibrary\SphericalUtil::computeDistanceBetween($hikers_origin, $riders_dest); $is_on_path = false; // whether the rider and hiker is on the same path or not if($hiker_origin_to_hiker_dest > $hiker_origin_to_rider_dest){ // hiker leaves the vehicle last // if the rider's destination is within the routes covered by the hiker $is_on_path = isCoordsOnPath($riders_dest['lat'], $riders_dest['lng'], $hikers_steps); }else if($hiker_origin_to_rider_dest > $hiker_origin_to_hiker_dest){ // rider leaves the vehicle last // if hiker's destination is within the routes covered by the rider $is_on_path = isCoordsOnPath($hikers_dest['lat'], $hikers_dest['lng'], $riders_steps); }else{ // if the rider and hiker are both going the same place // check whether either of the conditions above returns true $is_on_path = isCoordsOnPath($hikers_dest['lat'], $hikers_dest['lng'], $riders_steps) || isCoordsOnPath($riders_dest['lat'], $riders_dest['lng'], $hikers_steps); } return $is_on_path; }
Update route
Every time the location changes, the app makes a request to this file. The app sends the unique ID that the server responded with when the route was created. This allows us to fetch the existing document from the index. We then update the source with the new coordinates:
<?php // laradock-projects/ridesharer/update-route.php require 'loader.php'; $data = json_decode(file_get_contents("php://input"), true); // get the request body and convert it to an array $params['index'] = 'places'; $params['type'] = 'location'; $params['id'] = $data['id']; // the id submitted from the app // the latitude and longitude values submitted from the app $lat = $data['lat']; $lon = $data['lon']; $result = $client->get($params); // get the document based on the id used as the parameter $result['_source']['current_coords'] = [ // update the current coordinates with the latitude and longitude values submitted from the app 'lat' => $lat, 'lon' => $lon ]; $params['body']['doc'] = $result['_source']; // replace the source with the updated data $result = $client->update($params); // update the document echo json_encode($result);
Delete route
Once the rider accepts a request from the hiker, the app makes a request to this file so that the existing route will be deleted. We need to do this because we don’t want other hikers to make another request to the same rider (remember the 1:1 ratio of the rider to hiker?). Also, note that we’re using the rider’s
to query the index. We haven’t really put any security measures to only allow a username to be used on a single app instance, but this tells us that a user can only save one route at a time:to query the index. We haven’t really put any security measures to only allow a username to be used on a single app instance, but this tells us that a user can only save one route at a time:
username
<?php // laradock-projects/ridesharer/delete-route.php require 'loader.php'; $data = json_decode(file_get_contents("php://input"), true); $params['index'] = 'places'; $params['type'] = 'location'; $params['body']['query']['match']['username'] = $data['username']; // find the rider's username $result = $client->search($params); // search the index $id = $result['hits']['hits'][0]['_id']; // only get the first result unset($params['body']); $params['id'] = $id; $result = $client->delete($params); echo json_encode($result);
Delete index
Deleting the index (
) isn’t really required for the app to work. Though it will be useful when testing the app. This allows you to reset the Elasticsearch index so you can control the results that are returned when you search for riders:) isn’t really required for the app to work. Though it will be useful when testing the app. This allows you to reset the Elasticsearch index so you can control the results that are returned when you search for riders:
delete-index.php
<?php // laradock-projects/ridesharer/delete-index.php require 'loader.php'; try { $params = ['index' => 'places']; $response = $client->indices()->delete($params); print_r($response); } catch(\Exception $e) { echo 'err: ' . $e->getMessage(); }
Authenticating requests
Below is the code for authenticating requests so that Pusher will allow the user to use the Channels service. This requires the keys from the App keys tab earlier. Be sure to replace the placeholders with your keys:
<?php // laradock-projects/ridesharer/pusher-auth.php require 'vendor/autoload.php'; // load the .env file located on the same directory as this file $dotenv = new Dotenv\Dotenv(__DIR__); $dotenv->load(); // get the individual config from the .env file. This should be the same as the one's you have on the .env file $app_id = getenv('PUSHER_APP_ID'); $app_key = getenv('PUSHER_APP_KEY'); $app_secret = getenv('PUSHER_APP_SECRET'); $app_cluster = getenv('PUSHER_APP_CLUSTER');
Set the content type to
as this is what the Pusher client expects in the client side:as this is what the Pusher client expects in the client side:
application/json
header('Content-Type: application/json');
Connect to the Pusher app using the keys and options. The options include the cluster where the app is running from, and whether to encrypt the connection or not:
$options = ['cluster' => $app_cluster, 'encrypted' => true]; $pusher = new Pusher\Pusher($app_key, $app_secret, $app_id, $options);
Lastly, get the data sent by the Pusher client and use it as an argument for the
method. This method returns the success token required by the Pusher client:method. This method returns the success token required by the Pusher client:
socket_auth()
$channel = $_POST['channel_name']; $socket_id = $_POST['socket_id']; echo $pusher->socket_auth($channel, $socket_id);
As you can see, we didn’t really apply any form of authentication in the code above. In a real-world scenario, you want to have some form of authentication before returning the success token. This can be a unique ID that’s only assigned to the users of your app, it can also be a key which is then decrypted to come up with a token used for authenticating the request. This unique ID or key is sent from the client side so the server can verify it.
You can test if the server is working by accessing any of the files you created earlier.
Exposing the server with ngrok
So that you can access the virtual host
from the app, you need to setup ngrok. This allows you to expose your virtual host to the internet.from the app, you need to setup ngrok. This allows you to expose your virtual host to the internet.
- Go to your dashboard and download ngrok.
- Unzip the archive.
- Authenticate ngrok using your auth token (
)
.\ngrok authtoken YOUR_AUTH_TOKEN
- Expose the virtual host:
ngrok http -host-header=ridesharer.loc 80
This will give you an output similar to the following:
Copy the HTTPS URL as that’s what we’re going to use in the app later on.
Conclusion
That’s it! In this tutorial, we’ve set up the server to be used by the app. Specifically, you’ve learned the following:
- How to setup and use Laradock.
- How to use PHP to index, search and delete Elasticsearch documents.
- How to use the Google Directions API to get the directions between two coordinates.
- How to use ngrok to expose your virtual host.
You can find the code used in this tutorial on this Github repo.
In the second part of this series, we’ll be covering how to create the actual app.
Originally published on the Pusher tutorial hub.
|
https://hackernoon.com/create-a-carpooling-app-with-react-native-part-1-setting-up-the-server-sw193djk
|
CC-MAIN-2019-47
|
refinedweb
| 6,854
| 52.39
|
I am remaking a program that we use at a car dealership. That allows you to search through our database of cars using a stock number. We mainly use to see if it has passed inspection to be saleable on our car lot. I can use the search function I have for the program to ..
Category : python-idle
I wrote a program in python and it works fine but,the problem was every time it runs it takes the user input until the while loop was true.Due to this my console fills up and it looks like mess.What should i need to do in order to clear my console every time it takes input ..
I tried to run this code on different python platforms got different output for same python code here are the outputs: Microsoft V.S Code=7654321.0 python official ide=77316373.73737083 import math a=1234567 i=len(str(a)) number=0 while a>0: digit=a%10 number=digit*math.pow(10,i)+number a=a/10 i=i-1; print(number) Source: Python..
I’ve been messing around with AutoHotKey, trying to create some useful shortcuts for myself. I can open file explorer, calculator, chrome, etc., but I haven’t been able to write a program that opens Python IDLE. My program says: !t:: Run pythonw return I’ve tried pythonw.exe, python.exe, python, python3.exe, and a bunch of other combinations. Any ..
How to set to native Python env vars for IDLE in Windows? Like PYTHONIOENCODING, PYTHONLEGACYWINDOWSSTDIO and so on. A quick search here didn’t find anything relevant except this, but this question is related only to Linux. Web-search also didn’t bring useful results, I tried follow this guide, which simply says one should define Windows user ..
i am trying to open a python file with idle ide, and make some changes to the file. How can I do this using git bash? I know i can just open it without bash too, but i want to learn how it is done using bash. i tried this: idle filename.py but it didn’t ..
I’ve installed python 3.9.1 by homebrew. [email protected] ~ % idle3 macOS 11 or later required ! zsh: abort idle3 So… I don’t know how to fix it. Help Or maybe there’s something similar that I can use? report: Source: Python..
I’m not sure why, but every time I hit F5 in IDLE to run my program, it just comes up with ==== RESTART: C:filepathhere.py ===== and any statements that I might have printed in the program. It then doesn’t come up with the >>>, so I can’t interact with my program. Why is this? I’ve ..
How do I create three functions? At least one should have one parameter and one should return have at least one return value. This is the question: "Add at least three new functions to custom.py. At least one of them should take at least one parameter, and at least one of them should have at ..
from matplotlib import pyplot as plt When I run this in vscode there are no problems, but in the IDLE I get this: Traceback (most recent call last): File "C:UserstysonOneDrivevscodebook_handlingbookweeding.py", line 3, in from matplotlib import pyplot as plt #imports matplotlib to allow graph drawing ModuleNotFoundError: No module named ‘matplotlib’ I’ve got the up to ..
Recent Comments
|
https://askpythonquestions.com/category/python-idle/
|
CC-MAIN-2021-04
|
refinedweb
| 574
| 84.57
|
Remote the app: and jar: protocols
RESOLVED FIXED in Firefox 19
Status
()
P1
critical
People
(Reporter: fabrice, Assigned: jduell.mcbugs)
Tracking
(Depends on 1 bug, {smoketest})
Firefox Tracking Flags
(blocking-basecamp:+, firefox19 fixed, firefox20 fixed, b2g18 fixed)
Details
(Whiteboard: [qa-])
Attachments
(10 attachments, 10 obsolete attachments)
We need that in b2g to safely access the applications packages. See bug 813468 for more details.
blocking-basecamp: --- → ?
We don't need to remote the entire protocol, just open the fd in the parent (with the right access rights) and then share with the child. The existing code can take it from there.
blocking-basecamp: ? → +
.
(In reply to Brian Smith (:bsmith) from comment #2) > . s/remote the JAR protocol/remote the app protocol/
I agree with you, but the goal of this bug is simpler: avoid having to give app processes rights to read user data directly off the file system (and god forbid write). I'm in favor of doing what you suggest but I think it should be a followup.
The jar: protocol doesn't do any actual reading itself. It simply wrapps other protocols and extracts data that they return. So remoting jar: doesn't do a whole lot. I think we have at least four options here: * Remote app:// so that the file-reading an unpacking happens in the parent. * Mark the /data/local/webapps directory world-readable. *. * Map app:// to jar://someprotocol://path/to/webapp.zip and make "someprotocol" ask the parent process for a file handle and then read data from that. Again, the parent would have to do appropriate security checks to make sure that the requested file is the package for the app. There's likely other options too though.
At this point (for v1), I'm not in favor of a solution where we do actual sending of data across process boundaries. That would be a rather destabilizing change performance-wise. What type of files can go in /data/local/webapps?
(In reply to Jonas Sicking (:sicking) from comment #5) > * Mark the /data/local/webapps directory world-readable. This would mean that any (compromised) app would be able to enumerate what versions of what apps the user has installed. that is, it creates a problem similar to the one that motivated us to restrict app:// to same-app.
(In reply to Chris Jones [:cjones] [:warhammer] from comment #6) > What type of files can go in /data/local/webapps? Instead of playing ping-pong, let me write out my concern. bsmith's point in comment 7 is one problem. Enumerating apps is pretty bad, but "just" a fingerprinting issue. What concerns me is being able to read specific *files* of custom-generated apps. Imagine I go to mybank.com in the browser, and it says "Ohai cjones we have a b2g app for you" that's generated on the fly and customized for my info. Then allowing arbitrary apps to read those files can expose my private data. We of course all think this is a bad design for an app, but nothing prevents it. This isn't a problem with /system/b2g/webapps because an XHR to github will retrieve the same bits.
(In reply to Jonas Sicking (:sicking) from comment #5) > *. Currently app://APP_ID/resource maps to jar:, so this is what we should try. As for checking that the package is only requested by the app itself, we already added that in bug 773886.
.
(In reply to Brian Smith (:bsmith) from comment #10) > . The check currently happens on the content process because all the process is done on the content process... but in any case I don't think we can fix this problem without having a 1-on-1 relationship between processes and apps. The current test is something like 'is the calling principal authorized to load the to-be-loaded content?'. But if the relationship isn't 1-on-1 I don't believe the parent can truly know who the calling principal is... because that information must come from the content process itself, which we're assuming can be compromised. If there's a 1-on-1 relationship then the parent process can know which app is loaded on which process. But since there isn't, the best it can know is which set of apps is loaded on which process... but to know which app is doing the invoking it has to trust the content process. And if we trusted the content process we wouldn't need to do this check on the parent. Kind of a catch-22 there :) Cris raised a good point on comment #8, but again it's something that I don't think can be fully solved unless there's a 1-on-1 relationship between apps and processes, and for the same reason. If I can compromise the content process but the content process doesn't have direct access to a directory I can try and trick the parent process into giving me the content of said directory. It makes the exploiting harder, assuming the parent process controls which apps are running on which processes (since then attacker would have to be running on the same process than the victim), but that's all. I've been giving it some thought, and thinking on a way we could use setfsuid along with having two identities for processes (since you can set a fsuid on a per thread basis) to isolate content threads inside the process... the problem is that the permission boundary to change fsuid is the process, not the thread. 
So if thread A has permission to change thread's B fsuid, thread B has permission to change it back to the original value. And the only way to have a permission per app would be to run the permission setter as root. So, in short, I think that to correctly prevent an exploit scenario as the one described by Cris on comment#9 we can either: a) Have one uid (and thus one process) per app, as Android does or b) Just tell the developers that they should *not* include anything they consider private as part of a packaged app, since it isn't really private. Option b solves the problem (for some very relaxed value of "solve", true :P) and has the advantage of not needing us to change anything besides giving read (evidently never write!) permissions on the app directory to the content processes.
Antonio, looks like you're working on this. Assigning to you.
Assignee: nobody → amac
I don't think option a is viable on the timeframe we have right now (nor I do know if it's viable at all to have one content process per app, considering they're not exactly lightweight). And without implementing option a, anything that we implement doesn't actually add any security layer against a possible exploit(it adds more complexity to an exploit, but just that). So option b was doing nothing. If that's what's required then sure, I can take that :P. Otherwise, if you want to actually remote the protocols, I don't know how the communication between parent and content processes work well enough to take this, I'm afraid. Reassigning it to nobody for the time being.
Assignee: amac → nobody
Jason, can you take this or are you too overloaded?
Assignee: nobody → jduell.mcbugs
Flags: needinfo?(jduell.mcbugs)
Setting priority based on triage discussions. Feel free to decrease priority if you disagree.
Priority: -- → P1
I'm on this.
Flags: needinfo?(jduell.mcbugs)
Increasing priority up to P1 critical smoketest blocker - even with the workaround about to be implemented in bug 813468, we'll still need this to implement true fix that the smoke test would test for installing of packaged apps.
Mass Modify: All un-milestoned, unresolved blocking-basecamp+ bugs are being moved into the C3 milestone. Note that the target milestone does not mean that these bugs can't be resolved prior to 12/10, rather C2 bugs should be prioritized ahead of C3 bugs.
Target Milestone: --- → B2G C3 (12dec-1jan)
Target Milestone: B2G C3 (12dec-1jan) → B2G C2 (20nov-10dec)
I expect to have a patch for this today.
The patches I'm about to land cover everything but two issues: 1) the parent process is currently opening the JAR file on the main thread from within the IPDL call. That's not so horrible--we do a blocking open() for JARs on desktop too (see bug 817202) and it only happens once per jar (so for B2G apps will only happen once per app). Let me know if we think this is good enough for now. I can make the open() happen on a different thread, just haven't gotten to it yet. 2) The patches don't apply to beta yet: nsJARChannel.cpp in particular has stuff in m-c that isn't on beta. I'm working on it. Neither of these need to block reviewing these patches. Hand-wavey design overview: - We convert app:// URIs to regular 'jar:file://' URIs if process can open JAR regularly, i.e. if in-process or if a "core app" with I/O permission). - Else we convert to new 'remoteopenfile://" URI scheme, which JARChannels are now trained to recognize and treat specially: they launch an open of the file handle on the parent and get notified when it's done. - filehandle is stored in a RemoteOpenFile class that looks like a regular nsIFile but doesn't support any I/O operations except OpenNSPRFileDesc(). JARChannel passes this fake nsIFile down into JAR internals (zipreader, zipcache, etc) and everything works magically w/o any changes to non-e10s code. - Optimization skips opening file on parent if we know zipcache is caching file already - Security: we only allow apps to open their own application.zip file.
I need mwu (taras on vacation) for the JAR bits. I'm thinking jdm can review the rest of the patch (or bent or honza).
These don't cover security checks for making sure we only allow application.zip to be opened (we disable them for xpcshell). Not sure how to write automated test for that, since we kill offending processes. mwu: sorry, lots of context diff in this patch. Only real changes are to use 'jar:remoteopenfile' if we're on child, and to disable all the tests for e10s testing except the simple async open of non-nested jar (see comment). Rest of patch is mainly moving the rest of the code into an "if (!inChild)" block (indenting is what's causing the diff to be big).
Attachment #690875 - Flags: review?(mwu)
I think we want this part (for speed reasons) but I split it out because it's the only part of the code that affects regular desktop beta: it adds a "IsCached" method to nsIZipReader.idl". I doubt many addons are using that (and since I'm only adding a new method, we shouldn't be breaking JS addons). If we only add method to an IDL do we need to change uuid? I did in patch, but suspect we can leave it the same (and may want to since we're changing beta).
Attachment #690877 - Flags: review?(mwu)
Fabrice: let me know if there's actually a way from C++ to get the full exact path match for the application.zip file. I didn't see one (can't use message manager, right?) so I implemented a hack where we just make sure that the path ends in /APP_ID/application.zip.
Attachment #690882 - Flags: review?(fabrice)
Comment on attachment 690874 [details] [diff] [review] Part 2: main patch Review of attachment 690874 [details] [diff] [review]: ----------------------------------------------------------------- ::: netwerk/ipc/RemoteOpenFileChild.cpp @@ +48,5 @@ > +RemoteOpenFileChild::Init(nsIURI* aRemoteOpenUri) > +{ > +// TODO: require tabchild unless security disabled > +#if 0 > + if (!aRemoteOpenUri || !aTabChild) { Forgot to remove this in patch series. changed to just "if (!aRemoteURI)" @@ +187,5 @@ > +RemoteOpenFileChild::OpenNSPRFileDesc(int32_t aFlags, int32_t aMode, > + PRFileDesc **aRetval) > +{ > +// TODO: remove this early test code > +// return mFile->OpenNSPRFileDesc(flags, mode, _retval); also removed now. @@ +301,5 @@ > + return mFile->GetParent(aParent); > +} > + > +nsresult > +RemoteOpenFileChild::InitWithPath(const nsAString &filePath) Meh--some of these functions need to be not supported instead of delegated to underlying nsIFile, else mURI and mFile could get out of sync. Irrelevant for actual patch logic, but I'll fix them. New patch coming up. ::: netwerk/ipc/nsIRemoteOpenFileListener.idl @@ +8,5 @@ > + > +/** > + * nsIRemoteOpenFileListener: passed to RemoteOpenFileChild::AsyncRemoteFileOpen. > + * > + * Interface for notifying when the file has been opened as in available in s/as in/and is/.
Attachment #690874 - Attachment is obsolete: true
Attachment #690896 - Flags: review?(mwu)
Comment on attachment 690873 [details] [diff] [review] Part 1: Changes to B2G JS Review of attachment 690873 [details] [diff] [review]: ----------------------------------------------------------------- r=me with nits fixed. ::: dom/apps/src/Webapps.jsm @@ +773,5 @@ > this.doInstallPackage(msg, mm); > break; > + case "Webapps:GetAppInfo": > + return { 'basePath': this.webapps[msg.id].basePath, > + 'isCoreApp': !this.webapps[msg.id].removable }; Nit: we use double quotes for strings in this file. ::: netwerk/protocol/app/AppProtocolHandler.js @@ +18,5 @@ > function AppProtocolHandler() { > + this._appInfo = []; > + this._runningInParent = Cc["@mozilla.org/xre/runtime;1"]. > + getService(Ci.nsIXULRuntime). > + processType == Ci.nsIXULRuntime.PROCESS_TYPE_DEFAULT; Nit: we usually try to align this way: Cc["@mozilla.org/xre/runtime;1"] .getService(Ci.nsIXULRuntime) .processType == Ci.nsIXULRuntime.PROCESS_TYPE_DEFAULT; @@ +37,4 @@ > > + if (!this._appInfo[aId]) { > + let reply = cpmm.sendSyncMessage("Webapps:GetAppInfo", > + { id: aId }); Nit: align { id... with "Webapps:... @@ +39,5 @@ > + let reply = cpmm.sendSyncMessage("Webapps:GetAppInfo", > + { id: aId }); > + this._appInfo[aId] = { 'basePath': reply[0].basePath + "/", > + 'isCoreApp': reply[0].isCoreApp }; > + Nit: trailing whitespace, and use double quotes for strings. Also, you could add the "/" in the parent and just do this._appInfo[aId] = reply[0]; @@ +67,5 @@ > } > > // Build a jar channel and masquerade as an app:// URI. > + let appInfo = this.getAppInfo(appId); > + let uri = ''; |let uri;| is enough. @@ +69,5 @@ > // Build a jar channel and masquerade as an app:// URI. 
> + let appInfo = this.getAppInfo(appId); > + let uri = ''; > + if (this._runningInParent || appInfo.isCoreApp) { > + // In-parent and CoreApps can directly access files, so use jar:file:// I'm not 100% sure the CoreApps behavior is really wanted or just a consequence of the current file permissions on /system and something we want to change. That could be a follow up anyway.
Attachment #690873 - Flags: review?(fabrice) → review+
Comment on attachment 690882 [details] [diff] [review] Part 5: security checks Review of attachment 690882 [details] [diff] [review]: ----------------------------------------------------------------- So, if you need the basePath for an appId, we should just add that to AppsService.js which is our c++ friendly interface to Webapps.jsm Also note that the rule is that an app can only access its own application.zip, unless it has the webapps-manage permission and then can access any application.zip (we need that for the homescreen to be able to load icons from other apps packages). There is code dealing with that in the current setup at ::: netwerk/ipc/NeckoParent.cpp @@ +310,5 @@ > + > + // at this point our URI is a regular file:// uri of jar file, aka the > + // original nsIJARURI.JARFile (except with scheme 'file' instead of > + // 'remoteopenfile' > + nsPrintfCString mustMatch("/%lu/application.zip", (unsigned long)appId); the path used is not /appId/application.zip but /UUID/application.zip, where UUID is generated when installing a new app, or set up by the build system (it can be, eg. calculator.gaiamobile.org). So I think the following test will always fail.
Attachment #690882 - Flags: review?(fabrice) → review-
Comment on attachment 690896 [details] [diff] [review] Part 2, v2 main patch Review of attachment 690896 [details] [diff] [review]: ----------------------------------------------------------------- Mostly looks ok. It's too bad jarcache doesn't currently have some sort of call so you can check if a file is already in the jarcache. I just need an answer to the question on nsJARChannel::OnRemoteFileOpenComplete. ::: modules/libjar/nsJARChannel.cpp @@ +339,5 @@ > nsCOMPtr<nsIFileURL> fileURL = do_QueryInterface(mJarBaseURI); > if (fileURL) > fileURL->GetFile(getter_AddRefs(mJarFile)); > } > + // if we're in child process and have special "remoteopenfile:://" scheme, "remoteopenfile://"? @@ +341,5 @@ > fileURL->GetFile(getter_AddRefs(mJarFile)); > } > + // if we're in child process and have special "remoteopenfile:://" scheme, > + // create special nsIFile that gets file handle from parent when opened. > + if (!mJarFile && XRE_GetProcessType() != GeckoProcessType_Default) { Hm, so we only try this when we can't directly get the file, so trying to do file:// would work in child processes. Are there currently cases where we have to support that? Guess it won't matter once the child processes get chrooted. @@ +904,5 @@ > //----------------------------------------------------------------------------- > +// nsIRemoteOpenFileListener > +//----------------------------------------------------------------------------- > +nsresult > +nsJARChannel::OnRemoteFileOpenComplete(nsresult aOpenStatus) Should we check aOpenStatus? Seems like we should fail immediately if we weren't able to open the file on the parent side. @@ +911,5 @@ > + > + // files on parent are always considered safe > + mIsUnsafe = false; > + > + nsCOMPtr<nsJARInputThunk> input; This should probably be a nsRefPtr though I realize that a lot of the code here is using nsCOMPtr for nsJARInputThunk..
Comment on attachment 690896 [details] [diff] [review] Part 2, v2 main patch Review of attachment 690896 [details] [diff] [review]: ----------------------------------------------------------------- ::: netwerk/ipc/Makefile.in @@ +30,5 @@ > NeckoCommon.h \ > NeckoMessageUtils.h \ > ChannelEventQueue.h \ > + RemoteOpenFileParent.h \ > + RemoteOpenFileChild.h \ Are these actually used outside of netwerk/ipc? ::: netwerk/ipc/PNecko.ipdl @@ +46,5 @@ > PFTPChannel(PBrowser browser, SerializedLoadContext loadContext); > PWebSocket(PBrowser browser, SerializedLoadContext loadContext); > PTCPSocket(nsString host, uint16_t port, bool useSSL, nsString binaryType, > nullable PBrowser browser); > + PRemoteOpenFile(URIParams fileuri, nullable PBrowser browser); Do we have tests that exercise this? If not, let's not allow a null browser until it's absolutely required. ::: netwerk/ipc/RemoteOpenFileChild.cpp @@ +64,5 @@ > + NS_ENSURE_SUCCESS(rv, rv); > + > + // Note: this doesn't do any actual File I/O, so OK to do on child. > + rv = NS_NewLocalFile(NS_ConvertUTF8toUTF16(path), false, > + getter_AddRefs(mFile)); nit: weird indentation @@ +110,5 @@ > + gNeckoChild->SendPRemoteOpenFileConstructor(this, uri, tabChild); > + > + // Can't seem to reply from within IPDL Parent constructor, so send open as > + // separate message > + SendAsyncOpenFile(); You could probably avoid this by calling the code that needs to reply from NeckoParent::RecvPRemoteOpenFileConstructor. @@ +136,5 @@ > + } > + > + MOZ_ASSERT(mListener); > + > + mListener->OnRemoteFileOpenComplete(aRV); I think we should have an ActorDestroyed method that calls OnRemoteFileOpenComplete with a failure code when an IPDL error occurs, otherwise the nsIChannel contract doesn't get fulfilled for the JAR channel's listener. ::: netwerk/ipc/RemoteOpenFileChild.h @@ +78,5 @@ > + > + // regular nsIFile object, that we forward most calls to. 
> + nsCOMPtr<nsIFile> mFile; > + nsCOMPtr<nsIURI> mURI; > + nsCOMPtr<nsIRemoteOpenFileListener> mListener; I'm worried that there's a cycle here that keeps the JAR channel and this object alive forever. Please check this? ::: netwerk/ipc/RemoteOpenFileParent.cpp @@ +12,5 @@ > +#if !defined(XP_WIN) && !defined(MOZ_WIDGET_COCOA) > +#include <fcntl.h> > +#endif > + > +#if 0 Remove this block.
Attachment #690896 - Flags: review?(josh) → review+
Comment on attachment 690877 [details] [diff] [review] Part 4: skip parent file open if JAR is cached on child. Review of attachment 690877 [details] [diff] [review]: ----------------------------------------------------------------- ::: modules/libjar/nsJAR.cpp @@ +1071,5 @@ > + MutexAutoLock lock(mLock); > + > + nsAutoCString uri; > + rv = zipFile->GetNativePath(uri); > + if (NS_FAILED(rv)) return rv; nit: put return on a new line.
(In reply to Brian Smith (:bsmith) from comment #2) >. I will address this concern in bug 822072.
I don't think it's at all important that the child process doesn't have access to reading the non-signed parts of the app package. I.e. I think it's totally fine if the implementation here always opens the package file in the parent process and sends a cloned filehandle to the child process. What we should have is app:-protocol level checks which prevents the child process from opening files from the zip which aren't unsigned. However those checks can run in the child process. If someone hacks the child process then it can do anything it wants (within the confines of the child process) and so it doesn't matter if it can load javascript or anything else from the app package. It can still only read contents from its own package anyway, and so no privacy sensitive information could be read.
Target Milestone: B2G C2 (20nov-10dec) → B2G C3 (12dec-1jan)
mwu is travelling this week. Can we get someone other than him to review: - Part 2, v2 main patch - Part3: xpcshell tests ?
Comment on attachment 690875 [details] [diff] [review] Part3: xpcshell tests Review of attachment 690875 [details] [diff] [review]: ----------------------------------------------------------------- ::: modules/libjar/test/unit/test_jarchannel.js @@ +26,5 @@ > ); > > const fileBase = "test_bug637286.zip"; > const file = do_get_file("data/" + fileBase); > +// on child we'll test with jar:ipcfile:// instead of jar:file:// remoteopenfile ::: modules/libjar/test/unit/xpcshell.ini @@ +3,4 @@ > tail = > > [test_jarchannel.js] > +[test_jarchannel_e10s.js] This probably needs skip-if for OS X; we don't run IPC xpcshell tests on mac.
Attachment #690875 - Flags: review?(mwu) → review+
Comment on attachment 690896 [details] [diff] [review] Part 2, v2 main patch r=me if I get an answer about why we're not checking aOpenStatus, or a new patch that checks it. (doing this now in case I'm not around for the response)
Attachment #690896 - Flags: review?(mwu) → review+
with fabrice's nits fixed
Attachment #690873 - Attachment is obsolete: true
Attachment #693323 - Flags: review+
for your browsing convenience...
Updated main patch. I only have one real review question for jdm: > I think we should have an ActorDestroyed method that calls > OnRemoteFileOpenComplete with a failure code when an IPDL error occurs, > otherwise the nsIChannel contract doesn't get fulfilled for the JAR channel's > listener. jdm: but doesn't an IPDL error occurring mean that the whole child process gets killed, so we don't need to worry about this? > > + nsCOMPtr<nsIRemoteOpenFileListener> mListener; > > I'm worried that there's a cycle here that keeps the JAR channel and > this object alive forever. Please check this? We do the standard necko thing of releasing the mListener after we call OnRemoteFileOpenComplete(). IIUC we're essentially guaranteed to either be called by the parent at some point do that, or be killed due to an IPDL error. But just in case I've added a call to OnRemoteFileOpenComplete and release the listener in RemoteOpenFileChild::ReleaseIPDLReference(). Is that basically the equivalent of the ActorDestroyed you were suggesting? Nits: > > +nsJARChannel::OnRemoteFileOpenComplete(nsresult aOpenStatus) > > Should we check aOpenStatus? Seems like we should fail immediately if > we weren't able to open the file on the parent side. Thanks for catching this. It turns out that we'd already be dead if NS_FAILED(aOpenStatus) as IPDL kills us for sending an invalid FileDescriptor. But that may change someday, so I've changed the code to check and fail. > so trying to do file:// would work in child processes. Are there currently > cases where we have to support that? Guess it won't matter once the child > processes get chrooted. At the moment "core" apps are granted the power to open file:// directly on child, and my part1 patch allows them to do so for efficiency. So yes. > This should probably be a nsRefPtr Yup. Changed. I went ahead and changed all other nsCOMPtrs<thunks> in nsJARChannel.cpp to nsRefPtrs too, since they're obviously wrong. 
> + RemoteOpenFileParent.h \ > + RemoteOpenFileChild.h \ > > Are these actually used outside of netwerk/ipc? Child is (included by JARChannel). Parent isn't, so I tried to remove it, only to discover that we seem to need to export all headers that we want to use EXPORTS_NAMESPACE with (i.e #include as "mozilla/..."). Correct me if I'm missing something. Seems like yet another drawback of C++ namespaces :) > PRemoteOpenFile(URIParams fileuri, nullable PBrowser browser); > > Do we have tests that exercise this? If not, let's not allow a null browser Yup. xpcshell test in the test patch. > nit: weird indentation fixed. > Can't seem to reply from within IPDL Parent constructor, so send open as > separate message I'm punting on this for now, and will fix if/when we switch to doing the fd open() async on the parent. Sending another message is cheap too IIRC what cjones has told me. > Remove this block. done.
Attachment #690896 - Attachment is obsolete: true
Attachment #693325 - Flags: review?(josh)
Attachment #690877 - Attachment is obsolete: true
Attachment #693329 - Flags: review+
With this patch things are working on the phone (yay). I've traced through the debugger to make sure we're doing the open on the parent. Performance for launching a sample packaged app in an opt build seems fine. The only codepath that hasn't gotten much testing are the checks based on getCoreAppsBasePath(). On desktop B2G it looks like we don't use "coreAppsDir", and on the phone core apps are just accessing file:// directly. I still need to make the patches work on Windows/OSX at least to the point where I don't break tests/builds. Coming up soon...
Attachment #690882 - Attachment is obsolete: true
Attachment #693342 - Flags: review?(fabrice)
Comment on attachment 693342 [details] [diff] [review] Part5, v2: security checks Review of attachment 693342 [details] [diff] [review]: ----------------------------------------------------------------- That's almost ready! r- because getCoreAppsBasePath() and getWebAppsBasePath() must also be implemented in AppsServiceChild.jsm and not just in Webapps.jsm. This is because AppsService.js loads AppsServiceChild.jsm when instanciated from the child. For now we don't use these calls so it may be ok to just throw or return null. ::: dom/apps/src/AppsService.js @@ +58,5 @@ > return DOMApplicationRegistry.getAppFromObserverMessage(aMessage); > }, > > + getCoreAppsBasePath: function getCoreAppsBasePath() { > + debug("getCoreAppsBasePath(}"); Nit: s/(}/() @@ +63,5 @@ > + return DOMApplicationRegistry.getCoreAppsBasePath(); > + }, > + > + getWebAppsBasePath: function getWebAppsBasePath() { > + debug("getWebAppsBasePath(}"); Ditto ::: dom/apps/src/Webapps.jsm @@ +2053,5 @@ > return AppsUtils.getAppFromObserverMessage(this.webapps, aMessage); > }, > > + getCoreAppsBasePath: function() { > + return FileUtils.getDir("coreAppsDir", ["webapps"], true, true).path; s/true, true/false since we can't create directories on the system partition
Attachment #693342 - Flags: review?(fabrice) → review-
Attachment #693342 - Attachment is obsolete: true
Attachment #693788 - Flags: review?(fabrice)
with jdm's fixes.
Attachment #690875 - Attachment is obsolete: true
Attachment #693791 - Flags: review+
Setting and asserting for mWebAppsBasePath needs to be skipped if we've disabled IPC security. Broke xpcshell tests. Only change is an "if (!gDisableIPCSecurity) {..}" to wrap that code in the NeckoParent constructor.
Attachment #693788 - Attachment is obsolete: true
Attachment #693788 - Flags: review?(fabrice)
Attachment #693821 - Flags: review?(fabrice)
Simple approach: skip remoting fd open on parent for Windows/OSX, since we'll only use those platforms for desktop builds, where child processes aren't restricted from opening files. I tried to get full remote support for OSX but couldn't get desktop B2G to work even w/o any of my patches, so that'll have to wait. Try run here:
This time with non-empty patch! :)
Attachment #693824 - Attachment is obsolete: true
Comment on attachment 693325 [details] [diff] [review] Part 2, v3 main patch Review of attachment 693325 [details] [diff] [review]: ----------------------------------------------------------------- My fears about ActorDestroy have been assuaged.
Attachment #693325 - Flags: review?(josh) → review+
Comment on attachment 693821 [details] [diff] [review] Part5, v4: security checks Review of attachment 693821 [details] [diff] [review]: ----------------------------------------------------------------- r=me with interfaces' UUID updated. ::: dom/interfaces/apps/mozIApplication.idl @@ +15,1 @@ > interface mozIApplication: mozIDOMApplication Please update the UUID ::: dom/interfaces/apps/nsIAppsService.idl @@ +58,5 @@ > + > + /** > + * Returns the basepath for regular packaged apps > + */ > + DOMString getWebAppsBasePath(); Please update the UUID.
Attachment #693821 - Flags: review?(fabrice) → review+
Comment on attachment 693825 [details] [diff] [review] Part 6: fixes for Windows/OSX I'm running into weirdness on Windows here--seeing null nsCOMPtr deref somehow connected with the nsZipReaderCache:IsCached function I added. Possible workaround is to just not test on windows, since OSX/Linux/B2G seem fine, and the codepath won't be run on production on windows (just an xpcshell test). But I'd like to know what's going on, so investigating some more.
Given the time constraints here, I suggest we think about landing this and filing a follow up bug.
(In reply to JP Rosevear [:jpr] from comment #53) > Given the time constraints here, I suggest we think about landing this and > filing a follow up bug. Agreed.
Comment on attachment 693825 [details] [diff] [review] Part 6: fixes for Windows/OSX OK this works everywhere but windows, and I'll disable the one test there that fails.
Comment on attachment 693825 [details] [diff] [review] Part 6: fixes for Windows/OSX Review of attachment 693825 [details] [diff] [review]: ----------------------------------------------------------------- ::: netwerk/ipc/RemoteOpenFileParent.h @@ +27,5 @@ > private: > nsCOMPtr<nsIFileURL> mURI; > + > + // clang on OSX will fail with warning if if sees 'unused' private member, so > + // hide mFd I don't think this comment is necessary.
Attachment #693825 - Flags: review?(josh) → review+
What is the expected performance impact of this change on app startup time? I saw comment 42, just wondering if you have any more specific expectations.
> What is the expected performance impact of this change on app startup time? At application launch time, we will now open the file handle for the application.zip on the parent instead of the child. This will add an IPDL round trip to the time for application launch, which I don't expect to be significant compared with the actual I/O for loading the app. That said I don't have numbers or a good plan to get them. If someone does, that would be swell. Even a side-by-side eyeball comparison with two phones, one with these patches and one w/o, would be useful as a sanity check. Do we have resources to do that? A possible optimization could be to have the parent keep open filehandles (or better, zipcache mmapped regions) to the most commonly used apps, and when forking a child process close all of them except the one that belongs to the app. I'd wager it's not worth the effort, but that's the most obvious path forward to me if there's a perf issue here.
uuids updated
Attachment #693821 - Attachment is obsolete: true
Attachment #695177 - Flags: review+
Attachment #695177 - Attachment description: Part5, v4: security checks → Part5, v5: security checks
Ripping out parts of necko sec patch that I need to land this, since there's no reason to stop progress :) Keeping separate from rest of patch in case we wind up backing this patch out and then the necko-sec patch needs to land before this re-lands.
Attachment #695181 - Flags: review+
> What is the expected performance impact of this change on app startup time? And the code here only affects packaged apps that are *not* core apps (which for the moment at least still have file I/O permission and just open their application.zip file normally). So to test perf, download one of the packaged apps from
skip-if syntax needs os == "win", not "windows":
Status: NEW → RESOLVED
Closed: 7 years ago
Flags: in-testsuite+
Resolution: --- → FIXED
Whiteboard: [qa-]
status-b2g18: --- → fixed
status-firefox19: --- → fixed
status-firefox20: --- → fixed
https://bugzilla.mozilla.org/show_bug.cgi?id=815523
It appears a new Lao alphabet routine is needed. I may have to generate the rules for alphabetizing...here's the tough part: Lao is not a typical job for an alphabetic sort.
I have already developed a "Lao.pm" module (not yet submitted to CPAN, and may need to use a different namespace) that will identify Lao characters by consonant, vowel, punctuation, and tone marks, and will further classify the consonants by their Lao classes (high/mid/low). So I have the tools for distinguishing at the character level, e.g. \p{Lao::InLaoCons}\p{Lao::InLaoTone}\p{Lao::InLaoVowel}, but need to map the characters to an alphabetical order, and this part seems beyond my experience.
Blessings,
~Polyglot~
In reply to New Alphabet Sort Order
by Polyglot
http://www.perlmonks.org/?parent=897216;node_id=3333
We’ll discuss
Getting started with the D3 library
Using arc from the d3-shape package
SVG path elements
SVG group elements and the transform attribute
Using ES6 Template Literals
Adding the mouth to our face
Assignment: Tweak the face
Getting started with the D3 library
Alright, we’re almost there. All we need now is the mouth. The shape of the mouth should be an arc. But unfortunately, there’s no arc primitive in SVG. But there is an arc helper function from D3 that we can use. D3 is subdivided into multiple packages, and I know for a fact that d3-shape provides the arc primitive, so I just googled for “D3 shape”, landed at the GitHub page for d3-shape, and then found “arc” on this page. So this links to the documentation for arcs.
We’re going to need to pull in the D3 library onto this page. So I’m gonna go back to unpkg.com and I’m gonna say unpkg.com/d3. And it looks like this resolves by default to the nice minified build that we want. So I’m just going to copy that, and make yet another script tag on this page.
<script src=""></script>
Using arc from the d3-shape package
And now in our
index.js, we should be able to say
import { arc } from ‘d3'; And just to see if this works, I’m gonna say
console.log(arc). And sure enough, something appears. So it looks like our import has worked correctly. But how do we use this arc function? Here’s some useful example code for D3 arc.
var arc = d3.arc()
    .innerRadius(0)
    .outerRadius(100)
    .startAngle(0)
    .endAngle(Math.PI / 2);
So what’s being referred to as
d3.arc here, is just
.arc for us. So this is what we need to do. We need to create an arc generator instance with code that looks something like this. So I’m gonna copy this code, I’ll get rid of our console.log there, and I’m gonna make a new variable called
mouthArc –
const mouthArc . And I’m going to set that equal to.. I’m going to paste that code from the documentation page so that creates a new arc generator and sets up the inner radius, outer radius, the start angle, and the end angle.
const mouthArc = arc()
    .innerRadius(0)
    .outerRadius(100)
    .startAngle(0)
    .endAngle(Math.PI / 2);
What you see here is typical of the D3 API where
arc() is sort of a constructor function, and then this pattern —
.innerRadius(0).outerRadius(100).startAngle(0).endAngle(Math.PI / 2) is called method chaining – where you call a function on this object, you pass it some argument, and it actually returns back the original object. The same thing returned from the constructor
arc(), so you can do this chaining pattern, where you can set multiple things, and then the thing that’s returned by the last function gets assigned to this variable
mouthArc. But luckily, it’s the same thing that was returned by the constructor
arc(). So yeah, that’s the pattern of method chaining.
SVG path elements
The way that we use this arc generator, is that we need to create an SVG
<path> element so in our SVG element, I’m going to create a path and then set the d attribute of this
mouthArc invoked as a function.
<path d={mouthArc()}/>
I think that should work. Let me just try something –
console.log(mouthArc()) invoked as a function.
Okay, it outputs the correct thing – namely this cryptic string that’s actually a program in a domain-specific language for SVG paths. The M means “move to” the x-coordinate (6.12) and the y-coordinate (100), or something like that. But we don’t really need to understand the details of this language unless we’re writing a library like d3-shape. But the thing to understand is that this crazy string can be set as the value of the
d attribute for an SVG path.
I’ll get rid of this console.log. My question now is, why are we not seeing anything? My theory is it might be going off the screen, so let me just try changing
endAngle(Math.PI * 2);
Ok, now we’re seeing something. Here in the corner. And if we change
innerRadius to be say
90, now we’re seeing this arc.
SVG group elements and the transform attribute
But the problem now is we want the center of this arc to be the center of this circle. And this is kind of tricky, because arcs don’t have X and Y. But there is this cool thing we can use called an SVG group element. And if you put things inside of an SVG group element, and then move the group element, everything inside the group element moves as well. So let me put everything inside of a group element.
Okay, I’ll open the group element like this
<g> and then at the end of everything I’m gonna close the group element like this
</g>. Then I’m gonna indent everything inside of this group element. Now I’m going to translate everything inside of this group element by
centerX and
centerY. And we can do that like this.
<g transform={`translate(${centerX},${centerY})`}>
Now everything moved. See that our arc is in the right place, but now our whole smiley face is in the wrong place.
Using ES6 Template Literals
Let me just break down what is going on here. These back ticks
`` are an ES6 template literal. It’s pretty much a string template, where you can inject the values of variables using this syntax here –
$ dollar sign. And then open curly brace
{ and then close curly brace
}.
`${}`
So what this is doing, is its producing a string that says,
translate, and this opening and closing parenthesis, it’s actually inside the string.
See, the
transform attribute expects a string that is a sort of a expression in a domain-specific language. And one of the valid expressions is
translate. You could also say
rotate(90), that rotates everything and
rotate(45) but all we need right now is
translate.
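Putting the pieces together, the template literal just produces a plain string; with hypothetical values of 480 and 250 for centerX and centerY (not necessarily the actual values in the video), it looks like this:

```javascript
// Build an SVG transform string with an ES6 template literal.
// centerX / centerY are example values chosen for illustration.
const centerX = 480;
const centerY = 250;
const transform = `translate(${centerX},${centerY})`;
console.log(transform); // "translate(480,250)"
```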
Alright, so now that everything inside of this group element is being translated, we don’t need to use
centerX and
centerY for these circles anymore. We can just use the default values of zero – meaning we could just get rid of
cx and
cy for that one. See now it’s in the right spot. And then we don’t need
centerX and
centerY here in the circle that draws the eyes. But these should be
-eyeOffsetX and
-eyeOffsetY.
<circle cx={-eyeOffsetX} cy={-eyeOffsetY} r={eyeRadius} />
<circle cx={eyeOffsetX} cy={-eyeOffsetY} r={eyeRadius} />
Okay, there’s our left eye. And now let’s get our right eye working. We can just delete
centerX + and then delete
centerY, but then it’s gonna be
- eyeOffsetY.
Adding the mouth to our face
All right! Now we’re just left with tweaking the
mouthArc. Which we can do up here.
const mouthArc = arc()
    .innerRadius(90)
    .outerRadius(100)
    .startAngle(Math.PI / 2)
    .endAngle(Math.PI * 3 / 2);
We’re almost there. We just need to change the
innerRadius and the
outerRadius. But I think what I’d like to do is set the
outerRadius, and the
innerRadius programmatically, based on some width. So let me just make a new variable called
mouthWidth, and I want it to be say
20 pixels.
const mouthWidth = 20;
And I’ll make another variable called
mouthRadius. And this is the one that we can tweak. So I’ll set it to
200 —
const mouthRadius = 200;So the
innerRadius should be
mouthRadius, and then the
outerRadius should be
mouthRadius + mouthWidth.
const mouthArc = arc()
    .innerRadius(mouthRadius)
    .outerRadius(mouthRadius + mouthWidth)
    .startAngle(Math.PI / 2)
    .endAngle(Math.PI * 3 / 2);
Okay, now we could just tweak
mouthRadius and the thickness of the mouth itself should remain constant. Okay, great! This looks like a smiley face to me, all right.
Assignment: Tweak the face
There we have it. That’s how you can make a smiley face with React and D3. Now here is an assignment for you.
I’d like you to fork this work in Vizhub and tweak it to make a really cool-looking face. Part of what I want to teach here is how you can teach yourself to use these technologies. Because I’m not gonna cover every possible thing you could do with SVG. But I can get you started, and provide the foundation for doing really cool stuff. So I would encourage you to you know do a Google search for SVG ellipse, SVG lines, SVG rect, maybe even SVG gradients. If you want to get fancy or drop shadows, things like that, I want you to get creative. Maybe do something like add irises to the eyes, of the face, or add a nose, or add teeth, or I don’t know make the face a square. It’s up to you. But I just want you to fork this face, and experiment and be creative.
To submit this assignment, just copy the Vizhub link, and share that in our course Slack channel. Also, please take a look at other students’ work, and maybe comment on one of the other students’ faces.
https://datavis.tech/datavis-2020-episode-7-lets-make-a-face-part-iii-with-react-d3/
Who's driving this car? At first glance it appears that as a developer, you have very little if any control over how MapReduce behaves. In some regards this is an accurate assessment. You have no control over when or where a MapReduce job runs, what data a specific map job will process or which reducer will handle the map's intermediate output. Feeling helpless yet?
Don't worry the truth is that despite all that, there are a number of ninja techniques you can use to take control of how data moves through your MapReduce job to influence the ultimate outcome.
Local Aggregation & Combiners
When a mapper runs and produces its intermediate output, that output is first written to disk before being sent over the network through the shuffle/sort and on to the reducer. The two pain points should be evident: local disk I/O on the mapper and of course network traffic.
We can use the typical word count example to better illustrate. If we were to do a word count on the book Pride and Prejudice, our mapper would read in the text line-by-line and emit a key/value pair that would consist of a key of an individual word and a value of 1. Nothing unexpected here except the practical issue that all those key/value pairs will first be written to disk before being sent over the network. For one book that may not be a problem, but we are talking Big Data here and so this is sub-optimal.
To work around this issue we can use the concept known as local aggregation, which simply means that we want to consolidate the data before writing it to disk. Local aggregation can be implemented in two ways. First, we could use internal structures to store data directly in the mapper. The downside to this approach is memory pressure and the potential that we exceed the amount of memory allocated to the JVM that the map job runs in.
A better method is to make use of a combiner. Combiners act as local reducers, aggregating data by key while it's in memory. The difference between the two methods discussed is that combiners will spill or write to disk as buffer limits are reached. This obviously resolves the potential out-of-memory issue. It can, however, result in duplicate keys being emitted to the shuffle/sort, which is generally not an issue considering where we started from.
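The core idea of local aggregation is language-agnostic. Here is a sketch in plain JavaScript (not Hadoop's actual combiner API) of an in-mapper buffer that emits one record per distinct word instead of one per occurrence:

```javascript
// Sketch of in-mapper local aggregation — the idea behind a combiner.
// Instead of emitting (word, 1) for every token, buffer counts locally
// and emit a single (word, count) pair per distinct word.
function mapWithLocalAggregation(line) {
  const counts = new Map();
  for (const word of line.toLowerCase().split(/\s+/).filter(Boolean)) {
    counts.set(word, (counts.get(word) || 0) + 1);
  }
  // One record per distinct word instead of one per occurrence.
  return [...counts.entries()];
}

const emitted = mapWithLocalAggregation(
  "it is a truth universally acknowledged that it is"
);
console.log(emitted.length); // 7 distinct words instead of 9 raw pairs
```

A real combiner does the same aggregation but spills to disk as buffers fill, which is what avoids the memory-pressure problem described above.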
Note that if the reduce function is both associative and commutative (e.g. a sum of word counts), a reducer can function as both a reducer and a combiner.
Shuffle
After all the map jobs have finished, the shuffle is run. The shuffle partitions the data by key and ensures that all records for a given key from all mappers are sent to a single reducer. This works because the default partitioner is the HashPartitioner, which calculates a hash for the key and ensures that all key/value pairs with the same hash are sent to the same reducer. When you are working primarily with keys that are primitive data types, the built-in partition process will normally suffice. When you begin to work with composite keys and complex types, things get interesting and it's time for another ninja move.
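The contract here — same key, same hash, same reducer — can be sketched like this (the hash function is a simple stand-in, not the one Hadoop's HashPartitioner actually uses):

```javascript
// Sketch of hash partitioning: identical keys always map to the same
// reducer index. The hash function is a toy stand-in for illustration.
function hashCode(key) {
  let h = 0;
  for (const ch of key) {
    h = (h * 31 + ch.charCodeAt(0)) | 0; // keep it in 32-bit range
  }
  return h;
}

function getPartition(key, numReducers) {
  // Mask off the sign bit so the modulus is never negative.
  return (hashCode(key) & 0x7fffffff) % numReducers;
}

console.log(getPartition("pride", 4) === getPartition("pride", 4)); // true
```

Hadoop's real HashPartitioner does essentially the same thing using the key's hashCode(), which is exactly why two different composite keys can land on different reducers even when their natural keys match.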
The Custom Partitioner
Partitioner<K,V> is an abstract class with a single method, getPartition(), that allows you to take control of the shuffle process and direct your data to a reducer of your choosing. The key, the value and the number of reducers are passed in as arguments, which gives you everything you need to partition your data and direct it to the most appropriate place.
To better illustrate, let's look at an example. Our map job outputs a composite key that consists of a date in YYYYMMDD format and a weather station identifier, delimited by a pipe character. If the desired behavior is to send all intermediate output for a given year (the natural key) to a single reducer, the default partitioner will not work.
Take two keys [20130312|006852] and [20130313|007051] for example. The default partitioner will calculate a hash over the entire key, resulting in different hashes and the potential that the records are sent to separate reducers. To ensure that both records are sent to the same reducer, let's implement a custom partitioner.
public class YearPartitioner extends Partitioner<Text, LongWritable> {
    @Override
    public int getPartition(Text key, LongWritable value, int numReduceTasks) {
        // Split the year (the natural key) out of the composite key and convert to int
        int year = Integer.parseInt(key.toString().substring(0, 4));
        // Use mod to balance years across the # of available reducers
        return year % numReduceTasks;
    }
}
In the rather arbitrary example above, we took command over the shuffle by splitting out the year (a.k.a. the natural key) and then simply used the modulus function to balance the stream of years over the available reducers. Pretty simple and pretty straight-forward, right? Not so fast kung-fu master.... To guarantee that all relevant rows within a partition of data are sent to a single reducer you must also implement a grouping comparator which considers only the natural key (in this example the Year) as seen below.
public class YearGroupingComparator extends WritableComparator {
    protected YearGroupingComparator() {
        super(Text.class, true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        int year1 = Integer.parseInt(a.toString().substring(0, 4));
        int year2 = Integer.parseInt(b.toString().substring(0, 4));
        return Integer.compare(year1, year2);
    }
}
Before you run off an write your own partitioner though note that as with anything, great power requires great responsibility. Since you are taking control of the partitioning process, MapReduce does nothing to ensure you effectively distribute your data over the reducers. This means that an ineffectively written partitioner can quickly take away all the embarrassingly parallel abilities MapReduce has given you.
Sort
Data that is delivered to the reducer is guaranteed to be grouped either by the key or by the partition/comparator function described above. Suppose though, that we wanted to calculate a moving average over the set of data. In this case, it's important that the records are delivered to the reduce function in a specific order (i.e. ordered by date). The last ninja skill we look at is the ability to do a secondary sort using another custom comparator, this time one that looks at the entire composite key.
The Secondary Sort
Using the example from above to illustrate how a secondary sort works, suppose again that we are processing large amounts of data from multiple inputs. Our partition function/comparator ensures that all the weather station data for 2013 from all the mappers is sent to a single reducer. But there is a high probability that if this data is spread out, it will arrive out of order. Sure, we could build large arrays or other memory structures to hold and then sort the data in the JVM instance of the reducer, but this option doesn't scale well.
Instead, we can define a comparator that evaluates first the natural key and then the composite key to correctly order the key/value pairs by date as seen in the example below:
public class SortComparator extends WritableComparator {
    protected SortComparator() {
        super(Text.class, true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        int year1 = Integer.parseInt(a.toString().substring(0, 4));
        int year2 = Integer.parseInt(b.toString().substring(0, 4));
        if (year1 == year2) {
            // In a YYYYMMDD key the month occupies characters 4-5
            int month1 = Integer.parseInt(a.toString().substring(4, 6));
            int month2 = Integer.parseInt(b.toString().substring(4, 6));
            return Integer.compare(month1, month2);
        }
        return Integer.compare(year1, year2);
    }
}
Wrap-Up
To use your new ninja moves, we must configure the MapReduce job correctly. The combiner, partitioner and comparators are all defined in the job configuration as seen below:
conf.setCombinerClass(Combiner.class);
conf.setPartitionerClass(Partitioner.class);
conf.setOutputValueGroupingComparator(PartitionComparator.class);
conf.setOutputKeyComparatorClass(SecondarySortComparator.class);
I hope this post has been helpful in expanding your knowledge of MapReduce and has given you some tools to take hold of how execution occurs within the framework.
Till next time!
https://blogs.msdn.microsoft.com/bluewatersql/2014/10/01/mapreduce-ninja-moves-combiners-shuffle-doing-a-sort/
Also, I've just added a Name property to the attributes. So, you can alias methods / actions in your code. (I can see this as being very useful if someone were switching between platforms and you didn't want to / couldn't rename all of your methods to match.)
So, you would declare an action like:
[DirectAction("user")]
public class UserController
{
[DirectMethod("getUser")]
public string GetUserById(int Id)
{...}
}
and your client side method would be user.getUser instead of UserController.GetUserById
@durlabh: I think this thread must continue as intended, namely a central thread for comments, questions, and improvements regarding Evan's Ext.Direct .NET Router. I appreciate your ExtJs related initiatives but please do not use this thread as a 'promotional platform' for your version.
@Dave.Sanders: Name property is a good suggestion.
Dave, I like your ideas, however make sure you've got the latest version (first post of this thread).
Evan Trimboli
Sencha Developer
Twitter - @evantrimboli
Don't be afraid of the source code!
Ok, I've attached my version of the code. I looked around at the latest version, but I think the differences were things that I had modified in my code (in a slightly different way.) If I missed something, please let me know and I'll update my code.
Some highlights: (any line numbers are from my code)
1. I've done a little bit of cleaning up in the logic of DirectProcessor around line 58 just to make it more concise.
2. I've changed out how the JSON is created by using the DefaultValueHandling property on JsonSerializerSettings. (See this thread here on JSON.Net board - it was a feature I implemented, then James modified a year or so ago.) This, combined with [DefaultValue] attributes added to JsonResponse makes the JSON more concise. If an event is sent back, then you don't get the other unnecessary null fields. (DirectProcessor line 60 and attributes in DirectResponse)
3. I've added DirectEvent, which can be passed back by an action/method to invoke a clientside event in Ext.Direct.
4. I've modified DirectResponse to recognize an exception or an event and to send different response types down to the client. (DirectResponse line 14)
5. I've modified DirectProcessor.ProcessRequest to be able to handle the return of a List<object> from the method, and to create a List<DirectResponse> that it passes back to be serialized. This allows events AND data to be sent down to the client at the same time. (Though caution has to be taken due to how events are ran by Ext.Direct.) (DirectProcessor line 77)
6. I've modified ProcessRequest to catch and handle exceptions. Instead of just handling DirectExceptions like your code does, I handle ALL exceptions, but I've wrapped the try catch in a preprocessor directive so that the IDE will still catch on errors while in debug mode. You can put it in Release mode to have the errors sent only to the client. (This still needs to be finished so it doesn't send down the call stack in Release mode.) (DirectProcessor Line 72)
I think that's it, but you can see why it was easier to just attach my code instead of trying to describe patches. Take a look and let me know what's good / bad. I'd be happy to help integrate these into your code, or vice versa.
Cheers
D
UPDATED: Code files removed. LATEST CODE IS IN THIS POST
UPDATE: Attached a new version of the code, there was a slight bug in finding actions / methods by the name you give to the attribute. Which was a feature I forgot to mention:
7. You can abstract the name of the action / method used in the client side away from the server side by specifying a name property when using the DirectAction or DirectMethod attributes. example:
[DirectAction("myAction")]
public class YourAction
{
[DirectMethod("myMethod")]
public string YourMethod
}
would map the clientside myAction.myMethod() to the server side YourAction.YourMethod(). Useful if you really do decide to change your server back end, or having naming conflicts between client side objects and server side objects. I personally use it so that I can name all of my Direct Action classes as "ObjectController" without having to call "ObjectController" on the clientside.
anyway, the previous post had a bug, so I'm updating the code in that post to fix the bug.
I have another version of the library now that will pass back the "enableBuffer" property on methods that I propose in this other thread.
I don't want to post it up yet because I don't want to muddy the waters between official ext and the ux I'm using for adding this feature to Ext.Direct. But if someone wants it, let me know. The change is pretty straight forward, just add a new property to DirectMethod called enable buffering, and make it render to the json, then use it when you use the DirectMethod attribute when needed.
I would also suggest adding "enableBuffer" at the Provider level so that it conforms with the Ext.Direct interface.
Direct methods not working with directFn & baseParams
I am using the Router 0.6 and can get it to work by calling a Direct method from my Javascript code, but when I try to use it to populate a combobox specifying a directFn and baseParams, request.Data in DirectProcessor.ProcessRequest ends up looking like this:
{object[1]}
[0]: {object}
I get the following error:
Error occurred while calling Direct method: Object of type 'System.Object' cannot be converted to type 'System.String'
It doesn't seem to matter what I put in the baseParams--I'd like to send more than one parameter. Can anyone help me?
I actually haven't tried the Direct Data store stuff yet, or even really looked at it but:
1. Can we see some code showing how you are calling it and
2. Can we see what is being sent to the server, via Firebug
Then maybe I'll have an idea. I should be getting to trying the data store stuff here in the next day or so.
Dave
It seems that with the directFn approach, it sends the parameters as a JSON object, so the problem can be reproduced by simply passing a JSON object to a Direct method. For example:
Code:
var config = { param1: 'test1', param2: 2 }; MyApp.MyDirectMethod(config, function(e, result) { alert(Ext.util.JSON.encode(result.result)); });
Code:
JsonConvert.DeserializeObject<DirectRequest>(json)
I just tried using durlabh's alternate Ext.Direct .Net implementation () which uses Microsoft's JavaScript serialization. It deserializes the JSON object into an object that can be cast into a Dictionary<string, object> type.
Ah, for anything that I need to send a JSON object to the server, I encode it first:
Code:
Ext.encode(myJSONObject);
In my current case, I am deserializing the string into a dictionary:
Code:
JsonConvert.DeserializeObject<Dictionary<string, string>>(FormData.ToString())
That said - this won't handle cases where the object contains objects, etc.
https://www.sencha.com/forum/showthread.php?68161-Ext.Direct-.NET-Router&p=363813&viewfull=1
sem_unlink()
Destroy a named semaphore
Synopsis:
#include <semaphore.h> int sem_unlink( const char * sem_name );
Since:
BlackBerry 10.0.0
Arguments:
- sem_name
- The name of the semaphore that you want to destroy.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The sem_unlink() function destroys the semaphore named sem_name. If other processes still have the semaphore open, destruction is deferred until all of them have closed it; calls to sem_open() made with the same name after sem_unlink() returns refer to a new semaphore.
Errors:
- EACCES
- You don't have permission to unlink the semaphore.
- ELOOP
- Too many levels of symbolic links or prefixes.
- ENOENT
- The semaphore sem_name doesn't exist.
- ENAMETOOLONG
- The sem_name argument is longer than (NAME_MAX - 8).
- ENOSYS
- The sem_unlink() function isn't implemented for the filesystem specified in sem_name.
Classification:
Last modified: 2014-06-24
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/s/sem_unlink.html
I'm learning C, but i have a long experience with higher level programming languages.
I was reading about header files so i was playing around with them, however I noticed that I could call a function from another file without #including it (but it's in the same directory), how is that possible ?! (I'm java programmer)
Is it the make file, linker that is configured that way or what ?
I use Dev-Cpp
We have two files, main.c and add.c. add.c defines a function add(int x, int y), and main.c calls it without #including anything.
There's a few different things going on here. First I'll go over how basic compilation of multiple files works.
If you have multiple files, the important thing is the difference between the declaration and definition of a function. The definition is probably what you are used to when defining functions: You write up the contents of the function, like
int square(int i) { return i*i; }
The declaration, on the other hand, lets you declare to the compiler that you know a function exists, but you don't tell the compiler what it is. For example, you could write
int square(int i);
And the compiler would expect that the function "square" is defined elsewhere.
Now, if you have two different files that you want to interoperate (for example, let's say that the function "square" is defined in add.c, and you want to call square(10) in main.c), you need to do both a definition and a declaration. First, you define square in add.c. Then, you declare it at the beginning of main.c. This let's the compiler know when it is compiling main.c that there is a function "square" which is defined elsewhere. Now, you need to compile both main.c and add.c into object files. You can do this by calling
gcc -c main.c
gcc -c add.c
This will produce the files main.o and add.o. They contain the compiled functions, but are not quite executable. The important thing to understand here is that main.o is "incomplete" in a sense. When compiling main.o, you told it that the function "square" exists, but the function "square" is not defined inside main.o. Thus main.o has a sort of "dangling reference" to the function "square". It won't compile into a full program unless you combine it with another .o (or a .so or .a) file which contains a definition of "square". If you just try to link main.o into a program, i.e.
gcc -o executable main.o
You will get an error, because the compiler will try to resolve the dangling reference to the function "square", but won't find any definition for it. However, if you include add.o when linking (linking is the process of resolving all these references to undefined functions while converting .o files to executables or .so files), then there won't be any problem. i.e.
gcc -o executable main.o add.o
So that's how to functionally use functions across C files, but stylistically, what I just showed you is "not the right way". The only reason I did it this way is because I think it will better help you understand what's going on, rather than relying on "#include magic". Now, you might have noticed before that things get a little messy if you have to redeclare every function you want to use at the top of main.c. This is why C programs often use helper files called "headers" which have a .h extension. The idea of a header is that it contains just the declarations of the functions, without their definitions. This way, in order to compile a program using functions defined in add.c, you need not manually declare every function you are using, nor need you #include the entire add.c file in your code. Instead, you can #include add.h, which simply contains the declarations of all the functions of add.c.
Now, a refresher on #include: #include simply copies the contents of one file directly into another. So, for example, the code
abc
#include "wtf.txt"
def
is exactly equivalent to
abc
hello world
def
assuming that wtf.txt contains the text "hello world".
So, if we put all the declarations of add.c in add.h (i.e.
int square(int i);
and then at the top of main.c, we write
#include "add.h"
This is functionally the same as if we had just manually declared the function "square" at the top of main.c.
So the general idea of using headers is that you can have a special file that automatically declares all the functions you need by just #including it.
However, headers also have one more common use. Let's suppose that main.c uses functions from 50 different files. The top of main.c would look like:
#include "add.h" #include "divide.h" #include "multiply.h" #include "eat-pie.h" ...
Instead, people often move all those #includes to the main.h header file, and just #include main.h from main.c. In this case, the header file serves two purposes. It declares the functions in main.c for use when included by other files, and it includes all of the dependencies of main.c when included from main.c. Using it this way also allows chains of dependencies. If you #include add.h, not only do you get the functions defined in add.c, but you also implicitly get any functions which add.c uses, and any functions they use, and so on.
Also, more subtly, #including a header file from its own .c file implicitly checks for errors you make. If, for example, you accidentally declared square as
double square(int i);
in add.h, you normally might not realize until you were linking that main.o is looking for one definition of square, and add.o is providing another, incompatible one. This will cause you to get errors when linking, so you won't realize the mistake until later in the build process. However, if you #include add.h from add.c, to the compiler, your file looks like
#include "add.h" int square(int i) { return i*i; }
which after processing the #include statement will look like
double square(int i);
int square(int i) { return i*i; }
Which the compiler will notice when compiling add.c, and tell you about. Effectively, including your own header in this way prevents you from falsely advertising to other files the type of the functions you are providing.
Why you can use a function without ever declaring it
As you have noticed, in some cases you can actually use a function without ever declaring it or #including any file which declares it. This is stupid, and everyone agrees that this is stupid. However, it is a legacy feature of the C programming language (and C compilers) that if you use a function without declaring it first, the compiler just assumes that it is a function returning type "int". So in effect, using a function implicitly declares that function as a function which returns "int" if it is not already declared. It's very strange behavior if you think about it, and the compiler should warn you if it sees you relying on it.
Header Guards
One other common practice is the use of "Header Guards". To explain header guards, let's look at a possible problem. Let's say that we have two files: herp.c, and derp.c, and they both want to use functions contained in each other. Following the above guidelines, you might have a herp.h with the line
#include "derp.h"
and a derp.h with the line
#include "herp.h"
Now, if you think about it, #include "derp.h" will be converted to the contents of derp.h, which in turn contains the line #include "herp.h", which will be converted to the contents of herp.h, and that contains... and so on, so the compiler will go on forever just expanding the includes. Similarly, if main.h #includes both herp.h and derp.h, and both herp.h and derp.h include add.h, we see that in main.h, we end up with two copies of add.h, one as a result of #including herp.h, and one as a result of including derp.h. So, the solution? A "Header guard", i.e. a piece of code which prevents any header from being #included twice. For add.h, for example, the normal way to do this is:
#ifndef ADD_H
#define ADD_H
int square(int i);
...
#endif
This piece of code is essentially telling the preprocessor (the part of the compiler which handles all of the "#XXX" statements) to check if "ADD_H" is already defined. If it isn't (ifndef) then it first defines "ADD_H" (in this context, ADD_H doesn't have to be defined as anything, it is just a boolean which is either defined or not), and then defines the rest of the contents of the header. However, if ADD_H is already defined, then #including this file will do nothing, because there is nothing outside of the #ifndef block. So the idea is that only the first time it is included in any given file will it actually add any text to that file. After that, #including it will not add any additional text to your file. ADD_H is just an arbitrary symbol you choose to keep track of whether add.h has been included yet. For every header, you use a different symbol to keep track of whether it has been included yet or not. For example, herp.h would probably use HERP_H instead of ADD_H. Using a "header guard" will fix any of the problems I listed above, where you have duplicate copies of a file included, or an infinite loop of #includes.
https://codedump.io/share/2XJHsGZQb1GO/1/calling-a-function-from-another-file-in-the-same-directory-in-c
Code. Collaborate. Organize.
No Limits. Try it Today.
First of all, what does the term "inline" mean?
Generally the inline term is used to instruct the compiler to insert the code of a function into the code of its caller at the point where the actual call is made. Such functions are called "inline functions". The benefit of inlining is that it reduces function-call overhead.
Now it's easier to guess what inline assembly is: just a set of assembly instructions written inline within the C code and introduced with the asm keyword.
Assembly language appears in two flavors: Intel Style & AT&T style. GNU C compiler i.e. GCC uses AT&T syntax and this is what we would use. Let us look at some of the major differences of this style as against the Intel Style.
If you are wondering how you can use GCC on Windows, you can just download Cygwin from.
- Register naming: registers are prefixed with %, so AT&T %eax, %cl correspond to Intel's eax, cl.
- Order of operands: the source operand comes first and the destination second, the opposite of Intel syntax. Intel's mov eax, edx (copy edx into eax) becomes mov %edx, %eax in AT&T syntax.
- Operand size: the mnemonic carries a suffix, b (byte), w (word) or l (longword, 32 bits), so the instruction above is properly written movl %edx, %eax.
- Immediate operands: these are prefixed with $, e.g. addl $5, %eax adds 5 to register %eax. The address of a static C variable is also written with $: movl $bar, %ebx puts the address of bar into %ebx, while movl bar, %ebx puts the contents of bar into %ebx.
- Memory operands: indirect addressing uses the form offset(base), e.g. movl 8(%ebp), %eax moves the value stored 8 bytes from the address in %ebp into %eax.
We can use either of the following formats for basic inline assembly.
asm("assembly code");
or
__asm__ ("assembly code");
Example:
asm("movl %ebx, %eax"); /* moves the contents of ebx register to eax */
__asm__("movb %ch, (%ebx)"); /* moves the byte from ch to the memory pointed by ebx */
If we have more than one assembly instruction, put a semicolon at the end of each instruction.
Please refer to the example below (available in basic_arithmetic.c in downloads).
#include <stdio.h>
int main() {
/* Add 10 and 20 and store result into register %eax */
__asm__ ( "movl $10, %eax;"
"movl $20, %ebx;"
"addl %ebx, %eax;"
);
/* Subtract 20 from 10 and store result into register %eax */
__asm__ ( "movl $10, %eax;"
"movl $20, %ebx;"
"subl %ebx, %eax;"
);
/* Multiply 10 and 20 and store result into register %eax */
__asm__ ( "movl $10, %eax;"
"movl $20, %ebx;"
"imull %ebx, %eax;"
);
return 0 ;
}
Compile it using "-g" option of GNU C compiler "gcc" to keep debugging information with the executable and then using GNU Debugger "gdb" to inspect the contents of CPU registers.
In extended assembly, we can also specify the operands. It allows us to specify the input registers, output registers and a list of clobbered registers.
asm ( "assembly code"
: output operands /* optional */
: input operands /* optional */
: list of clobbered registers /* optional */
);
If there are no output operands but there are input operands, we must place two consecutive colons surrounding the place where the output operands would go.
It is not mandatory to specify the list of clobbered registers to use, we can leave that to GCC and GCC’s optimization scheme do the needful.
asm ("movl %%eax, %0;" : "=r" ( val ));
In this example, the variable "val" is kept in a register, the value in register eax is copied onto that register, and the value of "val" is updated into the memory from this register.
When the "r" constraint is specified, gcc may keep the variable in any of the available General Purpose Registers. We can also specify the register names directly by using specific register constraints.
The register constraints are as follows :
+---+--------------------+
| r | Register(s) |
+---+--------------------+
| a | %eax, %ax, %al |
| b | %ebx, %bx, %bl |
| c | %ecx, %cx, %cl |
| d | %edx, %dx, %dl |
| S | %esi, %si |
| D | %edi, %di |
+---+--------------------+
int no = 100, val ;
asm ("movl %1, %%ebx;"
"movl %%ebx, %0;"
: "=r" ( val ) /* output */
: "r" ( no ) /* input */
: "%ebx" /* clobbered register */
);
In the above example, "val" is the output operand, referred to by %0, and "no" is the input operand, referred to by %1. "r" is a constraint on the operands, which tells GCC to use any register for storing them.
The output operand constraint should have the constraint modifier "=" to specify the output operand in write-only mode. There are two %'s prefixed to the register name, which helps GCC to distinguish between the operands and registers. Operands have a single % as prefix.
The clobbered register %ebx after the third colon informs GCC that the value of %ebx is to be modified inside "asm", so GCC won't use this register to store any other value.
int arg1, arg2, add ;
__asm__ ( "addl %%ebx, %%eax;"
: "=a" (add)
: "a" (arg1), "b" (arg2) );
Here "add" is the output operand referred to by register eax. And arg1 and arg2 are input operands referred to by registers eax and ebx respectively.
Let us see a complete example using extended inline assembly statements. It performs simple arithmetic operations on integer operands and displays the result (available as arithmetic.c in downloads).
#include <stdio.h>
int main() {
int arg1, arg2, add, sub, mul, quo, rem ;
printf( "Enter two integer numbers : " );
scanf( "%d%d", &arg1, &arg2 );
/* Perform Addition, Subtraction, Multiplication & Division */
__asm__ ( "addl %%ebx, %%eax;" : "=a" (add) : "a" (arg1) , "b" (arg2) );
__asm__ ( "subl %%ebx, %%eax;" : "=a" (sub) : "a" (arg1) , "b" (arg2) );
__asm__ ( "imull %%ebx, %%eax;" : "=a" (mul) : "a" (arg1) , "b" (arg2) );
__asm__ ( "movl $0x0, %%edx;"
"movl %2, %%eax;"
"movl %3, %%ebx;"
"idivl %%ebx;" : "=a" (quo), "=d" (rem) : "g" (arg1), "g" (arg2) );
printf( "%d + %d = %d\n", arg1, arg2, add );
printf( "%d - %d = %d\n", arg1, arg2, sub );
printf( "%d * %d = %d\n", arg1, arg2, mul );
printf( "%d / %d = %d\n", arg1, arg2, quo );
printf( "%d %% %d = %d\n", arg1, arg2, rem );
return 0 ;
}
If our assembly statement must execute where we put it (i.e., must not be moved out of a loop as an optimization), put the keyword "volatile" or "__volatile__" after "asm" or "__asm__" and before the ()s.
asm volatile ( "...;"
"...;" : ... );
__asm__ __volatile__ ( "...;"
"...;" : ... );
Refer to the following example, which computes the Greatest Common Divisor using the well-known Euclid's Algorithm (honoured as the first algorithm):

#include <stdio.h>

/* Compute gcd(a, b) with Euclid's algorithm: repeatedly replace
   (a, b) by (b, a mod b) until b becomes zero. */
int gcd( int a, int b ) {
    int result ;
    __asm__ __volatile__ ( "movl %1, %%eax;"
                           "movl %2, %%ebx;"
                           "1:;"
                           "cmpl $0, %%ebx;"
                           "je 2f;"
                           "xorl %%edx, %%edx;"
                           "idivl %%ebx;"
                           "movl %%ebx, %%eax;"
                           "movl %%edx, %%ebx;"
                           "jmp 1b;"
                           "2:;"
                           "movl %%eax, %0;"
                           : "=g" ( result )
                           : "g" ( a ), "g" ( b )
                           : "%eax", "%ebx", "%edx" ) ;
    return result ;
}

int main() {
    int arg1, arg2 ;
    printf( "Enter two integer numbers : " ) ;
    scanf( "%d%d", &arg1, &arg2 ) ;
    printf( "GCD of %d and %d is %d\n", arg1, arg2, gcd( arg1, arg2 ) ) ;
    return 0 ;
}
Here are some more examples which use FPU (Floating Point Unit) Instruction Set.
An example program to perform simple floating point arithmetic:
#include <stdio.h>
int main() {
float arg1, arg2, add, sub, mul, div ;
printf( "Enter two numbers : " );
scanf( "%f%f", &arg1, &arg2 );
/* Perform floating point Addition, Subtraction, Multiplication & Division */
__asm__ ( "fld %1;"
"fld %2;"
"fadd;"
"fstp %0;" : "=g" (add) : "g" (arg1), "g" (arg2) ) ;
__asm__ ( "fld %2;"
"fld %1;"
"fsub;"
"fstp %0;" : "=g" (sub) : "g" (arg1), "g" (arg2) ) ;
__asm__ ( "fld %1;"
"fld %2;"
"fmul;"
"fstp %0;" : "=g" (mul) : "g" (arg1), "g" (arg2) ) ;
__asm__ ( "fld %2;"
"fld %1;"
"fdiv;"
"fstp %0;" : "=g" (div) : "g" (arg1), "g" (arg2) ) ;
printf( "%f + %f = %f\n", arg1, arg2, add );
printf( "%f - %f = %f\n", arg1, arg2, sub );
printf( "%f * %f = %f\n", arg1, arg2, mul );
printf( "%f / %f = %f\n", arg1, arg2, div );
return 0 ;
}
Example program to compute trigonometrical functions like sin and cos:
#include <stdio.h>
float sinx( float degree ) {
float result, two_right_angles = 180.0f ;
/* Convert angle from degrees to radians and then calculate sin value */
__asm__ __volatile__ ( "fld %1;"
"fld %2;"
"fldpi;"
"fmul;"
"fdiv;"
"fsin;"
"fstp %0;" : "=g" (result) :
"g"(two_right_angles), "g" (degree)
) ;
return result ;
}
float cosx( float degree ) {
float result, two_right_angles = 180.0f, radians ;
/* Convert angle from degrees to radians and then calculate cos value */
__asm__ __volatile__ ( "fld %1;"
"fld %2;"
"fldpi;"
"fmul;"
"fdiv;"
"fstp %0;" : "=g" (radians) :
"g"(two_right_angles), "g" (degree)
) ;
__asm__ __volatile__ ( "fld %1;"
"fcos;"
"fstp %0;" : "=g" (result) : "g" (radians)
) ;
return result ;
}
float square_root( float val ) {
float result ;
__asm__ __volatile__ ( "fld %1;"
"fsqrt;"
"fstp %0;" : "=g" (result) : "g" (val)
) ;
return result ;
}
int main() {
float theta ;
printf( "Enter theta in degrees : " ) ;
scanf( "%f", &theta ) ;
printf( "sinx(%f) = %f\n", theta, sinx( theta ) );
printf( "cosx(%f) = %f\n", theta, cosx( theta ) );
printf( "square_root(%f) = %f\n", theta, square_root( theta ) ) ;
return 0 ;
}
GCC uses AT&T style assembly statements, and we can use the asm keyword to specify basic as well as extended assembly instructions. Using inline assembly can reduce the number of instructions the processor has to execute. In our GCD example, the inline-assembly implementation needs far fewer instructions for the calculation than normal C code using Euclid's Algorithm.
You can also visit Eduzine© - electronic technology magazine of EduJini, the company that I work with.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/Articles/15971/Using-Inline-Assembly-in-C-C?msg=2888886&PageFlow=FixedWidth