text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
On 08 November 2005 14:44, Malcolm Wallace wrote: > Whilst we are wishing for language features, I wonder if anyone > else would be interested in being able to export/import such cpp > macros using the normal Haskell module mechanism? At the moment, > it is ugly that you need to either #include or #define your macros > separately in every module where you use them. But if the cpp phase > were integrated more tightly into the compiler, it could be possible > to treat macros almost like ordinary function definitions. *shudder* I disagree most strongly. CPP is a well-defined preprocessing stage, it is easily understood by virtue of being separate from Haskell. I don't even want to start thinking about what happens if you merge the CPP and Haskell namespaces... ugh! I guess we have a different idea of what's "ugly" :) Cheers, Simon
http://www.haskell.org/pipermail/glasgow-haskell-users/2005-November/009265.html
CC-MAIN-2014-42
refinedweb
143
59.94
Today on FAIC, a detective story. Suppose you have this class in assembly Alpha.DLL: namespace Alpha { public class Bravo { public void Charlie() { System.Console.WriteLine("Alpha Bravo Charlie"); } } } Pretty straightforward. You call this method from assembly Delta.EXE like this: namespace Delta { public interface IFoxtrot { void Charlie(); } public class Echo : Alpha.Bravo, IFoxtrot { } public class Program { static void Main() { IFoxtrot foxtrot = new Echo(); foxtrot.Charlie(); } } } Notice that class Echo does not re-implement Charlie, but that’s fine; the base class implementation suffices to meet the requirements of interface IFoxtrot. If we run this code and put a breakpoint on Charlie the call stack window in Visual Studio says: Alpha.dll!Alpha.Bravo.Charlie() [External Code] Delta.exe!Delta.Program.Main() [External Code] It’s unsurprising that there is “external code” inserted that calls Main for you, but what’s with the “external code” between Main and Charlie? That should just be a straight call, right? Something strange is going on here. It is possible for debugging purposes to programmatically inspect the call stack — that is, to write a program that prints out its own call stack, rather than simply examining it in a debugger. (You should not do this for purposes other than debugging because the jitter is allowed to rearrange the call stack as it sees fit. Inlined methods do not appear on the call stack, tail recursive methods do not appear on the call stack, and so on. Use the caller information attributes in C# 5 if you want to write a method that knows what its caller was.) So let’s change the implementation of Charlie to print out the caller. (Alternatively, as commenter leppie points out, Visual Studio has a “show external code on the call stack” feature that we could use as well, but for our purposes here let’s just print it out.) public void Charlie() { System.Console.WriteLine("Alpha Bravo Charlie"); System.Console.WriteLine((new System.Diagnostics.StackFrame(1).GetMethod().Name)); } And now if we run it we get the output Alpha Bravo Charlie Delta.IFoxtrot.Charlie What the heck is going on here? What’s going on is: the CLR requires that any method which implements an interface method be a virtual method, but the only candidate, Alpha.Bravo.Charlie, is non-virtual. Therefore when generating class Echo, the C# compiler actually generates the code as though you’d written: public class Echo : Alpha.Bravo, IFoxtrot { void Delta.IFoxtrot.Charlie() { base.Charlie(); } } The explicit interface implementation is a virtual method, satisfying the CLR. The compiler-generated method is marked as not having any source code, so the debugger writes External code in the call stack. Mystery solved! Extra credit mystery: Why did I have to specify that Echo and Bravo be in different assemblies? Leave your guesses in the comments. Special thanks to Stack Overflow users Timwi and Jeff Moser for the inspiration for this blog post. Next time on FAIC: Just why are these adventures so fabulous anyway? It’s not *much* of a challenge if you skip the extra credit question, go and read the SO question and answer, and then attempt it. True that. So it shows [External Code] and if you turn on the option to ‘Show External Code’ will it show Delta.IFoxtrot.Charlie() ? Hah, I never even thought of that. Yes, of course it does. Silly me. no wonder, with your skills you must have been typing these programs in notepad 😛 totally forgetting VS 🙂 In the case someone wants to learn more about Show External Code like I did: And now I know about Caller Info Attributes and a problem I once had has now been solved. I guess that in the same assembly (better,same compilation unit?) the compiler would have detected the non-virtual method and complained? …or maybe just adjusted Alpha.Charlie to be “virtual”, and compile in a “normal” way, without the stub? I would bet on the first though.. Your second guess is the right one; the C# compiler silently makes Alpha.Bravo.Charlie both virtual and sealed. Because the C# compiler is smart enough to mark Bravo.Charlie() as virtual for you when it detects that Echo (a subclass of Bravo in the same assembly) needs it to be virtual in order to implement IFoxtrot. The C# compiler would have no opportunity to make this optimization if Bravo is in a different assembly than Echo, because Bravo’s assembly would have to be compiled before the code in Echo can be analyzed. And I do assume it’s a performance optimization the compiler is making when the two classes are in the same assembly; that way, calls to Echo.Charlie() entail one virtual dispatch rather than an indirection plus a virtual dispatch. On the other hand, calls to Bravo.Charlie() would then always pay the penalty of a virtual dispatch, counter-balancing the performance gain for Echo.Charlie(). So I know that haven’t quite figured it all out yet. You are 90% of the way there. Alpha.Bravo.Charlie is marked as both virtual *and* sealed, which means that the jitter can eliminate the virtual indirection should it choose to. If the jitter knows that a virtual method will only ever have exactly one implementation, it can dispatch directly to it. Whether it does this optimization or not, I don’t know. “If the jitter knows that a virtual method will only ever have exactly one implementation, it can dispatch directly to it. Whether it does this optimization or not, I don’t know.” Oh, come on !! “The jitter” is not a single entity. Sure Eric could find out if a particular version of a particular IL to “machine code” compiler, in a particular scenario, does or does not make the optimization ( we’re talking about x86, x64, Itanium, Silverlight, Windows Mobile, Windows Phone, Windows RT and let us not forget those which are not even maintained by Microsoft like Mono for Mac or Mono for iOS, the latter being somehow forced to have everything prejitted ahead of time – still an IL to machine code compilation, etc, etc X all their versions X many situations and cases ). But in a situation like this it is better to simply say that “it depends on many many things which are outside of my control” or the shorter “I don’t know”. I recommend you read this older “FAIC” post: This immediately reminded me of Chris Brumme’s decade-old blog post on the same topic, which also answers the extra credit question: “the CLR requires that any method which implements an interface method be a virtual method” Yes, but why? I suppose it could be related to an implementation detail of the dynamic dispatch on interfaces… @Michael, thanks for the link, Chris Brumme says something relevant to my.” Ok, then I guess my question does not have a simple answer 🙂 Let’s just say that’s how they chose to do it.
https://ericlippert.com/2013/05/23/the-mystery-of-the-inserted-method/
CC-MAIN-2019-39
refinedweb
1,163
63.09
Brokenness Hides Itself By jwadams on Sep 28, 2005 When engineers get together and talk, one of the things they like to bring out and share is war stories; tales of debugging daring-do and the amazing brokenness that can be found in the process. I recently went through an experience that makes good war story material, and I thought I'd share it. A couple weeks ago, there were multiple reports of svc.configd(1M) failing repeatedly with one of: svc.configd: Fatal error: invalid integer "10" in field "id" svc.configd: Fatal error: invalid integer in database Since I'm one of the main developers of svc.configd(1M), I started to investigate. I first had the people hitting it send me their repositories, but they all checked out as having no problems. The problem was only being seen on prototype Niagara machines and some Netra T1s; the first reproducible machine I got console access to was a Niagara box. Figuring out what happened Unfortunately, the box was running old firmware, which significantly restrained its usability; I spent more time fighting the machine than working on tracking down the problem. I finally boot neted the machine, mounted the root filesystem, and added a line: sulg::sysinit:/sbin/sulogin </dev/console 2<>/dev/console >&2: > uu_strtouint::bp > :c mdb: stop at libuutil.so.1`uu_strtouint mdb: target stopped at: uu_strtouint: save %sp, -0x68, %sp > ::step[1] mdb: target stopped at: uu_strtouint+4: ld [%fp + 0x60], %l6 > $C 2 feefb968 libuutil.so`uu_strtouint+4(1cb4e4, feefba9c) feefb9d0 string_to_id+0x24(1cb4e4, feefba9c) feefba38 fill_child_callback+0x20(feefbbe4, 2) feefbaa0 sqlite_exec+0xd8(13de08, 2) feefbb18 backend_run+0x74(89b48, 169848) feefbb80 scope_fill_children+0x38(5e9f00, 1000) ... . So I proceeded to investigate uu_strtouint(). The first step was to see how the function was failing; there are a number of different ways to get to uu_set_error(), which sets up libuutil's equivalent of errno. A simple breakpoint led to the following code segment: 269 if (strtoint(s, &val, base, 0) == -1) 270 return (-1); 271 272 if (val < min) { 273 uu_set_error(UU_ERROR_UNDERFLOW); 274 return (-1); 275 } else if (val > max) { 276 uu_set_error(UU_ERROR_OVERFLOW); 277 return (-1); 278 : > strtoint::dis ! grep call | sed 's/libuutil.so.1`//g' strtoint+0xc: call +8 <strtoint+0x14> strtoint+0x204: call +0x12a30 <PLT:__udiv64> strtoint+0x2b8: call +0x12988 <PLT:__umul64> strtoint+0x404: call -0xd30 <uu_set_error> strtoint+0x414: call -0xd40 <uu_set_error> strtoint+0x424: call -0xd50 <uu_set_error> strtoint+0x440: call -0xd6c <uu_set_error> strtoint+0x450: call -0xd7c <uu_set_error> strtoint+0x460: call -0xd8c <uu_set: 103 multmax = (uint64_t)UINT64_MAX / (uint64_t)base; 104 105 for (c = \*++s; c != '\\0'; c = \*++s) { ... 116 if (val > multmax) 117 overflow = 1; 118 119 val \*= base; 120 if ((uint64_t)UINT64_MAX - val < (uint64_t)i) 121 overflow = 1; 122 123 val += i; 124 }The division always occurs, so I looked at the multiply routine first; disassembling it showed the following suspicious section: > __umul64::dis ! sed 's/libc.so.1`//g' ... __umul64+0x38: cmp %l7, 0 __umul64+0x3c: call +0xc95e4 <PLT:.umul> __umul64+0x40: mov %i3, %i0 __umul64+0x44: mov %l6, %i1 .. code is: usr/src/lib/libc/sparc/crt/mul64.c: 36 extern unsigned long long __umul32x32to64(unsigned, unsigned); ... 70 unsigned long long 71 __umul64(unsigned long long i, unsigned long long j) 72 { ... 81 if (i1) 82 result = __umul32x32to64(i1, j1); 88 return (result); 89 } usr/src/lib/libc/sparc/crt/muldiv64.il: 29 .inline __umul32x32to64,8 30 call .umul,2 31 nop 32 mov %o0, %o2 33 mov %o1, %o0 34 mov %o2, %o1 35 .endFrom? I first wrote a small C program, to test: % cat > math_test.c <<EOF #include <stdio.h> int main(int argc, char \*argv[]) { unsigned long long x, y, z; x = 10; y = 1; z = x \* y; printf("%llx\\n", z); return (0); } EOF % cc -o math_test math_test.c % ./math_test a % truss -t \\!all -u '::__\*mul64' ./math_test /1@1: -> libc_psr:__mul64(0x0, 0xa, 0x0, 0x1) /1@1: <- libc_psr:__mul64() = 0 a % ... (moving over to niagara machine) ... % truss -t \\!all -u '::__\*mul64' ./math_test /1@1: -> libc:__mul64(0x0, 0xa, 0x0, 0x1) /1@1: <- libc:__mul64() = 1 10000000a . The fact that these weren't included in the sun4v libc_psr is an oversight, but #2 means that it wouldn't have mattered if they did. The Netra T1's running into this problem are explained by the fact that there are a set of missing /platform/\*/lib symlinks for the following platforms: SUNW,Ultra-1-Engine SUNW,UltraAX-MP SUNW,UltraAX-e SUNW,UltraAX-e2 SUNW,UltraSPARC-IIi-Engine SUNW,UltraSPARC-IIi-cEngine SUNW,UltraSPARCengine_CP-20 SUNW,UltraSPARCengine_CP-40 SUNW,UltraSPARCengine_CP-60 SUNW,UltraSPARCengine_CP-80Which final question is "Why now?". What changed to make this a problem? Four days before the first reported incident, 6316914 was putback, which switched the build from the Studio 8 to the Studio 10 compilers. Because of the libc_psr masking, no-one noticed the problem until they put the bits on the (much rarer) platforms with the bug. The Fix To fix this, you simply move the __{u,}{mul,div}64 functions from libc_psr back into libc, using the v8plus versions that were in libc_psr. libc's assembly files are already being compiled in v8plus mode due to atomic_ops(3C), so it just required shuffling around some code, removing stuff from makefiles, and deleting the old, out of date code. This was done under bugid 6324631, integrated in the same build as the compiler switch, so only a limited number of OS/Net developers were effected. Life was all better. Well, almost. The follow-on In testing my fix, I did a full build, bfued a machine, and just dumped fixed binaries on other machines. The one thing I didn't test was BFUing from broken bits to fixed bits. And, of course, there was an unforeseen problem (bugid 6327152). To understand what went wrong, I'm going to have to do some background on how BFU works. bfu is a power-tool whose essential job is to dump a full set of binaries over a running system, even if the new bits are incompatible with the ones running the system. To do this, it copies binaries, libraries, and the dynamic linker into a set of subdirectories of /tmp: /tmp/bfubin, /tmp/bfulib, and /tmp/bl. It then uses a tool called "bfuld" to re-write the "interepreter" information for the binaries in /tmp/bfubin, to point at the copied ld.so.1(1). It then sets LD_LIBRARY_PATH in the environment, to re-direct any executed programs to the copied libraries, and sets PATH=/tmp/bfubin. This gives BFU a protected environment to run in. The problem is that auxiliary filters (like libc_psr) were not disabled, so programs running in the BFU environment were picking up the libc_psr from /platform. Once the \*new\* libc_psr was extracted, programs were no longer protected from the broken __\*mul64() routines. Since things like scanf(3C) use __mul64 internally, this caused breakage all over the place, most noticeably in cpio(1). The fix for this is reasonably simple; set LD_NOAUXFLTR=1 in the environment to prevent auxiliary filters from being used,[3] make a copy of libc_psr.so.1 into /tmp/bfulib, and use LD_PRELOAD=/tmp/bfulib/libc_psr.so.1 to override the bad libc functions. The latter part of this can be removed once we're sure no broken libcs are running around. Conclusion I hope you've enjoyed this. The bug ended up being surprisingly subtle (as many compiler bugs are), but luckily the fix was relatively simple. The Law of Unintended Consequences applies, as always. Footnotes: [1] ::steping over the "save" instruction is a standard SPARC debugging trick; it makes the arguments to the function and the stack trace correct. [2] Position-Independent Code, which is how shared libraries are compiled. [3] Ironically, if we had done this \*before\* the compiler switch was done, BFU would have immediately failed when run on the broken bits, and the whole problem would have been noticed much more quickly. [ Technorati OpenSolaris Solaris ]
https://blogs.oracle.com/jwadams/entry/brokenness_hides_itself
CC-MAIN-2015-22
refinedweb
1,335
62.88
"Radioactive decay" is the process by which an unstable atom loses energy and emits ionizing particles - what is commonly refered to as radiation. Exposure to radiation can be dangerous and is very important to measure to ensure that one is not exposed to too terribly much of it. The radioactivity of a material decreases over time, as the material decays. A radioactive decay curve describes this decay. The x-axis measures time, and the y-axis measures the amount of activity produced by the radioactive sample. 'Activity' is defined as the rate at which the nuclei within the sample undergo transititions - put simply, this measures how much radiation is emitted at any one point in time. The measurement of activity is called the Becquerel (Bq). Here is a sample radioactive decay curve: (Click on the pictures to view full-sized images) Now here's the problem we'd like to solve. Let's say Sarina has moved into a new apartment. Unbeknownst to her, there is a sample of Cobalt-60 inside one of the walls of the apartment. Initially that sample had 10 MBq of activity, but she moves in after the sample has been there for 5 years. She lives in the apartment for 6 years, then leaves. How much radiation was she exposed to? We can actually figure this out using the radioactive decay curve from above. What we want to know is her total radiation exposure from year 5 to year 11. Total radiation exposure corresponds to the area between the two green lines at time = 5 and time = 11, and under the blue radioactive decay curve. This should make intuitive sense - if the x axis measures time, and the y axis measures activity, then the area under the curve measures (time * activity) = MBq*years, or, approximately the total number of MBq Sarina was exposed to in her time in the radioactive apartment (technically, this result is the number of neutrons she was exposed to, but this gets a bit complicated, so we'll ignore it. Sorry, physicists!). So far, so good. But, how do we calculate this? Unlike a simple shape - say a square, or a circle - we have no easy way to tell what the area under this curve is. However, we have learned a technique that can help us here - approximation. Let's use an approximation algorithm to estimate the area under this curve! We'll do so by first splitting up the area into equally-sized rectangles (in this case, six of them, one rectangle per year): Once we've done that, we can figure out the area of each rectangle pretty easily. Recall that the area of a rectangle is found by multiplying the height of the rectangle by its width. The height of this rectangle: is the value of the curve at 5.0. If the curve is described by a function, f, we can obtain the value of the curve by asking for f(5.0). f(5.0) = 5.181 The width of the rectangle is 1.0. So the area of this single rectangle is 1.0*5.181 = 5.181. To approximate how much radiation Sarina was exposed to, we next calculate the area of each successive rectangle and then sum up the areas of each rectangle to get the total. When we do this, we find that Sarina was exposed to nearly 23 MBq of radition (technically, her apartment was bombarded by 23e6 * 3.154e6 = 7.25e13 neutrons, for those interested...). Whether or not this will kill Sarina depends exactly on the type of radiation she was exposed to (see this link which discusses more about the ways of measuring radiation). Either way, she should probably ask her landlord for a substantial refund. In this problem, you are asked to find the amount of radiation a person is exposed to during some period of time by completing the following function: def radiationExposure(start, stop, step): ''' Computes and returns the amount of radiation exposed to between the start and stop times. Calls the function f (defined for you in the grading script) to obtain the value of the function at any point. start: integer, the time at which exposure begins stop: integer, the time at which exposure ends step: float, the width of each rectangle. You can assume that the step size will always partition the space evenly. returns: float, the amount of radiation exposed to between start and stop times. ''' To complete this function you'll need to know what the value of the radioactive decay curve is at various points. There is a function f that will be defined for you that you can call from within your function that describes the radioactive decay curve for the problem. Could you pls write the code for this question ???
http://forums.devshed.com/python-programming-11/help-941168.html
CC-MAIN-2016-44
refinedweb
803
60.75
Safari Books Online is a digital library providing on-demand subscription access to thousands of learning resources. The weighted mean is a type of average that has a weight for every observation being averaged. It is used in this book to make numerical predictions based on similarity scores. The weighted mean has the formula shown in Figure B-3, where x1...xn are the observations and w1...wn are the weights. A simple implementation of this formula that takes a list of values and weights is given here: def weightedmean(x,w): num=sum([x[i]*w[i] for i in range(len(w))]) den=sum([w[i] for i in range(len(w))]) return num/den In Chapter 2, weighted means are used to predict how much you’ll enjoy a movie. This is done by calculating an average rating from other people, weighted by how similar their tastes are to yours. In Chapter 8, weighted means are used to predict prices.
http://my.safaribooksonline.com/book/web-development/9780596529321/mathematical-formulas/weighted_mean
CC-MAIN-2014-10
refinedweb
163
63.8
ii This PDF was made available free of charge via: (not for resale) Martin Evening This PDF was made available free of charge via: (not for resale) vi introduction This PDF was made available free of charge via: (not for resale) Acknowledgments The turnaround time for this PDF supplement has had to be quite swift. I would like to thank Rod Wynne-Powell for providing his technical proof editing services once again. Thank you to Jeff Schewe for his help advice on how to get the content published on Lightroom-news.com (where you can leave comments if you wish) and Pam Pfiffner at Peachpit for her help with the PDF publishing and proofreading. Martin Evening, June 2007. vii tip Dont forget that you can use the Contents section here to navigate Contents Lightroom book About the Lightroom 1.1 update supplement. . . . . . . . . . . . . vi Acknowledgments. . . . . . . . . . . . . . . . . . . . . . . . . . . . vii . . . . . . . . . . . . . 353 viii introduction This PDF was made available free of charge via: (not for resale) . . . . . . . . . . . . . . 385 Contents ix Toolbar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415 Compare view display. . . . . . . . . . . . . . . . . . . . . . . . 415 Painter tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416 Sort functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416 Sort order button. . . . . . . . . . . . . . . . . . . . . . . . . . . . 419 Filmstrip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419 Rating filters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419 Breadcrumbs text. . . . . . . . . . . . . . . . . . . . . . . . . . . . 420 Color label filtering. . . . . . . . . . . . . . . . . . . . . . . . . . 420 Modified filters and selections via the filmstrip. . . 421 Virtual copy and Master copy filters. . . . . . . . . . . . 421 3 Develop module . . . . . . . . . . . . . . . . . . . . . . 423 Contents xi 4 Lightroom preferences . . . . . . 485 xii This PDF was made available free of charge via: (not for resale) . . . . . . . . . . . 505 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513 xiii xiv Contents With the release of the 1.1 update, the Lightroom program has undergone a number of fundamental changes. The programs performance has been tuned to run that little bit faster and more smoothly and the Optimize option can also help improve Lightrooms. Lets start by looking at the File, Edit and Help menus, which are common to all of the Lightroom modules (I have not included the Window menu here because this is the one menu list that hasnt changed in 1.1). Page references Lightroom database 308311 File Menu Figure 1.1 The File menu, which is common to all Lightroom 1.1 modules.). This change in terminology from Library to Catalog now provides a clearer distinction between it and the Library module. Plus there is now better support for multiple catalogs. The top three items in the File menu allow you to create a new catalog (New Catalog), open an existing catalog (Open Catalog), or choose: Open Recent and select a recently opened catalog from the fly-out menu. Note that whenever you choose to create a new catalog or you choose to load an existing catalog, you have to restart Lightroom after you do so in order to launch the program using the new catalog. This is because in Lightroom 1.1 you can only open a single catalog at a time you wont be able to open several catalogs at once just yet. Exporting catalogs A lot of people may find they will be fine using just the one catalog for all their images. But lets say you are sharing a computer running Lightroom with other people; each user can maintain their own separate catalog to reference and manage the images they are interested in working with. 354 This PDF was made available free of charge via: (not for resale) note With Lightroom 1.1, the Develop Snapshots will now also get stored in the XMP metadata. This means that you can now export the Snapshots associated with the photos when you carry out an Export. Figure 1.2 The Export as Catalog dialog will appear whenever you select photos to export. This includes the option to export the selected photographs only, or all the photos in the current library grid/filmstrip view. 355 Page references Lightroom previews data Thumbnail processing 312314 If you deselect the Export. 356 This PDF was made available free of charge via: (not for resale) If you check both this and the Export negative files and Include available previews options, you will then end up with an exported catalog that looks like the folder shown in Figure 1.4, where the catalog folder contains an .lrcat catalog file, a Previews.lrdata file that contains the thumbnails and preview image data and a sub folder that contains the master negatives. Figure 1.4 This shows a folder view of an exported catalog along with the Images folder and previews file. Importing catalogs Now lets imagine you have transferred the exported catalog to another computer. You can then go to the Import from Catalog menu item, select the exported .lrcat file and open it. You will then see the Import from Catalog dialog shown in Figure 1.5, where you can choose to import the images by copying them to a new location and add them to a current Lightroom catalog. Alternatively, you can choose to import the files by referencing them in their present location. 357. 358 This PDF was made available free of charge via: (not for res. 359 Export as Catalog... Because I only wanted to export things like the ratings, color label and keyword metadata edits, I didnt need to check the Include available previews option. After all, as I was about to export back to the main library, there was no need for me to include the previews again. 360 This PDF was made available free of charge via: (not for resale) 8. Here is the master catalog on the main computer after merging all the metadata edits from the laptop exported catalog. As you can see, the color labels and ratings have now updated. 361 362 This PDF was made available free of charge via: (not for resale) this submenu item. But it is one of those changes that once you have learned where the new settings are, you will soon become accustomed to the change. 363 364 This PDF was made available free of charge via: (not for resale) Page references Preview cache 311314 365 Page references Sharing metadata Metadata settings 316317 366 This PDF was made available free of charge via: (not for resale) the Library module) or Photo Save Metadata to file (in the Develop module), or use the new shortcut: CommandS (Mac), ControlS (PC), or, when the Automatically write changes into XMP option is switched on. OK, lets rating Creative suite 3, in Lightroom. However, where you have edited a non-raw file such as a JPEG, TIFF or PSD image using the Develop settings in Lightroom, and the develop settings have been written to the files XMP space, Bridge 2 will now consider such files to be raw files and will open them up via Camera Raw rather than open them directly in Photoshop. Thats what I mean by mixed blessings. If you want Lightroom to retain the ability to modify the XMP space of non-raw files for data such as file ratings, keywords and labels etc. but exclude storing the develop settings, you should uncheck the Write develop settings to XMP option. If you do this, your Lightroom develop settings for non-raw files will only get written to the catalog and they wont get exported when you choose Save Metadata. But raw and DNG files will continue to be modified as before. On the plus side, you will never be faced with the confusion of seeing your non-raw images such as JPEGs unexpectedly default to open via Camera Raw when you try to open them up in 367 Page references Sharing metadata 316317 Automatically write changes into XMP Write develop settings to XMP for JPG, TIFF and PSD DNG Save Metadata command 368 This PDF was made available free of charge via: (not for resale) In the case of JPEG, TIFF and PSD files, the XMP data will be saved inside the file header itself. However, because you are also saving the Lightroom develop settings, these files will default to opening in Bridge 2 (as part of the Creative Suite 3) always using the Adobe Camera Raw dialog. Automatically write changes into XMP Write develop settings to XMP for JPG, TIFF and PSD Save Metadata command DNG Now if you were to open a JPEG, TIFF or PSD image that had been edited in Lightroom without using the Save Metadata command, such files will open from Bridge 2 directly into Photoshop and will not open via the Camera Raw dialog. as in step 1 where the non-raw files will default to open via Camera Raw, which is perhaps not what the customer wanted! 369 Automatically write changes into XMP Write develop settings to XMP for JPG, TIFF and PSD Save Metadata command DNG 3. Now lets look at what happens when the Write develop settings to XMP for JPEG, TIFF and PSD is disabled. 370, all except for the develop settings. Overall this is a useful configuration for preserving the informational metadata in non-raw files that have been modified via Lightroom. But the develop settings wont be transferred and hence the appearance of such images will not always match between how they look in Lightroom, and how they look in other programs. This PDF was made available free of charge via: (not for resale) note You can access the latest version of Camera Raw for Photoshop and Bridge by going to the Adobe website: photoshop/cameraraw.html. 371 Figure 1.10 To keep the Camera Raw edits in sync with Lightroom, you need to make sure that the Camera Raw settings are always saved to the .xmp files. Figure 1.11 You can use Synchronize Folder to run a quick scan for metadata updates. 372 This PDF was made available free of charge via: (not for resale) 373 374 This PDF was made available free of charge via: (not for resale) Page references If you check the Metadata Browser panel in the Library module, notice that there are now four new add categories: Aperture, Shutter Speed, ISO Speed Rating and Label. These additions are pretty self-evident in that they provide extra ways to filter the images that are displayed in the content area. In the case of Label this is exactly the same as clicking on a color label swatch in the Filters section of the Filmstrip. In order to make the Metadata Browser panel more manageable you can use the Catalog Settings to customize which items are visible in this panel. Importing photos 3853 Importing I have already covered Import from Catalog (see page 357), which leaves the other import options that are now divided into: Import Photos from Disk and Import Photos from Device This means that you can choose from the File menu whether to import photos from a disk location or from a camera card mounted on the desktop. But note that if you have a card mounted on the computer and you click on the Import button in the Library module, youll still have a choice of whether to import from a card or from the disk. Figure 1.12 If a camera card is mounted on the desktop, the Import dialog still offers a choice of import options. And if you are importing from a camera card there is a new option to Eject card after importing, which will do just that after an import has been successful. The advantage of this is that you wont have to manually eject the card via the Finder/Explorer. You can just unplug the disk after the import has been completed. Some people prefer to delete the camera files first before ejecting and then reformat the card in the camera before shooting more images. Thats how I like to do things because often on a busy shoot it can get confusing to put a card back into the camera and if you see there are still images on it, not always know if it is safe to reformat or not. 375 As you can see, the redesigned Import dialog (see Figure 1.13) is more compact and the Dont re-import suspected duplicates option is actually a rewording of the previous Ignore suspected duplicates option. The new wording is now clearer I think. 376 This PDF was made available free of charge via: (not for resale) For example, lets say you have two computers that share the same controlled vocabulary, i.e. they both share the same keyword hierarchy structure. If you were to export a photo from one computer and import it into the other, then the Write Keywords as Lightroom Hierarchy option wont make any difference because the keyword hierarchy for the individual keywords will be recognized anyway. Please note that I am talking about a normal export and import here, not about the new export/import catalog command. But if this option is unchecked and the second computer does not share the same information, the keywords will otherwise be output as a flat View list without a Lightroom-recognized hierarchy. Page references Exporting photos Keyword Hierarchy 34, 107108 96 377 Page references Filtering images 7679 Filters The File Filters submenu duplicates the menu items that already appear in the Library modules Library menu. The reason why the menu list appears twice in Lightroom is so that the Filtering can always be accessible via the File menu, when working in any of the other modules. In the example shown in Figure 1.15, you can see how the Filters submenu allows you to filter by Rating, Flag, Color Label or Copy Status (which is a new filtering option found in 1.1). Note. Figure 1.15 You can now use the File Filters Filter by Copy Status Virtual Copies to filter the catalog to display photos that are virtual copies only. But it is probably simpler to use the button circled here in the Filmstrip to achieve the same filtered outcome. 378 This PDF was made available free of charge via: (not for resale) Edit menu Figure 1.16 The Edit menu that is common to all Lightroom 1.1 modules. 379 2. Here you can see that all the one star images have now been selected (these are the photos with the light gray cell frame borders). I then went to the Edit menu again and chose Select by Color Label Add to Selection Yellow. 380 This PDF was made available free of charge via: (not for resale) 3. By using Add to Selection I was able to add all the yellow color label images to the one star-rated images. Of course, some of the yellow label images were already selected by the one star rating. But I could have used the Select by Color Label menu to add the yellow label images to a red yellow selection and then used the Select by Rating Intersect with Selection menu to select just the one star rated photos that had a red or yellow label. The Edit Select by menu options can be used in this way to create any number of selection rules. This can be very useful when managing large folders of photos. 381 Page references Help menu 35 Help menu Figure 1.17 The Help menu that is common to all Lightroom 1.1 modules. This screen shot shows the Library module Help menu, but the other modules look similar except they will have help items and help shortcut relevant to the module you are in. And last, the Help menu which has two new additions. Figure 1.17 shows the Help menu found in the Library module. If you select Library Help, you can access the off-line Lightroom 1.1 user guide where you can browse the help guide options for the current module (see Figure 1.18 below). Figure 1.18 The Lightroom 1.1 off-line Help guide. This guide is installed with the program and will display the contents in a Web browser format. 382 This PDF was made available free of charge via: (not for resale) 383 384 This PDF was made available free of charge via: (not for resale) Library Menu Filters As I mentioned before, the Filter items highlighted here are simply a duplicate of the File Filters submenu and the Filters section found in the Filmstrip. Figure 2.2 Here is a view of the Library menu showing how to filter all the picked photos and below that a detail view of the Filmstrip, with the Picks filter (circled) made active, which is probably easier to use than navigating the Library menu. 386 This PDF was made available free of charge via: (not for resale) Page references Filtering images 7677 Subfolder filtering The Include Photos from Subitems filter is also new. This lets you determine whether to include or exclude the photos contained in any subfolders, thereby allowing you to hide photos that are in any subfolders. In the Folders panel view shown in Figure 2.3, I have selected a folder called Model castings that contains 159 photos of which 147 photos are contained in its two subfolders. This means there are 12 photos floating around in the Model castings folder that are not assigned to either of these two subfolder items. If I were to deselect Include Photos from Subitems in the Library menu, any filtered searches I carry out will apply just to these 12 photos and exclude the other 147. Figure 2.3 In this Folders panel view, the Model castings folder has been highlighted. It contains two subfolders. You will note that the photo count in these two folders does not add up to 159. This is because there are 12 photos located in the Model castings folder that are not contained in any subfolder. As mentioned in the text, Include Photos from Subitems will allow you to filter all the photos in the Model castings folder. When it is switched off you can filter the root level folder contents only and exclude all subfolders. 387 Synchronize Folder Toward the bottom of the Library menu is the Synchronize Folder This is really an update of the previous Check for Missing Photos and Folders option that you could only access via the contextual menu (i.e. you had to know to right mouse-click on a folder to access this option). Now it is up there in the Library menu where it will be far more discoverable (as well as remaining part of the contextual menu). But the new Synchronize Folder command is a lot more effective because it allows you to really keep your folders updated with any changes that may have been made outside Lightroom. What this means is that should you happen to change the contents of a folder that has already been imported into Lightroom, the Synchronize Folder command will get Lightroom to update all the information about the folder contents such as looking for images that may have been added or removed from that folder. The Synchronize Folder command can also scan to see if the metadata has been updated outside Lightroom. Let me give you an example of how this would work. Suppose you import a folder of images into Lightroom either by copying them to the Lightroom catalog folder or by choosing import by reference to a current disk location. Now lets suppose that you added some new images to that folder at the system level. If you did this when using Lightroom 1.0 or one of the earlier betas, there would be no easy way to tell Lightroom to automatically check to add these new additions to the catalog. But in Lightroom 1.1, if you select this particular folder and choose Library Synchronize Folder, Lightroom 1.1 will check for new photos and give you the option to Import these and update the catalog information about what is in that folder. 388 This PDF was made available free of charge via: (not for resale) Page references Import Photos 3840 If you check the Import New Photos option in the Synchronize Folder dialog, you can choose to simply import and update the catalog, or you can choose Show import dialog before importing, which will open up the Import Photos dialog shown in Figure 2.5. The default setting for Synchronize Folder... will automatically import the files to the same folder they are in currently without showing the Import dialog and without modifying the filename, develop settings, metadata or keywords. Perhaps the only reason for choosing to show the Import Photos dialog when synchronizing a folder would be if you wish to adjust any of these settings as you import and update the Lightroom catalog. Note that if you have removed any photos from the folder outside Lightroom, Synchronize Folder will also remove those files from the catalog, thereby keeping the Lightroom catalog completely updated for all new additions as well as any photos that are no longer located in the original folder. Scan for Metadata updates works identically to the Read metadata from files option in the Library module Metadata menu (see page 399). For example, if you edit the metadata in any photos outside Lightroom, such as in Bridge or some other program where the metadata edits you make are saved back to the files XMP header space or saved to an XMP sidecar file, you can use Synchronize Folder to sync any metadata changes to the Lightroom catalog. 389 Page references Previews 312314 Updating previews The Library Previews submenu commands will now apply to all currently filtered images regardless of any selection that you have in place. So for example, if you had 37 photos filtered in the content area, but only one photo was selected, and you chose Library Previews Discard 1:1 Previews, the warning dialog shown in Figure 2.6 will appear: 390 This PDF was made available free of charge via: (not for resale) Photo Menu Page references Virtual Copies 80, 218 The new Go to Folder in Library menu item will always take you to the Folder in the folder panel for whichever photo is currently the most selected in the Library module. For example, if you have photos in the content area that are based on a Collection or a Keyword selection, the photos you are looking at could have originated from any number of separate folders. By using Go to Folder in Library you can quickly jump to the folders for any particular photo. tip If you right-click on a photo in the Library grid, you can access the contextual menu which will also allow you to choose Go to Folder in Library, or Show in Finder. The Create Virtual Copy command is one that you will want to use a lot of the time. Creating virtual copies of photos is a great way to experiment with creating different versions of photographs, such as black-and-white versions or different croppings. So it is about time that it was given its own keyboard shortcut: Command + (Mac), Control + (PC). Once you have created one or more virtual copies you can then choose the new Set Copy as Master command to make any virtual copy version of an image become the new master version (and make the old master version a virtual copy). 391 Page references Removing images 82 Now if you hit the delete key you will be faced with a slightly changed dialog (shown in Figure 2.8) that offers (as before) the option to simply remove the current photo(s) from the catalog, or the option to delete it completely from the disk. Note here that selecting Delete from Disk will remove the photos from the catalog and then send them to the system trash/recycle bin. The handy shortcut: CommandD (Mac) or D key (PC) will select the Delete from Disk option. Although the warning message says that this process cannot be undone, it is not in fact a complete deletion. The photos you delete can still be accessed via the trash/recycle bin. It will only be when you choose to empty the trash/recycle bin that the images will truly be deleted forever. So if you are aware of this difference and wish to avoid having to go through the above dialog each time you hit delete to remove photos from the catalog, you can instead now use the Photo Remove Photos from Catalog command, or the Option + Delete (Mac), Alt + Delete (PC) shortcut. 392 This PDF was made available free of charge via: (not for resale) Metadata Menu Page references Keyword stamp tool 97 393 2. Once the painter tool has been made active you can scroll down its tool options to choose what kind of setting you want to apply with it. For example, the list here is currently set to work with Settings and when selected, another list next to it will let you choose from a list of saved Develop presets. If you select Rotation, the menu will change to allow you to select a specific rotation or to flip an image. If Metadata is selected, the menu list will let you choose from pre-saved metadata templates. And likewise, if Rating, Pick or Label are selected, you are also offered a choice of settings to work with. Here I selected Keywords. 394 This PDF was made available free of charge via: (not for resale) 3. With Keywords selected, you can enter the keyword you want to apply in the empty field next to the Painter tool menu. As you enter a keyword, Lightroom will auto-complete the text as you type from previous or recently used keywords. In this step I wanted to apply the keyword Details. We are now ready to put the Painter tool to use. In the screen shot shown here I have deliberately made the Painter tool bigger than it normally appears in order to make it stand out more. Basically, you just drag with the Painter tool anywhere in the content area. Dragging with the tool over any photo will apply the currently active setting to it. In this example I used the Painter tool to paint the Details keyword to specific photos. When you have finished using the Painter tool, click in the empty area of the toolbar where the Painter tool normally lives, to exit working with it. Note You can still use mouse clicks to apply settings with the Painter tool, but dragging is a more effective way to work with it when applying settings across multiple images in the library content area. The Painter tool also has two modes of operation. When you first use it to apply a setting you will see the Painter tool in spray can mode using the icon shown on the left. But when you hover over an image that has just had the Painter tool treatment note that the icon changes to show an eraser ( 395 Page references Metadata panel 8389 Figure 2.10 The left shows how the metadata information displayed in Lightroom 1.0 when more than one photo was selected and the photos all had different metadata information. The right shows how the Metadata panel in Lightroom 1.1 will display the information for the most selected (target) photo when Show Metadata for Target Photo Only is selected. 396 This PDF was made available free of charge via: (not for resale) Figure 2.11 An example of the Show Metadata for Target Photo Only function in use. Note that although all the photos have been selected and the Titles are different, we can now read the information for the most selected photo. in the Show Metadata for Target Photo Only mode, the one thing you do need to be aware of is that you are beware that although this menu item can prove useful (for the reasons I have described), you probably wont want to have this enabled all the time. 397 Page references Export XMP metadata 316317 398 This PDF was made available free of charge via: (not for resale) with the easy to remember CommandS (Mac), ControlS (PC) keyboard shortcut. In practice Id recommended leaving the Automatically write changes into XMP Catalog setting switched off. When you are working in the Library or Develop module, use the CommandS (Mac), ControlS (PC) shortcut every time you wish to export and update the metadata to a photo or a group of selected photos. In the long run this will allow you to work much quicker than having the auto update option running in the background. Page references DNG format Convert to DNG 39, 49, 106 Also in the Metadata menu is The Read Metadata from Files menu option. This replaces the previous Metadata XMP Import XMP Metadata from File menu item which basically means you can use the Read Metadata from Files menu command to ask Lightroom to explicitly read in the metadata from a particular photo or group of selected photos. To summarize, Save Metadata to Files and Read Metadata to Files allow you to update the metadata to and from the photo files while working in Lightroom. These provide manual controls for updating the metadata in the files. But remember that the new Synchronize Folder command discussed on page 388, will also allow you to achieve the same thing. 399 Figure 2.12 Here is an overview of the Library module interface where I have highlighted all the main interface changes in yellow. Figure 2.13 On a Mac, you can command + click on the title bar to view the catalog file directory path. 400 This PDF was made available free of charge via: (not for resale) Find panel Page references On the face of it, the Find dialog (shown in Figure 2.14) has undergone a cosmetic makeover. The top section is better laid out so that users are more clearly aware that they can select different Text criteria to search by, other than Anywhere. The Rule section, which is now placed beneath, is also more prominent. Lets recap how the Find panel works. To carry out a text search you check the box marked Text and select what kind of text you want to search, such as by Filename. You would then combine this with a rule such as Contains (where there is a partial match), Contains All (for an exact match), Doesnt Contain (to exclude files that match the text entered below), Starts With (obviously anything that begins with the phrase entered) and Ends With (for anything that ends with the phrase entered). For example, you could use the Text search section to search specifically by Filename using a Contains rule and enter the phrase you are looking for in the text field below. I use this search method quite a lot whenever clients make their final image selections and send me a list of filenames. All I need to do is search by entering the last four digits. Find panel 99102 401 Date searches Instead of using the term Capture Time to search by date, the term Date is now used (see Figure 2.15). This change now more accurately reflects what the EXIF Date Time Original field actually means since this field does not strictly always refer to capture date. It could refer to the date a file was scanned or the date that a new Photoshop document was created. Ill be discussing later the new changes to the way date capture and date time information is displayed in the Metadata panel. If you check the popup menu shown here note that there are some new date search settings. You can now also search by: Today, This Week, Yesterday and Last Week. Figure 2.15 The Find panel showing all the date search options. 402 This PDF was made available free of charge via: (not for resale) Page references Metadata Browser 98 403 Page references Quick Develop panel 120124 404 White Balance is clearer (rather than being labeled WB) and the Tone control section is neatly separated with the Auto Tone button at the top and the reset button relabelled Reset All, which makes it a lot clearer that clicking on it will reset all the develop settings that have been applied to a photo and not just those that have been applied via Quick Develop. As a result, use this button with caution. Down at the bottom a new Clarity develop adjustment has been added to Quick Develop. If you hold down the Option key (Mac), or Alt key (PC), the Clarity adjustment switches to say Sharpening and the Vibrance control below it switches to say Saturation. When you do this, the Sharpening control in Quick Develop allows you to adjust the amount of sharpening. All these new features the Clarity adjustment and new sharpening controls will be discussed in more detail in Chapter 3. This PDF was made available free of charge via: (not for resale) Keywording panel Page references Over in the Keywording panel (see Figure 2.20) the Keyword Tags section has a new menu next to it. The default view will show Enter Keywords. This can be used, as before, to enter new keywords and edit existing ones. Or, you can select the Keywords & Parents option to view the keywords only without editing them. Keywording panel 9597 Keyword Sets behave the same as before. You can click on the Keyword Set menu to load one of the keyword presets that ship with Lightroom such as: Outdoor Photography, Portrait Photography or Wedding Photography. Note If keywords are removed using an external program, the keywords will not appear removed when you view the photo in Lightroom. 405 Page references Metadata panel Metadata panel 8487 Figure 2.21 Here are the two new view modes for the Metadata panel: Large Caption (left) and Location (right). 406 This PDF was made available free of charge via: (not for resale) Note Just when you think you have covered every angle, you find something else thats new! Ian Lyons has written an article for computer-darkroom.com in which he explains how Lightroom 1.1 can now read audio sidecar files that are associated with a captured image and list these in the Metadata panel. If an audio sidecar file is present and you click on the action arrow next to the metadata item, Lightroom will playback the audio file on your computer. For the full story, go to this link:. com/lr_11/lr-11-gps.htm. Figure 2.22 The Metadata panel with all the new items highlighted in yellow. 407 Page references Sidecar files 308, 317 Virtual Copies 80, 218 Figure 2.23 Here is a view of a master photo with three virtual copies. The copy names will now be shown in the Metadata. 408 This PDF was made available free of charge via: (not for resale) Page references 68 316317 In order to keep track of which files have been updated and which have not, Lightroom now offers a few visual clues. If you go to the View menu and open the Library View Options dialog (shown in Figure 2.24), there is a checkbox in the Cell Icons section called: Unsaved Metadata. When this is checked you will see a metadata status icon appear in the top right corner of the library grid cells (see Figure 2.25) which appears whenever there is what Lightroom calls a metadata status conflict. In other words, youve updated the metadata for this photo and the metadata information embedded in the photos XMP space is now out of sync with the current Lightroom catalog file. If the metadata written to a photos metadata is now out of date this isnt really such a bad situation to be in. After all, the 409 metadata has been saved, its just that the metadata update is so far stored in the catalog file only (which you are backing up regularly, right?). The appearance of this icon is more of a reminder that should you wish to update the metadata to the file as well, it is now time to do so and Lightroom is highlighting for you the files that are in need of metadata saving. Lightroom will even show an interim icon ( ) as it scans a photo, checking to see if the metadata is in need of an update. Figure 2.25 When the Unsaved metadata icon is enabled in the Library View grid options the icon in the top right corner will indicate the metadata status has changed. A downward arrow indicates that Lightroom settings need to be saved to the file. An upward arrow indicates that settings have been edited externally and need to be read. Figure 2.26 This dialog will appear to confirm if you wish to save changes to disk. 410 This PDF was made available free of charge via: (not for resale) Page references Show photos with this label will filter the catalog images to reveal all those with color labels that match the selected photo. If a photo has been cropped in Lightroom, the Cropped item will appear in the Metadata panel (showing the crop dimensions in pixels). Click on the action arrow next to it to go directly to the Develop module in Crop mode. Show photos taken with this ISO will filter the catalog images to reveal all those with ISO settings that match the selected photo. Color labels Cropping 79 130134 88 Date formats 89 Date representation Note Next to Date Time Original is the Go to Date action arrow. This will go to the Date section of the Metadata Browser panel and filter the catalog to show all photos with matching dates. Figure 2.27 In the case of camera capture files that have not been converted to DNG, the Date Time Original, Date Time Digitized and Date Time entries will all agree. Note The Date field action arrow no longer takes you to the Edit Capture Time dialog. To access this you will need to choose Edit Capture Time... from the Metadata menu. Figure 2.28. Figure 2.29 Similarly, if I was to create an Edit copy as a TIFF, PSD or JPEG version of the original, the Date Time will reflect that this version of the master image was created at a later date. Figure 2.30 And if you import a photo that was originally created as a new document in Photoshop or was originally a scanned image, only the Date Time field will be displayed showing the date that the file was first created. 411 Page references Web gallery e-mail links The E-Mail field now has an action arrow next to it. Another Lightroom user can send an email to the creator by simply clicking on the action arrow (see Figure 2.31). Lightroom automatically creates a new mail message (as shown in Figure 2.32) via whatever mail program you are using on your computer. If the program is not currently running, Lightroom launches it automatically. Figure 2.31 In this view of the Metadata panel you can see the action arrows next to the E-Mail and Website items. Figure 2.32 When you click on the E-Mail action arrow, this will automatically launch your default e-mail client program and compose a new email using the email address in the Metadata. Similarly, if you click on the action arrow next to the Website field youll go directly to the creators chosen website link. 412 This PDF was made available free of charge via: (not for resale) note The Copyright section also now has an action arrow next to the Copyright Info URL which when clicked takes you directly to the website. Above that there is also a Copyright Status field which is new to Lightroom 1.1 (but is already included in Bridge CS3), where you can set the copyright status as being Unknown, Copyrighted or Public Domain. You can edit the copyright status via the Metadata panel, or you could go to the Metadata panel Presets menu to choose Edit Presets and create a new custom metadata preset via the Metadata Presets dialog shown in Figure 2.33. All images you apply this preset to (such as when importing) will be marked with the desired copyright status. 413 note image. So here is how it works. In Figure 2.34 you can see a library grid view of images taken at a model casting. I tend to shoot these castings with the camera tethered to the computer and update the Title field with each models name and model agency as I go along. In the screen shot shown here you can see that the Title field is currently active and I have typed in the models details. Instead of hitting Enter to commit this data entry, I can use the Command key (Mac), Control key . Figure 2.34 If you use the Command key (Mac), Control key (PC) plus the left right arrow key to navigate between photos, highlighted fields in the Metadata panel will remain targeted and you are ready to edit the same metadata field for the next photo. 414 This PDF was made available free of charge via: (not for resale) Toolbar Page references There are a few small items worth noting in the toolbar, such as changes to the Compare view behavior, the new Painter tool and the sort ordering. Toolbar Compare view 11, 60 72 Figure 2.35 The new toolbar in interface for the Library module. 415 Page references Painter tool 97 Sorting images 105 79 The Painter tool replaces the previous Keyword Stamper tool. The Painter tool can be used the same way as the Keyword Stamper. You can use it as an easy way to repeat applying a keyword to images in the content area, except the Painter tool now allows you to go much further. You can read more on pages 393395 about how this new feature works. Sort functions The Sort menu in the toolbar now resolves some of the possible contradictions in the way color labels are identified in Bridge and how they are identified in Lightroom. Instead of having a single sort option of sort by Color Labels, there are now two options: Sort by Label Text and sort by Label Color. And the reason for this is as follows: 1. In Lightroom 1.0 and 1.1, the default color label set uses the following text descriptions alongside each label: Red, Yellow, Green, Blue, Purple (to access the dialog shown here, go to the Library module Metadata menu and then go to the Color Label Set menu). OK, this is not a particularly imaginative approach, but the label text that is used here neatly matches the label text descriptions that were used in Bridge 1 (as included with the Creative Suite 2). Notice in the Lightroom 1.1 dialog shown here that it says: If you wish to maintain compatibility with labels in Adobe Bridge, use the same names in both applications. So far, so good. Lightroom 1.0 and 1.1 are both compatible with Bridge 1.0 when using the default settings in both programs. 416 This PDF was made available free of charge via: (not for resale) 417 If you edit a photos color label setting in Bridge and then in Lightroom using the Library module Metadata Read Metadata from File command, a similar conflict will occur. But instead of showing a white label, Lightroom will display no color labels in the grid or filmstrip views. If you go to the Metadata panel on the other hand, you will see the label text description that was applied in Bridge next to the Color Label item in the list. I would say that Lightroom handles the conflict situation better, but how can you use the label criteria that was applied in Bridge and make use of it in Lightroom? Well this brings us neatly back to the Lightroom 1.1 update and why there are now two new sort options for color labels. The sort by Label Color option allows you to sort photos by color labels that have been applied in Lightroom. The sort by Label Text option allows you to sort photos that have had text labels applied in Lightroom as well as letting you to sort photos where the labels have been applied via Bridge (because Lightroom is only able to read the label text part correctly). 418 This PDF was made available free of charge via: (not for resale) Page references The sort order buttons shown in Figure 2.37 now make a lot more sense. Even though I have been using Lightroom on a daily basis I could still never work out the old stairs going up/going down icon! This new one makes it much easier to work out whether the sort order is ascending or descending. Sorting images Another useful feature is the way the sort order is now numerically sensitive. This means that Lightroom will reorder the following number sequence correctly: 1,2,3,4,5,6,7,8,9,10,11,12,13 Previously, Lightroom would have reordered the numbers like this: 1,10,11,12,13,2,3,4,5,6,7,8,9. I think most people probably did not even notice there was a problem here because they were using their camera or Lightroom to name their files and Lightroom would always rename using zeros to fill in the empty numbers before a number sequence. It was more often a problem where someone had perhaps made a whole lot of Edit copies and gone beyond Edit10. Anyway, this is now no longer a problem. Filmstrip filtering 105 7679 Filmstrip Rating filters Nothing too much has changed here except to mention that the rating filter section interface is more obvious. Instead of little dots, we now have grayed out stars and this makes it easier to see that this is where you click to activate the rating filters. The rating menu uses an icon button to represent whether the filtering rule is: Rating is greater than or equal to, Rating is less than or equal to or Rating is equal to. Again, this is more clearly worded than before and will hopefully make things easier for newcomers to understand. 419 Page references Breadcrumbs text Filmstrip filtering 7679 Figure 2.39 The breadcrumbs text will highlight the number of photos selected. Figure 2.40 If you Optionclick (Mac), Altclick (PC) on a color label swatch in the filmstrip, you can now make inverted color label filter selections. In the color label filter swatch section of the filmstrip you can now Optionclick (Mac) or Altclick (PC) on a color swatch to make an inverted color swatch selection. To be more precise, if you Option/ Alt click on a color swatch, this will select all photos that have a color label assigned except for that color. The inverted swatch selection will exclude photos that have no color label. But remember, if you go to the File menu the Filters Filter by Color Label submenu includes options to filter photos by No Label, to select all those photos that have no label status. There is also an Other Label option that will allow you to filter photos that have a label status not completely recognized by Lightroom. In other words, choosing File Filters Filter by Color Label Other Label will allow you to filter photos that have had their color labels edited in Bridge but the label text descriptions are not currently synchronized with those used 420 This PDF was made available free of charge via: (not for resale) by Lightroom right now (see pages 416418 for the reasons why Lightroom and Bridge sometimes have different ideas about what these color labels mean). Page references Virtual Copies 80, 218 Figure 2.41 The Filmstrip also includes filter Virtual Copy and filter by Master Copy buttons. 421 422 This PDF was made available free of charge via: (not for resale) Develop module Develop Menu Exporting and importing presets The Develop menu now looks rather slimmed down, which is because most of the menu items have now been moved over to the new Settings menu for the Develop module. But the first new item we have here is New Preset Folder This menu item takes you to the new Folder dialog where you can create a new Preset folder to be used for storing Develop module preset settings. Figure 3.3 The Presets folder showing the newly created Black & White Presets folder. 424 This PDF was made available free of charge via: (not for resale) If you take a look at the Presets panel in the Develop module after installing the 1.1 update, notice how presets are initially segregated into Lightroom Presets and User Presets. This folder separation offers the advantage of making it easier for you to manage lots of Develop presets and group them in ways that are more meaningful or easier to manage. In the Figure 3.3 example, I created a new folder called Black & White Presets to store all my black and white conversion settings. You can use preset folders to store groups of presets any way you like, although you cant create subfolders of presets yet. Page references Develop Presets 223224 Figure 3.4 This view shows the contextual menu options for the Presets panel. But what you can do is to use the contextual menu shown in Figure 3.4 to access Export or Import develop settings. The contextual menu shown here can normally be accessed by right-mouse clicking on a folder inside the panel (although Macintosh users who are not using a two-button mouse will need to hold down the Control key and click with the mouse to see this menu). When you select either of these menu items, you will be taken directly to the Finder/Explorer browser window that points directly to the Lightroom Develop Presets folder. This menu addition is great because it now makes it much easier for you to share develop presets settings with other Lightroom users. For starters you may wish to go to Richard Earneys Inside-lightroom. com website where you can access lots of different develop presets for Lightroom. No need to fuss about looking for the Develop Presets folder on the system. Just use the Import command in this menu to locate and add these to the Presets panel in the Develop module. 425 Page references Saving a preset 224 1. One way you might use this feature would be to create a new folder called Camera Presets and adjust the Develop settings for different ISO settings shot with that camera. Save these settings as new presets, checking just the items shown here (note that the New Develop Preset dialog now contains a Folder menu where you can select which folder to save a new preset to). 426 This PDF was made available free of charge via: (not for resale) 427 Page references Virtual copies Edit in Photoshop 218 Photo menu 224225 Edit externally The Edit in Adobe Photoshop and Edit in Other Application menu items will open the dialog shown in Figure 3.6, where the Edit items are listed in a different order. More importantly we now have Copy File Options, which is enabled when either the Edit a Copy with Lightroom Adjustments or the Edit a Copy option is selected. If you then click on the disclosure triangle to reveal the Copy File options, you can adjust the File format, Color Space, Bit Depth and Compression options. If you refer to Chapter 4 of this PDF you can read how it is possible to configure the default copy file settings in the External Editing preferences. The Copy File options here simply allow you to override these default settings at the time a copy is being made. Figure 3.6 When you choose to edit a photo externally, there are now some new Copy File options. 428 This PDF was made available free of charge via: (not for resale) Page references Deleting photos 82 Saving Metadata See page 398399 for the low down on: Save Metadata to File, Read Metadata from File and Update DNG Preview & Metadata commands. 429 Settings Menu The Settings menu is new to the Develop module. But it is basically now a new menu location in which to contain many of the menu items that were previously located in the Develop module Develop menu. 430 This PDF was made available free of charge via: (not for resale) 2. I selected the image to the left to make this the most selected image and then chose Match Total Exposures from the Develop module Settings menu. In this screen shot I reselected the original photo and you can see how the exposure setting is much improved and the exposure values for the other photos are also more even now. The Adobe Photoshop lightroom book : 1.1 update 431 Page references Before and After settings 208211 With this view mode you have the option of previewing a selected image to see the before and after versions appear side by side. 432 This PDF was made available free of charge via: (not for resale) 2. In Lightroom 1.1 you now have the facility to Swap Before and After Settings. If you use this menu command from the Settings menu, or use the keyboard shortcut: Command+Option+Shift+up arrow (Mac), Control+Alt+Shift+ up arrow (PC), you can switch the before and after settings. This is useful if you reach a point in the develop editing where you like the initial improvements you made and you then want to make further tweaks but wish to compare these with the current view. In a situation like this, you simply select the Swap Before and After Settings command and then carry on editing the photo. Tip You can load a Snapshot or a History state into the Before view by right mouse-clicking (you can also Control-click on Macs) on the History state or Snapshot in the panel list. This will pop a contextual menu allowing you to load the history state or Snapshot to the Before view. 433 Page references Zoom options View Menu 23, 6970 Progressive zooms There are several ways that you can control the zoom settings in the Library and Develop modules. For example, if you go to the Navigator panel you can use the fly-out menu to set a custom zoom magnification for the close-up loupe view. If you make the Zoom slider option active in the toolbar you can use the slider shown in Figure 3.12 to adjust the zoom level at any time magnifying the photo using the same step values as shown in the Navigator panel. Figure 3.12 There is also a zoom slider for the Develop module toolbar where you can use the slider to set the zoom level to the same zoom settings shown in Figure 3.11. 434 This PDF was made available free of charge via: (not for resale) The new Zoom In Some and Zoom Out Some commands will allow you to zoom in or out using the same incremental steps as offered by the toolbar zoom slider. You can use Command+Shift+= (Mac), Control+Shift+= (PC) to zoom in and Command+Shift+minus (Mac), Control+Shift+minus (PC) to zoom out. These shortcuts will also work when operating in the Library module. The Auto Show mode only makes the tool overlays visible when the cursor is rolled over the content area. In other words, the crop guides, Remove Red Eye circles or Remove Spots circles will disappear from view when you roll the mouse cursor outside the image area such as up to the top panel menu. In Figure 3.14 you can see how the Remove Spots tool circles are visible. Shown here is the area that is being healed (the circle with the thicker border) and the area that is being sampled from (the circle with the thinner border). We see these circles highlighted more clearly to emphasize which is currently the most active circle and the arrow reinforces the relationship between the two, indicating which is the source and which is the destination (for more about the new look spotting tools, refer to the spotting tools section on pages 447456). Just below the active, highlighted circles you can see a circle indicating an area that has also been healed (the faint gray circle) where only the healed area is shown, because this circle is not currently active. 435 Page references Remove Spots tool 212213 If you select the Always Show menu option, the tool overlays behavior matches that found in Lightroom 1.0 in which they always remain visible. If you want to hide the tool overlays, select Never Show from the menu. When this menu option is selected the overlays remain hidden, even when you roll the mouse cursor over the image. But as soon as you start working with a tool, the tool overlay behavior automatically reverts to the Auto Show mode; and if you look at the Tool Overlay menu, you will see this option is highlighted. Another way to work with this new tool overlay show/hide feature is to use the associated keyboard shortcut: Command+ShiftH (Mac), Control+ShiftH (PC) to toggle between showing and hiding the tool overlays. In practice, I find it simpler also to just use the H key to toggle between showing and hiding the tool overlays. Later in this chapter we shall be looking at the Remove Spots Tool and Remove Red Eye tools in more detail. Figure 3.14 Here is an example of the Remove Spots tool in use with the tool overlay visible. 436 This PDF was made available free of charge via: (not for resale) Page references The crop guide options have been extended in this 1.1 update so that you can choose from six different types of overlay in the Crop Guide Overlay menu. Cropping 130134 I have listed all the different crop guide overlays below, starting with the Grid overlay. Note that regardless of whichever crop guide you choose, the grid always appears when you rotate the crop by dragging with the cursor outside the crop bounding box. Grid Jeff Schewe 437 Thirds Jeff Schewe Diagonal Jeff Schewe 438 This PDF was made available free of charge via: (not for resale) Triangle Jeff Schewe Golden Ratio Jeff Schewe 439 Golden Spiral Jeff Schewe Jeff Schewe Figure 3.22 You can use the Shift+O shortcut to switch the orientation of a crop. 440 This PDF was made available free of charge via: (not for resale) Cancelling a crop You can now use the Escape key to revert to a previously applied setting made during a crop session. Lets say that the picture shown in Figures 3.163.22 had been cropped slightly on the left. If you were to alter the crop by adjusting the crop ratio or crop angle and then hit the Escape key, you will always be taken back to the original crop setting. If, on the other hand, you adjusted the crop, exited the crop mode for this photo, started editing photos in another folder and returned later to this picture, the new crop setting will become the one that Lightroom reverted back to when you hit Escape. The crop cancel command is worked out on a per-session basis. Figure 3.23 In case you havent noticed in the previous screen shots, the Develop module toolbar has a new crop lock icon. This icon is now colored. But the distinction between when the crop aspect ratio is locked and when it is unlocked is a lot more subtle than in the previous version of Lightroom. Notice also that the Aspect Ratio menu that is next to it is now more clearly labeled. 441 Clarity slider I believe Jeff Schewe campaigned hard to get this particular feature included into Adobe Camera Raw, and here it is now in Lightroom 1.1. As Jeff himself will tell you, Clarity is a hybrid based on two separate contrast enhancing techniques. One is a local contrast enhancement technique, devised by Thomas Knoll, using a low amount and high radius setting in the Photoshop Unsharp Mask filter. The other is a midtone contrast enhancement Photoshop technique that was originally devised by Mac Holbert of Nash Editions. Those who have bought my most recent book, Adobe Photoshop CS3 for Photographers, can read there the steps Mac used in Photoshop to create this effect. The Photoshop instructions are admittedly quite complex. However, Clarity is now available as a simple one shot slider control in the Basic panel of the Develop module. This PDF was made available free of charge via: (not for resale) 2. In this next screen shot you can see how the pumpkins looked after adjusting the Clarity slider. In this example I have. Tip I usually aim to add a Clarity value of about 10 and no more than that. However, a number of Lightroom users have been complaining about the limited output sharpening in Lightroom. So you might want to consider giving certain photographs a clarity boost just prior to printing. The way I suggest you do this is to create a virtual copy of the master photo and mark this as a print only copy (using the Copy Name field in 443 Page references White Balance 139141, 146 1. Click on the White Balance tool to undock it. The tool allows you to take a white balance measurement from anywhere on the photo. 444 This PDF was made available free of charge via: (not for resale) 445 446 This PDF was made available free of charge via: (not for resale) Spotting tools Page references Remove Spots tool 212213 Mouse-down on the sample circle and drag to reposition the circle. Jeff Schewe Figure 3.24 This figure shows a combined series of snapshots taken of the Remove Spots tool in action to illustrate the different ways you can use this tool. 447 Clone or Heal The options here are the same as they were previously. In Clone mode, the Remove Spots tool copies pixels using a feathered circle edge. In Heal mode the Remove Spots tool copies pixels and blends them around the inner edge of the circle. You can also use the Clone/Heal buttons in the toolbar to switch the spotting mode for a Remove Spots circle. Another important thing to be aware of with the Heal mode is that if you click with the Remove Spots tool in Heal mode, rather than drag to set the sample point, the Remove Spots tool behaves like the spot healing brush in Photoshop. That is to say, Lightroom will automatically select the best point to sample from. This behavior is well worth noting when using the Synchronize Settings (see below) to synchronize your spotting work. Ill be coming on to this shortly. Spot Size This used to be called Cursor Size, but its still pretty clear that you can adjust the size of the Remove Spot cursor circle by adjusting the slider. You can do this before you apply the tool or you can use the slider to readjust the size of a selected circle. You can also use the square bracket keys to adjust the spot size of the cursor before you use it to create a new spot. Use the left bracket ([) to make the spot size smaller and the right bracket (]) to make the spot size bigger. Click only Just click with the Remove Spots tool to remove a mark or blemish. In this respect you could say that the Remove Spots tool is able to work a bit like the spot healing brush that is found in Photoshop and Photoshop Elements. When you click with the Remove Spot tool in Lightroom, it automatically places the sample circle for you and use a certain amount of built-in intelligence to choose a suitable point to sample from. 448 This PDF was made available free of charge via: (not for resale) 449 Synchronized spotting Tip if you have made a selection of images via the Filmstrip (or in the Library Grid view) you can also use the Command+ShiftS (Mac), Control+ShiftS (PC) shortcut to open the Synchronize Settings dialog. You can quite easily synchronize the spotting work done to a single photo in Lightroom with other photos. All you have to do is to make a selection of images via the Filmstrip. Make sure the photo that has had all the spotting work done to it, is the one that is the most selected or target photo and then click on the Sync button (see Figure 3.25). This opens the Synchronize Settings dialog shown in Figure 3.26. If you click the Check None button, check the Spot Removal checkbox and then click the Synchronize button, Lightroom will synchronize the spot removal settings across all the selected images. Figure 3.26 The Synchronize Settings dialog with Spot Removal checked only. 450 This PDF was made available free of charge via: (not for resale) Page references Synchronize settings 125, 220221 Figure 3.27 The Adobe Camera Raw 4.1 dialog that is available for Adobe Photoshop CS3 and Bridge 2.0. 451 Page references Remove Red Eye tool 214 Before the 1.1 update, the Lightroom cursor was in the shape of a cross; You marquee dragged with the cursor to define the area you wanted to treat and Lightroom calculated where to apply the red eye correction within that area. What you ended up with was a rectangle overlay that would shrink to fit around the pupil and automatically correct the red eye. Figure 3.28 The new Remove Red Eye tool cursor design. The new Lightroom 1.1 Remove Red Eye tool cursor is shown here in Figure 3.28. And as you can see, it now looks a lot different. The way you use the tool is to target the center of the pupil using the cross hair in the middle and drag outwards to draw an ellipse that defines the area you wish to correct. You dont have to be particularly accurate. In fact it is quite interesting to watch how this tool works when you lazily drag to include an area that is a lot bigger than the area you define with the Remove Red Eye tool cursor. It is quite magical the way Lightroom knows precisely which area to correct. The cursor will shrink to create an ellipse overlay representing the area that has been targeted for the red eye correction. 452 This PDF was made available free of charge via: (not for resale) 453 tip You can adjust the size of the cursor by using the square bracket keys. Use the left bracket ([) to make the cursor size smaller and the right bracket (]) to make the cursor size bigger. To be honest, the cursor size doesnt always make much difference because big or small, once you click with the tool you can drag the cursor to define the area you wish to affect. The cursor size is probably more relevant if you are using the Red Eye tool to click on the pupils to correct them rather than dragging. But as I say, the tool always seems to do such a great job anyway of locating the area that needs to be corrected! 454 2. After I released the mouse, a Remove Red Eye tool ellipse overlay shrank to fit the area around the eye. As I just mentioned, you dont really have to be particularly accurate with the way you define the eye pupils. In this screen shot you can see that I applied a red eye correction to the eye on the right. Notice here how the first ellipse overlay has a thinner border and the current ellipse overlay is thicker, indicating that this one is active. When a red eye ellipse is active you can then use the two sliders in the toolbar to adjust the Remove Red Eye settings. Use the Pupil Size slider to adjust the size up or down for the area that is being corrected. Next to this is the Darken slider that you can use to fine-tune the appearance of the pupil. I tend to find that the Lightroom autocorrection (using the midway settings of 50) is usually spot on in most instances. This PDF was made available free of charge via: (not for resale) 455 4. All will become even clearer if you hold down the mouse over the center of the pupil and drag the ellipse overlay outwards. Basically the ability to resize the shape of the red eye correction and reposition it allows a much greater degree of red eye control than you could previously in Lightroom. 456 This PDF was made available free of charge via: (not for resale) Color panel Page references Over in the Color panel (see Figure 3.29) you can use either the Shift key or you can use the Command key (Mac), Control key (PC), to click on the individual color swatches to choose which specific color slider controls you want to have visible. Previously you could only click to see one set of color sliders at a time or click on the All button to have all of them visible at once. This new refinement allows you to economize on screen real estate. Color panel 168171 Figure 3.29 The Color panel allows you to use the Shift key or the Command key (Mac), Control key (PC) to select multiple color swatches. This new feature will allow you to adjust the panel view so that only the color sliders you need to work with are visible. 457 Page references Lens Corrections panel 176179 Figure 3.30 The Highlight Edges Defringe command can be used to auto correct color fringing in the extreme highlights. 458 This PDF was made available free of charge via: (not for resale) 459 460 This PDF was made available free of charge via: (not for resale) 3. In this last example I selected the All Edges Defringe option. The difference between this and the previous screen shot may appear quite subtle on the screen, but when you compare the before and after by toggling the effect in Lightroom you should be able to see a distinct improvement. I therefore like to look upon the All Edges defringe setting as a way to polish up the edges and remove the chromatic aberration that the slider settings on their own cant manage. 461 Page references Reset Develop settings 220 The Reset button will reset the Develop settings to whatever default user settings you might have already set in Lightroom. Just to recap, I explained earlier on page 427 how you could now create user default settings in Lightroom and that you could use the Lightroom Presets preferences to make default settings specific to the camera body serial number and ISO setting. So if you click on the Reset button, Lightroom will use whatever default setting you have created and reset the Develop settings to this. Set Default How do you create a default setting? Well, you could select a specific photo, go to the Develop menu and choose Set Default Settings... That will create a new default setting in Lightroom and if you have the above preferences set to make the setting specific to the camera body and ISO type, the default setting you create will be specific to that combination of camera and ISO setting. Or alternatively, you can hold down the Option key (Mac), Alt key (PC) and the Reset button in the Develop module will change to Set Default Whichever method you use, the dialog shown below in Figure 3.31 will appear, asking you to confirm that you want to go ahead and set this setting as a new default. Figure 3.31 If you hold down the Option key (Mac) or Alt key (PC), the Reset button will change to say Set Default... When you click on this button it will pop the dialog shown here. It is important to heed the warning here, that clicking Update to Current Settings will permanently update the default settings used in Lightroom. 462 This PDF was made available free of charge via: (not for resale) Reset (Adobe) And finally, if you hold down the Shift key, the Reset button will change to say: Reset (Adobe). Clicking on the button in this mode, overrides the Lightroom user default settings and resets the Develop settings to the standard Adobe Lightroom default values. Figure 3.32 If you hold down the Shift key the Reset button will change to say Reset (Adobe). Clicking this button will reset a photo to the normal Adobe Lightroom default values. 463 Page references Removing noise 172174 464 This PDF was made available free of charge via: (not for resale) Figure 3.33 The Presets panel in the Develop module contains two new sharpening preset settings. When you click to select one of these it will modify the Detail panel settings only. 465 Sharpen Portraits As you read the rest of this section it will become apparent what the individual sliders do and which combination of settings will work best with some photographs and not others. But to start with lets look at the two preset settings found in the Lightroom Presets subfolder. Figure 3.34 shows. 466 This PDF was made available free of charge via: (not for resale) Sharpen Landscapes The other preset setting you can chose is Sharpen Landscapes. This combination of sharpening slider settings is most appropriate for subjects that contain a lot of edge detail. You could include quite a wide range of subject types in this category. In Figure 3.35 I used the Sharpen Landscapes preset to sharpen an architectural scene. Basically you would use this particular preset whenever you needed to sharpen photographs that contained a lot of fine edges. 467 Figure 3.36 The sample image used in the final section of this chapter can be accessed via the following link: 468 This PDF was made available free of charge via: (not for resale). 469 0100. 470 This PDF was made available free of charge via: (not for resale). 471 Radius slider. 472 This PDF was made available free of charge via: (not for res. 473. 474 This PDF was made available free of charge via: (not for resale) 2. When the Detail slider is raised to the maximum setting, all of the sharpening effect is allowed to filter through, unconstrained by the effect Detail would otherwise have on the sharpening effect. When Detail is set to 100, you could say that the Amount and Radius sharpening settings are allowed to process the image with almost the same effect as the Unsharp Mask filter in Photoshop. 475 476 This PDF was made available free of charge via: (not for resale) note. Masking slider The Masking slider adjustment adds a final level of suppression control and was inspired by Bruce Frasers written work on his Photoshop sharpening techniques. If you want to read more about Bruces dont 477. 478 This PDF was made available free of charge via: (not for resale) 479 480 This PDF was made available free of charge via: (not for resale) 481 Page references Removing noise 173, 175. As with the Sharpening controls, you can only evaluate the effect of the noise reduction sliders by viewing the image at a 1:1 view or higher. In Figure 3.38. Figure 3.38 This close-up view shows a comparison between the old style Noise reduction in Lightroom 1.0 (left) and the new improved noise reduction in Lightroom 1.1 (right). Both versions were captured using the same noise reduction settings. 482 This PDF was made available free of charge via: (not for resale) 483 484 This PDF was made available free of charge via: (not for resale) Lightroom preferences So much has been changed in the way that the Lightroom preferences have been laid out and added to, that it seemed best to offer a complete guide to configuring the preference options, including those items that are new to Lightroom 1.1. Page references General Preferences General preferences 13, 314 Lets begin with the startup options. If you deselect Show splash screen during startup, you can disable the Lightroom screen from showing. This is just a cosmetic thing really and it depends on whether you want to have the screen shown in Figure 4.2 displayed or not as the program loads. 486 This PDF was made available free of charge via: (not for resale) Once you let the pirates on board, be warned that some things in Lightroom may never be the same again! Figure 4.4 shows how the Rename photo dialog will look after switching to pirate mode. And take a close look at the Library module toolbar, do you see anything different? Hint: check out the Filmstrip options screen shot towards the end of this chapter. When you first installed Lightroom you would have had the option to choose whether to be notified automatically of any updates to the program. In case you missed checking this, you can check the Automatically check for updates option. 487 Catalog selection In the Default Catalog section you can select which catalog should be used each time you launch Lightroom (such as Load the most recent). Note that the Choose button is gone. This is because what used to be known as a library file is now defined in Lightroom as being a catalog file. If you refer back to the General menus chapter, you can read on page 354 about how you can create new catalogs via the File menu in Lightroom 1.1. Also missing from here is the Automatically back up library option, which has now been moved to the Catalog Settings. These are located separately in the File menu, but you can also jump directly to the Catalog Settings by clicking on the Go to Catalog Settings button at the bottom of the General preferences dialog. And from there you can use the backup section to decide at which times you wish to backup the Lightroom catalog. For more information about the Catalog Settings and working with catalogs in Lightroom, please refer to Chapter 1 on the General Menus items. 488 This PDF was made available free of charge via: (not for resale) Presets preferences Page references Develop presets 223224 note Auto tone can often produce quite decent auto adjustments and may at least provide an OK starting point for newly imported photos. But on the other hand, it may produce uglier looking results. Experience seems to suggest that photographs shot of general subjects such as landscapes, portraits and most photographs shot using the camera auto exposure settings will often look improved when using auto tone. Subjects shot under controlled lighting conditions, such as still lifes and studio portraits, can often look worse when using auto tone. It really If you are familiar with the controls found in Develop and Quick Develop then you will know there is an Auto Tone button that can be used to apply a quick auto adjustment to a photo, automatically adjusting the Exposure, Blacks, Brightness and Contrast to produce what Lightroom thinks would be the best combination. Checking Apply auto tone adjustments will turn this feature on as a default setting. 489 Camera-linked settings The next two items in the Default Develop Settings section are linked to a new feature found in the Develop module Develop menu called Set Default Settings which I discussed earlier on pages 426427. Basically, if you check Make defaults specific to cameras serial number and Make defaults specific to camera ISO setting, you can use these preference checkboxes to determine whether certain default settings can be made camera-specific and/or ISO-specific. The Reset all default Develop settings button allows you to revert all the develop settings to their original defaults. Restoring presets At the bottom in the Presets section we have reset buttons that can be used to restore various Lightroom settings. Restore Export presets will restore the preset list used in the File Export dialog. Figure 4.6 The Keywording panel, showing the Keyword Set section. 490 This PDF was made available free of charge via: (not for resale) Page references Keywording panel 9097 Keyword sets 9596 51 Figure 4.8 These templates can be reset to their default presets. On the left, the Filename Template Editor and on the right, the Text Template Editor. 491 Exporting presets Now that we have the ability to export photos as a catalog, you can check the Store presets with catalog option if you would like Lightroom to export the custom presets you created and used as part of an exported catalog. This is useful if you are migrating library images from one computer to another because it will save you having to transfer your custom settings separately (such as your develop settings) or if you wish to share your custom settings with other users. But then again, maybe you would prefer not to give away your own custom settings. This preference item gives you that choice. 492 This PDF was made available free of charge via: (not for resale) Import preferences Page references Import preferences 14 Figure 4.10 shows the Import preferences. The top two items were previously in the File Management preferences section. There is now a simple checkbox for Show Import dialog when a memory card is detected. When this item is checked, this will force the Import dialog to appear automatically whenever you insert a camera card into the computer. The Ignore camera-generated folder names when naming folders option can help fast track the import process if you wish to import everything directly from a camera card into a single Lightroom folder. For example, maybe you have a camera card with photos that were shot using more than one camera and they have ended up in several different folders. When this option is checked, all the card folder contents will get grouped into one folder. Some photographers like using their cameras ability to capture and store JPEG file versions alongside the raw capture files. However, prior to the Lightroom 1.1 update, Lightroom would treat the JPEG captured versions as if they were sidecar files. But if the Treat JPEG files next to raw file as separate photos option is left checked, Lightroom will now respect these as being separate files that must be imported into the Lightroom catalog along with the raw versions. In addition to this, Lightroom 1.1 should no longer treat any non-JPEG files that can be imported as being sidecar files. 493 Page references DNG options 9, 39, 49 Figure 4.11 The Convert Photo to DNG dialog shares the same settings options as those listed in the Import preferences. We first have the File Extension, which can use lowercase dng or uppercase DNG, whichever you prefer. Next, the JPEG Preview that can be: None, Medium Size or Full Size. If you really want to trim the file size down then you could choose not to embed a preview, knowing that a new preview will always be generated later when the DNG file is managed elsewhere. You might select this option if you figure there is no value in including the current preview. A medium size preview will economize on the file size and this would be suitable for standard library browsing. After all, most of the photos in your catalog will probably only have medium sized previews. But if you want the embedded previews (and remember these are the previews embedded in the file and not the Lightroom catalog previews) to always be accessible at full resolution and you dont consider the resulting increased file size to be a burden, then choose the Full Size preview option. This will have the added benefit that should you want to view these DNGs in other applications such as iView Media Pro, you will be able to preview the DNGs at full resolution using a preview that was generated by Lightroom. This means that photos processed in Lightroom as DNGs 494 This PDF was made available free of charge via: (not for resale) should then preview exactly the same when they are viewed in other applications. tip For the Image Conversion Method you have two options. You can choose Preserve Raw Image, which will preserve the raw capture data in its original mosaic format, or you can choose Convert to Linear image, which will carry out a raw conversion to demosaic (convert) the original raw data to a linear image what Lightroom is doing anyway when converting the raw image data to its internal RGB space. Except if you do this as part of a DNG conversion you will end up with huge DNG file sizes. This is also a one-way process. If you convert a raw file to a linear image DNG, you wont be able to take the raw data back to its original mosaic state again and you wont be able to reconvert the raw data a second time. So why bother? In nearly every case you will want to preserve the raw image. But there are a few known instances where DNG compatible programs are unable to read anything but a linear DNG file from certain cameras. Even so, I would be wary of converting to linear for the reasons I have just mentioned. tip Personally I have no trouble converting everything I shoot to DNG and never bother to embed the original raw data with my DNGs. I do however sometimes keep backup copies of the original raw files as an extra insurance policy. But in practice I have never had cause to use these. 495 Page references External Editors 14 51 This is the one set of preference controls where there hasnt been much change. The Lightroom External Editing preferences will let you customize the pixel image editing settings for Photoshop plus one other external, pixel editing program (such as Adobe Photoshop Elements or Corel Paint Shop Pro). Here you can establish the File Format, Color Space, Bit Depth and where applicable, Compression settings that are used whenever you ask Lightroom to create an Edit copy of a library image to work on in an external pixel editing program. You can use the Edit in Photoshop section to establish the default file editing settings when choosing CommandE (Mac), ControlE (PC) to edit a selected photo in Photoshop. In this example The TIFF format is used with a ProPhoto RGB color space, 16-bits per channel bit depth and ZIP compression. And below this you can specify the default settings to use when editing photos in an Additional External Editing application. For example, if you had Photoshop Elements installed on your computer you might want to use the settings shown here to create edit versions as TIFF files in 8-bit per channel using the sRGB color space. 496 This PDF was made available free of charge via: (not for resale) And at the bottom we have the new Edit Externally File Naming section. Previously, Lightroom would always append each externally edited image with Edit at the end of the filename. And as you created further edit copies, Lightroom would serialize these: Edit1, Edit2 etc. In Lightroom 1.1 you can customize the file naming. In the Figure 4.13 example, you could create a custom template that adds a date stamp after the original filename. 497 tip Some database and FTP systems may prefer spaces to be removed from file names. For example, when I upload files to my publisher, the FTP server they use will not allow such files to be uploaded. This is where the When a file name has a space item can come in useful, because you can choose to substitute a space, with a dash (-) or an underscore (_) character. 498 This PDF was made available free of charge via: (not for resale) Interface preferences Page references Interface 17 Figure 4.16 You can use the contextual menu to quickly access the full range of panel end mark options. 499 You can select a panel end mark via the Lightroom Interface Panels preferences, or you can use the contextual menu in the Library module to access them quickly. Just right-click on the end of the panels list and navigate to the Panel End Mark submenu and select the desired panel end mark. I have compiled here a visual reference guide (see Figure 4.17) that shows all the different panel marks you can now choose from and of course you can also choose None if you dont want to see any kind of panel mark appearing in the Lightroom modules. Figure 4.17 This shows all the panel end marks now available in Lightroom 1.1. 500 This PDF was made available free of charge via: (not for resale) Page references Do you fancy creating your own panel end mark design? Well, its quite easy to do. Take a screen shot of one of the current panel end marks to get an idea of the scale of the design and how it will look against the panel background color. Create a new custom design in Photoshop scaled to the correct size and on a transparent layer. Save this graphic using the PNG file format (which can support transparency). Now go to the Panel End Mark menu and select: Go to Panel End Marks Folder. This will reveal the folder in the system finder. Place the PNG file you have just created in there and then reopen the Lightroom preferences. You will now see your custom design listed in the Panel End Mark 24 Background appearance 17 If you want to change the panel font size, go to the Panel Font Size section and select Small or Large Size. However, this change will not take effect until after you have relaunched Lightroom. Lights Out The Lightroom Interface preferences let you customize the appearance of the interface when using the Lights Out and Lights Dim mode. Bear in mind here that these preference settings also allow you to create a Lights up setting. So instead of using black as the Lights Out color, you could try setting this to light gray or white even. Background The Background section will let you customize the background appearance when viewing an image in Loupe mode. You can adjust the fill color or choose to add a pinstripes texture pattern. 501 Figure 4.18 The Lightroom 1.1 Filmstrip interface showing most of all the new Filmstrip extra view options. 502 This PDF was made available free of charge via: (not for resale) Interface Tweaks These last two items were also in the previous Interface preferences dialog. When Zoom clicked point to center is checked, the loupe view uses the click point as the center for its magnified view. To understand how this works, try clicking on the corner of a photo in the standard loupe view. If Zoom clicked point to center is checked, the corner will zoom in to become centered on the screen. When this option is unchecked the photo will zoom in with the corner point positioned beneath the mouse cursor. Personally, I find that deselecting this option offers a more logical and useful zoom behavior. The Use typographic fractions option offers a fine-tuning tweak to the Lightroom interface and the way the shutter speeds are presented within the Metadata panel in the Library module. In the example shown in Figure 4.19 the fractions option was left unchecked in the left version and on the right the shutter speed is represented when typographic fractions are used. Figure 4.19 The left hand view of the Metadata panel shows the shutter speed displayed when Use typographic fractions is switched off. And the right hand panel view shows the shutter speed displayed when Use typographic fractions is switched on. 503 504 This PDF was made available free of charge via: (not for resale) Were not quite done yet. There are still a few more innovations in Lightroom 1.1 that should get a mention. Namely, whats new in the Slideshow, Print and Web modules. For this final chapter I have grouped these new features all together into a single chapter. Page references Slideshow module 262279 Slideshow module The Slideshow module has undergone a minor menu change (see Figure 5.1 below), offering more playback options. There has also been a revision of the slideshow display mechanism, which means that you should see smoother transitions when playing slideshows. Play menu The most noticeable change here is the addition of a Play menu. This contains a few menu items that were previously listed in the Slideshow menu and the new items are those highlighted here in yellow. For example, we have: Go to Next Slide and Go to Previous Slide. These arent new features, just new to the Play menu in Lightroom 1.1. Note the keyboard shortcut shown here that uses the Command key (Mac) or Control key (PC) plus the left or right arrow keys to progress through the slides. You can also use the arrow keys on their own to navigate the Filmstrip. But the use of the Command/Control key plus arrow will preserve any selection of images that is active. 506 This PDF was made available free of charge via: (not for resale) When you want to preview a Slideshow you can, as before, use the Preview button (see Figure 5.3) in the Slideshow toolbar to play backthe slideshow in the Content area. But you can now go to Play Preview Slideshow to do the same thing, or use the Option+Return (Mac), Alt+Return (PC) keyboard shortcut. Figure 5.3 The Slideshow toolbar, where I have highlighted the Preview play button. To force Lightroom to run a slideshow using all photos, us: Play Run Slideshow with All Photos or the Shift+Return keyboard shortcut. This means you can keep Use Selected Photos in the Which Photos submenu selected so that the default Lightroom behavior is always to respect selections (which I think most people will prefer) and use the Shift+Return shortcut whenever you want to override this default and run a slideshow that plays all photos instead. 507 Page references Print module Print overlays Print Job panel 230 259 240243 246, 251, 253, 256 Print module Print overlays If you go to the Layout panel in the Print module you will notice a new item called Image Print Sizes in the Show Guides section at the bottom. When this is checked, Lightroom displays the dimensions of a photo above each cell using the ruler units selected at the top of the Layout panel. This overlay wont be seen in the final print. Figure 5.4 This view of the Print module shows an example of a contact sheet template with the Image Print Sizes overlay enabled. You can see the print size for each image displayed in the top left corner of each cell. 508 If you uncheck Print Resolution in the Print Job panel (see Figure 5.5), Lightroom will output an image at its native pixel resolution without resampling the image data to fit a set pixel resolution. This could be useful if you wanted to output a print file from Lightroom as a PDF without resampling the pixel data. Do this and you can save out a full pixel size master print file from Lightroom. Note, however, that the images native resolution must fit within the range of 72480 pixels per inch. This PDF was made available free of charge via: (not for resale) Web module Page references Web module 280303 Note The Lightroom team recently announced that Airtight Interactive have released three new Flash-based gallery templates for Lightroom. You can find out more about these new templates and how to download them by going to: blogs.adobe. com/lightroomjournal/2007/06/ airtight_interactive_web_galle.html. Figure 5.7 If Use Selected Photos is checked, Web galleries can be based on a Filmstrip selection when more than one image is selected. 509 Web templates Page references Template Browser panel 303 Labels panel 285 Appearance panel 288292 The Web menu also includes an item called New Template and another called New Template Folder. A template is basically the same as a preset (which was the menu term used in version 1.0). As with the Develop module you can now place your templates in folder groupings. Appearance panel The Appearance panel shown in Figure 5.11 includes a new option to Add Drop Shadows to Photos. You can also uncheck the Section Borders option if you want to remove the dotted line that separates the site title and the Collection title at the top and bottom of the web gallery pages. 510 To show you how these will affect the web gallery pages, Figure 5.10 shows an example of a gallery page where Add Drop Shadow to Photos and Section Borders have both been checked. If you click on the swatch color next to it you can choose a new color for the section divider lines (see the Colors palette in Figure 5.12). This PDF was made available free of charge via: (not for resale) The Image Pages section is brought over from the Output Settings panel. You used to be able to set the image page pixel dimensions in the Output Settings. But now you can do this via the Appearance panel instead. Plus you can switch the Photo Borders on or off and define the border color of your choice. Figure 5.10 This view shows a preview of an HTML gallery page showing a gallery photo with a border and drop shadow added. 511 Page references 294297 292293 Output panel 298301 Removing modules 314315 The Image Settings panel is now renamed the Image Info panel, which is shown here in Figure 5.13. The functionality remains exactly the same: you use these options to add metadata information such as the filename to the page views. As I just mentioned, the Output Settings is still there, but minus the page size control (which has been moved to the Appearance panel). There is also an option to include the Copyright Only information or include All the metadata information in the gallery JPEG images. Including all the metadata information embeds details like the IPTC and keywords metadata in the web gallery JPEG images. Choose this option if you want to include such metadata, but be aware that this increases the file size of the individual gallery images. If Copyright Only is selected, that information is all thats is embedded, which keeps the gallery JPEGs lightweight in size. The Output panel has been renamed the Upload Settings panel. No changes here either apart from the new name. Removing modules Advanced Lightroom users will be aware that it is possible to hack into the program contents, remove modules from the Lightroom program and for the program to then launch and run as normal, minus the module you just removed. With version 1.0 there were some overlap issues that were not completely resolved by the effect this would have on other modules and the menu items in those modules. For example, if you remove the Slideshow module from Lightroom 1.1, the Impromptu Slideshow menu will be disabled in the Window menu for the other Lightroom modules. And if you remove the Export module, the Export menu commands will be omitted from the File menu. For full instructions on how to remove the Lightroom modules, refer to pages 314315 in the Adobe Photoshop Lightroom 1.0 book. 512 This PDF was made available free of charge via: (not for resale) Index A B Bridge 2 367, 370, 417 C Camera Presets 426427 Cancelling a crop 441 Catalog path navigation 400 Catalogs 355360 Catalog path 364365 Catalog Settings 363364 file handling settings 366368 general catalog settings 354356 Metadata settings 355 Exporting catalogs 356 Exporting presets 492 Exporting with negatives 357 Exporting without negatives 356357 Importing catalogs 354 Including available previews 354 New Catalog 398 Open Recent 404 Catalog Settings 378, 442443 Clarity 416418 Color Label Sets Bridge 2 457 Color panel 415 Compare view 413 Copy Name searches 402 Count display in Folders panel 429 Create Virtual Copy 441 Crop Guide Overlays. SeeDevelop F File Handling preferences 498 File Menu 354368 Filmstrip 419421 color label filtering 420 Rating filters 419 Filtering images 378 filter by copy status 387 Find panel 401 Copy Name searches 402 Date searches 401 G General preferences 486488 Grid view. SeeLibrary module H Help menu 382, 382383 History I Ignore camera-generated folder names 493 Import preferences 493495 Interface preferences 499503 module Index 513 L Lens Corrections panel 458461. See alsoDevelop module Library module Library Menu 391392 Library View options 409 Lightroom preferences External editing preferences Edit in Photoshop 496 External Editing preferences 496497 Edit Externally File Naming 497 File Handling preferences 498 illegal characters 498 Reading Metadata 498 General preferences 486488 Automatically check for updates 487 Catalog selection 488 Completion sounds and prompts 488 Show splash screen during startup 486 Import preferences 493497 DNG options 494495 Ignore camera-generated folder names 493 Show Import dialog 493 Treat JPEG files next to raw file as separate 493 Interface preferences 499503 Background 501 Filmstrip view options 502 Lights Out mode 501 Panel end marks 499501 Panel Font Size 501 Show photos in navigator on mouse-over 502 Tweaks 503 Use typographic fractions 503 Zoom clicked point to center 503 Presets preferences 489492 Camera-linked settings 490 Default Develop Settings 489 Exporting presets 492 514 Index N Noise reduction 482483 Color 482 Luminance 482 O Optimize catalog 364 P Painter tool 393395, 416 Panel end marks 499500 Custom panel end marks 501 Pirate mode 487 Presets preferences 489492 Print module 508 Print overlays 508 Print Resolution 508 R Real World image sharpening with Photoshop CS2 477 Remove Red Eye tool 452456 Darken 454 fine-tuning the effect 456 Pupil Size 454 Remove Spots tool 447450 ACR Synchronized spotting 451 Click and drag 448 Click only 448 Clone or Heal 448 Editing the spot circles 449 heal mode synchronization 450 Hiding the spot circles 449 Spot Size 448 synchronized spotting 450451 Undoing/deleting crop circles 449 Remove Spots Tool 436 Removing modules 512 Removing photos 462 Reset Develop settings 463 Restoring presets 451 S Saturation 404 Saving metadata 366367 Saving Metadata 429 Scan for Metadata updates 389 Set Default settings 462 Sharpening 389, 464481 1:1 view evaluation 469 Amount slider 470471 Capture sharpening 464 Detail slider 474476 Landscape sharpening 467 Luminance sharpening 469 Masking slider 477479 Portrait sharpening 466 Radius slider 472473 sample image 468 Sharpening presets 465467 Show in Folder in Library 428 Slideshow module 506507 Play menu 506 Preview Slideshow mode 507 Slideshows and selections 507 T Toolbar 415 Tool Overlay menu 435436 Treat JPEG files next to raw file as separate 493 U Updating previews 390 Use typographic fractions 503 V Vibrance 404 Virtual copies Create Virtual Copy command 391 master copy filters 421 Set Copy as Master 391 Virtual copy filters 421 W Web module 509512 Add Drop Shadows to Photos 510 Appearance panel 510 Image Info panel 512 Output Settings 512 Site Info panel 510 Upload Settings 512 Web templates 510 Which Photos submenu 509 White Balance tool 376 X XMP automatically write changes into XMP 368370, 398 Camera Raw edits in Lightroom 372 saving metadata to 367 sidecar files 398 viewing edits in Camera Raw 371 Z Zoom clicked point to center 503 Index 5.
https://it.scribd.com/document/262389416/Photoshop-Lightroom
CC-MAIN-2019-51
refinedweb
17,343
66.07
Red Hat Bugzilla – Bug 140809 initial fstab should contain info for mqueue filesystem Last modified: 2014-03-16 22:50:49 EDT Description of problem: I actually don't know where the initial /etc/fstab is created. I assume anaconda; if this is wrong, please reassign. The problem is that the mqueue filesystem is not mounted. This makes it impossible to determine which message queues have been created and more importantly, it makes it cumbersome to remove existing message queues (i.e., we ship no tool to do this, one would have to write a new one). Therefore I suggest adding such an entry to /etc/fstab when it is created. Version-Release number of selected component (if applicable): How reproducible: always Steps to Reproduce: 1.Compile this code #include <fcntl.h> #include <mqueue.h> int main (void) { struct mq_attr a; return mq_open("/aaa", O_CREAT|O_RDWR,0600, 0); } 2.run it 3.try to remove the message queue /aaa Actual results: need to write program using mq_unlink Expected results: rm /dev/mqueue/aaa Additional info: I don't know whether there is a standard for the name of the directory where the filesystem is mounted. I doubt it. Therefore add none /dev/mqueue mqueue nodev,noexec 0 0 Most of these we've been doing mounts for in rc.sysinit as opposed to explicit lines in /etc/fstab. Of course, there are the more general questions of what this is used for, wtf it's an all new filesystem, that there _needs_ to be a standardized location to mount it among others... I think I explained fairly well why this filesystem needs to be mounted. There is no way for a sysadmin to see what message queues are created and take up resources. And there is no tool to remove them. With or without a standard mounting point, we need this mount. It's something new in RHEL4 since RHEL3 had no message queues which is why this is not available so far. And as for mounting in rc.sysinit. I don't think it's adequate, we should have the entry in /etc/fstab just as there is an entry for /dev/shm. Chunking on the shouldfix list. Jay: new *features* at this stage on the shouldfix list? *giggle*. If it's going to be in /etc/fstab, it needs to be created in anaconda, or similar. I'm just not clear on why this is implemented as yet another bizarro filesystem that then has to be mounted by everyone causing more slow-down to the bootup process, confusion on things when it's not mounted (ie -- in what situations do I now need this for chroots), etc. Hence, why I don't think that "just add it to /etc/fstab" is a good enough answer. Plus, I'd like some sort of real consensus other than us just making something up on if we have to mount something on _where_ it gets mounted. And /dev is ugly due to dev now being tmpfs for udev. Uni reported it, so it got some points for that, plus it *felt* like one of those things that if we were going to do it, then it needed to be done in GA and not something which was introduced at an update. So, if the resolution is, "Nope, we're not doing this" that's good enough in my opionion. Remember, one of the key tenets of these lists is to get to a resolution, not necessarily fix every issue. > I'm just not clear on why this is implemented as yet another bizarro filesystem > that then has to be mounted by everyone If this would not have to be done with a filesystem, we would yet another "bizarro" tools like ipcs. Using a filesystem is *much* more desirable. And back to the issue. Imagine a system without ipcs. This is exactly where we are now wrt to POSIX mqs. The sysadmin needs to be able to have insight. We need to have the filesystem mounted. OK, so the options are thus: 1) Add support for /etc/fstab.d. Package a file that goes there. 2) Add a hotplug helper that automatically mounts the filesystem when the filesystem is registered with the kernel 3) Add the filesystem to /etc/fstab. This requires: a) modifying anaconda -and- b) add a %post to some package to do the fstab modifications on upgrade 4) Randomly mount the filesystem in rc.sysinit (like /proc, /sys) Note that 2), while relatively clean, and useful for filesystems in general, may not work at all. 1) obviously requires work to both mount and to libc for *mntent. So, in any case, 1), 2), and 3b) require changes to a package. Logically, this goes in the package that introduces the functionality. Assigning to kernel for now; I suppose it could be libc as well. 1) is really no option IMO. This is unnecessary, non-standard, and has the potential for limitless confusion. What happens if there are multiple entries for the same mount point? What about programs parsing /etc/fstab directly? What about statically linked code using old libc code? 3) is the correct solution but refused because it is work. This shouldn't be reason enough. I don't know enough hotplug to judge 2). And how is it supposed to work? Message queues are no module. Support is always compiled in. Has it been checked whether there is any hook in the code which calls into hotplug? 4) is probably the least intrusive possibility. I'd be OK with it although it is plainly inconsistent since /dev/shm is handled in fstab. 3) is certainly doable. It's just that initscripts really isn't the place for the fstab-modifying %post. One minor note is that 3/4 it also requires a udev change to make any subdirectory on startup if it's under /dev, but that's doable. Still current for RHEL4. Moving to FC devel; this is unlikely to change for already-released RHEL. *** This bug has been marked as a duplicate of 145892 ***
https://bugzilla.redhat.com/show_bug.cgi?id=140809
CC-MAIN-2016-44
refinedweb
1,018
74.29
Utsira Thank you for your answer @dgelessus , really interesting. Utsira @omz thanks, it works now, that's awesome. I've used SceneKit quite a lot with Swift, but I'm still new to Python/ Pythonista. One question, how come you implement SCNVector3as a Python class, rather than importing it with the other ObjC classes? Just trying to understand the implications of working with ObjC libraries in Pythonista. thanks. Utsira Apologies for bumping a 2-year-old thread. When I try to run the above script in Python 3.5 I get an error at the mapcommand saying: no Objective-C class named 'b'SCNView'' found Is this some 2.7 -> 3.5 issue? What is the extraneous 'bin the error message referring to? thanks in advance. Utsira @ccc I've updated to the latest version of the script. This time I'm seeing a different error. I'm trying to send a .pyfile from Working Copy to Pythonista. Here's the traceback: Traceback (most recent call last): File "/private/var/mobile/Containers/Shared/AppGroup/FAB03D8A-5598-46CF-A26A-B76EE628E876/Pythonista3/Documents/read_from_working_copy_app.py", line 6, in <module> import appex, editor, os, shutil File "/var/containers/Bundle/Application/AE76B4C7-AEC5-4858-9A87-3F2F77424054/Pythonista3.app/Frameworks/PythonistaKit3.framework/pylib/site-packages/editor.py", line 5, in <module> raise NotImplementedError('Not available in app extension') NotImplementedError: Not available in app extension Utsira @ccc I've not been able to get the script working. This is the end of the traceback: with open(src, 'rb') as fsrc: IsADirectoryError: [Errno 21] Is a directory: '/var/mobile/Containers/Shared/AppGroup/379D48B7-C511-4AF4-80DD-2A91B9CD9D2A/File Provider Storage/Pythonista-and-Working-Copy' I'm not an experienced Pythonista user though, so I might have installed the script incorrectly. I added it as a URL extension, is that correct? Do we need to put anything in the Arguments field in the Pythonista extension editor? Could someone describe the correct way to install the script? Utsira posted in Pythonista • read more Utsira When I fire Pythonista 3 up I have a folder for "Pythonista 2 Documents". Have they been copied across to Pythonista 3 (or is this some kind of iOS8 open-in-place magic)? In other words, will those docs stay there if I delete Pythonista 2? How safe is it generally to delete Pythonista 2? Utsira @smath I just submitted a pull request to your repo, just a couple of small changes. When I push/upload, the operation works, but when it switches back to Pythonista, a notification pops up saying that I have no such file in my library. Utsira.
https://forum.omz-software.com/user/utsira
CC-MAIN-2022-27
refinedweb
436
57.57
0 I know it's something I'm doing wrong, but I can't seem to figure out what it is. The console reads the text you input and replies to what you have sent with a message set specifically for your input message. When I input "hello", it replied with "Hello! How are you? (Answers = good, bad)", but when I go to enter either "yes" or "no", the console closes without submitting the message I've specified. Can anybody help me out? using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication3 { class TestClass1 { static void Main() { String UserInput = Console.ReadLine(); String PrevAnswer = null; switch (PrevAnswer) { case null: { switch (UserInput) { case "hello": { Console.WriteLine("Hello! How are you? (Answers = good, bad)"); break; } case "goodbye": { Environment.Exit(0); break; } } break; } case "hello": { switch (UserInput) { case "good": { Console.WriteLine("That's good to hear"); Console.ReadLine(); break; } case "bad": { Console.WriteLine("Oh, that's sad to hear :("); break; } } break; } } Console.ReadLine(); } } }
https://www.daniweb.com/programming/software-development/threads/449723/readline-won-t-stop-the-console-from-closing
CC-MAIN-2017-47
refinedweb
164
62.95
Hide Forgot From Bugzilla Helper: User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1) Gecko/20030225 Description of problem: When using an example from the perldoc perlform pages I am able to reproduce a segmentation fault in perl installed by default on RedHat 9. Specifically this is in the formline function that is part of perl. (perldoc -f formline) Version-Release number of selected component (if applicable): perl-5.8.0-88.x How reproducible: Always Steps to Reproduce: 1. Run code provided. #!/usr/bin/perl # load some text... open(FILE,"/usr/share/doc/redhat-release-9/EULA"); $text = <FILE>; close(FILE); # call the swrite subroutine defined later print swrite(<<'END', "$text"); ^<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< ~~ END # simplified swrite routine taken from the perlform perldoc page sub swrite { my ($format,@strings) = @_; $^A = ""; formline($format,@strings); return $^A; } Actual Results: $ ./test.pl Segmentation fault Expected Results: It should print the EULA reformatted to the specified width without the blank lines (basically just reprint the EULA). Run it on a RH8 or any box with perl compiled from source. Additional info: Stepping through with the perl debugger it just hangs. Using gdb to do a backtrace, I get this: (gdb) bt #0 0x4207bee3 in memmove () from /lib/tls/libc.so.6 #1 0x08a0e040 in ?? () #2 0x400ad00c in Perl_sv_clear (my_perl=0x804bcb8, sv=0x899eeac) at sv.c:5083 #3 0x400ad764 in Perl_sv_free (my_perl=0x804bcb8, sv=0x899eeac) at sv.c:5226 #4 0x40098ef3 in Perl_av_clear (my_perl=0x804bcb8, av=0x8664c3c) at av.c:456 #5 0x400cae2c in Perl_leave_scope (my_perl=0x804bcb8, base=210) at scope.c:903 #6 0x400c906c in Perl_pop_scope (my_perl=0x3ffffbe1) at scope.c:137 #7 0x400d1be8 in Perl_pp_return (my_perl=0x804bcb8) at pp_ctl.c:1850 #8 0x40086a0a in Perl_runops_debug (my_perl=0x804bcb8) at dump.c:1414 #9 0x4003a9bb in S_run_body (my_perl=0x804bcb8, oldscope=-240) at perl.c:1705 #10 0x4003a645 in perl_run (my_perl=0x804bcb8) at perl.c:1624 #11 0x080493a3 in main () #12 0x420156a4 in __libc_start_main () from /lib/tls/libc.so.6 I ahve scripts that use this, though not ofter, that I wront on a RH8 machine. I upgraded a while back, but don't normally run with that function from my notebook. I ahve been trying to debug the problem and did a fresh install of RH9 and still ahd the issue. Traced it down to the swrite function/sub and wrote the test script above and here we are... Thanks Very sorry for the long delay in processing this bug report. This bug is no longer a problem for the perl versions in any Red Hat OS release.
https://bugzilla.redhat.com/show_bug.cgi?id=107206
CC-MAIN-2019-18
refinedweb
428
67.86
Integer Roots April 19, 2013 One solution first brackets the answer between lo and hi by repeatedly multiplying hi by 2 until n is between lo and hi, then uses binary search to compute the exact answer: (define (iroot k n) (let loop ((hi 1)) (if (< (expt hi k) n) (loop (* hi 2)) (let loop ((lo (/ hi 2)) (hi hi)) (let* ((mid (quotient (+ lo hi) 2)) (mid^k (expt mid k))) (cond ((<= (- hi lo) 1) (if (= (expt hi k) n) hi lo)) ((< mid^k n) (loop mid hi)) ((< n mid^k) (loop lo mid)) (else mid))))))) A different solution uses Newton’s method, which works perfectly well on integers: (define (iroot k n) (let ((k-1 (- k 1))) (let loop ((u n) (s (+ n 1))) (if (<= s u) s (loop (quotient (+ (* k-1 u) (quotient n (expt u k-1))) k) u))))) You can run the program at. This function will be added to the Standard Prelude the next time it is updated. A quick solution using power of 1/k in Haskell: iroot :: (Floating b, Integral c, RealFrac b) => b -> b -> c iroot k n = floor . exp $ 1/k * log n def iroot(k,n): capturelasti=0 for i in range(0,1000): if pow(i,k) == n: return(i); if pow(i,k) n: return(capturelasti) #Tests print(iroot(3,125)) print(iroot(3,126)) print(iroot(3,124)) print(iroot(6,124)) from itertools import count def iroot(k, n): ….return next(x for x in count() if x**kn) Ahem. The blog had edited my code? from itertools import count def iroot(k, n): ..return next(x for x in count() if x**kn) Another try. The software doesn’t like less-than or greater-than signs? from itertools import count from operator import le, gt def iroot(k, n): ..return next(x for x in count() if le(x**k, n) and gt((x+1)**k, n)) There’s a link to instructions on how to post source code in the red bar above. The source code is embedded in HTML where < and > have to be written < and > (assuming I get this right :) or otherwise escaped. Go Newton’s method! This is a Common Lisp one, not using any floating arithmetic. It may have a bug – I don’t know if it always converges. def iroot(k, n): x = float(n)**(1/float(k)) return int(x) #include #include #define e 2.718281828 main() { int x,y; printf(“x,y\n”); scanf(“%d %d”, &x,&y); iroot(x,y); } void iroot(int m,int n) { float z; z=(log(n))/m; int root; root=pow(e,z); printf(“%d”,root); } Did in SmallTalk; the initial guess for Newton’s method is taken by calculating 2 ^ (floor(log2(n)) / k).
http://programmingpraxis.com/2013/04/19/integer-roots/2/
CC-MAIN-2015-06
refinedweb
464
73.92
Opened 11 years ago Closed 7 years ago #2234 enhancement closed fixed (fixed) use poll reactor on POSIX platform that provide poll(2) Description Currently the default is always select. We can do better than this. Change History (45) comment:1 Changed 11 years ago by comment:2 Changed 10 years ago by comment:3 Changed 10 years ago by The following tahoe issue ticket now contains a link to this ticket: # bug in Twisted, triggered by pyOpenSSL-0.7 comment:4 Changed 10 years ago by You lost me at "Makefile". Would you mind rephrasing that chunk of stuff in a language that humans understand - like, say, Python? :-) comment:5 Changed 10 years ago by As background, the Makefile stuff that Zooko quoted above is from the Tahoe project's top-level Makefile, which really just contains convenient aliases so I can type 'make test' instead of 'PYTHONPATH=localstuff trial --reactor=poll allmydata.test'. To work around #3218 (which causes foolscap and tahoe test failures under Twisted-8.x and pyOpenSSL-0.7 on the select reactor), we changed this Makefile to add --reactor=poll to the trial command line, but since not all platforms support pollreactor, there are a bunch of conditionals to make sure we don't use --reactor=poll on, say, windows. (we might have something elsewhere to use iocpreactor there). If trial could be conveniently invoked from python code, we would probably replace this 'make test' target with a python script that did something like: from twisted.python import runtime r = "default" if runtime.platformType == "posix": r = "poll" sys.exit(trialRunner(suite, reactor=r)) (in fact, zooko would probably write a setuptools plugin to make 'python setup.py test' do exactly this :-). comment:6 Changed 10 years ago by In fact, I've already written such a setuptools plugin, by borrowing code from the Elise project, but there may have been some problem using it, or else I just got distracted before I finished it. I'll probably dig it up again, especially since Barry Warsaw just wished for better setuptools integration for tests: But, actually this has little to do with the topic of this ticket, which is that choosing the select reactor as the default doesn't seem as good as choosing poll reactor as the default, if there is a poll reactor available. comment:7 Changed 10 years ago by comment:8 Changed 10 years ago by comment:9 Changed 10 years ago by Poll shouldn't be used by default on OS X. comment:10 Changed 10 years ago by Okay, so how about if the default reactor is poll on linux and cygwin and select on everything else? That's what the Tahoe makefile currently does. comment:11 Changed 10 years ago by cygwin isn't an officially supported platform. Other than that, no problems with that suggestion are immediately obvious. Do you want to supply a patch? comment:12 Changed 10 years ago by I would be interested in contributing a cygwin buildslave. Yes, I want to supply a patch. comment:13 Changed 10 years ago by Cool, a cygwin slave would be great. comment:14 Changed 10 years ago by Please provide name, password, and master for the cygwin buildbot. Also, would you like a cygwin buildbot for pyOpenSSL too? comment:15 Changed 10 years ago by A comment on the tracker wouldn't be a good place for most of that information. Find me on IRC. comment:16 Changed 9 years ago by Okay, we have a cygwin buildslave now, but it goes into a fast loop and fills my filesystem with test log: comment:17 Changed 9 years ago by comment:18 Changed 9 years ago by This issue also affects setuptools_trial -- setuptools_trial issue #2 comment:19 Changed 9 years ago by This ticket has been mentioned on the tahoe-dev list: comment:20 Changed 9 years ago by So I was just hacking setuptools_trial to select the poll reactor on cygwin or linux2: When I realized -- hey wait a minute, why am I waiting for the cygwin buildslave to work? Why don't we fix this ticket for all the platforms which are already passing Twisted buildbot tests and worry about cygwin once we have a working cygwin buildslave? comment:21 Changed 8 years ago by comment:22 Changed 8 years ago by I agree. How about restricting the scope of this ticket. I propose just one thing: if the platform is linux, use poll reactor as the default; for all other cases, continue to use select reactor as the default. Once this is implemented, we can file tickets for further refinements to the selection scheme. comment:23 Changed 8 years ago by Good idea! comment:24 Changed 8 years ago by comment:25 Changed 8 years ago by comment:26 Changed 8 years ago by comment:27 Changed 8 years ago by As long as we're changing this, I'd want these defaults: Linux: epoll, switching to POSIX default if unavailable POSIX: poll if available, otherwise select Simply trying epoll,poll,select in that order on POSIX platforms would accomplish this. comment:28 Changed 8 years ago by Eventually, yes. I'm not going to do all that, though. I'm just going to make a simple change. Further customizing the selection logic should be easier based on the outcome of this ticket, though. comment:29 Changed 8 years ago by comment:30 Changed 8 years ago by A note on the current implementation: many platforms are POSIX but not Mac OS X, not just Linux. One *hopes* all of them have poll, but somehow that seems too much to ask... It would be nice not to break the default reactor on such platforms, even if they're unsupported. Checking for existence of poll object in select module might be better. comment:31 Changed 7 years ago by comment:32 Changed 7 years ago by Ready for review, I think. Buildbot run: The problems appear to be normal ongoing windows issues. comment:33 Changed 7 years ago by Reviewed: I read through this patch and didn't see anything wrong with it. I liked the clarification that this isn't actually choosing the "best" reactor. (Do I need to do anything else to mark this ticket as reviewed?) comment:34 Changed 7 years ago by Standard procedure is to remove "review" keyword and reassign back to author. ReviewProcess documents the things you should be checking. comment:35 Changed 7 years ago by comment:36 Changed 7 years ago by comment:37 Changed 7 years ago by What's this stuff about "modern OS X"? OS X 10.6.7, the Apple distributed Python executable does not have it. comment:38 Changed 7 years ago by AFAICT the *OS* at least has it, which is why I assumed Python exposed it: According to this - (as usual, foom knows the answer to everything), Apple chose not to expose because certain fds aren't supported. The only relevant one is PTYs, but that is an issue (why does OS X suck?). The implication of the linked bug is that some releases of Python might decide expose a still-broken poll, so possibly hardcoding select() on OS X is still the right thing to do... Maybe we should ask James, since he's well informed. comment:39 Changed 7 years ago by Okay, I have reviewed that patch and checked for the things mentioned in ReviewProcess. Other than the issue raised in comment:38, it is ready to merge. comment:40 Changed 7 years ago by I reverted back to old behavior (no poll on OSX), and added description of why we are not choosing e.g. epoll on Linux just yet. doesn't show any relevant errors AFAICT. comment:41 Changed 7 years ago by Thanks! - Please remove the date from test_default.py's copyright header - Can you change the docstring style for isMacOSXin runtime.pytoo? - I think the news fragment would benefit from a "The ..." at the beginning. I wonder if the level of detail now in the default reactor's description (in the twisted_reactors.py dropin file) will scale to more complex selection schemes. We'll find out, I guess. :) Please address the above to your satisfaction and then merge. Thanks! comment:42 follow-up: 43 Changed 7 years ago by I just noticed go by in my RSS reader, and I have one comment: We shouldn't have FUD about our own code, especially in comments. The issues with epoll on Linux and poll(2) on OS X are nicely well-described. But the issues with kqueue ("isn't in great shape") and IOCP ("probably not quite ready for prime time") are super vague, and, in the latter case, I think possibly wrong. Please link to specific issues in that comment so that a future maintainer can know whether it's an appropriate time to update the code to select a better default reactor. (Also, while you're at it, transforming "#4429" into the appropriate URL would be a kindness to future readers.) comment:43 follow-up: 44 Changed 7 years ago by comment:44 Changed 7 years ago by I'll make the descriptions more accurate (remind me what the kqueue issues are? besides PTY support.) PTY support's a big one, but ... hrm. the relevant ticket is a big mess that doesn't really describe the issues, but that's probably the thing to link to. Somebody should update the summary there. Possibly in the comment you could just say that it is "out of date". That being said: if IOCP really doesn't have any reported issues (which I doubt, but will verify) would you be willing to have IOCP the default on Windows? "any" reported issues? :) I think the important thing would be fewer reported issues than the select reactor on Windows :). Well, that, and no significant missing functionality. SSL was the big one for a long time, but now that that's taken care of, what's left? Do serial ports work? Standard I/O? I know that subprocesses use the same gross hack as all the other Windows reactors. I'd definitely be willing - happy, even - to have it be the default, but I'm happy to be overridden here by anyone with an issue that I haven't thought of (Pavel or Jean-Paul might know about something). Tahoe, the Least-Authority Filesystem, currently has the following reactor-selection login in its Makefile: It seems like poll reactor could be the default reactor on platforms which support poll reactor, and then Tahoe could stop doing anything about reactor choice at all in its Makefile.
http://twistedmatrix.com/trac/ticket/2234
CC-MAIN-2018-13
refinedweb
1,788
70.43
How to Switch an Arduino Output on and Off From Your Android Mobile. Arduino for Beginners Introduction: How to Switch an Arduino Output on and Off From Your Android Mobile. Arduino for Beginners Update: See Andriod/Arduino for Beginners - Design Custom Andriod menus to switch Arduino outputs on and off. Absolutely No Programming Required for an alternative that does not require any programming at all. How Beginners can switch an Arduino output on and off from their Androd mobile using pfodApp No Soldering Required, No Android Coding Required. Also see Single Click on/off from your Android mobile using Arduino and pfodApp for an alternative that does not use a menu. Introduction This instructable follows on from Arduino for Beginners, controlled by Android, and explains a simple Arduino sketch that lets you switch a digital output on and off from your Android mobile phone. This instructable also covers debugging your sketch while connected to your mobile. Once you understand this basic sketch you can use it a basis for your own projects. No Android programming is required, but using pfodApp you can easily customize the mobile's display by changing text strings in your Arduino sketch. This project uses the same parts as Arduino for Beginners, controlled by Android and assumes you have already built that project. Note: In this instructable we are switching the Uno Led on and off (D13), if you want to switch your own led connected to D3 on and off just change the line int led = 13; to int led = 3; in the code samples. Step 1: The First Sketch unzip it to your arduino/libaries directory. (Open the Arduino IDE File->preferences window to see where your local Arduino directory is). NOTE: remove the bluetooth shield before uploading the sketch because the bluetooth shield uses the same pins the USB connection does and the programming get confused. The first example sketch is (FirstDigitalOutputSketch.ino). (See for the basics of Arduino coding) #include "pfodParser.h" // include the library pfodParser parser; // create the); } // the loop routine runs over and over again forever: void loop() { byte in = 0; byte cmd = 0; if (Serial.available()) { in = Serial.read(); // read the next char cmd = parser.parse(in); // pass it to the parser returns non-zero when a command is fully parsed if (cmd != 0) { // have parsed a complete msg { to } if ('.' == cmd) { // pfodApp } } cmd = 0; // have processed this cmd now // so clear it and wait for next one } // else no serial chars just loop } Note: in the code above all the strings are enclosed with F(“ “) This macro makes sure the strings are placed in the program FLASH where you have much more room. (See What fails when you add lots of Strings to your Arduino program.) Install pfodApp on your mobile and set up a connection to your bluetooth shield as described in the pfodAppForAndroidGettingStarted.pdf. I called my connection Uno. Click on the connection to connect and the sketch above will return the menu shown above. Clicking the “Toggle Led” menu turns the Uno led on and off. (The led is near the USB connection and partially hidden by the bluetooth shield.) If you want to switch your own led connected to D3 on and off just change the line int led = 13; to int led = 3; in the code samples and plug you led (with a resistor) into D3. The next step will have a closer look the messages that produce the menu and control the led. Step 2: The Pfod Messages, {} The next step will change the sketch to update the mobile menu as the led turns on and off. Step 3: The Second Sketch (in bold):-} The screen shots above shows what the menu and debug view look like. There are lots of other screens you can specify in your sketch, like slider menu items, multi-selection lists etc, but many projects just need a few buttons to control them and this sketch will serve as a good starting point. The next step will cover debugging your sketch on your mobile. Step 4: Debugging Your Sketch. Next Steps To learn more about coding Arduino see To learn more about pfod check out the pfod Specification and the projects on added reference to new instructable that does all the programming for you. Very simple, i liked it!
http://www.instructables.com/id/How-to-switch-an-Arduino-output-on-and-off-from-yo/
CC-MAIN-2017-43
refinedweb
722
69.52
Computational Clusters and HPCA blog about Microsoft's Compute Cluster Server product and High Performance Computing (HPC) in general. Evolution Platform Developer Build (Build: 5.6.50428.7875)2006-01-03T18:41:00ZCheck out the new Windows HPC Community site<P>Our new community portal is open! Go here: <A href=""></A> for all of the latest news, blogs and discussion forums on Microsoft Compute Cluster Server. </P><div style="clear:both;"></div><img src="" width="1" height="1">Dougli Technology Previews<DIV>The first Community Technology Preview (CTP) of Compute Cluster Server has been released to the web at <A href=""></A>. </DIV> <DIV> </DIV> <DIV>The product team will be releasing a CTP once a month up to RTM.</DIV><div style="clear:both;"></div><img src="" width="1" height="1">BrettB ports for Bioinformatics apps<P><A href=""></A></P> <P>The Cornell Theory Center has taken 10 of the top open source apps in the bioinformatics industry and ported them to Windows. They are all available as Visual Studio projects.</P> <P> </P><div style="clear:both;"></div><img src="" width="1" height="1">BrettB Gallery for Compute Cluster Server<P><A href=""></A></P> <P>Check out the GotDotNet gallery for Windows Computer Cluster Server 2003. It has labs, examples, a detailed drilldown into the SC05 demo, and more.</P> <P>We'll be posting more code to this site as we progress towards release of the product.</P><div style="clear:both;"></div><img src="" width="1" height="1">BrettB Chat on HPC and Compute Cluster Server - Jan. 13<P><A href="">Details here</A> </P> <P><A href=""></A> </P> <P> </P><div style="clear:both;"></div><img src="" width="1" height="1">BrettB your network connections to more meaningful names<DIV>Not rocket science, but very helpful...</DIV> <DIV> </DIV> <DIV>If you go to the Network Connections in the Windows Control Panel, you'll see "local area connection" and "local area connection 2" on machines with multiple NICs.</DIV> <DIV> </DIV> <DIV>On my cluster nodes I rename network connections to be "public" and "private" instead of "local area connection" and "local area connection 2". This can be done from the network connections control panel: right-click the network connection and select "rename".</DIV> <DIV> </DIV> <DIV>These names also appear in the Todo List for CCS and it makes it easy to setup the topology when they are named appropriately.</DIV><div style="clear:both;"></div><img src="" width="1" height="1">BrettB CCS log files and controlling their output quantity<DIV class=ms-PostBody> <DIV>Log files that are created by various CCS services:</DIV> <UL> <LI><STRONG>ManagementService.log,</STRONG> <U>C:\Program Files\Microsoft Compute Cluster Pack\LogFiles</U>. Exists on all CNs, useful in diagnosing compute node discovery and configuration related problems such as HN could not be found, node stays in Configuring state… <LI><STRONG>TodoList.log</STRONG>, <U>C:\Program Files\Microsoft Compute Cluster Pack\LogFiles</U>. Exists only on HN, useful in diagnosing ToDoList discovery and configuration related problems <LI><STRONG>Binlsvc.log</STRONG>, <U>C:\WINDOWS\Debug </U>(If RIS is a problem). Exists only on HN (if RIS is installed), useful in diagnosing compute RIS related problems.</LI></UL> <DIV>To control debug logging level for head node logs:<BR> cluscfg setparams eventloglevel=[information][verbose]<BR></DIV></DIV><div style="clear:both;"></div><img src="" width="1" height="1">BrettB is your friend<DIV>If you haven't tried the clusrun script that comes with the CCS install yet, you are missing out. This script comes in handy in so many ways. Simply call clusrun with the command you want run on the remote nodes.</DIV> <DIV> </DIV> <DIV>Want to copy a file to/from all nodes in the cluster? - clusrun</DIV> <DIV>Want to make a registry change to all nodes in the cluster? - clusrun</DIV> <DIV>Want to execute a program on all running nodes in a cluster? - clusrun.</DIV> <DIV> </DIV> <DIV>Of course, you can do all the things you can do through a job submit, e.g. choose a remote scheduler, filter nodes by state, give username/pwd, redirect stdout, etc. but, clusrun makes it easy to do from the cmd line.</DIV> <DIV> </DIV> <DIV>If you are administering a CCS cluster, you need to know how to use clusrun. You can find it in C:\Program Files\Microsoft Compute Cluster Pack\Bin. </DIV><div style="clear:both;"></div><img src="" width="1" height="1">BrettB first cluster job<DIV class=ms-PostBody> <DIV>OK, I've got CCS installed on my head node and compute nodes, now what?</DIV> <DIV> </DIV> <DIV>The first thing to try is "clusrun /all ver" in a command prompt. This will return the OS version for all nodes in your cluster. Check to see the version is returned in the command window for each of your nodes.</DIV> <DIV> </DIV> <DIV>Next, I recommend trying the batchpi exe at <A href=""></A> at the HPC Labs link. Copy the contents of the zipped folder to a shared folder on your head node. Use the client utilities to submit a job that executes the batchpi.exe process. Redirect stdout to a textfile in the job submission. Execute the job and your stdout textfile should contain the estimated value of pi.</DIV> <DIV>A command-line version of this could look like: "job submit /jobname:BatchPI /numprocessors:1 /workdir:\\headnode\PI\ /stdout:pi.out batchpi.exe 1000"</DIV> <DIV>The <A href="file://\\headnode\PI\">\\headnode\PI\</A> folder should be shared out and the submitting user should have read/write permissions in that folder.</DIV> <DIV> </DIV> <DIV>Now that batch jobs are working on your cluster, try an MPI job. I recommend trying a Linpack job. You can get the code from <A href=""></A>. Once you have compiled the Linpack app, a command-line version of the submission would look like:</DIV> <DIV>"job submit /jobname:linpack /numprocessors:8 /workdir:\\headnode\linpack\ mpiexec xhpl.exe hpl.dat"</DIV> <DIV>Of course, the numprocessors argument should match what's in your hpl.dat file and the linpack dir should be shared out with read/write permissions to the submitting user.</DIV> <DIV> </DIV> <DIV>If these three types of jobs run, your cluster is in good shape.</DIV></DIV><div style="clear:both;"></div><img src="" width="1" height="1">BrettB<P><SPAN style="FONT-SIZE: 10pt; FONT-FAMILY: Verdana">As you have probably heard, we are releasing a new product in 2006 called <A title=Windows Compute Cluster Server 2003</A>. -</SPAN><FONT face=Verdana size=2><SPAN style="FONT-SIZE: 10pt; FONT-FAMILY: Verdana"> <A title=</A> </SPAN></FONT><FONT face=Verdana size=2><SPAN style="FONT-SIZE: 10pt; FONT-FAMILY: Verdana">.</SPAN></FONT></P> <P><FONT face=Verdana size=2><SPAN style="FONT-SIZE: 10pt; FONT-FAMILY: Verdana"><?xml:namespace prefix = o<o:p>I'll be using this blog to post information about Compute Cluster Server (CCS). There will be generic HPC information as well as tips/tricks specific to CCS.</o:p></SPAN></FONT></P> <P><FONT face=Verdana size=2><SPAN style="FONT-SIZE: 10pt; FONT-FAMILY: Verdana"><o:p>We're very excited to bring out a new product for HPC. We think we'll be able to unlock a bunch of scenarios that will enable "personal supercomputing". Try it yourself by going to <A href=""></A> and sign up to download CCS Beta 2.</o:p></SPAN></FONT></P><div style="clear:both;"></div><img src="" width="1" height="1">BrettB
http://blogs.msdn.com/b/hpc/atom.aspx
CC-MAIN-2015-06
refinedweb
1,282
55.44
Sometimes you don't need a framework like Angular or React to demonstrate an idea or concept in JavaScript. You just want a framework-agnostic, plain JavaScript development environment to play around with things like web workers, service workers, new JavaScript constructs, or IndexedDB, for example. In this blog post, you are going to learn how to quickly prototype plain JavaScript apps using webpack 4 to create such environment with zero config and low development overhead. Webpack is a leading static module bundler for frontend apps. It is used by tools such as create-react-app to quickly scaffold frontend projects. According to the webpack documentation, since version 4.0.0, webpack doesn't require a configuration file to bundle your projects; however, the module bundler remains highly configurable to meet the increasing complexity of your projects down the line. "With webpack 4 zero config, you can stop scratching your head on how to spin a JavaScript app quickly and avoid overengineering a quick proof of concept using a framework when JavaScript is enough." You can find the final version of this exercise on the webpack-prototype repo on GitHub. However, I encourage you to read on and build the webpack app prototype gradually to better understand the heavy lifting that webpack is doing for you. Setting Up Zero Config Webpack 4 Head to your terminal and make a directory where you want to store your learning project your current working directory. Then, create a folder named webpack-prototype and make it your current working directory. You can do this easily with the following command: mkdir webpack-prototype && cd webpack-prototype This line of commands creates the webpack-prototypedirectory and then makes it the current working directory. Once there, create a new NPM project and install webpack locally along with the webpack-cli: npm init -y npm install webpack webpack-cli --save-dev webpack-cliis the tool used to run webpack on the command line. Next, create a simple file structure under this directory that resembles the following: webpack-prototype |- package.json |- /dist |- index.html |- /src |- index.js package.json is already provided to you when you created the NPM project. By default, webpack 4 will look for a src/index.js file to use as an entry point. The entry point tells webpack which module it should use to start building its internal dependency graph. From this module, webpack can infer which other modules or libraries the application depends on and include them in your bundle. Also, webpack uses dist/index.html as the default main HTML file for your application where the generated bundle will be automatically injected. Thus, the src directory holds all of your application source code (the code that you'll create from scratch, write, delete, edit, and so on). The dist directory is the distribution directory for the application. This directory holds code that has been minimized and optimized by webpack. In essence, the dist directory holds the webpack output that will be loaded in the browser once the application is run. You can create these files quickly by issuing the following commands: macOS / Linux: mkdir src dist && touch dist/index.html src/index.js Windows: mkdir src dist && echo.> dist/index.html && echo.> src/index.js mkdir is used to create directories across operating systems. However, touch is only available in Unix and Unix-like operating systems. echo is a Windows equivalent of touch. echo. creates a file with one empty line in it. Open the project in your preferred IDE or code editor. You can run code .or webstorm .to open the current working directory if you have installed the command line tools for Visual Studio Code or WebStorm. Give some life to dist/index.html by adding the following code: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Webpack Prototype</title> </head> <body> <script src="main.js"></script> </body> </html> Within the <body> tag, you load a main.js file through a <script> tag; however, you have not created such file: <script src="main.js"></script> No worries. main.js will be created automatically for you by webpack once it creates your project bundle. As a precaution, to prevent publishing your code to NPM by accident, open package.json and do the following: - Add "private": true,as a property. - Delete the "main": "index.js",line. package.json should look like this: { "name": "webpack-prototype", "version": "1.0.0", "description": "", "private": true, "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "keywords": [], "author": "", "license": "ISC", "devDependencies": { "webpack": "^4.25.1", "webpack-cli": "^3.1.2" } } Now, give some life to src/index.js. For now, add a simple message on the screen: // src/index.js const createElement = message => { const element = document.createElement("div"); element.innerHTML = message; return element; }; document.body.appendChild(createElement("Webpack lives.")); Finally, to test everything is working as intended, you need to create a bundle. This can be done by issuing the following command: npx webpack Using npx, you can emulate the same behavior of the global installation of webpack but without the actual global installation. npx uses the local version of webpack you installed earlier. If you have npm >= 5.2installed in your system, you have npxavailable. However, running this command from the command line is not efficient or too memorable. A better approach is to create a build NPM script in package.json which does the same thing as npx webpack: { "name": "webpack-prototype", "version": "1.0.0", "description": "", "private": true, "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "build": "webpack" }, "keywords": [], "author": "", "license": "ISC", "devDependencies": { "webpack": "^4.25.1", "webpack-cli": "^3.1.2" } } Now, you can run npm run build instead which is a more familiar command for JavaScript developers. Running webpack does the following: - Use src/index.jsas the entry point. - Generate dist/main.jsas the webpack output. Notice that there is a warning when you build your app: WARNING in configuration. The 'mode' option has not been set. You'll learn soon about this in this blog post. Verify that the bundle was created correctly by doing the following: - Open the distdirectory. Do you see the main.jsfile there? If yes, the output worked. - If you are curious, open main.js. Observe the file consists of a highly minimized one-line of JavaScript code. - Open dist/index.htmlin the browser. You should see Webpack lives.printed on the screen. To open dist/index.html, find the file through the file system and double-click it. Your default browser should then open the file. Change the message string in src/index.js to the following: // src/index.js const createElement = message => { const element = document.createElement("div"); element.innerHTML = message; return element; }; document.body.appendChild( createElement("Webpack lives by the love of Open Source.") ); Reload the browser tab presenting index.html. Notice that the printed message doesn't change. For it to change, you need to update your output bundle. To do this, you'll need to execute npm run build again to re-create the bundle and then refresh the page. Run the command and refresh the page. Webpack lives by the love of Open Source. should now be shown on the screen. This is not optimal. What you want is Hot Module Replacement to exchange, add, or remove modules while an application is running and without requiring a full reload. What are the benefits of enabling Hot Module Replacement for you as a developer? - During a full reload, the state of your application is lost. HRM lets you retain your app state. - By only updating what has changed in the app, you can save time. - Changes in your source CSS and JavaScript files are shown in the browser instantaneously which closely resembles an update done directly through the browser's dev tools. To enable HRM, follow this steps: - Install webpack-dev-serverwhich provides you with a simple web server with the ability to live-reload your app: npm install webpack-dev-server --save-dev - Create a start:devNPM script within package.json: { "name": "webpack-prototype", "version": "1.0.0", "description": "", "private": true, "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "build": "webpack", "start:dev": "webpack-dev-server --mode development --content-base dist/ --open --hot" }, "keywords": [], "author": "", "license": "ISC", "devDependencies": { "webpack": "^4.25.1", "webpack-cli": "^3.1.2", "webpack-dev-server": "^3.1.10" } } That start:dev NPM script is a mouthful. What's it doing? { // ... "scripts": { // ... "start:dev": "webpack-dev-server --mode development --content-base dist/ --open --hot" } // ... } webpack-dev-server: Runs the webpack dev server. --mode development: The mode configuration option tells webpack to use its built-in optimizations accordingly. development produces a much more readable output than production. If you leave this option out, the default option is production. You may also set it to none which disables any default behavior. Learn more about the differences between the webpack developmentand productionmodes here. --content-base dist/: Tells the dev server from where to serve your static content. By default, webpack-dev-server will serve the files in the current directory. However, in this case, you want the content to be served from dist/ where your index.html file is. --open: Opens the default app url in the system's default browser. Here, it's. --hot: Enables Hot Module Replacement by adding the HotModuleReplacementPlugin and switching the server to hot mode. - Run the webpack dev server: npm run start:dev Your default browser will open up, load, and present you with your app again. Do something crazy: stop the webpack-dev-server, delete the main.js file that was created earlier under the dist directory, and execute npm run start:dev again. The default browser will open again and you will see the message printed on the screen. How is that possible if you deleted main.js? webpack-dev-server watches your source files and re-compiles your bundle when those file change. However, this modified bundle is served from memory at the relative path specified in publicPath. It is not written under your dist directory. If a bundle already exists at the same URL path, by default, the in-memory bundle takes precedence. This is all taken care of automagically by specifying this line on your index.html: <script src="main.js"></script> That's it! You can now add more complex code to src/index.js or import other modules to it. Webpack will build its internal dependency graph and include all these in your final bundle. Try that out! Importing Modules Using Zero Config Webpack Webpack with Sean."; link.href = ""; link.target = "_blank"; return link; }; export default createBanner; Save the changes made to src/banner.js. Then update src/index.js as follows: // src/index.js import createBanner from "./banner.js"; const createElement = message => { const element = document.createElement("div"); element.innerHTML = message; return element; }; document.body.appendChild( createElement("Webpack lives by the love of Open Source.") ); document.body.appendChild(createBanner()); Save the changes made to src/index.js. Look at the browser. You'll now see the message Webpack lives by the love of Open Source. and a Learn Webpack with Sean hyperlink under it which on click takes you to the Webpack Learning Academy, a comprehensive webpack learning resource by Sean Larkin. The same principle", "Webpack", "Today"], "*"); link.href = ""; link.target = "_blank"; return link; }; export default createBanner; Recall that*Webpack*Today. What about CSS? Adding CSS Stylesheets to Zero Config Webpack Does adding CSS files work the same as adding JavaScript modules? Find out!", "Webpack", "Today"], "*"); link.href = ""; link.target = "_blank"; link.classList = "banner"; return link; }; export default createBanner; Save src/banner.js and... you get the following error in the command line: ERROR in ./src/banner.css 3:0 Module parse failed: Unexpected token (3:0) You may need an appropriate loader to handle this file type. A solution could be to move banner.css to the dist folder and call it from index.html using a <link> tag: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Webpack Prototype</title> <link rel="stylesheet" type="text/css" href="banner.css"> </head> <body> <script src="main.js"></script> </body> </html> This certainly works but you'll lose live reloading for that CSS file. Any changes made to dist/banner.css would require you to refresh the browser. As discussed earlier, that's not optimal. What can be done? Create a minimal configuration file to use a webpack loader to handle loading CSS files. Extending Zero Config Webpack with Minimal Configuration Under the root directory, create a webpack.config.js file: macOS / Windows: touch webpack.config.js Windows: echo.> webpack.config.js In order to import a CSS file from within a JavaScript module using webpack, you need to install and add the style-loader and css-loader to the module configuration that will live within webpack.config.js. You can do that by following these steps: - Install style-loaderand css-loader: npm install --save-dev style-loader css-loader - Once those two packages are installed, update webpack.config.js: // webpack.config.js module.exports = { module: { rules: [ { test: /\.css$/, use: ["style-loader", "css-loader"] } ] } }; The module rulesuse a regular expression to test which files it should look for and provide to the loaders specified under use. Any file that has a .cssextension is served to the style-loaderand the css-loader. Save the changes on webpack.config.js. Finally, you need to tell webpack-dev-serverto use webpack.config.jsas the configuration file through the --configoption. You do that by adding the --config webpack.config.jsoption to the start:devNPM script present in package.json: { "name": "webpack-prototype", "version": "1.0.0", "description": "", "private": true, "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "build": "webpack", "start:dev": "webpack-dev-server --mode development --content-base dist/ --open --hot --config webpack.config.js" }, "keywords": [], "author": "", "license": "ISC", "devDependencies": { "css-loader": "^1.0.1", "style-loader": "^0.23.1", "webpack": "^4.25.1", "webpack-cli": "^3.1.2", "webpack-dev-server": "^3.1.10" }, "dependencies": { "lodash": "^4.17.11" } } - Save the changes made on package.json. Stop the webpack-dev-server and execute npm run start:dev again. Observe that now the 90's Retro banner has a blue background, padding, and white text: You can use this configuration file to add any other loaders you may need to address needs such as compiling SCSS to CSS, transpiling JavaScript, loading image assets, and many more. There are lots of webpack loaders to address different project tasks. Check the full webpack loader list. "Learn how to create a minimal configuration extension to Webpack Zero Config to handle loading CSS files easily." For good measure, let's try loading images into our files as they are oftentimes part of a prototype. A much better approach is to use images from a CDN. Start by downloading the logo of webpack available here. Save it as webpack-logo.png and move it to the src directory. Update src/index.js to import the image as follows: // src/index.js import createBanner from "./banner.js"; import WebpackImg from "./webpack-logo.png"; const createElement = message => { const element = document.createElement("div"); element.innerHTML = message; return element; }; document.body.appendChild( createElement("Webpack lives by the love of Open Source.") ); document.body.appendChild(createBanner()); Save the file. As you may be thinking, an error is shown in the command line about this type of file, a PNG image, not being able to be loaded: ERROR in ./src/webpack-logo.png 1:0 Module parse failed: Unexpected character '�' (1:0) You may need an appropriate loader to handle this file type. As with CSS, you need an image loader. Follow these steps to add it to your webpack module: - Install the file-loaderwebpack loader: npm install --save-dev file-loader - Add a new rule to module ruleswithin webpack.config.js: // webpack.config.js module.exports = { module: { rules: [ { test: /\.css$/, use: ["style-loader", "css-loader"] }, { test: /\.(png|svg|jpg|jpeg|gif)$/, use: ["file-loader"] } ] } }; This time the regular expression in the rule test is looking for files that have popular image file extensions, such as .png and .jpeg. - Restart the webpack dev server. The file loading error is now gone. - Update src/index.jsto make use the of the image: // src/index.js import createBanner from "./banner.js"; import WebpackImg from "./webpack("Webpack lives by the love of Open Source.") ); document.body.appendChild(createBanner()); document.body.appendChild(createImage(WebpackImg)); Take a look at the browser. The webpack logo now loads on the screen: Your Webpack Zero Config configuration extension now includes the ability to load images. The config file still remains minimal, lightweight, and easy to understand. Building a JavaScript App with Webpack 4 As a final note of this process, once you want to build your app again, simply execute npm run build. The images imported into JavaScript modules will be processed and added to the dist output directory. Any image variables will have the final URL of that post-processing image, which may look like this: dist/e5e245191fd9c9812bc78bd0cea9a12c.jpeg You can also use your images within CSS files to add them as element backgrounds, for example. Conclusion For a simple and quick JavaScript prototype, JavaScript, CSS, and image assets are plenty to get a lot done. You are now empowered with knowledge on how to use webpack 4 to create a development environment for JavaScript projects with zero config needed. If the project requires CSS, you can extend the zero config to use CSS and file loaders to create beautiful JavaScript apps fast. As an alternative, if you prefer to, feel free to use cloud environments that use webpack 4 such as StackBlitz. You can find a polished version of this exercise on the webpack-prototype repo on GitHub. The final version uses Google Fonts and an improved structure to create a much better looking webpack banner using webpack! .
https://auth0.com/blog/zero-config-javascript-app-prototyping-with-webpack/
CC-MAIN-2020-16
refinedweb
2,986
60.11
Raspberry Pi Lesson 9 Screen04 The Screen04 lesson builds on Screen03, by teaching how to manipulate text. It is assumed you have the code for the Lesson 8: Screen03 operating system as a basis. 1 String Manipulation Variadic functions look much less intuitive in assembly code. Nevertheless, they are useful and powerful concepts. Being able to draw text is lovely, but unfortunately at the moment you can only draw strings which are already prepared. This is fine for displaying something like the command line, but ideally we would like to be able to display and text we so desire. As per usual, if we put the effort in and make an excellent function that does all the string manipulation we could ever want, we get much easier code later on in return. Once such complicated function in C programming is sprintf. This function generates a string based on a description given as another string and additional arguments. What is interesting about this function is that it is variadic. This means that it takes a variable number of parameters. The number of parameters depends on the exact format string, and so cannot be determined in advance. The full function has many options, and I list a few here. I've highlighted the ones which we will implement in this tutorial, though you can try to implement more. The function works by reading the format string, and then interpreting it using the table below. Once an argument is used, it is not considered again. The return value of the function is the number of characters written. If the method fails, a negative number is returned. Further to the above, many additional tweaks exist to the sequences, such as specifying minimum length, signs, etc. More information can be found at sprintf - C++ Reference. Here are a few examples of calls to the method and their results to illustrate its use. Hopefully you can already begin to see the usefulness of the function. It does take a fair amount of work to program, but our reward is a very general function we can use for all sorts of purposes. 2 Division Division is the slowest and most complicated of the basic mathematical operators. It is not implemented directly in ARM assembly code because it takes so long to deduce the answer, and so isn't a 'simple' operation. While this function does look very powerful, it also looks very complicated. The easiest way to deal with its many cases is probably to write functions to deal with some common tasks it has. What would be useful would be a function to generate the string for a signed and an unsigned number in any base. So, how can we go about doing that? Try to devise an algorithm quickly before reading on. The easiest way is probably the exact way I mentioned in Lesson 1: OK01, which is the division remainder method. The idea is the following: - Divide the current value by the base you're working in. - Store the remainder. - If the new value is not 0, go to 1. - Reverse the order of the remainders. This is the answer. For example: So the answer is 100010012 The unfortunate part about this procedure is that it unavoidably uses division. Therefore, we must first contemplate division in binary. For a refresher on long division expand the box below. Let's suppose we wish to divide 4135 by 17. 0243 r 4 17)4135 0 0 × 17 = 0000 4135 4135 - 0 = 4135 34 200 × 17 = 3400 735 4135 - 3400 = 735 68 40 × 17 = 680 55 735 - 680 = 55 51 3 × 17 = 51 4 55 - 51 = 4 Answer: 243 remainder 4 First of all we would look at the top digit of the dividend. We see that the smallest multiple of the divisor which is less or equal to it is 0. We output a 0 to the result. Next we look at the second to top digit of the dividend and all higher digits. We see the smallest multiple of the divisor which is less than or equal is 34. We output a 2 and subtract 3400. Next we look at the third digit of the dividend and all higher digits. The smallest multiple of the divisor that is less than or equal to this is 68. We output 4 and subtract 680. Finally we look at all remaining digits. We see that the lowest multiple of the divisor that is less than the remaining digits is 51. We output a 3, subtract 51. The result of the subtraction is our remainder. To implement division in assembly code, we will implement binary long division. We do this because the numbers are stored in binary, which gives us easy access to the all important bit shift operations, and because division in binary is simpler than in any higher base due to the much lower number of cases. 1011 r 1 1010)1101111 1010 11111 1010 1011 1010 1 This example shows how binary long division works. You simply shift the divisor as far right as possible without exceeding the dividend, output a 1 according to the poisition and subtract the number. Whatever remains is the remainder. In this case we show 11011112 ÷ 10102 = 10112 remainder 12. In decimal, 111 ÷ 10 = 11 remainder 1. Try to implement long division yourself now. You should write a function, DivideU32 which divides r0 by r1, returning the result in r0, and the remainder in r1. Below, we will go through a very efficient implementation. function DivideU32(r0 is dividend, r1 is divisor) set shift to 31 set result to 0 while shift ≥ 0 if dividend ≥ (divisor << shift) then set dividend to dividend - (divisor << shift) set result to result + 1 end if set result to result << 1 set shift to shift - 1 loop return (result, dividend) end function This code does achieve what we need, but would not work as assembly code. Our problem comes from the fact that our registers only hold 32 bits, and so the result of divisor << shift may not fit in a register (we call this overflow). This is a real problem. Did your solution have overflow? Fortunately, an instruction exists called clz or count leading zeros, which counts the number of zeros in the binary representation of a number starting at the top bit. Conveniently, this is exactly the number of times we can shift the register left before overflow occurs. Another optimisation you may spot is that we compute divisor << shift twice each loop. We could improve upon this by shifting the divisor at the beginning, then shifting it down at the end of each loop to avoid any need to shift it elsewhere. Let's have a look at the assembly code to make further improvements. .globl DivideU32 DivideU32: result .req r0 remainder .req r1 shift .req r2 current .req r3 clz shift,r1 lsl current,r1,shift mov remainder,r0 mov result,#0 divideU32Loop$: cmp shift,#0 blt divideU32Return$ cmp remainder,current addge result,result,#1 subge remainder,current sub shift,#1 lsr current,#1 lsl result,#1 b divideU32Loop$ divideU32Return$: .unreq current mov pc,lr .unreq result .unreq remainder .unreq shift clz dest,src stores the number of zeros from the top to the first one of register dest to register src You may, quite rightly, think that this looks quite efficient. It is pretty good, but division is a very expensive operation, and one we may wish to do quite often, so it would be good if we could improve the speed in any way. When looking to optimise code with a loop in it, it is always important to consider how many times the loop must run. In this case, the loop will run a maximum of 31 times for an input of 1. Without making special cases, this could often be improved easily. For example when dividing 1 by 1, no shift is required, yet we shift the divisor to each of the positions above it. This could be improved by simply using the new clz command on the dividend and subtracting this from the shift. In the case of 1 ÷ 1, this means shift would be set to 0, rightly indicating no shift is required. If this causes the shift to be negative, the divisor is bigger than the dividend and so we know the result is 0 remainder the dividend. Another quick check we could make is if the current value is ever 0, then we have a perfect division and can stop looping. .globl DivideU32 DivideU32: result .req r0 remainder .req r1 shift .req r2 current .req r3 clz shift,r1 clz r3,r0 subs shift,r3 lsl current,r1,shift mov remainder,r0 mov result,#0 blt divideU32Return$ divideU32Loop$: cmp remainder,current blt divideU32LoopContinue$ add result,result,#1 subs remainder,current lsleq result,shift beq divideU32Return$ divideU32LoopContinue$: subs shift,#1 lsrge current,#1 lslge result,#1 bge divideU32Loop$ divideU32Return$: .unreq current mov pc,lr .unreq result .unreq remainder .unreq shift Copy the code above to a file called 'maths.s'. 3 Number Strings Now that we can do division, let's have another look at implementing number to string conversion. The following is pseudo code to convert numbers from registers into strings in up to base 36. By convention, a % b means the remainder of dividing a by b. function SignedString(r0 is value, r1 is dest, r2 is base) if value ≥ 0 then return UnsignedString(value, dest, base) otherwise if dest > 0 then setByte(dest, '-') set dest to dest + 1 end if return UnsignedString(-value, dest, base) + 1 end if end function function UnsignedString(r0 is value, r1 is dest, r2 is base) set length to 0 do set (value, rem) to DivideU32(value, base) if rem > 10 then set rem to rem + '0' otherwise set rem to rem - 10 + 'a' if dest > 0 then setByte(dest + length, rem) set length to length + 1 while value > 0 if dest > 0 then ReverseString(dest, length) return length end function function ReverseString(r0 is string, r1 is length) set end to string + length - 1 while end > start set temp1 to readByte(start) set temp2 to readByte(end) setByte(start, temp2) setByte(end, temp1) set start to start + 1 set end to end - 1 end while end function In a file called 'text.s' implement the above. Remember that if you get stuck, a full solution can be found on the downloads page. 4 Format Strings Let's get back to our string formatting method. Since we're programming our own operating system, we can add or change formatting rules as we please. We may find it useful to add a %b operation that outputs a number in binary, and if you're not using null terminated strings, you may wish to alter the behaviour of %s to take the length of the string from another argument, or from a length prefix if you wish. I will use a null terminator in the example below. One of the main obstacles to implementing this function is that the number of arguments varies. According to the ABI, additional arguments are pushed onto the stack before calling the method in reverse order. So, for example, if we wish to call our method with 8 parameters; 1,2,3,4,5,6,7 and 8, we would do the following: - Set r0 = 5, r1 = 6, r2 = 7, r3 = 8 - Push {r0,r1,r2,r3} - Set r0 = 1, r1 = 2, r2 = 3, r3 = 4 - Call the function - Add sp,#4*4 Now we must decide what arguments our function actually needs. In my case, I used the format string address in r0, the length of the format string in r1, the destination string address in r2, followed by the list of arguments required, starting in r3 and continuing on the stack as above. If you wish to use a null terminated format string, the parameter in r1 can be removed. If you wish to have a maximum buffer length, you could store this in r3. As an additional modification, I think it is useful to alter the function so that if the destination string address is 0, no string is outputted, but an accurate length is still returned, so that the length of a formatted string can be accurately determined. If you wish to attempt the implementation on your own, try it now. If not, I will first construct the pseudo code for the method, then give the assembly code implementation. function StringFormat(r0 is format, r1 is formatLength, r2 is dest, ...) set index to 0 set length to 0 while index < formatLength if readByte(format + index) = '%' then set index to index + 1 if readByte(format + index) = '%' then if dest > 0 then setByte(dest + length, '%') set length to length + 1 otherwise if readByte(format + index) = 'c' then if dest > 0 then setByte(dest + length, nextArg) set length to length + 1 otherwise if readByte(format + index) = 'd' or 'i' then set length to length + SignedString(nextArg, dest, 10) otherwise if readByte(format + index) = 'o' then set length to length + UnsignedString(nextArg, dest, 8) otherwise if readByte(format + index) = 'u' then set length to length + UnsignedString(nextArg, dest, 10) otherwise if readByte(format + index) = 'b' then set length to length + UnsignedString(nextArg, dest, 2) otherwise if readByte(format + index) = 'x' then set length to length + UnsignedString(nextArg, dest, 16) otherwise if readByte(format + index) = 's' then set str to nextArg while getByte(str) != '\0' if dest > 0 then setByte(dest + length, getByte(str)) set length to length + 1 set str to str + 1 loop otherwise if readByte(format + index) = 'n' then setWord(nextArg, length) end if otherwise if dest > 0 then setByte(dest + length, readByte(format + index)) set length to length + 1 end if set index to index + 1 loop return length end function Although this function is massive, it is quite straightforward. Most of the code goes into checking all the various conditions, the code for each one is simple. Further, all the various unsigned integer cases are the same but for the base, and so can be summarised in assembly. This is given below. .globl FormatString FormatString: format .req r4 formatLength .req r5 dest .req r6 nextArg .req r7 argList .req r8 length .req r9 push {r4,r5,r6,r7,r8,r9,lr} mov format,r0 mov formatLength,r1 mov dest,r2 mov nextArg,r3 add argList,sp,#7*4 mov length,#0 formatLoop$: subs formatLength,#1 movlt r0,length poplt {r4,r5,r6,r7,r8,r9,pc} ldrb r0,[format] add format,#1 teq r0,#'%' beq formatArg$ formatChar$: teq dest,#0 strneb r0,[dest] addne dest,#1 add length,#1 b formatLoop$ formatArg$: subs formatLength,#1 movlt r0,length poplt {r4,r5,r6,r7,r8,r9,pc} ldrb r0,[format] add format,#1 teq r0,#'%' beq formatChar$ teq r0,#'c' moveq r0,nextArg ldreq nextArg,[argList] addeq argList,#4 beq formatChar$ teq r0,#'s' beq formatString$ teq r0,#'d' beq formatSigned$ teq r0,#'u' teqne r0,#'x' teqne r0,#'b' teqne r0,#'o' beq formatUnsigned$ b formatLoop$ formatString$: ldrb r0,[nextArg] teq r0,#0x0 ldreq nextArg,[argList] addeq argList,#4 beq formatLoop$ add length,#1 teq dest,#0 strneb r0,[dest] addne dest,#1 add nextArg,#1 b formatString$ formatSigned$: mov r0,nextArg ldr nextArg,[argList] add argList,#4 mov r1,dest mov r2,#10 bl SignedString teq dest,#0 addne dest,r0 add length,r0 b formatLoop$ formatUnsigned$: teq r0,#'u' moveq r2,#10 teq r0,#'x' moveq r2,#16 teq r0,#'b' moveq r2,#2 teq r0,#'o' moveq r2,#8 mov r0,nextArg ldr nextArg,[argList] add argList,#4 mov r1,dest bl UnsignedString teq dest,#0 addne dest,r0 add length,r0 b formatLoop$ 5 Convert OS Feel free to try using this method however you wish. As an example, here is the code to generate a conversion chart from base 10 to binary to hexadecimal to octal and to ASCII. Delete all code after bl SetGraphicsAddress in 'main.s' and replace it with the following: mov r4,#0 loop$: ldr r0,=format mov r1,#formatEnd-format ldr r2,=formatEnd lsr r3,r4,#4 push {r3} push {r3} push {r3} push {r3} bl FormatString add sp,#16 mov r1,r0 ldr r0,=formatEnd mov r2,#0 mov r3,r4 cmp r3,#768-16 subhi r3,#768 addhi r2,#256 cmp r3,#768-16 subhi r3,#768 addhi r2,#256 cmp r3,#768-16 subhi r3,#768 addhi r2,#256 bl DrawString add r4,#16 b loop$ .section .data format: .ascii "%d=0b%b=0x%x=0%o='%c'" formatEnd: Can you work out what will happen before testing? Particularly what happens for r3 ≥ 128? Try it on the Raspberry Pi to see if you're right. If it doesn't work, please see our troubleshooting page. When it does work, congratulations, you've completed the Screen04 tutorial, and reached the end of the screen series! We've learned about pixels and frame buffers, and how these apply to the Raspberry Pi. We've learned how to draw simple lines, and also how to draw characters, as well as the invaluable skill of formatting numbers into text. We now have all that you would need to make graphical output on an Operating System. Can you make some more drawing methods? What about 3D graphics? Can you implement a 24bit frame buffer? What about reading the size of the framebuffer in from the command line? The next series is the Input series, which teaches how to use the keyboard and mouse to really get towards a traditional console computer.
http://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/os/screen04.html
CC-MAIN-2015-18
refinedweb
2,940
60.55
- summary: Avformat cannot stream continuously --> Avformat can't stream continuously After a several hours of continuously playing (about 10-12h) melted stops and print out "[mpegts @ 0x7f18f4391480] Application provided invalid, non monotonically increasing dts to muxer in stream 1: 4026529920 >= -4026531840" I use avformat as a consumer and a short avi (2-3min duration) movies. Eof is set to loop. I also tried to play it using swig/perl script and I got the same result. My perl consume settings are: $c = new mlt::FilteredConsumer( $profile, "avformat:udp://192.168.90.102:1234"); $c->set( "real_time", 1 ); $c->set( "terminate_on_pause", 0 ); $c->set( "f", "mpegts" ); $c->set( "vcodec", "mpeg4" ); $c->set( "b", "3000k" ); $c->set( "acodec", "aac" ); $c->set( "ab", "128k" ); $c->set( "samplerate", 48000); $c->set( "channels",2); also there is [mpegts @ 0x7f18f4391480] Encoder did not produce proper pts, making some up. [aac @ 0x7f18f43a57e0] Que input is backward in time We experienced the same bug trying different codecs and formats. After several hours of debugging we think tracked it down to a bug in ffmpeg in a deprecated API. Melt uses the avcodec_encode_audio function, which is deprecated since beginning of 2012. ffmpegs APIchanges file states: 2012-01-15 - lavc 53.56.105 / 53.34.0 New audio encoding API: 67f5650 / b2c75b6 Add CODEC_CAP_VARIABLE_FRAME_SIZE capability for use by audio encoders. 67f5650 / 5ee5fa0 Add avcodec_fill_audio_frame() as a convenience function. 67f5650 / b2c75b6 Add avcodec_encode_audio2() and deprecate avcodec_encode_audio(). Add AVCodec.encode2(). While deprecating the old API they introduced the occurring bug by using a 32bit integer for timestamps instead of 64bit like everywhere else in the code. Using plain ffmpeg doesn't trigger this bug, since they don't use the compatibility API. The patch to fix this is simple. I filed a bug upstream: diff -ur ffmpeg-1.2.1/libavcodec/internal.h ffmpeg-1.2.1-64bit-samplecount/libavcodec/internal.h --- ffmpeg-1.2.1/libavcodec/internal.h 2013-06-30 16:21:55.448676657 +0200 +++ ffmpeg-1.2.1-64bit-samplecount/libavcodec/internal.h 2013-06-30 16:24:37.165331386 +0200 @@ -67,7 +67,7 @@ * Internal sample count used by avcodec_encode_audio() to fabricate pts. * Can be removed along with avcodec_encode_audio(). */ - int sample_count; + int64_t sample_count; #endif /** Melts avformat consumer is somewhat to conservative regarding ffmpeg API versions. The "new" encode API (avcodec_encode_audio2) will only be used, if LIBAVCODECVERSION >= 55 although it's available since ffmpeg 1.0. The current stable ffmpeg contains libavcodec version 54. 92.100. A check whether the API version is greater or equal 54 should be sufficient. At least for me it compiles and works as intended. Maybe that works for all API version checks in the avformat consumer and producer. Regarding API version usage, libav and ffmpeg went through a lot of API changes, and I chose to wait a little for things to settle and then make a major update wherein it was more manageable to use old vs. new API rather broadly. There is a lot to balance between versions mlt tries to support. Also, I prefer not to use a new API as soon as it becomes available because it might not be "fully baked." However, in this case, I am testing a change to use encode_audio2() for LIBAVCODEC_VERSION_MAJOR >= 54. Git commit eceaa40d6b3bef3866f8f4bf07df6b2cb0a58b3c switches to new avcodec_encode_audio2() API for libavcodec >= 54. Log in to post a comment.
https://sourceforge.net/p/mlt/bugs/175/
CC-MAIN-2017-26
refinedweb
555
57.06
C+: /* Anonymous functions */ #include <boost/lambda/lambda.hpp> #include <boost/lambda/if.hpp> #include <iostream> #include <vector> #include <algorithm> int main() { std::vector<int> v; v.push_back(1); v.push_back(3); v.push_back(2); // boost::lambda::_1 substitutes the first argument // can't use std::endl instead of \n because of type issues (see boost c++ libraries page 27) std::for_each(v.begin(), v.end(), std::cout << boost::lambda::_1 << "\n"); std::cout << std::endl; // also available: if_then_else, if_then_else_return std::for_each(v.begin(), v.end(), boost::lambda::if_then(boost::lambda::_1 > 1, std::cout << boost::lambda::_1 << "\n")); std::cout << std::endl; // Visual C++ 2010 version (no Boost.Lambda required - but not part of standard yet) std::for_each(v.begin(), v.end(), [] (int i) { if (i > 1) std::cout << i << std::endl; }); } First we populate the vector with the numbers 1, 3, 2 in that order. We then print them out using three different techniques: - Technique 1 uses Boost to create a lambda function. The for each loop provides the value of the current vector item as the first argument to the function, and the _1placeholder captures the value of this argument. Thus, the numbers are printed in sequence. - Technique 2 uses the more complex if_then structure to only print numbers greater than 1, once again in order. This prints 3, 2. - Technique 3 reproduces the results of technique 2 in much simpler terms. We use a C++11 lambda function which receives the current vector item value in i and prints it if greater than 1. This prints 3, 2. I think it would be generally agreed that the last version is both the most readable and the most flexible. Let’s have a look at the lambda function definition in more detail: [] (int i) { if (i > 1) std::cout << i << std::endl; } The square brackets indicate that we will now define a lambda function. The bracketed part which follows is the list of arguments to be received by the function – this can be omitted altogether if there are no arguments (a set of () brackets on its own is not required as it is in a regular function definition). Finally there is the regular body of code inside the braces. Apart from readability, what is the benefit? Well, if you are only going to use a function once, and particularly if it a short function, it is rather cumbersome to define it separately in the code when its only real relevance is to the one place where you use it. We could have equally used a function object like this for technique 3: void printGreaterThanOne(int i) { if (i > 1) std::cout << i << std::endl; } ... std::for_each(v.begin(), v.end(), printGreaterThanOne); but that is less convenient and less readable. Capture Semantics Let us take a slightly more tricky example: #include <boost/lambda/lambda.hpp> #include <boost/bind.hpp> #include <string> #include <vector> #include <iostream> #include <algorithm> void pb(std::string s, std::vector<int> &sizes) { sizes.push_back(s.size()); } int main() { std::vector<std::string> strings; strings.push_back("Some"); strings.push_back("Random"); strings.push_back("Text"); std::vector<int> sizes; // Technique 1: Iterator for loop for (std::vector<std::string>::iterator it = strings.begin(); it != strings.end(); ++it) sizes.push_back(it->size()); // Technique 2: Boost.Bind with function object std::for_each(strings.begin(), strings.end(), boost::bind(pb, _1, boost::ref(sizes))); // Technique 3: C++11 Lambda function without capture semantic std::transform(strings.begin(), strings.end(), std::back_inserter(sizes), [] (std::string s) -> int { return s.size(); }); // Technique 4: C++11 Lambda function with capture semantic std::for_each(strings.begin(), strings.end(), [&] (std::string s) { sizes.push_back(s.size()); }); // Print technique 1: Iterator for loop for (std::vector<int>::iterator it = sizes.begin(); it != sizes.end(); ++it) std::cout << *it << std::endl; // Print technique 2: C++11 Lambda function std::for_each(sizes.begin(), sizes.end(), [] (int s) { std::cout << s << std::endl; }); } This code shows four different ways to populate an STL vector with the lengths of strings stored in another STL vector. Let’s take a look: Technique 1 would be the old-school approach (note in C++11 you can use the auto keyword instead of explicitly specifying the iterator type in the for loop definition, but I have used the more verbose original version here). It simply iterates over the strings vector and pushes the length of each item onto the sizes vector. Technique 2 uses Boost.Bind to bind each string one at a time to the first argument of the function object pb, together with a reference to the sizes vector so that pb can push the calculated string length onto it. Technique 3 uses a C++11 lambda function which takes a string and simply returns its length. STL then uses back_inserter to push the returned value into the sizes vector. Technique 4 uses a C++11 lambda function which does the string length calculation and pushes it onto the sizes vector itself. Techniques 2 and 4 are analagous here except that technique 4 does not require the sizes vector as a second argument. Instead, it uses a feature called capture semantics to specify that the lambda function would like access to all of the variables in the parent scope, by reference. This is done with the [&] specifier shown above. Parent scope means precisely that: the set of braces which are the immediate parent within which the lambda function is defined. This can be at the class or function scope, or at an arbitrary sub-scope defined within a function (which is a great technique for limiting the lambda function to being able to access only certain variables in the function). Technique 3 does not capture anything from the parent scope, specified with []. Why does technique 3 not need to capture any variables while technique 4 does? The key difference is that while technique 3’s lambda function merely calculates and returns the size of the string passed as its argument, technique 4 additionally pushes back the value onto the sizes vector, but this is not passed as an argument, therefore it needs to access it from the parent scope. Technique 3’s lambda function does not access anything besides its own argument. As it does not capture anything, if it tried to do so, an error would occur. Capture Semantics In Detail As well as no capture ( []) and capture all by reference ( [&]), you can also capture variables in the parent scope by value using the [=] specifier. This will naturally prevent those variables from being modified within the function. In addition, you can specify lists of variables to capture. If a variable is preceded by &, it is captured by reference, otherwise it is captured by value: [&foo, bar, &baz] Here, foo and baz are captured by reference, while bar is captured by value. No other variables from the parent scope are accessible. If you want to be able to access all members of the enclosing class, use: [this] This also works if you are passing a lambda function as an argument to a function in a different class – the captured class will be the one the lambda function was originally defined in. Finally, you can specify that all variables are to be captured by reference or value, with the exception of a specified list, as follows: [&, a, b, c] captures everything in the parent scope by reference except for a, b and c which are captured by value. [=, a, b, c] captures everything in the parent scope by value except for a, b and c which are captured by reference. Returning values from a lambda function As we saw in the earlier examples, lambda functions can return values in the same way as regular functions. The syntax is as follows: [] (std::string s) -> int { return s.size(); } ie. you use the construct "-> returntype" after the list of function arguments. If no return type is supplied, it is assumed to be void. Note that in cases where it is clear what the return type should be, it may also be omitted, for example: int x = [] (int a, int b) { return a + b; }(1, 2); Here the return type must be an int, so it does not need to be specified. A Note On Boost.Function Boost.Function is compatible with C++11 Lambda functions. For example: struct SomeStruct { boost::function<void (Button &)> onClick; } myStruct; ... class SomeClass { void SomeFunc() { myStruct.onClick = [this] (Button &b) { ... }; } }; works as you would expect. More Information I hope you found this brief introduction to C++11 lambda functions useful. There is plenty more to learn about Lambda functions in C++11 and a good starting point is MSDN’s Lambda Expression Syntax page. If you are feeling particularly masochastic, Examples of Lambda Expressions is an excellent and thorough read. Good luck! Very good information. Lucky me I discovered your site by accident (stumbleupon). I havce saved it for later! Note that along with lambda functions, C++11 has come with std::function.
https://katyscode.wordpress.com/2012/09/22/c11-about-lambda-functions/
CC-MAIN-2018-17
refinedweb
1,502
63.29
Warning: this page refers to an old version of SFML. Click here to switch to the latest version. Web requests with HTTP Introduction SFML provides a simple HTTP client class which you can use to communicate with HTTP servers. "Simple" means that it supports the most basic features of HTTP: POST, GET and HEAD request types, accessing HTTP header fields, and reading/writing the pages body. If you need more advanced features, such as secured HTTP (HTTPS) for example, you're better off using a true HTTP library, like libcurl or cpp-netlib. For basic interaction between your program and an HTTP server, it should be enough. sf::Http To communicate with an HTTP server you must use the sf::Http class. #include <SFML/Network.hpp> sf::Http http; http.setHost(""); // or sf::Http http(""); Note that setting the host doesn't trigger any connection. A temporary connection is created for each request. The only other function in sf::Http, sends requests. This is basically all that the class does. sf::Http::Request request; // fill the request... sf::Http::Response response = http.sendRequest(request); Requests An HTTP request, represented by the sf::Http::Request class, contains the following information: - The method: POST (send content), GET (retrieve a resource), HEAD (retrieve a resource header, without its body) - The URI: the address of the resource (page, image, ...) to get/post, relative to the root directory - The HTTP version (it is 1.0 by default but you can choose a different version if you use specific features) - The header: a set of fields with key and value - The body of the page (used only with the POST method) sf::Http::Request request; request.setMethod(sf::Http::Request::Post); request.setUri("/page.html"); request.setHttpVersion(1, 1); // HTTP 1.1 request.setField("From", "me"); request.setField("Content-Type", "application/x-www-form-urlencoded"); request.setBody("para1=value1¶m2=value2"); sf::Http::Response response = http.sendRequest(request); SFML automatically fills mandatory header fields, such as "Host", "Content-Length", etc. You can send your requests without worrying about them. SFML will do its best to make sure they are valid. Responses If the sf::Http class could successfully connect to the host and send the request, a response is sent back and returned to the user, encapsulated in an instance of the sf::Http::Response class. Responses contain the following members: - A status code which precisely indicates how the server processed the request (OK, redirected, not found, etc.) - The HTTP version of the server - The header: a set of fields with key and value - The body of the response sf::Http::Response response = http.sendRequest(request); std::cout << "status: " << response.getStatus() << std::endl; std::cout << "HTTP version: " << response.getMajorHttpVersion() << "." << response.getMinorHttpVersion() << std::endl; std::cout << "Content-Type header:" << response.getField("Content-Type") << std::endl; std::cout << "body: " << response.getBody() << std::endl; The status code can be used to check whether the request was successfully processed or not: codes 2xx represent success, codes 3xx represent a redirection, codes 4xx represent client errors, codes 5xx represent server errors, and codes 10xx represent SFML specific errors which are not part of the HTTP standard. Example: sending scores to an online server Here is a short example that demonstrates how to perform a simple task: Sending a score to an online database. #include <SFML/Network.hpp> #include <sstream> void sendScore(int score, const std::string& name) { // prepare the request sf::Http::Request request("/send-score.php", sf::Http::Request::Post); // encode the parameters in the request body std::ostringstream stream; stream << "name=" << name << "&score=" << score; request.setBody(stream.str()); // send the request sf::Http http(""); sf::Http::Response response = http.sendRequest(request); // check the status if (response.getStatus() == sf::Http::Response::Ok) { // check the contents of the response std::cout << response.getBody() << std::endl; } else { std::cout << "request failed" << std::endl; } } Of course, this is a very simple way to handle online scores. There's no protection: Anybody could easily send a false score. A more robust approach would probably involve an extra parameter, like a hash code that ensures that the request was sent by the program. That is beyond the scope of this tutorial. And finally, here is a very simple example of what the PHP page on server might look like. <?php $name = $_POST['name']; $score = $_POST['score']; if (write_to_database($name, $score)) // this is not a PHP tutorial :) { echo "name and score added!"; } else { echo "failed to write name and score to database..."; } ?>
https://www.sfml-dev.org/tutorials/2.3/network-http.php
CC-MAIN-2018-05
refinedweb
746
57.16
Archived:Find bluetooth devices silently) Introduction Here is a package containing a source code and a sis file of an implementation of silent bluetooth devices search. It is a simple module which provides a single method, called discover. Overview This method can be called asynchronously (by passing a callable object as parameter) or synchronously (without parameter). It returns a list containing all found bluetooth devices MAC addresses. Code Snippets Here is an example of how to use the bluetooth search method synchronously. import silent_bt print silent_bt.discover() Here is an example of how to use the bluetooth search method asynchronously. def cb_func(devices): print devices silent_bt.discover(cb_func) With this method we can choose a device to be searched like in the example below: import silent_bt def watch_device(device): found_devices = [] if type(device) == str: if device in silent_bt.discover(): found_devices.append(device) return found_devices elif type(device) == list: for dev in device: if dev in silent_bt.discover(): found_devices.append(dev) return found_devices return None def find(device): device_found = watch_device(device) if device_found is not None: print "device found: %s"% device_found[0] else: find(device) find("00:1f:5b:df:21:6c") Output device found: 00:1f:5b:df:21:6c or verify if a device has entered or has exited the range area of the searching device: import silent_bt old_list = [] new_list = [] def observer(): global old_list global new_list new_list = silent_bt.discover() for dev in old_list: if dev not in new_list: print "%s has exited" % dev for dev in new_list: if dev not in old_list: print "%s has entered" % dev old_list = new_list observer() observer() Output 00:1f:5b:df:21:6c has entered 2c:1d:aa:cb:22:2c has entered 00:1f:5b:df:21:6c has exited 16 Sep 2009
http://developer.nokia.com/community/wiki/Archived:Find_bluetooth_devices_silently
CC-MAIN-2014-15
refinedweb
290
50.77
Perl::Critic::Policy::Compatibility::ProhibitUnixDevNull - don't use explicit /dev/null This policy is part of the Perl::Critic::Pulp add-on. It ask you to not to use filename /dev/null explicitly, but instead File::Spec->devnull() for maximum portability across operating systems. This policy is under the maintenance theme (see "POLICY THEMES" in Perl::Critic) on the basis that even if you're on a Unix system now you never know where your code might travel in the future. devnull() is new in File::Spec version 0.8, so you should require that version (it's included in Perl 5.6.0 and up). The checks for /dev/null are unsophisticated. A violation is reported for any string /dev/null, possibly with an open style mode part, and any qw containing /dev/null. open my $fh, '< /dev/null'; # bad do_something ("/dev/null"); # bad foreach my $file (qw(/dev/null /etc/passwd)) # bad String comparisons are allowed because they're not uses of /dev/null as such but likely some sort of cross-platform check. if ($f eq '/dev/null') { ... } # ok return ($f ne '>/dev/null'); # ok /dev/null as just part of a string is allowed, including things like backticks and system. print "Flames to /dev/null please\n" # ok system ('rmdir /foo/bar >/dev/null 2>&1'); # ok $hi = `echo hi </dev/null`; # ok Whether /dev/null is a good idea in such command strings depends what sort of shell you reach with that command and how much of Unix it might emulate on a non-Unix system. If you only ever use a system with /dev/null or if everything else you write is hopelessly wedded to Unix anyway then you can disable ProhibitUnixDevNull from your .perlcriticrc in the usual way (see "CONFIGURATION" in Perl::Critic), [-Compatibility::ProhibitUnixDevNull] Perl::Critic::Pulp, Perl::Critic, File::Spec <>.
http://search.cpan.org/~kryde/Perl-Critic-Pulp/lib/Perl/Critic/Policy/Compatibility/ProhibitUnixDevNull.pm
CC-MAIN-2015-35
refinedweb
309
52.6
So I’m certain this will be met with mixed response, because really .Net already has several decent BDD frameworks and many of you will chastise me for adding yet another framework when really BDD has nothing to do with what testing framework you use. So why you ask? - Most of the BDD frameworks I’ve looked at are Acceptance style and trying to make stories into executable code (NBehave, StoryTeller, Fitnesse.Net ,Acceptance, etc). I want something that describes to other developers in a behavior centric way what my code is doing (like RSpec’s default DSL does). This is not aimed at business analysts. - The other RSpec style framework for C# I’ve played with while very nice, did not go over well with people I’ve tried to introduce to BDD. - Using NUnit in a context BDD style is normally what I do but produces a lot of artifacts, is underscore heaven and provides no guidance to newer practitioners of BDD. With all this in mind how does SpecMaker improve our situation at all? First lets look at how I might approach a BDD test with NUnit public class SpecGameWhenStartingUp { private Game _game; private void startGame(){ // left out for clarity } [SetUp] public override void SetUp() { startGame(); } [Test] public void should_have_3_lives() { Assert.AreEqual(3, _game.lives); } } On the surface there isn’t much wrong with this. Asserts are less than ideal (Rspec matchers would be nice), underscores on my “should_” are so so, context in the class name leads to lots of classes and me playing around a bunch with inheritance. However, none of this is a game breaker and for those of you who have a good workable flow with this approach and are happy with it, please continue to use it. I however, am not happy with the flow, also BDD really is not easy for me to teach. Some of you may get it easily and teach it easily, but .Net developers as a whole seem to be driven towards framework specific knowledge (telling them to not think “test” when test is staring at them on the method messes with their heads), and even then it’d better not be too “cutting edge” in language features or friction becomes a risk where someone may end up learning more than just BDD. So what are my goals then for SpecMaker? - It has to be terse, and be shorter than the NUnit approach. - Eliminate underscores where possible. - Act as a series of guidelines to beginning BDD’ers. - Output spec results in a variety of formats. - Remove the “Test” entirely from the vocabulary. - Use convention over configuration where possible. - Provide custom matchers and a DSL to make your own custom matchers. What will it not do or be? - Do acceptance testing. This could change in the future, but then I’d be inclined to borrow code from others that do this well already. - Make BDD a “click next” thing. There is no substitute for continually trying to refine your approach and understand “behavior” and what that means. I’m not sure there is one true right answer here and that makes it so incredibly hard to make a framework that does that. - Be as nice as RSpec. - Integration tests. This is setup very much one-to-one. Again this was a simplicity choice and as I dog food this I may completely change my mind here. So with all the ceremony out of the way here is where I’m at so far for the same BDD code above only with SpecMaker { private void startGame(){ //left out for clarity } public void when_starting_a_game() { startGame(); should(“have 3 lives”, ()=> { _game.lives.Has(3).Total(); } ); should(“require valid username”, RuleIs.Pending); } } Running specmaker.exe on the the dll where this spec is located outputs something like this: At this time specs come from the class name minus “Spec” at the end. Context start with “when” or the method will not get picked up as context. This is also staggeringly new and may have bugs, issues, bad docs, ugly code, etc. But since I’ve moved this to Github I encourage everyone to have a go and improve it as they see fit or for their own purposes. I have a number of plans and ideas to improve this, but I feel this is a good start to get some positive work done and save myself some grief over my NUnit based tests. Let me know what you think, any and all criticism is appreciated, but at the end of the day this does actually fulfill an itch that I myself have had for some time.
http://lostechies.com/ryansvihla/2009/08/08/introducing-specmaker-rspec-style-bdd-in-c/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%253A+LosTechies+%2528LosTechies%2529
crawl-003
refinedweb
774
70.73
DBIx::Class::Schema::PopulateMore::Command - Command Class to Populate a Schema This is a command pattern class to manage the job of populating a DBIx::Class::Schema with information. We break this out because the actual job is a bit complex, is likely to grow more complex, and so that we can more easily identify refactorable and reusable parts. This class defines the following attributes. This is the Schema we are populating contains a callback to the exception method supplied by DBIC This is an arrayref of information used to populate tables in the database How we know the value is really something to inflate or perform a substitution on. This get's the namespace of the substitution plugin and it's other data. We define a visitor so that we can perform the value inflations and or substitutions. This is still a little work in progress, but it's getting neater The index of previously inflated resultsets. Basically when we create a new row in the table, we cache the result object so that it can be used as a dependency in creating another. Eventually will be moved into the constructor for a plugin Set an index value to an inflated result given an index, returns the related inflated resultset Loads each of the available inflators, provider access to the objects Holds an object that can perform dispatching to the inflators. This module defines the following methods. lazy build for the "visitor" attribute. lazy build for the "inflator_loader" attribute lazy build for the "inflator_dispatcher" attribute The command classes main method. Returns a Hash of the created result rows, where each key is the named index and the value is the row object. Dispatch to the correct inflator Given a hash suitable for a DBIx::Class::Resultset create method, attempt to update or create a row in the named source. returns the newly created row or throws an exception if there is a failure Given a fields and values, combine to a hash suitable for using in a create_fixture row statement. Correctly create an array from the fields, values variables, skipping those where the value is undefined. given a value that is either an arrayref or a scalar, put it into array context and return that array. Please see DBIx::Class::Schema::PopulateMore For authorship information Please see DBIx::Class::Schema::PopulateMore For licensing terms.
http://search.cpan.org/~jjnapiork/DBIx-Class-Schema-PopulateMore-0.19/lib/DBIx/Class/Schema/PopulateMore/Command.pm
CC-MAIN-2017-13
refinedweb
395
50.16
These are chat archives for rust-lang/rust impl<T: ?Sized> MyCoolExtensions for T where T: OriginalTrait { } : ?Sizedpart of the impl block. public, private, etc? pub. The only alternative is just leaving it out, resulting in a field being module-internal. pubon fields only makes sense when the overall struct is also exported ( pub struct ...). And it might be worth knowing that you can directly initialize structs outside of their module if all their fields are public (so when you have a pub struct MyType(pub u32)you can create a MyTypeoutside of its module by writing MyType(42), without a newfunction or similar. MyType, all fields need to be visible (so either your pattern matching code is in the same module, or all the fields are public) I got to know it yesterday: ownership is from varibles sharing the same object in the heap and from different thread have different lifecycle, then we do not know when to deallocate it. But just now, I came up this idea: we can save the references count in the object in the heap, then every reference's lifecycle ends, the count minus 1. When the count is 0, deallocate the memory. So, ownership is needless and the compiler can do the minus references count and deallocate memory job for us too. Rc<T>and Arc<T>types exist for exactly this purpose ownership, one is reference count. I just want to know why Rust choose ownership? changing something when you only need to take another reference to memory location Arcis atomic, Rcis not why it cannot compile? I havewhy it cannot compile? I have fn main() { let y: &Vec<i32>; { let x = &vec![1, 2, 3]; y = x; } println!("{}", y[0]); } movethe ownershipfrom xto y, why the error is borrowed value does not live long enough fn main() { let y: &Vec<i32>; { let tmp = vec![1, 2, 3]; let x = &tmp; y = x; // tmp gets deallocated here } println!("{}", y[0]); } tmpvariant can't work because the vector is dropped at the end of the inner block. tmpvariable automatically. (1 + 2) + 3there are several intermediate results &(1 + 12) I want a complete web framework, frontend and backend with transparent data synchronisation between the two, with blazing fast performance, all held together by Rust's exquisite type system. This I can support. use std::io::{BufReader, BufRead}; use std::fs::File; pub fn handle_file(filename: &str) -> Result<(), io::Error> { let file = BufReader::new(File::open(filename)?); for (i, line) in file.lines().filter_map(|result| result.ok()).enumerate() { // do stuff } Ok(()) } from package import *in Python? std::io::preludeis a useful thing to learn about, thanks. std::io::preludewas for, though now that I look it's imported in all the examples on the docs page for std::io impl<R: Read + ?Sized> ReadBytesExt for R Readis also ReadBytesExt Readbecause there is impl<T> Read for Cursor<T> where T: AsRef<[u8]>() execin Rust? i.e. replaces the running rust program with the specified program. or maybe not (I should learn to read): Unix-specific extensions to the std::process::Command builder trait EventValidator<EVENT, ERROR> { fn is_valid(&self, event: EVENT) -> Result<&self, ERROR>; } I want to indicate that any impl which will implement this trait could return either self or an Error but compiler says that : | 5 | fn is_valid(&self, event: EVENT) -> Result<self, ERROR>; | ^^^^ undefined or not in scope | = help: no candidates by the name of `self` found in your project; maybe you misspelled the name or forgot to import an external crate? How I to achieve it ? @alexander-irbis trait EventValidator<EVENT, ERROR> { fn is_valid(self: &Self, event: EVENT) -> Result<&Self, ERROR>; } so this is correct signature ? :-) fn xxx(&self)is the same as fn xxx(self: &Self) CommandExttrait is unix specific.
https://gitter.im/rust-lang/rust/archives/2016/12/09
CC-MAIN-2019-35
refinedweb
630
62.48
Web-Based Password Reset is not just about writing a web client in ASP.NET. I mentioned that a few times when talking to different people. Everyone can do that by writing their own WCF client. If reverse engineering the FIM WebService protocol is too hard, there is the open source client supported by the community. In fact, BlueVault has done exactly that. It definitely will not be too hard for us to do. However, when we think through the scenarios in depth, we realize most customers want web-based SSPR so that people not connected to the network can also reset their password. That implies exposing not only the portal, but also indirectly exposes FIMService to the extranet. This make us rethink our security model. In this blog post and the coming few ones, I am going to talk about a few improvements related to the security aspect of web-based SSPR. Scenario In FIM 2010, password reset from the intranet would require user authenticates themselves using QA Gate. In R2, when ITPros exposes web-based SSPR to the extranet, they might want to have additional authentication for added security (e.g. RSA token) yet keeping intranet reset as easy as before. What is Security Context? We tackle this scenario by introduce something called security context which can be found in the extended attribute of the request. namespace Microsoft.ResourceManagement.WebServices.WSResourceManagement { public enum SecurityContext { Extranet, NoneSpecifiedNoneSpecified }} } A request tagged with Extranet means it comes from the SSPR portal that is serving requests coming from the extranet. How does Security Context Work? If you look at the new workflow designer UI, you will notice some of the gate-configuration pages have an extract section for SecurityContext. The description is self-explanatory. If set to Extranet, the activity/gate will only be run if the request comes from the extranet. How do I Configure SecurityContext Tagged in Requests from SSPR Portals? In setup, there is the option to specify that. That translates to <add key="SecurityContextAssertion" value="[Extranet|NoneSpecified]" /> at "C:\Program Files\Microsoft Forefront Identity Manager\2010\Password [Registration|Reset] Portal\Web.config" Nice! 🙂 Basically this means we will know that the request is coming from the Extranet by checking the Request and then have the option to add an additional authentication gate of some kind if we like to? Will there be more authentication gates delivered with with FIM? //Henrik Nilsson Yes, the new registration and reset portal in R2 are designed to allow people to use the portal from any coffee shop Yes and Yes. FIM R2 ships with additional OTP gates which I wil talk about in coming pots Really nice! can't wait to see your blog post! Hi! Can I use FIM Password Registration Portal over internet? For example, if I have users without a joined domain pc or FIM client installed, Are users able to do the registration over internet (out of corp network) and then change the password?? I installed FIM portal but i recieved this An error has occurred. Please try again, and if the problem persists, contact your help desk or system administrator. (Error 3000) do you have any idea?
https://blogs.technet.microsoft.com/aho/2011/11/28/fim-2010-r2-web-based-password-reset-part-2/
CC-MAIN-2017-47
refinedweb
530
55.84
Topology C | FORTRAN-legacy | FORTRAN-2008 MPI_Group_union Definition MPI_Group_union takes the union of two MPI groups to create a group that contains processes of both groups, without duplicates. Copy Feedback int MPI_Group_union(MPI_Group group_a, MPI_Group group_b, MPI_Group* union_group); Parameters - group_a - The first of the two groups to include in the union. - group_b - The second of the two groups to include in the union. - union_group - The variable in which store the group representing the union of the two groups given. Returned value The error code returned from the group union. - MPI_SUCCESS - The routine successfully completed. Example Copy Feedback #include <stdio.h> #include <stdlib.h> #include <mpi.h> /** * @brief Illustrates how to get the union of two groups of processes. * @details This code gets all processes of the default communicator and splits * them in two groups, designed to cover all cases: processes that belong to * both groups, one group or none. * It then gets the union of these two groups and creates a communicator * containing the processes of the union group. Each process then prints whether * it belongs to the communicator of the union group or not. * * This application is meant to be run with 4 processes. The union can * be visualised as follows: * * +-----------+---+---+---+---+ * | Processes | 0 | 1 | 2 | 3 | * +-----------+---+---+---+---+ * | Group A | X | | X | | * | Group B | | | X | X | * | Union | X | | X |A; int groupAprocesses[2] = {0, 2}; MPI_Group_incl(world_group, 2, groupAprocesses, &groupA); // Keep the processes 2 and 3 in the group B MPI_Group groupB; int groupBprocesses[2] = {2, 3}; MPI_Group_incl(world_group, 2, groupBprocesses, &groupB); // Get the union of both groups MPI_Group union_group; MPI_Group_union(groupA, groupB, &union_group); // Get my rank in the communicator int my_rank; MPI_Comm_rank(MPI_COMM_WORLD, &my_rank); // Create a communicator made of the processes in the union group MPI_Comm new_communicator; MPI_Comm_create(MPI_COMM_WORLD, union_group, &new_communicator); if(new_communicator == MPI_COMM_NULL) { // I am not part of the communicator created, so I am not part of the union group printf("Process %d is not part of the union group.\n", my_rank); } else { // I am part of the communicator created, so I am part of the union group printf("Process %d is part of the union group.\n", my_rank); } MPI_Finalize(); return EXIT_SUCCESS; }
https://www.rookiehpc.com/mpi/docs/mpi_group_union.php
CC-MAIN-2019-43
refinedweb
353
51.48
#include <db.h> int DB_ENV->set_lg_max(DB_ENV *dbenv, u_int32_t lg_max); Set the maximum size of a single file in the log, in bytes. Because DB_L DB_ENV->set_lg_bsize). See Log File Limits for more information. The DB_ENV->set_lg_max interface may be used only to configure Berkeley DB before the DB_ENV->open interface is called. The DB_ENV->set_lg_max function returns a non-zero error value on failure and 0 on success. The database environment's log file size may also be set function may fail and return a non-zero error for the following conditions: Called after DB_ENV->open was called. The size of the log file is less than four times the size of the in-memory log buffer. The specified log file size was too large. The DB_ENV->set_lg_max function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB_ENV->set_lg_max function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
http://pybsddb.sourceforge.net/api_c/env_set_lg_max.html
crawl-001
refinedweb
180
54.73
This is the mail archive of the pthreads-win32@sourceware.org mailing list for the pthreas-win32 project. #endif errors in ptw32_InterlockedCompareExchange.c 2.9.0 release / win64 stability? [PATCH] Preprocessor fixes [patch] Support for x64 Windows condvar implementation const missing? crash with pthreads after dlclose() Default Stack Size 0?? Does static library works ? Email Database of small to medium sized businesses in the US good job! LGPL question patch from chromium Pthread_join waits endlessly for already ended thread pthread_t and stl maps Release Build dependency on debug rt in vs7.1 Re: starvation in pthread_once? Re: Static Library Initialization (again?) strange pthread_create cap Re: Trying to develop a pool executor from pthreads... in C++ Visual 2005 compiler error WARNING VIRUS Re: you would have wanted too Winsock dependency. working release test failed in debug mode you would have wanted too
http://www.sourceware.org/ml/pthreads-win32/2009/subjects.html
crawl-003
refinedweb
141
61.83
The QGraphicsView to display the tuning curve and the inharmonicity. More... #include <tuningcurvegraph.h> The QGraphicsView to display the tuning curve and the inharmonicity. This is the Qt implementation of the TuningCurveGraphDrawer. It forwards the drawings of TuningCurveGraphDrawer to a AutoScaledToKeyboardGraphicsView. The user can change the computed tuning curve with a mouse click on the bars. This class is basically managing these mouse clicks. A click selects the corresponding key on the keyboard by sending a MSG_KEY_SELECTION_CHANGED message. Definition at line 48 of file tuningcurvegraph.h. Constructor, linking AutoScaledToKeyboardGraphicsView with the TuningCurveGraphDrawer. Definition at line 44 of file tuningcurvegraph.cpp. Empty virtual destructor. Definition at line 53 of file tuningcurvegraph.h. TuningCurveGraph::handleMouseInteraction. If the mouse is within the valid range the function computes the corresponding key index. In the calculation mode the coordinates are used to manually edit the tuning curve while in all other modes a message is sent to select the corresponding key. Definition at line 142 of file tuningcurvegraph.cpp. Definition at line 57 of file tuningcurvegraph.h. Function for handling mouse moves. Moving the mouse will continuously change the tuning curve by dragging the green marker. The function is only active if mPressed is true. It will use mPressedX as x value and the actual QMouseEvent's y coordinate. These coordinates are passed to the function handleMouseInteraction. Definition at line 92 of file tuningcurvegraph.cpp. Function handling a mouse click. The mouse press event is used to manually edit the tuning curve. If the a mouse button is pressed this function sets mPressed to true and stores the mouse x position in the member variable mPressedX. It also passes the actual coordinates to the function handleMouseInteraction. Definition at line 68 of file tuningcurvegraph.cpp. Mouse release event to stop the change of the tuning curve. A mouse release event will set mPressed to false. The final coordinates are passed to the function handleMouseInteraction. Definition at line 115 of file tuningcurvegraph.cpp. Definition at line 56 of file tuningcurvegraph.h. Is the mouse pressed? Definition at line 69 of file tuningcurvegraph.h. The x coordinate where the mouse was pressed in first instance. Definition at line 72 of file tuningcurvegraph.h.
http://doxygen.piano-tuner.org/class_tuning_curve_graph.html
CC-MAIN-2022-05
refinedweb
365
53.07
Welcome to the fourth module of the C Programming series, in this particular tutorial we will talk about the Hello World Program in C, the most famous code, through which we will start our journey in the practical world of coding and also will see the structure of the program like how our code structure looks. So, gear up your energy, Let’s go into the depth of this module. Hello World Program in C This is the very first code which we will see towards the practical implementation of coding. The motive of writing this program is it tells us the basic structure of the program like how and what are the necessary and important things which we have to use while writing our code. This program simply prints the statement “Hello World” on the output screen. This is a simple code that makes us understand how c programs are executed and constructed. Let’s see the code program: #include <stdio.h> int main( ) { printf ("Hello World"); return 0; } When the above program will be executed, the output we get on the screen is: The above program can be written in many ways using the C Programming language, but as of now we will focus on these basic steps and structure and will cover all the more in the upcoming tutorials. Structure of the Program You all must have noticed many things written in the above program and also wondering what are all that, so let’s see the meaning of each line and thoroughly understand the structure of the program. Header File: #include<stdio.h> It allows to perform standard input and output function in the program such as scanf() and printf() function, we have used printf() function in the above code, or we can say Header files tells the compiler that the code is of C language and through this compiler imports all the necessary libraries to perform the operations. #include is the preprocessor here, which tells the compiler about the particular action that has to be done. stdio simply means standard input-output, which allows the compiler to perform the various input-output actions. Main Function: int main () Here the function called main is declared of return type int. The program will begin its execution from the first line inside the main function and keep executing each line till the end. here int simply tells the user that the program is returning some value at the end ie.., we can see in the last 0 has been returned, it indicates the successful compilation of the code, if your code doesn’t return any value then you can go for void main also. void means the null value that no value has to be returned. Output to be displayed on the screen: printf ( “Hello World” ) ; This line is called the C Statement it has three parts the 1st one is printf function and the 2nd is the sentence or word you want to display out in the output screen and the last is the semicolon, to terminate the statement. The main work of this line is that it simply prints whatever is in the double quotes on the screen. Return type: return 0; This should be included in every program you write as it is the return of the int main. The value 0 indicates that the program has been executed successfully. Curly Braces: { } Start the block by using the curly braces “{” and end the block by using “}” these braces. All the contents between these braces are called the function body and that defines what happened when the main function is called. It tells the scope of the function that through this the function statement is valid and will be considered. So, this was all about the structure of the program, these are the necessary things that must be included while writing any program. I hope you all enjoyed this tutorial because finally, you all have landed in the practical world of coding, from now we will start our journey in the practical world of C Programming and will see many cool concepts for which you all must be waiting and must be excited too. Until then stay connected, Happy Learning!
https://usemynotes.com/hello-world-program-in-c/
CC-MAIN-2021-43
refinedweb
704
63.22
Overview Atlassian Sourcetree is a free Git and Mercurial client for Windows. Atlassian Sourcetree is a free Git and Mercurial client for Mac. HTTPCache HTTPCache. Usage NOTE: Eventually, my hope is that this module can be integrated directly into requests. That said, I've had minimal exposure to requests, so I expect the initial implementation to be rather un-requests-like in terms of its API. Suggestions and patches welcome! UPDATE: See: Here is the basic usage: import requests from httpcache import CacheControl sess = requests.session() cached_sess = CacheControl(sess) response = cached_sess.get('') If the URL contains any caching based headers, it will cache the result in a simple dictionary. Below is the implementation of the DictCache, the default cache backend. It is extremely simple and shows how you would implement some other cache backend: from httpcache.cache import BaseCache class DictCache(BaseCache): def __init__(self, init_dict=None): self.data = init_dict or {} def get(self, key): return self.data.get(key, None) def set(self, key, value): self.data.update({key: value}) def delete(self, key): self.data.pop(key) See? Really simple. Design The CacheControl object's main task is to wrap the GET call of the session object. The caching takes place by examining the request to see if it should try to ue the cache. For example, if the request includes a 'no-cache' or 'max-age=0' Cache-Control header, it will not try to cache the request. If there is an cached value and its value has been deemed fresh, the it will return the cached response. If the request cannot be cached, the actual request is peformed. At this point we then analyze the response and see if we should add it to the cache. For example, if the request contains a 'max-age=3600' in the 'Cache-Control' header, it will cache the response before returning it to the caller. Tests The tests are all in httpcache/tests and is runnable by py.test. TODO - Support the Vary header (only match when all headers are the same) Disclaimers HTTPCache is brand new and maybe totally broken. I have some tests and it is a pretty direct port of httplib2 caching, which I've found to be very reliable. With that in mind, it hasn't been used in a production environment just yet. If you check it out and find bugs, let me know.
https://bitbucket.org/icordasc/httpcache
CC-MAIN-2017-51
refinedweb
400
65.73
30725/import-my-aws-credentials-using-python-script import boto3 s3 = boto3.resource('s3') for bucket in s3.buckets.all(): print(bucket.name) but this code does not work, how to import my AWS credentials so that this will return correct output? error : botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "" error : botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "" expected output : list of buckets in my aws account. Using AWS Cli Configure your IAM user then run the following code curl "" -o "awscli-bundle.zip" unzip awscli-bundle.zip sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws I believe that you are using the ...READ MORE It might be throwing an error on ...READ MORE The 502 Bad Gateway Error is Hosting a website using S3 bucket is ...READ MORE AFAIK, The instance state transition must go ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/30725/import-my-aws-credentials-using-python-script?show=30727
CC-MAIN-2020-50
refinedweb
156
52.15
User Name: Published: 08 Jun 2009 By: Brian Mains Brian Mains discusses how to implement the Arrange-Act-Assert pattern in TypeMock. Most developers that use unit testing in their applications probably know what a mocking library is. In case you don’t, a mocking library provides a way to mock a portion of your code, allowing your application to work as is without having to write any special code to do so. Most mocking libraries on the web require the use of interfaces; this is because an interface makes it easier to switch the actual implementation from a live object to a mocked one. The interface defines the signature and thus makes it easy to ensure the mocked object also implements the same interface. This design implementation has a minimal impact regarding to the amount of coding effort, but has a bigger impact on the design of the application. TypeMock works around some of the issues by supplying a very dynamic mocking library that does not require the use of interfaces. Actually, any object, whether implementing an interface or not, can be mocked, which gives TypeMock a lot of power. TypeMock originally provided two different ways to write tests. One of those approaches was to use the MockManager class. The MockManager class has a static Mock method that returned a Mock object that you could define your mocks with. The following could be a mock for a class defined using the MockManager. Mock TypeMock uses this class to define the series of properties and method calls that will occur in this specific test case, and then validate the results. The second approach uses natural mocking: a more natural approach to reading and reacting to code called in a unit test. This flowing style of running test code worked really well and felt more natural to the tester, because the developer actually writes the code as in: The challenge with this way to develop unit tests is that the natural way can’t be used with internal/private methods. This is rectified using the MockManager approach, but then that’s a mix and match of the two options, which isn’t wrong or hard to do, but it would be nice if one solution solved both issues. TypeMock now brings forth some new objects to develop unit tests with: the ArrangeActAssert namespace. The idea of the Arrange/Act/Assert process is a way to develop unit tests. It starts by arranging the test and all associated mocks, followed by the setting up of the code to actually perform the work of the test, and lastly the assertion process verifies that all outputs are correct. To bring these objects into your testing framework, import the C# and VB.NET API’s referenced in the latest TypeMock assembly. Let’s begin by looking at a basic mock: In Listing 3, the new Arrange/Act/Assert approach divides the test up into three parts. The arrange part uses the new TypeMock instance/static methods to setup the objects to mock, along with its properties and methods to call or mock. The second part, the act part, sets up the class and gets the value. The third part, the assertion part, verifies that the class is correct. Let’s walk through the arrange portion of the test. The Isolate object, a static object, is the core object to defining mockings. Its Fake property returns an object that allows a developer to mock an instance or static class/method. It’s generic to specify the class to mock, but to mock a static object, use the overload which accepts the type of an object. The SwapNextInstance method swaps the next creation of the class with the mock. Later on, when MyClass is instantiated, the object returned is swapped with the mock object, something handy to have. The other method, WhenCalled, is another very useful method that allows us to control what actually gets called or mocked. When the Value property is called on the MyClass mock instance, a value of 1 is returned. SwapNextInstance WhenCalled Value 1 So you can see WhenCalled is important because it controls the actions of your code, and can control the inner workings of code. Using the WillReturn method ignores the call to the code defined within WhenCalled. The () notation is a lambda expression that allows you to specify code segments to run, and the code within WhenCalled gets controlled by the following method option. Using CallOriginal() will call the original method implementation. WhenCalled WillReturn () CallOriginal() When it comes to WhenCalled, this method can be used to test not only properties as shown above, but also methods (as I also stated), along with the setters of a property. If the above code segment was rewritten to set the Value property to another value, this would serve to mock the setter of that property, instead of the getter of the property as defined above. Alternatively, TypeMock provides the ability to mock private properties/methods, using the NonPublic property of the Isolate object. While not as fully featured as public members, private members use the MockManager definition-like approach to setup what needs mocked, and what action should take. This could be done using the following: NonPublic The NonPublic object’s members require a fake object. This means that you have to have a fake specified using the Isolate.Fake member. NonPublic can also fake static members, by using an override that takes a Type object instead. Isolate.Fake TypeMock, using this new approach, can mock static objects too. For instance, if you have code that uses the factory method pattern, TypeMock can mock the static method using the Isolate object. The following code segments could be used to mock that static create method: In this way, the static method can be mocked to do whatever you need to. Typically, the a faked instance of the type being created could be returned by changing IgnoreCall to WillReturn. IgnoreCall WillReturn TypeMock provides one other feature I’ll mention in this article: verification. The Isolate.Verify property returns an object that is used to perform verification that the correct object’s properties or methods are called during the code’s execution. It can be used to verify that either a public or private method was called, with or without argument checking. Isolate.Verify For instance, at the end of a test, the code below could be used to determine that a public method was called, and a private property had its setter called: The first method verifies that obj.SetValue was called with the exact parameter argument of 1. If WasCalledWithAnyArguments was used instead, the parameter wouldn’t matter. The second call checks that the Value property, a private property, was called to assign this value. obj.SetValue 1 WasCalledWithAnyArguments Verification can also work the other way; it can check to see that a method wasn’t called, which can also be useful. TypeMock is cool, and you can see that you do not need to use an interface to mock an object; it can work with any object directly through the use of generics. TypeMock can mock anything through a new Isolate keyword; using the Fake member to create the actual fake, and the WhenCalled method to change the internals to the code calling and actually change the behavior. The NonPublic member can also manipulate private methods too. This development approach is a hybrid of the two approaches I mentioned at the beginning of the article, to create a new way to develop unit tests in a more managed way.
http://dotnetslackers.com/articles/designpatterns/TypeMocks-Arrange-Act-Assert.aspx
CC-MAIN-2015-14
refinedweb
1,267
59.03
On Sun, Aug 24, 2008 at 9:59 AM, Fredrik Lundh <fredrik at pythonware.com> wrote: > Mohamed Yousef wrote: > >> > > Python doesn't use a global namespace -- importing a given module into one > module doesn't make it visible everywhere else (and trust me, this is a very > good thing). why isn't it a good thing (even if optional) consider the sitution in which a utility module is used every where else - other modules - you may say import it in them all , what i changed it's name ? go back change all imports... this doesn't seem good and what about package wide varailbles ? > (it uses a global module cache, though, so it's only the first import that > actually loads the module) > >> my goal is basically making W() aware of the re module when called >> from A > > to do that, add "import re" to the top of the C module. and sys ,string.. etc add them all twice or four / n times ? remove one and then forget to remove one of them in a file and start debugging ... this really doesn't look to be a good practice > see this page for a little more on Python's import mechanism: > > > Regards, Mohamed Yousef
https://mail.python.org/pipermail/python-list/2008-August/492877.html
CC-MAIN-2014-15
refinedweb
204
77.16
The! Yossi Siles, a Senior Offering Manager at IBM, published this article on the new software version for our all-flash IBM FlashSystem A9000 and A9000R storage systems with IBM HyperSwap support.:.! Hi guys, we have a surprise for you! We have just released a Python library for our XCLI client. It's an open-source project and free for use by everyone under the Apache 2 license. It enables connecting to the storage and managing all its operations. It supports all XCLI managed storage types: This is the first open-source version of the Python XCLI client library ever. This may not sound like such a big deal, but it actually is. It enables users to tailor the way they use the Spectrum Accelerate Family storage. The XIV GUI is great, and so is the new IBM Hyper-Scale Manager and they let users to do almost anything with the storage. But there are cases that you would rather have specific functionality that is suitable just for you. The flexibility and power of Python and XCLI enables you to tailor it to your exact needs. Let's take two simple examples: 1. You want to take snapshots for the production volumes in pool FOO every day. Just create a scheduled task and run the following script: from pyxcli.client import XCLIClient xcli = XCLIClient.connect_ssl('admin', 'adminadmin', 'mystorage.comp.com') volumes = xcli.cmd.vol_list(pool='FOO).as_list for volume in volumes: if volume.startwith('production'): xcli.cmd.snapshot_create(vol=volume.name) 2. You'd like to clean up your system and remove all empty volumes. Here is how you create a list of unused volumes: volumes = [volume for volume in xcli.cmd.vol_list().as_list if volume.used_capacity == '0'] print volume.name The combined power of Python and XCLI management has been unleashed! You are welcome to use it! Download the package from: - GitHub: - PyPI: or simply run 'pip install pyxcli' from your command prompt. Have fun, Tzur and Alon:. IBM Spectrum Control Base Edition has been upgraded to a new version - 3.0.1. This release replaces the previous version (3.0.0). It brings significant enhancements in the application performance and stability. These include: For nearly a year in development, a new version of IBM Storage Management Pack for Microsoft System Center Operations Manager (SCOM), 2.5.0, is out. This release brings support for the latest IBM's all-flash storage systems: super-fast IBM FlashSystem A9000 and IBM FlashSystem A9000R with microcode 12.0.x. In addition, it supports the newest releases of the following IBM storage products: Note: FlashSystem V9000 will be supported in Q3 of 2016.: A new version of the IBM XIV Provider for Microsoft Windows Volume Shadow Copy Service (VSS), version 2.8.0, joins the ranks of the IBM's cloud storage solutions that support the newly introduced all-flash IBM FlashSystem A9000 and IBM FlashSystem A9000R. A new version of the IBM Storwize Family Storage Replication Adapter is here. It's version 3.2.0, which adds support for the local HyperSwap for IBM SVC and Storwize systems, running IBM Spectrum Virtualize software. This new topology is used with VMware vSphere to enable transparent VMware vMotion migration and automatic VMware HA failover of virtualized workloads between physical data centers. This update: A new milestone for IBM Spectrum Control Base Edition - version 3.0.0. This release brings support for the newest IBM's all-flash storage offerings: ultra-fast IBM FlashSystem A9000 and IBM FlashSystem A9000R. In addition, the web management interface of the package has been enhanced and improved to support the abstracted storage provisioning method. This release can be installed on RHEL 7.0–7.2. In general, IBM Spectrum Control Base Edition became a more robust, user-friendly and safe solution for managing multiple IBM storage platforms. We are proud to announce a new HAK release - version 2.6.0. Starting from this version, IBM XIV Host Attachment Kit has been renamed to IBM Storage Host Attachment Kit to indicate support for additional storage systems, including the newly announced all-flash IBM FlashSystem A9000 and IBM FlashSystem A9000R. In addition, this version introduces support for RHEL 7.2, SLES 12 and SLES 12 SP1. Moreover, selected Linux releases can now be run on IBM Power Systems servers, as detailed in the HAK release notes.! are proud to announce that the IBM Storage Driver for OpenStack has been recently upgraded to a new version - 1.6.0. This version brings support for OpenStack Liberty release, as well as for DS8870 microcode version 7.5 SP3 and DS8880 version 8.01. In addition, this release introduces the XIV and Spectrum Accelerate support for the OpenStack consistency groups. It also brings in the RESTful API that replaced the Java-based ESSNI driver in DS8000 storage systems.. We are happy to announce the release of an updated OpenStack Cinder driver for IBM Spectrum Virtualize, Storwize Family and FlashSystem V9000. This release complies with OpenStack Liberty specifications, emphasizing IBM's continuous commitment to making the cutting-edge OpenStack features available for IBM storage customers. Another change is the enabling or disabling the fast format option during thick volume creation. As some volume operations are disabled until fast format is completed, the customer can now skip the fast format, when creating a thick volume. This makes the volume operational immediately after its creation. Version 7.6 of IBM Spectrum Virtualize and Storwize Family will be supported after its release with no additional Cinder driver code change. We..!
https://www.ibm.com/developerworks/mydeveloperworks/blogs/8aaf6e95-0915-442b-b03c-fd7f412fe248?maxresults=50&sortby=0&lang=pt_br
CC-MAIN-2017-26
refinedweb
923
56.96
From: Vesa Karvonen (vesa.karvonen_at_[hidden]) Date: 2001-06-03 11:54:00 I'm currently writing metacode for my resource library that I hope to be able to offer to Boost next week. I find that the Boost metaprogramming and type traits facilities are not mature enough for my taste, so I have ported a mini version of our metaprogramming facilities to Boost conventions. call_traits =========== The call_traits<> is basically a mini-BLOB (antipattern). Specifically the call_traits<> template is four functions in one. It is preferable to have simple primitive template metafunctions, because: 1. Such metafunctions can be made to use standard interfaces rather than the special case interface of call_traits. 2. Primitive metafunctions are easier to use, understand and port (fact of life). For instance, should one little thing fail to compile in call traits, you might not be able to port the whole thing. 3. Primitive metafunctions scale better in large scale development (with a BLOB you need to include everything whether or not you need it). (In fact, currently I only need param_type.) So, I would prefer to see the special interface of call_traits<>: template <typename T> struct call_traits<T> { typedef ... value_type; typedef ... reference; typedef ... const_reference; typedef ... param_type; }; split into four distinct metafunctions (here in pseudo code): namespace call_traits { template<class T> struct value_type {typedef ... type;}; template<class T> struct reference_type {typedef ... type;}; template<class T> struct const_reference_type {typedef ... type;}; template<class T> struct param_type {typedef ... type;}; } The above metafunctions are clearly superior to use in metacode than the call_traits<> template. I can provide examples if you don't trust me. arithmetic_traits ================= The boost/arithmetic_traits.hpp can be simplified by a factor of ~4. Currently the arithmetic types are repeated 4 times: - as non-cv types, - as const types, - as volatile types, and - as const volatile types. A smarter implementation would look something like this: namespace detail { typedef make_typelist < const volatile char , const volatile signed char , const volatile unsigned char , const volatile short ,... >::type const_volatile_arithmetic_types; template<class T> struct type_const_volatile { typedef const volatile T type; }; template<class T> struct type_identity { typedef T type; }; } template<class T> struct is_arithmetic { enum { value = typelist_has < detail::const_volatile_arithmetic_types , typename type_inner_if < is_reference<T>::value , detail::type_identity<T> , detail::type_const_volatile<T> >::type >::value; }; }; MPL === Technical Issues in Boost MPL: - The current code is not completely ported to MSVC++, so I can not use it. - The factory template uses O(N*N) tokens, which is not optimal. It is possible to do with only O(N) tokens, which can significantly reduce compilation times. - The code makes no use of private or public to hide implementation details or hilite the interfaces of metacode. This makes it more difficult to understand the code. -? - list_node<> doesn't check the validity of the template arguments. At least NextNode has special requirements. This same problem can be seen in many places. - What is wrong with using enum {} when there is no particular reason to have a specific type? Does it fail on some broken compilers? Is the use of BOOST_STATIC_CONSTANT() always worth it? IIRC even Bjarne Stroustrup has commented that static class constants are a misfeature. - The endline layout is a bad idea. It makes maintanance more difficult. I can already pinpoint examples of broken code layout in MPL. See Code Complete, ISBN 1-55615-484-4, for details. IMO Boost should have a guideline that says that library authors are recommend not to use endline layout, because it is not maintainable. - MPL should be separated into multiple more cohesive libraries. For example: - type traits (is_same, is_convertible, ...) - actually Boost already has this library and I don't see much value in duplicating the library. - template metaprogramming (if, switch, for, while, ...) - metadata-structures (lists, trees, etc...) - I think that a lot could be learned by using existing knowledge on functional programming. For example: - Chris Okasaki: Purely Functional Data Structures, ISBN 0521663504 Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2001/06/12755.php
CC-MAIN-2019-47
refinedweb
662
58.58
Thanks for you response. I received this error using the snippet Compiler Error Message: BC30451: Name 'var' is not declared Request.QueryString.GetVal Your question, your audience. Choose who sees your identity—and your question—with question security. I tried using the snippet below. I sure that the server is running aspnet 2.0 framework. CS0246: The type or namespace name 'var' could not be found (are you missing a using directive or an assembly reference?) <% var UserName = Request.QueryString.GetVal if(!string.IsNullOrEmpty(u { Session["UserName"] = userName; } if (!string.IsNullOrEmpty(Ses { Response.Redirect("", false); } %> Open in new window var UserName = Request.QueryString.GetVal Sorry for the confusion. This is what I wanted you to try. string userName = Request.QueryString.GetVal
https://www.experts-exchange.com/questions/25981129/BC30109-'String'-is-a-class-type-and-so-is-not-a-valid-expression.html
CC-MAIN-2018-26
refinedweb
121
55
HPC Cluster-SOA Client Template Updated: February 3, 2010 The Cluster-SOA Debugger includes a C# project template called HPC Cluster-SOA Client. The template includes references to the necessary HPC namespaces and the outline of the code that is used to create and start a session on a Windows HPC 2008 cluster. The client application for a service that runs on a Windows HPC cluster creates a session through the HPC job scheduler. The scheduler allocates service instances on the compute nodes as workers for the session. The client application then invokes the methods that are exposed by the service instances. The following table describes the template elements: See Also Show:
http://technet.microsoft.com/en-us/library/ee945377.aspx
CC-MAIN-2014-42
refinedweb
112
53
A Gentle Introduction to Dataframes — Part 1 of 3 Becoming a Data Alchemist — Dataframes …Learning My First Trick Introduction same and or similar tasks in Python. As I continue to gain proficiency, I am beginning to see the power of Python. Certainly, the learning curve for Python is higher than that of Excel, but as I am discovering the effort put forth in learning the “Python way” is paying off in creative flexibility. Before I get “too fancy” with my tutorials I am going to start with the basics. As we continue our journey in becoming data alchemists, I will work to ramp up the sophistication of my spells 😊. In this three-part tutorial, this being part-one, I will introduce you to the Dataframe, a central tool in Python, and what I would consider Pythons version of the Excel Worksheet. I have developed this tutorial as the precursor to part two and three in manner which get progressively more technical. Here I introduce you to the “basic” coding required to get to know and clean data using Dataframes. In part two of this tutorial I introduce more “advanced” techniques for summarizing and formatting analysis with Dataframes and in part three I cover some tips on formatting data using multiple formats in one column. Now lets get started with this tutorial! What is a DataFrame? Like the Excel worksheet, the Dataframe is used to view, clean, and transform data into insights. Essentially think rows and columns, pivot tables, functions etc, all of the functionality you would expect in a worksheet is mostly available in Dataframes. However, unlike Excel worksheets, Dataframes are not visible until you load them with data and display them. Below, I will cover this simple step within my first example, but before showing you that code, I want to give you a preview of the tasks I will be covering in this tutorial. I selected some basics transformations that frequently need to be available prior to performing analysis. Common Data Preparation Tasks Step 1. Loading Your Data — Using Jupyter Notebook # Code snippets shown in grey for easy cutting and pasting import pandas as pd # read file stored in current working directory, using excel file here df_BlogMovieData = pd.read_excel(‘BlogMovieData.xlsx’) df_BlogMovieData Step 2. Cleaning Up Your Data — Finding Nulls, Understanding Data Types, Changing Data Types, and Formatting Data - Checking for Nulls in our columns: df_BlogMovieData.isnull().sum() - Filling Blanks or Nulls columns with 0 # Fill nulls with 0 df_BlogMovieData.fillna(0, inplace=True) - Changing data types if required, and format your numbers to improve comprehension df_BlogMovieData[“Profits”].astype(float) #(float, int,other) df_BlogMovieData.head().style.format({‘ProductionCost’: ‘${:,.0f}’, ‘Domestic_Gross’ : ‘${:,.0f}’,’Foriegn_Gross’ : ‘${:,.0f}’, ‘Worldwide_Gross’ : ‘${:,.0f}’,’Profits’ : ‘${:,.0f}’}) Step 3. Making Changes to the Structure of your Dataframes — Delete Columns, Rename Columns, Combining Columns - Get Column Names df_BlogMovieData.columns - Drop Column Names df_BlogMovieData.drop(‘Profits’, axis=1, inplace = True) - Rename Columns df_BlogMovieData.rename(columns={“studio”: “Movie_Studio”}, inplace = True) - Add Columns & Combine Numbers df_BlogMovieData[‘TotalProfits’] =(df_BlogMovieData[‘Worldwide_Gross’] — df_BlogMovieData[‘ProductionCost’]) - Summing Column TheTotalProfits = df_BlogMovieData[‘TotalProfits’].sum() - Sorting Column TheTotalProfits = df_BlogMovieData[‘TotalProfits’].sum() Step 4. Getting More Advanced — Extracting Data from Dataframe Columns using .apply(Lambda with .split()) Now that you have a sense of the basics, lets look at something a little more advanced but necessary when cleaning data… extracting substrings from larger strings. In python there are several ways this can be done. Below I have shown an intuitive way for the beginner to perform string extraction. In Excel we use — “Left”, “Right” or “Mid”. In Python its not quite that simple. In Python we use a combination of functions “.split()”, “apply()” and something called “Lambda” to access each element in your Dataframe. See the below example showing extracting left, mid, and right using hypothetical characters”~” & “#” for this example. - Sorting Column TheTotalProfits = df_BlogMovieData[‘TotalProfits’].sum() - Get left GetTheLeft = df_BlogMovieData[“MovieTitle”].apply(lambda x: x.split(‘~’)[0] if x.find(“~”) != -1 else None)# Essentially . apply() passes in our element, cell by cell, in the lambda we look for “~”, if it finds it, it splits the string at the “~” and returns a list [0,1] in two parts at locations 0, and 1. The x.split[0] is the first location and returns the string on the left side and passed into our variable. - Get mid GetTheMiddle = df_BlogMovieData[“MovieTitle”].apply(lambda x: x.split(‘~’)[1].split(‘#’)[0] if x.find(“~”) != -1 else None)# similar to above, except here we find if a “~” exists in our string, if so split at “#”, which returns two strings. We access the first string using x.split[0] and the split it again at the “~” but this time instead of grabbing the first location([0]) we grab the second with x.split[1] which is then inserted into our variable. - Get right GetTheRight = df_BlogMovieData[“MovieTitle”].apply(lambda x: x.split(‘#’)[1] if x.find(“~”) != -1 else None)# Similar to left above, except change the character in which we split and access the second position[1] vs. first position in returned variable ([0]). print (GetTheLeft) print (GetTheMiddle) print (GetTheRight) Summary & Final Note Above I covered some of the basics of Dataframes. My aim here and beyond is to provide a gentle introduction to Python in a manner that novices can understand. In future blogs we will get more advanced with material (See (Summarizing data in DataFrames, and Formatting Frustrations with df.describe()). I look forward to seeing you in our next adventure in becoming Data Alchemists! Next Stop — DATA ALCHEMY!
https://rgpihlstrom.medium.com/becoming-a-data-alchemist-dataframes-85b299609b06
CC-MAIN-2021-31
refinedweb
912
57.87
I want to build my ZigBee networks containing 5 XBee Pro S2(5 Arduino Uno) as Router and one gateway CPX4 as Coordinator. Configuration: --->In the gateway: *PAN ID: 0x4a59 *channel: 0xc *scan all channels ---->In router XBee Pro 2: *PAN ID: 4a59 *channel scan: 16 *DH, DL destination: 0 I choose the star topology because each xbee module will send a temperature and humidity value to the server through the gateway. These XBee modules receive nothing from gateway.(modules---->gateway). It's the best choice or not ?if you can give me suggesions? in script python: To start,I tested this code which i can find my network XBee (it work 100%): - Code: Select all print 'Starting up...' #print ("We have " + nodes.size() + " Nodes. Checking to see if incoming is new...") nodes = zigbee.getnodelist(refresh=True) nodes = filter(lambda x: x.type != 'coordinator', nodes) # Print the table: print "%12s %12s %8s %24s" % \ ("Label", "Type", "Short", "Extended") print "%12s %12s %8s %24s" % \ ("-" * 12, "-" * 12, "-" * 8, "-" * 24) for node in nodes: print "%12s %12s %8s %12s" % \ (node.label, node.type, \ node.addr_short, node.addr_extended) I can implemented the protocol CSMA / CD (Carrier Sense Multiple Access / Collision Detection): the module xbee sent these doonnes when the channel is available (in script)? With this code , i read a data (Byte) from one XBee (with the x-ctu , i send a number and I receive.. work 100%) ..but ...How do I get the data from the 5 XBee? - Code: Select all from socket import * # Create the socket, datagram mode, proprietary transport: sd = socket(AF_XBEE, SOCK_DGRAM, XBS_PROT_TRANSPORT) sd.bind(("", 0xe8, 0, 0)) # Block until a single frame is received, up to 255 bytes: print "Waitting For New Packet" #sd.recvfrom(packetSize) payload, src_addr = sd.recvfrom(255) print "payload" print payload Arduino How can I send a number from xbee to the gateway?!! - Code: Select all #include <XBee.h> #include <string.h> XBee xbee = XBee(); char basehtml[30] = "ok"; XBeeAddress64 addr64 = XBeeAddress64(0x00000000, 0x00000000); ZBTxRequest zbTx = ZBTxRequest(addr64, (uint8_t*) (basehtml) , sizeof(basehtml)); void setup() { xbee.begin(9600); Serial.begin(9600); } void loop() { xbee.send(zbTx); delay(1000); } Thanks... Sincerely,
http://adafruit.com/forums/viewtopic.php?f=8&t=37931&p=187146
CC-MAIN-2014-15
refinedweb
353
65.22
Hi this is a homework assignment, and I dont want anybody to just give it away because I really want to learn my self but I am so stumped.. I understand I need too add the first number of pennies together but I am so confused on how to do this, would i use 2 variables to get this and add them together? But if i did this I dont think it would work in one cout statement since Day 1 should start with 1 penny... so thats why I was thinking I needed to use pennies++; but I have no clue somebody please point me in the right direction here is the code so far it should be 1 penny for day 1it should be 1 penny for day 1Code:#include <iostream> #include <cmath> #include <iomanip> using namespace std; int main() { int totalDays = 0, pennies = 0, days = 1, pay; do{ cout << "How many days have you worked? "; cin >> totalDays; if(totalDays < 1) cout << endl << "Pick up some hours please...re-enter." << endl << endl; }while(totalDays < 1); cout << setw(5) << left << endl << "Day #" << setw(17) << right << "Pay" << endl << "------------------------" << endl; while(days <= totalDays) { pennies += days; cout << setw(5) << left << "Day " << days++ << setw(17) << right << pennies; cout << endl; } system("pause"); return 0; } 2 pennies for day 2 4 pennies for day 3 8 pennies for day 4 because 1+2+4 plus the 1 penny for that day...and this makes me think like the pennies for all those days have to have seperate variables but I know for sure that thats not what im suppose to do...so confused
http://cboard.cprogramming.com/cplusplus-programming/142295-adding-pennies-together-every-day-they-add-together-help.html
CC-MAIN-2014-35
refinedweb
268
68.67
#include <Modelx.h> List of all members. Constructor Destructor Intersection method using bounding spheres between this and another model. Creates the dynamic bounding sphere with its bounding sphere hierarchy. Creates the default animation if one exists. Returns the current animation. Returns the absolute elapsed time. Init the given model with the core model. Intersection of a ray with a model. Returns if an animation is current running. Renders the given model. Resets and cleans up the model. Sets the elapsed time by frame. Sets the elapsed time. Time is clamped between the current animation time. Sets the next animation. Updates the given model by time. The dynamic bounding sphere of tis model. Orientation of the model in world space. Position of the model in world space. Scaling of the model. Flag, if quaternions should be used. By default on.
http://es3d.sourceforge.net/doxygen/class_e_s3_d_1_1_modelx.html
CC-MAIN-2017-30
refinedweb
139
65.08
Introduction: After saving images into folder now I am going to explain how to display images slideshow using Ajax Slideshowextender control from project folder images using asp.net. you observe above code I have define lot of properties to ajax:SlideShowExtender now I will explain each property NextButtonID - ID of the button that will allow you to see the next picture. PlayButtonID - ID of the button that will allow you to play/stop the slideshow. PreviousButtonID - ID of the button that will allow you to see the previous picture. PlayButtonText - The text to be shown in the play button to play the slideshow. StopButtonText - The text to be shown in the play button to stop the slideshow. PlayInterval - Interval in milliseconds between slide transitions in play mode. ImageTitleLabelID - ID of Label displaying current picture's title. ImageDescriptionLabelID - ID of Label describing current picture. Loop - Setting this to true will allow you to view images in a round-robin fashion. AutoPlay - Setting this to true will play the slideshow automatically on render. SlideShowServicePath - Path to the webservice that the extender will pull the images from. SlideShowServiceMethod - The webservice method that will be called to supply images After that add one new Images folder to your application and add some images to that folder here we are going to display slideshow based on images available in Images folder. After that add one new webservice page to your application and give name as Slideshow.asmx because here I used the same name in slideshowextender if you give different name change the path name of slideshowextender also. Here we need to remember one point that is we need to write webmethods this format only and use exact parameters that should be same as whatever I mentioned in web method In this web method we have a chance to change GetSlides method name only but return type and parametername should match After that write the following code in webservice page Demo Download sample code attached 39 comments : its so nice great work ya...nice and asp.net vb? i have a gallery which contain more than one album.. whihc stores images of all album in same folder... how can i use slideshow album wise... Thanks man. very helpful. Dear Suresh, I this tutorial extremely useful, it is the one that I was looking for hours. But I need to modify it a bit like follow: I don't need a play button to use it to play but to get the full link of image and set for example to and textbox!! If you could help me on that it will be highly appreciated. However thanks a lot for sharing. Armend how can I make it that when I click on an image, the image will be shown as popup? @Josey.. check these posts show image in popup whenever click on image Image slideshow gallery in asp.net iTS rEALLY SUPERBB YAAR.. THNKS nice work dude. Appriciate Thanks dear its very usefull to me...because i were face problems to show images using below code AjaxControlToolkit.Slide[] slides = new AjaxControlToolkit.Slide[3]; slides[0] = new AjaxControlToolkit.Slide("8.jpg", "Image1", "house"); slides[1] = new AjaxControlToolkit.Slide("9.jpg", "Image2", "seaside"); slides[2] = new AjaxControlToolkit.Slide("img05.jpg", "Image3", "car"); return (slides); but after apply ur code my problem is solved..now i enjoy Slideshow in my Testing Website.. Hi nice coding but this is one time slideshow for images but if i want to do more that one time with different images then what to do and how to save this slideshow in our computer Done Same code... but nothing happen... I write code in contentPlaceHolder... please help me with it... thanks in advance... Dear Suresh, Nice tutorial very useful. But how can i apply effects to images that means Image will slide from left to right or vice versa. Please Help !!!!!!!!!!!!! Hi,. how to prevent click previous button when first image is displaying its not working in my case i hv done with same instructions... How To set a hyperlink on the image? hi Suresh .., This is Nice. and i will develop a Photo Gallery for my existing site .. My Requirement is i have a moster page div tag ie Gallery .. and i will create a special folder ie Admin .Then i will give access to Admin to upload a image file to DB from admin access page and it will show or Reflect to total all other pages .. how to write a code? Hi Suresh this is very nice. may i know how to implement the same with ashx file my concept is images stored in database as binary format and i need to retrive those images and presented in a ajax slide extender control is it possible to do..? Hello, I am trying to implement this, but in this line: string[] imagenames = System.IO.Directory.GetFiles(Server.MapPath("~/Images")); We have the pictures ina virtual directory, so I read from the database something like this "" and so on. How can I do it??? sir, vb.code pls.. that is very helpful Dear Suresh thank you. Your code is very helpful for the asp.net lovers like me. Thank you very much. Hi, im not able to see the images.. i gave the folder name in the code :( lukin good suresh..hahahah :) :) I want to pass the location of image from database to slideshowextender (getslide() method)..Please help friend.... Thanks.. I want to pass the location of image from database to slideshowextender (getslide() method)..Please help friend.... Thanks.. Helpful advice in Ajax Slideshowextender control. Thanks! Anna.... nuvvu manishivi kaadey maa paalita devunivey.. Hi Well how come we implement the same thing when we have to consume the web-service you created above from other website ?? i am facing many problems .. anyone tried so ?? servicemappath is a headache issue for me ?? help meeee if we want to click on the individual slideshow image and it must redirect to another page ....is there any solution for it thanks its very useful is there any way to display contents of database as slide show like one record at time from database in vb.net dear suresh i put your html code in master page... but this didnt work...that is only the designing of table with three buttons and labels are show only.when click play button it only changed to stop.but images not shows and remaining button did'nt work. what is my problem.. please help me immediately bcz i should submit my project within one day... web service code is using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Services; /// /// Summary description for Slideshow /// Slideshow : System.Web.Services.WebService { public Slideshow() { //Uncomment the following line if using designed components //InitializeComponent(); } [System.Web.Services.WebMethod] [System.Web.Script.Services.ScriptMethod] public AjaxControlToolkit.Slide[] GetSlides() { string[] imagenames =System.IO.Directory.GetFiles(Server.MapPath("~/SlideImages")); AjaxControlToolkit.Slide[] photos = new AjaxControlToolkit.Slide[imagenames.Length]; for (int i = 0; i < imagenames.Length; i++) { string[] file = imagenames[i].Split('\\'); photos[i] = new AjaxControlToolkit.Slide("~/SlideImages/" + file[file.Length - 1], file[file.Length - 1], ""); } return photos; } } Hello Suresh, This very uf and thank u.... how to bind images from xml file in Asp.net ajax slideshow extender.Plz help as soon as possible
https://www.aspdotnet-suresh.com/2011/03/ajax-slideshowextender-control-sample.html
CC-MAIN-2019-47
refinedweb
1,219
68.16
Is there a way to, for example, print Hello World! every n seconds? For example, the program would go through whatever code I had, then once it had been 5 seconds (with time.sleep()) it would execute that code. I would be using this to update a file though, not print Hello World. For example: startrepeat("print('Hello World')", .01) # Repeats print('Hello World') ever .01 seconds for i in range(5): print(i) >> Hello World! >> 0 >> 1 >> 2 >> Hello World! >> 3 >> Hello World! >> 4 import threading def printit(): threading.Timer(5.0, printit).start() print "Hello, World!" printit() # continue with the rest of your code My humble take on the subject, a generalization of Alex Martelli's answer, with start() and stop() control: from threading import Timer class RepeatedTimer(object): def __init__(self, interval, function, *args, **kwargs): self._timer = None self.interval = interval self.function = function self.args = args self.kwargs = kwargs self.is_running = False self.start() def _run(self): self.is_running = False self.start() self.function(*self.args, **self.kwargs) def start(self): if not self.is_running: self._timer = Timer(self.interval, self._run) self._timer.start() self.is_running = True def stop(self): self._timer.cancel() self.is_running = False Usage: from time import sleep def hello(name): print "Hello %s!" % name print "starting..." rt = RepeatedTimer(1, hello, "World") # it auto-starts, no need of rt.start() try: sleep(5) # your long-running job goes here... finally: rt.stop() # better in a try/finally block to make sure the program ends! Features: start()and stop()are safe to call multiple times even if the timer has already started/stopped intervalanytime, it will be effective after next run. Same for args, kwargsand even function!
https://pythonpedia.com/en/knowledge-base/3393612/run-certain-code-every-n-seconds
CC-MAIN-2020-16
refinedweb
284
63.56
Convolution and Correlation Usage Examples This section demonstrates how you can use the routines to perform some common convolution and correlation operations both for single-threaded and multithreaded calculations. The following two sample functions Intel® oneAPI Math Kernel Library scond1and sconf1simulate the convolution and correlation functions SCONDand SCONFfound in IBM ESSL* library. The functions assume single-threaded calculations and can be used with C or C++ compilers. Function scond1for Single-Threaded Calculations #include "mkl_vsl.h" int scond1( float h[], int inch, float x[], int incx, float y[], int incy, int nh, int nx, int iy0, int ny) { int status; VSLConvTaskPtr task; vslsConvNewTask1D(&task,VSL_CONV_MODE_DIRECT,nh,nx,ny); vslConvSetStart(task, &iy0); status = vslsConvExec1D(task, h,inch, x,incx, y,incy); vslConvDeleteTask(&task); return status; } Function sconf1for Single-Threaded Calculations #include "mkl_vsl.h" int sconf1( int init, float h[], int inc1h, float x[], int inc1x, int inc2x, float y[], int inc1y, int inc2y, int nh, int nx, int m, int iy0, int ny, void* aux1, int naux1, void* aux2, int naux2) { int status; /* assume that aux1!=0 and naux1 is big enough */ VSLConvTaskPtr* task = (VSLConvTaskPtr*)aux1; if (init != 0) /* initialization: */ status = vslsConvNewTaskX1D(task,VSL_CONV_MODE_FFT, nh,nx,ny, h,inc1h); if (init == 0) { /* calculations: */ int i; vslConvSetStart(*task, &iy0); for (i=0; i<m; i++) { float* xi = &x[inc2x * i]; float* yi = &y[inc2y * i]; /* task is implicitly committed at i==0 */ status = vslsConvExecX1D(*task, xi, inc1x, yi, inc1y); }; }; vslConvDeleteTask(task); return status; } Using Multiple Threads For functions such as sconf1described in the previous example, parallel calculations may be more preferable instead of cycling. If , you can use multiple threads for invoking the task execution against different data sequences. For such cases, use task copy routines to create m>1 mcopies of the task object before the calculations stage and then run these copies with different threads. Ensure that you make all necessary parameter adjustments for the task (using Task Editors) before copying it. The sample code in this case may look as follows: if (init == 0) { int i, status, ss[M]; VSLConvTaskPtr tasks[M]; /* assume that M is big enough */ . . . vslConvSetStart(*task, &iy0); . . . for (i=0; i<m; i++) /* implicit commitment at i==0 */ vslConvCopyTask(&tasks[i],*task); . . . Then, mthreads may be started to execute different copies of the task: . . . float* xi = &x[inc2x * i]; float* yi = &y[inc2y * i]; ss[i]=vslsConvExecX1D(tasks[i], xi,inc1x, yi,inc1y); . . . And finally, after all threads have finished the calculations, overall status should be collected from all task objects. The following code signals the first error found, if any: . . . for (i=0; i<m; i++) { status = ss[i]; if (status != 0) /* 0 means "OK" */ break; }; return status; }; /* end if init==0 */ Execution routines modify the task internal state (fields of the task structure). Such modifications may conflict with each other if different threads work with the same task object simultaneously. That is why different threads must use different copies of the task.
https://software.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-c/top/statistical-functions/convolution-and-correlation/convolution-and-correlation-usage-examples.html
CC-MAIN-2021-31
refinedweb
488
50.46
Abstract base class for NetDevice queue length controller. More... #include "queue-limits.h" Abstract base class for NetDevice queue length controller. Introspection did not find any typical Config paths. QueueLimits is an abstract base class providing the interface to the NetDevice queue length controller. Child classes need to implement the methods used for a byte-based measure of the queue length. The design and implementation of this class is inspired by Linux. For more details, see the queue limits Sphinx documentation. No Attributes are defined for this type. No TraceSources are defined for this type. Size of this type is 32 bytes (on a 64-bit architecture). Definition at line 43 of file queue-limits.h. Definition at line 41 of file queue-limits.cc. References NS_LOG_FUNCTION. Available is called from NotifyTransmittedBytes to calculate the number of bytes that can be passed again to the NetDevice. A negative value means that no packets can be passed to the NetDevice. In this case, NotifyTransmittedBytes stops the transmission queue. Returns how many bytes can be queued Implemented in ns3::DynamicQueueLimits. Record number of completed bytes and recalculate the limit. Implemented in ns3::DynamicQueueLimits. Get the type ID. Definition at line 32 of file queue-limits.cc. References ns3::TypeId::SetParent(). Record the number of bytes queued. Implemented in ns3::DynamicQueueLimits. Reset queue limits state. Implemented in ns3::DynamicQueueLimits.
https://www.nsnam.org/doxygen/classns3_1_1_queue_limits.html
CC-MAIN-2022-33
refinedweb
226
54.08
Introduce Header auto-generates a header, previously defined by the user. Language Support Supported: C#, VB.NET, JavaScript Not relevant: ASP.NET, XAML, HTML Go to JustCode -> Options -> Code Style -> Common -> File Header Text Type the text that you wish to appear as a header when using command Introduce Header. You can use pre-defined tags using $TAGNAME$ syntax. Their content will be replaced by a context sensitive content when applying the command. Available tags: DATE - date when the command is applied AUTHOR - currently logged user for the machine PROJECT_NAME - name of the project in which the file is located FILE_NAME - name of the file for which the command is applied Escaping character '\' can be used to escape the tags replacement. Example: "\$AUTHOR$". Position the caret over a using statement, namespace statement or an empty line at the beginning of a C#, VB or JavaScript file. Select Introduce Header from the VisualAids Code menu. The result is (taking into account the header text from the tags example above): The command is also available as a Code Cleaning Step.
https://docs.telerik.com/help/justcode/code-generation-introduce-header.html
CC-MAIN-2020-24
refinedweb
178
53.81
I came across an issue that recently left me scratching my head. Although it's quite specific, this post might help somebody in the future should they notice strange things happening when editing their request object. TL;DR: If you update (i.e. copy and replace) request.POST or request.GET in a view or elsewhere, any subsequent calls to request.REQUEST will still return values from the old, outdated GET and POST dictionaries. This can cause difficult-to-debug problems I was in the following situation; a view foo_view received a request, copied and updated the request.POST passing control over to a bar_view. This bar_view then read the request information using request.REQUEST and used the updated value to execute some code and return a response. def bar_view(request): country = request.REQUEST.get("country", None) ... return render_to_response(...) def foo_view(request): request.POST = request.POST.copy() request.POST.update({ 'country': 'Ireland' }) return bar_view(request) countryshould be Irelandbut instead it was repeatedly returning None. I imagined that after updating the POSTdictionary REQUESTwould now return values from that updated dictionary. This is not the case. The docs say of request.REQUEST: For convenience, a dictionary-like object that searches POSTfirst, then GET. Inspired by PHP's $_REQUEST. For example, if GET = {"name": "john"}and REQUEST["name"]would be "john", and REQUEST["age"]would be "34". While request.POST and request.GET are dictionary-like QueryDict instance, request.REQUEST is actually a MergeDict instance. This MergeDictlooks like a dictionary from the outside, but actually acts as a wrapper around a number of other dictionaries, and simply loops through its children looking for matches when queried. This means that if you update request.POST or request.GET, any calls to request.REQUEST after the update will still return values from the old GET and POST dictionaries. What to take away: - Avoid rendering and returning other views. I find that this only leads to trouble. I can become very difficult to visualise what is happening. Issue redirects instead where possible. - Don't use request.REQUEST. It is a nice convenience for checking all passed parameters but if you are doing anything more complex be sure you understand what it does. - Be extra careful if you are editing the requestobject within a request/response cycle.
https://timmyomahony.com/blog/djangos-request-updating-with-request-get
CC-MAIN-2021-17
refinedweb
380
53.17
I have been a histogram assignments to do in C++ although i kno how to do in Visual Basic. I have tried to do it in C++ but it seems ive missed out a few steps. below is what i have done so far, please correct me where am wrong. thank Ellie The program should allow the tutor to enter in the various marks which the students have been awarded, until the tutor enters in a mark exceeding 100. At this point the program should display a histogram. Each star represents a student who achieved a module mark in the range shown. This is an example of the output. The example below shows the distribution of marks for 20 students. Your program should work with any number of student marks entered. 0-29 *** 30-39 ***** 40-69 ******** 70-100 **** 20 students in total. • As the tutor enters each mark, a counter should count the number of student’s marks which have been entered. • Use the same 4 category ranges shown here. • Make sure the display is neatly formatted as above. • Your program should make use of ‘loops’ for the display of each category. #include <iostream> using namespace std; int main () { int Grade1 = 0-29; int Grade2 = 30-39; int Grade3 = 40-69; int Grade4 = 70-100; int Mark = 0; int Totalmark = 0; while (Mark <= 100) cout<< Grade1<< endl; (Mark >= 0 && Mark <= 29) Grade1 = Grade1 + 1 // am finding a problem on this line Totalmark = Marks + Totalmark return 0; }
https://www.daniweb.com/programming/software-development/threads/404020/i-need-help-please-c-histogram
CC-MAIN-2018-51
refinedweb
247
80.72
supuflounder I’m not sure what problem you are referring to. The initial problem was (and still is) a tendency to crash if a print statement was used. The suggestion about using the print statement shown was to avoid the crash, but since I have debugged that piece of code, I have no need of that print statement. The question now is why doesn’t touch_began get called. I fail to see how the suggestion of the print statement solved that. I did make the change, and was completely unsurprised when the solution to problem A did not fix problem B.? supuflounder class dig(Scene) ...other methods, setup, draw,etc. def touch_began(self, touch): x, y = touch.location print('touch_began [', x, ',', y, ']') This was based on the example found here supuflounder I converted it from ‘update’ to ‘draw’, no joy. Still crashed after 20 seconds. I commented out the print statement, twenty minutes ago, and it is still running. This is sort of scary; how could a print statement cause it to crash? And if the answer is “it filled up the console buffers and ran out of space,” the question then becomes “how could this be allowed to happen?” It could stop logging, it could issue an error message and stop the program, it could start throwing away lines at the beginning. I was using all of these techniques 25 years ago. I could even detect a recursive function running out of stack space, and stop execution. I did that 30 years ago. So I am very disturbed when a program whose purpose it is to help people learn a language can be crashed by too many print statements. So I tried to use touch_began, and it did not respond to touches. I had put a print statement in, def touch_began(self, touch): x, y = touch.location print('touch_began [', x, ',' , y, ']') and if I touched the scene at all, no printout seems to have occurred. Any ideas what I’ve done wrong now? Some more study suggests that I probably want to represent the 36 squares as ShapeNode subclasses. I am expecting that the list is in Z-order, from bottom to top. What I then want to do is place the animated nodes, a different subclass of ShapeNode, at the end of this list, so they appear on top of the squares. These nodes may be removed by a user action (for example, a successful hit). These shapes may span several squares (they are long and thin, think snakes). I’ve written a book which had many chapters on graphics APIs in Windows, and pre-retirement I taught courses on this, so I am not a newbie at graphics and animations; my problem is to map what I know to the Pythonista libraries. supuflounder @JonB: I based my code on the example found in It seems to put the drawing code in ‘update’ and there is no ‘draw’ method. If this is not the correct way to do it, perhaps this example should be changed. supuflounder @cvp: there is no animation because this represents about 2% of the total code that this app will need supuflounder I am writing my first Python program. The code so far is what is shown below. For right now, it draws a square grid on the left side of the screen, in landscape mode. When I get parts of it working, I will begin to worry about issues like switching from landscape to portrait, but that’s for later. The goal right now is to draw a 6x6 grid and respond to finger touches. <<How do you use this forum? I can’t scroll it up to see my message as I am typing it in! I lost track of where I was and can’t figure out how to get back to it. Attempts to position the caret seem to just toggle between composition (keyboard active) and display (keyboard gone). Attempts to position the caret are frustrating because instead of dragging the carer, it scrolls the window. And the caret seems to have an off-by-one-line error; when I finally get it positioned, the typing goes into the line avove. And the window does not autoscroll when I’m typing into it, so the bottomost line is invisible behind the keyboard.>> Before I lost track of where I was, I started to explain that the program crashes after 20 seconds. I’m using the Python 3.x version to run it. I conjecture two explanations: There is a bug in Pythonista or Python that I am hitting I am doing something in this code that is frantically consuming resources, and when they run out, instead of a friendly warning telling me where it failed, it just quits. The most likely candidate is the second item. Can someone check the code below and see if I have committed some egregious newbie errot? from scene import * # A square is defined by a tuple # x0, y0, x1, y1, count # class dig (Scene): def setup(self): self.background_color = 'green' def update(self) : width = min(self.size[0], self.size[1]) background('green') fill('red') stroke(1,1,0) stroke_weight(3) rect(0,0,width,width) print('rect(', 0, ',', 0, ',', width, ',', width, ')') delta = width/6 # draw the vertical grid lines stroke_weight(1) for i in range(1, 6): line(i * delta, 0, i * delta, width) # draw the horizontal grid lines for i in range(1,6): line(0, i * delta, width, i * delta) run(dig()) <<this line was the vestigial line that I lost track of>> But the problem is that this runs for about 20 seconds,
https://forum.omz-software.com/user/supuflounder
CC-MAIN-2022-27
refinedweb
943
79.4
The Break node terminates the closest enclosing loop or switch statement in which it appears. Control is passed to the statement that follows the terminated statement, if any. Examples In this example, the For Number Loop node is called at start. It then loop 5x each loop it call “if node” if the index is equal to 3 then it will stop the loop. The “Debug Log” node will called after “If node” finished whatever the condition is true or false, but when the index is 3 the node will never be called again because the loop is stopped by “Break” node. Flow Graph: Generated Script: using UnityEngine; using System.Collections.Generic; public class Program : MonoBehaviour { private int index; public void Start() { for(index = 0; index < 5; index += 1) { if((index == 3)) { break; } Debug.Log(index); } } } Output: 0 1 2
https://maxygames.com/docs/unode/nodes/break/
CC-MAIN-2022-33
refinedweb
140
71.34
A list is an ordered collection of values. In Java, lists are part of the Java Collections Framework. Lists implement the java.util.List interface, which extends java.util.Collection. A list is an object which stores a an ordered collection of values. "Ordered" means the values are stored in a particular order--one item comes first, one comes second, and so on. The individual values are commonly called "elements". Java lists typically provide these features: Adding a value to a list at some point other than the end will move all of the following elements "down" or "to the right". In other words, adding an element at index n moves the element which used to be at index n to index n+1, and so on. For example: List<String> list = new ArrayList<>(); list.add("world"); System.out.println(list.indexOf("world")); // Prints "0" // Inserting a new value at index 0 moves "world" to index 1 list.add(0, "Hello"); System.out.println(list.indexOf("world")); // Prints "1" System.out.println(list.indexOf("Hello")); // Prints "0" The Collections class offers two standard static methods to sort a list: sort(List<T> list)applicable to lists where T extends Comparable<? super T>, and sort(List<T> list, Comparator<? super T> c)applicable to lists of any type. Applying the former requires amending the class of list elements being sorted, which is not always possible. It might also be undesirable as although it provides the default sorting, other sorting orders may be required in different circumstances, or sorting is just a one off task. Consider we have a task of sorting objects that are instances of the following class: public class User { public final Long id; public final String username; public User(Long id, String username) { this.id = id; this.username = username; } @Override public String toString() { return String.format("%s:%d", username, id); } } In order to use Collections.sort(List<User> list) we need to modify the User class to implement the Comparable interface. For example public class User implements Comparable<User> { public final Long id; public final String username; public User(Long id, String username) { this.id = id; this.username = username; } @Override public String toString() { return String.format("%s:%d", username, id); } @Override /** The natural ordering for 'User' objects is by the 'id' field. */ public int compareTo(User o) { return id.compareTo(o.id); } } (Aside: many standard Java classes such as String, Long, Integer implement the Comparable interface. This makes lists of those elements sortable by default, and simplifies implementation of compare or compareTo in other classes.) With the modification above, the we can easily sort a list of User objects based on the classes natural ordering. (In this case, we have defined that to be ordering based on id values). For example: List<User> users = Lists.newArrayList( new User(33L, "A"), new User(25L, "B"), new User(28L, "")); Collections.sort(users); System.out.print(users); // [B:25, C:28, A:33] However, suppose that we wanted to sort User objects by name rather than by id. Alternatively, suppose that we had not been able to change the class to make it implement Comparable. This is where the sort method with the Comparator argument is useful: Collections.sort(users, new Comparator<User>() { @Override /* Order two 'User' objects based on their names. */ public int compare(User left, User right) { return left.username.compareTo(right.username); } }); System.out.print(users); // [A:33, B:25, C:28] In Java 8 you can use a lambda instead of an anonymous class. The latter reduces to a one-liner: Collections.sort(users, (l, r) -> l.username.compareTo(r.username)); Further, there Java 8 adds a default sort method on the List interface, which simplifies sorting even more. users.sort((l, r) -> l.username.compareTo(r.username)) Giving your list a type To create a list you need a type (any class, e.g. String). This is the type of your List. The List will only store objects of the specified type. For example: List<String> strings; Can store "string1", "hello world!", "goodbye", etc, but it can't store 9.2, however: List<Double> doubles; Can store 9.2, but not "hello world!". Initialising your list If you try to add something to the lists above you will get a NullPointerException, because strings and doubles both equal null! There are two ways to initialise a list: Option 1: Use a class that implements List List is an interface, which means that does not have a constructor, rather methods that a class must override. ArrayList is the most commonly used List, though LinkedList is also common. So we initialise our list like this: List<String> strings = new ArrayList<String>(); or List<String> strings = new LinkedList<String>(); Starting from Java SE 7, you can use a diamond operator: List<String> strings = new ArrayList<>(); or List<String> strings = new LinkedList<>(); Option 2: Use the Collections class The Collections class provides two useful methods for creating Lists without a List variable: emptyList(): returns an empty list. singletonList(T): creates a list of type T and adds the element specified. And a method which uses an existing List to fill data in: addAll(L, T...): adds all the specified elements to the list passed as the first parameter. Examples: import java.util.List; import java.util.Collections; List<Integer> l = Collections.emptyList(); List<Integer> l1 = Collections.singletonList(42); Collections.addAll(l1, 1, 2, 3); The List API has eight methods for positional access operations: add(T type) add(int index, T type) remove(Object o) remove(int index) get(int index) set(int index, E element) int indexOf(Object o) int lastIndexOf(Object o) So, if we have a List: List<String> strings = new ArrayList<String>(); And we wanted to add the strings "Hello world!" and "Goodbye world!" to it, we would do it as such: strings.add("Hello world!"); strings.add("Goodbye world!"); And our list would contain the two elements. Now lets say we wanted to add "Program starting!" at the front of the list. We would do this like this: strings.add(0, "Program starting!"); NOTE: The first element is 0. Now, if we wanted to remove the "Goodbye world!" line, we could do it like this: strings.remove("Goodbye world!"); And if we wanted to remove the first line (which in this case would be "Program starting!", we could do it like this: strings.remove(0); Note: Adding and removing list elements modify the list, and this can lead to a ConcurrentModificationException if the list is being iterated concurrently. Adding and removing elements can be O(1) or O(N) depending on the list class, the method used, and whether you are adding / removing an element at the start, the end, or in the middle of the list. In order to retrieve an element of the list at a specified position you can use the E get(int index); method of the List API. For example: strings.get(0); will return the first element of the list. You can replace any element at a specified position by using the set(int index, E element);. For example: strings.set(0,"This is a replacement"); This will set the String "This is a replacement" as the first element of the list. Note: The set method will overwrite the element at the position 0. It will not add the new String at the position 0 and push the old one to the position 1. The int indexOf(Object o); returns the position of the first occurrence of the object passed as argument. If there are no occurrences of the object in the list then the -1 value is returned. In continuation of the previous example if you call: strings.indexOf("This is a replacement") the 0 is expected to be returned as we set the String "This is a replacement" in the position 0 of our list. In case where there are more than one occurrence in the list when int indexOf(Object o); is called then as mentioned the index of the first occurrence will be returned. By calling the int lastIndexOf(Object o) you can retrieve the index of the last occurrence in the list. So if we add another "This is a replacement": strings.add("This is a replacement"); strings.lastIndexOf("This is a replacement"); This time the 1 will be returned and not the 0; For the example, lets say that we have a List of type String that contains four elements: "hello, ", "how ", "are ", "you?" The best way to iterate over each element is by using a for-each loop: public void printEachElement(List<String> list){ for(String s : list){ System.out.println(s); } } Which would print: hello, how are you? To print them all in the same line, you can use a StringBuilder: public void printAsLine(List<String> list){ StringBuilder builder = new StringBuilder(); for(String s : list){ builder.append(s); } System.out.println(builder.toString()); } Will print: hello, how are you? Alternatively, you can use element indexing ( as described in Accessing element at ith Index from ArrayList ) to iterate a list. Warning: this approach is inefficient for linked lists. Lets suppose you have 2 Lists A and B, and you want to remove from B all the elements that you have in A the method in this case is List.removeAll(Collection); numbersB.removeAll(numbersA); System.out.println("B cleared: " + numbersB); } this will print A: [1, 3, 4, 7, 5, 2] B: [13, 32, 533, 3, 4, 2] B cleared: [13, 32, 533] Suppose you have two lists: A and B, and you need to find the elements that exist in both lists. You can do it by just invoking the method List.retain); List<Integer> numbersC = new ArrayList<>(); numbersC.addAll(numbersA); numbersC.retainAll(numbersB); System.out.println("List A : " + numbersA); System.out.println("List B : " + numbersB); System.out.println("Common elements between A and B: " + numbersC); } List<Integer> nums = Arrays.asList(1, 2, 3); List<String> strings = nums.stream() .map(Object::toString) .collect(Collectors.toList()); That is: Object::toString Stringvalues into a Listusing Collectors.toList() ArrayList is one of the inbuilt data structures in Java. It is a dynamic array (where the size of the data structure not needed to be declared first) for storing elements (Objects). It extends AbstractList class and implements List interface. An ArrayList can contain duplicate elements where it maintains insertion order. It should be noted that the class ArrayList is non-synchronized, so care should be taken when handling concurrency with ArrayList. ArrayList allows random access because array works at the index basis. Manipulation is slow in ArrayList because of shifting that often occurs when an element is removed from the array list. An ArrayList can be created as follows: List<T> myArrayList = new ArrayList<>(); Where T ( Generics ) is the type that will be stored inside ArrayList. The type of the ArrayList can be any Object. The type can't be a primitive type (use their wrapper classes instead). To add an element to the ArrayList, use add() method: myArrayList.add(element); Or to add item to a certain index: myArrayList.add(index, element); //index of the element should be an int (starting from 0) To remove an item from the ArrayList, use the remove() method: myArrayList.remove(element); Or to remove an item from a certain index: myArrayList.remove(index); //index of the element should be an int (starting from 0) This example is about replacing a List element while ensuring that the replacement element is at the same position as the element that is replaced. This can be done using these methods: Consider an ArrayList containing the elements "Program starting!", "Hello world!" and "Goodbye world!" List<String> strings = new ArrayList<String>(); strings.add("Program starting!"); strings.add("Hello world!"); strings.add("Goodbye world!"); If we know the index of the element we want to replace, we can simply use set as follows: strings.set(1, "Hi world"); If we don't know the index, we can search for it first. For example: int pos = strings.indexOf("Goodbye world!"); if (pos >= 0) { strings.set(pos, "Goodbye cruel world!"); } Notes: setoperation will not cause a ConcurrentModificationException. setoperation is fast ( O(1)) for ArrayListbut slow ( O(N)) for a LinkedList. indexOfsearch on an ArrayListor LinkedListis slow ( O(N)). The Collections class provides a way to make a list unmodifiable: List<String> ls = new ArrayList<String>(); List<String> unmodifiableList = Collections.unmodifiableList(ls); If you want an unmodifiable list with one item you can use: List<String> unmodifiableList = Collections.singletonList("Only string in the list"); The); The List interface is implemented by different classes. Each of them has its own way for implementing it with different strategies and providing different pros and cons. These are all of the public classes in Java SE 8 that implement the java.util.List interface: public class ArrayList<E> extends AbstractList<E> implements List<E>, RandomAccess, Cloneable, Serializable ArrayList is a resizable-array implementation of the List interface. Storing the list into an array, ArrayList provides methods (in addition to the methods implementing the List interface) for manipulating the size of the array. Initialize ArrayList of Integer with size 100 List<Integer> myList = new ArrayList<Integer>(100); // Constructs an empty list with the specified initial capacity. - PROS: The size, isEmpty, get, set, iterator, and listIterator operations run in constant time. So getting and setting each element of the List has the same time cost: int e1 = myList.get(0); // \ int e2 = myList.get(10); // | => All the same constant cost => O(1) myList.set(2,10); // / - CONS: Being implemented with an array (static structure) adding elements over the size of the array has a big cost due to the fact that a new allocation need to be done for all the array. However, from documentation: The add operation runs in amortized constant time, that is, adding n elements requires O(n) time Removing an element requires O(n) time. On coming On coming public class LinkedList<E> extends AbstractSequentialList<E> implements List<E>, Deque<E>, Cloneable, Serializable LinkedList is implemented by a doubly-linked list a linked data structure that consists of a set of sequentially linked records called nodes. Iitialize LinkedList of Integer List<Integer> myList = new LinkedList<Integer>(); // Constructs an empty list. - PROS: Adding or removing an element to the front of the list or to the end has constant time. myList.add(10); // \ myList.add(0,2); // | => constant time => O(1) myList.remove(); // / - CONS: From documentation: Operations that index into the list will traverse the list from the beginning or the end, whichever is closer to the specified index. Operations such as: myList.get(10); // \ myList.add(11,25); // | => worst case done in O(n/2) myList.set(15,35); // / On coming On coming On coming On coming
https://sodocumentation.net/java/topic/2989/lists
CC-MAIN-2020-29
refinedweb
2,465
57.37
Administration updatetool $ /tools/glassfishv3/bin/updatetool Select JRuby 1.4.0 and click install. JRuby along with Rails 2.3.5, Merb, MySQL JDBC adapters will be installed inside glasfishv3/GlassFish/jruby. Now add it to your path. $ export JRUBY_HOME=/tools/glasfishv3/GlassFish/jruby $ export PATH=$JRUBY_HOME:$PATH Install JRuby 1.4.0 from jruby.org. Configuration Before you begin, first start GlassFish v3 server $ glassfishv3/bin/asadmin start-domain JRuby In case you are using your own JRuby installation then you need to tell GlassFish where to find it. There are 2 ways you can do it. Using asadmin CLI or using Admin console. asadmin CLI $asadmin configure-jruby-container --jruby-home=/tools/jruby-1.4.0 Admin console In your browser type:, then select Configure=>Ruby container and then enter JRuby instalation in JRuby Home field and click Ok. JRuby Runtime Pool JRuby Home to configure the runtime pool settings. Here is how you can configure JRuby runtime pool using asadmin CLI $ asadmin configure-jruby-container --jruby-runtime=2 --jruby-runtime-min=2 --jruby-runtime-max=2 monitoring blog. asadmin CLI Here is how you can configuring monitoring for Ruby applications using asadmin CLI $ asadmin configure-jruby-container --monitoring=true Setting --monitoring to false will simply disable it. Admin console Develop a Ruby application Now that you have installed GlassFish v3 and JRuby and configured the JRuby container, runtime pool etc. Its time to develop a Ruby web application. To keep this blog simple, I would go for an extremely simple Rackup script config.ru. config.ru class HelloWorld def call(env) [200, {"Content-Type" =>"text/plain"}, ["Hello world!"]] end end run HelloWorld.new In the above code HelloWorld class is a Rack application. A Rack application is the one that responds to call and returns array of HTTP status code, HTTP headers and the content. HelloWorld simply returns "HelloWorld!" as the content body. Now lets deploy it. Deployment Assuming you are in the directory where config.ru file is located. Do the following to deploy on GlassFish: $ asadmin deploy . If you want to deploy using some other version of JRuby, lets say you have JRuby 1.5.0.dev and want to deploy using this version of JRuby, this is what you would do: $ asadmin deploy --property jruby.home=/tools/jruby-1.5.0.dev . If you don't like doing it using asadmin CLI, you can use admin console as well. Simply login to and chose applications node and on deployment screen chose applicaton type as Ruby Application. See the image below Now Run... Now access the application at monitoring stats Enterprise Server and then Monitor tab. You should see different stats such as JRuby Runtme Pool, HTTP and JRuby container stats. Go get GlassFish v3 and start running, monitoring your Ruby on Rails or Merb or Sinatra or any Rack application. Send us your comments at users@GlassFish.dev.java.net. - Login or register to post comments - Printer-friendly version - vivekp's blog - 2889 reads
https://weblogs.java.net/node/337976/atom/feed
CC-MAIN-2015-35
refinedweb
500
51.44
Content-type: text/html #include <ar.h> The archive command ar is used to combine several files into one. Archives are used mainly as libraries to be searched by the link editor ld. Each archive begins with the archive magic string. #define ARMAG "!<arch>\n" /* magic string */ #define SARMAG 8 /* length of magic string */ Following the archive magic string are the archive file members. Each file member is preceded by a file member header which is of the following format: #define ARFMAG "`\n" /* header trailer string */ struct ar_hdr /* file member header */ { char ar_name[16]; /* '/' terminated file member name */ char ar_date[12]; /* file member date */ char ar_uid[6] /* file member user identification */ char ar_gid[6] /* file member group identification */ char ar_mode[8] /* file member mode (octal) */ char ar_size[10]; /* file member size */ char ar_fmag[2]; /* header trailer string */ }; All information in the file member headers is in printable ASCII. The numeric information contained in the headers is stored as decimal numbers (except for ar_mode which is in octal). Thus, if the archive contains printable files, the archive itself is printable. If the file member name fits, the ar_name field contains the name directly, and is terminated by a slash (/) and padded with blanks on the right. If the member's name does not fit, ar_name contains a slash (/) followed by a decimal representation of the name's offset in the archive string table described below. The ar_date field is the modification date of the file at the time of its insertion into the archive. Common format archives can be moved from system to system as long as the portable archive command ar is used. Each archive file member begins on an even byte boundary; a newline is inserted between files if necessary. Nevertheless, the size given reflects the actual size of the file exclusive of padding. Notice there is no provision for empty areas in an archive file. Each archive that contains object files (see a.out(4)) includes an archive symbol table. This symbol table is used by the link editor ld to determine which archive members must be loaded during the link edit process. The archive symbol table (if it exists) is always the first file in the archive (but is never listed) and is automatically created and/or updated by ar. The archive symbol table archive member at file offset 114 defines name. The archive member at file offset 122 defines object. The archive member at file offset 426 defines function and the archive member at file offset 434 defines name2. Offset +0 +1 +2 +3 ___________________ 0 | 4 | 4 offset entries |___________________| 4 | 114 | name |___________________| 8 | 122 | object |___________________| 12 | 426 | function |___________________| 16 | 434 | name2 |___________________| 20 | n | a | m | e | |____|____|____|____| 24 | \0 | o | b | j | |____|____|____|____| 28 | e | c | t | \0 | |____|____|____|____| 32 | f | u | n | c | |____|____|____|____| 36 | t | i | o | n | |____|____|____|____| 40 | \0 | n | a | m | |____|____|____|____| 44 | e | 2 | \0 | | |____|____|____|____| The string table contains exactly as many null terminated strings as there are elements in the offsets array. Each offset from the array is associated with the corresponding name from the string table (in order). The names in the string table are all the defined global symbols found in the common object files in the archive. Each offset is the location of the archive header for the associated symbol. If some archive member's name is more than 15 bytes long, a special archive member contains a table of file names, each followed by a slash and a new-line. This string table member, if present, will precede all ``normal'' archive members. The special archive symbol table is not a ``normal'' member, and must be first if it exists. The ar_name entry of the string table's member header holds a zero length name ar_name[0]=='/', followed by one trailing slash (ar_name[1]=='/'), followed by blanks (ar_name[2]==' ', etc.). Offsets into the string table begin at zero. Example ar_name values for short and long file names appear below. Offset +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 __________________________________________________ 0 | f | i | l | e | _ | n | a | m | e | _ | |____|____|____|____|____|____|____|____|____|____| 10 | s | a | m | p | l | e | / | \n | l | o | |____|____|____|____|____|____|____|____|____|____| 20 | n | g | e | r | f | i | l | e | n | a | |____|____|____|____|____|____|____|____|____|____| 30 | m | e | x | a | m | p | l | e | / | \n | |____|____|____|____|____|____|____|____|____|____| Member Name ar_name _______________________________________________________________ short-name | short-name/ | Not in string table | | file_name_sample | /0 | Offset 0 in string table | | longerfilenamexample | /18 | Offset 18 in string table _____________________|______________|___________________________ ar(1), ld(1), strip(1), a.out(4) The strip utility will remove all archive symbol entries from the header. The archive symbol entries must be restored with the -ts options of the ar command before the archive can be used with the link editor ld.
http://backdrift.org/man/SunOS-5.10/man3head/ar.h.3head.html
CC-MAIN-2016-44
refinedweb
861
65.96
I'm going on vacation to Thailand tomorrow (touch wood). But I'm more worried about the flight itself than the current political unrest. My partner Tracy is prone to fidget, plus she has a major Freecell addiction. Combining a 14 hour flight with suddenly going cold turkey on Freecell doesn't strike me as too good an idea! Fortunately, I have a Zune. And I have XNA Game Studio. So I spent a couple of hours over Thanksgiving knocking up a simple mobile version of Freecell. I hadn't done much work on Zune before, but it was pleasingly straightforward to develop on Windows (running in a 320x240 resolution) and then move the result over to Zune. But here's the thing. Our flight lasts 14 hours. The Zune battery does not last that long even when playing music, let alone running a game at the same time! For this to be any use, my game had to use as little power as possible. On Windows or Xbox, battery life is irrelevant. The machine has a certain amount of power available, and you might as well use all of it. If your Update and Draw methods take less than 1/60 second to execute, that means you are wasting processing power that could be used to add more funky effects. Pedantic readers may wish to point out that battery life does matter for Windows laptops. True, but there's not much you can do about it. If a Windows game has spare CPU cycles the XNA Framework just busy-waits until the next tick time. We have to do that because the Windows task scheduler has a rather coarse time granularity. If we put the CPU to sleep, there is no guarantee it will wake up exactly when we want. So on Windows, if you add sleep calls in the interests of power efficiency, you can no longer guarantee a steady framerate. Not so on Zune, which has the ability to sleep for tiny and exact periods of time. So on Zune, if your game uses fixed timestep mode and Update plus Draw finish in less than the allotted time, the framework works out when the next tick is due, then puts the CPU to sleep for exactly the right period. In other words, the less work your code does, the less battery it will use. My first power saving tweak was to reduce the target timestep in my Game constructor: TargetElapsedTime = TimeSpan.FromTicks(TimeSpan.TicksPerSecond / 30); I have some animations of cards sliding around, but these look fine at 30 fps, so there is no need to waste power trying to animate them at 60. I tried reducing this even further to 20 fps, but that didn't look so good. Note how I use TimeSpan.FromTicks rather than TimeSpan.FromSeconds: this is because the TimeSpan.FromSeconds method has a nasty implementation that internally rounds to the nearest millisecond. I've been bitten by that a few times in the past, and learned to do my time computations in ticks rather than seconds. Freecell doesn't have a lot of animation. In fact, most of the time the display is entirely static, and the game is just waiting for user input. I only wanted to bother running my update logic and redrawing the screen if there really was something changing. First off, I changed my InputManager.Update method to report whether any inputs have changed since the previous time I looked at them: class InputManager { GamePadState currentState; GamePadState previousState; public bool Update() { previousState = currentState; currentState = GamePad.GetState(PlayerIndex.One); return currentState != previousState; } I then added two boolean flags to my game class, the first to store whether any cards are currently performing a move animation, and the second to indicate whether the screen needs to be redrawn: public class FreecellGame : Microsoft.Xna.Framework.Game { bool isAnimating = false; bool displayDirty = true; I changed my Update method to work like so: protected override void Update(GameTime gameTime) { if (input.Update() || isAnimating) { puzzle.Update(input); isAnimating = puzzle.IsAnimating; displayDirty |= puzzle.DisplayDirty; displayDirty |= isAnimating; } if (!displayDirty) SuppressDraw(); base.Update(gameTime); } Finally, I updated my Draw method to reset the displayDirty flag: protected override void Draw(GameTime gameTime) { renderer.Begin(); renderer.Draw(backgroundTexture, Vector2.Zero, Color.White); puzzle.Draw(renderer); renderer.End(); base.Draw(gameTime); displayDirty = false; } With this logic in place, most update ticks will wake up, see there is no new input, and immediately put the CPU back to sleep again. Occasionally there will be a new button press, in which case I call puzzle.Update. If that does anything interesting in response to the new input, such as moving the selection focus, it will set the puzzle.DisplayDirty property, which causes me to redraw the screen just once before going back to sleep. Even more occasionally, the puzzle update will move a card to a new location. This begins a sliding animation, which sets puzzle.IsAnimating to true. As long as the animation is in progress, Update and Draw will be called every 30 seconds, the same as any other game. When the card reaches its final location, puzzle.IsAnimating goes back to false, so Update and Draw will no longer be called and the CPU can go back to sleep. I thus created a veritable Prius of the Zune gaming world. But wait... What if Tracy wants to change music in the middle of a game? I should let her bring up the Guide to choose playlists and skip tracks, neh? As soon as I called Guide.Show, I realized I had a problem. The Zune system UI draws over the top of the game, but if the game never refreshes the screen, the system UI never gets a chance to refresh either! The Guide appeared when I pressed the button to bring it up, but then immediately froze because my game had gone back to sleep. I fixed this by wrapping my Update logic with an IsActive check (which skips the SuppressDraw call while the Guide is active), and adding a timer that continues redrawing the screen for one second after the Guide is dismissed, in order to properly display the Guide close animation. My final update logic: int postGuideDelay = 0; protected override void Update(GameTime gameTime) { if (IsActive) { if (input.Update() || isAnimating) { if (input.ShowGuide) { Guide.Show(); postGuideDelay = 30; return; } puzzle.Update(input); isAnimating = puzzle.IsAnimating; displayDirty |= puzzle.DisplayDirty; displayDirty |= isAnimating; } if (postGuideDelay > 0) { postGuideDelay--; } else { if (!displayDirty) SuppressDraw(); } } base.Update(gameTime); } Great article, really. Got my five. Have a nice trip to Thailand! Interesting post, but I strongly disagree with your justification for wasting all of the available CPU on desktop/laptop machines. I know most games operate like this, and I’ve observed first-hand my sleeping process not waking up when I wanted it to because of something else hammering the system (even something stupid like a fading tooltip on Win9x). But I’ve also spent hundreds, maybe thousands of hours playing Live for Speed, and it uses a very small amount of CPU – only what it actually needs to do its job. It’s a very time-critical app (a racing sim) but it sleeps. It still achieves a rock solid 60fps with vsync. I have my email client, IM, web browser, UPS monitor, etc running in the background, yet things work fine. How can this be, according to your logic? I like it when my system runs cool, quiet, and consumes less power. I don’t like it when the fans have to spin up for a simple game like AudioSurf even when I’m just browsing its menu system. It’s wasteful, unnecessary, and bad programming IMO. Fortunately there are some countermeasures, like forcing the CPU and GPU to stay underclocked and setting affinity to a single core only, but these aren’t perfect solutions and shouldn’t be necessary in the first place…
https://blogs.msdn.microsoft.com/shawnhar/2008/12/02/zune-battery-efficiency/
CC-MAIN-2017-43
refinedweb
1,323
64.91
Hello! I'm a complete noob, and I am try to make a pong game, and I'm trying to thrust the CPU paddle to the ball's x position. I'm not trying to instantaneously set the paddle's x pos. to the ball's x pos., I'm trying to thrust the paddle to the ball's x pos. How would you do that? I'm using C#, and so far my code is: using UnityEngine; using System.Collections; public class P2_Movement : MonoBehaviour { public float thrust; public Rigidbody rb; // Use this for initialization void Start () { rb = GetComponent<Rigidbody>(); } // Update is called once per frame void FixedUpdate () { float ball_pos = GameObject.Find("Ball").transform.position.x; rb.AddForce(ball_pos, 0, 0, ForceMode.Impulse); } } My current code just finds the ball game object's x position, and then it thrusts my CPU paddle using the ball's x position as its speed. How can I make it so that my CPU paddle thrusts towards the ball's x position? targetPosition-currentPosition = direction to add. Unity2D trouble positioning object in script 1 Answer Spawn Procedural Planet at parent's position 0 Answers Best way to pick up objects in unity? 2 Answers Make projectile move in certain direction depending on position for base object (C#) 1 Answer how can i limit position between 2 object ?? 1 Answer
https://answers.unity.com/questions/1180408/how-do-you-thrust-an-object-to-another-objects-pos.html
CC-MAIN-2020-29
refinedweb
225
55.74
Post Syndicated from nellyo original Morris MacMatzen | NYT Post Syndicated from nellyo original Morris MacMatzen | NYT Post Syndicated from Chris Munns original,. At a high level, you perform the following tasks for this walkthrough: You may incur charges for the resources you use including, but not limited to, the Amazon EC2 instance and the associated network charges. cd /tmp wget -O epel.rpm –nv \ sudo yum install -y ./epel.rpm Respond “Y” to all requests for approval to install the software. sudo yum install python2-certbot-apache.noarch Respond “Y” to all requests for approval to install the software. If you see a message appear about SELinux, you can safely ignore it. This is a known issue with the latest version of. Security note: As of the time of publication, this website also supports TLS 1.0. I recommend that you disable this protocol because of some known vulnerabilities associated with it. To do this: SSLProtocol all -SSLv2 -SSLv3 -TLSv1 sudo service httpd restart Use the following steps to avoid incurring any further costs.. Post. Post Syndicated from Randall Hunt original Today, I’m excited to announce the launch of .BOT, a new generic top-level domain (gTLD) from Amazon.: [email protected]. Below, I’ll walk through the experience of registering and provisioning a domain for my bot, whereml.bot. Then we’ll look at setting up the domain as a hosted zone in Amazon Route 53. Let’s get started.. Post Syndicated from Randall Hunt original Today we’re launching a new feature for AWS Certificate Manager . ACM Private CA keys are stored securely in AWS managed hardware security modules (HSMs) that adhere to FIPS 140-2 Level 3 security standards. ACM Private CA automatically maintains certificate revocation lists (CRLs) in Amazon Simple Storage Service (S3) and lets administrators generate audit reports of certificate creation with the API or console. This service is packed full of features so let’s jump in and provision a CA. First, I’ll navigate to the ACM console in my region and select the new Private CAs section in the sidebar. From there I’ll click Get Started to start the CA wizard. For now, I only have the option to provision a subordinate CA so we’ll select that and use my super secure desktop as the root CA and click Next. This isn’t what I would do in a production setting but it will work for testing out our private CA. Now, I’ll configure the CA with some common details. The most important thing here is the Common Name which I’ll set as secure.internal to represent my internal domain. Now I need to choose my key algorithm. You should choose the best algorithm for your needs but know that ACM has a limitation today that it can only manage certificates that chain up to to RSA CAs. For now, I’ll go with RSA 2048 bit and click Next. In this next screen, I I’ll create a new S3 bucket to store my CRL in and click Next. Finally, I’ll review all the details to make sure I didn’t make any typos and click Confirm and create. A few seconds later and I’m greeted with a fancy screen saying I successfully provisioned a certificate authority. Hooray! I’m not done yet though. I still need to activate my CA by creating a certificate signing request (CSR) and signing that with my root CA. I’ll click Get started to begin that process. Now I’ll copy the CSR or download it to a server or desktop that has access to my root CA (or potentially another subordinate – so long as it chains to a trusted root for my clients). Now I can use a tool like openssl to sign my cert and generate the certificate chain. $openssl ca -config openssl_root.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -in csr/CSR.pem -out certs/subordinate_cert.pem Using configuration from openssl_root.cnf Enter pass phrase for /Users/randhunt/dev/amzn/ca/private/root_private_key.pem: Check that the request matches the signature Signature ok The Subject's Distinguished Name is as follows stateOrProvinceName :ASN.1 12:'Washington' localityName :ASN.1 12:'Seattle' organizationName :ASN.1 12:'Amazon' organizationalUnitName:ASN.1 12:'Engineering' commonName :ASN.1 12:'secure.internal' Certificate is to be certified until Mar 31 06:05:30 2028 GMT (3650 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y Write out database with 1 new entries Data Base Updated After that I’ll copy my subordinate_cert.pem and certificate chain back into the console. and click Next. Finally, I’ll review all the information and click Confirm and import. I should see a screen like the one below that shows my CA has been activated successfully. Now that I have a private CA we can provision private certificates by hopping back to the ACM console and creating a new certificate. After clicking create a new certificate I’ll select the radio button Request a private certificate then I’ll click Request a certificate. From there it’s just similar to provisioning a normal certificate in ACM. Now I have a private certificate that I can bind to my ELBs, CloudFront Distributions, API Gateways, and more. I can also export the certificate for use on embedded devices or outside of ACM managed environments. Available Now ACM Private CA is a service in and of itself and it is packed full of features that won’t fit into a blog post. I strongly encourage the interested readers to go through the developer guide and familiarize themselves with certificate based security.). Private CAs cost $400 per month (prorated) for each private CA. You are not charged for certificates created and maintained in ACM but you are charged for certificates where you have access to the private key (exported or created outside of ACM). The pricing per certificate is tiered starting at $0.75 per certificate for the first 1000 certificates and going down to $0.001 per certificate after 10,000 certificates. I’m excited to see administrators and developers take advantage of this new service. As always please let us know what you think of this service on Twitter or in the comments below. Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original <p>Let’s Encrypt recently <a href="">launched SCT embedding in certificates</a>. This feature allows browsers to check that a certificate was submitted to a <a href="">Certificate Transparency</a> log. As part of the launch, we did a thorough review that the encoding of Signed Certificate Timestamps (SCTs) in our certificates matches the relevant specifications. In this post, I’ll dive into the details. You’ll learn more about X.509, ASN.1, DER, and TLS encoding, with references to the relevant RFCs.</p> <p>Certificate Transparency offers three ways to deliver SCTs to a browser: In a TLS extension, in stapled OCSP, or embedded in a certificate. We chose to implement the embedding method because it would just work for Let’s Encrypt subscribers without additional work. In the SCT embedding method, we submit a “precertificate” with a <a href="#poison">poison extension</a> to a set of CT logs, and get back SCTs. We then issue a real certificate based on the precertificate, with two changes: The poison extension is removed, and the SCTs obtained earlier are added in another extension.</p> <p>Given a certificate, let’s first look for the SCT list extension. According to CT (<a href="">RFC 6962 section 3.3</a>), the extension OID for a list of SCTs is <code>1.3.6.1.4.1.11129.2.4.2</code>. An <a href="">OID (object ID)</a> is a series of integers, hierarchically assigned and globally unique. They are used extensively in X.509, for instance to uniquely identify extensions.</p> <p>We can <a href="">download an example certificate</a>, and view it using OpenSSL (if your OpenSSL is old, it may not display the detailed information):</p> <pre><code>$ openssl x509 -noout -text -inform der -in Downloads/031f2484307c9bc511b3123cb236a480d451 … CT Precertificate SCTs: Signed Certificate Timestamp: Version : v1(0) Log ID : DB:74:AF:EE:CB:29:EC:B1:FE:CA:3E:71:6D:2C:E5:B9: AA:BB:36:F7:84:71:83:C7:5D:9D:4F:37:B6:1F:BF:64 Timestamp : Mar 29 18:45:07.993 2018 GMT Extensions: none Signature : ecdsa-with-SHA256 30:44:02:20:7E:1F:CD:1E:9A:2B:D2:A5:0A:0C:81:E7: 13:03:3A:07:62:34:0D:A8:F9:1E:F2:7A:48:B3:81:76: 40:15:9C:D3:02:20:65:9F:E9:F1:D8:80:E2:E8:F6:B3: 25:BE:9F:18:95:6D:17:C6:CA:8A:6F:2B:12:CB:0F:55: FB:70:F7:59:A4:19 Signed Certificate Timestamp: Version : v1(0) Log ID : 29:3C:51:96:54:C8:39:65:BA:AA:50:FC:58:07:D4:B7: 6F:BF:58:7A:29:72:DC:A4:C3:0C:F4:E5:45:47:F4:78 Timestamp : Mar 29 18:45:08.010 2018 GMT Extensions: none Signature : ecdsa-with-SHA256 30:46:02:21:00:AB:72:F1:E4:D6:22:3E:F8:7F:C6:84: 91:C2:08:D2:9D:4D:57:EB:F4:75:88:BB:75:44:D3:2F: 95:37:E2:CE:C1:02:21:00:8A:FF:C4:0C:C6:C4:E3:B2: 45:78:DA:DE:4F:81:5E:CB:CE:2D:57:A5:79:34:21:19: A1:E6:5B:C7:E5:E6:9C:E2 </code></pre> <p>Now let’s go a little deeper. How is that extension represented in the certificate? Certificates are expressed in <a href="">ASN.1</a>, which generally refers to both a language for expressing data structures and a set of formats for encoding them. The most common format, <a href="">DER</a>, is a tag-length-value format. That is, to encode an object, first you write down a tag representing its type (usually one byte), then you write down a number expressing how long the object is, then you write down the object contents. This is recursive: An object can contain multiple objects within it, each of which has its own tag, length, and value.</p> <p>One of the cool things about DER and other tag-length-value formats is that you can decode them to some degree without knowing what they mean. For instance, I can tell you that 0x30 means the data type “SEQUENCE” (a struct, in ASN.1 terms), and 0x02 means “INTEGER”, then give you this hex byte sequence to decode:</p> <pre><code>30 06 02 01 03 02 01 0A </code></pre> <p>You could tell me right away that decodes to:</p> <pre><code>SEQUENCE INTEGER 3 INTEGER 10 </code></pre> <p>Try it yourself with this great <a href="">JavaScript ASN.1 decoder</a>. However, you wouldn’t know what those integers represent without the corresponding ASN.1 schema (or “module”). For instance, if you knew that this was a piece of DogData, and the schema was:</p> <pre><code>DogData ::= SEQUENCE { legs INTEGER, cutenessLevel INTEGER } </code></pre> <p>You’d know this referred to a three-legged dog with a cuteness level of 10.</p> <p>We can take some of this knowledge and apply it to our certificates. As a first step, convert the above certificate to hex with <code>xxd -ps < Downloads/031f2484307c9bc511b3123cb236a480d451</code>. You can then copy and paste the result into <a href="">lapo.it/asn1js</a> (or use <a href="">this handy link</a>). You can also run <code>openssl asn1parse -i -inform der -in Downloads/031f2484307c9bc511b3123cb236a480d451</code> to use OpenSSL’s parser, which is less easy to use in some ways, but easier to copy and paste.</p> <p>In the decoded data, we can find the OID <code>1.3.6.1.4.1.11129.2.4.2</code>, indicating the SCT list extension. Per <a href="">RFC 5280, section 4.1</a>, an extension is defined:</p> <pre><code>Extension ::= SEQUENCE { extnID OBJECT IDENTIFIER, critical BOOLEAN DEFAULT FALSE, extnValue OCTET STRING — contains the DER encoding of an ASN.1 value — corresponding to the extension type identified — by extnID } </code></pre> <p>We’ve found the <code>extnID</code>. The “critical” field is omitted because it has the default value (false). Next up is the <code>extnValue</code>. This has the type <code>OCTET STRING</code>, which has the tag “0x04”. <code>OCTET STRING</code> means “here’s a bunch of bytes!” In this case, as described by the spec, those bytes happen to contain more DER. This is a fairly common pattern in X.509 to deal with parameterized data. For instance, this allows defining a structure for extensions without knowing ahead of time all the structures that a future extension might want to carry in its value. If you’re a C programmer, think of it as a <code>void*</code> for data structures. If you prefer Go, think of it as an <code>interface{}</code>.</p> <p>Here’s that <code>extnValue</code>:</p> <pre><code>04 81 F5 0481F200F0007500007700293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478000001627313EB2A0000040300483046022100AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC10221008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2 </code></pre> <p>That’s tag “0x04”, meaning <code>OCTET STRING</code>, followed by “0x81 0xF5”, meaning “this string is 245 bytes long” (the 0x81 prefix is part of <a href="#variable-length">variable length number encoding</a>).</p> <p>According to <a href="">RFC 6962, section 3.3</a>, “obtained SCTs can be directly embedded in the final certificate, by encoding the SignedCertificateTimestampList structure as an ASN.1 <code>OCTET STRING</code> and inserting the resulting data in the TBSCertificate as an X.509v3 certificate extension”</p> <p>So, we have an <code>OCTET STRING</code>, all’s good, right? Except if you remove the tag and length from extnValue to get its value, you’re left with:</p> <pre><code>04 81 F2 00F0007500DB74AFEEC… </code></pre> <p>There’s that “0x04” tag again, but with a shorter length. Why do we nest one <code>OCTET STRING</code> inside another? It’s because the contents of extnValue are required by RFC 5280 to be valid DER, but a SignedCertificateTimestampList is not encoded using DER (more on that in a minute). So, by RFC 6962, a SignedCertificateTimestampList is wrapped in an <code>OCTET STRING</code>, which is wrapped in another <code>OCTET STRING</code> (the extnValue).</p> <p>Once we decode that second <code>OCTET STRING</code>, we’re left with the contents:</p> <pre><code>00F0007500DB74AFEEC… </code></pre> <p>“0x00” isn’t a valid tag in DER. What is this? It’s TLS encoding. This is defined in <a href="">RFC 5246, section 4</a> (the TLS 1.2 RFC). TLS encoding, like ASN.1, has both a way to define data structures and a way to encode those structures. TLS encoding differs from DER in that there are no tags, and lengths are only encoded when necessary for variable-length arrays. Within an encoded structure, the type of a field is determined by its position, rather than by a tag. This means that TLS-encoded structures are more compact than DER structures, but also that they can’t be processed without knowing the corresponding schema. For instance, here’s the top-level schema from <a href="">RFC 6962, section 3.3</a>:</p> <pre><code>. </code></pre> <p>Right away, we’ve found one of those variable-length arrays. The length of such an array (in bytes) is always represented by a length field just big enough to hold the max array size. The max size of an <code>sct_list</code> is 65535 bytes, so the length field is two bytes wide. Sure enough, those first two bytes are “0x00 0xF0”, or 240 in decimal. In other words, this <code>sct_list</code> will have 240 bytes. We don’t yet know how many SCTs will be in it. That will become clear only by continuing to parse the encoded data and seeing where each struct ends (spoiler alert: there are two SCTs!).</p> <p>Now we know the first SerializedSCT starts with <code>0075…</code>. SerializedSCT is itself a variable-length field, this time containing <code>opaque</code> bytes (much like <code>OCTET STRING</code> back in the ASN.1 world). Like SignedCertificateTimestampList, it has a max size of 65535 bytes, so we pull off the first two bytes and discover that the first SerializedSCT is 0x0075 (117 decimal) bytes long. Here’s the whole thing, in hex:</p> <pre><code>00>This can be decoded using the TLS encoding struct defined in <a href="">RFC 6962, section 3.2</a>:</p> <pre><code>enum { v1(0), (255) } Version; struct { opaque key_id[32]; } LogID; opaque CtExtensions<0..2^16-1>; …; </code></pre> <p>Breaking that down:</p> <pre> 0>To understand the “digitally-signed struct,” we need to turn back to <a href="">RFC 5246, section 4.7</a>. It says:</p> <pre><code>A digitally-signed element is encoded as a struct DigitallySigned: struct { SignatureAndHashAlgorithm algorithm; opaque signature<0..2^16-1>; } DigitallySigned; </code></pre> <p>And in <a href="">section 7.4.1.4.1</a>:</p> <pre><code; </code></pre> <p>We have “0x0403”, which corresponds to sha256(4) and ecdsa(3). The next two bytes, “0x0046”, tell us the length of the “opaque signature” field, 70 bytes in decimal. To decode the signature, we reference <a href="">RFC 4492 section 5.4</a>, which says:</p> <pre><code>The digitally-signed element is encoded as an opaque vector <0..2^16-1>, the contents of which are the DER encoding corresponding to the following ASN.1 notation. Ecdsa-Sig-Value ::= SEQUENCE { r INTEGER, s INTEGER } </code></pre> <p>Having dived through two layers of TLS encoding, we are now back in ASN.1 land! We <a href="">decode</a> the remaining bytes into a SEQUENCE containing two INTEGERS. And we’re done! Here’s the whole extension decoded:</p> <pre><code># Extension SEQUENCE – RFC 5280 30 # length 0x0104 bytes (260 decimal) 820104 # OBJECT IDENTIFIER 06 # length 0x0A bytes (10 decimal) 0A # value (1.3.6.1.4.1.11129.2.4.2) 2B06010401D679020402 # OCTET STRING 04 # length 0xF5 bytes (245 decimal) 81F5 # OCTET STRING (embedded) – RFC 6962 04 # length 0xF2 bytes (242 decimal) 81F2 # Beginning of TLS encoded SignedCertificateTimestampList – RFC 5246 / 6962 # length 0xF0 bytes 00F0 # opaque SerializedSCT<1..2^16-1> # length 0x75 bytes 00 – RFC 5426 # SignatureAndHashAlgorithm (ecdsa-sha256) 0403 # opaque signature<0..2^16-1>; # length 0x0046 0046 # DER-encoded Ecdsa-Sig-Value – RFC 4492 30 # SEQUENCE 44 # length 0x44 bytes 02 # r INTEGER 20 # length 0x20 bytes # value 7E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD3 02 # s INTEGER 20 # length 0x20 bytes # value 659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419 # opaque SerializedSCT<1..2^16-1> # length 0x77 bytes 0077 # Version sct_version v1(0) 00 # LogID id (aka opaque key_id[32]) 293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478 # uint64 timestamp (milliseconds since the epoch) 000001627313EB2A # CtExtensions extensions (zero-length array) 0000 # digitally-signed struct – RFC 5426 # SignatureAndHashAlgorithm (ecdsa-sha256) 0403 # opaque signature<0..2^16-1>; # length 0x0048 0048 # DER-encoded Ecdsa-Sig-Value – RFC 4492 30 # SEQUENCE 46 # length 0x46 bytes 02 # r INTEGER 21 # length 0x21 bytes # value 00AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC1 02 # s INTEGER 21 # length 0x21 bytes # value 008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2 </code></pre> <p>One surprising thing you might notice: In the first SCT, <code>r</code> and <code>s</code> are twenty bytes long. In the second SCT, they are both twenty-one bytes long, and have a leading zero. Integers in DER are two’s complement, so if the leftmost bit is set, they are interpreted as negative. Since <code>r</code> and <code>s</code> are positive, if the leftmost bit would be a 1, an extra byte has to be added so that the leftmost bit can be 0.</p> <p>This is a little taste of what goes into encoding a certificate. I hope it was informative! If you’d like to learn more, I recommend “<a href="">A Layman’s Guide to a Subset of ASN.1, BER, and DER</a>.”</p> <p><a name="poison"></a>Footnote 1: A “poison extension” is defined by <a href="">RFC 6962 section 3.1</a>:</p> <pre><code)) </code></pre> <p>In other words, it’s an empty extension whose only purpose is to ensure that certificate processors will not accept precertificates as valid certificates. The specification ensures this by setting the “critical” bit on the extension, which ensures that code that doesn’t recognize the extension will reject the whole certificate. Code that does recognize the extension specifically as poison will also reject the certificate.</p> <p><a name="variable-length"></a>Footnote 2: Lengths from 0-127 are represented by a single byte (short form). To express longer lengths, more bytes are used (long form). The high bit (0x80) on the first byte is set to distinguish long form from short form. The remaining bits are used to express how many more bytes to read for the length. For instance, 0x81F5 means “this is long form because the length is greater than 127, but there’s still only one byte of length (0xF5) to decode.”</p> Post Syndicated from ris original Let’s Encrypt has announced that ACMEv2 (Automated Certificate Management Environment) and wildcard certificate support is live. ACMEv2 is an updated version of the ACME protocol that has gone through the IETF standards process. Wildcard certificates allow you to secure all subdomains of a domain with a single certificate. (Thanks to Alphonse Ogulla) Post Syndicated from Jonathan Kozolchyk original. Post Syndicated from jake original In ACMQueue magazine, Bridget Kromhout writes about containers and why they are not the solution to every problem. The article is subtitled: “Complex socio-technical systems are hard; film at 11.” “Don’t get me wrong—containers are delightful! But let’s be real: we’re unlikely to solve the vast majority of problems in a given organization via the judicious application of kernel features. If you have contention between your ops team and your dev team(s)—and maybe they’re all facing off with some ill-considered DevOps silo inexplicably stuck between them—then cgroups and namespaces won’t have a prayer of solving that. Development teams love the idea of shipping their dependencies bundled with their apps, imagining limitless portability. Someone in security is weeping for the unpatched CVEs, but feature velocity is so desirable that security’s pleas go unheard. Platform operators are happy (well, less surly) knowing they can upgrade the underlying infrastructure without affecting the dependencies for any applications, until they realize the heavyweight app containers shipping a full operating system aren’t being maintained at all.” Post Syndicated from mikesefanov original Kuhu Shukla (bottom center) and team at the 2017 DataWorks Summit By Kuhu Shukla This post first appeared here on the Apache Software Foundation blog as part of ASF’s “Success at Apache” monthly blog series. As I sit at my desk on a rather frosty morning with my coffee, looking up new JIRAs from the previous day in the Apache Tez project, I feel rather pleased. The latest community release vote is complete, the bug fixes that we so badly needed are in and the new release that we tested out internally on our many thousand strong cluster is looking good. Today I am looking at a new stack trace from a different Apache project process and it is hard to miss how much of the exceptional code I get to look at every day comes from people all around the globe. A contributor leaves a JIRA comment before he goes on to pick up his kid from soccer practice while someone else wakes up to find that her effort on a bug fix for the past two months has finally come to fruition through a binding +1. Yahoo – which joined AOL, HuffPost, Tumblr, Engadget, and many more brands to form the Verizon subsidiary Oath last year – has been at the frontier of open source adoption and contribution since before I was in high school. So while I have no historical trajectories to share, I do have a story on how I found myself in an epic journey of migrating all of Yahoo jobs from Apache MapReduce to Apache Tez, a then-new DAG based execution engine. Oath grid infrastructure is through and through driven by Apache technologies be it storage through HDFS, resource management through YARN, job execution frameworks with Tez and user interface engines such as Hive, Hue, Pig, Sqoop, Spark, Storm. Our grid solution is specifically tailored to Oath’s business-critical data pipeline needs using the polymorphic technologies hosted, developed and maintained by the Apache community.. I, however, did not have to go far to get answers. The Tez community actively came to a newbie’s rescue, finding answers and posing important questions. I started attending the bi-weekly Tez community sync up calls and asking existing contributors and committers for course correction. Suddenly the team was much bigger, the goals much more chiseled. This was new to anyone like me who came from the networking industry, where the most open part of the code are the RFCs and the implementation details are often hidden. These meetings served as a clean room for our coding ideas and experiments. Ideas were shared, to the extent of which data structure we should pick and what a future user of Tez would take from it. In between the usual status updates and extensive knowledge transfers were made. Oath uses Apache Pig and Apache Hive extensively and most of the urgent requirements and requests came from Pig and Hive developers and users. Each issue led to a community JIRA and as we started running Tez at Oath scale, new feature ideas and bugs around performance and resource utilization materialized. Every year most of the Hadoop team at Oath travels to the Hadoop Summit where we meet our cohorts from the Apache community and we stand for hours discussing the state of the art and what is next for the project. One such discussion set the course for the next year and a half for me. We needed an innovative way to shuffle data. Frameworks like MapReduce and Tez have a shuffle phase in their processing lifecycle wherein the data from upstream producers is made available to downstream consumers. Even though Apache Tez was designed with a feature set corresponding to optimization requirements in Pig and Hive, the Shuffle Handler Service was retrofitted from MapReduce at the time of the project’s inception. With several thousands of jobs on our clusters leveraging these features in Tez, the Shuffle Handler Service became a clear performance bottleneck. So as we stood talking about our experience with Tez with our friends from the community, we decided to implement a new Shuffle Handler for Tez. All the conversation points were tracked now through an umbrella JIRA TEZ-3334 and the to-do list was long. I picked a few JIRAs and as I started reading through I realized, this is all new code I get to contribute to and review. There might be a better way to put this, but to be honest it was just a lot of fun! All the whiteboards were full, the team took walks post lunch and discussed how to go about defining the API. Countless hours were spent debugging hangs while fetching data and looking at stack traces and Wireshark captures from our test runs. Six months in and we had the feature on our sandbox clusters. There were moments ranging from sheer frustration to absolute exhilaration with high fives as we continued to address review comments and fixing big and small issues with this evolving feature. As much as owning your code is valued everywhere in the software community, I would never go on to say “I did this!” In fact, “we did!” It is this strong sense of shared ownership and fluid team structure that makes the open source experience at Apache truly rewarding. This is just one example. A lot of the work that was done in Tez was leveraged by the Hive and Pig community and cross Apache product community interaction made the work ever more interesting and challenging. Triaging and fixing issues with the Tez rollout led us to hit a 100% migration score last year and we also rolled the Tez Shuffle Handler Service out to our research clusters. As of last year we have run around 100 million Tez DAGs with a total of 50 billion tasks over almost 38,000 nodes.. About the Author:. Post Syndicated from Bruce Schneier original. Post Syndicated from Stephenie Swope original been meaning to learn about. In the “Comments” section below, let us know if you would like to see anything on these or other IAM documentation pages expanded or updated to make them more useful to you. – Stephenie Post WAF, Ozones,! Post Syndicated from Alex Tomic original Amazon EC. Here is how the sample solution works: HTTPS POSTto CloudFront. POSTand encrypts sensitive data with the public RSA key and replaces fields in the form post with encrypted ciphertext. The form POSTciphertext is then sent to origin servers. The high-level steps to deploy this solution are as follows: . In this section, you will generate an RSA key pair by using OpenSSL: You should see version information similar to the following. $ openssl genrsa -out private_key.pem 2048 The command results should look similar to the following. Generating RSA private key, 2048 bit long modulus ................................................................................+++ ..........................+++ e is 65537 (0x10001) You should see output similar to the following. writing RSA key. Complete the Add Public Key configuration boxes: -----BEGIN PUBLIC KEY-----and -----END PUBLIC KEY-----lines. Complete the Create profile configuration boxes: Complete the Create configuration boxes: Launch the sample application by using a CloudFormation template that automates the provisioning process. To finish creating the CloudFormation stack: us-east-1(US East [N. Virginia]). See Step 1 for artifact staging instructions. While still on the CloudFront console, choose Distributions in the navigation pane, and then: In this step, you store the private key in the EC. <KMSKeyID>in the KMSKeyID in the CloudFormation stack Outputs. Substitute it for the placeholder in the following command. ssm get-parameterin the following command in the ValueThe key material has been truncated in the following output. Use the following steps to test the sample application with field-level encryption: https:d199xe5izz82ea.cloudfront.net/prod/). Note that it may take several minutes for the CloudFront distribution to reach the Deployed Status from the previous step, during which time you may not be able to access the sample application. Post Syndicated from Janna Pellegrino original alike. Each course lasts 10–15 minutes, and you can learn at your own pace. To find these free courses, navigate to the Security, Identity & Compliance page. I suggest the following introductory courses: To supplement your foundational training, take these security-focused courses: AWS Training and Certification continually evaluates and expands the training courses available to you, so be sure to visit the website regularly to explore the latest offerings. – Janna Post Syndicated from corbet original The Let’s Encrypt project, working to encrypt as much web traffic as possible, looks forward to the coming year. .” Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original% in a single year – incredible. We’re proud to have contributed to that, and we’d like to thank all of the other people and organizations who also worked hard to create a more secure and privacy-respecting Web.. We are planning to double the number of active certificates and unique domains we service in 2018, to 90 million and 120 million, respectively. This anticipated growth is due to continuing high expectations for HTTPS growth in general in 2018. Let’s Encrypt helps to drive HTTPS adoption by offering a free, easy to use, and globally available option for obtaining the certificates required to enable HTTPS. HTTPS adoption on the Web took off at an unprecedented rate from the day Let’s Encrypt launched to the public. One of the reasons Let’s Encrypt is so easy to use is that our community has done great work making client software that works well for a wide variety of platforms. We’d like to thank everyone involved in the development of over 60 client software options for Let’s Encrypt. We’re particularly excited that support for the ACME protocol and Let’s Encrypt is being added to the Apache httpd server. Other organizations and communities are also doing great work to promote HTTPS adoption, and thus stimulate demand for our services. For example, browsers are starting to make their users more aware of the risks associated with unencrypted HTTP (e.g. Firefox, Chrome). Many hosting providers and CDNs are making it easier than ever for all of their customers to use HTTPS. Government agencies are waking up to the need for stronger security to protect constituents. The media community is working to Secure the News. We’ve got some exciting features planned for 2018.. Our physical CA infrastructure currently occupies approximately 70 units of rack space, split between two datacenters, consisting primarily of compute servers, storage, HSMs, switches, and firewalls. When we issue more certificates it puts the most stress on storage for our databases. We regularly invest in more and faster storage for our database. We depend on contributions from our community of users and supporters in order to provide our services. If your company or organization would like to sponsor Let’s Encrypt please email us at [email protected]. We ask that you make an individual contribution if it is within your means. We’re grateful for the industry and community support that we receive, and we look forward to continuing to create a more secure and privacy-respecting Web! Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original <p, <a href="">the Web went from 46% encrypted page loads to 67%</a>.</p> <p.</p> <h1 id="service-growth">Service Growth</h1> <p>We are planning to double the number of active certificates and unique domains we service in 2018, to 90 million and 120 million, respectively. This anticipated growth is due to continuing high expectations for HTTPS growth in general in 2018.</p> <p>Let’s Encrypt helps to drive HTTPS adoption by offering a free, easy to use, and globally available option for obtaining the certificates required to enable HTTPS. HTTPS adoption on the Web took off at an unprecedented rate from the day Let’s Encrypt launched to the public.</p> <p>One of the reasons Let’s Encrypt is so easy to use is that our community has done great work making client software that works well for a wide variety of platforms. We’d like to thank everyone involved in the development of over 60 <a href="">client software options for Let’s Encrypt</a>. We’re particularly excited that support for the ACME protocol and Let’s Encrypt is <a href="">being added to the Apache httpd server</a>.</p> <p>Other organizations and communities are also doing great work to promote HTTPS adoption, and thus stimulate demand for our services. For example, browsers are starting to make their users more aware of the risks associated with unencrypted HTTP (e.g. <a href="">Firefox</a>, <a href="">Chrome</a>). Many hosting providers and CDNs are making it easier than ever for all of their customers to use HTTPS. <a href="">Government</a> <a href="">agencies</a> are waking up to the need for stronger security to protect constituents. The media community is working to <a href="">Secure the News</a>.</p> <h1 id="new-features">New Features</h1> <p>We’ve got some exciting features planned for 2018.</p> <p>First, we’re planning to introduce an ACME v2 protocol API endpoint and <a href="">support for wildcard certificates</a> along with it. Wildcard certificates will be free and available globally just like our other certificates. We are planning to have a public test API endpoint up by January 4, and we’ve set a date for the full launch: Tuesday, February 27.</p> <p.</p> <h1 id="infrastructure">Infrastructure</h1> <p.</p> <p>Our physical CA infrastructure currently occupies approximately 70 units of rack space, split between two datacenters, consisting primarily of compute servers, storage, HSMs, switches, and firewalls.</p> <p>When we issue more certificates it puts the most stress on storage for our databases. We regularly invest in more and faster storage for our database servers, and that will continue in 2018.</p> <p.</p> <p.</p> <h1 id="finances">Finances</h1> <p.</p> <p.</p> <p.</p> <h1 id="support-let-s-encrypt">Support Let’s Encrypt</h1> <p>We depend on contributions from our community of users and supporters in order to provide our services. If your company or organization would like to <a href="">sponsor</a> Let’s Encrypt please email us at <a href="mailto:[email protected]">[email protected]</a>. We ask that you make an <a href="">individual contribution</a> if it is within your means.</p> <p>We’re grateful for the industry and community support that we receive, and we look forward to continuing to create a more secure and privacy-respecting Web!</p> Post Syndicated from Todd Cignetti original establish that you control a domain name when requesting SSL/TLS certificates with ACM. Previously ACM supported only email validation, which required the domain owner to receive an email for each certificate request and validate the information in the request before approving it. With DNS validation, you write a CNAME record to your DNS configuration to establish control of your domain name. After you have configured the CNAME record, ACM can automatically renew DNS-validated certificates before they expire, as long as the DNS record has not changed. To make it even easier to validate your domain, ACM can update your DNS configuration for you if you manage your DNS records with Amazon Route 53. In this blog post, I demonstrate how to request a certificate for a website by using DNS validation. To perform the equivalent steps using the AWS CLI or AWS APIs and SDKs, see AWS Certificate Manager in the AWS CLI Reference and the ACM API Reference. In this section, I walk you through the four steps required to obtain an SSL/TLS certificate through ACM to identify your site over the internet. SSL/TLS provides encryption for sensitive data in transit and authentication by using certificates to establish the identity of your site and secure connections between browsers and applications and your site. DNS validation and SSL/TLS certificates provisioned through ACM are free. To get started, sign in to the AWS Management Console and navigate to the ACM console. Choose Get started to request a certificate. If you previously managed certificates in ACM, you will instead see a table with your certificates and a button to request a new certificate. Choose Request a certificate to request a new certificate. Type the name of your domain in the Domain name box and choose Next. In this example, I type. You must use a domain name that you control. Requesting certificates for domains that you don’t control violates the AWS Service Terms. With DNS validation, you write a CNAME record to your DNS configuration to establish control of your domain name. Choose DNS validation, and then choose Review. Review your request and choose Confirm and request to request the certificate. After a brief delay while ACM populates your domain validation information, choose the down arrow (highlighted in the following screenshot) to display all the validation information for your domain. ACM displays the CNAME record you must add to your DNS configuration to validate that you control the domain name in your certificate request. If you use a DNS provider other than Route 53 or if you use a different AWS account to manage DNS records in Route 53, copy the DNS CNAME information from the validation information, or export it to a file (choose Export DNS configuration to a file) and write it to your DNS configuration. For information about how to add or modify DNS records, check with your DNS provider. For more information about using DNS with Route 53 DNS, see the Route 53 documentation. If you manage DNS records for your domain with Route 53 in the same AWS account, choose Create record in Route 53 to have ACM update your DNS configuration for you. After updating your DNS configuration, choose Continue to return to the ACM table view. ACM then displays a table that includes all your certificates. The certificate you requested is displayed so that you can see the status of your request. After you write the DNS record or have ACM write the record for you, it typically takes DNS 30 minutes to propagate the record, and it might take several hours for Amazon to validate it and issue the certificate. During this time, ACM shows the Validation status as Pending validation. request. Refer to the Troubleshooting Section of the ACM User Guide for instructions about troubleshooting validation or issuance failures. You now have an ACM certificate that you can use to secure your application or website. For information about how to deploy certificates with other AWS services, see the documentation for Amazon CloudFront, Amazon API Gateway, Application Load Balancers, and Classic Load Balancers. Note that your certificate must be in the US East (N. Virginia) Region to use the certificate with CloudFront. ACM automatically renews certificates that are deployed and in use with other AWS services as long as the CNAME record remains in your DNS configuration. To learn more about ACM DNS validation, see the ACM FAQs and the ACM documentation. If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the ACM forum or contact AWS Support. – Todd Post Syndicated from Maggie Burke original! The AWS Security Blog will publish an updated version of this list regularly going forward. You also can subscribe to the AWS Knowledge Center Videos playlist on YouTube. – Maggie By continuing to use the site, you agree to the use of cookies. more information
https://noise.getoto.net/tag/acm/
CC-MAIN-2019-51
refinedweb
7,097
62.27
This is my school's assignments and I'm a beginner in C++.. I'm trying to write a program which prompts strings input from user. Say if user enters "My name is blabla", the program should print the exact same thing without ignoring the characters after the whitespace. I tried using a buffer loop but I couldn't think of a way of how to terminate it after input has done. i tried, adding cin.eof() at the end of loop doesn't have any effect.(In fact, adding cin.eof() is not a good solution because I still want to store characters as long as the user hits space, not enter) I want the loop to terminate after the user hits enter but not space, but since both are whitespace so I'm stuck. Here's my code: using namespace std; #include <iostream> #include <string.h> int handle_name(char *name); int main() { char name[100]; int emplnum = 0; cout<<"Enter your numbers: "<<endl; cin>>emplnum; cout<<"Please enter your name: "<<endl; while (!cin.eof()) { cin>>name; cout<<name<<" "; } return 0; //check if name doesn't contain digits. int handle_name(char *name){ int index; for(index = 0; name[index]!='\0';index++) { if (isdigit(name[index])) { return 0; break; } } if(name[index] == '\0') return 1; } In line 19, to terminate the buffer loop after user has hit 'enter', I thought of adding this line of code if((cin>>name)=='\n') break; but I got error. Any other way to solve this? Any help will be much appreciated. Thanks.
https://www.daniweb.com/programming/software-development/threads/316910/how-to-end-buffer-loop
CC-MAIN-2021-31
refinedweb
258
73.07
Pyvit is a toolkit for interfacing with cars from Python. It aims to implement common hardware interfaces and protocols used in the automotive systems. Install Pyvit pyvit can be installed with pip: pip install pyvit. Getting Started Using a CANtact The CANtact tool is directly supported by pyvit. It should work on Windows, OS X, and Linux. Example This examples goes on bus and prints received messages: from pyvit import can from pyvit.hw.cantact import CantactDev dev = CantactDev("/dev/cu.usbmodem1451") dev.set_bitrate(500000) dev.start() while True: print(dev.recv()) You will need to set the serial port ( /dev/cu.usbmodem1451 in this example) correctly. SocketCAN SocketCAN interfaces are supported, however they are only available on Linux. Using SocketCAN requires Python 3+ Example The device can now be accessed as a SocketCanDev. This examples goes on bus and prints received messages: from pyvit import can from pyvit.hw import socketcan dev = socketcan.SocketCanDev("can0") dev.start() while True: print(dev.recv()) Using Peak CAN Tools Peak CAN tools (also known as GridConnect) are support through SocketCAN. This functionality is only available on Linux For kernels 3.6 and newer, skip to step 5. - Download the Peak Linux driver. - Install dependancies: sudo apt-get install libpopt-dev - Build the driver: cd peak-linux-driver-x.xx make sudo make install - Enable the driver: sudo modprobe pcan - Connect a Peak CAN tool, ensure it appears in /proc/pcan. Note the network device name (ie, can0) - Bring the corresponding network up: sudo ifconfig can0 up
https://haxf4rall.com/2019/07/17/pyvit-python-vehicle-interface-toolkit/
CC-MAIN-2019-51
refinedweb
254
52.46
Summary: Solaris and Linux file system behavior has changed over time, breaking one of the assumptions in Postfix. See below for a description of the behavior and how it disagrees with standards. Postfix is not affected on systems with standard (POSIX, X/Open) file system behavior, i.e. *BSD, AIX, MacOS, HP-UX, and very old Sun/Linux systems. The fix and workarounds are simple. There are efforts to get the non-standard behavior approved by standards (a function called llink). Today's fix for Solaris, Linux etc. also makes Postfix future-proof for such changed over time: instead of recursively following the symlink and creating a hardlink to the file thus found, it creates a hardlink to the symlink itself. This behavior disagrees with, for example, the POSIX.1-2001 and X/Open XPG4v2 standards, and is the default on current Solaris, IRIX and Linux systems. On systems with this non-standard behavior, Postfix may be vulnerable depending on how it is configured. Postfix allows a root-owned symlink as a local mail destination, so that mail can be delivered to e.g. /dev/null which is a symlink on Solaris. 2. What configurations are (not) affected ========================================= A configuration is considered affected when an attacker with local access to a system can make Postfix append mail to an existing file of a different user. Appendix A gives a procedure to determine if a system is affected. The following configurations are NOT affected: Postfix on FreeBSD 7.0, OpenBSD 4.3, NetBSD 4.0, MacOS X 10.5, AIX 5.3, HP-UX 11.11, Solaris 1.x, Linux kernel 1.2.13, and other systems with standard hardlink behavior. However, these systems may become affected when they share file systems with hosts where users can create hardlinks to symlinks. Also not affected are the following configurations: a) maildir-style delivery with the Postfix built-in local or virtual delivery agents; b) mail delivery with non-Postfix local or virtual delivery agents; c) mailbox-style delivery with the Postfix built-in virtual delivery agent when virtual mailbox parent directories have no "group" or other write permissions. The following configurations are known to be affected on Linux kernel >= 2.0, Solaris >= 2.0, OpenSolaris 11-2008.5, IRIX 6.5, and other systems where users can create hardlinks to symlinks: a) mailbox-style delivery with the Postfix built-in local delivery agent; b) mailbox-style delivery with the Postfix built-in virtual delivery agent when virtual mailbox parent directories have "group" or other write permissions. 3. Solution =========== If your system is affected, upgrade Postfix, apply the patch in Appendix C, or apply one of the countermeasures in section 4. Updated versions will be made available via for Postfix versions 2.3, 2.4, 2.5, and 2.6. Individual vendors will provide updates depending on their support policy. 4. Countermeasures ================== Each of the following countermeasures will prevent privilege escalation through Postfix via hardlinked symlinks: 1) Protect mailbox files (maildir files are not affected). The script in Appendix B makes sure that the system mail spool directory is owned by root, that the sticky bit is turned on, and that each UNIX account has a mailbox file; it also has suggestions for virtual mailbox file deliveries (again, maildir files are not affected). 2) Don't allow non-root users to create hardlinks to objects of other users. This behavior is configurable on some systems. Appendix A: Procedure to find out if a system is affected ========================================================= As mentioned in section 2, not affected are maildir-style delivery with the Postfix built-in local or virtual delivery agents, mail delivery with non-Postfix local or virtual delivery agents, and mailbox-style delivery with the built-in Postfix virtual delivery agent when virtual mailbox parent directories have no "group" or other write permissions. To find out if a system may be affected, execute the following commands as non-root user on a local file system: $ PATH=/bin:/usr/bin:$PATH $ mkdir test $ cd test $ touch src $ ln -s src dst1 $ ln dst1 dst2 $ ls -l For the test to be valid, all commands should complete without error. The system is NOT affected when "ls -l" output shows one symlink (dst1 -> src) and two files (dst2, src) as in example A.1. Example A.1: lrwxr-xr-x 1 user users 3 Mmm dd hh:mm dst1 -> src -rw-r--r-- 2 user users 0 Mmm dd hh:mm dst2 -rw-r--r-- 2 user users 0 Mmm dd hh:mm src However, the system may become affected when it shares file systems with hosts where users can create hardlinks to symlinks as described The system is affected when "ls -l" output shows two symlinks and one file as in example A.2, with the following Postfix configurations: a) mailbox-style delivery with the Postfix built-in local delivery agent; b) mailbox-style delivery with the Postfix built-in virtual delivery agent when virtual mailbox parent directories have "group" or other write permission. Example A Appendix B: Procedure to protect mailbox files ============================================== This section describes one of the countermeasures (see section 4) that eliminate the problem without updating Postfix. The Perl script below hardens systems that use mailbox-style deliveries with the Postfix built-in local delivery agent; it makes sure that the system mailspool directory is root-owned and sticky, and that every UNIX account has a mailbox file. The script assumes that mailbox files are stored under /var/mail. Similar actions would be needed for systems that use mailbox-style delivery with the Postfix built-in virtual delivery agent, but this is needed only when Postfix virtual mailbox parent directories have "group" or other write permissions. Unfortunately, an automated script for this cannot be made available due to the large variation between Postfix configurations. #!/usr/bin/perl # fix-mailspool - Make sure the mailspool directory is root-owned # and sticky, and that every UNIX account has a mailbox file. use Fcntl; $debug = 0; # Follow compatibility symlink. $mailspool="/var/mail/"; chown(0, -1, $mailspool) || die("can't set root ownership for $mailspool: $!\n"); chmod((stat($mailspool))[2] | 01000, $mailspool) || die("can't set sticky bit for $mailspool: $!\n"); while(($name, $passwd, $uid, $gid, $quota, $comment, $gcos, $dir, $shell) = getpwent()) { print "user $name\n" if $debug; $mailbox = ($mailspool . $name); if (! -e $mailbox) { print "create $mailbox\n" if $debug; if (!sysopen(MAILBOX, $mailbox, (O_CREAT | O_RDWR | O_EXCL), 0600)) { warn("can't create $mailbox: $!\n"); } else { # XXX fchown() is not portable. chown($uid, $gid, $mailbox) || warn("chown $mailbox: $!\n"); close(MAILBOX); } } elsif (! -f $mailbox) { warn("$mailbox is not a regular file\n"); } elsif ((stat($mailbox))[4] != $uid) { warn("$mailbox is not owned by $name\n"); } } Appendix C: Source code patch ============================= This patch is suitable for Postfix 2.0 and later. It presents the least invasive change that eliminates the problem. Future Postfix releases may adopt a different strategy. The solution introduces the following. *** src/util/safe_open.c.orig Sun Jun 4 19:04:49 2006 --- src/util/safe_open.c Mon Aug 4 16:47:18 2008 *************** *** 83,88 **** --- 83,89 ---- #include <msg.h> #include <vstream.h> #include <vstring.h> + #include <stringops.h> #include <safe_open.h> /* safe_open_exist - open existing file */ *************** *** 138,150 **** * for symlinks owned by root. NEVER, NEVER, make exceptions for symlinks * owned by a non-root user. This would open a security hole when * delivering mail to a world-writable mailbox directory. */ else if (lstat(path, &lstat (fstat_st->st_dev != lstat_st.st_dev --- 139,167 ---- * for symlinks owned by root. NEVER, NEVER, make exceptions for symlinks * owned by a non-root user. This would open a security hole when * delivering mail to a world-writable mailbox directory. + * + * Sebastian Krahmer of SuSE brought to my attention that some systems have + * changed their semantics of link(symlink, newpath), such that the + * result is a hardlink to the symlink. For this reason, we now also + * require that the symlink's parent directory is writable only by root. */ else if (lstat(path, &lstat_st) <
http://article.gmane.org/gmane.mail.postfix.announce/110
crawl-002
refinedweb
1,328
56.55
Consistently, one of the more popular stocks people enter into their stock options watchlist at Stock Options Channel is General Motors Co. (Symbol: GM). So this week we highlight one interesting put contract, and one interesting call contract, from the June expiration for GM. The put contract our YieldBoost algorithm identified as particularly interesting, is at the $34.50 strike, which has a bid at the time of this writing of 70 cents. Collecting that bid as the premium represents a 2% return against the $34.50 commitment, or a 16.1% annualized rate of return (at Stock Options Channel we call this the YieldBoost ). Selling a put does not give an investor access to GM's upside potential the way owning shares would, because the put seller only ends up owning shares in the scenario where the contract is exercised. So unless General Motors Co. sees its shares decline 2.4% and the contract is exercised (resulting in a cost basis of $33.80 per share before broker commissions, subtracting the 70 cents from $34.50), the only upside to the put seller is from collecting that premium for the 16.1% annualized rate of return. Interestingly, that annualized 16.1% figure actually exceeds the 4.1% annualized dividend paid by General Motors Co. by 12%, based on the current share price of $35.31. And yet, if an investor was to buy the stock at the going market price in order to collect the dividend, there is greater downside because the stock would have to lose 2.35% to reach the $34.50 strike price. 4.1% annualized dividend yield. Turning to the other side of the option chain, we highlight one call contract of particular interest for the June expiration, for shareholders of General Motors Co. (Symbol: GM) looking to boost their income beyond the stock's 4.1% annualized dividend yield. Selling the covered call at the $35.50 strike and collecting the premium based on the 73 cents bid, annualizes to an additional 16.4% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost ), for a total of 20.5% annualized rate in the scenario where the stock is not called away. Any upside above $35.50 would be lost if the stock rises there and is called away, but GM shares would have to advance 0.5% from current levels for that to occur, meaning that in the scenario where the stock is called, the shareholder has earned a 2.5% return from this trading level, in addition to any dividends collected before the stock was called. The chart below shows the trailing twelve month trading history for General Motors Co., highlighting in green where the $34.50 strike is located relative to that history, and highlighting the $35.50 General Motors Co. (considering the last 251 trading day GM historical stock prices using closing values, as well as today's price of $35.31) to be 23%. Top YieldBoost G.
https://www.nasdaq.com/articles/interesting-june-stock-options-gm-2015-05-11
CC-MAIN-2022-21
refinedweb
509
67.65
Python Programming, news on the Voidspace Python Projects and all things techie. Elixir and Just Enough Magic Via my usual Python dripfeed (PlanetPython) I read the announcement about Elixir. Elixir is a new declarative mapper for SQLAlchemy. It draws inspiration from ActiveRecord (Ruby on Rails) and has what is both a very nice syntax, and an unusual one for Python : has_field('name', Unicode(60)) has_many('movies', of_kind='Movie', inverse='director') using_options(tablename='directors') class Movie(Entity): has_field('title', Unicode(60)) has_field('description', Unicode(512)) has_field('releasedate', DateTime) belongs_to('director', of_kind='Director', inverse='movies') has_and_belongs_to_many('actors', of_kind='Actor', inverse='movies') using_options(tablename='movies') class Actor(Entity): has_field('name', Unicode(60)) has_and_belongs_to_many('movies', of_kind='Movie', inverse='actors') using_options(tablename='actors') So how are the classes here configured, when there are only what look like function calls within the class namespaces ? A peek into the source code reveals the trick. has_field (to pick an example) is defined in fields.py. They are classes, wrapped inside instances of the Statement class. When they are called, the call is added to a list of statements, stored as a class attribute Statement.statements. Entity has a metaclass, which means that when subclass definitions are executed (at the time the module is imported) then Statement.process is called. Statement.process is a class method, which processes all the statements that have been added to the list (which is then cleared). Because the list is cleared each time process is called [1], statements only contains entries for the class currrently being created. It's not thread-safe, but then importing modules isn't anyway. This is a very nice trick, not too difficult to work out, but just enough magic. I really like the syntax anyway. I'm not sure I'd use a trick like this myself, but the authors of Elixir have created a nice DSL. (With not a self in sight as Andrzej was quick to point out - oh and the source code is very readable, which is a good sign.) Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-02-13 20:52:33 | | Categories: Python, Hacking Automatic Properties and a Metaclass Conundrum I've fixed the (fairly subtle) bug in my properties metaclass (see the previous blog entry). This means that instead of using the metaclass directly, you can create a class that uses it, and just subclass that : __metaclass__ = __properties__ Any subclasses of WithProperties will have properties automatically created from methods declared with a get_, set_ (etc) prefix [1] : def __init__(self): self.__value1 = "A read only value" self.__value2 = None def get_readonly(self): print 'getting readonly attribute' return self.__value1 def get_test(self): print 'Getting test attribute' return self.__value2 def set_test(self, value): print 'Setting test attribute' self.__value2 = value This started out as a toy implementation, but I think this is actually a nice way of declaring properties. I might even use it... The original version of the metaclass worked fine, but when you subclassed a class using it, the metaclass wouldn't be called for the subclass. The subclass still had its __metaclass__ attribute set, it just wasn't being used. The bug was due to the final line of the __new__ method in my metaclass (I've already changed it in the post below). The offending line was : Can you spot what is wrong with this? It took Christian [2] and me several whole minutes of forehead-wrinkling scrutiny (and a close encounter with some code by Ian Bicking) to work it out. All classes are types. Metaclasses creates new instances of types, with the specific characteristics that you supply. When a class is created (normally using the class statement), the metaclass is called, which is responsible for creating the new class. Creating a new class using type(classname, bases, newClassDict) meant that my class was an instance of type - not of my metaclass. The reason that subclasses still had the __metaclass__ attribute set was because of the normal inheritance rules... Changing that line to type.__new__(meta, classname, bases, newClassDict) means that the my class really is an instance of the metaclass, and subclassing it works as expected. ... >>> type(x) <type 'type'> >>> class y(type): pass ... >>> class x(object): ... __metaclass__ = y ... >>> type(x) <class '__main__.y'> I've updated my article on metaclasses to include this. Oh, my achievement on the book on Saturday was writing minus seven pages of the Python tutorial. Refactoring at work. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-02-12 21:18:24 | | Categories: Python, Hacking Painless Properties Python properties are great. They can provide a clean API with useful functionality. def __init__(self): self.__attribute = None def __getattribute(self): return self.__attribute def __setattribute(self, value): self.__attribute = value def __delattribute(self): print "You can't delete attribute." attribute = property( fget = __getattribute, fset = __setattribute, fdel = __delattribute, doc = "Some docstring" ) There are two problems with properties. The syntax is a bit ugly and it leaves your getter and setter methods inside the class namespace. The hack below is a metaclass which offers one solution to these problems. You don't use property directly, but declare methods that begin with 'get_', 'set_' or 'del_'. You can also have class attribute docstrings that start with 'doc_'. Following 'get_', 'set_', 'del_' or 'doc_' is the property name they belong to. A class that uses this metaclass will have the appropriate properties created, and the methods used won't appear in the class namespace. def __new__(meta, classname, bases, classDict): names = set() propertyDict = { 'doc': {}, 'get': {}, 'set': {}, 'del': {} } newClassDict = {} importantNames = set(['del_', 'doc_', 'get_', 'set_']) for name, item in classDict.items(): if name[:4] in importantNames and len(name) > 4: propertyName = name[4:] names.add(propertyName) propertyDict[name[:3]][propertyName] = item else: newClassDict[name] = item for name in names: fget = propertyDict['get'].get(name) fset = propertyDict['set'].get(name) fdel = propertyDict['del'].get(name) doc = propertyDict['doc'].get(name) newClassDict[name] = property(fget=fget, fset=fset, fdel=fdel, doc=doc) return type.__new__(meta, classname, bases, newClassDict) class WithProperties(object): __metaclass__ = __properties__ To see it in action, create an instance of the following class and access the 'test' and 'readonly' properties. You can also look inside the class namespace using dir(Test) to verify that the getter / setter / etc methods and attributes aren't there. You should also see that type(Test) is __properties__. def __init__(self): self.__test = 3 self.__readonly = 2 def get_test(self): print 'Getting test' return self.__test def set_test(self, value): print 'Setting test' self.__test = value def del_test(self): print 'Attempting to delete test' doc_test = "Docstring for test property" doc_readonly = "a read only property" def get_readonly(self): print 'Getting readonly' return self.__readonly Obviously this is only a toy implementation, caveat emptor. The only issue that I'm aware of (which is easy to fix if you have the desire) is that it uses sets, so it requires Python 2.4 or greater. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-02-11 18:55:43 | | Categories: Python, Hacking Microsoft Invented AJAX (and they wish they hadn't) I've just read a very interesting blog entry by Jeff Attwood: Did IE6 Make Web 2.0 Possible?. By somewhere around 2002 Microsoft had effectively won the browser war. IE 6 was introduced in August 2001. Up until then Microsoft had released a new version of IE every year or eighteen months or so. XMLHttpRequest, which is central to AJAX, was introduced in IE 5.0 in March 1999 as a proprietary feature. Conventional wisdom says that Microsoft stopped developing the browser after they won the war because they were scared of the 'web as platform' damaging the market for desktop applications. Jeff suggests, rather ironically, that by letting the browser market stagnate (so that by 2004 something like 95% of people browsing the internet were using a single browser version) Microsoft made it dramatically easier for people to contemplate writing web applications!. Nice theory. Personally I think the web still sucks as a platform (anyone using a browser based IDE yet ?) and I don't see much sign of that changing. Nicer web apps are great, but why does everything need to be delivered through a browser ? I think client apps with collaborative features (or other web service integration) are the way to go. Oh, another interesting thing: This is also from last year. Slides 36-38 show exactly which features from Python Javascript 2 will borrow. This is iterators, generators and list comprehensions. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-02-11 14:45:49 | | Categories: General Programming Happy Birthday Voidspace Voidspace.org.uk is four years old today. Happy birthday it. In web years that's practically ancient... Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-02-11 00:58:58 | | Cats or Dogs, C# or Java I like James Tauber's new site, Cats or Dogs [1]. It asks you a series of questions, where you choose one thing over another. Then it presents you with some information about other people's choices. For example : People who prefer c# to java are 3.0 times more likely to prefer boxers to briefs. People who prefer java to c# are 50% more likely to prefer briefs to boxers. This might be significant. You can view some of the results it generates, here. Can you guess that I'm trying to find a distraction from writing ? Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-02-10 23:12:49 | | Categories: Fun, General Programming And Today in the Junk Folder Oops... it looks like someone forgot to fill in a few values before firing up their evil spam machine : %TO_CC_DEFAULT_HANDLER Subject: %SUBJECT Sender: "%FROM_NAME" <%FROM_EMAIL> Mime-Version: 1.0 Content-Type: text/html Date: %CURRENT_DATE_TIME %MESSAGE_BODY Received today. My current favourite is the subject of a piece of spam I received yesterday: The Chronicles of The Rogue Pirate Ninjas: Revised, Act One. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-02-10 12:05:55 | | How Vista is Good for Python (well... maybe) Smart and perceptive people that you are, I'm sure that the recent series of 'Mac vs PC' adverts won't have escaped your notice. Windows Vista is finally out, and for the first time in living memory [1] computer users everywhere are considering a major change to their operating system. Apple is hoping that some of those people will switch to Mac OS. And why shouldn't they? Apple's star has been rising of late. The iPod underpinned their financial stability, cheaper hardware have contributed to rising sales and the buzz about Vista has led to mainstream press [2] coverage about how Vista is just Microsoft playing catch-up with Mac OS. The capability of running Windows on Mac hardware is also helping those on the edge of changing to make the jump. Linux users can take heart from this. A lot of work has gone into making Linux easier to install, and friendlier for non-ultra-geeks to use on the desktop. Distributions like Ubuntu are making this their explicit goal. Some Mac hardware owners run Linux, and if running a non-Microsoft OS on the desktop is becoming fashionable then it can only be good news for Linux. None of this changes the fact that Microsoft are starting from a pretty good position. Statistically almost every desktop computer in the world is running their operating system. I can't be bothered to look it up, but what is it - still greater than 95% ? Assuming that more people continue to switch away from Windows, and it certainly looks that way, then eventually we will get close to a magical tipping point. At the moment it makes financial sense for many software companies, particularly small ones, to develop exclusively for the Windows operating system. It simply isn't worth the investment of time and resources to develop for alternative platforms. At some point it will not only be a nice idea for software firms to make their applications cross-platform, but it will become good business sense - followed not long after by essential business sense. Where is that tipping point ? 15% ? 20% ? As we approach this point programming languages with a strong history of providing hassle-free cross platform development environments will be ever more important. That includes Python. Not only that, but cross-platform libraries like wxWidgets and QT will also become more important. This is good news for users of those libraries, whichever language they are consumed from. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-02-10 01:17:54 | | Categories: Python, Computers Archives This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License. Counter...
http://www.voidspace.org.uk/python/weblog/arch_d7_2007_02_10.shtml
CC-MAIN-2018-05
refinedweb
2,174
57.77
When a user clicks on a button or link on a Web page, there can be a delay between posting to the server and the next action that happens on the screen. The problem with this delay is that the user may not know that they already clicked on the button and they might t hit the button again. It’s important to give immediate feedback to the user so they know that the application is doing something. This article presents a few different methods of providing the user with immediate feedback prior to the post-back. You’ll learn to disable the button and change the message on that button. You’ll see how to pop-up a message over the rest of the screen. You’ll also learn how to gray out the complete background of the page. And finally, you’ll see how to use spinning glyphs from Font Awesome to provide feedback to the user that something is happening. A Sample Input Screen Figure 1 shows a sample screen where the user fills in some data and then clicks a button to post that data to the server. I’m only going to use a single field for this screen, but the technique presented works on any input screen. When the user clicks on a button that posts data to the server, your job is to give the user some immediate feedback that something has happened so that they don’t click on the button (or any other button) again until the process has completed. There are many ways that you can provide feedback to the user that you’ve begun to process their request. Here’s a list of just some of the things you can do. - Redirect the user to another page with a message that their request is being processed. - Disable all the buttons on the screen so they can’t click anything else. - Hide all of the buttons on the screen so they can’t click any of them. - Change the text on the button they just clicked on. - Disable the button they just clicked on. - Pop-up a message over the screen. - Gray out the page they’re on. - Any combination of the above items. In this article, you’ll learn how to do a few of these items. To create the screen shown in Figure 1, use Visual Studio to create a new MVC application. Create a folder under \Views called \ProgressSamples. Create a new view in this folder called ProgressSample.cshtml. Create a new controller called ProgressSamplesController that can call this new page. The code to create this view is shown in Listing 1. A Class to Hold the Input For the sample screen, you’ll be entering a music genre such as Rock, Country, Jazz, etc. That means you need a class to use as a model for the input screen. The class shown in the following code snippet called MusicGenre will be used in this article for data binding on the form. public class MusicGenre { public MusicGenre() { GenreId = 0; Genre = string.Empty; } public int GenreId { get; set; } public string Genre { get; set; } } The Controller In the controller, you need two methods (as shown in Listing 2): one displays the screen and one handles posting data from that screen. The first method, ProgressSample, is very simple in that it creates an instance of the MusicGenre class, passes that to the ActionResult returned from this method, and then is passed on to the view. The second method handles the post-back from the page. To simulate a long-running process, just add a call to the Thread.Sleep() method and pass in 3000 to simulate a three-second operation. Return the model back from this method so the data stays in place on the page when this operation is complete. Change the Button When Clicked The first example is very simple. You change the text of the button and disable it (Figure 2) so that the user doesn’t click on the button again. These two steps are accomplished by writing a very simple JavaScript function, as shown in the following code snippet. <script> function DisplayProgressMessage(ctl, msg) { $(ctl).prop("disabled", true); $(ctl).text(msg); return true; } </script> Pass a reference to the button that was clicked to the DisplayProgressMessage function, and a message to change the text of the button. Use a jQuery selector to set the button’s disabled property to true. This causes the button to become disabled. Use the text method to set the text of the button to the message passed in. After creating this function, modify the submit button to call the DisplayProgressMessage function. Add an onclick event procedure to the submit button, as shown in the code snippet below. Pass a reference to the button itself using the keyword this, and the text Saving… which is displayed on the button. <button type="submit" id="submitButton" class="btn btn-primary" onclick="return DisplayProgressMessage(this, 'Saving...');"> Save </button> Add Pop-Up Message Let’s now enhance this sample by adding a pop-up message (Figure 3) in addition to changing the text of the button. Create a <div> with a <label> in it, and within the label, place the text you wish to display. Add a CSS class called .submit-progress to style the pop-up message. Set this <div> as hidden using the Bootstrap class Hidden so it won’t show up until you want it to display. Here’s the HTML you’ll use for this pop-up message: <div class="submit-progress hidden"> <label>Please wait while Saving Data...</label> </div Create the submit-progress style using a fixed position on the screen. Set the top and left to 50% to place this <div> in the middle of the page. Set some padding, width, and margins that are appropriate for the message you’re displaying. Select a background and foreground color for this message. Finally, set a border radius and a drop-shadow so the pop-up looks like it’s sitting on top of the rest of the page. To make this pop-up appear when clicking on the button, update the DisplayProgressMessage function, as shown in the following code snippet: <script> function DisplayProgressMessage(ctl, msg) { $(ctl).prop("disabled", true).text(msg); $(".submit-progress").removeClass("hidden"); return true; } </script> Notice that I changed this function a little. I chained together the setting of the disabled property and the setting of the text on the button. This isn’t absolutely necessary, but it’s a little more efficient as the selector only needs to be called one time. Next, remove the hidden class from the <div> tag to have the pop-up message appear on the page. Gray Out the Background To provide even more feedback to the user when they click on the button, you might "gray out" the whole Web page (Figure 4). This is accomplished by applying a background color of lightgray and an opacity of 50% to the <body> element. Create a style named .submit-progress-bg to your page that you can apply to the <body> element using jQuery. <style> .submit-progress-bg { background-color: lightgray; opacity: .5; } </style> Change the DisplayProgressMessage to add this class to the <body> tag when the button is clicked, as shown in the code snippet below: function DisplayProgressMessage(ctl, msg) { $(ctl).prop("disabled", true).text(msg); $(".submit-progress").removeClass("hidden"); $("body").addClass("submit-progress-bg"); return true; } Add a Font Awesome Spinner Another way to inform a user that something’s happening is to add some animation. Luckily, you don’t need to build any animation yourself; you can use Font Awesome () for this purpose. Add Font Awesome to your project using the NuGet Package Manager within Visual Studio. Font Awesome has many nice glyphs to which you can add a "spin" effect (Figure 5). Add an <i> tag to the submit progress pop-up <div> that you added before and set the CSS class to "fa fa-2x fa-spinner fa-spin". The first class name "fa" simply identifies that you wish to use the Font Awesome fonts. The second class name "fa-2x" says you want it to be two times the normal size of the glyph. The third class name "fa-spinner" is the actual glyph to use, which, as shown in Figure 5, has different-sized white circles arranged in a circle. The fourth class name "fa-spin" causes the glyph to spin continuously. Adding that <i> tag with those classes set to spin causes your pop-up message to display that glyph next to your text. <div class="submit-progress hidden"> <i class="fa fa-2x fa-spinner fa-spin"></i> <label>Please wait while Saving Data...</label> </div> Of course, you do need to change the .submit-progress style a little bit in order to fit both the glyph and the text within the message area and to make them look good side-by-side. You can add the following styles just below the other .submit-progress you created before to override the original styles. This makes the glyph appear in the right place in your message. You also add an additional style for the <i> tag to put a little bit of space between the glyph and the text. .submit-progress { padding-top: 2em; width: 23em; margin-left: -11.5em; } .submit-progress i { margin-right: 0.5em; } If you were to run the page right now without making any changes, you’ll either not see the glyph at all, or you’ll see it but it won’t be animated. The problem is that when the JavaScript function is running, it runs on a single execution context and then immediately returns to the browser, which then executes the form post to the server. The animated glyph needs to have its own execution context in which to run. To accomplish this, you need to use the JavaScript setTimeout function around the code that un-hides the <div> tag with your spinning glyphs. You can set any amount of time that you want on the setTimeout, but usually just a single micro-second will do, as shown in the code below. function DisplayProgressMessage(ctl, msg) { $(ctl).prop("disabled", true).text(msg); $("body").addClass("submit-progress-bg"); // Wrap in setTimeout so the UI // can update the spinners setTimeout(function () { $(".submit-progress").removeClass("hidden"); }, 1); return true; } Summary In this article, you learned how to provide some feedback to your users when they’re about to call a long operation on the server. It’s important to provide the feedback so that they don’t try to click on the same button again, or try to navigate somewhere else while you’re finishing a process. I’m sure that you can expand upon the ideas presented in this article and apply it to your specific use cases. Remember to run any animations in a different execution context using the setTimeout function. Sample Code You can download the sample code for this article by visiting my website at. Select PDSA Articles, then select "CODE Magazine—Progress Messages" from the drop-down list.
https://www.codemag.com/Article/1507051/Display-a-Progress-Message-on-an-MVC-Page
CC-MAIN-2020-10
refinedweb
1,864
71.55
17 August 2012 09:12 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The ground-breaking ceremony was held on 15 August, according to Songyuan government’s website. Sinochem Jilin Changshan Chemical received approval from its parent company, Sinochem Group, on 14 June to double its ammonia and urea capacities to 360,000 tonnes/year and 600,000 tonnes/year respectively, a company source said. The company will invest a total of yuan (CNY) 1.2bn ($188m) and the construction will take two years, said the source. After the completion of the project, the company is also expected to double its sales revenue to CNY 1.4bn from 710m in 2011, the source
http://www.icis.com/Articles/2012/08/17/9587854/chinas-sinochem-jilin-changshan-expanding-urea-ammonia-units.html
CC-MAIN-2014-10
refinedweb
112
65.32
This is an idea I got from this article: I read it a while back, but was recently thinking of ways to make a decent 3D interface. Not an interface that is in 3D, but an interface that allows for rotating, scaling, and moving 3D objects. I’ve done all sorts of menus, toolbars, keyboard shortcuts, but it never was too usable. I thought this might be better. I’m not sure it will be better for that purpose, but at least I wound up with a cool component that will surely be useful for something. Here’s some demos. First with labels, like you see above. Just click anywhere. Drag to the item you want, and release. And here with icons: The use is a bit different, but pretty straightforward. When you create the menu, you need to specify the parent (usually best to place in main class so it's above everything), the number of segments, outer radius, icon radius, inner radius, and default select event handler. Actually, you only need to specify the first two. The rest have default params. [as]wheel = new WheelMenu(this, 8, 80, 60, 10, onSelect);[/as] The number of items, and radii cannot be changed after creation. Hey, this is a minimal comp! After creation, you add your items, specifying the index of the item, the icon or label, and any data. [as]wheel.setItem(0, "one", "one"); wheel.setItem(1, "two", "two"); wheel.setItem(2, "three", "three"); wheel.setItem(3, "four", "four"); wheel.setItem(4, "five", "five"); wheel.setItem(5, "six", "six"); wheel.setItem(6, "seven", "seven"); wheel.setItem(7, "eight", "eight");[/as] The second param is iconOrLabel. You can pass in an instance of any display object, or a class that extends DisplayObject and it will be used as an icon. Or you can pass in a string and a label will be created. No, you can't have a label and an icon. Again, this is a minimal comp, and the layout for that would open a big can of worms. Anyway, you could make your own class that contains a label and an icon and use that easily enough. So if you need both, there you go. It's up to you to make sure your label isn't too long or your icon isn't too big. And use the iconRadius param of the constructor to adjust how close to the center it goes. A neat trick - you can even set iconRadius larger than outerRadius and your icons/labels will appear outside, around the menu. To activate it, usually you want to listen for a MOUSE_DOWN event and call wheel.show(). It will automatically center itself on the mouse. Here's the code for the full example above: [as]package { import com.bit101.components.Component; import com.bit101.components.Label; import com.bit101.components.WheelMenu; import flash.display.Sprite; import flash.events.Event; import flash.events.MouseEvent; [SWF(backgroundColor=0xffffff, width=800, height=800)] public class Playground extends Sprite { private var wheel:WheelMenu; private var label:Label; public function Playground() { Component.initStage(stage); label = new Label(this, 10, 10); wheel = new WheelMenu(this, 8, 80, 60, 10, onSelect); wheel.setItem(0, "one", "one"); wheel.setItem(1, "two", "two"); wheel.setItem(2, "three", "three"); wheel.setItem(3, "four", "four"); wheel.setItem(4, "five", "five"); wheel.setItem(5, "six", "six"); wheel.setItem(6, "seven", "seven"); wheel.setItem(7, "eight", "eight"); stage.addEventListener(MouseEvent.MOUSE_DOWN, onMouseDown); } private function onMouseDown(event:MouseEvent):void { wheel.show(); } private function onSelect(event:Event):void { label.text = "You chose item " + wheel.selectedItem; } } } [/as] Download / checkout here: These are fantastic. Thanks! Really great idea. Thanks for bringing it to my attention. Hey, cool. I’ve done this same thing a few times, but never really thought of it as something that would be useful as a package. BUT, it seems to fit great with the rest of the minimal comps package. Be good to put some kind of background on your examples there … I thought my plugin had packed up! Nice menu though … I might have to use it in a project I’m doing at the moment. Really intuitive – nice one! Cool thing. But you should also add a keyboard event listener when you show it, and listen for Esc key to cancel and close the menu. This is standard practice with any popup menu. nice one. could be useful for some touch screen work. thanks! @Erki – hard to see the need for an escape key when simply releasing the mouse achieves the same thing… I’d understad if the menu were modal, but it’s counter-intuitive behaviour to mouse down and press escape simultaneously. @Iain – Agreed. @KP – Another stunning piece of work! Thank you. I’d love to see your minimal take on a list box / scroll panel at some point: I remember a post where you mused on using a lambda function to calculate the scroll value… sounded v cool Without the pie, that’s very close to the contextual menus you can find in Alias/Maya… Looks really nice, Keith. A little usability suggestion–with 8 segments, rotating by 22.5 degrees would make it easier to use. You would have up, down, left, right and 4 diagonals, vs 8 diagonals. This would also help with muscle memory. Thanks Robert. I know what you mean. there is a protected var in there for starting rotation, which is -90. I had meant to make a setter for that. But programmatic would probably be nice, so the top segment would always be centered. Wasn’t sure how to handle it best, so this feedback helps. Erki, Oliver, yeah, I could add an escape key listener, but not sure i see the point. just release the mouse in the center or outside. import com.bit101.components.WheelMenu; not currently seeing the WheelMenu.as in the components folder from the google source. Hey Keith, another small ui suggestion, you could make the outer circle of the menu a ‘mile high’ so that the user only has to click -> hold -> drag in a general direction instead of the current solution where the user has to click->hold->visually aim the cursor -> release for selecting an option. Josh… are you checking it out from svn or downloading the zip or grabbing the swc? Tyler. Not sure I get what you mean by “mile high”. great menu, I just love it – only one suggestion, it isn’t functional near borders. Very cool! reminds me of the nav from the Monkey Island series. I think Josh is suggesting that the circle’s sector detection should extend to the whole screen. This lets you gesture more vigorously and not worry about staying within the visual circle. […] > New MinimalComp: WheelNav | BIT-101 Blog […] little bug exist in WheelMenu Class highlightColor setter. _buttons[i].selectedColor = _highlightColor; there’s no selectedColor property exist in ArcButton internal class. this should fix problem _buttons[i].highlightColor = _highlightColor; […] components from BIT-101: Minimalcomps Update #1: there has been another update New MinimalComp: WheelNav, but I’m pretty sure I won’t been using this one very […]
http://www.bit-101.com/blog/?p=1703
CC-MAIN-2017-17
refinedweb
1,190
68.06
Does anyone know how to determine what platform your c# code is running on e.g. whether it is running on linux or windows so that I can execute different code at runtime. I have a c# windows app that I want to build to target windows and linux platforms. So far I have created 2 project files pointing to the same set of source code files. I then use a conditional compilation statement one of the projects called LINUX. Where there are difference in the actual code I use coditional statements using the conditional compilation statement, e.g #if (LINUX) ' do something #endif You can detect the execution platform using System.Environment.OSVersion.Platform: public static bool IsLinux { get { int p = (int) Environment.OSVersion.Platform; return (p == 4) || (p == 6) || (p == 128); } } How to detect the execution platform ? The execution platform can be detected by using the System.Environment.OSVersion.Platformvalue. However correctly detecting Unix platforms, in every cases, requires a little more work. The first versions of the framework (1.0 and 1.1) didn't include any PlatformIDvalue.
https://codedump.io/share/rh2X1jwtHAZX/1/how-to-check-the-os-version-at-runtime-eg-windows-or-linux-without-using-a-conditional-compilation-statement
CC-MAIN-2018-26
refinedweb
181
50.84
Just got back from a weekend with Barbara in College Station, TX, where she is working on her master's in computer graphics at the TAMU Vizlab. Saw Final Fantasy at the big megaplex on the edge of town, followed up with lots of kibbitizing on the graphics technologies used. More work on Ganymede. I downloaded the 1.4 JDK beta towards the end of last week and made a few fixups to get everything to work under 1.4. So far as I know, all of the Ganymede code, server, client, everything, now work on every version of Java from 1.1.7 to 1.4beta. Pretty impressive if you ask me. I keep finding myself scheming about things to do once I officially drop support for JDK 1.1, though. In particular, I'm thinking how much fun it would be to implement support for a more scalable, disk-based database system using the random access file methods and the Java weak reference API for implementing a memory cache. Anyone have any good white papers on simple key-based transactional database systems? I've looked at Berkeley DB, but I'm thinking I shouldn't need to create a dependency on their code with as much transactional logic as is already in the Ganymede server. How hard can it be to implement a random access paging file with discrete indices for namespace constraints? Seems like the only real trick would be figuring out how to make transactional commits reasonably atomic on disk. The way Ganymede does it now, by keeping a ganymede.db file and an ongoing journal file is easy, File append and atomic rename make reliable transactions simple to do. Anyone want to email me with suggestions on books and/or web pages describing simple transactional disk models I might look at?
http://www.advogato.org/person/jonabbey/diary.html?start=14
CC-MAIN-2016-07
refinedweb
306
62.58
Next: Tips for Setuid, Previous: Enable/Disable Setuid, Up: Users and Groups [Contents][Index] Here’s an example showing how to set up a program that changes its effective user ID. This is part of a game program called caber-toss that manipulates a file scores that should be writable only by the game program itself. The program assumes that its executable file will be installed with the setuid bit set and owned by the same user as the scores file. Typically, a system administrator will set up an account like games for this purpose. The executable file is given mode 4755, so that doing an ‘ls -l’ on it produces output like: -rwsr-xr-x 1 games 184422 Jul 30 15:17 caber-toss The setuid bit shows up in the file modes as the ‘s’. The scores file is given mode 644, and doing an ‘ls -l’ on it shows: -rw-r--r-- 1 games 0 Jul 31 15:33 scores Here are the parts of the program that show how to set up the changed user ID. This program is conditionalized so that it makes use of the file IDs feature if it is supported, and otherwise uses setreuid to swap the effective and real user IDs. #include <stdio.h> #include <sys/types.h> #include <unistd.h> #include <stdlib.h> /* Remember the effective and real UIDs. */ static uid_t euid, ruid; /* Restore the effective UID to its original value. */ void do_setuid (void) { int status; #ifdef _POSIX_SAVED_IDS status = seteuid (euid); #else status = setreuid (ruid, euid); #endif if (status < 0) { fprintf (stderr, "Couldn't set uid.\n"); exit (status); } } /* Set the effective UID to the real UID. */ void undo_setuid (void) { int status; #ifdef _POSIX_SAVED_IDS status = seteuid (ruid); #else status = setreuid (euid, ruid); #endif if (status < 0) { fprintf (stderr, "Couldn't set uid.\n"); exit (status); } } /* Main program. */ int main (void) { /* Remember the real and effective user IDs. */ ruid = getuid (); euid = geteuid (); undo_setuid (); /* Do the game and record the score. */ … } Notice how the first thing the main function does is to set the effective user ID back to the real user ID. This is so that any other file accesses that are performed while the user is playing the game use the real user ID for determining permissions. Only when the program needs to open the scores file does it switch back to the file user ID, like this: /* Record the score. */ int record_score (int score) { FILE *stream; char *myname; /* Open the scores file. */ do_setuid (); stream = fopen (SCORES_FILE, "a"); undo_setuid (); /* Write the score to the file. */ if (stream) { myname = cuserid (NULL); if (score < 0) fprintf (stream, "%10s: Couldn't lift the caber.\n", myname); else fprintf (stream, "%10s: %d feet.\n", myname, score); fclose (stream); return 0; } else return -1; } Next: Tips for Setuid, Previous: Enable/Disable Setuid, Up: Users and Groups [Contents][Index]
http://www.gnu.org/software/libc/manual/html_node/Setuid-Program-Example.html
CC-MAIN-2014-41
refinedweb
473
70.94
pip install aiobotocore The Python aiobotocore library is among the top 100 Python libraries, with more than 43,241,528 downloads. This article will show you everything you need to get this installed in your Python environment. How to Install aiobotocore on Windows? - Type "cmd"in the search bar and hit Enterto open the command line. - Type “ pip install aiobotocore” (without quotes) in the command line and hit Enteragain. This installs aiobotocore for your default Python installation. - The previous command may not work if you have both Python versions 2 and 3 on your computer. In this case, try "pip3 install aiobotocore"or “ python -m pip install aiobotocore“. - Wait for the installation to terminate successfully. It is now installed on your Windows machine. Here’s how to open the command line on a (German) Windows machine: First, try the following command to install aiobotocore on your system: pip install aiobotocore Second, if this leads to an error message, try this command to install aiobotocore on your system: pip3 install aiobotocore Third, if both do not work, use the following long-form command: python -m pip install aiobotoc aiobotocore on Linux? You can install aiobotocore on Linux in four steps: - Open your Linux terminal or shell - Type “ pip install aiobotocore” (without quotes), hit Enter. - If it doesn’t work, try "pip3 install aiobotocore"or “ python -m pip install aiobotocore“. - Wait for the installation to terminate successfully. The package is now installed on your Linux operating system. How to Install aiobotocore on macOS? Similarly, you can install aiobotocore on macOS in four steps: - Open your macOS terminal. - Type “ pip install aiobotocore” without quotes and hit Enter. - If it doesn’t work, try "pip3 install aiobotocore"or “ python -m pip install aiobotocore“. - Wait for the installation to terminate successfully. The package is now installed on your macOS. How to Install aiobotocore in PyCharm? Given a PyCharm project. How to install the aiobotoc "aiobotocore"without quotes, and click Install Package. - Wait for the installation to terminate and close all pop-ups. Here’s the general package installation process as a short animated video—it works analogously for aiobotocore if you type in “aiobotocore” in the search field instead: Make sure to select only “aiobotocore” because there may be other packages that are not required but also contain the same term (false positives): How to Install aiobotocore in a Jupyter Notebook? To install any package in a Jupyter notebook, you can prefix the !pip install my_package statement with the exclamation mark "!". This works for the aiobotocore library too: !pip install my_package This automatically installs the aiobotocore library when the cell is first executed. How to Resolve ModuleNotFoundError: No module named ‘aiobotocore’? Say you try to import the aiobotocore package into your Python script without installing it first: import aiobotocore # ... ModuleNotFoundError: No module named 'aiobotocore' Because you haven’t installed the package, Python raises a ModuleNotFoundError: No module named 'aiobotocore'. To fix the error, install the aiobotocore library using “ pip install aiobotocore” or “ pip3 install aiobotocore” in your operating system’s shell or terminal first. See above for the different ways to install aiobotoc.
https://blog.finxter.com/how-to-install-aiobotocore-in-python/
CC-MAIN-2022-33
refinedweb
518
65.42
In this article, we will learn about c++ switch statement.The C++ switch statement executes one statement from multiple conditions. It is like if-else-if ladder statement in. Below is the structure of switch statement switch(expression){ case value1: //code to be executed; break; case value2: //code to be executed; break; ...... default: //code to be executed if all cases are not matched; break; } C++ switch example #include <iostream> using namespace std; int main () { int num; cout<<"Enter a number to check grade:"; cin>>num; switch (num) { case 10: cout<<"It is 10"; break; case 20: cout<<"It is 20"; break; case 30: cout<<"It is 30"; break; default: cout<<"Not 10, 20 or 30"; break; } } Output: Enter a number: 10 It is 10 Output: Enter a number: 55 Not 10, 20 or 30 Another C++ switch − You passed Your grade is D Do you want to hire us for your Project Work? Then Contact US.
https://blog.codehunger.in/c-plus-plus-switch-statement/
CC-MAIN-2021-43
refinedweb
156
63.73
Archives The Collective vs. the Individual: - Anyone and everyone can contribute to it. Sure there needs to be a group of people that keep an eye out for dupes, malicious posts or spam but that can anyone with some time and passion. - If you contribute something you should be responsible to keep it up to date. Like a good boy scout, clean up after yourself. - If you see something out of date, update it. - If anyone asks about something that would be on the list (where can I find a list of SharePoint bloggers which gets asked a lot) you point them at the list. If it's not on the (and you know the answer) that would be a trigger to add it.. Follow). Share: Now here it is after a few versions and specifically after a checkout/check. Scott Hanselman is my hero Scott Hanselman is, hands down, simply my hero today. A couple of years ago he posted a pretty massive list of useful tools, utilities, and must have stuff a any self-respecting geek should have on his hard drive. Now he's updated the list and it's massive (and broken down into catagories to boot). I find that I use about 1/2 of the tools listed and now I'm off to investigate the other half. If there's one blog entry you need to bookmark, clip, or copy and paste into OneNote (if that's your thing) this is it. Check out his 2005 updated list here. Everyone else is. Here's my personal top 10 from his list (plus some that are not there) that I faithfully use every day: - ReSharper - Basically live and die by this. I can do all the refactorings by hand, but it's just so sweet to rename a class and have 30 files update automatically. - WinMerge - Fantastic util and very handy for trying to diff SharePoint xml files between environments. Open source to boot. - NAnt - Build, build, build (and do a whole lot more) - TestDriven.Net - Test, test, test (and then refactor and test some more) - OneNote - I clip tons of stuff everyday. It's from my old days as a graphic artist where I would clip pictures and put them in what's called a "graveyard". OneNote lets me do this, search it, organize and much more. I also keep a OneNote section on blog ideas and whatnot. Very handy! - Notepad++ - Scott pushes Notepad2, but I prefer this Notepad replacement. YMMV. - Midnight Commander - Scott doesn't have this listed so call me a command line guy, but I prefer this util over Explorer anyday. Reminds me of the odl XTree tool (which I used all the time as well) - CopySourceAsHtml - Great for blog postings! - Reflector - I'm poking inside Microsofts assemblies everyday trying to see what makes things tick. - Web Services Studio 2.0 - This is an invaluable tool when trying to debug Web Services from SharePoint. Get it now! Moving to 2.0 and other goodness What a week of living H-E-double-hockey-sticks as I've had about 10 hours of sleep for the entire week (which is a bit low even for me) and basically got very little done. Two things that I wanted to mention though. Yesterday I had the pleasure of working with Kit George, the Program Manager for the CLR team at Microsoft. The CLR team was in Calgary these past few days (and have been in Canada for the past week) helping out customers with some compatibility testing with the 2.0 framework. It was great talking to Kit and found a few intersting facts out I didn't know. There are about 66 developers on the CLR team alone which gives you a real idea of how big the entire team is (so add on project/program managers, architects, testers, etc.). Oh yeah, and if you're looking forward to the GZipStream class coming in System.IO.Compression like I am (goodbye SharpZipLib!), you can thank Kit. That's his baby. If you're looking for a migration path then I suggest you install the framework. Really. Just install it. You can get the redist here which, like it's 1.1 cousin, is just a single 20mb setup.exe that you run. Install it on a development server and in your Web Site properties in IIS just retarget it for 2.0. No iisreset needed. No reboot required. Your 1.1 ASP.NET apps should work fine (they'll be a short JIT when someone accesses them for the first time). We only did a small test (3 or 4 apps) but the 30 or so others running on that server didn't show any issues and nobody reported any problems. Try it out for awhile like that. Then you can think about the recompiling of apps, which is a different issue than just retargeting for the platform. The retargeting and installation of the 2.0 framework is pretty simple. We did find a silly problem sometimes because the ViewState was cached, so just relaunch your browser to fix it. This was when we were in the middle of a task, retargeted the system, then just tried carrying on from there. Not something you would normally do. If you do deploy and try testing but still find any issues you might want to drop a note off to Kit and his team. If you're looking to recompile (and you will at some point) then you'll need of course Visual Studio and all that jazz. While the retargeting is simple and the team says it'll probably work fine 99% of the time (there will be those off things that are exceptions, again if you come across them let Microsoft know!) it's the recompiling that's going to bite you. I've been recompiling apps here at work from time to time (what else does a geek like me do on his lunchbreak?) to see what kind of issues there are (not testing the apps, just recompiling the solutions) and there are some deprecated things that you'll need to rework your code at some point (one of the biggest ones is system.configuration). The other thing is splitting up of assemblies. Let's say you have a common assembly for logging or something that 10 ASP.NET apps use. You can't be expected to upgrade all 10 apps at the same time as the testing effort might be too much to bear. So you'll be in a mixed mode scenario where your environment might be targetted for 2.0 and a few of the apps are native 2.0 (migrated and recompiled) but others (like common assemblies) are still 1.1. This is the scenario that needs the most testing and you'll probably stumble over issues (DateTime serialization being one of the biggest ones). So just something to watch out for and that there's going to be some work migrating to 2.0. Finally Serge posted a great example in the comments from Thursdays post of how he accesses the SQL directly for retrieving things from a command line (via bcp but you can do it with osql as well). He did say that he tried my Wrapper classes but they didn't provide enough depth of functionality and were a bit of work. This resonates with me that I need to shift a bunch of focus on the library and make it useful instead of a bit of a toy which is what it kind of is right now. If I'm going to go around preaching for people to write remote apps I feel that I should provide you with an easy to use and rich tool to do this. So I will be getting back to putting more time into the wrappers and hopefully provide a complete set of objects that anyone can use for any purpose. Web Services are still slower than the Object Model but the advantage is that you don't have to run it directly on the server and some caching might be able to speed things up a bit. I think it'll be a long time before Microsoft does something like make the SharePoint object model remotable (if ever) so hopefully this will help in the meantime. Please stop using the database and start writing remote apps As I get down off my soapbox from yesterday about overpriced third-party SharePoint tools and components I have a new gripe (seems like a good week for this). Time and time again I'm seeing two things become apparent in tools that are coming onto the market. First, some are just bypassing the rules of the road and doing stuff directly against the SQL databases. Second, about 9 out of 10 tools must be run directly on the server and require a local administrator. Okay, first on the SQL stuff. I don't understand why developers can't get over this. You've seen numerous posts (including Fitz going on a few times about it) to get out of the database. So why do you keep doing it? My only reasoning is that you're frustrated by the lack of access to things that we think should be apparant and well, we can all write T-SQL so why not talk to it directly? What's the harm? I'm not going to get into the issues of what the database is doing as Fitz had a good explaination around that. Maybe that position should change though as more and more products are just blatently going against the database to do things like counting the number of sites in a portal. Rather than looping through a property in the Object Model, a quick "SELECT COUNT(1) FROM WEBS" will do the trick. So the $10,000 question might be should we be doing this? If you're using the database directly can you do simple selects, counts, etc. as long as you're not doing updates? Second is the ever growing number of tools that keep requiring two things. One to be run directly on the server and two to be run by a local administrator. It's great for some tools because they are administrator tools, but I think the one big thing with SharePoint that people are having trouble with is the admisistration. More and more companies are either outsourcing their infrastructure or trying to get everyone to do their jobs from a single location. At our company, a huge effort just finished on stopping the free love access that users had to systems. The other thing is that a SharePoint administrator shouldn't have to have local admin to the SharePoint box. In a clustered or scaled out scenario, that means they really need access to several boxes (including the database servers). I think what we need is more tools that users can use off their desktop (via Web Services) to do the mundane stuff that SharePoint doesn't necessarily expose through the Web UI, or for custom tasks that are specific to your business needs. Again, we're given lots of tools but most of them we might be in a position to not be able to use in our environment due to restrictions on our infrastructure. I don't think the answer is to keep opening up the infrastructure to allow a SharePoint admin local root power but that's just me. A feeling of Deja Vu A long time ago in a technology far, far, away I used to write these things called Doors that ran on community-based, dial-up, text-based systems called BBSes (yes, before the big bad "Internet"). A BBS was like a website (but not really) where communities got together, posted messages, swaped files and email and played games (sounds like a website huh?). Doors were all the rage and I ran many systems and wrote a bunch of Door games and all was well in the universe. Along game a man named Tim Stryker who was a bit of a radical, but basically a genius. He (and others) put together a system called MajorBBS (MBBS for short). This was a unique system over traditional BBS systems of the time because rather than spawning off separate processes to run all these Door games available, MBBS would run them as Dynamic Link Librairies (DLLs) and all modules in a system would run in real-time (at first they were actually linked into the system and you had to recompile the system when you bought a new module, crazy huh?). Things like real-time chatting, real-time updates from people playing in games, were now possible with the approach and technology Mr. Stryker and Galacticomm built. However here's the catch. Building a regular (non-MBBS) Door game was pretty much a no-brainer for a good programmer. There were some frameworks out there that handled the Serial IO for you (remember this is all dial-up, async, 2400 baud and all that) but for the most part you wrote a single player application and only worried about it while it was running. The Door would launch, the user would play, the executable would end and the user was shuffled back to the BBS system from whence he (or she) came. Then Major BBS modules appeared but there was a bit of a catch on building these as you had to now deal with shared memory, be wary of heavy processing and not taking down all the other lines (up to 255 on a single PC!) and stuff like that. Additionally you had to outfit yourself with some pretty specialized tools. Borland C was the only compiler you could use (later Microsoft Visual C was able to do the job), a special memory extender library (PharLap) had to be used, etc. On top of all that, you also had to buy your MBBS system just to test your code. All in all, I personally invested about $2000-$3000 bucks to build my development environment which was a lot for back then. To offset this, Major BBS developers could charge an arm and leg for their products. Even a simple game like Tic-Tac-Toe (with a chat feature) would sell for hundreds of dollars. I built small RPG games that I sold for over $1000. That's each. And you could get away with selling licenses for more lines. The more lines (users) a BBS wanted, the more your could charge. The short of it was that we could get away with this because it was difficult to build the software, an investment for the environment, and then there was support and the fact that any guy off the street couldn't just sit down one night and say "I'm going to build a MBBS module and make a boatload of cash". A Door developer on the other hand could knock off a pretty decent game in a few days/weeks and sell it for $20 a pop and laugh all the way to the bank. There are parallels with BBS Door Development that can be done on a shoestring and say writing ASP.NET web apps using free tools today. I can download the .NET SDK for free and compile my system. Web Matrix isn't a bad tool. I can deploy or sell my web solution or I could build .NET client applications using something like SharpDevelop. In other words, I can build the Doors of yesterday today with .NET and some cheap (free) tools. With Web Parts and SharePoint the experience is very much like what we went through with Major BBS development. You need a specialized environment (Windows Server 2003). You need a specific IDE (Visual Studio .NET although you might be able to use something like SharpDevelop). You need to develop right on the server or else put up with very complicated ways of remote debugging (trust me, debugging with PharLap and protected memory dumps was no picnic either). You need to know a lot of about how SharePoint works, what works (and what doesn't) and how to twist it to do your bidding. Development in SharePoint is not a walk in the park and can be expensive to setup and work with. Why am I telling you all this? I'm seeing the MBBS trend happen again but this time in the SharePoint space. No, people are not writing Door games for SharePoint (although that does entice my already overtaxed project list). There are however lots of niche products coming out and while some are great, some not so great, they're (for the most part) expensive. I'm also not talking about expensive as in comparing them to 1980s prices or through inflation but just the fact that a small widget (or set of widgets) that does something useful (say backup files, deploys changes to your SharePoint sites, provides better than basic workflow, etc.) is pretty costly for what it does. Don't get me wrong. I'm all for free enterprise and making money however I just look across the SharePoint tool and Web Part space and feel a little deja vu coming on. Again, there is an investment here and a return that companies want to see on that investment. However it just feels costly for a few Web Parts. Maybe it's me, maybe times have changed, but is it really worth thousands of dollars to a company to purchase an out of the box Recycle Bin for SharePoint? Sure there are free tools and low(er) cost Tools, Web Parts, and Solutions coming out as well but my gut feel is that when you direct someone to a commercial product more often than not it's going to cost a few bucks to get that tool. I look at just a sampling of a lot of tools and we're talking $5k, $10k, and up for many of them. Again, I'm not trying to paint everyone with a single brush and I guess cost is relative and in the grand scheme of things, some companies have that kind of money to invest and don't see it as a large outlay. I just feel like I'm back in 1980 where we, the MBBS developers, could charge 10 times more for a module than a BBS door just because we knew how to build them. I might be alone in my thoughts here but hey, it's my blog and I'll ramble if I want to. Catching up, CTP Madness, and PDC ranking I'm playing catch up today as things in the universe are just a little off kilter for me. The weekend was a bit of a bust with regards to getting my Remote SharePoint Explorer out as I was bogged down in re-installing (yet again) Windows XP. For the life of me, I couldn't get my SATA drive to boot as the master device so couldn't get my OS moved from a small 40gb drive to where it should have been in the first place, on a nice happy 200gb one. So I had to run out and grab a 200gb IDE which the system would recognize. Then I spent the better part of the day screwing around trying to figure out why it kept complaining that it couldn't find hal.dll (when it was clearly there). Finally I yanked the SATA connection out and all was well. Seems Windows XP got confuzzled over where it was looking for files. At boot time it was talking to my IDE, but then at install time it figured my SATA drive was in charge and thus couldn't continue. Installed XP on the IDE, replugged the SATA drive back in and that was that. Sometime around Sunday night I finally finished setting up Visual Studio for the umpteenth time. You can read the brochures and rally around products like Norton Ghost and Acronis Disk Image, but moving installed programs under Windows around just never works. There are always left over droppings of registry settings, files in common places, and configurations in Documents and Settings that screw up moving anything from drive to drive. On top of that, some kind of weird electrical oddity occured in our building at work so Monday was Snow Day for me and we were turned away at the doors (well, the few that didn't read the email on Sunday night saying the building was closed). So I tried getting some work done but just couldn't and ended up killing some brain cells with Halo 2 most of the day. Okay, with that out of the way I did find a very cool page on Channel 9 that handles all the grunt work of CTP madness for you. You know, when you try to install a Community Technology Preview of something but end up messing up your environment so badly (thank the maker for Virtual PC) that you don't know what to do. CTP Build Finder is a slick tool that lets you select a product (from the list of all the current CTPs) and it will show you what other CTP products it's compatible with. Very cool and very useful if you're planning to try out something like Indigo and Longhorn at the same time. Check it out here. Thanks to everyone who supported me (or felt sorry for me) and clicked on the PDC link. As far as rankings go, I'm tied in first place with 41 referrals (there are three different urls it's tracking, the original one, the original one with a "/" and the archived entry). Of course, referrals are not just the criteria for the contest as they'll be reading the blogs and deciding based on content and all that jazz. My pathetic attempts at humor will of course put me in last place here but hey, it's all fun until someone loses an eye. I do have a dilemma with the PDC contest though. The contest ends mid August while the early bird registration ends in July. The early registration will save you $500USD for the conference, which in Canadian money is like $10,000. So if I wait for the contest and lose, I'm costing my company an additional $500USD for the registration. What's a girl to do? Finally on a side note, there was an article on Slashdot about the rise and fall of blogs and how it's degraded into a personal rant fest and most people would like all bloggers to curl up and die. This post just contributes to that dribble so enjoy the wasted bandwidth and 2 minutes of your life you took reading this as you're not going to get it back. Send Bil to PDC This morning is special. It's special because I'm on my knees to the SharePoint blogging community (and anyone else who stumbled on this blog after googling for "contest"). The good geeks over at Channel 9 are sending some lucky nerd to this years Professional Developers Conference through a simple contest and I want to be that nerd. This blog post is my plea for you to send me to PDC. Why? Why send this lowly MVP to PDC and not say someone else who doesn't gripe about not going to TechEd every day of the event? It's not like I'm asking to be sent to Tibet for an all-expense paid vacation as a yak herder. This years PDC is in Los Angeles. No, PDC is no TechEd. It does not have the glitz, the traffic, the smog, the shopping malls or the proximity of Disney World, not to mention a comparable number of television channels. Yet PDC is like the Willy Wonka of IT. With it's hands on experience with the next version of SharePoint (hopefully), in-depth talks on the future of portals, and long all-night drinking sessions that touch on the very nature of Information Technology. How can you not send me there, at least to live vicariously through my blog entries that I'll surely post. Think of the precendent if you don't send Bil to PDC. The next time some Fortune 500 mega-conglomerate denies a geeks desire to seek knowledge and share wisdom, the IT organization will be left with no appeal. "Remember Bil" they'll say. My promise to the world, should you send me to PDC, is simple. - Daily updated photos of what's happening including anything so embarassing it will be blogged about for years to come - At least one morning where I'm hung over and have to give a presentation or speak publically and don't come off looking like a complete idiot - An in-depth interview in the PDC garage where you provide the questions with someone so close to SharePoint that we can only call him "Deep Throat" - A blog-by-blog posting several times throughout the day on the magical goings on and what I step in when it comes to SharePoint, development, and IT - A live demonstration of my Whack-A-Fitz concept - An answer to the age old question. Boxers or Briefs? So please, for the love of all that is managed code, send me to PDC. Click on the link below to do this. Really. Just one click. That's all. SharePoint Wrappers, a new tool for you, and more on TechEd that I didn't go to No SharePointy stuff today as I'm just spending most of my day re-configuring our VMs so calls to SPUtility.GuessLoginNameFromEmail will work. I do have some goodies coming up this weekend as a server move is going to keep me away from the office so I'm going to be VPCing at home on non-project work and have a few things to post on my SharePoint Wrappers project which provide an OO way to access SharePoint services from remote locations (like desktops). I'll be posting a blog and source update to a new tool called Remote SharePoint Explorer (for lack of a better name) which mimics what SharePoint Explorer from Navigo does, but you can run it from your desktop (doesn't have as much functionality as SharePoint Explorer does, but gets you as close as you can via Web Services). So keep an eye out for that this weekend. Fitz resurfaced from drinking and carrying on at TechEd, an event that I was unable to make it to (grumble, grumble, grumble, gripe, gripe, grip) and posted a few notes from the event. He's got a nice little block on Navigo Systems that have a solution for boolean type searches and, as Fitz puts it, is a much more powerful and flexible search option that can be a drop-in replacement for the standard search. Also check out SharePad and the FrontPage RPC stuff that the Interlink Group put together. It's an excellent toolkit and for those that want to build custom file handlers in their portals, it's the way to go. I took a look at it a couple of weeks ago and am just trying to see how to leverage it to do some SharePoint wizardry that we can't get out of the box. Great stuff. XmlWebPart ala carte The WebPartPages namespace and the various Web Parts that Microsoft has in them keeps puzzling me. While everyone is off at TechEd, some of us still have work to do (yeah, that was today's grumbling). I had previously tried to instantiate my own copy of a ListFormWebPart (because it's sealed so I can't inherit from it) and wrapping it up in my own custom Web Part. This failed miserably so here we are again. Now I'm bugged by the XmlWebPart. It's a great Web Part for rendering Xml without having to build your own XmlDocument and do transformations and all that jazz. Just set the Xml (or XmlLink) property and optionally the Xsl (or XslLink) property and voila, insto no-nonsense Xml rendering. Trouble is that I can't seem to do this on the fly. The Xml links have to be static. The best compromise I can find is to: - Grab the SPWebPartCollection from the current SPWeb (through a url to a Web Part Page I create which has an XmlWebPart on it) - Find the XmlWebPart on the page by walking through the SPWebPartCollection and matching up on GetType or something - Set the Xml (or XmlLink) property once the XmlWebPart is found - Redirect the user to the custom Web Part Page for happy viewing Or maybe I'm just trying to beat a dead horse and should just do my own dang transformation? I know Microsoft has to use the WebPartPages for their own Web Parts but why is it so difficult for anyone else to just tap in and leverage them? Man, a week without TechEd (and the bright Orlando sunshine) and it's been raining for a few days here in Cowtown. Sucks to be me. Rats... ...MSDN documentation fails me again. MSDN says public virtual int CollectionBase.Count {get;} Reflector says public int CollectionBase.Count {get;} Visual Studio says cannot override inherited member 'System.Collections.CollectionBase.Count.get' because it is not marked virtual, abstract, or override Kinda sucks when you're trying to write Unit Tests with TDD and you can't do something as simple as this: 26 [TestFixture] 27 public class PluginCollectionFixture 28 { 29 [Test] 30 public void UponCreationCountIsZero() 31 { 32 PluginCollection collection = new PluginCollection(); 33 Assert.AreEqual(0, collection.Count); 34 } 35 } 36 37 public class PluginCollection : CollectionBase 38 { 39 public PluginCollection() 40 { 41 } 42 43 public override int Count 44 { 45 get { return -1; } 46 } 47 } Oh well. At least I'm going to TechEd. Oh, wait. Never mind. NOT going to TechEd and tracking down a SharePoint bug Unfortunately due to project commitments I'm stuck here in Cowtown for the week instead of basking in Florida at TechEd 2005. So expect a week of bitching and griping with at least a blog a day where I mention how craptastic it is as I read my fellow bloggers entries about how fab it is down there (and if you're blogging from down there, raise a glass to the poor schmuck here in rainy Calgary). I'm also tracking down a potential bug with SharePoint document libraries and single quote characters. Thanks to the resources of my team (way to go guys) we may have uncovered something that needs a hotfix (at least it's critical enough for us to have one). I still need to confirm it as I can't seem to find anything in any MSKB I've looked through (both public and private) so stay tuned. Copy and Paste in Explorer View I've ranted before about Explorer view and how it basically screws with your version history. This is basically why I delete it on most document libraries I roll out. So if you copy and paste a document (or a series of documents) from one Explorer View to another here's what you get: - The document is copied across from source to destination - An entry in the version history is created in the destination library for each entry in the source document library - All versions point to the unique copies of the document in the destination library but the document itself is the latest version from the source document library YMMV. At least this is what I experience whenever I do it. Might be my infrastructure that is causing this but I've tried it with various browsers, end user operating systems, and service packs and get the same results. Try it out with your own setup and let me know if you get the same results. Update Note: Got an email from my fellow Canuck on the other side of our country Amanda Murphy about their product called Trace. Looks pretty nice and adds a cut/copy command to the menu and the ability to resolve conflicts (like mapping columns) when you paste a document into a new doclib. So check it out here. If you buy it remember to mention my name so I can get my $1 commission (kidding about that Amanda). Whack-A-Fitz Fitz has been pretty quiet lately but he always resurfaces and pops up like those damned Whack-A-Mole games from time to time. I wonder what they have him doing as he's been quiet for a few weeks now (and I prodded him a couple of times in email with no response, which basically means nothing but thought I would throw that in). Anyways, he's got a nice short post on this year's Professional Developers Conference and to keep an eye on the Office/SharePoint Development Track. He dropped a hint of this awhile ago but now it looks more solid. While I hate the fact that I can't attend Tech-Ed this year and drink heavily and hang out in the SharePoint Cabana, not even a free X-Box 360 will prevent me from getting my butt down to PDC. Looks like SharePoint is going to kick into overdrive and re-enforce the message big Bill gave about SharePoint being one of the key collaboration technologies in the Office Space and Microsoft will be letting us peek at what we can expect next year. Any ill effects if the crawler is removed? Yup, asking today instead of sharing. Ever have your SharePoint domain crawler account show up in every @$%!%!&*!@ list where there's a user lookup. I seem to have them in any of the sites we have spun up around here. A freaky looking account name that nobody seems to know who it is (ours is called spsSpider). So the question is, are there any ill effects removing the name from the profiles? I mean, the account is setup in the Central Admin and part of the admin group so it should have access. I guess when it "visits" the site to crawl it, it leaves its name droppings there just like any user but is there any problem removing the accounts profile from the SharePoint Profile Database. Does it come back when the next crawl happens? Is there any way to get rid of it so your users don't see it or maybe I should rename the account to Alan Smithee (heavy thud as nobody gets that joke)?
http://weblogs.asp.net/bsimser/archive/2005/06
CC-MAIN-2014-35
refinedweb
5,812
76.45
I worked with puppet (< 0.25) back in 2008/2009. We were able to deploy 200 servers from scratch and manage them. It worked fine. I'm now with a new customer and I'm pushing Puppet (and I'm also back to puppet on a side project). We're considering Puppet 2.6 to manage RHEL/CentOS 5 or 6 hosts. I'm "upgrading myself" to Puppet 2.6's new concepts and features. Anyway consider this for the sake of argument: - node server1.hostingcompanyAlpha.com -- hosted on a dedicated server at provider Alpha -- production - node server2.hostingcompanyBeta.net -- hosted on a dedicated server at provider Beta -- production - node staging.myprivatenetwork.priv -- hosted on my customer's private network -- staging/QA - node dev.myprivatenetwork.priv -- hosted on my customer's private network -- development server Those four nodes must host the same elements: - Apache HTTPD with multiple VHosts - PHP - Extra software ... There are a few differences between nodes: - Servers don't have the same capabilities (CPU/Mem/bandwidth): we need to tweak Apache's MaxClients settings on a per-host basis - We need to tweak PHP : displaying errors on 'staging' and 'dev' but hiding them on server1/server2 (ie: setting 'display_errors' to 'on' or 'off' in php.ini) - On development and staging/testing servers we need to change some of the VHosts definitions: add extra serverAliases, etc ... - server1, server2 and staging/dev must use different DNS servers (/ etc/resolv.conf) and RPM Mirrors (yumrepo{ }) I've read the following blog post: Back with puppet < 0.25, we'd use "global variables" (not even node inheritance). manifest/sites.pp had something like: $envname = 'prod' $envstr = '' $dns_servers = [ '10.0.0.42', '10.10.1.42' ] import "classes/*.pp" node 'server1.hostingcompanyAlpha.com' { $ = 300 $yum_base = " $dns_servers = [ '1.2.3.4', '1.2.4.4' ] # Hosting Co.'s resolvers include mywebserver } node 'server2.hostingcompanyBeta.net' { $ = 200 $yum_base = " $dns_servers = [ '8.8.8.8.8' , '8.8.4.4' ] include mywebserver } node 'staging.myprivatenetwork.priv' { $ = 50 $php_display_errors = 'on' $envname = 'staging' $envstr = 'stag' include mywebserver } node 'dev.myprivatenetwork.priv' { $ = 20 $php_error_reporting = "E_ALL" $php_display_errors = 'on' $envname = 'dev' $envstr = 'dev' include mywebserver } manifests/classes/mywebserver.pp would contain somethine like this: import "php" import " class mywebserver { include centos # which would in turn include modules 'yum' and 'resolv' include include php include php::apc define { 'mysite' : servername => "", documentroot => "/var/www/html/mysite.com", } } modules/ had: # defaults $ = 150 $ class { file { "/etc/ : content => template(" # which would then use $ } } We also had a = "*", $port = 80, $servername, $documentroot, ...) define which would write VHosts files based on the following template: <VirtualHost <%= ip %>:<%= port%>> ServerName <%= servername %> <% if envstr != '' -%> ServerAlias <%= envstr %><%= servername %> <% end -%> <% if envname != 'prod' -%> php_admin_value display_errors on <% end -%> ... </VirtualHost> modules/yum/manifests/init.pp had: # defaults $yum_base = " class yum { yumrep { "os" : baseurl = "${yum_base}/RPMS.os", } } modules/php/manifests/init.pp: php_memory_limit = "32M" php_error_reporting = "..." php_display_errors = "off" and so on ... (huge list) with php.ini.erb : display_errors = <%= php_display_errors %> error_reporting = <%= php_error_reporting %> ... And so on ... If you haven't dozed off already, you get the idea. :-) That way we could provide safe/sane default settings which could easily be tweaked on a per-host or per-class basis. Parameters were quite easy to track and were in the code (which is stored in SVN). There might be some scoping problems from time to time, I have to admit. But once we had our "pattern", things would be smooth. I do now have trouble understanding how I should proceed with Puppet 2.6 (and 2.7 in the future), if I'm to avoid global variables. Parameterized class are seemingly not an option and, from what I understand and read, are more of an alternative to class inheritance (Nigel Kersten's comment on issue #13537): you don't need to inherit from a parent class if you want to tweak some of the ressources, you could just pass parameters to your class in the first place. Indeed I do not see how parameterized classes would help me get such flexibility. Or I'd have to "pass down" all the variables I need from my node/host down to every class. Cumbersome, at best. Parameterized class a good for "coarse grained" tuning, but not "per host" tweaks" on a variety of items. Next option is an External Node Classifier which I would have to develop and maintain to fetch/rewrite/generate/whatever variables I need. I could probably get what I want but I'd end up having an extra tool to maintain (and train people to use). A "middle of the road" option would be Hiera and create a hierarchy base on the '$hostname' fact (yaml backed for 'custom' values, and probably a puppet backed to fetch the defaults ? I see that you can also pass a default value to hiera()) . I assume that's the way to go. Well... I'm a bit puzzled by all this. :-) I'd like some input from you guys (and gals !). I'll try and install Hiera (nice, more stuff I need to package !) anyway. By the way what's the target version (2.8 ?) for Hiera's integration into Puppet ? Thanks !
https://grokbase.com/t/gg/puppet-users/124rhvfhjp/moving-from-puppet-0-25-to-puppet-2-6-global-scope-variables
CC-MAIN-2022-21
refinedweb
849
59.6
For as long as I’ve been working on Sitecore there has been this really annoying issue where setting the link manager to include server url and running under https will cause urls to be generated with the port number included. e.g. which naturally you don’t actually want. Aside: This issue was finally fixed in Sitecore 9.1 To overcome this there are a few methods you can take. Method 1 – Set the Scheme and Port on you site defenition This is possibly the smallest change you can make as it’s just 2 settings in a config file. Setting the external port on site node to 80 (yes 80) tricks the link manager code into not appending the port number as it does it for everything other than port 80. <configuration xmlns: <sitecore> <sites xdt: <site name="website"> <patch:attribute</patch:attribute> <patch:attribute/sitecore/content/MySite</patch:attribute> <patch:attributehttps</patch:attribute> <patch:attribute80</patch:attribute> </site> </sites> </sitecore> </configuration> What I don’t like about this method though, is your setting something to be wrong to get something else to come out right. It’s all a bit wrong. Method 2 – Write your own link provider The second method which I have generally done is to write your own provider which strips the port number off the generated URL. For this you will need: 1. A patch file to add the provider: <configuration xmlns: <sitecore> <linkManager defaultProvider="sitecore"> <patch:attribute <providers> <add name="CustomLinkProvider" type="MySite.Services.CustomLinkProvider, MySite" languageEmbedding="never" lowercaseUrls="true" useDisplayName="true" alwaysIncludeServerUrl="true" /> </providers> </linkManager> <mediaLibrary> <mediaProvider> <patch:attribute MySite.Services.NoSslPortMediaProvider, MySite </patch:attribute> </mediaProvider> </mediaLibrary> </sitecore> </configuration> 2. A helper method that removes the ssl port namespace MySite { /// <summary> /// Link Helper is used to remove SSL Port /// </summary> public static class LinkHelper { /// <summary> /// This method removes the 443 port number from url /// </summary> /// <param name="url">The url string being evaluated</param> /// <returns>An updated URL minus 443 port number</returns> public static string RemoveSslPort(string url) { if (string.IsNullOrWhiteSpace(url)) { return url; } if (url.Contains(":443")) { url = url.Replace(":443", string.Empty); } return url; } } } 3. The custom link provider which first gets the item url the regular way and then strips the ssl port using Sitecore.Data.Items; using Sitecore.Links; namespace MySite { /// <summary>Provide links for resources.</summary> public class CustomLinkProvider : LinkProvider { public override string GetItemUrl(Item item, UrlOptions options) { // Some code which manipulates and exams the item... return LinkHelper.RemoveSslPort(base.GetItemUrl(item, options)); } } } 4. The same provider for media using Sitecore.Data.Items; using Sitecore.Resources.Media; namespace MySite { /// <summary> /// This method removes SSL port number from Media Item URLs /// </summary> public class NoSslPortMediaProvider : MediaProvider { /// <summary> /// Overrides Url mechanism for Media Items /// </summary> /// <param name="item">Sitecore Media Item</param> /// <param name="options">Sitecore Media Url Options object</param> /// <returns>Updated Media Item URL minus 443 port</returns> public override string GetMediaUrl(MediaItem item, MediaUrlOptions options) { var mediaUrl = base.GetMediaUrl(item, options); return LinkHelper.RemoveSslPort(mediaUrl); } } } What I don’t like about this method is it’s messy in the opposite way. The port number is still being added, and we’re just adding code to try and fix it after. Credit to Sabo413 for the code in this example Method 3 – Official Sitecore Patch Given that it’s Sitecore’s bug, it does actually make sense that they fix it. After all people are paying a license fee for support! This simplifies your solution down to 1 extra patch file and a dll. What’s better is as it’s Sitecores code they have the responsibility of fixing it, if it ever breaks something, and you have less custom code in your repo. You can get the fix here for Sitecore version 8.1 – 9.0. So this may leave you wondering how did Sitecore fix it? Well having a look inside the dll reveals they wen’t for method 2.
https://himynameistim.com/2019/07/02/removing-port-443-from-urls-generated-by-sitecore/
CC-MAIN-2020-40
refinedweb
657
54.93
The above code works fine (in debug and non debug mode).The above code works fine (in debug and non debug mode).Code:#include <iostream> using namespace std; class Test { public: void immortal() { cout << "I am Immortal!" << endl; } }; int main() { Test* t = new Test; t->immortal(); delete t; t->immortal(); return 0; } I have a similar situation in my project. Where an object gets deleted, and then a successful call to it's method is made. But when that method tried to access some data member (in this case an integer) my program crashed! How to detect where the object gets deleted? There is no obvious hint form. Also function calls working fine confuse me more
http://cboard.cprogramming.com/cplusplus-programming/101631-easy-way-find-**-deleted.html
CC-MAIN-2015-27
refinedweb
116
76.01
Why use hooks? Hooks are a new feature in React. They are an excellent way to share stateful logic between components. They are also incredibly composable which fits in great with React since React is all about composition. Please see the hooks documentation for more information on the basics of hooks. I will also list some other great resources at the end of this post. Rules to be considered a custom hook - The name of the custom hook needs to start with use like useState. - The custom hook can call other hooks. - The custom hook needs to follow the rules of using hooks, this being only calling hooks from the top level of the function only. You can not call hooks from within conditionals, loops, or nested functions. Basic example Here is a simple and trivial example to get us started. This is a custom hook called useCounter. A user of this hook can create a counter with ease by passing in the initial count and then using the count and functions returned. I first have the use of the custom hook in a Counter component. All I have to do is invoke it and I get the state and functions I need. import React from 'react' import useCounter from './useCounter' const Counter = ({initialCount}) => { // here is the invocation of useCounter // I can pass in the initial count // It returns to me the count as well as two functions to help me change it const { count, increment, decrement } = useCounter(initialCount) return ( <div> <button onClick={increment}>Increment</button> <h1>{count}</h1> <button onClick={decrement}>Decrement</button> </div> ) } Here is the implementation of useCounter. It follows the rules as stated above. It begins with use and calls other hooks from within it. The other hooks are called at the top level. I could have easily included this within the Counter component, but it is super useful to be able to extract the logic and state into a custom hook when logic gets complicated or needs to be reused. import React from 'react' const useCounter = initial => { const [count, setCount] = React.useState(initial) const increment = () => { setCount(c => c + 1) } const decrement = () => { setCount(c => c - 1) } return { count, increment, decrement, } } Here is another example. This one uses useEffect as well as useState. This hook could be imported anywhere you need a clock in your application. You would only need to invoke it and then clock would always hold the current local time. import React from 'react' const useClock = () => { const [clock, setClock] = React.useState(new Date().toLocaleTimeString()) React.useEffect(() => { let intervalId = setInterval(() => { setClock(new Date().toLocaleTimeString()) }, 1000) return () => { clearInterval(intervalId) } }, []) return { clock, } } Hooks composition Thus far in this article, I have shown custom hooks that use the base hooks of useState and useEffect which React provides. Hooks can call other custom hooks as well! This leads to a powerful composition of hooks pattern. Below is an example of using a custom hook within another custom hook. It easily could have been implemented in one hook, but hopefully it demonstrates composing them. import React from 'react' const useIsEmpty = () => { const [count, setCount] = React.useState(0) const empty = React.useMemo(() => count === 0, [count]) const increment = () => { setCount(x => x + 1) } const decrement = () => { setCount(x => x - 1) } return { empty, increment, decrement, } } const useList = () => { const [list, setList] = React.useState([]) const {empty, increment, decrement} = useIsEmpty() const addToEnd = (newItem) => { setList(l => [...l, newItem]) increment() } const removeLast = () => { setList(l => [...l.slice(0, l.length)]) decrement() } return { list, addToEnd, removeLast, empty } } Try out hooks today! See what you can do with hook. Try implementing something you would normally do in React but with hooks. - Check out this list of hooks to see what others are doing. - Check out this great post by Tanner Linsley on hooks - Also have a look at this post by Dan Abramov. He has been posting lots of awesome content and people have been helping to translate it into many languages! Thanks for reading! Top comments (1) Hi Christian, Tanner's link is not working. I got some good resources I would recommend considering the article topic if you would like.
https://dev.to/chriswcs/-basics-of-making-custom-hooks-in-react-5f6g
CC-MAIN-2022-40
refinedweb
678
57.57
Could using the on chip EEPROM memory be a part of a solution? Any tricks / ideas out there? A few weeks ago we had the discussion how to put the sourcecode of a sketch into the Arduino. This ended in adding a unique reference to the source in the code by means of an UUID which is in fact just a 16 bytes unique number. See - - for the whole discussion.This technique might be usefull for you. I have not tried to access the UUID array from the sketch, should be possible as it is just access to progmem ...Rob #include <EEPROM.h>static int boardIdBaseAddress = 0; // or whatever you wantunsigned); if ((b1 == 0xff) && (b2 == 0xff) && (b3 == 0xff) && (b4 == 0xff)) { b1 = random(256); b2 = random(256); b3 = random(256); b4 = random(256); EEPROM.write(boardIdBaseAddress, b1); EEPROM.write(boardIdBaseAddress + 1, b2); EEPROM.write(boardIdBaseAddress + 2, b3); EEPROM.write(boardIdBaseAddress + 3, b4); }} Problem with the random generator is that it is not so random at all, it is an algorithm and unless every sketch has an unique seed you will get the same signature for every board. Furthermore if your boards have used the EEPROM before (or in a factory test who knows) your code will probably not write your signature, so you need an erasor sketch that writes 0xFF to EEPROM first otherwise previous values could be identical ... AVRdude can also do that.It is quite hard to make this process automatic AND robust (failproof). I think I would use some makefile with AVRdude to write the signature to EEPROM after uploading the sketch, and that makefile would increment the EEPROM file every time called. It can keep track of the numbers applied on which day etc. Or let some SED command change a signature in the source to some sequence number.How many boards do you intend to label this way?Rob PPS: Is the any pre-defined usage for the EEPROM bytes, or does the sketch have full reign over every available byte in the EEPROM? The reference does suggest using randomSeed(analogRead(0)) (or some unused analog input). QuoteThe reference does suggest using randomSeed(analogRead(0)) (or some unused analog input).On a 328 you also have analog( to measure internal temperature, but it is not reliable random - - but it might be good enough for your application. It won't cost you a free pin as it is internall so allways free....If your application has a ethernetshield you might consider fetching the serial number from a simple PHP script on a local website or a randomnumber from. Imho: Sometimes it is much easier to solve things with some extra tools than try to build it all in one sketch. static int boardIdBaseAddress = 0;static int timer = 0;static byte b5 = 0;static byte b6 = 0;static byte b7 = 0;static byte b8 = 0;unsigned); b5 = EEPROM.read(boardIdBaseAddress + 4); b6 = EEPROM.read(boardIdBaseAddress + 5); b7 = EEPROM.read(boardIdBaseAddress + 6); b8 = EEPROM.read(boardIdBaseAddress + 7); if ((b1 != 'A') || (b2 != 'P') || (b3 != 'P') || (b4 != '1') || ((b5 == 0xff) && (b6 == 0xff) && (b7 == 0xff) && (b8 == 0xff))) { b1 = 'A'; b2 = 'P'; b3 = 'P'; b4 = '1'; randomSeed(timer); b5 = random(256); b6 = random(256); b7 = random(256); b8 = random(256); EEPROM.write(boardIdBaseAddress, b1); EEPROM.write(boardIdBaseAddress + 1, b2); EEPROM.write(boardIdBaseAddress + 2, b3); EEPROM.write(boardIdBaseAddress + 3, b4); EEPROM.write(boardIdBaseAddress + 4, b5); EEPROM.write(boardIdBaseAddress + 5, b6); EEPROM.write(boardIdBaseAddress + 6, b7); EEPROM.write(boardIdBaseAddress + 7, b8); }}void loop(){ // ... if (computer_asks_for_ID) { setupEeprom(); // ID will be in b5, b6, b7, b8 send_to_computer(b5, b6, b7, b8); } // ... timer++;} Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php?topic=65577.msg480714
CC-MAIN-2017-04
refinedweb
630
67.15
Blog Map SharePoint 2010 exposes list data via OData. I’m currently working on an article around SharePoint and OData. As part of this effort, at various points, I’ll blog some of my intermediate samples. This post details the minimum number of steps to query a SharePoint list using OData. This blog is inactive.New blog: EricWhite.com/blogBlog TOCThis post is one in a series on using the OData REST API to access SharePoint 2010 list data. By far, the easiest way to get started with SharePoint development is to use the 2010 Information Worker Demonstration and Evaluation Virtual Machine that runs using Hyper-V. The instructions in this and future related posts will be for that virtual machine. If you have setup your own development environment with a different machine name, it is easy enough to adjust these procedures to work with your own servers. 1. Download, extract, and boot the virtual machine. The download includes detailed instructions. 2. Log into the virtual machine. Use the username brads. The password is pass@word1. 3. Create a new list. Name the list ‘Inventory’. Create a two new columns: Description, and Cost. Make the type of the Description column be ‘Single line of text’. Make the type of the Cost column be ‘Currency’. Enter a couple of rows of sample data.After completing this task, my list looked like this: 4. Start Visual Studio 2010 in the virtual machine. 5. Create a new project. Click File -> New -> Project. Select a directory for the project. Set the name of the project to Gears. The New Project dialog box will look something like this: 6. Right click on the References node in the Solution Explorer window, and click Add Service Reference. Enter for the address. Change the namespace to Data. The Add Service Reference dialog box will look like this: Click OK. 7. Copy and paste the following code into Program.cs: using System;using System.Collections.Generic;using System.Linq;using System.Net;using Gears.Data; namespace Gears{ class Program { static void Main(string[] args) { TeamSiteDataContext dc = new TeamSiteDataContext(new Uri("")); dc.Credentials = CredentialCache.DefaultNetworkCredentials; var result = from d in dc.Inventory select new { Title = d.Title, Description = d.Description, Cost = d.Cost, }; foreach (var d in result) Console.WriteLine(d); } }} 8. Press F5 to run. If everything worked, you will see something like the following:
http://blogs.msdn.com/b/ericwhite/default.aspx?PageIndex=6
CC-MAIN-2015-32
refinedweb
396
62.54
"You must remember this, A kiss is just a kiss, a sigh is just a sigh. The fundamental things apply. As time goes by…" Fundamental to TechNet are the contributors. As time goes by, their contributions become enriched and read by generations to come. They will always be remembered, and potentially for a very long time! As we draw to an end of January submissions for TechNet Guru, let's remind ourselves of some of the amazing [Windows development related] gold medal winners of 2013. Over a series of blogs I will highlight some of the Gold medal winners in the categories I help to judge, I'll list some of the comments and some further analysis – or just plain gushing! There's a lot of winners to cover, and I want to give each some time and don't want to lose you, so I'll break it into chunks, starting with the first three months of May, June and July. This sensational article shines as an example of how to entice the next generation of developers into the world of game design. Articles like this remind me of when I was just 14 (1984) and the hours I'd spend copying in literally many full pages worth of 0s & 1s (binary) of a complete 3D maze game into my Sinclair ZX Spectrum, from magazines like Crash to make my first computer games. The pain. The pleasure! How times change. Here is a great '84 interview with Sinclair, discussing their newest technologies. Some things never change! This was a great example of exactly the kind of thing TechNet readers love to read, and Microsoft love to support. This article provides a great example and downloadable project for using the ISpellCheckerFactory and ISpellChecker interfaces. The factory is for determining which language to choose and instantiating the appropriate spell checker. The article itself also shows quite a nice example of how to create wrappers over Win32 libraries and APIs. To learn more about the Spell Checker API, start here. Senthil was the only WP entrant in the first month's competition, which was not a full month anyway. However, he still won gold, for this very short entry. The reason is because this is an example of how simple your contribution can be. If it adds value, saves someone time, answers someone's need, it's a TechNet "most wanted". Even this small article has been altered 13 times since conception. Note the comments, he even made it to the front page of TechNet Wiki, as well as a featured blog! This is another great primer for hobby and professional developers alike to salivate over. Again taking me back to my early youthful "experimental" days 🙂 By which, I mean shoving wires into the serial port of my beloved ZX Spectrum, in an attempt to wire up my house to my home computer (akin to today's Interior Automation projects). My earliest 1980s experiments ended in tears, with a pop and a puff of smoke, and weeks playing no games, waiting for repair… o_O This article makes use of that very useful SerialPort Class. Serial communications sound simple, but there are many pitfalls you have to consider and protectively code for. In my experience some considerations are aborted sessions, break in communications, missing or corrupted information. If you have control of both ends, like a microcontroller, it is therefore often a good idea to implement a protocol on top, like a basic version of TCP/IP, a "handshake" protocol. A simple but perfect example of what is essentially an old and underused, but very useful method of communicating both within an application, and separate processes across networks. Named Pipes can be duplex, and work over a network, even if the servers have the same name. Named Pipes also support impersonation which allow for permissions. This article explains the NamedPipeServerStream and NamedPipeClientStream classes and how to use them to perform simple communication. This really is a great primer for the simply awesome Microsoft.Phone.Maps namespace. I've had a lot of fun with the Windows Phone Maps API, you couldn't want for a better set of such powerful classes to turbo boost your Windows development. Tiziano gives a great explanation in this article. If you want to read more, start here. This is a very clever article, well worth the gold. It basically explains how to rebuild your own relational database from data stored in XML files. If you wanted an MS SQL solution, I'll slip in a mention for FOR XML and OPENXML, providing an ability to import and export data as XML. This was a worthy winner, because one thing this shows is both how flexible WPF and SIlverlight can be, and also how deep sometimes you have to go to alter something as simple as the ScrollViewer thumb. For more examples, here are a few from me: - WPF Styles and Control Templates – Made in code - Overriding SystemColors in WPF - WPF Themes – Using and tweaking This is such a useful article, useful in many scenarios. For example, if we were to crawl the Wiki for keywords, for converting into links to portals, or description pages, this explains a great way to do that. This kind of article gets loved for generations, probably lauded and taught in classrooms across the globe. This is TechNet at it's best, thanks Reed! I am also a big fan of the AdomdClient namespace, as it has often come to my aide, pulling SSAS data into my Silverlight charts, just as Jaliya discusses. This really is the best of both worlds for data mining and navigating multidimensional data models (like time-stamped snapshots of table data for trend analysis) Another good launch page into this namespace is in TechNet Library: Developing with ADOMD.NET Back for another gold in June was Senthil, with a nice tutorial on the Location API. Again to me, this triggers off the desire to begin another pet project. This is such a useful article which again I have found personally rewarding. I must also point anyone interested in this technology as much as I am to the most awesome resource of all, MSDN's Windows Sensor and Location Platform! As RC says, a classic demo subject indeed. RenderTransform makes life very easy in this respect. No doubt just a coincidence but maybe Ken took inspiration from me for this idea, as earlier that month, I had published my own example in an MSDN sample project about Button content: World Clocks – Animated Icon (shown in a RadioButon) – Animated Button Icon Welcome Magnus to the community with a great example of a commonly sought solution on the forums. Both Magnus and I have answered countless similar questions on the WPF forum (where I am also a moderator) and if this is of help to you, please also review some of my similar MSDN Gallery sample projects: - Creating & Grouping Expanders – Just One Expanded (Accordion like) and Animating - Grouping Buttons (as ToggleButtons) and Changing IsChecked Background Colour - The "Select all" CheckBox in column header - Synchronising Listbox selected items - How to manage available/selected lists. Simple examples. MVVM and Code behind - Hiding Selected Items From Other Lists (ComboBoxes) - TreeView SelectedItem TwoWay MVVM (plus expand to selected and close all others) In fact I have over 80 WPF related projects in MSDN Gallery that you can download and learn from. I hope you find them useful. Let's finish with a final tally of Guru Gold winners for May, June and July, in order of medals, then simply mentions: Reed Kimble – [VB May] [VB June] [VB July] Senthil Kumar (isenthil) – [WP May] [WP July] Dan Randolph – [C# June] Tiziano Cacioppolini – [WP June] Gaurav Khanna – [WPF June] Jaliya Udagedara – [C# July] Ken Tucker [Apps July] Magnus MM8 (Magnus Montin) [WPF July] Congratulations to all the authors above. We are very lucky to have had the first three months of our competition supported by such big hitters in the community. In my next blog of this series I will look more closely at the Guru competitions in August and September of 2013, when everyone has settled into the idea of this new competition and more gems arrived for our delectation! For a complete list of competitions, winners and contributions by category, START HERE! Best regards, Peter Laker That clock’s a GIF! I thought it was an embedded app at first. =^) Excellent post! Good articles like these TechNet Guru winners "always should be shared" to help inspire us produce more articles in the best practices and solutions. I think I’m officially writing a blog post about a blog post, so that’s a little strange. =^) Anyway Blogged about this blog: congratulations!!!! History of TechNet Guru First of all, if you don’t know the history of TechNet Guru, then you
https://blogs.technet.microsoft.com/wikininjas/2014/01/29/technet-guru-gold-winners-in-windows-development-may-july-2013/
CC-MAIN-2017-04
refinedweb
1,475
58.32
Tron 2.0: FAQ/Walkthrough by Der Schnitter Version: v1.1.0 | Updated: 2003-10-23 | Original File Tron 2.0 Walkthrough by Thomas "Schnitter" Leichtle Version v1.1.0 -- 16th October, 2003 // A variant of the Hello World program #include <iostream> using namespace std; int main() { cout << "Greetings, Programs!" << endl; return 0; } ******************************************************************************* Table of Contents ******************************************************************************* 01. Introduction 02. Legalese 03. Basic Game Mechanics a) The Stats b) Memory and Utilities c) Additional Info 04. Primitives, Subroutines and Important Objects a) Primitives and Combat Subroutines b) Defense Subroutines c) Utility Subroutines d) Objects 05. Characters 06. Enemies 07. Notes for the Walkthrough 08. Walkthrough a) Unauthorized User a.a) Program Initalization a.b) Program Integration b) Vaporware b.a) Lightcyclearena and Gridbox b.b) Prisonercells b.c) Transportstation b.d) Primary Digitizer c) Legacy Code c.a) Alans Desktop PC d) System Restart d.a) Packet Transport d.b) Energy Regulator d.c) Power Occular e) Antiquated e.a) Testgrid e.b) Main Processor Core e.c) Old Gridarena e.d) Main Energy Pipeline f) Master User f.a) City Hub f.b) Progress Bar f.c) Outer Gird Getaway f.d) fCon Labs / Ma3a gets saved f.e) Remote Access node g) Alliance g.a) Security Server g.b) Thornes Outer Partition g.c) fCon Labs / Data Wraith Preparation g.d) Thornes Inner Partition g.e) Thornes Core Chamber h) Handshake h.a) Function Control Deck i) Database i.a) Security Socket i.b) Firewall i.c) fCon Labs / Alan Lost i.d) Primary Docking Port i.e) fCon Labs / Security Breach i.f) Storage Section j) Root of all Evil j.a) Construction Level j.b) Data Wraith Training Grid j.c) fCon Labs / The fCon team takes over j.d) Command Module k) Digitizer Beam k.a) Not compatible 09. Subroutines from Bins by Sublevel and COW locations 10. Lightcycle Game mode a) Power Ups b) Additional Information c) The Stats of the Lightcycles d) List of the Racetracks e) What is unlocked when 11. FAQ 12. Credits 13. Changelog 14. Version History ******************************************************************************* 01. Introduction ******************************************************************************* Greetings Programs, the information contained within this document should help you on getting through the game of Tron 2.0, which carries on the tradition started in the Tron movie of some 20 years ago. For those that have not seen the movie, I recommend that you do this before playing the game. It is not a must, but it will help you get into the mood and design that is carried on in the game. Also it will help you deal with the background info you gain during the game, since there are a few references to the original movie or it's char- acters which might not be understood easily if you've not seen it. This is also my first ever Walkthrough, so I do hope that you forgive the mis- takes I might make here and help me out with it. I will especially need help on some of the terms used in the original, English version of the game, since I live in Germany and did use the German version of the game which has all dialog synchronized and all texts translated. ******************************************************************************* 02. Legalese ******************************************************************************* This walkthrough is a work I have poured many hours into and I do hope that anyone that wants to copy or rework this guide will acknowledge this by giving credits to all the people that helped me and to me, of course. Also it would not be very nice if you sold this walkthrough since it is intended as a free work to be shared with all people who need it. Sadly I will not be able to stop anyone from misusing it, but know this, you shall be cursed by all people who gave their contribution hereto. ******************************************************************************* 03. Basic Game Mechanics ******************************************************************************* Tron 2.0 is at it's core a standard FPS game, with a few nice twists concerning weapons and character enhancement. These twist make the game that much more interesting. But it also needs some explanation as to how the system of character evolvment works. a) The Stats - Build Points These work in much the same way as experience points do in RPG. Each time you gain 100 Build points you will be able to update your stats. At each update you will be given 7 points which you can allocate to the stats. You can allocate a maximum of 20 points to any of the five stats and during the game you will earn enough Build points for 63 update points (not enough to bring all stats to full). - Health This stat should be pretty self explanatory. Each update point will add five health points to your maximum health. If you have allotted 20 update points to it you will gain an extra 100 health point bonus. - Energy This, too, should be self explanatory. The upgrade works just like with the health stat (+ 5 energy/point - 100 point bonus at 20 upgrade points). - Weapon Efficiency This stat will decrease energy useage of your weapons if you spend points on it. This will be useful later in the game when you are able to upgrade weapons with subroutines like 'Corrosion' or 'Megahurtz' as they will ad- ditionally increase the energy useage of weapons. - Transferrate Increasing this stat will lower the time it takes to download subroutines, e-mails and permissions from bins or core dumps. - Processor A longer bar here means that it will not take as much time to defragment damaged memory sectors, port subroutines or disinfect them. - Upgrade recommendations for the stats I would suggest that the main part of the upgrade points should be spent primarily on the first three stats. This is because I never found Processor or Transferrate as practical as the others. I do know that opinions may differ here, but I usually had enough time to port, download or disinfect, but sometimes I ran out of energy or health extremely fast. b) Utilities and Memory - The memory You can see how much memory you have by pressing 'F1' and looking at the empty slots in the outer ring there. Depending on the system you are in the configuration and amount of this memory will vary. Memory is used for mounting subroutines and you can change those subroutines at any time, even in the midst of a battle since pressing 'F1' will also pause the game. - The porting Icon In the upper left you will find a small circle with the same symbol in it as unported subroutines display. To port a subroutine just drag and drop it here. - The defragmentation Icon In the lower middle of the ring you will find the Icon for defragmentation. Your memory can only become fragmented during battle and if you have a few empty memory blocks. Should your memory become fragmented just drag and drop the innards of the affected memory block here. - The virus killer icon If one of your subroutines got infected just drag and drop it over here and wait for it to become cleansed of the impurities. c) Additional Info - Virus infection Should one of your subroutines become infected try to disinfect it as soon as possible. Also see if you can seperate it from other subroutines adjacent to it by creating at least one empty memory slot between the infected and the uninfected subroutines. This usually means unmounting a subroutine, but it will also keep the Virus from spreading. ******************************************************************************* 04. Primitives, Subroutines and Important Objects ******************************************************************************* Primitives are the basic shape weapons and you can activate that form at any time. If you want to use any of the two upgrades however you will have to in- stall the according combat subroutine in your memory. The memory blocks you need for a Subroutine are determined by it's version. Alpha subroutines need 3 blocks, beta need 2 blocks and gold ones need only one block of memory. The Objects section will describe the few objects you can pick up with your action key. a) Primitives and the Combat Subroutines - Disc Primitive The Disc Primitive is the first weapon you will receive in the game. It will also be the one most used, since it is in it's basic form the only weapon that does not use any energy. Also coupled with the right Utilities it will be a very formidable and powerful weapon. - Disc Sequencer (Subroutine) The Sequencer subroutine let's you throw 2 or more Discs in quick sucession for an only miminal energy cost. The only drawback is that you can only block after all the Discs have returned to your hand, which can sometimes get you into grizzly situation, so choose wise when to use it. - Disc Cluster (Subroutine) The Cluster Disc is good for groups of enemy since it will work almost like a fragmentation grenade. Sadly there are not that many large groups in the game so it's usefulness might not be exploited to it's fullest. - Ball Primitive The Ball Primitive (and one of it's subroutines) is the weapon used by the Z-Lots throughout the game. It's low accuracy coupled with that it uses energy made it a weapon I did not use very often. - Ball Launcher (Subroutine) The Ball Lauchner can be a very deadly weapon, especially on the higher version levels with it's higher rate of fire. It is fairly accurate also. Since it also does some splash damage it is also effective against groups of enemies. - Ball Drunken Dims (Subroutine) This is the most powerful incarnation of the Ball Primitive, it's accuracy will improve by increasing the version. Does splash damage and can be put to good use against groups. - Rod Primitive (aka Prod) The Rod, in it's most basic incarnation is a weapon with which you have to get close to the enemy, since it can only be used in close combat. This weapon should only be used in rare circumstances, where stealth is possible and enemies are far and between. Energy usage is also very high for this weapon. - Rod Suffusion (Subroutine) You want a shotgun, here you have a shotgun. Powerful at short ranges, gets more accurate in higher versions and more powerful. If you don't know how shotguns are used, well, then this weapon is not for you. :) - Rod LOL (Subroutine) Did I hear anyone say sniper? Well, here is your all purpose, kill in one headshot sniper rifle. High Energy useage, but also very powerful against single enemies. Kills most opponents in one shot. Also good to take out Finders at long ranges. Be careful if you pair a gold LOL up with a gold Corrosion and Megahurtz, because you will be down to only 3 or 4 shots with this weapon, although they will be very powerful. - Blaster Primitive Standard Machinegun, high energy useage, low accuracy, low power. Should go to the trash bin. 'Nuff said. - Blaster Energy Claw (Subroutine) You know you always wanted to be a vampire. This subroutine transforms you into one. Works on single enemies as well as groups (if they are hugging each other). It transforms the health it takes from the enemy to energy for you. Therefore this weapon will only use energy once the enemy runs out of health. It is a very useful weapon against the shield ICPs and also against most other single enemies as the enemy will not be able to attack as long as the Claw grips him. - Blaster Prankster Bit (Subroutine) This is the single most powerful weapon in the whole game, which is why you only get it in the last part of it. Fires a guided missile that will create a black hole at it's impact point. For this weapon to work you will have to hold onto the mouse button. Steering is the same as with the Disc. If you release the mouse button before it impacts into something it will explode prematurely. The lethality radius will increase in higher versions . Only drawback is the high energy useage, so if you want to use it often better have a white energy patch routine nearby. :) b) Defense Subroutines - Submask Your standard helmet. Depending on the version it will give you either 10%, 12% or 15% of protection. - Encryption Standard Body Armor, offers protection of 15%, 25% or 30%. - Peripheral Seal Armor for the arms, offers protection of 8%, 9% or 12%. - Support Safeguard Armor for the legs, offers protection of 8%, 9% or 12%. - Base Damping Armor for the feer, offers protection of 5%, 6% or 8%. - Viral Shield Helps protect your subroutines from becoming infected with a virus. Protection offered will be either 30%, 50% or 75%. c) Utility Subroutines - Fuzzy Signature This will help you sneak up on enemies. Installing it means your steps make 25%, 50% or even 75% less noise than usual. - Power Block This subroutine is only useful for the Disc Block, but do not write it off yet, since it will be very useful. The Disc might (or even should be, IMHO) your primary weapon throughout the game. This means you will also have to master the art of blocking. And returning an enemy Disc to it's owner and doing damage by this is no bad thing, I would say. In it's gold version it is powerful enough to take out most enemies with one Power Block. - Megahurtz Increases the damage potential of weapons, but also increases their energy useage. This is helpful with every weapon, but it will be best coupled with the Disc Primitive. Since the Disc Primitive uses no energy you get a damage boost on it for free. :) - Corrosion This will 'poison' the enemy if you hit him with a weapon. The time the enemy will stay posioned will increase with version. Again it will also increase energy useage of a weapon, which makes this another good add-on for the Disc Primitive. - Primitive Charge This Subroutine will increase the damage your Primitives (!) do slightly. Since no energy is needed to employ it, it will be a very useful addition indeed if you fancy on using the Primitives only or mostly (the Disc comes to mind). - Triangulation This will give you a sniper scope. Depending on version it will give you the option to zoom in further on a target. This will transform some of the weapons into sniperweapons. Again a must have I would think. - Y-Amp You really want to reach that archivebin with the much need subroutine but can not jump high enough? Well, install the Y-Amp it will make you jump, jump higher that is. Higher version will increase the jump height. - Profiler This is a subroutine that displays information about enemies. The higher the version the better the info you get. - Virus Scan Will tell you if subroutines you want to download are infected. On beta it will tell you which subroutines specifically are infected and on gold it will disinfect them on download. d) Objects - Build note When you find a Build note it will increase your version by 2 points. There is a total of 100 notes hidden in the whole game. The number that can be found in each level is given in the upper left corner of the HUD. - Code Optimization Ware You will love these little critters. Upon useing one (with the use key) they will give you the option to increase the version level of one of your subroutines (e.g. alpha -> beta). - Core Dumps Upon defeating an enemy a core dump will appear. Picking it up with the use key will replenish some of your health and/or energy. The core dump will get weaker over time and vanish altogether. Also you may find certain permissions or subroutines in the enemies core dump. As they will also fade into nothingness I suggest that you pick them up as soon as possible. ******************************************************************************* 05. Characters ******************************************************************************* Here listed are the characters you will meet throughout the game. - Jet Bradley He is the son of Alan Bradley, programmer of the original Tron program and co-programmer of Ma3a. Jet is a bit of the rebellious kind as you will learn from the e-mail you can download throughout the game. - Alan Bradley The father of the main character. He apparently got kidnapped and one of your objectives will be to locate him. - Ma3a An AI programmed mainly by Lora Bradley - the late wife of Alan Bradley - and Alan Bradley. Ma3a is responsible for digitizing Jet and transporting him into the world inside the computer. - Thorne / The Master User Thorne is a former Encom employee, who worked in security. He sold the dizitizing technology to Encoms rival fCon. During an experiment to prove that the technology he sold works something goes awry and his form is corrupt- ed. Now he is trying to gain power in the computer world. He is also the cause for the spreading corruption. - Kernel The commander of the ICP units in Ma3a's systems. He will not tolerate any unauthorized programs in his system. He is also a powerful and formidable warrior. - Mercury A program programmed by the mysterious user Guest. Sent to help you gain ac- cess to Ma3a. Current version is v6.2.1 - The fCon Trio This Trio will trouble you later in the game. They are the ones responsible for Alans disappearance. They also do everything to get the technology working for fCon. - Several Programs During your quest you will find many helpful civilian programs. Do not derez them as it will end your game. ******************************************************************************* 06. Enemies ******************************************************************************* This section lists the enemies that you will meet during the game. - ICPs ICPs come in three flavors. First there is the basic grunt with a weak armor and only using a standard Disc Primitive. Then there is the upgraded grunt, signified by the forcefield around him. He will use the Disc Sequencer. The last kind of ICP will carry a shield around him. He might use a Disc Cluster subroutine. To defeat it use a weapon with splash damage or circle your Disc behind him and then return it to you to hit it from behind. - Finder Small, floating robots. Not very strong, but due to their size hard to hit. The laser they fire is deadly accurate. Can be fatal in large numbers. Take them out as fast as possible. Also, if the chance is there sneak up from be- hind to destroy them. Upon destruction I recommond that you are not too close to them as they will explode in a large radius and might take you with them. I found these to be the single most annoying enemy in the whole game. - Z-Lots Z-Lots are former civilian programs that have been transformed by the cor- ruption. They will use both the Ball Primitive and the Ball Launcher as weapons. They are fairly easy to defeat, but can infect your subroutines with a virus. - Rector Scripts These are powerful entities that are spreading the corruption. As weapon they will use the Ball Drunken Dims subroutine. They also have a high protection and can take a few hits. Upon defeat they will explode, so don't be to close to them when they go down. As with the Z-Lots these enemies will be able to infect your subroutines with a virus. - Resource Hogs Their armor strenght resembles that of the ICPs, but they use the Rod Suf- fusion subroutine as weapons. This should make it clear that they should be take out at long range, where their weapons are not as effective. Much less deadly than ICPs. (Mircosoft gets their share of Hogs. :) ) - Seekers Welcome to the search engine from hell. These beast are a pain in the you know what to take out. You will only meet two in the whole game. Use Sequencer or any other powerful routine to take it down fast. Also take care of the Resources Hogs helping the first one and the Data Wraiths helping the second one. - Data Wraiths These are human users much like Jet, but trained to infiltrate other computers. They have a cloaking ability and they can run with a short burst of speed. Their armor though is weak, as is their weapon the Mesh Primitive. Luckily they don't use the Energy Claw or Prankster Bit subroutine. They should not pose much of a problem, as they are more apperance than power. ******************************************************************************* 07. Notes for the Walkthrough ******************************************************************************* - Build notes The location of most Build notes in the game is random, therefore I can not give exact locations for them. The number of Build notes found in a level will also be given beside the sublevel name. - Downloadables (e-mails, subroutines, permissions) The location of the downloadables remains the same. The location of each will be listed in the walkthrough. - Permissions Permissions will be abreviated like this: P7 (= permission 7) - Archive bins I will list the permissions needed for each archive bin like this: bin(npn) = bin needs no permission / bin(3;4) bin needs P3 and P4 Also archive bins will be called just 'bins' througout the walkthrough. - Floating Boxes, crates, cubes (whichever you like to call them) These will be called just boxes throughout the walkthrough. - Code Optimization Ware Code Optimization Ware will receive this acronym: COW (Mooooooo!) :) - Subroutines status Subroutines will receive depending on their status the following suffixes: (a) = Alpha level subroutine (b) = Beta level subroutine (g) = Gold level subroutine (i) = Infected subroutine (np) = Subroutine has to be ported in the Stats screen (##) = A number telling the energy cost to download So 'Submask (b)(i)(np)(45)' means that you will download a Beta level Sub- mask subroutine, that is infected and has to be ported and with a download cost of 45 Energy. - Memory configuration The memory configuration for each sub-level of the game in will be written beside the sublevel name in this manner: (2;5;1;1) This means that you have a set of 2 connected blocks, a set of 5 connected blocks and 2 sets of single memory blocks to mount subroutines in. - Sublevel subject line A sublevel subject line will look like in this example: a.a) Program Initialization (2;3;6) (3 Build notes) -> Name of sublevel; Memory blocks available; Build notes hidden Sublevels that are only Ligthcycle races or cutscenes will not contain any information about memory blocks or Build notes. - Version info At the end of the walkthrough for a given sublevel I will put the highest achieveable version up to that point. Remember that sometimes you are given Build points on entering a sublevel exit. - Sublevels and Mainlevels Each Mainlevel (e.g. Vaporware) is divided into several sublevels. This only for clarification purposes. And now onto the main part, which is why you probably came here in the first place. :) ******************************************************************************* 08. Walkthrough ******************************************************************************* So here we are, you have pressed the start button for the single player game and are first treated to the intro. After watching (or skipping) it the game starts. a) Unauthorized User a.a) Program Initialization (2;3;6) (3 Build notes) After materializing you are first greeted by Byte (who is quite full of himself I might add). He will offer you to do a tutorial, which I re- commend you do in any case as you will be rewarded with 105 Build points for doing the Basic tutorial. This will also give you a head start in stats. I will not give any hints on the tutorial other than to do what Byte says. If you like you can skip the combat tutorial, since it will hold no rewards (permission 7 will be gifted onto you magically). But do re- member to heal up before entering the datastream back to the point where you materialized. Now the fun will start. After exiting the datastrem and walking into the main hall you will be greeted by a few Z-Lots that you should derez quickly. After this follow Byte to the lift and go down. Now you should first turn left. In this area you will find a few boxes and two bins (npn) with e-mails in them. Now go back to the lift and down the right way to where Byte will play key again. Talk to the program then follow Byte down the corridor. After noting that you can not jump high enough follow Byte to the next force field. On deactivation go in, and right, but do not use the ramp but rather the ledge above it. You will end up on a platform with a bin (npn) containing an e-mail and a P8. Now go back and use the ramp, either left or right (it does not matter but right is shorter) and go to the blocked bridge. Activate the panel beside it to turn it off for a few seconds and thereby removing the blockage, which will free ROMie and give you the chance to raid the bins on the other side, which is what we are going to do now. The first bin(npn) right behind the bridge will give you the option to download the 'Y-Amp (a)(25)' and the 'Blaster Primitive (25)'. After raiding this bin you will first have to fight back a few Z-Lots that spawned on the far side of the bridge. Then go to the other side of the room and get to the other bin by jumping on some of the smaller boxes. The bin(2;3;8) contains P7 and an e-mail. If you want or need you can heal up at the patch routines. The Y-Amp should be installed by now. Go back to the room where you couldn't quite jump high enough and defeat any remaining Z-Lots there. One of them will hold a 'Profiler(a)(i)' subroutine ready for download in his core dump. Now jump onto the ledge and go down the corridor to the boxes and the next bin(2;3;7) with a 'Profiler (a)(25)' and an e-mail. Now go to the end of the corridor to finish this sublevel and to see a cutscene. Version: v1.3.3 a.b) Program Integration (2;3;6) (4 Build notes) Another short cutscene and you are good to go for this part. Here you will see your first Code Optimization Ware. Sadly you will be forced to use it here and now, or the game (specifically Byte) will not progress. Since the selection is not great and you will soon enough find a Gold Profiler, I recommend to optimize the Y-Amp. After using the COW Byte, in his infinite wisdom, will note that you do not yet have the per- mission to open the door. Follow Byte, and while your are at it, you might relieve the two bins (npn) of their e-mails. Be careful in your advance though, since an ICP will wait at the lower end of the ramp. Now Byte will open a door in the wall. Jump over and traverse the corridor and be prepared to fight back a few Z-Lots after jumping down on the far end. Now you have to jump through the corridor with the corroded floor and upon reaching its end you will have to destroy another Z-Lot. Now turn left and go to the boxes. Duck down, to reach the bin(npn) hidden behind them with P1 and 'Fuzzy Signature (a)(25)' in it. Now go to the forcefield and de- activate it with the panel on its right side (P1 needed) and enter the datastream behind it to join up with Byte again. Go back to the door at the beginning of the sublevel and open it with the newly gained permission. Now follow the only corridor that you have access to now, sneak up on the ICPs and take them out. One of the ICPs will have the P4 in it's core dump. Now talk to the program outside of the ICPs room. Go on a bit to see a corridor to your right with a Sec Rezzer at its end and follow it down. Turn right again to see a few boxes and a bin (1;2) containing an e-mail. Then enter the small corridor right after the boxes and follow it until you reach a room with a broken bridge. Here jump over to the right and onto the boxes to reach the bin(1) with P4 and 'Submask (a)(np)(15)' in it. Now go back to where you cloud jump over the bridge, but look down and jump onto the bin(1) floating there to get a P2. Now use the boxes to jump to the other side of the bridge. Here you will have to duck through the small hole to end up below a room with a broken glass floor. Go to the broken part and jump up to gain access to the bin(1;4) here. It contains P2, 'Profiler (b)(35)', 'Virus Scan (a)(25)' and 'Y-Amp (a)(25)'. Now go back to the bridge and use the corridor with the small upwards slope and derez any ICPs in the area. If you want you can now deactivate any Sec Rezzers and access the bin with the e-mail since you now should have the needed permission. Then go ahead to the room with the patch routines in it and activate the panel on it's far end to reroute the power stream blocking your way. Be on your guard as a few ICPs will be spawned to keep you from exiting this section. After their deresolution, go back to the room with the power streams and go down to Byte to enter the datastream that opens there. NOTE: This is a one-way datastream only and the Build notes are hidden only in the first part of this sublevel, so make sure you have found all four before entering the datastream!!!!! On exiting the datastream you will be in an area that has and outer ring and an inner circle, both of which are interconnected at every quater of the circle. In the inner circle you will find the port to exit this sublevel, but as of yet it is still protected by a force- field. To deactivate it you will have to supply energy to the four bits that are in the rooms connected to the outer ring. Going right from the entry point will bring you to the first room (the one with the number 1 outside, naturally). Go in and supply energy to the bit. Upon completion of this task a few ICPs will rez in and quarantine fields will be errected to hamper your movement in the outer ring. To get further right to room 2 you will have to use the inner circle. Going further right from room two and entering the inner circle at the next opening will enable you to download an e-mail form a bin(1). To the right of room 4 is another bin(2;4) with 'Submask (a)(15)' and 'Primitive Charge (a)(25)' ready for download. After taking care of all the bits and ICPs enter the port to end this sub- and mainlevel. On to Vaporware. Version: v1.7.1 b) Vaporware b.a) Lightcyclearena and Gridbox (9;7;1) (5 Build notes) After the cutscene you will end up in the staging area for Lightcycle warriors. To continue just walk over to the counter in front of you to get a Rod Primitve (which can not yet be used as a weapon, also your Disc and Blaster have been confiscated), which will activate the Light- cycle in the arena. After getting the Rod climb the stairs and go around either to the left or right side to the next program that wants to talk to you. You will also note a bin(2;4;6) with an e-mail that you can download later and a bin(npn) with P2, P6 and an e-mail within. NOTE: Before entering the port to either the training area or any of the Lightcycle races, make sure that you scoured every corner of the area for Build notes. New ones will appear on finishing a race, but the old ones may also disappear, so make sure you catch every note!!!! After talking to the program you should enter the port to partake in the tutorial for Lightcycleraces. The tutorial is again pretty self- explanatory. On finishing the tutorial (in Version 1.010 of the game you are also able to skip Lightcycle races) search the area for Build Notes, then talk to the program on the lower floor to start the first real race. When you exit the datastream after your win you will first see two ICPs standing at one of the panels. Go to them and listen to their talk. When they ask you which one wins, tell them the blue one will win (Option 2). Then wait and listen a bit and you will gain 5 Build points. Now look around for Build notes again and then go to where you made the tutorial and talk to the ICP there to gain P4. With this you can download the remaining e-mail. Also you should now look for the locker that can be opened with the P4 to gain the Super-Lightcycle. Now enter your next race. You win, you do some Build note searching, you do some talking and then you will enter your last race. What is different about this race is that you do not necessarily kill your enemies, but that your rather have to get to the other side of the raster and drive onto the fallen tower on the right side of the arena (marked by an exit point). Do this to finish this sublevel. Version: v2.1.6 b.b) Prisonercells (9;7;1) (4 Build notes) Talk to Mercury, then walk down the corridor and turn left. Depending on where the ICP stands either sneak up on him or charging at him. Now wait for Mercury to catch up. Talk to her until you receive the P1, then open the door and enter the area with the holding cells. You will have to be especially careful of the Finder in this area, since you do not yet have a weapon you can destroy it with. On exiting the doorway go straight ahead and walk around the back of the building that is in front of you. Enter the room you walked around through the door and derez the ICP. In it's core dump you should find the 'Suffusion (a)(np)' subroutine, which I suggest you port and equip before going further. With the Suffusion Rod take out the Finder and any remaining ICPs in the area to be able to move more freely. Some of the ICPs moving around on the outside will carry the P5 with them. Before going on you might want to empty the two bins in the area. The first bin(npn) in between the floating boxes will hold P2 and P6, the 'Virus Scan (b)(35)' and the 'Fuzzy Signature (a)(25)'. Use the boxes to reach the higher up ledge and through it the second bin(npn) with P3 and the 'Peripheral Seal (b)(i)(20)'. With all the permissions you have gained you are also able to open some of the cells now. In one of them is a COW, put it to good use. Also, take care that you pick up the two Build notes in this area, as it is a no return area. After getting all the goodies, enter the control room again and supply some energy to the bit lying on one of the consoles. Then follow it to ROMie's cell and open it, so that he can open the datastream to the next area of this sublevel. In the next area first use the I/O node to have Mercury tell you what to do next. Then seek out the ICP in the area and after his demise take out the two Encryption units. Then enter the lift to go up. On emerging from the lift seek and destroy any ICPs in the accessible area, then return to the lift. From the lift you can see floating boxes on one wall. Go there to find a bin(1;5) with P3 and 'Suffusion(a)(50)' in it. From there go right and go onto the ramp leading to the corridor (where nothing should be alive anymore if you took out all ICPs). Follow the corridor until the first room, enter it and look for the bin(npn) with the P4 in it. Then go to the next room with a bin(3) con- taining an e-mail and 'Y-Amp (b)(75)'. Back to the lift. You have now the permissions needed to enter the building besides the lift. Enter it and the datastream within. Now go around one of the corners and take the two ICPs out. On picking up your Disc a few more ICPs will come in throught the datastream. Take care of them and then go ahead to empty the bin(npn) with the following: P8, 'Power Block(a)(25)', 'LOL(a)(np)(65)' and 'Peripheral Seal(a)(i)(15)'. You should also have all four Build notes by now. Return to the hall by datastream and go to the white patch routine, while fighting off the four ICPs that are waiting for you here. Right by it you will find a forcefield on the floor and a panel beside it. Use the panel to lower the forcefield, go down the ramp and follow the corridor to end this sublevel. Version: v2.7.4 b.c) Transportstation (9;7;1) (5 Build notes) Exit the room with the I/O node and suggest to the ICPs that they should better leave. After their expiration go to the boxes and open the bin(npn) to find P2 and 'Virus Scan(g)(50)'. Go on and walk through the corridor at the end of the room. Be careful on exiting it. There will be a Finder floating about to the upper right. Also an ICP will patrol the ledge in the about the same direction as you can see the Finder. On a ledge to the left above you will be a few more ICPs (one with Sequencer as weapon). Take out the Finder and the ICP to the right , then turn in that direction. You will find a ramp to the next higher level. Go there and fight back the remaining ICPs, then go over the energy bridge to the left and access the bin(npn). It will hold P2, P3, 'Virus Scan (b)(35)' and the 'Viral Shield (a)(i)(20)'. After this go back to the ramp and to it's right, download the e-mail and activate the bridge. Then return to the ramp and go to the lower level again, we will do some extra hunting. Position yourself in front of the left of the two squares where the boxes are coming out. Jump onto one when it comes close enough to allow it. Depending on the direction it takes when reaching the highest level you will either end up on a box or on a ledge near a patch routine. Whatever your loacation , make your way to the bin in the corner by using the boxes. The bin (npn) has P5 and 'Sequencer (a)(40)' in it. Now jump back to the patch routine and go to the end of the ledge. Once ther jump over the blocks to the other side and go around. Now wait for the gold box to appear from the square and jump on it. Jump off it on the ledge with the COW. To get back down you will have to use a few boxes that come up towards you to reach the ledge with the patch routine again, then go to its end repeat the jumping over to the other side and then jump onto one of the red boxes instead of the golden one. Then jump onto the platform beside the bridge you activated earlier. Two Build notes should be in your possesion right now. Go towards the I/O node, but rather than activating it right away you should first take out the Finder to the right and the two ICPs in the area behind the wall to which the I/O node is fitted. Two more Build notes are hidden in the area with boxes here also. Now get some updated info from Mercury and continue to the right. Go to the bin(npn) on top of the boxes close to where the finder was patrolling to get an e-mail and P5. Get down and advance further into the room, up a ramp and get to the stack of boxes hiding another bin(5) with P1 and another e-mail. Then go around the stack, enter the next room and take care of the ICPs here. First go to the right of the room to reach a bin(2;3;5) with these items: P4, P7, 'Primitive Charge (a)(i)(25)' and 'Profiler (b)(i)(35)'. Now go to the door on the left and out onto the platform where the civilian program is. Press the button on the left side of the board to release the first Mooring App. Go back the way you came and watch out for the ICPs that spawned. In the area behind the I/O node you will now also be able to download the e-mail form the bin(1;5) there. Then go through one of the three tunnels into the large hall and remove the remaining two Mooring Apps there. Then, you can either make it quick and jump on the transport fast, or you can hang around and take out the last few ICPs that want to stop you. Congratulations on another finished sublevel. Version: v2.9.1 b.d) Primary Digitizer (9;7;1) (5 Build notes) After the cutscenes (with some Pong jokes) go ahead and download the e-mail form the bin(npn) then exit through the door, take care of the two ICPs in front of it and download the P4 from the core dump. Turn left and walk towards the boxes now. Use the boxes to climb to the top of the wall and jump over to the other side. Delete the 10 or so Z-Lots in the large open area and the two smaller halls to the left and right. After this enter first the hall on the left to find two bins there. The first bin(npn) holds P3, the other bin(npn) has 'Sequencer (a)(i)(50)', Suffusion (a)(50)' and 'Fuzzy Signature (a)(i)(50)'. Then go to the bin (3) to find 'Sequencer (b)(80)', 'Primitive Charge (a)(i)(25)', 'Profiler (a)(i)(25)' and 'Profiler (b)(np)(35)'. Before getting Byte you may want to go to the bin(npn) hanging beside the door with the forcefield locking it. To rejoin with Byte just go to the far wall on the left (below the area with the corruption in it). Talk to Byte, then follow him and let him drop the forcefield. Enter the corridor to find a COW and tunnels branching off into three seperate directions. Go into each and go close to the edge of the door- way there and look down to check for possible Build notes, after that you may choose one of the to jump down. Then check the bins in the back of the room. The right bin(3) will hold a P5, 'LOL (a)(i)(50)', 'Guard Fortification (a)(15)' and 'Launcher (a)(i)(75)'. The left bin(3) nets you an e-mail, 'LOL (a)(50)' and 'Fuzzy Signature (b)(np)(75)'. After your raid, go to the other end of the room (while opening all available doors to check for Build notes) and open the door on your left to enter Ma3a docking area. Remove the presence of the ICPs here. One of the ICPs holds a 'Profiler (a)', another a 'Fuzzy Signature (a) (np)', and the other hold P3's in their core dumps. Now you should re- lease Ma3a. Do this by supplying energy to the two bits in the panels in front of Ma3a's holding tank. Also check if you have found 4 Build notes up to now, this is vitally important if you want to get all Build notes in this sublevel. After the releas of Ma3a another few ICPs will spawn to attack you. On their premature deactivation talk a few times to the civilian program in the area to get P7, with which you can activate the switch in the middle section of the room you originally jumped into. Remeber to not stand below the lift when you call it. :) Board it to go up and enter an area you will find quite familiar. Turn right and use the boxes again to scale the wall another time. Again you will also have to fight off a few Z-Lots, one on the left side will have a 'Corrosion (a)' subroutine in its core dump, which I suggest you get here. Then make your way to the large force field in the back of the area and talk to Byte there. After he takes down the forcefield follow Byte to the exit area, while being harrased by Z-Lots and having to keep a look-out for the last Build note of this level. After you reached the port area you will have to fend off the Z-Lots until the cutscene starts. Congratulations on getting through the second main-level. Version: v3.2.3 c) Legacy Code c.a) Alans Desktop PC (3;3;2;2;1) (3 Build notes) On rematerialization follow Ma3a to the bin(npn) and talk to her, then download the video archive, after that do some more talking until Ma3a gets a signal from Guest. Follow her to the I/O node, where she will request of you to configure the com-ports. To do this you will have to go to the other side of the hall, where another I/O node is off to your right and where four programs are waiting for their activation in a slumped down position. To configure the port (ports 1-4 are numbered from left to right) look at the ports from the second I/O node and look at the rings on the floors of the ports. These rings are open to one side. If the opening is showing towards the entry of the port go to the corresponding program and talk to it (e.g. Prog 2 for port 2) and wait until it has entered it's port. Right beside the second I/O node you will also find a panel with which you can change the directions into which the rings are pointing. To turn them you will have to press the switch to the lower right in the panel. Configure all ports in this way. Use the second I/O node now to gain additional information and also on how to go on with your mission. You are now sent off to hunt e-mail fragments. To start with this go into the corridor that is to the right of the one where the I/O node you just used is and follow it until you find a room with a forcefield and a lot of empty and a few filled archive bins. One bin(npn) holds just an e-mail, another bin(npn) con- tains an e-mail and 'LOL (a)(65)' and the third bin(7) a P3. To continue on press the button on the panel close to the forcefield to extend a bridge. Go to the middle of the bridge, turn left and talk to the program there to gain the first e-mail fragment. Then return to the bridge and continue on to a one-way datastream. Be wary of the Finders that spawn after you retrived the e-mail. On exiting the datastream you will end up in a room that looks similar to the first bridge room. In one bin(npn) you will find an e-mail and a P5, in the other bin(5) you will get 'Triangulation (b)(100)' and 'Suffusion (b)(np)(90)'. Now go on to the middle of the bridge and talk to the program. This time you will have to destroy the finders before the program can retrieve the e-mail fragment. While doing this a few more finders will come in through exits on the far left and right walls , be very wary of them or they will attack you from behind. The LOL would be a very good weapon to use. The best way is to hide behind the triangle shaped block and to lean and take the Finders down. No matter what the battle will be not an easy one, because of the Finders high rate of fire. Good Luck. If part three of the e-mail quest you will get the same treatment as in part two, only with more Finders and the Finders from the walls will now come in through entrance high up on the wall. In the room you start you will see a bin(npn) with an e-mail within. After gaining access to the last e-mail fragment, go on on the bridge and use the I/O node located there. Then go on and use the lift to end up back in the com-port area. As of now the security system of the desktop PC will have taken notice of your presence. Therefore you will have to battle a few ICPs on your return topside. Also you will have gained P7 by now, which will allow you to raid the bin with the P3 that you could not access earlier. With this permission you can access the bin(3) beside the lift, with P8, an e-mail, 'LOL (a)(50)', 'Virus Scan (b)(np)(35)' and 'Suffusion (b)(90)' within it. Go back to the port with which you entered the sublevel and activate the bit on one of the pillars to end this rather short level. Version: v3.8.5 d) System Restart d.a) Packet Transport (6;5;3;1;1) (5 Build notes) You are now in a transport and have to find a hiding place before they find and delete you. First talk to the Marcella program, then go to the back of this car (think of the transport as a train, with you being in the front car of it) to find two bins in between a few boxes one bin(npn) with an e-mail, the other bin(npn) with P1 and 'Cluster (a) (45)'. Now use the ramp to go down and take care of any ICPs in this car. There is also a ramp on the other side of the car, go up there now. Two ICPs will have useful things in their core dumps, one carries a P3 (enabling to open the doors on the upper level of car 1), the other will hold a 'Profiler (g)'. One the upper level you are now you will find P6 in a bin(npn). Check the doors on the upper levels of car 1 first before exploring its lower level. On the lower level you will find hidde in a niche, behind some boxes a bin(1) with P3 and two subroutines, which are 'Guard Fortification (a)(25)' and 'Fuzzy Signature (b)(75)'. Also there is another program you can talk to on the lower level. Do this, then exit the car through one of the doors on the lower level and jump over to the next car. Use the left door to enter, but be wary of the enemies to your right. Now go straight, then right, then another right and then a left to reach a room below the upper walk way with two bins in it. One bin(1;3) has a P2 in it, the other bin(npn) contains an e-mail. Now exit the room, go right and then turn right and use the blocks to jump up onto the walkway. Go on and take down the ICP (with P4), then walk on and look down the edge of the walkway until you see a Build note (this one is in a fixed location) and jump down there. From there go right and right again to find another well hidden bin(1;2;3) with P4, 'Base Damping (a)(15)' and 'Triangulation (b)(100)'. Now make your way to the exit of this car, and jump over to the next. In the third (and last) car you will find ICPs patrolling the lower level, so you will have a battle ahead of you. Be especially wary of the ICPs carrying shield, either use Ball weapons or the Power Block to defeat them. After the battle search the right side in the back of the car to find a bin(1;2;3;4) with a P5 in it. Also you should have gotten at least P6, P7 and P8 from ICP core dumps. Now use the ramp on the left side to go up one level and to reach a bin(1;2;3;4;5) with P7, 'Profiler (g)(i)(50)' and 'Y-Amp (b)(i)(75)' in it. Now use the front door of the car (the one pointing towards car 2) to find a COW(P6 needed to operate) which you will put to good use. Then go to the right side of the car. There you will find to the front a bin(npn) with an e-mail and a bin(6) with a 'Suffusion (b)(np)(90)'. Now go to the lower level and exit the car to its back. There you should find a program that will tell you how to avoid deletion (he might have gone inside during your battle). Then check if you found all Build notes in this sublevel before finishing it by using the left side back door on the upper level of car 3. Version: v4.0.5 d.b) Energy Regulator (6;5;3;1;1) (4 Build notes) Upon start of the level you are hidden between a few boxes in the middle of a huge hall. Use the LOL or the Disc/Triangulation combo to take out first the ICP on the far platform to your left, then the one on the right to be able to move about more freely. Then make your way over to the left side platform first. Use the red and blue boxes to get to the bin(npn) with P5 and P6 inside first, then go on to the next bin (5;6) to download the video archive in there. After this go to the forcefield on the right side and deactivate it with your newly gained permission and enter the datastream after this. On exiting follow Byte a few meters and then take out the ICPs and the Finder. Before you can activate the I/O node where Byte is waiting you must go to the bins at the far end of the room. One bin(5) holds the needed P2 and a 'Base Damping (a)(15)' and a 'Submask (b)(i)(20)', the other bin(npn) holds three e-mails. Now communicate with Ma3a then go on down the floor and enter the datastream. Now go straight ahead to the end of the walkway (oh, and while you are at it you could also take out the opposition) to the bin(5;6) with P1, P6 and 'Virus Scan (g)(np)(50)' in it, then go back to the crossroads and use it. At its end first search the area on the right, then go down the left way. Then just follow the way until you reach the panel with the patch routines beside it. Charge your energy (important) then turn right and go towards the edge. There you will see a moving floor. It will always move three sections further. The first part of the way will be solid in the last part of the way a few floor panels will be missing . Here it is moving fairly slow, so it should be no problem getting to the other side here with a bit of jumping. Once you reached the other side, take down the ICPs then follow each of the three branches to its end and energize it. After this you will have to do some more panel jumping. The pattern is like the first one, only this time the panels move faster. Upon your safe return to the other side you will be assaulted by several ICPs. Also, a new branch to the right will be opened by the civilian program in the area. After going down the newly opened branch watch your back carefully, because two ICP with Sequencers will come up behind, additionally you will have more ICPs opposing you in the direction you need to go. Take them out before going to the area with the boxes and two bins floating among them. One bin(1;6) holds an e-mail, the other bin(1;6) the 'Cluster (a)(45)' and the 'Fuzzy Signature (b)(75)'. Walk on down the way until you reach Ma3a, then enter the datastream behind her to end this sublevel. Version: v4.3.0 d.c) Power Occular (6;5;3;1;1) (3 Build notes) Ma3a needs you to clean up the area, well, go on, do it. After cleaning the area of all ICPs and the lone Finder go back to where the patch routines are near your starting point. Go up the stairs opposite of them and turn right to find a bin(npn) with an e-mail, P4 (can also be gained through one of the ICPs in the area), 'Triangulation (a)(75)' and 'Triangulation (b)(np)(100)'. Then use the boxes to reach the bin (4) that is floating shortly after the starting point and access the video archive in it. Now go to the bin(4) with four subroutines contained within it, which are 'Submask (b)(20)', 'Guard Fortification (b)(20)', 'Base Damping (a) (i)(15)' and 'Encryption (a)(i)(15)'. The bin(npn) floating above this one holds an e-mail. Now talk to Ma3a, then take the lift that just appeared and ride it to the lower level. To the left you will see a few bins, but we will come back to them later. Rather turn to the right now and talk to the program there. After talking to the program for a bit he will lower a bridge with a few boxes on it. Use it to cross to the other side, then turn left and follow the corridor to its end. There will be ICPs waiting for you. One of the ICPs will hold a P6. Now use the other bridge to get to a datastream. To your right you will also find a COW, but you might want to keep it around for the beta LOL you will find later, so you can upgrade it to gold level. Use the data- stream now. To your left are a few boxes and a bin(npn) with 'Cluster (a)(i)(45)' and two e-mails. For solving the mission however you will have to turn right. Jump over, then use the bit to the right, then use the moving platform to get to the other side. Move further right, pass the junction to the next bit and jump onto the moving platform in front of you. From there get to the last bit and activate it (No), then use the moving platform that is opposite to the bit. On the other side go right , jump over and activate the bit there (Yes), now go back to the last bit and use it again (Yes). Now jump back to the bit that is second to front (where you came in through the datastream), and activate it (Yes) also. Get back to the first bit and use it as you used the others. Now the lenses are configured. Return back topside through the datastream. First take out all ICPs on this level. Now it is time to raid the two bins that could not be accessed earlier. The first bin(4;6) nets you a 'Y-Amp (g)(i)(100)'. The second bin(4;6) holds a 'LOL (a)(65)', 'LOL (b)(110)' and an e-mail. Make sure you get the LOL beta and, if you have not used it yet, I suggest that you use the COW in this area with it. After getting everything you want go to the control room opposite of the bins and use the panel there to turn the Occular. Then go back up topside to Ma3a. The Build notes should all be in your possesion by now. Talk to Ma3a, then use the datastream to get up into the control tower. From there snipe all ICPs (therefore also my suggestion to get a gold LOL) trying to reach Ma3a until the timer has reached zero. The ICPs will attack in waves, use the time in between waves to recharge your energy at the conveniently located patch routine. After your sucessful defence you will have finished this main level. Version: v4.7.6 e) Antiquated e.a) Testgrid (6;3;3) (0 Build notes) Now you will have to fight your first Seeker. There is one very important rule that you will have to follow when fighting a Seeker, never and I mean never stand too close to it. I can kill you in one blow if you are too close. If you want to defeat it easily use the Disc or Sequencer from a few meters off and throw it repeatedly at its head until the energy discharge starts. At that time it will bury itself and three Resource Hogs will spawn. Take them down and if possible use their core dumps to heal and recharge your energy (or get to the patch routine if that is possible). After the Hogs are defeated watch the ground closely, so that you can tell where the Seeker will appear. Position yourself right and repeat what you did before and you should have no trouble defeating it, although this method will take a little time. After defeating it you will have mastered another sublevel. What is more at the end of this sublevel you will also regain the Blaster Primitive, that got taken away from you during the 'Program Integration' sublevel. Version: v4.8.6 e.b) Main Processor Core (6;3;3) (4 Build notes) Upon entering this sublevel you will first meet I-No, talk to him for a little while to gain a bit information about this old system. Then use the I/O node and then go on to the forcefield to your right. I-NO will lower it for you and then the fun starts. Both on the left and right tanks will move into firing position. They will start to fire at you as soon as you show yourself. Luckily you are a bit faster and can evade the shells of the tanks. What makes the next part hard is that you have to cross over to the other side of the hall first. You will have to do this by interconnected platforms that can be destroyed by the tanks and although the platforms will reform after a while it is still frustrating standing on a platform in the exact same moment that a cannon shell hits it. The only thing I can say is Good Luck. If you made it to the other side go to the bin(npn) in between the boxes to get a P3 and an e-mail. Also on the back wall a COW will be moving around. He will travel in a circle, just wait until it gets down. After using it and getting the Build note (this one seems fixed) on top of the boxes return to the huge pillar in the middle of the room and activate one of the panels that are located to its left and right sides to go up. On reaching the topside you will have to deafeat 4 Resource Hogs with one holding a 'Peripheral Seal (a)' in ist core dump. After their defeat activate each of the four panels located on the edge of the middle platform. Use the I/O node on the upper ring and talk to I-NO, after this a one-way datastream will open on the middle platform, use it to go on. You will end up in a room connected to a circular walkway. The walkway has a room after each quater of the circle. Also there is only one way to go around and after appearing in a new room you will have to fight off several Resource Hogs, so be prepared. On the opposite side of the first room you will find two bins, one bin(npn) with an e-mail and an- other bin(7) with P8 in it. Since you can not do very much here now, I suggest that you go on to the next room. On trying to exit the first room a few Hogs want to keep you from doing exactly this, one will gain you a P5 and another a 'Peripheral Seal (b)', though. As it is you can not do very much in the second room either, just now, since you are still missing a few permissions, so go on to the third. Don't stay in the third either, where you want to go is the fourth room. On the raised platfrom in the back left of the room are a few boxes, and a bin(npn) with P6 and 'Sequencer (a)(25)'. With the P6 you will now be able to access the bin(6) in the third room with a P7, and with this you can get the contents of the bin(7) in the first room. One of the Hogs might carry the P8 around also in its core dump. Now we will empty the bins in room 2. Go there to the raised platform on the left side and you will find 3 bins among the boxes there. Two bins(7) will hold one and two e-mail respectively, the other bin(8) holds a P5, P7, 'Cluster (b)(i)(90)' and a 'Fuzzy Signature (b)(15)'. After we have raided all the bins we can set out to do what we came here for. First go to room 2 and activate the panel on the raised platform there, then do the same to another panel in room 4. Now return to the I/O node in room 1 and talk to I-NO there. After this you have finished this sub- level and will be automatically transported to the next. Version: v5.1.7 e.c) Old Gridarena Now you will be treated to three consecutive Lightcyclematches. After defeating the opposition in each room try to get a Shield power-up before exiting the current area. You will have to find an exit in each arena which will be on a wall. It is a section that was protected by a forcefield during the race. While going to the next section you will have to evade a few Tank programs. On winning the third you will get an exit point, just drive over it. Good Luck programs. Version: v5.2.7 e.d) Main Energy Pipline (6;3;3) (4 Build notes) After you get your mission update, walk around the boxes and take care of the Resource Hogs, then go to the empty bit socket and turn right there. Down by the boxes there you will find the bit for the socket. Supply it with energy and activate it to go on. After crossing the energy bridge look for a bin(npn) behind the boxes to find a P2, 'Peripheral Seal (a)(np)(15)', 'Triangulation (b)(100)', 'Submask (a) (15)' and 'Base Damping (b)(20)'. Then go on up the ramp to the left. On reaching the top of the ramp you will be in a room with two huge pillars in its middle. First derez the Resource Hogs hanging around. Then turn left to find a bin(2) on top of a few boxes with an e-mail in it. Then go around and exit the hall on the other side. Now you will end up in a room with a lot of moving platforms and a countdown running. On the left, right and front wall are platforms with a button to push. You will activate all three switches to solve this part. The left one should be the easiest to reach, then go on to the middle one and activate the right one last. You will get a bonus on your countdown after activating one of the switches. After the last switch has been activated the platform you are standing on will auto- matically rise to the top, so stay on it. Also be wary of the two Finders floating around above you. On reaching the top first take care of the Hogs coming in through the door then turn your attention to the bins in the middle section. The lower bin(npn) of the two holds P5, an e-mail, 'Corrosion(a)(i)(100)' and 'Corrosion(b)(225)'. The other bin(5) contains just an e-mail. Enter the large hall through the door that was defended by the Resource Hogs. Now jump over the obstacles to the right of the door to reach a COW. To the left of the door you may find a Build note so check there, too. Then go to the floating boxes and bins on the left side of the room. In the lower bin(npn) is a P3, in the upper bin(3) is an e-mail. After the retrieval talk to the program on the far side of the room. Jet has proven his persuasive skills and a datastream will now be open for you to enter, do so. You will end up in the sphere you just saw from the outside. Talk to I-NO and wait for him to extend the ramp to the Legacy Code. Walk up the ramp and retrieve the code disc to end this mainlevel. Version: v5.6.5 f) Master User f.a) City Hub (6;2;2;2;1) (5 Build notes) Welcome to the City program. First turn around and go around the right corner to find a program that will give you a 'Viral Shield (b)', an offer you can not turn down (unless you used COWs on the Viral Shield). Now you should check the immediate area for Build notes, as you will not be able to return here. After your check activate the panel to call a transport to the other side. Leave the transport and walk to the right where you will find a bin(npn) with P5, 'Viral Shield (a)(20)', 'Launcher (b)(i)(115)' and 'Virusscan (b)(np)(35)'. Then look for any Build notes in the area, this is also a no return area ynd you should have three before talking to the program in this area. Look for the low level compiler and talk to it. You will find it when you enter the door further down on the right side wall. In there you will also find a bin(5) with an e-mail. After the talk an new challenge will spawn. You will have to defend the three towers in the middle from the corruption forces for 2:00 minutes when a few ICPs appear to help (yes, help) you. It would be best to install the Viral Shield in your memory, so that your subroutines do not get infected so easily. Be especially on the lookout for Z-Lots that are on the walkways that connect the towers, as they are the ones that will be responsible for taking them down. They should be your first priority to destroy, then attack the other Z-Lots. You may also have to battle against your first Rector Scripts here, be careful around them as they can pack a punch and also do not go down easily. When the cutscene with the ICPs and Mercury is finished check the area around the Progress Bar for Build notes first, also a COW will be in front of the bar. Go to the patch routines and walk through the arcway to the left. Right after passing under it you will find a few boxes and a bin(npn) with 'Viral Shield (a)(20)' and 'Launcher (a)(75)' in it. Go up the ramp to the back alley. Here you will find a few Z-Lots and a Rector Script harrasing a civilian program. Help it by eliminating the corruptive forces. Several of the Z-Lots will carry 'Corrosion (a)' and one will carry a 'Launcher (a)'. After the demise of the enemy talk to the program you just saved to get a P3. Go to the I/O node at the entrance to the alley and talk to Guest there. Check if you have gotten all Build notes and then go to the entrance of the Progress Bar and meet back up with Ma3a and end this sublevel. f.b) Progress Bar (6;2;2;2;1) (4 Build notes) The first part of this level is fairly easy. First you should set out to find all four Build notes and while you are at it, disinfect any subroutines from the preceding battles if you have not yet done so. Also look for the COW on the back wall of the lower level. If you enter the datastream to the upper level you will find to your right one bin (npn) with an e-mail and P8 and another bin with P6, 'Profiler(g)(50)', 'Virus Scan(g)(50)' and 'Y-Amp(g)(75)'. After your gathering operation talk to all programs in the area, until you have to talk to the DJ, who wants to know what the other programs would like to listen to. Ask the programs then tell the DJ the answer (usually Track 6). There are 3 programs on the lower and two on the upper floor to ask. Now you are able to talk to the High Level Compiler, who will agree to compile the TRON Legacy Code. Then talk to Guest through the I/O node on the upper level. Interesting turn of events, wouldn't you say? And this is not the end of it, because now, Thorne, the Master User will make it's presence known. You will have to defend Ma3a for 3:00 minutes during the compilation of the code. When Thorne charges up a sickly green energy ball fire your weapon at it until it disappears, then take care of the hordes of Z-Lots pestering you until Thorne charges up his next ball. The best way to finish this fight is to go to the upper level, as there is only one Z-Lot up there (with a 'Drunken Dims (a)' subroutine), and you can hide in the back where the I/O node is to protect you from the attacks of the Z-Lots on the lower level. You will also be able to access the white energy patch routine, which will en- able you to use energy based weapons on a more free basis. Stop all of Thornes energy balls until the timer runs out and you will have finished this sublevel. Version: v6.0.7 f.c) Outer Raster Getaway Another Lightcycle level, this one works just like the one in the EN12-82 system minus the tanks. Oh, and did I mention, Run, User Run. :) :P Version: v6.2.7 f.d) fCon Labs / Ma3a gets saved Not really a level, rather a cutscene with its own description f.e) Remote Access Node (6;2;2;2;1) (5 Build notes) In this sublevel you will find only Resource Hogs as your enemies, so be prepared to fight a few long range battles. After the talkage heal up if you need to do so and start to disinfect any subroutines that got infected during your battle with Thorne. Turn around and take the right turn at the junction. Go on and take another right and at the next junction go down to the left to find a bin(npn) on the left wall with P1 in it (which can be also gleaned from one of the Hogs on this upper level). Right opposite of the junction you just walked down, close to the edge is a formation of three cube like objects where the last one is a bit higher than the first two. On there you will find a bit that needs some energy, supply it. Then follow the bit to a datastream. Go up to the area with the many boxes and the five bins. The first bin you should (and can) access is the bin(npn) with a P4 and an e-mail in it. Then there are two more bins(4) with an e-mail each in it. The bins with the subroutines can not be accessed just yet, so you will have to wait a bit. Check if you have gained the two Build notes from the upper level before entering the data stream you opened with the bit. Enter the data stream and take care of the Hogs on the next level. Then walk left form the datastream and enter the space between the large two blocks towards the edge of the open space of this level. Look down to see a few blocks and a bin further down. Jump down to the bin(npn) for P5, then jump back up again. Go further left to find another bin(5) and get P1 and P8 from it. On the oppsite side of the block this bin is hanging in front of you will find another bin(npn) with an e-mail in it. Now go further left to find a dead end with another bit needing energy. Supply it and then go to the right of the datastream to find another datastream that will take you down to the floor level. Also, see if you have gained another two Build notes on the middle level before entering the stream. On the floor level, first take out the Hogs, then access the bin(npn) to the left of the datastream for a P2 and a P8. After this walk to the boxes to the right of the datastream and scale them to reach a COW. Then search the area for the last Build note. After you have done this activate the bit on the panel right opposite to the datastream. Now get back to the toplevel, we will now take care of the two bins we could not access earlier. Oh, and we will also take care of the blue ICPs that want to stop you. The first bin(1;2) with three subroutines holds 'Power Block (b)(np)(75)', 'Cluster (b)(90)' and 'Sequencer (b)(80)', the second bin(1;2;8) has a 'Triangulation (g)(125)' and a 'Profiler(b) (35)'. Now return to the junction where you found the bin with the lone P1 in it. Talk to Mercury, who is waiting for you there, a few times and you will end this main level. Version: v6.6.6 (The version of the beast) g) Alliance g.a) Security Server Another cutscene with its own designation g.b) Thornes Outer Partition (10;4;1;1) (5 Build notes) What you never thought possible has happened, you are now allied with the forces of the Kernel that was out to eradicate you during the first part of the game. The first thing you have to do is make your way down to the lower level, do this by turning around and going to the left hand side, there you will find a possibility to go down. Now go up to the ICPs that you could already see from the ledge. Then go on to the bin(npn) on the left side with a P6, an e-mail and 'Corrosion (a)(i) (15)' in it. Now use the twisted bridge to assault the Z-Lots on the other side, this time you will have help from the ICPs of the Kernel. Before doing anything else fight your way through to the end and eradicate and corruption forces you find in the area. The end of the 'assault area' is where there are a lot of defeated ICPs lying around. Also there is a white patch routine to the right. Now go back towards the bridge first to start retriving data from the bins you could not access because of the raging battle. Opposite of where the health ball was (about the middle of the battlefield) are two bins(npn) of which one holds 'Launcher (a)(25)' and the other the P8 get it. Now go back to the area where the deafeated ICPs are lying around. To the left of the street you will see a few pieces of floating debris, use it to jump over to the pillar like remains on the other side. There you will find a bin(npn) with the 'Drunken Dims (a)(50)' Subroutine. Jump back and get to the bin(npn) near the white patch routine with an e-mail and 'Encryption (b)(20)' in it. Now go on until you reach and I/O node and use it. After the talking go over to the other side of the room and use the datastream there. Go ahead, use the ramp on the right side, take out the Z-Lots, then go to the area with the moving rings. Once there you will have to destroy the Rector Scripts that are spawned in the middle of the rings while simultaneously having to defend yourself from Z-Lots. After this battle the sublevel will be finished. Version: v7.0.0 g.c) fCon Labs / Data Wraith Preparation Another Cutscene. g.d) Thornes Inner Partition (10;4;1;1) (5 Build notes) After talking with the defeated program get out of the room and walk down the slope. Be careful in this level, as there will be quite a few Rector Scripts lurking around. At the base of the slope first turn left to find a bin(npn) with a P1 and e-mail in it and another two bins(1) also containing an e-mail each. Then go to the right where you will find a few patch routines. Then go on and defeat the Z-Lots and the Rector Script that are blocking you way, until you find another defeated ICP lying by the wayside. From there take a right turn. You should find a energy patch routine to the right and a bin to the left above you on a ledge. Go up the slope a bit and jump over to the bin(1) containing P4, P6 and an e-mail. Now jump down and go back to the junction and follow the lower path around until you find a bin(npn) (close to a health routine) with P8, an e-mail and 'Viral Shield (g) (30)' in it. Then go back to the junction again and walk to its end to find a bin(4;6;8) with 'Corrosion (b)(i)(150)', 'Corrosion (b)(150)' and 'Drunken Dims (b)(i) (120)' in it. Also in this area you will find a COW to use. Retrace your steps to the junction again and go down the way and this time follow it to its end. You will see a bin(npn) with an e-mail in it. From there go right, down the path that is guarded by Z-Lots. At its end a bit hidden you will find a downwards slope that is the exit to this sublevel. Version: v7.2.2 g.e) Thornes Core Chamber (10;4;1;1) (0 Build notes) Here you will meet Thorne, Alan and the Kernel. After some talking the battle against the Kernel starts. The first thing you should take note of is, the pillars and some floor panels are destructible, so take care around them. The second thing you should be careful of, is that you do not hit Alan. During the first part of the battle the Kernel will stand up on the ledge with the beaten Thorne. From there he will attack you with the Sequencer Disc. Either run from them (bad idea) or deflect them with Power Block (good idea). If need be try to heal up at the health routine. After you have done some damage to the Kernel he will deactivate his shield and come down to fight you one on one in a Disc battle. If you are like me, then you will honor his request and fight Disc Primitive against Disc Primitive, if you do not want to honor his request use any weapon available to you. Either way Good Luck to you! :) After the battle you will have finished this main level. Version: v7.3.9 h) Handshake h.a) Function Control Deck (3;1;1) (4 Build notes) Now you ended up in Thornes PDA. Configure yourself for a battle against Finders, which will be the only enemies here. Then set out to find all permissions and Build notes first before persuading the PDA OS to help you. You should first exit the room through one of the doors and get to the panel that activates the bridge. Then return to the starting room and go on the lift there. Activate it by using the switch beside it. Once down exit the room through the only exit. To the right of the exit you will find a bin(npn) with an e-mail and P2 and P6 in it. Then continue your way. In the large hall use the two big, moving platforms to get to the other side, then go through the door on the right. Call down the lift in the room you will reach and go up. On the upper level you will find a bin(npn) with P2, P3 and an e-mail within. Then use the left or right door to exit and make your way to the panel that activates the bridge. Go back to the first room now and access the bin(2;3;6) there. A 'Base Damping (g)(25)' and an e-mail will wait here. Then use the lift to go down again and move to the room with the moving platforms. Wait until one platform advances towards you. While it is coming closer press the switch on the panel on the large contraption in this room. Now use the two platforms to go to the other side and push the button there, too. Now go trough the corridor on the right and use the lift to the upper level. Once there go to the back of the room and activate the switch located there. Then use one of the exits of the room and the bridges you activated earlier and go back to the first room. Flip the switch there also to finish this, rather small, mainlevel. Version: v7.6.2 i) Database i.a) Security Socket (5;2;2;1;1;1;1;1) (5 Build notes) Finally you have reached fCon's server. Your first objective here will be to find an access to the firewall. First turn right and talk to the programs there, then continue your way to the right and follow the half circle around to its end. There you should take out the Finder and the ICPs. One of them carries a P5. Now you download the e-mail from the bin(npn) you see. Now go to where there is a deactivated datastream in a niche shortly before the end of half-circle and go down the ramp opposite of it. Once you've reached the bottom there is only one way to follow. Remove all opposition and go on until you reach a dead end, do not energize the bit you saw just yet. In the dead end you will find a COW and a bin(npn) with a P2, 'Energy Claw (a)(100)', 'Cluster (a)(45)' and 'Fuzzy Signature (b)(10)' in it. Now go back and energize the bit and follow it to its socket. Be prepared now, as you will meet your first Data Wraiths in just a moment. Activate the bit. Enter the room and find a black-blue pillar in its middle. Use it to deactivate the forcefields to your left and right. In the right room is a bin(npn) with P5 and P7. Go back up to where the deactivated datastream pad was and enter the now active stream. Go to the next datastream to reach the lower level and make it your first priority to take out the Hogs. Now go ahead and destroy the four yellow power tabs on the wall. Now go back topside and go back to the starting section of the level. Once there you will see how a few ICPs over to your left will activate an energy bridge and start to attack you. Kill their processes. Go on until you reach a Sec Rezzer from there go down to your left, pass the boxes with the bin and the closed off port to the firewall an go into the next section. First go to the back of the area where the boxes are to find a bin(2;5;7) with 'Power Block (b)(75)', 'Fuzzy Signature(b)(15)' and 'Peripheral Seal(a)(15)' in it, then go down the stairs opposite of the Sec Rezzer. Go right from the stairs and follow the way until you find a ramp. At the top of the ramp you will see the datastream that will transport you to the second modulator socket. This socket is configured just like the first one, so repeat what you did there. Also hidden in one of the niches here is a bin(2;5) with P8, 'Primitive Charge (b)(15)' and 'Y-Amp (g)(100)'. Now return to the exit and fight back a few more ICPs. Then go towards the firewall port, but do not enter it yet, because you might first want to access the bin beside it, as you now have sufficient permission to do so. This bin(2;5;7;8) holds the 'Base Damping(b)(20)' and the very nice 'Megahurtz (a)(85)'. Now enter the port and finish this sublevel. Version: v8.0.1 i.b) Firewall (5;2;2;1;1;1;1;1) (5 Build notes) Watch the cutscene, then take care of the ICPs in the room. The bin that is in this room can not be accessed yet, so will come back to it later. This level is pretty straight forward. Look for the one niche where a datastream is active and enter it. You will end up in a plat- form with two tubes and a walkway in its middle that is perpendicular to it. To get through the tubes safely wait for the moving force field to disapper then run through the tube. At the far end of the platform you will find a bit which you activate, then turn around and return to the starting room by means of the datastream. Once there go on to the next niche with an active datastream. In the middle of the second platform you enter now you will find a lone ICP and off to your right two bins(npn) one with P2 and P3, the other with an e-mail within. You will also have to be a bit more careful with the forcefields now, as they move towards you, then back to where they started. This time you will have to follow them on their 'retreat'. Activate the bit, return to the main room and enter the next data- stream. In the tubes of the third platform you will have two moving forcefields per tube. In the middle of the tube you will find a marked section. Follow the first field until it dissipates close to the middle, then wait on the marking for the second field to move away form you and follow it. In the middle section to the left you will find a few boxes and a bin(npn) with P5, an e-mail, 'Guard Fortification (b)(20)', 'Encryption (b)(20)', 'Submask (b)(np)(20)' and 'Profiler (g)(50)'. Activate the bit, then return to the main room. You will be greeted by ICPs, but they will quickly leave if you find the right persuasive means. :) In one of the niches of the main room a COW will have appeared. We can now also take care of the bin(2;3;5) here, that holds a 'Energy Claw(a) (50)' and a 'Megahurtz (a)(np)(100)'. Go on to the last active data- stream to enter the configuration platform. Ring 1 is the inner ring, Ring 2 the middle one and number 3 is the outer ring. The panel on the left will activate Ring 1 and 3, the panel in the middle will move Rings 2 and 3 and the right one moves 1 and 2. The rings activated will all move one sixth of a circle further. Use the panels to align the breaks in the rings with the energy couplings on the left and right wall to solve this mission. Enter the datastream and you will end this sublevel. Version: v8.3.1 i.c) fCon Labs / Alan Lost Another cutscene. i.d) Primary Docking Port (5;2;2;1;1;1;1;1) (5 Build notes) Your first objective will be the removal of any enemy presence in the area. Do this now. Some of the ICPs will carry a P5. After the demise of the enemies return to the bin close to where you started. This bin (npn) holds P2 and P3. Then go ahead to the junction and turn left. Here you will find several bins. The one closest to the junction, is a bin(5) that holds 'LOL (b)(i)(110)'. To get to the next bin(2;3;5) you will have to jump on a few boxes. It holds 'Power Block (g)(np)(100)'. Further down you will find a COW, after this return to the junction and go down the right path now. There you will find a bin(2;3) with P5 and two e-mails. Then follow the path until you find Alan and a program looking at the server. Follow the program to the room with the plans. On the far side of the table you will find a bit that needs energizing. When you have done this just follow the bit to its socket, go through the door and activate the shuttle and don't mind the Data Wraiths that are trying futily to stop you. Both of the Wraiths will hold an 'Energy Claw (a)'. Once you've docked with the shuttle go up to the I/O node to communicate with Alan, then go on to the panel in front of the first security bit socket. Press it to open the socket, then hit the middle of it with your Disc. You will know you succeeded when the clamps open. Before going on to the next bit socket you will have to fend of the Data Wraiths that will spawn on the ledge high above you. Just LOL at them. :) Then repeat this with the next bit. Now you will have to enter the shuttle again and use it to reach the other side where you will do the same to the bits there. Also there is a bin(npn) with an e-mail here. After you have taken care of the bits board the shuttle again. On your next dock you should first take care of all the ICPs here, then go on towards the I/O node. First you might want to access the bin(npn) that is floating above the boxes, it holds a P2 and P7. Then use the I/O node. Go to the bridge, activate it, cross it and finish this sublevel. Version: v8.6.5 i.e) fCon Labs / Security Breach Another cutscene i.f) Storage Section (5;2;2;1;1;1;1;1) (0 Build notes) Welcome to your second Seeker to battle in this game. This one is much harder to defeat than the one in the old Encom system. For one thing he will not have to reatreat for a while when a discharge hits him, as there are no discharges here. Then he will be aided by Data Wraiths which try to attack you from all angles while you try to destroy the seeker. A good combination of subroutines might be the LOL (gold), paired with Megahurtz and Corrosion (also gold), as well as the Energy Claw (beta or gold). Attack the Seeker itself with the LOL and any Data Wraiths that you can not reach on the far left ledge. Destroy all remaining Data Wraiths with the Claw, which will also aid you in re- charging your energy for the LOL. Other than that, Good Luck. :) After this ordeal board the shuttle that has appeared on the right side of the battlefield. This will end this mainlevel. Version: v8.8.0 j) Root of all Evil j.a) Construction Level (5;3;2;1;1;1;1;1) (5 Build notes) We are getting closer and closer to finishing this game, are we not? Well, to continue, first remove the ICPs, then go, from your starting point, straight ahead and turn left first. There you can see a bin(npn) with P2; 'Primitive Charge (b)(15)', 'Megahurtz (a)(150)', 'Peripheral Seal (g)(np)(25)' and 'Y-Amp (g)(10)'. Then turn around and jump up to the other bin(2) with an e-mail and 'Prankster Bit (a)(100)' in it. Now pass by the Sec Rezzers and move onto the lift (a few boxes are on it) and activate it through the panel. After reaching the top take out any ICPs in the next hall, then look for Build notes, after this go to the back of the hall. There you will find on the floor level a doorway that is protected (lol!) by a rather erratic forcefield. This will enable you to slip through. Follow the corridor until you reach the one door that you are able to open with your permission and do so. You will end up in a room vital to the security of the system. In there you will have to activate 6 panels 3 to each side, 2 on the upper and one on the lower level each. After activating a panel you will also have to fight Data Wraith that tele- port in. After your activation of all 6 panels look for the lift in this room (right, upper walkway) and use it. Go down the corridor and open the door, then turn left as it is the only thing you can do here and follow the path to the open door. You will end up in the large hall again. Go to the walkways on the other side and enter the doorway that has now opened there. Follow the path until you enter a room where a COW is. Go through this room to its other end and exit the room here. Again follow the path until you reach another door. Inside you will find a few ICPs and Alan. Defeat the ICPs then talk to Alan, which will end this sublevel. Version: v9.0.8 j.b) Data Wraith Training Grid This is another Lightcyclerace sublevel. Do or skip, there is no try, or something like that. Version: v9.2.8 j.c) fCon Labs / The fCon team takes over Another cutscene. j.d) Command Module (5;3;2;1;1;1;1;1) (3 Build notes) Now you ended up in the command module. Right opposite to your starting position you will find a bin(npn) with two e-mails, 'Megahurtz (b) (130)' and 'Viral Shield (g)(30)' in it. Then retrieve the Build note from the escape pod, after this go through the door and enter the command section. There you will have to take out the ICPs first. After this look for a second Build note in the area. Now you will have to release the stabilization bits. Go to the 3-D Wireframe model of the Data Wraith carrier and use each bit (four total) in it once. Then go the front left side to the huge disc that is hanging there and use it. Then get back to the Escape Pod and move through the red force field. You will now end up in a section close to Alans last location. Be care- ful here as some of the level will deconstruct rather violently, and we don't want to get burried under the rubble now, would we? In this part you will also find the very last Build note for the game. The path you should follow is very obvious. If you made it through the room with the dropping ceiling go to the door that is 'guarded' by the patch routines and move through it to finish this sublevel. Version: v9.5.6 k) Digitizer Beam k.a) Not compatibel (4;4;4) (0 Build Notes) This is it, the final battle of the game. And not only are you limited in the choice of your weapons, no, this battle will also consist of three seperate stages. The first thing you should take care of, try not to fall down. It is sometimes very hard to move around the platforms without being able to discern if there is a hole in the floor or not. The next thing is, try to have an obstacle between you and the fCon team at all times when it is possible. Third, heal up if the possibilty is there. The weapons used by the first form will be Blaster, Prankster Bit and some kind of infection attack. The second form will miss the Prankster Bit and the third incarnation will have the Blaster and the Prankster Bit missing. I found that the boss is best defeated with the Disc Primitive (or it's subroutine Sequencer if energy allows). Keep your distance from it and hit it from afar and the boss should not pose much of a problem. If you have gold level defensive subroutines he will deal you a lot less damage. If possible couple the Disc with Primitive Charge, Corrosion and Megahurtz for additional damage to the boss. Other than that I can only wish you Good Luck and I hope you had fun with the game. :) Final Version: v9.7.6 ******************************************************************************* 09. Subroutines and COWs per level ******************************************************************************* Here I will list for a fast reference what subroutines and COWs you can gain in which sublevel. I will only list the Version and Energy need of the subroutines here. The COWs will be put at a place at which they can still be used with any of the subroutines found up to that point. That means, if I put the COW as a break in between the subroutines of a sublevel, you can only use it on the sub- routines above that COW, those that follow below can not be improved by that COW as it will not be accessible for them anymore. I will also list subroutines gained from core dumps if they are not accessible through bins for a while. These will be the ones that do not have an energy rating behind them. a) Unauthorized User a.a) Program Initalization Y-Amp alpha 25 Energy Blaster Primitive n/a 25 Energy Profiler alpha 25 Energy a.b) Program Integration COW (has to be used here as part of the plot) Fuzzy Signature alpha 25 Energy Submask alpha 15 Energy Profiler beta 35 Energy Virus Scan alpha 25 Energy Y-Amp alpha 25 Energy Submask alpha 15 Energy Primitive Charge alpha 20 Energy b) Vaporware b.a) Lightcyclearena and Gridbox none found / Lightcyclerace b.b) Prisonercells Virus Scan beta 35 Energy Fuzzy Signature alpha 25 Energy Peripheral Seal beta 20 Energy Suffusion alpha n/a COW Suffusion alpha 50 Energy Y-Amp beta 75 Energy Power Block alpha 25 Energy LOL alpha 65 Energy Peripheral Seal alpha 15 Energy b.c) Transportstation Virus Scan gold 50 Energy Virus Scan beta 35 Energy Viral Shield alpha 20 Energy Sequencer alpha 40 Energy Primitive Charge alpha 25 Energy Profiler beta 35 Energy COW b.d) Primary Digitizer Sequencer alpha 50 Energy Suffusion alpha 50 Energy Fuzzy Signature alpha 50 Energy Sequencer beta 80 Energy Primitive Charge alpha 25 Energy Profiler alpha 25 Energy Profiler beta 35 Energy COW LOL alpha 50 Energy Guard Fortification alpha 15 Energy Launcher alpha 75 Energy LOL alpha 50 Energy Fuzzy Signature beta 75 Energy Corrosion alpha n/a c) Legacy Code c.a) Alans Desktop PC LOL alpha 65 Energy Triangulation beta 100 Energy Suffusion beta 90 Energy LOL alpha 50 Energy Virus Scan beta 35 Energy Suffusion beta 90 Energy d) System Restart d.a) Packet Transport Cluster alpha 45 Energy Profiler gold n/a Guard Fortification alpha 25 Energy Fuzzy Signature beta 75 Energy Base Damping alpha 15 Energy Triangulation beta 100 Energy Profiler gold 50 Energy Y-Amp beta 75 Energy Suffusion beta 90 Energy COW (Permission 6 is needed to operate this one) d.b) Energy Regulator Base Damping alpha 15 Energy Submask beta 20 Energy Virus Scan gold 50 Energy Cluster alpha 45 Energy Fuzzy Signature beta 75 Energy d.c) Power Occular Triangulation alpha 75 Energy Triangulation beta 100 Energy Submask beta 20 Energy Guard Fortification beta 20 Energy Base Damping alpha 15 Energy Encryption alpha 15 Energy Cluster alpha 45 Energy Y-Amp gold 100 Energy LOL alpha 65 Energy LOL beta 110 Energy COW e) Antiquated e.a) Testgrid none / Boss Battle e.b) Main Processor Core COW Peripheral Seal beta n/a Sequencer alpha 25 Energy Cluster beta 45 Energy Fuzzy Signature beta 15 Energy e.c) Old Gridarena none / Lightcyclerace e.d) Main Energy Pipeline Peripheral Seal alpha 15 Energy Triangulation beta 100 Energy Submask alpha 15 Energy Base Damping beta 20 Energy Corrosion alpha 100 Energy Corrosion beta 225 Energy COW f) Master User f.a) City Hub Viral Shield beta n/a (talk to a program) Viral Shield alpha 20 Energy Launcher beta 115 Energy Virus Scan beta 35 Energy Virus Scan alpha 20 Energy Launcher beta 75 Energy COW f.b) Progress Bar Profiler gold 50 Energy Virus Scan gold 50 Energy Y-Amp gold 75 Energy COW Drunken Dims alpha n/a f.c) Outer Gird Getaway none / Lightcyclerace f.d) fCon Labs / Ma3a gets saved none / Cutscene f.e) Remote Access node Cluster beta 75 Energy Sequencer beta 80 Energy Triangulation gold 125 Energy Profiler beta 35 Energy COW g) Alliance g.a) Security Server none / Cutscene g.b) Thornes Outer Partition Corrosion alpha 15 Energy Launcher alpha 25 Energy Drunken Dims alpha 50 Energy Encryption beta 20 Energy g.c) fCon Labs / Data Wraith Preparation none / Cutscene g.d) Thornes Inner Partition Virus Scan gold 30 Energy Corrosion beta 150 Energy Corrosion beta 150 Energy Drunken Dims beta 120 Energy COW g.e) Thornes Core Chamber none / Boss Battle h) Handshake h.a) Function Control Deck Base Damping gold 25 Energy i) Database i.a) Security Socket Energy Claw alpha 100 Energy Cluster alpha 45 Energy Fuzzy Signature beta 10 Energy Power Block beta 75 Energy Fuzzy Signature beta 15 Energy Peripheral Seal alpha 15 Energy Primitive Charge beta 15 Energy Y-Amp gold 100 Energy Base Damping beta 20 Energy Megahurtz alpha 85 Energy COW i.b) Firewall Guard Fortification beta 20 Energy Encryption beta 20 Energy Submask beta 20 Energy Profiler gold 50 Energy Energy Claw alpha 50 Energy Megahurtz alpha 100 Energy COW i.c) fCon Labs / Alan Lost none / Cutscene i.d) Primary Docking Port LOL beta 110 Energy Power Block gold 100 Energy COW i.e) fCon Labs / Security Breach none / Cutscene i.f) Storage Section none / Boss Battle j) Root of all Evil j.a) Construction Level Primitive Charge beta 15 Energy Megahurtz alpha 150 Energy Peripheral Seal gold 25 Energy Y-Amp gold 10 Energy Prankster Bit alpha 100 Energy COW j.b) Data Wraith Training Grid none / Lightcyclerace j.c) fCon Labs / The fCon team takes over none / Cutscene j.d) Command Module Megahurtz beta 130 Energy Viral Shield gold 30 Energy k) Digitizer Beam k.a) Not compatible none / Final Boss Battle ******************************************************************************* 10. Lightcycle Game ******************************************************************************* In this section I will list all power-ups, other things to know, the stats of the Lightcycles, the list of the levels, what is unlocked after winning a race and what those cryptic abbreviations mean listed under the 'Own Game' tab. Abbreviations I will use in this section: LC = Standard Lightcycle (of movie fame) SLC = Super Lightcycle Turbo = Turbospeed of LC/SLC Max Spd. = Maximum Speed Min Spd. = Minimum Speed Acc. = Acceleration Loss = Speedloss in Curves Well, now let's get down to business, shall we? a) Power Ups On almost every racemap you can pickup and use extras. Here I am going to introduce them: - Shield Your standard, basic energy shield. Can withstand one hit. With it you can break safely through any single wall, just be careful that there is not a second one behind. Sometimes you can also drive through enemy with it activated - Nitro Gives you a burst of speed for a short time, helpful to stay ahead of an oppenent and then use a quick 'jab' to run him into your wall. - Turbo Charger Similar to the Nitro, only that you will go even faster and that every LC on the grid will benefit from the activation of this. Can also be used for surprise attacks. - Wall Extender Works just like the food in the snakes game, extends the length of your wall permanently. - Power-Up Steal An enemy has a power up that you would like, well, just borrow it from him with this item. - Wall Spike Let the enemy get close and alongside you then activate this item to send out short walls to the left and right, which will make short work of him. - Rocket Powerful ordinance for your little LC. If it does not hit an enemy LC (and thereby destroying it) it will coutinue on until it hits a map wall. Will go through all layers of energy walls along its path. Good for use in thight spots. - Automatic Power-Up On activation of this device all power-ups that your opponents currently hold will be activated. Use this if you want to stop them from using a Wall-Spike or Rocket on you. Might also backfire if you hit a Turbo Charger and are not prepared for it. - Wall Reset This will destroy all walls that were created until the point you activated this item. If you mean to use it to get through an enemy wall though, be careful, as it takes some time until the walls disapper. b) Additional Information - Speedup fields On some maps you will find green fields that will accelerate your LC and every other LC on the map (no matter what kind) to a predetermined and high speed. - Slowdown fields These glow red and are the exact opposite of the above, meaning they will slow down every LC on grid to a predetermined speed. - Energy Blocks These are blocks that appear and vanish. If they are to appear the ground will shortly glow in a red square marking the location of the block. You should not run into them, and you should steer clear if you are on a location where a block will appear. - Turbospeed This statistic affects the speed at which a LC/SLC can move when it activates a Nitro or Turbocharger power-up. - Maximum Speed This is the highest attainable speed when moving over normal gird. This speed will be reached when you hold the forward key pressed. It varys from LC to LC, and also sometimes from map to map, depending on which speed the game is set. - Minimum Speed This is the slowest your LC will go. Braking will be activated when you hold the backwards key. - Acceleration The meaning of this should be clear, I hope. - Speedloss in curves Each time you turn you will loose a certain amount of speed. So turning often will slow you down considerably and make you an easy target for opposing Lightcyclists. - Colors The colors determine the level of the AI as well as the quality of the LC. Blue are the weakest bikes, then followed by yellow, red, green and finally purple. The numbers behind the color of the bike in the selection screen only denote different shades of that color to choose from and do not have any impact on LC performance. c) The Stats of the Lightcycles Note: These values are only for comparision of the different capabilities of the LCs. They were gained by measuring the length of the bars with a ruler off the monitor. I would say that this is just a rather crude approximization, but this was to give at least some kind of numerical comparision for the LCs, I hope it helps. (Okay, I know I should have gotten all the top and low speeds, but I was too lazy. There, are you now satisfied. :) ). All but the number for Speedloss go by the maxime 'The higher, the better'. For Speedloss a lower value is better. Type/Color of LC Turbo Max Spd. Min Spd. Acc. Loss LC Blue 1.5 1.5 1.5 1.5 1.5 LC Yellow 2.5 2.5 2.5 2.0 1.5 LC Red 3.5 3.0 3.5 2.5 1.5 LC Green 4.5 4.0 4.5 3.0 1.5 LC Purple 6.5 5.5 6.0 3.5 1.5 SLC Blue 5.0 5.5 5.0 6.0 10.0 SLC Yellow 6.0 6.5 6.0 7.0 10.0 SLC Red 7.0 7.5 7.0 8.0 10.0 SLC Green 8.0 8.5 8.0 9.0 10.0 SLC Purple 10.0 10.0 10.0 10.0 10.0 d) List of Racetracks Note: LC/SLC colums here will list which kind of colors are allowed. The Code column list the Levelcode display in the 'Own Game' tab. The Turtorial track can not be selected for an own game. The other four tracks that are listed with 'n/a' have no level code, because they feature a track each of their specific set (e.g. track 13, features tracks from 2, 7 and 17 in the following table). Name LC SLC Lives Waves Code 1) Tutorial All All 10 1 n/a 2) Newbie Authentification Blue None 8 3 LC02S01 3) Binary Zone All None 6 3 n/a 4) Format:/C Blue Blue 4 2 LC03S01 5) Conscripts Revenge All None 6 3 LC01S01 6) fCon Zone All All 6 3 n/a 7) Batchfile Graveyard All None 8 3 LC02S02 8) Curse of the Bitrate All None 10 5 LC01S02 9) Avatar Alley All All 2 3 LC03S02 10) Dead Man's Cache None Yellow 4 2 LC03S03 11) Green Hornets Green Green 6 3 LC04S01 12) Super Cycle Open None All 8 4 LC01S03 13) Old Zone All All 6 3 n/a 14) Purple Warez Purple None 4 2 LC04S03 15) Urban Area All All 6 3 n/a 16) Sourcecode Revelation None All 8 4 LC04S02 17) 01100101 None All 8 4 LC02S03 e) What is unlocked when Note: The numbers here corresspond to those found in the list above. 1) SLC Blue 2) Yellow LC 3) Shield / LC01 tracks 4) Nitro 5) Red LC / Turbocharger 6) LC 04 tracks 7) Yellow SLC / Wall Extender 8) Power-up Steal 9) Green LC / Wall Spike 10) Rocket 11) Red SLC / Automatic Power-up 12) None 13) Purple LC / LC02 tracks 14) None 15) Green SLC / LC03 tracks 16) None 17) Purple SLC / Wall Reset ******************************************************************************* 11.FAQ ******************************************************************************* Q: The game does not run on my system! The game does not run well on my System! Why? A: Sorry, but I am not a technician, either ask your way around on message boards or ask Monoliths Tech Support to aid you. Q: The Lightcycleraces are so hard, isn't there any way to skip them? A: You can skip them if you have the right version of the game. For the US version you will have to get the first patch from the official Tron 2.0 website. If you have the german version this patch is already implemented in the final release. Q: I can not supply energy to the bits in the EN12-82 system mainlevel. Why? A: This is a bug in the game. To avoid it you have to make sure that you did not skip the lightcyclerace in this mainlevel. Monolith may release a patch that will solve this problem. Q: The game is so hard, isn't there any way to make it easier? A: Well, first you should look in the options what difficulty level you use. Tron 2.0 allows you to change the difficulty level at any time during the game. If you are already playing on easy, well, then look for cheats in the net, although you should try to get through the game at least once without using cheats. Makes the feeling of having something accomplished that much nicer. ******************************************************************************* 12. Credits ******************************************************************************* Steve Lisberger - for creating the Tron universe Syd Mead - for designing the look and feel of Tron The Monolith team - for creating an outstanding game The GVim team - for creating a superb text editor () The 'Alt + Tab' shortcut - for making writing this thing much easier and last but (hopefully) not least me - for writing up this Walkthrough ******************************************************************************* 13. Changelog ******************************************************************************* - Version v0.3.0 -- October 6th, 2003 Well, this version apparently was not meant to be. Due to a stupid mistake I made I saved an empty textfile over some of the text I've had already written. Life can sometimes be frustrating, can it not? Still, I had a good laugh on it afterwards. Also it kept me from formatting what I had written up to that point (it was chaotic to say the least) and I could with a new one where I also concentrated on the formatting right away. All in all it was not that bad. :) - Version v1.0.0 -- October 14th, 2003 Release Candidate Version This is the first version I released. No changes have been made to the Walkthrough as of yet. - Version v1.1.0 -- October 16th, 2003 First Patch (everyone loves a patch, at least if it fixes problems) :) - New section (10) for the single player Lightcycleraces was added - Reworked Section 3 'Basic Game Mechanics' - Reworked Section 7 'Notes for the Walkthrough' - Reworked document width to 79 characters per line - Corrected minor mistakes found during reworking - Minor formatting adjustments in Section 9 ******************************************************************************* 14. Contact Information ******************************************************************************* If you want to contact me, you can do this by one of the following methods: E-Mail: DeathbrngNOSPAMHERE@NOSPAMHEREaol.com (remove the NOSPAMHERE) and please refer to the Tron 2.0 Walkthrough in the subject line AIM: SN Deathbrng ICQ: 150971635 (mostly invisible / Authorization required) End of Line
http://www.gamefaqs.com/pc/529599-tron-2-0/faqs/26235
CC-MAIN-2015-11
refinedweb
20,135
79.4
Results 1 to 4 of 4 - Join Date - Dec 2011 - 2 - Thanks - 0 - Thanked 0 Times in 0 Posts Chunking Returned MySQL Dataset Into Multiple Arrays I'm working to compute the median hours to resolution for tech support workers in my organization. I have pretty good idea of what needs to happen to compute the median, I'm just not sure how to get there. I currently have a mysql dataset being returned with values similar to the data below. Code: hoursResolveDiff,lastname 0,Astudillo 3,Astudillo 7,Astudillo 7,Astudillo 8,Astudillo 8,Astudillo 22,Astudillo 25,Astudillo 46,Astudillo 51,Astudillo 103,Astudillo 117,Astudillo 145,Astudillo 148,Astudillo 169,Astudillo 173,Astudillo 334,Astudillo 339,Astudillo 0,Blakeman 0,Blakeman 0,Blakeman 0,Blakeman 0,Blakeman 0,Blakeman 1,Blakeman 2,Blakeman 2,Blakeman 2,Blakeman 3,Blakeman 4,Blakeman 4,Blakeman 5,Blakeman 5,Blakeman 5,Blakeman 6,Blakeman 6,Blakeman 6,Blakeman 6,Blakeman 7,Blakeman 7,Blakeman 7,Blakeman 10,Blakeman ... Any help is appreciated. I'm really having trouble getting started. Thanks for your time. Scott - Join Date - May 2002 - Location - Marion, IA USA - 6,295 - Thanks - 4 - Thanked 84 Times in 83 Posts Are you trying to compute the median for each person or a median for everybody? If for each person why not just query for a list of people and use that result to query the other data you need for computations and then do the computation for each. Something like: 1. Get a list of all the tech support personnel 2. Use that list in a foreach loop to query each person for the list of hours they worked 3. Compute the medianSpookster CodingForums Supreme Overlord All Hail Spookster - Join Date - Dec 2011 - 2 - Thanks - 0 - Thanked 0 Times in 0 Posts Yes, I'm trying to compute the median of each individual. I've got the first part worked out. $hours is now an array of arrays for each worker like so... Code: Array ( [] => Array ( [0] => 0 [1] => 0 [2] => 0 [3] => 0 [4] => 1 [5] => 3 [6] => 122 [7] => 211 [8] => 212 [9] => 256 ) [Astudillo] => Array ( [0] => 0 [1] => 3 [2] => 7 [3] => 7 [4] => 8 [5] => 8 [6] => 22 [7] => 25 [8] => 46 [9] => 51 [10] => 103 [11] => 117 ) [Blakeman] => Array ( [0] => 0 [1] => 0 [2] => 0 [3] => 0 [4] => 0 [5] => 0 [6] => 1 [7] => 2 [8] => 2 [9] => 2 [10] => 3 ) [Blank] => Array ( [0] => 1 [1] => 5 [2] => 6 [3] => 6 [4] => 20 [5] => 25 [6] => 28 [7] => 93 [8] => 115 [9] => 119 [10] => 170 [11] => 190 [12] => 260 ) ) PHP Code: foreach( $hours as $k=>$v ) { echo 'Median hours for '.$k.' is '.computeMedian($v).'<br>'; } function computeMedian(Array $v) { $c = count($v); $hl = floor($c/2); $hu = ceil($c/2); if( $hl == $hu ) return $v[$hl]; return (($hl+$hu)/2); } - Join Date - May 2002 - Location - Marion, IA USA - 6,295 - Thanks - 4 - Thanked 84 Times in 83 Posts It would be easier if you didn't try to rework the data after getting it from the database. You didn't provide any information on your database structure but if you did this correctly you should be able to do something as simple as this: In place of the hardcoded array you would loop through the resultset from the query in much the same way. PHP Code: <?php $employees = array( array( 'name' => 'john', 'hours' => array(10, 20, 30, 40) ), array( 'name' => 'jack', 'hours' => array(15, 25, 35, 45) ) ); var_dump($employees); foreach ($employees as $employee) { $totalHours = 0; $totalDays = 0; foreach ($employee['hours'] as $hours) { $totalHours += $hours; $totalDays++; } echo "Name: " . $employee['name'] . " Median Hours: " . $totalHours / $totalDays . "</br>"; } ?>PHP Code: array 0 => array 'name' => string 'john' (length=4) 'hours' => array 0 => int 10 1 => int 20 2 => int 30 3 => int 40 1 => array 'name' => string 'jack' (length=4) 'hours' => array 0 => int 15 1 => int 25 2 => int 35 3 => int 45 Name: john Median Hours: 25 Name: jack Median Hours: 30 Last edited by Spookster; 12-09-2011 at 10:41 PM.Spookster CodingForums Supreme Overlord All Hail Spookster
http://www.codingforums.com/php/245933-chunking-returned-mysql-dataset-into-multiple-arrays.html
CC-MAIN-2015-48
refinedweb
684
58.96
Split a File as a Stream Split a File as a Stream Curious about when you would actually use the splitAsStream method? Here's a use case of splitting a file into chunks that can be processed as streams. Join the DZone community and get the full member experience.Join For Free I recently discussed how the new (@since 1.8) method splitAsStream in the class Pattern works on character sequences, reading only as much as needed by the stream and not running ahead with pattern matching, creating all the possible elements and returning them as a stream. This behavior is the true nature of streams, and it is the way it has to be to support high-performance applications. In this article, I will show a practical application of splitAsStream, where it really makes sense to process the stream and not just split the whole string into an array and work on that. The application, as you may have guessed from the title of the article, is splitting up a file along some tokens. A file can be represented as a CharSequence as long it is not longer than 2GB. The limit comes from the fact that the length of a CharSequence is an int value, and that is 32 bits in Java. The file length is long, which is 64-bit. Since reading from a file is much slower than reading from a string that is already in memory, it makes sense to use the laziness of stream handling. All we need is a character sequence implementation that is backed up by a file. If we can have that, we can write a program like the following: public static void main(String[] args) throws FileNotFoundException { Pattern p = Pattern.compile("[,\\.\\-;]"); final CharSequence splitIt = new FileAsCharSequence( new File("path_to_source\\SplitFileAsStream.java")); p.splitAsStream(splitIt).forEach(System.out::println); } This code does not read any part of the file — that is not needed yet — and assumes that the implementation FileAsCharSequence is not reading the file greedily. The class FileAsCharSequence implementation can be: package com.epam.training.regex; import java.io.*; public class FileAsCharSequence implements CharSequence { private final int length; private final StringBuilder buffer = new StringBuilder(); private final InputStream input; public FileAsCharSequence(File file) throws FileNotFoundException { if (file.length() > (long) Integer.MAX_VALUE) { throw new IllegalArgumentException("File is too long to handle as character sequence"); } this.length = (int) file.length(); this.input = new FileInputStream(file); } @Override public int length() { return length; } @Override public char charAt(int index) { ensureFilled(index + 1); return buffer.charAt(index); } @Override public CharSequence subSequence(int start, int end) { ensureFilled(end + 1); return buffer.subSequence(start, end); } private void ensureFilled(int index) { if (buffer.length() < index) { buffer.ensureCapacity(index); final byte[] bytes = new byte[index - buffer.length()]; try { int length = input.read(bytes); if (length < bytes.length) { throw new IllegalArgumentException("File ended unexpected"); } } catch (IOException e) { throw new RuntimeException(e); } try { buffer.append(new String(bytes, "utf-8")); } catch (UnsupportedEncodingException ignored) { } } } } This implementation reads only that many bytes from the file as needed for the last, actual method call to charAt or subSequence. If you are interested, you can improve this code to keep only the bytes in memory that are really needed and delete bytes that were already returned to the stream. To know which bytes are not needed, a good hint is that the splitAsStream never touches any character that has a smaller index than the first ( start) argument of the last call to subSequence. However, if you implement the code in a way that it throws the characters away and fails if anyone wants to access a character that was already thrown, then it will not truly implement the CharSequence interface, though it still may work well with splitAsStream so long as long the implementation does not change and it starts needing some already passed characters. (Well, I am not sure, but it may also happen in a case where we use some complex regular expression as a splitting pattern.) Happy coding! Published at DZone with permission of Peter Verhas , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/split-a-file-as-stream?fromrel=true
CC-MAIN-2019-47
refinedweb
700
54.32
have always liked to complain that Python is slow. People have always liked to complain that XML processing is slow. I've always thought both complaints are typically misguided. Neither Python nor XML emerged out of the need for ultrasonic speed at runtime. Python emerged as an expressive language for programming applications, and XML as an expressive system for building data representation into applications. In both cases the goal for the resulting applications is ease of maintenance. This ease primarily comes from the expressiveness that Python and XML allow. One should expect that such gain in productivity would be matched by a corresponding loss in some area, and sometimes the deficit comes from performance. Complaints about the downsides of this tradeoff often stem from premature optimization, when developers insist on basing key decisions on meaningless benchmarks rather than considering the entire profile under which an application operates. All that having been said, there are certainly real world cases where one has chosen Python and XML tools with an open mind, and then decided that a performance boost is in order upon careful profiling. There are many ways to squeeze performance out of Python XML applications, some of which take advantage of Python or XML's expressive power. Each application will need its own mix of practical optimizations, but in this article I shall present a modest technique that might be handy to have in your war chest. Dictionaries are perhaps the core data structure most dear to the heart of Python. The basic object model of Python is largely constructed upon dictionaries. They have been optimized to the utmost by some of the brightest minds in the computer industry. Conventional wisdom suggests that if you need to optimize a bit of code in Python, you should take advantage of the excellent Python/C interface. A better bit of advice is that you might first want to check whether you are taking full advantage of dictionaries. One detail to keep in mind is that dictionaries are more suited to fast lookup operations than memory savings: again there is a tradeoff, space for speed in this case. My pet description of XML's fundamental data model is "labeled strings in nested packages". The labeling and nesting are what differentiate XML from good old comma or tab-delimited value and tabular ("square") data models such as spreadsheets and classic SQL databases. This same labeling and nesting makes for a natural accommodation of data from XML in Python dictionaries. Sometimes writing tortured SAX code for complex processing just doesn't cut it as an alternative to loading documents into heavyweight DOM or the like for XPath access. You may be better off writing SAX code that is just sophisticated enough to shape the key bits from XML into a structure of nested dictionaries. Take for example the address labels format I've been using as an example in some earlier installments, reproduced in listing 1. <> Well, it's not an exact reproduction. In earlier articles I used the character …, mistakenly thinking that it was the ellipsis in Unicode. In fact, it is NEL, an end of line character used most often in IBM mainframe data. I erroneously looked up a table based on Windows code pages rather than Unicode, which illustrates all too well the danger I cautioned against earlier of confusing character sets. Starting from this article I'm using the correct character …. As an aside, there has been a lot of controversy in the XML community because NEL, the character I happened to mistake for ellipsis, has been added as a standard white space character in a revision that will probably become XML 1.1. Some claim this breaks backward compatibility and is not worth the convenience of a (relatively) few mainframe users. The fact that … is whitespace from the Unicode point of view explains what I thought was confusing treatment of the character by Python string processing APIs. … … … But back to the main topic of the article. Listing 2 contains some SAX code to create a handy dictionary of dictionaries from labels.xml. Python 2.x is the only requirement: I used Python 2.3.3. import sys from xml import sax from textnormalize import text_normalize_filter #Subclass from ContentHandler in order to gain default behaviors class label_dict_handler(sax.ContentHandler): #Define constants for important states CAPTURE_KEY = 1 CAPTURE_LABEL_ITEM = 2 CAPTURE_ADDRESS_ITEM = 3 def __init__(self): self.label_dict = {} #Track the item being constructed in the current dictionary self._item_to_create = None self._state = None return def startElement(self, name, attributes): if name == u"label": self._curr_label = {} if name == u"address": self._address = {} if name == u"name": self._state = self.CAPTURE_KEY if name == u"quote": self._item_to_create = name self._state = self.CAPTURE_LABEL_ITEM if name in [u"street", u"city", u"state"]: self._item_to_create = name self._state = self.CAPTURE_ADDRESS_ITEM return def endElement(self, name): if name == u"address": self._curr_label["address"] = self._address if name in [u"quote", u"name", u"street", u"city", u"state"]: self._state = None return def characters(self, text): if self._state == self.CAPTURE_KEY: self.label_dict[text] = self._curr_label curr_dict = None if self._state == self.CAPTURE_ADDRESS_ITEM: curr_dict = self._address if self._state == self.CAPTURE_LABEL_ITEM: curr_dict = self._curr_label print repr(text), curr_dict if curr_dict is not None: if curr_dict.has_key(self._item_to_create): curr_dict[self._item_to_create] += text else: curr_dict[self._item_to_create] = text return if __name__ == "__main__": parser = sax.make_parser() downstream_handler = label_dict_handler() #upstream: the parser; downstream: the next handler in the chain filter_handler = text_normalize_filter(parser, downstream_handler) #XMLFilterBase is designed so that the filter takes on much of the #interface of the parser itself, including the "parse" method filter_handler.parse(sys.argv[1]) label_dict = downstream_handler.label_dict textnormalize is a SAX filter that I contributed to The Python Cookbook. A SAX parser can report contiguous text using multiple characters events. In other words, given the following XML document: textnormalize characters <spam>abc</spam> The text "abc" could technically be reported as three characters events: one for the "a", one for the "b" and a third for the "c". Such an extreme case is unlikely in real life, but in any case textnormalize ensures that all such broken-up events are reported to downstream SAX handlers in the manner most developers would expect. In the above case, the filter would consolidate the three characters events into a single one for the entire text node "abc". This frees downstream handlers from implementing code to deal with broken-up text nodes, which can be rather cumbersome when mixed into typical SAX-style state machine logic. I've reproduced the filter class as listing 3 for sake of convenience. In listing 2 label_dict_handler is my special SAX handler for the labels format. It's a somewhat rough case and does not go into the detail required to handle some edge cases, but it does accomplish the required task. It constructs a dictionary of labels, each of which is a dictionary of label details. The address details are given as yet another nested set of dictionaries. The main module code constructs the chain from the parser through the textnormalize filter to the label_dict_handler handler. I ran this code and prepared to experiment with the resulting dictionary as follows: label_dict_handler $ python -i listing2.py labels.xml Python's standard pprint module is a friendly way to inspect the contents of dictionaries: pprint >>> import pprint >>> pprint.pprint(label_dict) {u'Ezra Pound': {'address': {u'city': u'Hailey', u'state': u'ID', u'street': u'45 Usura Place'}}, u'Thomas Eliot': {'address': {u'city': u'Stamford', u'state': u'CT', u'street': u'3 Prufrock Lane'}, u'quote': u'\n \n Midwinter Spring is its own season\u2026\n '}} >>> Notice the proper maintenance of the Unicode from the XML document, including the tricky ellipsis character. In this form, the labels data is easy to query: >>> #Get the Eliot quote ... >>> print repr(label_dict[u"Thomas Eliot"].get(u"quote")) u'\n \n Midwinter Spring is its own season\u2026\n ' >>> #How about a Pound quote? ... >>> print repr(label_dict[u"Ezra Pound"].get("quote")) None >>> #Tell me which labels do have quotes ... >>> print [ k for k, v in label_dict.iteritems() if v.has_key(u"quote") ] [u'Thomas Eliot'] The equivalent XPath to the last sample query is: /labels/label[quote]/name The Python dictionary query is certainly a bit more wordy, and it uses Python trickery such as list comprehensions; but it's more efficient than most XPath implementations, and if you're using Python to a fair extent you're likely to be familiar with all the neat idioms anyway. Notice that I use the method iteritems to spool out the contents of the dictionary rather than the more common items. iteritems requires Python 2.2 and saves memory by creating the key/value pairs for each item in the dictionary on demand rather than all at once. On older Python versions just replace with items. iteritems items The dictionary data is just as easily manipulated although it would take some additional work to save any such updates back into the XML format. Also in Python and XML Processing Atom 1.0 Should Python and XML Coexist? EaseXML: A Python Data-Binding Tool More Unicode Secrets Unicode Secrets This article's fare is not a technique for all seasons. For one thing, even a trimmed down SAX handler that does nothing but create dictionaries can become rather complicated and tricky to debug. The code to generate equivalent data bindings and Python structures in any of the many tools I've examined in this column is almost trivial compared to the SAX presented in this article. But using plain SAX to construct dictionaries is likely to offer more raw speed and less memory overhead. It also has the advantage of requiring no third party modules. One possible variation is to use SAX to construct specialized objects rather than dictionaries, which makes the query code a bit neater, but does reintroduce some overhead. It has been a slow couple of months in the Python/XML community. One interesting development has been the announcement of Rx4Rdf. As its web page says: Rx4RDF is a specification and reference implementation for querying, transforming and updating W3C's RDF by specifying a deterministic mapping of the RDF model to the XML data model defined by XPath. Rx4RDF shields developers from the complexity of RDF by enabling you to use familiar XML technologies like XPath, XSLT and XUpdate. We call their RDF equivalents RxPath, RxSLT, and RxUpdate respectively. Browsing through the code, Rx4RDF appears to start with various modules from 4Suite, but combines and enhances these in very interesting ways for an RDF-centric approach to data processing. I especially found intriguing the description of Racoon, which sounds like a variation on the idea behind the popular Cocoon server, but using RDF and Python rather than XML/XSLT and Java. © , O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.xml.com/pub/a/2004/01/14/py-xml.html
CC-MAIN-2016-07
refinedweb
1,832
55.03
Why Ruby? When I initially started learning Ruby, I read a lot of articles talking about the problems with Ruby. "It's a dead language." "Nobody uses Ruby anymore." "Just learn Python. It's faster anyways." While these comments didn't hinder me at all, they did make me question the reason for learning Ruby: why didn't my bootcamp choose a faster more popular language that was still beginner friendly like Python? Needless to say, I still attended the bootcamp and learned Ruby, and I am so happy I did. In The Beginning There Was C, + More Although I never graduated university (life has a habit of getting busy), I did work towards a software engineering degree. My university, I'm sure like many others', introduced programming with one of the more unforgiving languages, C++. I remember not even worrying about the language: I didn't know the difference between compiled and interpreted languages. Weird syntax? Yeah, that's just programming. Things got difficult when I started my second semester computer science course. We were writing games and dynamic memory allocation didn't come naturally to me. Bootcamp After I dropped out I spent some time learning HTML, CSS, and then JavaScript. Thankfully, JavaScript has a lot of similarities with C++ when it comes to syntax. Sure types aren't a thing (I know they can be #TypeScript), but I felt fine with the loops and conditional statements, and I sure do love semicolons. In 2020 I decided to take a leap of faith and applied for Flatiron School's software engineering program. The curriculum would include JavaScript, Ruby, React, and Rails. I was very excited to learn how to build real-world applications. My first project was using an API to make a CLI application, and it was so great to finally understand what an API was. The Saving Gem When I started learning Ruby, I didn't think much of the differences between it, JavaScript or C++. Sure, ending blocks of code instead of using curly brackets was strange and leaving out parentheses was not something I did. Honestly, I still coded very much in the C++ way my university taught me: for/while loops, the parentheses, etc. But then something wonderful happened. I learned about enumerables. A beautiful chord was struck. I could easily alter data with a few simple lines of code. I could check if every element in an array was even with .all?. I could easily see if I had a user with the name Voltron with .any?. I could even change every element in an array to the same thing with .collect. I started to the see the beauty of doing things the Ruby way. The Epiphany After diving deep into JavaScript and learning about higher-order functions, I realized other languages could do similar things to Ruby. .forEach and .map for .each and .map. Even after realizing this I still felt Ruby was more magical. Maybe I felt this way because I could simply express what I wanted the computer to do with fewer lines of code. Maybe Ruby was my first real language. Regardless of why, I was a Rubyist. Again... Why Ruby? Fast forward to present day post graduation: the job search. Searching for and applying to jobs for months I have realized that while definitely not a dead language and still highly in demand, Ruby is not the most sought after language. And most of the time teams that are looking for Ruby/Rails devs are looking for senior engineers. I needed to adapt and broaden my skills. I decided that while I would continue perfecting my Ruby skills and make apps with Rails and JavaScript, I would also learn a new language: Python. Easy Right? I made this decision mostly based on the fact that I have seen Python used in multiple FOSS project, and I had heard it was easy to automate tasks using the language. I started learning using "Automate the Boring Stuff". I hadn't gotten very far in when I decided that Python wasn't right for me. Too many colons, no visual ending to block of code besides the lack of white space, and don't get me started on needing to type exit() vs just exit in the Python Interpreter. Ruby had spoiled me. In my moment of contempt for such a foul looking language I decided to see if Python was indeed faster than Ruby! Surely Ruby 3.0.0 would give the "two snakes" a run for their money. The Race Initially I wanted to see how fast the languages could loop through a range and do some calculations while printing the answers. Interestingly enough I learned that Ruby's puts and Python's print() beats Ruby's puts but not its Ruby def do_the_thing(num) success = [] no_success = [] while num > 0 if rand(num) >= num - 10 success.push(num) else no_success.push(num) end num -= 1 end return success.size end sum = 0 100.times do sum += do_the_thing(1000000) end puts sum Python import random def do_the_thing(num): success = [] no_success = [] while num > 0: if random.randint(1, num) >= num - 10: success.append(num) else: no_success.append(num) num -= 1 return len(success) sum = 0 for x in range(1, 100): sum += do_the_thing(1000000) print(sum) C++ #include <iostream> #include <vector> #include <ctime> using namespace std; int doTheThing(int num) { srand(time(NULL)); vector < int > success; vector < int > noSuccess; while(num > 0) { if((rand() % num + 1) >= num - 10) { success.push_back(num); } else { noSuccess.push_back(num); } num--; } return success.size(); } int main() { int sum = 0; for(int i = 0; i <= 100; i++) { sum += doTheThing(1000000); } cout << sum << endl; return 0; } JavaScript function doTheThing(num) { let success = []; let noSuccess = []; while (num > 0) { if((Math.floor(Math.random() * Math.floor(num))) >= num - 10) { success.push(num); } else { noSuccess.push(num); } num -= 1; } return success.length; } let sum = 0; for(let i = 0; i <= 100; i++) { sum += doTheThing(1000000); } console.log(sum) The Results Thanks to Linux's time command, I was able to time how long a given command would take (I used node for the JS and I combined the compilation time and run time for C++). JavaScript definitely surprised me the most with how speedy quick it was, and I didn't think Python would be as slow as it was. It might have something to do with including that random module (I had to change the name of the file to rand because the interpreter didn't like the file having the same name as the module). The Real Results Ultimately I know every programming language has its pros and cons and obviously generating a random number isn't the best benchmark for a language. But I definitely had fun writing the same code in 4 different languages and writing C++ was an especially exciting trip down memory lane. It's fun knowing that each language has it's own quirks and seeing C++ after learning Ruby and JS and a little bit of Python was very eye-opening. I highly recommend learning another language once you've got your first one down solid. You will learn different ways of doing things and if one thing is certain it is that we should always be learning in this industry! Discussion (1) Interesting post, Ruby 3 is JIT now, but I read some post saying is really not faster for bigger projects. Heres my solution and time, Nim lang code, looks similar to Python or Ruby, runs like C or C++, Nim compiles to JavaScript too (like TypeScript), uses automatic deterministic compile-time memory management. Linux timesays: 🙂👍
https://practicaldev-herokuapp-com.global.ssl.fastly.net/szam/ruby-vs-python-vs-c-vs-javascript-h56
CC-MAIN-2021-49
refinedweb
1,276
72.26
It binds JS to your wasm! In my last wasm article I talked about how to compile a C library to wasm so you can use it on the web. One thing that stood out to me (and to many readers) is the crude and slightly awkward way you have to manually declare which functions of your wasm module you are using. To refresh your mind, this is the code snippet I am talking about: const api = { version: Module.cwrap('version', 'number', []), create_buffer: Module.cwrap('create_buffer', 'number', ['number', 'number']), destroy_buffer: Module.cwrap('destroy_buffer', '', ['number']), }; Here we declare the names of the functions that we marked with EMSCRIPTEN_KEEPALIVE, what their return types are, and what the types of their arguments are. Afterwards we can use the functions on the api object to invoke these functions. However, using wasm this way doesn't support strings and requires you to manually move chunks of memory around which makes many library APIs very tedious to use. Isn't there a better way? Why yes there is, otherwise what would this article be about? C++ name mangling While the developer experience would be reason enough to build a tool that helps with these bindings, there's actually a more pressing reason: When you compile C or C++ code, each file is compiled separately. Then a linker takes care of munging all these so-called object files together and turning them into a wasm file. With C, the names of the functions are still available in the object file for the linker to use. All you need to be able to call a C function is the name, which we are providing as a string to cwrap(). C++ on the other hand supports function overloading, meaning you can implement the same function multiple times as long as the signature is different (e.g. differently typed parameters). At the compiler level, a nice name like add would get mangled into something that encodes the signature in the function name for the linker. As a result, we wouldn't be able to look up our function with its name anymore. Enter embind embind is part of the Emscripten toolchain and provides you with a bunch of C++ macros that allow you to annotate C++ code. You can declare which functions, enums, classes or value types you are planning to use from JavaScript. Let's start simple with some plain functions: #include <emscripten/bind.h> using namespace emscripten; double add(double a, double b) { return a + b; } std::string exclaim(std::string message) { return message + "!"; } EMSCRIPTEN_BINDINGS(my_module) { function("add", &add); function("exclaim", &exclaim) } Compared to my previous article, we are not including emscripten.h anymore, as we don't have to annotate our functions with EMSCRIPTEN_KEEPALIVE anymore. Instead we have an EMSCRIPTEN_BINDINGS section in which we list the names under which we want to expose our functions to JavaScript. To compile this file, we can use the same setup (or, if you want, the same Docker image) as in the previous article. To use embind, we add the --bind flag: $ emcc --bind -O3 add.cpp Now all that's left is whipping up an HTML file that loads our freshly created wasm module: <script src="/a.out.js"></script> <script> Module.onRuntimeInitialized = _ => { console.log(Module.add(1, 2.3)); console.log(Module.exclaim("hello world")); }; </script> As you can see, we aren't using cwrap() anymore. This just works straight out of the box. But more importantly, we don't have to worry about manually copying chunks of memory to make strings work! embind gives you that for free, along with type checks: ![ DevTools errors when you invoke a function with the wrong number of arguments or the arguments have the wrong type]() This is pretty great as we can catch some errors early instead of dealing with the occasionally quite unwieldy wasm errors. Objects Many JavaScript constructors and functions use is options objects. It's a nice pattern in JavaScript, but extremely tedious to realize in wasm manually. embind can help here, too! For example, I came up with this incredibly useful C++ function that processes my strings, and I urgently want to use it on the web. Here is how I did that: #include <emscripten/bind.h> #include <algorithm> using namespace emscripten; struct ProcessMessageOpts { bool reverse; bool exclaim; int repeat; }; std::string processMessage(std::string message, ProcessMessageOpts opts) { std::string copy = std::string(message); if(opts.reverse) { std::reverse(copy.begin(), copy.end()); } if(opts.exclaim) { copy += "!"; } std::string acc = std::string(""); for(int i = 0; i < opts.repeat; i++) { acc += copy; } return acc; } EMSCRIPTEN_BINDINGS(my_module) { value_object<ProcessMessageOpts>("ProcessMessageOpts") .field("reverse", &ProcessMessageOpts::reverse) .field("exclaim", &ProcessMessageOpts::exclaim) .field("repeat", &ProcessMessageOpts::repeat); function("processMessage", &processMessage); } I am defining a struct for the options of my processMessage() function. In the EMSCRIPTEN_BINDINGS block, I can use value_object to make JavaScript see this C++ value as an object. I could also use value_array if I preferred to use this C++ value as an array. I also bind the processMessage() function, and the rest is embind magic. I can now call the processMessage() function from JavaScript without any boilerplate code: console.log(Module.processMessage( "hello world", { reverse: false, exclaim: true, repeat: 3 } )); // Prints "hello world!hello world!hello world!" Classes For completeness sake, I should also show you how embind allows you to expose entire classes, which brings a lot of synergy with ES6 classes. You can probably start to see a pattern by now: #include <emscripten/bind.h> #include <algorithm> using namespace emscripten; class Counter { public: int counter; Counter(int init) : counter(init) { } void increase() { counter++; } int squareCounter() { return counter * counter; } }; EMSCRIPTEN_BINDINGS(my_module) { class_<Counter>("Counter") .constructor<int>() .function("increase", &Counter::increase) .function("squareCounter", &Counter::squareCounter) .property("counter", &Counter::counter); } On the JavaScript side, this almost feels like a native class: <script src="/a.out.js"></script> <script> Module.onRuntimeInitialized = _ => { const c = new Module.Counter(22); console.log(c.counter); // prints 22 c.increase(); console.log(c.counter); // prints 23 console.log(c.squareCounter()); // prints 529 }; </script> What about C? embind was written for C++ and can only be used in C++ files, but that doesn't mean that you can't link against C files! To mix C and C++, you only need to separate your input files into two groups: One for C and one for C++ files and augment the CLI flags for emcc as follows: $ emcc --bind -O3 --std=c++11 a_c_file.c another_c_file.c -x c++ your_cpp_file.cpp Conclusion embind gives you great improvements in the developer experience when working with wasm and C/C++. This article does not cover all the options embind offers. If you are interested, I recommend continuing with embind's documentation. Keep in mind that using embind can make both your wasm module and your JavaScript glue code bigger by up to 11k when gzip'd — most notably on small modules. If you only have a very small wasm surface, embind might cost more than it's worth in a production environment! Nonetheless, you should definitely give it a try. RSS or Atom feed and get the latest updates in your favorite feed reader!Subscribe to our
https://developers.google.cn/web/updates/2018/08/embind?hl=ko
CC-MAIN-2019-09
refinedweb
1,198
64.3
On Fri, Jul 23, 2021 at 04:02:26PM +0200, Claudio Fontana wrote: > On 7/23/21 3:50 PM, Jose R. Ziviani wrote: > > On Fri, Jul 23, 2021 at 11:41:19AM +0200, Claudio Fontana wrote: > >> On 7/23/21 12:09 AM, Jose R. Ziviani wrote: > >>> When a module is not found, specially accelerators, QEMU displays > >>> a error message that not easy to understand[1]. This patch improves > >>> the readability by offering a user-friendly message[2]. > >>> > >>> This patch also moves the accelerator ops check to runtine (instead > >>> of the original g_assert) because it works better with dynamic > >>> modules. > >>> > >>> [1] qemu-system-x86_64 -accel tcg > >>> ERROR:../accel/accel-softmmu.c:82:accel_init_ops_interfaces: assertion > >>> failed: > >>> (ops != NULL) > >>> Bail out! ERROR:../accel/accel-softmmu.c:82:accel_init_ops_interfaces: > >>> assertion failed: (ops != NULL) > >>> 31964 IOT instruction (core dumped) ./qemu-system-x86_64 ... > >>> > >>> [2] qemu-system-x86_64 -accel tcg > >>> accel-tcg-x86_64 module is missing, install the package or config the > >>> library path correctly. > >>> > >>> Signed-off-by: Jose R. Ziviani <jziviani@suse.de> > >>> --- > >>> accel/accel-softmmu.c | 5 ++++- > >>> util/module.c | 14 ++++++++------ > >>> 2 files changed, 12 insertions(+), 7 deletions(-) > >>> > >>> diff --git a/accel/accel-softmmu.c b/accel/accel-softmmu.c > >>> index 67276e4f52..52449ac2d0 100644 > >>> --- a/accel/accel-softmmu.c > >>> +++ b/accel/accel-softmmu.c > >>> @@ -79,7 +79,10 @@ void accel_init_ops_interfaces(AccelClass *ac) > >>> * all accelerators need to define ops, providing at least a > >>> mandatory > >>> * non-NULL create_vcpu_thread operation. > >>> */ > >>> - g_assert(ops != NULL); > >>> + if (ops == NULL) { > >>> + exit(1); > >>> + } > >>> + > >> > >> > >> Ah, again, why? > >> This change looks wrong to me, > >> > >> the ops code should be present when ops interfaces are initialized: > >> it should be a code level assertion, as it has to do with the proper order > >> of initializations in QEMU, > >> > >> why would we want to do anything else but to assert here? > >> > >> Am I blind to something obvious? > > > > Hello! > > > > Thank you for reviewing it! > > > > The problem is that if your TCG module is not installed and you start > > QEMU like: > > > > ./qemu-system-x86_64 -accel tcg > > > > You'll get the error message + a crash with a core dump: > > > > accel-tcg-x86_64 module is missing, install the package or config the > > library path correctly. > > ** > > ERROR:../accel/accel-softmmu.c:82:accel_init_ops_interfaces: assertion > > failed: (ops != NULL) > > Bail out! ERROR:../accel/accel-softmmu.c:82:accel_init_ops_interfaces: > > assertion failed: (ops != NULL) > > [1] 5740 IOT instruction (core dumped) ./qemu-system-x86_64 -accel tcg > > > > I was digging a little bit more in order to move this responsibility to > > module.c but there isn't enough information there to safely exit() in > > all situations that a module may be loaded. As Gerd mentioned, more work > > is needed in order to achieve that. > > > > However, it's not nice to have a crash due to an optional module missing. > > It's specially confusing because TCG has always been native. Considering > > also that we're already in hard freeze for 6.1, I thought to have this > > simpler check instead. > > > > What do you think if we have something like: > > > > /* FIXME: this isn't the right place to handle a missing module and > > must be reverted when the module refactoring is completely done */ > > #ifdef CONFIG_MODULES > > if (ops == NULL) { > > exit(1); > > } > > #else > > g_assert(ops != NULL); > > #endif > > > > Regards! > > > For the normal builds (without modular tcg), this issue does not appear right? Yes, but OpenSUSE already builds with --enable-modules, we've already been shipping several modules as optional RPMs, like qemu-hw-display-virtio-gpu for example. I sent a patch some weeks ago to add "--enable-tcg-builtin" in the build system but there're more work required in that area as well. > So maybe there is no pressure to change anything for 6.1, and we can work on > the right solution on master? > > Not sure how we consider this feature for 6.1, I guess it is still not a > supported option, > (is there any CI for this? Probably not right?), > > so I would consider building modular tcg in 6.1 as "experimental", and we can > proceed to do the right thing on master? For OpenSUSE Tumbleweed, when we release QEMU 6.1, I can add my patch to "--enable-tcg-builtin" for downstream only. I'm fine with it too. Thank you!!! > > Thanks, > > Claudio > > > > >> > >>> if (ops->ops_init) { > >>> ops->ops_init(ops); > >>> } > >>> diff --git a/util/module.c b/util/module.c > >>> index 6bb4ad915a..268a8563fd 100644 > >>> --- a/util/module.c > >>> +++ b/util/module.c > >>> @@ -206,13 +206,10 @@ static int module_load_file(const char *fname, bool > >>> mayfail, bool export_symbols > >>> out: > >>> return ret; > >>> } > >>> -#endif > >>> > >>> bool module_load_one(const char *prefix, const char *lib_name, bool > >>> mayfail) > >>> { > >>> bool success = false; > >>> - > >>> -#ifdef CONFIG_MODULES > >>> char *fname = NULL; > >>> #ifdef CONFIG_MODULE_UPGRADES > >>> char *version_dir; > >>> @@ -300,6 +297,9 @@ bool module_load_one(const char *prefix, const char > >>> *lib_name, bool mayfail) > >>> > >>> if (!success) { > >>> g_hash_table_remove(loaded_modules, module_name); > >>> + fprintf(stderr, "%s module is missing, install the " > >>> + "package or config the library path " > >>> + "correctly.\n", module_name); > >>> g_free(module_name); > >>> } > >>> > >>> @@ -307,12 +307,9 @@ bool module_load_one(const char *prefix, const char > >>> *lib_name, bool mayfail) > >>> g_free(dirs[i]); > >>> } > >>> > >>> -#endif > >>> return success; > >>> } > >>> > >>> -#ifdef CONFIG_MODULES > >>> - > >>> static bool module_loaded_qom_all; > >>> > >>> void module_load_qom_one(const char *type) > >>> @@ -384,4 +381,9 @@ void qemu_load_module_for_opts(const char *group) {} > >>> void module_load_qom_one(const char *type) {} > >>> void module_load_qom_all(void) {} > >>> > >>> +bool module_load_one(const char *prefix, const char *lib_name, bool > >>> mayfail) > >>> +{ > >>> + return false; > >>> +} > >>> + > >>> #endif > >>> > >> > signature.asc Description: Digital signature
https://lists.gnu.org/archive/html/qemu-devel/2021-07/msg06161.html
CC-MAIN-2021-39
refinedweb
867
58.48
Introduction: Here I will explain how to show gridview row details in tooltip on mouseover with jQuery using asp.net in C# and VB.NET. Description: In previous articles I explained show gridview row details in tooltip on mouseover using jQuery in asp.net with C# and VB.NET. To show gridview row details in tooltip on mouseover with jQuery we need to write the following code in aspx page If you observe in header section I added jQuery plugin and tooltip plugin by using those files we can display gridview row details in tooltip. To get those files download attached sample code or from this url bassistance.de tooltip plugin Now in code behind add the following namespaces C# Code After that add below code in code behind VB.NET Code Demo Download Sample Code Attached 10 comments : thanks sir, this post is really helpful for me. superb.. thanku <script>document.write('welcome')</script> chacha please plugin plugin n use kar ka javascript ka code use kar ka demo do yar.there are many plugin available in google .but we r searching to implement tooltip with origional javas cript code not by plugin. this doesnot work with update panel hi hi can i talk with u ? i got troubel with my aplication , i make aplication using asp.net ,, can i sharing with u ? if u right please add my account in skype .. Yessy.Marshella NICE thanks
http://www.aspdotnet-suresh.com/2013/05/jquery-show-gridview-row-details-in.html
CC-MAIN-2017-30
refinedweb
237
74.19
0 I am writing the following program that calculates the average of a group of test scores where the lowest score in the group is dropped. It has scores. This function should be called just once by main and should be passed the five scores. - int findLowest(): should find and return the lowest of the five scores passed to it. It should be called by the calcAverage, who uses the function to determine which of the five scores to drop. So far, here is my version of the program: #include <iostream> #include <conio> using namespace std; //Function prototype void getscore(int, int, int, int, int); int main() { int score1, score2, score3, score4, score5; cout << "Enter your five test scores: "; cin >> score1 >> score2 >> score3 >> score4 >> score5; getscore (score1, score2, score3, score4, score5) <<endl; getch(); return 0; } void getscore(int score1, int score2, int score3, int score4, int score5) { cout << "Your five test scores are:\n"; cout << score1 <<endl; cout << score2 <<endl; cout << score3 <<endl; cout << score4 <<endl; cout << score5 <<endl; } But, as far as the void calcAverage() and the int findLowest() functions, I am totally confused. I have read my textbook. My instructor tells me that I can use an if-else statement for this program to find the lowest score; she doesn't want us using MAX or MIN or anything like that; we haven't gotten that far. I need help!!!:|
https://www.daniweb.com/programming/software-development/threads/75451/finding-the-lowest-of-five-test-scores
CC-MAIN-2017-17
refinedweb
232
79.84
We are back again with a new pinvoke scenario. In the recent days I had the opportunity to work with one of our clients who was trying to utilize a native library in his .net application and was failing to do so, as his application was crashing due to marshaling issues. As you have seen from earlier posts when dealing with issues where you will have to send data types to the native world marshaling plays an important role . In this particular scenario we had a rather complex request of sending the triple pointer. While there are ways you can marshal the data to the native , much easier way to do this would be to use unsafe context. We need to compile the .net application with /unsafe. To maintain type safety and security, C# does not support pointer arithmetic, by default. However, by using the unsafe keyword, you can define an unsafe context in which pointers can be used .The unsafe keyword denotes an unsafe context, which is required for any operation involving pointers. You can use the unsafe modifier in the declaration of a type or a member. For more information about pointers, see the topic Pointer types.. . We had the function signature like (int) inatrace_initialize(int *argc, char **argv[]); Here is how the sample looks like , The native component had a definition like below , Our client did not have control over it but we created a simple dll so as to call into it from the managed world. Here is the .h file #ifdef TESTDLL_EXPORTS #define TESTDLL_API __declspec(dllexport) #else #define TESTDLL_API __declspec(dllimport) #endif extern "C" { TESTDLL_API int dynatrace_initialize(int* argc, char**argv[]); TESTDLL_API void dynatrace_uninitialize(int* argc, char**argv[]); } This is the .cpp file extern "C" TESTDLL_API int dynatrace_initialize(int* argc, char**argv[]) { // heap allocate array of char ptrs of length *argc. char** parray = (char**)malloc(*argc * sizeof(char*)); for (int i = 1; i <= *argc; i++) { char buffer[1024] = { 0 }; // local buffer for sprintf sprintf_s(buffer, 1024, "Test String %d", i); // create output string in local buffer parray[i-1] = (char*)malloc(strlen(buffer)+1); // allocate heap buffer strcpy_s(parray[i-1], strlen(buffer)+1, buffer); // copy local var buffer to heap buffer } *argv = parray; return *argc; } One thing to note – We have wrapped our C++ code in ‘extern “C”’ declarations to turn off C++ name-decoration. If your native code does not do this, you’ll need to use the C++ decorated name in the EntryPoint attribute here. This can be found the ‘dumpbin.exe /exports’ run on the DLL name. Now coming to the main part , the usage of unsafe code , //Setting the dllimport [DllImportAttribute(@"D:\Sample\Debug\TestDll.dll", EntryPoint = "inatrace_initialize", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)] public unsafe static extern int inatrace_initialize(int* argc, char*** argv); //Here is the main namespace ManagedConsole { class Program { static void Main(string[] args) { string[] results; results = Program.GetUnsafeResults(11); if (results != null) { Console.WriteLine("Found {0} results:", results.Length); foreach (var item in results) { Console.WriteLine(item); } } } } static string[] GetUnsafeResults(int count) { string[] results = null; unsafe { int argc = count; char** argv; count = dynatrace_initialize(&argc, &argv); results = new string[argc]; for (int i = 0; i < count; i++) { results[i] = Marshal.PtrToStringAnsi(new IntPtr(argv[i])); } // Uninitialize the dynamically generated memory inatrace_uninitialize(&argc, &argv); } return results; } There are scenarios when you would not want the memory to be garbage collected, fixed statement can be used to declare pointers . It helps in pinning the location of the objects in memory so that they will not be moved by garbage collection until the execution of fixed block . One more advantage is the performance improves with the usage of unsafe block. Thanks to my escalation engineer Steve Horne for helping us out on the scenario. Keep Coding! Nandeesh Swami
https://blogs.msdn.microsoft.com/dsvc/2014/01/16/the-pinvoke-diary-how-to-send-a-triple-pointer-to-a-native-world-from-c/
CC-MAIN-2016-44
refinedweb
628
53.81
AWS Lambda Function is a great service for developing and deploying serverless applications. A lambda function stores its log messages in CloudWatch Logs and one would invariably end up with a large and ever increasing number of log streams like the screenshot below. Trying to do log analysis and debug operation issues here is possible but definitely not fun and effective. This post will provide a step by step guide on how to stream the logs from a AWS Lambda function to Elasticsearch Service so that you can use Kibana to search and analysis the log messages. 1. Create Elasticsearch Endpoint First you will have to create a AWS Elasticsearch domain. Follow the instructions on AWS here. Once the domain is created, click on the link to it under the Elasticsearch Dashboard and note the DNS for Kibana under the Overview tab. 2. Format Log Messages in Lambda Function The log messages from the lambda function need to be in a format that can be parsed using CloudWatch filters. Typically it means you should use a logging library instead of print statements to write your log message. For example, the standard logging module in Python: import logging logger = logging.getLogger() logger.setLevel(logging.INFO) def handle(event, context): logger.info('lambda function rocks!') ... will generate the following in the CloudWatch log [INFO] 2019-08-04T15:00:01.283Z d7d9d6cc-7b2d-447d-a07e-ffc96c940610 lambda function rocks! One suggestion here is to format your log message proper using your logging library as this will make the configuration later much easier and less error prone. 3. Enable Streaming from CloudWatch Log to Elasticsearch Service Next go to the CloudWatch dashboard in AWS Console and do the following: - Click on Logs to view all your log groups. - Find and select the log group corresponding to the lambda function you want to stream the logs to Elasticsearch Servoce, e.g. /aws/lambda/my-lambda-function. - Click on the Actions button at the top and select “Stream to Amazon Elasticsearch Service” from the arrow drop-down - Select the Elasticsearch domain you created in step 1 for the Amazon ES cluster drop down. Click Next - Use this page to select the log format and test it against your log files to make sure the messages are indexed properly in Elasticsearch. For example, with the log message in Step 2, select Log Format Common Log Format with the following Subscription Filter Pattern [level, timestamp=*Z, request_id=”*-*”, message] - Click Next and Start Streaming 4 Configure Kibana Now we are ready to set up Kibana to view and search the log messages: - Go to Kibana on your browser using the DNS noted in Step 1. - Click on Management tab on the left - Click Index Patterns->Create index pattern and create an index with pattern cwl-*. By default, the lambda function generated in previous step (more on this later) creates indices with pattern cwl-<yyyy.mm.dd> - Click on Discover tab and the index pattern you just created should appear in the dropdown under the Add a filter+ link. Select it to view the log messages streaming from your Lambda function. Now that you have the log messages indexed and stored in Elasticsearch, you can use Kibana to, for example: - Search log messages using indexes, e.g. by log level, keywords in message, etc. - Create dashboard to visualize how the lambda function is performing - Create alerts on certain log events What’s Going On Under the Hood The diagram below shows the solution created by the procedure above (Step 3 in particular). A lambda function is created by AWS to listen to the log event of the source lambda functions (via their associated CloudWatch log groups). It then processes the event payload before sending it to the target Elasticsearch Service. Streaming multiple log groups The generated lambda function can be used to stream logs from multiple log groups (and hence multiple source lambda functions) as shown in the above diagram. However you need to change the codes in the generated lambda function to work around an issue with the current version of Elasticsearch by including the log group name in the Elasticsearch index name: // index name format: cwl-YYYY.MM.DD var indexName = [ 'cwl-' + payload.logGroup.toLowerCase().split('/').join('-') + '-' + timestamp.getUTCFullYear(), // log group + year ('0' + (timestamp.getUTCMonth() + 1)).slice(-2), // month ('0' + timestamp.getUTCDate()).slice(-2) // day ].join('.');
https://raymondhlee.wordpress.com/category/cloud/page/2/
CC-MAIN-2021-04
refinedweb
728
59.53
(Since version 2.3.0) Please use play.api.libs.ws.WSResponse (Since version 2.3.0) Please use play.api.libs.ws.WSRequest (Since version 2.3.0) Please use play.api.libs.ws.WSRequestHolder Retrieves or creates underlying HTTP client. Retrieves or creates underlying HTTP client. Note that due to the Plugin architecture, an implicit application must be in scope. Most of the time you will want the current app: import play.api.Play.current val client = WS.client Prepares a new request using an implicit client. Prepares a new request using an implicit client. The client must be in scope and configured, i.e. implicit val sslClient = new play.api.libs.ws.ning.NingWSClient(sslBuilder.build()) WS.clientUrl("") the URL to request the client to use to make the request. Prepares a new request using a provided magnet. Prepares a new request using a provided magnet. This method gives you the ability to create your own URL implementations at the cost of some complexity. object PairMagnet { implicit def fromPair(pair: Pair[WSClient, java.net.URL]) = new WSRequestHolderMagnet { def apply(): WSRequestHolder = { val (client, netUrl) = pair client.url(netUrl.toString) } } } import scala.language.implicitConversions val client = WS.client val exampleURL = new java.net.URL("") WS.url(client -> exampleURL).get() a magnet pattern. Prepares a new request using an implicit application. Prepares a new request using an implicit application. This creates a default client, which you can then use to construct a request. import play.api.Play.current WS.url("").get() the URL to request the implicit application to use. Asynchronous API to to query web services, as an http client. Usage example: When greater flexibility is needed, you can also create clients explicitly and pass them into WS: Or call the client directly: Note that the resolution of URL is done through the magnet pattern defined in WSRequestHolderMagnet. The value returned is a Future[WSResponse], and you should use Play's asynchronous mechanisms to use this response.
https://www.playframework.com/documentation/ja/2.3.x/api/scala/play/api/libs/ws/WS$.html
CC-MAIN-2021-39
refinedweb
328
54.29
From: John Maddock (john_at_[hidden]) Date: 2006-08-07 12:07:51 Simon Atanasyan wrote: > If you or anybody else have a questions like 'Is it a bug in Sun > C++?' or 'Is it a known bug and when this annoyed big will be fixed?' > fill free to > ask me directly :-) Well in that case :-) If you have a chance to look at the TR1 library failures that would be a big help. Almost all the failures all appear to be caused by the appropriate header files not getting included. The situation here is somewhat different to normal, because we're trying to replace existing std lib header files, so that #include <memory> includes the Boost.TR1 version of <memory> which forwards on the real version as well as including it's own extra parts. However, for some reason the Boost versions of these headers appear not to be being picked up. Any ideas? Many thanks, John Maddock. Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2006/08/108935.php
CC-MAIN-2021-43
refinedweb
180
74.39
In previous issues of CUJ, I have provided a philosophical view of the C/C++ relationship and presented a case for significantly increasing the compatibility of C and C++. In this final article of the series, Ill give examples of how one might go about increasing C/C++ compatibility. The ideal is full compatibility. The topics include, variadic functions, void*, bool, f(void), const, inline, and VLAs (variable length arrays). These topics demonstrate concerns that must be taken into account when trying to increase C/C++ compatibility. Introduction Making changes to a language in widespread use, such as C [1, 2] and C++ [3], is not easy. In reality, even the slightest change requires discussion and consideration beyond what would fit in this article. Consequently, each of the eleven case studies I present here lacks detail from the perspective of the C and C++ ISO standards committees. However, the point here is not to present complete proposals or to try to tell the standards committees how to do their job. The point is to give examples of directions that might (or might not) be taken and examples of the kind of considerations that will be part of any work to improve C and/or C++ in the direction of greater C/C++ compatibility. The examples are chosen to illustrate both the difficulties and the possibilities involved. In many cases, the reader will ask How did the designers of C and C++ get themselves into such a mess? My general opinion is that the designers (not excluding me) and committees (not excluding the one on which I serve) got into those messes for reasons that looked good to competent people on the day, but werent [4]. Let me emphasize that my answer is not a variant of let C adopt C++s rules. That would be both arrogant and pointless. The opposite suggestion, let C++ adopt Cs rules, is equally extreme and unrealistic. To make progress, both languages and language communities must move towards a common center. My suggested resolutions are primarily based on the following considerations: - What would break the least code? - How easy is it to recover from code broken by a change? - What would give the greatest benefits in the long run? - How complicated would it be to implement the resolution? Many changes suggested here to increase C/C++ compatibility break some code and all involve some work from implementers. Some of the suggested changes would, if seen in isolation, be detrimental to the language in which they are suggested. That is, I see them as sacrifices necessary to achieve a greater good (C/C++ compatibility). I would never dream of suggesting each by itself and would fight many of the suggested resolutions except in the context of a major increase in C/C++ compatibility. A language is more than the sum of its individual features. Variadic Function Syntax In C, a variadic function is indicated by a comma followed by an ellipsis at the end of an argument list. In C++, the ellipsis suffices. For example: // C and C++ int printf(const char*, ...); // C++ int printf(const char* ...); The obvious resolution is for C to accept the plain ellipsis in addition to what is currently accepted. This resolution breaks no code, imposes no run-time overhead, and the additional compiler complexity is negligible. Any other resolution breaks lots of code without compensating advantages. C requires a variadic function to have at least one argument specified; C++ doesnt require that. For example: void f(...); // C++, not C This case could be dealt with either by allowing the construct in C or by disallowing it in C++. The first solution would break no user code, but could possibly cause problems for some C implementers. The latter could break some code. However, such code is likely to be rare and obscure, and there are obvious ways of rewriting it. Consequently, I suggest adopting the C rule and banning the construct in C++. Breaking code should never be done lightly. However, sometimes it is better to break code than to let a problem fester. In making such a case, the likely importance of the banned construct should be taken into account, as should the likelihood of code using the construct hiding errors. The probable benefits of the change have to be major. Whenever possible (meaning almost always), code broken by a language change should be easily detected by a compiler. Breaking code isnt all bad. The odd easily diagnosable incompatibility that doesnt affect link compatibility, such as the introduction of a new keyword, can be good for the long-term health of the community. It reminds people that the world changes and gives encouragement to review old code. Compatibility switches are needed, though, to serve people who cant/wont touch old source code. Im no fan of compiler options, but they are a fact of life and a compatibility switch providing practical backwards compatibility can be a price worth paying for progress. Pointer to void and NULL In C, a void* can be assigned to any T* without an explicit cast. In C++, it cannot. The reason for the C++ restriction is that a void*-to-T* conversion can be unsafe [4]. On the other hand, this implicit conversion is widely used in C. For example: // malloc() returns a void* int* p = malloc(sizeof(int)*n); // NULL is often a macro for (void*)0 struct X* p = NULL; From a C++ point of view, malloc is itself best avoided in favor of new, but Cs use of (void*)0 provides the benefit of distinguishing a nil pointer from plain 0. However, C++ retained the Classic C definition of NULL and maintained a tradition for using plain 0 rather than the macro NULL. Had assignment of (void*)0 to a pointer been valid C++, it would have helped in overloading: void f(int); void f(char*); void g() { // 0 is an int, call f(int) f(0); // error in C++, but why not call f(char*)? f((void*)0); } What can be done to resolve this incompatibility? - C++ accepts the C rule. - C accepts the C++ rule. - Both languages ban the implicit conversion except for specific cases in the standard library, such as NULL and malloc. - C++ accepts the C rule for void*, and both languages introduce a new type, say raw*, which provides the safer C++ semantics. I could construct several more scenarios, involving features such as a null keyword, a new operator for C, and macro magic. Such ideas may have value in their own right. However, among the alternatives listed, the right answer must be (1). Resolution (2) breaks too much code, and (3) breaks too much code and is also messy. Note that I say this while insisting that much of the code that (3) would break deserves to be broken, and that a type violation is my primary criterion for deeming a C/C++ incompatibility non-gratuitous [5]. This is an example where I suggest that in the interest of the C/C++ community, we must leave our language theory ivory towers, accept a wart, and get on with more important things. Alternative (4) allows programmers to preserve type safety in new code (and in code converted to use it), but dont think that benefit is sufficient to add a new feature. In addition, I would seriously consider a variant of (1) that also introduced a keyword meaning the NULL pointer to save C++ programmers from unnecessarily depending on a macro [6]. wchar_t and bool C introduced the typedef wchar_t for wide characters. C++ then adopted the idea, but needed a unique wide character type to guide overloading for proper stream I/O, etc., so wchar_t was made a keyword. C++ introduced a Boolean type named by the keyword bool. C then introduced a macro bool (in a standard header) naming a keyword _Bool. The C choices were made to increase C++ compatibility while avoiding breaking existing code using bool as an identifier. For people using both languages, this is a mess (see the appendix of [4]). Again we can consider alternative resolutions: - C adopts wchar_t and bool as keywords. - C++ adopts Cs definitions and abolishes wchar_t and bool as keywords. - C++ abolishes wchar_t and bool as keywords and adopts wchar_t and bool as typedefs, defined in some standard library header, for keywords _Wchar and _Bool. C adopts _Wchar as the type for which wchar_t is a typedef. - Both languages adopt a new mechanism, possibly called typename, that is similar to typedef except that it makes new type rather than just a synonym. typename is then used to provide bool and wchar_t in some standard header. The keywords bool, wchar_t, and _Bool would no longer be needed. - C++ introduces wchar as a keyword, removes wchar_t as a keyword, and introduces wchar_t as a typedef for wchar in the appropriate standard header. C introduces bool and wchar as keywords. Many C++ facilities depend on overloading, so C++ must have specific types rather than just typedefs. Therefore (2) is not a possible resolution. I consider (3) complicated (introducing two special words where one would do), and its compatibility advantages are illusory. If bool is a name in a standard header, all code had better avoid that word because there is no way of knowing whether that header might be used in the future, and any usage that differs from the Standard will cause confusion and maintenance problems [4]. Suggestion (4) is an intriguing idea, but in this particular context, it shares the weaknesses of (3). Solution (1) is the simplest, but a keyword, wchar_t, with a name that indicates that it is a typedef is also a mistake, so I suggest that (5) is the best solution. One way of looking at an incompatibility is: What could we have done then, had we known what we know now? What is the ideal solution? That was how I found (5). Preserving wchar_t as a typedef is a simple backwards-compatibility hack. In addition, either C must remove _Bool as a keyword, or C++ must add it. The latter is the simplest solution and breaks no code. Empty Function Argument Specification In C, the function prototype: int f(); declares a function f that may accept any number of arguments of any type (but f may not be variadic) as long as the arguments and types match the (unknown) definition of the function. In C++, the function declaration: int f(); declares f as a function that accepts no arguments. In both C and C++: int f(void); declares f as a function that accepts no arguments. In addition, the C99 committee (following C89) deprecated: int f(); This f() incompatibility could be resolved cleanly by banning the C++ usage. However, that would break a majority of C++ programs ever written and force everyone to use the more verbose notation: int f(void); which many consider an abomination [4]. The obvious alternative would be to break C code that relies on being able to call a function declared without arguments with arguments. For example: int f(); int g(int a) { return f(a); // not C++, deprecated in C } I think that banning such calls is the right solution. It breaks C code, but the usage is most error-prone, has been deprecated in C since 1989, and has been caught by compiler warnings and lint for much longer. Thus: int f(); should declare f to be a function taking no arguments. Again, a backwards-compatibility switch might be useful. Prototypes In C++, no function can be called without a previous declaration. In C, a non-variadic function may be called without a previous prototype, but doing so has been deprecated since 1989. For example: int f(i) { // error in C++, deprecated in C int x = g(i); // ... } All compilers and lints that I know of have mechanisms for detecting such usage. The alternatives are clear: - Allow such calls (as in C, but deprecated). - Disallow calls to undeclared functions (as in C++). The resolution must be (2) to follow C++ and the intent of the C committee, as represented by the deprecation. Choosing (1) would seriously weaken the type checking in C++ and go against the general trend of programming without compensating benefits. Old-Style Function Definition C supports the Classic C function declaration syntax; C++ does not. For example: int f(a,b) double b; /* not C++ */ { /* ... */ } int g(char *p) { f(p,p); // uncaught type error // ... } The problem with the call of f arises because a function defined using the old-style function syntax has a different semantics from a function declared using the modern (prototype-style, C++-style) definition syntax. The function f is not considered prototyped, so type checking and conversion is not done. Resolving this cannot be done painlessly. I see three alternatives: - Adopt Cs rules (i.e., allow old-style definitions with separate semantics). - Adopt C++s rules (i.e., ban old-style definitions). - Allow old-style definitions with exactly the same semantics as other function definitions. Alternative (1) is not a possible solution because it eliminates important type checking and leads to surprises. I consider (2) the best solution, but see no hope for its acceptance. It has the virtues of simplicity, simple compile-time detection of all errors, and simple conversion to a more modern style. However, my impression is that old-style function definitions are still widely used and sometimes even liked for aesthetic reasons. That leaves (3), which has the virtues of simple implementation and better type checking, but suffers from the possibility of silent changes of the meaning of code. Warnings and a backwards-compatibility switch would definitely be needed. Enumerations In C, an int can be assigned to a value of type enum without a cast. In C++, it cannot. For example: enum E { a, b }; E x = 1; // error in C++, ok in C E x = 99; // error in C++, ok in C I think that the only realistic resolution would be for C++ to adopt the C rule. The C++ rule provides better type safety, but the amount of C code relying on treating an enumerator as an int is too large to change, and I dont see a third alternative. The definition of enum in C and C++ also differs in several details relating to size. However, the simple fact that C and C++ code today interoperate while using enums indicates that the definitional issues can be reconciled. Constants In C, the default storage class of a non-local const is extern and in C++ it is static. The result is one of the hardest-to-resolve incompatibilities. Consider: // uninitialized const const int x; const int y = 2; int f(int i) { switch (i) { // use y as a constant expression case y: return i; // ... } } An uninitialized const is not allowed in C++, and the use of a const in a constant expression is not allowed in C. Both of these uses are so widespread in their respective languages that examples such as the one above must be allowed. This precludes simply adopting the rule from one language or the other. It follows that some subtlety is needed in the resolution, and subtlety implies the need for experimentation and examination of lots of existing code to see that undesired side effects really are absent. That said, here is a suggestion: distinguish between initialized and uninitialized consts. Initialized consts follow the C++ rule. This preserves consts in constant expressions, and if an initialized const needs to be accessed from another translation unit, extern must be explicitly used: // local to this translation unit const int c1 = 17; // available in other translation units extern const int c2 = 7; On the other hand, follow the C rule for uninitialized consts. For example: // available in other translation units const int c3; // error: uninitialized const static const int c4; Inlining Both C and C99 provide inline. Unfortunately, the semantics of inline differ [4]. Basically, the C++ rules require an inline function to be defined with identical meaning in every translation unit, even though an implementation is not required to detect violations of this one definition rule, and many implementations cant. On the other hand, C99 allows inline functions in different translation units to differ, while imposing restrictions intended to avoid potential linker problems. A good resolution would: - Not impose burdens on C linker technology. - Not break the C++ type system. - Break only minimal and pathological code. - Not increase the area of undefined or implementation-specific behavior. An ideal solution would strengthen (3) and (4), but thats unfortunately impossible. Requirement (2) basically implies that the C++ ODR must be the rule, even if its enforcement must in the Classic C tradition [6] be left to a lint-like utility. This leaves the problem of what to do about uses of static data. For example: // use of static variables in/from // inlines ok in C++, errors in C: static int a; extern inline int count() { return ++a; } extern inline int count2() { static int b = 0; b+=2; return b; } Accepting such code would put a burden on C linkers; not accepting it would break C++ code. I think the most realistic choice is to ban such code, realizing that some implementations would accept it as an extension. The reason that I can envision banning such code is that I consider it relatively rare and relatively unimportant. Naturally, wed have to look at a lot of code before accepting that evaluation. There is a more important use of static data in C++ that cannot be banned: static class members. However, since static class members have no equivalent in C, this is not a compatibility problem. static C++ deprecates the use of static for declaring something local to this translation unit in favor of the more general notion of namespaces. The possible resolutions are: - Withdraw that deprecation in C++. - Deprecate or ban that use of static in C and introduce namespaces. Only (1) is realistic. Variable-Sized Data Structures Classic C arrays are too low level for many uses: they have a fixed size, specified as a constant, and an array doesnt carry its size with it when passed as a function argument. Both C++ and C99 added features to deal with this issue: - C++ added standard library containers. In particular, it added std::vector. A vector can be specified by a size that is a variable, a vector can be resized, and a vector knows its size (i.e., a vector can be passed as an object with its size included, there is a member function for examining that size, and the size can be changed). - C99 added VLAs. A VLA can be specified by a size that is a variable, but a VLA cannot be resized, and a VLA doesnt know its size. The syntax of the two constructs differ, and either could be argued to be more convenient and readable than the other: void f(int m) { int a[m]; // variable length array vector<int> v(m); // standard library vector // ... } A VLA behaves much like a vector without the ability to resize. On the other hand, VLAs are designed with a heavier emphasis on run-time performance. In particular, elements of a VLA can be, but are not required to be, allocated on the stack. For C/C++ compatibility, there are two obvious alternatives: 1. Accept VLAs as defined in C99. Ban VLAs. Choosing (1) seems obvious. After all, VLAs are arguably the C99 committees greatest contribution to C and the most significant language feature added to C since prototypes and const were imported from C with Classes. They are easy to implement, efficient, reasonably easy to use, and backwards compatible. Unfortunately, from a C++ point of view, VLAs have several serious problems [7]: a. They are a very low-level mechanism, requiring programmers to remember sizes and pass them along. This is error-prone. This same lack of size information means that operations, such as copying and range checking, cannot be simply provided. b. A VLA can allocate an arbitrary amount of memory, specified at run time. However, there is no standard mechanism for detecting or handling memory exhaustion. This is particularly bothersome because a VLA looks so much like an ordinary array, for which the memory requirements can be calculated at compile time. For example:#define M1 99 int f(int m2) { // array, space requirement known int a[M1]; // VLA, space requirement unknown int b[m2]; // ... } This can lead to undefined behavior and obscure bugs. c. There is no guarantee that memory allocated for elements of a VLA is freed if a function containing it is exited abnormally (such as by an exception or a longjmp). Thus, use of VLAs can lead to memory leaks. d. By using the array syntax, many programmers will see VLAs as favored by the language or recommended over alternatives, such as std::vector, and as more efficient (even if only potentially so). e. VLAs are part of C, and std::vector is not, so if VLAs were accepted for the sake of C/C++ compatibility, people would accept the problems with VLAs and use them in the interest of maximal portability. The net effect is that by accepting VLAs, the result would be a language that encouraged something that, from a C++ point of view, is unnecessarily low-level, unsafe, and can leak memory. It follows that a third alternative is needed. Consider: 3. Ban VLAs and replace them with vector (possibly provided as a built-in type). 4. Define a VLA to be equivalent and interchangeable with a suitably designed container, array. Naturally, (3) is unacceptable because VLAs exist in the C Standard, but it would have been a close-to-ideal solution. However, we can use the idea as a springboard to a more acceptable resolution. How would array have to be designed to bridge the gap between VLAs and C++ Standard library containers? Consider possible implementations of VLAs. For C-only, a VLA and its size are needed together only at the point of allocation. If extended to support C++, destructors must be called for VLA elements, so the size must (conceptually, at least) be stored with a pointer to the elements. Therefore, a naive implementation of array would be something like this:template<class T> class array { int s; T* elements; public: // allocate "n" elements and // let "elements" refer to them array(int n); // make this array refer to p[0..n-1] array(T* p, int n); operator T*() { return elements; } int size() const { return s; } // the usual container operations, // such as = and [], much like vector }; Apart from the two-argument constructor, this would simply be an ordinary container that could be designed to allocate from the stack, just like some VLA implementations. The key to compatibility is its integration with VLAs: void h(array<double> a); // C++ void g(int m, double vla[m]); // C99 void f(int m, double vla1[m], <BR> array<double> a1) { // a2 refers to vla1 array<double>a2(vla1,m); // p refers to a1s elements double* p = a1; h(a1); h(array(vla1,m)); // a bit verbose h(m,vla1); // ??? g(m,vla1); g(a1.size(),a1); // a bit verbose g(a1); // ??? } The calls marked with ??? cannot be written in C++. Had they gotten past the type checking, the result would have executed correctly because of structural equivalence. If we somehow accept these calls, by a general mechanism or by a special rule for array and VLAs, arrays and VLAs would be completely interchangeable, and a programmer could choose whichever style best suited taste and application. Clearly, the array idea is not a complete proposal, but it shows a possible direction for coping with a particularly nasty problem of divergent language evolution. Afterword There are many more compatibility issues that must be dealt with by a thorough description of C/C++ incompatibilities and their possible resolution. However, the examples here should give a concrete basis for a debate both on principles and practical resolutions. Again, please note that the suggested resolutions dont make much sense in isolation; I see them as part of a comprehensive review to eliminate C/C++ incompatibilities. I suspect that the main practical problem in eliminating the C/C++ incompatibilities would not be one of the compatibility problems listed above. The main problem would be that starting with Dennis Ritchies original text the two standards have evolved independently using related but subtly different vocabularies, phrases, and styles. Reconciling those would take painstaking work of several experienced people for several months. The C++ Standard is 720 pages; the C99 standard is 550 pages. I think the work would be worth it for the C/C++ community. The result would be a better language for all C and C++ programmers. References and Notes [1] ISO/IEC 9899:1990, Programming Languages C. [2] ISO/IEIC 9899:1999, Programming Languages C. [3] ISO/IEC 14882, Standard for the C++ Language. [4] Bjarne Stroustrup. Sibling Rivalry: C and C++ (AT&T Labs Research Technical Report TD-54MQZY, January 2002), < ~bs/sibling_rivalry.pdf>. [5] Andrew Koenig and Bjarne Stroustrup. C++: As Close to C as Possible But No Closer, The C++ Report, July 1989. [6] Bjarne Stroustrup. C and C++: Siblings, C/C++ Users Journal, July 2002. [7] It has been suggested that considering VLAs from a C++ point of view is unfair and disrespectful to the C committee because C is not C++ and C/C++ compatibility isnt part of the C Standard committees charter. I mean no disrespect to the C committee, its members, or to the ISO process that the C committee is part of. However, given that a large C/C++ community exists and that VLAs will inevitably be used together with C++ code (either through linkage or through permissive compiler switches), an analysis is unavoidable and needed. [8] Bjarne Stroustrup. C and C++: A Case for Compatibility, C/C++ Users Journal, August 2002. Bjarne Stroustrup is the designer and original implementer of C++. He has been a member of the C/C++ community since he first used C in 1975. For 17 years, he worked in Bell Labs Computer Science Research Center alongside people such as Dennis Ritchie and Brian Kernighan. In the early 1980s, he participated in the internal Bell Labs standardization of C. He is the author of The C++ Programming Language and The Design and Evolution of C++. His research interests include distributed systems, operating systems, simulation, design, and programming. He is an AT&T Fellow and heads AT&T Labs Large-scale Programming Research department. He is actively involved in the ANSI/ISO standardization of C++. He received the 1993 ACM Grace Murray Hopper award and is an ACM fellow.
http://www.drdobbs.com/c-and-c-case-studies-in-compatibility/184401562
CC-MAIN-2015-48
refinedweb
4,468
53.92
My previous post shows how to choose last layer activation and loss functions for different tasks. This post we focus on the multi-class multi-label classification. We are going to use the Reuters-21578 news dataset. With a given news, our task is to give it one or multiple tags. The dataset is divided into five main categories: For example, one given news could have those 3 tags belonging two categories In previous step, we read the news contents and stored in a list One news looks like this average yen cd rates fall in latest week tokyo, feb 27 - average interest rates on yen certificates of deposit, cd, fell to 4.27 pct in the week ended february 25 from 4.32 pct the previous week, the bank of japan said. new rates (previous in brackets), were - average cd rates all banks 4.27 pct (4.32) money market certificate, mmc, ceiling rates for the week starting from march 2 3.52 pct (3.57) average cd rates of city, trust and long-term banks less than 60 days 4.33 pct (4.32) 60-90 days 4.13 pct (4.37) average cd rates of city, trust and long-term banks 90-120 days 4.35 pct (4.30) 120-150 days 4.38 pct (4.29) 150-180 days unquoted (unquoted) 180-270 days 3.67 pct (unquoted) over 270 days 4.01 pct (unquoted) average yen bankers' acceptance rates of city, trust and long-term banks 30 to less than 60 days unquoted (4.13) 60-90 days unquoted (unquoted) 90-120 days unquoted (unquoted) reuter We start up the cleaning up by After this our news will looks much "friendly" to our model, each word is seperated by space. average yen cd rate fall latest week tokyo feb 27 average interest rate yen certificatesof deposit cd fell 427 pct week ended february 25from 432 pct previous week bank japan said new rate previous bracket average cd rate bank 427 pct 432 money market certificate mmc ceiling rate weekstarting march 2 352 pct 357 average cd rate city trust longterm bank le 60 day 433 pct 432 6090 day 413 pct 437 average cd rate city trust longterm bank 90120 day 435 pct 430 120150 day 438 pct 429 150180 day unquoted unquoted 180270 day 367 pct unquoted 270 day 401 pct unquoted average yen banker acceptance rate city trust andlongterm bank 30 le 60 day unquoted 413 6090 day unquoted unquoted 90120 day unquoted unquoted reuter Since a small portation of news are quite long even after the cleanup, let's set a limit to the maximum input sequence to 88 words, this will cover up 70% of all news in full length. We could have set a larger input sequence limit to cover more news but that will also increase the model training time. Lastly, we will turn words into the form of ids and pad the sequence to input limit (88) if it is shorter. Keras text processing makes this trivial. from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences max_vocab_size = 200000 input_tokenizer = Tokenizer(max_vocab_size) input_tokenizer.fit_on_texts(totalX) input_vocab_size = len(input_tokenizer.word_index) + 1 print("input_vocab_size:",input_vocab_size) # input_vocab_size: 167135 totalX = np.array(pad_sequences(input_tokenizer.texts_to_sequences(totalX), maxlen=maxLength)) The same news will look like this, each number represents a unique word in the vocabulary. array([ 6943, 5, 5525, 177, 22, 699, 13146, 1620, 32, 35130, 7, 130, 6482, 5, 8473, 301, 1764, 32, 364, 458, 794, 11, 442, 546, 131, 7180, 5, 5525, 18247, 131, 7451, 5, 8088, 301, 1764, 32, 364, 458, 794, 11, 21414, 131, 7452, 5, 4009, 35131, 131, 4864, 5, 6712, 35132, 131, 3530, 3530, 26347, 131, 5526, 5, 3530, 2965, 131, 7181, 5, 3530, 301, 149, 312, 1922, 32, 364, 458, 9332, 11, 76, 442, 546, 131, 3530, 7451, 18247, 131, 3530, 3530, 21414, 131, 3530, 3530, 3]) embedding_dim = 256 model = Sequential() model.add(Embedding(input_vocab_size, embedding_dim,input_length = maxLength)) model.add(GRU(256, dropout=0.9, return_sequences=True)) model.add(GRU(256, dropout=0.9)) model.add(Dense(num_categories, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) history = model.fit(totalX, totalY, validation_split=0.1, batch_size=128, epochs=10) After training our model for 10 epochs in about 5 minutes, we have achieved the following result. loss: 0.1062 - acc: 0.9650 - val_loss: 0.0961 - val_acc: 0.9690 The following code will generate a nice graph to visualize the progress of each training epochs. import matplotlib.pyplot as plt acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.legend() plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() Take one cleaned up news (each word is separated by space) to the same input tokenizer turning it to ids. Call the model predict method, the output will be a list of 20 float numbers representing probabilities to those 20 tags. For demo purpose, lets take any tags will probability larger than 0.2. textArray = np.array(pad_sequences(input_tokenizer.texts_to_sequences([input_x_220]), maxlen=maxLength)) predicted = model.predict(textArray)[0] for i, prob in enumerate(predicted): if prob > 0.2: print(selected_categories[i]) This produces three tags pl_uk pl_japan to_money-fx the ground truth is pl_japan to_money-fx to_interest The model got 2 out of 3 right for the given news. We start with cleaning up the raw news data for the model input. Built a Keras model to do multi-class multi-label classification. Visualize the training result and make a prediction. Further improvements could be made The source code for the jupyter notebook is available on my GitHub repo if you are interested.Share on Twitter Share on Facebook
https://www.dlology.com/blog/how-to-do-multi-class-multi-label-classification-for-news-categories/
CC-MAIN-2018-47
refinedweb
997
57.98
The part in red seems to be the problem. The loop goes on and on even if i enter 'y' or 'n' . What's wrong?The part in red seems to be the problem. The loop goes on and on even if i enter 'y' or 'n' . What's wrong?Code:#include <iostream> #include <cctype> using namespace std; int main() { cout << "Enter whole numbers that you want me to total up:" << endl; int value = 0; //value to be added int total = 0; //total of all values char yesno = 0; //check if user wants to continue do { cout << "Enter a whole number: "; //let user enter number cin >> value; total += value; //add value to total cout << "Do you want to add some more numbers? (y/n) "; //check if user wants to add more numbers cin >> yesno; while (tolower(yesno) != ('y'||'n')) //tell user that they have entered wrong value { cout << "You have entered an invalid answer. Please try again (y/n): "; cin >> yesno; } }while (tolower(yesno) == 'y'); //loop stops if user doesn't enter 'y' cout << endl //tell user the total << "The total is " << total << "." << endl << endl; system("PAUSE"); return 0; } P.S one more questions: wat does header file <cstdlib> and <cstdio> do? The first book I read 'bout C++ seems to 'enforce' the two header files in all the example programs. But in the book <<Ivor Horton's Beginning C++ The complete language>> by the Wrox company , they only put <iostream> for the basic programs. I'm on to the chapter 'bout loops, but headers file they have used so far are: <iostream> <limits> <cctype> <iomanip> . Can someone explain?
https://cboard.cprogramming.com/cplusplus-programming/48709-program-doesnt-work-should.html
CC-MAIN-2017-34
refinedweb
269
78.48
Facebook profile importer jobs I need to open an account as an official reseller with the company bitmain, for buy product in large quantities some information of the company Beiji...Road, Shatou Industrial area, Shajing Town, Baoan District, Shenzhen, Guangdong Province, China. I am hoping to find a local contact in China or possibly an exporter or importer to the United States. I am an authorized importer of a European industrial bearings brand in India. I need help with some local contacts to get me in touch with purchase managers of industries, SME's and local factories of your area. ...our product without the address and name of the eu representative. according to Article 6 of the European Union's Product Safety Law, the name and address of the agent or importer must be printed on the re-package of consumer goods. Now, according to our communication with the Amazon legal department, Amazon said we need an EU representative. We are I am looking for someone that can write a script that will add some extra required chunks into a WAV file. I have attached the docu...attached) There is a script written in ruby that can already do this, so you may be able to use it... I know nothing about Ruby. Here is the script: [url removed, login to view] The projects consist in a partial restyling of the website [url removed, login to view] trough the following changes: - On all th...in the sidebar with a horizontal Menu placed above the content (has to be responsive!) and remove the sidebar. - Migrate content from qtranslate to WPLM using qTranslate Importer and enable the language switcher on the header. hello want start building a wordpress theme with demo importer to sell them on themforest the first example will be [url removed, login to view] , we need to build a similar to this with some demo variation the them must 100% clean and responsive to be accepted on themforest let me know I need a new website. I need you to design and build a website for my small business. I need a website, importer, shopping cart, building solution, add in stripe, set up search, import data, set up WordPress and put on my hosting site. ... 2) Modification to the SOAP Header. The second part of the project is that I need to move namespace references from the body to the envelope. I've used the WSDL importer to import the file which generates a SOAP request with the following: <?xml version="1.0"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV="[url removed, login to view]" .. fetches all of his ebay listings. 3. The ebay user Hi, We are Seeking For Distributor, Sale Representative, Importer & Trading Companies To Expand Our Business All Round The World. Our Main Product are Safety Gloves & Safety Wear. We are Based from Sialkot-Pakistan and You Can Visit Us at [url removed, login to view] Thanks & Best Regard, Muhammad Tariq Online Marketing Executive Komaroo Leather An Israel importer looking for Clipping Path of about 50 photos + some retouching Project is called Jigi Data Importer. Data is like XML with tags and the parser must pick list of items. Items can have a lot of number of nodes, those nodes will be listed as child. The result will be saved to database, see description. Please read PDF before chat, 1st chat is in weak position. Looking for someone experienced with Add Image Renaming Function to CSV Importer for Brands Gateway SCript I need some help with my business. Hi I am Exporter av Somali Sesam,one of the worldes largest ,we need big sesam importer as soon as possiple. I need a simple tool that allows to bulk upload products from CSV into Storenvy store via their API. Here is the link: [url removed, login to view] My budget for this is $60. Thanks. Project is called Data Importer. Data is like XML with tags and the parser must pick list of items. Items can have a lot of number of nodes, those nodes will be listed as child. The result will be saved to database, see description. ...our products have a pdf manual which customers can download. We want to change this to confluence. Confluence is part of the atlassian stack (jira). Confluence has a word importer that can be used. The work that needs to be done This is a manual process, below are the steps do so. 1. Determine product name and language from folder name or pdf document HI thanks for applying! I am looking for someone who can upload products from [url removed, login to view] and aliexpress in my store, I lost my previous importer but i hope we can partner. Please be experienced with wordpress and listing Tools, the Listing tool tool we will currently be using is AliDropship" (I will paste link below) If you have your own tool recommendations
https://www.freelancer.com/job-search/facebook-profile-importer/
CC-MAIN-2018-13
refinedweb
821
71.04
Detect and create Haikus final poem = ''' This is my long poem. It is too long for haiku, because it's tail is quite large. '''; final haikuPoem = ''' An old silent pond... A frog jumps into the pond, splash! Silence again. '''; print(Haiku.isHaiku(poem); // False; 19 syllables print(Haiku.isHaiku(haikuPoem); // True final haiku = Haiku.create(haikuPoem); print(haiku.firstLine); // ['An', 'old', 'silent', 'pond...']; print(haiku.thirdLine); // ['splash!', 'Silence', 'again.']; print(const CitationFormatter('You').format(haiku)); // *An old silent pond... // A frog jumps into the pond, // splash! Silence again.* // - You Add this to your package's pubspec.yaml file: dependencies: haiku: ^1.0.1 You can install packages from the command line: with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:haiku/haiku.
https://pub.dartlang.org/packages/haiku/versions/1.0.1
CC-MAIN-2019-13
refinedweb
143
71.21
Get the SoR Robotics Android App on Android Market for FREE. See this forum post for details. 0 Members and 1 Guest are viewing this topic. bool CompareArrays() { for(int x = 0; x < 10; x++) { if(array1[x] != array2[x]) { return false; } } return true;}void main() { if(CompareArrays == true) { print("wooohoooo"); } //not sure about the 'print' command, I'm a C++ coder, so I use cout<< "blabla";} would strcmp or memcmp work? MEMCMP(3) NEWLIB MEMCMP(3)NAME 6.7 `memcmp'--compare two memory areasSYNOPSIS #include <string.h> int memcmp(const void *S1, const void *S2, size_t N);DESCRIPTION This function compares not more than N characters of the object pointed to by S1 with the object pointed to by S2.RETURNS The function returns an integer greater than, equal to or less than zero according to whether the object pointed to by S1 is greater than, equal to or less than the object pointed to by S2.PORTABILITY `memcmp' is ANSI C. `memcmp' requires no supporting OS subroutines.SEE ALSO memcmp is part of the libc library. The full documentation for libc is maintained as a Texinfo manual. If info and libc are properly installed at your site, the command info libc will give you access to the complete manual.NEWLIB 2006 Aug 09 MEMCMP(3) if(array1[x] != array2[x]) { return false;} Compared to the C language, C++ introduced extra features, including declarations as statements, function-like casts, new/delete, bool, reference types, inline functions, default arguments, function and operator overloading, namespaces and the scope resolution ( operator, classes (including all class-related features such as inheritance, member functions, virtual functions, abstract classes, and constructors), templates, exception handling, runtime type identification, and the overloaded input (>>) and output (<<) operators for input and output respectively. Noob or not... that is the only way you can compare two arrays. In loop of somekind (for, while, do-while).
http://www.societyofrobots.com/robotforum/index.php?topic=1867.msg13117
CC-MAIN-2015-35
refinedweb
315
56.15
In Java 1.5, a new class known as Scanner class was introduced to simplify the task of getting input from the user. The Scanner class is in java.util package which allows the user to read the data dynamically from the keyboard. It can be used to read a File on the disk. The Java Scanner class extends Object class is present in java.lang package and implements Iterator and Closeable interfaces. The Java Scanner class that breaks down its input into tokens using any regular expression delimiter that is whitespace by default and it’s useful for parsing files also. Scanner class provides many methods to read and parse various primitive byte, short, int, float, double or even an object of String class. Java Scanner class commonly used to parse a string of text and primitive types using a regular expression. Before using a Scanner class, the program must import the package import java.util.*; OR Java.util.Scanner;. For example, java.util.Scanner specifies that Scanner is a class in the package util and util is a package in the package java. A Scanner class to create and obtain input from command window is as follows: Scanner input = new Scanner(System.in); As per the above syntax • You can declare scanner type object say 'input.' • To create a Scanner object, the new keyword used. •The Scanner class constructor, which takes an InputStream object (i.e., System.in) as a parameter. The System.in specifies the standard console keyboard input. We can create an instance of the Scanner class using one of the following constructors: • Scanner(File source): It constructs a new Scanner that scans the specified file using the system’s default charset. • Scanner(File source, String charsetName): It constructs a new Scanner that scans the specified file using the specified character set. • Scanner(InputStream source): It constructs a new Scanner from a byte input stream using the system’s default charset. • Scanner(InputStream source, String charsetName): It constructs a new Scanner from a byte input stream using the specified charset. • Scanner(Readable source): It constructs a new Scanner from a character stream. • Scanner(String source): It constructs a new Scanner from a String. • Scanner(ReadableByteChannel source): constructs a Scanner that produces values scanned from the specified channel. • Scanner(ReadableByteChannel source, String charsetName): constructs a Scanner that produces values scanned from the specified channel. • Scanner(Path source): constructs a Scanner that produces values scanned from the specified file. • Scanner(Path source, String charsetName): constructs a Scanner that produces values scanned from the specified file. The various methods of the scanner class are as follows: The nextShort, nextFloat, and nextLong methods in Scanner work similarly. Each of these methods (except nextLine) reads a set of consecutive characters until whitespace (such as a blank or a tab) encountered. The nextLine method reads all of the data that a user typed on a line before pressing the Enter key on a keyboard. We can create a scanner that reads data from the console by using a System.in argument to the Scanner constructor: Scanner scanner = new Scanner(System.in); The value returned by a method can copy into a variable whose data type matches that of the returned value. For instance, the integer value that is returned by the nextInt method copy into another variable called number1 using an assignment statement: int number1 = scanner.nextlnt(); If the user enters a value that is different from the type expected (such as a floating-point number), a run-time error result. The following statement reads a string from the console by using the next method and stores it in the variable named string1: String string1 = scanner.next(); The variables number1 and string1 can be used as needed in the rest of the program. We now make a small program that inputs two integers in Java and display result. In this program, we use the methods available in Scanner class to read data typed at the keyboard and place it into variables that we specify. import java.util.*; class InputDemo { public static void main (String args []) { int a,b; double pi; String name; Scanner s = new Scanner(System.in); System.out.printIn("Enter thevalue of a & b"); a = s.nextInt(); b = s.nextInt(); System.out.printIn("Enter the value of pi"); pi = s.nextDouble(); System.out.printIn("Enter the your name"); name = s.next(); System.out.printIn("The value of a = "+a); System.out.println("The value of b = "+b); System.out.print1n("The value of pi = "+pi); System.out.println("Name = "+name ); } } • Java Scanner class can do all that a BufferedReader class does with the same efficiency. • Java Scanner is not thread safe, but BufferedReader is thread safe. • Java Scanner class has various methods to manipulate input stream data. • Java Scanner class tokenizes the underlying stream using a delimiter that is white space by default. • The string manipulation in scanner class is entirely more comfortable as each word can be obtained as a token and handled separately. • Java Scanner class can parse the underlying stream for string and primitive types like int, short, float, double, and long using regular expression. • Java BufferedReader class can read only String but Scanner can read both String and primitive types. • Java Scanner class use (1KB) little buffer size as comapre to BufferdReader class has a significantly large buffer (8KB), that means if you were reading Lengthy data from a file, then use BufferedReader but for short you use Scanner class. • Java BufferedReader class is synchronized while Scanner class is not. Which means, you cannot share Scanner class between multiple threads, but you can share the BufferedReader class object. • Java Scanner class extends Object class and implements Iterator and Closeable interfaces. • Java Scanner class is not synchronised. • Java Scanner is not thread-safe. • Java Scanner has little buffer memory (1KB). • Java Scanner class can parse the underlying stream of data bit slower. • Java Scanner class comes from JDK 1.5
https://ecomputernotes.com/java/what-is-java/java-scanner-class
CC-MAIN-2019-35
refinedweb
993
56.96
JBoss.orgCommunity Documentation The <rich:effect> utilizes a set of effects provided by the scriptaculous JavaScript library. It allows to attach effects to JSF components and html tags. No developers JavaScript writing needed to use it on pages Presents scriptaculous JavaScript library functionality To create the simplest variant of <rich:effect> on a page, use the following syntax: Example: ... <rich:effect ... Example: import org.richfaces.component.html.HtmlRichEffect; ... HtmlRichEffect myEffect = new HtmlRichEffect(); ... It is possible to use <rich:effect> in two modes: attached to the JSF components or html tags: ... <!-- attaching by event --> <rich:panel> <rich:effect <!--panel content--> </rich:panel> ... <!-- invoking from JavaScript --> <div id="contentDiv"> <!--div content--> </div> <input type="button" onclick="hideDiv({duration:0.7})" value="Hide" /> <input type="button" onclick="showDiv()" value="Show" /> <rich:effect <rich:effect <!-- attaching to window on load and applying on particular page element --> <rich:effect ... "name" attribute defines a name of the JavaScript function that is be generated on a page when the component is rendered. You can invoke this function to activate the effect. The function accesses one parameter. It is a set of effect options in JSON format. "type" attribute defines the type of an effect. For example, "Fade", "Blind", "Opacity". Have a look at scriptaculous documentation for set of available effect. "for" attribute defines the id of the component or html tag, the effect is attached to. RichFaces converts the "for" attribute value to the client id of the component if such component is found. If not, the value is left as is for possible wiring with on the DOM element's id on the client side. By default, the target of the effect is the same element that effect pointed to. However, the target element is might be overridden with "targetId" option passed with "params" attribute of with function paramenter. "params" attribute allows to define the set of options possible for particurar effect. For example, 'duration', 'delay', 'from', 'to'. Additionally to the options used by the effect itself, there are two option that might override the <rich:effect> attribute. Those are: "targetId" allows to re-define the target of effect. The option is override the value of "for" attribute. "type" defines the effect type. The option is override the value of "type" attribute. You can use a set of effects directly without defining the <rich:effect> component on a page if it's convenient for you. For that, load the scriptaculous library to the page with the following code: Example: ... <a4j:loadScript ... If you do use the <rich:effect> component, there is no need to include this library because it's already here. For more information look at RichFaces Users Forum. <rich:effect> has no skin parameters and custom style classes, as the component isn't visual. Here you can get additional information how to create an image banner using <rich:effect> and <a4j:poll> components and figure out how to create an HTML banner from "Creating HTML Banner Using Effects And Poll RichFaces Wiki" article . In the RichFaces Cookbook article you can find information how to make a Slide Show with help of the <rich:effect> and <a4j:poll> components. On the component LiveDemo page you can see the example of <rich:effect> usage. How to save <rich:effect> status see on the RichFaces Users Forum.
https://docs.jboss.org/richfaces/3.3.X/3.3.1.GA/en/devguide/html/rich_effect.html
CC-MAIN-2022-40
refinedweb
550
50.12
Symmetry solution in Clear category for Counting Tiles by veky def checkio(radius): ran,solid,total=range(int(radius)+1),0,0 for x in ran: for y in ran: solid+=abs(x+y*1j+1+1j)<=radius total+=abs(x+y*1j) June 1, 2013 gflegar on Jan. 28, 2014, 10:38 a.m. <p>Same idea as mine. Just more readable :)</p> veky on Jan. 28, 2014, 11:13 a.m. <p>I'm getting old, obviously. :-D</p> glebmikh on March 3, 2015, 3:40 p.m. <p>Very cool! Could you explain 1j stuff?) Thanks</p> veky on July 20, 2015, 2:05 p.m. <p>Sorry, I don't know how I missed this. 1j is the imaginary unit, x+y*1j is a complex number with real part x and imaginary part y. It represents a point (x,y) in the plane. In the same way, x+y*1j+1+1j represents a point (x+1,y+1). abs() gives the distance from origin. Is it clearer now?</p> veghadam1991 on Oct. 13, 2015, 5:35 p.m. <p>Hmmm... with complex number. Clear and very creative too! Amazing! I like your solutions. :D Since when do you program in Python and any other languages?</p> veky on Oct. 13, 2015, 8:06 p.m. <p>Thanks. Many people forget Python has native support for complexes.</p> <p>I program for about 30 years, and have written a nontrivial amount of code in various BASICs, Pascal, HP RPL, C, C++, Wolfram Language, Perl and of course Python; and a trivial amount of code in 6502 assembler, FORTH, Java, batch language, ksh, Haskell, Prolog, Scheme, and various other languages, some of whom I designed. :-)</p> <p>Python is the only language I use professionally, and I started to use it when Python3 came out (about 6 years ago) - but for a long time it was just another language. About 2.5 years ago I started to really appreciate its strengths, and realized it really clicks with my deep convictions about coding.</p> piter239 on April 26, 2020, 4:24 p.m. <p>FORTH, the passion of my youth! It was about 1998, and there were no real computer to try it out, but I just loved the idea!</p> carel.anthonissen on May 3, 2017, 8:04 a.m. <p>Very tidy.</p> <p>Would you consider ever using itertools.permutations to collapse multiple outer loops?</p> <p>For example:</p> <pre class='brush: python'>for x,y in permutations(ran,2): solid+=abs(x+y*1j+1+1j)<=radius total+=abs(x+y*1j)<radius </pre> veky on May 4, 2017, 6:12 a.m. <p>Those are not the same. I want occasionally x and y to be the same. permutations will only give me injections.</p> <p>You mean `product(ran, repeat=2)`. And yes, I would use it if I had multiple places in the same program to use it. For only one time, probably nested loops are easier.</p> Tinus_Trotyl on July 26, 2017, 3:34 a.m. <p>As neat as from a schoolbook !!! </p> veky on July 26, 2017, 5:33 a.m. <p>Do you find that surprising? :-)</p> Tinus_Trotyl on July 26, 2017, 3:23 p.m. <p>What can I say. . . surprising has gone pretty relative in your case :-)</p> <p>I see you solved it in four times a quarter circle but how about processing eight 45° sectors simultaneously ? <a href=""> here </a></p> veky on July 26, 2017, 3:48 p.m. <blockquote> <p>What can I say. . . surprising has gone pretty relative in your case :-)</p> </blockquote> <p>I'm travelling to the Land of Weird Compliments again... :-D</p> <blockquote> <p>processing eight 45° sectors simultaneously ? here</p> </blockquote> <p>As you have seen, 4 quarters is <em>simpler</em> than whole circle processing. 8 eighths seems to me more complicated than that.</p> hao.daniel on Feb. 25, 2020, 5:28 p.m. <p>Hi, after reading yours, I've a strong urge to pick my math books again, and study hard. ;-) (btw, do you think someone with deep math background has<em>* advantage</em>* on coding/solving problems? </p> veky on Feb. 26, 2020, 7:04 a.m. <p>I think those things are correlated, but not necessarily in a cause and effect sort of way. But you're welcome to prove me wrong. [Obligatory XKCD: <a href="]">]</a> :-)</p> June 1, 2013 Forum Price Global Activity ClassRoom Manager Leaderboard Coding games Python programming for beginners
https://py.checkio.org/mission/counting-tiles/publications/veky/python-3/symmetry/share/df646ff414d460a983bd30870ef87e0f/
CC-MAIN-2021-17
refinedweb
767
78.04