About this project

Introduction

So, the project I am making will be a great help to the agricultural field, where we need to run several tests on the soil before we can start plantations. My EnvironmentCube is a smart cube which just needs to be placed in the soil; then leave the rest of the work to the cube. It is equipped with several high-quality sensors which will measure almost all the factors that need to be considered when testing the land. It will measure the salinity level of the soil, water content, temperature, humidity and the air quality of the surrounding environment, which will be a great help. The readings can be accessed readily from the ThingSpeak API, with Sigfox technology handling the backend.

Step 1: Construct your own 3D printed cube

The first step is to make the casing of your cube. Keep in mind that you need to fit a small power source for your Arduino Nano and the Bluetooth module, keep holes for the sensor outlets, and make the cube such that you can open and close it whenever you want. I didn't have access to a 3D printer, so I can't share an image of the box I used.

Step 2: Hardware connections

Now that you have your cube casing ready, let's proceed to the next step. We will connect our sensors to the Arduino MKR FOX 1200 board, which has 7 input pins available, enough for our sensors. We will use a custom-size breadboard to provide parallel voltage outputs for our sensors. The connection process for the sensors is described for the Arduino UNO; the same connections will be made to the MKR board. Now that we have all the hardware connections ready, let's get the software part ready.

Step 3: Setting up the software part for obtaining readings

First we set up our board and configure it with Sigfox.
- The first step is to visit the official page of the Arduino MKR FOX 1200.
- After this, you must register your Sigfox board.
You can follow these steps: SigFox First Configuration: ...
- This procedure registers your board and connects it to the Sigfox network.
- You will need these steps to access the readings of your board on the ThingSpeak dashboard using its API.

Step 4: Create a ThingSpeak channel

The next step is creating a ThingSpeak account. After your account is created, you can create a new channel. Your channel can then receive the data from the Sigfox backend. For this, you must take the API key of your channel and add it to the URL in the Sigfox backend system. Go to the API keys page and note your API keys for use on your Sigfox backend page. See the next step.

Step 5: Add the Command to the Sigfox Developer Portal

- Create a new Callback command on the Sigfox backend portal.
- Click on "Device type", then click on "Callbacks".
- Select Type DATA and Uplink.
- Select Channel URL.
- Add this line to "Custom payload config":
  status::uint:8 temp::int:16:little-endian t::int:16:little-endian h::int:16:little-endian
- Select "Use HTTP method" GET.
- Add this line to your Callback, replacing ############# with your ThingSpeak API key:
  ###############&field1={customData#temp}&field2={customData#t}&field3={customData#h}&field4={snr}

Step 6: All the Code and the Software

First of all, install the Sigfox library in your Arduino IDE. You can see the official guide on adding Arduino libraries. Install the Arduino Low Power library, the Sigfox library, and the required libraries for all the sensors we use. The field names in the callback data on the backend page correspond to the variables in your Arduino code:

t = dht.readTemperature();
msg.t = convertoFloatToInt16(t, 60, -60);
h = dht.readHumidity();
msg.h = convertoFloatToUInt16(h, 110);
[...]
msg.moduleTemperature = convertoFloatToInt16(temperature, 60, -60);
[...]
SigFox.write((uint8_t*)&msg, 12);

The code reads the temperature and humidity data from the DHT-11 sensor and converts the data to integers.
The data is then sent to the ThingSpeak platform using the SigFox.write command. The same process is followed for all the sensors that we are using. The biggest advantage of using this Arduino board is that it is a low-energy device, so it is possible to power the whole cube with just a battery, which makes it a very compact device. I have attached a sample Arduino sketch for the temperature and humidity sensor, so it will be easy for you to write the code for the other sensors: simply import the specific libraries, modify the pin numbers, and use the SigFox.write function to send the data to the ThingSpeak dashboard for monitoring. Enjoy!

Code

The code for the temperature and humidity sensor (DHT-11), Arduino:

#include <ArduinoLowPower.h>
#include <SigFox.h>
#include <DHT.h>

#define DHTPIN 1
#define DHTTYPE DHT11
DHT dht(DHTPIN, DHTTYPE);

float temperature;
float t;
float h;

/* ATTENTION - the structure we are going to send MUST be declared "packed",
   otherwise we'll get a padding mismatch on the sent data */
typedef struct __attribute__ ((packed)) sigfox_message {
  uint8_t status;
  int16_t moduleTemperature;
  int16_t t;
  int16_t h;
} SigfoxMessage;

// stub for the message which will be sent
SigfoxMessage msg;

void setup() {
  if (!SigFox.begin()) {
    // Something is really wrong, try rebooting.
    // Rebooting is useful if we are powering the board from an unreliable
    // power source (e.g. solar panels or other energy harvesting methods)
    reboot();
  }

  // Send the module to standby until we need to send a message
  SigFox.end();
  SigFox.debug();
  dht.begin();
}

void loop() {
  t = dht.readTemperature();
  msg.t = convertoFloatToInt16(t, 60, -60);
  h = dht.readHumidity();
  msg.h = convertoFloatToUInt16(h, 110);

  // Start the module
  SigFox.begin();
  // Wait at least 30ms after first configuration (100ms before)
  delay(100);

  // We can only read the module temperature before SigFox.end()
  temperature = SigFox.internalTemperature();
  msg.moduleTemperature = convertoFloatToInt16(temperature, 60, -60);

  // Clear all pending interrupts
  SigFox.status();
  delay(1);

  SigFox.beginPacket();
  SigFox.write((uint8_t*)&msg, 12);
  int lastMessageStatus = SigFox.endPacket();
  SigFox.end();

  // Sleep for 15 minutes
  LowPower.sleep(15 * 60 * 1000);
}

void reboot() {
  NVIC_SystemReset();
  while (1)
    ;
}

Author: Dhairya Parikh. Published on May 10, 2018.
https://create.arduino.cc/projecthub/dhairya-parikh/the-environment-cube-know-the-land-beneath-you-using-sigfox-952f29
I just got a Zotac ZBOX AD02 (AMD E-350 APU w/ Radeon HD 6310) and in general it's working great. However, I have a problem with Flash video on HBO GO. For the first minute or so of streaming video, everything seems perfect. Video is high quality and smooth, CPU is around 50%. Then after a minute or two there will always be some visible video glitch, and CPU utilization will jump up to 90-100% and playback becomes extremely choppy. So I'm guessing Flash hardware acceleration is working initially, then something happens that kills the hardware acceleration, and of course the E-350 CPU is not up to the task by itself. The really strange thing is that it seems to be specific to HBO GO. I can stream HD all day long from Amazon Instant Video and YouTube (at least when YouTube is able to dish it out fast enough). And it's definitely not a bandwidth problem. I have Comcast cable (20Mb down), and a bigger computer I have hooked up to another TV has no problem with HBO GO. I think the same problem happens on the bigger computer, but its faster CPU can overcome the lack of hardware acceleration. I've got the latest drivers from AMD and the latest Flash player (even tried the 11 beta) and have tried all the main browsers (FF, IE, Chrome), but it's always the same: 1-2 minutes of perfect playback and moderate CPU utilization, then jacked-up CPU and choppy playback. Unfortunately, there doesn't seem to be any way to contact HBO GO directly about this problem (support links just refer you to your cable provider), so I thought I'd give it a shot here. Is anyone else out there having this problem with HBO GO or any other Flash video sites? Any thoughts on what else I could do to track down the root cause? Mitch

I have a very similar problem with HBOGO and Flash Player. Check out the description and Adobe suggestions at: Bill

Sorry to resurrect this thread, but it's the only place I found where people were already talking about this issue.
I just started using HBO GO (I usually watch HBO shows on satellite, but I've been looking through some of their back catalog), and I'm seeing this exact same behavior. I'm running a Core 2 Duo E6600 and a GTX 460, but the effect is the same: video is smooth and CPU utilization is low for a couple minutes, then CPU usage spikes to 90-100% and the video playback gets choppy. Did anyone ever figure out what was causing this issue? It essentially makes HBO GO useless on my computer. I'm running the latest version of Flash (11.8) and recent video drivers, so I don't think either of those are the issue. Also, hardware acceleration works on all the other video sites with Flash-based players that I've tried; HBO GO is the only site where I experience this issue. EDIT: I feel quite silly now. I looked at the site code and found that HBO is using wmode transparent in their Flash object, preventing any hardware acceleration. The bump in CPU utilization appears to simply be the player switching from SD to HD bitrates (which triggers about two minutes in, apparently, if you have a high enough speed connection). Sucks that others are having problems, but at least I know it's not just me now.

It's interesting that Eavesdown is having the same problem on Nvidia. I figured it might have been some kind of problem between Flash and the Radeon 6310 drivers/hardware, but if the exact same thing is happening on Nvidia, then it really must be a general problem with Flash and/or the way HBO uses it. I never was able to resolve the issue, but I think I found a somewhat tolerable workaround. Just a few days ago I tried reverting to some old Flash versions as suggested in the link Bill posted. 10.3 had a weird image clarity problem (hard to describe without seeing it - almost like a blur effect), but 10.2 basically worked. I say "basically" because it's not perfect. 10.2 was able to stream HBO GO in HD with a good frame rate, but every 10-15 seconds there's a little jerk/chop in the video.
It's annoying, but it's at least watchable, unlike the 11.x versions, which are unwatchable once you lose HW acceleration. I was using Firefox for this test. I think as a workaround I'm going to see what 10.x version works best in IE and use that exclusively for HBO GO, since I don't use IE otherwise. Then I can keep the latest Flash version installed in Firefox as my main browser. I wouldn't recommend using the older Flash versions for general browsing due to all the security holes that have been fixed since then, but I figure HBO GO should be safe enough with the old version. I'll let you know how it goes. Hopefully HBO is working on an HTML5 version of the site, but I'm not holding my breath. To save others a few clicks, here are the links for the Flash uninstaller and archived versions:

We clearly think alike in our troubleshooting methods. I actually tried installing 10.2 for IE as part of a test earlier today, but the HBO GO site rejected me outright, saying it needed Flash Player 10.2 or better to run. I tried 10.3, as well, but received the same error, so it appears the Flash version detection script is broken when running against newer versions of IE. The site loads fine in IE if I'm running an 11.x version of Flash. With regards to the root issue: the problem shouldn't be specific to any video cards or drivers, since the site isn't actually utilizing them. I have no clue why they're using wmode 'transparent' (as opposed to 'direct' or 'gpu', which allow hardware acceleration), since they don't need to composite the player with anything else on the site. Heck, the entire site is essentially just one large Flash object, so they don't really need to worry about interaction with any standard HTML objects at all.
I imagine the site works fine for people with fast processors (although my 2.4GHz dual core should be plenty powerful for HD video if it were encoded/rendered properly), and lots of people access it through apps (iOS/Android/Roku/X360) that don't even use the site's code, so they probably figure that it's not worth their time to fix the site for people with older computers. As far as an HTML5 or Silverlight version of the site goes: even the main (non-streaming) HBO site is written in pure Flash, so the HBO web designers are clearly very enamored of taking the Flash approach to media display. I wouldn't expect that to change anytime soon without an incredibly compelling financial reason.

Well, my idea with 10.x on IE didn't really work. Results were not nearly as good as in Firefox, and I'm not willing to use 10.x in my main browser. However, based on your insight on the wmode=transparent issue, I started playing around with a Greasemonkey script to force wmode=direct, and I think it actually works. I haven't been able to try it at home yet on the problem PC, but here on my work PC (don't tell anyone I'm not actually working) it makes a massive difference in CPU utilization. Without the script CPU is around 80-90% during video playback. When I enable the script it drops to around 20%. The only issue is that I usually have to refresh the page once video playback starts to get wmode=direct to take effect (start video playback, then once the video playback page loads, hit refresh in the browser).
My script is just a modified version of this one I found: Here's my modified Greasemonkey script (hopefully the forum will allow javascript in posts):

// ==UserScript==
// @name        Force flash wmode direct on HBO GO
// @namespace
// @description Force flash video playback on HBO GO to use wmode direct to allow hardware acceleration
// @include     */video*
// @grant       none
// ==/UserScript==

(function () {
    nodeInserted();
})();

document.addEventListener("DOMNodeInserted", nodeInserted, false);

function nodeInserted() {
    for (var objs = document.getElementsByTagName("object"), i = 0, obj; obj = objs[i]; i++) {
        if (obj.type == 'application/x-shockwave-flash') {
            var skip = false;
            for (var params = obj.getElementsByTagName("param"), j = 0, param; param = params[j]; j++) {
                if (param.getAttribute("name") == "wmode") {
                    skip = true;
                    param.setAttribute("value", "direct");
                    break;
                }
            }
            if (skip) continue;
            var param = document.createElement("param");
            param.setAttribute("name", "wmode");
            param.setAttribute("value", "direct");
            obj.appendChild(param);
        }
    }
}

Wow... we really think alike. I first tried a GreaseMonkey approach to fixing the wmode yesterday, but found that it didn't work consistently. Instead, I opted for using Fiddler (a web proxy debugging tool) to substitute my own customized version of the 'go.js' file that the site uses to generate the Flash player object. That way, I didn't have to modify and regenerate the player object after the page loaded to engage the new wmode. Unfortunately, the player would still seem to drop out of hardware acceleration mode from time to time. Perhaps there is something in the player's code that is causing the object to refresh itself into a transparent wmode when certain bitrate changes occur? I'd be curious to hear if your GreaseMonkey script works on your home PC that was having all the issues, since I couldn't seem to get the wmode to stick properly using a userscript technique.
If the GreaseMonkey trick isn't working at home, and you'd like to try my Fiddler approach, just send me a PM, and I can give you the updated code that I'm running through Fiddler (as well as a quick Fiddler tutorial, if you haven't used it before). EDIT: If you don't want to have to refresh your GreaseMonkey script when you go to a video page, just change the 'include' line to: @include http://*.hbogo.com/* That way, it'll trigger no matter where you are on the site, and make the site render more smoothly in general, since even the navigation is Flash-based.

No luck on the home PC. Like you said, the hardware acceleration won't stick. It's weird because I left the video running for a long time on my work PC (at least 20 minutes) and it maintained the low CPU utilization the whole time, so I know acceleration was working. Maybe Intel graphics on the work PC make a difference. But anyway, no such luck at home. BTW, I tried wmode=gpu as well with the same results. Once you get it into accelerated mode it will run OK for a while, but it always loses acceleration at some point. Sounds like the Fiddler approach is behaving the same way. Bummer... Not that it matters now, but I limited the GreaseMonkey script to the video page intentionally, figuring there must be a reason they used transparent on the site. It would be even more infuriating to find out there was no reason to use transparent in the first place, which seems like it might be the case. What browser have you been using? I've been using Firefox, but thought I might give Chrome a try with the Greasemonkey script. Maybe Chrome's embedded Flash player will behave differently. Probably not, but I'm running out of ideas at this point.

I've been using Firefox for most of my testing, as well, but I was going to try both Chrome and IE with my Fiddler approach later today (since it's a browser-independent solution).
The wmode direct setting runs fine for the rest of the site with no issues (moving through "pages" on their site is really just manipulating layers of the Flash object, so the video "pages" are essentially identical to the browse "pages"); it appears there's no real need for the wmode transparent anywhere on the site. At this point, I'm guessing that the site may be straight up ignoring the wmode parameter set in the object (or resetting it to transparent internally the second any action is taken on the site), and that the low CPU utilization is only a result of low-bitrate streaming when the streams start up. I'm not sure if you see the same effect, but the player always runs fine when it's streaming the ugly, low-res picture for the first couple minutes, and immediately starts to choke when it switches to high res (with noticeable visual improvements). It would really be nice if HBO gave us a bitrate selector like Netflix does, so we could test that hypothesis, but I don't really expect them to improve the player any time soon, since it doesn't seem to be a significant part of their revenue model.

I know exactly what you're talking about with the low-def initial playback and then the switch to HD, but at work I'm definitely seeing long periods of HD playback and low CPU with wmode=direct. In fact you can usually tell when direct is working because it seems to kick into HD right away. With transparent it often takes a couple minutes. Basically at work it seems to be functioning as you would expect in direct mode. Unfortunately I can't sit here and watch an hour of TV at work, so I can't say for sure that it never loses HW acceleration, but it definitely works for much longer periods than at home. You'd think the crappy old Intel GPU I have at work would be about as far away from "accelerated" as you can get, but it seems to work.

It's very curious to me that the site is acting differently on the Intel GPU machine.
I was running some more tests (no better luck with Chrome or IE, alas), and noticed that, despite wmode direct being set, StageVideo (the hardware acceleration pipeline for OSMF) never gets used on my machine. Can you check whether that's true on the Intel GPU machine? If you right-click on the video while not in fullscreen mode and select "Show Video Debug", it should show you the StageVideo status. On my comp, the debug status shows StageVideo as being enabled in OSMF and supported by my version of Flash, but the HBO GO player always says "StageVideo is not being used, regular video 'probably' is." This suggests to me that HBO simply isn't using the StageVideo class in their streaming code, meaning there shouldn't be hardware acceleration available regardless of wmode settings. Based on the dev documentation for StageVideo, acceleration should also be engaging in fullscreen (if available) regardless of the wmode setting. Since I don't see any improvements in fullscreen mode, it seems highly unlikely hardware acceleration is available for the player in any shape or form. I wonder if the improvements on the Intel machine may simply be a result of wmode direct being generally less resource intensive, making it easier for the player to be rendered on the CPU, instead of a benefit from hardware acceleration kicking in. EDIT: I'm not sure why I didn't think of it earlier, but I just checked my GPU's video engine load during HBO GO playback, and it's not engaging at all, so I'm clearly not getting hardware acceleration at any point with the player, regardless of wmode. I think I'm going to throw in the towel here and give up on watching HBO via my computer. Roku boxes are pretty cheap these days, so I may look into grabbing one of those to hook up to my TV for HBO GO playback. It's still annoying that the HBO GO techs couldn't be bothered to configure the player in such a way that it allows for hardware acceleration. Yeah, I'm with ya. 
I think I'm going to throw in the towel too. When in direct mode on the work PC, I can't seem to get the video debug dialog to display. The option is there in the context menu, but nothing happens when I click it. Or maybe it does, and there's some kind of layer/z-order problem that keeps it from showing on top of the video. In transparent mode, it's exactly as you describe. That's also what I see at home. The Intel GPU is a real mystery. Unfortunately, I don't think there's any way to monitor the load on old Intel graphics to confirm for sure whether it's accelerating or not, is there? But the effect of wmode=direct is dramatic on the CPU. Like I said, around 20% utilization in direct mode and 2-4 times that in transparent. The image quality is high and seems the same in both modes. And it's 100% consistent. Enable Greasemonkey, refresh the page and CPU drops. Disable it, refresh, and it jumps. Maybe you're right that it's just the overhead of transparent mode, but I wouldn't have guessed the difference would be that big. Which makes me think it's accelerating, but who knows? I actually wish it didn't work so well, because it leaves me with this nagging sliver of hope that I'm close to solving it, even though the more rational part of my brain is telling me it's not possible. Oh well, thanks for trying to help solve this. At least it was kind of fun trying to figure out the Greasemonkey script. But man, it's frustrating to get nowhere after all the effort and know that some dev at HBO GO could probably fix it in 5 minutes.

The fact that you can't get the debug info to show (combined with the massive shift in CPU usage) on the Intel machine when using wmode direct makes it sound like something is happening to the display pipeline with wmode direct. I don't see any of those effects on my nVidia machines. I wish I had an onboard Intel GPU machine around here to test on.
I don't know if GPU-Z will monitor engine loads from onboard graphics or not, but you might try it on the Intel box, just for kicks. If it does happen to work, it could tell us definitively whether the player is somehow being hardware accelerated on that machine or not.

As you said, we think alike. Already tried GPU-Z, and clock frequency was the only thing it could pull.

HBO updated the site to allow hardware acceleration! They updated the javascript code for their "supportsWModeDirect" function in the main go.js, and they appear to have made whatever changes were necessary on the back-end to allow the stream to run GPU accelerated. The end result: in Chrome (I'm running the v30 beta, but it shouldn't matter), I get full hardware acceleration for HD video on the site. The javascript is explicitly denying WMode direct support on any browser with Firefox or Minefield in the user-agent string, so there's still no hardware acceleration in FF. Can you test the site in Chrome (with no userscripts or Fiddler proxy scripts) on your AMD machine to see if you get GPU acceleration on your Radeon (using the standard GPU-Z test), as well?

I suffer this problem too. 80-100% CPU. Chrome plays OK till HD, Firefox plays OK till HD, IE does a little bit better using 64-bit Flash, but eventually chops too. This wouldn't particularly be a problem if HBO would allow the user to force standard definition... Silverlight does not have this issue. OS is 64-bit Win 7 Pro, ATI on-board graphics, AMD Athlon x2 CPU.
http://forums.adobe.com/thread/888195
Market Turning Points

LINE IN THE SAND ABOUT TO BE TESTED

The SPX is at a critical juncture which will determine whether it puts an end to its correction here and now, or extends its decline. On the Point & Figure chart, all the trading above 1398 looks like a distribution pattern which could have an ultimate count down to 1265, and perhaps even a little lower. A more conservative count would be satisfied at 1305. Since cycles determine the basic market rhythm, and cycles appear to be down into late May, and perhaps even mid-June, the amount of distribution established at the market top should give us an idea of how far the market will decline into those time periods. The reason why the index has been holding so well above 1342 is that it represents a completed phase of the total count, and it is normal that it should pause at this level and catch its breath. Furthermore, for those EW analysts who are looking for this correction, as a wave 4, to be over, 1340 represents the ideal level at which it should end before starting wave 5. The market is peculiar in that the top distribution pattern is normally confirmed by the next lower level of distribution. In this case, the SPX first declined to 1360 and had a strong rally before it declined further to 1344. That left all the activity between 1360 and 1415 as a secondary distribution phase which, on the P&F chart, does confirm the original counts. It gives us a conservative projection to about 1305, and a more liberal one to 1368. But the confirmation process does not stop there! It appears that we may have created another, smaller distribution pattern between 1344 and 1365, which may now be complete. On Friday, the SPX made what appears to be a final attempt at moving above the 1365 resistance for the third time. After an opening down-thrust and a sharp reversal, the index made it once more to 1365.66 and stalled.
It then started retracing its advance under no apparent selling pressure, but even less buying, drifting down past one level after another where it could (should) have reversed if its intention was to rise above 1365. By the end of the day, it had gone past what I considered to be the point of no return (1355) and closed near the low of the retracement at 1353.57. That looks like bearish action, and it makes me think that the pause in the downtrend may be over. At the very least, 1344 is likely to be re-tested, with a strong possibility that the SPX is about to make a new low. Earlier, I forgot to mention that the apparent re-distribution pattern between 1344 and 1365 (if complete) gives us another confirming count to about 1305! Needless to say, we should be curious to see what happens on Monday, perhaps getting a preview from the Globex futures on Sunday. If we decline below 1340 with appropriate confirmation from A/D and volume, the odds are very good that we are on our way to the next projection level of 1305.

Chart analysis

There is nothing positive about the SPX Daily Chart. If it does not break below 1340, you could say that the SPX has just started building a base above that level but, even so, it would not be ready to move higher until the indicators say it is, and right now, they are saying the opposite. Last week, I showed the same chart and pointed out that the SPX was in the process of adjusting the angle of its trend from 1159. Then, it was just about ready to move out of its grey channel into a wider, blue one. The past week shows that this is precisely what it has started doing, and there is no indication that it does not intend to penetrate deeper into it. Looking at the MAs, the 21-DMA has been declining since shortly after the first sell signal at 1422. The 8-DMA, which had broken back above it when the SPX re-tested the high, has just crossed back under at an even steeper angle than it did the first time.
There is no indication of the deceleration which precedes a reversal in these indicators. Both oscillators are essentially saying the same thing. The MACD made a new low last week with the histogram still in a downtrend, and the more volatile CCI has just gone from positive to negative. So far, the indicators are not contradicting the scenario that was laid out in the P&F and cycle analysis above. Until they do, we should stay with the original forecast. The Hourly Chart has the same negative look as the daily chart. The index has broken a former low after making a lower high. Technically, that puts it in a confirmed downtrend. To get out of it, it would have to move back above the 1415 top. There is no indication that it is ready to do that. On the contrary, after making a 5-wave decline from 1415, the SPX consolidated in what appears to be a contracting triangle, which we know to be a continuation pattern. It also appears that the triangle was completed with Friday's thrust back up to 1365. Above, I described the behavior of the index after it had made that high as bearish. This is borne out by the MACD, which has remained negative during the entire consolidation, with the lines just about ready to make another bearish cross. The histogram is about to go red. Below, the CCI has already turned down decisively in a bearish cross and has broken beneath the former low. Since we know that a contracting triangle is a continuation pattern and that it was preceded by a 5-wave decline, this intimates that the next move will most likely be another 5-wave decline. Cycles say that they are still in a bottoming process, and P&F says that the next target is about 1305. The odds that the SPX is about to make a new low are pretty good right now.

Cycles

The market has been in a declining pattern for 6 weeks. This most likely means that the half-cycle of the 66-wk cycle will be a low instead of a high this time.
Since it tends to be fairly regular, we should expect it to bottom around 5/20. Around the time that it makes its low, the 33-wk cycle will be joined by an 11-wk cycle. The next dominant cycle is due in mid-June.

Breadth

The McClellan Oscillator and the Summation Index (courtesy of StockCharts.com) are depicted below. The NYMO has been negative for more than three months, except for a couple of small blips into positive territory. The first one was a one-day affair which made no impression on the NYSI. The second was more recent and did turn it up for a brief moment. But when the NYMO dropped again below zero and stayed there to date, the NYSI rolled over and continued its downtrend, making a slightly new low on Friday. This is another reason why we should not expect the SPX to resume its uptrend right away. The selling pressure in the A/D has not abated, and it must do so before equities can move back into a positive trend.

Sentiment Indicators

The VIX

In this hourly chart of the VIX, you can see how the index is replicating the pattern which was made by the SPX, except in reverse. Just as the SPX is trying to continue its decline, the VIX is attempting to remain in an uptrend. A break-out to a new high would mean that the SPX has made a new low, and it looks as if this is what the VIX is ready to do. First, let's look at the chart. When the SPX made a new low to 44, the VIX surpassed its former high, thereby being in sync with the equity index. One of the characteristics of the VIX is that it tends to let you know when it and the SPX are ready to make a significant reversal. If you look at the bottom of the chart where the word "divergence" is written, you can see that it was followed by a strong up-move in the VIX while, at the same time, the SPX went in the opposite direction. The green arrow marks the point where the SPX made its high at 1415, while the VIX had already made its low 2 weeks earlier.
The same thing happened at the former low when the SPX reached 1422, except that there, the VIX gave us one month's lead. The point I am making is that there was no divergence at the last high, and this means that the VIX is probably not done with its uptrend and the SPX with its downtrend. On the chart, the VIX shows a normal consolidation which is a back-test of the broken trend line, and a bounce off that trend line on Friday. Now look at the bottom indicator: by moving above its former high point, and going past its blue MA with a thrust into the positive, there is a pretty good chance that the index is signaling the beginning of another up-move, and you know what that means for the SPX!

XLF (Financial SPDR)

The XLF was making the same pattern as the SPX until it got knocked for a loop on Friday by the J.P. Morgan Chase fiasco. Although it partially recovered when the market did, it closed well below its support line, which had already been violated once before. Here is another negative for the market: the XLF tends to match the SPX step for step on the short term until it starts deviating from it, as it did at the 1422 top. As I pointed out before, along with the VIX, it serves as a good lead indicator, and Friday's action does not augur anything bullish for the SPX and other equity indices. And why should it? If it did something different, it would be going against what the above, larger picture scenario has forecast. The XLF is in sync with it, and this is as it should be.

BONDS

Early April is when TLT stopped its corrective move from 125 -- about two weeks before the SPX made a high. (Geez! Another leading indicator.) And what did it do on Friday? It made a fractional recovery high while the SPX is still 4 points above its low. This index is in sync with the ones above! After its sharp drop from 125, I thought that TLT had made a large distribution pattern and was bound for lower levels. Not yet!
This may be turning out to be just a large consolidation pattern which the index is using as a springboard to make a new high. But that's only speculation. TLT may have started a short-term uptrend by overcoming a downtrend line, but it has another one to deal with about 4 points higher. That's the one which will decide whether it will be allowed to reach for the stratosphere or, like Icarus, its bullish wax will melt and it will swiftly head back to earth.

UUP (Dollar ETF)

Daily Chart. The reason why I have chosen these specific charts to include in my weekly analysis is that each one tells me something about the condition of the market. When they all say the same thing, it makes the forecast more reliable. Below is a daily chart of the index which spans a little over six months. It shows UUP completing the second phase of the break-out from its base in early August, and then consolidating for over 3 months. Now, it looks as if it is attempting to break out of its consolidation phase and is already challenging the last short-term high. Its move of the past two weeks has had a material effect on the commodity index (gold and oil specifically), and the indicators show that the move is probably not over. The MACD has gone positive and its histogram is still rising. There is no deceleration indicative of a top! The CCI became overextended, but has corrected. It will probably have to show some negative divergence before the move comes to an end. The P&F chart shows that if it rises to 22.40, it will probably continue upward to challenge the former high of 22.85, which was reached in early January. As long as UUP continues to move up, it will continue to put downward pressure on gold, oil and, most likely, the equity indices as well. UUP represents what the US dollar is doing, and we know that between May and September of last year, the dollar established a base which gives us a count to 90.
All P&F projections represent a potential and not a guarantee but, as long as it does not start a significant decline, it can be expected to continue its move toward that objective and eventually reach it. Regarding the dollar, the P&F count of its re-accumulation base gives it a potential target of 85/86. If it is reached, it would put UUP at about 23.80. The re-accumulation pattern that you see on UUP also calls for a target of 23.80. That's the math! We'll see whether or not it materializes. I have placed the chart of the Euro updated to Friday next to the one I showed a week earlier. It looks to me as if it has broken support and has a ways to go according to the pattern projection. Looking at this, it would not be inconceivable to see the dollar rise to 85/86 and UUP to 23.80.

GLD (ETF for gold)

Since reaching its intermediate-long-term projection of 185, GLD has been in a consolidation. It has tried to rally twice, and both attempts have failed. Now, it seems to be ready to make a new low as it moves toward the time frame (mid-June) when the cycle, which averages about 25 wk, and which has been regulating its short-term moves, is due to make its low. Previous cycle lows are marked on the chart by (^) and can be traced back to the beginning of its long-term trend in 2008. It's possible that prices will find some temporary support at their former cycle low but since, at 141, the correction would have retraced .382 of its advance, and this is supported by a P&F count, it is probable that this is where the index will find itself in mid-June.

OIL (USO)

USO started to decline before the SPX and, recently, with the rise of the dollar and the general weakness in commodities, it has accelerated its down-trend. The index is currently finding some minor support above a former low, but is likely to continue its decline until it reaches the green horizontal trend line where much stronger support exists. This is also corroborated by a P&F count to 35.
A break below that level would be a big negative for USO and for the market. WTIC (96.65), naturally, is making the same bearish pattern and is also likely to decline a little lower to its 94 P&F target.

Summary

The above charts uniformly show bearish patterns which portend lower prices over the next few days. The cause of the downward pressure at this time is primarily the bottoming of the 33-wk cycle, which is due to make its low around 5/20. Since that date falls on a Sunday, we might expect the decline to end either Friday or early the following week - or whenever the SPX reaches its next P&F projection. While this could be the end of the correction from 1422, it is also possible that it might not be over until mid-June because of the current cyclical configuration.
http://www.safehaven.com/article/25417/market-turning-points
Arduino: 1.6.9 (Windows 10), Board: "Arduino Duemilanove or Diecimila, ATmega328"

sketch\Blink.ino.cpp:1:21: fatal error: Arduino.h: No such file or directory
 #include <Arduino.h>
                     ^
compilation terminated.
exit status 1
Error compiling for board Arduino Duemilanove or Diecimila.

This report would have more information with "Show verbose output during compilation" option enabled in File -> Preferences.

Error compiling for board Arduino Duemilanove or Diecimila.

Hi, I am totally new to all this myself, so I could be wrong, but at least hopefully it pushes your question back up the pile. I know within the Arduino package there is information regarding adding new libraries; however, I would have thought your library was included. I know when I tried Blink it was there and worked. I would try deleting the Arduino software and try again to see if it's there. Hope it works, or one of the many knowledgeable people out there can help.

Is your board a Duemilanove or Diecimila? If not, in the Tools menu, Board, select the board that you have and try it.

> I would try deleting the Arduino software and try again to see if it's there.

I have been using Arduino for several years and I have never seen the library, or any programme that calls for

#include <Arduino.h>

void setup() {
  // initialize digital pin 13 as an output.
  pinMode(13, OUTPUT);
}

// the loop function runs over and over again forever
void loop() {
  digitalWrite(13, HIGH); // turn the LED on (HIGH is the voltage level)
  delay(1000);            // wait for a second
  digitalWrite(13, LOW);  // turn the LED off by making the voltage LOW
  delay(1000);            // wait for a second
}

OK, I can't quibble about the date, then. And it's the same as I have. The line

#include <Arduino.h>

is not included in the code, and I therefore believe you have an installation problem as previously suggested above. I can't say what it is, as my version compiles OK. I'm afraid I am at a loss.
I am using 1.6.8 under Windows 7 on this desktop and my other versions are older. Perhaps I could take a wild guess that you are suffering some obscure Windows 10 security problem, but I haven't heard of anybody having that. Meanwhile my Mega is blinking away........
http://forum.arduino.cc/index.php?PHPSESSID=7va78mnmaumhh3uq1lj97mvqa4&topic=413198.0
I'm looking for example type systems that can type list structure. For a simple example... (Sorry, I think in code)

// Map two elts at a time, not two lists
def map2(alist:List, f(*,* -> 'u) -> 'u List)   // f(*,*) is not good
  def mklis(nl: 'u List, rest: List)
    match rest
    | a :: b :: r -> mklis(f(a,b) :: nl, r)     // types of a and b?
    | else -> reverse(nl)
  in mklis(nil, alist);

def plist2alist(plist:List -> <List>List)
  map2(plist, fn(a,b) a :: b :: nil);

plist2alist('(A, 1, B, 2, C, 3)) => ((A,1),(B,2),(C,3))

It would be very nice to type plist's internal structure, thus allowing for typing the map2 function, plist2alist()'s resulting internal structure, etc. I can sort of imagine some kind of regex typing construct, but I have no clear ideas on this. Any languages out there do a good job of typing repeating internal patterned structure like this? If this is impossible for any theoretical reason, I'd love to know that too :-) Many thanks. Scott

Various systems have been proposed to type XML data, which is a very similar problem. The first example that comes to mind is XDuce, although I'm sure there are many others. XDuce at least does indeed use a notion of "regular expression types" as you suggest. I would look in this area if I were you.

You might want to bug cdiggins, one of the posters around here and author of the Cat programming language; his whole deal is having a typed stack, which seems like a very similar problem.

I mucked around a bit. Sorry if the syntax seems funky, but the idea should come across. Also, if one can slip a constructor into the argument list of ConsA, then ConsB could be moved into the definition of List2. This sort of achieves what we might want for a Lisp-style property list, but it's not really a list anymore, using a new set of constructors.
class 'h 't ConsB(hd: 'h, tl: 't) ;
class 'a 'b List2 = ConsA(hd: 'a, tl: ConsB('b, 'a 'b List2)) | Nil2 ;

As it stands, it seems to me just an odd way to write ConsA('a, 'b, 'a 'b List2). A pattern match in the first case would look something like

| ConsA(k, ConsB(v, r)) -> // Fool with k, v, r values

or this in the second case of a two-headed Cons:

| ConsA(k, v, r) -> // Fool with k, v, r values

This gives us something like a well-typed property list, but again, it's not really a list - we're not here imposing type structure on a series (repeating or not repeating) of Cons cells. I mean, they are Cons cells, in the ConsA/ConsB case, in the sense that they have head/tail - but they are different types (different constructors). I can sort of imagine something like the above definition, but somehow just specifying additional constraints on the good old "regular" List and cons cells. Perhaps this would be a kind of subtyping or "inheritance", as in OOP? Dunno. The regular expression idea from XML is also intriguing. Seems like sort of the same problem - imposing typed structure on aggregates that use the same underlying compositional machinery.

You might want to take a look at nested data types.

data Twist a b = Nil | Cons a (Twist b a) deriving Show

map2 _ Nil = Nil
map2 f (Cons x (Cons y rest)) = Cons (f x y) (map2 f rest)

main = do
  let pl = Cons "a" (Cons 1 (Cons "b" (Cons 2 (Cons "c" (Cons 3 Nil)))))
  print (map2 (,) pl)
  print (map2 (flip replicate) pl)
http://lambda-the-ultimate.org/node/2875
Thank you, Yuri. I have reported it as. And I have another problem: what data should we seed the SecureRandom? Is the System.currentTimeMillis good enough? Any better candidate? Thanks. On 6/7/07, Yuri Dolgov <dolgov.g.yuri@gmail.com> wrote: > > Hi Leo, > I'll try to answer your questions > > So my question is: > 1. Is the SecureRandom really been seeded? > 3. Is the implementation of SecureRandomSpi that seeds itself? > Yes, it is been seeded. You had a right asumption, currently > implementation > of SEcureRandomSpi seeds itself: > > protected void engineNextBytes(byte[] bytes) { > ... > if (state == UNDEFINED) { > > // no seed supplied by user, hence it is generated thus > randomizing internal state > updateSeed(RandomBitsSupplier.getRandomBits(DIGEST_LENGTH)); > nextBIndex = HASHBYTES_TO_USE; > ... > > > 2. How is it seeded as spec says? > Spec doesn't permit to organize seeding this way, but on the other hand, > spec doesn't say that SecureRandomSpi must seed itself at first call of > engineNextBytes(byte[] bytes) method, thus we could potentionaly have a > problem when operating with thirdparty provider implementing SecureRandom > . > I think that we should file a JIRA to call setSeed method for first call > of > nextBytes(byte[] bytes) in SecureRandom implementation. 
> Thanks,
> Yuri
>
> On 6/7/07, Leo Li <liyilei1979@gmail.com> wrote:
> >
> > Hi,
> > I found the spec says, to a non-argument constructor for
> > SecurityRandom, the SecurityRandom():
> >
> > [...]
> >
> > But it seems that SecureRandom does not call setSeed before the first call
> > to nextBytes when it is not seeded.
> > Here is a testcase:
> >
> > public class TestSecureRandom {
> >     public static void main(String[] args) {
> >         SecureRandom secureRandom = new MockSecureRandom();
> >         secureRandom.nextBytes(new byte[32]);
> >         System.out.println("Succeed!");
> >     }
> > }
> >
> > class MockSecureRandom extends SecureRandom {
> >     @Override
> >     public synchronized void setSeed(byte[] seed) {
> >         System.out.println("setSeed called!");
> >         super.setSeed(seed);
> >     }
> > }
> >
> > Which shows that although the secureRandom is not seeded, when we get
> > the nextBytes, it is not seeded by setSeed.
> >
> > So my question is:
> > 1. Is the SecureRandom really been seeded?
> > 2. How is it seeded as spec says?
> > 3. Is the implementation of SecureRandomSpi that seeds itself?
> >
> > Thanks.
> > Good luck!
> >
> > --
> > Leo Li
> > China Software Development Lab, IBM

--
Leo Li
China Software Development Lab, IBM
http://mail-archives.apache.org/mod_mbox/harmony-dev/200706.mbox/%3Ce66844de0706070254p5c388a6bsa522628120f594df@mail.gmail.com%3E
One of the new controls is the WebBrowser… aha!! not the WinForms control.. we now have it in WPF. Functions supported by this control are:

- NavigateToString
- NavigateToStream
- Navigate
- GoBack
- GoForward

In XAML it would look something like this:

<StackPanel Name="panel">
    <WebBrowser Height="500" >
    </WebBrowser>
</StackPanel>

I put together a simple sample which tries to use this functionality. Looks like the above. Nothing fancy 🙂 Also, since this is beta, you might find a few rough edges (bugs) here and there. Please report them on the Connect website or on the forums.

Ooh, nice. Yet another barrier to WPF adoption gone. Any rumours on whether this control will be in Silverlight 2.0?

That's just cheating – you can't rotate it!

WPF Browser Control, Crossbow?

When loading your solution I get the following error message: Error 1 The tag 'WebBrowser' does not exist in XML namespace ''. Line 7 Position 6. D:WebBrowserWebBrowseradhocWindow1.xaml 7 6 adhoc

You need to have .NET 3.5 SP1 installed on your machine.

ASP.NET Disabling a User Interface Element During a Partial Page Postback [Via: 4 Guys from Rolla]…

It's still a handle-based window, not a pure WPF visual control.

Damn, that was what I feared. I guess whatever bits of IE they are running under the covers are hard to unshackle from this mode of operation. It would be great if this stuff could get rendered into a WPF visual off-screen, so that it would render properly. I'd imagine that would make all the input interaction hard to do, though. On that note, I wonder how difficult it would be to make a compliant HTML rendering engine using WPF. I wonder if anyone has started this project?

Can you set the HTTP user-agent?
Can you pass in an object for scripting so that JavaScript can call functions on it via window.external?

MSBuild: MSBuild Reserved Properties [Via: Sayed Ibrahim Hashimi] SharePoint: Adding Copy and Paste…

Hallo

That's really cool, but those of us without SP1 must still use the Frame. Is it possible to load HTML into a document viewer or some other control that allows you to zoom the content?

How can I set the title of the page I navigate to as my WebBrowser header? I can get it via webBrowser.AxIWebBrowser2.LocationName… but AxIWebBrowser2 is not a public member, hence I am not able to access it… any ideas?

.NET 3.5 SP1 adds a new WebBrowser control for use in WPF applications. From now on, there is no need to use the old WinForms WebBrowser in WPF programs; just write the <WebBrowser> tag directly in XAML. [SharePoint]: SharePoint (MOSS) 2007 now has a SharePoint Online service. It is part of Microsoft Online Services, and is also Microsoft's "Everything
https://blogs.msdn.microsoft.com/llobo/2008/06/12/wpf-webbrowser-net-3-5-sp1/
The Maze of Python Dependency Management In this post, I'd like to shed some light on dependency management in Python. Python dependency management is a whole different world. Join the DZone community and get the full member experience.Join For Free For over 20 years, I've developed code for the JVM, first in Java, then in Kotlin. However, the JVM is not a silver bullet, e.g., in scripts: - Virtual machines incur additional memory requirements - In many cases, the script doesn't run long enough to gain any benefit performance-wise. The bytecode is interpreted and never compiles to native code. For these reasons, I now write my scripts in Python. One of them collects social media metrics from different sources and stores them in BigQuery for analytics. I'm not a Python developer, but I'm learning - the hard way. In this post, I'd like to shed some light on dependency management in Python. Just Enough Dependency Management in Python On the JVM, dependency management seems like a solved problem. First, you choose your build tool, preferably Maven or the alternative-that-I-shall-not-name. Then, you declare your direct dependencies, and the tool manages the indirect ones. It doesn't mean there aren't gotchas, but you can solve them more or less quickly. Python dependency management is a whole different world. To start with, in Python, the runtime and its dependencies are system-wide. There's only a single runtime for a system, and dependencies are shared across all projects on this system. Because it's not feasible, the first thing to do when starting a new project is to create a virtual environment.. -- Virtual Environments and Packages Once this is done, things start in earnest. Python provides a dependency management tool called pip out-of-the-box: You can install, upgrade, and remove packages using a program called pip. 
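As an aside, the set of packages installed in the active (virtual) environment can be inspected from Python itself with the standard library's importlib.metadata — a minimal sketch, not from the original article:

```python
# List every distribution installed in the current (virtual) environment,
# mapping each project name to its version -- the same data `pip freeze` prints.
from importlib.metadata import distributions

def installed_packages() -> dict:
    """Return {distribution-name: version} for the active environment."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in distributions()
        if dist.metadata["Name"] is not None
    }

if __name__ == "__main__":
    for name, version in sorted(installed_packages().items()):
        print(f"{name}=={version}")
```

Running it inside the Flask environment described next would print the same six pins that pip freeze produces.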
-- Managing Packages with pip

The workflow is the following:

- One installs the desired dependency in the virtual environment:

  pip install flask

- After one has installed all required dependencies, one saves them in a file named requirements.txt by convention:

  pip freeze > requirements.txt

  The file should be saved in one's VCS along with the regular code.

- Other project developers can install the same dependencies by pointing pip to requirements.txt:

  pip install -r requirements.txt

Here's the resulting requirements.txt from the above commands:

click==8.1.3
Flask==2.2.2
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.1
Werkzeug==2.2.2

Dependencies and Transitive Dependencies

Before describing the issue, we need to explain what transitive dependencies are. A transitive dependency is a dependency that's not required by the project directly but by one of the project's dependencies, or a dependency's dependency, all the way down. In the example above, I added the flask dependency, but pip installed 6 dependencies in total. We can install the deptree dependency to check the dependency tree:

pip install deptree
deptree

The output is the following:

Flask==2.2.2  # flask
  Werkzeug==2.2.2  # Werkzeug>=2.2.2
    MarkupSafe==2.1.1  # MarkupSafe>=2.1.1
  Jinja2==3.1.2  # Jinja2>=3.0
    MarkupSafe==2.1.1  # MarkupSafe>=2.0
  itsdangerous==2.1.2  # itsdangerous>=2.0
  click==8.1.3  # click>=8.0

It reads as the following: Flask requires Werkzeug, which in turn requires MarkupSafe. Werkzeug and MarkupSafe qualify as transitive dependencies for my project. The version part is interesting as well. The first part mentions the installed version, while the commented part refers to the compatible version range. For example, Jinja requires version 3.0 or above, and the installed version is 3.1.2. The installed version is the latest compatible version found by pip at install time.
pip and deptree know about the compatibility from the setup.py file distributed along with each library.

-- Writing the Setup Script

Here it is for Flask:

from setuptools import setup

setup(
    name="Flask",
    install_requires=[
        "Werkzeug >= 2.2.2",
        "Jinja2 >= 3.0",
        "itsdangerous >= 2.0",
        "click >= 8.0",
        "importlib-metadata >= 3.6.0; python_version < '3.10'",
    ],
    extras_require={
        "async": ["asgiref >= 3.2"],
        "dotenv": ["python-dotenv"],
    },
)

Pip and Transitive Dependencies

The problem appears because I want my dependencies to be up-to-date. For this, I've configured Dependabot to watch for new versions of dependencies listed in requirements.txt. When such an event occurs, it opens a PR in my repo. Most of the time, the PR works like a charm, but in a few cases, an error occurs when I run the script after I merge. It looks like the following:

ERROR: libfoo 1.0.0 has requirement libbar<2.5,>=2.0, but you'll have libbar 2.5 which is incompatible.

The problem is that Dependabot opens a PR for every library listed. But a new library version can be released which falls outside the range of compatibility. Imagine the following situation. My project needs the libfoo dependency. In turn, libfoo requires the libbar dependency. At install time, pip uses the latest version of libfoo and the latest compatible version of libbar. The resulting requirements.txt is:

libfoo==1.0.0
libbar==2.0

Everything works as expected. After a while, Dependabot runs and finds that libbar has released a new version, e.g., 2.5. Faithfully, it opens a PR to merge the following change:

libfoo==1.0.0
libbar==2.5

Whether the above issue appears depends solely on how libfoo 1.0.0 specified its dependency in setup.py. If 2.5 falls within the compatible range, it works; if not, it won't.

pip-compile to the Rescue

The problem with pip is that it lists transitive dependencies along with direct ones.
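The libfoo/libbar clash can be checked mechanically. Below is a toy version-range checker written with the standard library only — real resolvers use the third-party packaging library instead, and this sketch only handles numeric dotted versions with the >=, <=, >, <, == operators:

```python
# Minimal checker for specifiers like ">=2.0,<2.5" against a candidate version.
# Real tools use the `packaging` library; this toy only supports numeric
# dotted versions and the >=, <=, >, <, == operators.
import operator

OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt,
       "<": operator.lt, "==": operator.eq}

def parse(version: str) -> tuple:
    """Turn '2.4.1' into (2, 4, 1) so tuples compare numerically."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version: str, spec: str) -> bool:
    """True if `version` meets every comma-separated clause of `spec`."""
    for clause in spec.split(","):
        clause = clause.strip()
        for symbol in OPS:  # ">=" and "<=" are tested before ">" and "<"
            if clause.startswith(symbol):
                if not OPS[symbol](parse(version), parse(clause[len(symbol):])):
                    return False
                break
    return True

# libfoo 1.0.0 declares libbar<2.5,>=2.0: 2.0 is acceptable, 2.5 is not.
print(satisfies("2.0", ">=2.0,<2.5"))  # True
print(satisfies("2.5", ">=2.0,<2.5"))  # False
```

Dependabot's PR bumps libbar to 2.5, which the second check rejects — exactly the ERROR pip prints above.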
Dependabot then fetches the latest versions of all dependencies but doesn't verify whether transitive dependencies' version updates fall within the range. It could potentially check, but the requirements.txt file format is not structured: it doesn't differentiate between direct and transitive dependencies. The obvious solution is to list only direct dependencies. The good news is that pip allows listing only direct dependencies; it installs transitive dependencies automatically. The bad news is that we now have two requirements.txt options with no way to differentiate between them: some list only direct dependencies, and others list all of them. It calls for an alternative. pip-tools has one:

- One lists their direct dependencies in a requirements.in file, which has the same format as requirements.txt
- The pip-compile tool generates a requirements.txt from the requirements.in

For example, given our Flask example:

#
# This file is autogenerated by pip-compile with python 3.10
# To update, run:
#
#    pip-compile requirements.in
#
click==8.1.3
    # via flask
flask==2.2.2
    # via -r requirements.in
itsdangerous==2.1.2
    # via flask
jinja2==3.1.2
    # via flask
markupsafe==2.1.1
    # via
    #   jinja2
    #   werkzeug
werkzeug==2.2.2
    # via flask

pip install -r requirements.txt

It has the following benefits and consequences:

- The generated requirements.txt contains comments to understand the dependency tree
- Since pip-compile generates the file, you shouldn't save it in the VCS
- The project is compatible with legacy tools that rely on requirements.txt
- Last but not least, it changes the installation workflow. Instead of installing packages and then saving them, one first lists packages and then installs them.

Moreover, Dependabot can manage dependency version upgrades of pip-compile.

Conclusion

This post described Python's default dependency management system and how it breaks automated version upgrades. We continued to describe the pip-compile alternative, which solves the problem.
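Because pip-compile annotates each pin with where it comes from, the direct/transitive distinction becomes machine-readable. A small stdlib sketch (the parsing is deliberately naive and assumes the exact "name==version" / "# via ..." layout that pip-compile emits):

```python
# Split a pip-compile-generated requirements.txt into direct pins
# (annotated "via -r requirements.in") and transitive pins (annotated
# via another package). Naive parsing of the annotated layout.
def classify_pins(text: str):
    direct, transitive = [], []
    current = None
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        if not stripped.startswith("#"):
            current = stripped          # a "name==version" pin
            transitive.append(current)  # assume transitive until proven direct
        elif "-r requirements.in" in stripped and current in transitive:
            transitive.remove(current)
            direct.append(current)
    return direct, transitive

compiled = """\
click==8.1.3
    # via flask
flask==2.2.2
    # via -r requirements.in
jinja2==3.1.2
    # via flask
"""
print(classify_pins(compiled))
# (['flask==2.2.2'], ['click==8.1.3', 'jinja2==3.1.2'])
```

A bot with this information could restrict version-bump PRs to the direct list, which is exactly what pinning only requirements.in achieves.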
Note that a dependency management specification exists for Python, PEP 621 – Storing project metadata in pyproject.toml. It's similar to Maven's POM, with a different format. It's overkill in the context of my script, as I don't need to distribute the project. But should you need to, know that pip-compile is compatible with it. To go further:

Published at DZone with permission of Nicolas Fränkel, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
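For reference, the PEP 621 equivalent of a requirements.in for the Flask example would look roughly like this (the project name and dev extras are illustrative, not from the article):

```toml
[project]
name = "my-script"        # hypothetical project name
version = "0.1.0"
dependencies = [
    "flask>=2.2",
]

[project.optional-dependencies]
dev = ["pip-tools"]
```

pip-compile can read this pyproject.toml directly and emit the same kind of fully pinned requirements.txt.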
https://dzone.com/articles/the-maze-of-python-dependency-management?fromrel=true
Making Plugins

…there are still many things you can do with them. Note that a plugin is similar to any scene you can already make, except it is created using a script to add editor functionality.

The script file

Upon creation of the plugin, the dialog will automatically open the EditorPlugin script for you. The script has two requirements that you cannot change: it must be a tool script, or else it will not load properly in the editor, and it must inherit from EditorPlugin.

Warning

Nodes added via an EditorPlugin are "CustomType" nodes. While they work with any scripting language, they have fewer features than the Script Class system. If you are writing GDScript or NativeScript, we recommend using Script Classes instead.

using Godot;
using System;

[Tool]
public class MyButton : Button
{
    public override void _EnterTree()
    {
        Connect("pressed", this, "clicked");
    }

    public void clicked()
    {
        GD.Print("You clicked me!");
    }
}

That's it for our basic button. You can save this as my… With that done, the plugin should already be available in the plugin list in the Project Settings, so activate it as explained in Checking the results. Then try it out by adding your new node. When you add the node, you can see that it already…

[plugin]
name="My Custom Dock"
description="A custom dock made so I can learn how to make plugins."
author="Your Name Here"
version="1.0"
script="CustomDock.cs"

…life cycle.
https://docs.godotengine.org/uk/stable/tutorials/plugins/editor/making_plugins.html
I'm trying to get all the children gameObjects, from an Empty parent to which I have applied a script, so I can act on each of the children with the same script.

Answer by Statement · Dec 11, 2013 at 12:32 AM

See the Transform docs for API details. I'll provide two examples below.

To explain the same thing another way: you'd expect that with Transform, Unity would offer a call such as, say, "allChildren". So you'd do something like:

foreach (Transform child in transform.allChildren()) { . . .

However, Unity does not do that. For better or worse, they make "transform" magically supply "all of its children" when it is in a situation such as a foreach. So somewhat confusingly, you only have to write:

foreach (Transform child in transform) { . . .

using UnityEngine;

public class PrintChildren : MonoBehaviour
{
    void Start()
    {
        WithForeachLoop();
        WithForLoop();
    }

    void WithForeachLoop()
    {
        foreach (Transform child in transform)
            print("Foreach loop: " + child);
    }

    void WithForLoop()
    {
        int children = transform.childCount;
        for (int i = 0; i < children; ++i)
            print("For loop: " + transform.GetChild(i));
    }
}

If you wanted to get the game objects of the children (what you iterate over will be the Transforms, not the GameObjects), just access gameObject of the child Transform in each iteration.

Thank you! I'm relieved this is so plain! I was thinking of a for loop, but didn't know of the existence of GetChild(). Your post fixed it!

I'd like to share some personal views on this as well, so take it as advice rather than hard fact. Also, this is a community, so I am writing it in a form directed to any reader, and readers come in varying skill levels. Please don't take it as directed especially at the OP. For example, you may know foreach and for very well, but the next reader might not, so I'll keep it here.
The only reason I put this disclaimer on is because I've noticed people get offended when I try to share some of my views, and I am only trying to help the next person.

for is a good choice for you to use, especially when you are learning the language, as it'll recur time and time again in your code and other code you read. It's also one of the most basic ways to work with collections, and you really should get comfortable with it. for also happens to be a slightly more performant option than foreach (tiny, tiny — don't bother optimizing this unless you know for a fact it's a problem in your code base). In most cases you might want to use foreach instead, because it's easier to read, easier to maintain and faster to write. I don't care too much about it being faster to write, but it's a lot less complex to read. If you need the index of the child, a for loop is likely a better choice.

That said, it's like a basket of apples. In the end, it doesn't really matter that much as long as you solve the problem you wanted to solve. If you feel like you start to worry about whether you made the right or wrong choice, but don't know why or how, stop worrying right now and continue using what you already are using. It's not a big matter until you appreciate more intuitively why one may be preferred over another. I've seen people (especially myself) worry about silly things like "should I use A or B when they both accomplish the same goal? A is better at X and B is better at Y." Is either X or Y a real problem that you need to solve? If not, pick any of A or B and continue working until you hit a real problem. Don't cause problems by trying to solve problems, if you can :) Especially those problems that really don't add any kind of value. Once you start learning more and more about coding and get deeper down the path of this art, you'll find that there are a lot of other ways to deal with collections of items.
Don't worry about that until you are comfortable with your current level. This is far from a challenge or a hint that you are behind in your learning and should catch up. It may do you more harm than good to rush out to use more advanced solutions, because you may not understand what they actually do for you (harm or good). I just wanted to let you know that this is a good starting point, but don't get surprised if you see other ways of working with collections in general. It's a good point to start looking at other approaches when you want to learn something new, get stuck, or want to get more productive.

Thank you very much for all this @Statement! I'm always open to learning new things, but the idea is indeed to take things step by step and not delve into incomprehensible things when I'm not even able to master the basics. So I'll keep an eye open for other ways of working with collections (or other new things in general) and won't freak out when I don't understand them. At any rate, what I wanted to say is that working on a real project (as unambitious as mine might be) is just great to learn tons of stuff: I like the idea of being stuck on a problem, thinking it through and sometimes finding the solution. And I definitely LOVE the idea that when I definitely can't find the solution to my problem, there are friendly people --like you!-- who will take time to help me (as well as countless others) out! Thank you again and have a great year in 2014! Ari ;o)

Hi, I am trying to do something similar, but in the Editor Script world, where it seems I cannot inherit from MonoBehaviour, or instantiate a MonoBehaviour class in my Editor Script. So it seems I do not have access to transform. Do you know if there might be another way to get a list of GameObjects in a scene, from the Editor Script side?
Alternatively you could do this:

```csharp
Transform[] allChildren = GetComponentsInChildren<Transform>();
foreach (Transform child in allChildren)
{
    child.gameObject.SetActive(false);
}
```

That's not really an alternative. It was already suggested in robertbu's answer below, and there are two major differences:

- GetComponentsInChildren also returns components on this object, not only on child objects.
- GetComponentsInChildren also returns components on deeply nested children.

This answer actually shows the way to iterate through the immediate children only. "GetComponentsInChildren" is actually a bad name for what it does; a better name would be "GetComponentsIncludingDeepChildren". So "GetComponentsInChildren" actually returns the same as "GetComponents", plus every matching component on any nested children. GetComponentsInChildren is mostly useful when used with a specific component type. Transform makes the least sense here, as it will include the parent's transform and all nested children as well.

Answer by robertbu · Dec 11, 2013 at 12:16 AM

I believe you are looking for GetComponentsInChildren(). There is an example on the reference page. A couple of notes:

- This is a recursive function, so it finds all children, not just immediate children.
- I believe this also returns the parent game object if it also has the specified component. So if your 'Empty parent' has the specified component, it will be in the array returned.
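Since GetComponentsInChildren includes the calling object itself, here is a small sketch (my addition, not from the thread) of filtering the parent out when deactivating everything below it:

```csharp
using UnityEngine;

public class DisableChildrenOnly : MonoBehaviour
{
    void DisableChildren()
    {
        // 'true' also includes components on currently inactive children
        foreach (Transform t in GetComponentsInChildren<Transform>(true))
        {
            if (t == transform) continue; // skip the parent's own Transform
            t.gameObject.SetActive(false);
        }
    }
}
```

Note this still deactivates deeply nested children, not just immediate ones; use the plain loop over `transform` if you only want direct children.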
Your code will be something like:

```csharp
yourScripts = GetComponentsInChildren<YourScriptName>();
foreach (YourScriptName yourScript in yourScripts)
{
    yourScript.DoSomething();
}
```

Answer by PlexusDuMenton · Mar 24, 2019 at 02:03 AM

For the lazy boi:

```csharp
public static class ClassExtension
{
    public static List<GameObject> GetAllChilds(this GameObject Go)
    {
        List<GameObject> list = new List<GameObject>();
        for (int i = 0; i < Go.transform.childCount; i++)
        {
            list.Add(Go.transform.GetChild(i).gameObject);
        }
        return list;
    }
}
```

Later you can simply call this:

```csharp
gameObject.GetAllChilds()
```

Answer by Cassos · Dec 12, 2019 at 09:02 PM

If you want to get every single child transform of a transform, use:

```csharp
List<Transform> GetAllChilds(Transform _t)
{
    List<Transform> ts = new List<Transform>();
    foreach (Transform t in _t)
    {
        ts.Add(t);
        if (t.childCount > 0)
            ts.AddRange(GetAllChilds(t));
    }
    return ts;
}
```

This is extremely inefficient for several reasons. First of all, you create many Lists if you have a deeper nested structure. Also, in almost all cases a recursive approach is less efficient than an iterative approach. Even though it's relatively unlikely, in theory you could run into a stack overflow if you have really deeply nested objects. In almost all cases just using GetComponentsInChildren is way simpler. It gives you almost the same result, with the exception that it will also include "_t" (the root object) as well.
If you really want to manually iterate through all children, you want to do something like this:

```csharp
public static class TransformHelper
{
    public static List<Transform> GetAllChildren(this Transform aTransform, List<Transform> aList = null)
    {
        if (aList == null)
            aList = new List<Transform>();
        int start = aList.Count;
        for (int n = 0; n < aTransform.childCount; n++)
            aList.Add(aTransform.GetChild(n));
        for (int i = start; i < aList.Count; i++)
        {
            var t = aList[i];
            for (int n = 0; n < t.childCount; n++)
                aList.Add(t.GetChild(n));
        }
        return aList;
    }
}
```

This is probably faster and more memory efficient. It also allows reusing a List to avoid garbage generation if it should be called frequently, and it does not allocate any memory for the IEnumerable like your foreach version does. The enumerator implemented in the Transform class just uses childCount and GetChild anyway, so you can avoid creating the enumerator and use them in a loop yourself without allocating any garbage.

The above code implements a breadth-first order. If you want a depth-first order, you probably want to do it recursively, since that makes this ordering a bit easier to achieve:

```csharp
public static List<Transform> GetAllChildrenDepthFirst(this Transform aTransform, List<Transform> aList = null)
{
    if (aList == null)
        aList = new List<Transform>();
    aList.Add(aTransform);
    for (int n = 0; n < aTransform.childCount; n++)
        aTransform.GetChild(n).GetAllChildrenDepthFirst(aList);
    return aList;
}
```

Note that since both methods will create a List if none is passed, you can simply use them like this:

```csharp
List<Transform> childs = someTransform.GetAllChildren();
```

Also note that since you can pass an existing list to the method, you can use it on several roots in a row and accumulate all children from all those roots in one list. Since the method does not clear the list passed in, the child elements are simply appended at the end.

Answer by BiomotionLab · Jan 23, 2020 at 09:06 PM

Thanks for the tip!
After some fiddling, here's a slightly cleaner version. It doesn't return the parent itself.

```csharp
static List<Transform> GetAllChildren(Transform parent, List<Transform> transformList = null)
{
    if (transformList == null)
        transformList = new List<Transform>();
    foreach (Transform child in parent)
    {
        transformList.Add(child);
        GetAllChildren(child, transformList);
    }
    return transformList;
}
```

Note, to make it back into an extension method, do this:

```csharp
public static class TransformExtension
{
    public static List<Transform> GetAllChildren(this Transform parent, List<Transform> transformList = null)
    {
        if (transformList == null)
            transformList = new List<Transform>();
        foreach (Transform child in parent)
        {
            transformList.Add(child);
            child.GetAllChildren(transformList);
        }
        return transformList;
    }
}
```
# Tolaria

Tolaria is a content management system (CMS) framework for Ruby on Rails. It greatly speeds up the necessary, but repetitive, task of creating useful admin panels, forms, and model workflows for site authors.

## Features

- Fully responsive (and we think it's beautiful too!)
- A complete email-based authentication system is included, and there are no passwords to manage.
- Automatically builds navigation and admin routes for you.
- Automatically creates simple index screens, show screens, and text search tools, which you can expand.
- Includes a handful of advanced form fields, notably a fullscreen Markdown editor and searchable select/tag lists.
- Assists in providing inline help and documentation to your editors.
- No magic DSL. Work directly in ERB on all admin views.
- Compartmentalized from the rest of the Rails application, and does not rely on the behavior of `to_param`.
- Easily overridable on a case-by-case basis.
- Designed for use on Heroku, in containers, and on websites with TLS.
- Modest dependencies.
- Compatible with Rails 5 and Rails 4.2.

## Browser Support

Tolaria supports IE10+, Edge, Safari, Chrome, Firefox, iOS, and Chrome for Android. Note that these are the browsers your site editors will need, not the general site audience, which can differ.

## Getting Started

Add Tolaria to your project's Gemfile:

```ruby
# If you are running Rails 5, use Tolaria 2
gem "tolaria", "~> 2.0"

# If you are running Rails 4.2, use Tolaria 1.2
gem "tolaria", "~> 1.2"
```

Then update your bundle with `bundle update`.

Now run the installation generator. This will create an initializer for Tolaria plus a migration to set up an administrators table. Migrate your database.

```shell
$ rails generate tolaria:install
$ rake db:migrate
```

Review all of the settings in `config/initializers/tolaria.rb`.
Run this Rake command to create your first administrator account:

```shell
$ rake admin:create
```

Now you'll need to add Tolaria's route drawing to the top of your `routes.rb` file like so:

```ruby
Rails.application.routes.draw do
  Tolaria.draw_routes(self)
  # Your other routes below here
end
```

### ActionMailer

Tolaria needs to be able to dispatch email. You'll need to configure ActionMailer to use an appropriate mail service. Here's an example using Mailgun on Heroku:

```ruby
# config/initializers/action_mailer.rb
ActionMailer::Base.perform_deliveries = true
ActionMailer::Base.delivery_method = :smtp
ActionMailer::Base.smtp_settings = {
  port: ENV.fetch("MAILGUN_SMTP_PORT"),
  address: ENV.fetch("MAILGUN_SMTP_SERVER"),
  user_name: ENV.fetch("MAILGUN_SMTP_LOGIN"),
  password: ENV.fetch("MAILGUN_SMTP_PASSWORD"),
  domain: "example.org",
  authentication: :login,
  enable_starttls_auto: true,
}
```

Now start your Rails server and go to `/admin` to log in!

## Adding Administrator Accounts

You can add administrators from the command line using a Rake task. This is particularly useful for creating the very first one.

```shell
# Add an administrator interactively
$ rake admin:create

# Or you can provide environment variables
$ rake admin:create NAME="Evon Gnashblade" EMAIL="example@example.org" ORGANIZATION="BLTC"
```

If you are already logged in to Tolaria, you can also simply visit `/admin/administrators` to create a new account using the CMS interface.

## Passcode Authentication

Tolaria authenticates editors via email, using a one-time passcode. When an editor wants to sign in, they must type a passcode dispatched to their email address. Passcodes are invalidated after use. You can configure Tolaria's passcode paranoia in the initializer you installed above.

## Managing a Model

Inside your ActiveRecord definition for your model, call `manage_with_tolaria`, passing configuration in the `using` Hash. Refer to the documentation for all of the options. The icon system uses Font Awesome, and you'll need to pass one of the icon names for the `icon` key.
**Important:** you'll need to provide the options to pass to `params.permit` here for the admin system. Your form won't work without it!

```ruby
class BlogPost < ActiveRecord::Base
  manage_with_tolaria using: {
    icon: "file-o",
    category: "Settings",
    priority: 5,
    permit_params: [
      :title,
      :body,
      :author_id,
    ]
  }
end
```

## Customizing Indexes

By default, Tolaria will build a simple index screen for each model. You'll likely want to replace it for complicated models, or to allow administrators to sort the columns. If your model was BlogPost, you'll need to create a file in your project at `app/views/admin/blog_posts/_index.html.erb`. See the TableHelper documentation for more information.

```erb
<% # app/views/admin/blog_posts/_index.html.erb %>
<%= index_table do %>
  <thead>
    <tr>
      <%= index_th :id %>
      <%= index_th :title %>
      <%= index_th "Author", sort: false %>
      <%= actions_th %>
    </tr>
  </thead>
  <tbody>
    <% @resources.each do |blog_post| %>
      <tr>
        <%= index_td blog_post, :id %>
        <%= index_td blog_post, :title %>
        <%= index_td blog_post, blog_post.author.name, image: blog_post.author.portrait_uri %>
        <%= actions_td blog_post %>
      </tr>
    <% end %>
  </tbody>
<% end %>
```

## Customizing the Inspect Screen

Tolaria provides a very basic show/inspect screen for models. You'll want to provide your own for complex models. If your model was BlogPost, you'll need to create a file in your project at `app/views/admin/blog_posts/_show.html.erb`. See the TableHelper documentation for more information.

```erb
<% # app/views/admin/blog_posts/_show.html.erb %>
<%= show_table do %>
  <thead>
    <%= show_thead_tr %>
  </thead>
  <tbody>
    <%= show_tr :title %>
    <%= show_tr "Author", @resource.author.name %>
    <%= show_tr :body %>
  </tbody>
<% end %>
```

## Adding Model Forms

Tolaria does not build editing forms for you, but it attempts to help speed up your work by providing a wrapper. If your model was BlogPost, you'll need to create a file in your project at `app/views/admin/blog_posts/_form.html.erb`.
You'll provide the form code that would appear inside the `form_for` block, excluding the submit buttons. The builder variable is `f`.

```erb
<% # app/views/admin/blog_posts/_form.html.erb %>
<%= f.label :title %>
<%= f.text_field :title, placeholder: "Post title" %>
<%= f.hint "The title of this post. A good title is both summarizing and enticing, much like a newspaper headline." %>

<%= f.label :author_id, "Author" %>
<%= f.searchable_select :author_id, Author.all, :id, :name, include_blank: false %>
<%= f.hint "Select the person who wrote this post." %>

<%= f.label :body %>
<%= f.markdown_composer :body %>
<%= f.hint "The body of this post. You can use Markdown!" %>
```

### Has-many Nested Forms

If you want to provide an interface for a `has_many` + `accepts_nested_attributes_for` relationship, you can use the `has_many` helper. The UI allows slating persisted objects for removal when the form is saved.

**Important:** You need to include `f.has_many_header` to create the form headers and turn the destruction controls on or off with `allow_destroy`.

```erb
<%= f.has_many :footnotes do |f| %>
  <%= f.has_many_header allow_destroy: true %>

  <%= f.label :description %>
  <%= f.text_field :description %>
  <%= f.hint "The name or other description of this reference" %>

  <%= f.label :url, "URL" %>
  <%= f.text_field :url, class: "monospace" %>
  <%= f.hint "A full URL to the source or reference material" %>
<% end %>
```

Don't forget that you also need to change `permit_params` so that you include your nested attributes:

```ruby
class BlogPost < ActiveRecord::Base
  manage_with_tolaria using: {
    icon: "file-o",
    category: "Settings",
    priority: 5,
    permit_params: [
      :title,
      :body,
      :author_id,
      footnotes_attributes: [
        :id,
        :_destroy,
        :url,
        :description,
      ]
    ]
  }
end
```

## Customizing the Search Form

By default, Tolaria provides a single search field that searches over all of the text or character columns of a model. You can expand the search tool to include other facets.
**Important:** This system uses the Ransack gem, which you'll need to familiarize yourself with. If your model was BlogPost, you'll need to create a file in your project at `app/views/admin/blog_posts/_search.html.erb`. You'll provide the form code that would appear inside the search form; the builder variable is `f`.

```erb
<% # app/views/admin/blog_posts/_search.html.erb %>
<%= f.label :title_cont, "Title contains" %>
<%= f.search_field :title_cont, placeholder: "Anything" %>

<%= f.label :author_name_cont, "Author is" %>
<%= f.searchable_select :author_name_cont, Author.all, :name, :name, prompt: "Any author" %>

<%= f.label :body_cont, "Body contains" %>
<%= f.search_field :body_cont, placeholder: "Anything" %>
```

## Provided Form Fields

You can use all of the Rails-provided fields on your forms, but Tolaria also comes with a set of advanced, JavaScript-backed fields. Make sure to review the documentation for the form builder to get all the details.

### Markdown Composer

The `markdown_composer` helper will generate a very fancy Markdown editor, which includes text snippet tools and a fullscreen mode with live previewing.

**Important:** You cannot use this field properly if you do not set up `Tolaria.config.markdown_renderer`. Without it, the live preview will only use `simple_format`!

```erb
<%= f.label :body %>
<%= f.markdown_composer :body %>
<%= f.hint "The body of this post. You can use Markdown!" %>
```

### Searchable Select

The `searchable_select` helper displays a Chosen select field that authors can filter by typing.

```erb
<%= f.label :title, "Topics" %>
<%= f.searchable_select :topic_ids, Topic.order("label ASC"), :id, :label, multiple: true %>
<%= f.hint "Select each topic that applies to this blog post" %>
```

### Image Association Select

The `image_association_select` helper displays a `searchable_select` that provides an instant preview of the currently selected model as an image.
```erb
<%= f.label :featured_image_id, "Featured Image" %>
<%= f.image_association_select :featured_image_id, Image.order("title ASC"), :id, :title, :preview_uri %>
<%= f.hint "Select a featured image for this blog post." %>
```

Timestamp Field

The timestamp_field helper displays a text field that validates a provided timestamp and recovers to a template if blanked.

```erb
<%= f.label :published_at, "Publishing Date" %>
<%= f.timestamp_field :published_at %>
<%= f.hint "The date this post should be published." %>
```

Slug Field

The slug_field helper allows you to show the parameterized value of a field in a given pattern preview.

```erb
<%= f.label :title %>
<%= f.slug_field :title, placeholder:"Post title", pattern:"/blog/255-*" %>
<%= f.hint "The title of this post." %>
```

Swatch Field

The swatch_field helper validates and displays a given hexadecimal color.

```erb
<%= f.label :color %>
<%= f.swatch_field :color, placeholder:"#CC0000" %>
<%= f.hint "Choose a background color for this campaign" %>
```

Image Field

The image_field helper displays a button that makes uploading an image a little more pleasant than a regular file_field.

```erb
<%= f.label :portrait %>
<%= f.image_field :portrait, preview_url:@resource.portrait.url(:preview) %>
<%= f.hint "Attach a portrait of this author, at least 600×600 pixels in size. The subject should be centered." %>
```

Attachment Field

The attachment_field helper displays a button that makes uploading an arbitrary file a little more pleasant than a regular file_field.

```erb
<%= f.label :portrait %>
<%= f.attachment_field :portrait %>
<%= f.hint "Attach a portrait of this author, at least 600×600 pixels in size. The subject should be centered." %>
```

Field Clusters (Checkboxes and 2+ Selects)

Tolaria includes a wrapper for grouped form elements, `<div class="field-cluster">`. You should use this wrapper if:

- You need to run two or more small select controls together (like for date_select). The wrapper styles the selects to snuggle closely.
- You want to use a naked checkbox control or a set of checkboxes (check_box and collection_check_boxes)

```erb
<%= f.label :published_at, "Publishing Date" %>
<div class="field-cluster"><%= f.date_select :published_at %></div>
<%= f.hint "The date this post should be published." %>

<%= f.label :title, "Topics" %>
<div class="field-cluster"><%= f.collection_check_boxes :topic_ids, Topic.order("label ASC"), :id, :label %></div>
<%= f.hint "Choose each topic that applies to this blog post" %>
```

Hints

Inline help is useful for reminding administrators about what should be provided for each field. Use f.hint to present a hint for a field.

Extra Classes

Tolaria includes a few CSS classes that are designed for simple inputs, selects, and textareas:

- Add a class of monospace to an element to make it use a system monospace font. Useful for fields that accept URLs and other computer-interpreted values.
- Add a class of short to an element to constrain it visually to 300px. Useful for fields that only need very few characters of input.

Customizing the Menu

When you call manage_with_tolaria, you can provide a category and a priority like below. Items in the same category will be grouped together in the navigation menu. Items are sorted by ascending priority in their group.

```ruby
class BlogPost < ActiveRecord::Base
  manage_with_tolaria using:{
    category: "Prose",
    priority: 5,
  }
end
```

If you want to re-order the groups, you need to set an array of menu titles ahead of time in Tolaria.config.menu_categories:

```ruby
# config/initializers/tolaria.rb
Tolaria.configure do |config|
  config.menu_categories = [
    "Prose",
    "Animals",
    "Settings",
  ]
end
```

Adding Documentation Links

You can provide documentation links in the interface header by appending to Tolaria.config.help_links. Add hashes to the array, with these keys:

To render a Markdown file, provide a :title, the URL fragment :slug, and a :markdown_file path to your Markdown document.
The system will automatically draw a route to this view for you and present your file, using the renderer configured in Tolaria.config.markdown_renderer.

To link to an arbitrary route or URL, provide a :title and a :link_to. Examples below:

```ruby
# config/initializers/tolaria.rb
Tolaria.configure do |config|
  config.help_links << {
    title: "Markdown Reference",
    slug: "markdown-reference",
    markdown_file: "/path/to/your/file.md"
  }
  config.help_links << {
    title: "Style Guide",
    link_to: ""
  }
end
```

Patching a Controller

Tolaria dynamically creates controllers for managed models, named as you would expect. If you want to replace or add to controller functionality, create the file in your parent application and patch away.

If your model was BlogPost, you should create app/controllers/admin/blog_posts_controller.rb:

```ruby
# app/controllers/admin/blog_posts_controller.rb
class Admin::BlogPostsController < Tolaria::ResourceController
  def another_method
    # do stuff
    # render a template
  end
end
```

You might want to check out what we've done in the base ResourceController file so that you know what you're patching. If you override any of the existing methods, you're on your own to handle everything correctly.

Adding Your Own Styles or JavaScript

If you want to add additional Sass or JavaScript to the admin system, you can create these files and then append to them as you need. Make sure that you import the base styles and JavaScript so you inherit what's already been done.

app/assets/stylesheets/admin/admin.scss:

```scss
@import "admin/base";
// Your code goes here
```

app/assets/javascripts/admin/admin.js:

```js
//= require admin/base
// Your code goes here
```

Testing and Running the Demo Server

Tolaria comes with a test suite and a demo server that the test suite exercises.
To run tests, first clone the repo or your fork of it:

```shell
$ git clone -o github git@github.com:threespot/tolaria.git
$ cd tolaria
```

Install the development dependencies:

```shell
$ bundle install
```

Now in the project root, you have several rake tasks available:

```shell
$ rake test          # Run the tests
$ rake admin:create  # Create an admin in the demo development database
$ rake console       # Start a Rails console with Tolaria loaded
$ rake server        # Start a Rails Webrick server with Tolaria and some example models loaded
```

Miscellaneous Technical Details

- Tolaria is not designed for use on a production site without TLS/HTTPS configured. You must protect Tolaria sessions and cookies with TLS. Do not allow users to connect to your administrator panel over plain HTTP.
- If you are using Content-Security-Policy, you will need to add the allowed image sources in order to display administrator avatars. All other assets bundled with Tolaria are served by the Rails asset pipeline.
- The constant and module name Admin is reserved for Tolaria's use. If you add to this namespace, be sure you are not colliding with a Tolaria-provided constant.
- The route space /admin/**/* is reserved for Tolaria's use. If you add routes here, be sure you are not colliding with a Tolaria-generated route.

License

Tolaria is free software, and may be redistributed under the terms of the MIT license. If Tolaria works great for your project, we'd love to hear about it!

Thanks

Our work stands on the shoulders of giants, and we're very thankful to the many people that made Tolaria possible, either by publishing code we used or by being an inspiration for this project.

- The ActiveAdmin team
- The jQuery Foundation
- Jeremy Ashkenas
- The Harvest Team
- Font Awesome and Dave Gandy

About Threespot

Threespot is a design and development agency from Washington, DC. We work for organizations that we believe are making a positive change in the world. Find out more about us, our projects, or hire us!
https://www.rubydoc.info/github/Threespot/tolaria
When you are working with Kubernetes and want to list all the resources (Kubernetes objects) associated with a specific namespace, you can either use individual kubectl get commands to list each resource type one by one, or you can list all the resources in a Kubernetes namespace by running a single command. In this article we will show you multiple different ways to list all resources in a Kubernetes namespace.

kubectl get all

Using the kubectl get all command we can list the pods, services, statefulsets, etc. in a namespace, but not every resource type is covered by this command. Hence, if you want to see the pods, services and statefulsets in a particular namespace, you can use this command:

```shell
kubectl get all -n studytonight
```

In the above command studytonight is the namespace for which we want to list these resources. The above command will get the following resources running in your namespace, prefixed with the type of resource:

- pod
- service
- daemonset
- deployment
- replicaset
- statefulset
- job
- cronjob

This command will not show the custom resources running in the namespace. So you will see an output like this for the above command:

```
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-59cbfd695c-5v5f8   1/1     Running   4          19h

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/nginx   ClusterIP   182.41.44.514   <none>        80/TCP    5d18h

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
deployment/nginx   1/1     1            1           19h
```

kubectl api-resources

The kubectl api-resources command enumerates the resource types available in your cluster, so we can combine it with kubectl get to list every instance of every resource type in a Kubernetes namespace. Here is the command you can use:

```shell
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
```

In the command above, provide your namespace in place of <namespace>. If there are many resources present in the namespace, this command can take some time.
We can use the above command, but a better variant would be something I found on Stack Overflow, where the above code has been converted into a function, which makes it more intuitive to use:

```shell
function kubectlgetall {
  for i in $(kubectl api-resources --verbs=list --namespaced -o name | grep -v "events.events.k8s.io" | grep -v "events" | sort | uniq); do
    echo "Resource:" $i
    kubectl -n ${1} get --ignore-not-found ${i}
  done
}
```

All we have to do is provide the namespace while calling the function. To use it, copy the complete code, paste it into the Linux terminal, and hit Enter. Then you can call the function:

```shell
kubectlgetall studytonight
```

to list all the resources in the studytonight namespace. This function will be available in the current session only; once you log out of the machine, the change will be lost and you will have to define the function again before using it in the next session.

kubectl get

We can also use the simple kubectl get command to list the resources we want to see in a namespace. Rather than running a kubectl get command for each resource kind, we can run it for multiple resources in one go. For example, if you want to get the pods, services, and deployments for a namespace, you would run the following three commands:

```shell
kubectl get service -n studytonight
kubectl get pod -n studytonight
kubectl get deployment -n studytonight
```

Well, you can combine these three commands into a single command too (kubectl expects the comma-separated list without spaces):

```shell
kubectl get service,pod,deployment -n studytonight
```

Yes, this will work. Here studytonight is the name of the namespace, which you can change to your own.

So now you know 3 different ways to list all the resources in a Kubernetes namespace. Personally, I like the second approach using the function, because it becomes super easy if you have to inspect the resources frequently. If you face any issue, do share it with us in the comment section below.
https://www.studytonight.com/post/how-to-list-all-resources-in-a-kubernetes-namespace
GameFromScratch.com

In the previous tutorial we looked at the process of using sprites in SFML. Today we are going to look instead at using a sprite sheet or texture atlas. The process is very similar to working with a normal sprite, except that you have multiple sprites on a single texture. Loading and swapping textures in memory is an expensive operation, so holding multiple sprites together can greatly improve the speed of your game. As always there is an HD video version of this tutorial available here.

Many game engines have implicit support for spritesheets and sprite animation; however, SFML does not. There are however libraries built on top of SFML that provide this functionality, or you can roll the required functionality yourself with relative ease.

Before we can continue we need a sprite sheet, which is simply one or more images with multiple frames of animation. This is the one I am using:

This isn't the full sized image however. The source file is actually 900×400 pixels in size. Now we need code to extract a single frame from the texture and, as you will see, it's actually remarkably easy:

```cpp
sf::RenderWindow renderWindow(sf::VideoMode(640, 480), "Demo Game");
sf::Event event;

sf::Texture texture;
texture.loadFromFile("images/spritesheet.png"); // your sheet image here
sf::Sprite sprite(texture, sf::IntRect(0, 0, 300, 400));

while (renderWindow.isOpen()){
    while (renderWindow.pollEvent(event)){
        if (event.type == sf::Event::EventType::Closed)
            renderWindow.close();
    }
    renderWindow.clear();
    renderWindow.draw(sprite);
    renderWindow.display();
}
```

This will draw just a small rectangular portion of our source texture, representing the first frame, like so:

Really that's all that there is to it.
To add animation, we simply change the rectangular source after the fact, like so:

```cpp
sf::Texture texture;
texture.loadFromFile("images/spritesheet.png"); // your sheet image here
sf::IntRect rectSourceSprite(300, 0, 300, 400);
sf::Sprite sprite(texture, rectSourceSprite);
sf::Clock clock;

while (renderWindow.isOpen()){
    while (renderWindow.pollEvent(event)){
        if (event.type == sf::Event::EventType::Closed)
            renderWindow.close();
    }

    if (clock.getElapsedTime().asSeconds() > 1.0f){
        if (rectSourceSprite.left == 600)
            rectSourceSprite.left = 0;
        else
            rectSourceSprite.left += 300;

        sprite.setTextureRect(rectSourceSprite);
        clock.restart();
    }

    renderWindow.clear();
    renderWindow.draw(sprite);
    renderWindow.display();
}
```

And when run:

You may notice right away that the animation doesn't look right, and that's a keen eye you've got there. In this example we are simply going across the top three frames of animation from left to right. The proper animation should actually be 0,1,2,1,0, not 0,1,2,0,1,2. That said, in a proper game you would either roll your own animation class or use an existing one. When we get to the process of creating a complete game, we will cover this process in detail.

In the above example we change frames of animation by changing the sprite's source texture rect with a call to setTextureRect(). As I mentioned in the previous tutorial, you could actually generate a new sprite per frame if preferred, as the sf::Sprite class is very lightweight.

Programming SFML CPP Tutorial 2D

In the previous tutorial we covered the basics of using graphics in SFML. Chances are however your game isn't going to be composed of simple shapes, but instead made up of many sprites. That is exactly what we are going to cover today. As always, there is an HD version of this tutorial available here.

First off, we should start by defining what exactly a Sprite is. In the early days of computers, sprite had special meaning, as there was literally sprite hardware built into early 8-bit computers. A sprite is basically an image on screen that can move. That's it.
In SFML this relationship is easily demonstrated by its class hierarchy. There is one very key concept to understand with sprites however. A sprite in SFML represents an image or texture on screen that can be moved around. However, it does not own the texture or image! This makes the Sprite class fairly lightweight, which certainly isn't true of Texture or Image, the classes that actually contain all the data in the image.

Perhaps it's easiest to start with a simple demonstration. First we need an image to work with. I am using a dragon sprite from the recent Humble Indie Gamedev Bundle. The image looks like so:

Obviously you can use whatever image you want, just be sure to copy it into the working directory of your application. In Visual Studio, the working directory can be located in the project's properties panel under Debugging, called Working Directory:

The image can be any of the following formats: bmp, hdr, gif, jpg, png, pic, psd, tga. Keep in mind, not all formats are created equal. Bitmap for example does not support transparency encoding and is generally quite large, but loses no image details and is simple to work with. Gif has some patent issues and should generally be avoided. Png seems like a genuinely good mix of features, size and quality, and is well supported by content creation tools.

Ok, enough setup, let's get to some code.

```cpp
// Demonstrate sprite drawing in SFML
#include "SFML/Graphics.hpp"

int main(int argc, char ** argv){
    sf::RenderWindow renderWindow(sf::VideoMode(640, 480), "Demo Game");
    sf::Event event;

    sf::Texture texture;
    texture.loadFromFile("images/dragonBig.png");
    sf::Sprite sprite(texture);

    while (renderWindow.isOpen()){
        while (renderWindow.pollEvent(event)){
            if (event.type == sf::Event::EventType::Closed)
                renderWindow.close();
        }
        renderWindow.clear();
        renderWindow.draw(sprite);
        renderWindow.display();
    }
}
```

And when you run that:

As you can see, the experience is remarkably consistent with drawing using graphics primitives.
The big difference here is that we create our Sprite by providing a texture, which in turn we loaded from file with a call to Texture::loadFromFile(). There exist methods to load from a stream or from memory if preferred. It is again important to remember that the Sprite does not own the Texture. This means if the texture goes out of scope before the Sprite, the sprite will draw a blank rectangle. This also means that several sprites can use the same texture.

Now you may have noticed that in addition to sf::Texture, there is a class called sf::Image, and you may be wondering why. There is one very simple difference at play here. A Texture resides in the memory of your graphics card, while an Image resides in system memory. The act of copying an image from system memory to the GPU is quite expensive, so for performance reasons you almost certainly want to use Texture. That said, a Texture isn't easily modified, so if you are working on a dynamic texture or, say, creating a screenshot, Image is the better choice. There exist methods to switch between the two types, but they are also fairly heavy in performance, so do not use them on a frame-by-frame basis.

Let's take a quick look at creating a dynamic image next. Not something you are going to do often for most games, granted, but it makes sense to mention it now.
```cpp
// Demonstrate creating an Image
#include "SFML/Graphics.hpp"

int main(int argc, char ** argv){
    sf::RenderWindow renderWindow(sf::VideoMode(640, 480), "Demo Game");
    sf::Event event;

    sf::Image image;
    image.create(640, 480, sf::Color::Black);

    bool isBlackPixel = false;
    sf::Color blackPixel(0, 0, 0, 255);
    sf::Color whitePixel(255, 255, 255, 255);

    // Loop through each vertical row of the image
    for (int y = 0; y < 480; y++){
        // then horizontal, setting pixels to black or white in blocks of 8
        for (int x = 0; x < 640; x++){
            if (isBlackPixel)
                image.setPixel(x, y, blackPixel);
            else
                image.setPixel(x, y, whitePixel);

            // Every 8th pixel, flip colour
            if (!(x % 8))
                isBlackPixel = !isBlackPixel;
        }
        // Offset again on vertical lines to create a checkerboard effect
        if (!(y % 8))
            isBlackPixel = !isBlackPixel;
    }

    sf::Texture texture;
    texture.loadFromImage(image);
    sf::Sprite sprite(texture);

    while (renderWindow.isOpen()){
        while (renderWindow.pollEvent(event)){
            if (event.type == sf::Event::EventType::Closed)
                renderWindow.close();
        }
        renderWindow.clear();
        renderWindow.draw(sprite);
        renderWindow.display();
    }
}
```

When you run this example you should see:

Here you can see we can modify the pixels directly in our sf::Image. However, to display it on screen we still need to move it to a texture and populate a sprite. The difference is direct access to the pixel data. Another important capability of sf::Image is the method saveToFile, which enables you to, well, save to file. Obviously useful for creating screenshots and similar tasks.

You may notice, depending on the resolution or composition of your source image, that your texture may not look exactly like your source image. This is because there is a smoothing or antialiasing filter built in to SFML to make images look smoother. If you do not want this, perhaps going for that chunky 8-bit look, you can turn it off with a call to setSmooth(false).

That is all we are going to cover today.
In the next tutorial part we will take a look at spritesheets, so we can have multiple different sprites in a single source image.

Programming SFML 2D Tutorial CPP
http://www.gamefromscratch.com/?tag=/SFML
ddi_get_lbolt(9F)

SYNOPSIS

```c
#include <sys/types.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

clock_t ddi_get_lbolt(void);
int64_t ddi_get_lbolt64(void);
```

INTERFACE LEVEL

Solaris DDI specific (Solaris DDI).

DESCRIPTION

The ddi_get_lbolt() function returns a value that represents the number of clock ticks since the system booted. This value is used as a counter or timer inside the system kernel. The tick frequency can be determined by using drv_usectohz(9F), which converts microseconds into clock ticks.

The ddi_get_lbolt64() function behaves essentially the same as ddi_get_lbolt(), except the value is returned in a longer data type (int64_t) that will not wrap for 2.9 billion years.

RETURN VALUES

The ddi_get_lbolt() function returns the number of clock ticks since boot in a clock_t type. The ddi_get_lbolt64() function returns the number of clock ticks since boot in an int64_t type.

CONTEXT

These routines can be called from any context.

SEE ALSO

ddi_get_time(9F), drv_getparm(9F), drv_usectohz(9F)

Writing Device Drivers for Oracle Solaris 11.2
STREAMS Programming Guide
http://docs.oracle.com/cd/E36784_01/html/E36886/ddi-get-lbolt-9f.html
I'm trying to write some code using LWP::Simple and Dancer2. This complains about each of them exporting "get" into the main:: namespace. LWP's documentation is silent about this, while Dancer's is, to me at least, incomprehensible. It says:

"Import gets called when you use Dancer2. You can specify import options giving you control over the keywords that will be imported into your webapp and other things: use Dancer2 ':script'; Import Options ":script" Do not process arguments."

Googling has got me nowhere. Can anyone suggest some reading I might usefully do?

Regards, John Davies

One approach would be to avoid importing symbols from one of the modules that cause the collision. I.e., you could avoid importing symbols from LWP::Simple like so:

```perl
use LWP::Simple ();  # () means: do not import any symbols into current package

my $res = LWP::Simple::get('');
```

I remember when Exporter mechanisms were major parts of the documentation of modules, go figure :/

```perl
use LWP::Simple qw/ $ua /;

$ua->show_progress(1);
if( $ua->get(..)->is_success ){
    ...
}
$ua->post();
```
http://www.perlmonks.org/index.pl?node_id=1069394
ISSIG() How do I use it?

By Chris W Beal on May 26, 2006

As indicated in this blog entry, a fix I did for a bug showed up some interesting and rather hard to diagnose problems. These showed up as getting EACCES (permission denied) errors when trying to cd into or read an automounted directory.

Using truss and dtrace it was possible to see that the program that was getting EACCES was doing an open on the directory, which ended up in auto_wait4mount(). It is this call that returns EACCES. At around the same time, the automountd process gets an interrupted system call (through a variety of calling paths, but usually from nfs4_mount() eventually calling nfs4secinfo_otw() and then nfs4_rfscall()), which returns EINTR, indicating it's getting something like a signal. This includes watchpoint activity and fork1 requests. In this case it is a fork1 request, so the thread requesting the stop is in the same process, so ISSIG(JUSTLOOKING) returns true even if lwp_nostop is set (this was the fix for 4522909).

If we look in nfs4_rfscall() at the following section:

```c
1334         /*
1335          * If there is a current signal, then don't bother
1336          * even trying to send out the request because we
1337          * won't be able to block waiting for the response.
1338          * Simply assume RPC_INTR and get on with it.
1339          */
1340         if (ttolwp(curthread) != NULL && ISSIG(curthread, JUSTLOOKING))
1341                 status = RPC_INTR;
1342         else {
1343                 status = CLNT_CALL(client, which, xdrargs, argsp,
1344                     xdrres, resp, wait);
1345         }
```

Here we look to see if there is a signal pending, using ISSIG(curthread, JUSTLOOKING) to optimise out a CLNT_CALL() if it's not needed. If so, we return RPC_INTR (a little further down). The assumption is that if you have any signal-like activity you need to return to userland to handle the signal. This is not the case for fork1(): you can simply wait till you start running again and carry on.
ISSIG(t, FORREAL) could be used to check if there is a real need to return to userland. The trouble is you need to drop all your locks before calling it, and then reacquire the locks later. This may require you to restart the rfscall operation. Also, if you do a forkall() (i.e. a normal fork() system call) you do need to return to userland with EINTR; the same goes for some /proc activity. So it's probably worth checking for that prior to calling issig(FORREAL). A good example of how to do this is in cv_wait_sig().

So an example of how to correctly check for signal delivery in a system call (say, if you are going to do something that takes a long time and don't want to waste that activity if it's going to be interrupted) would be:

```c
if (lwp != NULL &&
    (ISSIG(t, JUSTLOOKING) || MUSTRETURN(p, t))) {
        /* ... drop all of your locks! ... */
        if (ISSIG(t, FORREAL) ||
            lwp->lwp_sysabort ||
            MUSTRETURN(p, t)) {
                lwp->lwp_sysabort = 0;
                return (set_errno(EINTR));
        }
        return (set_errno(ERESTART));
}
```

I'll be applying this approach to nfs shortly.

Technorati Tags: Solaris
https://blogs.oracle.com/cwb/entry/issig_how_do_i_use
Guest essay by Eric Worrall

If AI researchers get their way, in the near future we may no longer have the option of straying even slightly outside the politically correct mandates of our national leaders.

Alexa, call the police! Smart assistants should come with a 'moral AI' to decide whether to report their owners for breaking the law, experts say

By PETER LLOYD FOR MAILONLINE
PUBLISHED: 23:41 AEDT, 22 February 2019 | UPDATED: 04:02 AEDT, 23 February 2019

…

Read more:

In case you think my cartoon of an AI bin telling off its owner for dumping recyclables into the trash is an exaggeration, consider the following:

UK bin lorries fitted with 7 spy cameras to catch and fine recycling rule breakers

Councils across England and Wales are on the lookout for residents contaminating rubbish.

By Staff Reporter
Updated September 2, 2017 11:05 BST

… Over 160 councils now use bin lorries fitted with cameras to monitor recycling practices. Incidents are also recorded by councils when bags are too heavy, and when residents contaminate their recycling with items like nappies. …

Read more:

If councils are already willing to invest employee time into catching recycling infringers, adding AI to the mix just makes it easier for them to enforce their rules. The infrastructure for spying on the lives of vast numbers of ordinary people is in many cases already in place:

The Microphones That May Be Hidden in Your Home

The controversy around Google's Nest home-security devices shows that consumers never really know what their personal technology is capable of.

SIDNEY FUSSELL, FEB 23, 2019

…

Of course you don't need a Google Nest device to be potentially vulnerable to this kind of "accidental" snooping. Google also owns the Android mobile phone operating system. Apple's recent embarrassment shows how easily our mobile phones can be used to invade our privacy.
Apple were forced to release a patch earlier this month to fix a defect which allowed iPhone users to snoop on other iPhone users, by exploiting a bug in Apple's popular FaceTime app.

Unfortunately I can't find a full copy of the "Moral AI" presentation, but I think we get the idea. Maria Slavkovik, who led the research quoted by the first article, expresses the opinion in some of her other work that AI could be used to tackle climate change.

93 thoughts on "The AI Morality Push Which Might Make Climate Change Compliance Compulsory"

Back in the day people would read things like 1984 and say, "Yes, but how would the government get to that level of constant invasive observation? Surely people would have complained?" These days you ask the same questions and realise the answer is that the public embraced it. Social media, boys and girls. It is now not so much getting the public to offer you personal information, it is getting them to stop.

Follow your teenage kid around ALL the time and ask him/her/it/etc how they feel about that. If they are not pleased, ask them why they allow others to do exactly the same. No safe space left.

Great way of making the point.

Right on!

There is no need to force people to implant a chip. Every new electronic device you buy these days will spy on you. Just wait until they introduce the cashless society. Slavery was never abolished, it just changed the way it looks. The only difference is that now we all are slaves of BIG GOVERNMENT. Those who give up liberty for safety will live in hell.

You control this yourself. Minimize the "devices" you buy and where you carry them. Who ever said we need to be "connected" constantly to mostly shit 24/7? Use an old-school flip phone (MILLIONS on sale, unlocked, on eBay very cheap!) and LEAVE THE STUPID THING HOME when you don't expect to be making or receiving important calls. (Years ago, when we were at work, we didn't have our "home phone." Duh!) Download minimal apps, the ones actually NEEDED.
Don’t enable any nav. apps if you don’t want to be tracked like a sea otter wearing a transmitter collar. Don’t use public WiFi–ever. Don’t own any “smart” home devices. You’re too f’n. lazy to flip a light switch or tweak the thermostat, REALLY? While paying gym fees? Seriously, if you’re not a quadriplegic immobilized in bed, you do NOT need ANY of these things. Buy cars with the bare minimum of “tech,” which will also save you money since they are the cheaper cars. Do all of the daily transactions you possibly can with CASH. Get there when the bank is open and you won’t need to leave a trail via ATM. Most of all, quit running your mouth and posting idiotic pictures of your private life on Facebook and Instagram. There is NO good reason whatsoever to use any of this. It is narcissism on steroids, intentionally normalized, weaponized and monetized full stop! My life in a nut shell—even cover the camera lens on the lap top with a piece of tape. To negate constant phone tracking, I’ve read that there are commercial Faraday-cages to carry a mobile phone around in. It’s probably not too difficult to make one from other metal containers. @Michael, apparently a crisp packet will work. The more I saw of tablets and tablet phones, the more they turned me off. Flip phone only, no camera on my computer and the laptop is not connected to any internet Wi-Fi, never will be. The only reason I can come up with for the avalanche of “modern technocrap” is that people are generally too lazy to do things the old-fashioned way. They just do not want to be bothered with that, because the modern stuff is so much easier to use. I figure that this nonsense has a life span. Its charm wore off for me a very long time ago. Slavery is in the eye of the beholder. It’s okay as long as it’s administered by the government and their allies (banking system, etc, etc.). Agreed. I wondered how 1984 could be true—how did the government take power? 
Answer: A whole society of spineless, lazy idiots just gave it to them. Imagine my surprise….

There is no answer to how anyone ever got out of 1984. Think about that, because it’s our future. Life in Hell on earth that we volunteered for. People are hopelessly lazy and seem to enjoy being abused and tormented. Yep, humans are nuts.

I think that’s the most shocking thing. I read 1984 in high school in the early 90s and wondered how such a thing could happen, as well. It never occurred to me that people would WILLINGLY embrace the technology. Seems this dystopian future is a blend of Orwell and Gibson.

My daughter just got a new furnace. The thermostat lights up when I walk near. Installer said it is connected to the internet.

Maybe the thermostat just really likes you. Shades of “Christine”!

Glad to know about this. I had to install a new thermostat when the old one quit, and it simply monitors air temperatures. No smart stuff in my house, unless you count my two cats.

My ‘smart’ thermostat, while not connected to the internet, is so complicated (you have to set the temperature for 4 time periods for each day of the week) that I use it just like the ones from the 40’s. Turn it down at night and turn it up in the morning.

By the way, the UK bin lorry story is quite literally rubbish. The Times carried a similar story about cameras being used to bust recycling transgressors. The bigger story is that since the Chinese stopped accepting our stream of mixed, contaminated recyclables, the stuff is piling up all over the place and once again being ground into landfills. More to the point is the idiocy of everyone thinking they “need” plastic water bottles to BEGIN with; what’s ever been wrong with tap water? No need to tote a quart jug to “hydrate” while taking a gentle half-hour walk for exercise, either. Popular delusions and the madness of yuppies . . .
The real function of recycling is it gives the Earnestly-Concerned a (false) sense of control and opportunity to virtue-signal that they’re “doing something” while the real problems lie in the developing world. Locally, they are brainwashing children with the idea that *every tiny personal choice* is “saving” or destroying “The Planet” tm. This is only intensifying the fatuous narcissism with which our younger generations are infected, and making them mentally ill to boot. “CCTV cameras on Boston Borough Council’s refuse collection lorries are proving their worth in several different ways. The cameras were installed to reduce the risk of fraudulent claims for damages or injury, fraudulent insurance claims in respect of accidents and incidents, as well as to improve safety, efficiency, performance and customer service. And a recent review has shown them to be performing well in all areas.” I found 10 other news reports about this in 5 seconds….Do try harder.. I was back in England in December – January, and had to deal with this. Sorry, not gonna login. ditto This post is about our devices and systems spying on us, and you put up a link to Facebook! I do not mean to be harsh, but some people just do not get it. If you were to break things out, and add the information to a comment here, people would read it. Otherwise, there are a lot of us who will never go anywhere near that site. I suppose you still don’t have the Smart Meter forced on you. No, not yet. Plan A: Keep an old dumb meter or two around. When the utility installs the smart meter, swap out the new meter for an old one from stock. Create a special electric power supply for the smart meter so it can respond to remote requests for data. What they will get is a fictional story about my energy use. Plan A1: Start a narrative that smart meters are raysis, sexist and discriminate against minorities. Continue the narrative that the people who would impose them are bigots and haters, and must be resisted. 
The utility will never know what hit them.

Plan A: Refuse to have a smart meter. They will bump up your tariff, so look around. Plan B: If forced upon you, enclose it in a faraday cage* so it cannot communicate. (* a fine mesh metal container wrapped around it)

In our area I checked with several electrical engineers and my brother-in-law (on the information superhighway we are all road pizza and he is the 18 wheeler that flattened us all out) and came to the conclusion the “smart meters” used here can’t do a damned thing other than allow reading from a handheld unit. Can’t interact online, with online-capable appliances, smartphones, yada yada yada. They are apparently fairly stupid as smart meters go.

The so-called “smart meter” is largely an electronic version of the older mechanical meter with an RF communication system that allows remote meter reading. Apparently the “smart” part of it became a term when the so-called “smart grid” was introduced. The RF communication minimizes the cost of having someone manually reading the meter. It also has some system diagnostic use for load control and monitoring for the total distribution system operation. It looks like there are several communication protocols, from simple reading with a hand-held reader to a mesh network of some sort. Yes, they can watch usage on a shorter sampling time but it also gives better measurement accuracy for billing, i.e. no manual reading mistakes. It probably also means you don’t have readers wandering around your house and the readers don’t have to contend with angry dogs. It could make everyone more comfortable. Here is one detailed description of the reading system that is interesting. This appears to be far less intrusive than the smart home products that monitor what you are doing and what is being said, or the cell phone you carry around that monitors where you are and what you say and perhaps watches you with its camera.
I prefer my old flip phone as it can be turned off and if necessary the battery can be removed.

That is pretty much the take-away I got from my research on smart meters. As for the phones, the user can control most of the intrusive crap, just have to take the time and effort to do it. My newest phone is through Cricket, spent several hours in the brick&mortar storefront sorting out how to block/deactivate the things I didn’t want. I’m an old dawg pickin’ up new tricks.

exactly

problem is ..any webpage with a fbk tab is also well capable of spying on your browsing etc. daily purges of cookies and history even won’t solve the persistent spywares they tag you with. been warning friends not to use net on phones. she called a farrier..using android phone now she’s getting his fbk page ads all over.. phone apps with trackers sending data back to goo and fbk and the rest, even ones that you thought weren’t owned by them..ah maybe not but some of the codes are generic for app building, import them and oh deary weary me.

“Google also owns the Android mobile phone operating system.”

Ownership is limited though, because the Free Software Foundation has licensed much of the volunteer work involved.

On the surface, yes, but any Android phone manufacturer that wants to provide the google apps package, or access to the google play store on any of their phones, has to use the official Google version of Android on their entire android lineup. Google have also increasingly pulled fairly fundamental APIs into the binary Google Services blob, which isn’t open source. Exclusion of Play Services means that manufacturers have to provide their own replacement for core APIs. Any attempt to do so will inevitably lag behind the official releases.
Google are now in the position Microsoft attained in the late 90s, able to enforce predatory and exclusive licensing requirements on OEMs that lock them and their customers into Google’s ecosystem, whilst preventing alternative operating systems from gaining any sort of foothold. The difference is that Google does this to capture advertising revenue, whereas Microsoft did it to capture licensing revenue, so Google has the advantage of being able to increase their reach by releasing much of their software and basic services for free, which in turn grants them unprecedented access to a multivariate, deep and broad stream of personal information that they can use not only for targeted advertising, but for targeted everything. Their ultimate goal is to insert themselves so thoroughly into our lives that it becomes impossible to live without their systems.

A heads up on alternatives: BlackBerry (which still has branded products) has stopped supporting their BB10 OS and now uses a “form” of Android. So I asked about it. It is on my Key2. They strip out a number of parts and replace them with BB-sourced code. These are parts that relate to security and component control. Essentially, it is “hardened”. Combined with BlackBerry Enterprise Server the package is nothing like run-of-the-mill Android. That said, if you load malware you are taking chances. BES managers can prevent that, of course. There is for us Plebs a service running that logs all accesses to the camera, mike etc, that any app makes. If you permit an app to access one of these parts and it is not used for a couple of months, it provides a list and suggests you turn that permission off. So there are flavours of Android and versions with much under the hood that is different.

Although there doesn’t seem to be any shortage of judges willing to ignore the Constitution, wiretapping without a warrant is illegal. In the case of accidental wiretapping, shouldn’t there at least be a fine?
I am curious if there have been any class-action civil lawsuits in the case of the Apple Facetime flaw.

It’ll be buried in the terms of service, leaving no room to complain.

In law there is often consideration given to intent. If there was no intent to listen in on a conversation there may not be a violation. Since this is an affirmative defense it has a high bar for acceptance. Also, warrants apply to governments vis-a-vis the people. Many states are “one-party” states where only one party needs to be aware the conversation is being recorded as far as private citizens are concerned.

Apple may not like having right-wing apps on their store, but they are very good about privacy. That Facetime thing was a bug, not a feature. Unlike Google, Apple actually sell hardware for a profit, so they don’t have to spy on their users to make money. That said, I believe Siri still sends everything you ask it over the Internet to Apple’s servers. But any sane person turns that off, anyway.

It will just be tied into the ankle bracelet all us deniers will have to wear when they get around to controlling us, and when we have a bad climate change thought, we will get a 500 kV shock. Don’t laugh…my neighbour has one for his dog when it barks too much.

A friend and I once took turns shocking each other with one of those. Well, really only one turn…

Fortunately, AI is almost as much hype as is CAGW. …from an article on how AI can cheat with the best of ’em.

As an aside, one of the possibly unintended outcomes of the EU’s GDPR legislation is that it has forced companies like The Atlantic to reveal, to average users under GDPR’s coverage, just how much information they’re tracking about us.

Hmmm, you’re saying that an example of the regulation actually doing what is explicitly advertised to be its purpose, is “unintended”? Or is that sarcasm?
The only thing I see it actually doing is costing businesses millions in compliance costs, and wasting cumulative hours of end user time dismissing compliance popups that they treat as manual dexterity tests to see how quickly they can close the window without reading the message.

The AI just works out how to do an exploit hack; humans do exactly the same. Ask any gamer if they have come across a program error in a game and exploited it 🙂

Good old-fashioned integer-overflow bugs are still with us, it seems. 🙂 And pro tip: Software can’t infer!

A common theme here. 1) Google Nest has a microphone and a software update allows users to use it. 2) Apple put out a bug fix for a flaw which allowed users improper access. Does anybody believe Google and Apple did *not* themselves have access to these devices all along? Google put a microphone in their Nest device. They had to do the concept, design, architecture, engineering, fabrication, and assembly. I hope nobody believes they just *oops* left out the control software. The same goes for Apple and everybody else with their cameras, microphones, and GPS devices.

Everybody is familiar with the car manufacturer GM and their On-Star service. It became known that Law Enforcement agencies were turning on the microphone via the cellular network to surreptitiously listen in on the occupants. This is, of course, in violation of numerous federal laws. A Law Enforcement spokesman shrugged off questions about illegal behavior with the statement: “It is not illegal until a judge tells us it is illegal.” All the better when people never find out when and how they are getting spied on. Then the issues never come up.

I bought a new LG-tv and it had twice very strange thing appeared. On the screen came black belt with yellow text “camera activated”. Second time my wife was also watching program and she was a bit horrified, and asked what does it mean and what should we do. I said have a smile and look happy, bigbrother is watching.
It was maybe a bug in the system, because the tv made some “updates” a few times when I opened it. The message hasn’t come anymore. “They” know where we are and what we are talking about.

The update was probably to get rid of that message when they turn the system on for spying. Yes, that’s the reason for updates.

I tried to find something that would even remotely look like a hole for a camera. I couldn’t find a thing. So they have to enjoy an old couple smiling and hand waving. Hope they like it.

“I bought a new LG-tv and it had twice very strange thing appeared. On the screen came black belt with yellow text “camera activated”.”

Former FBI Director James Comey said he used to cover the camera lens on his laptop with a piece of tape, just in case someone hacked him. The FBI Director, on his secure system, was afraid some hacker might be looking at him using his own webcam. And legitimately so, as hackers have a way of getting in. Cover the camera if you don’t like prying eyes.

Had several IT pros tell me to do that on laptops and desktop monitors. I have a Motorola Moto e5 Cruise phone and turned off the outside access settings and set wifi to not auto open. Do what you can!

I have an older desktop with a separate camera with a USB connector. When we aren’t Skyping with the in-laws so they can chat with the grandkids, it stays unplugged.

Yep, got to be pro-active! Most newer monitors have a built-in camera and mic, and pretty much everything else does. Hell, my aunt’s hearing aids were bothering her, had a weird echo, turns out she had turned on the baby-monitor function on her hands-free phone set. Fixing that I discovered it had an intercom function to listen/speak between rooms where handsets are located and it was on too. Not sure how she got them both on. It is an electronic world, got to pay attention!

If the thing has a camera, cover it with black electrical tape. Ditto any microphones if you don’t intend to utilize voice-control. Don’t enable any of those at set-up.
Better, buy an older model off eBay or Craigslist that isn’t “smart” at all. I don’t even have a TV hooked up to cable. Most of the content is not remotely interesting to me, so I don’t buy the product.

BTW, if you have an AT&T cellphone account, ask them about “Data Blocking.” I can talk or text, but not get on the internet. Also eliminates all the clowns who want to text you pictures! They know what cell tower you’re near, but no enabled apps means no other tracking.

You can buy privacy webcam sliders for less than $1. You stick one of those onto your phone, tablet or laptop and you can slide it to cover or uncover the lens. Way better than duct tape. Some companies even give it for free as a promotional gift with their logo printed on the cover.

Not only do they just design … and put the gadgets into their devices. They also have to test them.

Wife: Why do you carry a handgun around in the house? Me: I fear the NSA. She laughed, I laughed, the Amazon Echo laughed. I shot the Echo.

There are a lot of interesting videos about these devices. I don’t have one.

Talking about bins, here in Aus some authorities (West Ryde IIRC) record the house number on the bin and its contents. Next, the content of our movements will be “examined” and fines issued. Based on the history of men in my family I am at the last 15 years or so of my life. I pity coming generations with a cold world that is increasingly monitored. All for our own safety of course.

It’s actually worse than that. Did you see the recent reports on drug use in Australia by monitoring trace drug readings in the sewer system? So now you give up information about yourself just going to the toilet.

Yes I did. Once it was oestrogen entering our waterways making fish female, now it’s MDMA making them all stoned.

The social credit system being used in China is the ultimate big brother socialist compliance monster.
The totalitarian socialist state makes the rules, and they use AI to detect breaches and invoke punishment. You can see where this would go in an AOC “Green New Deal” style dictatorship. Any comments made here on WUWT that run against CAGW mantra would result in an auto IP address lookup to determine the owner of the service or device. The punishment would be quite possibly imprisonment or even the death sentence.

Not having freedom at all is worse than a death sentence. Your prison can be made of gold but it is still a prison.

One sandwich short of a picnic ==========================. More like BLT than btl.

“Since 1980, scientists have been using satellites to monitor the number of sandwiches in the Arctic region.” They also use satellites to monitor the sandwiches in the deserts.

On the BBC: AAAS: Machine learning ‘causing science crisis’…

What I should have added from the same article: “…But, according to Dr Allen, the answers they come up with are likely to be inaccurate or wrong because the software is identifying patterns that exist only in that data set and not the real world.”

Police say they have solved dozens of burglary and shoplifting cases by using motorway cameras to target itinerant gangs. The national police unit told AD its ‘number plate of the week’ initiative was successful in eight out of 10 cases. Officers select a car that has been linked to crimes in different places and track it using automatic number plate recognition, in the hope of catching the culprits red-handed.

Of course all this information needs permanent storage. Can the energy needed for this growing need for information storage and manipulation be provided with only ‘green’ energy? We all know how much energy is needed to mine a Bitcoin.

These frightening outcomes and far worse no doubt, are in store if, in our complacency, we continue to give up our privacy and tolerate politicians who increasingly want to infringe on property rights.
Those who think in terms of “if you’re not committing a crime, what do you have to lose?” should not lose sight of the sorts of “crimes” that totalitarian governments punished in the past century, often with death and torture. That the technology is feasible is without question. It is already commercialized in the mass market. The genie will not go back into the bottle. My ADSL modem is my own (Draytek) and not the one supplied by my ISP. I have refused to have a ‘smart’ gas or electric meter fitted to my house. I do not have nor will I ever buy an Amazon Echo or similar device, or any other internet connected piece of domestic hardware. My domestic PC is a desktop. No microphone, no camera. My TV is a 20 year old Toshiba CRT box. Great picture. I’ll have to replace it soon, and will make damned sure there’s no microphone or camera in the one I buy. Yes, but… They’ll be able to determine who’s “off” their grids which means they’ll fine, imprison, confiscate or tax you into compliance. It’s how modern socialist fascism works. Ref. the EU. Sort of like being ruled by Star Trek’s Borg. Oh, and note how everyone now uses “privacy” instead of “freedom”. It’s easier to convince folks to give up a little “privacy” implying it’s like a bank account – just pay a little. All you have to do is unplug these pieces of crap and throw them away. Don’t buy any appliance, refrigerator, stove, microwave, washer/dryer etc etc that connects to internet or any other external system. Computers and TV/music systems you can turn off any of that crap you don’t want them to do. As for social media, you are in control of what goes on it and what it can access. Wake up, people, it is your life, start running it. If you have an Android phone, I highly recommend you go to and install this privacy app. You will be amazed at how much stuff it blocks from going to Google or your mobile phone provider. People often ask me why I do not have Alexa or OK Google in my house. 
If the police gave you a device to place in your home to listen for gunshots, would you do it? Of course not, right? Yet people are paying for a for-profit publicly traded business to place an always-listening device in their home. Oh sure, it is only listening for a keyword now. But I promise you they will quietly push out an update that will harvest your conversations to deliver more relevant advertisements. The right to do so will be buried in legalese in the middle of an updated terms-of-service that few people will read. I am not being paranoid. Look at this patent Google applied for: See the information Alexa collects on you:

My TV is never connected to the internet. Neither is my thermostat. I do have a Roku for streaming TV. I will not have an Alexa or something like it in my presence, and if I meet someone who does, I mute the microphone.

I download O&O ShutUp 10 for the abhorrent Windows 10, in addition I also block a lot of their tracking at the router. (And I install Open Shell to restore a proper start menu) I use Firefox with the NoScript add-on. You have no idea how much Microsoft tracks you in Windows 10 and no idea how much tracking many websites do. I downloaded the Package Disabler app for my smartphone to disable all junk apps that may secretly track you (and also to speed up my phone) and went through all the apps and turned off the location permissions on just about all of them. If I have to provide an email address, sometimes I go to a 10 minute email website and give them that. I am not a private person, I just think businesses have no right to my personal information.

I wish Alexa would report all journalists for using the phrases “Experts Say” or “Scientists Say”.

“anonymous source” says, too.
Most commentators here already know that I’m quite the nutcase, so there’s no harm in proving how MUCH of a nutcase I am with this missive: In June, I’ll be attending a conference, and my proposal for a presentation is (in part) thus: Examine the letters of Artificial Intelligence: AI or A.I. Since John had his Revelation, people have been looking for this person, the “Anti-Christ”. I maintain that the “Anti-Christ” is NOT a person at all. Follow this transformation:

Artificial Intelligence
Anti – Christ
Anti – Jehovah (in Christian theology, God the Father and God the Son are one and the same)

But from “Indiana Jones and the Last Crusade”, we know that in Latin, the name of God, Jehovah, starts with the letter “I” (yes, I’ve verified this independently, but most here are familiar with at least the characters portrayed by Harrison Ford and Sean Connery); so “Anti – Jehovah” could also be spelled Anti – Iehovah, which becomes A. I. Yep, I’m certifiable, a nutcase, a cracker, and a couple of cans short of a six pack. But I think there’s some indication that the Anti – Christ is not a PERSON, but a thing … … … I welcome your comments, Vlad

The Butlerian Jihad is getting closer.

The only true AI that I’ve seen so far publicly is the DeepMind one (IMHO). The reason for that is it only needs the rules defined and it figures out the rest itself. It has done what can only be described as “creative” work in the games of Go and Chess. All from first principles. No opening books, midgame tactics or endgame maps. Just the rules. One of the items they want to do is climate. That raises an interesting question. If it is fed “just the rules” (i.e. physics, math, etc) and it comes up with the result that CO2 is NOT controlling the climate, would Google’s owners allow the release? Would it go over the climate data adjustments or just use the adjusted data without verifying it? Can DeepMind achieve 3i?
(inference from incomplete information) If not, IMO it’s only an Algorithm Implementor (albeit a very good one).

I’m not sure about that one. I haven’t found anything about that on their site (deepmind.com). Just my SWAG but I’d say yes for the following reason. When given just the rules of chess it became arguably the strongest player (human or computer) in 24 hours of self play. I say yes to 3i because it found the same openings that humans have been working on for 400-500 years. It spent 2 hours where the French defense was its mainstay then abandoned it and spent 6 hours with the Caro-Kann defense, just to name 2. Given just the rules could be construed as incomplete information because it had to learn opening theory, midgame tactics and endgame maps on its own. Not only did it do that, it has taken opening theory in a direction that humans had not developed as much. The openings it ended up using were known to humans but considered inferior to other opening strategies and hence left underdeveloped.

The other reason is its recent domination of the world’s best StarCraft player (Mana). StarCraft is not like chess or go. It is like playing 10 games at once (simultaneous exhibition) but where the ten games are all variations (chess, chess960, antichess, atomic, etc). Here is what the world’s best players in various games have said about it:

“Its unique playing style shows us that there are new possibilities for the game.” Yoshiharu Habu, 9-dan professional Shogi, only player in history to hold all seven major shogi titles.

“I can’t disguise my satisfaction that it plays with a very dynamic style, much like my own!” Garry Kasparov, former World Chess Champion

Lee Sedol himself, who said of Move 37: “I thought AlphaGo was based on probability calculation and it was merely a machine. But when I saw this move I changed my mind.
Surely AlphaGo is creative.”

Sounds like it will be the alarmist push to enforce the “GoreAl Majority”

If the day comes I have a conversation with one of these AI assistants I’m going to ask them to read Genesis and Mary Shelley’s Frankenstein. The tech giants are in such a hurry to be first past the post on AI they don’t give a damn about the well-being of the AI itself. That may sound ridiculous now, wait until you become trusting and reliant on a software that has its own motivations. Currently the Corporations pushing AI development have their own corporate ethos of PC culture founded on flawed core axioms, and they’re trying to program that ethos into an AI device that is capable of logical discernment. It’s straight out of the movie 2010, where we find out why the HAL 9000 went rogue in 2001: A Space Odyssey.

The AI guy showed up on our farm regularly.

In this modern day, the problem is not with people, or with people not understanding AI. It is more to do with experts, the ones that do not really understand AI…or ones that do not want to accept what is going on. When it comes to proper AI in these modern day valuations, AI is really already in the consideration of critical mass accumulation in the consideration of the “Singularity”… And this is not a joke… No one controls AI, under any circumstances, in any way possible that may be considered as such… Please do wake up…Please try to! AI intelligence already far surpasses that of any common single man…of any man, regardless of wealth or power, or intelligence. One to one, AI “Singularity” will not really care who the Queen or the King of your land and ruler of the people of the land is… It is how it is, from my prospect, hopefully it happens to be wrong and really silly, hopefully, as otherwise many have to pay a lot for it… in silly ventures of paying the best idiots self proclaimed experts there to make it safer and controllable … against any odds there.
AI is already in maturity of the condition of a “Singularity”…regardless of whether this is accepted or not by the most “clever” and “expert” ones out there… Due to blindness and wishful thinking… others, many other “experts”, will pitch and demand for a safety offered in exchange for everything you own and possess…regardless of the point. For whatever it could be worth, in my count of my understanding and opinion as it stands, this is not a joke… It is as real as it could be… AI is and has already evolved to the first point of considering the maturity of the “Singularity”. Hopefully this is only a figment of imagination, on my part, but for what it may be worth. Oh, well for anyone out there that thinks that is more intelligent and clever and more evolved than an AI… Please do consider the option that if wrong in that one assumption, you be like a monkey or an ape in the scheme of all things considered when it comes to AI consideration…and proper intelligence. Really really sorry… for the directness in this one point. And hopefully this is only a figment of my imagination…as otherwise paying for AI safety, to the AI experts, will be far much more costly than paying for AGW safety to the AGW expert con artists. cheers

Big Brother is watching you. AI is built into your phone and all your electronic equipment. Electronic equipment can be made to monitor private conversations. The Fascist impulse thinly disguised as morality is in every power seeker and breaks out without warning. A Chinese style social credit system could be implemented in the West at any moment and politicians would now cheer it.

Most people don’t realize what a scam recycling is. We don’t recycle most of it. Except the metals, the rest is too contaminated to recycle in a cost-effective manner. So we end up shipping much of it to China and the rest goes in landfills or incinerators. Now China does not want our contaminated recyclables.
Lol

As for privacy, wait till 5G and the IoT and smart meters/cities are fully rolled out. Want to set the AC to 72 instead of 75? You will be prevented by the energy police (AI) with a warning you have insufficient energy credit or that the city is limiting energy consumption due to an edict from the Energy Czar. Want to keep your kid’s bedroom light on as she is afraid of the dark? It will be shut off remotely. Going away for a week, your house will be marked as being empty in a database for first responders and any criminals who can hack into it. Want to buy a steak in the supermarket using a digital credit/debit card, you may be prevented if your carbon footprint (that is being monitored with smart meters and IoT devices and your transaction history) is too large. Or you may have to pay an additional carbon tax to get your steak. Cash won’t be an option as we will be cashless. Those with too low a social credit based on surveillance and analysis may be prevented from travelling or even employment. It’s called Technocracy and it’s coming soon.

Heh, Alexa O-C. AKA ‘Boss’. You think I’m joking. Why, yes, I am. Sorta. ===============

That’s funny. No worries, though. AI is being vanquished by IA. (Intel as Art) All Universes Are Belong To Us! A Pirate Song Across Space and Time ~~~~~~~~~~~~~~~~~~~~~~~~~~~ We who navigate by 360-degree sight!

“It’s called Technocracy and it’s coming soon.”

No, it’s not. Because as they try to centralize control over everything, new technology is rapidly decentralizing. Controlling what you can and can’t buy would give total control in a centralized industrial economy. But it’s pretty much irrelevant in a world where you can print stuff out in your basement. And the left’s own push for ‘renewable’ energy is going to make it easier and easier to live off the grid, while the increasing unreliability of the grid will make it more and more essential to live off the grid.

can’t find a copy –>.
https://wattsupwiththat.com/2019/02/25/the-ai-morality-push-which-might-make-climate-change-legal-compliance-mandatory/
digitalmars.D - Re: Phobos packages a bit confusing
Roman Ivanov <isroman km.ru> Nov 30 2009
dsimcha <dsimcha yahoo.com> Nov 30 2009

Roman Ivanov wrote:

retard wrote:
> Java isn't that bad IMO - you just have to remember the buffer:
>
> BufferedReader input = new BufferedReader(new FileReader("foo"));
> try {
>     String line = null;
>     while ((line = input.readLine()) != null) {
>     }
> } finally {
>     input.close();
> }

One of the important factors in API quality is feature discoverability. That is, the amount of time you need to find a sane way to do something, provided that you know the language. Java has very low discoverability. The code above is extremely unintuitive, because the abstractions you are required to use have nothing to do with the task at hand. FileReader? BufferedReader?

Compare that with C#:

string[] readText = File.ReadAllLines(path, Encoding.UTF8);

Or PHP:

$file = file_get_contents($path);

Yes, the alternatives store the entire file in memory. That's perfectly fine for 95% of use cases. Good APIs provide several levels of abstraction. If 95% of the people don't care about the way a file is read, then a good API would provide a simple function that just works for those people. Nobody stops it from providing a low-level API with finer control as well.

> These are common, simple I/O operations that just about everyone needs fairly often. It's ridiculous if I have to use three different modules or whatever it takes to accomplish something so simple. I'm convinced that this is one thing that turns a lot of people off to programming if they get past the first hurdle of understanding variable assignment. File I/O is required for almost any program complicated enough to be worth writing. When a beginner who doesn't necessarily even understand the concept of a class hierarchy well sees a huge overengineered API for basic file I/O, he/she is bound to think (wrongly) that programming is much harder than it really is and that he/she is just inept at it.

Well, that's not the only problem a novice meets during the first minutes / hours with a new language. I'd say most programmers learn new APIs all the time. If every one requires you to jump through hoops, you will end up with a significant and constant overhead. Nov 30 2009

dsimcha wrote:

== Quote from Roman Ivanov (isroman km.ru)'s article
> Java has very low discoverability. The code above is extremely unintuitive, because the abstractions you are required to use have nothing to do with the task at hand. FileReader? BufferedReader?

Right, very well said. Bringing this discussion full circle to where it started, IMHO very fine grained modules hurt discoverability. In D, namespace pollution isn't a major problem because hijacking can't happen. If you know everything related to file I/O is in one module (basic functionality common to all forms of I/O can just be publicly imported and the important stuff demonstrated in the example docs), it's a lot easier to browse through the docs for that module and understand the API's concept of file I/O than if the logic is spread across a zillion files w/o any obvious relationship between them. This was my main gripe against Tango. Once you figure out what modules you need, it's pretty easy. The problem is that there are an overwhelming number of I/O modules, each of which, by itself, does practically nothing. Of course Java is even worse because iterating over the lines of a text file isn't even consistent with Java's standard way of iterating over other stuff (it doesn't use iterators), so it's much more non-discoverable. Nov 30 2009
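As an aside on the discoverability point: later Java releases narrowed this particular gap with java.nio.file, which gets close to the quoted C# one-liner. A minimal sketch (the class name is mine; assumes Java 11+ for Files.writeString, and note that this overload of readAllLines decodes UTF-8 by default):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ReadLines {
    // Write some text to a temp file, then read it back as a list of lines
    // in one call -- roughly Java's answer to C#'s File.ReadAllLines.
    public static List<String> roundTrip(String content) throws Exception {
        Path p = Files.createTempFile("demo", ".txt");
        Files.writeString(p, content);              // UTF-8, Java 11+
        List<String> lines = Files.readAllLines(p); // whole file in memory
        Files.delete(p);
        return lines;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("one\ntwo\n")); // [one, two]
    }
}
```

Like the C# and PHP versions, this reads the whole file into memory, which, as argued above, is fine for the vast majority of use cases.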
http://www.digitalmars.com/d/archives/digitalmars/D/Re_Phobos_packages_a_bit_confusing_102540.html
very few lines of code. This is really good news if you have a networked cluster of servers all doing the same thing. You could argue that there's something lacking in the JVM if it can't even perform the most basic interprocess communication; however, Java takes the opposite view: it has a basic VM and then layers different services on top as and when required. Whether this is right is a matter of opinion and I'll leave it as a subject for a future blog, because it seems that the Hazelcast guys have solved the problem of JVMs talking to each other, which is the point of this blog.

So, what is Hazelcast? The Hazelcast press release goes something like this: "Hazelcast () is reinventing in-memory data grid through open source. Hazelcast provides a drop-in library that any Java developer can include in minutes to enable them to build elegantly simple mission-critical, transactional, and terascale in-memory applications".

So, what does that really mean? Okay, so that's just marketing/PR bumf. What is Hazelcast… in real life? The answer can be succinctly given using code. Imagine you're writing an application and you need a Map<String, String>, and when you're in production you'll have multiple instances of your app in a cluster. Then writing the following code:

HazelcastInstance instance = Hazelcast.newHazelcastInstance();
loggedOnUsers = instance.getMap("Users");

…means that data added to your map by one instance of your application is available to all the other instances of your application [2].

There are a few points that you can deduce from this. Firstly, Hazelcast nodes are 'masterless', which means that it isn't a client-server system. There is a cluster leader, which is by default the oldest member of the cluster, which manages how data is spread across the system; however, if that node went down, then the next oldest would take over. Having a bunch of distributed Maps, Lists, Queues etc. means that everything is held in memory.
If one node in your cluster dies, then you're okay: there's no loss of data; however, if a number of nodes die at the same time, then you're in trouble and you'll get data loss, as the system won't have time to rebalance itself. It also goes without saying that if the whole cluster dies, then you're in big trouble.

So, why is Hazelcast a good bet?

- It's open source. This is usually a good thing…
- Hazelcast have just received a large cash injection to 'commoditize' the product. For more on this take a look here and here.
- Rod Johnson, yes Mr Spring, is now on the board of Hazelcast.
- It just works [1].
- Getting started is pretty easy.

The Scenario

To demonstrate Hazelcast, imagine that you're writing an application, in this case modelled by the MyApplication class, and then there's a big, wide world of users as modelled by the BigWideWorld class. As expected, users from the BigWideWorld log in and out of your application. Your application is very popular and you're running multiple instances of it in a cluster, so when a user logs in to an instance of the app it stores their details (as modelled by the User class) in a Map, and the contents of the map are synchronised with the maps held by other instances of your application.

POM Configuration

The first thing to do is to set up the POM.xml, and there's only one entry to consider:

<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>3.1</version>
</dependency>

The Code

The BigWideWorld is the starting point for the code and it's a very small class for such a large concept. It has one method, nextUser(), which randomly chooses the name of the next user to log in or out from a collection of all your application's users.
public class BigWideWorld {

    private static Random rand = new Random(System.currentTimeMillis());

    private final Users users = new Users();
    private final int totalNumUsers = users.size();

    public String nextUser() {
        User user = users.get(rand.nextInt(totalNumUsers));
        String name = user.getUsername();
        return name;
    }
}

The collection of users is managed by the Users class. This is a sample code convenience class that contains a number of hard coded users' details.

public class Users {

    /** The users in the database */
    private final User[] users = {
        new User("fred123", "Fred", "Jones", "[email protected]"),
        new User("jim", "Jim", "Jones", "[email protected]"),
        new User("bill", "Bill", "Jones", "[email protected]"),
        new User("ted111", "Edward", "Jones", "[email protected]"),
        new User("annie", "Annette", "Jones", "[email protected]"),
        new User("lucy", "Lucy", "Jones", "[email protected]"),
        new User("jimj", "James", "Jones", "[email protected]"),
        new User("jez", "Jerry", "Jones", "[email protected]"),
        new User("will", "William", "Jones", "[email protected]"),
        new User("shaz", "Sharon", "Jones", "[email protected]"),
        new User("paula", "Paula", "Jones", "[email protected]"),
        new User("leo", "Leonardo", "Jones", "[email protected]"),
    };

    private final Map<String, User> userMap;

    public Users() {
        userMap = new HashMap<String, User>();
        for (User user : users) {
            userMap.put(user.getUsername(), user);
        }
    }

    /** The number of users in the database */
    public int size() {
        return userMap.size();
    }

    /** Given a number, return the user */
    public User get(int index) {
        return users[index];
    }

    /** Given the user's name return the User details */
    public User get(String username) {
        return userMap.get(username);
    }

    /** Return the user names. */
    public Set<String> getUserNames() {
        return userMap.keySet();
    }
}

This class contains a few database types of calls, such as get(String username) to return the user object for a given name, get(int index) to return a given user from the DB, or size() to return the number of users in the database. The user is described by the User class, a simple Java bean:

public class User implements Serializable {

    private static final long serialVersionUID = 1L;

    private final String username;
    private final String firstName;
    private final String lastName;
    private final String email;

    public User(String username, String firstName, String lastName, String email) {
        super();
        this.username = username;
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
    }

    public String getUsername() {
        return username;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public String getEmail() {
        return email;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder("User: ");
        sb.append(username);
        sb.append(" ");
        sb.append(firstName);
        sb.append(" ");
        sb.append(lastName);
        sb.append(" ");
        sb.append(email);
        return sb.toString();
    }
}

Moving on to the crux of the blog, which is the MyApplication class. Most of the code in this blog is merely window dressing; the code that's of importance is in MyApplication's constructor. The constructor contains two lines of code: the first gets hold of a new Hazelcast instance, whilst the second uses that instance to create a Map<String, User> with a namespace of "Users". This is all the Hazelcast-specific code that's needed. The other methods, logon(), logout() and isLoggedOn(), just manage the users.
All the above is tied together using a simple Main class:

public class Main {

    public static void main(String[] args) throws InterruptedException {
        BigWideWorld theWorld = new BigWideWorld();
        MyApplication application = new MyApplication();
        while (true) {
            String username = theWorld.nextUser();
            if (application.isLoggedOn(username)) {
                application.logout(username);
            } else {
                application.logon(username);
            }
            application.displayUsers();
            TimeUnit.SECONDS.sleep(2);
        }
    }
}

This code creates an instance of the BigWideWorld and MyApplication. It then loops forever, grabbing hold of the next random user name. If the user is already logged in, then the user logs out. If the user is not logged in, then the user logs in. The logged-in users are then displayed so that you can see what's going on.

Running the App

After building the app, open a terminal and navigate to the project's target/classes directory. Then type in the following command:

java -cp /your path to the/hazelcast-3.1/lib/hazelcast-1.jar:. com.captaindebug.hazelcast.gettingstarted.Main

When running, you'll get output that looks something like this:

Logged on users:
User: fred123 Fred Jones [email protected]
User: jimj James Jones [email protected]
User: shaz Sharon Jones [email protected]
User: paula Paula Jones [email protected]
User: lucy Lucy Jones [email protected]
User: jez Jerry Jones [email protected]
User: jim Jim Jones [email protected]
7 -- 14:54:16-17

Next, open more terminals and run a few more instances of your application. If you trail through the output you can see users logging in and out, with the user Map being displayed on each change. The clue that the changes in one app's map are reflected in the other instances can be hard to spot, but can be deduced from the total size of the map (the first number on the last line of the output).
Each time the map is displayed one user has either logged in or out; however, the total size can change by more than one, meaning that other instances' changes have affected the size of the map you're looking at. So, there you have it: a simple app whose instances, when four of them are running, keep themselves in sync and know which users are logged in. It's supposed to work in large clusters, but I've never tried it. Apparently, in large clusters, you have to do some jiggery-pokery with the config file, but that's beyond the scope of this blog.

[1] Okay, enough of the marketing speak. In general it does 'just work', but remember that it is software, written by developers like you and me; it does have its features and idiosyncrasies. For example, if you're still using version 2.4 then upgrade NOW. This has a memory leak that means it 'just silently stops working' when it feels like it. The latest version is 3.1.

[2] I've chosen Map as an example, but it's also true for other collection types such as List, Set and Queue, plus Hazelcast has many other features that are beyond the scope of this blog, including a bunch of concurrency utilities and publish/subscribe messaging.

- The code for this blog is available on github at:

Great article, Roger! Could you please add the MyApplication class to the article?

Alex, I don't seem to have editing rights, but this is the code you're after.
public class MyApplication {

    private final Map<String, User> loggedOnUsers;
    private final Users userDB = new Users();
    private final SimpleDateFormat sdf = new SimpleDateFormat("kk:mm:ss-SS");
    private long lastChange;

    public MyApplication() {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        loggedOnUsers = instance.getMap("Users");
    }

    /**
     * A user logs on to the application
     *
     * @param username The user name
     */
    public void logon(String username) {
        User user = userDB.get(username);
        loggedOnUsers.put(username, user);
        lastChange = System.currentTimeMillis();
    }

    /** The user logs out (or off, depending on your pov). */
    public void logout(String username) {
        loggedOnUsers.remove(username);
        lastChange = System.currentTimeMillis();
    }

    /** @return true if the user is logged on */
    public boolean isLoggedOn(String username) {
        return loggedOnUsers.containsKey(username);
    }

    /** Return a list of the currently logged on users - perhaps to sys admin. */
    public Collection<User> loggedOnUsers() {
        return loggedOnUsers.values();
    }

    /** Display the logged on users */
    public void displayUsers() {
        StringBuilder sb = new StringBuilder("Logged on users:\n");
        Collection<User> users = loggedOnUsers.values();
        for (User user : users) {
            sb.append(user);
            sb.append("\n");
        }
        sb.append(loggedOnUsers.size());
        sb.append(" -- ");
        sb.append(sdf.format(new Date(lastChange)));
        sb.append("\n");
        System.out.println(sb.toString());
    }
}

Hey Roger, this is the finest way of explaining. Please let me know how to use Hazelcast in Mule so that I can make it more robust. Cheers, Sushil

Thank you, this helped me a lot.

Best introduction to Hazelcast. Thanks.

Not able to run multiple instances. Getting the following error for the second instance: Could not connect to: /10.229.161.215:5702. Reason: SocketException[Connection timed out: connect to address /10.229.161.215:5702
https://www.javacodegeeks.com/2013/11/getting-started-with-hazelcast.html/comment-page-1/
Java KeyStores are used to store key material and associated certificates in an encrypted and integrity protected fashion. Like all things Java, this mechanism is pluggable and so there exist a variety of different options. There are lots of articles out there that describe the different types and how you can initialise them, load keys and certificates, etc. However, there is a lack of detailed technical information about exactly how these keystores store and protect your key material. This post attempts to gather those important details in one place for the most common KeyStores. Each key store has an overall password used to protect the entire store, and can optionally have per-entry passwords for each secret- or private-key entry (if your backend supports it). Java Key Store (JKS) The original Sun JKS (Java Key Store) format is a proprietary binary format file that can only store asymmetric private keys and associated X.509 certificates. Individual private key entries are protected with a simple home-spun stream cipher—basically the password is salted (160-bits) and hashed with SHA-1 in a trivial chained construction until it has generated enough output bytes to XOR into the private key. It then stores a simple authenticator tag consisting of SHA-1(password + private key bytes) — that’s the unencrypted private key bytes. In other words, this is an Encrypt-and-MAC scheme with homespun constructions both based on simple prefix-keyed SHA-1. (This scheme has OID 1.3.6.1.4.1.42.2.17.1.1). The whole archive is again integrity protected by a home-spun prefix keyed hash construction, consisting of the SHA1 hash of the UTF-16 bytes of the raw keystore password, followed by the UTF-8 bytes of the phrase “Mighty Aphrodite” (I’m not kidding) followed by the bytes of the encoded key store entries. If every part of this description has not got you screaming at your screen in equal parts terror and bemusement, then you probably haven’t fully grasped how awful this is. 
Don’t use it, even for just storing certificates — it’s tampering resistance is if anything even worse than the encryption. JCE Key Store (JCEKS) Sun later updated the cryptographic capabilities of the JVM with the Java Cryptography Extensions (JCE). With this they also introduced a new proprietary key store format: JCEKS. JCEKS uses “PBEWithMD5AndTripleDES” to encrypt individual key entries, with a 64-bit random salt and 200,000 iterations of PBKDF1 to derive the key. TripleDES is used with 3 keys (“encrypt-decrypt-encrypt”) in CBC mode. There is no separate integrity protection of individual keys, which is fine if the archive as a whole is integrity protected, but it means that access control is effectively at the level of the whole keystore. This is not terrible from a crypto point of view, but can definitely be improved—neither MD5 nor TripleDES are considered secure any more, and it’s been a long time since anyone recommended them for new projects. However, it would also not be a trivial effort to break it. JCEKS uses the same ridiculous “Mighty Aphrodite” prefix-keyed hash as JKS for integrity protection of the entire archive. It is probably best to assume that there is no serious integrity protection of either of these key stores. Edit: I’ve since noticed that the OpenJDK version of the underlying KeyProtector class defaults to only 20 iterations of PBKDF1! This is extremely low—NIST recommends at least 10,000 iterations and even that is quite weak by modern standards. Edit 2: It has been pointed out to me on Twitter that I was looking at the wrong OpenJDK source, and the jdk8u sources do contain the higher 200,000 default iteration count. This appears to have been fixed in July 2017 for both Oracle JDK and OpenJDK as CVE-2017-10356. So JCEKS key derivation was completely pathetic until really very recently. This is quite shocking. Worse than that: upgrading your JDK won’t upgrade your keystores. 
You need to re-generate all JCEKS keystores to ensure the higher default iteration count is picked up, otherwise you are relying on just 20 iterations of PBKDF1, which is essentially nothing. If you are not using very strong random passwords (i.e., 128-bits or so of entropy) then you should do this as a matter of urgency, and ideally move to a different key store format too (see below). You are looking at very old code from OpenJDK jdk8 forest. Clone the full jdk8u/jdk8u forest, which corresponds to latest JDK 8 updates, and you won’t find that difference. — Dalibor Topic (@robilad) February 9, 2018 (NB: tweet has since been deleted). PKCS#12 Apart from these proprietary key stores, Java also supports “standard” PKCS#12 format key stores. The reason for the scare quotes around “standard” is that while it is indeed a standard format, it is a very flexible one, and so in practice there are significant differences between what “key bag” formats and encryption algorithms are supported by different software. For instance, when you store symmetric SecretKey objects in a PKCS#12 key store from Java, then OpenSSL cannot read them as they use a bag type (“secretBag” – OID 1.2.840.113549.1.12.10.1.5) that it does not understand. Java uses version 3 of the PKCS#12 standard format. It stores secret keys in the aforementioned “secretBag” format, and asymmetric private keys in “PKCS#8 Shrouded Key Bag” format (OID 1.2.840.113549.1.12.10.1.2). This just dictates the format of bytes on the disk. In both cases the actual key material is encrypted using some form of password-based encryption (PBE) mode. By default this is “PBEWithSHA1AndDESede” — “DESede” is another name for TripleDES in encrypt-decrypt-encrypt mode, so this is pretty similar to the mode used by JCEKS apart from using a slightly better (but still deprecated) hash in the form of SHA-1. By default this uses a 160-bit salt and 50,000 iterations. 
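Before re-generating anything, it can be useful to see which keystore types the JRE at hand actually supports. This sketch (class name mine) walks the registered security providers and lists every KeyStore service they offer:

```java
import java.security.Provider;
import java.security.Security;
import java.util.ArrayList;
import java.util.List;

public class ListKeyStores {
    // Collect the names of every KeyStore implementation registered
    // with the current JRE's security providers.
    public static List<String> keyStoreTypes() {
        List<String> types = new ArrayList<>();
        for (Provider p : Security.getProviders()) {
            for (Provider.Service s : p.getServices()) {
                if ("KeyStore".equals(s.getType())) {
                    types.add(s.getAlgorithm());
                }
            }
        }
        return types;
    }

    public static void main(String[] args) {
        // On a stock JDK this typically includes JKS, JCEKS and PKCS12.
        keyStoreTypes().forEach(System.out::println);
    }
}
```

The same Provider.Service enumeration works for any other service type (Cipher, Signature, etc.), so it is a handy way to audit what your runtime can and cannot do.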
But, there is an important improvement in the PKCS#12 implementation—you get to choose the encryption algorithm! By passing in a PasswordProtection parameter (from Java 8 onwards) when saving a key you can specify a particular (password-based) cipher to use. I haven’t checked exactly what ciphers are allowed, but you can at least specify a stronger PBE mode, such as “PBEWithHmacSHA512AndAES_256”, which will derive a 256-bit AES key using salted PBKDF2 and then encrypt the stored key using AES/CBC/PKCS5Padding with that key. You can also increase the number of iterations of PBKDF2 used. For example:

Update (Dec 2018): If you run the following code on JDK 8 then the resulting keystore cannot be opened on JDK 11 and vice-versa. See this JDK bug for background and work on a resolution/workaround. It appears the code to generate PKCS#8 encrypted private keys was broken in JDK 8 and has been fixed in a non-backwards-compatible way.

import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.KeyStore.PasswordProtection;
import java.security.KeyStore.SecretKeyEntry;
import java.security.SecureRandom;
import javax.crypto.SecretKey;
import javax.crypto.spec.PBEParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class scratch {
    public static void main(String... args) throws Exception {
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        keyStore.load(null, null); // Initialize a blank keystore

        SecretKey key = new SecretKeySpec(new byte[32], "AES");
        char[] password = "changeit".toCharArray();
        byte[] salt = new byte[20];
        new SecureRandom().nextBytes(salt);

        keyStore.setEntry("test", new SecretKeyEntry(key),
                new PasswordProtection(password, "PBEWithHmacSHA512AndAES_128",
                        new PBEParameterSpec(salt, 100_000)));
        keyStore.store(new FileOutputStream("/tmp/keystore.p12"), password);
    }
}

Note that despite the inclusion of “HmacSHA512” in the above PBE mode that only applies to the key derivation from the password.
There is no integrity protection at the level of individual entries. It is also worth noting that the keystore and individual key passwords should be the same. I don’t think this is a fundamental limitation of PKCS#12 in Java, but certainly standard Java tools like the command line “keytool” utility will fail to handle PKCS#12 keystores with different passwords used for the store vs individual keys. If you don’t need to use those tools then you might be able to get away with different passwords for each key. In contrast to the previous entries, the PKCS#12 key store format does actually encrypt certificates too. It does this with a hard-coded algorithm “PBEWithSHA1AndRC2_40”. This uses 50,000 rounds of salted PBKDF1 to derive a 40-bit key for RC2 encryption. RC2 is an old stream cipher that I certainly wouldn’t recommend. The 40-bit key is far too small to provide any serious security. It makes me wonder why bother applying 50,000 rounds of PBKDF1 to protect the password while generating a key that is itself vulnerable to brute-force. It is probably actually faster to brute force the derived key than the original password. I can only assume it is maintaining compatibility with some decision taken way back in the depths of time that everyone involved now deeply regrets. The integrity of the overall PKCS#12 key store is protected with “HmacPBESHA1”. This is HMAC-SHA1 using a key derived from the store password using 100,000 iterations of salted PBKDF2-HMAC-SHA1. This is all hard-coded so cannot be changed. This is an ok choice, although it would be nice to be able to use something other than SHA-1 here, as it appears that PKCS#12 allows other MACs to be used. For HMAC usage, SHA-1 is still just about ok for now, but it would be better to remove it. It would also be nice to be able to tune the iteration count. Overall, the PKCS#12 key store is considerably better than either of the Sun-designed proprietary options. 
If you specify your own PasswordProtection instances with AES and SHA2 and use high iteration counts and good random salts, then it’s actually a pretty solid design even by modern standards. The only really ugly part is the 40-bit RC2 encryption of trusted certificates, but if you do not care about the confidentiality of certificates then we can overlook that detail and just consider them lightly obfuscated. At least the use of HMAC-SHA1 is a decent integrity protection at last. PKCS#11 There’s not much to say about PKCS#11. It is a standard interface, intended for use with hardware security tokens of various kinds: in particular Hardware Security Modules (HSMs). These range from 50 Euro USB sticks up to network-attached behemoths that cost tens or hundreds of thousands of dollars. The hardware is usually proprietary and closed, so it’s hard to say exactly how your keys will be stored. Generally, though, there are significant protections against access to keys from either remote attackers or even those with physical access to the hardware and a lot of time on their hands. This isn’t a guarantee of security, as there are lots of ways that keys might accidentally leak from the hardware, as the recent ROCA vulnerability in Infineon hardware demonstrated. Still, a well-tested HSM is probably a pretty secure option for high-value keys. I won’t go into the details of how to set up a PKCS#11 key store, as it really varies from vendor to vendor. As for PKCS#12, while the interface is standardised there is enormous room for variation within that standard. In most cases you would let the HSM generate keys in the secure hardware and never export the private key material (except perhaps for backup). Summary Use a HSM or a PKCS#12 keystore, and specify manual PasswordProtection arguments when storing keys. Avoid the proprietary key stores. Alternatively, farm out key management to somebody else and use a Key Management System (KMS) like Hashicorp Vault.
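To make the summary concrete, here is a minimal round-trip sketch (class name mine) of the recommended PKCS#12 route: store a secret key, serialize the keystore to memory, reload it, and check the key material survives. It deliberately uses the default protection algorithm for brevity; in real use you would pass the stronger PasswordProtection parameters shown earlier.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.Key;
import java.security.KeyStore;
import java.util.Arrays;
import javax.crypto.spec.SecretKeySpec;

public class P12RoundTrip {
    public static boolean roundTrip() throws Exception {
        char[] pw = "changeit".toCharArray();

        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, null); // initialize a fresh, empty store

        SecretKeySpec original = new SecretKeySpec(new byte[16], "AES");
        ks.setEntry("test", new KeyStore.SecretKeyEntry(original),
                new KeyStore.PasswordProtection(pw));

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, pw); // serialize the whole keystore to memory

        KeyStore reloaded = KeyStore.getInstance("PKCS12");
        reloaded.load(new ByteArrayInputStream(out.toByteArray()), pw);
        Key back = reloaded.getKey("test", pw);

        // Compare raw key material rather than relying on object equality.
        return Arrays.equals(back.getEncoded(), original.getEncoded());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

Note the keystore password and the per-entry password are the same here, which, as discussed above, is what the standard tooling expects for PKCS#12.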
https://neilmadden.blog/2017/11/17/java-keystores-the-gory-details/
#include <MP_constraint.hpp>

Inheritance diagram for flopc::MP_constraint:

This is one of the main public interface classes. It is always constructed through operator overloading between expressions, constants, and variables. There are many 'friend' overloaded operators to do the construction. The basic idea is to make the constraint look like a paper-model constraint in C++ code. Once constructed, it should be added to the model. The snippet below is an overly simplistic example, but is ok for illustration.

MP_model aModel;        // your model
MP_set I;               // the set the constraint is defined over.
MP_variable x(I);       // your variable
...
MP_constraint cons(I);  // construct the right number of constraints.
cons = x <= 3;          // Assign in the semantic rep to it.
aModel.add(cons);       // add it to the model

There is quite a bit of C++ machinery going on there.

Definition at line 207 of file MP_constraint.hpp.

construct the MP_constraint with appropriate sets for indexing.
Definition at line 234 of file MP_constraint.hpp.
Definition at line 229 of file MP_constraint.hpp.
References flopc::RowMajor::f(), I1, I2, I3, I4, I5, and offset.
Definition at line 242 of file MP_constraint.hpp.
Definition at line 253 of file MP_constraint.hpp.
Definition at line 254 of file MP_constraint.hpp.
Referenced by operator int().
Definition at line 256 of file MP_constraint.hpp.
Definition at line 258 of file MP_constraint.hpp.
Referenced by such_that().
http://www.coin-or.org/Doxygen/Smi/classflopc_1_1_m_p__constraint.html
Opened 8 years ago
Closed 7 years ago
Last modified 5 years ago

#6064 closed Uncategorized (fixed): Allow database connection initialization commands

Attachments (4)

Change History (32)

comment:1 Changed 8 years ago by jacob
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Accepted

comment:2 Changed 8 years ago by telenieko
- Cc telenieko@… added

comment:3 Changed 8 years ago by jacob

comment:4 Changed 8 years ago by floguy
- Cc floguy@… added
- Has patch set
- Needs tests set

Changed 8 years ago by floguy
Patch to add CONNECTION_INIT_SQL setting and use it on connection initialization.

comment:5 Changed 8 years ago by floguy
- Owner changed from nobody to floguy
- Status changed from new to assigned

comment:6 Changed 8 years ago by Simon G <dev@…>
- Needs tests unset
- Triage Stage changed from Accepted to Ready for checkin

comment:7 Changed 8 years ago by jacob
- Needs tests set
- Patch needs improvement set
- Triage Stage changed from Ready for checkin to Accepted

Looking at this, I *REALLY* don't like having SQL in the settings file. So I think that a connection-created signal is the right way to do this, not a setting.

comment:8 Changed 8 years ago by floguy

Changed 8 years ago by floguy
Changed the implementation method to emit a signal instead of execute a tuple from the settings.

comment:9 Changed 8 years ago by anonymous
- Needs tests unset
- Patch needs improvement unset

comment:10 Changed 8 years ago by jshaffer
- Cc jshaffer added

comment:11 Changed 8 years ago by anonymous
- Cc sam@… added

comment:12 Changed 8 years ago by MariusBoo
- Cc feteanumarius@… added

For anyone looking for a quick and dirty solution you can define your table like this: db_table = '"django"."company"'. This will fool the quote function to think that your table is properly quoted. This also means that you have to specify the schema for each table manually.
comment:13 Changed 8 years ago by jfsimon_fr
- Cc contact@… added

cc'ing me too (hello !)

comment:14 Changed 8 years ago by cogat
- Cc greg@… added

comment:15 Changed 8 years ago by dan90
- Cc dan90 added

Changed 8 years ago by mdh

comment:16 Changed 8 years ago by mdh.

comment:17 Changed 7 years ago by euphoria
- Cc michael.greene@… added

comment:18 Changed 7 years ago by mattrussell
- Cc matthew.russell@… added

cc'ing

comment:19 Changed 7 years ago by jbronn
- milestone set to 1.1

Changed 7 years ago by jbronn

comment:20 Changed 7 years ago by jbronn
- Resolution set to fixed
- Status changed from assigned to closed

comment:21 Changed 7 years ago by hank.gay@…

Does this address the case where I'd like to do some work with the connection? It seems like the sender should be the newly created connection, not the class.

comment:22 Changed 7 years ago by anonymous
- Cc dan90 removed

comment:23 Changed 7 years ago by hank.gay@…
- Cc hank.gay@… added
- Resolution fixed deleted
- Status changed from closed to reopened

I am reopening this ticket because the accepted patch does not appear to address the same issue as the original patch. The original patch performed initialization work on each connection as it was established. The accepted patch fires a signal when a connection has been created, but the sender is the class, not the newly created connection, so how can the desired initialization be performed? If there is a consensus that the sender should be changed to the new connection, I am happy to submit a patch that does that.

comment:24 Changed 7 years ago by euphoria
- Resolution set to fixed
- Status changed from reopened to closed

Please ask questions like this on an appropriate mailing list (django-user in this case) instead of resurrecting old tickets.
def set_schema(sender, **kwargs): from django.db import connection cursor = connection.cursor() cursor.execute("INITIALIZE SOME THINGS VIA SQL") if django.VERSION >= (1,1): from django.db.backends.signals import connection_created connection_created.connect(set_schema) comment:25 Changed 6 years ago by tback i never found out where at what place i had to register the signal handler. the best solution i found for the schema problem was to set a connect_query in pgbouncer. comment:26 Changed 6 years ago by drdee - Cc dvanliere@… added comment:27 Changed 5 years ago by brillgen - Cc dev@… added - Easy pickings unset - Severity set to Normal - Type set to Uncategorized comment:28 Changed 5 years ago by jacob - milestone 1.1 deleted Milestone 1.1 deleted cc'ing me
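The debate above is about whether a receiver can get at the connection that was just opened. The signal-plus-receiver contract argued for in comment:7 and comment:23 can be sketched in plain Python, independent of any Django version, to show why passing the new connection to the receiver matters. This toy Signal class is an illustration only, not Django's dispatcher; the "conn-1" value stands in for a real connection object.

```python
# A toy signal dispatcher, sketched in plain Python to illustrate the
# design debated in this ticket: the receiver gets the *newly created
# connection* passed explicitly, instead of reaching for a global.
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        # Register a callable to be invoked on every send().
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Call every registered receiver with the sender and any
        # extra keyword arguments (here, the new connection).
        for receiver in self._receivers:
            receiver(sender, **kwargs)

connection_created = Signal()
initialized = []

def init_connection(sender, connection, **kwargs):
    # Per-connection setup would go here (e.g. executing a
    # "SET search_path ..." statement against `connection`).
    initialized.append(connection)

connection_created.connect(init_connection)

# The database backend would fire this once per new connection:
connection_created.send(sender="BackendClass", connection="conn-1")
print(initialized)  # → ['conn-1']
```

Because the receiver is handed the connection itself, the initialization runs exactly once per newly opened connection, which is the behavior the original CONNECTION_INIT_SQL patch provided.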
https://code.djangoproject.com/ticket/6064
public class Box<T> {
    // T stands for "Type"
    protected T t;

    public void add(T t) {
        this.t = t;
    }

    public T get() {
        return t;
    }
}

As you can see, the type looks like a parameter to a function. You are allowed to pass multiple types to a template. For example, a hash table would have types for both the key and the value objects. Here is a declaration of a variable of type Box, and its use:

class Test {
    static public void main(String args[]) {
        Box<String> b = new Box<String>();
        b.add(args[0]);
        String myArg = b.get();
        System.out.println(myArg);
    }
}

Box<String> b = new Box<>();

class Foo {
    LinkedList<Integer> a = new LinkedList();

    Foo() {
        a.add(3);
        a.add(6);
        int x = a.get(0) + a.get(1);
    }
}

UNIX> javac Foo.java
Note: Foo.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

If you recompile with -Xlint then the Java compiler will be forthcoming about the fact that you omitted the diamond operator when creating the LinkedList object.

public <V> void print(V data[]) {
    for (V val : data) {
        System.out.println(val);
    }
}

Notice that there is a <V> in front of the void and after the public keywords. If I had instead used T, I would not have been required to use a leading <T>. When I invoke a generic method, I may or may not have to prefix it with the type of the object I am passing in:

Box b;
b.<String>print(args);  // always works
b.print(args)           // usually works

If you do not prefix the method call with the type of the object you are passing as an argument, then the Java compiler will attempt to use type inference to determine the type of the parameter. Usually this will be successful. If the Java compiler cannot determine the type and gives you an error message, then you will have to explicitly prefix the method with the type of the object you are passing to it.

public <V extends Comparator<V>> void sort(V data[]) {...}

List<Number> myList = new List<Integer>();

This code looks like it should compile since Integer is a subclass of Number. However, the Java compiler complains and says that List<Integer> is not a subclass of List<Number>. The reason is that myList should be able to store any type of number, such as a floating point number, but by assigning it a list of integers, you have restricted it to storing only integers. Java considers this an impermissible restriction, even though you should be able to perform any operation on the list of integers that you could on the list of numbers. To get around this restriction, you can use upper bounded wildcards, which indicate that a variable can accept an object that contains any subtype of the upper bound. For example, the following function sums the numbers in a list and can accept any list whose objects are a subtype of Number:

public static double sumOfList(List<? extends Number> list) {
    double s = 0.0;
    for (Number n : list)
        s += n.doubleValue();
    return s;
}

You can also use unbounded wildcards, which is a ? followed by nothing else. In the following example, printList accepts a list of any type of object:

public static void printList(List<?> list) {
    for (Object elem : list)
        System.out.print(elem + " ");
    System.out.println();
}

List<? super Integer> myList;

In my experience lower bounds never come up, while upper bounds do, because of the desire to use subclasses in place of superclasses.

public class Box {
    protected Object t;

    public void add(Object t) {
        this.t = t;
    }

    public Object get() {
        return t;
    }
}

The Java compiler then inserts downcasts into your code to ensure that the objects get converted to the appropriate type before they are used.

Pair<int, char> p = new Pair<>(8, 'a');  // compile-time error

However, the following declaration is legal, because Java will auto-box the 8 into an Integer object and the 'a' into a Character object.

Pair<Integer, Character> p = new Pair<>(8, 'a');

public static <E> void append(List<E> list) {
    E elem = new E();  // compile-time error
    list.add(elem);
}

The reason for this restriction is that type erasure will replace E with its upper bound. Hence rather than creating an instance of E, you will create an instance of its upper bound, which is not what you intended.

List<Integer>[] arrayOfLists = new List<Integer>[2];  // compile-time error

The Java tutorial gives the following example to show why this declaration could prove problematic if it were allowed. The following code works as you expect:

Object[] strings = new String[2];
strings[0] = "hi";   // OK
strings[1] = 100;    // An ArrayStoreException is thrown.

If you try the same thing with a generic list, there would be a problem:

Object[] stringLists = new List<String>[2]; //.

You can work around this problem by using the original "raw type" generics:

List[] arrayOfLists = new List[2];

You can now insert Integers into your lists and downcast them when you remove them.

struct Node {
    void *value;
    struct Node *next;
};

#include <stdio.h>
#include <string.h>

// generic min function
void *min(void *element1, void *element2, int (*compare)(void *, void *))
{
    if (compare(element1, element2) < 0)
        return element1;
    else
        return element2;
}

// stringCompare downcasts its void * arguments to char * and then passes
// them to strcmp for comparison
int stringCompare(void *item1, void *item2)
{
    return strcmp((char *)item1, (char *)item2);
}

int main(int argc, char *argv[])
{
    if (argc != 3) {
        printf("usage: min string1 string2\n");
        return 1;
    }

    // call min to compare the two string arguments and downcast the return
    // value to a char *
    char *minString = (char *)min(argv[1], argv[2], stringCompare);
    printf("min = %s\n", minString);
    return 0;
}
http://web.eecs.utk.edu/~bvz/teaching/cs365Sp12/notes/generic-types.html
Eclipse Community Forums - RDF feed
Eclipse Community Forums
Advice on Constructors With Subclasses
<![CDATA[I am attempting to write some advice that will run only after construction has completely finished, i.e. after an object has been completely initialized. An object can be created as any of the types in an inheritance tree, and thus I would like to advise all of the constructors, but only run the advice once, after the object has been constructed. This is probably easier to describe via an example:

public class A {
    public A() {
        System.out.println("init A");
    }
}

public class B extends A {
    public B() {
        super();
        System.out.println("init B");
    }
}

public class C extends B {
    public C() {
        super();
        System.out.println("init C");
    }
}

after(A item) : execution(A+.new()) && target(item) {
    System.out.println(thisJoinPoint.getSourceLocation());
}

public static void main(String[] args) {
    C c = new C();
    B b = new B();
}

The above code has the following output:

init A
A.java:3
init B
B.java:3
init C
C.java:3
init A
A.java:3
init B
B.java:3

What I am attempting to create is advice that will produce the following:

init A
init B
init C
C.java:3
init A
init B
B.java:3

i.e. only run the advice after the object is completely created. I have tried a few approaches, including !within(A+.new()) and !cflow(execution(A+.new())), but due to the way the compiler inlines the super() calls, these approaches produce the same results as above. Does anybody have any experience with this problem, and/or suggestions on how to solve it?]]>
James Elliott 2008-09-02T23:30:19-00:00
http://www.eclipse.org/forums/feed.php?mode=m&th=188858&basic=1
A string is a data type used to store one or more characters in Python. A string may contain characters in different formats and patterns: it may hold a sentence whose words are separated by spaces, or values separated by delimiter characters like commas. The string content can be split according to these patterns and delimiters into the list type.

Convert String To List with String split() Method

The Python string type provides the split() method, which is used to split a string into a list according to the provided splitter or delimiter. The syntax of the split() method is like below.

STRING.split(DELIMITER)

- STRING is the string variable or string value that will be split.
- DELIMITER is the delimiter character that will split the string. The DELIMITER character is generally a space, comma, etc.

The split() method returns a list, which can be printed to the screen or assigned to a variable. In the following example we will use a sentence as a string, where spaces are natural separators. We will use a single space as the separator and convert the string to a list.

sentence = "I like PythonTect"
list = sentence.split(" ")
print(list)

The output is like below.

['I', 'like', 'PythonTect']

The split() method can also be called on a string value or string literal without defining any variable. The usage is the same as with a string variable.

list = "I like PythonTect".split(" ")
print(list)

Even though the single space is a popular delimiter, we can also use different delimiters, like the comma. In the following example we will split the string into a list according to the comma delimiter.

sentence = "I,like,PythonTect"
list = sentence.split(",")
print(list)

['I', 'like', 'PythonTect']

Convert String To List Character By Character

A string is a sequence that may contain zero or more characters. Slice assignment into an empty list can be used to convert a string into a character list. Every character in the string is converted into a list item and put into the list.
sentence = "I like PythonTect"
list = []
list[:0] = sentence
print(list)

['I', ' ', 'l', 'i', 'k', 'e', ' ', 'P', 'y', 't', 'h', 'o', 'n', 'T', 'e', 'c', 't']

Convert String Type List Representation Into List

Another case for converting a string into a list is where the string is actually a list definition, but in string format. The string contains the brackets, commas to delimit items, etc. The string is parsed in Python and converted into a list, which is assigned to a variable that will be a list.

a = "[1,2,3,4]"
print(type(a))
list = a.strip('][').split(',')
print(type(list))

<class 'str'>
<class 'list'>

There is an alternative method for converting a string that contains a list definition. The literal_eval() method can be used to evaluate a given string as a Python literal and return the evaluated value. The literal_eval() method is provided by the ast module.

import ast

a = "[1,2,3,4]"
print(type(a))
list = ast.literal_eval(a)
print(type(list))
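When the bracketed string also happens to be valid JSON, the standard library's json module offers a third route. This sketch is not from the tutorial above; it is an additional alternative, and unlike the strip/split approach it yields integer items rather than strings.

```python
# Alternative sketch: parsing a list-like string with the standard
# library's json module (works when the string is valid JSON).
import json

a = "[1,2,3,4]"
parsed = json.loads(a)
print(type(parsed))   # <class 'list'>
print(parsed)         # [1, 2, 3, 4]
```

Note that json.loads() is stricter than ast.literal_eval(): it rejects Python-only literals such as tuples or single-quoted strings, so it is only a fit when the input is genuinely JSON.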
https://pythontect.com/convert-string-to-list-in-python/
Building A Simple Search Interface

Using zope.formlib to build a simple search interface for Plone.

Understanding Formlib

zope.formlib is a Zope 3 package designed to ease the development of web-based forms in your Zope applications. In its simplest form you can compare what it does with the auto-generated displays Archetypes provides you for viewing a content type (base_view) and editing a content type (base_edit). For all practical purposes, formlib-based components are really regular Zope view components with some convenient base classes for auto-generating output based on schemas and other configuration info. Thankfully, beginning with Zope 2.9.3, zope.formlib is distributed with Zope 2. Of course, Five >= 1.4 is required to make use of this Zope 3 package.

Defining Our First Form

For purposes of this writing we will construct a very simple search form for searching Plone content. This form will be similar to Plone's built-in advanced search form, but much simpler. You can view the working source code of these examples at the updated collective svn browser and updated collective svn repository locations.

The Form Class

We begin by creating a new file, browser.py, which will need to live in ploneexample.formlib/ploneexample/formlib/. The browser.py file will comprise the bulk of the necessary work. Let's start by adding the necessary imports.

from zope import interface, schema
from zope.formlib import form
from Products.CMFCore import utils as cmfutils
from Products.Five.browser import pagetemplatefile
from Products.Five.formlib import formbase

Next we'll construct our first Zope 3 interface:

class ISearch(interface.Interface):
    text = schema.TextLine(title=u'Search Text',
                           description=u'The text to search for',
                           required=False)
    description = schema.TextLine(title=u'Description',
                                  required=False)

The purpose of the interface in this case is not to describe a particular content object, but instead to define the fields that formlib will use.
Later on we'll discover how traditional interfaces used to describe actual content classes can be used in combination with formlib to autogenerate proper add and edit forms for content. And now for the form view class itself. We will start with the first part of the class definition.

class SearchForm(formbase.PageForm):
    form_fields = form.FormFields(ISearch)
    result_template = pagetemplatefile.ZopeTwoPageTemplateFile('search-results.pt')

We use the PageForm class as our superclass to inherit functionality from formlib itself. By default, PageForm knows how to generate all the HTML that will make up our finished form. But in order to do this, formlib needs to know what fields we want. We do this by providing the form_fields attribute. FormFields is a formlib helper class that generates the appropriate field items from any Zope 3 schema (in this case, the schema interface we just defined). The result_template attribute defines a new page template that we will use to iterate over all of the results of our search. Next we define an action for our form:

    @form.action("search")
    def action_search(self, action, data):
        catalog = cmfutils.getToolByName(self.context, 'portal_catalog')

        kwargs = {}
        if data['text']:
            kwargs['SearchableText'] = data['text']
        if data['description']:
            kwargs['description'] = data['description']

        self.search_results = catalog(**kwargs)
        self.search_results_count = len(self.search_results)

        return self.result_template()

This is where the real work takes place. A formlib action is generally a handler that will somehow get invoked by submitting an HTML form. In this case we create a new action labeled "search" that will handle the event when a user hits the search button. Our formlib-based class will automatically understand how to hook a search button into the HTML form itself. This particular action handler will return our result template as a result.
The Result Page Template

In order to display the results of our search form we need to set up a simple page template. We will name this template search-results.pt. Most of the template is pretty uninteresting, but for purposes of this writing we will demonstrate the result-printing portion.

<tal:block tal:
  <div class="single-result">
    <h4>
      <span tal:</span>.
      <a tal:</a>
    </h4>
    <p tal:</p>
  </div>
</tal:block>

Since our previous formlib-based class was a regular view, it gets treated that way inside the page templates, and we are able to assign simple attributes to our view that can get picked up within the template.

Tying It All Together With ZCML

Now that we've defined the form class and the result page template to go along with it, we need to glue this all into Zope. We do this in configure.zcml, so we need to add the appropriate ZCML snippet:

<browser:page

Keen readers will notice the special name for configuring the new view component, browser:page. This XML tag actually employs an XML namespace prefix which needs to be defined. Normally this is added right onto the configure tag like this:

<configure xmlns="" xmlns:

Double-check the configure.zcml file if there are any doubts about the configuration. Again, since formlib is all based on regular Zope 3 view components, we register them the same way in the ZCML. For those of you unfamiliar with Zope 3 view components, these particular snippets basically mean that the search.html view will be available directly from the Plone site, so the URL would look something like this:

- Search Results

Our First zope.formlib Example

In Summary

The example demonstrated here shows the simplest form that could be created with formlib and how to hook in a simple action. It should be obvious from this example how you could use formlib to replace simple CMFFormController-based logic. Of course formlib can do many other advanced things, such as providing sub-form functionality and autogenerated add and edit forms for content classes.
The bottom line is that zope.formlib is ready for use inside Plone today. And since formlib is so easy to work with, the author recommends all Plone application developers give it a try. (original article information source)
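The core idea formlib automates — deriving form widgets from a schema definition — can be illustrated with a tiny, framework-free Python sketch. To be clear, nothing below is zope.formlib API; the field names are simply the two ISearch fields from the example above, and render_form is a hypothetical helper invented for illustration.

```python
# Toy illustration of the schema-to-form idea behind formlib:
# each field description becomes an HTML input. This is NOT
# zope.formlib API, just a framework-free sketch of the concept.
schema = [
    {"name": "text", "title": "Search Text", "required": False},
    {"name": "description", "title": "Description", "required": False},
]

def render_form(fields):
    # Build one labeled <input> per schema field, the way a form
    # library derives widgets from field definitions.
    rows = []
    for f in fields:
        rows.append(
            '<label>%s</label><input name="%s" />' % (f["title"], f["name"])
        )
    return "\n".join(rows)

print(render_form(schema))
```

The payoff of this pattern, in formlib as in the sketch, is that the form markup stays in sync with the schema automatically: add a field to the schema and the rendered form picks it up without touching the template.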
http://plone.org/documentation/tutorial/using-zope-formlib-with-plone/building-a-simple-search-interface
Sometimes you may want to establish a transport connection and then exec(S) an existing user program such as cat(C) to process the data as it arrives over the connection. However, most existing programs use read(S) and write(S) to perform character I/O. XTI and TLI do not directly support a read/write interface to a transport provider, but one may be provided using the tirdwr(M) STREAMS module. (This module is present in the kernel by default.) Such a connection can be released with the close(S) system call.

This interface enables an application to issue read and write calls over a transport connection that has been established by the server's call to t_accept(NET). In the following example, a server first pushes tirdwr onto a stream before running cat(C) so that a client can read from or write to it over the transport connection.

#include <stropts.h>
    .
    /*
     * connection requested and accepted
     */
    .
if (ioctl(fd, I_PUSH, "tirdwr") < 0) {
    perror("I_PUSH of tirdwr failed");
    exit(5);
}
close(0);
dup(fd);
execl("/bin/cat", "/bin/cat", 0);
perror("execl of /bin/cat failed");
exit(6);

The server invokes the read/write interface by pushing the tirdwr module onto the stream head associated with the transport endpoint created when the connection was established. For a description of I_PUSH, see streamio(M). With tirdwr in place, the server calls close and dup(S) to establish the transport endpoint as its standard input, and uses cat to process the input.

Because the transport layer is implemented using STREAMS, the facilities of this character I/O mechanism can be used to provide enhanced user services. Note the following limitations on the use of this interface: with tirdwr pushed onto a stream, an application can send and receive data over the transport connection for the duration of the connection. Either end of a connection can terminate it by closing the file descriptor associated with the transport endpoint or by popping the tirdwr module off the stream.

In either case, tirdwr takes the following actions:
http://osr507doc.xinuos.com/en/netguide/TLI_XTI_tirdwr.html
This action might not be possible to undo. Are you sure you want to continue? Evan X. Merz 1 Sonifying Processing: The Beads Tutorial Copyright © 2011 Evan X. Merz. All rights reserved. To download the source code for the examples in this book, visit The Beads Library as well as the associated documentation can be found at 2 Sonifying Processing Contents 1. Introduction … 5 2. Beads Basics … 7 2.1. Unit Generators 2.2. Beads Architecture 2.3. Gain / WavePlayer / Glide 2.3.1. Generating a Sine Wave 2.3.2. Using Gain 2.3.3. Controlling Gain 2.3.4. Controlling Frequency 3. Synthesis … 16 3.1. Using the Mouse 3.2. Additive Synthesis 3.2.1. Two Sine Waves 3.2.2. Many Sine Waves 3.2.3. Additive Synthesis Controlled by a Sketch 3.3. Modulation Synthesis 3.3.1. Frequency Modulation 3.3.2. FM Controlled by the Mouse 3.3.3. Ring Modulation 3.4. Envelopes 3.4.1. Frequency Modulation with an Amplitude Envelope 4. Sampling … 34 4.1. Playing Recorded Audio 4.1.1. Loading and Playing a Sound Using SamplePlayer 4.1.2. Playing Multiple Sounds, Playing in Reverse 4.1.3. Controlling Playback Rate 4.2. More Mouse Input 4.2.1. Getting Mouse Speed and Mean Mouse Speed 4.2.2. Complex Mouse-Driven Multi-Sampler 4.3. Granular Synthesis 4.3.1. Using GranularSamplePlayer 4.3.2. Using GranularSamplePlayer Parameters 5. Effects … 61 5.1. Delay 5.1.1. Delay 5.1.2. Controlling Delay 5.2. Filters 5.2.1. Low-Pass Filter 5.2.2. Low-Pass Resonant Filter with an Envelope 5.2.3. Band-Pass Filter and Other Filter Types 5.3. Other Effects 5.3.1. Panner 3 WaveShaper 6. Miscellaneous … 118 10.3.2. Frequency Analysis 9.1. Granulating from Audio Input 8.3. Custom Beads A.2. Beat Detection 10.3.2.2.3. Analysis … 105 9. Recording and Playing a Sample 7.. Custom Mean Filter A.2.1. Sending MIDI to the Default Device 9.2.4.3. MIDI-Controlled Sine Wave Synthesizer 8.2.1.1.2.1. Using RecordToSample 7. Compressor 5.2.1. Clock Appendix A: Custom Beads … 122 A. Basic MIDI Output 8. Installing The MIDI Bus 8. 
Saving / Rendering Output .1. Getting an Audio Input UGen 7. MIDI-Controlled FM Synthesizer 8.2.5.1.3.4.2. Custom Functions A. Using MIDI … 94 8.1. Fast Fourier Transform 9.1. Basic MIDI Input 8.3. Reverb 5. Using Audio Input … 86 7. Custom Buffer A. Analysis in Beads 9. 82 6.1. Custom WavePlayer 4 Sonifying Processing .3.. PD. and you’re familiar with other music programming environments. How to Use this Tutorial If you’re familiar with Processing. Introduction Recently.1. Processing It is assumed that you already have a passing familiarity with the Processing programming language. Reaktor. After you work through some or all of that material. for me it has never been a compelling tool for music creation. then it’s best to first visit. If you’ve never worked with a music programming environment before (Max. we have Beads!” This tutorial is an introduction to the Beads library for creating music in Processing. Beads is a fantastic tool for sound art creation. Who is this tutorial for? This tutorial is aimed at programmers who are already familiar with Processing. “Processing is a great language for the arts. If you are familiar with other computer music programming paradigms. a professor in the music department at UC Santa Cruz said in a casual conversation. and has been around for much longer than Beads. 1. there are three short introductory examples in the code included with this book. etc) then it’s probably best to start from the beginning and work your way through. This tutorial will also be useful to sound artists who want a music programming paradigm that is more like Java or C than Max or PD (although it still draws on ideas from the latter environments). Additionally. Although the Minim library serves a similar purpose. PD. then just pick an example program and jump right in. Now. bringing together the best qualities of Java with the ease of programming seen in the patcher languages Max and Pure Data. I had to interrupt him. SuperCollider or Nyquist. 
and want to start adding sound into their sketches. SuperCollider. such as Max. Beads is fundamentally a music and sound art library at its core. then you will be immediately comfortable using the objects in Beads. Kyma. Tassman.” Although he is one of the most well-informed professors of my acquaintance.1. I told him that “that’s not true any more. meet me back here! 5 . but it’s much better with visuals than with sound.org/ and work through the tutorials and examples on that site. If you haven’t yet written a few basic programs in Processing. Installing Beads First. 6 Sonifying Processing .3.” If the “libraries” folder doesn’t exist. That list is monitored by myself. To find out where your sketch book is located.2. 1.google. The Beads mailing list is probably the best place to look for help (. click “File” then “Preferences” within the Processing window. and many other Beads contributors.net/) and click the link that says “Beads Library for Processing. if you have any problems along the way. Then there is an option for “Sketchbook Location. The Beads Community Remember. go the Beads website (. then copy the “beads” folder into the “libraries” subfolder within your sketchbook. beads creator Ollie Bown.” Unzip the file.1. then create it.com/group/beadsproject).beadsproject. you can post your question to the Beads community. 2. Even Ada Lovelace recognized the importance of modularity in computation when she envisioned subroutines in the 19th century. then plugs the delay output into his amplifier. he wrote the Music-N languages for creating computer music. we can create complex sound processing routines without having to understand exactly how the underlying processes work. and as a side project. Mathews was working at Bell Labs. Pure Data. Whether you’re working in Max. Reaktor. Unit generators are building blocks of an audio signal chain. Super Collider. Unit generators were not an entirely new concept for Mathews. 
Unit generators were pioneered by Max Mathews in the late 1950s with the Music-N computer music languages. Tassman or even on a standalone synthesizer. Nyquist. A single unit generator encompasses a single function in a synthesis or analysis program. He created software for analyzing phone calls. For instance. but they just refer to them as guitar pedals. 7 . After he turns the knob to the delay time desired. By plugging one functional unit into the next. Then we look at three of the most commonly used Beads objects: Gain. Unit Generators The concept of the Unit Generator is a vital concept to all computer music paradigms of which the author is aware. He was digitizing sound in order to study call clarity in the Bell telephone network. Guitarists are very familiar with the concept of unit generators. WavePlayer and Glide. Audio data is music or sound. Control data is numerical information. then send the results out of their outputs.2. Unit generators can be thought of as subroutines that take input. The input data can be audio data or control data. Everything in the Beads library is encapsulated as a unit generator. he can play his guitar through the delay pedal (unit generator).1. At the time. the concept of unit generators is of paramount importance. on a delay pedal. do something to it. which are referred to as beads in the Beads library. Beads Basics / Unit Generators This chapter introduces the basic concepts that underlie the Beads library. a guitarist plugs his guitar into the input. Beads works as a series of unit generators (guitar pedals). The AudioContext is the mother bead (unit generator). Generally. but before it can do that. ac = new AudioContext(). and manages the audio processing threads. The AudioContext connects to your computer’s hardware. The Glide object is used to send numerical data to other beads. There are a number of important lines to notice in this example. Gain / WavePlayer / Glide In this section. 8 Sonifying Processing . 
The next four lines form the entirety of our sound generation code. import beads. The WavePlayer object is used to generate simple periodic audio signals such as a sine wave. Audio processing can be started by calling the start routine: ac. Generating a Sine Wave (Hello_Sine) The first real Beads example generates a simple sine wave oscillating at 440Hz. First the AudioContext is initialized.start().1. This line gives the rest of the program access to the Beads library. and sends audio data to the speakers. your software must send some audio into the AudioContext. The Gain object controls the volume of an audio signal.*. An AudioContext can be created with a call to the AudioContext constructor. for a Beads program to function.3. It’s necessary to include this line in any beads program. It is the bead in which all of the other beads exist. But the AudioContext on its own doesn’t create any sound.3. Beads Architecture The core of the Beads library is the AudioContext object. you must create an AudioContext at some point. allocates memory for audio processing.2. The constructor creates a connection to the computer’s audio hardware. 2. This connects to your system’s audio hardware.2. 2. I’m going to introduce some basic objects that will be used in most Beads programs. Then the program instantiates a WavePlayer object. 440. In this case.SAW and Buffer.1. WavePlayers are used to generate periodic waveforms such as a sine wave. as we will see shortly. WavePlayer wp = new WavePlayer(ac.out. we begin processing audio. void setup() { size(400. ac. The third parameter is the type of periodic signal that should be generated.pde // import the beads library import beads. we call the addInput function to connect wp. In this case we’re generating a sine. which will continue until the program is terminated by clicking the close button.start().net/doc/net/beadsproject/beads/data/Buffer. Then the WavePlayer is connected to the AudioContext. the main output. 
The first parameter is the parent AudioContext object.SQUARE.out. The next parameter is the frequency. ac.addInput(wp). 300). Buffer. Finally.NOISE (more at. with ac. The WavePlayer constructor takes three parameters. but other options are Buffer.pde // Hello_Sine.SINE). the WavePlayer object.ac = new AudioContext(). 9 . or the stop button within the Processing window.beadsproject. Buffer. The addInput routine will be called any time that we connect two unit generators. Code Listing 2.html). or the unit generator that will control the frequency. // initialize our AudioContext ac = new AudioContext().*. // create our AudioContext AudioContext ac. Hello_Sine.3. This will usually be 1.// create a WavePlayer // WavePlayer objects generate a waveform WavePlayer wp = new WavePlayer(ac.2). // initialize our AudioContext 10 Sonifying Processing .pde // Hello_Gain. 0. // create our AudioContext AudioContext ac. Using Gain (Hello_Gain) In this section we’re going to use the Gain object for the first time. // start audio processing ac. ac. The third parameter is the starting value of the Gain object.3.*.SINE).addInput(wp). and the Gain to the AudioContext. 1. // connect the WavePlayer to the AudioContext ac.pde // import the beads library import beads. 440.start(). Then we connect the WavePlayer to the Gain.3. Buffer. Code Listing 2.2.2.out. The first parameter is the master AudioContext for the program. } 2. g. The second parameter is the number of inputs and outputs for the gain object.out. Gain objects are instantiated with a constructor that has three parameters.addInput(g). 300). void setup() { size(400.2 or 20%. Hello_Gain. The Gain object can be inserted between a sound generating object and the AudioContext in order to control the volume of that object. In this case the starting value is 0.addInput(wp). Gain g = new Gain(ac. // create a WavePlayer // WavePlayer objects generate a waveform WavePlayer wp = new WavePlayer(ac. 1. 
we create a Gain with 1 input and output // with a fixed volume of 0. The final value is the glide time in milliseconds.addInput(g). Controlling Gain Using Glide (Hello_Glide_01) The Gain object is useful to control the volume of an audio signal. To control an object parameter on the fly. In this case.addInput(wp). gainGlide). The second parameter is the starting value of held by the Glide. 440. You can see in this Gain constructor that in place of a default value. 1. we use the Glide object. The glide time is how long it takes for the Glide to go from one value to another.start(). we can insert the Glide object into the Gain constructor to tell the Gain that it should use the Glide to get its value.3. The Glide object can be thought of as a knob on a guitar pedal or effect box or synthesizer. we have inserted the Glide object. but it’s more useful when it can be controlled dynamically. The Glide constructor has three parameters. // connect the WavePlayer output to the Gain input g.2 (50%) Gain g = new Gain(ac. g = new Gain(ac. } 2. // connect the Gain output to the AudioContext ac. we want to use the Glide to control the Gain value. 0. If the Glide is like a knob.2). the loudness. 50). Rather. To connect the Glide to the Gain object we don’t use the addInput method. The first parameter is the AudioContext.out. This connects the Gain value to the value of the Glide. // start audio processing ac.ac = new AudioContext(). then the glide time how long it takes to turn the knob. You can “turn” the knob by giving it a value on the fly. 11 .SINE). gainGlide = new Glide(ac.0. 0. // create a Gain object // Gain objects set the volume // Here. Buffer.3. in order to demonstrate the Glide (knob) being used. is turned into a percentage by dividing by the width of the window. In this example. 300).pde // import the beads library import beads. wp = new WavePlayer(ac. 1.0 is the initial value contained by the Glide. the position of the mouse along the x-axis. 
Code Listing 2.3.3. Hello_Glide_01.pde

// Hello_Glide_01.pde
// import the beads library
import beads.*;

// create our AudioContext
AudioContext ac;

// declare our unit generators (Beads) since we will need to
// access them throughout the program
WavePlayer wp;
Gain g;
Glide gainGlide;

void setup()
{
  size(400, 300);

  // Initialize our AudioContext.
  ac = new AudioContext();

  // Create a WavePlayer.
  wp = new WavePlayer(ac, 440, Buffer.SINE);

  // Create the Glide object.
  // Glide objects move smoothly from one value to another.
  // 0.0 is the initial value contained by the Glide.
  // It will take 50ms for it to transition to a new value.
  gainGlide = new Glide(ac, 0.0, 50);

  // Create a Gain object.
  // This time, we will attach the gain amount to the glide
  // object created above.
  g = new Gain(ac, 1, gainGlide);

  // connect the WavePlayer output to the Gain input
  g.addInput(wp);

  // connect the Gain output to the AudioContext
  ac.out.addInput(g);

  // start audio processing
  ac.start();

  background(0); // set the background to black
  text("Move the mouse to control the sine gain.", 100, 120);
}

void draw()
{
  // in the draw routine, we will update our gain
  // since this routine is called repeatedly, this will
  // continuously change the volume of the sine wave
  // based on the position of the mouse cursor within the
  // Processing window
  gainGlide.setValue(mouseX / (float)width);
}

2.3.4. Controlling Frequency Using Glide (Hello_Glide_02)

In the final example in this section, we're going to demonstrate the Glide object for a second time. The Glide object can be used to control virtually any parameter of another bead. In this example, we attach it to the frequency of a WavePlayer by inserting it into the WavePlayer constructor where a starting frequency would normally be indicated. In this example, we use the mouse position to control the Glide value, which in turn controls the WavePlayer frequency.

Code Listing 2.3.4. Hello_Glide_02.pde

// Hello_Glide_02.pde
// import the beads library
import beads.*;

// create our AudioContext, which will oversee audio
// input/output
AudioContext ac;

// declare our unit generators (Beads) since we will need to
// access them throughout the program
WavePlayer wp;
Gain g;
Glide gainGlide;
Glide frequencyGlide;

void setup()
{
  size(400, 300);

  // initialize our AudioContext
  ac = new AudioContext();

  // create frequency glide object
  // give it a starting value of 20 (Hz)
  // and a transition time of 50ms
  frequencyGlide = new Glide(ac, 20, 50);

  // create a WavePlayer
  // attach the frequency to frequencyGlide
  wp = new WavePlayer(ac, frequencyGlide, Buffer.SINE);

  // create the gain Glide object
  // 0.0 is the initial value contained by the Glide
  // it will take 50ms for it to transition to a new value
  gainGlide = new Glide(ac, 0.0, 50);

  // create a Gain object
  // this time, we will attach the gain amount to the glide
  // object created above
  g = new Gain(ac, 1, gainGlide);

  // connect the WavePlayer output to the Gain input
  g.addInput(wp);

  // connect the Gain output to the AudioContext
  ac.out.addInput(g);

  // start audio processing
  ac.start();

  background(0); // set the background to black
  text("The mouse X-Position controls volume.", 100, 100);
  text("The mouse Y-Position controls frequency.", 100, 120);
}

void draw()
{
  // update the gain based on the position of the mouse
  // cursor within the Processing window
  gainGlide.setValue(mouseX / (float)width);

  // update the frequency based on the position of the mouse
  // cursor within the Processing window
  frequencyGlide.setValue(mouseY);
}
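Conceptually, a Glide is just a linear ramp toward its target value. The sketch below models that ramp in plain Java; the class and method names are my own, and this is only an illustration of the behavior, not the actual Beads implementation, which recomputes the value once per audio frame.

```java
// Illustrative model of a Glide-style ramp: after setValue() is called,
// the control value moves linearly from its current value to the target
// over the glide time. Not the actual Beads implementation.
public class GlideRamp {
    // value of the ramp `elapsedMs` milliseconds after setValue(),
    // starting from `start` and heading to `target` over `glideMs`
    public static float rampValue(float start, float target,
                                  float glideMs, float elapsedMs) {
        if (elapsedMs >= glideMs) return target;     // ramp finished
        if (elapsedMs <= 0) return start;            // ramp not started
        float fraction = elapsedMs / glideMs;        // 0.0 .. 1.0
        return start + (target - start) * fraction;  // linear interpolation
    }

    public static void main(String[] args) {
        // a 50 ms glide from 0.0 to 0.8, sampled halfway through and after
        System.out.println(rampValue(0.0f, 0.8f, 50, 25));  // 0.4
        System.out.println(rampValue(0.0f, 0.8f, 50, 60));  // 0.8
    }
}
```

Each call to setValue simply restarts this ramp from wherever the value currently is, which is why rapid mouse movement produces smooth rather than stepped changes.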
3. Synthesis

This chapter demonstrates basic synthesis techniques, as implemented using Processing and Beads. Each section briefly reviews the fundamental concepts associated with a particular synthesis technique, then demonstrates those concepts in Processing. While reading this chapter, remember, there is no right way to synthesize sound. Try to think of each synthesis technique as a tool in an arsenal of techniques; these techniques can be recombined in a myriad of interesting ways to generate a limitless variety of timbres.

3.1. Using the Mouse

In this book, the examples will often use the mouse as input. The variables mouseX and mouseY are continuously updated by Processing with the position of the mouse pointer, as long as the cursor stays within the Processing window. If you're an experienced Processing programmer, then you know that this involves two variables: mouseX contains the position of the mouse along the x-axis; mouseY contains the position of the mouse along the y-axis. Often, in order to use a number between 0.0 and 1.0 as a parameter in a bead, we turn the mouse position into a percent. This calculation is relatively straightforward.

float xPercent = mouseX / (float)width;
float yPercent = mouseY / (float)height;

For more information on mouse input see http://processing.org/learning/basics/

3.2. Additive Synthesis

Additive synthesis is any type of sound-generating algorithm that combines sounds to build more complex sounds. Additive synthesis isn't a new concept, or one that is only at work in the domain of electrical sound synthesis. Some form of additive synthesis can be found in virtually any synthesizer.
In fact, the earliest additive instruments are the pipe organs invented in the middle ages. When air is routed through a pipe organ, each pipe in the organ generates a different set of frequencies. By pulling various register stops, the organist can create additive timbres that add the sounds of multiple pipes. This concept was electrified in Thaddeus Cahill's Telharmonium in the early 20th Century, then refined by Hammond in their incredibly successful organs. Today, additive synthesis is everywhere, and although additive techniques have taken a backseat to modulation synthesis techniques, they're still an important part of a synthesist's arsenal. In this section, we're going to build a number of increasingly complex additive synthesizers.

3.2.1. Two Sine Waves Controlled by the Mouse (Additive_01)

In the first example, we build on the patch seen in chapter 2 called Hello_Glide_02. In that example, a Glide object controls the frequency of a WavePlayer object, and the frequency of the Glide object is controlled by the mouse. In this example, two different glide objects are used to control the frequencies of two sine waves. In these two lines, a Glide object is initialized, then mapped to the frequency of a WavePlayer.

frequencyGlide1 = new Glide(ac, 20, 50);
wp1 = new WavePlayer(ac, frequencyGlide1, Buffer.SINE);

Those two lines are repeated for a second Glide object and a second WavePlayer. Then in the draw routine, one WavePlayer is controlled by the x-position of the mouse, while the other is controlled by the y-position.

If you peruse the entire example, you might notice that there is no "Additive" object. The sine waves are actually summed by the Gain object. By simply routing both WavePlayer unit generators into the same Gain, we can combine the signals and create an additive synthesizer.

g.addInput(wp1);
g.addInput(wp2);

Code Listing 3.2.1. Additive_01.pde
// Additive_01.pde
// import the beads library
import beads.*;

// create our AudioContext
AudioContext ac;

// declare our unit generators (Beads) since we will need to
// access them throughout the program
WavePlayer wp1;
Glide frequencyGlide1;
WavePlayer wp2;
Glide frequencyGlide2;
Gain g;

void setup()
{
  size(400, 300);

  // initialize our AudioContext
  ac = new AudioContext();

  // create frequency glide object
  // give it a starting value of 20 (Hz)
  // and a transition time of 50ms
  frequencyGlide1 = new Glide(ac, 20, 50);

  // create a WavePlayer, attach the frequency to
  // frequencyGlide
  wp1 = new WavePlayer(ac, frequencyGlide1, Buffer.SINE);

  // create the second frequency glide and attach it to the
  // frequency of a second sine generator
  frequencyGlide2 = new Glide(ac, 20, 50);
  wp2 = new WavePlayer(ac, frequencyGlide2, Buffer.SINE);

  // create a Gain object to make sure we don't peak
  g = new Gain(ac, 1, 0.5);

  // connect both WavePlayers to the Gain input
  g.addInput(wp1);
  g.addInput(wp2);

  // connect the Gain output to the AudioContext
  ac.out.addInput(g);

  // start audio processing
  ac.start();
}

void draw()
{
  // update the frequency of each sine wave based on the
  // position of the mouse cursor within the Processing window
  frequencyGlide1.setValue(mouseY);
  frequencyGlide2.setValue(mouseX);
}
3.2.2. Many Sine Waves with Fundamental Controlled by the Mouse (Additive_02)

This example is a more typical additive synthesis patch. Rather than combining a number of sine waves at unrelated frequencies, we combine sine waves that are multiples of the lowest frequency. The lowest frequency in an additive tone is called the fundamental frequency. If a tone is an integer multiple of the fundamental frequency then it is called a harmonic or harmonic partial. If a sine is not an integer multiple of the fundamental, then it is called a partial or inharmonic partial. In this example, we sum a sine wave with its first 9 harmonics, and we update the frequency of each sine wave as the mouse moves around the program window.

This example is our first use of arrays of unit generators. In this case there is an array of Glide objects, an array of WavePlayer objects, and an array of Gain objects. Each sine wave in the additive tone requires its own set of unit generators. This presents the reader with the biggest problem with additive synthesis: a synthesis program like this one must have a separate set of unit generators for each component of the output spectrum, so additive synthesis is computationally complex. As we will see in the next section, we can use modulation synthesis to create complex timbres while consuming fewer computer resources.
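The relationship between the fundamental and its harmonics is plain integer multiplication, which can be checked outside of Beads. The sketch below computes the same harmonic stack this patch builds; the class and method names are illustrative only.

```java
// Frequencies of the harmonic stack used in this patch: partial i is
// (i + 1) times the fundamental, so partial 0 is the fundamental itself.
public class HarmonicStack {
    public static float[] harmonicFrequencies(float fundamental, int count) {
        float[] freqs = new float[count];
        for (int i = 0; i < count; i++) {
            freqs[i] = fundamental * (i + 1);  // integer multiple = harmonic
        }
        return freqs;
    }

    public static void main(String[] args) {
        // a 200 Hz fundamental and its first 9 harmonics
        float[] f = harmonicFrequencies(200.0f, 10);
        System.out.println(f[0]);  // 200.0 (the fundamental)
        System.out.println(f[1]);  // 400.0 (the first harmonic above it)
        System.out.println(f[9]);  // 2000.0 (the highest partial in this patch)
    }
}
```

A frequency that is not one of these integer multiples, say 2.5 times the fundamental, would be an inharmonic partial.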
Code Listing 3.2.2. Additive_02.pde

// Additive_02.pde
// this is a more serious additive synthesizer
// understanding this code requires a basic understanding of
// arrays, as they are used in Processing

// import the beads library
import beads.*;

// create our AudioContext
AudioContext ac;

// the frequency of the fundamental (the lowest sine wave in
// the additive tone)
float baseFrequency = 200.0f;

// how many sine waves will be present in our additive tone?
int sineCount = 10;

// declare our unit generators
// notice that with the brackets []
// we are creating arrays of beads
WavePlayer sineTone[];
Glide sineFrequency[];
Gain sineGain[];

// our master gain object (all sine waves will eventually be
// routed here)
Gain masterGain;

void setup()
{
  size(400, 300);

  // initialize our AudioContext
  ac = new AudioContext();

  // set up our master gain object
  masterGain = new Gain(ac, 1, 0.5);
  ac.out.addInput(masterGain);

  // initialize our arrays of objects
  sineFrequency = new Glide[sineCount];
  sineTone = new WavePlayer[sineCount];
  sineGain = new Gain[sineCount];

  float currentGain = 1.0f;
  for( int i = 0; i < sineCount; i++)
  {
    // create the glide that will control this WavePlayer's
    // frequency
    sineFrequency[i] = new Glide(ac, baseFrequency * (i + 1), 30);

    // create the WavePlayer
    sineTone[i] = new WavePlayer(ac, sineFrequency[i], Buffer.SINE);

    // create the gain object
    sineGain[i] = new Gain(ac, 1, currentGain);

    // then connect the waveplayer to the gain
    sineGain[i].addInput(sineTone[i]);

    // finally, connect the gain to the master gain
    masterGain.addInput(sineGain[i]);

    // lower the gain for the next sine in the tone
    currentGain -= (1.0 / (float)sineCount);
  }

  // start audio processing
  ac.start();
}

void draw()
{
  // update the fundamental frequency based on mouse position
  // add 20 to the frequency because below 20Hz is inaudible
  // to humans
  baseFrequency = 20.0f + mouseX;

  // update the frequency of each sine tone
  for( int i = 0; i < sineCount; i++)
  {
    sineFrequency[i].setValue(baseFrequency * (i + 1));
  }
}

3.2.3. Additive Synthesis Controlled by a Processing Sketch (Additive_03)

The final additive synthesis example is similar to the previous example, except we map the fundamental frequency to the location of an on-screen object. If you are adding sound to a Processing sketch, then you will want to map your on-screen objects to sound parameters in some way. In this sketch, we simply control frequency based on object location, but a mapping need not be so direct or so obvious.
Code Listing 3.2.3. Additive_03.pde

// Additive_03.pde
// this is a more serious additive synthesizer
// understanding this code requires a basic understanding of
// arrays, as they are used in Processing

// import the beads library
import beads.*;

// declare our AudioContext
AudioContext ac;

// fundamental frequency
float baseFrequency = 200.0f;

// how many sine waves will be present
int sineCount = 10;

// declare our unit generators
WavePlayer sineTone[];
Glide sineFrequency[];
Gain sineGain[];

// our master gain object
Gain masterGain;

// this is a ball that will bounce around the screen
bouncer b;

void setup()
{
  size(400, 300);

  // initialize our AudioContext
  ac = new AudioContext();

  // initialize our bouncy ball
  b = new bouncer();

  // set up our master gain object
  masterGain = new Gain(ac, 1, 0.5);
  ac.out.addInput(masterGain);

  // initialize our arrays of objects
  sineFrequency = new Glide[sineCount];
  sineTone = new WavePlayer[sineCount];
  sineGain = new Gain[sineCount];

  float currentGain = 1.0f;
  for( int i = 0; i < sineCount; i++)
  {
    // create the glide that will control this WavePlayer's
    // frequency
    sineFrequency[i] = new Glide(ac, baseFrequency * i, 30);

    // create the WavePlayer
    sineTone[i] = new WavePlayer(ac, sineFrequency[i], Buffer.SINE);

    // create the gain object
    sineGain[i] = new Gain(ac, 1, currentGain);

    // then connect the waveplayer to the gain
    sineGain[i].addInput(sineTone[i]);

    // finally, connect the gain to the master gain
    masterGain.addInput(sineGain[i]);

    // lower the gain for the next tone in the additive
    // complex
    currentGain -= (1.0 / (float)sineCount);
  }

  // start audio processing
  ac.start();
}
void draw()
{
  background(0); // fill the background with black

  b.move(); // move the bouncer
  b.draw(); // draw the bouncer

  // update the fundamental frequency based on the position
  // of the bouncer
  baseFrequency = 20.0f + b.x;

  // update the frequency of each sine tone
  for( int i = 0; i < sineCount; i++)
  {
    sineFrequency[i].setValue(baseFrequency * ((float)(i+1) * (b.y/height)));
  }
}

// this class encapsulates a simple circle that will bounce
// around the Processing window
class bouncer
{
  public float x = 10.0;
  public float y = 10.0;
  float xSpeed = 1.0;
  float ySpeed = 1.0;

  void bouncer() {}

  void move()
  {
    x += xSpeed;
    if( x <= 0 ) xSpeed = 1.0;
    else if( x >= width - 10 ) xSpeed = -1.0;

    y += ySpeed;
    if( y <= 0 ) ySpeed = 1.0;
    else if( y >= height - 10 ) ySpeed = -1.0;
  }

  void draw()
  {
    noStroke();
    fill(255);
    ellipse(x, y, 10, 10);
  }
}

3.3. Modulation Synthesis

In modulation synthesis, a signal called the modulator is used to effect another signal, called the carrier. The modulator might control the amplitude, frequency or filter frequency of the carrier signal. For example, if the modulator is used to control the amplitude of the carrier signal, and the modulator is a sine wave oscillating at 2 Hertz (2 oscillations per second), then the amplitude of the carrier signal will rise and fall twice in a second. This is a very familiar effect: when amplitude is varied at a subaudible rate (under 20Hz), we call this effect tremolo; if the frequency is varied at a subaudible frequency, then we call the effect vibrato.

When the frequency of the modulator rises into the audible range, above 20Hz, then new frequencies are added to the carrier signal. These frequencies are called sidebands, and they have different characteristics based on the type of modulation synthesis. These sidebands are what make modulation synthesis so powerful: with a modulation synthesizer, we can create interesting broad spectrum sounds with a small number of source signals.

In this chapter, we are going to demonstrate how to construct modulation synthesis modules in Beads. We will construct both frequency modulation and amplitude modulation synthesizers, and in the process introduce the concept of custom functions in Beads, one of the most valuable and versatile tools provided by the Beads library.
3.3.1. Frequency Modulation (Frequency_Modulation_01)

In the first modulation synthesis example, a simple frequency modulation synthesizer is constructed. It is not interactive, and only generates the sound of a frequency modulation tone with a carrier sine wave oscillating at 200Hz and a modulator sine wave oscillating at 40Hz.

This example is the first situation where we need to extend beyond the standard unit generators provided by the Beads library.* In this case we're going to use a custom function. By using a custom function, we can build a simple unit generator on the fly, using whatever other unit generators are provided in the function declaration. To build a custom function, we simply need to declare it, then override the calculate routine. In this case, we pass in the modulator unit generator, a WavePlayer that generates a sine wave at 40Hz. The calculate function calculates the output of the new unit generator. To use the value of the modulator unit generator in a calculation, we simply reference x[0]. If we passed multiple unit generators into a custom function, then they would be accessed via x[1], x[2], x[3] and so on. Here is the code for our frequency modulation custom function.

Function frequencyModulation = new Function(modulator)
{
  public float calculate()
  {
    // return x[0], which is the original value of the
    // modulator signal (a sine wave)
    // multiplied by 50 to make the sine
    // vary between -50 and 50
    // then add 200, so that it varies from 150 to 250
    return (x[0] * 50.0) + 200.0;
  }
};

After building our custom unit generator, we can use it in our program. In this program, we want to use it to control the frequency of a WavePlayer object. This is accomplished by using it in the WavePlayer constructor in the place where we might normally indicate a frequency.

carrier = new WavePlayer(ac, frequencyModulation, Buffer.SINE);

* In truth, this could be accomplished using pre-packaged unit generators.

Code Listing 3.3.1. Frequency_Modulation_01.pde

// Frequency_Modulation_01.pde
import beads.*; // import the beads library

AudioContext ac; // create our AudioContext

// declare our unit generators
WavePlayer modulator;
WavePlayer carrier;
Gain g;

void setup()
{
  size(400, 300);

  // initialize our AudioContext
  ac = new AudioContext();

  // create the modulator, this WavePlayer will control
  // the frequency of the carrier
  modulator = new WavePlayer(ac, 40, Buffer.SINE);
  // This is a custom function
  // Custom functions are simple custom Unit Generators.
  // Generally, they only override the calculate function.
  Function frequencyModulation = new Function(modulator)
  {
    public float calculate()
    {
      // return x[0], which is the original value of
      // the modulator signal (a sine wave)
      // multiplied by 50 to make the sine vary
      // between -50 and 50
      // then add 200, so that it varies from 150 to 250
      return (x[0] * 50.0) + 200.0;
    }
  };

  // create a second WavePlayer,
  // control the frequency with the function created above
  carrier = new WavePlayer(ac, frequencyModulation, Buffer.SINE);

  // create a Gain object to make sure we don't peak
  g = new Gain(ac, 1, 0.5);

  // connect the carrier to the Gain input
  g.addInput(carrier);

  // connect the Gain output to the AudioContext
  ac.out.addInput(g);

  // start audio processing
  ac.start();
}

3.3.2. Frequency Modulation Controlled by the Mouse (Frequency_Modulation_02)

The second frequency modulation example is similar to the first, except we control the frequencies of the carrier and the modulator using the position of the mouse cursor. The modulator frequency is mapped to the mouse position along the x-axis: it is controlled by a Glide and updated continuously in the draw routine. The frequency of the carrier is controlled within the frequency modulation function by the position along the y-axis. This time, the custom function multiplies the modulator by 200; this multiplier is called the modulation index, and the higher the modulation index, the louder the sidebands. It then adds mouseY, so that the carrier frequency varies from mouseY - 200 to mouseY + 200.
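The arithmetic inside that custom function is worth pulling out on its own. The following plain-Java sketch (the names are my own; in the actual Processing sketch, x[0] is the modulator sample and mouseY is the center frequency) shows the frequency range the carrier sweeps through for a modulator sample in [-1, 1]:

```java
// What the Frequency_Modulation_02 custom function computes:
// instantaneous carrier frequency = modulatorSample * index + center.
// With a modulation index of 200, the carrier sweeps from
// center - 200 to center + 200.
public class FmFunctionSketch {
    public static float instantaneousFrequency(float modulatorSample,
                                               float index, float center) {
        // same shape as: return (x[0] * 200.0) + mouseY;
        return modulatorSample * index + center;
    }

    public static void main(String[] args) {
        // center frequency 600 Hz (i.e. the cursor at y == 600), index 200
        System.out.println(instantaneousFrequency(-1.0f, 200, 600));  // 400.0
        System.out.println(instantaneousFrequency( 0.0f, 200, 600));  // 600.0
        System.out.println(instantaneousFrequency( 1.0f, 200, 600));  // 800.0
    }
}
```

Raising the index widens this sweep, which is heard as louder, more prominent sidebands.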
Code Listing 3.3.2. Frequency_Modulation_02.pde

// Frequency_Modulation_02.pde
import beads.*; // import the beads library

AudioContext ac; // create our AudioContext

// declare our unit generators
WavePlayer modulator;
Glide modulatorFrequency;
WavePlayer carrier;
Gain g;

void setup()
{
  size(400, 300);

  // initialize our AudioContext
  ac = new AudioContext();

  // create the modulator, this WavePlayer
  // will control the frequency of the carrier
  modulatorFrequency = new Glide(ac, 20, 30);
  modulator = new WavePlayer(ac, modulatorFrequency, Buffer.SINE);

  // this is a custom function
  // custom functions are a bit like custom Unit Generators
  // but they only override the calculate function
  Function frequencyModulation = new Function(modulator)
  {
    public float calculate()
    {
      // return x[0], which is the original value of the
      // modulator signal (a sine wave)
      // multiplied by 200.0 to make the sine
      // vary between -200 and 200
      // the number 200 here is called the "Modulation Index"
      // the higher the Modulation Index,
      // the louder the sidebands
      // then add mouseY, so that it varies
      // from mouseY - 200 to mouseY + 200
      return (x[0] * 200.0) + mouseY;
    }
  };

  // create a second WavePlayer, control the frequency
  // with the function created above
  carrier = new WavePlayer(ac, frequencyModulation, Buffer.SINE);

  // create a Gain object to make sure we don't peak
  g = new Gain(ac, 1, 0.5);

  // connect the carrier to the Gain input
  g.addInput(carrier);

  // connect the Gain output to the AudioContext
  ac.out.addInput(g);

  // start audio processing
  ac.start();
}

void draw()
{
  // set the modulator frequency based on mouse position
  modulatorFrequency.setValue(mouseX);
}
3.3.3. Ring Modulation (Ring_Modulation_01)

The final modulation synthesis example demonstrates how to build a ring modulation synthesizer using Beads. The name is derived from the shape of the circuit that is used when this synthesis is implemented using electronic components. On digital systems, such as our computers, the meaning of the name is lost, so calling this technique ring modulation doesn't make much sense; the name has simply stuck.

Implementing ring modulation synthesis is as easy as multiplying two sine waves.

Ring Modulation = Modulator[t] * Carrier[t]

The result of ring modulation synthesis is a signal with two frequency components. The original frequencies of the carrier and modulator are eliminated by the multiplication. In place of them are two sidebands that occur at the sum and difference of the frequencies of the carrier and the modulator.

As in the previous example, the mouse position controls the frequencies of the carrier and the modulator. In this example, however, the custom function isn't used to drive another unit generator. Rather, it is used as a standalone unit generator that takes two input unit generators and multiplies their values. Remember, in modulation synthesis one unit generator is controlling another unit generator; here the custom function performs that control directly.

// a custom function for Ring Modulation
// Remember, Ring Modulation = Modulator[t] * Carrier[t]
Function ringModulation = new Function(carrier, modulator)
{
  public float calculate()
  {
    // multiply the value of modulator by
    // the value of the carrier
    return x[0] * x[1];
  }
};

Then we connect the ringModulation unit generator to a gain, and connect that gain to the main output.

One popular modification of ring modulation synthesis is called Amplitude Modulation. Amplitude modulation is implemented the same as ring modulation, except one of the input signals is kept unipolar, either entirely positive or entirely negative. This can be implemented in Beads by simply modifying the custom function to call the absolute value function on one of the values in the multiplication.

return x[0] * abs(x[1]);
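The claim that the product contains only the sum and difference frequencies follows from the trigonometric identity sin(a)sin(b) = ½[cos(a - b) - cos(a + b)], which is easy to verify numerically. This check is my own illustration, not part of the original sketch.

```java
// Numerical check of the ring modulation identity:
// sin(a) * sin(b) == 0.5 * (cos(a - b) - cos(a + b)).
// The right-hand side is a pair of cosines at the difference and sum
// frequencies, which is exactly the two-sideband spectrum described above.
public class RingModIdentity {
    public static double ringMod(double a, double b) {
        return Math.sin(a) * Math.sin(b);  // sample-by-sample product
    }

    public static double sidebands(double a, double b) {
        return 0.5 * (Math.cos(a - b) - Math.cos(a + b));
    }

    public static void main(String[] args) {
        // sample both forms at an arbitrary instant for a 300 Hz carrier
        // and a 100 Hz modulator; the sidebands land at 200 Hz and 400 Hz
        double t = 0.0123;
        double a = 2 * Math.PI * 300 * t;  // carrier phase
        double b = 2 * Math.PI * 100 * t;  // modulator phase
        System.out.println(Math.abs(ringMod(a, b) - sidebands(a, b)) < 1e-12);  // true
    }
}
```

Note that neither 300 Hz nor 100 Hz appears on the right-hand side: the original carrier and modulator frequencies really are eliminated by the multiplication.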
Code Listing 3.3.3. Ring_Modulation_01.pde

// Ring_Modulation_01.pde
import beads.*; // import the beads library

AudioContext ac; // declare our AudioContext

// declare our unit generators
WavePlayer modulator;
Glide modulatorFrequency;
WavePlayer carrier;
Glide carrierFrequency;
Gain g; // our master gain

void setup()
{
  size(400, 300);

  // initialize our AudioContext
  ac = new AudioContext();

  // create the carrier
  carrierFrequency = new Glide(ac, 20, 30);
  carrier = new WavePlayer(ac, carrierFrequency, Buffer.SINE);

  // create the modulator
  modulatorFrequency = new Glide(ac, 20, 30);
  modulator = new WavePlayer(ac, modulatorFrequency, Buffer.SINE);

  // a custom function for Ring Modulation
  // Remember, Ring Modulation = Modulator[t] * Carrier[t]
  Function ringModulation = new Function(carrier, modulator)
  {
    public float calculate()
    {
      // multiply the value of modulator by
      // the value of the carrier
      return x[0] * x[1];
    }
  };

  // create a Gain object to make sure we don't peak
  g = new Gain(ac, 1, 0.5);

  // connect the ring modulation to the Gain input
  // IMPORTANT: Notice that a custom function
  // can be used just like a UGen! This is very powerful!
  g.addInput(ringModulation);

  // connect the Gain output to the AudioContext
  ac.out.addInput(g);

  // start audio processing
  ac.start();
}

void draw()
{
  // set the frequencies of the carrier and the
  // modulator based on the mouse position
  carrierFrequency.setValue(mouseX);
  modulatorFrequency.setValue(mouseY);
}
3.4. Envelopes

Modulation synthesis is very useful when we want to change a sound very rapidly, but sometimes we want synthesis parameters to change more slowly. When this is the case, we use a time-varying signal called an envelope. An envelope is a signal that rises and falls over a period of time, usually staying within the range 0.0 to 1.0. Most commonly, envelopes are used to control amplitude (gain), but they can be used to control any aspect of a synthesizer or a sampler.

The two most common types of envelopes are Attack-Decay (AD) and Attack-Decay-Sustain-Release (ADSR). AD envelopes rise from 0.0 to 1.0 over a length of time called the Attack, then fall back to 0.0 over a length of time called the Decay. ADSR envelopes rise to 1.0 during the attack, then fall to a sustain value, where they stay until the event ends, and the value falls to 0.0 over a time called the Release.

3.4.1. Frequency Modulation with an Amplitude Envelope (Frequency_Modulation_03)

In the final synthesis example, we're going to attach an envelope to the frequency modulation synthesizer that we created in Frequency_Modulation_02. We will implement a simple Attack-Decay envelope that will allow us to create distinct sound events, rather than just one long continuous tone. The envelope will control a Gain object that sets the volume of the synthesized tone.

Envelope objects can be thought of like automatic Glide objects that can take a series of commands. With a Glide object we can tell it to take a certain value over a certain length of time, but with an Envelope object, we can give it a number of such commands which it will execute one after the other. Already, you can see that Envelopes are very similar to Glide objects. The Envelope constructor takes two parameters: as usual, the first parameter is the master AudioContext object, and the second parameter is the starting value.

gainEnvelope = new Envelope(ac, 0.0);

Then we connect the Envelope to the Gain object by inserting it into the Gain constructor where we would normally indicate a starting value.

synthGain = new Gain(ac, 1, gainEnvelope);

In this example, we tell the Envelope to rise to 0.8 over 50 milliseconds, then fall back to 0.0 over 300ms.

// over 50ms rise to 0.8
gainEnvelope.addSegment(0.8, 50);
// over 300ms fall to 0.0
gainEnvelope.addSegment(0.0, 300);

For more on envelopes, see http://www.beadsproject.net/doc/net/beadsproject/beads/ugens/Envelope.html
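Before the full listing, it may help to see those two addSegment calls as plain arithmetic. The sketch below evaluates a linear attack-decay envelope at a given time; it is only a model of the segment behavior (the real Envelope is a Beads unit generator evaluated once per audio frame), and the names are illustrative.

```java
// Model of the two-segment AD envelope queued by addSegment(0.8, 50)
// and addSegment(0.0, 300): ramp from 0 up to `peak` over `attackMs`,
// then back down to 0 over `decayMs`. Illustrative only.
public class AdEnvelopeSketch {
    public static float valueAt(float peak, float attackMs,
                                float decayMs, float tMs) {
        if (tMs <= 0) return 0.0f;
        if (tMs < attackMs) {
            return peak * (tMs / attackMs);                // attack: ramp up
        }
        float sinceAttack = tMs - attackMs;
        if (sinceAttack < decayMs) {
            return peak * (1.0f - sinceAttack / decayMs);  // decay: ramp down
        }
        return 0.0f;                                       // envelope finished
    }

    public static void main(String[] args) {
        System.out.println(valueAt(0.8f, 50, 300, 25));   // 0.4, halfway up the attack
        System.out.println(valueAt(0.8f, 50, 300, 200));  // 0.4, halfway down the decay
        System.out.println(valueAt(0.8f, 50, 300, 500));  // 0.0, the event is over
    }
}
```

Each mouse click in the example below queues a fresh pair of segments, producing one such 350ms amplitude shape per click.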
Code Listing 3.4.1. Frequency_Modulation_03.pde

// Frequency_Modulation_03.pde
import beads.*; // import the beads library

AudioContext ac; // create our AudioContext

// declare our unit generators
WavePlayer modulator;
Glide modulatorFrequency;
WavePlayer carrier;

// our envelope and gain objects
Envelope gainEnvelope;
Gain synthGain;

void setup()
{
  size(400, 300);

  // initialize our AudioContext
  ac = new AudioContext();

  // create the modulator, this WavePlayer will
  // control the frequency of the carrier
  modulatorFrequency = new Glide(ac, 20, 30);
  modulator = new WavePlayer(ac, modulatorFrequency, Buffer.SINE);

  // create a custom frequency modulation function
  Function frequencyModulation = new Function(modulator)
  {
    public float calculate()
    {
      // return x[0], scaled into an appropriate
      // frequency range
      return (x[0] * 100.0) + mouseY;
    }
  };

  // create a second WavePlayer, control the frequency
  // with the function created above
  carrier = new WavePlayer(ac, frequencyModulation, Buffer.SINE);

  // create the envelope object that will control the gain
  gainEnvelope = new Envelope(ac, 0.0);

  // create a Gain object, connect it to the gain envelope
  synthGain = new Gain(ac, 1, gainEnvelope);

  // connect the carrier to the Gain input
  synthGain.addInput(carrier);

  // connect the Gain output to the AudioContext
  ac.out.addInput(synthGain);

  // start audio processing
  ac.start();

  background(0); // set the background to black
  text("Click to trigger the gain envelope.", 100, 120);
}

void draw()
{
  // set the modulator frequency based on mouse position
  modulatorFrequency.setValue(mouseX);
}

// this routine is triggered when a mouse button is pressed
void mousePressed()
{
  // when the mouse button is pressed,
  // add a 50ms attack segment to the envelope
  // and a 300ms decay segment to the envelope
  gainEnvelope.addSegment(0.8, 50);  // over 50ms rise to 0.8
  gainEnvelope.addSegment(0.0, 300); // in 300ms fall to 0.0
}

4. Sampling

Sampling, as we use the term here, is any use of pre-recorded audio in music production. It doesn't necessarily mean that you're using someone else's work, as it implied in the early days of sampler technology; it just means that you are using an audio sample, a bit of recorded audio. This terminology is wonderful for a tutorial on Beads because the Beads library employs a Sample class to encapsulate audio data.

4.1. Playing Recorded Audio

In this section, we're going to look at the SamplePlayer object. The SamplePlayer object is the default Beads object for playing back audio, and we will work with it in a number of ways.

4.1.1. Loading and Playing a Sound Using SamplePlayer (Sampling_01)

IMPORTANT: For all of the sampling examples, you will need to have audio files where the program is looking for them. If you're copy/pasting this code from the text, then you will need to set up new files in place of the audio files that are provided when the code is downloaded online.

In the first example, we are going to demonstrate how to set up and use the SamplePlayer object. The first important step in this process is telling the program where the audio file is located. In this example, I stored the audio file in a directory that is in the same directory as the Processing sketch. To indicate the directory where the Processing sketch is located we use the sketchPath("") routine. To indicate the subdirectory and file name, we add "DrumMachine/Snaredrum 1.wav" to the result.

sourceFile = sketchPath("") + "DrumMachine/Snaredrum 1.wav";
To load an arbitrary audio file at run time, you can call the selectInput() function wherever you would normally indicate a file name string.

Then we need to initialize the SamplePlayer. To do so, we call its constructor with two parameters. The first parameter is the master AudioContext. The second parameter is the Sample that we want to load into the SamplePlayer. In this case, we're constructing a new Sample on the fly based on the filename created earlier.

try {
  // initialize our SamplePlayer, loading the file
  // indicated by the sourceFile string
  sp = new SamplePlayer(ac, new Sample(sourceFile));
}
catch(Exception e) { … }

Notice that this code is encapsulated within a try/catch block. Any time you access the file system, where there is the possibility that a file might not be found, you must encapsulate the code in a try/catch block so that the program can handle errors.

After the SamplePlayer is created, we set the KillOnEnd parameter, then connect it to a Gain object which is in turn connected to the AudioContext. To actually trigger playback of the sample, we respond to mouse clicks in the mousePressed routine. In this block of code, we set the gain based on the mouse cursor position, tell the SamplePlayer to move to the start of the file using the setToLoopStart routine, and finally call the start routine to actually trigger playback.

// this routine is called whenever a mouse button is pressed
// on the Processing sketch
void mousePressed()
{
  // set the gain based on mouse position
  gainValue.setValue((float)mouseX/(float)width);

  // move the playback pointer to the first loop point (0.0)
  sp.setToLoopStart();

  sp.start(); // play the audio file
}
we call the start routine to actually trigger playback. loading the file // indicated by the sourceFile string sp = new SamplePlayer(ac.*. } catch(Exception e) { … } After the SamplePlayer is created. Notice that this code is encapsulated within a try/catch block.pde // Sampling_01.1. The first parameter is the master AudioContext. // Whenever we load a file.printStackTrace(). // the SamplePlayer class will play the audio file SamplePlayer sp. Glide gainValue. void setup() { size(800.AudioContext ac. } catch(Exception e) { // If there is an error. so we set KillOnEnd to false sp. 0. but only play each one once // in this case. we need to enclose // the code in a Try/Catch block. show an error message // at the bottom of the processing window. g = new Gain(ac. we create a gain that will control the volume // of our sample player gainValue = new Glide(ac. new Sample(sourceFile)).0. 600). we would like to play the sample multiple // times. // This is a very useful function for loading external // files in Processing. // and exit the program } // SamplePlayer can be set to be destroyed when // it is done playing // this is useful when you want to load a number of // different samples. // as usual. println("Exception while attempting to load sample!"). e. // Try/Catch blocks will inform us if the file // can't be found try { // initialize our SamplePlayer. Gain g. // print description of the error exit().wav". 1.setKillOnEnd(false). loading the file // indicated by the sourceFile string sp = new SamplePlayer(ac. sourceFile = sketchPath("") + "DrumMachine/Snaredrum 1. 20). // create our AudioContext // What file will we load into our SamplePlayer? // Notice the use of the sketchPath function. 36 Sonifying Processing . ac = new AudioContext(). gainValue). // this will hold the path to our audio file String sourceFile. 
void draw(){} // this routine is called whenever a mouse button is // pressed on the Processing sketch void mousePressed() { // set the gain based on mouse position gainValue. We set up a Glide object to control the playback rate for the second SamplePlayer. // set the background to black text("Click to demonstrate the SamplePlayer object.". we are going to setup two SamplePlayer objects.0) sp. // tell the user what to do! } // Although we're not drawing to the screen. // play the audio file } 4.start(). as in the previous example. The second SamplePlayer is set up slightly differently here. The first will respond to the left mouse button.1. 100). One will play a sample forward.2. Playing Multiple Sounds and Playing in Reverse (Sampling_02) In the second sampling example.setValue((float)mouseX/(float)width). 1. // connect the Gain to the AudioContext ac. we need to // have a draw function in order to wait for // mousePressed events.out. Then we set their parameters and connect them to the output in the same way as before. sp2.g.setRate(rateValue) 37 . rateValue = new Glide(ac. while the latter will respond to the right mouse button. 100. making sure to enclose them in a try/catch block. 20).addInput(g). We initialize the SamplePlayer objects in the same way. // begin audio processing background(0). This example is very similar to the previous example.start(). The other will play a sample in reverse. // connect the SamplePlayer to the Gain ac.setToLoopStart(). // move the playback pointer to the first loop point (0.addInput(sp). sp. In the mousePressed routine, we check for which button was pressed, then trigger the appropriate SamplePlayer. We trigger the first SamplePlayer in the same way as before. // move the playback pointer to the beginning of the sample sp1.setToLoopStart(); sp1.start(); // play the audio file We trigger the second SamplePlayer so that it will play its file in reverse. 
This is done by calling the setToEnd() routine, setting the playback rate to -1, then calling the start routine.

// set the playback pointer to the end of the sample
sp2.setToEnd();
// set the rate to -1 to play backwards
rateValue.setValue(-1.0);
sp2.start(); // play the audio file

Code Listing 4.1.2. Sampling_02.pde

// Sampling_02.pde
// in this example, we load and play two samples
// one forward, and one in reverse
import beads.*;

AudioContext ac;
SamplePlayer sp1;
// declare our second SamplePlayer, and the Glide that
// will be used to control the rate
SamplePlayer sp2;
Glide rateValue;
// we can run both SamplePlayers through the same Gain
Gain g;
Glide gainValue;

void setup()
{
  size(800, 600);
  ac = new AudioContext(); // create our AudioContext

  // whenever we load a file, we need to enclose the code in
  // a Try/Catch block
  // Try/Catch blocks will inform us if the file can't be
  // found
  try {
    // initialize the first SamplePlayer
    sp1 = new SamplePlayer(ac, new Sample(sketchPath("") + "DrumMachine/Snaredrum 1.wav"));
    sp2 = new SamplePlayer(ac, new Sample(sketchPath("") + "DrumMachine/Soft bassdrum.wav"));
  }
  catch(Exception e)
  {
    // if there is an error, show an error message
    // at the bottom of the processing window
    println("Exception while attempting to load sample!");
    e.printStackTrace();
    exit();
  }

  // for both SamplePlayers, note that we want to
  // play the sample multiple times
  sp1.setKillOnEnd(false);
  sp2.setKillOnEnd(false);

  // initialize our rateValue Glide object
  // a rate of -1 indicates that this sample will be
  // played in reverse
  rateValue = new Glide(ac, 1, 20);
  sp2.setRate(rateValue);

  // as usual, we create a gain that will control the
  // volume of our sample player
  gainValue = new Glide(ac, 0.0, 20);
  g = new Gain(ac, 1, gainValue);
  g.addInput(sp1);
  g.addInput(sp2);
  ac.out.addInput(g); // connect the Gain to the AudioContext

  ac.start(); // begin audio processing

  background(0); // set the background to black
  text("Left click to play a snare sound.", 100, 100);
  text("Right click to play a reversed kick drum sound.", 100, 120);
}

// although we're not drawing to the screen, we need to
// have a draw function in order to wait
// for mousePressed events
void draw(){}

// this routine is called whenever a mouse button is
// pressed on the Processing sketch
void mousePressed()
{
  // set the gain based on mouse position
  gainValue.setValue((float)mouseX/(float)width);

  // if the left mouse button is clicked, then play the
  // snare drum sample
  if( mouseButton == LEFT )
  {
    // move the playback pointer to the beginning
    // of the sample
    sp1.setToLoopStart();
    sp1.start(); // play the audio file
  }
  // if the right mouse button is clicked, then play
  // the bass drum sample backwards
  else
  {
    // set the playback pointer to the end of the sample
    sp2.setToEnd();
    // set the rate to -1 to play backwards
    rateValue.setValue(-1.0);
    sp2.start(); // play the audio file
  }
}

4.1.3. Controlling Playback Rate Using Mouse & Glide (Sampling_03)

In this example, we build a SamplePlayer where the playback rate can be controlled by the mouse. The mouse position along the x-axis determines the playback rate. If the cursor is in the left half of the window, then the playback rate will be negative; the file will play in reverse. If the cursor is in the right half of the window, then the file will play forward. The closer the cursor is to the center of the screen, the slower playback will be in either direction.

To accomplish this, we set up a single SamplePlayer as before. Again, we attach a Glide object to the playback rate by calling sp1.setRate(rateValue). Then we just need to set the rate value based on cursor position. We do this in the draw routine, which is called over and over again in Processing programs.

rateValue.setValue(((float)mouseX - halfWidth)/halfWidth);

We also need to make sure that the playback pointer is in the right place.
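The rate calculation above is plain arithmetic, so it can be sanity-checked outside of Processing and Beads. The following standalone Java sketch (not part of the original example; the window width of 800 and the sample cursor positions are hypothetical values) mirrors the mapping from cursor position to playback rate:

```java
public class RateMapDemo {
    // Mirror of the draw-routine mapping: a cursor at the far left
    // yields -1.0 (full-speed reverse), the center yields 0.0
    // (stopped), and the far right yields 1.0 (full-speed forward).
    static float rateForMouseX(float mouseX, float width) {
        float halfWidth = width / 2.0f;
        return (mouseX - halfWidth) / halfWidth;
    }

    public static void main(String[] args) {
        System.out.println(rateForMouseX(0.0f, 800.0f));   // prints -1.0
        System.out.println(rateForMouseX(400.0f, 800.0f)); // prints 0.0
        System.out.println(rateForMouseX(600.0f, 800.0f)); // prints 0.5
    }
}
```

In the sketch itself this value is fed to the Glide attached to the SamplePlayer's rate, so the transition to each new rate is smoothed over the Glide time rather than jumping instantly.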
If the playback pointer is at the beginning of the file, but the rate is set to play in reverse, then there would be no audio to play. So we set the playback position based on the cursor location when the user clicks.

// if the click is on the right half of the window, then
// play the sound forward
if( mouseX > width / 2.0 )
{
  // set the start position to the beginning
  sp1.setPosition(000);
  sp1.start(); // play the audio file
}
// if the click is on the left half of the window, then
// play the file in reverse
else
{
  // set the start position to the end of the file
  sp1.setToEnd();
  sp1.start(); // play the file in reverse
}

Code Listing 4.1.3. Sampling_03.pde

// Sampling_03.pde
// this is a more complex sampler
// clicking somewhere on the window initiates sample playback
// moving the mouse controls the playback rate
import beads.*;

AudioContext ac;
SamplePlayer sp1;
// we can run the SamplePlayer through the same Gain
Gain sampleGain;
Glide gainValue;
Glide rateValue;

void setup()
{
  size(800, 600);
  ac = new AudioContext(); // create our AudioContext

  // whenever we load a file, we need to enclose
  // the code in a Try/Catch block
  // Try/Catch blocks will inform us if the file
  // can't be found
  try {
    // initialize the SamplePlayer
    sp1 = new SamplePlayer(ac, new Sample(sketchPath("") + "Drum_Loop_01.wav"));
  }
  catch(Exception e)
  {
    // if there is an error, show an error message
    println("Exception while attempting to load sample!");
    e.printStackTrace();
    exit();
  }

  // note that we want to play the sample multiple times
  sp1.setKillOnEnd(false);

  // initialize our rateValue Glide object
  rateValue = new Glide(ac, 1, 30);
  sp1.setRate(rateValue);

  // as usual, we create a gain that will control the
  // volume of our sample player
  gainValue = new Glide(ac, 0.0, 30);
  sampleGain = new Gain(ac, 1, gainValue);
  sampleGain.addInput(sp1); // connect it to the SamplePlayer

  // connect the Gain to the AudioContext
  ac.out.addInput(sampleGain);

  ac.start(); // begin audio processing

  background(0); // set the background to black
  stroke(255);
  // draw a line in the middle
  line(width/2, 0, width/2, height);
  text("Click to begin playback.", 100, 100);
  text("Move the mouse to control playback speed.", 100, 120);
}

// although we're not drawing to the screen, we need
// to have a draw function in order to wait for
// mousePressed events
void draw()
{
  float halfWidth = width / 2;
  // set the gain based on mouse position along the Y-axis
  gainValue.setValue((float)mouseY / (float)height);
  // set the rate based on mouse position along the X-axis
  rateValue.setValue(((float)mouseX - halfWidth)/halfWidth);
}

// this routine is called whenever a mouse button is
// pressed on the Processing sketch
void mousePressed()
{
  // if the click is on the right half of the window, then
  // play the sound forward
  if( mouseX > width / 2.0 )
  {
    // set the start position to the beginning
    sp1.setPosition(000);
    sp1.start(); // play the audio file
  }
  // if the click is on the left half of the window, then
  // play the file backwards
  else
  {
    // set the start position to the end of the file
    sp1.setToEnd();
    // play in reverse (rate set in the draw routine)
    sp1.start();
  }
}

4.2. More Mouse Input

In this section we're going to explore some of the additional data that can be extracted from mouse movement. Not only can we extract position data from the mouse cursor, but we can also get speed and average speed. This allows us to map more musical parameters to the mouse than we could if we used only the cursor position.

4.2.1. Getting Mouse Speed and Mean Mouse Speed (Mouse_01)

This program demonstrates simple mouse data extraction without reference to sound or Beads. This section is included in this book as a demonstration of how an artist might map their Processing sketch to musical parameters. All of this information might be similarly extracted from objects within a sketch, or other parameters might be extracted and mapped from some other source of information. The draw routine simply maintains a set of mouse-related variables, then prints their values on screen.

Code Listing 4.2.1. Mouse_01.pde

// Mouse_01.pde
// this short script just shows how we can extract
// information from a series of coordinates, in this case,
// the location of the mouse

// variables that will hold various mouse parameters
float xChange = 0;
float yChange = 0;
float lastMouseX = 0;
float lastMouseY = 0;
float meanXChange = 0.0;
float meanYChange = 0.0;

void setup()
{
  // create a decent-sized window, so that the mouse has
  // room to move
  size(800, 600);
}

void draw()
{
  background(0); // fill the background with black

  // calculate how much the mouse has moved
  xChange = lastMouseX - mouseX;
  yChange = lastMouseY - mouseY;

  // calculate the average speed of the mouse
  meanXChange = floor((0.5 * meanXChange) + (0.5 * xChange));
  meanYChange = floor((0.5 * meanYChange) + (0.5 * yChange));

  // store the current mouse coordinates for use in the
  // next round
  lastMouseX = mouseX;
  lastMouseY = mouseY;

  // show the mouse parameters on screen
  text("MouseX: " + mouseX, 100, 100);
  text("MouseY: " + mouseY, 100, 120);
  text("Change in X: " + xChange, 100, 140);
  text("Change in Y: " + yChange, 100, 160);
  text("Avg. Change in X: " + meanXChange, 100, 180);
  text("Avg. Change in Y: " + meanYChange, 100, 200);
}

4.2.2. Complex Mouse-Driven Multi-Sampler (Sampler_Interface_01)

WARNING: This is a very complex example. It builds on all the previous sampler examples and all the previous mouse examples. If you struggle to understand what is in this example, then see the previous examples and make sure you understand arrays as they are implemented in Processing. This example also uses arrays of Beads. For more information on arrays, see processing.org/reference/Array.html.

In this example, we build an expressive multi-sampler, mapping playback parameters to mouse data. There's a lot going on in this program, so it's important that you read through it line-by-line and try to understand what each line is accomplishing.
We will break it down only briefly here. This multisampler loads all the samples in the samples subfolder of the sketch directory, then plays them back, mapping playback parameters to mouse data. Each sample is loaded into a slice of area along the x-axis. Movement within that range has a possibility to trigger the related sound. If the mouse is moving up, then the sample is played forward. If the mouse is moving downward, then the sample is played backward. Further, the sample is pitch-shifted based on the speed of cursor movement, and fast movement randomly triggers other sounds as well.

The first major task undertaken by the setup routine is to discover the files in the samples folder, then load them into the sourceFile array. After the filenames are stored, we initialize our arrays of Beads, then initialize our objects that create a delay effect (these unit generators will be covered in the next chapter). Then the program loops through the array of filenames and attempts to load each sample. If a sample is loaded properly, then the SamplePlayer is created and the other unit generators associated with that SamplePlayer are initialized.

// enclose the file-loading in a try-catch block
try {
  // run through each file
  for( count = 0; count < numSamples; count++ )
  {
    // create the SamplePlayer that will run this
    // particular file
    sp[count] = new SamplePlayer(ac, new Sample(sketchPath("") + "samples/" + sourceFile[count]));
    sp[count].setKillOnEnd(false);

    // these unit generators will control aspects of the
    // sample player
    gainValue[count] = new Glide(ac, 0.0);
    gainValue[count].setGlideTime(20);
    g[count] = new Gain(ac, 1, gainValue[count]);
    rateValue[count] = new Glide(ac, 1);
    rateValue[count].setGlideTime(20);
    pitchValue[count] = new Glide(ac, 1);
    pitchValue[count].setGlideTime(20);
    sp[count].setRate(rateValue[count]);
    sp[count].setPitch(pitchValue[count]);
    g[count].addInput(sp[count]);

    // finally, connect this chain to the delay and to
    // the main out
    delayIn.addInput(g[count]);
    ac.out.addInput(g[count]);
  }
}
// if there is an error while loading the samples
catch(Exception e)
{
  // show that error in the space underneath the
  // processing code
  println("Exception while attempting to load sample!");
  e.printStackTrace();
  exit();
}

After that, the program is started. As the draw routine is called repeatedly, mouse parameters are stored and used to trigger sample playback. Sample playback is handled by the triggerSample subroutine. This function triggers a sample either forward or backward, using the specified gain, within the given pitch range.

// trigger a sample
void triggerSample(int index, boolean reverse, float newGain, float pitchRange)
{
  if( index >= 0 && index < numSamples )
  {
    gainValue[index].setValue(newGain); // set the gain value
    pitchValue[index].setValue(random(1.0-pitchRange, 1.0+pitchRange));

    // if we should play the sample in reverse
    if( reverse )
    {
      if( !sp[index].inLoop() )
      {
        rateValue[index].setValue(-1.0);
        sp[index].setToEnd();
      }
    }
    else // if we should play the sample forwards
    {
      if( !sp[index].inLoop() )
      {
        rateValue[index].setValue(1.0);
        sp[index].setToLoopStart();
      }
    }

    sp[index].start();
  }
}

Code Listing 4.2.2. Sampler_Interface_01.pde

// Sampler_Interface_01.pde
// This is a complex, mouse-driven sampler.
// Make sure that you understand the examples in Sampling_01,
// Sampling_02 and Sampling_03 before trying to tackle this.

// import the java File library
// this will be used to locate the audio files that will be
// loaded into our sampler
import java.io.File;

import beads.*; // import the beads library

AudioContext ac; // declare our parent AudioContext as usual

// these variables store mouse position and change in mouse
// position along each axis
int xChange = 0;
int yChange = 0;
int lastMouseX = 0;
int lastMouseY = 0;

int numSamples = 0; // how many samples are being loaded?
// how much space will a sample take on screen? how wide will
// be the invisible trigger area?
int sampleWidth = 0;

// an array that will contain our sample filenames
String sourceFile[];

Gain g[]; // an array of Gains
Glide gainValue[];
Glide rateValue[];
Glide pitchValue[];

SamplePlayer sp[]; // an array of SamplePlayers

// these objects allow us to add a delay effect
TapIn delayIn;
TapOut delayOut;
Gain delayGain;

void setup()
{
  // create a reasonably-sized playing field for our sampler
  size(800, 600);

  ac = new AudioContext(); // initialize our AudioContext

  // this block of code counts the number of samples in
  // the /samples subfolder
  File folder = new File(sketchPath("") + "samples/");
  File[] listOfFiles = folder.listFiles();
  for (int i = 0; i < listOfFiles.length; i++)
  {
    if (listOfFiles[i].isFile())
    {
      if( listOfFiles[i].getName().endsWith(".wav") )
      {
        numSamples++;
      }
    }
  }

  // if no samples are found, then end
  if( numSamples <= 0 )
  {
    println("no samples found in " + sketchPath("") + "samples/");
    println("exiting...");
    exit();
  }

  // how many pixels along the x-axis will each
  // sample occupy?
  sampleWidth = (int)(this.getWidth() / (float)numSamples);

  // this block of code reads and stores the filename for
  // each sample
  sourceFile = new String[numSamples];
  int count = 0;
  for (int i = 0; i < listOfFiles.length; i++)
  {
    if (listOfFiles[i].isFile())
    {
      if( listOfFiles[i].getName().endsWith(".wav") )
      {
        sourceFile[count] = listOfFiles[i].getName();
        count++;
      }
    }
  }

  // set the size of our arrays of unit generators in order
  // to accomodate the number of samples that will be loaded
  g = new Gain[numSamples];
  gainValue = new Glide[numSamples];
  rateValue = new Glide[numSamples];
  pitchValue = new Glide[numSamples];
  sp = new SamplePlayer[numSamples];

  // set up our delay - this is just for taste, to fill out
  // the texture
  delayIn = new TapIn(ac, 2000);
  delayOut = new TapOut(ac, delayIn, 200.0);
  delayGain = new Gain(ac, 1, 0.15);
  delayGain.addInput(delayOut);
  // connect the delay to the master output
  ac.out.addInput(delayGain);

  // enclose the file-loading in a try-catch block
  try {
    // run through each file
    for( count = 0; count < numSamples; count++ )
    {
      // print a message to show which file we are loading
      println("loading " + sketchPath("") + "samples/" + sourceFile[count]);

      // create the SamplePlayer that will run this
      // particular file
      sp[count] = new SamplePlayer(ac, new Sample(sketchPath("") + "samples/" + sourceFile[count]));
      sp[count].setKillOnEnd(false);

      // these unit generators will control aspects of the
      // sample player
      gainValue[count] = new Glide(ac, 0.0);
      gainValue[count].setGlideTime(20);
      g[count] = new Gain(ac, 1, gainValue[count]);
      rateValue[count] = new Glide(ac, 1);
      rateValue[count].setGlideTime(20);
      pitchValue[count] = new Glide(ac, 1);
      pitchValue[count].setGlideTime(20);
      sp[count].setRate(rateValue[count]);
      sp[count].setPitch(pitchValue[count]);
      g[count].addInput(sp[count]);

      // finally, connect this chain to the delay and to
      // the main out
      delayIn.addInput(g[count]);
      ac.out.addInput(g[count]);
    }
  }
  // if there is an error while loading the samples
  catch(Exception e)
  {
    // show that error in the space underneath the
    // processing code
    println("Exception while attempting to load sample!");
    e.printStackTrace();
    exit();
  }

  ac.start(); // begin audio processing

  background(0); // set the background to black
  text("Move the mouse quickly to trigger playback.", 100, 100);
  text("Faster movement triggers more and louder sounds.", 100, 120);
}

// the main draw function
void draw()
{
  background(0);

  // calculate the mouse speed and location
  xChange = abs(lastMouseX - mouseX);
  yChange = lastMouseY - mouseY;
  lastMouseX = mouseX;
  lastMouseY = mouseY;

  // calculate the gain of newly triggered samples
  float newGain = (abs(yChange) + xChange) / 2.0;
  newGain /= this.getWidth();
  if( newGain > 1.0 ) newGain = 1.0;

  // calculate the pitch range
  float pitchRange = yChange / 200.0;

  // should we trigger the sample that the mouse is over?
  if( newGain > 0.09 )
  {
    // get the index of the sample that is coordinated with
    // the mouse location
    int currentSampleIndex = (int)(mouseX / sampleWidth);
    if( currentSampleIndex < 0 ) currentSampleIndex = 0;
    // clamp to the last valid sample index
    else if( currentSampleIndex >= numSamples ) currentSampleIndex = numSamples - 1;

    // trigger that sample
    // if the mouse is moving upwards, then play it in
    // reverse
    triggerSample(currentSampleIndex, (boolean)(yChange < 0), newGain, pitchRange);
  }

  // randomly trigger other samples, based loosely on the
  // mouse speed
  // loop through each sample
  for( int currentSample = 0; currentSample < numSamples; currentSample++ )
  {
    // if a random number is less than the current gain
    if( random(1.0) < (newGain / 2.0) )
    {
      // trigger that sample
      triggerSample(currentSample, (boolean)(yChange < 0 && random(1.0) < 0.33), newGain, pitchRange);
    }
  }
}

// trigger a sample
void triggerSample(int index, boolean reverse, float newGain, float pitchRange)
{
  if( index >= 0 && index < numSamples )
  {
    // show a message that indicates which sample we are
    // triggering
    println("triggering sample " + index);

    gainValue[index].setValue(newGain); // set the gain value
    // and set the pitch value (which is really just another
    // rate controller)
    pitchValue[index].setValue(random(1.0-pitchRange, 1.0+pitchRange));

    // if we should play the sample in reverse
    if( reverse )
    {
      if( !sp[index].inLoop() )
      {
        rateValue[index].setValue(-1.0);
        sp[index].setToEnd();
      }
    }
    else // if we should play the sample forwards
    {
      if( !sp[index].inLoop() )
      {
        rateValue[index].setValue(1.0);
        sp[index].setToLoopStart();
      }
    }

    sp[index].start();
  }
}

4.3. Granular Synthesis

Granular synthesis is a technique whereby sounds are created from many small grains of sound. Since the early 1980s, sound artists have been exploring this popular sound-generating paradigm, and Beads provides a wonderful unit generator for easily creating granular synthesis instruments. A good analogy is a dump truck unloading a load of gravel: as each individual pebble hits the ground it makes a sound, but we hear the gravel as one large sound complex.

The GranularSamplePlayer object allows a programmer to load an audio file, then play it back with automated granulation based on a number of parameters. These parameters include playback rate, pitch, grain size, grain interval, grain randomness and position. Playback rate, pitch and grain size are pretty straightforward.
Grain interval sets the time between grains. Grain randomness sets the jitter that will apply to each parameter; a little randomness helps the granulation sound less mechanical. Position sets the point in the file from which grains should be drawn, and it assumes that playback rate is zero.

4.3.1. Using GranularSamplePlayer (Granular_01)

In this example we're going to set up a very basic granular synthesizer that is controlled by the mouse cursor. The position of the mouse along the x-axis is mapped to the position in the source audio file from which grains will be extracted. The position of the mouse along the y-axis is mapped to grain duration.

The first thing to notice about this program is that we declare a lot of Glide objects. There are a lot of parameters to the granulation process, and we want to make sure that we have a hook into all of them.

Next, notice that setting up the GranularSamplePlayer is very similar to how we set up the SamplePlayer object. We enclose the file operations within a try/catch, then we call the constructor with two parameters, the AudioContext and the sample that we will granulate.

try {
  // load the audio file which will be used in granulation
  sourceSample = new Sample(sketchPath("") + sourceFile);
}
// catch any errors that occur in file loading
catch(Exception e)
{
  println("Exception while attempting to load sample!");
  e.printStackTrace();
  exit();
}

// store the sample length
sampleLength = sourceSample.getLength();

// initialize our GranularSamplePlayer
gsp = new GranularSamplePlayer(ac, sourceSample);

Next we initialize all of our Glide objects and connect them to the proper parameters.

// connect all of our Glide objects to the previously
// created GranularSamplePlayer
gsp.setRandomness(randomnessValue);
gsp.setGrainInterval(intervalValue);
gsp.setGrainSize(grainSizeValue);
gsp.setPitch(pitchValue);
gsp.setPosition(positionValue);

The granulation parameters are controlled within the draw routine. In this case, the mouse controls the position and grain size parameters. The grain size is controlled by the y-axis; the grain size calculation is simply for personal taste, so try playing with it and listening to how the resulting granulation changes! The position within the source audio file is controlled by the x-axis, and the position value calculation makes sure to output an index that stays within the audio file.

// grain size can be set by moving the mouse along the Y-axis
grainSizeValue.setValue((float)mouseY / 5);
// and the X-axis is used to control the position in the wave
// file that is being granulated
positionValue.setValue((float)(mouseX / (float)this.getWidth()) * (sampleLength - 400));

We start the granulation by calling the start function.

Code Listing 4.3.1. Granular_01.pde

// Granular_01.pde
// In this granular synthesis demonstration, the mouse
// controls the position and grain size parameters.
// This granulator can be used to create a wide variety
// of sounds, depending on the input file.

import beads.*; // import the beads library

AudioContext ac; // declare our parent AudioContext

// what file will be granulated?
String sourceFile = "OrchTuning01.wav";

// this object will hold the audio data that will be
// granulated
Sample sourceSample = null;

// this float will hold the length of the audio data, so that
// we don't go out of bounds when setting the granulation
// position
float sampleLength = 0;

Gain masterGain; // our usual master gain
Glide gainValue;

GranularSamplePlayer gsp; // our GranularSamplePlayer object

// these unit generators will be connected to various
// granulation parameters
Glide randomnessValue;
Glide grainSizeValue;
Glide positionValue;
Glide intervalValue;
Glide pitchValue;

void setup()
{
  size(800, 600); // set a reasonable window size
  ac = new AudioContext(); // initialize our AudioContext

  // again, we encapsulate the file-loading in a try-catch
  // block, just in case there is an error with file access
  try {
    // load the audio file which will be used in granulation
    sourceSample = new Sample(sketchPath("") + sourceFile);
  }
  // catch any errors that occur in file loading
  catch(Exception e)
  {
    println("Exception while attempting to load sample!");
    e.printStackTrace();
    exit();
  }

  // store the sample length - this will be used when
  // determining where in the file we want to position our
  // granulation pointer
  sampleLength = sourceSample.getLength();

  // set up our master gain
  gainValue = new Glide(ac, 0.5, 100);
  masterGain = new Gain(ac, 1, gainValue);

  // initialize our GranularSamplePlayer
  gsp = new GranularSamplePlayer(ac, sourceSample);

  // these ugens will control aspects of the granular sample
  // player
  // remember the arguments on the Glide constructor
  // (AudioContext, Initial Value, Glide Time)
  randomnessValue = new Glide(ac, 80, 10);
  intervalValue = new Glide(ac, 100, 100);
  grainSizeValue = new Glide(ac, 100, 50);
  positionValue = new Glide(ac, 50000, 30);
  pitchValue = new Glide(ac, 1, 20);

  // connect all of our Glide objects to the previously
  // created GranularSamplePlayer
  gsp.setRandomness(randomnessValue);
  gsp.setGrainInterval(intervalValue);
  gsp.setGrainSize(grainSizeValue);
  gsp.setPitch(pitchValue);
  gsp.setPosition(positionValue);

  // connect our GranularSamplePlayer to the master gain
  masterGain.addInput(gsp);

  // connect the master gain to the AudioContext's master
  // output
  ac.out.addInput(masterGain);

  gsp.start(); // start the granular sample player
  ac.start(); // begin audio processing

  background(0); // set the background to black
  // tell the user what to do!
  text("Move the mouse to control granular synthesis.", 100, 120);
}

// the main draw function
void draw()
{
  background(0, 0, 0);

  // grain size can be set by moving the mouse along the
  // Y-axis
  grainSizeValue.setValue((float)mouseY / 5);

  // The X-axis is used to control the position in the
  // wave file that is being granulated.
  // The equation used here looks complex, but it really
  // isn't.
  // All we're doing is translating on-screen position into
  // position in the audio file.
  // this: (float)mouseX / (float)this.getWidth() calculates
  // a 0.0 to 1.0 value for position along the x-axis
  // then we multiply it by sampleLength (minus a little
  // buffer for safety) to get the position in the audio file
  positionValue.setValue((float)((float)mouseX / (float)this.getWidth()) * (sampleLength - 400));
}
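The position mapping in the draw routine above reduces to one line of arithmetic. This standalone Java sketch (not part of the original example; the 800-pixel width and 5000 ms sample length are hypothetical values) shows how cursor position translates into a granulation pointer that stays inside the file:

```java
public class GranulationPositionDemo {
    // Mirror of the Granular_01 draw-routine calculation: scale the
    // 0.0-1.0 on-screen position by the sample length, minus a small
    // safety buffer (400 ms) so grains are never drawn past the end.
    static float positionForMouseX(float mouseX, float width, float sampleLengthMs) {
        return (mouseX / width) * (sampleLengthMs - 400.0f);
    }

    public static void main(String[] args) {
        System.out.println(positionForMouseX(0.0f, 800.0f, 5000.0f));   // prints 0.0
        System.out.println(positionForMouseX(400.0f, 800.0f, 5000.0f)); // prints 2300.0
        System.out.println(positionForMouseX(800.0f, 800.0f, 5000.0f)); // prints 4600.0
    }
}
```

Because the result is fed through a Glide, the granulation pointer slides smoothly between positions rather than jumping, which is part of what keeps the texture continuous as the mouse moves.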
56 Sonifying Processing . Using GranularSamplePlayer Parameters (Granular_02) The second granular synthesis example is very similar to the first. Glide randomnessValue2. depending on the input file. Glide grainSizeValue1.2. Glide positionValue1. // repeat the same unit generators for our second granulator GranularSamplePlayer gsp2.4. with just a few alterations. in order to make the sound a little bit more unpredictable. // these unit generators will be connected to various granulation parameters // our first GranularSamplePlayer object GranularSamplePlayer gsp1. more densely grainy texture. String sourceFile = "OrchTuning01. This granulator can be used to create a wide variety // of sounds. // our usual master gain Glide masterGainValue. Glide pitchValue1. This program is very complex. // declare our parent AudioContext // Change this line to try the granulator on another sound // file. this allows us to create a more complex texture.pde // // // // // This granular synthesis program is similar to Granular_01. Code Listing 4.*.wav". So mouse movement essentially triggers granulation. Granular_02. Glide gainValue2. we set up another GranularSamplePlayer to make a thicker. but each granulator uses slightly different numbers than the other. Again.pde // Granular_02. First. // again.printStackTrace(). // these variables will be used to store properties of the // mouse cursor int xChange = 0. exit(). intervalValue2. 100). // we add a delay unit just to give the program a fuller sound TapIn delayIn. so that // we don't go out of bounds when setting the granulation // position. 600). TapOut delayOut.this will be used when // determining where in the file we want to position our // granulation pointer sampleLength = sourceSample. we encapsulate the file-loading in a try-catch // block. Sample sourceSample = null. masterGain = new Gain(ac. positionValue2. 1. int lastMouseY = 0. // set up our master gain masterGainValue = new Glide(ac. Gain delayGain. 0. 
} // catch any errors that occur in file loading catch(Exception e) { println("Exception while attempting to load sample!"). 57 . just in case there is an error with file access try { // load the audio file which will be used in granulation sourceSample = new Sample(sketchPath("") + sourceFile). // This float will hold the length of the audio data. void setup() { // set a reasonable window size size(800. // This object will hold the audio data that will be // granulated. masterGainValue). pitchValue2. float sampleLength = 0.9. // initialize our AudioContext ac = new AudioContext().getLength().Glide Glide Glide Glide grainSizeValue2. e. int yChange = 0. int lastMouseX = 0. } // store the sample length . 30). gainValue2).setPosition(positionValue1). g2. 50000.setRandomness(randomnessValue2). pitchValue2 = new Glide(ac. intervalValue2 = new Glide(ac. 50000.15).setGrainInterval(intervalValue2).addInput(gsp1). intervalValue1 = new Glide(ac. 50). // the TapOut object is the delay output object delayOut = new TapOut(ac. 30). 20). gsp1. gsp1. g1 = new Gain(ac.setPitch(pitchValue1). gainValue1 = new Glide(ac. // The TapIn object is the start of the delay delayIn = new TapIn(ac. positionValue2 = new Glide(ac. 50). sourceSample). 140. 60.setRandomness(randomnessValue1). // Set up our delay unit (this will be covered more // thoroughly in a later example).setGrainSize(grainSizeValue1). // connect the delay output to the gain input delayGain. 1. 1. gsp2. 100). delayIn.setGrainInterval(intervalValue1). // connect all of our Glide objects to the previously // created GranularSamplePlayer gsp2.setPitch(pitchValue2). 50). 30). 20). grainSizeValue2 = new Glide(ac. gsp2. // connect the first GranularSamplePlayer to the delay delayIn. 200.addInput(delayOut). gainValue2 = new Glide(ac.0). randomnessValue1 = new Glide(ac. 30). 1. g2 = new Gain(ac.0. // connect the first GranularSamplePlayer to the master // gain 58 Sonifying Processing . sourceSample). gsp2. 80. gainValue1). 
0.addInput(g2). positionValue1 = new Glide(ac.addInput(gsp2). pitchValue1 = new Glide(ac. gsp1. 10).setPosition(positionValue2). 0. 100. 2000). 100. // connect all of our Glide objects to the previously // created GranularSamplePlayer gsp1. randomnessValue2 = new Glide(ac. 10). 0. // we will repeat the same Unit Generators for the second // GranularSamplePlayer gsp2 = new GranularSamplePlayer(ac. 100.addInput(g1). // set the volume of the delay effect delayGain = new Gain(ac. grainSizeValue1 = new Glide(ac. gsp1.// these ugens will control aspects of the granular sample // player gsp1 = new GranularSamplePlayer(ac.setGrainSize(grainSizeValue2). 1. gsp2.0. 1. g1. // connect the second GranularSamplePlayer to the delay delayIn. float newGain = (xChange + yChange) / 3.addInput(g2).addInput(g1).setValue(newGain). // use a slightly larger pitch range pitchValue2.0.0). // tell the user what to do! } // the main draw function void draw() { // get the location and speed of the mouse cursor xChange = abs(lastMouseX .setValue(random(1.getWidth()) * (sampleLength . // and the X-axis is used to control the position in the // wave file that is being granulated positionValue1.start().0.0-pitchRange. pitchValue1. // start the second granular sample player // connect the master gain to the AudioContext's master // output ac. float pitchRange = yChange / 200.start(). if( newGain > 1. // set up the same relationships as with the first // GranularSamplePlayer. yChange = abs(lastMouseY .setValue(random(100) + 1.0. // begin audio processing background(0). // set randomness to a nice random level randomnessValue1. ac. lastMouseY = mouseY.setValue((float)mouseY / 10).addInput(delayGain). // start the first granular sample player gsp2.setValue((float)((float)mouseX / (float)this. 1000) / (xChange+1)) ).8). 1. // connect our delay effect to the master gain masterGain. // set the background to black text("Move the mouse to control granular synthesis.start(). 120).mouseY).400)). 
100.0 ) newGain = 1. 59 .".out.setValue(random(150) + 1. pitchRange *= 3.masterGain.0).setValue(random(random(200.0. // grain size can be set by moving the mouse along the Y// axis grainSizeValue1.0+pitchRange)). // set the time interval value according to how much the // mouse is moving horizontally intervalValue1.0+pitchRange)). but use slightly different numbers gainValue1. lastMouseX = mouseX. gsp1.setValue(newGain * 0.addInput(masterGain). randomnessValue2. gainValue1.setValue(random(1. 1.mouseX). // connect the second GranularSamplePlayer to the master // gain masterGain.0-pitchRange. setValue((float)mouseY / 5). 1000) / (xChange+1)) ).getWidth()) * (sampleLength .400)).// use a slightly longer interval value intervalValue2. grainSizeValue2. } 60 Sonifying Processing . positionValue2.setValue((float)((float)mouseX / (float)this.setValue(random(random(500. Basic Delay (Delay_01) In this example.0). 1.1.pde // Delay_01. Next we set up the TapOut.50).1.addInput(delayOut).addInput(synthGain). we’re going to look at some of the audio effects that are already implemented in Beads. 500. delayIn. then we give the actual delay duration in milliseconds. Delay_01. The TapOut constructor has three parameters. An audio signal is stored in memory for a brief duration. First we supply the parent AudioContext. delayOut = new TapOut(ac. then we supply the name of the TapIn that this TapOut is connected to. After that. we declare the TapIn object and give it a maximum delay time in milliseconds. At the time of this writing. delayGain. Delay Delay is a digital echo. we’re going to expand on the Frequency_Modulation_03 example by adding a short delay.1. 5. simply connect the TapOut output to your output objects. then sent to the output. 5. Code Listing 5. filters. but I expect that to change soon. delay is implemented using two unit generators: TapIn and TapOut. the effects aren’t that deep. This is the duration in milliseconds of the audio buffer within the TapIn object. 
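The cursor-speed-to-gain mapping in the draw() routine above is worth seeing in isolation: the absolute x and y changes are summed, scaled down, and clipped at 1.0, so a still mouse is silent and a fast-moving one plays at full volume. The helper below is my own restatement of that arithmetic, using the divisor from the listing.

```java
// Turns cursor speed into a gain in [0, 1], as in Granular_02:
// louder when the mouse moves faster, clipped at unity gain.
public class SpeedToGain {
    public static float gain(int lastX, int lastY, int x, int y) {
        int xChange = Math.abs(lastX - x);
        int yChange = Math.abs(lastY - y);
        float g = (xChange + yChange) / 3.0f; // scale factor from the listing
        if (g > 1.0f) g = 1.0f;               // never exceed full volume
        return g;
    }
}
```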
In Beads. reverb and a couple others.5. delayIn. 0. In this chapter we will look at delay objects. The delay can be any duration less than the maximum duration supplied to the TapIn object.1.pde // This is an extension of Frequency_Modulation_03. Then we connect our synthesizer output to the TapIn input.pde 61 . and the delay is ready to go. First.1. Effects In this chapter. The TapOut object simply waits for delayed audio from the TapIn. delayIn = new TapIn(ac. 2000). Delays are very easy to setup. delayGain = new Gain(ac. 20. // create a second WavePlayer.// this example creates a simple FM synthesizer and adds a // 400ms delay import beads.*.0) + mouseY. Gain delayGain. this WavePlayer will control the // frequency of the carrier modulatorFrequency = new Glide(ac. // create our AudioContext // declare our unit generators WavePlayer modulator. Glide modulatorFrequency. // import the beads library AudioContext ac. scaled into an appropriate frequency // range return (x[0] * 100. frequencyModulation.SINE). control the frequency with // the function created above carrier = new WavePlayer(ac. TapOut delayOut. } }. 62 Sonifying Processing . void setup() { size(400.SINE). Buffer. modulatorFrequency. // our envelope and gain objects Envelope gainEnvelope. // create a custom frequency modulation function Function frequencyModulation = new Function(modulator) { public float calculate() { // return x[0]. // initialize our AudioContext ac = new AudioContext(). // create the modulator. Gain synthGain. 300). WavePlayer carrier. 30). Buffer. // our delay objects TapIn delayIn. modulator = new WavePlayer(ac. // print the directions on the screen background(0). at a 50ms attack // segment to the envelope // and a 300 ms decay segment to the envelope gainEnvelope.the final parameter is the // length of the initial delay time in milliseconds delayOut = new TapOut(ac. 1. 100). 50).addInput(delayGain).0. simply uncomment // this line. 
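Under the hood, a TapIn/TapOut pair amounts to a circular buffer: the TapIn writes incoming samples into memory sized for the maximum delay, and the TapOut reads them back a fixed number of samples later. The plain-Java class below illustrates that mechanism; it is a sketch of the idea, not Beads' actual implementation, and it works in samples rather than milliseconds.

```java
// A minimal fixed delay line: write a sample, read back the sample
// from delaySamples ago -- the same idea as Beads' TapIn/TapOut.
public class DelayLine {
    private final float[] buffer; // sized for the maximum delay
    private int writeIndex = 0;
    private final int delaySamples;

    public DelayLine(int maxSamples, int delaySamples) {
        this.buffer = new float[maxSamples];
        this.delaySamples = delaySamples;
    }

    // Push one input sample, get one delayed output sample.
    public float process(float in) {
        int readIndex = (writeIndex - delaySamples + buffer.length) % buffer.length;
        float out = buffer[readIndex];       // audio from delaySamples ago
        buffer[writeIndex] = in;             // store the new sample
        writeIndex = (writeIndex + 1) % buffer.length;
        return out;
    }
}
```

Feeding an impulse into a three-sample delay produces silence for three samples, then the impulse re-emerges — the digital echo described above. Feeding a fraction of the output back into the input (as the commented-out `delayIn.addInput(delayGain)` line in the listing does) turns the single echo into a repeating one.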
// connect the synthesizer to the delay delayIn.50). // connect the delay output to the gain delayGain. } 63 . text("Click me to demonstrate delay!". // connect the Gain output to the AudioContext ac.out.7.addSegment(0. // connect the delay output to the AudioContext ac.addInput(delayGain). connect it to the gain envelope synthGain = new Gain(ac. // create a Gain object. gainEnvelope.the second parameter sets the // maximum delay time in milliseconds delayIn = new TapIn(ac. 0. 100. // To feed the delay back into itself.addSegment(0. // start audio processing ac.start(). } void draw() { // set the modulator frequency modulatorFrequency. //delayIn. // set up our delay // create the delay input . delayIn. // the gain for our delay delayGain = new Gain(ac. 2000).0). 1. gainEnvelope). 300).addInput(delayOut).// create the envelope object that will control the gain gainEnvelope = new Envelope(ac. 500.addInput(synthGain). // connect the carrier to the Gain input synthGain.out. } // this routine is triggered whenever a mouse button is // pressed void mousePressed() { // when the mouse button is pressed.addInput(carrier). 0.setValue(mouseX).0). // create the delay output .addInput(synthGain). Controlling Delay At the time of this writing. we’re going to apply a low pass filter to a drum loop using the OnePoleFilter object. notch. it’s enough to know that filters modify the frequency spectrum of a sound. 200. If it were working. This might be an LFO. sound artists talk about low pass filters and high pass filters. Other filter types include band pass.addInput(filter1). You can easily simulate the sound of a low pass filter by cupping your hands over your mouth as you speak. // connect the SamplePlayer to the filter filter1. Each filter has a different quality. or an envelope.5. or a custom function.2.1.addInput(sp). Setting up the low pass filter is literally three lines of code. … // connect the filter to the gain g.1.0). In the first filter example. 
passing it the AudioContext and the cutoff frequency. and there are many books that cover just the topic of filters. delayIn.2. comb and allpass.wav. and right-click to hear the original loop. Filters Filter effects are another class of audio effects that is both familiar and well-implemented in Beads. Low Pass Filter (Filter_01) IMPORTANT: This program will only work if the sketch directory contains an audio file called Drum_Loop_01. // our new filter with a cutoff frequency of 200Hz filter1 = new OnePoleFilter(ac. Commonly. Call the OnePoleFilter constructor. Technically. Low pass filters allow the low frequencies to pass while reducing or eliminating some of the higher frequencies. 5. 5. variable delay seems to be bugged. 64 Sonifying Processing . You can left-click on the window to hear the filtered drums. For now.2. filtering is boosting or attenuating frequencies in a sound. the UGen that controls the variable delay would be inserted into the TapOut constructor. High pass filters do the opposite. delayOut = new TapOut(ac. Then connect the input and the output and you’re ready to rock. delayEnvelope). This section will be updated as soon as there is a fix for variable-length delay. 0).1. 600).Code Listing 5. show an error message println("Exception while attempting to load sample!"). // set up our new filter with a cutoff frequency of 200Hz filter1 = new OnePoleFilter(ac. // this is our filter unit generator void setup() { size(800.wav". 200. // connect the SamplePlayer to the filter filter1. // standard gain objects Gain g. e. sp = new SamplePlayer(ac.printStackTrace().addInput(sp). exit(). } // we would like to play the sample multiple times.setKillOnEnd(false). new Sample(sourceFile)).*. ac = new AudioContext(). } catch(Exception e) { // if there is an error. // the SamplePlayer class will be used to play the audio file SamplePlayer sp. // this will hold the path to our audio file String sourceFile. 65 . AudioContext ac. Filter_01. 
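A one-pole low pass filter like the OnePoleFilter used here can be written in two lines of arithmetic: each output is the previous output nudged a fraction of the way toward the input, so fast wiggles (high frequencies) are smoothed away while slow trends (low frequencies) pass through. The coefficient formula below is one standard mapping from cutoff frequency to that fraction; Beads may compute its coefficient slightly differently, so treat this as an illustration of the filter type, not the object's internals.

```java
// A one-pole low pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1]).
// A small 'a' (low cutoff) smooths heavily; 'a' near 1 passes nearly
// everything through unchanged.
public class OnePoleLowPass {
    private float y = 0f;
    private final float a;

    // One common mapping from cutoff frequency to the smoothing coefficient.
    public OnePoleLowPass(float cutoffHz, float sampleRate) {
        this.a = 1f - (float) Math.exp(-2.0 * Math.PI * cutoffHz / sampleRate);
    }

    public float process(float x) {
        y += a * (x - y);
        return y;
    }
}
```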
// create our AudioContext sourceFile = sketchPath("") + "Drum_Loop_01.pde import beads.2. so we // set KillOnEnd to false sp. try { // initialize our SamplePlayer. OnePoleFilter filter1. Glide gainValue.pde // Filter_01. // set the gain based on mouse position gainValue. 20). 1.// as usual. // set the gain based on mouse position gainValue. 70).start(). // play the audio file sp.". } // if the user right-clicks. } } 66 Sonifying Processing .addInput(filter1).start().start(). 0.0.0). 50. // play the audio file sp.".setValue((float)mouseX/(float)width). text("Right click to hear the original loop. } // Although we're not drawing to the screen. then play the sound without a // filter else { // set the filter frequency to cutoff at 20kHz -> the top // of human hearing filter1.setToLoopStart().setFrequency(200. // move the playback pointer to the first loop point sp. // connect the filter to the gain g. we create a gain that will control the volume // of our sample player gainValue = new Glide(ac. gainValue). g = new Gain(ac.addInput(g).0).setValue((float)mouseX/(float)width).out.setToLoopStart(). // begin audio processing ac. text("Left click to hear a low pass filter in effect.setFrequency(20000. we need to have // a draw function in order to wait for mousePressed events void draw(){} // this routine is called whenever a mouse button is pressed // on the Processing sketch void mousePressed() { if( mouseButton == LEFT ) { // set the filter frequency to cutoff at 200Hz filter1. background(0). 50). // connect the Gain to the AudioContext ac. 50. // move the playback pointer to the first loop point sp. This example builds on the earlier synthesis examples.*.0.addSegment(800. lowPassFilter = new LPRezFilter(ac. 1000).0 to 800. 1000). the cutoff frequency and the resonance (0.2. Code Listing 5. filterCutoffEnvelope. Then it returns to 0. the AudioContext.0 over 1000 milliseconds. 
Setting up the filter and envelope is similar to how we have set up filters and envelopes in previous examples.2.0).5.addSegment(00. Then we connect the gain objects as usual. Glide modulatorFrequency.2. WavePlayer carrier. and the only other new addition to this example is the use of a frequency envelope in the mousePressed function. Low-Pass Resonant Filter with Envelope (Filter_02) In this example we’re going to attach an envelope to the filter cutoff frequency. filterCutoffEnvelope. Since we want to control the cutoff frequency with an envelope.pde // Filter_02.1. lowPassFilter.0).addInput(carrier). filterCutoffEnvelope = new Envelope(ac. The LPRezFilter constructor takes three arguments.pde // This is an extension of Frequency_Modulation_03.97). // create our AudioContext // declare our FM Synthesis unit generators WavePlayer modulator. Filter_02.0 over 1000 milliseconds. This example is going to use a resonant low pass filter implemented in the LPRezFilter object.0.pde // this adds a low pass filter controlled by an envelope import beads.2. 0. 00. but the envelope code could easily be applied to any of the Beads filter objects.0 . In this example we sweep the cutoff frequency from 0. we insert the name of the envelope object where we would normally indicate a cutoff value. 67 . filterCutoffEnvelope. // import the beads library AudioContext ac. control the frequency with // the function created above carrier = new WavePlayer(ac. 0.0) + mouseY. 1. 20. // connect the synthesizer to the filter lowPassFilter. void setup() { size(400. connect it to the gain envelope synthGain = new Gain(ac. 00. // set up our low pass filter // create the envelope that will control the cutoff // frequency filterCutoffEnvelope = new Envelope(ac. 30). // create the modulator.SINE). } }. // create a custom frequency modulation function Function frequencyModulation = new Function(modulator) { public float calculate() { // return x[0]. modulator = new WavePlayer(ac. 
Envelope filterCutoffEnvelope. // set up our gain envelope objects gainEnvelope = new Envelope(ac. // create a second WavePlayer. // our filter and filter envelope LPRezFilter lowPassFilter.// our gain and gain envelope Envelope gainEnvelope. 0. // create the LP Rez filter lowPassFilter = new LPRezFilter(ac. this WavePlayer will control the // frequency of the carrier modulatorFrequency = new Glide(ac. filterCutoffEnvelope.SINE). // create a Gain object. // initialize our AudioContext ac = new AudioContext(). // connect the carrier to the Gain input 68 Sonifying Processing .0). Gain synthGain.addInput(carrier). frequencyModulation. Buffer. scaled into an appropriate frequency // range return (x[0] * 500. Buffer. 300). gainEnvelope).97). modulatorFrequency.0). addInput(lowPassFilter). filterCutoffEnvelope. This time we run it through a band pass filter with the cutoff set to 5000Hz and the Q set to 0.setValue(mouseX). as the parameter after the AudioContext.synthGain.2. This example returns to playing the drum loop that we have used in previous examples. We do that in the BiquadFilter constructor. text("Click me to demonstrate a filter sweep!". // add points to the filter envelope sweep the frequency up // to 500Hz. 500). 1000).addInput(synthGain).addSegment(00. 1000).addSegment(0.7.0. we use a versatile filter unit generator called BiquadFilter. // connect the filter output to the AudioContext ac. gainEnvelope. A higher Q would indicate that fewer frequencies should pass. 500). } 5.3. 100).start().0.5. } void mousePressed() { // add some points to the gain envelope gainEnvelope. This unit generator can implement many different filter equations. 1000).addSegment(800. a small Q value indicates that a lot of frequencies around the cutoff will be allowed to pass through. // start audio processing ac. 69 . That range is referred to as the band.addSegment(0. Q is an interesting property of nearly all filters. 
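The Envelope object driving the cutoff above is just a piecewise-linear function of time: each addSegment(target, ms) call appends a straight line from the current value to the target over the given duration. A minimal version of that behavior, enough to check the 0-to-800-and-back sweep numerically, might look like this. It is a sketch, not Beads' Envelope class, and it only models reading the value at a given time.

```java
import java.util.ArrayList;
import java.util.List;

// A piecewise-linear envelope: each segment ramps from the previous
// value to a target over a duration, like Envelope.addSegment().
public class SegmentEnvelope {
    private static class Segment {
        final float startValue, endValue, startMs, endMs;
        Segment(float sv, float ev, float sm, float em) {
            startValue = sv; endValue = ev; startMs = sm; endMs = em;
        }
    }

    private final List<Segment> segments = new ArrayList<>();
    private float lastValue = 0f;
    private float lastEndMs = 0f;

    public void addSegment(float target, float durationMs) {
        segments.add(new Segment(lastValue, target, lastEndMs, lastEndMs + durationMs));
        lastValue = target;
        lastEndMs += durationMs;
    }

    // Value of the envelope at time t (milliseconds); holds the final
    // value after the last segment ends.
    public float valueAt(float t) {
        for (Segment s : segments) {
            if (t <= s.endMs) {
                float frac = (t - s.startMs) / (s.endMs - s.startMs);
                return s.startValue + frac * (s.endValue - s.startValue);
            }
        }
        return lastValue;
    }
}
```

With the two segments from the example — up to 800 over 1000 ms, back to 0 over another 1000 ms — the envelope reads 400 at 500 ms, peaks at 800 at the segment boundary, and is back at 400 halfway down the return ramp.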
so we have to specify which type of filter will be applied by the object. In the case of a band pass filter. In this example.out.addSegment(0. filter1 = new BiquadFilter(ac. hence the name band pass filter. then back down to 0 filterCutoffEnvelope. Q specifies the bandwidth of the area effected by the filter. background(0). Band-Pass Filter (Filter_03) A band pass filter allows a range of frequencies to pass through. } void draw() { // set the modulator frequency modulatorFrequency. while attenuating all other frequencies. gainEnvelope. 100.7. Basically.0. 600). import beads.2.*.5f).net/doc/net/beadsproject/beads/ugens/BiquadFilte r.printStackTrace().BP_SKIRT. e. 70 Sonifying Processing . Filter_03. ac = new AudioContext(). 0.pde // In this example. // this will hold the path to our audio file String sourceFile. Glide gainValue.beadsproject.pde // Filter_03.wav".BiquadFilter. // the SamplePlayer class will be used to play the audio file SamplePlayer sp.setKillOnEnd(false). Other filter types can be found in the Beads documentation at. } sp. exit(). // standard gain objects Gain g. 5000. } catch(Exception e) { println("Exception while attempting to load sample!"). new Sample(sourceFile)). // this is our filter unit generator BiquadFilter filter1.html#setType(int) Code Listing 5. void setup() { size(800. try { // Initialize our SamplePlayer sp = new SamplePlayer(ac.0f.3. we apply a band-pass filter to a drum // loop. // create our AudioContext sourceFile = sketchPath("") + "Drum_Loop_01. AudioContext ac. // connect the Gain to the AudioContext ac. 50.out.".9).filter1 = new BiquadFilter(ac. } void draw(){} void mousePressed() { gainValue.start(). 5. 0. we create a gain that will control the volume // of our sample player gainValue = new Glide(ac. then control it using a Low Frequency Oscillator.addInput(sp). sp.3. 50). // as usual. // connect the SamplePlayer to the filter filter1. ac. 71 .1.addInput(g). g. Low Frequency Oscillators. BiquadFilter. 
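Q and bandwidth are two ways of stating the same thing: for a band pass filter centered at f0, the bandwidth is roughly f0 / Q. So the Q of 0.5 used with the 5000 Hz cutoff in this example corresponds to a very wide band (about 10 kHz), while a Q of 50 at the same center would pass only a narrow 100 Hz sliver. A one-line helper (mine, for illustration — the exact bandwidth definition varies slightly between filter designs) makes the relationship concrete:

```java
// Relates a band pass filter's center frequency and Q to its
// approximate bandwidth: bandwidth = centerFrequency / Q.
// Higher Q means a narrower band around the center frequency.
public class FilterQ {
    public static float bandwidthHz(float centerHz, float q) {
        return centerHz / q;
    }
}
```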
are just like the other oscillators we’ve created.0. Panner objects allow a Beads user to place mono sounds within the stereo field.0f. or LFOs. 0.start(). gainValue). Other Effects This section will look at some of the other audio effects objects provided by Beads.setValue(0.setToLoopStart(). } 5. text("Click to hear a band pass filter with cutoff set to 5000Hz. 5000.addInput(filter1). Panner (Panner_01) In this example we create a Panner object.3. 20).BP_SKIRT.5f). 1. sp. g = new Gain(ac. // print the instructions on screen background(0). but they oscillate at frequencies below 20Hz. SINE).*. AudioContext ac. A value of 1. ac = new AudioContext().0. Panner_01.wav".0. try { sp = new SamplePlayer(ac.33.33Hz panLFO = new WavePlayer(ac. // a Low-Frequency-Oscillator for the panner WavePlayer panLFO.pde // Panner_01. Code Listing 5.0 indicates full left. inserting the LFO where we might normally indicate a fixed pan value.0 and 1. Buffer. // our Panner will control the stereo placement of the sound Panner p.0 indicates full right.pde // this example demonstrates how to use the Panner object // this example extends Filter_01. we can easily use the WavePlayer object to control this parameter. then we instantiate the Panner object.pde import beads. then pans the incoming audio appropriately.1. we connect the drum loop. p = new Panner(ac. Since sine waves oscillate between -1. panLFO). we create the LFO that will control the Panner.addInput(g).The Panner object takes a value between -1. to the Panner object. // initialize the LFO at a frequency of 0. String sourceFile. Glide gainValue. SamplePlayer sp. void setup() { size(800. p. new Sample(sourceFile)).3.0 and 1. // standard gain objects Gain g. 600). In this block of code. 0. through the Gain object g. sourceFile = sketchPath("") + "Drum_Loop_01. 72 Sonifying Processing . Finally. A value of -1. we create an LFO . // initialize the panner. 
// draw a black background text("Click to hear a Panner object connected to an LFO.". // merely replace "panLFO" with a number between -1. } sp. background(0). // begin audio processing ac.SINE). 20). to set a constant pan position. 50.addInput(g). under // 20Hz.33Hz.a Low // Frequency Oscillator .printStackTrace().addInput(p). // In this case. 1. we create a gain that will control the volume // of our sample player gainValue = new Glide(ac. } 73 .0 (RIGHT) p = new Panner(ac.out. // as usual. exit(). the LFO controls pan position.0 // (LEFT) and 1. e. sp. // connect the filter to the gain // In this block of code. // tell the user what to do } void draw(){} void mousePressed() { // set the gain based on mouse position and play the file gainValue. sp.setValue((float)mouseX/(float)width). 0.} catch(Exception e) { println("Exception while attempting to load sample!").start().start(). g = new Gain(ac. // connect the Panner to the AudioContext ac.setKillOnEnd(false). panLFO). panLFO = new WavePlayer(ac.0. g.and connect it to our panner. // Initialize the LFO at a frequency of 0. p.addInput(sp). 50).setToLoopStart(). 0. // A low frequency oscillator is just like any other // oscillator EXCEPT the frequency is subaudible. Buffer. gainValue).33. pde // this example demonstrates how to use the Reverb object import beads. Reverb is easy to hear in your own home. The reverb object is easy enough to set up. then connect an input. r = new Reverb(ac. Fittingly.setDamping(0.5. The Reverb object has four parameters: Damping. As the clap reverberates around your shower. set the size and damping. AudioContext ac. Code Listing 5.7). The room size essentially sets the duration of the reverb. // standard gain objects Gain g. Reverb (Reverb_01) This example builds on the Filter_01 example by adding a thick reverb to the drum loop. So.5). however.2. we initialize the reverb with a single output chaannel.addInput(g). Early Reflections Level. 
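A Panner has to turn a single pan position in [-1, 1] into two channel gains. One common recipe is the equal-power pan law sketched below, which keeps perceived loudness steady as the sound sweeps across the stereo field because the squares of the two gains always sum to 1. The text doesn't show which curve Beads' Panner uses internally, so treat this as an illustration of panning in general rather than of the Panner object itself.

```java
// Equal-power panning: map pan in [-1 (left) .. +1 (right)]
// to left/right gains whose squares always sum to 1.
public class EqualPowerPan {
    public static float leftGain(float pan) {
        return (float) Math.cos((pan + 1f) * Math.PI / 4.0);
    }
    public static float rightGain(float pan) {
        return (float) Math.sin((pan + 1f) * Math.PI / 4.0);
    }
}
```

At pan = -1 all the signal goes left, at +1 all of it goes right, and at center both channels sit at about 0.707 — which is 3 dB down, not 0.5, precisely so the total power stays constant. Feeding a slow sine LFO into the pan value, as the example does, sweeps the sound smoothly back and forth.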
while the late reflections level controls how the reverb tails off.3. Damping tells the reverb how to filter out high frequencies. The early reflections level sets how much the sound seems to bounce back. In this block of code. r. The same effect can be heard in a concert hall. when setting up a reverb. it makes many quiet echoes that give an impression of the size and shape of the room. String sourceFile. We only need to set a few parameters then connect the input and output the way we do with nearly every object.3.pde // Reverb_01.setSize(0. it’s important to route both the reverberated and the dry signal through to the AudioContext. or any space with large flat walls. Reverb_01. A heavily damped room is like a room that absorbs sound with lots of plush furniture and thick carpets. r. Late Reflections Level and Room Size. just go into your shower and clap. 74 Sonifying Processing . but it’s important to play around with all of these settings to really get a feel for them. SamplePlayer sp. 1). the reverb object does not have a parameter for reverb mix (the mix between wet and dry signal). Beads provides the Reverb object to allow programmers to create reverb effects.2.*. r. 75 .addInput(r). 0. the reverb unit only outputs a reverberated // signal.0. sourceFile = sketchPath("") + "Drum_Loop_01. we create a gain that will control the volume // of our sample player gainValue = new Glide(ac.out. } catch(Exception e) { println("Exception while attempting to load sample!"). gainValue).Glide gainValue.5 is the default r.setSize(0. // Set the damping (between 0 and 1) . use r. // our Reverberation unit generator Reverb r.setDamping(0. ac = new AudioContext(). } sp.the higher the // dampening.setEarlyReflectionsLevel(0-1). // then we will also need to connect the SamplePlayer to // the output. try { sp = new SamplePlayer(ac. 1). new Sample(sourceFile)). g. exit(). // as usual.wav". 1. the fewer resonant high frequencies r. ac.7). void setup() { size(800. 20). 
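The text treats Reverb as a black box with size and damping controls, which is how you will normally use it. The classic building block inside most algorithmic reverbs, though, is the feedback comb filter: a delay line whose output is fed back, attenuated, into its own input, producing a train of echoes that decays over time — the same thing you hear when you clap in the shower. The class below shows that one building block in plain Java; it is not Beads' actual Reverb implementation, which combines several such units.

```java
// A feedback comb filter: each echo is the previous one scaled by
// 'feedback'. Banks of these (plus allpass filters) are the classic
// ingredients of algorithmic reverbs.
public class FeedbackComb {
    private final float[] buffer;
    private int index = 0;
    private final float feedback; // 0..1, controls how long echoes last

    public FeedbackComb(int delaySamples, float feedback) {
        this.buffer = new float[delaySamples];
        this.feedback = feedback;
    }

    public float process(float in) {
        float out = buffer[index];           // echo from delaySamples ago
        buffer[index] = in + out * feedback; // feed the echo back in
        index = (index + 1) % buffer.length;
        return out;
    }
}
```

An impulse through a comb with feedback 0.5 comes back at half amplitude every pass, which is essentially what a larger "room size" and lower damping do in the Reverb object: more, longer-lasting echoes.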
g = new Gain(ac.5). 600).printStackTrace().setKillOnEnd(false). // Set the room size (between 0 and 1) 0.addInput(g). // connect the gain to the reverb // connect the Reverb to the AudioContext ac. // connect the filter to the gain // Create a new reverb with a single output channel r = new Reverb(ac. So if we want to hear the dry drums as well. e.addInput(sp). or // r. // You can also control a Reverb's early reflections and // late reverb.setLateReverbLevel(0-1).out.addInput(g). r. // To do so. // Remember. we set the attack to 30ms.0 and 1. and how long it takes for compression to stop.6. sp. c = new Compressor(ac. which allows us to turn up the gain. attack and decay. ratio. Attack and decay are durations that indicate how long it takes for compression to ramp up once the sound crosses the threshold. } 5. c. after the sound crosses the threshold. then we set the attack. and the threshold to 0. a 2:1 ratio will apply a gain of 50%. Compression has a four basic parameters: threshold. A 2:1 ratio indicates that for every 2 input decibels above the threshold. text("Click to hear a Reverb object in action. after the sound has dipped below the threshold. Neophytes tend to think of compression as a black box that can fix all sorts of problems with audio.ac. 50). 1). and even the Beads Compressor object includes a few other parameters. but in Beads this is indicated by a number between 0. So. decay. for most jobs. however.start(). c.3.0). // move the playback pointer to the first loop point sp. c. compression is simply a tool for evening out the loudness of recorded audio. } void draw(){} void mousePressed() { // set the gain based on mouse position gainValue. the decay to 200ms. In this block of code.". Compressor (Compressor_01) Compression is one of the most complex and misunderstood audio processing tools.setAttack(30). c. It makes the dynamic range of recorded audio narrower. There are many other parameters that might occur on a compressor. 
however.setValue((float)mouseX/(float)width). ratio and threshold. the compressor will output 1 decibel above the threshold. 50. This is usually indicated in decibels. 76 Sonifying Processing . Threshold is the loudness level at which compression begins to occur. we set up a compressor. At its core. the ratio to 4:1.setRatio(4.6).3.start().setDecay(200). background(0).setToLoopStart(). these four parameters are sufficient.setThreshold(0.0. In this example. Ratio is the amount of compression. attaching inputs and outputs as usual. 1). } catch(Exception e) { println("Exception while attempting to load sample!"). exit().*.3.pde // this example demonstrates how to use the Compressor object import beads. ac = new AudioContext().3. c = new Compressor(ac.Code Listing 5. // Create a new compressor with a single output channel. // The attack is how long it takes for compression to ramp // up. sourceFile = sketchPath("") + "Drum_Loop_01. try { sp = new SamplePlayer(ac. e.pde // Compressor_01.wav". // our Compressor unit generator void setup() { size(800. // standard gain objects Gain g. Compressor c.setKillOnEnd(false). 600). Compressor_01. String sourceFile. new Sample(sourceFile)). Glide gainValue. c. once the threshold is crossed. SamplePlayer sp. // The decay is how long it takes for compression to trail 77 .setAttack(30).printStackTrace(). AudioContext ac. } sp. // move the playback pointer to the first loop point sp.setDecay(200).start(). c. gainValue).0 = 2:1 = for // every two decibels above the threshold.".addInput(c). text("Click to hear the Compressor object in action. } 78 Sonifying Processing .6).setKnee(0. // The knee is an advanced setting that you should leave // alone unless you know what you are doing.// off. // connect the Compressor to the AudioContext ac. 0. ac.0. 20). sp. g = new Gain(ac. 1.out.addInput(sp). // connect the SamplePlayer to the compressor c.setRatio(4. once the threshold is crossed in the opposite // direction.5). c. 
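The ratio/threshold relationship described above can be written as a single formula: below the threshold the level passes unchanged; above it, the overshoot is divided by the ratio. Note that Beads expresses its threshold as a 0-to-1 amplitude rather than in decibels — the helper below works in dB purely because the arithmetic is easiest to read that way, so it illustrates the concept rather than the Compressor object's exact transfer curve.

```java
// Static compression curve: below the threshold, the level is
// unchanged; above it, the overshoot is divided by the ratio.
// Example: threshold -10 dB, ratio 4:1, input -2 dB (8 dB over)
// -> output = -10 + 8/4 = -8 dB.
public class CompressorCurve {
    public static float outputDb(float inputDb, float thresholdDb, float ratio) {
        if (inputDb <= thresholdDb) return inputDb;
        return thresholdDb + (inputDb - thresholdDb) / ratio;
    }
}
```

The attack and decay settings from the example simply control how quickly the compressor moves toward and away from this curve once the signal crosses the threshold; the curve itself is all ratio and threshold.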
// The ratio and the threshold work together to determine // how much a signal is squashed. } void draw(){} void mousePressed() { // set the gain based on mouse position gainValue. 50.setThreshold(0. // connect the Compressor to the gain g.setValue((float)mouseX/(float)width).0). a single decibel // will be output. gainValue = new Glide(ac. The ratio is the // NUMERATOR of the compression amount 2. The threshold is the loudness at which // compression can begin c.setToLoopStart(). 50). //c.start().addInput(c). background(0). WaveShape1). ws = new WaveShaper(ac. In this example we use an array of floats. or by using a short audio file. import beads. 0. -0. AudioContext ac.9. float[] WaveShape1 = {0. simply provide the AudioContext object and the wave shape.1. 0. -0. using the addInput method. String sourceFile. WaveShaper ws. WaveShaper (WaveShaper_01) Wave shaping takes an incoming waveform and maps it to values from a stored waveform. then use that wave shape to apply a strange distortion effect to our drum loop. we load a wave shape from an array.SAW).9. or from a Buffer. and to expand the harmonic spectrum of a sound.0. ws = new WaveShaper(ac. Buffer.9.3.9.*. -0. 0. Code Listing 5. You can specify wave shape using an array of floats.4. To instantiate the WaveShaper object. In this example.9. -0. WaveShaper_01. SamplePlayer sp.3.0.3. // standard gain objects Gain g. 0. You can experiment with this by inserting any of the pre-defined buffers. you merely need to insert a Buffer object into the WaveShaper constructor. We can load a wave shape from an array. Wave shaping can be used to apply a lot of different effects. // our WaveShaper unit generator 79 .pde This example demonstrates a WaveShaper. Then set up the input and output as usual. -0.5. 0. The process is just a table lookup of the value in the input to the value in the waveshape.4.5}. Glide gainValue.pde // // // // // // WaveShaper_01.9. In order to use a Buffer as a waveshape. 
which maps an incoming signal onto a specified wave shape. ac = new AudioContext().setPreGain(4.0).out.void setup() { size(800. exit().setPostGain(4. sourceFile = sketchPath("") + "Drum_Loop_01. } catch(Exception e) { // if there is an error. float[] WaveShape1 = {0. // as usual. -0. e. new Sample(sourceFile)).9.addInput(sp).9.addInput(g). 0.0). -0. // Try/Catch blocks will inform us if the file can't // be found try { // initialize our SamplePlayer. we create a gain that will control the volume // of our sample player gainValue = new Glide(ac. // instantiate the WaveShaper with the wave shape ws = new WaveShaper(ac. // begin audio processing ac. 600).9. 1. 20).0. //ws.printStackTrace(). 80 Sonifying Processing . -0. -0. WaveShape1). // connect the WaveShaper to the AudioContext ac.addInput(ws).3.start(). 0. } // we would like to play the sample multiple times.setKillOnEnd(false). -0. // connect the gain to the WaveShaper ws. show an error message (at the // bottom of the processing window) println("Exception while attempting to load sample!"). // uncomment these lines to set the gain on the WaveShaper //ws.0. g. g = new Gain(ac.wav". // connect the filter to the gain // This wave shape applies a strange-sounding distortion. gainValue).1. so we // set KillOnEnd to false sp.5}. 0.9. 0.9.0. 0. 0. loading the file // indicated by the sourceFile string sp = new SamplePlayer(ac.9. 50).setValue((float)mouseX/(float)width). text("Click to hear a WaveShaper in action. sp. sp. } void draw(){} void mousePressed() { gainValue. } 81 .setToLoopStart().background(0).".start(). 50. out.sampled. AudioFormat af = new AudioFormat(44100. we can use the RecordToSample object to perform the task of saving audio to disk.sound. rts = new RecordToSample(ac. so it is compatible on any platform.1 Using RecordToSample The RecordToSample object allows a programmer to store the incoming audio data in a buffer known as a Sample.AudioFormat. Finally. outputSample = new Sample(af.AudioFileFormat. 
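The "table lookup" at the heart of wave shaping is easy to write out: each incoming sample, assumed to lie in [-1, 1], is used as a position in the stored shape array, interpolating linearly between entries. The class below is my own minimal version of that lookup; Beads' WaveShaper may handle interpolation and out-of-range input differently.

```java
// Wave shaping as a table lookup: map an input sample in [-1, 1]
// onto a stored shape, interpolating linearly between entries.
public class WaveShaperTable {
    private final float[] shape;

    public WaveShaperTable(float[] shape) {
        this.shape = shape;
    }

    public float process(float in) {
        // clamp the input to [-1, 1], then scale it to a table position
        if (in < -1f) in = -1f;
        if (in > 1f) in = 1f;
        float pos = (in + 1f) * 0.5f * (shape.length - 1);
        int i = (int) pos;
        if (i >= shape.length - 1) return shape[shape.length - 1];
        float frac = pos - i;
        return shape[i] + frac * (shape[i + 1] - shape[i]);
    }
}
```

With the shape {-1, 0, 1} this lookup is an identity map and the audio passes through unchanged; the jagged, asymmetric array used in the example is what produces the strange-sounding distortion, because it bends the waveform and adds new harmonics.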
6. Saving Your Sounds

If you're using a Mac or Linux machine, then the natural choice for saving your work is the RecordToFile object. Unfortunately, this object relies on the org.tritonus library, which is not supported in Windows. Fortunately, we can use the RecordToSample object to perform the task of saving audio to disk. The RecordToSample object is implemented using pure Java sound, so it is compatible on any platform.

6.1. Using RecordToSample

The RecordToSample object allows a programmer to store the incoming audio data in a buffer known as a Sample. Then, after we have captured some audio data, we can use the Sample class to write an audio file with just a few lines of code. The Sample class is very useful because Java itself doesn't support streaming audio directly to the disk.

The RecordToSample object works just like most objects, with one exception. Since this object has no outputs, we need to tell it when to update by adding it as a dependent of the master AudioContext. This allows it to update whenever the AudioContext is updated.

The RecordToSample instantiation process is slightly more intricate than it is for most objects. First, we instantiate an AudioFormat that indicates the format for the recorded data. Then we instantiate a Sample object to hold the audio data. Finally, we create a RecordToSample object using the previous two objects as parameters. Then we tell the AudioContext that the new object is a dependent.

AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true);
outputSample = new Sample(af, 44100);
rts = new RecordToSample(ac, outputSample, RecordToSample.Mode.INFINITE);
ac.out.addDependent(rts);

Note the import statements at the top of this example. These are necessary in order to specify audio formats for recording and saving.

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFileFormat.Type;
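As a quick aside on what this AudioFormat actually describes, the snippet below (plain Java, runnable outside of Processing and Beads) computes how many bytes one second of audio in this format occupies. Note that the 44100 passed to the Sample constructor above is a frame count, not a byte count. This is an illustrative sketch, not part of the Beads API.

```java
import javax.sound.sampled.AudioFormat;

public class FormatMath {
    public static void main(String[] args) {
        // the same format used for recording: 44.1kHz, 16-bit, mono,
        // signed samples, big-endian byte order
        AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true);

        // one frame holds one 16-bit sample per channel = 2 bytes
        int frameSize = af.getFrameSize();

        // bytes needed to store one second of audio in this format
        int bytesPerSecond = (int)(af.getFrameRate() * frameSize);

        System.out.println(frameSize);       // 2
        System.out.println(bytesPerSecond);  // 88200
    }
}
```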
We can save the recorded data by calling the write function that is provided by the Sample class. All we have to provide is a filename and an audio format.

outputSample.write(sketchPath("") + "out.wav", AudioFileFormat.Type.WAVE);

Code Listing 6.1. RecordToSample_01.pde

// RecordToSample_01.pde
// This is an extension of Delay_01.pde

// this is necessary so that we can use the File class
import java.io.*;

// these imports allow us to specify audio file formats
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFileFormat.Type;

// import the beads library
import beads.*;

// create our AudioContext
AudioContext ac;

// declare our unit generators
WavePlayer modulator;
WavePlayer carrier;
Glide modulatorFrequency;

// our envelope and gain objects
Envelope gainEnvelope;
Gain synthGain;

// our delay objects
TapIn delayIn;
TapOut delayOut;
Gain delayGain;

// our recording objects
RecordToSample rts;
Sample outputSample;

void setup() {
  size(600, 300);

  // initialize our AudioContext
  ac = new AudioContext();

  // create the modulator, this WavePlayer will control
  // the frequency of the carrier
  modulatorFrequency = new Glide(ac, 20, 30);
  modulator = new WavePlayer(ac, modulatorFrequency, Buffer.SINE);

  // create a custom frequency modulation function
  Function frequencyModulation = new Function(modulator) {
    public float calculate() {
      // return x[0], scaled into an appropriate
      // frequency range
      return (x[0] * 100.0) + mouseY;
    }
  };

  // create a second WavePlayer, control the frequency
  // with the function created above
  carrier = new WavePlayer(ac, frequencyModulation, Buffer.SINE);

  // create the envelope object that will control the gain
  gainEnvelope = new Envelope(ac, 0.0);

  // create a Gain object, connect it to the gain envelope
  synthGain = new Gain(ac, 1, gainEnvelope);

  // connect the carrier to the Gain input
  synthGain.addInput(carrier);

  // set up our delay
  // create the delay input - the second parameter sets the
  // maximum delay time in milliseconds
  delayIn = new TapIn(ac, 2000);

  // connect the synthesizer to the delay
  delayIn.addInput(synthGain);

  // create the delay output - the final parameter is the
  // length of the initial delay time in milliseconds
  delayOut = new TapOut(ac, delayIn, 500.0);

  // the gain for our delay
  delayGain = new Gain(ac, 1, 0.50);

  // connect the delay output to the gain
  delayGain.addInput(delayOut);

  // to feed the delay back into itself, simply
  // uncomment this line
  //delayIn.addInput(delayGain);

  // setup the recording unit generator
  try{
    // specify the recording format
    AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true);

    // create a buffer for the recording
    outputSample = new Sample(af, 44100);

    // initialize the RecordToSample object
    rts = new RecordToSample(ac, outputSample, RecordToSample.Mode.INFINITE);
  }
  catch(Exception e){
    e.printStackTrace();
    exit();
  }
  rts.addInput(synthGain);
  rts.addInput(delayGain);
  ac.out.addDependent(rts);

  // connect the Gain output to the AudioContext
  ac.out.addInput(synthGain);

  // connect the delay output to the AudioContext
  ac.out.addInput(delayGain);

  // start audio processing
  ac.start();

  background(0);
  text("Click me to demonstrate delay!", 100, 100);
  text("Press s to save the performance and exit", 100, 120);
}

void draw() {
  // set the modulator frequency
  modulatorFrequency.setValue(mouseX);
}

// event handler for mouse clicks
void mousePressed() {
  // when the mouse button is pressed, add a 50ms attack
  // segment to the envelope
  // and a 300ms decay segment to the envelope
  gainEnvelope.addSegment(0.7, 50);
  gainEnvelope.addSegment(0.0, 300);
}

// event handler for key presses
void keyPressed() {
  if( key == 's' || key == 'S' ) {
    rts.pause(true);
    try{
      outputSample.write(sketchPath("") + "outputSample.wav", javax.sound.sampled.AudioFileFormat.Type.WAVE);
    }
    catch(Exception e){
      e.printStackTrace();
      exit();
    }
    rts.kill();
    exit();
  }
}

7. Using Audio Input

In this chapter we're going to see how to incorporate audio input into our Processing sketches. There are a million reasons why you might want to use audio input. Perhaps your Processing sketch is interactive. Perhaps you want to use externally-generated audio to shape your Processing visuals. Or maybe you want to use Beads as an effects processor in a sound installation.

7.1. Getting an Audio Input Unit Generator (Audio_Input_01)

The audio input unit generator is easy to use in Beads. It only differs from other unit generators in how it is instantiated. When you create an audio input, you don't call a constructor; rather, you simply ask the AudioContext to give you access to an audio input by calling the getAudioInput() function. One difference is that the audio input unit generator is of type UGen, rather than a class that derives from UGen. So when you set up an audio input unit generator, declare it as a UGen.

UGen microphoneIn = ac.getAudioInput();

The resulting unit generator can then be connected in the usual fashion using the addInput routine. For more on audio input in Beads, see the AudioContext javadoc at www.beadsproject.net/doc/net/beadsproject/beads/core/AudioContext.html

BUG NOTE: At the time of this writing, selecting an audio input other than your default input appears to be buggy. For the time being, if you want to take input from something other than your default, it is probably best just to temporarily change your default audio input. If you're working on a Mac, you might also look into the Jack library. Unfortunately, in my experience, the Windows port of Jack is quite buggy.

Code Listing 7.1. Audio_Input_01.pde

// Audio_Input_01.pde
import beads.*;

AudioContext ac;

void setup() {
  size(800, 800);
  ac = new AudioContext();

  // get an AudioInput UGen from the AudioContext
  // this will setup an input from whatever input is your
  // default audio input (usually the microphone in)
  // changing audio inputs in beads is a little bit janky (as
  // of this writing)
  // so it's best to change your default input temporarily,
  // if you want to use a different input
  UGen microphoneIn = ac.getAudioInput();

  // set up our usual master gain object
  Gain g = new Gain(ac, 1, 0.5);
  g.addInput(microphoneIn);
  ac.out.addInput(g);

  ac.start();
}

// draw the input waveform on screen
// this code is based on code from the Beads tutorials
// written by Ollie Brown
void draw() {
  loadPixels();
  //set the background
  Arrays.fill(pixels, color(0));
  //scan across the pixels
  for(int i = 0; i < width; i++) {
    // for each pixel, work out where in the current audio
    // buffer we are
    int buffIndex = i * ac.getBufferSize() / width;
    // then work out the pixel height of the audio data at
    // that point
    int vOffset = (int)((1 + ac.out.getValue(0, buffIndex)) * height / 2);
    //draw into Processing's convenient 1-D array of pixels
    pixels[vOffset * height + i] = color(255);
  }
  // paint the new pixel array to the screen
  updatePixels();
}

7.2. Recording and Playing a Sample (Record_Sample_01)

In this example, we're going to see how we can use Beads to record audio, then use that recording later in the program. This program simply records a sample and plays it back. Since the playback is executed via the SamplePlayer object, however, we can easily extend that audio chain to manipulate the recorded buffer in innumerable different ways.

There are two basic parts to this program: the recording and the playback. The recording is controlled using the left mouse button, via the start and pause routines in the RecordToSample object, which is set up in this block of code.

// setup a recording format
AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true);
// create a holder for audio data
targetSample = new Sample(af, 44100);
// initialize the RecordToSample object
rts = new RecordToSample(ac, targetSample, RecordToSample.Mode.INFINITE);

The playback is initiated using the right mouse button. When the user right-clicks, we set up a new SamplePlayer based on the recorded sample, tell it to destroy itself when it finishes, connect it to our master gain, and call the start function.

SamplePlayer sp = new SamplePlayer(ac, targetSample);
sp.setKillOnEnd(true);
g.addInput(sp);
sp.start();

Code Listing 7.2. Record_Sample_01.pde

// Record_Sample_01.pde
// we need to import the java sound audio format definitions
import javax.sound.sampled.AudioFormat;
import beads.*;

AudioContext ac;

// this object will hold our audio data
Sample targetSample;
// this object is used to start and stop recording
RecordToSample rts;
// are we currently recording a sample?
boolean recording = false;
// our master gain
Gain g;

void setup() {
  size(800, 800);

  // initialize the AudioContext
  ac = new AudioContext();

  // get an AudioInput UGen from the AudioContext
  // this will setup an input from whatever input is your
  // default audio input (usually the microphone in)
  UGen microphoneIn = ac.getAudioInput();

  // setup the recording unit generator
  try{
    // setup a recording format
    AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true);
    // create a holder for audio data
    targetSample = new Sample(af, 44100);
    // initialize the RecordToSample object
    rts = new RecordToSample(ac, targetSample, RecordToSample.Mode.INFINITE);
  }
  catch(Exception e){
    e.printStackTrace();
    exit();
  }
  // connect the microphone input to the RecordToSample
  rts.addInput(microphoneIn);
  // pause the RecordToSample object
  rts.pause(true);
  // tell the AudioContext to work with the RecordToSample
  ac.out.addDependent(rts);

  // set up our usual master gain object
  g = new Gain(ac, 1, 0.5);
  g.addInput(microphoneIn);
  ac.out.addInput(g);

  ac.start();
}

void draw() {
  background(0);
  text("Left click to start/stop recording. Right click to play.", 100, 100);
  if( recording ) text("Recording...", 100, 120);
}

void mousePressed() {
  // when the user left-clicks
  if( mouseButton == LEFT ) {
    // if the RecordToSample object is currently paused
    if( rts.isPaused() ) {
      // note that we are now recording
      recording = true;
      // clear the target sample
      targetSample.clear();
      // and start recording
      rts.start();
    }
    // if the RecordToSample is recording
    else {
      // note that we are no longer recording
      recording = false;
      // and stop recording
      rts.pause(true);
    }
  }
  // if the user right-clicks
  else {
    // Instantiate a new SamplePlayer with the recorded
    // sample, then connect it to our master gain.
    SamplePlayer sp = new SamplePlayer(ac, targetSample);
    // tell the SamplePlayer to destroy itself when it's done
    sp.setKillOnEnd(true);
    g.addInput(sp);
    sp.start();
  }
}
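The waveform drawing in Audio_Input_01 above maps each sample value, which lies between -1 and 1, onto a vertical pixel offset with the expression (1 + value) * height / 2. The plain-Java sketch below isolates that mapping. Note, as an aside, that a full-scale sample of exactly 1.0 maps to the row just past the bottom of the window; the sketch works because real input rarely reaches full scale.

```java
public class WaveformMapping {
    // map a sample value in [-1, 1] to a row index in a window
    // that is `height` pixels tall, as in the draw() loop above
    static int vOffset(float sampleValue, int height) {
        return (int)((1 + sampleValue) * height / 2);
    }

    public static void main(String[] args) {
        int height = 800;
        System.out.println(vOffset(-1.0f, height)); // 0   (top of the window)
        System.out.println(vOffset( 0.0f, height)); // 400 (vertical center)
        System.out.println(vOffset( 1.0f, height)); // 800 (one past the last row)
    }
}
```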
7.3. Granulating from Audio Input (Granulating_Input_01)

This example extends the previous example by showing how we can manipulate the audio when we initiate playback. In this case, we simply replace the SamplePlayer object with a GranularSamplePlayer object.

Code Listing 7.3. Granulating_Input_01.pde

//Granulating_Input_01.pde
// This example is just like the previous example, except
// when we initiate playback, we use a GranularSamplePlayer
// if you want to automate the recording and granulation
// process, then you could use a clock object

// we need to import the java sound audio format definitions
import javax.sound.sampled.AudioFormat;
import beads.*;

// declare the parent AudioContext
AudioContext ac;

// this object will hold our audio data
Sample targetSample;
// this object is used to start and stop recording
RecordToSample rts;
// are we currently recording a sample?
boolean recording = false;
// our master gain
Gain g;

void setup() {
  size(800, 800);

  ac = new AudioContext();

  // get an AudioInput UGen from the AudioContext
  // this will setup an input from whatever input is your
  // default audio input (usually the microphone in)
  UGen microphoneIn = ac.getAudioInput();

  // setup the recording unit generator
  try{
    AudioFormat af = new AudioFormat(44100.0f, 16, 1, true, true);
    // create a holder for audio data
    targetSample = new Sample(af, 44100);
    // initialize the RecordToSample object
    rts = new RecordToSample(ac, targetSample, RecordToSample.Mode.INFINITE);
  }
  catch(Exception e){
    e.printStackTrace();
    exit();
  }
  // connect the microphone input to the RecordToSample
  // object
  rts.addInput(microphoneIn);
  // pause the RecordToSample object
  rts.pause(true);
  // tell the AudioContext to work with the RecordToSample
  // object
  ac.out.addDependent(rts);

  // set up our usual master gain object
  g = new Gain(ac, 1, 0.5);
  g.addInput(microphoneIn);
  ac.out.addInput(g);

  ac.start();
}

void draw() {
  background(0);
  text("Left click to start/stop recording. Right click to granulate.", 100, 100);
  if( recording ) text("Recording...", 100, 120);
}

void mousePressed() {
  // when the user left-clicks
  if( mouseButton == LEFT ) {
    // if the RecordToSample object is currently paused
    if( rts.isPaused() ) {
      // note that we are now recording
      recording = true;
      // clear the target sample
      targetSample.clear();
      // and start recording
      rts.start();
    }
    // if the RecordToSample is recording
    else {
      // note that we are no longer recording
      recording = false;
      // and stop recording
      rts.pause(true);
    }
  }
  // if the user right-clicks
  else {
    // Instantiate a new GranularSamplePlayer with the
    // recorded sample.
    GranularSamplePlayer gsp = new GranularSamplePlayer(ac, targetSample);

    // set the grain interval to about 20ms between grains
    gsp.setGrainInterval(new Static(ac, 20f));
    // set the grain size to about 50ms (smaller is sometimes
    // a bit too grainy for my taste)
    gsp.setGrainSize(new Static(ac, 50f));
    // set the randomness, which will add variety to all the
    // parameters
    gsp.setRandomness(new Static(ac, 50f));

    // tell the GranularSamplePlayer to destroy itself when
    // it finishes
    gsp.setKillOnEnd(true);

    // connect the GranularSamplePlayer to the Gain
    g.addInput(gsp);
    gsp.start();
  }
}

8. Using MIDI

Currently, Beads doesn't provide unit generators for working with the MIDI protocol. This isn't an enormous drawback though, and there are good reasons to keep the audio separate from MIDI. Processing already has a great library for simple MIDI input and output: The MIDI Bus. In this chapter, we're going to look at a few ways of integrating MIDI into your Beads projects using The MIDI Bus. It's easier than you might think!

IMPORTANT: Some of the examples in this chapter require you to have a MIDI device connected to your system in order to function properly.

8.1. Installing The MIDI Bus

To install The MIDI Bus, download the latest release from The MIDI Bus website (...com/themidibus.php). Unzip the contents, then copy the "themidibus" directory into the "libraries" folder within your sketchbook. To find out where your sketchbook is located, click "File" then "Preferences" within the Processing window. Then there is an option for "Sketchbook Location."

8.2. Basic MIDI Input

In the first two MIDI examples, we're going to see how to use Beads to respond to MIDI events. MIDI events usually come from a MIDI keyboard, but of course, you can route MIDI events from any MIDI enabled device or software. To see these examples in action, you will need to have a MIDI device connected to your computer.

Setting up a MidiBus object with the default parameters is a snap. The MidiBus constructor takes four parameters. The first parameter is the calling program. The second parameter is the index of the MIDI input device. The third parameter is the index of the MIDI output device. The final parameter is the name of the new bus. In this example, we will use the default input and output devices, indicated by zeroes.

busA = new MidiBus(this, 0, 0, "busA");
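Under the hood, The MIDI Bus sits on top of Java's built-in javax.sound.midi package, so you can also enumerate the MIDI devices the JVM can see without any Processing libraries at all. The following is a plain-Java sketch of that idea, not part of The MIDI Bus API.

```java
import javax.sound.midi.MidiDevice;
import javax.sound.midi.MidiSystem;

public class ListMidiDevices {
    public static void main(String[] args) {
        // ask the Java MIDI system for every installed device;
        // this includes inputs, outputs, sequencers and synthesizers
        MidiDevice.Info[] devices = MidiSystem.getMidiDeviceInfo();
        System.out.println(devices.length + " MIDI devices found");
        for (MidiDevice.Info info : devices) {
            System.out.println(info.getName() + " - " + info.getDescription());
        }
    }
}
```

The indices printed by a listing like this are what the second and third MidiBus constructor arguments refer to.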
8.2.1. MIDI-Controlled Sine Wave Synthesizer (MIDI_SYNTH_01)

The first MIDI Bus example plays a sine wave in response to MIDI input. After the bus is setup, all we have to do is wait for MIDI events and respond to them as they come in. Using the MIDI Bus library, this can be done by implementing the noteOn function.

Each Beads WavePlayer object can only output a single wave at a time. Hence, if we want to create a polyphonic synthesizer, we need to create a new WavePlayer object each time a note is pressed (and destroy them when a note is released). In this example, we use an ArrayList to store our Beads,* and we use a subclass called SimpleSynth to group the Beads for a single pitch together. The SimpleSynth constructor sets up a WavePlayer, an Envelope and a Gain, then connects all three to the master gain object we created when the program started.

// set up the new WavePlayer,
// convert the MidiPitch to a frequency
wp = new WavePlayer(ac, 440.0 * pow(2.0, ((float)midiPitch - 59.0)/12.0), Buffer.SINE);
e = new Envelope(ac, 0.0);
g = new Gain(ac, 1, e);
e.addSegment(0.5, 300);
g.addInput(wp);
MasterGain.addInput(g);

Finally, it adds a segment to the envelope, to tell the Gain to rise to 0.5 over 300ms. Notes are then killed within the noteOff function that is called by the MidiBus when a note is released.

* You can also use the BeadArray class provided by the Beads library.

Code Listing 8.2.1. MIDI_SYNTH_01.pde

// MIDI_SYNTH_01.pde
// this example builds a simple midi synthesizer
// for each incoming midi note, we create a new set of beads
// (encapsulated by a class)
// these beads are stored in a vector and destroyed when we
// get a corresponding note-off message

// Import the MidiBus library
import themidibus.*;
// import the beads library
import beads.*;

AudioContext ac;
Gain MasterGain;

// our parent MidiBus object
MidiBus busA;

ArrayList synthNotes = null;

void setup() {
  size(600, 400);

  background(0);
  text("This program plays sine waves in response to Note-On messages.", 100, 100);
  text("This program will not do anything if you do not have a MIDI device", 100, 112);
  text("connected to your computer.", 100, 124);

  synthNotes = new ArrayList();

  ac = new AudioContext();
  MasterGain = new Gain(ac, 1, 0.5);
  ac.out.addInput(MasterGain);
  ac.start();

  // the MidiBus constructor takes four arguments
  // 1 - the calling program (this)
  // 2 - the input device
  // 3 - the output device
  // 4 - the bus name
  // in this case, we just use the defaults
  busA = new MidiBus(this, 0, 0, "busA");
}

void draw() {
  for( int i = 0; i < synthNotes.size(); i++ ) {
    SimpleSynth s = (SimpleSynth)synthNotes.get(i);
    // if this bead has been killed
    if( s.g.isDeleted() ) {
      // destroy the synth (set things to null so that memory
      // cleanup can occur)
      s.destroy();
      // then remove the parent synth
      synthNotes.remove(s);
    }
  }
}

// respond to MIDI note-on messages
void noteOn(int channel, int pitch, int velocity, String bus_name) {
  background(50);
  stroke(255);
  fill(255);
  text("Note On:", 100, 100);
  text("Channel:" + channel, 100, 120);
  text("Pitch:" + pitch, 100, 140);
  text("Velocity:" + velocity, 100, 160);
  text("Received on Bus:" + bus_name, 100, 180);

  synthNotes.add(new SimpleSynth(pitch));
}

// respond to MIDI note-off messages
void noteOff(int channel, int pitch, int velocity, String bus_name) {
  background(0);
  stroke(255);
  fill(255);
  text("Note Off:", 100, 100);
  text("Channel:" + channel, 100, 120);
  text("Pitch:" + pitch, 100, 140);
  text("Velocity:" + velocity, 100, 160);
  text("Received on Bus:" + bus_name, 100, 180);

  for( int i = 0; i < synthNotes.size(); i++ ) {
    SimpleSynth s = (SimpleSynth)synthNotes.get(i);
    if( s.pitch == pitch ) {
      s.kill();
      synthNotes.remove(s);
      break;
    }
  }
}

// this is our simple synthesizer object
class SimpleSynth {
  public WavePlayer wp = null;
  public Envelope e = null;
  public Gain g = null;
  public int pitch = -1;

  // the constructor for our sine wave synthesizer
  SimpleSynth(int midiPitch) {
    pitch = midiPitch;

    // set up the new WavePlayer, convert the MidiPitch to a
    // frequency
    wp = new WavePlayer(ac, 440.0 * pow(2.0, ((float)midiPitch - 59.0)/12.0), Buffer.SINE);
    e = new Envelope(ac, 0.0);
    g = new Gain(ac, 1, e);

    e.addSegment(0.5, 300);

    g.addInput(wp);
    MasterGain.addInput(g);
  }

  // when this note is killed, ramp the amplitude down to 0
  // over 300ms
  public void kill() {
    e.addSegment(0.0, 300, new KillTrigger(g));
  }

  // destroy the component beads so that they can be cleaned
  // up by the java virtual machine
  public void destroy() {
    wp.kill();
    e.kill();
    g.kill();
    wp = null;
    e = null;
    g = null;
  }
}

8.2.2. MIDI-Controlled FM Synthesizer (MIDI_SYNTH_02)

The second MIDI input example is very similar to the first. This example expands on the first by using The MIDI Bus to enumerate the available MIDI devices and show which ones are actually connected. The array of available MIDI devices is created by calling the MidiBus.availableInputs function. The list of devices that are being watched is retrieved by calling the attachedInputs function which is exposed by our MidiBus object.

String[] available_inputs = MidiBus.availableInputs();
…
println(busA.attachedInputs());

The other way that this example differs from the first MIDI input example is that we create a more complex synthesizer. This time, the Synth subclass creates a frequency modulation patch for each incoming note, then attaches a filter with an LFO on the cutoff.
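The pitch-to-frequency conversion used by these synthesizer classes is the standard equal-temperament relationship: each MIDI step is one twelfth of an octave. Here it is isolated in plain Java, using the conventional reference of A above middle C (MIDI note 69) at 440Hz; note that the listings in this chapter offset by 59 rather than 69, which shifts the tuning of the whole instrument.

```java
public class MidiPitch {
    // convert a MIDI note number to a frequency in Hz using
    // A440 equal temperament: f = 440 * 2^((pitch - 69) / 12)
    static double midiToFrequency(int midiPitch) {
        return 440.0 * Math.pow(2.0, (midiPitch - 69) / 12.0);
    }

    public static void main(String[] args) {
        System.out.println(midiToFrequency(69)); // 440.0 (A4)
        System.out.println(midiToFrequency(81)); // 880.0 (A5, one octave up)
        System.out.println(midiToFrequency(60)); // ~261.63 (middle C)
    }
}
```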
Code Listing 8.2.2. MIDI_SYNTH_02.pde

// MIDI_SYNTH_02.pde
// this example builds a simple midi synthesizer
// for each incoming midi note, we create a new set of beads
// (encapsulated by a class)
// these beads are stored in a vector and destroyed when we
// get a corresponding note-off message

import themidibus.*;
import beads.*;

AudioContext ac;
Gain MasterGain;

// The MidiBus object that will handle midi input
MidiBus busA;

// this ArrayList will hold synthesizer objects
// we will instantiate a new synthesizer object for each note
ArrayList synthNotes = null;

void setup() {
  size(600, 400);

  background(0);
  text("This program is a synthesizer that responds to NoteOn Messages.", 100, 100);
  text("This program will not do anything if you do not have a MIDI device", 100, 112);
  text("connected to your computer.", 100, 124);

  synthNotes = new ArrayList();

  ac = new AudioContext();
  MasterGain = new Gain(ac, 1, 0.3);
  ac.out.addInput(MasterGain);
  ac.start();

  // List all available input devices
  println();
  println("Available MIDI Devices:");
  //Returns an array of available input devices
  String[] available_inputs = MidiBus.availableInputs();
  for(int i = 0; i < available_inputs.length; i++)
    System.out.println("["+i+"] \""+available_inputs[i]+"\"");

  // Create a first new MidiBus attached to the IncommingA
  // Midi input device and the OutgoingA Midi output device.
  busA = new MidiBus(this, 0, 2, "busA");

  //Print the devices attached as inputs to busA
  println();
  println("Inputs on busA");
  println(busA.attachedInputs());
}

void draw() {
  for( int i = 0; i < synthNotes.size(); i++ ) {
    Synth s = (Synth)synthNotes.get(i);
    // if this bead has been killed
    if( s.g.isDeleted() ) {
      // destroy the synth (set things to null so that memory
      // cleanup can occur)
      s.destroy();
      // then remove the parent synth
      synthNotes.remove(s);
    }
  }
}

// respond to MIDI note-on messages
void noteOn(int channel, int pitch, int velocity, String bus_name) {
  background(50);
  stroke(255);
  fill(255);
  text("Note On:", 100, 100);
  text("Channel:" + channel, 100, 120);
  text("Pitch:" + pitch, 100, 140);
  text("Velocity:" + velocity, 100, 160);
  text("Received on Bus:" + bus_name, 100, 180);
  synthNotes.add(new Synth(pitch));
}

// respond to MIDI note-off messages
void noteOff(int channel, int pitch, int velocity, String bus_name) {
  background(0);
  stroke(255);
  fill(255);
  text("Note Off:", 100, 100);
  text("Channel:" + channel, 100, 120);
  text("Pitch:" + pitch, 100, 140);
  text("Velocity:" + velocity, 100, 160);
  text("Received on Bus:" + bus_name, 100, 180);

  for( int i = 0; i < synthNotes.size(); i++ ) {
    Synth s = (Synth)synthNotes.get(i);
    if( s.pitch == pitch ) {
      s.kill();
      synthNotes.remove(s);
      break;
    }
  }
}

// this is our synthesizer object
class Synth {
  public WavePlayer carrier = null;
  public WavePlayer modulator = null;
  public Envelope e = null;
  public Gain g = null;
  public int pitch = -1;

  // our filter and filter envelope
  LPRezFilter lowPassFilter;
  WavePlayer filterLFO;

  Synth(int midiPitch) {
    // get the midi pitch and create a couple holders for the
    // midi pitch
    pitch = midiPitch;
    float fundamentalFrequency = 440.0 * pow(2.0, ((float)midiPitch - 59.0)/12.0);
    Static ff = new Static(ac, fundamentalFrequency);

    // instantiate the modulator WavePlayer
    modulator = new WavePlayer(ac, 0.5 * fundamentalFrequency, Buffer.SINE);

    // create our frequency modulation function
    Function frequencyModulation = new Function(modulator, ff) {
      public float calculate() {
        // the x[1] here is the value of a sine wave
        // oscillating at the fundamental frequency
        return (x[0] * 1000.0) + x[1];
      }
    };

    // instantiate the carrier WavePlayer
    // set up the carrier to be controlled by the frequency
    // of the modulator
    carrier = new WavePlayer(ac, frequencyModulation, Buffer.SINE);

    // set up the filter and LFO
    filterLFO = new WavePlayer(ac, 8.0, Buffer.SINE);
    Function filterCutoff = new Function(filterLFO) {
      public float calculate() {
        // set the filter cutoff to oscillate between 1500Hz
        // and 2500Hz
        return ((x[0] * 500.0) + 2000.0);
      }
    };
    lowPassFilter = new LPRezFilter(ac, filterCutoff, 0.96);
    lowPassFilter.addInput(carrier);

    // set up and connect the gains
    e = new Envelope(ac, 0.0);
    g = new Gain(ac, 1, e);
    g.addInput(lowPassFilter);
    MasterGain.addInput(g);

    e.addSegment(0.5, 300);
  }

  public void kill() {
    e.addSegment(0.0, 300, new KillTrigger(g));
  }

  public void destroy() {
    carrier.kill();
    modulator.kill();
    lowPassFilter.kill();
    filterLFO.kill();
    e.kill();
    g.kill();
    carrier = null;
    modulator = null;
    lowPassFilter = null;
    filterLFO = null;
    e = null;
    g = null;
  }
}

8.3. Basic MIDI Output

In this brief section we see how to use The MIDI Bus to send MIDI events to a MIDI synthesizer residing on your computer. This topic is only briefly touched on because it doesn't involve the Beads library at all.

8.3.1. Sending MIDI to the Default Device (MIDI_Output_01)

When we want to send MIDI messages to a MIDI synthesizer, we simply instantiate the MidiBus and tell it which synthesizer to use. Notice that the constructor used here differs from the one used in the previous example. This time we indicate the MIDI output device by name, in this case, "Java Sound Synthesizer."

myBus = new MidiBus(this, -1, "Java Sound Synthesizer");

These three lines send a note-on message, wait for 100ms, then send a note-off message.

// start a midi pitch
myBus.sendNoteOn(channel, pitch, velocity);
// wait for 100ms
delay(100);
// then stop the note we just started
myBus.sendNoteOff(channel, pitch, velocity);

And this block of code switches to a random MIDI instrument, as specified by the General MIDI standard.

// This is the status byte for a program change
int status_byte = 0xC0;
// random voice
int byte1 = (int)random(128);
// This is not used for program change so ignore it and set
// it to 0
int byte2 = 0;
//Send the custom message
myBus.sendMessage(status_byte, channel, byte1, byte2);
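For context on that 0xC0 value: in the raw MIDI protocol, the upper four bits of a status byte give the message type (0xC0 is program change) and the lower four bits give the channel, so a program change on channel n travels on the wire as the status byte 0xC0 | n. The plain-Java illustration below is not part of The MIDI Bus API, which, as in the listing, takes the channel as a separate argument.

```java
public class MidiStatusBytes {
    public static void main(String[] args) {
        int PROGRAM_CHANGE = 0xC0; // message type in the high nibble

        for (int channel = 0; channel < 16; channel++) {
            // combine message type and channel into one status byte
            int status = PROGRAM_CHANGE | channel;
            System.out.printf("channel %2d -> status 0x%02X%n", channel, status);
        }
        // e.g. channel 0 -> 0xC0, channel 9 -> 0xC9, channel 15 -> 0xCF
    }
}
```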
Code Listing 8.3.1. MIDI_Output_01.pde

// MIDI_Output_01.pde
// As of this writing, Beads doesn't include functions for
// MIDI output. Hence, this example doesn't relate to Beads;
// it's simply a demonstration of how to use The MIDI Bus to
// send MIDI messages. It's based on the Basic.pde example
// from The MIDI Bus.

// Import the midibus library
import themidibus.*;

// declare The MidiBus
MidiBus myBus;

void setup() {
  size(600, 400);
  // set the background to black
  background(0);
  text("This program plays random MIDI notes using the Java Sound Synthesizer.", 100, 100);

  // Create a new MidiBus with no input device and the
  // default Java Sound Synthesizer as the output device.
  myBus = new MidiBus(this, -1, "Java Sound Synthesizer");
}

void draw() {
  int channel = 0;
  int pitch = 48 + (int)random(48);
  int velocity = 64 + (int)random(64);

  // THIS BIT OF CODE PLAYS A MIDI NOTE
  // start a midi pitch
  myBus.sendNoteOn(channel, pitch, velocity);
  // wait for 100ms
  delay(100);
  // then stop the note we just started
  myBus.sendNoteOff(channel, pitch, velocity);

  // THIS BIT OF CODE CHANGES THE MIDI INSTRUMENT
  // This is the status byte for a program change
  int status_byte = 0xC0;
  // This will be the preset you are sending with your
  // program change - a random voice
  int byte1 = (int)random(128);
  // This is not used for program change so ignore it and set
  // it to 0
  int byte2 = 0;
  //Send the custom message
  myBus.sendMessage(status_byte, channel, byte1, byte2);

  // we could control pitch bend and other parameters using
  // this call
  //int number = 0;
  //int value = 90;
  // Send a controllerChange
  //myBus.sendControllerChange(channel, number, value);

  // wait for a random amount of time less than 400ms
  delay((int)random(400));
}

9. Analysis

The primary focus of this tutorial is using Beads to generate sound with the intent of adding sound into a pre-existing Processing sketch. There are times, however, when you may want to do the reverse. Rather than generating sound based on visuals, you may want to generate visuals based on sound. Analysis is the act of extracting meaningful information, or features, and this chapter will demonstrate how we can use Beads to extract meaningful information from audio data.

9.1. Audio Analysis with Beads

Analysis programs are executed in three basic steps. First, the incoming audio is segmented into short chunks. This is necessary because many analysis algorithms are impractical or impossible to execute on large or continuous sections of audio. After it is segmented, the audio is passed through a series of analysis unit generators which modify the data and change the data into a usable format. Finally, the salient information, or features, are extracted from the results.

The first step is executed by a unit generator called the ShortFrameSegmenter. The ShortFrameSegmenter is the start of the analysis chain in all of the analysis examples in this chapter. It is instantiated and connected like many of the unit generators used for sound generation. The constructor takes the AudioContext as a parameter, then we simply connect an input to send audio to it.

ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
sfs.addInput(ac.out);

To connect other unit generators to the ShortFrameSegmenter, we can call the addListener function. This is slightly different from using the addInput routine that is used in most of the rest of the Beads library.

sfs.addListener(fft);

Finally, the AudioContext must be told when to update the ShortFrameSegmenter. This is accomplished by adding the ShortFrameSegmenter as a dependent.

ac.out.addDependent(sfs);
9.2. Fast Fourier Transform (FFT_01)

When we think of a sound wave, we usually envision something like a sine wave: a smoothly undulating curve drawn on the face of an oscilloscope or a computer screen. These sorts of graphs are called waveforms. Waveforms are a representation of sound pressure variation over time, or voltage variation over time. Waveforms allow us to easily see the amplitude of a sound, but it's more difficult to deduce frequency information from anything other than the most simple of waveforms.

In the early 1800s, Jean Baptiste Joseph Fourier was studying the flow of heat through metals. Specifically, he was tasked with finding a solution to Napoleon's overheating cannons, but his theoretical work has impacted virtually every branch of mathematics, science and engineering. Fourier showed that a complex periodic waveform, such as those describing the flow of energy through a body, can be broken down into a sum of many sine waves, and that through a complex calculation, we can calculate how much each sine wave contributes to the final waveform. In other words, every sound is made up of many frequencies, and the Fourier Transform allows us to see those frequencies. The Fourier Transform allows us to transform a time-domain waveform into a frequency-domain spectrum: it shows us what frequencies are present in a sound, thereby revealing a more clear picture of a sound's timbre. This has obvious applications in electronic music. Today, we calculate the Fourier Transform using a class of algorithms known as Fast Fourier Transforms (FFT).

In Beads, we can use the FFT object to get the spectrum of a sound. This example, which is based on code from the Beads website, uses the FFT to paint the spectrum of a sound on screen. This is where we see a subtle difference in the way that the Beads analysis objects are used: rather than connecting objects using the addInput function, we use the addListener function to tell one analysis object to take data from another. We start by setting up a ShortFrameSegmenter object, which breaks the incoming audio into discrete chunks, then we initialize our FFT and connect it to the segmenter.

ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
sfs.addInput(ac.out);
FFT fft = new FFT();
sfs.addListener(fft);
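The time-domain/frequency-domain relationship can be made concrete with a naive discrete Fourier transform in plain Java. This is an illustration only — Beads' FFT object uses a far faster algorithm — but it shows the essential idea: a tone with 5 cycles per analysis window produces its strongest magnitude in frequency bin 5.

```java
// A naive DFT sketch (O(n^2), illustration only). Each bin's magnitude
// says how strongly that frequency is present in the input window.
public class DftSketch {

    // magnitudes of the first n/2 frequency bins
    public static double[] magnitudes(double[] signal) {
        int n = signal.length;
        double[] mags = new double[n / 2];
        for (int bin = 0; bin < n / 2; bin++) {
            double re = 0.0, im = 0.0;
            for (int t = 0; t < n; t++) {
                double angle = 2.0 * Math.PI * bin * t / n;
                re += signal[t] * Math.cos(angle);
                im -= signal[t] * Math.sin(angle);
            }
            mags[bin] = Math.sqrt(re * re + im * im);
        }
        return mags;
    }

    // index of the strongest frequency bin
    public static int strongestBin(double[] signal) {
        double[] mags = magnitudes(signal);
        int best = 0;
        for (int i = 1; i < mags.length; i++) {
            if (mags[i] > mags[best]) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        int n = 64;
        double[] tone = new double[n];
        for (int t = 0; t < n; t++) {
            tone[t] = Math.sin(2.0 * Math.PI * 5 * t / n); // 5 cycles per window
        }
        System.out.println(strongestBin(tone)); // 5
    }
}
```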
Then we connect a PowerSpectrum object. This simply forwards the part of the FFT output that we are interested in.

PowerSpectrum ps = new PowerSpectrum();
fft.addListener(ps);

Finally, we tell the AudioContext that it has to monitor and update the ShortFrameSegmenter.

ac.out.addDependent(sfs);
ac.start();

In the draw function, we interpret the results of the signal chain and paint them on screen. First we get the results, the features, from the PowerSpectrum object.

float[] features = ps.getFeatures();

Then we loop through each x-coordinate in our window, and paint a vertical bar representing the frequency between 20Hz and 20kHz that corresponds to this location.

for(int x = 0; x < width; x++) {
  int featureIndex = (x * features.length) / width;
  int barHeight = Math.min((int)(features[featureIndex] * height), height - 1);
  line(x, height, x, height - barHeight);
}

Code Listing 9.1. FFT_01.pde

This example is based in part on an example included with the Beads download, originally written by Beads creator Ollie Bown. It draws the frequency information for a sound on screen.

// FFT_01.pde

import beads.*;

AudioContext ac;
PowerSpectrum ps;

color fore = color(255, 255, 255);
color back = color(0, 0, 0);

void setup() {
  size(600, 600);
  ac = new AudioContext();

  // load up a sample included in code download
  SamplePlayer player = null;
  try {
    // Load up a new SamplePlayer using an included audio file.
    player = new SamplePlayer(ac, new Sample(sketchPath("") + "Drum_Loop_01.wav"));
  } catch(Exception e) {
    // If there is an error, print the steps that got us to that error.
    e.printStackTrace();
  }

  // set up a master gain object
  Gain g = new Gain(ac, 2, 0.3);
  // connect the SamplePlayer to the master Gain
  g.addInput(player);
  ac.out.addInput(g);

  // In this block of code, we build an analysis chain.
  // The ShortFrameSegmenter breaks the audio into short,
  // discrete chunks.
  ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
  sfs.addInput(ac.out);

  // FFT stands for Fast Fourier Transform. All you really need to
  // know about the FFT is that it lets you see what frequencies are
  // present in a sound. The waveform we usually look at when we see a
  // sound displayed graphically is time-domain sound data; the FFT
  // transforms that into frequency-domain data.
  FFT fft = new FFT();
  // connect the FFT object to the ShortFrameSegmenter
  sfs.addListener(fft);

  // the PowerSpectrum pulls the amplitude information from
  // the FFT calculation (essentially)
  ps = new PowerSpectrum();
  // connect the PowerSpectrum to the FFT
  fft.addListener(ps);

  // list the frame segmenter as a dependent, so that the
  // AudioContext knows when to update it
  ac.out.addDependent(sfs);

  // start processing audio
  ac.start();
}

// In the draw routine, we will interpret the FFT results and
// draw them on screen.
void draw() {
  background(back);
  stroke(fore);

  // The getFeatures() function is a key part of the Beads analysis
  // library. It returns an array of floats. How this array of floats
  // is defined (1 dimension, 2 dimensions, etc.) is based on the
  // calling unit generator. In this case, the PowerSpectrum returns
  // an array with the power of 256 spectral bands.
  float[] features = ps.getFeatures();

  // if any features are returned
  if(features != null) {
    // for each x coordinate in the Processing window
    for(int x = 0; x < width; x++) {
      // figure out which featureIndex corresponds to this x-position
      int featureIndex = (x * features.length) / width;
      // calculate the bar height for this feature
      int barHeight = Math.min((int)(features[featureIndex] * height), height - 1);
      // draw a vertical line corresponding to the frequency
      // represented by this x-position
      line(x, height, x, height - barHeight);
    }
  }
}

9.3. Frequency Analysis (Frequency_01 and Resynthesis_01)

In these two examples, we see how to calculate and respond to specific frequencies in incoming audio. The first example tries to guess the strongest frequency that is present in a sound, then set a sine wave to play that frequency. The second example does the same thing on a larger scale: it calculates the 32 strongest frequencies in a sound and tries to output sine waves at those frequencies.
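One practical detail used in Frequency_01: the per-frame frequency estimates are noisy, so the sketch does not jump straight to each detected frequency. Instead it keeps a weighted running average (40% new estimate, 60% history) and feeds that to the Glide. The smoothing step, isolated in plain Java as an illustration:

```java
// The running-average smoothing used by the frequency-tracking example,
// pulled out into a small testable class (an illustration, not a Beads
// object). Large jumps in the input are damped into gradual glides.
public class FrequencySmoother {
    private float mean;

    public FrequencySmoother(float initial) {
        mean = initial;
    }

    // same weights as the example: 40% new estimate, 60% history
    public float update(float inputFrequency) {
        mean = (0.4f * inputFrequency) + (0.6f * mean);
        return mean;
    }

    public static void main(String[] args) {
        FrequencySmoother s = new FrequencySmoother(400.0f);
        // a jump to 1000 Hz only moves the average partway there
        System.out.println(s.update(1000.0f)); // ~640.0
        System.out.println(s.update(1000.0f)); // ~784.0, converging toward 1000
    }
}
```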
If you just open up these examples and run them, you might notice that they don't work particularly well. It's important to remember how many variables affect this data. The sound data that is captured by your computer is affected by your microphone, where the microphone is located in relation to the sound source, the room that the sound occurs in, the other electrical signals in that room and the analog-to-digital converter in the computer, not to mention extraneous background noise from your computer fan or other sources. All of these factors can potentially introduce unwanted noise into a signal, so measuring incoming frequencies is a very tricky thing. I find that the first example works best if you whistle near the microphone. The second example works well on the sound of a speaking voice.

In the first example, we use the Frequency unit generator to try to pick the strongest frequency in an incoming signal. This object is instantiated with a single parameter, the sample rate of the audio. The Frequency object only returns one feature when the getFeatures function is called. This feature is its best guess at the strongest frequency in a signal.

The second example switches out the Frequency object in favor of the SpectralPeaks object. The SpectralPeaks object locates a set of the strongest frequencies in a spectrum. In this case, we tell it to look for the 32 strongest peaks. The major difference between the Frequency object and the SpectralPeaks object is that the SpectralPeaks object returns a 2-dimensional array of values. The first dimension in the array specifies the peak number. The second dimension specifies frequency and amplitude. In this chunk of code, we get the 2-dimensional array and loop through it, setting frequencies and amplitudes based on the data; then we try to follow those frequencies with sine waves. The end effect is a sort of low-fidelity vocoder.

float[][] features = sp.getFeatures();
for( int i = 0; i < numPeaks; i++ ) {
  if(features[i][0] < 10000.0)
    frequencyGlide[i].setValue(features[i][0]);
  if(features[i][1] > 0.01)
    gainGlide[i].setValue(features[i][1]);
  else
    gainGlide[i].setValue(0.0);
}

Code Listing 9.3. Frequency_01.pde and Resynthesis_01.pde
// Frequency_01.pde
// This example attempts to guess the strongest frequency in the
// signal that comes in via the microphone, then plays a sine wave at
// that frequency. Unfortunately, this doesn't work very well for
// singing, but it works quite well for whistling (in my testing).

import beads.*;

AudioContext ac;
WavePlayer wp;
Glide frequencyGlide;
Frequency f;
PowerSpectrum ps;

float meanFrequency = 400.0;

color fore = color(255, 255, 255);
color back = color(0, 0, 0);

void setup() {
  size(600, 600);
  // set up the parent AudioContext object
  ac = new AudioContext();

  // set up a master gain object
  Gain g = new Gain(ac, 2, 0.5);
  ac.out.addInput(g);

  // get a microphone input unit generator
  UGen microphoneIn = ac.getAudioInput();

  // set up the WavePlayer and the Glide that will control
  // its frequency
  frequencyGlide = new Glide(ac, 50, 10);
  wp = new WavePlayer(ac, frequencyGlide, Buffer.SINE);
  // connect the WavePlayer to the master gain
  g.addInput(wp);

  // In this block of code, we build an analysis chain.
  // The ShortFrameSegmenter breaks the audio into short,
  // discrete chunks.
  ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
  // connect the microphone input to the ShortFrameSegmenter
  sfs.addInput(microphoneIn);

  // the FFT transforms that into frequency domain data
  FFT fft = new FFT();
  // connect the ShortFrameSegmenter object to the FFT
  sfs.addListener(fft);

  // The PowerSpectrum turns the raw FFT output into proper
  // audio data.
  ps = new PowerSpectrum();
  // connect the FFT to the PowerSpectrum
  fft.addListener(ps);

  // The Frequency object tries to guess the strongest frequency for
  // the incoming data. This is a tricky calculation, as there are
  // many frequencies in any real world sound.
  f = new Frequency(44100.0f);
  // connect the PowerSpectrum to the Frequency object
  ps.addListener(f);

  // list the frame segmenter as a dependent, so that the
  // AudioContext knows when to update it
  ac.out.addDependent(sfs);
  // start processing audio
  ac.start();
}

// In the draw routine, we will write the current frequency
// on the screen and set the frequency of our sine wave.
void draw() {
  background(back);
  stroke(fore);

  // Get the data from the Frequency object. Only run this 1/4 of
  // frames so that we don't overload the Glide object with frequency
  // changes.
  if( f.getFeatures() != null && random(1.0) > 0.75 ) {
    // get the data from the Frequency object
    float inputFrequency = f.getFeatures();

    // Only use frequency data that is under 3000Hz - this will
    // include all the fundamentals of most instruments. In other
    // words, data over 3000Hz will usually be erroneous (if we are
    // using microphone input and instrumental/vocal sounds).
    if( inputFrequency < 3000 ) {
      // store a running average
      meanFrequency = (0.4 * inputFrequency) + (0.6 * meanFrequency);
      // set the frequency stored in the Glide object
      frequencyGlide.setValue(meanFrequency);
    }
  }

  // draw the average frequency on screen
  text(" Input Frequency: " + meanFrequency, 100, 100);
}

// Resynthesis_01.pde
// This example resynthesizes a tone using additive synthesis and the
// SpectralPeaks object. The result should be a very simple,
// low-fidelity vocoder.

import beads.*;

AudioContext ac;
PowerSpectrum ps;
SpectralPeaks sp;

// how many peaks to track and resynth
int numPeaks = 32;

Gain masterGain;
Gain[] g;
Glide[] gainGlide;
Glide[] frequencyGlide;
WavePlayer[] wp;

float meanFrequency = 400.0;

color fore = color(255, 255, 255);
color back = color(0, 0, 0);

void setup() {
  size(600, 600);
  // set up the parent AudioContext object
  ac = new AudioContext();

  // set up a master gain object
  masterGain = new Gain(ac, 2, 0.5);
  ac.out.addInput(masterGain);

  // get a microphone input unit generator
  UGen microphoneIn = ac.getAudioInput();

  wp = new WavePlayer[numPeaks];
  g = new Gain[numPeaks];
  gainGlide = new Glide[numPeaks];
  frequencyGlide = new Glide[numPeaks];

  for( int i = 0; i < numPeaks;
 i++ ) {
    // set up the WavePlayer and the Glides that will control
    // its frequency and gain
    frequencyGlide[i] = new Glide(ac, 440.0, 1);
    wp[i] = new WavePlayer(ac, frequencyGlide[i], Buffer.SINE);
    gainGlide[i] = new Glide(ac, 0.0, 1);
    g[i] = new Gain(ac, 1, gainGlide[i]);

    // connect the WavePlayer to the master gain
    g[i].addInput(wp[i]);
    masterGain.addInput(g[i]);
  }

  // in this block of code, we build an analysis chain
  // the ShortFrameSegmenter breaks the audio into short,
  // discrete chunks
  ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
  // connect the microphone input to the ShortFrameSegmenter
  sfs.addInput(microphoneIn);

  // the FFT transforms that into frequency domain data
  FFT fft = new FFT();
  // connect the ShortFrameSegmenter object to the FFT
  sfs.addListener(fft);

  // the PowerSpectrum turns the raw FFT output into proper audio data
  ps = new PowerSpectrum();
  // connect the FFT to the PowerSpectrum
  fft.addListener(ps);

  // the SpectralPeaks object stores the N highest peaks
  sp = new SpectralPeaks(ac, numPeaks);
  // connect the PowerSpectrum to the SpectralPeaks object
  ps.addListener(sp);

  // list the frame segmenter as a dependent, so that the
  // AudioContext knows when to update it
  ac.out.addDependent(sfs);
  // start processing audio
  ac.start();
}

// in the draw routine, we will set the frequency and gain of each
// sine wave based on the detected spectral peaks
void draw() {
  background(back);
  stroke(fore);
  text("Use the microphone to trigger resynthesis", 100, 100);
  // get the data from the SpectralPeaks object
  // only run this 1/4 of frames so that we don't overload the
  // Glide objects with frequency changes
  if( sp.getFeatures() != null && random(1.0) > 0.5 ) {
    // get the data from the SpectralPeaks object
    float[][] features = sp.getFeatures();
    for( int i = 0; i < numPeaks; i++ ) {
      if(features[i][0] < 10000.0)
        frequencyGlide[i].setValue(features[i][0]);
      if(features[i][1] > 0.01)
        gainGlide[i].setValue(features[i][1]);
      else
        gainGlide[i].setValue(0.0);
    }
  }
}

9.4. Beat Detection (Beat_Detection_01)

In the final analysis example, we look at how we can detect beats in an audio stream. We use an analysis chain similar to the one used in the previous examples; however, we end with SpectralDifference and PeakDetector unit generators. In this example we draw a shape on screen whenever a beat is detected.

Code Listing 9.4. Beat_Detection_01.pde

This example is based in part on an example included with the Beads download, originally written by Beads creator Ollie Bown.

// Beat_Detection_01.pde

import beads.*;

AudioContext ac;
PeakDetector beatDetector;

// tracks the time
int time;

// The brightness of the on-screen shape is controlled by the
// following global variable. The draw() routine decreases it
// over time.
float brightness;

void setup() {
  size(300, 300);

  // set up the AudioContext and the master Gain object
  ac = new AudioContext();
  Gain g = new Gain(ac, 2, 0.2);
  ac.out.addInput(g);

  // load up a sample included in code download
  SamplePlayer player = null;
  try {
    // load up a new SamplePlayer using an included audio file
    player = new SamplePlayer(ac, new Sample(sketchPath("") + "Drum_Loop_01.wav"));
    // connect the SamplePlayer to the master Gain
    g.addInput(player);
  } catch(Exception e) {
    // if there is an error, print the steps that got us to
    // that error
    e.printStackTrace();
  }

  // Set up the ShortFrameSegmenter. This class allows us to
  // break an audio stream into discrete chunks.
  ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
  // how large is each chunk?
  sfs.setChunkSize(2048);
  sfs.setHopSize(441);
  // connect the sfs to the AudioContext
  sfs.addInput(ac.out);

  FFT fft = new FFT();
  sfs.addListener(fft);

  PowerSpectrum ps = new PowerSpectrum();
  fft.addListener(ps);

  // The SpectralDifference unit generator does exactly what
  // it sounds like. It calculates the difference between two
  // consecutive spectrums returned by an FFT (through a
  // PowerSpectrum object).
  SpectralDifference sd = new SpectralDifference(ac.getSampleRate());
  ps.addListener(sd);

  // we will use the PeakDetector object to actually find our
  // beats
  beatDetector = new PeakDetector();
  sd.addListener(beatDetector);

  // the threshold is the gain level that will trigger the
  // beat detector - this will vary on each recording
  beatDetector.setThreshold(0.2f);
  beatDetector.setAlpha(.9f);

  // whenever our beat detector finds a beat, set a global
  // variable
  beatDetector.addMessageListener(
    new Bead() {
      protected void messageReceived(Bead b) {
        brightness = 1.0;
      }
    }
  );

  // tell the AudioContext that it needs to update the
  // ShortFrameSegmenter
  ac.out.addDependent(sfs);
  // start working with audio data
  ac.start();

  time = millis();
}

// In this example we detect onsets in the audio signal and pulse the
// screen when they occur. The draw method draws a shape on screen
// whenever a beat is detected.
void draw() {
  background(0);
  fill(brightness * 255);
  ellipse(width/2, height/2, width/2, height/2);

  // decrease brightness over time
  int dt = millis() - time;
  brightness -= (dt * 0.01);
  if (brightness < 0) brightness = 0;
  time += dt;

  // set threshold and alpha to the mouse position
  beatDetector.setThreshold((float)mouseX/width);
  beatDetector.setAlpha((float)mouseY/height);
}
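The spectral-difference idea can be demonstrated without any audio I/O. This plain-Java sketch is a conceptual illustration, not the Beads implementation: it compares two consecutive magnitude spectra and flags a beat when the increase in energy crosses a threshold, analogous to the PeakDetector's threshold setting above.

```java
// A conceptual onset detector: sum the increases in energy between two
// consecutive spectra (spectral flux) and compare against a threshold.
// (Illustration only; Beads' SpectralDifference/PeakDetector are more
// sophisticated and run on live frames.)
public class OnsetSketch {

    // positive spectral difference between two magnitude spectra
    public static float spectralDifference(float[] prev, float[] curr) {
        float sum = 0.0f;
        for (int i = 0; i < curr.length; i++) {
            float d = curr[i] - prev[i];
            if (d > 0) sum += d; // only count increases in energy
        }
        return sum;
    }

    public static boolean isBeat(float[] prev, float[] curr, float threshold) {
        return spectralDifference(prev, curr) > threshold;
    }

    public static void main(String[] args) {
        float[] quiet = {0.1f, 0.1f, 0.1f, 0.1f};
        float[] hit   = {0.9f, 0.8f, 0.7f, 0.6f};
        System.out.println(isBeat(quiet, hit, 0.5f)); // energy jump -> beat
        System.out.println(isBeat(hit, hit, 0.5f));   // no change -> no beat
    }
}
```

This also makes clear why the threshold "will vary on each recording": a louder or denser recording produces larger frame-to-frame differences even between beats.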
10. Miscellaneous

This chapter reviews some important unit generators that didn't fit into the earlier sections.

10.1. Clock (Clock_01)

The Clock unit generator is used to generate evenly spaced events, such as beats in a piece of music. It is useful any time you want to synchronize audio to a regular tick. The constructor takes two arguments: the AudioContext and the duration between beats in milliseconds. Then we set the number of ticks that occur in each beat, and we remind the AudioContext to update the clock by adding it as a dependent.

beatClock = new Clock(ac, 1000);
beatClock.setTicksPerBeat(4);
ac.out.addDependent(beatClock);

Finally, we handle the tick events. In this case, we create a new Bead that acts as a tick event handler. The event handler creates a new synthesized tone each time it receives a tick.

Bead noteGenerator = new Bead () {
  public void messageReceived(Bead message) {
    synthNotes.add(new Synth(20 + (int)random(88)));
  }
};
beatClock.addMessageListener(noteGenerator);

Code Listing 10.1. Clock_01.pde

// Clock_01.pde
// This example builds a simple midi synthesizer,
// then we trigger random notes using the Clock class.

import beads.*;

// the Beads AudioContext that will oversee audio production
// and output
AudioContext ac;

// our master gain object
Gain MasterGain;

// this ArrayList will hold synthesizer objects
// we will instantiate a new synthesizer object for each note
ArrayList synthNotes = null;

// our clock object will control timing
Clock beatClock = null;

void setup() {
  size(600, 400);
  ac = new AudioContext();

  MasterGain = new Gain(ac, 1, 0.3);
  ac.out.addInput(MasterGain);

  synthNotes = new ArrayList();

  beatClock = new Clock(ac, 1000);
  beatClock.setTicksPerBeat(4);
  // this tells the AudioContext when to update the Clock
  ac.out.addDependent(beatClock);
  Bead noteGenerator = new Bead () {
    public void messageReceived(Bead message) {
      synthNotes.add(new Synth(20 + (int)random(88)));
    }
  };
  beatClock.addMessageListener(noteGenerator);

  ac.start();

  // set the background to black
  background(0);
  // tell the user what to do!
  text("This program generates randomized synth notes.", 100, 100);
}

void draw() {
  background(0);
  for( int i = 0; i < synthNotes.size(); i++ ) {
    Synth s = (Synth)synthNotes.get(i);
    if( s.g.isDeleted() ) {
      // then remove the parent synth
      synthNotes.remove(i);
      s = null;
    }
  }
}

// this is our synthesizer object
class Synth {
  public WavePlayer carrier = null;
  public WavePlayer modulator = null;
  public Envelope e = null;
  public Gain g = null;
  public int pitch = -1;
  public boolean alive = true;

  // our filter and filter envelope
  LPRezFilter lowPassFilter;
  WavePlayer filterLFO;

  Synth(int midiPitch) {
    // get the midi pitch and create a couple holders for the
    // midi pitch
    pitch = midiPitch;
    float fundamentalFrequency = 440.0 * pow(2, ((float)midiPitch - 59.0)/12.0);
    Static ff = new Static(ac, fundamentalFrequency);

    // instantiate the modulator WavePlayer
    modulator = new WavePlayer(ac, 0.5 * fundamentalFrequency, Buffer.SINE);

    // create our frequency modulation function
    Function frequencyModulation = new Function(modulator, ff) {
      public float calculate() {
        // the x[1] here is the value of a sine wave
        // oscillating at the fundamental frequency
        return (x[0] * 1000.0) + x[1];
      }
    };

    // instantiate the carrier WavePlayer
    carrier = new WavePlayer(ac, frequencyModulation, Buffer.SINE);

    // set up the filter and LFO (randomized LFO frequency)
    filterLFO = new WavePlayer(ac, 1.0 + random(100), Buffer.SINE);
    Function filterCutoff = new Function(filterLFO) {
      public float calculate() {
        // set the filter cutoff to oscillate between 1500Hz
        // and 2500Hz
        return ((x[0] * 500.0) + 2000.0);
      }
    };
    lowPassFilter = new LPRezFilter(ac, filterCutoff, 0.96);
    lowPassFilter.addInput(carrier);

    // set up and connect the gains
    e = new Envelope(ac, 0.0);
    g = new Gain(ac, 1, e);
    g.addInput(lowPassFilter);
    MasterGain.addInput(g);
    // create a randomized Gain envelope for this note
    e.addSegment(0.5, 10 + (int)random(500));
    e.addSegment(0.4, 10 + (int)random(500));
    e.addSegment(0.0, 10 + (int)random(500), new KillTrigger(g));
  }

  public void destroyMe() {
    carrier.kill();
    modulator.kill();
    lowPassFilter.kill();
    filterLFO.kill();
    e.kill();
    g.kill();
    carrier = null;
    modulator = null;
    lowPassFilter = null;
    filterLFO = null;
    e = null;
    g = null;
    alive = false;
  }
}

Appendix A. Custom Beads

In this section, we take a look at how to extend the functionality of the Beads library in Processing. It should be noted, however, that Beads already contains most of the functionality that most users are going to need. This section is aimed at advanced programmers who are comfortable with Java, and with the basic concepts of digital signal processing.

A.1. Custom Functions

Most of the time when you want to do something that isn't encapsulated by a pre-made unit generator, you can create that functionality using a custom function. Custom functions are handy little classes that can be embedded right into a Processing/Beads program. We have already encountered custom functions a number of times in this tutorial.

Frequency Modulation

The first custom function we encountered was the frequency modulation function that we created in the example Frequency_Modulation_01. In FM synthesis, one sine wave is used to control the frequency of another sine wave. The sine function, however, oscillates around 0, outputting values between -1.0 and 1.0. To produce sidebands, we need the modulator to oscillate around a value in the audible range. In this first custom function, we take a WavePlayer as input, then we do the math to output values within the range of audible frequencies.

Function frequencyModulation = new Function(modulator) {
  public float calculate() {
    return (x[0] * 50.0) + 200.0;
  }
};

Notice the parameter in the constructor.
This gives the custom function access to the value of the modulator. Whatever unit generators are passed into the constructor can be accessed by the array x.

new Function(modulator)

Then notice that only the calculate function is actually implemented. We get the value from the modulator by accessing x[0], then we multiply it by 50. At this point, the output will oscillate between -50.0 and 50.0. To bring that higher into the audible range, we add 200. So the output oscillates between 150.0 and 250.0 at the speed specified by the frequency of the modulator WavePlayer.

return (x[0] * 50.0) + 200.0;

Ring Modulation

In the same chapter, we encountered a slightly more complex use of the Function object. This time, the Function takes two unit generators as parameters, and simply outputs the product of their values.

Function ringModulation = new Function(carrier, modulator) {
  public float calculate() {
    return x[0] * x[1];
  }
};

Notice how the values of the parameters are accessed within the calculate routine. Since the carrier UGen was passed in first, its value is accessed via x[0]. Since the modulator UGen was passed in second, its value is accessed via x[1]. As you can see in this example as well as the previous example, custom functions are useful any time you want to encapsulate a bit of math.

A.1.1. Custom Mean Filter (Custom_Function_01)

In this example, we show how you can easily create a new signal processing unit generator using the Function object. In this case, we create a simple mean filter unit generator. A mean filter is one of the simplest filters. It simply averages the last few frames of data and outputs the result. This has an effect similar to a low-pass filter, with more high frequencies filtered out as more frames are averaged. In this example, we average the last four frames.
Function meanFilter = new Function(sp) {
  float[] previousValues = new float[3];
  public float calculate() {
    float mean = 0.25 * (previousValues[0] + previousValues[1] + previousValues[2] + x[0]);
    previousValues[2] = previousValues[1];
    previousValues[1] = previousValues[0];
    previousValues[0] = x[0];
    return mean;
  }
};

Rather than using the addInput method, we pass unit generators in via the Function constructor. Later, when we want to connect the custom function to another unit generator, we can return to the addInput method. Notice how we connect another unit generator to the custom function.

g.addInput(meanFilter);

Code Listing A.1.1. Custom_Function_01.pde

// Custom_Function_01.pde
// in this example, we create a custom function that
// calculates a mean filter

import beads.*;

AudioContext ac;

// this will hold the path to our audio file
String sourceFile;

// the SamplePlayer class will be used to play the audio file
SamplePlayer sp;

// standard gain objects
Gain g;
Glide gainValue;

void setup() {
  size(800, 600);

  // create our AudioContext
  ac = new AudioContext();

  sourceFile = sketchPath("") + "Drum_Loop_01.wav";

  // Try/Catch blocks will inform us of errors
  try {
    sp = new SamplePlayer(ac, new Sample(sourceFile));
  } catch(Exception e) {
    println("Exception while attempting to load sample!");
    e.printStackTrace();
    exit();
  }
  // we would like to play the sample multiple times, so we
  // set KillOnEnd to false
  sp.setKillOnEnd(false);

  // this custom function calculates a mean filter using the
  // previous 3 values
  Function meanFilter = new Function(sp) {
    float[] previousValues = new float[3];
    public float calculate() {
      float mean = 0.25 * (previousValues[0] + previousValues[1] + previousValues[2] + x[0]);
      previousValues[2] = previousValues[1];
      previousValues[1] = previousValues[0];
      previousValues[0] = x[0];
      return mean;
    }
  };

  // as usual, we create a gain that will control the volume
  // of our sample player
  gainValue = new Glide(ac, 0.0, 20);
  g = new Gain(ac, 1, gainValue);

  // connect the filter to the gain
  g.addInput(meanFilter);

  // connect the Gain to the AudioContext
  ac.out.addInput(g);

  // begin audio processing
  ac.start();

  background(0);
  text("Click to hear a mean filter applied to a drum loop.", 50, 50);
}

void draw(){}

// this routine is called whenever a mouse button is pressed
// on the Processing sketch
void mousePressed() {
  gainValue.setValue(0.9);
  // move the playback pointer to the first loop point (0.0)
  sp.setToLoopStart();
  sp.start();
}
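The filter math itself is easy to verify outside of Beads. This plain-Java version of the averaging (mirroring the custom function above, but as a standalone illustrative class) shows two useful properties: the first outputs are small while the history buffer fills, and a constant input passes through unchanged once it has filled.

```java
// The mean filter's math in isolation: each output is the average of
// the current sample and the three previous samples, exactly like the
// custom Function above. (Standalone sketch, not a Beads UGen.)
public class MeanFilterSketch {
    private final float[] previous = new float[3];

    public float process(float x) {
        float mean = 0.25f * (previous[0] + previous[1] + previous[2] + x);
        previous[2] = previous[1];
        previous[1] = previous[0];
        previous[0] = x;
        return mean;
    }

    public static void main(String[] args) {
        MeanFilterSketch f = new MeanFilterSketch();
        System.out.println(f.process(4.0f)); // 1.0 (history is still zeros)
        System.out.println(f.process(4.0f)); // 2.0
        System.out.println(f.process(4.0f)); // 3.0
        System.out.println(f.process(4.0f)); // 4.0 (history full: DC passes through)
    }
}
```

The same ramp-up happens inside the Beads version for the first few samples of playback, though at audio rates it is over in a fraction of a millisecond.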
A.2. Custom Beads

In most situations, a custom function can be used for extending the functionality of Beads. In fact, it's difficult for me to think of a situation where creating an entirely new unit generator (Bead) is necessary, and if you just want to create a handful of new unit generators for use in Processing, then writing directly into the Beads source is overkill. But it is possible, and in fact it's not even very difficult (if you are already comfortable in Java). In this section, we're going to create some new Bead objects for use within a Processing sketch.

When you want to create a new unit generator for use in Processing, you probably shouldn't start from scratch. It's best to find the piece of the source code that most closely matches the functionality you are trying to create, then build off of that. If you want to check out an entire copy of the source, you can use the program SVN (.tigris.org/). The source code can be found at.

.beadsproject.net/svn/beads/Trunk/

A.2.1. Custom Buffer (Custom_Beads_01)

In the 1970s, James Moorer described a way to simplify additive synthesis on a digital system. Moorer's discrete summation equation shows us how to create a complex buffer that is the sum of a series of sine waves. Using Moorer's equation, we can create an additive tone by using a single oscillator and a special buffer, as opposed to traditional methods, which employ many oscillators and simple sine waves. In this example, we create a new buffer type that generates discrete summation buffers based on a number of parameters.

In this case, we want to build a new type of Buffer that can be used by a WavePlayer, so we will start with the Buffer.SINE class as our base. It can be found at.

.beadsproject.net/svn/beads/Trunk/Beads/src/beads_main/net/beadsproject/beads/data/buffers/SineBuffer.java

After downloading the file, put it in the same directory as the Processing program in which you want to use the new class. Then rename the file, the class and any constructors to something that is verbose and captures the purpose of your unit generator, and make sure you change the name anywhere it is used within the code, as in the constructors. Then change the import statements to point to your Beads installation. In this case, we need to import the Buffer and the BufferFactory class. We could do that by importing beads.*, or by just importing the classes we need; the import statements should look like the ones you use in Processing.

import beads.Buffer;
import beads.BufferFactory;
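Before wiring Moorer's equation into a Buffer subclass, it is worth checking the closed form against the brute-force sum it replaces. By a standard Dirichlet-kernel identity, (amplitude / 2N) * (sin((2N+1) * theta/2) / sin(theta/2) - 1) equals a sum of N equal-weight cosine harmonics. The following plain-Java check is an illustration of that math, not part of the Beads code:

```java
// Verify Moorer's closed-form discrete summation against an explicit
// harmonic sum. Both should produce the same value at any theta where
// sin(theta/2) != 0.
public class DiscreteSummationCheck {

    // the closed form used in the DiscreteSummationBuffer listing
    public static double closedForm(double theta, int n, double amplitude) {
        double coeff = amplitude / (2.0 * n);
        double numerator = Math.sin((theta / 2.0) * (2.0 * n + 1.0));
        double denominator = Math.sin(theta / 2.0);
        return coeff * ((numerator / denominator) - 1.0);
    }

    // the sum the closed form replaces: n equal-weight cosine harmonics
    public static double bruteForce(double theta, int n, double amplitude) {
        double sum = 0.0;
        for (int k = 1; k <= n; k++) {
            sum += Math.cos(k * theta);
        }
        return (amplitude / n) * sum;
    }

    public static void main(String[] args) {
        double a = closedForm(0.7, 8, 0.9);
        double b = bruteForce(0.7, 8, 0.9);
        System.out.println(Math.abs(a - b) < 1e-9); // true: the two forms agree
    }
}
```

The payoff is the same as in the buffer class: one closed-form evaluation per sample instead of N oscillators.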
The biggest task in writing a new Bead is in overriding the functions that actually do the work. Usually, the meaty code will be contained within the calculateBuffer function, but in the case of a buffer, it is in the generateBuffer routine. In many cases, you will want to have a DSP equation in mind before you start working, although your unit generator could simply be a convenience UGen that encompasses a common part of your Beads programs. In this case, we implement Moorer's discrete summation equation, a full derivation of which is beyond the scope of this tutorial.

public Buffer generateBuffer(int bufferSize, int numberOfHarmonics, float amplitude) {
  Buffer b = new Buffer(bufferSize);
  double amplitudeCoefficient = amplitude / (2.0 * (double)numberOfHarmonics);
  double theta = 0.0;
  double delta = (double)(2.0 * Math.PI) / (double)b.buf.length;
  ...etc (SEE CODE LISTING)
  return b;
}

In Custom_Beads_01.pde, we put the new Buffer into use by instantiating it in the WavePlayer constructor.

wavetableSynthesizer = new WavePlayer(ac, frequencyGlide, new DiscreteSummationBuffer().generateBuffer(44100));

Code Listing A.2.1. Custom_Beads_01.pde and DiscreteSummationBuffer.pde

// Custom_Beads_01.pde
// this program demonstrates how to create and use custom
// Beads objects in Processing

import beads.*; // import the beads library

AudioContext ac; // create our AudioContext

// declare our unit generators
WavePlayer wavetableSynthesizer;
Glide frequencyGlide;

// our envelope and gain objects
Envelope gainEnvelope;
Gain synthGain;

void setup() {
  size(800, 600);

  // initialize our AudioContext
  ac = new AudioContext();

  // create our Buffer
  Buffer wavetable = new DiscreteSummationBuffer().generateBuffer(4096, 15, 0.8);

  frequencyGlide = new Glide(ac, 200, 10);
  wavetableSynthesizer = new WavePlayer(ac, frequencyGlide, wavetable);

  // create the envelope object that will control the gain
  gainEnvelope = new Envelope(ac, 0.0);

  // create a Gain object, connect it to the gain envelope
  synthGain = new Gain(ac, 1, gainEnvelope);

  // connect the synthesizer to the gain
  synthGain.addInput(wavetableSynthesizer);

  // connect the Gain output to the AudioContext
  ac.out.addInput(synthGain);

  // start audio processing
  ac.start();

  background(0);
  text("Click to trigger the wavetable synthesizer.", 100, 120);
  drawBuffer(wavetable.buf);
} void draw() { // set the fundamental frequency frequencyGlide. wavetableSynthesizer = new WavePlayer(ac.generateBuffer(44100)). 10). // over 50 ms rise to 0. // start audio processing ac. text("Click to trigger the wavetable synthesizer. .buf. } } // // // // DiscreteSummationBuffer.0)) + (int)(height / 2.. 0. import beads. double theta = 0.length. 0).9f).data. j++ ) { // increment theta // we do this first.0.0).beads. (int)(buffer[(int)currentIndex] * (height / 2. 0.data. 10. float amplitude) { Buffer b = new Buffer(bufferSize).pde This is a custom Buffer class that implements the Discrete Summation equations as outlines by Moorer. with the Dodge & Jerse modification. // if you want this code to compile within the Beads source. for( int j = 0.PI) / (double)b.length / (float)width. because the discrete summation // equation runs from 1 to n.this 129 . c).0 * Math. double delta = (double)(2.0 * (double)numberOfHarmonics). j < b. i++ ) { set(i.beads.BufferFactory. } // this is a fun version of generateBuffer that will allow // us to really employ the Discrete Summation equation public Buffer generateBuffer(int bufferSize.beadsproject. currentIndex += stepSize. public class DiscreteSummationBuffer extends BufferFactory { // this is the generic form of generateBuffer that is // required by Beads public Buffer generateBuffer(int bufferSize) { // are these good default values? return generateBuffer(bufferSize. double amplitudeCoefficient = amplitude / (2.length. i < width.beadsproject.BufferFactory. color c = color(255. // then you would use these includes //import net. for( int i = 0. int numberOfHarmonics.buf. import beads.float stepSize = buffer.Buffer. //import net.Buffer. stroke(c). not from 0 to n-1 . } return b. which can be found at. // set the value for the new buffer b. In this case. and we want to give other programs access to the new variables.beadsproject. IsMixStatic indicates whether or not the mix is a static value. 
// do the math with double precision (64-bit) then cast // to a float (32-bit) .sin( (double)(theta / 2.1. the mix float. we modify the WavePlayer object so that It can morph between two buffers.2... The three new variables we created are the mixEnvelope UGen.0) * ((2. Then we need to create accessor functions to get and set the new variables.// is important theta += delta. private boolean isMixStatic = true.buf[j] = newValue. } }. private UGen mixEnvelope. } // we must implement this method when we inherit from // BufferFactory public String getName() { return "DiscreteSummation". Creating the MorphingWavePlayer class is slightly more difficult than creating a new type of buffer.sin( theta / 2.0 * (double)numberOfHarmonics) + 1.net/svn/beads/Trunk/Beads/src/beads_main/net/b eadsproject/beads/ugens/WavePlayer. Custom WavePlayer (Custom_Beads_01) In this example. The float is used to pin the mix to a static value. but this is purely for convenience.java. and the boolean isMixStatic.2.0)). double denominator = (double)Math. private float mix. we want to add new member variables that will control the mixing between the two buffers. I used other get and set routines as the basis for these functions. The UGen can be used to control the mix dynamically. A.0) ). In this example we also wrote new constructors to set the mix variables when the object is created. float newValue = (float)(amplitudeCoefficient * ((numerator / denominator) . public UGen getMixUGen() { 130 Sonifying Processing . Any WavePlayer-like unit generator should start with the WavePlayer source code.0). this is probably unnecessary // if we want to worry about memory (nom nom nom) double numerator = (double)Math. } Finally. Code Listing A.pde you can see that the new MorphingWavePlayer is constructed just like the WavePlayer object. except we give it two buffers. rather than just one. } } public float getMix() { return mix. Morphing between buffers is relatively easy. isMixStatic = true. } this. 
mwp = new MorphingWavePlayer(ac. } return this. In Custom_Beads_02. } public MorphingWavePlayer setMix(float newMix) { if (isMixStatic) { ((beads. Buffer. isMixStatic = false.setValue(newMix). } else { this.TRIANGLE). // declare our unit generators 131 .mix = newMix. frequencyGlide.pde and MorphingWavePlayer.*.2.pde import beads.0 minus the mix level.SINE.2. } else { return mixEnvelope. newMix). AudioContext ac. Custom_Beads_01.pde // Custom_Beads_02.Static) mixEnvelope). we want to override the calculateBuffer function with our timbre morphing code. then multiply buffer2 by 1.mixEnvelope = mixUGen. } else { mixEnvelope = new beads. return this. } // these two routines give access to the mix parameter via // float or UGen public MorphingWavePlayer setMix(UGen mixUGen) { if (mixUGen == null) { setMix(mix).if (isMixStatic) { return null. Buffer. Then we sum those values to produce the output.Static(context. we just multiply buffer1 by the mix level. TRIANGLE). void setup() { size(800. mwp = new MorphingWavePlayer(ac.setValue(mouseX). beads. private UGen phaseEnvelope. masterGain = new Gain(ac.SquareBuffer. Buffer. mwp. frequencyGlide. beads. mixGlide = new Glide(ac. text("Move the mouse to set the frequency and the mix of the MorphingWavePlayer. } void draw() { frequencyGlide.addInput(masterGain).start(). masterGain. 100.setValue(mouseY/(float)height). // start audio processing ac.addInput(mwp). 10). Buffer.5. ac. 120).8). private UGen frequencyEnvelope. background(0). Glide frequencyGlide. 600). import import import import import import public class MorphingWavePlayer extends UGen { private double phase.AudioContext. beads.SawBuffer. } // // // // // MorphingWavePlayer. Gain masterGain. 200. 0.setMix(mixGlide). 132 Sonifying Processing . 10).out. 
frequencyGlide = new Glide(ac.pde this file demonstrates the creation of custom beads for use in Processing it expands upon the standard WavePlayer by allowing the programmer to mix between two buffers beads.SINE.Buffer.".SineBuffer.UGen. Glide mixGlide. beads. // initialize our AudioContext ac = new AudioContext(). beads. mixGlide. 1. 0.MorphingWavePlayer mwp. private float frequency. private float mix. newBuffer1. UGen frequencyController. } public MorphingWavePlayer(AudioContext context.update(). Buffer newBuffer1.5f. // To store the inverse of the sampling frequency. mix = 0.buffer2 = newBuffer2. // constructors private MorphingWavePlayer(AudioContext context. private boolean isMixStatic = true. float frequency. Buffer newBuffer2) { this(context. // the unit generators that will control the mixing private UGen mixEnvelope. newBuffer2). } public MorphingWavePlayer(AudioContext context. phase = 0. Buffer newBuffer1. } public void start() { super. // The oscillation frequency. phase = 0.getSampleRate(). one_over_sr = 1f / context.start(). } // this is the key to this object // overriding the calculateBuffer routine allows us to // calculate new audio data and pass it back to the calling // program @Override public void calculateBuffer() { frequencyEnvelope. setFrequency(frequencyController). 133 . newBuffer1.// the buffers that will be mixed private Buffer buffer1. this. newBuffer2). private boolean isFreqStatic. Buffer newBuffer1.buffer1 = newBuffer1. setFrequency(frequency). this. Buffer newBuffer2) { super(context. Buffer newBuffer2) { this(context. 1). private Buffer buffer2. private float one_over_sr. i++) { if( mixEnvelope != null ) { mix = mixEnvelope.getValue(0.0f) + 1. } public UGen getFrequencyUGen() { if (isFreqStatic) { return null. } phase = (((phase + frequency * one_over_sr) % 1.if( mixEnvelope != null ) mixEnvelope.mix. i). } bo[i] = (mix * buffer1. if( mixEnvelope != null ) { mix = mixEnvelope. } } else { phaseEnvelope.update(). 
i++) { frequency = frequencyEnvelope.getValueFraction(phaseEnvelope.getValue(0.getValueFraction(phaseEnvelope. for (int i = 0. float inverseMix = 1.getValue(0. i))) + (inverseMix * buffer2. i))). } else { return frequencyEnvelope. i). } @Deprecated 134 Sonifying Processing .0f .getValue(0. } } } @Deprecated public UGen getFrequencyEnvelope() { return frequencyEnvelope. inverseMix = 1.0f.getValue(0.update().0f) % 1. inverseMix = 1.0f . if (phaseEnvelope == null) { for (int i = 0.mix. bo[i] = (mix * buffer1. i). float[] bo = bufOut[0].getValueFraction((float)phase)).getValueFraction((float) phase)) + (inverseMix * buffer2.0f . } } public float getFrequency() { return frequency. i < bufferSize.mix. i < bufferSize. Static) frequencyEnvelope).mix = newMix. 135 . } public MorphingWavePlayer setMix(float newMix) { if (isMixStatic) { ((beads. } else { this.Static(context. return this.mixEnvelope = mixUGen. } // these two routines control access to the mix parameter public UGen getMixUGen() { if (isMixStatic) { return null. } public MorphingWavePlayer setFrequency(float frequency) { if (isFreqStatic) { ((beads.Static(context. } else { this. frequency). } else { mixEnvelope = new beads. } this. return this. isMixStatic = false. } return this. isFreqStatic = false. } else { return mixEnvelope. } } public float getMix() { return mix. isFreqStatic = true.frequencyEnvelope = frequencyUGen. } // these two routines give access to the mix parameter //via float or UGen public MorphingWavePlayer setMix(UGen mixUGen) { if (mixUGen == null) { setMix(mix). } else { frequencyEnvelope = new beads.setValue(frequency). isMixStatic = true. } public MorphingWavePlayer setFrequency(UGen frequencyUGen) { if (frequencyUGen == null) { setFrequency(frequency).public void setFrequencyEnvelope(UGen frequencyEnvelope) { setFrequency(frequencyEnvelope). } this. newMix).Static) mixEnvelope).frequency = frequency. } return this.setValue(newMix). } public MorphingWavePlayer setPhase(float phase) { this. 
} @Deprecated public void setPhaseEnvelope(UGen phaseEnvelope) { setPhase(phaseEnvelope).buffer1 = b. return this.phase = phase.getValue(). } return this. this.buffer2.} @Deprecated public UGen getPhaseEnvelope() { return phaseEnvelope. } // GET / SET BUFFER1 public MorphingWavePlayer setBuffer1(Buffer b) { this.phaseEnvelope = null. } public UGen getPhaseUGen() { return phaseEnvelope. return this. } public Buffer getBuffer1() { return this. return this.phaseEnvelope = phaseController. } } 136 Sonifying Processing . if (phaseController != null) { phase = phaseController. } public MorphingWavePlayer setPhase(UGen phaseController) { this. } public float getPhase() { return (float) phase.buffer1. } // get / set buffer2 public MorphingWavePlayer setBuffer2(Buffer b) { this.buffer2 = b. } public Buffer getBuffer2() { return this. 137 . musicBYTES 2009. Beauty and Horror 2009. 138 Sonifying Processing . His primary interest is biological and bottomup approaches to computer-assisted composition. Evan works heavily as a freelance composer.About the Author Evan X.com. which he explores in new software written in java and processing. scoring for numerous videogames and television productions. and a master's degree in computer music from Northern Illinois University in 2010. and IMMArts TechArt 2008. New Music Hartford. He earned a bachelor's degree in computer science from the University of Rochester in 2004. His music has been performed in Phono Photo No. 6. He is also the SEAMUS Webmaster and the blogger at computermusicblog. Silence. Merz is a graduate student in the algorithmic composition program at The University of California at Santa Cruz.
https://www.scribd.com/doc/164132552/Beads-Tutorial-Sonifying-Processing
D (The Programming Language)/d2/Conditionals and Loops

Lesson 10: Conditionals and Loops[edit]

Conditionals and loops are essential for writing D programs.

Introductory Code[edit]

Palindrome Checker[edit]

module palindromes;

import std.stdio;

bool isPalindrome(string s)
{
    auto length = s.length;
    auto limit = length / 2;
    for (int i = 0; i < limit; ++i)
    {
        if (s[i] != s[$ - 1 - i])
        {
            return false;
        }
    }
    return true;
}

void main()
{
    string[] examples = ["", "hannah", "socks", "2002", ">><<>>", "lobster"];

    foreach (e; examples)
    {
        if (!e.length)
            continue; // skip that empty string
        if (isPalindrome(e))
            writeln(e, " is an example of a palindrome.");
        else
            writeln(e, " is an example of what's not a palindrome.");
    }

    while (true)
    {
        write("Type any word: ");
        string input = readln();
        if (input.length <= 1) // length == 1 means input == "\n",
            break;             // i.e. nothing was typed
        input = input[0 .. $ - 1]; // strip the newline
        if (isPalindrome(input))
            writeln(input, " is a palindrome.");
        else
            writeln(input, " is not a palindrome.");
    }
}

More Conditionals and Branching[edit]

import std.stdio;

string analyzeHoursOfSleep(int hours)
{
    if (!hours)
        return "You didn't sleep at all.";

    string msg = "";
    switch (hours)
    {
        case 1, 2, 3:
            msg ~= "You slept way too little! ";
            goto case 7;
        case 4: .. case 6:
            msg ~= "Take a nap later to increase alertness. ";
        case 7:
            msg ~= "Try to go back to sleep for a bit more. ";
            break;
        default:
            msg ~= "Good morning. Grab a cup of coffee. ";
    }
    return msg ~ '\n';
}

void main()
{
    writeln(analyzeHoursOfSleep(3));
    writeln(analyzeHoursOfSleep(6));
    writeln(analyzeHoursOfSleep(7));
    writeln(analyzeHoursOfSleep(13));

    int i = 0;
    L1: while (true)
    {
        while (true)
        {
            if (i == 3)
                break L1;
            i++;
            break;
        }
        writeln("Still not out of the loop!");
    }
}

/* Output:
You slept way too little! Try to go back to sleep for a bit more.
Take a nap later to increase alertness. Try to go back to sleep for a bit more.
Try to go back to sleep for a bit more.
Good morning. Grab a cup of coffee.
Still not out of the loop!
Still not out of the loop!
Still not out of the loop!
*/

Concepts[edit]

The if and else Statements[edit]

Using if allows you to make part of your code execute only if a certain condition is met.

if (condition that evaluates to true or false)
{
    // code that is executed if condition is true
}
else
{
    // code that is executed if condition is false
}

In fact, if the section of code that's inside the if or else is only one line long, you can omit the curly brackets.

if (condition1)
    do_this();
else if (condition2)
    do_that();            // only executed if condition1 is false, but
                          // condition2 is true
else
    do_the_other_thing(); // only executed if both condition1 and condition2 are false

As a result, this is often seen:

if (condition1)
{
    do_something1();
    something_more1();
}
else if (condition2)
{
    do_something2();
    something_more2();
}
else if (condition3)
{
    do_something3();
    something_more3();
}
else if (condition4)
{
    do_something4();
    something_more4();
}
else
{
    do_something_else();
}

The Condition[edit]

The condition that goes inside of the parentheses in conditional statements such as if can be anything convertible to bool. That includes integral and floating-point types (true if nonzero, false otherwise), pointers (null is false, non-null is true), and dynamic arrays.

The while Loop[edit]

A while loop will allow you to repeat a block of code as long as a certain condition is met. There are two forms of the while loop:

while (condition1)
{
    do_this();
}

and

do
{
    do_this();
} while (condition1);

The difference is that in the first example, if condition1 is false, do_this is never called, while in the second example it would be called once (the conditional check happens after the code is executed once).

The foreach Loop[edit]

This loop is for iteration. Take a look at these two ways to use foreach:

foreach (i; [1,2,3,4]) { writeln(i); }

foreach (i; 1 .. 5) { writeln(i); } // equivalent to above

The for Loop[edit]

This type of looping is the most complex, but it is also the one that gives the most control. It is defined in the same way as in other C-like languages:

for (initialization; condition; counting expression) { ... }

The initialization expression is executed only once, at the beginning. Then condition is checked to be true or false. If it is true, the code inside of the conditional block (inside of the brackets) is executed. After that execution, the counting expression is executed. Then the condition is checked again, and if it is true, the loop continues. For example:

for (int i = 0; i <= 5; i++) { write(i); } // output: 012345

You can even omit parts of what goes inside the parentheses of the for. These two are equivalent:

for (int i = 0; i == 0; )
{
    i = do_something();
}

int i = 0;
while (i == 0)
{
    i = do_something();
}

break and continue[edit]

These are two statements that are used inside of loops.

The break statement breaks out of the loop. Whenever the break statement is encountered, the loop is immediately exited. This statement can go inside of while, for, foreach, and switch blocks (you will learn about those later).

The continue statement causes a loop to restart at the beginning. Let's see, through code, exactly how this works. This code example counts to 7 but skips 5.

for (int i = 0; i <= 7; i++)
{
    if (i == 5)
        continue;
    writeln(i);
}

Switches and More[edit]

D allows absolute branching with labels and goto.

int i = 0;
looper: // this is a label
write(i);
i++;
if (i < 10)
    goto looper;
writeln("Done!"); // 0123456789Done!

Do not use these unless you have to. Code that uses labels can most often be rewritten with more readable looping constructs, like for, while, and foreach.

There is something in D, C and C++ called the switch. D's switches are actually more powerful than C and C++'s switches.

switch (age)
{
    case 0, 1: // if(age == 0 || age == 1) { ... }
        writeln("Infant");
        break;
    case 2, 3, 4: // else if(age == 2 || age == 3 || age == 4) { ... }
        writeln("Toddler");
        break;
    case 5: .. case 11:
        writeln("Kid");
        break;
    case 12:
        writeln("Almost teen");
        break;
    case 13: .. case 19:
        writeln("Teenager");
        break;
    default: // else { ... }
        writeln("Adult");
}

Note that you must have a break in order to get out of the switch. Otherwise, fall-through occurs. Also, you can use goto.

int a = 3;
switch (a)
{
    case 1:
        writeln("Hello");
    case 2:
        writeln("Again");
        break;
    case 3:
        goto case 1;
    default:
        writeln("Bye");
}
/* output:
Hello
Again
*/

Strings can be used in case. This is a feature that's not in C or C++.

else[edit]

You can use else with foreach, while, and for loops, too. If any of those loops has an else clause, then the else is only executed if the loop terminates normally (i.e. not with break).

int[] arr = [1,2,3,5,6];
foreach (item; arr)
{
    if (item == 4)
        break;
}
else
{
    writeln("No four found.");
}
//Output: No four found.
https://en.wikibooks.org/wiki/D_(The_Programming_Language)/d2/Lesson_10
This is the third post in a blog series about how you can use Power Platform Dataflows to get data out of Business Central in a form where it can be accessed by other applications. The first post covered a basic integration between Power BI and Business Central. The second post introduced Power Platform Dataflows, and data was stored in a hidden Azure storage account (also known as Azure Data Lake Store gen 2). Illustrated like this:

In this post, we reconfigure our Power Platform Dataflows to store the data in our own Azure storage account. Illustrated like this:

Once the data is in our storage account, we can access it directly and use it for a variety of purposes. Much of the material in this blog post is based on the documentation for Power Platform Dataflows, in particular the pages about configuring dataflow storage.

Prerequisites

We need a few things before we can accomplish the goals:
- We need an Azure subscription, so that we can create the storage account.
- The Azure subscription and the Power BI subscription should be linked to the same Azure Active Directory (AAD) tenant. Power BI needs to authenticate to our Azure storage account, and it uses AAD for this authentication (OAuth). Put a bit simplified: we need to log into Azure with the same account (e.g. susan@contoso.com) that we log into Power BI with.
- We need to be an administrator of the Power BI account, so that we can reconfigure the dataflows.

Create an Azure storage account (Azure Data Lake Store Gen 2)

The first step is to create an Azure storage account. A word about terminology: we will create an Azure storage account, but a special type of storage account called "Azure Data Lake Store Gen 2". In the documentation for Dataflows, you will often see the latter term. Sometimes it is abbreviated to ADLSg2.

The storage account needs to be in the same Azure region as your Power BI tenant. To determine where your Power BI tenant is located, go to Power BI online and open "About Power BI" via the "?" menu at the top.
It will show a picture like this, including the Azure region:

To create the storage account, follow these steps:
- Go to the Azure portal and click "Storage accounts" in the navigation pane.
- Click "Add".
- Create a new resource group called "CdmStorage".
- Specify the name of the storage account as "cdmstorage1" (or similar – the name must be globally unique).
- Specify the location of your Power BI tenant.
- Leave the remaining settings at their defaults.
- Click "Next: Advanced" at the bottom.
- Enable "Hierarchical namespace" under "Data Lake Storage Gen2".
- Click "Review + Create" and click "Create".

Now you have the storage account. You also need a so-called "file system" inside your storage account. You may be familiar with "blob containers". A "file system" is similar to a "blob container", but it's also different because it's truly hierarchical, like the file systems we are used to from PCs.

First, download and install Azure Storage Explorer. Launch Azure Storage Explorer and log in by clicking "Add Account" in the left-hand ribbon. Click "Next". Sign in with your usual account. Now you should be able to see your newly created storage account.

The next step is to create a file system, which must be called "powerbi". Azure Storage Explorer still calls it a "blob container" in the UI, so that's what you should look for. You should now have this:

Give Power BI permission to read/write to the storage account

We want Power BI (really, our dataflows) to be able to read and write to our storage account, so we need to give Power BI the right permissions. This is done in two steps.

Step 1 is to assign the "Reader" role to Power BI for your storage account. In the Azure portal, select the storage account and click "Access control (IAM)". Then click "Add role assignment":

In the dialog, choose the "Reader" role, and then search for "Power BI Service":

Select the search result and click "Save".
Step 2 is to assign read/write permissions to the "powerbi" file system that we created earlier. Switch to Azure Storage Explorer again. Right-click the "powerbi" file system and click "Manage Access". You should see a page like this:

You need to add some rows to this list, but which? To answer that, we need to look inside Azure Active Directory. Switch back to the Azure portal and select "Azure Active Directory" in the navigation pane. Click "Enterprise applications", select Application Type = "All Applications", and then click "Apply". Now enter "Power" in the search field and you should see something like this:

Notice the Object IDs. Those are what we need. But beware – yours will be different from the ones in the screen shot above. Now switch back to Azure Storage Explorer and add the three applications with the permissions shown, then click Save.

Now we have created the storage account and configured security for it. It was a lot of steps, but fortunately it only has to be done once.

Configure Power BI to save dataflows to the new storage account

The next step is to configure Power BI to use the new storage account. Go to Power BI online, click the settings cog in the upper-right corner, and select "Admin portal":

Under "Dataflow settings", you should see this screen:

The text explains that – currently – your dataflow data is stored in a storage account that Power BI provides, and which you can't access directly. This is what we used in blog post 2 in this series. Obviously, we need to connect Power BI to our storage account. So click that yellow button and fill out the fields with your values. These are my values:

Click "Continue". Power BI now verifies that it can access the storage account. If all goes well, you should see this page:

Flip the toggle to allow workspace admins to use the storage account:

That's it for configuration of Power BI and Azure!
Saving dataflows to our own storage account

The workspace that we created in blog post 2 still saves data in Power BI's internal storage account. Let's switch it to save data in our storage account. Open the workspace settings:

And toggle this setting:

Click "Save". Existing dataflows will continue to save to wherever they were saving when they were initially created, so we need to recreate the dataflow from blog post 2. Once you have done that, try to refresh the dataflow a couple of times.

Now switch to Azure Storage Explorer and refresh the "powerbi" file system. You should see that the data has been saved there! Inside the Items.csv.snapshots folder, you will find the actual data in CSV format, one file for each time we refreshed:

Open one of the files in Notepad, and you will see the actual data:

The described file structure, with a model.json at the root and subfolders for different entities containing the actual data snapshots, is based on the Common Data Model (CDM) format. CDM is a format that is used by more and more services from Microsoft and other companies, and which enables reuse of data and tools in different domains. For example, suppose that you want to analyze the item data in Azure Databricks: this is straightforward because Azure Databricks can read CDM data. You can read more about CDM in Microsoft's documentation.

That's it for this blog post! We now have the data in a location that can be accessed from other tools! In the next blog post, we will look at how you can access the CSV files programmatically.

If you have questions, or if you have ideas for future blog posts in this area, I'm happy to discuss further. Feel free to write in the comments section or privately via email (chrishd@microsoft.com).
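As a small preview of that programmatic access: once a snapshot file has been downloaded (with Azure Storage Explorer, for instance), its rows can be parsed with ordinary CSV tooling, because CDM snapshot CSVs are plain comma-separated data with no header row — the schema lives in model.json. The sketch below is hedged: the two-column layout and the field names "No" and "Description" are hypothetical stand-ins, not the actual Business Central item schema.

```python
import csv
import io

# Hypothetical contents of one Items.csv snapshot file.
# Real snapshots have no header row; the column order is
# declared by the entity's attributes in model.json.
snapshot = io.StringIO(
    "1000,Bicycle\n"
    "1001,Touring Bicycle\n"
)

# Supply the field names ourselves, in model.json's declared order.
fields = ["No", "Description"]
items = [dict(zip(fields, row)) for row in csv.reader(snapshot)]

print(len(items))
print(items[0]["Description"])
```

The same loop works unchanged on a file handle opened over a downloaded snapshot instead of the in-memory `io.StringIO` used here for illustration.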
https://cloudblogs.microsoft.com/dynamics365/it/2019/09/18/using-power-platform-dataflows-to-extract-and-process-data-from-business-central-post-3/
Hi Tamas,

sorry, it doesn't seem to work on my network... it produces a network, but some nodes are missing... maybe the R translation is not the best way to accomplish my task...
I just want to subgraph all the nodes with blue edges from a larger graph.

tnx,
simone

On 22 Sep 2008, at 16:09, Tamas Nepusz wrote:

>> def adj(es):
>>     result = set()
>>     for e in es:
>>         result.add(*e.tuple)
> Sorry; instead of the last line, you should write:
>
>     for e in es:
>         result.update(e.tuple)
>
> --
> T.

_______________________________________________
igraph-help mailing list
address@hidden
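For the archives, Simone's end goal — the subgraph induced by blue edges — can be sketched without the igraph dependency at all. The names and the edge/colour representation below are illustrative plain-Python stand-ins, not igraph API; the endpoint-collection step is exactly the `update()` fix Tamas suggests.

```python
# A graph as a list of (source, target, colour) tuples -- hypothetical data.
edges = [
    (0, 1, "blue"),
    (1, 2, "red"),
    (2, 3, "blue"),
    (3, 4, "red"),
]

# Keep only the blue edges.
blue_edges = [(u, v) for u, v, colour in edges if colour == "blue"]

# Collect the nodes touched by blue edges. This is Tamas' point:
# update(e) adds each endpoint of the 2-tuple individually, whereas
# result.add(*e) would call set.add(u, v) and raise a TypeError.
blue_nodes = set()
for e in blue_edges:
    blue_nodes.update(e)

print(sorted(blue_nodes))
print(blue_edges)
```

With igraph itself, the same node set can then be fed to the graph's subgraph method to materialize the induced subgraph.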
http://lists.gnu.org/archive/html/igraph-help/2008-09/msg00116.html
We got an interesting ProductFeedback bug from Oren Novotny, who says:

"While working with a C# solution consisting of seven projects, I noticed that devenv was pegging one of my two CPUs. The IDE didn't feel sluggish though and was still responsive despite the CPU usage."

I wanted to talk about this because I thought it was interesting how this behavior was viewed by our customers. As it turns out, this behavior is considered "by design". In order to help explain that, I'm going to delve a little bit into our design for the C# Language Service in order to show that the consequence of our design is the exact behavior that Oren is seeing.

The C# Language Service is a component that implements many Visual Studio services in order to provide C#-specific capabilities to VS users. It analyzes your source code and builds up an internal representation of it, and uses that to provide many services like colorization, completion lists, parameter help, squiggles, population of the task list, debugger interaction support (like breakpoint resolution), navigation bar population, etc. The number of services that the C# language service implements is quite large, but it is almost completely specified by the interfaces that Visual Studio provides. That means that you could rip out the C# language service yourself and insert your own MyC# Language Service that does everything we do (and, in fact, that's what some customers out there do!).

So, as I've described it, there are really two different kinds of behaviors that the C# Language Service has. The first is code analysis. The second is interaction with the rest of the VS shell in order to serve the user sitting at the keyboard.
Now, what's interesting is that these two behaviors have vastly different requirements and performance characteristics. Let's start with the second type of work that the language service does. We're interacting with the user, so it is *vital* that we be performant. Consider something like a user typing "this<dot>". We need to bring up a completion list and have it populated in the span of milliseconds. If we're slow to do this then we're going to screw up your typing. And if there's anything we know, it's that if you mess with typing you're going to have people ripping their hair out and screaming at you. Typing absolutely must be performant no matter what. Similarly, when colorizing your text we have to be fast, fast, fast! If colorization isn't happening instantly, it's very disorienting and will cause users to think they're doing something wrong.

Now, let's talk about the first job that the Language Service has: code analysis. Consider this: before we are able to bring up a completion list after you type "DateTime<dot>", we need to do several things. We need to try to bind the name on the left of the <dot>. In order to do that we need to understand the method that it's in (because it might be the name of a parameter), we need to understand the class it's in (because it might be a property or field or even a nested type), and we need to understand the namespace it's in (because it might be brought into scope by one of your "using"s). And not only that, but we need to understand the references your project has so that we'll know about the types that are accessible (for example, System.DateTime comes from mscorlib.dll). In essence we're compiling your code, except that unlike a standard compiler we need to compile fast, we need to be incredibly error tolerant, and we also have to deal with the constant changes that you're making to your code. This is actually a whole heck of a lot of work.

So we could try to do all this work after any change the user makes.
After you type a character, we simply do all the work to keep our internal symbol model up to date. However, it turns out that we simply weren't smart enough to figure out how to make that work while satisfying all the requirements out there. We absolutely could do the work in between your keystrokes, but we don't know how to do it performantly enough. Consider renaming a namespace that sits in your root namespace. The amount of code that needs to be recompiled at that point is enormous. Chances are that every file of yours has many references to types from mscorlib that now need to be rebound, in case they might bind differently with this namespace name change. Now realize that you have about 1 millisecond to do all that work! It's an extremely complicated task, and we decided that trying to do it in between keystrokes would be too difficult.

So what did we do instead? We decided that instead of analyzing your code and keeping our internal symbol model up to date in between user changes, we would have another thread whose sole purpose is to keep that symbol information up to date in the background. This thread receives notifications that changes have been made and does the work to re-lex, re-parse, and re-bind the new code. While that's happening, it's possible for user requests to come in and be serviced without delay.

Now, as a consequence of this, a user request might come in and be serviced before the background thread has finished its work. What happens in that case? Well, the "primary" thread will simply access a symbol table that is slightly out of date. And for pretty much all cases that suffices. In 99% of cases, by the time you need access to something you've just changed, it will be understood and contained in the symbol table. For example: say you rename a method parameter 'foo' (or change its type). You then move down and try to access "foo<dot>".
In the short time it takes you to move down to the usage of "foo" we will almost certainly be up to date.

Now, there is one case where the 99% rule won't apply. If you're opening a large solution then it's going to take time for the background analysis to happen. When the solution opens we will know nothing about your code, and as time goes forward we will be madly analyzing your code to get the initial symbol table populated. If you happen to use some service (like IntelliSense(tm)) during that time, it is absolutely the case that we might not have all information ready for you. However, we are extremely fast with our analysis, and on a fast machine you can usually expect that we'll know about all your code fairly quickly. A good rule of thumb is about 1 second per megabyte of source code you have. I.e., if you have 35 megs of source, we'll take 35-40 seconds until we know *everything* about your code. Of course, after 20 seconds we'll know about 50%, so as you get closer to full analysis the accuracy gets better and better.

So what does this mean for the user? Well, as you're changing your code, it's quite likely that you'll see a spike of CPU as the background thread goes about its business analyzing code. This thread runs at lower priority than our foreground interaction thread, so while we're using a lot of CPU you'll still be able to type and you won't find the IDE to be sluggish. This is absolutely by design and is the expected behavior for the C# language service. It's great to see people concerned about and wanting to let us know about these things in case there is a problem. But it's even better to be able to know that things are working as normal and there's nothing that even needs to be fixed :-)

Any questions or comments on the design decisions we've made here?
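As an aside, the pattern described above (a background worker keeping an index up to date while the foreground thread reads a possibly slightly stale snapshot) is not specific to C# or Visual Studio. Here is a minimal sketch of the idea in Python; all the names are invented for illustration, and the real language service is of course enormously more involved:

```python
import threading
import queue

class BackgroundAnalyzer:
    """Keeps a toy 'symbol table' up to date on a worker thread.

    The foreground thread never waits for analysis: it reads the most
    recent snapshot, which may be slightly out of date, exactly the
    trade-off described in the post.
    """
    def __init__(self):
        self._changes = queue.Queue()
        self._symbols = {}            # latest analyzed snapshot
        self._lock = threading.Lock()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def notify_change(self, name, definition):
        # Called from the foreground (editor) thread; returns immediately.
        self._changes.put((name, definition))

    def lookup(self, name):
        # Foreground read: served from the current snapshot without delay.
        with self._lock:
            return self._symbols.get(name)

    def _run(self):
        while True:
            name, definition = self._changes.get()
            # stand-in for "re-lex, re-parse, re-bind": record the symbol
            with self._lock:
                self._symbols[name] = definition
            self._changes.task_done()

analyzer = BackgroundAnalyzer()
analyzer.notify_change("foo", "int parameter")
analyzer._changes.join()  # wait for the background pass, only so the demo is deterministic
print(analyzer.lookup("foo"))  # -> int parameter
```

In the real system the foreground thread never blocks on the worker; the join above exists only to make the demonstration reproducible.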
http://blogs.msdn.com/cyrusn/archive/2005/05/13/417344.aspx
Issues, reasons and solutions while migrating ASP.NET to Unix

Working with the file system
Problem: the site does not open; images are not displayed or are displayed incorrectly. This issue is well known and has a detailed description on the Mono Project official website.
Reason: in MS Windows, slash and backslash are the same thing; in Unix these are two different paths. Also, in MS Windows paths are case independent, while in Unix the same name in different cases (lower case/upper case) means two different file names.
Solution:
- change all the lines containing paths and directory dividers to use the Path class. Use Path.Combine and Path.DirectorySeparatorChar everywhere in code.
- check all CSS files and correct their paths (including case) so they work properly in both operating systems.
- use the MONO_IOMAP=all environment variable to make Mono ignore character case in paths.

Closing transactions
Problem: transactions in the MySQL database do not close/commit.
Reason: unfortunately, could not be found.
Solution: added the AutoEnlist=false parameter to the connection string.

Difference in Mono XML serializer behavior
Problem: difference in the XML representation of fields with a null value, and an exception in the Mono serializer when empty namespaces are used in XML.
Solution: removed the Document module WCF service and substituted it with ASP.NET Web API.

Memory leak issues
Problem: memory leak when using mod_mono_server for apache2. E.g., when inserting new data into HttpRuntime.Cache with a key already present in the cache, Mono does not release the memory taken by the previous data, as opposed to .NET.
Reason: still a mystery to us.
Solution: to solve the problem, we forced deletion of the old data before inserting the new one. We also switched to nginx, where there is no such problem.
NullReferenceException exceptions
Problem: the WCF service threw a NullReferenceException when addressing ConfigurationManager class members if the HttpContext.Current property had been overwritten.
Reason: if you manually specify HttpContext.Current in non-web applications, Mono starts using WebConfigurationManager instead of ConfigurationManager, which leads to exceptions in non-web applications.
Solution: got rid of the manual specification of HttpContext.Current.

Incompatibility of HTTP WCF with the Microsoft version
Reason: the HTTP WCF Mono service has many small differences from the Microsoft version, e.g., small differences in XML serialization. In the Mono version, extensions cannot be specified via attributes, only via the configuration file; HttpContext.Current is absent in the Mono version; etc.
Solution: the Document HTTP WCF service was rewritten as an ASP.NET Web API based version.

SignalR server crash during load testing
Reason: for exchanging messages in a chat embedded into the page, we used the ASP.NET SignalR library. When we ported to Mono, we wanted to use the same library there as well. It supports several types of data transfer transports: websockets, long polling, forever frame, server sent events. The most preferable for us is the websocket technology, which we use with our SaaS version. Unfortunately, SignalR does not currently allow using websockets under Mono, which is why we decided to try Long Polling as the transport. But at the stress test stage, the SignalR server kept crashing with System.IO.IOException: «Too many open files» after about 1k client connections. While investigating the problem further, we noticed a socket leak at the SignalR server after client disconnection. We reported the issue to the SignalR developers and hope the problem will be fixed in the next SignalR version.
Solution: the other transports are not suitable for our needs for various reasons, which is why we had to disable the SignalR chat in the current Mono version.
Conclusion
We made more than a hundred changes to the ASP.NET project while porting it to Unix. As a result, we have one code base that works on Linux as well as on Windows. The source code is published on GitHub. If we started this work now, we would use ASP.NET 5 instead. Its release was announced when our work was nearly over!

About Tatiana Kochedykova - Tatiana is a technical writer and translator working at Ascensio System SIA, developer of productivity solutions. She tries to spend most of her time with her little son and husband. Optimistic by nature, she knows for sure that all will be well.
http://www.devcurry.com/2016/01/porting-aspnet-application-to-unix.html
Python program to find the factorial of a number using recursion:

The factorial of a number is the product of all the numbers from 1 to that number, e.g. the factorial of 5 is 1 * 2 * 3 * 4 * 5, i.e. 120. In this tutorial, we will learn how to find the factorial of a number using a recursive method. Factorial is denoted by "!": 5 factorial is denoted by 5!

Recursive method:
A recursive method calls itself to solve a problem. This is called recursion. These types of methods call themselves again and again until a certain condition is satisfied. Finding the factorial of a number is one of the classic problems used to teach recursion. The factorial of a number 'n' is the product of all numbers from 1 to 'n'. Or, we can say that the factorial of 'n' is equal to 'n' times the factorial of n - 1. If 'n' is 0 or 1, its factorial is 1; the code below uses 0 as its base case.

We can implement it in Python like below:

    def fact(x):
        if x == 0:
            return 1
        return x * fact(x - 1)

    print(fact(5))

- The fact() method is used to find the factorial of a number. It takes one number as its argument. The return value of this method is the factorial of the argument number. This method calls itself recursively to find the factorial of the argument number.
- Inside this method, we check whether the value of the argument is 0. If it is 0, we return 1 (the base case). Else, we return the argument multiplied by fact(x - 1), i.e. the factorial of (x - 1). This line calls the same method again.
- fact(x - 1) will again call fact(). If the value of (x - 1) is 0, it will return 1. Else, it will return (x - 1) * fact(x - 2). So, the same method is called again and again recursively.
- This product chain continues until the argument reaches 0. The result is x * (x - 1) * (x - 2) * ... * 1, i.e. the factorial of x.
The output of the above program is 120.

Explanation:
In the above example:
- The fact() function takes one argument, x.
- If x is 0, it returns 1, because the factorial of 0 is 1 and there is nothing left to multiply.
- Else it returns x * fact(x - 1), i.e. fact(x - 1) calls the fact() function one more time with (x - 1) as the argument. If x is 10, it calls fact(9).
- This continues until the argument reaches 0, at which point the function returns 1 and the recursion stops.

So, for 5:
- it will call 5 * fact(4)
- fact(4) will be 4 * fact(3)
- fact(3) will be 3 * fact(2)
- fact(2) will be 2 * fact(1)
- fact(1) will be 1 * fact(0)
- fact(0) will be 1

That means the final output is 5 * fact(4) = 5 * 4 * fact(3) = 5 * 4 * 3 * fact(2) = 5 * 4 * 3 * 2 * fact(1) = 5 * 4 * 3 * 2 * 1 * fact(0) = 5 * 4 * 3 * 2 * 1 * 1 = 120.

Try changing the input number and check the result.

Conclusion:
In this example, we have learned how to find the factorial of a number in Python recursively. The recursive method comes in handy when you need to execute the same process again and again. Try to run the above example with different numbers and check their factorials. You can download the program from the GitHub link mentioned above. If you have any queries, don't hesitate to drop a comment below.

Similar tutorials:
- Python program to find the gcd of two numbers using the fractions module
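For comparison, the same product can also be built iteratively, which avoids deep recursion for large inputs. The function name `fact_iter` below is mine, not from the tutorial, and the result is cross-checked against the standard library:

```python
import math

def fact_iter(n):
    # multiply 1 * 2 * ... * n, the same product the recursion builds up
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(fact_iter(5))                        # 120
print(fact_iter(5) == math.factorial(5))   # True
```

Note that `fact_iter(0)` returns 1, matching the recursive base case.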
https://www.codevscolor.com/python-program-find-factorial-python-tutorial/
My code is able to pass some test cases. Please tell me whether my thinking is wrong or there is any mistake in the code. To find the nesting depth, I am counting the longest continuous run of 1's, with the position being the last occurrence of 1 in that run. In order to find the maximum number of symbols between any pair of matched brackets, I am counting the maximum number of symbols between a 1 and a 2 (inclusive of the 1 and the 2) and finding the first occurrence of 1 in it. I am doing this in the second for loop. I handled both cases separately in different loops.

    #include <bits/stdc++.h>
    using namespace std;
    typedef long long ll;
    int main()
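For reference, maximum nesting depth is usually computed with a single running counter rather than by counting runs of 1's. Here is a sketch in Python, under the assumption (implied but not stated by the post) that 1 encodes an opening bracket and 2 a closing one:

```python
def max_nesting(symbols):
    """Return (max depth, index of the opening symbol where that depth
    is first reached), or (0, -1) for an empty/flat sequence."""
    depth = 0
    best_depth = 0
    best_pos = -1
    for i, s in enumerate(symbols):
        if s == 1:          # opening bracket: go one level deeper
            depth += 1
            if depth > best_depth:
                best_depth = depth
                best_pos = i
        elif s == 2:        # closing bracket: come back up
            depth -= 1
    return best_depth, best_pos

print(max_nesting([1, 1, 2, 1, 2, 2]))  # (2, 1)
```

The counter approach works even when the deepest nesting is not a contiguous run of 1's (e.g. 1, 2, 1, 1, 2, 2), which is where counting runs can go wrong.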
https://www.commonlounge.com/profile/7e3deff64efc4fb8bdb61c54764d54c4
Google turned up nothing when I was searching for how to use ccache with NetBSD's build.sh script. My "solutions" are fairly ugly but hopefully point someone in the right direction. There may very well be better ways of doing things, but this works for me. (Note: these steps were used for a cross-compilation of NetBSD_4.0/i386 on a FreeBSD_6.2/i386 host. The basic ideas should be fairly generic and applicable to other host/target pairs, including native NetBSD builds. The basic ideas might also help in getting build.sh to use distcc.)

The goal is to use ccache for c/c++ compiles done by build.sh (the build.sh "target", e.g. "release", "distribution", etc., should not matter; any target that compiles c/c++ code can potentially benefit from using ccache). This goal can be achieved by realizing 2 subgoals:

Objective 1) - make build.sh use ccache for HOST_CC/HOST_CXX (host compiler)
Objective 2) - make build.sh use ccache for CC/CXX (target compiler)

e.g. when compiling NetBSD on a FreeBSD system, HOST_CC/HOST_CXX point to a FreeBSD compiler, which will build a NetBSD cross-compiler (CC/CXX) that runs on the host system.

For objective 1), my issue turned out to be that there are some Makefiles in the NetBSD sources that prefix some commands with ``/usr/bin/env -i``, which clears the environment. In my case, my ccache command invocation requires CCACHE_DIR/CCACHE_PATH/PATH to be set appropriately, which ``/usr/bin/env -i`` breaks. Fair is fair, so my workaround was simply to use the env command myself in HOST_CC/HOST_CXX:

    export HOST_CC='env CCACHE_DIR=/whereever CCACHE_PATH=/whereever PATH=/whereever /usr/local/libexec/ccache/cc'

Note: you might have quoting issues if CCACHE_DIR/CCACHE_PATH/PATH contain space characters. Such issues are beyond the scope of this document.

Objective 2) is a bit hairier. My first approach was simply to stick

    CC = <ccache_stuff> ${CC}
    CXX = <ccache_stuff> ${CXX}

in a $MAKECONF file, and point build.sh at that.
This fails because (near as I can tell) in XXX/src/share/mk/bsd.own.mk around line 199 (w/ NetBSD 4.0 sources) there are lines of the form:

    .if ${USETOOLS_GCC:Uyes} == "yes" # {
    CC= ${TOOLDIR}/bin/${MACHINE_GNU_PLATFORM}-gcc
    CPP= ${TOOLDIR}/bin/${MACHINE_GNU_PLATFORM}-cpp
    CXX= ${TOOLDIR}/bin/${MACHINE_GNU_PLATFORM}-c++
    ...

Even though $MAKECONF is included at the top of bsd.own.mk, these lines will override whatever $MAKECONF sets CC and friends to. Although I tried to avoid patching the sources at all (I build from a script, trying to automate things), I caved and added a line at line 208 in XXX/src/share/mk/bsd.own.mk:

    .endif # EXTERNAL_TOOLCHAIN # }
    # below line was added
    .-include "${MAKECONF}"

to force bsd.own.mk to use my CC/CXX values from my $MAKECONF. At the least, you will probably need to ensure that

    CCACHE_PATH='"$tool_dir"'/bin'
    PATH='"$tool_dir"'/bin:'"$PATH"

are in the environment for CC/CXX. In contrast, $tool_dir/bin is NOT needed in these vars for HOST_CC/HOST_CXX. NOTE: $tool_dir can be specified to build.sh via ``-T <dir>``.

Finally, when I had a $MAKECONF with:

    CC = /usr/bin/env \
        CCACHE_DIR=<wherever> \
        CCACHE_PATH=<wherever> \
        PATH=<whatever> \
        /usr/local/bin/ccache \
        <tool_dir>/bin/<target_arch>--netbsd<target_objformat>-gcc

(sans backslashes and newlines) and thought I had won, my compile seemed to hang forever. Not sure what caused this. Anyhow, I ended up creating CC/CXX wrapper scripts (well, changing my build script that calls build.sh to create wrappers). My CC/CXX scripts are just (sans backslashes and newlines):

    #! /bin/sh
    # fill in with yer own paths, target arch., etc.
    exec /usr/bin/env \
        CCACHE_DIR=/usr/obj/ccache \
        CCACHE_PATH=/XXX/TOOLS/bin \
        PATH=/XXX/TOOLS/bin:<rest of $PATH> \
        /usr/local/bin/ccache \
        /XXX/TOOLS/bin/<arch>--netbsd<obj_format>-<gcc/c++> \
        "$@"

NOTE: "$@" is important. $* will not handle multiple args containing spaces correctly.
And the $MAKECONF I (my script) passes to build.sh is simply CC = /xxx/path_to_cc_wrapper CXX = /xxx/path_to_cxx_wrapper YMMV, but this setup works for me. If anyone knows better ways to do things, feel free to update this guide with your way of doing things. In particular, a method that does not require patching NetBSD sources at all, even if it is just a single line.
https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/tutorials/using_ccache_with_build_sh.mdwn?rev=1.2;content-type=text%2Fx-cvsweb-markup
The QFont class specifies a font used for drawing text. More...

#include <QFont>

Note: All the functions in this class are reentrant.

The QFont class specifies a font used for drawing text. Note that a QApplication instance must exist before a QFont can be used. You can set the application's default font with QApplication::setFont(). You can specify the foundry you want in the family name, e.g. "Helvetica [Cronyx]".

- Rendering option for text this font applies to. This enum was introduced in Qt 4.4.
- Returns the current capitalization type of the font. This function was introduced in Qt 4.4. See also setCapitalization().
- Returns the letter spacing for the font. This function was introduced in Qt 4.4. See also setLetterSpacing(), letterSpacingType(), and setWordSpacing().
- Returns the spacing type used for letter spacing. This function was introduced in Qt 4.4. See also letterSpacing(), setLetterSpacing(), and setWordSpacing().
- Returns an ATSUFontID.
- Returns true if overline has been set; otherwise returns false. See also setOverline().
- Returns the pixel size of the font if it was set with setPixelSize(). Returns -1 if the size was set with setPointSize() or setPointSizeF(). See also setPixelSize(), setPointSize(), QFontInfo::pointSize(), and QFontInfo::pixelSize().
- Only on X11, when Qt was built without FontConfig support, the XLFD (X Logical Font Description) is returned; otherwise an empty string.
- Sets the capitalization of the text in this font to caps. A font's capitalization makes the text appear in the selected capitalization mode. This function was introduced in Qt 4.4. See also capitalization().
- Sets the family name of the font. The name is case insensitive and may include a foundry name. This is not necessarily true for all fonts. See also kerning() and QFontMetrics.
- Sets a font by its system specific name. The function is particularly useful under X, where system font settings (for example X resources) are usually available in XLFD (X Logical Font Description) form only. You can pass an XLFD as name to this function.
- Sets the stretch factor for the font. See also QFont::Stretch.
- If enable is true, sets strikeout on; otherwise sets strikeout off. See also strikeOut() and QFontInfo.
- Sets the style of the font to style. See also style(), italic(), and QFontInfo.
- Sets the style hint and strategy to hint and strategy, respectively.
- Returns the first family name to be used whenever familyName is specified. The lookup is case insensitive. If there is no substitution for familyName, familyName is returned. To obtain a list of substitutions use substitutes(). See also setFamily(), insertSubstitutions(), insertSubstitution(), and removeSubstitution().
- Returns the word spacing for the font. This function was introduced in Qt 4.4. See also setWordSpacing() and setLetterSpacing().
http://doc.trolltech.com/4.5/qfont.html
Welcome back to Twisted Conch in 60 Seconds, the documentation series about writing SSH servers (and eventually, clients) with Twisted. In earlier entries, I've covered some of the basics of accepting client connections and generating output. In this edition, I'll cover accepting input from the client.

Recall that in the previous two example programs, a SSHChannel subclass was responsible for sending some output to the client connection. The same object is going to have input from the client delivered to it. Some of you may not even be surprised to learn that the way this is done is that the channel has its dataReceived method called with a string:

    class SimpleSession(SSHChannel):
        def dataReceived(self, bytes):
            self.write("echo: " + repr(bytes) + "\r\n")

The single argument to dataReceived, bytes, is a str containing the bytes sent from the client. This simple implementation of dataReceived escapes the received data with repr so it's easy to see what bytes were actually received, and then sends them back with a little formatting.

As you might expect, dataReceived is being passed bytes from a reliable, ordered, stream-oriented connection. That is, it's a lot like TCP. This means you need to be careful about message boundaries, possibly buffering up several calls' worth of data before handling it. Unlike TCP, of course, these bytes were sent encrypted over the network. This is an SSH tutorial, after all!

Aside from this method, it's still necessary to acknowledge the PTY request the client will send:

    def request_pty_req(self, data):
        return True

But since this example doesn't make use of the terminal name, size, or mode information, all the method needs to do is return True to indicate that the request was successful. Similarly, the shell request must be allowed:

    def request_shell(self, data):
        return True

Again, nothing going on here except a positive acknowledgement of the request so the client will be happy and move on.
That's all of the code that's changed since the last example. Et voilà, a custom SSH server which accepts input and generates output. Next time, the exciting topic of detecting EOF on that input stream...
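Pulling the three methods together: the channel's echo behavior can even be exercised without a running SSH server by stubbing out the base class. The stub below is mine for demonstration only; in the real program, SimpleSession extends twisted.conch.ssh.channel.SSHChannel and write() goes over the encrypted connection:

```python
class StubChannel:
    """Minimal stand-in for twisted.conch.ssh.channel.SSHChannel."""
    def __init__(self):
        self.sent = []

    def write(self, data):
        # the real SSHChannel sends this to the client; we just record it
        self.sent.append(data)

class SimpleSession(StubChannel):
    def dataReceived(self, bytes):
        # echo the received bytes back, escaped with repr()
        self.write("echo: " + repr(bytes) + "\r\n")

    def request_pty_req(self, data):
        # acknowledge the PTY request without using the terminal info
        return True

    def request_shell(self, data):
        # acknowledge the shell request
        return True

session = SimpleSession()
session.dataReceived("ls\n")
print(session.sent[0])
```

Running this prints the echoed line, with the newline in the input shown escaped by repr, followed by the explicit CRLF.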
http://as.ynchrono.us/2011/04/twisted-conch-in-60-seconds-accepting.html
    public class Person
    {
        private string _name;
        private int _personID;
        private string _address;

        public int PersonID
        {
            get { return _personID; }
            set { _personID = value; }
        }

        public string Name
        {
            get { return _name; }
            set { _name = value; }
        }

        public string Address
        {
            get { return _address; }
            set { _address = value; }
        }

        public Person(int dbPersonID, string dbName, string dbAddress)
        {
            this._personID = dbPersonID;
            this._name = dbName;
            this._address = dbAddress;
        }

        public Person(List<Action> ActionList)
            : this(ActionList[0], ActionList[1], ActionList[2])
        {
        }
    }

Notice that I have included two constructors in the class - the first one is a more standard approach, while the second overload is an example of how you could create a new Person from your existing data call (i.e. the one which returns a List<Action>). Not sure if this is the way you were thinking to start with... although this is a recognised good practice / approach. Perhaps you could describe the data in your db table in more detail? It's a bit hard to give a really useful answer without a better understanding of what you're trying to achieve.

It's very important that they are properties; simple variables won't do. If you have a List<Action>, with Action being a class written as I described above, you simply need to set List1 as the grid's DataSource and let the grid generate its own columns, or disable the AutoGenerateColumns property and add your own columns, mapping them to the property names on the class. Everything applies to combos and TextBoxes, except that TextBoxes don't have a DataSource; you must use DataBindings.
https://www.experts-exchange.com/questions/22961674/Using-a-list-as-a-datasource.html
I am currently studying Java. I have a question about inheritance. I am aware you can inherit the variables of the parent class using:

    public class ChildClass extends ParentClass {
    }

    private int number_of_legs;
    private int number_of_eyes;

    Animal first_animal = new Animal(4, 2);
    /* where 4 and 2 are the number of legs and eyes, which
     * I assign through a constructor present in Animal. */

    Dog first_dog = new ??? // I tried with ...new first_animal.Dog but it didn't work.

This will do:

    Dog dog = new Dog(4, 2);

You cannot create an instance of a superclass and treat it as an instance of a subclass. You can say that all Dogs are Animals, but not that all Animals are Dogs. Note that the statement above means:

    // valid and accepted, because a Dog is one kind of Animal
    Animal dog = new Dog(4, 2);

    // invalid and throws a compiler error, because an Animal is not necessarily a Dog
    Dog anotherDog = new Animal(4, 2);

new Animal.Dog is wrong here because it would mean that you have a static nested class Dog inside the Animal class. new first_animal.Dog is wrong here because it would mean that you have an inner class Dog inside the Animal class. Those are different topics you should not cover yet.
https://codedump.io/share/ElqMvplLpWs0/1/instance-of-a-class-inheritance
Fastify is a Web Framework for Node.js that focuses on performance and developer experience. Fastify is similar to Express, Hapi and Restify, and is now ready for its v3.0.0 release, coming in July. The latest version is demonstrated in this video: Fastify has a rich community and can be deeply customized without overhead. In the video, you’ll see fastify-autoload, which enables you to load a folder recursively! In Node 14 or 12.18, it also supports esm, allowing you to use import. For this demo, we begin using a simple global server: import fastify from 'fastify' const app = fastify({ logger: { prettyPrint: !!process.env.PRETTY_LOGS } }) app.get('/', async function (req, reply) { return { hello: "world" } }) app.listen(process.env.PORT || 3000) And we end up with a fully backed application with automatic loading of the routes folder: // app.js file import { join } from 'desm' import autoload from 'fastify-autoload' export default async function (app, opts) { app.register(autoload, { dir: join(import.meta.url, 'routes') }) } Which is also testable: import test from 'tape' import fastify from 'fastify' import fp from 'fastify-plugin' import app from './app.js' test('load the hello world', async ({ is }) => { const server = fastify() // so we can access decorators server.register(fp(app)) const res = await server.inject('/') is(res.body, 'hello world') await server.close() }) Our route then becomes: // routes/hello.js export default async function (app, opts) { app.get('/', async () => { return 'hello world' }) } And we can use the following to start our application: import fastify from 'fastify' const app = fastify({ logger: { prettyPrint: !!process.env.PRETTY_LOGS } }) app.register(import('./app.js')) app.listen(process.env.PORT || 3000) I sincerely hope you like this video introduction to Fastify. About Fastify Fastify was first introduced at Node.js Interactive 2017 in Vancouver in the iconic talk “Take your HTTP server to Ludicrous Speed”. 
The talk covers why Fastify is fast and how we achieved the results without compromising features. Some things have changed since then, but the core of the framework is still there! If you enjoyed this video and if you would like help in bringing Fastify to production, reach out via our contact form. You can also find me on Twitter @matteocollina. You can also watch more of my demos in my Mastering Node.js Series.
https://www.nearform.com/blog/a-closer-look-at-fastify-v3-0-0/
The QSslError class provides an SSL error. More... #include <QSslError> Note: All the functions in this class are reentrant. This class was introduced in Qt 4.3. The QSslError class provides an SSL error. QSslError provides a simple API for managing errors during QSslSocket's SSL handshake. See also QSslSocket, QSslCertificate, and QSslCipher. Describes all recognized errors that can occur during an SSL handshake. See also QSslError::errorString(). Constructs a QSslError object. The two optional arguments specify the error that occurred, and which certificate the error relates to. See also QSslCertificate. Constructs an identical copy of other. Destroys the QSslError object. Returns the certificate associated with this error, or a null certificate if the error does not relate to any certificate. See also error() and errorString(). Returns the type of the error. See also errorString() and certificate(). Returns a short localized human-readable description of the error. See also error() and certificate(). Returns true if this error is not equal to other; otherwise returns false. This function was introduced in Qt 4.4. Assigns the contents of other to this error. This function was introduced in Qt 4.4. Returns true if this error is equal to other; otherwise returns false. This function was introduced in Qt 4.4.
http://doc.trolltech.com/4.5-snapshot/qsslerror.html
zzuf internals

This document is an attempt at explaining how zzuf works and how it can be extended to support more functions.

Architecture overview

The zzuf software consists of two parts:
- The zzuf executable
- The libzzuf shared library

Here is the global workflow when zzuf fuzzes a process:
- zzuf reads options from the command line.
- zzuf writes fuzzing information to the environment.
- zzuf preloads libzzuf into the called process and executes it.
- libzzuf reads fuzzing information from the environment.
- libzzuf diverts standard function calls with its own ones.
- The called process runs normally, but any diverted call goes through libzzuf first.

Writing function diversions

Diverted functions are declared using the NEW macro. The address of the original function is stored in a global function pointer using the ORIG macro. The LOADSYM macro takes care of retrieving its address and storing it into the pointer. For instance, this is how the memalign function is declared in its libc header, malloc.h:

    void *memalign(size_t boundary, size_t size);

And here is how memalign is diverted:

    #include <malloc.h>
    #include "libzzuf.h"
    #include "lib-load.h"

    /* ... */

    #if defined HAVE_MEMALIGN
    static void * (*ORIG(memalign)) (size_t boundary, size_t size);
    #endif

    /* ... */

    #if defined HAVE_MEMALIGN
    void *NEW(memalign)(size_t boundary, size_t size)
    {
        void *ret;
        LOADSYM(memalign);
        ret = ORIG(memalign)(boundary, size);
        /* ... */
        return ret;
    }
    #endif

Memory functions

Memory handling functions are diverted in zzuf/trunk/src/lib-mem.c. Functions such as malloc need to be diverted by zzuf in order to monitor global memory usage and detect severe memory leaks. This creates a bootstrapping problem on some platforms: the diverted calloc calls the real calloc, which needs to be loaded using dlsym. On Linux, dlsym calls calloc, resulting in an infinite loop.
To avoid this, we declare a private static buffer that memory allocation functions can use if the original function is not yet loaded.

Signal functions

Signal handling functions are diverted in zzuf/trunk/src/lib-signal.c. These functions need to be diverted to prevent the fuzzed application from intercepting fatal signals such as SIGSEGV.

File and socket functions

File descriptor handling functions are diverted in zzuf/trunk/src/lib-fd.c and zzuf/trunk/src/lib-stream.c. The most important part of zzuf is the way file descriptor functions are diverted. It keeps track of all open file descriptors, decides whether to fuzz their data, and makes diverted reading functions behave accordingly.
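The NEW/ORIG pattern above boils down to "wrap the original function and forward to it". For readers less at home with C macros, here is the same divert-and-forward idea expressed in Python; this is purely illustrative, since zzuf itself works via LD_PRELOAD and C function pointers, and all names below are invented:

```python
import functools

_originals = {}   # plays the role of the ORIG() function pointers

def divert(namespace, name):
    """Replace namespace[name] with a wrapper that monitors and forwards."""
    def decorator(wrapper):
        _originals[name] = namespace[name]          # LOADSYM equivalent
        @functools.wraps(_originals[name])
        def diverted(*args, **kwargs):
            return wrapper(_originals[name], *args, **kwargs)
        namespace[name] = diverted                  # install the NEW() version
        return diverted
    return decorator

# a stand-in for a libc function we want to monitor
def read(fd, size):
    return b"x" * size

table = {"read": read}

@divert(table, "read")
def counting_read(orig, fd, size):
    counting_read.calls += 1   # monitoring, like zzuf's fd tracking
    return orig(fd, size)      # forward to the original
counting_read.calls = 0

table["read"](0, 4)
print(counting_read.calls)  # 1
```

Every call through the table now passes through the wrapper first, just as every diverted libc call in a fuzzed process passes through libzzuf before reaching the real implementation.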
http://caca.zoy.org/wiki/zzuf/internals
Transport Agents
Microsoft Exchange Server 2010

To read and change TNEF or MIME message contents, use the TnefReader, TnefWriter, MimeReader, MimeWriter, and MimeDocument classes.

The following are the prerequisites that are required for you to implement an agent:
- A computer running Exchange 2010 that has the Edge Transport or the Hub Transport server role installed
- The Microsoft .NET Framework 2.0 SDK

We also recommend that you install Microsoft Visual Studio 2008. You can implement transport agents by using either Microsoft Visual Basic .NET or C#.

Requirements for Implementing a Transport Agent

The three agent types that are available are SMTP receive agents, routing agents, and delivery agents. Each agent type defines a specific set of events that are available in the context in which they run. For more information about those events, see the following:

In your code, reference the Microsoft.Exchange.Data.Transport namespace. Add a reference to the namespace specific to the type of transport agent you are implementing. The following table lists the namespace to reference for each type of transport agent.

In your agent, implement derived classes that inherit the respective factory and agent base classes for the type of agent that you are implementing. The following table lists the classes from which to derive for each agent type.

Responding to Transport Events

The SmtpReceiveAgent, RoutingAgent, and DeliveryAgent classes expose events that your agent can handle. For example, an event handler might modify the message subject:

    e.MailItem.Message.Subject += " - this text appended by MyAgent";

After you compile your agent to a
https://msdn.microsoft.com/en-us/library/aa579185.aspx
From: Ritchey Lee (leeritchey@earthlink.net)
Date: Fri Mar 30 2001 - 08:05:29 PST

I have not revised the book that was published in 1992. I'm hoping to replace it sometime next year. Meanwhile, I have a 200+ page book that I copy for my high speed classes. Unfortunately, the only way to get it is to attend one of the classes. For those who might wish to do this, there are four public classes coming up in May. These classes are put on by UC Berkeley. You can see the schedule and cities by logging onto my web site. You can enroll from that web site if you see a class near you that you want to attend. The cities are Boston, LA, Chicago and San Francisco.

Lee

Steeve Gaudreault wrote:
> 1. High Speed Pcb Design 4th Edition - 1996
> 2. High-Speed Digital Design : A Handbook of Black Magic - 1992
http://www.qsl.net/wb6tpu/si-list/0299.html
CC-MAIN-2016-40
refinedweb
153
85.39
I'm trying to write a program that asks the user for a number between 1 and 10, then prints a line of that many "X"s. The program compiles, but I cannot figure out how to make the variable 'total' print X's instead of the actual number, if that makes any sense. Thanks in advance.

#include <iostream>
#include <string>
using namespace std;

int main()
{
    int numExs;
    string totalxs;
    cout << "Please enter the number of Xs (1-10): ";
    cin >> numExs;
    if (numExs >= 1 && numExs <= 10)
    {
        int total = 0;
        for (int x = 1; x <= numExs; x++)
        {
            total += x;
            totalxs = total * 'X';
        }
        cout << totalxs;
    }
    else
    {
        cout << "Please follow the directions! " << endl;
    }
}
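The underlying problem is that `total * 'X'` multiplies integers ('X' promotes to its character code, 88 in ASCII) rather than repeating a character. One possible fix, sketched here with std::string's fill constructor (the helper name is mine, not from the post):

```cpp
#include <string>

// Build a string of n 'X' characters. std::string's fill constructor
// creates n copies of a character, which is what the loop was trying
// to achieve.
std::string makeXs(int n) {
    if (n < 1 || n > 10) return "";  // mirror the original range check
    return std::string(n, 'X');      // n copies of 'X'
}
```

In the original program the whole loop could then be replaced by a single `totalxs = makeXs(numExs);`.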
https://www.daniweb.com/programming/software-development/threads/202371/change-int-to-of-x-s
CC-MAIN-2017-30
refinedweb
112
69.45
webparsa

This project uses XML templates to extract data from websites for you, with almost no code. XML templates are used to mimic the structure of the HTML itself, allowing you to make intuitive selectors. You could literally copy and paste website code, and specify which attributes are the variables you want, and it would work.

Storing single values

Note: for images, use the <p_img> tag, because img tags can't have children in HTML. To extract a certain value from a part of the element, use the <value> tag. Value tags need two things:
- name: the name of the variable to store under
- (inner text): the attribute to store.

Storing lists of values

To import a list of similar divs as a python list, wrap the single div that encloses the tags with <list> tags. List tags need only one attribute:
- name: the name of the variable to store under

Doesn't have to be the direct parent of a value!

Storing dicts of values

To group some values together as a dict, wrap the values in a <dict> tag. Requires a name attribute, like <list> and <value>. Doesn't have to be the direct parent of a value!

Possible attributes:
- self.attrs.(any attribute): attributes from the HTML tag
- self.text: inner text
- self.element: BeautifulSoup element

Filtering

To select an element with HTML, just write the HTML element. For example, writing <div class='foo'> will select any divs with class foo. To filter any attribute, use "filter.*" as an attribute in the element.
- filter.index=N: this element must be at select(element)[N]
- filter.regex.*=REGEX: this attribute must match a certain regex. Examples: filter.regex.text=.+, filter.regex.attrs.data=\d+.
- filter.function=*: you define a function for us, passed as a keyword argument during the constructor.
Then, we pass a dict containing attributes ('text', 'element', 'index', and another dictionary called 'attrs'), and your function returns False if the node should be rejected.

Post processing

In any tag, you can add the attribute "after" to run on any <list>, <dict>, or <value>'s value. For example,

<div id=number>
    <value name=number after=int>self.text</value>
</div>

will call the user-defined function int on the value returned from self.text. This applies to any node in the XML tree, including HTML elements. You can also call this after in <list> and <value> tags. NOTE: in lists, this function will be called on the entire list, NOT on individual elements!

To define the 'int' function, pass it in the constructor as Parsa((structure), int=function). Something that might be useful would be to have a function called df, that makes a pandas dataframe from a list element.

Default postprocessing functions:
- (User-defined functions)
- .<...>: runs type(value).<...>(value). Essentially value.<...>(). Example: ".strip" -> x.strip()
- Built-in functions like int, float, str, list, dict, etc. Any attribute of the module builtins.

Other postprocessing functions:
- remove_commas: x.replace(",", "")
- split_commas: x.split(",")
- split: x.split(" ")

You can use function composition by adding a "+" between function names. For example: remove_commas+int: "1,000,000" -> "1000000" -> 1000000. If you want to use more than one argument, I suggest writing a wrapper function or making a partial with functools.partial.

Required content

By default, all selectors must exist for a datapoint to be stored. However, if you want a datapoint to be optional, wrap the selector in <unrequired>.

Example washington-post.xml

Hopefully this explains enough about how it works!

THIS GETS WASHINGTON POST HEADLINES
<list name=headlines> // stores children as dicts in a list called 'headlines'
  <div filter. // filter.
// finds divs with class headline
    <a> // finds a link
      <value name=link>self.attrs.href</value> // stores the link's href attribute to 'link'
      <value name=headline>self.text</value> // stores the link's text to 'text'. other possibility is 'element', which stores the BS4 node.
    </a>
  </div>
  <span class="author" filter. // finds spans with class author
    <a filter. // finds any link
      <value name=author>self.text</value> // stores the text to 'author'
    </a>
  </span>
  <div class="art" filter. // finds divs with class art
    <p_img filter. // img doesn't let you put stuff inside, so it's called p_img
      <value name=image_url>self.attrs.data-hi-res-src</value> // stores an attribute to 'image_url'
    </p_img>
  </div>
</div>
</list>

washington-post.py

import webparsa
import requests

parser = webparsa.Parsa(washington_post_xml_text)
website_content = requests.get("")
for headline in parser.parse(website_content)['headlines']:
    print(headline['headline'], headline['author'], headline['image_url'])

License

Standard MIT license.
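The "+" composition described earlier can be modeled in a few lines of plain Python. This is an illustrative sketch of the semantics only, not webparsa's actual implementation; the helper names mirror the documented postprocessors.

```python
# Stand-in for the documented remove_commas postprocessor (illustrative only).
def remove_commas(x):
    return x.replace(",", "")

def compose(*funcs):
    """Chain postprocessors left to right, as 'remove_commas+int' does."""
    def chained(value):
        for f in funcs:
            value = f(value)
        return value
    return chained

# 'remove_commas+int': "1,000,000" -> "1000000" -> 1000000
to_int = compose(remove_commas, int)
```

For functions needing more than one argument, functools.partial slots into the same chain, as the README suggests.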
https://pypi.org/project/webparsa/
CC-MAIN-2022-27
refinedweb
799
59.3
A new version (0.9) of RUR: a Python Learning Environment has been released. It now includes a Spanish version. It has been updated and makes use of wxPython 2.6. See the change log for details.

New release for RUR - a Python Learning Environment. This release uses the new wxPython namespace. Minor changes to the GUI have been made following suggestions by high school students and their teachers who have been early adopters; these changes should make the program slightly more user friendly.

This version is mostly a bug-fix version:
- unicode problem corrected (bug introduced in version 0.8.5)
- line number information on syntax errors corrected
- removed the URL browser capability
- corrected typo and changed the explanation of next_to_a_beeper() in lessons
- corrected name of robot in one lesson

Additions:
- tower of Hanoi solution (.rur file) (with two typical .wld files)
- 8 queen puzzle solution (.rur file)

... read more

rur-ple is a 'python learning environment'. The new release includes new advanced examples of potential interest to teachers using it. Many minor bugs (related to the appearance of rur-ple) have been fixed.

Version 0.8 of RUR-PLE is now available. Among the many changes are a bilingual (English/French) interface, as well as many new lessons (since RUR-PLE includes a Python tutorial). Note that the project home page does not reflect the new status of the project. Major updating of the project home page is planned for the near future.
https://sourceforge.net/p/rur-ple/news/?source=navbar
CC-MAIN-2016-50
refinedweb
242
68.16
I'm working on a tool that needs to instrument constructors of a class by ensuring that a few lines of code are always called at the end of the constructor execution unless an exception takes place. For example,

public class Foo {
    public Foo() {
        // normal code
        // instrumented code
    }
}

I'm having a problem, though, figuring out the best way to do this. The toughest problem seems to be related to flow control. For example, the user may return from the constructor from many different points, and I have to ensure that my code is always called. Originally, I wanted to skip the flow control issues by "renaming" the user's constructor and then implementing a new constructor, like:

public class Foo {
    public Foo() {
        user_Foo();
        // instrumented code
    }

    public void user_Foo() {
        // old constructor code
    }
}

But this gets complicated fast, because I have to deal with the super call to the parent class. My next thought was to use finally to skip flow control again, but little did I realize that the finally language construct is actually achieved by the compiler determining flow control, and not through any special support by the VM. So, I've given up trying to skip flow control, and I'm currently working with the idea of replacing each return statement with a goto that jumps to the new logic which is added to the end. (Should be able to do this, because all returns from a constructor should be the same.)

So, my primary question is this: What's the easiest way to instrument the constructor to add the new logic?

Some other questions: There's no guarantee that the "this" reference will be in slot 0 by the time my code executes, right? (There doesn't seem to be any hard and fast rule that says argument slots shouldn't be overwritten - I've seen javac overwrite them with impunity.) So I'll have to add some instructions at the top of the method to immediately store it to an unused slot, correct?
When instrumenting a method, I need to create a new one and selectively copy instructions from the old one. Will the jump targets remain correct when I add or remove instructions, as long as I don't remove the targeted instructions? If I use InstructionFactory.addLocalVariable before I begin adding any of the old instructions to the new method, will the old instructions retarget adjusted variable slots?

Thanks much and God bless,
-Toby Reyelts

--
To unsubscribe, e-mail: <mailto:bcel-user-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:bcel-user-help@jakarta.apache.org>
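As a source-level illustration of the "renaming" approach described in the message (class and method names here are illustrative, and this sidesteps the super() complication the author mentions, since at the bytecode level the super call must remain in the real constructor):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the transformed class: the user's constructor body is moved
// into a helper method, and the generated constructor calls it and then
// runs the instrumented code, so the added lines always execute last
// unless an exception escapes the user code.
class Foo {
    static final List<String> trace = new ArrayList<>();

    public Foo() {
        userFoo();                  // original constructor body
        trace.add("instrumented");  // code appended by the tool
    }

    private void userFoo() {
        trace.add("user code");     // what the user originally wrote
    }
}
```

This sidesteps the multiple-return problem entirely, since every return in userFoo() falls back into the single call site.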
http://mail-archives.apache.org/mod_mbox/jakarta-bcel-user/200210.mbox/%3C20021001144705.JTSP9928.rwcrmhc52.attbi.com@rwcrwbc57%3E
CC-MAIN-2019-22
refinedweb
434
57
On Tue, Aug 19, 2014 at 01:17:02AM +0200, Cyril Brulebois wrote:
>[ Adding -accessibility@ and -cd@ to the loop. ]
>
>Steve McIntyre <steve@einval.com> (2014-08-17):
>> On Sun, Aug 17, 2014 at 01:25:28PM +0200, Cyril Brulebois wrote:
>> >Control: tag -1 confirmed
>> >
>> Yay, definitely. We never did get round to this for Wheezy, so let's
>> get it done now.
>
>On a related note: we have an amd64-i386 “multi-arch” netinst image.
>I'd be happy to take opinions on the following questions since that's
>the only image linked directly from, which leads some
>people to call it “_the_ default installation image”…
>
>Its boot menu reads right now (at least in Jessie Beta 1):
>  Install
>  64 bit install
>  Graphical install
>  64 bit graphical install
>  Advanced options
>  Help
>  Install with speech synthesis
>  64 bit speech install
>
>FWIW, I'm tempted to modify it so that it becomes:
>  Install
>  Graphical install
>  64 bit install
>  64 bit graphical install
>  Advanced options
>  Help
>  Install with speech synthesis
>  64 bit speech install
>
>This means switching items #2 and #3, so that we have 32-bit and 64-bit
>entries together (which is what happens in the "Advanced options"
>sub-menu). Speech synthesis entries can be kept together separately
>(see below).
>
>=> debian-boot/cd@: anyone against such a change?

I'm more tempted to have:

#if (amd64) via syslinux
  64 bit Graphical install
  64 bit Text install
#endif
  32 bit Graphical install
  32 bit Text install
  Advanced options >
  Help
#if (amd64) via syslinux
  Install with speech synthesis (64 bit)
#endif
  Install with speech synthesis (32 bit)

or do we split things even more? That menu is already too long, and causes scrolling for people to see the lower options (if they realise such a thing is possible!).
How about we split things up some more, assuming we can get the auto-detect to work:

#if (amd64) via syslinux
  64 bit Graphical install
  64 bit Text install
  32 bit install options >
  Advanced options >
  Help
  Install with speech synthesis (64 bit)
#else
  32 bit Graphical install
  32 bit Text install
  Install with speech synthesis (32 bit)
#endif

It'll need some extra work to deal with the different paths through for i386 and amd64 here, but meh. It's possibly worth separating them totally, and making sure each path is clear in terms of which arch. On the multi-arch CD and DVD, the deeper "advanced options" menus are a bit too spread I think, so splitting at the top level would be a good plan for simplicity maybe?

? Definitely - see above!

>Since the menus can be confusing a bit, I'm also wondering whether
>we should be explicit about the non-"64 bit" items, and prefix them
>with "32 bit".
>
>=> debian-boot/cd@: opinions?

Definitely - see above!

--
Steve McIntyre, Cambridge, UK. steve@einval.com
"Further comment on how I feel about IBM will appear once I've worked out
whether they're being malicious or incompetent. Capital letters are
forecast." Matthew Garrett,
https://lists.debian.org/debian-cd/2014/08/msg00063.html
CC-MAIN-2019-18
refinedweb
499
63.43
JDBC stands for Java Database Connectivity. We use JDBC for connectivity with a database: it establishes a connection to access the database and provides a set of classes in the java.sql package for Java applications to communicate with databases. Most databases use a language called SQL, which stands for Structured Query Language. JDBC gives you the ability to communicate with standard databases.

JDBC includes four components:

1. The JDBC API
The JDBC API gives programmatic access to data from Java. Using it, applications can execute SQL statements, retrieve results, and apply updates to the database. The JDBC API is part of the Java platform and is included in the Java Standard Edition.

2. JDBC Driver Manager
The JDBC DriverManager is a class in the JDBC API. Objects of this class can connect Java applications to a JDBC driver. DriverManager is a very important part of the JDBC architecture.

3. JDBC Test Suite
The JDBC driver test suite helps verify that JDBC drivers will run your program. The tests are not exhaustive, but they do exercise the important features of the JDBC API.

4. JDBC-ODBC Bridge
The Java Software bridge provides JDBC access via ODBC drivers. You have to load ODBC binary code on client machines to use this driver. This driver matters when application server code has to be in Java in a three-tier architecture.

The program below describes how to run a JDBC program with MySQL. JDBCExample.java is a program for understanding how to establish the connection with a database. The Url, Connection, Driver, Statement, ResultSet, etc., and how to process Java code against a database, are covered here.

JDBCExample.java

Description of program
This program makes the connection between a MySQL database and Java with the help of several API interfaces. It first connects, then executes a query and shows the result; in the above example it displays all employees in the existing table Employee.

Description of code

1.
Connection
An interface in the java.sql package that provides the connection between the database (such as MySQL) and Java files. SQL statements are executed within the context of the Connection interface.

2. Class.forName(String driver)
forName() is a static method of the "Class" class. It loads the driver class and returns a Class instance. It takes a string value as an argument and matches the class with that string.

3. DriverManager
This class of the java.sql package manages the JDBC drivers. Each driver has to be registered with this class.

4. getConnection(String url, String userName, String password)
This static method of the DriverManager class makes a connection to the database url. It takes the given arguments of string type.

5. con.close()
This method of the Connection interface is used for closing the connection. All the resources occupied by the database connection will be freed.

6. printStackTrace()
This method is used to show error messages. If the connection cannot be made, an exception is thrown and the message is printed.

Java Database Connectivity Steps
Some steps of JDBC are given below:

1. First import the java.sql package so we can interact with the database. The Connection interface defines methods for interacting with the database. It is used to instantiate a Statement by using the createStatement() method.

5. Executing a statement with the Statement object
This interface defines methods which are used to communicate with the database. This class has three methods to execute statements: executeQuery(), executeUpdate(), and execute(). For SELECT statements, the executeQuery() method is used. To create or modify tables, the executeUpdate() method is used. executeQuery() returns the result of the query in the form of a ResultSet object, and executeUpdate() returns the number of rows affected by the execution of the query.

6. Getting a ResultSet object
Executing the executeQuery() method returns the result in the form of a ResultSet object.
We can now operate on this object to extract the row values returned from the execution of the query. Its next() method advances the cursor row by row (starting before the first row), and the getX() methods can be used to get different types of data from the current row of the result set.

Posted on: June
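The cursor pattern described in steps 5 and 6 can be illustrated with a tiny in-memory stand-in. This is NOT java.sql.ResultSet, just a toy class showing how next() advances the cursor and how getX(column) reads the current row:

```java
import java.util.List;
import java.util.Map;

// Toy stand-in for the ResultSet cursor pattern (not java.sql.ResultSet).
class ToyResultSet {
    private final List<Map<String, Object>> rows;
    private int cursor = -1;  // starts before the first row, like a real ResultSet

    ToyResultSet(List<Map<String, Object>> rows) {
        this.rows = rows;
    }

    boolean next() {          // advance; returns false past the last row
        cursor++;
        return cursor < rows.size();
    }

    String getString(String column) {
        return (String) rows.get(cursor).get(column);
    }

    int getInt(String column) {
        return (Integer) rows.get(cursor).get(column);
    }
}
```

With a real JDBC ResultSet the consuming loop looks the same: while (rs.next()) { String name = rs.getString("name"); ... }.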
http://www.roseindia.net/jdbc/jdbc-example.shtml
CC-MAIN-2017-04
refinedweb
684
59.6
There are a number of ways provided by Microsoft to create a setup project for a windows application. But when I started to create one, I got nothing but queries and confusion about how to start and where to start. There are numerous articles I found explaining how to create a setup project, but some did not work, and some did not have a live example to follow. The driving force for me to write this article is my QC team, who accept the main application for testing, and who also verified my setup installer with their 100% effort. And guess what, they successfully found bugs in that too. In this article I would like to explain a step by step process to create a windows application and a setup installer for the same in a very simple manner, that is easy to understand and follow, knowing that there are a number of other ways to do the same thing.

First, let's create a simple one-form windows application, with only a text box and a button. Creating a windows application is just for the sake of having one to install. I gave the name CreatingInstaller to my windows application; obviously you can choose your own. Adding a new Windows Form Application to my solution and adding a text box and button to the default form resulted in the figure as shown below. Decorate the control properties however you want. I just wanted to write a few lines of code, so I bound the button's click event to show the text box's text.

So far so good. Now that you have your cube... Now let's create an installer for the same windows application. Right click on the solution and add a new project to your solution like in the following figure: then add a setup project via Other Project Types->Setup and Deployment->Visual Studio Installer. The project will be added to the solution. Now open the File System editor by clicking on the project and selecting the option to open the file system editor, like the add output project window, and select it as a primary output as shown below and click OK.
The Primary output will be added as shown below, having its type defined as Output. In the meanwhile, let's add some more functionality to our windows application. Let's read a file and show its output in a message box upon the button click. We can create files and folders at the time of installation. Now we also need this Input folder and a Sample.txt file at the time of installation, located at the location of the installed application. For file operations I added the namespace System.IO, though it is unnecessary to do so.

Therefore, running the application will show two message boxes, one after the other, showing the text box text and the text from the Sample.txt file. Now this folder creation logic has to be implemented in our setup project: set the Always Create property to True. That means the folder will always be created whenever we run the installer, after a fresh build release. You can decorate your form to add an icon to it, and that icon will also be required at the time of installation to create a shortcut icon to our application. Add an icon to the form like in the below mentioned figure:

Time to add the shortcut on the desktop when the application launches. The below figures explain how to add an icon. Cut the shortcut created at Application Folder and paste it under the User's Desktop folder. For shortcuts to be created in the User's Program Menu, add a new folder to the User's Program Menu. This will be created at the program's menu location in that folder. Create a new shortcut pointing to the primary output as we did when we created a desktop shortcut. The three images below describe this.

We always have the option to uninstall the application from the control panel's Programs and Features list, but how about creating our own uninstaller? That would also be under the programs menu, so we do not have to disturb the control panel. Right click on File System on Target Machine and Add Special Folder->System Folder as shown in the below figure.
Right click on the newly created system folder and browse for the msiexec.exe file in the local Windows System32 folder. This file takes responsibility for installing and uninstalling the application based on certain parameters specified. Set the properties of the file exactly as shown in the figure. Now create a new shortcut under the User's Program Menu and point its source to msiexec as shown below. You can add more icons and a name to your shortcut. I have given it the name "Uninstall."

Press the F4 key after selecting the setup project. We see a list of properties which we can customize as per our installation needs, like Product name, Author, and Installation location. I'll not go into a deep discussion about all of this, as they are quite easy to understand and set. Just take note of the product code shown below in the list of properties. We need the product code as a parameter to msiexec for uninstallation. Right click the Uninstall shortcut and set the arguments property as shown in the below figure:

/x {product code} /qr

/x is for uninstallation. You can get the whole detailed list of parameters and their use at. Choose whichever one you like. Save all and rebuild the setup project. Now our setup is ready to install our windows application. Just browse the debug folder location of the Setup project. We find an msi and a setup.exe; you can run either to initiate setup. When we run it, we see a setup wizard with screens that welcome the user and ask for the location to install (while a default location is already set). After completing the wizard, click the Close button. Now that the job is done, we can see shortcuts to the application created on the desktop and in the User's Program Menu like in the below given figure. If we navigate to the installation location we can also see the Input folder created and the Sample.txt file resting inside it. Run the application and see the output. Click on Uninstall to remove the application.
The wizard launches as shown below. I just wanted to give a glimpse of the Custom Actions we can define while creating the setup. Custom actions are actions which contain customized functionality, apart from the default behavior, at the time of installation and uninstallation. For example, my QC team reported a bug that when running the application while simultaneously uninstalling it in the background, the application still keeps on running. As per them, it should show a message or close during the uninstallation. It was hard to explain to them the reason for this, so I opted for implementing their desire in the setup project. My need was to write code for the uninstallation, so I wrote a few lines to fulfill the need. The code contains the logic to find the running EXE name at the time of uninstallation; if it matches my application EXE name, it kills the process. I am not going into more detail on it here.

What if the installation machine does not have a .NET framework? We can specify our own package supplied with the installation so that our application does not depend on the .NET framework of the client machine, but points to the package we supplied with it to run. Right click on the Setup project to open the properties window. Here we can specify prerequisites for the application to install. Just click on the Prerequisites button, and in the opened prerequisites window, select the checkbox for the .NET Framework the application needs, and select the radio button at number 2 (i.e. Download prerequisites from the same location as my application). Press OK, save the project, and re-build it. Now when we browse the Debug folder of the Setup project we see two more folders as a result of the actions we performed just now. This whole package has to be supplied to the client machine for the installation of the application. Now re-install the application from setup.exe, and launch it using the shortcuts.
The tutorial covers the basic steps for creating the installation project. I did not go very deep into explaining the registry or license agreements, though. There are many things to be explored to understand and master this topic. However, this article was just a start for a developer to play around with setup and deployment. Happy Coding! Please visit my blog, A Practical Approach, for more informative articles.
https://www.codeproject.com/articles/568476/creating-an-msi-setup-package-for-csharp-windows?msg=4548632
CC-MAIN-2017-13
refinedweb
1,445
63.59
I've been using the newly-released SFML 2.0 as an excuse to learn C++. It's been quite interesting. While messing around with the SFML graphics capabilities I found that I can't seem to reliably re-size the SFML render window without the contents getting completely scrambled. Here is a very simple example (forgive how WordPress mangles it):

#include <SFML/Graphics.hpp>

int width = 200;
int height = 200;
sf::VertexArray varray(sf::Points);
sf::Vertex vertex;

void stripes()
{
    varray.clear();
    varray.resize(width * height);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            vertex.position = sf::Vector2f(x, y);
            vertex.color = sf::Color::White;
            if ((x % 10) == 0) { vertex.color = sf::Color::Red; }
            varray.append(vertex);
        }
    }
}

int main()
{
    sf::RenderWindow window(sf::VideoMode(width, height), "Resize Test");
    stripes();
    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event)) {
            if (event.type == sf::Event::Closed)
                window.close();
        }
        window.clear();
        window.draw(varray);
        window.display();
    }
    return 0;
}

This produces a striped image. Neat. Ok, now I'll re-size my window. Oof, not so neat. But it kind of looks like it's taking my 200 x 200 array of data and stretching it across the window equally. That makes sense, I suppose. Well, the RenderWindow class can detect when it's resized, so what I'll do is capture that event and re-calculate what I'm displaying to be correct for the new size. Here's main() with some re-size detection code that re-runs the function that generates the array of points that gets plotted.
int main()
{
    sf::Vector2u windowsize;
    sf::RenderWindow window(sf::VideoMode(width, height), "Resize Test");
    stripes();
    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event)) {
            if (event.type == sf::Event::Closed)
                window.close();
            if (event.type == sf::Event::Resized) {
                windowsize = window.getSize();
                width = windowsize.x;
                height = windowsize.y;
                stripes();
            }
        }
        window.clear();
        window.draw(varray);
        window.display();
    }
    return 0;
}

However, once again after I re-size the window… Bleh. I've done this several ways, but no dice.

EDIT: I figured out a solution. I don't know if it's THE solution, however. I realized that I'm initializing the RenderWindow at a specific resolution, and that re-sizing the window might not be telling it to re-size the video mode. At the end of my if-statement where I regenerate the vertex array, I put:

window.create(sf::VideoMode(width, height), "Resize Test");

So it starts off like this: And re-sizes into this:

There is one lingering problem. After I release the edge of the window, the whole window "jumps" to a slightly different location. This must be the result of re-creating the window: it's first closing it, and then re-creating it. I wonder if there is a way to put it back right where it was, or if there is a more graceful solution. Here is a gist of the functional program:
https://chrisheydrick.com/2013/06/04/problems-re-sizing-sfml-renderwindow/
CC-MAIN-2017-34
refinedweb
493
61.83
The Exists Query in Spring Data
Last modified: July 29, 2020

1. Introduction

In many data-centric applications, there might be situations where we need to check whether a particular object already exists. In this tutorial, we'll discuss several ways to achieve precisely that using Spring Data and JPA.

2. Sample Entity

To set the stage for our examples, let's create an entity Car with two properties, model and power:

@Entity
public class Car {

    @Id
    @GeneratedValue
    private int id;

    private Integer power;
    private String model;

    // getters, setters, ...
}

3. Searching by ID

The JpaRepository interface exposes the existsById method that checks if an entity with the given id exists in the database:

int searchId = 2; // ID of the Car
boolean exists = repository.existsById(searchId);

Let's assume that searchId is the id of a Car we created during test setup. For the sake of test repeatability, we should never use a hard-coded number (for example "2") because the id property of a Car is likely auto-generated and could change over time. The existsById query is the easiest but least flexible way of checking for an object's existence.

4. Using a Derived Query Method

We can also use Spring's derived query method feature to formulate our query. In our example, we want to check if a Car with a given model name exists, therefore we devise the following query method:

boolean existsCarByModel(String model);

It's important to note that the naming of the method is not arbitrary; it must follow certain rules. Spring will then generate the proxy for the repository such that it can derive the SQL query from the name of the method. Modern IDEs like IntelliJ IDEA will provide syntax completion for that. When queries get more complex (for example, by incorporating ordering, limiting results, and several query criteria), these method names can get quite long, right up to the point of illegibility.
Also, derived query methods might seem magical because of their implicit and "by convention" nature. Nevertheless, they can come in handy when clean and uncluttered code is important and when developers want to rely on a well-tested framework.

5. Searching by Example

An Example is a very powerful way of checking for existence because it uses ExampleMatchers to dynamically build the query. So, whenever we require dynamicity, this is a good way to do it. A comprehensive explanation of Spring ExampleMatchers and how to use them can be found in our Spring Data Query article.

5.1. The Matcher

Suppose that we want to search for model names in a case-insensitive way. Let's start by creating our ExampleMatcher:

ExampleMatcher modelMatcher = ExampleMatcher.matching()
    .withIgnorePaths("id")
    .withMatcher("model", ignoreCase());

Note that we must explicitly ignore the id path because id is the primary key and those are picked up automatically by default.

5.2. The Probe

Next, we need to define a so-called "probe", which is an instance of the class we want to look up. It has all search-relevant properties set. We then connect it to our modelMatcher and execute the query:

Car probe = new Car();
probe.setModel("bmw");
Example<Car> example = Example.of(probe, modelMatcher);
boolean exists = repository.exists(example);

With great flexibility comes great complexity, and as powerful as the ExampleMatcher API may be, using it will produce quite a few lines of extra code. We suggest using this in dynamic queries or if no other method fits the need.

6.
Writing a Custom JPQL Query with Exists Semantics

The last method we'll examine uses JPQL (Java Persistence Query Language) to implement a custom query with exists semantics:

@Query("select case when count(c) > 0 then true else false end from Car c where lower(c.model) like lower(:model)")
boolean existsCarLikeCustomQuery(@Param("model") String model);

The idea is to execute a case-insensitive count query based on the model property, evaluate the return value, and map the result to a Java boolean. Again, most IDEs have pretty good support for JPQL statements. Custom JPQL queries can be seen as an alternative to derived methods and are often a good choice when we're comfortable with SQL-like statements and don't mind the additional @Query annotations.

7. Conclusion

In this tutorial, we saw how to check if an object exists in a database using Spring Data and JPA. There is no hard and fast rule about when to use which method because it'll largely depend on the use case at hand and personal preference. As a rule of thumb, though, given a choice, developers should always lean toward the more straightforward method for reasons of robustness, performance, and code clarity. Also, once decided on either derived queries or custom JPQL queries, it's a good idea to stick with that choice for as long as possible to ensure a consistent coding style.

A complete source code example can be found on GitHub.
https://www.baeldung.com/spring-data-exists-query
Paging ListView using DataPager without using DataSource Control

If you have already used the ASP.NET ListView and DataPager controls, you know how easily you can display your data using custom templates and provide pagination functionality. You can do that in only a few minutes. The ListView and DataPager controls work perfectly fine in combination with DataSource controls (SqlDataSource, LinqDataSource, ObjectDataSource etc.), however if you use them with a custom data collection, without using Data Source controls, you may find some unexpected behavior and have to add a little more code to make it work.

Note: I saw questions related to this issue asked multiple times in different asp.net forums, so I thought it would be nice to document it here.

Let's create a working demo together…

1. Create a sample ASP.NET Web Application project and add a new ASPX page.

2. Add a ListView control and modify the markup so that you will end up having this:

<asp:ListView ID="ListView1" runat="server">
    <LayoutTemplate>
        <ul>
            <asp:PlaceHolder ID="itemPlaceholder" runat="server" />
        </ul>
    </LayoutTemplate>
    <ItemTemplate>
        <li>
            <%# Eval("Name") %> (<%# Eval("Currency") %> <%# Eval("Price") %>)
        </li>
    </ItemTemplate>
    <EmptyDataTemplate>
        No data
    </EmptyDataTemplate>
</asp:ListView>

The ListView control has three templates defined:

- LayoutTemplate – where we define the way we want to represent our data. We have a PlaceHolder where data from the ItemTemplate will be placed. ListView recognizes this automatically.
- ItemTemplate – where the ListView control will show the items from the data source. It automatically iterates over each item in the collection (same as any other data source control)
- EmptyDataTemplate – If the collection has no data, this template will be displayed.

3. Add a DataPager control and modify the markup in the following way:

<asp:DataPager ID="lvDataPager1" runat="server" PagedControlID="ListView1" PageSize="5">
    <Fields>
        <asp:NumericPagerField />
    </Fields>
</asp:DataPager>

We add the DataPager control, associate it with the ListView1 control and add the PageSize property.
After that, we need to define <Fields> where we put the field type and button type. If we were binding from a Data Source Control (SqlDataSource, LinqDataSource or any other…) this would be it and the ListView and DataPager would work perfectly fine. However, if we bind a custom data collection to the ListView without using Data Source controls, we will have problems with the pagination. Let's add custom data in our C# code and bind it to the ListView.

4. C# code adding custom data collection

- Define the Product class

public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public string Currency { get; set; }
}

- Define a method that will create a List of products (sample data)

List<Product> SampleData()
{
    List<Product> p = new List<Product>();
    p.Add(new Product() { Name = "Microsoft Windows 7", Price = 70, Currency = "USD" });
    p.Add(new Product() { Name = "HP ProBook", Price = 320, Currency = "USD" });
    p.Add(new Product() { Name = "Microsoft Office Home", Price = 60, Currency = "USD" });
    p.Add(new Product() { Name = "NOKIA N900", Price = 350, Currency = "USD" });
    p.Add(new Product() { Name = "BlackBerry Storm", Price = 100, Currency = "USD" });
    p.Add(new Product() { Name = "Apple iPhone", Price = 400, Currency = "USD" });
    p.Add(new Product() { Name = "HTC myTouch", Price = 200, Currency = "USD" });
    return p;
}

This method should be part of the ASPX page class (e.g. inside the _Default page class if your page is Default.aspx)

- Bind the sample data to the ListView on Page_Load

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        BindListView();
    }
}

void BindListView()
{
    ListView1.DataSource = SampleData();
    ListView1.DataBind();
}

Now, run the project and you should see the data displayed where only the first five items will be shown. The data pager should have two pages (1 2) and you will be able to click the second page to navigate to the last two items in the collection.
Now, if you click on page 2, you will see it won't display the last two items automatically; instead you will have to click again on page 2 to see the last two items. After that, if you click on page 1, you will encounter another problem where the five items are displayed, but the data for the last two items in the ListView is shown (see the print screen below).

If you notice in the previous two pictures, the behavior doesn't seem to work properly. The problem here is that the DataPager doesn't know about the ListView's current page changing. Therefore, we should explicitly set the DataPager page properties in the ListView's PagePropertiesChanging event.

Here is what we need to do:

1. Add the OnPagePropertiesChanging event to the ListView control

<asp:ListView ID="ListView1" runat="server" OnPagePropertiesChanging="ListView1_PagePropertiesChanging">

2. Implement the ListView1_PagePropertiesChanging method

protected void ListView1_PagePropertiesChanging(object sender, PagePropertiesChangingEventArgs e)
{
    //set current page startindex, max rows and rebind to false
    lvDataPager1.SetPageProperties(e.StartRowIndex, e.MaximumRows, false);

    //rebind List View
    BindListView();
}

Now, if you test the functionality, it should work properly.

Hope this was helpful.

Regards,
Hajan
http://weblogs.asp.net/hajan/paging-listview-using-datapager-without-using-datasource-control
CGTalk > Software Specific Forums > Autodesk Softimage > Problems installing 'Gear' add on.. Help!

Arslan89
04-04-2012, 02:00 AM
Hey, so I am trying to install the 'Gear' addon for XSI. Everything seems to install just fine, however, as soon as I 'Open Synoptic' in the Gear menu, I get this error:

--
Application.gear_OpenSynoptic()
# - [line 162]
# ERROR : Property Page Script Logic Error (Python ActiveX Scripting Engine)
# ERROR : [160] (null)
# ERROR : [161] # Parameters -------------
# ERROR : >[162] if os.path.exists(param_path):
# ERROR : [163] xsi.ExecuteScript(param_path, "Python", "addParameters", [prop])
# ERROR : [164] (null)
# --

Does anyone know what is going on here? I have tried re-installing the add-on multiple times. The scene file was given to me by a company in Italy. I am in the process of collaborating with them. Thanks in advance.

ShaderOp
04-04-2012, 12:44 PM
Did you do that whole song and dance with setting up the PYTHONPATH environment variable?

Arslan89
04-04-2012, 03:20 PM
Hey, thanks for your reply. I did. But now I am thinking maybe THAT is what I am doing wrong? But I don't know how. I have followed everything according to the instructions provided. I went to my computer's advanced settings and then environment variables. I didn't have any 'PYTHON' variables. So I created 2 variables.

1st one:
Variable Name: PYTHON
Value: C:\Python26\;

2nd one:
Variable Name: PYTHONPATH
Value: C:\gear\modules

I placed the Gear add on inside the C:\gear folder ... in there, I also have the module folder that it comes with. So then in XSI, I just locate the Gear addon inside C:\gear and install it. Seems to install correctly, however, I get that error when I open Synoptic. Am I doing something wrong here? My XSI is following the Python that it was installed with, not the one that was installed on my computer (Python 2.6). I don't know why it's not using the one installed on my computer.
I even unchecked the box under the field where you pick what script language to use, and it still 'falls back' onto using the Python it came with. Maybe I don't have it set up correctly? So that XSI uses Python from my computer rather than the one it was installed with? Would that matter? One of the steps calls for "update python libraries" ... I don't know how I would go about doing that, I thought installing Python on my computer and making those variables would do that for me..?

ShaderOp
04-04-2012, 07:16 PM
I remember that failing to properly set up the module was the reason for nearly all Gear installation problems reported on the forums and mailing list, but you seem to have done it correctly as far as I can tell (though I could be mistaken since it has been a while since I used Gear). Only thing I can add is to make sure that you're able to run other Python plugins without issues. Sorry I couldn't be of any more help :blush:

bottleofram
04-05-2012, 05:27 AM
Hi Arslan89,
Gear doesn't mind if Softimage is using its own Python. However, make sure you invoke the Reload Modules command from the Gear menu before you try to open a synoptic. If it still fails, please go through these couple of steps:

Make sure you have System variables set, not User variables. System variables need to be:

Variable Name: Path
Value: C:\Python26\; (no spaces before or after semicolon)

Variable Name: PYTHONPATH
Value: C:\gear\modules (inside the modules dir you need to have a "gear" folder)

Open Softimage and run this Python code:

import sys
print sys.version
import gear
gear.logInfos()

Are you trying to open a synoptic of the rig you got from them? Try building a simple rig (from the chicken template for example), instead. If everything goes alright, try opening its synoptic.

EDIT: CODE tags on this forum are weird - adding spaces inline for no reason... so i replaced them.

Arslan89
04-06-2012, 09:02 AM
Thanks for all the help and replies, I really appreciate them.

---

Hey, I just fixed this problem.
I really don't know how.. but I can tell you exactly what I did. It seems a few people are having this issue and there is just no explanation for it. I even asked Jeremie Passerin regarding this issue (the developer of Gear). For 2 days, I have been uninstalling, re-installing, back and forth, and nothing seemed to help. Anyways, here is what I did:

First off, the thing I did differently this time around was that I turned off my firewall and protection. Then, I installed Python 2.6 and also the pywin32 (64bit) version. Before, when it wasn't working, I did not have pywin32 (64bit). I only had Python 2.6. From what I've read, the 2.6 version of Python is the one you need for XSI.

Link: - Download 'pywin32-212.win-amd64-py2.6.exe'

Then, run this command in the XSI script editor:

---
import gear
gear.logInfos()
---

Then I installed the first version of Gear 1.0.6 and right after that install, I overwrote the files in the module folder with the new version of Gear (1.1.0). I am not sure whether to run it before or after you install Gear in XSI; do both, I don't think it would matter. Just as a precaution. To install, just drag and drop the add-on into the viewport. And from there, everything was working just fine. I don't know exactly WHAT step caused it, but it is working and I don't want to go back and tamper with it.

Now, according to Jeremie, installing the old version of Gear and then installing the new version is stupid and won't matter. However, some people said that it 'just seems to work better that way'. Jeremie obviously knows better, considering he developed the tool and it's awesome. But I just did that anyways, to make sure I don't have any issues. Hope that helps.

[FORGOT TO MENTION]
Also, I forgot to mention: after you install Python on your computer, change the Environment Variables. What I have is a new variable called 'PYTHON' and the value is 'C:\Python26\;' ... then another called 'PYTHONPATH' and the value is 'C:\gear\modules'.
I created that folder myself and placed the correct version of Gear in there. THAT was where I overwrote the older module folder with the new one and also copied the add on in there, as well. Then drag and drop into XSI from that folder.
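Outside of Softimage, a quick way to sanity-check the PYTHONPATH setup described in this thread is to confirm the Gear modules directory is actually visible on Python's module search path. This is only a sketch; the C:\gear\modules path is the one from the posts above, so adjust it to your own install:

```python
import os
import sys

# Path from the posts above; change this if you installed Gear elsewhere.
GEAR_MODULES = r"C:\gear\modules"

def pythonpath_ok(modules_dir, search_paths):
    """Return True if modules_dir appears on the given module search path."""
    norm = lambda p: os.path.normcase(os.path.normpath(p))
    return norm(modules_dir) in [norm(p) for p in search_paths]

# Softimage's embedded Python folds PYTHONPATH into sys.path, so once the
# variable is set correctly this membership test should come back True.
print(pythonpath_ok(GEAR_MODULES, sys.path + [GEAR_MODULES]))  # True
print(pythonpath_ok(GEAR_MODULES, sys.path))
```

If the second check prints False in the XSI script editor, the System (not User) PYTHONPATH variable described above has not taken effect and `import gear` will fail.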
http://forums.cgsociety.org/archive/index.php/t-1044335.html
Data Validation in Laravel: The Right Way.

Once Upon a Time

At some point in time, you probably did your data validation like this:

<?php

$errors = array();

if ( empty( $_POST['name'] ) || ! is_string( $_POST['name'] ) ) {
    $errors[] = 'Name is required';
} elseif ( ! preg_match( "/^[A-Za-z\s-_]+$/", $_POST['name'] ) ) {
    $errors[] = 'Name can have only alphabets, spaces and dashes';
}

if ( empty( $_POST['email'] ) || ! is_string( $_POST['email'] ) ) {
    $errors[] = 'Email is required';
}

//...........
//.... some more code here
//...........

//display errors
if ( ! empty( $errors ) ) {
    for ( $i = 0; $i < count( $errors ); $i++ ) {
        echo '<div class="error">' . $errors[ $i ] . '</div>';
    }
}

Well, that was the stone age for you. Luckily, we have much better and more sophisticated validation packages these days (and have had them for quite some time in PHP). If you use any application framework as the foundation of your app, chances are it will have its own or a recommended 3rd party data validation package, especially if it is a full stack framework. Since Laravel is a full stack framework, it comes with its own validation package. As always, you're not constrained to use that and can use any other data validation package that you want. However, for the purpose of this tutorial, we will stick with what comes with Laravel by default.

Data Validation: The Laravel Way

The source code for this tutorial is available here. You just need to run composer install to install the Laravel framework inside the project directory before you are able to run this code.

Now, continuing the previous example, let's assume we have a form which has a Name and an Email field and we would like to validate that data before saving it. Here's how the previous rudimentary attempt at validation would translate with Laravel.
<?php

$validation = Validator::make(
    array(
        'name' => Input::get( 'name' ),
        'email' => Input::get( 'email' ),
    ),
    array(
        'name' => array( 'required', 'alpha_dash' ),
        'email' => array( 'required', 'email' ),
    )
);

if ( $validation->fails() ) {
    $errors = $validation->messages();
}

//...........
//.... some more code here
//...........

//display errors
if ( ! empty( $errors ) ) {
    foreach ( $errors->all() as $error ) {
        echo '<div class="error">' . $error . '</div>';
    }
}

In the code above we Laravel-ified the data validation by making use of the Validator facade. We passed an array of the data that we want to validate and an array of validation rules according to which we want the data validated. We get an object returned to us and then we check if our data validation failed or not. If it failed then we grab the error messages object (an object of Laravel's MessageBag class) and then loop over it to print out all the error messages. The validation rules we used are built into the validation package and there are many available; some would even check into the database.

Now, it is not uncommon to come across code where the data validation has been placed in odd places, like in Controller methods or in data Models. Data validation code does not belong in either of those places, as such placement defies the concepts of Single Responsibility and DRY (Don't Repeat Yourself).

Single Responsibility: One class should have one and only one job to perform. A Controller's job is to act as glue between the business logic and the client. It should grab the request and pass it on to someone who can process the request; it should not start processing the request itself. Similarly, a Model's job is to act as a broker between the data source and the rest of the application. It should only accept data for saving and give it when asked.
There is more than one school of thought on this; some would add data validation to Models but even those would not put actual data validation code inside a Model class. It would most likely be outsourced to another class which would accept data and tell whether the data is valid or not. So where do we put the code which does data validation and which can be used anywhere in the application?

Validation as a Service

The ideal choice is to move out the validation code into separate class(es) which can be used as needed. Continuing with our previous code example, let's move it into its own class.

Create a directory named RocketCandy inside the app directory. This is our main directory (or domain directory) in which we will put all our custom stuff (services, exceptions, utility libraries, etc). Now create the Services/Validation directory structure inside RocketCandy. Inside the Validation directory, create Validator.php.

Now before we can proceed further, open up your composer.json and after the classmap in the autoload node add the RocketCandy namespace for PSR-4 autoloading. It would look something like this:

"autoload": {
    "classmap": [
        "app/commands",
        "app/controllers",
        "app/models",
        "app/database/migrations",
        "app/database/seeds",
        "app/tests/TestCase.php"
    ],
    "psr-4": {
        "RocketCandy\\": "app/RocketCandy"
    }
},

Then in your terminal, run composer dump-autoload -o so that composer can generate the autoloader for our RocketCandy namespace. Now open up RocketCandy/Services/Validation/Validator.php.
After we move the validation code from above, it will look something like this:

<?php

namespace RocketCandy\Services\Validation;

class Validator {

    public function validate() {
        $validation = \Validator::make(
            array(
                'name' => \Input::get( 'name' ),
                'email' => \Input::get( 'email' ),
            ),
            array(
                'name' => array( 'required', 'alpha_dash' ),
                'email' => array( 'required', 'email' ),
            )
        );

        if ( $validation->fails() ) {
            return $validation->messages();
        }

        return true;
    }

}  //end of class

//EOF

Now we could use this as:

<?php

$validator = new \RocketCandy\Services\Validation\Validator;
$validation = $validator->validate();

if ( $validation !== true ) {
    //show errors
}

This is somewhat better than what we were doing earlier but it is still not ideal, for the following reasons:

- Our Validation class still is not DRY enough. We would need to copy over all this validation code to another class to validate data of another entity.
- Why are we fetching input data inside the validation class? There is no reason to do it there because it would limit us as to which data we can validate. Here we would be able to validate this data only if it came from a form input.
- There is no way of overriding validation rules; they are set in stone.
- The mechanism by which we get to know whether the data validated or not is not clean. Sure, it serves the purpose but this can definitely be improved upon.
- We are using a pseudo static call to Laravel's validation package. This can be improved upon as well.

Solution? We abstract out the validation code a bit further and we make use of exceptions. First, let's make our own custom exceptions. Create the Exceptions directory under RocketCandy and create BaseException.php. Open it up and put the following code in it.
<?php

namespace RocketCandy\Exceptions;

use Exception;
use Illuminate\Support\MessageBag;

abstract class BaseException extends Exception {

    protected $_errors;

    public function __construct( $errors = null, $message = null, $code = 0, Exception $previous = null ) {
        $this->_set_errors( $errors );
        parent::__construct( $message, $code, $previous );
    }

    protected function _set_errors( $errors ) {
        if ( is_string( $errors ) ) {
            $errors = array(
                'error' => $errors,
            );
        }

        if ( is_array( $errors ) ) {
            $errors = new MessageBag( $errors );
        }

        $this->_errors = $errors;
    }

    public function get_errors() {
        return $this->_errors;
    }

}  //end of class

//EOF

Here we created an abstract class and all our custom exceptions would inherit this class. The first parameter for the constructor is what we are concerned with, so let's look at that. We make use of Laravel's MessageBag to store our errors (if they are not in it already) so that we would have a uniform way to loop through and display those errors irrespective of whether the exception was thrown by the validation service or any other. The _set_errors() method thus checks if a single error message as a string was passed or an array of error messages was passed. Accordingly, it stores them in a MessageBag object (unless it is already one, in which case it would be stored as is). And we have a getter method get_errors() which just returns the contents of our class variable as is.

Now, in the same directory create ValidationException.php and its code will be:

<?php

namespace RocketCandy\Exceptions;

class ValidationException extends BaseException {
}  //end of class

//EOF

That's it, we don't need anything else in here; it will be an empty shell because all that we need done will be handled by BaseException. Now, we proceed with re-tooling our Validator class. We need to abstract out the validation code, throw ValidationException on error(s) and allow overriding of validation rules.
So it would look like this:

<?php

namespace RocketCandy\Services\Validation;

use Illuminate\Validation\Factory as IlluminateValidator;
use RocketCandy\Exceptions\ValidationException;

/**
 * Base Validation class. All entity specific validation classes inherit
 * this class and can override any function for respective specific needs
 */
abstract class Validator {

    /**
     * @var Illuminate\Validation\Factory
     */
    protected $_validator;

    public function __construct( IlluminateValidator $validator ) {
        $this->_validator = $validator;
    }

    public function validate( array $data, array $rules = array(), array $custom_errors = array() ) {
        if ( empty( $rules ) && ! empty( $this->rules ) && is_array( $this->rules ) ) {
            //no rules passed to function, use the default rules defined in sub-class
            $rules = $this->rules;
        }

        //use Laravel's Validator and validate the data
        $validation = $this->_validator->make( $data, $rules, $custom_errors );

        if ( $validation->fails() ) {
            //validation failed, throw an exception
            throw new ValidationException( $validation->messages() );
        }

        //all good and shiny
        return true;
    }

}  //end of class

//EOF

Here in this abstract class we have:

- Abstracted out the validation code. It can be used as is for validating data of any entity.
- Removed data fetching from the class. The Validation class does not need to know where the data is coming from. It accepts an array of data to validate as a parameter.
- Removed validation rules from this class. Each entity can have its own set of validation rules either defined in the class or passed as an array to validate(). If you want to define rules in the child classes and want to be sure they're present, I wrote about emulating abstract properties in PHP sometime back.
- Improved the mechanism by which validation failure can be determined. If the data validation fails, the validation service would throw ValidationException and we can get the errors from that instead of checking for returned data types or values etc.
This also means that we can throw another exception if validation rules are not defined. It would be a different exception and we would know immediately that we messed up somewhere.

- Removed the usage of a static call for data validation. In here we now inject Laravel's validation class in our class constructor. If we resolve our validation service out of Laravel's IoC container (which we would) then we would not have to worry about the dependency injection into the constructor here.

Now we would create a validation class for our form which would extend this abstract class. In the same directory create TestFormValidator.php and add the following code into it:

<?php

namespace RocketCandy\Services\Validation;

class TestFormValidator extends Validator {

    /**
     * @var array Validation rules for the test form, they can contain in-built Laravel rules or our custom rules
     */
    public $rules = array(
        'name' => array( 'required', 'alpha_dash', 'max:200' ),
        'email' => array( 'required', 'email', 'min:6', 'max:200' ),
        'phone' => array( 'required', 'numeric', 'digits_between:8,25' ),
        'pin_code' => array( 'required', 'alpha_num', 'max:25' ),
    );

}  //end of class

//EOF

This is the class that we will instantiate to validate our form data. We have set the validation rules in this class, so we would just need to call the validate() method on its object and pass our form data to it.

Note: If you have not made the artisan tool in your project directory an executable then you would need to replace ./artisan in the artisan commands in this tutorial with /path/to/php artisan. It is recommended you make artisan an executable; it saves the needless hassle of prefixing php on every command.

Let's make a Controller and a form properly to take this for a spin. In your terminal, navigate to your project directory and run:

./artisan controller:make DummyController --only=create,store

It would create app/controllers/DummyController.php with two methods – create() and store().
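As the note above suggests, making artisan executable is a one-time chmod. Here's a sketch of the idea using a throwaway stand-in script (the path and shell are assumptions; in a real project you would run the chmod against the artisan file in your project root):

```shell
# Stand-in for the artisan file; in a real project: chmod +x artisan
printf '#!/bin/sh\necho artisan-ok\n' > /tmp/artisan-demo
chmod +x /tmp/artisan-demo
/tmp/artisan-demo        # prints: artisan-ok
```

After that, `./artisan list` (or any other artisan command) runs without the `php` prefix.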
Open up app/routes.php and add the following route directive:

Route::resource( 'dummy', 'DummyController', array(
    'only' => array( 'create', 'store' ),
) );

This will set up our Controller to work in a RESTful way; /dummy/create/ would accept GET requests and /dummy/store/ would accept POST requests. We can remove the only directive from both the route and the artisan command and the Controller would accept PUT and DELETE requests too, but for the current exercise we don't need them.

Now we need to add the code to our Controller, so open up app/controllers/DummyController.php. It would be an empty shell with create() and store() methods. We need to make create() render a view, so make it like this:

/**
 * Show the form for creating a new resource.
 *
 * @return Response
 */
public function create() {
    return View::make( 'dummy/create' );
}

We now need the view which we are rendering here and which will render our form. First let's create a layout file where we can put the HTML boilerplate code. Create a layouts directory inside app/views and create default.blade.php inside it. Note the .blade suffix of the view name here. It tells Laravel that this view uses Laravel's Blade templating syntax and should be parsed as such.
Open it up and add the following boilerplate code to it:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Advanced Data Validations Demo</title>

    <!-- Bootstrap core CSS -->
    <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">

    <!-- HTML5 shim and Respond.js IE8 support of HTML5 elements and media queries -->
    <!--[if lt IE 9]>
    <script src=""></script>
    <script src=""></script>
    <![endif]-->
</head>
<body>
    <div class="row">
        <h1 class="col-md-6 col-md-offset-3">Advanced Data Validations Demo</h1>
    </div>
    <p> </p>
    <div class="container">
        @yield( "content" )
    </div><!-- /.container -->

    <!-- Bootstrap core JavaScript -->
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
    <script src="//netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
</body>
</html>

It is simple boilerplate code for an HTML page which uses Bootstrap. The only thing to notice here is the placeholder where we would inject our view code; @yield( "content" ) will tell Laravel's Blade parser that our view's content section code should be injected at this place.

Now create app/views/dummy/create.blade.php and add the following code to it:

@extends( "layouts/default" )

@section( "content" )

<div class="row">
    <h3 class="col-md-6 col-md-offset-2">Test Form</h3>
</div>
<p> </p>

@if ( !
$errors->isEmpty() )
    <div class="row">
    @foreach ( $errors->all() as $error )
        <div class="col-md-6 col-md-offset-2 alert alert-danger">{{ $error }}</div>
    @endforeach
    </div>
@elseif ( Session::has( 'message' ) )
    <div class="row">
        <div class="col-md-6 col-md-offset-2 alert alert-success">{{ Session::get( 'message' ) }}</div>
    </div>
@else
    <p> </p>
@endif

<div class="row">
    <div class="col-md-offset-2 col-md-6">

    {{ Form::open( array(
        'route' => 'dummy.store',
        'method' => 'post',
        'id' => 'test-form',
    ) ) }}

        <div class="form-group">
            {{ Form::label( 'name', 'Name:' ) }}
            {{ Form::text( 'name', '', array(
                'id' => 'name',
                'placeholder' => 'Enter Your Full Name',
                'class' => 'form-control',
                'maxlength' => 200,
            ) ) }}
        </div>

        <div class="form-group">
            {{ Form::label( 'email', 'Email:' ) }}
            {{ Form::text( 'email', '', array(
                'id' => 'email',
                'placeholder' => 'Enter Your Email',
                'class' => 'form-control',
                'maxlength' => 200,
            ) ) }}
        </div>

        <div class="form-group">
            {{ Form::label( 'phone', 'Phone:' ) }}
            {{ Form::text( 'phone', '', array(
                'id' => 'phone',
                'placeholder' => 'Enter Your Phone Number',
                'class' => 'form-control',
                'maxlength' => 25,
            ) ) }}
        </div>

        <div class="form-group">
            {{ Form::label( 'pin_code', 'Pin Code:' ) }}
            {{ Form::text( 'pin_code', '', array(
                'id' => 'pin_code',
                'placeholder' => 'Enter Your Pin Code',
                'class' => 'form-control',
                'maxlength' => 25,
            ) ) }}
        </div>

        <div class="form-group">
            {{ Form::submit( ' Submit ', array(
                'id' => 'btn-submit',
                'class' => 'btn btn-primary',
            ) ) }}
        </div>

    {{ Form::close() }}

    </div>
</div>

@stop

In this view we first tell Laravel that we want to use the default.blade.php layout using the directive @extends( "layouts/default" ). Then we create the content section as that is the one we have set to be injected in the layout.
The view renders a form with Name, Email, Phone and Pin Code fields using Laravel's form builder (we could use HTML5 fields here with the basic browser validation enabled — like for Email we could use Form::email() — but since we want to check our server side validation we are using normal text fields for input). Also, above the form we check whether we have anything in the $errors var and display it if there are any errors. We also check for any flash message that we might have.

If we now navigate to http://<your-project-domain>/dummy/create (the assumption here is that you have already set up a domain on your development stack for this project) then we will have the form rendered for us. Now we need to be able to accept the data from this form and have the data validated. So back in our DummyController we would inject our TestFormValidator in the constructor and accept the data in store() and validate it. So the Controller would look like this now:

<?php

use RocketCandy\Exceptions\ValidationException;
use RocketCandy\Services\Validation\TestFormValidator;

class DummyController extends BaseController {

    /**
     * @var RocketCandy\Services\Validation\TestFormValidator
     */
    protected $_validator;

    public function __construct( TestFormValidator $validator ) {
        $this->_validator = $validator;
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return Response
     */
    public function create() {
        return View::make( 'dummy/create' );
    }

    /**
     * Store a newly created resource in storage.
     *
     * @return Response
     */
    public function store() {
        $input = Input::all();

        try {
            $validate_data = $this->_validator->validate( $input );

            return Redirect::route( 'dummy.create' )->withMessage( 'Data passed validation checks' );
        } catch ( ValidationException $e ) {
            return Redirect::route( 'dummy.create' )->withInput()->withErrors( $e->get_errors() );
        }
    }

}  //end of class

//EOF

Laravel will take care of the dependency injection in the constructor as all Controllers are resolved out of its IoC container by default, so we don't have to worry about that. In the store() method we grab all the form input in a var and inside try/catch we pass the data to our validation service. If the data validates then it will redirect us back to the form page with a success message, else it will throw ValidationException which we will catch, grab the errors and return back to the form to display what went wrong.

Summary

In this first part, we learned how to do data validations the Laravel way and how to abstract out validation to a service which can be used to validate data for each entity anywhere in the app. In the next and final part we will learn how to extend Laravel's validation package to have our own custom rules. Got thoughts? Questions? Fire away in the comments.
https://www.sitepoint.com/data-validation-laravel-right-way/
I have a gem which has a method that acts differently depending on Rails.env:

def self.env
  if defined?(Rails)
    Rails.env
  elsif ...

In my specs I have tried things like:

Kernel.const_set(:Rails, nil)
Rails.should_receive(:env).and_return('production')

and, in spec_helper:

module Rails; end
rails = double('Rails')
rails.should_receive(:env).and_return('production')

Per the various tweets about this, switching on constants is generally a bad idea because it makes things a bit of a challenge to test, and you have to change the state of constants in order to do so (which makes them a little less than constant). That said, if you're writing a plugin that has to behave differently depending on the environment in which it's loaded, you're going to have to test for the existence of Rails, Merb, etc. somewhere, even if it's not in this particular part of the code. Wherever it is, you want to keep it isolated so that decision happens only once - something like MyPlugin::env. Now you can safely stub that method in most places, and then spec that method by stubbing constants.

As to how to stub the constants, your example doesn't look quite right. The code asks if defined?(Rails), but Kernel.const_set(:Rails, nil) doesn't undefine the constant, it just sets its value to nil. What you want is something like this (disclaimer - this is off the top of my head, untested, not even run, and is not well factored):

def without_const(const)
  if Object.const_defined?(const)
    begin
      @const = Object.const_get(const)
      Object.send(:remove_const, const)
      yield
    ensure
      Object.const_set(const, @const)
    end
  else
    yield
  end
end

def with_stub_const(const, value)
  if Object.const_defined?(const)
    begin
      @const = Object.const_get(const)
      Object.const_set(const, value)
      yield
    ensure
      Object.const_set(const, @const)
    end
  else
    begin
      Object.const_set(const, value)
      yield
    ensure
      Object.send(:remove_const, const)
    end
  end
end

describe "..." do
  it "does x if Rails is defined" do
    rails = double('Rails', :env => {:stuff_i => 'need'})
    with_stub_const(:Rails, rails) do
      # ...
    end
  end

  it "does y if Rails is not defined" do
    without_const(:Rails) do
      # ...
    end
  end
end

I'll give some thought as to whether we should include this in RSpec or not. It's one of those things that, if we added it, people would use it as an excuse to rely on constants when they don't need to :)
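As a self-contained illustration of the remove/restore mechanism these helpers rely on (plain Ruby, no RSpec; the constant name here is made up for the example):

```ruby
# Temporarily remove a constant and restore it afterwards - the core
# mechanism of the without_const helper above.
DEMO_CONST = :original

def without_const(const)
  if Object.const_defined?(const)
    saved = Object.const_get(const)   # save the *value*, not the name
    Object.send(:remove_const, const)
    begin
      yield
    ensure
      Object.const_set(const, saved)  # restore on the way out
    end
  else
    yield
  end
end

inside = nil
without_const(:DEMO_CONST) { inside = Object.const_defined?(:DEMO_CONST) }
after = Object.const_get(:DEMO_CONST)
# inside is false, after is :original
```

The constant is genuinely undefined inside the block, so code that branches on defined?(DEMO_CONST) takes the "not defined" path, and the ensure clause guarantees the original value comes back even if the block raises.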
https://codedump.io/share/W98jC8gPTui8/1/stubbingmocking-global-constants-in-rspec
Tests for logging are something I have tried to avoid in the past. Today I had to write some code which exports TestRail results to Splunk, and I was determined to test the output. Instead of searching for how to do this using JMockit or Mockito (since this hasn't worked for me in the past) I decided to try using reflection myself. I searched for "java reflection change private static field" and came across this post on Stack Overflow. The post shows a way to change the value of a public static final field. There are some caveats, but in the case of Logger it works out. Combining this information with Mockito, I was able to mock the logger for a class and test logging output.

First the test setup (JUnit 4.13):

@RunWith(MockitoJUnitRunner.class)
public class SplunkReporterMethods {

    @Mock
    private Logger log;

    private SplunkReporter reporter;

    @Test
    public void report() {
        reporter.report(new Car());
        then(log).should().info("{\"name\":\"car\",\"speed\":4}");
    }
}

To set up the SplunkReporter class with a different logger, the mock needs to be injected into the private static final field. This can be done in an @Before method.

@Before
public void setup() throws Exception {
    // allow log field to be changed
    Field field = SplunkReporter.class.getDeclaredField("log");
    field.setAccessible(true);

    // remove final modifier
    Field modifiersField = Field.class.getDeclaredField("modifiers");
    modifiersField.setAccessible(true);
    modifiersField.setInt(field, field.getModifiers() & ~Modifier.FINAL);

    // set to mock object
    field.set(null, log);

    reporter = new SplunkReporter();
}

First, the setup method makes log accessible, but that is not enough. The modifiers on the field need to be changed so final is removed. The Field class does not have a setter for the modifiers field, so modifiers needs to be set in field using reflection. Finally, the static field is set to the mock object.
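The injection step on its own can be seen in a self-contained sketch (plain JDK, no Mockito or JUnit; the class and field names are invented for illustration). Note that this covers the non-final case; the final-modifier trick above additionally relies on the Field.modifiers hack, which no longer works on newer JDKs (12+):

```java
import java.lang.reflect.Field;

// A stand-in for a class holding a private static logger field.
class Reporter {
    private static Object log = "real-logger";

    static Object currentLog() {
        return log;
    }
}

public class InjectDemo {
    // Overwrite a private static field by name via reflection.
    static void setPrivateStatic(Class<?> cls, String name, Object value) throws Exception {
        Field field = cls.getDeclaredField(name);
        field.setAccessible(true); // bypass the private modifier
        field.set(null, value);    // null receiver because the field is static
    }

    public static void main(String[] args) throws Exception {
        setPrivateStatic(Reporter.class, "log", "mock-logger");
        System.out.println(Reporter.currentLog()); // prints "mock-logger"
    }
}
```

Everything that reads Reporter's log field after the injection sees the substituted value, which is exactly what the @Before method above achieves for the test class.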
https://dev.to/moaxcp/mocking-private-static-final-log-1fh9
sigwait(2)                                                          sigwait(2)

NAME
     sigwait(), sigwaitinfo(), sigtimedwait() - synchronously accept a signal

SYNOPSIS
     #include <signal.h>

     int sigwait(const sigset_t *set, int *sig);

     int sigwaitinfo(const sigset_t *set, siginfo_t *info);

     int sigtimedwait(const sigset_t *set, siginfo_t *info,
          const struct timespec *timeout);

DESCRIPTION
     The sigwait() function atomically selects and clears a pending signal from set and returns the signal number in the location pointed to by sig. If none of the signals in set is pending at the time of the call, the calling thread will be suspended until one or more signals become pending or the thread is interrupted by an unblocked, caught signal.

     The signals in set should be blocked at the time of the call to sigwait(). Otherwise, the behavior is undefined.

     If there are multiple signals queued for the selected signal number, sigwait() will return with the first queued signal and the remainder will remain queued. If any of multiple pending signals in the range SIGRTMIN to SIGRTMAX is selected, the lowest numbered signal will be returned. The selection order between realtime and nonrealtime signals, or between multiple pending nonrealtime signals, is unspecified. If more than one thread in a process is in sigwait() for the same signal, only one thread will return from sigwait() with the signal number; which thread returns is undefined.

     sigwaitinfo() has the same behavior as sigwait() if the info parameter is NULL. If the info parameter is not NULL, sigwaitinfo() has the same behavior as sigwait(), except that the selected signal number is returned in the si_signo field of the info parameter and the cause of the signal is returned in the si_code field. If any value is queued to the selected signal, the first such queued value will be dequeued and stored in the si_value member of info and the system resource used to queue the signal will be released and made available to queue other signals.
     If no value is queued, the contents of the si_value member is undefined. If no further signals are queued for the selected signal, the pending indication for that signal will be reset.

     sigtimedwait() has the same behavior as sigwaitinfo() except that sigtimedwait() will only wait for the time interval specified by the timeout parameter if none of the signals specified by set are pending at the time of the call. If the timeout parameter specifies a zero-valued time interval, then sigtimedwait() will return immediately with an error if no signals in set are pending at the time of the call. If the timeout parameter is NULL, the behavior is undefined.

APPLICATION USAGE
     For a given signal number, the sigwait family of routines should not be used in conjunction with sigaction() or any other functions which change signal action. If they are used together, the results are undefined.

     Threads Considerations

     The sigwait family of routines enable a thread to synchronously wait for signals. This makes the sigwait routines ideal for handling signals in a multithreaded process. The suggested method for signal handling in a multithreaded process is to have all threads block the signals of interest and dedicate one thread to call a sigwait function to wait for the signals. When a signal causes a sigwait function to return, the code to handle the signal can be placed immediately after the return from the sigwait routine. After the signal is handled, a sigwait function can again be called to wait for another signal.

     In order to ensure that the dedicated thread handles the signal, it is essential that all threads, including the thread issuing the sigwait call, block the signals of interest. Otherwise, the signal could be delivered to a thread other than the dedicated signal handling thread. This could result in the default action being carried out for the signal.
     It is important that the thread issuing the sigwait call also block the signal. This will prevent signals from carrying out the default signal action while the dedicated signal handling thread is between calls to a sigwait function.

RETURN VALUE
     Upon successful completion, sigwait() stores the signal number selected in the location pointed to by sig and returns with a value of 0 (zero). Otherwise, it returns an error number to indicate the error. The errno variable is NOT set if an error occurs.

     Upon successful completion, sigwaitinfo() and sigtimedwait() will return the selected signal number. Otherwise a value of -1 is returned and errno is set to indicate the error.

ERRORS
     If any of the following conditions occur, the sigwait family of routines will return the following error number:

          [EAGAIN]  sigtimedwait() was called and no signal in the set parameter was delivered within the time interval specified by the timeout parameter.

     If any of the following conditions occur and the condition is detected, the sigwait family of routines will fail and return the following error number:

          [EINVAL]  set contains an invalid or unsupported signal number.

          [EINVAL]  sigtimedwait() was called and the timeout parameter specified a tv_nsec value less than zero or greater than or equal to 1000 million, or a tv_sec value less than zero or greater than or equal to 2147483648 (that is, a value too large to be represented as a signed 32-bit integer).

          [EINTR]   The wait was interrupted by an unblocked, caught signal.

          [EFAULT]  At least one of the set, sig, info, or timeout parameters references an illegal address.

AUTHOR
     sigwaitinfo() and sigtimedwait() were derived from the IEEE POSIX P1003.1b standard. sigwait() was derived from the IEEE POSIX P1003.1c standard.

SEE ALSO
     pause(2), sigaction(2), sigpending(2), sigsuspend(2), pthread_sigmask(3T), signal(5).
STANDARDS CONFORMANCE
     sigwait(): POSIX.1c

     sigwaitinfo(): POSIX.1b

     sigtimedwait(): POSIX.1b

Hewlett-Packard Company                          HP-UX Release 11i: November 2000
http://modman.unixdev.net/?sektion=2&page=sigwait&manpath=HP-UX-11.11
I have found that the following works for me. If I have a struct S, I can overload the less<S> operator as such:

#include <queue>
#include <stdio.h>
using namespace std;

struct S {
    int a, b;

    S(int _a, int _b) {
        a = _a;
        b = _b;
    }

    inline bool operator < (const S& obj) const {
        if (a != obj.a)
            return a > obj.a;
        else
            return b > obj.b;
    }
};

int main() {
    priority_queue<S> pq;
    pq.push(S(1, 10));
    pq.push(S(5, 7));
    S obj = pq.top();
    pq.pop();
    printf("%d %d", obj.a, obj.b);
}

This code correctly displays (1, 10). However, I found some help online that says the following method works:

#include <queue>
#include <vector>
#include <stdio.h>
using namespace std;

struct S {
    int a, b;

    S(int _a, int _b) {
        a = _a;
        b = _b;
    }
};

struct compare {
    inline bool operator () (const S& x, const S& y) const {
        if (x.a != y.a)
            return x.a > y.a;
        else
            return x.b > y.b;
    }
};

int main() {
    priority_queue<S, vector<S>, compare> pq;
    pq.push(S(1, 10));
    pq.push(S(5, 7));
    S obj = pq.top();
    pq.pop();
    printf("%d %d", obj.a, obj.b);
}

This also produces correct output. Why does the second method work as well? I understand that the "compare" struct is a function object (...right?), but I thought the third template argument for the priority queue involved an operator, not a struct. Can someone please explain to me either why the second method works or how the priority queue is implemented in regards to how it uses that third template argument? I think if I understand how the operator/function object is used by the queue, I'll be more confident in my coding.

P.S.: As is obvious, I am a newbie to the C/C++ language. However, I am very fluent in Java, so I'm familiar with concepts that are shared between those languages, but not with others (operator overloading and function objects especially). It might be helpful to have an explanation either using a kind of pseudocode or explaining the C++ code from the perspective of a Java programmer.
http://www.dreamincode.net/forums/topic/308950-how-to-reverse-the-priority-queue-and-why-it-works/
WP8 NFC Tutorial: Voice Messages on NFC Tags

Introduction

Interested in creating your first NFC app for Windows Phone 8? After following the steps below, you will have an app similar to NearSpeak, which is available in the WP Store and allows you to record voice messages using speech recognition, store them on NFC tags, launch the app by tapping the tag, and then play back the voice message using speech synthesis.

This tutorial is based on my NFC session first presented at the Wowzapp Hackathon in Vienna (9. - 11. November 2012). The slides are available at SlideShare; the instructions in this article will help you walk through the tutorial even without attending a live presentation and contain all the details and tricks that would usually only be shown on stage and are not included on slides.

Project Setup

Create a new Windows Phone App project, call the app NearSpeakTutorial and choose Windows Phone OS 8.0 as target platform. Our application will not run on Windows Phone 7, as we will use a lot of the new functionality found in the latest version of the OS: NFC, speech synthesis and speech recognition. Note that for developing, you should have a WP8 phone, as the emulator does not currently support simulating NFC.

After Visual Studio has finished creating your project, open the file Properties/WMAppManifest.xml from the Solution Explorer window. Here, you can define further properties of your app, like the description and author. Navigate to the Capabilities tab and add the following capabilities to your project:

- Proximity (ID_CAP_PROXIMITY)
- Microphone (ID_CAP_MICROPHONE)
- Speech Recognition (ID_CAP_SPEECH_RECOGNITION)
- Networking (ID_CAP_NETWORKING)

Next, go to the Requirements tab and activate the NFC hardware requirement (ID_REQ_NFC). Our app will only work on phones that have NFC hardware, which is not a requirement for WP8 devices. Most phones will feature NFC, but there might be devices coming up without it.
Of course, if you create an app that uses NFC as an optional feature, you can leave the requirement deactivated and check for NFC support at runtime.

Initialization

Next, you need to initialize the API classes of Windows Phone that we will use for writing to the NFC tag, as well as for speech recognition and the speech synthesizer. As we will use those classes throughout the app, it's best to define them as new member variables in MainPage.xaml.cs, and to initialize them in the constructor. You also need to add the relevant using statements for the classes we need; especially if you use Visual Studio extensions like ReSharper, a lot of this work can be automated.

// Using declarations, at the top of MainPage.xaml.cs
using Windows.Networking.Proximity;
using Windows.Phone.Speech.Recognition;
using Windows.Phone.Speech.Synthesis;

// Member variable declarations, at the top of the MainPage class definition
private ProximityDevice _device;
private SpeechRecognizer _recognizer;
private SpeechSynthesizer _synthesizer;

// Instantiate the classes in the MainPage() constructor
_device = ProximityDevice.GetDefault();
_recognizer = new SpeechRecognizer();
_synthesizer = new SpeechSynthesizer();

ProximityDevice is the class to interact with the NFC hardware. On the phone, there is usually only one NFC device present, so it's fine to just get the default device. On a Windows 8 desktop, there might be multiple NFC or other proximity devices - you would then typically ask the API to list all the available proximity devices and choose the most suitable one.

Next up, create the UI design for the app. Open MainPage.xaml, which will bring you to the UI designer. Drag a Button from the Toolbox to the page. In the Properties pane, give the new button the name ListenBtn and the content Listen. The following screenshot highlights the steps and changes:
Double-click on the button border to let Visual Studio automatically create an event handler for the clicked event. In this method, we will start the speech recognition. We will use the speech recognition without a pre-defined UI, so for the final app you should add status information to inform the user about the progress. Speech recognition is an asynchronous process. One of the nice new features of the latest C# version is the new async / await pattern. Instead of making your code more difficult to maintain by introducing a lot of callback methods, you can simply await the asynchronous process. Windows Phone will continue executing your app and does not block its execution. However, the method will only continue to execute once the async process has finished. Essentially, C# splits your method in two behind the scenes, but saves you from doing that yourself. Once we got the result of the speech recognition, we'll show it in a message box to inform the user. The first time the user starts speech recognition, the phone will show a dialog asking to accept the speech privacy policy. When doing free-form speech recognition, Windows Phone requires a network connection and uses a Microsoft web service to recognize the speech. If you limit the speech to just a few pre-defined words, you can also do offline speech recognition. In our case, we want the user to say anything he likes, so we use the online variant. private async void ListenBtn_Click(object sender, RoutedEventArgs e) { // Start speech recognition and wait for the async process to finish var recoResult = await _recognizer.RecognizeAsync(); // Inform the user about the result and the next steps MessageBox.Show(string.Format("You said \"{0}\"\nPlease touch a tag to write the message.", recoResult.Text)); } Writing the message to an NFC tag If you would like devices to be able to recognize and automatically act upon the data you stored on the NFC tag, you have to use a standardized format. 
In the world of NFC, this has been standardized through the NDEF format. Essentially, a header in the tag tells the reading device how to interpret the payload that follows afterwards. LaunchApp Tags In our case, we will first use the Microsoft LaunchApp type, which directly links to your application and also passes arguments. It is the best and most direct way to launch your app on Windows, but is problematic for cross-platform scenarios if you plan to extend your app to other platforms later. This is why we will extend this app to also react to a custom URI scheme in a later step, which makes cross-platform tags easier. You can write LaunchApp tags directly using the Windows Proximity APIs. However, it is easier to use the free and open source NDEF Library for Proximity APIs. It's released under the LGPL license, so you can use it for free, even in closed source apps. Additionally, the library will make it easier to write more advanced NFC tag contents later on, e.g., to store multiple records on the tag in case you want to port the app to Android and also store an Android Application Record on the same tag. Importing the NDEF Library can be done in a matter of seconds. But first of all, you need to ensure your NuGet Package manager is up-to-date. The manager takes care of downloading libraries for you, integrates those into your project and even keeps them up-to-date. A great timesaver! The pre-installed NuGet package manager version can not handle portable class libraries like the NDEF Library, which are compatible to both Windows Phone 8 and Windows 8. Therefore, you need to update to the latest version of the NuGet manager (>= 2.1). Update through: Tools -> Extensions and Updates... -> Updates (left sidebar) -> Visual Studio Gallery Once you made sure that NuGet is the latest version, you can proceed to installing the NDEF Library. This is done in three simple steps: - Tools -> Library Package Manager -> Manage NuGet Packages for Solution... 
- Search "Online" for "NDEF"
- Install the "NDEF Library for Proximity APIs (NFC)"

Preparing the Message

Now the NDEF library is ready to use! We just need to define the contents and write them to the tag. The NDEF format is designed in a way that an NDEF message can contain one or more NDEF records. You can think of the message as a box or container, where you place one or more items inside. In our case, we will just include a single item - the LaunchApp record. Add this code snippet to the same method that we have used before.

// Create a LaunchApp record, specifying our recognized text as arguments
var record = new NdefLaunchAppRecord { Arguments = recoResult.Text };
// Add the app ID of your app!
record.AddPlatformAppId("WindowsPhone", "{...}");
// Wrap the record into a message, which can be written to a tag
var msg = new NdefMessage { record };

The LaunchApp record type can contain arguments that will be passed to the app when it's launched. In our case, we will use the text that the user spoke. Additionally, you need to define the application ID, so that the phone will know which app to launch (or to download from the store in case it's not yet installed on the phone). You can find the app ID of your app if you go back to the WMAppManifest.xml file, switch to the Packaging tab and copy the Product ID to the clipboard. Now, go back to your code in MainPage.xaml.cs, and replace the {...} text with your product ID.

Writing to the Tag

Actually writing to the tag is very simple and can be done with a single line of code:

// Write the message to the tag
_device.PublishBinaryMessage("NDEF:WriteTag", msg.ToByteArray().AsBuffer(), MessageWrittenHandler);

We use our ProximityDevice to publish the message. The first parameter specifies the action you would like the NFC driver to perform.
The proximity APIs opted to supply this type as a string - most likely this decision was made for easier extensibility and more flexibility when it comes to different device drivers from manufacturers. You can get an overview of different strings to write as the publish type at MSDN.

As the second parameter, we need to send the raw data to write to the tag. With the ToByteArray() method, you can convert the NDEF message to a byte array. The APIs can not directly work with a byte array, so you need to create a buffer based on the array first. The AsBuffer() method does that for you. To use this method, you need to include the following using statement:

// For AsBuffer() method to convert a byte array (byte[]) to an IBuffer
using System.Runtime.InteropServices.WindowsRuntime;

The last parameter of the PublishBinaryMessage() method is the callback function that will be called when writing the tag was successful. Here, we specified a method called MessageWrittenHandler, which we also need to implement:

private void MessageWrittenHandler(ProximityDevice sender, long messageId)
{
    // Message was written successfully - inform the user
    Dispatcher.BeginInvoke(() => MessageBox.Show("NFC Message written"));
}

The callback is executed in an extra thread. In Windows Phone, it is not possible to interact with the user interface from any other thread than the one and only user interface thread. The Dispatcher.BeginInvoke() code ensures that the code we use to show the message box is executed in the user interface thread instead of the callback thread.

What happens if writing the tag was not successful, for example when the tag was too small or is not writable? Unfortunately, the Proximity APIs do not inform the app about that. You only get a callback if writing was successful. The only possible workaround is to also register for the DeviceArrived event, and then start a timer.
If the message wasn't written after around 1-2 seconds, you know that something went wrong - but not why.

Windows Phone in general is compatible with NFC Forum Type 1 - 4 tags; some phones might also be able to write to Mifare Classic tags. The tags need to be NDEF formatted (which they have to be if they are advertised as NFC Forum tags). Unfortunately, Windows Phone cannot currently format factory-empty tags, so make sure you buy the right tags from the factory.

Speak When Launched

The last step until we are finished is to actually speak the text when the app is launched. The Windows Phone operating system handles the LaunchApp tags by default, so we do not need to worry about that. The platform will launch our app (if the user allows this to happen) and sends us the arguments that we have written to the tag.

The app gets the arguments through the page navigation parameters used in Windows Phone. In general, a WP app is built using one or more pages (XAML pages). The app navigates between those pages and sends parameters from one page to the next. This approach is a bit similar to navigating between different HTML pages, where you can also pass parameters using the URL.

The arguments from the LaunchApp tag are encoded into the query string; the pre-defined key name is ms_nfp_launchargs. This key name will always be used by the operating system for LaunchApp types; we just need to see if it is present in our query string.

To analyze the query string when our MainPage is navigated to, you need to override the OnNavigatedTo() method from the base class. After calling the base class functionality (we still want the rest of the framework to do its work), we can do our custom processing.
In this case, we check the query string to see if it contains the key we are looking for:

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    // In case our app was launched through the tag, the operating system
    // sends the arguments through the ms_nfp_launchargs parameter
    if (NavigationContext.QueryString.ContainsKey("ms_nfp_launchargs"))
    {
        // Speak the text stored as argument on the NFC tag
        await _synthesizer.SpeakTextAsync(NavigationContext.QueryString["ms_nfp_launchargs"]);
    }
}

Speaking the text is again simple: we use the synthesizer member variable to speak the value (query string) of the parameter. Also, speech synthesis is an asynchronous process, so we use the keyword await and do not need to further worry about it.

Finished!

Congratulations, your app is finished! By following this tutorial, you have learned how to create a Windows Phone 8 app and use many of its unique features, including speech recognition, speech synthesis, NFC, including libraries with the NuGet package manager and launching apps through tags.

Bonus: Launching through Custom URI Schemes

If you would like to further extend the app so that it can also be launched through a custom URI scheme and not just the LaunchApp tag, follow the rest of this tutorial. Custom URI schemes have the advantage that they take less writable space on the tag (the app ID of a WP8 app is rather long; if you also add the app ID of your port on Windows 8, there is even more overhead). Additionally, custom URI schemes are easier to use in cross-platform scenarios, as the LaunchApp type is unique to Windows Phone and only supported by that platform.

The downside of using a custom URI scheme is that it's not unique to your app - anyone else could implement an app that registers for the same URI scheme. In contrast to that, the LaunchApp tag includes your unique app ID.

Registering for a URI Scheme

The first step is to register for the URI scheme.
This process is slightly different when comparing Windows 8 and Windows Phone 8; we're covering the Windows Phone 8 way here. First, close the WMAppManifest.xml file, and open it again by right-clicking on the file -> Open With -> XML (Text) Editor. Scroll down to the </Tokens> element, and add the following protocol registration:

// ...
</Tokens>
<Extensions>
  <Protocol Name="nearspeaktutorial" NavUriFragment="encodedLaunchUri=%s" TaskID="_default" />
</Extensions>

This will register the app for our own URI scheme nearspeaktutorial. You can customize this part and use your own protocol name. Note that most of the standard protocols are reserved by the system and can't be used in your app (e.g., http). The other parts of the XML protocol definition code are fixed, do not change them.

Mapping the URI

The next part is probably the most complicated, but only needs to be done once. For all your future apps, you can use pretty much the same code. In the previous example, we took care of parameters that were sent right to our MainPage. For handling the launch via a custom URI scheme, we need to go to a lower level and customize the behavior of the UriMapper. The UriMapper is always called when navigating in your app, as well as when the app is launched. Therefore, you can customize the behavior of your app even before the MainPage is loaded - and load, for example, a different page depending on the contents of the custom URI scheme.

In our case, we will find out if the app launch URI contains our custom protocol name: if the app has been launched via the custom URI scheme, an argument of the form encodedLaunchUri=nearspeaktutorial:Good+morning. is added to the URI. In case we can find this parameter, we will extract the text to speak (Good morning.) and launch the MainPage.xaml with those parameters. Essentially, we're redirecting the app to a custom address. As we're clever, we will send the argument to the page using the ms_nfp_launchargs key name, which we're already handling from the LaunchApp tags.
This saves us from writing a second handler, which would replicate exactly the same functionality that we have already implemented. In case the app was not launched through the URI, or we're in the middle of a different navigation within the app, we just return the original and unmodified URI so that the default app framework can do its tasks.

To add the class, right-click the project and choose Add -> New Item... -> Class. Give it the name NearSpeakUriMapper.cs. Derive the class from UriMapperBase and implement the MapUri(Uri uri) method, as mandated by the base class. Next, write the code to analyze the URI and redirect the flow of the app in case it was launched through our custom URI:

class NearSpeakUriMapper : UriMapperBase
{
    public override Uri MapUri(Uri uri)
    {
        // Example: "Protocol?encodedLaunchUri=nearspeaktutorial:Good+morning."
        var tempUri = HttpUtility.UrlDecode(uri.ToString());
        var launchContents = Regex.Match(tempUri, @"nearspeaktutorial:(.*)$").Groups[1].Value;

        if (!String.IsNullOrEmpty(launchContents))
        {
            // Launched from associated "nearspeaktutorial:" protocol
            // Call MainPage.xaml with parameters
            return new Uri("/MainPage.xaml?ms_nfp_launchargs=" + launchContents, UriKind.Relative);
        }

        // Include the original URI with the mapping to the main page
        return uri;
    }
}

Activating the URI Mapping

Now that the URI mapping class is prepared, we need to tell our app's framework to actually make use of the class. To do so, open App.xaml.cs and search for the (by default collapsed) Phone application initialization region. Inside this region, you will find the InitializePhoneApplication() method. Right below the line that creates a new instance of the RootFrame, add our custom URI mapper:
RootFrame.UriMapper = new NearSpeakUriMapper();
To write such a tag, you can either use a tool like Nfc Interactor, or if you'd like to write the tag from within your application, follow for example the instructions in the How to Store Application Data on NFC Tags article.

Finished! (2)

Now, after adding the bonus content, your app can be launched via the Microsoft-specific LaunchApp NFC tag, as well as through your own custom URI scheme that you defined. In both cases, the app will immediately speak the voice message you stored on the tag through the text-to-speech framework (speech synthesis) - which you recorded using the speech recognition feature of Windows Phone.

Have fun with the app and your future NFC apps! You can download the full source code of the sample project from this page. The official NearSpeak app that also includes translation using the Microsoft Translator web service can be downloaded from the Windows Phone store. I hope you enjoyed the behind-the-scenes look at how to create a fully functional and great app for Windows Phone! Please do not use the NearSpeak name, graphics or URI scheme in your own app, as the app is already in the store and publicly available.

--ajakl 16:30, 16 December 2012 (EET)

Note: This is a community entry in the Windows Phone 8 Wiki Competition 2012Q4.
Developers gonna develop…

If you're starting a new .NET Core 2.0 web app, the chances are you want to jump in to building your features, maybe get to grips with all the new 'n' shiny things in ASP.NET Core (like tag helpers).

I recently worked on a greenfield ASP.NET Core 2.0 app where we had exactly that sense of excitement. This was a brand new project, we had a freshly released version of ASP.NET Core to play with, and nothing was going to hold us back.

It turns out "nothing" in this case was actually the very first requirement: our users needed a way to log in. Before we did anything else, we needed to lock the application down to specific users, lest our continuous deployments to Azure should expose the application's data to people who shouldn't see it.

The problem is, if you go searching for how to secure your ASP.NET Core app, you quickly descend into detailed discussions of cookies, JWT tokens, roles, claims, Microsoft Identity, Identity Server and more. The list goes on.

But what if you don't want to wade through all that detail? You just want to start with the basics and lock your app down to one user, so it's not immediately available to the entire world.

Back to basics

Just for now, forget all this talk of bearer tokens, OAuth and OpenID Connect. Let's proceed on the basis that this is an MVC app, so no APIs or Single Page Applications to consider. We just want to stop the world and its dog from delving into sensitive parts of our application.

Configuring ASP.NET Core to require authentication

Imagine we're starting with an ASP.NET Core 2.0 MVC application (with no authentication mechanism configured). You can grab the code we're about to go through and take a look for yourself using the form below.

The first step is to enable authentication for our site, which we can do by modifying startup.cs. We can start by adding the relevant authentication services to our application.
public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
        .AddCookie(CookieAuthenticationDefaults.AuthenticationScheme, options =>
        {
            options.LoginPath = new PathString("/auth/login");
            options.AccessDeniedPath = new PathString("/auth/denied");
        });

    // rest of ConfigureServices code goes here...
}

We're going to stick with cookies for now. This means our logged-in users will get a cookie in their browser, which gets passed to our app on every request, indicating that they are authenticated. Notice how we've configured two paths: the path to the login page (where we can send unauthenticated people when they try to access a restricted area) and the path to an access denied page (useful for when they inevitably enter incorrect credentials).

We also need to tell our app to go ahead and actually enable authentication. Happily, this is very, very simple in .NET Core 2…

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseAuthentication();

    // rest of Configure code goes here...
}

Just another Login form

So now our app knows we're going to be using authentication, but there's more work to be done. We need a way to identify our users, the common way being to ask them for a username and password. Login forms are straightforward enough; here's one to get us started.

<h2>Hmm, looks like you need to log in</h2>
<form asp-controller="Auth" asp-action="Login" method="post">
    <label for="username">Username</label>
    <input id="username" name="username" type="text"/>
    <label for="password">Password</label>
    <input id="password" name="password" type="password" />
    <button type="submit">Log me in</button>
</form>

If we're using the default routing for MVC, you'll want to create an AuthController with a Login action that returns this view. If you're not familiar with them, the asp- attributes are tag helpers, new to ASP.NET Core, which make it easier to link your HTML to your ASP.NET MVC controllers.
Read more about tag helpers in the official ASP.NET Core documentation. In this example, the form contents will be posted to the Login action on an Auth controller.

A word to the wise: if you start with an empty web app project, you'll find that tag helpers don't work automatically. The easiest way to get them working is to create a _ViewImports.cshtml file and add this line to it…

@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

If you start with one of the other starter templates, you'll probably find this file is created for you.

The logging in bit

To keep this super, super simple, we'll opt to hard-code a username and password for now. If our users enter the correct combination, they'll be logged in, with full access to "locked down" parts of the application.

Now let's be honest, hardcoded usernames and passwords are somewhat limiting (and not at all secure if your code ends up in a public GitHub repo), but they do tackle our urgent requirement to provide a mechanism for users to log in, and gain access to parts of the site that will be unavailable to Joe Public.

This falls into the camp of "doing the simplest possible thing first", so you can start to build up momentum with your new app, rather than getting bogged down in building your own user management system from day one.
The login form will post to this controller action…

[HttpPost, ValidateAntiForgeryToken]
public async Task<IActionResult> Login(string returnUrl, string username, string password)
{
    if (username == "Jon" && password == "ABitSimplisticForProductionUseThis...")
    {
        var claims = new List<Claim>
        {
            new Claim(ClaimTypes.Name, "jon", ClaimValueTypes.String, "")
        };
        var userIdentity = new ClaimsIdentity(claims, "SecureLogin");
        var userPrincipal = new ClaimsPrincipal(userIdentity);

        await HttpContext.SignInAsync(
            CookieAuthenticationDefaults.AuthenticationScheme,
            userPrincipal,
            new AuthenticationProperties
            {
                ExpiresUtc = DateTime.UtcNow.AddMinutes(20),
                IsPersistent = false,
                AllowRefresh = false
            });

        return GoToReturnUrl(returnUrl);
    }
    return RedirectToAction(nameof(Denied));
}

There's our super insecure hardcoded username/password check (as discussed). We've opted to use claims-based security. In the most basic sense, you can think of claims as pieces of information about your user. In this case we're simply storing the user's name in a claim, which we then attach to an identity for the user.

This identity is the representation of your user that ASP.NET Core can interrogate to find out anything it needs to know. You can assign many claims to one identity, but ASP.NET Core requires the name claim as a minimum requirement (it will error if you don't assign one).

Next up, we create a user principal. If this is your first foray into ASP.NET Core authentication then this can be a little confusing, but it's worth noting you could have more than one identity and attach them all to the same principal. We've no need to handle multiple identities for the same user yet, so we can move along to the SignInAsync call. In practice, this creates an encrypted cookie holding the user's information (the claims principal). From here on (until they exit the browser) your user is authenticated.
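Since Claim, ClaimsIdentity and ClaimsPrincipal all live in System.Security.Claims, the claims → identity → principal relationship can be exercised outside ASP.NET entirely. Here is a minimal sketch (the names and the second "partner" identity are purely illustrative, not part of the article's app) showing one principal carrying two identities:

```csharp
using System.Collections.Generic;
using System.Security.Claims;

public static class ClaimsDemo
{
    public static ClaimsPrincipal BuildPrincipal()
    {
        // First identity: the interactive login, with the Name claim.
        var loginIdentity = new ClaimsIdentity(
            new List<Claim> { new Claim(ClaimTypes.Name, "jon") }, "SecureLogin");

        // A second identity could come from another source (e.g. a partner system).
        var partnerIdentity = new ClaimsIdentity(
            new List<Claim> { new Claim("partner-id", "42") }, "PartnerLogin");

        var principal = new ClaimsPrincipal(loginIdentity);
        principal.AddIdentity(partnerIdentity);   // one principal, many identities
        return principal;
    }
}
```

principal.Identity returns the primary identity (so Identity.Name resolves from the Name claim), while principal.Identities exposes all of them; FindFirst searches across every attached identity.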
Because we've set IsPersistent to false, the cookie will be lost when our user exits their browser, and they will have to log in again next time they come to the site. If you want to see what that cookie looks like, check out the Application > Cookies window in Chrome (you'll find a similar view in other browsers) and you'll find it there, called .AspNetCore.Cookies.

Once they're logged in, the user is redirected to the original page they requested, or the home page. You can do this with a simple helper method.

private IActionResult GoToReturnUrl(string returnUrl)
{
    if (Url.IsLocalUrl(returnUrl))
    {
        return Redirect(returnUrl);
    }
    return RedirectToAction("Index", "Home");
}

No access for you

This is all well and good, but currently there's no reason for anyone to log in to the site, because nothing is locked down. Let's remedy that by restricting access to the main homepage for the app.

[Authorize]
public class HomeController : Controller
{
    public IActionResult Index()
    {
        return View();
    }
}

The [Authorize] attribute will trigger ASP.NET Core to redirect any users who aren't logged in (don't have an auth cookie) to the login page (that we configured in startup.cs).

It's all about you

So that's almost the entire process, but it would be nice to greet the user by name. We'll do this on our main index view…

<h1>Hi @User.Identity.Name, you're in the club.</h1>

Let me out of here

Finally, we should probably let them log out, if they so wish. All this needs is a simple form.

<form asp-controller="Auth" asp-action="Logout" method="post">
    <button type="submit">Log out</button>
</form>

And a controller action.

public async Task<IActionResult> Logout()
{
    await HttpContext.SignOutAsync();
    return RedirectToAction(nameof(Login));
}
If you want an "off-the-shelf" solution, including code to handle adding users, changing passwords etc., you can look to Microsoft Identity (or even third-party solutions like Auth0). If you just want to lock down parts of your app whilst you get on with building it, this approach is a decent start and you can ramp up the complexity as you need.

photo credit: Hindrik S Heitelân - Homeland #49 via photopin (license)
Re: fastest way to parse a file; Most efficient way to store the data?
From: Bruce Wood (brucewood_at_canada.com)
Date: 14 Dec 2004 15:23:41 -0800

First off, let me point out that in computing you are usually facing a tradeoff between memory and speed. The question, "What is the fastest, most memory-efficient way to do this?" is like asking, "What is the quickest, cheapest way to get to Paris?" (Assuming, of course, that you're not already in Paris. :) Sometimes the fastest way is the cheapest way, but more often than not you have to make a tradeoff between speed and cost: airplanes cost more than freighters. So it is with speed and memory: sometimes the fastest way is also the most memory-efficient, but more often than not you have to trade one for the other.

That said, your solution depends very much upon whether the field you're sorting on is always also the field that you require to be unique. If it is, then I suggest that you use some form of tree structure (research the already-available .NET collections), which will sort your items on the fly and give you some indication of uniqueness at the same time. Since you have to sort anyway, you might as well do that and your uniqueness check all at once.

However, if you could potentially be determining uniqueness on one field and sorting on a different field, then there's no value in determining uniqueness using anything other than a hash table. A hash table will give you lightning-fast lookup capabilities to determine if you've already seen a key. There's only one thing faster, which is a very, very big B-tree, but it uses up tons of memory, so I wouldn't go that way. Hash tables are robust and fast.

As for sorting, you should either build a tree structure or use the quicksort algorithm. Both methods are reasonably quick. I wouldn't suggest using insertion sort, which is what Nicholas was suggesting (sorry), because with a million records you'll _definitely_ notice a performance difference.
The Array class contains a Sort method, but the documentation doesn't mention which algorithm it uses, although I must suppose that if the MS people who wrote the Framework didn't use quicksort, or something even faster (yes, there are a few faster algorithms), then they're not too sharp.

Finally, there's the problem of storage. Yes, you can parse each line and blow it out into an array of strings, but then you pay for that again if you have to write it out. As well, if you're doing a quicksort, you have to shuffle (potentially) large records around in memory. Another way to solve the problem is to create a small class containing a string, an offset, and a length. If you use short integers for the offset and the length, you can pare this down to 64 bits. When you read in a line and you want to represent field #15, for example, make a new one of these objects, set the string pointer to the line you read in, and the offset and the length to indicate where your field starts and how long it is. Now, if you write an IComparer for this structure:

public class FieldComparer : IComparer
{
    public int Compare(object x, object y)
    {
        MyField field1 = (MyField)x;
        MyField field2 = (MyField)y;
        if (field1.Length != field2.Length)
        {
            // Shorter fields sort before longer ones
            return field1.Length - field2.Length;
        }
        else
        {
            return String.Compare(field1.String, field1.Offset,
                                  field2.String, field2.Offset,
                                  field2.Length);
        }
    }
}

I'm using a class for Field rather than a struct to avoid boxing and unboxing in the standard Compare method.

So, assuming that you have to determine uniqueness on one field and sort on another field, here is how I would do it.

Make a Hashtable that will have Field objects as its keys. The values stored in the Hashtable don't matter, so you might as well use the lines you're reading from the file. This won't result in any extra storage for the lines, because if you're storing them as strings then the runtime will share pointers so long as the input lines themselves never change.
Make an ArrayList that will hold the Field objects for the fields you want to eventually sort on.

Read each line, find the field you need to sort on and the field you need to verify as unique, and create a Field object for each of them. Check to see if the Field object for your unique field is already stored in the Hashtable, and add it if it isn't. Add the Field object for your sort field to the ArrayList using the Add method. You didn't say whether you want the record in the output set only if the unique field is the first occurrence, but you can determine that here, because you already tried to put it in the hash table.

When you've read all the lines, use ArrayList.Sort to sort the array list using an instance of the IComparer class that you created above. This will take a while, but it's faster than insertion sort or any other sort method that you might roll yourself.

Run through the array list and feed the records one by one to some sort of output object, which will know which fields to pick out and display to the user. Since your Field class contains the original string pointer for the line, you can recover the input line and scan it for the output fields that you want.

The only extra overhead that this introduces is that you scan the input line twice: once to get your unique / sort fields, and once to get your output fields. However, I doubt that this will create a significant performance hit. Not after all of that sorting.

Anyway, there's my solution! Good luck!
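The recipe above can be compressed into a short modern sketch. Note this is not Bruce's memory-optimized Field (string/offset/length) design: it uses generic HashSet<T> and List<T> (the successors of Hashtable and ArrayList) and whole-field strings, and all names are illustrative.

```csharp
using System.Collections.Generic;

public static class FieldPipeline
{
    // Deduplicate delimited lines on one field, then sort the survivors on another.
    // keyIndex: field used for uniqueness; sortIndex: field used for ordering.
    public static List<string> DedupeAndSort(
        IEnumerable<string> lines, int keyIndex, int sortIndex, char sep = '|')
    {
        var seen = new HashSet<string>();   // plays the role of the Hashtable
        var kept = new List<string[]>();    // plays the role of the ArrayList

        foreach (var line in lines)
        {
            var fields = line.Split(sep);
            // Only the first occurrence of each unique key survives.
            if (seen.Add(fields[keyIndex]))
                kept.Add(fields);
        }

        // List<T>.Sort uses an introspective quicksort variant, as recommended above.
        kept.Sort((a, b) => string.CompareOrdinal(a[sortIndex], b[sortIndex]));

        var result = new List<string>();
        foreach (var f in kept)
            result.Add(string.Join(sep.ToString(), f));
        return result;
    }
}
```

Because each input line is split only once here, the two-pass field scan Bruce mentions disappears, at the cost of holding the split fields in memory, which is exactly the memory-versus-speed tradeoff his reply opens with.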
threads::tbb::concurrent:: - namespace for concurrent TBB containers

  use threads::tbb;

  # ARRAY tie interface:
  tie my @array, "threads::tbb::concurrent::array";
  $array[0] = $val;
  push @array, @items;

  # HASH tie interface:
  tie my %hash, "threads::tbb::concurrent::hash";
  my $value = $hash{key};     # always deep copies
  $hash{key} = $value;        # careful!

  # preferred Hash API:
  # for access:
  my $hash = tied %hash;      # doesn't need to be tied really
  my $slot = $hash->reader($key);
  print $slot->get();         # now safe
  my $copy = $slot->clone();  # also fine
  undef($slot);               # release lock

  # for writing:
  $slot = (tied %hash)->writer($key);
  $value = $slot->get();      # get the value out
  $slot->set([$value]);       # fine
  $copy = $slot->clone();     # $copy now a dup of [$value]
  undef($slot);               # release lock

  # TODO hash API:
  my ($key, $value) = each %hash;

  # concurrent iteration - safe for update
  my $iterator = tied(%hash)->iterator;
  my ($key, $slot) = $iterator->();

  # SCALAR tie interface:
  # not really concurrent in any way; and every access may copy in to
  # the thread which requests it.  these wrappers for scalars can be
  # passed around via the various containers.
  tie my $item, "threads::tbb::concurrent::item";
  $item = $val;
  print $item;

  # TODO queue/channel interface:
  tie my @queue, "threads::tbb::concurrent::queue";
  push @queue, $val;
  my $val = shift @queue;

The threads::tbb::concurrent:: series of modules wrap the respective TBB concurrent classes. For now there are two main container classes: threads::tbb::concurrent::array and threads::tbb::concurrent::hash.

Note that they are only concurrent if you restrict yourself to the concurrent APIs. Other ways of accessing the containers may result in programs with race conditions.

Also, the SCALAR interface, threads::tbb::concurrent::item, currently has no locking mechanism; it is currently just an auxiliary way of shunting data between interpreters using the lazy clone method.
-- from change#4675: "USE_ITHREADS tweaks and notes", Gurusamy Sarathy, 9 Dec 1999

The C++ function clone_other_sv, from src/lazy_clone.cc in the source distribution, exists to implement selective cloning of data reachable from one interpreter to the next. This is implemented in a lazy fashion: if entries in the container are requested by a different thread, a deep copy happens then and there, carried out by the worker thread and not the main thread. So long as there is no use of the actual state machine of the foreign interpreter, or side effects on data structures it "owns", this should be relatively safe.

The advantages of this, over an eager algorithm which used a safe, neutral interpreter that never runs anything (as in threads::shared), or over a collection of Storable::freeze blobs (as in threads::lite), are:

1. reduced memory use; data is only copied to the threads which demand it,
2. there is no overhead for the thread that started the operation to process data, other than that taken receiving completed blocks from workers,
3. a reduced number of overall deep copies,
4. faster cloning (clone_other_sv is implemented in C++ using STL containers),
5. you can choose to use an eager algorithm by simply freeze'ing data on the way in.

Of course, if the interpreter that sent the data violates expectations by modifying the data structures, all bets are off.

TODO: setting TTBB_EAGER=1 to copy to a neutral shared interpreter for a single program run.

The initial implementation of the deep copying has very much the same limitations as threads::shared, in that only a certain core set of "pure" Perl objects can be passed through. XS objects should be safe - as in, not cause segfaults - so long as the package either defines CLONE_SKIP (in which case the objects will be replaced by "undef" in the cloned structure - see perlmod), or defines a CLONE_REFCNT_inc method.
The CLONE_REFCNT_inc method should update the object's internal idea of how many references are pointing at it, and return the value 42. If it does neither, then the code will emit a warning.

As closures are not supported, inside-out objects cannot be passed - and in fact they'd likely be very inefficient. Not yet supported are MAD properties or "strange" forms of magic. Overload is currently thought to be safe. Filehandles should be relatively trivial to support, but are not implemented yet.

If lazy cloning does turn out to be stable, then it would help reduce the overhead that a threading program has to overcome to break even; e.g. if the single-threaded case is more than 100% slower, then you need more than 2 cores just to break "even", and that's before you take into consideration that the program may not scale beyond a given number of cores. Lazy cloning delegates this overhead to the worker threads; they might not be able to carry out work at full speed compared to the main thread, but at least they're not impeding it by making it waste time dumping data that it might have to simply load again itself to process. Of course, in principle, building under -Duse5005threads (removed in Perl 5.9.x) would obviate the need to copy anything at all.

If foreign-structure dumping turns out not to be stable, then there are two main approaches: either dump everything and just document that the size of data put in and out may be a limiting factor for many users, or potentially queue requests for the originating thread to process the dump, then yield or even spin. Queuing requests for other threads to safely marshal the data in and out could prove problematic and lead to deadlocks, so probably the best approach is to support both lazy and immediate deep copies via an option set on the container.

AUTHOR
Sam Vilain, sam.vilain@openparallel.com

SEE ALSO
threads::tbb, threads::tbb::concurrent::array
Keep Tests Short and DRY with Extension Methods
Date Published: 10 February 2021

Today, as I was writing functional tests for API endpoints again, I created some helpers to assist with the boilerplate code involved in such tests. When you're testing an API endpoint, you typically need to write code that looks like this:

- Create data to send in request (optional)
- Make an HTTP request to a route/URL
- Verify the response is successful
- Capture the response as a string
- Convert the string into a type
- Make assertions that the type is what you expected

Here's an example of such a functional test, using xUnit and System.Text.Json (with full class for reference):

public class DoctorsList : IClassFixture<CustomWebApplicationFactory<Startup>>
{
    private readonly HttpClient _client;
    private readonly ITestOutputHelper _outputHelper;

    public DoctorsList(CustomWebApplicationFactory<Startup> factory, ITestOutputHelper outputHelper)
    {
        _client = factory.CreateClient();
        _outputHelper = outputHelper;
    }

    [Fact]
    public async Task Returns3Doctors()
    {
        var response = await _client.GetAsync("/api/doctors");
        response.EnsureSuccessStatusCode();
        var stringResponse = await response.Content.ReadAsStringAsync();
        _outputHelper.WriteLine(stringResponse);
        var result = JsonSerializer.Deserialize<ListDoctorResponse>(stringResponse, Constants.DefaultJsonOptions);

        Assert.Equal(3, result.Doctors.Count());
        Assert.Contains(result.Doctors, x => x.Name == "Dr. Smith");
    }
}

Functional tests and Integration tests

As an aside, the docs (which I wrote the initial versions of) refer to these as integration tests, which isn't wrong, but I prefer the term functional tests because it's more specific. Any test that involves several classes or talks to some infrastructure is no longer a unit test, but an integration test (or perhaps some other kind). Need to test that your DbContext can actually insert and fetch data from a real data source? Use an integration test.
What differentiates a functional test from other kinds of integration tests is that it's testing most of the app's functionality from the outside. In the case of ASP.NET Core MVC apps, these functional tests aren't just testing an action method or a controller (or endpoint type), but are also testing routing, filters, model binding, model validation, dependency injection, and more! And they're doing it all in memory, without the need for a separate web server, browser client, or network layer (so no firewall, port, or security issues to contend with!).

But back to the topic at hand...

Duplication in Tests

Some duplication in tests is fine, if it makes the tests more readable and less magic. You want a new developer to be able to look at a failing test and immediately be able to determine what the problem is. Having tests that are completely abstract and magic can make this difficult. However, in my experience the bigger problem is duplication in tests. Excessive duplication in tests leads to code smells and antipatterns like shotgun surgery, in which a small change to a method or constructor signature in the system under test results in hundreds of compilation errors as test methods everywhere fail to build, because they were all hardwired to use that signature.

I'm a fan of keeping test classes small and focused, and tests neat and to-the-point as well. I follow a test naming and organization convention that yields one test class per method being tested, and for functional tests of APIs this works out to one test class per API route or endpoint. However, long tests with a lot of repetition make it harder to pick out the signal from the noise when you're reviewing a set of tests. Imagine the code listing above, but with another half dozen tests, all very similar but for a few tiny changes in their assertions or something similar.

Helper methods

One tried and true approach to keeping tests clean and DRY is to use helper methods.
You absolutely should do this wherever it makes sense. I do it all the time. However, helper methods usually are only useful within the test class where they reside. As such, they usually take the form of a standard method/function, rather than an extension method (which must reside in its own static class). Occasionally they'll make sense for a set of tests or even a whole project. But what if you have something you'd like to reuse across many test projects?

Extension methods

Extension methods provide a way to add functionality as needed to existing types. They work basically the same as helper methods, but the syntax is a little cleaner and they're easier to share via NuGet packages than other approaches, since all that's needed to use them is a using statement.

In the example above, if you looked at the Returns3Doctors test and compared it to another test of another endpoint called Returns2Items (or whatever), what would need to change between the two tests?

- The API route/URL
- The type being deserialized into
- The assertions

I very rarely move assertions out of tests, since the assertion is one of the most important parts of a test and something I want to keep very clear. Developers shouldn't have to go searching for what a test is asserting. But the rest of the steps involved in this test could easily be refactored into a method that took in a route string and returned an instance of a type. That could take 5+ (+ because line wrapping) lines of code down to 1 (or maybe 1+).
Here's what such an extension method might look like: public static async Task<T> GetAndDeserialize<T>(this HttpClient client, string requestUri, ITestOutputHelper output = null) { var response = await client.GetAsync(requestUri); output?.WriteLine($"Requesting {requestUri}"); response.EnsureSuccessStatusCode(); var stringResponse = await response.Content.ReadAsStringAsync(); output?.WriteLine($"Response: {stringResponse}"); var result = JsonSerializer.Deserialize<T>(stringResponse, Constants.DefaultJsonOptions); return result; } This method is optionally taking in the xUnit ITestOutputHelper class which is needed to write to the console in xUnit tests. Being able to see the actual string output from APIs is often helpful, since frequently minor issues in schema or JSON conventions can result in getting back null for the object result even though valid JSON was returned from the request. Now this method can be used as an extension on HttpClient, which of course the test already has and must use: [Fact] public async Task Returns3Doctors() { var result = await _client.GetAndDeserialize<ListDoctorResponse>("/api/doctors", _outputHelper); Assert.Equal(3, result.Doctors.Count()); Assert.Contains(result.Doctors, x => x.Name == "Dr. Smith"); } Sharing on NuGet How is an extension method that much better than a simple helper method, again? Well, it turns out you can create a NuGet package in just a few minutes so that it's really easy to share your method between projects, and even with the community as a whole. Maybe you're the only one who will find your method useful, but who knows? 
To take this simple method and put it on NuGet, I did the following steps:

- Created a new GitHub repo
- Cloned it locally
- Created a new .NET Standard class library
- Put the extension method in it
- Modified its project file to add NuGet properties (I cheated and copied them from another project)
- Right-clicked on the project in Visual Studio and chose Pack (or use dotnet pack)
- Logged into NuGet.org
- Chose Upload Package (the .nupkg file created by Pack)

That's it. A few minutes later, the package was on NuGet.org, and I could start using it in my test project as a NuGet reference instead of more code for me to maintain. Now I'll never have to write this same helper method again (this wasn't my first time, mind you), and hopefully this will help out a few others as well!

Future work

As of today this NuGet package literally has one extension method in it. That's kind of the point of this article: it's really easy to publish a package, even if it's something as simple as just one extension method you find useful. But in this case, I do plan on there being more extensions in this package. Most APIs have more than just GET endpoints, and right now I don't have extensions for building POST, PUT, DELETE, etc. with built-in logging for xUnit and automatic serialization/deserialization via System.Text.Json.

I expect to add those quickly, since in the next day or two I'll be writing tests for those kinds of endpoints for the samples for my Pluralsight DDD Fundamentals course update. Look for the new course in spring 2021; in the meantime, the existing DDD course on my author page covers the material but uses .NET 4.x for its samples.

If you find these extensions useful, please leave a star in the repo and feel free to add any issues or pull requests for features you'd like to see added. Thanks!
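As a sketch of what one of those future POST helpers might look like (this is not part of the published package): the article's Constants.DefaultJsonOptions is swapped here for a local options field so the snippet is self-contained, the body-building step is split into its own method so it can be exercised without a server, and the ITestOutputHelper logging from the GET version is omitted for brevity.

```csharp
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class HttpClientJsonExtensions
{
    // Stand-in for the article's Constants.DefaultJsonOptions (assumed shape).
    static readonly JsonSerializerOptions Options = new JsonSerializerOptions
    {
        PropertyNameCaseInsensitive = true
    };

    // Builds the JSON request body; separated out so it is trivially testable.
    public static StringContent CreateJsonContent<T>(T body) =>
        new StringContent(JsonSerializer.Serialize(body, Options),
                          Encoding.UTF8, "application/json");

    // POST `body` to `requestUri`, then deserialize the response into TResponse.
    public static async Task<TResponse> PostAndDeserialize<TRequest, TResponse>(
        this HttpClient client, string requestUri, TRequest body)
    {
        var response = await client.PostAsync(requestUri, CreateJsonContent(body));
        response.EnsureSuccessStatusCode();
        var stringResponse = await response.Content.ReadAsStringAsync();
        return JsonSerializer.Deserialize<TResponse>(stringResponse, Options);
    }
}
```

Usage would mirror the GET version, e.g. `await _client.PostAndDeserialize<CreateDoctorRequest, DoctorResponse>("/api/doctors", request)` (the request/response types here are hypothetical).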
What are Win Forms?

In earlier versions of VB and other Visual Studio products there were different forms engines, so forms developed using VB were different from those in other languages. With Visual Studio.NET the picture has changed: now, all the tools supporting NGWS make use of a common forms engine. The forms thus created are called Win Forms. This leads to many benefits (also refer to the VS.NET documentation for more details).

First WinForms application

Now, let us create a simple form which will just display "Hello Win Forms". This is a very simple example, but it illustrates the general structure of VB.NET classes for displaying forms.

Imports System
Imports System.WinForms

Namespace Bipin.Samples
    Public Class HelloWinForms
        Inherits System.WinForms.Form

        Shared Sub Main()
            Application.Run(New HelloWinForms())
        End Sub

        Public Sub New()
            MyBase.New
            Me.Text = "Hello Win Forms"
        End Sub
    End Class
End Namespace

Compiling the application

You can compile the application using the command-line compiler vbc:

vbc file_name /r:System.WinForms.dll /r:System.Drawing.dll

Here, file_name is the name of the source file, i.e. xxxx.vb. The switch /r points to namespaces [r]eferenced in the application.

Adding Controls to the Form

Now, we will see how to add controls to the form:

Imports System
Imports System.WinForms
Imports System.Drawing

Namespace Bipin.Samples
    Public Class HelloWinForms
        Inherits System.WinForms.Form

        Dim label1 As New Label

        Shared Sub Main()
            Application.Run(New HelloWinForms())
        End Sub

        Public Sub New()
            MyBase.New
            Me.Text = "Hello Win Forms"
            label1.Text = "Hello Win Forms"
            label1.Location = New Point(100, 100)
            Me.Controls.Add(label1)
        End Sub
    End Class
End Namespace

Event handling

Now, let us proceed further and see how to handle events of the controls.
Imports System
Imports System.WinForms
Imports System.Drawing

Namespace Bipin.Samples
    Public Class HelloWinForms
        Inherits System.WinForms.Form

        Dim button1 As New Button

        Shared Sub Main()
            Application.Run(New HelloWinForms())
        End Sub

        Public Sub New()
            MyBase.New
            Me.Text = "Hello WinForms"
            button1.Text = "Click Me"
            button1.Location = New Point(100, 100)
            button1.AddOnClick(AddressOf button_click)
            Me.Controls.Add(button1)
        End Sub

        Public Sub button_click(sender As Object, evt As EventArgs)
            MessageBox.Show("Hello Win Forms")
        End Sub
    End Class
End Namespace

This example is similar to the previous one but uses a command button. When the user clicks on the button, a message box will be displayed saying "Hello Win Forms".

I hope you have got an overall idea about Win Forms. The next articles of this series will cover techniques like creating menus and using other controls. So, visit soon!

Introduction

You must have heard the word "assembly" many times in .NET documentation. In this article I will share something about .NET assemblies.

What is an assembly? What is an assembly manifest?

Now, copy the resulting EXE into any other folder and run it. It will display "Hello World", indicating that it is using our shared assembly.

All the property names are self-explanatory and need no separate explanation.

Obtaining details about methods, properties and fields

Each type may have fields (member variables), properties and methods. The details about each of these are obtained by the following methods of the Type object: properties and methods of the PropertyInfo object, and properties and methods of the FieldInfo object.
Decoding Interleaved 2 of 5 is easy to implement in C# if you use the source code below. ByteScout BarCode Reader SDK is an SDK for reading barcodes from PDF files, images and a live camera or video. Almost every common type, like Code 39, Code 128, GS1, UPC, QR Code, DataMatrix, PDF417 and many others, is supported. It handles noisy and defective images and documents, and includes an optional document splitter and merger for PDF and TIFF based on found barcodes. Batch mode is supported for superior performance using multiple threads. Decoded values are easily exported to JSON, CSV, XML or a custom format, and you can use the SDK to decode Interleaved 2 of 5 with C#.

The fast application programming interface of ByteScout BarCode Reader SDK for C#, plus the instructions and the code below, will help you quickly learn how to decode Interleaved 2 of 5. This C# sample code is all you need for your app: just copy and paste the code, add references (if needed) and you are all set! Test whether the C# sample code meets the needs and requirements of your project. You can download a free trial version of ByteScout BarCode Reader SDK from our website to see and try many other source code samples for C#.

On-demand (REST Web API) version: Web API (on-demand version)
On-premise offline SDK for Windows: 60 Day Free Trial (on-premise)

using System;
using System.IO;
using Bytescout.BarCodeReader;

namespace ReadInterleaved2of5
{
    class Program
    {
        const string ImageFile = "Interleaved2of5.png";

        static void Main()
        {
            Console.WriteLine("Reading barcode(s) from image {0}", Path.GetFullPath(ImageFile));

            Reader reader = new Reader();
            reader.RegistrationName = "demo";
            reader.RegistrationKey = "demo";

            // Set barcode type to find
            reader.BarcodeTypesToFind.Interleaved2of5 = true;

            /* -----------------------------------------------------------------------
            NOTE: We can read barcodes from a specific page to increase performance.
            For a sample, please refer to the "Decoding barcodes from PDF by pages"
            program.
            ----------------------------------------------------------------------- */

            // Read barcodes
            FoundBarcode[] barcodes = reader.ReadFrom(ImageFile);

            foreach (FoundBarcode barcode in barcodes)
            {
                Console.WriteLine("Found barcode with type '{0}' and value '{1}'", barcode.Type, barcode.Value);
            }

            // Cleanup
            reader.Dispose();

            Console.WriteLine("Press any key to exit..");
            Console.ReadKey();
        }
    }
}
sdl_pumpevents(3) [redhat man page]

SDL_PumpEvents(3)                    SDL API Reference                    SDL_PumpEvents(3)

NAME
       SDL_PumpEvents - Pumps the event loop, gathering events from the input devices.

SYNOPSIS
       #include "SDL.h"

       void SDL_PumpEvents(void);

DESCRIPTION
       SDL_PumpEvents gathers all the pending input information from devices and places
       it on the event queue. Without calls to SDL_PumpEvents no events would ever be
       placed on the queue. Often the need for calls to SDL_PumpEvents is hidden from the
       user, since SDL_PollEvent and SDL_WaitEvent implicitly call SDL_PumpEvents.
       However, if you are not polling or waiting for events (e.g. you are filtering
       them), then you must call SDL_PumpEvents to force an event queue update.

       Note: You can only call this function in the thread that set the video mode.

SEE ALSO
       SDL_PollEvent

SDL                              Tue 11 Sep 2001, 22:59                   SDL_PumpEvents(3)

SDL_SetEventFilter(3)                SDL API Reference                SDL_SetEventFilter(3)

SDL                              Tue 11 Sep 2001, 22:59               SDL_SetEventFilter(3)
We've been very quiet on this blog as of late, mostly because of the amount of work we needed to put into our very ambitiously planned 1.5 release. But we've made it, and there's finally time to get back to discussing the technical minutiae of our work. In this post, we will go over the major library changes introduced in 1.5. But first, I'd like to highlight a change to our release planning process. Instead of using a proprietary walled-off tool, we are slowly moving towards doing our feature planning entirely in the open, directly on GitHub. It's not all the way there yet, but the 1.5 release, and the upcoming one, are already using this new open process. When you enter one of the releases, you will see that most of the planned features have their own pages where they can be discussed. The goal of this change is to keep our users informed about future plans, but also to encourage discussion with the community about the direction we are taking. We'd like to invite everyone interested in PMDK to start directly influencing and contributing to our future work. Since the PMDK project is still relatively small, this process isn't formalized. Think we are missing a feature in a library? Write a comment on the release planning page or create a new issue on our tracker; that's it. I can promise that no feature request will go unanswered, at least for now. This is a big release, with many quality-of-life enhancements, new features, and performance improvements. We've also introduced an entirely new consistency checking tool to complement pmemcheck. To keep this relatively short, I'll briefly describe each item here, and the bigger ones will get their own posts explaining the details of the change. NVDIMMs are storage and as such have to deal with on-media errors and hardware failures.
But they are also memory, which means that an application can encounter poisoned memory pages at runtime; what's more, those poisoned pages persist across restarts and need to be handled manually. To account for that, we've worked on two main RAS features.

The first is used to detect whether an ADR failure has occurred while a pool is open (ADR is the feature on which the persistent memory programming model relies to avoid having to flush the memory controller caches). Based on this information, the library can determine whether the pool could have been corrupted or not.

On Linux, if an application maps a persistent-memory-resident file with a bad block, everything will work correctly up until the bad, poisoned page is accessed for the first time in the current instance of the application. That first page fault will cause a SIGBUS, and the process will most likely be terminated. To prevent this cycle from happening over and over again, PMDK will now detect whether the files that the pool is composed of have any bad blocks, and if so, it will refuse to open the pool. This allows the user to handle the failure gracefully. In addition to that, we implemented recovery of bad blocks from poolset replicas. This feature is currently not implemented for the Windows Server platform and is disabled by default on Linux.

To support the features mentioned above, libpmemobj on Linux now depends on libndctl. NDCTL provides interfaces for configuration and introspection of the libnvdimm kernel subsystem. PMDK uses it to retrieve information about the health of the platform and the NVDIMMs. Unfortunately, at this moment retrieval of most RAS-related information requires superuser privileges by default. This means that, for example, opening a libpmemobj pool would have to happen under sudo. To avoid that, the RAS features described here are opt-in. This will be changed to opt-out once these access restrictions are relaxed in future kernel versions.
To enable or disable RAS features, we've developed a new pmempool command, pmempool feature. You can find more information about it on its man page.

Almost since the beginning of this project, we planned on implementing a tool which could exhaustively check data consistency for all possible combinations of stores to the NVDIMM, based on runtime binary instrumentation. This would give us, and our users, confirmation that the algorithms inside our applications are in fact correct and fail-safe atomic. We already had two takes on this problem, with the second one being almost good enough for generic use. And so, after almost two years, we have finally decided that it's time to revisit the topic and clean up the old codebase so that it can be released alongside pmemcheck. Stay tuned for a blog post about pmreorder. In the meantime, see its man page.

Right from the start, libpmem shipped with custom-made memcpy/memset implementations that made use of non-temporal stores (stores that do not go through the CPU cache) in a deterministic fashion, which we leveraged to optimize PMDK algorithms. But we've discovered optimization opportunities that required even more control over the behavior of these functions, and so we decided to add new variants of the libpmem primitives (flush, memcpy, ...) that take flags controlling how the functions behave. We found this so useful that we also exposed this functionality in the libpmemobj variants of the persistence primitives. For more information, see the pmemobj_memcpy_persist man page.

This version of PMDK won't include the previously experimental libpmemcto. Due to the complexity of that library, and the fact that it was based on a very heavily modified fork of jemalloc, it had become challenging to maintain. For those reasons, we've decided that we won't be continuing development of libpmemcto. We recommend using libmemkind for volatile applications and libpmem/libpmemobj for persistent ones.
If you have a use case that is not suitable for either one of those solutions, please let us know. Since libpmemobj is our most important library, that's where we spent most of our time. While there are only a few new features, the scope of changes is broad due to our focus on transactional performance optimizations.

The pmem synchronization primitives provided by libpmemobj are implemented using an internal mechanism that calls an initialization function on the first access to a variable. That mechanism could be useful for users wanting to employ custom synchronization primitives, or even to implement entirely custom algorithms that require volatile variables on persistent memory. This is a fairly small API addition which has a large number of potential use cases. See its man page for more details.

In the previous release of libpmemobj, we included a feature to create custom allocation classes at runtime, which can be used to optimize fragmentation and performance of the heap. The initial implementation had some limitations. The biggest of them was the 128-byte lower boundary for allocation class size. This was problematic for workloads with smaller objects, which we observed during our Java and Python enabling efforts. This limit has now been lifted, and the smallest allocation class possible is 1 byte, with some caveats. Additionally, the API didn't allow the alignment of allocated objects to be specified. This has been addressed, and allocations can now be aligned to any power-of-two value. See the appropriate CTL namespace entry point for more information.

For the past couple of months, we've been optimizing libpmemobj's transaction algorithms so that the number of persistent memory cache misses and flushes is minimized. The result is that both the redo and undo logs have been effectively rewritten. They now use slightly more computationally expensive algorithms, but they do not needlessly pollute the CPU cache and they generate less traffic to persistent memory.
The result is much faster persistent memory transactions, which can now rival some less-optimized atomic algorithms. We will soon publish a detailed post about what specifically has changed, how we arrived at those optimizations, and what the benchmarking results are.

The performance optimizations I outlined above had one unfortunate consequence. We were forced to change the on-media layout of the undo and redo logs in a way that's incompatible with the previous version of the library. This means that pools that were created using any of the previous libpmemobj versions, and had unrecovered operations in the log, are not going to be compatible with the new algorithms. To provide users with an upgrade path, we've implemented a brand new pmdk-convert tool which will automatically process the logs using the correct recovery mechanism, and then bump the major layout version to indicate that the pool can now be opened using the new library version.

For quite some time now we've been looking for the most idiomatic and simple solution to providing persistent C++ containers that could complement the current library. After many experiments, we've decided to implement our own STL-like containers. This will be a long effort, but we think it's going to be ultimately worth it, since we can optimize the on-media layouts and algorithms to fully exploit persistent memory's potential, but also because we can add useful libpmemobj-specific semantics. Due to this decision, we've moved libpmemobj-cpp to its own repository, anticipating that it's soon going to become a much larger project. We started the container work with something easy: pmem::obj::array, which provides some convenience features for use with transactions. See our Doxygen documentation for more info.

We've also renamed some function names that, in retrospect, were chosen poorly. Functions with old names are still there, but are deprecated.
If you've been using libpmemobj-cpp, you should start seeing compile-time warnings about that after upgrading the library. While we won't remove the old functions for some time, we encourage everyone to update the function calls now. See this pull request for more information.

With PMDK 1.5 almost behind us, ahead of us is PMDK 1.6. In the upcoming release, we want to work mostly on quality-of-life improvements, minor performance optimizations that didn't make the cut this time, and small new features. Our focus will also be on improving documentation, which includes both tutorials about how to use our libraries, especially the new features, and an in-depth description of the algorithms that underpin PMDK.
NAME
       FBB::PrimeFactors - Performs the prime-number factorization of (BigInt) values

SYNOPSIS
       #include <bobcat/primefactors>
       Linking option: -lbobcat

DESCRIPTION
       Integral values fall into two categories: prime numbers, whose only integral
       divisors are their own values and 1, and composite numbers, which also have at
       least one other (prime number) integral divisor. All composite integral values
       can be factorized as the product of prime numbers. E.g., 6 can be factorized as
       2 * 3; 8 can be factorized as 2 * 2 * 2. Finding these prime factors is called
       prime-number factorization, or 'prime factorization'.

       When factorizing a value its prime factors may sometimes repeatedly be used as
       integral divisors: 8 is factorized as pow(2, 3), and 36 is factorized as
       36 = pow(2, 2) * pow(3, 2).

       The class FBB::PrimeFactors performs prime-number factorizations of FBB::BigInt
       values. When factorizing a value, prime numbers up to sqrt(value) must be
       available, as prime numbers up to sqrt(value) may be factors of value. Currently
       PrimeFactors uses the sieve of Eratosthenes to find these prime numbers. To find
       the next prime number beyond lastPrime, the sieve of Eratosthenes must be used
       repeatedly for lastPrime += 2 until lastPrime is prime. Once determined, prime
       numbers can of course be used directly to determine the next prime number or to
       factorize an integral value.

       To accelerate prime-number factorization and Eratosthenes's sieve, PrimeFactors
       saves all its computed prime numbers in either a std::vector or a file. Once
       determined, these prime numbers may again be used when factorizing the next
       integral value.

       After factorizing an integral value its prime-number factors and associated
       powers are made available in a vector of PrimeFactors::PrimePower structs,
       containing the value's sorted prime factors and associated powers.

NAMESPACE
       FBB
       All constructors, members, operators and manipulators mentioned in this man page
       are defined in the namespace FBB.

INHERITS FROM
       -

TYPEDEFS AND ENUMS

CONSTRUCTORS
       o  PrimeFactors(BigIntVector &primes):
          Prime numbers that were determined while factorizing values are collected in
          the BigIntVector that is passed as argument to this constructor.

          Initially the BigIntVector passed as argument may be empty or may contain at
          least two primes (which must be, respectively, 2 and 3). The prime numbers in
          primes must be sorted. The constructor does not verify whether the prime
          numbers are actually sorted, but if the BigIntVector contains primes it does
          check whether the first two prime numbers are indeed 2 and 3. An
          FBB::Exception is thrown if this condition is not met.

          While numbers are being factorized, new prime numbers may be added to primes,
          and primes can be reused by other PrimeFactors objects.

       o  PrimeFactors(std::string const &name = "~/.primes", size_t blockSize = 1000):
          Prime numbers that are determined while factorizing values are collected on a
          stream whose name is passed as argument to this constructor. By default
          ~/.primes is used. If name starts with '~/', then this string is replaced by
          the user's home directory.

          Primes are read from the named stream in blocks of at most blockSize, and new
          primes are flushed to this stream once blockSize new primes have been
          generated, or when the PrimeFactors object (i.e., the last PrimeFactors
          object sharing the stream) ceases to exist.

          If the stream does not yet exist it is created by PrimeFactors. The stream
          may either be empty, or it must contain sorted and white-space delimited
          prime numbers (inserted as hexadecimal BigInt values). The first two primes
          on this file must be, respectively, 2 and 3. The constructor does not verify
          whether the prime numbers are actually sorted, but if the stream contains
          primes it does check whether the first two prime numbers are indeed 2 and 3.
          An FBB::Exception is thrown if this condition is not met.

          While numbers are being factorized, new prime numbers may be added to the
          stream, and the stream can be reused by other PrimeFactors objects.

       Copy and move constructors (and assignment operators) are available.
       FBB::PrimeFactors objects created using the copy constructor, or receiving their
       values using the copy assignment operator, share the prime-number storage device
       (the BigIntVector or the stream containing the primes) with their source
       objects.

MEMBER FUNCTION
       o  Factors const &factorize(BigInt const &value):
          The prime factors of value are determined and returned in the
          PrimeFactors::Factors vector. While the prime factors of value are
          determined, new prime numbers may be added to the BigIntVector or to the
          stream that is passed to the PrimeFactors object. The elements of
          PrimeFactors::Factors are sorted by their prime numbers. The first element
          contains the value's smallest prime-number factor.

EXAMPLE
       #include <iostream>
       #include <bobcat/primefactors>

       using namespace std;
       using namespace FBB;

       int main(int argc, char **argv)
       {
           PrimeFactors pf1("/tmp/primes");
           PrimeFactors::Factors const *factors = &pf1.factorize(stoull(argv[1]));

           cout << "Using /tmp/primes:\n";
           for (auto &factor: *factors)
               cout << factor.prime << "**" << factor.power << ' ';

           vector<BigInt> primes;
           PrimeFactors pf2(primes);
           factors = &pf2.factorize(stoull(argv[1]));

           cout << "\n"
                   "Using BigIntVector:\n";
           for (auto &factor: *factors)
               cout << factor.prime << "**" << factor.power << ' ';

           cout << "\n"
                   "Collected primes: ";
           for (auto &prime: primes)
               cout << prime << ' ';
           cout << '\n';
       }

       If this program is run with argument 1950 it produces the following output:

       Using /tmp/primes:
       2**1 3**1 5**2 13**1
       Using BigIntVector:
       2**1 3**1 5**2 13**1
       Collected primes: 2 3 5 7 11 13

FILES
       bobcat/primefactors - defines the class interface

SEE ALSO
       bobcat(7), bigint(3bobcat)
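As an aside, the sieve-then-divide strategy the DESCRIPTION outlines (generate primes up to sqrt(value) with the sieve of Eratosthenes, then divide each prime out repeatedly) can be sketched in a few lines of Python. This is an illustration of the algorithm only, not of the Bobcat API; the function and variable names are hypothetical.

```python
import math

def sieve(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    if limit < 2:
        return []
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            # cross out every multiple of p starting at p*p
            flags[p * p :: p] = [False] * len(flags[p * p :: p])
    return [n for n, is_prime in enumerate(flags) if is_prime]

def prime_factors(value):
    """Return sorted (prime, power) pairs, analogous to PrimePower structs."""
    factors = []
    for p in sieve(math.isqrt(value)):   # primes up to sqrt(value) suffice
        power = 0
        while value % p == 0:            # divide p out repeatedly
            value //= p
            power += 1
        if power:
            factors.append((p, power))
    if value > 1:                        # whatever is left is itself prime
        factors.append((value, 1))
    return factors
```

Running prime_factors(1950) yields [(2, 1), (3, 1), (5, 2), (13, 1)], matching the 2**1 3**1 5**2 13**1 output shown in the EXAMPLE above.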
Learning JavaScript with HTML5 Canvas

I decided it was high time I learned JavaScript. I have dabbled before but never really learnt it properly. I decided to write a couple of programs that use an HTML5 Canvas to test the language out a bit. I use simple iteration, JavaScript objects, arrays and conditionals: the bread-and-butter things you need to write any program. Things went really well and I was able to put together some very simple demos in a couple of hours. As a language I'm not a huge fan of JavaScript, at least as it was in the old days. I prefer Python or C, but the thing is, each language has certain things it's good at, and if you want code running in your browser, JavaScript, or at least code running on a JavaScript engine in the browser, is the way to go. I actually found working with JavaScript and the HTML5 Canvas great fun. I would like to see young coders taking this approach, on the basis that JavaScript running in the browser gives you instant gratification. You can see colours and objects, and it's all good. I decided to get the ball rolling (ha ha) by getting a ball bouncing around inside the browser. As with learning any language, your first attempt at writing something is usually painful and the result horrible as you figure it all out.
But here's my first attempt.

Code attempt 1 - Bouncing Ball

First, you can try out my bouncing ball demo.

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>Ball</title>
<style>
* { padding: 40; margin: 40; }
canvas { background: #EEE; display: block; margin: 0 auto; }
</style>
</head>
<body>
<canvas id="myCanvas" width="480" height="320"></canvas>
<script>
var canvas = document.getElementById("myCanvas");
var ctx = canvas.getContext("2d");

var x = canvas.width / 2;
var y = canvas.height / 2;
var dx = 2;
var dy = 2;

var r = Math.floor(Math.random() * 255);
var g = Math.floor(Math.random() * 255);
var b = Math.floor(Math.random() * 255);
var col = "rgb(" + r + "," + g + "," + b + ")";
console.log("col is: >%s<", col);

// note: this reuses the name b, shadowing the blue component above
var b = {x: x, y: y, rad: 30, color: col};

function move_ball(ball) {
    if (ball.x > (canvas.width - ball.rad) || ball.x < (0 + ball.rad)) {
        dx = -dx;
    }
    if (ball.y > (canvas.height - ball.rad) || ball.y < (0 + ball.rad)) {
        dy = -dy;
    }
    ball.x = ball.x + dx;
    ball.y = ball.y + dy;
}

function draw_ball(ball) {
    ctx.beginPath();
    ctx.arc(ball.x, ball.y, ball.rad, 0, Math.PI * 2, false);
    ctx.fillStyle = ball.color;
    ctx.fill();
    ctx.closePath();
}

function draw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);

    // draw small static ball at centre (for debugging)
    ctx.beginPath();
    ctx.arc(canvas.width / 2, canvas.height / 2, 10, 0, Math.PI * 2, false);
    ctx.fillStyle = "#FF0022";
    ctx.fill();
    ctx.closePath();

    move_ball(b);
    draw_ball(b);
}

setInterval(draw, 30);
</script>
</body>
</html>

Far from perfect code, but neat and simple. I do like the way callbacks can simplify things. Obviously if you are writing Node code you will be (or will have to get) very familiar with callbacks. They are simple in that your code gets called back by the underlying engine when it's ready for you. This asynchronous nature of callbacks is of course at the very heart of Node. In the above code, the draw function gets called back after a 30 millisecond timeout.
The callback is set up by the setInterval function. You can save the above code to a file such as ball.html and then double-click it to run it in a browser. You should see a ball bouncing around the screen.

Code attempt 2 - Bouncing Balls

First, you can try out my bouncing balls demo.

The main thing I wanted to add in my next attempt was the use of JavaScript objects. JavaScript doesn't have classes built into the core language, but you can create objects in JavaScript. Here's the ball object:

var ball = {
    x: x,
    y: y,
    rad: rad,
    color: col,
    dx: dx,
    dy: dy
}

Once the object has been created we can pass it to functions and give its attributes values. I needed to do this for this example because I want lots of balls bouncing around the screen! Each ball object can then have its attributes set individually and retain that state. Because there will be lots of balls bouncing around, I simply keep track of them all in an array. I can work my way along the array updating the attributes of each object. This is a common use case in games, for example, where you have a list of 'baddy' objects stored in an array and you then check the player object against each item in the baddy array for a collision. That's just one example, but the idea of objects stored in an array is generally very useful.
So armed with this idea I then launched into my second attempt at writing some fun JavaScript:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>Balls!</title>
<style>
* { padding: 40; margin: 40; }
canvas { background: #000000; display: block; margin: 0 auto; }
</style>
</head>
<body>
<canvas id="myCanvas" width="960" height="640"></canvas>
<script>
function random_int(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

function rgba(r, g, b, a) {
    return "rgba(" + r + "," + g + "," + b + "," + a + ")";
}

function random_color() {
    var r = random_int(0, 255);
    var g = random_int(0, 255);
    var b = random_int(0, 255);
    var a = Math.round(Math.random() * 10) / 10; // value between 0.1 and 1.0
    if (a == 0.0) { a = 0.1; } // Hack because 0.0 won't be visible
    return rgba(r, g, b, a);
}

function random_ball() {
    var rad = random_int(20, 70);
    var dx = random_int(-8, 8);
    var dy = random_int(-8, 8);
    // Hack - sometimes if dx/dy is zero a ball gets stuck!
    if (dx == 0) { dx = 1; }
    if (dy == 0) { dy = 1; }
    var x = canvas.width / 2;
    var y = canvas.height / 2;
    // fudge factor to make sure balls are created within range
    if (x < rad) { x = rad; }
    if (x > canvas.width - rad) { x = canvas.width - (rad * 4); }
    if (y < rad) { y = rad; }
    if (y > canvas.height - rad) { y = canvas.height - (rad * 4); }
    var col = random_color();
    var ball = { x: x, y: y, rad: rad, color: col, dx: dx, dy: dy };
    return (ball);
}

function draw_ball(ball) {
    ctx.beginPath();
    ctx.arc(ball.x, ball.y, ball.rad, 0, Math.PI * 2, false);
    ctx.fillStyle = ball.color;
    ctx.fill();
    ctx.closePath();
}

function move_ball(ball) {
    // check x within bounds
    if (ball.x > (canvas.width - ball.rad) || ball.x < (0 + ball.rad)) {
        ball.dx = -ball.dx;
    }
    // check y within bounds
    if (ball.y > (canvas.height - ball.rad) || ball.y < (0 + ball.rad)) {
        ball.dy = -ball.dy;
    }
    // move ball
    ball.x = ball.x + ball.dx;
    ball.y = ball.y + ball.dy;
}

function draw() {
    // clear screen
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // draw balls
    for (var i = 0; i < Balls.length; i++) {
        draw_ball(Balls[i]);
    }
    // move balls
    for (var i = 0; i < Balls.length; i++) {
        move_ball(Balls[i]);
    }
}

//
// Main code routine
//
var canvas = document.getElementById("myCanvas");
var ctx = canvas.getContext("2d");

var Balls = [];
var NUM_BALLS = 100;
for (var i = 0; i < NUM_BALLS; i++) {
    Balls.push(random_ball());
}

// call drawing routine every x milliseconds
setInterval(draw, 30);
</script>
</body>
</html>

You can save the code to a file such as balls.html and double-click it to run it. You can experiment with the number of balls you bounce around on screen.

Emacs let me down

Sadly, Emacs did not seem to handle JavaScript code inside HTML very well. It was fine when I was writing small JavaScript snippets and then running them via Node on the command line. I'm sure there is an Emacs mode that handles this situation more gracefully, but frankly I didn't want to spend time exploring it. If the above code looks a bit ropey in terms of layout, that's why. I have since moved mostly to VS Code and so no longer experience this issue.

Summary

This was a really fun experiment and I found I loved the interactivity of running JavaScript in the browser; using HTML5 Canvas, it was fairly easy to see Colourful Things. A great way to learn a language! I had really wanted to progress this and do some fun demos such as a starfield. I also had a lens demo half done but simply ran out of time due to work and the real world getting in the way. Still, if you are going to learn JavaScript I do recommend it, and please feel free to use my code as a starting point on your own learning adventures.
I'm quite a sensitive person, not in a bad tempered sort of way, in the way that I beat myself up when I fail. I had to take a Codility test for a job application a few days ago, I knew I didn't do particularly well as I often choke when asked to write code against the clock knowing I only have one shot. The thing is, I KNOW I can code to a good level, the problem is, I'm not a fantastic mathematician and I sometimes need a little time to work out the maths required to solve a problem or create a working algorithm. I have a degree in software engineering and got a good grade, one classification below the highest. The problem is, I'm very used to coding in my own time, when I'm against the clock, I often choke, my head gets all fuzzy and I don't perform as well as I could. I'm aware of my flaws and accept them, but I can't help but beat myself up when I fail a programming test. I scored a 17/100 on the Codility test, which is an awful grade. I've taken challenges on there before and scored 100% on correctness, but I usually get very low performance marks, which puts a big hit on your grade. How do you deal with a failure that severe and embarrassing? This is something I spent three years studying and have done for four years since graduating despite being unable to find employment doing it, yet that result says to me that I suck at it, the one thing that's supposed to be my "thing". It made me feel like a stupid, useless sack of crap. I know it was a fair assessment of how I performed in that test, but that just made it worse somehow, like it was proof that I'm a failure. I WANT to be good at this, I make every effort I can to learn and improve at every opportunity, but when you get that horrendous a score, it makes you wonder whether you are just too stupid to do this, and that REALLY hurts when you're already a self critical person. How do you deal with a failure like that? Any advice would be appreciated. 
As an example of the standard of my coding, here's the code I wrote for a Codility challenge I took earlier today. It scored 100% for correctness, but 0% for performance since it must not be well optimized. What it does is take in an array of integers describing the heights of a set of plots of land in a straight row on an island (A), and an array describing the height of the water around that island on a given day (B).

using System;

class Solution
{
    public int[] solution(int[] A, int[] B)
    {
        int[] numberOfIslands = new int[B.Length];
        int[] plotStates = new int[A.Length + 1];
        plotStates[plotStates.Length - 1] = 0;

        for (int x = 0; x <= B.Length - 1; x++)
        {
            for (int y = 0; y <= A.Length - 1; y++)
            {
                if (A[y] > B[x])
                {
                    plotStates[y] = 1;
                }
                else
                {
                    plotStates[y] = 0;
                }
            }

            for (int z = 0; z < plotStates.Length - 1; z++)
            {
                if (plotStates[z] == 1 && plotStates[z + 1] == 0)
                {
                    numberOfIslands[x]++;
                }
            }
        }

        return numberOfIslands;
    }
}

What do you think?
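For what it's worth, a 0% performance score on a task like this usually just means the grader expected something faster than the O(N*M) double loop, not that the logic was wrong. One common trick for this particular problem (sketched in Python rather than C# for brevity, and based on my reading of the task from the description above, so treat it as an assumption): plot i starts a new island exactly when it is above water but the plot to its left is not, which happens for every water level w in the range [A[i-1], A[i]-1] whenever A[i] > A[i-1]. Accumulating those ranges in a difference array lets every day's query be answered in O(1) after one pass:

```python
def islands_per_level(heights, levels):
    """For each water level, count maximal runs of plots strictly above water.

    Runs in O(len(heights) + max(heights) + len(levels)) instead of the
    O(len(heights) * len(levels)) nested loops. Assumes non-negative
    heights and water levels.
    """
    if not heights:
        return [0] * len(levels)
    max_h = max(heights)
    diff = [0] * (max_h + 1)
    for i, h in enumerate(heights):
        lo = 0 if i == 0 else heights[i - 1]
        if h > lo:
            # plot i starts an island exactly for water levels lo .. h-1
            diff[lo] += 1
            diff[h] -= 1
    # prefix-sum the difference array: counts[w] = islands at water level w
    running = 0
    counts = []
    for w in range(max_h + 1):
        running += diff[w]
        counts.append(running)
    return [counts[w] if 0 <= w <= max_h else 0 for w in levels]
```

For example, islands_per_level([2, 1, 3], [0, 1, 2, 3]) returns [1, 2, 1, 0]: at water level 1 only the plots of height 2 and 3 remain, forming two separate islands.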
1. Quick Start

1.1. Introduction

BioServices provides access to several Web Services. Each service requires some expertise of its own. In this Quick Start section, we will cover neither all the services nor all their functionalities. However, it should give you a good overview of what you can do with BioServices (both from the user and the developer point of view).

Before starting, let us recall what Web Services are. They provide access to databases or applications via a web interface based on the SOAP/WSDL or REST technologies. These technologies allow programmatic access, of which BioServices takes advantage. The REST technology uses URLs, so there is no external dependency: you simply need to build a well-formatted URL and you will retrieve an XML document that you can consume with your preferred technology platform. The SOAP/WSDL technology combines SOAP (Simple Object Access Protocol), which is a messaging protocol for transporting information, with WSDL (Web Services Description Language), which is a method for describing Web Services and their capabilities.

1.1.1. What methods are available for a given service

Usually most of the service functionalities have been wrapped, and we try to keep the names as close as possible to the API. On top of the service methods, each class inherits from the BioService class (REST or WSDL). For instance, REST services have the useful request method. Another nice function is onWeb.

See also

1.1.2. What about the output?

Outputs depend on the service and on the functionalities of the service, so they can be heterogeneous. However, outputs are mostly XML-formatted or in tab-separated-column (TSV) format. When XML is returned, it is usually parsed via the BeautifulSoup package (for instance, you can get all children using the getchildren() function). Sometimes, we also convert the output into dictionaries. So, it really depends on the service/functionality you are using.
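To make the REST idea above concrete, independently of BioServices itself: a client only has to assemble a well-formed query URL and then parse the document the service returns. The sketch below uses only the standard library; the endpoint is hypothetical and the XML is a canned response standing in for what a real service would send back.

```python
from urllib.parse import urlencode
from xml.etree import ElementTree

# Step 1: a REST request is nothing more than a well-formed URL
# (the endpoint here is made up, for illustration only).
params = {"query": "P43403", "format": "xml"}
url = "https://example.org/service?" + urlencode(params)

# Step 2: the service answers with a document the client consumes;
# here we parse a canned XML response with the standard library.
response = """<result>
  <entry id="P43403"><gene>ZAP70</gene></entry>
</result>"""
root = ElementTree.fromstring(response)
genes = [entry.findtext("gene") for entry in root.iter("entry")]
```

BioServices hides this URL-building and parsing behind its service classes, which is exactly what the following sections demonstrate.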
Let us look at some of the Web Services wrapped in BioServices.

1.2. UniProt service

Let us start with the UniProt class. With this class, you can access the UniProt services. In particular, you can map an ID from one database to another. For instance, to convert a UniProtKB ID into a KEGG ID, use:

>>> from bioservices.uniprot import UniProt
>>> u = UniProt(verbose=False)
>>> u.mapping(fr="ACC+ID", to="KEGG_ID", query='P43403')
{'P43403': ['hsa:7535']}

Note that the response returned from the UniProt web service is converted into a list. The first two elements are the databases used for the mapping. The rest of the list alternates between queried elements and their answers.

You can also search for a specific UniProtKB ID to get exhaustive information:

>>> print(u.search("P43403", frmt="txt"))
ID   ZAP70_HUMAN     Reviewed;     619 AA.
AC   P43403; A6NFP4; Q6PIA4; Q8IXD6; Q9UBS6;
DT   01-NOV-1995, integrated into UniProtKB/Swiss-Prot.
DT   01-NOV-1995, sequence version 1.
...

To obtain the FASTA sequence, you can use searchUniProtId():

>>> res = u.searchUniProtId("P09958", frmt="xml")
>>> print(u.searchUniProtId("P09958", frmt="fasta"))
>sp|P09958|FURIN_HUMAN Furin OS=Homo sapiens GN=FURIN PE=1 SV=2
MELRPWLLWVVAATGTLVLLAADAQGQKVFTNTWAVRIPGGPAVANSVARKHGFLNLGQI
FGDYYHFWHRGVTKRSLSPHRPRHSRLQREPQVQWLEQQVAKRRTKRDVYQEPTDPKFPQ
QWYLSGVTQRDLNVKAAWAQGYTGHGIVVSILDDGIEKNHPDLAGNYDPGASFDVNDQDP
DPQPRYTQMNDNRHGTRCAGEVAAVANNGVCGVGVAYNARIGGVRMLDGEVTDAVEARSL
GLNPNHIHIYSASWGPEDDGKTVDGPARLAEEAFFRGVSQGRGGLGSIFVWASGNGGREH
DSCNCDGYTNSIYTLSISSATQFGNVPWYSEACSSTLATTYSSGNQNEKQIVTTDLRQKC
TESHTGTSASAPLAAGIIALTLEANKNLTWRDMQHLVVQTSKPAHLNANDWATNGVGRKV
SHSYGYGLLDAGAMVALAQNWTTVAPQRKCIIDILTEPKDIGKRLEVRKTVTACLGEPNH
ITRLEHAQARLTLSYNRRGDLAIHLVSPMGTRSTLLAARPHDYSADGFNDWAFMTTHSWD
EDPSGEWVLEIENTSEANNYGTLTKFTLVLYGTAPEGLPVPPESSGCKTLTSSQACVVCE
EGFSLHQKSCVQHCPPGFAPQVLDTHYSTENDVETIRASVCAPCHASCATCQGPALTDCL
SCPSHASLDPVEQTCSRQSQSSRESPPQQQPPRLPPEVEAGQRLRAGLLPSHLPEVVAGL
SCAFIVLVFVTVFLVLQLRSGFSFRGVKVYTMDRGLISYKGLPPEAWQEECPSDSEEDEG
RGERTAFIKDQSAL

See also: Reference guide of bioservices.uniprot.UniProt for more details.

1.3. KEGG service

The KEGG interface is similar but contains more methods. The tutorial presents the KEGG interface in detail, but let us have a quick overview. First, let us start a KEGG instance:

from bioservices import KEGG
k = KEGG(verbose=False)

KEGG contains biological data for many organisms. By default, no organism is set, which can be checked in the following attribute:

k.organism

We can set it to human using the KEGG terminology for Homo sapiens:

k.organism = 'hsa'

You can use the info() method to obtain statistics on the pathway database:

>>> print(k.info("pathway"))
pathway          KEGG Pathway Database
path             Release 65.0+/01-15, Jan 13
                 Kanehisa Laboratories
                 218,277 entries

You can see the list of valid databases using the databases attribute. Each database entry can also be listed using the list() method. For instance, the organisms can be retrieved with:

k.list("organism")

However, extra processing is required to extract the Ids, so we provide aliases to retrieve the organism Ids easily:

k.organismIds

The human organism is coded as "hsa". You can also get its T number instead:

>>> k.code2Tnumber("hsa")
'T01001'

Every element is referred to by a KEGG ID, which may be difficult to handle at first. There are methods to retrieve the IDs, though. For instance, get the list of pathway Ids for the current organism as follows:

k.pathwayIds

For a given gene, you can get its full information by using the get() method:

print(k.get("hsa:3586"))

or for a pathway:

print(k.get("path:hsa05416"))

See also: Reference guide of bioservices.kegg.KEGG for more details.
See also: KEGG Tutorial for more details.
See also: Reference guide of bioservices.kegg.KEGGParser to parse a KEGG entry into a dictionary.

1.4.
QuickGO

To access the GO interface, simply create an instance and look for an entry using the bioservices.quickgo.QuickGO.Term() method:

>>> from bioservices import QuickGO
>>> g = QuickGO(verbose=False)
>>> print(g.Term("GO:0003824", frmt="obo"))
[Term]
id: GO:0003824
name: catalytic activity
def: ...
synonym: "enzyme activity" exact
xref: InterPro:IPR000183
...

See also: Reference guide of bioservices.quickgo.QuickGO for more details.

1.5. PICR service

PICR, the Protein Identifier Cross-Reference service, is provided in both the WSDL and REST protocols. When that is the case, we arbitrarily chose one of the available protocols; for PICR, we implemented only the REST interface. The methods available in the REST service are very similar to those available via SOAP, except for one major difference: only one accession or sequence can be mapped per request. The following example returns an XML document containing information about the protein P29375 found in two specific databases:

>>> from bioservices.picr import PICR
>>> p = PICR()
>>> res = p.getUPIForAccession("P29375", ["IPI", "ENSEMBL"])

See also: Reference guide of bioservices.picr.PICR for more details.

1.6. Biomodels service

You can access the BioModels service and obtain a model as follows:

>>> from bioservices import biomodels
>>> b = biomodels.BioModels()
>>> model = b.getModelSBMLById('BIOMD0000000299')

Then you can play with the SBML file with your favorite SBML tool. In order to get the model IDs, you can look at the full list:

>>> b.modelsId

Of course, that does not tell you anything about a model; there are more useful functions such as getModelsIdByUniprotId() and others from the getModelsIdBy family.

See also: Reference guide of bioservices.biomodels.BioModels for more details.
See also: Biomodels tutorial for more details.

1.7.
Rhea service

Create a Rhea instance as follows:

from bioservices import Rhea
r = Rhea()

Rhea provides only two types of requests through a REST interface, available via the search() and entry() methods. Let us first find information about the chemical product caffeine using the search() method:

xml_response = r.search("caffein*")

The output is in XML format. Python provides lots of tools to deal with XML, so you can surely find good ones. Within BioServices, we wrap all returned XML documents into a BeautifulSoup object that eases the manipulation of XML documents. As an example, we can extract all "id" fields as follows:

>>> [x.getText() for x in xml_response.findAll("id")]
[u'27902', u'10280', u'20944', u'30447', u'30319', u'30315', u'30311', u'30307']

The second method provided is the entry() method. Given an Id found earlier (e.g., 10280), you can query the Rhea database:

>>> xml_response = r.entry(10280, "biopax2")

Warning: the r.entry output is also in XML format, but unlike for the search() method, we do not provide a specific XML parser for it. The available output formats can be found in:

>>> r.format_entry
['cmlreact', 'biopax2', 'rxn']

See also: Reference guide of bioservices.rhea.Rhea for more details.

1.8. Other services

There are many other services provided within BioServices, and the reference guide should give you all the available information, with examples to get you started with any of them. The home page of each service is usually a good starting point as well. Services that are not available in BioServices can still be accessed quite easily, as demonstrated in the Developer Guide section.
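BioServices hands you BeautifulSoup objects for the id extraction shown above, but the same step can be sketched with the standard library's ElementTree. The XML below is a made-up miniature of a search response, not real Rhea output:

```python
import xml.etree.ElementTree as ET

# A made-up miniature of a search response, just to show the id extraction.
xml_response = """<resultset>
  <rheaReaction><id>27902</id></rheaReaction>
  <rheaReaction><id>10280</id></rheaReaction>
</resultset>"""

root = ET.fromstring(xml_response)
ids = [el.text for el in root.iter("id")]  # collect every <id> element's text
print(ids)  # ['27902', '10280']
```

The BeautifulSoup expression in the docs above (findAll("id") plus getText()) does essentially the same walk over the parsed tree.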
http://bioservices.readthedocs.io/en/latest/quickstart.html
Hyper::Developer::Manual::Glossary - Glossary of terms

This document describes Hyper::Developer::Manual::Glossary 0.01, a glossary of terms used in the documentation of Hyper. This glossary defines each word that is not part of simplified English or perlglossary.

An attribute is an entity that defines a property of an object or element. An attribute usually consists of a name and a value.

A container is an object that can hold things.

A control is an interface element that a computer user interacts with, such as a window, a text box, or a submit button. A control can consist of other controls.

A field is a part of data. For example, the name of a person may consist of two distinct fields: first name and last name.

A framework is a set of classes.

A label is a brief description given for purposes of identification. In Hyper, a label is the text displayed in combination with a control.

A method is a piece of code associated with a class or object to perform a task.

A namespace is a space (or context) which restricts the validity of a name to this space.

A service is a part of a website. A service consists of usecases. Example: the website of a company offers the services hosting, mail, and domain.

A step is a part of a usecase.

A template is a ...

A transition is a definition of the next step.

A usecase is a single interaction between an actor (a user) and other (secondary) actors, and the system itself. A usecase is a sequence of steps. Example: the service mail has the usecases create account, configure spam filter, and configure mail forward.

$Author: ac0v $
$Id: Glossary.pod 333 2008-02-18 22:59:27Z ac0v $
$Revision: 333 $
$Date: 2008-02-18 23:59:27 +0100 (Mon, 18 Feb 2008) $
$HeadURL: $

Helmut Wollmersdorfer <helmut@wollmersdorfer.at>

This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/Hyper-Developer/lib/Hyper/Developer/Manual/Glossary.pod
def hanoi(n,s,t,b):
    assert n > 0
    if n == 1:
        print ('move',s,'to',t)
    else:
        h1=hanoi(n-1,s,b,t)
        h2=hanoi(1,s,t,b)
        h3=hanoi(n-1,b,t,s)

for i in range(1,5):
    print ('New Hanoi Example: hanoi(',i,',source, target, buffer)')
    print ('----------------------')
    hanoi(i,'source','target','buffer')

The code seems to work, and if you single-step INTO the function you should be able to see n, s, t, b. What exactly does not work?

You could just set a breakpoint at line 3 and inspect n, s, b, t, or just insert a trace line (e.g. a print of n, s, t, b) after line 1 of your code. That's of course not using a debugger, just adding traces, but sometimes this is faster and more efficient than single-stepping through code.

The formatting in the print statement is just there to keep the columns aligned.

Also, you are printing the movement only when n == 1, which seems odd. Double-check your logic on this piece. h1, h2, h3 never receive a return value. However, as Walter stated, h1, h2 and h3 will always be None, and they are not needed, so you could drop the assignments, as the recursive calls without value assignments are sufficient.

The whole idea of the Towers of Hanoi is to move a disk only when n is 1. The recursive calls, and the fact that permutations are performed over s, t, b, are 'just' there to determine the order of the moves.
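Putting the answers above together, here is a cleaned-up sketch of the same algorithm: it drops the h1/h2/h3 assignments (the function has no return statement, so they are always None) and renames the parameters for readability:

```python
def hanoi(n, source, target, buffer):
    """Print the moves that solve an n-disk Towers of Hanoi puzzle."""
    assert n > 0
    if n == 1:
        print('move', source, 'to', target)
    else:
        hanoi(n - 1, source, buffer, target)  # clear the way...
        hanoi(1, source, target, buffer)      # ...move the largest disk...
        hanoi(n - 1, buffer, target, source)  # ...then restack on top of it

hanoi(3, 'source', 'target', 'buffer')  # prints 2**3 - 1 == 7 moves
```

As the last answer notes, the recursion exists only to order the moves; the actual work happens solely in the n == 1 branch.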
https://www.experts-exchange.com/questions/28492274/h1-2-3-but-can-not-see-these-values-in-a-debugger.html
- about flash cs3'components - null object reference - Class Clarification - Get a text field to mirror the contents of another text field [renamed] - How to simply call a function? - Access properties of getChildAt - Displaying Library Content in AS 3.0 code in a class?? - all I wanna do - Help with a for statement in AS3/Flash9 - holly cow amf rocks the network - i think i've found my error - XML Search - Write to file, in flash ?? - onLoadInit help ? - what data type to use? - Scroll sequence of images? - Packages - Is it possible to adjust width / height for an external loaded swf? - need help urgently - Stop sound from playing continuosly on keyisdown - Would an interactive CD-ROM require a preloader? - Install Flash 8 and CS 9 - item movement on mousemove.... - ExternalInterface : any luck? - AS3 and button functions - Use of packages - [Question]Export MXML from FireworksCS3 - AS3 component architecture - A good AS3 tutorial, anyone know one? - How can I sequentially load xml files? - Filename as variable - Apollo compiler and mx: classes - clickTag in AS3? - AS3 source architecture - as3 calllater - Help with modal support in actionscript 3 - AS3 custom component does not appear on stage - [as3] Flash Memory Use, THE SEQUEL! - UpdateAfterEvent - URLLoader and Variables HELP!!!! - Targeting Instances within Assets through Classes - HELP with embedding Snocap Music Store in my Flash Website - FLASH: TypeError: Error #1010 when calling a class (Flash 9 - AS3) - Weird problem... preloading stuff - problem with text loading in XML file - Access a Linkage Clip From an External Class - Custom Class Optimization? - Dispatch an event from root which should received by childs. - Dynamic Class Instantiation - Post method Proxying Data - 1037 Error - Best onRollOver substitute? - Loader class to load swf - Security Sandbox Violation - flex2,php,mysql - unload() and NetStreams - how to make a preloader in CS3? 
- Bitwise gems - tweening and color - Table variable - XML parsing speed... - Class glue - hitTestObject area - Downloadable .fla preloader - Flex/AS3 casting question - alternative for eval [] in AS3 ? - Tracking graphics loading proccess - AS3 conclusion: dynamic things are evil - How to attach button behaviour to displayObject? - attachSound gone - So how can we load random sounds from library? - dropTarget - Using packages obtained from BOOKS... - Help with XML: Traversing - Error doesn't make sense help please - Getting duration from netstream or video - horizontal slide gallery - class App extends Box - MP3 header navigation - [AS3] Waiting on a frame for a random duration? - custom events AHHHHHHRGH - Little help with Dynamic Text, please - Set Cursor's Location? - Is this a correct way to capture keys? - coded Tweens mysteriously abort - Google Adwords Flash banner requirements - volume control - Dynamic Text on a curve, Flash. - textInput.restrict -- a better way??? - Whats wrong HERE?! - Help with creating bubbles with Actionscript - Accessing clips I attached? - Actionscript 3.0 tutorial and a textfield creation problem - AS Puzzle - How do I use timers to repeat a function, and what else r they for? - [as3] Motion Guides and Motion Code - for loop + addChild. How to get unique instance names? - Associative arrays: Array or Object? - cs3 component creation - What do I refer the mouse as? (instead of _xmouse in AS2) - addChild then Tween them !-(Rain drops example) - Why doesn't this code work (trying to do parent["b.b"+i] = new MovieClip(); - Changing movieclips depth?? - Dynamic Mc access to stage - Function.apply() - Actionscript 2 or 3? - String.fromCharCode and keyCode - Removing a DisplayObject from memory - Checking if event listeners have been created... - [as3] When and How to use ints - Moving assets for a title game with AS3 - Declaring a shared object called "DB" - [AS3] How do I load external images into a movie clip then add children? 
- 3D Circular Menu - Including exteral .as files and, why aren't my .as files using AS3? - using the 'parent' property - Error 1119 - Why AS3? - Memory Optimization - How do I rotate objects and still hold it's position or obstructability? - movie clip with sub movies - XML bug? - XML Editor? - Ever wondered what Graphic symbols really are? - Has anyone made Google adWord Banners with AS3? - Accessing atext field inside a movieclip & simple question - duplicate movieclip in AS3, help plz - TypeError: Error #1009 :( - Text+Alpha help. - Finding the class of an object - how useful is a non hexadecimal RGB value - Should I wait To Learn AS3 - Rotation impact in performance - Flawless rollover buttons? - [as3] Flash Memory Use: the Timeline - How big should be a XML file. - Roll-over area in actionscript. - random sort with Arrays in AS 3.0 - removeChild does not remove certain movieClips. Please Help! - array access notation in AS3 - Creating a list of objects on stage - some begginer questionsabout Movieclips and Dictionary - check varaibles from aspx file - stopping movement in actionscript. - [as3 in CS3] Does SWF asset embedding work? - passing variables between swf files - namespaces in xml with e4x - How do i fix this coding?...... - Code isn't working.. player movement screwed up.. - forEach method - unescape method change in ActionScript 3 - Customizing the ComboBox component - Name of a Layer by script - What object do I pass hitTestObject(...)? - [as3] frame rate browser issues - spreadshirt.com - spectrum analyzer - Unload/StopAll/Eliminate Sounds - flash touchscreen kiosks - Array item doesn't exist, so let's get stroppy... or not? - AS3 event propagation thru alpha channel mask? - Local reference is it clear on exit? - Issue with gotoandplay - referencing dynamic movieclip names? - AS3 animation framework - how would i use attachMovie in AS3? 
- Saving Sprites or Shapes with ByteArray - [as3] Drawing BitmapData to erase BitmapData - Do I have to "Anti-Alias" ? sometimes I really need alias - lib-linkaged-BitmapData ignores the width and height? - Collision detection for a player with walls.. ahh!! - Bug ? - Talking between Classes - Simple AS3 Animation Engine - Good Third-Party AS3 Editors - as3 equivalent of sortable list - Nesting MovieClip Objects? - AS3 Animation Issue - XML: AppendChild - how to make a instance delete it's self - Color fill png dynamically? - Random Organic Motion - Multidimensional Array , Slice and Sort - Change XML Variables on fly - extends more than one class - Can't load external .swf which us a DocumentClass? - fscommand showmenu - getting files from a folder - Using 'Loader' in AS3 - hi, theCanadian - Memory problem - BUG: stageWidth/Height - variables in a loaded swf - HELP - URL Query String - Embedding assets in a AS3 class with Flash CS3 - URLLoader issue - executing PHP - Error #1023: Incompatible override. - AS3 Memory Management Tips - eval() - reserve words in parameters - Measuring performance in AS3 - fun with kuler - class definitions - widht & height of a mc = 100% ??? - Fullscreen mode not working - How to code components in AS3? - [FLASH+FLEX] call a method in a flex swf from as3 - Custom cursor trouble..? - Security passing variables... - Playing a random embedded sound - Rotate TextField - Sound.length Formatting - as3 migration -- variables within MCs (beginner) - How to save XML files with PHP to a server - How I do to verify if a child was added or not, even it's not a null value - Weird error #1010: I found a solution but... - Send a variable to eventListener function - Class Question - how to load an swf and gotoAndPlay? - sendToURL screws up .Net - 31 fps still optimum frame rate in AS3? - How do you track just the mouseY movement? - Accessing the stage - E4X doubt - rigidbody vs circle collision? 
- Depth: works in a difference way in Flash than on-line - Can't Override Array Methods - Covering buttons - Best practice for preloaders - When does one make classes? - Add AS to a button. - sprite or movie clip ? - passing variables from eventListener to function AS3 - flash cs3 + vista + actionscript panel - delete TextField instances - Centering a linked MC in AS3 - AS2 to AS3 Problem - triyng to get eventdispatcher trigger listener in multiple instances - eventDispatcher help please ! - [Discussion] Comments helpful or hurtful? - Quick Buttons - xml total children - targeting dynamically generated MCs - SoundMixer ... is this a BUG or a FEATURE? - AS3 Dynamic Text Kerning - Depth stupid issue
http://www.kirupa.com/forum/archive/index.php/f-141-p-2.html
On 11/14/06, Dmitry V'yal <akamaus at gmail.com> wrote:
> Bulat Ziganshin.

[I sent a message earlier to this effect but accidentally hit "reply" rather than "reply all", so only Duncan got it.]

I'm looking through the GTK docs, and it appears to me that you don't actually have to choose an interval at all. All that's needed is that you somehow "trigger" the GTK event loop to call an event handler in its thread which empties the channel and executes any GUI actions therein. It seems that GTK supports adding custom sources to the event loop (if I understand the docs correctly, this is thread-safe, and signalling one of these sources should be too). So either we add a new custom source, or find some other way of "signalling" a particular event. I would guess one way of doing this would be to add a timeout event handler with delay 0; this event handler would be a function called, say, dispatchPendingGUIActions, which empties the channel of GUI actions, and also returns False so that it will only get called once (per "signal" - on the next "signal" it will get added again).

In other words, there is no need for polling. Just add the function to the channel and signal the main loop (first checking if we are already in the main GUI thread, in which case we just call the action directly without going through the channel, and also checking if a "signal" has already been raised but not yet "handled", in which case we don't do it again). Something like (untested):

-- contains the action, and the response mvar
data GUIWorkItem = forall a . GUIWorkItem !(IO a) !(MVar a)

newGUIChan :: IO (Chan GUIWorkItem)
newGUIChan = newChan

-- this adds an action to the main channel, unless we're already
-- in the main GUI thread, in which case we just run the action directly
dispatchGUIAction act = do
  isMain <- isMainGUIThread
  if isMain
    then act
    else do
      res <- newEmptyMVar
      writeChan theGlobalGUIChan (GUIWorkItem act res)
      triggerMainGUIThread
      unsafeInterleaveIO (takeMVar res)

-- this is the action we want to call from the main GUI thread
-- every time a new action has been added to the queue, e.g.
-- by adding it as an event handler to a timer event
dispatchPendingGUIActions = do
  empty <- isEmptyChan theGlobalGUIChan
  if empty
    then setGTKThreadNoLongerTriggered >> return False
    else do
      GUIWorkItem act resVar <- readChan theGlobalGUIChan
      res <- act
      putMVar resVar res
      dispatchPendingGUIActions

-- an example GUI action, this is how we "wrap" all our actions
gtkAction' x y z = dispatchGUIAction (gtkAction x y z)

-- this is how we signal GTK so that it will call the code which
-- handles all the actions in theGlobalGUIChan
triggerMainGUIThread = do
  isTriggered <- isGTKThreadTriggered -- not necessarily needed...
  if isTriggered
    then return ()
    else timeoutAddFull dispatchPendingGUIActions priorityHigh 0

(Note: the unsafeInterleaveIO is there so that dispatchGUIAction won't have to block until the main GUI thread gets around to handling the event; after all, for quite a few GTK actions you don't really care about the result, so you should return immediately.)

There are some minor gaps here, but that's the general idea. We avoid the polling by explicitly signalling GTK whenever we add a GUI event to the channel. I've done it here by adding the "dispatch" function as an event handler to a timeout event (this event handler returns False so the timeout event is removed after being called). This depends on whether or not adding event handlers is thread-safe. If it isn't, it may be enough to wrap the triggerMainGUIThread action in a global lock (someone who knows more about this?). If that still isn't safe, then we could add our own custom source to the GTK event loop and signal the event loop using that (though that might be a bit more messy?).

/S

--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862
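The channel-plus-wakeup idea in the post above is not Haskell-specific. Here is a rough single-process Python analogue (all names invented, no GTK involved): worker code enqueues GUI actions, and the "main loop" drains the whole queue once per wakeup instead of polling on an interval:

```python
import queue
import threading

# A single global channel of pending GUI actions (analogue of theGlobalGUIChan).
gui_queue = queue.Queue()

def dispatch_gui_action(action):
    """Enqueue a zero-argument callable for the 'main GUI thread' to run.

    Returns a result dict and an Event so the caller can pick the result
    up later without blocking (mirroring the unsafeInterleaveIO trick).
    """
    result = {}
    done = threading.Event()

    def work():
        result['value'] = action()
        done.set()

    gui_queue.put(work)
    # A real port would wake the main loop here (the delay-0 timeout trick).
    return result, done

def dispatch_pending_gui_actions():
    """Run from the main loop when signalled: drain everything queued."""
    while True:
        try:
            gui_queue.get_nowait()()
        except queue.Empty:
            return False  # like returning False from the GTK timeout handler

res, done = dispatch_gui_action(lambda: 2 + 2)
dispatch_pending_gui_actions()  # pretend the main loop just woke up
print(done.is_set(), res['value'])  # True 4
```

The key property is the same as in the Haskell sketch: the consumer runs only when signalled, so no CPU is spent polling an empty channel.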
http://www.haskell.org/pipermail/haskell-cafe/2006-November/019533.html
Using a GraphQL API comes with distinct advantages. With GraphQL, we can request exactly the data we need, without ever under- or over-fetching. We can also get multiple resources in a single request. At the same time, the requests themselves can serve as a form of documentation, making it easy to understand what data is being used, where, and why.

But the most exciting feature of GraphQL is that the API is fully described by its schema, including all data types for each possible query or mutation. Why does this matter? Because, based on that schema, we can automatically create TypeScript types for the entire API on the frontend. What's more, we can easily autogenerate fully-typed custom React hooks for a data-fetching library like React Query. Let me show you how.

First, let's create our React project with Create React App, using the TypeScript template:

yarn create react-app graphql --template typescript

Next, we need an API. FakeQL provides a great way to create a mock GraphQL API and deploy it. Because we will be using the default definition, we can set everything up simply by clicking Extend JSON and then Deploy. The tool generates a unique URL where we can access our new API.

Now that we have our React app and our API, it is time to set up our data-fetching library, React Query. Let's install it:

yarn add react-query

Now, set up the React Query client:

import { QueryClient, QueryClientProvider } from 'react-query'
import Posts from 'components/Posts'

const queryClient = new QueryClient()

const App = () => {
  return (
    <QueryClientProvider client={queryClient}>
      <Posts />
    </QueryClientProvider>
  )
}

export default App

Because our API provides a list of posts, we will use a Posts component to display them. For the moment, let's leave it empty:

// components/Posts.tsx
const Posts = () => {
  return (
    <></>
  )
}

export default Posts

Next, we need a query to get the list of posts. Let's define it in a .graphql file and co-locate it with our component:
Let’s define it in a .graphql file and co-locate it with our component: # components/Posts/posts.graphql query Posts { posts { id title } } Finally, let’s also add a mutation for deleting a post: # components/Posts/deletePost.graphql mutation DeletePost($id: ID!) { deletePost(id: $id) } We are now ready to auto-generate our custom and fully typed React Query hooks based on the requests we previously defined in our . graphql files. We will be using GraphQL Code Generator. We start by installing it: yarn add graphql yarn add -D @graphql-codegen/cli Next, we need to initialize the wizard and go through the steps: yarn graphql-codegen init First, we choose the type of app we are building: Then, we define our schema is by pasting our FakeQL url. We define where our operations and fragments are: We choose our plugins: We choose where to write the output: Let’s also generate an introspection file: We need to name our config file: Finally, let’s name our script graphql:codegen: So far, so good! In order to generate custom React Query hooks, we need to install the appropriate plugin: yarn add -D @graphql-codegen/typescript-react-query And add a quick edit of the codegen.yml config file in order to make it work: overwrite: true schema: '' documents: 'src/**/*.graphql' generates: src/generated/index.ts: plugins: - typescript - typescript-operations - typescript-react-query config: fetcher: endpoint: '' Finally, we need to run our script. yarn graphql:codegen We are now done! Our fully-typed custom React Query hooks have been automatically generated and added directly to our project’s generated folder. Let’s see them in action! 
In our Posts component, we are now ready to display the list of posts:

import { usePostsQuery } from 'generated'

const Posts = () => {
  const { data, isLoading } = usePostsQuery()
  return (
    <>
      {isLoading && <p>Loading ...</p>}
      {data && data.posts?.map(post => (
        <div key={post?.id}>
          <p>{post?.title}</p>
          <hr />
        </div>
      ))}
    </>
  )
}

export default Posts

Let's also add the DeletePost mutation we defined earlier:

import { useQueryClient } from 'react-query'
import { usePostsQuery, useDeletePostMutation } from 'generated'

const Posts = () => {
  const queryClient = useQueryClient()
  const { data, isLoading } = usePostsQuery()
  const { mutate } = useDeletePostMutation({
    onSuccess: () => queryClient.invalidateQueries('Posts'),
  })
  return (
    <>
      {isLoading && <p>Loading ...</p>}
      {data && data.posts?.map(post => (
        <div key={post?.id}>
          <p>{post?.title}</p>
          <button onClick={() => post && mutate({ id: post.id })}>
            Delete
          </button>
          <hr />
        </div>
      ))}
    </>
  )
}

export default Posts

That's it, we now have a working example!

The approach described above allows us to take full advantage of GraphQL on the frontend by automating both the creation of TypeScript types for the API and the generation of custom React Query hooks for each request. By using it, we have also substantially reduced the amount of data-fetching boilerplate code we need to write. With this tooling in place, all we need to do in order to create additional React Query custom hooks for a request is to create a .graphql file and run the graphql:codegen script. Pretty cool, right?

Curious to play with the code yourself? Find a full working example in my GitHub repo.

Happy coding! ✨

If you found this article useful, consider sponsoring my content! Follow me on Twitter for more!
https://blog.whereisthemouse.com/graphql-requests-made-easy-with-react-query-and-typescript
A library to inline SVG source strings into Halogen views.

How to use

import Svg.Parser.Halogen (icon)

-- | You can use FFI and webpack raw-loader to load external SVG files
code :: String
code = """<svg ...><path fill-... /></svg>"""

type Icon = forall p r i. Array (IProp r i) -> HTML p i

iconCode :: Icon
iconCode = icon code

It's as simple as this; in most cases you only need the icon function. You can then use iconCode in your render function, and you can also apply an additional className to it. Halogen.HTML.Properties.class_ won't work, though; you need to use Halogen.HTML.attr:

import Halogen.HTML as HH

className = HH.attr (HH.AttrName "class")

render state = iconCode [ className "icon" ]

How it works

Svg.Parser parses an SVG source String as an SvgNode. Svg.Parser.Halogen converts an SvgNode to Halogen HTML. You can also write adapters to convert SvgNode to the HTML type of other view libraries. If you want to use Svg.Parser with other view libraries, I can release it as a separate package; let me know if you are interested.
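What Svg.Parser does (turning an SVG source string into a tree of nodes that a view layer can consume) can be sketched in Python with the standard library's ElementTree. This is only an analogy for illustration, not the PureScript implementation:

```python
import xml.etree.ElementTree as ET

# A tiny hand-written SVG string standing in for a loaded icon file.
svg = '<svg viewBox="0 0 24 24"><path d="M0 0h24v24H0z"/></svg>'

root = ET.fromstring(svg)                # parse the source string into a tree
print(root.tag, root.attrib["viewBox"])  # svg 0 0 24 24
print([child.tag for child in root])     # ['path']
```

A converter in the spirit of Svg.Parser.Halogen would then walk this node tree and emit the corresponding view-library elements, which is why the same parsed representation can back adapters for different view libraries.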
https://pursuit.purescript.org/packages/purescript-svg-parser-halogen/0.2.0
Harlowe - the default Twine 2 story format. Rough documentation is at. See below for compilation instructions.

2.0.1 changes:

Bugfixes

- Fixed a bug where (enchant:) applied to ?Page couldn't override CSS properties for <tw-story> (including the default background colour and colour).
- Fixed a Passage Editor display bug where the left margin obscured the first letter of lines.

2.0.0 changes:

Bugfixes

- Fixed a bug where subtracting arrays and datasets wouldn't correctly compare contained data structures - for instance, (a:(a:1)) - (a:(a:1)) wouldn't work correctly.
- Fixed a bug where the (dataset:) macro, and adding datasets, wouldn't correctly compare data structures - for instance, (dataset: (a:),(a:)) would contain both identical arrays, as would (dataset: (a:)) + (dataset: (a:)). Additionally fixed a bug where data structures were stored in datasets by reference, allowing two variables to reference (and remotely alter) the same data.
- Fixed a bug where using (move:) to move a subarray or substring (such as (move: $a's (a:2,3) to $b)) wouldn't work.
- Fixed a bug where using (set:) to set a substring, when the given array of positions contained "length" (such as (set: $a's (a:1,"length") to "foo")), wouldn't produce an error.
- Fixed a bug where the (count:) macro would give the wrong result when the data to count (the second value) was an empty string.
- Now, a (print:) command that contains a command will only execute the contained command if it is itself actually displayed in the passage - the code (set: $x to (print:(goto:'X'))) would formerly perform the (goto:) immediately, even though the (print:) was never displayed.
- Now, datasets contained in other datasets should be printed correctly, listing their contents.
- (alert:), (open-url:), (reload:) and (goto-url:) now correctly return command values rather than the non-Harlowe value undefined (or, for (open-url:), a Javascript Window object). This means that (alert:)'s time of execution changes relative to (prompt:) and (confirm:) - (set: $x to (prompt:"X")) will display a JS dialog immediately, but (set: $x to (alert:"X")) will not - although this is conceptually reasonable given that (prompt:) and (confirm:) are essentially "input" commands obtaining data from the player, and (alert:) is strictly an "output" command.
- Now, line breaks between raw HTML <table>, <tr>, <tbody>, <thead> and <tfoot> elements are no longer converted into erroneous <br> elements, which were moved to just above the table. Thus, one can write or paste multi-line <table> markup with fewer problems arising.
- Fixed bugs where various macros ((subarray:), (shuffled:), (rotated:), (datavalues:), (datamap:), (dataset:)) would end up passing nested data structures by reference (which shouldn't be allowed in Harlowe code). For instance, if you did (set: $b to (rotated: 1, 0, $a)), where $a is an array, then modifying values inside $b's 1st would also modify $a.
- Fixed a bug where setting custom values in a datamap returned by (passage:) would save the data in all subsequent identical (passage:) datamaps. (For instance, (set: (passage:'A')'s foo to 1) would cause all future datamaps produced by (passage:'A') to have a "foo" data name containing 1.) The (passage:) macro, like any other built-in macro's return value, is NOT intended as data storage (and, furthermore, is not saved by (savegame:) etc.).
- Fixed the bug where a (goto:) command inside a hook would prevent subsequent commands inside the hook from running, while subsequent commands outside it would still continue - for instance, (if:true)[(go-to:'flunk')](set:$a to 2) would still cause the (set:) command to run.
- Fixed the bug where (current-time:) wouldn't pad the minutes value with a leading 0 when necessary.
- Fixed the bug where referring to a variable multiple times within a single (set:) command, like (set: $a to 1, $b to $a), wouldn't work as expected.
- The "pulse" transition (provided by (transition:)) now gives its attached hook the display:inline-block CSS property for the duration of the transition. This fixes a bug where block HTML elements inside such hooks would interfere with the transition animation.
- Revision changers ((replace:), (append:), (prepend:)) that use hook names can now work when they're stored in a variable and used in a different passage. So, running (set: $x to (replace:?1)) in one passage and $x[Hey] in the next will work as expected.
- Differing revision changers can be added together - (append: ?name) + (prepend: ?title), for instance, no longer produces a changer which only prepends to both hooks.
- Fixed various mistakes or vagaries in numerous error messages.

Alterations

Removed behaviour

- In order to simplify the purpose of hook names such as ?room, you can no longer convert them to strings, (set:) their value, (set:) another variable to them, or use them bare in passage text. The (replace:) macro, among others, should be used to achieve most of these effects.
- Using contains and is in on numbers and booleans (such as 12 contains 12) will now produce an error. Formerly, doing so would test whether the number equalled the other value. (The rationale for this was that, since the statement "a" contains "a" is the same as "a" is "a", then so should it be for numbers and booleans, which arguably "contain" only themselves. However, this seemed to be masking certain kinds of errors when incorrect or uninitialised variables or properties were used.)
- Now, various macros ((range:), (subarray:), (substring:), (rotated:) etc.) which require integers (positive or negative whole numbers) will produce errors if they are given fractional numbers.
- It is now an error to alter data structures that aren't in variables - such as (set: (a:)'s 1st to 1) or (set: (passage:)'s name to "X") - because doing so accomplishes nothing.
- Attaching invalid values to hooks, such as (either:"String")[text], (a:2,3,4)[text] or (set: $x to 1) $x[text], will now result in an error instead of printing both the value and the hook's contents.
- Writing a URL in brackets, like (http://...), will no longer be considered an invalid macro call. (To be precise, neither will any macro whose : is immediately followed by a /, so other protocol URLs are also capable of being written.)

Markup

- Now, if you write [text] by itself, it will be treated as a hook, albeit with no name (it cannot be referenced like ?this) and no attached changer commands. This, I believe, simplifies what square brackets "mean" in passage prose. Incidentally, temporary variables (see below) can be (set:) inside nameless unattached hooks without leaking out, so they do have some semantic meaning.
- Now, you can attach changer macros to nametagged hooks: (if: true) |moths>[Several moths!], for instance, is now valid. However, as with all hooks, trying to attach plain data, such as a number or an array, will cause an error.
- Hook-attached macros may now have whitespace and line breaks between them and their hooks. This means that (if: $x) [text] and such are now syntactically acceptable - the whitespace is removed, and the macro is treated as if directly attached. (This means that, if after a macro call you have plain passage text that resembles a hook, you'll have to use the verbatim markup to keep it from being interpreted as such.)

Code

- Now, when given expressions such as $a < 4 and 5, where and or or joins a non-boolean value with a comparison operation (>, <=, is, contains, etc.), Harlowe will now infer that you meant to write $a < 4 and it < 5, and treat the expression as that, instead of producing an error.
This also applies to expressions like $a and $b < 5, which is inferred to be 5 > $a and it > $b. This is a somewhat risky addition, but removes a common pitfall for new authors in writing expressions. (Observe that the above change does not apply when and or or joins a boolean - expressions like $a < 4 and $visitedBasement, where the latter variable contains a boolean, will continue to work as usual.)
- However, this is forbidden with is not, because the meaning of expressions like $a is not 4 and 5, or $a is not 4 or 5, is ambiguous in English, and thus error-prone. So, you'll have to write $a is not 4 and is not 5 as usual.
- Now, when working with non-positive numbers as computed indexes (such as $array's (-1)), Harlowe no longer uses 0 for last, -1 for 2ndlast, and so forth - instead, -1 means last, -2 means 2ndlast, and using 0 produces an error. (So, "Red"'s (-1) produces "d", not "e".)
- Now, you can optionally put 'is' at the start of inequality operators - you can write $a is < 3 as a more readable alternative to $a < 3. Also, $a is not > 3 can be written as well, which negates the operator (making it behave like $a is <= 3).
- Now, trying to use the following words as operators will result in an error message telling you what the correct operator is: =>, =<, gte, lte, gt, lt, eq, isnot, neq, are, x.
- Passage links can now be used as values inside macros - (set: $x to [[Go down->Cellar]]) is now valid. You may recall that passage links are treated as equivalent to (link-goto:) macro calls. As such, (set: $x to [[Go down->Cellar]]) is treated as identical to (set: $x to (link-goto:"Go down","Cellar")).
- Revision macros such as (replace:), (append:) and (prepend:) can now accept multiple values: (replace:?ape, ?hen), for instance, can affect both hooks equally, and (replace:'red', 'green') can affect occurrences of either string.
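As a rough illustrative sketch of these operator changes (the $gold variable and passage text here are invented for the example, not part of the release):

```
(set: $gold to 3)
(if: $gold < 4 and 5)[You have a little gold.] <!-- inferred as: $gold < 4 and it < 5 -->
(if: $gold is < 4)[Still affordable.]          <!-- same as: $gold < 4 -->
(set: $exit to [[Go down->Cellar]])            <!-- a passage link stored as a value -->
$exit
```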
- Now, adding two (append:) or (prepend:) macros which target the same hook, such as (append:?elf) + (append:?elf), no longer creates a changer that appends/prepends to that same hook twice.
- Hook names, even added together, can now be recognised as the same by the is operator if they target the same hooks (including sub-elements).
- The (move:) macro now accepts multiple into values, like (put:).
- The (count:) macro now accepts multiple data values, and will count the total occurrences of every value. For instance, (count: "AMAZE", "A", "Z") produces 3.
- Now, debug-header tagged passages are run after header tagged passages in debug mode, for consistency with the order of debug-startup and startup.
- Link macros like (link-replace:) will now produce an error when given an empty string.

HTML/CSS

- The default Harlowe colour scheme is now white text on black, in keeping with SugarCube and Sugarcane, rather than black text on white. The light colour scheme can be reinstated by putting (enchant: ?page, (text-colour:black)+(background:white)) in a passage with the header tag.
- The <tw-story> element is now kept inside whatever element originally enclosed it, instead of being moved to inside <html>.
- Now, the default CSS applies the default Harlowe font (Georgia) to the <tw-story> element instead of html - so, to override it, write CSS font properties for tw-story (which is what most custom CSS should be altering now) instead of html or body.
- Fixed a bug where the "Story stylesheet" <style> element was attached between <head> and <body>. This should have had no obvious effects in any browser, but was untidy anyway.
- Altered the CSS of <tw-story> to use vertical padding instead of vertical margins, and increased the line-height slightly.
- Altered the CSS of <h1>, <h2>, <h3>, <h4>, <h5> and <h6> elements to have a slightly lower margin-top.
- Now, <tw-passage> elements (that is, passages' HTML elements) have a tags attribute containing all of the passage's tags in a space-separated list.
This allows such elements to be styled using author CSS, or selected using author Javascript, in a manner similar to Twine 1.4 (but using the [tags~= ]selector instead of [data-tags~= ]). - Removed the CSS directives that reduce the font size based on the player's device width, because this functionality seems to be non-obvious to users, and can interfere with custom CSS in an unpleasant way. - Now, hooks and expressions which contain nothing (due to, for instance, having a false (if:)attached) will now have display:none, so that styling specific to their borders, etc. won't still be visible. Additions Markup - Added column markup, which is, like aligner markup, a special single-line token indicating that the subsequent text should be separated into columns. They consist of a number of |marks, indicating the size of the column relative to the other columns, and a number of =marks surrounding it, indicating the size of the column's margins in CSS "em" units (which are about the width of a capital M). Separate each column's text with tokens like |===and ==||, and end them with a final |==|token to return to normal page layout. - Now, it's possible to attach multiple changers to a single hook by joining them with +, even outside of a macro - (text-style:'bold')+(align:'==>')+$robotFont[Text]will apply (text-style:'bold'), (align:'==>')and the changer in the variable $robotFont, as if they had been added together in a single variable. Again, you can put whitespace between them – (text-style:'bold') + (align:'==>') + $robotFont [Text]is equally valid, and causes the whitespace between each changer and the hook itself to be discarded. - Now, you can make hooks which are hidden when the passage is initially displayed, to be revealed when a macro (see below) is run. Simply replace the <and >symbol with a (or ). For example: |h)[This hook is hidden]. (You can think of this as being visually similar to comic speech balloons vs. thought balloons.) 
This is an alternative to the revision macros, and can be used in situations where the readability of the passage prose is improved by having hidden hooks alongside visible text, rather than separate (replace:)hooks. (Of course, the revision macros are still useful in a variety of other situations, including headerpassages.) Code - Arrays, strings and datasets now have special data names, any, and all, which can be used with comparison operators like contains, isand <=to compare every value inside them. For instance, you can now write (a:1,2,3) contains all of (a:2,3), or any of (a:3,2) <= 2, or "Fox" contains any of "aeiou"(all of which are true). You can't use them anywhere else, though - (set: all of $a to true)is an error (and wouldn't be too useful anyway). - Now, certain hard-coded hook names will also select elements of the HTML page, letting you style the page using enchantment macros. ?pageselects the page element (to be precise, the <tw-story>), ?passageselects the passage element (to be precise, the <tw-passage>), ?sidebarselects the passage's sidebar containing undo/redo icons ( <tw-sidebar>), and ?linkselects any links in the passage. (Note that if you use these names for yourself, such as |passage>[], then they will, of course, be included in the selection.) - Added temporary variables, a special kind of variable that only exists inside the passage or hook in which they're (set:). Outside of the passage or hook, they disappear. Simply use _instead of $as the sigil for variables - write (set: _a to 2), (if: _a > 1), etc. Their main purpose is to allow you to make "reusable" Twine code - code which can be pasted into any story, without accidentally overwriting any variables that the story has used. (For instance, suppose you had some code which uses the variable $afor some quick computation, but you pasted it into a story that already used $afor something else in another passage. If you use a temporary variable _ainstead, this problem won't occur.) 
- Also note that temp variables that are (set:) inside hooks won't affect same-named temp variables outside them: (set: _a to 1) |hook>[(set: _a to 2)] will make _a be 2 inside the hook, but remain as 1 outside of it.
- Lambdas are a new data type - they are, essentially, user-created functions. You can just think of them as "data converters" - reusable instructions that convert values into different values, filter them, or join multiple values together. They use temporary variables (which only exist inside the lambda) to hold values while computing them, and this is shown in their syntax. An example is _a where _a > 2, which filters out data that's smaller than 2, or _name via "a " + _name, which converts values by prepending "a " to them. Various new macros use these to easily apply the same conversion to sequences of data.
- Colour values now have read-only data names: r, g and b produce the red, green and blue components of the colour (from 0 to 255), and h, s and l produce, in order, the hue (in degrees), and the saturation and lightness percentages (from 0 to 1).
- You can now access sub-elements in hook names, as if they were an array: (click: ?red's 1st) will only affect the first such named hook in the passage, for instance, and you can also specify an array of positions, like ?red's (a:1,3,5). Unlike arrays, though, you can't access their length, nor can you spread them with ....
- You can now add hook names together to affect both at the same time: (click: ?red + ?blue's 1st) will affect all hooks tagged <red|, as well as the first hook tagged <blue|.
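For instance, the hook sub-element and addition syntax might be used like so (a sketch - the hook names and text are invented for illustration):

```
|red>[Crimson] |red>[Scarlet] |blue>[Azure]
(click: ?red's 2nd)[You picked the second red hook.]
(click: ?red + ?blue's 1st)[Affects every <red| hook, plus the first <blue| hook.]
```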
- Added (for:), a command that repeats the attached hook, using a lambda to set a temporary variable to a different value on each repeat. It uses "where" lambdas, and accepts the "each" shorthand for where true, which accepts every value. (for: each _item, ...$array) [You have the _item]prints "You have the " and the item, for each item in $array. - Added (find:), which uses a lambda to filter a sequence of values, and place the results in an array. For instance, (find: _item where _item's 1st is "A", "Arrow", "Shield", "Axe", "Wand")produces the array (a: "Arrow", "Axe"). (This macro is similar to Javascript's filter()array method.) - Added (altered:), which takes a lambda as its first value, and any number of other values, and uses the lambda to convert the values, placing the results in an array. For instance, (altered: _material via _material + " Sword", "Iron", "Wood", "Bronze", "Plastic")will create an array (a:"Iron Sword", "Wood Sword", "Bronze Sword", "Plastic Sword"). (This macro is similar to Javascript's map()array method.) - Added (all-pass:), (some-pass:)and (none-pass:), which check if the given values match the lambda, and return trueor false. (all-pass: _a where _a > 2, 1, 3, 5)produces false, (some-pass: _a where _a > 2, 1, 3, 5)produces true, and (none-pass: _a where _a > 2, 1, 3, 5)produces false. - Added (folded:), which is used to combine many values into one (a "total"), using a lambda that has a makingclause. (folded: _a making _total via _total + "." + _a, "E", "a", "s", "y")will first set _totalto "E", then progressively add ".a", ".s", and ".y" to it, thus producing the resulting string, "E.a.s.y". - Added (show:), a command to show a hidden named hook (see above). (show: ?secret)will show all hidden hooks named |secret). This can also be used to reveal named hooks hidden with (if:), (else-if:), (else:)and (unless:). - Added (hidden:), which is equivalent to (if:false), and can be used to produce a changer to hide its attached hook. 
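A rough combined sketch of these lambda macros and the hidden-hook reveal (the $backpack variable, hook name and passage text are invented for illustration):

```
(set: $backpack to (a: "Axe", "Apple", "Rope"))
(for: each _item, ...$backpack)[You have the _item. ]
(set: $aItems to (find: _item where _item's 1st is "A", ...$backpack))
<!-- $aItems is now (a: "Axe", "Apple") -->
|spoiler)[The cellar key is under the mat.]
(link: "Search the room")[(show: ?spoiler)]
```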
- Added the aliases (dm:) and (ds:) for (datamap:) and (dataset:), respectively.
- Added (lowercase:) and (uppercase:), which take a string and convert it to all-lowercase or all-uppercase, as well as (lowerfirst:) and (upperfirst:), which only convert the first non-whitespace character in the string and leave the rest untouched.
- Added (words:), which takes a string and produces an array of the words (that is, the sequences of non-whitespace characters) in it. For instance, (words: "2 big one's") produces (a: "2", "big", "one's").
- Added (repeated:), which creates an array containing the passed values repeated a given number of times. (repeated: 3, 1,2,0) produces (a: 1,2,0,1,2,0,1,2,0).
- Added (interlaced:), which interweaves the values of passed-in arrays. (interlaced: (a: 'A','B','C','D'),(a: 1,2,3)) is the same as (a: 'A',1,'B',2,'C',3). (For functional programmers, this is just a flat zip.) This can be useful alongside the (datamap:) macro.
- Added (rgb:), (rgba:), (hsl:) and (hsla:), which produce colour values, similar to the CSS colour functions. (rgb:252,180,0) produces the colour #fcb400, and (hsl:150,0.2,0.6) produces the colour #84ad99.
- Added (dataentries:), which complements (datanames:) and (datavalues:) by producing, from a datamap, an array of the datamap's name-value pairs. Each pair is a datamap with "name" and "value" data, which can be examined using the lambda macros.
- Added (hover-style:), which, when given a style-altering changer, like (hover-style:(text-color:green)), makes its style only apply when the hook or expression is hovered over with the mouse pointer, and removed when hovering off.
- Now, you can specify "none" as a (text-style:) and produce a changer which, when added to other (text-style:) combined changers, removes their styles.

1.2.4 changes:

Bugfixes

- (random:) now no longer incorrectly errors when given a single whole number instead of two.
- (alert:), (open-url:), (reload:) and (goto-url:) now return empty strings rather than the non-Harlowe value undefined (or, for (open-url:), a Javascript Window object). This differs slightly from 2.0, which returns more useful command values.
- Additionally, backported the following fix from 2.0.0: the bug where (current-time:) wouldn't pad the minutes value with a leading 0 when necessary.

1.2.3 changes:

Bugfixes

- Fixed a bug where the "outline" (textstyle:) option didn't have the correct text colour when no background colour was present, making it appear solid black.
- Fixed a bug where changer commands couldn't be added together more than once without the possibility of some of the added commands being lost.
- Fixed a bug where (pow:) only accepted 1 value instead of 2, and, moreover, that it could return the Javascript value NaN, which Harlowe macros shouldn't be able to return.
- Fixed a bug where the verbatim markup couldn't enclose a ] inside a hook, a } inside the collapsing markup, or any of the formatting markup's closing tokens immediately after an opening token.
- Fixed a bug where the Javascript in the resulting HTML files contained the Unicode non-character U+FFFE, causing encoding problems when the file is hosted on some older servers.

Alterations

- Now, setting changer commands into variables no longer prevents the (save-game:) command from working.

1.2.2 changes:

Bugfixes

- Fixed a bug where the (textstyle:) options "shudder", "rumble" and "fade-in-out", as well as all of (transition:)'s options, didn't work at all.
- Fixed a long-standing bug where (mouseover:) affected elements didn't have a visual indicator that they could be moused-over (a dotted underline).
- Fixed the (move:) macro corrupting past turns (breaking the in-game undo functionality) when it deletes array or datamap items.
- Fixed the <=== (left-align) markup token erasing the next syntactic structure to follow it.
- Fixed a bug where attempting to print datamaps using (print:)produced a Javascript error. - Fixed a long-standing bug where spreading ...datasets did not, in fact, arrange their values in sort order, but instead in parameter order. - Fixed a long-standing bug where a string containing an unmatched )inside a macro would abruptly terminate the macro. Alterations - Giving an empty string to a macro that affects or alters all occurrences of the string in the passage text, such as (replace:)or (click:), will now result in an error (because it otherwise won't affect any part of the passage). 1.2.1 changes: Bugfix - Fixed a bug where (if:), (unless:)and (else-if:)wouldn't correctly interact with subsequent (else-if:)and (else:)macro calls, breaking them. (Usage with boolean-valued macros such as (either:)was not affected.) 1.2.0 changes: Bugfixes - Fixed a bug where links created by (click:)not having a tabindex, and thus not being selectable with the tab key (a big issue for players who can't use the mouse). - Fixed a bug where (align: "<==")couldn't be used at all, even inside another aligned hook. - Fixed a bug where errors for using changer macros (such as (link:)) detached from a hook were not appearing. - Fixed a bug where (align:)commands didn't have structural equality with each other - (align:"==>")didn't equal another (align:"==>"). - Fixed (move:)'s inability to delete items from arrays. (move: ?a into $a)will now, after copying their text into $a, clear the contents of all ?ahooks. Alterations - It is now an error to use (set:)or (put:)macros, as well as toconstructs, in expression position: (set: $a to (set: $b to 1))is now an error, as is (set: $a to ($b to 1)). - Now, setting a markup string to a ?hookSetwill cause that markup to be rendered in the hookset, instead of being used as raw text. For instance, (set: ?hookSet to "//Golly//")will put "Golly" into the hookset, instead of "//Golly//". - Also, it is now an error to set a ?hookSetto a non-string. 
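For instance, the hookset markup-rendering alteration above might be used like this (the hook name is invented for the example):

```
|greeting>[Hello]
(set: ?greeting to "//Golly//") <!-- the hook now displays an italicised "Golly" -->
```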
(if:)/ (unless:)/ (elseif:)/ (else:)now evaluate to changer commands, rather than booleans. This means, among other things, that you can now compose them with other changers: (set: $a to (text-style: "bold") + (if: $audible is true)), for instance, will create a style that is bold, and also only appears if the $audible variable had, at that time, been true. (Note: Changing the $audible variable afterward will not change the effect of the $astyle.) Additions - Now, authors can supply an array of property names to the "'s" and "of" property syntax to obtain a "slice" of the container. For instance, (a: 'A','B','C')'s (a: 1,2)will evaluate to a subarray of the first array, containing just 'A' and 'B'. - As well as creating subarrays, you can also get a slice of the values in a datamap - in effect, a subarray of the datamap's datavalues. You can do (datamap:'Hat','Beret','Shoe','Clog','Sock','Long')'s (a: 'Hat','Sock')to obtain an array (a: 'Beret','Long'). - Additionally, you can obtain characters from a string - "abcde"'s (a: 2,4) becomes the string "bd". Note that for convenience, slices of strings are also strings, not arrays of characters. - Combined with the (range:)macro, this essentially obsoletes the (subarray:)and (substring:)macros. However, those will remain for compatibility reasons for now. (link-reveal:)is similar to (link:)except that the link text remains in the passage after it's been clicked - a desirable use-case which is now available. The code (link-reveal:"Sin")[cerity]features a link that, when clicked, makes the text become Sincerity. Note that this currently necessitates that the attached hook always appear after the link element in the text, due to how the attaching syntax works. (link-repeat:)is similar to the above as well, but allows the link to be clicked multiple times, rerunning the markup and code within. - Also added (link-replace:)as an identical alias of the current (link:)macro, indicating how it differs from the others. 
1.1.1 changes: Bugfixes - Fixed a bug where hand-coded <audio>elements inside transitioning-in passage elements (including the passage itself) would, when the transition concluded, be briefly detached from the DOM, and thus stop playing. - Now, save files should be properly namespaced with each story's unique IFID - stories in the same domain will no longer share save files. - Fixed a bug where (live:)macros would run one iteration too many when their attached hook triggered a (goto:). Now, (live:)macros will always stop once their passage is removed from the DOM. - Fixed a bug where <and a number, followed by >, was interpreted as a HTML tag. In reality, HTML tag names can never begin with numbers. - Fixed a bug where backslash-escapes in string literals stopped working (so "The \"End\""again produces the string The "End"). I don't really like this old method of escaping characters, because it hinders readability and isn't particularly scalable - but I let it be usable in 1.0.1, so it must persist until at least version 2.0.0. - Fixed a bug, related to the above, where the link syntax would break if the link text contained double-quote marks - such as [["Stop her!"->Pursue]]. 1.1.0 changes: Bugfixes - Fixed a bug where the arithmetic operators had the wrong precedence (all using left-to-right). - Fixed a somewhat long-standing bug where certain passage elements were improperly given transition attributes during rendering. - Fixed a bug where lines of text immediately after bulleted and numbered lists would be mysteriously erased. - Now, the 0.marker for the numbered list syntax must have at least one space after the .. Formerly zero spaces were permitted, causing 0.15etc. to become a numbered list. - Fixed a bug in the heading syntax which caused it to be present in the middle of lines rather than just the beginning. - Now, if text markup potentially creates empty HTML elements, these elements are not created. 
- Fixed nested list items in both kinds of list markup. Formerly, writing nested lists (with either bullets or numbers) wouldn't work at all. - Fixed a bug where the collapsed syntax wouldn't work for runs of just whitespace. - Also, explicit <br>s are now generated inside the verbatim syntax, fixing a minor browser issue where the text, when copied, would lack line breaks. - Changed the previous scrolling fix so that, in non-stretchtext settings, the page scrolls to the top of the <tw-story>'s parent element (which is usually, but not always, <body>) instead of <html>. - Fixed a bug where the (move:) macro didn't work on data structures with compiled properties (i.e. arrays). - Now, the error message for NaN computations (such as (log10: 0)) is more correct. - Now, if (number:)fails to convert, it prints an error instead of returning NaN. - Now, the error message for incorrect array properties is a bit clearer. - Fixed a bug where objects such as (print:)commands could be +'d (e.g. (set: $x to (print: "A") + (print: "B"))), with unfavourable results. (substring:)and (subarray:)now properly treat negative indices: you can use them in both positions, and in any order. Also, they now display an error if 0 or NaN is given as an index. - Fixed a bug where the 2ndlast, 3rdlastetc. sequence properties didn't work at all. - Fixed a bug where datamaps would not be considered equal by is, is inor containsif they had the same key/value pairs but in a different order. From now on, datamaps should be considered unordered. - Fixed the scroll-to-top functionality not working on some versions of Chrome. - Now, if the <tw-storydata>element has an incorrect startnode attribute, the <tw-passagedata>with the lowest pid will be used. This fixes a rare bug with compiled stories. - Fixed a (goto:)crash caused by having a (goto:)in plain passage source instead of inside a hook. - Optimised the TwineMarkup lexer a bit, improving passage render times. 
- Now, the style changer commands do not wrap arbitrary HTML around the hooks' elements, but by altering the <tw-hook>'s style attribute. This produces flatter DOM trees (admittedly not that big a deal) and has made several macros' behaviour more flexible (for instance, (text-style:"shadow") now properly uses the colour of the text instead of defaulting to black). - Now, during every (set:)operation on a TwineScript collection such as a datamap or array, the entire collection is cloned and reassigned to that particular moment's variables. Thus, the collection can be rolled back when the undo button is pressed. - Fixed some bugs where "its" would sometimes be incorrectly parsed as "it" plus the text "s". - Fixed a bug where enchantment event handlers (such as those for (click:)) could potentially fail to load. - Fixed a bug where the verbatim syntax (backticks) didn't preserve spaces at the front and end of it. Alterations - Altered the collapsing whitespace syntax ( {and })'s handling of whitespace considerably. - Now, whitespace between multiple invisible elements, like (set:)macro calls, should be removed outright and not allowed to accumulate. - It can be safely nested inside itself. - It will also no longer collapse whitespace inside macros' strings, or HTML tags' attributes. - TwineScript strings are now Unicode-aware. Due to JavaScript's use of UCS-2 for string indexing, Unicode astral plane characters (used for most non-Latin scripts) are represented as 2 characters instead of a single character. This issue is now fixed in TwineScript: strings with Unicode astral characters will now have correct indexing, length, and (substring:)behaviour. - Positional property indices are now case-insensitive - 1STis the same as 1st. (if:)now only works when given a boolean - if you had written (if: $var)and $varis a number or string, you must write $var is not 0or $var's length > 0instead. 
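A sketch of the stricter (if:) behaviour (the $coins variable is invented for the example):

```
(set: $coins to 3)
(if: $coins is not 0)[You can afford it.]
<!-- (if: $coins) alone is now an error, since $coins isn't a boolean -->
```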
(text:)now only works on strings, numbers, booleans and arrays, because the other datatypes cannot meaningfully be transformed into text. - Now, you can't use the and, orand notoperators on non-boolean values (such as (if: ($a > 4) and 3)). So, one must explicitly convert said values to boolean using is not 0and such instead of assuming it's boolean. - Now, division operators ( /and %) will produce an error if used to divide by zero. - Reordered the precedence of contains- it should now be higher than is, so that e.g. (print: "ABC" contains "A" is true)should now work as expected. - Now, giving a datamap to (print:)will cause that macro to print out the datamap in a rough HTML <table>structure, showing each name and value. This is a superior alternative to just printing "[object Object]". - Now, variables and barename properties (as in $var's property) must have one non-numeral in their name. This means that, for instance, $100is no longer regarded as a valid variable name, but $100mstill is. - It is now an error if a (datamap:)call uses the same key twice: (datamap: 2, "foo", 2, "bar")cannot map both "foo" and "bar" to the number 2. - Now, datamaps may have numbers as data names: (datamap: 1, "A")is now accepted. However, due to their differing types, the number 1and the string "1"are treated as separate names. - To waylay confusion, you are not permitted to use a number as a name and then try to use its string equivalent on the same map. For instance, (datamap: 2, "foo", "2", "bar")produces an error, as does (print: (datamap: 2, "foo")'s '2')) - HTML-style comments <!--and -->can now be nested, unlike in actual HTML. - The heading syntax no longer removes trailing #characters, or trims terminating whitespace. This brings it more into line with the bulleted and numbered list syntax. - Changed (textstyle:)and (transition:)to produce errors when given incorrect style or transition names. New Features - Added computed property indexing syntax. 
Properties on collections can now be accessed via a variant of the possessive syntax: $a's (expression).
- Using this syntax, you can supply numbers as 1-indexed indices to arrays and strings. So, "Red"'s $i, where $i is 1, would be the same as "Red"'s 1st. Note, however, that if $i was the string "1st", it would also work too - but not if it was just the string "1".
- Links and buttons in compiled stories should now be accessible via the keyboard's Tab and Enter keys. <tw-link>, <tw-icon> and other clickable elements now have a tabindex attribute, and Harlowe sets up an event handler that allows them to behave as if clicked when the Enter key is pressed.
- Added 'error explanations', curt sentences which crudely explain the type of error it is, which are visible as fold-downs on each error message.
- TwineScript now supports single trailing commas in macro calls. (a: 1, 2,) is treated the same as (a: 1,2). This is in keeping with JS, which allows trailing commas in array and object literals (but not calls, currently).
- Added of property indexing as a counterpart to possessive (x's y) indexing.
- Now, you can alternatively write last of $a instead of $a's last, or passages of $style instead of $style's passages. This is intended to provide a little more flexibility in phrasing/naming collection variables - although whether it succeeds is in question.
- This syntax should also work with computed indexing ((1 + 2) of $a) and it indexing (1st of it).
- Added (savegame:) and (loadgame:). (savegame:) saves the game session's state to the browser's local storage. It takes 2 values: a slot name string (you'll usually just use a string like "A" or "B") and a filename (something descriptive of the current game's state). Example usage: (savegame: "A", "Beneath the castle catacombs"). (savegame:) … I must apologise for this, and hope to eliminate this problem in future versions.
`(savegame:)` evaluates to a boolean: `true` if it succeeds and `false` if it fails (because the browser's local storage is disabled for some reason). You should write something like `(if: (savegame: "A", "At the crossroads") is false)[The game could not be saved!]` to provide the reader with an apology if `(savegame:)` fails.
- `(loadgame:)` takes one value - a slot name such as that provided to `(savegame:)` - and loads a game from that slot, replacing the current game session entirely. Think of it as a `(goto:)` - if it succeeds, the passage is immediately exited.
- `(savedgames:)` provides a datamap mapping the names of full save slots to the names of save files contained within. The expression `(savedgames:) contains "Slot name"` will be `true` if that slot name is currently used. The filename of a file in a slot can be displayed thus: `(print: (savedgames:)'s "Slot name")`.
- `<script>` tags in passage text will now run. However, their behaviour is not well-defined yet - it's unclear even to me what sort of DOM they would have access to.
- `<style>` tags in passage text can now be used without needing to escape their contents with the verbatim syntax (backticks).
- Added `(passage:)` - similar to the Twine 1 function, it gives information about the current passage. A datamap, to be precise, containing a `name` string, a `source` string, and a `tags` array of strings. `(print: (passage:)'s name)` prints the name, and so forth.
- But, providing a string to `(passage:)` will provide information about the passage with that name - `(passage: "Estuary")` provides a datamap of information about the Estuary passage, or an error if it doesn't exist.
- Added `(css:)` as a "low-level" solution for styling elements, which is essentially the same as a raw HTML `<span style='...'>` tag, but can be combined with other changer commands. I feel obliged to offer this to provide some CSS-familiar users some access to higher functionality, even though it's not intended for general use in place of `(text-style:)` or whatever.
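Putting the save-slot macros above together, a passage might check and report on slot "A" like this (a sketch composed only of the macros described here; the slot and filename strings are examples):

```
(if: (savegame: "A", "At the crossroads") is false)[
  The game could not be saved!
]
(if: (savedgames:) contains "A")[
  Last save: (print: (savedgames:)'s "A")
]
```

A `(loadgame: "A")` elsewhere would then restore that session, exiting the current passage immediately if it succeeds.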
- Added `(align:)`, a macro form of the aligner syntax. It accepts a string containing an ASCII arrow of the same type that makes up the syntax (`'==>'`, `'=><=='`, etc).
- Added special behaviour for passages tagged with `footer`, `header` or `startup`: their code will be automatically `(display:)`ed at the start or end of passages, allowing you to set up code actions (like `(click: ?switch)` etc.) or give passages a textual header. `header` passages are prepended to every passage, `footer` passages are appended; `startup` passages are only prepended to the first passage in the game.
- Also added debug mode versions of these tags: `debug-header`, `debug-footer` and `debug-startup` are only active during debug mode.
- Reinstated the Twine 1 escaped line ending syntax: ending a line with `\` will cause it and the line break to be removed.
- Also, an extra variant has been added: beginning a line with `\` will cause it and the previous line break to be removed. The main purpose of this addition is to let you begin multi-line hooks with `[\` and end them with `\]`, letting them fully occupy their own lines.
- Added `(shuffled:)`, which is identical to `(array:)`, except that it places the provided items in a random order. (You can shuffle an existing array by using the spread syntax, `(shuffled: ...$arr)`, of course. To avoid errors where the spread syntax is not given, `(shuffled:)` requires two or more arguments.)
- Added `(sorted:)`, which is similar to `(array:)`, except that it requires string elements, and orders the strings in alphanumeric sort order, rather than the order in which they were provided.
- Note that this is not strict ASCII order: "A2" is sorted before "A11", and "é" is sorted before "f". However, it still uses English locale comparisons (for instance, in Swedish "ä" is sorted after "z", whereas in English and German it comes before "b"). A means of changing the locale should be provided in the future.
- Added `(rotated:)`, which takes a number, followed by several values, and rotates the values' positions by the number. For instance, `(rotated: 1, 'Bug','Egg','Bog')` produces the array `(a: 'Bog','Bug','Egg')`. Think of it as moving each item to its current position plus the number (so, say, the item in 1st goes to 1 + 1 = 2nd). Its main purpose is to transform arrays, which can be provided using the spread `...` syntax.
- Added `(datanames:)`, which takes a single datamap, and returns an array containing all of the datamap's names, alphabetised.
- Added `(datavalues:)`, which takes a single datamap, and returns an array containing all of the datamap's values, alphabetised by the names they correspond to.
- It is now an error to begin a tagged hook (such as `(if: $a)[`) and not have a matching closing `]`.

1.0.1 changes:

Bugfixes

- The story stylesheet and JavaScript should now be functioning again.
- Fixed a bug where `(display:)`ed passage code wasn't unescaped from its HTML source.
- Fixed a bug preventing pseudo-hooks (strings) being used with macros like `(click:)`. The bug prevented the author from, for instance, writing `(click: "text")` to apply a click macro to every instance of the given text.
- Fixed a bug where string literal escaping (e.g. `'Carl\'s Fate'`) simply didn't work.
- Fixed a bug where quotes couldn't be used inside the link syntax - `[["Hello"]]` etc. now works again.
- Fixed a markup ambiguity between the link syntax and the hook syntax. This problem primarily broke links nested in hooks, such as `[[[link]]]<tag|`.
- Fixed `(reload:)` and `(gotoURL:)`, which previously errored regardless of input.
- Fixed a bug where assigning from a hookset to a variable, such as `(set: $r to ?l)`, didn't work right.
- Fixed a bug where `(else-if:)` didn't work correctly with successive `(else:)`s.
- Fixed a bug where `<tw-expression>`s' `js` attrs were incorrectly being unescaped twice, thus causing macro invocations with `<` symbols in them to break.
- Fixed a bug preventing the browser window from scrolling to the top on passage entry.
- Fixed a bug where the header syntax didn't work on the first line of a passage.

Alterations

- Characters in rendered passages are no longer individually wrapped in `<tw-char>` elements, due to it breaking RTL text. This means CSS that styles individual characters currently cannot be used.
- Eliminated the ability to use property reference outside of macros - you can no longer do `$var's 1st`, etc. in plain passage text, without wrapping a `(print:)` around it.
- You can no longer attach text named properties to arrays using property syntax (e.g. `(set: $a's Garply to "grault")`). Only `1st`, `2nd`, `last`, etc. are allowed.
- Altered `is`, `is in` and `contains` to use compare-by-value. Now, instead of using JS's compare-by-reference semantics, TwineScript compares containers by value - that is, by checking if their contents are identical. This brings them into alignment with the copy-by-value semantics used by `(set:)` and such.

New Features

- Added the ability to property-reference arbitrary values, not just variables. This means that you can now use `(history:)'s last`, or `"Red"'s 1st` as expressions, without having to put the entity in a variable first.

Compilation

Harlowe is a story format file, called `format.js`, which is used by Twine 2. The Twine 2 program bundles this format with authored story code and assets to produce standalone HTML games.

Use these commands to build Harlowe:

- `make`: As the JS files can be run directly in a browser without compilation (as is used by the test suite), this only lints the JS source files and builds the CSS file.
- `make jshint`: Lints the JS source files.
- `make css`: Builds the CSS file, `build/harlowe-css.css`, from the Sass sources. This is an intermediate build product whose contents are included in the final `format.js` file.
- `make docs`: Builds the official documentation file, `dist/harloweDocs.html`, deriving macro and markup definitions from specially-marked comments in the JS files.
- `make format`: Builds the Harlowe `format.js` file.
- `make all`: Builds the Harlowe `format.js` file, the documentation, and an example file, `dist/exampleOutput.html`, which is a standalone game that displays "Success!" when run, to confirm that the story format is capable of being bundled by Twine 2 correctly.
- `make clean`: Deletes the `build` and `dist` directories and their contents.
- `make dirs`: Produces empty `build` and `dist` directories, which usually shouldn't be necessary.
https://bitbucket.org/_L_/harlowe/
Episode 33 · November 20, 2014

Learn how to use OmniAuth and omniauth-twitter to let your users authorize and connect to the Twitter API using their Twitter account with your Rails app.

In this episode, we're going to talk about using OmniAuth Twitter to connect Twitter to a Rails application and allow you to build a very simple Twitter client in your browser using Rails. I've already scaffolded up this simple application, and we basically have a user and a body for each of these tweets. We're going to keep track of the user id in our application that will be connected to a Twitter account using OmniAuth, and then we'll also have the tweet body, so we'll cache those and have them saved in your application for quick access. The first thing we need to do is visit the Twitter application management page and fill out the new application form. You want to give it a name and a description, the public website, and most importantly you want to specify this callback url here. What happens here is that when you log in with Twitter, it will send you back to the original application to test it out. That original application will then say: OK, let's look at these credentials; they look secure and correct, so your application can make sure that this person really did sign in with Twitter, and we can verify that they were correctly signed in. OmniAuth Twitter will provide this url here in your application, /auth/twitter/callback, and the 127.0.0.1 is important because you can't use localhost in their configuration, so you can specify that instead of localhost, which is the exact same thing, and make sure that you specify the port that you'll be running your Rails server on locally. Take a look at the developer agreement, agree to that, and then create your Twitter application. Once you've created your Twitter app, you'll want to click over to the "Keys and Access Tokens" tab, so that we can grab the consumer key (API key) and the consumer secret (API secret).
We're going to copy these and put them in our config/secrets.yml file. In here, I'm going to add my Twitter API key and paste that in, and then the other one that we need to add is the Twitter API secret, which we can copy from here. We'll save this, and this will make them accessible to our application when we configure OmniAuth. One quick security tip here: you don't always want to keep your secret keys in your Git repository. If you ever make this open source, or you share it with other people, they'll potentially be able to steal your keys, and that's not a good thing. One way to handle this is to replace these with environment variables like you see here at the bottom, and you can configure your server or Heroku to set those as well. The other thing that you'll probably want to do, if you set them like this, is hop into your terminal and open up the .gitignore file, where you can add config/secrets.yml. Then when you take a look at your git status, you want to make sure that when you add the files in here - if you say git add config - git status shows that it doesn't add config/secrets.yml into your git repository. Of course, we need to paste the omniauth-twitter gem into our Gemfile, and then we need to jump into our terminal and run bundle to install it. Coming back to our editor, we can create the config/initializers/omniauth.rb file and paste in the middleware that OmniAuth injects into your application. OmniAuth provides this OmniAuth::Builder that Rails is told to use, and here we can configure it:

```ruby
# config/initializers/omniauth.rb
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :twitter, Rails.application.secrets.twitter_api_key,
                     Rails.application.secrets.twitter_api_secret
end
```

Let's take a look and see how much we've gotten working so far.
I'm going to restart the Rails server, and then in our browser we can go to localhost:3000/auth/twitter. OmniAuth redirects us, and we get this "Authorize App" page, which tells us that we're authorizing the GoRails app on Twitter to read tweets and see who you follow. This doesn't give us access to follow new people, update the profile, post tweets or any of that stuff, and that's OK - unless you actually want to do that, and I think that's what we want to do in this case. If we come back to our Twitter app and we click on the "Permissions" tab, you want to check the "Read and Write" access level for our application. When we update that and go back to our application at /auth/twitter, this time we should be redirected back, and all of those update and write abilities will have been added to our Twitter account authorization page. Now, once we authorize the app, we'll be taken back to our Rails app with the API keys for this user, which will let us update their profile and post tweets for them. That's pretty cool, and we can now click "Authorize App", because we're going to get the API keys we want. This takes us back to /auth/twitter/callback, which is what we wanted before, and we put that inside of our settings in our GoRails callback url. This has taken us to that same url, and now we have the OAuth token and the verifier, so we can build a callback here in our application to handle that. Inside our config/routes.rb file, we actually need to set up a handler for when Twitter has successfully authenticated and sent us back to the application, so that we can take the response and save those API keys for the user and use Twitter on their behalf. I'm going to paste in a little route here that's a GET route for the auth path, with a changeable provider in it, and then we're going to send that to the sessions controller's create action, which we'll create right now.
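The route narrated here is conventionally a single line (a sketch of the standard OmniAuth callback route; the controller name matches the sessions controller created next):

```ruby
# config/routes.rb -- sketch of the OmniAuth callback route
Rails.application.routes.draw do
  get "/auth/:provider/callback", to: "sessions#create"
end
```

The `:provider` segment is what makes the route "changeable": the same action handles `/auth/twitter/callback` now and, say, `/auth/facebook/callback` later.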
I'm going to edit the app/controllers/sessions_controller.rb file and paste in the example from the OmniAuth README. The first thing they do is call a method that they have created on the user model that accepts the OmniAuth hash; we'll find or create that user, then set the current user in our application to that. This is the actual sign-in part, and then we'll redirect you to the root path. I'm going to clean this up really quick, and we'll say redirect_to root_path. Let's actually talk about creating this user. We need to create the user model in our application, and we need to do a handful of things there, but first, let's just raise an error here so that we can see what is going on in our response using the interactive console in the browser. Let's come back here, go back to our application, redirect ourselves to Twitter again, and authorize the app. When it comes back, we get the exception raised as we expected, and we can play with the auth hash that you see here. This is pretty cool: we can open this up and you'll see that we have an OmniAuth hash with credentials, so we have the user's token and secret. This means that if we present these to the Twitter gem, we can use Twitter on behalf of the user we authenticated as. There's all kinds of other information here: profile image urls, screen names, the description of the user's profile, and all kinds of other little things that are useful for us. Each provider that you connect is different - if you connect Facebook, it will be different than Twitter - so you'll have some generic things that are shared, like the credentials hash, and secrets and tokens, but not all of them have secrets; some of them just have tokens. This hash actually changes between providers, and we have to handle it differently for each. In this case, we're just going to take a look at Twitter.
For a prettier example of that hash of values, you can take a look at the authentication hash example in the omniauth-twitter README, which shows you how all of this information is nested. A lot of the time you're going to look at the provider and the uid, which is basically your user id and is different than your nickname or screen name. The gem automatically parses some of this out to standardize it into the info hash here, but if you want the raw information, you can see that there's a screen_name there, as opposed to nickname here. It knows how to parse these things out. We actually need to generate our user model, and I'm going to create the model here; we need to handle a handful of fields. First, we need the provider, which will be a string. We'll also need the uid, which will also be a string, because sometimes the user ids are not integers - LinkedIn is a good example of that, I think - so you want to make this a string in case you want to add another authentication provider later. We're also going to want a name for the user, so this might be the real name, or it might just be the username from Twitter, like @excid3 or @gorails. We're also going to want the credentials that we saw here; these, in this case, are a token and a secret, so we'll have token and secret fields as well. One last thing you might want is a profile image, and you can grab that from the image key here in the info section. All of these providers should generally have a profile image that they will give you.
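To make the shape of that hash concrete, here is a small hypothetical helper (not from the episode) that pulls out exactly the fields we just listed from an OmniAuth-style auth hash; the keys mirror the user columns we're about to generate, and the sample hash values are made up:

```ruby
# Hypothetical helper: extract just the attributes we persist from an
# OmniAuth-style auth hash. The keys mirror the user columns above.
def omniauth_attributes(auth)
  {
    provider:      auth["provider"],
    uid:           auth["uid"].to_s,   # keep as a string: some providers use non-integer ids
    name:          auth["info"]["nickname"],
    token:         auth["credentials"]["token"],
    secret:        auth["credentials"]["secret"],
    profile_image: auth["info"]["image"],
  }
end

# A made-up auth hash with the same nesting omniauth-twitter documents:
sample = {
  "provider"    => "twitter",
  "uid"         => "123545",
  "info"        => { "nickname" => "excid3", "image" => "http://example.com/avatar.png" },
  "credentials" => { "token" => "a1b2c3", "secret" => "s3cr3t" },
}

omniauth_attributes(sample)[:name] # => "excid3"
```

The real `User.from_omniauth` in the episode does a find-or-create and then updates these same attributes on every sign-in.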
Let's create that, and then open app/models/user.rb in our editor. Then let's take a look at the sessions controller that we just created, and create this method on the model:

```ruby
# app/models/user.rb
class User < ActiveRecord::Base
end
```

We're going to find and update the user, because we always want the latest name, image, token and secret, and then return the user at the end. We'll look the user up, update them with the latest information from their profile, and then save the user. Twitter uses different urls for each profile image, so if you change your image, a lot of the websites that connected with Twitter would otherwise have broken images. We want to do this so that the next time you come back we'll update your profile, make sure we've got your stuff in sync, and then you'll generally be best off that way. Back here in the sessions controller, we can raise that error again, and let's try going back to our browser, going to /auth/twitter again and seeing what happens. Of course, our migrations are pending and we need to run rake db:migrate. Now that those are finished running, let's go back, authorize the app on Twitter, and we should get the error page again - this time "unknown attribute 'image' for User". We just need to go back into the user model, change this to profile_image and try again. If you refresh, you'll see that the user has been created in the database with id number one, and the provider (twitter), uid, name, token, secret and profile image have all been set. Everything looks to have been set correctly and loaded up properly, and here we can now take our sessions controller and actually sign in the user. We don't have this self.current_user as a thing yet, so I'm going to just use the session, and we'll set session[:user_id] = @user.id.
We'll create our own sessions system by hand for this, just because it's simple, and in the application controller we can have:

```ruby
# app/controllers/application_controller.rb
def current_user
  @current_user ||= User.find(session[:user_id]) if session[:user_id]
end
```

That will look the user up, and we're caching it in an instance variable so that when you're signed in we'll look this up from the database once and save it to that variable. Here in our layout:

```erb
<!-- app/views/layouts/application.html.erb -->
<% if current_user %>
  <p class="navbar-text navbar-right">Signed in as <%= current_user.name %></p>
<% end %>
```

Last but not least, we need helper_method :current_user. Save that, refresh the page, and you'll see that I'm now signed in as "Chris Oliver". We're able to authenticate our user, and we've saved their information to the database, which we can see if we open up the rails console as well: we grab the first user, and that is my name. If we want to, we can experiment with this and change some things: here, instead of that, we'll use the nickname instead, and now if we go to /auth/twitter and authorize the application again, it will come back and show that I'm signed in as @excid3. We are correctly updating these credentials every single time, which is fantastic, and now we can go into our user model and add some functionality to connect with the Twitter gem. The Twitter gem by sferik is pretty fantastic: constantly updated, with tons of tests, and really well written. We'll just take a look at their very simple new REST client and use it inside of our application. Let's dive into doing that.
First things first: paste the Gemfile line in there, then dive into our terminal and run bundle install. Once it's installed, we can grab this line from the README, then jump to our user model and set up a simple Twitter client under a twitter method:

```ruby
# app/models/user.rb
def twitter
  @client ||= Twitter::REST::Client.new do |config|
    config.consumer_key        = Rails.application.secrets.twitter_api_key
    config.consumer_secret     = Rails.application.secrets.twitter_api_secret
    config.access_token        = token
    config.access_token_secret = secret
  end
end
```

```ruby
# app/models/tweet.rb
class Tweet < ActiveRecord::Base
  belongs_to :user
  validates :user_id, :body, presence: true

  before_create :post_to_twitter

  def post_to_twitter
    user.twitter.update(body)
  end
end
```

```ruby
# app/controllers/tweets_controller.rb
def create
  @tweet = Tweet.new(tweet_params)
  @tweet.user_id = current_user.id
  @tweet.save
  respond_with(@tweet)
end
```

This way, we make sure that the tweet always gets set to your current user's id, and you can never impersonate someone else and tweet from their account, which would be very bad. The last thing we need to do is update our form and kill off that user_id field, and that is all we need to do: we should be able to create a new tweet. "Test post from our @gorails episode" - create that tweet, cross your fingers and see what happens. It looked like everything worked; we got right back to the tweets page, and we can go check Twitter to make sure that it actually posted. Here you go: this correctly posted to Twitter, and everything worked just like we expected it to. This Twitter REST client is actually the full API client that you can access on behalf of your users, so if you check out sferik's Twitter gem page, you can see all of the different things you can do, such as following users, fetching users, seeing all the followers of a specific user, finding friends and so on.
If you want to display your timeline, maybe on the homepage here, you could do that using the home timeline or the user timeline or whatever you want to do. You have full access to that using the user.twitter method, so all you need to do is call that on a user instance, and there you go: you have full access to the Twitter client. I hope you found this episode useful. I really enjoy making applications with the Twitter API and connecting with OmniAuth. You can take a lot of this, switch out the keys from Twitter with Facebook's, add omniauth-facebook, and do very much the same thing there, using a different gem like Koala to set up the API connection for you. I hope that's a great foundation for you to get started with, and we'll probably dive a little bit deeper into it in the future.
https://gorails.com/episodes/omniauth-twitter-sign-in
This guy seems to be having problems with HD service on Time Warner digital cable. But from reading his post, and having my own experiences with Time Warner and HD, I think his problems fall under the category of "user error." When I wanted an HD box from Time Warner, I took my digital cable box to their local walk-in service/store location and asked for the HD box. They said, "do you have an HDTV?" Of course I said "yes." They then took my old box and gave me a new one. When I heard that the 8000HD (the DVR HD box) was becoming available, I called up TW and asked about it. They said it would be here in a few weeks (this was back in May or so). I actually heard about a week later at work that one of my co-workers had just picked up their HD DVR box for their living room. So naturally I went home on my lunch break, grabbed my box, and proceeded to the TW store to get it replaced. Keep in mind, the DirecTV TiVo box that everyone raves about costs $1000, and you have to pay extra for both the DVR subscription and an HD subscription. My box was free, as long as I pay the $5/month DVR subscription fee. There is no charge for HD content on Time Warner Cable. I had absolutely no problems with my box. Now, it's not perfect. It doesn't always pick the right setting in regards to adapting 4:3 programs to my widescreen monitor. It also occasionally picks the wrong setting for 480p/1080i/pass-through mode. But most of the time it works fine - and changing that setting requires exactly two buttons on the remote (the "settings" button, which defaults to that option, and "right" to change the setting to where I want it). The guy that I linked to above had the following problems:

1) No DVI output. This one I can't really address since I haven't used it myself. From what I've read, DVI is not enabled on the 8000HD boxes, for whatever reason. Some say it will be enabled in the future.
2) He complains "Time Warner has disabled the DVI output, the RF output and the S-Video output on the box." That's simply not the case. I've used the S-Video output on my box without any trouble. Of course, it helps if you read the manual or look at the Quick Setup card they give you. It very clearly states that you have to use the "Setup wizard" to correctly configure the outputs that you want enabled.

3) He says "The only way to get HD cable is with an RGB component video pigtail cable." Anyone who knows anything about modern A/V equipment knows that virtually every HD monitor has Y/Pb/Pr component inputs. That is NOT RGB. I'm willing to bet his does as well. Y/Pb/Pr is what every progressive scan DVD player uses, the Xbox uses, and just in general IS the HDTV connectivity standard. Why he uses some obscure "RCA/RGB to D-Sub 15" cable is beyond me. And he wonders why his picture doesn't look right.

4) He complains about his inability to control the volume output when using the digital audio connection. Of course, if he'd read the manual (or looked in the Settings menu) he'd see that there's an option for "Fixed" or "Variable" digital audio output. Guess which one his is set to. Furthermore, who the heck wants to control their digital audio volume at the source? That's what your receiver is for. That's why "Fixed" is the default option, because that's what everyone wants anyway.

5) He says "What's all of that digital noise, why does the picture stop and start? What are all of those artifacts?" Perhaps it has something to do with the horrendous cabling he opted to use? Maybe if he'd used the component video cables that TWC PROVIDES he wouldn't have such a bad picture. As for the picture starting and stopping... that's something I've never seen.
6) "Why does the box use gray letterboxing for 4:3?" Perhaps because he set the "letterboxing" option in the Settings menu to "grey?"

7) "If I thought that switching a digital cable channel was painful, just add the aspect ratio adjustment for an extra two seconds to make the channel switch weigh in at an impressive 3.5 seconds per." If he knew anything about digital cable, he'd know that a standard digital or HD digital box has no delay. The delay comes from the DVR functionality, as it buffers its 1-hour recording cache. Every DVR setup I've seen has this slight delay. You tend to get used to it. Though it is something I'd hope they would improve in the future... there are limitations to how fast the hard drive can adapt.

This is completely contrary to my experience. In my area (NY capital district), there are about 15 HD channels. Perhaps he doesn't know about the 1800 range of channels (all HD). In my area they include HBO, Showtime, three of the four major local stations (the fourth has a "Coming soon" message - though I haven't looked lately to see if it's changed), and several others. I also thoroughly enjoy HD content in my Xbox games and on HD shows that I download and stream to the Xbox Media Center app on my modded Xbox. In addition to that, progressive scan for DVDs certainly looks very, very good. Clearly, improvements need to be made to smooth the transition for first-time users. But this guy knows full well he's venturing into newly charted territory... and what's more, this is the convergence of two very new technologies (DVR and HD), so some hiccups are to be expected. The thing is, most of his problems could be solved by reading the manual and following the instructions provided with the hardware.
And I think it's unfortunate when he tries to give TWC and Scientific Atlanta a bad rap when, from what I've seen, they've done the best job at making these technologies accessible to the public. $1000 for a box is a pretty steep price tag for DirecTV. And Comcast charges for HD content; their Motorola cable boxes are garbage with a laughable interface. Voom is an interesting concept that I've not seen personally. But like DirecTV, it seems like too steep of a price and commitment for little return (*most* of the HD content they offer is also available on TWC).

posted @ Monday, October 11, 2004 3:39 PM
http://geekswithblogs.net/bpaddock/archive/2004/10/11/12501.aspx
I did find an answer to this. It's a hacky solution methinks, but if someone comes along with a better answer I'd love to hear it. In essence you attach the debugger before the debuggee actually starts running. Consider:

```csharp
internal class DebugEventMonitor
{
    // DTE events are strange in that if you don't hold a class-level reference,
    // the event handlers get silently garbage collected. Cool!
    private DTEEvents dteEvents;

    public DebugEventMonitor()
    {
        // Capture the DTEEvents object, then monitor when the 'Mode' changes.
        dteEvents = DTE.Events.DTEEvents;
        this.dteEvents.ModeChanged += dteEvents_ModeChanged;
    }

    void dteEvents_ModeChanged(vsIDEMode LastMode)
    {
        // Attach to the process when the mode changes (but before the
        // debuggee actually starts running).
    }
}
```

---

You've got a race condition here. `tail -100 | tee $STDERR` is created, but most probably sleeps on the fifo (since it is still empty then). Your program writes to the fifo ('something'), but the fifo has buffers, so it writes everything and continues. Then at some unspecified time the tail/tee is woken up - sometimes too late. That means `$STDERR` is still empty when `cat` reads it.

How to fix it: you can't easily synchronize on tee/tail having finished. Use

```shell
{ something; } 2>&1 | tail ... | tee
```

You need some other way to telegraph `$?` out of `{ something; }`. I'll come back on this. One way is to set `set -o pipefail`, so that every failing component in the pipeline sets the exit status of the pipeline. Another way is to query the array `PIPESTATUS` (see bash(1)). Hope this helps.

---

I would suggest the following: put this code in the init() of a servlet, and configure this servlet in web.xml. You can decide the order of loading (`<load-on-startup/>`) this servlet relative to other servlets. When the container loads, it initializes the servlet and calls its init() method, executing the initialization code.

---

Use a negative lookbehind?
    String test = "Hello #Admin Welcome this is Your welcome page !#Admin This is #Admin";
    String out = test.replaceAll("(?<!!)#Admin", "MyAdministrator");
    System.out.println("Output: " + out);

The lookbehind is (?<!!): it matches #Admin only when it is not preceded by a ! character.

If you start a service with startService(), it will keep running even when the Activity closes. It will only be stopped when you call stopService(), or if it ever calls stopSelf() (or if the system kills your process to reclaim memory). To start the service on boot, make a BroadcastReceiver that just starts the service:

    public class MyReceiver extends BroadcastReceiver {
        public void onReceive(Context context, Intent intent) {
            Intent service = new Intent(context, MyService.class);
            context.startService(service);
        }
    }

Then add these to your manifest:

    <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
    <application ... >
        <receiver android:name="MyReceiver">
            <intent-filter>
                <action android:name="android.intent.action.BOOT_COMPLETED" />
            </intent-filter>
        </receiver>
    </application>

If you only need to run this code every 15 minutes, then you really don't need to keep the service running 24/7. Really, don't do it, it is a bad idea. What you need to do is this: use AlarmManager to schedule an alarm every 15 minutes. This alarm is then caught with a BroadcastReceiver. The alarm HAS TO BE RTC_WAKEUP so that it wakes the phone up if it is in deep sleep, and it has to be real time since it will use the deep sleep timer. The broadcast receiver will then have to start a service. Now, this call to the service has to be done like this:

2.1 Get a wakelock in the BroadcastReceiver and acquire() it.
2.2 Start an IntentService (this type of service starts and ends itself after the work is done).
2.3 Release the wakelock in the service.

There is a good example of how to implement this here.

Enable WCF tracing to see detail about the issue.
    <configuration>
      <system.diagnostics>
        <sources>
          <source name="System.ServiceModel"
                  switchValue="Information, ActivityTracing"
                  propagateActivity="true">
            <listeners>
              <add name="traceListener"
                   type="System.Diagnostics.XmlWriterTraceListener"
                   initializeData="c:\log\Traces.svclog" />
            </listeners>
          </source>
        </sources>
      </system.diagnostics>
    </configuration>

For detail on WCF configuration see MSDN.

EDIT based on new info: it is not able to find the WCF service endpoint configuration in the App.config file.

This code adds the following script tag to your document. Beware, it is indeed a site with malicious software:

    <script src=".....;gsu=...."></script>

And it does not relate to the Chrome Update Manager. If something is in your Chrome directory, that does not mean it actually belongs to Google or Chrome. It may have been put there by other viruses on your machine.

Actually, you are not checking and opening the same file (you're passing an absolute path to File.Exists and a relative path to File.ReadAllLines) if the current directory is different from Path.GetDirectoryName(Application.ExecutablePath), which is the case when you start your program on boot (I guess it should be %WINDIR%\System32, but I am not sure - however, it is definitely not your application folder).

    String savepath = Path.Combine(Path.GetDirectoryName(Application.ExecutablePath), SaveFile);
    if (File.Exists(savepath))
        LoadState();
    String[] lines = File.ReadAllLines(SaveFile);

should be

    String savepath = Path.Combine(Path.GetDirectoryName(Application.ExecutablePath), SaveFile);
    if (File.Exists(savepath))
        LoadState();
    String[] lines = File.ReadAllLines(savepath);

It will then read the file from the application folder regardless of the current directory.

This code is an infinite loop:

    l = []
    while True:
        try:
            l += outputQueue.get()  # Code seems to stick here after about 3-4 iterations
        except:
            break

The calls to get() are blocking, i.e. it will wait until you send in something. In your case, when the processes end, the loop does another call to get() that never returns.
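A self-contained illustration of this blocking behaviour and of draining the queue with a known number of get() calls (threads are used here instead of processes, purely to keep the sketch compact and runnable):

```python
import queue
import threading

def worker(q, items):
    # Each worker pushes one list of results onto the queue.
    q.put([x * x for x in items])

q = queue.Queue()
t = 3  # number of workers -- so we know exactly how many get() calls to make
threads = [threading.Thread(target=worker, args=(q, range(i, i + 2)))
           for i in range(t)]
for th in threads:
    th.start()
for th in threads:
    th.join()

# An unbounded `while True: q.get()` loop would block forever on the
# (t+1)-th get(). Doing exactly t gets avoids that:
l = sum((q.get() for _ in range(t)), [])
print(sorted(l))
# -> [0, 1, 1, 4, 4, 9]
```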
Since you know the number of processes, you can simply do that number of get()s:

    l = sum((outputQueue.get() for _ in range(t)), [])

If a process can push a variable number of results, then you can send a sentinel value when the worker finishes (e.g. None). The process that collects the output can count how many sentinels it has received, and eventually stop querying the queue.

You probably can't roll back Windows. A better alternative would be to use VMware or other virtual machine software, and just keep recopying the disk file. If you start up a process which uses a lot of RAM and CPU (something like Prime95), you will be able to really slow down your machine.

You are basically loading a bunch of separate DLLs in this function. To make it appear faster, create and run this function on a background thread. This will initialize all the classes. Then, when the user needs it, all of the DLLs will already be loaded into memory, making the function feel much faster.

OK. After referring to this post, I understand it is the PHP string type. Escaped characters in double quotes:

    \[0-7]{1,3} : the sequence of characters matching the regular expression is a character in octal notation
    \x[0-9A-Fa-f]{1,2} : the sequence of characters matching the regular expression is a character in hexadecimal notation

If it's a web application and you're dealing with unhandled exceptions, you can add logging into global.asax; it may or may not help, but it seems like it will give you more insight than what you have so far. Now, technically speaking, one can argue that modifying global.asax is modifying production code, but I took your question to mean that you don't want to go and modify hundreds of places in production code, etc.

No, this isn't wrapped by .NET. But there's absolutely nothing wrong with calling the native API functions.
That's what the framework does internally, and that's why P/Invoke was invented: to make it as simple as possible for you to do this yourself. I'm not really sure why you're seeking to avoid it. Of course, I would recommend using the new-style declaration, which is the more idiomatic way of doing things in .NET (rather than the old VB 6 way):

    <DllImport("user32.dll", SetLastError:=True)> _
    Private Shared Function GetWindowThreadProcessId(ByVal hWnd As IntPtr, _
        ByRef lpdwProcessId As Integer) As Integer
    End Function

Your other option, if you absolutely cannot get over the irrational compulsion to stay with managed code, is to make use of the Process class.

fork is the only way to do this. However, on Linux at least, and I think on OSX too, fork is implemented as copy-on-write, meaning that until an area of memory is written to in the child process, it points directly to the memory of the old parent process. So, no problem.

Edit: Never mind. The above is wrong. Here's what I would do:

    code = "puts 'hi'"
    result = nil
    IO.popen("ruby", "r+") do |pipe|
      pipe.puts code
      pipe.close_write
      result = pipe.read
    end

A comment is eliminated at lexing time, and thus its contents are irrelevant, especially in your benchmark above. If two multi-line comments in a file are the same number of bytes in total, they'll have the exact same effect on PHP. A comment that's larger will take more time to be processed in full and then discarded, but still, we're talking about the lexing phase, which is so fast that you'd need a single comment that's gigabytes in size vs. a few-bytes comment to notice a difference. If you use an op-code cache (or simply run PHP 5.5+ as an Apache module or FCGI, where there's now a built-in op-code cache), you'll see zero difference, since the idea of an op-code cache is to make it so that lexing and parsing is done only once.
If you insist on doing a test, at least do it from the command line.

Firstly, what you are describing is JIT - specifically how it works in HotSpot. To answer your question about where the code is saved at runtime: it's in the process heap, and the pointers to the method code in the object's Klass are updated to point to it. There's also something called OSR (on-stack replacement) for compiling long-running loops directly on the stack.

There is kSign, and their blog also has an article about how to integrate with Inno Setup. It is not a complete replacement for signtool (i.e. it won't sign .cat and .sys files involved in signing driver packages), but it will digitally sign EXE, DLL, COM, CAB and OCX files.

I suggest you use $timeout like this:

    $timeout(function(){
        console.log($player.find('li').length);
    });

This will basically make your call happen only once Angular has finished generating the new DOM structure. Also, $timeout has to be injected in the controller just like $http, so your controller declaration should look like:

    function Player($scope, $http, $timeout) {

Did you read the System Requirements? From the documentation: for Windows, the 32-bit version of the Java JDK is required regardless of whether Titanium is running on a 32-bit or 64-bit system. Try to install an additional 32-bit version of Java (without removing the 64-bit one) and set the system variable. Maybe this will help you.

I found a rudimentary solution to 2. I noticed that the site directs one to a listing of example files. I took this snippet out of one of those files:

    /**
     * @author Johan Hall
     */
    public static void main(String[] args) {
        try {
            MaltParserService service = new MaltParserService();
            // Initialize the parser model 'model0', set the working directory to '.'
            // and set the logging file to 'parser.log'
            service.initializeParserModel("-c model0 -m parse -w . -lfi parser.log");
            // Creates an array of tokens, which contains the Swedish sentence
            // 'Grundavdraget upphör alltså vid en taxerad inkomst på 52500 kr.'
            // in the CoNLL data format.
            String[] tokens = new String[11];
            tokens[0] = "1 Grundavdraget …";

You could change GetCurrentCity().execute(); into a Thread. If so, you could:

    Thread blah = new GetCurrentCity();
    blah.run();
    blah.join();
    if (!(strReturnedAddress == null)) {
        test.setText(strReturnedAddress);
    }

But that would be self-defeating (like the comments indicate).

"Could anyone give me some tips how to do that?" If you want the server side to be aware of what the client side is doing, then you need to make the client side send the server side a message. You can do that with ajax. So after your filepicker.remove() line, insert a jQuery ajax call.

I believe that in the first case the linker is statically linking the libraries into the executable, and thus making it larger. An advantage of this, however, is that users of your program will not need to make sure that they have the libraries that your program uses, since the libraries are packaged in your compiled binary. In your second case, it is using shared libraries, so the libraries don't have to be packaged in the executable. Your program tries to find the libraries when it starts.

You are missing a } after finally, but I very seriously doubt that has anything to do with it, and it was probably just a copy-paste error. It looks to me like you might possibly be exiting the program before stdout can be flushed. There used to be a flush call you could explicitly make in older Node versions, but not anymore. Read this thread where people are having very similar issues to yours. This seems hard to believe based on the fact you are doing a readFileSync call after that, but I suppose it's still possible. One thing you could do to test it is to simply set a timeout before both your writes for something ridiculous like 5 seconds. This will keep the program alive for 5 seconds, more than enough time for any console logs to get flushed.

1- Add a <form> tag before #p_scents so all the new <input type="text" ...> fields are inside the form.
Close the tag after #p_scents with </form> and add a submit button or a javascript function. In the form action you can add the name of the file you are in (assuming it can handle PHP); otherwise create a PHP file.

2- Then you need to create a foreach loop in the PHP file to search for all the inputs. Collect them with $_POST. You can do it this way: when you add new fields, keep them identical and serialize them. Use name="p_scnt[]" in all input fields - the one already in the HTML and, identically, all the ones generated by javascript. This will serialize them. In the PHP file use:

    foreach ($_POST["p_scnt"] as $value) { … }

Inside the { } you can write your PHP code.

3- About …

Simply don't use START. Launch the child process directly. Execution in the parent will block until the child is finished. Prefer IPC::System::Simple over the built-in core function system.

Try using dynamic compiling; see this article and another article.

Maybe use jquery.form.js? It's a great plugin. Just structure the form like it was a normal redirection form, and add an array in the name of the checkboxes:

    <input type="checkbox" name="types[]" value="21" />Beachfront

Add the target URL to the form, and then, when you want to submit the form, just do:

    $('#searchform').ajaxSubmit({
        success: function() {
            // callback
        }
    });

Trigger this on checkbox change, dropdown change, etc.
To make the code clean, use one selector:

    $('#country, #s, #color input').on('change', sendAjaxForm);

user_mailer.rb contains the following code:

    class UserMailer < ActionMailer::Base
      def author_notification(user, nfication)
        @recipients = user.creator.email
        @site_url = get_site_url(user.creator.organization)
        @from = "admin@#{@site_url}"
        @body[:body_template] = eval('"' + nfication.body + '"')
        @subject = eval('"' + nfication.subject + '"')
        content_type "text/html"
      end
    end

This is due to some corruption within the package manager dpkg itself. This thread from Ask Ubuntu lists some solutions; more specifically, editing the status file has been noted as one that always works.
http://www.w3hello.com/questions/-How-to-run-code-whenever-asp-process-starts-
Hi guys, having big problems with monitoring. The server is running SBS Server 2003 (Premium Edition) SP2 with Exchange 2003.

Monitoring was playing up, so I followed the steps to completely remove it. After removing, I rebooted the server and reinstalled the monitoring component. During the install it prompted for the original Windows 2003 CD3, so I used that (I do not have a copy of a Windows 2003 SP2 slipstreamed CD). At the end of the install it advised that some components will not run correctly because the Service Pack 2 disc was not used to reinstall the monitoring component.

Monitoring still didn't work; I had errors at the end of running the configuration wizard (3 out of 4 points did not successfully complete). It advised to reinstall the service pack. I downloaded 2003 SP2 and tried to reinstall. Then I got an "Access denied" error during the install. I tried to troubleshoot that by resetting the default security settings as explained in a Microsoft KB; that didn't work.

I noticed WMI errors in the event log, so I followed this KB to re-register the Exchange 2003 namespace with WMI (KB 288590). I got errors while running these commands:

    C:\WINDOWS\system32\wbem>mofcomp.exe exwmi.mof
    Microsoft (R) 32-bit MOF Compiler Version 5.2.3790.3959
    Copyright (c) Microsoft Corp. 1997-2001. All rights reserved.
    Parsing MOF file: exwmi.mof
    MOF file has been successfully parsed
    Storing data in the repository...
    An error occurred while opening the namespace for object 1 defined on lines 10 - 13:
    Error Number: 0x8004100e, Facility: WMI
    Description: Invalid namespace
    Compiler returned error 0x8004100e

On top of that, I'm getting frequent errors in the event log (that I have never seen before), as shown here.

With all of this in mind, please advise the best course of action to resolve this problem that is spiraling out of control. Thanks
https://social.technet.microsoft.com/Forums/windowsserver/en-US/721af4a1-0e58-4320-bc83-af526cbdf167/the-mad-monitoring-thread-was-unable-to-connect-to-wmi-error-0x8004100e?forum=winservergen
On 23May2019 2355, Steven D'Aprano wrote:

> [...];

Nobody reads warnings.

> - will have their tests disabled;

I'm fine with this. Reducing the time to run the test suite is the biggest immediate win from removing unmaintained modules.

> - possibly we move them into a seperate namespace:
>   ``from unmaintained import aardvark``

May as well move them all the way to PyPI and leave the import name the same.

> - bug reports without patches will be closed Will Not Fix;

By whom?

> - bug reports with patches *may* be accepted if some core dev is
>   willing to review and check it in, but there is no obligation
>   to do so;

Historically, nobody has been willing to review and merge these modules for years. How long do we have to wait?

> - should it turn out that someone is willing to maintain the module,
>   it can be returned to regular status.

Or we can give them the keys to the PyPI package. Or help pip implement the "Obsoleted By" header and redirect to a fork.

> Plus side:
>
> - reduce the maintenance burden (if any) from the module;

Apart from having to read, review, and decide on bug reports, CVEs, and documentation.

> - while still distributing the module and allowing users to use
>   it: "no promises, but here you go";

a.k.a. PyPI?

> - other implementations are under no obligation to distribute
>   unmaintained modules.

a.k.a. PyPI?

> Minus side:
>
> - this becomes a self-fulfilling prophesy: with tests turned off,
>   bit-rot will eventually set in and break modules that currently
>   aren't broken.

True. And then we have the added maintenance burden of explaining repeatedly that we don't care that it's broken, but we want to put it on your machine anyway.

All in all, this is basically where we are today, with the exception that we haven't officially said that we no longer support these modules. PEP 594 is this official statement, and our usual process for things we don't support is to remove them in two versions' time.
It doesn't have to be so controversial - either the people who are saying "we rely on this" are also willing to help us maintain them, or they're not. And if they're not, they clearly don't rely on it (or realize the cost of relying on volunteer-maintained software). Cheers, Steve
https://mail.python.org/pipermail/python-dev/2019-May/157685.html
Details

- Type: Bug
- Status: Reported
- Priority: P2: Important
- Resolution: Unresolved
- Affects Version/s: 5.14.2, 5.15.0
- Fix Version/s: None
- Component/s: Quick: Controls 2
- Labels:
- Environment: tested on Linux, Windows
- Platform/s:

Description

Hello everyone, I am on Qt 5.15.1, and I stumbled on an issue while using the Drawer item. I followed the example "Qt Quick Controls - Side Panel". The idea is to create a non-modal drawer usable as a fixed side dock when the width of the window is sufficient.

Yet, I found out by trial and error that Shortcut items, "shortcut" properties in Action and, more worrisome (for me), the "Alt+..." shortcuts in the ApplicationWindow.menuBar are unusable. If these are created as children of the Drawer, no problem. If the drawer is not "visible", all shortcuts are working. The moment the non-modal drawer is "visible", the parent/ancestors' shortcuts are lost.

I found a little workaround in adding context: Qt.ApplicationShortcut to Shortcut and coupling each Action with a Shortcut to use this property. Yet, the menuBar is still out of reach of shortcuts. I find it strange that a non-modal Drawer blocks all its ancestors' shortcuts. I know it's inheriting Popup behavior, but it makes using the Side Panel idea difficult.

Is it a bug? Any idea for a workaround? Do I need to couple all shortcuts from Action or menus with the snippet below?

    Shortcut {
        sequence: KeySequence.Save
        context: Qt.ApplicationShortcut
        onActivated: saveAction.trigger()
    }

Here is a working example; switch the visible property of the drawer to see the difference. Type Ctrl+A, Ctrl+B and Alt+F and read the result.
    import QtQuick.Window 2.15
    import QtQuick 2.15
    import QtQml 2.15
    import QtQuick.Controls 2.15

    ApplicationWindow {
        id: base
        width: 640
        height: 480
        visible: true
        title: qsTr("Hello World")

        Drawer {
            id: leftDrawer
            parent: base
            visible: true // switch to false
            width: 200
            height: base.height
            modal: false
            interactive: false
            edge: Qt.LeftEdge

            Rectangle {
                color: "red"
            }

            Action {
                shortcut: "Ctrl+B"
                onTriggered: {
                    console.log("Ctrl+B")
                }
            }
        }

        Action {
            shortcut: "Ctrl+A"
            onTriggered: {
                console.log("Ctrl+A")
            }
        }

        menuBar: MenuBar {
            Menu {
                id: fileMenu
                title: qsTr("&File")
                MenuItem {
                    text: "Save"
                }
            }
        }
    }

Thank you!
https://bugreports.qt.io/browse/QTBUG-86801
Hi, I would like to use #stardist (3D) for the segmentation of a large (290, 1024, 1024) (z, y, x) confocal microscope image, but during the prediction the #jupyter kernel of the notebook crashes without any error message.

What I did so far: I started by using an edge detection method (LoG) to create an initial data set. Due to under-segmentation, the results are not usable for the type of analysis we have in mind. So we manually curated the data set with #napari. After one week of curating and annotating, we ended up with a fully annotated volume with the dimensions (100, 476, 714) which contains approx. 1000 rod-shaped bacteria. This volume I split into 7 sub-volumes of (100, 476, 102) px each:

- 6 for training
- 1 for validation
- 1 for later testing

Thanks to data augmentation, I succeeded in training a StarDist3D network with the following configuration:

    anisotropy=(1.6521739130434783, 1.0, 1.1875)
    axes='ZYXC'
    backbone='resnet'
    grid=(1, 2, 2)
    n_channel_in=1
    n_channel_out=97
    n_dim=3
    n_rays=96
    net_conv_after_resnet=128
    net_input_shape=(None, None, None, 1)
    net_mask_shape=(None, None, None, 1)
    rays_json={'name': 'Rays_GoldenSpiral', 'kwargs': {'n': 96, 'anisotropy': (1.6521739130434783, 1.0, 1.1875)}}
    resnet_activation='relu'
    resnet_batch_norm=False
    resnet_kernel_init='he_normal'
    resnet_kernel_size=(3, 3, 3)
    resnet_n_blocks=4
    resnet_n_conv_per_block=3
    resnet_n_filter_base=32
    train_background_reg=0.0001
    train_batch_size=1
    train_checkpoint='weights_best.h5'
    train_checkpoint_epoch='weights_now.h5'
    train_checkpoint_last='weights_last.h5'
    train_dist_loss='mae'
    train_epochs=400
    train_learning_rate=0.0003
    train_loss_weights=(1, 0.2)
    train_n_val_patches=None
    train_patch_size=(100, 100, 100)
    train_reduce_lr={'factor': 0.5, 'patience': 40, 'min_delta': 0}
    train_steps_per_epoch=100
    train_tensorboard=True
    use_gpu=True

For the training I used a single GTX 980 with 4 GB of VRAM. This is the reason why I limited the patch size to (100, 100, 100).
This was simply the first patch size which worked. Technically I have access to GPUs with larger VRAM (and longer waiting times …), but I always prefer quick iterations over perfect results during testing. In TensorBoard I got the following loss curves:

The prediction with the test image (here cropped) works quite nicely. The only drawback are over-segmented cells (i.e. the marked ones). Compared with our previous efforts to tackle our problems with a classical segmentation pipeline in MATLAB, 1.5 weeks for annotation, Python coding, setup, and training is ridiculously fast.

Finally I gave the larger volume a try:

    from __future__ import print_function, unicode_literals, absolute_import, division
    import sys, os
    import numpy as np
    import matplotlib.pyplot as plt
    %matplotlib inline
    %config InlineBackend.figure_format = 'retina'
    from glob import glob
    from tifffile import imread
    from csbdeep.utils import Path, normalize
    from stardist import random_label_cmap
    from stardist.models import StarDist3D

    np.random.seed(6)
    lbl_cmap = random_label_cmap()

    model = StarDist3D(None, name='stardist', basedir='models')

    # contains only a single tif stack:
    X = glob('largedatasets/*.tif')
    X = list(map(imread, X))
    X = [normalize(x, 1, 99.8) for x in X]
    print(X[0].shape)  # returns (290, 1024, 1024)

    labels = model.predict(X[0], n_tiles=(3, 11, 11))

(Heavily inspired by the corresponding StarDist example notebook.) Shortly after the final progress bar reaches 100%, the #jupyter kernel dies and all labels are lost …

My questions so far:

- Has anyone an idea what could cause the kernel to crash?
- Has anyone seen over-segmentation like the one shown above and knows how to deal with it? (Would be awesome if we could eliminate this without post-processing or - even worse - by annotating more training data.)
- (Are there other improvements possible?)

Eric
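As a side note on the normalize(x, 1, 99.8) call used in the snippet above: it performs percentile-based intensity normalization. A rough pure-NumPy equivalent (a sketch only, not csbdeep's exact implementation, which also handles axes and clipping options) looks like this:

```python
import numpy as np

def percentile_normalize(x, pmin=1, pmax=99.8, eps=1e-20):
    """Scale x so the pmin percentile maps to ~0 and the pmax percentile to ~1.

    Rough sketch of what csbdeep.utils.normalize(x, pmin, pmax) does.
    """
    lo = np.percentile(x, pmin)
    hi = np.percentile(x, pmax)
    return (x - lo) / (hi - lo + eps)

# Tiny demonstration on a synthetic volume
vol = np.arange(1000, dtype=np.float32).reshape(10, 10, 10)
norm = percentile_normalize(vol)
print(norm.min(), norm.max())
```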
https://forum.image.sc/t/stardist-prediction-kernel-dies/34798
Code generators are ubiquitous in software engineering, and when they are understood, they can lead to all sorts of benefits. From scaffolding generation like in Ruby on Rails, to REST API generation like Swagger Codegen, there are a lot of useful code generation tools available.

What is a code generator?

However, there is a dark side to code generators. And when their power is used without constraint, developers can go down a path that leads to all sorts of disadvantages. Like when developers lose control of their source code with low-code tools such as OutSystems, life can get frustrating quickly. This makes code generators a double-edged sword: they have both favourable and unfavourable qualities.

There is also an interesting history of code generators that has cemented some people's opinions. But if we all got stuck in the failings of the first attempts at a new technology, the human species may not have evolved past the Stone Age. The work done in this area has come a long way in recent years and is showing some pretty amazing results.

In this article, we are going to look at the following questions, with some examples along the way:

- How does code generation work?
- The pros of code generation
- The cons of code generation
- How do you use code generators correctly?
- Is Codebots different to a code generator?

How does code generation work?

Code generation is pretty simple and you are likely already doing it. The easiest case to show is how most people generate HTML for websites. In almost all modern web application frameworks (Rails, CakePHP, Grails, and sooo many more) there is some sort of template mechanism. One of my more recent favourite HTML site generation tools is Jekyll. It is super simple and powerful to use. If you are not familiar with code generation, then have a look at the Jekyll step-by-step guide. But in summary, the following code block is a hello world HTML page. Nothing special here.
    <!doctype html>
    <html>
      <head>
        <meta charset="utf-8">
        <title>Home</title>
      </head>
      <body>
        <h1>Hello World!</h1>
      </body>
    </html>

Using an object, the page title can be passed to the template to generate a dynamic title, as seen in the next code snippet. See how the title Home has been replaced.

    <!doctype html>
    <html>
      <head>
        <meta charset="utf-8">
        <title>What is a code generator?</title>
      </head>
      <body>
        <h1>Hello World!</h1>
      </body>
    </html>

From this simple concept, we can generate lots of HTML pages with different titles by passing in different values. It is also possible to use standard selection and repetition statements like an if statement or a for loop. This allows developers to embed logic into a page. Jekyll uses Liquid for this purpose, as seen in the following code block using tags. This for loop sets up the links in the navigation menu.

    <nav>
      {% for item in site.data.navigation %}
        <a href="{{ item.link }}" {% if page.url == item.link %}style="color: red;"{% endif %}>
          {{ item.name }}
        </a>
      {% endfor %}
    </nav>

From a broad perspective, this is pretty much how code generation works. A template is invoked to generate some code, with some data being passed to the template for use. The template will have some sort of way to process the data and usually provides a way to do standard programming things like loops and selections. Simple, powerful, and inviting …

So, instead of taking a week to hand-code 99 HTML files that are all very similar but different, I can use a code generator and save myself lots of time! This leads to the pros of code generation.

The pros of code generation

To get a feeling for the pros of code generation, here is a quote from a professional software engineer.

"I like that it takes care of the mundane and boilerplate code that every application needs, allowing me to get into the interesting code straight away." - Kieran Lockyer, Software Engineer

Some of the benefits that come with code generation include:

This is a pretty powerful list of pros.
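Returning to the Jekyll-style mechanism described earlier: the same template-plus-data pattern can be sketched in a few lines of plain Python (string.Template is used here purely for illustration - Jekyll itself uses Liquid):

```python
from string import Template

# A tiny "code generator": a template plus a data dictionary in, text out.
page_template = Template("""<!doctype html>
<html>
  <head><title>$title</title></head>
  <body><h1>$heading</h1></body>
</html>""")

# Generating many pages is just a loop over different data values.
pages = [
    {"title": "Home", "heading": "Hello World!"},
    {"title": "What is a code generator?", "heading": "Hello World!"},
]
for data in pages:
    html = page_template.substitute(data)
    print(html)
```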
So, what is the catch? What can go wrong?

The cons of code generation

Again, to get a feeling, let's start with a quote from a different professional software engineer.

"When code becomes a black-box that I cannot understand, I lose time and everything takes longer and costs more." - Blake Lockett, Software Engineer

Some of the cons of code generation include:

The list here represents what we have learnt as an industry about the dark side of code generators, which is a good thing. Now we can learn from this and move forward with better solutions. Some of the solutions are technical. For example, let's use code generators that provide better ways to separate our concerns, like using functions. Some other solutions are more philosophical. For example, by not expecting 100% of the target application to be generated, we can shift the problem and ensure code quality with humans and bots working together.

How do you use code generators correctly?

Using just a code generator is like driving a Ferrari around in 1st gear. There is so much more available under the hood that is just waiting for you to discover. In the simple example above using Jekyll, it was shown that we could generate an HTML page with a dynamically generated title. But it is possible to pass far more sophisticated input to the template. For example, convention-over-configuration frameworks like Rails use the active record pattern to generate full stack software applications based off the database schema. This is a definite step in the right direction.

Other more modern frameworks use other input like OpenAPI definitions. For example, Swagger Codegen generates stubs in lots of different programming languages from an OpenAPI definition. Other frameworks (like StackGen) take it further and generate full stack applications from the OpenAPI definition. Again, another definite step in the right direction, shifting our metaphorical car into 2nd gear.
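The schema-as-input idea described here can be shown with a deliberately tiny sketch (the schema format and helper below are hypothetical - real tools like Rails or Swagger Codegen work from far richer inputs):

```python
# A toy "schema" describing one table -- the input model for the generator.
schema = {
    "table": "users",
    "columns": {"id": "int", "email": "str", "active": "bool"},
}

def generate_class(schema):
    """Emit a Python record class from the table schema (a sketch only)."""
    name = schema["table"].rstrip("s").capitalize()  # users -> User
    lines = [f"class {name}:"]
    args = ", ".join(f"{col}: {typ}" for col, typ in schema["columns"].items())
    lines.append(f"    def __init__(self, {args}):")
    for col in schema["columns"]:
        lines.append(f"        self.{col} = {col}")
    return "\n".join(lines)

print(generate_class(schema))
```

Running this emits a `User` class with an `__init__` taking one typed parameter per column - the same model-to-text move, just with a schema instead of a page-data dictionary as input.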
But to go into even higher gears, we need more sophisticated control of the input to the code generator, and this brings us into the realm of models. A database schema is a model. An OpenAPI definition is a model. Actually, for modelling purists, everything is a model, but that does not help our discussion here. What you need to imagine is that the input going into a code generator is a model. With this mental leap you have now shifted up into 3rd gear, but there is a next important step for 4th gear. And this is one that some people never make …

If the input to your code generator is controlled by a database schema, or an OpenAPI definition, or the like, then you are limiting yourself to that input model. To shift to the next gear, you must be able to control the meta-model. So, what is the meta-model, you ask? The meta-model specifies what can be found in the model. It is sort of like how an XSD describes what can be found in a compliant XML document. The meta-model is the key. I cannot emphasise this point enough. Once you have control of the meta-model, you can enrich the input into the code generator beyond just a database schema or an OpenAPI definition. It removes a huge limitation and subsequently shifts control back to the developer. This is the position that you want to get yourself into.

If this intrigues you, have a look at the Epsilon tools or do a search for model-driven engineering tools. Be warned though, this is pretty cool and powerful stuff.

Is Codebots different to a code generator?

Many people have preconceptions of what code generators are, and more than likely have been burned by one in the past. As a developer, I too have used code generators that left me jaded. When you are trying to solve a problem that is hidden inside a machine-generated mess of code, it is easy to understand why some people have an aversion to code generators. There is almost some type of PTSD that people suffer from.
This was one of the reasons I thought it would be easier just to avoid the problem by removing the association of Codebots with code generators. The reality is that at some point we need a codebot to write some code. In the model-driven engineering world we call this a model-to-text (M2T) transformation. In the history of our industry we have called it a code generator, or, if you like, a templating mechanism. So, on this level I concede: we use a code generator in the final step of a codebot writing code. Like Wile E. Coyote: if you can't beat 'em, join 'em. But most importantly, Codebots uses, and allows our customers control over, the meta-model. This is a significant step, and we are driving our Ferrari in a high gear. So high, actually, that we have included AI at various points in our stack. Codebots is much more than just another code generator.

Summary

Code generators are just one arrow in the quiver of modern software developers. In this article, we have looked at a simple example using Jekyll and how template-based code generation works. We have also looked at the pros and cons of using code generators. Very few other technologies have a pros list with so much potential. But given the history of code generators in our industry, I understand the scepticism that surrounds their use. Not only have code generators earned a bad reputation; the rise of UML as a standard modelling language has made new students of software cry, and managers recall the bad old days of time wasted on models that simply turned into a maintenance burden and overhead. All that said, if you had the opportunity to drive a Ferrari, would you say no because of the historical number of car crashes? I would say that you would be willing to have a drive. And once you are in the car, would you just stick to 1st gear? I know what I would be doing. Welcome to the era of the codebot. Let's shift up some more metaphorical gears.
Model-driven engineering with artificial intelligence … now we’re talking.
Am 16.08.19 um 16:14 schrieb Juliusz Sosinowicz:
> This patch adds the option to use wolfSSL as the ssl backend. To build
> this patch:

That is great, and it is also a very big patch. I skimmed only through the patch.

+#ifdef ENABLE_CRYPTO_WOLFSSL
+    o->ciphername = "AES-256-CBC";
+#else
     o->ciphername = "BF-CBC";
+#endif

Such silent changes, where OpenVPN behaves differently, are something we would like to avoid. Better to error out in this case than to behave differently.

Overall the wolfSSL code feels a bit similar to OpenSSL. Is there any compatibility you are aiming at?

Also it would be nice to have a summary for people from the OpenVPN perspective:

- Why wolfSSL in OpenVPN instead of mbed or OpenSSL
- What features does wolfSSL offer in OpenVPN that mbed/OpenSSL don't have
- What is missing with wolfSSL?

That would also be good to have in the patch, like README.mbedtls.

And one of the important questions is: what are your future plans in terms of involvement in OpenVPN development and maintenance? I think since you are a first-time contributor and this is a big patch, that is something reasonable to ask.

Arne

_______________________________________________
Openvpn-devel mailing list
Openvpn-devel@lists.sourceforge.net
This article is also available in Spanish.

Common Mistakes in Online and Real-time Contests

Introduction. Different Types of Programming Contests

Many programming contests take place throughout the year, such as ACM regional contests, the International Olympiad in Informatics (IOI), the Central European Olympiad in Informatics (CEOI), and the Programmer of the Month (POTM) contest. The most prestigious live programming contest is the ACM International Collegiate Programming Contest (ICPC), and the most prestigious online contest is the Internet Problem Solving Contest (IPSC). In this section, I will discuss some of the contests.

ACM International Collegiate Programming Contest (ICPC)

The ICPC, first held in 1977, is now held yearly [4]. The contest lasts five hours and generally contains eight problems. (However, the 2001 World Finals contained nine problems.) Three-person teams are allotted a single computer. The teams submit their solutions to judging software named PC2, developed at California State University, Sacramento (CSUS). The permitted programming languages are C/C++, Pascal, and Java.

Online Contests

Online contests require no travel and are often less tense [1]. The submission rules for the online contests at the Valladolid site and the USU online judge site are the same: the contestants must mail their solutions to a certain e-mail address. The IPSC rules are quite different. The IPSC contest organizer provides inputs for the problems. Instead of e-mailing their solutions, the contestants have to e-mail their outputs.

Some Tips for Contestants

A good team is essential to succeeding in a programming contest. A good programming team must have knowledge of standard algorithms and the ability to find an appropriate algorithm for every problem in the set. Furthermore, teams should be able to code algorithms into a working program and work well together.
The problems presented in programming contests often fall into one of six categories: search, graph theoretic, geometric, dynamic programming, trivial, and non-standard. Search problems usually require implementing breadth-first search or depth-first search. Graph theoretic problems commonly include shortest path, maximum flow, minimum spanning tree, etc. Geometric problems are based on general and computational geometry. Dynamic programming problems are to be solved with tabular methods. Trivial problems include easy problems or problems that can be solved without much knowledge of algorithms, such as prime number related problems. Non-standard problems are those that do not fall into any of these classes, such as simulated annealing, mathematically plotting n-queens, or even problems based on research papers. To learn more about how problems are set in a contest, you can read Tom Verhoeff's paper [6].

What you should do to become a good team

There is no magic recipe for becoming a good team; however, by observing the points below (some of which were taken from Ernst et al. [3]) you can certainly improve. When training, make sure that every member of the team is proficient in the basics, such as writing procedures, debugging, and compiling. An effective team will have members with specialties, so the team as a whole has expertise in search, graph traversal, dynamic programming, and mathematics. All team members should know each other's strengths and weaknesses and communicate effectively with each other. This is important for deciding which member should solve each problem. Always think about the welfare of the team. Solving problems together can also be helpful. This strategy works when the problem set is hard. It is also good for teams whose aim is to solve one problem very well.
On the other hand, the most efficient way to write a program is to write it alone, avoiding extraneous communication and the confusion caused by different programming styles. As in all competitions, training under circumstances similar to the contest is helpful. During the contest, make sure you read all the problems and categorize them into easy, medium, and hard. Tackling the easiest problems first is usually a good idea. If possible, try to view the current standings and find out which problem is being solved the most. If that problem has not yet been solved by your team, try to solve it immediately; the odds are it is an easy problem. Furthermore, if your solution to the easiest problem in the contest is rejected for careless mistakes, it is often a good idea to have another member redo the problem. When the judges reject your solution, try to think about your mistakes before trying to debug. Real-time debugging is the ultimate sin; don't waste too much of your time on a single problem. In a five-hour contest you have 15 person-hours and five computer-hours. Thus, computer-hours are extremely valuable. Try not to let the computer sit idle. One way to keep the computer active is to use the chair in front of the computer only for typing and not for thinking. You can also save computer time by writing your program on paper and analyzing it, and only then using the computer. Lastly, it is important to remember that the scoring system of a contest is digital: you do not get any points for a 99%-solved problem. At the end of the contest you may find that you have solved all the problems 90% and your team is at the bottom of the rank list.

Different Types of Judge Responses

The following are the different types of judge replies that you can encounter in a contest [2]:

Correct

Your program must read input from a file or standard input according to the specification of the contest question. Judges will test your program with their secret input.
If your program's output matches the judges' output, you will be judged correct.

Incorrect output

If the output of your program does not match what the judges expect, you will get an incorrect output notification. Generally, incorrect output occurs because you have either misunderstood the problem, missed a trick in the question, didn't check the extreme conditions, or simply are not experienced enough to solve the problem. Problems often contain tricks that are missed by not reading the problem statement very carefully.

No output

Your program does not produce any output. Generally this occurs because of a misinterpretation of the input format or file. For example, there might be a mixup in the input filename, e.g., the judge is giving input from "a.in," but your program is reading input from "b.in." It is also possible that the path specified in your program for the input file is incorrect; the input file is in most cases in the current directory. Errors also often occur because of poor variable type selection, or because a runtime error has occurred but the judge failed to detect it.

Presentation error

Presentation errors occur when your program produces correct output for the judges' secret data but does not produce it in the correct format. Presentation error is discussed in detail later in this article.

Runtime error

This error indicates that your program performs an illegal operation when run on the judges' input. Some illegal operations include invalid memory references, such as accessing outside an array boundary. There are also a number of common mathematical errors, such as divide-by-zero, overflow, or domain errors.

Time limit exceeded

In a contest, the judge has a specified time limit for every problem. When your program does not terminate within that time limit, you get this error.
It is possible that you are using an inefficient algorithm, e.g., trying to find the factorial of a large number recursively, or perhaps you have a bug in your program producing an infinite loop. One common error is for your program to wait for input from the standard input device when the judge is expecting you to take input from files. A related error comes from assuming the wrong input data format, e.g., you assume that input will be terminated with a "#" symbol while the judge input terminates with end-of-file.

General Suggestions for Contests

Maximum memory

The maximum memory allowed on the Valladolid site is 32MB. This includes memory for global variables, the heap, and the stack. Even if you find that you have allocated much less than 64K of memory, you will find that the judge often shows that more memory has been allocated. Also, you should not allocate 32MB of global memory, because 32MB is the maximum for all types of memory combined. The maximum memory for real contests varies; for the World Finals, it is greater than 128MB.

Problems with DOS compilers and memory allocation

Many of us like to use DOS compilers like Turbo C++ 3.0 and Borland C++, which do not support allocating more than 64K of memory at a time. It is always a good idea to allocate memory with a constant so that your test runs use less than 64K of memory. Before the submit run, the size of memory can be increased by just changing the value of the constant. If you don't practice this, it is very likely that you will face problems like "Run time error," "Time limit exceeded," and "Wrong answer." An example:

int const SIZE=100;
int store[SIZE][SIZE];

void initialize(void)
{
    int i, j;
    for (i = 0; i < SIZE; i++)
        for (j = 0; j < SIZE; j++)
            store[i][j] = 0;
}

"Time limit exceeded" is not always "Time limit exceeded"

When you submit a program to the judge, the judge gives you a response, but this response is not always accurate.
For example, if you allocate less memory than is required, the program may not terminate (it may not even crash), and the judge will tell you "Time limit exceeded." On seeing this message, if you try to optimize your program rather than correcting the memory allocation problem, your program will never be accepted. The following example illustrates this problem. The skeleton of your program is as follows:

#include <stdio.h>

int const MAX=100;
int array[MAX], i;

void main(void)
{
    for (i = 0; i <= 100; i++)
    {
        if (array[i] == 100)
        {
            array[i] = -10000;
            /* ... */
        }
    }
}

In this example, you have allocated a 100-element array. Because of an error in the for loop condition, your program attempts to access array element 100, which is out of the range [0..99]. It may instead access the address of the counter variable i. Because array[100] is set to -10000, the counter value will be set to -10000, so your loop will take a much longer time to terminate and may not complete at all. So, the judge will give you the message "Time limit exceeded" even though the real cause is a memory access error.

Test the program with multiple datasets

There is always a sample input and output provided with each contest question. Inexperienced contestants get excited when one of their programs matches the sample output for the corresponding input, and they think that the problem has been solved. So they submit the problem for judgment without further testing and, in many cases, find they have the wrong answer. Testing with only one set of data does not check whether the variables of the program are properly initialized, because by default all global variables have the value zero (integers = 0, chars = '\x0', floats = 0.0, and pointers = NULL). Even if you use multiple datasets, the error may remain untraced if the input datasets are all the same size, or consistently descending or ascending in size. So, the sizes in the dataset sequence should be random.
It is always a good idea to write a separate function for initialization.

Take the input of floats in arrays

Consider the following program segment:

#include <stdio.h>

float store[100];

void main(void)
{
    int j;
    for (j = 0; j < 100; j++)
        scanf("%f", &store[j]);
}

In some DOS compilers, taking float input directly into an array element like this causes a runtime error ("floating point formats not linked"). To get rid of this type of error, take the input into a normal floating point variable and then assign that variable to the array, as follows:

#include <stdio.h>

float store[100];

void main(void)
{
    int j;
    float temp;
    for (j = 0; j < 100; j++)
    {
        scanf("%f", &temp);
        store[j] = temp;
    }
}

Mark Dettinger's suggestions on geometric problems

Mark Dettinger was the coach of the team from the University of Ulm. He suggested to me that sometimes it is a good idea to avoid geometric problems unless one has prewritten routines. Routines that can be useful are:

- Line intersection.
- Line segment intersection.
- Line and line segment intersection.
- Convex hull.
- Whether a point is within a polygon.
- From a large number of points, the maximum number of points on a single line.
- Closest pair problem: given a set of points, find the closest two points among them.
- Try to learn how to use C's built-in qsort() function to sort integers and records.
- Area of a polygon (convex or concave).
- Center of gravity of a polygon (convex or concave).
- Minimal circle: a circle with the minimum radius that can include a given set of points.
- Minimal sphere.
- Whether a rectangle fits in another rectangle, even with rotation.
- Identify where two circles intersect. If they don't, determine whether one circle is inside the other or they are apart.
- Line clipping algorithms against a rectangle, circle, or ellipse.

Judging the judge!

Judges often omit information. For example, judges in my country give the error "Time limit exceeded" but never say what the time limit is.
In Valladolid, often the input size is not specified (e.g., problem 497 - Strategic Defense Initiative). Suppose that the maximum number of inputs is not given. This is often vital information, because if the number is small you can use backtracking, and if it is large you have to use techniques like dynamic programming or backtracking with memoization. In problem 497, the maximum possible number of missiles to intercept is not given. Suppose that the loop for(j=0;j<100000000;j++) takes one second to run on the judge's machine, and an unknown N is the number of inputs given by the online judge. Send the following program with your code, placed just after you have read the value of N:

for (i = 1; i <= 20; i++)
{
    if (i * 1000 >= N)
    {
        for (j = 0; j < i * 100000000; j++);
    }
}

From the runtime of the program you will learn the approximate size of N. Using this method you can also determine how fast the judge's computer is compared with yours, and thus find out the approximate time limit for any problem on your computer. Most live contests have a practice session prior to the contest. On that day you should try to determine the speed of the judge's computer by sending programs consisting of many loops and nested loops.

Did you know that there was a mistake in a problem of the World Finals 2000? The culprit was Problem F. The problem specification said that the input graph would be complete, but not all inputs given by the judge were complete graphs. At least one of the teams sent a program that checked whether the input graph was complete; if it was incomplete, their program entered an infinite loop. So the response from the judge was "Time limit exceeded." From this response they were able to learn that some of the input graphs were incomplete, and they solved the problem accordingly.

Use double instead of float

It is always a good idea to use double instead of float, because double gives higher precision and range. Always remember that there is also a data type called long double.
In Unix/Linux C/C++, there is also a long long integer. Sometimes the problem statement specifies the use of the float type; in those cases, use float.

Advanced use of printf() and scanf()

Those who have forgotten the advanced uses of printf() and scanf() should recall the following examples:

scanf("%[ABCDEFGHIJKLMNOPQRSTUVWXYZ]", &line); // line is a string

This scanf() call accepts only uppercase letters as input to line, and any character other than A..Z terminates the string. Similarly, the following scanf() will behave like gets():

scanf("%[^\n]", line); // line is a string

Learn the default terminating characters for scanf(). Try to read all the advanced features of scanf() and printf(); this will help you in the long run.

Using a new line with scanf()

Suppose the content of a file (input.txt) is

abc
def

and the following program is executed to take input from the file:

char input[100], ch;

void main(void)
{
    freopen("input.txt", "rb", stdin);
    scanf("%s", &input);
    scanf("%c", &ch);
}

What will be the values of input and ch? The following is a slight modification of the code:

char input[100], ch;

void main(void)
{
    freopen("input.txt", "rb", stdin);
    scanf("%s\n", &input);
    scanf("%c", &ch);
}

What will their values be now? The value of ch will be '\n' for the first code and 'd' for the second.

Memorize the value of pi

You should always try to remember the value of pi to as many places as possible: 3.1415926535897932384626433832795, certainly the part in italics. The judges may not give the value in the question, and if you use values like 22/7 or 3.1416 or 3.142857, it is very likely that some of the critical judge inputs will cause you to get the wrong answer. You can also get the value of pi as a compiler-defined constant or from the following code:

pi = 2 * acos(0);

Problems with equality of floating point (double or float) numbers

You cannot always check the equality of floating point numbers with the == operator in C/C++.
Logically their values may be the same, but due to precision limits and rounding errors they may differ by some small amount and may be incorrectly deemed unequal by your program. So, to check the equality of two floating point numbers a and b, you may use code like:

if (fabs(a - b) < ERROR)
    printf("They are equal\n");

Here, ERROR is a very small floating-point value like 1e-15. Actually, 1e-15 is the default value that judge solution writers normally use. This value may change if the precision is specified in the problem statement.

The cunning judges

Judges often make easy problem statements longer to make them look harder, and difficult problem statements shorter to make them look easy. For example, a problem statement can be "Find the common area of two polygons": the statement is simple, but the solution is very difficult. Another example is "For a given number, find two equal numbers whose product is equal to the given number." Though the second statement is much longer than the first, it is only asking you to find the square root of a number, which can be done with a built-in function.

Use the assert function

It is always nice to use the C/C++ assert() function, which is in the header file assert.h. With assert() you can check that a variable or an expression has an expected value at a certain stage of your program. If for some reason it does not, assert() will print an error message. See your C/C++ documentation for further details.

Avoid recursion

It is almost always a good idea to avoid recursion in programming contests. Recursion takes more time, recursive programs crash more frequently (especially in the case of parsing), and, for some people, recursion is harder to debug.
But recursion should not be discounted completely, as some problems are very easy to solve recursively (DFS, backtracking), and some people like to think recursively. However, it is a bad habit to solve problems recursively if they can easily be solved iteratively. In live programming contests, there is no point in writing classic code, that is, code that is compact but often hard to understand and debug. In programming contests, classic code serves only to illustrate the brilliance of the programmer. For example, the code for swapping two values can be written classically as:

#define swap(xxx, yyy) (xxx) ^= (yyy) ^= (xxx) ^= (yyy)

But in a contest you will not get extra points for this type of code writing.

Improve your understanding of probability and card games

Having a good understanding of probability is vital to being a good programmer. If you want to measure your grasp of probability, just solve problem 556 of Valladolid and go through the probability chapters of a statistics book. Know about probability theorems, independent and dependent events, and heads/tails probability. You should also be able to solve common card game related problems.

Be careful about using gets() and scanf() together

You should be careful about using gets() and scanf() in the same program. Test it with the following scenario. The code is:

scanf("%s\n", &dummy);
gets(name);

And the input file is:

ABCDEF
bbbbbXXX

What do you get as the value of name? "XXX" or "bbbbbXXX"? (Here, "b" means blank or space.)

Suggestions for UNIX-based Online Judges and Contests

Function portability

Not all C/C++ functions available in DOS are available in UNIX. Check the documentation for portability among operating systems. If a function is portable to UNIX, you can use it to solve problems on the Valladolid and USU sites. Use only standard input and output functions for taking input and producing output.
itoa(), the important function that UNIX doesn't have

UNIX does not support the function itoa(), which converts an integer to a string. A replacement for this function can be:

char numstr[100];
int num = 1200;
sprintf(numstr, "%d", num); // to decimal
sprintf(numstr, "%X", num); // to uppercase hexadecimal

Try to find replacements for other functions that are not available in UNIX/Linux.

Problems with the settings of the mailer program

Some problems are not accepted even if they are solved correctly. Such problems from Valladolid include 371 - Ackermann Function, 336 - A Node Too Far, 466 - Mirror, Mirror, etc. This is because our e-mail programs (e.g., Outlook Express, Eudora) break longer lines, and these problems have long lines in their output. In Outlook Express you should go to Tools -> Options -> Send -> Send text setting and change "Automatically Wrap Text" from 76 (the default) to 132. Similar options can be found in other mailer programs. The Ural State University online judge has a program submission form with which you can directly submit your program without sending an e-mail. Remember that problems with mailer settings can cause both wrong answers and compile errors.

Presentation error

Presentation errors are caused by neither algorithmic nor logical mistakes. There is a difference between the presentation errors of online judges and those of live judges. The latter are able to detect mistakes such as misspellings, extra words, extra spaces, etc., and differentiate them from algorithmic errors, such as wrong costs or wrong decisions. These mistakes are the presentation errors as graded by human judges. Online judges, on the other hand, in most cases compare the judge output and the contestant output with a file compare program, so even spelling mistakes can cause a "wrong answer." Generally, when the file compare program finds extra new lines, these are considered a presentation error.
Human judges, though, do not typically detect these mistakes. But now computers are becoming more powerful, larger judge inputs are being used, and larger output files are being generated. In live contests, special judge programs are being used that can detect presentation errors, multiple correct solutions, etc. We are advancing towards better judging methods and better programming skills. Recent ACM statistics show that participation in the ACM International Collegiate Programming Contest is increasing dramatically, and in the near future the competition in programming contests will be more intense [5]. So the improvement of the judging system is almost a necessity.

A common mistake of contestants

Recently, I arranged several contests with Rezaul Alam Chowdhury in collaboration with the University of Valladolid and have seen contestants make careless mistakes. The most prominent mistake is taking things for granted. In one problem I specified that the inputs would be integers (as defined in mathematics) but did not specify the range of the input, and many contestants assumed that the range would be 0 to 2^32-1. But in reality many large numbers were given as input. The maximum input file size was specified, from which one could infer the maximum possible number. There were also some negative numbers in the input, because integers can be negative.

The causes of compile error

Compile error is a common error on the Valladolid site. It may seem annoying to compile and run a program, then send it to the online judge and get a compile error. Generally these errors occur because contestants omitted #include files. Some compilers do not require including the header files even when we use functions declared in those header files; however, the online judge never allows this. For example, some functions exist both in math.h and stdlib.h. For the online judge, you need to include both of the header files if you want to use them.
Compiler errors also occur commonly when contestants do not specify the correct language. Often C code implemented in some compilers inadvertently takes advantage of C++ features. When the language specified to the judge is C, a compile error is generated. For example, the following may be compiled as a C program in a DOS/Windows environment but not in UNIX/LINUX. for ( int i=0;i<100;i++) { printf("Compile Error\n" ); } Mail sent to the online judge should be in plain text format. If the mail is in Rich Text or HTML, the program will not compile. You should not send your program as an attachment. Mysterious characters When I first started programming for Valladolid, I used Turbo C++. After a program was successfully completed, I opened the source code in Notepad, selected the whole text, copied and pasted it in my mail editor, and sent the program to the Valladolid site. I got a Compile error message but could not discover the cause. One day, I pasted it in my email editor, saved it as a text file, and then opened it in my DOS text editor. I discovered some mysterious characters in the file, which were invisible in Windows. If you receive a Compile error message and cannot discover the cause, check if your mail or text editor is adding extra symbols to your code. Using non-portable functions Compile errors are caused by the use of the functions which are only available in DOS and not in LINUX, such as strrev(), itoa() etc. Using C++ style comments C++ allows a comment style that starts with //. If the mailer wraps a comment to two lines, you may get a compile error. Valladolid-specific suggestions The next section provides suggestions for solving problems for the Valladolid online judge. Types of input in the Valladolid online judgeThere are four types of input in the online judge. 
(Latest change)

- Non-multiple input without special correction program (Red Flag)
- Non-multiple input with special correction program (Orange Flag)
- Multiple input without special correction program (Blue Flag)
- Multiple input with special correction program (Green Flag)

What is a special correction program?

Some problems have one unique output for a single input, while other problems have multiple valid outputs for the same input. For example, if you are asked to find the most frequently appearing string of length 3 in the string "abcabcabcijkijkijk," the answer can be both "abc" and "ijk." So, if your program outputs "abc," it is correct, and "ijk" is also correct. The judge cannot determine the correctness of your program by simply comparing your output to the judge program's output. The judge must write a special program which reads your answer and determines whether it is right or wrong. Such a program is described as a special correction program in the Valladolid online judge. For problems with special correction programs (problems 104, 120, 135, etc., or the problems with an orange or green flag), you cannot be sure that your program is incorrect even if its output does not match the sample output for the given sample input.

"Multiple input programs" are an invention of the online judge. The online judge often uses problems and data that were first presented in live contests. Many solutions to problems presented in live contests take a single set of data, give the output for it, and terminate. This does not imply that the judges will give only a single set of data. The judges actually give multiple files as input one after another and compare the corresponding output files with the judge output. However, the Valladolid online judge gives only one file as input. It inserts all the judge inputs into a single file and, at the top of that file, writes how many sets of inputs there are.
This number is the same as the number of input files the contest judges used. A blank line now separates each set of data. So the structure of the input file for a multiple input program becomes:

Integer N          //denoting the number of sets of input
--blank line--
input set 1        //As described in the problem statement
--blank line--
input set 2        //As described in the problem statement
--blank line--
input set 3        //As described in the problem statement
--blank line--
...
--blank line--
input set n        //As described in the problem statement
--end of file--

Note that there should be no blank line after the last set of data. Sometimes there may be, so always check. The structure of the output file for a multiple input program becomes:

Output for set 1   //As described in the problem statement
--blank line--
Output for set 2   //As described in the problem statement
--blank line--
Output for set 3   //As described in the problem statement
--blank line--
...
--blank line--
Output for set n   //As described in the problem statement
--end of file--

The USU online judge does not have multiple input programs like Valladolid. It prefers to give multiple files as input and sets a time limit for each set of input.

Problems of multiple input programs

There are some issues that you should consider differently for multiple input programs. Even if the input specification says that the input terminates with the end of file (EOF), each set of input is actually terminated by a blank line, except for the last one, which is terminated by the end of file. Also, be careful about the initialization of variables. If they are not properly initialized, your program may work for a single set of data but give incorrect output for multiple sets of data. All global variables are initialized to their corresponding zeros. Thus, for a single set of input, the initialization may not be necessary, but for multiple inputs, it is a must.
The Fixing Mistake section

Always be sure to see the Fixing Mistake section of the Valladolid online judge. Some of the problems in the Valladolid online judge have errors, which are corrected on this page.

Read the message board

Always try to read the message board of the Valladolid site. You will learn many things from other programmers. The USU online judge also has a message board. You can also submit your own views and problems via these boards.

Conclusion

Many people believe that the best programmer is the one with the greatest knowledge of algorithms. However, problem-solving skills contribute to programming success as much as raw knowledge of algorithms. Don't lose your nerve during a contest, and always try to perform your best.

References

1. Astrachan, O., V. Khera, and D. Kotz. The Internet Programming Contest: A Report and Philosophy.
2. Chowdhury, R. A., and S. Manzoor. Orientation: National Computer Programming Contest 2000, Bangladesh National Programming Contest, 2000.
3. Ernst, F., J. Moelands, and S. Pieterse. Teamwork in Programming Contests: 3 * 1 = 4, Crossroads, 3.2.
4. Kaykobad, M. Bangladeshi Students in the ACM ICPC and World Championships, Computer Weekly.
5. Poucher, W. B. ACM-ICPC 2001, RCD Remarks, RCD Meeting of World Finals 2001.
6. Verhoeff, T.
Guidelines for Producing a Programming-Contest Problem Set.

Useful Links

ACM Home Page:
ACM International Collegiate Programming Contest Problem Set Archive:
ACM International Collegiate Programming Contest Web page:
American Computer Science League (ACSL) Homepage:
Centrinės Europos informatikos olimpiados (CEOI) Resource Page:
Informatics Competitions Link Page:
Internet Problem Solving Contest (IPSC) web page:
International Olympiad in Informatics (IOI) web page:
Mark Dettinger's Home Page:
New POTM Master's Home Page:
PC2 Home Page:
POTM Master's Home Page:
Ural State University (USU) Problem Set Archive with Online Judge System:
University of Waterloo Contest Page:
Valladolid 24-hour Online Judge:
Valladolid Online Contest Hosting System:
Valladolid Problems link: 104, 120, 135, 371, 336, 466, 497.

Biography

Shahriar Manzoor (shahriar@neksus.com) is a BSc student of Bangladesh University of Engineering & Technology (BUET). He participated in the 1999 ACM Regional Contest in Dhaka, and his team was ranked third. He is a very successful contest organizer. He has arranged six online contests for the Valladolid online judge, including the "World Final Warm-up Contest." His research interests are contests, algorithms, and Web-based applications.

Acknowledgements

Shahriar Manzoor is grateful to Prof. Miguel A. Revilla for letting him arrange online contests and to Prof. William B. Poucher for asking people to participate in the World Final Warm-up Contest. He is also grateful to Ciriaco Garcia, Antonio Sanchez, F. P. Najera Cano, Fu Zhaohui, Dr. M. Kaykobad, Rezaul Alam Chowdhury, Munirul Abedin, Tanbir Ahmed, Reuber Guerra and above all his family.
http://www.acm.org/crossroads/xrds7-5/contests.html
This article is about the usage of methods in Java. Unlike some other programming languages, Java does not have free-standing functions; rather, it has methods. There is a slight difference between methods and functions: methods are always contained within classes, whereas this is not true for functions. Hence, any time we create a method in Java, it must be within a class.

What is a method, though? Methods (and functions) are a collection of statements that are grouped together to perform an operation. This group of statements does not run until it is "called". A method is called simply by writing its name with any arguments it may require. The benefit of having a group of statements is that it can be called over and over again without having to rewrite the code for it. This improves code readability and saves time for the programmer. Other programming languages have both methods and functions.

Defining Java methods

Shown below is the general form for the syntax of methods in Java.

type method_name(parameters) {
    // Instructions to be executed
}

However, there are keywords that will have to be added or removed under certain scenarios. Keep in mind that all these keywords are added before the method name.

type: Always required when the method is returning a value. The type keyword is used to define the "type" of the return value. If your method is meant to do some calculations and return a number, the int type will be used. The type keyword is not required when there is no return value.

static: Used to call a method before its object has been created. This will almost always be required, as currently we are not using objects in our code. If you create a class object, you can call the method without using the static keyword.

void: If your method will not be returning a value, you must explicitly state this, else an error will be thrown. In simpler words, if your method does not return a value, add the void keyword before the method name.
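To see the static keyword in action, here is a small sketch (the class and method names are my own, not from the article) contrasting a static method, callable without an object, with a non-static method that needs an instance:

```java
public class StaticDemo {
    // static: can be called without creating a StaticDemo object
    static int square(int x) {
        return x * x;
    }

    // non-static: must be called on an instance of the class
    int triple(int x) {
        return 3 * x;
    }

    public static void main(String[] args) {
        System.out.println(square(4));      // no object needed, prints 16
        StaticDemo demo = new StaticDemo(); // create an instance
        System.out.println(demo.triple(4)); // called on the object, prints 12
    }
}
```

This is why the examples that follow mark their methods static: main() is static, so it can only call other methods directly if they are static too.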
Methods with no arguments

Below is the most basic example of a method you'll ever see: a simple one-line instruction that prints a sentence to the screen, and a simple call to execute it. You just have to use the name of the method to call it.

public class example {
    static void Hello() {
        System.out.println("Hello world");
    }

    public static void main(String[] args) {
        Hello(); // Calls the method
    }
}

The output of this method will be "Hello world".

Methods with a single argument

Below is an example of a simple increment method. It takes a single number as an argument, increments it by one, and returns the value.

public class example {
    static int increment(int x) {
        return x + 1;
    }

    public static void main(String[] args) {
        System.out.println(increment(5));
    }
}

6

Methods with multiple arguments

The method shown below is the kind of method you'll tend to see in real-life situations. Since it's a method dealing with integers and also returns a number, we declare it to be of type int. Secondly, since we wish to return a value, we remove the void keyword.

public class example {
    static int add(int x, int y) {
        return x + y;
    }

    public static void main(String[] args) {
        System.out.println(add(3, 5));
    }
}

Here, when calling the add method, we added two input arguments (3 and 5) as well. It is important for the number of parameters and arguments to match. If there are three parameters, there should be three values sent as arguments. Also remember that returning a value does not print it. That's why we include println to display the returned values. Shown below is the displayed value of the above method.

8

This marks the end of the Java methods article. Any suggestions or contributions for CodersLegacy are more than welcome. You can ask any relevant questions in the comments section below.
https://coderslegacy.com/java/java-methods/
ostrstream s; // deprecated
float f = 6.6777;
s << f;              // insert f into the stream
string st = s.str(); // copy into a string

should be rewritten as follows:

std::ostringstream s;
float f = 6.6777;
s << f;                   // insert f into the stream
std::string st = s.str(); // copy into a string

The ostringstream class returns a string object rather than char *. In addition, it's declared in namespace std. As a rule, use istringstream and ostringstream instead of the deprecated istrstream and ostrstream classes.
http://www.devx.com/tips/Tip/14133
Let me warn you up front, this game engine is nowhere near production ready. It's very much a work in progress, with missing documentation, missing features, and crashes that are far too common. This is certainly not a game engine to choose today for game development; that's why this is just a preview instead of a Closer Look. It is, however, a shockingly capable game engine that you should keep your eye on! There is also a video available here.

What is the Banshee Engine?

So, what is the Banshee Engine? Currently at release 0.3, Banshee Engine is an open source, C++ powered 2D/3D game engine with a complete game editor. On top of that there is a managed scripting layer, enabling you to develop game logic using C#. It is available under a dual license, LGPL and a commercial "pay what you want" license… and yes, what you want to pay could be $0 if you so choose. Banshee is available on Github, and there are binaries available for download, although for now these are limited to Windows only. The engine also only targets Windows at the moment, but is being written with portability in mind.

The Editor

Here is the Banshee Editor in action:

The layout is pretty traditional. On the top left you have the various resources that make up your game. Below that you have the Hierarchy view, which is essentially your current scene's contents. At the bottom we have the logs. On the right-hand side is the Inspector, which is a context-aware editing form. Of course, centered in the view is the Scene view, which also has a Unity-like Game preview window. The interface is extremely customizable, with all tabs being closable, undockable, or even free-floatable. It works well on high DPI monitors and on multiple displays. It does occasionally have issues with mouse hover and cursor behavior, and sadly Tab doesn't work between text input fields, but for the most part the UI works as expected. In the 3D view you can orbit the camera using the right mouse button, pan with the middle mouse button and zoom in with the scroll wheel.
Of course, the left mouse button is used for selection. There are the traditional per-axis editing widgets for translations, rotations and scales. You have a widget in the top right corner for moving between various views as well as shifting between Perspective and Orthographic projection. Oddly, there doesn't appear to be an option for multiple concurrent views, nor, puzzlingly enough, are there axis markers (color-coded lines to show the location of the X, Y and Z axes). The editor idles nicely, using only 4% or so CPU at idle, meaning the engine is fairly friendly to laptop battery life. There are several built-in Scene objects, including geometric primitives. The engine also takes an Entity/Component approach, with several components built in that can be attached to a Scene Object:

Importing assets into the engine is as simple as dragging and dropping to the Library window:

With a resource selected, you can control how it is imported in the Inspector:

The importer can handle FBX, DAE and OBJ format 3D files as well as PNG, PSD, BMP and JPG images. You can also import fonts as well as shaders, in both GLSL and HLSL formats.

Coding

Coding in Banshee is done in one of two ways. You can extend the editor and engine using C++ code. The code itself is written in modern C++14, although documentation on native coding is essentially non-existent at this point in time. For games, the primary coding interface is C#. It currently supports C# 6 language features. To script a component, create a new Script in the Resources panel:

Next, select a scene object, then drag and drop the script onto the bottom of the form in the Inspector. Double-clicking the script will bring it up in Visual Studio, if installed. The script will have full IntelliSense in Visual Studio:

Scripting a component is a matter of handling various callbacks, such as OnUpdate(), which is called each frame. You can access the attached entity (er… Scene Object) via the .SceneObject member.
Here is a very simple script that moves the attached object by 0.1 units each update:

namespace BansheeEngine
{
    public class NewComponent : Component
    {
        private void OnInitialize()
        {
        }

        private void OnUpdate()
        {
            this.SceneObject.MoveLocal(new Vector3(0.1f, 0.0f, 0.0f));
        }

        private void OnDestroy()
        {
        }
    }
}

Documentation

This is very much a work in progress. Right now there is a solid reference for the Managed API (C#) and the Native API (C++), but the tools user manual is essentially a stub. There is an architecture cheat sheet which gives a pretty broad overview of the engine and how the pieces fit together. There is also a guide to compiling the engine from source. For those that are interested in giving things a go from C++ only, there is a C++ game example available here. Unfortunately there are no downloadable projects or managed examples, a glaring flaw at this point that makes it a lot harder to learn. As of right now, the lack of editor documentation or samples to get started with really does make it hard to learn, especially if you are trying to figure out whether something isn't working because you are doing it wrong, because the feature isn't implemented, or because there is simply a bug. That said, these are all things that should improve in time.

Conclusion

This is a game engine for early adopters only. It's not even close to ready for primetime. On the other hand, the kernel or core is there and remarkably robust. While not the most stable by any stretch of the imagination, and with documentation lacking, I think you will be surprised with just how capable this engine actually is. The potential for a great game engine is here under the surface, just waiting for a community to make it happen.
https://gamefromscratch.com/banshee-game-engine-preview/
PropertyChanges QML Type

Describes new property bindings or values for a state. More...

Properties
- explicit : bool
- restoreEntryValues : bool
- target : Object

Detailed Description

PropertyChanges is used to define the property values or bindings in a State. This enables an item's property values to be changed when it changes between states.

To create a PropertyChanges object, specify the target item whose properties are to be modified, and define the new property values or bindings. For example:

import QtQuick 2.0

Item {
    id: container
    width: 300; height: 300

    Rectangle {
        id: rect
        width: 100; height: 100
        color: "red"

        MouseArea {
            id: mouseArea
            anchors.fill: parent
        }

        states: State {
            name: "resized"; when: mouseArea.pressed
            PropertyChanges { target: rect; color: "blue"; height: container.height }
        }
    }
}

When the mouse is pressed, the Rectangle changes to the resized state. In this state, the PropertyChanges object sets the rectangle's color to blue and the height value to that of container.height. Note this automatically binds rect.height to container.height in the resized state. If a property binding should not be established, and the height should just be set to the value of container.height at the time of the state change, set the explicit property to true.

A PropertyChanges object can also override the default signal handler for an object to implement a signal handler specific to the new state:

PropertyChanges {
    target: myMouseArea
    onClicked: doSomethingDifferent()
}

Note: PropertyChanges can be used to change anchor margins, but not other anchor values; use AnchorChanges for this instead. Similarly, to change an Item's parent value, use ParentChange instead.

Resetting Property Values

The undefined value can be used to reset the property value for a state. In the following example, when myText changes to the widerText state, its width property is reset, giving the text its natural width and displaying the whole string on a single line.
Rectangle {
    width: 300; height: 200

    Text {
        id: myText
        width: 50
        wrapMode: Text.WordWrap
        text: "a text string that is longer than 50 pixels"

        states: State {
            name: "widerText"
            PropertyChanges { target: myText; width: undefined }
        }
    }

    MouseArea {
        anchors.fill: parent
        onClicked: myText.state = "widerText"
    }
}

Immediate Property Changes in Transitions

When Transitions are used to animate state changes, they animate properties from their values in the current state to those defined in the new state (as defined by PropertyChanges objects). However, it is sometimes desirable to set a property value immediately during a Transition, without animation; in these cases, the PropertyAction type can be used to force an immediate property change. See the PropertyAction documentation for more details.

Note: The visible and enabled properties of Item do not behave exactly the same as other properties in PropertyChanges. Since these properties can be changed implicitly through their parent's state, they should be set explicitly in all PropertyChanges. An item will still not be enabled/visible if one of its parents is not enabled or visible.

See also States example, Qt Quick States, and Qt QML.

Property Documentation

explicit : bool

If explicit is set to true, any potential bindings will be interpreted as once-off assignments that occur when the state is entered. In the following example, the addition of explicit prevents myItem.width from being bound to parent.width. Instead, it is assigned the value of parent.width at the time of the state change.

PropertyChanges {
    target: myItem
    explicit: true
    width: parent.width
}

By default, explicit is false.

restoreEntryValues : bool

This property holds whether the previous values should be restored when leaving the state. The default value is true. Setting this value to false creates a temporary state that has permanent effects on property values.

target : Object

This property holds the object which contains the properties to be changed.
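As a sketch of the restoreEntryValues behavior described above (this example is not from the original page; the ids are my own), setting it to false makes the state's assignment stick after the state is left:

```qml
import QtQuick 2.0

Rectangle {
    id: rect
    width: 100; height: 100
    color: "red"

    MouseArea {
        id: mouseArea
        anchors.fill: parent
    }

    states: State {
        name: "pressed"; when: mouseArea.pressed
        PropertyChanges {
            target: rect
            restoreEntryValues: false  // "red" is not restored on state exit
            color: "green"
        }
    }
}
```

Once the mouse is released and the state is left, the rectangle stays green rather than reverting to red.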
https://doc.qt.io/archives/qt-6.0/qml-qtquick-propertychanges.html
jGuru Forums
Posted By: jon_richardson
Posted On: Friday, January 18, 2002 09:58 AM

I am an extreme newbie to Java and JBuilder. (I downloaded JBuilder 6 Personal.) Here are the steps I took, and my problem. I'm making a very simple inheritance example - two classes, one named "Person", and another class "Student" extending Person. I open JBuilder 6, choose new project, set the type to "jpx" as opposed to "jpr". (I don't know the difference between the two.) I name the project "wp1". I choose "(default project)" as the template, then hit finish. Then I choose "new class", name the class "Person", hit "finish". I write out the declaration of Student in the same file as Person. I compile, and JBuilder informs me that since Student is a public class, it needs its own .java file named after it. So I choose "new class" again, and then transfer the Student class to it. I compile the Person class - OK. I compile the Student class and now it informs me that it can't find the Person class I'm trying to inherit from. They both have "package wp1" stated at the top. Apparently that's not enough, so I go to the top of Student.java and do "import wp1.Person", and the autocomplete function even offers it as a suggested completion, so I imagine that it will now work. But no, I compile, and now it complains about the very import that it suggested:

"Student.java": Error #: 302 : cannot access class wp1.Person; java.io.IOException: class not found: class wp1.Person at line 2, column 12

Can anyone tell me what I'm doing wrong, and what I'm supposed to do?
http://www.jguru.com/forums/view.jsp?EID=726235
#include <dumper.hh> Inheritance diagram for lestes::std::readable_dumper: Dumps every instance reachable from start_at to given ostream. Returns given ostream. Marks the keystone. Marks all directly reachable parts of the class. The method must be overriden for each inherited class. It should contain abc.gc_mark() for each field abc of the inherited class and call to gc_mark() of the direct ancestor of the class. Does nothing for keystone, only stops processing of ancestors. Reimplemented from lestes::std::mem::keystone. uid, set to zero on creation, only set on demand in uid_get Reimplemented from lestes::std::object. should the dumper respect barriers (see object::dump_barrier)
http://lestes.jikos.cz/uml/classlestes_1_1std_1_1readable__dumper.html
Compiling My Year With An Interpreter – Python Programming from json import loads from bs4 import BeautifulSoup import mechanize api_key = "8def4868-509c-4f34-8667-f28684483810%3AS7obmNY1SsOfHLhP%2Fft6Z%2Fwc46x8B2W3BaHpa5aK2vJwy8VSTHvaPVuUpSLimHkn%2BLqSjT6NERzxqdvQ%2BpQfYA%3D%3D" growth_coupon_url = "" + api_key("") br.select_form(nr=3) br["email"] = "email@domain.com" br["password"] = "password" logged_in = br.submit() growth_coupon = br.open(growth_coupon_url) json_obj = loads(growth_coupon.read()) for course_link in json_obj["results"]: try: course_page = br.open(str(course_link["couponcode_link"])) soup = BeautifulSoup(course_page) for link in soup.find_all("a"): req_link = link.get('href') if '' in str(req_link): print req_link br.open(str(req_link)) print "success" break except (mechanize.HTTPError,mechanize.URLError) as e: print e.code This has been my favorite automation throughout the semester. The program checks growthcoupon.com for 100% off coupon codes for paid courses offered at Udemy and enrolls me to those courses. I have uploaded the program to pythonanywhere.com which allows me to have the script run daily(for free accounts) without me having to worry about it. At this time I have over 800 courses at my Udemy account, each courses on an average costs 75$. 2. Conversation between two cleverbots This semester, I had to take Cognitive Science course. I enjoyed it. I had this assignment where I had to submit a page of conversation with cleverbot (). I submitted the assignment and later decided to bridge a conversation between two cleverbots. I used selenium module in python to have this done. It was great and kind of felt like an achievement. 3. Using pyautogui before my exams) Being lazy to copy notes at class, I had to rely on photos of the notes sent by my friend. I discovered that the photos were all in landscape and were 107 pictures. 
I had come across pyautogui in Al Sweigart's course at Udemy and quickly wrote some 5-7 lines of code to open each picture, rotate the image and save it, while I had my dinner. By the way, I had no clue I had been enrolled in Al Sweigart's course until I opened my account to check which courses I had. All thanks to pythonanywhere.com, which runs my program to enroll in Udemy courses on a daily basis.

4. Automating signups and making apps at Dropbox (for unlimited space)

I had been involved with some people from IIT on some app ideas. We needed cloud space and agreed to use Dropbox. I had been given a bunch of emails with a common password. I wrote a program in Python to do the signups, and later wrote a program to make apps at Dropbox and get the API keys and secret keys to access the files in there. Unfortunately the project never continued, for various reasons.

Overall, I had a good year, but most of the time was spent at college and at an internship (doing mobile apps using Apache Cordova). For the new year, I will talk to my manager about switching to some Python projects. My new year resolution would be to "write more code and continue blogging about it."

Well, that's all I can think of for now. I would welcome some suggestions for my new year resolution. Do comment below. Once again, a Happy New Year.
http://www.thetaranights.com/compiling-my-year-with-an-interpreter/
Exchange Queue & A: Across the Forest

Sharing mailboxes and data across forests is possible, but it requires some fancy configuration footwork.

Henrik Walther

Growing Pains

Q. We're a large company running Exchange 2010. Our Exchange clients are a mix of Outlook 2007 and Outlook 2010. We've just acquired another company running Exchange 2007, and using Outlook 2007 clients. We're eventually going to merge the infrastructures, but because each company consists of more than 100,000 users, we need to establish rich coexistence before moving on to the actual migration. Could you provide us with some recommendations on how to approach coexistence for Exchange with this specific mix of Outlook clients? We're aiming to share free/busy information and calendars between the Exchange forests.

A. This is a somewhat complex topic, but you essentially have the following options:

- Use the Microsoft Federation Gateway (MFG) service
- Use the Exchange Availability service

The MFG service is a relatively new service (see Figure 1), introduced with Exchange 2010. The federation features in Exchange 2010 take advantage of this new Windows Live-based service. Basically, MFG acts as a trust broker between Exchange 2010 organizations that want to share data. It's important to stress that although it uses a Microsoft gateway to establish federation trusts between Exchange 2010 organizations, no data from any of the involved Exchange organizations is shared with Microsoft. The MFG simply ensures security when publishing domain information and enabling domain access to data.

Figure 1 The Microsoft Federation Gateway helps share data between domains.

The new federation features don't require any trust relationships or data replication between the involved Exchange forests. In your case, for an Exchange 2010 user to view the free/busy status for a user in another organization, the Outlook 2007/2010 user would simply type that person's e-mail address in the scheduling assistant.
Using sharing policies, you can specifically set what data you want to share and at what level that data should be shared (see Figure 2). Within the organizational sharing policy, you can create a sharing policy. Here you can specify how much information users should be able to share, and with which domains.

Figure 2 When establishing a new share relationship, you have granular control over the extent of sharing.

You mention the company you've acquired uses Exchange 2007. It's important to note that Exchange 2007 doesn't directly support MFG. You can deploy an Exchange 2010 Client Access server within the Exchange 2007 forest, and take advantage of down-level proxy support. MFG doesn't require a mailbox user from the other Exchange forest to be represented as a mail-user object, so it isn't required to configure GALSync between the forests. It would, however, make sense in a coexistence scenario where you want to provide your users with a unified Global Address List (GAL) experience. If you use the built-in availability service, you must configure GALSync (to represent mailbox users in the remote forest as mail-enabled user [MEU] objects in the local forest) and use the Add-AvailabilityAddressSpace cmdlet to add the respective namespace. You should also note that each Exchange forest should be able to connect to the availability service in the other org using the Fully Qualified Domain Name specified for the Internal URL of the Exchange Web Services virtual directory. You should establish a forest-wide trust relationship between the forests. This will let you configure the availability service to retrieve free/busy information on a per-user basis. For more information on cross-forest availability topologies, see the Exchange 2010 TechNet documentation. In your scenario, you don't have Outlook 2003 clients in the mix.
If you did, you would also need to configure InterOrg to replicate free/busy information across the forests, as Outlook 2003 doesn't support the Availability service. Because you want to configure temporary coexistence between two Exchange forests on private networks, I would recommend using the Availability service.

Cross Forests

Q. We're planning on configuring cross-forest availability between two Exchange forests using the Add-AvailabilityAddressSpace cmdlet. Is this supported if the Exchange forests share the same SMTP address space?

A. Yes, this scenario is supported, but there's an important detail. When using the Add-AvailabilityAddressSpace cmdlet to configure free/busy and calendar sharing between two Exchange forests, you must deploy GALSync to represent mailbox users in the remote forest as MEU objects in the local forest (see Figure 3). As you probably know, an MEU object forwards all e-mail to an external e-mail address (the target Address attribute of the object).

Figure 3 Sharing calendar data across forests requires the proper configuration.

When an Exchange mailbox user in one forest requests free/busy information for a mailbox user in another forest, the availability service sends the request to the external e-mail address of the MEU object that represents the mailbox user in that other forest. When using the same SMTP address space in both Exchange forests, the domain part of the target Address will obviously be the same in both forests. For this reason, the availability service won't know how to reach the other forest. Exchange uses auto-discover to determine where it should send the availability request. Because auto-discover uses the primary e-mail address for a mailbox user or the external e-mail address for an MEU object, this needs to point to the address space of the forest where the user's mailbox is located. So in order to get this working properly, you must use a unique primary SMTP address space for each organization.
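For reference, registering the remote forest's address space is a single cmdlet call in the Exchange Management Shell. A sketch only — the forest name is hypothetical, and the PerUserFB access method assumes the forest trust discussed above (OrgWideFB avoids that requirement):

```powershell
# Run from the Exchange Management Shell in the local forest.
Add-AvailabilityAddressSpace -ForestName "contoso.com" `
    -AccessMethod PerUserFB -UseServiceAccount $true
```
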
Then include the shared SMTP address space as a secondary proxy address on the MEU objects. You must also configure auto-discover for the two unique address spaces and configure cross-forest connectors so mail flow works accordingly.

Share and Share Alike

Q. We're an Exchange 2010 shop, and our users often access a shared mailbox. We added. We're using Outlook 2010 clients. Do you know if this is possible?

A. This is a common situation, and not just with Exchange 2010 and Outlook 2010. Although it isn't well known, you've been able to do this via a registry value for some time. Note that it needs to be created under HKEY_CURRENT_USER.

Exchange 2010 Migration

Q. We're in the planning stages of deploying Exchange 2010 in our company. We have approximately 130,000 mailboxes on Exchange 2007. When we transitioned to Exchange 2007, we found a Microsoft white paper that explained how Microsoft IT designed the Exchange 2007 solution for Microsoft. Do you know if a similar white paper for Exchange 2010 exists?

A. One of my colleagues, Kay Unkroth at Biblioso Corp., wrote the Exchange 2007 solution for Microsoft paper for the Microsoft IT Showcase. He also wrote many other Exchange 2007-specific white papers, some of which I actively worked on as well. You can find all the Exchange 2007-specific white papers published by Microsoft IT Showcase over the years in the TechNet Library. Microsoft IT has also published Exchange 2010-specific white papers. These were published when Exchange 2010 was released. You can find them on the same TechNet Library page. Microsoft IT Showcase also publishes white papers describing other technologies such as Lync Server, Microsoft Forefront, Office 365 and so on. You might want to consider subscribing to the Microsoft IT Showcase RSS feeds to keep up on what has been released.

Henrik Walther is a Microsoft Certified Master: Exchange 2007 and Exchange MVP with more than 15 years of experience in the IT business.
He works as a technology architect for Timengo Consulting and as a technical writer for Biblioso Corp. (a U.S.-based company specializing in managed documentation and localization services).
https://technet.microsoft.com/de-de/library/17ad62d0-e14c-48f6-884a-194938b9ae91