Words in Boxes: Nouns, verbs, and occasionally adjectives. jsulak What I’ve Been Reading, 18-month Edition<h2>History</h2> <ol> <li><p>Robert A. Caro, <a href="">The Power Broker</a>. A great, highly detailed, even epic book about Robert Moses and city government.</p></li> <li><p>John Darwin, <a href="">Unfinished Empire: The Global Expansion of Britain</a>.</p></li> <li><p>Antony Beevor, <a href="">The Second World War</a>. A comprehensive history of the Second World War. The one book to read about WWII.</p></li> <li><p>Paul Kennedy, <a href="">Engineers of Victory: The Problem Solvers Who Turned the Tide in the Second World War</a>. A look at the logistical and engineering challenges behind WWII.</p></li> <li><p>Ian W. Toll, <a href="">Pacific Crucible: War at Sea in the Pacific, 1941–1942</a>. From Pearl Harbor to Midway, how the unready US Navy learned how to fight:</p> <blockquote> <p>The interior of the <em>Enterprise</em>.</p> </blockquote> </li> <li><p>Greg Woolf, <a href="">Rome: An Empire’s Story</a>.</p></li> <li><p>Jonathan Steinberg, <a href="">Bismarck: A Life</a>. I only read the first third - too dull even for me.</p></li> </ol> <h2>Fiction</h2> <ol> <li><p>Adam Johnson, <a href="">The Orphan Master’s Son</a>. By far the best book I’ve read this year, and a deserved winner of the <a href="">Tournament of Books</a>.</p></li> <li><p>Gillian Flynn, <a href="">Gone Girl</a>. A solid page-turner. Also from the <a href="">ToB</a>.</p></li> <li><p>John Green, <a href="">The Fault in Our Stars</a>. A very good YA novel that I only picked up because of its strong showing in the <a href="">ToB</a>.</p></li> <li><p>Iain M. Banks - <em>Culture</em> novels: <a href="">Consider Phlebas</a>, <a href="">Use of Weapons</a>, <a href="">The Hydrogen Sonata</a>, and <a href="">The Player of Games</a>. I had only recently heard of Iain M. Banks and the <em>Culture</em> novels. 
I started with <em>Consider Phlebas,</em> which is by far the weakest. Banks lets his tendency toward grotesquery for its own sake get away from him. If I hadn’t heard such universally great things about the rest, I would have stopped. I’m glad I didn’t. <em>The Player of Games</em> is excellent, and <em>The Hydrogen Sonata</em> is solid as well. I did not care for <em>Use of Weapons</em> - I couldn’t buy into the central conceit or the protagonist.</p></li> <li><p>John Scalzi - <a href="">The Ghost Brigades</a>, <a href="">The Last Colony</a>, <a href="">Zoe’s Tale</a>. Quick, easy, and disposable science-fiction reads. <em>Zoe’s Tale</em> is especially forgettable - it’s a straight (and weak) retelling of <em>The Last Colony</em>, with only a few new scenes.</p></li> <li><p>James Clavell, <a href="">Shogun</a>.</p></li> <li><p>Frank Herbert, <a href="">Dune</a>. I finally read the classic. It’s earned its reputation.</p></li> <li><p>Stephen L. Carter, <a href="">The Impeachment of Abraham Lincoln</a>. Despite the good press, I couldn’t get into it and stopped about a quarter of the way through.</p></li> <li><p>Vernor Vinge, <a href="">Rainbows End</a>. Strangely relevant given the recent revelations about the NSA.</p></li> <li><p>Audur Ava Olafsdottir, <a href="">The Greenhouse</a>.</p></li> <li><p>Kim Stanley Robinson, <a href="">2312</a>. As a big fan of the <em>Red Mars</em> series, I was excited to read this sort-of-sequel. But the book coyly (and disappointingly) refuses to mention anything directly about them. Forgettable and poorly constructed.</p></li> </ol><img src="" height="1" width="1" alt=""/>jsulak I’ve Been Reading<ol> <li><p>Haruki Murakami, <a href="">1Q84</a>. Entertaining even at 900+ pages. Top seed (and my favorite) in this year’s <a href="">Tournament of Books</a>.</p></li> <li><p>George Dyson, <a href="">Turing’s Cathedral: The Origins of the Digital Universe</a>. 
I’m listening to the audiobook during late-night baby-calming sessions, which is a new experience for me (on both counts, I suppose). Still in progress, but good so far.</p></li> <li><p>Sara Wheeler, <a href="">Cherry: A Life of Apsley Cherry-Garrard</a>. Continuing my Heroic-Age-of-Antarctic-Exploration reading, this is a biography of the youngest member of Scott’s last expedition and the author of <a href="">The Worst Journey in the World</a>. Interesting, but not worth your time unless you’re into this stuff. </p></li> <li><p>Neal Stephenson, <a href="">Reamde</a>. A great but long page-turner. You can tell he had fun writing this one. Recommended. </p></li> <li><p>Michael S. Malone, <a href="">Infinite Loop</a>. I picked it up after hearing <a href="">John Siracusa’s recommendation</a>.</p></li> <li><p>Vernor Vinge, <a href="">Children of the Sky</a>. The long-awaited sequel to <a href="">A Fire Upon the Deep</a>. Vinge is one of the most imaginative science-fiction writers when it comes to creating and depicting alien species. Recommended if you’re into sci-fi (as are the two previous novels - ignore the heinous covers).</p></li> </ol>jsulak Moving repositories from Bitbucket to Github<p><a href="">move</a>.</p> <p>This is a guide on how to do just that. It assumes that your repos have a small number of committers, and that you’re comfortable with the command line.</p> <h2 id="cleanupyourmercurialrepository">1. Clean up your Mercurial repository</h2> <p>First, we’ll tidy up the committer usernames recorded in the repository’s history. </p> <p>To do this, we can use Mercurial’s <a href="">convert extension</a>, which can create a new, ‘filtered’ repository from an existing repository. In our case, we’ll alter the usernames to match what we want to appear on Github. </p> <p>The convert extension is enabled by adding these lines to our ~/.hgrc:</p> <pre><code>[extensions]
hgext.convert=</code></pre> <p>Next, we’ll create a text file that maps old usernames to our new usernames. This is easy with some bash magic. 
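To make what each stage of the upcoming one-liner does concrete, here is the same sort/uniq/sed chain run against a canned sample of `hg log` output (the usernames are invented for illustration):

```shell
# Invented sample of `hg log` output, filtered down to its "user:" lines:
sample='user:        alice
user:        bob <bob@localhost>
user:        alice'

# Sort brings duplicate author lines together, uniq collapses them,
# and sed strips the "user:" prefix along with its padding spaces.
printf '%s\n' "$sample" | sort | uniq | sed 's/user: *//'
# alice
# bob <bob@localhost>
```

The real command also adds `grep user:` up front, because full `hg log` output contains changeset, date, and summary lines as well.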
Go to your repository directory and on the command line type:</p> <pre><code>hg log | grep user: | sort | uniq | sed 's/user: *//' > users.txt</code></pre> <p>users.txt now contains a sorted and de-duplicated list of all the usernames attached to commits, like this:</p> <pre><code>username
username@localhost
username <username@gmail.com></code></pre> <p>Edit this file so it maps old usernames to new usernames. If you have any existing Github repositories, I recommend running <code>git log</code> to get the exact username / email pair Github expects.</p> <pre><code>username=Username Fullname <username@gmail.com>
username@localhost=Username Fullname <username@gmail.com>
username <username@gmail.com>=Username Fullname <username@gmail.com></code></pre> <p>We’re ready to convert (replace SOURCE_HG_REPO and CLEAN_HG_REPO with real directory names, of course):</p> <pre><code>cd ..
hg convert --authors SOURCE_HG_REPO/users.txt SOURCE_HG_REPO CLEAN_HG_REPO</code></pre> <p>And bam, you have a new, tidied Mercurial repository in CLEAN_HG_REPO.</p> <h2 id="converttherepositorytogit">2. Convert the repository to Git</h2> <p>Converting a repository from Mercurial to Git is simple. We’ll use a script called <a href="">fast-export</a>. The following instructions are adapted from <a href="">Dan Benjamin’s</a>.</p> <p>The easiest way to get fast-export is to clone it from its Git repository:</p> <pre><code>cd ~/tmp
git clone git://repo.or.cz/fast-export.git</code></pre> <p>Now we’ll create a new, empty Git repository, and use fast-export to populate it.</p> <pre><code>git init DEST_GIT_REPO
cd DEST_GIT_REPO
~/tmp/fast-export/hg-fast-export.sh -r CLEAN_HG_REPO
git checkout HEAD</code></pre> <p>And bam, you have a new Git repository, complete with your entire commit history. Don’t forget: you’ll need to convert your .hgignore into a .gitignore.</p> <h2 id="pushthenewrepositorytogithub">3.
Push the new repository to Github</h2> <p>Now it’s time to take your shiny new Git repository and slap it up on Github. Go to your Github profile page, and click “New Repository.” Fill out the form, and follow the instructions to import an existing Git repo, which goes something like this:</p> <pre><code>cd DEST_GIT_REPO
git remote add origin git@github.com:username/REPO_NAME.git
git push -u origin master</code></pre> <h2 id="dontforgettomigrateotherdata">4. Don’t forget to migrate other data</h2> <p>I haven’t found a way to migrate these automatically, but don’t forget about:</p> <ul> <li>Issues</li> <li>Wiki. I tended to use a one-page Wiki as a landing page. The Github convention is to use a README.md Markdown document. You’ll have to convert the Wiki markup to Markdown.</li> </ul> <h2 id="deletingyourbitbucketrepository">5. Deleting your Bitbucket repository</h2> <p>On the “Admin” tab of your Bitbucket repository page, you can choose to delete the repository. Make sure to enter the URL of your new Github repository in the “Redirect to” field!</p>jsulak Wordcycler 2.0 Released<p>It’s been a long time coming, but I’ve released a new version of <a href="">Wordcycler</a>, the program for two-way Instapaper sync between Windows and your e-book reader. There is only one major change — now, instead of scraping the site, Wordcycler uses the official <a href="">Instapaper full API</a>.</p><p>This means two things for current users:</p><ol><li><p.</p></li> <li><p>To use Wordcycler, <strong>you must be an <a href="">Instapaper subscriber</a></strong>. Only subscriber accounts have access to the full API. It’s only $1 a month and helps keep Instapaper running, so I recommend <a href="">you subscribe</a> if you haven’t already.</p></li> </ol><p.</p><h2>The Future of Wordcycler</h2><p.</p><p.</p>jsulak What I’ve Been Reading (History)<ol> <li>David Crane, <a href="">Scott of the Antarctic</a>. 
My recent fascination with Scott’s two polar expeditions and the “heroic age” of Antarctic exploration probably merits its own post. <a href="">Apsley Cherry-Garrard</a>’s first-hand account, the aptly named <a href="">The Worst Journey in the World</a>, is also a great read.</li> <li>Alfred Lansing, <a href="">Endurance: Shackleton’s Incredible Voyage</a>.</li> <li>Charles Mann, <a href="">1491: New Revelations of the Americas Before Columbus</a>. This is maybe <em>the</em> best history book I’ve ever read. Interesting facts on almost every page. </li> <li>Charles Mann, <a href="">1493: Uncovering the New World Columbus Created</a>. Also endlessly fascinating, if not quite as strong as its predecessor. </li> <li>Max Hastings, <a href="">Winston’s War: Churchill, 1940-1945</a>. I was surprised by how desperate the British plight in WWII was, how impotent they actually were through most of it, and how much it bankrupted the empire.</li> </ol> <p>That will probably be the last of the non-fiction for a while; I’m just starting <a href="">1Q84</a>, which clocks in at over 900 pages.</p>jsulak Switching to a Standing Desk<p>About a month ago I decided to try working at a standing desk. Four trashcans, two shelves, and several books later, I created one.</p> <p>Despite its humble appearance, it functioned beautifully. It’s since been replaced by an honest, actual adjustable-height desk, and I’m still standing and liking it a lot.</p> <p>So why did I do this? This whole idea of standing while working has gotten a lot of talk recently, most prominently in the New York Times article <a href="">Stand Up While You Read This</a>. This <a href="">blog post</a> by Jesse Noller and <a href="">Episode 21 of Build and Analyze</a> finally convinced me to give it a shot.</p> <p>Here’s what I was hoping to get out of it:</p> <ul> <li><strong>Better posture.</strong> Like many people, I tend to slouch in my chair. 
I find it easier to keep better posture while standing. </li> <li><strong>More active.</strong> The thing about standing at a desk is that I'm not just standing -- I'm fidgeting, stepping back and forth, and generally moving around. This makes me... </li> <li><strong>More alert.</strong> I tend to get sleepy after lunch. Having to keep myself upright without the aid of a chair counteracts that. It keeps me on task.</li> <li><strong>Respect(?)</strong> When people first come across you using your standing desk, they give you an expression that could either be respect or what's-up-with-that-dude wariness (I can never tell). </li> </ul> <p. </p> <p. </p> <p>So, if you want to get started, here's what I recommend:</p> <p>Rig something on top of your existing desk. It's cheap and easy (and fun!). I used parts scavenged from around the office. Some have used <a href="">soft drink cans</a>. Adjustable desks are expensive so you want to make sure that you'll actually use it before shelling out.</p> .</p> <p>I probably fall more in the second camp, but I haven't bought anything special. I simply wear my normal shoes and sit down when it starts to hurt. Seems to work for me.</p><img src="" height="1" width="1" alt=""/>jsulak: A Simple Online Countdown Timer<p>I've created a small, HTML5 online countdown timer called <a href="">Clockster</a>. There's not much to it - you enter a time, the browser starts counting down in big numbers, and then beeps and flashes when time expires. It has a few interesting features:</p><ul><li>It's completely HTML based. Most of the timers out there are Flash based, because until recently that was really the only way to play audio in the browser (for the beeping). But now all the major browsers support HTML5 audio, rendering it unnecessary.</li> <li>If you resize the window, the numbers resize as well. 
So you can either see it from across the room, or make a small clock in the corner of your screen.</li> <li>There are controls (pause, restart) on the countdown screen.</li> <li>The time is specified in the URL. For example, <a href=""></a>.</li> <li>The front page or any specific timer can be pinned to the Windows taskbar using Chrome or Internet Explorer.</li> </ul><p>On the server, Clockster is written in ASP.NET MVC, and hosted on <a href="">AppHarbor</a>. (It’s similar to <a href="">Heroku</a>, if you’re familiar with it.) For a small app like Clockster, it’s completely free.</p><p>On the client, <a href="">Clockster</a> uses <a href="">jQuery</a> and <a href="">date.js</a>. The audio is handled by HTML5 audio, which is why you need a modern browser to use it. (That’s the fun of doing a completely non-commercial personal site.)</p><p>Clockster was also a great opportunity to check out <a href="">NuGet</a>, the .NET package manager. I’m really impressed. It’s obviously heavily inspired by Ruby gems, and it makes installing all sorts of project dependencies (like jQuery and Elmah) really easy.</p><p><a href="">Clockster</a> was a fun project and I hope you check it out.</p>jsulak Wordcycler 1.4 Released<p>I just released <a href="">Wordcycler 1.4</a>. The only change is that Wordcycler will now fetch every page of multi-page articles, which has been a long-time user request.</p> <p>If you’re a frequent Instapaper user, you might ask why it has taken so long when <a href="">Instapaper</a>). </p> <p>The technology behind Wordcycler’s page cleaning is <a href="">NReadability</a>, an open-source C# port of the JavaScript <a href="">Readability bookmarklet</a>. It strips away all the non-article content of a web page, and cleans up the formatting - just like Instapaper’s text view.  </p> <p>NReadability is maintained by Marek Stój, and powers his <a href="">Instafetch</a>. 
</p> <p.</p> <img src="" height="1" width="1" alt=""/>jsulak Emacs Server Play Nice With Windows<p>Some people like to use Emacs for everything, even <a href="">Google Maps</a>.. </p> <p. </p> <p>(The following instructions are all a distillation of the instructions available from the <a href="">Emacs Wiki</a>.) </p> <p>First, add this to your .emacs: </p> <pre>(require 'server) (server-start)</pre> <p>This will start the server when Emacs starts. To make things even more fun, this sometimes fails on Windows 7. See <a href="">here</a> for a solution. </p> <h2>Windows Explorer</h2> <p>We're going to add a right-click menu to Windows Explorer labeled "Open with Emacs," which does exactly what it says it does. </p> <p>The next step is to edit the registry. Here's how: </p> <p>1. Open up the Registry Editor (Windows-R, type "regedit"). <br />2. Create a new key [HKEY_CLASSES_ROOT\*\Shell\Edit in Emacs\Command]. <br />3. Set the value of the command to "C:\Program Files\emacs-23.2\bin\emacsclientw.exe" -na "C:\Program Files\emacs-23.2\bin\runemacs.exe" "%1" (adjusting the paths as necessary).</p> <center><a href=""><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="registry" border="0" alt="registry" src="" width="513" height="201" /></a></center> <p.</p> <h2>Visual Studio</h2> <p. </p> <p>The theory is the same as before, but the details are a bit different. 
Go to Tools > External Tools, and add a new tool with the following configuration: </p> <blockquote> <p>Title: Edit In Emacs <br />Command: C:\Program Files\emacs-23.2\bin\emacsclientw.exe <br />Arguments: -na "C:\Program Files\emacs-23.2\bin\runemacs.exe" "$(ItemPath)" </p> </blockquote> <p><a href=""><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="visual_studio_external_tools" border="0" alt="visual_studio_external_tools" src="" width="475" height="464" /></a> </p> <p>Now, when you go to Tools > Edit in Emacs, the current file should appear in Emacs. </p> <p><a href=""><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="visual_studio_menu" border="0" alt="visual_studio_menu" src="" width="339" height="187" /></a></p> <p>To assign this to a keyboard shortcut, go to Tools > Options, then Environment > Keyboard. In the "Show commands containing" text box, type in "externalcommand." You’ll see a bunch of options that look like this: </p> <p><a href=""><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="visual_studio_keyboard" border="0" alt="visual_studio_keyboard" src="" width="492" height="444" /></a> </p>jsulak Wordcycler<p>Earlier this year, <a href="">Carlo Zottmann</a> released a little program called <a href="">Ephemera</a>, which gave Mac users the ability to synchronize articles between <a href="">Instapaper.com</a> and their USB e-book reader. I heard nothing but great things about Ephemera, and I really wanted to use it myself. 
But I'm a PC, not a Mac, so I was out of luck.</p> <p>But that’s nothing a bit of code couldn’t fix. So a few months of weekend hacking later, I'm happy to announce <a href="">Wordcycler</a>, which brings two-way Instapaper syncing goodness to Windows. <img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="wordcycler" border="0" alt="wordcycler" src="" width="341" height="68" />So what is <a href="">Wordcycler</a>? </p> <blockquote> <p>Wordcycler lives in your tray. When you connect your reader via USB, Wordcycler fetches your unread Instapaper items, saves them on the device as single articles, and (if you want) ejects it. All automatically and in seconds.</p> .</p> <p>Wordcycler can also download the pre-made all-in-one bundles available from Instapaper. Either way you choose to consume, Wordcycler is the fast, easy, Whispernet-charge-free way of experiencing Instapaper on your reading device.</p> </blockquote> <p>If this kind of thing interests you, please check it out. I'd love to hear your feedback, bug reports, etc. I plan to keep adding features in the coming weeks (support for Instapaper's folders being the big no-brainer.)</p> <img src="" height="1" width="1" alt=""/>jsulak 0.2 Released<p>Last week, a new version of <a href="">XSpec</a>, an open-source unit-testing framework for XSLT (and now XQuery) was released.  XSpec allows you to write test suites (called scenarios) to test XSLT functions and templates and XQuery functions.  </p> <p>It’s good stuff.  If you do any work with these technologies, I recommend checking out the <a href="">Getting Started</a> guide and the <a href="">User Guide</a> for an introduction.  </p> <p>I’m particularly excited about this release because I’m involved in the project – well, in a bit part anyway, mainly providing testing and documentation.  
The overwhelming bulk of the credit for the new release is due to <a href="">Florent Georges</a>, who has really kicked the project back into gear and has rewritten the core functionality to support XQuery in addition to XSLT.  After a year of dormancy, both development and the <a href="">mailing list</a> are active, which means that XSpec will continue to improve.</p>jsulak Orwell on Writing and Thinking<p>From <a href=""><em>Politics and the English Language</em></a>:</p> <blockquote> <p>…</p> </blockquote> <p>Worth rereading every once in a while.</p>jsulak Switcheroo version 0.4<p>I’ve released a <a href="">new version of Switcheroo</a>, the humble incremental-search task switcher for Windows.  Changes in this version include:</p> <ul> <li>The exclusion list that prevents certain windows from appearing in the main window is now editable from the options dialog.  (For example, “Program Manager.”) </li> <li>The main window now resizes to the full width and height of the list. </li> <li>Fixed a few minor bugs and tested on 32-bit Windows 7. </li> </ul> <p><a href=""><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="screenshot" border="0" alt="screenshot" src="" width="392" height="176" /></a></p>jsulak Windows Tools<p>When working, I like everything on my computer “just so.”  I want to do things quickly and with minimal mental overhead.  Here’s a list of (developer-centric) tools that make that possible:</p> <ul> <li><strong><a href="">VirtuaWin</a></strong>. This is hands-down the best <a href="">virtual desktop</a> manager for Windows.  With a bit of tweaking, you can switch between desktops instantaneously by bumping your mouse pointer at the edge of the screen while holding down the control key.  
</li> <li><a href=""><strong>xplorer2</strong></a>. I use the free “<a href="">lite</a>” version.  Great features:</li> <ul> <li>Tabbed views (just like your browser).  Keeps the taskbar clean.</li> <li>Filter a directory listing quickly using ctrl-h.  </li> <li>When in a folder, pressing F10 and enter starts a command prompt in that folder.  (With a bit more tweaking, get a cygwin bash prompt.)</li> </ul> <li><a href=""><strong>Cygwin + minTTY</strong></a>. :</li> <ul> <li>Color coded directory listings and grep results.  </li> <li>Press control-r and search backwards through your command history. </li> </ul> <li><a href=""><strong>Switcheroo</strong></a>. Yes, I wrote it, but what of it?  Switching between (and closing) running applications using incremental search keeps me from reaching for the mouse and breaking my concentration. </li> <li><a href=""><strong>Slickrun</strong></a><strong>.</strong> .  </li> <li><a href=""><strong>Emacs</strong></a><strong>.</strong> The One True Editor.  Learning it is a commitment, but worth it.  If you take the plunge, grab <a href="">Ctrl2Cap</a> and remap your caps lock key as another control key.  With a bit of setup, plays very well with Cygwin.</li> <li><a href=""><strong>Winsplit Revolution</strong></a><strong>.</strong>  Not as essential with Windows 7’s built-in window positioning, but the fusion mode is still great.  </li> <li><a href=""><strong>Password Safe</strong></a><strong>.</strong>  Securely encrypt a list of all your passwords.  Now you only have to remember one, which lets you online banking passwords more complex and harder to brute-force.  And you’ll stop using the same one everywhere.</li> </ul> <img src="" height="1" width="1" alt=""/>jsulak and productivity are like compound interest<blockquote> <p.</p> </blockquote> <p>– Richard Hammin, in <em><a href="">You and Your Research</a></em></p> <img src="" height="1" width="1" alt=""/>jsulak Projects<p>It's been a while since I've written here. 
In the interim, I've completed a few different small projects. Here they are in no particular order:</p> <ul> <li> <p><a href="">Switcheroo</a>. The humble incremental-search task switcher for Windows.</p> <p. </p> <li> <p><a href="">eXist XProc Extension Library</a>. A set of XProc extension steps for interacting with an eXist XML database from a client. Using these steps, you can conduct common eXist management tasks from XProc - loading resources, extracting resources, querying data, etc. They fill much the same role as the eXist Ant tasks.</p> <p>I wrote it as an experiment in creating a non-trivial reusable library in pure XProc. I think it came out fairly well. </p> <li> <p><a href="">Ruby Blue Visual Studio theme</a>. I really liked the various ruby blue themes available for other editors and wanted one for Visual Studio.</p></li></ul> <img src="" height="1" width="1" alt=""/>jsulak Roundup<a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="244" alt="1419405808_7dff1fea23_m" src="" width="184" align="right" border="0"></a> <p>Interest in XProc has really picked up recently, which is exciting to see. Here's a roundup of some of the recent activity: </p> <ul> <li><a href="">XProc: Step by Step</a>. Vojtech Toman (who's working on EMC's Calumet implementation) wrote a great introduction to XProc. Calumet itself is scheduled to be released under a developer's license on <a href="">June 15</a>. I don't use any of EMC's products, but it's encouraging that such a big player in the market is embracing XProc. <li><a href="">XProc Tutorial</a>. Roger Costello updated his excellent tutorial. It's a bit of a shame that it's in PowerPoint, but it's well organized and comes with several sample scripts. <li><a href="">Why XProc Rocks</a>. Joel Amoussou talks about XProc and news publishing. <li><a href="">XML Pipelines / XProc for bioinformatics</a>. 
Pierre Lindenbaum describes an interesting use of XProc.</li></ul> <p>The XProc spec is tantalizingly close to making the transition to proposed recommendation, and all the recent (and positive) attention makes me optimistic that XProc will be a standard that’s actually <em>used</em> in the real world. </p>jsulak Debugging XProc Pipelines<p>As I’ve worked more with XProc, I’ve written a couple of utility steps to help debug pipelines. Although they started as quick hacks, they’ve continued helping out long enough to be worth sharing. You can download the library <a href="">here</a>.</p> <p>The more interesting of the two is a step (wxp:assert) which asserts that the result of a given XPath expression evaluated against one document must be equal to that of a second XPath expression evaluated against a second document. </p> <p. </p> <p>For example, here’s an excerpt from an upconversion pipeline:</p><pre class="code"><p:xslt name="group-sections">
  <p:input port="stylesheet">
    <p:document href="group-sections.xslt"/>
  </p:input>
</p:xslt>

<wxp:debug-output step-name="6-sections-grouped" debug="true"/>

<wxp:assert label="No sections were deleted during grouping"
            xpath-source="count(.//subsection)"
            xpath-alternate="count(.//*[self::sect-1 | self::sect-2 | self::sect-3 | self::sect-4])"
            fail-on-error="false">
  <p:input port="alternate">
    <p:pipe port="result" step="parse-initial-subsections"/>
  </p:input>
</wxp:assert></pre> <p>(This also shows the wxp:debug-output step, which has probably been rendered obsolete by the latest version of Calabash. It simply functions as a p:identity step that also writes the XML it receives to disk for later review.)</p> <p>This instance of wxp:assert asserts that the number of subsection elements on the primary <em>source</em> input port matches the number of sect-1 through sect-4 elements on the <em>alternate</em> input port. The <em>source</em> document is produced by the preceding step, and the <em>alternate</em> document is piped in from a step earlier in the pipeline. This is a quick way to ensure I don’t do anything too lame-brained in the “group sections” step.</p> <p>Here’s another instance from the same upconversion pipeline. This tests the output from an XSLT step that uses regular expressions to parse out section numbers from title elements. </p><pre class="code"><wxp:assert label="No empty enums after parsing sections"
            xpath-source="count(.//enum[not(.//text()[string-length(normalize-space(.)) gt 0])])"
            xpath-alternate="0"
            fail-on-error="true"/></pre> <p>In this case, the <em>alternate</em> XPath expression is a constant (0), so I don’t need to provide a document on the <em>alternate</em> port. </p> <p. </p> <p>There are a few interesting things in the step’s <a href="">source</a> worth checking out. Here’s a summary of how wxp:assert works:</p> <ol> <li>It combines the two source documents into a single document using p:pack. </li> <li>That document is passed to an XSL stylesheet, which uses the saxon:evaluate extension function to evaluate the two XPaths against their respective nodesets. </li> <li>If the two results differ, it throws an XProc error and terminates the pipeline, or merely reports the mismatch, depending on the value of the fail-on-error option.</li> <li>A final p:identity step pipes the initial input from the step’s <em>source</em> port through to its <em>result</em> port.</li></ol> <p>Of course, using a Saxon extension function does tie the pipeline to Calabash (or at least, a Saxon-based processor). I’m interested in hearing ideas about alternate approaches. </p>jsulak Review: Commenting XSLT Regular Expressions<p>You learn a lot from reading other people’s code.  
For example, the other day I ran into a <a href="">clever trick</a> in Jeni Tennison's <a href="">XSpec</a> code for commenting regular expressions in XSLT: </p> <pre class="code"><span style="color: blue"><</span><span style=" color: #a31515">xsl:variable </span><span style=" color: red">name</span><span style=" color: blue">=</span><span >"</span><span style=" color: blue">attribute-regex</span><span >" </span><span style=" color: red">as</span><span style=" color: blue">=</span><span >"</span><span style=" color: blue">xs:string</span><span >"</span><span style=" color: blue">> <</span><span style=" color: #a31515">xsl:value-of</span><span style=" color: blue">> </span><span >\s+ (\S+) </span><span style=" color: blue"><!-- </span><span style=" color: green">1: the name of the attribute </span><span style=" color: blue">--> </span><span >\s* = \s* ( </span><span style=" color: blue"><!-- </span><span style=" color: green">2: the value of the attribute (with quotes) </span><span style=" color: blue">--> </span><span >"([^"]*)" </span><span style=" color: blue"><!-- </span><span style=" color: green">3: the value without quotes </span><span style=" color: blue">--> </span><span >| '([^']*)' </span><span style=" color: blue"><!-- </span><span style=" color: green">4: also the value without quotes </span><span style=" color: blue">--> </span><span >) </span><span style=" color: blue"></</span><span style=" color: #a31515">xsl:value-of</span><span style=" color: blue">> </</span><span style=" color: #a31515">xsl:variable</span><span style=" color: blue">></span></pre> <p>The trick is the <xsl:value-of /> instruction, which casts its contents as a string.  
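<p>Python's regex engine offers the same affordance natively through its re.VERBOSE flag, the analogue of XPath's "x" flag. A rough Python rendering of the attribute pattern above, purely to illustrate the commented-regex idea:</p>

```python
import re

# A rough Python rendering of the attribute-matching pattern above;
# re.VERBOSE ignores unescaped whitespace and allows # comments
attribute_regex = re.compile(r"""
    \s+ (\S+)        # 1: the name of the attribute
    \s* = \s*
    (                # 2: the value of the attribute (with quotes)
        "([^"]*)"    # 3: the value without quotes
      | '([^']*)'    # 4: also the value without quotes
    )
""", re.VERBOSE)

m = attribute_regex.search(' class="code"')
print(m.group(1), m.group(3))  # class code
```

<p>As in the XSLT version, a larger pattern can interpolate this one as a plain string, since the compiled pattern's source text is available via attribute_regex.pattern.</p>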
An especially nice thing about this method is that you can refer to other variables within the declaration:</p> <pre class="code"><span > (\S+) </span><span style=" color: blue"><!-- </span><span style=" color: green">12: the name of the element being opened </span><span style=" color: blue">--> </span><span >( </span><span style=" color: blue"><!-- </span><span style=" color: green">13: the attributes of the element </span><span style=" color: blue">--> </span><span >( </span><span style=" color: blue"><!-- </span><span style=" color: green">14: wrapper for the attribute regex </span><span style=" color: blue">--> <</span><span style=" color: #a31515">xsl:value-of </span><span style=" color: red">select</span><span style=" color: blue">=</span><span >"</span><span style=" color: blue">$attribute-regex</span><span >" </span><span style=" color: blue">/> <!-- </span><span style=" color: green">15-18 attribute stuff </span><span style=" color: blue">--> </span><span >)* )</span></pre> <p>Of course, to ignore all the extra white space in a regex constructed this way, you'll need to set the "x" <a href="">flag</a> in any <xsl:analyze-string />, replace(), or matches() that refers to it.</p><img src="" height="1" width="1" alt=""/>jsulak“Broken gets fixed. Shoddy lasts forever”<p><a href="">Jack Moffett</a>:</p> <blockquote> <p.</p></blockquote> <p>(via <a href="">Daring Fireball</a>)</p> <img src="" height="1" width="1" alt=""/>jsulak and Adjustable Meanings<p>An interview of <a href="">Alan Kay</a>:</p> <blockquote> <p? <p>... Even if the user is an absolute expert, able to remember almost everything, I’m always interested in the difference between what you might call stark meaning and adjustable meaning. <p. <p>There have been many, many studies of this. 
This argues that the surface form of a language, whatever it is, has to be adjustable in some form.</p></blockquote> <img src="" height="1" width="1" alt=""/>jsulak Is Gas in the Car<p><a href="">Tim O'Reilly talks about the importance of working on things that are important</a>, not just things that simply make money:</p> <blockquote> <p>First off, though, I want to make clear that "work on stuff that matters" <em>does not mean focusing on non-profit work, "causes," or any other form of "do-goodism."</em> Non-profit projects often do matter a great deal, and <a href="">people with tech skills can make important contributions</a>. <p>...</p></blockquote> <blockquote> <p><strong>Money is like gas in the car — you need to pay attention or you'll end up on the side of the road —</strong> <strong>but a well-lived life is not a tour of gas stations!</strong></p></blockquote> <img src="" height="1" width="1" alt=""/>jsulak's Your Semantic Exit Strategy?<p>Quoting <a href="">Cafe Con Leche</a> quoting <a href="">A List Apart</a>:</p> <blockquote> <p>We’ll start by posing the question: “why are we inventing these new elements?” A reasonable answer would be: “because HTML lacks semantic richness, and by adding these elements, we increase the semantic richness of HTML—that can’t be bad, can it?” </p> <p>Rather than adding <strong>specific terms</strong> to the vocabulary of HTML, we need to add a <strong>mechanism</strong> that allows semantic richness to be added to a document as required. In technical terms, we need to make HTML <strong>extensible</strong>. HTML 5 proposes no mechanism for extensibility. </p> <p>HTML 5, therefore, implements a feature that breaks a sizable percentage of current browsers, and doesn’t really allow us to add richer semantics to the language <strong>at all</strong>.</p> </blockquote> <img src="" height="1" width="1" alt=""/>jsulak from a Windows Reinstall<p>I had made a drive image of my system with <a href="">Acronis True Image</a>, which I stored on an external hard drive. 
I didn't think I'd need it, because the current install was so horrible that anything was preferable to it. </p> <p align="center"><a href=""><img src=""></a></p> <p>So I rebooted, popped in the Gateway system restore disk, and told it to format the drive and reinstall the operating system. </p> <p>This problem was entirely my fault. I should have made absolutely sure I had the drivers I needed before wiping the disk, so I'll spare you the whole rant of WHY DOESN'T GATEWAY'S <strong>CUSTOM</strong> SYSTEM RESTORE DISK CONTAIN THE DRIVERS TO ACTUALLY, YOU KNOW, RESTORE THE SYSTEM? Because, that would be petty.</p> <p>After some Internet searching, I discovered a free utility called <a href="">Double Driver</a>, which allows you to back up and restore all the drivers installed on your system. Which was great, except I'd already killed the Windows installation with the drivers I needed. </p> <p>To make a long story short, here's what I did:</p> <ol> <li>Created a new drive image of my partially finished reinstall with Acronis True Image. <li>Restored the original, pre-wipe drive image. <li>Booted into Windows and ran Double Driver to create an archive of all the installed drivers on my USB drive. <li>Restored the in-progress drive image. <li>Used Double Driver to restore all the previously installed drivers.</li></ol> <p>Success! (Well, Double Driver actually missed one file, which I had to retrieve from the backup image. But close enough.)</p> <p>Also, although Acronis True Image saved the day, I can't recommend it as a product, since its background service slowed the two computers I installed it on to a crawl. Which is too bad.</p> <p>So, recapping today's lessons, if you're reinstalling Windows, you should first:</p> <ul> <li>Back up all your data. Twice. <li>Create a drive image of your current install. It will save your bacon. <li>Create an archive of all your drivers using <a href="">Double Driver</a>. This too will save your bacon. 
</li></ul> <img src="" height="1" width="1" alt=""/>jsulak, Part III — Turtles All The Way Down (Steps, Reuse, and Encapsulation)<p>This is my third post on <a href="">XProc</a>. (To see the first two, go <a href="">here</a> and <a href="">here</a>.) In this post, I'll talk about the different categories of steps, how to create your own steps and reuse them across pipelines, and how this all relates to the fundamental metaphor of XProc. </p> <p>One task I do frequently is removing <a href="">Arbortext Editor change-tracking markup</a> from documents. Here's a simple pipeline that does just that:</p><pre class="code"><p:pipeline type="u:accept-changes"> <p:xslt> <!-- inline transform that accepts tracked changes --> </p:xslt> </p:pipeline></pre> <p>The fundamental unit of work in XProc is the "step," which you can think of in the same way you do subroutines in other programming languages. In general, a step takes XML as input, performs an operation on it, and outputs XML. There are <a href="">three types of steps</a> in XProc: </p> <ul> <li> <p><strong><a href=""><img height="155" alt="atomic" src="" width="256" align="right" border="0"></a>Atomic steps.</strong> These are the most basic building blocks of XProc pipelines, and each carries out a single fundamental XML operation. There are a number of <a href="">built-in atomic steps</a>, such as <a href="">p:xslt</a> (as shown above). These built-in atomic steps will be the foundation for almost everything you do in XProc, so learn them. 
</p> <li> <p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="182" alt="image" src="" width="244" align="right" border="0"></a> <strong>Compound steps. </strong>These are assembled from other XProc steps. Sometimes they are just a series of implicitly connected atomic steps, and sometimes they use built-in logical control structures, such as <a href="">p:for-each</a>, to control their execution. </p> <p>It turns out that the above accept-changes pipeline — and in fact any pipeline you create — <a href="">is also a compound step</a>. (The fact that <a href="">p:pipeline is a shortcut for p:declare-step</a> is a dead giveaway.) This is an important idea, and we'll discuss it more below. But for now, remember this: <em>a pipeline and a step are the same thing</em>. </p> <li> <p><strong>Multi-container steps.</strong> There are only two multi-container steps: <a href="">p:choose</a> and <a href="">p:try</a>. These contain two or more alternate pipelines. You cannot define your own custom multi-container step.</p></li></ul> <p>No doubt, in your day-to-day work, you'll never feel the need to stop and consider whether a certain step fits one category or the other. But the difference does have consequences, so it's worth spending a bit of time on it.</p> <p>Let's say that you want to use the above accept-changes step as a step within a larger pipeline — for example, a pipeline that removes all of the notes from 2nd-level sections in a document. Here's how you would do that:</p> <pre class="code"><!-- Declare accept-changes step --></pre> <p>What's happening is that we are declaring a new step <em>type</em> called "u:accept-changes," and then invoking an <em>instance</em> of it as the first step in the pipeline. A few notes:</p> <ul> <li>A step's <em>type</em> attribute is different from its <em>name</em> attribute. 
The value you place in <em>type</em> becomes the name of the element you use to invoke it (lines 11 and 21). That instance can itself be given a name, which can then be used when controlling the flow of XML using pipes. So, a <em>type</em> refers to the object, and a <em>name</em> refers to the instance. <li>A step's type must be in a non-XProc namespace, which you must remember to declare. <li>The <a href="">p:serialization</a> element defines how a specified pipeline output port serializes XML when outputting from the pipeline. It is the equivalent of the xsl:output instruction in XSLT, and takes many of the same options.</li></ul> <p>Of course, if you're going to just reference your new step type once within the same pipeline that you've declared it in, there's not really much of a point.</p> <p>The real power of declaring your own step type comes from the ability to place it in an external library and create instances of it within any number of pipelines. Here's how you do that, using <a href="">p:import</a>. First, the library document:</p><pre class="code"><p:library> <!-- u:accept-changes step declaration goes here --> </p:library></pre><a href=""></a> <p>Note the root element <a href="">p:library</a>, which is used as a container for a collection of custom steps. If I wanted, I could have made the p:pipeline element the root element, and imported that single step, but using p:library leaves more room for future growth.</p> <p>Here's the final pipeline, which imports the above library:</p> <pre class="code"><!-- Import accept-changes step --> <p:import href="lib.xpl"/></pre> <p>There. A nice and simple pipeline. I (and anyone else) can reference this u:accept-changes step in any number of pipelines, and never worry about how it actually works. 
(And, if you stored the transform inside the u:accept-changes definition using <a href="">p:inline</a>, it would be even more portable.) Later, if I decide that I'm better served by using a Python script to accept changes instead of a transform, I can simply update the step declaration (using <a href="">p:exec</a>) once in the library. My hope is that <a title="EXProc" href="">XProc will become a common language</a> for sharing XML manipulation tasks. </p> <p>It's also worth noting that if I gave this note-deleting pipeline a type attribute, then it too could be invoked as a step in another pipeline. And that pipeline could be invoked as a step, and so on. Turtles all the way down.</p> <p align="center"><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="184" alt="image" src="" width="244" border="0"></a> </p> <p>Here's where the difference between an atomic step and a compound step comes into play. Although we defined our accept-changes step as a <em>compound step</em> (because it contains a subpipeline of one step), when we invoke an instance of it, that instance is an <em>atomic step</em>. </p> <p>To the calling pipeline, it behaves <a href=""><em>exactly like a built-in atomic step</em></a>. The fact that it's actually implemented by assembling XProc steps is irrelevant to the calling pipeline. </p> <p>There's one more important XProc fact I've talked around but never explicitly stated. <em>The only information that can flow between steps through pipes is XML. </em>At first, I thought this a stifling restriction, but as I work more with XProc, I think that this prohibition is the key to its power. </p> <p>Because, when you combine all these ideas, you end up with a very high level of enforced encapsulation that provides a very simple yet powerful abstraction for working with XML. It allows you to worry about <em>what</em> needs to be done instead of <em>how</em>. 
When any pipeline can be used as an atomic step, you can <a href="">work fractally</a>, making parts out of the same raw material as the whole, creating higher and higher levels of abstraction.</p> <p align="center"><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="164" alt="image" src="" width="244" border="0"></a> </p> <p>So, to summarize:</p> <ul> <li>A pipeline is a step. <li>Although the <em>declaration</em> of a step may be a compound step, its <em>invocation</em> is not. <li>XML is the only information that can flow between steps through pipes. </li></ul> <p>That's it for now. If you're looking for more XProc information, Dave Pawson has been working on <a href="">his introduction to XProc</a>, which is shaping up to be a great resource.</p> <p>I may start posting my XProc-related material in a more suitable space, away from my blog. I'm considering using the DocBook Website system. Any thoughts?</p><img src="" height="1" width="1" alt=""/>jsulak
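<p>As a loose analogy (in Python, not XProc), the "a pipeline is a step" idea works like ordinary function composition: each step maps XML to XML, and a pipeline built from steps has exactly the same shape, so it can itself serve as a step in a bigger pipeline. The markup names below are made up for illustration:</p>

```python
# Each "step" takes XML (as a string here) and returns XML.
def accept_changes(xml: str) -> str:
    # stand-in for the transform that strips change-tracking markup
    return xml.replace("<add>", "").replace("</add>", "")

def delete_notes(xml: str) -> str:
    # a "pipeline": its first step is the accept_changes step
    return accept_changes(xml).replace("<note/>", "")

# delete_notes has the same shape as a step, so yet another
# pipeline can invoke it -- turtles all the way down
def bigger_pipeline(xml: str) -> str:
    return delete_notes(xml)

print(bigger_pipeline("<sec><add>hi</add><note/></sec>"))  # <sec>hi</sec>
```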
Frank Sommers: What is ActionScript? James Ward: ActionScript 3 is our implementation of the ECMAScript 262, Revision 4 standard. That's the same standard JavaScript 2.0 will be based on, once browsers officially support that version of JavaScript. We've done a few things differently from the ECMAScript standard, and the reason for that was that the standard wasn't actually finalized when we built Flash 9 and Flex 2. We had to make some assumptions about what things would be in the spec before the spec was ratified by the ECMAScript standards committee. As a result, there are probably a few slight variances between what we implemented in ActionScript 3 and the final ECMAScript specification, which, I believe, is still not final yet.

Frank Sommers: What will surprise a Java developer the most when first encountering ActionScript 3 code? James Ward: Java developers will recognize and get to know the ActionScript 3 language very quickly. The syntax is very similar to Java's, and there are all the object-oriented features you're familiar with from Java, such as extending classes and implementing interfaces. The similar syntax allows Java developers to very quickly come up to speed by not having to really learn a brand new language. They can just focus on learning the slight differences from Java syntax, and learn how to actually use the ActionScript 3 language within the context of the Flash Virtual Machine. One of the biggest differences from Java is optional static typing in ActionScript 3. When I came from Java to Flex, I was struck by the fact that I didn't have to type everything. I could type things when I felt that typing was important, and when I wanted code hinting in the IDE, and compile and runtime type checking, but then there were places where typing didn't matter so much. There are places where you can leave things untyped or, to put it differently, dynamically typed. 
Optional typing made me have to do a bit of a mental shift from how I, as a typical Java developer, really saw languages. There are just a few differences in syntax from Java. The package declaration is almost the same, with the one difference that we put curly braces inside the package keyword, so when you define a class, the class definition goes inside the curly braces within the package declaration. Your import statements go inside that same block. Instead of declaring methods, you explicitly have to say that something is a function, with the function keyword, and your function's return types are specified with a colon after the function name. Variables are defined with the var keyword, followed by the variable name, and then the type of the variable, if you want to specify a variable type. ActionScript has very similar namespaces to Java's: private, public, protected. If you don't want something to be public, private, or protected, and want instead to have your own name space that you can use internally, you can define that namespace as well. If you have a variable, or an inner class, for instance, that you only want to be accessed from, say, just two other classes, then you can create a custom namespace and use that namespace in just those two classes. That gives you more flexibility in how you can structure your code. You can apply private, public, protected, or any other name space you create, on your class or a variable or a function. It's a bit different from how you'd do that in Java, but once you get the hang of it, you can get used to it and find it quite useful. The Flex framework uses this technique pretty heavily. We have an mx_internal namespace that we use for variables that we think could change in the future and that you shouldn't rely on. But if you did want to get access to one of these variables, you could import the mx_internal namespace, and then have access to the variables in that namespace. By default, users don't see those variables. 
Functions are handled a bit differently, too. You can define functions inline, for instance, for callbacks. When you specify functions, a lot of times you may want to pull them out and use them as closure functions, and that's also available. Often, when you specify a listener in ActionScript 3, you would pass a function as the listener, which, in turn, will pass a reference to that function. In ActionScript 3, you can do prototype-based classes, just as you can in JavaScript. In Flex, we hide that because we prefer the other style of object-orientedness: creating classes and extending them in the typical Java-like way. If you wanted to, you could get to the prototype stuff, and still do things through the prototype, though. Other than the optional static typing, the only major difference between JavaScript in the browser and Flex's ActionScript 3 is the DOM that each language is implemented against. In the browser, the DOM is the browser's DOM, while Flex makes a different DOM available to you, that of the Flash Player. If you take the DOM differences out of the equation, then the languages are almost completely portable. I actually have taken a JavaScript library, a few thousand lines of JavaScript, and copied and pasted that into Flex. With a few minor changes, I was able to get that API running as an ActionScript 3 Flex API. One of the places where you see the benefits of optional static typing is when you start pulling in Web services data that isn't SOAP, but comes from RESTful or JSON-style Web services. You could go through all the work to type the whole response and convert everything into typed objects. But a lot of times the interfaces of the Web services that you're talking to aren't themselves very statically typed. Those interfaces don't necessarily follow a consistent object model. That's one place where I typically don't force my objects into being statically typed. 
When you talk to the Flickr API, for instance, and you decide to define a bunch of objects that represent their API, if Flickr changes the data their API exposes, you would have to change your object model, too. Static typing may not buy you much in that case. With dynamic typing, as long as they used the same names, everything would continue to work. Flex takes advantage of optional static typing in a couple of places. Flex defines an HTTP Service, for instance. [Editor's note: The HTTP Service is an object-oriented client-side HTTP API.] You can get back your response from the HTTP Service in a couple of different ways. The default way is to take all the XML you get back from a RESTful service call, and deserialize that response into basic objects: not typed objects, but objects that are basically just hashmaps or name-value pairs. You can also have the HTTP service give you back E4X objects, which is ActionScript's XML API. You can then execute XPath-style queries on that response, for instance. And you can also write your own custom deserializer to go through the XML tree and deserialize that into typed ActionScript objects. One way to think about optional static typing is by comparing it to untyped Java collections. You can stick any type of object into a collection, and then cast that object back into its type when you need to know the specific object type. In Flex, if you define a variable to be of type Object, then you can set any property you want on that object. You don't have to specify in advance the properties of that object, but can, instead, go in and arbitrarily set properties on the object. If you create an instance of an object foo, you can just say, foo.name = "a new String". That would set the name property on that object, even though you never defined a name property for that object before. You can also add functions to an object in a similar way: You can just say, foo.doSomething = something, where something is a specification of a function. 
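For readers coming from Python rather than Java, the dynamic side of ActionScript objects described here behaves much like Python's own dynamic attributes. A small sketch (an analogy only, not ActionScript):

```python
from types import SimpleNamespace

# Like an untyped ActionScript Object: set properties on the fly,
# without declaring them in advance
foo = SimpleNamespace()
foo.name = "a new String"
foo.do_something = lambda: foo.name.upper()  # attach a function, too

print(foo.name)            # a new String
print(foo.do_something())  # A NEW STRING
```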
Frank Sommers: You mentioned E4X, ActionScript's native XML support. What advantages does E4X provide over other XML processing APIs? James Ward: XML is a native data type in ActionScript 3. One way we support XML is through E4X, which is an ECMA standard, and is part of the ECMAScript language. We chose to follow the standard instead of creating our own XML support in the language. E4X is an XQuery- and XPath-like language for manipulating XML structures. The query language built into E4X makes it easy to find elements, or collections of elements, in your XML tree, pull elements out, or set and add elements to XML structures. ActionScript, and Flex, also have facilities to take ActionScript objects and serialize them into XML and back. While E4X is very powerful and convenient for many XML-related tasks, you have the choice of whether to deal with XML objects, or whether to serialize XML objects into your ActionScript objects, and then use those ActionScript objects in your code. Those with a Java background may find the latter more natural. But once you get used to E4X, you may find it very convenient, too. The Flex framework classes define the containers, layouts, and controls, and are all implemented in ActionScript. The source code for the whole Flex framework is shipped with the Flex SDK. You can go in and see how all of those components are built, and you can extend those components as well with ActionScript code. It's also possible to not use anything that Flex provides, and just build from scratch your own ActionScript rich-client framework. You could conceivably build anything you'd want on top of the Flash Player DOM, using Flash Player's ability to draw lines, do vector graphics, and use all the things that the Flash VM can do. 
All the Flash animations you see people do on the Web are built to run on the Flash VM, so you can conceivably go in and use the low-level drawing APIs to create animations, draw lines with nice anti-aliasing, display and record video, and so on. But you don't have to do that, because Flex gives you all the components built for you, including very rich UI controls, such as a data grid. In your application, you can just create an instance of the data grid, for example, and then you can override any of the methods in that component, including default styles and properties. The Flex components use the low-level Flash API to draw lines, and basically take advantage of the rich media capabilities of the Flash VM. With the Flex API, you're programming at a higher level, and don't have to learn the low-level details of how the components and effects are actually implemented. At the same time, one difference from Swing, for instance, is that with Flex you could go really low-level, if you wanted to, because those low-level Flash API calls are available to any Flex application. You could even mix high-level Flex API calls with low-level Flash API calls. If you didn't like how something looked in Flex, you could go in and do some lower-level graphics calls to change how a component looked. You could also use that technique to set custom item renderers, too. In practice, you will very rarely, if ever, need to do that because Flex exposes a very rich set of component properties. The other differences from Java come from the media-handling capabilities of the Flash VM, such as video and audio playback capabilities. Media support has been built into the Flash player for a long time now, and that's one of the areas where the Flash VM really shines. Flash was built with the idea to allow people to create multimedia experiences on the Web. Flex applications can easily take advantage of all those rich-media capabilities. 
Video and audio integration with the Flex API is very simple and straightforward. If you wanted to have your Web cam record some video and broadcast that video out to other computers, that takes only a few lines of code to accomplish in Flex. You just find the Camera object in the API, and call a few methods on that object. Playing back video is similarly simple: You can use the VideoDisplay object, and point that to a URL. Out of the box, we support the FLV format, which is Flash Video, the SWF format, which is the regular Flash format, and MP3 playback. Have a question or opinion about ActionScript 3? Discuss this article in the Articles Forum topic, ActionScript 3: The Language of Flex.
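The optional typing Ward describes throughout the interview has a loose parallel in Python, where annotations are likewise optional: you add types where tooling and checking pay off, and stay dynamic where they don't. A sketch of the contrast (Python, purely as an analogy):

```python
# Typed where it helps (IDE hints, static checkers)...
def greet(name: str) -> str:
    return "Hello, " + name

# ...dynamic where typing buys little, as with loosely
# structured web-service responses
def describe(thing):
    return f"{thing!r} is a {type(thing).__name__}"

print(greet("Ada"))   # Hello, Ada
print(describe(42))   # 42 is a int
```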
I'm trying to create a simple function that allows a user to input basic information that will change the values of a datetime object, and I'd like to find a way to make it as clean as possible by using a variable as a keyword. This can easily be done a different way, but I figured it'd be useful to know how to replace pre-set keywords. The datetime object has a .replace() method that takes any time value as a keyword:

    datetime.replace([year[, month[, day[, hour[, minute[, second[, microsecond[, tzinfo]]]]]]]])

    start_time = datetime.datetime(2016, 9, 16, 15, 30)

    def change_time(integer, time_string):
        time_type = time_string.replace("s", "")  # de-pluralizes input
        new_time = getattr(start_time, time_type) + integer
        print(start_time.replace(time_type=new_time))

    change_time(2, "days")

Python allows you to store data in a dictionary and then unpack it as keyword arguments to a function. For what you want to do, the best way to accomplish this is to use kwargs:

    replacement_info = {'day': 2, 'month': 9, ...}
    new_time = start_time.replace(**replacement_info)

Note the difference from what you've done. Passing time_type directly to replace() would result in replace() being called with a parameter literally named time_type, which is undefined (since it's not in the list of accepted arguments for replace). Instead you have to pass it like **{time_type: 2} to the replace() function; this way, replace() will receive the interpreted value of time_type, namely day, as the keyword. So you need to change

    print(start_time.replace(time_type=new_time))

to

    print(start_time.replace(**{time_type: new_time}))
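Putting the pieces together, a complete corrected version of the asker's function might look like this (the singular/plural handling is kept from the question):

```python
import datetime

start_time = datetime.datetime(2016, 9, 16, 15, 30)

def change_time(amount, unit):
    field = unit.rstrip("s")  # "days" -> "day": replace() wants singular names
    new_value = getattr(start_time, field) + amount
    # build the keyword at runtime and unpack it into replace()
    return start_time.replace(**{field: new_value})

print(change_time(2, "days"))  # 2016-09-18 15:30:00
```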
hyperhyper: Python Library to Construct Word Embeddings for Small Data. Still work in progress. Building upon the work by Omer Levy et al. for Hyperwords. Why? Nowadays, word embeddings are mostly associated with Word2vec or fastText. Those approaches focus on scenarios where an abundance of data is available; to make them work, you need a lot of data, and that is not always the case. There exist alternative methods based on counting word pairs and some math magic around matrix operations. They need less data. This Python library implements the approaches (somewhat) efficiently (but there is still room for improvement). hyperhyper is based on a paper from 2015. The authors, Omer Levy et al., published their research code as Hyperwords. I tried to port their original software to Python 3 but I ended up re-writing large parts of it. So this library was born. Limitations: With hyperhyper you will run into (memory) problems if you need large vocabularies (set of possible words). It's fine if you have a vocabulary of up to 50k. Word2vec and fastText especially solve this curse of dimensionality. Installation pip install hyperhyper If you have an Intel CPU, it's recommended to use the MKL library for numpy. It can be challenging to correctly set up MKL. A package by Intel may help you. conda install -c intel intelpython3_core pip install hyperhyper Verify whether mkl_info is present: >>> import numpy >>> numpy.__config__.show() Disable the internal multithreading ability of MKL or OpenBLAS: export OPENBLAS_NUM_THREADS=1 export MKL_NUM_THREADS=1 This speeds up computation because we are using multiprocessing on an outer loop. 
Usage

import hyperhyper as hy

corpus = hy.Corpus.from_file('news.2010.en.shuffled')
bunch = hy.Bunch("news_bunch", corpus)
vectors, results = bunch.svd(keyed_vectors=True)

results['results'][1]
>>> {'name': 'en_ws353', 'score': 0.6510955349164682, 'oov': 0.014164305949008499, 'fullscore': 0.641873218557878}

vectors.most_similar('berlin')
>>> [('vienna', 0.6323208808898926), ('frankfurt', 0.5965485572814941), ('munich', 0.5737138986587524), ('amsterdam', 0.5511572360992432), ('stockholm', 0.5423270463943481)]

The general concepts:

- Preprocess data once and save it in a bunch
- Cache all results and also record their performance on test data
- Make it easy to fine-tune parameters for your data

More documentation may be forthcoming. Until then you have to read the source code.

Scientific Background

This software is based on the following papers:

- Improving Distributional Similarity with Lessons Learned from Word Embeddings, Omer Levy, Yoav Goldberg, Ido Dagan, TACL 2015. Paper Code Recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks.
- The Influence of Down-Sampling Strategies on SVD Word Embedding Stability, Johannes Hellrich, Bernd Kampe, Udo Hahn, NAACL 2019. Paper Code The stability of word embedding algorithms, i.e., the consistency of the word representations they reveal when trained repeatedly on the same data set, has recently raised concerns. We here compare word embedding algorithms on three corpora of different sizes, and evaluate both their stability and accuracy. We find strong evidence that down-sampling strategies (used as part of their training procedures) are particularly influential for the stability of SVD-PPMI-type embeddings.
This finding seems to explain diverging reports on their stability and leads us to a simple modification which provides superior stability as well as accuracy on par with skip-gram embeddings.

Development

- Install pipenv.

git clone && cd hyperhyper && pipenv install && pipenv shell
python -m spacy download en_core_web_sm
pytest tests

Contributing

If you have a question, found a bug or want to propose a new feature, have a look at the issues page. Pull requests are especially welcome when they fix bugs or improve the code quality.

Future Work / TODO

- evaluation for analogies
- replace pipenv if they still don't ship any newer release
- implement counting in a more efficient programming language, e.g. Cython

Why is this library named hyperhyper?

License

BSD-2-Clause.

Sponsoring

This work was created as part of a project that was funded by the German Federal Ministry of Education and Research.
https://pypi.org/project/hyperhyper/
barchart: Creating Bar Charts in Haskell

Readme for barchart-0.1.1

barchart is a command-line program for generating bar charts from CSV files. It has special support for creating charts from data generated by the Haskell benchmarking tools criterion and progression. barchart can create PNG, SVG, PDF, and PS files using the Haskell Diagrams library for rendering and, hence, depends on a Haskell binding to Cairo which is part of Gtk2Hs.

installation

available soon. be patient.

usage

Bar charts can be created from CSV files using the barchart command-line utility. For example, if you want to track how many hours you practice playing the guitar on each day of the week, you can create a file guitar.csv with the following contents:

Mon,1.2
Tue,0.3
Wed,2.1
Thu,0.9
Fri,1.1
Sat,3.2
Sun,3.1

The call

# barchart --title="Practice hours" guitar.csv

creates a file guitar.png which looks like this:

Each bar is labeled with a weekday and has an associated number of practice hours. The chart is scaled automatically such that the largest bar spans the (configurable, see below) height of the chart. The --title flag passed to barchart in the above call is optional. If you do not supply one, then barchart uses the basename of the CSV file as the title of the chart.
If you want to track practice hours over multiple weeks, you can create a file with mean, minimum, and maximum values for each day of the week:

Mon,1.2,0.9,1.7
Tue,0.3,0.1,0.5
Wed,2.1,1.5,2.5
Thu,0.9,0.4,1.0
Fri,1.1,1.0,1.2
Sat,3.2,1.5,5.2
Sun,3.1,2.3,4.2

Calling barchart in the interval mode

# barchart intervals --title="Mean practice hours" guitar-mean.csv

produces the following bar chart:

In this chart, each bar represents the mean practice hours for a day of the week, and the minimum and maximum values are depicted with intervals on the right edge of a bar. If you want to compare your practice hours for each day of the week and split them by months, you can create a CSV file like this:

Mon,1.2,2.1,1.7
Tue,0.6,0.7,0.8
Wed,2.1,1.2,2.5
Thu,0.9,1.5,1.7
Fri,1.1,1.3,0.7
Sat,3.2,1.7,4.3
Sun,3.1,3.2,2.1

We can use barchart (in the default mode)

# barchart --title="Practice hours per month" --division="Jan Feb Mar" guitar-months.csv

to create the following diagram:

Each bar is divided into different blocks which all have an associated amount of practice hours. Green blocks represent practice hours in January, red blocks in February, and blue blocks represent practice hours in March. The block labels are given to barchart via the --division flag. You can also draw multiple blocks per bar in interval mode, but then three values (mean,min,max) are used for each block. Hence, if you want to depict mean practice times with deviations for January, February, and March, you must create a CSV file where each day of the week is followed by nine practice times.

flags

The barchart program can be configured using command-line flags.
We can use the --help flag to print a summary:

# barchart --help
Bar Chart 0.1

barchart [blocks] [FLAG] [FILE]
barchart intervals [FLAG] [FILE]
barchart criterion [FLAG] [FILE]
     --summary               Show benchmark summary (default)
  -s --summary-comparison    Compare different benchmark summaries
  -b --benchmark-comparison  Compare different benchmarks
barchart progression [FLAG] [FILE]
  -s --summary-comparison    Breakdown chart by benchmark summary (default)
  -b --benchmark-comparison  Breakdown chart by benchmarks

Common flags:
  -? --help[=FORMAT]      Show usage information (optional format)
  -V --version            Show version information
  -v --verbose            Higher verbosity
  -q --quiet              Lower verbosity
  -o --out-file=FILE      Name of generated file
  -t --title=STRING       Title of bar chart
  -x --xlabel=STRING      Label of x axis
  -y --ylabel=STRING      Label of y axis
  -g --guess-file-type    Guess output file type by name (default)
     --png                Generate .png file
     --svg                Generate .svg file
     --pdf                Generate .pdf file
     --ps                 Generate .ps file
  -d --division=STRINGS   Labels separated by whitespace
  -c --colors=STRINGS     Color names separated by whitespace
  -w --width=NUM          Width of generated bar chart (default=600)
  -h --height=NUM         Height of generated bar chart (default=300)
  -l --label-size=NUM     Font size used for labels (default=12)
  -W --bar-width=FLOAT    Bar width between 0 and 1 (default=0.3)

barchart can be run in different modes. We have already seen the default (blocks) mode and the intervals mode. The criterion and progression modes are described below. Most command-line flags are self-explanatory. Apart from what we have seen in the example above, the following options are particularly interesting:

--xlabel and --ylabel label the axes of the coordinate system.

--colors changes the colors of the different blocks of a bar. You can use all color names listed in the SVG 1.1 Specification. If you specify fewer colors than there are blocks, then colors are reused in a cyclic fashion.
The default value for this argument is --colors="seagreen firebrick midnightblue".

--width and --height specify the dimensions of the generated chart. The generated picture is a little larger because of the title and bar labels. If you want to draw a chart with many bars, you should increase the width compared to the height or tweak the bar width.

--bar-width is a value between 0.0 and 1.0 that specifies how thick the bars are compared to the bar distance. With a value of 1.0 the bars are drawn directly next to each other; a value of 0.0 draws bars that are actually lines.

criterion

criterion is a Haskell tool for statistically robust benchmarking that can generate graphs which, for example, depict the densities of execution times. criterion can also generate a CSV file summarising all benchmarks but does not provide means to visualise these summaries. We can use barchart in criterion mode to draw graphs based on the summary files generated by criterion. To demonstrate the criterion mode, we write a small Haskell program that benchmarks a simple definition of the factorial function:

import Criterion.Main

main = defaultMain
  [bgroup "fac" [bench (show n) (nf product [1..n]) | n <- ns]]
 where
  ns = [k*10^4 | k <- [1,2,3]] :: [Integer]

We can compile this program, generate a benchmark summary using criterion, and visualise it using barchart as follows:

# ghc -O2 --make factorial
# ./factorial --summary=factorial.csv
# barchart criterion factorial.csv

These three calls generate a bar chart with one bar for each benchmark whose size is proportional to the corresponding run time. We can now modify the program to use an explicitly recursive definition of the factorial function to see whether this affects the run times.

fac 0 = 1
fac n = n * fac (n-1)

After generating another summary file factorial2.csv we could generate another bar chart to visualise it and view both charts side by side to compare the run times of the different implementations of the factorial function.
However, instead of generating two different charts we can also generate a single chart that combines information from multiple benchmark summaries.

# barchart criterion --summary-comparison factorial.csv factorial2.csv

We can pass as many summary files as we like and barchart will draw a bar for each summary file with blocks representing the different benchmarks. We can see clearly that the original implementation using product is faster than the explicitly recursive definition of the factorial function. Instead of drawing different bars for different summaries, barchart can also draw different bars for the different benchmarks with blocks for the summaries. This is less useful for comparing different implementations but useful to compare the different benchmarks using all implementations at once.

progression

progression is an alternative tool to depict criterion's benchmark results. It generates charts similar to the summary comparisons discussed above but draws bars for different benchmarks next to each other rather than as boxes on top of each other in a single bar. Drawing the results of all benchmarks for a specific implementation on top of each other lets you see immediately which implementation is the fastest in total, because the total height of the bar is the sum of all benchmarks for the corresponding implementation. However, progression provides features that barchart does not, such as normalising run times and showing them as percentages. Hence, the barchart program has a progression mode to generate bar charts from CSV files generated by progression. We can alter the factorial program as follows to generate progression's data:

import Criterion
import Progression.Main

main = defaultMain (bgroup "fac" [bench (show n) (nf fac n) | n <- ns])
 where
  ns = [k*10^4 | k <- [1,2,3]] :: [Integer]

fac 0 = 1
fac n = n * fac (n-1)

We do not need to change the factorial function itself because we won't execute benchmarks using progression.
We only generate a comparison of the previous benchmarks:

# mv factorial.csv bench-factorial.csv
# mv factorial2.csv bench-factorial2.csv
# ./factorial --mode=graph --compare=factorial,factorial2

progression wants the summary files to have specific names; hence, we rename them appropriately before generating a graph. The progression tool generates two files, plot.csv and plot.png. Here, we are not interested in the generated PNG file but pass the CSV file to the barchart program in progression mode.

# barchart progression --division="product recursive" plot.csv

As the CSV file generated by progression does not include information about the names of the summary files, we provide labels using the --division flag. The generated bar chart looks like this:

The run times of the first implementation are normalised to 100 % and the run times of the second implementation are depicted relative to the first. Like in criterion mode, we can also flip the axis in progression mode.

# barchart progression --benchmark-comparison --division="product recursive" plot.csv

Now the first benchmark is normalised to 100 % in each bar, so this diagram is of questionable use.

collaboration

The source code is on GitHub, so you can create or vote on issues to ask for extensions or fork this project to write extensions yourself.

limitations

Currently, barchart does not support negative quantities and draws weird bars if you use them anyway. Also, the legend for blocks is sometimes drawn suboptimally. I guess I made a mistake but I cannot find it.

thanks

I would not have written this program without the Diagrams library by Brent Yorgey and the CmdArgs package by Neil Mitchell. For questions or feedback email Sebastian Fischer.
http://hackage.haskell.org/package/barchart-0.1.1
Hi, I am writing a script which will run a process in a while loop, i.e., the user will execute sh test.sh and the script will run a set of processes. I want to know how to kill the script, which has an infinite while loop in its contents. Killing a process (say test.out) is done by

pkill -9 test.out

and a script can be killed using

pkill -9 test.sh

But this is possible only when the user has started the script using ./test.sh. If the user runs it like this:

sh test.sh

then it cannot be killed using pkill -9 test.sh. Since both options to run the script should be given to the user, in what way can I kill the script in case the user starts it as sh test.sh? I am using Fedora 16. Below is a sample of what I am trying to do. You can create a test.out executable with the following content:

#include <stdio.h>
#include <unistd.h>

int main ()
{
    int i = 1;
    while (i) {
        printf ("%d ", i);
        i++;
        sleep (1);
    }
    return (0);
}

The contents of the script file test.sh are as follows:

#!/bin/sh
# Run using ./test.sh start
# Stop using sh test.sh stop or ./test.sh stop

start () {
    while [ 1 ]
    do
        echo "Starting.."
        ./test.out
        #./test2.out
        sleep 3
    done
}

stop () {
    echo "Stopping test.sh script"
    pkill -9 test.sh
    #** The above line works only if the user has run ./test.sh start
    # but not if the user has used sh test.sh start **
    exit
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    *)
        echo "Invalid argument"
        ;;
esac
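One way around this (a sketch, not tested on Fedora 16; the PID-file path /tmp/test_sh.pid is an arbitrary choice) is to have the script record its own PID when it starts, so that stop can kill exactly that process regardless of how it was launched:

```shell
#!/bin/sh
# Sketch: "start" records this shell's PID; "stop" kills that PID.
# Works the same whether the script was run as ./test.sh or sh test.sh,
# because $$ is the PID of the running shell in both cases.

PIDFILE=/tmp/test_sh.pid

start () {
    echo $$ > "$PIDFILE"
    while true
    do
        echo "Starting.."
        sleep 3
    done
}

stop () {
    if [ -f "$PIDFILE" ]; then
        kill -9 "$(cat "$PIDFILE")" 2>/dev/null || true
        rm -f "$PIDFILE"
    fi
}

case "$1" in
    start) start ;;
    stop)  stop ;;
    *)     echo "Usage: $0 {start|stop}" ;;
esac
```

Another option worth knowing about: pkill's -f flag matches against the full command line instead of just the process name, so pkill -9 -f test.sh would also catch sh test.sh, at the risk of killing unrelated processes whose command lines happen to contain "test.sh".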
https://www.daniweb.com/programming/software-development/threads/426735/killing-a-running-script
Sean Loke

SwapHead by Chris - do not understand it

class PezDispenser {
    private String characterName;

    public PezDispenser(String characterName) {
        this.characterName = characterName;
    }

    public String getCharacterName() {
        return characterName;
    }

    public String swapHead(String characterName) {
        String originalCharacterName = this.characterName;
        this.characterName = characterName;
        return originalCharacterName;
    }
}

public class Example {
    public static void main(String[] args) {
        System.out.println("We are making a new PEZ Dispenser");
        PezDispenser dispenser = new PezDispenser("Donatello");
        System.out.printf("The dispenser is %s %n", dispenser.getCharacterName());
        String before = dispenser.swapHead("Darth Vader");
        System.out.printf("It was %s but Chris switched it to %s %n", before, dispenser.getCharacterName());
    }
}

Can anyone help me break down step-by-step the swapHead part by Chris? Thanks!

2 Answers

andren

Sure.

String originalCharacterName = this.characterName;

This creates a new variable called originalCharacterName that stores what is currently stored in the characterName field/member variable ("Donatello" in the example above) that is defined at the top of the class. The this keyword has to be used to help Java distinguish the field/member variable characterName from the characterName parameter that is passed in to the method.

this.characterName = characterName;

This sets the field/member variable characterName to the value of the characterName parameter; the characterName parameter will contain whatever string you passed in as the first argument to the method, which is "Darth Vader" in the example above.

return originalCharacterName;

This simply returns the value stored in the originalCharacterName variable that was created earlier. So to summarize, the method creates a variable to hold the current character name, then it changes the character name to the one you supplied to the method, then it returns the old name.
If you have any further questions or feel confused by a part of my explanation then feel free to reply to this post with your questions; I'll answer anything I can.

andren

String before = dispenser.swapHead("Darth Vader");

This line calls the dispenser.swapHead method and then stores the value it returns into the before variable. Since dispenser.swapHead returns the name the dispenser had before it was swapped, you end up with the before variable containing the old name of the dispenser. So the line both runs dispenser.swapHead (which means that the characterName within the dispenser changes) and assigns the value it returns to the before variable.

System.out.printf("It was %s but Chris switched it to %s %n", before, dispenser.getCharacterName());

This prints out a formatted string. The %s symbols are replaced by the string values you provide to the printf method; in this case the first %s is replaced by the contents of the before variable and the second %s is replaced by the value that dispenser.getCharacterName() returns. So you end up with the first %s being replaced by the old dispenser name, since that was stored in the before variable on the line above, and the second %s ends up containing the current dispenser name, since that is what dispenser.getCharacterName() returns.

Cassi Emerson

Hi Andren - is it because String before = dispenser.swapHead("Darth Vader"); is creating a new instance of the variable dispenser?

andren

No, that line does not create a new instance. All it does is run the swapHead method within the dispenser object and then assign the result to the before variable. It does not do anything beyond that. The swapHead method changes the characterName field on the dispenser to whatever name you pass into it and then returns the old name.
That means that dispenser.characterName is now equal to whatever you passed in to the swapHead method, and before is equal to whatever the characterName was before the name change.

Sean Loke

Thank you! Could you explain this bit as well in the Example.java file?

String before = dispenser.swapHead("Darth Vader");
System.out.printf("It was %s but Chris switched it to %s %n", before, dispenser.getCharacterName());
https://teamtreehouse.com/community/swaphead-by-chris-do-not-understand-it
Building Your Own URL Shortener
By Alex Fraundorf

Most of us are familiar with seeing URLs like bit.ly or t.co on our Twitter or Facebook feeds. These are examples of shortened URLs, which are short aliases or pointers to longer page links. For example, I can send you a shortened URL that will forward you to a very long Google URL with search results on how to iron a shirt. It would be much easier to text the 20-character bit.ly URL to your son who is in college and preparing for his first big job.

Answering Some Common Questions

So with bit.ly and many other URL shorteners like it out there and freely available, why should we bother building our own? Most of these shortening services even have an easy-to-use API so that we can programmatically generate a shortened URL and use it within our PHP scripts. With your own shortener, you get a short, simple link that can be emailed to your readers or published on your website. The obvious advantage to having your own is that your readers have instant brand recognition with your website.

You may wonder why you always see letters mixed with numbers in shortened URLs. By having more than ten options (0-9) per digit, we are able to have dramatically more combinations while keeping the code as short as possible. The characters we'll be using are the digits 1-9 along with various upper/lowercase letters. I have removed all of the vowels to prevent having links created which are unintended bad words, and I have removed any characters that could be confused with each other. This gives us a list of 50 characters available for each digit, which means that with two characters we have 2,500 possible combinations, 125,000 possibilities with three characters, and a whopping 6.25 million combinations with just four characters!

Planning the Database

Let's set up the short_urls table.
It's a simple table and the create statement is found below:

CREATE TABLE IF NOT EXISTS short_urls (
    id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
    long_url VARCHAR(255) NOT NULL,
    short_code VARBINARY(6) NOT NULL,
    date_created INTEGER UNSIGNED NOT NULL,
    counter INTEGER UNSIGNED NOT NULL DEFAULT '0',
    PRIMARY KEY (id),
    KEY short_code (short_code)
) ENGINE=InnoDB;

We have our standard auto-incrementing primary key and fields for the full URL, the shortened code for the URL (indexed for faster retrieval), a timestamp when the row was created, and the number of times the short URL has been accessed. Note that the long_url field has a maximum length of 255 characters, which should be sufficient for most applications. If you need to store longer URLs then you'll need to change its definition to TEXT. Now on to the PHP!

Creating a URL Short Code

The code to create and decode short URL codes will be in a class named ShortUrl. First, let's look at the code responsible for creating the short codes:

<?php
class ShortUrl
{
    protected static $chars = "123456789bcdfghjkmnpqrstvwxyzBCDFGHJKLMNPQRSTVWXYZ";
    protected static $table = "short_urls";
    protected static $checkUrlExists = true;

    protected $pdo;
    protected $timestamp;

    public function __construct(PDO $pdo) {
        $this->pdo = $pdo;
        $this->timestamp = $_SERVER["REQUEST_TIME"];
    }

    public function urlToShortCode($url) {
        if (empty($url)) {
            throw new Exception("No URL was supplied.");
        }
        if ($this->validateUrlFormat($url) == false) {
            throw new Exception("URL does not have a valid format.");
        }
        if (self::$checkUrlExists) {
            if (!$this->verifyUrlExists($url)) {
                throw new Exception("URL does not appear to exist.");
            }
        }

        $shortCode = $this->urlExistsInDb($url);
        if ($shortCode == false) {
            $shortCode = $this->createShortCode($url);
        }

        return $shortCode;
    }

    protected function validateUrlFormat($url) {
        return filter_var($url, FILTER_VALIDATE_URL, FILTER_FLAG_HOST_REQUIRED);
    }

    protected function verifyUrlExists($url) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_NOBODY, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        $response = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        return (!empty($response) && $response != 404);
    }

    protected function urlExistsInDb($url) {
        $query = "SELECT short_code FROM " . self::$table .
            " WHERE long_url = :long_url LIMIT 1";
        $stmt = $this->pdo->prepare($query);
        $params = array("long_url" => $url);
        $stmt->execute($params);

        $result = $stmt->fetch();
        return (empty($result)) ? false : $result["short_code"];
    }

    protected function createShortCode($url) {
        $id = $this->insertUrlInDb($url);
        $shortCode = $this->convertIntToShortCode($id);
        $this->insertShortCodeInDb($id, $shortCode);

        return $shortCode;
    }

    protected function insertUrlInDb($url) {
        $query = "INSERT INTO " . self::$table .
            " (long_url, date_created) VALUES (:long_url, :timestamp)";
        $stmnt = $this->pdo->prepare($query);
        $params = array(
            "long_url" => $url,
            "timestamp" => $this->timestamp
        );
        $stmnt->execute($params);

        return $this->pdo->lastInsertId();
    }

    protected function convertIntToShortCode($id) {
        $id = intval($id);
        if ($id < 1) {
            throw new Exception("The ID is not a valid integer");
        }

        $length = strlen(self::$chars);
        // make sure length of available characters is at
        // least a reasonable minimum - there should be at
        // least 10 characters
        if ($length < 10) {
            throw new Exception("Length of chars is too small");
        }

        $code = "";
        while ($id > $length - 1) {
            // determine what the value of the next higher character
            // in the short code should be and prepend it
            $code = self::$chars[fmod($id, $length)] . $code;
            // reset $id to remaining value to be converted
            $id = floor($id / $length);
        }

        // remaining value of $id is less than the length of self::$chars
        $code = self::$chars[$id] . $code;

        return $code;
    }

    protected function insertShortCodeInDb($id, $code) {
        if ($id == null || $code == null) {
            throw new Exception("Input parameter(s) invalid.");
        }
        $query = "UPDATE " . self::$table .
            " SET short_code = :short_code WHERE id = :id";
        $stmnt = $this->pdo->prepare($query);
        $params = array(
            "short_code" => $code,
            "id" => $id
        );
        $stmnt->execute($params);
        if ($stmnt->rowCount() < 1) {
            throw new Exception("Row was not updated with short code.");
        }

        return true;
    }
    ...

When we instantiate our ShortUrl class, we'll pass it our PDO object instance. The constructor stores this reference and sets the $timestamp member. We call the urlToShortCode() method passing it the long URL that we wish to shorten. The method wraps up everything needed to create the short URL code, which we will append to our domain name. urlToShortCode() calls validateUrlFormat() which simply uses a PHP filter to make sure that the URL is properly formatted. Then, if the static variable $checkUrlExists is true, verifyUrlExists() will be called, which uses cURL to contact the URL and make sure that it doesn't return a 404 (Not Found) error. You could alternatively check for a 200 (OK) status, but this could cause issues if the page were to unexpectedly return a 301 (Moved) or 401 (Unauthorized) response code.

It doesn't make sense to have duplicate entries, so the code checks for that with urlExistsInDb() which queries the database for the long URL. If it finds the URL, it will return the corresponding short code, otherwise it returns false so we know we need to create it. Note that URLs which differ only slightly (for example with and without a trailing slash) are different URLs, so if you want to prevent this kind of duplication then you will have to add some regular expressions.

createShortCode() delegates the following tasks to specific methods:

- insertUrlInDb() to insert the long URL into the database and return the new row's ID.
- convertIntToShortCode() to convert the new row's ID to our base-50 number scheme.
- insertShortCodeInDb() to update the row with the newly created short code.
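The arithmetic inside convertIntToShortCode() is just a change of base: repeatedly take the remainder modulo 50 to pick a character, then divide. Sketched here in Python for brevity (the function names encode/decode are my own, not part of the article's class):

```python
CHARS = "123456789bcdfghjkmnpqrstvwxyzBCDFGHJKLMNPQRSTVWXYZ"

def encode(n: int) -> str:
    """Convert a positive row ID to a base-50 short code."""
    base = len(CHARS)
    code = ""
    while n > base - 1:
        n, rem = divmod(n, base)   # peel off the lowest "digit"
        code = CHARS[rem] + code   # prepend its character
    return CHARS[n] + code

def decode(code: str) -> int:
    """Invert encode(): turn a short code back into the row ID."""
    base = len(CHARS)
    n = 0
    for ch in code:
        n = n * base + CHARS.index(ch)
    return n

print(encode(125))          # 3w  (125 = 2*50 + 25)
print(decode(encode(125)))  # 125
```

The decode direction is not needed by the article's design (it looks codes up in the database instead), but it shows that the mapping is reversible.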
When we want to create a short URL, all we have to do is instantiate the class, passing a PDO instance to the constructor, call the urlToShortCode() method with the long URL we wish to shorten, and append the returned short code to the domain and pass it back to the controller that requested it.

<?php
include "../include/config.php";
include "../include/ShortUrl.php";

try {
    $pdo = new PDO(DB_PDODRIVER . ":host=" . DB_HOST .
        ";dbname=" . DB_DATABASE, DB_USERNAME, DB_PASSWORD);
    $shortUrl = new ShortUrl($pdo);
    $code = $shortUrl->urlToShortCode($url);
    printf('<p><a href="%1$s">%1$s</a></p>', SHORTURL_PREFIX . $code);
    exit;
}
catch (Exception $e) {
    // log exception and then redirect to error page.
    header("Location: /error");
    exit;
}

Decoding a Short Code

The code to decode a short code and obtain the long URL is part of the ShortUrl class too. We call the shortCodeToUrl() method and pass it the short code we have extracted from the URI. shortCodeToUrl() also accepts an optional parameter, $increment, which defaults to true. It then delegates the following:

- validateShortCode() makes sure that the provided short code only contains letters and numbers.
- getUrlFromDb() queries the database for the supplied short code and returns the record's id, long_url, and counter fields.
- If the $increment parameter is true, incrementCounter() is called to increment the row's counter field.

Here's the rest of the class:

    ...
    public function shortCodeToUrl($code, $increment = true) {
        if (empty($code)) {
            throw new Exception("No short code was supplied.");
        }
        if ($this->validateShortCode($code) == false) {
            throw new Exception("Short code does not have a valid format.");
        }

        $urlRow = $this->getUrlFromDb($code);
        if (empty($urlRow)) {
            throw new Exception("Short code does not appear to exist.");
        }

        if ($increment == true) {
            $this->incrementCounter($urlRow["id"]);
        }

        return $urlRow["long_url"];
    }

    protected function validateShortCode($code) {
        return preg_match("|[" . self::$chars . "]+|", $code);
    }

    protected function getUrlFromDb($code) {
        $query = "SELECT id, long_url FROM " . self::$table .
            " WHERE short_code = :short_code LIMIT 1";
        $stmt = $this->pdo->prepare($query);
        $params = array("short_code" => $code);
        $stmt->execute($params);

        $result = $stmt->fetch();
        return (empty($result)) ? false : $result;
    }

    protected function incrementCounter($id) {
        $query = "UPDATE " . self::$table .
            " SET counter = counter + 1 WHERE id = :id";
        $stmt = $this->pdo->prepare($query);
        $params = array("id" => $id);
        $stmt->execute($params);
    }
}

Bringing It All Together

Building/altering a front controller or tailoring this package to an existing framework is outside the scope of this article, and so I've opted to contain our decoding logic in a file named r.php (r standing for redirect). We can write our shortened URLs so that r.php (or r/index.php depending on your design) is the controller. This format will be easy to integrate into just about any framework without touching the existing front controller. On a related note, if you would like to learn how to build your own front controllers, check out the excellent series An Introduction to the Front Controller Pattern.

One advantage of this design is that, if you wanted to, you can have a separate controller for different parts of your site using different tables to keep the short codes organized and as short as possible. One path could be for blog posts, and another could be for images.

"But what if I don't use a front controller or framework?" you ask, "Did I just read this whole article for nothing?" Although it's not as pretty, you can use a query-string format where r/index.php contains the decoding script. Here's what r.php looks like:

<?php
include "../include/config.php";
include "../include/ShortUrl.php";

// How are you getting your short code?
// from a framework or front controller:
// $code = $uri_data[1];
// from the query string, where this file is index.php in the
// directory http_root/r/index.php:
$code = $_GET["c"];

try {
    $pdo = new PDO(DB_PDODRIVER . ":host=" . DB_HOST .
        ";dbname=" . DB_DATABASE, DB_USERNAME, DB_PASSWORD);
}
catch (PDOException $e) {
    trigger_error("Error: Failed to establish connection to database.");
    exit;
}

$shortUrl = new ShortUrl($pdo);
try {
    $url = $shortUrl->shortCodeToUrl($code);
    header("Location: " . $url);
    exit;
}
catch (Exception $e) {
    // log exception and then redirect to error page.
    header("Location: /error");
    exit;
}

Depending on how you are getting the short code, the variable $code is set along with your other configuration settings. We establish our PDO connection, instantiate an instance of ShortUrl, and call shortCodeToUrl() passing it the short code and leaving the counter setting at the default value. If the short code is valid, you'll have a long URL which you can redirect the user to.

In Closing

So there you have it, your very own URL shortener that is incredibly easy to add to your existing site. Of course, there are plenty of ways that this package could be improved, such as:

- Abstract your database interaction to remove redundant code.
- Add a way to cache shortened URL requests.
- Add some analytics to the requested short URLs beyond the counter field.
- Add a way to filter out malicious pages.

I would like to take this opportunity to thank Timothy Boronczyk for his patient advice throughout my writing process. It was an honor to write this article for SitePoint and to work with him. Feel free to fork this article's sample code on GitHub and share your contributions and improvements. Thanks for reading and happy PHPing!
https://www.sitepoint.com/building-your-own-url-shortener/
opencv_transforms

This repository is intended as a faster drop-in replacement for PyTorch's Torchvision augmentations. This repo uses OpenCV for fast image augmentation for PyTorch computer vision pipelines.

Requirements
- A working installation of OpenCV. Tested with OpenCV version 3.4.1
- Tested on Windows 10. There is evidence that OpenCV doesn't work well with multithreading on Linux / MacOS, for example num_workers > 0 in a PyTorch DataLoader. I haven't tried this on those systems.

Installation
git clone
- Add to your Python path

Usage
from opencv_transforms import opencv_transforms as transforms
- From here, almost everything should work exactly as the original transforms.

Example: Image resizing
import numpy as np
image = np.random.randint(low=0, high=255, size=(1024, 2048, 3))
resize = transforms.Resize(size=(256, 256))
image = resize(image)
Should be 1.5 to 10 times faster than PIL. See benchmarks.

Performance
- Most transformations are between 1.5X and ~4X faster in OpenCV. Large image resizes are up to 10 times faster in OpenCV.
- To reproduce the following benchmarks, download the Cityscapes dataset.
- An example benchmarking file can be found in the notebook bencharming_v2.ipynb. I wrapped the Cityscapes default directories with an HDF5 file for even faster reading.

The changes start to add up when you compose multiple transformations together.

TODO
- [x] Initial commit with all currently implemented torchvision transforms
- [x] Cityscapes benchmarks
- [ ] Make the resample flag on RandomRotation and RandomAffine actually do something
- [ ] Speed up augmentation in saturation and hue. Currently, the fastest way is to convert to a PIL image, perform the same augmentation as Torchvision, then convert back to np.ndarray
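The benchmark numbers quoted above come from timing each transform repeatedly and comparing the results. A minimal, library-agnostic sketch of that timing pattern (standard library only; the helper names and default repetition count are my own illustration, not code from this repo):

```python
import time
import statistics

def benchmark(fn, *args, runs=100):
    # Time fn(*args) over several runs and return the median seconds per call.
    # Median is less sensitive to one-off stalls than the mean.
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

def speedup(slow_fn, fast_fn, *args, runs=100):
    # E.g. speedup(pil_resize, cv2_resize, image) -> how many times faster fast_fn is.
    return benchmark(slow_fn, *args, runs=runs) / benchmark(fast_fn, *args, runs=runs)
```

With the real libraries installed, slow_fn/fast_fn would be the PIL-based Torchvision transform and the OpenCV-based one applied to the same image.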
https://pythonawesome.com/opencv-implementation-of-torchvisions-image-augmentations/
Although Qt comes with a unit-test framework, QuickTest, for QML code, hardly any QML GUIs are unit-tested, let alone developed with TDD (test-driven development). One reason is that many developers regard unit tests for GUI code as a waste of time. Another reason is that QuickTest is poorly documented. My post will change this with a walkthrough of how to test-drive a QML keypad.

Keypad Requirements

With the keypad below, drivers of ROPA sugar beet harvesters can change ECU parameters. Here, they can change the speed of the conveyor belt transporting the sugar beets from the harvester bunker to a tractor trailer. The ECU parameters are C++ objects passed between C++ and QML. When the user accepts the new value, the changed parameter is sent to the ECU (Electronic Control Unit).

The list of requirements for the keypad is pretty long. And it's not even complete. For example, the requirements for changing the sign of the displayed number or for entering floating-point numbers are missing.

- When the user opens the keypad, it displays the current value of the parameter and enters the initial state.
- When the user presses one of the digit keys in the initial state, the keypad clears the value, shows the digit and enters the entry state.
- When the user presses one of the digit keys in the entry state, the keypad appends the digit to the displayed value and stays in the entry state.
- When the user presses the Accept button at the top right, the keypad is closed and passes the changed parameter value to the caller.
- When the user presses the Reject button at the top left, the keypad is closed without changing the parameter value.
- When the user presses the clear button "C", the keypad deletes the rightmost digit of the displayed value and goes to the entry state.
- When the user presses the "Reset" button, the keypad displays the default value of the parameter and goes to the entry state.
- When the user presses one of the increment or decrement keys (e.g., -10 or +10), the keypad increments or decrements the displayed value accordingly.
- When the displayed value is outside the range given by the minimum (e.g., 0) and maximum (e.g., 65535) of the parameter value, which are shown to the left and to the right of the displayed value, the displayed value is highlighted in red.

Such a formidable list of requirements is a clear sign that we are developing serious code and not just some colourful buttons. As serious developers, we'll apply our best coding practices. We'll apply TDD to writing QML GUI code.

Qt provides Qt Quick Test for unit-testing QML code. Unfortunately, it's poorly documented. The example test checks that 2 + 2 equals 4. We don't learn how to write the CMakeLists.txt files for QML tests, how to make mouse clicks work in tests, or how to access the digit buttons used in the keypad. My post will explain all this and show how to test-drive the implementation of the keypad. The source code for this post is available in the directory examples/qml-keypad of the GitHub repository tdd-training-add-ons.

Setting Up a QML Test Case

The project qml-keypad is set up like a real-life QML application. The main QML application is in the directory main. The main QML file imports our custom controls from the QML module EmUse.Controls. It also instantiates the Keypad, which is the QML component under test. The QML module with the custom controls can be found in imports/src/EmUse/Controls/. The tests for the custom controls are located in subdirectories of imports/tests. The test for the keypad, for example, is in imports/tests/keypad.

# Directory layout of project qml-keypad
main/
    main.cpp
    main.qml
imports/
    src/EmUse/Controls/
        Keypad.qml
    tests/keypad/
        tst_keypad.cpp
        tst_keypad.qml

The CMakeLists.txt files of the QML module EmUse.Controls and its tests rely heavily on qt_add_qml_module, which was introduced in Qt 6.2. Let us look at the CMakeLists.txt file of the QML module first.
# File: imports/src/EmUse/Controls/CMakeLists.txt
qt_add_qml_module(${PROJECT_NAME}
    URI EmUse.Controls
    VERSION 0.1
    RESOURCE_PREFIX /imports
    QML_FILES
        Keypad.qml
)
add_library(EmUse::Controls ALIAS ${PROJECT_NAME})

The command qt_add_qml_module stores the file Keypad.qml at the path qrc:/imports/EmUse/Controls/Keypad.qml in the Qt resource file system. The resource path is the concatenation of the RESOURCE_PREFIX and of the URI, where dots are replaced by slashes. Among other things, the command creates the qrc, qmltypes and qmldir files and compiles the QML files into C++ files.

The build produces two shared libraries for the QML module: the backing library libEmUseControls.so and the plugin library libEmUseControlsplugin.so. When the QML engine sees the statement import EmUse.Controls, it loads the plugin library, which in turn loads the backing library. We make these libraries known to the build under the alias EmUse::Controls. The CMakeLists.txt file of the main application adds the alias EmUse::Controls to its target_link_libraries. The main function main() adds qrc:/imports as an import path:

// File: main/main.cpp
QQmlApplicationEngine appEngine;
appEngine.addImportPath(u"qrc:/imports"_qs);

The test case for the keypad could import the module EmUse.Controls in the same way as the main application. However, we then wouldn't be testing the keypad in isolation any more, but together with all of its direct and indirect dependencies. This approach would reduce the controllability and observability of the tests, and writing tests would become more difficult. Instead, we use a different approach: a test application that instantiates a Keypad item and runs tests on this instance.
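The resource-path rule described above (RESOURCE_PREFIX concatenated with the URI, dots becoming slashes) is mechanical enough to express as a tiny helper. This Python sketch only illustrates the rule; it is not part of Qt or of this project:

```python
def qml_resource_path(prefix, uri, filename):
    # qrc path = "qrc:" + RESOURCE_PREFIX + URI with dots replaced by slashes + file name.
    return "qrc:{}/{}/{}".format(prefix.rstrip("/"), uri.replace(".", "/"), filename)

print(qml_resource_path("/imports", "EmUse.Controls", "Keypad.qml"))
# qrc:/imports/EmUse/Controls/Keypad.qml
```

Running the helper on the module above reproduces exactly the path that addImportPath() must cover, which is why main() registers qrc:/imports.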
The CMakeLists.txt file looks as follows:

# File: imports/tests/keypad/CMakeLists.txt
project(TestKeypad)
find_package(Qt6 COMPONENTS Quick QuickTest REQUIRED)
add_executable(${PROJECT_NAME}
    tst_keypad.cpp
)
set_source_files_properties(
    ../../src/EmUse/Controls/Keypad.qml
    PROPERTIES QT_RESOURCE_ALIAS Keypad.qml)
qt_add_qml_module(${PROJECT_NAME}
    URI EmUse.Tests.Keypad
    VERSION 0.1
    QML_FILES
        tst_enter_number.qml
        ../../src/EmUse/Controls/Keypad.qml
)
target_link_libraries(${PROJECT_NAME} PRIVATE Qt6::Quick Qt6::QuickTest)

The test executable or test runner is in tst_keypad.cpp and links against the Qt libraries Qt6::Quick and Qt6::QuickTest. The test case with the test functions is in tst_enter_number.qml. The test runner executes all test cases whose filenames start with tst_.

The command set_source_files_properties makes the real source file ../../src/EmUse/Controls/Keypad.qml known as the Qt resource file Keypad.qml. The command qt_add_qml_module places the source files Keypad.qml and tst_enter_number.qml in the Qt resource path qrc:/EmUse/Tests/Keypad. These two commands perform the following mapping.

Real file system                      -> Qt resource file system
../../src/EmUse/Controls/Keypad.qml   -> qrc:/EmUse/Tests/Keypad/Keypad.qml
tst_enter_number.qml                  -> qrc:/EmUse/Tests/Keypad/tst_enter_number.qml

The test runner doesn't need any further help to find the QML files. Its source code is simple.

// File: imports/tests/keypad/tst_keypad.cpp
#include <QtQuickTest>
QUICK_TEST_MAIN(QmlKeypad)

QUICK_TEST_MAIN is for QML tests what QTEST_MAIN and its variants are for Qt/C++ tests. QmlKeypad is the name of the test suite comprising all the QML test cases (files starting with tst_). Showing the QML test case is past due now. Here is the source code of tst_enter_number.qml.

# File: imports/tests/keypad/tst_enter_number.qml
import QtQuick
import QtTest

TestCase {
    name: "EnterIntegerNumber"

    Keypad {
        id: keypad
    }

    function test_setup() {
        verify(false, "All set up! Now write a proper test")
    }
}

The name of the test case is EnterIntegerNumber. It is used in the test output. The test function above, for example, shows up as QmlKeypad::EnterIntegerNumber::test_setup in the test output. The test case instantiates the component under test Keypad and provides a test function test_setup. We'll add more meaningful tests in the next sections. For now, we only want to see whether our test setup works.

When we build the project QmlKeypad, QtCreator will provide two run configurations: KeypadApp and TestKeypad. When we run the configuration TestKeypad, QtCreator's Application Output pane will show the following messages:

********* Start testing of QmlKeypad *********
Config: Using QtTest library 6.2.0, Qt 6.2.0 (arm64-little_endian-lp64 shared (dynamic) release build; by Clang 12.0.5 (clang-1205.0.22.11) (Apple)), macos 11.4
PASS   : QmlKeypad::EnterIntegerNumber::initTestCase()
FAIL!  : QmlKeypad::EnterIntegerNumber::test_setup() 'All set up! Now write a proper test' returned FALSE. ()
   Loc: [.../tests/keypad/EmUse/Tests/Keypad/tst_enter_number.qml(17)]
PASS   : QmlKeypad::EnterIntegerNumber::cleanupTestCase()
Totals: 2 passed, 1 failed, 0 skipped, 0 blacklisted, 46ms
********* Finished testing of QmlKeypad *********

As expected, the test fails and suggests writing a proper test. When we run the configuration KeypadApp, the application will show an edit field with the text 933. We won't write tests for the visual appearance. We run the application and check the appearance manually whenever we add visual elements, and we run the test suite whenever we add behaviour like pressing a digit key. The code at this point is available under the tag tdd-qml-setup.

Pressing Key1 Zero, One and Many Times

We use James Grenning's ZOMBIES approach to get rolling with TDD. We start with the first three letters:

- Zero – The button key1 is pressed zero times. Check that nothing is displayed after instantiating Keypad.
- One – The button key1 is pressed once.
Check that "1" is displayed in the keypad.
- Many – The button key1 is pressed two or more times. Check that "11", "111" and so on are displayed in the keypad.

Pressing Key1 Zero Times

Our first test test_pressKey1ZeroTimes checks that the number keypad.value entered by the user and the number valueDisplay.text displayed by the keypad are both the empty string.

function test_pressKey1ZeroTimes() {
    compare(keypad.value, "")
    var valueDisplay = findChild(keypad, "valueDisplay")
    compare(valueDisplay.text, "")
}

The test case fails, because the component Keypad does not contain a property value. Hence, we add this property to Keypad.

property string value: ""

Now the test fails, because valueDisplay is null. The call findChild(keypad, "valueDisplay") recursively searches the child tree of the keypad item for an item with the object name "valueDisplay". We set the property objectName in the keypad's TextField accordingly.

TextField {
    id: valueDisplay
    objectName: "valueDisplay"
    ...
}

Now the test passes. Besides checking the initial condition of the keypad, the test forced us to define the interface of the keypad, the property value, and to make the displayed number valueDisplay.text available for checks.

Pressing Key1 Once

Our second test test_pressKey1OneTime simulates the user clicking on the key button "1" once and checks that the keypad displays the value "1".

function test_pressKey1OneTime() {
    var key1 = findChild(keypad, "key1")
    var valueDisplay = findChild(keypad, "valueDisplay")
    mouseClick(key1)
    compare(keypad.value, "1")
    compare(valueDisplay.text, "1")
}

The test fails with the following error message.

TypeError: mouseClick requires an Item or Window type

This isn't surprising, because the keypad lacks a button completely. We add a button with the object name "key1" below the TextField for displaying the entered value in Keypad.qml.

TextField {
    ...
}
Button {
    id: key1
    objectName: "key1"
    text: "1"
}

The test case fails with a different error message.

test_pressKey1OneTime() window not shown

The error message says that test_pressKey1OneTime was run before the Keypad was shown in a window. The mouse click goes into the void because there is no visible item yet to click on. As a remedy, we instantiate Keypad inside a Window in the same way as in main.qml. We also tell the test case to execute the tests only after the window is shown, that is, when: windowShown. Here is the relevant code.

// File: tst_enter_number.qml
TestCase {
    name: "EnterIntegerNumber"
    when: windowShown

    Window {
        width: 400
        height: 300
        visible: true

        Keypad {
            id: keypad
            anchors.fill: parent
        }
    }
    ...

All the error messages are gone. The test fails, because the key button doesn't change the value yet when clicked. The simplest way to make the test pass is to set value to "1" when the button key1 is clicked.

Button {
    objectName: "key1"
    text: "1"
    onClicked: value = "1"
}

The check compare(keypad.value, "1") passes, whereas the check compare(valueDisplay.text, "1") fails. The first test test_pressKey1ZeroTimes fails as well. test_pressKey1ZeroTimes is executed after test_pressKey1OneTime. Hence, keypad.value is "1" when test_pressKey1ZeroTimes runs. Our tests depend on the execution order! We fix this by introducing an init function that initialises the Keypad properties properly. We move the checks from test_pressKey1ZeroTimes into init and remove test_pressKey1ZeroTimes. The QML TestCase component calls the init function before each test function.

function init() {
    keypad.value = ""
    compare(keypad.value, "")
    var valueDisplay = findChild(keypad, "valueDisplay")
    compare(valueDisplay.text, "")
}

The checks in init pass. The check of the displayed value in test_pressKey1OneTime still fails.
We fix this by binding the entered value to the text property of the TextField.

TextField {
    id: valueDisplay
    text: value
    ...

The test passes now. The KeypadApp displays 1 instead of nothing when we click the button key1.

Pressing Key1 Many Times

No matter how often we press the button key1 in the KeypadApp, the displayed value is always 1. This leads to the following test.

function test_pressKey1TwoTimes() {
    var key1 = findChild(keypad, "key1")
    var valueDisplay = findChild(keypad, "valueDisplay")
    mouseClick(key1)
    mouseClick(key1)
    compare(keypad.value, "11")
    compare(valueDisplay.text, "11")
}

The test fails, because the displayed value is still "1" instead of "11". We append "1" to value when the button key1 is clicked. Then our new test passes.

Button {
    objectName: "key1"
    text: "1"
    onClicked: value += "1"
}

Entering Integer Numbers

We lay out the buttons for the digit, plus-minus and clear keys in a grid layout with three columns. The plus-minus and clear keys are disabled. The beginning of the grid looks like this.

TextField { id: valueDisplay; ... }
GridLayout {
    columns: 3
    Button {
        objectName: "key1"
        text: "1"
        onClicked: value += "1"
    }
    Button {
        objectName: "key2"
        text: "2"
        onClicked: value += "2"
    }
    Button {
        objectName: "key3"
        text: "3"
        onClicked: value += "3"
    }
    ...

We make the test functions more readable by storing the results of the findChild calls in member properties of TestCase. We do this for all 10 digit buttons and the valueDisplay.

property QtObject valueDisplay: findChild(keypad, "valueDisplay")
property QtObject key1: findChild(keypad, "key1")
property QtObject key2: findChild(keypad, "key2")
...

We replace the two existing tests by tests for entering the numbers 12007, 38398 and 43566. These numbers cover all digits. The test test_enterNumber12007 would look as follows. The other tests look similar.
function test_enterNumber12007() {
    mouseClick(key1)
    mouseClick(key2)
    mouseClick(key0)
    mouseClick(key0)
    mouseClick(key7)
    compare(keypad.value, "12007")
    compare(valueDisplay.text, "12007")
}

The code at this point is available under the tag tdd-qml-enter-number.

Accepting and Cancelling Entered Numbers

Before the user enters a new number with the keypad, the client of the Keypad (the TestCase in our case) stores the current value of the parameter in the property acceptedValue. When the user accepts the entered number, the client stores the new number keypad.value in the property acceptedValue. On a harvester, the client would close the keypad and send the new number to the responsible ECU so that the ECU can change the parameter accordingly. When the user rejects or cancels the entered number, the client doesn't change acceptedValue.

The tests for accepting and cancelling an entered number look as follows. Checking for the empty string when cancelling is not quite right, but it helps to get the code in place. We'll come back to this a little bit later.

function test_acceptEnteredNumber() {
    mouseClick(key4)
    mouseClick(key3)
    mouseClick(ok)
    compare(acceptedValue, "43")
}

function test_cancelEnteredNumber() {
    mouseClick(key4)
    mouseClick(key3)
    mouseClick(cancel)
    compare(acceptedValue, "")
}

We add the Cancel and OK buttons at the bottom of the keypad. When the Cancel button is clicked, it emits the signal rejected from the Keypad. Similarly, the OK button emits the signal accepted. They are defined at the beginning of Keypad.

signal accepted()
signal rejected()
...
GridLayout {
    ...
}
RowLayout {
    Button {
        objectName: "cancel"
        text: "Cancel"
        onClicked: rejected()
    }
    Button {
        objectName: "ok"
        text: "OK"
        onClicked: accepted()
    }
}

The Keypad instance in the TestCase catches the signals and sets acceptedValue accordingly.

property string acceptedValue: ""

Window {
    Keypad {
        ...
        onAccepted: acceptedValue = keypad.value
        onRejected: acceptedValue = ""
    }
    ...

The new and the old tests pass.
It's time to initialise acceptedValue with the current parameter value (here "77") instead of the empty string, and keypad.value with acceptedValue. We do this in the init function of the TestCase.

function init() {
    acceptedValue = "77"
    keypad.value = acceptedValue
    compare(keypad.value, acceptedValue)
    compare(valueDisplay.text, acceptedValue)
}

This change makes all tests fail, because all the entered numbers are now prefixed with "77". We fix test_cancelEnteredNumber by changing the final check to compare(acceptedValue, "77") and by removing the onRejected handler in Keypad. The other tests require a little bit more work. When the first digit button is pressed, keypad.value is assigned the pressed digit. From then on, the digits are appended to keypad.value again. We introduce the function addDigit to handle these two cases.

function addDigit(digit) {
    if (!wasDigitEntered) {
        wasDigitEntered = true
        value = digit
        return
    }
    value += digit
}

The property wasDigitEntered is initialised with false when Keypad is created. We must do this explicitly in the init function of the TestCase. When addDigit is called for the first time, wasDigitEntered is set to true and value is assigned the clicked digit. From then on, wasDigitEntered is true and the digits are appended to value. We make the onClicked handlers call the new function addDigit. For example, button key4 looks like this.

Button {
    objectName: "key4"
    text: "4"
    onClicked: addDigit("4")
}

All the tests pass again. The code at this point is available under the tag tdd-qml-accept-cancel.

Ignoring Leading Zeros

The user must not be able to enter numbers with leading zeros like "00", "04" and "007". The keypad shall display these numbers as "0", "4" and "7", respectively. Here is the test for clicking key0 twice.

function test_ignoreLeadingZeros_00() {
    mouseClick(key0)
    mouseClick(key0)
    mouseClick(ok)
    compare(acceptedValue, "0")
}

The new test fails. The actual value is "00" and the expected value "0".
Leading zeros arise if the entered value is "0" and the user presses any digit button. The resulting number should simply be the digit from the button pressed. We replace value by the pressed digit, which is the if-case in addDigit. Only the if-condition changes.

function addDigit(digit) {
    if (!wasDigitEntered || value == "0") {
        wasDigitEntered = true
        value = digit
        return
    }
    value += digit
}

All tests pass. The solution looks too simple to be right. So, we test "04" and "007". These tests also pass. Our doubts were unfounded. The code at this point is available under the tag tdd-qml-ignore-zeros.

The Keypad So Far

The keypad application looks as follows. If we enter the number 396 and press Cancel, the keypad will still display 568. If we enter 443 and press OK, the keypad will display 443. So, we can try out the functionality we just developed using TDD.

We implemented the first five requirements from our test list using TDD on QML code. That's pretty good! Requirements 6 and 7 (clear and reset button) are straightforward. Requirements 8 and 9 (increment & decrement and range check) are more challenging, but not too much when TDD guides us. Of course, the visual appearance needs some serious improvement. Testing the visual appearance would be futile. Our unit tests help us make sure that we don't break the functionality while changing the appearance. Most importantly, we have a proper setup for test-driving QML code. And the keypad example demonstrates that applying TDD to GUI code is not only possible but makes a lot of sense.
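The entry logic the tests drove out (the first digit replaces the old value, later digits append, and a lone "0" is replaced rather than extended) is a small state machine that can be mirrored outside QML. A Python sketch of addDigit and wasDigitEntered (the class itself is my own illustration; the keypad keeps this state in QML properties):

```python
class DigitEntry:
    # Mirrors the keypad's value / wasDigitEntered state and its addDigit() function.
    def __init__(self, value=""):
        self.value = value              # keypad opens showing the current parameter value
        self.was_digit_entered = False  # initial state: no key pressed yet

    def add_digit(self, digit):
        # First press, or a lone "0": replace the displayed value.
        if not self.was_digit_entered or self.value == "0":
            self.was_digit_entered = True
            self.value = digit
            return
        # Entry state: append the digit.
        self.value += digit

entry = DigitEntry("77")
for d in "1200":
    entry.add_digit(d)
print(entry.value)  # first press replaced "77", so this prints 1200
```

Starting from "0" and pressing "0", "0", "7" yields "7", matching the leading-zero tests above.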
https://embeddeduse.com/2021/11/18/developing-a-qml-keypad-with-tdd/
Welcome to the fifth article on beginning game development. At this point we have a working 3D environment and can manipulate the camera direction and location using the keyboard and mouse. In this article we are going to add 3D objects to the game using predefined mesh files and implement some simple culling. The cleanup in this article consists mainly of fixing the navigation keys and removing some items we no longer need. The following changes have already been integrated into the code for this article:

- _z
- _deltaTime

You may have noticed that some of the classes, such as the Keyboard and Mouse classes, implement the IDisposable interface. This is an implementation of the Dispose pattern in .NET, as explained in the .NET Framework Reference topic Implementing Finalize and Dispose to Clean Up Unmanaged Resources. The Dispose pattern in .NET is intended to be used when a program makes use of resources that are not managed by the .NET runtime. These "unmanaged" resources need to be cleaned up in a special way to ensure that they are released in a deterministic manner. Since .NET garbage collection is non-deterministic, we need to use a particular set of steps to ensure this cleanup is done correctly. These steps are defined in the Dispose pattern. We use a lot of unmanaged resources in game development, so it is best to implement the Dispose pattern in every class that interacts with DirectX or file resources (almost all of our classes). This will protect us against memory leaks and increase the performance of the game. You can read the topic mentioned above to get more background information on this pattern and garbage collection for .NET. I have added the Dispose pattern in all of the classes for this article, and will do so going forward.

What we need in BattleTank 2005 now are units. If you go back to the screenshot of the original game shown in the first article, you see that we need to add shapes and opposing tanks.
The shapes are obstacles for you or the enemy tanks to use as cover. The enemy tanks are what we are going to eventually shoot at. These objects also aid us in navigation in the otherwise bare landscape. You may remember that we had to write out the camera location to the console to see that we were moving because there were no reference points in the scene. In the next article we will complete the scene by adding some terrain. For BattleTank 2005 we are going to have two types of 3D objects: Obstacles and Tanks. The main difference between them is that tanks can move and obstacles can not. Since we are going to have lots of obstacles and tanks, we are going to organize them so that they can be manipulated in bulk. We do this by adding them to a collection. We could add both types of objects to a single collection, but it makes sense to separate each type into its own collection. Separating the objects in this manner allows us to concentrate on a particular group without incurring the overhead of testing each unit for its type, which improves performance. We can then update the position of the mobile units while skipping the stationary units. In previous versions of .NET, creating a collection class to hold the units and ensuring that it was type-safe was pretty involved. Each collection in .NET 1.0 and 1.1 was a collection of objects. This meant that it accepted any value or reference type, making it very flexible. But this also meant that we had to cast the object to its proper type whenever we retrieved an object from the collection. This meant we ran the danger of the object cast failing. Accounting for the possible exception and testing each object before the cast incurred more overhead. With .NET 2.0 we can use Generics to create type-safe collections for us with minimum effort. These collection classes are safer and perform better than regular collection. This is only one of the possible uses of generics, but probably the most common. 
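The payoff of keeping obstacles and tanks in two homogeneous collections shows up in the per-frame update loop: only the mobile units are iterated for movement, with no type checks or casts. A rough Python sketch of that idea (the class and method names here are invented for illustration; the article's actual code is C#):

```python
class Obstacle:
    def __init__(self, position):
        self.position = position  # obstacles never move

class Tank:
    def __init__(self, position, velocity):
        self.position = position
        self.velocity = velocity

    def update(self, delta_time):
        # Advance the tank along its velocity vector for this frame.
        self.position = tuple(
            p + v * delta_time for p, v in zip(self.position, self.velocity)
        )

obstacles = [Obstacle((0.0, 0.0, 0.0))]
tanks = [Tank((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))]

def update_scene(delta_time):
    # Obstacles are skipped entirely: no per-unit type test, no wasted work.
    for tank in tanks:
        tank.update(delta_time)

update_scene(0.5)
print(tanks[0].position)  # (0.5, 0.0, 0.0)
```

In the C# version the same separation would be expressed with two generic lists, e.g. List&lt;Obstacle&gt; and List&lt;Tank&gt;, which is exactly the type-safe collection usage the article describes.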
To learn more about generics check out these articles at MSDN:

All units, regardless of their purpose, have a number of things in common. To make the application easier to maintain, we are going to factor all of the common characteristics of the units into a single base class. All units will derive from the common base class and be extended with custom properties and methods. Creating a base class and derived classes creates an object hierarchy that then allows us to treat all units polymorphically, which is a very powerful concept in object-oriented programming. Check out these Visual Studio .NET topics on polymorphism at MSDN:

The resulting base class looks like this.

Visual C#

public abstract class UnitBase : IDisposable
{
    public UnitBase(Device device, string meshFile, Vector3 position, float scale)
    public void Render(Camera camera)
    public bool IsCulled
    public Vector3 Position
    public float Radius
    private void LoadMesh()
    private void ComputeRadius()
    private Vector3 _position;
    private float _radius;
    private bool _isCulled;
    private Device _device;
    private string _meshFile;
    private float _scale;
    private Mesh _mesh = null;
    private Material[] _meshMaterials;
    private Texture[] _meshTextures;
}

Visual Basic

Public MustInherit Class UnitBase
    Implements IDisposable
    Public Sub New(ByVal device As Device, _
        ByVal meshFile As String, ByVal position As Vector3, _
        ByVal scale As Single)
    Public Sub Render(ByVal camera As Camera)
    Public Property IsCulled() As Boolean
    Public Property Position() As Vector3
    Public Property X() As Single
    Public Property Y() As Single
    Public Property Z() As Single
    Public ReadOnly Property Radius() As Single
    Public Sub Dispose() Implements IDisposable.Dispose
    Protected Overridable Sub Dispose(ByVal disposing As Boolean)
    Protected Overrides Sub Finalize()
    Private _disposed As Boolean
    Private Sub LoadMesh()
    Private Sub ComputeRadius()
    Private m_position As Vector3
    Private m_radius As Single
    Private m_isCulled As Boolean
    Private m_device As Device
    Private m_meshFile As String
    Private m_scale As Single
    Private m_mesh As Mesh = Nothing
    Private m_meshMaterials As Material()
    Private m_meshTextures As Texture()
End Class

While most of this should make sense, you may be wondering what a mesh file is and why we are loading it. At some point you realize that creating complex 3D objects in code line by line is pretty silly. The simple cube we created for the skybox alone was almost 200 lines of code. There has to be a better way to create 3D objects. Most 3D models are created by artists using dedicated modeling programs such as Maya or 3ds Max. These programs store the information about their models in proprietary file formats (iff for Maya and 3ds for 3ds Max). DirectX can not directly read these file formats, but it can read a format called the X file.

DirectX defines a file format called the X file. It contains the definition for a 3D model. We can use these files to dramatically reduce the amount of code we need to write when loading 3D models into our game by loading them from the X file. In DirectX lingo, the X files are mesh files. If you remember from our previous article, a mesh is data that describes a 3D shape. Mesh data includes a list of vertices that comprise the shape, information on how the vertices are connected to each other, and texture information for all vertices. There are also conversion programs available that convert 3D object files from other formats into the X file format. The DirectX SDK actually ships with plug-ins for Maya and 3ds Max that allow you to convert files created with these programs into the X file format.

So where can you get some X files to start working with? The DirectX SDK includes a folder called Media under the Samples folder which contains a number of X files that you can use. The DirectX SDK also includes a number of utilities that allow you to work with X files, such as the DirectX Viewer and MeshViewer.
All of the utilities are located in the Utilities folder. Check out these and the other utilities that come with the SDK, as they can save you time and effort.

Free 3D models: Unless you are very graphically talented, it is probably best for an artist to create the models for your game. There are a large number of free 3D models available on the internet, created by artists to show off their skills. Generally they don't mind if you use their work for your own gratification, but if you use it in a commercial game, that's a whole other issue. Make sure to read and understand the usage rules for each model before using it. A good site for free models is:

Using X files with predefined models opens up an entirely new world of integrating 3D objects into our game. No longer do we need to worry about the low-level details of each object; we can simply load a previously created mesh file. The first place we are going to use this newfound knowledge is to clean up the skybox code. The SDK includes a file called lobby_skybox.x in the Samples\Media\Lobby folder that describes a cube like the one we are currently using for our sky box. I copied this file to the Resources folder as skybox.x and updated the texture files to match the names of our texture files.

Changing textures: Most X files can be opened in a simple text editor. You can search the file for the references to the texture files (look for TextureFilename) and replace them with your own, or you can substitute your own texture files during the mesh loading phase by passing the texture file name to the TextureLoader.FromFile method. If you want to see something cool, copy the lobby_skybox.x file and all of the JPG files from the SDK media folder to the resource folder, then change the name of the X file in the skybox LoadMesh method to this X file. The resulting skybox is the lobby of the building where all the game developers work at Microsoft.
In the Skybox class, remove the SetupCubeFaces method, the six methods starting with Copy (CopyLeftFaceVertexBuffer, CopyFrontFaceVertexBuffer, etc.) and the RenderFace method. You can also remove all of the private variables declared at the bottom of the class except for the Device variable. In the Render method, remove the lines of code that check the Pitch and Heading of the camera. Finally, replace the call to SetupCubeFaces in the constructor with a call to LoadMesh. Now add the following three variable declarations at the bottom of the class.

private Mesh _mesh = null;
private Material[] _meshMaterials;
private Texture[] _meshTextures;

Private m_mesh As Mesh = Nothing
Private m_meshMaterials As Material()
Private m_meshTextures As Texture()

We already know what meshes and textures are, but what is a material? Materials describe how polygons reflect ambient and diffuse light, as well as information about specular highlights and whether the polygons appear to emit light. The main thing to remember is that while textures define how polygons look, materials define how they reflect light. Loading the mesh from a file is very simple: all you have to do is call the FromFile method of the Mesh class. (You should spend some time familiarizing yourself with the Mesh class and its methods, since it is one of the core classes you will use.) In the Skybox class add the following code.
private void LoadMesh ( )
{
    ExtendedMaterial[] materials = null;

    Directory.SetCurrentDirectory ( Application.StartupPath + @"\..\..\..\Resources\" );
    _mesh = Mesh.FromFile ( @"skybox.x", MeshFlags.SystemMemory, _device, out materials );

    if ( ( materials != null ) && ( materials.Length > 0 ) )
    {
        _meshTextures = new Texture[materials.Length];
        _meshMaterials = new Material[materials.Length];

        for ( int i = 0 ; i < materials.Length ; i++ )
        {
            _meshMaterials[i] = materials[i].Material3D;
            _meshMaterials[i].Ambient = _meshMaterials[i].Diffuse;

            if ( materials[i].TextureFilename != null && ( materials[i].TextureFilename != string.Empty ) )
                _meshTextures[i] = TextureLoader.FromFile ( _device, materials[i].TextureFilename );
        }
    }
}

Private Sub LoadMesh()
    Dim materials As ExtendedMaterial() = Nothing

    Directory.SetCurrentDirectory(Application.StartupPath _
        & "\..\..\..\Resources\")
    m_mesh = Mesh.FromFile("skybox.x", MeshFlags.SystemMemory, _
        m_device, materials)

    If (Not (materials Is Nothing)) AndAlso (materials.Length > 0) Then
        m_meshTextures = New Texture(materials.Length - 1) {}
        m_meshMaterials = New Material(materials.Length - 1) {}

        For i As Integer = 0 To materials.Length - 1
            m_meshMaterials(i) = materials(i).Material3D
            m_meshMaterials(i).Ambient = m_meshMaterials(i).Diffuse

            If Not (materials(i).TextureFilename Is Nothing) AndAlso _
                (Not (materials(i).TextureFilename = String.Empty)) Then
                m_meshTextures(i) = TextureLoader.FromFile( _
                    m_device, materials(i).TextureFilename)
            End If
        Next
    End If
End Sub

While DirectX handles most of the work of reading the X file and converting it into an object for us (i.e. creating the vertex and index buffers, etc.), we have to manage loading the materials and textures for the mesh ourselves. The last step to integrate the X file and textures into the Skybox is to modify the Render method of the Skybox class.
Immediately after the _device.RenderState.CullMode = Microsoft.DirectX.Direct3D.Cull.None; line, add the following code.

_device.RenderState.CullMode = Microsoft.DirectX.Direct3D.Cull.None;

for ( int i = 0 ; i < _meshMaterials.Length ; i++ )
{
    _device.Material = _meshMaterials[i];
    _device.SetTexture ( 0, _meshTextures[i] );
    _mesh.DrawSubset ( i );
}

For i As Integer = 0 To m_meshMaterials.Length - 1
    m_device.Material = m_meshMaterials(i)
    m_device.SetTexture(0, m_meshTextures(i))
    m_mesh.DrawSubset(i)
Next

Once again we iterate over the mesh materials and call the DrawSubset method of the Mesh to draw each subset in turn. That's it; the entire Skybox class is now only about 80 lines long and much easier to read.

Returning to the UnitBase class, the next item we need to look at is the IsCulled flag. We briefly covered culling in the third article. Culling is simply the removal of entire objects from the scene so they will not be rendered. The logic that determines which objects should be removed can range from the very simple to the very complex. In BattleTank 2005 we are going to cull all of the objects that do not fall into the view frustum of the scene. To determine whether a unit is in the view frustum, we enhance the Camera class to provide information about the current view frustum so that we can check each object and see whether it falls inside or outside of the frustum. To perform this check, we use the Radius property of the UnitBase class, which is computed in the ComputeRadius method. The ComputeBoundingSphere method of the Geometry class (the same class we already use for converting radians to degrees and vice versa) computes a sphere that completely contains all the points in the mesh, using the vertex data of the mesh. The mesh keeps this information in its vertex buffer. To access this buffer, lock it before access and unlock it when done. You should also make sure to dispose of the vertex buffer when you are done.
The safest way to do this is to leverage the using statement, which ensures that Dispose is called regardless of what happens.

private void ComputeRadius ( )
{
    using ( VertexBuffer vertexBuffer = _mesh.VertexBuffer )
    {
        GraphicsStream gStream = vertexBuffer.Lock ( 0, 0, LockFlags.None );
        Vector3 tempCenter;

        _radius = Geometry.ComputeBoundingSphere ( gStream, _mesh.NumberVertices, _mesh.VertexFormat, out tempCenter ) * _scale;
        vertexBuffer.Unlock ( );
    }
}

Private Sub ComputeRadius()
    Dim vertexBuffer As VertexBuffer = Nothing
    Try
        vertexBuffer = m_mesh.VertexBuffer
        Dim gStream As GraphicsStream = _
            vertexBuffer.Lock(0, 0, LockFlags.None)
        Dim tempCenter As Vector3

        m_radius = Geometry.ComputeBoundingSphere(gStream, _
            m_mesh.NumberVertices, m_mesh.VertexFormat, tempCenter) * _
            m_scale
    Finally
        vertexBuffer.Unlock()
        vertexBuffer.Dispose()
    End Try
End Sub

You can probably see why using the radius of the object is a very rough way of culling objects. It works fine and is fairly accurate for simple shapes, but for more complex shapes the bounding sphere becomes much larger than the object itself. In a commercial game a lot of effort is invested in creating a culling routine that eliminates as many objects as possible. We could also get fancy here and identify objects that are only partially in the frustum, but without the corresponding ability to render only a portion of a unit, we would just be wasting our time. Instead we will treat any unit that has even a single point in the view frustum as being completely in the frustum. So now that we have the radius of the unit, we need to know what the frustum is so we can check the radius against it. Adding the frustum information to the camera is easy. First we add some data structures to hold the information about the view frustum. If you recall from the previous article, a frustum resembles a pyramid with the top cut off.
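Before moving on to the frustum itself, the bounding-sphere idea can be made concrete with a small, framework-independent sketch. This is an illustration of the concept only, written in Python for brevity: a simple centroid-based sphere, similar in spirit to (but not necessarily identical with) the algorithm Geometry.ComputeBoundingSphere applies to the mesh's vertex data.

```python
import math

# Illustration only: compute a rough bounding sphere for a list of
# (x, y, z) vertices by taking the centroid as the center and the
# distance to the farthest vertex as the radius.
def bounding_sphere(vertices):
    n = len(vertices)
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    radius = max(
        math.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)
        for x, y, z in vertices
    )
    return (cx, cy, cz), radius

# The eight corners of a 2 x 2 x 2 cube: center (0, 0, 0), radius sqrt(3).
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
center, radius = bounding_sphere(cube)
print(center, radius)
```

As in ComputeRadius above, a real implementation would also multiply the result by the unit's scale factor before storing it.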
This means that we need to store the corners of the near and far squares and the plane for each side. For the corners we use an array of the familiar Vector3 structure. For the planes, DirectX provides a convenient Plane structure.

private Vector3[] _frustumCorners;
private Plane[] _frustumPlanes;

We then initialize the arrays in the constructor (two squares with four corners each makes 8 points, and the four sides of the frustum plus the near and far faces make six planes).

_frustumCorners = new Vector3[8];
_frustumPlanes = new Plane[6];

m_frustumCorners = New Vector3(7) {}
m_frustumPlanes = New Plane(5) {}

The next step is to compute the frustum using the current view and projection matrices.

private void ComputeViewFrustum ( )
{
    Matrix matrix = _viewMatrix * _perspectiveMatrix;
    matrix.Invert ( );

    _frustumCorners[0] = new Vector3 ( -1.0f, -1.0f, 0.0f ); // xyz
    _frustumCorners[1] = new Vector3 (  1.0f, -1.0f, 0.0f ); // Xyz
    _frustumCorners[2] = new Vector3 ( -1.0f,  1.0f, 0.0f ); // xYz
    _frustumCorners[3] = new Vector3 (  1.0f,  1.0f, 0.0f ); // XYz
    _frustumCorners[4] = new Vector3 ( -1.0f, -1.0f, 1.0f ); // xyZ
    _frustumCorners[5] = new Vector3 (  1.0f, -1.0f, 1.0f ); // XyZ
    _frustumCorners[6] = new Vector3 ( -1.0f,  1.0f, 1.0f ); // xYZ
    _frustumCorners[7] = new Vector3 (  1.0f,  1.0f, 1.0f ); // XYZ

    for ( int i = 0 ; i < _frustumCorners.Length ; i++ )
        _frustumCorners[i] = Vector3.TransformCoordinate ( _frustumCorners[i], matrix );

    // Now calculate the planes
    _frustumPlanes[0] = Plane.FromPoints ( _frustumCorners[0], _frustumCorners[1], _frustumCorners[2] ); // Near
    _frustumPlanes[1] = Plane.FromPoints ( _frustumCorners[6], _frustumCorners[7], _frustumCorners[5] ); // Far
    _frustumPlanes[2] = Plane.FromPoints ( _frustumCorners[2], _frustumCorners[6], _frustumCorners[4] ); // Left
    _frustumPlanes[3] = Plane.FromPoints ( _frustumCorners[7], _frustumCorners[3], _frustumCorners[5] ); // Right
    _frustumPlanes[4] = Plane.FromPoints ( _frustumCorners[2], _frustumCorners[3], _frustumCorners[6] ); // Top
    _frustumPlanes[5] = Plane.FromPoints ( _frustumCorners[1], _frustumCorners[0], _frustumCorners[4] ); // Bottom
}

Private Sub ComputeViewFrustum()
    Dim matrix As Matrix = m_viewMatrix * m_perspectiveMatrix
    matrix.Invert()

    m_frustumCorners(0) = New Vector3(-1.0F, -1.0F, 0.0F)
    m_frustumCorners(1) = New Vector3(1.0F, -1.0F, 0.0F)
    m_frustumCorners(2) = New Vector3(-1.0F, 1.0F, 0.0F)
    m_frustumCorners(3) = New Vector3(1.0F, 1.0F, 0.0F)
    m_frustumCorners(4) = New Vector3(-1.0F, -1.0F, 1.0F)
    m_frustumCorners(5) = New Vector3(1.0F, -1.0F, 1.0F)
    m_frustumCorners(6) = New Vector3(-1.0F, 1.0F, 1.0F)
    m_frustumCorners(7) = New Vector3(1.0F, 1.0F, 1.0F)

    For i As Integer = 0 To m_frustumCorners.Length - 1
        m_frustumCorners(i) = _
            Vector3.TransformCoordinate(m_frustumCorners(i), matrix)
    Next

    ' Now calculate the planes
    m_frustumPlanes(0) = Plane.FromPoints(m_frustumCorners(0), _
        m_frustumCorners(1), m_frustumCorners(2)) ' Near
    m_frustumPlanes(1) = Plane.FromPoints(m_frustumCorners(6), _
        m_frustumCorners(7), m_frustumCorners(5)) ' Far
    m_frustumPlanes(2) = Plane.FromPoints(m_frustumCorners(2), _
        m_frustumCorners(6), m_frustumCorners(4)) ' Left
    m_frustumPlanes(3) = Plane.FromPoints(m_frustumCorners(7), _
        m_frustumCorners(3), m_frustumCorners(5)) ' Right
    m_frustumPlanes(4) = Plane.FromPoints(m_frustumCorners(2), _
        m_frustumCorners(3), m_frustumCorners(6)) ' Top
    m_frustumPlanes(5) = Plane.FromPoints(m_frustumCorners(1), _
        m_frustumCorners(0), m_frustumCorners(4)) ' Bottom
End Sub

First we combine the view and projection matrices by multiplying them, then invert the result. Next we initialize the eight corners of the frustum as a cube immediately in front of the camera. These corners are then transformed and used to create the six planes using the FromPoints method of the Plane structure. The frustum is computed upon initialization of the class and on each render loop.
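With the six planes in hand, culling reduces to a signed-distance test between each plane and a unit's bounding sphere. Here is a rough, framework-independent sketch of that test in plain Python (purely for illustration; the game code stays in C#/VB.NET): a plane is a tuple (A, B, C, D) with its normal pointing into the frustum, and a sphere is rejected when its signed distance to any plane is less than or equal to minus its radius.

```python
# Illustration only: the sphere-vs-plane test behind frustum culling.
def signed_distance(plane, point):
    a, b, c, d = plane
    x, y, z = point
    return a * x + b * y + c * z + d

def is_in_view_frustum(planes, center, radius):
    for plane in planes:
        if signed_distance(plane, center) <= -radius:
            return False  # sphere lies completely behind this plane
    return True

# A toy "frustum": the cube -1 <= x, y, z <= 1 described by six
# inward-facing planes (A, B, C, D).
box_planes = [
    (1, 0, 0, 1), (-1, 0, 0, 1),
    (0, 1, 0, 1), (0, -1, 0, 1),
    (0, 0, 1, 1), (0, 0, -1, 1),
]

print(is_in_view_frustum(box_planes, (0, 0, 0), 0.5))    # True: fully inside
print(is_in_view_frustum(box_planes, (5, 0, 0), 0.5))    # False: fully outside
print(is_in_view_frustum(box_planes, (1.2, 0, 0), 0.5))  # True: straddles a plane
```

Note the third case: a sphere that merely overlaps a plane is kept, which matches the decision above to treat any unit with even a single point in the frustum as fully visible.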
Add a call to ComputeViewFrustum() to the bottom of the constructor of the Camera class and at the end of the Render method of the Camera class. (It needs to be at the end so it can use the newly computed view and projection matrices.) Now we can use the computed frustum and the radius of each unit to determine whether any part of the unit is in the frustum and should be rendered. The IsInViewFrustum method returns true if the unit is inside the frustum and false otherwise.

public bool IsInViewFrustum ( UnitBase unitToCheck )
{
    foreach ( Plane plane in _frustumPlanes )
    {
        if ( plane.A * unitToCheck.Position.X + plane.B * unitToCheck.Position.Y + plane.C * unitToCheck.Position.Z + plane.D <= ( -unitToCheck.Radius ) )
            return false;
    }

    return true;
}

Public Function IsInViewFrustum(ByVal unitToCheck As UnitBase) As Boolean
    For Each plane As Plane In m_frustumPlanes
        If plane.A * unitToCheck.Position.X + plane.B * _
            unitToCheck.Position.Y + plane.C * unitToCheck.Position.Z + _
            plane.D <= (-unitToCheck.Radius) Then
            Return False
        End If
    Next
    Return True
End Function

The culling check should take place before the unit is rendered. We accomplish this in the Render method of the UnitBase class by checking against the camera's frustum before actually rendering the mesh. By placing the check inside the Render method itself, we avoid having to check the cull state elsewhere in the code.

if ( camera.IsInViewFrustum ( this ) == false )
    return;

If camera.IsInViewFrustum(Me) = False Then
    Return
End If

Now that we have the basic infrastructure in place, it's time to start adding units. But the UnitBase class is abstract, so it cannot be instantiated. The entire purpose of the base class was to keep common unit properties and functionality together. Now it's time to create some classes that represent the objects we are going to use in BattleTank 2005, namely obstacles and tanks.
public class Obstacle : UnitBase
{
    public Obstacle ( Device device, string meshFile, Vector3 position, float scale )
        : base ( device, meshFile, position, scale )
    {
    }
}

Public Class Obstacle
    Inherits UnitBase

    Public Sub New(ByVal device As Device, ByVal meshFile As String, _
        ByVal position As Vector3, ByVal scale As Single)
        MyBase.New(device, meshFile, position, scale)
    End Sub
End Class

The Obstacle class currently does nothing more than call its base class, but we are going to add to this class later on.

public class Tank : UnitBase
{
    public Tank ( Device device, string meshFile, Vector3 position, float scale )
        : base ( device, meshFile, position, scale )
    {
    }

    public void Update ( float deltaTime )
    {
        base.Z -= ( _speed * deltaTime );
    }

    private float _speed = 10.0f;
}

Public Class Tank
    Inherits UnitBase

    Public Sub New(ByVal device As Device, ByVal meshFile As String, _
        ByVal position As Vector3, ByVal scale As Single)
        MyBase.New(device, meshFile, position, scale)
    End Sub

    Public Sub Update(ByVal deltaTime As Single)
        MyBase.Z -= (m_speed * deltaTime)
    End Sub

    Private m_speed As Single = 10.0F
End Class

The Tank class adds a _speed field and an Update method. The Update method takes a float that describes the amount of time, in seconds, that has passed since the last render loop. In the second article we added a variable called deltaTime which we used to compute the frame rate; this is the value we want to use from now on to compute the position of moving objects. We use elapsed time to ensure a similar experience on every computer, regardless of its speed, and to make the movement appear fluid. For example, if we updated the position of a moving object by 1 on each pass through the render loop, the object would move faster on faster computers, since they complete a render loop more quickly. You may have seen this behavior when experimenting with the rotating cube.
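The frame-rate argument above can be made concrete with a small, framework-independent sketch (Python used purely for illustration; the game code itself stays in C#/VB.NET). Two simulated machines run at different frame rates, but because each frame moves the tank by speed * deltaTime, both cover the same distance over the same wall-clock second.

```python
# Illustration only: why movement is scaled by elapsed time.
def simulate(frames_per_second, total_seconds=1.0, speed=10.0):
    z = 0.0
    delta_time = 1.0 / frames_per_second
    for _ in range(int(total_seconds * frames_per_second)):
        z -= speed * delta_time  # same formula as Tank.Update
    return z

fast_machine = simulate(frames_per_second=100)
slow_machine = simulate(frames_per_second=25)
print(fast_machine, slow_machine)  # both approximately -10.0
```

If the loop body were instead z -= 1.0 (a fixed step per frame), the fast machine would travel four times as far in the same second, which is exactly the behavior the deltaTime approach avoids.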
Another problem is that each pass through the render loop does not take the same amount of time, depending on the other operations the CPU is performing, so the movement may appear choppy. (Again, the easiest way to understand this is to experiment: change the increment of the Z axis to one instead of the speed * time formula and see what happens.) Notice that we only need to update the tanks, since the obstacles do not move.

foreach ( Tank tank in _tanks )
{
    tank.Update ( _deltaTime );
}

For Each tank As Tank In m_tanks
    tank.Update(m_deltaTime)
Next

The actual Update method in the tank simply moves the tank towards the origin for now, using a predefined speed.

public void Update ( float deltaTime )
{
    base.Z -= ( _speed * deltaTime );
}

Public Sub Update(ByVal deltaTime As Single)
    MyBase.Z -= (m_speed * deltaTime)
End Sub

Next we create two generic collections in the GameEngine class to hold the stationary and mobile units.

private List<UnitBase> _obstacles;
private List<UnitBase> _tanks;

Private m_obstacles As List(Of UnitBase)
Private m_tanks As List(Of UnitBase)

The actual units are added to the collections in the CreateObstacles and CreateTanks methods.
private void CreateObstacles ( )
{
    _obstacles = new List<UnitBase> ( );

    _obstacles.Add ( new Obstacle ( _device, @"car.x", new Vector3 ( 0, 0, 200 ), 1f ) );
    _obstacles.Add ( new Obstacle ( _device, @"car.x", new Vector3 ( 60, 0, 100 ), 1f ) );
    _obstacles.Add ( new Obstacle ( _device, @"car.x", new Vector3 ( -60, 0, 150 ), 1f ) );
    _obstacles.Add ( new Obstacle ( _device, @"car.x", new Vector3 ( 60, 0, -100 ), 1f ) );
    _obstacles.Add ( new Obstacle ( _device, @"car.x", new Vector3 ( -60, 0, -150 ), 1f ) );
}

private void CreateTanks ( )
{
    _tanks = new List<UnitBase> ( );

    _tanks.Add ( new Tank ( _device, @"bigship1.x", new Vector3 ( 0, 0, 200 ), 1f ) );
    _tanks.Add ( new Tank ( _device, @"bigship1.x", new Vector3 ( 100, 0, 300 ), 1f ) );
    _tanks.Add ( new Tank ( _device, @"bigship1.x", new Vector3 ( -100, 0, 500 ), 1f ) );
    _tanks.Add ( new Tank ( _device, @"bigship1.x", new Vector3 ( 100, 0, -200 ), 1f ) );
    _tanks.Add ( new Tank ( _device, @"bigship1.x", new Vector3 ( -100, 0, -400 ), 1f ) );
}

Private Sub CreateObstacles()
    m_obstacles = New List(Of UnitBase)()

    m_obstacles.Add(New Obstacle(m_device, "car.x", _
        New Vector3(0, 0, 200), 1.0F))
    m_obstacles.Add(New Obstacle(m_device, "car.x", _
        New Vector3(60, 0, 100), 1.0F))
    m_obstacles.Add(New Obstacle(m_device, "car.x", _
        New Vector3(-60, 0, 150), 1.0F))
    m_obstacles.Add(New Obstacle(m_device, "car.x", _
        New Vector3(60, 0, -100), 1.0F))
    m_obstacles.Add(New Obstacle(m_device, "car.x", _
        New Vector3(-60, 0, -150), 1.0F))
End Sub

Private Sub CreateTanks()
    m_tanks = New List(Of UnitBase)

    m_tanks.Add(New Tank(m_device, "bigship1.x", _
        New Vector3(0, 0, 200), 1.0F))
    m_tanks.Add(New Tank(m_device, "bigship1.x", _
        New Vector3(100, 0, 300), 1.0F))
    m_tanks.Add(New Tank(m_device, "bigship1.x", _
        New Vector3(-100, 0, 500), 1.0F))
    m_tanks.Add(New Tank(m_device, "bigship1.x", _
        New Vector3(100, 0, -200), 1.0F))
    m_tanks.Add(New Tank(m_device, "bigship1.x", _
        New Vector3(-100, 0, -400), 1.0F))
End Sub

If you look at the coordinates for the obstacles and tanks, you will see that I placed them along similar axes to help me orient myself. You could also modify these methods to create a random number of obstacles and tanks placed at random coordinates. Just make sure to add logic to keep objects from overlapping each other or sitting so close to the origin that they obscure the camera. A more flexible and extensible solution is to read the number, type, and location of obstacles and tanks from a file. This file could also contain other play-specific settings and be loaded automatically based on some internal logic. The most common use of this approach is predefined levels: a player gains the ability to advance to more difficult levels after meeting some completion criteria. This lets you create an ever-changing play experience and fine-tune the playability of each level, something you can't do with random placement. Setting the game up this way also allows players to customize the game for themselves; you would normally provide a level editor when following this route. The final step is to call these methods. The best place is the constructor of the GameEngine class, right after the Camera class is created.

public GameEngine ( )
{
    InitializeComponent ( );

    this.SetStyle ( ControlStyles.AllPaintingInWmPaint | ControlStyles.Opaque, true );

    ConfigureInputDevices ( );
    ConfigureDevice ( );

    _skyBox = new SkyBox ( this._device );
    _camera = new Camera ( );

    CreateObstacles ( );
    CreateTanks ( );

    this.Size = new Size ( 800, 600 );
    HiResTimer.Start ( );
}

Public Sub New()
    InitializeComponent()

    Me.SetStyle(ControlStyles.AllPaintingInWmPaint Or _
        ControlStyles.Opaque, True)

    ConfigureInputDevices()
    ConfigureDevice()

    m_skyBox = New SkyBox(Me.m_device)
    m_camera = New Camera

    CreateObstacles()
    CreateTanks()

    Me.Size = New Size(800, 600)
    HiResTimer.Start()
End Sub

On each render loop, all we have to do is iterate over the appropriate collection and render the units that are in the frustum.
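The random-placement idea suggested above can be sketched in a few lines. This is an illustration only, in Python with made-up numbers (the spawn area, minimum gap, and camera keep-out radius are all assumptions, not values from the game): candidate positions are drawn at random and rejected when they sit too close to the origin, where the camera starts, or too close to a previously placed unit.

```python
import random

# Illustration only: rejection sampling for non-overlapping unit placement.
def place_units(count, area=500, min_gap=80, keep_out=50, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducibility
    placed = []
    while len(placed) < count:
        x = rng.uniform(-area, area)
        z = rng.uniform(-area, area)
        if x * x + z * z < keep_out ** 2:
            continue  # too close to the camera at the origin
        if any((x - px) ** 2 + (z - pz) ** 2 < min_gap ** 2
               for px, pz in placed):
            continue  # overlaps an already-placed unit
        placed.append((x, z))
    return placed

positions = place_units(5)
```

Simple rejection sampling like this works fine for a handful of units in a large arena; with many units in a small area it can loop for a long time, which is one more argument for reading predefined positions from a level file instead.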
We call the RenderUnits method in the Render method of the GameEngine class, immediately after rendering the skybox.

private void RenderUnits ( )
{
    foreach ( UnitBase ub in _obstacles )
    {
        ub.Render ( _camera );
    }

    foreach ( UnitBase ub in _tanks )
    {
        ub.Render ( _camera );
    }
}

Private Sub RenderUnits()
    For Each ub As UnitBase In m_obstacles
        ub.Render(m_camera)
    Next

    For Each ub As UnitBase In m_tanks
        ub.Render(m_camera)
    Next
End Sub

That's all; now we have units. If you run the game now, you will notice three things: there is no ground beneath the units, the units appear dark and colorless, and units can pass right through one another. The first problem can be solved by adding a terrain to the game to show a solid surface along the Y axis. The second problem can be solved by adding lights to the scene. The final issue will be solved once we add collision detection to the game. Other than these items, the game is getting more playable by the day. In the next article we are going to add the missing terrain using a heightmap, add lights so we can see the colors of the units, and add some collision detection. As you may have noticed, I have changed (and will continue to change) the various graphics in the game from article to article. This is intended to encourage you to swap in your own graphics and turn the underlying game into something completely different. You could easily set this game in space, or turn it into a powerboat racing game, just by changing the graphics. Give it a try. As usual, I ran out of time. In the last article I promised to add the HUD back, but I ran out of space. I will try to accomplish this in the coming articles. I also wanted to discuss action mapping to enhance the way we track keyboard and mouse input, but that subject will have to wait for a later article. In addition, I am planning to add a simple debugging console to the game.
Along with these features, the next articles are going to cover adding artificial intelligence to the opposing tanks, making our game conform to realistic physical forces, and adding some cool sounds to make playing the game more fun. So stay tuned. Until then: Happy coding.

Comments

Great articles so far. It is really helping me further my knowledge of game programming. Can I use DirectX 9.0 for rendering on the WPF control?

Yup!

You officially lost my attention here. I am almost hesitant to go any further. It seems you do something and then go back and change it so dramatically that it makes it impossible to follow along with the tutorials. At this point I think I know enough to be dangerous and that I should go to the resources you mentioned earlier in hopes of finding a better tutorial or a book. One thing I would like to see cleared up is the debate between Managed DirectX and XNA. I did a little research and I am confused as to which one I should be using. I don't want to use Managed DirectX if it won't be supported in the near future, and I am not really sure how different XNA is from what I have already learned in this tutorial. I think I need to shop for a book for beginners. I thought I was going to be able to drive a tank around a world, but up to this point it looks to me like the world moves around the tank. I guess I need baby steps, but as Carl said in the previous tutorial, these are starting to get worse and worse. I do appreciate the attempt, though. I am just trying to give you input in case you plan to improve them in the future.

If you're considering doing these tutorials for yourself, forget the crappy comments from these simpletons. I have novice/intermediate programming knowledge and I'm finding these tutorials god-like for what I need.
The difference between me and all the people complaining is that I really, really want to learn this stuff, and I'm spending hours playing around with the code making sure I understand 100% before I move on. For anyone getting disheartened by reading people's comments, just ignore them and try the tutorials for yourself (no offence intended to the people that lost interest). Can't wait to go on to the next page and add some lights and terrain!

I agree with Mike about the simpletons. Honestly, all it takes is real effort. I myself have been spending hours cramming all of the content down before going to the next section. To be honest, I haven't even opened the next section; not even peeked. And with the knowledge I have already gained, I have customized my own camera class apart from the one provided and am already working on an Objects class to render and load 3D objects onto the device. Each new object is given its own attributes as well as the default Translate, Rotate, and Scale attributes. The Objects class I made also allows me to get(), remove() and add() new attributes to any 3D object instance I want. So, to break it all down, I've been reading my eyes dry to make sure I've gotten all the material down pat before I head to the next section, even if it takes me a whole day to finish one section (or around 6 hours, as the tutorial estimates). It takes a little more effort than you think to learn it, but you'll only have to learn it once. ;)

Hey, I'd just like to say thank you for this tutorial, and all the time you put into it. I'm currently just finishing up part 5, and I have never been more confident in C# and DirectX development than I am now.
http://blogs.msdn.com/coding4fun/archive/2006/11/03/941679.aspx
Nice. How does this differ from the existing project at: Jon Brisbin wrote: > > I hope that this can help so of you, since I've struggled with finding a > good engine that would allow me to embed Python in my HTML pages. > Everything I've tried so far was way too complicated to install and > maintain. I'm a minimalist, so my implementation is contained in only 1 CGI > script! :-) > > There is some initial documentation, as well as a download spot, on my > website at > > My implementation of PSP gives you a few goodies like Request, Form, and > Page objects, with all the CGI environment going into the Request object > (i.e. Request.http_user_agent , etc...). If you're familiar with Embeded > PERL, you should have no problems whatsoever in installing and immediately > using PSP. If you're not, it's not that big a deal...you get used to the > [+..+] and [$ if Request.request_uri == "/this.html": $] ...code... [$ endif > $] symantics really quickly. > > Coming soon: Session, Application, and a few other neato objects. > > Please let me know what you think! > > Thanks! > > -- > > Jon Brisbin > > > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > Email: mail at jbrisbin.net > eFax.com Fax/Voice mailbox: 208.545.5368 > PGP public key: > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
https://mail.python.org/pipermail/python-list/2000-September/037391.html
I am new to the DM environment, but I want to use it to port a library from Linux and gcc. The DM IDDE really seems to be just what I need to set up a project with 200+ C++ classes in several subdirectories. Where I fall over, however, is when I try to compile one particular file, 'writer.cpp'. 'writer.h' has the includes: #include <string> #include <iostream> Compilation with 8.42n fails with two errors: Error: \dm\stlport\stlport\stl\_config.h(469): missing decl-specifier-seq for declaration of 'e'. This corresponds to the source line: namespace std { } and a consequent(?) error: Error: \dm\stlport\stlport\stl\_config.h(527): unidentified identifier 'std'. This corresponds to the source line: namespace stlport = _STLP_STD; What is the problem, please? JR Try reducing the file to the minimum that reproduces the error.
http://www.digitalmars.com/archives/cplusplus/4970.html
Introduction

In this article, we will see how we can perform time series analysis with the help of a recurrent neural network. We will be predicting the future stock prices of the Apple Company (AAPL), based on its stock prices of the past 5 years.

Dataset

The data that we are going to use for this article can be downloaded from Yahoo Finance. For training our algorithm, we will be using the Apple stock prices from 1st January 2013 to 31st December 2017. For the sake of prediction, we will use the Apple stock prices for the month of January 2018. So, in order to evaluate the performance of the algorithm, download the actual stock prices for the month of January 2018 as well.

Let's now see how our data looks. Open the Apple stock price training file that contains data for five years. You will see that it contains seven columns: Date, Open, High, Low, Close, Adj Close and Volume. We will be predicting the opening stock price; therefore we are not interested in the rest of the columns. If you plot the opening stock prices against the date, you will see the following plot:

You can see that the trend is highly non-linear and it is very difficult to capture it using this information alone. This is where the power of LSTM can be utilized. LSTM (Long Short-Term Memory network) is a type of recurrent neural network capable of remembering past information and, while predicting future values, taking this past information into account.

Enough of the preliminaries; let's see how LSTM can be used for time series analysis.

Predicting Future Stock Prices

Stock price prediction is similar to any other machine learning problem where we are given a set of features and we have to predict a corresponding value. We will perform the same steps as we perform in order to solve any machine learning problem. Follow these steps:

Import Libraries

The first step, as always, is to import the required libraries.
Execute the following script to do so:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

Import Dataset

Execute the following script to import the data set. For the sake of this article, the data has been stored in the Datasets folder, inside the "E" drive. You can change the path accordingly.

apple_training_complete = pd.read_csv(r'E:\Datasets\apple_training.csv')

As we said earlier, we are only interested in the opening price of the stock. Therefore, we will filter all the data from our training set and retain only the values for the Open column. Execute the following script:

apple_training_processed = apple_training_complete.iloc[:, 1:2].values

Data Normalization

As a rule of thumb, whenever you use a neural network, you should normalize or scale your data. We will use the MinMaxScaler class from the sklearn.preprocessing library to scale our data between 0 and 1. The feature_range parameter is used to specify the range of the scaled data. Execute the following script:

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range = (0, 1))
apple_training_scaled = scaler.fit_transform(apple_training_processed)

Convert Training Data to Right Shape

As I said earlier, in time series problems we have to predict a value at time T based on the data from days T-N, where N can be any number of steps. In this article, we are going to predict the opening stock price based on the opening stock prices for the past 60 days. I have tried and tested different numbers and found that the best results are obtained when the past 60 time steps are used. You can try different numbers and see how your algorithm performs. Our feature set should contain the opening stock price values for the past 60 days, while the label or dependent variable should be the stock price on the 61st day. Execute the following script to create the feature and label set.
features_set = []
labels = []
for i in range(60, 1260):
    features_set.append(apple_training_scaled[i-60:i, 0])
    labels.append(apple_training_scaled[i, 0])

In the script above we create two lists: features_set and labels. There are 1260 records in the training data. We execute a loop that starts from the 61st record and, on each iteration, stores the previous 60 records in the features_set list while the current record goes into the labels list. We need to convert both features_set and labels to numpy arrays before we can use them for training. Execute the following script:

features_set, labels = np.array(features_set), np.array(labels)

In order to train the LSTM on our data, we need to convert it into the shape accepted by the LSTM: a three-dimensional format. The first dimension is the number of records or rows in the feature set, which is 1200 in our case (1260 records minus the first 60, which only serve as history). The second dimension is the number of time steps, which is 60, while the last dimension is the number of indicators. Since we are only using one feature (Open), the number of indicators will be one. Execute the following script:

features_set = np.reshape(features_set, (features_set.shape[0], features_set.shape[1], 1))

Training The LSTM

We have preprocessed our data and converted it into the desired format. Now it is time to create our LSTM. The LSTM model that we are going to create will be a sequential model with multiple layers. We will add four LSTM layers to our model, followed by a dense layer that predicts the future stock price. Let's first import the libraries that we are going to need in order to create our model:

from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout

In the script above we imported the Sequential class from the keras.models library and the Dense, LSTM, and Dropout classes from the keras.layers library. As a first step, we need to instantiate the Sequential class.
This will be our model class, and we will add LSTM, Dropout and Dense layers to this model. Execute the following script:

```python
model = Sequential()
```

Creating LSTM and Dropout Layers

Let's add an LSTM layer to the model that we just created. Execute the following script to do so:

```python
model.add(LSTM(units=50, return_sequences=True, input_shape=(features_set.shape[1], 1)))
```

To add a layer to the sequential model, the add method is used. Inside the add method, we passed our LSTM layer. The first parameter to the LSTM layer is the number of neurons or nodes that we want in the layer. The second parameter is return_sequences, which is set to true since we will add more layers to the model. The first element of input_shape is the number of time steps, while the last is the number of indicators.

Let's now add a dropout layer to our model. The dropout layer is added to avoid over-fitting, a phenomenon where a machine learning model performs better on the training data than on the test data. Execute the following script to add the dropout layer:

```python
model.add(Dropout(0.2))
```

Let's add three more LSTM and dropout layers to our model. Run the following script:

```python
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))

model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))

model.add(LSTM(units=50))
model.add(Dropout(0.2))
```

Creating Dense Layer

To make our model more robust, we add a dense layer at the end of the model. The number of neurons in the dense layer is set to 1, since we want to predict a single value in the output.

```python
model.add(Dense(units=1))
```

Model Compilation

Finally, we need to compile our LSTM before we can train it on the training data. The following script compiles our model:

```python
model.compile(optimizer='adam', loss='mean_squared_error')
```

We call the compile method on the Sequential model object, which is "model" in our case.
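As a sanity check on the layer sizes chosen above, the number of trainable parameters in an LSTM layer can be computed by hand: there are 4 gates, each with input weights, recurrent weights, and a bias, giving 4 × (units × (input_dim + units) + units). This arithmetic (my addition, not part of the original tutorial) matches what Keras reports in model.summary():

```python
def lstm_param_count(units, input_dim):
    # 4 gates (input, forget, cell candidate, output), each with:
    #   units * input_dim  input weights
    #   units * units      recurrent weights
    #   units              biases
    return 4 * (units * (input_dim + units) + units)

print(lstm_param_count(50, 1))   # first layer: one input feature -> 10400
print(lstm_param_count(50, 50))  # later layers: 50 inputs from below -> 20200
```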
We use the mean squared error as the loss function, and to reduce the loss, or optimize the algorithm, we use the adam optimizer.

Algorithm Training

Now it is time to train the model that we defined in the previous few steps. To do so, we call the fit method on the model and pass it our training features and labels, as shown below:

```python
model.fit(features_set, labels, epochs=100, batch_size=32)
```

Depending upon your hardware, model training can take some time.

Testing our LSTM

We have successfully trained our LSTM; now it is time to test the performance of our algorithm on the test set by predicting the opening stock prices for the month of January 2018. However, as we did with the training data, we need to convert our test data into the right format. Let's first import our test data. Execute the following script:

```python
apple_testing_complete = pd.read_csv(r'E:\Datasets\apple_testing.csv')
apple_testing_processed = apple_testing_complete.iloc[:, 1:2].values
```

In the above script, we import our test data and, as we did with the training data, remove all the columns except the one that contains the opening stock prices. If the opening stock prices for the month of January 2018 are plotted against the dates, you should see the following graph. You can see that the trend is highly non-linear. Overall, the stock prices see a small rise at the start of the month, followed by a downward trend at the end, with slight increases and decreases in between. It is extremely difficult to forecast such a trend. Let's see if the LSTM we trained is actually able to predict it.

Converting Test Data to Right Format

For each day of January 2018, we want our feature set to contain the opening stock prices for the previous 60 days. For the 1st of January, we need the stock prices for the previous 60 days, which lie in the training set. To do so, we need to concatenate our training data and test data before preprocessing.
Execute the following script to do so:

```python
apple_total = pd.concat((apple_training_complete['Open'], apple_testing_complete['Open']), axis=0)
```

Now let's prepare our test inputs. The input for each day should contain the opening stock prices for the previous 60 days. That means we need the opening stock prices for the 20 trading days of January 2018 plus the 60 stock prices from the last 60 days of the training set. Execute the following script to fetch those 80 values:

```python
test_inputs = apple_total[len(apple_total) - len(apple_testing_complete) - 60:].values
```

As we did for the training set, we need to scale our test data. Execute the following script:

```python
test_inputs = test_inputs.reshape(-1, 1)
test_inputs = scaler.transform(test_inputs)
```

We have scaled our data; now let's prepare our final test input set, which will contain the previous 60 stock prices for each day of January. Execute the following script:

```python
test_features = []
for i in range(60, 80):
    test_features.append(test_inputs[i-60:i, 0])
```

Finally, we need to convert our data into the three-dimensional format which can be used as input to the LSTM. Execute the following script:

```python
test_features = np.array(test_features)
test_features = np.reshape(test_features, (test_features.shape[0], test_features.shape[1], 1))
```

Making Predictions

Now it is time to see the magic. We preprocessed our test data, and now we can use it to make predictions. To do so, we simply need to call the predict method on the model that we trained. Execute the following script:

```python
predictions = model.predict(test_features)
```

Since we scaled our data, the predictions made by the LSTM are also scaled. We need to reverse the scaled predictions back to their actual values. To do so, we can use the inverse_transform method of the scaler object we created during training. Take a look at the following script:

```python
predictions = scaler.inverse_transform(predictions)
```

Finally, let's see how well our algorithm predicted the future stock prices.
Execute the following script:

```python
plt.figure(figsize=(10,6))
plt.plot(apple_testing_processed, color='blue', label='Actual Apple Stock Price')
plt.plot(predictions, color='red', label='Predicted Apple Stock Price')
plt.title('Apple Stock Price Prediction')
plt.xlabel('Date')
plt.ylabel('Apple Stock Price')
plt.legend()
plt.show()
```

In the output, the blue line represents the actual stock prices for the month of January 2018, while the red line represents the predicted stock prices. You can clearly see that our algorithm has been able to capture the overall trend. The predicted prices also show a bullish trend at the beginning, followed by a bearish or downward trend at the end. Amazing, isn't it?

Conclusion

A long short-term memory network (LSTM) is one of the most commonly used neural networks for time series analysis. The ability of an LSTM to remember previous information makes it ideal for such tasks. In this article, we saw how we can use an LSTM for Apple stock price prediction. I would suggest that you download stock data for some other organization, like Google or Microsoft, from Yahoo Finance and see if your algorithm is able to capture the trends.
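The windowing arithmetic used throughout the tutorial (60 time steps, 1260 training rows, 20 test days, 80 test inputs) is easy to get off by one. A pure-Python sanity check, with toy numbers standing in for the real prices (my addition, not part of the original tutorial):

```python
def make_windows(series, n_steps=60):
    """Slide an n_steps window over the series; the label is the next value."""
    features, labels = [], []
    for i in range(n_steps, len(series)):
        features.append(series[i - n_steps:i])  # previous n_steps values
        labels.append(series[i])                # value to predict
    return features, labels

train = list(range(1260))            # stand-in for the scaled training prices
test = list(range(1260, 1280))       # 20 trading days of January 2018

X, y = make_windows(train)
print(len(X), len(X[0]), len(y))     # 1200 windows of 60 steps, 1200 labels

# Test inputs: last 60 training values plus the 20 test values -> 80.
total = train + test
test_inputs = total[len(total) - len(test) - 60:]
print(len(test_inputs))              # 80

test_windows = [test_inputs[i - 60:i] for i in range(60, 80)]
print(len(test_windows), len(test_windows[0]))   # 20 windows of 60 steps
```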
https://stackabuse.com/time-series-analysis-with-lstm-using-pythons-keras-library/
CC-MAIN-2019-43
refinedweb
2,197
64.91
I've spent most of the last year developing and playing with a skinning system, two elements of which I have previously posted to CodeProject (see 'A Revolutionary New Approach to Custom Drawn Menus' and 'A template class for easy implementation of Windows hooks'). Achieving good results in window skinning requires an exacting knowledge of exactly what Windows messages are sent to whom and when. In the past, like many of you, I would reach for Spy++ and/or generate reams of TRACE statements and then stare for what seemed like hours trying to make sense of the output. Just recently, though, a number of specific issues had really begun to bug me.

Out of this was born InterSpy, an integrated Windows message tracing utility, which solves all the problems I previously outlined and provided me with hours of enjoyment. This utility requires you to build some source code into your application and cannot therefore be applied to a 3rd party application (unless you also have the source code).

Because this solution requires the inclusion of source code, the greatest challenge was to create something simple and accessible: something that could be used with the minimum of fuss but giving the maximum result. I'm happy to agree that I probably haven't reached that goal, but I find it's always a useful place to aim for. The following positive attributes were sought:

The first 'problem' to solve was how to get a handle on all the Windows messages flowing through an application. The simplest solution to this I've found is to use application-wide Windows hooks.

Note: In the interest of those who do not care for Windows hooks, I did spend some time investigating the possible use of PreTranslateMessage() to see an application's messages, but it was a dismal failure; MFC's implementation seems to leave out more messages than it shows, and it certainly leaves out the interesting ones at the start and end of a window's life.
(If anyone has any more specific knowledge about exactly what PreTranslateMessage() does and does not show, I would be interested to learn more.)

To implement this I used CHookMgr, which removes the ugly hooking details and simply requires the overriding of a few virtual functions to handle the message hooks. As far as I can tell, the required Windows hooks to trap all Windows messages are WH_CALLWNDPROC and WH_GETMESSAGE (which translate to passing HM_CALLWNDPROC | HM_GETMESSAGE to CHookMgr).

The next step was to decide what to do with the messages. Coming from an engineering background, I always favor figuring out the larger elements of a project before doing too much detail. That way, if I stumble up against something nasty in the woodshed (joke requiring some knowledge of an English dramatized play called 'Cold Comfort Farm') I will not have wasted too much hard work. It was already obvious that the messages would need to be sent to a separate window, but the 'where, what and how' had not been decided. My first prototype created a window in the same process as the application that was being traced, but the biggest problem was that the window was destroyed when the application terminated, which was an unacceptable 'feature'. I was also interested in the possibility of outputting to a Visual Studio tool window, but not knowing the first thing about writing plug-ins for Visual Studio rather put the dampeners on that idea. So I was left with creating a separate application with a specially registered class name, so that the code built into the client application could find it easily. The only other architectural issue to resolve was the means of communicating between the two, although strictly the communication is only one-way. I'd used WM_COPYDATA a number of times before and so stuck with that because I was familiar with it.
One complication of having a sort-of client-server architecture is that the two sides do not share the same address space. What this means is that the message processing would have to be done on the client side and not in the logging window, as I had initially wanted (remember: my first prototype created the logging window in the same process as the application to take advantage of the shared address space). This was a nuisance because I had wanted to keep the source code compiled into the client application to the absolute minimum necessary. Oh well, plans of mice and men and stuff...

All Windows messages essentially take the same form: that of the MSG structure. However, after that, every one is different in how the wParam and lParam values are used. Some messages have no parameters, some use only one parameter and some use both parameters. Further, whilst most often the parameters are used to supply simple UINT values, some use the lParam (typically) to point to a memory structure supplying more detail than could otherwise be shoe-horned into the simple parameters.

I therefore first decided to implement a simple base-class message handler, CISMsgHandler, which would do no more than crack each message into its constituent parameters.

Then, taking a pointer from MFC, I wrote a message map of sorts (implemented using macros) to assign an instance of this class to all the possible Windows messages I could find (and then some). These include all the standard messages, the MFC-specific messages in afxpriv.h, and a bunch I have come across during my searches of the newsgroups and my own debugging.

Note: I am sure that some of the messages are obsolete but, since these will simply not be triggered if that is the case, I see little harm in leaving them in.
The obvious and intended benefit of this default message handler is that it allowed me to get the message map working so that I could then build the 'server-side' logging window to receive the messages. Thereafter, it was plain old slog work to implement specific handlers for all the different message types. Fortunately, many messages such as WM_LBUTTONDOWN share common parameters with other mouse button messages, so a single handler could be used for 6 or 7 messages. Elsewhere, it's one for one, and in this case you'll see that I have not quite finished the task.

In cracking the messages, you may also note that I have not always gone as far as I might have, e.g. where a parameter describes a window handle (HWND) I could have also retrieved the window's class and text to provide more information. The reason I have not done this at present is simply an issue of efficiency and performance.

There's not much more to say that cannot be gleaned from the screenshot above. The main output view is a CTreeCtrl, chosen simply because the data is hierarchical in nature and the tree control allows data to be hidden and displayed with such ease. I also like the specific implementation of custom-draw on trees and the fact that you can embolden items just by setting a flag.

The logging window currently provides the following 'features' to simplify the output and navigation of messages:

Note: in my demo project, these files are in a separate 'skinwindows' folder because they form a subset of a much larger skinning system, but there is no need for you to do the same.
The pieces you need in your own project are the CInterSpy class and the ISMsgManager/ISMsgHandler files; initialization is done from your CWinApp-derived class's InitInstance():

```cpp
// assumes files are in same folder as rest of the project
#include "interspy.h"
```

The flags which may be passed to CInterspy::Initialize() are as follows: IS_NOKICKIDLE (relating to WM_KICKIDLE), IS_NODUPLICATES, IS_RESETONSTART and IS_AUTOSTARTOUTPUT.

The code is supplied here for you to use and abuse without restriction, except that you may not modify it and pass it off as your own. The concept and design, however, remains my intellectual property.
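For readers who want to see the shape of the hook installation described above, here is a bare-bones Win32 sketch. It is illustrative only (the real InterSpy code wraps this in CHookMgr), hooks just the current thread, and compiles only on Windows:

```cpp
#include <windows.h>
#include <stdio.h>

static HHOOK g_callWndHook = NULL;
static HHOOK g_getMsgHook  = NULL;

// WH_CALLWNDPROC: sees messages delivered via SendMessage.
LRESULT CALLBACK CallWndProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION) {
        const CWPSTRUCT* cwp = reinterpret_cast<const CWPSTRUCT*>(lParam);
        printf("sent:   hwnd=%p msg=0x%04X\n", (void*)cwp->hwnd, cwp->message);
    }
    return CallNextHookEx(NULL, nCode, wParam, lParam);
}

// WH_GETMESSAGE: sees messages retrieved from the queue (PostMessage etc.).
LRESULT CALLBACK GetMsgProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION && wParam == PM_REMOVE) {
        const MSG* msg = reinterpret_cast<const MSG*>(lParam);
        printf("posted: hwnd=%p msg=0x%04X\n", (void*)msg->hwnd, msg->message);
    }
    return CallNextHookEx(NULL, nCode, wParam, lParam);
}

void InstallHooks()
{
    // Hooking every thread in the application is what CHookMgr automates;
    // here we hook only the calling thread.
    DWORD tid = GetCurrentThreadId();
    g_callWndHook = SetWindowsHookEx(WH_CALLWNDPROC, CallWndProc, NULL, tid);
    g_getMsgHook  = SetWindowsHookEx(WH_GETMESSAGE,  GetMsgProc,  NULL, tid);
}

void RemoveHooks()
{
    if (g_callWndHook) UnhookWindowsHookEx(g_callWndHook);
    if (g_getMsgHook)  UnhookWindowsHookEx(g_getMsgHook);
}
```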
http://www.codeproject.com/Articles/3923/InterSpy-An-integrated-Windows-message-trace-and-f
TIL: When working with namespaced files in PHP, it's a huge performance win to use fully-qualified function calls.

~

If you're calling is_null in your code, PHP will first check for the function's existence in the current namespace. If not found there, it will look for the function in the global namespace. This extra check is quite a drag, as detailed in this issue for the SCSS PHP Project.

If you do want to target PHP's built-in is_null (or any other global function), it's more performant to refer to it using its fully-qualified name, e.g. \is_null

❌ Slow:

```php
is_null();
```

✅ Fast:

```php
\is_null();
```

Alternatively you can also import the function first through a use statement.

✅ Fast:

```php
use function is_null;

is_null();
```

~

In the case of SCSS PHP, the change from is_null to \is_null led to a 28% speed increase! If you're looking for more benchmarks, Toon Verwerft has done some benching in the past.

Via this tweet that sparked my quest into knowing more about this.
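To get a feel for the difference on your own machine, here is a rough micro-benchmark sketch (mine, not the benchmark from the linked issue; the loop count is arbitrary, and results will vary with PHP version and opcache state):

```php
<?php
namespace Bench;

$n = 5000000;

// Unqualified: PHP checks Bench\is_null first, then falls back
// to the global is_null on every call.
$start = hrtime(true);
for ($i = 0; $i < $n; $i++) {
    is_null(null);
}
$unqualified = (hrtime(true) - $start) / 1e6;

// Fully qualified: resolved directly to the global function, and
// eligible for PHP's dedicated opcode optimization.
$start = hrtime(true);
for ($i = 0; $i < $n; $i++) {
    \is_null(null);
}
$qualified = (hrtime(true) - $start) / 1e6;

printf("unqualified: %.1f ms\n", $unqualified);
printf("qualified:   %.1f ms\n", $qualified);
```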
https://www.bram.us/2020/04/18/php-performance-tip-use-fully-qualified-function-calls-when-working-with-namespaces/
```c
#include <sys/socket.h>
#include <netinet/in.h>
```

The protocol number of the socket is taken from the IANA list of assigned protocol numbers.

For sending and receiving datagrams (sendto(2), recvfrom(2), and similar), raw sockets use the standard sockaddr_in address structure defined in ip(7). The sin_port field could be used to specify the IP protocol number, but it is ignored for sending in Linux 2.2 and later, and should always be set to 0 (see BUGS). For incoming packets, sin_port is set to zero.

Raw socket options can be set with setsockopt(2) and read with getsockopt(2).

ERRORS

EACCES
User tried to send to a broadcast address without having the broadcast flag set on the socket.

EFAULT
An invalid memory address was supplied.

EINVAL
Invalid argument.

EMSGSIZE
Packet too big. Either Path MTU Discovery is enabled (the IP_MTU_DISCOVER socket flag) or the packet size exceeds the maximum allowed IPv4 packet size of 64KB.

EOPNOTSUPP
An invalid flag has been passed to a socket call (like MSG_OOB).

EPERM
The user doesn't have permission to open raw sockets. Only processes with an effective user ID of 0 or the CAP_NET_RAW attribute may do that.

EPROTO
An ICMP error has arrived reporting a parameter problem.

VERSIONS

IP_RECVERR and ICMP_FILTER are new in Linux 2.2. They are Linux extensions and should not be used in portable programs. Linux 2.0 enabled some bug-to-bug compatibility with BSD in the raw socket code when the SO_BSDCOMPAT socket option was set; since Linux 2.2, this option no longer has that effect.

NOTES

By default, raw sockets do path MTU (Maximum Transmission Unit) discovery. When it is disabled, raw sockets will fragment outgoing packets that exceed the interface MTU. However, disabling it is not recommended for performance and reliability reasons.

SEE ALSO

recvmsg(2), sendmsg(2), capabilities(7), ip(7), socket(7)

RFC 1191 for path MTU discovery. RFC 791 and the <linux/ip.h> header file for the IP protocol.
http://manpages.courier-mta.org/htmlman7/raw.7.html
LINQ (short for Language INtegrated Query) is an addition to Microsoft’s .NET languages (C# and Visual Basic) that allows queries to be carried out on various data sources, ranging from the more primitive data types such as arrays and lists to more structured data sources such as XML and databases. Since I haven’t used Visual Basic since version 3, I’ll consider only C# code in these posts. Deferred versus non-deferred operators Before we start writing code, there are a few concepts that are important to understand. First, LINQ queries consist of commands that fall into two main categories: deferred and non-deferred. A query containing only deferred commands is not actually performed until the query is enumerated. What this means is that the code that specifies the query merely constructs an object containing instructions for performing the query, and the query itself is not performed until some other code (typically a foreach loop iterating through the results of the query) attempts to access the result of the query. This can be a mixed blessing. On one hand, it means that each time you access the query, an up to date version of the results is provided. If you’re querying a database, for example, then if changes are made to the database in between queries, the later query will return the updated information. Sometimes, of course, this isn’t what you want – you want to run the query once and save these results for all future uses, even if the data source changes in the meantime. This is possible by using one of LINQ’s non-deferred commands, since placing any such command in a query forces the query to be run at the time it is defined, enabling you to save results for later use. As you might guess, it is very important to know which LINQ commands are deferred and which are non-deferred. Failure to distinguish between them can lead to bugs in the code that are hard to find. 
For example, since a deferred query is not actually run until some code accesses the results of the query, any errors in the query definition will not become apparent until this later code is run. Query expression syntax A second important concept is that many LINQ commands can be written using two types of syntax. All LINQ commands can be written using standard query operators, which are essentially just method calls. LINQ commands are performed on data sources, and the usual way of calling an operator on such a data source is with a statement of the form dataSource.LinqOperator(parameters). In this syntax, LinqOperator() is an extension method (not that you really need to know this to use it). Although any LINQ command can be written using standard query operators, there is an alternative syntax known as query expression syntax which can be used for the most common query operators. Query expressions essentially introduce a number of new keywords into C#, and resemble standard SQL statements more than method calls. It is important to realize, however, that not all LINQ commands can be written using query expressions. In the examples that follow, we’ll try to give both forms if it is possible to use both syntaxes to write a query. Data sources We mentioned above that LINQ allows you to query several types of data source, ranging from simple types up to complex structures such as databases. In fact, LINQ contains separate versions of many commands for different types of data. We won’t go into the details quite yet, but it’s important to remember that commands used for querying objects such as arrays may differ from those for querying databases, even if they have the same name. We’ll look at LINQ for objects first and consider more complex data structures later. A data source for a LINQ for objects query must implement the IEnumerable<T> generic interface, where T is the type of data stored in the object. If this sounds frightening, don’t worry unduly. 
In recent versions of C#, the common data sources such as arrays and lists implement IEnumerable<T> by default, so you can apply LINQ to these data types without any problems. For legacy data sources such as the ArrayList, there are ways of converting them to the correct form so LINQ can be applied to them too. We'll get to that in due course.

A simple LINQ query

That's about all the background you need to start looking at some LINQ code. We'll begin with probably the most common command, which is 'select'. First, we need some data. We'll use a list of all of Canada's prime ministers, which we'll encapsulate in a class like this:

```csharp
public class PrimeMinisters
{
    public int id;
    public string firstName, lastName, party;

    public static ArrayList GetPrimeMinistersArrayList()
    {
        ArrayList primes = new ArrayList();
        primes.Add(new PrimeMinisters { id = 1, firstName = "John", lastName = "Macdonald", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 2, firstName = "Alexander", lastName = "Mackenzie", party = "Liberal" });
        primes.Add(new PrimeMinisters { id = 3, firstName = "John", lastName = "Abbott", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 4, firstName = "John", lastName = "Thompson", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 5, firstName = "Mackenzie", lastName = "Bowell", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 6, firstName = "Charles", lastName = "Tupper", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 7, firstName = "Wilfrid", lastName = "Laurier", party = "Liberal" });
        primes.Add(new PrimeMinisters { id = 8, firstName = "Robert", lastName = "Borden", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 9, firstName = "Arthur", lastName = "Meighen", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 10, firstName = "William", lastName = "Mackenzie King", party = "Liberal" });
        primes.Add(new PrimeMinisters { id = 11, firstName = "Richard", lastName = "Bennett", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 12, firstName = "Louis", lastName = "St. Laurent", party = "Liberal" });
        primes.Add(new PrimeMinisters { id = 13, firstName = "John", lastName = "Diefenbaker", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 14, firstName = "Lester", lastName = "Pearson", party = "Liberal" });
        primes.Add(new PrimeMinisters { id = 15, firstName = "Pierre", lastName = "Trudeau", party = "Liberal" });
        primes.Add(new PrimeMinisters { id = 16, firstName = "Joe", lastName = "Clark", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 17, firstName = "John", lastName = "Turner", party = "Liberal" });
        primes.Add(new PrimeMinisters { id = 18, firstName = "Brian", lastName = "Mulroney", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 19, firstName = "Kim", lastName = "Campbell", party = "Conservative" });
        primes.Add(new PrimeMinisters { id = 20, firstName = "Jean", lastName = "Chrétien", party = "Liberal" });
        primes.Add(new PrimeMinisters { id = 21, firstName = "Paul", lastName = "Martin", party = "Liberal" });
        primes.Add(new PrimeMinisters { id = 22, firstName = "Stephen", lastName = "Harper", party = "Conservative" });
        return primes;
    }

    public override string ToString()
    {
        return id + ". " + firstName + " " + lastName + " (" + party + ")";
    }

    public static PrimeMinisters[] GetPrimeMinistersArray()
    {
        return (PrimeMinisters[])GetPrimeMinistersArrayList().ToArray(typeof(PrimeMinisters));
    }
}
```

We've provided two forms of this data. The first method creates an old-fashioned ArrayList (which we'll use later), and the last method converts this to a standard array. We've provided an override of the ToString() method as well so that we can print out each prime minister neatly.

A simple starting point is some LINQ code that just prints out the entire list of prime ministers.
We can do this using a query expression as follows:

```csharp
PrimeMinisters[] primeMinisters = PrimeMinisters.GetPrimeMinistersArray();

IEnumerable<PrimeMinisters> pmList = from pm in primeMinisters
                                     select pm;

foreach (PrimeMinisters pm in pmList)
{
    Console.WriteLine(pm);
}
```

We retrieve the array using the static method GetPrimeMinistersArray(). Remember that a C# array already implements IEnumerable<T>, so we can use it directly in a LINQ query. The query begins with a 'from' command. The clause 'from pm in primeMinisters' means that each element of the primeMinisters array will be examined, and the element is referred to as 'pm' while it's being examined. The 'select' clause says what is to be returned, or yielded, in response to each element passed to it. In this case, we simply return pm for each pm passed to it, so we get a sequence of PrimeMinisters objects as the result of the query.

Note that we've declared the result of the query as 'pmList', which is of type IEnumerable<PrimeMinisters>. Of course, since this is an interface, it doesn't tell you the actual data type of the sequence that is returned by the query. You can find this type by stepping through the code using the debugger, and it turns out to be something quite unfriendly (in my case {System.Linq.Enumerable.WhereSelectArrayIterator<LinqObjects01.PrimeMinisters,LinqObjects01.PrimeMinisters>}). This shouldn't cause any problems since the IEnumerable<T> interface provides enough methods to allow you to use the data in pretty well any way you like. The output from this code is the list of all 22 prime ministers, one per line, formatted by our ToString() override.

As mentioned above, we can also write this query using standard method notation. We get:

```csharp
IEnumerable<PrimeMinisters> pmList2 = primeMinisters.Select(pm => pm);

foreach (PrimeMinisters pm in pmList2)
{
    Console.WriteLine(pm);
}
```

This form reveals the underlying structure of the query expression.
Select() is actually an extension method with prototype:

```csharp
public static IEnumerable<S> Select<T, S>(
    this IEnumerable<T> source,
    Func<T, S> selector);
```

Select() takes a source argument of type IEnumerable<T> (which is primeMinisters in our example) and a selector, which is a Func that specifies what should be returned for each element in source. We've used a lambda expression to provide the selector. In this case, the selector just returns the same object that was passed to it. This means that the return data type S is the same as the source data type T (they are both of type PrimeMinisters). Note that the 'from pm in primeMinisters' clause in the query expression is replaced by giving primeMinisters as the source for the Select() method. In the query expression we declared the variable for the elements in the source by saying 'from pm in…', while in the method expression this variable is declared by giving it as the argument in the lambda expression. In fact, the compiler translates a query expression into a method expression, so the first example will simply be translated into the second.

One final note for this introductory post. We've specified the data type of the result of the query explicitly by saying it's IEnumerable<PrimeMinisters>. In many cases we won't know the actual data type being returned; it may even be an anonymous type, making it impossible to specify. In such cases, we can simply use 'var' to declare the return type of the query. Thus we could rewrite the first query above as:

```csharp
PrimeMinisters[] primeMinisters = PrimeMinisters.GetPrimeMinistersArray();

var pmList = from pm in primeMinisters
             select pm;

foreach (PrimeMinisters pm in pmList)
{
    Console.WriteLine(pm);
}
```

Remember that 'var' knows the internal data type of its object, so we can still access individual fields of each pm object if we want.
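The deferred-execution behaviour described at the start of this post is easy to see in code. A small self-contained sketch (my example, not from the original post): the query is only a recipe, so changes to the source made after the query is defined still show up when it is finally enumerated, whereas a non-deferred operator like ToList() snapshots the results immediately.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3 };

        // Deferred: nothing is evaluated here, only the query is built.
        IEnumerable<int> evens = numbers.Where(n => n % 2 == 0);

        numbers.Add(4); // change the source *after* defining the query

        // Enumeration happens now, so the new element is included.
        Console.WriteLine(string.Join(", ", evens)); // 2, 4

        // Non-deferred: ToList() runs the query immediately and
        // snapshots the results.
        List<int> snapshot = numbers.Where(n => n % 2 == 0).ToList();
        numbers.Add(6);
        Console.WriteLine(string.Join(", ", snapshot)); // still 2, 4
    }
}
```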
https://programming-pages.com/2012/05/15/linq-introduction-and-a-simple-select-clause/
```perl
# find all integers in the list that do not share any bits except the lsb.
my %MatchedIntegers = ();    # my output
foreach my $i1 (@LongListOfIntegers) {
    foreach (@LongListOfIntegers) {
        # if a match (only least significant bit overlaps),
        # add to hash of hash, I don't care about the value.
        $MatchedIntegers{$i1}{$_} = () if (($i1 & $_) == 1);
    }
}
```

```perl
foreach my $i1 (@LongListOfIntegers) {
    ($i1 & $_) == 1 and undef $MatchedIntegers{$i1}{$_} foreach @LongListOfIntegers;
}
```

```perl
foreach my $i1 (@LongListOfIntegers) {
    use integer;
    ($i1 & $_) == 1 and undef $MatchedIntegers{$i1}{$_} foreach @LongListOfIntegers;
}
```

```perl
@LongListOfIntegers = grep $_ & 1, @LongListOfIntegers;
foreach my $i1 (@LongListOfIntegers) {
    use integer;
    ($i1 & $_) == 1 and undef $MatchedIntegers{$i1}{$_} foreach @LongListOfIntegers;
}
```

Beyond that, any kind of categorisation of the values by the bits they have set (other than the LSB) will take far longer to set up. It even seems unlikely that you could save much time by moving this into (Inline)C, given you want a hash as the result. To be honest, 2 seconds for 16 million comparisons doesn't seem too bad.

Update: There's probably no need to have results for $x & $y and $y & $x in the hash, and even if there is, there's no need to test both. So,

```perl
@LongListOfIntegers = grep $_ & 1, @LongListOfIntegers;
foreach my $i1 ( 0 .. $#LongListOfIntegers ) {
    use integer;
    my $v = $LongListOfIntegers[ $i1 ];
    ($v & $_) == 1 and undef $MatchedIntegers{$i1}{$_}
        foreach @LongListOfIntegers[ $i1+1 .. $#LongListOfIntegers ];
}
```

will save a bit more, though not as much as you'd think.

I am not a perl whiz, and do not know how to "remove a level of scope" in the syntax below. "use integer" helps, and more so (I think) when I moved it outside of the loop. The "grep $_ & 1, @LongListOfIntegers" doesn't do anything since I know the values are all odd. The last idea provided the most savings. I do need both values as keys in the hash, since I am using this as a lookup in subsequent steps.
I do need both values as keys in the hash, since I am using this as a lookup in subsequent steps.

{
    use integer;
    foreach my $i1 ( 0 .. $#LongListOfIntegers ) {
        my $v = $LongListOfIntegers[ $i1 ];
        foreach (@LongListOfIntegers[ $i1+1 .. $#LongListOfIntegers ]) {
            if (($v & $_) == 1) {
                $MatchedIntegers{$v}{$_}=();
                $MatchedIntegers{$_}{$v}=();
            }
        }
    }
}

At first the list is small; only once the list is done is it N long. By doing the checks as I add numbers, I am performing 1/4 the number of comparisons.

{
    use integer;
    for my $i1 (0 .. $#LongListOfIntegers) {
        for my $i2 ($i1+1 .. $#LongListOfIntegers) {
            my $v = $LongListOfIntegers[$i1];
            my $w = $LongListOfIntegers[$i2];
            unless ($v & $w & ~1) {
                $MatchedIntegers{$v}{$w} = 1;
                $MatchedIntegers{$w}{$v} = 1;
            }
        }
    }
}

The following tests the LSB of a variable:

$value & 1

Updated Update: Actually I was wrong: the number of permutations of subsets of size k drawn from a set of size n is only n!/(n-k)!, or in this case 2000!/1998! = 2,000 * 1,999 = 3,998,000. Still a big number, but doable.

I tried to use threads, but the numerous calls to create the threads created way too much overhead for any speed improvement. My program is not memory intensive; the memory usage maxes out at just over 4 MB before the program finishes. Yet, by using threads, the required memory exceeded the 2 GB on my machine.

Dual-core machines are very common these days. I was hoping to re-write the program to use both processors. I was thinking of having a service run on one processor (which takes requests to find pairs of numbers) while the main program looks for valid numbers, waits for all the pairs to be found, then finishes the last step of the process. I'm using Windows (it sounds like Perl behaves slightly differently on Windows). I need some type of inter-process communication mechanism to submit new requests, and eventually to wait for and receive back the (hash table) results.
At this point, I don't need detailed explanations, just your insights and recommendations on which modules are best suited to this problem.

There are some gains to be had from using threads, but not as much as you might hope for. The following are the results of a single-threaded test, and a multi-threaded version operating on the same list using 1, 2, 3 & 4 threads:

C:\test>817827
Took 1.622
Odd numbers in list: 4000
Pairs found: 99135

C:\test>817827-t -T=1
Took 1.790000
Odd numbers in list: 4000
Pairs found: 99135

C:\test>817827-t -T=2
Took 1.365000
Odd numbers in list: 4000
Pairs found: 99135

C:\test>817827-t -T=3
Took 1.047500
Odd numbers in list: 4000
Pairs found: 99135

C:\test>817827-t -T=4
Took 0.832500
Odd numbers in list: 4000
Pairs found: 99135

As you can see, the overhead of threading costs you relative to the non-threaded version for the 1-thread case. You make a small gain using two, and further small gains using 3 or 4, at the cost of complexity. The tests were run on a 4-core system.

Note: These are only test apps; the results produced (a single-level hash with composite keys) may not suit your purposes.

Non-threaded:

#! perl -slw
use strict;
use Time::HiRes qw[ time ];
use Data::Dump qw[ pp ];

our $N ||= 4000;
our $S ||= 1;

srand( $S );
my @list = map 1 | int( rand( 2**32 ) ), 1 .. $N;

my $start = time;
my %hash;
for my $i ( 0 .. $#list ) {
    use integer;
    my $v = $list[ $i ];
    ( $v & $_ ) == 1 and undef $hash{ "$i:$_" } for @list[ $i+1 .. $#list ];
}
printf "Took %.3f\n", time() - $start;
print 'Odd numbers in list: ', scalar @list;
print 'Pairs found: ', scalar keys %hash;
#<>; pp \%hash;

Threaded:

#! perl -slw
use strict;
use threads;
use Time::HiRes qw[ time ];
use Data::Dump qw[ pp ];

$|++;

our $N ||= 4000;
our $T ||= 4;
our $S ||= 1;

srand( $S );
my @list = map 1 | int( rand( 2**32 ) ), 1 .. $N;

my $cStart = 0;
my $cSize = int( @list / $T );

my $start = time;
my @threads = map {
    my( $t ) = threads->create( sub {
        my( $lo, $hi ) = @_;
        use integer;
        my $tid = threads->tid;
        my @keys;
        for my $i ( 0 .. $#list ) {
            my $v = $list[ $i ];
            my $start = $i > $lo ? $i+1 : $lo;
            ( $v & $_ ) == 1 and push @keys, "$i:$_" for @list[ $start .. $hi ];
        }
        return @keys;
    }, $cStart, $_ < $T ? $cStart + $cSize + 1 : $#list );
    $cStart += $cSize;
    $t;
} 1 .. $T;

my %hash;
for( @threads ) {
    my @keys = $_->join;
    undef $hash{ $_ } for @keys;
}

printf "Took %.6f\n", time() - $start;
print 'Odd numbers in list: ', scalar @list;
print 'Pairs found: ', scalar keys %hash;
<>; pp \%hash;

Using your program, my machine starts tailing out at six or seven threads. To get the results below, I added some loops, and ran each number of threads 20 times to get an average. Here are the results:

Threads Used: 1   Time: 2.193750
Threads Used: 2   Time: 1.646408
Threads Used: 3   Time: 1.418214
Threads Used: 4   Time: 1.311989
Threads Used: 5   Time: 1.228125
Threads Used: 6   Time: 1.221352
Threads Used: 7   Time: 1.218237
Threads Used: 8   Time: 1.211988
Threads Used: 9   Time: 1.233855
Threads Used: 10  Time: 1.212500

What I would really like to try is to create a parallel process/thread that receives new numbers to "pair up" while the main process keeps working to build the numbers. The process would need an event mechanism to add a new number and, at the end, return its list. I think POE provides just such a framework. This is much more complicated than partitioning the algorithm, so I'll try to leverage what you've shown me first. I think I'd have to re-frame the overall algorithm to take advantage of many processors; then, it may have been more efficient to multi-thread and partition like (some of) you suggested.

The time dropped to 44% of the original (2.25 times faster). Some of the speed came not from the threading, but just from learning how to write more efficient Perl statements.
I learned that there are expensive operations in Perl and some very fast operations... I ended up tweaking many, and changing how I was storing the data, to take advantage of these realities. I am sure the program could be made even faster once I understand Perl a bit better. Here are the code snippets relevant to the threading:

# This child thread builds an encoded array of valid pairs.
# "valid pairs" do not have any of the same bits set.
# Output format is row1_integer:row2_integer
my $DataQueue = Thread::Queue->new();
my ($thr) = threads->create({'context' => 'list'}, sub {
    # I tried to use a shared hash of hashes, but the required work to
    # explicitly define the hash of hashes in the manner required by
    # threads::shared was just too expensive.
    use integer;
    # This thread uses queues to get its parameter.
    my @ValidPairs=();
    my @output=();
    while (my $validNum = $DataQueue->dequeue()) {
        !($validNum & $_) and push @output, $validNum .':'. $_ foreach (@ValidPairs);
        push (@ValidPairs, $validNum);    # must add number to this thread's list of valid numbers
    }
    return (@output);
});

#... Inside loop and logic to build numbers
    # send this new number to the child thread waiting on queued parameters.
    # The child thread builds the array of valid pairs
    $DataQueue->enqueue($i);

#... After block that builds the numbers
$DataQueue->enqueue(undef);    # tell the child thread to finish what has been queued and then exit.
my @encodedPairedNumbers = $thr->join();
# subsequent code decodes our encoded paired numbers into a hash of hashes

Thank you to everyone.
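To summarise the thread for readers who don't speak Perl, the core pairing test can be sketched in Python. This is my own illustrative restatement (the function name is mine) of the quadratic approach discussed above, with the "odd numbers only" pre-filter and the half-the-comparisons trick suggested in the replies:

```python
def lsb_only_pairs(nums):
    """Map each value to the set of partners whose bitwise AND with it
    is exactly 1, i.e. the two share no set bits except the LSB."""
    # Only odd numbers can ever satisfy (x & y) == 1.
    odds = [n for n in nums if n & 1]
    pairs = {}
    for i, x in enumerate(odds):
        for y in odds[i + 1:]:          # test each unordered pair once
            if x & y == 1:
                # record both directions, since the original poster
                # needs both values as lookup keys
                pairs.setdefault(x, set()).add(y)
                pairs.setdefault(y, set()).add(x)
    return pairs
```

As in the Perl versions, iterating only over j > i and then recording both key orders is where most of the savings came from.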
http://www.perlmonks.org/?node_id=817827
Exploring the Cache API in Drupal 8

By Daniel Sipos. In this article, I want to shine a bit of light over the new cache API. To this end, we are going to look at how we can use it in our custom modules, as we are encouraged to do so much more in Drupal 8. Additionally, I have prepared a little demonstration in the shape of a module you can install for testing the impact of the cache API. It's a simple page that in its rendering logic makes an external API call (to a dummy JSON endpoint) and caches its results. The page then displays the actual time it takes for this to happen, contrasting the external call time vs. the cached version time.

The new cache API

Bins

The new cache API (with the default DatabaseBackend storage) is stored in multiple bins which map to tables that start with the prefix cache_. When interacting with the cache, we always start by requesting a cache bin:

$cache = \Drupal::cache();

Where $cache will be an instance of the DatabaseBackend object that represents the default bin (cache_default). To request a particular bin we pass in the name in the constructor:

$render_cache = \Drupal::cache('render');

Where $render_cache will represent the render cache bin (which is new in Drupal 8 and is supposed to improve render performance across the board). As you can see, we are requesting the cache service statically using the \Drupal class. If we are working inside classes, it is best practice to inject the service from the container. You can do so by specifying the relevant cache bin service (such as cache.default) as an argument to your service. Here you can get a list of all core services, including the ones related to cache. But for the sake of brevity, we will use it statically here.

Retrieving cached items

Once we know which bin we want to work with (for custom modules this will usually be the default bin), we can retrieve and store cache items.

$cache = \Drupal::cache()->get('my_value');

It's that simple.
$cache will be a stdClass object containing some metadata about the cache item, plus the actual data available under the $cache->data property. The my_value parameter is the cache ID. An important thing to keep in mind is that using the get() method without a second parameter will not return the cache item if it has been invalidated (either programmatically or through expiration). Passing the boolean true as a second parameter will force it to return the data.

Storing cache items

Although storing new items in the cache is just as easy as retrieving them, we have more options when doing so. To store an item we use the set() method (instead of get() like before), a method that takes 2 mandatory parameters and 2 optional ones:

- the cache ID (the string by which we can later reference the item)
- the data (a PHP value such as a string, array or object that gets serialised automatically and stored in the table; it should not be over 1MB in size)
- the expiration time (a timestamp in the future when this cache item will automatically become invalid, or -1, which basically means this item never expires; it is best practice to use the Drupal\Core\Cache\CacheBackendInterface::CACHE_PERMANENT constant to represent this value)
- tags (an array of cache tags this item can be later identified by)

As an example:

\Drupal::cache()->set('my_value', $my_object, CacheBackendInterface::CACHE_PERMANENT, array('my_first_tag', 'my_second_tag'));

This will set a permanent cache item tagged with 2 tags and store a serialised version of $my_object as the data.

Cache invalidation and removal

Cache invalidation means that the respective items are no longer fresh and are unreliable in terms of what data they hold. They will be removed at the next garbage collection, which can also be triggered using the garbageCollection() method on the CacheBackend object. As mentioned above, when storing a cache item we can specify an expiration time.
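The semantics described so far (permanent vs. expiring items, invalid items returned only on explicit request, and items carrying tags) can be condensed into a small language-neutral sketch. This is a toy in Python, mimicking the pattern only; the class and method names are mine, not Drupal's:

```python
import time

class TaggedCache:
    """Toy cache mimicking the shape of Drupal 8's cache backends:
    items carry an expiry timestamp and a set of tags, and invalidated
    items linger (retrievable on request) until garbage collection."""

    PERMANENT = -1  # stand-in for CacheBackendInterface::CACHE_PERMANENT

    def __init__(self):
        # cid -> [data, expire, tags, valid]
        self._items = {}

    def set(self, cid, data, expire=PERMANENT, tags=()):
        self._items[cid] = [data, expire, set(tags), True]

    def get(self, cid, allow_invalid=False):
        item = self._items.get(cid)
        if item is None:
            return None
        data, expire, tags, valid = item
        expired = expire != self.PERMANENT and expire <= time.time()
        if (not valid or expired) and not allow_invalid:
            return None  # invalid items are hidden unless explicitly requested
        return data

    def invalidate_tags(self, tags):
        # Mark (but do not delete) every item sharing any of the tags.
        for item in self._items.values():
            if item[2] & set(tags):
                item[3] = False
```

Calling get('cid') after invalidate_tags(...) returns nothing, while get('cid', True) still yields the stale data, mirroring the second parameter of Drupal's get().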
When this time lapses, the cache item becomes invalid but still exists in the bin and can be retrieved. However, we can also invalidate items manually using the invalidate(), invalidateMultiple() or invalidateAll() methods on the CacheBackend object. Removing items altogether can be done using the delete(), deleteMultiple() or deleteAll() methods. These actions also happen only on the bin the CacheBackend is wrapping, and they completely remove the respective table records.

Cache tags

Cache tags (the fourth parameter in the setter method) are another cool new feature of the Cache API in Drupal 8. The role of the tags is to identify cache items across multiple bins for proper invalidation. The purpose is the ability to accurately target multiple cache items that contain data about the same object, page, etc. For example, nodes can appear both on a page and in a view (stored in different cache items in different bins, but both tagged with the same node:nid formatted tag). This allows invalidating both cache items when changes happen to that node, without having to know the cache IDs. To manually invalidate caches using the tags, we can use the invalidateTags() method statically on the \Drupal\Core\Cache\Cache class:

\Drupal\Core\Cache\Cache::invalidateTags(array('node:5', 'my_tag'));

This will call the cache invalidator service and invalidate all the cache items tagged with node:5 and my_tag. Additionally, for Drupal entities we don't have to create our own tags but can retrieve them from the entity system:

\Drupal\Core\Entity\EntityInterface::getCacheTags()
\Drupal\Core\Entity\EntityTypeInterface::getListCacheTags()

This keeps the tags for Drupal entities consistent across the board.

Demonstrating the cache API

As I mentioned before, I created a small module that allows us to see the benefits of caching data.
You can find the module in this git repository, but here is the crux of it. Please note that in this example I access the cache backend service statically to save some space. For a dependency injection approach (the correct approach), take a look at the repository code.

A route file that adds a new route at the /cache-demo path:

cache_demo_page:
  path: 'cache-demo'
  defaults:
    _controller: '\Drupal\cache_demo\Controller\CacheDemoController::index'
    _title: 'Cache demo'
  requirements:
    _permission: 'access content'

And the controller class that returns the page, inside src/Controller/CacheDemoController.php:

<?php

/**
 * @file
 * Contains \Drupal\cache_demo\Controller\CacheDemoController.
 */

namespace Drupal\cache_demo\Controller;

use Drupal\Core\Cache\CacheBackendInterface;
use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Url;
use GuzzleHttp\Client;
use Symfony\Component\HttpFoundation\Request;

/**
 * Cache demo main page.
 */
class CacheDemoController extends ControllerBase {

  public function index(Request $request) {
    $output = array();
    $clear = $request->query->get('clear');

    if ($clear) {
      $this->clearPosts();
    }

    if (!$clear) {
      $start_time = microtime(TRUE);
      $data = $this->loadPosts();
      $end_time = microtime(TRUE);
      $duration = $end_time - $start_time;
      $reload = $data['means'] == 'API' ? 'Reload the page to retrieve the posts from cache and see the difference.' : '';

      $output['duration'] = array(
        '#type' => 'markup',
        '#prefix' => '<div>',
        '#suffix' => '</div>',
        '#markup' => t('The duration for loading the posts has been @duration ms using the @means. @reload', array(
          '@duration' => number_format($duration * 1000, 2),
          '@means' => $data['means'],
          '@reload' => $reload,
        )),
      );
    }

    if ($cache = \Drupal::cache()->get('cache_demo_posts') && $data['means'] == 'cache') {
      $url = new Url('cache_demo_page', array(), array('query' => array('clear' => true)));
      $output['clear'] = array(
        '#type' => 'markup',
        '#markup' => $this->l('Clear the cache and try again', $url),
      );
    }

    if (!$cache = \Drupal::cache()->get('cache_demo_posts')) {
      $url = new Url('cache_demo_page');
      $output['populate'] = array(
        '#type' => 'markup',
        '#markup' => $this->l('Try loading again to query the API and re-populate the cache', $url),
      );
    }

    return $output;
  }

  /**
   * Loads a bunch of dummy posts from cache or API.
   *
   * @return array
   */
  private function loadPosts() {
    if ($cache = \Drupal::cache()->get('cache_demo_posts')) {
      return array(
        'data' => $cache->data,
        'means' => 'cache',
      );
    }
    else {
      $guzzle = new Client();
      $response = $guzzle->get('');
      $posts = $response->json();
      \Drupal::cache()->set('cache_demo_posts', $posts, CacheBackendInterface::CACHE_PERMANENT);
      return array(
        'data' => $posts,
        'means' => 'API',
      );
    }
  }

  /**
   * Clears the posts from the cache.
   */
  function clearPosts() {
    if ($cache = \Drupal::cache()->get('cache_demo_posts')) {
      \Drupal::cache()->delete('cache_demo_posts');
      drupal_set_message('Posts have been removed from cache.', 'status');
    }
    else {
      drupal_set_message('No posts in cache.', 'error');
    }
  }

}

Inside the index() method, we do a quick check to see whether the clear query parameter is present in the URL, and if so we call the clearPosts() method responsible for deleting the cache item. If there isn't one, we calculate how long it takes for the loadPosts() method to return its value (which can be either the posts from the cache or from the API). We use Guzzle to make the API call, and when we do, we also store the results directly.
Then we just output the duration of the call in milliseconds, and print 2 different links depending on whether there is cache stored or not (to allow us to clear the cache item and run the API call again). When you navigate to cache-demo for the first time, the API call gets made and the 100 posts get stored in the cache. You can then reload the page to see how long it takes for those posts to be retrieved from the cache. Upon doing that, you'll have a link to clear the cache (by a page refresh with the clear query string), followed by another link which refreshes the page without the clear query string, which in turn makes the API call again. You can keep alternating like this to see the contrast in duration.

Conclusion

In this article we've looked at how easy it is to use the Cache API in Drupal 8. There are some very simple class methods that we can use to manage cache items, and it has become so straightforward that there is no reason not to start using it in our custom modules. I encourage you to check it out, play around with the API and see for yourself how easy it is to use.
https://www.sitepoint.com/exploring-cache-api-drupal-8/
Microsoft Corporation
April 2004

Applies to: Microsoft® Office InfoPath™ 2003

Summary: Learn to use Microsoft BizTalk Human Workflow Services (HWS) to create an approval workflow. Create workflows and track them using actions. Create a custom Approval action that uses the HWS Assign and Delegate actions that ship with the BizTalk Server 2004 SDK. Learn how to create an InfoPath form that uses the Approval action. Configure a BizTalk pipeline to process InfoPath forms. For more information about the pipeline component, download the InfoPath 2003 Software Development Kit. (33 printed pages)

Background: InfoPath contains a number of features that help build simple workflow scenarios. Human Workflow Services can be used for advanced integration with BizTalk. Lab 7, User Roles in InfoPath 2003, helps you understand how to use roles, rules, and views to create a simple workflow scenario by presenting only certain controls and views depending on the current role of the user.

Contents: Prerequisites, Scenario, Lab Objective, Server Setup, Exercises

Download the odc_INF03_Labs.exe sample file.

Level: 400

Scenario: An internal analysis at the Contoso Corporation discovered a number of inefficient business processes. Contoso decided to seek feedback about the processes from its employees. The IT department must develop a suggestion form that employees can use to submit process improvement suggestions to management. Subsequently, management needs the ability to assess suggestions from each submitted form and accept, decline, or cancel the request.

Lab objective: In this lab, use Microsoft BizTalk® Human Workflow Services (HWS) to create an approval workflow. You can use HWS to create workflows and track them using actions. Actions are composed of tasks that users interact with as part of a workflow. InfoPath displays the actions and tasks to the user in the Workflow task pane. In this lab, you create a custom Approval action that uses the HWS Assign and Delegate actions that ship with the BizTalk Server 2004 SDK.
The Approval action allows employees in a corporation to start an approval process. Employees can use an Assign action to assign a task to management for approval. Subsequently, management can use a Delegate action to assign the task to someone who can work on a solution for the issue brought up in the suggestion form.

In this lab, you learn to:

The BizTalk server needs the following tools installed:

Note: Refer to the BizTalk Server 2004 System Requirements for complete server requirements.

Use the Assign and Delegate actions that are included in the BizTalk Server 2004 SDK to construct a custom Approval action to use for the approval process. In this step, you deploy the Assign and Delegate actions included as part of the Human Workflow Services sample in the BizTalk Server 2004 SDK. This step configures the Activation and Interrupt ports that are used by the custom action, Approval.

In this step, you create a key file that will be used as part of the strong name of your assembly.

In this step, you create an action named Approval. From Visual Studio .NET, use the BizTalk Server 2004 Human Workflow template to create actions.

Figure 1. Solution Explorer

At this point, your solution should look like Figure 2.

Figure 2. Solution Explorer

After the preparation work for the HWS project is done, it is time to customize the schemas to fit the needs of our Approval action. The Approval action needs the following information as part of the schema:

This step generates the XML file that is needed later for enabling the Approval action in an InfoPath form. This step cleans up the Payload section of the generated Activation.xml file to ensure that InfoPath operates properly with HWS. The Payload section transports the InfoPath form as part of the HwsMessage.

<Payloads>
  <Payload ID="ID_0"></Payload>
</Payloads>

The Approval task needs to be modified so that it includes an EndAction element to indicate when a task is complete or not complete.
The EndAction node also needs to be promoted as a distinguished node so that code inside this action has access to its value. At this point, you have created an Activation message and a Task message. The Activation message is used to start the Approval action. Once the Approval action is started, an Approval task is generated.

When the orchestration receives the inbound activation file, it requires instructions on how to transform any necessary fields into a Task message. In this step, you learn how to use a map to handle this transformation.

Table 1. Source and Destination schema nodes

// function to return a GUID in the form of a string
public string getGuid()
{
    return System.Guid.NewGuid().ToString();
}

You should see the message, "Build: 1 succeeded, 0 failed, 0 skipped" in the Output window.

The business logic of the action is the orchestration itself. Within the orchestration, business logic determines how the action should behave.

Figure 3. DoAllActionSpecificLogicHere block

At this point, your orchestration should look like Figure 4.

Figure 4. DoAllActionSpecificLogicHere block

At this point, your orchestration should look like Figure 5.

// boolean expression to test for EndAction flag
TaskMessage.ActionSection.EndAction == false

You should see the message, "Build: 1 succeeded, 0 failed, 0 skipped" in the Output window.

At this point, your orchestration should look like Figure 6.
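As an aside, the GUID helper used in the map above has a one-line equivalent in most languages. For instance, a Python counterpart (illustrative only; the lab itself uses the C# shown above inside a map functoid):

```python
import uuid

def get_guid():
    """Return a new GUID in string form, analogous to
    System.Guid.NewGuid().ToString() in the C# helper."""
    return str(uuid.uuid4())
```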
The InfoPath form needs to communicate with HWS workflows so that users can initiate actions and respond to tasks. To create this channel of communication, you need to perform these tasks: We need a solution to facilitate working with HWS. In this example, we create a blank form. InfoPath uses four Receive Data connections to connect to four Web methods that are exposed by the HWS Web Service. These data connections are case-sensitive and must be named exactly as listed here: Note The actual namespace prefix may be different. If so, look for *:detailLevel. At this point, your Data Connections dialog box should look like Figure 7. Figure 7. Add the GetActivityFlowInfo Receive Data connection An HWS action requires an Activation XML instance that validates against the activation schema. This can vary for each action created by the HWS Orchestration Designer. Values for fields that are required in the Activation XML instance need to be supplied by InfoPath. Following is the actionSection fragment of a sample Activation XML instance. It is important to note that the sample can vary from action to action. <ActionSection> <Target>Target_0</Target> <Description>Description_0</Description> <Status>Status_0</Status> <EndAction>false</EndAction> </ActionSection> At this point, your Data Source pane should look similar to Figure 8. Figure 8. Add the EndAction node In addition to the necessary nodes needed for the activation file, you may want additional data nodes to hold other data. The structure is limited only by the InfoPath solution developer. After the data nodes are added, controls should be placed on the form so that the user can interact with the data. At this point, your Data Source pane should look like Figure 9. Figure 9. Add data nodes to the form At this point, your form should look like Figure 10. 
value: Accepted     display: Accepted
value: Completed    display: Completed
value: Declined     display: Declined
value: Deferred     display: Deferred
value: NotStarted   display: NotStarted
value: Tentative    display: Tentative
value: Waiting      display: Waiting
value: Cancelled    display: Cancelled
value: Interrupted  display: Interrupted

At this point, your Drop-Down List Box Properties should look like Figure 11.

Figure 11. Change the Status control to a populated drop-down list box

To be able to start an action within HWS, each action requires an activation file, which must be added as a resource to the InfoPath solution. You created this file (C:\Activation.xml) previously in the lab.

At this point, the Resource Files dialog box should look like Figure 12.

Figure 12. Add the Activation.xml file as a resource

You must manually edit some of the files by using Extracting Form Files. The activation of HWS within the manifest is controlled by two sections. The first is the hwsWorkflow section. This section determines:

The following is a fragment from a sample manifest.xsf file. Two actions and two tasks are defined in the sample. Note that in this lab, the HWS service is configured with only the first action. The second action is displayed here only to show that you can enable an InfoPath solution with more than one HWS action.

<xsf:hwsWorkflow ...>
  <xsf:location ...></xsf:location>
  <xsf:allowedActions>
    <xsf:action ...></xsf:action>
    <xsf:action ...></xsf:action>
  </xsf:allowedActions>
  <xsf:allowedTasks>
    <xsf:task ...></xsf:task>
    <xsf:task ...></xsf:task>
  </xsf:allowedTasks>
</xsf:hwsWorkflow>

<xsf:action ...></xsf:action>
<xsf:task ...></xsf:task>

The second section within the manifest needed to enable HWS is hwsAdapter(s). HWS adapters are custom adapters, based on the InfoPath Web service adapters, that inherently call into the HWSService Web service. Also, the InfoPath XML instance is encoded into base64Binary data and placed within a PayLoad element in the submitted message.
For each action that can be enabled within InfoPath, there can be the following hwsAdapter types:

<xsf:dataAdapters>
</xsf:dataAdapters>

<xsf:hwsAdapter ...>
  <xsf:hwsOperation ..."addActionToNewActivityFlow" typeID="12A3CDE9-FAC9-4B42-2BD1-A0EFCEA03062" ...>
<xsf:hwsAdapter ...>
  <xsf:hwsOperation ..."addActionToActivityFlow" typeID="12A3CDE9-FAC9-4B42-2BD1-A0EFCEA03062" ...>
<xsf:hwsAdapter ...>
  <xsf:hwsOperation ...>
    <xsf:input ...>
      <xsf:partFragment match="..." ..."sendTaskResponse" typeID="" ...>

Script handlers are needed to get the hwsAdapters to submit to HWS; therefore you must add a script file to the solution:

function Approval::OnClick(eventObj)
{
    XDocument.DataAdapters.Item("Approval").Submit();
}

function RespondToApproval::OnClick(eventObj)
{
    XDocument.DataAdapters.Item("RespondToApproval").Submit();
}

function ExtendApproval::OnClick(eventObj)
{
    XDocument.DataAdapters.Item("ExtendApproval").Submit();
}

For the addActionToNewActivityFlow and sendTaskResponse hwsAdapter types, a button is automatically generated in the HWS Workflow task pane. In order to interact with the addActionToActivityFlow hwsAdapter, add a button.

At this point, your Button Properties should look like Figure 13.

Figure 13. Button properties

At this point, your solution should look like Figure 14.

The HwsMessage contains the InfoPath form (stored as base64Binary) as part of the PayLoad element. As HWS processes the HwsMessage to generate the appropriate taskIDs for a workflow, it is necessary to update the InfoPath form's workflow IDs so that the form is synchronized with HWS. This is done by inserting a component on the HWS pipeline that handles this interaction. Pipelines allow messages to take varied paths from a source to a destination. Along the paths, the pipeline component can interact with the message payload. InfoPath forms are stored as base64Binary data within the Payload node of the HWS message. A custom pipeline is needed to unpack the base64Binary nodes, update the processing instructions, and repack the node.
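The unpack/update/repack cycle just described can be illustrated generically. The sketch below is Python, not the actual C# pipeline component from HWSInfoPathPipelineComponent.cs; the function name and the trivial string transform are mine:

```python
import base64

def repack_payload(payload_b64, transform):
    """Decode a base64 Payload value, let `transform` rewrite the
    decoded form text (e.g. to refresh workflow IDs), and re-encode it."""
    doc = base64.b64decode(payload_b64).decode('utf-8')
    updated = transform(doc)
    return base64.b64encode(updated.encode('utf-8')).decode('ascii')
```

The real component performs this same round trip on the Payload node of the HwsMessage, with the transform step updating the form's processing instructions and task IDs.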
This interaction takes place after the message is sent from the source (action), and before it arrives at the destination (physical drop point). This exercise shows how to set up a custom pipeline. You need the HWSInfoPathPipelineComponent.cs file included in the sample files.

Your entry should look similar to this:

[assembly: AssemblyKeyFile(@"<strong key location>\myKey.snk")]

At this point, the InfoPath solution is activated to work with the Approval action on the HWS server. At this point, the InfoPath window should look similar to Figure 15. Example: C:\ApprovalOut
https://msdn.microsoft.com/en-us/ms788211(office.11).aspx
I have recently been working on a knotty problem involving control and image acquisition using the Flea®2 camera, using the FlyCapture® SDK by Point Grey Research, Inc. They have an impressive set of code samples for trying all manner of things related to image grabbing and setting the camera controls: hue, focus, colour, etc. After an initial frustrating period of figuring out how to incorporate their APIs into projects of my own, things are much, much simpler than I originally thought. When using this SDK, you will probably need to upgrade to Visual Studio 2005 or higher, if you have not yet done so.

FOR 32-BIT WINDOWS PLATFORMS

1. Install the full FlyCapture software package. For the example you see here I used FlyCapture v2.2 in a relatively recent version of Visual Studio, probably VS 2008. At the time this was on a Windows XP (32-bit) platform, so you will need to download whichever version of the product is appropriate for your needs.

2. Create your Visual Studio project. For example, a Win32 console application, just to keep things simple for now. You can always play around with more involved project types at a later time.

3. Set the Visual Studio project dependencies. Once you have created the VS C++ project intended to run the set of APIs provided by the FlyCapture SDK, it is a case of setting your project dependencies, as follows:

4. Set the libraries:

5. Import the FlyCapture property sheet(s), if necessary. In View -> Property Manager, right-click the project folder and select "Add Existing Property Sheet". Navigate to where the vsprops folder has been installed (usually C:\Program Files\Point Grey Research\FlyCapture2\src\) and select FlyCapture2.vsprops. These versions of .vsprops files do not appear to be supported in Visual Studio 2010.

6.
Sort out any compiler/linker issues.

When attempting to try this for the first time you may encounter error messages similar to this:

1>flycap1.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: virtual __thiscall FlyCapture2::Camera::~Camera(void)" (__imp_??1Camera@FlyCapture2@@UAE@XZ) referenced in function _wmain
1>flycap1.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall FlyCapture2::Camera::Camera(void)" (__imp_??0Camera@FlyCapture2@@QAE@XZ) referenced in function _wmain
1>C:\Users\andy\Documents\Visual Studio 2010\Projects\flycap1\Debug\flycap1.exe : fatal error LNK1120: 2 unresolved externals

Make sure the compiler knows where to find the .lib files needed, as described in step 4. For example, when running a Win32 console application, you will probably need the 32-bit version of the SDK to run correctly, not the 64-bit version. If this is the case, download the 32-bit version of the SDK, and set the Additional Library Directories using this installation, as described in step 4.

7. Try it. You are now free to use the FlyCapture include to access the full set of FlyCapture APIs. Provided the previous steps have been executed correctly, the following simple code with a FlyCapture2 dependency should now compile without whinges:

#include "FlyCapture2.h"

using namespace FlyCapture2;

int _tmain(int argc, _TCHAR* argv[])
{
    // Do FlyCapture2 related stuff...
    Camera cam;

    return 0;
}

FOR 64-BIT WINDOWS PLATFORMS

Pretty much the same as for the 32-bit setup, but you will probably need to consider the following steps:

1. Make sure you have downloaded and installed the 64-bit version of the SDK.

2. Configure your Visual Studio Configuration Manager for 64-bit. In Visual Studio select Project > Properties > Configuration Properties and change from Win32 to x64 accordingly.

3.
Instead of installing the FlyCapture2.vsprops property sheet as used for the 32-bit setup, you will probably need the FlyCapture2 (x64).vsprops property page.
Dynamic Lock Screen for Windows Phone 8

Tested with: Windows Phone 8

This article covers setting up a default lock screen background image and adding code to change the lock screen background. We can change the background of the lock screen using the Windows.Phone.System.UserProfile.LockScreenManager namespace. To change the background, our app must have been selected as the lock screen background provider; to check this manually, go to Settings -> Lock Screen. To check through code whether or not our app is selected as the lock screen image provider, we can use the code below, which also requests permission for the app to provide wallpaper images to the lock screen. Once the user clicks OK, the app can set the lock screen image (shown here in the WP8 simulator).
This tutorial illustrates how to load/save a DjVu file. To use the DjVu SDK, you first include several headers. Since the base library is shared with the PixelLive SDK, those classes are in the Celartem namespace, while DjVu-related classes are in the Celartem::DjVu namespace. In this step, you should create a Storage instance from a DjVu file. The Storage class is the easiest one for loading files from a local disk or an HTTP-based web server. The following code tries to load an image from a web server: There are also other Storage classes. For more information about them, see Storage Class Factory Functions. You should create a Document instance from the Storage instance. But you can also use the following easier way, which does not require you to open the Storage instance separately: The Document instance usually has several Page instances in it, and pages are actually managed like an array. The following code loads the first page: You can also find out how many pages are in the Document with the following code: Now, you are ready to load the image data from the Page instance. But first, please let me introduce how to get the page size. The code above gets the "pixel" size of the page, but that is usually too large to show in full on a display, so we should calculate a size that fits the display. In this tutorial, I'll introduce a way to get the physical size from the page's resolution, known as dots per inch. OK, now render the pixel image: The 5th and 6th parameters of the Page::render method specify the rendering image size in pixels, and the 4th parameter specifies which portion of the image is actually stored to the buffer; the actual picture size rendered is controlled by the 4th parameter. As I've explained in the steps, you can easily render the page image without knowing how the actual data are stored in the DjVu file. But for advanced operations, you may have to access the internal data. DjVu data is actually stored in a structure called a chunk, and each page corresponds to a chunk.
So you can get the chunk of the page from the Page instance: And basically, the Document instance is the actual owner of the page chunks, so you can also access chunks from the Document instance: The chunk returned by the Document class is called DJVM, which manages multipage chunks. Most chunks are for retaining pages, but some are for special purposes. To get these chunks, use code like the following: Please note that not all chunks have child chunks, and if you call the Chunk::getChildren method on a chunk which does not have children, an exception occurs. So you had better check whether it has children first, like the following: As you know, DjVu is actually a bunch of chunks. And we usually load the chunks from a DjVu file using the IFF::deserialize method, like the following: And we save the chunks using the IFF::serialize method: The DiskStorageWithRollback class is a Storage class which provides a rollback feature in case of serialization failure. So you have to call the DiskStorageWithRollback::commit method after successful serialization; otherwise the serialization result will be rolled back. For more, see the StorageRollback class. Of course, the Document class also provides a way to save the Document: The following does almost the same thing: When you modify a Document instance, you have to synchronize the chunk before directly accessing it: In the same way, when you modify some part of a chunk which was obtained from a Document instance, you should synchronize the Document instance before using the Document::save method:
java -jar downloaded_filename.jar

This will run an installer, which will help you complete the installation. After that, you need to correctly set the path names mentioned on the last step of installation. You are then ready to work with AspectJ in the "big project" example to follow. Look at the following code for your big project:

```java
 1: // Our Java example TestClass.java
 2: public class TestClass {
 3:
 4:     public static void method1() {
 5:         System.out.println("Hello World!");
 6:     }
 7:
 8:     public static void method2() {
 9:         System.out.println("Really Hello!");
10:     }
11:
12:     public static void say(String x) {
13:         System.out.println(x);
14:     }
15:
16:     public static void main(String[] args) {
17:         method1();
18:         method2();
19:         say("I can talk!");
20:     }
21: }
```

This code is intentionally easy, beginner-level Java to make understanding the technology easier. You can create more complex code after you've mastered aspects. Lines 4-6 and 8-10 define the class methods, which perform only one job: printing the well-known "Hello World" prompt. Lines 12-14 contain the other method, which says whatever you "ask" it to say. And last but not least, the main() function of the class calls all the methods.

Compile and run this example:

```
javac TestClass.java
java TestClass
```

The output is the following:

```
Hello World!
Really Hello!
I can talk!
```

Nothing unexpected, I presume. Now, create the AspectJ code in Listing 1. Strange code, isn't it? It looks pretty hard at first glance. But don't worry, it's not. Line 2 creates the aspect TestMe. Line 4 defines a pointcut (it picks out certain join points in the program flow) for all public methods of class TestClass that take no arguments. Line 5 defines a pointcut for all public methods of class TestClass that accept arguments of String type. Lines 7-10 define what will be executed before all public methods of class TestClass that accept one String-type argument (defined in the second pointcut).
Lines 12-15 define what will be done after the previously mentioned methods. Lines 17-19 and 21-23 share the same definitions as the first pointcut's methods. Notice that lines 9, 14, 18, and 22 use the following call:

thisJoinPointStaticPart.getSignature().getName()

The result is the name of the method before or after which you are functioning. Argument a in lines 7 and 12 holds the value of the argument that was passed to the methods as a String-type argument.
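Listing 1 itself did not survive in this copy of the article, but the line-by-line description above pins down its shape fairly precisely. The following is a hedged reconstruction (the pointcut names, the choice of `call` rather than `execution` patterns, and the printed message strings are my guesses; only the aspect name, line layout, and the `thisJoinPointStaticPart` call are stated in the article). It is AspectJ, so it requires the AspectJ compiler (ajc) rather than plain javac:

```java
 1: // TestMe.aj -- reconstructed from the description above
 2: public aspect TestMe {
 3:
 4:     pointcut noArgMethods() : call(public static void TestClass.*());
 5:     pointcut stringMethods(String a) : call(public static void TestClass.*(String)) && args(a);
 6:
 7:     before(String a) : stringMethods(a) {
 8:         System.out.println("Before "
 9:             + thisJoinPointStaticPart.getSignature().getName());
10:     }
11:
12:     after(String a) : stringMethods(a) {
13:         System.out.println("After "
14:             + thisJoinPointStaticPart.getSignature().getName());
15:     }
16:
17:     before() : noArgMethods() {
18:         System.out.println("Before " + thisJoinPointStaticPart.getSignature().getName());
19:     }
20:
21:     after() : noArgMethods() {
22:         System.out.println("After " + thisJoinPointStaticPart.getSignature().getName());
23:     }
24: }
```

Treat this purely as an illustration to follow the article's line references; the original Listing 1 may have differed in detail.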
Plot MAD output (and more).

Project description

This project aims to facilitate working with MADX from within Python. It contains the following major components:

- MADX API: Build, parse and run MADX scripts.
- Plot API: Plot MADX output in various formats.
- Utilities: Convert MADX output tables to pandas data frames.

Script templating via Jinja is also supported.

MADX API

The MADX API consists of three parts: building, parsing and running MADX scripts.

Builder

The builder API can be used for creating MADX scripts. The following example code shows the various features.

```python
from madplot.madx.builder import Script

# At first generate a new script.
s = Script()

# Labeled or declaration statements can be created via `[]` access.
# This produces the following statements in the resulting MADX script:
#     L = 5;
#     N = 10;
s['L'] = 5
s['N'] = 10

# MADX commands can be created by accessing them through the script instance.
# Output: `DP: SBEND, L = L/2, ANGLE = 2*PI/(2*N);`.
s['DP'] = s.SBEND(L='L/2', ANGLE='2*PI/(2*N)')

# Output: `QF: MULTIPOLE, KNL = {0, 1/f};`.
s['QF'] = s.MULTIPOLE(KNL=[0, '1/f'])

# Sequences can be generated using the `Sequence` class.
from madplot.madx.builder import Sequence

with Sequence(refer='entry', l='N*L') as seq:
    for n in range(s.N):  # Python loop over number of cells.
        # Unlabeled statements can be just added to the script instance.
        # Stored element definitions can be reused via attribute access of
        # the script instance.
        # This produces the following output: `QF, at = 0 * L;`.
        seq += s.QF(at=f'{n} * L')
        # [...] Add more elements.

# Adding a sequence to the script will auto-expand it when dumping the script.
# This produces the following output:
#     LATTICE: sequence, refer = entry, l = N*L;
#         QF, at = 0 * L;
#         [...]
#     endsequence;
s['LATTICE'] = seq

# A script can be dumped by converting to `str`.
with open('example.seq', 'w') as f:
    f.write(str(s))
```

Complete code example

The following is a complete code example.
```python
from madplot.madx.builder import Sequence, Script

s = Script()
s['N_cells'] = 60
s['L_cell'] = 13.45
s['f'] = 7.570366
s['DP'] = s.SBEND(L='L_cell/2', ANGLE='2*PI / (2*N_cells)')
s['QF'] = s.MULTIPOLE(KNL=['0', '1/f'])
s['QD'] = s.MULTIPOLE(KNL=['0', '-1/f'])

with Sequence(refer='entry', l='N_cells*L_cell') as seq:
    for n in range(s.N_cells):
        seq += s.QF(at=f'{n} * L_cell')
        seq += s.DP(at=f'{n} * L_cell')
        seq += s.QD(at=f'{n} * L_cell + 0.50 * L_cell')
        seq += s.DP(at=f'{n} * L_cell + 0.50 * L_cell')

s['FODO_LATTICE'] = seq

with open('example.seq', 'w') as f:
    f.write(str(s))
```

Advanced control

The following operations allow for advanced control statements.

- Comments can be placed as strings: `s += '// Comment'`.
- Re-evaluated (deferred) expressions (`:=`) can be created via the `E` class: `from madplot.madx.builder import E; s += s.ealign(dx=E('ranf()'))`.
- Any MADX command can be accessed via the script instance: `s += s.TWISS(file='optics')`.

Parser

The `parser.Parser` class has two methods available:

- `Parser.raw_parse`: This method parses the given script into its statements and returns a list thereof. The different statement types can be found in `Parser._types`. The literal values of command attributes will be returned.
- `Parser.parse`: Parses the script into its statements as well, but only returns non-comment, non-variable-declaration statements and interpolates any command attribute values.

For example:

```python
>>> Parser.raw_parse(madx)
[[Variable] L = 5, [Command] QF: QUADRUPOLE {'k1': 'pi/5', 'l': 'L'}]
>>> Parser.parse(madx)
[[Command] QF: QUADRUPOLE {'k1': 0.6283185307179586, 'l': 5}]
```

Engine

The MADX Engine API can be used to run MADX scripts. The `MADXEngine` class expects a set of templates which will be used to run the script. A template is a MADX script that contains unfilled parts which can be interpolated later on. The first template is considered the entry point (the main script) and will be run.
The following code creates an engine:

```python
from madplot.madx.engine import MADXEngine

engine = MADXEngine(
    ['test.madx', 'test.seq'],     # Template files; `test.madx` is the main script.
    madx='/opt/madx',              # Path to the MADX executable; if not specified
                                   # the `MADX` environment variable will be considered.
    working_directory='/tmp/test'  # The directory in which the engine runs the scripts.
)
```

The templates can contain substitutions following the Python string formatting rules. For example: `QF: QUADRUPOLE, KL={kl};`. The `{kl}` part can be interpolated when running the scripts. The `run` method can be invoked to run a script. It expects a list of output file names (which need to be generated by the template scripts). By default the file contents will be returned as `pandas.DataFrame` instances.

```python
twiss, = engine.run(['example.twiss'])
```

Here the file `example.twiss` needs to be generated when running `test.madx`. In case one or more template scripts require interpolation, the corresponding values can be specified using the `configuration` keyword argument:

```python
twiss, = engine.run(
    ['example.twiss'],
    configuration={'test.madx': {'kl': 0.01}}
)
```

Special arguments for the output conversion can be specified per output in the form of a dict:

```python
(twiss, meta), = engine.run([('example.twiss', {'return_meta': True})])
```

This will return meta data (prefixed with `@` in the TFS output) along with the main data frame.

Running without creating intermediary files

The `MADXPipe` class runs scripts without creating intermediary script files. This is useful in order to minimize the load on the file system.
It yields stdout and stderr from the underlying MADX sub-process:

```python
from madplot.madx import MADXPipe

runner = MADXPipe(madx='path/to/madx')
with open('example.madx') as fh:
    stdout, stderr = runner.run(fh.read())
```

Templating and formatting is done manually in Python before providing the full script to the runner instance:

```python
with open('template.madx') as fh:
    stdout, stderr = runner.run(fh.read() % {'h1_kick': 0.001})
```

Sessions

The `MADXSession` class can be used to run interactive MADX sessions. This is advantageous for avoiding rerunning parts of a script that are the same for each run (e.g. the sequence structure); it also doesn't require starting a new process for each run. Instead one can issue only the relevant commands (e.g. update an optics parameter) and then ask for the results (e.g. Twiss file generation). For example:

```python
from madplot.madx.engine import MADXSession

with open('/tmp/log', 'w') as log:
    session = MADXSession(stderr=log, stdout=log)
    session.run(['a := ranf()'])
    session.run(['value a'] * 3)

# Running a script at start-up.
session = MADXSession(['twiss_script.madx'])
twiss, = session.run(results=['example.twiss'])

# Update a parameter and regenerate twiss.
twiss, = session.run(['some_parameter = 0', 'twiss, file="example.twiss"'],
                     results=['example.twiss'])
```

Using Jinja as templating engine

The `JinjaEngine` and `JinjaPipe` classes allow for using the Jinja2 templating engine for configuring single runs. `JinjaEngine` creates intermediary script files for each configuration, similar to the `MADXEngine` class, while `JinjaPipe` directly pipes input and output to the MADX sub-process, similar to `MADXPipe`.
```python
from random import random
from madplot.madx import JinjaEngine, JinjaPipe

file_runner = JinjaEngine('example.madx.j2', madx='path/to/madx')
twiss, = file_runner.run(
    ['twiss.tfs'],
    configuration={'quadrupole_gradient_errors':
                   {f'quad_{i+1}': 0.001 * random() for i in range(18)}},
    job_id='test')

pipe_runner = JinjaPipe('example.madx.j2', madx='path/to/madx')
stdout, stderr = pipe_runner.run(
    quadrupole_gradient_errors={f'quad_{i+1}': 0.001 * random() for i in range(18)})
```

Plotting

Various functions for plotting are available in the `madplot.plot` module. Please refer directly to this module for further information.

Utilities

Utilities for conversion of data formats are available at `madplot.utils`:

- `Convert.tfs`: Converts a TFS file to a pandas data frame.
- `Convert.trackone`: Converts a trackone table (as outputted by TRACK, `onetable = true`) to a pandas data frame.
1 Introduction
2 RDF Features and API Considerations
3 Existing Work
A References

While the adoption of a particular RDF API will, of course, be influenced heavily by platform and language considerations, there is a common core set of considerations that apply to all APIs. We begin therefore by considering general features that those APIs may expose and support. We look primarily at the features of RDF and RDF "triple-matching" implementations in imperative languages. After the general discussion of the issues and potential points of variability is concluded, existing production-quality APIs are examined in the light of these features. This survey is not exhaustive; instead, we seek to highlight features that may represent "corner-cases" or areas of complexity in API specification, and to provide sufficient examples of API examination that a reader could undertake their own survey.

The RDF Specifications [RDF-WD] define a number of concepts and constructs which current APIs may offer differing degrees of support for. As well as these features, the discussion below includes extra-RDF features which have proved to be useful in practice (that is, during the development of experimental RDF applications). Where such features are not found in the RDF Specifications an indication is given. In addition, an API provides services to an environment for programming: we also consider some environmental features that may be worth consideration when the choice of API is made. Where textual examples are given, we use the compact version of the N-Triples notation used in [RDF-PRIMER].

The following concepts are likely to be found (in one guise or another) in common imperative RDF APIs. As one might expect, the RDF Graph (that is, the set of triples that makes up the graph) is likely to appear in an API. Without such a facility being explicitly present, it is hard to write programs that deal with multiple RDF documents or graphs concurrently.
The assumption that a program deals with a single, implicit instance of an RDF graph may be found in some experimental APIs. Such an assumption may give rise to some notational convenience, but that convenience comes at the expense of explicitly being able to distinguish between separate instances of graphs. In general, the graph corresponds to the RDF Graph defined in [RDF-CONCEPTS]; that is, to the abstract syntax.

We look now at the basic operations on a graph. These form the basis of what has been characterised as a "triple-matching" API - that is, they permit the basic manipulation and interrogation of a graph on a triple-by-triple basis. Graph manipulation is considered first; we consider the interrogation of a graph later. Essentially, the minimal requirements for an RDF API are that an updatable graph should support the assertion and retraction of triples.

At this stage it is worthwhile considering briefly the way in which the API reports the results of its operations. Early experimental RDF APIs used a simple "triple-matching" style to select target triples; more will be said of this below. When an operation (such as a selection) returns a set of triples, there are a number of patterns or styles which an API might adopt to present the results: for instance, as a "record set" or snapshot of the results; or as a sequence, generator, or iterator that produces successive results. The choice of return type may be largely influenced by idioms common to the implementation language (and potentially, efficiency considerations in the underlying implementation). There may be benefits from using a style that integrates closely with existing language facilities (eg, the C++ STL or Java's Collection Framework). In the discussion that follows, where illustrative examples are needed, we adopt an iterator-based idiom.

One extension that may be supported, which has shown some utility in practical RDF applications (eg, [FOAF]), is the notion of provenance.
Mechanisms for support for this vary; essentially, the term is used to describe the tracking of the source of a particular subset of assertions within a graph. The use of this provides a quick-and-dirty way to model the merge of a number of source graphs into one resultant graph, while maintaining origin information on each source. The manner of identification of each source graph varies; typically, each source is associated with an RDF URIref. The mechanism is particularly useful if one source graph changes; it allows the fast retraction of the original statements sourced from that graph followed by a reassertion, for a fast "partial update".

As a nonstandard extension, the precise semantics of provenance may vary between implementations: for instance, after the following sequence of operations, the question arises whether the originally asserted triple still exists within the resultant graph g:

An alternative approach to provenance may be suitable with an object-oriented API: to provide an implementation of RDF Graph that provides an amalgamating view over one or more source graphs. There, the semantics of updates could be more clearly expressed. However, there may be disadvantages to this approach too: the creation of many persistent graphs can be expensive.

As defined by [RDF-CONCEPTS], an RDF Triple is composed of three parts: the subject, the predicate, and the object. (Whether the Triple is exposed as a first-class citizen via the API or is treated implicitly may be a stylistic issue, depending on features of the language of implementation.) Using the language of [RDF-CONCEPTS], the Subject may be a URIref or a Blank Node; the Predicate is a URIref and the Object may be a URIref, a Blank Node, or a Literal (which may be a Typed Literal). We now look briefly at the features of each of these classes, as required by the RDF Specifications.

The definition of an RDF URIref is given in [RDF-CONCEPTS].
Note that there is no requirement that a URIref be dereferenceable, and the RDF Specifications make explicit the fact that there is no automatic connection between a URIref's denotation and the document (fragment) it may address. However, there seems to be some de facto convention in this regard. In any case, such a fact only has a simplifying impact upon the requirements for an RDF API.

The namespace used to create a URIref in the RDF/XML serialisation is not considered by the RDF Specifications to play any part in the URIref itself. The behaviour of RDF APIs may not completely reflect this. See below.

Again, a full definition of RDF Literals can be found in [RDF-CONCEPTS]. Rather than repeat that, we note here that literals may carry an optional language tag. Support for interrogation of literals based upon the language tag may be present; however, current opinion seems to hold that the language tag is a somewhat archaic misfeature preserved solely for the sake of backwards compatibility.

The revised RDF Specifications have introduced the notion of Typed Literals. Taking the view that a basic RDF API offers access to the abstract syntax, a typed literal is simply the pairing of a unicode string (the lexical form of the literal) and a URIref naming the datatype of the literal. [NOTE: at time of writing, a language tag may optionally be part of a typed literal. The future of this is still unclear, and this decision may be revised in a future draft of the specification.]

Therefore, at a minimum, an RDF API simply needs to be able to record this pair. The RDF Specifications place the determination of the value of a datatyped literal in the realm of the model-theoretic semantics (at the inference layer); thus, there is no onus on an implementation of an abstract syntax to verify that a particular literal corresponds to a legal lexical form of the named datatype. In particular, an RDF Graph may indeed contain "ill-formed" typed literals.
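At this minimal level, then, a typed literal is nothing more than the pairing just described. A small Python sketch (our own illustration, not any particular API; the class and field names are invented for this example) shows that recording the pair requires no validation at all:

```python
from dataclasses import dataclass
from typing import Optional

XSD_INT = "http://www.w3.org/2001/XMLSchema#int"

@dataclass(frozen=True)
class Literal:
    """A typed literal: a lexical form paired with an optional datatype URIref.

    No validation is performed -- an RDF graph may contain "ill-formed"
    typed literals; checking lexical forms belongs to higher layers.
    """
    lexical_form: str
    datatype: Optional[str] = None  # URIref naming the datatype
    language: Optional[str] = None  # optional language tag

# A well-formed and an ill-formed xsd:int literal are both representable:
ok = Literal("42", XSD_INT)
bad = Literal("forty-two", XSD_INT)  # stored without complaint

# Equality of typed literals is componentwise equality.
assert ok == Literal("42", XSD_INT)
assert ok != bad
```

A "go the extra mile" implementation could add a validation layer on top of this representation without changing the stored pair itself.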
However, it is quite possible that an RDF API and its implementation may "go the extra mile" with datatyped literals. At the time of writing, there is little practical experience of using datatyped literals. One of the literal types mandated by the RDF Specification is XMLLiteral. Support for this may be as minimal as the requirement outlined above for general typed literals. The node type with the most potential for complexity is the Blank Node. These nodes have an identity with respect to the graph that contains them, but no universal identifying name. When merging two RDF graphs (at an abstract syntax level), every blank node in the result will have originated with a corresponding blank node in one or other of the source graphs - no node will correspond to a blank node in both source graphs. This semantics presents some complexity. The internal identity of blank nodes must be preserved, even if two blank nodes appear initially to be the same. Consider the following sequence of operations on a graph, g, in which we use an illustrative pseudocode: b1 = new Blank Node b2 = new Blank Node g.assert( triple ( URIref(eg:a) URIref(eg:p1) b1 ) ) g.assert( triple ( URIref(eg:a) URIref(eg:p1) b2 ) ) g.assert( triple ( b1 URIref(eg:p2) URIref(eg:c) ) ) g.assert( triple ( b2 URIref(eg:p2) URIref(eg:d) ) ) After the first pair of assertions, g contains two triples which are identical apart from the blank nodes in them. In fact, as far as the semantics is concerned, the interpretation of this graph would be unchanged were we to "merge" the two triple into a single one. However, as can be seen, the separate identities of b1 and b2 must be preserved, since the second two assertions add concrete disambiguating triples to the graph. The merging of the two blank nodes would not give rise to a logically equivalent graph. A second illustration is necessary to indicate that care must be taken when writing code that utilisesblank nodes. 
Consider the following pseudocode, which attempts to copy graph g1 to graph g2:

    i = iterator over the triples of g1
    while ( i has more triples ) {
        t = next triple from i
        g2.assert( t )
    }

It is possible to produce an implementation which would enable this pseudocode to work as written. Consider the operation on the following graph (expressed in N-Triples):

    eg:a eg:p _:a .
    _:a eg:q eg:b .

Clearly, for the pseudocode to duplicate this trivial structure in a second graph, a (temporary) relationship between the identity of the blank node in graph g1 and the corresponding blank node in graph g2 must be established. Not every RDF API may support this implicit semantics, however, in which case the trivial "copy a graph" test case must be expressed by hand as follows (here we assume that blank nodes from the same graph may be compared for identity):

    i = iterator over the triples of g1
    m = new map < blank nodes of g1, blank nodes of g2 >
    while ( i has more triples ) {
        t = next triple from i
        s = t.subject
        if (s is a blank node) {
            if (s exists as a key in m) {
                s' = m(s)
            } else {
                s' = new blank node in g2
                m(s) = s'
            }
        } else {
            s' = s
        }
        o = t.object
        ... similar code for o follows, yielding o' ...
        g2.assert( triple( s' t.predicate o' ) )
    }

We note that a trivial tactic, such as assigning a GUID to each blank node, may suffice in practice. If the GUID name assigned to a "blank node" by an implementation is truly unique, then logically the resulting graph entails all the same consequences as the graph with the "really" blank node. This tactic, however, does admit of the possibility that the names assigned to blank nodes will not be unique, with corresponding subtle errors or renaming checks required in the event of a name clash. The treatment of blank nodes is a difficult area, and one in which there is a high degree of variability amongst APIs. Indeed, the behaviour of some APIs may vary depending on the precise implementation of backing store selected.
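The hand-written copy sketched above can be made concrete. The following Python sketch (our own illustration, representing triples as tuples in a set and blank nodes as instances of a small BNode class) implements the map-based remapping and checks the two properties that matter: co-reference within the copy is preserved, and no blank node is shared between source and copy:

```python
class BNode:
    """A blank node: identity, but no universal name."""
    _counter = 0

    def __init__(self):
        BNode._counter += 1
        self.label = f"_:b{BNode._counter}"  # internal label only

    def __repr__(self):
        return self.label

def copy_graph(g1):
    """Copy a set of triples, giving the copy fresh blank nodes.

    Blank nodes from g1 are remapped one-to-one onto new blank nodes
    (the map m of the pseudocode), preserving co-reference within the graph.
    """
    m = {}

    def remap(node):
        if isinstance(node, BNode):
            if node not in m:
                m[node] = BNode()
            return m[node]
        return node

    return {(remap(s), p, remap(o)) for (s, p, o) in g1}

# The trivial two-triple example from the text:
a = BNode()
g1 = {("eg:a", "eg:p", a), (a, "eg:q", "eg:b")}
g2 = copy_graph(g1)

# The copy has the same shape...
assert len(g2) == 2
# ...its single blank node is shared between both triples,
# but it is a *different* node from the one in g1:
bnodes2 = {n for t in g2 for n in t if isinstance(n, BNode)}
assert len(bnodes2) == 1 and a not in bnodes2
```

Note that the last assertion is exactly the merge semantics required by the specification: the copied graph contains no node corresponding to a blank node in both graphs.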
If particular behaviour is required, testing for the exact semantics may be required. Of course, an API may choose to mitigate some of the impact of this complication by providing convenience operations that work over sets of triples en bloc. One obvious operation is the merging of one graph into another.

There are also operations that make sense only at a whole-graph level. [RDF-CONCEPTS] defines a syntactic equality between graphs: essentially, an isomorphism relation between instances of the abstract syntax. This high-level operation is expensive; at least one RDF API provides it (with a good implementation) as a convenience. (See Jena, below.)

Thus far, we have only considered simple operations against an abstract syntax consisting of triples. RDF and RDFS both provide additional vocabulary elements and idiomatic constructs. An RDF API may provide support for these. RDF itself has three such features: the collection elements, containers, and reification.

The container mechanism is from the original RDF specification. It uses indexical properties (rdf:_1, rdf:_2, ...) to link a container (of type rdf:Bag, rdf:Alt or rdf:Seq) to its members. RDF itself makes no guarantees for the well-formedness of containers. However, an API may supply convenience methods, for example: All of these are possible to build from basic triple-matching operations; however, the API may supply such methods itself. "Support" for this construct may give rise to a number of corner cases, for example: do the methods operate on non-well-formed containers? If a "graph merge" operation is performed, will it renumber clashing containers? While there is no definitive "right" behaviour, answers to these questions may vary between implementations; APIs may give differing guarantees in this regard.
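As an illustration of what such convenience methods might look like, the following Python sketch (our own function names, over a plain set of triples; real APIs differ) appends to and reads back a container using the indexical properties. The read-back deliberately stops at the first gap, one possible policy for non-well-formed containers:

```python
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def seq_add(g, container, value):
    """Append a value to a container using the next free indexical
    property (rdf:_1, rdf:_2, ...)."""
    n = 1
    while any(s == container and p == f"{RDF}_{n}" for (s, p, o) in g):
        n += 1
    g.add((container, f"{RDF}_{n}", value))

def seq_members(g, container):
    """Return members in index order; stops at the first missing index,
    so a non-well-formed container is silently truncated (one possible
    policy -- another would be to raise an error)."""
    out, n = [], 1
    while True:
        hits = [o for (s, p, o) in g if s == container and p == f"{RDF}_{n}"]
        if not hits:
            return out
        out.append(hits[0])
        n += 1

g = set()
g.add(("eg:bag", f"{RDF}type", f"{RDF}Seq"))
seq_add(g, "eg:bag", "eg:x")
seq_add(g, "eg:bag", "eg:y")
assert seq_members(g, "eg:bag") == ["eg:x", "eg:y"]
```

Both helpers are, as the text notes, buildable purely from triple-matching; the corner cases (gaps, duplicate indices, renumbering on merge) are policy decisions the helper must take one way or the other.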
It has an added bonus under the open world assumption of many RDF systems, in that it is possible to explicitly close a list (that is, assert that it has no more members) through the use of rdf:nil. Supporting operations, if present, will be similar to those for containers. There are analogous issues with non-well-formed collections.

Reification is a de re mechanism for talking about "statings" - that is, occurrences of RDF triples - within RDF itself. Typical operations in support of reification are: Again, these are purely convenience methods which may be expressed in terms of more fundamental operations.

RDFS adds additional vocabulary to RDF in support of a simple class system. From an abstract syntax point of view, there is little additional support required, but the following operations may be provided in the absence of full inferencing support. RDFS classes are related to each other via rdfs:subClassOf relationships. Such relationships may be circular. Typical simple support for classes might extend to providing convenience mechanisms as follows: Due to interactions with the subproperty relationship (below), these methods may not be as straightforward as they first appear. Supporting these operations may require fuller support of RDFS inferencing.

In addition to the analogous subproperty chains, RDFS properties may also have range and domain specifications. The ability to make such declarations leads naturally to the question of whether an RDF API can support validation. Although RDF permits schema and instance data to be mixed freely, it is quite common in practice to see a separation between RDF graphs used to specify a schema, and RDF graphs used to specify "instance data". That being the case, a natural question that arises is whether instance data can be validated against a schema.
Although such a question makes something of a closed-world assumption, it is asked often enough in practice that an API may support such a decision-making operation. Such an operation may include checking that collections and containers are well-formed, confirming that range and domain constraints apply, and validating the lexical forms of datatyped literals (where possible). While RDFS is somewhat weak as a schema language, this data-validation operation may nevertheless find frequent practical use, particularly by users new to the open world of the Semantic Web, who come from a more traditional data-processing background.

Of course, while graph manipulation is useful, for applications to take their place in a large Semantic Web they must be able to communicate. For this purpose, the ability to transform a graph to and from a serialised form is necessary. Currently there is one standard form for RDF, RDF/XML [RDF-SYNTAX], although informal support for N-Triples [RDF-TESTCASES] may also be provided. Additionally there is some current interest in embedding RDF within other XML formats (particularly xhtml), although standardisation effort in this regard is still ongoing.

The minimal requirement for a parser is to be able to produce an RDF graph from an RDF/XML serialisation. Exact details vary; for instance, many parsers are stream-oriented, producing a sequence of triples as they consume RDF. Most parsers build on top of existing XML SAX-based parsers (for example, Expat). While namespaces do not appear in the RDF abstract syntax, it is conceivable that an RDF API may offer programmatic access and control over the namespaces used in a document, for example, to select the namespaces used for graph serialisation.
Although the RDF Syntax specification does not deal with embedding RDF within larger XML documents, it is certainly possible that a parser may offer facilities to extract RDF "opportunistically" from the whole XML document, or to operate over a subsequence of a stream of SAX events.

Where operations produce and consume sequences of triples, it is natural to ask whether such operations can be safely overlapped, and what the semantics of such an overlap may be. The detailed semantics may actually vary not only between RDF APIs, but within one API that offers multiple implementations of a graph store. There are two extremes that an API's semantics might range between: that each operation acts on a "snapshot" of the underlying RDF graph; and that each operation provides a synchronised "view" of the underlying store. The API may either provide a default semantics or permit (via different methods, or different wrapper classes) the selection of the precise semantics.

In addition, the selection of an appropriate API might hinge on the safety of the underlying data. Therefore, it is appropriate to ask whether the API offers some support for compound operations being combined into transactions. A related issue is the thread safety of an API, and the possibility of using multiple processes to access the same underlying graph store. (No de facto standard has yet arisen for the support of these facilities; an API's implementation may provide greater or lesser degrees of support for communicating these requirements to underlying layers through its SPI.)

As has been briefly mentioned, the evaluation of an API for suitability may wish to include a consideration of how well the API integrates with standard features of the language and platform. For example, if a triple-matching operation returns its results using an iterator pattern, and the language provides a standard framework for iterators, it would be reasonable for the returned iterator to be compatible with that framework.
We do not dwell too strongly on this aspect of an API, however, since the consideration is largely stylistic. Provided the API fits broadly into well-established patterns for containers, sequences, and iterators common to modern imperative languages, targeted wrappers are usually trivial to implement.

Of course, an API may supply several implementations of a graph store, with varying semantics: memory-based and transient, database-backed, and so on. In the survey below we highlight briefly the mechanism used to select a graph implementation (where present).

The simplest mechanism for querying the RDF abstract syntax arose early in experimental APIs, and has been dubbed "triple-matching". Essentially, a triple template is presented: each of the subject, predicate and object is either specified with a constant value or a wildcard placeholder (meaning "match anything") is supplied. The operation returns a set or sequence of triples which match the specified constraint. The common operations of this ilk are variations on finding, testing for the presence of, and removing triples that match a given template. It is possible to extend the triple-matching to a more general notion where a filtering function is passed, which accepts or rejects triples based upon programmatically-supplied criteria.

As an implementation detail, it is helpful if the API can guarantee an ordering on triples (at least, those originating in the same graph). Again, this may be dependent on the precise graph store used; but such a facility makes the merging of triple sequences much more efficient (although this effectively requires a low-level ordering relationship be supplied between triples).

RDF Path is an informal working group attempting to produce a language for node selection roughly analogous to XPath.
While no formal proposal exists, the notion is an interesting one since it offers the promise of a higher-level mechanism for selecting sets of nodes at once, through what is essentially a variant on a regular expression describing a set of paths through an RDF graph. In particular, such mechanisms offer simple ways to calculate the transitive closure of a property, etc. Support for such constructs is somewhat limited at present. Work in this area may prove to be useful; it certainly offers a more high-level approach than the simplistic triple-matching.

The highest-level abstractions for querying the RDF abstract syntax have all tended to revolve around RDF Query languages, which are (more or less) reminiscent of SQL's DQL subset. This topic is covered in more detail in 7.2; suffice to say that a typical RDF Query API takes a graph and a query specification, and returns a set or sequence of bindings for the free variables in the query which satisfy the constraints expressed by it.

Of course, a query need not merely be expressed over the abstract syntax; it may be expanded to include the closure of a graph under particular closure rules (see [RDFSEMANTICS]). In this case, the query system represents an interface to an implementation's inferencing capabilities. Not all current implementations provide support for inferencing, instead providing RDF graph operations solely at the level of the abstract syntax. These implementations can still be extremely useful for many practical applications involving the manipulation of metadata.

Experimental systems for RDF/RDFS inferencing thus far are varied in their approaches; the imperative triple-matching idiom is far from common in this context. However, inference support may be offered via a triple-matching API that exposes an inferencing "layer" built on top of a graph store. It is not clear yet if this is the best or most natural approach to the problem of providing a programmatic interface to RDF(S) inference.
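As an aside on the query side: the binding-set style of result described above for RDF Query APIs can be illustrated with a toy evaluator over triple patterns, written in pure Python. Everything here is illustrative ("?"-prefixed variable names, function names, and the graph-as-set-of-tuples representation are this sketch's own conventions, not those of any surveyed API); a real query engine would index and optimise rather than scan every triple.

```python
def query(graph, patterns):
    """Return every binding of the free variables (strings starting with
    '?') that satisfies all triple patterns against a graph, where the
    graph is a set of (subject, predicate, object) tuples."""

    def is_var(term):
        return isinstance(term, str) and term.startswith("?")

    def extend(binding, pattern, triple):
        """Try to extend a binding so that pattern matches triple;
        return the extended binding, or None on a mismatch."""
        new = dict(binding)
        for term, value in zip(pattern, triple):
            if is_var(term):
                if new.get(term, value) != value:
                    return None          # variable already bound differently
                new[term] = value
            elif term != value:
                return None              # constant does not match
        return new

    bindings = [{}]
    for pattern in patterns:
        bindings = [b2 for b in bindings for t in graph
                    if (b2 := extend(b, pattern, t)) is not None]
    return bindings
```

A two-pattern query such as `[("?x", "eg:knows", "?y"), ("?y", "eg:knows", "?z")]` then returns one binding dictionary per way of satisfying both patterns, much like the rows of a result set.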
However, it is possible that such an approach may be taken. For example, a triple-matching interface may be offered over the closure of an RDF graph. The RDF, RDFS and RDF datatyping closure rules are not amenable to naive forward-chaining (that is, just adding triples to a model until no more closure rules can be satisfied), although at least one system [CWM] has tried this approach. However, the full RDF closure rules introduce a countably infinite set of additional triples. Therefore, the use of a combination of forward- and backward-chaining in an inference layer is likely to be more appropriate.

Inference support does not have to stop with mere closure rules. A higher-level - that is, graph-level - operation is to determine if one RDF graph entails another, and if so, what values must be bound to do so. This actually gives rise to another way of expressing some RDF Query constraints.

APIs do not have to offer full support for RDF(S) inferencing to be useful. In fact, two simple facilities have proven useful that are not expressible using RDF or RDFS. The first facility is a simple one to assert that two nodes in an RDF graph have the same denotation; that is, that they refer to the same thing. Note that the simple "merging" of the two nodes in the graph is not as straightforward as one might expect; instead, it might be better to properly record that two nodes have the same denotation. The effect on an API's answers to interrogations about the existence of triples would then be as follows: if eg:X and eg:Y are declared to have the same denotation, then a query such as ( eg:f eg:p eg:X . ) may be answered in the positive if ( eg:f eg:p eg:Y . ) is present in the graph.

A second facility that has proven of great utility for simple inferencing applications (eg, [FOAF]) is the ability to declare RDF properties to be functional and/or "inverse functional" - that is, to have at most one value for a given subject, or to identify their subject uniquely by their value, respectively.

More expressive languages for ontology description are currently being defined: the OWL family of languages [WEBONT].
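Before moving on: the "same denotation" facility just described need not physically merge nodes; it can be recorded in a side structure, such as a union-find, and applied when answering queries. A minimal pure-Python sketch (all names are illustrative, not any surveyed API's; a graph is a set of (subject, predicate, object) tuples):

```python
class Denotations:
    """Record that nodes co-denote, and canonicalise node names through a
    union-find structure with path compression."""

    def __init__(self):
        self._parent = {}

    def _find(self, node):
        parent = self._parent.get(node, node)
        if parent == node:
            return node
        root = self._find(parent)
        self._parent[node] = root    # path compression
        return root

    def same_as(self, a, b):
        """Assert that a and b have the same denotation."""
        self._parent[self._find(a)] = self._find(b)

    def canonical(self, triple):
        """Rewrite a triple so co-denoting nodes share one representative."""
        return tuple(self._find(term) for term in triple)


def contains(graph, denotations, triple):
    """Does the graph contain the triple, up to declared co-denotation?"""
    canon = {denotations.canonical(t) for t in graph}
    return denotations.canonical(triple) in canon
```

With this, after `d.same_as("eg:X", "eg:Y")`, a graph containing ( eg:f eg:p eg:Y . ) answers positively to a probe for ( eg:f eg:p eg:X . ), without any triple having been rewritten in the store itself.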
In the future we can expect to see the adoption of more OWL functionality into RDF systems, exposed via RDF/OWL-capable APIs.

This is not intended to be an exhaustive survey; instead, we look at the APIs of a few existing RDF implementations in the light of the features discussed above. We look first at the Redland [REDLAND] toolkit.

The core Redland implementation is written in a fairly portable C style. It has been ported to a number of Unix and Unix-like platforms as well as the Windows platform. The API uses explicit calls to initialise and finalise its subsystems, rather than relying on linker technology and .init (etc.) sections. The implementation has a reputation for efficiency; here, however, we concentrate on the characteristics of the API itself. It has been widely used within applications which utilise it for the storage and manipulation of metadata.

Redland is implemented in C; however, wrappers for many languages have been produced using the SWIG [SWIG] system: for a full list, consult the Redland documentation. The API is object-oriented in flavour (implemented using a few conventions on top of C). Typically for C, it uses explicit object allocation, ownership and destruction (this may be automatic in languages that support automatic object deallocation, where SWIG supports it). The API is well-documented, with care taken to highlight where functions take object ownership. Where plug-ins may be provided to implement particular features of the API, a consistent factory-registration system is used.

The API supports nodes, which may be URIrefs, literals (with language tagging and support for XML literals) and blank nodes. Ordinal properties are considered a separate type. Typed literal constructors and inspectors are provided. Blank nodes carry string identifiers; these may be manipulated explicitly via API calls.

Redland offers a full set of wildcarded triple-matching query operations, together with statement assertion and retraction.
Returned sets of statements are implemented via an iterator-style interface. Triples (called Statements) are first-class citizens in this API. A model may contain statements associated with a context; this provides a provenance-tracking style of interface.

Redland offers copy constructors for graphs; however, no computationally expensive syntactic equality is offered. Redland offers convenience constructors for ordinal properties. While there are reported implementation plans to add fast checking of rdf:type, Redland operates solely on the abstract syntax; no special support for RDFS is provided.

Redland itself provides no parser; however, there are hooks for supplying a "plug-in" parser, and a number of parsers are supported. The parsing and serialisation interfaces are built around a "stream" of statements. This is similar in concept to an iterator or generator; it additionally permits the attachment of a filter function for fine-grained statement selection. Similarly, serialisation mechanisms can be provided via a plug-in interface. Current plug-ins provide for the serialisation and parsing of RDF/XML and N-Triples. Fine-grained control over serialisation behaviour is available generically through the association of URIrefs with serialisation features.

No explicit transaction-control operations are present in Redland; the API does not define a guaranteed semantics for overlapping operations. A number of storage systems are available in the existing implementation. They offer in-memory and persistent storage via Berkeley DB [BDB].

Triple-matching is supported using templated statements with total wildcards. Matched results are returned as iterators. An iterator may be converted into a stream and additional fine-grained filters applied to it for further selection. The current implementation contains no RDF Query implementation; however, the API supports queries expressed either as strings or as opaque query objects (eg, for pre-parsed queries).
Redland is an RDF API that operates solely on the abstract syntax. In its current state, an inferencing layer could be provided by an "RDF Model Storage" implementation; however, the API is strongly slanted towards operation on the abstract syntax.

Jena is a popular open-source framework for RDF processing. At the time of writing, work on a full revision to produce a "Jena 2" is still underway; we examine instead the Jena 1 API. The Jena API is implemented in Java, and should therefore be reasonably platform-neutral. One of the supplied model storage implementations uses Berkeley DB.

Jena provides a simple hierarchy of types based on RDFNode: Resource, Literal, and so on. In addition, there are some convenience types (Alt, Bag, and Seq, plus the DAML support) which directly support the manipulation of those RDF constructs. Blank nodes get "anonymous IDs" assigned automatically; of these, Jena says:

    The id is unique within the scope of a particular implementation. All models within an implementation will use the same id for the same anonymous resource.

Jena's graph operations take a resource-centric approach (that is, a resource carries its associated graph). This permits an interesting idiom which works well in Java. Jena has its own iterator classes which require an explicit disposal. This is principally due to the need to ensure proper resource management within storage implementations: Java's finalization mechanism does not make sufficient guarantees to be usable for this purpose.

Jena provides convenience classes and methods to support DAML ontology constructs. These predate the OWL Working Group's standardisation efforts. Future versions of Jena will probably update this support.

Jena offers graph isomorphism checking: in this version, this is somewhat ambitiously implemented as the .equals() method on a Model. In addition, the Model class has methods for the difference, intersection and union of whole graphs at once.
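Whole-graph difference, intersection and union of the kind Jena's Model offers have natural set-based analogues. A pure-Python sketch follows (illustrative only, not Jena's API; a graph is a set of (subject, predicate, object) tuples, and blank nodes are strings beginning with "_:"). The interesting part is that a union must "standardise apart" blank nodes, so that a blank node in one graph is never accidentally identified with an unrelated blank node in the other; intersection and difference are sketched here as plain set operations, which is only really meaningful for ground (blank-node-free) triples.

```python
import itertools

_fresh = itertools.count()  # supply of fresh blank-node labels

def _standardise_apart(graph):
    """Return a copy of the graph with its blank nodes renamed to fresh
    identifiers, so they cannot clash with another graph's blank nodes."""
    renaming = {}
    def rename(node):
        if isinstance(node, str) and node.startswith("_:"):
            renaming.setdefault(node, "_:g%d" % next(_fresh))
            return renaming[node]
        return node
    return {tuple(rename(term) for term in triple) for triple in graph}

def union(g1, g2):
    """Merge g2 into g1, keeping the two graphs' blank nodes distinct."""
    return set(g1) | _standardise_apart(g2)

def intersection(g1, g2):
    return set(g1) & set(g2)

def difference(g1, g2):
    return set(g1) - set(g2)
```

For example, if both graphs use a blank node named "_:x", `union` keeps them as two distinct nodes, whereas a naive set union would silently conflate them.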
The support for RDF's containers is quite good, with automatic renumbering of ordinal properties to maintain wellformedness of the container construct. Jena 1.6 doesn't support collections directly, although the parser, ARP, understands the rdf:parseType="daml:collection" syntax.

Jena 1 does not support much in the way of RDFS; Jena 2 promises more support for RDFS manipulation and inference. However, the current support for manipulating DAML ontologies is very convenient.

Jena's parser, ARP, is fully-featured. It uses a callback mechanism to pass individual statements to an RDF consumer. Jena's serialisation support is also good. It can target RDF/XML (with the option of taking advantage of RDF/XML's abbreviated forms where possible), N-Triples and N3.

Jena's Model interface provides explicit methods for starting, committing and aborting transactions, if the underlying storage facility supports such notions. The stock implementation comes with a number of storage mechanisms. These are selected by instantiating the appropriate implementation of Model.

Jena offers an interesting and useful query interface. Its facilities are currently based around SquishQL []. A query can be specified either as a data structure or as a string; the results are given as a set of variable bindings that satisfy the query (similar to a JDBC ResultSet).

Jena 2 promises an implementation of RDFS inference. Currently this is not available.
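Both surveyed APIs ultimately rest on the triple-matching idiom described earlier. As a compact summary of that idiom, here is a minimal pure-Python sketch (names are illustrative, not Redland's or Jena's):

```python
class Graph:
    """A toy triple store illustrating the triple-matching idiom:
    templates with None as the wildcard ("match anything") placeholder."""

    def __init__(self):
        self._triples = set()

    def add(self, s, p, o):
        self._triples.add((s, p, o))

    def remove(self, s, p, o):
        self._triples.discard((s, p, o))

    def match(self, s=None, p=None, o=None):
        """Yield every triple matching the template; None matches anything."""
        for triple in self._triples:
            if all(t is None or t == v for t, v in zip((s, p, o), triple)):
                yield triple

    def contains(self, s=None, p=None, o=None):
        """Is there at least one triple matching the template?"""
        return any(True for _ in self.match(s, p, o))
```

A call such as `g.match(p="eg:knows")` corresponds to the wildcarded template ( * eg:knows * ); a filtering-function extension would simply replace the equality test in `match` with an arbitrary predicate.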
http://www.w3.org/2001/sw/Europe/reports/rdf_api_comparison_report/Overview.html
On Sep 28, 2005, at 3:49 PM, Andy Gayton wrote:

> There's getMetadata. That let's you return arbitrary attributes.
>
> Would that cover what you're thinking?

No -- one problem is that extended metadata can be potentially very large (just as large as the file's normal contents). Another is that there may be potentially a large number of such attributes. So, you really don't want to return all of it with one call -- you want to explicitly ask for certain attributes.

See the getxattr, listxattr, etc. functions for what the low-level functionality looks like in Linux. At the moment this interface varies slightly between OSX, FreeBSD, and Linux, but they're almost the same. The OSX one adds an offset parameter to get/set, and the FreeBSD one adds a "namespace" parameter, to distinguish between root-only attributes and user attributes.

I think it may be best to model it as a special kind of child, as it seems as if that's the way people are moving their thinking anyhow.

    IFileSystemNode.xattrContainer() -> IFileSystemContainer

That returned object would then allow the use of the same createFile/child/remove/rename/open/read operations as on normal children. It would have to throw errors if you try doing excessively "interesting" things, like trying to create directories in it, but I think that's probably okay.

On the other hand, some people think xattrs are only for small metadata, and that "subfiles" or "named forks" are an altogether different thing. I'm not sure if that means that it's necessarily a bad idea to present both as fileish objects though. Some interface to this is necessary for properly functioning WebDAV (and smb if anyone ever wanted to implement that mess).

> Protocol's should try to get by with as little metadata as they
> can. If a backend doesn't supply a bit of metadata a protocol must
> have, then it won't be able to be used with the protocol.

Support for xattrs can also be optional in a backend.
Eventually, support for POSIX ACLs should also be considered. I don't really have the first clue how that should be modeled though.

James
http://twistedmatrix.com/pipermail/twisted-python/2005-September/011529.html
February 8, 2002

Please explain the difference between checked exceptions and runtime exceptions? When would I throw them, and when would I catch them?

Java provides two main exception types: runtime exceptions and checked exceptions. Checked exceptions extend from java.lang.Exception (but not from java.lang.RuntimeException), while unchecked exceptions extend from either java.lang.RuntimeException or java.lang.Error. Two aspects differentiate checked exceptions from runtime exceptions: mechanical and logical. From a mechanical viewpoint, runtime exceptions and checked exceptions differ in how you declare the method that throws the exception, and how you handle the exception when it's thrown. Consider the following checked exception definition:

    public class CheckedException extends Exception {
        public CheckedException() {}

        public CheckedException( String message ) {
            super( message );
        }
    }

The following class has a number of methods that throw exceptions:

    public class ExceptionalClass {
        public void method1() throws CheckedException {
            // ... some code
            throw new CheckedException( "something bad happened" );
        }

        public void method2( String arg ) {
            if( arg == null ) {
                throw new NullPointerException( "arg passed to method2 is null" );
            }
        }

        public void method3() throws CheckedException {
            method1();
        }
    }

Right away you'll notice that both method1() and method2() throw exceptions, but only method1() has an exception declaration. You'll also notice that method3() doesn't throw an exception itself, but its signature indicates that it may throw a CheckedException. Before I tell you why, consider the following main method:

    public static void main( String [] args ) {
        ExceptionalClass example = new ExceptionalClass();

        try {
            example.method1();
            example.method3();
        }
        catch( CheckedException ex ) { }

        example.method2( null );
    }

When a call is made to method1(), you must make the call from within a try/catch block. In this case, it's a checked exception because you must catch it.
Checked exceptions are always declared as thrown in the method signature. The signature lets the method's caller know that an exception may occur as a consequence of the call. If the exception does get thrown, the caller must be prepared to do something about it.

Contrast method1() with method2(). When you make a call to method2(), you don't have to do so within a try/catch block. Since you do not have to catch the exception, the exception is said to be unchecked; it is a runtime exception. A method that throws a runtime exception need not declare the exception in its signature.

Now, look back at method3(). It makes a call to method1() without making the call in a try/catch block. method3() avoids catching the exception by declaring that it, too, may throw the exception thrown by method1(). Instead of catching the exception, method3() simply passes the exception along. Analogously, you could have dispensed with the try/catch block in the main method by declaring that it throws CheckedException. (I only mention this to give you a second reference point; you should never declare a main as throwing an exception.)
http://www.javaworld.com/javaworld/javaqa/2002-02/01-qa-0208-exceptional.html
On Jan 3, 2004, at 9:44 AM, Paul Miller wrote:

>> There.
>
> Wow Bob - many thanks for looking into this.
>
>> '.
>
> I couldn't find any -flat_namespace references in PackageManager.py. I
> do have 1.0a3.

It's in Ft/Lib/DistExt/BuildExt.py not Ft/Lib/DistExt/PackageManager.py.. sorry, must not have been paying attention.

>> 2) It assumes that you can get a pointer to environ from a
>> bundle/dylib, you can't. Change the "environ" trap in
>> Ft/Server/Server/src/process.c to the following:
>
> This change I made and the setup process was able to complete. But
> when I try to use Ft.Xml.Domlette.Node, it fails to find the Node
> type. Assuming Node is implemented in the C code, could the first
> issue be causing the bundle not to load, thus it not being able to
> find Node?
>
> Any other ideas on fixing #1?

How about some example code that should work (but doesn't)? I don't know (or have a desire to learn) this API.

-bob
https://mail.python.org/pipermail/pythonmac-sig/2004-January/009809.html
On 2007-07-13, Chris Carlen <crcarleRemoveThis at BOGUSsandia.gov> wrote:
> John Nagle wrote:
>> You can sometimes get better performance in C++ than in C,
>> because C++ has "inline". Inline expansion happens before
>> optimization, so you can have abstractions that cost nothing.
>
> That's interesting. But why is this any different than using
> preprocessor macros in C?

This is OT, however: inline functions have a few benefits over preprocessor macros.

1. They are type-safe.
2. They never evaluate their arguments more than once.
3. They don't require protective parentheses to avoid precedence errors.
4. In C++, they have the additional benefit of being defined in a namespace, rather than applying globally to a file.

As an experienced C programmer you're probably used to coping with the problems of preprocessor macros, and may even take advantage of their untyped nature occasionally. Even C++ programmers still use them, advisedly.

> I will keep an open mind however, that until I work with it for
> some time there is still the possibility that I will have some
> light go on about OOP. So don't worry, I'm not rejecting your
> input.

In my opinion OOP is usefully thought of as a type of design rather than a means of implementation. You can implement an OO design in a procedural language just fine, but presumably an OO programming language facilitates the implementation of an OO design better than does a procedural language.

Going back to the stack machine question, and using it as an example: Assume you design your program as a state machine. Wouldn't it be easier to implement in a (hypothetical) state-machine-based programming language than in a procedural one? I think John was insinuating that a state-machine is more like an object than it is like a procedure.

--
Neil Cerutti
https://mail.python.org/pipermail/python-list/2007-July/439218.html
Hey guys, I was building a programme for my school computer exercise which involved replacing the first and last word of a string. Now I've done it using my own indigenous ways, but I'm rather interested to learn the most efficient way I could do it using Python.

a = "this is the demo sentence."

I'd like the result of my python function to be:

b = "This is the demo Sentence."

The tricky part of it is that there might be spaces at the front or the end of the string. I need those to be preserved. Here's what I mean:

a = " this is a demonstration sentence. "

The result would need to be:

b = " This is a demonstration Sentence. "

I'm also interested in opinions on whether a regex would do this job better than python's inbuilt methods, or vice versa.

    import re

    a = " this is a demonstration sentence. "
    print(re.sub(r'''(?x)    # VERBOSE mode
        (
          ^      # start of string
          \s*    # zero-or-more whitespaces
          \w     # followed by an alphanumeric character
        )
        |        # OR
        (
          \w     # an alphanumeric character
          \S*    # zero-or-more non-space characters
          \s*    # zero-or-more whitespaces
          $      # end of string
        )
    ''', lambda m: m.group().title(), a))

yields

     This is a demonstration Sentence. 
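For comparison with the regex approach, the same job can be done with built-in string methods. This is a sketch of my own (the helper name and structure are not from the thread): it peels off the leading and trailing whitespace, titles the first and last word via partition/rpartition, and reattaches the whitespace. Inner spacing between words is preserved because the string is never split and re-joined.

```python
def title_ends(s):
    """Capitalise the first and last word of s, preserving leading,
    trailing, and inner whitespace. Splits on plain spaces only."""
    core = s.strip()
    if not core:
        return s                       # all-whitespace (or empty) input
    lead = s[:len(s) - len(s.lstrip())]    # leading whitespace
    trail = s[len(s.rstrip()):]            # trailing whitespace
    first, sep, rest = core.partition(" ")
    if not sep:
        titled = first.title()         # single-word string
    else:
        head, sep2, last = rest.rpartition(" ")
        titled = first.title() + sep + head + sep2 + last.title()
    return lead + titled + trail
```

Note that `str.title()` also capitalises after apostrophes and hyphens, which may or may not be what you want for words like "don't"; the regex answer has the same behaviour, since it uses `.title()` too.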
https://www.edureka.co/community/15967/replace-first-and-last-word-of-string-in-the-most-pythonic-way
Below are a few tips and tricks that you may find useful when you use Mayavi2.

Often you write Mayavi scripts to render a whole batch of images to make an animation, and find that each time you save an image, Mayavi "raises" the window to make it the active window, thus disrupting your work. This is needed since VTK internally grabs the window to make a picture. Occluding the window will also produce either blank or incorrect images.

If you already have a Python script, say script.py, that sets up your visualization and that you run like so:

    $ mayavi2 -x script.py

then it is very easy to have this script run offscreen. Simply run it like so:

    $ mayavi2 -x script.py -o

This will run the script in an offscreen, standalone window. On Linux, this works best with VTK-5.2 and above. For more details on the command line arguments supported by the mayavi2 application, see the Command line arguments section.

When using mlab you will want to do this:

    mlab.options.offscreen = True

before you create a figure and it will use an offscreen window for the rendering.

Another option for offscreen rendering would be to click on the scene and set the "Off screen rendering" option on. Or from a script:

    mayavi.engine.current_scene.scene.off_screen_rendering = True

This will stop raising the window. However, this may not be enough. Please see below on the situation on different platforms.

Windows: If you are using win32 then off screen rendering should work well out of the box. All you will need to do is what is given above.

Linux and the Mac: there are several options to get this working correctly and some major issues to consider. If you have VTK-5.2 the offscreen rendering option should let you generate the pictures without worrying about occluding the window. However, you will need VTK-5.2 to get this working properly. There are also situations when this does not always work – try it and if you get blank windows, you have a problem.
For example:

    from mayavi import mlab
    mlab.options.offscreen = True
    mlab.test_contour3d()
    mlab.savefig('example.png')

If this produces a clean image (even if you switch desktops or cover any windows produced), you should be golden. If not, you should consider either using a virtual framebuffer or building VTK with Mesa + OSMesa to give you a pure software rendering approach.

VTK uses OpenGL for all its rendering. Under any conventional Unix (including Linux), you need an X server running to open a GL context (especially if you want hardware acceleration). This might be a problem when rendering on a headless server. As mentioned in the above paragraph, on a desktop, using the default server may also be a problem as it interferes with your ongoing work. A good workaround is to use the virtual framebuffer X server for X11 like so:

- Make sure you have the Xvfb package installed. For example, under Debian and derivatives this is called the xvfb package.

- Create the virtual framebuffer X server like so:

      Xvfb :1 -screen 0 1280x1024x24 -auth localhost

  This creates the display ":1" and creates a screen of size 1280x1024 with 24 bpp (the 24bpp is important). For more options check your Xvfb man page.

- Export the display to :1 like so (on bash):

      $ export DISPLAY=:1

- Now run your Mayavi script. It should run uninterrupted on this X server and produce your saved images.

This probably will have to be fine tuned to suit your taste.

Many Linux systems (including Ubuntu and Debian) ship with a helper script xvfb-run for running headless. The following command can run a Python script with Mayavi2 visualizations headless:

    xvfb-run --server-args="-screen 0 1024x768x24" python my_script.py

Beware that you shouldn't call mlab.show or start the mainloop in the script, otherwise the script will run endlessly, waiting for interaction in a hidden window.

Note: If you want to use Mayavi without the envisage UI or even a traits UI (i.e.
with a pure TVTK window) and do off screen rendering with Python scripts, you may be interested in the examples_offscreen. This simple example shows how you can use Mayavi without using Envisage or the Mayavi envisage application and still do off screen rendering.

If you are using mlab, outside of the Mayavi2 application, simply set:

    mlab.options.offscreen = True

Sometimes you might want to run Mayavi/VTK completely headless on a machine with no X server at all and are interested in pure offscreen rendering (for example for usage on the Sage notebook interface). In these cases one could use Mesa's OSMesa library to render offscreen. The downside is that you will not get any hardware acceleration in this case. Here are brief instructions on how to build VTK to do this.

- Build a recent version of Mesa. 7.0.4 (as of this time) should work, as would 7.2. We assume you download MesaLib-7.0.4.tar.bz2.
- Untar, and change directory to the new directory created. We call this directory $MESA henceforth.
- Run make configs/linux-x86, changing the file as per your configuration. Run make to see the list of options. Note: 7.2 has a ./configure script that you can run.
- Get VTK-5.2 or later (CVS will also work).
- Run ccmake path/to/VTK.
- Now select advanced options: press 't'.
- Set VTK_OPENGL_HAS_OSMESA to ON.
- Configure: press 'c'.
- Set OSMESA_INCLUDE_DIR to the $MESA/include dir.
- Set OSMESA_LIBRARY to $MESA/lib/libOSMesa.so.
- Similarly set OPENGL_INCLUDE_DIR, OPENGL_gl_LIBRARY=$MESA/lib/libGL.so, OPENGL_glu_LIBRARY, and OPENGL_xmesa_INCLUDE_DIR.
- Set VTK_USE_OFFSCREEN to ON if you want offscreen all the time; this will never produce an actual mapped VTK window since the default value of the render window's offscreen rendering ivar will be set to True in this case.
- Set any other settings like VTK_USE_GL2PS, USE_RPATH etc.
- Configure again (press 'c') and then generate: press 'g'.
- Note that if you do not want to use ccmake and would like to do this from the command line, you may also do (for example): cmake \ -DVTK_OPENGL_HAS_OSMESA=ON \ -DVTK_USE_OFFSCREEN=ON \ -DCMAKE_INSTALL_PREFIX=/path/to/vtk-offscreen \ -DVTK_WRAP_PYTHON=ON \ -DPYTHON_EXECUTABLE=/usr/bin/python2.5 \ -DPYTHON_LIBRARY=/usr/lib/libpython2.5.so \ -DBUILD_SHARED_LIBS=ON \ -DVTK_USE_GL2PS=ON \ -DOSMESA_INCLUDE_DIR=/path/to/Mesa-7.2/include/ \ -DOSMESA_LIBRARY=/home/path/to/Mesa-7.2/lib64/libOSMesa.so \ -DOPENGL_INCLUDE_DIR=/path/to/Mesa-7.2/include \ -DOPENGL_gl_LIBRARY=/path/to/Mesa-7.2/lib64/libGL.so \ -DOPENGL_glu_LIBRARY=/path/to/Mesa-7.2/lib64/libGLU.so \ path/to/VTK/ - Run make and wait till VTK has built. Let us say the build is in $VTK_BUILD. - Now install VTK or set the PYTHONPATH and LD_LIBRARY_PATH suitably. Also ensure that LD_LIBRARY_PATH points to $MESA/lib (if the mesa libs are not installed on the system); this ensures that VTK links to the right GL libs. For example: $ export PYTHONPATH=$VTK_BUILD/bin:$VTK_BUILD/Wrapping/Python $ export LD_LIBRARY_PATH=$VTK_BUILD/bin:$MESA/lib Now, you should be all set. Once this is done you should be able to run mlab examples offscreen. This will work even without an X display. With such a VTK built and running, one could simply build and install mayavi2. To use it in a Sage notebook, for example, you'd want to set ETS_TOOLKIT='null' and set mlab.options.offscreen = True. That's it. Everything should now work offscreen. Note that if you set VTK_USE_OFFSCREEN to ON then you'll by default only get offscreen contexts. If you do want a UI you will want to explicitly set the render window's off_screen_rendering ivar to False to force a mapped window. For this reason, if you might need to pop up a full UI, it might be better to not set VTK_USE_OFFSCREEN=ON.
These can be done by writing the respective filters and exposing them via a user_mayavi.py or a site_mayavi.py as described in Customizing Mayavi2. A more flexible and reusable mechanism for doing this is to create a full fledged Mayavi contrib package in the following manner. - Create a Python package, let’s call it mv_iitb (for IIT Bombay specific extensions/customizations). The directory structure of this package can be something like so:mv_iitb/ __init__.py user_mayavi.py sources/ ... filters/ ... modules/ ... docs/ ... The two key points to note in the above are the fact that mv_iitb is a proper Python package (notice the __init__.py) and the user_mayavi.py is the file that adds whatever new sources/filters/modules etc. to Mayavi. The other part of the structure is really up to the developer. At the moment these packages can add new sources, filters, modules and contribute any Envisage plugins that the mayavi2 application will load. - This package should then be installed somewhere on sys.path. Once this is done, users can find these packages and enable them from the Tools->Preferences (the UI will automatically detect the package). The user_mayavi.py of each selected package will then be imported next time Mayavi is started, note that this will be usable even from mlab. Any number of such packages may be created and distributed. If they are installed, users can choose to enable them. Internally, the list of selected packages is stored as the mayavi.contrib_packages preference option. The following code shows how this may be accessed from a Python script: >>> from mayavi.preferences.api import preference_manager >>> print preference_manager.root.contrib_packages [] >>> preference_manager.configure_traits() # Pop up a UI. For more details on how best to write user_mayavi.py files and what you can do in them, please refer to the examples/mayavi/user_mayavi.py example. Please pay particular attention to the warnings in that file. 
It is a very good idea to ensure that the user_mayavi.py does not implement any sources/modules/filters and only registers the metadata. This will avoid issues with circular imports. There are three ways a user can customize Mayavi: - Via Mayavi contributions installed on the system. This may be done by enabling any found contributions from the Tools->Preferences menu on the Mayavi component, look for the “contribution settings”. Any selected contributions will be imported the next time Mayavi starts. For more details see the Extending Mayavi with customizations section. - At a global, system wide level via a site_mayavi.py. This file is to be placed anywhere on sys.path. - At a local, user level. This is achieved by placing a user_mayavi.py in the users ~/.mayavi2/ directory. If a ~/.mayavi2/user_mayavi.py is found, the directory is placed in sys.path. The files are similar in their content. Two things may be done in this file: - Registering new sources, modules or filters in the Mayavi registry (mayavi.core.registry.registry). This is done by registering metadata for the new class in the registry. See examples/mayavi/user_mayavi.py to see an example. - Adding additional envisage plugins to the mayavi2 application. This is done by defining a function called get_plugins() that returns a list of plugins that you wish to add to the mayavi2 application. The examples/mayavi/user_mayavi.py example documents and shows how this can be done. To see it, copy the file to the ~/.mayavi2 directory. If you are unsure where ~ is on your platform, just run the example and it should print out the directory. Warning In the user_mayavi.py or site_mayavi.py, avoid Mayavi imports like from mayavi.modules.outline import Outline etc. This is because user_mayavi is imported at a time when many of the imports are not complete and this will cause hard-to-debug circular import problems. The registry is given only metadata mostly in the form of strings and this will cause no problem. 
Therefore to define new modules, we strongly recommend that the modules be defined in another module or be defined in a factory function as done in the example user_mayavi.py provided. The Standalone example demonstrates how one can use the core Mayavi API without using Envisage. This is useful when you want to minimize dependencies. Offscreen example demonstrates how to use Mayavi without the envisage UI or even a traits UI (i.e. with a pure TVTK window) and do off screen rendering. Compute in thread example demonstrates how to visualize a 2D numpy array and visualize it as image data using a few modules. It also shows how one can do a computation in another thread and update the Mayavi pipeline once the computation is done. This allows a user to interact with the user interface when the computation is performed in another thread. Sometimes you have a separate computational process that generates data suitable for visualization. You’d like Mayavi to visualize the data but automatically update the data when the data file is updated by the computation. This is easily achieved by polling the data file and checking if it has been modified. The Poll file example demonstrates this. To see it in action will require that you edit the scalar data in the examples/data/heart.vtk data file. Say you have a little visualization script and you’d like to run some kind of server where you can script the running Mayavi UI from a TCP/UDP connection. It turns out there is a simple way to do this if you have Twisted installed. Here is a trivial example: from mayavi import mlab from mayavi.tools import server mlab.test_plot3d() server.serve_tcp() There is no need to call mlab.show() in the above. The TCP server will listen on port 8007 by default in the above (this can be changed with suitable arguments to serve_tcp()). Any data sent to the server is simply exec’d, meaning you can do pretty much anything you want. 
The names engine, scene, camera and mlab are all available and can be scripted with Python code. The nice thing about this is that you do not lose any interactivity of the application and can continue to use its UI as before; any network commands are simply run on top of this. To serve on a UDP port use the serve_udp() function. For more details on the server module please look at the source code – it is thoroughly documented. Warning While this is very powerful, it is also a huge security hole, since the remote user can do pretty much anything they want once connected to the server. serve_tcp(): serves on a TCP port (8007 by default). Any data sent to the server is simply exec'd, meaning you can do pretty much anything you want; the engine, scene, camera and mlab names are all available. Warning Data sent is exec'd, so this is a security hole. serve_udp(): serves on a UDP port. Test it like so: import socket s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.bind(('', 9008)) s.sendto('camera.azimuth(10)', ('', 9007)) Warning Data sent is exec'd, so this is a security hole. Often users like to animate a visualization without affecting the interactive capabilities of the view. For example, you may want to rotate the camera continuously or take a snapshot while continuing to interact with the Mayavi UI. To do this one can use the very convenient animate() decorator provided with Mayavi. Here is a simple example: from mayavi import mlab @mlab.animate def anim(): f = mlab.gcf() while 1: f.scene.camera.azimuth(10) f.scene.render() yield a = anim() # Starts the animation. Notice the use of yield above; it is crucial to making this work. This example will continuously rotate the camera without affecting the UI's interactivity. It also pops up a little UI that lets you start and stop the animation and change the time interval between calls to your function.
For more specialized use you can pass arguments to the decorator: from mayavi import mlab @mlab.animate(delay=500, ui=False) def anim(): # ... a = anim() # Starts the animation without a UI. If you don't want to import all of mlab, the animate decorator is available from: from mayavi.tools.animator import animate Note that to start the event loop, i.e. to get the animation running, you will need to call show() if you do not already have a GUI environment running. For more details check the documentation of the animate() decorator available in the MLab reference. For an example using it alongside the visual module (handy for object-movement animation), see Mlab visual example. Note If you want to change the data of an object in an animation, see Animating the data. Let's say you have a stack of serially numbered PNG or JPEG files that you want to animate on a Mayavi scene. Here is a simple script (called img_movie.py): # img_movie.py from pyface.timer.api import Timer def animate(src, N=10): for j in range(N): for i in range(len(src.file_list)): src.timestep = i yield if __name__ == '__main__': src = mayavi.engine.scenes[0].children[0] animator = animate(src) t = Timer(250, animator.next) The Timer class lets you call a function without blocking the running user interface. The first argument is the time after which the function is to be called again, in milliseconds. The animate function is a generator and changes the timestep of the source. This script will animate the stack of images 10 times. The script animates the first data source by default. This may be changed easily. To use this script do this: $ mayavi2 -d your_image000.png -m ImageActor -x img_movie.py This isn't really related to Mayavi but is a useful trick nonetheless. Let's say you generate a stack of images using Mayavi, say of the form anim%03d.png (i.e. anim000.png, anim001.png and so on); you can make this into a movie.
If you have mencoder installed try this: $ mencoder "mf://anim%03d.png" -mf fps=10 -o anim.avi \ -ovc lavc -lavcopts vcodec=msmpeg4v2:vbitrate=500 If you have ffmpeg installed you may try this: $ ffmpeg -f image2 -r 10 -i anim%03d.png -sameq anim.mov -pass 2 The Mayavi application allows for very powerful Command line arguments that let you build a complex visualization from your shell. What follows is a bunch of simple examples illustrating these. The following example creates a ParametricSurface source and then visualizes glyphs on its surface colored red: $ mayavi2 -d ParametricSurface -m Glyph \ -s"glyph.glyph.scale_factor=0.1" \ -s"glyph.color_mode='no_coloring'" \ -s"actor.property.color = (1,0,0)" Note that -s"string" applies the string on the last object (also available as last_obj), which is the glyph. This example turns off coloring of the glyph and changes the glyph to display: $ mayavi2 -d ParametricSurface -m Glyph\ -s"glyph.glyph.scale_factor=0.1" \ -s"glyph.color_mode='no_coloring'" \ -s"glyph.glyph_source.glyph_source = last_obj.glyph.glyph_source.glyph_list[-1]" Note the use of last_obj in the above. Here is a simple example showing how to texture map an iso-surface with the data that ships with the Mayavi sources (the data files are in the examples directory): $ mayavi2 -d examples/tvtk/images/masonry.jpg \ -d examples/mayavi/data/heart.vti \ -m IsoSurface \ -s"actor.mapper.scalar_visibility=False" \ -s"actor.enable_texture=True"\ -s"actor.tcoord_generator_mode='cylinder'"\ -s"actor.texture_source_object=script.engine.current_scene.children[0]" It should be relatively straightforward to change this example to use a ParametricSurface instead and any other image of your choice. Notice how the texture image (masonry.jpg) is set in the last line of the above. The image reader is the first child of the current scene and we set it as the texture_source_object of the isosurface actor.
Sometimes you need to shift/transform your input data in space and visualize that in addition to the original data. This is useful when you'd like to do different things to the same data and see them on the same plot. This can be done with Mayavi using the TransformData filter for StructuredGrid, PolyData and UnstructuredGrid datasets. Here is an example using the ParametricSurface data source: $ mayavi2 -d ParametricSurface \ -m Outline -m Surface \ -f TransformData -s "transform.translate(1,1,1)" \ -s "widget.set_transform(last_obj.transform)" \ -m Outline -m Surface If you have an ImageData dataset then you can change the origin, spacing and extents alone by using the ImageChangeInformation filter. Here is a simple example with the standard Mayavi image data: $ mayavi2 -d examples/mayavi/data/heart.vti -m Outline \ -m ImagePlaneWidget \ -f ImageChangeInformation \ -s "filter.origin_translation=(20,20,20)" \ -m Outline -m ImagePlaneWidget The UserDefined filter in Mayavi lets you wrap around existing VTK filters easily. Here are a few examples: $ mayavi2 -d ParametricSurface -s "function='dini'" \ -f UserDefined:GeometryFilter \ -s "filter.extent_clipping=True" \ -s "filter.extent = [-1,1,-1,1,0,5]" \ -f UserDefined:CleanPolyData \ -m Surface \ -s "actor.property.representation = 'p'" \ -s "actor.property.point_size=2" This one uses a tvtk.GeometryFilter to perform extent-based clipping of the parametric surface generated. Note the specification of the -f UserDefined:GeometryFilter. This data is then cleaned using the tvtk.CleanPolyData filter. Under mlab, the UserDefined filter can be used to wrap e.g. a GeometryFilter VTK filter with: filtered_obj = mlab.pipeline.user_defined(obj, filter='GeometryFilter') With mlab, the user_defined function can take as its filter argument either the name of the VTK filter to be used or an already-instantiated instance of the filter.
Note With the UserDefined filter, as with most Mayavi filters, the raw TVTK object can be accessed as the filter attribute of the Mayavi filter object. The Image cursor filter example gives a full example of using the UserDefined filter. The Tvtk segmentation example is a full example of building a complex VTK pipeline with heavy use of the UserDefined filter. The default 3D interaction with the scene (left click on the background rotates the scene, right click scales, middle click pans) is not suited for every visualization. For instance, it can be interesting to restrict the movement to 2D, e.g. when viewing an object in the 'x' direction. This is done by changing the interactor_style of a scene. Here is an example to use Mayavi as a 2D image viewer: from mayavi import mlab mlab.test_imshow() mlab.view(0, 0) fig = mlab.gcf() from tvtk.api import tvtk fig.scene.interactor.interactor_style = tvtk.InteractorStyleImage() mlab.show() Another useful interactor is the 'terrain' interactor, handy to have natural movement in scenes where you want the 'up' vector to be always pointing in the 'z' direction: from mayavi import mlab mlab.test_surf() fig = mlab.gcf() from tvtk.api import tvtk fig.scene.interactor.interactor_style = tvtk.InteractorStyleTerrain() mlab.show() VTK has many different interactors. An easy way to list them is to display the VTK class browser (via the help menu, in the mayavi2 application) and to search for "Interactor". Another option is to tab-complete in IPython on tvtk.InteractorStyle. You've just created a nice Mayavi/mlab script and now want to generate an animation or a series of images. You realize that it is way too slow rendering the images and takes ages to finish. There are two simple ways to speed up the rendering. Let's assume that obj is any Mayavi pipeline object that has a scene attribute: obj.scene.disable_render = True # Do all your scripting that takes ages. # ...
# Once done, do the following: obj.scene.disable_render = False This will speed things up for complex visualizations, sometimes by an order of magnitude. While saving the visualization to an image, you can speed up the image generation at the cost of losing out on anti-aliasing by doing the following: obj.scene.anti_aliasing_frames = 0 The default value is typically 8 and the rendered image will be nicely anti-aliased. Setting it to zero will not produce too much difference in the rendered image, but any smooth lines will now appear slightly jagged. However, the rendering will be much faster. So if this is acceptable (try it), this is a mechanism to speed up the generation of images.
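The disable_render trick above can be wrapped in a small context manager so that rendering is always re-enabled, even if the scripting in between raises an exception. This is a sketch, not part of the Mayavi API: `no_render` is a made-up helper name, and `scene` is assumed to be any object with a boolean `disable_render` attribute (such as `obj.scene` above).

```python
from contextlib import contextmanager

@contextmanager
def no_render(scene):
    """Temporarily disable rendering while building a complex visualization.

    `scene` is assumed to be any object with a boolean `disable_render`
    attribute, e.g. obj.scene from a Mayavi pipeline object.
    """
    previous = scene.disable_render
    scene.disable_render = True
    try:
        yield scene
    finally:
        # Restored even if an exception was raised in the with-block.
        scene.disable_render = previous
```

With this, `with no_render(obj.scene): ...` replaces the manual True/False bookkeeping around your slow scripting.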
http://docs.enthought.com/mayavi/mayavi/tips.html
(Please use the Discussion Group for any questions, and the issue list for bug reports) This tutorial corresponds with Hotwire 0.710. For the 0.6xx series, see GettingStarted0600 (or upgrade!). You've started Hotwire for the first time, and are looking at a command prompt. It wants us to type something. Let's do it; type: ls What happened? In short, Hotwire resolved "ls" to the ls Builtin object, executed it with no arguments in a thread, received the File objects it returned, and displayed them with the FilePathRenderer. At the core of Hotwire is the idea of object-oriented pipeline processing. On the top right of the display, you should see something that says "Type: GnomeVfsFile". This is telling you that the current stream superclass is GnomeVfsFile objects. If you expand the inspector, you can see the properties of the output objects. You have access to those properties. Let's try creating a pipeline: ls | prop size The prop builtin returns the property of an object. Now, let's take a look at our process list: proc | filter python cmd This command will show you all processes whose cmd property matches the regular expression "python". Let's ask for help about proc: help proc The text "in: None out: <class 'hotwire.sysdep.proc.Process'>" is telling us that the proc builtin outputs Process objects. Click on the link to see the inspector for that object class and its available properties. Let's go back to manipulating files a bit more: cd Desktop ls -a *.zip Here we can see that Hotwire's syntax is very shell-like in argument processing and globs. Other things to try are Hotwire's builtin file management commands like mv and rm. In particular, rm is special in that it actually moves files to the Trash by default, which gives it support for undo. You aren't limited to Hotwire's builtin commands like ls and proc. You can also execute system programs using the sys builtin: sys ifconfig You'll notice that we got the result as text.
Now, we mentioned Hotwire is oriented around object-oriented pipeline processing. What the sys builtin does is look up an external program as a Unix shell would, and then return its output as an object stream - in particular a stream of Unicode strings, one per line. You can confirm this by noticing that the pipeline type in the top right is now unicode. The above is very important to understand - again, in the Hotwire model, external binaries are just a type of command which return lines of text. Because executing external binaries is quite common, Hotwire allows you to omit the sys: ifconfig If a verb typed in doesn't match the name of a builtin, Hotwire tries to execute it using sys by default. For pure text processing, Hotwire actually has several advantages over a Unix terminal. By default, the entire output of every command is saved, and Hotwire knows which commands output what text. This lets us do some neat things; for example, type: |filter -i link Note the leading "|". What happened here? When you start a pipeline with "|", Hotwire actually expands it to "current | ", which means "pipe the output of the currently visible command". This lets you interactively refine pipelines by chaining the output of a previous one into a new one, without re-execution. Another perk Hotwire offers is Firefox-style search over text. Press Ctrl-s. You should see a search area appear. Enter some text such as link, and use the up/down arrows to go between matches. Press ESC when you're done. As mentioned above, Hotwire is saving the result of every pipeline. Currently, it saves them until 5 minutes after you last viewed them; then they are automatically garbage collected. Try pressing Ctrl-Shift-o (or View->Overview) to see a complete list of all the saved results. Click on one of them to display it. From there, you can use Ctrl-Down and Ctrl-Up to move forwards and backwards through results, respectively. You might have noticed that commands never block your shell. 
By default Hotwire runs everything under the equivalent of '&' in the Unix shell. Try typing the following: sleep 30 While it's running, you can continue to do other things. This can be very convenient for things such as make, svn up, etc. At this point you've run a number of commands, and you may have seen the completion/history popup. Recognizing that often day-to-day work can be repetitive - you might run the same commands very often in just a few directories - Hotwire is fanatical about remembering what you do and making it easy to find things again. Try pressing the Up navigational arrow. This pops up the dialog for the per-tab command history. Keep pressing Up or Down to select a history item, then Enter to choose. Also, you might have already noticed that Hotwire is displaying choices for token completion and history if you've paused in typing. Let's take a look. Type the following, but do not hit Return - instead hit TAB at the end: ls You should see a dialog that shows you all verbs which match ls - this is by default the Hotwire builtin ls, but other things such as lspci are offered as well. Press TAB to expand the completion selections. As above, Enter chooses one. You can also use TAB to look for completions for your files and directories. Try typing the following, then hit TAB: ls ~/D On a recent system you will likely see completions for at least Desktop and Downloads. Again, once you're in a completion selection mode, use the up/down arrows to move the selection, and Enter to choose. Additionally, Hotwire is automatically doing a search of your command history as you type. Try the ls again, but this time press Ctrl-R: Ctrl-R shows you which commands you executed previously that match the current input. One major goal of Hotwire is to bring real programming languages like Python and Ruby much closer to hand than is normally the case with a Unix terminal+shell.
Because Hotwire is built on the Python runtime, it has powerful execution and introspection capability. Let's run some Python: py 1+1 If you look above, you can see that Hotwire actually parsed this as py-eval "1+1". Hotwire knows about different languages, and allows you to run them using prefixes. py import os; os.getpid() Note here in particular you don't have to quote anything. Hotwire is taking everything we type after "py " literally. py xrange(20) What's going on here is that because Hotwire is oriented around stream processing, it is expanding iterable python objects to streams of objects. If you are in a mode where you mainly want to enter Python without using "py " prefix repeatedly, you can change the input language using Ctrl-Shift-L, or by just clicking on the language chooser in the bottom left. Change the input language to Python. Then type: 2+2 The language selector causes Hotwire to treat your input as if you had typed the language prefix before each line. Thus the above is equivalent to py 2+2 in the HotwirePipe mode. Switch back to HotwirePipe mode again by using Ctrl-Shift-L - we'll get to the other languages in a minute. In the HotwirePipe language, you can use several builtins which can evaluate snippets of Python code. These Python snippets act a bit like lambda, except that the variable it is bound to the object being processed. ls | py-map 'it.path.upper() + "\n"' The py-map builtin acts like the Python map function. It allows you to transform an object stream. Here, we're grabbing the path property of each File object, uppercasing it, and appending a newline. We need the newline because otherwise the strings will appear all on one line in the UnicodeRenderer (in the future this will be easier to deal with). proc | py-filter "it.pid > 50 and it.pid < 5000" The py-filter builtin is like the Python function filter. As with py-map, the variable it is bound to the current object. 
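The semantics of these two builtins can be sketched in plain Python. This is an illustration only, not Hotwire's actual implementation: each builtin behaves like a generator over the object stream, with the current object bound to a name `it`.

```python
# A plain-Python sketch of py-map / py-filter semantics: each builtin is
# essentially a generator over the object stream, with the current object
# bound to `it` (here passed to a callable instead of an eval'd snippet).

def py_map(expr, stream):
    """Apply expr to every object in the stream, yielding the results."""
    for it in stream:
        yield expr(it)

def py_filter(expr, stream):
    """Yield only the objects for which expr is true."""
    for it in stream:
        if expr(it):
            yield it

if __name__ == "__main__":
    paths = ["readme.txt", "setup.py"]
    print(list(py_map(lambda it: it.upper(), paths)))
    print(list(py_filter(lambda it: it.endswith(".py"), paths)))
```

The real builtins evaluate a string snippet rather than taking a callable, but the stream-transforming shape is the same.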
ifconfig| py-map "it.swapcase()" Remember, because this is the HotwirePipe language, we can easily execute system commands and pipe them to Python code. If you look at the implementation of Hotwire builtins, you'll notice that all of them are essentially just wrapped Python generator functions. If you want to create anonymous scripts and see their output, you can do that with your chosen text editor and the py-eval builtin. ed ~/testscript.py Inside testscript.py, write: import os,sys,time def main(): time.sleep(3) yield 42 time.sleep(3) yield "Test python generator script!" Now, save it and close the editor. In Hotwire, type: py-eval -f ~/testscript.py The py-eval builtin looks inside the file for a function called main. It then executes it as a generator. You can put whatever Python code you want inside this file, and it will be compiled and executed inside the Hotwire process, making it easy to view and manipulate with further pipelines. Python is a good programming language, but it is not the only language. A true shell should be flexible, able to handle whichever language you want to use. You may notice that we called the builtin py-map, and not just map. This was deliberate - in the future, we'd like to have rb-map as well. And pl-filter. However, currently due to technical limitations, Hotwire best supports Python because it can be executed in-process. In the medium-term future, we hope for a common shared runtime for Free Software programming languages, so that Hotwire can be more than just Python. Despite this, there are still useful things we can do - we can communicate with other languages on a bytestream level. So Hotwire has the concept of a language registry, which currently just has basically a favicon and an interpreter execution command for each language. This turns out to allow some useful things. For example, if you're a Ruby programmer - have you ever wished you could easily use Ruby in your Unix pipelines? 
Let's go back to that ifconfig output: We mentioned earlier that Python had the prefix py. Similarly, Ruby has the prefix rb. Now you can pipe the output of the last command to some Ruby: |rb $stdin.each { |line| puts line.downcase } If you run this, you see what happens is that Hotwire will pass it to the external Ruby interpreter using "ruby -e". The large advantage of this over a traditional Unix shell is that you do not need to shell-quote your Ruby code - anything you type after the "|rb" will be passed directly to Ruby, with no Hotwire processing. Hotwire also has several other languages registered by default. For example, if you want to run a snippet of Unix shell, that's easy to do: sh for x in $(seq 50); do sleep 0.5; echo ${x}; done As with the Ruby example, Hotwire will pass everything you type after sh directly to /bin/sh -c. So we've seen how Hotwire runs executables asynchronously in the background. What if you need to give input to one? Sometimes e.g. "svn up" might ask for a password. Hotwire has some rudimentary support for input. We can try it now: Try running this small shell script, which reads from its standard input: sh echo "Sample question? [y/n]"; read ans; echo "You answered ${ans}" Press Ctrl-I to activate input mode. This allows you to send basic lines of input to the subprocess. For example: y When you're done, press ESC to exit input mode. A significantly smaller percentage of commands you might encounter are actually designed for the Unix terminal "GUI". Examples include "vi", "mutt", and "top". Hotwire also has a builtin called term which creates a full Unix terminal for commands. Many popular Unix commands which require a terminal are by default aliased to term commandname. For example, try typing: top What happened here is Hotwire expanded it using its default alias from top to term top. This will create another tab with the command; when the command exits, the tab is removed. 
You can see the default list of aliases by typing: and scrolling to the bottom. For more on command compatibility, see HotwireCommandInput. You've now had your first twenty minutes or so with a hypershell. There is a huge number of differences from a traditional Unix terminal+shell or Windows cmd.exe, but in the end we hope you will find that your time invested in learning Hotwire pays itself back many times over. You may be interested in HotwirePipeExamples for some more commands to try, and ExtendingHotwire for how to add custom Python to Hotwire. You could also review the KeyboardShortcuts and CommandLineOptions. See also HotwireArchitecture for a more technical overview of how things work.
http://code.google.com/p/hotwire-shell/wiki/GettingStarted0700
The for loop is a very common control flow statement in programming languages such as Java. I am not going to describe the basics of the "for loop" as it is beyond the scope of this article and most of us are already well aware of them. In this post, I will list down various ways to use the for loop in day-to-day programming habits, and then I will compare them against a similar set of data for their relative performance. Different ways to use for loop I am listing down 4 different ways in my knowledge. If you know any other way also, please let me know through a comment. 1) For each statement In this technique, the advanced for-each statement introduced in Java 5 is used. Learn this also as your next core java interview question. private static List<Integer> list = new ArrayList<>(); for(Integer i : list) { // do other stuff } 2) Using list.size() in condition private static List<Integer> list = new ArrayList<>(); for(int j = 0; j < list.size() ; j++) { //do stuff } 3) Initialize another local variable with size private static List<Integer> list = new ArrayList<>(); int size = list.size(); for(int j = 0; j < size ; j++) { //do stuff } 4) Initialize the initial value of counter to size of list private static List<Integer> list = new ArrayList<>(); for(int j = list.size(); j > size ; j--) { //do stuff } Comparing the performance of all types I am creating an ArrayList and populating it with 1 crore (10 million) Integer instances. Then I will iterate through the list using all four ways. This way we will be able to understand the difference in performance.
Execution environment: Java 7, Eclipse Juno.

package com.howtodoinjava.demo.core;

import java.util.ArrayList;
import java.util.Calendar;
import java.util.List;

public class ForLoopPerformanceTest {
    private static List<Integer> list = new ArrayList<>();
    private static long startTime;
    private static long endTime;

    static {
        for(int i = 0; i < 1_00_00_000; i++) {
            list.add(i);
        }
    }

    @SuppressWarnings("unused")
    public static void main(String[] args) {
        //Type 1
        startTime = Calendar.getInstance().getTimeInMillis();
        for(Integer i : list) {
            //
        }
        endTime = Calendar.getInstance().getTimeInMillis();
        System.out.println("For each loop :: " + (endTime - startTime) + " ms");

        //Type 2
        startTime = Calendar.getInstance().getTimeInMillis();
        for(int j = 0; j < list.size() ; j++) {
            //
        }
        endTime = Calendar.getInstance().getTimeInMillis();
        System.out.println("Using collection.size() :: " + (endTime - startTime) + " ms");

        //Type 3
        startTime = Calendar.getInstance().getTimeInMillis();
        int size = list.size();
        for(int j = 0; j < size ; j++) {
            //System.out.println(j);
        }
        endTime = Calendar.getInstance().getTimeInMillis();
        System.out.println("Using [int size = list.size(); int j = 0; j < size ; j++] :: " + (endTime - startTime) + " ms");

        //Type 4
        startTime = Calendar.getInstance().getTimeInMillis();
        for(int j = list.size(); j > size ; j--) {
            //System.out.println(j);
        }
        endTime = Calendar.getInstance().getTimeInMillis();
        System.out.println("Using [int j = list.size(); j > size ; j--] :: " + (endTime - startTime) + " ms");
    }
}

When the above program runs, the following is the output in the console:

For each loop :: 110 ms
Using collection.size() :: 37 ms
Using [int size = list.size(); int j = 0; j < size ; j++] :: 4 ms
Using [int j = list.size(); j > size ; j--] :: 1 ms

Clearly the last two ways are way ahead in terms of performance, while the for-each statement [type 1] is the most expensive operation compared with the other three.
Update: Reason for the difference in performance

The last two flavors, types 3 and 4, have very little difference and should be considered the same. They both fetch the size of the collection initially and then use this size value in the loop condition. Type 1 is the costliest one, and the simple reasoning is the iterator created internally by the for-each loop. Creating an iterator and calling iterator.next() adds up to most of the cost, which is not involved in the direct access of the other three types. Happy Learning!!

17 thoughts on “Performance comparison of different for loops in java”

I was thinking about the time difference, then this came to my mind; below are the results. Please use the loop for what it is meant for:

List<Integer> list = new ArrayList<>();
long startTime;
long endTime;
for (int i = 0; i < 10000000; i++) {
    list.add(i);
}

// Type 1
startTime = Calendar.getInstance().getTimeInMillis();
for (Integer i : list) { int a = i; }
endTime = Calendar.getInstance().getTimeInMillis();
System.out.println("For each loop :: " + (endTime - startTime) + " ms");

// Type 2
startTime = Calendar.getInstance().getTimeInMillis();
for (int j = 0; j < list.size(); j++) { int a = list.get(j); }
endTime = Calendar.getInstance().getTimeInMillis();
System.out.println("Using collection.size() :: " + (endTime - startTime) + " ms");

// Type 3
startTime = Calendar.getInstance().getTimeInMillis();
int size = list.size();
for (int j = 0; j < size; j++) { int a = list.get(j); }
endTime = Calendar.getInstance().getTimeInMillis();
System.out.println("Using [int size = list.size(); int j = 0; j < size ; j++] :: " + (endTime - startTime) + " ms");

// Type 4
startTime = Calendar.getInstance().getTimeInMillis();
for (int j = list.size() - 1; j >= 0; j--) { int a = list.get(j); }
endTime = Calendar.getInstance().getTimeInMillis();
System.out.println("Using [int j = list.size(); j > size ; j--] :: " + (endTime - startTime) + " ms");

For each loop :: 32 ms
Using collection.size() :: 27 ms
Using [int j = list.size(); j > size ; j--] :: 22 ms

Article is good, but you forgot the main thing in the analysis, and that is accessing the actual values from the list.
You are just comparing how much time the LOOP will take. But for cases 3/4, if you try to access the list value at that index then you will see the actual performance. In that case, the first is the clear winner. Try it.

Type 4 is not good to use because values are not accessed. Using type 3 is efficient. BTW, this is a good tutorial.

Effectively, some tests were wrong (type 4), and they test nothing if the value is not accessed… But something else has not been taken into account in these tests: I have added the tests at startup of my Swing application, and the order of the tested methods changes the results. I think the memory used by the previous tests' objects should be garbage-collected during the next test… So I have added System.gc(); before each test and the results are now really different:

Using [Iterator iter = list.iterator(); iter.hasNext();] :: 42 ms : TOTAL = -2014260032
Using [Integer i : list] :: 45 ms : TOTAL = -2014260032
Using [int j = size - 1; j >= 0 ; j--] :: 54 ms : TOTAL = -2014260032
Using [int j = 0; j < size ; j++] :: 48 ms : TOTAL = -2014260032

Thanks for sharing your observations.

What are the best practices for looping while retrieving data from a ResultSet?

There is a small error in type 4; it should be for(int j = list.size() - 1; j >= 0 ; j--). Also, the for-each loop runs significantly faster in cases such as a LinkedList, where the data is not stored in a sequential space in memory. In fact, it takes FOREVER (not literally) to use the non-for-each loops on a LinkedList. Having a fair understanding of how what you're using works is fairly important to knowing how to optimize your code.

Good article… it's nice!!! Previously I also had the view that the for-each loop gives better performance. Thanks for clarifying.
Well…
1) If you make a loop, you obviously need to access the values… so add Integer i = list.get(j) for types >= 2.
2) Your type 4 may need to be debugged…
1 + 2 => Type 1 is something like 25% slower than the 3 others (no differences between them). Type 4 should be "for(int j = list.size() - 1; j >= 0 ; --j)". Even with the added access, mine average out to about 35 for type 1 and 30 for the others.

I have searched for the performance of loops over the internet, and not one article mentions the low performance of the first loop (i.e. the for-each loop). Moreover, many people have quoted the Java documentation, which says: "The for-each loop, introduced in release 1.5, gets rid of the clutter and the opportunity for error by hiding the iterator or index variable completely. The resulting idiom applies equally to collections and arrays. Note that there is no performance penalty for using the for-each loop, even for arrays. In fact, it may offer a slight performance advantage over an ordinary for loop in some circumstances, as it computes the limit of the array index only once. While you can do this by hand, programmers don't always do so."

Hi Shivam, please refer to. I can give you 10 more links which hint at slower performance. The reason the for-each loop is thought better is its readability: it is easier to write and understand the statement. The difference when looping over a large collection is a rare condition, and the time advantage is not very attractive [but it is there, you can verify it yourself].

If we are dealing with collections and trying to address each of the collection's objects, then the for-each loop is faster, instead of using the increment variable "i" to get the object at index "i". The enhanced for loop makes loops easier to write but, as you mentioned, it does come with some performance penalty.
Some time back I did the tests myself to assess different for loops and found the enhanced for loop to be 13-15x slower than the normal for loop. But I made a few mistakes, like you. In the tests that you've mentioned in this article, I don't see any warm-up phase before your actual measurement starts, so I'm wondering about the results that you've posted here. Are they from a single run or an average of, say, 1 million runs? Also, the JVM does a lot of optimizations if a loop/method is recognized as hot (used frequently). So it would be better to do some warm-up before starting the timer, or use tools like JMH to assess the performance of the different versions. Also, as Rajesh mentioned, most of the time we use a for loop to iterate over a collection of user-defined objects like Person. In that case we need to check which is significantly faster:

for(int i = 0, j = list.size(); i < j; i++){
    Person p = list.get(i);
    // do something with p
}

for(Person p : list){
    // do something with p
}

I'll post my findings using JMH as soon as I find time.

Fair enough. I didn't do things like warm-up, and these tests are very basic observations.
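Several commenters point out that type 4 as written never executes: j starts equal to size, so the condition j > size is false on the first check. A corrected descending loop, with element access included so the body cannot be skipped, can be sketched like this (a hypothetical DescendingLoop class, not from the article):

```java
import java.util.ArrayList;
import java.util.List;

public class DescendingLoop {
    // Corrected "Type 4": start at size - 1 and run down to 0,
    // reading each element so the loop does real work.
    static long sumDescending(List<Integer> list) {
        long total = 0;
        for (int j = list.size() - 1; j >= 0; j--) {
            total += list.get(j);
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        for (int i = 1; i <= 5; i++) {
            list.add(i);
        }
        // Sums 5 + 4 + 3 + 2 + 1
        System.out.println(sumDescending(list)); // prints 15
    }
}
```

The size is still read only once (in the initializer), so this keeps the performance property of types 3 and 4 while actually visiting every element.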
http://howtodoinjava.com/2013/03/26/performance-comparison-of-different-for-loops-in-java/
Proposed features/Sanitary Station

Note: For current tagging see Tag:amenity=sanitary_dump_station

Contents

Proposal

At various points along the UK canals, you can manually empty your portable human waste tanks. We need a tag with which to mark places offering such services. There was some discussion on the list as to what the value should be here, and as to whether we need to subdivide. Proposed_features/Dump_Station is related.

Tagging

amenity=sanitary_station

Applies to nodes

Rendering

?

Discussion

- I'm confused. You proposed 'pumpout' at the same time as this. What's the difference? Is it the manual/machine-controlled bit?
- amenity=pumpout - operation=manual|mechanical is more logical Myfanwy 10:29, 16 February 2008 (UTC)
- There's rather more difference than that. The pumpout places are a thing a bit like a fuel pump, with a big hose which sucks. A sanitary station is basically a stainless steel funnel/sink with a tap. You empty the tank down the hole, then rinse it out. So they are pretty different, and not interchangeable - you can't empty a pumpout boat using a sanitary station, or vice versa. Your tagging system implies the only difference is the amount of elbow grease required :-) Gerv 10:42, 5 March 2008 (UTC)

I also see this as the same as pumpout. I suggest merging the two. If necessary, you could make an additional tag to differentiate: amenity=sanitary_station, sanitary_station:type=* (or something along those lines.) --Hawke 22:30, 19 March 2008 (UTC)

- As I'm (officially) too late for the vote, I'm putting my FOR opinion here. I live on my boat; trust me, a Pumpout and a Sanitary Station are VERY different. I have a cassette-type toilet - I can use a Sanitary Station, I can't use a Pumpout. The difference is very important - facilities are sometimes (as on my waterway) DAYS of travelling time apart, and getting it wrong is a non-trivial problem.
Not differentiating the two would be a bit like marking a particular concert hall on the map as being in both London and Edinburgh - it is better to mark neither than mislead the user. My ex-girlfriend has a Pumpout system. The difference between her boat and mine is that she has a substantial steel tank built into the boat with connectors for pipes on the side of the boat. Her black water ends up in this tank; mine ends up in a plastic cassette that forms the lower part of the toilet seat (think of it as a sophisticated bucket - some are actually just that - big buckets known as a 'Bucket and Chuckit'). When it's full I change it for a spare and put the full one in a wheelbarrow to take it down to the local Sanitary Station (I'm on a mooring for houseboats with its own (private, hence not on the map) Sani Stn). She has to find someone to crew for her and motor down the waterway for several hours to the local Pumpout. She could buy a 'Self Pumpout Kit' to use Sanitary Stations but, like most people, she hasn't; they are expensive, smelly, very bulky to store, take a lot of effort to work, take a LOT of time to work (during which time people like me are queued up behind you, getting annoyed about hanging around for significant parts of an hour) and often involve trailing a smelly hose across a public footpath (I've seen the result of someone tripping on one - not pretty!). On top of that, some Sani Stns are unsuitable for Self Pumpout - they are either too far from the waterside or can't take the volume a boat's tank delivers in a short period. If anything we could do with THREE tags: Pumpout, Sanitary Station, and Sanitary Station (No Self Pump Out) --sbt 17:29, 22 August 2008 (UTC)

Voting

- I approve this proposal -- Gerv 19:15, 5 March 2008 (UTC)
- I disapprove this proposal. If it can only be used by boats, it should go in the waterway "namespace". -- Ulfl 02:00
- I oppose this proposal. It is the same as pumpout, and the description makes it very boat-specific while the concept is not.
--Hawke 22:33, 19 March 2008 (UTC) I oppose this proposal. See Hawke. --Chrischan 21:47, 29 May 2008 (UTC) I oppose this proposal. This is recycling --Bjalf 02:25 1 June 2008 (CET)
https://wiki.openstreetmap.org/wiki/Proposed_features/Sanitary_Station
SerialPC

Table of Contents

Mbed OS 2 and Mbed OS 5

This is the handbook for Mbed OS 2. If you're working with Mbed OS 5, please see the new handbook. For the latest information about Serial, please see The Windows serial driver.

Serial Communication with a PC

The mbed Microcontroller can communicate with a host PC through a "USB Virtual Serial Port" over the same USB cable that is used for programming. This enables you to:

- Print out messages to a host PC terminal (useful for debugging!)
- Read input from the host PC keyboard
- Communicate with applications and programming languages running on the host PC that can communicate with a serial port, e.g. perl, python, java and so on.

Hello World!

#include "mbed.h"

Serial pc(USBTX, USBRX); // tx, rx

int main() {
    pc.printf("Hello World!\n");
}

Host interface and terminal applications

It is common to use a terminal application on the host PC to communicate with the mbed Microcontroller. This allows the mbed Microcontroller to print to your PC screen, and for you to send characters back. To find the serial port allocated to the mbed Microcontroller:

- Windows - Look under the "Ports" section in "Device Manager" (Start -> Control Panel -> System -> Hardware -> Device Manager). The name will be "mbed Serial Port (COMx)", where "x" is the number of the COM port allocated.
- Mac OS X - Use the command ls /dev/tty.usbmodem*
- Linux - Use the command ls /dev/ttyACM*

Terminal Applications

Details

Communication over the USB Serial port simply uses the standard Serial Interface, specifying the internal (USBTX, USBRX) pins to connect to the Serial Port routed over USB. The Serial Interface defaults to a 9600 baud standard serial connection (8 bits, 1 stop bit, no parity), so your host program should be set to the same settings. If you want to communicate at a different standard baud rate, ensure you modify the settings of both the Serial Interface and the Host PC application!
Examples

Echo back characters you type:

#include "mbed.h"

Serial pc(USBTX, USBRX);

int main() {
    pc.printf("Echoes back to the screen anything you type\n");
    while(1) {
        pc.putc(pc.getc());
    }
}

Connect to your mbed Microcontroller with a terminal program and use the 'u' and 'd' keys to make LED1 brighter or dimmer:

#include "mbed.h"

Serial pc(USBTX, USBRX);
PwmOut led(LED1);

float brightness = 0.0;

int main() {
    pc.printf("Press 'u' to turn LED1 brightness up, 'd' to turn it down\n");
    while(1) {
        char c = pc.getc();
        if((c == 'u') && (brightness < 1.0)) {
            brightness += 0.01;
            led = brightness;
        }
        if((c == 'd') && (brightness > 0.0)) {
            brightness -= 0.01;
            led = brightness;
        }
    }
}

Pass through characters in both directions between the PC and an external serial port:

#include "mbed.h"

Serial pc(USBTX, USBRX);
Serial device(p9, p10); // tx, rx

int main() {
    while(1) {
        if(pc.readable()) {
            device.putc(pc.getc());
        }
        if(device.readable()) {
            pc.putc(device.getc());
        }
    }
}

The C stdin, stdout and stderr file handles are also defaulted to the PC serial connection:

#include "mbed.h"

int main() {
    printf("Hello World!\n");
}

Read in to a buffer:

#include "mbed.h"

DigitalOut myled(LED1);
Serial pc(USBTX, USBRX);

int main() {
    char buffer[128];
    pc.gets(buffer, 4);
    pc.printf("I got '%s'\n", buffer);
}

Troubleshooting

If you are having difficulties with USB serial communication:

- Make sure you have installed the driver if you are working on Windows - Windows Serial Configuration
- Learn how to use the Serial port
- Read up on using Terminal programs

If you have any problems, or think this tutorial could be improved, please tell us in the Forum!
https://os.mbed.com/handbook/SerialPC
Opencv tutorial C++ Background subtraction

This method is used to learn what belongs to the background of the image and what belongs to the foreground. Static cameras that monitor an area can very easily recognize which part of the image is always there and whether something new is moving over the background.

Background subtraction

Visual Studio 2019 project setup

If you have Opencv 4+ compiled or installed, the only steps you need are to set the include directory with the OpenCV header files and to set the Additional Library Directories to point to the \lib folder. Just note that Visual Studio 2019 should use VC16\lib. Finally, as additional dependencies, specify the libs needed to resolve the function implementations in the code. The list for Opencv 420 is here. Different versions of opencv use different numbering; for example, opencv 440 will use opencv_core440.lib.

opencv_bgsegm420.lib
opencv_core420.lib
opencv_videoio420.lib
opencv_imgproc420.lib
opencv_highgui420.lib
opencv_video420.lib

Background subtraction C++ video

Opencv 4 C++ background subtractor full source code

#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <vector>
#include <fstream>
#include <iostream>
#include <math.h>
#include <Windows.h>
#include "opencv2/video/background_segm.hpp"
#include "opencv2/imgproc.hpp"

using namespace cv;
using namespace std;

int main(int argc, const char** argv)
{
    // Init background subtractor
    Ptr<BackgroundSubtractor> bg_model = createBackgroundSubtractorMOG2(500, 16.0, true);
    // Create empty input img, foreground and background image and foreground mask.
    Mat img, foregroundMask, backgroundImage, foregroundImg;
    // capture video from source 0, which is web camera, If you want capture video /

hello, thank you for sharing. I want to ask: can BS be done with an image,
no video? And can BS be done without looping to update the background?

Very useful... the code works... thank you.
https://funvision.blogspot.com/2015/11/tutorial-opencv-300-background.html
If I am way off, feel free to say so and I'll start over.

#include <string>
#include <fstream>
#include <iostream>

using std::cout;
using std::cin;
using std::endl;
using std::ofstream;
using std::ifstream;
using std::ios;
using std::string;

int main()
{
    string name = "";
    int num = 0;
    int updatedNum = 0;
    int x = 0;
    int heading = 0;
    int columnHeaders = 0;
    int underLines = 0;;
    }
    inFile.close();
    }
    else
        cout << "The file could not be opened." << endl;
    /++;);
    outFile << num[x] << endl; //end for
    outFile << endl;
    //close the file
    outFile.close();
    return 0;

I cannot get it to compile to try it out. The error I get is: error C2109: subscript requires array or pointer type. The error is on line 63, but I cannot figure out how to fix this.
http://www.dreamincode.net/forums/topic/122853-reading-from-file-and-making-new-file/
The Reason The Internet Was Invented, or, Cat Pictures FTW September 17, 2014 Leave a comment I’ll report about the SpringOne2GX conference soon, but I’m fighting a cold at the moment and the medication has put me in a rather bleary state. Thus, the following bit of random Groovy whimsy… In this post, I’ll use Groovy to access a publicly available RESTful web service, parse the downloaded data, and build a GUI to hold the resulting images. The code will demonstrate the Groovy JDK, maps, the collect method, I/O processing, and the builder pattern, all to achieve its ultimate goal: looking at cat pictures. A Flickring Light In The Darkness There are many so-called RESTful web services available to the general public. Despite the name, however, most only support GET requests. This isn’t terribly surprising, since POST, PUT, and DELETE would require some kind of transaction, and security, and all sorts of protection against the sort of damage reminiscent of YouTube comments[1]. Examples of that sort of services include Open Weather Map (), Currency Exchange Rates (), the Bing[2] Maps REST services (), the Google Maps Geocoder () and the source for this article, Flickr (). Each supports HTTP GET requests only. If a RESTful web service only supports GET requests, does that make it a GETful service? If so, and it’s also stateless, does that make it a FORGETful web service? Thank you, thank you. Try the veal, and please remember to tip your wait staff. Believe it or not, Flickr used to be the definitive site for image sharing, and some people still use it. Access requires a key, which means you need to register with Yahoo!, another web site that used to matter long, long ago when the web was shiny and new. Why use it here? It’s available, it’s free, and, most important of all, it’s got cat pictures. Feeling Groovy Groovy is one of the new family of languages that compiles to bytecodes for the Java Virtual Machine. 
It’s simpler than Java, yet much more powerful, and has an easy learning curve for existing Java developers. It’s cool in its own right, and also forms the basis of interesting projects like Grails and Gradle. Hey, I even wrote a Java/Groovy integration book about it. Now on to the Groovy. Let’s assume you register and receive your special key. Save it in a file called flickr_key.txt, so you can access it this way: String key = new File('flickr_key.txt').text Accessing a property (like text here) in Groovy is the same as invoking the corresponding getter or setter method. Here, the Groovy JDK adds the getText method to File, which returns the contents of the file. The base URL for the web service is: String endPoint = '?' I included the “?” because I want to append the generated query string to the end point to get the complete URL. The Flickr API uses a query string with a lot of parameters. Here I add them to a Groovy map. def params = [method: 'flickr.photos.search', api_key: key, format: 'json', tags: 'cat', nojsoncallback: 1, media: 'photos', per_page: 6] I’m calling the flickr.photos.search method with the api_key. I want JSON data back, but not as a JSON callback, and I want photo data using the cat tag. It’s not terribly complicated, but there are a lot of parameters. The cool part is that a Groovy map like this can be converted to a query string using a very common idiom. The collect method applied to a map generates a list by applying a closure to each element of the original list. Therefore I can generate the query string thusly: def qs = params.collect { k,v -> "$k=$v" }.join('&') The two-argument closure takes each key/value pair and returns a string where the key equals the value. Then the join method applied to the list creates a string whose entries are separated by ampersands. That means the whole URL is defined by "$endPoint$qs". 
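Since the article is aimed at existing Java developers, the same map-to-query-string idiom can be sketched in plain Java 8 streams for comparison (this hypothetical QueryString class is not from the article):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryString {
    // Mirrors Groovy's params.collect { k, v -> "$k=$v" }.join('&'):
    // map each entry to "key=value", then join with ampersands.
    static String toQueryString(Map<String, ?> params) {
        return params.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps insertion order, so the output is stable.
        Map<String, Object> params = new LinkedHashMap<>();
        params.put("method", "flickr.photos.search");
        params.put("tags", "cat");
        params.put("per_page", 6);
        System.out.println(toQueryString(params));
        // method=flickr.photos.search&tags=cat&per_page=6
    }
}
```

The Groovy version is still shorter, which is part of the article's point — but the stream pipeline makes it clear the idiom is ordinary map/join, not magic.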
I can then use the toURL method from the Groovy JDK to convert the string to a URL[3], and the text property invokes getText, returning the total response. I can now use the cool JsonOutput.prettyPrint(txt) method to format the output and write it to a file.

File f = new File('cats.json')
if (f) f.delete()
f << jsonTxt
println JsonOutput.prettyPrint(jsonTxt)

Here's a sample from that output:

{ "photos": { "page": 1, "pages": 882777, "perpage": 6, "total": "5296661", "photo": [ { "id": "15077663928", "owner": "125421155@N06", "secret": "dc605f2671", "server": "3837", "farm": 4, "title": "Mornings with bb kitty", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { ... }, { ... }, { ... }, { ... }, { ... } ] }, "stat": "ok" }

From JSON to Cat Pictures

Each photo block has the components necessary to assemble the URL to retrieve it, which is way more complicated than it needs to be but is still doable. Here's the code for that part.

def json = new JsonSlurper().parseText(jsonTxt)
def urls = json.photos.photo.collect { p ->
    "https://farm${p.farm}.staticflickr.com/${p.server}/${p.id}_${p.secret}.jpg"
}

The JsonSlurper has a parseText method that converts the received JSON string into a Groovy map. Then walking the map by calling json.photos.photo gets all the photo objects, and the collect block converts each of them into a URL. All that remains is to access all the photo URLs and populate a simple GUI with them. One simple way to do that is to use Groovy's SwingBuilder class.

new SwingBuilder().edt {
    frame(title: 'Cat pictures', visible: true, pack: true,
          defaultCloseOperation: WindowConstants.EXIT_ON_CLOSE,
          layout: new GridLayout(0, 2)) {
        urls[0..5].each { String url ->
            label(icon: new ImageIcon(url.toURL()))
        }
    }
}
The GUI uses a GridLayout to make two columns of equally-spaced elements, each of which is a JLabel that contains an ImageIcon with the individual photo URLs. The result is a display of the most recent six pictures tagged cat in Flickr, which can result in almost anything[4]. Here’s a sample view: Don’t ask me about the monkey. I have no idea what that’s about either. So what have we learned? - Groovy maps can be converted to query strings using a closure and the joinmethod - The Groovy JDK adds methods to Java library classes that you wished were there all along - Groovy builders can make Swing programming almost, but not quite, pretty cool - I can haz kitteh pics The entire script from this article is part of a GitHub repository called IntroGroovy. Drill down to the src/main/groovy/flickr directory. You’ll need to add your own Flickr key, but after that, go for it. Just don’t be surprised, apparently, if you get monkeys or tractors mixed in with your cat pictures. I suppose now I should go and reimplement the whole script in LOLCODE. toURLto convert the string into, a tomato? That would be pretty cool, actually. Now I wish I had that method. Of course, Groovy is an open source project… Recent Comments
https://kousenit.org/2014/09/
Translucent persistent objects

Project description

persistent: automatic persistence for Python objects

This package contains a generic persistence implementation for Python. It forms the core protocol for making objects interact "transparently" with a database such as the ZODB. Please see the Sphinx documentation (docs/index.rst) for further information.

4.4.3 (2018-10-22)

- Fix the repr of persistent objects to include the module name when using the C extension. This matches the pure-Python behaviour and the behaviour prior to 4.4.0. See issue 92.
- Change the repr of persistent objects to format the OID as an integer in hexadecimal notation if it is an 8-byte byte string, as ZODB does. This eliminates some issues in doctests. See issue 95.

4.4.2 (2018-08-28)

4.4.0 (2018-08-22)

- Use unsigned constants when doing arithmetic on C timestamps, possibly avoiding some overflow issues with some compilers or compiler settings. See issue 86.
- Change the default representation of Persistent objects to include the representation of their OID and jar, if set. Also add the ability for subclasses to implement _p_repr() instead of overriding __repr__ for better exception handling. See issue 11.
- Reach and maintain 100% test coverage.
- Simplify __init__.py, including removal of an attempted legacy import of persistent.TimeStamp. See PR 80.
- Add support for Python 3.7 and drop support for Python 3.3.
- Build the CFFI modules (used on PyPy or when PURE_PYTHON is set) at installation or wheel building time when CFFI is available. This replaces the deprecated way of building them at import time. If binary wheels are distributed, it eliminates the need to have a functioning C compiler to use PyPy. See issue 75.
- Fix deleting the _p_oid of a pure-Python persistent object when it is in a cache.
- Fix deleting special (_p) attributes of a pure-Python persistent object that overrides __delattr__ and correctly calls _p_delattr.
- Remove some internal compatibility shims that are no longer necessary. See PR 82.
- Make the return value of TimeStamp.second() consistent across C and Python implementations when the TimeStamp was created from 6 arguments with floating point seconds. Also make it match across trips through TimeStamp.raw(). Previously, the C version could initially have erroneous rounding and too much false precision, while the Python version could have too much precision. The raw/repr values have not changed. See issue 41.

4.0.1 (2012-08-26)

- Worked around test failure due to overflow to long on 32-bit systems.
- Renamed TimeStamp extension module to avoid clash with pure-Python timestamp module on case-insensitive filesystems.

N.B: the canonical way to import the TimeStamp class is now:

from persistent.timestamp import TimeStamp

which will yield the class from the extension module (if available), falling back to the pure-Python reference implementation.

4.0.0 (2012-08-11)

Platform Changes

- Added explicit support for Python 3.2 and PyPy.
  - Note that the C implementations of Persistent, PickleCache, and Timestamp are not built (yet) on these platforms.
- Dropped support for Python < 2.6.

Testing Changes

- 100% unit test coverage.
- Removed all ZODB-dependent tests:
  - Rewrote some to avoid the dependency
  - Cloned the remainder into new ZODB.tests modules.
- Refactored some doctests as unittests.
- Completed pure-Python reference implementations of 'Persistent', 'PickleCache', and 'TimeStamp'.
- All covered platforms tested under tox.
- Added support for continuous integration using tox and jenkins.
- Added setup.py dev alias (installs nose and coverage).
- Dropped dependency on zope.testing / zope.testrunner: tests now run with setup.py test.

Documentation Changes

- Refactored many doctests as Sphinx documentation (snippets are exercised via 'tox').
- Added setup.py docs alias (installs Sphinx and repoze.sphinx.autointerface).
https://pypi.org/project/persistent/
In today's post I want to present the class library ILNumerics and share my experiences with it. ILNumerics is a class library for .Net with the aim of allowing faster and easier implementation of numerical calculations, plots, etc. Of course one could in theory do this with .Net means alone, but that could turn out to be very tedious (just think of the representation of vectors, multiplying matrices, etc.). ILNumerics provides a language extension very similar to Matlab, with which complex algorithms can be implemented efficiently. In the following I will, as stated, give an introduction to this topic and review the product, which costs money but which I rate overall as very good.

Let us start with the installation: the library can be downloaded via the homepage. Only a trial version is available; this fact and the price are the only negative points in my opinion. The program is relatively costly, starting from 89 Euros per month, so probably not suited for the hobby user. But for universities, companies, etc., I can see ILNumerics as a real alternative to Matlab, especially since I personally like C# and .Net a lot. A free version for students would be highly appreciated. After the installation, the trial version can be tested or the full version activated. More information can be found on the support page, where you will also find a tutorial and more precise documentation, which also served me as a reference.

The library basically contains two components: the computation engine and the visualization engine. First I will explain the usage of the computation engine, then that of the visualization engine. To include the computation engine easily, we create a new Windows Forms project and then click on Project - Add New Item in the menu. In the new window we select "Computing Module" and thereby add a new .cs file in the form of a computing module to the project.
The module already contains some example methods that demonstrate the usage of ILNumerics; these can be deleted. We keep the Main method, however, since we want to use the computing module as the entry point of the program. We have to configure this in the project, because the generated Form class also contains a Main method: in the project properties, select the computing module as the Application - Startup Object. The code of the file Computing Module1.cs should look as follows:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using ILNumerics;
using System.IO;

namespace IlNumericsTest
{
    public class Computing_Module1 : ILMath
    {
        public static void Main(params string[] parameter)
        {
        }
    }
}
```

Let us start with some easy operations to get acclimatized. Matrices are managed by the ILNumerics class ILArray. The operators +, -, *, / perform the expected operations: matrix addition and subtraction, and element-wise multiplication and division. The following code creates three matrices of size 10 x 10, the first filled with ones, the second with zeros, and the third with random values between 0 and 1. Then A + C and A * (-1) are printed to the console:

```csharp
ILArray<double> A = ones(10, 10);
ILArray<double> B = zeros(10, 10);
ILArray<double> C = rand(10, 10);
Console.WriteLine(A + C);
Console.WriteLine(-A);
```

Now for some more complex examples: I want to show some operations on graphs and networks. If you do not know graphs, see Wikipedia; they are a frequently used data structure in computer science and mathematics. The research area of Web Science in particular deals with the investigation and analysis of networks (like the internet); graphs are used to represent the data, which can reach huge sizes. Because of the thematic relevance (Matlab is often used in this area, for example) I decided to show some examples from it, such as Google's PageRank algorithm.
But first, some code for reading in data:

```csharp
var file = File.ReadAllText("Graph.txt");
ILArray<double> AdjMatrix = csvread<double>(file);
```

In the first line the file is read into a string, which is then parsed into a matrix via the ILNumerics function csvread(). csvread() splits the read values by commas (for example). The input file should contain the comma-separated adjacency matrix of a graph, which contains a 1 at position i, j if nodes i and j are connected by an edge, and 0 otherwise. For a graph with 4 nodes the file could look as follows:

0, 1, 0, 0
1, 0, 1, 1
0, 1, 0, 1
0, 1, 1, 0

Now we want to determine the Laplacian matrix from the adjacency matrix; it contains the degree of node i in the i-th diagonal entry and -1 at position i, j if nodes i and j are connected. For this we create a new function. Since ILNumerics puts special emphasis on performance and memory management (as it must, if it wants to handle big data), there are a few things to watch. Besides the ILArray type presented above, there are also ILInArray, ILOutArray and ILRetArray (and, in general, array types other than these as well). For details I refer to the documentation; briefly, ILInArray is used for input parameters, is immutable, and is deleted immediately after leaving the corresponding scope (for memory and performance reasons). Because of this we have to enclose every function in a new scope containing the input parameters. The function for calculating the Laplacian looks as follows:

```csharp
public static ILRetArray<double> CalcLaplacian(ILInArray<double> AdjMatrix)
{
    using (ILScope.Enter(AdjMatrix))
    {
        ILArray<double> Degrees = sum(AdjMatrix);
        ILArray<double> LaplacianMatrix = Degrees * eye(AdjMatrix.Length, AdjMatrix.Length);
        LaplacianMatrix = LaplacianMatrix - AdjMatrix;
        return LaplacianMatrix;
    }
}
```

The input parameter is the adjacency matrix. In the first line inside the scope, the degrees of the nodes are calculated via the function sum().
This sums over all columns of the matrix and outputs a vector with the result, which is exactly the sequence of degrees. eye() creates a matrix with ones on the diagonal; multiplying it by the degree vector and subtracting the adjacency matrix gives us the desired matrix.

As a next example, we calculate the density of a network. This is an important measure in networks: it is calculated by dividing the number of present edges by the number of possible edges, and thus indicates how dense and centralized a network is. With a double summation over the matrix dimensions the density can be calculated simply:

```csharp
public static ILRetArray<double> CalcDensity(ILInArray<double> AdjMatrix)
{
    using (ILScope.Enter(AdjMatrix))
    {
        ILArray<double> NrLinks = sum(sum(AdjMatrix));
        return (double)NrLinks / (AdjMatrix.Length * (AdjMatrix.Length - 1));
    }
}
```

Now to an algorithm which made its inventors rich and world-famous, but which is in principle so simple that we can implement it here (in principle - of course there is much more to it than just these lines of code). I am talking about Google's PageRank algorithm, which was for a long time the basis of the ranking of websites in Google search. The algorithm assigns each page a PageRank (a popularity), which is made up of the PageRanks of the pages linking to that page. The formula is explained in the linked Wikipedia article; in practice, though, the PageRank is approximated iteratively: start with an arbitrary PageRank vector P, then calculate the new PageRank as P = (1 - d) * e + d * A'^T * P. Here d is a damping factor (e.g. 0.5), e is the vector of ones, and A' is the matrix that results from the adjacency matrix A by normalizing each row by its degree and substituting all all-zero rows with rows whose entries are 1/n (n = number of nodes). We repeat this until the old and new PageRank are sufficiently close, that is, until the procedure no longer changes much with further iterations.
In ILNumerics this looks as follows:

```csharp
public static ILRetArray<double> CalcPageRank(ILArray<double> AdjMatrix)
{
    using (ILScope.Enter(AdjMatrix))
    {
        ILArray<double> Degrees = sum(AdjMatrix.T);
        double epsilon = 0.00001;
        double d = 0.5;
        for (int i = 0; i < AdjMatrix.Length; i++)
        {
            for (int j = 0; j < AdjMatrix.Length; j++)
            {
                if (Degrees[i] != 0)
                    AdjMatrix[i, j] /= Degrees[i];
            }
            // Replace an all-zero row i with entries 1/n.
            if (AdjMatrix[i, full].Equals(zeros(1, AdjMatrix.Length)))
                AdjMatrix[i, full] = ones(1, AdjMatrix.Length) / AdjMatrix.Length;
        }
        ILArray<double> POld = zeros(AdjMatrix.Length);
        ILArray<double> PNew = ones(AdjMatrix.Length);
        do
        {
            POld = PNew;
            PNew = (1 - d) * ones(AdjMatrix.Length) + d * multiply(AdjMatrix.T, POld);
        } while (norm(POld - PNew, 1) > epsilon);
        return PNew;
    }
}
```

So, with the previously described method, the adjacency matrix of an arbitrary graph is first read in; the function then calculates the PageRank for every node in the graph and returns the vector of PageRanks. As one can see, the selection of submatrices is also possible, as in Matlab: Matrix["a:b", "c:d"] selects the submatrix consisting of rows a - b and columns c - d of Matrix.

However, PageRank should not be implemented like this in ILNumerics. I wrote the code in this style first to give an intuitive introduction, and will now show how to make better use of the possibilities of ILNumerics. In general the use of built-in ILNumerics functions is recommended over hand-rolled ones, such as looping with for-loops over big matrices. Thus, instead of looping over the matrix and testing the rows for 0, we use the type ILLogical. This gives us a vector describing which elements of a given vector fulfill a certain condition. We choose the condition Degrees == 0, select the submatrix with the corresponding rows, and replace those rows with 1/n. We do the same to divide the entries of nodes with nonzero degree by their degree.
Furthermore, we pull unnecessary operations out of the loop, such as generating the vector of ones and transposing the matrix. The resulting code is much simpler and quicker:

```csharp
public static ILRetArray<double> CalcPageRank(ILArray<double> AdjMatrix)
{
    using (ILScope.Enter(AdjMatrix))
    {
        ILArray<double> Degrees = sum(AdjMatrix, 1);
        double epsilon = 0.00001;
        double d = 0.5;
        ILLogical dummy = Degrees == 0;
        AdjMatrix[dummy, full] = 1.0 / AdjMatrix.Length;
        dummy = Degrees != 0;
        AdjMatrix[dummy, full] = AdjMatrix[dummy, full] / Degrees[dummy];
        AdjMatrix = AdjMatrix.T;
        ILArray<double> POld = zeros(AdjMatrix.Length);
        ILArray<double> PNew = ones(AdjMatrix.Length);
        ILArray<double> ILOnes = (1.0 - d) * ones(AdjMatrix.Length);
        do
        {
            POld = PNew;
            PNew = ILOnes + d * multiply(AdjMatrix, POld);
        } while (norm(POld - PNew, 1) > epsilon);
        return PNew;
    }
}
```

Implemented in Matlab, this algorithm takes about 5 s for a graph of 20 MB (about 3000 nodes), and the same in ILNumerics. Such short running times do not, of course, give a reliable performance measure, but I think the two tools prove to be comparable. According to the producers, ILNumerics has a somewhat bigger overhead but is supposed to be faster for longer running times.

To conclude the topic of the computation engine, here is an example of how to solve a linear equation system:

```csharp
ILArray<double> A = ILMath.zeros(3, 3);
A["0;:"] = new double[] { 1, 2, 1 };
A["1;:"] = new double[] { 3, 4, 0 };
A["2;:"] = new double[] { -1, 0, 1 };
ILArray<double> B = new double[] { 5, 4, 7 };
ILArray<double> x = ILMath.linsolve(A, B);
```

Now let us come to the visualization engine. Here there are of course many, many possibilities and settings to create the desired plots, so I will only give a brief introduction and refer to the online documentation for the rest. As with the computation engine, we follow the quick start guide and create a new Windows Forms project.
Then we add a new Plotting Form via Project - Add New Item. In the file Program.cs we change the line Application.Run(new Form1()); to Application.Run(new Plotting_Form1()); to tell the program to start with the plotting form. This form already contains example plots, but we delete that code first to create a new plot from scratch. The plotting form provides a control named ilPanel1; in its Load() handler we create a simple line plot with 6 points when the form starts:

```csharp
private void ilPanel1_Load(object sender, EventArgs e)
{
    var scene = new ILScene();
    // create some data
    ILArray<float> A = new float[] { 1, 2, 3, 4, -1, -2 };
    // add a plot cube with a line plot
    scene.Add(new ILPlotCube {
        new ILLinePlot(A)
    });
    ilPanel1.Scene = scene;
}
```

The basic components are so-called scenes. We create a new one here and add a new line plot with the given data to it. By default the plotting area can be moved, rescaled, and so on; for this it is redrawn. This shall be enough for an introduction - the homepage shows some further interesting plots which can also be implemented quickly.
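For readers without a .NET environment who want to experiment with the PageRank iteration from this article, here is a short plain-Python sketch. It is my own addition (not part of the original ILNumerics code) and implements the same update rule, P_new = (1 - d) * e + d * A'^T * P_old:

```python
def pagerank(adj, d=0.5, eps=1e-5):
    """Iterate P_new = (1 - d) * e + d * A'^T * P_old until convergence.

    adj is an adjacency matrix given as a list of lists of 0/1 values.
    """
    n = len(adj)
    # Build A': rows normalized by degree, all-zero rows replaced by 1/n.
    a = []
    for row in adj:
        deg = float(sum(row))
        if deg == 0:
            a.append([1.0 / n] * n)
        else:
            a.append([v / deg for v in row])
    p_old, p_new = [0.0] * n, [1.0] * n
    # Repeat until the L1 distance between iterations drops below eps.
    while sum(abs(o, ) if False else abs(o - w) for o, w in zip(p_old, p_new)) > eps:
        p_old = p_new
        # (A'^T p)_j = sum_i a[i][j] * p_old[i]
        p_new = [(1 - d) + d * sum(a[i][j] * p_old[i] for i in range(n))
                 for j in range(n)]
    return p_new
```

Running it on the 4-node example graph from above assigns the highest rank to node 1, the node with the most incident edges.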
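As a quick cross-check of the linear equation system solved with linsolve() above (again my own addition): solving the three equations by hand gives x = (-8, 7, -1), and substituting the solution back into the equations is only a few lines in any language:

```python
# The system from the ILNumerics example above:
#   x1 + 2*x2 + x3 = 5
#   3*x1 + 4*x2    = 4
#   -x1 + x3       = 7
A = [[1, 2, 1], [3, 4, 0], [-1, 0, 1]]
b = [5, 4, 7]
x = [-8, 7, -1]  # solution obtained by solving the system by hand
# The residual A*x - b should be the zero vector.
residual = [sum(A[i][j] * x[j] for j in range(3)) - b[i] for i in range(3)]
assert residual == [0, 0, 0]
```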
http://csharp-tricks-en.blogspot.de/2015/07/
1 Creating a Web Part Using the MOSS BIN Directory

We will start from the Visual Studio 2005 New Project window. In it, we select C# Windows application and then Class Library. We will name the project "HELLOBIN".

1.1 Adding a Reference

The first thing we will do in the class library is add a reference to the namespace System.Web. This is done by right-clicking on the References link in the Solution Explorer and then clicking on "Add Reference...". This opens the Add Reference pop-up window. We select the System.Web namespace, as encircled in the figure below, and click OK. This adds the System.Web namespace to our project; the details can be seen under the References link.

1.2 Renaming the Class

Next we rename the class to "HELLOBINCLASS". This can be done by right-clicking on the class name in the Solution Explorer. This opens a menu from which we select Rename and change the class name from "Class1" to "HELLOBINCLASS". As you change the name of the class, a prompt will appear, shown in the figure below, asking "Would you like to rename all references to the code element 'Class1'?" Click Yes.

1.3 Inheriting the Class

We will inherit our class from the class WebPart. The full path to the class WebPart is System.Web.UI.WebControls.WebParts.WebPart. Since we have already added the namespace System.Web.UI.WebControls.WebParts, we just write the name of the class WebPart for inheritance.

1.4 Overriding a Method

We will create a simple program for demonstration purposes. The code overrides the method Render. The exact code of the method is shown in the figure below.

1.5 Changing the Output Path

After writing the code, we change the output path setting of the project. For this, we right-click on the project name in the Solution Explorer and then click on Properties. A new window appears, in which we select the Build pane, as shown in the figure below.
In the Build pane, we set the output path to the BIN directory of MOSS. In our case, the path is C:\Inetpub\wwwroot\wss\VirtualDirectories\80\bin, which is encircled in the figure below.

1.6 Building the Project

We build by pressing F6, or by clicking Build in the menu and then Build Solution. The build generates the file HELLOBIN.dll in the BIN directory.

1.7 Registering the Web Part as a Safe Control

Next we add a SafeControl entry for the assembly to the web.config file, encircled in the figure below:

<SafeControl Assembly="HELLOBIN" Namespace="HELLOBIN" TypeName="*" Safe="True" />

After writing the entry, don't forget to save the web.config file. Be careful while making changes to the web.config file, because if this file gets corrupted, I am afraid you will be in big trouble. A good workaround is to save a copy of the file before making changes, so that you still have the original copy of the file intact.

1.8 Adding the Web Part to the Web Part Gallery

Now we demonstrate the next phase: adding the web part to the SharePoint site. For this, click on "Site Actions" on your SharePoint main page and click on "Modify All Site Settings" by moving through "Site Settings". The Site Settings page will open. Click on the link "Web Parts" under "Galleries". The Web Part Gallery page will open. Click on "New" to add the web part to the gallery. Clicking on "New" brings up the list of all the web parts, and our web part HELLOBINCLASS will be available in the list. Select the web part "HELLOBIN.HELLOBINCLASS" (the name of the namespace followed by the name of the class) and click on the button "Populate Gallery", as shown in the figure below.

1.9 Adding the Web Part to the Site

After adding the web part to the web part gallery, you can easily add it to the desired site. For this, click on Site Actions and then on Edit Page. The page opens in edit mode. Click on the link "Add a Web Part" in the zone where you want to add the web part. This opens a new window containing the list of all the web parts.
From the list, select the web part you have created. After adding the web part, click on "Exit Edit Mode" and the web part will be working on the site.
http://farhanfaiz.wordpress.com/2008/01/04/creating-a-web-part-using-moss-bin-directory/
Dear Diary, an Encrypted Command-Line Diary with Python

In my last post, I wrote about how to work with encrypted SQLite databases with Python. As an example application of these libraries, I showed some code fragments for a fictional diary program. Because I was thinking the examples directory of the peewee repo was looking a little thin, I decided to flesh out the diary program and include it as an example. In this post, I'll go over the diary code in the hopes that you may find it interesting or useful. The code shows how to use the peewee SQLCipher extension. I've also implemented a simple command-line menu loop. All told, the code is less than 100 lines!

Getting started

To follow along at home, you'll need to install a few libraries. The encryption is handled by SQLCipher, an open-source library that securely encrypts SQLite databases using AES-256 with cipher-block chaining. There are also Python bindings which expose a db-api 2.0 compatible API. For detailed instructions on installation, refer to my previous post, but if you want things to just work, you can run the following:

$ pip install pysqlcipher

You will also need to install peewee:

$ pip install peewee

The database layer

We'll be using peewee to securely store and manage the database of diary entries. To get started, we will define our database connection and a model class representing the table of entries.

```python
#!/usr/bin/env python
import datetime

from peewee import *
from playhouse.sqlcipher_ext import SqlCipherDatabase

# Defer initialization of the database until the script is executed from the
# command-line.
db = SqlCipherDatabase(None)

class Entry(Model):
    content = TextField()
    timestamp = DateTimeField(default=datetime.datetime.now)

    class Meta:
        database = db

def initialize(passphrase):
    db.init('diary.db', passphrase=passphrase, kdf_iter=64000)
    Entry.create_table()
```

The Python standard library includes a module, getpass, for reading passwords from stdin without echoing the characters.
We will use that module to securely accept the passphrase for unlocking the diary. In the application's entry-point, we'll collect the passphrase, initialize the database, and enter the main menu loop (defined later in the post).

```python
#!/usr/bin/env python
from getpass import getpass
import sys

from peewee import *
from playhouse.sqlcipher_ext import SqlCipherDatabase

# ... Database and model code from previous code snippet ...

if __name__ == '__main__':
    # Collect the passphrase using a secure method.
    passphrase = getpass('Enter password: ')
    if not passphrase:
        sys.stderr.write('Passphrase required to access diary.\n')
        sys.stderr.flush()
        sys.exit(1)

    # Initialize the database.
    initialize(passphrase)
    menu_loop()
```

Interactive menu loop

The diary will feature an interactive menu loop. Having interactive menus in your scripts is typically frowned upon, because it hurts composability and is not unix-like. For the diary, though, it seemed like a good idea to me, as it allows you to enter your password once and then have many interactions with the app. For simplicity, the menu will allow us to perform three operations:

- Add a new entry
- List entries, ordered newest to oldest
- Search entries by keyword

Here is the structure for the menu loop and the functions the menu will delegate to:

```python
#!/usr/bin/env python
from collections import OrderedDict
import datetime
from getpass import getpass
import sys

from peewee import *
from playhouse.sqlcipher_ext import SqlCipherDatabase

# ... Database definition and model code ...

def menu_loop():
    choice = None
    while choice != 'q':
        for key, value in menu.items():
            print('%s) %s' % (key, value.__doc__))
        choice = raw_input('Action: ').lower().strip()
        if choice in menu:
            menu[choice]()

def add_entry():
    """Add entry"""

def view_entries(search_query=None):
    """View previous entries"""

def search_entries():
    """Search entries"""

menu = OrderedDict([
    ('a', add_entry),
    ('v', view_entries),
    ('s', search_entries),
])

if __name__ == '__main__':
    # ... Application entry-point code ...
```
The menu loop is called after the database is initialized, then runs in a loop, displaying the menu and delegating to one of the three menu functions. The program ends when the user types q.

Let's start by defining the add_entry function. This function will accept multiple lines of input from the user, reading until an EOF is received (Ctrl+d on my computer). After the user has entered their text, the program will prompt the user whether they wish to save the entry and automatically return to the menu loop.

```python
def add_entry():
    """Add entry"""
    print('Enter your entry. Press ctrl+d when finished.')
    data = sys.stdin.read().strip()
    if data and raw_input('Save entry? [Yn] ') != 'n':
        Entry.create(content=data)
        print('Saved successfully.')
```

Impressively, that's all the code it takes! The call to sys.stdin.read() will automatically read up to an EOF. Since menu_loop delegated to the function, when the function exits we will be back in the loop, so no additional code is required.

Now let's define the view_entries function which will display previously-written entries. We have a couple options here -- we could get fancy and implement a less-style paging system, or we could go simple and just print every single entry in a massive wall-of-text. I chose to take the middle road -- display one entry at a time, and let the user continue to the next one or break out of the loop. To accomplish this, view_entries will query entries ordered newest-to-oldest, then in a loop, each entry will be displayed and the user will then be prompted to continue to the next, or to quit. As with the add_entry function, as soon as the function exits, we will be back in the menu loop. In order to re-use this logic for the search action, I've written view_entries to accept an optional search query.
```python
def view_entries(search_query=None):
    """View previous entries"""
    query = Entry.select().order_by(Entry.timestamp.desc())
    if search_query:
        query = query.where(Entry.content.contains(search_query))

    for entry in query:
        timestamp = entry.timestamp.strftime('%A %B %d, %Y %I:%M%p')
        print(timestamp)
        print('=' * len(timestamp))
        print(entry.content)
        print('n) next entry')
        print('q) return to main menu')
        if raw_input('Choice? (Nq) ') == 'q':
            break
```

As alluded to in the previous paragraph, search_entries will simply delegate to the view_entries function after collecting the search query from the user. Here is the code:

```python
def search_entries():
    """Search entries"""
    view_entries(raw_input('Search query: '))
```

And with that, our secret diary program is complete!

Ideas for improving the diary

As an exercise, here are some features that might be cool to add:

- Option to delete an entry.
- Smarter pagination for lists of entries.
- Use terminal colors to make the app more visually appealing. Blessings looks pretty cool for this.
- Option to edit entries.
- Calendar-type view, or a way to query entries by date.
- Web front-end?

Thanks for reading

If you'd like to browse the complete code, you can find it here. I hope you enjoyed reading this post! Feel free to leave any comments or questions using the form below.

- SQLCipher and Python overview
- SQLCipher project page
- pysqlcipher source code
- Peewee SQLCipher extension
- Peewee ORM documentation

Here are some blog posts on related topics you might be interested in:

- Using SQLite's full-text search engine with Python
- Guide to extending SQLite with Python through user-defined functions and aggregates, and even building your own SQLite extension modules
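One of the improvement ideas above -- smarter pagination for lists of entries -- can be sketched with a small helper. This is my own addition, not part of the original post, and the function name is hypothetical:

```python
def paginate(entries, page_size=5):
    """Split a list of entries into pages of at most page_size items,
    so a viewer can show one page at a time instead of one entry."""
    return [entries[i:i + page_size]
            for i in range(0, len(entries), page_size)]

# Example: 12 entries in pages of 5 gives pages of sizes 5, 5, and 2.
pages = paginate(list(range(12)), page_size=5)
```

A view function could then loop over pages rather than individual entries, prompting between pages just as view_entries prompts between entries.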
https://charlesleifer.com/blog/dear-diary-an-encrypted-command-line-diary-with-python/
Hello DaniWeb users,

I am new to the forum, however from what I have seen it is exceptionally good! I have searched the forums already and found various questions relating to a similar problem as mine, however I am unable to work out how I can amend this for my problem. During web research I found various suggestions such as while loops (which I can't see how to fit into my piece of code), goto (which apparently is a big no) and switch.

I am trying to make a text-based adventure game to run in CMD as my first more advanced C++ project. I am currently at the stage of creating the menu options, however I am struggling to get it to return to the main menu once the user has selected their text colour through GameOptions. The code which I have so far for the menu is:

```cpp
#include <iostream>
#include <string>
#include <windows.h>

int main()
{
    using namespace std;
    string Option;
    cout << "\t********************************************************\n";
    cout << "\t*              Welcome to Dragon Slayer                *\n";
    cout << "\t*           Please select your option now              *\n";
    cout << "\t*                                                      *\n";
    cout << "\t*       PlayGame      GameOption      Quit             *\n";
    cout << "\t********************************************************\n\n\n";
    cin >> Option;

    if (Option == "PlayGame")
    {
        string Name;
        cout << "Great, let's get started!\n\n";
        cout << "What is your name?\n";
        cin >> Name;
        cout << "\n\nHello " << Name << ", how are you doing? My name is Alexander.";
    }
    if (Option == "GameOption")
    {
        string Colour;
        cout << "You are now able to change the text colour!\n\n";
        cout << "Green\n";
        cout << "Yellow\n\n";
        cin >> Colour;
        if (Colour == "Green")
            system("Color 0a");
        if (Colour == "Yellow")
            system("Color 0e");
        system("cls");
        cout << "Sample Text\n\n";
    }
    if (Option == "Quit")
    {
        cout << "What a shame, I was looking forward to it\n";
        return 0;
    }
}
```

How would I get it so that once the user has selected their colour in the GameOption branch, it goes back to the original menu whilst keeping the colour they chose? Thank you
https://www.daniweb.com/programming/software-development/threads/390508/c-return-to-point-in-programme
Spring Interview Questions

Ever since its release back in October of 2002, Spring Framework has succeeded in becoming a top application framework for Java developers. In fact, it secures a place among the leading backend web development frameworks. Hence, opting for a career in Spring Framework is very lucrative.

Top Spring Interview Questions and Answers

Spring Framework has a wide array of concepts that one needs a working understanding of in order to stay ahead of the game. If you're preparing for a Spring job interview, then here are the top Spring interview questions that you must know the answers to:

Question: Please list down the various features of Spring Framework and the advantages of using the same.

Answer: Notable features of Spring Framework include a lightweight core, Inversion of Control (IoC), Aspect-Oriented Programming (AOP), an MVC web framework, declarative transaction management, and easy integration with other frameworks such as Hibernate. Its main advantages are loose coupling, easier testability, and reduced boilerplate code.

Question: How many modules does the Spring Framework have?

Answer: Spring consists of about 20 modules, grouped into the Core Container, Data Access/Integration, Web, AOP, Instrumentation, Messaging, and Test.

Question: What components does a Spring application have?

Answer: A Spring application typically consists of an interface that defines the functionality, a bean class with properties and their setter and getter methods, Spring AOP for the functionality of cross-cutting concerns, a bean configuration file containing the bean definitions, and a user program that uses the functions.

Question: What do you understand by the Spring IoC Container? Explain their types.

Answer: The Spring IoC container creates objects, wires them together, configures them, and manages their complete lifecycle. There are two types: the BeanFactory container and the ApplicationContext container.

Question: Please explain the Dependency Injection in Spring. In how many ways can the same be used?

Answer: With Dependency Injection, you do not create your objects directly but describe how they should be created; the container then wires the required dependencies together. It can be used in two ways: constructor injection and setter injection.

Question: Can you differentiate between ApplicationContext and BeanFactory in Spring?

Answer: BeanFactory is the basic container and instantiates beans lazily, while ApplicationContext extends BeanFactory, instantiates singleton beans eagerly, and adds enterprise features such as internationalization, event publication, and AOP integration.

Question: How is the configuration metadata provided to the Spring container?

Answer: Configuration metadata can be provided as XML-based configuration, annotation-based configuration, or Java-based configuration (@Configuration classes with @Bean methods).

Question: What do you understand by Spring Beans? How many bean scopes are supported by Spring Framework?

Answer: Spring Beans are the objects that are instantiated, assembled, and managed by the Spring IoC container. Spring supports five bean scopes: singleton, prototype, request, session, and global-session (the last three only in a web-aware context).

Question: Please explain the Bean lifecycle in Spring Bean Factory Container?

Answer: The container instantiates the bean, populates its properties, calls the aware interfaces, applies the BeanPostProcessor callbacks around the init method, and finally invokes the destruction callbacks when the container shuts down. A step-by-step list is given in a later question.

Question: What is a Spring Configuration File?

Answer: Since Spring is based on the concept of Dependency Injection, all the classes, interfaces, and their dependencies are stored in a file termed the Spring configuration file.
It is a .xml file. The Spring container uses this file to control the lifecycle of Spring beans. A bean is configured as:

```xml
<bean id = "..." class = "..." init-method = "...">
   <!-- bean dependencies and configurations -->
</bean>
```

Question: What are the benefits of IoC (Inversion of Control)?

Answer: The advantages are:

- No need to write extensive code on how services are created and how object references are obtained; everything can be achieved through simple configuration.
- New dependencies and services can be added just by adding a constructor or setter method.
- Code is easier to unit test, as it is designed as several components, and developers can inject their own objects and switch implementations.
- Loose coupling of components.
- Allows for lazy loading of objects and dependencies.

Question: What is the Bean life cycle in Spring Bean Factory Container?

Answer: The bean lifecycle is as follows:

- The bean is instantiated by the Spring container from the bean definition found in the Spring configuration file.
- Using dependency injection, all the properties specified in the bean definition are populated.
- If the bean implements the BeanNameAware interface, setBeanName() is called by the bean factory, passing the bean's ID.
- If the bean implements the BeanFactoryAware interface, setBeanFactory() is called, passing the bean factory instance.
- If any BeanPostProcessors are associated with the bean, their postProcessBeforeInitialization() methods are called.
- The init-method, if defined for the bean, is called.
- Lastly, the postProcessAfterInitialization() methods are called, if required.

Question: Explain inner beans in Spring.

Answer: Inner beans are beans that exist within the scope of another bean. The concept is similar to inner classes in Java. The inner bean is defined as the target inside the outer bean definition:
```xml
<bean id = "outerBean" class = "...">
   <property name = "target">
      <bean id = "innerBean" class = "..."/>
   </property>
</bean>
```

Question: What is annotation-based container configuration? Also, explain how to turn on annotation wiring in Spring Framework.

Answer: Instead of XML, bean wiring can be configured with annotations on the relevant class, method, or field declarations. Annotation wiring is not turned on by default; it is enabled by adding <context:annotation-config/> to the Spring configuration file.

Question: Please explain the various annotations supported by Spring.

Answer: Commonly used annotations include @Required, @Autowired, @Qualifier, @Component, @Service, @Repository, and @Controller.

Question: What do you mean by Spring DAO support?

Answer: The Spring DAO support eases working with data access technologies, such as JDBC, JDO, and Hibernate, in a consistent way. It allows coding without worrying about catching technology-specific exceptions and makes it easy to switch between persistence technologies.

Question: What classes does the JDBC API contain?

Answer:

- JdbcTemplate
- NamedParameterJdbcTemplate
- SimpleJdbcCall
- SimpleJdbcInsert
- SimpleJdbcTemplate

Question: How will you access Hibernate using Spring Framework?

Answer: Hibernate can be accessed using Spring Framework in the following two ways:

- Extending HibernateDAOSupport and then applying an AOP interceptor node
- Inversion of Control with a HibernateTemplate and Callback

Question: Enumerate the types of transaction management supported by Spring.

Answer: Spring supports programmatic transaction management (managing transactions in code) and declarative transaction management (separating transaction handling from business code via configuration or annotations).

Question: Please explain the AOP technique.

Answer: AOP or Aspect-Oriented Programming is a programming technique that allows programmers to modularize behavior that cuts across the typical division of responsibility, i.e. cross-cutting concerns. Logging and transaction management are examples of cross-cutting concerns.

Question: What is Advice in Spring? Explain its various types.

Answer: The action taken by an aspect at a particular join point is called Advice. Spring AOP supports five types: before, after, after-returning, after-throwing, and around advice.

Question: Could you draw a comparison between concern and cross-cutting concerns in Spring AOP?

Answer: While a concern is a behavior that the developer wants to have in a particular module of a Spring application, a cross-cutting concern is a concern that is applicable throughout the entire Spring application.

Question: What do you understand by the Spring MVC framework?

Answer: Spring MVC is Spring's web framework based on the Model-View-Controller pattern; it dispatches requests to handlers and supports configurable handler mapping, view resolution, and form handling.
Question: Please explain DispatcherServlet.

Answer: The DispatcherServlet is the front controller of Spring MVC: it receives every incoming HTTP request, dispatches it to the appropriate controller, and renders the resolved view.

Question: What is Spring?

Answer: Spring is an integrated framework that is used for developing enterprise applications in the Java language.

Question: Are there any differences between the Bean Factory and Application Context?

Answer: Yes. BeanFactory instantiates beans on demand and provides only basic dependency injection, while ApplicationContext pre-instantiates singleton beans and adds features such as internationalization, event propagation, and annotation-based configuration.

Question: What are the differences between Constructor Injection and Setter Injection?

Answer: Constructor injection supplies dependencies through constructor arguments, makes them mandatory, and does not allow partial injection; setter injection supplies them through setter methods, allows partial and optional injection, and can override a previously injected value.

Question: Define Autowiring in Spring.

Answer: Autowiring in Spring helps the programmer apply beans automatically without writing explicit injection logic. The various autowiring modes include the following:

- no
- byName
- byType
- constructor

Question: What are the different types of bean scopes in the Spring framework?

Answer: There are mainly five types of bean scopes in the Spring framework:

- Singleton
- Prototype
- Request
- Session
- Global session

Question: What is a Pointcut in Spring?

Answer: A Pointcut is an expression in Spring AOP that selects the join points at which advice should be applied.

Question: What are the different latest versions of the Spring framework?

Answer: The latest versions of the Spring framework are as follows:

- Spring 2.5
- Spring 3.0
- Spring 4.0

Question: What is a JoinPoint?

Answer: A JoinPoint is a point during the execution of a program and represents the method execution. It includes the execution of a method or even the handling of an exception.

Question: What are the differences between Spring AOP and AspectJ AOP?

Answer: Spring AOP is implemented with runtime proxies and supports only method-execution join points, whereas AspectJ performs compile-time or load-time weaving and supports a much richer set of join points, including field access and constructors.

Question: What is a proxy in Spring Framework?

Answer: A proxy in the Spring framework refers to an object created after applying advice to a particular target object.

Question: When are the target object and proxy objects the same?
Answer: The target object and proxy object are the same in the case of client objects.
Question: What is weaving in the Spring framework? Answer: Weaving in the Spring framework is the process of linking a particular aspect with other application types or objects so that an advised object is created. In Spring AOP it is performed at runtime.
Question: What is Spring Security? Answer: Spring Security is a separate module in the Spring framework that focuses on offering authentication and authorization methods that can be used in Java applications.
Question: What is Spring Boot? Answer: Spring Boot is a project that offers a pre-configured set of frameworks meant to reduce boilerplate configuration. It helps get Spring applications up and running with less code.
Question: What is Reactive Programming? Answer: Reactive programming is a non-blocking, event-driven programming style that scales with a small number of threads. Backpressure is a key ingredient: it ensures that producers do not overwhelm consumers.
Question: What are the benefits of Reactive Programming? Answer: There are various benefits of Reactive Programming, which include the following. - It helps increase the utilization of computing resources, including multicore and multi-CPU hardware. - It helps increase performance by reducing serialization.
Question: What are the important characteristics of a Reactive system? Answer: The important characteristics of a Reactive system include the following. - Message-driven. - Resilient. - Responsive. - Elastic.
Question: Define Spring WebFlux? Answer: Spring WebFlux is the reactive web framework of Spring. It acts as an alternative to Spring MVC and offers a highly scalable, non-blocking stack.
Question: Define Mono and Flux types? Answer: Mono and Flux are the two reactive types of Project Reactor, which ships with Spring Framework 5.
Mono represents a single asynchronous value, while Flux represents a stream of asynchronous values. Both implement the Publisher interface defined in the Reactive Streams specification.
Question: What is the basic difference between Mono and Flux in Spring? Answer: Mono implements Publisher and returns 0 or 1 elements, while Flux implements Publisher and returns N elements.
Question: What are the common features of Mono and Flux? Answer: The common features of Mono and Flux include the following. - They represent streams. - They can't be executed without consuming the stream using the subscribe method. - They are immutable, and calling their operators again creates a new instance of Flux or Mono.
Question: What is the difference between WebClient and WebTestClient? Answer: WebClient is the reactive, non-blocking HTTP client of Spring WebFlux, while WebTestClient wraps it with test-friendly assertions and can bind directly to a server or controller under test.
Question: Is Spring 5 compatible with older versions of Java? Answer: No, Spring 5 is not compatible with older versions of Java. It requires at least Java 8.
Question: Can Spring Boot allow Spring MVC or Spring WebFlux in the same application? Answer: Spring Boot allows either Spring MVC or Spring WebFlux, but only one at a time: MVC is a blocking paradigm and WebFlux is a non-blocking paradigm, so they cannot be used together.
Question: Can Spring MVC run on Netty? Answer: No, Spring MVC cannot run on Netty.
Question: Can Spring 5 integrate with JDK 9 modularity? Answer: Yes, Spring 5 can integrate with JDK 9 modularity. This can be shown as follows.
Step 1: Creating a new class:

package com.hello;
public class HelloWorld {
    public String sayHello(){
        return "HelloWorld";
    }
}

Step 2: Creating a new module (module-info.java):

module com.hello {
    exports com.hello;
}

Step 3: Creating a new Java project that requires the module:

module com.hello.client {
    requires com.hello;
}

Step 4: Testing the new module:

public class HelloWorldClient {
    public static void main(String[] args){
        HelloWorld helloWorld = new HelloWorld();
        System.out.println(helloWorld.sayHello());
    }
}

Conclusion

So, that completes the list of the top Spring interview questions. These questions will surely help you buckle up for your Spring interview. Looking for more Spring interview questions? Here is a great Udemy course for the same: Spring Interview Questions Preparation Course. Here is a great book that helps you prepare well for any programming interview: Cracking the Coding Interview: 189 Programming Questions and Solutions. All the very best! Be sure to check out these best Spring tutorials to further refine your Spring knowledge. People are also reading: How to control the bean life cycle in Spring? What is getBean in Spring? What is Spring IoC? What are Spring annotations? What is Spring autowiring? What are Spring beans? Honestly - this post is EVERYTHING that is wrong with technical interviews. If you want to hire quality staff - DO NOT use these low grade questions. Ask about high level concepts, learning, application and values. Not this rubbish.
https://hackr.io/blog/spring-interview-questions
Zope 2.7.2 installation

This Dockerfile installs Python 2.3.5, the MySQL database connector, Python LDAP, and Zope 2.7.2 from source packages. For convenience, the packages are included in the directory, as they are 10 years old. The installation expects the Zope instance to be located in /var/local/website in the container namespace. This can then be mapped to whatever you want in the docker-compose.yml file. The Python interpreter is installed in /usr/local/bin/python and Zope in /usr/local/zope. If the container doesn't find an etc/zope.conf file, it creates a new instance in /var/local/website. You can set the user id to run Zope as by setting the environment variable USERID to a numeric value. If not set, it defaults to 600.

Configuration

In etc/zope.conf, set the port of the embedded HTTP service to 8080.

<http-server>
  # valid keys are "address" and "force-connection-close"
  address 8080
  # force-connection-close on
</http-server>

In the scripts under bin, make sure that the Python interpreter is /usr/local/bin/python and ZOPE_HOME is /usr/local/zope.

Sources

Building

To build the container locally, run docker-compose build. When you push the changes to GitHub, Docker Hub will build and publish the image at eeacms/zope-2-7-2:latest
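As an illustration of that mapping, a minimal docker-compose.yml might look like the sketch below. The service name and host path are assumptions for the example; only the image name, the USERID variable, port 8080, and /var/local/website come from the description above.

```yaml
version: "2"
services:
  zope:                                  # service name is an assumption
    image: eeacms/zope-2-7-2:latest
    environment:
      USERID: "600"                      # uid to run Zope as; defaults to 600
    volumes:
      - ./website:/var/local/website     # host path is an assumption
    ports:
      - "8080:8080"                      # matches the address in etc/zope.conf
```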
https://hub.docker.com/r/eeacms/zope-2-7-2/
Class to read/write ascii text. #include <vtkASCIITextCodec.h> A virtual class interface for codecs that readers/writers can rely on. Definition at line 43 of file vtkASCIITextCodec.h. Definition at line 46 of file vtkASCIITextCodec.h. Return 1 if this class is the same type of (or a subclass of) the named class. Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h. Reimplemented from vtkTextCodec. The name this codec goes by - should match the string the factory will take to create it. Reimplemented from vtkTextCodec. Is the given sample valid for this codec? Reimplemented from vtkTextCodec. Iterate through the sequence represented by the stream assigning the result to the output iterator. The stream will be advanced to its end so subsequent use would need to reset it. Implements vtkTextCodec. Return the next code point from the sequence represented by the begin, end iterators advancing begin through however many places needed to assemble that code point. Implements vtkTextCodec.
https://vtk.org/doc/nightly/html/classvtkASCIITextCodec.html
ios_base& hex ( ios_base& str ); <ios>

Use hexadecimal base

Sets the basefield format flag for the str stream to hex. When basefield is set to hex, integral numerical values inserted into the stream are expressed in the hexadecimal base (radix 16). For input streams, extracted values are also expected to be expressed in the hexadecimal base when this flag is set. The basefield format flag can take any of the following values (each with its own manipulator): dec, hex, and oct.

// modify basefield
#include <iostream>
using namespace std;

int main () {
  int n;
  n=70;
  cout << hex << n << endl;
  cout << dec << n << endl;
  cout << oct << n << endl;
  return 0;
}

Output:

46
70
106
http://www.cplusplus.com/reference/iostream/manipulators/hex/
I wrote a sketch in Processing 3.5.4 that uses a collection of png and mp4 files as inputs, making use of the Video library for videos. On macOS, it runs perfectly. On Windows 10, however, it fails to load videos, either giving the error “Seek operation failed” when calling any Movie functions, like loop(), or simply returning blank pixels. After substantially reducing the sketch to its barest-bones operations, I found that in default renderer mode (no renderer specified in size()), the video files are read perfectly. If I specify any other renderer (P2D or P3D), they fail to load. I unfortunately need to use P3D for the sketch. Is this a known issue? Multiple other people using my sketch experienced this issue on their machines as well, so I don't believe it's specific to my machine's specifications, but could this be an OpenGL issue? In case it is useful, I'm including the pared-down code below. This particular portion simply reads the first frame of the video to create a thumbnail image. Use any mp4 file to test, placed inside the “data” folder within the sketch. Thanks!

import processing.video.*;

Layer layer;

void setup() {
  size(900,900); //Change this to P3D to cause an error
  layer = new Layer(this, "1105_L4_objects_RAB.mp4");
}

void draw() {
  image(layer.frame,0,0);
}

void movieEvent(Movie m) {
  m.read();
}

class Layer {
  PImage frame;
  Movie m;

  Layer(PApplet parent, String f_) {
    m = new Movie(parent, f_);
    m.loop();
    m.read();
    m.pause();
    m.loadPixels();
    int w = m.width;
    int h = m.height;
    frame = m.get(0, 0, w, h);
  }
}
https://discourse.processing.org/t/video-library-error-on-pc-with-non-default-renderers/25333
To continue where I stopped, the message_pause function looks like this:

private void message_pause(uint channel, uint count)
{
    int a = 0;
    // delay for first message
    // (4 - Ch) * Tm
    if (count == 0)
        a = 4 - (int)channel + 1;
    // next 2 messages
    // 5 * Tm
    else if (count == 1 || count == 2)
        a = 5;
    // last 2 messages
    // (6+2*Ch) * Tm
    else if (count == 3 || count == 4)
        a = 5 + ((int)channel + 1) * 2;
    // Tm = 16 ms (in theory 13.7 ms)
    System.Threading.Thread.Sleep(a * 16);
}

It is a bit easier to understand if you look at this picture. Each dot represents a signal sent, and each space the time you have to wait. The Lego protocol says you have to wait a minimum of 16 ms between 2 messages, as that is the maximum length of a message. How did they arrive at this magic number? Back to our protocol: we know that the frequency is 38 kHz, and that the structure of a message starts and stops with a start/stop bit composed of 6 IR pulses and 39 pauses. A low bit is 6 IR pulses and 10 pauses, a high bit 6 IR pulses and 21 pauses. So we can build a simple table to get the length in µs and the length in ushort: And we can also view the full size of a message. The minimum size (if all bits are low) and the maximum one (if all bits are high): So if you do the sum, you can see that the maximum length is 13,744 µs, which is 13.744 milliseconds and not 16 ms as Lego describes. But let's take the Lego recommendation there. As you can see, the maximum length of a message is also 522 ushort.
And that’s the perfect transition to have a look at the spi_send function:

private void spi_send(ushort code)
{
    try
    {
        ushort[] tosend = new ushort[522]; // 522 is the max size of the message to be sent
        ushort x = 0x8000;
        int i = 0;
        //Start bit
        i = FillStartStop(tosend, i);
        //encoding the 2 codes
        while (x != 0)
        {
            if ((code & x) != 0)
                i = FillHigh(tosend, i);
            else
                i = FillLow(tosend, i);
            x >>= 1; //next bit
        }
        //stop bit
        i = FillStartStop(tosend, i);
        MySerial.Write(tosend);
    }
    catch (Exception e)
    {
        Debug.Print("error spi send: " + e.Message);
    }
}

The code starts by creating a ushort buffer of 522 elements, the maximum size of a message. Then I create a ushort "x" (OK, I'm not creative with this little variable, but any developer has to use some crappy small names from time to time). I will use it as a mask to pick out which bit value to send. There are 3 functions called here: FillStartStop, FillHigh and FillLow. They look like this:

private int FillStartStop(ushort[] uBuff, int iStart)
{
    //Bit Start/stop = 6 x IR + 39 x ZE
    int inc;
    int i = iStart;
    //startstop bit
    for (inc = 0; inc < 6; inc++)
    {
        uBuff[i] = _high;
        i++;
    }
    for (inc = 0; inc < 39; inc++)
    {
        uBuff[i] = _low;
        i++;
    }
    return i;
}

private int FillHigh(ushort[] uBuff, int iStart)
{
    //Bit high = 6 x IR + 21 x ZE
    int inc;
    int i = iStart;
    //High bit
    for (inc = 0; inc < 6; inc++)
    {
        uBuff[i] = _high;
        i++;
    }
    for (inc = 0; inc < 21; inc++)
    {
        uBuff[i] = _low;
        i++;
    }
    return i;
}

private int FillLow(ushort[] uBuff, int iStart)
{
    //Bit low = 6 x IR + 10 x ZE
    int inc;
    int i = iStart;
    //Low bit
    for (inc = 0; inc < 6; inc++)
    {
        uBuff[i] = _high;
        i++;
    }
    for (inc = 0; inc < 10; inc++)
    {
        uBuff[i] = _low;
        i++;
    }
    return i;
}

Those functions take the buffer and the position to start filling as inputs. Then, depending on whether it is a start/stop, low or high bit, they fill the buffer accordingly. For example, the low bit is 6 IR pulses (_high = 0xFE00) and 10 pauses (_low = 0x0000). And they return the new start position.
Back to the spi_send function: after calling FillStartStop a first time, the while loop uses the "x" variable as a mask, calling FillHigh for a high bit and FillLow for a low bit, then shifting the mask for each bit. High bits have to be sent first.

while (x != 0)
{
    if ((code & x) != 0)
        i = FillHigh(tosend, i);
    else
        i = FillLow(tosend, i);
    x >>= 1; //next bit
}

When all the bits are transformed and the waveform is created, the signal is sent with MySerial.Write(tosend); The MySerial object is an SPI object:

private SPI MySerial;

Initialization is done like this:

try
{
    //Frequency is 38KHz in the protocol
    float t_carrier = 1 / 38.0f;
    //Reality is that there is a 2us difference in the output as there is always a 2us bit on on SPI using MOSI
    float t_ushort = t_carrier - 2e-3f;
    //Calculate the output frequency. Here = 16/(1/38 - 2e-3) = 658KHz
    uint freq = (uint)(16.0f / t_ushort);
    SPI.Configuration Device1 = new SPI.Configuration(
        Pins.GPIO_NONE, // SS-pin
        true, // SS-pin active state
        0, // The setup time for the SS port
        0, // The hold time for the SS port
        true, // The idle state of the clock
        true, // The sampling clock edge
        freq, // The SPI clock rate in KHz
        SPI_Devices.SPI1); // The used SPI bus (refers to a MOSI MISO and SCLK pinset)
    MySerial = new SPI(Device1);
}
catch (Exception e)
{
    Debug.Print("Error: " + e.Message);
}

Details on the math can be found in Mario's article. The math is very precise; the tolerance for the Lego protocol is about 30%. The official document gives the following ranges: Low bit range 316 – 526 us, High bit range 526 – 947 us, Start/stop bit range 947 – 1579 us. That said, staying close to the nominal values makes it work better. So we've seen how to create the waveform and send it over the MOSI output. Now, let's see how to use all this in a very simple way.
public class Program
{
    public static void Main()
    {
        LegoInfrared myLego = new LegoInfrared();
        for (int i = 0; i < 10; i++)
        {
            myLego.ComboMode(LegoInfrared.LegoSpeed.BLUE_FWD, LegoInfrared.LegoSpeed.RED_FWD, LegoInfrared.LegoChannel.CH1);
            System.Threading.Thread.Sleep(1000);
        }
    }
}

public class LegoInfrared

The LegoInfrared class contains all the functions and enums I've explained. Here the usage is extremely simple: I create the object as above and call a forward command 10 times for both the Blue and Red outputs on channel 1, waiting 1 second between calls. And the good news is that it really works. I'm of course using the electronic schema that Mario proposed. If you are interested in the full source code of the full protocol, just leave me a comment. More to come to show how to pilot it thru a web server and how to use it from another program. And again, depending on my inspiration, we will go a bit further and use sensors to raise events and be a bit smarter. Stay tuned. If you want to implement other protocols like RC5, you can go directly to Mario's blog and use his code. If you have a more complex protocol like the Lego one, you'll be able to reuse most of the implementation I've done. Let me know if you want the full code.

Is it possible to use a PC with a Lego RCX remote connected to the serial port to send the Lego PF IR signal to a receiver? Sorry, I made a mistake: not the Lego RCX remote, I mean the RCX serial IR tower. Thanks for the detailed write-up on coding Power Functions controls. It is the most detailed one I have found! Could you please share the full code? This is my first real Arduino project and I am trying to implement the combo PWM mode. Thanks!
https://blogs.msdn.microsoft.com/laurelle/2012/04/17/using-netduino-and-net-microframework-to-pilot-any-lego-power-function-thru-infrared-part-2/
sources / glibc / 2.29-3 / string / memmem.c

#ifndef _LIBC
# include <config.h>
#endif

#include <string.h>

#ifndef _LIBC
# define __memmem memmem
#endif

#define RETURN_TYPE void *
#define AVAILABLE(h, h_l, j, n_l) ((j) <= (h_l) - (n_l))
#define FASTSEARCH(S,C,N) (void*) memchr ((void *)(S), (C), (N))
#include "str-two-way.h"

#undef memmem

/* Hash character pairs so a small shift table can be used.  All bits of
   p[0] are included, but not all bits from p[-1].  So if two equal hashes
   match on p[-1], p[0] matches too.  Hash collisions are harmless and
   result in smaller shifts.  */
#define hash2(p) (((size_t)(p)[0] - ((size_t)(p)[-1] << 3)) % sizeof (shift))

/* Fast memmem algorithm with guaranteed linear-time performance.
   Small needles up to size 2 use a dedicated linear search.  Longer needles
   up to size 256 use a novel modified Horspool algorithm.  It hashes pairs
   of characters to quickly skip past mismatches.  The main search loop only
   exits if the last 2 characters match, avoiding unnecessary calls to memcmp
   and allowing for a larger skip if there is no match.  A self-adapting
   filtering check is used to quickly detect mismatches in long needles.
   By limiting the needle length to 256, the shift table can be reduced to 8
   bits per entry, lowering preprocessing overhead and minimizing cache
   effects.  The limit also implies worst-case performance is linear.
   Needles larger than 256 characters use the linear-time Two-Way
   algorithm.  */
void *
__memmem (const void *haystack, size_t hs_len,
          const void *needle, size_t ne_len)
{
  const unsigned char *hs = (const unsigned char *) haystack;
  const unsigned char *ne = (const unsigned char *) needle;

  if (ne_len == 0)
    return (void *) hs;
  if (ne_len == 1)
    return (void *) memchr (hs, ne[0], hs_len);

  /* Ensure haystack length is >= needle length.
*/
  if (hs_len < ne_len)
    return NULL;

  const unsigned char *end = hs + hs_len - ne_len;

  if (ne_len == 2)
    {
      uint32_t nw = ne[0] << 16 | ne[1], hw = hs[0] << 16 | hs[1];
      for (hs++; hs <= end && hw != nw; )
        hw = hw << 16 | *++hs;
      return hw == nw ? (void *)hs - 1 : NULL;
    }

  /* Use Two-Way algorithm for very long needles.  */
  if (__builtin_expect (ne_len > 256, 0))
    return two_way_long_needle (hs, hs_len, ne, ne_len);

  uint8_t shift[256];
  size_t tmp, shift1;
  size_t m1 = ne_len - 1;
  size_t offset = 0;

  memset (shift, 0, sizeof (shift));
  for (int i = 1; i < m1; i++)
    shift[hash2 (ne + i)] = i;
  /* Shift1 is the amount we can skip after matching the hash of the
     needle end but not the full needle.  */
  shift1 = m1 - shift[hash2 (ne + m1)];
  shift[hash2 (ne + m1)] = m1;

  for ( ; hs <= end; )
    {
      /* Skip past character pairs not in the needle.  */
      do
        {
          hs += m1;
          tmp = shift[hash2 (hs)];
        }
      while (tmp == 0 && hs <= end);

      /* If the match is not at the end of the needle, shift to the end
         and continue until we match the hash of the needle end.  */
      hs -= tmp;
      if (tmp < m1)
        continue;

      /* Hash of the last 2 characters matches.  If the needle is long,
         try to quickly filter out mismatches.  */
      if (m1 < 15 || memcmp (hs + offset, ne + offset, 8) == 0)
        {
          if (memcmp (hs, ne, m1) == 0)
            return (void *) hs;

          /* Adjust filter offset when it doesn't find the mismatch.  */
          offset = (offset >= 8 ? offset : m1) - 8;
        }

      /* Skip based on matching the hash of the needle end.  */
      hs += shift1;
    }
  return NULL;
}
libc_hidden_def (__memmem)
weak_alias (__memmem, memmem)
libc_hidden_weak (memmem)
https://sources.debian.org/src/glibc/2.29-3/string/memmem.c/
9 essential GNU binutils tools | Opensource.com

Imagine not having access to a software's source code but still being able to understand how the software is implemented, find vulnerabilities in it, and—better yet—fix the bugs. All of this in binary form. It sounds like having superpowers, doesn't it? You, too, can possess such superpowers, and the GNU binary utilities (binutils) are a good starting point. The GNU binutils are a collection of binary tools that are installed by default on all Linux distributions. Binary analysis is the most underestimated skill in the computer industry. It is mostly utilized by malware analysts, reverse engineers, and people working on low-level software. This article explores some of the tools available through binutils. I am using RHEL but these examples should run on any Linux distribution.

[~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
[~]#
[~]# uname -r
3.10.0-957.el7.x86_64
[~]#

Note that some packaging commands (like rpm) might not be available on Debian-based distributions, so use the equivalent dpkg command where applicable.

Software development 101

In the open source world, many of us are focused on software in source form; when the software's source code is readily available, it is easy to simply get a copy of the source code, open your favorite editor, get a cup of coffee, and start exploring. But the source code is not what is executed on the CPU; it is the binary or machine language instructions that are executed on the CPU. The binary or executable file is what you get when you compile the source code. People skilled in debugging often get their edge by understanding this difference.

Compilation 101

Before digging into the binutils package itself, it's good to understand the basics of compilation.
Compilation is the process of converting a program from its source or text form in a certain programming language (C/C++) into machine code. Machine code is the sequence of 1's and 0's that are understood by a CPU (or hardware in general) and therefore can be executed or run by the CPU. This machine code is saved to a file in a specific format that is often referred to as an executable file or a binary file. On Linux (and BSD, when using Linux Binary Compatibility), this is called ELF (Executable and Linkable Format). The compilation process goes through a series of complicated steps before it presents an executable or binary file for a given source file. Consider this source program (C code) as an example. Open your favorite editor and type out this program: #include <stdio.h> int main(void) { printf("Hello World\n"); return 0; } Step 1: Preprocessing with cpp The C preprocessor (cpp) is used to expand all macros and include the header files. In this example, the header file stdio.h will be included in the source code. stdio.h is a header file that contains information on a printf function that is used within the program. cpp runs on the source code, and the resulting instructions are saved in a file called hello.i. Open the file with a text editor to see its contents. The source code for printing hello world is at the bottom of the file. [testdir]# cat hello.c #include <stdio.h> int main(void) { printf("Hello World\n"); return 0; } [testdir]# [testdir]# cpp hello.c > hello.i [testdir]# [testdir]# ls -lrt total 24 -rw-r--r--. 1 root root 76 Sep 13 03:20 hello.c -rw-r--r--. 1 root root 16877 Sep 13 03:22 hello.i [testdir]# Step 2: Compilation with gcc This is the stage where preprocessed source code from Step 1 is converted to assembly language instructions without creating an object file. It uses the GNU Compiler Collection (gcc). After running the gcc command with the -S option on the hello.i file, it creates a new file called hello.s. 
This file contains the assembly language instructions for the C program. You can view the contents using any editor or the cat command.

[testdir]# cat hello.s
.file "hello.c"
.section .rodata
.LC0:
.string "Hello World"
[...]
.ident "GCC: (GNU) 4.8.5 20150623 (Red Hat 4.8.5-36)"
.section .note.GNU-stack,"",@progbits
[testdir]#

Step 3: Assembling with as

The purpose of an assembler is to convert assembly language instructions into machine language code and generate an object file that has a .o extension. Use the GNU assembler as that is available by default on all Linux platforms.

[testdir]# as hello.s -o hello.o
[testdir]#
[testdir]# ls -l
total 32
[testdir]#

You now have your first file in the ELF format; however, you cannot execute it yet. Later, you will see the difference between an object file and an executable file.

[testdir]# file hello.o
hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped

Step 4: Linking with ld

This is the final stage of compilation, when the object files are linked to create an executable. An executable usually requires external functions that often come from system libraries (libc). You can directly invoke the linker with the ld command; however, this command is somewhat complicated. Instead, you can use the gcc compiler with the -v (verbose) flag to understand how linking happens. (Using the ld command for linking is an exercise left for you to explore.)

[testdir]# gcc -v hello.o
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man [...]
--build=x86_64-redhat-linux Thread model: posix gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) COMPILER_PATH=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/:[...]:/usr/lib/gcc/x86_64-redhat-linux/ LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.8.5/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../:/lib/:/usr/lib/ COLLECT_GCC_OPTIONS='-v' '-mtune=generic' '-march=x86-64' /usr/libexec/gcc/x86_64-redhat-linux/4.8.5/collect2 --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu [...]/../../../../lib64/crtn.o [testdir]# After running this command, you should see an executable file named a.out: [testdir]# ls -l total 44 -rwxr-xr-x. 1 root root 8440 Sep 13 03:45 a.out Running the file command on a.out shows that it is indeed an ELF executable: [testdir]# file a.out a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=48e4c11901d54d4bf1b6e3826baf18215e4255e5, not stripped Run your executable file to see if it does as the source code instructs: [testdir]# ./a.out Hello World It does! So much happens behind the scenes just to print Hello World on the screen. Imagine what happens in more complicated programs. Explore the binutils tools This exercise provided a good background for utilizing the tools that are in the binutils package. My system has binutils version 2.27-34; you may have a different version depending on your Linux distribution. 
[~]# rpm -qa | grep binutils binutils-2.27-34.base.el7.x86_64 The following tools are available in the binutils packages: [~]# rpm -ql binutils-2.27-34.base.el7.x86_64 | grep bin/ /usr/bin/addr2line /usr/bin/ar /usr/bin/as /usr/bin/c++filt /usr/bin/dwp /usr/bin/elfedit /usr/bin/gprof /usr/bin/ld /usr/bin/ld.bfd /usr/bin/ld.gold /usr/bin/nm /usr/bin/objcopy /usr/bin/objdump /usr/bin/ranlib /usr/bin/readelf /usr/bin/size /usr/bin/strings /usr/bin/strip The compilation exercise above already explored two of these tools: the as command was used as an assembler, and the ld command was used as a linker. Read on to learn about the other seven GNU binutils package tools highlighted in bold above. readelf: Displays information about ELF files The exercise above mentioned the terms object file and executable file. Using the files from that exercise, enter readelf using the -h (header) option to dump the files' ELF header on your screen. Notice that the object file ending with the .o extension is shown as Type: REL (Relocatable file): [testdir]# readelf -h hello.o ELF Header: Magic: 7f 45 4c 46 02 01 01 00 [...] [...] Type: REL (Relocatable file) [...] If you try to execute this file, you will get an error saying it cannot be executed. This simply means that it doesn't yet have the information that is required for it to be executed on the CPU. Remember, you need to add the x or executable bit on the object file first using the chmod command or else you will get a Permission denied error. [testdir]# ./hello.o bash: ./hello.o: Permission denied [testdir]# chmod +x ./hello.o [testdir]# [testdir]# ./hello.o bash: ./hello.o: cannot execute binary file If you try the same command on the a.out file, you see that its type is an EXEC (Executable file). [testdir]# readelf -h a.out ELF Header: Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 Class: ELF64 [...] 
Type: EXEC (Executable file)

As seen before, this file can directly be executed by the CPU:

[testdir]# ./a.out
Hello World

The readelf command gives a wealth of information about a binary. Here, it tells you that it is in ELF 64-bit format, which means it can be executed only on a 64-bit CPU and won't work on a 32-bit CPU. It also tells you that it is meant to be executed on x86-64 (Intel/AMD) architecture. The entry point into the binary is at address 0x400430, which is the address where execution of the program starts (the startup routine that eventually calls the main function from the C source program). Try the readelf command on the other system binaries you know, like ls. Note that your output (especially Type:) might differ on RHEL 8 or Fedora 30 systems and above due to position independent executable (PIE) changes made for security reasons.

[testdir]# readelf -h /bin/ls
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)

Learn what system libraries the ls command is dependent on using the ldd command, as follows:

[testdir]# ldd /bin/ls
linux-vdso.so.1 => (0x00007ffd7d746000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f060daca000)
libcap.so.2 => /lib64/libcap.so.2 (0x00007f060d8c5000)
libacl.so.1 => /lib64/libacl.so.1 (0x00007f060d6bc000)
libc.so.6 => /lib64/libc.so.6 (0x00007f060d2ef000)
libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f060d08d000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f060ce89000)
/lib64/ld-linux-x86-64.so.2 (0x00007f060dcf1000)
libattr.so.1 => /lib64/libattr.so.1 (0x00007f060cc84000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f060ca68000)

Run readelf on the libc library file to see what kind of file it is. As it points out, it is a DYN (Shared object file), which means it can't be directly executed on its own; it must be used by an executable file that internally uses any functions made available by the library.
[testdir]# readelf -h /lib64/libc.so.6
ELF Header:
  Magic: 7f 45 4c 46 02 01 01 03 00 00 00 00 00 00 00 00
  Class: ELF64
  Data: 2's complement, little endian
  Version: 1 (current)
  OS/ABI: UNIX - GNU
  ABI Version: 0
  Type: DYN (Shared object file)

size: Lists section sizes and the total size

The size command works only on object and executable files, so if you try running it on a simple ASCII file, it throws an error saying File format not recognized.

[testdir]# echo "test" > file1
[testdir]# cat file1
test
[testdir]# file file1
file1: ASCII text
[testdir]# size file1
size: file1: File format not recognized

Now, run size on the object file and the executable file from the exercise above. Notice that the executable file (a.out) has considerably more information than the object file (hello.o), based on the output of the size command:

[testdir]# size hello.o
   text    data     bss     dec     hex filename
     89       0       0      89      59 hello.o
[testdir]# size a.out
   text    data     bss     dec     hex filename
   1194     540       4    1738     6ca a.out

But what do the text, data, and bss sections mean? The text section refers to the code section of the binary, which holds all the executable instructions. The data section is where all the initialized data lives, and bss is where all the uninitialized data is stored.

Compare size with some of the other available system binaries. For the ls command:

[testdir]# size /bin/ls
   text    data     bss     dec     hex filename
 103119    4768    3360  111247   1b28f /bin/ls

You can see that gcc and gdb are far bigger programs than ls just by looking at the output of the size command:

[testdir]# size /bin/gcc
   text    data     bss     dec     hex filename
 755549    8464   81856  845869   ce82d /bin/gcc
[testdir]# size /bin/gdb
   text    data     bss     dec     hex filename
6650433   90842  152280 6893555  692ff3 /bin/gdb

strings: Prints the strings of printable characters in files

It is often useful to add the -d flag to the strings command to show only the printable characters from the data section.
hello.o is an object file that contains instructions to print out the text Hello World. Hence, the only output from the strings command is Hello World:

[testdir]# strings -d hello.o
Hello World

Running strings on a.out (an executable), on the other hand, shows additional information that was included in the binary during the linking phase:

[testdir]# strings -d a.out
/lib64/ld-linux-x86-64.so.2
!^BU
libc.so.6
puts
__libc_start_main
__gmon_start__
GLIBC_2.2.5
UH-0
UH-0
=(
[]A\A]A^A_
Hello World
;*3$"

Recall that compilation is the process of converting source code instructions into machine code. Machine code consists of only 1's and 0's and is difficult for humans to read. Therefore, it helps to present machine code as assembly language instructions. What does assembly language look like? Remember that assembly language is architecture-specific; since I am using Intel (x86-64) architecture, the instructions will be different if you're using ARM architecture to compile the same programs.

objdump: Displays information from object files

Another binutils tool that can dump the machine language instructions from a binary is called objdump. Use the -d option, which disassembles all assembly instructions from the binary.

[testdir]# objdump -d hello.o

hello.o: file format elf64-x86-64

Disassembly of section .text:

0000000000000000 <main>:
   0: 55                      push %rbp
   1: 48 89 e5                mov %rsp,%rbp
   4: bf 00 00 00 00          mov $0x0,%edi
   9: e8 00 00 00 00          callq e
   e: b8 00 00 00 00          mov $0x0,%eax
  13: 5d                      pop %rbp
  14: c3                      retq

This output seems intimidating at first, but take a moment to understand it before moving ahead. Recall that the .text section holds all the machine code instructions. The assembly instructions can be seen in the fourth column (i.e., push, mov, callq, pop, retq). These instructions act on registers, which are memory locations built into the CPU. The registers in this example are rbp, rsp, edi, eax, etc., and each register has a special meaning.
Now run objdump on the executable file (a.out) and see what you get. The output of objdump on the executable can be large, so I've narrowed it down to the main function using the grep command:

[testdir]# objdump -d a.out | grep -A 9 main\>
000000000040051d <main>:
  40051d: 55                      push %rbp
  40051e: 48 89 e5                mov %rsp,%rbp
  400521: bf d0 05 40 00          mov $0x4005d0,%edi
  400526: e8 d5 fe ff ff          callq 400400 <puts@plt>
  40052b: b8 00 00 00 00          mov $0x0,%eax
  400530: 5d                      pop %rbp
  400531: c3                      retq

Notice that the instructions are similar to those in the object file hello.o, but they have some additional information in them:

- The object file hello.o has the following instruction: callq e
- The executable a.out consists of the following instruction with an actual address and a function name: callq 400400 <puts@plt>

The above assembly instruction is calling a puts function. Remember that you used a printf function in the source code. The compiler inserted a call to the puts library function to output Hello World to the screen.

Look at the instruction one line above puts:

- The object file hello.o has the instruction mov: mov $0x0,%edi
- The instruction mov for the executable a.out has an actual address ($0x4005d0) instead of $0x0: mov $0x4005d0,%edi

This instruction moves whatever is present at address $0x4005d0 within the binary into the register named edi. What else could be at that memory location? Yes, you guessed it right: it is nothing but the text Hello World.

How can you be sure? The readelf command enables you to dump any section of the binary file (a.out) onto the screen. The following asks it to dump .rodata, the read-only data section, onto the screen:

[testdir]# readelf -x .rodata a.out

Hex dump of section '.rodata':
  0x004005c0 01000200 00000000 00000000 00000000 ....
  0x004005d0 48656c6c 6f20576f 726c6400          Hello World.

You can see the text Hello World on the right-hand side and its address on the left-hand side. Does it match the address you saw in the mov instruction above? Yes, it does.
strip: Discards symbols from object files

This command is often used to reduce the size of a binary before shipping it to customers. Remember that it hinders the process of debugging, since vital information is removed from the binary; nonetheless, the binary executes flawlessly. Run it on your a.out executable and notice what happens. First, ensure the binary is not stripped by running the following command:

[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, [......] not stripped

Also, keep track of the number of bytes originally in the binary before running the strip command:

[testdir]# du -b a.out
8440 a.out

Now run the strip command on your executable and ensure it worked using the file command:

[testdir]# strip a.out
[testdir]# file a.out
a.out: ELF 64-bit LSB executable, x86-64, [......] stripped

After stripping the binary, its size went down from the previous 8440 bytes to 6296 for this small program. With this much savings for a tiny program, no wonder large programs often are stripped.

[testdir]# du -b a.out
6296 a.out

addr2line: Converts addresses into file names and line numbers

The addr2line tool simply looks up addresses in the binary file and matches them up with lines in the C source code program. Pretty cool, isn't it? Write another test program for this; only this time, ensure you compile it with the -g flag for gcc, which adds additional debugging information to the binary and also helps by including the line numbers (shown by cat -n here):

[testdir]# cat -n atest.c
     1  #include <stdio.h>
     2
     3  int globalvar = 100;
     4
     5  int function1(void)
     6  {
     7      printf("Within function1\n");
     8      return 0;
     9  }
    10
    11  int function2(void)
    12  {
    13      printf("Within function2\n");
    14      return 0;
    15  }
    16
    17  int main(void)
    18  {
    19      function1();
    20      function2();
    21      printf("Within main\n");
    22      return 0;
    23  }

Compile with the -g flag and execute it.
No surprises here:

[testdir]# gcc -g atest.c
[testdir]# ./a.out
Within function1
Within function2
Within main

Now use objdump to identify the memory addresses where your functions begin. You can use the grep command to filter out the specific lines you want. The addresses for your functions are highlighted below:

[testdir]# objdump -d a.out | grep -A 2 -E 'main>:|function1>:|function2>:'
000000000040051d <function1>:
  40051d: 55                      push %rbp
  40051e: 48 89 e5                mov %rsp,%rbp
--
0000000000400532 <function2>:
  400532: 55                      push %rbp
  400533: 48 89 e5                mov %rsp,%rbp
--
0000000000400547 <main>:
  400547: 55                      push %rbp
  400548: 48 89 e5                mov %rsp,%rbp

Now use the addr2line tool to map these addresses from the binary to the matching lines of the C source code:

[testdir]# addr2line -e a.out 40051d
/tmp/testdir/atest.c:6
[testdir]# addr2line -e a.out 400532
/tmp/testdir/atest.c:12
[testdir]# addr2line -e a.out 400547
/tmp/testdir/atest.c:18

It says that address 40051d corresponds to line number 6 in the source file atest.c, which is the line with the opening brace ({) of function1. Match the output for function2 and main yourself.

nm: Lists symbols from object files

Use the C program above to test the nm tool. Compile it quickly using gcc and execute it:

[testdir]# gcc atest.c
[testdir]# ./a.out
Within function1
Within function2
Within main

Now run nm and grep for information on your functions and variables:

[testdir]# nm a.out | grep -Ei 'function|main|globalvar'
000000000040051d T function1
0000000000400532 T function2
000000000060102c D globalvar
                 U __libc_start_main@@GLIBC_2.2.5
0000000000400547 T main

You can see that the functions are marked T, which stands for symbols in the text section, whereas variables are marked D, which stands for symbols in the initialized data section. Imagine how useful it is to run this command on binaries for which you do not have the source code: it lets you peek inside and understand which functions and variables are used.
Unless, of course, the binary has been stripped, in which case it contains no symbols, and the nm command wouldn't be very helpful, as you can see here:

[testdir]# strip a.out
[testdir]# nm a.out | grep -Ei 'function|main|globalvar'
nm: a.out: no symbols

Conclusion

The GNU binutils tools offer many options for anyone interested in analyzing binaries, and this has only been a glimpse of what they can do for you. Read the man pages for each tool to understand more about them and how to use them.
https://opensource.com/article/19/10/gnu-binutils
09 August 2010 17:50 [Source: ICIS news]

LONDON (ICIS)--Rhodia plans to target output from its proposed ethyl acetate (etac) project in Saudi Arabia at Europe.

Saudi International Petrochemical Co (Sipchem) revealed plans last week to build the project, in partnership with Rhodia, at its site at Al-Jubail. The 100,000 tonne/year plant is expected to start up in 2013 and require a Saudi riyal (SR) 400m ($107m) investment, the Saudi company said.

Rhodia will provide the technology and ethanol feedstock for the project and market most of the output, said Kamel in a telephone interview. Under the current agreement, Sipchem will fully own the project, he added.

The French company will market "a very important part of the output" under a long-term agreement, Kamel remarked. A large part of that will be targeted at Europe.

The plant will also have the flexibility to produce butyl acetate, although it is expected to produce just etac for the first few years, he added.

Sipchem already had methanol production and brought on-stream acetic acid and vinyl acetate monomer (VAM) plants at its Jubail site last year. "Before those two plants started up, nothing was happening in the acetyls sector outside…"

The Jubail etac project is interesting for Rhodia because of the availability of low-cost feedstock and because it is well positioned between Europe and …. Rhodia will source sugarcane-based ethanol feedstock from ….

The agreement with Sipchem represents the first step to greater cooperation with Sipchem and other partners in the region, he said.

($1 = SR3.75)
http://www.icis.com/Articles/2010/08/09/9383461/rhodia-to-target-etac-from-saudi-project-at-europe.html
from Google (as an Android Dev, let's support Google 😜)
- It can work with Android Studio (and maybe Xcode, I'm not sure, but who cares 😜)
- It can compile into Android and iOS apps from a single source!!! (below is one example I experimented with from the codelab. Single code into 2 platforms)

What I tried…

I'm basically glancing through various blogs, and also doing some exercises provided by Google's Codelab.

Google Codelabs
Google Developers Codelabs provide a guided, tutorial, hands-on coding experience. Most codelabs will step you through…
codelabs.developers.google.com

Sharing my findings, so you could just quickly glance through and get a feel of what Flutter has before you decide to get your hands dirty.

Caveat: I'm an Android Developer, so my view of it would be very much from one already familiar with Android Studio. So I'm comparing it pretty much with normal Android Development.

Installation

The installation is pretty straightforward. As an Android Dev, the additional components you'll need to add are:
- Flutter SDK (download and unzip it to a folder, then export the path of the flutter executable)
- Flutter and Dart plugins for Android Studio (go to Android Studio → Preferences → Plugins → Browse Repositories)
- Optionally: I think you need Xcode to be available so that you could have the iOS Simulator (run from Android Studio… yeah, you can run the iOS simulator from Android Studio!!)

Note: I'm not 100% sure we need Xcode, but I have one on my machine.

Command line

Unlike Android Studio, instead of using ./gradlew, we will use flutter instead. Several command examples below:

flutter doctor
Displays whether your machine has the components needed to develop with Flutter. Note, you don't need to get all the ❌ to become ✅, but only some of them (e.g.
I only did it for Android Studio and not IntelliJ, and didn't bother about the libmoduledevice and ios-deploy issues for now (maybe I'll need them when I deploy, but not when I'm testing).

flutter upgrade
I haven't used this, but I believe it is for updating to the latest version of the Flutter SDK.

flutter create <project>
This creates a new Flutter project (the build itself is then like ./gradlew compiling your project and building the apk).

flutter run -d "emulator/device-name"
This is like adb install: it will install and execute the app on the device. This command impressed me, as after it got executed, it provides some command line features (I press h to display them). The most impressive is hot reloading changes. One could make changes and reload them instantly, like Instant Run. Also note, to take a screenshot, just pressing s will do. Many more things can be done as shown above, and I haven't explored them all. Pretty impressive.

Flutter in Android Studio

If you have installed the Flutter plugin in Android Studio, in your File → New menu you'll have New Flutter Project, which then leads you to the 4 possible Flutter project components you could create, as below. From here, we can see that the 2nd and 4th components enable Flutter to interact with an Android module or app. Wow, some interoperability!!

Sample Project

The default sample project given in Flutter is an app with a FAB that increments a count, as shown below. And from this app, it can show you the special feature of Flutter, which is Hot Reload. Similar to Instant Run, but better and more reliable. The entire Flutter infrastructure is built on this as its fundamental.

Cross-platform operability

If you go to Tools → Flutter, you'll be able to see two functions, i.e.
- Open for Editing in Android Studio
- Open iOS module in Xcode

These two features allow one to open the Android or iOS side of the code of the Flutter project. I believe with that, one could enhance the project on the specific iOS platform with other modules and pods accordingly.
Interoperability!!

Development in Flutter

Coding in Dart

Unlike Android native development, which has XML as the UI layout plus Java/Kotlin code, Flutter uses the Dart programming language for everything. It is still object-oriented programming, where we work with classes and inheritance. Declarations are like Java, where the type comes first and then the variable, with a semicolon to end a line. However, in comparison with Java (7) and Kotlin, I think Dart is more similar to Kotlin from a usability point of view, as it has features like:
- Functional parameters
- Default constructor arguments
- Async-await
- Multiple classes in a file
- Null-aware operators

There are some things that are unlike both Java and Kotlin, e.g.:
- a private variable or function is defined with the prefix underscore _
- one needs to import another Dart file even if it is in the same folder.

You could check more about it from the below sites.

UI in Widgets

While Android development uses XML to draw its layout (same for iOS, though the XML is auto-generated from the interface builder), Flutter uses widgets, fully written in Dart, for its UI. The entire app is wrapped into a widget, where the top widget could be a MaterialApp, which is provided by import 'package:flutter/material.dart';

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

MyHomePage is also a type of widget (a StatefulWidget; as you could guess, it is a widget that can have state changes):

class MyHomePage extends StatefulWidget {
  MyHomePage({Key key, this.title}) : super(key: key);

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

And _MyHomePageState is a State whose build() method returns a Scaffold widget.

Refer to the below site for more info.
Introduction to widgets
Flutter widgets are built using a modern react-style framework, which takes inspiration from React. The central idea is…
flutter.io

With everything expressed in code, there are some utilities that make the UI coding easier, such as the Flutter Outline view.

Navigate from page to page

There's no more concept of Activity or Fragment. Everything is a widget. So how could one navigate from one page to the other? It is done using the Navigator, where the below example uses MaterialPageRoute to route to and return a new Scaffold widget as a new page.

Navigator.of(context).push(
  new MaterialPageRoute<void>(
    builder: (BuildContext context) {
      return new Scaffold(
        appBar: new AppBar(
          title: const Text('Saved Suggestions'),
        ),
      );
    },
  ),
);

Add dependent libraries

As Flutter is not using Gradle, how could it add external Flutter packages? This is done by adding the dependency to pubspec.yaml instead. Below is an example of adding the english_words package:

dependencies:
  flutter:
    sdk: flutter
  cupertino_icons: ^0.1.0
  english_words: ^3.1.0  # add this line

After adding the dependency, you might need to click "get dependencies" from the top prompt when you try to compile your code. After that, one needs to remember to import it in the code:

import 'package:english_words/english_words.dart';

Resetting the build

In Android Studio, whenever we face some weird compile issue, we are used to going to Build → Clean Project. In Flutter, we can't do that anymore. I tried going to the project folder and running rm -rf build, and my project didn't compile anymore; I had to close and reopen the project. So don't do that! What you need to do instead is go to Tools → Flutter → Flutter Clean.

Preparing the app for release

In actual fact, I haven't released any Flutter app. But I have checked out some aspects one needs to look into before releasing an app.

The app is Debug by default

If you compile and run the app, by default it is a debug build.
And this is easily seen, as there's a stripe on the right side of the app bar showing it. To release the app, you'll need to make it releasable, and the process differs for each platform (for obvious reasons).

Preparing an Android App for Release
During a typical development cycle, you test an app using `flutter run` at the command line, the **Run** and…
flutter.io

Preparing an iOS App for Release
This guide provides a step-by-step walkthrough of releasing a Flutter app to the [App Store][appstore] and…
flutter.io

Do I need to ProGuard my app?

For an Android developer, this is like a must-not-forget thing to do, though one could release an app without ProGuard. Technically there isn't any Android Java or Kotlin code that you need to protect, as it is all Flutter's code. But in case you would still like to do it, you could add the following to your ProGuard file:

# Flutter Wrapper
-keep class io.flutter.app.** { *; }
-keep class io.flutter.plugin.** { *; }
-keep class io.flutter.util.** { *; }
-keep class io.flutter.view.** { *; }
-keep class io.flutter.** { *; }
-keep class io.flutter.plugins.** { *; }

I tried it, and it reduced my app size from 26.3MB to 25.8MB. Not much help. The only reason you might want to do it is if you have other Android modules that you need to ProGuard.

However, you might ask, what about your Dart code? According to this Stack Overflow answer, the Dart code is compiled into a .so file. I believe it, as I used an apk decompiler to check it out; not easy, man….

But if you are still paranoid, there's a way to handle it, as listed in:

flutter/flutter
Flutter makes it easy and fast to build beautiful mobile apps. - flutter/flutter
github.com

Is Flutter ready to take over the world?

Well, this is a very personal view, limited to what I know and see. I do see that Flutter offers lots of fun exploration, and it would be really handy for quick prototypes released for both iOS and Android, for some simple test-the-market apps.
But I think there are still some deterrents to it overtaking native development work.

The app size

If you are thinking of building a simple app, you probably want it to be really small. The overhead of Flutter is already huge: >20MB. Of course, if you compare it to the Facebook or Uber app, this might be nothing. However, imagine having to add another 20MB of overhead; it is not a number to be neglected. Nonetheless, it is said that the Flutter team hasn't spent time making a small size a priority yet. When they do, things might change.

The change of development paradigm

Moving from Java to Kotlin is merely a language change, and a mostly positive one. The risk is really low, and as a developer I don't see any risk at all. Even then, its endorsement was carefully considered until Google finally endorsed it fully.

For Flutter, the entire development paradigm changes (at least compared to what an Android developer is used to: Java/Kotlin/Gradle/XML layouts, etc.). Adopting the new approach would need some time to establish conventions that suit the organization. Perhaps there are conventions out there of a similar nature to Flutter development (that I'm not aware of… as I'm purely an Android developer)… even then, it is definitely not as smooth as Java to Kotlin for a pure Android dev like me.

Maybe we won't need Android or iOS devs anymore… someone else could do the job instead? That'll be a different story. I'm talking as an Android dev.

Limited ecosystem — still dependent on native development

One main reason an organization would decide to go with Flutter instead of Android or iOS native development is its capability of supporting 2 platforms simultaneously. However, the limited set of libraries available from the open source community (for now) is a restriction on it.
Yes, it could connect to an existing Android module library with a plugin, or an iOS pod perhaps (I don't know), but that would require platform-dependent knowledge to work with. So back to square one: the need for native developers is still there. Besides, its capability to leverage device features, e.g. camera or geo-location, would definitely need platform-specific code handling. E.g. the Google Maps work requires linking to Android and iOS code.

Flutter How-To: Integrate Google Maps (experimental)
This post is about an experimental and unpublished version of a to-be-official Google Maps widget that is being…
medium.com

Life cycle and state restoration support

For Android development especially, life cycle and state restoration knowledge is at the core of its development model. I tried it out on Flutter; it seems its support is not there yet. There's an issue filed, and there's a need to consider exposing the lifecycle methods to Flutter, as below:

Instance state not saved when app is killed by OS · Issue #6827 · flutter/flutter
What is instance state, and why it exists On Android, an Activity can be killed at any time by the system. This happens…
github.com

Imagine if one has to expose the life cycle to Flutter code; this would really make Flutter code more like native development than platform-independent code. Native knowledge would still be required, and maybe it would be even more tedious, as native knowledge of two platforms would be needed to work on one piece of code. So this is not ideal yet, and solving this problem will be tricky.

Having said all that, I do see some value in Flutter as a different development model; it could potentially be used for some segments of business needs, especially for simple apps that don't interact with any device-specific features and are not sensitive to state restoration. From a development point of view, it is also another interesting model; I heard it solves some problems in the testing framework, where UI unit tests can be done more easily with Flutter. Well.
I'm sure Flutter will improve on its limitations. But will it take over the world and replace native development… my view is, it will be a while.

Anyway, in case you would like to learn Flutter and explore it, the below blog would be a good place to start:

"Flutter is awesome, but where do I start learning?"
Flutter is Google's mobile app SDK for crafting high-quality native interfaces on iOS and Android in record time…
medium.freecodecamp.org

I hope this post is helpful to you. You could check out my other interesting topics here. Follow me on Medium, Twitter or Facebook for little tips and learning on Android, Kotlin and related topics. ~Elye~
https://medium.com/@elye.project/first-view-on-flutter-an-android-developer-view-83f87c185f92
1. gdb and gdbserver

Debugging principle: gdb runs on the Linux virtual machine and sends commands to gdbserver on the development board, for example to set breakpoints, step, and so on. When gdbserver receives a command, it makes the application perform the corresponding action, which is how debugging is carried out.

This is like the bare-metal GDB debugging I learned before, except that was under Windows, and this time it is gdb under Linux.

1.1 As before, gdb needs a compiled file with debugging information. This is produced by arm-linux-gcc -g in the Makefile; -g means the compiled file contains gdb debugging information.

1.2 Why does gdb need a compiled file with debugging information? For example, to read a variable a in the application on the development board:

First, gdb uses the application's compiled file (with debugging information) to find the address where the variable a is stored.
Then it sends that address to gdbserver, which reads out the value stored at that address.

2. Installing gdb and gdbserver

First, go to the official website to download gdb-7.4.

2.1 Install GDB on the virtual machine:

# tar xjf gdb-7.4.tar.bz2          // unpack
# cd gdb-7.4/                      // enter the gdb-7.4 directory
# ./configure --target=arm-linux   // GDB runs on the PC but debugs applications on the board, so --target is set to arm-linux
# make                             // compile
# mkdir tmp
# make install prefix=$PWD/tmp     // install into the ./tmp directory
# sudo cp tmp/bin/arm-linux-gdb /bin/   // copy to /bin
# /bin/arm-linux-gdb -v            // -v: confirm the gdb version is 7.4

2.2 Install GDBServer on the development board:

# cd gdb/gdbserver/                // from within the gdb-7.4 directory
# ./configure --target=arm-linux --host=arm-linux   // set up the GDBServer build environment
# make                             // compile

The following error occurred: in linux-arm-low.c, PTRACE_GETSIGINFO cannot be found (no definition).
In the directory ./configure --target=arm-linux --host=arm-linux // set up GDBServer Working environment make // compile The following error occurred : Referring to linux-arm-low.c in , Can't find PTRACE_GETSIGINFO Definition 2.3 solve : 1) #echo $PATH // Check it out. PATH Environment variables find compiler gcc be located /work/tools/gcc-3.4.5-glibc-2.3.6/bin 2) #cd /work/tools/gcc-3.4.5-glibc-2.3.6/ # grep "PTRACE_GETSIGINFO" * -nR stay gcc The root directory , We found that linux/ptrace.h In the definition of : 3) #vi linux-arm-low.c add to : #define PTRACE_GETSIGINFO 0x4202 4) Finally, I will try again make, Generate gdbserver Command file And then gdbserver Command file , Put it in the root directory of our development board /bin in , You can use it cp gdbserver /nfs_root/bin/ //nfs_root: Development board nfs System root 3. The test procedure is as follows (test_debug.c) #include <stdio.h>void C(int *p){ *p = 0x12;}void B(int *p){ C(p);}void A(int *p){ B(p);}void A2(int *p){ C(p);}int main(int argc, char **argv){ int a; int *p = NULL; A2(&a); // A2 > C printf("a = 0x%x\n", a); A(p); // A > B > C return 0;} among A2(&a) Would call A2()->C(), And then a The assignment is 0x12. A§ Would call A()->B()->C(), because p It's a null pointer , There's going to be a mistake . Next , Let's take this application as an example . 4. compile #arm-linux-gcc -g -o test_debug test_debug.c //-g: With debugging information 5. 
Debugging test_debug.c on the development board:

First, gdbserver needs to set up a local server and specify which file to debug:

# gdbserver 192.168.2.107:2345 ./test_debug
// 192.168.2.107: local IP address
// 2345: port number, used for gdb to connect
// ./test_debug: the file to debug

On the virtual machine:

# /bin/arm-linux-gdb ./test_debug    // start gdb, specifying test_debug as the file to debug
# target remote 192.168.2.107:2345   // establish a connection with gdbserver

5.1 Once connected successfully, use gdb commands to debug. The common commands are as follows:

l                  List all the source code
break [file]:[row] Set a breakpoint, e.g. break test_debug.c:21 sets a breakpoint at line 21 of test_debug.c
info br            View breakpoints
info file          List the current files and shared libraries
delete num         Delete the breakpoint with the given number, as shown in the figure below
c                  Continue the program
run                Start the program
step               Single-step; unlike next, when the current line contains a function call, step goes into the function
next               Single-step; when the current line contains a function call, next goes straight to the next line
print a            Print the value of variable a
quit               Exit gdb

6. Debugging can also be done through gdb + coredump

When the program runs into an error, a core file is generated that saves the running state of the program; this is the coredump, which can be given to gdb for debugging.

6.1 First, check the coredump resource size with ulimit.

The ulimit command (user limit) is mainly used to limit the various process resources of a user. On the development board, input:
In the development board , Input As shown in the figure above , You can see coredump The resource size of is 0, in other words , When the program runs in error , Will not generate core file 6.2 Set up core file Set up core The resource size of the file is unlimited , Input : ulimit -c unlimited //-c: Corresponding coredump 6.3 Generate core file perform : #./test_debug There was a paragraph error , And generate core file , As shown in the figure below : 6.4 Enter virtual machine take core Copy it over , And then execute : #/bin/arm-linux-gdb ./test_debug ./core Then input bt, You can view the call relationship :
https://javamana.com/2021/04/20210416184201912a.html
Problems that occurred

I have seen DLLs added to the GAC that you can't remove - very frustrating. I have seen DLLs registered into the cache - verified everything is there OK using ILDASM - only to find the DLLs are no longer in the GAC.

Strongly naming the assembly

When doing this, make sure you get the directory slashes \\ correct within the assembly file (AssemblyInfo.cs) - if not, you will get errors while the code is looking for the .snk file. If you get errors that leave you scratching your head, the best bet is to remove the .snk file and start over.

Project references

Also be careful and watch where you build projects, as the referenced DLLs can easily be built to the development folder instead of the release folder - sometimes even when you specify the release folder. This can be very, very frustrating.

Conclusion

My conclusion on using the GAC: only use it if you really need to, as it isn't the 'end of DLL hell' as first thought. Also, only use it if you are using a DLL that is shared by other projects. Don't put it in the GAC if you don't have to.

Global Assembly Cache (GAC) Hell

What happens to the GAC if I uninstall any .NET Frameworks?

I'm migrating an ASP application to .NET 2005. The application uses ADODB.dll, which registers with no problem. The application uses the methods of the registered ADODB.dll, but it also (not always) accesses an ADODB.dll that is on another drive on the PC in the GAC (WINDOWS/assembly). How can I resolve this problem, which generates the error "'Connection' is ambiguous in the namespace 'ADODB'"? Thanks!
http://www.c-sharpcorner.com/UploadFile/gsuttie/GACHell09092005073302AM/GACHell.aspx
Some knowledge points

1. The return type of cursor.fetchall():
out: [{'detail_link': ''}, {'detail_link': ''}, {'detail_link': ''}]

2. cursor.fetchone(): only one result will be returned, as a single dictionary, e.g.:
out: {'detail_link': ''}

3.
(1) SELECT * FROM table LIMIT 5,10;  // retrieves record rows 6-15 (because the index starts from 0)
(2) SELECT * FROM table LIMIT 5;     // retrieves the first five record rows; in other words, LIMIT n is equivalent to LIMIT 0,n.
(3) To retrieve all the record rows from an offset to the end of the record set, specify the second parameter as -1: SELECT * FROM table LIMIT 95,-1;  // retrieves rows 96 through the last

4. SELECT count(*) FROM test.comment250;  // according to one netizen's test, counting the total with count(*) is optimized and time-efficient.

New issues:

Question 1

When the crawler was halfway done, PyCharm was shut down for an unknown reason. How to solve it?

Method 1

Set up logging to see where the break occurred. The following options in settings.py configure logging:

# Default: True, logging is enabled
LOG_ENABLED = True
# Default: 'utf-8', encoding used by logging
LOG_ENCODING = "utf-8"
# Default: None. Create the logging output file with this name in the current directory
LOG_FILE = "name.log"
# Default: 'DEBUG', the lowest level of log
LOG_LEVEL = "DEBUG"
# Default: False. If True, all the standard output (and errors) of the process will be redirected to the log.
# For example, executing print("hello") will be displayed in the Scrapy log.
LOG_STDOUT = False

from datetime import datetime
# current time
today = datetime.now()
# Log files are named by date
log_file_path = "logs/log_{}_{}_{}.log".format(today.year, today.month, today.day)
# Log output level
LOG_LEVEL = "DEBUG"
# Log output path
LOG_FILE = log_file_path

There is a log, but I don't see the problem in it. Well, maybe.

Method 2

When writing to the database, use a SELECT statement to check whether the corresponding row has already been written, so as to avoid writing it again? In fact, I don't need to consider this question (and because of Question 2, I deleted the data anyway): I set a primary key, which is unique, so the same data won't be written twice.

Question 2

It turns out that some people use pictures (emoji) as names, and comments may also contain them. How can these be stored in MySQL?

Solution (a personal method only):

First, I changed the configuration in my.ini to utf8mb4. In MySQL's command-line mode I executed the following commands in turn, and after they all succeeded I restarted the MySQL service:

set character_set_client = utf8mb4;
set character_set_connection = utf8mb4;
set character_set_database = utf8mb4;
set character_set_results = utf8mb4;

Then I changed the database table, adding COLLATE utf8mb4_unicode_ci to some of the column definitions. In fact, adding ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci at the end of the table definition settles it once and for all. For example:

create table comment250 (
  `movie_id` text COLLATE utf8mb4_unicode_ci,
  `content` text COLLATE utf8mb4_unicode_ci,
  `useful_num` varchar(255) COLLATE utf8mb4_unicode_ci,
  `star` int,
  `time` varchar(255) COLLATE utf8mb4_unicode_ci,
  `people_name` text COLLATE utf8mb4_unicode_ci,
  `people_url` varchar(255),
  `comment_id` int,
  `URL` varchar(255),
  primary key (`comment_id`)
)

Inspiration: what COLLATE is in MySQL. MySQL has both utf8 and utf8mb4; please forget utf8 and always use utf8mb4 in MySQL. This is a legacy problem of MySQL: its utf8 supports character encodings of at most 3 bytes, so for characters that need 4 bytes (such as emoji), MySQL's utf8 does not work and utf8mb4 is required.

Finally, I created the table again and inserted the data, and the insert now works (the rainbow emoji was stored successfully).

Question 3

A low-level error occurred: (1264, "Out of range value for column 'comment_id' at row 1"). Obviously, the int type I defined is too small. I changed it to bigint, and the current problem is solved.
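The LIMIT arithmetic in point 3 is easy to get wrong because the offset is zero-based. A small sketch (the limit_clause helper is hypothetical, not from the article) that builds the clause for a 1-based page number:

```python
def limit_clause(page, per_page):
    """Return a MySQL LIMIT clause for a 1-based page number.

    LIMIT offset, count -- the offset is zero-based, so page 1
    starts at offset 0 and page 2 at offset per_page.
    """
    offset = (page - 1) * per_page
    return "LIMIT {}, {}".format(offset, per_page)

# Page 2 of 10-row pages covers rows 11-20, i.e. offset 10.
print(limit_clause(2, 10))  # → LIMIT 10, 10
```

The same arithmetic explains the article's first example: LIMIT 5,10 starts at offset 5 (the sixth row) and returns rows 6 through 15.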
https://www.fatalerrors.org/a/0N991z8.html
Basics of Common Math Functions in C Programming

Not everyone is going to employ their C language programming skills to help pilot a rocket safely across space and into orbit around Titan. No, it's more likely that you'll attempt something far more down-to-earth. Either way, the work will most likely be done by employing math functions.

All the functions listed, save for the abs() function, deal with floating-point values; the abs() function works only with integers. You can look up function references in the man pages, accessed via Code::Blocks, found online, or at the command prompt in a Unix terminal window.

Math Mania Mangled is littered with a smattering of math functions. The compiler enjoys seeing these functions, as long as you remember to include the math.h header file at Line 2.

MATH MANIA MANGLED

#include <stdio.h>
#include <math.h>

int main()
{
    float result, value;

    printf("Input a float value: ");
    scanf("%f", &value);
    result = sqrt(value);
    printf("The square root of %.2f is %.2f\n", value, result);
    result = pow(value, 3);
    printf("%.2f to the 3rd power is %.2f\n", value, result);
    result = floor(value);
    printf("The floor of %.2f is %.2f\n", value, result);
    result = ceil(value);
    printf("And the ceiling of %.2f is %.2f\n", value, result);
    return(0);
}

Exercise 1: Create a new project using the source code from Math Mania Mangled. Be aware that the printf() functions in the listing were wrapped so that they're split between two lines; you don't need to wrap them in your source code. Build the project. Run it and try various values as input to peruse the results.

Exercise 2: Write a program that displays the powers of 2, showing all values from 2^0 through 2^10. These are the Holy Numbers of Computing.

The math functions listed are only a small sampling of the variety available. Generally speaking, if your code requires some sort of mathematical operation, check the C library documentation, the man pages, to see whether that specific function exists.
On a Unix system, type man 3 math to see a list of the C library’s math functions. The ceil() function is pronounced seal. It’s from the word ceiling, which is a play on the floor() function.
http://www.dummies.com/how-to/content/basics-of-common-math-functions-in-c-programming.html
float.h(0P) POSIX Programmer's Manual float.h(0P)

This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

float.h — floating types

#include <float.h>

The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard.

The characteristics of floating types are defined in terms of a model with the following parameters:

    s      Sign (±1).
    b      Base or radix of exponent representation (an integer > 1).
    e      Exponent (an integer between a minimum e_min and a maximum e_max).
    p      Precision (the number of base-b digits in the significand).
    f_k    Non-negative integers less than b (the significand digits).

A floating-point number x is defined by the following model:

    x = s · b^e · Σ_{k=1}^{p} f_k · b^(−k),    e_min ≤ e ≤ e_max

In addition to normalized floating-point numbers (f_1 > 0 if x ≠ 0), floating types may be able to contain other kinds of floating-point numbers, such as subnormal floating-point numbers (x ≠ 0, e = e_min, f_1 = 0) and unnormalized floating-point numbers (x ≠ 0, e > e_min, f_1 = 0). An implementation may give zero and non-numeric values, such as infinities and NaNs, a sign, or may leave them unsigned. Wherever such values are unsigned, any requirement in POSIX.1-2008 to retrieve the sign shall produce an unspecified sign, and any requirement to set the sign shall be ignored.

The accuracy of the floating-point operations ('+', '−', '*', '/') and of the functions in <math.h> and <complex.h> that return floating-point results is implementation-defined, as is the accuracy of the conversion between floating-point internal representations and string representations performed by the functions in <stdio.h>, <stdlib.h>, and <wchar.h>.

The <float.h> header shall define the following values as constant expressions with implementation-defined values that are greater than or equal to those shown:

*   Number of decimal digits, n, such that any floating-point number in the widest supported floating type can be rounded to a floating-point number with n decimal digits and back again without change to the value:

        p_max · log10(b)              if b is a power of 10
        ⌈1 + p_max · log10(b)⌉        otherwise

    DECIMAL_DIG    10

*   Number of decimal digits, q, such that any floating-point number with q decimal digits can be rounded into a floating-point number with p radix-b digits and back again without change to the q decimal digits:

        p · log10(b)                  if b is a power of 10
        ⌊(p − 1) · log10(b)⌋          otherwise

    FLT_DIG     6
    DBL_DIG     10
    LDBL_DIG    10

*   Minimum negative integer such that 10 raised to that power is in the range of normalized floating-point numbers, ⌈log10(b^(e_min − 1))⌉:

    FLT_MIN_10_EXP     −37
    DBL_MIN_10_EXP     −37
    LDBL_MIN_10_EXP    −37

*   Maximum integer such that FLT_RADIX raised to that power minus 1 is a representable finite floating-point number, e_max:

    FLT_MAX_EXP
    DBL_MAX_EXP
    LDBL_MAX_EXP

    Additionally, FLT_MAX_EXP shall be at least as large as FLT_MANT_DIG, DBL_MAX_EXP shall be at least as large as DBL_MANT_DIG, and LDBL_MAX_EXP shall be at least as large as LDBL_MANT_DIG; this has the effect that FLT_MAX, DBL_MAX, and LDBL_MAX are integral.

*   Maximum integer such that 10 raised to that power is in the range of representable finite floating-point numbers, ⌊log10((1 − b^(−p)) · b^(e_max))⌋:

    FLT_MAX_10_EXP     +37
    DBL_MAX_10_EXP     +37
    LDBL_MAX_10_EXP    +37

*   Maximum representable finite floating-point number, (1 − b^(−p)) · b^(e_max):

    FLT_MAX     1E+37
    DBL_MAX     1E+37
    LDBL_MAX    1E+37

The <float.h> header shall define the following values as constant expressions with implementation-defined (positive) values that are less than or equal to those shown:

*   The difference between 1 and the least value greater than 1 that is representable in the given floating-point type, b^(1 − p):

    FLT_EPSILON     1E−5
    DBL_EPSILON     1E−9
    LDBL_EPSILON    1E−9

*   Minimum normalized positive floating-point number, b^(e_min − 1):

    FLT_MIN     1E−37
    DBL_MIN     1E−37
    LDBL_MIN    1E−37

The following sections are informative.

Application usage: None.

Rationale: All known hardware floating-point formats satisfy the property that the exponent range is larger than the number of mantissa digits. The ISO C standard permits a floating-point format where this property is not true, such that the largest finite value would not be integral; however, it is unlikely that there will ever be hardware support for such a floating-point format, and it introduces boundary cases that portable programs should not have to be concerned with (for example, a non-integral DBL_MAX means that ceil() would have to worry about overflow). Therefore, this standard imposes an additional requirement that the largest representable finite value is integral.

Future directions: None.

See also: complex.h(0p), math.h(0p), stdio.h(0p), stdlib.h(0p)

float.h(0P)

Pages that refer to this page: math.h(0p), ilogb(3p), logb(3p), strtod(3p), wcstod(3p)
http://man7.org/linux/man-pages/man0/float.h.0p.html
Opened 12 months ago Closed 11 months ago Last modified 11 months ago #31877 closed Bug (fixed) TemplateView.get_context_data()'s kwargs returns SimpleLazyObjects that causes a crash when filtering. Description Example Code that works in 3.0, but not in 3.1: class OfferView(TemplateView): template_name = "offers/offer.html" def get_context_data(self, **kwargs): offer_slug = kwargs.get("offer_slug", "") offer = get_object_or_404(Account, slug=offer_slug) return {"offer": offer, "offer_slug": offer_slug} In order to make this work in 3.1, you have to explicitly convert the result of kwargs.get() to a string to get the SimpleLazyObject to resolve: class OfferView(TemplateView): template_name = "offers/offer.html" def get_context_data(self, **kwargs): offer_slug = kwargs.get("offer_slug", "") offer = get_object_or_404(Account, slug=str(offer_slug)) return {"offer": offer, "offer_slug": offer_slug} The error generated if you don't is: Error binding parameter 0 - probably unsupported type from django/db/backends/sqlite3/operations.py, line 144, in _quote_params_for_last_executed_query In both cases, the urls.py looks like: path( "/offers/<slug:offer_slug>/", OfferView.as_view(), name="offer_view", ), When debugging, I found that offer_slug (coming in from kwargs.get) was of type 'SimpleLazyObject' in Django 3.1, and when I explicitly converted it to a string, get_object_or_404 behaved as expected. This is using Python 3.7.8 with SQLite. Change History (19) comment:1 Changed 12 months ago by comment:2 Changed 12 months ago by comment:3 Changed 12 months ago by comment:4 Changed 12 months ago by Using lazy() instead of SimpleLazyObject() fixes this - PR is up. comment:5 Changed 12 months ago by comment:6 Changed 12 months ago by comment:7 Changed 11 months ago by comment:8 Changed 11 months ago by comment:9 Changed 11 months ago by Passing deprecated keyword arguments to a queryset with lookups, e.g. 
Artist.objects.get(name__iexact=artist_name), still crashes on PostgreSQL:

django.db.utils.ProgrammingError: can't adapt type '__proxy__'

Thanks Mohit Solanki for the report.

comment:10 Changed 11 months ago by

😟 Hmm, this is a tough one. psycopg2 uses type(obj) to look up its "adapter" in its C extension, so this "proxy" approach may not be feasible. I know of a more accurate proxy wrapper that proxies more attributes than Django's lazy objects, wrapt.ObjectProxy, but even its docs acknowledge that it can't make type() work, whilst it does make isinstance() work. Any ideas for alternative approaches?

comment:11 Changed 11 months ago by

comment:12 Changed 11 months ago by

There isn't really a general-purpose way of wrapping primitive/arbitrary types like this in Python that won't hit some corner cases. You can make an object appear to be a duck by adapting its quacking dynamically (i.e. wrapt.ObjectProxy), but if someone looks close enough they can always see that it's actually a dog. And on the whole that's a good thing, IMO. Our use of kwargs makes this harder, as we lose the ability to construct a container that can trigger the deprecation warning, which would be the typical easy approach. There is no way to control what lands on the other side of get_context_data() (it's always a plain kwargs dict), and there is no way to construct a wrapper value that looks _exactly_ like the value it's wrapping. That basically leaves only "crazy" approaches, some of which are fun to consider but none of which are suitable. Here's one that uses settrace() to do what we need:

import sys
import warnings

class DeprecatedWrapper(dict):
    def __getitem__(self, key):
        warnings.warn("stop right there, scoundrel!")
        return super().__getitem__(key)

def wrap_kwargs(frame, *args):
    frame.f_locals['kwargs'] = DeprecatedWrapper(frame.f_locals['kwargs'])
    sys.settrace(None)

class TemplateView(...):
    def get(...):
        ...
        sys.settrace(wrap_kwargs)
        context = self.get_context_data(**context_kwargs)
        return self.render_to_response(context)

Given these issues, I'm not sure if we can go ahead with deprecating this.

comment:13 Changed 11 months ago by

comment:14 Changed 11 months ago by

comment:15 Changed 11 months ago by

I agree too. Thanks for the report. get_object_or_404() and QuerySet.filter() with a SimpleLazyObject throw the same exception in Django 2.2 or 3.0. TemplateView.get_context_data()'s kwargs returns SimpleLazyObjects in Django 3.1, which causes a crash. Passing URL kwargs into context is deprecated (see #19878) but should still work in Django 3.1 and 3.2. Regression in 4ed534758cb6a11df9f49baddecca5a6cdda9311. Reproduced at 60626162f76f26d32a38d18151700cb041201fb3.
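The adapter-lookup problem discussed in the comments can be reproduced without Django or psycopg2. A minimal sketch (the Lazy class below is a hypothetical stand-in for Django's SimpleLazyObject, not Django's code) showing why a library that dispatches on type(obj) cannot see through a proxy:

```python
class Lazy:
    """Toy stand-in for SimpleLazyObject: wraps a zero-argument callable,
    evaluates it on first use, and forwards attribute access to the result.
    (A real implementation uses a sentinel rather than None for "not yet
    evaluated"; None is fine for this sketch.)"""

    def __init__(self, func):
        self._func = func
        self._wrapped = None

    def _setup(self):
        if self._wrapped is None:
            self._wrapped = self._func()

    def __getattr__(self, name):
        # Called only for attributes not found on Lazy itself,
        # so .upper(), .strip(), etc. are forwarded to the string.
        self._setup()
        return getattr(self._wrapped, name)

    def __str__(self):
        self._setup()
        return str(self._wrapped)


slug = Lazy(lambda: "offer-slug")

print(str(slug))          # forwarding works: offer-slug
print(slug.upper())       # attribute access is proxied: OFFER-SLUG
print(type(slug) is str)  # False: type() sees the proxy, not str —
                          # which is why C-level adapters keyed on
                          # type(obj) fail with "can't adapt type".
```

This is also why the ticket's workaround of calling str(offer_slug) fixes the crash: it forces evaluation and hands the database layer a real str.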
https://code.djangoproject.com/ticket/31877
Figure 97. The simple GUI of SwingApplication presents a JButton and a JLabel.

Look and Feel

Figure 98 shows three views of a GUI that uses Swing components. Each picture shows the same program but with a different look and feel.

Figure 98. Three samples of look and feel.

Swing allows you to specify which look and feel your program uses: Java look and feel, CDE/Motif look and feel, Windows look and feel, and so on. The code in boldface type in the following snippet shows you how SwingApplication specifies its look and feel:

public static void main(String[] args) {
    try {
        UIManager.setLookAndFeel(
            UIManager.getCrossPlatformLookAndFeelClassName());
    } catch (Exception e) { }
    ...// Create and show the GUI...
}

The preceding code essentially says, "I don't care whether the user has chosen a look and feel; use the cross-platform look and feel (the Java look and feel)."

Setting Up Buttons and Labels

Like most GUIs, the SwingApplication GUI contains a button and a label. (Unlike most GUIs, that's about all that SwingApplication contains.) Here's the code that initializes the button:

JButton button = new JButton("I'm a Swing button!");
button.setMnemonic('i');
button.addActionListener(...create an action listener...);

The first line creates the button. The second sets the letter "i" as the mnemonic that the user can use to simulate a click of the button. For example, in the Java look and feel, typing Alt-i results in a button click. The third line registers an event handler for the button click, as discussed later in this section.

Here's the code that initializes and manipulates the label:

...// where instance variables are declared:
private static String labelPrefix = "Number of button clicks: ";
private int numClicks = 0;

...// in GUI initialization code:
final JLabel label = new JLabel(labelPrefix + "0 ");
...
label.setLabelFor(button);

...// in the event handler for button clicks:
label.setText(labelPrefix + numClicks);

The preceding code is pretty straightforward, except for the line that invokes the setLabelFor method. That code exists solely to hint to assistive technologies that the label describes the button. [1]

[1] Assistive technologies enable people with permanent or temporary disabilities to use computers. For more information, see the section "How to Support Assistive Technologies" in the book The JFC Swing Tutorial. This section is also available on this book's CD and online at:

Now that you know how to set up buttons, you also know how to set up check boxes and radio buttons, as they all inherit from the AbstractButton class. Check boxes are similar to radio buttons, but by convention their selection models are different. Any number of check boxes in a group (none, some, or all) can be selected. On the other hand, only one button can be selected from a group of radio buttons. Figure 99 shows pictures of two programs that use radio buttons and check boxes.

Figure 99. As you'd expect, the CheckBoxDemo application shows the use of check boxes, and the RadioButtonDemo application shows the use of radio buttons. Both programs are available on this book's CD and online.

You'll get a chance to take a closer look at radio buttons in the section Example Five: VoteDialog (page 368).

Handling Events

Every time the user types a character or pushes a mouse button, an event occurs. Any object can be notified of the event. All the object has to do is implement the appropriate interface and be registered as an event listener on the appropriate event source.

How to Implement an Event Handler

Every event handler requires three pieces of code:

1. In the declaration for the event handler class, code that specifies that the class implements a listener interface:

public class MyClass implements ActionListener {

2. Code that registers an instance of the event handler class as a listener on one or more components:

someComponent.addActionListener(instanceOfMyClass);

3. The implementation of the methods in the listener interface:

public void actionPerformed(ActionEvent e) {
    ...//code that reacts to the action...
}

Event handlers can be instances of any class.
Often an event handler that has only a few lines of code is implemented using an anonymous inner class.

Figure 100. When the user clicks a button, the button's action listeners are notified.

Swing components can generate many kinds of events. Table 49 lists a few examples. To learn more about how to detect events from a particular component, refer to each component's how-to section in the "Creating a GUI with JFC Swing" trail. This trail is also available on the CD or online at:

Note: Event-handling code executes in a single thread, the event-dispatching thread. This ensures that each event handler finishes execution before the next one executes. For instance, the actionPerformed method in the preceding example executes in the event-dispatching thread. Painting code also executes in the event-dispatching thread. Therefore, while the actionPerformed method is executing, the program's GUI is frozen: it won't repaint or respond to mouse clicks. See the section Threads and Swing (page 378) for more information.

Adding Borders Around Components

If you take another look at the snapshot of SwingApplication (Figure 97 on page 356), you'll see extra space around its contents: 30 extra pixels on the top, left, and right, and 10 extra pixels on the bottom. Borders are a feature that JPanel inherits from the JComponent class.
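The three pieces of an event handler can be seen together in one small self-contained program. This is a sketch: the ClickCounter class and its counter are invented for illustration, and the listener is invoked directly (rather than by a real GUI click) so it runs without a display. Only java.awt.event types are needed, not a visible window.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Piece 1: the class declaration specifies that it implements
// a listener interface.
public class ClickCounter implements ActionListener {
    private int numClicks = 0;

    // Piece 3: the implementation of the interface's method.
    public void actionPerformed(ActionEvent e) {
        numClicks++;
    }

    public int getClicks() {
        return numClicks;
    }

    public static void main(String[] args) {
        ClickCounter counter = new ClickCounter();
        // Piece 2: normally counter would be registered with
        // someComponent.addActionListener(counter); here we call the
        // listener method directly to simulate two button clicks.
        ActionEvent click =
            new ActionEvent(counter, ActionEvent.ACTION_PERFORMED, "click");
        counter.actionPerformed(click);
        counter.actionPerformed(click);
        System.out.println("Number of button clicks: " + counter.getClicks());
    }
}
```

In a real Swing program the event-dispatching thread calls actionPerformed for you after the registered component is clicked, which is exactly why the Note above warns against long-running work inside it.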
https://flylib.com/books/en/2.33.1/example_two_swingapplication.html
/*
 * CORTEX-I Frame Grabber driver V1.0
 *
 * Copyright (C) 1994, Paul S. LaFollette, Jr. This software may be used,
 * modified, copied, distributed, and sold, in both source and binary form
 * provided that the above copyright and these terms are retained. Under
 * no circumstances is the author responsible for the proper functioning
 * of this software, nor does the author assume any responsibility
 * for damages incurred with its use.
 *
 * $FreeBSD: src/sys/i386/isa/ctx.c,v 1.36 2000/01/29 16:17:31 peter Exp $
 * $DragonFly: src/sys/dev/video/ctx/ctx.c,v 1.8 2004/05/19 22:52:53 dillon Exp $
 */

/*
 * Device Driver for CORTEX-I Frame Grabber
 * Made by ImageNation Corporation
 * 1200 N.E. Keyues Road
 * Vancouver, WA 98684 (206) 944-9131
 * (I have no ties to this company, just thought you might want
 * to know how to get in touch with them.)
 *
 * In order to understand this device, you really need to consult the
 * manual which ImageNation provides when you buy the board. (And
 * what a pleasure it is to buy something for a PC and actually get
 * programming information along with it.) I will limit myself here to
 * a few comments which are specific to this driver. See also the file
 * ctxreg.h for definitions of registers and control bits.
 *
 * 1. Although the hardware supports low resolution (256 x 256)
 *    acquisition and display, I have not implemented access to
 *    these modes in this driver. There are some fairly quirky
 *    aspects to the way this board works in low resolution mode,
 *    and I don't want to deal with them. Maybe later.
 *
 * 2. Choosing the base address for the video memory: This is set
 *    using a combination of hardware and software, using the left
 *    most dip switch on the board, and the AB_SELECT bit of control
 *    port 1, according to the chart below:
 *
 *    Left DIP switch || DOWN    | UP      |
 *    =================================================
 *    AB_SELECT = 0   || 0xA0000 | 0xB0000 |
 *    -------------------------------------------------
 *    AB_SELECT = 1   || 0xD0000 | 0xE0000 |
 *    -------------------------------------------------
 *
 *    When the RAM_ENABLE bit of control port 1 is clear (0), the
 *    video ram is disconnected from the computer bus. This makes
 *    it possible, in principle, to share memory space with other
 *    devices (such as VGA) which can also disconnect themselves
 *    from the bus. It also means that multiple CORTEX-I boards
 *    can share the same video memory space. Disconnecting from the
 *    bus does not affect the video display of the video ram contents,
 *    so that one needs only set the RAM_ENABLE bit when actually
 *    reading or writing to memory. The cost of this is low,
 *    the benefits to me are great (I need more than one board
 *    in my machine, and 0xE0000 is the only address choice that
 *    doesn't conflict with anything) so I adopt this strategy here.
 *
 *    XXX-Note... this driver has only been tested for the
 *    XXX base = 0xE0000 case!
 *
 * 3) There is a deficiency in the documentation from ImageNation, I
 *    think. In order to successfully load the lookup table, it is
 *    necessary to clear SEE_STORED_VIDEO in control port 0 as well as
 *    setting LUT_LOAD_ENABLE in control port 1.
 *
 * 4) This driver accesses video memory through read or write operations.
 *    Other functionality is provided through ioctl's, manifest
 *    constants for which are defined in ioctl_ctx.h. The ioctl's
 *    include:
 *      CTX_LIVE        Display live video
 *      CTX_GRAB        Grab a frame of video data
 *      CTX_H_ORGANIZE  Set things up so that sequential read
 *                      operations access horizontal lines of
 *                      pixels.
 *      CTX_V_ORGANIZE  Set things up so that sequential read
 *                      operations access vertical lines of
 *                      pixels.
 *      CTX_SET_LUT     Set the lookup table from an array
 *                      of 256 unsigned chars passed as the
 *                      third parameter to ioctl.
 *      CTX_GET_LUT     Return the current lookup table to
 *                      the application as an array of 256
 *                      unsigned chars. Again the third
 *                      parameter to the ioctl call.
 *
 * Thus,
 *      ioctl(fi, CTX_H_ORGANIZE, 0);
 *      lseek(fi, y*512, SEEK_SET);
 *      read(fi, buffer, 512);
 *
 * will fill buffer with 512 pixels (unsigned chars) which represent
 * the y-th horizontal line of the image.
 * Similarly,
 *      ioctl(fi, CTX_V_ORGANIZE, 0);
 *      lseek(fi, x*512+y, SEEK_SET);
 *      read(fi, buffer, 10);
 *
 * will read a vertical line of 10 pixels starting at (x,y).
 *
 * Obviously, this sort of ugliness needs to be hidden away from
 * the casual user, with an appropriate set of higher level
 * functions.
 */

#include "use_ctx.h"

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/conf.h>
#include <sys/uio.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <bus/isa/i386/isa_device.h>
#include "ctxreg.h"
#include <machine/ioctl_ctx.h>
#include <machine/md_var.h>

static int waitvb(int port);

/* state flags */
#define OPEN (0x01)		/* device is open */

#define UNIT(x) ((x) & 0x07)

static int ctxprobe (struct isa_device *devp);
static int ctxattach (struct isa_device *devp);
struct isa_driver ctxdriver = {ctxprobe, ctxattach, "ctx"};

static d_open_t ctxopen;
static d_close_t ctxclose;
static d_read_t ctxread;
static d_write_t ctxwrite;
static d_ioctl_t ctxioctl;
#define CDEV_MAJOR 40

static struct cdevsw ctx_cdevsw = {
	/* name */	"ctx",
	/* maj */	CDEV_MAJOR,
	/* flags */	0,
	/* port */	NULL,
	/* clone */	NULL,

	/* open */	ctxopen,
	/* close */	ctxclose,
	/* read */	ctxread,
	/* write */	ctxwrite,
	/* ioctl */	ctxioctl,
	/* poll */	nopoll,
	/* mmap */	nommap,
	/* strategy */	nostrategy,
	/* dump */	nodump,
	/* psize */	nopsize
};

#define LUTSIZE 256	/* buffer size for Look Up Table (LUT) */
#define PAGESIZE 65536	/* size of one video page, 1/4 of the screen */

/*
 * Per unit shadow registers (because the dumb hardware is RO)
 */

static struct ctx_soft_registers {
	u_char *lutp;
	u_char cp0;
	u_char cp1;
	u_char flag;
	int iobase;
	caddr_t maddr;
	int msize;
} ctx_sr[NCTX];

static int
ctxprobe(struct isa_device *devp)
{
	int status;

	if (inb(devp->id_iobase) == 0xff)	/* 0xff only if board absent */
		status = 0;
	else
		status = 1;	/* XXX uses only one port? */
	return (status);
}

static int
ctxattach(struct isa_device *devp)
{
	struct ctx_soft_registers *sr;

	sr = &(ctx_sr[devp->id_unit]);
	sr->cp0 = 0;	/* zero out the shadow registers */
	sr->cp1 = 0;	/* and the open flag.  wait for */
	sr->flag = 0;	/* open to malloc the LUT space */
	sr->iobase = devp->id_iobase;
	sr->maddr = devp->id_maddr;
	sr->msize = devp->id_msize;
	cdevsw_add(&ctx_cdevsw, -1, devp->id_unit);
	make_dev(&ctx_cdevsw, devp->id_unit, 0, 0, 0600,
	    "ctx%d", devp->id_unit);
	return (1);
}

static int
ctxopen(dev_t dev, int flags, int fmt, struct thread *td)
{
	struct ctx_soft_registers *sr;
	u_char unit;
	int i;

	unit = UNIT(minor(dev));

	/* minor number out of range? */

	if (unit >= NCTX)
		return (ENXIO);
	sr = &(ctx_sr[unit]);

	if (sr->flag != 0)	/* someone has already opened us */
		return (EBUSY);

	/* get space for the LUT buffer */

	sr->lutp = malloc(LUTSIZE, M_DEVBUF, M_WAITOK);
	if (sr->lutp == NULL)
		return (ENOMEM);

	sr->flag = OPEN;

	/*
	 * Set up the shadow registers.  We don't actually write these
	 * values to the control ports until after we finish loading the
	 * lookup table.
	 */
	sr->cp0 |= SEE_STORED_VIDEO;
	if ((kvtop(sr->maddr) == 0xB0000) || (kvtop(sr->maddr) == 0xE0000))
		sr->cp1 |= AB_SELECT;	/* map to B or E if necessary */
					/* but don't enable RAM */
	/*
	 * Set up the lookup table initially so that it is transparent.
	 */

	outb(sr->iobase + ctx_cp0, (u_char) 0);
	outb(sr->iobase + ctx_cp1, (u_char) (LUT_LOAD_ENABLE | BLANK_DISPLAY));
	for (i = 0; i < LUTSIZE; i++) {
		outb(sr->iobase + ctx_lutaddr, (u_char) i);
		sr->lutp[i] = (u_char) i;
		outb(sr->iobase + ctx_lutdata, (u_char) sr->lutp[i]);
	}
	/*
	 * Disable LUT loading, and push the data in the shadow
	 * registers into the control ports.
	 */
	outb(sr->iobase + ctx_cp0, sr->cp0);
	outb(sr->iobase + ctx_cp1, sr->cp1);
	return (0);	/* successful open.  All ready to go. */
}

static int
ctxclose(dev_t dev, int flags, int fmt, struct thread *td)
{
	int unit;

	unit = UNIT(minor(dev));
	ctx_sr[unit].flag = 0;
	free(ctx_sr[unit].lutp, M_DEVBUF);
	ctx_sr[unit].lutp = NULL;
	return (0);
}

static int
ctxwrite(dev_t dev, struct uio *uio, int ioflag)
{
	int unit, status = 0;
	int page, count, offset;
	struct ctx_soft_registers *sr;
	u_long ef;

	unit = UNIT(minor(dev));
	sr = &(ctx_sr[unit]);

	if (uio->uio_offset < 0)
		return (EINVAL);
	if (uio->uio_offset >= 4 * PAGESIZE)
		page = 4;	/* EOF */
	else
		page = (u_int)uio->uio_offset / PAGESIZE;
	offset = (u_int)uio->uio_offset % PAGESIZE;
	count = min(uio->uio_resid, PAGESIZE - offset);
	while ((page >= 0) && (page <= 3) && (count > 0)) {
		sr->cp0 &= ~3;
		sr->cp0 |= page;
		outb(sr->iobase + ctx_cp0, sr->cp0);

		/*
		 * Before doing the uiomove, we need to "connect" the frame
		 * buffer ram to the machine bus.  This is done here so that
		 * we can have several different boards installed, all sharing
		 * the same memory space... each board is only "connected" to
		 * the bus when its memory is actually being read or written.
		 * All my instincts tell me that I should disable interrupts
		 * here, so I have done so.
		 */

		ef = read_eflags();
		cpu_disable_intr();
		sr->cp1 |= RAM_ENABLE;
		outb(sr->iobase + ctx_cp1, sr->cp1);
		status = uiomove(sr->maddr + offset, count, uio);
		sr->cp1 &= ~RAM_ENABLE;
		outb(sr->iobase + ctx_cp1, sr->cp1);
		write_eflags(ef);

		page = (u_int)uio->uio_offset / PAGESIZE;
		offset = (u_int)uio->uio_offset % PAGESIZE;
		count = min(uio->uio_resid, PAGESIZE - offset);
	}
	if (uio->uio_resid > 0)
		return (ENOSPC);
	else
		return (status);
}

static int
ctxread(dev_t dev, struct uio *uio, int ioflag)
{
	int unit, status = 0;
	int page, count, offset;
	struct ctx_soft_registers *sr;
	u_long ef;

	unit = UNIT(minor(dev));
	sr = &(ctx_sr[unit]);

	if (uio->uio_offset < 0)
		return (EINVAL);
	if (uio->uio_offset >= 4 * PAGESIZE)
		page = 4;	/* EOF */
	else
		page = (u_int)uio->uio_offset / PAGESIZE;
	offset = (u_int)uio->uio_offset % PAGESIZE;
	count = min(uio->uio_resid, PAGESIZE - offset);
	while ((page >= 0) && (page <= 3) && (count > 0)) {
		sr->cp0 &= ~3;
		sr->cp0 |= page;
		outb(sr->iobase + ctx_cp0, sr->cp0);
		/*
		 * Before doing the uiomove, we need to "connect" the frame
		 * buffer ram to the machine bus.  This is done here so that
		 * we can have several different boards installed, all sharing
		 * the same memory space... each board is only "connected" to
		 * the bus when its memory is actually being read or written.
		 * All my instincts tell me that I should disable interrupts
		 * here, so I have done so.
		 */
		ef = read_eflags();
		cpu_disable_intr();
		sr->cp1 |= RAM_ENABLE;
		outb(sr->iobase + ctx_cp1, sr->cp1);
		status = uiomove(sr->maddr + offset, count, uio);
		sr->cp1 &= ~RAM_ENABLE;
		outb(sr->iobase + ctx_cp1, sr->cp1);
		write_eflags(ef);

		page = (u_int)uio->uio_offset / PAGESIZE;
		offset = (u_int)uio->uio_offset % PAGESIZE;
		count = min(uio->uio_resid, PAGESIZE - offset);
	}
	if (uio->uio_resid > 0)
		return (ENOSPC);
	else
		return (status);
}

static int
ctxioctl(dev_t dev, u_long cmd, caddr_t data, int flags, struct thread *td)
{
	int error;
	int unit, i;
	struct ctx_soft_registers *sr;

	error = 0;
	unit = UNIT(minor(dev));
	sr = &(ctx_sr[unit]);

	switch (cmd) {
	case CTX_LIVE:
		sr->cp0 &= ~SEE_STORED_VIDEO;
		outb(sr->iobase + ctx_cp0, sr->cp0);
		break;
	case CTX_GRAB:
		sr->cp0 &= ~SEE_STORED_VIDEO;
		outb(sr->iobase + ctx_cp0, sr->cp0);
		sr->cp0 |= ACQUIRE;
		if (waitvb(sr->iobase))	/* wait for vert blank to start acquire */
			error = ENODEV;
		outb(sr->iobase + ctx_cp0, sr->cp0);
		if (waitvb(sr->iobase))	/* wait for two more to finish acquire */
			error = ENODEV;
		if (waitvb(sr->iobase))
			error = ENODEV;
		sr->cp0 &= ~ACQUIRE;	/* turn off acquire and turn on display */
		sr->cp0 |= SEE_STORED_VIDEO;
		outb(sr->iobase + ctx_cp0, sr->cp0);
		break;
	case CTX_H_ORGANIZE:
		sr->cp0 &= ~PAGE_ROTATE;
		outb(sr->iobase + ctx_cp0, sr->cp0);
		break;
	case CTX_V_ORGANIZE:
		sr->cp0 |= PAGE_ROTATE;
		outb(sr->iobase + ctx_cp0, sr->cp0);
		break;
	case CTX_SET_LUT:
		bcopy((u_char *) data, sr->lutp, LUTSIZE);
		outb(sr->iobase + ctx_cp0, (u_char) 0);
		outb(sr->iobase + ctx_cp1, (u_char) (LUT_LOAD_ENABLE | BLANK_DISPLAY));
		for (i = 0; i < LUTSIZE; i++) {
			outb(sr->iobase + ctx_lutaddr, i);
			outb(sr->iobase
+ ctx_lutdata, sr->lutp[i]); 428: } 429: outb(sr->iobase + ctx_cp0, sr->cp0); /* restore control 430: * registers */ 431: outb(sr->iobase + ctx_cp1, sr->cp1); 432: break; 433: case CTX_GET_LUT: 434: bcopy(sr->lutp, (u_char *) data, LUTSIZE); 435: break; 436: default: 437: error = ENODEV; 438: } 439: 440: return (error); 441: } 442: 443: static int 444: waitvb(int port) 445: { /* wait for a vertical blank, */ 446: if (inb(port) == 0xff) /* 0xff means no board present */ 447: return (1); 448: 449: while ((inb(port) & VERTICAL_BLANK) != 0) { 450: } 451: while ((inb(port) & VERTICAL_BLANK) == 0) { 452: } 453: 454: return (0); 455: }
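The page/offset/count arithmetic that ctxread() and ctxwrite() repeat on each loop iteration is easy to miss inside the register chatter. As a quick illustration, here is the same calculation in Python (the PAGESIZE value is an assumption for illustration only; the driver gets it from its header):

```python
PAGESIZE = 65536  # assumed page size; the real driver defines PAGESIZE elsewhere

def rw_window(uio_offset, resid):
    """Mirror the page/offset/count arithmetic of ctxread()/ctxwrite()."""
    if uio_offset >= 4 * PAGESIZE:
        return 4, 0, 0          # page 4 is the loop's EOF sentinel
    page = uio_offset // PAGESIZE
    offset = uio_offset % PAGESIZE
    # one uiomove() never crosses the end of the current frame-buffer page
    count = min(resid, PAGESIZE - offset)
    return page, offset, count
```

The while loop in the driver keeps recomputing this triple after each uiomove(), so a transfer that straddles a page boundary is split into per-page chunks, with the page selected in cp0 before each chunk moves.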
http://www.dragonflybsd.org/cvsweb/src/sys/dev/video/ctx/ctx.c?f=h;content-type=text%2Fx-cvsweb-markup;ln=1;rev=1.8
Lead Image © Ivan Mikhaylov, 123RF.com

Blending Java with other programming languages

It's in the Blend

Neither project managers nor programmers mix languages purely for pleasure. The former worry about the complexity and the increased demands on their team, while the latter prefer to use the things they feel most comfortable with – and for most of them, that is precisely one programming language. Having said this, a second programming language often does more good than harm in a software project:

- Bazaar method: The more expensive the software, the more you benefit from leveraging tried and trusted routines – even if the recycled code is written in a different language.
- Performance: If you need low-level access to hardware and operating system resources, low-level programming languages offer superior performance.
- Scripting: Scripts and scripting languages give power users powerful tools. This is still the case with complex enterprise applications based on J2EE.
- Integration capability: Java applications can act as one component in a greater context and may need to adapt to match the look and feel of a desktop.

Java is a very powerful language, so it comes as no surprise that it also supports integration of software written in third-party languages for any of these use cases. The following sections look into these techniques, along with their benefits and drawbacks.

Nothing Works Without C

One reason for Java's success is that it protects developers from a few tricky things that were required in C and C++, in particular dynamic memory management. Despite this, the makers of Java saw from the outset that Java would have difficulty surviving in the real world without being able to access the versatile resources of existing C libraries. Thus, even the very first Java version offered the option of integrating C code in the form of the Java Native Interface (JNI). Java access to C relies on two layers (Figure 1).
At the top is a lean Java class, which does little more than describe the interface and load the next layer by means of a static method. From a technology point of view, the second layer is a dynamic library (.so file, or a DLL on Windows) that converts Java calls to C.

Here is a practical example: In one of my projects [1], the task was to make the Readline library available in Java. Plain Java programs offer no line editing when reading from standard input. In contrast, Readline provides an input line in which the user can navigate with keystrokes and edit in a style similar to Bash (Figure 2). The library is used, for example, in Jython [2]; Jython implements the Python language in Java.

Listing 1 shows part of the class definition. For design reasons, all of the native methods are private; in other words, there is a private native String readlineImpl() (line 34) for the public String readline() method (line 17). The special thing about this class is that it is not abstract; it nevertheless only defines an interface for these native methods. The load() method (lines 10-13) loads the native library.

Listing 1: Snippet of the Readline.java Class

01 package org.gnu.readline;
02
03 import java.io.*;
04 import java.util.*;
05
06 public class Readline {
07
08   [...]
09
10   public static final void load(ReadlineLibrary lib) throws UnsatisfiedLinkError {
11     [...]
12     System.loadLibrary(lib.getName()); // might throw UnsatisfiedLinkError
13   }
14
15   [...]
16
17   public static String readline(String prompt, boolean addToHist)
18     throws EOFException, IOException, UnsupportedEncodingException {
19     [...]
20     String line = readlineImpl(prompt);
21     if ((line != null) && (addToHist)) {
22       addToHistory(line);
23     }
24     return line;
25   }
26
27   public static void addToHistory(String line) {
28     [...]
29     addToHistoryImpl(line);
30   }
31
32   [...]
33
34   private native static String readlineImpl(String prompt)
35     throws EOFException, UnsupportedEncodingException;
36
37   private native static void addToHistoryImpl(String line);
38   [...]
39 }

The javah tool provided by the JDK generates the C header file, org_gnu_readline_Readline.h, from the bytecode of the class:

javah -classpath $(BUILDDIR) -jni org.gnu.readline.Readline

Listing 2 shows a snippet of this. The actual implementation then involves converting the Java types passed in – such as jstring to char* – into a format that the C library understands. There are a couple of pitfalls here, especially on non-Unicode systems.

Listing 2: Snippet of the Generated C Header

01 /* DO NOT EDIT THIS FILE - it is machine generated */
02 #include <jni.h>
03 /* Header for class org_gnu_readline_Readline */
04 [...]
05 /*
06  * Class:     org_gnu_readline_Readline
07  * Method:    initReadlineImpl
08  * Signature: (Ljava/lang/String;)V
09  */
10 JNIEXPORT void JNICALL Java_org_gnu_readline_Readline_initReadlineImpl
11   (JNIEnv *, jclass, jstring);
12
13 /*
14  * Class:     org_gnu_readline_Readline
15  * Method:    cleanupReadlineImpl
16  * Signature: ()V
17  */
18 [...]

Magic of Images

Web applications implemented in Java that make intensive use of image editing features frequently rely on JMagick [3]. JMagick is a Java-C interface for ImageMagick, and it's a good example of why JNI programming can be a frustrating experience.

One issue is stability: integrating buggy C code injects atypical problems into your application – very often crashes, or memory leaks that gradually drag down the application server. A second problem is inherent to ImageMagick, whose developers often change the interfaces. That means the JMagick wrapper works only with precisely the ImageMagick version for which it was built.
Administrators either need to install a version of ImageMagick that does not match their choice of distribution, or compile JMagick for precisely the current ImageMagick ABI; the latter involves code changes to JMagick and makes maintaining the code base correspondingly difficult.

An alternative called Im4java [4], which is another of my projects, does without the performance benefits of JNI and instead offers a stable, object-oriented interface to ImageMagick. It integrates ImageMagick by directly calling the ImageMagick executable (typically convert) via the ProcessBuilder class. This means that the non-Java code runs in a separate process and can never cause any damage.

Scripting

As a compiled, high-level language, Java lacks the benefits of scripting, and several attempts have been made to compensate for this drawback. Both approaches work: integrating Java into a scripting language (which makes Java the third-party language) and integrating scripting languages into Java.

An important representative of the first category is Python in its Jython embodiment. Jython is an implementation of the standard Python language in Java, which makes it possible to import and use Java packages from within Python. This option may seem unnecessary from the Python point of view, because CPython has a treasure trove of packages in its repository. However, if you need to integrate legacy Java applications, Jython is definitely an interesting option.
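The out-of-process pattern Im4java uses, driving the convert executable from a child process instead of binding to ImageMagick's C ABI, works the same way in any language. Here is a minimal Python sketch of that pattern (the function names and the resize option are mine for illustration, not Im4java's API):

```python
import subprocess

def build_convert_cmd(src, dst, size="640x480"):
    # Talk to ImageMagick's CLI instead of its C ABI: an interface change
    # upstream costs a command-line tweak, not a recompile of a JNI wrapper.
    return ["convert", src, "-resize", size, dst]

def run_convert(src, dst, size="640x480"):
    # The native code runs in a separate process, so a crash or memory leak
    # in ImageMagick cannot bring down the calling runtime.
    subprocess.run(build_convert_cmd(src, dst, size), check=True)
```

The trade-off is the process-spawn overhead on every call, which is exactly the performance benefit of JNI that this approach gives up.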
https://www.admin-magazine.com/Archive/2015/26/Blending-Java-with-other-programming-languages
Distribution of JSON values with Python

Back in the early 1990s, the company I worked for had a program we used for data exploration of flat files. It quickly created a distribution of values, which was very useful for getting a quick look at the cardinality of values within a field before planning how to perform any ELT steps. While the popular data format has changed over the years, I found myself confronted with a similar situation with JSON data. What I needed was a small program to read through many source data files containing one JSON record per line.

For reasons that are fuzzy to me now, the original program was called something like "mval" and I think the original developer's name was Mike, so I decided to call this program "gval". I've also needed to do Python programming on Linux in 2014; therefore this program is in Python.

The program is simple and makes use of standard Python libraries. Even so, the code to parse arguments is nearly as long as the code that performs the actual processing. Yes, the argument processing is overkill, but I didn't want to have to commit parameters to my long-term memory if I could simply type python gval.py -h.

Python can walk a directory and all of its subdirectories, which allows the program to process all the files with a .json extension. Each JSON file is expected to have one or more JSON records, one per line. Beyond walking the directory tree, the rest is very straightforward Python, using a standard dictionary to collect the counts of values.

If there is a very large number of unique values, the program will run out of memory and die. That is acceptable: the point is a distribution over a reasonable number of values, and a distribution with one unique value per record is useless when the goal is to see a reasonable number of values and the count of each.
If this data were in an RDBMS, the SQL analogous to what this Python program does would be a statement like:

select value, count(*) as cnt
from foo
group by value
order by cnt desc

In overall terseness, the SQL statement wins with fewer lines of code, but the data would first have to be loaded into an RDBMS, which defeats the purpose of quickly exploring the data to develop a processing plan.

Another neat thing that Python provides is the lambda function for sorting. In this case, the results are sorted by the count (to match the order by cnt desc part of the SQL statement). Further processing of the results may be achieved by piping to another program to reformat the output (use sed) or insert it into a datastore.

The parameters to the program are:

python gval.py -h
  -e, --element    JSON element to distribute
  -i, --input      Input directory
  -h, --help       This message

A test set of data, gzipped, is available here. The source of this data is the data set used for the GunStockMarket. Using the test data, the output is:

python gval.py -e firearm
Input directory: .
JSON element: firearm
Processing: GunStockMarketSample.json
SKS: 431
Mosin Nagant: 269
Springfield Armory M1A: 208
M1 Garand: 163
Mauser Rifle: 65
Lee Enfield: 62
M1917 Revolver: 50
Luger Pistol: 45
Mannlicher M95: 35
CZ-52: 34
PSL: 32
Nagant M1895 Revolver: 19
Hakim: 17
Swiss K-31: 16
TT-33: 15
CZ-82: 14
SVT-40: 14
VZ-52: 9
Yugo M76: 5
Rasheed: 3

And finally, here's the script:

import os
import sys
import getopt
import json

input_dir = "."
element = ""

options, remainder = getopt.getopt(sys.argv[1:], 'i:e:h',
                                   ['i=', 'input=', 'element=', 'help'])

for opt, arg in options:
    if opt in ('-i', '--input'):
        input_dir = arg
    elif opt in ('-e', '--element'):
        element = arg
    elif opt in ('-h', '--help'):
        print ' -e, --element'
        print '      JSON element to distribute'
        print ' -i, --input'
        print '      Input directory'
        print ' -h, --help'
        print '      This message'

if element == "":
    sys.exit()

print 'Input directory: ' + input_dir
print 'JSON element: ' + element

distribution = dict()

for subdir, dirs, files in os.walk(input_dir):
    for file in files:
        if file.endswith(".json"):
            with open(subdir + '/' + file, 'r') as f:
                print "Processing: " + file
                for line in f.readlines():
                    json_data = json.loads(line)
                    value = json_data[element] if element in json_data else ""
                    if value in distribution:
                        distribution[value] = distribution[value] + 1
                    else:
                        distribution[value] = 1  # first occurrence counts as 1

for k, v in sorted(distribution.iteritems(), key=lambda (k, v): (v * -1, k)):
    print u'{0}: {1}'.format(k, v)
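The group-by that the script implements with a plain dictionary can also be written with collections.Counter from the standard library. A short Python 3 sketch (the distribute() helper is my naming, not part of gval):

```python
import json
from collections import Counter

def distribute(lines, element):
    """Count occurrences of one JSON element across newline-delimited records."""
    dist = Counter(json.loads(line).get(element, "") for line in lines)
    # most_common() returns descending counts, like "order by cnt desc"
    return dist.most_common()
```

One small difference: most_common() breaks count ties by insertion order rather than by key, so tied values may come out in a different order than gval's lambda sort produces.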
http://georgestragand.com/pythonJSONdistribution.html
How to bind data in XAML Designer

In XAML Designer, you can set data binding properties by using the artboard and the Properties window. The example in this topic shows how to bind data to a control. Specifically, the procedure shows how to create a simple shopping cart class that has a DependencyProperty named ItemCount, and then bind the ItemCount property to the Text property of a TextBlock control.

To create a class to use as a data source

1. Create a C# or Visual Basic project in the Blank App template.

2. Open MainPage.xaml.cs (or MainPage.xaml.vb) and add the following code. In C#, add the code in the projectName namespace (before the final closing brace in the file). In Visual Basic, just add the new class.

Public Class ShoppingCart
    Inherits DependencyObject

    Public Shared ReadOnly ItemCountProperty As DependencyProperty =
        DependencyProperty.Register(
            "ItemCount",
            GetType(Integer),
            GetType(ShoppingCart),
            New PropertyMetadata(0))

    Public Property ItemCount As Integer
        Get
            ItemCount = CType(GetValue(ItemCountProperty), Integer)
        End Get
        Set(value As Integer)
            SetValue(ItemCountProperty, value)
        End Set
    End Property

End Class

This code sets a value of 0 as the default item count by using the PropertyMetadata object.

3. Click Build > Build Solution.

To bind the ItemCount property to a TextBlock control

1. Right-click MainPage.xaml and then click View Designer.

2. In the Document Outline window, select the root Grid panel, which appears as [Grid] in the window.

3. With the Grid selected, click the New button next to the DataContext property in the Properties window.

4. In the Select Object dialog box, make sure that Show all assemblies is cleared, select ShoppingCart under the projectName namespace, and then click OK. The following illustration shows the Select Object dialog box with ShoppingCart selected.

5. In the Toolbox, double-click a TextBlock control to add it to the artboard.

6. With the TextBlock control selected, click the property marker to the right of the Text property under Common in the Properties window. (The property marker looks like a small box.) Click Create Data Binding in the resulting menu.

7. With a binding type of Data context, which is the default value in the Binding type list, select the ItemCount property in the Path box, and then click OK. The following illustration shows the Create Data Binding dialog box with the ItemCount property selected.

8. Press F5 to start the app. The TextBlock control should show the default value of 0 as text.
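The machinery behind the walkthrough is a property registered with a default value: reads of ItemCount fall back to the PropertyMetadata default (0) until something sets a local value, which is why the bound TextBlock shows 0 on first run. The idea can be sketched framework-neutrally; this Python sketch uses my own names, not the WPF/WinRT API:

```python
class Bindable:
    """Sketch of a dependency-property store: get_value falls back to the
    registered default (like New PropertyMetadata(0)) until set_value runs."""
    _defaults = {}

    @classmethod
    def register(cls, name, default):
        cls._defaults[name] = default

    def get_value(self, name):
        # a locally set value wins; otherwise the registered default applies
        return self.__dict__.get(name, self._defaults[name])

    def set_value(self, name, value):
        self.__dict__[name] = value


class ShoppingCart(Bindable):
    pass


ShoppingCart.register("ItemCount", 0)  # the default the bound TextBlock shows
```

The real DependencyProperty system adds change notification on top of this, which is what lets the binding refresh the TextBlock when ItemCount changes.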
http://msdn.microsoft.com/en-us/library/windows/apps/hh921072(v=vs.110).aspx?cs-save-lang=1&cs-lang=vb
This is related to Tcl bug 3600058 "Doctools nroff/groff output not supported by tcltk-man2html" (). Most of the cases listed here are invalid input or invalid nroff. Valid nroff not supported by the Tcl tool is being addressed in the Tcl bug.

Required action: most of the issues below have proposed fixes, and those fixes have been implemented (or are being implemented) in a patch. Mostly I need a YES/NO on each proposed fix (and for NO a suggested approach).

Issue #1: Rendering of Copyright is inappropriate for files in the public domain

c_get_copyright in doctools/mpformats/_common.tcl currently renders public domain as "Copyright (c) Public domain". Suggested fix is to handle "public domain" as a special case that doesn't get a "Copyright (c)" prefix.

Issue #2: Some Copyright statements include the word "copyright"

Invalid input problem; examples are amazon-s3/S3.man and amazon-s3/xsxp.man. Option A is to strip a leading "copyright" and/or "(c)" from the copyright parameter. Option B is to fix the .man pages, but I'm hesitant to do so as (i) this probably should require the copyright holder's approval; and (ii) if any authors maintain their own upstream repositories, the change should be committed there. Proposed fix is A (as a workaround), and to request authors to approve or make a change to the source.

Issue #3: Doctools can generate nroff output where non-directive lines start with a period (invalid nroff)

See for example snit/snitfaq.man, where [example] blocks may contain widget names like '.text'. Suggested fix is that the nroff formatter (mpformats/_nroff.tcl) should quote a leading "." on non-directive lines.

Issue #4: Doctools can still generate nroff output where non-directive lines start with a period

Parts of the nroff output are explicitly handled by mpformats/fmt.nroff, in particular the output of keywords. If a keyword starts with a period (e.g. .dtx in docstrip/docstrip.man) it will be sorted to the front of the keyword list and result in invalid nroff output. Suggested fix is to explicitly check for and quote a leading period in the keyword line (in fmt.nroff).

Issue #5: Bad output from ldap/ldap.man

This turns out to be a bad input problem - the .man file has "[call ::ldap::searchInit]" in the middle of a paragraph, instead of "[cmd ::ldap::searchInit]".

Issue #6: Bad output from pop3/pop3.man

This turns out to be a bad input problem - the .man file has a trailing period on the line "[opt_def -retr-mode ...].".

Issue #7: Spaces in man page names

struct/graph1.man, struct/matrix1.man, and struct/struct_tree1.man all have [manpage_begin] statements in which the name contains a space: {struct::graph v1}, {struct::matrix v1}, and {struct::tree v1} respectively. Proposed fix is to use an underscore (_) instead of a space, giving struct::graph_v1, struct::matrix_v1, and struct::tree_v1.

Issue #8: Inappropriate copyright statements

A number of files contain copyright statements in formats that may not have appropriate legal effect.

No date:
expander: COPYRIGHT: merge-copyrights: unrecognised format: Copyright (c) William H. Duquette,
smtpd: COPYRIGHT: merge-copyrights: unrecognised format: Copyright (c) Pat Thoyts <patthoyts@users.sourceforge.net>

Invalid date:
namespacex: COPYRIGHT: merge-copyrights: unrecognised format: Copyright (c) 200? Neil Madden ()
namespacex: COPYRIGHT: merge-copyrights: unrecognised format: Copyright (c) 200? Various ()
soundex: COPYRIGHT: merge-copyrights: unrecognised format: Copyright (c) ????, Algorithm: Donald E. Knuth

Weird date:
OpenTcp: CATEGORY: merge-copyrights: unrecognised date format: Copyright (c) 1996-7 Sun Microsystems, Inc.

I'm really not sure how to handle these. The best option may be to request the authors to fix their copyright statements.
The effect is that for now these authors won't (necessarily) be reflected in the summary of copyright holders that is added to the table of contents.

Issue #9: Multiple definitions for coroutine, doctools::idx, interp, tie, try

Tcllib packages define man pages with the same names as the Tcl core commands coroutine, interp, and try. Proposed fix is to add a "tcllib" prefix to the Tcllib pages and fix up references. Within Tcllib there are multiple man pages providing definitions of doctools::idx and tie. Proposed fix is to add a version suffix (as for graph/matrix/tree) and fix up references.

Issue #10: Uninvestigated

Once the above issues are resolved, three more types of error remain when converting .man --(doctools)--> .n --(tcltk-man2html)--> .html:

A. struct_list: make-manpage-section: ignoring .TP after .TP
B. treeql: EXAMPLES: output-directive: unrecognized format directive: ... TODO ...
C. calculus.n: reference error: Out of place end-quote: {<PRE>Dy'' + ky = 0 x = 0: y = 1 x = L: y = 0</PRE>}

The causes have not yet been investigated.

Proposed fixes pushed to branch bug-3601370-td. In combination with the Tcl tcltk-man2html fixes, it looks like the only remaining issues are invalid copyright statements:

expander: COPYRIGHT: merge-copyrights: unrecognised format: Copyright (c) William H. Duquette,
namespacex: COPYRIGHT: merge-copyrights: unrecognised format: Copyright (c) 200? Neil Madden ()
namespacex: COPYRIGHT: merge-copyrights: unrecognised format: Copyright (c) 200? Various ()
smtpd: COPYRIGHT: merge-copyrights: unrecognised format: Copyright (c) Pat Thoyts <patthoyts@users.sourceforge.net>
soundex: COPYRIGHT: merge-copyrights: unrecognised format: Copyright (c) ????, Algorithm: Donald E. Knuth

I think it would be a good idea for the doctools checker to include a check that the manpage name does not contain spaces, and to warn if the copyright statement has an inappropriate format.
Andreas Kupries 2013-01-18

Andreas Kupries 2013-01-21

New revision on the branch: [6606f5686a]. Added a check to manpage_begin that rejects spaces in the title. Message catalogs were extended with a new warning 'mptitle' for spaces in the manpage title. The French catalog contains the English text and needs a translation.

For the copyright check I would need a nice regexp, or two. Even without that check, and the missing French translation of my change, I see no trouble merging this into the tcllib-1-15-rc branch and working in that.

Andreas Kupries 2013-01-29

Andreas Kupries 2013-01-29

Merged into the release branch now. Bumped version to 1.4.14. Closing. Reopen or create a new bug for the missing French translation of the new warning message, and/or the regexes to check for issues with copyright lines.
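The period-quoting fix proposed in Issues #3 and #4 boils down to one rule: nroff treats any line beginning with "." or "'" as a control line, and prefixing the zero-width escape \& makes it ordinary text. A Python sketch of that rule (the real patch lives in the Tcl formatter, and the function name here is mine):

```python
def quote_control_lines(text):
    """Prefix nroff's zero-width escape \\& to lines that would otherwise
    be parsed as control lines, i.e. lines starting with '.' or "'"."""
    quoted = []
    for line in text.split("\n"):
        if line.startswith((".", "'")):
            line = "\\&" + line  # \& is invisible in the rendered output
        quoted.append(line)
    return "\n".join(quoted)
```

Applied to the snitfaq example, a body line such as ".text" becomes "\&.text", which groff renders as the literal widget name instead of choking on an unknown request.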
http://sourceforge.net/p/tcllib/bugs/1344/
HI - Java Beginners case & also i do subtraction & search dialognal element in this. Hi friend, Code to help in solving the problem : public class twoDimension...HI how i make a program if i take 2 dimensional array in same again - Java Beginners hi again i did the changes on the code but still the time is not decreasing i wanna reach increasing running time target sorry for asking too... number of the threads as i got from what is shown here hi Friend... - Java Beginners hi Friend... Hi friend... I have to import only... plz Explain this...Thank u.. Sakthi Hi friend, Java IO : The Java Input/Output (I/O) is a part of java.io package. The java.io package friends, I want download c&c++ software.....can u send me link....its urgent...please send me likn or site name Hi Friend Pradip Dutta Go this link Here you can fine your java - Swing AWT java how can i add items to combobox at runtime from jdbc Hi Friend, Please visit the following link: Thanks Hi Friend query - Swing AWT java swing awt thread query Hi, I am just looking for a simple example of Java Swing Java - Swing AWT . How can I run an application of Microsoft Windows like notepad or paint from my swing program? Hi friend, import java.awt. hi - Java Beginners hi hi sir,Thanks for ur coporation, i am save the 1 image with customer data,when i am search that customer data,i want to see that image (already...("Frame in Java Swing"); f.getContentPane().setLayout(null); l=new JLabel java - Swing AWT java hello sir.. i want to start the project of chat server in java please help me out how to start it?? urgently.... Hi friend, To solve problem to visit this link....... Hi... - Java Beginners Hi... I want to write code for change password from data base please... java bean file for setting and getting and other is .jsp file this file is not work please tell me that what is the problem in this file hi - Java Beginners Sorting String Looking for an example to sort string in Java. 
...;)); writer.write("Before shorting :\n"); for(int i=0; i < 5 ; i++) { writer.write(words[i] + " "); } java.util.Arrays.sort(words Java Problem Steps - Swing AWT Java Problem Steps How to create a Jar File From the Dos prompt of a Swing program having two classes in the program with one is public class one of them hi - Java Beginners hi hi sir,i want a program for when i am place the mouse cursor in textfield box then automatically 1 number is generated,again i am place the cursor in that field the sequence no is generated.that is my problem Java I/O problem Java I/O problem Write a Java application that prompts the user to input their Name, Address, Date of Birth and Student ID number using the standard input - this information should then be saved to a file named studentData Hi, - Java Beginners Hi, Hi Friends, I want to write code in java for change...,con_pwd Thanks Hi Friend, Create a database table login[id...=con.createStatement(); int i=st.executeUpdate("update login set password printout problem printout problem Hi I have created a swing application which enter some data and fetch some data from DB . Now I want a print feature by which i... information, visit the following link: Swing programming - Java Beginners { JFrame f = new JFrame("Frame in Java Swing"); f.getContentPane... someone help me. Hi Friend, We have modified your code,try... in Java Swing"); f.getContentPane().setLayout(null); JLabel lbl1 = new JLabel Need Help with Java-SWING progrmming - Swing AWT Need Help with Java-SWING progrmming Hi sir, I am a beginner in java-swing programming. I want to know how we can use the print option...:// Thanks Java Swing - Java Beginners Java Swing Hi, I'm a java programmer, developed few java swing projects.. i want to make that swing jar packages to windows installer to .msi or .exe is that possible? after converting is it necessary that java should New Features of JAVA SE 6. . Changes in I/O This is a new feature added in Java SE 6, which has... 
New Features of JAVA SE 6.  ... been made to the JMX API in Java SE 6 like : JMX API now completely supports servlet program problem - Java Beginners file to run it Hi Friend, Please clarify your problem. Thanks...servlet program problem i used ur servlet example prg with xml file of helloworld program and i run dat program in tomcat, it shows only Java swing code - Java Beginners :// I hope this would be helpful to you...Java swing code How to set a font for a particular cell in JTable? Hi Friend, Please visit the following link for any kind of help Java - Swing AWT Java Hi friend,read for more information, java - Swing AWT java Hi, I override the mouseClicked interface in my code and get...(). Is there a way to obtain the text at that location? Hi Friend, Please clarify your question. Thanks Hi.. Hi.. what are access specifier available in java java,eclipse - Swing AWT java,eclipse Hi, I have made on program SimpleCalculator on eclipse 3.4.0.But while running the code I am getting the problem for Class File Editor Source not found Source attachment doesnot contain the source Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/11250
#5135 closed (wontfix): Unicode-branch merge broke insertion of binary data

Opened 15 years ago. Closed 15 years ago. Last modified 13 years ago.

Description

Django does not have a BinaryField, or BlobField, or whatever you want to call it, which is a bit sad. In 0.96 and before it nevertheless worked to insert into blob fields with custom SQL. In the SVN release this is broken. An example from my code; info_hash is a blob field (MySQL). Sorry about the formatting; here we go again.

Does not work (custom SQL):

    def create_xbt_file(info_hash, timestamp):
        query = "INSERT INTO xbt_files (info_hash, mtime, ctime) VALUES (%s, %s, %s)"
        from django.db import connection
        cursor = connection.cursor()
        cursor.execute(query, [info_hash, timestamp, timestamp])

This throws a nasty UnicodeDecodeError whenever a byte with a position between 45 and 50 (among others, I guess) is in info_hash. If I bypass Django completely using MySQLdb, then it works fine.

Does work (MySQLdb):

    def create_xbt_file(info_hash, timestamp):
        import MySQLdb
        db = MySQLdb.connect("localhost", DATABASE_USER, DATABASE_PASSWORD, DATABASE_NAME)
        cursor = db.cursor()
        query = "INSERT INTO xbt_files (info_hash, mtime, ctime) VALUES (%s, %s, %s)"
        cursor.execute(query, [info_hash, timestamp, timestamp])
        db.close()

Change History (5)

comment:1 Changed 15 years ago by

comment:2 Changed 15 years ago by

Storing binary data was unsafe before (what if your binary data contained a zero byte?), so it was kind of lucky -- and unsupported -- that it worked at all. It just works even less well now. The real fix here is something like #2417 (adding a proper binary field type). The current workaround is to use base64 encoding (or base96 or some other binary->ascii encoding) on the data before storing it. There's nothing we can do at the text field level, since we are assuming Unicode strings for text, and databases obviously use an encoding when they store stuff, hence we have to convert between the encoding and Python Unicode objects.

comment:3 follow-up: 4 Changed 15 years ago by

So this means that Django officially can't be used to interface with legacy databases or external applications using binary fields? base64 works great when you design the database from scratch, but when integrating your app with another application that uses blob fields it simply won't work. In my case I'm building a web front end for XBT Tracker, which uses blob fields for storing hashes. I can't change that without editing a lot of XBTT's source code, and since I'm no C++ genius that doesn't seem very wise.

comment:4 Changed 15 years ago by

Replying to bjorn.kempen@gmail.com: Malcolm pointed you to the ticket where adding a real binary field to Django is being discussed; if it's an important issue for you, why not head over there and devote some energy to helping improve the proposed patch? ;)

comment:5 Changed 13 years ago by

Instead of implementing a binary field, why don't you just use a cursor from MySQLdb and the SETTINGS connection parameters to perform the connection?
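The base64 workaround mentioned in comment 2 can be sketched as follows. This is a minimal illustration, not the ticket's actual code; the sample info_hash bytes are invented, standing in for the blob values the reporter describes:

```python
import base64

# Invented 20-byte info_hash containing NUL and non-ASCII bytes --
# exactly the kind of value that breaks text-oriented storage paths.
info_hash = bytes([0x00, 0x2D, 0x32, 0xFF, 0x9C] * 4)

# Encode to plain ASCII before handing the value to a text column
# (whether through the ORM or custom SQL); decode after reading it back.
stored = base64.b64encode(info_hash).decode("ascii")
restored = base64.b64decode(stored)
```

The encoded form is pure ASCII, so it survives any Unicode conversion applied to text values; the cost is roughly a 33% size increase and the loss of byte-level queries on the column, which is why this does not help when a legacy schema must keep raw blobs.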
https://code.djangoproject.com/ticket/5135
SYNOPSIS

    #include <unistd.h>

    int fsync(int fildes);

DESCRIPTION

The fsync() function moves all modified data and attributes of the file descriptor fildes to a storage device. When fsync() returns, all in-memory modified copies of buffers associated with fildes have been written to the physical medium.

The fsync() function is different from sync(), which schedules disk I/O for all files but returns before the I/O completes. The fsync() function forces all outstanding data operations to synchronized file integrity completion (see the fcntl.h(3HEAD) definition of O_SYNC). It forces all currently queued I/O operations associated with the file indicated by fildes to the synchronized I/O completion state. All I/O operations are completed as defined for synchronized I/O file integrity completion.

RETURN VALUES

Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error. If the fsync() function fails, outstanding I/O operations are not guaranteed to have been completed.

ERRORS

The fsync() function will fail if:

    EBADF       The fildes argument is not a valid file descriptor.
    EINTR       A signal was caught during execution of the fsync() function.
    EIO         An I/O error occurred while reading from or writing to the file system.
    ENOSPC      There was no free space remaining on the device containing the file.
    ETIMEDOUT   The remote connection timed out. This occurs when the file is on an NFS
                file system mounted with the soft option. See mount_nfs(1M).

In the event that any of the queued I/O operations fail, fsync() returns the error conditions defined for read(2) and write(2).

USAGE

The fsync() function should be used by applications that require that a file be in a known state. For example, an application that contains a simple transaction facility might use fsync() to ensure that all changes to a file or files caused by a given transaction were recorded on a storage medium.

The manner in which the data reach the physical medium depends on both implementation and hardware. The fsync() function returns when notified by the device driver that the write has taken place.

ATTRIBUTES

See attributes(5) for descriptions of the following attributes:

SEE ALSO

mount_nfs(1M), read(2), sync(2), write(2), fcntl.h(3HEAD), fdatasync(3RT), attributes(5), standards(5)
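The transaction pattern described above can be sketched with Python's os.fsync, which wraps this call. The function name and file contents below are invented for illustration:

```python
import os

def write_durably(path, data):
    """Write data to path and call fsync() so the record has reached
    the storage device before this function returns."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        os.fsync(fd)  # blocks until the device driver reports the write took place
    finally:
        os.close(fd)

write_durably("journal.dat", b"transaction record\n")
```

If fsync() raises (for example OSError carrying EIO or ENOSPC), the caller cannot assume the record is on disk, matching the "not guaranteed to have been completed" wording above.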
https://backdrift.org/man/SunOS-5.10/man3c/fsync.3c.html
Kohana (web framework)
From Wikipedia, the free encyclopedia

Kohana is an open-source PHP5 web application framework that uses the Model-View-Controller design pattern. It aims to be secure, lightweight, and easy to learn and use.[2] It supports only version 5 or higher of PHP in order to make full use of the improvements in that version's object-oriented design.

History

Kohana was originally created as a project fork[3] of the CodeIgniter PHP framework under the name Blue Flame. The principal reason for the fork was to create a more community-based web application framework, as many users were frustrated with CodeIgniter's lack of bug fixes and its failure to include new features requested by the community. In July 2007, Blue Flame was renamed to Kohana to avoid possible trademark issues.[4] The name Kohana was chosen when the developers began looking through Native American dictionaries to find words that would pose little copyright conflict. Kohana is the Sioux word for "swift". It is also the Japanese word for "tiny flower", the Ukrainian and Polish word for "beloved" (only of feminine gender), and the name of a famous killer whale (Kohana), none of which have any relation to the original intended meaning.

While the initial release of Kohana was essentially an improved version of CodeIgniter, by the end of 2007 the Kohana development team had released version 2.0, a complete rewrite from the ground up. The new version 2.0 was strictly a PHP5 framework and has formed the basis for all other releases since.

Differences between Kohana and CodeIgniter

- Strict PHP5 OOP. Offers many benefits: visibility protection, automatic class loading, overloading, interfaces, abstracts, and singletons.[5]
- Kohana has joined the GoPHP5 initiative. All releases from 2.2 on will conform with this project.[5]
- Continues CodeIgniter design patterns. Anyone who has used CodeIgniter will quickly understand Kohana's structure and design patterns.[5]
- GET, POST, COOKIE, and SESSION arrays all work as expected. Kohana does not limit your access to global data, but offers the same filtering and XSS protection that CodeIgniter does.[5]
- Cascading resources, modules, and inheritance. Controllers, models, libraries, helpers, and views can be loaded from any location within your system, application, or module paths. Configuration options are inherited and can be dynamically overwritten by each application.[5]
- No namespace conflicts. Class suffixes, like _Controller, are used to prevent namespace conflicts. This allows a User controller and a Users model to both be loaded at the same time.[5]
- True auto-loading of classes. This includes libraries, controllers, models, and helpers. This is not pre-loading, but true dynamic loading of classes, as they are instantiated.[5]
- Helpers are static classes, not functions. For example, instead of using form_open(), you would use form::open().[5]
- Powerful event handler. Kohana events can be dynamically added to, replaced, or even removed completely. This allows many changes to the Kohana execution process, without modification to existing system code.[5]
- Kohana features a built-in ORM library, whereas CodeIgniter does not.[6]

Third-party vendors

Release history

References

- ^ Developers - Kohana - Trac
- ^ overview [Kohana User Guide]
- ^ "Changeset 7 – Kohana PHP Framework – Trac". Retrieved on 2009-01-06.
- ^ a b c d e f overview:history [Kohana User Guide]
- ^ a b c d e f g h i j k "Frequently Asked Questions". Retrieved on 2007-11-25.
- ^ "libraries:orm [Kohana User Guide]". Kohana Team. Retrieved on 2009-01-17.
- ^ a b c d e overview:credits [Kohana User Guide]
- ^ a b BlueFlame 1.0 Released! | CodeIgniter Forums
- ^ a b c d changelog [Kohana User Guide]
- ^ changelog:2.1 [Kohana User Guide]
- ^ changelog:2.1.1 [Kohana User Guide]
- ^ changelog:2.1.2 [Kohana User Guide]
- ^ changelog:2.2 [Kohana User Guide]
http://ornacle.com/wiki/Kohana
This does not bode well: according to this NAM model from last night, a tornado outbreak for the Ohio Valley is possible late Friday (00Z Saturday). The Storm Prediction Center shows a moderate risk of severe convective development on Friday, March 2nd:

For those interested in tracking storms yourself, you can download a free and functional trial copy of StormPredator personal weather radar software.

29 thoughts on "Tornado outbreak tracking"

Thanks for finding this Anthony. You can be sure it will be on my weather telecasts today.

"Tornado outbreak caused by climate change" in …3, …2, …1 …

I think there's an interesting story here regarding the first real test of the new dual-polarization radar system that was just installed in Springfield MO: I read that the new technology provided an additional 30 minutes of warning for the tornado that hit Branson MO this week.

All I know is their "Tornado Warning" for our county failed miserably last night. We had a nice, gentle rain with no winds at all last night. Rainfall total was less than 1″. I live in SE Missouri, Poplar Bluff for all you fans out there.

Had to get up at 4:30 AM and head to the basement. One of those tornadoes was spotted about 5 miles SE of here. Of course they are pretty hard to see two hours before sunrise. The GOES video makes it look like it all happened in daylight. Fortunately for us there was no serious damage locally. I'd hate to be around if a Joplin or Tuscaloosa 2011 twister hit at such a sleepy hour. I suppose I'd hate to be around one of those at any time, come to think of it.
today's big MSM scare:

2 March: Bloomberg: Alex Morales: Oceans Acidifying Fastest in 300 Million Years Due to Emissions … today in a statement … today in the journal Science … The UN's Intergovernmental Panel on Climate Change has said ocean pH may fall another 0.3 units this century, according to Columbia … Researchers based in the U.S., U.K., Netherlands, Germany and Spain contributed to the study, which was funded by the U.S. National Science Foundation.

Speaking of Storm Predator, I purchased it way back before Windows 7. Is there a way that I can retrieve my activation key without re-buying the program? I still have my original e-mail addresses.

Tornadoes: day by month are shown at the link below. The text is a "must read" – otherwise it is difficult to know what you are looking at.

Richard, how much weight do you give to each of these factors re: this severe outbreak:
1. Jet Stream position/upper level low
2. Cold Front location
3. The warm, moist air influx that provides the 'fuel'?
And what amount of 'boost' would you attribute to "Lunar declinational tidal effect" in this event?

Bill, did you have another link in mind? That link led to a 'Polarization Upgrade (and temporary outage) Notice' (for the NWS Springfield WSR-88D Polarization Upgrade) rather than the first real test of the new polarization function.

When the power is on, I get real-time radar from: &centerx=395&centery=313&prevzoom=zoom Or

An audible alarm for severe weather might be a selling point, Anthony.

_Jim says: March 1, 2012 at 7:12 pm

We're heading into the "mean" season. Posts by: _Jim, need to be taken seriously.

_Jim, that link talks about the new technology and the advantages. I've seen video showing the dual-pol radar images from the night of the Branson tornado, and it shows the debris field around the tornado, alerting forecasters to the fact that it was indeed a tornado on the ground.
Here's a link with the story and video from CBS:

_Jim says: March 1, 2012 at 7:08 pm

Richard Holle says on March 1, 2012 at 6:47 pm: Once again the Lunar declinational tidal effect is responsible but goes unmentioned …

"Richard, how much weight do you give to each of these factors re: this severe outbreak:

"1. Jet Stream position/upper level low"

Reply; the recurrent position of the jet stream and the upper level low is the result of the Lunar declinational bulge being almost the same as it was the past three cycles when the lunar declination was the same; they are the product of the cyclic timing of the tidal bulge production. The only thing different from the past three cycles is the addition of the heliocentric conjunction of Mars on the 3rd of March this year, increasing the volume of the tidal pulse a little more than the past cycles.

"2. Cold Front location"

Reply; The cold front is a combination of the pulse of cold air coming out of Canada earlier and the negative ion/static charge it carried on the leading edge; you can see the ions pushing ahead of the cold air in the water vapor satellite series. The cold alone will not generate tornadoes; it requires the ion gradient as well to trigger the intense vortex action.

"3. The warm, moist air influx that provides the 'fuel'?"

Reply; The in-surge of the tropical fetch of moisture from the previous three days gave the positive ion charge on the moisture needed to cause the warm air mass next to the ground to contain positive ions that can only move with the air mass. (Free electrons from negative ionization can move ahead of the frontal boundary, commonly seen as high cirrus clouds.) Because of the positive ion charge, the lower air swirls up into the negative charge area just ahead of the cold front that helps push it up, resulting in the generation of the vorticity.

"And what amount of 'boost' would you attribute to 'Lunar declinational tidal effect' in this event?"
Reply; The timing of the concurrence of all of these forces is the result of the ionic and tidal effects of the Moon being maximum North at the same time the polarity of the solar wind shifts from its peak charging cycle. Tornadoes form from the cresting of the tidal and ionic strengths, just as the magneto on your lawn mower sets up the charge for the discharge through the spark plug just past top dead center of the piston travel. I would be glad to answer any further questions you have; much of what I would tell you can be found at the above links.

Richard, how do those theories fit in with the more generally-accepted concepts of high pressure ridges and low pressure troughs, and Rosby Waves in general?

Thanks Bill; not much 'new', but the NWS guy alludes to being able to see 'debris' with this option (we saw a good bit of debris from our KFWS WSR-88D the last couple of years on several tornadoes, and I don't think KFWS has been upgraded to dual-pol capability yet) … and I had not seen that 'capability' or feature in past white papers (on the subject) as the reason for the upgrade to dual-polarization; rather, the identification of various hydrometeors (hail vs heavy rain), which assists in the warnings for (obviously) hail rather than waiting for ground reports by spotters, as well as allowing the forecasters and meteorologists to gauge storm 'health' (or storm 'life cycle' as it relates to cyclogenesis) and strength, and the way that a tornado-forming storm can be seen to 'skip' ahead of an older cell which had already spawned a funnel (a dangling vortex) or tornado. BTW, significant upgrades have been occurring throughout the life of the WSR-88D; most of the digital and signal processing hardware (H/W) has been upgraded to COTS (commercial off-the-shelf) hardware vs the specially-designed H/W that was part of the original WSR-88D equipment 'fielding' a couple decades ago now.
Do the NWS SPC (Storm Prediction Center) folks use this kind of information ('the phase' of the moon and planets) in creating their "Convective Outlooks" (ranging from today to as far ahead as 8 days), and if not, why not? Example, SPC's "Convective Outlook" webpage:

Can you cite any cases or storms on record where the posited theory regarding "Lunar declinational tidal effect" contra-indicated that severe weather outbreak? Would like to know of any 'false positives' indicated by the theory as well.

PS Sorry for the typo on the misspelling of 'Rossby' in a previous post.

_Jim says: March 1, 2012 at 9:16 pm

_Jim says on March 1, 2012 at 7:08 pm: "Richard, how much weight do you give to each of these factors re: this severe outbreak: 1. Jet Stream position/upper level low"

Richard Holle says on March 1, 2012 at 9:03 pm: "Reply; the recurrent position of the jet stream and the upper level low is the …"

Richard, how do those theories fit in with the more generally-accepted concepts of high pressure ridges and low pressure troughs, and Rosby Waves in general?

Reply; The global circulation is driven by the Lunar declinational tides just as the ocean tides are driven. There are known periods and wave styles to the jet streams and the Rossby wave patterns; it is just that nobody looked at the synchronization of them to the lunar declinational tides. I have spent almost 30 years just looking at the connections between the solar/lunar periods and global circulation patterns. "Please take however much time is necessary to understand. The discussion cannot advance until people make the effort to understand the basics."

My reply; If you read the paper by Leroux, he has found and well defined the actions of the air masses that make up a typical lunar declinational tidal bulge.
The Mobile Polar Highs are the polar component of the resultant tidal bulges, and the equatorial air masses he thought were pushed by the polar highs are actually pulled off of the meteorological equator or "ITCZ". The slow and fast phases he alludes to are the result of the 18.6 year variation of the declinational angle at culmination with reference to the equator."

Bill, the document below might be of interest regarding the Dual-Polarization equipped/retrofitted WSR-88D NWS RADARs and the extended capability it offers the operational and warning meteorologists.

Title: WSR-88D Dual-Polarization Radar Decision Aid

First paragraph/intro:
– – – – – – – – – – – – – – – – – – – – – –
Background Information on Using this Document

The following pages contain information on various hydrometeors that can be identified using WSR-88D dual-polarization radar data. For each hydrometeor, likely values for Reflectivity and three key dual-polarization radar products are provided: Correlation Coefficient (CC), Differential Reflectivity (ZDR), and Specific Differential Phase (KDP). Descriptions for the three dual-polarization radar products are provided below to help you remember their significance in discriminating between various hydrometeors.
– – – – – – – – – – – – – – – – – – – – – –
then showing the resultant repeating patterns due to the Saros cycle of the inner planet returns. Piers Corbyn uses these same effects in his forecasting method ablate a little differently. As to why the NWS does not use this method, I gave up on trying to reach them 12 years ago, and started up this Aerology.com web site presence to reach the general public. As far as false positives or failures in this method of forecasting, to my knowledge there needs to be all of the ingredients present to get any kind of large outbreak, single to small numbers of tornadoes per day happen most any day of the summer, but the big 20+ outbreaks are always on time as scheduled by the lunar declination, 3 days either side of Maximum North from Maximum South declination to 4 to 5 days after, or some times in early spring as the moon crosses the equator headed North for about a 36 hour window, as happened last week; Thanks Richard, I don’t know how much ‘water’ these alternative theories hold, but I may take a look at cited references given time … Bill, here’s the ‘presentation’ on the RADAR screen that operational meteorologists sitting at what used to be called the WSR-88D’s ‘PUP’ (Principle User Position and now called the ‘OPUP’ for Open Systems Principal User Processor) or RADAR display console back in the forecasting office will see when the dual-polarization RADAR data is processed into something usable by humans: . _Jim, I found this regarding the ability of the new dual-pol radar: Tornado debris detection Dual-polarization radar technology can also easily detect the presence of random shaped and sized targets like leaves, insulation or other debris, giving the forecaster a high degree of confidence that a damaging tornado is on the ground. This is especially helpful at nighttime when tornadoes are difficult to see with the human eye. That sounds like exactly what was shown on that CBS News video. 
>As to why the NWS does not use this method, I gave up on trying to reach them 12 years ago, and started up this Aerology.com web site presence to reach the general public. If your theory has any water publish it….or do a simple scale analysis, which will easily show that the magnitudes of the effects you claim are minimal to the point of being negligible. But the reason no one is listening to you is because the dynamics of severe local storms have been pretty well understood for a while now, and tornadoes can be simulated within a cloud-scale model that neglects all of things you claim are important. I gather you actually know virtually *NOTHING* about the dynamics of severe local storms given statements like ” negative static/ionic charges that gave added power to the temperature front to drive the condensation high enough to generate the tornadoes.” Ummm, NO! From the Special relativity thread; Deadman says: March 1, 2012 at 8:35 pm Fred H. Haynie asks, “how do you tell [virtual] reality from the real thing?” Simple, virtual = not. For example: “virtual reality” — not reality; if your teen-aged son says his homework is “virtually finished” — it’s not finished; “virtually no calories” — it does have calories; “virtually identical” — not identical; “virtually free” — not free; “Some aspects of climate science are known with virtual certainty” — some aspects of climate science are not known with certainty; and so on. With the standard method of forecasting with numerical models they hit the wall of accuracy at about 7 to 10 days out. Using my method I can forecast up to 18 years out with an average accuracy about equal to their 5 to 7 day forecast. In 1990 the comparison was about as good as the NWS 3 to 5 day forecast, in the past 22 years they have stretched their window of visibility out two to three more days. By adding what I and Piers Corbyn know and use, they could see the whole 18 year window as well. 
My track record for last year;

Bill, that is not a white paper on the subject; rather, it is much closer to advertising literature than anything else. I would also assert that the orientation of such RADAR-reflective debris would be randomly positioned (remember, it is the longer dimension which provides the greater return (RCS, or RADAR Cross Section) or reflection back to the RADAR site), or is able to be seen as a different azimuth velocity, as opposed to various shaped hydrometeors (e.g. raindrops, which might be oblately shaped in the case of large raindrops, or more spherically shaped hail stones or small raindrops). Remember, simple horizontally polarized WSR-88D NEXRADs pick up debris 'clouds' as well. Let me be specific on this now: the implementation of dual-polarity as an add-on to the WSR-88Ds was not done for the purpose of debris-cloud detection; there are a number of sound meteorological reasons for which the dual-polarity modes and equipment are being added, and a number of those modes optimize the WSR-88D for the purposes of detecting non-severe storm precipitation and also determining its type, although there appears to be a dedicated 'spin' afoot in the press (primarily TV, it would seem), since "glitz and pizazz gets press". Is it 'budget time' or something in NOAA/NWS land? I would make reference to this document again, which points out the added differentiation capability on hydrometeors (e.g. more specific precip types) that dual-polarity Doppler WSR-88Ds provide.

Richard Holle, seems there is a discussion of these concepts over at
http://wattsupwiththat.com/2012/03/01/tornado-outbreak-tracking/
>The DOM's getAttributeNS() method behaves like getAttribute().

Not according to the apiDocs. They state that the exact opposite is true. When I said it was a bug, I meant as per the API docs that accompany the distribution. I believe this is what the other folks who encountered this meant also. The following is from the apiDocs for Element.getAttributeNS(...):

getAttributeNS

public java.lang.String getAttributeNS(java.lang.String namespaceURI, java.lang.String localName)

Retrieves an attribute value by local name and namespace URI. HTML-only DOM implementations do not need to implement this method.

Parameters:
namespaceURI - The namespace URI of the attribute to retrieve.
localName - The local name of the attribute to retrieve.

Returns:
The Attr value as a string, or an null if that attribute does not have a specified or default value. This is different from getAttribute which never return null.

Since: DOM Level 2

Joseph Kesselman/Watson/IBM@IBMUS on 07/19/2000 09:00:48 AM
Please respond to soap-user@xml.apache.org
To: "Sanjiva Weerawarana" <sanjiva@watson.ibm.com>
cc: Mike Pogue/Cupertino/IBM@IBMUS, Matthew Duftler/Watson/IBM@IBMUS, Andy Clark/Austin/Contr/IBM@IBMUS, soap-dev@xml.apache.org, soap-user@xml.apache.org
Subject: Re: Fw: Xerces 1.1.2 Bug

The described behavior is not a bug, per the DOM Level 2 Candidate Recommendation. The DOM's getAttributeNS() method behaves like getAttribute(). If an attribute is not present and doesn't have a default value, it should return an empty string. If you want to distinguish this case, use getAttributeNodeNS() to obtain the Attr object; this returns null if there isn't one. You can obtain the requested behavior by saying something like:

Attr tempAttr;
String valueOrNull = ((tempAttr = myNode.getAttributeNodeNS(...)) == null) ? null : tempAttr.getNodeValue();

(We can, and do, argue about whether having getAttribute return an empty string was a good idea or not.
But that decision was made in DOM Level 1, and we decided that the new namespace-aware methods in Level 2 should follow that behavior to avoid confusing users familiar with the Level 1 calls.)

______________________________________
Joe Kesselman / IBM Research
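Joe's point (empty string from getAttributeNS(), null from getAttributeNodeNS()) can be checked against any DOM Level 2 implementation, for instance Python's standard-library xml.dom.minidom. This is a cross-language illustration, not part of the original thread:

```python
from xml.dom.minidom import parseString

doc = parseString('<root xmlns:p="http://example.com/ns" p:present="yes"/>')
root = doc.documentElement

# Present attribute: returned as a string.
print(root.getAttributeNS("http://example.com/ns", "present"))

# Missing attribute: getAttributeNS returns the empty string...
print(repr(root.getAttributeNS("http://example.com/ns", "missing")))

# ...while getAttributeNodeNS returns None, which lets you tell the cases apart.
print(root.getAttributeNodeNS("http://example.com/ns", "missing"))
```

The same two-method idiom (value accessor versus node accessor) is what the Kesselman snippet above expresses in Java.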
Introduction

IIFEs (Immediately Invoked Function Expressions)

We are going to start with IIFEs (Immediately Invoked Function Expressions). IIFEs do not necessarily use closures, but they are going to be very useful later in this tutorial. An IIFE is, as its name implies, a function that is executed as soon as the interpreter gets to it. This means that we don't need to call it afterwards. Let's look at an example:

var global_var = 1;
(function () {
    var local_var = 2;
    console.log(local_var); // 2
    global_var = 3;
})();
console.log(global_var); // 3
console.log(local_var);  // ReferenceError

There's only one thing that is new to us: as you can see, the function is wrapped in parentheses, and it is called in situ (in the same place where it is defined). We would usually have to declare the function first: function myFunction() {...}, and then call it: myFunction();. When using IIFEs, the function is not declared; it is a function expression. As soon as the interpreter gets to it while executing the script, the function gets called, computes something, and then it is gone. It is no longer a bound object and eventually gets swept up by the JavaScript garbage collector. To reiterate: IIFEs, being functions, are able to return values or even interfaces, but the wrapping parentheses mean that the function is bound to life only as long as that code is executing. Then it vanishes. As with any other function, it has access to its parent's lexical scope; that's why global_var gets reassigned inside of it. And also, as with any other function, its own lexical scope is gone when the function finishes its computation (that's why we can't access local_var from outside). Why would we want to do this? Well, for starters, because an IIFE doesn't "pollute" our global namespace. When we declare a function in the global scope, the identifier that we use to name it is stored inside the global namespace.
And so, if we try to name another function or a variable with the same name, we will encounter name collisions. So, when you need to make a computation but you don't need that function to stick around for long, you can call it as an IIFE and get rid of it afterwards. Another use for IIFEs is in closures, but we'll talk about that later. Something very important to consider is that, even though an IIFE doesn't NEED a name, it is a good idea to name it regardless. Naming an IIFE has two advantages:

- It is easier to recognize it in your code.
- It is very useful for debugging.

Besides being able to name an IIFE, as with any other function we can pass arguments to it:

var random_array = ["-", "am", "-", "array"];
(function completeArray(an_array) {
    an_array[0] = 'I';
    an_array[2] = "an";
    var joined_array = an_array.join(" ");
    joined_array = joined_array + ".";
    console.log(joined_array); // I am an array.
})(random_array);

There are two variations of IIFEs. You can choose whichever you prefer. The difference is one of style; they work in the same way.

Variation #1

(function variation1() {
    var a = 1;
    console.log(a); // 1
}());

As you can see, the difference is at the end, in the parentheses that we use to call the function. The first version is (function() {...})();, and the variation is (function() {...}());. Pretty simple.

Variation #2

var a_variable = 1;
(function variation2(def) {
    def(a_variable);
})(function def(first_variable) {
    var other_variable = 2;
    console.log(first_variable + other_variable); // 3
});

In this variation, we pass a function to the IIFE as an argument. The IIFE takes that function, passes an argument to it, and calls it. I know that it looks a lot more complicated than the others. So, why would anyone want to use it? Well, as I said, it is a matter of style. Some programmers think that this version is better. Perhaps you won't ever use it, but it helps to be familiar with it in case you encounter it some day!
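As a cross-language aside (not part of the original tutorial), languages without a dedicated IIFE idiom can still get the same "define, call, discard" effect. In Python, for example, an immediately invoked lambda plays a similar role:

```python
# Defined and invoked in one expression; the function itself is never
# bound to a name, so it adds nothing to the enclosing namespace.
total = (lambda a, b: a + b)(2, 3)
print(total)  # 5
```

The key property is the same as in JavaScript: the function expression exists only for the duration of that one call.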
Closures

Closures are a fundamental part of how JavaScript works. Enclosing (or encapsulating) data is essential to making applications (and their functionality) behave predictably, and having a better understanding of what it means can help you build your applications. As part of JavaScript's lexical scoping, a function acts like a virtual force field: outside code cannot reach into it, while the function itself can still observe (and assign) the values available outside of it. A closure is when a function has access to its lexical scope even when it is called outside of it. Closures are everywhere in JavaScript; we just need to be able to recognize them. Let's look at one:

function outerFunction() {
    var a = 1;
    function innerFunction() {
        console.log(a);
    }
    return innerFunction;
}
var a_variable = outerFunction();
a_variable(); // 1

Ok, so, in this example, we are creating an innerFunction inside of outerFunction. innerFunction uses variable a, which is declared in outerFunction, to print it to the console. Then we return innerFunction without calling it and assign it to a_variable. a_variable now contains innerFunction, and so we can call a_variable and print variable a. How does innerFunction have access to variable a even from OUTSIDE of outerFunction? We would expect that variable a has already stopped existing, as outerFunction has already finished its computations. Where is that variable stored? Well, the trick is to think about the way lexical scopes work. Remember what lexical scope means? It means that the scope of a function is defined by WHERE in the script that function was defined by the author.
The place where that function is written in the code defines its scope, NOT the place where that function is called. And so, in our last example, innerFunction stores a reference to the scope of outerFunction and can access it at any time. innerFunction's scope chain is something like this:

scope_chain = {
    innerFunction Variable Object,
    outerFunction Variable Object,
    global Variable Object
}

In each Variable Object (VO), we store references to the arguments, variables, and functions that were declared at each scope level. That scope chain is part of innerFunction's execution context, and so it exists "inside" innerFunction. Let's look at a classic example:

var some_array = [];
for (var i = 0; i < 3; i++) {
    some_array.push(function pushToArray() {
        console.log(i);
    });
}
some_array[0](); // 3
some_array[1](); // 3
some_array[2](); // 3

In this example, even though we would perhaps expect the console.log()'s to print 0, 1, and 2, what we get is 3, 3, and 3. This is pretty easy to understand. What we are pushing into the array is a function that contains a reference to the variable i that was declared in the global scope (remember that variables declared with var inside of a block statement are created in the scope the block statement is in). So, each time the function is called, it looks up the value of the global variable i (3 at the end of the for loop) and prints it. We can change this code to make it behave as we would expect:

var some_array = [];
for (var i = 0; i < 3; i++) {
    (function createClosure() {
        var j = i;
        some_array.push(function pushToArray() {
            console.log(j);
        });
    })();
}
some_array[0](); // 0
some_array[1](); // 1
some_array[2](); // 2

What do IIFEs have to do with closures? Well, we just used an IIFE! That's right: we use an IIFE to create a new lexical scope at each iteration of the for loop. When we declare variable j and assign variable i to it, j is declared inside the lexical scope of createClosure().
And so, when function pushToArray() is called, it creates a closure over the scope of createClosure(). Each time createClosure() is called, it stores in its own Variable Object a variable j that contains the value of i at that iteration of the for loop. And so, pushToArray() uses that reference to j to print 0, 1, and 2. Amazing, right? We could even save some pixels and write it like this:

var some_array = [];
for (let i = 0; i < 3; i++) { // A let variable!
    some_array.push(function() {
        console.log(i);
    });
}
some_array[0](); // 0
some_array[1](); // 1
some_array[2](); // 2

We'll let you analyze why this works! If you have any doubts, check our last tutorial!

The Module Pattern

So, why do we want to use closures? Something pretty cool that we can do with closures is create modules. Modules are part of a design pattern that is called The Module Pattern. A design pattern is basically a set of rules that allow us to write JavaScript code in a cleaner and better way. An example of a module could be this:

function Module() {
    var an_array = ["Data 1", "Data 2", "Data 3"];
    function getData1() {
        console.log(an_array[0]);
    }
    function getData2() {
        console.log(an_array[1]);
    }
    function getData3() {
        console.log(an_array[2]);
    }
    return {
        getData1 : getData1,
        getData2 : getData2,
        getData3 : getData3
    }
}
var module = Module();
module.getData1(); // Data 1
module.getData2(); // Data 2
module.getData3(); // Data 3

Ok, so, let's analyze this.

- First of all, the identifier of the module is written with an uppercase letter (Module). This is done to state clearly that this function is not just any function, but a module.
- Inside of Module, we have three functions that take a value stored inside the variable an_array and print it to the console.
- Module returns the names of the three functions inside of an Object Literal. An Object Literal is similar to a JSON structure.
The difference is that an Object Literal doesn't necessarily have to use strings as keys, and JSON doesn't allow functions as values. Module stores that Object Literal in the variable module, and now the inner functions of Module can be accessed by using dot notation: module.getData1();. Modules are a way of simulating private and public functions in JavaScript. Private functions are the ones that the module DOESN'T return, and so the user has no access to them. Public functions are the ones that the module DOES return inside the Object Literal, and they can be accessed from outside of the function. This is very similar to the way Object-Oriented Programming works, but we will talk about that in a later tutorial.

More about Modules

Ok, there are other things that we can do with modules. Let's explore them a little bit further:

var dataForDatabase = ["Mark", "Paul", "Jeremy", "Roger"];
var database = (function CreateDatabase(an_array) {
    var _database = an_array;
    var publicAPI = {};
    function _printData(idx) {
        console.log(_database[idx]);
    }
    function _pushToArray(name) {
        _database.push(name);
    }
    publicAPI.getName = function getName(idx) {
        _printData(idx);
    };
    publicAPI.storeName = function storeName(name) {
        _pushToArray(name);
    };
    return publicAPI;
})(dataForDatabase);
database.getName(1); // Paul
database.getName(3); // Roger
database.storeName("Peter");
database.getName(4); // Peter

Ok, so, a lot of things are happening in here! Let's break them down:

- We are using an IIFE to store a module named CreateDatabase in the variable database. Because we don't declare the module as a reusable function, we can only have one instance of this module. The only instance is in the variable database, and there's no other way of instantiating another one of these modules.
- _database, _printData, and _pushToArray are private variables and functions (they cannot be accessed directly from outside of the function).
We state clearly that these are private by adding an underscore at the beginning of the identifier (this is just a convention).

- We define a variable publicAPI and assign an empty Object Literal to it. This type of variable is called a Locally Scoped Object Literal. It makes the code clearer because we can explicitly see when a function is appended to the publicAPI (publicAPI.getName = function() {...};).
- We return the publicAPI to store it in the variable database.
- We can also pass arguments to the module AND to the public functions! This gives us a lot of flexibility, as you can see.
- When we call the public functions with database.getName() and database.storeName(), we are not only accessing the variable _database and printing its contents to the console, but also inserting values into it! This means that the variable _database that lives in the Variable Object of CreateDatabase() can be modified even after the function is called, and the changes are stored to be used later!

As you can see, modules are a very powerful tool!

The Revealing Module Pattern

One last thing. There is a variation of the Module Pattern that is called the Revealing Module Pattern. It is very similar to what we just learned; the only difference is that it explicitly assigns functions to variables, and then returns the variables inside the object literal:

var arithmetic = (function Arithmetic() {
    var add = function addNums(a, b) {
        console.log(a + b);
    };
    var substract = function substractNums(a, b) {
        console.log(a - b);
    };
    var multiply = function multiplyNums(a, b) {
        console.log(a * b);
    };
    var divide = function divideNums(a, b) {
        console.log(a / b);
    };
    var publicAPI = {
        add : add,
        substract : substract,
        multiply : multiply,
        divide : divide
    };
    return publicAPI;
})();
arithmetic.add(3, 5); // 8
arithmetic.substract(6, 4); // 2
arithmetic.multiply(3, 4); // 12
arithmetic.divide(20, 5); // 4

One thing to note is that we are declaring and assigning publicAPI at the same time, just before returning it.
This is another variation that you can use if you like.

Conclusion

IIFEs, Closures, and Modules are some of the most powerful tools in JavaScript. If you learn how to use them well, you can access a myriad of superpowers that will definitely make you a better JavaScript coder. Practice them and master them! And keep on reading, for we are now going to explore a concept that is pretty similar to modules: Object-Oriented Programming. Continue learning to become a JavaScript Grand Master!
To set our minds, assume the following scenario: you're doing a lot of string manipulations, and one of the operations you need quite often is reversing a string. Since strings are immutable, a new string has to be created with the reversed set of characters in it. A typical solution looks like this:

using System;

class ReverseStringDemo
{
    public static void Main()
    {
        string s = "Hello World!";
        string r = Helpers.Reverse(s);
    }
}

static class Helpers
{
    public static string Reverse(string s)
    {
        char[] c = s.ToCharArray();
        Array.Reverse(c);
        return new string(c);
    }
}

Although this approach works like a charm, it has a few drawbacks. First of all, the code is less intuitive because of the need to call a static method which happens to be defined in "some" static class. (Note: Static classes were introduced in C# 2.0 as a way to enforce - at compile time - that all members of a class are static. This feature was the direct result of an API issue in .NET Framework 1.1, where an instance member was added to the System.Environment class, which didn't have a public constructor, so the newly added instance member was useless. As a matter of fact, the member had to be static, but the error was not caught at compile time. Static classes prevent this kind of problem from arising.) Another drawback is the disruptive nature when results have to be chained. Assume the following:

string s = "Hello World!";
string t = s.TrimLeft().ToLower();
string r = Helpers.Reverse(t).Substring(0, 15).PadRight(20, '*');

It would be much nicer to be able to write everything in one chain of calls, like this:

string s = "Hello World!";
string r = s.TrimLeft().ToLower().Reverse().Substring(0, 15).PadRight(20, '*');

This becomes even more convenient when a lot of "extensions" have to be added, for example to every possible collection type, which is exactly what LINQ needs to do: take a collection, filter items based on a predicate, sort it, group the results, project, etc.
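As a cross-language aside (not part of the original post), languages whose built-in types already expose rich instance methods get this fluent style for free. A rough Python equivalent of the chain above, with slicing standing in for the Reverse helper:

```python
s = "Hello World!"
# lowercase, reverse via slicing, take the first 15 characters, pad to 20 with '*'
r = s.lower()[::-1][:15].ljust(20, "*")
print(r)  # !dlrow olleh********
```

The point is the same one the post is making: chaining reads naturally when the operations appear to live on the type itself.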
Basically, this is a chain of calls that transforms one collection into another, like this:

var result = customers.Where(c => c.Age >= 18).Select(c => new { c.Name, c.Age });

Extension methods allow us to realize this. Basically, you can turn static methods on a static class into extension methods by just adding the this keyword to the first method parameter:

using System;

class ReverseStringDemo
{
    public static void Main()
    {
        string s = "Hello World!";
        string r = s.Reverse();
    }
}

static class Helpers
{
    public static string Reverse(this string s)
    {
        char[] c = s.ToCharArray();
        Array.Reverse(c);
        return new string(c);
    }
}

Note: The extension method can still be used as a regular static method too, so you can turn existing static methods (on static classes) into extension methods without breaking compatibility. The Main method in the code fragment above is turned into the following IL code:

.method public hidebysig static void Main() cil managed
{
    .entrypoint
    .locals init (string V_0, string V_1)
    IL_0000: nop
    IL_0001: ldstr "Hello World!"
    IL_0006: stloc.0
    IL_0007: ldloc.0
    IL_0008: call string Helpers::Reverse(string)
    IL_000d: stloc.1
    IL_000e: ret
} // end of method ReverseStringDemo::Main

As you can see, extension methods are just syntactical sugar to masquerade static method calls, in order to make source code cleaner and more intuitive.
The Reverse method is translated into a regular static method, but annotated with an attribute that makes it recognizable by the compiler as an extension method:

.method public hidebysig static string Reverse(string s) cil managed
{
    .custom instance void System.Runtime.CompilerServices.ExtensionAttribute::.ctor() = ( 01 00 00 00 )
    .locals init (char[] V_0, string V_1)
    IL_0000: nop
    IL_0001: ldarg.0
    IL_0002: callvirt instance char[] [mscorlib]System.String::ToCharArray()
    IL_0007: stloc.0
    IL_0008: ldloc.0
    IL_0009: call void [mscorlib]System.Array::Reverse(class [mscorlib]System.Array)
    IL_000e: nop
    IL_000f: ldloc.0
    IL_0010: newobj instance void [mscorlib]System.String::.ctor(char[])
    IL_0015: stloc.1
    IL_0016: br.s IL_0018
    IL_0018: ldloc.1
    IL_0019: ret
} // end of method Helpers::Reverse

In case the library with extension methods is compiled into another namespace (maybe in another .dll file), you need to bring these into scope using the using statement. An example will make this clear:

using System;
using System.Collections.Generic;

namespace Bar
{
    using Foo;

    class Ext
    {
        public static void Main()
        {
            string s = "Hello world!";
            string r = s.Reverse();
            Console.WriteLine(r);

            List<string> lst = new List<string> { "Bart", "Steve", "Rob", "John", "Bill" };
            foreach (string t in lst.Where(i => i.Length == 4))
                Console.WriteLine(t);
        }
    }
}

namespace Foo
{
    delegate R Func<T, R>(T t);

    static class Extensions
    {
        public static string Reverse(this string s)
        {
            char[] c = s.ToCharArray();
            Array.Reverse(c);
            return new string(c);
        }

        public static IEnumerable<T> Where<T>(this IEnumerable<T> sequence, Func<T, bool> predicate)
        {
            foreach (T item in sequence)
                if (predicate(item))
                    yield return item;
        }
    }
}

In the code fragment above you can see the use of using Foo to bring extensions defined in the Foo namespace into scope. This makes the Reverse and Where methods available on System.String and System.Collections.Generic.IEnumerable<T> respectively.
The latter method is a sample of an extension method applied to an interface, which is as easy as an extension method on a regular type. To understand the Where method sample, you should know about iterators (a C# 2.0 feature). You can read more on iterators in my C# 2.0 Iterators blog post. The Where method sample is equivalent to the Where operator in LINQ and can be found in my LINQ-SQO CodePlex project as well. Essentially, it takes a predicate (a method that maps an "item" to a boolean value) to filter items from the source "sequence" (a synonym for an IEnumerable<T> collection type). In the sample, all strings with a length of 4 characters are retained. The key takeaway from this post is that extension methods are a convenient way to extend types with instance methods without requiring access to the original source code. As usual, use it with care. LINQ uses extension methods extensively, as we'll explain later on. Have fun!

Bart, what do you think about the possible versioning problems with extension methods? I blogged about it here. Everyone is talking about segregating extension methods into their own namespaces, but I'm not sure that completely solves the problem.
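As a cross-language aside (not part of the original post), the iterator-based Where method above translates almost line for line into a Python generator, since both yield matching items lazily:

```python
def where(sequence, predicate):
    """Lazily yield only the items for which the predicate returns True,
    mirroring the C# iterator-based Where extension method."""
    for item in sequence:
        if predicate(item):
            yield item

names = ["Bart", "Steve", "Rob", "John", "Bill"]
print(list(where(names, lambda n: len(n) == 4)))  # ['Bart', 'John', 'Bill']
```

The Python version filters the same sample list to the four-character names, just as the C# foreach over lst.Where(i => i.Length == 4) does.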
This post is an early draft of expanded work that will eventually appear on the District Data Labs Blog. Your feedback is welcome, and you can submit your comments on the draft GitHub issue. In order to learn (or teach) data science you need data (surprise!). The best libraries often come with a toy dataset to show examples of how the code works. However, nothing can replace an actual, non-trivial dataset for a tutorial or lesson, because it provides for deep and meaningful further exploration. Non-trivial datasets can provide surprise and intuition in a way that toy datasets just cannot. Unfortunately, non-trivial datasets can be hard to find for a few reasons, but one common reason is that the dataset contains personally identifying information (PII). A possible solution to dealing with PII is to anonymize the dataset by replacing information that can identify a real individual with information about a fake (but similarly behaving) individual. Unfortunately this is not as easy as it sounds at a glance. A simple mapping of real data to randomized data is not enough, because anonymization needs to preserve the semantics of the dataset in order for it to be used as a stand-in for analytical purposes. As a result, issues related to entity resolution, like managing duplicates or producing linkable results, come into play. The good news is that we can take a cue from the database community, who routinely generate datasets in order to evaluate the performance of a database system. This community, especially in a web or test-driven development context, has a lot of tools for generating very realistic data for a variety of information types. For this post, I'll explore using the Faker library to generate a realistic, anonymized dataset that can be utilized for downstream analysis.
The goal can therefore be summarized as follows: given a target dataset (let's say, for simplicity, a CSV file with multiple columns), produce a new dataset such that for each row in the target, the anonymized dataset does not contain any personally identifying information. The anonymized dataset should have the same amount of data, as well as maintain its value for analysis.

Anonymizing CSV Data

In this example we're going to do something very simple: we're going to anonymize only two fields, full name and email. Sounds easy, right? The issue is that we want to preserve the semantic relationships and patterns as we read the data from a source CSV file and write the anonymized data to a target CSV file. The end result is that the file should be very similar in terms of length, row order, and fields, the only difference being that names and emails have been replaced with fake names and emails.

import unicodecsv as csv
from faker import Factory
from collections import defaultdict

def anonymize_rows(rows):
    """
    Rows is an iterable of dictionaries that contain a name and email field
    that need to be anonymized.
    """
    # Create the faker along with mappings of real values to fake values.
    faker = Factory.create()
    names = defaultdict(faker.name)
    emails = defaultdict(faker.email)

    # Replace the identifying fields in every row with their fake stand-ins.
    for row in rows:
        row["name"] = names[row["name"]]
        row["email"] = emails[row["email"]]
        yield row

def anonymize(source, target):
    """
    Here source is a path to a CSV file containing data to anonymize, while
    target is a path to write the anonymized data to. Both of these paths are
    opened for reading and writing respectively.
    """
    with open(source, "rb") as f:
        with open(target, "wb") as o:
            reader = csv.DictReader(f)
            writer = csv.DictWriter(o, reader.fieldnames)
            writer.writeheader()
            for row in anonymize_rows(reader):
                writer.writerow(row)

The trick here is the defaultdict used to map names to fake names and emails to fake emails. The Python collections module provides a dictionary that calls a factory function to supply values for missing keys: the first time a real name is looked up, a fake name is generated and stored for it, and every later occurrence of that real name maps to the same fake name, preserving duplicate structure. Because the Faker Factory has multiple language support and a wide array of fake data generators (well beyond fake user profiles), we'll inspect it in detail for the rest of this post.
Please allow me to highlight a few:

>>> fake.credit_card_number()
u'180029425031151'
>>> fake.military_ship()
u'USCGC'
>>> (fake.latitude(), fake.longitude())
(Decimal('-39.4682475'), Decimal('50.449170'))
>>> fake.hex_color()
u'#559135'
>>> fake.pyset(3)
set([u'Et possimus.', u'Blanditiis vero.', u'Ad odio ad qui.', 9855])

Importantly, providers can also be localized using a language code. As you can see, there are a wide variety of tools and techniques to generate fake data from a variety of domains. The best way to explore all the providers in detail is simply to look at the providers package on GitHub.

Creating a Provider

Although the Faker library has a very comprehensive array of providers, occasionally you need a domain-specific fake data generator. In order to add a custom provider, you will need to subclass BaseProvider and expose custom fake methods on it; you may even create a package structure with localization, similar to how Faker itself is organized.

So far we have only anonymized two fields: name and email address. If you look at the results, you'll see that we should make a pass over the data and modify our fake data generation such that it matches the distributions we're seeing in the original dataset. In particular, we'll deal with the email domain, create more realistic fake profiles, and handle duplicate entities. For the domain mapping, I propose a small utility class; let me break it down piece by piece. First, the class extends MutableMapping, which is an abstract base class in the collections module. The ABC gives us the ability to make this class act just like a dict object. All we have to do is provide __getitem__, __setitem__, __delitem__, and __iter__ methods, and all other dictionary methods, like pop or values, work on our behalf. Here, we're just wrapping an inner dictionary called domains.
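Since the utility is discussed piece by piece, here is a rough standard-library sketch of the shape being described (the names are illustrative, the fake-domain generator is injected where the post uses Faker, and the whitelist bookkeeping is omitted for brevity):

```python
import json
from collections.abc import MutableMapping

class DomainMap(MutableMapping):
    """Maps real email domains to fake ones, fabricating entries on demand."""

    def __init__(self, make_fake_domain, domains=None):
        self.make_fake_domain = make_fake_domain  # e.g. a Faker domain generator
        self.domains = domains if domains is not None else {}

    def __getitem__(self, key):
        # defaultdict-style behavior: unknown domains get a fake domain on
        # first access, and the same fake domain on every later access.
        if key not in self.domains:
            self.domains[key] = self.make_fake_domain()
        return self.domains[key]

    def __setitem__(self, key, value):
        self.domains[key] = value

    def __delitem__(self, key):
        del self.domains[key]

    def __iter__(self):
        return iter(self.domains)

    def __len__(self):
        return len(self.domains)

    @classmethod
    def load(cls, fobj, make_fake_domain):
        # Parse a previously dumped mapping from an open file-like object.
        return cls(make_fake_domain, json.load(fobj))

    def dump(self, fobj):
        # Persist the mapping so it can be hand-edited and reused later.
        json.dump(self.domains, fobj)

# Demo with a deterministic stand-in generator (where the post would use Faker):
fakes = iter(["alpha.net", "beta.org"])
mapping = DomainMap(lambda: next(fakes))
print(mapping["gmail.com"])  # alpha.net
print(mapping["gmail.com"])  # alpha.net (the mapping is stable)
```

Because MutableMapping supplies the rest of the dict interface from these five methods, the class can be used anywhere a dictionary of domains is expected.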
The thing to note about our __getitem__ method is that it acts very much like a defaultdict: if you try to fetch a key that is not in the mapping, a fake domain is generated and stored for it on the fly. The load method is straightforward: it takes an open file-like object, parses it using the json module, instantiates the domain mapping, and returns it. The dump method is a bit more complex: it has to break down the whitelist and mapping into separate objects, so that we can easily modify the data on disk if needed. Together, these methods allow you to load and save the mapping, and to present domains to the user in order of most frequent first; if you quit early, the remaining domains will still be faked, and the user only sees the most frequent examples for whitelisting. The idea behind this mechanism is to read through your CSV once, generate the whitelist, then save it to disk so that you can use it for anonymization on a routine basis. Moreover, you can modify domains in the JSON file to better match any semantics you might have (e.g. include .edu or .gov domains, which are not generated by the internet provider in Faker).

Realistic Profiles

To create realistic profiles, we'll create a provider that uses the domain map from above and generates fake data for every combination we see in the dataset. Let's walk through it so you understand what is happening. First off, a profile in this case is the combination of a mapping of names to fake names and a mapping of emails to fake emails. The key is that the names and emails are related to the original data somehow. In this case, an email domain like "@districtdatalabs.com" will always be mapped to the same fake domain. In order to maintain the relationship of names to emails (which is very common), we need to be able to access the name parts more directly. For that we have a name parts generator which generates fake first, middle, and last names. We then randomly generate names of the form "first last", "first middle last", or "first i. last" with random chance.
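A minimal stand-in for that name-parts logic might look like this (the name lists are placeholders for the fake first/middle/last generators, and the three templates mirror the formats just described):

```python
import random

FIRST_NAMES = ["Alex", "Jordan", "Sam", "Taylor"]
LAST_NAMES = ["Rivera", "Chen", "Okafor", "Smith"]

def fake_name():
    """Randomly produce 'first last', 'first middle last', or 'first i. last'."""
    first = random.choice(FIRST_NAMES)
    middle = random.choice(FIRST_NAMES)
    last = random.choice(LAST_NAMES)
    template = random.choice([
        "{first} {last}",
        "{first} {middle} {last}",
        "{first} {initial}. {last}",
    ])
    return template.format(first=first, middle=middle,
                           initial=middle[0], last=last)

print(fake_name())
```

Varying the format this way makes the fake profiles look like a population of real entries rather than a single rigid pattern, which matters later when we want the duplicate structure of the data to survive anonymization.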
Additionally, in our data set we had a situation with clear entity duplication: same email, but different names. In fact, the second name was simply the first initial and last name, but you could imagine other situations. Similar to how we did the domain mapping, we're going to pass through the entire dataset, look for similar (name, email) pairs, and propose them to the user. If the user thinks they're duplicates, then we'll merge them together into a single profile and use the merged mapping as we anonymize. Although I won't go through an entire object to do this as with the domain map, the idea is to read your entire dataset, create pairs of the people it finds, and compute their similarity score. Filter out all pairs except for scores that meet a threshold (say, 50), then propose them to the user to decide if they're duplicates, in descending score order. When a duplicate is found, the records are merged into a single profile. The issue becomes how to leverage these fake data generators while still maintaining a high-quality dataset with semantic relations preserved for further analysis. As we've seen throughout the post, even the anonymization of just two fields, name and email, can lead to potential problems. This problem, and the code in this post, are associated with a real case study. For District Data Labs' Entity Resolution Research Lab I wanted to create a dataset that removed PII of DDL members while maintaining duplicates and structure to study entity resolution. The source dataset was 1,343 records in CSV form and contained names and emails that I wanted to anonymize. Using the strategy I mentioned for domain name mapping, the dataset contained 245 distinct domain names, 185 of which were hapax legomena (appeared only once). There was a definite long tail, as the first 20 or so most frequent domains made up the majority of the records. Once I generated the whitelist as above, I manually edited the mappings to ensure that there were no duplicates and that major work domains were "professional enough".
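The deduplication pass described above (score every pair, keep those at or above a threshold of 50, propose in descending order) can be sketched with the standard library. The similarity function here uses difflib and is an assumption, not the post's actual metric.

```python
from difflib import SequenceMatcher
from itertools import combinations


def similarity(a, b):
    """Score two (name, email) records on a 0-100 scale."""
    name_score = SequenceMatcher(None, a[0].lower(), b[0].lower()).ratio()
    email_score = SequenceMatcher(None, a[1].lower(), b[1].lower()).ratio()
    return 100 * (name_score + email_score) / 2


def propose_duplicates(records, threshold=50):
    """Return candidate duplicate pairs at or above the threshold,
    sorted in descending score order for review by the user."""
    scored = ((similarity(a, b), a, b) for a, b in combinations(records, 2))
    return sorted((s for s in scored if s[0] >= threshold), reverse=True)
```

Note that this is quadratic in the number of records; for 1,343 records that is still well under a million comparisons, so a single pass remains practical.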
This process covered many of the duplicates in the dataset. In the end, the dataset that I anonymized was of a high quality. It obfuscated personally identifying information like name and email, and I'm happy to make the data set public. Of course, even if you could figure out which record was mine in the dataset, you wouldn't be able to use it to extract my email address, which was the goal. In the end, anonymizing a dataset is a lot of work, with a lot of gotchas and hoops to jump through. However, I hope you will agree that it is invaluable in an open data context. By sharing data, resources, and tools we can use many eyes to provide multiple insights and to drive data science forward.
https://bbengfort.github.io/2016/02/anonymizing-profile-data/
HOUSTON (ICIS)--Here is Tuesday's midday markets summary:

CRUDE: Mar WTI: $97.37/bbl, up 94 cents; Mar Brent: $105.96/bbl, down 8 cents
NYMEX WTI crude futures traded higher, getting support from an upside correction in the stock market. Winter storms across the US lifted heating oil (ULSD) and natural gas futures, also providing underlying support. The market will be keeping an eye on the weekly supply statistics for a sign of a drawdown in crude oil inventories at the Cushing, Oklahoma delivery hub following the start of the southern leg of the Keystone XL pipeline.

RBOB: Mar $2.6138/gal, up 0.69 cents/gal
Reformulated blendstock for oxygen blending (RBOB) gasoline futures began to rebound slightly from Monday's losses as they tracked strength in crude oil futures.

NATURAL GAS: Mar $5.266/MMBtu, up 36.1 cents
The March contract broke a three-day losing sequence on Tuesday morning, surging through the $5.00/MMBtu threshold as fresh forecasts for below-average temperatures over high-consumption regions through the next 10 days fired expectations for strong near-term heating demand.

ETHANE: steady at 34.00 cents/gal
Ethane spot prices were steady in early trading, even as natural gas futures surged on forecasts for more cold weather.

AROMATICS: benzene flat at $5.00-5.20/gal
Activity was thin in the US benzene market early in the day, sources said. As a result, spot prices were flat from the previous session.

OLEFINS: Feb ethylene higher at 51.5-55.0 cents/lb, PGP offered lower at 72 cents/lb
US February ethylene bid/offer levels climbed to 51.50-55.00 cents/lb on Tuesday compared with 51.25-54.50 cents/lb the previous day. US February polymer-grade propylene (PGP) offer levels fell to 72.00 cents/lb from 72.25 cents/lb the previous day against no fresh bids.

For more pricing intelligence please visit
http://www.icis.com/resources/news/2014/02/04/9750226/noon-snapshot-americas-markets-summary/
This is a brief summary of the following threads. Put together - out of curiosity - the wishlist, in no particular order but grouped.

Web Services
+ Consume Web Service - Consume Web Services easily (WSDP 1.4 Integration)
+ Create Web Service
+ Implement a Web Service specified by a WSDL (AXIS Integration?)

Gui Builder
+ Gui builder improvements. Set fonts, colors, icons, etc. A menu and tool bar builder

General
+ XML/JSP improvements
+ Improved GUI designer
+ JSP improvements

Major
+ JSP 2.0 support
+ Multiple editor window support
+ IDE Modularisation for better plugin support
+ Basic code coverage support (EMMA plugin)
+ Peer review engine
+ Pattern Support / integration (possibly better coupled with ODN support below)
+ Integrated online developer network support (javapractices.com, javaalmanac.com, javaperformancetuning.com, jdoc.com, c2 wiki, etc)

Clear Features
+ Subversion Integration
+ Open API allowing scripting language support with at least 1 language example (Groovy)
+ Pluggable Language Support with pluggable refactorings for those languages
+ Basic SCC support for marginal support of various source control systems
+ Perforce integration equal to IDEA 4.5 CVS integration
+ Project support for multiple modules in the same source tree
+ Shared/Remote file viewing by more than one IDE editor. Remote user may edit, etc. Remote file appears with a yellow tab in the tabbed pane.
+ Usability: Custom view filters, Views, Workspaces
+ Breakpoint Workspaces - saved sets of breakpoints
+ Logging breakpoint shortcut - at a particular line, print out/log the value of a particular expression. Create a breakpoint, select its properties from the context menu, tick the checkboxes "Suspend VM execution" OFF and "Log evaluated expression" ON, and enter your expression.

Light/Easy Features
+ Recently used shortcut reference card
+ Move to Java 5.0
+ Batch inspection, dependency analysis, duplicates location, etc.
+ Explicit support for generated code -
Generated source directories would be excluded from intention analysis/refactoring.
+ Combine error highlighting settings and inspection tool profiles. Share profiles with other team members.
+ Create custom module types, i.e.: Java, Web, Ejb, J2EE, etc.
+ Sample IntelliJ IDEA Plugin Templates - Create new IntelliJ IDEA Plugin Module

ANT
+ ANT: Project import from ANT
+ ANT: Queue multiple targets and execute in order
+ ANT: Download common ANT plugins (Plugin Manager Tabs = Installed, Available, ANT Plugins)

Minor Issues
+ Enable/Disable all breakpoints in a file
+ Refactor: Move a private/package-private class to its own Java source file.
+ Usability: TAB reordering
+ Ignore an intention at a specific place after it has been decided to ignore it
+ Don't allow breakpoints when activation is never possible.
+ Make a line breakpoint on a method a method breakpoint.
+ Fix focus being grabbed by IDEA on completion (configurable)
+ Fix focus/progress bar - flashing once for every web/ejb/j2ee module is not nice.
+ Backgroundable CVS operations
+ Usability: Window shortcuts to show/hide (alt-5 show/hide debug) (etc...)
+ Remember the selected module/tab in project properties.
+ Configurable file synchronization interval - or force synchronize for a particular file
+ Intention: Override method

Bugs
+ Working/Fixed UNDO
+ Memory leaks: working with multiple projects and opening/closing them.

Other
+ GUI blueprints recommendation / Swing guide
+ Completion/Code completion made accessible through the Open API.
+ (1) IDEA had the concept of mode-specific keymaps that are only active while in that mode. Besides plugins, it would be nice if IDEA let you change the keys for each special mode like completion, incremental search, live templates, etc.
You could also fix these while you're at it:
+ 12355 Auto-format of whitespace according to coding style
+ 12357 SmartKey brace opening { and closing brace }
+ (2) IDEA needs the concept of modes and submodes so that plugins can determine the full context in callbacks.
+ (3) It should be easier to override the key handler for any character, and also be able to defer back to the default handler for that character.
+ Saner approach to bug reporting. Volume is high, important stuff slips through.

This is a brief summary of the following threads.

Tim Twe wrote: To this I would add:
+ Editor performance on not-so-large files:

-- Rob Harwood, Software Developer, JetBrains Inc. "Develop with pleasure!"

and: fix all the annoying little bugs!! Introduce new features into IDEA that assist in fixing bugs that are hard to reproduce (because of course you use IDEA to develop / improve IDEA). Every developer could take advantage of the know-how / assistance of really tuning a complex piece of software to the maximum and making it 100% bug-free. Come on, I need at least a 99% bug-free IDEA - the bugs in my own software are causing me enough pain ;)

Just tried to run a JUnit test method (one method alone) that was not prefixed with "test" - IDEA allows it but then locks up dead...

What about the advanced JavaScript/HTML editing?

lllopo wrote: It seems there will be only primitive JavaScript support :(, just what all the other editors already have.

Ahmed. I find very few bugs. Okay, I don't do any web or J2EE stuff, so I'm hardly using IDEA to its fullest. Actually... I've probably had 2 issues/bugs, but they were far from blocking. One was something where I had to terminate IDEA... and the other was structural search/replace in an XML file (didn't work). It's obvious that using a beta there may be issues... so I hope you're not referring to those issues. -Tim

Well, I think this is essential for an IDE that supports J2EE/WEB programming ... and it's really essential.

Hi, any details / tracker request on this?
Tim Twe wrote: and the other was structural search/replace in an XML file (didn't work).

-- Best regards, Maxim Mossienko, IntelliJ Labs / JetBrains Inc. "Develop with pleasure!"

lllopo wrote: There's no JavaScript editor at the moment that has "intelligent" code completion, code navigation, find usages, etc. JetBrains would have a real advantage over Eclipse and the others with such a feature, but it seems to me that "Subversion Integration" has much more lobby :(. Ahmed.

Any uses from me will be using established Jakarta Commons libraries, and there are JSP components to make the libraries easy to use.

Ahmed Mohombe wrote:
>
>> Well, I think this is essential for an IDE that supports J2EE/WEB
>> programming ... and it's really essential.
>

Norris Shelton wrote: Well, you are lucky because you can choose not to use it. In most projects, however, JavaScript IS required (or how would you like to do all that client-side work?). JavaScript is indeed a pain, but hey, it should be the job of IDEs to make it easier. I can remember that Java was "dumb" too before that CodeSearch plugin from IntelliJ (for JBuilder at that time :) ). Only after IntelliJ made the "intelligent" code completion did all the others start to do the same :), or am I wrong? Ahmed.

One thing I would like to see added to the list is the inclusion of the PropertiesEditor plugin in the general installation, much like IG and IPP were added. This is a very useful plugin and would definitely add to the overall product. Sean

I would like to see a properties editor, for sure (I need it every 1-2 hours of my work), but something BETTER THAN THIS ONE. The author keeps adding some nonsense features, while some essential things are missing, like the ultra-important i18n support.

Er, I didn't log a tracker request. I was working through an XML structure and wanted to perform a subtle structural change (playing with ideas before it's set in stone).
Put various attributes into a particular namespace; move XML elements outside their parent, etc. Never having used structural search, I tried it on a really simple document (example below, query included the angle brackets) to learn how to use structural search and get to progressively more complex examples before doing a search and replace.... Tried fiddling with regex and other parameters... no luck. with ]]> should come up with resultset = { abc , def }; the ... is questionable.... For XML files, under no circumstances could I get even one result. I assumed it would only work on Java files. A couple of different tries and there were a couple of exceptions. After looking through the structural search templates I noticed that there were no XML examples. Maybe I was using it incorrectly.

Very few bugs / beta: of course I'm only talking about bugs that occurred with the released IDEA 4.5. I had problems with IDEA totally locking up when running JUnit tests (seems to be related to printf statements in native code), or it outputs "Cannot proceed" in the JUnit output window and you cannot view the rest of the output. Then everything I posted here in recent days - mostly I do not get a response, except from the struct search guy. I know that I should file it in the tracker, but my time is very limited (I would have to first check if a bug report already exists) - and I am lazy, that's why I use IDEA ;) Again, in former times (sigh ;) virtually every bug that was reported was fixed nearly instantly... Today you find unfixed bugs that were reported in the year 2002 ... IDEA is slowly but cruelly losing reputation. Time will tell if it is a good marketing decision... Nice weekend, nevertheless ;)

I have the opposite problem. We have contracts with some government agencies that require absolutely no JavaScript. They even test with it disabled.

Ahmed Mohombe wrote:
>
>>.
>
> Norris Shelton wrote: Yes, I know that "government agencies" have such restrictions, but where productivity and user friendliness are required, no one can afford so many roundtrips to the server. Ahmed.

What things are missing? What is i18n support?

I would beg to differ. On , we did everything client side (before CNet screwed it all up). We were serving a few million registered users with a completely dynamic site. The only things that were static were the icon images. We always beat the industry average for response time.

Ahmed Mohombe wrote:
>
>> I have the opposite problem. We have contracts with some government
>> agencies that require absolutely no java script. They even test with
>> it disabled.
>

More wishes for the next version:
- Back-in-time debugger. Coolness.
- Fix package names. Go through files and fix packages to match the directory structure. For example, after copying a bunch of files from a different project, all the package statements are incorrect. IDEA should be able to go through and fix them.
- Move method refactoring. No further comment necessary.
- Branch package. Copy a package and change references to the new package. Useful for branching a package from a different project.
- Show exception source. Visualize where in the call tree certain exceptions are generated. (Eclipse has this in M7, which I didn't know when I first thought of this.)
- More structural awareness in the editor. Make it possible to move elements around in the parse tree.

Internationalization support (it's called i18n, 'cause it's an 18-symbols word :)). Just write a Unicode properties file, then compile it with the provided Java tool (native2ascii.exe) as it should be, and then open it in the (so-called) properties editor plug-in.

lllopo wrote: So what stops you from doing this? Personally, I'm doing it all the time, except that native2ascii is part of my build script (it makes sense if you sometimes want to edit your file at the client site.)
-- dimiter

If you know a way to do it with the existing Open API while keeping all existing features, I'll mail you the sources of the plugin and you can add this feature. And you'll become the (so-called) author of the i18n part of the (so-called) properties editor plug-in.

"internationalization" has 20 characters, but in "i18n" 18 characters are omitted :) Tom

Obviously. What do U mean? It seems like U don't find this to be important. It's simple - this plug-in can be used for editing only non-Unicode property files. If U can't call this a huge restriction, then I don't know ... for me it's a big miss.

Man, just replace the \uXXXX with the appropriate Unicode symbol when opening, and put back all non-ASCII chars into \uXXXX form on save. I think this will be enough. I don't mind making this, but I don't have the time for now. Anyway, U can send me the source and I'll try to find the time for this, 'cause it's really important for many developers. Use this mail: lllopo@abv.bg

I have discussed this many times in other forums (if U remember); anyway, once again - I really appreciate your work, but sometimes it seems like U try to avoid the problems and not to solve them. Maybe U miss a bit of the users' feedback, I don't know. Greetings

lllopo wrote: It's perfectly OK to edit Unicode encoded with UTF-8 or whatever other encoding for that matter. The escaped values are NOT Unicode. They are just that - escaped values. And yes, I don't need editing of escaped values in property files any more than I need an HTML editing plugin for javadocs.

Then you've never done serious localization, particularly not for Asian languages. Try writing a Japanese property file using escaped values. It's nowhere near "HTML for javadocs".

Marcus Brito wrote: It's kind of funny, but this is exactly the situation with me :) I'm using shift_jis encoded property files and during the build I translate them with native2ascii. Yet, I never edit the already-translated properties - I treat them like any other compiled file. I.e.
if you need a minor change in class functionality you don't patch the bytecode (or do you?) -- dimiter

It's possible to do it on loading, but not possible on saving, because there is only beforeDocumentSaving and no afterDocumentSaving. Is one-way conversion enough? TIA, Dmitry
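The round-trip lllopo asks for above (turn \uXXXX escapes into real characters on open, re-escape non-ASCII characters on save) is straightforward to sketch. The actual plugin would be Java, so this Python version is only an illustration of the transformation, not the plugin's code:

```python
import re


def unescape_properties(text):
    """Opening direction: replace \\uXXXX escapes with real characters."""
    return re.sub(
        r"\\u([0-9a-fA-F]{4})",
        lambda m: chr(int(m.group(1), 16)),
        text,
    )


def escape_properties(text):
    """Saving direction: put non-ASCII characters back into \\uXXXX form,
    leaving plain ASCII untouched."""
    return "".join(
        c if ord(c) < 128 else "\\u%04x" % ord(c) for c in text
    )
```

This mirrors what native2ascii does at build time, just moved into the editor's load/save path.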
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206945915-Whats-Next-Summary?page=1
This is another in Robert's Getting Started series, where he provides an introduction to a topic that everyone knows about but not everyone is actually doing (or doing enough of). In this episode, part 1 of 3, Robert shows how to get started with unit testing in Visual Studio. The primary goal of unit testing is to take the smallest piece of testable code in an application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Each unit can be tested separately and can easily be tested over and over again. You can find the sample code and demo scripts for the entire 3-part series here.

Good job. Do you plan to cover how to do data-driven testing with MSTest? It appeared to be a little limited in this area, so I went with NUnit. Thx.

This is interesting. I always wanted to learn about testing. How do I test JavaScript programs? I create Windows 8 apps with JavaScript. In Visual Studio 2013, I find Test under the C# language, but not under JavaScript. How do I get that?

@Sonal: The Microsoft unit test framework works with managed code. For JavaScript unit testing, you should check out one of the available frameworks. There is a good list of them at.

Personally, when I'm trying to learn something new I prefer the smaller chunks at a time. In fact, it seems like PluralSight does that with their videos as well.
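The definition above (isolate the smallest testable unit, check it behaves as expected, repeat at will) is language-agnostic. The video uses MSTest, but the same shape in Python's built-in unittest module looks like this; the add function is a made-up unit under test, not code from the video:

```python
import unittest


def add(a, b):
    """The 'smallest piece of testable code': a made-up unit under test."""
    return a + b


class AddTests(unittest.TestCase):
    # Each test isolates one expectation and can run over and over again.
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negatives(self):
        self.assertEqual(add(-1, 1), 0)
```

Running the file with a test runner (for example, `python -m unittest`) discovers both methods, executes each in isolation, and reports pass or fail per test, which is exactly the workflow MSTest's Test Explorer provides in Visual Studio.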
I don't know what you will be covering in future episodes, but here are some of my notes:

------

If you're doing Windows Store app development, you're going to have to make some changes or additions, for example:

#if NETFX_CORE
using Microsoft.VisualStudio.TestPlatform.UnitTestFramework;
#else
using Microsoft.VisualStudio.TestTools.UnitTesting;
#endif

---

[TestMethod]
[TestCategory( "Timer" ), TestCategory( "Exception expected" )]
#if !NETFX_CORE
[ExpectedException(typeof(NotSupportedException))]
#endif
public void SingleTimer_OnceRanAndStopped_ItShouldNotStartAgain()
{
    //ToDo: Change exception type to something more appropriate.
    SingleTimer timer = new SingleTimer();
    timer.Start();
    timer.Stop();
#if !NETFX_CORE
    timer.Start();
#endif
#if NETFX_CORE
    Assert.ThrowsException<NotSupportedException>(() => timer.Start());
#endif
}

Use templated Asserts. Example: Assert.AreEqual<int>( 0, timers.Count );

I have more, but that's enough for now.

The manuscript dead tree must be replaced by something geekier, like Lumia OneNote RT.

@VSChawathe: I am addicted to paper. What can I say? :)

In the top right side of your Visual Studio environment, Robert, you have 2 MB values; maybe it's the RAM consumption of Visual Studio itself. Is this an extension?

@Luigi Zambetti: It is an internal-only extension. Sorry. Robert

Thanks Robert for this awesome video. Hope to see more videos in this series. My humble opinion will be, when going from red to green, you don't act instead of doing an invalid assert; please let me know what you think?

Great show. I have some notes which aren't that clear to me:
1. If I want to test a method that has more than one path, what's better: writing a single unit test method that has all the testing scenarios, or should I divide it into different test methods, one for each test case?
2. If I have a method that has a lot of code which can be divided into sub-methods, but they are used only in the main method.
Now if I want to test this method, should I first factor out these sub-methods and then test them individually, or just test it as one method? Thanks in advance :)

Great video Robert, looking forward to part 2!

Great episode. I've been adopting unit testing in my latest projects. There were a few things in this video I wasn't aware of, like the ExpectedException attribute. It's also nice to know you can categorize your tests. Looking forward to the next part!

I'm well versed in unit testing, but I was curious. I thought you did a great job covering the basics. Nice episode. Please share episodes 2 & 3 too.

@Niner615224: Episode 2 is now live. I am recording Episode 3 tomorrow and will post it next week. Robert

Excellent show. I like the shorter segments. It breaks it out like chapters.

Thanks for this video. I love it.

I really enjoyed this video and the length was just right - I prefer it broken out into chapters. I will be speaking to my team about unit tests today!

Great session, a must-watch to start :)

I was looking for such a basic video. Thanks, Green.

Hi, there's a serious problem with the subtitles of this video (part 1): there's a delay of several seconds and it is difficult to follow. It would be great if you could fix it, thanks!

Thanks for this video!

This is really excellent, Rod, thank you for doing this. I'm fine with breaking up the videos into smaller chunks.

Good stuff. Short is better.

Excellent! Thank you! :D

Great video on unit testing. I'm doing an Independent Study at my university on different types of automated testing and this is a perfect starting point. You had mentioned books on Test Driven Development. Do you have any in particular you suggest I could potentially look up for my project? Thanks!

Sir, please explain how to apply unit tests in an ASP.NET Web Forms project.
https://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Getting-Started-with-Unit-Testing-Part-1
A Comprehensive Guide to C# and the .NET Platform, ©2001 Andrew Troelsen (Apress, ISBN: 1-893115-59-3)

Chapter 5: Advanced C# Class Construction Techniques

This chapter rounds out your introduction to the core aspects of the C# language by examining a number of advanced (but extremely useful) syntactic constructs. To begin, you learn how to construct and use an indexer method. This C# mechanism enables you to build custom types which expose internal subtypes using the familiar bracket operator (i.e., []). If you have a C++ background, you will find that creating a C# indexer method is analogous to overloading the [] operator on a C++ class. Once you learn how to build an indexer, you then examine how to overload various operators (+, -, <, >, and so forth) for a custom C# type. This chapter then examines three techniques that enable the objects in your system to engage in bidirectional communications. First, you learn about the C# "delegate" keyword, which is little more than a type-safe function pointer. Once you learn how to create and manipulate delegates, you are in a perfect position to investigate the .NET event protocol, which is based on the delegation model. Finally, you discover how the use of custom interfaces can also enable bidirectional communications (which should ring a bell for those coming from a COM background). I wrap up by examining how you can document your types using XML attributes, and how the Visual Studio .NET IDE automatically generates Web-based documentation for your projects. Although this might not qualify as a truly "advanced" technique, it is a high note on which to end the chapter.

Building a Custom Indexer

At this point, you should feel confident building C# types using traditional OOP (refer to Chapter 3) as well as interface-based programming techniques (refer to Chapter 4).
In this chapter, I take some time to examine some additional aspects of C# that you may not be readily familiar with, beginning with the concept of an indexer. Most programmers (such as yourself) are very familiar with the process of accessing discrete items held within a standard array using the index (aka bracket) operator:

// Declare an array of integers.
int[] myInts = {10, 9, 100, 432, 9874};
// Use the [] operator to access each element.
for(int j = 0; j < myInts.Length; j++)
    Console.WriteLine("Index {0} = {1}", j, myInts[j]);

The C# language supports the capability to build custom classes that may be indexed just like an array of intrinsic types. It should be no big surprise that the method that provides the capability to access items in this manner is termed an "indexer." Before exploring how to create such a construct, let's begin by seeing one in action. Assume you have added support for an indexer method to the Cars container developed in the previous chapter. Observe the following usage:

// Indexers allow you to access items in an array-like fashion.
public class CarApp
{
    public static void Main()
    {
        // Assume the Cars type has an indexer method.
        Cars carLot = new Cars();
        // Make some cars and add them to the car lot.
        carLot[0] = new Car("FeeFee", 200, 0);
        carLot[1] = new Car("Clunker", 90, 0);
        carLot[2] = new Car("Zippy", 30, 0);
        // Now obtain and display each item.
        for(int i = 0; i < 3; i++)
        {
            Console.WriteLine("Car number {0}:", i);
            Console.WriteLine("Name: {0}", carLot[i].PetName);
            Console.WriteLine("Max speed: {0}", carLot[i].MaxSpeed);
        }
    }
}

A test run would look something like Figure 5-1.

Figure 5-1. Accessing cars using an indexer

As you can see, indexers behave much like a custom collection supporting the IEnumerator and IEnumerable interfaces. The only major difference is that rather than
The only major difference is that rather than A Comprehensive Guide to C# and the .NET Platform, ©2001 Andrew Troelsen (Apress, ISBN: 1-893115-59-3) p. 5-3 accessing the contents using interface references, you are able to manipulate the internal collection of automobiles just like a standard array. Now for the big question: How do you configure the Cars class (or any class) to do so? The indexer itself is represented as a slightly mangled C# property. In its simplest form, an indexer is created using the this[] syntax: // Add the indexer to the existing class definition. public class Cars : IEnumerator, IEnumerable { … // Let's rollback to the basics and simply make use of a standard array // to contain the cars. You are free to use an ArrayList if you desire… private Car[] carArray; public Cars() { carArray = new Car[10]; } // The indexer returns a Car based on a numerical index. public Car this[int pos] { // Accessor returns an item in the array. Accessor get { if(pos < 0 || pos > 10) throw new IndexOutOfRangeException("Out of range!"); else return (carArray[pos]); } // Mutator populates the array. set { carArray[pos] = value;} } } Beyond the use of the “this” keyword, the indexer looks just like any other C# property declaration. Do be aware that indexers do not provide any array-like functionality beyond the use of the subscript operator. In other words, the object user cannot write code such as: // Use System.Array.Length? Nope! Console.WriteLine("Cars in stock: {0}", carLot.Length); To support this functionality, you would need to add your own Length property to the Cars type, and delegate accordingly: A Comprehensive Guide to C# and the .NET Platform, ©2001 Andrew Troelsen (Apress, ISBN: 1-893115-59-3) public class Cars { … // Containment / delegation in action once again. public int Length() { /* figure out number of non-null entries in array. */} } p. 
However, if you are in need of this functionality, you will find your task will be much easier if you make direct use of one of the System.Collections types to hold your internal items, rather than a simple array.

SOURCE CODE The Indexer project is located under the Chapter 5 subdirectory.

Overloading Operators

C#, like any programming language, has a canned set of tokens that are used to perform basic operations on intrinsic types. For example, everyone knows that the + operator can be applied to two integers in order to yield a new integer:

// The + operator in action.
int a = 100;
int b = 240;
int c = a + b;   // c == 340

This is no major news flash, but have you ever stopped and noticed how the same + operator can be applied to any intrinsic C# data type? For example:

// + operator with strings.
string s1 = "Hello";
string s2 = " world!";
string s3 = s1 + s2;   // s3 == "Hello world!"

In essence, the + operator has been overloaded to function correctly on various individual data types. When the + operator is applied to numerical types, the result is the summation of the operands. However, when applied to string types, the result is string concatenation. The C# language (like C++ and unlike Java) provides the capability for you to build custom classes and structures that also respond uniquely to the same set of basic tokens (such as the + operator). Thus, if you equip a type to do so, it is possible to apply various operators to a custom class. To keep your wits about you, assume the following simple Point class:

// You can't get much lamer than this!
public class Point
{
    private int x, y;
    public Point(){}
    public Point(int xPos, int yPos)
    {
        x = xPos;
        y = yPos;
    }
    public override string ToString()
    {
        return "X pos: " + this.x + " Y pos: " + this.y;
    }
}

Now, logically speaking, it makes sense to add Points together; likewise, it would be helpful to subtract one Point from another. For example, if you created two Point objects with some initial startup values, you would like to do something like this:

// Adding and subtracting two Points.
public static int Main(string[] args)
{
    // Make two points.
    Point ptOne = new Point(100, 100);
    Point ptTwo = new Point(40, 40);
    // Add the points to make a new point.
    Point bigPoint = ptOne + ptTwo;
    Console.WriteLine("Here is the big point: {0}", bigPoint.ToString());
    // Subtract the points to make a new point.
    Point minorPoint = bigPoint - ptOne;
    Console.WriteLine("Just a minor point: {0}", minorPoint.ToString());
    return 0;
}

Clearly, your goal is to somehow make your Point class react uniquely to the + and - operators. To allow a custom type to respond to these intrinsic tokens, C# provides the "operator" keyword, which can only be used in conjunction with static methods. To illustrate:

// A more intelligent Point class.
public class Point
{
    private int x, y;
    public Point(){}
    public Point(int xPos, int yPos)
    {
        x = xPos;
        y = yPos;
    }
    public override string ToString()
    {
        return "X pos: " + this.x + " Y pos: " + this.y;
    }
    // The Point class can be added…
    public static Point operator + (Point p1, Point p2)
    {
        Point newPoint = new Point(p1.x + p2.x, p1.y + p2.y);
        return newPoint;
    }
    // …and subtracted.
    public static Point operator - (Point p1, Point p2)
    {
        // Figure new X (assume [0,0] base).
        int newX = p1.x - p2.x;
        if(newX < 0)
            throw new ArgumentOutOfRangeException();
        // Figure new Y (also assume [0,0] base).
        int newY = p1.y - p2.y;
        if(newY < 0)
            throw new ArgumentOutOfRangeException();
        return new Point(newX, newY);
    }
}

Notice that the class now contains two strange looking methods called operator + and operator -. The logic behind operator + is simply to return a brand new Point based on the summation of the incoming Point objects. If you were to take your class out for a test run, you would see something like Figure 5-2.

Figure 5-2. Overloaded operators at work

The capability to overload operators is useful in that it enables the object user to work with your types (more or less) like any intrinsic data item. Thus, when you write pt1 + pt2, under the hood you can envision the following hidden call to the static operator + method:

// p3 = Point.operator + (p1, p2)
p3 = p1 + p2;

Likewise, p1 - p2 maps to:

// p3 = Point.operator - (p1, p2)
p3 = p1 - p2;

Other languages (such as Java) do not support this capability. Remember, however, that overloaded operators are little more than a friendly variation on a traditional public method; thus, you can achieve the same functionality using public methods. For example, you could write the Point class as so:

// Making use of methods rather than overloaded ops.
public class Point
{
    …
    // Operator + as AddPoints()
    public static Point AddPoints(Point p1, Point p2)
    {
        return new Point(p1.x + p2.x, p1.y + p2.y);
    }
    // Operator - as SubtractPoints()
    public static Point SubtractPoints(Point p1, Point p2)
    {
        // Figure new X.
        int newX = p1.x - p2.x;
        if(newX < 0)
            throw new ArgumentOutOfRangeException();
        // Figure new Y.
        int newY = p1.y - p2.y;
        if(newY < 0)
            throw new ArgumentOutOfRangeException();
        return new Point(newX, newY);
    }
}

You could then add Points as so:

// As member f(x)'s.
Point finalPt = Point.AddPoints(ptOne, ptTwo);
Console.WriteLine("My final point: {0}", finalPt.ToString());

Also understand that the capability to overload operators is not a requirement of the Common Language Specification; thus, not all .NET-aware languages support types containing overloaded operators, and overloaded operators are not CLS-compliant. Seen in this light, overloaded operators are always an optional construct you may choose to support for a given class. When you are building production level classes that support overloaded operators, you should always support member function equivalents. To maximize your coding efforts, simply have the overloaded operator call the member function alternative (or vice versa). For example:

// For overload operator aware languages.
public static Point operator + (Point p1, Point p2)
{
    return AddPoints(p1, p2);
}

Overloading the Equality Operators

As you recall, System.Object.Equals() can be overridden in order to perform value-based (rather than reference-based) comparisons between objects. In addition to overriding Equals() and GetHashCode(), an object may choose to override the equality operators (== and !=). To illustrate, here is the updated Point class:

// This incarnation of Point also overloads the == and != operators.
public class Point
{
    public int x, y;
    public Point(){}
    public Point(int xPos, int yPos)
    {
        x = xPos;
        y = yPos;
    }
    …
    public override bool Equals(object o)
    {
        if( ((Point)o).x == this.x && ((Point)o).y == this.y )
            return true;
        else
            return false;
    }
    public override int GetHashCode()
    {
        return this.ToString().GetHashCode();
    }
    // Now let's overload the == and != operators.
    public static bool operator ==(Point p1, Point p2)
    {
        return p1.Equals(p2);
    }
    public static bool operator !=(Point p1, Point p2)
    {
        return !p1.Equals(p2);
    }
}

The object user may now compare Points as so:

// Are they different?
if(ptOne != ptTwo)
    Console.WriteLine("Nope, different values.");
else
    Console.WriteLine(…);

On a related note, keep in mind that C# demands that if you override operator ==, you must also override operator !=.

Overriding the Comparison Operators

In the previous chapter,
the object user could now compare types as so: . ISBN: 1-893115-59-3) p. just as when you override Equals() you will need to override GetHashCode(). ©2001 Andrew Troelsen (Apress.WriteLine("Same values!").").Equals(p2). in order to compare the relative relationship between two like objects. <= and >=) for the same class. it is quite intuitive to compare two objects using the well-known = = and != operators rather than making a call to Object. If you do overload the equality operators for a given class. you learned how to implement the IComparable interface.WriteLine("Same values!"). you may also overload the comparison operators (<. SOURCE CODE The OverLoadOps project is located under the Chapter 5 subdirectory. Point p2) { return !p1. classes that override Object. you can now exercise your Point class as so: // Make use of the overloaded equality operators. The same holds true for the <= and >= operators. Additionally."). Given this. >. C# demands that < and > are overloaded as a set. This ensures that an object behaves in a uniform manner during comparisons and functions correctly if placed into a hash table (if you forget.Equals() should always overload the = = and !+ operators. the compiler will let you know).WriteLine("These are not equal.NET Platform. As you can see. If the Car type you developed in Chapter 4 overloaded these comparison operators.A Comprehensive Guide to C# and the . public static int Main(string[] args) { … if(ptOne = = ptTwo ptOne ptTwo) else Console. As a rule of thumb. } } Notice how the implementation of operator = = and operator != simply makes a call to the overridden Equals() method to get the bulk of the work done. // Is Rusty less than Chucky? if(myAutos[0] < myAutos[4] myAutos[0] myAutos[4]) myAu Console.WriteLine("Rusty is less than Chucky!"). public class Car : IComparable { … public int CompareTo(object o) { Car temp = (Car)o.WriteLine("Chucky is less than Rusty!"). 
public class CarApp { public static int Main(string[] args) { // Make an array of Car types.CarID) return -1. myAutos[1] = new Car(6. if(this. "Mary"). myAutos[0] = new Car(123. Here is the updated class definition: // This class is also comparable using the comparison operators. "Rusty"). "Chucky"). if(this. else Console. } public static bool operator < (Car c1. "NoName"). Car c2) { IComparable itfComp = (IComparable)c1. ©2001 Andrew Troelsen (Apress. myAutos[2] = new Car(6. Car[] myAutos = new Car[5]. overloading the comparison operators is trivial.A Comprehensive Guide to C# and the .NET Platform. return 0. "Viper"). Car c2) { .CarID > temp. return (itfComp. 5-10 Because the Car type already implements IComparable (see Chapter 4). } } p. myAutos[4] = new Car(6.CompareTo(c2) < 0). else return 0.CarID < temp. myAutos[3] = new Car(13. } public static bool operator > (Car c1.CarID) return 1. ISBN: 1-893115-59-3) // Exercise the overloaded < operator for the Car class. let’s say you overloaded the multiplication operator for the Engine class. Also. fractions.CompareTo(c2) <= 0). be aware that you cannot overload each and every intrinsic C# operator. Overloading operators is generally only useful when building utility types. managers.A Comprehensive Guide to C# and the .NET-aware language.CompareTo(c2) > 0). rectangles. If you want to be completely sure that your types will work in any . well-known operators. ©2001 Andrew Troelsen (Apress.NET Platform. points. and hexagons make good candidates for operator overloading. return (itfComp. Strings. } } p. Car c2) { IComparable itfComp = (IComparable)c1. Table 5-1 outlines the “overloadability” of each item: . Now. and baseball hats do not. People.CompareTo(c2) >= 0). 5-11 SOURCE CODE The ObjCompWithOps project is located under the Chapter 5 subdirectory. always remember that not all languages targeting the .NET platform will support overloaded operators for custom types! Therefore. Use this feature wisely. headphones. 
} public static bool operator <= (Car c1. Final Thoughts Regarding Operator Overloading As you have just seen. } public static bool operator >= (Car c1. ISBN: 1-893115-59-3) IComparable itfComp = (IComparable)c1. return (itfComp. return (itfComp. What exactly would it mean to multiply two Engine objects? Not much. you must be sure that the operator(s) you are about to overload make some sort of logical sense in the world at large. Finally. Car c2) { IComparable itfComp = (IComparable)c1. cars. always test your types against any language that may make use of a class defining overloaded operators. For example. supply the same functionality using custom methods in addition to your operator set (as illustrated earlier in this chapter). C# provides the capability to build types that can respond uniquely to various intrinsic. before you go and retrofit all your classes to support such behavior. A Comprehensive Guide to C# and the . However. C(++) callbacks could be configured to include additional type-safe information such as the number of (and types of) parameters. The comparison operators can be overloaded. the C# will demand that “like” operators (i. return value. Understanding (and Using) Delegates Up until this point. Thus.” Using callbacks. and !=) are overloaded together. 5-12 Table 5-1. and calling convention. ©2001 Andrew Troelsen (Apress.. = =. < and >. every sample application you have developed added various bits of code to Main(). As you may know. As you have seen earlier in this chapter. Valid Overloadable Operators C# Operator + ! ~ ++ -true false + * / % & | ^ << >> == != < > <= >= [] Meaning in Life (Can this operator be overloaded?) This set of unary operators can be overloaded. programmers are able to configure one function to report back to (call back) another function in the application. Ideally. however. the indexer construct provides the same functionality. 
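Table 5-1 notes that the unary ++ and -- operators may be overloaded as well, although the chapter does not show an example. The following is a brief sketch of my own (not one of the book's listings — the increment semantics chosen here are purely illustrative), reusing the chapter's Point class. A unary operator overload takes a single parameter of the defining type and, by convention, returns a new adjusted object:

```csharp
// A sketch (not from the chapter): overloading the unary ++ operator
// on the familiar Point class.
using System;

public class Point
{
    public int x, y;
    public Point(int xPos, int yPos){ x = xPos; y = yPos; }
    public override string ToString()
    { return "X pos: " + x + " Y pos: " + y; }

    // Unary operators take a single parameter of the defining type.
    // Here, ++ is (arbitrarily) defined to bump both coordinates.
    public static Point operator ++ (Point p)
    { return new Point(p.x + 1, p.y + 1); }
}

public class PointApp
{
    public static int Main(string[] args)
    {
        Point pt = new Point(10, 10);
        pt++;   // Maps to: pt = Point.operator ++ (pt)
        Console.WriteLine(pt);   // X pos: 11 Y pos: 11
        return 0;
    }
}
```

As with the binary operators, `pt++` is simply shorthand for a hidden static method call; the compiler assigns the returned Point back to the `pt` variable.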
Understanding (and Using) Delegates

Up until this point, every sample application you have developed added various bits of code to Main(), which (in some way or another) sent messages to a given object. However, you have not yet examined how these objects can talk back to the object that created them in the first place. In the "real world" it is quite common for the objects in a system to engage in a two-way conversation. Because the ability for one object to call back to another object is such a helpful construct, let's examine a number of ways in which objects can be programmed to do this very thing.

As you may know, the Windows API makes frequent use of function pointers to create entities termed "callback functions" or simply "callbacks." Using callbacks, programmers are able to configure one function to report back to (call back) another function in the application. The problem with standard C(++) callback functions is that they represent nothing more than a simple memory address. Ideally, callbacks could be configured to include additional type-safe information such as the number of (and types of) parameters, the return value, and the calling convention. Sadly, this is not the case in traditional C(++)/Win32 callback functions.

In C#, the callback technique is accomplished in a much safer and more object-oriented manner using the "delegate" keyword. Under the hood, the "delegate" keyword represents an instance of a class deriving from System.MulticastDelegate. When you wish to create a delegate in C#, you not only specify the name of the method, but the set of parameters (if any) and return type as well. Thus, when you write:

    public delegate void PlayAcidHouse(object PaulOakenfold, int volume);

the C# compiler produces a new class, which looks something like the following:

    public class PlayAcidHouse : System.MulticastDelegate
    {
        PlayAcidHouse(object target, int ptr);

        // The synchronous Invoke() method.
        public virtual void Invoke(object PaulOakenfold, int volume);

        // You also receive an asynchronous version of the same callback.
        public virtual IAsyncResult BeginInvoke(object PaulOakenfold, int volume,
            AsyncCallback cb, object o);
        public virtual void EndInvoke(IAsyncResult result);
    }

Notice that the class that is created on your behalf contains public methods that enable you to synchronously or asynchronously work with the delegate (Invoke() and BeginInvoke()/EndInvoke(), respectively). To keep things simple, I will focus only on the synchronous behavior of the MulticastDelegate type.

Building an Example Delegate

To illustrate the use of delegates, let's begin by updating the Car class to include two new Boolean member variables. The first is used to determine if your automobile is due for a wash (isDirty); the other represents if the car in question is in need of a tire rotation (shouldRotate). To enable the object user to interact with this new state data, Car also defines some additional properties and an updated constructor. Here is the story so far:

    // Another updated Car class.
    public class Car
    {
        …
        // NEW! Are we in need of a wash? Need to rotate tires?
        private bool isDirty;
        private bool shouldRotate;

        // Extra params to set bools.
        public Car(string name, int max, int curr, bool dirty, bool rotate)
        {
            …
            isDirty = dirty;
            shouldRotate = rotate;
        }

        // Get and set isDirty.
        public bool Dirty
        {
            get{ return isDirty; }
            set{ isDirty = value; }
        }

        // Get and set shouldRotate.
        public bool Rotate
        {
            get{ return shouldRotate; }
            set{ shouldRotate = value; }
        }
    }

Now, assume you have declared the following delegate (which, again, is nothing more than an object-oriented wrapper around a function pointer) within your current namespace:

    // This delegate is actually a class encapsulating a function pointer
    // to 'some method' taking a Car as a parameter and returning void.
    public delegate void CarDelegate(Car c);

Here, you have created a delegate named CarDelegate. The CarDelegate type represents "some" function taking a Car as a parameter and returning void. If you were to examine the internal representation of this type using ILDasm.exe, you would see something like Figure 5-3 (notice the "extends" informational node).

Figure 5-3. C# delegates represent a class deriving from MulticastDelegate

Delegates as Nested Types

Currently, your delegate is decoupled from its logically related Car type (given that you have simply declared the CarDelegate type within the defining namespace). While there is nothing horribly wrong with the approach, a more enlightened alternative would be to define the CarDelegate directly within the Car class:

    // This time, define the delegate as part of the class definition.
    public class Car : Object
    {
        // This is represented as Car$CarDelegate (i.e., a nested type).
        public delegate void CarDelegate(Car c);
        …
    }

Given that the "delegate" keyword produces a new class deriving from System.MulticastDelegate, the CarDelegate is in fact a nested type definition! If you check ILDasm.exe, you will see the truth of the matter (see Figure 5-4).

Figure 5-4. Nesting the delegate

So to review thus far: when you create delegates, you indirectly build a type that derives from System.MulticastDelegate (which, by the way, derives from the System.Delegate base class).
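Before examining the inherited members in detail, it may help to see the smallest possible use of such a delegate in isolation. The sketch below is my own, not the chapter's Garage example (the Honk() target method is invented for illustration); it shows that invoking a delegate variable with ordinary call syntax is shorthand for calling the compiler-generated Invoke() method, and previews two inherited members discussed next (Target is null when the delegate points to a static method, and Method identifies the function pointed to):

```csharp
// A minimal, self-contained sketch (the Honk() target is hypothetical,
// not a method from the chapter's Car type).
using System;

public class Car
{
    public string PetName;
    public Car(string name){ PetName = name; }

    // The nested delegate, as suggested in the chapter.
    public delegate void CarDelegate(Car c);
}

public class App
{
    // A static target whose signature matches the delegate exactly.
    public static void Honk(Car c)
    { Console.WriteLine("Honking: {0}", c.PetName); }

    public static int Main(string[] args)
    {
        Car.CarDelegate d = new Car.CarDelegate(Honk);

        Car myCar = new Car("Viper");
        d(myCar);          // Ordinary call syntax…
        d.Invoke(myCar);   // …is shorthand for the generated Invoke() method.

        // Honk() is static, so there is no target object.
        Console.WriteLine(d.Target == null);   // True
        Console.WriteLine(d.Method.Name);      // Honk
        return 0;
    }
}
```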
Table 5-2 illustrates some interesting inherited members to be aware of.

Table 5-2. Select Inherited Members

Inherited Member       Meaning in Life
Method                 This property returns the name of the method pointed to.
Target                 If the method pointed to is a member of a class, this member returns the object the delegate calls back on. If the value returned from Target equals null, the method pointed to is static.
Combine()              This static method is used to build a delegate that points to a number of different functions. Multicast delegates are capable of pointing to any number of functions, because this class has the capability to hold multiple function pointers using an internal linked list. The function pointers themselves can be added to the linked list using the Combine() method or the overloaded + operator.
GetInvocationList()    Returns an array of Delegate types, each representing an entry in the list of function pointers.
Remove()               This static method removes a delegate from the list of function pointers. To remove a function from the internal list, call Remove().

Using the CarDelegate

Now that you have a pointer to "some" function, you can create other functions that take this delegate as a parameter. To illustrate, assume you have a new class named Garage. This type maintains a collection of Car types contained in an ArrayList. Upon creation, the ArrayList is filled with some initial Car types. More importantly, the Garage class defines a public ProcessCars() method, which takes a single argument of type Car.CarDelegate. In the implementation of ProcessCars(), you pass each Car in your collection as a parameter to the "function pointed to" by the delegate. To help understand the inner workings of the delegation model, let's also make use of two members defined by the System.MulticastDelegate class (Target and Method) to determine exactly which function the delegate is currently pointing to. Here, then, is the complete definition of the Garage class:

    // The Garage class has a method that makes use of the CarDelegate.
    public class Garage
    {
        // A list of all cars in the garage.
        ArrayList theCars = new ArrayList();

        // Create the cars in the garage.
        public Garage()
        {
            // Recall, we updated the ctor to set isDirty and shouldRotate.
            theCars.Add(new Car("Viper", 100, 0, true, false));
            theCars.Add(new Car("Fred", 100, 0, false, false));
            theCars.Add(new Car("BillyBob", 100, 0, false, true));
            theCars.Add(new Car("Bart", 100, 0, true, true));
            theCars.Add(new Car("Stan", 100, 0, false, true));
        }

        // This method takes a Car.CarDelegate as a parameter.
        // Therefore! 'proc' is nothing more than a function pointer!
        public void ProcessCars(Car.CarDelegate proc)
        {
            // Diagnostics: Where are we forwarding the call?
            Console.WriteLine("***** Calling: {0} *****", proc.Method.ToString());

            // Diagnostics: Are we calling an instance method or a static method?
            if(proc.Target != null)
                Console.WriteLine("-->Target: {0}", proc.Target.ToString());
            else
                Console.WriteLine("-->Target is a static method");

            // Real Work: Now call the method, passing in each car.
            foreach(Car c in theCars)
                proc(c);
        }
    }

When the object user calls ProcessCars(), it will send in the name of the method that should handle this request. For the sake of argument, assume these are static members named WashCar() and RotateTires(). Consider the following usage:

    // The garage delegates all work orders to these static functions
    // (finding a good mechanic is always a problem…).
    public class CarApp
    {
        // A target for the delegate.
        public static void WashCar(Car c)
        {
            if(c.Dirty)
                Console.WriteLine("Cleaning a car");
            else
                Console.WriteLine("This car is already clean...");
        }

        // Another target for the delegate.
        public static void RotateTires(Car c)
        {
            if(c.Rotate)
                Console.WriteLine("Tires have been rotated");
            else
                Console.WriteLine("Don't need to be rotated...");
        }

        public static int Main(string[] args)
        {
            // Make the garage.
            Garage g = new Garage();

            // Wash all dirty cars.
            g.ProcessCars(new Car.CarDelegate(WashCar));

            // Rotate the tires.
            g.ProcessCars(new Car.CarDelegate(RotateTires));
            return 0;
        }
    }

Notice (of course) that the two static methods are an exact match to the delegate type (void return value and a single Car argument). Also recall that when you pass in the name of your function as a constructor parameter, you are adding this item to the internal linked list maintained by System.MulticastDelegate. Figure 5-5 shows the output of this test run (notice the output messages supplied by the Target and Method properties).

Figure 5-5. Delegate output, take one
Analyzing the Delegation Code

As you can see, the Main() method begins by creating an instance of the Garage type. This class has been configured to delegate all work to other named static functions. Now, when you write the following:

    // Wash all dirty cars.
    g.ProcessCars(new Car.CarDelegate(WashCar));

what you are effectively saying is "Add a pointer to the WashCar() function to the CarDelegate type, and pass this delegate to Garage.ProcessCars()." Notice that you are passing in a new delegate instance; this might seem odd at first, but remember that the function pointer is represented by an instance of type System.MulticastDelegate, and therefore must be "new-ed." Like most real-world garages, the real work is delegated to another part of the system (which explains why a 30-minute oil change takes 2 hours). Given this, you can assume that ProcessCars() actually behaves like the following under the hood:

    // CarDelegate points to the WashCar function:
    public void ProcessCars(Car.CarDelegate proc)
    {
        …
        foreach(Car c in theCars)
            proc(c);   // proc(c) => CarApp.WashCar(c)
    }

Likewise, if you say:

    // Rotate the tires.
    g.ProcessCars(new Car.CarDelegate(RotateTires));

ProcessCars() can be understood as:

    // CarDelegate points to the RotateTires function:
    public void ProcessCars(Car.CarDelegate proc)
    {
        …
        foreach(Car c in theCars)
            proc(c);   // proc(c) => CarApp.RotateTires(c)
    }

Multicasting

Recall that a multicast delegate is an object that is capable of calling any number of functions. In the current example, you did not make use of this feature; rather, you made two calls to Garage.ProcessCars(), sending in a new instance of the CarDelegate each time. To illustrate multicasting, assume you have updated Main() to look like the following:

    // Add two function pointers to the internal linked list.
    public static int Main(string[] args)
    {
        // Make the garage.
        Garage g = new Garage();

        // Create two new delegates.
        Car.CarDelegate wash = new Car.CarDelegate(WashCar);
        Car.CarDelegate rotate = new Car.CarDelegate(RotateTires);

        // The overloaded + operator can be applied to multicast delegates.
        // The result is a new delegate that maintains pointers to
        // both functions.
        g.ProcessCars(wash + rotate);
        return 0;
    }

Here, you begin by creating two new CarDelegate objects, each of which points to a given function. When you call ProcessCars(), you pass in the combination of the two. Do note that the + operator is simply a shorthand for calling the static Delegate.Combine() method. Thus, you could write the following equivalent (but uglier) code:

    // The + operator has the same effect as calling the Combine() method.
    Car.CarDelegate wash = new Car.CarDelegate(WashCar);
    Car.CarDelegate rotate = new Car.CarDelegate(RotateTires);

    // Send the new delegate into the ProcessCars() method.
    g.ProcessCars((Car.CarDelegate)Delegate.Combine(wash, rotate));

Furthermore, if you wish to hang on to the new delegate for later use, you could write the following instead:

    // Create two new delegates.
    Car.CarDelegate wash = new Car.CarDelegate(WashCar);
    Car.CarDelegate rotate = new Car.CarDelegate(RotateTires);

    // Store the new delegate for later use.
    MulticastDelegate d = wash + rotate;

    // Send the new delegate into the ProcessCars() method.
    g.ProcessCars((Car.CarDelegate)d);

Regardless of how you configure a multicast delegate, understand that when you call Combine() (or use the overloaded + operator) you are adding a new function pointer to the internal list. If you wish to remove an item from this internal linked list, you can call the static Remove() method. The first parameter marks the delegate you wish to manipulate, while the second parameter marks the item to remove:

    // The static Remove() method returns a Delegate type.
    Delegate washOnly = MulticastDelegate.Remove(d, rotate);
    g.ProcessCars((Car.CarDelegate)washOnly);

Before you view the output of this program, let's also update ProcessCars() to print out each function pointer stored in the linked list using Delegate.GetInvocationList(). This method returns an array of Delegate objects, which you iterate over using foreach:

    // Now print out each member in the linked list.
    public void ProcessCars(Car.CarDelegate proc)
    {
        // Where are we passing the call?
        foreach(Delegate d in proc.GetInvocationList())
        {
            Console.WriteLine("***** Calling: " + d.Method.ToString() + " *****");
        }
        …
    }

The output is shown in Figure 5-6.

Figure 5-6. Delegate output, take two

Instance Methods as Callbacks

Currently, the CarDelegate type is storing pointers to static functions. This is not a requirement of the delegate protocol. It is also possible to delegate a call to a method defined on any object instance. To illustrate, assume that the WashCar() and RotateTires() methods have now been moved into a new class named ServiceDept:
    // We have now moved the static functions into a helper class.
    public class ServiceDept
    {
        // Not static!
        public void WashCar(Car c)
        {
            if(c.Dirty)
                Console.WriteLine("Cleaning a car");
            else
                Console.WriteLine("This car is already clean...");
        }

        // Still not static!
        public void RotateTires(Car c)
        {
            if(c.Rotate)
                Console.WriteLine("Tires have been rotated");
            else
                Console.WriteLine("Don't need to be rotated...");
        }
    }

You could now update Main() as so:

    // Delegate to instance methods of the ServiceDept type.
    public static int Main(string[] args)
    {
        // Make the garage.
        Garage g = new Garage();

        // Make the service department.
        ServiceDept sd = new ServiceDept();

        // The garage delegates the work to the service department.
        Car.CarDelegate wash = new Car.CarDelegate(sd.WashCar);
        Car.CarDelegate rotate = new Car.CarDelegate(sd.RotateTires);
        MulticastDelegate d = wash + rotate;

        // Tell the garage to do some work.
        g.ProcessCars((Car.CarDelegate)d);
        return 0;
    }

Now notice the output in Figure 5-7 (check out the name of the target).

Figure 5-7. Delegating to instance methods

SOURCE CODE  The CarDelegate project is located under the Chapter 5 subdirectory.

Understanding (and Using) Events

Delegates are fairly interesting constructs because you can resolve the name of a function to call at runtime, rather than compile time. Admittedly, this syntactic orchestration can take a bit of getting used to. However, because the ability for one object to call back to another object is such a helpful construct, C# provides the "event" keyword to lessen the burden of using delegates in the raw. The most prevalent use of the event keyword would be found in GUI-based applications, in which Button, TextBox, and Calendar widgets all report back to the containing Form when a given action (such as clicking a Button) has occurred. However, events are not limited to GUI-based applications. Indeed, they can be quite helpful when creating "non-GUI" based projects (as you will now see).

Recall that the current implementation of Car.SpeedUp() (see Chapter 3) throws an exception if the user attempts to increase the speed of an automobile that has already been destroyed. This is a rather brute-force way to deal with the problem, given that the exception has the potential to halt the program's execution if the error is not handled in an elegant manner. A better design would be to simply inform the object user when the car has died using a custom event, and allow the caller to act accordingly.

Establishing an event is a two-step process. First, you need to define a delegate, which as you recall represents a pointer to the method(s) to call when the event is sent. Next, you define the events themselves using the "event" keyword. Let's reconfigure the Car to send two events to those who happen to be listening. The first event (AboutToBlow) will be sent when the current speed is 10 miles below the maximum speed. The second event (Exploded) will be sent when the user attempts to speed up a car that is already dead. Here is the updated Car class (also notice that I have added a private Boolean to represent the state of the car):
    // This car can 'talk back' to the user.
    public class Car
    {
        …
        // Is the car alive or dead?
        private bool dead;

        // Holds the function(s) to call when the event occurs.
        public delegate void EngineHandler(string msg);

        // This car can send these events.
        public static event EngineHandler Exploded;
        public static event EngineHandler AboutToBlow;
        …
    }

Firing an event (i.e., sending the event to those who happen to be listening) is as simple as specifying the event by name and sending out any specified parameters. To illustrate, update the previous implementation of SpeedUp() to send each event accordingly (and remove the previous exception logic):

    // Fire the correct event based on our current state of affairs.
    public void SpeedUp(int delta)
    {
        // If the car is dead, send exploded event.
        if(dead)
        {
            if(Exploded != null)
                Exploded("Sorry, this car is dead...");
        }
        else
        {
            currSpeed += delta;

            // Almost dead? Send about-to-blow event.
            if(10 == maxSpeed - currSpeed)
                if(AboutToBlow != null)
                    AboutToBlow("Careful, approaching terminal speed!");

            // Still OK! Proceed as usual.
            if(currSpeed >= maxSpeed)
                dead = true;
            else
                Console.WriteLine("\tCurrSpeed = {0}", currSpeed);
        }
    }

With this, you have configured the car to send two custom events (under the correct conditions). You will see the usage of this new automobile in just a moment, but first, let's check the event architecture in a bit more detail.
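One detail of the SpeedUp() listing deserves emphasis: the `if(Exploded != null)` guards are not optional style. An event with no registered listeners is simply a null delegate reference, and invoking it directly triggers a NullReferenceException. The following standalone sketch of mine (the Alarm type and its members are invented for illustration, not part of the chapter's Car example) shows the guard in action:

```csharp
// A sketch (hypothetical Alarm type): why event firing is guarded
// with a null check before invocation.
using System;

public class Alarm
{
    public delegate void AlarmHandler(string msg);
    public static event AlarmHandler Ring;

    public static void Trigger()
    {
        // Without this guard, 'Ring("…")' would throw a
        // NullReferenceException when nobody has subscribed yet.
        if(Ring != null)
            Ring("Wake up!");
        else
            Console.WriteLine("Nobody is listening.");
    }

    public static void OnRing(string msg)
    { Console.WriteLine("Got: {0}", msg); }

    public static int Main(string[] args)
    {
        Trigger();                          // Nobody is listening.
        Ring += new AlarmHandler(OnRing);   // Attach a listener.
        Trigger();                          // Got: Wake up!
        return 0;
    }
}
```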
the Exploded event expands to the following methods: // The following event expands to: // add_Exploded() // remove_Exploded() // public static event EngineHandler Exploded Exploded. which associates the corresponding delegate to a given event. one having an “add_” prefix.” To illustrate. check out Figure 5-8. when an event is raised.NET Platform. In this way. method public hidebysig specialname static void { add_AboutToBlow(class CarEvents. ©2001 Andrew Troelsen (Apress.Combine() is handled on your behalf): . Events under the hood As you can see.A Comprehensive Guide to C# and the . you would find the following (note the call to Delegate.Car/EngineHandler 'value') cil managed synchronized . ISBN: 1-893115-59-3) p. 5-26 Figure 5-8.NET Platform. each event (Exploded and AboutToBlow) is internally represented as the following members: * A private static class * An add_XXX() method * A remove_XXX() method If you were to check out the IL instructions behind add_AboutToBlow(). Car::add_AboutToBlow add_AboutToBlow(class CarEvents.Delegate) IL_000b: castclass CarEvents.Remove() automatically: .Car::remove_AboutToBlow remove_AboutToBlow(class remove_AboutToBlow CarEvents.method public hidebysig specialname static void { // Code size .Delegate::Remove(class [mscorlib]System. now that you understand how to build a class that can send events.Car::AboutToBlow 22 (0x16) remove_AboutToBlow(class CarEvents.removeon void CarEvents.Delegate.0 IL_0006: call class [mscorlib]System.0 IL_0006: call class [mscorlib]System.Delegate::Remove class CarEvents. ISBN: 1-893115-59-3) // Code size .Car::AboutToBlow class CarEvents.NET Platform. .Car/EngineHandler CarEvents.Car/EngineHandler) } // end of event Car::AboutToBlow So.Delegate::Combine class [mscorlib]System.Car::AboutToBlow 22 (0x16) p.maxstack 8 IL_0000: ldsfld IL_0005: ldarg. System.Car/EngineHandler CarEvents.Car/EngineHandler CarEvents. 
remove_AboutToBlow() will make the call to Delegate.Car/EngineHandler 'value') remove_AboutToBlow cil managed synchronized The IL instructions for the event itself make use of the [.addon] and [.Delegate class [mscorlib]System. the next big question is how you can configure an object to receive these events. 5-27 As you would expect.Car/EngineHandler IL_0010: stsfld IL_0015: ret } // end of method Car::add_AboutToBlow class CarEvents.event .Car/EngineHandler) add_AboutToBlow .Delegate [mscorlib]System.Delegate.removeon] tags to establish the correct add_XXX and remove_XXX methods (also note the static private class is mentioned by name): .ev nt CarEvents. ©2001 Andrew Troelsen (Apress.A Comprehensive Guide to C# and the .Delegate) IL_000b: castclass CarEvents.addon void CarEvents.Car/EngineHandler IL_0010: stsfld IL_0015: ret } // end of method Car::remove_AboutToBlow class CarEvents.Delegate::Combine(class [mscorlib]System.Car/EngineHandler CarEvents.Delegate::Combine System.Car::AboutToBlow [mscorlib]System. System.maxstack 8 IL_0000: ldsfld IL_0005: ldarg.Car/EngineHandler AboutToBlow { .Delegate::Remove System. EventName -= new ObjectVariable. Basically. 100. To do so. but rather use the overloaded += and -= operators. Car. // Car.e. when you wish to listen to an event. // Speed up (this will generate the events.Exploded -= new Car. } // OnBlowUp event sink. i++) c1.Exploded -= new Car.. use the -= operator: // Shut up already! // ObjectVariable.EngineHandler(OnBlowUp OnBlowUp). public class CarApp { public static int Main(string[] args) { Car c1 = new Car("SlugBug". // Car. i < 10.EventName += new ObjectVariable.EngineHandler(OnBlowUp). return 0.NET Platform.SpeedUp(20).AboutToBlow += new Car. follow the pattern shown here: // I’m listening… // ObjectVariable. i < 10.EngineHandler(OnAboutToBlow). 10). from Car. you do not call add_XXX() and remove_XXX() directly.Exploded += new Car.EngineHandler(OnBlowUp).Exploded += new Car. // Detach from events. 
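The Delegate.Combine()/Delegate.Remove() plumbing spelled out in the IL above can also be observed from C# itself. The following self-contained sketch is not from the book; the Publisher type, its Notify delegate, and the SinkCount() helper are hypothetical names used only for illustration. It shows that += and -= on an event ultimately grow and shrink the invocation list held in the hidden delegate field:

```csharp
using System;

// Hypothetical publisher type, used only to illustrate what the compiler's
// hidden add_XXX()/remove_XXX() methods do with the private delegate field.
public class Publisher
{
    public delegate void Notify(string msg);
    public static event Notify SomethingHappened;

    public static void Raise(string msg)
    {
        // Guard against raising an event that has no subscribers.
        if(SomethingHappened != null)
            SomethingHappened(msg);
    }

    // Illustrative helper: report how many sinks are currently attached.
    public static int SinkCount()
    {
        if(SomethingHappened == null)
            return 0;
        return SomethingHappened.GetInvocationList().Length;
    }
}

public class Demo
{
    public static void OnEvent(string msg)
    {
        Console.WriteLine("Got: {0}", msg);
    }

    public static void Main()
    {
        Publisher.SomethingHappened += new Publisher.Notify(OnEvent);
        Publisher.SomethingHappened += new Publisher.Notify(OnEvent);
        Console.WriteLine(Publisher.SinkCount());   // 2 (Combine ran twice)
        Publisher.SomethingHappened -= new Publisher.Notify(OnEvent);
        Console.WriteLine(Publisher.SinkCount());   // 1 (Remove took one entry off)
        Publisher.Raise("hello");                   // prints "Got: hello" once
    }
}
```

Note that outside the declaring class only += and -= are legal on SomethingHappened; that restriction is exactly what the compiler-generated add_/remove_ pair enforces.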
Listening to Incoming Events

Assume you have now created an instance of the Car class and wish to listen to the events it is capable of sending. The goal is to create a method that represents the "event sink" (i.e., the method called by the delegate). Basically, when you wish to listen to an event, you need to call the correct add_XXX() method to ensure that your method is added to the list of function pointers maintained by the delegate. However, you do not call add_XXX() and remove_XXX() directly, but rather use the overloaded += and -= operators. To listen to an event, follow the pattern shown here:

// I'm listening...
// ObjectVariable.EventName += new ObjectVariable.DelegateName(functionToCall);
Car.Exploded += new Car.EngineHandler(OnBlowUp);

When you wish to detach from a source of events, use the -= operator:

// Shut up already!
// ObjectVariable.EventName -= new ObjectVariable.DelegateName(functionToCall);
Car.Exploded -= new Car.EngineHandler(OnBlowUp);

Here is a complete example (output is shown in Figure 5-9):

// Make a car and listen to the events.
public class CarApp
{
    public static int Main(string[] args)
    {
        Car c1 = new Car("SlugBug", 100, 10);

        // Hook into events.
        Car.AboutToBlow += new Car.EngineHandler(OnAboutToBlow);
        Car.Exploded += new Car.EngineHandler(OnBlowUp);

        // Speed up (this will generate the events.)
        for(int i = 0; i < 10; i++)
            c1.SpeedUp(20);

        // Detach from events.
        Car.Exploded -= new Car.EngineHandler(OnBlowUp);

        // No response!
        for(int i = 0; i < 10; i++)
            c1.SpeedUp(20);

        return 0;
    }

    // OnAboutToBlow event sink.
    public static void OnAboutToBlow(string s)
    {
        Console.WriteLine("Message from car: {0}", s);
    }

    // OnBlowUp event sink.
    public static void OnBlowUp(string s)
    {
        Console.WriteLine("Message from car: {0}", s);
    }
}

Figure 5-9. Handling your Car's event set

If you wish to have multiple event sinks called by a given event, simply repeat the process:

// Multiple event sinks.
public class CarApp
{
    public static int Main(string[] args)
    {
        // Make a car as usual.
        Car c1 = new Car("SlugBug", 100, 10);

        // Hook into events.
        Car.AboutToBlow += new Car.EngineHandler(OnAboutToBlow);
        Car.Exploded += new Car.EngineHandler(OnBlowUp);
        Car.Exploded += new Car.EngineHandler(OnBlowUp2);

        // Speed up (this will generate the events.)
        for(int i = 0; i < 10; i++)
            c1.SpeedUp(20);

        // Detach from events.
        Car.Exploded -= new Car.EngineHandler(OnBlowUp);
        Car.Exploded -= new Car.EngineHandler(OnBlowUp2);
        return 0;
    }

    // OnBlowUp event sink A.
    public static void OnBlowUp(string s)
    {
        Console.WriteLine("Message from car: {0}", s);
    }

    // OnBlowUp event sink B.
    public static void OnBlowUp2(string s)
    {
        Console.WriteLine("-->AGAIN I say: {0}", s);
    }

    // OnAboutToBlow event sink.
    public static void OnAboutToBlow(string s)
    {
        Console.WriteLine("Message from car: {0}", s);
    }
}

Now, when the Exploded event is sent, the associated delegate calls OnBlowUp() as well as OnBlowUp2(), as shown in Figure 5-10.

Figure 5-10. Working with multiple event handlers

Objects as Event Sinks

At this point, you have the background to build objects that can participate in a two-way conversation. However, understand that you are free to build a helper object to respond to an object's event set, much in the same way that you created a helper class to be called by all delegates. For example, let's move your event sink methods out of the CarApp class and into a new class named CarEventSink:

// Car event sink.
public class CarEventSink
{
    // OnBlowUp event handler.
    public void OnBlowUp(string s)
    {
        Console.WriteLine("Message from car: {0}", s);
    }

    // OnBlowUp event handler version 2.
    public void OnBlowUp2(string s)
    {
        Console.WriteLine("-->AGAIN I say: {0}", s);
    }

    // OnAboutToBlow handler.
    public void OnAboutToBlow(string s)
    {
        Console.WriteLine("Message from car: {0}", s);
    }
}

Here is the update:

// Note the creation and use of the CarEventSink.
public class CarApp
{
    public static int Main(string[] args)
    {
        Car c1 = new Car("SlugBug", 100, 10);

        // Make the sink object.
        CarEventSink sink = new CarEventSink();

        // Hook into events using sink object.
        Car.AboutToBlow += new Car.EngineHandler(sink.OnAboutToBlow);
        Car.Exploded += new Car.EngineHandler(sink.OnBlowUp);
        Car.Exploded += new Car.EngineHandler(sink.OnBlowUp2);

        for(int i = 0; i < 10; i++)
            c1.SpeedUp(20);

        // Detach from events using sink object.
        Car.Exploded -= new Car.EngineHandler(sink.OnBlowUp);
        Car.Exploded -= new Car.EngineHandler(sink.OnBlowUp2);
        Car.AboutToBlow -= new Car.EngineHandler(sink.OnAboutToBlow);

        return 0;
    }
}

The output is (of course) identical. The CarApp class is now a bit more self-contained, as the event sink methods have been pulled out of the CarApp definition and into their own custom type, which proves the point that there is always more than one way to solve a problem.

SOURCE CODE The CarEvents project is located under the Chapter 5 subdirectory.

Designing an Event Interface

Consider this last topic a bonus section. COM programmers may be familiar with the notion of defining and implementing "callback interfaces." This technique allows a COM client to receive events from a coclass using a custom COM interface, and is often used to bypass the overhead imposed by the official COM connection point architecture. For an illustration of using the interface as a callback, let's examine how callback interfaces can be created using C# (and .NET in general). Let's keep the same assumption that the Car type wishes to inform the outside world when it is about to blow (current speed is 10 miles below the maximum speed) and has exploded. However, this time you will not be using the "delegate" or "event" keywords, but rather the following custom interface:

// The engine event interface.
public interface IEngineEvents
{
    void AboutToBlow(string msg);
    void Exploded(string msg);
}

This interface will be implemented by a sink object, on which the Car will make calls. Here is a sample implementation:

// Car event sink.
public class CarEventSink : IEngineEvents
{
    public void AboutToBlow(string msg)
    {
        Console.WriteLine(msg);
    }

    public void Exploded(string msg)
    {
        Console.WriteLine(msg);
    }
}

Now that you have an object that implements the event interface, your next task is to pass a reference to this sink into the Car. In order to allow the Car to obtain a reference to the sink, you can assume some method has been added to the default public interface. In keeping with the COM paradigm, let's call this method Advise(). When the object user wishes to detach from the event source, he may call another method (Unadvise() in COM-speak). In order to allow the object user to register multiple event sinks, let's assume that the Car maintains an ArrayList to represent each outstanding connection (analogous to the array of IUnknown* interfaces used with classic COM connection points). Here is the story so far:

// This Car does not make any use of C# delegates or events.
public class Car
{
    // The set of connected sinks.
    ArrayList itfConnections = new ArrayList();

    // Attach or disconnect from the source of events.
    public void Advise(IEngineEvents itfClientImpl)
    {
        itfConnections.Add(itfClientImpl);
    }

    public void Unadvise(IEngineEvents itfClientImpl)
    {
        itfConnections.Remove(itfClientImpl);
    }
    ...
}

Now, Car.SpeedUp() can be retrofitted to iterate over the list of connections and fire the correct notification when appropriate (i.e., call the correct method on the sink):

// Interface-based event protocol!
class Car
{
    ...
    public void SpeedUp(int delta)
    {
        // If the car is dead, send exploded event to each sink.
        if(dead)
        {
            foreach(IEngineEvents e in itfConnections)
                e.Exploded("Sorry, this car is dead...");
        }
        else
        {
            currSpeed += delta;

            // Almost dead? Dude, you're almost dead!
            if(10 == maxSpeed - currSpeed)
            {
                foreach(IEngineEvents e in itfConnections)
                    e.AboutToBlow("Careful buddy! Gonna blow!");
            }

            // Still OK!
            if(currSpeed >= maxSpeed)
                dead = true;
            else
                Console.WriteLine("\tCurrSpeed = {0}", currSpeed);
        }
    }
}

The following is some client-side code, now making use of a callback interface to listen to the Car events:

// Make a car and listen to the events.
public class CarApp
{
    public static int Main(string[] args)
    {
        Car c1 = new Car("SlugBug", 100, 10);

        // Make sink object.
        CarEventSink sink = new CarEventSink();

        // Pass the Car a reference to the sink.
        c1.Advise(sink);

        // Speed up (this will generate the events.)
        for(int i = 0; i < 10; i++)
            c1.SpeedUp(20);

        // Detach from events.
        c1.Unadvise(sink);

        // (The lab solution registers multiple sinks…)
        return 0;
    }
}

The output should look very familiar (see Figure 5-11).

XML-Based Documentation

This final topic of this chapter is by no means as mentally challenging as the .NET delegation protocol.
It is not necessarily an "advanced" technique. Nevertheless, your next goal is to examine a technique provided by C#, which enables you to turn your source code documentation into a corresponding XML file.

Figure 5-11. Interfaces as an event protocol

SOURCE CODE The EventInterface project is located under the Chapter 5 subdirectory.

If you have a background in Java, you are most likely familiar with the javadoc utility. Using javadoc, you are able to turn Java source code into an HTML representation. The C# documentation model is slightly different, in that the "source code to XML formatting" process is the job of the C# compiler (csc.exe) rather than a standalone utility.

So, why use XML to represent your type definitions rather than HTML? The primary reason is that XML is a very enabling technology. Given that XML separates raw data from the presentation of that data, you (as a programmer) can apply any number of XML transformations to the raw XML. As well, you could programmatically read the XML file using types defined in the .NET base class library.

When you wish to document your types in XML, your first step is to make use of a special comment syntax, the triple forward slash (///) rather than the C++-style double slash (//) or C-based (/*…*/) syntax. After the triple slash, you are free to use any well-formed XML tags, including the following predefined set (see Table 5-3).

Table 5-3. Stock XML Tags

Predefined XML Documentation Tag    Meaning in Life
<c>            Indicates that text within a description should be marked as code
<code>         Indicates multiple lines should be marked as code
<example>      Used to mock up a code example for the item you are describing
<exception>    Used to document which exceptions a given class may throw
<list>         Used to insert a list into the documentation file
<param>        Describes a given parameter
<paramref>     Associates a given XML tag with a specific parameter
<permission>   Used to document access permissions for a member
<remarks>      Used to build a description for a given member
<returns>      Documents the return value of the member
<see>          Used to cross-reference related items
<seealso>      Used to build an "also see" section within a description
<summary>      Documents the "executive summary" for a given item
<value>        Documents a given property

The following is a very streamlined Car type with some XML-based comments; in particular, note the use of the <summary> and <param> tags:

/// <summary>
/// This is a simple Car that illustrates working with XML style documentation.
/// </summary>
public class Car
{
    /// <summary>
    /// Do you have a sunroof?
    /// </summary>
    private bool hasSunroof = false;

    /// <summary>
    /// The ctor lets you set the sunroofedness.
    /// </summary>
    /// <param name="hasSunroof"> </param>
    public Car(bool hasSunroof)
    {
        this.hasSunroof = hasSunroof;
    }

    /// <summary>
    /// This method allows you to open your sunroof.
    /// </summary>
    /// <param name="state"> </param>
    public void OpenSunroof(bool state)
    {
        if(state == true && hasSunroof == true)
        {
            Console.WriteLine("Put sunscreen on that bald head!");
        }
        else
        {
            Console.WriteLine("Sorry... you don't have a sunroof.");
        }
    }
}

/// <summary>
/// Entry point to application.
/// </summary>
public static void Main()
{
    SimpleCar c = new SimpleCar(true);
    c.OpenSunroof(true);
}

Once you have your XML documentation in place, you can specify the /doc flag as input to the C# compiler. Note that you must specify the name of the XML output file as well as the C# input file:

csc /doc:simplecar.xml simplecar.cs

As you would hope, the Visual Studio.NET IDE enables you to specify the name of an XML file to describe your types. To do so, click the Properties button from the Solution Explorer window (see Figure 5-12).

Figure 5-12. Activating the Project Properties dialog

Once you've activated the Project Properties dialog, select the Build option from the Configuration Properties folder. Here you will find an edit box (XML Documentation File) that enables you to specify the name of the file that will contain XML definitions for the types in your project (which is automatically regenerated as you rebuild your project).

Viewing the Generated XML File

If you were now to open the simplecar.xml file from within the Visual Studio.NET IDE, you would find the raw XML format. If you were to select the XML button from the XML editor window, you would find the display shown in Figure 5-13.

Figure 5-13. The Visual Studio.NET XML viewer

Be aware that assembly members are denoted with the <member> tag. For example, fields are marked with an F prefix, types with T, and members with M. Table 5-4 provides some additional XML format characters.

Table 5-4. XML Format Characters

Format Character    Meaning in Life
N    Denotes a namespace
T    Represents a type (i.e., class, interface, struct, enum, delegate)
F    Represents a field
P    Represents type properties (including indexers)
M    Represents a method (including constructors and overloaded operators)
E    Denotes an event
!    Represents an error string that provides information about the error. The C# compiler generates error information for links that cannot be resolved.

Visual Studio.NET Documentation Support

At this point, you have a raw XML file that can be rendered into HTML using an XSL style sheet or programmatically manipulated using .NET types. Although this approach gives you the biggest bang for the buck when it comes to customizing the look and feel of your source code comments, be aware that VS.NET does offer another comment-formatting option. If the thought of ending up with a raw XML file is a bit anticlimactic, there is another alternative. Using the same XML tags you have just examined, you may make use of the "Tools | Build Comment Web Pages…" menu option. When you select this item, you will be asked if you wish to build the entire solution or a specific project within the solution set, as shown in Figure 5-14.

Figure 5-14. Configuration of your HTML-based documentation

The Build Comment Web Pages option will respond by creating a new folder in your project directory that holds a number of images and HTML files built based on your XML documentation. You can now open the main HTML file and view your commented project. For example, check out Figure 5-15.

Figure 5-15. The generated XmlCarDoc online documentation

SOURCE CODE The XmlDocCar project is located under the Chapter 5 subdirectory.

Summary

The purpose of this chapter was to round out your understanding of the key features of the C# language. The chapter began by examining how to build a custom indexer method, which allows the object user to access discrete sub-items using array-like notation. Next, the chapter examined how the C# language enables you to overload various operators in order to let your custom types behave a bit more intuitively to the object users of the world.

You have also seen three ways in which multiple objects can partake in a bidirectional conversation. The first two approaches (delegates and events) are official, well-supported constructs in the .NET universe. The third approach (event interfaces) is more of a design pattern than a language protocol; however, it does allow two entities to communicate in a type-safe manner. I wrapped up this chapter by examining how to comment your types using XML comment tags, and you learned how the Visual Studio.NET IDE can make use of these tags to generate online documentation for your current project. Using these techniques, you enable your peers to fully understand the fruit of your .NET labors. You are now well-equipped to build sophisticated object models that function well within the .NET universe.
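As noted in the XML documentation discussion above, the raw XML produced by /doc can be manipulated programmatically with types from the .NET base class library. The sketch below is not from the book; it simply assumes a simplecar.xml file (as produced by csc /doc:simplecar.xml simplecar.cs) sits in the current directory, and uses System.Xml to print each documented member and its <summary> text:

```csharp
using System;
using System.Xml;

// Minimal sketch: read a /doc-generated XML file and print each documented
// member name alongside its <summary> text. The file name "simplecar.xml"
// is an assumption taken from the compiler example above.
public class DocReader
{
    public static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.Load("simplecar.xml");

        // Every documented item appears as a <member name="..."> element.
        XmlNodeList members = doc.SelectNodes("//member");
        foreach(XmlNode member in members)
        {
            string name = member.Attributes["name"].Value;
            XmlNode summary = member.SelectSingleNode("summary");
            string text = (summary == null) ? "(no summary)" : summary.InnerText.Trim();
            Console.WriteLine("{0}: {1}", name, text);
        }
    }
}
```

Each name attribute carries one of the prefixes from Table 5-4 (T:, M:, F:, and so on), so the same loop can easily be filtered by member kind.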
Talk:Demystifying Depression
From Wikibooks, the open-content textbooks collection

Review Notes

- Just because wikibooks can be edited by anybody, doesn't mean they are. (The graphs are totally bogus.) by 128.119.165.135
- That disclaimer was definitely a good one. Although the author warns the reader at some points about following conjecture, it feels like most of this book is conjecture, fuzzy logic, and the result of a nonprofessional. The author scolds professionals, as if those who have made a career out of psychology don't know their subject; yet the author himself seems to have little experience. If he could back up his statements by citing his sources, I would be more impressed. -Monk 05:55, 4 December 2006 (UTC)

This book is not a good source of information

Let me make this clear right now: The majority of this book was written by one person (Name of Feather), a person who is not a professional and has not cited any articles from scientific journals. In fact, his sources are all secondary, and they all suck (Really, they do). There is no indication that this guy has any idea what he's talking about. He talks big, as if he knows the subject. But where's the proof? This entire book is just one rambling speech after another about what he thinks depression is. Where is the proof that any of this bull is true? He speaks about all this brain chemistry mumbo-jumbo, and acts as if it instantly means that all his conclusions are accurate. But he backs up almost NONE of his claims with evidence. Do not listen to this guy. Listening to this guy would be equivalent to taking medical advice about your broken leg from a janitor who says he took a biology course in high school. Although Name of Feather may act like he knows the subject, it sure as hell seems he doesn't.
If he had any actual professional experience, instead of just deciding that he'd go on the internet and spew whatever he decides is right, he'd have evidence and actual statistics, he'd have objective evidence and not just anecdotes about his own personal version of "the truth" about depression. If you are depressed, do not make decisions about what to do off of this book. There's no evidence that the author understands the subject any more than you do. The author just *thinks* they understand the subject, but that doesn't mean anything. Go to an actual doctor, an actual professional; or if you're not into quacks, who have spent years studying this stuff, at least don't listen to this nut who thinks he has "all the answers" because he uses fancy words like "serotonin" and "neurogenesis." I don't care if he read a news article or two about it. I don't care if he took some course in psychology ten or twenty years ago. You should listen to actual professionals, or to many different people who have had depression. But don't let some stranger from the internet, who has not shown that his ideas are valid in any way, tell you what to do with your life.--Monk 06:21, 4 December 2006 (UTC) I agree. I am wondering if large portions of this book need removed until they can be written properly. Lonjers 08:47, 24 January 2007 (UTC) I agree, and would have said it if nobody else had. But in defence of the author, depression is a very individual thing. I was diagnosed with depression as a child and I am now over 60. It didn't stop me working. My story would be very different. The other day I looked at a book by an English sufferer. His story was very different again, and just as irrelevant to mine. Depression is not capable of precise description like a broken leg or an infection. I have yet to come up with a formal definition of it. 
On the subject of rants

Hey Monk, before you "contribute" any further, would you mind counting the number of times that readers are advised to seek professional help if they think they have a depression? I go as far as advising them to seek a competent professional (i.e. a psychiatrist) instead of being treated only by their (cheaper) GPs. That being said, show the advice herein contained to a modern psychiatrist. They will not disagree with it. --Name of Feather 21:40, 6 December 2006 (UTC)

- On the subject of proof, by Monk
- You seem to be very confused as to where the burden of proof lies here. It's not my job to disprove the contents of your book to show that the contents are untrustworthy. It's your job to prove the contents of your book to show that they're trustworthy in the first place. You can't just put any content up here and expect everyone to treat it as authoritative and the truth until they can get an expert to vouch against it. You need to back up your statements with proof. Doing that is your job, not mine.
- Secondly, I don't give a damn if you advise readers to see a professional a hundred times, that doesn't make your knowledge any more professional. You are not qualified to "inform" people about the subject to any degree, because as far as I'm concerned, you may know nothing about it. You may just be making this up. You may just be "remembering it" from some casual chat you had with a few psych doctors months ago, but that means essentially nothing as to qualifications. You have shown NO qualifications to educate others on the subject.
Normally, qualifications are irrelevant as long as you can back up your claims with objective evidence, but so far you have not actually backed up your claims with anything but a vague hand-motion to a few mainstream-level science magazines, and some other worthless sources. - Depression is a sensitive subject. I think I'm justified in warning other people when you go around telling them what to do with their lives without providing any evidence that you know what you're talking about. I don't care if you throw in "Oh, but you should really see a doctor" at the beginning, you're still instructing them on something that you may know nothing about. If you have enough evidence to support the claims you make in this book, that's great, but you have to show that evidence, and it has to be valid evidence. Every claim has to be supported by previously collected data, proof, and scientific papers, rather than be anecdotal. - Irrelevant of whether the book is designed to appeal to the masses, the information within is of a sensitive scientific subject, and should be scientifically accurate. Like any scientific piece, although it may be up to others to investigate whether your evidence and data is valid and whether it supports or does not support your conclusion (something I have done, and in this case I have found that it's mostly invalid evidence), it's up to you to prove your arguments in the first place. You have failed to do that. If you can not support your claims about depression, you are not qualified to speak on the subject, and you are not qualified to tell people how to live their lives when under such a mentally precarious condition. - I don't need to prove you wrong for your claims to be dubious. If you can not prove that you are correct in the first place, your claims are automatically put into doubt. Until you can back your claims up, and back them up well, this book is an unreliable source of information. 
-Monk talk 20:15, 17 December 2006 (UTC)

I am male, and in Australia. I have had depression for 45 years, but I still don't take an hour and a half to get up in the morning. I am sure that what Name of Feather has written agrees with his experience, and no doubt, what he has written is what a psychiatrist would say to him. It is good if it gets men to acknowledge that they can be depressed, too. But it is not of general application. On the ordinary Wikipedia, it would be labelled "weasel words" - putting the author's views forward as general knowledge (although I hate the expression itself). How does he know that a modern psychiatrist would not disagree with his advice? Is he a psychiatrist, or so familiar with psychiatric thought generally? He hasn't quoted the basis of this statement, just as with any of his others. I have already mentioned one area where I would receive different advice. What worries me is that advice to wrap oneself in cotton wool, as he recommends, is exactly the wrong treatment for depression. It depends on the depth of depression involved, and I was lucky enough to keep working. But so many depressed people are sitting patiently waiting for a cure, and doing nothing with their lives in the meantime. Depression is an interaction of brain chemistry, circumstances and attitude. Change any one, and you change the other two. Although "positive thinking" or "counting our blessings" often achieves nothing against depression, nevertheless attitude is still within our control. Depression practically wipes me out for about 3 days every month, but like a woman with periods that are just as disabling, I don't let it stop me the rest of the time. I try to look on it as a disability, rather than an illness. I find the term "a depression" strange. I suppose that it is legitimate like "a cancer" or "an AIDS". It must be a local thing. We don't use it here.

Sources suck because....
His sources suck, and here's why -- The author has only supported his statements with the following references:

- created a few haphazard links to a "science news" website, and a page which reprints a "science news" magazine. Where "science news" refers to a magazine that reformats, rewrites, and rewords science knowledge to make it consumable for easy reading by the public.
- created a few links to wikipedia articles
- created a link to a student's assignment at a college (Not an authoritative source in any way!)
- The only possibly valid reference he uses is This NIMH site, and you would be better off reading that page than this "book".

This guy only cites one good source, but his book doesn't even reflect the contents of that source. All the other sources are either secondary or meaningless. Looks like he has no idea what he's talking about. --Monk 06:36, 4 December 2006 (UTC)

Exercise only for 'mild' depression?

My psychiatrist told me that regular aerobic exercise is as good for moderate depression as an SSRI, according to studies. This book states the proper role of exercise is much more limited, without seeming to back that up with proper research. 207.34.148.245 19:07, 7 December 2006 (UTC)

Todo

- Change the tone of the book, especially the first person perspective.
- Agreed.

Retitle?

I think that a title like Introduction to Depression would serve this book better and bring it more inline with the textbook nature of wikibooks. What does everyone else think? Triddle 07:17, 3 Jun 2005 (UTC)

- Change Title: The title Introduction to Depression is less marketing-pitch and more honest. One more vote for this, and I'll move the title if nobody else does. -Monk 05:41, 4 December 2006 (UTC)

Retitle plus other suggestions

Concerning the retitle, that is fine by me! Some other suggestions (for now):

- I will make proper figures out of the ASCII art. I know how they should look, so I should do it.
- Merge parts 1 and 2.
Later on there are other things that could be done:

- Create a section for each of the available treatments (meds, etc)
- Add information on the existing antidepressants (though Wikipedia already has quite a lot)

New navigation template

I created a prototype navigation template and put it on the first two pages of this book. Here it is as an example: Next page: Normal Neuron Communication, Previous page: Introduction, First page: Demystifying Depression. What does everyone else think? Triddle 20:22, 5 Jun 2005 (UTC)

Template

Using a template is a good idea, of course. About the particular format of the template, I would suggest that all links use the same font size. Also, "First Page" should be renamed "Top", as that is the standard nomenclature used by most tools (eg. tex2html). Name of Feather 10:21, 6 Jun 2005 (UTC)

Communication between editors

I personally find the use of the talk page a very limited form of communication. What about creating a mailing list (a yahoo group or something) for the purpose of synchronising the editing process? Name of Feather 10:23, 6 Jun 2005 (UTC)

Concerning the title

Well, I am not adamant about it, but perhaps we should leave the title as it is. It has already picked up some "brand recognition"... :-) Name of Feather 10:44, 6 Jun 2005 (UTC)

Namespace

Looking at the Wikibooks:Naming conventions page, we should move all pages back to the original "Demystifying Depression/Section Name". That is the preferred format for new books. Name of Feather 10:54, 6 Jun 2005 (UTC)

- After reading the Naming Conventions page and Wikibooks:Hierachy_naming_scheme it seems that using : is going to be easier for this project than using /. Why? The | trick will be of great use when creating links to pages inside the book.
Quick overview of how links work:

- [[Foo]] links to Foo
- [[Foo|Bar]] links to Foo but displays Bar
- [[Foo:Bar]] links to Foo:Bar
- [[Foo:Bar|]] links to Foo:Bar but only displays Bar (note the trailing |)
- [[Foo/Bar]] links to Foo/Bar but displays Foo/Bar (can't use a trailing /)
- [[Foo/Bar|Bar]] links to Foo/Bar but displays Bar

Now think of the difference between "Next page: What Is Depression, Previous page: Demystifying Depression, First page: Demystifying Depression" and "Next page: What is depression, Previous page: Demystifying Depression, First page: Demystifying Depression", as well as the other benefits listed by Wikibooks:Hierachy_naming_scheme#Real_custom_namespaces. It really seems to me that setting this up in its own namespace is the way to go. Triddle 16:32, 6 Jun 2005 (UTC)

- I didn't know it was possible to create custom namespaces. In that case, it is definitely the way to go! Name of Feather 08:53, 8 Jun 2005 (UTC)

Announce book

I think it is time to announce the book and add it to the bookshelves, don't you agree? Name of Feather 11:03, 6 Jun 2005 (UTC)

I do not agree, due to the huge amount of unsupported and perhaps false information. Lonjers 09:10, 24 January 2007 (UTC)

Notation fixer

I ran this program on all the articles to update the old-style reference notation. I hope someone else can find it useful in the future. Triddle 06:56, 12 Jun 2005 (UTC)

    #!/usr/bin/perl -w
    # This source code is copyright Tyler Riddle <triddle@gmail.com>,
    # carries no license requirements, and is made available for any purpose.
    # You may incorporate this into any other project without regard
    # for licensing issues.

    use strict;

    # One "#[link]" line per reference, in order.
    my $data = <<EOF;
    #[]
    EOF

    my @links = &build_array;

    while (<>) {
        s/(\[(\d+)\])/&handler($1, $2)/ge;
        print;
    }

    sub handler {
        my $orig   = shift;
        my $refnum = shift;
        die "Invalid reference number: $refnum" unless defined $links[$refnum];
        my $newlink = '[' . $links[$refnum] . ']';
        print STDERR "$orig => $newlink\n";
        return $newlink;
    }

    sub build_array {
        my @array;
        $array[0] = 'garbage';    # numbering for the articles starts at 1
        while ($data =~ m/^#\[([^ \]]+)/gm) {
            my $link = $1;
            push(@array, $link);
        }
        return @array;
    }

Reference Descriptions

This is about the reference links throughout the book. What about adding article/page titles to the tooltips? I am very impressed with the book. Thanks for a useful and real discussion.

Calvinism

The references to Calvinism probably need better explanation. 220.253.24.254 01:54, 28 December 2005 (UTC)

Before you make changes...

Being a wikibook and all that means that one expects people to make changes/corrections as they see fit. However, if not for anything else then at least for the sake of politeness, it would be very much appreciated if radical changes were first proposed on this talk page. (See the history for an example of what I'm referring to.) Thank you, Name of Feather 18:46, 30 May 2006 (UTC)

Radical changes are, however, needed to this book; please make them. Lonjers 08:44, 24 January 2007 (UTC)

I think some doctor should put some specific counter-arguments against some of the specific claims the book is making. I sense that the author of this book deeply believes what he or she wrote, so he or she deserves some scientific arguments. Maybe those sections that cannot be proved, or were just made up, should be marked as controversial. Ervinn 15:24, 24 January 2007 (UTC)

What should be done with this book?

I think that this book needs to be almost completely rewritten and reorganized as an introduction to depression, or it needs to be removed from Wikibooks. My reason for this is that it contains large amounts of uncited or original research, and information based on individual experience. According to the "What Wikibooks is not" section of Wikibooks:What is Wikibooks, large parts of this book do not qualify for being part of a wikibook.
At the very least it needs to be removed from the completed-books bookshelf. What are other people's opinions? Lonjers 09:15, 24 January 2007 (UTC)

I would not remove this book. The claims it is making could be true, based on 'individual experience'. If it is scientifically proven that the book's claims are incorrect, then I would remove it. In the meantime it could be moved to a controversial state. There are some precedents where 'individual experiences' were strongly rejected and eventually turned out to be true. Ervinn 15:40, 24 January 2007 (UTC)

I'd move to remove...

I read this book with some care. I see very little merit in it and much potential for disseminating misconceptions and unfounded speculation about a very important subject. Does the nature of wiki mean that we have no responsibility to try to ensure good-quality information? Despite the caveats, this book is seriously likely to misinform people, including sufferers, about a potentially very dangerous illness.

Unfortunately not a 'wikibook'

I personally think that this work has its merits in giving a new public perspective (as far as I am aware) on depression. Upon reading Wikibooks:What is Wikibooks, however, I do not think that it qualifies unless properly cited or founded, as it is certainly not a textbook, but rather an interesting essay. Simply put, I do not think that this is the place for the book; but I do think that it should be part of a public reading list on depression somewhere (I assume somewhere on Wiki), if only for the debate it causes.

My editing spree

In case anyone noticed, I went through and just destroyed everything I thought was obviously wrong or did not fit the guidelines for what should be on Wikibooks. I realize this was not the most democratic method, but it needed doing and I was in the mood. The whole book still needs restructuring, perhaps in the form of three major sections: What is Depression, Treatments, Other Issues. It also all needs citations.
If you disagree with any of my changes, feel free to revert. The book is currently a mess, with many pages now blank and in need of deletion, but I could not figure out how to do that, so perhaps someone more knowledgeable can help with that. Lonjers 11:44, 12 March 2007 (UTC)

- I do not agree with all the deletions. Deleting something because you are "in the mood" is the definition of vandalism. I think a warning on the discussion page before deletion is the wiki policy we should follow. Ervinn 16:20, 12 March 2007 (UTC)

I understand; revert anything you feel is actually within the guidelines for things that should be on Wikibooks. Realize, however, that most of the information I deleted was in direct confrontation with much higher quality articles on Wikipedia. It just seemed to me that there had been a consensus for deletion for a while, but no one had done anything. Lonjers 17:42, 12 March 2007 (UTC)

If this stays it is certainly inappropriately indexed

As a psychiatrist, and a Wikipedia member, I must say that the inclusion of this book under Health is completely unfair to people seeking high-quality information. It does provide one person's perspective without scientific data or sources, and it may have some value in shedding light on the public's viewpoint. However, this is not a book about health. At best this should fall under Self-Help or perhaps Editorial. I will refrain from getting into the argument about whether this should be completely removed or not, but it certainly has no place whatsoever within the category of Health. Slippy007 March 30, 2007

VfD

I personally feel that this book had some good information before it was mostly all deleted by Lonjers. However, if the requirements for inclusion here are the same as Wikipedia's, then original research is not welcome, and thus this book should be outright deleted. I don't know why it hasn't been put up for deletion given the general consensus. However, there is a deletion policy in effect here.
As I do not believe it should be deleted, I'm not going to put it up myself, but there you have it. Spoom (talk) 18:17, 18 April 2008 (UTC)
http://en.wikibooks.org/wiki/Talk:Demystifying_Depression
JJ Asghar's role as a developer advocate for IBM has immersed him in helping organizations make the transition to cloud native ecosystems and to IBM Cloud's Kubernetes Service. Previously, Asghar was the OpenStack go-to guy at Chef Software. In this interview, Asghar draws from his deep well of hands-on and high-level experience to describe the trials and tribulations organizations face when adopting cloud native platforms. He also discusses when using Istio's service mesh for Kubernetes makes sense, and the immense benefits a well-run CI/CD pipeline offers enterprises.

I wonder if you could speak from your experience about what is the biggest push factor for enterprises to adopt cloud native and serverless platforms?

It boils down to whether there are a lot of applications out there still running Java. There are Java apps still running in the world. Tomcat and JBoss were very successful because you could just take a WAR file and dump it into Tomcat. Over time, this new generation coming out don't want to use Java. They want to use Ruby/Python/Node/Go, and Kubernetes for the enterprise is now becoming a unified control plane, or application layer, for anything. You can run Java inside of there, you can run a Python app inside of there, and it's the same thing the Java people used to do with Tomcat, but now fully across every single language. It's extremely powerful because now you don't have to spend all this time having multiple infrastructures to run your applications; you just have Kubernetes run everything across the board.

Does it seem that enterprises notice that Java is a bit out of date?

Not necessarily. There is actually amazing progress with the newer versions of Java. There is a version of Java out there that has been compiled down and run on Knative to the point where it is ridiculously fast and easy to use. I believe that at DevNexus, which is a conference coming up in a month or so, there's going to be a really big push to show it off.
Java is not dead; it's the workhorse of the enterprise.

I work at Semaphore, where we often emphasize the mantra "optimize for happiness," and we hear from our clients about how a flexible CI/CD pipeline improves the lives of whole teams and is an actual enabler. Could you describe why having a well-oiled CI/CD system in place is crucial to implementing and maintaining enterprise-class software?

That is also a very, very important question that I used to hear multiple times in previous lives at different companies I worked at. The beauty of a good CI/CD pipeline is that it is a comfort blanket. If you're moving towards cloud native, container-based workloads, or just using configuration management, CI/CD is your comfort blanket. It is the thing you can trust to always do the right thing, and it's not human; the robots are doing the work.

Unfortunately, to get to a really successful CI/CD pipeline you can't just drop into it, as you probably know from Semaphore. You need to do it piecemeal; you need to learn to trust the software to do the work, and you need to start very simple. One of the best stories I have from teaching CI/CD to someone was basically with Python. There are a lot of Python developers, and pep8 is pretty standard across the board. Well, the first thing you should do in your CI/CD pipeline for any Python project is to have pep8 run against it before it gets merged into master, just to make sure that you're doing exactly what you expect. Then you add your unit and integration tests. You keep moving forward, and then with that, you'll be able to have that full pipeline, and before you know it, every time you see a new PR you see the little green check, and you have confidence that it can go into master and not blow up the world.

But how are enterprises doing CI/CD in general?

In general, it's the same conversation, but it's much slower.
With enterprises, you have so many people touching one single codebase, or multiple codebases at different levels of CI/CD pipelines, and there are so many different products out there that do this kind of work. But the beauty of it is that you can find what will fit your company portfolio, and you'll be able to mold it to fit your workflow; or you buy an opinionated one and say, "hey, this is the way I want it. I know that this opinionated workflow succeeds because of X, Y, and Z" or "I need one that is completely flexible, where I can do literally anything I want."

You recall in a recent interview how working as a DevOps engineer at Chef enabled you to spend more time with your family and to no longer worry about having to run to the data center to reboot servers and get the infrastructure rolling in the middle of celebrating winter holidays with your family. You even go as far as to admit that "Chef changed your life." What do you think should be taken care of today? What's still a big issue in running cloud native apps that influences people's lives and prevents them from getting an uninterrupted eight hours of sleep at night (both in terms of systems administrators and people running businesses on the cloud)?

My story about Chef and how it changed my life is pretty straightforward. I was working in this shop that did some application work, and I used to wake up at three in the morning to do rollouts. Eventually, I got to the point where I was able to use configuration management to do the work (thanks to Chef). I actually created a CI pipeline by accident, whereby I introduced the so-called Test Kitchen to do the work for me. I actually went to my boss and said, "I don't want to wake up at 3:00 in the morning on a Saturday." I went through the whole process. I showed him the Test Kitchen CI pipeline to walk us through, and I showed him that it all checked off. I didn't take any machine down, did a rolling restart, and said, "I'm so confident in this thing.
I will quit if I take down production. I'm an honorable man; I'll fix the thing, get it up, and then I'll write my resignation letter." The boss said, "I'll take that bet." I did the typie-typie, then I pushed it out, and it worked perfectly fine. He looked at me and said, "I can't believe that actually works." Now I was able to do that, and it changed my life. I was able to roll things out whenever I wanted to, because I had a CI pipeline, so I didn't take down production. Fast forward six months later, and I was working for Chef.

Having this in mind, are there still any tools that could really change people's lives?

In general, the idea of using container technology to have that flexibility and velocity, and using CI/CD to make sure that you have confidence "just like that", is extremely important. Having that level of confidence, being able to say that CI/CD has checked off and everything is green, will change your life; especially if you're coming from the VM world and you don't use cloud native stuff. When you start walking through and building pipelines to do the work, CI/CD is invaluable.

You spoke earlier this year at Configuration Management Camp 2019 about the Istio service mesh. This project is often the next thing people try to approach after accepting Kubernetes with its complexity. What would be your advice when it comes to, "When should I think about using Istio on my K8s cluster?" What are the most common issues that Istio can help resolve?

Kubernetes is just a scheduler. Well, it's extremely complex, but not if you boil it down to exactly what it does all day, every day. You tell Kubernetes, "I need this container running," and you create a manifest with a bunch of variables around it. That's all it really does. There's a whole ecosystem around Kubernetes that needs to be created. Istio is specifically about networking; that's what it boils down to.
The moment it clicked for me, the story I tell people about why you should give a damn, is basically the moment I saw Istio with the Envoy sidecar. You can write intelligent routing to read HTTP headers and change the destination of your container. Take a normal three-tier app: you have the web frontend, your application server, and your database. Obviously, your database is somewhere not on Kubernetes, but that's a different topic, and a hotly debated one. You have your web frontend and your application server. Let's say you want to release a new iPhone app; you can use Istio to look at the iPhone user-agent HTTP header and send that traffic specifically to a beta version of a container over in a corner, which then actually goes to your production data. This is how you can actually have real production data in a secure manner. You can also have any arbitrary headers on top, so if you have a QA team or a design team, you can tell them, "Hey, use this hash when you hit my production data and you're running into a small section of something we want to release tomorrow." Kubernetes does all the work to make sure everything's there, and back to the CI/CD story: you can actually put headers in to have CI/CD push specifically to a specific namespace, have Istio route to it, and then you have a safe way to make sure the public doesn't use it, or that just a specific group of people uses it.

So it's about complex traffic management?

That's only a portion of it. That's the moment that it clicked first for me. There's a whole world to it. You can make sure specific containers only talk over secure channels between one another.
There's free telemetry, so as soon as you install Istio you actually get a bunch of different plugins immediately, where you can show your boss a graph of what's actually happening between your microservices. There's even a Vistio diagram that can come out, where you can see all the different connections happening almost in real time. The complexity doesn't ever stop. I mean, that's the beauty of it, right? You see the complexity in front of you. Over time, as you get more and more microservices, it gets really, really complex. You can have that up on a screen where you see the requests per second going through the different microservices, and figure out what's actually happening.

In your article "Building and Leading an Open Source Community," you underline how difficult it is to build an active community around a project, especially when it has to "make the corporate overlords happy." You also mention that "80 percent of open source projects" never get to a point where they are able to say that they grew something organically that's valuable to both the community and the companies backing the project. So, how has the explosion of Kubernetes served as the exception to this rule?

I'll start with OpenStack. It's a really awesome open source project where the idea was, "We're going to build a free and open cloud." I think it's been around for six or seven years now, and everyone had a voice. It was amazing. It was completely democratized, and anyone could work on it. It was open. It was great. Over time, what happened is that humans are human: mistakes are made, things happen, personalities change, and companies come in and make problems happen. I genuinely believe one reason why Kubernetes is so successful is that they took a lot of lessons learned from how OpenStack was run, and used those lessons to create another foundation for a cloud native story.
It's also because it's mainly supported by a handful of companies in the CNCF that can actually say "yes" or "no" to things. So there is an actual trust circle to make sure that not all the random stuff happens. I genuinely believe that one of the main reasons why Kubernetes is now so successful is that there is a "level of oversight" that keeps really bonkers things from coming in out of the blue, so that what comes out is an actual project that is extremely successful. At FOSDEM, for example, I've heard people talking about how "years ago it was all about OpenStack, and now it's all about Kubernetes." These are exactly the same people. If you actually go around, you'll see the exact same people who moved from the OpenStack community into Kubernetes. Hell, I'm even one of them.

Article originally published on The New Stack.
https://semaphoreci.com/blog/why-a-well-oiled-cicd-pipeline-makes-for-a-happy-devops-team
interfaces for a lab

Adam Jones, Greenhorn. Joined: Aug 22, 2002. Posts: 19. Posted Aug 03, 2003 11:04:00

ok, i am in a java class right now and i have worked ahead a few labs and i am completely stuck on something right now. the lab requires the use of an interface for a rental car program. i made all of my classes and the interface but when i try to run it and enter the information i get a zero for the two outputs that i should have numbers in. if at all possible could anyone offer any idea as to what i am doing wrong or what i forgot....also, all constructive criticism will be more than welcome....many thanks for any time that you put into helping me

this is the test class i wrote (sorry, none of the code posted has comments yet)

    import javax.swing.JOptionPane;

    public class RentalTest {
        public static void main(String[] args) {
            // NOTE: the lines that read in 'category' and 'milesToDrive' were
            // lost from the original post; JOptionPane input dialogs along
            // these lines are implied by the import above.
            String category = JOptionPane.showInputDialog("Enter the car category:");
            double milesToDrive = Double.parseDouble(
                    JOptionPane.showInputDialog("Enter the estimated miles to drive:"));

            if (category.equalsIgnoreCase("Economy") || category.equalsIgnoreCase("Econ")) {
                Economy econ1 = new Economy();
                econ1.setMilesEstimate(milesToDrive);
                System.out.println("Category: Economy");
                System.out.println("Your estimated fuel usage is "
                        + econ1.getFuelUsage() + " gallons.");
                System.out.println("Your estimated mileage surcharge total is $"
                        + econ1.getMileageSurcharge());
            } else if (category.equalsIgnoreCase("Midsize") || category.equalsIgnoreCase("Mid")) {
                Midsize mid1 = new Midsize();
                mid1.setMilesEstimate(milesToDrive);
                System.out.println("Category: Midsize");
                System.out.println("Your estimated fuel usage is "
                        + mid1.getFuelUsage() + " gallons.");
                System.out.println("Your estimated mileage surcharge total is $"
                        + mid1.getMileageSurcharge());
            } else if (category.equalsIgnoreCase("Luxury") || category.equalsIgnoreCase("Lux")) {
                Luxury lux1 = new Luxury();
                lux1.setMilesEstimate(milesToDrive);
                System.out.println("Category: Luxury");
                System.out.println("Your estimated fuel usage is "
                        + lux1.getFuelUsage() + " gallons.");
                System.out.println("Your estimated mileage surcharge total is $"
                        + lux1.getMileageSurcharge());
            }
            System.exit(0);
        }
    }

this is the interface
provided with the java course:

    public interface Car {
        void setMilesEstimate(double milesToDrive);
        void calculateFuelUsage();
        void calculateMileageSurcharge();
        double getFuelUsage();
        double getMileageSurcharge();
    }

this is one of the classes i wrote for each car class needed (Economy, Midsize, Luxury). each of the other two classes are exactly alike with the appropriate words substituted for each car category:

    public class Economy implements Car {
        private static final double FUEL_EFFICIENCY = 30.0;
        private static final double MILE_SURCHARGE = 0.05;

        private double milesToBeDriven = 0;
        private double gallonsToBeUsed = 0;
        private double totalCostForDriving = 0;

        public Economy(double x) {
        }

        public void setMilesEstimate(double milesToDrive) {
            milesToBeDriven = milesToDrive;
        }

        public void calculateFuelUsage() {
            gallonsToBeUsed = milesToBeDriven / FUEL_EFFICIENCY;
        }

        public void calculateMileageSurcharge() {
            totalCostForDriving = milesToBeDriven * MILE_SURCHARGE;
        }

        public double getFuelUsage() {
            return gallonsToBeUsed;
        }

        public double getMileageSurcharge() {
            return totalCostForDriving;
        }
    }

sorry about the length of this post
adam jones

Adam Jones, posted Aug 03, 2003 11:07:00

oops, i just noticed something i forgot to remove from one of the sections of code. in the last code section, the Economy class, the (double x) constructor

    public Economy(double x) {
    }

is not supposed to be there....i was giving something a try and it did not work....just forgot to take it out

Adam Jones, posted Aug 03, 2003 20:26:00

i hope no one took the time to work on this using any amount of brainpower....i was working on it a while ago and found my problem. i was trying to grasp the whole interface thing and not mess it up, and while i was working with my classes i ended up removing all of the nice little lines that actually called the methods that calculated everything.
the code i posted was code i was working on while i was at school, and when i got home i opened up the same project that i was working on (i was farther on the code at home but had time to kill at school so i tried to catch up and all) and found that the method calls were in there and everything worked fine with the correct code. so if anyone tried to sift through my crap, thank you very much for the attempt and i am sorry if i wasted any of your time.

one question though: the whole interface thing...is it important enough to use often? it seemed to me that i could just as easily put everything in the classes themselves. of course, this was a tiny project and i am sure a huge project that needs to use the same methods would make use of interfaces. is that right? feedback? thanks again
adam jones

Ken Blair, Ranch Hand. Joined: Jul 15, 2003. Posts: 1078. Posted Aug 03, 2003 23:35:00

First of all, I'm fairly new to Java and programming myself so don't assume I know what I'm talking about. It seems to me your code has pretty much gone around the interface, which is probably why it doesn't seem necessary to you. Let me try and show you what I mean...

    public class RentalTest {
        public static void main(String args[]) {
            Car car = null;
            // (The lines reading in 'category' and 'milesToDrive' were lost
            // from the original post.)
            if (category.equals("Economy")) {
                car = new Economy();
            } else if (category.equals("Midsize")) {
                car = new Midsize();
            } else if (category.equals("Luxury")) {
                car = new Luxury();
            }
            car.setMilesEstimate(milesToDrive);
            car.calculateFuelUsage();
            car.calculateMileageSurcharge();
            System.out.println("Category: " + category);
            System.out.println("Your estimated fuel usage is "
                    + car.getFuelUsage() + " gallons.");
            System.out.println("Your estimated mileage surcharge total is $"
                    + car.getMileageSurcharge());
            System.exit(0);
        }
    }

I'll be the first to admit that's a pretty poor example, it's late and JAI is giving me a migraine, but I -think- that sort of conveys the point.
I'm not sure why the hell the interface has calculation methods; every time the mileage gets updated it'll need to be recalculated anyway, so it seems unnecessary to create a separate method.

As for interfaces, they're VERY important. It's not really a question of 'how often' to use them but simply a matter of using them when they're appropriate. The point (as I understand it) is to keep your code modular and flexible, and (if I understand correctly) it's about the closest thing to 'multiple inheritance' there is in Java. Sometimes, as with small examples like this, it's difficult to see the usefulness in it; it can almost seem redundant.

Here's a simple little example. Let's say we have a 'Ford' and we have an 'AutomaticTransmission'. We code them specifically for each other and they interact directly with each other, and they work perfectly, so why would we go through the hassle of using interfaces? But what happens if we want to stick a 'ManualTransmission' on the Ford? Or what if we want to use that AutomaticTransmission on a Dodge instead? We can't, because the Ford and the AutomaticTransmission are tied directly to each other; to get them to work with something else we'd have to go in and modify their code.

Instead of constantly modifying the code to make it work with a different car or transmission (and then having to modify EVERY SINGLE ONE if we want to change something), why not use an interface? We can make a 'Car' interface and a 'Transmission' interface. We'll create our 'AutomaticTransmission' and 'ManualTransmission' classes and both will implement the Transmission interface, along with any new transmissions we come up with in the future. Then we'll do the same with our cars: every one will implement the Car interface. Now all any 'transmission' has to do is code itself based upon the car's interface... not around 'Dodge' or 'Ford' but around 'Car'. As long as the car implements the Car interface the transmission won't have problems.
We'll also code our cars around the 'Transmission' interface so they'll work with any transmission that implements it. If we ever need to modify something, we don't have to go through and change EVERY version of each transmission type (one for each type of car), just the single transmission class, which makes maintenance a hell of a lot easier. I hope that helped.

[ August 03, 2003: Message edited by: Ken Blair ]

Adam Jones, posted Aug 04, 2003 07:12:00

wonderful, thank you for all of your help

[ August 15, 2003: Message edited by: Adam Jones ]
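Ken's Car/Transmission analogy can be sketched as a tiny runnable example. The class and method names below are illustrative inventions for this sketch, not code from the thread: the point is only that Ford depends on the Transmission interface, so either implementation drops in without changing Ford.

```java
// Hypothetical sketch of the decoupling described above: a car and a
// transmission only know each other through interfaces.
interface Transmission {
    String shift(int gear);
}

class AutomaticTransmission implements Transmission {
    public String shift(int gear) {
        return "auto-shift to gear " + gear;
    }
}

class ManualTransmission implements Transmission {
    public String shift(int gear) {
        return "clutch in, shift to gear " + gear;
    }
}

// Ford is written against Transmission, not a concrete class, so any
// implementation can be swapped in without modifying Ford itself.
class Ford {
    private final Transmission transmission;

    Ford(Transmission transmission) {
        this.transmission = transmission;
    }

    String drive() {
        return "Ford: " + transmission.shift(2);
    }
}

public class TransmissionDemo {
    public static void main(String[] args) {
        System.out.println(new Ford(new AutomaticTransmission()).drive());
        System.out.println(new Ford(new ManualTransmission()).drive());
    }
}
```

Swapping the transmission (or adding a Dodge that also takes a Transmission) touches none of the existing classes, which is the maintenance win Ken describes.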
http://www.coderanch.com/t/371558/java/java/interfaces-lab
Recent posts by Michael Fitzmaurice (Ranch Hand, since Aug 22)

Declarative Security & SSL

Hi Rich

Thanks for your response - I never thought to try that. Unfortunately, it doesn't seem to work; any path you put in the <form-login-page> element must start with a '/', so absolute URLs are not possible. Does anyone else have any ideas?

Thanks
Michael

(15 years ago, in Tomcat)

Declarative Security & SSL

Hi all

I have a web app deployed in Tomcat. I would like to declaratively secure the application such that all pages are inaccessible unless the user successfully authenticates. This is easy enough. I would also like to use HTTP form authentication over SSL - this is also easy enough. However, what I would like to do (declaratively) is to use SSL only on the login page.
If I do something like this in web.xml:

    <security-constraint>
      <web-resource-collection>
        <web-resource-name>Protected Area</web-resource-name>
        <url-pattern>/*</url-pattern>
        <http-method>DELETE</http-method>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
        <http-method>PUT</http-method>
      </web-resource-collection>
      <auth-constraint>
        <role-name>user</role-name>
      </auth-constraint>
      <user-data-constraint>
        <transport-guarantee>INTEGRAL</transport-guarantee>
      </user-data-constraint>
    </security-constraint>

    <login-config>
      <auth-method>FORM</auth-method>
      <form-login-config>
        <form-login-page>/login.html</form-login-page>
        <form-error-page>/loginFailure.html</form-error-page>
      </form-login-config>
    </login-config>

this means the user cannot access any page without authenticating, but it also means all pages are being served over SSL, which is not necessary for my application. However, I do want to protect the communication of the user's credentials with SSL, hence I want to be able to specify SSL for the login page. Is this possible declaratively, and if so, how?

Thanks
Michael

[ March 02, 2005: Message edited by: Michael Fitzmaurice ]

(15 years ago, in Tomcat)

OCTET String and IA5String

Hi

How you would go about converting the binary back into human-readable format depends on how it was converted to binary in the first place and what it represents. How did the binary you need to work with come to be in your possession? You need to know whether the binary represents an IA5String or an octet string in the first place; then you can go about converting it from binary. IA5 is a character set that is almost the same as ASCII - it uses 7 bits. Octet strings use 8-bit bytes, as the name suggests.

Hope this helps
Michael

(16 years ago, in Beginning Java)

help! help! help! how to remotely connect to VSS?

Presumably you are talking about connecting to Microsoft Visual SourceSafe as a client? If so, what platform do you need to connect from?
If it's Windows, can't you just use the MS client? Are you trying to connect programmatically or manually?

(16 years ago, Beginning Java)

Passed SCBCD today with 97%!!!

You can read about how I studied for it in this thread. I would also add that I found the real exam to be a little bit easier than any of the mock exams I had taken (HFEJB, EJB+, etc.) - this is probably because:

1 - there are no ambiguous questions or questions with ambiguous answers on the real exam
2 - it may be my imagination, but it seemed to me that there were normally fewer options to choose from with the average question on the real exam
3 - you always know exactly how many options you need to choose on the real exam
4 - there are not really any 'trick' questions on the real exam - you normally understand pretty clearly what area of the spec each question is probing.

Also, you never get questions on the real exam that check to see if you know about something that is technically legal (e.g. will compile and deploy okay due to some loophole in the spec) but would constitute unbelievably bad practice in the real world. The questions pretty much just test if you can make sensible use of the spec, rather than find ways to abuse it.

Big shout out to Kathy & Bert and the whole Head First massive!!!

Michael

(16 years ago, Certification Results)

Is going Head First Enough?

Hi Dean

I passed the SCBCD today with 97%, and I can tell you that HFEJB was the cornerstone of my preparation. I also used 'Enterprise Java Beans' by Richard Monson-Haefel (which is excellent), the EJB spec, and the Enthuware 'EJBPlus' test engine. I would estimate that my overall study time was divided thus:

- HFEJB (70% of my study time, from start of study right to the very end - a fantastic learning resource. I read it cover to cover in detail first, then booked the exam and started cramming)
- EJBPlus (15%, all in the final 5 days before the test - good for exam cramming, but quite a lot of ambiguous questions, a few errors, and a couple of bugs, especially with the marking of drag n drop questions. Much harder than the real exam, partly due to bugs & ambiguity - I never scored more than 85% on one of their tests, and I took a fair few)
- Monson-Haefel (10%, from start of study to finish - dipped in when I needed more detail or another perspective on topics I had read in HFEJB. A very good book which I will continue to use)
- EJB 2.0 Spec (5%, mostly during the final 5 days, whenever I needed absolute authority on something, or when I wanted an alternative explanation. Get a hard copy - it's dry reading in some places, but a pretty valuable resource to have handy)

I found the references to particular pages in the spec that you see in the answers to the 'Coffee Cram' exams in HFEJB to be invaluable. Use these where you got an answer wrong - it's like a signpost to the parts of the spec that you really need to check out.

Good luck with the exam
Michael

P.S. Make sure you attempt all the exercises in HFEJB - they really do help you to understand and remember. Also, follow all the study hints at the start of the book (e.g. take regular breaks, change location, drink water, etc.)

(16 years ago, EJB Certification (OCEEJBD))

Independent Candidate Assessment Required (UK)

Ritesh

I am sending a bit more info to your personal email address.

Thanks
Michael

(16 years ago, Jobs Offered)

Independent Candidate Assessment Required (UK)

Hi

The organisation I work for (a large red-brick UK university) needs to independently assess the Java skills of one of our employees. Our requirement is that nobody internal to the University is involved in this assessment (other than the candidate, of course!); hence I cannot conduct the test myself, or directly influence the questions and methodology. Is there anyone who would be interested in carrying out this assessment for us on a contract basis? We would ideally want somebody to come on site and perform the assessment in person. If you are interested, or know another suitable person who would be, drop me a line and we'll take it from there.

Thanks
Michael

[ June 08, 2004: Message edited by: Michael Fitzmaurice ]

(16 years ago, Jobs Offered)

Slough

Slough is quite ugly. Maidenhead is quite nice though. Expensive though. I worked down there for a while when the whole M4 corridor was known as the 'Golden Corridor', or the UK 'Silicon Valley' (I was there in 2000/2001). Hmmm, look at it now...

(16 years ago, Meaningless Drivel)

Question about the English in SCJD

I think they speak Portuguese in Brazil, not Spanish (although I'm sure a Portuguese speaker would understand the gist of your post).

(17 years ago, Developer Certification (OCMJD))

reading number from System.in into array

Peter

Have a look at my post on this thread. This will enable you to read in each character of the input number one by one. You can then store them in whatever data structure you think appropriate. You could even compare them against the 'mastermind' number as they are read in from System.in, if you like. One way to store the mastermind sequence (and the user's input) would be a String - you could store words as well as numbers. Check out the String.charAt() method to figure out how to perform the comparisons of individual characters within the Strings. If you use arrays, you are almost reinventing the wheel - the String class can be thought of as an array of chars and a set of operations you can perform against it.

Hope this helps
Michael

(17 years ago, Beginning Java)

reading number from System.in into array

Hi Peter

Do you want to read in just one number, or a series of numbers? Why do you need to put the individual digits of these numbers into an array? I assume you want to use this array to perform some manipulation of the individual digits - is this accurate? What is your overall goal?

Michael

(17 years ago, Beginning Java)

Poll to help my decision...

Gregg

I would also vote for web app, assuming you do not need a really rich user interface. Another reason I tend to prefer these, all other things being equal, is that it allows me to pool database connections in one place and have a bit more control over where and when the db gets accessed from. Where I work we have some VB & Java fat client apps on desktops holding open lots of database connections unnecessarily. We have also found, as other people have already mentioned, that rolling out new releases and patches is a bit of a pain for the fat clients. Releasing to 160 desktops as opposed to one web server is definitely slower, more difficult, and more error prone, even if it only happens a couple of times a year.

Michael

(17 years ago, Servlets)

Coding Style: accessor vs. attribute

Jim

I stand corrected, but feel obliged to point out how highly dependent on particular circumstances this caveat is, and even then I don't know if I would go so far as to say it is encouraged. Accepted, yes. And I would say the caveats you provided are actually more valuable than those Sun offer (i.e. 'if you would have used a struct, it's okay'). However, I am really talking about what the default practice should be for public classes, which is what I guessed was under discussion in this thread. I feel strongly enough about that to re-iterate my objection to the practice of matter-of-factly declaring member variables public as a habit. I regard this as a completely filthy practice, and would be amazed to find that anyone had written a coding standard that encouraged this as the norm. And if I was ever asked to conform to such a standard, I would really have to ask myself if I wanted to work on a project like that. I know we are all in agreement that JD's preference is superior, hence this further post is possibly a bit redundant, but sometimes a fella just has to get something off his chest...

(17 years ago, Java in General)

Coding Style: accessor vs. attribute

Hi JD

In short, your colleague is undertaking a very bad practice - exposing the implementation of his classes as opposed to the interface. He seems to dismiss encapsulation as a non-issue - amazing. This leads to a high level of coupling between classes, which is always to be avoided. Using your colleague's idea, let's say I decide to rename one of my class members - I have potentially just broken every single piece of code that uses my class, and they are broken because they have an unacceptably high dependence on the internal implementation details of my class. How my class does what it does is nobody else's business, and that 'how' includes the number of, type of, and name of any member variables. If your colleague does not think this is important, why use OO languages at all? Everyone is entitled to their own opinion, but I feel he really is fighting a losing battle on this one. Encapsulation is one of the most fundamental tenets of OO programming. BTW, I have never yet read a Java coding standard that allows your friend's style of programming, let alone encourages it.

Michael

[ October 30, 2003: Message edited by: Michael Fitzmaurice ]

(17 years ago, Java in General)
0.) Try to guess what the following print. Check your solution using the Python interpreter.

>>> tuple(map(lambda x: x * x, (1, 2, 3, 4, 5, 6)))
>>> tuple(filter(lambda x: x % 2 == 0, (1, 2, 3, 4, 5, 6)))
>>> def is_even(x):
...     return x % 2 == 0
...
>>> def square(x):
...     return x * x
...
>>> tuple(map(square, filter(is_even, (1, 2, 3, 4, 5, 6))))

1.) Write the sizes procedure using map. This procedure will take a sequence of tuples and will return a sequence of the sizes of each of the tuples in the original input.

def sizes(seq):
    """Takes an input sequence of tuples, seq, and returns a sequence
    with the corresponding lengths of each tuple in seq.

    Arguments:
    seq -- A sequence of tuples.

    >>> sizes(((1,), (2, 3), (4, 5, 6)))
    (1, 2, 3)
    """
    "*** Your code here. ***"

2.) Write the odd_len_only procedure using filter. This procedure will take a sequence of tuples and return a sequence containing only those tuples in the original sequence which had an odd length.

def odd_len_only(seq):
    """Takes an input sequence of tuples, seq, and returns a sequence
    with only the tuples which had odd length.

    Arguments:
    seq -- A sequence of tuples.

    >>> odd_len_only(((1,), (2, 3), (4, 5, 6)))
    ((1,), (4, 5, 6))
    """
    "*** Your code here. ***"

Recall the implementation of Rlists from lecture and discussion section:

empty_rlist = None

def make_rlist(first, rest = empty_rlist):
    return first, rest

def first(r):
    return r[0]

def rest(r):
    return r[1]

3) First, let's implement the len_rlist function. You have seen this in both lecture and discussion section, but try to write it from scratch using recursion (i.e., no loops!).

def len_rlist(r):
    """Return the length of recursive list r.

    >>> len_rlist(empty_rlist)
    0
    >>> len_rlist(make_rlist(6, make_rlist(2, make_rlist(-12))))
    3
    """
    "*** Your code here. ***"

4) Now use a similar technique to write a (recursive) sum function for recursive lists.

def sum_rlist(r):
    """Return the sum of items in the recursive list r, assuming they
    are numbers.

    >>> sum_rlist(empty_rlist)
    0
    >>> sum_rlist(make_rlist(-1, make_rlist(3, make_rlist(8))))
    10
    """
    "*** Your code here. ***"

5) With only a very small modification to the previous function, you can also create a str_concat_rlist function that concatenates strings in a recursive list.

def str_concat_rlist(r):
    """Returns a string that is the concatenation of the strings in
    the recursive list r.

    >>> str_concat_rlist(empty_rlist)
    ''
    >>> str_concat_rlist(make_rlist('How ', make_rlist('do ', make_rlist('you ', make_rlist('sum ', make_rlist('strings?'))))))
    'How do you sum strings?'
    """
    "*** Your code here. ***"

6) Now let's write an insert function that inserts an item at a specific index in the recursive list. If the index is greater than the current length, you should insert the item at the end of the list. HINT: This will be much easier to implement using recursion than a loop!

def insert_rlist(r, item, index):
    """Returns a recursive list matching r but with the given item
    inserted at the specified index. If the index is greater than the
    current length, the item is appended to the end of the list.

    Arguments:
    r -- A recursive list.
    item -- the item to be inserted
    index -- the index at which to insert the item

    >>> r = make_rlist('I', make_rlist(' love', make_rlist(' recursion')))
    >>> str_concat_rlist(insert_rlist(r, ' using', 2))
    'I love using recursion'
    >>> str_concat_rlist(insert_rlist(r, '!', 100))
    'I love recursion!'
    """
    "*** Your code here. ***"

7) Let's check to make sure you are using the recursive list abstract data type properly. Replace the initial implementation with the one below in your code:

empty_rlist = 'empty_rlist'

def make_rlist(first, rest = empty_rlist):
    return rest, first

def first(r):
    return r[1]

def rest(r):
    return r[0]

See what we did there? We simply changed the order of how the list data is stored in the tuple. We also modified the way an empty rlist is represented by using a specific string instead of the NoneType. This is just one of the (literally) infinite ways we could have implemented Rlists. You should now re-run our doctests with the new implementation. Does everything still work? If not, fix any code that breaks an abstraction barrier so that your functions will be agnostic to the underlying implementation of Rlists.

8) Finally, since you're on a roll, see if you can implement a filter function for recursive lists. Again, recursion is your friend!

def filter_rlist(predicate, r):
    """Returns a recursive list matching r but with items failing the
    predicate removed.

    Arguments:
    predicate -- A function that takes a single argument and returns
                 True or False.
    r -- A recursive list.

    >>> from math import sqrt
    >>> r = make_rlist(53, make_rlist(16, make_rlist(625, make_rlist(50, make_rlist(49)))))
    >>> sum_rlist(filter_rlist(lambda x : sqrt(x)%1==0, r))
    690
    >>> s = make_rlist('I', make_rlist(' love', make_rlist(' recursion', make_rlist('!'))))
    >>> str_concat_rlist(filter_rlist(lambda x: len(x) < 7, s))
    'I love!'
    """
    "*** Your code here. ***"

Now that you are a master of recursion, show it off to all of your friends and use it in your daily life. Can you find a way to recursively walk to your next class?
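As a worked illustration in the same spirit (deliberately not a solution to any of the numbered exercises above), here is a membership test written against the rlist abstraction. Because it uses only the constructor and selectors, it survives the implementation swap in step 7 unchanged:

```python
# Rlist abstraction exactly as defined in the lab.
empty_rlist = None

def make_rlist(first, rest=empty_rlist):
    return first, rest

def first(r):
    return r[0]

def rest(r):
    return r[1]

def contains_rlist(r, item):
    """Return True if item appears somewhere in recursive list r.

    Same recursive shape as the exercises: handle the empty list,
    check the first element, then recurse on the rest. Using ==
    against empty_rlist (rather than poking at the tuple) keeps
    this working under either implementation from step 7.
    """
    if r == empty_rlist:
        return False          # base case: nothing left to check
    if first(r) == item:
        return True           # found it at the front
    return contains_rlist(rest(r), item)

r = make_rlist(6, make_rlist(2, make_rlist(-12)))
print(contains_rlist(r, 2))    # True
print(contains_rlist(r, 99))   # False
```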
Forum:Extremely well thought out rant about the CRISIS

From Uncyclopedia, the content-free encyclopedia

Soooooo, how's it goin? Here's the crisis: Capital letters. I just spent 5 minutes trying to find one of my favorite unbooks, UnBooks:Gone With d' Wind. I spelled it many different ways: UnBooks:gone with d' wind, unbooks:Gone With d' Wind. Then I gave up, and just made a redirect: Gone With the Wind BUT LOOK OUT!! If you put in gone with the wind or Gone With The Wind you get jack-shit! Please, what the hell is going on with the capital letters??? I always wondered why search is capslock sensitive, but I never had the balls to ask until now. It is THE most annoying thing on Uncyc, even more annoying than against votes, even more annoying than getting banned every time I look at that one user, and even more annoying than YTTE's never-ending usergroups! How do we fix it, or, conversely, how can we turn this forum into a discussion about the newly resurrected penisman? • <21:23, 13 Aug 2008>
- I agree. Fuck case sensitivity. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 21:23, 13 August 2008 (UTC)

Thank you, TKF! And another thing, I don't even understand the RULES of case sensitivity. Is it the first letter of every word, or what? • <21:27, 13 Aug 2008>
- I believe that, for mainspace, the rule is that the first letter of the title is capitalised and none thereafter (apart from names, places, etc). What we're meant to do with other namespaces, especially UnBooks, is anybody's guess. I just pretend that there's no extra bit at the start and proceed as before. -- 15Mickey20 (talk to Mickey) 22:20, 13 August 2008 (UTC)
- What we should do is make that not-so-much a rule anymore, if only for the reason that satan does it...or is that a wikia-wide rule? Either way, REALLYREALLYREALLYREALLY fuckin' stupid. Chocolate Rain 00:09, 14 August 2008 (UTC)
- My guess is that it's built into the wikia software itself. (I'm pissed that it capitalized my user name.) --monika 00:14, 14 August 2008 (UTC)
- Probably true, I think there was some argument about some band or artist's name about 6 months back about their name being capitalized on their wikipedia article... Chocolate Rain 00:20, 14 August 2008 (UTC)

Look, guys, if we got rid of case sensitivity, then User:Benson and User:BENSON would combine powers and merge into one super ultra evil Benson machine, and we'd all be doomed until the Power Rangers kill it. And I know no one wants that to happen. So case sensitivity stays. --Pleb SYNDROME CUN medicate (butt poop!!!!) 02:13, 14 August 2008 (UTC)

Okay, here are the rules as far as I can tell, in case anyone is confused:

1. New Page Rule: When you create a page, the first letter will be capitalized. You don't get a say in the matter. The rest of the letters remain as you left them.

2. Search Rule: When you enter a page name in the search box, it first tries it with your capitalization, and if that page doesn't exist, it capitalizes the first letter and tries again. If that still doesn't work, it lower-cases the rest and tries once more. If the page doesn't exist, it tells you the page doesn't exist with the capitalization scheme you gave it, and offers to create it under Rule 1. For example, searching for "Japan-France" or "japan-France" takes you to Japan-France but searching for "Japan-france", "japan-france", or "JAPAN-FRANCE" takes you to Japan-france, which redirects to Japan-France.

3. Link Rule: When you make a link, it automatically capitalizes the first letter and leaves the rest as they are. It will not try again as it would in the search box in rule 2. For example, megadeth and Megadeth will both go to the page that exists. mEGADETH and MEGADETH will both not work (and are seen as "MEGADETH" by the parser, as you can see if you scroll over). Any of those would go to the right page in the search box.

4. Gone with the Wind Rule: The parser does not recognize "d'" as a correct spelling of "the".
--monika 00:32, 14 August 2008 (UTC)
- Quality whoring there monika. Hats off, Aug 14
- If I wanted to whore out an article, I'd whore out a new one that needs pimping. Japan-France is my oldest and possibly worst article and I only used it because it was the first article I thought of that had a default capitalization redirect to another capitalization. --monika 00:44, 14 August 2008 (UTC)
- I'm sure you know it was a genuine compliment. Anything else would be the Megadeth:57, Aug 14
- Ah, see, I didn't know what you were talking about :) But Megadeth doesn't need pimping either! It's everywhere! Recent articles, VFH (thanks), Metallica... It's just the related betting pool that needs pimping. --monika 01:02, 14 August 2008 (UTC)

query

shouldn't spang be here by now to use a bullwhip on us and tell us that we'll get the current capitalization rules, and we'll like it? 03:01, 14 August 2008 (UTC)
- Yeah, mon whoring his article to me is greeeeat, but can I please have some official answers, here?? • <3:03, 14 Aug 2008>
- I sat there and tested and solved the rules for you. What more do you want? I don't have time to hack Wikia for you tonight... --monika 03:06, 14 August 2008 (UTC)
- The page you were looking for is first in the search results for the first two you supposedly tried, and 10th for the second two. Are you sure you tried actually reading the results of your search?
- And the case of this wiki is called first-letter. The first letter is always capitalised, and the rest are however you made them. That's what redirects are for.
- And the "go" function follows this pattern for matching. Also, if you enter an IP address, it goes straight to that IP's contributions page. Neato. • Spang • ☃ • talk • 04:00, 14 Aug 2008
- Bah. I missed the fourth pass it makes. I am a failure as a scientist. I'm sorry Cajek. I didn't realize that "GONE WITH THE WIND" would be just as good an idea for the location of the redirect as "Gone with the wind". --monika 04:03, 14 August 2008 (UTC)
- You did good, mon, thanks anyway. It doesn't really matter: I'm still pretty confused. • <5:03, 14 Aug 2008>
- What I've taken from this is that if you have an article with lots of words, make sure that you have a redirect from the title all in lowercase except the first letter. If you have that, all searches will work (and not over-strain the server) and while not all links will work, they'll be easy to fix. (But of course we could have avoided this if Wikia'd been smarter about it years ago.) --monika 05:40, 14 August 2008 (UTC)
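Monika's Search Rule from the thread above lends itself to a small sketch of the fallback order she describes. This is a guess at the behavior as stated, not MediaWiki's actual code, and the page set and function name are invented for illustration:

```python
def resolve_title(query, existing_pages):
    """Try the query as typed, then with the first letter
    capitalized, then with the first letter capitalized and the
    rest lower-cased - the fallback order in the Search Rule."""
    candidates = [
        query,
        query[:1].upper() + query[1:],
        query[:1].upper() + query[1:].lower(),
    ]
    for title in candidates:
        if title in existing_pages:
            return title
    return None  # page "doesn't exist"; offer to create it

pages = {"Japan-France", "Japan-france"}
print(resolve_title("japan-France", pages))  # Japan-France
print(resolve_title("JAPAN-FRANCE", pages))  # Japan-france
```

This reproduces the examples in the rule: "japan-France" resolves on the second pass, while "JAPAN-FRANCE" only matches after the rest is lower-cased.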
Identify the user

Set the IP address to "{{auto}}" to let Sentry infer it from the connection. Additionally, you can provide arbitrary key/value pairs beyond the reserved names, and the Sentry SDK will store those with the user.

To identify the user:

using Sentry;

SentrySdk.ConfigureScope(scope =>
{
    scope.User = new User
    {
        Email = "john.doe@example.com"
    };
});

You can also clear the currently set user by resetting scope.User in the same way.
> -----Original Message-----
> From: Stephen Colebourne [mailto:scolebourne@btopenworld.com]
> Sent: Thursday, August 15, 2002 2:39 PM
> To: Jakarta Commons Developers List
> Subject: Re: [VOTE] RESULT XxxUtils constructors
>
> From: "Michael A. Smith" <mas@apache.org>
> > On Thu, 15 Aug 2002, robert burrell donkin wrote:
> > > there is no consensus.
> >
> > agreed.
>
> The problem is that the Jakarta voting rules don't help us here. Given
> Henri and Michael also gave the result as 'inconclusive' I withdraw my
> result declaration (gosh, that sounds very official).
>
> The only way to move forward would seem to be another vote, more
> tightly driven. However, there's probably time for another round of
> discussion first...

I originally voted for protected. However, I did so before seeing the approach described below. If that will actually work for Velocity, then I'd be happy to change my vote and jump in with the 'private' crowd.

--
Martin Cooper

> > > (Although we could probably argue about it forever ;-)
> > >
> > > the problem is that pushing this forward against opposition bodes
> > > ill for a project that is attempting to encourage code reuse. the
> > > class was developed in velocity but it can no longer be used.
> > > instead, the stated position of the privateers is that velocity
> > > needs to be changed. other projects and component beware.
> >
> > I don't think anyone said that velocity *needs* to change. As far as
> > I can remember, the "privateers" asked why commons needs to make
> > beans out of decidedly non-bean classes rather than velocity
> > accepting non-bean classes. Just because velocity says "beans-only",
> > doesn't mean that the Utils need to be made into beans when they are
> > not.
> >
> > In other words, we have some component (e.g. Velocity) that only
> > works on beans, and another component (e.g. StringUtils) that is not
> > a bean, and we're trying to get them to work together. To begin
> > with, it already seems like a bad mix. Neither side wants to change
> > things (i.e. velocity doesn't want to accept non-bean classes to
> > allow static method invocation on them, and us "privateers" don't
> > want to make a non-bean class into a bean).
> >
> > However, that doesn't mean that things can't work without making one
> > of the "sides" change to accommodate the other. Paul posted a
> > perfectly reasonable bridge between the two, and I don't think
> > anyone has posted a reason why it couldn't be used:
> >
> > public class Util {
> >     public StringUtils string() { return null; }
> >     public ObjectUtils object() { return null; }
> >     public CollectionUtils collection() { return null; }
> > }
> >
> > $foo.string().method()
> >
> > And, although I'm not that familiar with velocity, I'm guessing you
> > could even do:
> >
> > public class Util {
> >     public StringUtils getString() { return null; }
> >     public ObjectUtils getObject() { return null; }
> >     public CollectionUtils getCollection() { return null; }
> > }
> >
> > then, you could use:
> >
> > $foo.string.method()
> >
> > (i.e. without the "()")
> >
> > Is there a reason velocity users can't use this? Doesn't this bridge
> > the bean to non-bean gap?
> >
> > regards,
> > michael
I have been very interested in the sales of Math You Can't Use: Patents, Copyright, and Software, a book with which I was heavily involved. (Amazon page) So naturally, I've been tracking the Amazon sales rank. At first, I did it the way everybody else does--refreshing the darn page every twenty minutes--but I have recently started doing it the civilized way--an automated script. Here is what I've learned about how Amazon does its rankings.

How much more detail can we get? The answer: none, really. You'll see below that over the course of a few days, the ranking of a typical book will go from 50,000 to 500,000, and a minute later it will be back at 50,000. Thus, the sort of things we usually do with a ranking, like compare two books, are unstable to the point of uselessness.

One thing you evidently can do with the ranking is determine whether a book has sold a copy in the last hour or two. As you'll see below, there's a simple formula that will work for most books: if (current rank) > (earlier rank) then there was a recent sale.

[Graph: Sales rank for Klemens's Math You Can't Use]

There are lots of details that those of us not working at Amazon will never quite catch. There are periods (sometimes mid-day) when the rank drifts down more slowly than it should, then speeds up in its descent. This implies to me some computational approximations that eventually get corrected. You'll notice that some of the books below show a small slope upward (a ten or twenty point rise in ranking) from time to time. When this happens, lots of books do it at once, also indicating some sort of correction whose purpose or method I don't have enough information to divine. Epstein and Axtell's book rises appreciably when it nears half a million. Finally, I don't have enough data to determine whether the ranking distinguishes between sales of used and new copies; I don't think it does.

Here is a haphazard sampling of other books. Again, these are dynamically regenerated every three hours, so come back later for more action-packed graphing. Update 25 June 2008: I've switched to a host that doesn't have gnuplot, so the plots are no longer updated; you've got two years of data, and that's it. Some of these books bear something in common with Math You Can't Use, and others were based on a trip to the used book store I'd made the other day. Some have hardcover and paperback editions, in which case I just plot the paperback.

Epstein and Axtell's Growing Artificial Societies [Amazon p.]
[Graph: Sales rank for Epstein and Axtell's Growing Artificial Societies]

Andy Rathbone, Tivo for Dummies. I have no idea who would buy this, and yet it is the best nonfiction seller here. This proves that I must never go into marketing. [Amazon p.]
[Graph: Sales rank for Rathbone's Tivo for Dummies]

Dickens's Great Expectations, Penguin Classics ed. [Amazon p.] Books in the top 10,000 or so are selling several copies a day, so the pattern looks different.
[Graph: Sales rank for Great Expectations]

Madonna's Sex. [Amazon p.] Somebody ran into the used bookstore asking for a copy, and ran out when the owner said he didn't have one. It's amusing that a book from 1992 could still instill such fervor in a person. It sells new for $125, used around $85.
[Graph: Sales rank for Madonna's Sex]

Ian McEwan's Atonement. [Amazon p.] I really thought I'd hate this book, since it starts off as being about subtle errors in manners committed by a gathering of relatives and friends at a British country manor, but it turned out to be an interesting modern take on the genre. Update: After I read it, it turned into a movie; you can see in the plot when it was in theaters.
[Graph: Sales rank for McEwan's Atonement]

Those of us interested in the sales rank of books outside Oprah's picks would be better served if the system were less volatile. In technical terms, if my guess that the score experiences exponential decay is correct, then the ranking system would be more useful to those of us watching the long tail if the decay factor were set to a smaller value.

The data looks to me like an exponential decay system, where you have a current score S_t which goes up by some amount every sale, but drifts down by some discount rate every period: S_{t+1} = λ S_t. [Thus, if there were no sales events, your score would be S_t = S_0 λ^t, i.e., exponential decay.] To fit this, I flipped and renormalized the rankings so that one was the highest possible ranking, and zero corresponded to a ranking of 500,000. Then, I set the following algorithm:

• The score was initialized at 0.58.
• Each period, the score is multiplied (shrinks) by a factor of 0.96.
• If there is a sale, then the score rises by the addition of (1 - current score) * 0.79.

As you can imagine, I found those constants via minimizing the distance between the estimate and the actual. The algorithm is an exponential decay model with λ = 0.96, and upward shocks as described. The only way I could fit the data was to make shocks when the book is at a low sales rank bigger than shocks when it has a high sales rank. There's surely a more clever way to do it. The green line shows the exponential decay model fit to the actual data. You can decide if this is a good fit or a lousy one.

[Graph: My attempts to fit the Amazon sales rank to an exponential model]

You can also have a look at how the model fit to Madonna's book.

Usage:

import amazon  # assuming you've saved the file below as amazon.py
amazon.onebookloop("03030303", "output_graph.png")

This is pretty rudimentary; in the spirit of open source, I'd be happy to post your improvements.

#!/usr/bin/python
# (c) 2006 Eric Blair. Licensed under the GNU GPL v 2.
import re, os, time def checkrank(asin): (a,b,wresult) = os.popen3(""" wget "" -O - """ %(asin,)) exp =re.compile(R'Amazon.com Sales Rank:</b> #([^ ]*) in Books') result = exp.search(b.read()) if result is not None: return result.groups()[0] else: return None def onebookloop(asin, outfile): if not os.path.isfile(outfile): f = open(outfile, 'w') f.write ("""set term png set xdata time set timefmt "%Y; %m; %d; %H;" set yrange [1:*] reverse plot '-' using 1:5 title "sales rank" """) f.close() while (1): f = open("rankings.%s" % (asin,), 'a'); t = time.localtime() r = None while r is None: r = checkrank(asin) if r is None: time.sleep(10) f.write("%i; %i; %i; %i; %s\n"% ( t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, r)); f.close() os.popen(""" sed -e 's/,//g' < rankings.%s | gnuplot > %s """ % (asin,outfile) ) time.sleep(3*60*60) #3 hours. [link] [7 comments] [Previous entry: "The abject failure of IP PR"] [Next entry: "The continuing Byzantine-Ottoman war"] Replies: 7 comments on Thursday, August 10th, techne said Very fun. If only I had a book to track. on Thursday, August 10th, Andy said You can also use this formula to judge how Amazon's business is doing overall. When the book drifts down at a faster rate, that means that more books underneath it are selling; when it drifts down more slowly, fewer books below it are being sold. on Thursday, August 10th, Miss ALS of San Diego, of course said You're a geek.I know why Tivo for Dummies sells--how funny would that be as a gift Christmas morning with your tivo? Answer? Funny. Very very funny. We had a good laugh at the bookstore as I recall. on Sunday, August 20th, AC said Heh. Neat. on Sunday, March 25th, Mike said I know why Tivo for Dummies sells -- I work at a call center for Directv, and get calls all the time like: "How do I record a show, How do I erase a show, how do I set up to record something regularly?". 
It really makes one sad for the future of society because more than half of these people have VCRs and can use them well. Just remember... It may be obvious, it may be EXACTLY THE SAME as something they ALREADY USE, but it ISN'T what they already use, so obviously their pre-existing knowledge is worthless. This holds true for web sites and computer programs as well.

on Tuesday, March 17th, jonathan yates said: I have no idea how to use your script. I have been trying for days to get the following working but to no avail - any ideas? Jonathan

import os
import string
com="wget -O -"
temp=os.popen(com)
msg=temp.read()
index=string.find(msg,"")
index2=string.find(msg,"")
fc=int(msg[index12:index2])
print str(fc)

on Tuesday, March 17th, the author said: Jonathan, I'm not much of an expert on the details of Python, to tell you the truth, just decent enough to bang together scripts like this. So I don't think I could help you debug the partial script you put up here. There are many sites that will help you better than I could; e.g., everything I know I learned from Dive into Python.
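The decay-with-shocks model in the post is easy to simulate. Here is a minimal sketch using the constants the post reports (0.58 start, 0.96 decay, 0.79 shock); the sale schedule is invented for illustration and the function name is my own:

```python
# Sketch of the post's decay-with-shocks model: every period the score
# shrinks by a factor of 0.96; each sale pushes it up by (1 - score) * 0.79,
# so shocks are larger when the score (i.e. the sales rank) is low.
def simulate(sale_periods, n_periods=30, s0=0.58, decay=0.96, shock=0.79):
    scores = []
    s = s0
    for t in range(n_periods):
        s *= decay                    # drift down every period
        if t in sale_periods:
            s += (1 - s) * shock      # upward shock on a sale
        scores.append(s)
    return scores

scores = simulate(sale_periods={5, 12})
print(round(scores[0], 4))  # one saleless period: 0.58 * 0.96 = 0.5568
```

Plotting such a series gives the same saw-tooth shape as the green fitted line described above: long decaying runs punctuated by jumps at each sale.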
http://fluff.info/blog/arch/00000188.htm
My first question on SO! What I'm working on is a Webforms page that has a lot of ASP textboxes, datepickers and dropdowns. Right now I'm using ADO.net for all of these controls to do CRUD operations. I'm already a great deal into this project, but I can't help wondering if I could be doing this in an easier or more efficient way. So far I'm using the SqlDataReader class for everything. If someone could break down the different options available to me or point me to some good information, I'd appreciate it. I know it's a pretty broad topic. I'm kind of aware of LINQtoSQL and EntityFramework. So my question really is: How does ADO.net compare to LINQtoSQL or EntityFramework?

You should read up on one sample each of ADO.NET, Linq 2 SQL and Entity Framework and implement them to know the pros/cons of each. A simple web search should give you samples. Linq2Sql and EF will require very little SQL query writing from you. Once you have an initial grasp of these 3 things individually, follow this simple pattern in your code, e.g.:

public interface IRepository
{
    MyDto GetData(int id);
    // and so on
}

public class EntityFrameworkRepository : IRepository
{
    public MyDto GetData(int id)
    {
        using (var db = new MyDbContext())
        {
            var myDtoEntity = db.MyDtoEntity.FirstOrDefault(m => m.Id == id);
            // extension method to transform DB objects into DTOs
            return myDtoEntity.ToMyDto();
        }
    }
}

// similarly you can do:
public class Linq2SqlRepository : IRepository
{
    // so on..
}

// now for all your aspx.cs pages: derive them from a base page,
// and in the base page have a variable like this, so that all
// pages have access to this.
public IRepository Repository { get; set; }
// you can have static instances as well for one time initialization.
// you can initialize the Repository with a particular concrete implementation
// or inject it.
(if you're well versed with Dependency Injection). Using the above approach, all your code will work off the interface, and you just need to change one place if you decide to change the implementation.
https://codedump.io/share/dnTrEBYuFt76/1/what-are-the-some-data-access-options-for-manipulating-data-using-aspnet-controls
In this post, you'll learn how to merge Python dictionaries. You'll learn different ways, allowing you to merge dictionaries all the way back to Python 3.4, including some new, more efficient ways of accomplishing this in Python 3.9. For this tutorial, we'll use three dictionaries: dict_a, dict_b, and dict_c. Let's start things off by creating them here:

dict_a = {
    'age': 31,
    'name': 'Nik'
}

dict_b = {
    'career': 'Python-enthusiast',
    'gender': 'Male'
}

dict_c = {
    'age': 32,
    'location': 'Toronto'
}

Now we can access dictionary items by either using square-bracket notation or, as a much safer alternative, using the .get() method:

>>> print(dict_a['age'])
31
>>> print(dict_b['gender'])
Male

The reason that the .get() method is considered safer is that it will return None if a key doesn't exist. Using the square-bracket notation, on the other hand, will raise a KeyError, thereby causing your code to fail unless you build in an exception.

Check out some other Python tutorials on datagy, including our complete guide to styling Pandas and our comprehensive overview of Pivot Tables in Pandas!

Merge Dictionaries in Python 3.9+ Using the Union Operator

Python 3.9 introduced a very handy (and syntactically beautiful) operator to merge two dictionaries. To learn more about this change, check out the PEP 584 documentation on the official Python website. In order to use this change, make sure you're running Python 3.9 or higher. Now let's get to merging some dictionaries:

>>> merged_dict = dict_a | dict_b
>>> print(merged_dict)
{'age': 31, 'name': 'Nik', 'career': 'Python-enthusiast', 'gender': 'Male'}

Similarly, you can use the in-place operator to simplify your code even further. Say you wanted to update dict_a by merging in the values of dict_b.
Instead of writing dict_a = dict_a | dict_b, you could simply write:

>>> dict_a |= dict_b
>>> print(dict_a)
{'age': 31, 'name': 'Nik', 'career': 'Python-enthusiast', 'gender': 'Male'}

Now, you may be wondering what happens when your dictionaries share keys. The simple answer is that the values in the left-sided dictionary are replaced by those on the right side. This occurs since keys in dictionaries need to be unique. In our sample dictionaries we have overlapping keys in dict_a and dict_c. Let's try merging them and see what happens:

>>> merged_dict = dict_a | dict_c
>>> print(merged_dict)
{'age': 32, 'name': 'Nik', 'location': 'Toronto'}

Merge Dictionaries in Python 3.5+ Using Unpacking

Python 3.5 introduced the ability to unpack variables using the unpacking operator **. The way this method works is to create a dictionary using the unpacked values of two or more dictionaries. Similar to the method above, this will overwrite any duplicate keys with the dictionary on the right. Let's give this a shot:

>>> merged_dict = {**dict_a, **dict_c}
>>> print(merged_dict)
{'age': 32, 'name': 'Nik', 'location': 'Toronto'}

Merge Dictionaries in Python 2 and 3 Using .update()

Finally, if you need to use an older version of Python, say Python 3.4 or earlier, you can use the .update() method. What we'll do is first copy the dictionary into a new dictionary using the .copy() method. Then, we'll use the .update() method in place to merge the dictionaries:

>>> merged_dict = dict_a.copy()
>>> merged_dict.update(dict_c)
>>> print(merged_dict)
{'age': 32, 'name': 'Nik', 'location': 'Toronto'}

This can be a bit cumbersome to run. It may be prudent to write a function to accomplish this.
We can do this by writing:

def merge_dicts(dict_1, dict_2):
    merged_dict = dict_1.copy()
    merged_dict.update(dict_2)
    return merged_dict

merged_dict = merge_dicts(dict_a, dict_b)
print(merged_dict)

This returns the following:

{'age': 31, 'name': 'Nik', 'career': 'Python-enthusiast', 'gender': 'Male'}

Conclusion

In this post, you learned how to merge Python dictionaries using multiple methods, depending on the version of Python you're using. You learned how to use the union operator, the unpacking operator, and the .update() method.
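The .update() pattern above also generalizes naturally to any number of dictionaries. The helper name below is my own, not from the original post:

```python
def merge_all(*dicts):
    """Merge any number of dictionaries; later dictionaries win on
    duplicate keys -- the same right-side-wins rule as the | operator."""
    merged = {}
    for d in dicts:
        merged.update(d)
    return merged

dict_a = {'age': 31, 'name': 'Nik'}
dict_b = {'career': 'Python-enthusiast', 'gender': 'Male'}
dict_c = {'age': 32, 'location': 'Toronto'}

print(merge_all(dict_a, dict_b, dict_c))
# {'age': 32, 'name': 'Nik', 'career': 'Python-enthusiast', 'gender': 'Male', 'location': 'Toronto'}
```

Because it only relies on .copy()-free in-place updates of a fresh dict, this works on every Python version the post covers.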
https://datagy.io/merge-python-dictionaries/
ISSUE-131: Mechanism for importing standard UoM? import UOM

Mechanism for importing standard UoM?

- State: PENDING REVIEW
- Product: QB4ST
- Raised by: Phil Archer
- Opened on: 2017-01-03
- Description: Is there an option for importing standard uom, envelope properties? For time there is hasBeginning, hasEnd - how do we use these directly and create spatial analogues?
- Related Action Items: No related actions
- Related emails: No related emails

Related notes: created qb4st:envelopeStart, envelopeEnd, etc. CRS reference will define UoM for spatial, and we will ignore the more general case - outside QB4ST scope. (Rob Atkinson, 21 Feb 2017, 03:10:27)
https://www.w3.org/2015/spatial/track/issues/131
Let's step through adding a basic dissector. We'll start with the made up "foo" protocol. It consists of the following basic items:

- A packet type - 8 bits; possible values: 1 - initialisation, 2 - terminate, 3 - data.
- A set of flags stored in 8 bits: 0x01 - start packet, 0x02 - end packet, 0x04 - priority packet.
- A sequence number - 16 bits.
- An IP address.

The first decision you need to make is if this dissector will be a built-in dissector, included in the main program, or a plugin. Plugins are the easiest to write initially, so let's start with that. With a little care, the plugin can be made to run as a built-in easily too - so we haven't lost anything.

Example 9.1. Dissector Initialisation.

#ifdef HAVE_CONFIG_H
# include "config.h"
#endif

#include <epan/packet.h>

#define FOO_PORT 1234

static int proto_foo = -1;

void proto_register_foo(void)
{
    proto_foo = proto_register_protocol (
        "FOO Protocol", /* name */
        "FOO",          /* short name */
        "foo"           /* abbrev */
        );
}

Let's go through this a bit at a time. First we have some boilerplate include files. These will be pretty constant to start with. Next we have an int that is initialised to -1 that records our protocol. This will get updated when we register this dissector with the main program. It's good practice to make all variables and functions that aren't exported static to keep name space pollution down. Normally this isn't a problem unless your dissector gets so big it has to span multiple files. Then a #define for the UDP port that we'll assume we are dissecting traffic for. Now that we have the basics in place to interact with the main program, we'll start with two protocol dissector setup functions. First we'll call the proto_register_protocol function which registers the protocol. We can give it three names that will be used for display in various places. The full and short name are used in e.g. the "Preferences" and "Enabled protocols" dialogs as well as the generated field name list in the documentation.
The abbreviation is used as the display filter name. Next we need a handoff routine.

Example 9.2. Dissector Handoff.

void proto_reg_handoff_foo(void)
{
    static dissector_handle_t foo_handle;
    foo_handle = create_dissector_handle(dissect_foo, proto_foo);
    dissector_add("udp.port", FOO_PORT, foo_handle);
}

What's happening here? We are initialising the dissector. First we create a dissector handle; it is associated with the foo protocol and with a routine to be called to do the actual dissecting. Then we associate the handle with a UDP port number so that the main program will know to call us when it gets UDP traffic on that port. The standard Wireshark dissector convention is to put proto_register_foo and proto_reg_handoff_foo as the last two functions in the dissector source. Now at last we get to write some dissecting code. For the moment we'll leave it as a basic placeholder.

Example 9.3. Dissection. To start with, we'll do the minimum we can get away with. The first two lines check to see if the Protocol column is being displayed in the UI. If it is, we set the text of this to our protocol, so everyone can see it's been recognised. The only other thing we do is to clear out any data in the INFO column if it's being displayed. At this point we should have a basic dissector ready to compile and install. It doesn't do much at present, other than identify the protocol and label it.
In order to compile this dissector and create a plugin a couple of support files are required, besides the dissector source in packet-foo.c:

Makefile.am - This is the UNIX/Linux makefile template
Makefile.common - This contains the file names of this plugin
Makefile.nmake - This contains the Wireshark plugin makefile for Windows
moduleinfo.h - This contains plugin version info
moduleinfo.nmake - This contains DLL version info for Windows
packet-foo.c - This is your dissector source
plugin.rc.in - This contains the DLL resource template for Windows

You can find a good example for these files in the agentx plugin directory. Makefile.common and Makefile.am have to be modified to reflect the relevant files and dissector name. moduleinfo.h and moduleinfo.nmake have to be filled in with the version information. Compile the dissector to a DLL or shared library and copy it into the plugin directory of the installation. Now that we have our basic dissector up and running, let's do something with it. The simplest thing to do to start with is to just label the payload. This will allow us to set up some of the parts we will need. The first thing we will do is to build a subtree to decode our results into. This helps to keep things looking nice in the detailed display. Now the dissector is called in two different cases. In one case it is called to get a summary of the packet, in the other case it is called to look into details of the packet. These two cases can be distinguished by the tree pointer. If the tree pointer is NULL, then we are being asked for a summary. If it is non null, we can pick apart the protocol for display. So with that in mind, let's enhance our dissector.

Example 9.4. Plugin Packet Dissection.

static void
dissect_foo(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
{
    if (check_col(pinfo->cinfo, COL_PROTOCOL)) {
        col_set_str(pinfo->cinfo, COL_PROTOCOL, "FOO");
    }
    /* Clear out stuff in the info column */
    if (check_col(pinfo->cinfo, COL_INFO)) {
        col_clear(pinfo->cinfo, COL_INFO);
    }
    if (tree) { /* we are being asked for details */
        proto_item *ti = NULL;
        ti = proto_tree_add_item(tree, proto_foo, tvb, 0, -1, FALSE);
    }
}

What we're doing here is adding a subtree to the dissection.
This subtree will hold all the details of this protocol and so not clutter up the display when not required. We are also marking the area of data that is being consumed by this protocol. In our case it's all that has been passed to us, as we're assuming this protocol does not encapsulate another. Therefore, we add the new tree node with proto_tree_add_item, adding it to the passed in tree, label it with the protocol, use the passed in tvb buffer as the data, and consume from 0 to the end (-1) of this data. The FALSE we'll ignore for now. After this change, there should be a label in the detailed display for the protocol, and selecting this will highlight the remaining contents of the packet. Now let's go to the next step and add some protocol dissection. For this step we'll need to construct a couple of tables that help with dissection. This needs some additions to the proto_register_foo function shown previously. Two statically allocated arrays are added at the beginning of proto_register_foo. The arrays are then registered after the call to proto_register_protocol.

Example 9.5. Registering data structures.

void proto_register_foo(void)
{
    static hf_register_info hf[] = {
        { &hf_foo_pdu_type,
            { "FOO PDU Type", "foo.type",
            FT_UINT8, BASE_DEC,
            NULL, 0x0,
            NULL, HFILL }
        }
    };

    /* Setup protocol subtree array */
    static gint *ett[] = {
        &ett_foo
    };

    proto_foo = proto_register_protocol (
        "FOO Protocol", /* name */
        "FOO",          /* short name */
        "foo"           /* abbrev */
        );

    proto_register_field_array(proto_foo, hf, array_length(hf));
    proto_register_subtree_array(ett, array_length(ett));
}

The variables hf_foo_pdu_type and ett_foo also need to be declared somewhere near the top of the file.

Example 9.6. Dissector data structure globals.

static int hf_foo_pdu_type = -1;
static gint ett_foo = -1;

Now we can enhance the protocol display with some detail.

Example 9.7. Dissector starting to dissect the packets.

static void
dissect_foo(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
{
    if (tree) { /* we are being asked for details */
        proto_item *ti = NULL;
        proto_tree *foo_tree = NULL;
        ti = proto_tree_add_item(tree, proto_foo, tvb, 0, -1, FALSE);
        foo_tree = proto_item_add_subtree(ti, ett_foo);
        proto_tree_add_item(foo_tree, hf_foo_pdu_type, tvb, 0, 1, FALSE);
    }
}

Now the dissection is starting to look more interesting.
We have picked apart our first bit of the protocol. One byte of data at the start of the packet that defines the packet type for foo protocol. The proto_item_add_subtree call has added a child node to the protocol tree which is where we will do our detail dissection. The expansion of this node is controlled by the ett_foo variable. This remembers if the node should be expanded or not as you move between packets. All subsequent dissection will be added to this tree, as you can see from the next call. A call to proto_tree_add_item in the foo_tree, this time using the hf_foo_pdu_type to control the formatting of the item. The pdu type is one byte of data, starting at 0. We assume it is in network order, so that is why we use FALSE. Although for 1 byte there is no order issue it's best to keep this correct. If we look in detail at the hf_foo_pdu_type declaration in the static array we can see the details of the definition: FT_UINT8 specifies an 8 bit unsigned integer, which tallies with our call above where we tell it to only look at one byte. BASE_DEC - for an integer type, this tells it to be printed as a decimal number. It could be BASE_HEX or BASE_OCT if that made more sense. We'll ignore the rest of the structure for now. If you install this plugin and try it out, you'll see something that begins to look useful. Now let's finish off dissecting the simple protocol. We need to add a few more variables to the hf array, and a couple more procedure calls.

Example 9.8. Wrapping up the packet dissection.

...
static int hf_foo_flags = -1;
static int hf_foo_sequenceno = -1;
static int hf_foo_initialip = -1;
...

static void
dissect_foo(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
{
    gint offset = 0;
    ...
    proto_tree_add_item(foo_tree, hf_foo_pdu_type, tvb, offset, 1, FALSE);
    offset += 1;
    proto_tree_add_item(foo_tree, hf_foo_flags, tvb, offset, 1, FALSE);
    offset += 1;
    proto_tree_add_item(foo_tree, hf_foo_sequenceno, tvb, offset, 2, FALSE);
    offset += 2;
    proto_tree_add_item(foo_tree, hf_foo_initialip, tvb, offset, 4, FALSE);
    offset += 4;
    ...
}

void proto_register_foo(void) {
...
...
    { &hf_foo_flags,
        { "FOO PDU Flags", "foo.flags",
        FT_UINT8, BASE_HEX,
        NULL, 0x0,
        NULL, HFILL }
    },
    { &hf_foo_sequenceno,
        { "FOO PDU Sequence Number", "foo.seqn",
        FT_UINT16, BASE_DEC,
        NULL, 0x0,
        NULL, HFILL }
    },
    { &hf_foo_initialip,
        { "FOO PDU Initial IP", "foo.initialip",
        FT_IPv4, BASE_NONE,
        NULL, 0x0,
        NULL, HFILL }
    },
...
...
}
...

This dissects all the bits of this simple hypothetical protocol. We've introduced a new variable offset into the mix to help keep track of where we are in the packet dissection. With these extra bits in place, the whole protocol is now dissected. We can make the display more helpful by showing the packet type by name rather than as a bare number. The first step is a table that maps each type value to a name.

Example 9.9. Naming the packet types.

static const value_string packettypenames[] = {
    { 1, "Initialise" },
    { 2, "Terminate" },
    { 3, "Data" },
    { 0, NULL }
};

This table can then be referenced from the declaration of hf_foo_pdu_type with the VALS macro.

Example 9.10. Adding Names to the protocol.

{ &hf_foo_pdu_type,
    { "FOO PDU Type", "foo.type",
    FT_UINT8, BASE_DEC,
    VALS(packettypenames), 0x0,
    NULL, HFILL }
}

This helps in deciphering the packets, and we can do a similar thing for the flags structure. For this we need to add some more data to the table though.

Example 9.11. Adding Flags to the protocol.

#define FOO_START_FLAG      0x01
#define FOO_END_FLAG        0x02
#define FOO_PRIORITY_FLAG   0x04

static int hf_foo_startflag = -1;
static int hf_foo_endflag = -1;
static int hf_foo_priorityflag = -1;

static void
dissect_foo(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
{
    ...
    proto_tree_add_item(foo_tree, hf_foo_flags, tvb, offset, 1, FALSE);
    proto_tree_add_item(foo_tree, hf_foo_startflag, tvb, offset, 1, FALSE);
    proto_tree_add_item(foo_tree, hf_foo_endflag, tvb, offset, 1, FALSE);
    proto_tree_add_item(foo_tree, hf_foo_priorityflag, tvb, offset, 1, FALSE);
    offset += 1;
    ...
}

void proto_register_foo(void) {
...
...
    { &hf_foo_startflag,
        { "FOO PDU Start Flags", "foo.flags.start",
        FT_BOOLEAN, 8,
        NULL, FOO_START_FLAG,
        NULL, HFILL }
    },
    { &hf_foo_endflag,
        { "FOO PDU End Flags", "foo.flags.end",
        FT_BOOLEAN, 8,
        NULL, FOO_END_FLAG,
        NULL, HFILL }
    },
    { &hf_foo_priorityflag,
        { "FOO PDU Priority Flags", "foo.flags.priority",
        FT_BOOLEAN, 8,
        NULL, FOO_PRIORITY_FLAG,
        NULL, HFILL }
    },
...
...
}
...

Some things to note here. For the flags, as each bit is a different flag, we use the type FT_BOOLEAN, as the flag is either on or off. Second, we include the flag mask in the 7th field of the data, which allows the system to mask the relevant bit. We've also changed the 5th field to 8, to indicate that we are looking at an 8 bit quantity when the flags are extracted. Then finally we add the extra constructs to the dissection routine. Note we keep the same offset for each of the flags. This is starting to look fairly full featured now, but there are a couple of other things we can do to make things look even more pretty. At the moment our dissection shows the packets as "Foo Protocol" which whilst correct is a little uninformative. We can enhance this by adding a little more detail. First, let's get hold of the actual value of the protocol type. We can use the handy function tvb_get_guint8 to do this. With this value in hand, there are a couple of things we can do. First we can set the INFO column of the non-detailed view to show what sort of PDU it is - which is extremely helpful when looking at protocol traces. Second, we can also display this information in the dissection window.

Example 9.12. Enhancing the display.
static void
dissect_foo(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
{
    guint8 packet_type = tvb_get_guint8(tvb, 0);

    if (check_col(pinfo->cinfo, COL_PROTOCOL)) {
        col_set_str(pinfo->cinfo, COL_PROTOCOL, "FOO");
    }
    /* Clear out stuff in the info column */
    if (check_col(pinfo->cinfo, COL_INFO)) {
        col_clear(pinfo->cinfo, COL_INFO);
    }
    if (check_col(pinfo->cinfo, COL_INFO)) {
        col_add_fstr(pinfo->cinfo, COL_INFO, "Type %s",
            val_to_str(packet_type, packettypenames, "Unknown (0x%02x)"));
    }
    if (tree) { /* we are being asked for details */
        proto_item *ti = NULL;
        proto_tree *foo_tree = NULL;
        gint offset = 0;

        ti = proto_tree_add_item(tree, proto_foo, tvb, 0, -1, FALSE);
        proto_item_append_text(ti, ", Type %s",
            val_to_str(packet_type, packettypenames, "Unknown (0x%02x)"));
        foo_tree = proto_item_add_subtree(ti, ett_foo);
        proto_tree_add_item(foo_tree, hf_foo_pdu_type, tvb, offset, 1, FALSE);
        offset += 1;
    }
}

So here, after grabbing the value of the first 8 bits, we use it with one of the built-in utility routines val_to_str, to lookup the value. If the value isn't found we provide a fallback which just prints the value in hex. We use this twice, once in the INFO field of the columns - if it's displayed, and similarly we append this data to the base of our dissecting tree.
http://www.linuxtopia.org/online_books/network_security/wireshark_development_guide/wireshark_ChDissectAdd.html
This guide shows you how to perform common scenarios by using the Azure Table storage service. The samples are written in Python and use the Python Azure Storage package. The covered scenarios include creating and deleting a table, in addition to inserting and querying entities in a table.: You can use the Table service to store and query huge sets of structured, non-relational data, and your tables will scale as demand increases. The Table service contains the following components:.. [AZURE.NOTE] If you need to install Python or the Python Azure package, see the Python installation guide. The TableService object lets you work with table services. The following code creates a TableService object. Add the code near the top of any Python file in which you wish to programmatically access Azure Storage: from azure.storage.table import TableService, Entity The following code creates a TableService object by PartitionKey and RowKey. These are the unique identifiers of your entities. You can query these values much faster than you can query your other properties. The system uses PartitionKey to automatically distribute the table entities over many storage nodes. Entities that have the same PartitionKey are stored on the same node. RowKey is the unique ID of the entity within the partition that does not exist, then the update operation will fail. If you want to store an entity regardless of whether it PartitionKey and RowKey. task = table_service.get_entity('tasktable', 'tasksSeattle', '1') print(task.description) print(task.priority) This example finds all tasks in Seattle based that you want to bring over to the client. The query in the following code returns only the descriptions of entities in the table. [AZURE.NOTE] The following snippet works only against the cloud storage service. This is not supported by the storage emulator. 
tasks = table_service.query_entities('tasktable', "PartitionKey eq 'tasksSeattle'", 'description')
for task in tasks:
    print(task.description)

You can delete an entity by using its partition and row key.

table_service.delete_entity('tasktable', 'tasksSeattle', '1')

The following code deletes a table from a storage account.

table_service.delete_table('tasktable')

Now that you have learned the basics of Table storage, follow these links to learn about more complex storage tasks: For more information, see also the Python Developer Center.
https://azure.microsoft.com/en-us/documentation/articles/storage-python-how-to-use-table-storage/
How to Reverse a number in C++

In this tutorial, we will learn how to reverse a number in C++. Reversing a number means reversing the position of all its digits, storing the digits in reverse order. For example, the reverse of 254 is 452. We can reverse a number in C++ using a loop and arithmetic operators. In this program, we will get a number as input from the user and will reverse that number. Let's see the explanation of the program over here. These are some simple steps to code the program.

Program Explanation:

- Firstly, we will check whether the number is greater than zero.
- Then we will find the remainder of the given number divided by 10, which gives the last digit of the number.
- After that, we will place the last digit in the first position.
- Again we divide the number by 10 and store the value in a variable.
- Now we will get a new number which will have all digits except for the last digit.
- We will continue through the while loop again and again, finding the remainder and the last digit of the number. The same process is repeated as long as the condition is true.

For any program, try to reduce time complexity and space complexity. Time complexity is O(log(n)), where n is the input number.

You may also like: How to find the sum of digits in C++; Maximum subarray sum of a given array using c++

Reverse a number in C++

#include<iostream>
using namespace std;

//function to reverse digits of a number
int reversenumber(int num)
{
    int rev_no = 0;
    while(num > 0)
    {
        rev_no = rev_no*10 + num%10;
        num = num/10;
    }
    return rev_no;
}

int main()
{
    int n;
    cin >> n;
    // print number
    cout << "number" << endl;
    cout << n << endl;
    //print reverse of number
    cout << reversenumber(n) << endl;
    return 0;
}

OUTPUT EXAMPLE:

INPUT: 12345
OUTPUT: 54321

INPUT: 762419
OUTPUT: 914267
https://www.codespeedy.com/reverse-a-number-in-cpp/
Module to work with bencoded strings.

PyBencoder - your bencoded strings module

What is a Bencoded String?

Bencode (pronounced like B encode) is the encoding used by the peer-to-peer file sharing system BitTorrent for storing and transmitting loosely structured data. For more info on bencoding check out.

It provides:
- decoding of the different bencoded elements
- encoding of the allowed types (byte strings, integers, lists, and dictionaries).

Requirements

Requires Python 3 or later

Installation

python setup.py install

To run test suite:

python setup.py test

Usage

Import the module

from pybencoder.bencoder import PyBencoder

Encoding

Encoding is very easy to do. Just pass your data to the encode method. It will automagically call the right encoder for you.

ben = PyBencoder()
ben.encode(123)                  # encode an integer -> 'i123e'
ben.encode('123')                # encode a string -> '3:123'
ben.encode([1, 2, 3])            # encode a list -> 'li1ei2ei3ee'
ben.encode([1, 2, [4, 5]])       # encode a slightly more complex list -> 'li1ei2eli4ei5eee'
ben.encode({'one': 1, 'two': 2, 'three': 3})  # encode a dictionary -> 'd5:threei3e3:twoi2e3:onei1ee'

Decoding

Decoding is also easy to deal with. Just pass the bencoded string to the decode method. Two mentions:
- the first char of your bencoded string must be actually bencoded data, no garbage is allowed
- at the end of the bencoded string there might be garbage data; after the extraction, you can also retrieve it

ben = PyBencoder()
ben.decode('i123e')                       # decode an integer -> 123
ben.decode('i123esomeothergarbagedata')   # decode an integer with garbage data at the end
ben.get_left()                            # gets what's left -> 'someothergarbagedata'
ben.decode('3:123somegarbage')            # decode a 3 chars string
ben.decode('li1ei2eli4ei5eee')            # decode a list -> [1, 2, [4, 5]]
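The encoding rules are compact enough to sketch by hand. This is an illustration of the bencode format itself, not PyBencoder's implementation (note that the BitTorrent spec wants dictionary keys in sorted order, so the dictionary output below can differ in key order from the README example above):

```python
# Minimal bencode encoder sketch (format illustration only, not PyBencoder).
def bencode(value):
    if isinstance(value, bool):
        raise TypeError("bencode has no boolean type")
    if isinstance(value, int):
        return b'i%de' % value                      # integers: i<digits>e
    if isinstance(value, bytes):
        return b'%d:%s' % (len(value), value)       # byte strings: <len>:<bytes>
    if isinstance(value, str):
        return bencode(value.encode())
    if isinstance(value, list):
        return b'l' + b''.join(bencode(v) for v in value) + b'e'
    if isinstance(value, dict):
        items = sorted(value.items())               # spec: keys in sorted order
        return b'd' + b''.join(bencode(k) + bencode(v) for k, v in items) + b'e'
    raise TypeError("unsupported type: %r" % type(value))

print(bencode([1, 2, [4, 5]]))  # b'li1ei2eli4ei5eee'
```

The nested-list output matches the README's own example, which is a handy sanity check on the format rules.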
https://pypi.org/project/PyBencoder3/
Created on 2013-11-22 19:42 by larry, last changed 2014-03-10 01:35 by python-dev. This issue is now closed.

Attached is a patch exposing the old opcode_stack_effect() function to Python. The patch does the following:

* renames opcode_stack_effect() to PyCompile_OpcodeStackEffect()
* removes the "static" modifier from PyCompile_OpcodeStackEffect()
* changes PyCompile_OpcodeStackEffect()'s behavior so it returns a magic value on failure
* preserves existing behavior when compiling code and encountering an opcode/oparg pair that results in failure
* creates a new _opcode module
* exposes PyCompile_OpcodeStackEffect() as _opcode.stack_effect()
* tests _opcode module with new test__opcode.py
* imports _opcode.stack_effect() into opcode, exposing it publicly
* documents the function in dis (there is no documentation for opcode, and dis imports and exposes everything in opcode)

Whew! I think it's ready to go in. (Sponging around for a reviewer ;-)

FWIW, I agree with Antoine about making those PyCompile_ functions private (leading "_").

New patch, incorporating Antoine's comments. Thanks, Antoine!

+1 from me. A stack_effect attribute on dis.Instruction would be a nice bonus, but doesn't need to be part of this patch.

Hmm, looking at dis.py, I'm -1 on exposing this as a public opcode module API at this point, although I'm still a fan of exposing it as opcode._stack_effect to allow advanced users access (ala sys._getframe). I initially thought the required addition to dis.Instruction would just be:

@property
def stack_effect(self):
    return opcode.stack_effect(self.opcode, self.arg)

However, that doesn't necessarily work, since self.arg may be None.
That means stack_effect has to be at least:

def stack_effect(opcode, oparg=None):
    if oparg is None:
        if opcode >= HAVE_ARGUMENT:
            raise ValueError("This opcode needs an argument")
        oparg = 0
    return _opcode.stack_effect(opcode, oparg)

However, even that's not quite accurate, since if the previous opcode was EXTENDED_ARG, you should be adding *that* arg times 65536 to oparg in order to figure out the stack effect. It's that need to take the previous opcode into account to correctly determine the value for "oparg" that makes this a bit tricky. Although, I guess the latter concern would only apply to integration into the dis module - for the opcode module, it just needs to be documented that the calculation of the passed in oparg value should take EXTENDED_ARG into account.

stack_effect will never know about EXTENDED_ARG. Instead, you simply pass the already-extended arg as the second argument. Making the second argument support a default of None will be slightly inconvenient, but I'll do it if you think it's a big improvement. Considering how troublesome it was to recreate this information when I wrote my assembler, I definitely think this information should be exported out of the murky heart of Python.

Yeah, I argued myself into realising EXTENDED_ARG just needed to be mentioned in the function docs, but didn't go back and fix my opening paragraph. The fact dis uses an arg of None (rather than zero) to indicate "no argument" means I think the extra layer of wrapping in the pure Python module is needed prior to 3.4 rc1, but we can live without it for beta 1.

Attached is revision 3 of the patch. I'm gonna check it in pretty soon, so as to make beta (and feature freeze). I changed the API so the oparg is optional, and it raises if it gets one it shouldn't have or didn't get one when it should.
New changeset 5fe72b9ed48e by Larry Hastings in branch 'default':
Issue #19722: Added opcode.stack_effect(), which accurately

New changeset 4a801f8b7e2d by R David Murray in branch 'default':
whatsnew: dis.stack_effect (#19722).
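The function this issue added is available today as dis.stack_effect() (backed by the _opcode module created by the patch); a quick illustration of the final API:

```python
import dis

# Net stack effect of an argument-less opcode: POP_TOP removes
# one item from the stack.
print(dis.stack_effect(dis.opmap['POP_TOP']))  # -1

# With an oparg: UNPACK_SEQUENCE pops one sequence and pushes
# oparg items, so oparg=3 gives a net effect of +2.
print(dis.stack_effect(dis.opmap['UNPACK_SEQUENCE'], 3))  # 2
```

As Larry's last comment says, the oparg is optional, and the function raises ValueError if an oparg is supplied for an opcode that doesn't take one (or omitted for one that does).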
http://bugs.python.org/issue19722
Exchange organization for delivery to the recipient. You will have to assign Contoso.com as the SMTP e-mail address for the users in your organization. Accepted domains are configured for the Exchange organization and on computers that have the Edge Transport server role installed.

In the work pane, click the Accepted Domains tab. In the action pane, click New Accepted Domain…. The New Accepted Domain wizard appears. On the New Accepted Domain page, complete the following fields:

Name   Use this field to identify the accepted domain in the user interface. You can type any name that you want. We recommend that you select a meaningful name that helps you easily identify the purpose of this accepted domain. For example, you may want to use a name that identifies this as a subsidiary domain or as a hosted domain. You must use a unique name for each accepted domain.

Accepted Domain   Use this field to identify the SMTP namespace for which the Exchange organization will accept e-mail messages. You can use a wildcard character to accept messages for a domain and all its subdomains. For example, you can type *:
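For comparison, the same accepted domain can also be created from the Exchange Management Shell with the New-AcceptedDomain cmdlet. This is a sketch with illustrative name and domain values, not a command taken from the article; check the parameter set for your Exchange version:

```powershell
# Create an authoritative accepted domain for contoso.com.
New-AcceptedDomain -Name "Contoso" -DomainName "contoso.com" -DomainType Authoritative

# A wildcard DomainName also accepts mail for all subdomains of contoso.com.
New-AcceptedDomain -Name "Contoso subdomains" -DomainName "*.contoso.com" -DomainType Authoritative
```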
https://technet.microsoft.com/en-us/library/bb124907(v=exchg.80).aspx
One more chapter until From Zero, boys. So that's how many months/chapters until the end of the whale battle to see if they kept that in? If they do, does that all but confirm them doing the fourth arc? First for no new content no new threads >>150274894 3-4~ months? The White Whale won't be more than two chapters. >>150275067 The Wilhelm backstory is going to eat a whole chapter. I think that whole battle should've been directed better. I'll be interested to see how this adaption fares Arc 2 artist will start Arc 4 manga in summer. I bet. >>150275243 >filename What? The special Olympics? >>150275349 Nah. So if Gluttony eating your name and memories erases all your possessions does that mean they found her naked? >>150275443 Doubtful, Gluttony's power seems more "writer's convenience" than "strict rules" If there were no evidence of her existing, Ram should have two horns. It's actually possible that Ram does have two horns and lost them both in the attack. With her hair, the scars would be barely noticeable, and you don't just go around asking people about sensitive topics like that. >>150275550 I'm pretty sure Subaru addressed that with her and she always had one horn. Basically gluttony can't affect other people related to those who lost their names, in a physical sense. Only their memories of said person are gone. >>150275550 That's consistent though. The horn wasn't Rem's so it's unaffected. It's not really a regular erase all evidence thing and more like something close to it that is tied to someone's personal possessions. Not that it necessarily matters. Just like if I had an apple and someone ate it, if I were to suddenly forget all about the apple the best I could hope to figure out is that I at one point had an apple. But that won't make my apple come back. Okay so, drawbros. Crusch-sama caressing Ferri's ears. Get on it. >>150275850 But you're really unlikeable and annoying. That's the consensus and there aren't any drawfags here anyway.
You're better off going to the drawfag thread or a place you can request shit. After all the grief you've caused why would anyone want to appease you? >inb4 the truth is bullying >>150275775 The only weird one I can think of offhand is that her nightgown was lying around for them to change her to. >>150275550 We saw her as a baby though. >>150275972 Ram might've had spares. >>150275951 I'm just hoping some kind drawbro would do it. >>150275850 >Draws dozens of pictures of raping Petra >Demands drawings of ferris Fuck off cunt, draw it yourself >>150275975 Did we see young Ram after Gluttony? I don't recall that. Is this an Arc5/6 thing? >>150275972 It's possible that Gluttony has enough taste to at least leave his victims dressed. >>150276035 Seconding this. >>150276035 >>150276156 But Petra is fucking annoying so she deserves it. >>150276156 Nevermind. I misread that but this guy has the right idea >>150275775 That's why the letter got sent. It was Rem's words but Crusch or Subaru who sent the letter. So the paper remains but the words don't. Which I'm sure you can find ways this can get screwy. I think at some point Subaru brought up the horn to Ram to reinforce that Rem is the real deal and she would probably remember at least if she had two. >Every thread until Rem wakes up. >>150276344 Yeah, the appearance and ram having only one horn is what proves it to her >>150276324 >then he complains about bullies Oh fuck it. Let's not go in circles. Just fuck off to the draw thread, patreon, tumblr, deviantart, or even pixiv to request your art and give it a break. >>150276324 Oh look, you really, really must hate drawings of felix >>150276494 Maybe I do. >>150276513 Huh? I love drawings of Ferri. >>150276573 And yet you never post anything but what that one drawfag, who clearly isn't a regular, these threads have had in recent memory drew for you. >>150276573 You clearly don't or you wouldn't be such an insufferable cunt.
No one wants to draw for you because you are rude, greedy, unthankful, entitled and a colossal faggot and shitposter. >>150276732 I post all kinds of Ferri's. Nobby just happens to make awesome Crusch X Ferri stuff and he actually visits these threads sometimes so I respect him a lot. Why did Daphne get a design before Slothy witch? It might seem at a glance like she's dreaming of sweet vanilla stuff, but she's obviously plotting some deviant stuff at the end. >>150275443 So after a battle with Ley it looks like everyone got naked, shitfaced drunk, and passed out on the road? >>150276941 My guess is they moved her somewhere else >>150277127 Like they just cut out the part where she shoos Satella off, or something along those lines? >>150276941 We still don't know if Sloth will show up in the next volume and we still have plenty of time for a design preview. >>150277174 Yeah, from the chapter list she should've shown up in 10 if it followed the wn, so she was probably moved Are there seriously people who waifu the Witches? >>150277531 Echidna yeah, maybe minerva as well >>150277531 Only Echidna, really. There are some who would accept her contract in Subaru's shoes although a few just want to go full minmaxer. 7.4 billion fucking people and nobody draws Ferri kissing Crusch-sama, nobody colors the illustrations... Why is everyone so slothful? I have to learn to do everything myself. FERRI AND CRUSCH-SAMA WILL HAVE THE HAPPIEST ENDING >>150277780 Ever thought that it is entirely your fault? It seems crazy, I know, but you really are just that big of a faggot >>150277825 >Ever thought that it is entirely your fault? It's not.
>>150277550 >Echidna >someone who would never truly be capable of loving you >someone who would only ever have interest in you if you had a unique trait with significant exploitation potential >Minerva >someone whose focus is so fixated on her goals of healing that she would never show you any serious affections >someone so single-minded she's either oblivious, ignorant, or outright apathetic to the chaos she breeds >someone who would never settle, never slow, would leave you behind without even noticing >>150277889 Hey man, some people are crazy. Like Roswaal >haven't watched any anime in a while >watch Erased >get depressed >watch Your Name >get depressed >watch re zero thinking it will be lighthearted my face right now. Angel. >>150278067 Don't worry, it gets worse >>150277889 Realize that while some of the witches have tunnel vision, they're still human. Echidna displays a lot of emotion after the scene when Subaru rejects her contract when she thinks about the past and the other witches. Interesting little detail. Witch Beasts don't go after the Witch's scent out of hunger. They actually instinctively despise the Witch of Envy. >>150278067 If/when you finish episode 25 Check out the epilogue since the anime cut it out. It's criminal that it did. >>150278326 I'm still curious why the oni were targeted. Could it be something so simple as them being able to smell cultists? >>150278359 It would be hilarious if they decided to add it in the last BD just to troll everyone. >>150278426 Well, the current Arc seems like it involves a lot of lore, and there's some discussion of Witch Beasts in it. Not to mention they brought Meili along, and that the Oni sisters are featured here and the narrative hasn't been too subtle about comparing them to Witch Beasts. I honestly think it's going to turn out that the Oni actually were created by the Authority of Gluttony, which would make them natural enemies of the Witch Cult under ordinary circumstances.
>>150278326 Probably because they know Satella is responsible for killing their creator. >>150278446 Has any BD done something like that? The BD scheme is crazy enough in Japan, I think this would be where many otaku would draw the line. I could see an OVA though. >>150278865 In other series in the past? Yeah added scenes or short OVAS bundled with the BD are common enough. >>150278326 He really can't escape being gobbled up by Daphne's creations. >>150277531 She's intelligent, nihilistic, and with a wicked sense of humor. >>150275243 How far along is arc 2's manga? Are its translators lagging behind? Anyone have that manga compilation where Rem jumps in to save Subaru from the thugs? I forgot to save it. >>150279293 Arc 2 manga just finished at the point where Subaru hit Rem's horn with his sword, so it should be ending soon. >>150278666 So here's a thought. What if the authority of envy causes people to hate her? So witch beasts and rem both can detect the scent and hate it due to the effects of the authority. >>150279301 >>150279374 Thanks anon. >>150279362 That's not really how it works i think. Echidna doesn't like Satella anymore, but that's due to what happened in the past. None of the others hate her. Plus, envy sounds more like the ability to absorb things, or RbD. >>150279493 But she's generally hated by the world, and she went crazy somehow. Obviously the wc gain her scent somehow despite that she doesn't directly sponsor them I don't want to translate this anymore I think. I might go back to it, not sure. The translation is not that good because I haven't edited it at all. Here is what I have so far anyway if anyone wants to read it I am sorry for the confusing lines. I will try to fix them later >>150279493 Envy would suggest the desires for something someone else has, and would be entirely fitting for an Authority that takes what others have. 
Ever since "the event," however, she's been coined the "Jealous Witch," which implies she had something taken away from her or threatened. Given Flugel's supposed influence on the Witches, his apparent connections to Subaru, and Satella's attachment to him, it's probably safe to assume that Flugel was somehow the trigger for her snapping. Flugel=Subaru Satella=Emilia Rem takes Subaru away >>150279667 Being hated happened because she wreaked havoc around the world, and is notorious for having destroyed half the world. >>150279755 Wrong link I think Here is the right one >>150279788 >>150279755 What is this/where in the story is this/why are you tired of it? >>150279807 Wondering about this too, how far in was the translation anyway? >>150279788 Alright. Well thanks for translating what you have so far anyway. No one else has attempted these yet so it's a serious boon. >>150279807 It's from Tanpenshuu extras. >>150279807 Tanpenshuu volume 2, at the end of arc 2. >>150279919 I don't really want to say because people might give me shit for it, but you can probably guess why. >>150279919 Sorry forgot to answer how much this is. It is 1/3 of the chapter >>150280046 >>150280108 Not much Rem I guess? It's alright, the fact that you're even doing translations for us seems like a godsend Is Mr. Remonwater going to translate the chapters when Tappei resumes the WN? >>150280132 Yes I am stopping for now because not as much Rem as I expected >>150280403 He said he MIGHT do it if he gains interest again, but it's not certain. Like he keeps stating, his main motivation for translating is Rem, so maybe if something interesting happens with her he'll do it. >>150280403 I would like to, but chicken might be doing them too. I will have to join their team if I want to I think, though I do like doing this by myself Why does this happen every time? >>150280521 It's year 2020 when cock gets to Arc 6 >>150280521 There's no need to join their team. Hmph.
>>150280546 This is accurate >>150280521 As others have said, it's going to be a long time before anyone else gets to Arc 6. I don't think anyone will complain if you do it yourself. Rather, it'd be nice if you caught up so we could read where the story currently was rather than waiting years for it. >Code Geass: Fukkatsu no Lelouch >Legend of the Galactic Heroes 17 >One Punch Man 2 >Full Metal Panic 4 >FLCL 2 >Fate/stay night: Heaven's Feel >Evangelion 3.0 + 1.0 >Panty & Stocking 2 >Steins;Gate 0 >Ao no Exorcist: Kyoto Arc >No Game No Life Movie Reminder that this year is a magical year >>150280754 This. >>150280754 >>150280794 Apparently he's referring to the NEW chapters of arc 6. So in conjunction with doing arc 3, he'd be doing the new ones Tappei makes. >>150280546 I skimmed the chapter a bit more and noticed that there is actually more Rem later. I should really read these things before I decide to translate. I'll be finishing the chapter later some other days, but I am kind of tired of it right now. For now I will just leave that mediocre translation there >>150281069 Dammit anon don't do this to me you are taking me on a rollercoaster with your translation habits >>150281069 >I should really read these things before I decide to translate You should probably do that >>150281069 You tell us you won't but then you do I fucking love this >Chapter 59 >Previous stuff >For animefags looking to continue the story >>150281069 Not reading the whole thing before translating? Truly slothful. >>150281069 Sleep tight aniki >>150280681 That never happened. The OP is often a witch character oddly enough. An Emilia OP, hell more than 10 Emilia images per thread, is a unicorn though. >>150281295 >this never happened How about when catfag made a thread that everyone refused to use, and everyone moved onto the Otto thread while the catfag just kept giving himself (You)s in his containment thread?
>>150274894 Reminder that Ram remembers Barusu being the one to kill all the doggos Rem killed, and when everyone forgets Julius he seemingly soloed Betelgeuse. >>150281378 Correction. The context that's being presented never happened. There was never a waifuwar slant to it. >>150281480 Are you saying Otto isn't a waifu? Okay, does anyone know what the fuck Subaru is? Is he pride? I'm on episode 22 of the anime right now, and I plan on reading it, but these last episodes are all over the place. >>150281480 My bad, I thought you were replying to >>150280995 >>150281410 I can just imagine this convo happening >"So Barusu, who is this man next to you?" >"His name is Julius, he got affected by Gluttony too, he's a friend and the guy who helped me kill the Sloth Bishop" >"In my memory you were the one who seemingly had battled with said Bishop, and came out victorious. I see now, so in truth you are incompetent, everything is as it should be." >"YOU'RE IMMEDIATELY SLANDERING ME?!" >>150281518 Subaru is just a dude who happens to be Satella's husbando, it's heavily implied Al is Pride >>150281518 That's actually a plot point that has yet to be revealed. Watch the first season then come back to us so we don't spoil you with our speculations. Also SOON >>150281518 The anime didn't touch upon it yet, Petelgeuse will mention it on episode 24. The reason he keeps saying Subaru is pride is because there is no Bishop of pride currently, and Subaru reeks of witch, so he figures Subaru must be a candidate for the seat of pride. Although, his power may be Envy's. >>150281629 SPOON >>150281518 No. Pride is just an educated guess by him because any other option would be more ridiculous. We don't know what Subaru is yet or how Satella knew him. Al had lots of foreshadowing but the anime cut it. >>150281629 >>150281724 SOON >>150281557 Nee-sama is amazing.
>Subaru hesitantly asks if she remembers him doing all the housework and cooking for the mansion before she hired Freddie >Not even Ram can believe that shit >she smugly announces that she remembers being the one to do all the work >>150281759 ALL HAIL SUBARU Chapter 18 of Arc 6 seems to be when the Bishops show up. >>150281938 >Arc 6 >Bishops show up Did you mean to say arc 5? Cause there's no bishops in arc 6, at least not yet. >>150281938 Are you sure that's not just the Bete puzzle or do the other bishops actually show up in Subaru's Tower? >>150282005 >>150281974 No, I mean the dead ones show up in Subaru's dreamland shit and ramble on about some nonsense. >>150274990 This >>150282087 Well shit. Got any more details? >>150282087 Ah i see, yeah. Subaru hadn't really died in that one right? I think he just fainted. What did they ramble on about? Was it all incoherent, or inconsequential? >>150282181 I'm just skimming it with google translate. But yeah, he was just unconscious. Which is a really bad sign, seeing as how he used to need to die to go there. However, they do seem to be reacting to Subaru. Betelgeuse seems to be bitching about being ignored, and they both seem to think the crude dream form Subaru created for himself is a piece of shit. >>150282361 >Betelgeuse seems to be bitching about being ignored Convos when? is Betelgeuse a villain or what? >>150282361 >the crude dream form Subaru created for himself is a piece of shit Inb4 the "Subaru" who knew Satella was nothing short of a giant bastard who deserves to die. >>150282509 He wasn't in the past. Absorbing the witch genes of sloth, and accidentally killing the woman he loved, Fortuna due to the witch Pandora made him snap and go crazy. He was straight up evil and insane in arc 3. >>150282361 It's happening, isn't it?
>>150282361 Sounds like Subaru has some new shoulder angels >>150282361 >they both seem to think the crude dream form Subaru created for himself is a piece of shit That's kinda really freaking important. It could mean a lot of things, but i'm pretty sure the WN has segments where Subaru talks to Satella when he dies in arc 3, and that one seems to act as if he knows Satella. I wonder if they both assume Subaru made that version of himself for the dream realm, and don't suspect it may be something like a split personality/someone else. >>150282804 Actually wait, maybe i'm looking too far into it, and the "crude form" may just be that Subaru looks like shit in the dream realm. >>150282870 That's exactly it. Apparently, creating a human form isn't easy for him, and so it didn't even have a mouth. It was actually described as incomplete, rather than crude. >>150282613 >and accidentally killing the woman he loved, Fortuna Or at least so he thinks. There is almost no way Sirius isn't her. >>150282943 I imagine Subaru is going to gradually start looking more and more human until at some point he's fully conscious in the dream realm and can talk to Petelgeuse and Regulus. >>150282870 No you're right, he failed to give himself eyes and a mouth and shit mostly because it didn't occur that he might want/need those >>150283163 Subaru doing his own Tea party with all the bishops he killed when? Does Subaru have unseen hands? >>150283225 >Twist ending it was Echidna who killed all the Witches and absorbed them into her soul >>150283252 No he has PROVIDENCE >>150283252 Yes. But unlike Bete's he only has one and can phase through things. >>150283292 I think he may be able to use more, but it hurts like shit when he overdoes the use of the hand, and apparently he has a bad feeling that something is being taken from him when he uses it. >>150282690 >>150282755 If they ever anime this he'll already be at the point of no return.
>>150283694 We still don't know what the end game is, so i doubt it's the point of no return for Subaru's sanity. >>150283799 But he already has literal people talking inside his head calling him a piece of shit. >>150283868 For all we know, he may have sucked their souls and not just their authorities into himself, so he may not be going crazy. >>150283950 He has literal entities of life that are taking residence inside of his head, that's just not something anyone wants. >>150283868 I think the worst part is that Subaru hadn't even died. He saw them when he was only unconscious. If it keeps going this way, he'll start seeing them when he's awake. >>150284178 What if they teach him how to properly use his authorities? >[Rem: You’re wrong, Subaru-kun. Nee-sama doesn’t get frightened by Holou. However, just having these circumstances being different than not having an attack land on someone after striking…...Rem also feels the same way.] >[Subaru: You’re saying I have bad chemistry in battle!? You don’t have to talk with an assumption that I’d get knocked out!] >[Rem: It is Nee-sama’s and my duty to fight with means to protect people. Don’t worry, even if a Holou were to appear, I would put my life on the line and protect Subaru-kun.] >[Subaru: It was just hypothetical talk, and yet the topic has come to this.] Rem never stops with the criticisms, does she? Surprising that this is the same girl who is head over heels for him. >>150284399 That's part of her charm anon. >>150284212 I expect they might, but I'm not so sure that would be a good thing. >>150284399 She thinks he's amazing, not perfect or superhuman. >>150284399 Does Ram stop liking the clown after finding out he's a spooky ghost? >>150284212 He'd have to give up his "precious thing" in order to do that. >>150284473 I wish >[Subaru: Come to think of it, your guys’ facial expressions have been tough. Could it be…?] >[Ram: What do you mean by “Could it be”?
If you don’t include the subject in that, then I can’t understand you. Barusu, a person poor at asking questions, has missed his chance through his own foolishness. This conversation is over.] >[Subaru: If you make such a blunt response like that, then you already know the subject!] >Ram clicked her tongue at Subaru’s reference, although she was three times more bitter than usual. Rem, who quietly went in front of her sister to protect her, shook her head at Subaru. Ram is a bit childish isn't she? >>150285068 I wouldn't say childish but she does give that tsun sister-in-law vibe. >>150285068 >Ram is a bit childish isn't she? Aren't they both? That's kind of the thing the sisters have going on. They're the same, but different. Ram is more responsible and mature in some ways, but Rem is more responsible and mature in others. >>150285068 >Ram is a bit childish isn't she? She lives like the lady of the house while being a maid. I'm pretty sure she's fairly childish. >>150286356 >She lives like the lady of the house while being a maid. How fitting, as that's not the only way she lives like the lady of the house. >>150286957 DELETE THIS >>150286957 Reminder that the clown needs to die >>150286957 I assume this is based off the part where Roswaal had finished fighting Ram and he cast Algoa on her, after he became furious at her burning his gospel. >>150286957 Wow, right in front of Rem? That's bold. >>150287097 Wouldn't it just be the village? Hence the horn and her outfit. It happened. >>150287144 Oh right. >>150287381 Of course it did >That speech above her Someone had fun with this. >>150287381 Meme magic is real >>150287381 2016 the year of miracles >>150287381 This is almost great, but then >Lemm >>150287381 >>150287381 Is that Ram and Rem in the other car?
>>150287647 >Pink Rem >>150287381 Damn right it did >>150287381 >Lemm >>150287381 >>150287695 It is >>150278067 You haven't seen nothing yet >>150287865 Anyone have the one where Subaru is carrying Rem around and confusing Patrasche? >[Ram: Rem, Rem. Subaru said something again.] >[Rem: Nee-sama, Nee-sama. That’s something that makes him Subaru-kun.] >>150288511 >>150288622 That Subaru sure does say stuff Why is Ram so much better than Rem? >>150288853 Rem, get off the computer. >>150288853 t. Rem >>150288648 Thanks. I should probably save these things Remember the good old days of medieval age of consent? >>150289139 What's consent? >>150289210 Exactly. >>150287381 Source? >>150289278 How strong is Al's one good arm since he spends all day hanging around that? >>150289418 What if he lost his other one because he fapped too hard? >>150289466 >Priscilla caught him fapping and cut it off >>150281759 >filename His best friend wouldn't let him go dark lord. >>150289908 That was the friendship ring he was supposed to give before she presumably went up and died I'm guessing. >>150289987 >Getting cucked by a bird >>150289987 For some reason, I can only imagine this ending up as a comedic short story where the bird flies away with the ring and Subaru and Rem are frantically chasing after it, while rounding up anyone they can find to help catch the damned thing. Shenanigans ensue as they try ever more elaborate ways to catch the bird, as it narrowly avoids them all. It all ends in an argument over why he got the bird and with Subaru lamenting his poor choices and sulking about how his attempt at a memorable proposal ended in disaster, which Rem is quick to agree to. Though she also adds that it certainly was memorable, and they both suddenly burst out into laughter. Then the fucking bird lands next to them, and everyone gets tense as they resolve not to give up and finally manage to work together and catch the bastard. 
>>150290315 Send this to Tappei right now >>150290315 That's really silly and stupid I love it >>150290040 Tappei ran out of people to put in his convoluted cuck chain by that point in the story. >>150289987 That's a well trained bird, did he get his bro Otto to help him out with this? >>150291818 Probably Otto >>150291943 How is Otto such a bro? Even Rem had to be won over at great risk and a couple failures. >>150292077 He saved him from the Witch Cult. Though he was always amiable the canon timeline Otto would probably think twice before leaving Subaru for dead to a hungry monster, if only to repay the favor. >>150292298 But Mimi did that, and he's not following her around >>150292571 Well this was in his own words, in the Garf turns into a big furry faggot and kills everyone loop I think, so he likely attributed it to the whole team. Plus there was that whole Betelgeuse chase and Otto's decision to talk to the clown for business. I think Subaru acting familiar with him also made him gravitate towards the memeboy out of everyone else. >>150292800 He was so desperate for friends he put up with the memeing. Poor Otto. >>150283256 But then why would Satella want to die? >>150293175 The curse of immortality maybe? I don't know. It's just my little crack theory that Echidna took all of the witches in for herself or something. >>150293277 I'd guess with as many obtuse plots as the clown has that his teacher probably has plots just as obtuse if not moreso. >>150293488 Definitely now that she's out and about in the world. I fully expect her to blindside everyone at some point. >>150293488 If Echidna ever comes back it's gonna be a wilder ride >>150277772 Text wrapping exists for a reason reeeeee >>150294743 Do we have any idea what she's planning now that Step 1: stop being a ghost/holou has been achieved? >>150294878 Go around telling everyone she's a Holou and seeing how they react. >>150294900 Speaking of that. Isn't Holou supposed to be Hollow? I'm thinking it is.
Because the 'long o' or diphthong at the end of the word is the same diphthong in Hollow. Which is the same vowel (actually two) in words like no and go. That's why the OW (W is a consonant U (oo) ) is there and words like 'now' would have sounded like modern day 'no' before the great vowel shift fucked this language's shit up. The 'short o' in words like 'or' and 'gold' exists in English and is the first one in Hollow. If you don't speak the most common NA dialects anyway which replace most short o with an a vowel like 'ah' (Ha-Low). (Pro tip to any native English speaker learning a language. Knowing these things is how you avoid soundy Englishy as fuck when speaking your second language). If you stop by again and are reading this Lemon pls respond. >>150286356 Other way around. Rem a whore. >>150287381 >>150297512 Good morning catboi >>150297680 It's 2 pm. Good day to you! Why is this serialized under 3 different titles? in what order do i need to read? >>150298052 Just read Yashiki no Shuukan Hen (Incomplete) > Truth of Zero (Incomplete) >>150298052 There are 3 manga, for Arcs 1-3 >>150298052 It's monthly so they update multiple arcs at a time. Daisshou - Outo no Ichinichi Hen Dainishou - Yashiki no Shuukan Hen Daisanshou - Truth of Zero Really though the manga is mostly just useful for posting and finding scenes that weren't in the anime. >>150281724 >>150298579 >Entry ticket 7 for a special story written by Nagatsuki Teppei: >Re:If kara Hajimeru Isekai Seikatsu Is this what I think it is? >>150298579 >Re:If kara Hajimeru Isekai Seikatsu What's this? >>150299128 >>150298579 What is this? New material? >>150299223 Blu-Ray 7 "We Are Number One" but with both Emilia and Rem. FERRI IS AN ANGEL!!!! CRUSCH-SAMA IS A GODDESS!!!! I want to gently hold Ferri's head on my lap, fondle his ears and tell him that Crusch-sama will be fine. >>150298579 Is this right? I thought each BD came with a ticket you could exchange for an alternate cover LN of past volumes.
>>150299836 Ok apparently, it's that and the new written novel too? FERRI A BEST >>150299586 >>150299713 >>150299994 Stop spamming you fucking faggot. >>150300086 Ok, I'll produce some OC then. Ok, this is confusing. Every single BD seems to include a "new story written by Tappei", but the only actual short stories that came out of the BDs were the Emilia short story where she met Puck, and the new one from BD 5 about Rem and Ram's childhood and early start in the mansion. So my best guess is that it's not actually a short story, but the alternate cover LN volumes. Like BD 7 would allow you to get an alternate cover Vol 7 and so on. >>150300165 Or wait, maybe the ticket itself allows you to get a free copy of one of the short stories that are coming out from comic alive magazine, like pic related. More OC for my bros, now including the superior oni >>150300490 Why is she crying? Isn't that Roswaal's? >>150301103 She just likes it so much. However, Remu-chan... >>150301153 REEEEm >>150301179 I want to rape Priscilla. I want to hug Ferri. Reminder to report felixfag for spamming. >>150303637 Must be weird to get that sort of help from the person he just tried to murder due to a frenzy. Welp. Not sure how good of a job I did with this. It's right after Ram shoves her fingers down his throat, after he got his head crunched by Patrasche. >>150300165 >>150300356 Apparently i was entirely wrong. The tickets are for you to collect. Once you get all 9, you swap them for the Natsuki Rem Light novel. I had completely forgotten that you needed all 9 BDs for the Natsuki Rem book. >>150303781 I guess. Not too sure I know which sound effect I want to go with, so I'll just repost the original. >>150303798 >>150303850 Either one works. I guess Pat works a little better since she's patting his back. Tappei just linked to what seems to be another announcement about the Crossover. Greed is good.
Turns out Emilia doesn't just kick Subaru when he's being hugged by Shaula, she also pulls his hair. >>150306346 How the hell are you getting so much info out of freaking google translate? Do you know a bit of jap and you're using google to get a bit more context? >>150306384 To be fair Google Translate got better two weeks ago and if you separate sentences it can give a decent result, at least you can get what is being said now. >>150306346 >Love interest just "insert other girl doing something to him" and it was out of his control >Better beat the shit out of him I hate this trope so much. >>150306434 To be fair, Subaru hasn't been entirely faithful to her, claiming he loves her while fawning over a mysterious potato who just showed up after the Cult attack. Blaming Honda is a fairly rational concept when a second girl that nobody knows just shows up and is all over him. >>150306384 I do know a little, but just a little. In other news, Shaula got her name from her master. She wasn't born with it. I wonder if that's the case with all of the Bishops as well? Oh, and Flugel really does look nothing like Subaru. He even comments that Julius is a better match for the picture he was shown. >>150306900 We all know the star related names are a plot point. How they fit in we don't know, and that Petelgeuse is the odd one out, since his name is not exactly a star name due to starting with a P instead of a B. My guess is that someone is giving them these names, or Flugel left instructions on how to name new bishops/disciples, since as far as i'm aware he's long dead. Any drawbros? >>150306992 Oh, and this is another Arc 6 moment I hope to get translated soon. There really are a few moments where Subaru is staring at her face and thinking that it's pretty. Also, she rushed to go check on him when he woke up, but was trying to hide that. Subaru thought that was cute, but not much like her normal attitude and seems to find her recent behavior to be a little confusing.
>>150306900 I wonder if we'll actually get a stable loop even if Satella somehow knew him beforehand. Maybe he was isekai'd at a later point in his life and this time Satella took a younger Subaru and dropped him into the future, thus mostly erasing the fact that Subaru was in the past to anyone but Satella, who may exist out of time in some ways.

>>150308272 Please no. Stop the meme magic.

>>150308287 Well, Subaru doesn't resemble Flugel in the slightest. There's literally no resemblance.

>>150308530 You can't stop the meme magic.

I mean, that's pretty much:
>It's not like I was concerned about you or anything.
The age of tsundere Ram has begun.

>>150308272 I actually do think it will go somewhere dramatic and a little sad, based on arc 4 foreshadowing where she tells him he was seeing Rem in her and to cut it out. Nothing quite like the doujin scenario some anons are hoping for, though. I think calling her cute doesn't mean as much, however. Subaru was never shy about letting girls know that, and he once gave her a backhanded compliment that she was very cute on looks alone.

>>150301959 If you succeeded it wasn't rape and was just for her convenience.

Wow

>>150309731 Would this be too much for this Christian board?

>>150309776 What was it? I was sleeping and woke up to it deleted.

>>150307734 Yes. And no, I won't draw your fucking cat or greenwaifu. Get back to colorings, faggot.

>>150309989 >>150309999 Pleeeease. Please draw Crusch-sama caressing Ferri's ears.

>>150308272 You faggots need to stop with the whole pity sex shit, you're jinxing everything.

>>150309999 What about Otto and Ram? Make the memes come true.

>>150310006 You can clearly see his dick there, of course it's too much, you idiot.

>>150310039 Fuck. You. Get the trip already.

>>150310202 No, draw Crusch-sama caressing Ferri's ears.

>>150310109 Maybe, I guess? Right now I have several unfinished drawings, could you wait for a week or so?

I love memes.
>>150310245 I'd rather he draw Subaru caressing the strings of a violin.

>>150309999 Thy quads has spokenth!

Fucking bullies

>>150310407 Dude, just tone it down. It's like you're roleplaying with the way you talk, to the point it's SUPER fucking easy to know it's you. If you were here since episode 16, I don't think you were this easy to spot, or this repetitive. At least do something different instead of just repeating yourself every goddamn thread. If you're doing it on purpose to get a rise out of people, it's working, I guess, but geez.

>>150310450 I want Crusch-sama caressing Ferri's ears right about now.

>>150306429 It's weird, maybe one day Google Translate will actually be able to translate contextual languages like Japanese accurately.

>>150306429
>To be fair Google Translate got better two weeks ago
Tell me more

Hey drawbro, can you draw Subaru eating cat ears with Rem in a fine restaurant?

>>150310345 No problem. It'll be just in time for Summaryanon's return (maybe).

>>150310527 >>150310345 Fantastic. I'll be looking forward to it.

>>150310407 >>150310902
>tfw the sound of all the chains kills his boner

>>150303817
>Blue haired Subaru jr
Well, isn't that odd to look at. So would a half-demon retain their horn?

>>150310979 I see. I fucking see. This is war, isn't it?

>>150311214
>reusing old drawings instead of drawing a new one
Dude, I really don't want to crush your hopes and dreams, but that's pretty pathetic

>>150311253 Fuck you, it took me ages to find it

>>150311214 Fuck off with this roleplaying shit

>>150311336 Onilovers want to give up, huh? How pathetic.

>>150310450 This is honestly exactly the shit the mods are currently butthurt about and why they nuke threads. RP faggotry, spamming, blogging, low post quality, etc.

>>150311107 You haven't read the Natsuki Rem chapters? They're translated. Rigel has a horn, yes, only one though. Not sure about Spica, she's still a baby.

>>150311389 It's not the easiest to find if you don't know about it.
>>150311107 Here's a resource with the AU chapters if you want.

>Felines, especially domestic cats, are well known for having penile spines. Upon withdrawal of a cat's penis, the spines rake the walls of the female's vagina, which may cause ovulation.
Fucking Onilovers

So Subaru told her the truth about his phone. At least according to the manga version of that scene, up until Otto started asking questions. I wonder what else he told her. Video games? Just imagine her at first believing he was talking about real events and people when describing his anime and vidya.

This stupid sketch thing added to spam filter when

Emilia is still the best girl

Why did Gluttony target Rem specifically and not Crusch?

>>150312272 He had good taste.

>>150312272 Her memories and name were delicious in a tragically sadistic way. It's not like he left Crusch untouched either, as she was half-eaten and probably got away just in time before he could finish the job.

>>150312272 As much as it sounds like a meme answer, he probably decided Rem tasted better. Or he wasn't trying to mess with one of the candidates too much, since she was already basically dealt with.

>>150312258 You mean Echidna, best witch and best girl.

>>150312081 He did in the anime as well

>>150312403 My man

>>150312368 It's kind of confirmed. He's the gourmet after all, and the scene Lemon translated from arc 5 shows he wants to mock Subaru with her memories. There's also the idea that Rem sacrificed herself by meeting him head on. She was screwed anyway, but the way she went out bought Crusch time.

>>150312541 Yeah, he seems to think Subaru would also be worth eating and wants to piss him off as much as possible. I think it would be funny if Subaru tasted absolutely disgusting to him though.

Dark lord Subaru when?

Echidna did nothing wrong

What makes the memories taste good? Despair?

>>150312272 Rem named herself.

>>150312815 He would've known her name anyway, he ate a bunch of the randoms and they would probably know it.
It's more that she picked a fight with him, and that got his food boner going.

>>150312806 It's been a while and I might just reread for a refresher, but Ley pretty much runs his mouth about who he is and what he likes to do, so the answer is probably in the interludes already. I don't think there's any one reason beyond the things that interest him. Romance, despair, hope probably all interest him, at least in individual cases.

>>150312806 I think all sorts of things interest him, but whichever element happens to be stronger, I guess he finds it more appetizing.

>>150279225 She is fucking perfect. Even I do not understand why I like her so much.

Cuck clown.

That might be it.

Reminder that Felix loves the clown canon, too.

>>150313348 Fuck off

>>150312637 Would the turd goblins mindbreak if he ate Subaru's memories? Just imagine that. An archbishop of the Witch Cult realizing he just ate the memories of Satella's beloved.

>>150313449 I think the twist is going to be that Subaru can't be eaten to begin with. He's already immune to forgetting.

>>150313449 I'm pretty sure that, if nothing else, Satella would kill him if he ate Subaru's memories. Actually, that would be a really dick way to fuck things up for everybody. Ley eats Subaru, so Satella kills him, but that doesn't restore the lost memories to anyone, including Subaru. But since Ley is now dead, he isn't really at great risk of dying, and wouldn't remember that he can or needs to. Plus the other people there would try and keep him safe.

>>150313485 There's the theory that RbD will cause some kind of BSOD for the authority due to the dead timelines too.

>>150313604 Unless she has very specific conditions, I don't think she could attack like she did in arc 4. Ley is strong enough to avoid some unseen hands.

>>150311570 He is holding the gun with his left hand on the trigger while it's leaning on his right shoulder. You fucking suck at this, catfag.
>>150313669 The one condition we know about is telling people about RbD, or possibly someone finding out about it. Eating Subaru's memories would reveal RbD to Ley, which possibly makes him subject to attack.

>>150313669 Didn't Subaru run into Ley in arc 5? I believe a summary stated they ran into a building, and Julius recognized who he was and told everyone not to refer to each other by their true names before they spread out. Did Subaru not realize he was the bishop of Gluttony?

>>150313714 Tappei said Emilia was killed because she was Emilia, which gives the idea that she won't kill anyone else aside from her. I don't think finding out about the power without Subaru telling said person triggers Satella trying to squeeze Subaru's heart, though. Roswaal has some idea about his power, just not how it's triggered.

>>150313673 Also, guns don't exist in this world and Felix doesn't like killing people. But I guess turning him into some kind of monster bent on genocide makes sense in fagville.

>>150313809
>Tappei said Emilia was killed because she was Emilia, which gives the idea that she won't kill anyone else aside from her.
She killed everyone in Sanctuary because Subaru talked about it to Echidna.

>>150313871 I don't know how I forgot that one.

>>150313714 The conditions for the Sanctuary massacre were
>Subaru was in Echidna's dream
>Emilia was taking the trial
>he talked about RbD a fuck ton
He even speculated those three things together made it possible, but she can't always do this.

>>150313673 No, no, the right hand is on the trigger

>>150313919 What in the fuck am I looking at?

>>150313919 Huh, I guess this is why Tappei hasn't been working on the WN.

>>150313963 His thoughts also opened up the possibility that she doesn't travel back in time with him once RbD is used.

>>150313963 Yes, but what happens when some jackass goes poking around in Subaru's head? She bitch slapped Pete right the fuck outta there.

>>150313673 It's like this

>>150314081 Oh, that's right.
He's basically jumping into the shark tank.

>>150313809 But Satella went on a complete murder rampage in Sanctuary because Subaru told Echidna about RbD in her dream world, where Satella couldn't reach. Which is kinda dumb, since she could have just squeezed his heart as his physical body was just lying in the tomb, immobile. She didn't immediately kill him after possessing Emilia either, though, so maybe that was intentional.

>>150313995 My bad, I reversed it. Right hand on trigger, gun leaning on the left shoulder since it doesn't cover his face.

>>150314132 It's under his chin

>>150314041 He's been doing a ton of shit. Side story books, BD commentary, BD related stuff, light novel, crossover and so on. There's even some award between LN authors on the 24th of December, but that's more of a celebration.

>>150314132 That likely has to do with split personalities. Envy seems to do the mean rule enforcing and wanted to punish him, but considering she's very impulsive and emotional, she decided to absorb him into her rape dungeon at the end, Satella's scheme be damned.

>>150314177 Nope. The upper edge of the gun is above the lowest point of his chin. Also, I watched it to the end. It's basically leaning on the left side, but not where it should be. Hell, the entire perspective is FUBAR in that pic. Stop drawing, catfag. Or at least go to >>>/ic/ and read the sticky to get good.

>>150314132 Despite causing him a huge amount of grief by killing those close to him, she really doesn't seem to like killing him. When he committed suicide she flipped, even though RbD was a thing. It's probably a last resort for her.

>>150314357 Hmph, fine. I'm doing this with a mouse; when I get a tablet I'll really start to practice.

>>150314357 If you're a drawbro, draw Crusch-sama caressing Ferri's ears.

>>150313919 I haven't checked out Konosuba. How would Subaru do in that world?

>>150315153 If you like comedy it's pretty good, at least for me. Also, I'm not sure?
It works on MMO rules, but it doesn't take itself that seriously since it's a comedy, though the world does have a dark lord conquering the land.

>>150315153 Probably better, since it does work like a game. His stats would probably start a little higher than Kazuma's, besides Luck. That isn't to say that world isn't randomly dangerous, but I'd say Subaru is about as resourceful as Kazuma is.

>>150314421 It has become obvious that while "Envy" is a bit nuts, she does truly care about him. When Subaru raged at her for calling him "Subaru-kun" she felt hurt.

>>150315499 Well, "care about him" in this sense could just mean in a jealous, obsessive stalker way. Not all attention is positive. She clearly doesn't care that much for his well-being, since she doesn't mind trying to mindfuck him.

>>150315571 She's just trying to show her love

>>150315571 She is constantly watching him; it's not like RbD has a trigger of its own that activates when he dies or talks about it. For a power like RbD, which has truly helped Subaru out a lot, a condition that he cannot talk about it is not that extreme. It's an iron rule he has to obey. Yes, she is very twisted. But Envy doesn't usually try to fuck with his mind unless Subaru fucks something up. Or it's just the only way she knows how to express her love for him. At least Satella is not (that) twisted.

>>150306346 That's absolutely retarded. Shaula was just welcoming the master of the house back.

Well, isn't that nice?

>>150316923 I forget, did he suddenly hold back due to something Ram did, or did he get assaulted by pain from overdoing the invisible hands?

>>150316901 Well, Emilia claimed she was just trying to help Subaru detach himself from Shaula, who was clinging to his arm with full force and wouldn't let go. But that's not exactly a good reason to pull his hair.
>>150306346 Looks like Emilia became quite ENVIOUS

>>150317055 It was the feedback from using the hands that made him recoil a bit

>>150316923 Oni deserves this

>>150317265 Since you're reading it currently, I'd like to confirm one thing. Is it true that when Subaru uses the hands, he feels like "something" is being taken from him, so he avoids using it as much as possible, or going overboard on how much power he puts into it?

>>150317383 He avoids using it because something "precious" is being taken away from him, so yes.

So the unseen hand is literally just the power from Elfen Lied, right?

>>150317613 Basically, yes. But not everyone gets the same exact power. For example, when Subaru got his, it could phase through people or objects. We don't know what the difference with Sloth's is, though.

>>150316055 Why haven't Envy/Satella approached Subaru normally yet to speak to him? Or if they have no interest in that, why haven't they straight up kidnapped him to lock him in a rape dungeon?

>>150317716 She did approach him normally in arc 4, though. Right after he goes through the second trial.

>>150317716 Did that. Subaru killed himself because he didn't want to be imprisoned.

>>150317613 Kind of. Elfen Lied arms are more like tentacles and are razor sharp.

This world needs more FerriXCrusch-sama

>>150317869 Please get help

>>150317831 Why can't they allow Subaru to do as he pleases most of the time, and then just give them the dick every night, which I'm assuming is what they want?

>>150317920 Please learn good manners.

>>150317964 Because this is a heartwarming story

Ferri is beautiful.

>>150318243 Those hips aren't male

>>150318267 Angel hips

Ferri is cute. Ferri is beautiful.

>>150317869 >>150317969 >>150318243 >>150318473 Stop spamming you piece of shit.

>>150318510 Stop bullying you piece of shit.

>>150318542 Read this: >>150300086

>>150317273 You know that's Ram, right?

Even when I frequented extremely cancerous /vg/ threads I didn't have to do this much filtering.
I'd say just pick up a trip already, but we all know that you wouldn't be able to samefag.

>>150318589 I realized it the moment I posted.

>>150317964 One wants to die and has a death wish.

>>150318607 I never samefag. Grow a thicker skin, my bro.

>>150318607 Just report each of his posts and don't respond to him.

>>150317613 Not really. Vector arms cut through things. Pete's are always tangible but not visible, while Subaru's is incorporeal.

>>150317716 She does between some deaths in the novels, but he is unable to respond. Dunno why they cut those.

>>150318243 >>150318473 Locked in a basement for 9 years

>>150318744 Bullying on 4chan

>>150318744 not long enough

The more you bully, the more I want to spread the love

>>150318655 Felix is shit
Ha ha, you can't get mad, bro, it's just a funny joke. Get it, funny joke. It's so hilarious I'm just going to post that in every thread at least 10 times every time I'm awake. It's just so fucking funny; if you don't like it you're a bully. Hang on, wait, I've got another funny joke.
Crusch is shit
What a fucking laugh riot, right? So funny, witty and clever. If you disagree you're a bully. You're not a bully, are you? Right, bro?

>>150310345 Draw Wilhelm decapitating Felix while Crusch fires one of her sword lazers at the cat's flying head.

>>150318933 >>150318941 Stop it bros... I can't even tell who is more cancer right now. The catfag, or the people going out of their way to talk shit and give (you)s.

>>150318941 The Ram/Otto suggestion is better, best not to even acknowledge shit characters.

Theresia got essentially necromanced, right? A badly battered and dying Wilhelm holding her dead in his arms would be a heartwarming picture, though.

I don't like this. There's all this buildup and development with Emilia, and then out of fucking nowhere Rem starts taking the spotlight. Rem is nice and all, but she's not the kind-hearted person Emilia is, and she's not trying to make the world a better place.

>>150319119 >>150319123 Fucking bullies.
Looks like I haven't posted enough Ferri.

>>150319125
>Trying to make the world a better place
You do know the whole "equality" deal was just an excuse she got in order to work out her platform for the election, right? Her real wish is to get Roswaal to help unfreeze her village, if I remember right, since he promised to do so if she won.

>>150319125 Emilia is too much of a generic waifu, though. Her biggest character flaws are a superficial resemblance to the boogeywitch and naivete. As outsiders to that world it's remarkably easy to look past her appearance, and her social awkwardness is something she will literally grow out of.

Superior maid coming through.

Seeing the Felixfag post so much kinda reminded me of something. Does anyone happen to know if we got to see Felix's reaction to Crusch getting affected by "dragon AIDS"?

>>150319342 When does dragon AIDS even happen?

>>150319342 From the wiki.

>>150319367 Arc 5

>>150319367 I believe Subaru and Crusch go in Capella's direction after Julius makes everyone separate when they encounter Gluttony. Inside a building they find a little girl whom Crusch takes care of, while Subaru investigates ahead and finds some horribly disfigured, mosquito-looking things that are people Capella transformed. Crusch then gets stabbed by the little girl, who turns out to be Capella having changed her shape, and Subaru and Crusch get attacked, one of the mosquitos injecting some cursed blood into him. Then a dragon made from a bunch of transformed people shows up and tries to roast Capella, and Subaru falls unconscious. Anons call it dragon AIDS, not sure why. The blood makes those affected suffer enormous pain and become weaker. For Subaru, though, the parts that looked black due to the blood became stronger and more resistant, and he can apparently absorb some of it from Crusch, but a Lewes clone advises him not to drain too much.
The clone was one of the first ones made, who ran away from Sanctuary and gained a "free will" of its own. I believe it's the same one who helped in the demi-human vs human war.

Just watched the anime. Goddamn that ending. Remfags truly got btfo. Wish I'd followed this show when it was airing, the threads must have been glorious.

>>150319125 Rem has her own buildup and development too that leads up to her taking the spotlight in the source material. Imo it gives her sacrifices more meaning as you realize she's actually not just a weirdo with a crush, and there's foreshadowing to that too.

Ferri loves Crusch-sama and Crusch-sama loves Ferri. Last Ferri for a while. Good night bros.

>>150319518 Now you're just a secondary citing a flawed adaptation if you meme about it.

>>150319518 They cut out the proper ending; it ends before 2 interludes and an extra chapter in the LN, which would have ended on a more tragic note. The anime also skipped scenes that basically made Rem win, like Subaru confessing to her after the whale fight was over, when she faked dying from her wounds and got him to confess. It also never mentions how Al is from Earth just like Subaru.