text                  stringlengths   454 – 608k
url                   stringlengths   17 – 896
dump                  stringclasses   91 values
source                stringclasses   1 value
word_count            int64           101 – 114k
flesch_reading_ease   float64         50 – 104
    #include <tagUtils.h>

    void tag_shift_for_insert(
        GapIO *io,
        int    N,
        int    pos);

This function shifts or extends tags by a single base. Its purpose is to handle cases where we need to insert into a sequence. An edit at position pos will mean moving every tag to the right of this position one base rightwards. A tag that spans position pos will have its length increased by one. If N is positive it specifies the reading number to operate on; otherwise it specifies the contig number (negated).

NOTE: This function does not work correctly for complemented readings. It is planned to fix this problem by creating a new function that operates in a more intelligent fashion. To work around this problem, logic similar to the following needs to be used.

    /*
     * Adjust tags
     * NOTE: Must always traverse reading in reverse of original sense
     */
    if (complemented) {
        for (i = j = 0; i < gel_len; i++) {
            if (orig_seq[i] != padded_seq[j]) {
                tag_shift_for_insert(io, gel_num, length - j);
            } else
                j++;
        }
    } else {
        for (i = j = gel_len - 1; i >= 0; i--) {
            if (orig_seq[i] != padded_seq[j]) {
                tag_shift_for_insert(io, gel_num, j + 1);
            } else
                j--;
        }
    }

In the above example padded_seq is a padded copy of orig_seq. The function calls tag_shift_for_insert for each pad. Note that the order of the insertions is important and differs depending on whether the reading is complemented or not.
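A hypothetical illustration of that ordering (the positions p1 < p2 are placeholders, not values from the manual): if the padding pass added two pads to an uncomplemented reading, the loop above walks the reading from its right-hand end, so the rightmost pad is handled first:

    tag_shift_for_insert(io, gel_num, p2);  /* rightmost pad first */
    tag_shift_for_insert(io, gel_num, p1);  /* then the leftmost   */

For a complemented reading the walk runs from the left-hand end instead, so the same two pads would be handled in the opposite order, with the position argument expressed from the other end of the reading (the length - j term in the loop above).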
http://staden.sourceforge.net/scripting_manual/scripting_170.html
CC-MAIN-2014-35
refinedweb
221
64
Laboratory animals: rodent and bird verdict reversed

From ANIMAL PEOPLE, July/August 1994:

The U.S. Court of Appeals in late May struck down a 1992 federal court ruling that Congress meant the Animal Welfare Act to apply to rats, mice, and birds, exempted by the USDA since 1971. Declining to hear arguments, the court held that the Humane Society of the U.S. had no standing to bring the case because it could not prove it is harmed by the USDA policy in question. “We intend to petition the Appeals Court for a rehearing based on errors in the ruling,” said Martin Stephens, Humane Society of the U.S. vice president for laboratory animal programs. Stephens dismissed the precedential import of the verdict on standing, but Valerie Stanley of the Animal Legal Defense Fund, the lead attorney in the case, told the Chronicle of Higher Education that it means, in effect, that no animal protection organization may sue to protect laboratory animals.

The Michigan Court of Appeals ruled 3-0 on May 25 that workers who contracted herpes B via monkey bites while working for the International Research and Development Corp. may seek worker’s compensation, but may not sue for damages. One of the plaintiffs, Thomas McGeorge, died on June 20, 1989. The other, Scott Lennox, is still sick.

Canadian Council on Animal Care publications are notably less vitriolic since a recent change of CCAC leadership. One recent issue even debunked “the stereotype that animal rights advocates buy leather shoes and eat at Burger King between demonstrations,” citing a recent study by Harold Herzog of Western Carolina University that found “nothing to suggest that animal rights activists as a group are in any way psychiatrically disturbed or misanthropic.” The same issue said 2,115,006 animals were used by Canadian laboratories in 1992. Fish made up the greatest number, followed by mice.

Excavation for a new sewer line at Mt. McGregor State Prison in Wilton, New York, has turned up nearly 1,000 glass jars filled with the pickled remnants of fetal animals used to test tuberculosis drugs, buried and forgotten circa 1945. The find made headlines when a cub reporter misunderstood the word “fetal” to mean “aborted human remains.”

The Animal Alliance of Canada seeks letters urging Ontario premier Bob Rae to authorize the New Democratic Party caucus to pass a bill banning cosmetic testing on animals before the next election, in 1995. Rae and the NDP are not expected to be re-elected. Address Rae c/o Legislative Bldg. Room 281, Queens Park, Ontario, Canada M7A 1A1.

Forty top medical institutions surveyed by Citizens for Alternatives to Animal Labs Inc. of Long Island, New York, reported indicative differences in their use of cats for intubation practice. All 12 anesthesiology residency programs train residents in endotracheal intubation of newborns or infants, but none use animals, according to CAAL attorney Elinor Molbegott. All 16 pediatrics residency programs provide similar training; six use no animals, two rarely use animals, and six use animals routinely. Of seven emergency medical service programs, all provide the same kind of training; five use no animals, one rarely uses animals, and one does routinely. Of the five undergraduate medical schools, three do not provide training in intubating newborns and infants. The other two provide the training without animal use. Get details from Molbegott, 419 Latham Lane, East Williston, NY 11596.

Earth 2000, an 18-member high school group from Reading, Pennsylvania, won the American Anti-Vivisection Society’s first annual Young Activists Campaign Contest on June 9, worth $250, with activities including vegetarian meals for AIDS patients, lobbying efforts, and an anti-whaling demonstration in Washington D.C. Runners-up were the Humane Education and Living Project, of Deer Park, New York; Activists for a Healthy Future, in West Lafayette, Indiana; and the Grassroots Coalition for Environmental and Economic Justice, in Clarkesville, Maryland.

New York City is planning school curriculum revisions, to take effect next year. Letters suggesting the use of non-animal scientific study methods may be sent to School Chancellor Ramon C. Cortines, 110 Livingston St., Room 512, Brooklyn, NY 11201.

Of the 50 largest corporate users of animals in research and testing, 15 are clients of Burson-Marsteller, an international public relations firm notorious for whitewashing military dictatorships and controversial industries (including the fur trade, for a time, until the furriers couldn’t pay the BM bills).

The Visible Human Project expects to have both male and female cadavers online in tiny slices by October. The program, requiring use of special computers costing $50,000 each, will replace many dissection exercises in medical teaching and training.

A paper by Dr. Marlys Witte and colleagues at the University of Arizona charged in the June 8 edition of the Journal of the American Medical Association that a misreading of animal test data led to serious errors in a 1992 Science report by Dr. Robert Gallo of the National Cancer Institute––and that Science tried to cover up the evidence. Gallo postulated that a compound from soil bacteria might be used to stop the growth of Kaposi’s sarcoma, a purple skin cancer common in AIDS patients.

For a detailed list of gruesome University of California at San Francisco biomedical research projects that might be involved if UCSF is allowed to take over the Letterman Hospital research facility in the Presidio National Park, contact Sandy Barron at In Defense of Animals, 816 West Francisco Blvd., San Rafael, CA 94901.

The Scientists Center for Animal Welfare has moved to Golden Triangle Bldg. #1, 7833 Walker Dr., #340, Greenbelt, MD 20770; telephone 301-345-3500; fax 301-345-3503.

Friends of Animals published a detailed resume of bizarre vivisection projects funded by the March of Dimes in its summer 1994 newsletter, but wrongly listed the Cancer Fund of America, of Knoxville, Tennessee, as a cancer charity that does not fund animal-based research. In fact, the Cancer Fund of America has been in repeated trouble with regulatory authorities for alleged fraudulent accounting, and apparently funds little or no cancer research.

Contrary to an indication in the June issue of ANIMAL PEOPLE, the Michael Sargeant who buys dead cats from animal shelters has no association with Sargent-Welch biological supply, of Buffalo Grove, Illinois, according to San Bernardino Animal Control, of southern California, which was formerly one of Michael Sargeant’s suppliers. Two weeks after the World Society for Animal Protection exposed a Mexican cat theft ring that supplies cats for dissection to U.S. firms including (indirectly) Sargent-Welch, whose involvement was discovered by Boston Globe reporter Scott Allen, Michael Sargeant sought to buy dead cats from the Los Angeles Animal Regulation Commission.

He said he was based in Auburn, California, with facilities in Texas and elsewhere in southern California––and told San Bernardino officials he had a facility in Utah––but the only facility registered to his name on the current USDA list of Class B animal dealers is Sargeant’s Wholesale Biological in Loomis, California. A Robert Sargeant is listed at Ramona, California. If such facilities don’t handle live animals, however, they need not have Class B permits. There are no other Sargeants registered (by any spelling) in the southwest.
https://newspaper.animalpeopleforum.org/1994/07/01/laboratory-animals-rodent-and-bird-verdict-reversed/
CC-MAIN-2021-31
refinedweb
1,240
50.06
We upgraded the Clang code model backend to Clang 3.9. Since the ClangCodeModel plugin is not enabled by default, you have to make sure to enable it in Help > About Plugins (Qt Creator > About Plugins on macOS) to benefit from this upgrade.

In the animation above you see our new integration of Clang-Tidy and Clazy warnings into the diagnostic messages in the C++ editor. Since some checks can be time consuming, and other checks can produce false positives to varying degrees, they are not enabled by default. To enable them, go to Options > C++ > Code Model > Clang Code Model Warnings, create a copy of one of the presets, and choose the checks that you want to be performed.

If you enable the Clang code model, it is now also used for the informational tooltips on symbols in the editor. They now resolve auto to the actual type and show template parameters for template types.

The File System navigation pane offers breadcrumbs for the file path at the top, and we added actions for adding, removing, and renaming files to its context menu.

Model Editor

Thanks to Jochen, the original contributor, the model editor received a big update in this release. First of all, we removed the experimental state and made it available by default. We cleaned up the UI a bit, adding the zoom actions to the editor tool bar and moving the export actions to the File menu. You can export only selected elements or the whole diagram to images. The editor now supports text alignment and multi-line object names. A wider range of panes now supports drag & drop of items.

There have been many more improvements all over Qt Creator. Please have a look at our changelog for a more detailed overview.

Get Qt Creator 4.6

The opensource version is available on the Qt download page, and you find commercially licensed packages on the Qt Account Portal. Qt Creator 4.6 is also available as an update in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

Nice as always! From the changelog, I'm not able to find anything related to qml enums. Are these still unsupported? Off-topic: can you enable https for the blog?

Enums: See works fine?

Oh ok, so only a redirect is missing 🙂 works fine for me, as does.

Hi, from this gif video it is hard to understand where the Clang options are. Is there some good Youtube video?

“To enable them go to Options > C++ > Code Model > Clang Code Model Warnings, create a copy of one of the presets, and choose the checks that you want to be performed.”

Thanks, found it. Not ready for use though: in many files I get errors like the following:

    Warning. The code model could not parse an included file, which might lead to slow or incorrect code completion and highlighting, for example.
    type_traits:3070:7: error: expected '(' for function-style cast or type construction
    QLoggingCategory:1:10: note: in file included from Qt/5.10.1/gcc_64/include/QtCore/QLoggingCategory:1:
    qloggingcategory.h:43:10: note: in file included from Qt/5.10.1/gcc_64/include/QtCore/qloggingcategory.h:43:
    qglobal.h:45:12: note: in file included from Qt/5.10.1/gcc_64/include/QtCore/qglobal.h:45:

It also puts warnings on lines with the Q_OBJECT macro, suggests adding override to destructors, etc.

You can find some more information in the Qt Creator Manual:

Offline installer still doesn't work without internet. Output of "qt-creator-opensource-windows-x86_64-4.6.0.exe -v": Please add comments + vote there.

@crash: You can use the offline installers just fine without a network connection. There is currently a bug in the installers that causes the install to hang in the special case that you have an IP network connection, but no access to the Qt Account. If your network happens to be such that it has "internet", but not "Internet", simply disconnect it to install.

OK, but I use it in a VM… how to "simply disconnect it"? When will you fix this bug?

Those diagnostics and the related quick-fixes are awesome, but is there a way to see all issues identified in the whole project (as in), or to apply a certain quick-fix to the whole project? That would be really helpful!

Project-wide analysis is coming in 4.7:

Awesome! 😀

I played with clang-tidy and it looks really promising. Sadly, even on a new iMac Pro I'm finding it pretty laggy with only a few options checked. Are you making use of all available cores? The other major issue I have is that implementing fixes is painful. E.g. I'd like to apply fixes in categories for a given file; a case in point is "simplify by using auto more often". Right now if I hover over the light bulb and press Opt+Return I often get teleported to a different place in the file. Double or triple-clicking the light bulb and applying the fix works, but then I have to wait for the entire file to be rescanned before I can repeat the process on other similar issues. Unfortunately when I checked out: I don't see a reference to a JIRA case, so it's unclear how this will work in Qt Creator 4.7 (or when 4.7 is planned for release). What I'd love is to have clang-tidy issues listed under the "Issues" pane along the bottom (Cmd+1). clang-tidy warnings could be toggled on and off entirely using buttons along the top of the bottom pane. Then the user could check or uncheck groups of clang-tidy issues by category, then click a "Fix" button to fix them in bulk before the file (or entire project) is rescanned. What you have now is nice, but I'd prefer to turn on more checks and then go through existing code and do bulk fixes. Right now that just isn't possible; we can only see how much opportunity there is. One final suggestion: is there a way to modify how the beautify-like fixes (e.g. adding {'s around statements) are performed, to match the code style that we have set up under Preferences -> C++ -> Code Style?

The patch that you link to integrates the checks as analyzer runs in Debug mode, like the Clang static analyzer. Since the checks can take quite some time and processor resources, directly integrating them into the issues pane without manual "runs" doesn't sound feasible. "Opt+Return" applies to the text cursor location like the other shortcuts, so maybe that is why you perceive "teleportation"?

While I'm happy to see progress, I'm still waiting on: Which, aside from the still-irritating editor quoting behavior, is my biggest pain point.

Qt Creator is great and has made C++ development on Linux much easier. Unfortunately, I'm stuck using C++98. Is there a way to tell the Clang code model which standard to use, so that I don't get warnings pertaining to standards I can't use?

For generic projects, no. The cmake/qmake/qbs projects should already tell the Clang backend which C++ version they would use to build the project.

For generic projects, I have found that adding "-std=c++98" to the list of options seems to do the trick, though I can't guarantee that it works perfectly. I have found, however, that the GUI doesn't generally let me add the option and will highlight it as invalid. To add it I have typically had to manually modify my QtCreator.ini file (in ~/.config/QtProject on Linux).

Please keep that Clang code model up to date. Don't wait another 3 years to update it again.

Clang 3.9.0 is not 3 years old, and 3.9.1 even less than half that. In any case we tried to move to 4.0 after it got out but had too many issues with it, so we concentrated on improving the integration into Qt Creator in the first place.

Thank you. I was speaking figuratively. What I meant was that C++20 is probably 3 years away, and maybe Qt Creator could provide support for the new features even before the standard is released, through Clang.

The clang-tidy and clazy options are really nice to have as live code checks. They're too slow to enable more than 3 (after the 3rd the delay seems to become exponentially larger, not linear, which makes them unusable on large projects), but obviously 3 > 0. Not everything is rosy though. 4.6 breaks cmake projects' ability to do a "clean" or "rebuild" (QTCREATORBUG-20098). This does feel like it should have been a release blocker.

Sadly, I will have to agree. I wanted to use clang-tidy and clazy, but Qt Creator has become so slow as a result that I had to disable the ClangCodeModel plugin. 🙁

I don't quite understand. If you are not enabling the checks in the options (or if you disable them again after enabling them), they do not (should not) affect the performance of the Clang code model.

I have a big project based on qbs. Build time increased from 30 minutes to 1 hour 30 minutes. Qt Creator loads one core at 100% periodically and freezes. The launch time of the application under Qt Creator grew from 10 seconds to a minute. I had to go back to 4.5.2.

You should file such issues as (separate) bug reports, so we can properly follow up on them. The blog is not a good platform to track problems.

This release looks promising, but QTCREATORBUG-20125 is happening all the time and making it unusable for me.

Great work. These are on my wishlist:
* Static call graph tree widget like in Netbeans, Eclipse. No need for a node/wire diagram.
* Static signal/slot connection graph tree widget. New syntax only would suffice.
* Find Usages filter for constructors. It currently shows all type usages.
I use doxygen for call graphs; perhaps that could be integrated as an add-on, autoconfigured for a project.

I really like the integration of Clang-Tidy and Clazy warnings. Thanks a lot for this!

Any updates regarding TABS support? i.e. to have multiple files open in tabs.

We think tabs don't scale to the number of files typically open in an IDE. So no, we are not going to add tabs.

Can I use clang-tidy 6 checks by putting in a symlink to libclang 6? I want to use the new checks that come with clang 6.

Unfortunately not. The tidy/clazy checkers are statically compiled into our libclang, since it's not trivial to load these as plugins. The vanilla libclang has no tidy/clazy checks integrated.

Many thanks for this release! Great work as always! But I am a little disappointed that the focus is more on newer features than on fixing older bugs. I use Qt Creator not only for Qt-related projects but to import other "generic" projects with entirely different architectures and frameworks. Qt Creator is my one-stop environment for all development. But I find that for such projects the ClangCodeModel breaks apart completely. Even simple #if 0 … #endif scopes fail. And I do set up the .config, .files and .include files very carefully. Turning ClangCodeModel off, the internal model is just fine. The CMake support for embedded targets still has problems with the cross-compiling sysroot environment. And the warning about the project build folder outside of the source tree is STILL there, after all those convincing discussions in the bug thread to remove it. Other than that, I have found Qt Creator to become less stable in the latest 4.x.x releases. Crashing upon startup, session switching, adding existing files to a generic project. A LOT of that crashing, mostly on the Windows side though. I am happy for the releases and work put down, really, but I think the focus is a bit strange. There are too many bugs lingering in the basic functionality.

+1 to that notice

I love Qt Creator, thank you guys for working on it and having it open source. I have learned from its source code too, a bit <3

How can I disable some clazy warnings?

To customize the clazy warnings, from the blog post for the 4.6 beta release: "Go to Options > C++ > Code Model > Clang Code Model Warnings, create a copy of one of the presets, and choose the checks that you want to be performed." You can find some more information in the documentation:

The manual does not explain how to disable single warnings for clang-tidy or clazy. I would also love to know how to disable checks for 3rd-party macros (which can be identified through the header file location).

Well, good work, but still some questions. It seems that .clang-tidy files are not supported. Is this planned? It is very important to set readability options. // NOLINT also seems to have no effect. Why? When will 6.0 be supported? I really need the "Added the ability to suppress specific checks in NOLINT" feature.

I am experiencing random crashes in the Qt Creator environment. After the update (done a few hours ago) my Qt Creator seems to be very unstable. Anybody else experiencing the same issue?

I had one crash in about 3 days of heavy usage. It happened on an attempt to rename some local variable (usually done in place). Otherwise it looks pretty stable. There's if you're using QML.

Suddenly Output Argument coloring is working and that is not good at all – the foreground is all black, and I am on a dark color scheme. And it is not even possible to switch that off.

I had the same problem (also for "Function Declaration"). Both of these seem to be new options that were not present in 4.5, but also don't have a way to change the color in the GUI. I am using a custom color scheme and was able to manually edit the color scheme's .xml file (in "~/.config/QtProject/qtcreator/styles/filename.xml" on Linux) to change the "foreground" attribute of the "OutputArgument" and "Declaration" elements to fix the problem. If you are not using a custom color scheme, I imagine you should be able to use the "Copy…" button to make a copy and then edit that. I think this is a bug that should be fixed. Why have options for coloring certain elements if you cannot change that color through the GUI?

Yep. I am on a custom color scheme, and manually fixing the xml did help. Thanks!

After updating to Qt Creator 4.6.0 on Windows 10, I repeatedly get errors like "Cannot write file C:\Users\me\AppData\Roaming\QtProject\qtcreator\default.qws. Cannot remove the file to be replaced". The same message appears for other files in this directory, like qtversion.xml, devices.xml, qnx\qnxconfigurations.xml etc. If I check the given directory, these files are updated, so it appears the files can be written to/updated.

I also have experienced your problem. Only the Windows version has this problem.

I'm having so many problems with the "improved code model". So many that I wish it wasn't improved. You can follow along with

Your issues seem to relate to the QtQuick/Qml/JS code model, not the clang code model.

Yes, it seems that the patch for QML code was put into master, and 4.6 branched the same day the patch landed, and so it didn't get picked up by 4.6. Which is a big oversight. QML enums were in 5.10, which shipped December 7, and the patch landed Jan 17, so we have to wait until 4.7…

Is the only way to disable some warnings to run Creator from the command line with CLAZY_CHECKS exported? Is there no way from the GUI? For example, to disable "non-pod-global-static".

Clang plugin not being provided on Windows?

The ClangCodeModel plugin and its 3rd-party dependencies are part of the binary package (Windows, Linux, macOS). In case you didn't yet, the ClangCodeModel plugin needs to be enabled under Help->About Plugins…. If you enabled it but are still missing the functionality, please report back.

Thanks a lot for this new release. Keep up the great work.

Sorry to say, it really breaks my coding experience. Any way to revert to 4.5.2 via the online installer?

Thanks for the release; too bad that support for Android deployment/debugging for Qbs projects is not implemented.

FWIW, the javascript evaluator for the locator seems to use qscriptengine, not the qml engine, so should probably update the link?
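On the CLAZY_CHECKS question above: a minimal sketch of the environment-variable approach, assuming clazy's documented convention that a comma-separated list selects checks and a no- prefix disables an individual one (the level1 preset here is purely illustrative):

    # Hypothetical example: keep the level1 checks but disable
    # non-pod-global-static, then launch Qt Creator from this shell.
    export CLAZY_CHECKS="level1,no-non-pod-global-static"
    qtcreator &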
http://blog.qt.io/blog/2018/03/28/qt-creator-4-6-0-released/
CC-MAIN-2018-30
refinedweb
2,802
75
[Windows Management] - Tapping on the gray 'active alarm' banner at the top of the screen does not take the user to the active full-page alarm

Status: VERIFIED FIXED in Firefox 38
People: (Reporter: jmitchell, Assigned: kats)
Tracking: (Depends on 1 bug, {regression})
Points: ---
Firefox Tracking Flags: (blocking-b2g:2.2+, firefox36 wontfix, firefox37 wontfix, firefox38 fixed, b2g-v2.0 unaffected, b2g-v2.1 unaffected, b2g-v2.2 verified, b2g-master verified)
Attachments: (5 attachments, 1 obsolete attachment)

Description:

If you set an alarm to go off through the Clock app, and then hit the homescreen button when you receive the alarm notification, you can minimize the notification and return to the homescreen. The full-page alarm notification will now be represented as a gray banner at the top of the screen. If you click on this notification banner you are not taken back to the full-screen alarm notification.

Repro Steps:
1) Update a Flame to 20150209010211
2) Open the Clock app
3) Select the Timer tab
4) Set an alarm for 1 minute
5) *Optional* set sound to no-sound, and turn vibrate off (for the sanity of your coworkers)
6) Hit Start
7) When you receive the notification, hit the homescreen button
8) Tap the gray alarm banner at the top of the screen (if you miss the timer, which goes away after a few seconds, it will return after 90 seconds, or you can enlarge the alarm notification via the notification menu and re-minimize it using the homescreen button to prompt the banner again)

Actual: Tapping on the banner does not take the user to the full-screen alarm notification
Expected: Tapping on the banner will take the user to the full-screen alarm notification

Environmental Variables:
Device: Flame 3.0: 8/8

See attached: logcat, video

This issue does NOT repro in 2.2, 2.1, or 2.0.

Device: Flame 2.2 (KK - Nightly - Full-Flashed)
User Agent: Mozilla/5.0 (Mobile; rv:37.0) Gecko/37.0 Firefox/37.0

Device: Flame 2.1 (KK - Nightly - Full-Flashed)
Build ID: 20150209001212
Gaia: 4a14bb118d55f3d15293c2ff55b7f29f9b0bfcdb
Gecko: 6cbe28d0bb8

Device: Flame 2.0 (KK - Nightly - Full-Flashed)
Build ID: 20150209000204
Gaia: 2989f2b2bd12fcc0e9c017d2db766e76a55873b8
Gecko: ad3cf982b94d
Gonk: e7c90613521145db090dd24147afd5ceb5703190
Version: 32.0 (2.0)
Firmware Version: v18D-1
User Agent: Mozilla/5.0 (Mobile; rv:32.0) Gecko/32.0 Firefox/32

QA Contact: pcheng

Mozilla-inbound regression window:

Last Working Environmental Variables:
Gaia-Rev: 0db8a38f9fed18ae2abf5ef7e1b6e2a570b07e0e
Gecko-Rev:
Build-ID: 20141223163030

First Broken Environmental Variables:
Gaia-Rev: cb1dad4881533bff9f06d47e34983c7b10c04a8c
Gecko-Rev:
Build-ID: 20141223164836

Last Working gaia / First Broken gecko - Issue does NOT occur:
Gaia: 0db8a38f9fed18ae2abf5ef7e1b6e2a570b07e0e
Gecko:

First Broken gaia / Last Working gecko - Issue DOES occur:
Gaia: cb1dad4881533bff9f06d47e34983c7b10c04a8c
Gecko:

Gecko Pushlog:

See log: logcat_flame_1532.txt

The above window is incorrect. The following is the correct window.

mozilla-inbound regression window:

Last Working Environmental Variables:
Device: Flame
BuildID: 20150201170935
Gaia: 740c7c2330d08eb9298597e0455f53d4619bbc1a
Gecko: 231a8c61b49f
Version: 38.0a1 (3.0 Master)
Firmware Version: v18D-1
User Agent: Mozilla/5.0 (Mobile; rv:38.0) Gecko/38.0 Firefox/38.0

First Broken Environmental Variables:
Device: Flame
BuildID: 20150201174135
Gaia: 740c7c2330d08eb9298597e0455f53d4619bbc1a
Gecko: bcefc7d8d885
Version: 38.0a1 (3.0 Master)
Firmware Version: v18D-1
User Agent: Mozilla/5.0 (Mobile; rv:38.0) Gecko/38.0 Firefox/38.0

Gaia is the same, so it's a Gecko issue. Gecko pushlog:

Caused by Bug 950934.

QA Whiteboard: [QAnalyst-Triage?]
Flags: needinfo?(ktucker)

Botond, can you take a look at this please? Looks like the work done on Bug 950934 might have caused this to occur?

QA Whiteboard: [QAnalyst-Triage?] → [QAnalyst-Triage+]
Flags: needinfo?(ktucker) → needinfo?(botond)

I can do the initial investigation here as Botond is off today.

Assignee: nobody → bugmail.mozilla
Flags: needinfo?(botond)

I've been trying to catch this banner in WebIDE but I'm going around in circles and not getting anywhere. Marcus, do you know how I can (1) make this banner persistent so that it doesn't keep disappearing and I can debug it more easily, and (2) find this banner in WebIDE? I think I found the code relating to it, but the relevant .js files don't show up in the debugger view of either the system process or the clock process, and when I browse around in the inspector I can't find the relevant elements either.

Flags: needinfo?(m)

Ok, after digging around some more I figured out how this is handled. The showing/hiding of the banner is controlled by the timeouts in attention_toaster.js in the system process. The banner itself is shown in a 40px-high "attentionwindow" which actually loads the clock app's onring.html file inside a mozbrowser. The click listener is at [1]. What appears to be happening, though, is that the click listener is registered in the system process, and the click event is actually dispatched to the onring.html child process because that's what the APZ hit test returns. I haven't confirmed this last bit yet, but it seems likely and would explain why this bug is happening. This might also need to be fixed on the gaia side, because in general when you have an iframe and the user taps inside the iframe, the containing page should not be receiving the click event; it's only the innermost window that receives it.

[1]

Flags: needinfo?(m)

So I thought I would fix this by setting pointer-events:none on the iframe that holds the onring.html so that the APZ would exclude that from the hit test. Turns out that iframe already has pointer-events:none set on it, and in the new APZ codepaths we're not respecting it properly. We handle the in-process case fine, but this is a cross-process case (where pointer-events:none in the parent process encompasses the entire child process) and we don't handle that case. Working on a fix now.

Attachment #8564162 - Flags: review?(roc)
Attachment #8564162 - Flags: review?(botond)

This is to propagate the pointer-events:none from a parent process to a child process.

Attachment #8564164 - Flags: review?(roc)
Attachment #8564164 - Flags: review?(botond)

Since this will need uplifting to 2.2, I'd rather work around that bug than fix it properly right now.

Attachment #8564165 - Flags: review?(roc)

Whoops, uploaded the wrong version last time.

Attachment #8564164 - Attachment is obsolete: true
Attachment #8564164 - Flags: review?(roc)
Attachment #8564164 - Flags: review?(botond)
Attachment #8564168 - Flags: review?(roc)
Attachment #8564168 - Flags: review?(botond)

Comment on attachment 8564162 [details] [diff] [review]
Part 1 - convert the forceDispatchToContent flag to an enum (no functional change)

Review of attachment 8564162 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/apz/src/HitTestingTreeNode.h
@@ +81,5 @@
>   void SetHitTestData(const EventRegions& aRegions,
>                       const gfx::Matrix4x4& aTransform,
>                       const Maybe<nsIntRegion>& aClipRegion,
> +                     const EventRegionsOverride& aOverride);

I generally prefer passing enums by value.

@@ +125,5 @@
>    * because we may use the composition bounds of the layer if the clip is not
>    * present. This value is in L's ParentLayerPixels. */
>   Maybe<nsIntRegion> mClipRegion;
> + /* Indicates whether or not the event regions on this node needs to be

s/needs/need

::: layout/ipc/RenderFrameParent.cpp
@@ +636,5 @@
>   int32_t appUnitsPerDevPixel = mFrame->PresContext()->AppUnitsPerDevPixel();
>   nsIntRect visibleRect = GetVisibleRect().ToNearestPixels(appUnitsPerDevPixel);
>   visibleRect += aContainerParameters.mOffset;
>   nsRefPtr<Layer> layer = mRemoteFrame->BuildLayer(aBuilder, mFrame, aManager, visibleRect, this, aContainerParameters);
> + layer->AsContainerLayer()->SetEventRegionsOverride(mEventRegionsOverride);

RenderFrameParent::BuildLayer has some nullptr returns, so I think we should keep the null check.

Attachment #8564162 - Flags: review?(botond) → review+

Comment on attachment 8564168 [details] [diff] [review]
Part 2 - Add a flag to force an empty hit region

Review of attachment 8564168 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/layers/Layers.h
@@ +921,5 @@
> + * as empty. Similarly, if there is a ForceDispatchToContent override then
> + * the dispatch-to-content region must be treated as encompassing the entire
> + * hit region, and therefore we must consult the content thread before
> + * initiating a gesture. (If both flags are set, ForceEmptyHitRegion takes
> + * priority.)

?

::: layout/ipc/RenderFrameParent.cpp
@@ +625,5 @@
> {
> +  if (aBuilder->IsBuildingLayerEventRegions()) {
> +    if (aBuilder->IsInsidePointerEventsNoneDoc() ||
> +        aFrame->StyleVisibility()->GetEffectivePointerEvents(aFrame) == NS_STYLE_POINTER_EVENTS_NONE) {
> +      mEventRegionsOverride = (EventRegionsOverride)(mEventRegionsOverride | EventRegionsOverride::ForceEmptyHitRegion);

This is an aside, but operators | and |= can be overloaded for enums:

    enum E { ... };
    E operator|(E a, E b) { return (E)((int)a | (int)b); }
    E& operator|=(E& a, E b) { a = a | b; return a; }

Attachment #8564168 - Flags: review?(botond) → review+

(In reply to Botond Ballo [:botond] from comment #14)
> ?

I originally wrote this using sequential enums but that ended up a little messier. Plus conceptually the two flags are orthogonal, and so the bitflags made more sense to me. And yes, future-proofing: we might need more such flags for other regions that get added.

Updated with review comments and pushed to try:

Apparently "None" is already taken on Linux (probably #define'd to something). New try push:

(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #17)
> Apparently "None" is already taken on Linux (probably #define'd to
> something).

By my reading of [1], "enum class", without any macros to wrap it, is fair game for 37, as long as you don't need to forward-declare it (which we don't, we just include LayersTypes.h). Any reason not to use it?

[1]

Unfortunately that doesn't actually solve the "None" problem, since None is #define'd to be 0L in some header that gets included into GLContextProviderGLX.cpp.

Also note that this will affect 2.2 now that bug 950934 has been uplifted to 2.2.

Blocks: parent-process-apz
blocking-b2g: 3.0? → 2.2?

(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #19)
> Unfortunately that doesn't actually solve the "None" problem, since None is
> #define'd to be 0L in some header that gets included into
> GLContextProviderGLX.cpp.

Ah, true. (Yuck!)

Status: NEW → RESOLVED
Last Resolved: 4 years ago
status-firefox38: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → 2.2 S6 (20feb)
blocking-b2g: 2.2? → 2.2+

Comment on attachment 8564162 [details] [diff] [review]
Part 1 - convert the forceDispatchToContent flag to an enum (no functional change)

NOTE: Please see to better understand the B2G approval process and landings.

[Approval Request Comment]
Bug caused by (feature/regressing bug #): bug 950934
User impact if declined: Setting pointer-events:none on an element in the parent process doesn't carry over to any child-process content contained inside it. This manifests as being unable to tap on notification banners at the top of the screen, such as the one for an alarm going off.
Testing completed: locally
Risk to taking this patch (and alternatives if risky): the change looks complicated but most of it is refactoring. It is a logical extension of the work done in bug 1125422, which was already uplifted.
String or UUID changes made by this patch: none

Attachment #8564162 - Flags: approval-mozilla-b2g37?

Component: Gaia::System::Window Mgmt → Layout
Product: Firefox OS → Core
Target Milestone: 2.2 S6 (20feb) → mozilla38
Version: unspecified → Trunk

Ktucker, can we please get this verified on today's 3.0 nightly? Thanks!

This issue is verified fixed on the Flame 3.0 master nightly user build. Tapping on the minimized alarm banner on top correctly brings the alarm screen back to full screen.

Device: Flame 3.0 (full flash, 319MB mem)
BuildID:

verifyme tag for 2.2 uplifting.

Status: RESOLVED → VERIFIED
QA Whiteboard: [QAnalyst-Triage+] → [QAnalyst-Triage?]
QA Whiteboard: [QAnalyst-Triage?] → [QAnalyst-Triage+]
Flags: needinfo?(ktucker)
status-firefox36: --- → wontfix
status-firefox37: --- → wontfix

I have verified this bug successfully on Flame 2.2.

Device Info:
Build ID: 20150225162504
Gaia Revision: e4bf968d5a7366e7bdc58f0fdba28b32e864bdf7
Gaia Date: 2015-02-25 18:39:43
Gecko Revision:
Gecko Version: 37.0
Device Name: flame
Firmware(Release): 4.4.2
Firmware(Incremental): eng.cltbld.20150225.200419
Firmware Date: Wed Feb 25 20:04:31 EST 2015
Bootloader: L1TC000118D0

According to comment #26, clearing the "verifyme" tag.
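For reference, a self-contained sketch of the flag-enum pattern suggested in the review above. The enumerator names mirror the ones discussed in this bug (NoOverride standing in for the "None" that is #define'd away on Linux); the main() usage is illustrative only:

    #include <cstdio>

    enum EventRegionsOverride {
      NoOverride             = 0,
      ForceDispatchToContent = 1 << 0,
      ForceEmptyHitRegion    = 1 << 1
    };

    // The overloads Botond suggested, written out for this enum.
    EventRegionsOverride operator|(EventRegionsOverride a, EventRegionsOverride b)
    {
      return (EventRegionsOverride)((int)a | (int)b);
    }

    EventRegionsOverride& operator|=(EventRegionsOverride& a, EventRegionsOverride b)
    {
      a = a | b;
      return a;
    }

    int main()
    {
      EventRegionsOverride flags = NoOverride;
      flags |= ForceEmptyHitRegion;                        // no casts needed now
      printf("%d\n", (flags & ForceEmptyHitRegion) != 0);  // prints 1
      return 0;
    }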
https://bugzilla.mozilla.org/show_bug.cgi?id=1131840
CC-MAIN-2019-22
refinedweb
1,968
57.47
The XBCD_Math package has been extended to include complex versions of all supported functions. These complex functions are easily accessed through the complex class available here. The files available include example applications for extended precision complex numeric data types. Note that this package contains a static library which is currently only available for Borland compilers.

Here is a brief example of a simple C++ file using the package.

    #include "xcomplex.h"
    #include <iostream>    // for cout and endl
    using namespace std;

    static xcomplex I(0,1);

    int main(void)
    {
        xcomplex x;

        cout << "I^I = " << pow(I,I) << endl;

        x = xcomplex(2,1);
        cout << "sin(2+I) = " << sin(x) << endl;

        x = (xfloat)1;
        cout << "Pi = " << 4*atan(x) << endl;
    }

The output from this file is:

    I^I = (+2.07879576350761908546955619834978770033877841631769608075E-0001, 0.00000000000000000000000000000000000000000000000000000000E+0000)
    sin(2+I) = (+1.403119250622040585019490859767712944070947553407072439774E+0000, -4.890562590412936735864545685485159211585108846785949657029E-0001)
    Pi = (3.141592653589793238462643383279502884197169399375105820975E+0000, 0.000000000000000000000000000000000000000000000000000000000E+0000)

Using xcomplex introduces additional real and complex numeric types into the programming environment and imposes additional burdens on the programmer. One of these burdens involves proper management of the various numeric types. Promotion of C/C++ primitive types (int, float, etc.) to xcomplex is not fully implemented, but casting is supported to simplify interfacing with typical numeric software. The preferred way to manage low precision numbers and constants is to cast them to the xfloat type, which will be automatically promoted to xcomplex as needed. (Part of the fun of converting real types to complex types is that complex numbers require more information, not just more precision, than real numbers.) The constructors for xcomplex do support promotions of low precision data types, so it is possible to bypass casting when variables are created. Just keep in mind that an expression such as

    z = 4*atan(1);

will evaluate to an ordinary float at low precision. If you want to establish and maintain extended precision, use:

    z = 4*atan((xfloat)1);

This, by the way, is the same type of problem that is found in mixing integer types with floats in standard C and C++. It is no more burdensome, but does demand undivided attention from the programmer.

The current version of the downloadable software includes static libraries for the math package and classes which were created under Borland C++. Hence, they are not usable with other platforms. As the software moves toward a releasable form, it will be compiled under other platforms, probably Microsoft VC++ and Linux. You can try things out with this zip archive.
http://www.crbond.com/complex_math.htm
crawl-003
refinedweb
403
53.71
Hi, again. I'm trying to read in data from a file that I've opened, so it is sitting in FILE *fp. Now I want to put it into a format that I can do things with (separate it into words). My question is: how can I do this dynamically? I found code (below) for reading a file into a string, but if I open a file longer than a certain length (in the example below, 128) then I will encounter problems; if I choose a huge fixed length then I am being very inefficient. With regard to my end goal, is there a way to either determine the length of the file and then size the string from that, or to not use strings at all, maybe lists or such?

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        FILE *fp;
        char str[128];

        if ((fp = fopen(argv[1], "r")) == NULL) {
            printf("Cannot open file.\n");
            exit(1);
        }

        while (!feof(fp)) {
            if (fgets(str, 126, fp))
                printf("%s", str);
        }

        fclose(fp);
        return 0;
    }

Much love, sd
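One common answer to the question above (a sketch, not from the original thread): measure the file with fseek()/ftell(), allocate exactly that much with malloc(), then read it in one go and split it up afterwards.

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        FILE *fp;
        char *buf;
        long size;

        if (argc < 2 || (fp = fopen(argv[1], "r")) == NULL) {
            printf("Cannot open file.\n");
            exit(1);
        }

        /* Measure the file: seek to the end, ask for the offset,
           then rewind for the actual read. */
        fseek(fp, 0L, SEEK_END);
        size = ftell(fp);
        rewind(fp);

        /* One extra byte for the terminating '\0'. */
        buf = malloc(size + 1);
        if (buf == NULL) {
            fclose(fp);
            exit(1);
        }

        size = fread(buf, 1, size, fp);  /* may read less in text mode */
        buf[size] = '\0';

        printf("%s", buf);
        /* buf now holds the whole file; strtok(buf, " \t\n") could be
           used to split it into words. */

        free(buf);
        fclose(fp);
        return 0;
    }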
https://www.daniweb.com/programming/software-development/threads/103935/dynamically-reading-plaintext-files-into-strings-or-arrays-or-whatever
CC-MAIN-2016-50
refinedweb
181
80.31
Python Panels - BETA¶ Welcome to the Comet Python Panel Beta! Comet Python Code Panels allow you to write custom visualizations the same way that you write all of your other scripts and using the Python modules that you know and love. When writing a Python Panel, you can use all of the standard data science Python modules, including: astropy, autograd, biopython, bokeh, comet_ml, matplotlib, nltk, numpy, pandas, pillow, scikit-image, scikit-learn, and scipy. If you would like to jump to the examples, they are here: Steps to creating a Custom Python Panel¶ Go to a Project or Experiment View. Click on "Add Panel" (either in the panel area on the Project View, or on the Panel Tab of the Experiment View). Click on + Create New That should display the following screen: The left-hand portion of the Panel editor contains tabs for: Code tab¶ Here you enter on the Python code for this panel. Every saved change creates a new version that you can always revert to at any time. Keyboard short cuts: Options tab¶ These are options that you can set for each instance of a Panel. They are useful for having items that can be varied without having to edit the code. For example, you could have an option for a title of a chart. Description tab¶ A description of the Panel and thumbnail of the Panel. Click the image to create or edit the thumbnail. We suggest putting author, and details about how the Panel works in the description. This is searchable text on the Panel Gallery. Query tab¶ You can provide a default query for your Panel. For example, perhaps your Panel only works for experiments with a particular hyperparameter value. You can define the query here. Of course, many Panels will work without setting a specific filter. You can override this when you add an instance of the Panel to a Project View. A Simple Python Panel¶ Enter the following code in the Code tab: from comet_ml import ui ui.display("My first Python Panel!") Click on the Run button to display the output in the Panel Preview area on the right-hand side. To display items on the Panel canvas, you need to import the comet_ml.ui library, and call ui.display(). See ui for addition details on comet_ml.ui.display(). Info Python Panels do not automatically update with new experiments nor upon receiving new logged metrics. This is by design, as your Python Panel may be expensive to compute, and you may not wish to have it automatically refresh whenever new data is received. The following sections provides details on accessing Comet-logged data via the API, details on the User Interface ui and end-to-end sample panels. API¶ To use comet_ml.API, first import it and make an instance of it: from comet_ml import API api = API() Then you will have access to the following methods. You can find additional documentation on the full Comet Python Panels API SDK. API Panel Methods¶ These methods provide functionality related to the current panel. API.get_panel_options()¶ Get the JSON Options data as a Python dictionary. Example: from comet_ml import API api = API() options = api.get_panel_options() The Panel Options can be edited when in the Options tab of the Panel Code editor, and set when you are adding a Panel instance to a View. API.get_panel_experiment_keys()¶ Get the experiment keys selected for this Panel. Example: from comet_ml import API api = API() experiment_keys = api.get_panel_experiment_keys() This will return a list containing all of the current experiment keys. 
Specifically: - If on a Project View, the experiment keys from the visible experiment table page - If a filter is set, all of the matching experiments - If a Query is set for this Panel, the matching experiments API.get_panel_experiments()¶ Get the APIExperiments selected for this Panel. Example: from comet_ml import API api = API() experiment = api.get_panel_experiments() Like the API.get_experiment_keys() method, but returns a list of APIExperiments rather than just their keys. You can find additional documentation at Python Panels APIExperiment. API.get_panel_project_id()¶ Get the project_id for this Panel. Example: from comet_ml import API api = API() project_id = api.get_panel_project_id() The project_id can be useful to retrieve other Project-level information. API.get_panel_project_name()¶ Get the project_name for this Panel. Example: from comet_ml import API api = API() project_name = api.get_panel_project_name() The Project name could be useful in creating report-like Panels. API.get_panel_workspace()¶ Get the workspace (name) for this Panel. Example: from comet_ml import API api = API() workspace = api.get_panel_workspace() The Workspace name could be useful in creating report-like Panels. API.get_panel_metrics_names()¶ Get the names of all metrics logged for all experiments in this Project. Example: from comet_ml import API, ui api = API() metrics_names = api.get_panel_metrics_names() metric_name = ui.dropdown("Select a metric name:", metrics_names) As shown, the metrics_names could be useful in creating a dropdown selection for plotting. Info Note that this list includes all metrics, and system metrics. If you wish, you could use only those names starting (or not starting) with "sys". ui¶ To use comet_ml.ui, you need only import it: from comet_ml import ui ui.display("Hello") ui contains the following methods: ui Display methods¶ - ui.display() - ui.display_figure() - ui.display_image() - ui.display_text() - ui.display_markdown() These methods are described below. ui.display()¶ The ui.display() method is used to visualize different kinds of Python objects on the Panel canvas area. Signature and Use: ui.display(*items, format=None, **kwargs) You can use ui.display() on one (or more) of any of the following types of Python items: PillowPython Images Library (PIL) images - HTML strings (including SVG images) - Matplotlib figures - Plotly plots - Pandas' Dataframes - Any object that has a _repr_*_method In addition, there are specialized display methods for text, markdown, and images represented as raw strings (as logged as image assets, for example). We'll describe each of those methods below. If you wish to send multiple items to the display area, pass them to ui.display() or call ui.display() repeatedly: from comet_ml import ui ui.display("hello") ui.display("world") # or ui.display("hello", "world") The format argument can be either "text", "markdown", "html" or None. The default is "html"/None. kwargs are optional, and varying depending on the type of item being displayed. 
For example, if the item is a pandas Dataframe, then you may also pass in these keyword arguments: - theme: the name of a color theme (see below) - font_size: name of the size, eg 'normal' - font_family: name of the font family - text_align: which side to align to - width: the width of table, eg, 'auto' - index: the index to highlight - even_color: color of even rows - even_bg_color: background color of even rows Theme names: 'yellow light', 'grey light', 'blue light', 'orange light', 'green light', 'red light', 'yellow dark', 'grey dark', 'blue dark', 'orange dark', 'green dark', 'red dark', or 'comet'. For more details on ui.display(), see the Python Panel Examples. ui.display_figure()¶ This method is used to display a matplotlib figure. ui.display_figure(plt) ui.display_figure(figure) Displays a matplotlib figure. Examples: from comet_ml import ui import matplotlib.pyplot as plt # plt commands here ui.display(plt) Or calling with the figure: from comet_ml import ui import matplotlib.pyplot as plt fig, ax = plt.subplots() # Figure commands here ui.display(fig) ui.display_image()¶ This method is used to display an image, either from a logged asset, or from a PIL Image. Example from logged asset: from comet_ml import API, ui api = API() for experiment in api.get_panel_experiments(): for json in experiment.get_asset_list("image"): if json["fileName"].endswith("jpg"): data = experiment.get_asset(json["assetId"]) ui.display_image(data, format="jpg") Displays image strings in the given format ("jpg", "gif", or "png"). Example from PIL Image: from comet_ml import ui from PIL import Image import random # Create a PIL Image image = Image.new("RGB", (500, 500)) # process image here ui.display(image) For more details, see the Python Panel Examples. ui.display_text()¶ This method is useful for displaying plain text. ui.display_text(text) Displays text that otherwise would have characters interpreted as HTML. ui.display_markdown()¶ This method is useful for displaying text formatted as markdown. ui.display_markdown(*text) Displays text as markdown, ignoring any indentation. ui Widget methods¶ This section describes so-called widgets. These are elements that have a GUI representation and trigger a change to re-run your code. This trigger can be caused by a widget's event firing (such as clicking a button) or (in the case of the input widget) the value and focus changes. This is best shown through an example. Consider the following Python code: from comet_ml import ui if ui.button("Click me!"): ui.display("You clicked the button") ui.display("Now waiting for you to click the button again") else: ui.display("Waiting for you to click the button") Running this code in the Python Code Panel editor looks like the following: First, note that the ui component is used directly by calling ui.button(). Also note that there is no explicit loop. However, whenever a widget is changed (or clicked) your code will run again, this time reflecting the change. For this example, the flow is: - The code is run. - The message "Waiting for you to click the button" is displayed. - You click the button. - The code runs again. - This time, the messages "You clicked the button" and "Now waiting for you to click the button again" are displayed. - If you click the button again, you go back to step 4, and the cycle repeats. All of the following widgets have this same effect. 
Therefore, you will need to be mindful when writing Python Code Panels that the values of ui components will change, ordering of the widgets matter, and that your code will run repeatedly. This has a number of implications, such as all of the code will run repeatedly (even long-running initialization code). However, there are techniques to handle such issues (see below). ui.dropdown()¶ The ui.dropdown() element does two things at once: - creates a dropdown (selection) list of options on the Panel canvas - returns the selected item Signature and Use: choice = ui.dropdown(label, options, index=0, format_func=None, key=None, on_change=None, args=None, kwargs=None, multiple=False, classes=None) Let's take a look at a specific example. from comet_ml import ui choice = ui.dropdown("Choose one:", ["A", "B", "C"]) ui.display("You picked", choice) When you first run this code, you will see: Note that this is very different from the usual manner of creating GUI elements. In this manner, there are no "callbacks" but merely the above code. By default, the dropdown has been shown on the screen and the default options (index=0) has been selected. The code continues, and so you see choice "A" already set as the choice. If you then select a different item, your code runs again, updating the GUI and the selected item: If you would like to separate code that should only run once (say, because it is expensive to compute) you can separate the code to run when the GUI is updated by placing it in a main function, like this: from comet_ml import ui # Code that is expensive to run: choices = ... def main(): # The fast GUI-based code: choice = ui.dropdown("Choose one:", choices) ui.display("You picked", choice) You may provide an parameter index as a numeric value representing the row to show as the initial choice. The format_func is a function that takes a row from options, and returns a string to be used in the dropdown selection list. This is useful if you would like to provide to options something other than what you would like displayed. For example, consider: from comet_ml import API, ui api = API() api_experiments = api.get_panel_experiments() api_experiment = ui.dropdown( "Choose an experiment by key:", api_experiments, format_func= lambda experiment: experiment.key ) In this example, the experiment's key is used in the dropdown list, but options is a list of APIExperiments. In this manner, you can pass in options in any format, but you should then provide a format_func for easy selection. Arguments: - label: (str) label for the dropdown list - options: (list) list of choices to choose from - index: (int or list, optional) use initial index for single choice, or use a list of strings if multiple choice - format_func: (function, optional) function that takes an option and returns a string to use in the option list - key: (str, optional) when generating dropdowns in a loop - this is useful to assign unique keys for the dropdown - on_change: (function, optional) function to run when an option is selected - args: (list or tuple, optional) positional args to send to on_change function - kwargs: (dict, optional) keyword args to send to on_change function - multiple: (bool, optional) if True, allow user to select multiple options - classes: (list, optional) list of CSS class names to attach to the dropdown ui.input()¶ The ui.input() widget is used in order to get textual input from the user. Pressing TAB or ENTER will trigger the script to run again with the new input value. 
Signature and Use: value = ui.input(label, value="", key=None, on_click=None, args=None, kwargs=None, classes=None) Arguments: - label: (str) textual description that preceeds the input area - value: (str, optional) default text in input area - input widget ui.checkbox()¶ The ui.checkbox() widget is used in order to get a binary choice from the user. Signature and Use: value = ui.checkbox(label, value=False, key=None, on_click=None, args=None, kwargs=None, classes=None) Arguments: - label: (str) textual description that follows the checkbox widget - value: (bool, optional) default value of checkbox - checkbox widget ui.button()¶ The ui.button() widget is used to trigger an action at a specific time. Signature and Use: if ui.button(label, key=None, on_click=None, args=None, kwargs=None, classes=None): # do action else: # wait for button to be pressed - label: (str) textual description that appears on the button - key: (str, optional) when generating in a loop, this is useful to assign unique keys for the button - button ui.progress()¶ The ui.progress() widget is used in order to show progress for long-running processes. Use: from comet_ml import ui import time def long_running_process(): start_time = time.time() while time.time() - start_time < 3: pass if ui.progress("Getting started", 0): # initialize panel, don't do anything yet! pass elif ui.progress("Processing 25% done...", 25): long_running_process() # first 25% elif ui.progress("Half-way done", 50): long_running_process() # second 25% elif ui.progress("Almost done!", 75): long_running_process() # running 3rd 25% elif ui.progress("Done!", 100): # clears screen long_running_process() # last 25% else: ui.display("Ok! What's next?") Arguments: - label: (str) label for the dropdown list - percent: (int) number between 1 and 100 - on_load: (function, optional) function to run - args: (list or tuple, optional) positional args to send to on_load function - kwargs: (dict, optional) keyword args to send to on_load function - classes: (list, optional) list of CSS class names to attach to the progress widget ui.columns()¶ The ui.columns() widget is used to break the current display area into a series of columns. Columns can be nested. Signature and Use: columns = ui.columns(items, classes=None, **styles) By default, any item on which you perform ui.display() goes to the top-level Panel area. However, if you would like, you can place displayed items into different columns using this widget. Examples: columns = ui.columns(3) for i, column in enumerate(columns): column.display("Column %s" % i) columns = ui.columns([1, 3, 1]) # middle column is thre times the width of first and third columns[0].display("Column one") columns[1].display("Column two is wide") columns[2].display("Column three") Arguments: - items: (int, or list of numbers) if an integer, then break the current area into even columns; if a list of numbers, then divide proportionally by the number's value divided by sum of numbers. - classes: (list, optional) list of CSS class names to attach to the columns - styles: (optional) dictionary of CSS items to apply to the columns ui Utility methods¶ These are the ui utility functions: - ui.get_theme_names() - ui.set_css(css_text) - ui.add_css(css_text) These are described below. ui.get_theme_names()¶ Get the names of color themes for displaying pandas' Dataframes. 
ui Utility methods

These are the ui utility functions:

- ui.get_theme_names()
- ui.set_css(css_text)
- ui.add_css(css_text)

These are described below.

ui.get_theme_names()

Get the names of the color themes available for displaying pandas Dataframes.

Example:

from comet_ml import ui

color_theme_names = ui.get_theme_names()
color_theme_name = ui.dropdown(
    "Select a theme:",
    color_theme_names
)

# create a dataframe here

ui.display(df, theme=color_theme_name)

As shown, color_theme_names can be used to build a dropdown selection for styling a displayed Dataframe.

ui.set_css()

This method allows you to set additional CSS for items for display.

Warning: this is experimental and may be removed in a future version.

ui.add_css()

This method allows you to add CSS for items for display.

Warning: this is experimental and may be removed in a future version.

Troubleshooting

If a panel is reporting an error, it could be caused by one of the following:

Using an unsupported browser. Currently, Python Panels are known to work in the following browsers:

- Chrome, version 71.0 and greater
- Firefox, version 70.0 and greater
- Edge, version 80 and greater

The panel has an error. If this is your panel, you can edit it to fix what is wrong. If it is another user's panel, you can contact them to let them know there is an issue.

Debugging

Note that using print() will display items in the Console area (bottom right-hand corner). This is very useful for debugging.

Technical Details

Python Panels also allow you to use many other Python support modules, including: asciitree, atomicwrites, attrs, beautifulsoup4, bleach, cloudpickle, cssselect, cycler, cytoolz, decorator, distlib, docutils, freesasa, future, glpk, html5lib, Jinja2, imageio, iniconfig, jedi, joblib, kiwisolver, libiconv, libxml, libxslt, libyaml, lxml, markdown, MarkupSafe, micropip, mne, more-itertools, mpmath, msgpack, networkx, nlopt, nose, numcodecs, optlang, packaging, parso, patsy, pluggy, py, Pygments, pyparsing, pyrtl, pytest, python-dateutil, python-sat, pytz, pywavelets, pyyaml, regex, retrying, setuptools, six, soupsieve, statsmodels, swiglpk, sympy, toolz, traits, typing-extensions, uncertainties, webencodings, xlrd, yt, zarr, and zlib.

Python Panels run in the browser rather than on a server. That has a number of implications. First and foremost, there is some overhead to load the libraries needed to execute Python code in the browser. Typically, this delay only happens when required, usually once per project or experiment view. Note also that Python Panels can use more memory than usual.

To create responsive Python Panels, it is suggested that you:

- limit the number of modules loaded
- limit the number of requests (e.g., api.get_...() calls)
- limit the amount of information displayed

The following Python modules are known to require noticeable load times:

- matplotlib (medium load time)
- pandas (large load time)
- Plotly Express (large load time, as it requires pandas)

Known Limitations

- Python Panels do not automatically update with new experiments, nor upon receiving new logged metrics. This is by design: your Python Panel may be expensive to compute, and you may not wish to have it automatically refresh whenever new data is received.
- Limited types of widgets: currently, there is only ui.dropdown(); what else would you like?
- Line numbers in Python tracebacks are off; see your web browser's Console for the actual line numbers.
- Pandas' DataFrame.style is not available.

References for Panel Resources

For more information on Panels, see also:
https://www.comet.ml/docs/python-sdk/python-panels/
24 March 2011 18:03 [Source: ICIS news]

VALENCIA, Spain (ICIS)--The European polyethylene terephthalate (PET) market is struggling to decipher a price direction for April and is waiting for clarification upstream, sources said on Thursday.

"Regarding April prices we are waiting to see what happens. I can't tell my customers whether the price will increase, decrease or roll over," a producer said. It added that it had expected a reduction in March, but the price went up by around €100/tonne ($141/tonne) to €1,570-1,590/tonne FD (free delivered).

Most other buyers and sellers agreed that freely negotiated March prices had indeed moved up by an average of €100/tonne, to around €1,600/tonne.

"Sentiment for It added that the disaster in "PX [in

Other players were counting on April PET increases of around €10-20/tonne ($14-28/tonne). This was based on their assumptions of where feedstocks PX and monoethylene glycol (MEG) were likely to land in April.

Ideas regarding MEG included decreases of €20-50/tonne because of a softer Asian market, alongside possible increases that would reflect tightness and the upcoming peak season for downstream PET. March MEG was settled at €1,130/tonne FD NWE (northwest Europe).

"We don't want to overheat the [PET] market. The weaker dollar has had a $30/tonne impact on raw materials and after the March increases I get the sense customers are feeling sore," a second PET producer said.

The PET market was still tight because of upstream shortages. The situation was changing as production issues were resolved while new and idled capacity was due on-stream. Due to improved PET availability, "I don't think we will have a big increase on the [April] PET price," according to the customer.

A second buyer agreed that prices were likely to remain stable in April and May before falling in June. "Unless something unexpected happens [European PET] prices can't go up because then [imports from

While a seller said it was hoping for a €10-20/tonne increase, which may result in a rollover, another customer said: "Honestly, I have no clue."

PET prices have been increasing since the end of August 2010, when the average was €1,075/tonne FD (free delivered).

($1 = €0.71)
Now that the application works, how do you deploy it? The good news is that in .NET there is no Registry to fuss with; you could, in fact, just copy the assembly to a new machine. For example, you can compile the program in Example 13-3 into an assembly named FileCopier.exe. You can then copy that file to a new machine and double-click it. Presto! It works. No muss, no fuss.

For larger commercial applications, this simple approach might not be enough, sweet as it is. Customers would like you to install the files in the appropriate directories, set up shortcuts, and so forth. Visual Studio provides extensive help for deployment. The process is to add a Setup and Deployment project to your application project. For example, assuming you are in the FileCopier project, choose Add Project, New Project from the File menu and choose Setup and Deployment Projects. You should see the dialog box shown in Figure 13-10.

You have a variety of choices here. For a Windows project such as this one, your choices include:

- Much like a Zip file, this compresses a number of small files into an easy-to-use (and easy-to-transport) package. This option can be combined with the others.
- If you have more than one project that uses files in common, this option helps you make intermediate merge modules. You can then integrate these modules into the other deployment projects.
- This creates a setup file that automatically installs your files and resources.
- Helps create one of the other types.
- Helps create an installer project that can be deployed automatically.
- Helps deploy a web-based project.

You would create a Cab Project first if you had many small ancillary files that had to be distributed with your application (for example, if you had .html files, .gif files, or other resources included with your program). To see how this works, use the menu choice File, Add Project, New Project and choose and name a Setup and Deployment Project, selecting CAB File. When you name the project (for example, FileCopierCabProject) and click OK, you'll see that the project has been added to your group (as shown in Figure 13-11).

Right-clicking the project brings up a context menu. Choose Add, and you have two choices: Project Output... and File.... The latter allows you to add any arbitrary file to the Cab. The former offers a menu of its own, as shown in Figure 13-12. Here you can choose to add sets of files to your Cab collection. The Primary output is the target assembly for the selected project. The other files are optional elements of the selected project that you might or might not want to distribute. In this case, select Primary output. The choice is reflected in the Solution Explorer, as shown in Figure 13-13.

You can now build this project, and the result is a .cab file (see the Visual Studio Output window to find out where the .cab was created). You can examine this file with WinZip, as shown in Figure 13-14. If you do not have WinZip, you can use the expand utility (-D lists the contents of a .cab file):

C:\...\FileCopierCabProject\Debug>expand -D FileCopierCabProject.CAB
Microsoft (R) File Expansion Utility Version 5.1.2600.0
Copyright (C) Microsoft Corp 1990-1999. All rights reserved.

filecopiercabproject.cab: OSDBF.OSD
filecopiercabproject.cab: FileCopier.exe

2 files total.

You see the executable file you expect, along with another file, Osd8c0.osd (the name of this file may vary). Opening this file reveals that it is an XML description of the .cab file itself, as shown in Example 13-5.
<?XML version="1.0" ENCODING='UTF-8'?>
<!DOCTYPE SOFTPKG SYSTEM "">
<?XML::namespace
<SOFTPKG>
  <TITLE> FileCopierCabProject </TITLE>
  <MSICD::NATIVECODE>
    <CODE NAME="FileCopier">
      <IMPLEMENTATION>
        <CODEBASE FILENAME="FileCopier.exe">
        </CODEBASE>
      </IMPLEMENTATION>
    </CODE>
  </MSICD::NATIVECODE>
</SOFTPKG>

To create a Setup package, add another project, choosing Setup Project. This project type is very flexible; it allows all of your setup options to be bundled in an MSI installation file. If you right-click the project and select Add, you see additional options in the pop-up menu. In addition to Project Output and File, you now find Merge Module and Component. As you did with the Cab project, use the Add option to add the Primary output to the Setup Project. Merge Modules are mix-and-match pieces that can later be added to a full Setup project. Component allows you to add .NET components that your distribution might need but which might not be on the target machine.

The user interface for customizing Setup consists of a split pane whose contents are determined by the View menu. Access the View menu by right-clicking the project itself, as shown in Figure 13-15. As you make selections from the View menu, the panes in the IDE change to reflect your choices and to offer you options.

For example, if you choose File System, the IDE opens a split-pane viewer, with a directory tree on the left and the details on the right. Clicking the Application Folder shows the file you've already added (the Primary output), as shown in Figure 13-16. You are free to add or delete files. Right-clicking in the detail window brings up a context menu, as shown in Figure 13-17. You can see there is great flexibility here to add precisely those files you want.

The folder into which your files will be loaded (the Application Folder) is determined by the Default Location. The Properties window for the Application Folder describes the Default Location as [ProgramFilesFolder]\[Manufacturer]\[Product Name]. ProgramFilesFolder refers to the program files folder on the target machine. The Manufacturer and the Product Name are properties of the project. If you click the Project and examine its properties, you see that the IDE has made some good guesses, as shown in Figure 13-18. You can easily modify these properties. For example, you can modify the Manufacturer property to change the folder in which the product will be stored under Program Files.

If you want the install program to create a shortcut on the user's desktop, you can right-click the Primary output file in the Application Folder, then create the shortcut and drag it to the User's Desktop, as shown in Figure 13-19.

You can add items to the My Documents folder on the user's machine. First, right-click on File System on Target Machine, then choose Add Special Folder, User's Personal Data Folder. You can then place items in the User's Personal Data Folder.

In addition to adding a shortcut to the desktop, you might want to create a folder within the Start Programs menu. To do so, click the User's Programs Menu folder, right-click in the right pane, and choose Add Folder. Within that folder, you can add the Primary output, either by dragging or by right-clicking and choosing Add.

In addition to the four folders provided for you (Application Folder, User's Desktop, User's Personal Data Folder, User's Programs Menu), there are a host of additional options. Right-click the File System on Target Machine folder to get the menu, as shown in Figure 13-20.
Here you can add folders for fonts, add items to the User's Favorites Folder, and so forth. Most of these are self-explanatory.

So far, you've looked only at the File System folders from the original View menu (pictured in Figure 13-15). Now we'll examine some other windows in the menu.

The Registry window (right-click on FileCopierSetupProject, and select Registry from the View menu) allows you to tell Setup to make adjustments to the user's Registry files, as shown in Figure 13-21. Click any folder in this list to edit the associated properties in the Properties window.

The File Types choice on the View menu allows you to associate application-specific file types on the user's machine. You can also set the action to take with these files.

The View, User Interface selection lets you take direct control over the text and graphics shown during each step of the Setup process. The workflow of Setup is shown as a tree, as in Figure 13-22. When you click a step in the process, the properties for that form are displayed. For example, clicking the Welcome form under Install, Start displays the properties shown in Figure 13-23. The properties offer you the opportunity to change the Banner Bitmap and the text displayed in the opening dialog box. You can add dialog boxes that Microsoft provides, or import your own dialog boxes into the process.

If the workflow does not provide sufficient control, you can choose the Custom Options choice from the View menu. You can also specify Launch conditions for the Setup process itself.

Once you've made all your choices and set all the options, choose Configuration Manager from the Build menu and make sure your Setup Project is included in the current configuration. Next, you can build the Setup project. The result is a single Setup file (FileCopierSetupProject.msi) that can be distributed to your customers.
TagIssue57Proposal27/Earlier

JAR July 2012, content moved away from TagIssue57Proposal27

Goals of this exposition:

- Start with first principles, not historical accident
- Do not assume the webarch party line or world view
- Say something general about architecture, not specific to RDF or URIs
- Explain why the problem seems to be peculiar to RDF

Background

See TagIssue57Proposal27/Background

Distinctions can be made at the "statement" or "command" level

Besides these four approaches, when the incompatibility seems to have to do with the meaning of identifiers, there is another way to distinguish which meaning applies. We can do it at the level of syntactic constructs that contain a contested identifier as a part, what one might call "operator applications" C[U], of which U is an identifier that is a direct part (operand) of the construct. ("Operator application" is meant generically: a statement, command, method call, element, attribute, etc.)

To make the distinction, we replace an application C[U] containing a contested identifier U with an operator application S1 where extension 1 is meant, or with S2 where extension 2 is meant. Since the identifier is fixed (that's the whole problem), we don't have S1 = C[U1] and S2 = C[U2], but rather S1 = C1[U] and S2 = C2[U]. We change something that's not the operand-position identifier.

Note: we only have to do this when the meaning of U is contested. Applications C[V] where V is uncontested can be left alone.

- Of course in RDF identifiers are URIs, but the approach is general.

A way out

It is operator-application semantics, not identifier semantics, that matters

So here is the key hypothesis. What matters, it is supposed, to parties preferring particular extensions to identifier meaning, is that their meaning be expressible using some construct that has their choice of identifier as a part; not that the identifier "identifies" anything in particular. The identifier meaning extension was created as a means to an end, and the end is (a) to have something that has the desired meaning, and (b) for that identifier to be used in that something. As long as S1 and S2 (expressing the distinct meanings) are in terms of U, i.e. S1 = C1[U] and S2 = C2[U] for some C1 and C2, and have the desired meanings, it doesn't matter what U "identifies".

Factor apparently incompatible extensions through projections

Based on this hypothesis we introduce a new identifier meaning extension (call it extension 3) giving "overloaded" meanings (this will be explained). Write the overloaded meaning of U (at the meta-level) as meaning3(U). By fiat (or by construction), meaning3 will have the property that there exist functions (call them "projection" functions) p1 and p2 satisfying p1(meaning3(U)) = meaning1(U) and p2(meaning3(U)) = meaning2(U), i.e. meaning1 = p1 o meaning3 and meaning2 = p2 o meaning3. (In fact this is all we need to assume about meaning3.)
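Written compactly as function composition (this is only a restatement of the two conditions just given, not an addition to the proposal):

\[
\mathrm{meaning}_1 = p_1 \circ \mathrm{meaning}_3,
\qquad
\mathrm{meaning}_2 = p_2 \circ \mathrm{meaning}_3,
\]

i.e. \(p_i(\mathrm{meaning}_3(U)) = \mathrm{meaning}_i(U)\) for \(i \in \{1, 2\}\) and every contested identifier \(U\).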
It may be possible to arrange for syntactic constructs P1[] and P2[] expressing p1(-) and p2(-). We would then write S1 = C[P1[U]] and S2 = C[P2[U]], i.e. C1[] = C[P1[]] and C2[] = C[P2[]].

Replace the original operator with operators specific to each extension

Writing P1[] and P2[] may be clumsy or even impossible in a given language. But we can usually introduce C1[] and C2[] as shorter ways to express c(p1(-)) and c(p2(-)), i.e. meaning(C1) = c o p1 and meaning(C2) = c o p2. Thus S1 = C1[U] and S2 = C2[U].

- e.g. in RDF, writing P1[U] would require use of a blank node, while C1 and C2 could just be new classes or properties. When defining C1 or C2, it will suffice to say that its meaning is the composition of c (the meaning of C[]) with p1 or p2.
- in RDF we might do this with an annotation on the property, e.g. C1 extension-1-related-to C, meaning that the meaning of C1 is the composition of the meaning of C with p1.

It doesn't matter what identifiers mean as long as the projections exist and work properly

Under this plan, what an identifier means doesn't matter as long as there exist functions p1 and p2 that allow the recovery of meanings 1 and 2 from their "overloaded meaning". In particular, if there exists a function p2 that maps meanings under extension 1 to meanings under extension 2, then we can take meaning3(U) = meaning1(U), and p1 = identity; and the overall arrangement will be parsimonious for those using extension 1. This condition is satisfied as long as extension 1 doesn't introduce any aliasing relationships (equations) between identifiers that don't also hold for extension 2 (and, in general, all other extensions).

- In RDF, this means users of an extension who are interested in interoperability shouldn't write U owl:sameAs U', which, otherwise, they might be tempted to do if meaning(U) = meaning(U') under the extension.

How this might play out in RDF

Contested vs. uncontested URIs

Whether the meaning of a URI is contested or not depends on the context in which it occurs, especially the media type. For example, we don't for the most part find incompatible extensions of URI meaning in HTML (at least not yet), but we do in RDF. There is nothing special about RDF regarding the above analysis; it just happens to be the outstanding present-day example of a language in which incompatible extensions of URI meaning are an issue.

Hash URI meaning is (relatively) uncontested in RDF; and in practice, it has turned out that hashless http: and https: URIs for which one doesn't get 2xx responses also have generally uncontested meaning in RDF, in spite of the absence of any governing specification, although some ambiguity remains in this case (especially regarding one's attitudes toward time and authority). The contested URIs include many, if not all, hashless https: and http: URIs for which one gets 2xx responses. (Some 2xx URIs are uncontested in the sense that their meaning happens to coincide under the various extensions: specifically, those for which retrieved representations seem to say that the URI refers to the generic resource whose representations are retrieved using the URI.)

Other kinds of URIs are not used much in RDF, except perhaps urn:lsid: in certain communities, so they can probably be considered uncontested. (URIs occurring in property position, or as the object of an rdf:type statement, must be predicates, so maybe we can arrange for these occurrences to sidestep the issue. In any case it feels like the problem is less severe in predicate positions.)
The adoption of the present proposal converts contested and uncontested URIs into overloaded and nonoverloaded URIs.

There are two dominant extensions, one of the "generic resource" variety (a URI refers to a generic resource whose representations are retrieved using the URI) and the "take at face value" extension (to find out what a URI identifies, read a representation retrieved using it; the representation will answer the question). But it is easy to imagine others. For example, it is not obvious that the "generic resource" theory is the best match to the way URIs work in HTML a@href, to their use in identifying RDF graphs, or to their use as XML namespace identifiers; and in the future some new community might want interoperability involving these other situations.

Parallel data properties

Consider first the easier case, that of data properties (those whose ranges include only RDF literal values). Suppose we want to express meanings of the form prop(meaning1(U), x) for various overloaded URIs U, where prop is some datatype property. We define a property URI :Prop1 whose meaning is prop o p1, and write

<U> :Prop1 x.

We have two ways to say that :Prop1 has the meaning prop o p1. One is to define a second URI, say :UrProp, as a name for prop, and write

:Prop1 :Xyz1 :UrProp.

to relate the two properties. Here :Xyz1 denotes a second-order inverse functional property, which would be defined as part of this proposal, that relates a property (on the right) to p1 composed with that property (on the left). (TBD: figure out what to call :Xyz1. HT: I think earlier we were calling these SubjLiteral or DocLiteral (depending on the extension).) The meaning of <V> :UrProp x where V is nonoverloaded would be clear.

The other way would be just to leave :UrProp unnamed, i.e.

:Prop1 :Xyz1 [].

in which case, of course, the intended application-level semantics of :Prop1 would need to be described in the documentation for :Prop1 (as opposed to being inherited from the documentation of :UrProp).

Now, whenever one sees

<U> :Prop1 x.

one can infer that there exists a y such that y = p1(meaning3(U)) and y is prop-related to x:

<U> :Proj1 [:UrProp x].

where, again, :Proj1, which denotes p1, would be defined as part of the proposal.

This is called "parallel properties" because in a diagram :Prop1 might be drawn parallel to :UrProp (although in this simplified case they seem to intersect at x). It is very important that the projections are functions. This enables "smushing", i.e. the ability to postulate a single object y simultaneously having many properties inferred from statements <U> :Prop1 x, <U> :Qrop1 w, <U> :Rrop1 v, and so on.

One meta-property :Xyzi is needed for each extension. Each extension could in principle have its own property parallel to :UrProp, although this seems unlikely in practice.

OWL-DL doesn't support second-order properties, so the relationship of :Prop1 to :UrProp (expressed using :Xyz1 above) would have to be expressed differently in OWL-DL, using a property chain axiom. (TBD)

Important: There is no a priori reason to suppose that the domain of the projections p1 and p2 is not the entire type rdf:Resource; nor is there any reason to suppose otherwise. Similarly for the ranges of p1 and p2, which furthermore do not need to be distinct from one another. But we'll take up the question of whether some amount of exclusion would be useful below.
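To make the data-property pattern concrete, here is a hypothetical Turtle sketch. The property names :dateOfPublication1 and :dateOfPublication are invented for illustration, standing in for :Prop1 and :UrProp respectively:

@prefix : <http://example.org/vocab#> .

# :dateOfPublication1 means p1 composed with :dateOfPublication
:dateOfPublication1 :Xyz1 :dateOfPublication .

# A statement made with the parallel property about an overloaded URI...
<http://example.org/some-document> :dateOfPublication1 "2012-07-01" .

# ...licenses the inference that the thing p1(meaning3(<...some-document>))
# carries the underlying property:
<http://example.org/some-document> :Proj1 [ :dateOfPublication "2012-07-01" ] .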
Parallel object properties

This approach generalizes to the case of object properties; we again need one :Xyzi meta-property per extension. This time :Prop1 involves not just the projection p1 but its inverse as well. (TBD: diagram)

The fact that object properties have two "operands" raises the possibility that different extensions might be involved in different operand positions (i.e. subject vs. object of the statement). If there are N extensions this would require N-squared second-order properties (N cases for the subject position and N cases for the object position) in order to sort out the various cases. (For RDF as currently used we ought to be able to get away with N=2, the extensions being generic resources and face-value.) (TBD: diagram)

Monotonicity

In simple situations, at least, this approach should support the goal of monotonicity, i.e. achieving a graceful migration path from non-interoperating RDF to interoperating RDF. People writing RDF and defining ontologies can start out assuming the extension of their choice, without annotating predicates, and everything will "just work" (TBD: explain and justify this claim). They can interpret (subject to some modest restrictions) their URIs as not overloaded, but rather as having semantics given by their favorite extension. Later, should interoperability via overloading become important to them, they can add :Xyzi annotations to their properties to document which extension they were assuming.

The only constraint needed to make the transition from a particular incompatible extension to overloading semantics work is to avoid depending on equations (owl:sameAs) that derive from assuming that some particular extension is in effect. One can still write owl:sameAs where interoperability is not a goal, but if these equations are scoped to noninteroperating partitions, and inessential in interoperation scenarios, then projections onto other extensions will still work. (There may be additional restrictions to observe regarding combination with uncontested URIs; I'm unable to sort out this situation clearly, but see below.)

Issue: Some projections are partial

Some projections won't have well-defined results for certain URIs, based on the natural rule defining the projection. For example, for poorly behaved (e.g. highly variable) 2xx URIs, it may be impossible to discern what the associated "generic resource" is. Or, for URIs from which no representation can be retrieved, or whose representations don't say what the URI means (contain no statements to the effect of "<U> has such-and-such a property" or "what U identifies is ..."), the "take at face value" rule doesn't specify a result.

Exactly what happens in these cases would be up to the definition of the particular projection function. Two possibilities: (1) there is fallback to some other semantics, such as another projection function; e.g. perhaps "face value" could fall back to either "generic resource" or "primary topic"; (2) the overloaded denotation may just fail to map under the projection, in which case a statement involving such a parallel property would just be false.

Issue: interoperation with nonoverloaded URIs

If prop has a URI, say :UrProp, and V is a nonoverloaded URI, one could just write

<V> :UrProp x.

to express prop(meaning(V), x), and forget about :Prop1, :Prop2, etc. altogether in this situation. However, this is annoying for a couple of reasons.

- A likely error would be using :Prop1 where :UrProp was required, or vice versa.
That is, people writing RDF, especially those who don't care about interoperability, may not be able to remember (or accept) that they have to choose the correct property depending on whether the subject URI is overloaded or not.

- This necessitates defining twice as many property URIs as one might like; or else commits one to using any given property URI either with overloaded URIs, or with nonoverloaded ones, exclusively.

We really want :Prop1 to mean prop for nonoverloaded URIs and prop o p1 for overloaded ones. That is, restrict the above inference rule to the situation where U is overloaded, and add a rule allowing us to deduce

<V> :UrProp x.

from

<V> :Prop1 x.

when V is not overloaded. There are a few ways to approach this:

1. The cleanest approach is to accept the consequences and move on.
2. Alternatively, choose between the two rules based on how the subject is written in the RDF statement (using an overloaded URI, vs. a nonoverloaded URI or some other syntax).
3. Or, arrange for the class of denotations of overloaded URIs (that is, the domain of the projection functions) to be disjoint from the class of denotations of nonoverloaded URIs, so that inference based on denotations can tell what kind of URI did the denoting, and thus which inference rule is to apply.
4. Or, somehow take all URIs to be overloaded.

None of these is an obvious winner. The inference rules for number 2 are inexpressible in OWL, which, like RDF (but unlike N3), is referentially transparent; i.e. inference rules (axioms) are understood in terms of what URIs denote, independent of how those denotations are denoted. Number 3 puts a peculiar division (overloaded vs. nonoverloaded) at the very upper reaches of any ontology applied to the domain of discourse, but may be workable. Number 4 prevents aliasing in almost all cases, and this would certainly conflict with current RDF and OWL content using noncontested URIs and deployed in good faith.

There is some similarity here to the classification, proposed by some parties who seem not to understand the httpRange-14 rule, of everything into "information resources" and "non-information-resources" (NIRs). While the class NIR has little utility and considerable danger, since uncontested URIs can equally well refer to either IRs or NIRs, in support of #3 :Nonoverloaded membership would enable a useful conclusion (i.e. that <V> :UrProp x). If :Overloaded is by fiat disjoint with generic resource (GR seems to be roughly the same as IR), we will not lose the ability to identify GRs using uncontested URIs, which is essential for processing "legacy" content. (:Overloaded / GR disjointness is obviously in opposition to the desire of some to institute :Overloaded = information resource = generic resource, but #3 provides a reason for such disjointness that might otherwise be lacking.) The existence of things that are unnamable using uncontested (e.g. hash) URIs would of course be quite radical.

(TBD: Show how to express case #3 inference rules in OWL-DL, assuming a partition of owl:Thing into classes :Overloaded and :Nonoverloaded. Method A: define every property :Propi as the disjunction of subproperties restricted to the two classes, with a property chain axiom for one but not the other. Method B: define each projection :Proji as the disjunction of two properties, one of which is the restriction of the identity relation on :Nonoverloaded, and the other of which is functional with domain :Overloaded.
Easier: since in any case most RDF content is unusable in OWL-DL without deep cleaning, we should probably impose separate, more severe criteria (#1) when interoperability with OWL-DL is to be anticipated.)

(more work to be done here, hoping for help from HT and JT)

Issue: Deciding whether a URI is to be considered overloaded or not

This seems to require a great deal of judgment, as there are numerous cases:

- Hash URIs and non-http: URIs (nonoverloaded based on syntactic considerations)
- non-2xx hashless http: URIs (general consensus in the RDF/linked data community, but maybe not outside)
- 2xx with unrecognized representations (perhaps latent or inaccessible face-value content?)
- 2xx where some or all representations do not say what the URI denotes
- 2xx where some or all representations say the URI denotes the GR
- 2xx where some or all representations don't let you tell whether the URI denotes the GR or not
- etc.

A determinate rule, whose evaluation does not require judgment, for deciding whether a URI is overloaded would be highly desirable. The simplest and most robust rule would be independent of network contingencies, i.e. just consider all hashless http: URIs to be overloaded. This would probably result in difficulties relating to current deployed use of so-called "303 URIs", which have been used, in good faith, with aliasing. A more compatible criterion would be the one familiar from the httpRange-14 resolution, i.e. a 2xx response signals that the URI is contested / overloaded.

What's going on?

Overloaded URIs denote single things, but those things are peculiar things; they carry within themselves (or uniquely determine) the multiple things that one would get under the various incompatible extensions. The specification of overloaded meanings in terms of projection functions is called a "product" or "limit" in category theory.

If you want a way to interpret overloaded URIs, you might consider them to denote tuples whose components are all of the possible meanings that the URI would have under the competing extensions. The projection functions select the appropriate component of the tuple. But this is just one of many interpretations that would be consistent with the constraints. Another interpretation is to take denotation to be the identity function on overloaded URIs, and the projection functions to be what the denotation function would have been under the corresponding extension. More generally, denotation of overloaded URIs could be some arbitrary bijection; although perhaps some bijections are more useful than others.

Identity or bijection may be going too far in forbidding URI equivalences. There is an opportunity here to agree (as was urged by one TAG member at the April 2012 TAG face-to-face) that URI equivalences that are consequent to the HTTP specification must hold, even though these are not required under the RDF specification. On the other hand, there's nothing in the take-at-face-value extension to prevent unequal but HTTP-equivalent RDF URI references from being treated differently.

Because overloading is a purely formal construction designed to mediate a dispute, the important thing is not what these things "are" ontologically, but how they relate to other things (specifically, the projection functions), and the inference rules that apply in relation to them.
Introduction to Interface-Driven Development Using Swagger and Scalatra

Since it began life a little over three years ago, the Scalatra web micro-framework has evolved into a lightweight but full-featured model-view-controller (MVC) framework with a lively community behind it. Scalatra started out as a port of Ruby's popular Sinatra DSL to the Scala language. Since then the two systems have evolved independently, with Scalatra gaining capabilities such as an Atmosphere integration and Akka support. It's been used by BBC Future Media for their Linked Data Writer API, managing large datasets in a scalable and manageable way, and has also been used by gov.uk.

One of the things that Scalatra's been most successful at is the construction of APIs. Over the past several years, REST APIs have become the lifeblood of the web. A relatively recent addition to Scalatra's capabilities is an integration with the Swagger toolset, which is produced by the folks at Wordnik.

What is Swagger?

Swagger is a specification which allows you to quickly define the functionality of a REST API using JSON documents. But it's more than just a spec. It provides automatic generation of interactive API docs, client-side code generation in multiple languages, and server-side code generation in Java and Scala.

Although they're the most eye-catching component of the project and will impress users, the docs produced by Swagger are also great for fostering communication between API producers and consumers during the API design phase.

Let's take a look at how it all works. We'll build out a small REST API using Scalatra, then use Swagger to document our routes.

The first thing you'll need to do is install Scalatra. The easiest way to do this is by following the installation instructions at the Scalatra website. See the notes at the bottom for setting up Eclipse or IntelliJ if you use those IDEs, but you should be able to do this tutorial in any text editor. Once you've got a JVM installed, along with cs, giter8, and sbt, you'll be able to generate a new Scalatra project.

Getting started

Type this at the command line:

g8 scalatra/scalatra-sbt --branch develop

You'll be asked a series of questions about your project. Answer like this:

organization [com.example]:
package [com.example.app]: com.example.swagger.sample
name [scalatra-sbt-prototype]: flowershop
servlet_name [MyScalatraServlet]: FlowersController
scala_version [2.9.2]:
version [0.1.0-SNAPSHOT]:

Hit enter to accept the defaults for the organization, scala_version, and version questions. Once you answer the last question, a full Scalatra project will be generated.

Let's check that it works. Change directory into the flowershop folder, and run Scala's simple build tool by typing:

cd flowershop
sbt

You'll need the latest sbt 0.12.1 for this. sbt 0.12.0 will work too; just downgrade the version number in the file project/build.properties.

sbt will take care of downloading all of Scalatra's dependencies. This can take several minutes when you're doing it for the first time, as you're getting a full Scala development environment, an embedded webserver (Jetty), Scalatra itself, and several companion libraries.

When sbt finishes setting everything up, you should be able to start the application by typing the following at the sbt prompt (which looks like a ">"):

container:start

This will start Jetty on http://localhost:8080. Visit that URL in your browser, and you should see a Hello World application.
You don't want to have to manually recompile your app and restart Jetty whenever you make a code change, so type this at the sbt prompt:

~; copy-resources; aux-compile

This tells sbt to automatically recompile and reload the application whenever you change a file.

So, now we've got a controller for our flower shop. Let's set up a RESTful interface allowing us to browse flowers. Open up the FlowersController.scala file found in src/main/scala/com/example/swagger/sample.

What you start out with is a very simple generated controller. It's got a "hello world" action mounted on the application's root path get("/"), which can be accessed using an HTTP GET to the path "/". It's also got a way to handle 404s, and Scalate templating, which we don't need for our API.

Scalatra allows you to easily add functionality to your controllers by mixing Scala traits into your class definitions. Let's slim things down a tiny bit by removing the ScalateSupport, since we don't need HTML templating support for our API. Delete the with ScalateSupport part of the class definition, so it looks like this:

class FlowersController extends ScalatraServlet {

You can also remove the import scalate.ScalateSupport; it won't be needed. You can remove the notFound and get("/") actions as well. You'll be left with an empty Scalatra controller:

package com.example.swagger.sample

import org.scalatra._

class FlowersController extends ScalatraServlet {

}

As you make these changes and save your files, check sbt's output in the terminal. You should see your source code automatically recompiling itself and reloading the application. The terminal output will turn green once you've slimmed down your controller. You've now taken all of the routes out of FlowersController, so you won't be able to see anything in your browser.

Setting up the data model

Let's get some data set up. Since we want to focus on learning Swagger, we won't attempt to actually persist anything in this tutorial. We can use Scala case classes to simulate a data model instead. If you'd like to find out how to set up ScalaQuery, a Scala ORM, check out Jos Dirksen's tutorial at SmartJava.

First, we'll need a flower model. Add a new directory in src/main/scala/com/example/swagger/sample, and call it models. Then put the following code in a new file, Models.scala, in there:

Models.scala

package com.example.swagger.sample.models

// A Flower object to use as a faked-out data model
case class Flower(slug: String, name: String)

A Scala case class automatically adds getters and setters to the class definition, so we get a lot of functionality here without a lot of boilerplate.

Let's add some flower data. Make a data namespace by adding a new data directory inside src/main/scala/com/example/swagger/sample. Then add a new file in that directory, calling it FlowerData.scala. The contents of the file should look like this:

FlowerData.scala

package com.example.swagger.sample.data

import com.example.swagger.sample.models._

object FlowerData {

  /**
   * Some fake flowers data so we can simulate retrievals.
   */
  var all = List(
    Flower("yellow-tulip", "Yellow Tulip"),
    Flower("red-rose", "Red Rose"),
    Flower("black-rose", "Black Rose"))
}

That gives us enough to work with in terms of data that we can at least demonstrate our API's functionality.
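As a quick optional sanity check, you can poke at this data from a Scala REPL by typing console at an sbt prompt (in a session where the container isn't running). The session below is a sketch; the exact REPL output is illustrative:

scala> import com.example.swagger.sample.data._
import com.example.swagger.sample.data._

scala> FlowerData.all filter (_.name.toLowerCase contains "rose")
res0: List[com.example.swagger.sample.models.Flower] = List(Flower(red-rose,Red Rose), Flower(black-rose,Black Rose))

The same filter expression will reappear later, when we make the API searchable.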
Let's make a new controller action to retrieve flowers. Add some imports so that we get access to our models and data in the FlowersController class, by adding this at the top, after the package definition:

// Our models
import com.example.swagger.sample.models._

// Fake flowers data
import com.example.swagger.sample.data._

Retrieving flowers

Now let's make our first API method: a Scalatra action which lets clients browse flowers. Drop this into the body of the FlowersController class:

get("/"){
  FlowerData.all
}

It doesn't look like much, but this is faking out data retrieval for us, by calling the all method of the FlowerData object we just defined. This is broadly equivalent to calling a static method in Java or C#, or a class method in Ruby.

If you take a look at http://localhost:8080/ in your browser, you should see the following result:

List(Flower(yellow-tulip,Yellow Tulip), Flower(red-rose,Red Rose), Flower(black-rose, Black Rose))

Scalatra has found all the flowers for us and returned the data. Looking at what we've got so far, though, it's not a very descriptive API. What resource is actually being retrieved? It's not possible to tell by looking at the URL. Let's change that.

Setting the mount path for better API clarity

Every Scalatra application has a file called ScalatraBootstrap.scala, located in the src/main/scala directory. This file allows you to mount your controllers at whatever URL paths you want. If you open yours right now, it'll look something like this:

import com.example.swagger.sample._
import org.scalatra._
import javax.servlet.ServletContext

class ScalatraBootstrap extends LifeCycle {
  override def init(context: ServletContext) {
    context.mount(new FlowersController, "/*")
  }
}

Let's change it a bit, adding a route namespace to the FlowersController:

import com.example.swagger.sample._
import org.scalatra._
import javax.servlet.ServletContext

class ScalatraBootstrap extends LifeCycle {
  override def init(context: ServletContext) {
    context.mount(new FlowersController, "/flowers")
  }
}

The only change was to replace the "/*" mount point with "/flowers". Easy enough. Let's make sure it works. Hit http://localhost:8080/flowers in your browser, and you should once again see the same results as before:

List(Flower(yellow-tulip,Yellow Tulip), Flower(red-rose,Red Rose), Flower(black-rose, Black Rose))

This is a much more descriptive URL path. Clients can now understand that they're operating on a flower resource.

Automatic JSON output for API actions

Take a closer look at that output, though: it's just the default Scala string representation of our List of flowers, not JSON. Scalatra 2.2 includes some new JSON handling capabilities which make converting it a snap.

In order to use Scalatra's JSON features, we'll need to add a couple of library dependencies so that our application can access some new code. In the root of your generated project, you'll find a file called build.sbt. Open that up, and add the following two lines to the libraryDependencies sequence, after the other Scalatra-related lines:

"org.scalatra" % "scalatra-json" % "2.2.0-SNAPSHOT",
"org.json4s" %% "json4s-jackson" % "3.0.0",

build.sbt is somewhat equivalent to a Maven pom.xml in Java, or a Gemfile in Ruby, insofar as it keeps track of all your project's dependencies and can take care of downloading them for you.

Restart sbt to download the new jars. You can do so by first hitting the "enter" key (to stop automatic recompilation), and then typing exit at the sbt prompt. Then type sbt again. You should see some messages telling you that sbt is downloading the new dependencies, and then you'll be back at the prompt.
Start the container and recompilation again:

container:start
~; copy-resources; aux-compile

Add the following imports to the top of your FlowersController file, in order to make the new JSON libraries available:

// JSON-related libraries
import scala.collection.JavaConverters._
import org.json4s.{DefaultFormats, Formats}
import org.scalatra.json._

Then mix JacksonJsonSupport and JValueResult into your servlet so your controller declaration looks like this:

class FlowersController extends ScalatraServlet with JacksonJsonSupport with JValueResult {

Inside the class body, set up the implicit jsonFormats value which the JSON support traits need, along with a before() filter which sets the content type of every response to JSON:

protected implicit val jsonFormats: Formats = DefaultFormats

// Before every action runs, set the content type to be in JSON format.
before() {
  contentType = formats("json")
}

Your code should compile again at this point. Refresh your browser at http://localhost:8080/flowers, and suddenly the output of your / action has changed to JSON:

[{"slug":"yellow-tulip","name":"Yellow Tulip"},{"slug":"red-rose","name":"Red Rose"},{"slug":"black-rose","name":"Black Rose"}]

The JValueResult and JacksonJsonSupport traits which we mixed into the controller, combined with the implicit val jsonFormats, are now turning all Scalatra action result values into JSON.

Making the flowers API searchable

Next, let's make our API searchable. We want to be able to search for flowers by name and get a list of results matching the query. The easiest way to do this is with some pattern matching inside the / action in our controller. Currently that route looks like this:

get("/"){
  FlowerData.all
}

We can change it to read a query string parameter, and search inside our list of flowers:

/*
 * Retrieve a list of flowers
 */
get("/"){
  params.get("name") match {
    case Some(name) => FlowerData.all filter (_.name.toLowerCase contains name.toLowerCase)
    case None => FlowerData.all
  }
}

Scalatra can now grab any incoming ?name=foo parameter off the query string, make it available to this action as the variable name, and then filter the FlowerData list for matching results. If you refresh your browser at http://localhost:8080/flowers, you should see no change: all flowers are returned. However, if you point your browser at http://localhost:8080/flowers?name=rose, you'll see only the roses.

Retrieving a single flower by its slug

The last controller method we'll create for the moment is one that retrieves a specific flower. We can easily retrieve a flower by its slug, like this:

get("/:slug") {
  FlowerData.all find (_.slug == params("slug")) match {
    case Some(b) => b
    case None => halt(404)
  }
}

Once again, we're using Scala's pattern matching to see whether we can find a matching slug. If we can't find the desired flower, the action returns a 404 and halts processing.

You can see the API's output by pointing your browser at a slug, e.g. http://localhost:8080/flowers/yellow-tulip:

{"slug":"yellow-tulip","name":"Yellow Tulip"}

The nice thing is that since our before() filter runs on each and every action, the JSON format converter is still operating on our output. We get automatic JSON support with no extra effort. Sweet.
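If you'd rather exercise the API from code than from a browser, one quick throwaway approach (a sketch, assuming the app is running locally on port 8080) is to fetch the endpoints with scala.io.Source from a Scala REPL:

import scala.io.Source

// Fetch all flowers, a name search, and a single flower by slug
val all   = Source.fromURL("http://localhost:8080/flowers").mkString
val roses = Source.fromURL("http://localhost:8080/flowers?name=rose").mkString
val tulip = Source.fromURL("http://localhost:8080/flowers/yellow-tulip").mkString

println(all)
println(roses)
println(tulip)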
Interface-driven development using Swagger

At this point, we've got the beginnings of a REST API. It defines two actions, and offers a way for API clients to see what flowers are available in our flower shop. With our desired functionality achieved, we could stop here. But we're missing two things: human-readable documentation and client integration code. And without these, the API is a lot less useful than it could be.

An API is a way for machines to exchange data, but the process of designing and building an API also requires a lot of communication between people. The best API designs happen when the API's users have a way to get involved and detail what it is they need, and the API implementers boil those conversations down into an interface that works for the desired user stories or use cases.

Historically, the fact that you've needed to use cURL or read WSDL to understand what an API does has severely limited the ability of non-technical people to participate in the API design process. The technical complexity of just making a connection has masked the fact that, at their core, the concepts inherent in a REST API are not particularly hard to understand. Making the API's methods, parameters, and responses visible, in an engaging, easy-to-understand way, can transform the process of building REST APIs.

The people at Wordnik, the word meanings site, have built a toolset called Swagger, which can help with this. Swagger is a bunch of different things. It's a specification for documenting the behaviour of a REST API: the API's name, what resources it offers, available methods and their parameters, and return values. The specification can be used in a standalone way to describe your API using simple JSON files.

The Swagger resources file

If you want to, you can write a Swagger JSON description file by hand. A Swagger resource description for our FlowersController might look like this (don't bother doing so, though, because we'll see how to automate this in a moment):

{"basePath":"","swaggerVersion":"1.0","apiVersion":"1","apis":[{"path":"/api-docs/flowers.{format}","description":"The flowershop API. It exposes operations for browsing and searching lists of flowers"}]}

This file describes what APIs we're offering. Each API has its own JSON descriptor file which details what resources it offers, the paths to those resources, required and optional parameters, and other information.

A sample Swagger resource file

The descriptor for our flower resource might look something like this:

{"resourcePath":"/","listingPath":"/api-docs/flowers","description":"The flowershop API. It exposes operations for browsing and searching lists of flowers","apis":[{"path":"//","description":"","secured":true,"operations":[{"httpMethod":"GET","responseClass":"List[Flower]","summary":"Show all flowers","notes":"Shows all the flowers in the flower shop. You can search it too.","deprecated":false,"nickname":"getFlowers","parameters":[{"name":"name","description":"A name to search for","required":false,"paramType":"query","allowMultiple":false,"dataType":"string"}],"errorResponses":[]}]},{"path":"//{slug}","description":"","secured":true,"operations":[{"httpMethod":"GET","responseClass":"Flower","summary":"Find by slug","notes":"Returns the flower for the provided slug, if a matching flower exists.","deprecated":false,"nickname":"findBySlug","parameters":[{"name":"slug","description":"Slug of flower that needs to be fetched","required":true,"paramType":"path","allowMultiple":false,"dataType":"string"}],"errorResponses":[]}]}],"models":{"Flower":{"id":"Flower","description":"Flower","properties":{"name":{"description":null,"enum":[],"required":true,"type":"string"},"slug":{"description":null,"enum":[],"required":true,"type":"string"}}}},"basePath":"","swaggerVersion":"1.0","apiVersion":"1"}

These JSON files can then be offered to a standard HTML/CSS/JavaScript client to make it easy for people to browse the docs. It's extremely impressive - take a moment to view the Swagger Pet Store example. Click on the route definitions to see what operations are available for each resource. You can use the web interface to send real test queries to the API, and view the API's response to each query.

Swagger language and framework integrations

Let's get back to the spec files.
In addition to enabling automatic documentation as in the Pet Store example, these JSON files allow client and server code to be automatically generated, in multiple languages. This means that unless you want to, you don't need to generate these JSON files by hand. There are integrations with a wide variety of frameworks, including ASP.NET, express, fubumvc, JAX-RS, Play, Ruby, and Spring MVC.

The framework integrations allow you to annotate the code within your RESTful API in order to automatically generate JSON descriptors which are valid Swagger specs. This means that once you annotate your API methods, you get some very useful (and pretty) documentation capabilities for free, using the swagger-ui. You also get the ability to generate client and server code in multiple languages, using the swagger-codegen project. Client code can be generated for Flash, Java, JavaScript, Objective-C, PHP, Python, Python3, Ruby, or Scala.

Setting up the Scalatra Flower Shop with Swagger

Let's annotate our Scalatra flowershop with Swagger, in order to auto-generate runnable API documentation.

Add the dependencies

First, add the Swagger dependencies to your build.sbt file:

"com.wordnik" % "swagger-core_2.9.1" % "1.1-SNAPSHOT",
"org.scalatra" % "scalatra-swagger" % "2.2.0-SNAPSHOT",

Exit your sbt console and once again type sbt in the top-level directory of your application in order to pull in the dependencies. Then run container:start and ~; copy-resources; aux-compile to get code reloading going again.

You'll now need to import Scalatra's Swagger support into your FlowersController:

// Swagger support
import org.scalatra.swagger._

Auto-generating the resources.json spec file

Any Scalatra application which uses Swagger support must implement a Swagger controller. Those JSON specification files, which we'd otherwise need to write by hand, need to be served by something, after all. Let's add a standard Swagger controller to our application. Drop this code into a new file next to your FlowersController.scala. You can call it FlowersSwagger.scala:

FlowersSwagger.scala

package com.example.swagger.sample

import org.scalatra.swagger.{JacksonSwaggerBase, Swagger, SwaggerBase}
import org.scalatra.ScalatraServlet
import com.fasterxml.jackson.databind._
import org.json4s.jackson.Json4sScalaModule
import org.json4s.{DefaultFormats, Formats}

class ResourcesApp(implicit val swagger: Swagger) extends ScalatraServlet
  with JacksonSwaggerBase

class FlowersSwagger extends Swagger("1.0", "1")

That code basically gives you a new controller which will automatically produce Swagger-compliant JSON specs for every Swaggerized API method in your application.

The rest of your application doesn't know about it yet, though. In order to get everything set up properly, you'll need to change your ScalatraBootstrap file so that the container knows about this new servlet. Currently it looks like this:

import com.example.swagger.sample._
import org.scalatra._
import javax.servlet.ServletContext

class ScalatraBootstrap extends LifeCycle {
  override def init(context: ServletContext) {
    context.mount(new FlowersController, "/flowers")
  }
}

Change it to look like this:

import com.example.swagger.sample._
import org.scalatra._
import javax.servlet.ServletContext

class ScalatraBootstrap extends LifeCycle {
  implicit val swagger = new FlowersSwagger

  override def init(context: ServletContext) {
    context.mount(new FlowersController, "/flowers")
    context.mount(new ResourcesApp, "/api-docs")
  }
}

Adding SwaggerSupport to the FlowersController

Then we can add some code to enable Swagger on your FlowersController.
Currently, your FlowersController declaration should look like this:

class FlowersController extends ScalatraServlet with JacksonJsonSupport with JValueResult {

Let's add the SwaggerSupport trait, and also make the FlowersController aware of Swagger in its constructor:

class FlowersController(implicit val swagger: Swagger) extends ScalatraServlet
  with JacksonJsonSupport with JValueResult with SwaggerSupport {

In order to make our application compile again, we'll need to add a name and description to our FlowersController. This allows Swagger to inform clients what our API is called, and what it does. You can do this by adding the following code to the body of the FlowersController class:

override protected val applicationName = Some("flowers")
protected val applicationDescription = "The flowershop API. It exposes operations for browsing and searching lists of flowers, and retrieving single flowers."

That's pretty much it for setup. Now we can start documenting our API's methods.

Annotating API methods

Swagger annotations are quite simple in Scalatra. You decorate each of your routes with a bit of information, and Scalatra generates the JSON spec for your route. Let's do the get("/") route first. Right now, it looks like this:

get("/"){
  params.get("name") match {
    case Some(name) => FlowerData.all filter (_.name.toLowerCase contains name.toLowerCase)
    case None => FlowerData.all
  }
}

We'll need to add some information to the method in order to tell Swagger what it does, what parameters it can take, and what it responds with:

get("/",
  summary("Show all flowers"),
  nickname("getFlowers"),
  responseClass("List[Flower]"),
  parameters(Parameter("name", "A name to search for", DataType.String,
    paramType = ParamType.Query, required = false)),
  endpoint(""),
  notes("Shows all the flowers in the flower shop. You can search it too.")){
  params.get("name") match {
    case Some(name) => FlowerData.all filter (_.name.toLowerCase contains name.toLowerCase)
    case None => FlowerData.all
  }
}

Let's go through the annotations in detail.

The summary and notes should be human-readable messages that you intend to be read by developers of API clients. The summary is a short description, while the notes should offer a longer description and include any noteworthy features which somebody might otherwise miss.

The nickname is intended as a machine-readable key which can be used by client code to identify this API action; it'll be used, for instance, by swagger-ui to generate method names. You can call it whatever you want, but make sure you don't include any spaces in it, or client code generation will probably fail: "getFlowers" or "get_flowers" is fine, "get flowers" isn't.

The responseClass is essentially a type annotation, so that clients know what data types to expect back. In this case, clients should expect a List of Flower objects.

The parameters detail any parameters that may be passed into this route, and whether they're supposed to be part of the path, post params, or query string parameters. In this case, we define an optional query string parameter called name, which matches what our action expects.

Lastly, the endpoint annotation defines any special parameter substitution or additional route information for this method. This particular route is pretty straightforward, so we can leave this blank.
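As an aside, the same annotation constructs compose for other HTTP verbs. For instance, if the flower shop later grew a delete action, its annotations might look something like the following sketch. The delete route itself is hypothetical (it isn't part of this tutorial's API), and it reuses only the constructs already introduced above:

delete("/:slug",
  summary("Delete by slug"),
  nickname("deleteBySlug"),
  endpoint("{slug}"),
  notes("Deletes the flower with the given slug, if a matching flower exists."),
  parameters(Parameter("slug", "Slug of flower to delete", DataType.String,
    paramType = ParamType.Path))) {
  // A real implementation would remove the matching flower from the
  // data store here; this sketch just reports "not implemented".
  halt(501)
}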
Change it from this:

get("/:slug") {
  FlowerData.all find (_.slug == params("slug")) match {
    case Some(b) => b
    case None => halt(404)
  }
}

to this:

get("/:slug",
  summary("Find by slug"),
  nickname("findBySlug"),
  responseClass("Flower"),
  endpoint("{slug}"),
  notes("Returns the flower for the provided slug, if a matching flower exists."),
  parameters(
    Parameter("slug", "Slug of flower that needs to be fetched", DataType.String, paramType = ParamType.Path))) {
  FlowerData.all find (_.slug == params("slug")) match {
    case Some(b) => b
    case None => halt(404)
  }
}

The Swagger annotations here are mostly similar to those for the get("/") route. There are a few things to note.

The endpoint this time is defined as {slug}. The braces tell Swagger that it should substitute the contents of a path param called {slug} into any generated routes (see below for an example). Also note that this time, we've defined a ParamType.Path, so we're passing the slug parameter as part of the path rather than as a query string. Since we haven't set the slug parameter as required = false, as we did for the name parameter in our other route, Swagger will assume that slugs are required.

Now let's see what we've gained. Adding Swagger support to our application, and the Swagger annotations to our FlowersController, means we've got some new functionality available. Check the following URL in your browser: http://localhost:8080/api-docs/resources.json

You should see an auto-generated Swagger description of available APIs (in this case, there's only one, but there could be multiple APIs defined by our application and they'd all be noted here):

{"basePath":"","swaggerVersion":"1.0","apiVersion":"1","apis":[{"path":"/api-docs/flowers.{format}","description":"The flowershop API. It exposes operations for browsing and searching lists of flowers"}]}

Now for the wonderful part.

Browsing your API using swagger-ui

If you browse to the hosted swagger-ui demo site, you'll see the default Swagger demo application - a Pet Store - and you'll be able to browse its documentation. One thing which may not be immediately obvious is that we can use this app to browse our local Flower Shop as well. The Pet Store documentation is showing because the Pet Store's resource URL is entered into the URL field by default.

Paste your Swagger resource descriptor URL - http://localhost:8080/api-docs/resources.json - into the URL field, delete the "special-key" key, then press the "Explore" button. You'll be rewarded with a fully Swaggerized view of your API documentation. Try clicking on the "GET /flowers" route to expand the operations underneath it, and then entering the word "rose" into the input box for the "name" parameter. You'll be rewarded with JSON output for the search method we defined earlier.

Also note that the swagger-ui responds to input validation: you can't try out the /flowers/{slug} route without entering a slug, because we've marked that as a required parameter in our Swagger annotations. Note that when you enter a slug such as "yellow-tulip", the "{slug}" endpoint annotation on this route causes the swagger-ui to fire the request as /flowers/yellow-tulip.

If you want to host your own customized version of the docs, you can of course just download the swagger-ui code from Github and drop it onto any HTTP server.

A note on cross-origin security

Interestingly, you are able to use the remotely-hosted documentation browser to browse an application running on localhost. Why is this possible? Shouldn't JavaScript security restrictions have come into play here?
The reason it works is that Scalatra has Cross-Origin Resource Sharing (CORS) support built-in, allowing cross-origin JavaScript requests by default for all requesting domains. This makes it easy to serve JS API clients - but if you want, you can lock down requests to specific domains using Scalatra's CorsSupport trait. See the Scalatra Helpers documentation for more.

Conclusion

Without much in the way of boilerplate code, you've now constructed a simple REST API, set up model-class to JSON output functionality, and auto-generated API documentation by annotating your Scalatra routes with Swagger information.

This is one way to use Swagger, but the Wordniks swagger differently: rather than starting with the API and using Swagger to just generate the docs, they start out by writing the JSON descriptor files by hand. They then look at the API using the HTML docs browser, and have all the parties who are interested in the API sit down and discuss what's needed. After changing the JSON files based on the discussion, they use swagger-codegen to generate the client and server code. This is called interface driven development, and it's well worth a look.

With its ease of use, multi-framework integration, and innovative way of involving people in the design process, Swagger is at the forefront of REST API construction tools.

The code

You can download and run a working version of this application by installing Scalatra as detailed at the start of this tutorial, doing a git clone, and running sbt in the top-level of the project.

About the author

Dave Hrycyszyn is Technical Director at Head London, a digital innovation agency in the UK. He is passionate about APIs and application architectures, and is a member of the team working on the Scalatra micro-framework, which has been used by LinkedIn, the BBC, the Guardian newspaper, and gov.uk. He has a keen interest in organizations and institutions, and the interplay between social structures and software.

Case classes
by Krzysztof Ciesielski

Re: Case classes
by Dave Hrycyszyn

Adding "setters" to that sentence was just a twitch of the keyboard - I guess OO has really left its mark on me! You're right, no setters are generated when defining a Scala case class as in the article, or like this:

case class Todo(id: Integer, name: String, done: Boolean)

However, it's not quite true to say that case classes are immutable by definition - it is in fact possible to make them mutable, by adding "var" to the property definitions, like this:

case class Todo(var id: Integer, var name: String, var done: Boolean)

You can then set field values (setters are in effect generated by the use of "var"):

scala> val todo = Todo(1, "Shampoo the cat", false)
todo: Todo = Todo(1,Shampoo the cat,false)

scala> todo.done
res2: Boolean = false

scala> todo.done = true
todo.done: Boolean = true

scala> todo
res3: Todo = Todo(1,Shampoo the cat,true)
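A companion sketch to that reply: with the immutable (no-var) form of the case class, the idiomatic way to "change" a field is copy, which returns a new instance rather than mutating the old one:

scala> val todo = Todo(1, "Shampoo the cat", false)
todo: Todo = Todo(1,Shampoo the cat,false)

scala> todo.copy(done = true)
res4: Todo = Todo(1,Shampoo the cat,true)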
WORK-IN-PROGRESS: - this material is still under development

Audit Log

A simple log of changes, intended to be easily written and non-intrusive.

How it Works

An audit log is the simplest, yet also one of the most effective forms of tracking temporal information. The idea is that any time something significant happens you write some record indicating what happened and when it happened.

An audit log can take many physical forms. The most common form is a file. However a database table also makes a fine audit log. If you use a file you need a format. An ASCII form helps in making it readable to humans without special software. If it's a simple tabular structure, then tab-delimited text is simple and effective. More complex structures can be handled nicely by XML.

Audit Log is easy to write but harder to read, especially as it grows large. Occasional ad hoc reads can be done by eye and simple text processing tools. More complicated or repetitive tasks can be automated with scripts. Many scripting languages are well suited to churning through text files. If you use a database table you can save SQL scripts to get at the information.

When you use Audit Log you should always consider writing out both the actual and record dates. They are easy to produce, and even though they may be the same 99% of the time, the 1% can save your bacon. As you do this remember that the record date is always the current processing date.

When to use it

The glory of Audit Log is its simplicity. As you compare Audit Log to other patterns such as Temporal Property and Temporal Object you quickly realize that these alternatives add a lot of complexity to an object model, although these are both often better at hiding that complexity than using Effectivity everywhere. But it's the difficulty of processing Audit Log that is its limitation. If you are producing bills every week based on combinations of historic data, then all the code to churn through the logs will be slow and difficult to maintain. So it all depends how tightly the accessing of temporal information is integrated into your regular software process. The tighter the integration, the less useful is Audit Log.

Remember that you can use Audit Log in some parts of the model and other patterns elsewhere. You can also use Audit Log for one dimension of time and a different pattern for another dimension. So you might handle the actual time history of a property with Temporal Property and use Audit Log to handle the record history.

Example: (Java)

A simple Audit Log can be very simple indeed.

class Customer...
  private String phone;
  public String getPhone() { return (phone == null) ? "none" : phone; }

  public void setPhone(String arg, MfDate changeDate) {
    log(changeDate, this, "change of phone", phone, arg);
    phone = arg;
  }
  public void setPhone(String arg) { setPhone(arg, MfDate.today()); }

  private static void log(MfDate validDate, Customer customer, String description,
                          Object oldValue, Object newValue) {
    try {
      logfile().write(validDate.toString() + "\t" + customer.name() + "\t" + description
                      + "\t" + oldValue + "\t" + newValue + "\t" + MfDate.today() + "\n");
      logfile().flush();
    } catch (IOException e) { throw new ApplicationException("Unable to write to log"); }
  }

Notice that even though the setting method only uses the actual time, I've also added the record date (MfDate.today()) to the log. I think it's always wise to add both dates as it's easy to do, and if you don't add it you can't reconstitute it later.
I'll leave the script for finding out my phone number on some arbitrary date as an exercise for the reader. (Clearly it's too trivial for me to write out here....)
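Taking up that exercise anyway, here is a rough sketch of such a reader - assuming MfDate.toString() produces sortable ISO dates (yyyy-mm-dd), which the pattern doesn't actually specify:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class PhoneAsOf {
    // Scan the tab-delimited log for the latest phone change whose
    // valid (actual) date is on or before the date we're asking about.
    public static String phoneOn(String logPath, String customerName, String asOfDate)
            throws IOException {
        String result = "none";
        try (BufferedReader in = new BufferedReader(new FileReader(logPath))) {
            String line;
            while ((line = in.readLine()) != null) {
                // fields: validDate, customer, description, oldValue, newValue, recordDate
                String[] f = line.split("\t");
                if (f.length >= 5
                        && f[1].equals(customerName)
                        && f[2].equals("change of phone")
                        && f[0].compareTo(asOfDate) <= 0) {
                    result = f[4]; // entries are in write order, so keep the last match
                }
            }
        }
        return result;
    }
}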
Details
- Type: Bug
- Status: Reported
- Priority: P3: Somewhat important
- Resolution: Unresolved
- Affects Version/s: 5.12.10, 5.15.2
- Fix Version/s: None
- Component/s: GUI: Basic Input System (keyboard, mouse, touch)
- Labels: None
- Environment: macOS Catalina
- Platform/s:

Description

I found a bug report on StackOverflow and I have the same thing (). This reproduces for QML and QWidgets, but my example is in QML. So here is an example:

import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick.Controls 2.12

ApplicationWindow {
    id: rootWindow
    visible: true
    width: 640
    height: 480
    color: "gold"

    ListView {
        width: parent.width
        height: parent.height / 2 * 3
        model: 5
        spacing: 1

        delegate: Rectangle {
            width: parent.width
            height: 50
            color: ma.containsMouse ? "mediumvioletred" : "mintcream"
            border.color: "black"
            border.width: 1

            Text {
                anchors.centerIn: parent
                text: "Click on me to open google.com"
                font.bold: true
            }

            MouseArea {
                id: ma
                anchors.fill: parent
                onClicked: Qt.openUrlExternally("https://www.google.com");
                hoverEnabled: true
            }
        }
    }

    Text {
        width: parent.width
        height: 200
        anchors.bottom: parent.bottom
        color: "black"
        text: "1. Click on any list element (note color when hovered)\n2. Re-gain focus by click outside of the list (gold color area)\n3. Hover list element";
        font.bold: true
        horizontalAlignment: Qt.AlignHCenter
        verticalAlignment: Qt.AlignVCenter
    }
}

Do the following:
- Click on a list element to open an external link in the browser.
- Close the browser window so you can see this example app's window, and click outside of the list (gold area).
- Immediately after this, try to hover a list element.

Result - hover stops working for some time. If you move the mouse pointer outside of the main window and then return it back, hover starts working. If you switch back to the example app by using e.g. the Dock, the bug does not happen. It happens only in the case where you bring the app's window to the foreground by just clicking inside of its area. This is NOT a macOS behavior, because it can't be reproduced with a non-Qt application.
Hi, I'm working on an Angular 2 app with PhpStorm, but what is really frustrating is that the TS compiler always keeps looking for errors in the "node_modules" folder as well, and of course throws errors. I have a tsconfig.json file where "node_modules" is being excluded; this is also set in the settings within PhpStorm. So I don't get why the TS compiler still keeps giving me these errors from stuff in my "node_modules" folder. I only want to see errors from my project - how can this be done?

Hello, What is your tsconfig.json? Exclude patterns work fine for me. How many tsconfig.json files do you have in your project? Where are they located? Also, exclusion filters are not applied to files referenced (directly or indirectly) from included files. See the tsconfig.json documentation.

Hi Vlad, I only have one tsconfig.json file; it is located in the root directory of my project. It's pretty straightforward, I think. In my case, I do have imports to certain files of course, and I can understand that the TS compiler needs to read them, but it would be nice if the errors could be suppressed in some kind of way or made silent. Now I have to scroll all the way down to see the errors I have made in my code, because the TS compiler will first show the errors of the node_modules files. I understand why - because the compiler imports those files first and then of course shows the errors in that order. But bottom line, this isn't really a productive way to work.

Please feel free to file a request for filtering the compiler errors. Related ticket:

Hi Elena, That issue doesn't really relate to my problem. I don't have multiple tsconfig files; I just want to be able to filter certain messages that aren't in the scope of my project files - namely folders/files located in the "node_modules" folder.

I didn't say it's your issue (and asked you to create a feature request), but it's definitely related, as this is a request to filter TSC output to show messages related to the current file only.

I have the same problem!
IntelliJ IDEA 2017.1.2
Build #IU-171.4249.39, built on April 25, 2017
JRE: 1.8.0_112-release-736-b16 x86_64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Mac OS X 10.12.4

This seems to be an ongoing issue. Feel free to participate (vote/comment).

@Stevo, IDEA seems to behave correctly: if the module is referenced from the source file, it's processed despite being excluded in tsconfig.json. And, as your files definitely do import @angular/core, the errors are shown. You will see the same errors when running tsc in the terminal.

To make things clear: exclude is about the compiler automatically loading all files from your folder when you run tsc with no file list. This does not impact how import statements are resolved. If the compiler sees import * from "mod", it will try to find mod in node_modules, and if it finds it, it will process it. See: "Any files that are referenced by files included via the "files" or "include" properties are also included. Similarly, if a file B.ts is referenced by another file A.ts, then B.ts cannot be excluded unless the referencing file A.ts is also specified in the "exclude" list."

Is there an update on this? As Maartin said, this is a really unproductive way to work.

Update on what, sorry? As I explained above, the current behavior is expected and conforms to the TypeScript compiler rules.

@Elena - So if I'm using a 3rd party library that does NOT enforce compilerOptions such as noImplicitAny or noUnusedParameters, there is no way for me to enforce those options ONLY within my own project?
From what I currently understand, these used to be linting options but now require type checking, and the proper way to enforce them is with compilerOptions. The problem, as you can see, is that I can enforce linting options only for my code, but I cannot enforce compilerOptions only for my own code.

@Jensbodal Please comment in the ticket above - that would be the best way to reach the developers and discuss this matter with them.
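For reference, a minimal tsconfig.json of the kind discussed in this thread (the paths are illustrative). As the staff replies note, exclude only stops the compiler from picking files up automatically - it does not stop files being pulled in through import statements:

{
  "compilerOptions": {
    "target": "es5",
    "noImplicitAny": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}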
Recently, I spent a weekend banging my head against the wall as I tried to figure out how to upgrade a personal project to webpack 4, TypeScript 2.9, and React (it used to be AngularJS 1.6). I finally got it all working together - and even got hot module replacement (HMR) working.

TL;DR? Check out the code here:

The important bits:

Use the WebpackDevMiddleware

This middleware is built-in to ASP.NET Core 2.1, but you have to specifically add an option to configure HMR. Add this to your Startup.cs file.

app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
{
    HotModuleReplacement = true
});

Use babel-core and ES6

HMR was silently failing for a while until I discovered a few knobs in awesome-typescript-loader. After a bunch of GitHub spelunking, I discovered that I needed these magical settings in webpack.config.js.

{
  test: /\.tsx?$/,
  include: /ClientApp/,
  loader: [
    {
      loader: 'awesome-typescript-loader',
      options: {
        useCache: true,
        useBabel: true,
        babelOptions: {
          babelrc: false,
          plugins: ['react-hot-loader/babel'],
        }
      }
    }
  ]
}

Also, you may need to update your tsconfig.json file to target ES6.

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "jsx": "react"
  }
}

react-hot-loader 4

If you've used previous versions, consider upgrading to version 4. Its usage is super simple now. Here's a minimal React app with HMR.

import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { hot } from 'react-hot-loader';

const App: React.SFC = () => <div>Hello, hot reloading</div>;

const HotApp = hot(module)(App);

ReactDOM.render(<HotApp />, document.getElementById('root'));

A few other goodies

I prefer Yarn to npm because it is faster, deterministic, and it's not too hard to integrate Yarn with the .NET Core command line. Here are some MSBuild targets you can add to your project to light up Yarn integration:

Webpack.targets

Configuration in your .csproj file
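The actual Webpack.targets and .csproj snippets were embedded separately and aren't shown above, so here is only a rough sketch of what a Yarn restore target might look like (the target name and condition are illustrative, not the author's):

<Project>
  <!-- Run yarn before each build whenever a package.json is present -->
  <Target Name="YarnInstall" BeforeTargets="Build" Condition="Exists('package.json')">
    <Exec Command="yarn install" />
  </Target>
</Project>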
User talk:Hyper Girl

From Uncyclopedia, the content-free encyclopedia

Poem

Hey, I've sorted your recent poem contribution, and have redirected it to the namespace. --—Braydie 23:56, 23 December 2006 (UTC)

4BWZD

Don't blame Colin002 LOL, i added u to the list of emo... I was like what?! Go here for an example of it. Next time don't go crazy and don't be n00b.

Uncyclopedia User Page of the Month

Winner of September 2007 "The Non-Official User Page of the Month Award Presented By His Holiness The Dali Llama"

Okay then...

I'll give you a chance to work on it :) (Bonner) (Talk) Dec 17, 16:29
- Some chance. Its gone by tomorrow. Hyper Girl 13:28, 18 December 2007 (UTC)

Avril Lavigne

Thanks! I saw the stuff you did too, good job removing all that crap from the page :) Hope you don't mind but I've changed your userpage; the wikitable you had can crash Firefox, which is fine on Wikipedia, but for normal viewing ascii art works best. I'm actually quite proud of that now, and a bit jealous too cause it looks so good! And yes, I am the Avril Troll. AlkalineAvril 20:33, 16 July 2008 (UTC)
- Hey, I actually like it so much I've added it to my page! AlkalineAvril 20:41, 16 July 2008 (UTC)
- This site. Lol no, I wish I could! I could probably get it on the main page here but what's the point in that - so few people view this site compared to Wikipedia. Besides, I'm taking a break from vandalizing Wikipedia for now, though I still make 12 accounts per day so when I get back into it I'll have loads ;) AlkalineAvril 13:36, 18 July 2008 (UTC)
- Thanks. Did you read the story? AlkalineAvril 16:04, 18 July 2008 (UTC)
- Oh yeah, have a look at this (everything after December 2007, before that I didn't log them) AlkalineAvril 16:07, 18 July 2008 (UTC)
- Lol I do it in my spare time at college. I'm not on MSN but you can email me juhazupeqigoxara@tempomail.fr - The email looks strange because it's a temporary proxy, and you'll have to email it within four hours or it won't work. AlkalineAvril 16:21, 18 July 2008 (UTC)

I'm back!

Sorry about that. New email: byhanixonumajygi@tempomail.fr (I've set this one to last for a month) AlkalineAvril 19:55, 8 August 2008 (UTC)

This page has been dry for some time

I read the word lesbian at one point. Where are the pictures? 65.90.138.150 19:48, 5 June 2009 (UTC)
Windows Phone 7 Application Obfuscation using PreEmptive obfuscator

Recently, I wanted to make sure that the Windows Phone 7 applications we publish are protected with obfuscation, and since PreEmptive provides a free professional tool called Dotfuscator, I thought I'd give it a try. I've been using DeepSeaObfuscator for years, and this blog really is not about comparing these two, but to provide a step-by-step guide on how to take your release Windows Phone 7 xap file and obfuscate it using Dotfuscator. I had to open a support ticket just to figure out how to make it work, so I figure this will be good information for everyone using it.

Why Obfuscate?

Obfuscating your xap is about protecting your intellectual property from disassemblers, or possibly hiding sensitive information that is in your code. There are great free tools like Reflector that allow you to disassemble your exe and dll. If you want to protect your xap file, then obfuscation is a great way to make things harder for people to extract what they want out of your dll or exe. It is not 100% foolproof, but it will definitely slow things down.

How to disassemble using Reflector?

For your xap, simply rename the xap extension to zip and use your favorite tool like 7Zip to unzip it, and you will see all the contents of the xap, including dlls that you can disassemble using Reflector. When you open your unzipped dll in Reflector you will see something similar to the below figure.

How to Use PreEmptive Dotfuscator?

Here you will see the step-by-step guide to configuring Dotfuscator so you can successfully obfuscate your xap. You will need to set certain configuration options to make this work properly.

1. Open Dotfuscator, browse to your xap and select it as shown below.
2. Click on Settings and choose the settings shown below. Notice here that I had to DISABLE renaming, because otherwise my obfuscated app will not run. Renaming is where it renames your variables, and namespaces and/or properties can be removed. The basic problem with renaming is that properties are bound via XAML binding, so it has the potential to break the app when obfuscated. Everything else I left as a default option here.
3. Another thing I had to do was add a User Defined Assembly Load Path of C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\Silverlight\v4.0\Profile\WindowsPhone in order for the build to be obfuscated successfully, as shown below.
4. Simply click Build or File -> Build, and your obfuscated file will be in .\Dotfuscated.

Testing your obfuscated XAP

You will now need to test your xap, either using the emulator or your real device.

1. Go to Start -> All Programs -> Windows Phone Developer Tools and click on Application Deployment.
2. From the Application Deployment window, select your XAP by browsing to the location of the obfuscated XAP as shown below.
3. Choose Windows Phone 7 Emulator from the Target dropdown and click Deploy.
4. Your emulator will pop up and install your xap, and you will need to test it to make sure it runs correctly.

Conclusion

Here you learned about obfuscating the XAP using Dotfuscator; even if you are using DeepSeaObfuscator, the configuration and process are similar.
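To make the renaming caveat above concrete, here's a hand-rolled illustration (not from the article - the names are invented) of a property that XAML looks up by name at runtime:

// C#: a view model property the page binds to
public class MainViewModel
{
    public string Title { get; set; }   // if the obfuscator renames this to "a"...
}

// XAML: ...this binding still asks for "Title" by string at runtime,
// finds nothing, and the UI silently shows no text:
// <TextBlock Text="{Binding Title}" />

This is why the renaming transform has to be disabled (or the bound members excluded from renaming) for Silverlight/Windows Phone apps that rely on data binding.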
gcc-2.96-69's iostream implementation fills with nulls instead of a real character when a specific width is selected. E.g.:

> cat bug22.c
#include <iostream>

int main(int, char**)
{
  std::cout << 0.5 << endl;
  std::cout.fill(' ');
  std::cout.width(6);
  std::cout << 0.5 << endl;
}

> g++ -v
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs
gcc version 2.96 20000731 (Red Hat Linux 7.0)
> g++ bug22.c -o bug22
> ./bug22
0.5
^@^@^@0.5
>

Here I've rendered the nulls (0x00) the way Emacs does (^@), so that they will survive email. The same thing happens if the explicit setting of the fill character is omitted; in either case GCC 2.95 produces a space and 2.96 produces nulls.

Have you upgraded libstdc++ as well? This was fixed in libstdc++-2.96-60 or before (I don't remember exact release).

D'oh! Yes, that fixes it; I didn't see that libstdc++ and cpp were also upgraded when gcc was. So it works fine in the current release, libstdc++-2.96-69. Sorry about that!
Hello everyone, and welcome to my new article, React Native Drawer Tutorial. In this article we are going to explore a piece of the React Native navigation ecosystem: Drawer Navigation.

Drawer Navigation is one of the fundamental ways of navigating between screens in mobile apps. It is native to both platforms, iOS and Android, and recently you can see plenty of web apps adopting this approach too.

React Native Drawer Concept

The concept behind Drawer Navigation is pretty simple. On the header of your screens, you will have an icon button at the top left of the screen, usually represented with 3 horizontal lines. When the user presses it, a navigation drawer slides in from the left of the screen. It contains a list of the main screens the user can navigate to, such as the Home screen, Settings, etc.

UI Concept

So, to achieve a seamless drawer navigation, we will try to build an app with 3 different screens. Each screen will have a header, an image and 2 paragraphs of text.

The header will contain 2 (sometimes 3) components. On the left, the drawer navigation icon button. In the middle, the name of the screen presented to the user. And on the right, I will add an empty text as a placeholder; but notice, most apps will have another icon button here for extra functionality, like sharing on social media, etc.

And our sidebar menu, which is the list of screens the user can navigate to, will have 2 components per entry: an icon or image to represent the meaning of the screen, and its name. It will also show some general user profile data, like a profile photo, name and email.

In general the app will look like this.

Let's Get Started

Environment Setup

To achieve the given results we will need to install a few things. First, the React Navigation library itself, react-navigation:

yarn add react-navigation

Then its two helper dependencies, react-native-reanimated and react-native-gesture-handler:

yarn add react-native-reanimated react-native-gesture-handler

Then, we will need to install two of the main React Navigation navigators to handle the navigation workflow: react-navigation-stack and react-navigation-drawer. There are more than these 2, but for this example we will only need these. So go ahead and install them into your project.

yarn add react-navigation-stack react-navigation-drawer

One last library we will need is @expo/vector-icons, which will allow us to use tons of icons, pre-made and fully customizable, in our app.

yarn add @expo/vector-icons

Setting Up The Drawer Navigation

To navigate between screens in your app, you will need to make an app container using createAppContainer. Then add to it the stack navigator including all your stacks - for our example we will only add one, which is the Drawer navigator we have, but you might have more than one - and finally the Drawer navigator itself, with the list of screens you want the user to move between.

Let's start with the Drawer navigator first.

Create Drawer Navigator

Import createDrawerNavigator from react-navigation-drawer:

import { createDrawerNavigator } from 'react-navigation-drawer';

Then use it to create a new drawer navigator with the below properties.

const Drawer = createDrawerNavigator(
  {
    Home: { screen: Home },
    Profile: { screen: Profile },
    Settings: { screen: Settings }
  },
  {
    initialRouteName: "Home",
    unmountInactiveRoutes: true,
    headerMode: "none",
    contentComponent: props => <Sidebar {...props} />
  }
)

The first argument of the createDrawerNavigator function is an object with all the screens you want your user to navigate.
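For orientation, a minimal placeholder screen could be as small as this (just a sketch - the real screens, with headers and content, are built later in this article):

import React from "react";
import { View, Text } from "react-native";

// minimal placeholder; Profile and Settings follow the same shape
const Home = () => (
  <View style={{ flex: 1, justifyContent: "center", alignItems: "center" }}>
    <Text>Home</Text>
  </View>
);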
You have to import them first; these are React components. In my case, I will just add simple functional components within the same App.js file.

The second argument is also an object where you can add the navigation options you want. For this example I have 4.

initialRouteName: its name is descriptive - the default screen you want your user to land on from this navigator. I am having it default to the Home screen.

unmountInactiveRoutes: an advanced feature recently added to React Navigation, which saves a lot of manual work. Simply, it destroys every screen you leave. Without this feature, back when React Native first came out, every time you switched from a screen to another and then came back to that screen, you would notice that it hadn't been destroyed and was still live in the background. It also still had its old state and props, so it would not get new ones, and you had to handle this manually.

headerMode: this property removes the default header space the navigators have, so that you can implement your own, or in case you want to have a full-screen component.

contentComponent: this property takes care of our sidebar list menu. So make one, import it, and add it here.

Create Stack Navigator

Now let's create the Stack Navigator - similar to the Drawer navigator, nothing extra. We will only add the initial route name for the navigation options, and of course add the Drawer navigator we created earlier to this stack navigator.

As I have mentioned before in this article and in my previous article, React Native Screen Transitions, you can have multiple stack navigators in your app; in fact, you probably need to split your screens into stacks and use them accordingly.

An example of multiple stack navigators would be splitting your app screens into categories on the same level: a user authentication stack and a main drawer navigation stack (or other navigation stacks, such as bottom tab navigation) with your screens. You can also split the main stack into a functional screens stack and a settings screens stack.

const AppNavigator = createStackNavigator(
  {
    Drawer: { screen: Drawer },
  },
  {
    initialRouteName: "Drawer",
  }
)

Create App Container

Create an app navigation container from the AppNavigator navigation stack like this:

const AppContainer = createAppContainer(AppNavigator);

And finally render the AppContainer in your App.js render method:

class App extends React.Component {
  render() {
    return (
      <AppContainer />
    );
  }
}

React Native With Redux Integration

If you are using Redux in your React Native project, the implementation will differ a bit for the last step. After you create your app navigation container, you have to include it within the Redux Provider in the render method of App.js.

To achieve this, simply import Provider from react-redux along with your store, and add it like this:

class App extends React.Component {
  render() {
    return (
      <Provider store={store}>
        <AppContainer />
      </Provider>
    )
  }
}

Create Drawer Sidebar

Now, let's create the Sidebar for our navigation Drawer. Remember that it's just a React component, and you can make it look like anything you want. But in general you would want to have a list of screens the user can navigate to, each with a name and an icon. Maybe add more user details, like an avatar, name and email, and some extra links, for FAQ or feedback submissions, etc.

For simplicity, I am going to include only the screens list and user profile data, such as photo, name and email.
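Structurally, the sidebar we're about to build will look roughly like this skeleton (a sketch - each commented piece is filled in below):

class Sidebar extends React.Component {
  state = { routes: [] }; // the screens list goes here

  render() {
    return (
      <View style={{ flex: 1, alignItems: "center" }}>
        {/* profile image, name and email */}
        {/* divider */}
        {/* FlatList of routes */}
      </View>
    );
  }
}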
This is the drawer sidebar result

So, let's make a new React Native component called Sidebar and add an initial state with routes:

state = {
  routes: [
    { name: "Home", icon: "ios-home" },
    { name: "Profile", icon: "ios-contact" },
    { name: "Settings", icon: "ios-settings" },
  ]
}

As you might have noticed, each route has a name and an icon. The name is pretty obvious: the name of the screen to navigate to. And the icon is the Ionicons icon name from @expo/vector-icons.

For this article I only used Ionicons, but you can use icons from different providers. If you want to check the full icons list, you can check Expo Icons.

Profile Data

For our sidebar's top side, we wanted to add general user profile data. The first part is simple: a round image and two texts for the name and email.

<Image source={require("./assets/profile.jpg")} style={styles.profileImg} />
<Text style={{ fontWeight: "bold", fontSize: 16, marginTop: 10 }}>Janna Doe</Text>
<Text style={{ color: "gray", marginBottom: 10 }}>janna@doe.com</Text>

Style

profileImg: {
  width: 80,
  height: 80,
  borderRadius: 40,
  marginTop: 20
}

Then, to add a sidebar divider line like the one you see in the picture, simply add an empty View component and style it like this:

<View style={styles.sidebarDivider}></View>

sidebarDivider: {
  height: 1,
  width: "100%",
  backgroundColor: "lightgray",
  marginVertical: 10
}

marginVertical & marginHorizontal

You might have noticed this uncommon styling property in React Native. It's used to apply a margin to both sides of an axis at once. Instead of setting top and bottom margins, or left and right, you can have them in one single property: vertical for top and bottom, and horizontal for left and right.

You can also use it for padding: paddingVertical & paddingHorizontal.

Sidebar Screen routes

To fill our sidebar with routes, we are going to use the React Native FlatList. If you do not know how it works, or how to make one, check my old tutorial, React Native Flatlist Example.

Import the React Native FlatList and add it below our sidebar divider:

<FlatList
  style={{ width: "100%", marginLeft: 30 }}
  data={this.state.routes}
  renderItem={({ item }) => <Item item={item} navigate={this.props.navigation.navigate} />}
  keyExtractor={item => item.name}
/>

Notice the navigate function we are passing from the Sidebar component to the item. We are going to use it to navigate to screens, using the name property from the state.

And the FlatList renderItem function:

function Item({ item, navigate }) {
  return (
    <TouchableOpacity style={styles.listItem} onPress={() => navigate(item.name)}>
      <Ionicons name={item.icon} size={32} />
      <Text style={styles.title}>{item.name}</Text>
    </TouchableOpacity>
  );
}

Styles

listItem: {
  height: 60,
  alignItems: "center",
  flexDirection: "row",
},
title: {
  fontSize: 18,
  marginLeft: 20
},

As you might have noticed, we have added for each item a TouchableOpacity button with the name and icon of the screen, using the icon name and screen name from each item.

And this is the final result

App Screens

The last pieces we need for our app to work are the screens we want to navigate to and from. We haven't created them yet, so let's go for it. For simplicity, I have included them within the App.js file, since they are pretty simple. But you might want to have them in a different folder, to keep your app well organized.

Each screen will have a Header component (we haven't created it yet), an Image, and a couple of text paragraphs.
Like this:

const Home = ({ navigation }) => (
  <View style={styles.container}>
    <Header name="Home" openDrawer={navigation.openDrawer} />
    <Image source={require("./assets/banner.png")} style={{ width: "80%", height: "30%" }} />
    <Text style={{ padding: 20 }}>
      Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam sit amet dictum sapien, nec viverra orci. Morbi sed maximus purus. Phasellus quis justo mi. Nunc ut tellus lectus.
    </Text>
    <Text style={{ padding: 20 }}>
      In eleifend, turpis sit amet suscipit tincidunt, felis ex tempor tellus, at commodo nunc massa rhoncus dui. Vestibulum at malesuada elit.
    </Text>
  </View>
)

Notice, for the Header component, we are passing 2 props:

name, the name of the screen we want displayed on the header.

openDrawer(), a function React Navigation passes to every screen in your app navigation container. React Navigation does not have just that utility; it has dozens of navigation tools you can use to handle the navigation of your app. As an example, in our FlatList we are using navigate() to go to a screen using its name. Another one is the goBack() function, which we use to navigate back to the previous screen we came from.

openDrawer() opens the drawer and brings up the sidebar we have created. It's already passed to the props of all screens within the Drawer navigator, so you just use it like this: this.props.navigation.openDrawer().

Now our Home screen looks like this

Go on and create multiple screens to use in your app; for this article I have 3 - Home, Profile, and Settings - with the same content.

Screen Header

Finally, let's make the Header component we have on each screen, to display the name of every screen and an icon button to open the drawer.

const Header = ({ name, openDrawer }) => (
  <View style={styles.header}>
    <TouchableOpacity onPress={() => openDrawer()}>
      <Ionicons name="ios-menu" size={32} />
    </TouchableOpacity>
    <Text>{name}</Text>
    <Text style={{ width: 50 }}></Text>
  </View>
)

Style

header: {
  width: "100%",
  height: 60,
  flexDirection: "row",
  justifyContent: "space-between",
  alignItems: "center",
  paddingHorizontal: 20
}

And there you have it: a simple React Native Drawer tutorial, showing how to make a clean drawer navigation through your app. I will create GitHub and Expo.io repositories you can use to test it and implement it in your projects. This article and the repositories will keep updating with new stuff, exploring anything that changes for Drawer navigation, to keep it ready for future patches.

I hope you enjoyed my article and found it as informative as you expected. Thank you for your time, take care.

Happy Coding

Thanks Sir, Very helpful your articles. keep updating more articles like this on react native

Thank you so much, I really appreciate your support. I am nowhere near stopping adding articles - stay tuned for more.

This is great. Thank you

I really need your help and guide on how to do a role-based login app with different user levels using RN. Let's say we have users: 1. Cashier, 2. Admin and 3. Customer.

Each user will have a dashboard and navigation menu (drawer) specific to it.

When you login as Manager it takes you to the Manager's dashboard with the Manager's specific drawer menu.
When you login as Cashier it takes you to the Cashier's dashboard with the Cashier's specific drawer menu.
When you login as Customer it takes you to the Customer's dashboard with the Customer's specific drawer menu.

I am using a nodejs API on my local server as backend. I just need a basic example of how to do this and how to structure my folder.
Hi Russell,

Thanks for reaching out to me. I think I can help you with your concern. I think the simplest way you can achieve that is by having role-specific dashboards and drawer menus, using the routes in the state. You can have them predefined per role, or fill them based on the user role when the drawer is mounted. I worked on similar functionality for an app before, and this is how I had it done.

Thank you Youssef for your prompt response and genuine concern. I just started with react native, and I was hoping you'd do a tutorial on this, particularly on how to structure folders for components/pages specific to each user role, because I really need this as a project for a job. Is there a github or tutorial link you can recommend? Nonetheless, what you suggested in your reply is very well appreciated and useful still. You are doing a fantastic job, and I couldn't believe my eyes on the quality and straight-to-the-point content you've got here.

Hello again Russell,

It's really pleasant to hear from you again. Thank you a lot for your kind message; I am flattered, and appreciate your support. Concerning the folder structure, there is not much of a one-way solution for this. I think every developer has his own way of structuring his projects, and I would suggest you find your own based on your programming experience. For me, I usually split my React Native projects into a screens folder and smaller components folders when I get started, and build my way from there. When it comes to React Native, there is one particular book I would recommend - it has it all, fully detailed: Fullstack React Native. If you still need help or want to discuss something, join me on Facebook.

Sir, Please add some article on firebase that how can we perform authentication and all that.

Can you help me by showing the source code on GitHub? Thank sir. I'm from VietNam

Hi Sir, The GitHub source code is already mentioned in the article.

is that valid for react-navigation 5.0 ? it does not work for me

I think it will; if not, the changes would be minor - check the new react-navigation docs to make sure. Still, the concept is the same.
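A rough sketch of the role-based idea described in this thread (the role names, screen names and icons are illustrative, not from the article):

// map each role to its own drawer routes
const routesByRole = {
  manager:  [{ name: "ManagerDashboard",  icon: "ios-stats" }],
  cashier:  [{ name: "CashierDashboard",  icon: "ios-calculator" }],
  customer: [{ name: "CustomerDashboard", icon: "ios-cart" }],
};

class Sidebar extends React.Component {
  // pick the routes for the logged-in user's role when the drawer mounts;
  // how the role reaches the sidebar (screenProps, context, redux) is up to you
  state = { routes: routesByRole[this.props.screenProps.role] || [] };

  // ...render the profile block and FlatList exactly as in the article
}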
I'm trying to perform a very standard multi mapping query using Dapper, and I'm getting the following error. I also get another error occasionally when this seems to work, but I'm unable to reproduce it at the moment. I'll append it to this post if/when the first problem is solved.

Here is the query code:

const string storedProc = "dbo.GetStopsForRouteID";

var stops = conn.Query<RouteStop, MapLocation, RouteStop>(
    storedProc,
    (stop, loc) => {
        stop.Location = loc;
        return stop;
    },
    new { RouteID = routeId },
    commandType: CommandType.StoredProcedure);

In Dapper.cs on line 498:

var deserializer2 = (Func<IDataReader, TSecond>)info.OtherDeserializers[0];

info.OtherDeserializers is null, which causes a NullReferenceException.

This is the guts of the stored procedure:

SELECT RouteStops.StopID,
       RouteStops.Name,
       RouteStops.Description,
       RouteStops.IsInbound,
       RouteStops.Location.Lat as Latitude,
       RouteStops.Location.Long as Longitude
FROM dbo.Routes
INNER JOIN dbo.StopsOnRoute ON Routes.RouteID = StopsOnRoute.RouteID
INNER JOIN dbo.RouteStops ON StopsOnRoute.StopID = RouteStops.StopID
WHERE Routes.RouteID = @RouteID
ORDER BY StopsOnRoute.SequenceNumber

I've had an extensive look at the dapper code but I can't find anything that seems out of place, other than that TFirst's deserialiser isn't null, but TSecond's is. Could there be a problem when it creates TSecond's deserializer that leaves it as null?

Here are the types:

public class MapLocation
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

public class RouteStop
{
    public int StopID { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public bool IsInbound { get; set; }
    public MapLocation Location { get; set; }
}

Probably the main problem here is that you haven't told it how to "split"; try adding the parameter:

splitOn: "Latitude"

Without that, as far as dapper can see, there is no second result portion (it splits on Id by default).
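Spelled out, the corrected call from the answer would look like this:

var stops = conn.Query<RouteStop, MapLocation, RouteStop>(
    storedProc,
    (stop, loc) => { stop.Location = loc; return stop; },
    new { RouteID = routeId },
    commandType: CommandType.StoredProcedure,
    splitOn: "Latitude");   // begin mapping MapLocation at the Latitude column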
Configuring Mail Flow between Exchange 2007 and Lotus Domino

Topic Last Modified: 2011-08-01

A main consideration when you configure mail flow between Microsoft Exchange Server 2007 and IBM Lotus Domino is how to handle e-mail addressing between the two messaging systems. The steps to configure mail flow differ for sharing a domain name, for when you use a subdomain, or for when you have different namespaces. You can use this document as a step-by-step guide to configure an Exchange 2007 server and a Domino server to successfully send e-mail between the servers. The steps in this document use subdomains for the messaging domains. Specifically, exchange.contoso.com as the Microsoft Exchange SMTP address space and domino.contoso.com as the Domino address space.

In Microsoft Exchange, you can route messages to Lotus Notes from Exchange by using the SMTP protocol. This process is changed from earlier versions of Microsoft Exchange. To create a successful message flow to a Domino server in earlier versions of Microsoft Exchange, you must use the Message Transfer Agent (MTA), and you must have the Notes client installed on the Microsoft Exchange server.

To perform the following procedures, the account that you use must be delegated the appropriate Exchange administrator roles. For more information about delegating roles, and about the rights that are required to administer Exchange 2007, see Permission Considerations.

Create a remote domain to represent the Domino address space. To do this, follow these steps:

1. Start the Exchange Management Console.
2. In the navigation pane, expand the Organization Configuration container.
3. Under Organization Configuration, click Hub Transport.
4. In the details pane, click the Remote Domains tab.
5. In the Actions pane, click New Remote Domain.
6. In the New Remote Domain dialog box that appears, type a descriptive name in the Name box. For example, type Domino_Mail.
7. In the Domain name box, type the SMTP root domain for Exchange users and for Lotus Notes users. For example, type contoso.com.
8. Click to select the Include all subdomains check box, and then click Next.
9. Follow the remaining steps to create the new domain.
10. After you create the domain object, right-click the object, and then click Properties.
11. In the Domino_Mail Properties dialog box, click the General tab.
12. Under Out-of-office message types delivered to this remote domain, click Allow external out-of-office messages only.
13. Click the Message Format tab.
14. Under Exchange rich-text format, click Never use, and then click OK.

After you create a remote domain object for the Notes users, you must create a new SMTP Send connector. This lets Exchange route messages addressed to the Domino address space to the Domino server. To configure a new Send connector, follow these steps:

1. In the Actions pane under Hub Transport, click New Send Connector.
2. In the New SMTP Send Connector dialog box, type a descriptive name in the Name box. For example, type Domino_Outbound.
3. In the Select the intended use for this Send connector list, click Custom, and then click Next.
4. Under Address space, click Add.
5. In the Domain box, type domino.contoso.com. Click OK, and then click Next.
6. Under Network settings, click Use domain name system (DNS) "MX" records to route mail automatically, and then click Next.
7. Review the items that appear under Configuration Summary, and then click New.

Create a new SMTP Receive connector to let the Domino SMTP server deliver messages to Microsoft Exchange. Because you cannot have two connectors that have the same scope, you must make this connector different from the default Receive connector.
To do this, either change the scope of addresses that are allowed to connect to this connector, or specify the IP address or port that this connector listens on. The example in this document uses a specific IP address for the new Receive connector. To create the new connector, follow these steps:

1. In the Exchange Management Console, perform one of the following steps:
   - Edge Transport server: On a computer that has the Edge Transport server role installed, click Edge Transport. In the details pane, click the Receive Connectors tab.
   - Hub Transport server: On a computer that has the Hub Transport server role installed, expand Server Configuration, and then click Hub Transport. In the details pane, select the server on which you want to create the connector, and then click the Receive Connectors tab.
2. In the Actions pane, click New Receive Connector.
3. In the New SMTP Receive Connector dialog box, type a descriptive name in the Name box. For example, type Domino_Inbound.
4. In the Select the intended use for this Receive connector list, click Custom, and then click Next.
5. Under Local Network settings, click Add, and then type the IP address that you want the server to use to receive Domino messages. For example, type 192.168.1.31. Click OK.
6. In the Specify the FQDN this connector will provide in response to HELO or EHLO box, type the FQDN of the server. For example, type Exchange.contoso.com. Click Next.
7. Review the items that appear under Completion, and then click Finish.

After you create the new SMTP Receive connector, you must configure the connector to allow the Domino server to submit messages to Exchange. To do this, follow these steps:

1. Right-click the new SMTP Receive connector, and then click Properties.
2. Click the Permission Groups tab.
3. Click to select the following check boxes:
   - Anonymous users
   - Exchange servers
4. Click the Authentication tab, click to select the Externally Secured [for example, with IPsec] check box, and then click OK.

Modify the configuration documents on the Domino server to configure mail flow together with Microsoft Exchange. These changes include the following:

- Modifying the Server document
- Modifying the Domain configuration document
- Modifying the Server configuration document
- Creating a new SMTP domain to represent the Exchange domain

To modify the server document, follow these steps:

1. Start a Lotus Notes Client instance by using an account that has administrative rights. Then, open the Domino Directory.
2. Under Configuration, expand Servers, and then click All Server Documents.
3. In the details pane, expand the domain if it is not already expanded, and then click the server object. Click Edit Server. In this document, the domain is named contoso and the example Domino server is named DomSrv1/contoso.
4. On the Server: DomSrv1/contoso page, click the Basics tab, and then modify the SMTP Listener Task. To do this, click the task, click Enabled in the Select Keywords dialog box, and then click OK.
5. Modify the Routing Task to include SMTP Mail Routing. To do this, follow these steps:
   - Click Mail Routing.
   - In the Select Keywords dialog box, click to select the following check boxes: Mail Routing, SMTP Mail Routing.
   - Click OK.
6. Click Save & Close.

To modify the domain configuration document, follow these steps:

1. In the Lotus Notes client, expand Configuration, expand Messaging, and then click Domains.
2. In the details pane, expand Global Domain, click the global domain object, and then click Edit Domain.
3. On the Domain <GlobalDomainName> page, click the Conversions tab.
4. Modify the Alternate Internet domain aliases field by adding the domino.contoso.com domain.
5. Click Save & Close.

To modify the server configuration document, follow these steps:

1. In the Lotus Notes client, expand Configuration, expand Servers, and then click Configurations.
2. In the details pane, click the server object. For example, click DomSrv1/Contoso. Then, click Edit Configuration.
3. On the Configuration Settings: DomSrv1/Contoso page, click the MIME tab.
4. On the MIME tab, click the Conversion Options tab.
5. On the Conversion Options tab, click the General tab. Then, set the Return receipts option to Disabled.
6. On the Conversion Options tab, click the Outbound tab.
7. On the Outbound tab, change the Message content option from "from Notes to Plain Text" to "from Notes to Plain Text and HTML". This action enables additional message enhancements when messages are sent from Notes to Microsoft Exchange users.
8. Click Save & Close.

To add a new domain, follow these steps:

1. In the Lotus Notes client, expand Configuration, expand Messaging, and then click Domains.
2. In the details pane, click Add Domain.
3. In the new domain, click the Basics tab. Then change the Domain type field from Foreign Domain to Foreign SMTP Domain.
4. Click the Routing tab. Then, modify the Internet Domain field to specify the Exchange subdomain. For example, exchange.contoso.com.
5. Modify the Internet host field to specify the FQDN or IP address of the Exchange server to which you want to route outgoing SMTP messages.
6. Click Save & Close.

For more information about how to create SMTP connectors, see the Exchange 2007 Help topics on SMTP connectors.
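As a side note, the two Exchange connectors built above in the console can also be created from the Exchange Management Shell. Here is a sketch using this document's example values - the RemoteIPRanges value is illustrative, so verify all parameters against your own environment:

# Send connector for mail addressed to the Domino subdomain
New-SendConnector -Name "Domino_Outbound" -Usage Custom `
    -AddressSpaces "SMTP:domino.contoso.com;1" -DNSRoutingEnabled $true

# Receive connector bound to the IP address used for inbound Domino mail
New-ReceiveConnector -Name "Domino_Inbound" -Usage Custom `
    -Bindings "192.168.1.31:25" -RemoteIPRanges "192.168.1.0-192.168.1.255" `
    -Fqdn "exchange.contoso.com"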
Red Hat Bugzilla – Full Text Bug Listing Description of problem: My son loves to play the "slingshot" game on my rawhide box. Yesterday, as part of my daily rawhide update, the numpy package got updated. Now slingshot doesn't work. Version-Release number of selected component (if applicable): [root@localhost ~]# rpm -q python python-devel slingshot numpy python-2.6-7.fc11.i586 python-devel-2.6-7.fc11.i586 slingshot-0.8.1p-3.fc11.noarch numpy-1.3.0-2.fc11.i586 How reproducible: All the time Steps to Reproduce: 1. Update a box to the latest from rawhide 2. Install slingshot 3. Try to run the game Actual results: [jsmith@localhost ~]$ slingshot Traceback (most recent call last): File "slingshot.py", line 31, in <module> import pygame File "/usr/lib/python2.6/site-packages/pygame/__init__.py", line 188, in <module> try: import pygame.sndarray File "/usr/lib/python2.6/site-packages/pygame/sndarray.py", line 73, in <module> import pygame._numpysndarray as numpysnd File "/usr/lib/python2.6/site-packages/pygame/_numpysndarray.py", line 38, in <module> import numpy File "/usr/lib/python2.6/site-packages/numpy/__init__.py", line 130, in <module> import add_newdocs File "/usr/lib/python2.6/site-packages/numpy/add_newdocs.py", line 9, in <module> from lib import add_newdoc File "/usr/lib/python2.6/site-packages/numpy/lib/__init__.py", line 13, in <module> from polynomial import * File "/usr/lib/python2.6/site-packages/numpy/lib/polynomial.py", line 11, in <module> import numpy.core.numeric as NX AttributeError: 'module' object has no attribute 'core' Expected results: The game should run. Additional info: numpy-1.3.0-1.fc11.i586 was the previous package I had installed, and it worked fine. I also maintain slingshot :) Slingshot requires pygame which requires numpy. The update to numpy was to split off the f2py portion into a subpackage. Checks were done on all of numpy's dependencies, and I didn't think pygame needed f2py. I need to determine whether slingshot or pygame was broken by this. Here's what I'd like you to do: 1. yum install solarwolf, and see if it works. It also uses pygame. 2. yum install numpy-f2py, and re-test slingshot and solarwolf. 3. Post the results here. Thanks! Thanks for your help. Solarwolf had the exact same problem as slingshot. Once I installed numpy-f2py, both games worked fine. Sounds like we have some missing dependencies. Solarwolf also printed the following warning to the console, but it didn't seem to have any effect on the game. [jsmith@localhost /]$ solarwolf /usr/lib/python2.6/site-packages/pygame/sysfont.py:139: DeprecationWarning: os.popen3 is deprecated. Use the subprocess module. flin, flout, flerr = os.popen3('fc-list : file family style') Exception TypeError: TypeError("'NoneType' object is not callable",) in <bound method Popen.__del__ of <subprocess.Popen object at 0x872c10c>> ignored Awesome. I'll modify pygame to require numpy-f2py. Thanks for your help! Built in rawhide and F-11, Freeze Exception ticket: Tagged for F-11.
An Emerging Project For McCain Food Ltd Finance Essay

McCain, the world's largest frozen chips producer, is going to invest in two projects, 'Waste Lagoon' and 'Wind Turbine system', where both of the projects are for producing alternative sources of energy. Two engineering companies (Omega Alternatives Plc and Alpha Renewable Plc) are in a bid for tender for the project. By applying all of the investment appraisal techniques - for example, ARR, PP, NPV and IRR - to both projects, the overall results strongly support the Waste Lagoon project. For selecting the better company for tender, the financial statements of the last two years of those two companies have been analysed by using different financial ratios - for example, ROCE, Capital Turnover, Net Profit Margin and Gross Profit Margin for determining the profitability ratios, and Acid Test, Stock Turnover, Debtors Collection period and Creditor Payment period for determining the liquidity and efficiency ratios of the two companies. The overall result of this ratio analysis is favourable to Alpha Renewable Plc, and consequently it has been recommended to give the tender for the Waste Lagoon project to Alpha Renewable Plc because of its strong financial stability compared to Omega Alternatives Plc.

McCain Food Ltd is one of the largest frozen chips producers in the world. Its first processing factory was opened in New Brunswick, Canada in 1957, owned and managed by the McCain family. The company grew rapidly, and it has now become the market leader because of its continuous innovation in both variety and quality. McCain Foods' Whittlesey plant in Cambridgeshire turns potatoes into bags of McCain chips. As a major user of energy for its production process, McCain is seeking to reduce how much gas and electricity it uses, and it now wants to invest in two projects, 'Wind turbine system' and 'Waste Water treatment system', at the Whittlesey plant as an alternative source of energy, which will cut its production cost and help to reduce its carbon footprint as well. The two projects together will need a capital outlay of £150 million. McCain therefore needed to evaluate the expected financial benefits of both projects before proceeding. The company has also selected two engineering companies (Omega Alternatives and Alpha Renewable) for tender, and it needs to know which company is better for the projects. I have been commissioned by the company to carry out a comprehensive investment appraisal of the two projects, a financial analysis of the two companies selected for tender, and an assessment of appropriate sources of funding if needed.

Methodology, Findings & Discussion

Investment Appraisal on the two projects

Given the importance of investment decisions to the viability of a business, it is essential that investment proposals are all properly screened. Ensuring that the business uses appropriate methods of evaluation is an important part of this screening process. Research shows that there are basically four methods used in practice by businesses throughout the world to evaluate investment opportunities:

Accounting Rate of Return (ARR)
Payback Period (PP)
Net Present Value (NPV)
Internal Rate of Return (IRR)

(Ref: Peter Atrill and Eddie McLaney, Management Accounting for Decision Makers, 5th Edition, p. 246)

Accounting Rate of Return

The accounting rate of return method takes the average accounting profit that the investment will generate and expresses it as a percentage of the average investment in the project, as measured in accounting terms.

Thus, ARR = (average annual profit / average investment to earn that profit) x 100

But for the two projects we have taken the denominator as 'initial investment' rather than average investment.

ARR of 'Wind Turbine' project: 28.6%
ARR of 'Waste Lagoon' project: 30.64%

(See Appendix A.)

According to the writers Peter Atrill and Eddie McLaney, users of ARR should apply the following decision rules:

For any project to be acceptable it must achieve a target ARR as a minimum.
Where there are competing projects that all seem capable of exceeding this minimum rate, the one with the higher or highest ARR would normally be selected.

(Ref: Peter Atrill and Eddie McLaney, Management Accounting for Decision Makers, 5th Edition, p. 248)

It is clear from the table that the Waste Lagoon project will be more profitable because of its higher ARR.

Merits
It clearly shows the profitability of the projects.
It allows easy comparison between projects.
The opportunity cost of investment can be taken into account.

Demerits
It is a more complex method.
It does not take into account the effects of inflation on the value of money over a time period.

Payback Period

The payback period is the length of time it takes for an initial investment to be repaid out of the net cash inflows from a project. According to the writers Peter Atrill and Eddie McLaney, the decision rule for using PP is:

For a project to be acceptable it would need to have a payback period shorter than a maximum payback period set by the business.
If there were two or more competing projects that were both shorter than the maximum payback period requirement, the decision maker should select the project with the shorter payback period.

(Ref: Peter Atrill and Eddie McLaney, Management Accounting for Decision Makers, 5th Edition, p. 246)

PP of 'Wind Turbine' project: 3 years 6 months
PP of 'Waste Lagoon' project: 3 years 5 months

(See Appendix A.)

The PP of Waste Lagoon of 3 years 5 months means that the project will be able to recover its initial investment by that time from the commencement of the project, whereas the other project will take one month more to recover its initial investment.

Merits
It is extremely simple.
It helps prevent cash flow problems, since money will be recovered as quickly as possible.

Demerits
Cash earned after the payback period is ignored.
It does not account for the real value of money.

Net Present Value

Under NPV, net cash flows are discounted to their present value and then compared to the capital outlay required by the investment. The difference between these two amounts is referred to as the NPV. A project is accepted when the net present value is zero or positive.

NPV = Total PV of future cash flows - Initial investment

NPV of 'Wind Turbine' project (£ mill): (5.692)
NPV of 'Waste Lagoon' project (£ mill): 0.154

(See Appendix A.)

In the case of the Waste Lagoon project, the NPV is positive, so we should accept the project. Investing in this project will give the business a £0.154 million benefit. The gross benefits from investing in this project are worth a total of £(50 + 0.154) = £50.154 million today, and since the business can get the benefits for just £50 million today, the investment should be made. On the other hand, the Wind Turbine project's NPV is negative, which means the present value of the invested amount (£100 mill) is £(100 - 5.692) = £94.308 million today, which would make it a worthless investment.

Merits
Takes into account the timing of all cash flows.
Takes into account the time value of money.
Simple decision rule.
Can be used to compare alternative projects.

Demerits
Need to calculate the cost of capital.
May be more difficult for lay people to understand.
The basic model ignores inflation.
Ignores the timing of cash flows within individual years.

Internal Rate of Return

The IRR is the discount rate at which NPV is zero. The IRR is calculated by discounting the net cash flows using different discount rates until one gives a net present value of zero.

IRR = positive rate + [positive NPV / (positive NPV + negative NPV)] x range of rates

IRR of 'Wind Turbine' project: 12.78%
IRR of 'Waste Lagoon' project: 15.134%

(See Appendix A.)

In the case of the Waste Lagoon project, the NPV is positive at £0.134 million at a 15% discounting factor (see Appendix A), which implies that the rate of return that the project generates is more than 15%. But the NPV of the other project is negative (see Appendix A) at the same discounting factor, which implies that the rate of return that it generates is less than 15%. From the table it can be seen that the IRR of Waste Lagoon is 15.134%, meaning that the return on its investment will be 15.134%, a figure above the 15% cost of capital. On the other hand, the IRR of the Wind Turbine project is 12.78%, meaning that the return on its investment will be 12.78%, which is lower than the cost of capital (15%).

Merits
No need to decide on the cost of capital.
Provides a margin of error when IRR is compared with the hurdle rate (the minimum IRR requirement for the acceptance of a project).

Demerits
An investment may have more than one IRR.
Cannot choose between alternative projects using IRR.
Cannot be used for least-cost situations.
It completely ignores the scale of investment.

B) Financial analysis of the two engineering companies

Financial analysis is a comprehensive analysis of:

Strategy
Competition, regulation and taxes
Past, current and projected financial performance
Fundamental evaluation in relation to stock price
Planning for the future: operations, investment, financing

(Ref: Financial Statement Analysis by Professor SP Kothari, June 18, 2004)

There are different approaches to financial analysis, and ratio analysis is one of the popular approaches. Ratio analysis is a technique by which we can evaluate the financial performance and stability of an entity.

Types of ratios:
Profitability Ratios
Liquidity & Efficiency Ratios
Gearing Ratios
Investment Ratios

As McCain is not going to invest in those two companies, we only need to analyse the profitability ratios and the liquidity & efficiency ratios.

Profitability Ratios: "Profitability ratios are concerned with the effectiveness of the business in generating profit. A very popular means of assessing a business is to assess the amount of wealth generated for the amount of wealth invested." (Eddie McLaney, Business Finance, 7th edition, Prentice Hall)

Return on Capital Employed (ROCE): "ROCE is used in finance as a measure of the returns that a company is realizing from its capital employed." (Wikipedia definition)

In the case of Omega Alternatives, the trend of ROCE decreased by 0.55% in 2008 from 2007, which implies that their efficiency in generating revenue from resources and management's ability to control cost has decreased.
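For reference, the standard formula behind this ratio (the essay's appendix figures are not reproduced here, so this is just the conventional definition):

\[
\text{ROCE} = \frac{\text{Operating profit}}{\text{Capital employed}} \times 100\%
\]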
Thus, ARR = (Average annual profit / Average investment to earn that profit) x 100. For the two projects, however, we have taken the denominator as the initial investment rather than the average investment.

ARR of 'Wind Turbine' project: 28.6%; ARR of 'Waste Lagoon' project: 30.64% (see Appendix A).

According to Peter Atrill and Eddie McLaney, users of ARR should apply the following decision rules:

For any project to be acceptable it must achieve a target ARR as a minimum.
Where there are competing projects that all seem capable of exceeding this minimum rate, the one with the higher or highest ARR would normally be selected.

(Ref: Peter Atrill and Eddie McLaney, Management Accounting for Decision Makers, 5th Edition, p. 248)

It is clear from these figures that the Waste Lagoon project will be more profitable because of its higher ARR.

Merits: It clearly shows the profitability of the projects. It allows easy comparison between projects. The opportunity cost of investment can be taken into account.
Demerits: It is a more complex method. It does not take into account the effects of inflation on the value of money over time.

Payback Period

The payback period is the length of time it takes for an initial investment to be repaid out of the net cash inflows from a project. According to Atrill and McLaney, the decision rule for using PP is:

For a project to be acceptable it would need to have a payback period shorter than a maximum payback period set by the business.
If there were two or more competing projects that were both shorter than the maximum payback period requirement, the decision maker should select the project with the shorter payback period.

(Ref: Peter Atrill and Eddie McLaney, Management Accounting for Decision Makers, 5th Edition, p. 246)

PP of 'Wind Turbine' project: 3 years 6 months; PP of 'Waste Lagoon' project: 3 years 5 months (see Appendix A).

The PP of Waste Lagoon of 3 years 5 months means that the project will be able to recover its initial investment by that time from the commencement of the project, whereas the other project will take one month more to recover its initial investment.

Merits: It is extremely simple. It helps prevent cash flow problems, since money will be recovered as quickly as possible.
Demerits: Cash earned after the payback period is ignored. It does not account for the time value of money.

Net Present Value

Under the NPV method, net cash flows are discounted to their present value and then compared to the capital outlay required by the investment. The difference between these two amounts is referred to as the NPV. A project is accepted when the net present value is zero or positive.

NPV = Total PV of future cash flows - Initial investment

NPV of 'Wind Turbine' project: £(5.692) million; NPV of 'Waste Lagoon' project: £0.154 million (see Appendix A).

In the case of the Waste Lagoon project, the NPV is positive, so we should accept the project. Investing in this project will give the business a benefit of £0.154 million. The gross benefits from investing in this project are worth a total of £(50 + 0.154) = £50.154 million today, and since the business can get these benefits for just £50 million today, the investment should be made. On the other hand, the Wind Turbine project's NPV is negative, meaning the present value of the invested amount (£100 million) is (100 - 5.692) = £94.308 million today, which would make it a worthless investment.

Merits: Takes into account the timing of all cash flows. Takes into account the time value of money. Simple decision rule. Can be used to compare alternative projects.
Demerits: Need to calculate the cost of capital. May be more difficult for lay people to understand. The basic model ignores inflation. Ignores the timing of cash flows within individual years.

(The NPV and IRR arithmetic is illustrated in the sketch that follows.)
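For clarity, the appraisal arithmetic used above can be written out compactly. The Python sketch below uses illustrative cash flows, not the actual figures from Appendix A; the interpolation function mirrors the IRR formula quoted in the next section.

def npv(rate, outlay, inflows):
    """Discount each year's net cash inflow and subtract the initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(inflows, start=1)) - outlay

def irr_interpolated(low_rate, high_rate, outlay, inflows):
    """IRR = positive rate + (positive NPV / (positive NPV + |negative NPV|)) x range."""
    pos = npv(low_rate, outlay, inflows)   # should be positive at the lower rate
    neg = npv(high_rate, outlay, inflows)  # should be negative at the higher rate
    return low_rate + pos / (pos + abs(neg)) * (high_rate - low_rate)

flows = [12, 14, 15, 16, 18]               # illustrative net inflows (in £m), years 1-5
print(round(npv(0.15, 50, flows), 3))      # accept the project if this is >= 0
print(round(irr_interpolated(0.10, 0.20, 50, flows), 4))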
Internal Rate of Return

The IRR is the discount rate at which the NPV is zero. The IRR is calculated by discounting the net cash flows using different discount rates until one gives a net present value of zero.

IRR = positive rate + [positive NPV / (positive NPV + negative NPV)] x range of rates

IRR of 'Wind Turbine' project: 12.78%; IRR of 'Waste Lagoon' project: 15.134% (see Appendix A).

In the case of the Waste Lagoon project, the NPV is a positive £0.134 million at a 15% discounting factor (see Appendix A), which implies that the rate of return that the project generates is more than 15%. The NPV of the other project is negative (see Appendix A) at the same discounting factor, which implies that the rate of return it generates is less than 15%. The figures show that the IRR of Waste Lagoon is 15.134%, meaning that the return on its investment will be 15.134%, above the 15% cost of capital. On the other hand, the IRR of the Wind Turbine project is 12.78%, meaning that the return on its investment will be 12.78%, which is lower than the cost of capital (15%).

Merits: No need to decide on a cost of capital. Provides a margin of error when the IRR is compared with the hurdle rate (the minimum IRR required for the acceptance of a project).
Demerits: An investment may have more than one IRR. One cannot choose between alternative projects using IRR alone. It cannot be used for least-cost situations. It completely ignores the scale of investment.

B) Financial Analysis of the Two Engineering Companies

Financial analysis is a comprehensive analysis of:

Strategy
Competition, regulation and taxes
Past, current and projected financial performance
Fundamental valuation in relation to stock price
Planning for the future: operations, investment, financing

(Ref: Financial Statement Analysis, Professor S. P. Kothari, June 18, 2004)

There are different approaches to financial analysis, and ratio analysis is one of the most popular. Ratio analysis is a technique by which we can evaluate the financial performance and stability of an entity.

Types of ratios:

Profitability ratios
Liquidity and efficiency ratios
Gearing ratios
Investment ratios

As McCain is not going to invest in these two companies, we only need to analyse the profitability ratios and the liquidity and efficiency ratios.

Profitability Ratios

"Profitability ratios are connected with the effectiveness of the business in generating profit. A very popular means of assessing a business is to assess the amount of wealth generated for the amount of wealth invested." (Eddie McLaney, Business Finance, 7th edition, Prentice Hall)

Return on Capital Employed (ROCE)

"ROCE is used in finance as a measure of the returns that a company is realizing from its capital employed." (Wikipedia definition)

In the case of Omega Alternatives, the ROCE decreased by 0.55% from 2007 to 2008, which implies that its efficiency in generating revenue from resources, and its management's ability to control costs, has decreased.
On the other hand, Alpha has a higher ROCE than Omega over the period in question, and there is also a rising trend in its ROCE, which means that Alpha's business is much more effective in generating revenue from its resources and possesses stronger management ability.

Capital Turnover

"Capital turnover is a measure indicating how effectively investment capital is used to produce revenues. Capital turnover is expressed as a ratio of annual sales to invested capital."

Omega's capital turnover was 1.62 times in 2007, which implies that Omega used its capital 1.62 times in that year to achieve its sales revenue. Both companies show a rising trend in capital turnover over the period, but Alpha used its employed capital 2.51 times in 2008, which is 0.76 times more than Omega over the same period. It is important to note, however, that "A high ratio is not necessarily beneficial if margins are so small that the net profit generated is unsatisfactory" (Eddie McLaney, Business Finance, 7th edition, Prentice Hall). So this measure (capital turnover) is related to the company's profit margin.

Omega's net profit margin was 13.15% in 2007, which implies that the company earned £13.15 million out of every £100 million of sales revenue after all of the expenses of running the business for that period. Though there is a slight decreasing trend in Alpha's net profit margin from 2007 to 2008, it possesses a higher net profit margin than Omega, which means Alpha has been earning more profit relative to its sales revenue.

A slight decline in gross profit margin of 0.51% from 2007 to 2008 can be seen at Alpha, whereas Omega's gross profit margin fell by 1.62% over the same period. It is also notable that Alpha's gross profit margin in 2008 was more than four times that of Omega, which implies that the amount of sales revenue remaining after the expenses of making the stock available to the customer is more than four times Omega's.

Liquidity and Efficiency Ratios

Liquidity ratios are used to try to assess how well the business manages its working capital. They are used to evaluate the solvency and financial stability of a business. There are different types of liquidity ratios:

Acid Test: The ratio of liquid assets to current liabilities is termed the acid test. It shows how stable a business is. The ratio should be at least 1:1.

Against the minimum required value for the acid test (1:1), both companies show a satisfactory result. It is also notable that both companies show a declining trend in liquid assets from 2007 to 2008, though Alpha possessed slightly more liquid assets than Omega in 2008.

Stock Turnover: The ratio of cost of sales to average stock is termed stock turnover.

Both companies show a rising trend in stock turnover from 2007 to 2008, but Omega's stock turnover is the more favourable over the period: Omega's stock turned over 2.23 times in 2008, which is higher than Alpha's.

Debtor Collection Period: This ratio tells us how long, on average, following a sale on credit, trade debtors take to meet their obligation to pay. A well-managed debtor policy will lead to debtors taking as short a time as possible to pay, without damaging good customer relations. (Eddie McLaney, Business Finance, 7th edition, Prentice Hall, p. 54)

It is clear from the ratios that the lower the debtor collection period, the better the outcome.
The debtor collection period of Alpha is less than half that of Omega over the period in question. For example, Alpha was able to collect its debts in only 36.26 days in 2008, whereas Omega took more than double that time (89.63 days) to collect its debts in the same year.

Creditor Payment Period: This ratio tells us how long, on average, following a purchase on credit, the business takes to meet its obligation to pay for the goods or services bought. A well-managed creditor policy will lead to as much 'free' credit being taken as possible without damaging the goodwill of suppliers. The longer the creditor payment period, the better the outcome for the business.

Alpha showed a remarkable creditor payment period (15 times that of Omega) in 2008, which means that Alpha will be able to increase and improve its overall cash flows more than Omega thanks to the longer creditor payment period.

C) Graphical Representation (Cost Savings vs. Year of Operation) of the Two Projects

(Graph omitted: net cash flow on the Y axis against year of operation on the X axis, for both projects.)

From the graph it is clear that the net cash flows for both projects show an upward trend over the five-year period. In the case of the Wind Turbine project, the cost savings increase gradually over the period. In the case of the Waste Lagoon project, there is a sudden increase in cost savings starting from year 2, and a slightly sharper increase continues up to year 5. Extending the years of operation to 7, it can be predicted that the cost savings of both projects will gradually increase over the next two years, and consequently McCain will gain.

D) Summary and Conclusion

Taking into consideration all of the results of the financial appraisal measures, it can easily be judged which project will be more viable and feasible for McCain. (Summary table omitted; the key point is that the Wind Turbine project's IRR of 12.78% does not exceed the 15% cost of capital, which makes it a worthless investment.) The NPV and IRR values do not favour investing in the Wind Turbine project, even though its ARR and PP are somewhat favourable. All of the above values, on the other hand, are mostly favourable for the Waste Lagoon project, which will be viable and feasible in terms of all the measures mentioned. So, I recommend that McCain invest in the Waste Lagoon project.

Again, by analysing the financial statements of the two engineering companies, the ratio results summarised above show to which company the project should be handed as a tender: as stated earlier, Alpha Renewable Plc. (The ratio definitions used in this analysis are written out in the sketch below.)
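For reference, the ratio definitions can be expressed directly in code. The figures below are illustrative only, not the companies' actual statements:

def roce(operating_profit, capital_employed):
    """Return on capital employed, as a percentage."""
    return operating_profit / capital_employed * 100

def net_profit_margin(net_profit, sales_revenue):
    """Net profit earned per unit of sales revenue, as a percentage."""
    return net_profit / sales_revenue * 100

def acid_test(liquid_assets, current_liabilities):
    """Should be at least 1.0 for a financially stable business."""
    return liquid_assets / current_liabilities

def debtor_collection_days(trade_debtors, credit_sales):
    """Average days taken to collect debts after a credit sale."""
    return trade_debtors / credit_sales * 365

print(roce(18.0, 120.0))                   # 15.0 (%)
print(debtor_collection_days(22.0, 90.0))  # about 89 days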
http://www.ukessays.com/essays/finance/an-emerging-project-for-mccain-food-ltd-finance-essay.php
CC-MAIN-2014-42
refinedweb
2,813
55.07
As well as providing a C++ API, the Pantheios logging API library also provides a C API for logging in C compilation units. This article provides a quick tutorial on how to use the C API for adding logging to your C programs, and offers some contrasts between the C and C++ APIs. This tutorial will not cover issues such as configuring and linking to Pantheios; readers will be helped by first consulting the introductory article on building and using Pantheios.

Use of the Pantheios C API first requires explicit initialisation, via the functions pantheios_init() and pantheios_uninit(). pantheios_uninit() must be called once for each invocation of pantheios_init() that yields a return value >= 0. A negative value indicates that initialisation failed, in which case the only function that may be called is pantheios_getInitErrorString(). Both code paths are shown in the following example:

#include <pantheios/pantheios.h>
#include <stdio.h>
#include <stdlib.h>

extern const char PANTHEIOS_FE_PROCESS_IDENTITY[] = "pantheios-C";

int main(int argc, char** argv)
{
  int panres = pantheios_init();

  if(panres < 0)
  {
    fprintf(stderr, "Failed to initialise the Pantheios libraries: %s\n", pantheios_getInitErrorString(panres));
    return EXIT_FAILURE;
  }
  else
  {
    /* . . . rest of program */

    pantheios_uninit();
    return EXIT_SUCCESS;
  }
}

The requirement for explicit initialisation is in contrast to the C++ API, where initialisation is automatic upon inclusion of pantheios/pantheios.hpp. (If your link unit contains one or more C++ compilation units that include pantheios/pantheios.hpp, and your link unit is not a DLL, and you've not defined PANTHEIOS_NO_AUTO_INIT, then you can omit explicit initialisation in the main C source file. However, it's best to do it anyway: it's idiomatic for using Pantheios in C, and you might later remove, or rewrite in C, the C++ compilation unit(s), and then find that your program mysteriously fails to run, or even to tell you why!)

The Pantheios C API is based on the printf()-family of functions. This has consequences for syntax, robustness and the genericity and extensibility of the API. The main logging function in the API is pantheios_logprintf(). (The long name is to avoid any name clashes in the global C namespace, since it's highly likely that there'll be logprintf() functions out there.)

PANTHEIOS_CALL(int) pantheios_logprintf(
    pan_sev_t   severity
,   char const* format
,   ...);

pan_sev_t is a typedef for a 32-bit signed integer.

Syntactically, the specification of format strings and arguments is identical to printf(), as in:

int i = 10;
double d = 9.9;

pantheios_logprintf(PANTHEIOS_SEV_NOTICE, "i=%d, d=%G", i, d);

The only differences are that pantheios_logprintf() takes a severity level as its first parameter, and it's not necessary to specify a carriage return ('\n') in the format string. There are two other functions in the Pantheios C API: pantheios_logvprintf() and pantheios_logputs().

PANTHEIOS_CALL(int) pantheios_logvprintf(
    pan_sev_t   severity
,   char const* format
,   va_list     args);

PANTHEIOS_CALL(void) pantheios_logputs(
    pan_sev_t   severity
,   char const* message);

The former takes a va_list of arguments, in the same way as does vprintf(). The latter is a logging analogue for puts(), which processes the single C-style string directly through the Pantheios Core and out to the back-end(s). Consequently it is recommended for use when programs are experiencing unexpected behaviour, as a best-chance attempt at writing to the log before termination. (Note: as is the case with any function in such circumstances, success is not guaranteed.)
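pantheios_logvprintf() is what makes it possible to build your own variadic logging helpers. A minimal sketch follows; the wrapper name is mine, not part of the Pantheios API:

#include <stdarg.h>

#include <pantheios/pantheios.h>

/* Forward a caller's variable arguments to Pantheios at NOTICE severity. */
static int log_notice(char const* format, ...)
{
    va_list args;
    int     res;

    va_start(args, format);
    res = pantheios_logvprintf(PANTHEIOS_SEV_NOTICE, format, args);
    va_end(args);

    return res;
}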
The second consequence of the printf()-like syntax of the C API is the restriction to types that printf() understands: integers, floating-point types, and C-style strings. This is in stark contrast to the Pantheios C++ API, which understands a great many types out of the box, and is infinitely extensible. It also means that the C API is not type-safe. Passing an integer to pantheios_logprintf() when a C-style string is expected is just as likely to crash the process as it is for printf(). Once again, this is in contrast with the C++ API, which is 100% type-safe.

Unlike the C++ API, there is no assistance available in the C API for the logging of custom types. Consider the following example, where an IPv4 address argument is logged on entry to a function:

int connect_to_peer(struct in_addr const* addr)
{
  pantheios_logprintf(PANTHEIOS_SEV_DEBUG
    , "connect_to_peer(%u.%u.%u.%u)"
    , (NULL == addr) ? 0 : ((addr->s_addr & 0x000000ff) >> 0)
    , (NULL == addr) ? 0 : ((addr->s_addr & 0x0000ff00) >> 8)
    , (NULL == addr) ? 0 : ((addr->s_addr & 0x00ff0000) >> 16)
    , (NULL == addr) ? 0 : ((addr->s_addr & 0xff000000) >> 24));
  . . .

This is a lot of heavy boilerplate, and must be repeated (carefully) in each place an in_addr instance must be logged. Contrast this with the C++ API, which understands the in_addr type along with many others, and can be readily extended to work with any type you wish:

int connect_to_peer(struct in_addr const* addr)
{
  pantheios::log_DEBUG("connect_to_peer(", addr, ")");
  . . .

An alternative approach for C, which offers greater robustness and transparency of the application code, is to use a helper converter function, as in:

char const* convert_addr(char* buff, size_t cchBuff, struct in_addr const* addr);

int connect_to_peer(struct in_addr const* addr)
{
  char buff[16]; /* space enough for IPv4 */

  pantheios_logprintf(PANTHEIOS_SEV_DEBUG
    , "connect_to_peer(%s)"
    , convert_addr(&buff[0], STLSOFT_NUM_ELEMENTS(buff), addr));
  . . .

The downside is that the conversion always takes place, regardless of whether logging at the Debug level is currently enabled or not. With the previous explicit form, no conversion is undertaken until after the severity level is checked. The downloadable project contains implementations of both these approaches, along with the main program, to illustrate the differences between them.
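The article doesn't reproduce the helper's body, but a plausible sketch is short. This is my illustration, not the downloadable project's code; it assumes a C99 snprintf() and mirrors the masking arithmetic of the explicit form above:

#include <stdio.h>

char const* convert_addr(char* buff, size_t cchBuff, struct in_addr const* addr)
{
    unsigned long s = (NULL == addr) ? 0 : (unsigned long)addr->s_addr;

    /* Same masking/shifting as the explicit logging statement above. */
    snprintf(buff, cchBuff, "%lu.%lu.%lu.%lu"
        , (s & 0x000000ff) >>  0
        , (s & 0x0000ff00) >>  8
        , (s & 0x00ff0000) >> 16
        , (s & 0xff000000) >> 24);

    return buff;
}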
We've seen how to use the Pantheios C API, how to initialise it (including reporting errors in the initialisation), how to log basic types, and how to log custom types. We've seen that when logging custom types with the C API, you are forced to make compromises between efficiency, reuse and expressiveness. With the C++ API no such compromises are necessary - it is 100% type-safe and only ever performs conversions when they're going to be used. Naturally, the advice from the Pantheios team is to prefer the C++ API when you can (in C++ compilation units); when you can't, the C API offers many, but not all, of the benefits of Pantheios.

That's a brief introduction to using the Pantheios C API. There's a whole lot more to the world of Pantheios, and in future articles I will explain more features, as well as cover best practice and discuss how Pantheios offers 100% type-safety with unbeatable performance. Please feel free to post questions on the forums on the Pantheios project site on SourceForge.
http://www.codeproject.com/KB/cpp/pantheios_C.aspx
crawl-002
refinedweb
1,101
52.49
July 26, 2012 at 12:01 pm | This looks really cool. When I saw the title I figured that SmartCheck used a solver to create inputs maximizing some coverage metric, like DART, Klee, and SAGE. Hasn't anyone done that yet? If not, do you know why?

July 26, 2012 at 5:53 pm | Yes, the name 'SmartCheck' may have been a little over zealous. :) I don't think there's been much work in concolic-like testing for functional languages. There is some recent work in this direction out of NEU. One challenge is the liberal use of higher-order functions in languages like Haskell or Racket. For example, consider the simple (but contrived) Haskell function:

h :: Eq a => (a -> a -> a) -> [a] -> Int
h _ [] = 1
h f ls = if head ls == foldl1 f ls then 0 else 1

To test each branch requires symbolically reasoning about the input f, which is a function. (Note though that QuickCheck can generate random monomorphic functions for testing.) But I take it that your point is that by limiting ourselves to inputs like algebraic functions, a symbolic solver might be possible. That's an interesting idea, but I don't know of work in this direction.

August 10, 2012 at 1:27 pm Thanks for the links Lee and Ranjit. I guess there's an interesting tension where functional languages are both easier and harder targets for concolic testing than are imperative languages. Wouldn't some of these issues become easier if testing were done at the whole-program level, rather than at the unit level? In that case inputs are just bits…

July 27, 2012 at 10:19 am | Hi John, I believe Suresh Jagannathan is/was also working on this… Ranjit.
August 1, 2012 at 12:27 pm I use 'ghc -fforce-recomp -ddump-splices ../examples/Div0.hs' and it gives:

instance Arbitrary M where
  arbitrary = do
    x <- choose (0 :: Int, 2)
    case x of
      0 -> do { x1 <- arbitrary; return (C x1) }
      1 -> do { x1 <- arbitrary; x2 <- arbitrary; return (A x1 x2) }
      2 -> do { x1 <- arbitrary; x2 <- arbitrary; return (D x1 x2) }
      _ -> error "FATAL ERROR: Arbitrary instance, logic bug"

Yup, there's no size restriction. I'll look at the code although it's TH.
http://leepike.wordpress.com/2012/07/26/smartcheck/?like=1&_wpnonce=ad1ddc05d8
CC-MAIN-2014-35
refinedweb
696
68.91
Often times the best approach to a custom element in a product is to insert it into a frame to display to the end user. This can be useful for business intelligence tools, forms, maps, and other custom elements. In the example below a Jotform is used for collecting the data needed to complete a user registration. This allows the same form to be reused outside of the Kleeen-built application in places like social media or a website.

The first step is to locate the correct custom folder. This one is located at: myproject/apps/cloud/src/app/modules/custom/registration/components/custom-view-u-sw-ks-svi-fcx-q-qiw-hnqgu-5-n.js. You will replace myproject with the name of your repository and the custom view name with the name of your reserved folder. If you used a placeholder image in your prototype, the name of that image can be useful in confirming the folder location. Once you have located the correct folder, the starting code will look something like this:

import React from 'react';
import { KUIConnect } from '@kleeen/core-react';
import { BackgroundUrl } from '@kleeen/react/components';

function CustomViewUSwKsSviFcxQQiwHnqgu5N({ translate, ...widget }) {
  return (
    <>
      <BackgroundUrl url="" />
    </>
  );
}

export default KUIConnect(({ translate }) => ({ translate }))(CustomViewUSwKsSviFcxQQiwHnqgu5N);

You will replace that code with the iframed code provided by the third-party application, as seen in the screenshot above (a sketch of the result appears at the end of this article). If there is a script portion required by that frame, you will need to separate that code and place it in the index folder as described here. If you encounter difficulties using an iframe, please contact support via email or the chat window inside the application.
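For instance, the edited component might end up looking like the sketch below. The form URL and sizing are placeholders; substitute the embed URL that Jotform (or your own third-party tool) provides:

import React from 'react';
import { KUIConnect } from '@kleeen/core-react';

function CustomViewUSwKsSviFcxQQiwHnqgu5N({ translate, ...widget }) {
  return (
    <iframe
      src=""
      title="User registration form"
      width="100%"
      height="640"
      frameBorder="0"
    />
  );
}

export default KUIConnect(({ translate }) => ({ translate }))(CustomViewUSwKsSviFcxQQiwHnqgu5N);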
https://kleeensoftware.zendesk.com/hc/en-us/articles/4403015676947-Adding-an-iFramed-Object-to-a-Custom-Page-in-an-Application
CC-MAIN-2022-40
refinedweb
277
51.99
#include <stdio.h> int higher_num(int x, int y); int main() { int x, y; printf("Enter an integer: "); scanf("%d", &x); printf("Enter another integer: "); scanf("%d", &y); printf("The higher number is %f\n", higher_num(x, y)); return(0); } int higher_num(int x, int y) { return( if (x > y) printf("%f\n", x); else printf("%f\n", y); ) } ok... i dunno where im going wrong. the question is: Write a program with a function (give it a meaningful name) that takes as parameters two integers and prints the larger integer with a meaningful message. In main, use scanf to get two integers from the user and pass them to your new function. Hint: test the function by calling it from main on two integer constants - after you know it works correctly, add the scanfs. i think i have it right.. but apparently its wrong..
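For reference, here is a suggested fix (not part of the original thread). The two bugs are that higher_num tries to return an if statement, which isn't valid C, and that %f is used to print ints. Returning the larger value and printing it with %d fixes both:

#include <stdio.h>

int higher_num(int x, int y);

int main(void)
{
    int x, y;

    printf("Enter an integer: ");
    scanf("%d", &x);
    printf("Enter another integer: ");
    scanf("%d", &y);

    /* %d, not %f: the values (and the return type) are ints */
    printf("The higher number is %d\n", higher_num(x, y));
    return 0;
}

/* Return the larger of the two integers; let the caller do the printing. */
int higher_num(int x, int y)
{
    return (x > y) ? x : y;
}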
http://www.dreamincode.net/forums/topic/56907-calling-user-input-in-the-function/
CC-MAIN-2013-20
refinedweb
145
69.52
Text analysis library based on the Annotated Suffix Tree method

Project description

EAST stands for the Enhanced Annotated Suffix Tree method for text analysis.

Installation

To install EAST, run:

$ pip install EAST

This may require admin permissions on your machine (and should then be run with sudo). EAST comes both as a CLI application and as a Python library (which can be imported and used in Python code).

How to - CLI application

$ east [-s] [-d] [-f <table_format>] [-a <ast_algorithm>] keyphrases table <keyphrases_file> <directory_with_txt_files>

The -f option specifies the format in which the table should be printed. The format is XML by default; the -f option can also take CSV as its parameter. If you want to print the output to some file, just redirect the EAST output (e.g. by appending > filename.txt to the command in Unix). (Sample output omitted.)

The keyphrases graph command builds a graph of implications between keyphrases (with a configurable referral confidence parameter). A keyphrase counts as occurring in a text if its presence score for that text exceeds some threshold [Mirkin, Chernyak, & Chugunova, 2012].

$ east [-s] [-d] [-f <graph_format>] [-c <referral_confidence>] [-r <relevance_threshold>] [-p <support_threshold>] [-a <ast_algorithm>] keyphrases graph <keyphrases_file> <directory_with_txt_files>

- The -s, -d and -a options configure the algorithm for computing the matching scores (exactly as for the keyphrases table command).
- The -p option configures the threshold for graph node support (the number of documents "containing" the corresponding keyphrase according to the AST method), starting from which the nodes get included into the graph.
- The -f option configures the format of the output graph description.
- The -c option stands for referral confidence and controls the confidence level above which the implications between keyphrases are considered to be strong enough to be added as graph arcs. The confidence level should be a float in [0; 1] and is 0.6 by default.
- The -r option sets the relevance threshold for keyphrases.

How to - Python library

The example below shows how to use the EAST package in code. Here, we build an Annotated Suffix Tree for a collection of two strings ("XABXAC" and "HI") and then calculate matching scores for two queries ("ABCI" and "NOPE"):

from east.asts import base

ast = base.AST.get_ast(["XABXAC", "HI"])

print ast.score("ABCI") # 0.1875
print ast.score("NOPE") # 0

The get_ast() method takes the list of input strings and constructs an annotated suffix tree, using suffix arrays by default as the underlying data structure (this is the most efficient implementation known). The algorithm used for AST construction can be optionally specified via the second parameter to get_ast() (along with "easa", its possible values include "ast_linear" and "ast_naive").

Working with real texts already requires some preprocessing, such as splitting a single input text into a collection of small-sized strings, which later enables matching scores for queries to be more precise. There is a special method text_to_strings_collection() in EAST which does that for you. The following example processes a real text collection and calculates matching scores for an input query:

import itertools

from east.asts import base
from east import utils

text_collection = [...] # e.g. retrieved from a set of *.txt files
strings_collection = itertools.chain.from_iterable(
    [utils.text_to_strings_collection(text) for text in text_collection])

ast = base.AST.get_ast(strings_collection)

print ast.score("Hello, world") # will be in [0; 1]
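Building on the documented API, here is a small sketch of what the keyphrases table command does under the hood: one AST per text, scored against each keyphrase. The file names and contents are illustrative:

from east.asts import base
from east import utils

texts = {
    "doc1.txt": "XABXAC and some more text...",
    "doc2.txt": "HI there...",
}
keyphrases = ["annotated suffix tree", "text analysis"]

# Build one AST per text from its preprocessed strings collection.
asts = {}
for name, text in texts.items():
    asts[name] = base.AST.get_ast(utils.text_to_strings_collection(text))

# Score every keyphrase against every text (each score is in [0; 1]).
for phrase in keyphrases:
    for name, ast in asts.items():
        print("%s / %s: %.4f" % (phrase, name, ast.score(phrase)))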
https://pypi.org/project/EAST/
CC-MAIN-2021-49
refinedweb
545
54.42
A customer was running into this problem with a shell extension:

I am writing a shell namespace extension. I need to get data from a COM server, which requires impersonation via CoInitializeSecurity with RPC_C_IMP_LEVEL_IMPERSONATE. As I am just writing an extension loaded into explorer.exe, I am not able to call CoInitialize or CoInitializeSecurity any more from my extension. Is there a way I can start explorer.exe by setting RPC_C_IMP_LEVEL_IMPERSONATE in its COM initialization? I was browsing through the web, and explorer.exe seems to take some settings from the registry, but couldn't find anything related to this one.
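(A hedged sketch, not from the original post: the usual local alternative is to leave Explorer's process-wide security settings alone and instead raise the impersonation level on just the proxy the extension talks to, via CoSetProxyBlanket.)

#include <objbase.h>

// Adjust security for one proxy only, instead of calling
// CoInitializeSecurity for the whole of explorer.exe.
HRESULT SetImpersonationOnProxy(IUnknown* punkProxy)
{
    return CoSetProxyBlanket(
        punkProxy,
        RPC_C_AUTHN_DEFAULT,           // authentication service
        RPC_C_AUTHZ_DEFAULT,           // authorization service
        COLE_DEFAULT_PRINCIPAL,        // server principal name
        RPC_C_AUTHN_LEVEL_DEFAULT,     // authentication level
        RPC_C_IMP_LEVEL_IMPERSONATE,   // the level the customer wanted
        NULL,                          // use the current client identity
        EOAC_DEFAULT);                 // capability flags
}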
This reminds me of a former co-worker of mine. I re-formatted his code to match our coding style conventions and he accused me of walking into his house and rearranging everything. At the time I had been working there for years and him months. Go figure he didn't last very long.

Developers with an attitude like the one presented in this article are the primary reason I try my hardest to keep my shell extensions as close to stock standard as I can. Inevitably when I come across someone complaining about Explorer's stability they'll have a million new entries in their context menus.

After following the "use a local solution" link, I nominate "QueryBlanket" as the best method name ever.

Ah, but consider that he had at least accepted ownership!

Another very hard-to-grasp concept is that when shipping a 64-bit product which has a shell extension, you really want to ship both 32- and 64-bit shell extension DLLs. After the third time I got the reply "but on 64-bit Windows Explorer is 64-bit, so a 32-bit shell extension is useless", I stopped trying to educate people. People do believe that Explorer is the only entity that has the holy right to touch shell extensions.

"People do believe that Explorer is the only entity that has the holy right to touch shell extensions." Not to mention that you can run Explorer in 32-bit mode by manually going to %windir%\syswow64\explorer.exe.

@Teo: "Another very hard-to-grasp concept is that when shipping a 64-bit product which has a shell extension, you really want to ship both 32- and 64-bit shell extension DLLs." In fact, this is a good idea when shipping a 32-bit product too.

I understand it was probably a good idea at the time, when programmers could be expected to actually know what they were doing and memory was limited, but wouldn't it be time to move shell extensions outside Explorer again? Some sort of "ShellExtensionProvider.exe" that would be started by Explorer (sort of like MSN Messenger always starts ctfmon.exe or TortoiseSVN always starts TSVNCache.exe) and would answer "shell extension requests" from explorer views hosted in all processes, 32- or 64-bit. Of course, because of backwards compatibility, the current shell extension model would have to stick around for another 10 years, but slowly all major extensions would be migrated to the new model.

@Koro: And maybe we'll finally get to write managed-code shell extensions.

BTW, ctfmon.exe is a Windows process that has something to do with sound. It's not a part of Messenger.

I can see the cycle. As you speed up the machines, you can then dumb down the programmers and still end up with the same responsiveness of machines twenty years ago. Then despite upgrading your machine every couple of years the machine still feels as slow. Brilliant!!!! That'll keep us on the gravy train for life!

@porter: Did you actually use a computer, say, 20 years ago? Those things were slow compared to an adequate machine running 7 or XP (I won't touch Vista here …), if only because of I/O. Actually, that's still the most limiting factor.

Since MS pwns the house, they don't care about whether changing the carpet could make the guests fall or not visit their house again. With this same mentality, I guess, is why IColumnProvider was removed, citing pseudo performance reasons. I have loyally stuck with XP because of this (extensible column handlers are far more valuable to me than fancy Aero or Peek). How the hell can I display certain info like extension or size (especially folder size) through extensions? Not possible with the Property system, which only allows extracting static metadata from files. I hope Windows 8 or Windows 7 SP1 brings back IColumnProvider. Whatever happened to giving choice to users/developers?

"Since MS pwns the house, they don't care about whether changing the carpet could make the guests fall or not visit their house again." Unless said guests are important enough. Now, in the open source world …

A problem is that not all shell extensions can be out of process. Notably icon handlers, the only type of shell extension I've written yet.

"Not to mention that you can run Explorer in 32-bit mode by manually going to %windir%\syswow64\explorer.exe" Not anymore in Windows 7 x64. Anyone know why?

Years of playing FPSes have left me unable to see ctfmon as anything other than 'Capture The Flag Monitor.'

Why stop there, why not have a separate process for every shell extension?

@porter: I posit that having ONE copy of the shell extension running in a separate process, instead of having a copy of it loaded into each process that shows an Open/Save dialog box, would have net memory savings in the end, as it would not need to allocate its per-instance data for each process. It would also remove the need for shell extension helper processes like TSVNCache.exe. It would also have the benefit of not "polluting" the list of DLLs loaded into a process as soon as an Open/Save dialog would be shown.

@Jonathan: Actually, ctfmon.exe is related to msctf.exe, which I *think* is related to IMEs and advanced text input… which MSN indirectly uses since it uses windowless RichEdits for its input.

@anon: "Since MS pwns the house" Thanks for starting with that at the beginning of your long post. It let me know it was troll spam, and I didn't have to waste my time reading it. To the other trolls: Please be as courteous in future posts as @anon was in this thread.

@Ken White, how exactly is a troll defined? One who complains about useful features lacking in Windows which were there in the past? My tone is because Microsoft doesn't listen, this way or that way. The shell team and the Windows Media team at MS need to be crucified under a guillotine.

I have a problem with a product M's DLL that gets loaded into my process (seems to be some sort of create window hook?) and initialises COM in such a way that my call to OleInitialize fails with CO_E_OLE1DDE_DISABLED…
https://blogs.msdn.microsoft.com/oldnewthing/20091202-00/?p=15823
CC-MAIN-2017-43
refinedweb
1,151
64
Library Interfaces and Headers

glob.h - pathname pattern-matching types

#include <glob.h>

The <glob.h> header defines the structures and symbolic constants used by glob(3C).

The structure type glob_t contains the following members:

size_t gl_pathc  /* count of paths matched by pattern */
char **gl_pathv  /* pointer to a list of matched pathnames */
size_t gl_offs   /* slots to reserve at the beginning of gl_pathv */

The following constants are provided as values for the flags argument:

GLOB_APPEND
Append generated pathnames to those previously obtained.

GLOB_DOOFFS
Specify how many null pointers to add to the beginning of gl_pathv.

GLOB_ERR
Cause glob() to return on error.

GLOB_MARK
Each pathname that is a directory that matches pattern has a slash appended.

GLOB_NOCHECK
If pattern does not match any pathname, then return a list consisting of only pattern.

GLOB_NOESCAPE
Disable backslash escaping.

GLOB_NOSORT
Do not sort the pathnames returned.

The following constants are defined as error return values:

GLOB_ABORTED
The scan was stopped because GLOB_ERR was set or (*errfunc)() returned non-zero.

GLOB_NOMATCH
The pattern does not match any existing pathname, and GLOB_NOCHECK was not set in flags.

GLOB_NOSPACE
An attempt to allocate memory failed.

GLOB_NOSYS
Reserved.

See attributes(5) for descriptions of the following attributes. (Attributes table omitted.)

SEE ALSO

glob(3C), attributes(5), standards(5)
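A minimal usage sketch follows. glob() and globfree() themselves are documented in glob(3C); the pattern used here is illustrative:

#include <glob.h>
#include <stdio.h>

int main(void)
{
    glob_t g;

    /* GLOB_MARK appends a slash to each matched directory. */
    if (glob("*.c", GLOB_MARK, NULL, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc; i++)
            printf("%s\n", g.gl_pathv[i]);
        globfree(&g);
    }
    return 0;
}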
http://docs.oracle.com/cd/E23823_01/html/816-5173/glob.h-3head.html
CC-MAIN-2014-52
refinedweb
193
58.18
Background Information and Tasks

As background for the following chapters, this section describes basic techniques that you can use when you define client methods in Zen Mojo. It also contains a section on naming conventions to consider before starting to create Zen Mojo applications. It discusses the following topics:

General-purpose variables and functions
How to stash values in the client
How to convert a JSON object to a string

General-Purpose Client Variables and Functions

For reference, this section lists some key general-purpose variables and functions that you can use within a Zen Mojo client method:

this - This standard JavaScript variable represents the current instance of the template or page, depending on where you use the variable. That is, if you use this within a template method, it returns the template instance. If you use this within a page method, it returns the page instance.

zenPage - This InterSystems variable represents the current page object.

zen(id) - This InterSystems function returns a reference to the documentView or other component that has the given id. Note that this function cannot be used to access layout objects, which are not components. To access layout objects, you use methods of the documentView component; see "Interacting with Layout Objects," later in this book.

zenGet(property_or_array_item, optional_default_value) - This InterSystems function examines the given object property or array item and returns its value, an optional default value, or an empty string, depending on the scenario. Specifically, if the property or array item is defined, the function returns its value. If the item is not defined, and if the second argument is specified, this function returns the value specified by the second argument. If the item is not defined, and if the second argument is not specified, this function returns an empty string (''). For example, consider the following code:

if (myobj.prop4 == undefined) {
    var returnval = 'No information available';
}

That code is equivalent to the following:

var returnval = zenGet(myobj.prop4, 'No information available');

console.log(argument) - This standard JavaScript function generates an entry in the web console log.

alert(argument) - This standard JavaScript function generates a popup that displays the given message. InterSystems recommends that you use this function only for diagnostic purposes.

zenPage.showMessage(argument) - This InterSystems function generates a popup message that displays the given message. InterSystems recommends that you use this function for messages intended for the user.

For other variables and functions provided by InterSystems, see "Client Side Functions, Variables, and Objects" in Developing Zen Applications. For additional standard JavaScript options, see any suitable JavaScript documentation.

Stashing Values

In some cases, you may want to temporarily save client-side data so that you can use it within multiple layouts. For example, it may be necessary to collect data from the user via multiple layouts before submitting it to the server. Or you might want to display data from one layout within a different layout. Or you might need to keep track of an internal identifier. In such cases, you can stash values and then later use them.

To stash a value, save it in a property of the page instance or the template instance, depending on your preference.

To save a value in a property of the page instance, set a property of the zenPage variable.
The zenPage variable is available in client methods in both the page and template classes. Start the property name with an underscore character (_), which ensures that this property is defined only on the client. To avoid collision with internally used property names, consider starting your property names with _myApp or a similar short string.

To save a value in a property of the template instance, set properties of the this variable in a client method in a template class. Note that you can also use this in a page class. In that context, this refers to the page instance. To avoid confusion, it is best to choose a single context for the stashed values (page or template) and then use that context consistently.

When you no longer need the stashed value, use delete to remove it. For example: delete this._MyNewValues;

Converting a JSON Object to a String

Zen Mojo packages all communication between the client and server as JSON objects, as follows:

To retrieve data from the server, you (indirectly) call the template method %OnGetJSONContent(), which returns a Zen proxy object as output. Zen Mojo uses that to create an equivalent JSON object and then sends the JSON object to the client.

To submit data to the server, you call the submitData() method of the page. As input, you must use a JSON object that contains the data.

When you stash values, however, it is important to remember that you can stash only single JavaScript values (not objects). If you need to stash an entire JSON object, first use the utility method JSON.stringify(), which takes one argument, the JSON object, and returns a string. Then stash the returned string. The JSON.stringify() utility method is available in client methods.

Establishing Naming Conventions

It is worthwhile to establish naming conventions to avoid confusion. This section discusses class names, method names, content objects and JSON providers, and keys.

Zen Mojo programs are case-sensitive. Remember that a is not the same as A.

Class Names

InterSystems strongly recommends that you do not create any classes in a package called ZEN.Component using any combination of uppercase and lowercase characters. If you create a package called ZEN.Component, that interferes with the way that Zen generates client-side code. Also, note that because a Zen Mojo application can be easily extended to use multiple templates, you might want to place your template classes in a subpackage.

Because these classes are automatically projected to XML, there is an additional consideration. If multiple classes have the same short class name (that is, the class name without the package), be sure that the NAMESPACE parameter is unique for each such class. This rule is a consequence of the fact that the short class name becomes the name of a global XML element, and those elements must be unique in an XML namespace. In Caché, each XML-enabled class must have a unique combination of the short class name and the NAMESPACE parameter.

Method Names

InterSystems classes use case to distinguish between client and server methods. You might find it helpful to follow these conventions as well.

Content Objects and JSON Providers

Each documentView is specified by two content objects: a data object and a layout graph, as described in "The Template System," earlier in this book. You must use the names of these objects consistently in several places in the code, so it is useful to have a convention for them.
One approach is as follows:

Use the name document for the data object of your primary documentView.
Use the name layout for the layout graph of your primary documentView.
Use more specific names for the content objects of other documentView components.

Another approach is as follows:

Use the name componentidData for the data object of the documentView whose id is componentid.
Use the name componentidLayout for the layout graph of the documentView whose id is componentid.

Note that (via the PROVIDERLIST parameter) the page class includes a JSON provider for each data object. For this reason, the names of the data objects also become names of the JSON providers. The terms data object and JSON provider are sometimes used interchangeably, although this book simply uses data object.

Keys

You can associate keys with many of the layout objects, and your application can end up with a large number of keys. It is particularly useful to establish a naming convention for them. A simple system would be as follows:

Use names that make the code easier to read.
Use a verb for the name of a key in a control.
Use a noun for the name of a key in any other scenario.
Adopt a hierarchical (or semi-hierarchical) set of names for related keys. For example, you might use the key name accounts for a layout element that displays a table of accounts, and then use the key name account-detail for a layout element that displays details for one account.

The name of a key cannot include a colon (:) character.
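As a combined sketch of the stashing and JSON-conversion techniques described above (the _myApp-prefixed property and the data shown are illustrative, not part of the Zen Mojo API):

// In one template's client method: stash a JSON object as a single string.
var draft = { name: 'Alice', accounts: [101, 102] };
zenPage._myAppDraft = JSON.stringify(draft);

// Later, in a different layout's client method: restore and discard it.
var restored = JSON.parse(zenPage._myAppDraft);
zenPage.showMessage('Restoring draft for ' + restored.name);
delete zenPage._myAppDraft;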
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=ZENMOJO_BKGRD
CC-MAIN-2021-10
refinedweb
1,383
53.1
Convert between radians and degrees (2π radians equals 360 degrees).

BEGIN {
    use constant PI => 3.14159265358979;

    sub deg2rad {
        my $degrees = shift;
        return ($degrees / 180) * PI;
    }

    sub rad2deg {
        my $radians = shift;
        return ($radians / PI) * 180;
    }
}

Alternatively, use the Math::Trig module; the line use Math::Trig; will define the rad2deg and deg2rad functions.

The value of π isn't built directly into Perl, but you can calculate it to as much precision as your floating-point hardware provides. If you put it in a BEGIN block, this is done at compile time. In the solution above, the PI function is a constant created with use constant.

See also: Chapter 3 of Programming Perl; the documentation for the standard POSIX and Math::Trig modules (also in Chapter 7 of Programming Perl).
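A short sketch of the Math::Trig alternative mentioned in the solution:

use Math::Trig;

print rad2deg(pi), "\n";    # 180
print deg2rad(90), "\n";    # 1.5707963267949, i.e. pi/2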
http://docstore.mik.ua/orelly/perl/cookbook/ch02_12.htm
crawl-002
refinedweb
117
60.85
- NAME
- DESCRIPTION
- SUBROUTINES
- BINDABLE FUNCTIONS
- Commands For Moving
- Commands For Manipulating the History
- Commands For Changing Text
- Killing and Yanking
- Specifying Numeric Arguments
- Letting Readline Type For You
- Miscellaneous Commands
- vi Routines
- Internal Routines
- get_window_size
- readline
- substr_with_props
- rl_redisplay
- redisplay
- get_command
- do_command
- savestate
- preserve_state
- OnSecondByte
- CharSize
- WordBreak
- kill_text
- at_end_of_line
- changecase
- search
- search
- TextInsert
- complete_internal
- use_basic_commands
- completion_matches
- pretty_print_list
- get_position
- read_an_init_file
- SEE ALSO

NAME

Term::ReadLine::Perl5::readline

DESCRIPTION

A non-OO package similar to GNU's readline. The preferred OO package is Term::ReadLine::Perl5, but that uses this one internally. It could be made better by removing more of the global state and moving it into the Term::ReadLine::Perl5 side.

There is some support for EUC-encoded Japanese text. This should be rewritten for Perl Unicode, though. Someone please volunteer to rewrite this!

See also Term::ReadLine::Perl5::readline-guide.

SUBROUTINES

InitKeyMap

InitKeymap(*keymap, 'default', 'name', bindings...)

_unescape

_unescape($string) -> List of keys

This internal function takes $string, possibly containing escape sequences, and converts it to a series of octal keys. It has special rules for dealing with readline-specific escape-sequence commands. New-style key bindings are enclosed in double-quotes. Characters are taken verbatim except in the special cases:

\C-x Control x (for any x)
\M-x Meta x (for any x)
\e Escape
\* Set the keymap default (JP: added this) (must be the last character of the sequence)
\x x (unless it fits the above pattern)

Special case: "\C-\M-x" should be treated like "\M-\C-x".

bind_parsed_keyseq

bind_parsed_keyseq($function1, @sequence1, ...)

Actually inserts the binding for @sequence to $function into the current map. @sequence is an array of character ordinals. If the sequence is more than one element long, all but the last will cause meta maps to be created. $function will have an implicit F_ prepended to it. 0 is returned if there is no error.

GNU ReadLine-ish Routines

Many of these don't have the names GNU readline uses, nor do they all correspond to GNU ReadLine functions. Sigh.

rl_bind_keyseq

rl_bind_keyseq($keyspec, $function)

Bind the key sequence represented by the string keyseq to the function function, beginning in the current keymap. This makes new keymaps as necessary. The return value is non-zero if keyseq is invalid. $keyspec should be the name of a key sequence in one of two forms:

Old (GNU readline documented) form:

M-x to indicate Meta-x
C-x to indicate Ctrl-x
M-C-x to indicate Meta-Ctrl-x
x simple char x

where x above can be a single character, or the special:

special means
-------- -----
space space ( )
spc space ( )
tab tab (\t)
del delete (0x7f)
rubout delete (0x7f)
newline newline (\n)
lfd newline (\n)
ret return (\r)
return return (\r)
escape escape (\e)
esc escape (\e)

New form: "chars" (note the required double-quotes) where each char in the list represents a character in the sequence, except for the special sequences:

\\C-x Ctrl-x
\\M-x Meta-x
\\M-C-x Meta-Ctrl-x
\\e escape.
\\x x (if not one of the above)

$function should be in the form BeginningOfLine or beginning-of-line. It is an error for the function to not be known.
As an example, the following lines in .inputrc will bind one's xterm arrow keys:

"\e[[A": previous-history
"\e[[B": next-history
"\e[[C": forward-char
"\e[[D": backward-char

rl_bind

Accepts an array as pairs ($keyspec, $function, [$keyspec, $function]...), and maps the associated bindings to the current KeyMap.

rl_editMode

Changes the edit mode to $1, which should be one of 'emacs', 'vi', 'viopos', 'vicmd', or 'visearch'.

rl_set

rl_set($var_name, $value_string)

Sets the named variable as per the given value, if both are appropriate. Allows the user of the package to set such things as HorizontalScrollMode and EditingMode. $value_string may be of the form HorizontalScrollMode or horizontal-scroll-mode. Also called during the parsing of ~/.inputrc for "set var value" lines. The previous value is returned, or undef on error.

Consider the following example for how to add additional variables accessible via rl_set (and hence via ~/.inputrc).

Want: We want an external variable called "FooTime" (or "foo-time"). It may have values "January", "Monday", or "Noon". Internally, we'll want those values to translate to 1, 2, and 12.

How: Have an internal variable $var_FooTime that will represent the current internal value, and initialize it to the default value. Make an array %var_FooTime whose keys and values are the external (January, Monday, Noon) and internal (1, 2, 12) values:

$var_FooTime = $var_FooTime{'January'} = 1; #default
$var_FooTime{'Monday'} = 2;
$var_FooTime{'Noon'} = 12;

rl_filename_list

rl_filename_list($pattern) => list of files

Returns a list of completions that begin with the string $pattern. Can be used to pass to completion_matches(). This function corresponds to the Term::ReadLine::GNU function rl_filename_list(), but that doesn't handle tilde expansion while this does. Also, directories returned will have the '/' suffix appended, as is the case with GNU Readline, but not Term::ReadLine::GNU. Adding the '/' suffix is useful in completion because it forces the next completion to complete inside that directory. GNU Readline also will complete partial ~ names; for example ~roo may be expanded to /root for the root user. When getpwent/setpwent is available we provide that.

The user of this package can set $rl_completion_function to 'rl_filename_list' to restore the default of filename matching if they'd changed it earlier, either directly or via &rl_basic_commands.

rl_filename_list_deprecated

rl_filename_list_deprecated($pattern)

This was the Term::ReadLine::Perl5 function before version 1.30, and is the current Term::ReadLine::Perl function. For reasons that are a mystery to me (rocky), there seemed to be a need to classify the results, adding a suffix for executable (*), pipe/socket (=), symbolic link (@), and directory (/). Of these, the only useful one is directory, since that will cause a further completion to continue.

rl_parse_and_bind

rl_parse_and_bind($line)

Parse $line as if it had been read from the inputrc file and perform any key bindings and variable assignments found.

rl_basic_commands

Called with a list of possible commands, will allow command completion on those commands, but only for the first word on a line. For example:

&rl_basic_commands('set', 'quit', 'type', 'run');

This is for people that want quick and simple command completion. A more thoughtful implementation would set $rl_completion_function to a routine that would look at the context of the word being completed and return the appropriate possibilities.
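A sketch of such a routine follows. It assumes the completion callback receives the partial word, the whole input line, and the word's start offset (as in Term::ReadLine::Perl), and that the package-qualified names below apply to your installation; adjust them as needed:

$readline::rl_completion_function = sub {
    my ($text, $line, $start) = @_;

    # Complete command names at the start of the line...
    if ($start == 0) {
        return grep { /^\Q$text/ } qw(set quit type run);
    }

    # ...and fall back to filename completion elsewhere.
    return readline::rl_filename_list($text);
};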
rl_read_init_file

rl_read_init_file($filename)

Read keybindings and variable assignments from the file $filename.

BINDABLE FUNCTIONS

These are pretty much in the same order as in readline.c.

Commands For Moving

F_BeginningOfLine

Move to the start of the current line.

F_EndOfLine

Move to the end of the line.

F_ForwardChar

Move forward (right) $count characters.

F_BackwardChar

Move backward (left) $count characters.

F_ForwardWord

Move forward to the end of the next word. Words are composed of letters and digits. Done as many times as $count says.

F_BackwardWord

Move back to the start of the current or previous word. Words are composed of letters and digits. Done as many times as $count says.

F_ClearScreen

Clear the screen and redraw the current line, leaving the current line at the top of the screen. If given a numeric arg other than 1, simply refreshes the line.

F_RedrawCurrentLine

Refresh the current line. By default, this is unbound.

Commands For Manipulating the History

F_AcceptLine

Accept the line regardless of where the cursor is. If this line is non-empty, it may be added to the history list for future recall with add_history(). If this line is a modified history line, the history line is restored to its original state.

F_PreviousHistory

Move `back' through the history list, fetching the previous command.

F_NextHistory

Move `forward' through the history list, fetching the next command.

F_BeginningOfHistory

Move to the first line in the history.

F_EndOfHistory

Move to the end of the input history, i.e., the line currently being entered.

F_ReverseSearchHistory

Search backward starting at the current line and moving `up' through the history as necessary. This is an incremental search.

F_ForwardSearchHistory

Search forward starting at the current line and moving `down' through the history as necessary. This is an incremental search.

F_HistorySearchBackward

Search backward through the history for the string of characters between the start of the current line and the point. The search string must match at the beginning of a history line. This is a non-incremental search. By default, this command is unbound.

F_HistorySearchForward

Search forward through the history for the string of characters between the start of the current line and the point. The search string may match anywhere in a history line. This is a non-incremental search. By default, this command is unbound.

Commands For Changing Text

F_DeleteChar

Removes the $count chars from under the cursor. If there is no line and the last command was different, tells readline to return EOF. If there is a line, and the cursor is at the end of it, and we're in tcsh completion mode, then list possible completions. If $count > 1, deleted chars are saved to the kill buffer.

F_BackwardDeleteChar

Removes $count chars to the left of the cursor (if not at the beginning of the line). If $count > 1, deleted chars are saved to the kill buffer.

F_QuotedInsert

Add the next character typed to the line verbatim. This is how to insert key sequences like C-q, for example.

F_TabInsert

Insert a tab character.

F_SelfInsert

F_SelfInsert($count, $ord)

$ord is an ASCII ordinal; inserts $count of them into global $line. Insert yourself.

F_TransposeChars

Switch the char at the dot with the char before it. If at the end of the line, switch the previous two. (Note: this could screw up multibyte characters; it should be done correctly.)

F_TransposeWords

Drag the word before point past the word after point, moving point past that word as well.
    If the insertion point is at the end of the line, this transposes the last two words on the line.

F_UpcaseWord
    Uppercase the current (or following) word. With a negative argument, uppercase the previous word, but do not move the cursor.

F_DownCaseWord
    Lowercase the current (or following) word. With a negative argument, lowercase the previous word, but do not move the cursor.

F_CapitalizeWord
    Capitalize the current (or following) word. With a negative argument, capitalize the previous word, but do not move the cursor.

F_OverwriteMode

F_KillLine
    Delete characters from the cursor to the end of the line.

F_BackwardKillLine
    Delete characters from the cursor to the beginning of the line.

F_UnixLineDiscard
    Kill the line from the cursor to the beginning of the line.

F_KillWord
    Delete characters to the end of the current word. If not on a word, delete to the end of the next word.

F_BackwardKillWord
    Delete characters backward to the start of the current word, or, if currently not on a word (or just at the start of a word), to the start of the previous word.

F_UnixWordRubout
    Kill to previous whitespace.

F_KillRegion
    Kill the text in the current region. By default, this command is unbound.

F_CopyRegionAsKill
    Copy the text in the region to the kill buffer, so it can be yanked right away. By default, this command is unbound.

F_Yank
    Yank the top of the kill ring into the buffer at point.

Specifying Numeric Arguments

F_DigitArgument
    Add this digit to the argument already accumulating, or start a new argument. M-- starts a negative argument.

F_UniversalArgument

Letting Readline Type For You

F_Complete
    Do a completion operation. If the last thing we did was a completion operation, we'll now list the options available (under normal emacs mode). In TcshCompleteMode, each contiguous subsequent completion operation lists another of the possible options. Returns true if a completion was done, false otherwise, so vi completion routines can test it.

F_PossibleCompletions
    List possible completions.

F_InsertCompletions
    Insert all completions of the text before point that would have been generated by possible-completions.

Miscellaneous Commands

F_ReReadInitFile
    Read in the contents of the inputrc file, and incorporate any bindings or variable assignments found there.

F_Abort
    Abort the current editing command and ring the terminal's bell (subject to the setting of bell-style).

F_Undo
    Incremental undo, separately remembered for each line.

F_RevertLine
    Undo all changes made to this line. This is like executing the undo command enough times to get back to the beginning.

F_TildeExpand
    Perform tilde expansion on the current word.

F_SetMark
    Set the mark to the point. If a numeric argument is supplied, the mark is set to that position.

F_ExchangePointAndMark
    Swap the point with the mark.

F_OperateAndGetNext
    Accept the current line and fetch the next line from the history, relative to the current line, as the default.

F_DoLowercaseVersion
    If the character that got us here is upper case, do the lower-case equivalent command.

F_DoControlVersion
    Do the equivalent command as if it had been typed with the Control key.

F_DoMetaVersion
    Do the equivalent command as if it had been typed with the Meta key.

F_DoEscVersion
    If the character that got us here is Alt-Char, do the Esc Char equivalent.

F_Interrupt
    (Attempt to) interrupt the current program via kill('INT').

F_Suspend
    (Attempt to) suspend the program via kill('TSTP').

F_Ding
    Ring the bell. Should do something with $var_PreferVisibleBell here, but what?
vi Routines

F_ViRepeatLastCommand
    Repeat the most recent one of these vi commands: a A c C d D i I p P r R s S x X ~

F_SaveLine
    Prepend the line with '#', add it to the history, and clear the input buffer (this feature was borrowed from ksh).

F_ViNonePosition
    Come here if we see a non-positioning keystroke when a positioning keystroke is expected.

F_ViPositionEsc
    Comes here if we see escchar, but not an arrow key or other mapped sequence, when a positioning keystroke is expected.

F_ViFirstWord
    Go to the first non-space character of the line.

F_ViToggleCase
    Like the emacs case transforms. Note: this doesn't work for multi-byte characters.

F_ViHistoryLine
    Go to the numbered history line, as listed by the 'H' command, i.e. the current $line is line 1, the youngest line in @rl_History is 2, etc.

F_ViSearch
    Search history for a matching string. As with vi in nomagic mode, the ^, $, \<, and \> positional assertions, the \* quantifier, the \. character class, and the \[ character class delimiter all have special meaning here.

F_ViChangeEntireLine
    Kill the entire line and enter input mode.

F_ViChangeChar
    Kill characters and enter input mode.

F_ViChangeLine
    Delete characters from the cursor to the end of the line and enter vi input mode.

Internal Routines

get_window_size
    get_window_size([$redisplay])

    Note: this function is deprecated. It is not in Term::ReadLine::GNU or the GNU ReadLine library. As such, it may disappear and be replaced by the corresponding Term::ReadLine::GNU routines.

    Causes a query to get the terminal width. If the terminal width can't be obtained, nothing is done. Otherwise: $rl_screen_width is set to the current screen width, and $rl_margin is then set to be 1/3 of $rl_screen_width; any window-changing hooks stored in the array @winchhooks are run; and $SIG{WINCH} is set to run this routine. Any handlers previously set are lost. A better behavior would be to add existing hooks to @winchhooks, but hey, this routine is deprecated.

    If $redisplay is passed and is true, then a redisplay of the input line is done by calling redisplay().

readline
    &readline($prompt)

    The main entry point: reads a line of input, with editing, and returns it.

substr_with_props
    substr_with_props($prompt, $string, $from, $len, $ket, $bsel, $esel)

    Gives the substr() of $prompt.$string with embedded face-change commands.

rl_redisplay
    rl_redisplay()

    Updates the screen to reflect the current value of global $line. For the purposes of this routine, we prepend the prompt to a local copy of $line so that we display the prompt as well. We then modify it to reflect that some characters have different sizes. That is, control-C is represented as ^C, tabs are expanded, etc. This routine is somewhat complicated by two-byte characters: we must make sure never to try to display just half of one. This is some nasty code.

redisplay
    redisplay([$prompt])

    If an argument $prompt is given, it is used instead of the prompt. Updates the screen to reflect the current value of global $line via rl_redisplay.

get_command
    get_command(*keymap, $ord_command_char)

    If the keymap has an entry for $ord_command_char, it is returned. Otherwise, the default command in $Keymap{'default'} is returned if that exists. If $Keymap{'default'} is false, 'F_Ding' is returned.

do_command
    do_command(*keymap, $numericarg, $key)

    If the keymap has an entry for $key, it is executed. Otherwise, the default command for the keymap is executed.

savestate
    savestate()

    Save whatever state we wish to save as an anonymous array. The only other function that needs to know about its encoding is getstate/preserve_state.
preserve_state
    preserve_state()

OnSecondByte
    OnSecondByte($index)

    Returns true if the byte at $index into $line is the second byte of a two-byte character.

CharSize
    CharSize($index)

    Returns the size of the character at the given $index in the current line. Most characters are just one byte in length. However, if the byte at the index and the one after both have the high bit set and $_rl_japanese_mb is set, those two bytes are one character of size two. Assumes that $index points to the first of a 2-byte char if it is not pointing to a 1-byte char. TODO: handle Unicode.

WordBreak
    WordBreak($index)

    Returns true if the character at $index into $line is a basic word break character, false otherwise.

kill_text
    Kills from D=$_[0] to $_[1] (to the kill buffer if $_[2] is true).

at_end_of_line
    Returns true if $D is at the end of the line.

changecase
    changecase($count, $up_down_caps)

    Translated from GNU's readline.c. $up_down_caps is 'up' to upcase $count words, 'down' to downcase them, or something else to capitalize them. If $count is negative, the dot is not moved.

search
    search($position, $string)

    Checks if $string is at position $rl_History[$position] and returns $position if found or -1 if not found. This is intended to be called first in a potentially repetitive search, which is why the unusual return value. See also searchStart.

searchStart
    searchStart($position, $reverse, $string)

    $reverse should be either +1 or -1. Checks if $string is at position $rl_History[$position+$reverse] and returns $position if found or -1 if not found. This is intended to be called first in a potentially repetitive search, which is why the unusual return value. See also search.

TextInsert
    TextInsert($count, $string)

complete_internal
    The meat of command completion. Patterned closely after GNU's. The supposedly partial word at the cursor is "completed" as per the single argument:

        "\t"  complete as much of the word as is unambiguous
        "?"   list possibilities
        "*"   replace the word with all possibilities (who would use this?)

    A few notable variables used:

        $rl_completer_word_break_characters -- characters in this string break a word.
        $rl_special_prefixes -- but if in this string as well, they remain part of that word.

    Returns true if a completion was done, false otherwise, so vi completion routines can test it.

use_basic_commands
    use_basic_commands($text, $line, $start)

    Used as a completion function by &rl_basic_commands. Returns items from @rl_basic_commands that start with the pattern in $text. $start should be 0, signifying matching from the beginning of the line, for this to work; otherwise we return the empty list. $line is ignored, but needs to be there in order to match the completion-function API.

completion_matches
    completion_matches(func, text, line, start)

    func is a function to call as func($text, $line, $start), where $text is the item to be completed, $line is the whole command line, and $start is the starting index of $text in $line. The function $func should return a list of items that might match.

    completion_matches will return that list, with the longest common prefix prepended as the first item of the list. Therefore, the list will either be of zero length (meaning no matches) or have two or more items.

pretty_print_list
    Print an array in columns like ls -C. Originally based on stuff (lsC2.pl) by utashiro@sran230.sra.co.jp (Kazumasa Utashiro). See Array::Columnize for a more flexible and more general routine.
get_position
    get_position($count, $ord, $fulline_ord, $poshash)

    Interpret vi positioning commands.

read_an_init_file
    read_an_init_file(inputrc_file, [include_depth])

    Reads and executes inputrc_file, which does things like set input key bindings in key maps. Returns 0 if there was a problem, and 1 otherwise.
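    To tie these together, here is a small example of the kind of file read_an_init_file consumes. The specific bindings and settings are arbitrary illustrations, written in the same format as the arrow-key sample at the top of this document (the variable names are the ones mentioned under rl_set above):

        # example ~/.inputrc fragment
        set editing-mode emacs
        set horizontal-scroll-mode Off
        "\C-p": previous-history
        "\C-n": next-history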
https://metacpan.org/pod/Term::ReadLine::Perl5::readline
CC-MAIN-2018-13
refinedweb
3,322
57.67
EDIT 10/4/2007: Since this post has been published, we have updated the Exchange 2007 Autodiscover Service whitepaper to include this information. Please consult the whitepaper for the most up-to-date information.

In reviewing all of the certificate data out there, Jim and I noticed that the information is fragmented into smaller topics and widely distributed. We wanted to supplement previous blog posts on this topic (this one and this one) with an overview of how Exchange 2007 uses certificates and a walk-through of how a typical small company might think about this topic.

Since Exchange 2007 shipped, we in Support Services have been helping a lot of customers navigate the process of obtaining and installing certificates. The following scenario comprises the majority of our experiences:

Tom works for a company, Contoso Inc. Let's also say that Tom just put a default install of Exchange 2007 on a server called SERVER01, which makes its internal FQDN SERVER01.contoso.local, since he also implemented split DNS. Tom wants to make sure he takes all of the correct steps in order for his external Outlook Anywhere (Outlook 2007) clients to function correctly. He wants his users to be able to access OWA using https://mail.contoso.com/owa. He has also read enough Microsoft documentation to know that the Outlook 2007 Autodiscover feature will attempt to find the Autodiscover service at the following locations (in order from top to bottom):

Service Connection Point (SCP) – client communicates directly to AD
https://contoso.com/autodiscover/autodiscover.xml
https://autodiscover.contoso.com/autodiscover/autodiscover.xml

Tom doesn't want his users to get "invalid certificate" errors, nor does he want to affect his clients with redirection requests. Tom has just one more decision to make and then it's implementation time. Does he go with the recommended solution of a certificate with Subject Alternative Names (SAN) – also known as Unified Communications Certificates – or with individual certificates?

SAN Cert (Microsoft recommended solution)
Pro – Simple to administer on the server
Con – If you are purchasing the cert from a 3rd party it can be expensive (up to 10x more than a classic SSL cert)
Con – If you generate this cert with your internal MS certificate server, external clients/devices must be configured to trust this internal CA, which may involve configuring many devices (Outlook clients, mobile devices, etc.)
Con – Not all CAs support this type of certificate. See KB 929395 (Description of the Exchange-specific Web sites that are provided by X.509 certification authorities) for a list of CAs that do.

Classic 3rd party SSL cert
Pro – Inexpensive
Pro – Most clients will trust the CA by default
Con – Can complicate deployment on the server or require the use of an unfamiliar alias

The decision on this is in your (Tom's) hands, so we'll cover both here.

The SAN cert method

You will need to contact a 3rd party CA that supports these types of certs (see the link to KB 929395 above). Next, you need to know all of the Subject Alternative Names that you need to register. Here is the list that applies in Tom's scenario (for the '-domainname' parameter):

mail.contoso.com
contoso.com
contoso.local
autodiscover.contoso.com
Server01.contoso.local
Server01

Officially, the NetBIOS names of the server are not required. But many users and admins like to use OWA internally, and this will prevent unnecessary warnings about the cert when they log on. There are no ill effects from adding internal names, but they are not necessary.
This is the Exchange Management Shell (EMS) command Tom would enter to generate the cert request to be provided to the 3rd party CA in order to generate the actual certificate:

New-ExchangeCertificate -domainname mail.contoso.com, contoso.com, contoso.local, autodiscover.contoso.com, server01.contoso.local, server01 -Friendlyname contosoinc -generaterequest:$true -keysize 1024 -path c:\certrequest.req -privatekeyexportable:$true -subjectname "c=US, o=contoso inc, CN=server01.contoso.com"

We have found that the '-subjectname' option is the most confusing. The help contents in EMS are vague as well. The best description is found in the TLS whitepaper mentioned at the beginning of this post, so we're not going to reproduce it here.

As we just stated, the above command will generate a certificate request file you can then submit to the CA of your choosing. Once they have processed your request and you have the cert, you need to install it onto your default web site. You don't install the certificate using the IIS Admin Console; you need to do it using the management shell. First you have to import it:

Import-ExchangeCertificate -path <full path to cert file>

Then enable it:

Enable-ExchangeCertificate

When you run the above command you will be prompted to enter the name of the service you want to enable this certificate for. You can enable the cert for IIS, POP3, IMAP, SMTP, or UM depending on your circumstance. You can enable it for multiple services with the enable command by adding the following parameter:

-services IMAP, POP, UM, IIS, SMTP

After that it will prompt you for the thumbprint, so just copy and paste it from the results of the import procedure mentioned above. If for some reason you don't have the thumbprint in the same window, you can get it by typing the following monad command:

Get-ExchangeCertificate

You can also specify the thumbprint when you execute the 'Enable-ExchangeCertificate' command by adding this parameter:

-thumbprint D75305BEF8175570EB6E03BA6FF4372D05ACE39F4

Combined, it would look like this:

Enable-ExchangeCertificate -services IIS, UM, SMTP -thumbprint D75305BEF8175570EB6E03BA6FF4372D05ACE39F4

Make sure you copy the correct thumbprint if you have more than one. You can tell by running the 'Get-ExchangeCertificate' PowerShell command and matching up the subject with the correct thumbprint. Next you need external DNS records that point to the IP address of your CAS server for any external name mapped to this certificate.

The other method

Jim and I are also hearing "These 3rd party companies want to charge me a lot of money for this SAN cert thing, is there another method?" Why yes, there are a couple of alternatives, and here they are:

Alternative 1

Get a normal SSL certificate for the autodiscover namespace (autodiscover.contoso.com in the scenario). If you plan on using TLS you'll need to make sure to follow the instructions above, but for subjectname you only need to specify the one namespace. The steps to import and install are no different at that point. For this first example, users will enter the following URL for Outlook Anywhere or ActiveSync:

https://autodiscover.contoso.com

They would use this URL to get to OWA:

https://autodiscover.contoso.com/owa

Alternative 2

This alternative addresses users that may not be as open to learning a new URL for OWA, ActiveSync, or other web services they may already have configured. Get 2 certs, one for mail.contoso.com and one for autodiscover.contoso.com. The mail.contoso.com cert goes on your default web site. Next, create a new Web site from within IIS manager called AutoDiscover.
Right click "Web Sites", choose "Web Site", make the description AutoDiscover, assign a new dedicated IP to this web site, use the default port of 80, don't enter a host header, and for the path, point to the same directory as your default web site (c:\inetpub\wwwroot). Also accept the default permissions.

Right click this web site, get properties, and go to Directory Security. Assign the autodiscover.contoso.com cert here.

From the Exchange Management Shell, run the following command:

New-AutodiscoverVirtualDirectory -WebSiteName AutoDiscover -BasicAuthentication $true -WindowsAuthentication $true

Note that the web site name parameter is case sensitive.

Go back to IIS manager and confirm the creation of your new AutoDiscover virtual directory. You can delete the autodiscover virtual directory from the default web site, but it's not necessary and there is no additional risk from leaving it there.

Finally, make sure external DNS has A or CNAME records for the following:

mail.contoso.com pointed to the external IP of the Default Web Site
autodiscover.contoso.com pointed to the external IP of the AutoDiscover web site

Now that you have your cert installed, now what?

Default certs issued by a MS certificate authority are valid for 2 years. The length of 3rd party certificate validity depends on your agreement with them. You can use the certificate manager add-in for the local computer to renew these certs when the time comes, or you can repeat the steps above to get a new cert from another CA if you like. There are several ways to do this, and the choice is yours to make as to how you accomplish the renewal.
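To keep an eye on when a renewal is due, you can list the installed certs and their expiration dates from the shell. A quick sketch; the particular property list shown is just one reasonable choice, not the only one:

Get-ExchangeCertificate | Format-List FriendlyName, Subject, CertificateDomains, Services, Thumbprint, NotAfter

The NotAfter value is the expiration date, so anything approaching that date is a candidate for the renewal steps described above.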
Caveats

If you choose to install and use your own CA, you will have to ensure that clients, servers, and devices that access any secured site trust your CA as a root. This is actually a minor procedure, but depending on the technical ability of your users, or in large deployments, it can become quite complicated. Also, if you plan to incorporate an SSL accelerator or an ISA server located in a DMZ, you need to make sure that you export the private key of the certificate (.pfx file). You can do this from the IIS administrator program once the certificate has been installed following the previous procedure. Here are some links on that process:

299875 How to implement SSL in IIS
915840 How to install root certificates on a Windows Mobile-based device
297681 Error Message: This Security Certificate Was Issued by a Company that You Have Not Chosen to Trust
332077 IIS 6.0: Computer must trust all certification authorities trusted by individual sites

Certificates for Windows Mobile 5.0 and Windows Mobile 6...

- Christopher Gregson, Jim Westmoreland

Comments:

"Service Connection Point (SCP) – client communicates directly to AD"

Uzih, MS CAs should support SANs out of the box.

Chris Lehr, this is always changing, so rather than post it in here I chose to link to the best documentation we have about it. That doc will be updated over time so we don't have to keep changing the blog post. See KB 929395.

Tim, the problem with Entourage is that it's looking for an Exchange virtual directory. This will only work if the mailboxes are on E2K3. If the mailbox is on E2k7 then it is redirected to the OWA virtual directory. That is documented in KB 924625 (When you use Outlook or Entourage with an Exchange 2007 mailbox, you cannot connect to Exchange 2007, and you receive an error message).

Bennywmy, no, don't turn on anon access. I suggest you call in to our support center or use the help forums. There isn't enough detail in your post and this isn't the place to troubleshoot specific problems.

... has fixed the problem. Now it's only RPC/HTTPS I have to fix using the info in this post ;) Thanks also Uzih for the enable SAN tip... A lot of documentation leaves that little tidbit out.

My impression is that a lot of the documentation is describing ideal and best-practice scenarios. This is an easy conclusion to draw since the server loads with a self-signed cert. I'm not sure if Outlook makes the HTTP-specific request visible in the UI, but I've seen it plenty of times in debug and netmon traces. If you look you can find references to it; here's a webcast by Joe Turick (KB 935439):

Connect to AutoDiscover - hosteddomain
Outlook connects to autodiscover.[hosteddomain.com]/autodiscover/ and [hosteddomain.com]/autodiscover by using HTTPS
This fails – HTTPS not configured
Outlook retries by using HTTP, but doesn't authenticate
Outlook gets an HTTPS redirect to hoster.com

Short of posting up source code I'm not sure how else I can convince you. I suggest that you get a network trace of your connection attempt, then filter the trace for HTTP traffic. If the HTTPS request fails you should see it. -Jim

It should work fine. You can test prior to this with your own CA if you want. Internal Outlook uses the SCP to get the URL; external clients will use the SMTP address of the user to 'guess' the URL, so as long as your SMTP address is user@mydomain.com then it should work fine. The SCP address will probably fail or generate a prompt, so you may want to change that value in AD for internal users. You can see what it's set to by using this command in the shell:

Get-ClientAccessServer | fl AutodiscoverServiceInternalUri

I imagine you'll get your .local namespace returned. You can change it by doing this:

Set-ClientAccessServer <servername> -AutodiscoverServiceInternalUri:

That way internal clients that are able to use the SCP and external clients will resolve to the same name that the cert was issued to.

Loren, yes, there is a typo in that there should be a space where you indicated. And yes, I typoed the name in the command for .local and should have entered .com. I think it still makes the point though. I would blame Chris for that but he's not here to defend himself today :)

Thanks, Jim
https://techcommunity.microsoft.com/t5/exchange-team-blog/more-on-exchange-2007-and-certificates-with-real-world-scenario/bc-p/592739/highlight/true
CC-MAIN-2021-43
refinedweb
2,194
58.72
Beautiful Days at the Movies Hackerrank

Lily likes to play games with integers and their reversals. The game goes like this: for some integer x, we define reversed(x) to be the reversal of all digits in x. For example, reversed(123)=321, reversed(12)=21, and reversed(120)=021.

Logan wants to go to the movies with Lily on some day x satisfying i<=x<=j, but he knows she only goes to the movies on days she considers to be beautiful. And yes, Lily considers a day to be beautiful in her own logical way: a day x is beautiful if the absolute value of the difference between x and reversed(x) is evenly divisible by k. Given i, j, and k, count and print the number of beautiful days, and help Logan decide when he can ask Lily to go out to the movies.

Input Format
A single line of three space-separated integers describing the respective values of i, j, and k.

Constraints
1<=i,j<=2*10^6
1<=k<=2*10^9

Output Format
Print the number of beautiful days in the inclusive range.

Sample Input
20 23 6

Sample Output
2

Explanation
Logan wants to go to the movies on days 20, 21, 22, and 23. We perform the following calculations to determine which days are beautiful:
Day 20 is beautiful because the following evaluates to a whole number: |20-02|/6=3
Day 21 is not beautiful because the following doesn't evaluate to a whole number: |21-12|/6=1.5
Day 22 is beautiful because the following evaluates to a whole number: |22-22|/6=0
Day 23 is not beautiful because the following doesn't evaluate to a whole number: |23-32|/6=1.5
Only two days, 20 and 22, in this interval are beautiful. Thus, we print 2 as our answer.

Solution in C++

#include <cmath>
#include <cstdio>
#include <vector>
#include <iostream>
#include <algorithm>
using namespace std;

int rev(int m){
    int r = 0, rem;
    while(m != 0) {
        rem = m % 10;
        r = r * 10 + rem;
        m /= 10;
    }
    return r;
}

int main() {
    /* Enter your code here. Read input from STDIN. Print output to STDOUT */
    int i, j, c = 0;
    long int k;
    cin >> i >> j >> k;
    for(int s = i; s <= j; s++){
        double x = (s - rev(s)) / double(k);
        if(floor(x) == x) c++;
    }
    cout << c;
    return 0;
}

Solution in Python

a, b, k = map(int, raw_input().split())
ans = 0
for i in range(a, b+1):
    ans = ans + abs(not (i - int(str(i)[::-1]))%k)
print ans

Solution in Java

import java.util.*;

class Solution {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int i = sc.nextInt();
        int c = 0;
        int j = sc.nextInt();
        long k = sc.nextLong();
        for(int n = i; n <= j; n++) {
            if( ( n - reversed(Integer.toString(n)) ) % k == 0) c++;
        }
        System.out.print(c);
    }

    static int reversed(String str) {
        return Integer.parseInt(new StringBuffer(str).reverse().toString());
    }
}

You can learn more HackerRank questions and solutions on this site.

2 comments:

On Beautiful Days at the Movies Hackerrank problem solution: that line is getting the difference between the number and its reverse and then taking the remainder modulo k. NOT then turns a non-zero remainder into '0' and a zero remainder into '1', and abs() of that value is added to the running sum. Hope this clears your doubt.

a, b, k = map(int, raw_input().split())
ans = 0
for i in range(a, b+1):
    ans = ans + abs(not (i - int(str(i)[::-1]))%k)   # explain this line please
print ans
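In other words, a hypothetical but equivalent rewrite of that one-liner, in the same Python 2 style as the posted solution, makes the logic explicit:

a, b, k = map(int, raw_input().split())
ans = 0
for i in range(a, b + 1):
    rev = int(str(i)[::-1])        # reversed(i), e.g. 120 -> 21
    if abs(i - rev) % k == 0:      # the day is beautiful
        ans += 1
print ans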
https://coderinme.com/beautiful-days-at-the-movies-hackerrank-problem-solution/
CC-MAIN-2020-10
refinedweb
585
63.09
Java Interview Questions & Answers

In this article, we have compiled the most frequently asked Java interview questions. These questions will give you an acquaintance with the type of questions that an interviewer might ask you during your interview for Java programming.

As a fresher, you have either just attended an interview or are planning to attend one soon. As an entry-level jobseeker looking to grow your career in software programming, you may be nervous about your upcoming interviews. All of us have those moments of panic where we blank out and might even forget what a thread is. We will simplify it for you: all you need to do is take a deep breath and check the questions that are most likely to be asked. You can't avoid panicking, but you can definitely prepare yourself so that when you step into that interview room, you are confident and know you can handle anything the interviewer might throw at you. Here is a compiled list of 24 comprehensive Java interview questions with answers (latest 2020) that will help you nail that confidence and ensure you sail through the interview.

1. What all does JVM comprise of?

JVM, short for Java Virtual Machine, is required by any system to run Java programs. Its architecture essentially comprises:
● Classloader: a subsystem of the JVM whose main function is to load class files whenever a Java program is run.
● Heap: the runtime data area that is used for allocating objects.
● Class area: holds per-class structures from each class file, such as static variables, metadata, and the runtime constant pool.
● Stack: used for storing temporary variables.
● Register: the PC register contains the address of the JVM instruction currently being executed.
● Execution engine: consists of a virtual processor, an interpreter that executes instructions after reading the bytecode, and a JIT compiler which improves performance when the rate of execution is slow.
● Java Native Interface: acts as the communication medium for interacting with another application developed in C, C++, etc.

2. What is object-oriented programming? Is Java an object-oriented language?

Essentially, object-oriented programming is a programming paradigm that works on the concept of objects. Simply put, objects are containers – they contain data in the form of fields and code in the form of procedures. Following that logic, an object-oriented language is a language that works on objects and procedures. Since Java utilizes 8 primitive datatypes – boolean, byte, char, int, float, long, short, double – which are not objects, Java cannot be considered a 100% object-oriented language.

3. What do you understand by Aggregation in the context of Java?

Aggregation is a form of association in which each object is assigned its own lifecycle. But there is ownership in this, and the child object cannot belong to any other parent object in any manner.

4. Name the superclass in Java.

java.lang.Object. All non-primitive types are inherited directly or indirectly from this class.

5. Explain the difference between 'finally' and 'finalize' in Java.

Used with the try-catch block, the 'finally' block is used to ensure that a particular piece of code is always executed, even if an exception is thrown by the try-catch block. In contrast, finalize() is a special method in the Object class. It is generally overridden to release system resources when the object is garbage collected.
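A tiny sketch of the 'finally' behavior from question 5; the class name and values are made up for illustration:

public class FinallyDemo {
    static int parseOrDefault(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            return -1;                            // runs on bad input
        } finally {
            System.out.println("finally always runs");
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault("42"));   // "finally always runs", then 42
        System.out.println(parseOrDefault("oops")); // "finally always runs", then -1
    }
}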
6. What is an anonymous inner class? How is it different from an inner class?

Any local inner class which has no name is known as an anonymous inner class. Since it doesn't have a name, it cannot have a constructor. It always either extends a class or implements an interface, and is defined and instantiated in a single statement.

A non-static nested class is called an inner class. Inner classes are associated with the objects of the enclosing class, and they can access all methods and variables of the outer class.

7. What is the System class?

It is a core class in Java. Since the class is final, we cannot override its behavior through inheritance. Neither can we instantiate this class, since it doesn't provide any public constructors. All of its methods are static.

8. How to create a daemon thread in Java?

We use the setDaemon(true) method of the Thread class to create a daemon thread. We call this method before the start() method, else we get an IllegalThreadStateException.

9. Does Java support global variables? Why/Why not?

No, Java doesn't support global variables. This is primarily for two reasons:
● They create collisions in the namespace.
● They break referential transparency.

10. How is an RMI object developed?

The following steps can be taken to develop an RMI object:
● Define the interface
● Implement the interface
● Compile the interface and its implementation with the Java compiler
● Compile the server implementation with the RMI compiler
● Run the RMI registry
● Run the application

11. Explain the differences between time slicing and preemptive scheduling.

In the case of time slicing, a task executes for a specified time frame, also known as a slice. After that, it enters the ready queue, a pool of 'ready' tasks. The scheduler then picks the next task to be executed based on priority and other factors. Under preemptive scheduling, the task with the highest priority is executed either until it enters the dead or waiting states, or until another higher-priority task comes along.

12. The garbage collector thread is what kind of a thread?

It is a daemon thread.

13. What is the lifecycle of a thread in Java?

Any thread in Java goes through the following stages in its lifecycle:
● New
● Runnable
● Running
● Non-runnable (blocked)
● Terminated

14. State the methods used during the serialization and deserialization process.

ObjectInputStream.readObject – reads the file and deserializes the object.
ObjectOutputStream.writeObject – serializes the object and writes the serialized object to a file.

15. What are volatile variables and what is their purpose?

Volatile variables are variables that are always read from main memory, and not from a thread's cache memory. They are generally used during synchronization.

16. What are wrapper classes in Java?

All primitive data types in Java have a class associated with them, known as wrapper classes. They're known as wrapper classes because they 'wrap' the primitive data type into an object of the class. In short, they convert Java primitives into objects.

17. How can we make a singleton class?

By making its constructor private.
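A minimal sketch of the private-constructor approach from question 17; eager initialization, as shown here, is just one of several common ways to do it:

public final class Singleton {
    // The single instance, created once when the class is loaded.
    private static final Singleton INSTANCE = new Singleton();

    // Private constructor blocks instantiation from outside the class.
    private Singleton() { }

    public static Singleton getInstance() {
        return INSTANCE;
    }
}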
18. What are the important methods of the Exception class in Java?

● String getMessage()
● String toString()
● void printStackTrace()
● synchronized Throwable getCause()
● public StackTraceElement[] getStackTrace()

19. How can we make a thread in Java?

We can follow either of two ways to make a thread in Java:
● By extending the Thread class. The disadvantage of this method is that we cannot extend any other classes, since the Thread class has already been extended.
● By implementing the Runnable interface.

20. Explain the differences between the get() and load() methods.

The get() and load() methods (of Hibernate's Session) have the following differences:
● get() returns null if the object is not found, whereas load() throws the ObjectNotFound exception.
● get() always returns a real object, whereas load() returns a proxy object.
● get() always hits the database, whereas load() doesn't.
● get() should be used if you aren't sure about the existence of an instance, whereas load() should be used if you are sure that the instance exists.

21. What is the default value of local variables?

They aren't initialized to any default value – neither primitives nor object references.

22. What is a Singleton in Java?

It is a class with one instance in the whole Java application. For example, java.lang.Runtime is a singleton class. The prime objective of a singleton is to control object creation by keeping the constructor private.

23. What is a static method?

A static method can be invoked without the need for creating an instance of a class. A static method belongs to the class rather than to an object of the class. A static method can access static data members and can change their values.

24. What is an exception?

Exceptions are unusual conditions that arise while a program is running. They may be caused by incorrect logic written by the programmer or by invalid user input.

Conclusion

The above Java interview questions will provide a good start for preparing for the interview. Practice your coding skills too, though, and make sure to be thorough in these questions and their related concepts, so that when the interviewer fires a Q, you are ready to win the round with your A. Oh, and don't forget 3 (inconspicuous) breaths when you present yourself before the interviewer. All the best! Hope you crack your interviews!
https://www.upgrad.com/blog/java-interview-questions-answers/
CC-MAIN-2020-40
refinedweb
1,459
57.47
Vuser Script Sections

Each Vuser script contains at least the following sections: vuser_init, one or more Actions, and vuser_end.

Before and during recording, you can select the section of the script into which VuGen will insert the recorded functions. When you run multiple iterations of a Vuser script, only the Actions sections of the script are repeated—the vuser_init and vuser_end sections are not repeated. For more information on the iteration settings, see the General > Run Logic view in the Runtime settings.

VuGen Script Editor

You use the VuGen script editor to display and edit the contents of each of the script sections. You can display the contents of only a single section at a time. To display a section in the script editor, double-click the name of the section in the Solution Explorer.

Java Classes

When working with Vuser scripts that use Java classes, you place all your code in the Actions class. The Actions class contains the following methods: init, action, and end. These methods correspond to the sections of scripts developed using other protocols—you insert initialization routines into the init method, client actions into the action method, and log off procedures in the end method.

public class Actions{

    public int init() {
        return 0;}

    public int action() {
        return 0;}

    public int end() {
        return 0;}
}

For more details, see Java Vuser (Manual) Protocol.

Script Section Structure Example

Every Vuser script contains three sections: vuser_init, Run (Actions), and vuser_end. You can instruct a Vuser to repeat the Run section when you run the script. Each repetition is known as an iteration. The vuser_init and vuser_end sections of a Vuser script are not repeated when you run multiple iterations.

When you run scripts with multiple actions, you can indicate how to execute the actions, and how the Vuser executes them. In the following example, Block0 performs a deposit, Block1 performs a transfer, and Block2 submits a balance request. The Login and Logout actions are common to the three blocks.

Sequence. You can set the order of actions within your script. You can also indicate whether to perform actions sequentially or randomly.

Iterations. In addition to setting the number of iterations for the entire Run section, you can set iterations for individual actions or action blocks. This is useful, for example, in emulating a commercial site where you perform many queries to locate a product, but only one purchase.

Weighting. For action blocks running their actions randomly, you can set the weight or percentage of each action within a block.
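For comparison with the Java Actions class above, a C Vuser's sections are separate functions with the same roles. A minimal hand-written sketch; the lr_output_message calls are stand-ins for whatever protocol functions would actually be recorded into each section:

vuser_init()
{
    lr_output_message("runs once per Vuser");        /* login steps go here */
    return 0;
}

Action()
{
    lr_output_message("repeated once per iteration"); /* business process goes here */
    return 0;
}

vuser_end()
{
    lr_output_message("runs once per Vuser");        /* logoff steps go here */
    return 0;
}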
https://admhelp.microfocus.com/lr/en/12.60-12.61/help/WebHelp/Content/VuGen/103300_c_script_sections.htm
CC-MAIN-2018-51
refinedweb
508
52.6
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards The sample program normal_misc_examples.cpp illustrates their use. First we need some includes to access the normal distribution (and some std output of course). #include <boost/math/distributions/normal.hpp> // for normal_distribution using boost::math::normal; // typedef provides default type is double. #include <iostream> using std::cout; using std::endl; using std::left; using std::showpoint; using std::noshowpoint; #include <iomanip> using std::setw; using std::setprecision; #include <limits> using std::numeric_limits; int main() { cout << "Example: Normal distribution, Miscellaneous Applications."; try { { // Traditional tables and values. Let's start by printing some traditional tables. double step = 1.; // in z double range = 4; // min and max z = -range to +range. int precision = 17; // traditional tables are only computed to much lower precision. // but std::numeric_limits<double>::max_digits10; on new Standard Libraries gives // 17, the maximum number of digits that can possibly be significant. // std::numeric_limits<double>::digits10; == 15 is number of guaranteed digits, // the other two digits being 'noisy'. // Construct a standard normal distribution s normal s; // (default mean = zero, and standard deviation = unity) cout << "Standard normal distribution, mean = "<< s.mean() << ", standard deviation = " << s.standard_deviation() << endl; First the probability distribution function (pdf). cout << "Probability distribution function values" << endl; cout << " z " " pdf " << endl; cout.precision(5); for (double z = -range; z < range + step; z += step) { cout << left << setprecision(3) << setw(6) << z << " " << setprecision(precision) << setw(12) << pdf(s, z) << endl; } cout.precision(6); // default And the area under the normal curve from -∞ up to z, the cumulative distribution function (cdf). // For a standard normal distribution cout << "Standard normal mean = "<< s.mean() << ", standard deviation = " << s.standard_deviation() << endl; cout << "Integral (area under the curve) from - infinity up to z " << endl; cout << " z " " cdf " << endl; for (double z = -range; z < range + step; z += step) { cout << left << setprecision(3) << setw(6) << z << " " << setprecision(precision) << setw(12) << cdf(s, z) << endl; } cout.precision(6); // default And all this you can do with a nanoscopic amount of work compared to the team of human computers toiling with Milton Abramovitz and Irene Stegen at the US National Bureau of Standards (now NIST). Starting in 1938, their "Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables", was eventually published in 1964, and has been reprinted numerous times since. (A major replacement is planned at Digital Library of Mathematical Functions). Pretty-printing a traditional 2-dimensional table is left as an exercise for the student, but why bother now that the Math Toolkit lets you write double z = 2.; cout << "Area for z = " << z << " is " << cdf(s, z) << endl; // to get the area for z. Correspondingly, we can obtain the traditional 'critical' values for significance levels. 
For the 95% confidence level, the significance level usually called alpha, is 0.05 = 1 - 0.95 (for a one-sided test), so we can write cout << "95% of area has a z below " << quantile(s, 0.95) << endl; // 95% of area has a z below 1.64485 and a two-sided test (a comparison between two levels, rather than a one-sided test) cout << "95% of area has a z between " << quantile(s, 0.975) << " and " << -quantile(s, 0.975) << endl; // 95% of area has a z between 1.95996 and -1.95996 First, define a table of significance levels: these are the probabilities that the true occurrence frequency lies outside the calculated interval. It is convenient to have an alpha level for the probability that z lies outside just one standard deviation. This will not be some nice neat number like 0.05, but we can easily calculate it, double alpha1 = cdf(s, -1) * 2; // 0.3173105078629142 cout << setprecision(17) << "Significance level for z == 1 is " << alpha1 << endl; and place in our array of favorite alpha values. double alpha[] = {0.3173105078629142, // z for 1 standard deviation. 0.20, 0.1, 0.05, 0.01, 0.001, 0.0001, 0.00001 }; Confidence value as % is (1 - alpha) * 100 (so alpha 0.05 == 95% confidence) that the true occurrence frequency lies inside the calculated interval. cout << "level of significance (alpha)" << setprecision(4) << endl; cout << "2-sided 1 -sided z(alpha) " << endl; for (int i = 0; i < sizeof(alpha)/sizeof(alpha[0]); ++i) { cout << setw(15) << alpha[i] << setw(15) << alpha[i] /2 << setw(10) << quantile(complement(s, alpha[i]/2)) << endl; // Use quantile(complement(s, alpha[i]/2)) to avoid potential loss of accuracy from quantile(s, 1 - alpha[i]/2) } cout << endl; Notice the distinction between one-sided (also called one-tailed) where we are using a > or < test (and not both) and considering the area of the tail (integral) from z up to +∞, and a two-sided test where we are using two > and < tests, and thus considering two tails, from -∞ up to z low and z high up to +∞. So the 2-sided values alpha[i] are calculated using alpha[i]/2. If we consider a simple example of alpha = 0.05, then for a two-sided test, the lower tail area from -∞ up to -1.96 is 0.025 (alpha/2) and the upper tail area from +z up to +1.96 is also 0.025 (alpha/2), and the area between -1.96 up to 12.96 is alpha = 0.95. and the sum of the two tails is 0.025 + 0.025 = 0.05, Armed with the cumulative distribution function, we can easily calculate the easy to remember proportion of values that lie within 1, 2 and 3 standard deviations from the mean. 
      cout.precision(3);
      cout << showpoint << "cdf(s, s.standard_deviation()) = "
        << cdf(s, s.standard_deviation()) << endl; // from -infinity to 1 sd
      cout << "cdf(complement(s, s.standard_deviation())) = "
        << cdf(complement(s, s.standard_deviation())) << endl;
      cout << "Fraction 1 standard deviation within either side of mean is "
        << 1 - cdf(complement(s, s.standard_deviation())) * 2 << endl;
      cout << "Fraction 2 standard deviations within either side of mean is "
        << 1 - cdf(complement(s, 2 * s.standard_deviation())) * 2 << endl;
      cout << "Fraction 3 standard deviations within either side of mean is "
        << 1 - cdf(complement(s, 3 * s.standard_deviation())) * 2 << endl;

To a useful precision, the 1, 2 & 3 percentages are 68, 95 and 99.7, and these are worth memorising as useful 'rules of thumb', as, for example, in standard deviation:

Fraction 1 standard deviation within either side of mean is 0.683
Fraction 2 standard deviations within either side of mean is 0.954
Fraction 3 standard deviations within either side of mean is 0.997

We could of course get some really accurate values for these confidence intervals by using cout.precision(15);

Fraction 1 standard deviation within either side of mean is 0.682689492137086
Fraction 2 standard deviations within either side of mean is 0.954499736103642
Fraction 3 standard deviations within either side of mean is 0.997300203936740

But before you get too excited about this impressive precision, don't forget that the confidence intervals of the standard deviation are surprisingly wide, especially if you have estimated the standard deviation from only a few measurements.

Examples from K. Krishnamoorthy, Handbook of Statistical Distributions with Applications, ISBN 1 58488 635 8, page 125, implemented using the Math Toolkit library. A few very simple examples are shown here:

      // K. Krishnamoorthy, Handbook of Statistical Distributions with Applications,
      // ISBN 1 58488 635 8, page 125, example 10.3.5

Mean lifespan of 100 W bulbs is 1100 h with a standard deviation of 100 h. Assuming, perhaps with little evidence and much faith, that the distribution is normal, we construct a normal distribution called bulbs with these values:

      double mean_life = 1100.;
      double life_standard_deviation = 100.;
      normal bulbs(mean_life, life_standard_deviation);
      double expected_life = 1000.;

Then we can use the cumulative distribution function to predict fractions (or percentages, if * 100) that will last various lifetimes.

      cout << "Fraction of bulbs that will last at best (<=) " // P(X <= 1000)
        << expected_life << " is " << cdf(bulbs, expected_life) << endl;
      cout << "Fraction of bulbs that will last at least (>) " // P(X > 1000)
        << expected_life << " is " << cdf(complement(bulbs, expected_life)) << endl;
      double min_life = 900;
      double max_life = 1200;
      cout << "Fraction of bulbs that will last between " << min_life
        << " and " << max_life << " is "
        << cdf(bulbs, max_life)    // P(X <= 1200)
         - cdf(bulbs, min_life)    // - P(X <= 900)
        << endl;

Weekly demand for 5 lb sacks of onions at a store is normally distributed with mean 140 sacks and standard deviation 10.

      double mean = 140.; // sacks per week.
      double standard_deviation = 10;
      normal sacks(mean, standard_deviation);
      double stock = 160.; // per week.
      cout << "Percentage of weeks overstocked "
        << cdf(sacks, stock) * 100. << endl; // P(X <= 160)
      // Percentage of weeks overstocked 97.7

So there will be lots of mouldy onions! So we should be able to say what stock level will meet demand 95% of the weeks.
      double stock_95 = quantile(sacks, 0.95);
      cout << "Store should stock " << int(stock_95)
        << " sacks to meet 95% of demands." << endl;

And it is easy to estimate how to meet 80% of demand, and waste even less.

      double stock_80 = quantile(sacks, 0.80);
      cout << "Store should stock " << int(stock_80)
        << " sacks to meet 8 out of 10 demands." << endl;

A machine is set to pack 3 kg of ground beef per pack. Over a long period of time it is found that the average packed was 3 kg with a standard deviation of 0.1 kg. Assuming the packing is normally distributed, we can find the fraction (or %) of packages that weigh more than 3.1 kg.

      double mean = 3.; // kg
      double standard_deviation = 0.1; // kg
      normal packs(mean, standard_deviation);
      double max_weight = 3.1; // kg
      cout << "Percentage of packs > " << max_weight << " is "
        << cdf(complement(packs, max_weight)) << endl; // P(X > 3.1)

      double under_weight = 2.9;
      cout << "fraction of packs >= " << under_weight << " with a mean of " << mean
        << " is " << cdf(complement(packs, under_weight)) << endl;
      // fraction of packs >= 2.9 with a mean of 3 is 0.841345
      // This is 0.84 - less than the target 0.95.
      // Want 95% to be over this weight, so what should we set the mean weight to be?
      // KK StatCalc says:
      double over_mean = 3.0664;
      normal xpacks(over_mean, standard_deviation);
      cout << "fraction of packs >= " << under_weight
        << " with a mean of " << xpacks.mean()
        << " is " << cdf(complement(xpacks, under_weight)) << endl;
      // fraction of packs >= 2.9 with a mean of 3.06449 is 0.950005

      double under_fraction = 0.05; // so 95% are above the minimum weight mean - sd = 2.9
      double low_limit = standard_deviation;
      double offset = mean - low_limit - quantile(packs, under_fraction);
      double nominal_mean = mean + offset;
      normal nominal_packs(nominal_mean, standard_deviation);
      cout << "Setting the packer to " << nominal_mean << " will mean that "
        << "fraction of packs >= " << under_weight
        << " is " << cdf(complement(nominal_packs, under_weight)) << endl;

Setting the packer to 3.06449 will mean that the fraction of packs >= 2.9 is 0.95. Setting the packer to 3.13263 will mean that the fraction of packs >= 2.9 is 0.99, but will more than double the mean loss, from 0.0644 to 0.133. Alternatively, we could invest in a better (more precise) packer with a lower standard deviation.

      normal pack05(mean, 0.05);
      cout << "Quantile of " << p << " = " << quantile(pack05, p)
        << ", mean = " << pack05.mean()
        << ", sd = " << pack05.standard_deviation() << endl;
      cout << "Fraction of packs >= " << under_weight << " with a mean of " << mean
        << " and standard deviation of " << pack05.standard_deviation()
        << " is " << cdf(complement(pack05, under_weight)) << endl;
      // Fraction of packs >= 2.9 with a mean of 3 and standard deviation of 0.05 is 0.9772

So 0.05 was quite a good guess, but we are a little over the 95% target, so the standard deviation could be a tiny bit more.
So we could do some more guessing to get closer, say by increasing the standard deviation to 0.06:

      normal pack06(mean, 0.06);
      cout << "Quantile of " << p << " = " << quantile(pack06, p)
        << ", mean = " << pack06.mean()
        << ", sd = " << pack06.standard_deviation() << endl;
      cout << "Fraction of packs >= " << under_weight << " with a mean of " << mean
        << " and standard deviation of " << pack06.standard_deviation()
        << " is " << cdf(complement(pack06, under_weight)) << endl;
      // Fraction of packs >= 2.9 with a mean of 3 and standard deviation of 0.06 is 0.9522

Now we are getting really close, but to do the job properly, we could use a root finding method, for example the tools provided, and used elsewhere, in the Math Toolkit; see Root Finding Without Derivatives. But in this normal distribution case, we could be even smarter and make a direct calculation.

      normal s; // For standard normal distribution,
      double sd = 0.1;
      double x = 2.9; // Our required limit.
      // Then probability p = N((x - mean) / sd).
      // So if we want to find the standard deviation that would be required to meet this limit,
      // so that the p th quantile is located at x,
      // in this case the 0.95 (95%) quantile at 2.9 kg pack weight, when the mean is 3 kg:
      double prob = pdf(s, (x - mean) / sd);
      double qp = quantile(s, 0.95);
      cout << "prob = " << prob << ", quantile(p) " << qp << endl;
      // p = 0.241971, quantile(p) 1.64485
      // Rearranging, we can directly calculate the required standard deviation:
      double sd95 = abs((x - mean)) / qp;
      cout << "If we want the " << p << " th quantile to be located at " << x
        << ", would need a standard deviation of " << sd95 << endl;

      normal pack95(mean, sd95); // Distribution of the 'ideal better' packer.
      cout << "Fraction of packs >= " << under_weight << " with a mean of " << mean
        << " and standard deviation of " << pack95.standard_deviation()
        << " is " << cdf(complement(pack95, under_weight)) << endl;
      // Fraction of packs >= 2.9 with a mean of 3 and standard deviation of 0.0608 is 0.95

Notice that these two deceptively simple questions (do we over-fill or measure better?) are actually very common. The weight of beef might be replaced by a measurement of more or less anything. But the calculations rely on the accuracy of the standard deviation - something that is almost always less good than we might wish, especially if based on a few measurements.

A bolt is usable if between 3.9 and 4.1 long. From a large batch of bolts, a sample of 50 shows a mean length of 3.95 with standard deviation 0.1. Assuming a normal distribution, what proportion is usable? The true sample mean is unknown, but we can use the sample mean and standard deviation to find approximate solutions.

      normal bolts(3.95, 0.1);
      double top = 4.1;
      double bottom = 3.9;
      cout << "Fraction long enough [ P(X <= " << top << ") ] is "
        << cdf(bolts, top) << endl;
      cout << "Fraction too short [ P(X <= " << bottom << ") ] is "
        << cdf(bolts, bottom) << endl;
      cout << "Fraction OK - between " << bottom << " and " << top
        << " [ P(X <= " << top << ") - P(X <= " << bottom << ") ] is "
        << cdf(bolts, top) - cdf(bolts, bottom) << endl;
      cout << "Fraction too long [ P(X > " << top << ") ] is "
        << cdf(complement(bolts, top)) << endl;
      cout << "95% of bolts are shorter than " << quantile(bolts, 0.95) << endl;
http://www.boost.org/doc/libs/1_53_0/libs/math/doc/sf_and_dist/html/math_toolkit/dist/stat_tut/weg/normal_example/normal_misc.html
CC-MAIN-2014-10
refinedweb
2,420
58.79
According to the latest figures issued by JSC RZD, loading on the network of Russian Railways in January-March 2018 amounted to 315.7 million tons (+3.5% y-o-y), of which 112.4 million tons were accounted for by March.

Specification of cargoes loaded on the network in January-March 2018

Freight turnover

In January-March 2018 freight turnover on the Russian Railways network amounted to 636.2 bn tariff ton-km (+4.6%); freight turnover taking into account empty wagon runs came to 811.7 bn ton-km (+4.1%). Of this, 224.2 bn tariff ton-km were accounted for by March, or 285.7 bn ton-km (+4.1%) taking empty wagon runs into account.

Transhipment of containers

In the first quarter of 2018, 1 million TEU (+12.6%) were transported on the Russian rail network, of which:

in the domestic direction – 430 thou. TEU (+3.5%);
export – 274.8 thou. TEU (+16.7%);
import – 198.3 thou. TEU (+21.6%);
transit – 100.6 thou. TEU (+30.1%).

In March 2018 container transportation on the network amounted to 360.2 thou. TEU (+9.3%).

Specification of cargoes transported in containers on the network in January-March 2018

Carriage of passengers

During January-March 2018 the infrastructure of Russian Railways carried 252 million passengers (+3.2% y-o-y), of which: suburban passengers – 229.9 mln (+2.7%); long-distance passengers – 22.1 mln (+8.8%). Passenger turnover in the reporting period amounted to 23.9 bn pass-km (+3.4%).
http://navilog.ru/en/overview-of-russian-railways-network-performance-in-january-march-2018/
CC-MAIN-2021-31
refinedweb
241
61.83
Coding a Data-Driven Unit Test

A unit test functions as a data-driven test if it has the attributes that a data-driven unit test requires. You can assign these attributes and their values either by using the Properties window or by adding the attributes directly to the test's code. For more information on configuring a unit test as data-driven by editing its properties, see How to: Configure a Data-Driven Unit Test. This topic describes how to code a unit test as a data-driven unit test, using the DataSource attribute and the TestContext class.

Using Data from a Data Source

When a data-driven unit test is running, data is retrieved from the rows of a data source. The data is then available to the running unit test through the DataRow and DataConnection properties of the TestContext class. In the following example, DataRow is of the type DataRow, and LastName is the name of a valid column in the row associated with the current iteration of the data-driven test:

object lastName = TestContext.DataRow["LastName"];

While LastName refers to a column by name, you can also refer to columns by column number. For each row in the table, any number of columns can be accessed. You can, for example, retrieve several columns of data at once, use them in a calculation, and then compare the result with a final column that contains an expected return value.

Coding a Data-Driven Unit Test

To create a data-driven unit test, you can start with either a unit test that you have created by hand or a generated unit test. For more information, see How to: Author a Unit Test and How to: Generate a Unit Test.

To configure your existing unit test, add attributes that define the data source you want it to use, the way you want that data to be accessed, and the table whose rows you want your test to use as input. For more information on configuring these attributes, see How to: Configure a Data-Driven Unit Test.

For example, the following code is from a data-driven unit test that uses data from the Northwind database.

namespace TestProject1
{
    [TestClass]
    public class TestClass
    {
        private TestContext m_testContext;

        public TestContext TestContext
        {
            get { return m_testContext; }
            set { m_testContext = value; }
        }

        [TestMethod]
        [DeploymentItem("FPNWIND.MDB")]
        [DataSource("System.Data.OleDb",
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\"FPNWIND.MDB\"",
            "Employees", DataAccessMethod.Sequential)]
        public void TestMethod()
        {
            Console.WriteLine(
                "EmployeeID: {0}, LastName: {1}",
                TestContext.DataRow["EmployeeID"],
                TestContext.DataRow["LastName"]
            );
        }
    }
}

The code within the test method in this example uses values from the LastName and EmployeeID columns in the "Employees" table of the data source. The test method accesses these values through a TestContext property, which is defined in the test class that contains the method.
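For the by-number access mentioned above, the same row data can be read with an integer indexer. A small sketch; the assumption that column 0 happens to be EmployeeID depends on the table's column order:

[TestMethod]
[DeploymentItem("FPNWIND.MDB")]
[DataSource("System.Data.OleDb",
    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\"FPNWIND.MDB\"",
    "Employees", DataAccessMethod.Sequential)]
public void TestMethodByIndex()
{
    // Columns may be addressed by ordinal as well as by name.
    Console.WriteLine("First column: {0}", TestContext.DataRow[0]);
}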
https://msdn.microsoft.com/en-US/library/ms182527(v=vs.80).aspx
CC-MAIN-2016-30
refinedweb
459
50.87
The conventional split can't handle COBOL EBCDIC files because they don't have sensible \n line breaks. Translating an EBCDIC file to ASCII is high-risk because COMP and COMP-3 fields will be trashed by the translation.

If the files include Occurs Depending On, then the FTP transfer should include the RDW/BDW headers. The SITE RDW (or LOCSITE RDW) settings are essential. It's much faster to include this overhead. Stingray can process files without the headers, but it's slower.

There are two essential Python techniques for building file splitters that involve parsing.

- The itertools.groupby() function.
- The with statement.

Along with this, we need an iterator over the underlying records. For example, the stingray.cobol.RECFM subclasses will parse the various mainframe RECFM options and iterate over records, records with RDW headers, or blocks (BDW headers plus records with RDW headers).

The itertools.groupby() function can break a record iterator into groups based on some group-by criteria. We can use this to break into sequential batches.

    itertools.groupby(enumerate(reader), lambda x: x[0] // batch_size)

This expression will break the iterable, reader, into groups each of which has a size of batch_size records. The last group will have total % batch_size records.

The with statement allows us to make each individual group into a separate context. This assures that each file is properly opened and closed no matter what kinds of exceptions are raised.

Here's a typical script.

    import itertools
    import stingray.cobol
    import collections
    import pprint

    batch_size = 1000
    counts = collections.defaultdict(int)
    with open("some_file.schema", "rb") as source:
        reader = stingray.cobol.RECFM_VB(source).bdw_iter()
        batches = itertools.groupby(enumerate(reader), lambda x: x[0] // batch_size)
        for group, group_iter in batches:
            with open("some_file_{0}.schema".format(group), "wb") as target:
                for id, row in group_iter:
                    target.write(row)
                    counts['rows'] += 1
                    counts[str(group)] += 1
    pprint.pprint(dict(counts))

There are several possible variations on the construction of the reader object.

- cobol.RECFM_F(source).record_iter() -- result is RECFM_F.
- cobol.RECFM_F(source).rdw_iter() -- result is RECFM_V; RDW's have been added.
- cobol.RECFM_V(source).rdw_iter() -- result is RECFM_V; RDW's have been preserved.
- cobol.RECFM_VB(source).rdw_iter() -- result is RECFM_V; RDW's have been preserved; BDW's have been discarded.
- cobol.RECFM_VB(source).bdw_iter() -- result is RECFM_VB; BDW's and RDW's have been preserved.

The batch size is the number of blocks, not the number of records. This should allow slicing up a massive mainframe file into pieces for parallel processing.
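To see the batching behavior in isolation (a stand-alone illustration, not from the original post), the same expression can be run against a plain list standing in for the record iterator:

    import itertools

    batch_size = 4
    records = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]  # stand-in record iterator

    for group, group_iter in itertools.groupby(
            enumerate(records), lambda x: x[0] // batch_size):
        print(group, [row for index, row in group_iter])

    # 0 [10, 11, 12, 13]
    # 1 [14, 15, 16, 17]
    # 2 [18, 19]   <- the final batch holds total % batch_size records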
http://slott-softwarearchitect.blogspot.com/2014_05_01_archive.html
CC-MAIN-2016-40
refinedweb
417
60.61
Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Component/s: envinject-plugin, mask-passwords-plugin
- Labels: None
- Environment: Jenkins version 1.591; Mask Passwords plugin version 2.7.2; Environment Injector plugin 1.9

Description
Global Mask passwords are visible as plain text in the Environment Variables tab. Go to the job, click on a specific build, and on the left menu there is an Environment Variables tab. Inside this table the masked password can be read as plain text. Passwords which are passed to the job as a Password parameter are encoded in this tab.

Issue Links
- is related to JENKINS-23630 Update to new environment variable APIs - Resolved

Activity

I don't want to sound annoying or anything, but I am curious if you plan to make a change to solve this issue or if this is the intended design. We currently have a lot of pipelines with passwords exposed and would like to know if we need to redesign our pipelines/scripts or if we can wait for a fix from you. Thanks.

@Matthew Struensee I suppose the fix for JENKINS-27382 solves your issue (envinject-1.92.1)

Thank you. I ran some tests on a local dev Jenkins and everything seems to be working as expected. I will do final tests at work tomorrow for the dev pipelines there. Thank you for the quick response!

Off-Topic Response: Yes I know, that is how I was trying to test this. This also works like this for the Credentials Plugin.

When I do "Execute Windows batch command":

    @echo off
    echo MASKED_PASSWORD:%MASKED_PASSWORD%
    echo MASKED_PASSWORD:%MASKED_PASSWORD%>%WORKSPACE%/MASKED_PASSWORD_CMD.txt

I get this:

    Jenkins console output -> MASKED_PASSWORD:********
    File contents -> MASKED_PASSWORD:1234567890qwertyuiop

When I do "Invoke Gradle script" with this Groovy class:

    class MaskedPasswords {
        static void main(String[] args) {
            println "MASKED_PASSWORD: ${System.getenv().get('MASKED_PASSWORD')}"
            def file = new File("${System.getenv().get('WORKSPACE')}/MASKED_PASSWORD.txt")
            if (file.exists()) { file.delete() }
            file.withWriter('utf-8') {
                it.writeLine "MASKED_PASSWORD: ${System.getenv().get('MASKED_PASSWORD')}"
            }
        }
    }

I get this:

    Jenkins console output -> MASKED_PASSWORD: ********
    File contents -> MASKED_PASSWORD: ********

So using it via a Groovy script gets the *'s vs 1234567890qwertyuiop.

Edit: When I pass it as args via build.gradle -> main args, the ******** is translated into the dir command and my args turn into all file/folder names in the workspace vs just a string that contains 8 *'s...
https://issues.jenkins.io/browse/JENKINS-25821
CC-MAIN-2022-05
refinedweb
394
56.76
Gurch <matthew.brit...@btinternet.com> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |matthew.brit...@btinternet.com

--- Comment #18 from Gurch <matthew.brit...@btinternet.com> 2010-03-03 17:51:34 UTC ---

(In reply to comment #11)
> Because of that kind of definition, only a handful of languages (those using
> Latin script, probably) get any advantage from the edit summary :(.
>
> There may have been no update to the edit summary since it was developed in the
> very initial stage of the MediaWiki software.

It's not so much the age of the software as the inefficiency of adding and converting to larger fields in the database schema. It also doesn't help that most people equate one character with one byte and forget that users of non-Latin scripts are stuck with an encoding that takes two to three times as much storage space.

(In reply to comment #12)
> What sort of database/code refactoring were you thinking of?
>
> Maybe addition of rev_description/log_description which are pointers to
> appropriate descriptions in the text table? Or even to a page in a special
> namespace (long rationales may require fixes)?

Not sure where you're getting the "long rationales may require fixes" idea from. 200 characters in a multibyte-encoded script doesn't necessarily convey any more information than 200 characters in a Latin script; it just takes more space to store.

(In reply to comment #15)
> We'd likely have a limit of 1000 Unicode characters or less, though, so would
> using the text table be overkill? Would it make more sense to just have
> rev_comment_long/log_comment_long or something?

Nobody is suggesting anything like 1000 Unicode characters. 200 Unicode characters -- i.e. the same length Latin-script users get already -- would be more than enough, but currently scripts that are encoded with multibyte characters (almost anything non-Latin) don't get anywhere near that much.

(In reply to comment #17)
> Currently when we create a new page, if nothing is entered in the edit summary
> box, the first few lines of the content will be automatically displayed in the
> edit summary.
>
> If we are increasing the character limit, then it is better to remove the
> above functionality.

Not really. Just cap the automatic summary length at a fixed number of characters, rather than bytes. No reason why users of non-Latin scripts shouldn't get to see the first ~200 characters of their articles too.
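To put a number on the character-versus-byte point (my own illustration, not part of the thread): in UTF-8, a Latin-script summary stores at one byte per character, while a Cyrillic one needs roughly two:

    # -*- coding: utf-8 -*-
    latin = u"Fixed a typo"             # 12 characters
    cyrillic = u"Исправление опечатки"  # 20 characters

    for summary in (latin, cyrillic):
        print len(summary), len(summary.encode("utf-8"))

    # 12 12   <- one byte per character
    # 20 39   <- about two bytes per character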
https://www.mail-archive.com/wikibugs-l@lists.wikimedia.org/msg35131.html
CC-MAIN-2018-13
refinedweb
426
63.39
The problem: I have an executable and mylib.dll (a C# class library) in the same directory. I have a subdirectory "Patch" in that directory containing another version of that mylib.dll. I need to make a sample application somehow that loads the first dll, then frees it (like LoadLibrary and FreeLibrary in the Windows API, but here I use Domain.Load, Domain.Unload), and then loads another version of that dll from the "Patch" folder.

Problems:
1) when I try to load the library from the "Patch" directory, the library from the application directory is loaded (even if I add some other paths to the domain),
2) if I rename "Patch\mylib.dll" to "Patch\mylib1.dll" and try to load the second, renamed one, it is anyway resolved to its original name and again the assembly from mylib.dll in the application directory is loaded,
3) if I unload (free) mylib.dll, delete it from my computer, move "Patch\mylib.dll" to "mylib.dll", load it....... I anyway get the original dll that I deleted... it seems like there is a hash,
4) changing the "patched" dll version doesn't help; I anyway get the original dll if I loaded it once.

Help me! I'm a novice in these things. How could working with dynamic libraries, which was so simple in the Windows API, become so terribly complicated here, in .NET??? What do I do wrong? How can I free the library and load the new, changed one, but with the same name?

Simple code... maybe, something wrong here... :S

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Reflection;
    using System.IO;

    namespace PatchedApp
    {
        class Program
        {
            public static void LoadInvoke(string path)
            {
                AppDomain sandbox = AppDomain.CreateDomain("Sandbox");
                try
                {
                    AssemblyName an = AssemblyName.GetAssemblyName(path);
                    Assembly a = sandbox.Load(an);
                    Type[] mytypes = a.GetTypes();
                    BindingFlags flags = (BindingFlags.NonPublic | BindingFlags.Public |
                                          BindingFlags.Static | BindingFlags.Instance |
                                          BindingFlags.DeclaredOnly);
                    Console.WriteLine(an.Version.ToString());
                    foreach (Type t in mytypes)
                    {
                        MethodInfo[] mi = t.GetMethods(flags);
                        Object obj = Activator.CreateInstance(t);
                        foreach (MethodInfo m in mi)
                        {
                            m.Invoke(obj, null);
                        }
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.ToString());
                }
                finally
                {
                    AppDomain.Unload(sandbox);
                }
            }

            public static void Main(string[] args)
            {
                string file = Directory.GetCurrentDirectory() + "\\PatchedLib.dll";
                string patch = Directory.GetCurrentDirectory() + "\\PatchedLib1.dll";
                string temp = "C:\\_PatchedLib.dll";

                LoadInvoke(file);
                File.Move(file, temp);
                File.Copy(patch, file);
                LoadInvoke(file);
                File.Delete(file);
                File.Move(temp, file);
                Console.ReadKey();
            }
        }
    }

Well, anyway I found the solution to my problem at last:
https://www.daniweb.com/programming/software-development/threads/148468/patching-problem
CC-MAIN-2019-04
refinedweb
423
54.59
■ Chapter 6: Administering DNS in a Windows Server 2003 Network
■ Chapter 7: Implementing, Managing, and Maintaining IP Addressing
■ Chapter 8: Implementing, Managing, and Maintaining Name Resolution
■ Chapter 9: Implementing, Managing, and Maintaining Routing and Remote Access
■ Chapter 10: Managing Network Security
■ Chapter 11: Maintaining a Network Infrastructure

CHAPTER 6
Administering DNS in a Windows Server 2003 Network

In this chapter, you will learn about:
• The NetBIOS namespace
• The DNS namespace
• Fully qualified domain names
• Zones
• Host names

Welcome to the 291 section of this All-in-One certification guide. This section will prepare you for the test entitled Implementing, Managing, and Maintaining a Microsoft Windows Server 2003 Network Infrastructure. Just what exactly does that mean? Mostly, it's about getting Windows computers to talk to one another. For computers in a Windows 2003 network infrastructure to talk to one another, one of the key ingredients is the DNS service. DNS is the name resolution mechanism used by Windows Server 2003 clients to find other computers and services running on those computers. A client consults its configured DNS servers for a list of Active Directory domain controllers where it will then submit its logon credentials.

Before we get too far along, however, you need to understand a few background concepts about the network infrastructure—any network infrastructure. This chapter will explain the concepts at work in DNS. If you are already familiar with the DNS service and terms such as zones, FQDNs, iterative queries, and PTR records, you can probably move right ahead to Chapter 7, where the exam objectives are met with a discussion of TCP/IP. The topic of how to install, configure, and manage DNS in a Windows Server 2003 implementation is explored in Chapter 8.

The NetBIOS Namespace

We start our discussion of DNS with the NetBIOS (Network Basic Input Output System) namespace. Namespaces make it easier for humans to work with computers, because both the best thing and the worst thing about computers is that they work with numbers. Humans, however, like to work with names. Computers and network services are therefore given names in these namespaces, and services like the DNS service exist to resolve the names that humans prefer into the numbers computers rely on so that the computers can communicate. This is the essence of a namespace.

But no two namespaces are exactly alike. There are important differences between the DNS namespace and the NetBIOS namespace, and identifying some of the advantages and disadvantages of each namespace can help you understand them. It can also help explain why almost all computer networks today use DNS as the namespace of choice.

Prior to Windows 2000, the Windows networking model was built upon the NetBIOS namespace, not the DNS namespace. The NetBIOS namespace uses NetBIOS names. NetBIOS is actually an application-layer protocol (more on that in the next chapter) that can use the transport services of TCP/IP when used in a routed network. A NetBIOS name is a 16-byte address that identifies a NetBIOS resource on a network. The important thing to keep in mind about the NetBIOS namespace, especially when contrasting it to the DNS namespace, is that it's a flat namespace. DNS, conversely, is a hierarchical namespace. Every NetBIOS name must be unique, period.
There is no structure of parent and child namespaces that allows computer or service names to be reused. For example, if the Internet used the NetBIOS namespace, there could only be one computer with the name of www. Of course, we know that www is used millions of times, because each instance of the www service only needs to be unique in the parent domain. In the NetBIOS world, there is no such thing as a parent domain.

In the NetBIOS environment, computers and services register unique NetBIOS names by using a 15-character computer name appended with a 16th hexadecimal character that identifies the service on the network. If the computer name does not contain 15 characters, the protocol of NetBIOS dictates that the name is padded with as many spaces as necessary to generate a 15-character name. What's more, there are still some services running on the default Windows Server 2003 installation that register NetBIOS names. An example is File and Print Sharing for Microsoft Networks (also known as the Server service). At startup time, your Windows Server 2003 system registers this unique name, which is generated by using the computer name given to the system during operating system installation. You can look up the NetBIOS name your computer uses by looking at the System Properties dialog box and choosing the Network Identification tab. Click Properties, then click More to display the NetBIOS name.

Also by default, your system registers this name by broadcasting it to the network and listening to see if any computer has already registered the name. If there is no response, the system registers the name in its NetBIOS name cache. You can look at the NetBIOS name cache by using the nbtstat utility from the command prompt with the -n switch, as shown in Figure 6-1.

Figure 6-1 A list of registered NetBIOS names

If you see that it appears your Windows Server 2003 computer has registered its name more than once, as you should, it hasn't; you are really looking at different NetBIOS names, because the 16th hexadecimal character makes the names unique. However, if there were another computer on the same network trying to use the same computer name, that computer would not be able to successfully register its name at startup time, because the NetBIOS names would conflict.
Furthermore, you should not expect to be tested on your knowledge of WINS. So why did we start here? Because DNS is also a namespace, a much more flexible and scalable namespace, albeit one that is considerably more complex. The DNS Namespace The Domain Name System (DNS) is a vital component in a Windows 2003 network, as will be made clear throughout this chapter and throughout your pursuit of the MCSE certification. Without DNS, you would have to know the IP address of every computer you are communicating with. DNS exists to resolve the names of computers to IP addresses. It also aids in locating services on a network. In the DNS namespace, the computer names are known as hosts, although the word host can refer to about any network interface card (NIC) with an IP address bound to it. MCSE Windows Server 2003 All-in-One Exam Guide 6 Furthermore, DNS organizes these resources into a hierarchy of domains. The DNS you implement on a Windows Server 2003 system is built on the same standards as the DNS in use on other TCP/IP networks, such as the Internet. It provides a mechanism through which user-friendly names (such as) are resolved to IP addresses (such as 10.100.9.23) so that computers can establish a communications channel using protocols such as HTTP, FTP, or SMB. The TCP/IP protocols are the transport mechanism that carries data from one system to the other. And more important, at least in reference to the study of Windows 2003 networks, DNS provides the naming infrastructure for Active Directory. When you build your Active Directory domains, you name them in accordance with the DNS naming conventions in use in the Internet. That way, Active Directory can easily integrate with existing networks that follow the same naming conventions, using the same name resolution technologies—namely, DNS. In fact, DNS not only provides a possible namespace—an alternative to the NetBIOS namespace, say—it’s the required namespace for Active Directory. You can choose to integrate with the Internet or create a completely private Active Directory network, but you have no choice about the naming standards. As implemented in Active Directory, DNS provides a parent/child architecture for the naming of objects, and using this architecture, the DNS namespace allows for a virtually unlimited number of Active Directory objects. DNS Components There are three main components you’ll find in the Domain Name System. Not just Microsoft’s implementation, but any DNS solution. These three items are • Domain name servers • DNS resolvers • The logical namespace The domain name servers are servers running the DNS software component, which store information about a zone file (we’ll get to zones in just a bit). These name servers provide address resolution and other information about the computers that you access in both Active Directory domain and in the named domains across the entire Internet. DNS resolvers are pieces of code that are built into the operating system. These pieces of code, known also as DNS clients, request resolution of FQDNs to IP addresses by querying their configured name servers. FQDNs are defined in the next section. Finally, the namespace is the logical division of names where DNS objects are stored. The emphasis here is on the word logical. There is nothing you can point to, for example, and say, “That’s the domain.” To illustrate, ask yourself, “Where is the Microsoft domain?” You can’t really say. That’s because the DNS domain, much like an Active Directory domain, is an organizational entity. 
The only physical thing you can point to are the name servers, which are the computers that store information and service requests about the resources in the domain. Keep in mind, too, that in an Active Directory domain, the namespace can often reflect the organizational chart of a particular company, where the company name starts at the root of the namespace, and then from there breaks into domains that provide a hierarchy for your domain enterprise.

Fully Qualified Domain Names

As just mentioned, the job of a resolver is to request resolution of a fully qualified domain name (FQDN) to an IP address. A fully qualified domain name represents a host name appended to the parent namespaces in a hierarchy. In other words, within the fully qualified domain name you can see the different levels in the namespace hierarchy. Figure 6-2 helps you visualize this hierarchy—the root level namespace, top-level domains, and so on—in use throughout the Internet today.

Figure 6-2 The logical DNS hierarchy

Note that the leftmost portion of the FQDN is the host portion of the name. A host name is an alias we give to an IP address. Typically, any computer in a network is also considered a host, but other devices, such as routers and network print devices, can have names assigned to them, too. All other naming information—every name to the right of the first name—contained in the FQDN identifies the logical parent namespace where the host lives. There are organizations outside of your control that manage the topmost levels of the domain namespace. InterNIC is the organization that manages the top-level namespaces.

NOTE For full information on the InterNIC and what it governs, visit http://www.internic.net.

The InterNIC, in fact, controls the first two levels of the DNS namespace: the root-level and top-level domains. There is only one root domain, which acts as the starting point of all fully qualified domain names. This root domain is designated with a dot (.), and in days of yore, people had to type this dot when using FQDNs. Now, however, applications like Internet Explorer assume the last dot is implied, and you no longer have to enter the root domain when browsing to an Internet address.

The top-level domains, also under the governance of InterNIC, include familiar domains like .com, .edu, .gov, .net, and .mil, all of which were intended to be used in the United States. Other top-level domain names include country codes like .ca, .uk, and .au (for Canada, the United Kingdom, and Australia, respectively). New top-level domains like .tv, .law, .info, .biz, and many more are either being proposed or implemented to accommodate new entities entering the Internet fray.

To register a first-level domain, you need to ask the InterNIC whether your domain will be unique in the parent namespace, or at least have a company like register.com do so on your behalf. For example, if you want to register beanlake.com, you're out of luck. However, if you want to register beanlake.org, you may. The domain beanlake can be used multiple times; the only requirement is that it be unique in the parent-level namespace. Likewise with host names. There are most likely several thousand computers with the host name of COMPUTER1.
That’s okay, as long as the COMPUTER1 name is not reused within a single domain—for instance, there can only be one COMPUTER1 in the beanlake.com parent domain, just as there can only be one COMPUTER1 in the microsoft.com domain. Also, when you register a name for use on the Internet, you’re responsible for providing the addresses of two name servers (NS records; discussed next) that will resolve the names of hosts and other domains as well as other resources in that second-level domain. The second-level domains are controlled by you if you’re the one who registers the domain. From there you’re free to add records on your DNS servers representing individual computers in that domain space, subdomains in that domain space, or even provide the addresses of Active Directory domain controllers. Furthermore, in order to communicate with other computers, both in your own domain and across the Internet, you need to be able to resolve fully qualified domain names to an IP address. How is this done? You ask your configured DNS server for resolution. But before we examine the process for resolving a name to an IP address, we must understand what information is kept on the name servers. To store the name-to-IP-address mappings so crucial to network communication, name servers use zone files. Understanding Zones If domains represent logical division of the DNS namespace, zones represent the physical separations of the DNS namespace. In other words, information about records of the resources within your DNS domains is stored in a zone file, and this zone file exists on the hard drive of one of your name servers. So there are logical parts of the DNS namespace—the domains themselves—and there are physical parts—both the name servers and the zone files. Domain name servers are simply servers that store these zone database files, which in turn provide resolution for records in the zone files. The DNS servers also manage how those zone files are updated and transferred. Chapter 6: Administering DNS in a Windows Server 2003 Network 9 Zone files are divided into one of two basic types: • Forward lookup zone • Reverse lookup zone Provides host-name-to-IP-address resolution Provides IP-address-to-host-name resolution Generally speaking, humans are more concerned with the proper configuration of a forward lookup zone, as this is indeed more vital for successful computer communications. For example, a forward lookup zone is consulted when a domain user in a Windows Server 2003 Active Directory domain is looking for a domain controller where logon credentials can be submitted. And let’s not forget the web browser, which also relies on forward lookup zoned to resolve FQDNs such as or into IP addresses, either when typed in the address bar or coded within a hyperlink. As administrators everywhere know, users without working Internet access are unhappy users. Reverse lookup zones, on the other hand, are generally used by utilities like nslookup. In fact, nslookup, which we will discuss in Chapter 8, requires a properly configured reverse lookup zone in order to work like it should. When a zone file is first created on a DNS server, that server is said to be authoritative for that zone. Then, for each child DNS domain name included in a zone, the zone becomes the authoritative source for the resource records stored in that child domain as well. 
This means that the DNS server can provide resolution for multiple domains within a zone file, and all changes to the resource records in both domains are made to the authoritative zone it stores. Additionally, keep in mind that a zone can be authoritative for a single domain or multiple domains. This can be a little confusing because it's possible that one zone file can be authoritative for multiple domains. If you have a DNS hierarchy that, for administrative reasons, you have broken into multiple domains, yet those domains don't have vast numbers of resources, it may be good planning to store records about both namespaces, or all three or all five namespaces, on a single DNS server. In this example, this single zone would be authoritative for multiple portions of the DNS namespace. It usually helps if you can remember the distinction between the logical part of DNS (the domains) and the physical part (the zones). In Figure 6-3, name server A stores a zone file that's authoritative for two domains, while name server B is authoritative for only a single domain.

Figure 6-3 A zone can be authoritative for one domain or multiple domains.

Zone Categories

The DNS zones kept on Windows Server 2003 computers can be further broken down into one of three categories. For each forward or reverse lookup zone, the file will be one of these types of zones:
• Primary zone
• Secondary zone
• Stub zone

What's more, all of the zones you can create in Windows 2003 can be integrated in Active Directory. Each of these zone categories is discussed in Chapter 8.

Resource Records Stored in a Zone File

Each record stored in a zone file has a specific purpose. Some of the records set the behavior of the name server; others have the job of resolving a host name or service into an IP address. Table 6-1 explains the most common resource records you will administer, in no particular order.

Table 6-1 Resource Records Stored in a Zone File

A (Host): A host record populates a forward lookup zone and is the workhorse record of a DNS zone. It provides host-name-to-IP-address resolution.

PTR (Pointer): This record populates the reverse lookup zone files, if configured, and does just the opposite of an A record: it provides IP-address-to-host-name resolution.

SRV (Service): A service record helps identify services running in a domain namespace. When a user submits a domain logon, his DNS server must resolve the domain to the IP address of a domain controller. The SRV records help perform this task.

MX (Mail Exchange): This record identifies the IP address of a mail server for a given domain. All mail destined for a domain such as yahoo.com is dropped at the IP address specified by the MX record in the zone files authoritative for the yahoo.com domain.

NS (Name Server): These specify the name servers that are authoritative for a given portion of the DNS namespace. These records are essential when DNS servers are performing iterative queries to perform name resolution.

SOA (Start of Authority): This resource record indicates the name of origin for the zone and contains the name of the server that is the primary source for information about the zone. The information in an SOA record affects how often transfers of the zone are done between servers authoritative for the zone.
It is also used to store other properties such as version information and timings that affect zone renewal or expiration.

CNAME (Canonical Name): Also referred to as an alias record, the CNAME can be used to assign multiple names to a single IP address. For example, the server hosting the site is probably not named www, but a CNAME record exists for resolution of www to an IP address all the same. The CNAME record actually points not to an IP address, but to an existing A record in the zone.

Updates to Windows Server 2003's DNS

DNS is open standards–based. Modifications and improvements are constantly being developed by the open standards community through a series of Requests for Comment (RFCs). These RFCs help shape DNS into a better name resolution service with each iteration, and help it integrate with the improvements in other areas of network communications. As such, there have been several enhancements to the DNS features available with the Windows 2003 implementation of DNS, especially when compared to Microsoft's earlier deployments of the DNS service. Some of the improvements include the following:

• Conditional forwarders: DNS queries can be sent to specific DNS servers if they meet a defined set of conditions. For example, the 2003 DNS server can be set so that all queries of FQDNs that end in whatisthematrix.com are forwarded to a specific DNS server.
• Stub zones: Stub zones keep a DNS server that hosts a parent zone aware of the authoritative DNS servers for its child zone. This improves the efficiency of DNS name resolution.
• Enhanced DNS zone replication in Active Directory: You now have four replication choices for Active Directory–integrated DNS zone data.
• Enhanced DNS security features: Windows Server 2003's DNS now provides greater flexibility when administering security for the DNS server, DNS client, and DNS zone information data.
• Enhanced debug logging: The DNS server has been written with enhanced debug logging options to aid in troubleshooting of DNS name resolution.

Resolving a Host Name

Now that you have an understanding of the components of the DNS infrastructure, you need to understand how a DNS client resolves an FQDN to an IP address. There are actually many ways. A client can sometimes answer a query using information cached from a previously successfully resolved name. In fact, this is the first location the DNS resolver checks. If the check of the cache is unsuccessful in providing IP address resolution, the resolver gets help from its configured DNS server, as outlined in the next section. This process is known as a recursive query. The DNS server in turn can use its own cache of resource record information to answer a query. Barring a quick resolution from the DNS server's cache, the server begins a "walk" of the DNS tree through a series of iterative queries. The next section describes the navigation through the DNS namespace.

Forward Lookup Resolution of FQDNs

Any time you enter a fully qualified domain name into an application, your operating system uses the resolver piece of code to query its configured DNS server (or servers) to get an IP address for the name you have just entered. If your locally configured DNS server has a zone file that contains a record for the resource you're trying to browse to (or if it's contained in the server's cache), that resource's IP address is returned to your resolver.
In most cases, the zone file is not going to hold the IP address for the record that you're trying to look up. In that case, the DNS server will resolve that name to an IP address on your behalf. The DNS server does that by walking the DNS hierarchy. For example, if you type ftp.atchison.beanlake.com into a browser, the browser needs to look up this fully qualified domain name using its resolver. The computer doesn't care what the name of the computer is; in order to communicate, it needs the IP address. The first place it looks for resolution is its configured DNS server. This query to the locally configured DNS server is called a recursive query. If the local DNS server does not have an A record that maps to an IP address, the client's local DNS server—if it's configured to do so—will begin looking through the entire DNS hierarchy on behalf of the DNS client. The DNS server performs the name resolution; the DNS client sits there and waits for a response to its recursive query.

The client's local DNS server then talks to other DNS servers throughout the DNS hierarchy using a series of iterative queries. It begins with a check with one of its configured root-level name servers. Every Windows Server 2003 installation of DNS comes with several root-level name servers already known, and the server will query one of the servers in the list unless it's been configured to be a root-level server of a private network not directly connected to the Internet. You can access this list of root-level name servers by opening your DNS console, right-clicking your server, and choosing Properties. The Root Hints tab, shown in Figure 6-4, contains the entries for the root-level name servers. In Chapter 8, we investigate the DNS console in greater detail.

Figure 6-4 The preconfigured list of root-level name servers

The root-level name servers won't know how to resolve ftp.atchison.beanlake.com to an IP address either, but they'll know to steer the client's local DNS server to a top-level name server. Subsequently, one of the top-level name servers in the .com level namespace will be asked the same question by the local DNS server: "What's the IP address for ftp.atchison.beanlake.com?" They won't have records for that resource in their zone files either, so they'll return their best answers, which in this case will be the NS records of the DNS servers authoritative for the beanlake.com domain. And so it goes. When the client's local DNS server finally locates the IP address for the DNS servers authoritative over the beanlake.com domain, the local DNS server will then ask those servers for the record for the name ftp. At long last, the DNS server that's just been queried—the one responsible for the zone file that stores host (A) records for the atchison.beanlake.com zone—will indeed have the IP address for that ftp name, and will return said IP address to the local DNS server. The local DNS server will hand the IP address back to the resolver, and then communication can be established from one IP address to another IP address (the FTP client and the FTP server in this case). This procedure is diagrammed in Figure 6-5.

After the resolution is complete, the resolver caches the successful resolution in its DNS cache. If another request for a resource with the same name is made, the name can be resolved without a query through DNS. Likewise, the entry is usually cached on the DNS server for the same purpose.
If another client of the DNS server were to request resolution before the entry's time-to-live (TTL) expires, the name would be resolved without walking the DNS tree.

Figure 6-5 Iterative queries "walk" the DNS hierarchy

Recursive Queries and Iterative Queries

As mentioned, the process takes place with two types of queries. The client asks its local DNS server using a recursive query. A recursive query says, basically, give me the answer or tell me that you can't find it. It's a pass/fail type of proposition. The other type of query, where other DNS servers are talking to each other as the local DNS server is walking the domain tree, is called an iterative query. When your DNS server uses an iterative query, it's asking for a "best guess." So the root-level name servers don't have the IP address for ftp.atchison.beanlake.com, but they will give you their best response, which is, "I don't have it, but I'll send you down to the .com level name servers—you can go ask them." If you're asking for something in the .com level namespace, the root-level name servers aren't going to send you the IP address of .net name servers or .gov name servers. They will give you the NS records for the name servers that govern the .com level namespace.

Reverse Queries

What was just described was the forward lookup process, where a client is looking for a name-to-IP-address mapping. This is the most common type of lookup, in which an IP address is the expected resource data that is to be provided by the response. But DNS also provides a mechanism to extract names from IP addresses. This enables clients to use a known IP address during a name query and look up a computer name. Instead of asking, "What's the IP address for this name?" a reverse query asks, "What's the name of the computer with the IP address of 10.169.254.23?" This is more common with IP diagnostic and troubleshooting utilities like nslookup, which uses the IP address of the client's configured DNS server to query for resource records on that server. It is also used by reporting utilities that might collect information about who is accessing a particular web site. When HTTP "request" packets (they're technically HTTP gets) enter a web server, the information in the packet contains the IP address of the requester, but not the requester's computer name. So how do you find out who is hitting your site? With the reverse lookup zones. Utilities use reverse lookup zones to pinpoint either certain users or certain domains that are most frequent guests of the web site.

When DNS was first designed, it wasn't built to support this type of IP-address-to-name query. If you look at Figure 6-6, you see an FQDN, with an arrow representing the flow of the FQDN from general to specific. Below it, you see an IP address with the same arrow. As you can see, the FQDN resolves from the big namespace to the host from right to left, while the IP address identifies the network and then the host from left to right.

Figure 6-6 The "flow" of an FQDN and an IP address

So a modification was made to the DNS namespace to get IP addresses to look like FQDNs. To support this reverse lookup query, there's a special domain called the in-addr.arpa domain, which is an abbreviation for "inverse-address, Advanced Research Projects Agency." (ARPA was the Department of Defense agency that was instrumental in the development of the Internet.) The in-addr.arpa domain is now defined in RFC standards and is reserved in the Internet DNS namespace to provide a practical way to perform reverse queries. To create the reverse namespace, subdomains within the in-addr.arpa domain are formed using the reverse ordering of the numbers in the dotted-decimal notation of IP addresses. In other words, IP addresses are flipped around so that a query for the host name for 200.23.102.9 becomes a query resembling an FQDN, like so:

9.102.23.200.in-addr.arpa

Notice that the order of the host's IP address will be reversed when building your reverse lookup zone files.
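As a concrete aside (an illustration added here, not part of the original chapter), both lookup directions can be exercised from any scripting language; the following minimal Python sketch uses the standard socket module, and the host name in it is just a placeholder:

    import socket

    # Forward lookup: host name -> IP address (what the resolver asks for).
    ip = socket.gethostbyname("www.example.com")  # placeholder host name
    print(ip)

    # Reverse lookup: IP address -> host name. Under the hood this becomes a
    # PTR query against the in-addr.arpa tree described above.
    name, aliases, addresses = socket.gethostbyaddr(ip)
    print(name)

    # Building the in-addr.arpa query name by hand:
    def reverse_name(ip_address):
        octets = ip_address.split(".")
        return ".".join(reversed(octets)) + ".in-addr.arpa"

    print(reverse_name("200.23.102.9"))  # prints 9.102.23.200.in-addr.arpa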
The IP addresses of the DNS in-addr.arpa tree can be delegated to companies as they are assigned a specific or limited set of IP addresses within the Internet-defined address classes.

NOTE Forward lookup zones are built with A records, but reverse lookup zones are built with PTR records, which, strangely enough, point to A records in existing forward lookup zones. It doesn't necessarily matter in what order you create your zones, but you definitely want a forward lookup zone ready when you're populating the reverse lookup zone. In fact, you can populate the reverse lookup zone with PTR records at the same time you add host (A) records with a single check box.

Chapter Review

The Domain Name System is the central name resolution component of a Windows 2003 network, and is even required when it's time to implement Active Directory. In this chapter, you were either introduced to or reviewed some of DNS's underlying concepts. We started with a look at the NetBIOS namespace, which is an alternate, flat namespace that can be used in Windows networking, and in fact was in versions prior to Windows 2000. It is not, however, used to resolve names on the Internet. We then looked at how the DNS hierarchy is put together. We looked at the purpose of domains, which are the logical divisions of the DNS namespace, and then at the job of the zone files, which hold the resource records that resolve (among other things) host names to IP addresses. We also looked at some of the improvements that have been made recently to DNS, which have been integrated into Windows Server 2003's implementation. In Chapter 8, we'll build upon this foundation and look at the many management tools and tasks needed to maintain your organization's Windows Server 2003 DNS deployment.

Because there aren't that many things you will be directly tested on in this chapter, the review questions are fewer than usual. Don't worry, we'll make it up in chapters to come.

Questions

1. You are installing the root domain controller for your forest. You have decided that the fully qualified domain name for the computer will be birmingham1.taylortoys.com. The system prompts you with a suggested NetBIOS name for the computer. Which NetBIOS name is prompted?
A. birmingham1.taylor
B. birmingham1
C. taylortoys
D. birmingham1.taylortoys.com

2. You are installing the root domain controller for your forest. You've decided that the fully qualified domain name for the computer will be birmingham1.taylortoys.com. The system prompts you with a suggested NetBIOS name for the computer. You decide to use a different name. Which names could you use? (Choose all that apply.)
A. birmingham.1
B. birminghamserver1
C. bham
D. birmingham1
3. You are the Domain Admin of a Windows Server 2003 network for a company named Taylortoys. You currently use the same DNS name, taylortoys.com, on both sides of your firewall. Management is concerned that a breach in the firewall could expose the Active Directory. Which other names could you use on the inside of your firewall? (Choose all that apply.)
A. taylortoys.com.ad
B. ttoys.ad
C. taylortoys.toys.ad
D. whateveryouwant.com

4. You are the Enterprise Admin of a Windows Server 2003 network. You are currently using only standard primary and standard secondary zones. Another administrator asks you what would be required to upgrade all zones to Active Directory integrated zones. Which statement is true?
A. All servers in the forest would have to be Windows Server 2003.
B. All servers in the forest would have to be Windows 2000 or Windows Server 2003.
C. All DNS servers would have to be domain controllers.
D. The domain will need to be in at least Windows 2000 native mode.

5. You are the Domain Admin of a Windows Server 2003 network named eaglesinc.com.ad. You have a computer in the domain named computer1. What is the fully qualified domain name of this computer?
A. eaglesinc.com.ad.computer1
B. eaglesinc.computer1
C. computer1.ad.eaglesinc.com
D. computer1.eaglesinc.com.ad

Answers

1. B. The NetBIOS name is used by Windows Server 2003 computers for backward compatibility with legacy clients and legacy applications. It can be up to 15 characters in length and cannot contain any hierarchical symbols such as "/" or ".". The system will add a 16th character that indicates what service that name provides. Computers that supply multiple services to the network will have multiple NetBIOS names. The system will suggest a NetBIOS name for the computer based on the prefix of the fully qualified domain name.

2. C and D. You can choose any NetBIOS name that meets the parameters and that is unique in the forest. In this case, since the computer established a forest root, uniqueness is not an issue.

3. A, B, C, and D. Management's concerns in this scenario are valid. Since you are using the same name on both sides of the firewall, a breach in the firewall could expose the Active Directory. You have two other options. You could use a name appended to the current name (such as taylortoys.com.ad) or you could use a completely different name. Each strategy has its own advantages and disadvantages.

4. C. Active Directory integrated zones replicate their databases along with Active Directory replication. Therefore, all servers that host Active Directory integrated zones must be domain controllers. There is no functional level requirement and no requirement that all servers be Windows 2000 or Windows Server 2003. However, all of the DNS servers would need to be Windows 2000 or Windows Server 2003 domain controllers.

5. D. A fully qualified domain name consists of a prefix and a suffix. The prefix is the name of the computer or other object (user). The suffix is the full name of the domain in which the object is contained. In this case, the prefix is computer1 and the suffix is eaglesinc.com.ad.
https://www.scribd.com/document/6885159/ch06
CC-MAIN-2016-40
refinedweb
6,603
60.75
Hey Danny and all,

Alberto told me that there was a password entry box in TkInter. Can anyone tell me about that, please?

Thanks,
Nathan Pinno, Owner/operator of The Web Surfer's Store.
MSN Messenger: falcon3166 at hotmail.com
Yahoo! Messenger: spam_swatter31
AIM: f3mighty
ICQ: 199020705

-----Original Message-----
From: Danny Yoo [mailto:dyoo at hkn.eecs.berkeley.edu]
Sent: November 28, 2005 2:57 PM
To: Nathan Pinno
Cc: Albertito Troiano; Tutor Mailing List
Subject: Re: [Tutor] Is it a good idea to use TKInter to change my password program into a GUI?

On Sun, 27 Nov 2005, Nathan Pinno wrote:

> Is it a good idea to use TKInter to change my password program into a
> GUI? I know it needs improvements, and I've noted them below:

Hi Nathan,

Yes, some of it should be usable if it were in a GUI. The easy way to pick out which functions will and won't be useful is this: which functions use print and input statements? If you exclude those, then what's left will be useful for both your GUI and terminal programs.

Actually, that's not quite accurate. For functions that do use print statements, it's possible to split off the console-driven stuff from the pure computation stuff, so there's actually quite a bit you can reuse. Let's go into this.

load_file() and safe_file() are directly reusable, since they don't interact with the user. Let's look at something that mixes computation with user interaction:

> def add_site():
>     print "Add a login info card"
>     site = raw_input("Site: ")
>     ID = raw_input("User ID and passcard, seperated by a space: ")
>     sitelist[site] = ID

It's possible to break this down into two parts: the part that asks for login info:

######
def ask_for_login_info():
    print "Add a login info card"
    site = raw_input("Site: ")
    ID = raw_input("User ID and passcard, seperated by a space: ")
    return (site, ID)
######

and the part that really does the gruntwork of entering into the site list:

######
def add_to_sitelist(site, ID):
    sitelist[site] = ID
######

Because this example is so small, doing the breakup this way is a bit silly, so maybe this is overkill for your program. But, in general, when we design a program to be used from both the console and the GUI, we'd break out the direct user interface stuff into a separate set of "user interface" functions, and have those interface functions reuse the common "model" functions that do the underlying work.

In a GUI framework like Tkinter, the ask_for_login_info() function might use a "dialog box". On a first pass to GUI-ify your program, each 'print' statement could be replaced with something like a printDialogMessage():

######
import tkSimpleDialog
import Tkinter

def printDialogMessage(root, msg):
    """Uses a dialog window to display a message.

    If ok is pressed, returns True. If cancel is pressed, returns False.
    """
    class Dialog(tkSimpleDialog.Dialog):
        def body(self, master):
            Tkinter.Label(master, text=msg).pack()
        def apply(self):
            self.result = True
    d = Dialog(root)
    if d.result:
        return True
    return False
######

Similarly, we can write something (let's call it readDialogInput) that simulates the console raw_input() function. And if you replace each use of 'print' with 'printDialogMessage' and 'raw_input' with 'readDialogInput', we could argue that we have a GUI program.

But if we take this route and just stop here, then this is no better than the console program. Getting GUIs right is more than just taking existing programs and putting nice shiny windows on them: it involves good user interface design that takes advantage of the things that GUIs get right.
I don't know if there's a quick-and-dirty way to design such GUIs, though. You might find the Tkinter documentation helpful in getting started with Tkinter programming. Hope this helps!
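For the record, the widget Nathan asked about at the top of the thread is Tkinter's standard Entry widget with its show option. A minimal sketch (my own illustration in the thread's Python 2-era style, not part of the original exchange):

######
import Tkinter

root = Tkinter.Tk()
Tkinter.Label(root, text="Password:").pack()

# show="*" echoes an asterisk for each character typed.
password_entry = Tkinter.Entry(root, show="*")
password_entry.pack()

def report():
    print "You typed:", password_entry.get()

Tkinter.Button(root, text="OK", command=report).pack()
root.mainloop()
######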
https://mail.python.org/pipermail/tutor/2005-November/043656.html
CC-MAIN-2016-50
refinedweb
625
60.35
The Send Method: What it does and when to use it

First, let's go over the need-to-know essentials. When you're calling a method on an object using dot (.) notation, like in the example below, you're essentially passing a message to it.

    "Paris, France".downcase #=> "paris, france"

* The string is the object.
* The dot is how we send the object a message or command.
* Downcase, the argument, is the message.

We can accomplish the same with the send method.

    "Paris, France".send(:downcase) #=> "paris, france"

.send allows you to send a method call this way: send(:method_to_call)

When to use the send method:

Seeing it in an example makes it easier to comprehend when you should use it. In the Student Scraper lab in the OO Ruby section, we had to define an initialize method for our Student class that a) takes in an argument of a hash and b) uses metaprogramming to assign the newly created student attributes and values in accordance with the key/value pairs of the hash.

    def initialize(student_hash)
      student_hash.each do |attribute, value|
        self.send("#{attribute}=", value)
      end
      @@all << self
    end

    student_hash #=> {:name=>"Alex Patriquin", :location=>"New York, NY"}
    @@all #=> [#<Student:0x00000003ad0270 @name="Alex Patriquin", @location="New York, NY">]

Here, the send method is essentially taking the attribute-and-value pairs in the existing student hash and churning out instance variables and value assignments.

Send method uses in Ruby:
1. Allows you to assign attributes
2. Allows you to call methods by name with arguments
3. Allows you to call methods without explicitly writing each individual method name every time

Sources:
* I found this article on metaprogramming in Ruby really helpful for understanding what the send method/command does and when to use it.
* Found this video very helpful in really understanding the send method.
* And this forum covers most of the send use cases.
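A quick note on use 2 above (my own examples, not from the original article): because send takes the method name plus any arguments, even operators can be called this way:

    5.send(:+, 3)                 #=> 8
    "paris".send(:sub, "p", "P")  #=> "Paris"
    [1, 2, 3].send(:push, 4)      #=> [1, 2, 3, 4]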
https://joannpan.medium.com/the-send-method-what-it-does-and-when-to-use-it-549c46e7b726
CC-MAIN-2022-40
refinedweb
314
63.39
For this function, I have to count how many even numbers there are in this array. It compiles fine and I get no warnings, but when I run it, nothing gets outputted! I'm baffled because I've done this function before and do not know why this is happening. As always, any help is appreciated.

Code:
    #include <iostream>
    using namespace std;

    int EVEN(int a[], int size); // prototype

    int main()
    {
        const int size = 10;
        int a[size] = {1,2,3,4,5,6,7,8,9,10};
        cout << "The number of evens in this array are";
        EVEN(a, size); // call
        return 0;
    }

    int EVEN(int a[], int size) // print number of even elements
    {
        int e = 0;
        for (int i = 0; i < size; i++) {
            if ((a[i] % 2) == 0)
                e++;
            return e;
        }
        return 0;
    }
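For what it's worth, two issues stand out in the posted code: the return e; statement sits inside the for loop, so EVEN() returns after examining only the first element, and main() never prints the value that EVEN() returns. A corrected sketch (one possible fix, not necessarily the original poster's eventual solution):

Code:
    #include <iostream>
    using namespace std;

    // Count the even elements and return the count.
    int EVEN(int a[], int size)
    {
        int e = 0;
        for (int i = 0; i < size; i++) {
            if ((a[i] % 2) == 0)
                e++;
        }
        return e; // return only after the whole array has been examined
    }

    int main()
    {
        const int size = 10;
        int a[size] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        cout << "The number of evens in this array is " << EVEN(a, size) << endl;
        return 0;
    }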
http://cboard.cprogramming.com/cplusplus-programming/54635-array-involving-even-numbers.html
CC-MAIN-2014-52
refinedweb
134
67.49
Created attachment 43380 [details] patch (only for Chromium) All the implementations of two methods in TextBreakIteratorInternalICU return "" and "en-US", respectively. There are FIXME comments that they should return the OS UI locale. It works as long as the OS UI locale is the same as the UI locale of a browser. In case of Chrome on Windows, the UI locale of Chrome can be different from the OS UI language. That is, English Windows users can run Chrome in Japanese. And, Chrome's browser process already passes along that information to a renderer process, which is in turn available in WebCore::defaultLanguage(). So, at least in Chrome, we can return that in two methods in TextBreakIteratorInternalICU. I'm tempted to remove two methods and just use WebCore::defaultLanguage() in all the call sites on all ports, but I'm not sure of the reasoning behind having them separate. So, I'm just making changes to Chrome's implementation of TextBreakIteratorInternalICU. If we can agree on the above point, I'll extend the patch to include other ports. With this change, for instance, Swedish Find-in-Page behaves as expected when CHrome is run in Swedish. Comment on attachment 43380 [details] patch (only for Chromium) > const char* currentSearchLocaleID() > { > - // FIXME: Should use system locale. > - return ""; > + // Chrome's UI language can be different from the OS UI language on Windows. > + // We want to return Chrome's UI language here. > + return defaultLanguage().ascii().data(); > } This will create a local CString, take a pointer to its contents, then destroy the CString and return a pointer to the destroyed memory. So you definitely can't land this as-is. The concept is probably OK for Chromium. For Mac OS X it would not be good to use defaultLanguage() for everything, so it's good that you did not try to change all ports to work this way. Mac OS X has a separate preference for "Order for sorted lists" that should be used for searches but not text breaking. Created attachment 46147 [details] updated patch per darin's comment Darin, Thank you for the clarification about other ports and also catching my stupid mistake. Can you take another look? Attachment 46147 [details] did not pass style-queue: Failed to run "WebKitTools/Scripts/check-webkit-style" exit_code: 1 WebCore/platform/text/chromium/TextBreakIteratorInternalICUChromium.cpp:23: Found other header before a header this file implements. Should be: config.h, primary header, blank line, and then alphabetically sorted. [build/include_order] [4] WebCore/platform/text/chromium/TextBreakIteratorInternalICUChromium.cpp:28: Place brace on its own line for function definitions. [whitespace/braces] [4] Total errors found: 2 Comment on attachment 46147 [details] updated patch per darin's comment > #include "config.h" > +#include "CString.h" > +#include "Language.h" > +#include "PlatformString.h" > #include "TextBreakIteratorInternalICU.h" As the style-bot said, please add the new includes in a new paragraph after a blank line. The file's own header goes first in the same paragraph with config.h. > +static const char* UILanguage() { Brace goes on a separate line. > + // Chrome's UI language can be different from the OS UI language on Windows. > + // We want to return Chrome's UI language here. > + static WebCore::CString locale; > + locale = WebCore::defaultLanguage().latin1(); This isn't quite right. The locale is a global variable, but you're re-initializing it with new data every time through the function. 
Merging these two lines of code into one will fix that. Created attachment 46314 [details] updated patch Thank you for bearing with me. Here's an updated patch. Comment on attachment 46314 [details] updated patch > +static const char* UILanguage() > +{ > + // Chrome's UI language can be different from the OS UI language on Windows. > + // We want to return Chrome's UI language here. > + static WebCore::CString locale = WebCore::defaultLanguage().latin1(); > + return locale.data(); > +} Normally I would put a function like this inside the WebCore namespace rather than outside. Any reason it's outside? You could avoid those two WebCore prefixes that way. Most places in WebCore we try to avoid global destructors that run on process exit. In fact, for code in the Mac OS X version, it's required. The idiom is to use the DEFINE_STATIC_LOCAL macro. Created attachment 46383 [details] update to use the macro for static local variable thanks for the review. I've seen that macro but was mistaken for that to be only used with a literal value. I updated the patch to use the macro for static local (to leak at the end of a process) and pulled in the helper function into WebCore ns. I'm carrying along r+ for the updated patch and will plus commit-queue once chrome-bot comes back green (although I locally built and ran some tests) and are chromium bugs related to this one. Comment on attachment 46383 [details] update to use the macro for static local variable Yes, looks good. Comment on attachment 46383 [details] update to use the macro for static local variable with r+, it looks like bots are not supposed to run. it should be ok, though and I'm going ahead with c-q +. Comment on attachment 46383 [details] update to use the macro for static local variable Clearing flags on attachment: 46383 Committed r53159: <>
https://bugs.webkit.org/show_bug.cgi?id=31597
CC-MAIN-2019-30
refinedweb
872
66.64
Java JAXP, Implementing Default XSLT Behavior in Java
Java Programming Notes # 2206

- Preface
- Preview
- Some Details Regarding XSLT
- Discussion and Sample Code
- Run the Program
- Summary
- What's Next?
- Complete Program Listings

Preface

In this lesson, I will explain default XSLT behavior, and will show you how to write Java code that mimics that behavior. The resulting Java code serves as a skeleton for more advanced transformation programs.

What is JAXP?

JAXP is an API designed to help you write programs for creating and processing XML documents. JAXP is very important for many reasons, not the least of which is the fact that it is a critical part of Sun's Java Web Services Developer Pack (JWSDP). As you are probably already aware, web services are expected by many to be a very important aspect of the Internet of the future.

This lesson is one in a series designed to help you understand how to use JAXP and how to use the JWSDP. The first lesson in this series was entitled Java API for XML Processing (JAXP), Getting Started. The previous lesson was entitled Java JAXP, Exposing a DOM Tree.

What is XML?

XML is an acronym for the eXtensible Markup Language. I will assume that you already understand XML, and will teach you how to use JAXP to write programs for creating and processing XML documents.

What are XSL and XSLT?

I provided quite a lot of background material on XSL and XSLT in a previous lesson in this series. A brief review of that material follows. XSL is an acronym for Extensible Stylesheet Language. XSLT is an acronym for XSL Transformations. The W3C is a governing body that has published many important documents on XML, XSL, and XSLT. XSLT is commonly used for:

- Transforming non-XML documents into XML documents.
- Transforming XML documents into other XML documents.
- Transforming XML documents into non-XML documents.

When JAXP parses an XML document, the document can be represented in memory as a DOM tree, exposed through an object of the interface type Document. Document and its superinterface Node declare numerous methods that can be used to navigate, extract information from, modify, and otherwise manipulate the DOM tree. As is always the case, classes that implement Document must provide concrete definitions of those methods.

Many operations are possible

Given an object of type Document, there are many methods that can be invoked on the object to perform a variety of operations. For example, it is possible to write Java code to move nodes from one location in the tree to another, thus rearranging the structure of the XML document represented by the Document object. It is possible to delete nodes, and to insert new nodes. It is also possible to recursively traverse the tree, extracting information about the nodes along the way.

Two ways to transform an XML document

There are at least two ways to transform the contents of an XML document into another document:

- By writing Java code to manipulate the DOM and perform the transformation.
- By using XSLT to perform the transformation.

It should be possible to write Java code to perform any transformation that can be performed using XSLT, but the reverse may not be true.

General description of XSLT

Here is a partial quotation from XML In A Nutshell (which I highly recommend), by Elliotte Rusty Harold and W. Scott Means. This quotation provides a general description of XSLT: "... . ... Documents can be transformed using a standalone program or as part of a larger program that communicates with the XSLT processor through its API." In this lesson, I will provide and explain a larger program that communicates with the XSLT processor through its API.
The program will also execute Java code that mimics the transformation provided by XSLT.

Advantages and disadvantages

As is usually the case, there are advantages and disadvantages to both approaches to document transformation.

A large library of functions

With the XSLT transformation process, you write a stylesheet, which is somewhat analogous to a driver program in a more conventional programming environment. That driver program accesses and uses functions from a large library of pre-written functions to perform a series of well-defined operations on the DOM tree to produce the desired transformation. (XSLT authors don't call them functions. Rather, they are called XSLT elements. According to XML In A Nutshell, there are 37 standard XSLT elements. Also according to XML In A Nutshell, most XSLT processors provide various nonstandard extension elements and allow you to write your own extension elements in languages such as Java.)

Is there a similar library of Java methods?

I am not aware of a library of Java methods in the public domain that emulates the 37 standard XSLT elements. However, I freely admit that such a library may exist and I may simply not know about it. Therefore, to write a Java program that emulates an XSLT transformation, you need to either:

- Create your own library of Java methods and use that library with your Java code to perform the transformation, or
- Start from scratch each time and write a custom program to perform the transformation.

A skeleton library of Java methods

This lesson, and several lessons to follow this one, will show you how to write the skeleton of a Java library containing methods that emulate the most common XSLT elements. Once you have the library, writing Java code to transform XML documents consists simply of writing a short driver program to access and use those methods. Thus, given the proper library of methods, it is no more difficult to write a driver program than it is to write a stylesheet.

If you already know a lot about XSLT, you may learn a little about Java by studying these lessons. If you already know a lot about Java, you may learn a little about XSLT. If you don't already know either Java or XSLT, you may learn a little about both.

Debugging XSLT can be difficult

While writing a Java program to emulate an XSLT transformation may require you to write more code than writing a stylesheet, in my opinion it is much easier to debug a Java program that fails to deliver the desired result than it is to debug an XSL stylesheet that fails to deliver. This is an advantage of using Java code over XSLT. I find XSLT to be extremely difficult to debug (but I haven't attempted to use a fancy XSLT debugger, several of which are freely available on the Internet).

Java provides more detailed control

Another difference in using Java code relative to XSLT has to do with the detailed control of the transformation process. I believe (but cannot prove) that it is possible to write Java programs to provide transformations that are not possible using standard XSLT elements. If I am correct, this may be another advantage of writing Java code over using XSLT.

Some Details Regarding XSLT

The following is a partial quotation from XML In A Nutshell. (Note that I will be referring to this excellent book several more times in this lesson. For brevity, I will refer to it simply as Nutshell.) "XSLT is an XML application for specifying rules by which one XML document is transformed into another XML document. An XSLT document -- that is, an XSLT stylesheet -- contains template rules.
Each template rule has a pattern and a template. An XSLT processor compares the elements and other nodes in an input XML document to the template-rule patterns in a stylesheet. When one matches, it writes the template from that rule into the output tree. ... XSLT uses the XPath syntax to identify matching nodes."

My explanation

Let's see if I can explain this process in my own words. Assume that an XML document has been parsed so as to produce a DOM tree in memory that represents the XML document. (The creation of a DOM tree in this manner was discussed in several previous lessons in this series.) An XSLT processor starts examining the DOM tree at its root node. It obtains instructions from the XSLT stylesheet telling it how to navigate the tree, and what to do with each node that it encounters along the way.

Finding matching template rules

As each node is encountered, the processor searches the stylesheet looking for instructions on how to treat that node. (These instructions will be referred to later as template rules.) If the processor finds instructions that match the node type, it performs the operations indicated by the instructions. If it doesn't find matching instructions, it executes built-in instructions appropriate to that node. (An XML document can contain seven different types of nodes. The different types will be identified later. This lesson will describe and explain the built-in instructions for six of those seven node types. Java code will be developed that emulates the built-in instructions for each of the six types of nodes.)

Establishing the context node

An XPath expression can be used to point to a specific node and to establish that node as the context node. Once a context node is established, there are at least two XSLT elements that can be used to manage the traversal among children of that node:

- xsl:apply-templates, with an optional select attribute, an optional mode attribute, and an optional xsl:sort child element
- xsl:for-each, with a required select attribute and an optional xsl:sort child element

The xsl:apply-templates XSLT element

The first of these, xsl:apply-templates, examines and processes all child nodes of the context node that match an optional select attribute. (When combined with a default template rule to be discussed later, this often results in a recursive examination and processing of all descendant nodes of the context node.) According to Nutshell, "The xsl:apply-templates instruction tells the processor to search for and apply the highest-priority template in the stylesheet that matches each node identified by the select attribute."

Applying template rules

As each node is examined, the processor searches the stylesheet to determine if the XSLT programmer has provided a template rule that matches the node and defines how that node should be treated. If a matching template rule is found, the node is treated in the manner prescribed by the template rule.

Literal text in the XSLT stylesheet

You can think of the XSLT process as operating on an input DOM tree to produce an output DOM tree. If the template rule being applied contains literal text, that literal text is used to create a text node in the output tree. (I will explain how this feature is used to transform XML documents into XHTML documents in a future lesson.)

If no match is found

If a matching template rule is not found, the processor executes a built-in template rule appropriate to the type of node involved.
Built-in template rules are provided by the XSLT processor to handle the seven different types of nodes in an XML document:

- root node
- element node
- attribute node
- text node
- comment node
- processing instruction node
- namespace node

This lesson will explain the built-in rules that handle the first six types of nodes in the above list.

Recursion is common

As mentioned earlier, the combination of xsl:apply-templates and a built-in template rule often produces recursion. Assuming that there is nothing in a matching template rule that stops the recursion, recursion continues until all descendant nodes of the original context node have been examined and processed.

The mode attribute

The mode attribute of xsl:apply-templates makes it possible to cause different template rules to match nodes of the same type at different places in the DOM tree.

Sorting

The optional xsl:sort element makes it possible to modify the order in which the nodes are examined.

Iterative operation

The second XSLT element in the above list, xsl:for-each, executes an iterative examination and processing of all child nodes of the context node that match the required select attribute. According to Nutshell, "The xsl:for-each instruction iterates over the nodes identified by its select attribute and applies templates to each one." In other words, the processor will examine all child nodes of the context node that match the select attribute. As each child node is examined, the processor will search the stylesheet looking for a template rule that matches the child node. If a matching template rule is found, the matching template rule will be used to process that node. If a matching template rule is not found, a built-in template rule appropriate for the type of node will be used to process the node. As before, the optional xsl:sort element makes it possible to modify the order in which the nodes are examined. I will explain this in detail in a future lesson.

Combined operations

Frequently a stylesheet will combine recursive and iterative operations to produce more complex operations.

Enough talk, let's see some code

I will begin by discussing the XML file named Dom11.xml (shown in Listing 29) along with the XSL stylesheet file named Dom11.xsl (shown in Listing 30). These two listings are provided near the end of the lesson. After explaining the transformation produced by applying this stylesheet to this XML document, I will explain the transformation produced by applying the empty stylesheet named Dom11a.xsl (shown in Listing 33) to a nearly identical XML document.

A Java program named Dom11

Following that, I will explain a Java program (shown in Listing 31) that emulates the behavior of the stylesheets shown in Listings 30 and 33 when applied to the XML file shown in Listing 29. I will explain that the Java program shown in Listing 31 emulates the behavior of the empty stylesheet shown in Listing 33, and will explain why that is true.

Discussion and Sample Code

The XML file shown in Listing 29 is relatively straightforward. A tree view of that XML file is shown in Figure 1. (The values of the text nodes in Figure 1 were manually highlighted in red to make it easier to refer to those values later in this lesson.)

A database of books

As you may already have figured out, this XML document represents a small database containing information about books.
However, the structure and content of this XML file was not intended to have any purpose other than to illustrate the default behavior of the built-in XSLT template rules.

The XSL stylesheet file named Dom11.xsl

The stylesheet file shown in Listing 30 is very important relative to the purpose of this lesson, so I will discuss it in detail. Recall that an XSL stylesheet is itself an XML file, and can therefore be represented as a tree. I will begin by showing you an abbreviated version of a tree view of the stylesheet, as shown in Figure 2.

Why abbreviated?

The reason that I refer to this as an abbreviated version is that I manually deleted comment nodes and extraneous text nodes in order to emphasize the important elements in the document.

The root element

The root node of all XML documents is the document node. However, in addition to the root node, there is also a root element. For a stylesheet, the root element is xsl:stylesheet; it carries a version attribute and declares the XSL namespace, http://www.w3.org/1999/XSL/Transform. According to Nutshell, the version must be 1.0. Also according to Nutshell, if the namespace URI is not exactly right, a processor is supposed to output the stylesheet itself rather than apply it.

Unable to verify this behavior

I have been unable to verify this behavior experimentally. When I delete a character from the XSL namespace URI and then load the XML file into IE 6.0, there is simply no output. The browser screen remains blank. When I modify the XSL namespace URI and attempt to use JAXP to apply the stylesheet to the XML file, the system throws several errors and the program aborts. Neither approach seems to "output the stylesheet itself" as indicated by Nutshell.

Children of the root element node

As you can see from Figure 2, the root element node has two child nodes, both of which are of type xsl:template. (XSLT and XPath On The Edge by Jeni Tennison discusses the xsl:template element in detail.) As you can see from the attribute values in Figure 2, a match pattern is provided for both of the xsl:template nodes in Figure 2.

Back to basics

Getting back to XSLT basics: whenever the XSLT processor encounters a node while traversing the DOM tree, it examines all of the template rules in the stylesheet, searching for one whose match pattern matches the node. If it finds a matching template rule, it executes the instructions contained as elements within the template rule. If it doesn't find a match, it executes a built-in template rule that matches the node.

An explicit representation of a built-in template rule

Consider the first child node of the xsl:stylesheet root element in Figure 2. Listing 1 shows this template rule in XSL syntax (extracted from Listing 30); a reconstruction appears below. The template rule shown in Listing 1 is an explicit representation of one of the built-in template rules.

Matching the root node and element nodes

Consider the match pattern for this template rule (the text value of the attribute named match). According to Nutshell, "The forward slash / is an XPath pattern that matches the root node. This is the first node the processor selects for processing, and therefore this is the first template rule the processor executes (unless a nondefault template rule also matches the root node). ... the vertical bar combines these two expressions so that it matches both the root node and element nodes."

The <xsl:apply-templates/> element

Now consider the <xsl:apply-templates/> element that makes up the body of this template rule.
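The original Listing 1 and Listing 2 are not reproduced here, but the standard XSLT 1.0 built-in template rules they make explicit have the following well-known form, shown as a reconstruction rather than as the article's exact listings:

<!-- Listing 1 (reconstruction): matches the root node and all element
     nodes, and simply applies templates to their children -->
<xsl:template match="/ | *">
  <xsl:apply-templates/>
</xsl:template>

<!-- Listing 2 (reconstruction): matches all text and attribute nodes,
     and copies their values to the output -->
<xsl:template match="text() | @*">
  <xsl:value-of select="."/>
</xsl:template>

These are exactly the built-in rules defined by the XSLT 1.0 specification, which is why, as discussed below, removing them from the stylesheet changes nothing.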
The <xsl:apply-templates/> element causes the processor to process all child nodes of each matching node, examining nodes, searching for matching template rules, and executing the elements embedded in matching template rules along the way. The practical effect, for the template rule in Listing 1, is a recursive walk: because the rule matches the root node and every element, and applies templates to their children, the processor is guaranteed to eventually visit every node in the document.

An explicit representation of a built-in template rule

Once again, the template rule shown in Listing 1 is an explicit representation of one of the built-in template rules. If I were to remove this template rule from the stylesheet, and then apply the stylesheet to the XML document, this template rule would still be applied where appropriate by the XSLT processor, because it is built into the processor.

Handling text nodes by default

Listing 2 shows the template rule, in XSL syntax, that corresponds to the second child node of the root element node in Figure 2 (see the reconstruction above). Once again, this is a template rule with a match pattern. This template rule is also an explicit representation of one of the built-in rules, which copies the value of text and attribute nodes into the output document.

The match pattern

The text() in the value of the attribute named match is an XPath pattern matching all text nodes. The @* is an XPath pattern matching all attribute nodes. The vertical bar combines the two patterns. Hence, the template rule matches all text and all attribute nodes.

The xsl:value-of element

Once a match is made, the behavior of the rule is governed by the single element that is embedded in the rule. The xsl:value-of element, with a select value of ".", returns the text value of the context or current node. (This is similar to the use of a single period to represent the current directory in some file management systems such as MS-DOS.)

Text value to the output

Therefore, whenever the XSLT processor applies this template rule to a text or attribute node, the text value of that node is sent to the output document (a text node is created in the output tree). If the node is a text node, the value is simply the text in the node. If the node is an attribute node, the value is the attribute value, but not the attribute name.

The output

Now it's time for the big question. What does the output look like when the stylesheet shown in Listing 30 is used to transform the XML document shown in Listing 29? The result of such a transformation is shown in Figure 3.

The XML declaration

The first line in Figure 3 is an XML declaration that was placed there by the XSLT processor independent of the content of the XML file.

The text in the output

If you compare the text in Figure 3 with the material highlighted in red in Figure 1, you will see that the output produced by this stylesheet, containing only explicit representations of default template rules, is the concatenation of the text values of all the element nodes in the XML document.

Line breaks in the output

The two line breaks following the words Java and rules in Figure 3 correspond to the line breaks in the text portion of the title element shown in Listing 3. (This element was extracted from the original XML file in Listing 29.) Because these two line breaks occur within the text portion of the element, they also appear in the output in Figure 3.
In other words, the line breaks are considered by the XSLT processor to be a legitimate part of the text content of the element. The remaining line breaks in the XML file shown in Listing 29 occur between XML tags. Therefore, they are not considered to be a part of the text content of any element, and they do not appear in Figure 3.

No attribute values in the output

You may have noticed that even though a couple of the elements in the XML file have attributes (see Figure 1), and one of the template rules matches attribute nodes, the attribute values do not appear in the output shown in Figure 3. As Nutshell explains, the reason is that attribute nodes are not children of the elements that carry them, so <xsl:apply-templates/> never reaches them. The built-in rules visit only child nodes; an attribute is processed only if a template explicitly selects it (for example, with select="@*").

Applying an empty stylesheet

Now consider the stylesheet shown in Listing 33, as shown in abbreviated tree format in Figure 4. Unlike Figure 2, the stylesheet represented by Figure 4 doesn't contain any template rules. In fact, except for the root (document) node and the xsl:stylesheet root element node, the stylesheet is completely empty (a reconstruction appears at the end of this section).

Produces exactly the same output

However, the result of applying the empty stylesheet to the XML file discussed earlier is exactly the same as the result produced by applying the stylesheet shown in Listing 30 and Figure 2 to that XML file. This is because the two template rules shown in Listing 30 and Figure 2 replicate the behavior of two of the built-in template rules. Therefore, removing them from the stylesheet has no impact on the result produced by applying the stylesheet to the XML file. If they are needed, they are available as built-in rules of the XSLT processor.

Transformation behavior of an empty stylesheet

Because the two template rules in the previous stylesheet replicate the behavior of two of the built-in template rules, removing those template rules from the stylesheet to produce an empty stylesheet had absolutely no impact on the transformation result. The transformation result produced by the previous stylesheet was identical to the one produced by the empty stylesheet. In effect, when you transform an XML document using an empty stylesheet, the built-in rules alone drive the transformation, and the output is simply the concatenation of the text content of the document.

Combined output

Whenever the XSLT processor encounters a node for which you haven't defined a matching template rule, the default template rule for that type of node will be applied. Therefore, the total output is often a combination of output produced by template rules that you provide and output produced by built-in template rules. For that reason, if you are going to create a stylesheet containing template rules of your own design, it is very important for you to understand the default behavior provided by the built-in template rules. The total output produced by your stylesheet is very likely to be a combination of the output produced by your template rules and the output produced by the built-in template rules.

Other built-in template rules

So far I have explained the behavior of the built-in template rules that cover the first four types of nodes:

- root node
- element node
- attribute node
- text node

The built-in rules for comment nodes and processing instruction nodes are covered below, alongside the Java code that emulates them.

A Java program that emulates the built-in template rules

Now let's change direction and concentrate on Java code rather than XSLT elements. The following paragraphs describe a Java program named Dom11. The primary purposes of this lesson are to:

- Demonstrate Java code that replicates the behavior of the built-in template rules for six of the seven possible types of nodes.
- Provide a skeleton program that can be expanded later to provide more complex behavior.
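Before turning to the Java code, here is a minimal reconstruction of the empty stylesheet Dom11a.xsl (Listing 33) described above. The exact listing is not shown here, but per the description it contains nothing beyond the xsl:stylesheet root element with its required version attribute and namespace declaration:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
</xsl:stylesheet>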
Given those purposes, the program serves as the skeleton for the definition of custom template rules.

Behavior of the program

As written, this program extracts and concatenates all text values from a specified XML file, and writes that text into a result file, using two different approaches:

- An XSLT transformation operating under program control.
- Program code that emulates the behavior of the XSLT transformation.

As you saw in the earlier discussion, both XSL files produce the same result when processed against the XML files named Dom11.xml and Dom11a.xml, demonstrating the behavior of the built-in template rules. The execution of these built-in template rules causes the contents of every text node to be concatenated and written into the result file. The program code in this program emulates those built-in template rules and produces the same results.

Usage instructions

The program requires three command-line arguments in the following order:

- The name of the input XML file; must be Dom11.xml or Dom11a.xml.
- The name of the output file to be produced by the XSLT transformation.
- The name of the output file to be produced by the program code that emulates the XSLT transformation.

The program begins by executing code to transform the incoming XML file in a way that mimics the XSLT transformation. Along the way, it saves the processing instructions (one of which contains the name of the stylesheet file) for later use by the code that governs the XSLT transformation process. (Otherwise, the code that performs the XSLT transformation later would have to search the DOM tree for the XSL stylesheet file name.) The name of the XSL stylesheet file is extracted from the processing instruction in the XML file. Then the program uses the XSL stylesheet to transform the XML file into a result file.

Errors, exceptions, and testing

No effort was made to provide meaningful information about errors and exceptions. If an error or exception occurs, the default behavior for that error or exception will occur. The program was tested using SDK 1.4.2 under WinXP.

Will discuss in fragments

I will discuss this program in fragments. A complete listing of the program is shown in Listing 31 near the end of the lesson. Listing 4 shows the beginning of the class named Dom11 and the beginning of the main method. The code in Listing 4 declares a couple of variables, one of which will be used later to save processing instruction nodes. Then the code in Listing 4 provides usage instructions based on command-line arguments.

Parse the input XML file

The code in Listing 5 parses the input XML file, producing an object of type Document, which is a DOM tree in memory.

Steps for creating a Document object

There is nothing new in the code in Listing 5. I have discussed the code required to create a Document object in several previous lessons, beginning with the lesson entitled Java API for XML Processing (JAXP), Getting Started.
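Listing 5 itself is not reproduced here, but given the three steps enumerated next, it presumably looked something like this sketch (the file argument and variable names are illustrative, not the article's exact code):

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Step 1: create a DocumentBuilderFactory object.
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
// Step 2: use the factory to create a DocumentBuilder (the parser).
DocumentBuilder builder = factory.newDocumentBuilder();
// Step 3: use the builder to parse the file into a Document (the DOM tree).
Document document = builder.parse(new File(args[0]));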
As you saw in those earlier lessons, creating a Document object involves three steps:

- Create a DocumentBuilderFactory object.
- Use the DocumentBuilderFactory object to create a DocumentBuilder object.
- Use the DocumentBuilder object to create a Document object.

Transformation through program code

The code in Listing 6 begins the process of transforming the DOM tree into an output file through the execution of program code (as opposed to an XSLT transformation). The code begins by instantiating a new object of the Dom11 class.

Get an output stream

Then the program gets an output stream for the output produced by the program code. This stream points to an output file that was specified by the third command-line parameter.

Process the DOM tree

The code in Listing 7 invokes the processDocumentNode method to process the DOM tree. This method (and the methods that it calls) begins with the Document node, and processes all the nodes in the DOM tree to produce the required output. Note that the code in Listing 7 passes the Document object's reference to the method named processDocumentNode. This is the root node of the entire DOM tree, and can be treated as type Node, because the Document interface extends the Node interface.

Set the main method aside

My explanation of this program will follow the execution thread through the program. At this point, I will set the discussion of the main method aside temporarily and come back to it later, when the processDocumentNode method returns control to the main method.

The processDocumentNode method

The entire processDocumentNode method is shown in Listing 8. This method is used to produce any text required in the output at the document level, such as the XML declaration for an XML document. (As you can see from Listing 8, the code in this method writes an XML declaration into the output.) It then invokes the processNode method, passing the document node, to begin processing the tree.

When the DOM tree has been processed ...

When the processNode method returns (after the entire DOM tree has been processed), the processDocumentNode method flushes the output stream and returns control to the main method. As you will see later, subsequent code in the main method invokes a method that will perform an XSLT transformation on the XML file and write the output into a different output file. I will discuss that method later in this lesson.

The processNode method

There are seven possible types of nodes in an XML document:

- root or document node
- element node
- attribute node
- text node
- comment node
- processing instruction node
- namespace node

Get and save the node type

The beginning of the processNode method is shown in Listing 9. Note that the method receives an incoming parameter, which is a reference to an object of type Node. This can include any of the seven node types that can occur in a DOM tree. If the parameter doesn't point to an actual object, the method simply returns, as opposed to throwing a NullPointerException.
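Listing 9 is not shown here; based on the description, its opening presumably looks roughly like this sketch (the exact signature is an assumption):

void processNode(Node node) throws Exception {
    // Quietly ignore a null reference rather than throwing NullPointerException.
    if (node == null) {
        return;
    }
    // Get and save the type of the incoming node for the switch that follows.
    short nodeType = node.getNodeType();
    // ... the switch statement discussed next dispatches on nodeType ...
}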
The final statement in Listing 9 invokes the getNodeType method to get and save the type of the node whose reference was received as an incoming parameter. Listing 10 shows the beginning of a switch statement that is used to initiate the processing of each incoming node based on its type. The switch statement has six cases to handle six types of nodes, plus a default case to ignore namespace nodes.

The DOCUMENT_NODE case

The code in Listing 10 will be executed whenever the incoming method parameter points to a document node. DOCUMENT_NODE is a constant (public static final variable) that is defined in the Node interface. (The interface provides similar constants for all node types other than namespace nodes.) These constants can be used to distinguish between different node types.

Will invoke default behavior in this case

Note that the code in the case in Listing 10 is an if/else construct. If the conditional clause in the if statement evaluates to true (which is not possible in this case), the code in the if statement will be executed. (This is where I will place the code for custom template rules in subsequent lessons.) If the conditional clause in the if statement does not evaluate to true, the code in the else statement will be executed. (This is where I have placed the code that mimics the built-in template rules.) Note that the code in the else statement in Listing 10 invokes a method named defElOrRtNodeTemp. When I discuss this method momentarily, you will see that its behavior mimics one of the built-in template rules that I discussed earlier in this lesson. Before getting to that, however, I want to give you a preview of how I will define custom template rules in future lessons.

Creating custom template rules

As you will see in subsequent lessons, the process for creating a custom template rule is as follows:

- Go to the method named processNode, which I am discussing right now.
- Find the case for the type of node you want to match.
- Replace the conditional clause of the if statement with a test that matches the nodes of interest, and place your custom processing code in the if block.

Before getting to the discussion of the method named defElOrRtNodeTemp, I want to show you the ELEMENT_NODE case in Listing 11. As before, the code in the if statement is not reachable in this program.

The method named defElOrRtNodeTemp

Still following the execution thread, I will set my discussion of the switch statement aside temporarily and discuss the method named defElOrRtNodeTemp. As mentioned above, this method is invoked as the default behavior for document nodes and element nodes in Listings 10 and 11. I will return to my discussion of the switch statement shortly. The entire method named defElOrRtNodeTemp is shown in Listing 12.

Behavior of the method named defElOrRtNodeTemp

This method mimics the behavior of the built-in XSLT template rule shown in Listing 1, and repeated in Figure 5 for convenient viewing. As I indicated earlier, the match pattern for this template rule matches the document node and all element nodes.

Code is straightforward

The code in this method is relatively straightforward. First it tests to confirm that the incoming parameter points to a node of the correct type, and throws an exception if it does not. If the incoming parameter is of the correct type, the code in the method invokes a method named applyTemplates, passing the node as a parameter to that method.
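Listings 10 through 12 are likewise not reproduced here. Assembled from the prose, the document and element cases and the default method they invoke presumably look roughly like this sketch (the always-false placeholder conditions come from the description; the exception message and the null-select convention are assumptions):

switch (nodeType) {
    case Node.DOCUMENT_NODE:
        if (false) {
            // Custom template rules for document nodes go here in later lessons.
        } else {
            defElOrRtNodeTemp(node); // default: mimic the built-in rule
        }
        break;
    case Node.ELEMENT_NODE:
        if (false) {
            // Custom template rules for element nodes go here in later lessons.
        } else {
            defElOrRtNodeTemp(node);
        }
        break;
    // ... TEXT_NODE, ATTRIBUTE_NODE, COMMENT_NODE, and
    //     PROCESSING_INSTRUCTION_NODE cases, plus a default, follow ...
}

void defElOrRtNodeTemp(Node node) throws Exception {
    short type = node.getNodeType();
    if (type != Node.DOCUMENT_NODE && type != Node.ELEMENT_NODE) {
        throw new Exception("Wrong node type: " + type); // message is an assumption
    }
    // The Java equivalent of the built-in <xsl:apply-templates/>:
    applyTemplates(node, null); // null select is assumed to mean "all children"
}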
The method named applyTemplates

Continuing to follow the execution thread, I will now discuss the method named applyTemplates, shown in Listing 13.

Behavior of the apply-templates rule

The applyTemplates method partially emulates the XSLT apply-templates rule discussed earlier in this lesson, and shown in Figure 6. The apply-templates rule has two attributes, select and mode. As I explained earlier in this lesson, the mode attribute makes it possible to cause different template rules to match nodes of the same type at different places in the DOM tree; this emulation deals only with select.

Behavior of the method named applyTemplates

The applyTemplates method shown in Listing 13 receives two incoming parameters:

- The context node.
- The select parameter.

The code in Listing 13 invokes the getChildNodes method on the context node to get a list of all child nodes of the context node. If there are no child nodes, it quietly returns.

A recursive method call

If there are child nodes, the method uses a for loop to process all child nodes that match the select parameter, as described above. For each matching child node, the applyTemplates method makes a recursive call to the method named processNode, passing the child node's reference as a parameter to the processNode method.

Return to defElOrRtNodeTemp method

Eventually, the recursive process will end, and control will return to the defElOrRtNodeTemp method shown in Listing 12. From there, control will return to either the DOCUMENT_NODE case or the ELEMENT_NODE case in the switch statement in Listing 10 or Listing 11, from which the defElOrRtNodeTemp method was called. That, in turn, brings us back to a discussion of the other cases in the switch statement.

The TEXT_NODE and ATTRIBUTE_NODE cases

The next two cases from the switch statement that I will discuss are shown in Listing 14. (The switch statement began in Listing 10.) Listing 14 shows the cases for text nodes and attribute nodes. I have grouped these two cases together because the default behavior of both cases is to invoke the method named defTextOrAttrTemp, and to send the String returned by that method to the output.

The defTextOrAttrTemp method

Once again, following the execution thread, I will now discuss the method named defTextOrAttrTemp. This method is called whenever:

- The processNode method is called with a reference to either a text node or an attribute node, and
- The default behavior for the node type is executed.

Emulates a built-in XSLT template rule

This method emulates the built-in XSLT template rule shown in Listing 2, and repeated in Figure 7 for convenient viewing. As I told you earlier, this template rule matches all text nodes and all attribute nodes. Therefore, the defTextOrAttrTemp method is invoked by the default behavior of either the TEXT_NODE case or the ATTRIBUTE_NODE case in the switch statement in Listing 14.

Similar behavior

Once again, note the similarity between the method named defTextOrAttrTemp in Listing 15 and the template rule shown in Figure 7. In Figure 7, the template rule executes the xsl:value-of XSLT element to send the value of the context node to the output. The method shown in Listing 15 invokes a method named valueOf, passing "." as a parameter (note the period between the quotation marks). The value returned by that method is sent to the output by the code in the default behaviors of the two cases in Listing 14.

The method named valueOf

The method named valueOf, which begins in Listing 16, is fairly complex.
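Listings 13 through 20 are also missing here. One plausible sketch of applyTemplates, defTextOrAttrTemp, and the portion of valueOf that this program actually uses follows; the null-select convention, the exception messages, and the plain String concatenation are assumptions, and the org.w3c.dom.Node and NodeList types are assumed imported:

void applyTemplates(Node contextNode, String select) throws Exception {
    NodeList children = contextNode.getChildNodes();
    if (children == null || children.getLength() == 0) {
        return; // no child nodes: quietly return
    }
    for (int i = 0; i < children.getLength(); i++) {
        Node child = children.item(i);
        // Assumed convention: a null select matches every child node.
        if (select == null || select.equals(child.getNodeName())) {
            processNode(child); // recursive descent through the tree
        }
    }
}

String defTextOrAttrTemp(Node node) throws Exception {
    short type = node.getNodeType();
    if (type != Node.TEXT_NODE && type != Node.ATTRIBUTE_NODE) {
        throw new Exception("Wrong node type: " + type);
    }
    // The Java equivalent of <xsl:value-of select="."/>.
    return valueOf(node, ".");
}

String valueOf(Node contextNode, String select) throws Exception {
    if (select.equals(".")) {
        short type = contextNode.getNodeType();
        if (type == Node.ELEMENT_NODE) {
            // Concatenate the text values of all descendants by recursion.
            NodeList children = contextNode.getChildNodes();
            String nodeTextValue = "";
            for (int i = 0; i < children.getLength(); i++) {
                nodeTextValue += valueOf(children.item(i), ".");
            }
            return nodeTextValue;
        }
        if (type == Node.TEXT_NODE) {
            return contextNode.getNodeValue();
        }
        return ""; // any other node type yields an empty string
    }
    // The "@attrName" and "nodeName" forms are omitted from this sketch.
    return "";
}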
I will discuss portions of the valueOf method in this lesson and will discuss the remainder of the method in subsequent lessons. This method emulates the xsl:value-of XSLT element.

Three forms of method call

The method requires two parameters. The first parameter is of type Node, and is the context node. The second parameter is of type String, and is a select parameter. The valueOf method recognizes three forms of call:

- valueOf(Node theNode, String "@attrName")
- valueOf(Node theNode, String ".")
- valueOf(Node theNode, String "nodeName")

In the first form, the method returns the value of the named attribute of the context node. In the second form, which is the only form actually used in this program, the value of the select parameter is a String containing a single period. In this form, the method returns the concatenated text values of the context node and all descendants of the context node (including text nodes that are children of the context node). In the third form, the method returns the concatenated text values of all descendants of a specified child node of the context node. If the context node has more than one child node with the specified name, only the first one found is processed. The others are ignored.

Features not supported

The valueOf method does not support the following features, which are standard features of the xsl:value-of XSLT element:

- disable-output-escaping
- processing instruction nodes
- comment nodes
- namespace nodes

Since the second form of call listed above is the only form actually used in this program, I will discuss only those portions of the method that support that form. I will defer discussion of the other portions of the method until they are used in subsequent lessons.

Process the context node

The code in Listing 16 picks up at the point where it is determined that the incoming value for select is a String object's reference with a value of "." (note the period between the quotation marks). This is a request to return the value of the context node. This method supports two possibilities for the context node:

- Element node: return the concatenated text values of all descendant nodes of the context node.
- Text node: return the text value of the text node.

When the context node is an element node ...

The code in Listing 16 shows the beginning of the code required to process the context node as an element node.

Get list of child nodes

In preparation for processing all descendant nodes of the context node, the code in Listing 17 gets a list of child nodes, along with the length of the list. In addition, the code in Listing 17 initializes a String variable named nodeTextValue that will be used to collect the concatenated text values of the descendant nodes. Note that this variable is initialized to contain an empty string.

Process child nodes of context node

Having gotten a list of child nodes of the context node, all that is required to accomplish the objective is to make a series of recursive calls to the valueOf method, passing each child node in turn to the valueOf method, as shown in Listing 18. Each child node becomes the new context node upon re-entry into the valueOf method, and each call requests the value of the context node (the current child node) by passing "." for the select parameter.

Concatenation

The code in Listing 18 also deals with concatenation.
The value returned from each call to the valueOf method is concatenated with the text value already stored in the variable named nodeTextValue. Finally, after all child nodes have been processed, the code in Listing 18 returns the concatenated value stored in the variable named nodeTextValue.

When the context node is a text node ...

If you understood all of the above (including the recursion), you should find it easy to understand the code shown in Listing 19. Listing 19 shows the case where the context node is a text node. In this case, the method simply returns the value obtained by invoking getNodeValue on the text node.

One other possibility

There is one other possibility, which is handled by the code in Listing 20: that the context node is neither a text node nor an element node. In that case, the valueOf method returns an empty string.

Other types of nodes in the switch statement

Returning to the switch statement that began in Listing 10, we find two additional cases, each of which invokes the same method by default:

- COMMENT_NODE
- PROCESSING_INSTRUCTION_NODE

Save all processing instructions

I will discuss the defComOrProcInstrTemp method shortly. First, however, I will explain the extra code that appears in the default portion of the processing instruction node case in Listing 21. The purpose of a processing instruction in an XML file is to provide instructions to processing programs such as this one. The XML file shown in Listing 29 contains the three processing instructions shown in Listing 22.

Stylesheet identified in a processing instruction

The first and third of the three processing instructions are dummy processing instructions put there to test the capabilities of this program. However, the processing instruction in the middle is a real processing instruction that specifies the name of the file containing a stylesheet. That stylesheet will be used later when this program causes an XSLT transformation to take place using the XML file in Listing 29 and the stylesheet file identified in Listing 22. (That stylesheet actually appears in Listing 30.) In order to use that processing instruction to identify the stylesheet file, this program must capture the processing instruction and extract the file name from it. A statement in the second case in Listing 21 causes references to all processing instruction nodes to be added to and saved in a static variable of the Dom11 class named procInstr. That information will be used later to extract the name of the stylesheet file from the processing instruction.

The defComOrProcInstrTemp method

Both of the switch cases shown in Listing 21 invoke this method as their default behavior. A complete listing of the defComOrProcInstrTemp method is shown in Listing 23. The defComOrProcInstrTemp method emulates the built-in template rule shown in Figure 8. According to Nutshell, the built-in template rule for comments and processing instructions doesn't output anything into the output tree. Therefore, the defComOrProcInstrTemp method shown in Listing 23 simply returns an empty string.

The namespace node case

The default case for the switch statement begun in Listing 10 is shown in Listing 24.
The built-in template rule for namespace nodes likewise produces no output. Therefore, the default case in Listing 24, which catches all namespace nodes, doesn't send anything to the output.

End of the processNode method

I have discussed everything of significance in the processNode method. Continuing to follow the execution thread, I will now turn my attention back to the main method.

Perform an XSLT transformation

After the code has been executed to process the document using program code (beginning with the invocation of the processDocumentNode method in Listing 7), the statement in Listing 25 invokes the doXslTransform method to cause the XML document to be transformed using the stylesheet identified in one of the processing instructions in the XML file.

Stylesheet reference has been saved

The success of the method call in Listing 25 depends on the stylesheet processing instruction having been saved while the document was being processed. Otherwise, it would be necessary to add code in this method to search the DOM tree for the stylesheet processing instruction. All processing instructions are saved in a Vector object by this program. The Vector object's reference is passed as the third parameter to this method. The first parameter is a reference to the Document or root node in the DOM tree. The second parameter is the name of the output file.

The doXslTransform method

The doXslTransform method begins in Listing 26. This method uses an XSLT stylesheet file to transform an incoming Document object into an output file. A large portion of the code in this method is dedicated to:

- Identifying the processing instruction containing the stylesheet information.
- Extracting the stylesheet information from the processing instruction.

The code in Listing 26 searches the Vector object, seeking a processing instruction node that contains a stylesheet reference.

How does this work?

To see how this code works, first take a look at the processing instruction in the XML file that contains the stylesheet reference. This processing instruction was shown in Listing 22, and is repeated in Figure 9 for convenient viewing. The purpose of a processing instruction is to provide information to processing programs that will be used to process the XML file.

Format of a processing instruction

A processing instruction consists of a target, followed by optional data; the target identifies the application for which the instruction is intended. Applying this knowledge to the stylesheet processing instruction in Figure 9, you can see that the target consists of the following text: xml-stylesheet.

Accessing the target and the data

The target of a processing instruction node can be accessed in Java by invoking the getTarget method on the processing instruction node's reference. The remainder of the text in the processing instruction can be accessed by invoking the getData method on the same reference. The code in Listing 26 examines each of the objects in the Vector, invoking getTarget and getData, searching for a processing instruction whose target and data match what is known to be true for a stylesheet. When a match is found, the code breaks out of the for loop. If no match is found, the code in Listing 26 throws an exception.

Extract the stylesheet file name

Having identified the processing instruction that contains the stylesheet reference, the code in Listing 27 uses the getData method of the ProcessingInstruction interface, along with some methods of the String class, to extract the name of the file containing the stylesheet.
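Listings 25 through 28 are not reproduced here. A sketch of the whole doXslTransform flow, assembled from the descriptions, follows; the exact matching test and the substring logic for pulling the file name out of the href pseudo-attribute are assumptions:

import java.io.File;
import java.util.Vector;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.w3c.dom.Document;
import org.w3c.dom.ProcessingInstruction;

void doXslTransform(Document document, String outFile, Vector procInstr)
        throws Exception {
    // Search the saved processing instructions for the stylesheet reference.
    ProcessingInstruction stylePI = null;
    for (int i = 0; i < procInstr.size(); i++) {
        ProcessingInstruction pi = (ProcessingInstruction) procInstr.elementAt(i);
        if (pi.getTarget().equals("xml-stylesheet")
                && pi.getData().indexOf("href") != -1) {
            stylePI = pi;
            break;
        }
    }
    if (stylePI == null) {
        throw new Exception("No stylesheet processing instruction found");
    }
    // Extract the file name from data such as: type="text/xsl" href="Dom11.xsl"
    String data = stylePI.getData();
    int start = data.indexOf("href=\"") + 6;
    String styleSheetName = data.substring(start, data.indexOf('"', start));
    // Create a Transformer that applies the stylesheet, then transform.
    TransformerFactory factory = TransformerFactory.newInstance();
    Transformer transformer =
            factory.newTransformer(new StreamSource(new File(styleSheetName)));
    transformer.transform(new DOMSource(document),
                          new StreamResult(new File(outFile)));
}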
The ability to extract the file name is based on the known format of the stylesheet processing instruction.

Do the XSLT transformation

The remaining code in the doXslTransform method is shown in Listing 28.

You have seen this code before

The code in Listing 28 is not new to this series of lessons. This code was discussed in detail in the earlier lesson entitled Getting Started with Java JAXP and XSL Transformations (XSLT). Therefore, other than to point out one difference relative to the previous code, and to review the steps involved, I won't discuss the code in Listing 28 further in this lesson.

Steps for creating a Transformer object

The following two steps are required to create a Transformer object. Once a Transformer object is available, it can be used to transform one DOM tree into another DOM tree.

- Create a TransformerFactory object by invoking the static newInstance method of the TransformerFactory class.
- Invoke the newTransformer method on the TransformerFactory object.

There is one important difference between the code in Listing 28 and the code in the earlier lesson. The two programs invoke different overloaded versions of the newTransformer method of the TransformerFactory class. The earlier lesson entitled Getting Started with Java JAXP and XSL Transformations (XSLT) invoked a version that took no parameters and returned a Transformer object that simply copies a source tree to a result tree. The code in Listing 28 invokes a version of the newTransformer method that takes the stylesheet file as an input parameter and returns a Transformer object that uses the stylesheet file to perform an XSLT transformation. That concludes the discussion of the program named Dom11.

Run the Program

I encourage you to copy the Java code, XML files, and XSL files from the listings near the end of this lesson. Compile and execute the programs. Experiment with them, making changes, and observing the results of your changes.

Summary

I explained default XSLT behavior and showed you how to write Java code that mimics that behavior. The resulting Java code serves as a skeleton for more advanced transformation programs.

What's Next?

In the next lesson, I will show you how to write a Java program that mimics an XSLT transformation for converting an XML file into a text file. I will also show that once you have a library of Java methods that emulate XSLT elements, it is no more difficult to write a Java program to transform an XML document than it is to write an XSL stylesheet to transform the same document.
http://www.developer.com/e-mail/xml/article.php/3313341/Java-JAXP-Implementing-Default-XSLT-Behavior-in-Java.htm
CC-MAIN-2017-22
refinedweb
8,610
59.03
OLED 128x64px SPI - Arduino Connection Diagram

Displaying information from a microcontroller often creates more difficulties than the main program itself: you have to give up a few pins for the connection and a certain amount of memory for the graphics library. Newbies often make one of two mistakes: either trying to stuff a large amount of data onto a small screen (like the Nokia 3310's), or, conversely, adding a huge, performance-hungry display to even the smallest project. We will consider a compromise solution: a screen that is small (0.96") but has sufficient resolution (128×64) and needs only 3 signal wires!

MODULE OVERVIEW

SSD1306 is a controller on which several variations of display modules are built. Variants that connect via the I²C interface are also common, but today's version connects via the faster SPI. The display mounted on the module is based on OLED technology, which guarantees low consumption and fast pixel switching (and, as a result, the highest possible refresh rate). The screen is monochrome (1 bit per pixel), which rules out detailed pictures, but it keeps the video-memory buffer at just 1 KB: 128 × 64 pixels / 8 pixels per byte = 1024 bytes, since each byte encodes eight pixels at once. The SPI connection provides speeds of up to 20 Mbps, with which the I²C interface, with its request overhead and more complex protocol, cannot compete. Connection via parallel interfaces is also possible, but modules wired that way are not widely used.

The module's supply voltage is 3.3 V, but a low-dropout regulator is installed on the board, which allows the display to be powered from either 5 V or 3.3 V directly. It is better to use the 5 V supply; this offloads the Arduino's own, not very powerful, 3.3 V regulator. The maximum consumption (with all pixels of the display lit) is about 20 mA, so if necessary the module can be powered directly from a microcontroller pin.

SCHEME OF CONNECTION TO ARDUINO

- CS - any Arduino pin
- SCK - pin 13 (Arduino UNO)
- MOSI - pin 11
- Vcc - 3.3 ... 5 V
- GND - GND

Interface pinouts for other boards can be found on the official website.

CONNECTING TO ARDUINO IDE

There are many libraries for this display, and no one forbids you to write your own - the protocol is described in the datasheet. But to simplify the task, we will take the ready-made U8gLib library. It supports a large number of screen controllers, including the SSD1306. After connecting the display to your board, open the Arduino IDE (preferably a recent version) and find the Library Manager (Sketch > Include Library > Manage Libraries). In the search box, enter u8glib and install any version - the newer the better. Open any example. At the beginning of the sketch there will be a large number of commented-out initialization lines for different controllers. Find among them the SSD1306 connected via SPI with a resolution of 128×64, uncomment that line, and you can upload the sketch!
#include "U8glib.h" U8GLIB_SSD1306_128X64 u8g(12, 11, 8, 9, 10); void setup() { /* nothing to do here */ } void loop() { u8g.firstPage(); do { int steps = 16; int dx = 128/steps; int dy = 64/steps; int y = 0; for(int x=0; x<128; x+=dx) { u8g.drawLine(x, 0, 127, y); u8g.drawLine(127-x, 63, 0, 63-y); y+=dy; } } while(u8g.nextPage()); } If the screen does not work, check the power and pins in the description line in the sketch - perhaps the software implementation of the interface is used there and its pins are reassigned, this depends on the parameters passed to the configuration functions. With this library and Arduino, the screen is able to update data at a frequency of 15 frames per second, and if you use PADI or ESP8266 (with which the library is also compatible), then you can squeeze out and 30 - and that’s enough for confident animations. You can also try the lighter Adafruit_SSD1306 for acceleration, but some of its versions are incompatible with the SPI mode. Add a comment
https://inventr.io/blogs/arduino/oled-128x64px-spi-arduino-connection-diagram
CC-MAIN-2020-16
refinedweb
726
60.35
Attachmate Fires Mono Developers 362 darthcamaro US based development team for Mono." And nothing of value was lost. (Score:2, Insightful) (I will gb2/b/ shortly). Good. (Score:3, Interesting). Thi Re: (Score:3, Insightful) The danger is that Microsoft is probably planning to force all free C# implementations underground some day using software patents. No, C# itself is covered by an open standard. Your suggestion of Microsoft Patent Ire is entirely academic, and Microsoft's patents covering Linux kernel technology are much greater concern And with Java, the danger is not academic. Oracle is actually suing Google over patents for their implementation resembling Java. Re: (Score:2) Except he said nothing about Java whatsoever. Why do you (and the first person to reply) insist on stuffing words in other people's mouths? Re: (Score:2) If somebody royally fucks you over, hating them IS being objective -- and rational. Re: (Score:2) MS patents covering the MS kernel? Huh? MS has no patents that cover the Linux kernel. Not that I've heard of. What patents are you talking about? Re: (Score:2) Re:Good. (Score:4, Insightful) Re:Good. (Score:5, Informative) He is not. C# has versions, and so does .NET. As well, C# has an Ecma standard, and so does .NET (CLI) - they are two separate documents. He is correct in that the most recent standardized (by Ecma and ISO) versions of both C# and CLI are 2.0. Re: (Score:3) There's no published standard document from Ecma yet. Re:Good. (Score:4, Interesting) I don't care for proprietary programming languages as much as the next guy. Take away the .net part of it, look at the principal architect of the C# language.> Sorry, URL formatting has me stumped, I've followed the syntax, but that's not the point of this post. You can find him. He was was heavily involved/ perhaps lead architect (I don't know as of now) of Borland's Delphi. A most wonderful development environment, and the only real competitor to VB at the time. So my suggestion is don't bash C# but rather the encumbrances places upon it, like .NET. Disclaimer: I still write in Delphi. If I want to update a network of 100 systems I just copy over the .exe. (Still using Delphi 7). No need to roll out updates to every machine. No registry usage. None of the BS that comes with rolling out a .Net application. And my clients find my work very valuable. My impression is that Delphi is much more common in the EU and I don't speak at all to the crap that's happened since then with the selling to this corp or that corp. I only point out that the person developed by C# is a talented individual. Re: (Score:3, Interesting) Erm, Have you actually tried to deploy a .net application recently ? Other then ensuring that the framework is installed, it is also generally as simple as copying a .exe file. ClickOnce deployment is vaguely more complicated but its complexities exist to counter security problems. One can hardly blame MS for trying to be a bit more proactive about security either. The largest (in terms of distribution) .NET program I've ever written had a target audience of roughly 40k computers. Our deployment process ? xco Re: (Score:2) last I checked MSI files are installers, meaning the user still has to install it once this file is on their pc. On linux you could store your program in a self-hosted repository and each client can just sudo apt-get install programname Installation all handled automatically so the user just has to click on the icon in the menu and run it. 
Software updates can happen without the user even knowing. This can be done for ALL linux desktop software, not just the ones you create... I believe there are ways to do th Re: (Score:3) what rinky dink Enterprise IT department do you work in? Users do not deploy programs to their computers. you push them out and they are just available from the end user's perspective. Re: (Score:2) Re:Good. (Score:5, Informative) 1) Do your research: [microsoft.com] [microsoft.com] 2) Stop plagarizing Richard Stallman's quotes without attribution: [fsf.org] Re: (Score:3) [microsoft.com] That says it only covers patents "that are necessary to implement the Covered Specification." How worthless is that? So if you implement it the same way Microsoft did, or in the most natural and straightforward way, but there was some alternative way of doing it that still meets the spec then you're not covered? As in, even if the only alternative is a crap implementation that will require twice as much memory and 10 times as much CPU? Obviously they couldn't have created a patent grant that says 'you can us Re: (Score:3) First, if they had done what I suggested and included a patent grant for all of the patents that the Microsoft implementation uses, it would only have implicated the FAT long filename patent (or any given other patent) if Microsoft's implementation had used it. And if Microsoft did use it for something in their implementation, the idea that a third party implementation that did the same thing wouldn't be covered is the whole thing people are concerned about. Second, what you are describing is the trade off b Re: (Score:2, Insightful) It is dangerous to depend on C#, so we need to discourage its use. This is the very definition of FUD. You have some assumptions made up of complete guesswork, and from that you try to scare the development community from using this language/platform. You have absolutely no facts to back up your assertions, and yet year after year people keep spreading this FUD and year after year it does not come true. The problem is not in the C# implementations, but rather in applications written in C#. If we lose the use of C#, we will lose them too. That doesn't make them unethical, but it means that writing them and using them is taking a gratuitous risk. So what is the answer? To avoid applications written in C#? If you do that, then you have already lost the applications without any lawsuits being filed. The paranoia wins. I Re: (Score:3) This is the very definition of FUD. Sometimes fear, uncertainty, and doubt are warranted. IMO any time you're dealing with MS you should be fearful, uncertain, and doubtful. MS does have a history, you know. The paranoia wins. You try walking home through the ghetto without being paranoid. I'm not talking about MS here, I'm talking about staggering home from Felbers. Live in my part of town and paranoia is the only thing that will keep you alive. And to tell the truth, I fear MS more than I fear the gangstas. Re: (Score:2, Insightful) Sometimes fear, uncertainty, and doubt are warranted. IMO any time you're dealing with MS you should be fearful, uncertain, and doubtful. MS does have a history, you know. No, Microsoft does not have a history of breaking their Microsoft Community Promise. They have never created a standard and then sued everybody for using that standard. 
(No, FAT32 was always a proprietary file system) Mono is not going to be killed by Microsoft's patents, just like OpenOffice was not targetted for using Microsoft's file formats (despite being rumoured for years that MS was just about to sue). You are correct that Microsoft do have a history, but it appears to be a history of letting others u Re: (Score:2) So why doesn't Microsoft sue? Because it would be a public relations nightmare - just as it was for SCO. That is the nail in the coffin for this FUD for me. Microsoft are just not stupid enough to put themselves in the position of such a David and Goliath lawsuit by going after the open source community. And, really, something like the closest cousin of the Streisand Effect -- by taking an Open Source alternative/competitor seriously enough to sue Microsoft would instantly provide them with more advertising, PR, and usage than they'd probably get in 20 years on their own. Re:Good. (Score:5, Interesting) So why doesn't Microsoft sue? Because it would be a public relations nightmare - just as it was for SCO. Perhaps you aren't aware that MS funded SCO's lawsuit. [eweek.com] SCO was just a proxy for MS. Nothing to stop MS from "selling" the patents in question to some patent troll and engaging in another proxy lawsuit. Re: (Score:2) regardless of whether its FUD or not, its been going on too long now to put the fire out. And in the open source world, as soon as something is despised or rejected by the community at large, its days are numbered. Re: (Score:3) regardless of whether its FUD or not, its been going on too long now to put the fire out. And in the open source world, as soon as something is despised or rejected by the community at large, its days are numbered. I don't know about that. The Mono Project has had to wear these accusations since it began and yet it still grows better all the time. Just because a few vocal people are against it does not mean that it will go away. I think that their branching out into the mobile phone arena will keep their profile up and ensure the project doesn't die. Let's face it, Windows is despised in the open source community too and yet there is still quite a lot of support for the operating system in open source software. Sure it Re: (Score:3). Miguel says everything is cool so you are wrong and we have nothing to fear. Ever. EVAR ! Re: (Score:2) Do you have the same opinion with wine? should we make life harder for those distributing wine so that people cannot try to run windows programs as a compatibility layer so easily? Same with mono, many universities teach c# these days in their courses, and if it were not for mono I would have had to actually used windows for once. Something of value WILL be lost, the ability to continue using your linux system in the face of being forced to use .net stuff. Re: (Score:2) It is dangerous to depend on C#, so we need to discourage its use. Whoosh. You're utterly confused. C# is no biggie, to put it mildly. It's but one of the languages for which an implementation exists that happens to target the CLR and the .net framework. It's the platform that's the big deal, not a single language. It's not dangerous to depend on C#, if anything it may be dangerous to depend on CLR or on the .net framework. Free C# implementations do not permit users to run C# programs on free platforms. A free C# implementation is a C#-to-bytecode compiler. To be functional Re:Good. (Score:5, Insightful) ECMA standards don't protect you from patent lawsuits. 
Especially not when the standard is saddled with RAND patents (which virtually guarantee that open source usage is out the window.) Re: (Score:2, Interesting) Good Question. In all the MSDN conference media -which I do not define as MSDN proper, but programmer conference media-, Microsoft has not only embraces Mono but showcases it. Microsoft has no intention of developing a .NET solution for other platforms, but it is advantageous for them to support others who do so. Did you (not you who I am replying to but the original commenter) not see the recent Microsoft PDC conference video where Miguel De Icaza himself presented on Mono? Re: (Score:3) Sorry, I should have included the link. Miguel [msdn.com] describes all the features in the most current version of Mono. At a Microsoft Developers Conference. Enuff Said. Re: (Score:2, Insightful) In all the MSDN conference media -which I do not define as MSDN proper, but programmer conference media-, Microsoft has not only embraces Mono but showcases it. Since I know Microsoft well, that is all the reason I need to avoid Mono now and forever. Did you... not see the recent Microsoft PDC conference video where Miguel De Icaza himself presented on Mono? I hope you are not under the misapprehension that Miguel de Icaza has a shred of credibility left with anyone, least of all me. Re:Good. (Score:5, Insightful) Microsoft has not only embraces Mono but showcases it. Since I know Microsoft well, that is all the reason I need to avoid Mono now and forever. So on one hand we have people stating that we should avoid Mono because Microsoft does not like the competition and will eventually crush it with their patents, while on the other hand we should avoid Mono because Microsoft likes it and showcases it as evidence of the .NET CLR cross platform status. It seems Microsoft can't do anything right! I hope you are not under the misapprehension that Miguel de Icaza has a shred of credibility left with anyone, least of all me. It is quite damning of Miguel that he has lost the support of the paranoid set. So what has he actually done? He has created a programming platform that works, has withstood the test of time, and that has not been crushed under the legal might if Microsoft. He proved the naysayers wrong. Re: (Score:3) nope, they're saying that the copyright/patent situation is not clear at all, and that MS pushing c# so hard right now, including via mono, in no way guarantees that they won't have a change of mind 5 months - 6 years from now, and close everything up again. They could also send out death squads to kill anyone who writes buggy software in C#. Or not. It is one thing to have doubts about whether Microsoft are lying about making an open standard, but it is another to then take every opportunity to convince the world that they are doing exactly that - even if it is completely contrary to every action that Microsoft has taken since it created .NET. All you are saying is that you fear that they may eventually turn against the developer community, that you can't be cer Re: (Score:2) Re:Good. (Score:5, Informative) Last I checked MONO was aiming to deliver .NET to Linux. .NET (platform) patents scare people, not patents regarding the language specification. I guess you can patent anything in USA and sue on ever more in Texas, but I do not think that the language specification contains anything patentable. Have you read the patent statement? 
It says: So, until you have Microsoft releasing GPL (w/ classpath or whatever assemblies you use on .NET exception) or LGPL code that compiles under Linux you really shouldn't be using it. Re:Good. (Score:5, Informative) Legally binding promise == estoppel (Score:3) Look it up. Basically if anyone acts in good faith relying on the promise (a promise here being a one-way contract where you do not have to agree to anything), the principle of *estoppel* springs into play. It is even more legally binding than a contract, because MS cannot even terminate it because of anything you may or may not do. Re: (Score:2) Re: (Score:2) That would create a single point of failure. If Novell decided to stop updating Mono (or, say, went out of business) then the community wouldn't be immune to the patents if they chose to pick up the slack. Re:Good. (Score:4, Insightful) Java is open source, GPL even, and has a patent covenant from Oracle not to sue for it's use. How much better could it fit in the GNU ecosystem? C# *and* core libraries (Score:3) MS patent grant and covenant covers C# and core libraries. Unlike Java, C# and core libraries is standardized through ECMA and ISO. As part of having a standard accepted by ISO a submitter must grant license for any patent necessary for implementation on a RAND basis. This was not enough for the OS community, so MS issued the "community promise". And yes, the community is legally binding and is even stronger than a contract as the recipients do not even have to agree to anything. Enough FUD already Re:C# *and* core libraries (Score:5, Informative) The FSFs stance [fsf.org], but since the FSF are just anti MS, Stallman following loonies (right?), here's Groklaw's stance [groklaw.net]. I'm sure you can find more with your friend [google.com]. But don't let the facts presented by people who understand the applicable law and the related issues stop your fanboyism. Re: (Score:3) only C# version 2.0 is standardised by ECMA and ISO, so forget using any of those nice new features like LINQ. Unless you do use those features, accidentally or otherwise, then you're obviously no longer covered. The comunity promise isn't strong enough for the community, I can believe that given the way patent lawyering has been going recently. I was considering Monodroid... (Score:2) Re: (Score:3, Informative) OK so you wish to live without dynamic language support, true generics, query expressions/LINQ, closures, lambda expressions, the new async/await, and a whole host of other features so you can stick with a language that hasn't seen a major new feature in a long time? One that continuously makes the wrong decisions just for backwards compatibility? (type erasure is idiotic, just make people upgrade their JVMs. the "lambda" support coming in 1.7 will suck for the same reason - it isn't true lambda expressions Re: (Score:2) Cost prohibitive (Score:2) Have you used Monodroid or Monotouch? No, because they're cost prohibitive for a hobbyist programmer who has already graduated. Re: (Score:2) Basically Java is frozen in stone and will never be updated with anything worthwhile. Apparently anything that requires JVM support is absolutely out of the question. When I first read about the type erasure fiasco and now the new lambda mess, this was my exact same thought. The only way they might be able to move the language and framework forward at this point is to have a huge drop-off where compatibility with older JVM is removed cold turkey in favor of improving the language. 
They'd call it something reasonable like Java 2, or something stupid like Java X, and it would be a fresh new start. It doesn't even seem like compatibility would be that bad. Java programs Re: (Score:3) The problem is that Oracle is behind the wheel now, and that just won't happen. As you said, Java is frozen. This only means that your hypothetical Java 2.0 won't be called Java. I don't know what it'll be called, but I bet it'll come out of Google. (and no, Go is not that) Re: (Score:2) Re: (Score:2) Indeed, this is the most interesting question now. Screw Mono - while nice in theory it never became popular on desktop Linux, and it's easy to understand why. On the other hand, for mobile development, MonoTouch/MonoDroid was shaping up as the only cross-platform mobile development framework with native integration (unlike, say, AIR) and good perf. Now it looks like we're back to square one. Looks like Attachmate didn't want Linux (Score:4, Insightful) Firing the mono developers didn't convince me of this. It's the fact they're basically moving Linux development to all be under a european division and giving them control over all the decisions. It's like they got that odd Linux thing and don't know exactly what to do with it. I worked at Attachmate for awhile, and this doesn't really surprise me. Re:Looks like Attachmate didn't want Linux (Score:4, Insightful) It's the fact they're basically moving Linux development to all be under a european division and giving them control over all the decisions. It's like they got that odd Linux thing and don't know exactly what to do with it. Or maybe they realize that the US Patent system hopelessly f'ks things up for Linux development. Or if not hopelessly, at least expensively. Terrible news (Score:2, Funny) I sure hope someone else catches mono. Re: (Score:2) Not many tears (Score:2, Insightful) >"Mono brings .NET to Linux," In a way that lags so far behind current versions and with limitations to make it unsuitable for just about anything useful. I am not shedding that many tears. It was a dangerous road to begin with (patents, not completely open, etc), and it is a shame those resources were not directed to something that would have truly benefited Linux and other Open Source platforms. In any case, I am sure development will continue in some way. But without those resources, it will just con Re:Not many tears (Score:5, Insightful) Ok, I'm not going to wholesale bite but you really need to bring some Citation to this FUD. You see, a simple google search results in this: [mono-project.com] Which show's that as far as base libraries and feature support, Mono is almost all there with full .Net 4.0. Seeing as that's the latest version of .Net and not even the latest version that a lot of businesses are targeting, would suggest that Mono isn't lagging at all. Re: (Score:2) I will admit that I based my comments from impressions of what I read over the last few years about things it couldn't do then and things it would never be able to do. There appeared to be a lot more about getting an app to work cross-platform than just the base libraries. I can't site a source, and I am not a mono or .NET programmer, so I will shut up and let other people analyze it. Re:Not many tears (Score:5, Insightful) As someone that build's cross platform .NET apps using Mono, you should definitely STFU, and you obviously are talking out your ass. .NET compatibility in mono these days is steller. 
The only things we really lack are features of Visual Studio, not so much mono itself. MonoDevelop however is pretty dang good. In .NET we've been getting some amazing database ORM's that point & click to build your DAL automatically for you. In mono its a little bit more old-fashioned having to invoke command line for auto-generation. WPF obviously is not available, as to be expected when developing cross platform, so you use GTK. Go back to fox news dude. Re: (Score:2) Re: (Score:2) ".NET compatibility in mono these days is steller." I have to agree. The only area I've run into trouble in general is with the XML parser. Apparently the Mono team wrote their own, completely redesigned XML libraries, and so there are areas where it behaves differently than .NET in really weird ways. For example, up until about a month ago, if you tried to read UTF-16-encoded XML from a MemoryStream, it would fail, indicating that the first character (the XML byte order marker, I believe) was invalid. I open Re: (Score:2) It is interesting to read testimonials about those who HAVE used Mono for something productive. But, like you, I question the validity of comments made by "Anonymous Cowards" who also resort to swearing and personal attacks. Fortunately, there are other comments that are valid retorts to my posting and are not made by cowards and are more convincing. But yes, I still think it is a possible trap. Like I said in other postings, I don't question the technical merits of .NET/C#/Mono as much as the motivation Re: (Score:2, Interesting) Professional full-time .Net programmer with extensive mono experience. Mono's implementation of winforms is shit. But hey, winforms is shit! Otherwise, I found mono to be entirely as good as MS' CLR, with the caveat that it lags behind by a short period of time. This becomes less and less important, as new language features are less and less important (generics was huge, linq was useful, type variance is nice...). Additionally, unlike winforms, mono's ASP.NET implementation is actually pretty passable. Re: (Score:2) Mono is great, but it also sucks for some specific purposes If you have an 100% .NET app, it works (most of the time) .NET apps with native code, as Wine and Mono don't work together. The problem is mixing Unfortunately this is very common Mono does work with wine... (Score:2) Re: (Score:2) It's not entirely true, as WPF uptake was fairly slow since it debuted in 3.0. There are plenty of .NET apps written in the last few years (on .NET 4, even) that still use WinForms, and will run on Mono. Of course, there's plenty other missing stuff. I definitely wouldn't call Mono compatibility story "stellar". It's quite possible to write cross-platform CLR apps using it if you mind the limitations (i.e. coding to lowest common denominator and/or cross-platform portable libraries such as Gtk#), but that's Re: (Score:3) Look, without Mono you can't run serveral projects in the cloud without paying Windows stupidity tax. You can run them on anything but windows if mono falls apart. Like it or not C# is at least as good a language as java and arguably better than c++ for many types of projects. We don't want to lose c# from the non-windows open source world. Re: (Score:2) I am not trying to imply that C# is not a good language. I am sorry if my comments sounds as such. It probably is just fine. I have heard/read things that support just what you said. And I would hate to see any project that benefits Open Source platforms suffer. 
My objections have a lot more to do with the source of C#, patents, past history with that company and what they do, etc. And also what distraction C#/mono could be in siphoning away mindshare or resources from historically more open and more Re: (Score:2) Re: (Score:2) It is interesting to read the various feedback from people, such as yourself, that have used Mono productively and for purposes that help instead of hurt platforms other than MS-Windows. Thanks for sharing the info. Re: (Score:2) C# of today is in significant areas way ahead of Java. LINQ and parallelism is only two areas. Java might catch up in some areas and will undoubtedly jump ahead in others in Java 7, but Java 7 has proven that the entire Java process is irreversibly broken. The delays and the Oracle ownership are significant problems. I build vertical in-house enterprise apps for a living. No environment on the planet currently matches .Net for this. Not even close. Being able to run on Linux servers is something I would miss Re: (Score:2) LINQ is interesting, but I'm not sure what you mean about parallelism being better in C# - can you elaborate? The main area that C# (actually all of .NET) lags behind Java is in the core libraries. The collections support is lacking (and only recently became useful in any real way), there's no equivalent that I'm aware of to something like java.util.concurrent (see previous comment about parallelism), etc. The toolset is also lacking - I don't care how many people say VS is awesome, it still needs Resharper Re: (Score:3) I'm not sure what you mean about parallelism being better in C# - can you elaborate? I suspect he means Parallel LINQ, which is, of course, not a language-specific feature. there's no equivalent that I'm aware of to something like java.util.concurrent (see previous comment about parallelism) I'm not saying that it's as rich, but System.Threading.Tasks [microsoft.com] and System.Collections.Concurrent [microsoft.com] namespaces provide similar high-level building blocks in .NET 4. By the way, this is about more than just parallelism - asynchrony is also neatly expressed via tasks/futures, and C# 5 will add some nice syntactic sugar [msdn.com] for that. As a platform though, .NET has a way to go before it's really mature IMHO. It largely depends on the field of application. You have to remember that .NET was originally marketed Re: (Score:3) 100% agreed there. What struck me as interesting (aft Re: (Score:3) What struck me as interesting (after years of explaining Swing to WinForms devs) is how much WPF reminds me of Swing, mixed with a little HTML and CSS. Do you mean the layouts and model/view separation? As far as layouts go, WPF is fairly bland, though I find it easier to reason about what goes where when you write the tree as XML (where it maps one-to-one), as opposed to wiring it all up in code. There are similar third-party solutions for Swing, so far as I know, but I never understood why they didn't do that from the get go - of all the ways XML is misused, UI layout is something that actually is a good application for once. As for model/view, I dare say Re: (Score:2) ... and? Are you saying that the feature itself is not useful? Mono also has it, by the way, and it's very handy for calling into various C libs on Linux as well. The problem is portability, but the way Mono does it, if you have the same shared library on all supported platforms, your code is readily portable. 
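A brief aside on the namespaces named in the comments above: System.Threading.Tasks and System.Collections.Concurrent shipped with .NET 4, and the task/PLINQ style under discussion looks roughly like this (an illustrative sketch only; the names and numbers are invented):

using System;
using System.Linq;
using System.Threading.Tasks;

class ParallelSketch
{
    static void Main()
    {
        // PLINQ: parallelize a query over an in-memory sequence.
        int squares = Enumerable.Range(1, 1000)
                                .AsParallel()
                                .Select(n => n * n)
                                .Sum();

        // Tasks: futures that compose, instead of raw threads.
        Task<int> work = Task.Factory.StartNew(() => 6 * 7);
        Console.WriteLine("{0} {1}", squares, work.Result);
    }
}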
Re: (Score:2) I would also be interested in hearing what you think is missing from the .Net ecosystem. Re: (Score:2) It's not so much concerns (as in things which are wrong) and more that some of the libraries in Java are so powerful. I have a personal soft spot for util.concurrent (which started life as Doug Lea's concurrency package). The executor model effectively gets rid of the need to directly manipulate threads/pools (and are great for DI based apps, e.g. using Spring/Unity), and the concurrent collections (like ConcurrentLinkedQueue) are nice for performance in heavily multithreaded apps. Even the little things - Re: (Score:2) Re: (Score:2) Realised I forgot the second part of your query. In terms of ecosystem I'm referring to tools (profilers, decompilers, etc), libraries (for example hibernate, spring, jaxp, ANTLR, jbpm etc etc). There are equivalents for many of these in the .NET world now, but in many cases they're non-free and non-open, and also often less mature than their Java counterparts. Something I miss as a server-side dev is JMX - if anyone's aware of anything like that for .NET I'd love to hear about it! Re: (Score:2) Re: (Score:2) I knew mono was bad news (Score:2) not good (Score:2) As C# is the basis for some very important to me projects this is not in the slightest good news to me. Re: (Score:2) As _________ is the basis for some very important to me projects this is not in the slightest good news to me. This is the lesson everybody who hitches their wagons to Microsoft technology eventually learns. VB, FoxPro, mono, etc. the equivalent command in ubuntu... (Score:2) sudo apt-get purge cli-common mono-runtime Good riddance to bad rubbish. Re: (Score:2) And don't forget sudo apt-get install banshee- rhythmbox tomboy- gnote evolution- thunderbird or similar. Re: (Score:2) Gnote is a GTK version of Tomboy, and pretty much the only reason it exists is for people who don't want Mono on their box. Likewise rhythmbox. GOOD: Just think of energy saved... (Score:4, Insightful) By not loading up multi-megabyte runtime to print "Hello world!" Re: (Score:2) Impact on popular Linux applications (Score:3, Interesting) Looking through the Mono application screenshots [mono-project.com], what I believe are the most popular programs impacted by Mono development slowing are Banshee, F-Spot, and Tomboy. Since this trio is easily replaced by Rhythmbox, gThumb, and Gnote, among other options, good riddance to the lot of them. In addition to the standard Stallman concerns [fsf.org], the high concentration of the development team within Novell was always a problem anyway. There are way too many similar applications within open-source operating systems, so culling out some of the weaker ones--from a development risk standpoint--is a net benefit as far as I'm concerned. Re: (Score:2) Re:Impact on popular Linux applications (Score:4, Informative) F-Spot... easily replaced by... gThumb I'm actually enjoying Shotwell. It's also a good advertisement for the Vala [gnome.org] language, which seems interesting. Good start. (Score:2) Great (Score:3) now hopefully certain distros *cough*ubuntu*cough* will stop requiring mono just so they can put in Tomboy. (Or is it the other way around?) Re: (Score:2) I would suggest they were probably thinking of some difficult to diagnose disease, but that wouldn't be fair. Re: (Score:2) "Mono" is Spanish for "monkey". The people working on Mono are "Ximian" (simian). Why the monkey theme? Got me. 
Re: (Score:2) None of the core GNOME apps use Mono besides Tomboy [gnome.org] (the others aren't "core apps" and are considered "extra" or even "third-party" apps). You can run GNOME without any of them installed, and they all have reasonable replacements that don't use Mono (Rhythmbox, Gnote, Shotwell, GNOME Shell's search bar, etc.). Rhythmbox is actually quite good; I used to be a Songbird and Banshee fan, but I tried Rhythmbox and, while it doesn't have every single feature under the sun, it's nice to work with and not nearly as Re: (Score:2) Well, most of those projects have satisfying alternatives, except one (at least for me): Banshee. Rhythmbox just plainly sucks in comparison.
http://developers.slashdot.org/story/11/05/03/2226259/Attachmate-Fires-Mono-Developers
CC-MAIN-2016-07
refinedweb
5,777
72.87
onReadyRead attribute does not function the same on Windows as Linux

I have a very simple application that basically runs an application via QProcess and returns the result. My Process class is of course written in C++ and has Q_INVOKABLE methods so that my QML can call the function and read the result. On Linux, I call the simple application "fortune-mod" to return a silly fortune to my front-end QML app. My QML code looks like this:

Process {
    id: process
    onReadyRead: text_field.text = "You are running:\n" + readAll();
}

Like I say, on Linux this works fine; however on Windows, the "onReadyRead" simply isn't evaluated or called at all. And as you can imagine, if onReadyRead isn't called, my readAll() function isn't called either. Why is Windows behaving differently here? Is there a different function I need to call for Windows?

- sierdzio Moderators last edited by
Show us the code of Process & tell us how you are using it on Windows. There is no "fortune-mod" on Windows, so if you're attempting to call it, then QProcess likely fails.

So I have a button with a connection. In the connection, I hook up the onClicked event to my process.start(), and then call the shell command. On Linux I'm calling fortune -s, but since fortune doesn't exist on Windows, I've tried various other commands such as time, date, and ver. My button and Connections look like this:

Button {
    id: button
    x: 108
    y: 173
    text: qsTr("Button")
    font.pointSize: 8
}
Connections {
    target: button
    onClicked: process.start("ver", [""]);
}

The ver command on Windows prints the version and build number, which I would like to return to my QML gui. Interestingly, if I change the process from "ver" to something like "git", like this:

Connections {
    target: button
    onClicked: process.start("git", ["gui"]);
}

it works as expected - the git gui is launched. My process method is pretty straightforward, built from various examples I've found:

#include <QProcess>
#include <QVariant>

class Process : public QProcess {
    Q_OBJECT
public:
    Process(QObject *parent = 0) : QProcess(parent) { }

    Q_INVOKABLE void start(const QString &program, const QVariantList &arguments) {
        // Convert the QML argument list into the QStringList QProcess expects
        QStringList args;
        for (int i = 0; i < arguments.length(); i++)
            args << arguments[i].toString();
        QProcess::start(program, args);
    }

    Q_INVOKABLE QByteArray readAll() {
        return QProcess::readAll();
    }
};

- SGaist Lifetime Qt Champion last edited by SGaist
Hi, Where is that ver application located?

That's the question, isn't it? I'm a native Linux user and I'm not terribly familiar with Windows. On Linux, nearly everything is an application located in /bin or /usr/bin. As such, fortune-mod is an application. On Windows, ver isn't an application, it's some kind of system command built into batch or the cmd shell. Either way, my code is doing exactly what it should. For example, when I change my connection to call an application located in my system path instead:

Connections {
    target: button
    onClicked: process.start("git", ["--version"]);
}

my process behaves as expected. Thanks for your help folks!

- SGaist Lifetime Qt Champion last edited by
Then you may have to call cmd.exe to execute the command if it's a builtin on Windows.

@bockscaracer to add to what @SGaist says... From the docs for QProcess, here. I think this applies in your case.

- jsulm Lifetime Qt Champion last edited by
@bockscaracer Even on Linux not everything is a separate executable, see

@bockscaracer Under Windoze, a list of "shell builtins" is available from e.g. If you use any of these you must go via cmd /c.
You can either:
- Hard-code these (e.g. directly or via a "lookup" table); or
- Use cmd /c always, just in case...

(If you do this for commands with arguments, be careful about the quoting needed to go via cmd /c.)

Under Linux, I find it hard to think of much which is built into bash and not available as an external executable that would be of interest to execute as a standalone command. Under both OSes, if you use things like redirection symbols (<, >, |), and some others, you must go via the shell (cmd /c / /bin/bash -c).
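Putting the thread's advice together: since ver is a builtin of cmd.exe rather than a program on disk, it has to be launched through the interpreter. A minimal sketch with the Process class from above (untested here, and the exact argument quoting is an assumption):

Connections {
    target: button
    // "ver" is not an executable, so run it through cmd.exe;
    // "/c" tells cmd to execute the command string and then exit.
    onClicked: process.start("cmd", ["/c", "ver"]);
}

With that in place, onReadyRead should fire on Windows just as it does on Linux, because cmd.exe is a real process that QProcess can start and read from.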
https://forum.qt.io/topic/89012/onreadyread-attribute-does-not-function-the-same-on-windows-as-linux
CC-MAIN-2022-33
refinedweb
684
64.51
Definition

An instance S of the parameterized data type sortseq<K,I> is a sequence of items (seq_item). Every item contains a key from a linearly ordered data type K, called the key type of S, and an information from a data type I, called the information type of S. If K is a user-defined type, you have to provide a compare function (see Section User Defined Parameter Types). The number of items in S is called the size of S. A sorted sequence of size zero is called empty. We use ⟨k,i⟩ to denote a seq_item with key k and information i (called the information associated with key k). For each k in K there is at most one item ⟨k,i⟩ in S, and if item ⟨k1,i1⟩ precedes item ⟨k2,i2⟩ in S then k1 < k2.

Sorted sequences are a very powerful data type. They can do everything that dictionaries and priority queues can do. They also support many other operations, in particular finger searches and operations conc, split, merge, reverse_items, and delete_subsequence.

The key type K must be linearly ordered. The linear order on K may change over time subject to the condition that the order of the elements that are currently in the sorted sequence remains stable. More precisely, whenever an operation (except for reverse_items) is applied to a sorted sequence S, the keys of S must form an increasing sequence according to the currently valid linear order on K. For operation reverse_items this must hold after the execution of the operation. An application of sorted sequences where the linear order on the keys evolves over time is the plane sweep algorithm for line segment intersection. This algorithm sweeps an arrangement of segments by a vertical sweep line and keeps the intersected segments in a sorted sequence sorted according to the y-coordinates of their intersections with the sweep line. For intersecting segments this order depends on the position of the sweep line.

Sorted sequences support finger searches. A finger search takes an item it in a sorted sequence and a key k and searches for the key in the sorted sequence containing the item. The cost of a finger search is proportional to the logarithm of the distance of the key from the start of the search. A finger search does not need to know the sequence containing the item. We use IT to denote the sequence containing it. In a call S.finger_search(it,k) the types of S and IT must agree, but S may or may not be the sequence containing it.

#include <LEDA/core/sortseq.h>

Types

Creation

Operations

Iteration

forall_items(it, S) { "the items of S are successively assigned to it" }
forall_rev_items(it, S) { "the items of S are successively assigned to it in reverse order" }
forall(i, S) { "the informations of all items of S are successively assigned to i" }
forall_defined(k, S) { "the keys of all items of S are successively assigned to k" }

Implementation

Sorted sequences are implemented by skiplists [77]. Let n denote the current size of the sequence. Operations insert, locate, lookup and del take time O(log n); operations succ, pred, max, min_item, key, inf, insert_at and del_item take time O(1). clear takes time O(n) and reverse_items O(l), where l is the length of the reversed subsequence. Finger_lookup(x) and finger_locate(x) take time O(log min(d, n-d)) if x is the d-th item in S. Finger_lookup_from_front(x) and finger_locate_from_front(x) take time O(log d) if x is the d-th item in S. Finger_lookup_from_rear(x) and finger_locate_from_rear(x) take time O(log d) if x is the n-d-th item in S.
Finger_lookup(it,x) and finger_locate(it,x) take time O(log min(d, n-d)), where d is the number of items between it and the item containing x. Note that min(d, n-d) is the smaller of the distances from it to x if sequences are viewed as circularly closed. Split, delete_subsequence and conc take time O(log min(n1, n2)), where n1 and n2 are the sizes of the results of split and delete_subsequence, or of the arguments of conc, respectively. Merge takes time O(log((n1 + n2)/n1)), where n1 and n2 are the sizes of the two arguments. The space requirement of sorted sequences is linear in the length of the sequence (about 25.5n bytes for a sequence of size n, plus the space for the keys and the informations).

Example

We use a sorted sequence to list all elements in a sequence of strings lying lexicographically between two given search strings.

#include <LEDA/core/sortseq.h>
#include <iostream>

using leda::sortseq;
using leda::string;
using leda::seq_item;
using std::cin;
using std::cout;

int main()
{
  sortseq<string, int> S;
  string s1, s2;

  cout << "Input a sequence of strings terminated by 'STOP'\n";
  while (cin >> s1 && s1 != "STOP") S.insert(s1, 0);

  while (true) {
    cout << "\n\nInput a pair of strings:\n";
    cin >> s1 >> s2;
    cout << "All strings s with " << s1 << " <= s <= " << s2 << ":";
    if (s2 < s1) continue;

    seq_item last  = S.locate_pred(s2);
    seq_item first = S.locate(s1);
    if (!first || !last || first == S.succ(last)) continue;

    seq_item it = first;
    while (true) {
      cout << "\n" << S.key(it);
      if (it == last) break;
      it = S.succ(it);
    }
  }
}

Further examples can be found in section Sorted Sequences of [64].
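To complement the manual's example, here is a small sketch of the finger-search operations described under Implementation. The member names follow the text above, but treat the exact signatures as assumptions to be checked against the LEDA headers:

#include <LEDA/core/sortseq.h>
using leda::sortseq;
using leda::seq_item;

int main()
{
  sortseq<int, int> S;
  for (int k = 0; k < 1000; k++) S.insert(k, 0);

  // A "finger": an item we expect to be close to later queries.
  seq_item finger = S.lookup(500);

  // Costs O(log d), where d is the distance from the finger to the
  // key, instead of O(log n) for a search from scratch.
  seq_item near_it = S.finger_locate(finger, 517);
  return (S.key(near_it) == 517) ? 0 : 1;
}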
http://www.algorithmic-solutions.info/leda_manual/sortseq.html
crawl-002
refinedweb
899
61.26
DatastoreOpenIssues
From OLPC

Activity intermediate-level API
- How do we pass around the data in files?
- Activities want to directly work with removable devices (security?)
- Update objects without creating a new version
- Data types in the metadata (properties in namespaces?) (Trac #2430)
- Special properties that the DS needs to know about
- Versions use a composite key?
- Express hierarchy (just one level?) object spaces?
- Query available space
- How can we assure atomicity in multi-document updates?
- We may need to have the same object referenced from several entries. For example, we could have an entry "Interviewed my aunt" and another "Listened to 'Interview to my aunt'". Both should refer to the same version of the same object.
- How can we give feedback about the progress of DS operations? We'll need it if any copy happens inside the DS. (Trac #2761)
- Need a way to efficiently query the number of entries that one query matches. (Trac #2454)
- Allow activities to provide extensions for extracting fulltext content from arbitrary files. (Trac #2460)
- Allow incremental saving, that is, "streaming" into the file held by a journal entry without having to create a potentially big temp file first. (bertf)

DatastoreInternals
- Diff-based storage to minimize disk use when storing many versions? Ben 13:08, 21 January 2008 (EST)
- How do you store a diff if the entry is a .tar.gz of many files?
- Some activities will produce entries that are not amenable to generalized diff algorithms, but can be stored efficiently by domain-specific diff algorithms (e.g. Paint and png files). How can Activities register to handle their own differential compression, or how will type-specific diff be implemented?
- Can a gitattributes(5)-like design help here?
- How can we assure maximum compatibility between format changes in the db? (Trac #2627)
- Test suite?
- What should happen when the db gets corrupted? (Trac #3180)
- Need to make sure that we don't store too many files in one single directory (Trac #4411)

Journal
- Need support from the DS to implement date scrolling?
- Partial matches are desired (Trac #4817)
- Need to store info about the buddies that participated in an activity and provide a combo box for filtering. (Trac #2969)
http://wiki.laptop.org/go/DatastoreOpenIssues
CC-MAIN-2014-41
refinedweb
365
57.67
NAME
libgen.h - definitions for pattern matching functions

SYNOPSIS
#include <libgen.h>

DESCRIPTION
The <libgen.h> header declares the following external variable:

extern char* __loc1    (LEGACY)

(Used by regex() to report pattern location.)

The following are declared as functions and may also be defined as macros. Function prototypes must be provided for use with an ISO C compiler.

char *basename(char *);
char *dirname(char *);
char *regcmp(const char *, ...);              (LEGACY)
char *regex(const char *, const char *, ...); (LEGACY)

APPLICATION USAGE
The function prototypes for regcmp() and regex() are included in this header for historical reasons. New applications should use the regcomp(), regexec(), regerror() and regfree() functions, and the <regex.h> header, which provide full internationalised regular expression functionality compatible with the ISO POSIX-2 standard, as described in the XBD specification, Regular Expressions.

FUTURE DIRECTIONS
None.

SEE ALSO
basename(), dirname().
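The two non-legacy functions are easiest to grasp from a short example. This sketch is illustrative and not part of the specification; note that both functions may modify the string passed to them, which is why writable copies are used:

#include <libgen.h>
#include <stdio.h>

int main(void)
{
    /* basename() and dirname() may modify their argument,
       so pass writable arrays rather than string literals. */
    char path1[] = "/usr/lib/libc.so";
    char path2[] = "/usr/lib/libc.so";

    printf("dirname:  %s\n", dirname(path1));   /* prints "/usr/lib" */
    printf("basename: %s\n", basename(path2));  /* prints "libc.so" */
    return 0;
}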
http://www.opengroup.org/onlinepubs/007908799/xsh/libgen.h.html
crawl-001
refinedweb
129
50.84
Til. That windmill ought to break a lance. Just before JavaOne 2005 there was a huge dust-up over generics as people started using them, and trying to teach how they work. Ken Arnold blogged "Generics are a mistake... A design should have a complexity budget to keep its overall complexity under control. Generics are way out of whack." Trashing generics became the season's fashion. Chris Adamson retorted that we were experiencing "Buyer's remorse for Generics." After all, JSR-14 -- the generics proposal -- started in 1999, but did not appear in a release until JDK5 in late 2004. Any of us could play with generics in Java using the gj and pizzac compilers. We had five years to point out problems we found using the syntax. For example, borrowing C++ template syntax would look too much like XML. (If we get direct manipulation of XML, it won't look like XML. I'm not sure that's a bad thing. XML is pretty hard on the eyes. The angle brackets do make writing HTML about generics or XML really painful -- <s everywhere.) That's water over the dam at this point. I'd like to start a conversation about finding ways to fix the stumbling blocks we've found, targeted for Dolphin.

I like generics. I use them (and abuse them; see below) in my JDigraph project. I wrote JSDK 1.4 code in a Digraph interface that looked like this:

public Object addEdge(Object fromNode, Object toNode, Object edge) throws NodeMissingException;

which caused frequent NodeMissingExceptions from calls like

hisDigraph.addEdge(hisFromNode, hisEdge, hisToNode);

The compiler couldn't point out that the second argument was supposed to be a node and the third was supposed to be an edge. Autocompletion in IDEs, combined with some developers' habit of grabbing the top argument in the autocomplete list, led to NodeMissingExceptions, confusion and frustration. In JDK 5, the code looks like this:

public Edge addEdge(Node fromNode, Node toNode, Edge edge) throws NodeMissingException;

Just as Keys and Values in Maps are rarely of the same class, Nodes and Edges rarely share an ancestor closer than Object. The compiler helps the developers get things right. My code is much clearer to me (easier to figure out what I did) and much clearer to people trying to use it (fewer agitated developers at my office door).

One of Ken's specific complaints was about the complexity of Enum<E extends Enum<E>>, the abstract parent of all Java enums. This particular class interacts closely with the compiler, jumping through some high hoops to give us enums of Objects instead of short integers. It's powerful, but impossible to explain without guessing the Java compiler developers' intent. The compiler will specify using your enum class for E. So public enum YourEnums becomes something like public class YourEnums extends Enum<YourEnums>, so that YourEnums is always an Enum of YourEnums. To define the Enum class, it'd be great if someone at Sun could just code

public abstract class Enum<E is this.getClass()>

and remove the guesswork. It's a corner case. I ran into something similar while I was generifying JDigraph's HasState interface. HasState provides a method to compare two objects that have different representations but share a defining principle interface, the interface that defines the interesting part of the state. For example, Digraphs and Subgraphs have the same principle interface, Digraph. Two Subgraphs are not equal if they are Subgraphs of different Digraphs.
However, if the Subgraphs have the same nodes connected by the same edges, then they have the same interesting internal state. I'd love to say

public interface HasState<HasS is this.getPrincipleInterface() which better extend HasState<HasS>>

but had to limit the generic type parameters to

public interface HasState<HasS extends HasState<HasS>>

and hope that a developer implementing HasState doesn't do something diabolically mismatched between the type specification and the filling for

public Class getPrincipleInterface();

The compiler has all the pieces to sort out the Enum and HasState puzzles, but there's no way in the language to direct the compiler to do the right thing in the code for the abstract class or the interface. Since this.getPrincipleInterface() and this.getClass() return hard-coded Classes, maybe something could be done with annotations interacting with generics. However, I think these two examples really are corner cases and don't rate a change in the language. Others might disagree, citing last Summer's venting of the spleens.

In contrast, I do think we need something to help us avoid generics mismatch Hell. I think just having dot access to type parameters' type parameters would work well. It would look and feel about the same as using inner classes. The JDigraph project is my demonstration of reusable directed graph algorithms. It uses general-purpose graph containers inspired by the Java collections kit, built off of the Digraph interface. Here's the illustration, with examples from JDigraph:

public interface Digraph<Node,Edge>

is fine. You can specify any class to be Node, and any class to be Edge. Generics actually make the code easier to follow, as people stop confusing nodes with edges. (IndexedDigraph is a Digraph with indexed access to the nodes and edges. You'll see that in a few paragraphs.) Generics mismatch Hell starts innocuously enough. Subgraph is an interface for directed graphs that hold subsets of the nodes and edges of some Supergraph. Someone using Subgraph has to match up the Node and Edge type specifications for a Subgraph with the Supergraph.

public interface Subgraph<Node,Edge,Supergraph extends Digraph<Node,Edge>> extends Digraph<Node,Edge>

could be a bit simpler with dot access, because the compiler knows that Supergraph is a Digraph, and has type parameters for Node and Edge.

public interface Subgraph<Supergraph extends Digraph> extends Digraph<Supergraph.Node,Supergraph.Edge>

It's not a huge difference, and in fact takes more characters. However, Subgraph's definition could do the work to line up the type parameters. Also, Subgraph would only need one type parameter, not the three required above.

The complexity grows from there. OverlayDigraph is an interface for representing a Digraph that shares nodes with an underlying Digraph, but has its own edges. Think about it like someone laying a transparency sheet over a bunch of dots and connecting the dots her own way with a sharpie.

public interface OverlayDigraph<Node,Edge,UnderEdge,Undergraph extends Digraph<Node,UnderEdge>> extends Digraph<Node,Edge>

The compiler knows what Undergraph will be using for Node and Edge. This is just generics mismatch Heck. Someone using this interface has to get the Node and UnderEdge type specs correct. It'd be a lot nicer to be able to take advantage of the compiler's knowledge of Undergraph's type parameters with something like

public interface OverlayDigraph<Edge,Undergraph extends Digraph> extends Digraph<Undergraph.Node,Edge>

and use Undergraph.Edge instead of UnderEdge in the code. That's two type parameters to fill in instead of four. (You're about to see IndexedMutableOverlayDigraph, which is an OverlayDigraph with indexed access to the nodes and mutators to add and remove edges.)
It'd be a lot nicer to be able to take advantage of the compiler's knowledge of Undergraph's type parameters with something like public interface OverlayDigraph<Edge,Undergraph extends Digraph> extends Digraph<Undergraph.Node,Edge> and use Undergraph.Edge instead of UnderEdge in the code. That's two type parameters to fill in instead of four. (You're about to see IndexedMutableOverlayDigraph, which is an OverlayDigraph with indexed access to the nodes and mutators to add and remove edges.) Semirings define a set of useful operators for general graph labeling and graph minimization algorithms that work on Digraphs. The theory is right out of CLRS chapter 26.4. Semirings hold identity and annihilator label constants, and extension, summary and relax operators. The extension operator calculates the label to move from one node to another across an edge. The summary operator calculates the label combining two alternative labels. The relax operator calculates a new label by combining the existing label with a new, alternative label. These operators are the basic parts for building the Floyd-Warshall, Dijkstra, Prim's, Johnson's, Bellman-Ford and A* algorithms. I also made Semiring responsible for creating the initial OverlayDigraph of Labels. Encapsulating these things in Semiring has saved me writing the same tricky algorithm code for every new graph minimization problem. Still with me? Hold on to your synapses. Semiring's declaration looks like this: public interface Semiring<Node, Edge, Label, BaseDigraph extends IndexedDigraph<Node,Edge>, LabelDigraph extends IndexedMutableOverlayDigraph<Node,Label,Edge,BaseDigraph>> All five of the type specifiers have to line up correctly to compile code using this vile beast. It's hard for me to tame, and I wrote the thing. This is a big part of why JDigraph only does alpha releases. That's generics mismatch Hell. If the compiler were a bit smarter, Semiring's declaration could just be public interface Semiring<LabelDigraph extends IndexedMutableOverlayDigraph> Five type specifiers could collapse to just one! You tell the compiler what to use for LabelDigraph. LabelDigraph knows Undergraph and Label. Undergraph knows Node and Edge. The Semiring interface and the compiler could do all the work to match up the types. Remember, Semiring's reason to exist is to encapsulate all those operators so I can write generic algorithms. Dot access to type parameters would let Semiring encapsulate all the gory details of type parameters, too. The Floyd-Warshall is about the simplest graph minimization algorithm there is (three nested for() loops). Got any synapses left? Here's what its declaration looks like now: public class FloydWarshall<Node, Edge, Label, BaseDigraph extends IndexedDigraph<Node,Edge>, LabelDigraph extends IndexedMutableOverlayDigraph<Node,Label,Edge,BaseDigraph>, SRing extends Semiring<Node,Edge,Label,BaseDigraph,LabelDigraph>> Six! Six type parameters! Ah ha ha! I feel like that vampire from Sesame Street. (Dijkstra's algorithm and AStar need a Comparator, for a total of seven!) Letting dots access type parameters would let Semiring encapsulate all the type complexity, as well as the operators. public class FloydWarshall<SRing extends Semiring> Using dots to access contained parameters would be a Get out of Hell free card for this sort of problem. As more people decide to use generics, it's likely to help a lot of people a little bit. It would help JDigraph immensely. The next step is to figure out where in the community to propose the change. 
Graham Hamilton's blog on language changes in Dolphin is a bit dated. My own blog and a JavaOne community talk should be a good start.
https://weblogs.java.net/blog/dwalend/archive/2006/05/tilting_at_the.html
CC-MAIN-2015-40
refinedweb
1,712
57.16
20130330 (Saturday, 30 March 2013)

Finishing lino_welfare.modlib.jobs.ui.JobsOverview:
- In lino.utils.appy_pod, renamed html2odf to ehtml
- In the Default.odt, changed document header and default text style.
- Converted lino.ui.ui.ExtUI.ar2html() to use lino.core.tables.TableRequest.get_field_info(). Visible result is that the columns of the preview have the wanted widths distribution.
- Better docstrings at different places.

How to install python-uno in a virtualenv on Debian Squeeze

I have a virtualenv "demo" which has been created without site-packages. This Python interpreter, as expected, doesn't find any uno module: import uno raises an ImportError. And I could not find any pypi package to be installed using pip. Solution: the uno.py is in /usr/share/pyshared. Just add this directory to your PYTHONPATH. There are different methods to achieve this; for me the easiest seems to be to create a file local.pth in /usr/local/pythonenv/demo/lib/python2.6/site-packages with one line of text: /usr/share/pyshared. Another problem was to have the openoffice daemon start. Because at first my /bash/oood didn't seem to work, I surfed around and found an almost identical one, but LSB compatible, in a blog post by Glenn Enright. This is now in /bash/openoffice-headless.
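Concretely, the fix above comes down to two shell commands (paths as given in the post; the verification line is my own addition):

# Point the "demo" virtualenv at the system-wide uno.py
echo /usr/share/pyshared \
  > /usr/local/pythonenv/demo/lib/python2.6/site-packages/local.pth

# Verify: this should now import without an ImportError
/usr/local/pythonenv/demo/bin/python -c "import uno; print 'ok'"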
affine_grid

paddle.fluid.layers.affine_grid(theta, out_shape, name=None)

It generates a grid of (x, y) coordinates using the parameters of the affine transformation that correspond to a set of points where the input feature map should be sampled to produce the transformed output feature map.

- Parameters:
  - theta (Variable) – The data type can be float32 or float64.
  - out_shape (Variable | list | tuple) – The shape of target output with format [batch_size, channel, height, width]. out_shape can be a Tensor or a list or tuple. The data type must be int32.
  - name (str|None) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.
- Returns: A Tensor with shape [batch_size, H, W, 2], where 'H' and 'W' are the height and width of the feature map in the affine transformation. The data type is the same as theta.
- Return type: Variable
- Raises: ValueError – If the type of arguments is not supported.

Examples

import paddle.fluid as fluid
import numpy as np

place = fluid.CPUPlace()
theta = fluid.data(name="x", shape=[None, 2, 3], dtype="float32")
out_shape = fluid.data(name="y", shape=[4], dtype="int32")
grid_0 = fluid.layers.affine_grid(theta, out_shape)
grid_1 = fluid.layers.affine_grid(theta, [5, 3, 28, 28])
batch_size = 2
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
output = exe.run(feed={"x": np.random.rand(batch_size, 2, 3).astype("float32"),
                       "y": np.array([5, 3, 28, 28]).astype("int32")},
                 fetch_list=[grid_0.name, grid_1.name])
print(output[0])
print(output[1])
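The grid produced here is normally fed to a bilinear sampler to actually warp a feature map. A follow-on sketch, reusing theta from the example above (grid_sampler's availability and exact signature may vary across Paddle versions, so treat this as illustrative):

# Warp a [N, C, 28, 28] feature map with the affine grid from above.
feat = fluid.data(name="feat", shape=[None, 3, 28, 28], dtype="float32")
grid = fluid.layers.affine_grid(theta, [5, 3, 28, 28])
warped = fluid.layers.grid_sampler(x=feat, grid=grid)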
History so far

For the past 6-9 months, as part of some of the tasks I've been handling at Collabora, I've been working on setting up a continuous build and testing system for GStreamer. For those who've been following GStreamer for long enough, you might remember we had a buildbot instance back around 2005-2006, which continuously built and ran checks on every commit. And when it failed, it would also notify the developers on IRC (in more or less polite terms) that they'd broken the build. The result was that master (sorry, I mean main, we were using CVS back then) was guaranteed to always be in a buildable state and tests always succeeded. Great, no regressions, no surprises.

At some point in time (around 2007, I think?) the buildbot was no longer used/maintained… And eventually subtle issues crept in: you were no longer guaranteed that checkouts would compile, tests eventually broke, you'd need to track down what introduced a regression (git bisect makes that easier, but avoiding it in the first place is even better), etc…

What to test

Fast-forward to 2013: after talking so much about it, it was time to put such a system back in place. Quite a few things have changed since:

- There's a lot more code. In 2005, when 0.10 was released, the GStreamer project was around 400kLOC. We're now around 1.5MLOC! And I'm not even taking into account all the dependency code we use in cerbero, the system for building binary SDK releases.
- There are more usages that we didn't have back then. New modules (rtsp-server, editing-services, orc now under the GStreamer project umbrella, ..)
- We provide binary releases for Windows, MacOSX, iOS, Android, …

The problems to tackle were "What do we test? How do we spot regressions? How do we make it as useful as possible to developers?".

In order for a CI system to be useful, you want to keep the signal-to-noise ratio as high as possible. Just enabling a massive bunch of tests/use-cases with millions of things to fix is totally useless. Not only is it depressing to see millions of failed tests, but also you can't spot regressions easily and essentially people don't care anymore (it's just noise). You want the system to become a simple boolean (either everything passes, or something failed. And if it failed, it was because of that last commit(s)). In order to cope with that, you gradually activate/add items to do and check. The bare minimum was essentially testing whether all of GStreamer compiled on a standard Linux setup. That serves as a reference point. If someone breaks the build, it becomes useful: you've spotted a regression, you can fix it. As time goes by, you start adding other steps and builds (make check passes on gstreamer core, activate that; passes on gst-plugins-base, activate that; cerbero builds fully/cleanly on Debian, activate that; etc…).

The other important part is that you want to know as quickly as possible whether a regression was introduced. If you need to wait 3 hours for the CI system to report a regression… that person will have gone to sleep or be taken up by something else. If you know within 10-15 minutes, then it's still fresh in their head, they are most likely still online, and you can correct the issue as quickly as possible.

Finally, what do we test? GStreamer has gotten huge. "GStreamer" in that sentence is actually not just one module, but a whole collection (GStreamer core, gst-plugins*, but also ORC, gst-rtsp-server, gnonlin, gst-editing-services, …). Whatever we produce for every release must be covered.
So this now includes the binary releases (formerly from gstreamer.com, but handled by the GStreamer project itself since 1.x). So we also need to make sure nothing breaks on all the platforms we target (Linux, Android, OSX, iOS, Windows, …).

To summarize:

- The CI system must be set up progressively (to detect regressions)
- The CI system must be fast (so the person who introduced the regression can fix it ASAP)
- The CI system must cover everything we offer (including cerbero binary builds)

The result is here (yes, I know, we're working on fixing the certificates once it moves to the final namespace). How this was implemented, and what challenges were encountered and handled, will be covered in the next post.
Just a simple question for a render json call I'm trying to test. I'm still learning RSpec, and have tried everything and can't seem to get this to work. I keep getting an ActionController::RoutingError, even though I defined the route and the call to the API itself works.

In my controller I have the method:

class PlacesController < ApplicationController
  def objects
    @objects = Place.find(params[:id]).objects.active
    render json: @objects.map(&:api)
  end
end

class Object
  def api
    {
      id: id,
      something: something,
      something_else: something_else,
      etc: etc,
      ...
    }
  end
end

get "places/:id/objects" => "places#objects"

describe "objects" do
  it "GET properties" do
    m = FactoryGirl.create :object_name, _id: "1", shape: "square"
    get "/places/#{m._id}/objects", {}, { "Accept" => "application/json" }
    expect(response.status).to eq 200
    body = JSON.parse(response.body)
    expect(body["shape"]).to eq "square"
  end
end

Failure/Error: get "/places/1/objects", {}, { "Accept" => "application/json" }
ActionController::RoutingError:
  No route matches {:controller=>"places", :action=>"/places/1/objects"}

Because you have the spec in the controllers folder, RSpec is assuming it is a controller spec. With controller specs you don't specify the whole path to the route but the actual controller method.

get "/places/#{m._id}/objects", {}

should be

get :objects, id: m._id

If you don't want this behaviour you can disable it by setting the config infer_spec_type_from_file_location to false. Or you could override the spec type for this file by declaring the type on the describe:

describe "objects", type: :request do

- change :request to what you want this spec to be. Although I recommend using the directory structure to dictate what types of specs you are running.
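For reference, the directory-based inference the answer mentions is configured with a one-liner in rspec-rails; a sketch assuming a standard Rails layout:

# spec/rails_helper.rb
RSpec.configure do |config|
  # Treats specs under spec/controllers as controller specs,
  # spec/requests as request specs, and so on.
  config.infer_spec_type_from_file_location!
end

Leaving that call out means no type is inferred from the path, and each spec falls back to whatever type you tag it with explicitly.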
Similar Content

-
InetRead("
; Show Msgbox before Ending Script.
Msgbox(64,"","Finished")

The following script is an example where the script shows the Msgbox pretty fast:

; Set Timeout to 2sec
AutoItSetOption ("TCPTimeout", 2000)
; Read Website
InetRead("
; Show Msgbox before Ending Script.
Msgbox(64,"","Finished")

My question now is, what am I doing wrong and/or is there another way to prevent hanging the script? Thanks all

- JonBMN
Just trying to use a simple While loop to watch for input in a GUI window, but when I go to run it and then give the input, it seems to completely hang and I must at that point stop it manually and restart. I know I'm missing something (could be trivial), but a push would be greatly appreciated.

#include <GUIConstantsEx.au3>
#include <WindowsConstants.au3>
#include <GuiButton.au3>
#include <EditConstants.au3>
#include <MsgBoxConstants.au3>

Local $F1Button, $F1Key
HotKeySet("{Esc}", "Quit")
$F1Key = HotKeySet("{F1}", "UnlockCar")
GUI()

While 1
    $msg = GUIGetMsg()
    Select
        Case $msg = $GUI_Event_Close
            Quit()
        Case $msg = $F1Button
            UnlockMe()
    EndSelect
WEnd

Func GUI()
    $DisclaimerHandle = GUICreate("Disclaimer", 525, 245, -1, -1, -1, $WS_EX_TOPMOST) ;Creates the GUI window
    GUICtrlCreateLabel("example", 7, 15)
    GUICtrlCreateLabel("", 7, 30)
    GUICtrlCreateLabel("example", 7, 45)
    GUICtrlCreateLabel(" example", 7, 60)
    GUICtrlCreateLabel("", 7, 75)
    GUICtrlCreateLabel("example", 7, 90)
    GUICtrlCreateLabel("example", 7, 105)
    GUICtrlCreateLabel("example", 7, 135)
    GUICtrlCreateLabel("example", 7, 165)
    GUICtrlCreateLabel("if using a touchscreen press the F1 button below.", 7, 180)
    $F1Button = GUICtrlCreateButton("F1", 217, 205, 50, 30)
    GUISetState(@SW_SHOW) ;Shows the GUI window
EndFunc ;==>GUI

Func UnlockMe()
    MsgBox(0, "I work", "I work")
    Quit()
EndFunc ;==>UnlockMe

Func Quit()
    GUIDelete()
    Exit
EndFunc ;==>Quit

- By uncommon
I know InetClose closes the "resources" to make sure it does not leak as the help file says, but what kind of resources are we talking about? My reason for asking is I may need to run a script that fires InetGet a few times and does not wait for the download to finish in a loop. If I use InetClose then it will kill the downloads in progress, so I want to know the consequence of leaving several of these resources open. Thanks
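On that last question, the usual pattern is to start the transfer in the background and only InetClose a handle once it reports complete. A sketch (constant names per AutoIt's InetConstants.au3; check the InetGet/InetGetInfo help pages for your version, and the URL here is a placeholder):

#include <InetConstants.au3>

; Start a non-blocking download and keep the handle.
Local $hDownload = InetGet("https://example.com/file.zip", @TempDir & "\file.zip", _
        $INET_FORCERELOAD, $INET_DOWNLOADBACKGROUND)

; Poll until the transfer reports completion, then release the resource.
While Not InetGetInfo($hDownload, $INET_DOWNLOADCOMPLETE)
    Sleep(250)
WEnd
InetClose($hDownload)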
A Guide To Realtime IoT Analytics: Connecting Your Devices (Part One)
By Syed Ahmed

There's a tipping point in a person's life where they just have too many Raspberry Pi's out in the field. Controlling coffee machines, speakers and LED Hue lights: for the everyday maker, you've got a lot going on. This is where realtime IoT analytics comes in – you need to track your different internet-enabled devices.

Overview

Before we dive into the code walkthrough, let's think about realtime analytics for IoT devices and what we're trying to achieve. We can define our process as grabbing data from our IoT device, then displaying that same data in a readable format on a common device. Simple enough, right? In more technical terms, we'll be using libraries to access hardware data. We'll push that data onto a central server, then on our device we'll listen for that data and display it on our interface as it changes. If we were to think of the meat of our platform, it would be the connection between our hardware and our interface on the device.

In reality, our use case isn't just limited to a Raspberry Pi. Feel free to connect any IoT device through whichever method you feel is good. All the code is available on GitHub.

What's the Best Way to Send Data?

Before we rigidly define our streaming method, let's first go over some key concepts. The first one to understand is the publish/subscribe design pattern. It may sound daunting, but in reality all it means is that there is a channel where you are publishing your data. Anyone subscribed to that channel will receive that data. This will help us in our project because it will allow us to publish our IoT device data, and our interface, which is subscribed to that channel, can display it in a readable format. Learn more about publish-subscribe patterns here.

Now that we've looked at our project on an individual device level, let's also see how we can expand it so that any device can connect and we can group our different channels. This leads into Channel Groups, otherwise known as Stream Controller. This labels all the channels under one umbrella, so instead of subscribing to each individual channel you can discover all of them by subscribing to the channel group. We'll be using the channel groups to group all of our IoT devices, and each IoT device will have a unique channel where it can send its data over.

Connected Internet Of Things – Now Available in PubNub Flavor

To start, you'll have to sign up for a PubNub account to get your pub/sub keys. Worry not, it's free with a generous sandbox tier! Once you've done so, we'll create a simple Python script that will add our device to our channel group. Then we'll send data through the device-specific channel we define. The data will be in a format that both sides can understand (in this case JSON).

Now we can begin by importing the libraries and packages. The code is also available on GitHub as data_collector.py.

from pubnub.callbacks import SubscribeCallback
from pubnub.enums import PNStatusCategory
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub

# Stdlib imports used by the snippets below (not shown in the original
# listing): random/time for the simulated readings, socket for the
# device's channel name.
import random
import socket
import time

We can now configure our PubNub object to connect to our channels/channel groups and send data.
pnconfig = PNConfiguration()
pnconfig.publish_key = 'insert_your_pub_key'
pnconfig.subscribe_key = 'insert_your_sub_key'
pubnub = PubNub(pnconfig)  # construct the client; implied by the calls below

# Each device publishes on its own channel; naming it after the machine
# keeps the channel unique per device.
hostname = socket.gethostname()

Our functionality should follow this flow:

def my_publish_callback(envelope, status):
    if not status.is_error():
        pass
    else:
        pass

class MySubscribeCallback(SubscribeCallback):
    def presence(self, pubnub, presence):
        pass

    def status(self, pubnub, status):
        if status.category == PNStatusCategory.PNUnexpectedDisconnectCategory:
            pass
        elif status.category == PNStatusCategory.PNConnectedCategory:
            while True:
                temperature = random.randint(20, 23)
                pubnub.publish().channel(hostname).message(
                    {'Temperature': temperature}
                ).async(my_publish_callback)  # renamed pn_async() in newer SDKs
                time.sleep(3)
        elif status.category == PNStatusCategory.PNReconnectedCategory:
            pass
        elif status.category == PNStatusCategory.PNDecryptionErrorCategory:
            pass

    def message(self, pubnub, message):
        print(message.message)

The my_publish_callback function is created so that once we perform a publish, the function will execute the defined device behavior. Then we created a MySubscribeCallback class which we'll use later to subscribe to our channel. This class will allow us to publish data to our channel. Specifically, we have a status function which has our PubNub object and status passed in. This function then checks the status of our connection and reacts accordingly. In the case that our connection is successful, we publish our data in JSON format to our channel.

At this point, with our functions and classes made, we can create our channel group and append our channel.

pubnub.add_channel_to_channel_group().\
    channels([hostname]).\
    channel_group('your_channel_group').\
    sync()

pubnub.add_listener(MySubscribeCallback())
pubnub.subscribe().channels(hostname).execute()

We also added a listener which creates a MySubscribeCallback instance. At this point, our Python script should be running and should successfully be publishing data to our channel. Before you can run the file, be sure to go through the installation steps on the GitHub page.

When we complete our Python script, we should be able to see data coming to our channel. We can do this by choosing our keyset on the Admin Dashboard, then navigating to the debug console in the side menu. Then, entering our unique channel name as the channel should allow us to view incoming messages. (A code-based alternative is sketched below, after Next Steps.)

Next Steps

Now that our devices are connected, we need to create the UI that receives and visualizes the live data. We're going to build a React Native app to do just this. Head over to Part Two and we'll show you how!
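One last aside before Part Two: if you'd rather verify the registration from code than from the dashboard, the Python SDK can list what's in the group. A sketch (method name per the PubNub Python v4 SDK; the envelope's result attributes may differ across versions):

# Confirm the device's channel made it into the group.
envelope = pubnub.list_channels_in_channel_group()\
    .channel_group('your_channel_group')\
    .sync()
print(envelope.result.channels)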
I'm trying to run a script that tries to run a program called omniidl.exe using subprocess.call(), like so:

import subprocess
retcode = subprocess.call("omniidl", shell=True)

As soon as I get to Popen.__init__() --> _execute_child() --> CreateProcess() the following error is printed:

'import site' failed; use -v for traceback
Traceback (innermost last):
  File "<string>", line 1, in ?
  File "c:\Python25\lib\os.py", line 39
    return [n for n in dir(module) if n[0] != '_']
                     ^
SyntaxError: invalid syntax

The location of omniidl.exe is properly pointed to by my PATH env var and I can successfully run this same command:

0. via the DOS command line: "omniidl"
1. via the Python command line: "retcode = subprocess.call("omniidl", shell=True)"

The expected output is "omniidl: No files specified. Use 'omniidl -u' for usage." Any idea what might be going on here? I'm using Windows XP, Python 2.5.2, Eclipse SDK Version: 3.3.2 Build id: M20080221-1800, PyDev version 1.3.22. Thanks, Tim

Fabio Zadrozny 2009-06-27
Have you checked if the process running has the PATH/PYTHONPATH variables correct through os.environ? (Maybe you have more than one Python installed and the wrong one ends up available in the path?) Cheers, Fabio

I have verified that os.environ is correct in my script, before the call to subprocess.call. But the process I'm invoking is a win32 executable, not Python. The process does, however, have Python embedded in it. The executable does some processing and then imports a customizable Python module and calls a function in it. But I'm calling omniidl with no parameters, which should just immediately print out an error and not even invoke the Python interpreter. That is why I posted it on pydev instead of omniidl. It smells like an Eclipse thing, because I can do all the same operations manually from the DOS command line... Any other ideas?
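One way to act on Fabio's suggestion is to print what the child actually inherits, or to hand subprocess an explicit environment so that an embedded interpreter in the child can't pick up a stale PYTHONPATH. This is a debugging sketch, not a confirmed fix:

import os
import subprocess

# Inspect what the spawned shell will inherit.
print(os.environ.get("PATH"))
print(os.environ.get("PYTHONPATH"))

# Retry with PYTHONPATH removed from the child's environment.
env = dict(os.environ)
env.pop("PYTHONPATH", None)
retcode = subprocess.call("omniidl", shell=True, env=env)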
This document assumes you have read the Unicode document.

Pylons uses the Python gettext module for internationalization. It is based on the GNU gettext API. Everywhere in your code where you want strings to be available in different languages, you wrap them in the _() function. There are also a number of other translation functions, which are documented in the API reference.

The _() function is a reference to the ugettext() function. _() is a convention for marking text to be translated and saves on keystrokes. ugettext() is the Unicode version of gettext(); it returns unicode strings.

In our example we want the string 'Hello' to appear in three different languages: English, French and Spanish. We also want to display the word 'Hello' in the default language. We'll then go on to use some plural words too. Let's call our project translate_demo:

$ paster create -t pylons translate_demo

Now let's add a friendly controller that says hello:

$ cd translate_demo
$ paster controller hello

Edit controllers/hello.py to make use of the _() function everywhere the string Hello appears:

import logging

from pylons.i18n import get_lang, set_lang

from translate_demo.lib.base import *

log = logging.getLogger(__name__)

class HelloController(BaseController):

    def index(self):
        response.write('Default: %s<br />' % _('Hello'))
        for lang in ['fr', 'en', 'es']:
            set_lang(lang)
            response.write("%s: %s<br />" % (get_lang(), _('Hello')))

When wrapping strings in the gettext functions, it is important not to piece sentences together manually; certain languages might need to invert the grammar. Don't do this:

# BAD!
msg = _("He told her ")
msg += _("not to go outside.")

but this is perfectly acceptable:

# GOOD
msg = _("He told her not to go outside")

The controller has now been internationalized, but it will raise a LanguageError until we have set up the alternative language catalogs.

GNU gettext provides a suite of command line programs for extracting messages from source code and working with the associated gettext catalogs. The Babel project provides pure Python alternative versions of these tools. Unlike the GNU gettext tool xgettext, Babel supports extracting translatable strings from Python templating languages (currently Mako and Genshi).

To use Babel, you must first install it via easy_install. Run the command:

$ easy_install Babel

Pylons (as of 0.9.6) includes some sane defaults for Babel's distutils commands in the setup.cfg file. It also includes an extraction method mapping in the setup.py file. It is commented out by default, to avoid distutils warning about it being an unrecognized option when Babel is not installed. These lines should be uncommented before proceeding with the rest of this walkthrough:

message_extractors = {'translate_demo': [
    ('**.py', 'python', None),
    ('templates/**.mako', 'mako', None),
    ('public/**', 'ignore', None)]},

We'll use Babel to extract messages to a .pot file in your project's i18n directory. First, the directory needs to be created. Don't forget to add it to your revision control system if one is in use:

$ cd translate_demo
$ mkdir translate_demo/i18n
$ svn add translate_demo/i18n

Next we can extract all messages from the project with the following command:

$ python setup.py extract_messages
running extract_messages
extracting messages from translate_demo/__init__.py
extracting messages from translate_demo/websetup.py
...
extracting messages from translate_demo/tests/functional/test_hello.py
writing PO template file to translate_demo/i18n/translate_demo.pot

This will create a .pot file in the i18n directory that looks something like this:

# Translations template for translate_demo.
# This file is distributed under the same license as the translate_demo project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2007.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: translate_demo 0.0.0\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2007-08-02 18:01-0700\n"
...
"Generated-By: Babel 0.9dev-r215\n"

#: translate_demo/controllers/hello.py:10 translate_demo/controllers/hello.py:13
msgid "Hello"
msgstr ""

The .pot details that appear here can be customized via the extract_messages configuration in your project's setup.cfg (see the Babel Command-Line Interface Documentation for all configuration options).

Next, we'll initialize a catalog (.po file) for the Spanish language:

$ python setup.py init_catalog -l es
running init_catalog
creating catalog 'translate_demo/i18n/es/LC_MESSAGES/translate_demo.po' based on 'translate_demo/i18n/translate_demo.pot'

Then we can edit the last line of the new Spanish .po file to add a translation of "Hello":

msgid "Hello"
msgstr "¡Hola!"

Finally, to utilize these translations in our application, we need to compile the .po file to a .mo file:

$ python setup.py compile_catalog
running compile_catalog
1 of 1 messages (100%) translated in 'translate_demo/i18n/es/LC_MESSAGES/translate_demo.po'
compiling catalog 'translate_demo/i18n/es/LC_MESSAGES/translate_demo.po' to 'translate_demo/i18n/es/LC_MESSAGES/translate_demo.mo'

We can also use the update_catalog command to merge new messages from the .pot to the .po files. For example, if we later added the following line of code to the end of HelloController's index method:

response.write('Goodbye: %s' % _('Goodbye'))

We'd then need to re-extract the messages from the project, then run the update_catalog command:

$ python setup.py extract_messages
running extract_messages
extracting messages from translate_demo/__init__.py
extracting messages from translate_demo/websetup.py
...
extracting messages from translate_demo/tests/functional/test_hello.py
writing PO template file to translate_demo/i18n/translate_demo.pot
$ python setup.py update_catalog
running update_catalog
updating catalog 'translate_demo/i18n/es/LC_MESSAGES/translate_demo.po' based on 'translate_demo/i18n/translate_demo.pot'

We'd then edit our catalog to add a translation for "Goodbye", and recompile the .po file as we did above. For more information, see the Babel documentation and the GNU Gettext Manual.

Next we'll need to repeat the process of creating a .mo file for the en and fr locales:

$ python setup.py init_catalog -l en
running init_catalog
creating catalog 'translate_demo/i18n/en/LC_MESSAGES/translate_demo.po' based on 'translate_demo/i18n/translate_demo.pot'
$ python setup.py init_catalog -l fr
running init_catalog
creating catalog 'translate_demo/i18n/fr/LC_MESSAGES/translate_demo.po' based on 'translate_demo/i18n/translate_demo.pot'

Modify the last line of the fr catalog to look like this:

#: translate_demo/controllers/hello.py:10 translate_demo/controllers/hello.py:13
msgid "Hello"
msgstr "Bonjour"

Since our original messages are already in English, the en catalog can stay blank; gettext will fall back to the original.
Once you've edited these new .po files and compiled them to .mo files, you'll end up with an i18n directory containing:

i18n/translate_demo.pot
i18n/en/LC_MESSAGES/translate_demo.po
i18n/en/LC_MESSAGES/translate_demo.mo
i18n/es/LC_MESSAGES/translate_demo.po
i18n/es/LC_MESSAGES/translate_demo.mo
i18n/fr/LC_MESSAGES/translate_demo.po
i18n/fr/LC_MESSAGES/translate_demo.mo

Start the server with the following command:

$ paster serve --reload development.ini

Test your controller by visiting it in your browser. You should see the following output:

Default: Hello
fr: Bonjour
en: Hello
es: ¡Hola!

You can now set the language used in a controller on the fly. For example, this could be used to allow a user to set which language they wanted your application to work in. You could save the value to the session object:

session['lang'] = 'en'
session.save()

then on each controller call the language to be used could be read from the session and set in your controller's __before__() method, so that the pages remain in the language that was previously set:

def __before__(self):
    if 'lang' in session:
        set_lang(session['lang'])

Pylons also supports defining the default language to be used in the configuration file. Set a lang variable to the desired default language in your development.ini file, and Pylons will automatically call set_lang with that language at the beginning of every request. E.g. to set the default language to Spanish, you would add lang = es to your development.ini:

[app:main]
use = egg:translate_demo
lang = es

If you are running the server with the --reload option, the server will automatically restart if you change the development.ini file. Otherwise restart the server manually, and the output would this time be as follows:

Default: ¡Hola!
fr: Bonjour
en: Hello
es: ¡Hola!

If your code calls _() with a string that doesn't exist at all in your language catalog, the string passed to _() is returned instead. Modify the last line of the hello controller to look like this:

response.write("%s %s, %s" % (_('Hello'), _('World'), _('Hi!')))

Warning: of course, in real life breaking up sentences in this way is very dangerous, because some grammars might require the order of the words to be different.

If you run the example again, the output will be:

Default: ¡Hola!
fr: Bonjour World!
en: Hello World!
es: ¡Hola! World!

This is because we never provided a translation for the string 'World!', so the string itself is used.

Pylons also provides a mechanism for fallback languages, so that you can specify other languages to be used if the word is omitted from the main language's catalog. In this example we choose fr as the main language but es as a fallback:

import logging

from pylons.i18n import add_fallback, set_lang

from translate_demo.lib.base import *

log = logging.getLogger(__name__)

class HelloController(BaseController):

    def index(self):
        set_lang('fr')
        add_fallback('es')
        return "%s %s, %s" % (_('Hello'), _('World'), _('Hi!'))

If Hello is in the fr .mo file as Bonjour, World is only in es as Mundo, and none of the catalogs contain Hi!, you'll get the multilingual message: Bonjour Mundo, Hi!. This is a combination of the French, Spanish and original (English in this case, as defined in our source code) words.

The translation functions can also be used directly in templates; in a Mako template:

${_('Hello')}

would produce the string 'Hello' in the language you had set. Babel currently supports extracting gettext messages from Mako and Genshi templates. The Mako extractor also provides support for translator comments.
Babel can be extended to extract messages from other sources via a custom extraction method plugin. Pylons (as of 0.9.6) automatically configures a Babel extraction mapping for your Python source code and Mako templates. This is defined in your project's setup.py file:

message_extractors = {'translate_demo': [
    ('public/**', 'ignore', None),
    ('**.py', 'python', None),
    ('templates/**.mako', 'mako', None),
]},

For a project using Genshi instead of Mako, the Mako line might be replaced with:

('templates/**.html, 'genshi, None),

See Babel's documentation on Message Extraction for more information.

Occasionally you might come across a situation when you need to translate a string when it is accessed, not when the _() or other functions are called. Consider this example:

import logging

from pylons.i18n import get_lang, set_lang

from translate_demo.lib.base import *

log = logging.getLogger(__name__)

text = _('Hello')

class HelloController(BaseController):

    def index(self):
        response.write('Default: %s<br />' % _('Hello'))
        for lang in ['fr', 'en', 'es']:
            set_lang(lang)
            response.write("%s: %s<br />" % (get_lang(), _('Hello')))
        response.write('Text: %s<br />' % text)

Here the Text: line never changes, because text = _('Hello') is evaluated only once, when the module is first imported. For cases like this, Pylons provides a lazy_gettext() function that defers the translation until the string is actually used:

import logging

from pylons.i18n import get_lang, lazy_gettext, set_lang

from helloworld.lib.base import *

log = logging.getLogger(__name__)

text = lazy_gettext('Hello')

One packaging note: for the compiled catalogs to ship with your project when it is distributed as an egg, the .mo files need to be listed under package_data in setup.py:

package_data={'translate_demo': ['i18n/*/LC_MESSAGES/*.mo']},

Pylons also provides the ungettext() function. It's designed for internationalizing plural words, and can be used as follows:

ungettext('There is %(num)d file here', 'There are %(num)d files here',
          n) % {'num': n}

Plural forms have a different type of entry in .pot/.po files, as described in The Format of PO Files in GNU Gettext's Manual:

#: translate_demo/controllers/hello.py:12
#, python-format
msgid "There is %(num)d file here"
msgid_plural "There are %(num)d files here"
msgstr[0] ""
msgstr[1] ""

One thing to keep in mind is that other languages don't have the same plural forms as English. While English only has 2 plural forms, singular and plural, Slovenian has 4! That means that you must use ungettext for proper pluralization. Specifically, the following will not work:

# BAD!
if n == 1:
    msg = _("There was no dog.")
else:
    msg = _("There were no dogs.")

This document only covers the basics of internationalizing and localizing a web application. GNU Gettext is an extensive library, and the GNU Gettext Manual is highly recommended for more information. Babel also provides support for interfacing to the CLDR (Common Locale Data Repository), providing access to various locale display names, localized number and date formatting, etc.

You should also be able to internationalize and then localize your application using Pylons' support for GNU gettext. Please feel free to report any mistakes to the Pylons mailing list or to the author. Any corrections or clarifications would be gratefully received.

In the paragraph "For a project using Genshi..." there is a typo. It should read:

('templates/**.html', 'genshi', None),

In my 0.9.6.1 template the "message_extractors" in my setup.py are by default commented out. So to use gettext in templates you need to remove the "#" there.

Another comment on my i18n researches regarding formencode. If you write your own validators and want to localize the error messages, then the best way to do that appears to be:

from pylons.i18n import _
import formencode

class MyValidator(formencode.FancyValidator):
    messages = {
        'myerror' : _('Something crazy is wrong.')
    }
    ...
And when using the @validate decorator you have to pass in a state argument that sets "_" to Pylons' own gettext function. A simple sample class called PylonsFormEncodeState is described elsewhere. All you have to do then is call @validate like this:

@validate(MyValidationSchema, form='myform', state=PylonsFormEncodeState())

Just another note (sorry for spamming): if you use "self.message()" in your own validators then formencode can just use its builtin localisation. So usually you either have the localized error message for built-in validators XOR for your own validators. Actually self.message() doesn't do much, so it should be safe to use this instead:

raise formencode.Invalid(
    _('Your input is just pure garbage.'),
    value, state)

That way "python setup.py extract_messages" will find your _() string to create a POT file, and you can still use the built-in validators with proper i18n.

Even if you use _('some string') and try to set_lang to a language that you don't have a PO/MO file for, you need to create a dummy English ('en') PO file to avoid getting a LanguageError: "IOError: [Errno 2] No translation file found for domain" error. So just run

python setup.py init_catalog -l en

and you can use "add_fallback('en')". No need to change anything in the en/LC_MESSAGES/foobar.po file. It just has to be there. (So it seems.)

After doing everything as explained above I get the following error trace:

Traceback (most recent call last):
  File "setup.py", line 31, in """,
  File "/sw/lib/python2.5/distutils/core.py", line 112, in setup
    _setup_distribution = dist = klass(attrs)
  File "/sw/lib/python2.5/site-packages/setuptools/dist.py", line 223, in __init__
    _Distribution.__init__(self,attrs)
  File "/sw/lib/python2.5/distutils/dist.py", line 267, in __init__
    self.finalize_options()
  File "/sw/lib/python2.5/site-packages/setuptools/dist.py", line 256, in finalize_options
    ep.load()(self, ep.name, value)
  File "/sw/lib/python2.5/site-packages/pkg_resources.py", line 1912, in load
    entry = __import__(self.module_name, globals(),globals(), ['__name__'])
  File "/sw/lib/python2.5/site-packages/Babel-0.9.2-py2.5.egg/babel/messages/__init__.py", line 16, in
    from babel.messages.catalog import *
  File "/sw/lib/python2.5/site-packages/Babel-0.9.2-py2.5.egg/babel/messages/catalog.py", line 29, in
    from babel.dates import format_datetime
  File "/sw/lib/python2.5/site-packages/Babel-0.9.2-py2.5.egg/babel/dates.py", line 34, in
    LC_TIME = default_locale('LC_TIME')
  File "/sw/lib/python2.5/site-packages/Babel-0.9.2-py2.5.egg/babel/core.py", line 629, in default_locale
    return '_'.join(filter(None, parse_locale(locale)))
  File "/sw/lib/python2.5/site-packages/Babel-0.9.2-py2.5.egg/babel/core.py", line 737, in parse_locale
    raise ValueError('expected only letters, got %r' % lang)
ValueError: expected only letters, got 'utf-8'
mykola-paliyenkos-macbook-pro:uaprom mykola$

What am I doing wrong?
The most important thing to understand about React Router v5 is how composable it is. React Router doesn't give you a house - it gives you some nails, screws, plywood, and a hammer while trusting that you can do the rest. A more technical way to say that is that React Router v5 gives you composable routing primitives rather than finished solutions. Our starting point looks like this:

import * as React from 'react'
import {
  BrowserRouter as Router,
  Route,
  Link
} from 'react-router-dom'

const Home = () => <h2>Home</h2>
const About = () => <h2>About</h2>

export default function App () {
  return (
    <Router>
      <div>
        {/* Links */}
        <Link to="/">Home</Link>
        <Link to="/about">About</Link>

        <hr/>

        <Route exact path="/">
          <Home/>
        </Route>
        <Route path="/about">
          <About/>
        </Route>
      </div>
    </Router>
  )
}

Suppose that instead of plain Links we want an OldSchoolMenuLink component that prepends a > to whichever link matches the app's current path:

export default function App() {
  return (
    <Router>
      <div>
        <OldSchoolMenuLink exact={true} to="/">
          Home
        </OldSchoolMenuLink>
        <OldSchoolMenuLink to="/about">
          About
        </OldSchoolMenuLink>

        <hr />

        <Route exact path="/">
          <Home />
        </Route>
        <Route path="/about">
          <About />
        </Route>
      </div>
    </Router>
  )
}

First, let's do the easy part. We know what props OldSchoolMenuLink is going to be taking in, so we can build out the skeleton of the component:

function OldSchoolMenuLink({ children, to, exact }) {

}

To decide whether to render the >, we can reach for React Router's useRouteMatch custom Hook. useRouteMatch gives you information on how (or if) the Route matched. Typically you invoke it with no arguments to get the app's current path and url. In our case, instead of just getting the current path and url, we want to customize it to see if the app's path matches OldSchoolMenuLink's to prop. If it does we want to pre-pend > and if it doesn't we won't. To tell useRouteMatch what we want to match for, we can pass it an object with a path prop and an exact prop:

import { Link, useRouteMatch } from 'react-router-dom'

function OldSchoolMenuLink({ children, to, exact }) {
  const match = useRouteMatch({
    exact,
    path: to
  })

  return (
    <div>
      {match ? '> ' : ''}
      <Link to={to}>{children}</Link>
    </div>
  )
}

Just like that, we've created our own Link component and used React Router's useRouteMatch custom Hook.
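As a design aside, React Router v5 also ships a NavLink component that covers the most common version of this pattern (styling whichever link is active); reaching for useRouteMatch, as above, is for behavior NavLink doesn't offer. A minimal NavLink equivalent for comparison:

import { NavLink } from 'react-router-dom'

// Adds the "active" class instead of prepending "> ".
<NavLink exact to="/" activeClassName="active">Home</NavLink>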
Run parameterized shell commands from Atom. Highlights: See the changelog for the latest improvements. Install Process Palette and then either generate or download a configuration file. Packages|Process Palette|Togglefrom the menu or Process Palette: Togglefrom the command palette. The following panel will appear: Do it!buttons. If a project specific configuration is created and more than one project is open then one can be chosen from the dialog box that will appear. Do it!buttons will create an example configuration file and open it in the graphical editor. Packages|Process Palette|Edit Configurationfrom the menu or Process Palette: Edit Configurationfrom the command palette. Closing the editor will automatically reload the configuration. The process-palette.jsonfile can also be edited directly, but then it needs to be reloaded by running Process Palette: Reload Configuration. Packages|Process Palette|Reload Configurationfrom the menu or Process Palette: Reload Configurationfrom the command palette. These example configurations define a command that will echo a message to standard output. It can be run by choosing Process Palette: Echo Example from the command palette. This will open the Process Palette panel and show the output. The panel can also be opened directly by pressing Ctrl-Alt-P or running Process Palette: Toggle from the command palette. It also contains an example called Stream Example to show the direct stream ability. When streaming is enabled the output is written directly to the target without being formatted. Process Palette: Reload Configurationcommand when making changes directly to the process-palette.jsonfile. The graphical editor makes it easier to edit the configuration files. It can be opened by choosing Process Palette: Edit Configuration from the command palette. A dialog will pop up from where you can choose to edit either the global configuration or a project specific configuration. The following is a screenshot of the graphical editor. The commands are listed on the left. Selecting one will show its details on the right. Pressing the Edit Patterns button allows you to define custom patterns for recognizing file paths and line numbers when writing output to the panel, although the default built-in pattern ought to be sufficient in most cases. The configuration file will be saved and automatically reloaded when closed. The palette panel shows the configured command and output targets for each command. The visibility of these can be toggled in the settings. Commands will be executed by the system's default shell, which is sh on OSX and Linux and cmd.exe on Windows. If you would like to use a particular shell then you can specify it under Process Palette's settings. This shell will then be used when running any of the commands. Leave the value blank for the system default to be used. Process Palette has a small panel that lists all the commands that are configured. It can be toggled by pressing Ctrl-Alt-P or from the menu Packages|Process Palette|Toggle. From here one can see all the commands and also run them. Pressing the down arrow in the top right corner will hide the panel. Multiple instances of a process can run at a time. The process ID of each instance is shown on the right in the form of a button. Pressing the button will show that process' output. The process can be manually terminated by pressing the square stop button next to the process ID. 
If the command is configured to output to the Process Palette panel then clicking on the process ID button will cause the panel to switch to showing the output of that process. The other process instances will still be shown, but the selected one will be highlighted. Scroll lock can be toggled with the lock button. Scroll lock will also enable when one starts to scroll or clicks on the output. It will automatically disable when one scrolls to the bottom. The output can be cleared by pressing the trash can button. From here one can return to the list by pressing the button in the top left corner. Each time a process is executed a message will be shown in the top right hand corner. A successful execution with an exit status code of 0 will show a success message. Anything other than 0 will show a warning. What these messages display can be configured or even disabled completely as will be seen in the Advanced Configuration section. Commands can be run from the tree view with the selected file as input to the command. Any command that references any of the {file*} variables will be available. To choose the command to run a file with, open the context menu on a file in the tree view and choose the command from the Run With sub menu. See the example at the top. The configuration files can also be edited by hand. The remainder of the document will describe how to do this. Commands are specified with a configuration file in JSON format. The name of the file must be process-palette.json and should be in the root of your project folder. If you have multiple project folders, each with its own configuration file, then their configurations will be merged. A process-palette.json file can also be placed in your ~/.atom folder. If that is the case then it will be loaded first and any project specific files will be loaded afterwards. A process-palette.json configuration file contains an array called commands. The following is an example of an empty array: {"commands" : []} Each entry in the array is an object that describes one command. The most basic configuration simply specifies the command to run and associates it with an action. The following command will run Ant without any arguments: {"commands" : [{"command" : "ant","action" : "Ant default"}]} Tip! : All process-palette.json configuration files can be reloaded by running the Process Palette: Reload Configuration command. It can be found in the Command Palette or in the Packages|Process Palette menu. There is also a reload button in the top right of the Process Palette panel. The new command will cause an entry to be added to the command palette called Process Palette: Ant default. The working directory used when running a command is by default the project path, but it can also be configured. More on this in the Advanced Configuration section. Command line arguments can also be specified in the form of an array of strings. The following example adds another command that causes the clean target to be executed by means of an argument: {"commands" : [{"action" : "ant-default","command" : "ant"},{"action" : "ant-clean-artifacts","command" : "ant clean"}]} Reloading the configuration will cause the command palette to now have two new entries: The namespace used for all commands is by default Process Palette. This is also configurable. One must just be careful to not override commands in existing packages. 
Let's modify the previous two commands to use a namespace call Ant: {"commands" : [{"namespace" : "ant","action" : "default","command" : "ant"},{"namespace" : "ant","action" : "clean-artifacts","command" : "ant clean",}]} After reloading the configuration file the entries will be: Custom shortcut keys can also be associated with commands by adding a keystroke entry. Let's add the keystroke Ctrl-Alt-A to the Ant: Default command: {"namespace" : "ant","action" : "default","command" : "ant","keystroke" : "ctrl-alt-a"} After reloading the configuration the Ant: Default command can be run by pressing Ctrl-Alt-A. The namespace, action, command and keystroke aren't the only properties that can be configured. Of these only the action and command are required. The rest are optional and have default values. Many of the properties can be parameterized with variables from the environment. The following two sections describe the configurable properties and also the variables that can be used to parameterize them. The following properties relate to the output produced by the process. The output can be redirected to a particular target. It can also be formatted depending on whether the process executed successfully or not. Giving any of the xxxOutput properties a value of null will prevent that output from being shown. The following properties relate to the messages shown before and after a command is executed. Giving any of the xxxMessage properties a value of null will prevent that message from being shown. The following properties relate to custom JavaScript that can be executed before and after the process. The outputTarget property specifies where the output produced by the process should be directed to. The following are valid targets: The default value of outputTarget is "panel". If it is overridden with null then it will default to "void". Some of the properties can be parameterized with variables. Variables are added by enclosing the name of the variable in braces : { and }. The default values of some of the properties are already parameterized as can be seen in the tables above. There are two types of variables : input and output. Input variables are available before the process executes and output variables are available after it has executed. The following tables list the input and output variables: Input Input from editor The following input variables are only available if an editor is open. Their values default to an empty string otherwise. Input from user The inputDialogs property is an array of objects, each defining an input dialog that will be opened in order to take input from the user. Every dialog must specify the variableName that can then be used just like any other variable. Dialogs can be customized with a different message and initialInput. Here is an example of one such dialog: "inputDialogs" : [{"variableName": "userInput","message": "Foo?","initialInput": "Bar!"}] Output These variables are only available after the process has executed. They can therefore typically be used in the output and message related properties. The table below shows which properties support input variables and/or output variables: The namespace, action and keystroke properties do not support variables. The env property supports variables only in its values, for example : "env" : {"MYVAR" : "{fileName}"} A useful way of seeing the values of the variables is to add them to one of the output properties and then executing the command. 
For instance : "successOutput" : "File path : {filePath}\nProject path : {projectPath}" will show the values of filePath and projectPath respectively. Another way is to simply echo them as shown in the example. Keep in mind that the arguments property is an array of strings. Adding variables to arguments should therefore be done as such: "arguments" : ["{fileNameExt}", "{selection}"] The value of a variable can be modified by piping it through a transform. The syntax is {variable | transform} The following transforms are available: It may sometimes be necessary to convert a file path to use separators for a different platform. Any of the variables can be converted by piping it through a transform. If, for instance, you are running on Linux and need to convert the filePath variable to a Windows style path, then you can specify it as {filePath | win}. The opposite can also be done with {filePath | unix} or {filePath | posix} when running on Windows. Commands that write to the output panel can be configured to detect file paths and optionally line numbers. Detected paths will be underlined. Clicking it will open the file and if a line number is detected jump to it. All commands will detect file paths by default. If line numbers are required then additional configuration is necessary. Patterns are used to detect paths and line numbers. Process Palette has one built in pattern for detecting paths and all commands use this pattern by default. Custom patterns can be added to the configuration file. Commands can then be configured to detect any of these patterns. The following example shows two custom patterns. The one pattern (P1) detects a path followed by a : and then the line number, whereas the other (P2) has whitespace between the path and line number. "patterns" : {"P1" : {"expression" : "(path):(line)"},"P2" : {"expression" : "(path)\\s+(line)"}},"commands" : [{"patterns" : ["P1", "P2"]}] Notice the following: patternsis an object and is defined on the same level as commands. P1and P2in this case. expression. (path)and (line)are special placeholders. These are substituted with regular expressions for matching each respectively. (path)and (line)should be valid regular expressions. With this configuration the command will be able to match P1 and P2 patterns. This overrides the default configuration that matches only paths. The built-in pattern for matching paths is called default. To match the default pattern as well, simply add it to the list: "patterns" : ["P1", "P2", "default"] Order matters! Patterns are evaluated in the order they are given. This means that if default was first in the list then it will be matched, but never any of the line numbers. To disable pattern matching simply set the value of patterns to null. In the previous example the (path) and (line) placeholders were used to quickly create an expression. In the background these are replaced with appropriate regular expressions. The (path) placeholder, in particular, will be replaced with an expression that is appropriate for the platform. It may be that the built in expression is not sufficient for detecting paths in your command's output. If that is the case then you can overwrite it with your own. The following example shows how. "patterns" : {"P1" : {"expression" : "(path):(line)","path" : "(?:\\/[\\w\\.\\-]+)+"}} In this case the given expression for path will be used instead. Important note about groups Groups are enclosed in round brackets. 
path and line each forms a group and in this order they are at index 1 and 2 respectively. In this example the path expression is being overwritten, but this expression defines a group of its own. What's important to notice is the ?: at the start of the group, which ensures that the group is not counted. The only groups that are allowed to be counted are for path and line, but neglecting to exclude other groups will interfere with their indexes. You can specify your own JavaScript to run at certain stages of a process. Code can be specified to run before the process starts and after it has completed. Separate scripts can be specified based on whether the process completed successfully or failed. Any of the input and output variables are available from within the scripts. Variables can be accessed simply by using its name, without the need to enclose it in braces. You also have access to Atom's API. For example: atom.workspace.open(fileProjectPath + '/my.file') Environment variables can be accessed via a variable called env. For example: console.log(env['MY_ENV_VAR']) These scripts are base-64 encoded. It is therefore advised to rather edit the scripts from the graphical editor instead of directly in the process-palette.json file, because then the script will be encoded automatically. Process Palette considers its process to be completed only when all commands have finished executing. For instance, if a command is executed with an & appended then Process Palette will continue to handle the output produced by it until the child process spawned by that command exits. This in itself is not a major problem. The issue is that Process Palette currently cannot kill child processes that are executed in this way. If the command executed with & opens a window then closing the window will allow the parent process to complete, but if it doesn't then one will have to kill the process by whatever means your OS allows. Good catch. Let us know what about this package looks wrong to you, and we'll investigate right away.
https://atom.io/packages/process-palette
CC-MAIN-2019-51
refinedweb
2,598
56.76
User talk:AtionSong From Uncyclopedia, the content-free encyclopedia uhh ... whats up with your rewriting V.F.D.?? That was my page and it got deleted even though I wasn't finished working on it ... so how come you are now allowed to get de page?? --Nerd42eMailTalkUnMetaWPediah2g2 04:17, 17 Jan 2006 (UTC) - Your article got deleted, obviously because the content was not up to un-standards. It got deleted. I decided to write a new article for it. If you had tried to rewrite the same article again, if would just have gotten a Vote for Speedy Deletion. So, I'm not quite sure what you have a problem with. -AtionSong 16:30, 17 Jan 2006 (UTC) edit Re: Why Deleted? - Oops, sorry about that. Restored. Make sure it doesn't suck, though! -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 22:05, 6 April 2006 (UTC) edit Not you as well... You finished a sentance! ~ 07:01, 20 May 2006 (UTC) - Ay No! Yo soy ravaged con un ganga ravid de Wikipedians! My bad. -AtionSong 14:01, 20 May 2006 (UTC) edit Moved page Your page marked "Work in Progress" was moved to User:AtionSong/Uncyclopedia:Errors in Wikipedia that have been corrected in Uncyclopedia after 7 days with no activity. Feel free to move it back when it's ready. -- sannse (talk) 15:32, 30 December 2006 (UTC) edit UnTunes:The Star and Eagle Good one! I'm thinking that the extra info at The_Star_and_Eagle_and_Baseball_and_Mom's_Apple_Pie_and_Rocky_Mountain_Sunset_and_Liberty_Bell_and_Iwo_Jima_Statue_and_George_Washington's_Head_Spangled_Banner,_Yee-Haw! should oughta might be merged into it, so there's not 2 separate articles. Your thoughts?User:Tooltroll/sig 22:30, 15 January 2007 (UTC) - Personally, I like how the articles compliment each other. It's like having an article about a song (to give info), then a recording of the song (to prove that it's "real"). Get what I'm saying? Each article strengthens the other. Plus, I think that the different namespaces should remain separated. -AtionSong 22:40, 15 January 2007 (UTC) - Sure, sure. Your call. I was just thinking an admin might wonder why it's split across namespaces when it could all be in one. But that's just me. If keeping them split lures more people to UnTunes, so much the better. Kudos also for Oscar's Rap. I hope you keep doing these- we need some quality contributors to UnTunes. Cheers!User:Tooltroll/sig 22:51, 15 January 2007 (UTC) edit Alternate reality game The WIP tag on your article expired, so I've moved it to your userspace for now so you can work on it there. Feel free to move it back to the main space when it's done. -- 16:51, 22 February 2007 (UTC) Why the hell did you rewrite halo?
http://uncyclopedia.wikia.com/wiki/User_talk:AtionSong?oldid=2155443
CC-MAIN-2015-40
refinedweb
464
68.06
I am trying to create the baysian belief network by using the example given in website but in given example its for three variable while I am trying to create with two variable, but always getting KeyError: (‘A’, ‘A’, ‘B’): from pomegranate import * guest = DiscreteDistribution({'A': 0.5, 'B':0.5}) prize = DiscreteDistribution({'A': 0.5, 'B': 0.5}) monty = ConditionalProbabilityTable( [['A', 'A', 0.5], ['B', 'B', 0.5], ['A', 'B', 0.5], ['B', 'A', 0() It will be great help if someone can help me to resolve this issue, I have tried my best not able to figure out this. Answer The conditional table, in this problem must contain 3 variables. This is because the conditional probability, in this case is given by P(Monty|Guess,Prize). In another word, in order to achieve a certain state of Monty, a joint probability of guess and prize must be satisfy. Hence, no variable could be taken out of the table(or the conditional probability equation). And to solve your problem of using only 2 variable, we need to change the approach to the problem by ignoring either the probability of “Guess” or “Prize”, in order to make monty takes in only 1 variable. And the new probabilistic equation will becomes P(Monty|temp). from pomegranate import * temp = DiscreteDistribution({'A': 0.5, 'B':0.5}) monty = ConditionalProbabilityTable( [['A', 'A_prime', 0.5], ['B', 'B_prime', 0.5], ['A', 'B_prime', 0.5], ['B', 'A_prime', 0.5]], [temp]) s1 = Node(temp, name="temp") s2 = Node(monty, name="monty") model = BayesianNetwork("Not a Monty Hall Problem") model.add_states(s1, s2) model.add_edge(s1, s2) model.bake()
https://www.tutorialguruji.com/python/bayesiannetwork-pomegranate-getting-keyerror-while-create-bbn/
CC-MAIN-2021-43
refinedweb
270
60.11
This Java program implements a binary tree and checks whether one tree is a subtree of another. This can be done in two ways: a tree can be a subtree of another if they share the same structure built from the same object references, or if they have the same structure and the same values. The class below checks for both.

Here is the source code of the Java program to check whether an input binary tree is a subtree of another binary tree. The Java program is successfully compiled and run on a Windows system. The program output is also shown below.

//This is a java program to check whether a binary tree is subtree of another tree
class Btrees {
    Object value;
    Btrees Left;
    Btrees Right;

    Btrees(int k) {
        value = k;
    }
}

public class SubBinaryTree {
    public static boolean ifsubtreeRef(Btrees t1, Btrees t2) {
        if (t2 == null)
            return true;
        if (t1 == null)
            return false;
        return (t1 == t2) || ifsubtreeRef(t1.Left, t2) || ifsubtreeRef(t1.Right, t2);
    }

    public static boolean ifsubtreeValue(Btrees t1, Btrees t2) {
        if (t2 == null)
            return true;
        if (t1 == null)
            return false;
        // use equals() rather than == so boxed values outside the Integer cache still compare correctly
        if (t1.value.equals(t2.value))
            if (ifsubtreeValue(t1.Left, t2.Left) && ifsubtreeValue(t1.Right, t2.Right))
                return true;
        return ifsubtreeValue(t1.Left, t2) || ifsubtreeValue(t1.Right, t2);
    }

    public static void main(String[] args) {
        Btrees t1 = new Btrees(1);
        t1.Left = new Btrees(2);
        t1.Right = new Btrees(3);
        t1.Right.Left = new Btrees(4);
        t1.Right.Right = new Btrees(5);

        Btrees t2 = new Btrees(3);
        t2.Left = new Btrees(4);
        t2.Right = new Btrees(5);

        if (ifsubtreeRef(t1, t2))
            System.out.println("T2 is sub-tree of T1 (Reference wise)");
        else
            System.out.println("T2 is NOT sub-tree of T1 (Reference wise)");

        if (ifsubtreeValue(t1, t2))
            System.out.println("T2 is sub-tree of T1 (Value wise)");
        else
            System.out.println("T2 is NOT sub-tree of T1 (Value wise)");
    }
}

Output:

$ javac SubBinaryTree.java
$ java SubBinaryTree
T2 is NOT sub-tree of T1 (Reference wise)
T2 is sub-tree of T1 (Value wise)

Sanfoundry Global Education & Learning Series – 1000 Java Programs. Here's the list of Best Reference Books in Java Programming, Data Structures and Algorithms.
http://www.sanfoundry.com/java-program-check-whether-input-binary-tree-sub-tree-binary-tree/
CC-MAIN-2016-50
refinedweb
354
59.9
import whiley.lang.Math

bool check(int x, int y):
    return max(x,y) == x

Previously, the above code would compile with function max() being imported from whiley.lang.Math. In other words, an import statement automatically imports all names from a given module. However, this gives relatively little control over namespaces and quickly leads to namespace pollution.

Therefore, in the upcoming release of Whiley, the semantics of import statements has been brought more in line with Python. Thus, the above would not compile as is. Instead, we would need to write:

import whiley.lang.Math

bool check(int x, int y):
    return Math.max(x,y) == x

This all makes sense, and I'm absolutely happy with the choice to do this. However, as usual, there are some hidden issues I didn't foresee. The first issue with the above change came out from actually writing code using it! In particular, I was working on my bytecode disassembler benchmark and constructed a module ClassFile as follows:

define ClassFile as ...
define Reader as ...
define Writer as ...

This gives rise to the types ClassFile.Reader and ClassFile.Writer, both of which make sense. But, it also gives rise to the type ClassFile.ClassFile, which frankly is rather cumbersome. That's because a ClassFile is both a key concept in my design and, coincidentally, a namespace as well. Of course, I could possibly rename ClassFile to be module Class as follows:

define File as ...
define Reader as ...
define Writer as ...

This gives rise to the types Class.Reader, Class.Writer and Class.File. This is better, but I suspect such renaming won't always fit well with the top-level design of a program. I imagine Python must suffer from this problem as well, so I'll have to look into it more …

Another problem with the above change to import statements is how it affects process messages. The following illustrates:

import whiley.io.File

[byte] ::readFile(String filename):
    fr = File.Reader(filename)
    return fr.read()

This all looks fairly sensible, right? Well, currently, it doesn't compile. The reason becomes apparent if we look inside the File module:

package whiley.io

define Reader as process { ... }

Reader ::Reader(String filename):
    return spawn { ... }

[byte] Reader::read():
    ...

What we see is that read() is a message declared on process type File.Reader. In Java terms, read() is a static method which accepts an argument of type File.Reader. And, therein lies the problem. To get our readFile() example to compile we need to write this:

import whiley.io.File

[byte] ::readFile(String filename):
    fr = File.Reader(filename)
    return fr.(File.read)()

Or, alternatively, we could write it as this:

import whiley.io.File
import read from whiley.io.File

[byte] ::readFile(String filename):
    fr = File.Reader(filename)
    return fr.read()

I find this somewhat annoying. However, it's not clear how much of a problem it really is. That's because, in practice, we'd probably want to define File.Reader as an interface like so:

package whiley.io

define Reader as interface { [byte] read() }

Reader ::Reader(String filename):
    proc = spawn { ... }
    return { this: proc, read: &read }

[byte] Reader::read():
    ...

An interface is a special kind of record with an explicit field this. Then, when we access the field read, this is automatically used as the receiver.
With Reader implemented as above, our original incantation of readFile() would actually compile. That's because, in this case, fr.read() corresponds to an indirect message send, whereas before it was a direct message send.

On the whole, I'm not sure what I'm complaining about! Implementing Reader as an interface versus a process is much of a muchness. There is a minor issue of performance as, for a process, you get a static method invocation. But, I'm probably just splitting hairs …

In OCaml they have the same issue. A struct/sig (i.e. module/interface) named Set defines a set type and all its functions. The convention is to use t as the module primary type:

The "whiley." prefix in import statements seems unnecessary. It's just more typing for the programmer, without much benefit.

Hi Krzysztof, Well, it's like the "java." prefix in Java. As in e.g. "import java.util.*". Given that there are plenty of packages which aren't in the standard library, it seems useful to me?

Hi Daniel, Thanks for that pointer. I didn't know much about OCaml namespacing, but it does seem rather interesting! I found this quite good: So, in my case, I was considering having a default where a type of the same name as the enclosing module is automatically imported. E.g. if importing module ClassFile, then the name ClassFile.ClassFile is automatically imported as well. The other option is to decouple modules from namespaces. That's probably more flexible, but might be a little weird. Looks like you've got me on the right track!

F# is interesting as well, in that it does separate namespaces and modules (at least to some extent): Hmmmm, need to think about all this!!
http://whiley.org/2011/09/03/namespaces-in-whiley/
CC-MAIN-2017-47
refinedweb
899
69.89
--- Leigh Purdie <Leigh Purdie intersectalliance com> wrote:

> But that raises the question I guess.. If a user
> attempts to access /path/to/protected/file.txt,
> and ACLs block them at /path/to, what should the
> event report?

In existing (Solaris, Irix) audit systems you will get:

- The access attempted (e.g. open for read, stat)
- The process attributes
- The path requested, /path/to/protected/file.txt
- The path resolved, /path/to
- The attributes of /path/to that resulted in access being denied (the ACL in this case)

It is amazingly common that the path requested is not the path that ends up being interesting in audit records.

> Failed access to /path/to/protected/file.txt (at which
> point, the auditor wants to know 'how did they get that
> far in the directory tree??), or Failed access to /path/to
> (at which point, the auditor has no idea of an attempted
> attack on a 'sensitive file')?

This is why both are necessary.

> My feeling is that the second option is most useful,
> but if we follow the above logic to conclusion, perhaps
> we would receive both events.

Meeting the CAPP requirements as well as being useful to an auditor adds a certain spice to the design.

> > - When /etc/passwd is renamed to /etc/opasswd
> > do you want to stop watching it?
>
> Yes. I don't think we should second-guess the intent
> of the auditor. If they request /etc/passwd, monitor
> that only. If they request /etc/*passwd*, that's a
> different story.

The CAPP requirements are written in subject/object terms. The object that was /etc/passwd is now named /etc/opasswd. To meet the CAPP requirements you need a way to monitor the object, even if the name changes. Pathnames aren't even attached to the objects. A file can exist without one, and still needs to be audited even when there is no reference to it in the file system namespace. Really.

=====
Casey Schaufler
casey schaufler-ca com
https://www.redhat.com/archives/linux-audit/2005-January/msg00119.html
CC-MAIN-2015-11
refinedweb
343
66.03
This is obviously the end of Minecraft. Mojang killed their own game by attacking the only thing that was keeping it alive in the first place, and...
That's an awesome face.
That is a fragment. I'm going to check with my developers to see how long it is going to take. I think they started it already. We are developing this for my server. Its private. Maybe we could make a deal?
Guys, lay off him. Come on, the Bukkit community is better than that.
We are making this a custom plugin for my server. Maybe we could strike some deal? Also, WarmakerT is right. This is for requesting plugins to be made by developers, not to request someone to find them.
Ill try to do it. I am currently learning java and could try.... However I think someone else should do it. I probably wont be successful. :P
*PANT* *PANT* I can just feel the hysteria and panic.
Hmm... List cannot be resolved to a type. Config cannot be resolved player cannot be resolved.
Just the player names.
How do you search the contents of a certain file and do an if/else if that string is there or not? i.e. if (player.getName() isfound in...
Oh! I went to sleep without thanking you! Thank you!
Giant
When I log in, it shows me as something like CraftPlayer{name=TobyG123}
Thank you Giant and travja!
Giant
Still nothing.
I dont care, here is the ENTIRE plugin: package net.arcadecraft; import java.util.logging.Logger; import...
Hmm. Two things. 1. An error occurs that says that plugin cant be resolved to a variable. 2. Another one on one of the brackets that says a )...
I have this: @EventHandler public void onPlayerLogin(PlayerLoginEvent event) { Player player = event.getPlayer();...
Wow.
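For what it's worth, the file-search question above ("if (player.getName() isfound in...") usually comes down to reading the file line by line. A small sketch in plain Java — the file path and the one-name-per-line layout are assumptions:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class NameFile {
    // Returns true if `name` appears on its own line in the file.
    public static boolean contains(String path, String name) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.trim().equalsIgnoreCase(name)) {
                    return true;
                }
            }
        }
        return false;
    }
}

A plugin could then branch with if (NameFile.contains("plugins/MyPlugin/names.txt", player.getName())) { ... } else { ... }.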
https://dl.bukkit.org/search/119356074/
CC-MAIN-2022-27
refinedweb
306
79.36
); } int x = 10;); } int x = 10;. 🙂 As I told you, there are subtle semantic differences between the two operators. That’s one of them. 🙂 — Eric @Mike, all me to present the "is run over by" operator, <=– So can we expect the ==> operator in future versions of C# that incements by 2? This would be a very handy operator, too. tia 😉) { if( g_fGoingUp ) return value1 < value2; else! Any plans to implement the "dagger" operator? Read "x stabs y" int x = 10; int y = 5; while (x ++!=– y) { break; } So, when you say "*Today* I am announcing" ……. Ahh, lol, whitespace is for humans eh? Mostly anyways. 🙂 Also, in the continuing spirit of TheDailyWTF, I would also like to request the "definitely" operator, which wraps any boolean expression, and evaluates it twice times (just to be sure it’s really true). E.g. if (definitely(foo != null)) { …. } That’d also be some sweet syntactic sugar for dealing with those pesky race condition related bugs, writing thread-safe singletons etc – if it crashes, well, just add another "definitely". To make it easier, allow them to be chained without extra parens: if (definitely definitely definitely (foo != null)) { // NOTE: if code crashes here, add another "definitely" above! foo.Bar(); } @James: Why not follow Eric’s advice, and try it on your local compiler version? To be in parity with: for (int i = 0; i < 10; i++) How about: while (int x = 10; x–>0) I wish more large companies allowed their employees to have a sense of humor. Kudos! @Pavel: public class definitely { private bool Val; public implicit operator bool(definitely def) { return def.Val; } public explicit operator definitely(bool val) { return new definitely() {Val = val}; } } if ((definitely) (definitely) (definitely) (foo != null)) { // NOTE: if code crashes here, add another "definitely" above! foo.Bar(); } @Stuart: that won’t do the trick, since the expression (foo != null) will only be evaluated once, and then the _computed result_ of that expression will be repeatedly re-evaluated. The crucial part of the concept – that which makes it so valuable – is that the entire parenthesized expression itself is recomputed every time. I don’t think this is doable as a library solution using existing C# means – though by all means go ahead and prove me wrong, if possible. @Pavel: Well, you could make it work with just a small change in the syntax : public class definitely { private Func<bool> _test; public static implicit operator bool(definitely def) { return def._test(); } public static explicit operator definitely(Func<bool> test) { return new definitely { _test = test }; } } To use it : if ((definitely) (definitely) (definitely) (() => foo != null)) { foo.Bar(); } how about adding the ‘??=’ operator ‘last-minute’ ? are there any other binary operators missing their assignment equivalents? I thought this was an April fool’s joke until I tried it on vs2010. works fairly ok for integers, though. the ‘tends-to" operator is always a decrementing operator, such that x—> y where y is larger than x does not yield expected results. With the double data type, it goes completely crazy. The following code will produce a last output of 2.7. Go figure! var x = 50.7d; while (x– > 3.5) { Console.WriteLine(x); } Thanks Eric. Amazed at the Float version of it!! If there’s time for changes, I’d like to report a bug. 
I know you’re pressed for time so I’ve already written a unit test that demonstrates the problem: [TestMethod] public void AssertShift() { int x = 1 << 1; int y = 1 << 32; Assert.IsTrue(x < y); } This fails in VS2008 using .NET 3.5, and although I had to uninstall the CTP recently, I believe it fails there, too. I know you’re probably busy as heck but I wrote it up in the hope that today is the last day you’ll be considering this sort of submission before RTM. You scared the shit out of me. Glad it is an April fools joke 🙂 Besides joking, a range operator may be quite useful. I mean “..” (two dots) may be shortcut to Enumerable.Range() method. foreach(var x in 0..100) { Console.WriteLine(x); } var notPrimes = from x in 2..10 from y in x..10 select x*y; var primes = (2..100).Except(notPrimes); And of course number of dots can indicate step size: foreach(var x in 0….100) // increment by 3 { Console.WriteLine(x); } 🙂. — Eric How much extra hours your team had to work to implement this really _useful_ feature? 🙂 This is antastic news. The key between ‘D’ and ‘G’ on my keyboard doesn’t work, so I’ve never been able to write incrementing loops – you know; the ones that use that keyword that begins with the key between ‘D’ and ‘G’. These new operators will probably save my coding career. God bless you Microsot. Well, I may have preferred the ‘..’ as known to powershell: (0..20) | % { Write-Host "I am $_" } Eric, why have not you blogged about another very cool new feature – embedded URLs? using System; class C { static void Main() { Console.WriteLine("Go"); } } Are we certain that you have examine all the possible security and runtime implications? int jd = 21; while(jd –> vegas) { //"This item is obfuscated and can not be translated" } How will we know what the value is once we get jd back? F# has the ability to create lists in steps [5..10]. I know its different, but useful. Mimics the while loop "list creation" here. Of course these operators should NOT be added until the "!!!" operator is in-place (used when programmer bangs ghead three times due to foolishness!!!) while (Microsoft –> Hell) { // TODO: } /// <summary> /// –> => > >> >>> ! <<< << < <= <– /// </summary> Genious! Lets create fixed like in java, multimple inharitance like in c++. Const function like in c++; Preprocessors!!! I need preprocessor like in c++! Good joke 🙂 BTW, operator = ability to specify operators in interface. Common interface for all numerical types. Matrix type. Extends System.Math to use Complex type, etc. All those stuff allows to use C# in math. Instead of mathlab, mathematica or maple. I very sad that C++ is more usefull than c#. I love it. This is the kind of humor that was typical of C language programming in the "old days." With this C# example, I feel a tear of nostalgia and a ray of hope for the future. That is crazy, I was so flown to the article. Never noticing x –> 0 is same as x– > 0. >. Maybe because the functionality is still available in form of Enumerable.Skip/Take/ToArray, which can all be evaluated just fine in Watch (except when stuck in native code during mixed debugging)? You had me there.. until I read the last paragraph. But yes, this is a nice trick to play on people. Arun Some Last-Minute New C# 4.0 Features –> ha ha ha, but nice joke 🙂 The minute I saw the title of the post, I looked at the date and I knew without reading the post where this is leading! Nice one!! That made me remind of an old (ancient, really ancient) joke. 
Tell me if you can "compile" this sentence: IF IF THEN THEN = ELSE ELSE ELSE = THEN 😉 It would be helpful to me if x –>!> y meant x approaches y VERY fast, so that I could write software that performed well on machines with fewer resources. In the mean time I will resort to avoiding bubble sort and the like. Long ago in the days of ‘goto’ it was proposed that we should have a ‘comefrom’ which would be very helpful with debugging. I gather that we might have that now. Not in C#. But you can do comefrom in Python. — Eric
https://blogs.msdn.microsoft.com/ericlippert/2010/04/01/some-last-minute-new-c-4-0-features/
CC-MAIN-2017-13
refinedweb
1,274
67.15
Library tutorials & articles Custom SMTP in C# Introduction This series of articles is written to show the user how to write TCP/ IP based client applications using C# on Microsoft's new .NET framework. This is the first article in this series.. The SMTP or Simple Mail Transfer Protocol is described in RFC 821. This application protocols is used to send email over the Internet. The .NET framework already contains an SMTP class in the System.Web.Mail namespace called SmtpMail. This class is sufficient for sending email over the Internet and I would not suggest that the class I'm presenting in this article is any better or worse. Let's just say that it is different. If you can get away with using the .NET SmtpMail class, then I suggest you do just that. The only advantage of my class is that it is open source and let me suggest that the SmtpMail class in .NET has a few more features. My motivation in writing this article is not to try and write a better SMTP class, but rather to show how to write TCP/ IP based clients in C#. Related articles Related discussion How to POP3 in C# by bishnu.tewary (7 replies) google apps email using php by lghtyr . Try using this library: It's simple and effective. Allows sending, receiving and parsing email messages. Encodes/decodes any kind of attachment. Includes POP3 and STMP clients. Written entirely in managed code in C#. CC (and BCC, but the B stands for "Blind" so it's not marked as a copy) are traditionally used to send it to others that need to see the letter as well without making any changes to it. With SMTP, I think you just use another RCPT TO and perhaps add a CC: blah and in the data section of the email... So to RECEIVE SMTP or INTERCEPT SMTP packets, does one need to write a local mail server? Looking at the code, I don't see any differentiation between To, CC, and BCC when you are sending them to the Mail server. For that matter, RFC821 doesn't seem to be much help on the matter either. How should one identify the three types when doing an SMTP send? In this example, SMTP class inherits tcpclient, is it more suitable to use a udpclient to send and receive e-mail? I searched this site again and found this article If I was to strip out all my proprietary code you would end up with this example. Hopefully this will be enough to get you started. Cheers Gord Gordonm, thank you so very much for your willingness to share your codes with me. YES, I do want it. A little detail about my program: I am trying to built a mail server just like Yahoo or Hotmail, but it's a mini one It's supposed to receive an email then save the sender address, title, message, and date to SQL server 2000 database. I am writing it in C# and ASP.NET. Once again, THANK YOU SO MUCH Sincerely, tinybunny_8 I was getting frustrated with not being able to find any examples of this so I wrote my own. It is socket based, uses SMTP and seems to work very nicely. If you are interested let me know and perhaps I can post enough of an example to get people going. Gord It would appear so at a cursory glance.... I'll email Randy about that.... You can't receive email with SMTP - as its only for sending email! You use POP3 instead - see I agree with gordonm, if you know how to receive email using SMTP and C#... would you please write one or show me where to start? I am working on project and need a clue so badly. Thanks tinybunny_8 I would love to see an example of using Sockets to receive email. Could you please do an article covering that area. 
TIA Gord You saved my day with this excellent example, thx Randy In your Write() method, you declare a byte array of size 1024. Is that the maximum string length that you can send to the Write() method at once? This thread is for discussions of Custom SMTP in C#.
http://www.developerfusion.com/article/4039/custom-smtp-in-c/
crawl-002
refinedweb
714
81.83
>>. Re:Python for Scientific use (Score:3, Interesting) Wake me when there is something even close to replace Simulink. Matlab is cool and all, but the real power of the program is Simulink.. Alternative to matplotlib (Score:1, Interesting) Re:In Defense of Matlab range(-5,5)] ) Plus, g.interact() will just drop you into the currently running gnuplot session, which can be very convenient. Re:Python for Scientific use (Score:2, Interesting) Re:In Defense of Matlab are configured to use the same number of processors (linear algebra is typically highly parallel, and in huge matrix operations, multi-thread overhead will be negligible). If you code your own matrix multiplication naively in C, you may end up with a factor of 6 or 7 in speed *below* that of Matlab. However if we're talking about generic loops for example, C is then much, much faster. Matlab has a Just-in-Time (JIT) optimizer which vectorizes straightforward loops; the same for Python is not ready yet (this would be the Unladen Swallow project from Google I think). Depending on the precise morphology of a loop, very different speeds would be obtained in Matlab, C or Python. The lesson here is to use numpy or scipy precompiled and pre-optimized code whenever this is possible. But when it's done right, there's generally much less difference between Matlab, Python and optimized C than many people think. Re:In Defense of Matlab figures exported to other format (PDF, EPS, bitmaps) are fine, on-screen Matlab figures are not anti-aliased and sometimes present quirks that are not really there. Matplotlib uses the Antigrain library for screen output, so the end result is much more pleasing to the eye. Speed: Using numpy, you benefit from the binary linear algebra subroutine (BLAS) speed, much as Matlab. Generic loops tends to be slower than Matlab because of Matlab's Just-in-Time (JIT) optimizer. Documentation: I'll give you this one hands-down: Matlab has *excellent* documentation, written by experts in the field. This is an often neglected area, but clean and profuse documentation and examples allows you to do more things, much quicker. Dev environment: very good in Matlab, but using any Python syntax-aware text editor + the IPython shell, I don't miss much when developing Python. Python is generally more consistent (e.g. you can define a multi-statement function interactively in the Python shell), which speeds up development. Also, Matlab is beginning to feel a namespace crunch..
http://books.slashdot.org/story/10/05/12/1343227/Matplotlib-For-Python-Developers/interesting-comments
CC-MAIN-2014-41
refinedweb
418
52.7
SDL_GL_GetCurrentContext not found at window initializationPosted Thursday, 29 May, 2014 - 21:01 by Metapyziks in I've been using OpenTK for a few years now without issue, but since updating to use the nuget version a few weeks ago I haven't been able to run any new projects using any recent builds of OpenTK. Each time I get an EntryPointNotFoundException at Sdl2GraphicsContext.cs line 317, "Unable to find an entry point named 'SDL_GL_GetCurrentContext' in DLL 'SDL2.dll'". I get this with the last few nuget builds (all the ones from this year I think, I'll have to verify this), the pre-built download from this site, and building manually from opentk/opentk.git (Debug and Release). However, the examples all seem to work in both the pre-built download and a manual build. Some other info that may be useful: OS: Windows 8.1 (64 bit) GPU: AMD Radeon HD 7850 .NET: 4.5 Here's a minimal snippet that produces the error for me: using OpenTK; namespace OpenTKTest { class Program : GameWindow { static void Main(string[] args) { using (var app = new Program()) { app.Run(); } } } } Literally if I just create a new project, add OpenTK as a reference, and add this code, I get the error. I've heard of one other person that has a similar issue (or at least an issue that meant they couldn't use the latest versions of OpenTK), but apart from that I can't find anything else online about this. I can't think of anything else I installed / updated that could have caused this outside of updating OpenTK, and yet this obviously isn't a widespread issue. Re: SDL_GL_GetCurrentContext not found at window ... Can you please post the debug output from a debug build of the library? It appears that OpenTK is using the SDL2 backend. It may be that there is a stray/old SDL2.dll in your dll search path that is somehow missing this method. You can force OpenTK to use the native win32 backend instead: Alternatively, if you wish to use the SDL2 backend you can copy SDL2.dll from opentk/Dependencies to your output directory. Re: SDL_GL_GetCurrentContext not found at window ... I hadn't realised that it wasn't always using SDL2, but forcing it to use the native backend has fixed the issue. I'll see if I can find the offending SDL2.dll that was causing the issue too. Here's the debug log anyway: Thank you for spotting the problem, and also for providing such a fantastic library! Re: SDL_GL_GetCurrentContext not found at window ... Unless you have a specific requirement, I'd recommend using the native backend. It's faster and better tested than the SDL2 backend, at least on Windows and Mac OS X. It may be that some application has installed a beta version of SDL2.dll into C:/Windows/System32 or somewhere else in your path. I'll add a version check to ensure we ignore versions prior to 2.0.0. Re: SDL_GL_GetCurrentContext not found at window ... A workaround to this issue was added in You should now be able to use OpenTK without forcing the native backend. Re: SDL_GL_GetCurrentContext not found at window ... That's great, thank you. Incidentally I did find the SDL2.dll, it was in the binary directory of the game Garry's Mod, which I must have included in $PATH$ at some point.
http://www.opentk.com/node/3678
CC-MAIN-2016-36
refinedweb
569
73.47
Mar 31, 2012 10:28 PM|gideonn|LINK I have a gridview on the left which is basically a list of members. Now on the right it should display all the data associated with the member. At first I'm supposed to select a member from the left gridview, then the gridview on the right should populate. I've been trying hard to get the value of the selected element from the left but I'm not getting the right method. Plus I need to display photo, Name , Age and all data in a different way, like an ID, so I'm getting second thoughts on whether GridView is the way to do it. And if not gridview, then what ? Apr 01, 2012 03:18 AM|usman400|LINK If the left side just has to display one record i.e. person name, then you should use List View control handling its selected item or selected index is straight forward Apr 01, 2012 03:35 AM|basheerkal|LINK Hi If your are particular to use a Gridview only on the left side, enable selection in Gridview and use Selected index Changed event or put button in all rows and use RowCommand to get the member from the Left Gridview. If you tell whether you are using BoundFileds or TemplateField to display member name I can show some sample code Apr 01, 2012 06:24 AM|gideonn|LINK I don't know why I can't bind the datasource to list view. void BindData() { SqlConnection con = new SqlConnection(connection); con.Open(); SqlDataAdapter da = new SqlDataAdapter("Select name FROM MInfo", con); DataSet ds = new DataSet(); da.Fill(ds, "MemNames"); if (ds.Tables[0].Rows.Count > 0) { listView1.DataSource = ds.Tables["MemNames"]; listView1.DataBind(); //memListGrid.DataBind(); } con.Close(); } Am I missing out something ? Apr 01, 2012 07:04 AM|basheerkal|LINK May I ask why do you use a ListView. For your Purpose ListBox is the apt control to use Ok. If you want to populate ListView the code shoul be like this ,, void BindData() { SqlConnection con = new SqlConnection(YourConnectionString"); con.Open(); SqlDataAdapter da = new SqlDataAdapter("Select name FROM MInfo", con); DataSet ds = new DataSet(); da.Fill(ds, "MemNames"); if (ds.Tables[0].Rows.Count > 0) { ListView1.DataSource = ds; ListView1.DataBind(); // memListGrid.DataBind(); } con.Close(); } No need to DataBind the grid here, Also don't forget to configure the ListView at design time by puting item templates in it. But If you want to select a member and use that value to change the gridview display only populating List view won't work.. You have to use a List Box. Populate it and Its SelectedIndexChanged event write code .. Apr 01, 2012 02:36 PM|basheerkal|LINK Try this way void FillListBox() { SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["YourString"].ConnectionString); con.Open(); SqlDataAdapter da = new SqlDataAdapter("Select field1 FROM YourTable", con); DataSet ds = new DataSet(); da.Fill(ds, "Field1"); if (ds.Tables[0].Rows.Count > 0) { ListBox1.DataSource = ds; ListBox1.DataValueField = "Field1"; ListBox1.DataTextField = "Field1"; ListBox1.DataBind(); } con.Close(); } And if youwant to update gridview display basedon that... use this sample code protected void ListBox1_SelectedIndexChanged(object sender, EventArgs e) { string membername = ListBox1.SelectedValue.ToString().Trim(); SqlDataSource1.SelectCommand = "SELECT [idField], [field1], [field2] FROM [tester] WHERE Field1= '" + membername + "'"; GridView2.DataBind(); } Apr 01, 2012 02:47 PM|gideonn|LINK Thanks for the effort but DataValueField, DataTextField and DataBind() are showing errors! 
Error 1 'System.Windows.Forms.ListBox' does not contain a definition for 'DataValueField' and no extension method 'DataValueField' accepting a first argument of type 'System.Windows.Forms.ListBox' could be found (are you missing a using directive or an assembly reference?) I have used these assemblies: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Data.SqlClient; Apr 01, 2012 04:51 PM|basheerkal|LINK These are the name spaces i used and no errors. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Data; using System.Data.SqlClient; using System.Configuration; Only last three are added by me. Others we get by default in VWD 2010 express Apr 01, 2012 05:58 PM|basheerkal|LINK Sorry I am not able to give u any explanation. Do it in Web fom (aspx page) Bye All-Star 32681 Points Apr 02, 2012 01:03 PM|superguppie|LINK Confusion: This forum is about ASP.NET Data Controls. The question seems to be about a Windows Forms Control. Why did MS have to use the same name for two totally different Controls? Better ask the question in a forum about those. 11 replies Last post Apr 02, 2012 01:03 PM by superguppie
http://forums.asp.net/t/1787742.aspx?Getting+the+value+of+the+selected+element+in+GridView
CC-MAIN-2015-18
refinedweb
795
60.82
DSM <dsm001 at users.sourceforge.net> added the comment: Hmm. I quickly wrote my own implementation and I agree with the uuid module and disagree with the RFC value. Further searching suggests that this may be an error in the RFC. See ; see also for a specific explanation of what probably caused the bug in the reference code. I can reproduce the RFC value by (IMO incorrectly) flipping the namespace endianness. (It may be worth noting, though, that one of the links above points to the python implementation for support-- so there could be a vicious circle here. :^) ---------- nosy: +dsm001 _______________________________________ Python tracker <report at bugs.python.org> <> _______________________________________
https://mail.python.org/pipermail/python-bugs-list/2009-March/072481.html
CC-MAIN-2016-36
refinedweb
109
75.1
django-donations 0.7.4 Reusable django app to receive & track donations on charitable sites Reusable django app to receive & track donations on charitable sites Documentation The full documentation is at. Quickstart Install Django Donations: pip install django-donations Add it to your INSTALLED_APPS: INSTALLED_APPS = ( ... 'donations.apps.DonationsConfig', ... ) Add Django Donations’s URL patterns: from donations import urls as donations_urls urlpatterns = [ ... url(r'^', include(donations_urls)), ... ] Just Giving Configuration The app needs to be configured with your JustGiving API settings: # Ability to point to Production or Sandbox URLs JUST_GIVING_WEB_URL = '' JUST_GIVING_API_URL = '' # Replace below with your personal details JUST_GIVING_CHARITY_ID = '123456' JUST_GIVING_APP_ID = 'changeme' # Add a list of all the currencies you need to support CURRENCIES = ['GBP'] TODO - Update the documentation and readme - integrate with readthedocs or pythonhosted or both! - tests - unit/integration - task to periodically verify pending donations (* dashboard - track/view donations from the business side - kpis etc * views/urls? - provide an api hook into the system (/donations - dashboard)) v2 and beyond - (other providers (paypal etc)) - tasks.py - recurring donation handling - this is not possible right now as SDI is not an API to be automated Supported Providers - Just Giving SDI Credits Tools used in rendering this package: History 0.7.4 (2017-11-06) - New setting DONATION_VERIFY_API_URL_NAME to configure the name of URL to reverse when verifying a donation. 0.7.3 (2017-10-30) - Bump version to fix tagging 0.7.2 (2017-10-30) - Revert URL conf for Django 2.0 as it’s causing issues 0.7.0 (2017-09-26) - Add on_delete=models.CASCADE on foreign keys in migrations - Migrate URL confs to Django 2.0 syntax - Use django-compat for importing reverse 0.6.2 (2017-06-09) - Add on_delete=models.CASCADE on foreign keys for Django 2.0 0.6.1 (2017-06-08) - Python 3: fix app name as bytes in migrations - Django 1.11 compatibility 0.6.0 (2017-01-30) 0.5.0 (2017-01-27) 0.4.0 (2017-01-27) - Fix bug with urllib import on Python 3 #4 - Remove dependency on django-autoconfig - Regenerate with cookie cutter for Django standalone app, resulting in: * Cleanup a few unused files * Remove the example project which isn’t kept up to date * Add a changelog * Switch testing to use tox * Switch from coveralls to codecov.io - Test views 0.3.0 (2016-10-20) - Drop support for Django 1.6 and 1.7 - Support Django 1.9 - Prepare Django 1.10 0.2.7 (2015-12-17) - Add the app config for Django 1.7+ 0.2.6 (2015-12-07) - Some Python 3 compatibilty fixes - Prepare for Django 1.9 compatibility 0.2.5 (2015-11-23) - Django 1.8 compatibility - Fix a few issues with Python 3 0.2.4 (2015-11-12) - Doc improvements - Django 1.7 compatibility 0.2.3 (2015-10-23) - Fix a crash with anonymous donor 0.2.2 (2015-10-22) - Mostly tests improvements 0.2.0 (2015-10-19) - Fix various unicode crashes - Fix that prevented the server from starting when config was being loaded before the tables were created. - Capture Donor name from JustGiving 0.1.3 (2016-10-16) - Fix a Unicode crash in models and providers - Revert erroneous change in setup.py 0.1.2 (2015-10-16) - Admin improvements - Installation fixes 0.1.1 (2015-10-13) - Fix packaging on PyPI - Docs improvements 0.0.2 (2015-10-12) - Squash South migrations - Autoconfig enhancements 0.0.1 (2015-10-12) - First release on PyPI. 
- Author: Andrew Miller - Keywords: django-donations - License: BSD - Categories - Development Status :: 5 - Production/Stable - Framework :: Django - Framework :: Django :: 1.10 - Framework :: Django :: 1.11 - Framework :: Django :: 1.8 - Framework :: Django :: 1.9 - Intended Audience :: Developers - License :: OSI Approved :: BSD License - Natural Language :: English - Programming Language :: Python :: 2 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 3 - Programming Language :: Python :: 3.3 - Programming Language :: Python :: 3.4 - Programming Language :: Python :: 3.5 - Programming Language :: Python :: 3.6 - Requires Distributions - Package Index Owner: nanorepublica - Package Index Maintainer: Bruno.Alla - DOAP record: django-donations-0.7.4.xml
https://pypi.python.org/pypi/django-donations/
CC-MAIN-2018-09
refinedweb
678
52.66
. Posted by Adrian Holovaty on October 31, 2006 Frederik De Bleser October 31, 2006 at 2:42 p.m. Fantastic! This is how tech books should be written. Matt Boersma October 31, 2006 at 2:50 p.m. The Django Book looks similar in concept and detail to the excellent "Version Control with Subversion" at. I'll try to make constructive comments as I read it. Django's great documentation is one of its best features, and this only improves things. Good work! SuperJared October 31, 2006 at 3 p.m. Fantastic news! Is there any way you'll be releasing the source of the app itself? Antonio Cangiano October 31, 2006 at 3:21 p.m. Good job guys. Jacob's inline comment system is simply wonderful. Alex Aguilar October 31, 2006 at 3:58 p.m. Awesome! Smooth comment system (Django can do ajax ;) and I like the free until its done concept. Keep up the spectacular work guys! Jacob October 31, 2006 at 6:51 p.m. Lorenzo: yes, the book is versioned (I use SVN for just about everything, actually). For now the repository is private (and may remain so). Matt: Good eyes - the Subversion book is one of my all-time favorite tech books, and one of the main inspirations behind wanting to put our book online. Mark October 31, 2006 at 7:27 p.m. I second SuperJared's question. It would be great to publish the app. I think it would be a very popular way to edit books or online documentation. sandro October 31, 2006 at 7:45 p.m. digg in!... Clifford October 31, 2006 at 8:05 p.m. Going to the book's web site with Firefox 1.5 on Linux freezes the browser until the "A script is running slowly dialog comes up." at which point, I'll elect "Stop Script". It's Google Analytics causing the problem. Eric Lake October 31, 2006 at 8:14 p.m. This is some of the best news I have read all day. Now back to reading. Douglas Jarquin October 31, 2006 at 8:21 p.m. Yes! Next milestone is 1.0 Cheng Zhang October 31, 2006 at 9:27 p.m. If we translate it into other languages, would you prefer that we host the translation copy somewhere, or somehow they all get collected on the main site? dp_wiz November 1, 2006 at 1:55 a.m. Will there be any interface for localization? radek November 1, 2006 at 4:34 a.m. What I am missing in tutorials and docs are pictures. Are you going to have pictures (for instance models diagrams) in the book? anonim November 1, 2006 at 6:15 a.m. Hi Nice! I have one question: will the book feature example on how to add ajax features to django websites? Thanks (already ordered at amazon) Peter Bailey November 1, 2006 at 7:02 a.m. Great idea Adrian. Thank you. Lorenzo Bolognini November 1, 2006 at 9:21 a.m. +1 for adding an Ajax with Django primer using YUI Chris McAvoy November 1, 2006 at 9:21 a.m. Great work guys, congratulations! Baczek November 1, 2006 at 1:23 p.m. Bobbo November 1, 2006 at 2:27 p.m. Great work Adrian. Thank you for a great framework - it's just great. And I'm really looking forward to see database migration and Ajax implemented into the framework. Bobbo Enquest November 2, 2006 at 3:18 a.m. This is super. I can't wait to study the code how its done to make comments like this... Although I already can imagine. IMHO: Only thing is that jquery is a better Ajax lib. It has the same philosophy as Django... Frankie Robertson November 2, 2006 at 3:50 a.m. Enquest: I prefer mochikit myself. I think it's a bit more readable and it can keep to its own namespace if you want. Nice and pythonic. I use YUI/YUI.ext for widgets. In what way does jQuery have the same philosophy as Django? 
Metin Amiroff November 2, 2006 at 6:06 a.m. Really great peace of information, helped me to get started with Django! Ari Flinkman November 2, 2006 at 11:57 a.m. Could we get the same comments system to djangoproject.com/documentation too? At places it's outdated and that just might help... Enquest November 3, 2006 at 7:26 a.m. Frankie Robertson just look at the following example, of jquery vs other ajax libs ... See how much code jquery is in the example.... See how much kb size jquery is. Tim Child November 4, 2006 at 6:39 p.m. Excellent news. Well done. bobby November 5, 2006 at 2:55 a.m. Rails what do you have to say about this? __SERF__ November 5, 2006 at 8:50 a.m. Congratulations!... and kudos to the authors and the publisher for not betraying and a$$ raping the concept of open source. Joel November 6, 2006 at 3:33 a.m. Come on guys, get the other chapters uploaded! Your teasing us aren't you Elake November 6, 2006 at 9:44 a.m. I keep hitting the reload but the new chapters aren't there. I'm just not patient enough I guess. Vugar November 6, 2006 at 10 a.m. It's not just you, Elake! ;-) Elake November 6, 2006 at 12:08 p.m. Chapter 3 is up on the site now. gonz November 6, 2006 at 9:51 p.m. I've noticed you guys will realease the book other languages besides english, which I think is pretty cool. So I was wondering, will you be needing help with this task? And if this is the case will you wait until the book is finished?. I'm willing to help as a spanish translator (in case you need one, of course) since I'd love to see this book available in my mother tongue as well. I'm sure others will feel the same way about this with their own languages. Thanks and keep up the excelent work! Chuky November 15, 2006 at 3:47 p.m. Please while planning on the release,pls dont release it only at Apress. Apress uses only paypal payment system for payment verification and its not every country that is available with paypal. Publishers like Packt Publishing, is more broad reaching and accepts the major credit cards without asking for paypal accounts.Also, I hope the book will be available in pdf e-book. This is cheaper and faster to receive after payment. Stefan November 16, 2006 at 1:54 p.m. To Chucky, I have purchased several books from Apress and never payed from a Paypal account. Are you saying they only accept credit card payments from certain countries? Chuky November 22, 2006 at 1:09 p.m. To Stefan, The last time I wanted to buy a book there, I was asked to verify payment and the payment verification they use is paypal. You can go check it now. Yes , you may ultimately pay with your credit card but they will ask for paypal account first Stefan November 28, 2006 at 7:33 a.m. Chuky, Must be different for different countries. I don't have to use any paypal information at all. akaihola December 8, 2006 at 3:56 p.m. If I think something is missing both from the documentation and the book, should I make a comment in both, or are you copying stuff between them? macdet May 25, 2007 at 1:59 p.m. great stuff. battery included! i am learing django. hope django will help against bullying :) is waiting for my first django app! elliot June 22, 2007 at 10:21 a.m. I would love to reuse your code for helping other open source projects write books. Are you still planning to release the inline commenting code? BILL October 2, 2007 at 2:40 p.m. 
BOOKPOOL.COM has the book on presale for 50% off any comments on Professional Python Frameworks: Web 2.0 Programming with Django and TurbogearsMoore, Dana; Raymond Budd; William Wright just published by wrox Bir2su October 6, 2007 at 12:08 p.m. its great book. if you love the pdf format. then download the book at bir2su.blogspot.com To prevent spam, comments are no longer allowed after sixty days. Lorenzo Bolognini October 31, 2006 at 2:42 p.m. Thanks, thanks, thanks! Just one question. Is the book versioned somewhere? Thanks, Lorenzo
http://www.djangoproject.com/weblog/2006/oct/31/book/
crawl-002
refinedweb
1,433
78.04
I've had all I can stand and I can't stand no more. It's been four months since I moved from a Windows to a Mac machine. I really like the macbook air. But Outlook 2011 for the Mac is driving me crazy. It keeps giving me incomprehensible error messages like "unknown error" or "end of file was reached". It mangles my meeting invites. It loses attachments. It makes me restart every time I change networks, otherwise it's "Not connected". It's terribly slow. It kept me from doing my tax return (okay, that's not Outlook's fault. I was just procrastinating). And for some inexplicable reason it was always 10-15 minutes behind my actual inbox – my mail in Outlook on a VMWare machine always gets mail faster than the Mac version.

I started looking into alternatives. Apple's Mail.app looks pretty good, but it's not customizable enough. Mozilla Thunderbird, which I use for personal mail, is very customizable. But due to the way things are set up with Exchange, it seems I would have to go through hoops to download mail when outside the Adobe network if I used it. I also looked to see if there were new versions of the Emacs mail readers I used to use, like Mew and Wanderlust. Unfortunately Wanderlust isn't maintained, and as far as I can tell Mew doesn't have full support for HTML mail, which a lot of people around the office use.

After a few days of pondering, I decided to go back to using Outlook on Windows, in a VM. I was already using it for doing some calendaring things that Outlook on Mac doesn't support. It's fast. It's written to work well with Exchange (well, as good as possible I suppose). Calendaring actually works – unlike with the Mac version. And it's much more feature-rich and customizable.

One thing I've gotten used to with Outlook on the Mac, though, are the Emacs key bindings. As an avid Emacs user for just about everything (unfortunately, as mentioned above, no longer for mail though…), this has been working great for me and makes me more productive. So I decided to see if I could customize Outlook on Windows with all the key bindings that I wanted, and succeeded! I found an open source tool called AutoHotKey that allows for setting up macros and shortcut keys. It works great so far. I probably still have a lot of tweaking to do, but thanks to the AutoHotKey documentation I'm up and running.

For anyone interested, below is the filtering code for Outlook windows and controls as well as the key bindings I'm using. In case you're wondering, a few of the mappings are from Wanderlust, an Emacs mail reader mentioned above that I used to use. Let me know if it's useful for you and/or if you have some better ideas for my mail and calendaring needs!
;; Sets up partial matching for window titles
SetTitleMatchMode 2

;; Use these keys when the main outlook window has focus
#IfWinActive ahk_class rctrl_renwnd32, NUIDocumentWindow
^n::Send {down}
^p::Send {up}
^f::Send {right}
^b::Send {left}
^g::Send {esc}
^s::Send ^!k ;; search this folder
^a::Send {home}
^e::Send {end}

w::
if ActiveControlIsOfClass("SUPERGRID") ;; if the inbox control has focus otherwise send the default
    Send ^M ;; write a new message
else
    Send w
return

o::
if ActiveControlIsOfClass("SUPERGRID")
    Send ^V ;; refile a message
else
    Send o
return

+f::Send !4 ;; follow-up later

f::
if ActiveControlIsOfClass("SUPERGRID")
    Send !6 ;; forward
else
    Send f
return

!1::Send ^1 ;; Mail pane ;; map command-1 to ctrl-1 because I'm used to command 1
!2::Send ^2 ;; Calendar pane

^d::
if ActiveControlIsOfClass("SUPERGRID")
    Send ^d
else
    Send {delete}
return
#IfWinActive

;; Use these keys if focus is on a new message
#IfWinActive ahk_class rctrl_renwnd32, Message
^n::Send {down}
^p::Send {up}
^f::Send {right}
^b::Send {left}
^a::Send {home}
^e::Send {end} ;; search
^d::Send {delete}
#IfWinActive
http://blogs.adobe.com/silverman/feed/atom/
CC-MAIN-2017-09
refinedweb
682
71.55
OpenGL:Tutorials:Tutorial Framework:First Polygon

Here's a quick primer for OpenGL: how to set up a perspective view and render your first polygon.

Setting up the screen

We use glViewport() to tell OpenGL how much of the current screen to use. Generally we want to use the whole screen, so we use something like:

glViewport(0,0,800,600);

Next we need to specify the perspective projection.

glMatrixMode(GL_PROJECTION);
glLoadIdentity();

If you do not have the GLU Library: You can specify a perspective projection by directly using the glFrustum function:

GLfloat view_height = 600.f / 800.f;
glFrustum(.5f, -.5f, -.5f * view_height, .5f * view_height, 1.f, 500.f);

Or, if you do not have the GLU Library, you can mimic the gluPerspective function altogether:

#include <cmath> // header for math calculations

GLfloat view_height = 600; // perhaps pulled from OS screen resolution
GLfloat view_width = 800;  // these would indicate aspect

void gluPerspective(GLfloat fovy, GLfloat aspect, GLfloat zmin, GLfloat zmax)
{
    GLfloat xmin, xmax, ymin, ymax;
    ymax = zmin * tan(fovy * M_PI / 360.0);
    ymin = -ymax;
    xmin = ymin * aspect;
    xmax = ymax * aspect;
    glFrustum(xmin, xmax, ymin, ymax, zmin, zmax);
}

gluPerspective(45.0f, GLfloat(view_width/view_height), 1.0f, 500.0f);

If you do have the GLU Library you will only need this:

gluPerspective(45.0f, 800.0f / 600.0f, 1.0f, 500.0f);

In any event, we have selected the projection matrix, and reset it. The call to gluPerspective() has 4 parameters:

- The field of view angle, here set to 45 degrees.
- The aspect ratio of the view; this will normally be ScreenResX / ScreenResY.
- The near and far clipping planes. Both these values should be positive.

A frustum is kind of like a trapezoid shape that indicates what falls within the view of the camera. We are specifying the same frustum here as the one in the above gluPerspective function. The first four parameters indicate how pointy the trapezoid is. They give the left, right, bottom and top of the rectangle that you would get if you chopped off the top of a pyramid at the near clipping plane (5th parameter to the function). Here the width of the pyramid is 1 at that point, and the height is 3/4 - based on the aspect ratio of the screen (here 800x600). The last two parameters give the near and far clip planes, just like in gluPerspective.

Now the screen is set up, we select the modelview matrix which we'll manipulate to display our renderings.

glMatrixMode(GL_MODELVIEW);

Drawing our polygon

Before we do anything, we'll clear away the screen and depth buffer:

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

You can change the color that the screen is cleared to (defaults to black) with glClearColor. This will give a pretty pink background (the arguments are red, green, blue, alpha; use 1 for alpha if you don't know what it does yet):

glClearColor(1.f, 0.f, 1.f, 1.f);

Now we can reset the modelview matrix:

glLoadIdentity(); // Reset current matrix (Modelview)

We call glTranslatef() to move the view 'out' a little, away from our soon to be drawn polygon. Then we apply a rotation using glRotatef():

glTranslatef(0.0f,0.0f,-5.0f);
glRotatef(Rotate,0.0f,0.0f,1.0f);

The parameters to glTranslatef() are pretty straightforward. They are an x, y and z offset from the current position. The parameters to glRotatef() are a rotation value and an axis vector.
Finally, we can draw our polygon:

glBegin(GL_TRIANGLES);
glVertex3f( 0.0f, 1.5f,0.0f);
glVertex3f(-1.0f,-1.0f,0.0f);
glVertex3f( 1.0f,-1.0f,0.0f);
glEnd();

glBegin(GL_TRIANGLES) treats each set of three vertices specified by glVertex() as a separate triangle. There are many modes for glBegin() to draw points, lines, triangle strips or fans, quads and irregular polygons. The parameters of glVertex() are simply x, y and z coordinates specifying a point in our 3D world.

Last of all, we increase the rotation angle for the next frame and swap the buffers so that we can see our work.

Rotate+=0.05f;
FlipBuffers();

When you run this example, you should see a rotating white triangle on a black background. If you can stand the excitement, stay tuned and we'll show you how to add colour!

Source Code

The source to Render.cpp; compile this demo using the OpenGL Tutorial Framework.

#include "Framework.h"

void Render(void)
{
    // Projection setup from the "Setting up the screen" section
    glViewport(0,0,800,600);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0f, 800.0f / 600.0f, 1.0f, 500.0f);
    glMatrixMode(GL_MODELVIEW);

    GLfloat Rotate = 0.0f; // current rotation angle

    // This loop will run until Esc is pressed
    while(RunLevel)
    {
        if(Keys[VK_ESCAPE]) // Esc Key
            RunLevel=0;

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glLoadIdentity(); // Reset current matrix (Modelview)
        glTranslatef(0.0f,0.0f,-5.0f);
        glRotatef(Rotate,0.0f,0.0f,1.0f);

        glBegin(GL_TRIANGLES);
        glVertex3f( 0.0f, 1.5f,0.0f);
        glVertex3f(-1.0f,-1.0f,0.0f);
        glVertex3f( 1.0f,-1.0f,0.0f);
        glEnd();

        Rotate+=0.05f;
        FlipBuffers();
    }
}
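As a small preview of the colour instalment teased above, per-vertex colour in this style of OpenGL is one glColor3f() call before each vertex (the RGB values here are arbitrary); with the default smooth shading the triangle is blended between them:

glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f); // red
glVertex3f( 0.0f, 1.5f,0.0f);
glColor3f(0.0f, 1.0f, 0.0f); // green
glVertex3f(-1.0f,-1.0f,0.0f);
glColor3f(0.0f, 0.0f, 1.0f); // blue
glVertex3f( 1.0f,-1.0f,0.0f);
glEnd();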
http://content.gpwiki.org/index.php/OpenGL:Tutorials:Tutorial_Framework:First_Polygon
CC-MAIN-2014-52
refinedweb
795
63.9
#include <essence.h>

Inheritance diagram for BodyWriter.

Public Types:
- Smart pointer to a StreamInfo.
- Type for the list of streams to write. The list is kept in the order that the BodySIDs are added.

Construction:
- Prevent NULL construction.
- Prevent copy construction.
- Construct a body writer for a specified file.

Public Member Functions:
- Clear any stream details ready to call AddStream(). This allows previously used streams to be removed before a call to WriteBody() or WriteNext().
- Add a stream to the list of those to write.
- Set the KLV Alignment Grid.
- Get the KLV Alignment Grid.
- Set the flag stating whether BER lengths should be forced to 4-byte (where possible).
- Get the flag stating whether BER lengths should be forced to 4-byte (where possible).
- Set what sort of data may share with header metadata.
- Set the template partition pack to use when partition packs are required. The byte counts and SIDs will be updated as required before writing. FooterPosition will not be updated, so it must either be 0 or the correct value. Any associated metadata will be written for the header, and also whenever the handler (called just before the write) requests it.
- Get a pointer to the current template partition pack.
- Write the file header. No essence will be written, but CBR index tables will be written if required. The partition will not be "ended" if only the header partition is written, meaning that essence will be added by the next call to WritePartition().
- End the current partition. Once "ended" no more essence will be added, even if otherwise valid. A new partition will be started by the next call to WritePartition().
- Write stream data.
- Write the next partition, or continue the current one (if not complete). Will stop at the point where the next partition will start, or (if Duration > 0) at the earliest opportunity after (at least) Duration edit units have been written.
- Determine if all body partitions have been written. Will be false until after the last required WritePartition() call.
- Write the file footer. No essence will be written, but index tables will be written if required.
- Set a handler to be called before writing a partition pack within the body.
- Set the minimum size of the non-essence part of the next partition. This will cause a filler KLV to be added (if required) after the partition pack, any header metadata and index table segments, in order to reach the specified size. This is useful for reserving space for future metadata updates. This value is read after calling the partition handlers, so this function may safely be used in the handlers.
- Set the minimum size of filler between the non-essence part of the next partition and any following essence. If non-zero, this will cause a filler KLV to be added after the partition pack, any header metadata and index table segments, of at least the size specified. This is useful for reserving space for future metadata updates. This value is read after calling the partition handlers, so this function may safely be used in the handlers.

Protected Member Functions:
- Initialize all required index managers.
- Move to the next active stream (will also advance State as required).
- Write a complete partition's worth of essence. Will stop if: frame or "other" wrapping and the "StopAfter" count reaches zero or "Duration" reaches zero (exit this iteration, no further checks required); clip wrapping and the entire clip is wrapped (exit at the earliest valid time, for example the next edit point).

Protected Attributes:
- The state for this writer.
- Destination file.
- List of streams to write.
- KLV Alignment Grid to use.
- Flag set if BER lengths should be forced to 4-byte (where possible).
- The body partition handler.
- The minimum size of the non-essence part of the next partition.
- The minimum size of filler before the essence part of the next partition.
- If true, index tables may exist in the same partition as metadata.
- If true, essence may exist in the same partition as metadata.
- The current BodySID, or 0 if not known (will move back to the start of the list).
- Flag set when the current partition is done and must not be continued - any new data must start a new partition.
- Iterator for the current (or previous) stream data. Only valid if CurrentBodySID != 0.
- Flag set when a partition pack is ready to be written.
- Is the pending metadata a header?
- Is the pending metadata a footer?
- Is the next partition write going to have metadata?
- Pointer to a chunk of index table data for the pending partition, or NULL if none is required.
- BodySID of the essence or index data already written or pending for this partition. This is used to determine if particular essence can be added to this partition. Set to zero if none yet written.
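To make the call sequence above concrete, here is a rough usage sketch pieced together from the member descriptions. The type and method names (BodyWriterPtr, AddStream, SetKAG, SetForceBER4, WriteHeader, WritePartition, BodyDone, WriteFooter) are inferred from this page rather than copied from the real headers, so treat this as an illustration of the intended order of calls, not verbatim mxflib API.

// Hypothetical sketch of driving a BodyWriter: configure, write the header,
// write body partitions until done, then write the footer.
// Names and signatures are assumptions based on the descriptions above.
#include <essence.h>

using namespace mxflib;

void WriteMXFBody(MXFFilePtr File, BodyStreamPtr Stream)
{
    // Construct a body writer for the specified destination file
    BodyWriterPtr Writer = new BodyWriter(File);

    // Configure before writing any partitions
    Writer->SetKAG(256);           // KLV Alignment Grid (assumed setter name)
    Writer->SetForceBER4(true);    // force 4-byte BER lengths where possible

    // Add each stream in the order its BodySID should appear
    Writer->AddStream(Stream);

    // Write the file header; CBR index tables are emitted here if required
    Writer->WriteHeader(false, false);

    // Write body partitions until all essence has been consumed
    while (!Writer->BodyDone())
        Writer->WritePartition();

    // Write the footer, including any remaining index tables
    Writer->WriteFooter();
}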
http://freemxf.org/mxflib-docs/mxflib-1.0.0-docs/classmxflib_1_1_body_writer.html
CC-MAIN-2018-05
refinedweb
796
74.9
The cursor in my text editor was lagging. That's unusual given my 8-core machine with 32GB of RAM. While tracking down that issue, I discovered that my escape game was consuming 20-30% of the CPU while idling. That's bad! It turns out it was invisible elements being rotated via CSS. It's a bit of a pain: we need to remove all those elements which fade away, otherwise they pile up and create load. Here I'll show you my solution using React — the top layers of my game are in React, that's why I used it. I'm not suggesting you use React to solve this problem. But if you have animated HTML elements, get rid of them if they aren't visible.

The Problem

While loading scenes, I display an indicator in the top-right corner of the screen. This fades in when loading starts and fades out when loading is done. I wanted to avoid an abrupt transition. I handled this with CSS classes to hide and show the element. My React code looks like this:

<SVGElement url={url} className={RB.class_name("load-marker", className, is_loading && 'loading')} />

SVGElement is my component to load SVG files and display them inline. An img tag will behave the same way for this setup. The key is the is_loading && 'loading' part of the className attribute. This adds the loading class name to the element while it's loading. When loading is finished, I remove the class name. This is the CSS (SCSS):

.load-marker {
	&:not(.loading) {
		animation-name: fade-out;
		animation-fill-mode: forwards;
		animation-duration: 0.5s;
		animation-timing-function: ease-in-out;
	}

	&.loading {
		animation-fill-mode: forwards;
		animation-duration: 0.5s;
		animation-timing-function: ease-in-out;
		animation-name: fade-in;
	}

	@keyframes fade-out {
		from {
			opacity: 1;
			visibility: visible;
		}
		to {
			opacity: 0;
			visibility: collapse;
		}
	}

	@keyframes fade-in {
		from {
			opacity: 0;
			visibility: collapse;
		}
		to {
			opacity: 1;
			visibility: visible;
		}
	}
}

I have an urge to digress into a rant about CSS's animation system! I've written animation and layout systems before, and argh, this is acid thrown in my eyes. Indeed, those systems had clear support for add and remove animations, making this whole setup trivial. But this is CSS, and, alas…

When an item loses the .loading class it will transition to a transparent state. The problem however came from some other CSS:

.loader {
	svg {
		animation: rotation 6s infinite linear;
		overflow: visible;
		position: absolute;
		top: 20px;
		right: 20px;
		width: 70px;
		height: 70px;
	}

	@keyframes rotation {
		from {
			transform: rotate(0deg);
		}
		to {
			transform: rotate(360deg);
		}
	}
}

That infinite bit is the problem. It's irrelevant that we've faded the opacity to 0: the animation is still running! Firefox still does a style and layout update each frame. Why it ends up consuming so much CPU, I have no idea. Chrome also consumed CPU, but only around 10%. Note, 10% is still ridiculous for a static screen. I could also "solve" the problem by not spinning the item unless something is loading. But this creates a rough transition where the icon abruptly stops rotating while fading away. Not good.

The Solution

I have two animated indicators: the loader, and a disconnected icon for when you lose the WebSocket connection to the server. I abstracted a common base component to handle them the same.
This is how I use it, for the loader:

export function Loader({ is_loading }) {
  return <HideLoader url={theme.marker_loading} is_loading={is_loading} />
}

This is the implementation:

function HideLoaderImpl({ is_loading, url, className }) {
  const [ timer_id, set_timer_id ] = React.useState(0)

  React.useEffect(() => {
    if( !is_loading && !timer_id ) {
      const css_duration = 1000
      const new_timer_id = setTimeout( () => set_timer_id(0), css_duration )
      set_timer_id(new_timer_id)
    }
  }, [is_loading]) // only trigger on an is_loading change

  const visible = is_loading || timer_id
  if(!visible) {
    return null
  }

  return (
    <SVGElement url={url} className={RB.class_name("load-marker", className, is_loading && 'loading')} />
  )
}

const HideLoader = React.memo(HideLoaderImpl)

At first glance, it's not obvious how this achieves a delayed removal of the element. The HTML generation is clear: when visible is false, display nothing. When true, display the element as before, with the same logic as before for setting the loading class name.

If is_loading is true, then visible will be true. This is the simple case. But there is the other true condition, when we have a timer_id. The setTimeout callback does nothing but clear the timer_id when it's done. At first I suspected I'd have to track another variable, setting it at the start and end of the timeout. It turns out that all I need to know is whether there is a timeout at all. So long as I have a timer, I know that I shouldn't remove the element.

The condition list to React.useEffect is important here. I provide only is_loading — I only wish for the effect to run if the value of is_loading has changed. Some style guides will insist that I include timer_id (and set_timer_id) as well in the list. That approach defines the second argument to useEffect as a dependency list, but this is incorrect. It's actually a list of values, which, if changed, will trigger the effect to run again. The React documents are clear about this. Yet they also say it's a dependency list, and recommend a lint plugin that would complain about my code. That recommendation makes sense for useCallback and useMemo, but not for useEffect.

Adding timer_id to the list would be wrong. When the timer finishes, it sets the timer_id to 0. That change would cause the effect to trigger again. This is a case where we do "depend" on the timer_id value, but we shouldn't re-execute when it changes, as that would end up creating a new timer.

In any case, this simple code now does what I want. It defers the DOM removal of the element until after the end of the animation. Well, it defers it one second, which is long enough to cover the 0.5s CSS animation. It's complicated to keep these times in sync — more fist shaking at the CSS animation system!

If you've got an eye for defects, there is one there. The loader icon can be removed too early: when is_loading becomes true, then false, then within one second becomes true and false again. I don't create a new timer if one already exists, so the deferral time will still be from the first timer. In practice, this will not likely happen, and the impact is minimal. The fix is to cancel an existing timeout and always create a new one (a sketch of this fix follows at the end of the post).

My lagging cursor

I never got an obvious answer why my cursor was lagging. There were all sorts of applications, idle applications, consuming 5-10% CPU. It's perhaps a real cost of high-level languages. More on that another day. I still hope that future apps will strive for less energy use. For now, remove all those invisible animated HTML elements.
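Returning to the defect noted above, here is a hedged sketch of the cancel-and-recreate fix. The names match the post's component; treat it as one possible patch rather than the author's final code.

// Patched effect: cancel any stale timer and always start a fresh one
// when loading finishes, so the deferral is measured from the latest
// load cycle rather than the first.
React.useEffect(() => {
  if (!is_loading) {
    if (timer_id) {
      clearTimeout(timer_id) // drop the old deferral
    }
    const css_duration = 1000
    const new_timer_id = setTimeout(() => set_timer_id(0), css_duration)
    set_timer_id(new_timer_id)
  }
}, [is_loading]) // still keyed only to is_loading changes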
https://practicaldev-herokuapp-com.global.ssl.fastly.net/mortoray/highly-inefficient-invisible-animations-css-firefox-chrome-react-1g86
CC-MAIN-2022-33
refinedweb
1,151
66.64
It's no secret that the tech world is dominated by a relatively small pool of programming languages. While exact figures are difficult to obtain (and no doubt vary over time), you could probably name a handful of languages which comprise the vast majority of all programming output over a given period of time. Two interesting sites I visited while researching this article let you visualize programming languages by popularity. IEEE Spectrum lets you interactively adjust the weightings of various metrics, whereas PYPL serves up a neat table giving actual % share figures based on Google Trends data over the last 12 months. Now, I refuse to be drawn into any debates over exactly what the best metric of programming language popularity may or may not be (and whether or not that's an important statistic anyhow). What follows is just a hasty analysis to illustrate a point (any excuse really to use R-Fiddle!) Using the data from PYPL, we can see a couple of clear trends:

1) — The top 10 languages account for almost a 90% share of Google Trends data

In order, these are: Java, Python, PHP, C#, JavaScript, C, C++, Objective-C, R and Swift. Combined, they have a Google Trends share of 87.1%.

2) — Language popularity follows a power-law distribution

Using my favorite R package, 'igraph', for its trusty power.law.fit() function, I found that the popularity of programming languages follows a power-law distribution:

> pL = power.law.fit(shares)
> pL$KS.p
$KS.p
[1] 0.9873141

That $KS.p value of 0.987 is the p-value associated with a Kolmogorov-Smirnov test statistic, which tells us that we can be pretty damn sure the distribution of popularity (as defined by PYPL) does follow a power-law distribution. Like many other phenomena, the relative popularity of programming languages is unevenly distributed. This can usually be explained by a positive feedback (or 'snowball-effect') mechanism — a simplistic version might go that the more popular a language is, the more jobs are available in it, so more people are incentivised to learn it, thus increasing its popularity.

So what's new?

It's not really a surprise that some programming languages are waaaay more popular than others. Everyone already knows that Java, C, C++, C#, Python et al. are by far the most used languages. What's more interesting, in my opinion, is the observation that for every behemoth programming language, there must be dozens of smaller, more niche languages out there in the wilderness. Sheer curiosity aside, there are good reasons to be interested by this. Anyone who's dabbled in more than a handful of programming languages knows that different languages suit different purposes. JavaScript is for web development, PHP for server-side programming, R for statistics, Matlab for full-on mathematics. With programming languages, variety is a good thing. Out there, there might just be a language ideally suited for solving that problem you didn't even know existed. But where to find them? One place to look is Rosetta Code.

A Programming Safari

I can't remember exactly how I came across Rosetta Code, but once I found it, I was hooked. It describes itself as a programming chrestomathy site, and features an impressive 647 programming languages in its archives. Go and check it out. The truly awesome thing is that Rosetta Code goes beyond just giving a generic "Hello World!" example for each language.
No, instead, it has a collection of over 800 assorted programming tasks, from some as simple as "Odd or Even", through to more advanced problems, such as maze-solving and web-scraping. Each task page describes the problem to be solved, then gives solutions in a range of programming languages. Here, reputation doesn't matter. As well as C, C++, Java etc., you will find solutions in languages you've never heard of before. Some are retro, some are modern; some look familiar, while others are esoteric beyond belief. You could spend longer than you'd care to admit browsing through all the examples — but to help you get started, I've compiled a list of some of the lesser-known and/or more interesting languages which caught my attention. Activate nerd-mode, and dive in!

Blast From The Past

Some languages last forever, or so it seems. As well as C, languages descended from Lisp and Fortran have been around for decades, and others such as BASIC and Pascal may have fallen out of fashion but live on in popular memory. Time has been less kind to others, though. Here's a list of some languages with code samples on Rosetta which, to put it one way, are unlikely to get you hired any time soon.

EDSAC Order Code

EDSAC is a famous early computer, designed and built by Maurice Wilkes' team at the University of Cambridge in the late 1940's. The construction of EDSAC saw David Wheeler earn the first ever PhD in Computer Science in 1951. Whilst he was at it, he also invented the 'Wheeler jump', or closed subroutine — which we commonly refer to today as 'functions'. Despite its unshakeable place in history, EDSAC has been out of action since 1958, so don't rush to learn its custom programming language. Here's an example from Rosetta Code. It's the 'empty program', or the shortest legitimate program. It doesn't do very much at all.

T64K [ set load point ]
GK   [ set base address ]
ZF   [ stop ]
EZPF [ begin at load point ]

GEORGE

This language, invented in 1957, is one that would have been entered via punched tape into a machine the size of a room. Nevertheless, it was full of features, including loops, conditional statements, subroutines, and matrix data structures. It even reads a little like a more modern language. Here's how it would have been used to calculate the sum of a series:

0 (s)
1, 1000 rep (i)
   s 1 i dup × / + (s) ;
]
P

BCPL

'Basic Combined Programming Language', or BCPL, is worthy of its place in computing history. As well as apparently giving rise to the tradition of "Hello World!", BCPL had a profound influence on the design of B, which was itself the forerunner of C. BCPL was the first language to introduce braces "{" as a way of defining blocks of code — a convention still used in many of today's most prominent languages. Decent, as legacies go. Here's a "Hello World" program written in BCPL:

GET "libhdr"

LET start() = VALOF
{
    writef("Hello world!")
    RESULTIS 0
}

PL/I

Developed by IBM in the early 1960's, PL/I (Programming Language One) was widely used in its heyday, but never quite displaced its competitors Fortran and COBOL. PL/I was primarily a mainframe language, and with the advent of the PC and the rising popularity of languages such as C++ and Java, PL/I slipped out of favor. There are many examples of PL/I on Rosetta Code; here it is generating a Fibonacci sequence:

/* Form the n-th Fibonacci number, n > 1. */
get list(n);
f1 = 0; f2 = 1;
do i = 2 to n;
   f3 = f1 + f2;
   put skip edit('fibo(',i,')=',f3)(a,f(5),a,f(5));
   f1 = f2; f2 = f3;
end;

SNOBOL4

SNOBOL was developed in the early 1960's and became a popular teaching language in the following decade. However, it ran out of steam throughout the 1980's and 90's, but not before it was able to influence the design of Lua, which makes a top-20 appearance in the PYPL rankings we saw earlier. Here's a SNOBOL4 program that outputs the length of a string:

        output = "Byte length: " size(trim(input))
end

FOCAL

FOCAL ('Formulating On-Line Calculations in Algebraic Language', since you asked) was introduced in 1968, and was an efficient language that could run on very memory-limited systems. One particular quirk of the language was its apparent phobia of strings. Inputting the string "HELLO" would be interpreted as asking the computer to calculate 8 ^ "LLO", which FOCAL struggled to work out before spitting out a massive numerical answer. Despite its eccentricities, FOCAL was used widely enough during the 70's and 80's. Coca-Cola even used their own version, which they imaginatively called COKE. This example from Rosetta Code shows a FOCAL program that converts temperatures between different units:

01.10 ASK "TEMPERATURE IN KELVIN", K
01.20 TYPE "K ", %6.02, K, !
01.30 TYPE "C ", %6.02, K - 273.15, !
01.40 TYPE "F ", %6.02, K * 1.8 - 459.67, !
01.50 TYPE "R ", %6.02, K * 1.8, !

SETL

SETL was invented in the late 1960's and was based heavily on set theory, the branch of mathematics that concerns collections of objects. The most recent stable release was back in 2005, but despite its decline from use, SETL has a couple of claims to fame. The first compiler of Ada, which was developed by the US Department of Defense, was written in SETL. Also, it is said to have influenced ABC — the language which went on to inspire the design of Python. Here's how SETL calculates the greatest common divisor of two integers. See any resemblance to Python?

proc gcd (u, v);
    return if v = 0 then abs u else gcd (v, u mod v) end;
end;

MUMPS

This unfortunately named language has been around since 1966, and is also referred to as M. A key feature is the built-in database system, which allows for super-efficient access to stored data. Although no longer in common use, MUMPS lives on in the form of GT.M and InterSystems_Cache — which have a niche in hospitals and financial database systems. The European Space Agency has also used InterSystems_Cache for its recent Gaia mission. This is how MUMPS can be used to reverse a string:

REVERSE ;Take in a string and reverse it using the built in function $REVERSE
        NEW S
        READ:30 "Enter a string: ",S
        WRITE !,$REVERSE(S)
        QUIT

Deliberately Confusing

What are the hallmarks of a successful programming language? Speed? Versatility? Readability? Nah, forget all that — let's look at a branch of programming languages out there that are intentionally difficult and/or unintuitive to use. Esoteric languages, or 'esolangs', are programming languages used sometimes for experimentation, sometimes for a challenge, and sometimes just as the ultimate nerdy in-joke. If you don't quite get it, that's ok — in fact, that's usually the point. Better-known examples include Brainf***, Befunge and the particularly migraine-inducing Malbolge. Here's a list of a few others, ranging from the amusing to the interesting, to the outright obtuse. Include these on your CV/Resume at your own risk.

INTERCAL

The original esoteric programming language was invented in 1972, making it as old as C.
It was introduced as a parody of programming practices prevalent at the time, but its continued survival to this day suggests it is still as relevant as ever. On top of an obtuse syntax, INTERCAL confuses its users even further by requiring them to use the keyword PLEASE every so often, else the program refuses to run. However, being overly polite backfires — saying 'please' too frequently will also result in an error. Of course, this eccentricity was not officially documented, because that'd just be too helpful. Here's an infinite loop, written in INTERCAL:

NOTE THIS IS INTERCAL
PLEASE ,1 <- #5
DO ,1 SUB #1 <- #54
DO ,1 SUB #2 <- #192
DO ,1 SUB #3 <- #136
PLEASE ,1 SUB #4 <- #208
DO ,1 SUB #5 <- #98
DO COME FROM (1)
DO READ OUT ,1
(2) DO ,1 SUB #1 <- #134
(1) PLEASE ABSTAIN FROM (2)

Beeswax

This is a conceptually interesting language, which takes the movement of bees around honeycomb as inspiration for the movement of pointers across instructions. Beeswax is capable of arithmetic, reading/writing files, and even modifying its own source code. Below is a program that calculates n-factorial (n!) of a user-input integer.

p <_1FT"pF>M"p~.~d >Pd >~{;

Chef

This is perhaps my favorite of the languages I found on Rosetta Code. I'd previously read about it elsewhere, but hadn't seen anything like the range of examples provided here. Unlike most programming languages, Chef reads almost completely naturally, as each program is formatted much like a recipe (hence the name!). For completeness, it also refers to variables, instructions and data structures with cooking-related names, such as "mixing bowl", "refrigerator", "mix", "chop" etc. Why not? Here's a sample program that calculates the sum and product of an array of numbers.

Sum and Product of Numbers as a Piece of Cake.

This recipe sums N given numbers.

Ingredients.
1 N
0 sum
1 product
1 number

Method.
Put sum into 1st mixing bowl.
Put product into 2nd mixing bowl.
Take N from refrigerator.
Chop N.
Take number from refrigerator.
Add number into 1st mixing bowl.
Combine number into 2nd mixing bowl.
Chop N until choped.
Pour contents of 2nd mixing bowl into the baking dish.
Pour contents of 1st mixing bowl into the baking dish.

Serves 1.

Golfscript

Familiar to fans of code golf (a fantastically geeky hobby in which participants try to solve programming puzzles in as few bytes of code as possible), Golfscript is a language designed to do a lot with a little. It certainly achieves this goal, and allows its users to solve complex puzzles very concisely. Its website tells us this brevity is attained through 'using single symbols to represent high level operations'. Would you use it in a production setting? Maybe, if you were a seasoned code golfer and had no regard for the sanity of any successor to your project. Otherwise… probably not. Rosetta Code has several nice examples of Golfscript, and since it manages to be so damn concise, I've included three of them here:

[2 4 3 1 2]$ #Sort an integer array

[296,{3/)}%-1%["No more"]+[" bottles":b]294*[b-1<]2*+[b]+[" of beer on the wall\n".8<"\nTake one down, pass it around\n"+1$n+]99*]zip #99 Green Bottles Lyrics

[{"close"}100*]:d;10,{)2?(.d<\["open"]\)d>++:d;}/[100,{)"door "\+" is"+}%d]zip{" "*puts}/ #100 Doors Challenge

Hoon

Hoon is fascinating in that, although some would class it as an esolang, it does actually serve a practical purpose. It can be used to program web services on Urbit, which describes itself as a 'secure peer-to-peer network of personal servers'. Take a look at the 'greatest element' example below.
Hoon is described as Lisp-y, and note the two-character symbols at the start of each line. These 'runes' are used in place of reserved keywords, which is a great concept, but does impact readability for those unfamiliar with its logic, and probably qualifies Hoon as somewhat esoteric.

:-  %say
|=  [^ [a=(list ,@) ~] ~]
:-  %noun
(snag 0 (sort a gte))

> +max [1 2 3 ~]
3

Piet

By far the most unique language I came across was Piet, named after the 20th Century Dutch artist, Piet Mondrian. It follows one highly unusual design principle: Program code should be in the form of abstract art. How is this achieved? The solution is nothing short of genius. Integers are represented by the number of 'codels' in a contiguous block of color. The pointer starts in the top-left corner, and moves around the image. Every time it encounters a color change, an instruction is executed. The exact instruction is specified by the changes in both hue and brightness. Mind = Blown.

Playing With Arrays

One thing that caught my attention was the number of array-based languages there are out there. Array-based programming has been around since the early 1960's, with the invention of APL, and although they're not exactly mainstream, there are plenty of offshoots still used to various extents today. These languages all have a lot in common, so I'll spare you too much detail, but they're interesting in just how concise they can be.

J

J was invented by Kenneth Iverson, who was also the inventor of APL. J is a very terse language, letting you get a lot done with very few lines of code. Below is a K-means clustering algorithm. For comparison, the same example in C runs to 184 lines.

NB. Selection of initial centroids, per K-means++
initialCentroids =: (] , randomCentroid)^:(<:@:]`(,:@:seedCentroid@:[))~
seedCentroid =: {~ ?@#
randomCentroid =: [ {~ [: wghtProb [: <./ distance/~
distance =: +/&.:*:@:-"1

NB. Extra credit #3 (N-dimensional is the same as 2-dimensional in J)
wghtProb =: 1&$: : ((%{:)@:(+/\)@:] I. [ ?@$ 0:)"0 1  NB. Due to Roger Hui

NB. Having selected the initial centroids, the standard K-means algo follows
centroids =: ([ mean/.~ closestCentroid)^:(]`_:`initialCentroids)
closestCentroid =: [: (i.<./)"1 distance/
mean =: +/ % #

K, q

These two languages were both developed commercially by Kx Systems. Both are APL-like, array-based languages that have applications in finance and big data. q is wrapped around K, and provides enhanced readability. I've included a couple of examples of each below. These are super-concise languages, and would no doubt be good for a round of code golf, if that's what you're into.

/ 1-D Cellular automata in K
f:{2=+/(0,x,0)@(!#x)+/:!3}

/ Anagrams in K
{x@&a=|/a:#:'x}{x g@&1<#:'g:={x@<x}'x}0::`unixdict.txt

/ Pascal's Triangle in q
pascal:{(x-1){0+':x,0}\1}

/ 100 Doors Challenge in q
`closed`open (1+til 100) in `int$xexp[;2] 1+til 10

Klong

Klong is similar to K and q, but its website claims it is less ambiguous. Judge for yourself — below is a "Middle Three Digits" solution written in Klong.

items::[123 12345 1234567 987654321 10001 -10001 -123 -100 100 -12345 1 2 -1 -10 2002 -2002 0]
mid3::{[d k];:[3>k::#$#x;"small":|0=k!2;"even";(-d)_(d::_(k%2)-1)_$#x]}
.p(mid3'items)

IDL

One more array-based language for you. IDL (Interactive Data Language), around since 1977, has been used by organizations including NASA and ESA. In fact, IDL found itself something of a niche in space research, and it was once used to help technicians repair the Hubble Space Telescope.
A more down-to-earth application is this function which generates a Sierpinski triangle.

pro sierp,n
    s = (t = bytarr(3+2^(n+1))+32b)
    t[2^n+1] = 42b
    for lines = 1,2^n do begin
        print,string( (s = t) )
        for i=1,n_elements(t)-2 do if s[i-1] eq s[i+1] then t[i]=32b else t[i]=42b
    end
end

Up-and-Coming?

Of course, some languages don't see much use simply due to the fact they haven't been around very long. Whether or not they catch on depends on a variety of factors, and the reality is that the vast majority won't see widespread adoption. But you've gotta start somewhere, right? Here are a selection of languages from Rosetta's archives that are all relative newcomers to the show.

Crystal

This project is still in alpha-testing, so don't switch over to it just yet — but keep an eye out. Influenced by the writing efficiency of Ruby and the running efficiency of C, Crystal's authors seem set on producing an all-round best-of-both-worlds language. Time will tell if they're successful at doing so. Below is a 'quick-sort' algorithm written in Crystal — why not have a go running it yourself?

def quick_sort(a : Array(Int32)) : Array(Int32)
  return a if a.size <= 1
  p = a[0]
  lt, rt = a[1 .. -1].partition { |x| x < p }
  return quick_sort(lt) + [p] + quick_sort(rt)
end

a = [7, 6, 5, 9, 8, 4, 3, 1, 2, 0]
puts quick_sort(a)

Frege

Functional programming is the new big thing, and Frege is a purely functional language first introduced in 2011. It's been described as "Haskell for the Java Virtual Machine". Named after the mathematical logician Gottlob Frege, this language compiles to Java, and is also available to try out online. Below is a solution to the "99 Bottles" challenge. It is virtually identical to the same solution in Haskell, which is to be expected.

module Beer where

main = mapM_ (putStrLn . beer) [99, 98 .. 0]

beer 1 = "1 bottle of beer on the wall\n1 bottle of beer\nTake one down, pass it around"
beer 0 = "better go to the store and buy some more."
beer v = show v ++ " bottles of beer on the wall\n" ++ show v ++ " bottles of beer\nTake one down, pass it around\n" ++ head (lines $ beer $ v-1) ++ "\n"

Futhark

Although suffering from a lack of comprehensive documentation, the Futhark project nevertheless seems like a promising line of research. The aim is to compile to high-performance Graphical Processing Unit (GPU) code — but not for producing graphical output. Instead, Futhark's goal is to harness the power of the GPU to carry out computationally-intensive procedures that would ordinarily take much longer using a more conventional language. Below is an example of a function used to calculate an arithmetic-geometric mean.

include futlib.numeric

fun agm(a: f64, g: f64): f64 =
  let eps = 1.0E-16
  loop ((a,g)) = while abs(a-g) > eps do
    ((a+g) / 2.0, F64.sqrt (a*g))
  in a

Sidef

Sidef is approaching its fourth year of active development, having started out as a project in March 2013. It seems well advanced and very well documented, and has over 600 examples of coding solutions on Rosetta Code. Sidef is mostly used for research purposes, and looks to explore both OOP and functional programming. Personally, I really like the look of it. The example below shows it in action finding the intersection of two lines.
func det(a, b, c, d) {
    a*d - b*c
}

func intersection(ax, ay, bx, by, cx, cy, dx, dy) {
    var detAB = det(ax,ay, bx,by)
    var detCD = det(cx,cy, dx,dy)

    var ΔxAB = (ax - bx)
    var ΔyAB = (ay - by)
    var ΔxCD = (cx - dx)
    var ΔyCD = (cy - dy)

    var x_numerator = det(detAB, ΔxAB, detCD, ΔxCD)
    var y_numerator = det(detAB, ΔyAB, detCD, ΔyCD)
    var denominator = det( ΔxAB,  ΔyAB,  ΔxCD,  ΔyCD)

    denominator == 0 && return 'lines are parallel'
    [x_numerator / denominator, y_numerator / denominator]
}

say ('Intersection point: ', intersection(4,0, 6,10, 0,3, 10,7))

> Intersection point: [5, 5]

Sparkling

Like Sidef, this language started out in 2013. Its design has been inspired by features of C, Python and Lua — and a disdain for several features of JavaScript. It aims to be a lightweight and extensible scripting language that runs pretty much anywhere. Below is a number guessing game, which you can try and get working in your browser here.

printf("Lower bound: ");
let lowerBound = toint(getline());

printf("Upper bound: ");
let upperBound = toint(getline());

assert(upperBound > lowerBound, "upper bound must be greater than lower bound");

seed(time());
let n = floor(random() * (upperBound - lowerBound) + lowerBound);
var guess;

print();

while true {
    printf("Your guess: ");
    guess = toint(getline());

    if guess < n {
        print("too low");
    } else if guess > n {
        print("too high");
    } else {
        print("You guessed it!");
        break;
    }
}

Noah's Ark

One more category for you — there were loads of potential languages and I couldn't possibly get through them all to pick out every interesting example. If you spot any I may have missed, please leave a response below! One thing I did notice was that a lot of languages were named after animals. Is there an explanation for this?! I won't go into detail, but here's a quick run-through to finish up with:

Cat, Kitten

Cat is described as a functional language, but appears to be no longer in existence. However, Kitten seems to be currently under development, and calls itself a successor to Cat. Influenced heavily by Haskell, but aims to be more accessible.

"Hello world!" writeln    // Cat
"Hello world!" say        // Kitten

Cobra

OOP language, influenced by Python, C#, Eiffel and Objective-C.

class Hello
    def main
        print 'Hello world!'

><> ("Fish")

Another multidimensional esolang, if you're into that kind of thing.

!v"Hello world!"r!
 >l?!;o

Heron

Inspired by C++, Python and Pascal, but no commits since 2012, so appears to be no longer under active development. Its only sample on Rosetta is a lengthy solution to the N-queens problem. For brevity, I've inferred a simple "Hello world!" program to show here instead.

Main() {
    WriteLine("Hello world!");
}

Lobster

A game programming language that aims to be readily portable across platforms. Appears to be under active development.

print "Hello world!"

Panda

The website states that Panda aims to be simple enough that a panda could program it. I've no idea how good pandas are at coding, though, so I'm still in the dark about that one…

say("Hello world!")

Pony

With influences ranging from C++ to Erlang, Pony looks to be an interesting project with a thorough tutorial.

actor Main
  new create(env: Env) =>
    env.out.print("Hello world!")

Salmon

Salmon aims to intermix the writing of both low-level and high-level code.

"Hello world!"!
print("Hello world!\n");
standard_output.print("Hello world!\n");

Squirrel

Squirrel is a lightweight scripting language that has been embedded in games like Left 4 Dead 2, Portal 2 and CS:GO.

print("Hello world!");

Phew! That was a whistle-stop tour!
If you've made it this far and enjoyed the ride (or spotted a glaring, glaring error), leave a response underneath — I'll try and reply asap! Thanks for reading!
https://www.freecodecamp.org/news/rosetta-code-unlocking-the-mysteries-of-the-programming-languages-that-power-our-world-300b787d8401/
CC-MAIN-2021-31
refinedweb
4,276
63.29
For an independent study focused on using MVC 3.0 over the summer, I put together this article to show how great this technology is, and how its ease of use can have you developing web applications for business or personal use in a rapid, agile environment. This article will touch on a new method using MVC called the Code First Approach that makes developing apps simple without much hassle. We will also be extending this approach by using the NuGet Package Manager Console in order to build our Views and Controllers. MVC stands for Model-View-Controller, which is an architectural pattern that separates your application into three main components: the model, the view, and the controller. Here is a great resource that explains this design pattern. It is essential that you look at this resource before continuing on. Some great tutorials can be found here that can help understand MVC. If you complete one of these tutorials then you should have no problem following along with this tutorial. Microsoft just released MVC 4.0 RC, which is equipped with new features such as Web API support, mobile app features, and enhanced support for asynchronous methods. Read the full features list in the release candidate notes. Although the new addition is in its release candidate stages, the new features let you develop highly sophisticated web applications, from mobile to web, in a fluid and easy manner.

To get started you must have the following installed:
1. Visual Studio 2010
2. MVC 3.0
3. Service Pack 1
4. Nuget
5. Source Code for Part 1 *Good to download so you can follow along easily, but not required*

For this tutorial we will be building a student management system using the Code First Approach. In the student management system teachers can add courses and students. They will be able to manage students in each course and keep notes on different lectures pertaining to different courses. This tutorial will be a 3 part series that will get you to build a student information system allowing teachers to manage students, lectures, and classes, keep notes on lectures and/or students, and keep track of students' attendance. It will also notify the teacher if a student is falling behind by missing too many classes. For this article we will be going over the 1st part in the series. The 1st part will create the base layer to get this started. After completing the 1st part, users will be able to add and create classes or students as well as take notes on the students and classes. This information system can be created with very little code and in a matter of an hour if you are familiar with this approach. Here is a great description about the Code First Approach that you should read before continuing. **Parts 2 & 3 will be released when I get a little bit of free time, within a couple weeks from the creation date of this article**

1. Open Visual Studio and under the File Menu select create new project.
2. Select create new MVC 3 Project.
3. Create an MVC 3 Project that has an internet application template along with HTML5 semantic markup, just like Figure 1. Name your project/solution whatever you would like.

Figure 1

1. In the Solution Explorer right click on the Models folder and create a new class (Figure 2).

Figure 2

2. Name the class ManagementModel.cs. Once you create the class, the following code is created inside it.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace MvcApplication.Models
{
    public class ManagementModel
    {
    }
}

3. Change the class in ManagementModel.cs to the following:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace MvcApplication.Models
{
    public class Course
    {
        public int CourseId { get; set; }
        public string Name { get; set; }

        public virtual ICollection<Student> Students { get; set; }
        public virtual ICollection<Lecture> Lectures { get; set; }
    }

    public class Lecture
    {
        public int LectureId { get; set; }
        public string Name { get; set; }
        public DateTime Date { get; set; }
        public string Description { get; set; }
        public string Notes { get; set; }

        public int CourseId { get; set; }
        public virtual Course Course { get; set; }
    }

    public class Student
    {
        public int StudentId { get; set; }
        public string Name { get; set; }
        public string Notes { get; set; }

        public int CourseId { get; set; }
        public virtual Course Course { get; set; }
    }
}

From the model we have created, Entity Framework will automatically create a database in our App_Data folder (after completing Step 3) that will store all the data, which eliminates a lot of tedious work. As you can see, the Course class has one-to-many relationships with the Student and Lecture classes. To learn more about creating models with different relationships, and for a more in-depth understanding of what we will be doing in Step 3, click here.

In order to continue you must ensure you have the latest version of Nuget installed, as listed in the prerequisites section of this article. Nuget is a great way to manage extensions/packages for your MVC project. It makes creating applications such as the one we are building easy and fast-paced. Here is a good resource if you are new to Nuget that explains installation and the various commands you can input into Nuget's command line.

To get started, we are going to ensure we have the latest version of Entity Framework installed.

1. Open the Nuget Command Console under the Library Package sub menu in the Tools menu.
2. Input into the command line Update-Package EntityFramework as in Figure 3.

Figure 3

After updating Entity Framework we will want to install a new package that allows us to easily scaffold our Models into Views as well as Controllers. To accomplish this we will be adding an extension called MVC Scaffolding. Steve Sanderson's blog is a great resource for those considering this approach, with more in-depth tutorials.

3. In the Nuget Command Line type the following: Install-Package MvcScaffolding
4. Then type the following into the Command Line: Scaffold Controller Course

This will automatically scaffold a Controller and View from the Course class in the file named ManagementModel.cs.

5. Scaffold a controller for the other two classes (Student & Lecture) in ManagementModel.cs by typing the following: Scaffold Controller Student and then Scaffold Controller Lecture

6. Run the solution and create a new course by appending the following to your localhost port number: localhost:yourprojectport#/Courses/Create

As you can see, a form was automatically created that allows us to create a new course.

7. Create a new course with the name of your choice. I will create a new course called Systems Analysis and Design.

After creating a Course you will be redirected to your localhost/Courses view. This view has a grid showing you the Courses that you have created, as in Figure 4.
It also includes the number of students and lectures that each course has. Currently there are 0 students and 0 lectures because we have not created any students or lectures. Let's add some students and lectures:

8. Select the Create New link under the Course grid view and add a couple more courses.

Figure 4

9. Go to the url localhost:yourprojectport#/Students/Create and create some students by filling out the form.
10. Go to the url localhost:yourprojectport#/Lectures/Create and create some lectures for each Course.

You may or may not have noticed that MVC has automatically added validation to our forms to make sure that users fill in each field correctly, as in Figure 5.

Figure 5

1. In the Shared folder under Views in the Solution Explorer you will notice a file called _Layout.cshtml. This file allows you to create the surrounding layout of your website. The layout will surround each View that is created. Let's add menu items so users can easily navigate to the Students, Lectures, or Courses views. To do this we will add this code under the <nav> tag.

<li>@Html.ActionLink("Courses", "Index", "Courses")</li>
<li>@Html.ActionLink("Lectures", "Index", "Lectures")</li>
<li>@Html.ActionLink("Students", "Index", "Students")</li>

This is what your Layout View should look like after you make the above changes (your <head> section may contain extra script references from the template; only the <nav> block matters here):

<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
</head>
<body>
    <div class="page">
        <header>
            <div id="title">
                <h1>My MVC Application</h1>
            </div>
            <div id="logindisplay">
                @Html.Partial("_LogOnPartial")
            </div>
            <nav>
                <ul id="menu">
                    <li>@Html.ActionLink("Home", "Index", "Home")</li>
                    <li>@Html.ActionLink("About", "About", "Home")</li>
                    <li>@Html.ActionLink("Courses", "Index", "Courses")</li>
                    <li>@Html.ActionLink("Lectures", "Index", "Lectures")</li>
                    <li>@Html.ActionLink("Students", "Index", "Students")</li>
                </ul>
            </nav>
        </header>
        <section id="main">
            @RenderBody()
        </section>
        <footer>
        </footer>
    </div>
</body>
</html>

2. Run the Solution and you've completed Part 1! The final result should look like Figures 6-9.

Figure 6 Figure 7 Figure 8 Figure 9

As you can see, it is extremely easy to create a simple application in MVC 3.0 using the Code First Approach with MvcScaffolding. In Part 2 of this series we will implement the ability for teachers to take attendance, plus a notification system that allows teachers to see slacking students who are missing several classes. Part 3 will go over security and will take an in-depth look at our Controllers. Check back soon to get those.
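One closing note on Step 3: MvcScaffolding also generates an Entity Framework context class in the Models folder, and that class is what actually creates and opens the database mentioned above. Here is a rough sketch of what it looks like; the class name (MvcApplicationContext) is an assumption based on the scaffolder's defaults, so check your own Models folder for the generated file.

using System.Data.Entity;

namespace MvcApplication.Models
{
    // Hypothetical sketch of the context class the scaffolder generates;
    // the actual class name in your project may differ.
    public class MvcApplicationContext : DbContext
    {
        // One DbSet per entity we scaffolded a controller for.
        public DbSet<Course> Courses { get; set; }
        public DbSet<Lecture> Lectures { get; set; }
        public DbSet<Student> Students { get; set; }
    }
}

During development it is common to pair this with a database initializer in Global.asax's Application_Start, for example Database.SetInitializer(new DropCreateDatabaseIfModelChanges<MvcApplicationContext>());, so the database is rebuilt whenever the model classes change.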
http://www.codeproject.com/Articles/429950/MVC-3-0-Code-First-for-Beginners?fid=1756650&df=90&mpp=50&sort=Position&spc=Relaxed&select=4333817&tid=4333817
CC-MAIN-2014-41
refinedweb
1,552
56.45
First of all, what is a configuration file? A configuration file is a file which contains initial settings for your program. It is nothing more than a text file, which contains a specific structure. That structure usually looks like this:

key = value

We call the structure "key = value" a parameter. In more advanced config files, parameters can be grouped in sections, but I'm not going to talk about that now.

This parser, what is it capable of? The parser that I'm going to present is capable of parsing simple configuration files, with a basic structure like this:

key1 = value1
key2 = value2
key3 = value3

It will also remove the leading & trailing whitespace from keys & values, it will ignore blank lines, and it supports comment parsing. A comment will start with a semicolon (;), and everything from that semicolon until the end of the line will be ignored. Example of a line with leading & trailing whitespace & comments:

key1 = value1 ; I'm a comment

The parser will remove that comment & whitespace, and the resulting structure will look like this:

key1=value1

No leading spaces, no trailing spaces, no comments. The parser is also capable of recognizing keys with the same name. In case you have multiple keys with the same name, an appropriate error message will be shown. The parser will also recognize multiple-word key values. Example:

car = toyota corolla

The value of car will be toyota corolla, and not only toyota. The same thing doesn't apply to keys themselves, which can't be formed from multiple words. Example:

car 1 = toyota corolla

The parser will ignore that 1; therefore, the key will be car, with the value of toyota corolla.

Setting up the project: This step is very easy. Create an empty Console Application, and add one source file (.cpp/.cxx) to your project. I named it ConfigFile.cpp.

Create the configuration file parser: We got here. This is, of course, the most important thing in this tutorial. Open ConfigFile.cpp. Right now, you have a blank file. Start by adding the needed include files at the top of the file:

#include <iostream>
#include <string>
#include <sstream>
#include <map>
#include <fstream>
#include <typeinfo>

I think <iostream> and <string> are pretty self-explanatory. <sstream> is needed for conversion between std::string and primitive types, and vice-versa. <map> is needed for holding the pairs of key-value, and <fstream> is of course needed for file handling. (<typeinfo> is needed for the typeid call used in the error message below.) Now, we are going to create a class which contains only two functions, needed for conversion of std::string to primitive types (int/float/double/...), and vice-versa. I have called it Convert. This is the code for it:

// exitWithError is defined further down; declare it here so Convert can call it.
void exitWithError(const std::string &error);

class Convert
{
public:
	// Convert T, which should be a primitive, to a std::string.
	template <typename T>
	static std::string T_to_string(T const &val)
	{
		std::ostringstream ostr;
		ostr << val;
		return ostr.str();
	}

	// Convert a std::string to T.
	template <typename T>
	static T string_to_T(std::string const &val)
	{
		std::istringstream istr(val);
		T returnVal;
		if (!(istr >> returnVal))
			exitWithError("CFG: Not a valid " + (std::string)typeid(T).name() + " received!\n");
		return returnVal;
	}
};

// Specialization for std::string: return the string as-is, whitespace included.
// (Note: the specialization is placed at namespace scope; declaring an explicit
// specialization inside the class is a Visual C++ extension, not standard C++.)
template <>
inline std::string Convert::string_to_T<std::string>(std::string const &val)
{
	return val;
}

Now, you may find functions like those in the Convert class all over the internet. It's the classic stringstream approach that performs the conversions. I just want to point out one thing: I have specialized the string_to_T function for std::string. Why?
Well, take a look at this:

if (!(istr >> returnVal))

If the function parameter val were a string containing whitespace, like:

toyota corolla

then string_to_T would return only "toyota", since istringstream stops extracting at the first whitespace.

Now, you may wonder what's with the exitWithError function. This function posts a message on the console, then aborts the execution of the program. The function looks like this:

void exitWithError(const std::string &error)
{
	std::cout << error;
	std::cin.ignore();
	std::cin.get();

	exit(EXIT_FAILURE);
}

Now, let's create the main class, which contains the functions needed to parse the configuration file. I have called it ConfigFile. Copy-paste this into your file:

class ConfigFile
{
private:
public:
};

Now, let's deal with the private zone of the class. As member variables, we will only have a std::string, which will hold the name of the configuration file, and a std::map<std::string, std::string>, which will hold the pairs of key-value. Let's add them to the class:

std::map<std::string, std::string> contents;
std::string fName;

Done. Right now, we will create a function that removes the comment from an individual line. It looks like this (copy-paste it into the private section of the class):

void removeComment(std::string &line) const
{
	if (line.find(';') != line.npos)
		line.erase(line.find(';'));
}

So, what does it do? It checks if the line contains a semicolon, and if it does, it removes everything from the semicolon (including it) to the end of the line. If the line contains nothing but a comment, then, after comment removal, the line will only contain whitespace. That's why I created a separate function which checks this:

bool onlyWhitespace(const std::string &line) const
{
	return (line.find_first_not_of(' ') == line.npos);
}

Basically, the function returns false if a non-space character was found, true otherwise. The function is "const" because it does not alter any class member variables.

Now, a very important function is on its way. This function checks if an individual line has the correct structure of a config file (key = value). It looks like this:

bool validLine(const std::string &line) const
{
	std::string temp = line;
	temp.erase(0, temp.find_first_not_of("\t "));
	if (temp[0] == '=')
		return false;

	for (size_t i = temp.find('=') + 1; i < temp.length(); i++)
		if (temp[i] != ' ')
			return true;

	return false;
}

Let's take it step by step. First of all, the function accepts as parameter a std::string, which is an individual line (with the comment removed) from the config file. Let's take a look at this part:

std::string temp = line;
temp.erase(0, temp.find_first_not_of("\t "));
if (temp[0] == '=')
	return false;

The .erase() simply removes every character from position 0 up to (but not including) the first non-tab, non-space character. After removal, if the first character is '=', it means that we do not have a key. Something like this:

; Oops? Missing the key from the below line!
= someValue

Now, let's analyze the second part of the function:

for (size_t i = temp.find('=') + 1; i < temp.length(); i++)
	if (temp[i] != ' ')
		return true;

return false;

The for loop runs from the position after the '=' until the end of the line. If a non-space character was found, then we have a key value. If the "if" never executes, the function returns false, because the key doesn't have a value. An example in which the function also returns false:

; Oops. No key value in the below line:
key = ; no value!

Done with that too.
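Before moving on, if you want to see that extraction behaviour from the Convert discussion for yourself, here is a tiny standalone test (my own throwaway snippet, not part of the parser):

#include <iostream>
#include <sstream>
#include <string>

int main()
{
	std::istringstream istr("toyota corolla");
	std::string word;
	istr >> word;                  // extraction stops at the first whitespace
	std::cout << word << '\n';     // prints "toyota", not "toyota corolla"
	return 0;
}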
Now, we will create a function that extracts the key from the pair of key = value. It looks like this:

void extractKey(std::string &key, size_t const &sepPos, const std::string &line) const
{
	key = line.substr(0, sepPos);
	if (key.find('\t') != line.npos || key.find(' ') != line.npos)
		key.erase(key.find_first_of("\t "));
}

sepPos represents the position of the '=' in line (we will discuss it in another function). Let's give an example and see what the function would assign to key. Example:

car = ford

The value of key will first be "car " (note the trailing space that stood between car and the '='; the leading whitespace of the line has already been trimmed by the caller, which we will see shortly). Why? Because .substr() creates a substring starting with the character at position 0 and finishing with the character at the position of '=' minus 1. Then, everything from the first space or tab character onward is removed, leaving just "car".

Now, since we created a function that extracts the key, let's create one that extracts the value of the key. It looks like this:

void extractValue(std::string &value, size_t const &sepPos, const std::string &line) const
{
	value = line.substr(sepPos + 1);
	value.erase(0, value.find_first_not_of("\t "));
	value.erase(value.find_last_not_of("\t ") + 1);
}

Again, sepPos is the position of the '=', and line is the individual line with the comment removed. Let's take an example and see what value will be assigned:

car = toyota corolla

value will be assigned "toyota corolla". First of all, .substr() creates a substring starting from the position of '=' + 1, to the end of the line. Then, value.erase(0, value.find_first_not_of("\t ")); removes the leading whitespace, and value.erase(value.find_last_not_of("\t ") + 1); removes everything after the last non-tab, non-space character.

Now, all we need to do is to create some functions which call the above functions. Copy-paste these, again into your private section of the class:

void extractContents(const std::string &line)
{
	std::string temp = line;
	// Erase leading whitespace from the line.
	temp.erase(0, temp.find_first_not_of("\t "));
	size_t sepPos = temp.find('=');

	std::string key, value;
	extractKey(key, sepPos, temp);
	extractValue(value, sepPos, temp);

	if (!keyExists(key))
		contents.insert(std::pair<std::string, std::string>(key, value));
	else
		exitWithError("CFG: Can only have unique key names!\n");
}

// lineNo = the current line number in the file.
// line = the current line, with comments removed.
void parseLine(const std::string &line, size_t const lineNo)
{
	if (line.find('=') == line.npos)
		exitWithError("CFG: Couldn't find separator on line: " + Convert::T_to_string(lineNo) + "\n");

	if (!validLine(line))
		exitWithError("CFG: Bad format for line: " + Convert::T_to_string(lineNo) + "\n");

	extractContents(line);
}

I don't think these functions need much more explanation. One note on this line:

if (!keyExists(key))

keyExists() is a function which checks if the key given as parameter already exists in the std::map (contents). I will present it later.

Now, the only thing that we have to do in the private zone of the class is to add a function that opens the configuration file, and extracts & parses its contents.
It looks like this:

void ExtractKeys()
{
	std::ifstream file;
	file.open(fName.c_str());
	if (!file)
		exitWithError("CFG: File " + fName + " couldn't be found!\n");

	std::string line;
	size_t lineNo = 0;
	while (std::getline(file, line))
	{
		lineNo++;
		std::string temp = line;

		if (temp.empty())
			continue;

		removeComment(temp);
		if (onlyWhitespace(temp))
			continue;

		parseLine(temp, lineNo);
	}

	file.close();
}

So what does the function do? It opens up the configuration file. Then, the while loop keeps extracting lines until EOF is found. We check whether each line is empty, and if it is, we jump over it. Comments are removed; then, if the line contains only whitespace, we jump over it. Lastly, parseLine is called, and the line's contents are added to our map.

We have finished adding functions to the private zone of the class; now, let's deal with the public zone. Let's start by adding the class constructor, which sets the name of the configuration file, then calls ExtractKeys to perform extraction:

ConfigFile(const std::string &fName)
{
	this->fName = fName;
	ExtractKeys();
}

Done. Now, let's create a function which checks if a specific key exists in the configuration file. Since the pairs of key-value are extracted into our map, all we have to do is use the std::map::find function to look for the key:

bool keyExists(const std::string &key) const
{
	return contents.find(key) != contents.end();
}

And lastly, let's create the function that retrieves the value of a specific key. It looks like this:

template <typename ValueType>
ValueType getValueOfKey(const std::string &key, ValueType const &defaultValue = ValueType()) const
{
	if (!keyExists(key))
		return defaultValue;

	return Convert::string_to_T<ValueType>(contents.find(key)->second);
}

The function returns a default value (a value-initialized ValueType, i.e. ValueType()) if the key couldn't be found. Otherwise, it returns the key's value converted from string to ValueType. We will discuss right now how to use this function.

Create a sample configuration file that can be used with this parser: I have called it config.cfg.

; This is a comment
; Another comment
; color=red ; comment
fruit = apple ; some whitespace + comment
car = toyota corolla ; key value more than one word
double =3.1223 ; a double

How to use the ConfigFile class: It is extremely easy. All you have to do is this:

ConfigFile cfg("config.cfg");

Of course, "config.cfg" can be replaced with the name of your configuration file.

Check if a key exists: all you need to do is use the ConfigFile::keyExists function:

// Check if car key exists. It does, in our case.
if (cfg.keyExists("car"))
	std::cout << "car key exists!\n";
// Check if fruits key exists. It doesn't, in our case.
if (cfg.keyExists("fruits"))
	std::cout << "fruits key exists!\n";

Retrieve the value of a specific key:

// Retrieve the value of key "car":
// If car key doesn't exist, an empty string is returned.
// Value type is std::string.
// In our case it returns "toyota corolla"
std::string carValue = cfg.getValueOfKey<std::string>("car");

// Retrieve the value of key "double":
// We directly retrieve it as a double:
// If key "double" is not found, the return value will be 1.
// In our case it returns "3.1223"
double doubleVal = cfg.getValueOfKey<double>("double", 1);

And that's pretty much everything you need to know about how to use the ConfigFile functions. You may also wonder why I didn't use separate header files for the ConfigFile / Convert classes, and separate source files.
Well, I should have done that, but I wanted to keep the tutorial as short as I could. You are free to add separate header files / source files to keep your project cleaner. I have also attached the whole source code presented in this tutorial. Additional references: Wiki article about Configuration files
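To round things off, here is how the usage snippets above combine into one complete program. This is my own assembly of the tutorial's pieces: it assumes the Convert and ConfigFile code lives in the same ConfigFile.cpp (or in a header you split out), and that config.cfg sits next to the executable.

// Minimal demo main() built from the snippets shown above.
int main()
{
	ConfigFile cfg("config.cfg");

	// keyExists: "car" is present in the sample file, "fruits" is not.
	if (cfg.keyExists("car"))
		std::cout << "car key exists!\n";
	if (!cfg.keyExists("fruits"))
		std::cout << "fruits key doesn't exist!\n";

	// getValueOfKey: as std::string, and converted to a double.
	std::string carValue = cfg.getValueOfKey<std::string>("car");
	double doubleVal = cfg.getValueOfKey<double>("double", 1);

	std::cout << "car = " << carValue << "\n";
	std::cout << "double = " << doubleVal << "\n";

	std::cin.get();
	return 0;
}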
http://www.dreamincode.net/forums/topic/183191-create-a-simple-configuration-file-parser/
CC-MAIN-2016-40
refinedweb
2,287
66.23
- java user input (Java Beginners): how to input from keyboard? please give a example. Hi Friend, There are lot of ways to input data from keyboard. You... try{ System.out.print("Enter String:"); BufferedReader input = new
- user input in java (Java Beginners): i am trying to write a code to compute the average of 5 numbers but i dont know what to do for the program to accept user input. Hi import java.util.*; public class AverageExample { public
- input: input a java program that will exchange the last names of two students that were input by a user
- Java User Input: I am using Scanner class for user Input. Scanner s = new Scanner(System.in); String name = s.next(); but i am unable to store full name in name variable... how i can store full name plsss reply
- use input (Java Beginners): i want to know how to take value from user in core java. help me soon. Hi Friend, There are lot of ways to input data. You can use... to input the data from the command prompt. Try the following code to input name
- button to accept user input: the radiobutton and spinner. The user input does not show on my Studentinfo.mdb database it just gives me an error. please help. Here is java swing example... button to accept user input private void jButton1ActionPerformed
- Java get User Input: In this section, you will learn how to get the user input from... that will communicate with the user at the command line and returns the user input. We have
- XP Bowling Game User input help: I was asked to do the XP Bowling... to make the code accept input from a user and display the scores/frames in a command window. Being new to Java.... I have no clue how to out put it to the console
- java input problem (Java Beginners): I am facing a Java input problem
- Keyboard Input (Java Notes): There are two approaches to getting keyboard input from the user. GUI (Graphical User Interface). Displaying a graphical text... of the old Teletype machine. Java was designed for graphical user interfaces (GUI
- Input and Output problems (Java Beginners): 1) what is Difference between... This link will help you. ... to a particular platform. Thus, this class reads characters from a byte input stream
- input output: Introduction: The Java I/O means Java Input/Output and is a part... input to read the user input. In this section, you will see how the standard I... DataInputStream: A data input stream is use to read primitive Java data types from
- escaping user input in php: Is it possible to escape the user input while submitting data into database in PHP
- user input to database: /* * Studentinfo2App.java */ package studentinfo2; import java.sql.Connection; import java.sql.DriverManager; import... in my database user input from my applet does not show calculation after if.
Hi Experts, Could you please guide me for writing a java program of File Handling using user transaction, also please tell me which jars do I need. Please give me a program for above mentioned How to read user input in C++ How to read user input in C++ How can i get an input from the users in C++? You can use the standard input "cin" to get the input from user.. For example: int age; cin >> age Java User Transaction - Java Beginners Java User Transaction Hi, I am trying to do file handling through User Transaction, I would like to know that if server crashes while writing file... Operation and file handling both in one User Transaction with atomicity input ang message box - Java Beginners input ang message box can you help me to calculates for the sum, difference, product and quotient of the two inputted values using Input Dialog...("Input First Number"); int num1=Integer.parseInt(input1); String input2 Dialog and Console Input-Output , but it also gets input from the user. 1 2 3 4 5 6 7 8... that also accepts user input. It returns a string that can be stored... and file input. Of course, your program should eventually have a GUI graphical user interface - Java Beginners graphical user interface how do i write a code for a jmenu bar, File with items like open, save , save as. that lead to another interface? .../java/example/java/swing/SwingMenu.shtml Thanks error in taking input from user. error in taking input from user. //i m having problem inputting the array from user.my program is import java.io.*; class bubble { public static void main(String args[]) { int a[]=new int[20]; int i=0; BufferedReader br=new Input And Output ; Introduction The Java I/O means Java Input/Output and is a part of java.io... standard input to read the user input.. In this section, you will see how... DataInputStream A data input stream is use to read primitive Java data types from Servlet Error Message based on user input Servlet Error Message based on user input  ... to check the user input against database and display the message to the user. This example illustrate how to ensure that user input is correct by validating Change the user input to integer Change the user input to integer  ... will create a object of a Rectangle class. Now we ask the user to input two values... for an input, then we should use Integer.parseInt(string str). As we know - Java Beginners graphical user interface Hi, could u please tell me whats wrong with the code below. tried compiling but it gives me 2 errors. class or interface expected.thx import javax.swing.*; import java.awt.event.*; import Dialog Box Input Loop Prev: Example: Capitalize | Next: Java NotesDialog Box Input Loop... When reading input in a loop user must have some way of indicating that the end... of the input is for the user to enter a special value to indicate How to take input by user using jDialogue and use that input How to take input by user using jDialogue and use that input I am using NetBeans i have a 'Search by employee id' button . I want when user click this button a dialogue box appear which take employee_id as input from user Save the input data until user get changed Save the input data until user get changed I would like to know how to store the give input data and perform the calculations even after re-opening the program. I am developing a college library management system, i would like Managing Multiple User Input Data in an Array (or ArrayList) Managing Multiple User Input Data in an Array (or ArrayList) Hey... 
record alphabetically (2) If user chooses choice1, the input data is stored in an ARRAY (or ARRAYLIST) until the user chooses to stop inputting data. Kindly Java Graphical user interface Java Graphical user interface Hi, guys. I'm christina, need help with my home work Task 1: GUI Design and Implementation The user requirements of your Java quiz GUI application are specified by the following program flow input box input box give me the code of input box in core java Simple input application in Echo3 an application which contains the window from where the user can input a text and this value would be shown into the content pane when user will click... Simple input application in Echo3   How to get the unicode of japanese character input in java How to get the unicode of japanese character input in java Good Evening sir/madam; I am trying to create an application in java which needs to show... the user to check their vocabulary.But I stuck in the middle coz if user enters Show input dialog box anything from user. Java Swing provides the facility to input any thing (whether... Swing Input Dialog Box Example - Swing Dialogs Input dialog box is very important Console Input: Scanner Java NotesConsole Input: Scanner The java.util.Scanner class (added in Java 5) allows simple console and file input. Of course, your program should eventually have a GUI user interface, but Scanner is very useful for reading data Array and input [ ]={1,2,3,4,5,6,7,8,9,10,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26}; i want to ask user to input one of the above numbers then i want to print the array without the number user input. how will i do tht Display PHP clock with user input date and time Display PHP clock with user input date and time The following PHP code displays a clock with current date and time. I want the clock to receive user input date and time. How can this be done? <?php date<em> user-define package for applet - Java Beginners user-define package for applet how to import a user-define package to applet ?? Hi import javax.swing.*; import java.awt.Graphics...:// multi user chat server - Java Beginners multi user chat server write a multi chat server and client with step by step explanation? please send me this source code to my mail id with step by step explanation URGENT: User Defined Classes - Java Beginners URGENT: User Defined Classes Can someone help me? Design and implement the class Day that implements the day of the week in a program. The class...:// Here, you will get different data user defined subclass of throwable class - Java Beginners user defined subclass of throwable class Dear Sir,pls help me to write this pgm: Program to illustrate the use of user defined subclass of throwable class Hi Friend, Try the following: 1) public class User defined package problem - Java Beginners User defined package problem Hello friend, i was trying to execute the user-defined packages according to the chart that has been given in the Complete-Reference Book of JAVA.in that class-members access protection table Command Line Standard Input In Java Command Line Standard Input In Java In this section we will discuss about the Java IO Standard Input through Command Line. Standard streams, feature of various O/S, are read through standard input in Java. Standard streams are read accept integer from user accept integer from user Write an Java application that accepts 100 integer numbers from a user. The input should be in the range of 1-200. 
Error message needs to be displayed if user entered input which is not in this range input output input output java program using fileinputstream and fileoutputstream Hi Friend, Try the following code: import java.io.*; class FileInputStreamAndFileOutputStream { public static void main(String[] args User Interface Toolkits boxes etc and also dealing user input via those components. Swing The Swing...User Interface Toolkits User Interface Toolkits / Libraries are given below : Input Method Framework In entering text, this Framework make possible How to prompt user How to prompt user Dear Sir, I'm a new student, a beginner in Java. Pls help to write program as below :- a)to prompt user to input 2 integers...[] args) throws Exception{ Scanner input=new Scanner(System.in); int Updating user profile Updating user profile how should i provide user to update his profile with edit option including entered data by user should be shown in jsp page.... When the user clicks the particular edit button, that data will get shown Java User-defined Exception Java User-defined Exception In this tutorial, you will learn about the User... to handle errors in the applications with customized responses. It creates the java application more user friendly and easily understood. The given example throw How to create file from input values in Jframe ? How to create file from input values in Jframe ? hi i m doing my project using java desktop application in netbeans.i designed a form to get the user's academic details and on clicking the submit button,it displays all Java Command Line Input Java Command Line Input How to get input from command line in java... System.out.println("Please Input A Number"); BufferedReader br... this code you will find the output as Please Input A Number 2 data is- 2 different type input in java type of input method. thank Java Read through DataInputStream: import...); } } Java read input through Scanner: import java.util.*; class...); } } Java read input through BufferedReader import java.io.*; public class retrive mails from user using java code - Java Beginners retrive mails from user using java code how to retrive mails as user "username"??? using java for ex: class Mail{ private String subject... to retrive all mails for user "userName" //and return a set of mail objects input output in java input output in java java program using filereader and filewriter Hi Friend, Try the following code: import java.io.*; class FileInputStreamAndFileOutputStream { public static void main(String[] args) throws to retrive e mails as per user name - Java Beginners to retrive e mails as per user name hi friends, how to retrive e mails as per user "user name " for ex: class Mail{ private String subject... to retrive all mails for user "userName" //and return a set of mail objects java - Java Beginners output. Therefore in Java suggest how to accept input from the user and display...-user-input.shtml java - Java Beginners java how to input the character data from user System.in is use for get some input from user should close the browser with user confirmation in javascript - Java Beginners should close the browser with user confirmation in javascript Hi, I need to close the browser with the user confirmation. If the user accept olny he/she can close the browser otherwise can't close the browser. 
How can I do Input in Stateless Bean - EJB Input in Stateless Bean Hello, I am having problem in taking the input in stateless bean, when I am trying to get integer or double type input from the user in the client file, It shows me the error Take input in batch file Take input in batch file How to take input in batch file? Thanks Hi, You can use the following line of code in your batch file: set... entered by user. Thanks Data input & output Stream Data input & output Stream Explain Data Input Stream and Data Output Stream Class. DataInputStream and DataOutputStream A data input Servlet to authenticate user ;Enter User ID: <input type="text... Servlet to Authenticate User For security everyone want to restrict the unauthenticated user Develop user registration form User Registration Form in JSP In this example we are going to work with a user registration... user registration jsp page. In this example we will create a simple Input / Output in Easiest Method in Java Input / Output in Easiest Method in Java How to input from keyboard... with the easiest method in Java? Hi Friend, Try the following code: import...)throws Exception{ Scanner input=new Scanner(System.in Java code - Java Beginners Java code Write a program to download a website from a given URL. It must download all the pages from that website. It should take the depth of retrieval from the user input. All the files/pages must be stored in a folder java - Java Beginners java how to write a program to determine if a number input by the user is divisible by 37 To know if a number, n, is divisible by another number, m, (n%m == 0). That is, for input of 100, 100%37 == 26. So 100 java beginners java beginners Write a program that asks the user to enter five test scores. The program should display a letter grade for each score... static void main(String[]args){ Scanner input=new Scanner(System.in javascript focus input whether user has entered some value to the input boxes one by one. If any one... javascript focus input...; JavaScript can be very useful while the user fills the form containing number java code - Java Beginners in user input System.out.println("Enter the numbers"); String str...java code i need a program that will arrange the numbers from.... if the entered number is not in the range, the output should be invalid. example: input Developing User Registration Form to the user to take the input. Here is the code of the userRegister.jsp file... Developing User Registration Form  ... necessary to develop the User Registration Form for our Struts, Hibernate and Spring java - Java Beginners java Hi sir .. Write a program in java for Password Generator that should ask from user in the format as given below. Please reply with output as well. Input the string:- Insert string of any length Password Type User Module User Module The user first need to make registration on the website... of that particular language. For example- if he selects java the he can only see the java language test paper. After submitting the test paper he views the result Java Program - Java Beginners Java Program Write a java program to find out the sum of a given number by the user? 
Hi Friend, Try the following code: import...){ System.out.print("Enter first number:"); Scanner input=new Scanner(System.in Java Stack - Java Beginners Java Stack Can you give me a code using Java String STACK using the parenthesis symbol ( ) the user will be the one to input parenthesis symbol and it will determine if valid or invalid and it also tell what Java Program - Java Beginners Java Program Write a java program to find out the sum of digits of a given number by the user? Hi Friend, Try the following code...(String[] args) { Scanner input = new Scanner(System.in); int user validation ;tr><td>Username:</td><td><input type="text" name="user">...user validation i hv just started with my lessons in jsp n also doin...(); } } /* if(userName.equals(request.getParameter("user")) && Java - Java Beginners an array of colors and fills the array with user input, using the menu java - Java Beginners java how to wtite a program that evaluates the series for some integer n input by the user where n! is a factorial of n Hi Friend, Please clarify your problem. Do you want to print the factorial of a number java program - Java Beginners java program write a program that asks the user for a starting value and an ending value and then writes all the integers (inclusive) between those...){ System.out.println("Enter Start:"); Scanner input=new Scanner(System.in); int num1 Java Code - Java Beginners that assigns user input values to above mentioned variables. - main ( ) method that creates array of 4 objects of Computer class and that takes input for all above...Java Code Create a class Computer that stores information about Java Compilation - Java Beginners Java Compilation I want to write a Java program that asks the user... the characters will be uppercase. I want the third file to hold the context of all input files that my program has processed. I want to create 3 input files that can HTML5 input examples, Introduction and implementation of input tag. HTML5 input examples, Introduction and implementation of input tag. Introduction:In this tutorial, you will see the use of input tag. The input tag is a input field in a form. In which user can insert data, and the type of data characters java - Java Beginners HELP ME THANK YOU!!Design a program to search a word for letters that the user... and lowercase,so your program converts all letters by the user to the same case).You do... finds within the word a letter that the user entered,change the value of the array Java prog - Java Beginners Java prog Write a program on Java that ask the user to input: Cost of Machine, Life of Machine, Cost of capital in Percentage, in an JOption pane window and that calculates and outputs Annual Lease rent. Hi friend i iPhone Input Field, iPhone Input Field Tutorial iPhone Input Field Generally, input fields are used to take input from the users. In iPhone application we can use Text Field to take the input. .... That keyboard can be customize according to our need i.e. if we wants the user
http://roseindia.net/tutorialhelp/comment/18011
CC-MAIN-2013-48
refinedweb
3,489
54.32
#include <qsemaphore.h>

A QSemaphore can be used to serialize thread execution, in a similar way to a QMutex. A semaphore differs from a mutex, in that a semaphore can be accessed by more than one thread at a time.

For example, suppose we have an application that stores data in a large tree structure. The application creates 10 threads (commonly called a thread pool) to perform searches on the tree. When the application searches the tree for some piece of data, it uses one thread per base node to do the searching. A semaphore could be used to make sure that two threads don't try to search the same branch of the tree at the same time.

A non-computing example of a semaphore would be dining at a restaurant. A semaphore is initialized to have a maximum count equal to the number of chairs in the restaurant. As people arrive, they want a seat. As seats are filled, the semaphore is accessed, once per person. As people leave, the access is released, allowing more people to enter. If a party of 10 people want to be seated, but there are only 9 seats, those 10 people will wait, but a party of 4 people would be seated (taking the available seats to 5, making the party of 10 people wait longer).

When a semaphore is created it is given a number which is the maximum number of concurrent accesses it will permit. Accesses to the semaphore are gained using operator++() or operator+=(), and released with operator--() or operator-=(). The number of accesses allowed is retrieved with available(), and the total number with total(). Note that the incrementing functions will block if there aren't enough available accesses. Use tryAccess() if you want to acquire accesses without blocking.
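The API above is Qt's C++, but the counting-semaphore behavior it describes is language-neutral. As a rough illustration only (using Python's standard threading module, not Qt's class), the restaurant example could be modeled like this; the lock makes the multi-seat acquisition effectively atomic, which is what QSemaphore's operator+=() gives you directly:

import random
import threading
import time

seats = threading.Semaphore(9)  # like QSemaphore(9): nine chairs in total
door = threading.Lock()         # parties claim seats one party at a time

def dine(party, size):
    with door:                  # avoids two parties deadlocking on partial seat grabs
        for _ in range(size):   # one access per person
            seats.acquire()     # blocks while the restaurant is full
    print("party %d (%d people) seated" % (party, size))
    time.sleep(random.uniform(0.1, 0.3))
    for _ in range(size):
        seats.release()         # leaving frees the seats for waiting parties

threads = [threading.Thread(target=dine, args=(i, random.randint(1, 6)))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()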
http://man.linuxmanpages.com/man3/qsemaphore.3qt.php
crawl-003
refinedweb
300
62.27
PigPen is map-reduce for Clojure, or distributed Clojure. It compiles to Apache Pig or Cascading, but you don't need to know much about either of them to use it.

Getting Started, Tutorials & Documentation

Getting started with Clojure and PigPen is really easy.

- The wiki explains what PigPen does and why we made it
- The tutorial is the best way to get Clojure and PigPen installed and start writing queries
- The full API lists all of the operators with example usage
- PigPen for Clojure users is great for Clojure users new to map-reduce
- PigPen for Pig users is great for Pig users new to Clojure
- PigPen for Cascading users is great for Cascading users new to Clojure

Note: It is strongly recommended to familiarize yourself with Clojure before using PigPen.

Note: PigPen is not a Clojure wrapper for writing Pig scripts you can hand edit. While entirely possible, the resulting scripts are not intended for human consumption.

Questions & Complaints

Artifacts

pigpen is available from Maven:

With Leiningen:

;; core library
[com.netflix.pigpen/pigpen "0.3.3"]

;; pig support
[com.netflix.pigpen/pigpen-pig "0.3.3"]

;; cascading support
[com.netflix.pigpen/pigpen-cascading "0.3.3"]

;; rx support
[com.netflix.pigpen/pigpen-rx "0.3.3"]

The platform libraries all reference the core library, so you only need to reference the platform-specific one that you require and the core library should be included transitively.

Note: PigPen requires Clojure 1.5.1 or greater

Parquet

To use the parquet loader, add this to your dependencies:

[com.netflix.pigpen/pigpen-parquet-pig "0.3.3"]

Here is an example of how to write parquet data.

(require '[pigpen.core :as pig])
(require '[pigpen.parquet :as pqt])

;; assuming that `data` is in tuples
;;
;; [["John" "Smith" 28]
;;  ["Jane" "Doe" 21]]

(defn save-to-parquet [output-file data]
  (->> data
       ;; turning tuples into a map
       (pig/map (partial zipmap [:firstname :lastname :age]))
       ;; then storing to Parquet files
       (pqt/store-parquet output-file
         (pqt/message "test-schema"
           ;; the field names here MUST match the map's keys
           (pqt/binary "firstname")
           (pqt/binary "lastname")
           (pqt/int64 "age")))))

And how to load the records back:

(defn load-from-parquet [input-file]
  ;; the output will be a sequence of maps
  (pqt/load-parquet input-file
    (pqt/message "test-schema"
      (pqt/binary "firstname")
      (pqt/binary "lastname")
      (pqt/int64 "age"))))

And check out the pigpen.parquet namespace for usage.

Note: Parquet is currently only supported by Pig

Avro

To use the avro loader (alpha), add this to your dependencies:

[com.netflix.pigpen/pigpen-avro-pig "0.3.3"]

And check out the pigpen.avro namespace for usage.

Note: Avro is currently only supported by Pig

Release Notes

0.3.3 - 5/19/16

- Explicitly disable *print-length* and *print-level* when generating scripts
- Add a better error message for storage types that expect a map with keywords

0.3.2 - 1/12/16

- Allow more types in generated pig scripts

0.3.1 - 10/19/15

- Update cascading version to 2.7.0
- Report correct pigpen version to concurrent
- Update nippy to 2.10.0 & tune performance

0.3.0 - 5/18/15

- No changes

0.3.0-rc.7 - 4/29/15

- Fixed bug in local mode where nils weren't handled consistently

0.3.0-rc.6 - 4/14/15

- Add local mode code eval memoization to avoid thrashing permgen
- Added pigpen.pig/set-options command to explicitly set pig options in a script. This was previously available (though undocumented) by setting {:pig-options {...}} in any options block, but is now official.
0.3.0-rc.5 - 4/9/15

- Update core.async version

0.3.0-rc.4 - 4/8/15

- Memoize code evaluation when run in the cluster

0.3.0-rc.3 - 4/2/15

- Bugfixes

0.3.0-rc.2 - 3/30/15

- Parquet refactor. Local parquet loading no longer depends on Pig. Parquet schemas are now defined using Parquet classes.

0.3.0-rc.1 - 3/23/15

- Added Cascading support
  - pigpen.cascading/generate-flow - Generate a cascading flow from a pigpen query
  - pigpen.cascading/load-tap - Load data from an existing cascading tap
  - pigpen.cascading/store-tap - Store data using an existing cascading tap
- Added pigpen.core/keys-fn, a new convenience macro to support named anonymous functions. Like keys destructuring, but less verbose.
- New function-based operators to build more dynamic scripts. These are function versions of all the core pigpen macros, but you have to handle quoting user code manually. These were previously available, but not officially supported. Now they're alpha, but supported and documented. See pigpen.core.fn
- New lower-level operators to build custom storage and commands. These were previously available, but not officially supported. Now they're alpha, but supported and documented. See pigpen.core.op
- Breaking Changes
  - pigpen.core/script is now pigpen.core/store-many
  - pigpen.core/generate-script is now pigpen.pig/generate-script
  - pigpen.core/write-script is now pigpen.pig/write-script
  - pigpen.core/show is now pigpen.viz/show (requires dependency [com.netflix.pigpen/pigpen-viz "..."])
  - pig/dump has changed. The old version was based on rx-java, and still exists as pigpen.rx/dump. The replacement for pigpen.core/dump is now entirely Clojure based. The Clojure version is better for unit tests and small data. All stages are evaluated eagerly, so the stack traces are simpler to read. The rx version is lazy, including the load-* commands. This means that you can load a large file, take a few rows, and process them without loading the entire file into memory. The downside is confusing stack traces and extra dependencies. See here for more details.
  - The interface for building custom loaders and storage has changed. See here for more details. Please email [email protected] with any questions.

0.2.15 - 2/20/15

- Include sources in jars

0.2.14 - 2/18/15

- Avro updates

0.2.13 - 1/19/15

- Added load-avro in the pigpen-avro project
- Fixed the nRepl configuration; use gradlew nRepl to start an nRepl
- Exclude nested relations from closures

0.2.12 - 12/16/14

- Added load-csv, which allows for quoting per RFC 4180

0.2.11 - 10/24/14

- Fixed a bug (feature?) introduced by new rx version. Also upgraded to rc7. This would have only affected local mode where the data being read was faster than the code consuming it.

0.2.10 - 9/21/14

- Removed load-pig and store-pig. The pig data format is very bad and should not be used. If you used these and want them back, email [email protected] and we'll put it into a separate jar. The jars required for this feature were causing conflicts elsewhere.
- Upgraded the following dependencies:
  - org.clojure/clojure 1.5.1 -> 1.6.0 - this was also changed to a provided dependency, so you should be able to use any version greater than 1.5.1
  - org.clojure/data.json 0.2.2 -> 0.2.5
  - com.taoensso/nippy 2.6.0-RC1 -> 2.6.3
  - clj-time 0.5.0 - no longer needed
  - joda-time 2.2 -> 2.4 - pig needs this to run locally
  - instaparse 1.2.14 - no longer needed
  - io.reactivex/rxjava 0.9.2 -> 1.0.0-rc.1
- Fixed the rx limit bug. pigpen.local/*max-load-records* is no longer required.
0.2.9 - 9/16/14

- Fix a local-mode bug in pigpen.fold/avg where some collections would produce a NPE.
- Change fake pig delimiter to \n instead of \0. Allows for \0 to exist in input data.
- Remove 1000 record limit for local-mode. This was originally introduced to mitigate an rx bug. Until #61 is fixed, bind pigpen.local/*max-load-records* to the maximum number of records you want to read locally when reading large files. This now defaults to nil (no limit).
- Fix a local dispatch bug that would prevent loading folders locally

0.2.8 - 7/31/14

- Fix a bug in load-tsv and load-lazy

0.2.7 - 7/31/14 (don't use)

- Fix load-lazy and speed up both load-tsv and load-lazy
- Convert to multi-project build
- Added pigpen-parquet with initial support for loading the Parquet format

0.2.6 - 6/17/14

- Minor optimization for local mode. The creation of a UDF was occurring for every value processed, causing it to run out of perm-gen space when processing large collections locally.
- Fix (pig/return [])
- Fix (pig/dump (pig/reduce + (pig/return [])))
- Fix Longs in scripts that are larger than an Integer
- Memoize local UDF instances per use of pig/dump
- The jar location in the generated script is now configurable. Use the :pigpen-jar-location option with pig/generate-script or pig/write-script.

0.2.5 - 4/9/14

- Remove dump&show and dump&show+ in favor of pigpen.oven/bake. Call bake once and pass to as many outputs as you want. This is a breaking change, but I didn't increment the version because dump&show was just a tool to be used in the REPL. No scripts should break because of this change.
- Remove dump-async. It appeared to be broken and was a bad idea from the start.
- Fix self-joins. This was a rare issue as a self join (with the same key) just duplicates data in a very expensive way.
- Clean up functional tests
- Fix pigpen.oven/clean. When it was pruning the graph, it was also removing REGISTER commands.

0.2.4 - 4/2/14

- Fix arity checking bug (affected varargs fns)
- Fix cases where an Algebraic fold function was falling back to the Accumulator interface, which was not supported. This affected using cogroup with fold over multiple relations.
- Fix debug mode (broken in 0.1.5)
- Change UDF initialization to not rely on memoization (caused stale data in REPL)
- Enable AOT. Improves cluster perf
- Add :partition-by option to distinct

0.2.3 - 3/27/14

- Added load-json, store-json, load-string, store-string
- Added filter-by and remove-by

0.2.2 - 3/25/14

- Fixed bug in pigpen.fold/vec. This would also cause fold/map and fold/filter to not work when run in the cluster.

0.2.1 - 3/24/14

- Fixed bug when using for to generate scripts
- Fixed local mode bug with map followed by reduce or fold

0.2.0 - 3/3/14

- Added pigpen.fold
- Note: this includes a breaking change in the join and cogroup syntax as follows:

; before
(pig/join (foo on :f)
          (bar on :b optional)
          (fn [f b] ...))

; after
(pig/join [(foo :on :f)
           (bar :on :b :type :optional)]
          (fn [f b] ...))

Each of the select clauses must now be wrapped in a vector - there is no longer a varargs overload to either of these forms. Within each of the select clauses, :on is now a keyword instead of a symbol, but a symbol will still work if used. If optional or required were used, they must be updated to :type :optional and :type :required, respectively.
0.1.5 - 2/17/14 - Performance improvements - Implemented Pig's Accumulator interface - Tuned nippy - Reduced number of times data is serialized 0.1.4 - 1/31/14 - Fix sort bug in local mode 0.1.3 - 1/30/14 - Change Pig & Hadoop to be transitive dependencies - Add support for consuming user code via closure 0.1.2 - 1/3/14 - Upgrade instaparse to 1.2.14 0.1.1 - 1/3/14 - Initial Release
https://devhub.io/repos/Netflix-PigPen
CC-MAIN-2020-10
refinedweb
1,899
60.01
If you want to really give your robot personality, add your own routines to perform actions such as moving the eyebrows or squinting the nose. You will be surprised at the number of emotions that can be displayed.

Giving your robot the ability to speak can give it personality, but giving it the ability to understand speech can be even more powerful. With a little training and a good microphone, Windows can transform dictation into a Word document with an accuracy better than you might expect. Microsoft asserts that speech recognition works only with compatible programs, so most people don't realize that the system will work fine with applications written in many programming languages.

It is important to realize the above distinction. We are not saying you can use speech recognition in the IDE for a computer language, because that would require the language to be written so that verbal commands issued by the user are trapped and dealt with appropriately. In non-compatible programs, some commands may work properly (saying FILE, for example, brings down the File menu in the RobotBASIC IDE), but there will be many things that do not work because the program was not specifically designed to work with speech.

"As long as you do not try to use Microsoft's reserved commands, the application programs created with many languages can easily use Windows' speech engine as an input device."

To demonstrate this, let's look at a simple RobotBASIC example. RobotBASIC can obtain an input string from a user with a command such as:

INPUT A

When this command is executed, the user is expected to type in a response and then press the ENTER key to indicate they are finished. At that point, the variable A will contain whatever was typed. If Windows' speech recognition is turned on before the INPUT command is executed, then whatever is said will become the data for the variable A just as if it were typed, even though RobotBASIC is not a compatible program. The only problem is that the user must say the word enter (a keyword trapped by Windows) to get the INPUT command to finish and move on to the next statement in the program. Notice this is no different from having to press the ENTER key after you type text in response to a command.

The need to say the word enter can be eliminated by using some form of input that does not require pressing the ENTER key. One option for that in RobotBASIC is to create an EditBox as demonstrated by the program fragment in Figure 9. The code is commented, so if you are using a different language, just create similar actions utilizing the capabilities of your language.

// Create a name for the edit box
EditBox = "MyEditBox"
// Create the box and focus on it
AddEdit EditBox,100,100,500
FocusEdit EditBox
repeat
  // create random numbers
  x=random(600)+100
  y=random(400)+100
  r=random(200)+ 50
  //if the text changes, retrieve it
  if EditChanged (EditBox)
    t=ToLower(GetEdit(EditBox))
    if InString(t,"blue") then SetColor Blue
    if InString(t,"red") then SetColor Red
    if InString(t,"green") then SetColor Green
    if InString(t,"circle") then Circle(x-r, y-r,x+r,y+r)
    if t="square" then Rectangle(x-r,y-r,x+r,y+r)
  endif
until t="stop" or t="quit"

Figure 9. A zip file containing all the code from this series can be obtained from the In the News tab on the SERVO website.

The code in Figure 9 allows the user to draw randomly sized circles and squares at random places on the screen simply by saying the word circle or square. The user can also determine the color to use for future images with the words red, blue, or green. If you look carefully at the code, you will see that it is not checking to see if the spoken text is equal to one of our selected words. Instead, the code checks to see if the string obtained contains one of the key words. This is a powerful concept. It means that you can draw a circle by saying circle, but you can also draw a circle by saying please draw a circle. Notice also that each IF statement is independent (there are no ELSE statements). This means if you say draw a red circle, then the color will be set to red, because red is contained in the string. The circle will then be drawn because circle is contained in the string. Notice that this sequence works properly because the program checks for the names of the colors before it checks for the names of the shapes. Also notice that the program in Figure 9 will terminate if you say either of the words stop or quit. Notice too that if InString had been used for these cases as well, then even the phrases please stop or quit now would terminate the program. Providing multiple options like this can make your programs appear far more intelligent than they are.

The point is that you can use Windows' speech recognition with the input commands and functions found in many languages, as long as you take care not to use words that represent Windows' commands. For most robotic applications, this should not be a severe limitation.

As this series comes to an end, we hope we have convinced you to consider a Windows 8 tablet in your next project. Small microcontrollers can't offer the enormous computing capacity, let alone the versatility or ease of programming of a tablet PC. Combine this power with integrated sensors, cameras, and the ability to communicate verbally, and you will have the foundation for the robot many hobbyists have always dreamed of building. SV
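Readers who prefer Python can approximate the Figure 9 logic with nothing more than the standard library. The sketch below is our own code, not part of the article: it uses input() as the text source, which Windows dictation can feed exactly as described above, and the turtle module for drawing.

import random
import turtle

pen = turtle.Turtle()
pen.hideturtle()
pen.speed(0)

def draw_circle(x, y, r):
    pen.penup(); pen.goto(x, y - r); pen.pendown()
    pen.circle(r)

def draw_square(x, y, r):
    pen.penup(); pen.goto(x - r, y - r); pen.pendown()
    for _ in range(4):
        pen.forward(2 * r)
        pen.left(90)

while True:
    t = input("command: ").lower()
    x, y = random.randint(-200, 200), random.randint(-150, 150)
    r = random.randint(20, 80)
    # substring tests with independent ifs -- the same idea as the InString calls
    if "red" in t:    pen.pencolor("red")
    if "green" in t:  pen.pencolor("green")
    if "blue" in t:   pen.pencolor("blue")
    if "circle" in t: draw_circle(x, y, r)
    if "square" in t: draw_square(x, y, r)
    if "stop" in t or "quit" in t:
        break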
http://servo.texterity.com/servo/201403?pg=51&lm=1505325204000
CC-MAIN-2019-26
refinedweb
976
55.78
Utility class to construct Taproot outputs from an internal key and a script tree.

#include <standard.h>

Definition at line 225 of file standard.h.

Member function documentation:

- Add a new script at a certain depth in the tree. Add() operations must be called in depth-first traversal order of the binary tree. If track is true, the script will be included in the GetSpendData() output. (Definition at line 428 of file standard.cpp.)
- Like Add(), but adds a Merkle node with a given hash to the tree. (Definition at line 441 of file standard.cpp.)
- Combine information about a parent Merkle tree node from its child nodes. (Definition at line 336 of file standard.cpp.)
- Finalize the construction. Can only be called when IsComplete() is true; internal_key.IsFullyValid() must be true. (Definition at line 451 of file standard.cpp.)
- Compute the scriptPubKey (after Finalize()). (Definition at line 462 of file standard.cpp.)
- Compute the spending data (after Finalize()). (Definition at line 464 of file standard.cpp.)
- Insert information about a node at a certain depth, and propagate information up. (Definition at line 379 of file standard.cpp.)
- Return whether there were either no leaves, or the leaves form a Huffman tree. (Definition at line 308 of file standard.h.)
- Return true if so far all input was valid. (Definition at line 306 of file standard.h.)
- Check if a list of depths is legal (will lead to IsComplete()). (Definition at line 405 of file standard.cpp.)

Member data documentation:

The current state of the builder. For each level in the tree, one NodeInfo object may be present. m_branch[0] is information about the root; further values are for deeper subtrees being explored.

For every right branch taken to reach the position we're currently working in, there will be a (non-nullopt) entry in m_branch corresponding to the left branch at that level.

For example, imagine this tree:

        N0
       /  \
     N1    N2
    /  \  /  \
   A    B C   N3
             /  \
            D    E

Initially, m_branch is empty. After processing leaf A, it would become {nullopt, nullopt, A}. When processing leaf B, an entry at level 2 already exists, and it would thus be combined with it to produce a level 1 one, resulting in {nullopt, N1}. Adding C and D takes us to {nullopt, N1, C} and {nullopt, N1, C, D} respectively. When E is processed, it is combined with D, and then C, and then N1, to produce the root, resulting in {N0}.

This structure allows processing with just O(log n) overhead if the leaves are computed on the fly.

As an invariant, there can never be nullopt entries at the end. There can also not be more than 128 entries (as that would mean more than 128 levels in the tree). The depth of newly added entries will always be at least equal to the current size of m_branch (otherwise it does not correspond to a depth-first traversal of a tree). m_branch is only empty if no entries have ever been processed. m_branch having length 1 corresponds to being done.

Definition at line 283 of file standard.h.

- The internal key, set when finalizing. (Definition at line 285 of file standard.h.)
- The output key, computed when finalizing. (Definition at line 286 of file standard.h.)
- The tweak parity, computed when finalizing. (Definition at line 287 of file standard.h.)
- Whether the builder is in a valid state so far. (Definition at line 246 of file standard.h.)
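The m_branch description above amounts to a small stack algorithm. As an illustrative sketch only (plain Python, not Bitcoin Core code; the hashing follows BIP341's tagged-hash and sorted-TapBranch conventions, and the leaf hashes are simplified stand-ins rather than full TapLeaf commitments), merging leaves supplied in depth-first order looks like this:

import hashlib

def tagged_hash(tag, data):
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + data).digest()

def branch_hash(a, b):
    lo, hi = sorted([a, b])              # BIP341 sorts the two child hashes
    return tagged_hash("TapBranch", lo + hi)

def insert(branch, node, depth):
    # While a sibling subtree is already waiting at this depth,
    # combine with it and move the merged node one level up.
    while len(branch) == depth + 1 and branch[depth] is not None:
        node = branch_hash(branch[depth], node)
        branch.pop()
        depth -= 1
    while len(branch) <= depth:          # park the node at its depth
        branch.append(None)
    branch[depth] = node

def merkle_root(leaves):
    """leaves: (depth, leaf_hash) pairs in depth-first traversal order."""
    branch = []
    for depth, leaf in leaves:
        insert(branch, leaf, depth)
    assert len(branch) == 1 and branch[0] is not None, "tree is incomplete"
    return branch[0]

# The example tree from the documentation: A, B, C at depth 2; D, E at depth 3.
leaves = [(2, tagged_hash("TapLeaf", name.encode())) for name in "ABC"]
leaves += [(3, tagged_hash("TapLeaf", name.encode())) for name in "DE"]
print(merkle_root(leaves).hex())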
https://bitcoindoxygen.art/Core-master/class_taproot_builder.html
CC-MAIN-2021-49
refinedweb
578
68.87
Download script into Pythonista

Hi to everyone. I would like to know if there is some way to download scripts from my desktop PC into my iPhone. Thanks for your help. Alejandro.

There are various options; my current favorite is the script I've already linked to on Twitter. It basically starts an FTP server on your device that you can then connect to from a Mac/PC. If you're new here, it might seem a little odd that you need a custom script for something as basic as this, but unfortunately, it's not possible for me to include any kind of file import as a built-in feature because of Apple's app review guidelines. In short: downloading or importing executable code is not allowed.

You can even use this with an FTP client on your device to transfer files into Pythonista locally, e.g. Transmit, which has a share sheet extension.

EDIT: and now I see that @omz already thought of that ;)

I'm new here. After downloading .py and .pyui files into a Pythonista folder, how do I make them run? It reports errors: no module named numpy, no module named ui, etc.

- If you put just one line, import numpy, into a .py file and run it, does it report an error?
- If you put just one line, import ui, into a .py file and run it, does it report an error?
- If you go into Apple's App Store application, does it indicate that there is an upgrade available for Pythonista?

I would try uninstall/reinstall Pythonista, or update it. If you have no ui or numpy, you're running a pre-1.5 version. Make sure to use PythonistaBackup.py or something like it to save all of your scripts onto another device BEFORE you uninstall Pythonista, or your scripts could be lost forever.

Much obliged. It's a pre-1.5 version.

There's also a WebDAV server written in Python that you can execute in Pythonista. Search for "pythonista webdav" on GitHub.

I am a real neophyte regarding networking, so this may sound like a stupid question. I copied the PythonistaFTP file to my iPad and it appears to work correctly. But when I try to connect as guest to my MacBook Pro, the connection attempt times out without actually connecting. Does something in the wireless router need to be changed to enable the connection to go through? I have previously successfully made connections between my computer and portable devices on port 8080 without needing to configure the router, but perhaps port 2121 is different.

Try connecting to your device by IP address (192.168.x.x), not the name (my-device-name.local). You can find your device's IP address in the wi-fi settings by tapping on the network that you're currently connected to.

That did the trick. Thanks for the suggestion. Perhaps Ole could add this suggestion to the instructions that are printed to the console when the server starts, to aid other neophytes such as myself.

I have a general question about PythonistaFTP. I used Finder on the Mac to start an FTP session to verify that dgelessus' suggestion worked. After closing the link I noticed that a log of the communication was printed to the console, and that a new session was started and, after each file retrieved from Pythonista on the iPad, that session was then closed. I also noticed that at the start of the connection it seemed like there were two identical sessions opened that were not closed until the end of the communication. Is this the way that PythonistaFTP is supposed to work?
My limited knowledge is many years old, but in years gone by when I would manually use FTP, I would log in to the remote machine and keep the session open until I had moved all the files that I wanted. Only then would I log out of the session.

I am also trying to use the FTP server to manually sync scripts between my iPhone and iPad. I downloaded the script on both devices and had difficulty connecting from my Mac until I used the IP addresses instead of the device names. Not a big problem. I was then able to drag scripts from each device to a folder on my Mac. The problem is that I can't upload any of those scripts to the other device to complete the manual sync. It seems to be a permissions issue, because the FTP "drive" is read-only to me. Any thoughts?

I don't really have an answer to help you with uploading from a Mac to your mobile device, but I had the same problem. Uploads from the Mac are blocked for some reason. My solution was to send the file I wanted to upload to a Linux machine (Ubuntu 14.04 running Gnome). On the Linux machine the upload action to the iPad was essentially the same as used on the Mac. In the "Places" menu that opens a file manager there is a "Connect to Server" menu item. Because this worked on the Linux machine, the problem is apparently due to some security restrictions in place on the Mac. I, too, would be interested in a solution for the Mac if someone has one. Although I haven't verified that it works, I think that you can also do the transfer via the Windows 7 or greater file manager. Hopefully someone will be able to shed some more light on this.

Thanks for showing that it's not a general problem but specific to the Mac platform or some Mac configuration that I'm not familiar with.

@farf Hi, do you refer to the IP address in the iOS devices? In your Mac, what do you use as user and password? Is the Mac OS X you use Yosemite? Thanks in advance for your help.

@ManuelU You can find the IP address of your iOS device in the WiFi settings (tap the "(i)" next to your connected WiFi network in the Settings app). The local IP address of my iPad is 10.0.1.5, so I connect to ftp://10.0.1.5:2121. Select "Guest/Anonymous" when you're asked for authorization.

Anyone try this with iOS 9? Guest/Anonymous is logging: USER 'guest' failed login (twice, fwiw).

You might try adding

import logging
logging.basicConfig(level=logging.DEBUG)

before the lines that start the server. Then, check the console in Pythonista to see if there are any errors logged server side.
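The actual PythonistaFTP script isn't reproduced in this thread. As a rough sketch of what such a server does (not the linked script itself), here is a minimal anonymous FTP server using the third-party pyftpdlib package; note that the perm string is what decides whether desktop clients can upload, so a read-only perm like "elr" would produce exactly the upload failures described above:

from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
# "elradfmw" includes write/upload permissions; "elr" would be read-only.
authorizer.add_anonymous(".", perm="elradfmw")

handler = FTPHandler
handler.authorizer = authorizer

# Serve on all interfaces, port 2121, like the script discussed above.
server = FTPServer(("0.0.0.0", 2121), handler)
server.serve_forever()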
https://forum.omz-software.com/topic/1966/download-script-into-pythonista
CC-MAIN-2017-47
refinedweb
1,096
72.87
Tips for Designing COM-Friendly .NET Class Libraries

The design of COM Interoperability usually enables .NET class libraries to be usable from COM without any additional work or forethought by the library author. Still, there are some library-writing guidelines that, when followed, can greatly improve the COM-usability of .NET class libraries. Here are ten major guidelines:

1. Avoid creating members with names that conflict with method names of IUnknown or IDispatch, such as Release or Invoke. Such members are exposed to COM with mangled names to avoid collision.
2. Provide alternatives to parameterized constructors, because they are not directly exposed to COM.
3. Don't create APIs that rely on static members, because they are not easily accessible from COM.
4. Don't create APIs that rely on methods or properties of value types, because they are not directly exposed to COM.
5. Don't create APIs that rely on nested arrays, because the Interop marshaler does not support them.
6. Think twice before using overloaded methods, because they are exposed to COM with unintuitive and version-brittle names (suffixed with _2, _3, and so on).
7. Don't forget the benefits of interface-based programming. When defining public classes, it often makes sense to define a corresponding interface for the class to implement, and to always use the interface type rather than the class type in any public method, property, field, or event definitions. Also mark the class with ClassInterfaceAttribute and its ClassInterfaceType.None value to make the implemented interface the default.
8. Throw exception types defined in the mscorlib assembly whenever appropriate, because user-defined exceptions don't get the same special treatment by the Interop marshaler.
9. If you define a method that returns null (Nothing in VB .NET) to indicate failure, provide the option for it to throw an exception on failure instead. Because COM clients see S_OK returned whenever an exception isn't thrown, not throwing an exception may mistakenly lead them to believe the call succeeded when it really did not.
10. Use ComSourceInterfacesAttribute on classes that expose events, so they expose connection points to COM event sinks.
http://www.informit.com/articles/article.aspx?p=26994
CC-MAIN-2018-51
refinedweb
346
55.34
Simple linear regression with TensorFlow

To get started with TensorFlow and machine learning, we want to show you a simple example of how to do linear regression. Using this simple example, we can start exploring the TensorFlow APIs, get a feeling for machine learning, and also learn techniques which can be used to create more complex applications later on.

The task will be to find a line function y = m * x + b for a given set of data points that has the minimum mean squared distance to all the given data points. Each data point provides a value for x and a value for y. This means the model which we will create and train has to be able to find values for m and b, so that the resulting line function has a minimal distance to all data points.

If we plot an example of the given data points, it can look something like this: [scatter plot of the sample points]

Each point in the image above will be used as input to train our model.

Preparation

This tutorial assumes you already have Python installed. In case you have not installed it, head over to the official Python downloads page and install Python 3.6.6 (Python 3.7 is not supported by TensorFlow at the moment).

First of all you need to install TensorFlow. To do this, open up your console and enter

pip install tensorflow

or

pip install tensorflow-gpu

(If you want to use GPU support, you need an NVIDIA GPU and CUDA installed on your machine; see TensorFlow's GPU installation documentation.)

After installing TensorFlow you can verify your installation by starting the Python console and doing a simple import of TensorFlow.

Open the Python console by entering python in your Bash/Cmd/Powershell. Then enter:

import tensorflow as tf
tf.__version__

You should see the currently installed version of TensorFlow printed out. At the time this tutorial was created, the current version was '1.11.0'.

In addition to TensorFlow we also need Numpy and Matplotlib for this tutorial. Like we did with TensorFlow, open your console and enter:

pip install numpy
pip install matplotlib

Numpy will be used for creating our sample data and Matplotlib will be used to display the generated sample data and the resulting line when using our model.

Generate data

As our first step, we want to generate the necessary data on which we want to perform a linear regression. Create a new project in your preferred IDE (PyCharm, VsCode, Jupyter Notebook). We used Jupyter and the inline plot functionality of Matplotlib; therefore all plots are displayed without using the show function. In case you are using a different IDE, consider also calling show on the created plots.

Import the needed libraries:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# if you are using Jupyter and want to see the plot inlined
%matplotlib inline

Generate values for x and y using Numpy, for example (mirroring the test data created later in the tutorial):

x = np.linspace(0, 200, 100, dtype=np.float32) + np.random.uniform(-100, 100, size=100)
y = np.linspace(0, 200, 100, dtype=np.float32) + np.random.uniform(-100, 100, size=100)

This will create 100 values for x and 100 values for y between -100 and 300. Using Matplotlib you can display the values as seen in the image above with plt.plot(x, y, '*'). Your plot can differ in comparison to the image shown above because of the random values added.

Train a linear regression model

The easiest way to train a linear regression model is to use the predefined API of TensorFlow. For this kind of task, TensorFlow provides a class named LinearRegressor. You have to give it a set of input and expected values. In our example, the input values are all of our x values and the expected values are all of our y values. The y values are the expected values because our line function is y = m * x + b and we want to learn how to calculate y for a given x.
To be able to calculate y from x, our model has to learn the correct values for m and b that produce the minimum mean squared error among our given training data set. If you provide an input value and a known result value, this kind of training is called supervised learning. We input a value x into our model, for which it will try to calculate a resulting y. By knowing the correct result y in the training process, we are able to correct the model in case it is calculating a wrong value.

To set up the training process we create a so-called input function. This function will provide x and y to our LinearRegressor. TensorFlow has an easy-to-use API for creating an input function out of a Numpy array.

input_fn = tf.estimator.inputs.numpy_input_fn({"x": x}, y, shuffle=True)

As the first parameter of this function we provide our x values within a dictionary. It has to be in a dictionary because it is possible to provide multiple input values into a LinearRegressor to train more complex models. For example, this could be calculating housing prices by giving it a set of features like size, age, etc. In our example, x is the only input we need to calculate y. We are naming our input x with "x" in the input dictionary, so it can be identified later on.

The second parameter are our expected values, in our case y. The last parameter determines if the input values should be shuffled. This is normally a good idea when training complex models, because it shows the model the variations in the training data early on. The model does not optimize to a subset of the data, but to a random sample chosen from the whole dataset. Therefore we are setting shuffle to True.

Next we want to tell the generic LinearRegressor API what the input is and how it can get to it. To do this we create a numeric feature column. This tells the LinearRegressor that the input will be a number and that it can be identified with the key "x".

column_x = tf.feature_column.numeric_column("x")

Now that we have our input function and a mapping of x to a feature column, we can create our LinearRegressor

regressor = tf.estimator.LinearRegressor([column_x], model_dir="/tmp/tutorial/linear_regression")

and start training our model

for i in range(20):
    print("Running epoch", i+1)
    regressor.train(input_fn)

As mentioned above, a LinearRegressor can have multiple inputs. Therefore we provide our input column as an array. The model_dir is where our trained model will be saved locally. This can be changed to whatever folder you would like your model to reside in. After creating a LinearRegressor we start the training by calling train on it and passing it our input function as a parameter.

Normally it takes more than one training iteration (epoch) to get good results. That is why we are doing the training 20 times, meaning we are showing our generated data 20 times to our model. On each iteration the values m and b will be adjusted by TensorFlow, producing a more accurate result. While the training is running you can see the output printed to your console. The generated output looks like this, even though the concrete values may vary for you because of the random input data.

Running epoch 1
/tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:loss = 1541075.6, step = 0
INFO:tensorflow:Saving checkpoints for 1 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:Loss for final step: 1541075.6.
Running epoch 2
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tutorial/linear_regression\model.ckpt-1
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 1 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:loss = 1094541.0, step = 1
INFO:tensorflow:Saving checkpoints for 2 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:Loss for final step: 1094541.0.
Running epoch 3
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tutorial/linear_regression\model.ckpt-2
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 2 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:loss = 892589.5, step = 2
INFO:tensorflow:Saving checkpoints for 3 into /tmp/tutorial/linear_regression\model.ckpt.
INFO:tensorflow:Loss for final step: 892589.5.
...

As you can see, each epoch has a log statement showing us the current loss. This represents the calculated minimum mean squared distance our model is currently producing. As the training progresses, this number has to decrease; otherwise the training is not working correctly. In our case, running the training for 20 epochs on the training data produces a loss of 556745. Note that even though a smaller loss is always better, it can never reach 0 given our current input and expected values.

Verify our model

Now that we have successfully trained our model, we want to verify that the training has produced the desired result. To do this we create a new set of test input values for x and let our model calculate the corresponding y values.

x_test = np.linspace(0, 200, 2, dtype=np.float32) + np.random.uniform(-100, 100, size=2)

We also have to create a new input function using our new x_test. The difference this time is that we don't provide any values for y, because we now want to calculate them using our trained model.

test_input_fn = tf.estimator.inputs.numpy_input_fn({"x": x_test}, shuffle=False)

The feature column is the same as we used before; therefore we are still using "x" as the key in our input dictionary. We perform the calculation of the y values by calling predict and passing it our new input function.

result_iterator = regressor.predict(test_input_fn)

The result is an iterator. The iterator does not contain the values of y yet. These values will be calculated lazily by TensorFlow only when they are requested. To get the actual values, we use a list comprehension to iterate over all available results and put them into a simple array.

y_pred = [predictions["predictions"] for predictions in result_iterator]

The value within the iterator is a dictionary containing the key "predictions". Similar to passing in the input data as a dictionary, the result will be returned as a dictionary as well by the LinearRegressor.

Now that we have the calculated values for y, we want to visualize the result to see if the training was successful. To do this we use Matplotlib to plot the data set used for training, as well as the resulting line created with our trained model.

plt.plot(x, y, "*")
plt.plot(x_test, y_pred, 'r')

As you can see, the red line represents the line function learned by our model according to the given data set.
If you went back and ran the training several more times, it would most likely produce an even better approximation of the line function, with a smaller mean squared distance between all data points. You can download the Jupyter Notebook with the code used in this tutorial here.

In case you are interested in what the learned values for m and b are, you can use the function get_variable_value of the LinearRegressor and either pass in linear/linear_model/x/weights to get m or linear/linear_model/bias_weights to get b (a short sanity check using these values follows after the conclusion).

m = regressor.get_variable_value("linear/linear_model/x/weights")
b = regressor.get_variable_value("linear/linear_model/bias_weights")

Conclusion

In conclusion, it was pretty easy to create a simple linear regression with TensorFlow using just a few lines of code. This was only a small example to get you started, but the same techniques and APIs can be used to train more complex models with more inputs (features).
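Returning to get_variable_value for a moment: as a quick sanity check (my own addition, not part of the tutorial), you can recompute the predictions by hand from the learned parameters and compare them against the regressor's output. get_variable_value returns arrays whose exact shapes can vary between TensorFlow versions, so the scalars are extracted defensively with np.ravel.

import numpy as np

m_value = float(np.ravel(regressor.get_variable_value("linear/linear_model/x/weights"))[0])
b_value = float(np.ravel(regressor.get_variable_value("linear/linear_model/bias_weights"))[0])

# The model's prediction is just m*x + b, so this should print True.
y_manual = m_value * x_test + b_value
print(np.allclose(np.ravel(y_pred), y_manual, rtol=1e-3))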
https://arconsis.de/unternehmen/blog/simple-linear-regression-with-tensorflow
CC-MAIN-2019-18
refinedweb
1,959
55.13
how to create?? “EthTokenBalance” “EthTokenTransfers” “EthTransactions” “Item”

Hey @thomasmitchell, hope you are ok. Check here:

Carlos Z

@thecil, Ok, so I'm running that and all it's returning is addresses. I'd like to inspect each for a possible ERC20 token name, symbol, and decimals. Do you guys at Moralis have a convenience function or an indexed table of addresses and contract identities for that? Or am I stuck brute-force web3 calling every address on the list for its identity? It seems like that should be a Moralis job, not something to bog down my front-end with.

Yes, we do have a way sir

Carlos Z

@thecil if only that were true. Let's clarify my question. I've got getAllERC20() working fine, and pushed it to the screen. I've also merged it with price data from CoinGecko per the "I cloned Zerion" video (there's an error in coinGeckoTokenList.json that brings back the wrong token for "UNI". I/we/CoinGecko need to fix that).

As you can see, what I'm trying to do is a FinTech-style drop-down drawer for each line that lists all transactions filtered by that token. It should read something like

<Text>{list.date} {list.counterparty} {list.amount}</Text>

where list.counterparty is either an ENS or a raw address. But... I've got no, as we'd call it back in my day, "relation" aka "dictionary" of token contract addresses to filter the transactions on.

import { Flex, Text } from "@chakra-ui/react";
import { useEffect } from "react";
import { useMoralis } from "react-moralis";

export const TransactionList = (props) => {
  const { isAuthenticated, Moralis } = useMoralis();

  useEffect(() => {
    if (isAuthenticated) {
      // Pass the function itself, not its result:
      // .then(console.log()) would hand then() undefined and log nothing.
      Moralis.Web3.getTransactions().then(console.log);
    }
  }, [isAuthenticated, Moralis]); // re-run only when the auth state changes

  return (
    <Flex justifyContent="center">
      <Text>
        Transaction List: {props.tokenName}, {props.symbol}
      </Text>
    </Flex>
  );
};

All getTransactions() brings back is pure addresses. And I don't have a table to relate { name:'Uniswap' || symbol:'UNI' } to { "eth": "0x1f9840a85d5af5bf1d1762f925bdaddc4201f984" }. I stole that address information from but I'd rather pull it from Moralis. Thoughts?
https://forum.moralis.io/t/how-to-create-these-tables/963
CC-MAIN-2021-43
refinedweb
337
58.79
A Python library for scraping the Google search engine.

googlesearch

googlesearch is a Python library for searching Google, easily. googlesearch uses requests and BeautifulSoup4 to scrape Google.

Installation

To install, run the following command:

python3 -m pip install googlesearch-python

Usage

To get results for a search term, simply use the search function in googlesearch. For example, to get results for "Google" in Google, just run the following program:

from googlesearch import search
search("Google")

Additional options

googlesearch supports a few additional options. By default, googlesearch returns 10 results. This can be changed. To get 100 results on Google, for example, run the following program:

from googlesearch import search
search("Google", num_results=100)

In addition, you can change the language Google searches in. For example, to get results in French, run the following program:

from googlesearch import search
search("Google", lang="fr")

googlesearch.search

googlesearch.search(str: term, int: num_results=10, str: lang="en") -> list
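The options can also be combined. Here is a small illustrative example of my own (not from the project's README); it assumes, per the signature above, that search returns a list of result URLs:

from googlesearch import search

# Fetch 20 French-language results and print each URL.
for url in search("Google", num_results=20, lang="fr"):
    print(url)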
https://pypi.org/project/googlesearch-python/
CC-MAIN-2020-45
refinedweb
187
56.66
I'll assume that we all know what pidls are and how the shell namespace uses them. That's the prerequisite for today.

A simple pidl is an item ID list that refers to a file or directory that may not actually exist. It's a way of playing "what if": "If there were a file or directory at this location, here is what I would have created to represent it." For the times you care enough to send the very fake.

We've seen these things in action with the SHGFI_USEFILEATTRIBUTES flag, which tells the SHGetFileInfo function, "Pretend that the file/directory exists with the attributes I specified, and tell me what the icon would be, were that item to actually exist." Internally, the SHGetFileInfo function creates one of these "simple pidls", and then asks the simple pidl for its icon.

Note that a simple pidl is really a special case of a pidl created from a WIN32_FIND_DATAW. When you parse a display name with a custom bind context, and the bind context has a STR_FILE_SYS_BIND_DATA bind context object, then that object is used to control the information placed into the pidl instead of getting the information from the file system.

Here's a program that creates a simple pidl and then does a few simple things with it. (Note that the Parsing with Parameters sample covers this topic too, so if you don't like the way I did it, you can look to see how somebody else did it.)

#define STRICT_TYPED_ITEMIDS
#include <new>
#include <windows.h>
#include <ole2.h>
#include <oleauto.h>
#include <shlobj.h>
#include <propkey.h>
#include <atlbase.h>
#include <atlalloc.h>

class CFileSysBindData : public IFileSystemBindData
{
public:
  static HRESULT CreateInstance(
    _In_ const WIN32_FIND_DATAW *pfd,
    _In_ REFIID riid, _Outptr_ void **ppv);

  // *** IUnknown ***
  IFACEMETHODIMP QueryInterface(
    _In_ REFIID riid, _Outptr_ void **ppv)
  {
    *ppv = nullptr;
    HRESULT hr = E_NOINTERFACE;
    if (riid == IID_IUnknown ||
        riid == IID_IFileSystemBindData) {
      *ppv = static_cast<IFileSystemBindData *>(this);
      AddRef();
      hr = S_OK;
    }
    return hr;
  }

  IFACEMETHODIMP_(ULONG) AddRef()
  {
    return InterlockedIncrement(&m_cRef);
  }

  IFACEMETHODIMP_(ULONG) Release()
  {
    LONG cRef = InterlockedDecrement(&m_cRef);
    if (cRef == 0) delete this;
    return cRef;
  }

  // *** IFileSystemBindData ***
  IFACEMETHODIMP SetFindData(_In_ const WIN32_FIND_DATAW *pfd)
  {
    m_fd = *pfd;
    return S_OK;
  }

  IFACEMETHODIMP GetFindData(_Out_ WIN32_FIND_DATAW *pfd)
  {
    *pfd = m_fd;
    return S_OK;
  }

private:
  CFileSysBindData(_In_ const WIN32_FIND_DATAW *pfd) : m_cRef(1)
  {
    m_fd = *pfd;
  }

private:
  LONG m_cRef;
  WIN32_FIND_DATAW m_fd;
};

HRESULT CFileSysBindData::CreateInstance(
  _In_ const WIN32_FIND_DATAW *pfd,
  _In_ REFIID riid, _Outptr_ void **ppv)
{
  *ppv = nullptr;
  CComPtr<IFileSystemBindData> spfsbd;
  HRESULT hr = E_OUTOFMEMORY;
  spfsbd.Attach(new (std::nothrow) CFileSysBindData(pfd));
  if (spfsbd) {
    hr = spfsbd->QueryInterface(riid, ppv);
  }
  return hr;
}

The CFileSysBindData object is extraordinarily boring. It simply implements IFileSystemBindData, which is a simple interface that just babysits a WIN32_FIND_DATAW structure. (There is also an IFileSystemBindData2 interface which babysits a little more information, but for the purpose of this program, we're interested only in the WIN32_FIND_DATAW.)

HRESULT CreateBindCtxWithOpts(
  _In_ BIND_OPTS *pbo, _Outptr_ IBindCtx **ppbc)
{
  CComPtr<IBindCtx> spbc;
  HRESULT hr = CreateBindCtx(0, &spbc);
  if (SUCCEEDED(hr)) {
    hr = spbc->SetBindOptions(pbo);
  }
  *ppbc = SUCCEEDED(hr) ? spbc.Detach() : nullptr;
  return hr;
}

A bind context is basically a string-indexed associative array of COM objects. There is also a BIND_OPTS (or BIND_OPTS2) structure in there, but the things most people care about are the object parameters. They provide an extensible method of passing arbitrary parameters to a function.
(Think of it as the COM version of the JavaScript convention of jamming random junk into an Options parameter.) You start with an IBindCtx parameter, and any time you need to add a new flag or parameter, you just stuff it into the IBindCtx. If you just want to add a new boolean flag, you can even ignore the contents of the object parameter and merely base your behavior on whether the parameter exists at all.

HRESULT AddFileSysBindCtx(
  _In_ IBindCtx *pbc, _In_ const WIN32_FIND_DATAW *pfd)
{
  CComPtr<IFileSystemBindData> spfsbc;
  HRESULT hr = CFileSysBindData::CreateInstance(
    pfd, IID_PPV_ARGS(&spfsbc));
  if (SUCCEEDED(hr)) {
    hr = pbc->RegisterObjectParam(STR_FILE_SYS_BIND_DATA, spfsbc);
  }
  return hr;
}

To add a file system bind parameter, you just create an object which implements IFileSystemBindData and register it with the bind context under the string STR_FILE_SYS_BIND_DATA.

HRESULT CreateFileSysBindCtx(
  _In_ const WIN32_FIND_DATAW *pfd, _Outptr_ IBindCtx **ppbc)
{
  CComPtr<IBindCtx> spbc;
  BIND_OPTS bo = { sizeof(bo), 0, STGM_CREATE, 0 };
  HRESULT hr = CreateBindCtxWithOpts(&bo, &spbc);
  if (SUCCEEDED(hr)) {
    hr = AddFileSysBindCtx(spbc, pfd);
  }
  *ppbc = SUCCEEDED(hr) ? spbc.Detach() : nullptr;
  return hr;
}

The CreateFileSysBindCtx function simply combines the two steps of creating a bind context and then adding a file system bind parameter to it. In casual conversation, a bind context is often named after the parameter inside it. In this case, we have a bind context with a file system bind parameter, so we call it a "file system bind context".

HRESULT CreateSimplePidl(
  _In_ const WIN32_FIND_DATAW *pfd,
  _In_ PCWSTR pszPath, _Outptr_ PIDLIST_ABSOLUTE *ppidl)
{
  *ppidl = nullptr;
  CComPtr<IBindCtx> spbc;
  HRESULT hr = CreateFileSysBindCtx(pfd, &spbc);
  if (SUCCEEDED(hr)) {
    hr = SHParseDisplayName(pszPath, spbc, ppidl, 0, nullptr);
  }
  return hr;
}

This is where everything comes together. To create a simple pidl, we take the WIN32_FIND_DATAW containing the metadata we want to use, put it inside a file system bind context, then use that bind context to parse the file name. The presence of a file system bind context tells the parser, "Trust me on this, just go with what's in the bind context." It suppresses all disk access, and the final pidl will describe an item that exactly matches the metadata you provided, whether that accurately reflects reality or not. (You can also pass the bind context to SHCreateItemFromParsingName if you prefer to get an IShellItem.)

Okay, let's take this out for a spin.

void DoStuffWith(_In_ PCIDLIST_ABSOLUTE pidl)
{
  // Print the file name
  wchar_t szBuf[MAX_PATH];
  if (SHGetPathFromIDListW(pidl, szBuf)) {
    wprintf(L"Path is \"%ls\"\n", szBuf);
  }

  // Print the file size
  CComPtr<IShellFolder2> spsf;
  PCUITEMID_CHILD pidlChild;
  if (SUCCEEDED(SHBindToParent(pidl, IID_PPV_ARGS(&spsf), &pidlChild))) {
    CComVariant vt;
    if (SUCCEEDED(spsf->GetDetailsEx(pidlChild, &PKEY_Size, &vt))) {
      if (SUCCEEDED(vt.ChangeType(VT_UI8))) {
        wprintf(L"Size is %I64u\n", vt.ullVal);
      }
    }
  }
}

int __cdecl wmain(int argc, PWSTR argv[])
{
  CCoInitialize init;
  if (SUCCEEDED(init)) {
    WIN32_FIND_DATAW fd = {};
    fd.dwFileAttributes = FILE_ATTRIBUTE_NORMAL;
    fd.nFileSizeLow = 42;
    CComHeapPtr<ITEMIDLIST_ABSOLUTE> spidlSimple;
    if (SUCCEEDED(CreateSimplePidl(&fd, L"Q:\\Whatever.txt", &spidlSimple))) {
      DoStuffWith(spidlSimple);
    }
  }
  return 0;
}

Our test program asks for a simple pidl to Q:\Whatever.txt, and then prints information from it.
Observe that the creation of the simple pidl succeeds even though you probably don't have a Q: drive, and even if you did, the code never tried to access it. And when we ask the pidl, "Hey, what's the file size?" it retrieves the fake value 42 we passed in the WIN32_FIND_DATAW structure.

Sure, that was kind of artificial, but so-called simple pidls are handy if you want to talk about an object on slow media (such as a network share) without actually accessing the target device.

Exercise: What changes are necessary in order to create a simple pidl that refers to a file with illegal characters in its name? Hint: STR_NO_VALIDATE_FILENAME_CHARS.

SHSimpleIDListFromPath()?

There is a design flaw with these simple pidls and SHChangeNotify* for namespace extensions with a junction point somewhere in the filesystem (IIRC). SHCNF_PATH will create a simple pidl with the "folder data" in an MS-internal format when really the pidl data is supposed to be defined by the IShellFolder the item is in. If the junction uses the foldername.{guid} "registration" you can probably detect it, not so much if it uses desktop.ini or the registry.

SHParseDisplayName() never uses the cFileName field of the WIN32_FIND_DATA?

@Raymond: The fact that the pidl is simple is a shell32 implementation detail. Many third-party applications need to inform the shell about some change, and some of them use SHCNF_PATH. A namespace extension has no say in whether the pidl is simple or not, but to handle the shell notifications it has to ignore the fact that other people's pidls are supposed to be a black box.

The file isn't guaranteed to exist for a non-simple pidl. The only thing that differs from a simple pidl is that the "file" probably existed in some of the layers in the software/hardware stack some time ago.

Simple-pidl. The O.J. Simpson of pidls: "I didn't modify the filesystem, but if I did, this is what it would look like."

Looks like the Hallmark link has changed to corporate.hallmark.com/…/Brand-Legacy instead of corporate.hallmark.com/…/Brand-Legacy
https://blogs.msdn.microsoft.com/oldnewthing/20130503-00/?p=4463/
CC-MAIN-2017-13
refinedweb
1,364
51.78
or view pane on the right side of Explorer. A third will implement IEnumIDList and is responsible for maintaining and providing the "items" in both the tree view and the content pane. We will also add a few more classes when the time is right, but for now, let's create a new ActiveX DLL project called DemoSpace. We'll begin by adding three classes. Table 11.7 describes the classes and the interfaces they will implement. Add these to the project.

We're going to have to write more than just a few lines of code to "wire up" this namespace extension. Previously, we were able to discuss one class at a time, write the code for it, enter a few registry entries, and we were done. Not so in this case. Things won't make sense if we do it that way. A namespace extension has a certain flow, and we need to follow that flow to make the best sense of it all. Therefore, we will be doing some jumping around between these (and other) classes. Also, as mentioned previously, some code in a namespace extension is generic, and more of it is not. This distinction will be noted whenever applicable.

As you might expect, we have some vtable swaps for this class. In this case, there are two methods that need to be swapped: CompareIDs and GetUIObjectOf. But this time we have a variation of the swap. Remember, each of these objects operates independently of the others. There might be a case in which two instances of ShellFolder exist at one time. This presents a little problem, which, fortunately, has a simple solution.

If you remember, vtables are shared between every instance of a class. All addresses of the methods that comprise a class are the same for each instance. What does this mean to us? Well, if you haven't noticed, we have been swapping these functions in the Initialize and Terminate events of the class. When a second instance of ShellFolder is instantiated, the functions will be swapped again. Consider this call:

m_pOldCompareIDs = SwapVtableEntry(ObjPtr(pFolder), _
                                   8, _
                                   AddressOf CompareIDsX)

The first time this function is called, the address of the CompareIDs method is swapped out with the CompareIDsX function defined in Demospace.bas. Now, if a second instance of ShellFolder is instantiated before the first instance terminates, this call will be made again. But remember, vtables are global for every instance of a class. So, on the second call, the vtable for the class already contains the address of CompareIDsX. Basically, all that happens in this case is that the same address is copied into the vtable. So our address swapping in the Initialize event is not a problem. The problem lies in the Terminate event, when we swap the addresses back. If the first instance terminates, swapping the functions back, the second instance is no longer bound to the proper methods. A crash is sure to result.

We will get around this by actually reference counting ShellFolder ourselves. We will maintain a public counter declared in the Demospace.bas code module that is incremented every time Initialize is called and is decremented every time Terminate is called. If the counter is 0 when we terminate, we'll know it's safe to swap the methods back. There are four methods that need to be swapped: BindToObject, CompareIDs, CreateViewObject, and GetUIObjectOf. Let's look at the code for Class_Initialize and Class_Terminate, which is shown in Example 11.6.
'Declared in Demospace.bas
Public g_FolderSwapRef As Long

'ShellFolder.cls
Private m_pOldBindToObject As Long
Private m_pOldCompareIDs As Long
Private m_pOldCreateViewObj As Long
Private m_pOldGetUIObjectOf As Long

Private Sub Class_Initialize()
    Set m_pMalloc = GetMalloc
    If g_FolderSwapRef = 0 Then
        Dim pFolder As IShellFolder
        Set pFolder = Me
        m_pOldBindToObject = SwapVtableEntry(ObjPtr(pFolder), _
            6, AddressOf BindToObjectX)
        m_pOldCompareIDs = SwapVtableEntry(ObjPtr(pFolder), _
            8, AddressOf CompareIDsX)
        m_pOldCreateViewObj = SwapVtableEntry(ObjPtr(pFolder), _
            9, AddressOf CreateViewObjectX)
        m_pOldGetUIObjectOf = SwapVtableEntry(ObjPtr(pFolder), _
            11, AddressOf GetUIObjectOfX)
    End If
    g_FolderSwapRef = g_FolderSwapRef + 1
End Sub

Private Sub Class_Terminate()
    g_FolderSwapRef = g_FolderSwapRef - 1
    If (g_FolderSwapRef = 0) Then
        Dim pFolder As IShellFolder
        Set pFolder = Me
        m_pOldBindToObject = SwapVtableEntry(ObjPtr(pFolder), _
            6, m_pOldBindToObject)
        m_pOldCompareIDs = SwapVtableEntry(ObjPtr(pFolder), _
            8, m_pOldCompareIDs)
        m_pOldCreateViewObj = SwapVtableEntry(ObjPtr(pFolder), _
            9, m_pOldCreateViewObj)
        m_pOldGetUIObjectOf = SwapVtableEntry(ObjPtr(pFolder), _
            11, m_pOldGetUIObjectOf)
    End If
    Set m_pMalloc = Nothing
End Sub

We will use this same reference counting technique for ShellView and EnumIDList, as well. Each of the Class_Initialize and Class_Terminate events for both of these classes will increment and decrement a counter. Class_Terminate will only swap back the methods in the vtable when the counter is equal to zero.

We will come back to BindToObject, CompareIDs, CreateViewObject and GetUIObjectOf later, since they are significant methods in the grand scheme of things. For now, take note of the private member variable m_pMalloc. All of the primary classes in the namespace extension will use IMalloc to allocate memory for PIDLs. In the Class_Initialize event, we call GetMalloc to retrieve a reference to this interface. GetMalloc is shown in Example 11.7.

Public Function GetMalloc() As IMalloc
    Dim pMalloc As IMalloc
    Dim lpMalloc As Long
    Dim hr As Long

    hr = SHGetMalloc(lpMalloc)
    If (hr = S_OK) Then
        CopyMemory pMalloc, lpMalloc, 4
        Set GetMalloc = pMalloc
    End If
End Function

GetMalloc primarily wraps the function SHGetMalloc, which returns the IMalloc reference to us. SHGetMalloc is found in shell32.dll and is defined like so:

Public Declare Function SHGetMalloc Lib "shell32.dll" _
    (lpMalloc As Long) As Long

We used CopyMemory before when we had to deal with raw interface addresses. The difference here is that an AddRef is actually being performed when the function returns using Set. So it is safe to set the interface equal to Nothing when we are finished with it.

Now that we have laid the groundwork for ShellFolder, let's continue with implementing the methods the object will need to support. We'll start with GetClassID, only because it stands in the way of more important matters, and then move on from there.

The action (albeit there's not much of it) begins with IPersistFolder. IPersistFolder contains one method, Initialize, that will not be implemented. But the method must return S_OK, or the whole works come tumbling down. In order to return S_OK, we can just leave the method empty. VB handles the rest:

Private Sub IPersistFolder_Initialize( _
    ByVal pidl As VBShellLib.LPCITEMIDLIST)
    'Must return S_OK
End Sub

Because IPersistFolder is "derived" from IPersist, it also contains GetClassID.
We've seen this method more than a few times now (see Section 5.3.1 in Chapter 5), but we've never actually implemented it. Let's do that now. Example 11.8 contains the implementation.

Private Sub IPersistFolder_GetClassID(lpClassID As VBShellLib.clsid)
    Dim clsid As GUID
    Dim sProgID As String

    sProgID = "DemoSpace.ShellFolder"
    CLSIDFromProgID StrPtr(sProgID), clsid
    lpClassID = VarPtr(clsid)
End Sub

This method is quite simple. It is just required to return the CLSID for the object implementing IShellFolder. Not a string representation of the CLSID, mind you, but the actual 128-bit number. This is accomplished by calling CLSIDFromProgID, which is declared as follows:

Public Declare Function CLSIDFromProgID Lib "ole32.dll" _
    (ByVal lpszProgID As Long, pClsid As GUID) As Long

This method takes a pointer to a program identifier and to a GUID structure, which, as you might recall, is defined like so:

Public Type GUID
    Data1 As Long
    Data2 As Integer
    Data3 As Integer
    Data4(7) As Byte
End Type

With that out of the way, we are ready to create the view object. This function is responsible for creating the object that will manage the view. For the most part, this function is generic. Example 11.9 shows the function in its entirety. The most exciting thing about this method is that it is one of the few times we actually have to call IUnknown::QueryInterface ourselves. It has that "I just got my hands dirty" feel to it, doesn't it?

This method, like BindToObject, is also passed a reference to an IID. Under Windows 9x and NT, this IID always appears to be IShellView. However, under Windows 2000, the shell sometimes asks for IShellLink. We haven't discussed this latter interface, and we're not going to. But the gist of the IShellLink interface is that it is used for shortcuts. Specifically, this is used to accommodate distributed link tracking, which is a feature of Windows 2000 that enables client applications to track link sources that have moved. CreateViewObject needs to return E_OUTOFMEMORY in the event the shell requests an interface other than IShellView. Therefore, this method is swapped in the vtable with a replacement function.

'Demospace.bas
Public Const IID_IShellView = "{000214E3-0000-0000-C000-000000000046}"

Public Function CreateViewObjectX(ByVal this As IShellFolder, _
                                  ByVal hwndOwner As hWnd, _
                                  ByVal riid As REFIID, _
                                  ppvOut As LPVOID) As Long
    CreateViewObjectX = E_OUTOFMEMORY

    Dim iid As String
    iid = GetIID(riid)

    If iid = IID_IShellView Then
        'Get reference to current shell folder.
        Dim pShellFolder As ShellFolder
        Set pShellFolder = this

        'Create new view.
        Dim pShellView As ShellView
        Set pShellView = New ShellView

        'Pass folder info to view.
        pShellView.Initialize pShellFolder, pShellFolder.pidl

        'Query view for IShellView.
        Dim pUnk As IUnknownVB
        Set pUnk = pShellView
        pUnk.QueryInterface riid, ppvOut

        Set pUnk = Nothing
        Set pShellView = Nothing
        Set pShellFolder = Nothing

        CreateViewObjectX = S_OK
    End If
End Function

This quite possibly could be a generic implementation, but look at the call to ShellView.Initialize (not to be confused with Class_Initialize). This call is the equivalent of a C++ constructor. Note, though, that Initialize is not a method defined by the IShellView interface; we've implemented it purely to pass information to the view object class right when it is created. So in this case, we pass an object reference to ShellFolder and a PIDL.
PIDLs should contain everything needed to describe the items they represent, so this implementation might suffice (it does for all three example extensions). But there might be times when your own Initialize event will require something a little more exotic. It's up to you. Whatever your view object might need, this is the place to pass it. Anyway, this version passes a PIDL to the view. The view object will use this PIDL to populate its list view control with folders and items.

After we have created an instance of ShellView, we get a reference to IUnknownVB (our no-holds-barred version of IUnknown, which is discussed in detail in Chapter 6), and we call QueryInterface with the riid and ppvOut parameters given to us by the shell. With this done, the shell now has a reference to our view object and will call IShellView::CreateViewWindow, which is the method that actually creates the view window and places it in the content pane.

We need to add the Initialize method to ShellView so that the view object can receive the object reference to ShellFolder and the PIDL that represents that folder. We'll do that first, then we will implement CreateViewWindow. Initialize is shown at the bottom of Example 11.10.

'ShellView.cls
Private m_pidl As LPITEMIDLIST
Private m_parentFolder As ShellFolder
Private m_pidlMgr As pidlMgr

Private Sub Class_Initialize()
    Set m_pidlMgr = New pidlMgr
End Sub

Private Sub Class_Terminate()
    Set m_pidlMgr = Nothing
End Sub

Public Sub Initialize(f As ShellFolder, ByVal pidl As LPITEMIDLIST)
    Set m_parentFolder = f
    m_pidl = m_pidlMgr.Copy(pidl)
End Sub

Don't worry about the code pertaining to pidlMgr. For now, just know that it is a class we will use to help us work with PIDLs. We'll get to the pidlMgr class when I talk about EnumIDList.

The purpose of this function could not be clearer. Its name gives it away. CreateViewWindow is responsible for creating the view window and returning the handle of that window back to the shell. The function is fairly easy to implement, but there is quite a bit going on. The syntax of this method is as follows:

HRESULT CreateViewWindow(
    IShellView *lpPrevView,
    LPFOLDERSETTINGS lpfs,
    IShellBrowser *psb,
    RECT *prcView,
    HWND *phWnd
);

The first parameter, lpPrevView, is a pointer to the view window that was exited before our view object was created. This could be any view window, depending on where we were in the namespace before our extension was activated. This could also be a previous instance of our view object. The Platform SDK also says that this value could be NULL. In any case, we will not use the value. But it could come in handy if you want to communicate with the previous view in your own extension, possibly as an optimization.

The second parameter, lpfs, is very important. It's the address of a FOLDERSETTINGS structure. We don't need to go into details with this structure, but it is important. We will cache this value for later and give it right back to the shell. This parameter is how the shell maintains the state of the view (which, in this case, means one of the views defined by Explorer: Web Page, Large Icons, Small Icons, List, and Details) when jumping between namespace extensions.

The third parameter, psb, is a reference to IShellBrowser. We will cache this value, as well. Later, we'll use it for a variety of tasks, such as adding menu items and displaying text in Explorer's status bar.
The view object will also make use of this parameter to handle browsing into folders from the view pane side of things (versus the tree view side).

The fourth parameter, prcView, is the address of a RECT structure that contains the coordinates of the view pane. We'll create a local instance of this structure using CopyMemory and size our view window to these values.

Last, but not least, we have an HWND, which is an [in, out] parameter. So, when the view window has been created, we will use this parameter to pass the handle back to the shell. CreateViewWindow is shown in Example 11.11. Take a look, and then we'll discuss the details.

'ShellView.cls
Private m_folderSettings As FOLDERSETTINGS
Private m_frmView As frmView
Private m_pShellBrowser As IShellBrowser

Private Sub IShellView_CreateViewWindow( _
    ByVal lpPrevView As VBShellLib.IShellView, _
    ByVal lpfs As VBShellLib.LPCFOLDERSETTINGS, _
    ByVal psb As VBShellLib.IShellBrowser, _
    ByVal prcView As VBShellLib.LPRECT, phWnd As VBShellLib.hWnd)

    Dim dwStyle As DWORD
    Dim parentWnd As hWnd
    Dim rc As RECT

    'Save folder settings
    CopyMemory m_folderSettings, ByVal lpfs, Len(m_folderSettings)

    'Get window rect
    CopyMemory rc, ByVal prcView, Len(rc)

    Set m_frmView = New frmView

    parentWnd = psb.GetWindow

    dwStyle = GetWindowLong(m_frmView.hWnd, GWL_STYLE)
    dwStyle = dwStyle Or WS_CHILD Or WS_CLIPSIBLINGS
    SetWindowLong m_frmView.hWnd, GWL_STYLE, dwStyle

    SetParent m_frmView.hWnd, parentWnd

    MoveWindow m_frmView.hWnd, rc.Left, rc.Top, _
        rc.Right - rc.Left, rc.bottom - rc.Top, True

    ShowWindow m_frmView.hWnd, SW_SHOW

    phWnd = m_frmView.hWnd

    Set m_pShellBrowser = psb
    Set m_frmView.ShellBrowser = m_pShellBrowser

    FillList
End Sub

After the FOLDERSETTINGS have been saved and the view window has been created and sized to the RECT structure, things get a little interesting. First, we call IShellBrowser::GetWindow (IShellBrowser is actually derived from IOleWindow) to get the handle to the content pane window in Explorer. Once we have that, we can use the GetWindowLong and SetWindowLong Win32 API functions to change the style bits of our window and transform it into a child window. The SetParent API function allows us to set the parent of our newly born child to the window given to us by IShellBrowser. Once this has all been accomplished, we can position our view according to prcView using MoveWindow and then use ShowWindow to display our view.

But we are not quite done. Before we exit the method, we need to give the handle to our view back to the shell. We will also save a private copy of IShellBrowser and give another copy to the view window. Finally, we call FillList to populate our view with items (we'll come back to this function in Section 11.6.9 later in this chapter). FillList is a function we will create to handle populating of the list view.

Of course, for any of the code in Example 11.12 actually to work, we need a view window. This is easy enough. Add a form to the project called frmView and do the following:

1. Set its BorderStyle property equal to "None."
2. Add a list view control called "ListView."
3. Add a column header to the list view called "Items."
4. Add an ImageList control.

Now, we need to add a ShellBrowser property to the form, which CreateViewWindow will use to provide the form with a reference to IShellBrowser. The list view also needs to be resized to the form whenever Explorer is resized, so we'll use the MoveWindow API in a resize event to handle the job.
Also, the form will eventually need to work with PIDLs, so we'll add a private instance of the mysterious pidlMgr class to the form as well. The code for frmView is shown in Example 11.12.

Option Explicit

Private m_pidlMgr As pidlMgr
Private m_pShellBrowser As IShellBrowser

Private Sub Form_Load()
    Set m_pidlMgr = New pidlMgr
End Sub

Private Sub Form_Resize()
    MoveWindow ListView.hWnd, 0, 0, Me.Width, Me.Height, 1
End Sub

Private Sub Form_Unload(Cancel As Integer)
    Set m_pidlMgr = Nothing
    Set m_pShellBrowser = Nothing
End Sub

Public Property Set ShellBrowser(sb As IShellBrowser)
    Set m_pShellBrowser = sb
End Property

The remaining methods of IShellView, with the exception of UIActivate, can now be implemented. UIActivate, though, will have to wait until later, because it will be different for every namespace that you create. The last of the IShellView methods are very simple to implement. Each requires a few lines of code. Let's get them out of the way; then we can get to the EnumIDList class.

DestroyViewWindow

This method is called when Explorer wants to terminate the view window. When this happens we simply unload the form:

Private Sub IShellView_DestroyViewWindow()
    Unload m_frmView
    Set m_frmView = Nothing
End Sub

GetCurrentInfo

This method is called when the shell wants the current folder settings. These folder settings were cached in IShellView::CreateViewWindow (Example 11.11), so all we have to do is pass them back to the shell:

Private Sub IShellView_GetCurrentInfo( _
    ByVal lpfs As VBShellLib.LPFOLDERSETTINGS)
    CopyMemory ByVal lpfs, m_folderSettings, Len(m_folderSettings)
End Sub

GetWindow

The only responsibility of this method is to return the handle to the view object:

Private Function IShellView_GetWindow() As VBShellLib.hWnd
    IShellView_GetWindow = m_frmView.hWnd
End Function

Refresh

This method is called whenever the view is refreshed (i.e., View Refresh is selected from Explorer's menu, or F5 is pressed). This method is fairly generic, but it is possible your needs could be greater. This implementation merely clears the list view and repopulates it:

Private Sub IShellView_Refresh()
    SendMessage m_frmView.ListView.hWnd, LVM_DELETEALLITEMS, 0, 0&
    FillList
End Sub

The remaining methods (with the exception of UIActivate) are not implemented.

EnumObjects

The shell tells the namespace extension to prepare the data that it wants displayed by calling IShellFolder::EnumObjects. The primary responsibility of this method is to create an object that implements IEnumIDList, which it will pass back to the shell. This object, in our case EnumIDList, is responsible for maintaining the list of PIDLs that represent the items the shell will display in either the tree view or the list view. Let's implement IShellFolder::EnumObjects; then we will move on to the EnumIDList class and see how that works. EnumObjects is shown in Example 11.13.

'ShellFolder.cls
Private m_iLevel As Integer

Private Function IShellFolder_EnumObjects( _
    ByVal hwndOwner As VBShellLib.hWnd, _
    ByVal grfFlags As VBShellLib.DWORD) As VBShellLib.IEnumIDList

    Dim e As New EnumIDList
    e.CreateEnumList m_iLevel, grfFlags
    Set IShellFolder_EnumObjects = e
End Function

To implement EnumObjects, all we have to do is create an instance of EnumIDList, which is our class that implements IEnumIDList. Then we pass this object back to Explorer. Look at the call to EnumIDList::CreateEnumList. Before we give EnumIDList over to the shell, we need to actually create the list of items that it will wrap.
CreateEnumList is not a method of IEnumIDList; it is a public function we'll add to EnumIDList for the purpose of creating the list of items. It works like this: CreateEnumList will build a linked list of PIDLs that will be maintained internally by the EnumIDList class. This list of PIDLs contains one or more folders or items for a particular level of the namespace hierarchy. When the shell is ready for these items, it will call IEnumIDList::Next for a PIDL. Our implementation of IEnumIDList::Next will give the shell a PIDL from this internally maintained linked list. This happens repeatedly until there are no more PIDLs left in the list.

The two parameters to CreateEnumList require some explanation. m_iLevel is the current "level" where we are in the hierarchy. Look back at Figure 11.6 for a moment. The folders and items are in the following format: Type/Level/Index. The m_iLevel parameter represents this level. The second parameter, grfFlags, which is given to us by the shell, is quite important. This will be a value from the following SHCONTF enumeration:

typedef enum tagSHCONTF{
    SHCONTF_FOLDERS = 32,
    SHCONTF_NONFOLDERS = 64,
    SHCONTF_INCLUDEHIDDEN = 128,
} SHCONTF;

This flag lets us know whether the shell wants "folders" or "items" when it asks us to build the PIDL list. We will use this information to make sure we comply with the shell's request.

Before we actually implement CreateEnumList, let's get the Class_Initialize and Class_Terminate events out of the way. They are shown in Example 11.14.

'EnumIDList.cls
Implements IEnumIDList

Private m_pMalloc As IMalloc
Private m_pidlMgr As pidlMgr
Private m_pOldNext As Long

Private Sub Class_Initialize()
    Set m_pMalloc = GetMalloc
    Set m_pidlMgr = New pidlMgr

    'Swap
    If (g_EnumSwapRef = 0) Then
        Dim pEnumIDList As IEnumIDList
        Set pEnumIDList = Me
        m_pOldNext = SwapVtableEntry(ObjPtr(pEnumIDList), 4, _
            AddressOf NextX)
    End If
    g_EnumSwapRef = g_EnumSwapRef + 1
End Sub

Private Sub Class_Terminate()
    DeleteList
    Set m_pidlMgr = Nothing
    Set m_pMalloc = Nothing

    g_EnumSwapRef = g_EnumSwapRef - 1
    If (g_EnumSwapRef = 0) Then
        Dim pEnumIDList As IEnumIDList
        Set pEnumIDList = Me
        m_pOldNext = SwapVtableEntry(ObjPtr(pEnumIDList), 4, _
            m_pOldNext)
    End If
End Sub

Notice the call to DeleteList in the Class_Terminate event. We'll talk about this function in Section 11.5.3 later in this chapter, but for now, just know that it is a function that will be called to free the linked list we will create for the PIDLs.

Now on to CreateEnumList. This function will be different for every namespace extension. But its purpose is always the same: to build a list of PIDLs that will be used by IEnumIDList::Next. Let's look at the function, which is shown in Example 11.15; then we'll discuss its nuances.
'DemoSpace.bas
Public Const g_nMaxLevels = 5

'EnumIDList.cls
Public Function CreateEnumList(ByVal iLevel As LPITEMIDLIST, _
                               ByVal dwFlags As DWORD) As Boolean
    Dim i As Integer
    Dim pidlNew As LPITEMIDLIST

    CreateEnumList = False

    If iLevel < g_nMaxLevels Then
        For i = 0 To iLevel
            pidlNew = m_pidlMgr.Create(PT_FOLDER, iLevel, i)
            If (pidlNew) Then
                AddToEnumList pidlNew
            End If
            CreateEnumList = True
        Next i
    End If

    'Enumerate the non-folder items (values)
    If (dwFlags And SHCONTF_NONFOLDERS) Then
        iLevel = iLevel - 1
        If iLevel <= g_nMaxLevels Then
            For i = 0 To iLevel - 1
                pidlNew = m_pidlMgr.Create(PT_ITEM, iLevel, i)
                If (pidlNew) Then
                    AddToEnumList pidlNew
                End If
            Next i
            CreateEnumList = True
        End If
    End If
End Function

First, the level is checked for validity. The hierarchy is restricted to five levels in this example by the constant g_nMaxLevels. If you look at Figure 11.6, you will see that the hierarchy contains folders with the levels 0-4. We will use a For...Next loop to create the folders and items based on this level number that was passed in to the function. But keep this in mind: the implementation of this function is completely arbitrary. If you look at the example code for the sample RegSpace application, this function is implemented in a totally different manner. It uses the registry enumeration API functions to build the list of PIDLs.

The PIDL itself is created with a call to pidlMgr::Create. We will talk about this method in detail in Section 11.6 later in this chapter. For now just look at the call itself. If you examine the parameters to this function, you will see three values: the PIDL type (folder or item), the level of the PIDL item, and the index of the PIDL item. This is the format of our PIDL. If you remember, the PIDL is nothing more than two bytes that specify the size of the PIDL's data, followed by whatever data we want (terminated by an empty ITEMIDLIST). Therefore, our PIDL format is the following:

size/type/level/index

pidlMgr::Create will create a PIDL in this format for us. We determine whether we are creating folders or non-folders by the dwFlags parameter. Once we have the PIDL, we need to maintain it in a list of some sort. We will use an internal linked list to maintain our PIDLs. To understand how it works, you need to look at the following structure:

Public Type PIDLLIST
    pNext As Long
    pidl As LPITEMIDLIST
End Type

The pidl member is easy to understand: it contains the PIDL we want to keep track of. The pNext member contains a pointer to another structure of type PIDLLIST, which is the next PIDL in the list. Using this method, we can chain a list of PIDLs together (see Figure 11.7). This is much more efficient than using ReDim Preserve to build a variable-length array, so don't limit linked lists to namespace extensions. They are good any time you have a variable-length list of data that needs to be traversed efficiently.

Our EnumIDList class will contain three private member variables that correspond to the first member of the list, the current member of the list, and the last member of the list. AddToEnumList uses this information to determine where the next PIDL will go into the list and adjusts these list pointers accordingly. Let's examine the AddToEnumList function, which is shown in Example 11.16.
'EnumIDList.cls
Public m_pFirst As Long
Public m_pCurrent As Long
Public m_pLast As Long

Public Function AddToEnumList(ByVal pidl As LPITEMIDLIST) As Boolean
    Dim aPidlList As PIDLLIST
    Dim pNewItem As Long

    AddToEnumList = False

    'Allocate memory for enum linked list item
    pNewItem = m_pMalloc.Alloc(Len(aPidlList))
    If (pNewItem > 0) Then
        aPidlList.pNext = 0&
        aPidlList.pidl = pidl
        CopyMemory ByVal pNewItem, aPidlList, Len(aPidlList)

        If (m_pFirst = 0) Then
            m_pFirst = pNewItem
            m_pCurrent = m_pFirst
        End If

        If (m_pLast > 0) Then
            CopyMemory aPidlList, ByVal m_pLast, Len(aPidlList)
            aPidlList.pNext = pNewItem
            CopyMemory ByVal m_pLast, aPidlList, Len(aPidlList)
        End If

        m_pLast = pNewItem
        AddToEnumList = True
    End If
End Function

We'll use the shell's memory allocator for the first time to allocate the memory for the new linked-list item. The PIDL is assigned to the PIDLLIST structure, and pNext is set to 0&. Note that the ampersand in the assignment statement is important. This is a long value that is a NULL address. This marks the end of the list. The first time AddToEnumList is called, m_pFirst and m_pCurrent are both assigned to the PIDLLIST link item. Thereafter, the pNext member of m_pLast is assigned to the new item, and the new item is added to the end of the list.

When the shell starts calling IEnumIDList::Next for PIDLs, we will pass back whatever PIDL is pointed to by m_pCurrent. m_pCurrent will then be adjusted to point to the next item in the linked list. Because we have allocated the memory for the linked list ourselves, when EnumIDList terminates, we free the list using a call to DeleteList (see Example 11.14). DeleteList is shown in Example 11.17.

Private Sub DeleteList()
    Dim aPidlList As PIDLLIST
    Dim pNode As Long

    Do While (m_pFirst > 0)
        pNode = m_pFirst
        CopyMemory aPidlList, ByVal pNode, Len(aPidlList)
        m_pFirst = aPidlList.pNext
        If (aPidlList.pidl > 0) Then
            m_pidlMgr.Delete aPidlList.pidl
        End If
        'Free the linked-list node itself, not just the PIDL it holds
        m_pMalloc.Free pNode
    Loop

    m_pFirst = 0
    m_pCurrent = 0
    m_pLast = 0
End Sub

Starting with m_pFirst, DeleteList merely copies the link item into a local instance of PIDLLIST, adjusts m_pFirst to point to the next PIDL in the list, then frees the current PIDL (which is now in aPidlList) by calling pidlMgr::Delete. This function merely wraps a call to IMalloc::Free. The node itself, which we allocated with IMalloc::Alloc in AddToEnumList, is freed directly.

Shortly after we have built our linked list of PIDLs, the shell begins to call several functions repeatedly in an effort to display the PIDLs appropriately. It will call IEnumIDList::Next for the PIDL itself. It will call IShellFolder::GetAttributesOf to find out whether this PIDL is a folder or an item. It will call IShellFolder::GetDisplayNameOf for the display text of the PIDL. And it will call IShellFolder::CompareIDs to determine in which order it should display the PIDLs. Then it will call IEnumIDList::Next again. This process repeats until there are no more PIDLs. The process looks like this:

1. Get PIDL.
2. Determine attributes: is it a "File" or a "Folder"?
3. Get the display name of the PIDL.
4. Compare this PIDL to a previous PIDL to determine the display order.
5. Start over.

As we mentioned earlier, when the shell calls the Next method, we will give it the next PIDL in our linked list via the rgelt parameter; this is whatever is pointed to by m_pCurrent. m_pCurrent is then adjusted to point to the next item in the list. If m_pCurrent is equal to 0, we know that we are at the end of the list, so we return S_FALSE. The shell will also expect us to tell it how many PIDLs we are returning.
Although we will not do this, this method can be written to accommodate returning several PIDLs at once. This process is demonstrated in Example 11.18. Remember, this method has undergone a vtable swap; therefore, it exists in a code module. 'DemoSpace.bas Public Function NextX(ByVal this As IEnumIDList, _ ByVal celt As ULONG, _ rgelt As LPITEMIDLIST, _ pceltFetched As ULONG) As Long Dim cEnumIDList As EnumIDList Set cEnumIDList = this NextX = S_FALSE pceltFetched = 0 rgelt = 0 If cEnumIDList.m_pCurrent = 0 Then Exit Function End If Dim aPidlList As PIDLLIST CopyMemory aPidlList, _ ByVal cEnumIDList.m_pCurrent, _ Len(aPidlList) rgelt = aPidlList.pidl cEnumIDList.m_pCurrent = aPidlList.pNext pceltFetched = 1 NextX = S_OK End Function
https://flylib.com/books/en/1.107.1.79/1/
CC-MAIN-2019-47
refinedweb
5,074
63.9