Kubernetes

Kubernetes discovery is a service discovery option. It implements the ServiceDiscovery plugin, which lets go-chassis perform service discovery in a Kubernetes cluster based on Services.

Import Path

kube discovery is a service discovery plugin that must be imported explicitly in your application code:

import _ "github.com/go-chassis/go-chassis-plugins/registry/kube"

Configurations

If you set cse.service.Registry.serviceDiscovery.type to "kube", then "configPath" is required so the plugin can communicate with the Kubernetes cluster. Go-chassis consumer applications will find the Endpoints and Services of the provider applications deployed in the cluster.

NOTE: Provider applications built with go-chassis must be deployed as Pods associated with Services. The Service ports must be named, and the port name must take the form <protocol>[-<suffix>]. protocol can be set to rest or highway.

cse:
  service:
    Registry:
      serviceDiscovery:
        type: kube
        configPath: /etc/.kube/config

To see a detailed use case of how to use kube discovery with go-chassis, please refer to this example.
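Since the port-naming requirement is easy to miss, here is a minimal sketch of a provider Service whose port name follows the <protocol>[-<suffix>] convention. The metadata name, selector, and port numbers are hypothetical; adapt them to your deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: provider-server        # hypothetical name
spec:
  selector:
    app: provider-server       # must match the provider Pod's labels
  ports:
    - name: rest               # port name must be <protocol>[-<suffix>], e.g. rest or rest-admin
      port: 8080
      targetPort: 8080
```

Without the named port, kube discovery cannot tell which protocol the endpoint speaks, so consumers will not be able to select it.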
https://rajtestrst.readthedocs.io/en/latest/discovery/kube-discovery.html
Debug Guide For Web Analytics And Tag Management

In this guide, we'll use Google Analytics 4 and Google Tag Manager as examples. However, the guide's topics can be extended to any web analytics tools and tag management solutions out there, because the use cases are universal.

Browser developer tools

Your best friend in debugging issues with your web analytics and tag management systems in the web browser is the browsers' own developer tools. Developer tools are a suite of utilities that you can use to investigate various aspects of the dynamic web page the user is inspecting.

The image above lists the keyboard shortcuts for opening the developer tools in various browsers and operating systems. Note that on Safari, you'll want to open Safari -> Preferences -> Advanced and check the box next to Show Develop menu in menu bar.

All of the browsers above utilize a similar set of developer tools with just small differences in the UI and the chosen semantics. The most useful tools we'll focus on are:

The element inspector, which lets you look at the dynamic representation of the web page (the Document Object Model). This is particularly useful for inspecting injected tracking scripts and custom HTML elements added by tag management solutions.

The network debugger, which lets you inspect and analyze outgoing HTTP requests sent from the page (or embedded frames). This is one of the most important tools for you, as it will tell you the truth about what is actually sent from the user's browser.

The JavaScript console, which lets you run JavaScript code in the context of the current page (or embedded frame). Vital for recognizing issues with the JavaScript implementations of your analytics scripts.

The sources browser, which lets you browse through the JavaScript resources loaded on the current page (or embedded frames).
Great for identifying mismatches between what you expected the JavaScript file to contain vs. what’s actually contained within. The application storage, which lets you investigate the browser storage (cookies, localStorage, etc.) utilized on the current page (or embedded frames). Instrumental for understanding why some trackers might not be able to persist data consistently. In this section, we’ll walk through these different tools with more detail, so that you’ll have a better understanding of how to use them to unravel some of the more complicated debugging cases you run into. Note! This section uses mainly Chrome’s developer tools in examples. It’s possible the actual actions are (slightly) different in other browsers. Element inspector The element inspector lets you analyze the DOM (Document Object Model) of the page. When the browser loads the HTML source it retrieves from the site’s web server, it renders it into a dynamic tree model, accessible via JavaScript. This dynamic representation is the DOM. Whenever an analytics snippet or a tag manager interacts with the page, such as by loading a third-party JavaScript library or adding a click listener somewhere, they modify the DOM. Thus, if you have problems with your implementation, it’s useful to search through the element inspector to find what the issue could be. The element inspector is also extremely useful when you’re building triggers or variables in your tag management system with the purpose of scraping some information from the page or adding listeners to and injecting elements into the document. Here are a couple of cool tricks you can try in the element inspector. Hit CMD + F / CTRL + F to open a search bar. You can then search for a string (e.g. text content) or a CSS selector to have the inspector locate a specific element. For example, to search for all elements with the class name simmer, you can type .simmer into the search bar and the inspector will return all the matches. 
If you're unsure whether Google Tag Manager actually added the Google Analytics 4 script on the page, for example, you could search for gtag/js to find all elements that have that particular string somewhere in them. Since GA4 is loaded from the /gtag/js?id=G-... URL, the inspector should find the GA4 script element, unless it's actually missing.

Select a specific element on the page itself

You can also do the reverse. If you click the little element selector icon (this differs from browser to browser) in the developer tools panel, you can actually select an element on the page to reveal it in the element inspector. In addition to pinpointing it in the element inspector, the tool also displays some useful information about the element in question.

Interact with the element context menu

When you select an element in the inspector with the right mouse button, you'll see a context menu. You can do a lot of cool stuff with this, such as:

Add / edit the element's attributes. I use this a lot to change link URLs to something else, when I want to test a specific outgoing link, for example.

Edit / delete the element itself. Sometimes it's useful to just delete an element entirely (e.g. an overlay that hides the page underneath). And sometimes you'll want to edit the entire HTML structure of an element when you want to add new contents to it for testing a click listener, for example.

Copy selector, as this lets you copy a CSS selector path to the element itself. Extremely useful when configuring triggers in Google Tag Manager, for example. Just note that the CSS selector path is usually needlessly complex, and you can trim it down to be more generic in order to avoid subtle changes in the DOM from breaking your triggers.

Break on… lets you pause the page whenever the element or its subtree (nested elements) changes.
This is pretty useful when you want to inspect what happens when an element becomes visible or when an element’s content is dynamically changed (think form validation error messages, for example). Change element styles When you’ve selected the element, you can change the associated styles with the developer tools. This might have marginal use for analytics debugging, but occasionally you might use it to make an element visible or move it around when you can’t find it in the actual page. Network debugger Often you’ll find that your tag management system and/or your analytics debugger extension claims that your hit has fired, but no data is collected into the vendor endpoint. At this point, you need to open the Network tab in the developer tools, as this lists all the HTTP requests sent from the browser. The network debugger is the ultimate truth. If a request doesn’t appear here, then the browser has not sent it. So if your tag management system says that a GA4 hit has fired but you see no evidence of a collect?v=2 request in the network debugger, then the hit has not been sent. It might be useful to check the Preserve log option, as this doesn’t clear the log of requests between page loads. This is very useful when you want to debug what happened on a previous page. For a more detailed walkthrough of the network debugger, check out this excellent resource. Here are some of the things you can do with the network debugger. Filter for specific requests By typing something into the filter field, you can show only the HTTP requests that match the filter. The filter searches through the request URL and the request body. When you select a request in the debugger, a small overlay opens with various tabs. Headers includes all the HTTP headers of the request and the response. This is useful for debugging things like cookies, referrer values, and where the request was actually sent. Payload contains the query parameters and the request body. 
Extremely important when analyzing what was actually sent to the analytics platform.

Response shows what the web server actually responded with. This is often empty or nonsensical with pixel requests, but it's useful when analyzing API responses or JavaScript files.

Cookies lists all the cookies that were sent with the request and set in the response.

Block a request from future loads

You can right-click any request in the list and choose Block request URL or Block request domain to prevent the page from being able to send the request until the block is removed. When the resource URL (or domain) is blocked, any requests that match the URL (or domain) will be blocked in future page loads. This is extremely useful when you want to figure out which JavaScript resource is causing conflicts on the page. By blocking each one-by-one and then reloading the page, you'll be able to pinpoint the file that causes issues.

To unblock a resource, press the ESC key to toggle the Drawer, select the Network request blocking tab, and uncheck the box next to the request (or delete it with the respective buttons). If you don't see the tab, you can select it by clicking the Drawer menu (the three vertical dots) in the top-left corner of the Drawer. You can also right-click the blocked resource in the network debugger and choose Unblock….

Manually throttle the page speed

If you want to test how the page loads on a slower connection (might be useful every now and then), you can choose a throttling speed from the network tools. Remember to switch back to No throttling when done, though, or you'll have to continue suffering with slowly loading pages.

Pinpoint the initiator of the request

In the Initiator column, you can see the resource or process that caused the request to fire. By clicking the resource in the column, it will jump to the relevant part of the developer tools to show what process initiated the request.
If the initiator is (index), it means the request originated from a resource injected into the DOM, which means clicking it will open the element inspector with the element selected. If the initiator is a JavaScript file or a CSS file, for example, then developer tools will open the Sources tab with the relevant part of the source file code selected. This might not be very useful in itself, but you can use the Sources panel to add a breakpoint to the injection moment in order to replay through the stack after a page reload (more on this below in the Sources section). JavaScript console The JavaScript console lets you execute any JavaScript commands in the context of the current page. The context of the page is the DOM, and you can use DOM methods to interact with page elements. Here are some use cases for the JavaScript console. Inspect logs Many tools output debug messages to the JavaScript console, so the first course of action whenever you run into trouble is to open the JavaScript console to see if there are any relevant errors shown. It’s a good idea to add some console.log(msg) calls in whatever code you add yourself via a tag management system, for example. That way you can debug the log output in the JavaScript console – often this is the easiest way to debug code execution. Check the value of global variables In browser JavaScript, global variables are stored in the window object. If you want to check whether a global dataLayer array exists, you can simply type window.dataLayer and press enter. If the console shows undefined, it means that dataLayer does not exist in global scope. You can also set new global variables simply by initializing them in the window object: window.myGlobalVariable = "Hello, I am a global variable."; Add an event listener with a breakpoint This is a cool trick. 
Whenever any JavaScript that runs on the page uses the debugger; keyword, the page will pause and you will be diverted to the Sources tab where you can then walk through the stack (more on this below). If you want to try this out, you can add a click listener on the page with debugger; in the callback. Then, when you click anywhere on the page, the execution will pause and you'll be taken to the Sources tab.

window.addEventListener('click', () => { debugger; })

Change the context to an iframe

The JavaScript console is initially bound to the context of the top frame (the URL the user is currently navigating). If the page embeds other content in iframes, you can use the frame selector in the console to change context to the iframe. This is, of course, particularly useful when debugging embedded content such as third-party shopping carts or widgets. Note that the frame selector is confusingly in different parts of the developer tools across different browsers. The screenshot above is from Chrome (and it's the same on Edge). With Firefox the context picker is in the top-right corner of the developer tools, and with Safari it's in the bottom-right corner of the developer console.

Sources

The sources panel can be game-changing, but it can also be very overwhelming to use. The panel lists all of the subresources (images, scripts, frames, etc.) loaded on the current page. The list is sorted by URL, starting with the origin and then drilling down to paths and files. The sources panel would deserve a separate article in itself due to the many, many different things you can do with it. However, here I've listed some of the most useful things you can tamper with.

Open a file, pretty-print it, and search through it

When you open a minified JavaScript file (as they tend to be in order to save space and bandwidth), the panel asks if you want to pretty-print the file. Always do this.
By pretty-printing the file, the code is displayed in a more readable format, even if the code itself is still minified. You can then hit CMD + F (or CTRL + F) to open the search bar. The search will look through the formatted file for all references to the searched string.

Add a breakpoint to the file

A breakpoint is an instruction for the browser to halt page execution and allow the user to see the exact state of the page at the time of the breakpoint. To add a breakpoint, open a file in the Sources panel, right-click the row number you want to halt execution on, and choose Add breakpoint. When you then reload the page (or cause the code to be evaluated some other way), the browser will stop at the breakpoint and give you a bunch of options for how to proceed. Some useful tools here include:

Look through the Scope list. Here are listed all the variables in Local (current function context) scope, Closure (scope of the function that initiated the closure code), and Global scope.

Walk back through the Call Stack menu, as this will let you inspect the full execution thread all the way to where it was initialized. This is a great way to find just where the JavaScript started breaking down.

Use the step methods (see screenshot below) to continue progressing through the code, pausing after every line.

To remove the breakpoint (a good idea once you're done debugging), expand the Breakpoints list and uncheck the breakpoints you want to deactivate.

Search through the sources

Sometimes it can be useful to simply do a string search through the sources. For example, if you find GTM's dataLayer is overwritten, you could even search for something as simple as dataLayer= in order to find out if it's been overwritten by custom code in some other file. You can find the search tool by opening the Drawer (press ESC) and selecting the Search tab. Here, you can do a string search and developer tools will search through all the source files for a match.
If you do find a match, you can click it to jump to the relevant part of the source file. Add local overrides This can be extremely helpful in testing your page. You can actually edit / add / remove things in the page HTML or the linked JavaScript resources by adding local overrides. When you give the browser permission to write local overrides, you can then freely edit any of the files in the Sources panel, save them, and when you reload the page the saved and modified file will be used instead of the one loaded over the network. In Chrome, you’ll see a little purple button next to the file if it has a local override. To stop using overrides, just open the Overrides tab in the Sources panel and uncheck “Enable Local Overrides”. Application storage The application storage tab shows all the browser storage used on the current page (and in embedded frames). This includes things like cookies, localStorage, and sessionStorage. You can use this tab to create, edit, and delete items in browser storage. It’s a great resource to identify problems with persistence in your analytics tools, for example. Here are some of the things you can do with application storage. Inspect first-party and third-party storage It’s a bit confusing, but you can see multiple domains in both the navigation column as well as in the actual storage list. First, all the domains listed in the navigation column ( simoahava.com and gtmtools.com in the screenshot above), represent URL origins loaded in frames. For example, in this case simoahava.com was the URL of the top frame, and the page I was on loaded an iframe from gtmtools.com. In the Application tab I can choose gtmtools.com to inspect the cookies of that particular context. In the list of cookies, on the other hand, if you see multiple different domains listed it’s because the page is sending requests to these domains. If the browser supports third-party cookies, you’ll see all the cookies set on the domains the browser is communicating with. 
If the browser blocks third-party cookies, you’ll only see cookies set on the site of the top frame (the URL the user is browsing). Create, edit, and delete storage By clicking any storage entry and pressing backspace / delete, you can remove that particular item from the browser storage. You can edit any storage values by double-clicking the value you want to edit in the relevant column. You can create a new cookie (or other storage entry) by double-clicking the empty row at the bottom of the storage list. You can then input any values you want into the columns. Debug web analytics implementations Once you have a feel for the developer tools, you can start debugging actual issues with your implementations. Please note that since the scope of things you can do with JavaScript is so immensely vast, it’s impossible to exhaustively list all the things that you should do when debugging your setup. In this chapter, I’ve listed some of the more common ways to approach implementation issues. If you have other, common use cases you think should be listed here, please let me know in the comments! Check browser cookies Many analytics tools utilize browser storage to persist things like user and client identifiers. Google Analytics 4 uses the _ga and _ga_<measurementId> cookies to store information about the user and the session, respectively. If you see too many (or too few) users in your GA4 data compared to the count of sessions, for example, it might be that there’s something wrong with the browser cookies. So open the Application tab and look for _ga. If you don’t find any cookies with that name or prefix, it’s possible that there’s something wrong with your implementation and the cookie is either not being set or it’s being set incorrectly. A typical mistake is overwriting the cookie_domain field with something incorrect. The tracker tries to set the cookie on an invalid domain and the cookie write fails. 
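To spot-check those GA cookies quickly from the JavaScript console, here is a small sketch. This is hand-rolled parsing, not a GA API; in the browser you would pass document.cookie, while the example below uses a sample string so it also runs under Node:

```javascript
// Sketch: parse a document.cookie-style string and list GA cookies,
// i.e. cookies whose names start with "_ga" (the GA4 defaults).
function gaCookies(cookieString) {
  return cookieString
    .split(';')
    .map((pair) => pair.trim().split('='))
    .filter(([name]) => name.startsWith('_ga'))
    .map(([name, ...rest]) => ({ name, value: rest.join('=') }));
}

// In the browser console: gaCookies(document.cookie)
const example = '_ga=GA1.1.123.456; _ga_ABC123=GS1.1.1.1.0.0; other=x';
console.log(gaCookies(example));
```

If the returned list is empty on a page where GA4 should be running, that points to the cookie-write problems described above.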
It might be useful to have the GA Debugger extension enabled, as the JavaScript console will inform you if the cookie write failed for some reason. Another very common reason for cookies failing is that the page is actually running in a cross-site embedded iframe, and due to third-party cookie restrictions the page is unable to set the cookies correctly.

Filter network requests

You'll also want to look through the network requests to see if your endpoint is actually receiving the data. Google Analytics requests are sent to the /collect path, with the domain www.google-analytics.com or analytics.google.com (the latter if Google Signals is enabled). If you see a request to /collect with a status code of 200, it means that your request was most likely collected by the endpoint successfully. If you see an aborted request (or some other error), it's typically because the request happened just as the user was navigating away from the page (although this is less common with Google Analytics 4). If you don't see the request at all, it means that something is preventing the request from being dispatched. Again, using the GA Debugger extension is a good idea, because the extension logs information into the console that might be of use.

Identify conflicts in global methods

Sometimes, rarely, the global methods used by your web analytics tool have been overwritten by some other tool. This is a constant danger when using global methods, and Google Analytics using the ga namespace for Universal Analytics is an example of a global name that can easily be taken by some other, unrelated tool. Sometimes it's difficult to find issues with the global namespace, because using tools like Google Tag Manager might make global methods irrelevant. But if you're having trouble getting your requests to work, you can always check if the global methods that your tool uses (e.g. ga, gtag, dataLayer) are either undefined or set to something that doesn't resemble your analytics platform code at all.
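To make that undefined-or-overwritten check concrete, here is a sketch. The global names ga, gtag, and dataLayer are the usual defaults but may differ in your setup, and a mock object stands in for window so the snippet also runs under Node:

```javascript
// Sketch: classify analytics globals on a window-like object.
// "missing"  = never defined
// "suspect"  = defined but not the expected shape
//              (e.g. dataLayer is no longer an array)
// "present"  = looks plausible
function checkAnalyticsGlobals(win) {
  const report = {};
  for (const name of ['ga', 'gtag', 'dataLayer']) {
    const value = win[name];
    if (typeof value === 'undefined') {
      report[name] = 'missing';
    } else if (name === 'dataLayer' ? !Array.isArray(value) : typeof value !== 'function') {
      report[name] = 'suspect';
    } else {
      report[name] = 'present';
    }
  }
  return report;
}

// Mock window: gtag is defined, dataLayer has been overwritten with an object.
const mockWindow = { gtag: function () {}, dataLayer: {} };
console.log(checkAnalyticsGlobals(mockWindow));
// In the browser console: checkAnalyticsGlobals(window)
```

A "suspect" dataLayer is a strong hint that some other script has replaced the array GTM was listening to.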
Note about iframes

If you embed content in a cross-site (or third-party) iframe, remember that anything that happens in the iframe happens in a third-party context. This means that if you're running Google Analytics in the iframe, it will not work, because by default the GA cookies are not configured in a way that would make them work in a third-party context. You'll see the Cookie write failed error message in the console when using the GA debugger, if this is the case. Furthermore, even if you update the cookies to use the required samesite=none and secure flags, browsers like Safari and Brave outright block third-party storage.

Debug tag management solutions

Since tag management solutions are more like a framework for deploying analytics and advertising pixels, if they fail then everything that's loaded through them fails too. In this chapter, we'll take a look at some of the ways you can debug a TMS implementation gone awry.

Check the Data Layer queue for conflicts

First, if you're seeing issues with your setup, check if the global dataLayer queue has been overwritten or is otherwise compromised. With Google Tag Manager, type dataLayer.push into the console and check what it returns. If it outputs something like [native code], it means that dataLayer has been reset to its initial state, and Google Tag Manager's listeners no longer work with it. Alternatively, if you see something other than a variation of the below, it means that dataLayer has possibly been overwritten in a way that doesn't work correctly with GTM. This is a difficult scenario to debug, and requires that you add a breakpoint to the dataLayer processing in order to see if some other tool is cannibalizing the array and not passing the arguments to GTM. You can mock this with something like:

window.dataLayer.push(function() { debugger; });

Then step through the stack and see if the gtm.js library is ever visited when stepping through the methods.
If GTM is never referenced, it means that some other tool has taken control of dataLayer and you need to resolve this conflict elsewhere. This is a notoriously difficult scenario to debug, so you just need to patiently look through the Sources to find the culprit. Search for references to dataLayer, for example. Block JavaScript files one by one in the network debugger to find the one that is causing the conflict. Search through the element inspector Sometimes it might be useful to search through the element inspector to find the Custom HTML tags and script elements injected by Google Tag Manager. You can simply search for a string that you know to be within the injected code, but just remember that Google Tag Manager minifies all injected code automatically, so searching for a specific variable name might not be very useful. Search for content within strings or script URLs instead. If you don’t see any references to the elements you’d expect GTM to inject, it means that GTM did not add those elements to the page. Typically this is because the tag never fired or the tag was misconfigured. Sometimes it can be because of a CSP conflict that you need to resolve. Look for requests in network tools With GTM, too, it’s always useful to look for requests in the network tools. This is always the ultimate truth when it comes to debugging the end-to-end flow of pixel requests. If Tag Assistant says that the tag fired but you can’t see anything in your analytics platform, then you need to open the network debugger and see if the request was ever sent. Remember to check Preserve log to make sure that the requests aren’t gobbled up by page navigation. If you don’t see the requests anywhere, it means that the tags never managed to dispatch them. Note that this is only the first step of potentially a very complex debugging flow, because there can be a million different reasons for a failed request. 
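When the request does show up in the network debugger, decoding its query string can make the payload easier to read than the raw Payload tab. A sketch with a made-up GA4-style collect URL (the v and en parameter names follow the pattern GA4 hits use; the IDs are hypothetical):

```javascript
// Sketch: turn a collect request URL into a plain object of parameters.
function decodeHit(url) {
  const u = new URL(url);
  return Object.fromEntries(u.searchParams.entries());
}

const hit =
  'https://www.google-analytics.com/g/collect?v=2&tid=G-XXXXXXX&en=page_view';
const params = decodeHit(hit);
console.log(params.v, params.en); // → "2" "page_view"
```

You can paste a URL copied from the Network panel (right-click → Copy URL) straight into decodeHit in the console.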
Sometimes you do see the request but you still don't see data in your analytics tool. In this case, meticulously sift through the headers and the payload of the request to see if it has all the information you expect it to contain.

Search through sources

Finally, searching through sources can be very useful in identifying issues with your tag management system. With GTM, it's often the case that the version wasn't actually published to the live environment but rather to a staging context. It might be useful to take a tag that you expect the most recent version to have, and then search for something in that tag's code (an ID, a parameter, a key-value pair) within the gtm.js library. Sometimes searching through the sources is an exercise in futility, especially because the code is so minified that you can't make heads or tails of it. Trying to narrow down with more specific search terms often helps.

Useful browser extensions

I've already mentioned GA Debugger above, but there are other useful extensions you might consider installing in your browser (Chrome, most often) which make debugging your analytics setup a breeze. Many of these extensions take the existing capabilities of browser developer tools and hone them to suit a specific set of use cases in the analytics world. I still recommend always using developer tools as your main tool of choice, but do take a look at these extensions and see if they could help alleviate some of the pain of debugging your analytics or tag manager implementation.

GTM/GA Debugger

David Vallejo's Chrome extension, GTM/GA Debugger, is certainly the most ambitious and the most impressive browser extension for debugging Google Analytics and Google Tag Manager implementations.
The feature set is so incredibly rich that it doesn't make sense to go through all the functionality here, but here's a short overview of things you can do with the extension:
- Block GA3/GA4 hits
- Inspect server-side tagging hits
- Export as a table, copy objects
- Enhanced ecommerce reports
- Analyze dataLayer and the outgoing requests
- Show POST data in a meaningful way
- View all dataLayer messages (not just the ones sent on the current page)

Everything is just displayed so nicely and so intuitively. You don't have to sift through complicated headers and messy payloads – it's all visually displayed in rich tables and with purposeful structure.

GA Debugger

GA Debugger is the official Universal Analytics, Google Analytics 4, and Google Tag Manager debugger. It outputs information about the aforementioned platforms into the console. I love this tool because once the information is in the console, searching through it is easy with the JavaScript console's own search tools. The GA Debugger helpfully informs you why something went wrong rather than just showing a non-descript error message. Naturally, in some cases you still need to continue investigating, but the console output can be very helpful in collecting the initial set of leads for your research. Note! This extension doesn't work if you load your files from or collect hits to a server-side endpoint.

Live HTTP Headers

Live HTTP Headers is my favorite extension for debugging HTTP traffic. It's an easy way to inspect the headers of all the HTTP requests dispatched by the browser. Although today you can get pretty much all the same information in the network debugger of the developer tools, I still prefer the visual output of this extension and use it diligently whenever I want to quickly look through redirects or HTTP headers.
Summary

Even though this article is long and full of words and pictures, it covers only a fraction of the approaches you can take when debugging your analytics and tag management implementations. The idea wasn't to list everything that can go wrong and tell you how to fix it. Instead, the purpose of this guide is to give you the tools and the confidence to start collecting leads for your own investigations. Debugging the complexity of what happens within a web browser is a daunting task. But the debugging steps themselves are often very predictable and can be repeated from one scenario to the next. My workflow is pretty simple:
- Always start with inspecting the network requests. Try to find the requests that you're debugging. If you find them, look through the headers and the payload to see if they contain the data you expect them to have.
- Search through the sources to find inconsistencies in the libraries you use (particularly Google Tag Manager). If you can't find references to a specific tag you expect to be in GTM, it means that you might be previewing the wrong version or that you never actually published a version.
- Look through cookies to see if the values used by your analytics tool persist from one page load to the next. If the values change, then it means that the cookies are being reset or overwritten for some reason (typically a consent management platform is to blame).
- Utilize the element inspector especially when you are certain that the page should include some element or tag deployed through Google Tag Manager.

If you have additional ideas for the debugging flow that you think should be included in this guide, please let me know in the comments. What are your favorite ways of debugging analytics and tag management solution issues?
https://www.simoahava.com/analytics/debug-guide-web-analytics-tag-management/
i have a program at the moment that is asking the user to input their name, age and course. the details are stored in a structure and it is then passed to the function by the caller. the function collects the data and then on the return to the caller the structure contains the user's details. then main() should output the contents of the structure. at the moment i have this but for some reason when you put your name in it skips Age, and Course. also i am unsure if it is doing what i have explained correctly. this is what i have so far, any help will be much appreciated

Code:
#include <iostream>
using namespace std;

struct myStruct
{
    char Name;
    int age;
    char course;
};

void myFunc1(myStruct m) // by value
{
    cout << "Enter you name";
    cin >> m.Name;
}

void myFunc2(myStruct &m) // by reference
{
    cout << "Age: ";
    cin >> m.age;
}

void myFunc3(myStruct &m)
{
    cout << "Course: ";
    cin >> m.course;
}

int main()
{
    myStruct m;
    m.Name;
    m.age;
    m.course;
    myFunc1(m);
    myFunc2(m);
    myFunc3(m);
    system("pause");
}
http://cboard.cprogramming.com/cplusplus-programming/84767-structures-help.html
Euler problems/1 to 10

From HaskellWiki

Problem 1

Solution:

    sumOnetoN n = n * (n+1) `div` 2

    problem_1 = sumStep 3 999 + sumStep 5 999 - sumStep 15 999
      where sumStep s n = s * sumOnetoN (n `div` s)

Problem 7

Solution:

    problem_7 = head $ drop 10000 primes

Here is an alternative that uses a sieve of Eratosthenes:

    primes' = 2 : 3 : sieve (tail primes') [5,7..]
      where
        sieve (p:ps) x = h ++ sieve ps (filter (\q -> q `mod` p /= 0) t)
          where (h, _:t) = span (< p*p) x

    problem_7_v2 = primes' !! 10000

Problem 8

Discover the largest product of five consecutive digits in the 1000-digit number.

Solution:

    import Data.Char

    groupsOf _ [] = []
    groupsOf n xs = take n xs : groupsOf n (tail xs)

    problem_8 x = maximum . map product . groupsOf 5 $ x

    main = do
        t <- readFile "p8.log"
        let digits = map digitToInt $ foldl (++) "" $ lines t
        print $ problem_8 digits

Problem 10

Calculate the sum of all the primes below one million.

Solution:

    problem_10 = sum (takeWhile (< 1000000) primes)
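problem_7 and problem_10 above rely on a primes list that this excerpt never defines. A minimal definition that makes them runnable; this trial-division sieve is one standard choice and not necessarily the definition the wiki page used (it is correct, but far too slow for problem_10 in practice):

```haskell
-- Assumed definition: a simple trial-division sieve for the 'primes'
-- list referenced by problem_7 and problem_10.
primes :: [Integer]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

main :: IO ()
main = print (take 10 primes) -- [2,3,5,7,11,13,17,19,23,29]
```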
http://www.haskell.org/haskellwiki/index.php?title=Euler_problems/1_to_10&oldid=18345
On Thursday 08 July 2004 20:40, paolo veronelli wrote:
> Most of my imperative pieces of software find their answers by
> touching around in some space of solutions, and my favourite
> approximation algorithms use random distributions.
>
> Is Haskell the wrong language for those, as I'm obliged to
> code them inside monads, losing the benefits of laziness?

No, no reason to lose laziness.

First off, suppose your algorithm is a loop that needs one random number per
iteration. An obvious structure would be:

    loop :: Solution -> IO Solution
    loop old_solution = do
      a <- randomIO
      let new_solution = next a old_solution
      if goodEnough new_solution
        then return new_solution
        else loop new_solution

With this structure, the loop is strict and imperative, but the 'next'
function which improves the solution is pure, lazy code. Except in trivial
cases, the next function will be where all the action is, so you still get
most of the benefit of laziness.

We can do better though. Using two functions in System.Random, it's easy to
get an infinite list of random numbers:

    randomRsIO :: IO [Int]
    randomRsIO = do
      g <- getStdGen
      return (randoms g)

Let's rewrite the above code to use a list of random numbers and an initial
solution to search for a better solution:

    genSolutions :: [Int] -> Solution -> [Solution]
    genSolutions (r:rs) s0 = s0 : genSolutions rs (next r s0)
    -- slightly more cryptic implementation using foldl and map possible
    -- and probably preferable

    findGoodSolution :: [Int] -> Solution -> Solution
    findGoodSolution rs s0 = head (dropWhile (not . goodEnough) solutions)
      where solutions = genSolutions rs s0

    main = do
      rs <- randomRsIO
      print (findGoodSolution rs initial_solution)

Hope this helps,

--
Alastair Reid
http://www.haskell.org/pipermail/haskell-cafe/2004-July/006443.html
Implementing Pinch zooming with Gesture API for Maps in Java ME

This article explains how to implement pinch zooming (zoom in and out) support for maps implemented with the Maps API in the Series 40 Java 2.0 runtime.

Introduction

The Maps API for Java ME makes it possible to embed Nokia maps into Java ME applications. However, the Maps API is missing pinch zoom functionality. If Java ME applications want to support pinch zoom in maps, they have to implement custom pinch zooming themselves. Note that this custom pinch zooming works only on S40 full touch devices and will not work on other devices.

Enabling pinch zoom

Generally there are two ways to implement a pinch action: one is multipoint touch events and the other is the Gesture API. Using the Gesture API is the right choice, since it provides a ready-made pinch implementation. Pinch zooming support for maps in S40 Java can therefore be implemented using the Gesture API. Pinch zooming happens on S40 devices when two fingers are pressed on the screen and one finger is moved towards the other (pinch zoom in) or away from it (pinch zoom out).

Gesture event handling

To implement support for pinch zoom in the map, a pinch gesture event has to be registered and handled inside the MapCanvas class, which results in the MapCanvas class looking like the following:

    public class MyMapCanvas extends MapCanvas implements GestureListener, ... {
        ...
        private GestureInteractiveZone gestureZone;

        public MyMapCanvas(Display display, MIDlet midlet) {
            super(display);
            ...
            GestureRegistrationManager.setListener(this, this);
            // Register for pinch events in the whole canvas area
            GestureInteractiveZone gestureZone = new GestureInteractiveZone(
                    GestureInteractiveZone.GESTURE_PINCH);
            GestureRegistrationManager.register(this, gestureZone);
            ...
        }

        public void gestureAction(Object arg0, GestureInteractiveZone arg1,
                GestureEvent arg2) {
            ...
        }
        ...
    }

Pinch zooming is actually handled by calculating the difference between the distances at the start and end of the finger movement. When a pinch happens, the gestureAction(...) method is called. Check for the GestureInteractiveZone.GESTURE_PINCH type and do the calculation for the pinch zoom. The distance used is the difference between the current pinch distance (the distance between the two fingers after the pinch) and the starting pinch distance (the distance between the two fingers before the pinch). A 100-pixel difference is used as one zoom level change, and the max/min zoom value is checked before setting the zoom. Finally, the map is set back to the original center.

    public void gestureAction(Object arg0,
            GestureInteractiveZone aGestureZone, GestureEvent aGestureEvent) {
        int eventType = aGestureEvent.getType();
        switch (eventType) {
        case GestureInteractiveZone.GESTURE_PINCH:
            // Pinch detected
            int curPinchDistance = aGestureEvent.getPinchDistanceCurrent();
            int startingPinchDistance = aGestureEvent
                    .getPinchDistanceStarting();
            int zoomNew = zoomOrg
                    + (int) ((curPinchDistance - startingPinchDistance) / 100);
            if (zoomNew > getMapDisplay().getMaxZoomLevel()) {
                zoomNew = (int) getMapDisplay().getMaxZoomLevel();
            }
            if (zoomNew < getMapDisplay().getMinZoomLevel()) {
                zoomNew = (int) getMapDisplay().getMinZoomLevel();
            }
            if (zoomNew != zoomOrg) {
                getMapDisplay().setZoomLevel(zoomNew, 0, 0);
                getMapDisplay().setCenter(orgCenter);
            }
            zoomOrg = getMapDisplay().getZoomLevel();
            break;
        }
    }

Resources

Full source code for the example can be found in File:JavaMeMaps Gesture PinchZoom.zip

Summary

The Maps API for Java ME has much more functionality, which makes it possible to integrate all Nokia Maps features into Java ME applications.
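The zoom arithmetic described above (one zoom level per 100 pixels of pinch-distance change, clamped to the display's min/max zoom) can be isolated into a plain helper for illustration. This is a hypothetical class, not part of the Maps or Gesture APIs:

```java
// Hypothetical helper (not part of the Maps/Gesture APIs) isolating the zoom
// arithmetic from the article: one zoom level per 100 px of pinch-distance
// change, clamped to [minZoom, maxZoom].
public class PinchZoom {
    static int newZoomLevel(int zoomOrg, int startDist, int curDist,
                            int minZoom, int maxZoom) {
        int zoom = zoomOrg + (curDist - startDist) / 100; // integer division
        return Math.max(minZoom, Math.min(maxZoom, zoom));
    }

    public static void main(String[] args) {
        // Fingers moved 250 px apart: two zoom levels in.
        System.out.println(newZoomLevel(10, 100, 350, 0, 20)); // 12
        // Fingers moved 250 px together: two zoom levels out.
        System.out.println(newZoomLevel(10, 350, 100, 0, 20)); // 8
    }
}
```

Note that the integer division truncates towards zero, so finger movements of less than 100 px leave the zoom level unchanged, which matches the article's behaviour.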
http://developer.nokia.com/community/wiki/index.php?title=Implementing_Pinch_zooming_with_Gesture_API_for_Maps_in_Java_ME&oldid=188221
28 November 2011 16:56 [Source: ICIS news]

LONDON (ICIS)--Abu Dhabi's IPIC will sell back its 70% stake in chemical engineering firm Ferrostaal to Germany-based industrial group MAN for €350m ($467m), the two groups said on Monday, as they agreed to settle a long-running dispute.

At the same time, MAN plans to transfer all of Ferrostaal to MPC Group, a Hamburg-based investment and commodities firm. MAN expects to receive up to €160m from MPC for Ferrostaal.

However, IPIC subsequently sought to back out of the deal because of a corruption scandal at Ferrostaal that saw two of the company's managers charged with alleged bribery. The matter pre-dated IPIC's acquisition of the stake.

"This settlement is the outcome of very good cooperation between both shareholders, and enables IPIC and MAN to finally put their differences aside," said IPIC managing director Khadem Al Qubaisi.

MAN's settlement with IPIC and the planned transfer of Ferrostaal to MPC are both subject to regulatory approvals.

Essen-based Ferrostaal builds chemical, fertilizer and other industrial plants.
http://www.icis.com/Articles/2011/11/28/9512170/ipic-returns-70-stake-in-chem-engineer-ferrostaal-to-man.html
std.regex

Regular expressions are a commonly used method of pattern matching on strings, with regex being a catchy word for a pattern in this domain-specific language. Typical problems usually solved by regular expressions include validation of user input and the ubiquitous find & replace in text processing utilities.

Synopsis:

    import std.regex;
    import std.stdio;

    void main()
    {
        // Print out all possible dd/mm/yy(yy) dates found in user input.
        // g - global: find all matches.
        auto r = regex(r"\b[0-9][0-9]?/[0-9][0-9]?/[0-9][0-9](?:[0-9][0-9])?\b", "g");
        foreach(line; stdin.byLine)
        {
            // Match returns a range that can be iterated
            // to get all subsequent matches.
            foreach(c; match(line, r))
                writeln(c.hit);
        }
    }
    ...
    // Create a static regex at compile-time, which contains fast native code.
    enum ctr = ctRegex!(`^.*/([^/]+)/?$`);

    // It works just like a normal regex:
    auto m2 = match("foo/bar", ctr);  // First match found here, if any
    assert(m2);                       // Be sure to check if there is a match before examining contents!
    assert(m2.captures[1] == "bar");  // Captures is a range of submatches: 0 = full match.
    ...
    // The result of the match is directly testable with if/assert/while,
    // e.g. test if a string consists of letters:
    assert(match("Letter", `^\p{L}+$`));

The general usage guideline is to keep regex complexity on the side of simplicity, as its capabilities reside in purely character-level manipulation, and as such are ill-suited for tasks involving higher level invariants like matching an integer number bounded in an [a,b] interval. Checks of this sort are better addressed by additional post-processing.

The basic syntax shouldn't surprise experienced users of regular expressions. Thankfully, nowadays the web is bustling with resources to help newcomers, and a good reference with a tutorial on regular expressions can be found online.

This library uses an ECMAScript syntax flavor with the following extensions:

- Named subexpressions, with Python syntax.
- Unicode properties such as Scripts, Blocks and common binary properties, e.g. Alphabetic, White_Space, Hex_Digit etc.
- Arbitrary length and complexity lookbehind, including lookahead in lookbehind and vice versa.

Pattern syntax

std.regex operates on codepoint level; 'character' in this table denotes a single unicode codepoint.

Character classes

Regex flags

Unicode support

This library provides full Level 1 support according to UTS 18. Specifically:

- 1.1 Hex notation via any of \uxxxx, \U00YYYYYY, \xZZ.
- 1.2 Unicode properties.
- 1.3 Character classes with set operations.
- 1.4 Word boundaries use the full set of "word" characters.
- 1.5 Using simple casefolding to match case insensitively across the full range of codepoints.
- 1.6 Respecting line breaks as any of \u000A | \u000B | \u000C | \u000D | \u0085 | \u2028 | \u2029 | \u000D\u000A.
- 1.7 Operating on codepoint level.

License: Boost License 1.0.
Authors: Dmitry Olshansky. API and utility constructs are based on the original std.regex by Walter Bright and Andrei Alexandrescu.
Source: std/regex.d

- struct Regex(Char); - Regex object holds a regular expression pattern in compiled form. Instances of this object are constructed via calls to regex. This is the intended form for caching and storage of frequently used regular expressions.

- struct StaticRegex(Char); - A StaticRegex is a Regex object that contains specially generated machine code to speed up matching. It is implicitly convertible to a normal Regex; however, doing so will result in losing this additional capability.

- struct Captures(R, DIndex = size_t) if (isSomeString!(R)); - Captures object contains submatches captured during a call to match or iteration over a RegexMatch range. The first element of the range is the whole match.

Example, showing basic operations on Captures:

    import std.regex;
    import std.range;

    void main()
    {
        auto m = match("@abc#", regex(`(\w)(\w)(\w)`));
        auto c = m.captures;
        assert(c.pre == "@");                    // Part of input preceding match
        assert(c.post == "#");                   // Immediately after match
        assert(c.hit == c[0] && c.hit == "abc"); // The whole match
        assert(c[2] == "b");
        assert(c.front == "abc");
        c.popFront();
        assert(c.front == "a");
        assert(c.back == "c");
        c.popBack();
        assert(c.back == "b");
        popFrontN(c, 2);
        assert(c.empty);
    }

- R pre(); - Slice of input prior to the match.
- R post(); - Slice of input immediately after the match.
- R hit(); - Slice of matched portion of input.
- R front(); R back(); void popFront(); void popBack(); const bool empty(); R opIndex()(size_t i); - Range interface.
- R opIndex(String)(String i); - Lookup named submatch.

    import std.regex;
    import std.range;

    auto m = match("a = 42;", regex(`(?P<var>\w+)\s*=\s*(?P<value>\d+);`));
    auto c = m.captures;
    assert(c["var"] == "a");
    assert(c["value"] == "42");
    popFrontN(c, 2); // named groups are unaffected by range primitives
    assert(c["var"] == "a");
    assert(c.front == "42");

- const size_t length(); - Number of matches in this object.
- @property ref auto captures(); - A hook for compatibility with the original std.regex.

- struct RegexMatch(R, alias Engine = ThompsonMatcher) if (isSomeString!(R)); - A regex engine state, as returned by the match family of functions. Effectively it's a forward range of Captures!R, produced by lazily searching for matches in a given input. alias Engine specifies an engine type to use during matching, and is automatically deduced in a call to match/bmatch.

- R pre(); R post(); R hit(); - Shorthands for front.pre, front.post, front.hit.
- @property auto front(); void popFront(); auto save(); - Functionality for processing subsequent matches of global regexes via the range interface:

    import std.regex;
    auto m = match("Hello, world!", regex(`\w+`, "g"));
    assert(m.front.hit == "Hello");
    m.popFront();
    assert(m.front.hit == "world");
    m.popFront();
    assert(m.empty);

- bool empty(); - Test if this match object is empty.
- T opCast(T : bool)(); - Same as !(x.empty), provided for convenience in conditional statements.
- @property auto captures(); - Same as .front, provided for compatibility with the original std.regex.

- auto regex(S)(S pattern, const(char)[] flags = ""); - Compile regular expression pattern for later execution. Returns a Regex object that works on inputs having the same character width as pattern. Throws RegexException if there were any errors during compilation.

- template ctRegex(alias pattern, alias flags = []) - Experimental feature. Compile a regular expression using CTFE and generate optimized native machine code for matching it. Returns a StaticRegex object for faster matching.

- auto match(R, RegEx)(R input, RegEx re); auto match(R, String)(R input, String re); - Start matching input to regex pattern re, using the Thompson NFA matching scheme. This is the recommended method for matching regular expressions.

- auto bmatch(R, RegEx)(R input, RegEx re); auto bmatch(R, String)(R input, String re); - Start matching input to regex pattern re, using the traditional backtracking matching scheme.

- R replace(alias scheme = match, R, RegEx)(R input, RegEx re, R format); - Construct a new string from input by replacing each match with a string generated from the match according to the format specifier. To replace all occurrences use a regex with the "g" flag, otherwise only the first occurrence gets replaced.

Example:

    // Comify a number
    auto com = regex(r"(?<=\d)(?=(\d\d\d)+\b)","g");
    assert(replace("12000 + 42100 = 54100", com, ",") == "12,000 + 42,100 = 54,100");

The format string can reference parts of the match using the following notation:

    assert(replace("noon", regex("^n"), "[$&]") == "[n]oon");

- R replace(alias fun, R, RegEx, alias scheme = match)(R input, RegEx re); - Search the string for matches using regular expression pattern re and pass captures for each match to a user-defined functor fun. To replace all occurrences use a regex with the "g" flag, otherwise only the first occurrence gets replaced. Returns a new string with all matches replaced by return values of fun.

Example: Capitalize the letters 'a' and 'r':

    string baz(Captures!(string) m)
    {
        return std.string.toUpper(m.hit);
    }
    auto s = replace!(baz)("Strap a rocket engine on a chicken.", regex("[ar]", "g"));
    assert(s == "StRAp A Rocket engine on A chicken.");

- struct Splitter(Range, alias RegEx = Regex) if (isSomeString!(Range) && isRegexFor!(RegEx, Range)); - Range that splits a string using a regular expression as a separator.

Example:

    auto s1 = ", abc, de, fg, hi, ";
    assert(equal(splitter(s1, regex(", *")), ["", "abc", "de", "fg", "hi", ""]));

- Splitter!(Range, RegEx) splitter(Range, RegEx)(Range r, RegEx pat); - A helper function; creates a Splitter on range r separated by regex pat. Captured subexpressions have no effect on the resulting range.

- String[] split(String, RegEx)(String input, RegEx rx); - An eager version of splitter that creates an array with the split slices of input.

- class RegexException : object.Exception; - Exception object thrown in case of errors during regex compilation.
CC-MAIN-2014-10
en
refinedweb
#include <coherence/net/cache/SimpleCacheStatistics.hpp> Inherits Object, and CacheStatistics. List of all members. A cache hit is a read operation invocation (i.e. get()) for which an entry exists in this map. A cache miss is a get() invocation that does not have an entry in this map. For the LocalCache implementation, this refers to the number of times that the prune() method is executed. prune() For the LocalCache implementation, this refers to the time spent in the prune() method.); [virtual] Register a cache hit. Register a multiple cache hit. Register a cache miss. Register a multiple cache miss. Register a cache put. Register a multiple cache put. Register a cache prune.
http://docs.oracle.com/cd/E15357_01/coh.360/e18813/classcoherence_1_1net_1_1cache_1_1_simple_cache_statistics.html
CC-MAIN-2014-10
en
refinedweb
<ac:macro ac:<ac:plain-text-body><![CDATA[ <ac:macro ac:<ac:plain-text-body><![CDATA[ Zend_Calendar is an extension class for Zend_Date. Zend Framework: Zend_Calendar Component Proposal Table of Contents 1. Overview It handles calendar formats different from gregorian calendar. 2. References 3. Component Requirements, Constraints, and Acceptance Criteria <ac:macro ac:<ac:plain-text-body><![CDATA[ Zend_Calendar is an extension class for Zend_Date. Zend_Calendar is an extension class for Zend_Date. - Simple API - Same handling as Zend_Date to increase useability - Locale aware - Conversion between different Calendar Formats 4. Dependencies on Other Framework Components - Zend_Exception - Zend_Date - Zend_Locale 5. Theory of Operation Zend_Calendar will extend Zend_Date to work also with other calendar formats than the gregorian ones. It will have an adaptor interface to convert between the different formats. 6. Milestones / Tasks - [IN PROGRESS] Acceptance of Proposal - Code Basic Class with Adaptor Interface - Code Arabic Adaptor - Write unit tests - Write Docu - Code Chinese Adaptor - Code Hebrew Adaptor - Code Indian Adaptor - Code Julian Adaptor - Code Persian Adaptor - Code Bahai Adaptor - Code French Adaptor 7. Class Index - Zend_Calendar - Zend_Calendar_Exception - Zend_Calendar_Arabic - Zend_Calendar_Chinese - Zend_Calendar_Hebrew - Zend_Calendar_Julian - Zend_Calendar_xxxxx (other calendar formats as written in milestones) 8. 
Use Cases Define a arabic date, convert to gregorian Work with Calendars Work with Calendars 21 Commentscomments.show.hide Dec 27, 2006 Gavin <p>Public domain JavaScript converters for numerous calendars: <a class="external-link" href=""></a></p> Dec 28, 2006 Thomas Weidner <p>There are numerous online pages out there which do the same...<br /> The question is: Is this a reason for us not to support other calendar formats with the benefit of Zend_Date ???</p> <p>Zend_Calendar could for example also be used for monthly calendar page creation with an iterator interface implemented.</p> <p>And also to mention... there are people out there in the WWW which disable javascript because of security reasons...</p> Aug 10, 2007 Felipe Ferreri Tonello <p>And you forgot the main reason: To make a easy way of different calendars on ZF. Who cares about JS converters?! I'm doing my application using ZF and I simply don't want to depend of any other solution.</p> Jan 18, 2008 AmirBehzad Eslami <p>There are two type of Calendars in Muslim countries:</p> <ul class="alternate"> <li>Arabic Calendar (a.k.a, Islamic Calendar)</li> <li>Iranian Calendar (a.k.a., Persian Calendar or Jalali Calendar).</li> </ul> <p>In Iran, we use Iranian Calendar which is completely different from<br /> Arabic calendar.</p> <p>For example, Jan 1 2008 equals to:</p> <ul class="alternate"> <li>Iranian/Persian Calendar: Month=Dey, Day=11, Year = 1386</li> <li>Arabic Calendar: Month=Dhu al-Hijjah, Day=21, Year = 1428</li> </ul> <p>Now two requests:<br /> Please add "class Zend_Calendar_Iranian {} ".<br /> Please and correct the current Use Cases (1385 Dey 05 is an Iranian Date, not Arabic.)</p> <p>@see: <a class="external-link" href=""></a><br /> @see: <a class="external-link" href=""></a></p> Jan 19, 2008 Thomas Weidner <p>This are only example-calendars...</p> <p>In fact we will support</p> <ul> <li>Buddhist Calendar</li> <li>Chinese Calendar</li> <li>Coptic Calendar</li> <li>Ethiopic Calendar</li> 
<li>Gregorian Calendar</li> <li>Hebrew Calendar</li> <li>Islamic Calendar</li> <li>Islamic Civil Calendar</li> <li>Japanese Calendar</li> <li>Persian Calendar</li> </ul> <p>These are the calendar formats where we have support informations from unicode.<br /> But as I have had no time to finish this proposal it will still take some time to be implemented.<br /> Other things like Zend_File_Upload and improvements to Zend_Translate have more priority than this proposal. But it will not be forgotten... development will just take more time.</p> Feb 22, 2008 Ziyad Saeed <p>will this be part of the Zend_Calendar features.<br /> "Give me date of last Sunday of year 2008" or" What's the date of second Monday for Jan 2007"</p> <p>is there a proposal for repeat events<br /> What about weekly and daily views</p> Feb 23, 2008 Thomas Weidner <p>There is actually no way to recognise "give me date of last Sunday of year 2008" or a silimar string in any known language.</p> <p>But you can do this already with the standard Zend_Date API in an equivalent way by simple date calculation.</p> <p>Related to your second and third comment:<br /> This proposal shall make it possible to use different calendar formats than gregorian with Zend_Date. So you can use an islamic date for example.<br /> It is not intended to create a calendar view. Here you would need a view-helper for example.<br /> And it is not intended to be used as event handler.</p> <p>If you see benefit for a feature you can add a comment showing how the api could look like and what the benefit and usecase would be. I am open to all ideas when they fit.</p> Feb 27, 2008 Ziyad Saeed <p>oh i wasn't asking for a natural language interpreter, just that if i want to know whats the date of last sunday of the month jan 2008, will it be able to give me a date</p> <p>I'm interested in a proposal that i can use to generate full calendar. it doesn't have to generate the views, but give me something that i can use to generate the views. 
for eg, If i want to show week 3 to 6 in a calendar view, This proposal should give me the date and day of week 3 to 6. If i ask it show me 3 months from Febuary to march for year 2008, it should give me the date and dates for that. or give me dates for next 7 days starting Feb 26 2008.</p> <p>I guess I'm interested in an events calendar, not just a calendar converter to different localizations.</p> <p>if you plan to move this into the event type calendar, then i can give you lot of usecases and apis examples.</p> Feb 27, 2008 Thomas Weidner <p>You should take a look at the API of Zend_Date... several things you mentioned here are already possible with just a few single lines.</p> <p>For example to get 3 montags from feb 2008 as mentioned with a calendarlike output you can do:</p> <ac:macro ac:<ac:plain-text-body><![CDATA[ $from = new Zend_Date("February 2008", "en"); $to = $from->addMonths(3); for (; $date->addDay(1); $date->isEarlier($to)) { print $date->getIso(); print $date->toString(Zend_Date::WEEKDAY); } ]]></ac:plain-text-body></ac:macro> <p>Anyhow... this will not be a event calendar proposal... a event calendar has to handle dates, events, create views, act on user input and so on...</p> <p>This proposal is an extension of Zend_Date for all people which are not using gregorian calendars. Nothing more, nothing less <ac:emoticon ac:</p> Feb 27, 2008 Thomas Weidner <p>mistyped... $date should of course be $from... <ac:emoticon ac:</p> Feb 27, 2008 Ziyad Saeed <p>How about we design this this so someone else can extend this and make it into an events calendar. that would be great.</p> Feb 27, 2008 Thomas Weidner <p>All what is needed is already here...<br /> Why should I create the half of something which itself if useless when so someone else can do the second half to have a working complete component ?</p> <p>I have really enough to do in my sparetime with existing proposals and feature enhancements. 
And as all contributors I am only working on things which make sense for ourself. Actually I have no use for a event component. This may change in future but not in short term. Anyone is free to create such a component for himself.</p> <p>But I doesn't see any pro for creating a component which needs 3 lines instead of using existing things only to save 2 lines. There is really enough other work to do...</p> <p>My 2 cents...</p> Feb 28, 2008 Ziyad Saeed <p>what are you going on about<br /> I never said make one useless half and have someone else do the other half.</p> <p>I'm asking the design to me EXTENSIBLE. it doesn't mean you have to do a half baked job. or it has to be incomplete in any way. You can make a complete calendar (with whatever you want) but it would be designed so later on someone else can come along and extend it like Zend_calendar_Event. Design the api by keeping future improvements in mind. How is that bad is beyond me.</p> <p>you mentioned a very important point, about doing "what makes sense to you". So its essentially your project. I didn't know that. thanks for clarifying.</p> Feb 28, 2008 Thomas Weidner <p>Every contributor does what makes sense to him... otherwise he would not have created a proposal.</p> <p>I would never create a Service proposal because I have no use of services. And if someone is in need of a event component he is free to create a proposal and add this component to ZF.</p> <p>Myself is not in need of such a event component... so why should I spend my sparetime for creating something which I do not see benefit for me ?<br /> And no, ZF is not my project... you may have mentioned that it is open source and several 100s of developers are working on it. <ac:emoticon ac: <br /> Just look at the names beside the proposals.</p> Feb 23, 2008 Thomas Weidner <p>What I forgot to mention:<br /> This proposal is not dead. 
I had just other priorities like the new Zend_File_Transfer class and several other improvements and until now there were not many people wanting this integrated. So it has just low priority on my work-list. <ac:emoticon ac:</p> Aug 11, 2008 Juan Felipe Alvarez Saldarriaga <p>We hope that isn't dead <ac:emoticon ac:, I really like to see this library on next 2.0 release or maybe in a mini-release <ac:emoticon ac:.</p> Aug 11, 2008 Thomas Weidner <p>Wether dead nor forgotten.</p> <p>But there is much other work beside like the file transfer component.<br /> It's always a matter of priority... the comunity decided for the other component <ac:emoticon ac:</p> Aug 18, 2008 Andrea Turso <p>When I read of Zend_Calendar I thought of a component capable of handling calendars, something that given some options can generate a calendar (in terms of a table with the calendar of the specified month and year).</p> <p>If Zend_Calendar is a mere extension of Zend_Date for handling calendars that differs from the gregorian one, why don't just enhance Zend_Date to handle different calendars instead of creating confusion with this namespace?</p> <p>Think of a programmer who wants to implement a calendar in his application, don't you think he will try to use the Zend_Calendar namespaces' classes to make a calendar? A Zend_Calendar that handle different calendars - in terms of calendar types - will likely confuse him.</p> <p>I hope I explained my point. 
</p> Sep 26, 2008 Ramses Paiva <p>Hi, Thomas!</p> <p>In my point of view, we have two kinds of calendar.<br /> One calendar is an object that handles events.<br /> Of course, this calendar should be aware of locale and should have adapters to convert calendars to the right locale.<br /> The other calendar, is the calendar related to dates.<br /> If you want to just extend functionality from Zend_Date, you should keep it locally and create the adapters as Zend_Date_Adapter_Interface to have the conversions of the date specifically and not of the events calendar, that is a set of years, months, days, hours, minutes and seconds. Or simply creating it as Zend_Date_Calendar instead of Zend_Calendar.<br /> Another thing is instead of having methods called toGregorian(), toIslamic(), etc, it should implement a factory method and the calendar to be converted to should be passed as parameter, like:</p> <ac:macro ac:<ac:plain-text-body><![CDATA[ $date = '...'; $calendar = new Zend_Date_Calendar(Zend_Date_Adapter::ARABIC, '1385 Dey 05'); $calendar->toCalendar('islamic', $date); ]]></ac:plain-text-body></ac:macro> <p>In the Zend_Calendar, we could use all the features of the Zend_Date, but with other perspective, and also use the Zend_Date adapters to have the correct view for the calendar also. 
In the Zend_Calendar, I also think it would be nice to be possible to set working days, working hours, first day in the calendar, adding holidays, events, shifts and display the calendar according to all this settings.</p> <p>The Zend_Calendar interface should look like this:</p> <ac:macro ac:<ac:plain-text-body><![CDATA[ interface Zend_Calendar_Interface { /** * */ public function setFirstWeekday($weekday); /** * */ public function setWorkingHours($start, $end); /** * */ public function setWorkingDays($firstDay, $lastDay); /** * */ public function setHolidays(array $holidays); /** * */ public function setEvents(array $events); /** * */ public function addHoliday($holiday); /** * */ public function addEvent($event); /** * */ public function getFirstWeekday(); /** * */ public function getHeader(); /** * */ public function getWorkingHours(); /** * */ public function getWorkingDays(); /** * */ public function getHolidays(); /** * */ public function getDay(); /** * */ public function getWeek(); /** * */ public function getMonth(); /** * */ public function getYear(); } ]]></ac:plain-text-body></ac:macro> <p>These set of classes below would be supposed to handle with events:_Adapter // In the case of this calendar, the adapter will work as view translators, translating the calendar objects to its properly locale<br /> Zend_Calendar_Interface<br /> Zend_Calendar_Adapter_Arabic<br /> Zend_Calendar_Adapter_Buddhist<br /> Zend_Calendar_Adapter_Chinese<br /> Zend_Calendar_Adapter_Coptic<br /> Zend_Calendar_Adapter_Ethiopic<br /> Zend_Calendar_Adapter_Gregorian<br /> Zend_Calendar_Adapter_Hebrew<br /> Zend_Calendar_Adapter_Islamic<br /> Zend_Calendar_Adapter_IslamicCivil<br /> Zend_Calendar_Adapter_Japanese<br /> Zend_Calendar_Adapter_Persian<br /> Zend_Calendar_Adapter_Interface<br /> Zend_Calendar_Adapter_Exception</p> <p>That's my opinion!</p> <p>Regards,</p> <p>–<br /> Ramses Paiva<br /> Software Consultant<br /> SourceBits Technologies<br /></p> Sep 26, 2008 Ramses Paiva 
<p>Better idea:</p> <ac:macro ac:<ac:plain-text-body><![CDATA[ class Zend_Calendar_Year extends Zend_Date_Calendar {} class Zend_Calendar_Month extends Zend_Date_Calendar {} class Zend_Calendar_Week extends Zend_Date_Calendar {} class Zend_Calendar_Day extends Zend_Date_Calendar {} ]]></ac:plain-text-body></ac:macro> <p>And the components would be only:_Interface</p> <p>And the adapters would have a different approach:<br /> Zend_Calendar_Adapter -> instead of conversion of dates, it could be used as a database connection to retrieve events (and holidays, once the holidays is a specialization of events)<br /> Zend_Calendar_Adapter_Interface<br /> Zend_Calendar_Adapter_Exception<br /> Zend_Calendar_Adapter_DbTable</p> <p>Regards,</p> <p>-<br /> Ramses Paiva<br /> Software Consultant<br /> SourceBits Technologies<br /></p> Oct 28, 2008 Wil Sinclair <p>This proposal requires some refactoring and Thomas currently does not have the time to do it. This will be archived for resurrection later.</p>
http://framework.zend.com/wiki/display/ZFPROP/Zend_Calendar+-+Thomas+Weidner?focusedCommentId=8454439
CC-MAIN-2014-10
en
refinedweb
01 April 2012 23:37 [Source: ICIS news] SAN ANTONIO, Texas (ICIS)--Braskem, Latin America's largest petrochemicals producer, and Peruvian state oil company Petroperu are due to begin feasibility studies on their $3bn (€2.3bn) petrochemical plant project, a market source said on Sunday. The studies have been approved and will start imminently, the source said on the sidelines of the International Petrochemical Conference (IPC). The polyethylene plant, which may begin producing 1m tonnes/year of the resin by 2017, will be the only petrochemical facility on Latin America's Pacific coast. The partners in the project will conduct a $5m, one-year feasibility study, Braskem vice president Luiz de Mendonca said in November.
http://www.icis.com/Articles/2012/04/01/9546686/afpm-12-braskem-petroperu-move-forward-with-peru-pe-project.html
Based on the software pattern 'composite', this class (RTT_SCRIPTING_API) allows composing command objects into one command object. More...

#include <rtt/scripting/CommandComposite.hpp>

Based on the software pattern 'composite', this class allows composing command objects into one command object.

Definition at line 50 of file CommandComposite.hpp.

add a command to the vector

Definition at line 106 of file CommandComposite.hpp.

Referenced by RTT::scripting::FunctionGraphBuilder::appendFunction(), copy(), and RTT::scripting::FunctionFactory::produce().

from RTT::base::ActionInterface.

Definition at line 115 of file CommandComposite.hpp.

Execute the functionality of all commands. Commands will be executed in the order they have been added.

Implements RTT::base::ActionInterface.

Definition at line 81 of file CommandComposite.hpp.

from RTT::base::ActionInterface.

Definition at line 94 of file CommandComposite.hpp.
http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/classRTT_1_1scripting_1_1CommandComposite.html
#include <Wire.h>
#include <ADXL345.h>

I'll be releasing the whole work there quite soon. I'm inviting you to follow me on twitter, facebook and everywhere else.

[Figure: the real big picture of everything working fine]

Previous: Data Management --- Next: Conclusion
http://cycling74.com/wiki/index.php?title=BayleAdvancedProject-p6a&oldid=4251
This chapter contains these topics:

Continuous Query Notification
Database Startup and Shutdown
Implicit Fetching of ROWIDs
Fault Diagnosability in OCI

Continuous Query Notification enables client applications to register queries with the database and receive notifications in response to DML or DDL changes on the objects, or in response to result set changes associated with the queries. The notifications are published by the database when the DML or DDL transaction commits. During registration, the application specifies a notification handler and associates a set of interesting queries with the notification handler. A notification handler can be either a server-side PL/SQL procedure or a client-side C callback. Registrations are created at either the object level or at the query level.

See Also: Oracle Database Advanced Application Developer's Guide, chapter 13, "Using Continuous Query Notification", for a complete discussion of the concepts of this feature.

One use of this feature is in middle-tier applications that need to have cached data and keep the cache as recent as possible with respect to the back-end database. In a Real Application Clusters (Oracle RAC) environment, the database delivers a notification when the first instance starts or the last instance shuts down.

See Also: "Publish-Subscribe Notification in OCI"

To record QOS (quality of service) flags specific to continuous query (CQ) notifications, set the attribute OCI_ATTR_SUBSCR_CQ_QOSFLAGS on the subscription handle OCI_HTYPE_SUBSCR. To request that the registration is at query granularity, as opposed to object granularity, set the OCI_SUBSCR_CQ_QOS_QUERY flag bit on the attribute OCI_ATTR_SUBSCR_CQ_QOSFLAGS. The pseudocolumn CQ_NOTIFICATION_QUERY_ID can be optionally specified to retrieve the query ID of a registered query. Note that this does not automatically convert the granularity to query level. The value of the pseudocolumn on return is set to the unique query ID assigned to the query.
The query ID pseudocolumn can be omitted for OCI-based registrations, in which case the query ID is communicated back as a READ attribute of the statement handle (this attribute is called OCI_ATTR_CQ_QUERYID). During notifications, the client-specified callback is invoked and the top-level notification descriptor is passed as an argument. Information about the query IDs of the changed queries is conveyed through a special descriptor type called OCI_DTYPE_CQDES: a collection (OCIColl) of query descriptors is embedded inside the top-level notification descriptor. Each descriptor is of type OCI_DTYPE_CQDES. The query descriptor has the following attributes:

OCI_ATTR_CQDES_OPERATION - can be one of OCI_EVENT_QUERYCHANGE or OCI_EVENT_DEREG.

OCI_ATTR_CQDES_QUERYID - query ID of the changed query.

OCI_ATTR_CQDES_TABLE_CHANGES - array of table descriptors describing DML operations on tables which led to the query result set change. Each table descriptor is of the type OCI_DTYPE_TABLE_CHDES.

See Also: "OCI_DTYPE_CQDES"

The calling session must have the CHANGE NOTIFICATION system privilege and SELECT privileges on all objects that it attempts to register. A registration is a persistent entity that is recorded in the database, and is visible to all instances of Oracle RAC. If the registration was at query granularity, transactions that cause the query result set to change and commit in any instance of Oracle RAC generate a notification. If the registration was at object granularity, transactions that modify registered objects in any instance of Oracle RAC generate a notification. Queries involving materialized views or nonmaterialized views are not supported.
Call OCISubscriptionRegister() to create a new registration in the DBCHANGE namespace. Multiple query statements can be associated with the registration. If the application reuses the OCI statement handle for subsequent executions, it must repopulate the registration handle attribute of the statement handle. A binding of a subscription handle to a statement handle is only permitted when the statement is a query (determined at execute time). If a DML statement is executed as part of the execution, then an exception is issued.

The subscription handle attributes for continuous query notification are described next. The attributes can be divided into generic attributes (which are common to all subscriptions) and namespace-specific attributes particular to continuous query notification. The WRITE attributes on the statement handle can only be modified before the registration is created.

OCI_ATTR_SUBSCR_NAMESPACE (WRITE) - This must be set to OCI_SUBSCR_NAMESPACE_DBCHANGE for subscription handles.

OCI_ATTR_SUBSCR_CALLBACK (WRITE) - Use to store the callback associated with the subscription handle. The callback is executed when a notification is received. When a new continuous query notification message becomes available, the callback is invoked in the listener thread with desc pointing to a descriptor of type OCI_DTYPE_CHDES, which contains detailed information about the invalidation.

OCI_ATTR_SUBSCR_QOSFLAGS (WRITE) - A generic flag, with values:

#define OCI_SUBSCR_QOS_RELIABLE      0x01  /* reliable */
#define OCI_SUBSCR_QOS_PURGE_ON_NTFN 0x10  /* purge on first ntfn */

If OCI_SUBSCR_QOS_RELIABLE is set, then notifications are persistent. Therefore, surviving instances of an Oracle RAC cluster can be used to send and retrieve invalidation messages, even after a node crash, because invalidations associated with this registration ID are queued persistently in the database. If FALSE, then invalidations are enqueued into a fast in-memory queue.
Note that this option describes the persistence of notifications and not the persistence of registrations. Registrations are automatically persistent by default. If the OCI_SUBSCR_QOS_PURGE_ON_NTFN bit is set, the registration is purged on the first notification.

OCI_ATTR_SUBSCR_CQ_QOSFLAGS - This attribute describes the continuous query notification-specific QOS flags (mode is WRITE, datatype is ub4), which are:

0x1 OCI_SUBSCR_CQ_QOS_QUERY - If set, it indicates that query level granularity is required. A notification should be generated only if the query result set changes. By default, this level of QOS has no false positives.

0x2 OCI_SUBSCR_CQ_QOS_BEST_EFFORT - If set, it indicates that best-effort filtering is acceptable. It may be used by caching applications. The database may use heuristics based on cost of evaluation and avoid full pruning in some cases.

OCI_ATTR_SUBSCR_TIMEOUT - This attribute can be used to specify a ub4 timeout value defined in seconds. If the timeout value is 0, or not specified, then the registration lives until explicitly unregistered.

The rest of the attributes are namespace- or feature-specific to the continuous query notification feature.

OCI_ATTR_CHNF_ROWIDS - If set to TRUE, the continuous query notification message includes row level details such as operation type and ROWID.

OCI_ATTR_CHNF_OPERATIONS - This is a ub4 flag that can be used to selectively filter notifications based on operation type. This option is ignored if the registration is of query level granularity. Flags stored are:

OCI_OPCODE_ALL - All operations
OCI_OPCODE_INSERT - Insert operations on the table
OCI_OPCODE_UPDATE - Update operations on the table
OCI_OPCODE_DELETE - Delete operations on the table

OCI_ATTR_CHNF_CHANGELAG - This is a ub4 value that can be used by the client to specify the number of transactions by which the client is willing to lag behind.
This option can be used by the client as a throttling mechanism for continuous query notification messages. When this option is chosen, ROWID-level granularity of information is not available in the notifications, even if OCI_ATTR_CHNF_ROWIDS was set to TRUE. This option is ignored if the registration is of query level granularity.

Notifications can be spaced out by using the grouping NTFN option. The relevant generic notification attributes are:

OCI_ATTR_SUBSCR_NTFN_GROUPING_VALUE
OCI_ATTR_SUBSCR_NTFN_GROUPING_TYPE
OCI_ATTR_SUBSCR_NTFN_GROUPING_START_TIME
OCI_ATTR_SUBSCR_NTFN_GROUPING_REPEAT_COUNT

The continuous query notification descriptor is passed into the desc parameter of the notification callback specified by the application. The following attributes are specific to continuous query notification. The OCI type constant of the continuous query notification descriptor is OCI_DTYPE_CHDES. The notification callback receives the top-level notification descriptor, OCI_DTYPE_CHDES, as an argument. This descriptor in turn includes either a collection of OCI_DTYPE_CQDES or OCI_DTYPE_TABLE_CHDES descriptors, based on whether the event type was OCI_EVENT_QUERYCHANGE or OCI_EVENT_OBJCHANGE. An array of table continuous query descriptors is embedded inside the continuous query descriptor for notifications of type OCI_EVENT_QUERYCHANGE. If ROWID-level granularity of information was requested, each OCI_DTYPE_TABLE_CHDES contains an array of row-level continuous query descriptors (OCI_DTYPE_ROW_CHDES) corresponding to each modified ROWID.

This is the top-level continuous query notification descriptor type:

OCI_ATTR_CHDES_DBNAME (oratext *) - Name of the database (source of the continuous query notification).

OCI_ATTR_CHDES_XID (RAW(8)) - Message ID of the message.

OCI_ATTR_CHDES_NFYTYPE - Flags describing the notification type:

0x0 OCI_EVENT_NONE - No further information about the continuous query notification.
0x1 OCI_EVENT_STARTUP - Instance startup.
0x2 OCI_EVENT_SHUTDOWN - Instance shutdown.
0x3 OCI_EVENT_SHUTDOWN_ANY - Any instance shutdown - Real Application Clusters (Oracle RAC).
0x5 OCI_EVENT_DEREG - Unregistered or timed out.
0x6 OCI_EVENT_OBJCHANGE - Object change notification.
0x7 OCI_EVENT_QUERYCHANGE - Query change notification.

OCI_ATTR_CHDES_TABLE_CHANGES - A collection type describing operations on tables, of datatype (OCIColl *). This attribute is only present if the OCI_ATTR_CHDES_NFYTYPE attribute was of type OCI_EVENT_OBJCHANGE; otherwise it is NULL. Each element of the collection is a table continuous query descriptor of type OCI_DTYPE_TABLE_CHDES.

OCI_ATTR_CHDES_QUERIES - A collection type describing the queries which were invalidated. Each member of the collection is of type OCI_DTYPE_CQDES. This attribute is only present if the attribute OCI_ATTR_CHDES_NFYTYPE was OCI_EVENT_QUERYCHANGE; otherwise it is NULL.

This notification descriptor describes a query which was invalidated, usually in response to the commit of a DML or a DDL transaction. It has the following attributes:

OCI_ATTR_CQDES_OPERATION (ub4, READ) - Operation which occurred on the query. It can be one of the two values:

OCI_EVENT_QUERYCHANGE - Query result set change.
OCI_EVENT_DEREG - Query unregistered.

OCI_ATTR_CQDES_TABLE_CHANGES (OCIColl *, READ) - A collection of table continuous query descriptors describing DML or DDL operations on tables which caused the query result set change. Each element of the collection is of type OCI_DTYPE_TABLE_CHDES.

OCI_ATTR_CQDES_QUERYID (ub8, READ) - Query ID of the query which was invalidated.

This notification descriptor conveys information about changes to a table involved in a registered query:

OCI_ATTR_CHDES_TABLE_NAME (oratext *) - Schema-annotated table name.

OCI_ATTR_CHDES_TABLE_OPFLAGS (ub4) - Flag field describing the operations on the table. Each of the following flag fields is in a separate bit position in the attribute:

0x1 OCI_OPCODE_ALLROWS - The table is completely invalidated.
0x2 OCI_OPCODE_INSERT - Insert operations on the table.
0x4 OCI_OPCODE_UPDATE - Update operations on the table.
0x8 OCI_OPCODE_DELETE - Delete operations on the table.
0x10 OCI_OPCODE_ALTER - Table altered (schema change). This includes DDL statements and internal operations that cause row migration.
0x20 OCI_OPCODE_DROP - Table dropped.

OCI_ATTR_CHDES_TABLE_ROW_CHANGES - This is an embedded collection describing the changes to the rows within the table. Each element of the collection is a row continuous query descriptor (OCI_DTYPE_ROW_CHDES).

The demo program below, demoquery.c, shows how these interfaces fit together. See the comments in the listing. The calling session must already have the CHANGE NOTIFICATION system privilege and SELECT privileges on all objects that it attempts to register.

/* Copyright (c) 2006, Oracle. All rights reserved. */

#ifndef S_ORACLE
# include <oratypes.h>
#endif

/**************************************************************************
 * This is a DEMO program. To test, compile the file to generate the
 * executable demoquery. Then demoquery can be invoked from a command
 * prompt. It will have the following output:

 Initializing OCI Process
 Registering query : select last_name, employees.department_id,
 department_name from employees, departments where employee_id = 200
 and employees.department_id = departments.department_id
 Query Id 23
 Waiting for Notifications

 * Then from another session, log in as HR/HR and perform the following
 * DML transactions. It will cause two notifications to be generated.

 update departments set department_name ='Global Admin'
 where department_id=10;
 commit;
 update departments set department_name ='Adminstration'
 where department_id=10;
 commit;

 * The demoquery program will now show the following output corresponding
 * to the notifications received.
Query 23 is changed Table changed is HR.DEPARTMENTS table_op 4 Row changed is AAAMBoAABAAAKX2AAA row_op 4 Query 23 is changed Table changed is HR.DEPARTMENTS table_op 4 Row changed is AAAMBoAABAAAKX2AAA row_op 4 *The demo program waits for exactly 10 notifications to be received before *logging off and unregistering the subscription. ***************************************************************************/ /*--------------------------------------------------------------------------- PRIVATE TYPES AND CONSTANTS ---------------------------------------------------------------------------*/ /*--------------------------------------------------------------------------- STATIC FUNCTION DECLARATIONS ---------------------------------------------------------------------------*/ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <oci.h> #define MAXSTRLENGTH 1024 #define bit(a,b) ((a)&(b)) static int notifications_processed = 0; static OCISubscription *subhandle1 = (OCISubscription *)0; static OCISubscription *subhandle2 = (OCISubscription *)0; static void checker(/*_ OCIError *errhp, sword status _*/); static void registerQuery(/*_ OCISvcCtx *svchp, OCIError *errhp, OCIStmt *stmthp, OCIEnv *envhp _*/); static void myCallback (/*_ dvoid *ctx, OCISubscription *subscrhp, dvoid *payload, ub4 *payl, dvoid *descriptor, ub4 mode _*/); static int NotificationDriver(/*_ int argc, char *argv[] _*/); static sword status; static boolean logged_on = FALSE; static void processRowChanges(OCIEnv *envhp, OCIError *errhp, OCIStmt *stmthp, OCIColl *row_changes); static void processTableChanges(OCIEnv *envhp, OCIError *errhp, OCIStmt *stmthp, OCIColl *table_changes); static void processQueryChanges(OCIEnv *envhp, OCIError *errhp, OCIStmt *stmthp, OCIColl *query_changes); static int nonractests2(/*_ int argc, char *argv[] _*/); int main(int argc, char **argv) { NotificationDriver(argc, argv); return 0; } int NotificationDriver(argc, argv) int argc; char *argv[]; { OCIEnv 
*envhp; OCISvcCtx *svchp, *svchp2; OCIError *errhp, *errhp2; OCISession *authp, *authp2; OCIStmt *stmthp, *stmthp2; OCIDuration dur, dur2; int i; dvoid *tmp; OCISession *usrhp; OCIServer *srvhp; printf("Initializing OCI Process\n"); /* Initialize the environment. The environment has to be initialized with OCI_EVENTS and OCI_OBJECTS to create a continuous query notification registration and receive notifications. */ OCIEnvCreate( (OCIEnv **) &envhp, OCI_EVENTS|OCI_OBJECT, (dvoid *)0, (dvoid * (*)(dvoid *, size_t)) 0, (dvoid * (*)(dvoid *, dvoid *, size_t))0, (void (*)(dvoid *, dvoid *)) 0, (size_t) 0, (dvoid **) 0 ); OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &errhp, OCI_HTYPE_ERROR, (size_t) 0, (dvoid **) 0); /* server contexts */ OCIHandleAlloc((dvoid *) envhp, (dvoid **) &srvhp, OCI_HTYPE_SERVER, (size_t) 0, (dvoid **) 0); OCIHandleAlloc((dvoid *) envhp, (dvoid **) &svchp, OCI_HTYPE_SVCCTX, (size_t) 0, (dvoid **) 0); checker(errhp,OCIServerAttach(srvhp, errhp, (text *) 0, (sb4) 0, (ub4) OCI_DEFAULT)); /* *)((text *)"HR"), (ub4)strlen((char *)"HR"), OCI_ATTR_USERNAME, errhp); OCIAttrSet((dvoid *)usrhp, (ub4)OCI_HTYPE_SESSION, (dvoid *)((text *)"HR"), (ub4)strlen((char *)"HR"), OCI_ATTR_PASSWORD, errhp); checker(errhp,OCISessionBegin (svchp, errhp, usrhp, OCI_CRED_RDBMS, OCI_DEFAULT)); /* Allocate a statement handle */ OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &stmthp, (ub4) OCI_HTYPE_STMT, 52, (dvoid **) &tmp); OCIAttrSet((dvoid *)svchp, (ub4)OCI_HTYPE_SVCCTX, (dvoid *)usrhp, (ub4)0, OCI_ATTR_SESSION, errhp); registerQuery(svchp, errhp, stmthp, envhp); printf("Waiting for Notifications\n"); while (notifications_processed !=10) { sleep(1); } printf ("Going to unregister HR\n"); fflush(stdout); /* Unregister HR */ checker(errhp, OCISubscriptionUnRegister(svchp, subhandle1, errhp, OCI_DEFAULT)); checker(errhp, OCISessionEnd(svchp, errhp, usrhp, (ub4) 0)); printf("HR Logged off.\n"); if (subhandle1) OCIHandleFree((dvoid *)subhandle1, OCI_HTYPE_SUBSCRIPTION); if 
(stmthp) OCIHandleFree((dvoid *)stmthp, OCI_HTYPE_STMT); if (srvhp) OCIHandleFree((dvoid *) srvhp, (ub4) OCI_HTYPE_SERVER); if (svchp) OCIHandleFree((dvoid *) svchp, (ub4) OCI_HTYPE_SVCCTX); if (authp) OCIHandleFree((dvoid *) usrhp, (ub4) OCI_HTYPE_SESSION); if (errhp) OCIHandleFree((dvoid *) errhp, (ub4) OCI_HTYPE_ERROR); if (envhp) OCIHandleFree((dvoid *) envhp, (ub4) OCI_HTYPE_ENV); return 0; } void checker(errhp, status) OCIError *errhp; sword status; { text errbuf[512]; sb4 errcode = 0; int retval = 1; switch (status) { case OCI_SUCCESS: retval = 0;; } if (retval) { exit(1); } } void processRowChanges(OCIEnv *envhp, OCIError *errhp, OCIStmt *stmthp, OCIColl *row_changes) { dvoid **row_descp; dvoid *row_desc; boolean exist; ub2 i, j; dvoid *elemind = (dvoid *)0; oratext *row_id; ub4 row_op; sb4 num_rows; if (!row_changes) return; checker(errhp, OCICollSize(envhp, errhp, (CONST OCIColl *) row_changes, &num_rows)); for (i=0; i<num_rows; i++) { checker(errhp, OCICollGetElem(envhp, errhp, (OCIColl *) row_changes, i, &exist, &row_descp, &elemind)); row_desc = *row_descp; checker(errhp, OCIAttrGet (row_desc, OCI_DTYPE_ROW_CHDES, (dvoid *)&row_id, NULL, OCI_ATTR_CHDES_ROW_ROWID, errhp)); checker(errhp, OCIAttrGet (row_desc, OCI_DTYPE_ROW_CHDES, (dvoid *)&row_op, NULL, OCI_ATTR_CHDES_ROW_OPFLAGS, errhp)); printf ("Row changed is %s row_op %d\n", row_id, row_op); fflush(stdout); } } void processTableChanges(OCIEnv *envhp, OCIError *errhp, OCIStmt *stmthp, OCIColl *table_changes) { dvoid **table_descp; dvoid *table_desc; dvoid **row_descp; dvoid *row_desc; OCIColl *row_changes = (OCIColl *)0; boolean exist; ub2 i, j; dvoid *elemind = (dvoid *)0; oratext *table_name; ub4 table_op; sb4 num_tables; if (!table_changes) return; checker(errhp, OCICollSize(envhp, errhp, (CONST OCIColl *) table_changes, &num_tables)); for (i=0; i<num_tables; i++) { checker(errhp, OCICollGetElem(envhp, errhp, (OCIColl *) table_changes, i, &exist, &table_descp, &elemind)); table_desc = 
*table_descp; checker(errhp, OCIAttrGet (table_desc, OCI_DTYPE_TABLE_CHDES, (dvoid *)&table_name, NULL, OCI_ATTR_CHDES_TABLE_NAME, errhp)); checker(errhp, OCIAttrGet (table_desc, OCI_DTYPE_TABLE_CHDES, (dvoid *)&table_op, NULL, OCI_ATTR_CHDES_TABLE_OPFLAGS, errhp)); checker(errhp, OCIAttrGet (table_desc, OCI_DTYPE_TABLE_CHDES, (dvoid *)&row_changes, NULL, OCI_ATTR_CHDES_TABLE_ROW_CHANGES, errhp)); printf ("Table changed is %s table_op %d\n", table_name,table_op); fflush(stdout); if (!bit(table_op, OCI_OPCODE_ALLROWS)) processRowChanges(envhp, errhp, stmthp, row_changes); } } void processQueryChanges(OCIEnv *envhp, OCIError *errhp, OCIStmt *stmthp, OCIColl *query_changes) { sb4 num_queries; ub8 queryid; OCINumber qidnum; ub4 queryop; dvoid *elemind = (dvoid *)0; dvoid *query_desc; dvoid **query_descp; ub2 i; boolean exist; OCIColl *table_changes = (OCIColl *)0; if (!query_changes) return; checker(errhp, OCICollSize(envhp, errhp, (CONST OCIColl *) query_changes, &num_queries)); for (i=0; i < num_queries; i++) { checker(errhp, OCICollGetElem(envhp, errhp, (OCIColl *) query_changes, i, &exist, &query_descp, &elemind)); query_desc = *query_descp; checker(errhp, OCIAttrGet (query_desc, OCI_DTYPE_CQDES, (dvoid *)&queryid, NULL, OCI_ATTR_CQDES_QUERYID, errhp)); checker(errhp, OCIAttrGet (query_desc, OCI_DTYPE_CQDES, (dvoid *)&queryop, NULL, OCI_ATTR_CQDES_OPERATION, errhp)); printf(" Query %d is changed\n", queryid); if (queryop == OCI_EVENT_DEREG) printf("Query Deregistered\n"); checker(errhp, OCIAttrGet (query_desc, OCI_DTYPE_CQDES, (dvoid *)&table_changes, NULL, OCI_ATTR_CQDES_TABLE_CHANGES, errhp)); processTableChanges(envhp, errhp, stmthp, table_changes); } } void myCallback (ctx, subscrhp, payload, payl, descriptor, mode) dvoid *ctx; OCISubscription *subscrhp; dvoid *payload; ub4 *payl; dvoid *descriptor; ub4 mode; { OCIColl *table_changes = (OCIColl *)0; OCIColl *row_changes = (OCIColl *)0; dvoid *change_descriptor = descriptor; ub4 notify_type; ub2 i, j; OCIEnv 
*envhp; OCIError *errhp; OCIColl *query_changes = (OCIColl *)0; OCIServer *srvhp; OCISvcCtx *svchp; OCISession *usrhp; dvoid *tmp; OCIStmt *stmthp; ); OCIAttrGet (change_descriptor, OCI_DTYPE_CHDES, (dvoid *) ¬ify_type, NULL, OCI_ATTR_CHDES_NFYTYPE, errhp); fflush(stdout); if (notify_type == OCI_EVENT_SHUTDOWN || notify_type == OCI_EVENT_SHUTDOWN_ANY) { printf("SHUTDOWN NOTIFICATION RECEIVED\n"); fflush(stdout); notifications_processed++; return; } if (notify_type == OCI_EVENT_STARTUP) { printf("STARTUP NOTIFICATION RECEIVED\n"); fflush(stdout); notifications_processed++; return; } notifications_processed++; checker(errhp, OCIServerAttach( srvhp, errhp, (text *) 0, (sb4) 0, (ub4) OCI_DEFAULT)); OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &svchp, (ub4) OCI_HTYPE_SVCCTX, 52, (dvoid **) &tmp); /* *)"HR", (ub4)strlen("HR"), OCI_ATTR_USERNAME, errhp); OCIAttrSet((dvoid *)usrhp, (ub4)OCI_HTYPE_SESSION, (dvoid *)"HR", (ub4)strlen("HR"), OCI_ATTR_PASSWORD, errhp); checker); /* Allocate a statement handle */ OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &stmthp, (ub4) OCI_HTYPE_STMT, 52, (dvoid **) &tmp); if (notify_type == OCI_EVENT_OBJCHANGE) { checker(errhp, OCIAttrGet (change_descriptor, OCI_DTYPE_CHDES, &table_changes, NULL, OCI_ATTR_CHDES_TABLE_CHANGES, errhp)); processTableChanges(envhp, errhp, stmthp, table_changes); } else if (notify_type == OCI_EVENT_QUERYCHANGE) { checker(errhp, OCIAttrGet (change_descriptor, OCI_DTYPE_CHDES, &query_changes, NULL, OCI_ATTR_CHDES_QUERIES, errhp)); processQueryChanges(envhp, errhp, stmthp, query_changes); } checker(errhp, OCISessionEnd(svchp, errhp, usrhp, OCI_DEFAULT)); checker(errhp, OCIServerDetach(srvhp, errhp, OCI_DEFAULT)); if (stmthp) OCIHandleFree((dvoid *)stmthp, OCI_HTYPE_STMT); if (errhp) OCIHandleFree((dvoid *)errhp, OCI_HTYPE_ERROR); if (srvhp) OCIHandleFree((dvoid *)srvhp, OCI_HTYPE_SERVER); if (svchp) OCIHandleFree((dvoid *)svchp, OCI_HTYPE_SVCCTX); if (usrhp) OCIHandleFree((dvoid *)usrhp, OCI_HTYPE_SESSION); if 
(envhp) OCIHandleFree((dvoid *)envhp, OCI_HTYPE_ENV); } void registerQuery(svchp, errhp, stmthp, envhp) OCISvcCtx *svchp; OCIError *errhp; OCIStmt *stmthp; OCIEnv *envhp; { OCISubscription *subscrhp; ub4 namespace = OCI_SUBSCR_NAMESPACE_DBCHANGE; ub4 timeout = 60; OCIDefine *defnp1 = (OCIDefine *)0; OCIDefine *defnp2 = (OCIDefine *)0; OCIDefine *defnp3 = (OCIDefine *)0; OCIDefine *defnp4 = (OCIDefine *)0; OCIDefine *defnp5 = (OCIDefine *)0; int mgr_id =0; text query_text1[] = "select last_name, employees.department_id, department_name \ from employees,departments where employee_id = 200 and employees.department_id =\ departments.department_id"; ub4 num_prefetch_rows = 0; ub4 num_reg_tables; OCIColl *table_names; ub2 i; boolean rowids = TRUE; ub4 qosflags = OCI_SUBSCR_CQ_QOS_QUERY ; int empno=0; OCINumber qidnum; ub8 qid; char outstr[MAXSTRLENGTH], dname[MAXSTRLENGTH]; int q3out; fflush(stdout); /* allocate subscription handle */ OCIHandleAlloc ((dvoid *) envhp, (dvoid **) &subscrhp, OCI_HTYPE_SUBSCRIPTION, (size_t) 0, (dvoid **) 0); /* set the namespace to DBCHANGE */ checker(errhp, OCIAttrSet (subscrhp, OCI_HTYPE_SUBSCRIPTION, (dvoid *) &namespace, sizeof(ub4), OCI_ATTR_SUBSCR_NAMESPACE, errhp)); /* Associate a notification callback with the subscription */ checker(errhp, OCIAttrSet (subscrhp, OCI_HTYPE_SUBSCRIPTION, (void *)myCallback, 0, OCI_ATTR_SUBSCR_CALLBACK, errhp)); /* Allow extraction of rowid information */ checker(errhp, OCIAttrSet (subscrhp, OCI_HTYPE_SUBSCRIPTION, (dvoid *)&rowids, sizeof(ub4), OCI_ATTR_CHNF_ROWIDS, errhp)); checker(errhp, OCIAttrSet (subscrhp, OCI_HTYPE_SUBSCRIPTION, (dvoid *)&qosflags, sizeof(ub4), OCI_ATTR_SUBSCR_CQ_QOSFLAGS, errhp)); /* Create a new registration in the DBCHANGE namespace */ checker(errhp, OCISubscriptionRegister(svchp, &subscrhp, 1, errhp, OCI_DEFAULT)); /* Multiple queries can now be associated with the subscription */ subhandle1 = subscrhp; printf("Registering query : %s\n", (const signed char *)query_text1); /* 
Prepare the statement */ checker(errhp, OCIStmtPrepare (stmthp, errhp, query_text1, (ub4)strlen((const signed char *)query_text1), OCI_V7_SYNTAX, OCI_DEFAULT)); checker(errhp, OCIDefineByPos(stmthp, &defnp1, errhp, 1, (dvoid *)outstr, MAXSTRLENGTH * sizeof(char), SQLT_STR, (dvoid *)0, (ub2 *)0, (ub2 *)0, OCI_DEFAULT)); checker(errhp, OCIDefineByPos(stmthp, &defnp2, errhp, 2, (dvoid *)&empno, sizeof(empno), SQLT_INT, (dvoid *)0, (ub2 *)0, (ub2 *)0, OCI_DEFAULT)); checker(errhp, OCIDefineByPos(stmthp, &defnp3, errhp, 3, (dvoid *)&dname, sizeof(dname), SQLT_STR, (dvoid *)0, (ub2 *)0, (ub2 *)0, OCI_DEFAULT)); /* Associate the statement with the subscription handle */ OCIAttrSet (stmthp, OCI_HTYPE_STMT, subscrhp, 0, OCI_ATTR_CHNF_REGHANDLE, errhp); /* Execute the statement, the execution performs object registration */ checker(errhp, OCIStmtExecute (svchp, stmthp, errhp, (ub4) 1, (ub4) 0, (CONST OCISnapshot *) NULL, (OCISnapshot *) NULL , OCI_DEFAULT)); fflush(stdout); OCIAttrGet(stmthp, OCI_HTYPE_STMT, &qid, (ub4 *)0, OCI_ATTR_CQ_QUERYID, errhp); printf("Query Id %d\n", qid); /* commit */ checker(errhp, OCITransCommit(svchp, errhp, (ub4) 0)); } static void cleanup(envhp, svchp, srvhp, errhp, usrhp) OCIEnv *envhp; OCISvcCtx *svchp; OCIServer *srvhp; OCIError *errhp; OCISession *usrhp; { /* detach from the server */ checker(errhp, OCISessionEnd(svchp, errhp, usrhp, OCI_DEFAULT)); checker(errhp, OCIServerDetach(srvhp, errhp, (ub4)OCI_DEFAULT)); if (usrhp) (void) OCIHandleFree((dvoid *) usrhp, (ub4) OCI_HTYPE_SESSION); if (svchp) (void) OCIHandleFree((dvoid *) svchp, (ub4) OCI_HTYPE_SVCCTX); if (srvhp) (void) OCIHandleFree((dvoid *) srvhp, (ub4) OCI_HTYPE_SERVER); if (errhp) (void) OCIHandleFree((dvoid *) errhp, (ub4) OCI_HTYPE_ERROR); if (envhp) (void) OCIHandleFree((dvoid *) envhp, (ub4) OCI_HTYPE_ENV); } The OCI functions, OCIDBStartup() and OCIDBShutdown(), provide the minimal interface needed to start up and shut down an Oracle database. 
Before calling OCIDBStartup(), the C program must connect to the server and start a SYSDBA or SYSOPER session in the preliminary authentication mode. This mode is the only one permitted when the instance is not up, and it is used only to bring up the instance. A call to OCIDBStartup() starts up one server instance without mounting or opening the database. To mount and open the database, end the preliminary authentication session and start a regular SYSDBA or SYSOPER session to execute the appropriate ALTER DATABASE statements.

An active SYSDBA or SYSOPER session is needed to shut down the database. For all modes other than OCI_DBSHUTDOWN_ABORT, make two calls to OCIDBShutdown(): one to initiate shutdown by prohibiting further connections to the database, followed by the appropriate ALTER DATABASE commands to dismount and close it; and the other call to finish shutdown by bringing the instance down. In special circumstances, to shut down the database as fast as possible, make a single OCIDBShutdown() call in OCI_DBSHUTDOWN_ABORT mode. You cannot start up or shut down the database when connected to a shared server through a dispatcher.

The OCIAdmin administration handle datatype is used to make the interface extensible. OCIAdmin is associated with the handle type OCI_HTYPE_ADMIN. Passing a value for the OCIAdmin parameter, admhp, is optional for OCIDBStartup() and is not needed by OCIDBShutdown().

See Also: "Administration Handle Attributes"; Oracle Database Administrator's Guide

To do a startup, you must be connected to the database as SYSOPER or SYSDBA in OCI_PRELIM_AUTH mode. You cannot be connected to a shared server through a dispatcher. To use a client-side parameter file (pfile), the attribute OCI_ATTR_ADMIN_PFILE must be set in the administration handle using OCIAttrSet(); otherwise, a server-side parameter file (spfile) is used. In the latter case, pass (OCIAdmin *)0. A call to OCIDBStartup() starts up one instance on the server.
The following sample code uses a client-side parameter file (pfile) that is set in the administration handle. The pfile OCIAttrSet() call, the OCIDBStartup() call, and the OCIStmtExecute() calls were missing from the fragment as published and have been filled in here with the documented signatures:

...
/* Example 0 - Startup: */
OCIAdmin *admhp;
text *mount_stmt = (text *)"ALTER DATABASE MOUNT";
text *open_stmt = (text *)"ALTER DATABASE OPEN";
text *pfile = (text *)"/ade/viewname/oracle/work/t_init1.ora";

/* Start the authentication session */
checkerr(errhp, OCISessionBegin (svchp, errhp, usrhp, OCI_CRED_RDBMS,
                                 OCI_SYSDBA|OCI_PRELIM_AUTH));

/* Allocate admin handle for OCIDBStartup */
checkerr(errhp, OCIHandleAlloc((void *) envhp, (void **) &admhp,
                               (ub4) OCI_HTYPE_ADMIN, (size_t) 0,
                               (void **) 0));

/* Set the client-side pfile in the admin handle */
checkerr(errhp, OCIAttrSet((void *) admhp, (ub4) OCI_HTYPE_ADMIN,
                           (void *) pfile, (ub4) strlen((char *) pfile),
                           OCI_ATTR_ADMIN_PFILE, errhp));

/* Start the instance */
checkerr(errhp, OCIDBStartup(svchp, errhp, admhp, OCI_DEFAULT, 0));

/* End the authentication session */
OCISessionEnd(svchp, errhp, usrhp, (ub4)OCI_DEFAULT);

/* Start the sysdba session */
checkerr(errhp, OCISessionBegin (svchp, errhp, usrhp, OCI_CRED_RDBMS,
                                 OCI_SYSDBA));

/* Mount the database */
checkerr(errhp, OCIStmtPrepare2(svchp, &stmthp, errhp, mount_stmt,
                                (ub4) strlen((char *) mount_stmt),
                                NULL, 0, OCI_NTV_SYNTAX, OCI_DEFAULT));
checkerr(errhp, OCIStmtExecute(svchp, stmthp, errhp, 1, 0, NULL, NULL,
                               OCI_DEFAULT));

/* Open the database */
checkerr(errhp, OCIStmtPrepare2(svchp, &stmthp, errhp, open_stmt,
                                (ub4) strlen((char *) open_stmt),
                                NULL, 0, OCI_NTV_SYNTAX, OCI_DEFAULT));
checkerr(errhp, OCIStmtExecute(svchp, stmthp, errhp, 1, 0, NULL, NULL,
                               OCI_DEFAULT));

/* End the sysdba session */
OCISessionEnd(svchp, errhp, usrhp, (ub4)OCI_DEFAULT);
...

To shut down the database, you must be connected as SYSOPER or SYSDBA. You cannot be connected to a shared server through a dispatcher.

/* Example 1 - Orderly shutdown: */
...
text *close_stmt = (text *)"ALTER DATABASE CLOSE NORMAL";
text *dismount_stmt = (text *)"ALTER DATABASE DISMOUNT";

/* Start the sysdba session */
checkerr(errhp, OCISessionBegin(svchp, errhp, usrhp, OCI_CRED_RDBMS, OCI_SYSDBA));

/* Shutdown in the default mode (transactional, transactional-local, or immediate would be fine too) */
checkerr(errhp, OCIDBShutdown(svchp, errhp, (OCIAdmin *)0, OCI_DEFAULT));

/* Close the database */
checkerr(errhp, OCIStmtPrepare2(svchp, &stmthp, errhp, close_stmt,
                                (ub4)strlen((char *)close_stmt),
                                NULL, 0, OCI_NTV_SYNTAX, OCI_DEFAULT));
checkerr(errhp, OCIStmtExecute(svchp, stmthp, errhp, 1, 0, NULL, NULL, OCI_DEFAULT));
checkerr(errhp, OCIStmtRelease(stmthp, errhp, NULL, 0, OCI_DEFAULT));

/* Dismount the database */
checkerr(errhp, OCIStmtPrepare2(svchp, &stmthp, errhp, dismount_stmt,
                                (ub4)strlen((char *)dismount_stmt),
                                NULL, 0, OCI_NTV_SYNTAX, OCI_DEFAULT));
checkerr(errhp, OCIStmtExecute(svchp, stmthp, errhp, 1, 0, NULL, NULL, OCI_DEFAULT));
checkerr(errhp, OCIStmtRelease(stmthp, errhp, NULL, 0, OCI_DEFAULT));

/* Final shutdown */
checkerr(errhp, OCIDBShutdown(svchp, errhp, (OCIAdmin *)0, OCI_DBSHUTDOWN_FINAL));

/* End the sysdba session */
checkerr(errhp, OCISessionEnd(svchp, errhp, usrhp, (ub4)OCI_DEFAULT));
...

The next shutdown example uses OCI_DBSHUTDOWN_ABORT mode:

/* Example 2 - Shutdown using abort: */
...
/* Start the sysdba session */
checkerr(errhp, OCISessionBegin(svchp, errhp, usrhp, OCI_CRED_RDBMS, OCI_SYSDBA));

/* Shutdown in the abort mode */
checkerr(errhp, OCIDBShutdown(svchp, errhp, (OCIAdmin *)0, OCI_DBSHUTDOWN_ABORT));

/* End the sysdba session */
checkerr(errhp, OCISessionEnd(svchp, errhp, usrhp, (ub4)OCI_DEFAULT));
...

The SELECT ... FOR UPDATE statement identifies the rows that will be updated and then locks each row in the result set. This is useful when you want to base an update on the existing values in a row. In that case, you must make sure that another user does not change the row.

Note that when specifying character buffers for storing the values of the ROWIDs (for example, when getting them in SQLT_STR format), allocate enough memory for storing the ROWIDs. A distinction should be made between the ROWID datatype and the UROWID datatype. The ROWID datatype can store only physical ROWIDs, but UROWID is the type that can store logical ROWIDs (identifiers for the rows of index-organized tables) as well. The maximum internal length is 10 bytes for the ROWID type, but 3950 bytes for the UROWID datatype.
Dynamic define is equivalent to calling OCIDefineByPos() with the mode set to OCI_DYNAMIC_FETCH. Dynamic defines enable you to set up additional attributes for a particular define handle. A dynamic define specifies a callback function, which is invoked at runtime to get a pointer to the buffer into which the fetched data, or a piece of it, is retrieved.

The attribute OCI_ATTR_FETCH_ROWID must be set on the statement handle before implicit fetching of ROWIDs can be used. This feature is useful for fetching many rows at one time into the user buffers and getting their respective ROWIDs at the same time. It allows for fetching of ROWIDs in the case of SELECT ... FOR UPDATE statements even when ROWID is not one of the columns in the SELECT query. When fetching the data row by row into the user buffers, the existing attribute OCI_ATTR_ROWID can be used. If you use this feature to fetch the ROWIDs, the attribute OCI_ATTR_ROWID on the statement handle cannot be used simultaneously to get the ROWIDs; only one of them can be used at a time for a particular statement handle.

See Also: "OCI_ATTR_FETCH_ROWID"

The client result cache enables client-side caching of SQL query result sets in client memory. When a query gets a cache hit, it is served locally from this cache, or else the query execution will be on the server. You must enable OCI statement caching or cache statements at the application level when using the client result cache.

See Also: "Statement Caching in OCI"

The benefits of the OCI client query result cache are:

- Since the result cache is on the client side, a cache hit causes OCIStmtExecute() and OCIStmtFetch() calls to be processed locally, instead of making round trips to the server.
- Client memory is cheaper than server memory.
- A local cache on the client will have better locality of reference for queries executed by that client.

You need to annotate a query with a /*+ result_cache */ hint to indicate that results are to be stored in the query result cache. It is recommended that applications annotate queries with result cache hints for read-only or read-mostly database objects.
If the result cache hint is used for queries whose results are large, these results can use a large amount of client result cache memory. Because each set of bind values specified by the application creates a different cached result set (for the same SQL text), these result sets together can use a large amount of client result cache memory.

You can set the RESULT_CACHE_MODE initialization parameter to control whether the SQL query client result cache is used for all queries (when possible), or only for queries that are annotated with the result cache hint. Queries annotated with the /*+ no_result_cache */ hint will always go to the server, even if there might be a valid cached result set. An OCIStmtExecute() call must be made for each statement handle to be able to match a cached result set. Oracle recommends that applications use OCI statement caching.

For a result set to be cached, the OCIStmtExecute(), OCIStmtFetch(), or OCIStmtFetch2() calls that transparently create this cached result set must fetch rows until they get an ORA-01403 ("no data found") error. Subsequent OCIStmtExecute() or OCIStmtFetch() calls that match a locally cached result set need not fetch to completion.

Unless the RESULT_CACHE_MODE server initialization parameter is set to FORCE, you must explicitly specify the queries to be cached via SQL hints. The SQL /*+ result_cache */ or /*+ no_result_cache */ hint needs to be set in the SQL text passed to OCIStmtPrepare() and OCIStmtPrepare2() calls.

See Also: Oracle Database SQL Language Reference, for more information about these SQL hints; Oracle Database Reference, for more information about RESULT_CACHE_MODE

To use this feature, applications must be relinked with release 11.1 or higher client libraries and be connected to a release 11.1 or higher database server. This feature is available to all OCI applications, including the JDBC Type II driver, OCCI, Pro*C/C++, and ODP.NET. The OCI drivers automatically pass the result cache hint to OCIStmtPrepare() and OCIStmtPrepare2() calls, thereby getting the benefits of caching.
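For illustration, the hint is embedded directly in the statement text that is passed to the prepare call (the table and column names below are hypothetical, not from this guide):

```sql
-- Ask for this result set to be stored in the query result cache
SELECT /*+ result_cache */ last_name, salary FROM employees;

-- Force this query to bypass any cached result set and go to the server
SELECT /*+ no_result_cache */ last_name, salary FROM employees;
```

The same SQL text with different bind values produces distinct cached result sets, which is why large-result queries deserve caution before annotating them.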
There are queries that are not cached on the OCI client even if the result cache hint is specified. Such queries may be cached on the database if the server result cache feature is enabled (see Oracle Database Concepts, "SQL Query Result Cache", for more information). If a SQL query includes any of the following, then the result set of that query is not cached in the OCI client result cache:

- Views
- Remote objects
- Complex types in the select list
- Snapshot-based or flashback queries
- Queries executed in a serializable, read-only transaction, or inside a flashback session
- Queries that have PL/SQL functions in them
- Queries that have VPD policies enabled on the tables

Note: The OCI client result cache is not supported in release 11.1 with database resident connection pooling (DRCP).

The client cache transparently keeps its cached result sets consistent with any session state or database changes that can affect them. When a transaction modifies the data or metadata of any of the database objects used to construct a cached result, an invalidation is sent to the OCI client on its subsequent round trip to the server. If the OCI application makes no database calls for a period of time, the client cache lag setting forces the next OCIStmtExecute() call to make a database call to check for such invalidations. The cached result sets relevant to database invalidations are invalidated immediately, and no subsequent OCIStmtExecute() calls can match such invalid cached result sets; their memory is eventually reclaimed as unused. If a session has a transaction open, OCI ensures that its queries that reference database objects changed in this transaction go to the server instead of the client cache. This consistency mechanism ensures that the OCI cache will always be close to committed database changes.
If the OCI application has relatively frequent calls involving database round trips due to queries that cannot be cached (such as DML statements, OCILob calls, and so on), then these calls transparently keep the client cache up-to-date with database changes. The OCI client result cache does not require thread support in the client.

The client result cache has server initialization parameters and client configuration parameters for its deployment-time settings. There are two server initialization parameters:

CLIENT_RESULT_CACHE_SIZE: The default value is zero, implying that the client cache feature is disabled. To enable the client result cache feature, set the size to 32768 bytes (32 KB) or more by means of the CLIENT_RESULT_CACHE_SIZE initialization parameter. You can view the current default maximum size by displaying the value of the CLIENT_RESULT_CACHE_SIZE parameter.

CLIENT_RESULT_CACHE_LAG: This parameter sets the maximum time that can pass before the OCI client checks the server for invalidations; hence the client result cache can lag behind any changes in the database that affect its result sets. The default is 3000 milliseconds. You can view the current lag by displaying the value of the CLIENT_RESULT_CACHE_LAG parameter.

If you want different values, set these parameters accordingly; setting CLIENT_RESULT_CACHE_SIZE back to zero disables the client result cache.

OCI periodically sends, on existing round trips from the OCI client, statistics related to its client cache to the server, where they are stored in CLIENT_RESULT_CACHE_STATS$. Information such as the number of result sets successfully cached, the number of cache hits, and the number of cached result sets invalidated is stored here. The number of cache misses for queries is at least equal to the number of Create Counts in the client result cache statistics; a more precise cache miss count equals the number of server executions as seen in server Automatic Workload Repository (AWR) reports.

See Also: Oracle Database Reference, for information about the CLIENT_RESULT_CACHE_STATS$ view

The client-side result cache is a separate feature from the server result cache.
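As a concrete sketch, the two server parameters described above might be set like this in the server parameter file (the values are illustrative, not recommendations):

```
# Server initialization parameters (sketch)
CLIENT_RESULT_CACHE_SIZE=65536   # maximum client per-process cache, in bytes; 0 (the default) disables it
CLIENT_RESULT_CACHE_LAG=3000     # maximum invalidation lag, in milliseconds (3000 is the default)
```

After changing these, the client cache takes effect for OCI clients of release 11.1 or higher that annotate (or force-cache) their queries as described above.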
The client result cache caches results of top-level SQL queries in OCI client memory, while the server result cache caches result sets in server SGA memory. The server result cache may also cache query fragments. Client result caching can be enabled independently of the server result cache, though they both share the SQL result cache hints and some of the parameter settings (see Oracle Database Concepts, "SQL Query Result Cache", for details).

Fault diagnosability was introduced into OCI in Oracle Database 11g Release 1 (11.1). An incident (an occurrence of a problem) on the OCI client is captured without user intervention in the form of diagnostic data: dump files or core dump files. The diagnostic data is stored in an Automatic Diagnostic Repository (ADR) subdirectory created for the incident. For example, if a Linux or UNIX application fails with a null pointer reference, then the core file is written in the ADR home directory (if it exists) instead of the operating system directory. The ADR subdirectory structure and a utility for working with the output, the ADR Command Interpreter (ADRCI), are discussed in the following sections.

An ADR home is the root directory for all diagnostic data for an instance of a particular product, such as OCI, and a particular operating system user. ADR homes are grouped under the same root directory, the ADR base. Fault diagnosability and the ADR structure for the Oracle database are described in detail in the following documentation:

See Also: Oracle Database Administrator's Guide, "Managing Diagnostic Data"

The location of the ADR base is determined by OCI in the following order:

OCI first looks in the file sqlnet.ora (if it exists) for a statement such as (Linux or UNIX):

ADR_BASE=/foo/adr

The directory adr must already exist and be writable by all operating system users who execute OCI applications and want to share the same ADR base.
In that case, the ADR home could be something like /home/chuck/test/oradiag_chuck/diag/clients/user_chuck/host_4144260688_11.

If $ORACLE_BASE (%ORACLE_BASE% on Windows) exists, the ADR base is $ORACLE_BASE. In this case, the client subdirectory exists because it was created during installation of the database using the Oracle Universal Installer. For example, if $ORACLE_BASE is /home/chuck/obase, then the ADR base is /home/chuck/obase and the ADR home could be similar to /home/chuck/obase/diag/clients/user_chuck/host_4144260688_11.

If $ORACLE_HOME (%ORACLE_HOME% on Windows) exists, the ADR base is $ORACLE_HOME/log. In this case, the client subdirectory exists because it was created during installation of the database using the Oracle Universal Installer. For example, if $ORACLE_HOME is /ade/chuck_l1/oracle, then the ADR base is /ade/chuck_l1/oracle/log.

Otherwise, the home directory of the operating system user is used: for example, the ADR base is /home/chuck/oradiag_chuck and the ADR home could be /home/chuck/oradiag_chuck/diag/clients/user_chuck/host_4144260688_11.

See Also: "OCI Instant Client"

On Windows, if the application is run as a service, the home directory option is skipped.

Failing that, a temporary directory is used. On Linux or UNIX this is /var/tmp: for example, in an Instant Client, if $HOME is not writable, then the ADR base is /var/tmp/oradiag_chuck and the ADR home could be /var/tmp/oradiag_chuck/diag/clients/user_chuck/host_4144260688_11. On Windows, the temporary directories are searched in this order: %TMP%, %TEMP%, %USERPROFILE%, and then the Windows system directory.

If none of these directory choices is available and writable, the ADR is not created and there are no diagnostics.

See Also: Oracle Database Net Services Reference

ADRCI is a command-line tool that enables you to view diagnostic data within the ADR and to package incident and problem information into a zip file for Oracle Support to use. ADRCI can be used interactively and from a script.

A problem is a critical error in OCI or the client. Each problem has a problem key. An incident is a single occurrence of a problem and is identified by a unique numeric incident ID.
Each incident has a problem key, which is a set of attributes: the ORA error number, error parameter values, and other information. Two incidents have the same root cause if their problem keys match.

See Also: Oracle Database Utilities for an introduction to ADRCI

The following example launches ADRCI on a Linux system, uses the HELP command to display help for the SHOW BASE command, and then runs SHOW BASE with the option -PRODUCT CLIENT, which is necessary for OCI applications. ADRCI commands are case-insensitive.

% adrci

ADRCI: Release 11.1.0.5.0 - Beta on Wed May 2 15:53:06 ...
...
"/home/chuck/oradiag_chuck"

The SET BASE command can then be used to change the ADR base for the ADRCI session, and quit exits ADRCI.

To turn off diagnosability, set the following parameters in sqlnet.ora (the default for each is TRUE):

DIAG_ADR_ENABLED=FALSE
DIAG_DDE_ENABLED=FALSE

To turn off the OCI signal handler and re-enable standard operating system failure processing, place the following parameter setting in sqlnet.ora:

DIAG_SIGHANDLER_ENABLED=FALSE

As noted previously, ADR_BASE is used in sqlnet.ora to set the location of the ADR base.

See Also: Oracle Database Net Services Reference; Oracle Database Net Services Administrator's Guide, for more information about the structure of the Automatic Diagnostic Repository
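Collecting the client-side settings mentioned above, a sqlnet.ora might contain the following (a sketch; the path is illustrative, and you would use either the ADR_BASE line or the disabling lines depending on whether you want diagnostics at all):

```
# sqlnet.ora -- client diagnosability settings (sketch)
ADR_BASE=/foo/adr              # directory must already exist and be writable

# To turn ADR-based diagnostics off instead (defaults are TRUE):
# DIAG_ADR_ENABLED=FALSE
# DIAG_DDE_ENABLED=FALSE

# To restore standard OS failure processing:
# DIAG_SIGHANDLER_ENABLED=FALSE
```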
http://docs.oracle.com/cd/B28359_01/appdev.111/b28395/oci10new.htm
CC-MAIN-2014-10
en
refinedweb
http://www.roseindia.net/tutorialhelp/comment/34576
On Wed, 5 Mar 2003, Thomas Leonard wrote:
> On Wed, Mar 05, 2003 at 10:14:45AM -0500, Alexander Larsson wrote:
> > On Tue, 4 Mar 2003, Thomas Leonard wrote:
> > > I've put up a new version of the MIME spec and shared database:
> > >
> > > The format of the generated 'magic' file has changed, making it much
> > > easier to parse. The update-mime-database command now performs more
> > > validation and has better error reporting.
> >
> > This looks pretty good to me.
> > Some comments on the spec:
> >
> > The XML namespaces section says:
> > "The lines are sorted (using strcmp) and there are ..."
> > This means strcmp in the "C" locale, right?
>
> strcmp is independent of locale, isn't it? Isn't that what strcoll is for?

Oh, yeah. Sorry about that.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Alexander Larsson                   Red Hat, Inc
alexl redhat com    alla lysator liu se
He's a genetically engineered coffee-fuelled rock star haunted by memories of 'Nam. She's a blind green-skinned Hell's Angel with her own daytime radio talk show. They fight crime!
https://www.redhat.com/archives/xdg-list/2003-March/msg00032.html
Hi, I am throwing a structure from within my class which contains specific error information. Would anyone be able to tell me if it is stack safe or not? I ask because I declare an instance of the structure on the stack and then assign data to its members before throwing it. For example (cleaned up so it compiles; std::string stands in for MFC's CString so the example is self-contained). Is it safe to process the error in this way?

```cpp
#include <iostream>
#include <string>
using namespace std;

// std::string used here in place of CString
struct ErrorInfo
{
    string cStrErrorMsg;
    string cStrAdditional;
};

void func1(void);
void func2(void);

int main(void)
{
    try
    {
        func1();
    }
    catch (ErrorInfo error)
    {
        cout << error.cStrErrorMsg << " " << error.cStrAdditional << "." << endl;
    }
    return 0;
}

void func1(void)
{
    try
    {
        func2();
    }
    catch (ErrorInfo error)
    {
        throw;   // rethrow the original exception object
    }
}

void func2(void)
{
    ErrorInfo error;
    error.cStrErrorMsg = "Error Occurred";
    error.cStrAdditional = "Nothing";
    throw error;   // thrown by value: a copy outlives this stack frame
}
```
http://cboard.cprogramming.com/cplusplus-programming/55580-throwing-structures.html
This is a discussion on Python help modules act strange - Suse.

Hi, I don't know if this is the right group to post this, but because I am always reading the posts from this group (I don't know any other group related to Python), I'll post it anyway. For the last few days I've been learning Python, and now I've found something strange that happens if I enter the interactive help from Python and list all available modules with "modules" (or by typing help("modules") from the interpreter). It will start compiz-manager and then hang until I quit the program, after which it displays the list of all available modules.

> snorzzz@optimus:~> python
> Python 2.5.1 (r251:54863, Aug 1 2008, 00:32:16)
> [GCC 4.2.1 (SUSE Linux)] on linux2   # I'm using openSUSE ver. 10.3
> Type "help", "copyright", "credits" or "license" for more information.
> >>> help("modules")
>
> Please wait a moment while I gather a list of all available modules...
>
> * Detected Session: fluxbox
> * Searching for installed applications...
> * Non-mesa driver on Xgl detected
> ... no mesa libGL found for preloading, this may not work!
> * Starting Compiz
> ... executing: compiz --replace --sm-disable --ignore-desktop-hints ccp
> compiz: Trying '/usr/$LIB/libIndirectGL.so.1'
> compiz (core) - Error: Another window manager is already running on screen: 0
> compiz (core) - Fatal: No manageable screens found on display :0.0
> # It stops here, waiting until I "quit"

I know that help() will try to import the modules with the same name as its parameter. In this case, correct me if I'm wrong, help() will import them all. There are two modules listed that I suspect to be responsible, FusionIcon and compizconfig, but importing both of them directly from the interpreter does nothing.
If someone would be kind enough to explain to me why this happens, I would be very grateful. C
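For anyone who lands here later: pydoc's help("modules") scans every directory on sys.path and, at least in older Pythons, can end up loading modules to extract their one-line synopses, so import-time side effects (such as a module that launches compiz) get executed. If you only need the list of names, the standard-library pkgutil module can enumerate them without importing anything. A small sketch, written against modern Python 3 rather than the 2.5 shown above:

```python
import pkgutil

# Enumerate top-level modules and packages reachable from sys.path.
# pkgutil.iter_modules() only inspects filesystem entries; it does not
# import the modules, so no import-time side effects can fire.
names = sorted(info.name for info in pkgutil.iter_modules())

print(len(names), "importable top-level names")
print("json" in names)   # standard-library packages are listed too
```

This won't show you docstring synopses the way help("modules") does, but it's a side-effect-free way to see what would be importable.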
http://fixunix.com/suse/534133-python-help-modules-act-strange.html
Matthew wrote:
>> I am in no way trying to attack you. I am just pointing out that C and
>> C++ breeds bad programming practice, and we need protection from them.
> [snip]
> Bottom line: if you're a good engineer, you're a good engineer. If you're
> not, you're not. The language used won't affect this truth. And avoiding
> peaking inside abstractions won't help you become one.

I think you didn't get his point: he's not worried that /he/ will misuse pointers, he's worried that _his colleagues_ will.

By implementation detail, are you speaking to it nulling the pointer? I was pretty sure that was in the spec, and not in the implementation.

Delete is needed if you ever want to immediately call a destructor. If used wisely, it can also decrease the memory usage of your software, and reduce garbage collection runs (if the GC won't run unless there's more than X to collect.)

Overriding new and delete would definitely fit into the same class as pointers, recursion, casting, != in fors, and delete. They're all scary.

-[Unknown]

> Chris Miller wrote:
>> On Mon, 13 Feb 2006 00:26:48 -0500, nick <nick.atamas@gmail.com> wrote:
>>> Now you're talking crazy talk. Throws declarations may be a bad idea - I
>>> agreed after having read up on it. I have yet to hear a good reason why
>>> the unsafe keyword or some other safeguard against dangerous pointer
>>> code is a bad idea.
>>
>> Then would 'delete' be 'unsafe'? Even though it nulls the reference,
>> other places may still be referencing it, hence making it unsafe.
>
> That seems to be an implementation detail. However, my immediate
> reaction is that delete probably should be unsafe; however, I am not
> sure. It all depends on how much it is needed for mainstream software
> development and how much damage it tends to cause.
>
> Of course, if you are talking about overriding new and then calling
> delete, that's a different story. By allocating memory manually you are
> preventing a good garbage collector from optimizing your heap, so you
> should be avoiding that in most cases.
>
> The upshot of using "unsafe" is that all code that messes with the
> memory manually would get marked unsafe. So, someone working on OS
> features may end up having to put an "unsafe:" at the top of every file
> and compiling with the --unsafe flag (or something to that effect). It
> seems like a small price to pay for preventing amateurs from screwing up
> your code.
>
> It seems to me that most people who write code don't need pointers. Both
> D and C++ are languages that provide high-level and low-level access.
> You are going to get both experts who need the pointers and amateurs who
> don't need them.
>
> Both Bjarne and Matthew seem to think that people should just "learn to
> code well". Despite admitting that most coders are not experts, Bjarne says:
>
> "The most direct way of addressing the problems caused by lack of type
> safety is to provide a range-checked Standard Library based on
> statically typed containers and base the teaching of C++ on that".
> <>
>
> I must disagree. There are too many people to teach. In some cases it is
> a lot easier to modify a language than to teach everyone not to use a
> feature. This may be one of those cases. I think experts tend to forget
> that a language is there to help programmers develop software and to
> reduce chances of human error.

Why don't you give them access to a scripting language? Perhaps something like Python/Ruby or even DMDScript? If performance is an issue, just make sure the scripting language doesn't allow eval (which is so much more evil than pointers, by the way) and you should be able to convert easily.

-[Unknown]

> Note: I did a search for this and didn't come up with any threads. If it
> has been discussed before... my apologies.
>
> Recently I introduced D to a friend of mine (a C.S. grad student at
> Purdue). His reaction was the usual "wow, awesome". Then he became
> concerned about pointer safety. D allows unrestricted access to pointers.
>
> I will skip the "pointers are unsafe" rigmarole.
>
> Should D provide something analogous to the C# unsafe keyword?

On Sun, 12 Feb 2006 22:33:25 -0800, Unknown W. Brackets wrote:
>> Most programmers are amateurs; you're not going to change that.

More indication that we could really do with a 'lint' program for D. It could warn about pointer usage too.

--
Derek (skype: derek.j.parnell)
Melbourne, Australia
"Down with mediocracy!"
13/02/2006 5:44:24 PM
His reaction was the usual "wow, awesome". Then he became > concerned about pointer safety. D allows unrestricted access to pointers. > > I will skip the "pointers are unsafe" rigmarole. > > Should D provide something analogous to the C# unsafe keyword? On Sun, 12 Feb 2006 22:33:25 -0800, Unknown W. Brackets wrote: >> Most programmers are amateurs; you're not going to change that. More indication that we could really do with a 'lint' program for D. It could warn about pointer usage too. -- Derek (skype: derek.j.parnell) Melbourne, Australia "Down with mediocracy!" 13/02/2006 5:44:24 PM Pointer problems are notoriously difficult to track. Pointers are a feature that is not necessary in 90% of production code. Hey, Joel called them DANGEROUS. (I'm going to use that one a lot now.) My example demonstrates a potential error that, if occurs in a library that you don't have source for, will cause you hours of grief. My example was carefully constructed. In it an object was passed in using the /in/ keyword. That should guarantee that my copy of the object doesn't change. If you are saying it is OK for it to change, then you are basically saying that the /in/ keyword is useless (well, not really useless but almost). I don't think that's cool. Unknown W. Brackets wrote: > What's going to stop them from making other mistakes, unrelated to > pointers? For example, the following: > > void surprise(in char[] array) > { > ubyte[100] x = cast(ubyte[100]) array; > array[99] = 1; > } > > This will compile fine, and uses zero pointers. It's exactly the same > concept, too. No, it won't compile. Maybe I have a different version of dmd, but I get this: main.d(3): e2ir: cannot cast from char[] to ubyte[100] Try it yourself. The rest of these aren't really pointer bugs. So, if you want to try a slippery slope and argue that all of programming is unsafe, be my guest. It isn't particularly productive though. (Sorry, I am getting cranky; it's late.) 
Here's another one: > > void surprise(in int i) > { > if (i == 0 || i > 30) > return i; > else > return surprise(--i); > } > > Oops, what happens if i is below 0? Oh, wait, here's another common > mistake I see: > > for (int i = 0; i != len; i++) > { > ... > } > > What happens if len is negative? I've seen this happen, a lot, in more > than a few different people's code. They weren't stupid, you're right, > but it did happen. > > So do we mark != in fors as "unsafe"? Recursion too? And forget > casting, any casting is unsafe now as well? > > Seems to me like you're going to end up spending longer dealing with > their problems, if they think they can use pointers but really can't, > than you would just reviewing their darn code. > > Oh wait, it's only open source where you do that "code review" thing. > Otherwise it's considered a waste of time, except in huge corporations. > Why bother when "unsafe" can just fix everything for you like magic? > > Valentine's day is coming up... good thing there are flowers, they just > fix everything too. I can be a jerk and all I need are flowers, right? > Magic. > > -[Unknown] Unknown W. Brackets wrote: > Well,. That's an easy one. You can't do unsafe things without wrapping your code in the unsafe keyword. That's fairly easy to add, if you ask me. However, when that amateur gets the compiler error, he/she will look it up. Once they do, there will be a big notice "DANGER, USE THIS INSTEAD". I work with a lot of EEs who only had one or two programming courses. They get a job mainly based on their hardware architecture knowledge. Now I have to work with them and write a hardware simulator. Oh, I don't know if you realize this, but I essentially removed /in/out/inout from the D spec with my example; please go read it. If you think that people are going to use the language the RIGHT way when there is such a tempting wrong way, I suggest you look at C++ and its operator overloading. Andrew Fedoniouk wrote: >>. 
I didn't say I had a solution, I just said I have a problem. The "unsafe" thing is just some syntax that looked pretty cool in C#. If c-style pointers are left the way they are now, you might as well not have in/out/inout parameters. To save you from reading the rest of the thread, here is an example: CODE: ----- import std.stdio; class A { private int data[]; public this() { data.length = 10; } public void printSelf() { writefln("Data: ", this.data); } } void surprise(in A a) { byte *ap = cast(byte *)(a); ap[9] = 5; } int main() { A a = new A(); a.printSelf(); surprise(a); a.printSelf(); return 0; } OUTPUT: ------- Data before surprise: [0,0,0,0,0,0,0,0,0,0] Data after surprise: [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4287008,0,2004,216,1245184,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8855552,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,8855680,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8855808, <..SN] Derek Parnell wrote: > On Sun, 12 Feb 2006 22:33:25 -0800, Unknown W. Brackets wrote: > >>> Most programmers are amateurs; you're not going to change that. > > More indication that we could really do with a 'lint' program for D. It > could warn about pointer usage too. > > A lint-like tool may be the way to go. However, there definitely need to be an in-language solution to the /in/ parameter problem. That seems to be unacceptable (see my previous posts for the details). There is a lint-like project for Java called Find Bugs. Bill Pugh at UMCP is leading it. I happen to know Dr. Pugh; he taught one of my courses and sponsored my senior C.S. project. If someone decides to work on a lint-like tool, I will be happy to introduce them to Dr. Pugh.
http://forum.dlang.org/thread/dsobph$2mdo$1@digitaldaemon.com?page=3
13 April 2011 09:06 [Source: ICIS news] SINGAPORE (ICIS)--Petrochemicals and metals firm Industries Qatar posted a 71% year-on-year increase in its first-quarter net profit to Qatari riyal (QR) 2.09bn ($574m), as sales surged, the company said late on Tuesday. Revenue in the first quarter jumped 47.6% year on year to QR4bn, the company said in a release on the Qatar Exchange.
http://www.icis.com/Articles/2011/04/13/9452051/industries-qatar-q1-net-profit-surges-71-sales-jump-47.6.html
Writing and Reading XML with XIST

March 16, 2005

Installation

[The installation instructions and Listing 1 ("Building and Writing XML") survive only as fragments in this copy, such as email(u"info@centigrade.bogus"). Listing 1 built the top-level XSA element whose serialized output appears in Listing 2.]

Listing 2: Output from Listing 1

    <xsa><vendor><name>Centigrade systems</name>
    <email>info@centigrade.bogus</email></vendor>
    <product id="100">
    <name>100 Server</name>
    <version>1.0</version>
    <last-release>20030401</last-release><changes></changes>
    </product></xsa>

Completing the Document

If you look carefully at Listing 1, you'll notice that what I've created is really just the top-level XSA element, and not the entire XML document. There is no XML declaration, and no XSA document type declaration (which is required for it to be a valid XSA document). XIST does allow for all this added detail. To create a full XML document you use an ll.xist.xsc.Frag object, which can gather together all the needed nodes, including declarations. Listing 3 illustrates this. You can run it by just pasting in part one from the top of Listing 1. I didn't reproduce Part 1 in order to save space.

Listing 3: Using XIST to Generate a Proper XSA Document

    XSA_PUBLIC = "-//LM Garshol//DTD XML Software Autoupdate 1.0//EN//XML"
    XSA_SYSTEM = ""

    class xsa_doctype(xsc.DocType):
        """
        Document type for XSA
        """
        def __init__(self):
            xsc.DocType.__init__(
                self,
                'xsa PUBLIC "%s" "%s"' % (XSA_PUBLIC, XSA_SYSTEM)
            )

    doc = xsc.Frag(
        xml.XML10(),
        xsa_doctype(),
        xsa(
            vendor(
                name(u"Centigrade systems"),
                email(u"info@centigrade.bogus"),
            ),
            product(
                name(u"100\u00B0 Server"),
                version(u"1.0"),
                last_release(u"20030401"),
                changes(),
                id = u"100\u00B0"
            )
        )
    )

    print doc.asBytes(encoding="utf-8")

This time I create an explicit document type declaration class and bundle this into a document fragment along with an instance of ll.xist.ns.xml.XML10, which represents the XML declaration. Listing 4 shows the resulting output. Again the actual output is all on one line, but I have inserted line feeds for formatting reasons. 
Listing 4: Output from the Variation in Listing 3

    <?xml version='1.0' encoding='utf-8'?>
    <!DOCTYPE xsa PUBLIC "-//LM Garshol//DTD XML Software Autoupdate 1.0//EN//XML" "">
    <xsa><vendor><name>Centigrade systems</name>
    <email>info@centigrade.bogus</email></vendor>
    <product id="100">
    <name>100 Server</name>
    <version>1.0</version>
    <last-release>20030401</last-release><changes></changes>
    </product></xsa>

Reading XML

XIST provides parsers that you can use to read XML into the sorts of XIST data structures I describe above. It's really quite simple, so I'll get right to it. Listing 5 is a simple example using XIST to parse a Docbook instance.

Listing 5: Using XIST to Parse an XML Document

    from ll.xist import xsc
    from ll.xist import parsers

    # You must import this XIST namespace module, otherwise you
    # get a validation error because the parser does not know the
    # vocabulary
    from ll.xist.ns import docbook

    DOC = """
    <article>
      <articleinfo>
        <title>DocBook article example</title>
        <author>
          <firstname>Uche</firstname>
          <surname>Ogbuji</surname>
        </author>
      </articleinfo>
      <section label="main">
        <title>Quote from "I Try"</title>
        >
      </section>
    </article>
    """

    doc = parsers.parseString(DOC)

I'll work interactively from this listing to show some of the tree navigation facilities for XIST trees. First I'll show how to use XIST iterators to search for the blockquote element.

    $ python -i listing5.py
    >>> blockquotes = doc.walk(xsc.FindTypeAll(docbook.blockquote))
    >>> bq = blockquotes.next()
    >>> print bq
    Talib Kweli Life is a beautiful struggle People search through the rubble for a suitable hustle Some people using the noodle Some people using the muscle Some people put it all together, make it fit like a puzzle
    >>> print bq.asBytes()
    >
    >>>

The walk method creates an iterator over the nodes in document order. xsc.FindTypeAll creates a filter that restricts the iterator to find all instances of all the given elements within the subtree. 
There is also xsc.FindType, which searches only the immediate children of the node. So, to find the attribution of the quote:

    >>> attribs = bq.content.walk(xsc.FindTypeAll(docbook.attribution))
    >>> attrib = attribs.next()
    >>> print attrib
    Talib Kweli
    >>>

Once you find an element of interest, it's trivial to access one of its attributes. They are available as if they were items in a dictionary.

    >>> sections = doc.walk(xsc.FindTypeAll(docbook.section))
    >>> sect = sections.next()
    >>> print sect[u"label"]
    main
    >>>

XIST also takes advantage of Python's operator overloading to support a language in some ways like XPath, but given as Python expressions rather than strings (Unicode objects, to be precise). This language is called XFind. The examples in the documentation look very interesting, but I had some trouble getting the expected results from XFind expressions. I couldn't be sure whether it was something I was doing wrong or quirks in the library, so I'll leave exploring XFind more deeply for another time. I encourage you to experiment with XFind, though. Many people have called for such a pure Python take on XPath, and it looks as if XIST is well on its way down this road.

Wrap Up

It's surprising that XIST is such a dark horse. It has been around for a long time. It has a lot of very original and interesting capabilities. It's pretty well documented, and has a mature feel about it. Yet I had never tried it before working on this article, and I don't think I know of anyone else who had. Based on my experimentation, it is definitely worth serious consideration when you're looking for a Python-esque XML processing toolkit. The extremely object-oriented framework can feel a bit heavy, but I can appreciate some of the resulting benefits, and it would certainly suit some users' tastes very well. I should also mention that there is a lot more to XIST than I was able to cover in this article. 
I didn't touch on its support for different HTML and XHTML vocabularies, XML namespaces, XML entities, validation and content models, tree modification, pretty printing, image manipulation, and more. I could only find one new development to report on regarding XML in the Python space. It's the interesting news that Fred Drake, Pythonista extraordinaire, appears to have started chipping in on the ZSI project for Python Web services. He made the announcement of ZSI 1.7. For those who are still interested in mainstream Web services, this is surely great news.
http://www.xml.com/pub/a/2005/03/16/py-xml.html
sharepathway 0.5.0

A Python package for KEGG pathway enrichment analysis with multiple gene lists

SharePathway is a python package for KEGG pathway enrichment analysis with multiple gene lists. There have been dozens of tools or web servers for enrichment analysis using a list of candidate genes from some kind of high-throughput experiment, such as Exome-Seq or RNA-Seq. But the reality is that we usually get multiple gene lists, each from one sample or patient. We can do enrichment analysis for each sample and then check which pathway or module is enriched. This strategy is simple and commonly used in cancer studies, but we may lose some important driver genes. SharePathway aims to provide users a simple and easy-to-use tool for enrichment analysis on multiple lists of genes simultaneously, which may help gain insight into the underlying biological background of these lists of genes.

Installation

This version is for both python2 and python3. The first step is to install Python. Python is available from the Python project page. The next step is to install sharepathway. Install from PyPI using pip, a package manager for Python:

    $ pip install sharepathway

Or, you can download the source code at Github or at PyPi for SharePathway, and then run:

    $ python setup.py install

Usage

Assume you have put all the paths of your gene list files in one summary file genelists.txt (one path per line) in the directory ~/data/. Go into this directory, open python and run the scripts below. The result will be saved in the result.html file:

    import sharepathway as sp
    filein="genelists.txt"
    fileout="result"
    sp.Run(fi=filein,fo=fileout,species='hsa',r=0.1)

The default value of species is 'hsa', which represents the human species. The ratio r is the minimum threshold; its default value is 0.01. Entrez Gene IDs are supported. The result will be output to an html file.

Output Description

Summary

This part summarizes the input data. 
Details

This part lists the ranked pathways and related information as shown below.

Test data

See the gene list files and genelists.txt file in data/. This is just toy data.

- Author: Guipeng Li
- Keywords: detection, pathway, enrichment, share, multiple gene lists
- Package Index Owner: GuipengLi
- DOAP record: sharepathway-0.5.0.xml
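As an aside, the minimum-ratio filter described above can be sketched in plain Python. This is an illustration of the kind of threshold the r parameter describes; the function name and arguments below are my own invention, not sharepathway's actual internals:

```python
# Hypothetical sketch of a min-ratio filter: a pathway is reported only
# when the fraction of its genes hit by the input lists reaches r.
# (Names are illustrative, not sharepathway's real API.)
def passes_threshold(genes_hit, genes_in_pathway, r=0.01):
    """Return True when hit-ratio >= r (0.01 mirrors the documented default)."""
    if genes_in_pathway <= 0:
        return False
    return genes_hit / float(genes_in_pathway) >= r

print(passes_threshold(5, 100, r=0.1))   # 5/100 = 0.05, below 0.1
```

With r=0.1 as in the usage example above, a pathway with 5 of 100 genes hit (ratio 0.05) would be filtered out, while the default r=0.01 would keep it.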
https://pypi.python.org/pypi/sharepathway
How to: Respond to Font Scheme Changes in a Windows Forms Application

In the Windows operating systems, a user can change the system-wide font settings to make the default font appear larger or smaller. Changing these font settings is critical for users who are visually impaired and require larger type to read the text on their screens. You can adjust your Windows Forms application to react to these changes by increasing or decreasing the size of the form and all contained text whenever the font scheme changes. If you want your form to accommodate changes in font sizes dynamically, you can add code to your form.

Typically, the default font used by Windows Forms is the font returned by the Microsoft.Win32 namespace call to GetStockObject(DEFAULT_GUI_FONT). The font returned by this call only changes when the screen resolution changes. As shown in the following procedure, your code must change the default font to IconTitleFont to respond to changes in font size.

To use the desktop font and respond to font scheme changes

1. Create your form, and add the controls you want to it. For more information, see How to: Create a Windows Forms Application from the Command Line and Controls to Use on Windows Forms.
2. Add a reference to the Microsoft.Win32 namespace to your code.
3. Add the following code to the constructor of your form to hook up required event handlers, and to change the default font in use for the form.
4. Implement a handler for the UserPreferenceChanged event that causes the form to scale automatically when the Window category changes.
5. Finally, implement a handler for the FormClosing event that detaches the UserPreferenceChanged event handler.
6. Compile and run the code.

To manually change the font scheme in Windows XP

1. While your Windows Forms application is running, right-click the Windows desktop and choose Properties from the shortcut menu.
2. In the Display Properties dialog box, click the Appearance tab.
3. From the Font Size drop-down list box, select a new font size. 
You will notice that the form now reacts to run-time changes in the desktop font scheme. When the user changes between Normal, Large Fonts, and Extra Large Fonts, the form changes font and scales correctly.

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Text;
    using System.Windows.Forms;
    using Microsoft.Win32;

    namespace WinFormsAutoScaling
    {
        public partial class Form1 : Form
        {
            public Form1()
            {
                InitializeComponent();
                this.Font = SystemFonts.IconTitleFont;
                SystemEvents.UserPreferenceChanged +=
                    new UserPreferenceChangedEventHandler(SystemEvents_UserPreferenceChanged);
                this.FormClosing +=
                    new FormClosingEventHandler(Form1_FormClosing);
            }

            void SystemEvents_UserPreferenceChanged(object sender, UserPreferenceChangedEventArgs e)
            {
                if (e.Category == UserPreferenceCategory.Window)
                {
                    this.Font = SystemFonts.IconTitleFont;
                }
            }

            void Form1_FormClosing(object sender, FormClosingEventArgs e)
            {
                SystemEvents.UserPreferenceChanged -=
                    new UserPreferenceChangedEventHandler(SystemEvents_UserPreferenceChanged);
            }
        }
    }

The constructor in this code example contains a call to InitializeComponent, which is defined when you create a new Windows Forms project in Visual Studio. Remove this line of code if you are building your application on the command line.

See also: PerformAutoScale, Automatic Scaling in Windows Forms
https://msdn.microsoft.com/en-us/library/ms229594.aspx
I've run into a number of import problems when trying to dynamically add jar files or .class file directories to the jython path. If I put the .jar on the java classpath (i.e. by setting it in the shell script that launches jython), everything works fine, but if I try to add the .jar to sys.path, I get the following errors on import:

    import my.package.name
    AttributeError: 'javapackage' object has no attribute 'name'

I tried adding the .jar file via sys.path as well as calls to sys.packageManager.addJar and sys.add_extdir. Neither worked. Googling for the error, I've seen people run into similar problems, but not a solution. It seems like the packages are getting hidden by directories in the java paths that are not python packages. If I manually put __init__.py files in the java package directories that contain the .class files, the error goes away, but obviously that doesn't really work for .jar files. Anyone have any suggestions for this?

Chris
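For readers unfamiliar with the mechanism: appending an archive to sys.path in Jython is the same idea as CPython's zipimport support for .zip archives. The sketch below demonstrates that analogue under plain CPython; it illustrates the technique being attempted, not a fix for the Jython-specific package-scanning problem:

```python
# CPython analogue of the jar-on-sys.path technique: build a tiny package
# inside a zip archive, put the archive on sys.path, and import from it.
import os
import sys
import tempfile
import zipfile

tmpdir = tempfile.mkdtemp()
archive = os.path.join(tmpdir, 'mylib.zip')

# Create a minimal package layout inside the archive.
zf = zipfile.ZipFile(archive, 'w')
zf.writestr('my/__init__.py', '')
zf.writestr('my/pkg/__init__.py', '')
zf.writestr('my/pkg/mod.py', 'VALUE = 42\n')
zf.close()

sys.path.append(archive)        # same idea as appending a .jar in Jython
from my.pkg import mod
print(mod.VALUE)                # 42
```

Under CPython the zipimporter handles this transparently; in Jython the equivalent jar handling is what appears to be failing in the report above.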
https://sourceforge.net/p/jython/mailman/jython-users/?viewmonth=200901&viewday=25&style=flat
sigqueue - Queue a signal and data to a running process.

    #include <signal.h>

    int sigqueue (
        pid_t pid,
        int signo,
        const union sigval value);

The sigqueue function causes the signal specified by signo to be sent with the value specified by value to the process specified by pid. The conditions required for a process to have permission to queue a signal to another process are the same as for the kill function. If the call is successful, the signal will be queued to the specified process. If the process has the SA_SIGINFO flag enabled for the queued signal, the specified value will be delivered to its signal handler as the si_value field of the siginfo parameter.

Nonprivileged callers are restricted in the number of signals they can have actively queued. This per-process quota value can be returned with sysconf(_SC_SIGQUEUE_MAX).

Upon successful completion, the sigqueue function returns a value of 0. Otherwise, a value of -1 is returned and errno is set to indicate the error. If sigqueue fails, no signal is sent, and errno is set to one of the following values:

[EAGAIN] No resources are available to queue the signal; the caller has already reached its queued-signal quota (SIGQUEUE_MAX) or a system-wide limit on queued signals has been reached.

[EINVAL] The value of signo is an invalid or unsupported signal number.

[EPERM] The caller does not have the appropriate privilege to send the signal to the receiving process.

[ESRCH] No process corresponds to the specified pid.

Headers: siginfo(5)
Functions: kill(2), sigaction(2), sysconf(3)
http://backdrift.org/man/tru64/man3/sigqueue.3.html
Catalyst::View::ByCode - Templating using pure Perl code

version 0.28

    # 1) use the helper to create your View
    ./script/myapp_create.pl view ByCode ByCode

    # 2) inside your Controllers do business as usual:
    sub index :Path :Args(0) {
        my ($self, $c) = @_;

        # unless defined as default_view in your config, specify:
        $c->stash->{current_view} = 'ByCode';

        $c->stash->{title} = 'Hello ByCode';

        # if omitted, would default to
        # controller_namespace / action_namespace .pl
        $c->stash->{template} = 'hello.pl';
    }

    # 3) create a simple template eg 'root/bycode/hello.pl'
    # REMARK:
    #   use 'c' instead of '$c'
    #   prefer 'stash->{...}' to 'c->stash->{...}'
    template {
        html {
            head {
                title { stash->{title} };
                load Js => 'site.js';
                load Css => 'site.css';
            };
            body {
                div header.noprint {
                    ul.topnav {
                        li { 'home' };
                        li { 'surprise' };
                    };
                };
                div content {
                    h1 { stash->{title} };
                    div { 'hello.pl is running!' };
                    img(src => '/static/images/catalyst_logo.png');
                };
            };
        };
    };
    # 274 characters without white space

    # 4) expect to get this HTML generated:
    <html>
      <head>
        <title>Hello ByCode!</title>
        <script src="" type="text/javascript">
        </script>
        <link rel="stylesheet" href="" type="text/css" />
      </head>
      <body>
        <div id="header" style="noprint">
          <ul class="topnav">
            <li>home</li>
            <li>surprise</li>
          </ul>
        </div>
        <div class="content">
          <h1>Hello ByCode!</h1>
          <div>hello.pl is running!</div>
          <img src="/static/images/catalyst_logo.png" />
        </div>
      </body>
    </html>
    # 453 characters without white space

Catalyst::View::ByCode tries to offer an efficient, fast and robust solution for generating HTML and XHTML markup using standard perl code encapsulating all nesting into code blocks. Instead of typing opening and closing HTML-Tags we simply call a sub named like the tag we want to generate:

    div { 'hello' }

generates:

    <div>hello</div>

There is no templating language you will have to learn, no quirks with different syntax rules your editor might not correctly follow and no indentation problems.

The whole markup is initially constructed as a huge tree-like structure in memory keeping every reference as long as possible to allow greatest flexibility and enable deferred construction of every building block until the markup is actually requested.

Every tag known in HTML (or defined in HTML::Tagset to be precise) gets exported to a template's namespace during its compilation and can be used as expected. However, there are some exceptions: tags that would collide with CORE subs or operators are produced by renamed subs instead. These renamed subs generate the <select>, <link>, <tr>, <td>, <sub>, <sup>, <meta>, <q>, <s> and <map> tags.
The whole markup is initially constructed as a huge tree-like structure in memory keeping every reference as long as possible to allow greatest flexibility and enable deferred construction of every building block until the markup is actially requested. Every part of the markup can use almost every type of data with some reasonable behavior during markup generation. Every tag known in HTML (or defined in HTML::Tagset to be precise) gets exported to a template's namespace during its compilation and can be used as expected. However, there are some exceptions which would collide with CORE subs or operators generates a <select> tag generates a <link> tag generates a <tr> tag generates a <td> tag generates a <sub> tag generates a <sup> tag generates a <meta> tag generates a <q> tag generates a <s> tag generates a <map> tag Internally, every tag subroutine is defined with a prototype like sub div(;&@) { ... } Thus, the first argument of this sub is expected to be a coderef, which allows to write code like the examples above. Nesting tags is just a matter of nesting calls into blocks. There are several ways to generate content which is inserted between the opening and the closing tag: OUTglob can be used: print OUT 'some content here.'; HTML::FormFusimple use the <RAW> glob: print RAW '<?xxx must be here for internal reasons ?>'; As usual for Perl, there is always more than one way to do it: # appending attributes after tag div { ... content ... } id => 'top', class => 'noprint silver', style => 'display: none'; the content goes into the curly-braced code block immediately following the tag. Every extra argument after the code block is converted into the tag's attributes. # using special methods div { id 'top'; class 'noprint silver'; attr style => 'display: none'; 'content' }; Every attribute may be added to the latest opened tag using the attr sub. 
However, there are some shortcuts:

id 'name' is equivalent to attr id => 'name'.

class 'class' is the same as attr class => 'class'. However, the class method is special. It allows to specify a space-separated string, a list of names or a combination of both. Class names prefixed with - or + are treated special. After a minus-prefixed class name every following name is subtracted from the previous list of class names. After a plus-prefixed name all following names are added to the class list. A list of class names without a plus/minus prefix will start with an empty class list and then append all subsequently following names.

    # will yield 'abc def ghi'
    div.foo { class 'abc def ghi' };

    # will yield 'foo def xyz'
    div.foo { class '+def xyz' };

    # will yield 'bar'
    div.foo { class '-foo +bar' };

An event handler set with on, as in

    div { on click => q{alert('you clicked me')}; };

produces the same result as setting the corresponding attribute with attr.

The shortcuts can also be combined with the parenthesized attribute syntax:

    div top.noprint.silver(style => 'display: none') {'content'};
    div top.noprint.silver(style => {display => 'none'}) {'content'};
    div top.noprint.silver(style => {marginTop => '20px'}) {'foo'};

In style hashes, marginTop or margin_top will get converted to margin-top.

    div (data_something => \'<abcd>') { ... };

will not escape the ref-text <abcd>.

    div (id => \&get_next_id) { ... };

will call get_next_id() and set its return value as the value for id, escaping special characters if present.

Every attribute may have almost any datatype you might think of:

- Scalar values are taken verbatim.
- Hash references are converted to semicolon-delimited pairs of the key, a colon and a value: the perfect solution for building inline CSS. Well, I know, nobody should do something like that, but sometimes... Keys consisting of underscore characters and CAPITAL letters are converted to dash-separated names: dataTarget or data_target both become data-target.
- Array references are converted to space-separated lists (no idea if we like this).
- All other references are simply stringified. 
This allows the various objects to forward stringification to their class-defined code.

attr

Setter or getter for attribute values. Using the attr sub refers to the latest open tag and sets or gets its attribute(s):

    div {
        attr(style => 'foo:bar');  # set 'style' attribute
        attr('id');                # get 'id' attribute (or undef)
        ... more things ...
        a {
            attr(href => '');      # refers to 'a' tag
        };
        attr(lang => 'de');        # sets attribute in 'div' tag
    };

block

Define a block that may be used like a tag. If a block is defined in a package, it is automatically added to the package's @EXPORT array.

    # define a block
    block navitem {
        my $id = attr('id');
        my $href = attr('href');
        li {
            id $id if ($id);
            a(href => $href || '') {
                block_content;
            };
        };
    };

    # use the block like a tag
    navitem some_id (href => '') {
        # this gets rendered by block_content() -- see above
        'some text or other content';
    }

    # will generate:
    <li id="some_id">
        <a href="">some text or other content</a>
    </li>

block_content

A simple shortcut to render the content of the block at a given point. See example above.

c

Holds the content of the $c variable. Simply write c->some_method instead of $c->some_method.

class

Provides a shortcut for defining class names. All examples below will generate the same markup:

    div { class 'class_name'; };
    div { attr class => 'class_name'; };
    div { attr('class', 'class_name'); };
    div.class_name {};

Using the class() subroutine allows to prefix a class name with a + or - sign. Every class name written after a + sign will get appended to the class, each name written after a - sign will be erased from the class.

doctype

A very simple way to generate a DOCTYPE declaration. Without any arguments, a HTML 5 doctype declaration will be generated. The arguments (if any) will consist of either of the words html or xhtml optionally followed by one or more version digits. The doctypes used are taken from. 
Some examples:

    doctype;                # HTML 5
    doctype 'html';         # HTML 5
    doctype html => 4;      # HTML 4.01
    doctype 'html 4';       # HTML 4.01
    doctype 'html 4s';      # HTML 4.01 strict
    doctype 'html 4strict'; # HTML 4.01 strict
    doctype 'xhtml';        # XHTML 1.0
    doctype 'xhtml 1 1';    # XHTML 1.1

id

Provides a shortcut for defining id names. All examples here will generate the same markup:

    div { id 'id_name'; };
    div { attr id => 'id_name'; };
    div { attr('id', 'id_name'); };
    div id_name {};

load

An easy way to include assets into a page. Assets currently are JavaScript or CSS. The first argument to this sub specifies the kind of asset, the second argument is the URI to load the asset from. Some examples will clarify:

    load js => '/static/js/jquery.js';
    load css => '/static/css/site.css';

If you plan to develop your JavaScript or CSS files as multiple files and like to combine them at request-time (with caching of course...), you might like to use Catalyst::Controller::Combine. If your controllers are named Js and Css, this will work as well:

    load Js => 'name_of_combined.js';

on

Provides syntactic sugar for generating inline JavaScript handlers:

    a(href => '#') {
        on click => q{alert('you clicked me'); return false};
    };

params

Generates a series of param tags:

    applet ( ... ) {
        params(
            quality => 'foo',
            speed => 'slow',
        );
    };

stash

A shortcut for c->stash.

template

Essentially generates a sub named RUN as the main starting point of every template file. Both constructs will be identical:

    sub RUN { div { ... }; }

    template { div { ... }; };

Be careful to add a semicolon after the template definition if you add code after it!!!

yield

Without arguments, yield forwards execution to the next template in the ordinary execution chain. Typically this is the point in a wrapper template that includes the main template. With an argument, it forwards execution to the template given as the argument. These values are possible: 
If it is found, the file-name or subref stored there will be executed and included at the given point. If a template file exists at the path name given as the argument, this template is compiled and executed. A code ref is directly executed. If yield is not able to find something, simply nothing happens. This behavior could be useful to add hooks at specified positions in your markup that may get filled when needed.

You might build a reusable block like the following calls:

    block 'block_name', sub { ... };

    # or:
    block block_name => sub { ... };

    # or shorter:
    block block_name { ... };

The block might get used like a tag:

    block_name {
        ... some content ...
    };

If a block call contains content, it can get rendered inside the block using the special sub block_content. A simple example makes this clearer:

    # define a block:
    block infobox {
        # attr() values must be read before generating markup
        my $headline = attr('headline') || 'untitled';
        my $id       = attr('id');
        my $class    = attr('class');

        # generate some content
        div.infobox {
            id $id if ($id);
            class $class if ($class);
            div.head { $headline };
            div.info { block_content };
        };
    };

    # later we use the block:
    infobox some_id.someclass(headline => 'Our Info') {
        'just my 2 cents'
    };

    # this HTML will get generated:
    # <div class="someclass" id="some_id">
    #     <div class="head">Our Info</div>
    #     <div class="info">just my 2 cents</div>
    # </div>

Every block defined in a package is auto-added to the package's @EXPORT array and mangled in a special way to make the magic calling syntax work after importing it into another package.
A simple configuration of a derived Controller could look like this:

    __PACKAGE__->config(
        # Change extension (default: .pl)
        extension => '.pl',

        # Set the location for .pl files (default: root/bycode)
        root_dir => cat_app->path_to( 'root', 'bycode' ),

        # This is your wrapper template located in root_dir
        # (default: wrapper.pl)
        wrapper => 'wrapper.pl',

        # all these modules are use()'d automatically
        include => [qw(Some::Module Another::Package)],
    );

By default a typical standard configuration setting is constructed by issuing the Helper module. It looks like this and describes all default settings:

    __PACKAGE__->config(
        # # Change default
        # extension => '.pl',
        #
        # # Set the location for .pl files
        # root_dir => 'root/bycode',
        #
        # # This is your wrapper template located in the 'root_dir'
        # wrapper => 'wrapper.pl',
        #
        # # specify packages to use in every template
        # include => [ qw(My::Package::Name Other::Package::Name) ]
    );

The following configuration options are available:

root_dir
    With this option you may define a location that is the base of all template files. By default, the directory root/bycode inside your application will be used.

extension
    This is the default file extension for template files. As an example, if your Controller class is named MyController and your action method is called MyAction, then by default a template located at root_dir/mycontroller/myaction.pl will get used to render your markup. The path and file name will get determined by concatenating the controller namespace, the action namespace and the extension configuration directive. If you like to employ another template, you may specify a different path using the stash variable template. See "STASH VARIABLES" below.

wrapper
    A wrapper is a template that is rendered before your main template and includes your main template at a given point. It "wraps" something around your template. This might be useful if you like to avoid repeating the standard page-setup code for every single page you like to generate.
    The default wrapper is named wrapper.pl and is found directly inside root_dir. See "Using a wrapper" in TRICKS below.

include
    As every template is a perl module, you might like to add other modules using Perl's use directive. Well, you may do that at any point inside your template. However, if you repeatedly need the same modules, you could simply add them using this configuration option.

The following stash variables are used by Catalyst::View::ByCode:

template
    If you like to override the default behavior, you can directly specify the template containing your rendering. Simply enter a relative path inside the root directory into this stash variable. If the template stash variable is left empty, the template used to render your markup will be determined by concatenating the action's namespace and the extension.

wrapper
    Overriding the default wrapper is the job of this stash variable. Simply specify a relative path to a wrapping template into this stash variable.

yield
    Yielding is a powerful mechanism. The yield stash variable contains a hashref that contains a template or an array-ref of templates for certain keys. Every template might be a path name leading to a template or a code-ref that should be executed as the rendering code. $c->stash->{yield}->{content} is an entry that is present by default. It contains, in execution order, the wrapper and the template to get executed. Other keys may be defined and populated in a similar way in order to provide hooks to magic parts of your markup generation. See "Setting hooks at various places" in TRICKS below.

Using a wrapper
    If you construct a website that has lots of pages using the same layout, a wrapper will be your friend. Using the default settings, a simple file wrapper.pl sitting in the root directory of your templates will do the job. As two alternatives, you could set the $c->stash->{wrapper} variable to another path name or specify a wrapper path as a config setting.
    # wrapper.pl
    html {
        head {
            # whatever you need
        };
        body {
            # maybe heading, etc.

            # include your template here
            yield;
        };
    };

If you sometimes need to add things at different places, simply mark these positions like:

    # in your wrapper:
    html {
        head {
            # whatever you need

            # a hook for extra headings
            yield 'head_extras';
        };
        body {
            # a hook for something at the very beginning
            yield 'begin';

            # maybe heading, etc.

            # a hook for something after your navigation block
            yield 'after_navigation';

            # include your template here
            yield;

            # a hook for something after your content
            yield 'end';
        };
    };

    # in an action of your controller:
    $c->stash->{yield}->{after_navigation} = 'path/to/foo.pl';

In the example above, some hooks are defined. In a controller, for the hook after_navigation, a path to a template is filled in. This template will get executed at the specified position and its content added before continuing with the wrapper template. If a hook's name is not a part of the stash->{yield} hashref, it will be ignored. However, an info log entry will be generated.

Every template is a perl module. It resides in its own package, and everything you do not have to type yourself is mangled into your source code before compilation. It is up to you to use every other module you like. A simple module could look like this:

    package MyMagicPackage;
    use strict;
    use warnings;
    use base qw(Exporter);

    use Catalyst::View::ByCode::Renderer ':default';

    our @EXPORT = qw(my_sub);

    sub my_sub {
        # do something...
    }

    block my_block {
        # do something else
    };

    1;

Using the Renderer class above gives your module everything a template has. You can use every tag-sub you want. To use this module in every template you write within an application, you simply populate the config of your View:

    __PACKAGE__->config(
        include => [ qw(MyMagicPackage) ]
    );

If you are using one of the above packages to render forms, generating the markup is done by the libraries.
There are a couple of ways to get the generated markup into your code:

    # assume stash->{form} contains a form object
    # all of these ways will work:

    # let the form object render itself
    print RAW stash->{form}->render();

    # use the form object's stringification
    print RAW "${\stash->{form}}";

    # inside any tag, let me auto-stringify
    div { stash->{form} };

Very simple:

    # in an action of your controller:
    my $html = $c->forward('View::ByCode', render => [qw(list of files)]);

Some attributes behave in a way that looks intuitive but also generates correct markup. The examples below do not need further explanation.

    # both things generate the same markup:
    input(disabled => 1);     input(disabled => 'disabled');
    input(checked => 1);      input(checked => 'checked');
    input(required => 1);     input(required => 'required');
    option(selected => 1);    option(selected => 'selected');
    textarea(readonly => 1);  textarea(readonly => 'readonly');

    # remember that choice() generates a <select> tag...
    choice(multiple => 1);    choice(multiple => 'multiple');

Besides these examples, all currently defined HTML-5 boolean attributes are available: disabled, checked, hidden, inert, multiple, readonly, selected, required.

render
    Will be called by process to render things. If render is called with extra arguments, they are treated as wrapper, template, etc... Returns the template result.

process
    Fulfills the request (called from Catalyst).

AUTHOR
    Wolfgang Kinkeldei, <wolfgang@kinkeldei.de>

LICENSE
    This library is free software, you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/Catalyst-View-ByCode/lib/Catalyst/View/ByCode.pm
CC-MAIN-2017-22
en
refinedweb
I am trying to place an image resized with PIL in a tkinter.PhotoImage object.

    import tkinter as tk  # I use Python 3
    from PIL import Image, ImageTk

    master = tk.Tk()
    img = Image.open(file_name)
    image_resized = img.resize((200, 200))
    photoimg = ImageTk.PhotoImage(image_resized)
    photoimg.put("#000000", (0, 0))

    AttributeError: 'PhotoImage' object has no attribute 'put'

Using tkinter's own class works, though:

    photoimg = tk.PhotoImage(file=file_name)
    photoimg.put("#000000", (0, 0))

Answer: ImageTk.PhotoImage, as in PIL.ImageTk.PhotoImage, is not the same class as tk.PhotoImage (tkinter.PhotoImage) -- they just have the same name. As the ImageTk.PhotoImage docs show, there is no put method in it, but tk.PhotoImage does have it.
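A workaround that keeps the PIL-based resizing is to edit pixels on the PIL Image itself (its putpixel plays the role of PhotoImage.put) and only then hand it to ImageTk.PhotoImage. A minimal sketch of the pixel-editing half, assuming Pillow is installed (no Tk window is needed for this part; the 200x200 white image stands in for the resized photo):

```python
# Sketch (assumes the Pillow package is available): modify pixels on the
# PIL Image *before* wrapping it in ImageTk.PhotoImage, since the PIL
# wrapper class has no put() method.
from PIL import Image

img = Image.new("RGB", (200, 200), "white")  # stands in for the resized photo
img.putpixel((0, 0), (0, 0, 0))              # PIL's counterpart of PhotoImage.put
print(img.getpixel((0, 0)))
```

After the pixels are edited, `ImageTk.PhotoImage(img)` produces the Tk-compatible object exactly as in the question's code.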
https://codedump.io/share/bdcofziUqc9P/1/attributeerror-when-creating-tkinterphotoimage-object-with-pilimagetk
    public class Test {
        public static void main(String[] args) throws ClientProtocolException, IOException {
            HttpClient client = new DefaultHttpClient();
            HttpPost post = new HttpPost("");
            JSONObject json = new JSONObject();
            json.put("parameter1", "This is parameter1");
            json.put("parameter2", "This is parameter2");
            StringEntity se = new StringEntity(json.toString());
            // StringEntity input = new StringEntity("{\'parameter1\':\'This is parameter1\',\'parameter2\':\'This is parameter2\'}");
            se.setContentType("application/json");
            post.setEntity(se);
            HttpResponse response = client.execute(post);
            BufferedReader rd = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
            String line = "";
            System.out.println("Output...");
            while ((line = rd.readLine()) != null) {
                System.out.println(line);
            }
        }
    }

    @Path("theservice")
    public class ServiceResource {
        @Context
        private UriInfo context;

        public ServiceResource() {
        }

        @POST
        @Produces("text/plain")
        @Consumes({"application/json"})
        public String postHandler(String content, String content2) {
            System.out.println("empty postHandler!!!");
            return
You have set those parameters on a single JSON object in your code You need to unmarshall this in to a JSON entity (e.g via JAXB annotations on a class) Just a small thought... I'm going to have 3 Post REST services and wonder if it would be most natural to have them in the same class with each @Post method having a distinct @Path annotation, or is it more natural to have them in different classes? They are related but do different things. If they are related to one another then they should be in same controller. And controller means class hre, right? Experts Exchange Solution brought to you by Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial
https://www.experts-exchange.com/questions/28862079/Java-web-service.html
A while ago, I stumbled over a component called DFPlayer. It's a tiny component that allows you to play mp3 files from a micro SD card (for less than 10€!). Since I was sick of using my smartphone as an alarm clock just to have music to wake me, I decided to build an alarm clock with it.

Step 1: Required Components

- 1x DFPlayer (7,80 Euro)
- 1x Arduino UNO (20 Euro)
- 1x LCD Display (20x4, ~10 Euro)
- 4x 10k Resistors
- 2x 1k Resistors
- 1x 10k Poti
- 1x Rotary Encoder (1,50 Euro)
- 3x Push Buttons
- 1x Visaton FRS 8 Speaker (10 Euro)
- Some cables for wiring and circuit board(s)

Additionally you may want an adjusting knob for the rotary encoder, a frame for the display and a case. My father made an oak case for me which looks pretty nice.

Step 2: Arduino Setup

- Connect the Arduino via USB to your PC.
- Make sure that you have the Arduino IDE installed.
- Download the attached .ino file and the additional 3x zip files.

The Arduino IDE provides a wizard where you can select a zip file to include a new library (Sketch -> Include Libraries...). Install all 3x libraries using this wizard. The .ino file should compile now and you can upload it to your Arduino.

Step 3: Wire Components

Before soldering everything together, I recommend using a breadboard to prototype everything. I created a fritzing image to help with that. You should check the correct wiring of the DFPlayer by using the provided link. You can find a description of the DFPlayer's pins there, but assuming the 5V+ pin is in the upper left corner, the wiring should match as shown on the picture.

Step 4: Build Everything Into the Case

The step varies depending on what case you have chosen to use. I've added some pictures here to show the steps I've done to wire everything into my oak case. I decided to build smaller circuits on smaller circuit boards to have checkpoints where I can verify that everything is still working.

19 Discussions

2 years ago
Ups, my fault. I built this clock a year ago, so excuse me if I forgot something. You have to install the Timer library. How To: Download: The Arduino IDE also provides a wizard where you can select the zip file to include a new library (Sketch -> Include Libraries...).

Reply 2 years ago
Thanks for a quick reply, I managed to upload your code, however I'm experiencing another problem. After wiring my LCD according to the Arduino diagram, all I get on the display are 2 rows of blocks (no characters) and no illumination. I checked the specification for my LCD here and I can tell that the backlight pins are 15 and 16, whereas pin 15 on your diagram goes to the Arduino and does something else, I presume. I assume that other pins will be different too. I'd appreciate it if you can help me out here, otherwise I'll just give up with this project.

Reply 2 years ago
I compared the datasheet with the one I used:... From what I can see, the pins are the same. I recommend prototyping the LCD without any other components. I used the Adafruit tutorial for this:... The first step is to get the backlight running, so I hope this will help you.

1 year ago
This is a simple yet beautiful project. I love it! :3

1 year ago
Anyone can confirm this instructable works? Building it atm, got the screen working but using some old parts... so would be nice if someone can confirm, and I have to check parts. Had it working, but it started up frozen in the hour menu; now I fried my 10k pot ;-p Waiting for parts.

1 year ago
Hi, Time.h attached to this tutorial is outdated. The new one can be found here:

2 years ago
Hi again, I've been trying to upload your program to my Arduino and it keeps saying:

    Arduino: 1.6.8 (Windows 7), Board: "Arduino/Genuino Uno"
    C:\Users\Golomp\Desktop\mephisto_V\mephisto_V.ino:3:19: fatal error: Timer.h: No such file or directory
    #include "Timer.h"
                      ^
    compilation terminated.
    exit status 1

I installed some Time library but it hasn't resolved the issue. Also I see that you have included a #include <DFPlayer_Mini_Mp3.h> library, where do I get it from? Sorry to be a pain, I'm not a programmer so I don't have a clue. Thanks!

2 years ago
To be honest, I can only guess, since I only have some basic skills and I had to play around with rotary encoders a lot before I got a successful combination of code and hardware. My approach would be to connect the rotary encoder to the Arduino and play around with the connections. Create a separate sketch for it using this logic:

    /**
     * The rotary encoder implementation
     */
    void updateEncoder() {
      int MSB = digitalRead(encoderPin1);        // most significant bit
      int LSB = digitalRead(encoderPin2);        // least significant bit
      int encoded = (MSB << 1) | LSB;            // combine the two pin states
      int sum = (lastEncoded << 2) | encoded;    // previous state + new state
      //if(sum == 13 || sum == 4 || sum == 2 || sum == 11) encoderValue++;
      if(sum == 2) encoderValue--;   // skip updates
      //if(sum == 14 || sum == 7 || sum == 1 || sum == 8) encoderValue--;
      if(sum == 1) encoderValue++;   // skip updates
      lastEncoded = encoded;
    }

Use Serial.print to log the encoderValue and check if it is working as expected (I had to google this code, but can't remember where I got it).

2 years ago
Hi, I'm preparing to make this mp3 player, I have almost all needed parts. I'm just wondering how to connect my encoder, as it seems to have a different pinout than yours. Here is the eBay auction number 181956049541. Many thanks in advance.

2 years ago
Is it possible to use a potentiometer instead of a rotary encoder?

Reply 2 years ago
Technically yes, but I wouldn't recommend it. It is just too complicated to use the position of the potentiometer to calculate the current input value.

2 years ago
Does the microSD go in the back? Or do you have to take it apart to put one in? Either way I love this project.

Reply 2 years ago
There is one photo of the last step where you can see a small opening on the back. This is where you can access the SD card. The DFPlayer has a little spring inside the SD slot, so it jumps out.
A little bit of a hassle though: the mp3 files on the SD card must be numbered, starting with 0001_..., 0002_..., etc. I've written a small program that renames the mp3 files I've selected automatically before I copy them onto the SD.

2 years ago
The 10K pot is for dimming the screen, I assume?

Reply 2 years ago
Yes, it's for dimming the screen. I followed the Adafruit tutorial for wiring LCDs.

2 years ago
It looks so professional! Nice job!

Reply 2 years ago
Thanks!
https://www.instructables.com/id/Arduino-MP3-Alarm-Clock/
QMetaType::type with QString

I want to return the id of a previously registered custom class by using a QString. I've tried the following, but all of them return QMetaType::UnknownType:

    QString name = "MyClass";
    QMetaType::type(name.toLatin1().data());    // fail
    QMetaType::type(name.toUtf8().data());      // fail
    QMetaType::type(name.toLocal8Bit().data()); // fail

    char *chars = new char[name.length()];
    strcpy(chars, name.toStdString().c_str());
    QMetaType::type(chars);                     // fail

I thought there was a bug with QMetaType or Q_DECLARE_METATYPE. This is not the case, since the following succeeded:

    QMetaType::type("MyClass"); // success

Since no QString object is passed, the parameter is (I assume) directly used as a char*. Does anyone know how to correctly decode a QString to a char* so that QMetaType::type works correctly?

EDIT: Does this maybe have something to do with the terminating character?
I've registered my class with: @Q_DECLARE_METATYPE(MyClass);@ and since @QMetaType::type("MyClass");@ worked and I was able to create an object with this, means that the registration was successful. My problem is with retrieving it by using a QString. The toAscii() function is also gone in the new Qt5, so I'm not sure if toLatin1() does exactly the same, but for some reason the passed char* is not working. Ok, I'm not sure what I did, but can't replicate my problem. So I don't think it has anything to do with the QString encoding. Here is my class decleration: @class MyClass { ... }; Q_DECLARE_METATYPE(MyClass); int main() { if(QMetaType::type("MyClass") == QMetaType::UnknownType) { int id = qMetaTypeId<ViSigmoidActivationFunction>(); cout << id <<endl; cout << QMetaType::typeName(id) << endl; } }@ The output of main is as follows: @1126 MyClass@ So my registration of my class was successfull, but there is a problem retireving the id using QMetaType::type("MyClass"). When I add the following to the first line of main: @qRegisterMetaType<MyClass>("MyClass");@ the 2 output staments are never execute, meaning my type was found. But I don't want to use this, since it is a runtime registration. Additionally Qt5 always refers to Q_DECLARE_METATYPE, saying that qRegisterMetaType is only when you want to use signals/slots parameters.
https://forum.qt.io/topic/25271/qmetatype-type-with-qstring
Contents

0.2.5 (2011-04-22)
- Added some new includes that were needed after PCL removed them from some of its header files.

0.2.4 ()

0.2.3 (2011-02-09)
- Un-mangled namespace Eigen3:: to Eigen::
- General makefile and dependency cleanup.

0.2.2 (2011-01-07)
- Updated for compatibility with unstable PCL and OpenCV.

0.2.1 (2010-12-14)
- Changed sparselib download location due to failed download server.

0.2.0 (2010-11-12)

0.1.7 (2010-12-14) --> Last release into C-Turtle
- Fixed rosdeps for Maverick.
- Changed sparselib download location due to failed download server.

0.1.6 (2010-10-25)
- Added missing rosdep.

0.1.5 (2010-07-27)
- Fix bug with downdating in SBA.
- Added SBA stand-alone node (sba_node).
- New message types for SBA Node.

0.1.4 (2010-07-20)

0.1.3 (2010-07-19)
- Disable building of more demos in suitesparse.
- Fix bugs with tests.

0.1.2 (2010-07-19)
- Disable building of tests and demos in suitesparse.

0.1.1 (2010-07-16)
- Lucid compatibility

0.1.0 (2010-07-16)
- Initial release
http://wiki.ros.org/vslam/ChangeList
Sharpen Solutions -- Part One

Sometimes there's more than one right answer. And sometimes the answer is whatever you want it to be.

You didn't think we'd just leave you hanging, did you? No, we thought we'd be all nice and helpful with this first book, to get you hooked, and then slam you in the next one... If you don't like our answers, argue with us. If we're just plain wrong, we'll change it and give you credit on the web site. If you're wrong, we'll publicly humiliate you, using a very large font. Just kidding. Please share your ideas and solutions with us, and we'll add them with your name (unless you want to be anonymous, and who could blame you).

Chapter One

Page 4

    int size = 27;                    declare an integer variable named size and give it the value 27
    String name = "Fido";             declare a String variable named name and give it the String value "Fido"
    Dog myDog = new Dog(name, size);  declare a Dog variable named myDog and give it a new Dog (that has a name and a size)
    x = size - 5;                     subtract 5 from the current value of the variable size, assign the result to the variable x
    if (x < 15) myDog.bark(8);        if the value of x is less than 15, then tell myDog to bark 8 times
    while (x > 3) {                   as long as the value of x is greater than 3, tell myDog to play
        myDog.play();
    }

Page 11

Given the output:

    % java Test
    DooBeeDooBeeDo

Fill in the missing code (the blanks are 3, print, print, and 3):

    public class DooBee {
        public static void main(String[] args) {
            int x = 1;
            while (x < 3) {
                System.out.print("Doo");
                System.out.print("Bee");
                x = x + 1;
                if (x == 3) {
                    System.out.print("Do");
                }
            }
        }
    }

Chapter Two

Page 32

    Television
        int channel
        int volume
        boolean power

        setChannel()
        setVolume()
        setPower()
        skipCommercials()
        searchForSimpsons()

Page 35

    MOVIE -- title, genre, rating, playIt()

    object 1: title "Gone with the Stock",   genre Tragic,                            rating -2
    object 2: title "Lost in Cubicle Space", genre Comedy,                            rating 5
    object 3: title "Byte Club",             genre "Tragic but ultimately uplifting", rating 127

Chapter Three

Page 50

Circle the legal statements from the following list (the legal ones are 3, 4, 5, 6, 8, 9, 10, and 13):

    1. int x = 34.5;
    2. boolean boo = x;
    3. int g = 17;
    4. int y = g;
    5. y = y + 10;
    6. short s;
    7. s = y;
    8. byte b = 3;
    9. byte v = b;
    10. short n = 12;
    11. v = n;
    12. byte k = 128;
    13. int p = 3 * g + y;

Page 50

What is the current value of pets[2]?  null
What code would make pets[3] refer to one of the two existing Dog objects?  pets[3] = pets[0];
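The legal/illegal split above comes down to Java's widening vs. narrowing rules. This little class (not from the book) runs the legal statements and keeps the illegal ones as comments:

```java
public class CastingDemo {
    public static void main(String[] args) {
        int g = 17;
        int y = g;           // legal: int to int
        y = y + 10;          // legal: y is now 27
        short s;             // legal: a declaration alone is fine
        // s = y;            // illegal: int -> short narrows without a cast
        byte b = 3;
        byte v = b;          // legal: byte to byte
        short n = 12;
        // v = n;            // illegal: short -> byte narrows without a cast
        // byte k = 128;     // illegal: 128 is outside byte's range (-128..127)
        // int x = 34.5;     // illegal: a double literal can't be assigned to an int
        int p = 3 * g + y;   // legal: 3 * 17 + 27
        System.out.println(p);
    }
}
```

Running it prints 78, the value of p from statements 3-5 and 13.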
Write your ideas (or lines of code) below: Make a fake user guess that is a MISS instead of a hit Try all three hits Try out a range of guesses Try duplicate guesses (these are just a few...) Page 105 Turn the to the next page in your book (106) for the answer. But then, you obviously know that already. We just put this in for completeness. Didn t want you thinking we just skipped it. Although we actually are skipping it. Here in the solutions document, anyway, not in the real book. You know what we mean. Page 111 It s a cliff-hanger! Will we find the bug? Will we fix the bug? Will Ben marry J-Lo? Stay tuned for the next chapter, where we answer these questions and more... And in the meantime, see if you can come up with ideas for what went wrong and how to fix it. The current version of the game cares only about the NUMBER of hits, not what the actual hits really are. So entering the same number (as long as it s a hit) three times is the same as entering the three correct, but different, numbers corresponding to the three hit locations. So, we need to somehow keep track of each hit, and maybe cross it off the list to show that it s been hit and is no longer a valid entry. 6 Sharpen Your Pencil - Part One 7 chapter six Head First Java Sharpen Solutions Page 130 Turn to page 132 in your book for the answer. Page 141 We didn t do an answer for this one, but nobody around here can remember why. Must have been some excuse about how that makes it more of a learning opportunity for you. If you ve got an answer you want to share with others (for that warm fuzzy feeling and good karma points), let us know and we ll include it (with your name). [ Kathy, this looks pretty weak here for chapter six. Doesn t look like we re giving them ANYTHING! Don t you feel guilty about that? Bert ] [ No. Kathy ] you are here 4 7 8 chapter seven Page 165 How many instance variables does Surgeon have? How many instance variables does FamilyDoctor have? How many methods does Doctor have? 
How many methods does Surgeon have? How many methods does FamilyDoctor have? 2 1 Can a FamilyDoctor do treatpatient()? Can a FamilyDoctor do makeincision()? Yes No Page 172 Page 175 Fan Musician Put a check next to the relationships that make sense. Oven extends Kitchen Guitar extends Instrument Rock Star concert Pianist I think I see a problem here... what if you have a bass player who IS a rock star? A rock star who IS a bass player? Not all bass players are rock stars, and not all rock stars are bass players, so what do you do? [answer: you ll have to wait for chapter 8 where we talk about interfaces...] Bass Player Person extends Employee Ferrari extends Engine FriedEgg extends Food Beagle extends Pet Container extends Jar Metal extends Titanium GratefulDead extends Band Blonde extends Smart Beverage extends Martini What if I want Beagle to extend Dog, but not all dogs are pets? (chapter 8) Hint: apply the IS-A test 8 Sharpen Your Pencil - Part One 9 chapter eight Head First Java Sharpen Solutions Page 201 Concrete Sample class Abstract golf course simulation Tree tree nursery application monopoly game House architect application satellite photo application Town video game Atlas/map application Football Player coaching application banquet seating app home floorplan design app Chair business modeling app technical support app Customer employee training app Store inventory system Sales Order home inventory program online book store Book Mall store directory kiosk Warehouse distribution system Store Simple business accounting Just-in-time inventory system Supplier simple golf game Pro shop POS system Golf Club Parts Inventory app Carburetor Home / architectural design Engine design software Gourmet restaurant app Oven Note: this is a little confusing and a little subjective, but here s a tip for this exercise -- the abstract category is for applications where the class in the center column would need to be SUBCLASSED. 
The concrete category is for applications where the class in the center can be concrete, and the only thing differentiating one instance from another is instance variable values. For example, in a home inventory system, you probably don t need to distinguish between different classes of books, you simply need an instance variable with the title, so that each instance of Book represents a book in your house. But a bookstore might *care* about different TYPES of books like FictionBook, TradePaperback, Children, etc. since each type might have different policies for stocking, discounts, and advertising. you are here 4 9; DEFINITELY. GAME CHANGER? EVOLUTION? Big Data Big Data EVOLUTION? GAME CHANGER? DEFINITELY. EMC s Bill Schmarzo and consultant Ben Woo weigh in on whether Big Data is revolutionary, evolutionary, or both. by Terry Brown EMC+ In a recent survey of Top 5 Mistakes Made with Inventory Management for Online Stores Top 5 Mistakes Made with Inventory Management for Online Stores For any product you sell, you have an inventory. And whether that inventory fills dozens of warehouses across the country, or is simply stacked LEAD with Love. A Guide for Parents LEAD with Love A Guide for Parents When parents hear the news that their son or daughter is lesbian, gay, or bisexual, they oftentimes freeze and think they don t know what to do they stop parenting all Hypercosm. Studio. Hypercosm Studio Hypercosm Studio Guide 3 Revision: November 2005 Copyright 2005 Hypercosm LLC All rights reserved. Hypercosm, OMAR, Hypercosm 3D Player, and Hypercosm Studio are trademarks SALE TODAY All toys half price Name: Class: Date: KET Practice PET TestPractice Reading Test and Reading Writing KET PET Part 1 Questions 1 5 Which notice (A H) says this (1 5)? 
http://docplayer.net/399539-Sharpen-solutions-1-part-one-sometimes-there-s-more-than-one-right-answer-and-sometimes-the.html
CC-MAIN-2018-51
en
refinedweb
Captured webcam frames arranged in a 3D rotate-able cube for your viewing pleasure. A simple tech demo trying out the new getUserMedia API in Chrome 21, with some 3D transforms and canvas based pixel manipulation thrown in for good measure.

Try it out

Make sure you're using Chrome >= 21 and have a webcam attached.

Pseudo-3.5D

This is the default setting; the idea is to make quite a slow repetitive movement and then pause the webcam (Webcam:paused=true). Hopefully the background will be ignored and your movement will be the only thing visible in each frame (if not, try fiddling with the Webcam:renderThreshold). Now rotate the cube by dragging it around for some 3.5D action!

Time-warp

A strange and unexpected effect that I noticed when building the demo. Rotate the cube so that all you can see is the oldest frame (Rotation:rotX=0, Rotation:rotY=180) and make sure you're rendering full frames (Webcam:renderMovement=true and Webcam:renderStatic=true). Point the webcam at yourself and let the cube fill up with frames. You're now looking 10 seconds into the past (frameCount=100 and Webcam:fps=10), spooky! For me it seems that 10 seconds is just about long enough for me to forget what I was doing, so it has the odd effect of looking like me but not feeling like me.

Technical notes

HTML5 Rocks has a great run-down of the getUserMedia API that I'm using for video-capture, as well as a more general look at its parent specification WebRTC. The thresholding code used for movement tracking is also heavily inspired by the filter code from another article on canvas image filters. If you haven't yet had your fill of CSS 3D transform tutorials, then again I recommend starting from the graphics feature overview.

Also worthy of a quick mention is RequireJS, a simple module loader for JavaScript. I've previously used Closure, which comes with its own custom mechanism, and the CommonJS pattern used by Node, but this is the first time I've tried RequireJS.
If you're not familiar with the concept, it allows you to easily create modular code and manage the references between modules; think package/import from Java or namespace/using from C#. So far I'm pretty impressed, but I'm yet to look into its support for compile-time optimisation and run-time async module loading; look out for another blog post.
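The Webcam:renderThreshold option mentioned in the technical notes decides which pixels count as movement. A rough sketch of that kind of per-pixel frame differencing follows; this is my own code, not the demo's, and the function and parameter names are invented.

```javascript
// Compare two frames (flat RGBA arrays, as in canvas ImageData.data).
// Pixels whose brightness changed by less than `threshold` are treated
// as static background and made fully transparent; moving pixels keep
// their original alpha.
function movementMask(prev, curr, threshold) {
  const out = new Uint8ClampedArray(curr); // copy of the current frame
  for (let i = 0; i < curr.length; i += 4) {
    const lumPrev = 0.299 * prev[i] + 0.587 * prev[i + 1] + 0.114 * prev[i + 2];
    const lumCurr = 0.299 * curr[i] + 0.587 * curr[i + 1] + 0.114 * curr[i + 2];
    if (Math.abs(lumCurr - lumPrev) < threshold) {
      out[i + 3] = 0; // static pixel: hide it
    }
  }
  return out;
}
```

Each tick you would call this with the previous and current ImageData.data arrays and draw the returned pixels back onto the frame's canvas.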
https://blog.scottlogic.com/2012/08/08/gum-cube.html
CC-MAIN-2021-21
en
refinedweb
AVR115: Data Logging with Atmel File System on ATmega32U4 Microcontrollers
Application Note

1 Introduction

Atmel provides a File System management for AT90USBx and ATmegaxxUx parts. This File System allows performing data logging operations, and this application note shows how to implement this feature using the ATmega32U4 and the EVK527 board. EVK527-series4-datalogging is the firmware package related to AVR115. The reader should be familiar with the AVR114 Application Note before reading this one. Rev. 2

2 Hardware Requirements

The Data Logging example application requires the following hardware:
- AVR USB evaluation board ATEVK527, which includes:
  - ATmega32U4
  - DataFlash (32Mbits)
  - SD/MMC connector
- USB cable (Standard A to Mini B)
- PC running on Windows (98SE, ME, 2000, XP), Linux or MAC OS with a USB 1.1 or 2.0 host

3 In-System Programming and Device Firmware Upgrade

To program the device you can use one of the following methods:
- The JTAG interface using the JTAGICE mkII
- The SPI interface using the AVRISP mkII and JTAGICE mkII
- The USB interface thanks to the factory DFU bootloader and FLIP(1) software
- The parallel programming using the STK500 or STK600

Note: "erase before programming" in AVR Studio: if checked, the DFU bootloader is deleted before programming.

4 Quick Start

Once your device is programmed with the EVK527-series4-datalogging hex file, you can start the data logging demonstration:
1. Unplug the USB cable and plug a power cable (9 V)
2. Press key down to start record:
   - The log file is created either on the MMC/SD card if present, or on DataFlash memory
   - LED0 is turned on when the recording starts
   - LED2 is turned on when an error occurs (Disk not present, Disk full, ...)
3. Press the key HWB or plug the USB cable to stop the recording.
4.
Plug the USB cable on your PC to run the U-Disk and read the log file Disk:\log000\log000.bin

Figure 4-1. EVK521 Rev1 (board callouts: USB; Solder SP2 & SP4 to enable LED0 and LED2; External Power 5V; MMC/SD slot; Stop record; Start record; Board power = 3V for Operational Amplifier)
Note: This is the EVK527 default factory configuration, except the SP2 & SP4, which must be soldered.

By default, the recorded data is only a digital number (16-bit) incremented and stored every 120 µs. One can change the record source via the software compilation options in the datalogging.c file:

#define LOG_ADCMIC_EXAMPLE   // From microphone ADC to a WAV file
#define LOG_ADCEXT_EXAMPLE   // From other ADC input to a BIN file
#define LOG_PIN_EXAMPLE      // Records pins states to a BIN file
#define LOG_ENUM_EXAMPLE     // Records counter value to a BIN file (Default)

5 Application

5.1 Behavior

The sample application provides two operating modes:

Download mode: the user has to connect the kit to a PC (removable disk "U-Disk") to be able to access the log file written on the memories (DataFlash or SD/MMC card). In this mode, the embedded Atmel File System is not allowed to access the memories.
Note: To prevent data corruption, only one file system management may be active at a given time.

Figure 5-1. Download mode (U-Disk): the MMC/SD Card (via SPI) and DataFlash (via SPI) are exposed over the USB Device Class Mass Storage; the File System (FS) is decoded by the PC.

Data logging mode: during this mode the data recording can be performed. The kit must be disconnected from the USB host, and starting/stopping record actions are managed by EVK527 buttons (see Figure 4-1. EVK521 Rev1). When the data recording starts, a file is created in a memory. An 8 KHz interrupt timer samples the values from the ADC or another source and writes them in a buffer. The full buffers are transferred into the file by the data logging task.

Figure 5-2.
Data Logging mode: two buffers of 512 B in ping-pong mode; the FS is decoded by the embedded Atmel FS. (Diagram: ADC / I/O feed the Timer Interrupt (8 KHz); the Data Logging Task (scheduled by main) writes to the MMC/SD Card (via SPI) or DataFlash (via SPI).)

5.2 Firmware

This section explains only the File System management code and not the USB module. The following code samples are extracted from the data_logging.c file of the EVK527-series4-datalogging package.

5.2.1 Enable/disable embedded FS

The File System module initialization and exit are managed by datalogging_task(). When the chip exits from USB Device mode, one can call nav_reset() to initialize the embedded Atmel File System. When one wants to stop the embedded Atmel File System, or when the USB Device mode starts, nav_exit() must be called to flush cache information into the memories.

Figure 5-3. Example of datalogging_task() routine

void datalogging_task(void)
{
   // Change the data logging state
   if( Is_joy_down()                 // If data logging started by user
    && (!Is_device_enumerated()))    // and if USB Device mode stopped
   {
      if(!g_b_datalogging_running )
      {
         // Start data logging
         nav_reset();                //** Init File System
         g_b_datalogging_running = datalogging_start();
      }
   }
   if( Is_hwb()                      // If data logging stopped by user
    || Is_device_enumerated() )      // or if USB Device mode started
   {
      if( g_b_datalogging_running )
      {
         // Stop data logging
         g_b_datalogging_running = FALSE;
         datalogging_stop();
         nav_exit();                 //** Exit of File System module
      }
   }
   // Execute data logging background task
   if( g_b_datalogging_running )
   {
      g_b_datalogging_running = datalogging_file_write_sector();
      if(!g_b_datalogging_running )
      {
         datalogging_stop();
         nav_exit();                 //** Exit of File System module
      }
   }
}

5.2.2 Open disk

In this example, only one navigator handle(1) is needed because the application has only one disk exploration and only one file opened at the same time. In that case, we select the default navigator handle 0: nav_select(0).

Note: 1.
see Application Note AVR114 for more information about navigator handles.

The algorithm to open a disk depends on the number of disks connected (see configuration in the conf_access.h file). By default the example uses the DataFlash and MMC/SD drivers and the following algorithm:

Try to mount MMC/SD disk and format if necessary
If error (disk not present, fail, ...)
   Try to mount DataFlash disk and format if necessary
   If error (disk not present, fail, ...)
      Abort data logging

Figure 5-4. datalogging_open_disk() routine

Bool datalogging_open_disk(void)
{
   U8 u8_i;
   nav_select(fs_nav_id_default);

#if( (LUN_2 == ENABLE) && (LUN_3 == ENABLE))  // Configuration set in conf_access.h
   // Select and try to mount disk MMC/SD (lun 1) or DataFlash (lun 0)
   // Try first the MMC/SD
   for( u8_i=1; u8_i!=0xff; u8_i-- )
#else
   // There is only one memory MMC/SD or DataFlash (lun 0)
   for( u8_i=0; u8_i!=0xff; u8_i-- )
#endif
   {
      if( nav_drive_set(u8_i) )     // Select driver (not disk)
      {
         // Driver available then mount it
         if(!nav_partition_mount() )
         {
            // Error during the mount then check error status
            if( FS_ERR_NO_FORMAT != fs_g_status )
               continue;            // Disk fails (not present, HW error, system error, ...)
            // Disk not formatted then format it
            if(!nav_drive_format(fs_format_default) )
               continue;            // Format fails
         }
         return TRUE;               // Here disk mounted
      }
   }
   return FALSE;                    // No valid disk found
}

5.2.3 Create path file

This part creates the path file Disk:\logxxx\logxxx.bin. The datalogging_create_path_file() routine creates the directory \logxxx\. The nav_setcwd() routine searches for and eventually creates the path (the third argument must be TRUE).

Bool datalogging_create_path_file(void)
{
   char ascii_name[15];
   U16 u16_dir_num = 0;

Figure 5-5.
/* datalogging_create_path_file() routine */

   if(!nav_dir_root() )
      return FALSE;                 // Error FS
   while( u16_dir_num < 30 )        // The limitation of number of directories is just an example
   {
      sprintf( ascii_name, "./log%03d/", u16_dir_num);
      // Enter in sub directory and eventually create it if it doesn't exist
      if(!nav_setcwd( ascii_name, FALSE, TRUE) )
         return FALSE;              // Error FS
      // Create a file
      if( datalogging_create_file() )
         return TRUE;               // File created
      // Here, the directory is full then go to parent directory to create the next sub directory
      if(!nav_dir_gotoparent() )
         return FALSE;              // Error FS
      u16_dir_num++;
   }
   return FALSE;                    // Too many log directories and files
}

The datalogging_create_file() routine creates the file \logxxx.bin. This one limits the number of log files in a log directory to 10, because the exploration of a directory with many files may be too slow.

Figure 5-6. datalogging_create_file() routine

Bool datalogging_create_file(void)
{
   char ascii_name[15];
   U16 u16_file_num = 0;

   while( u16_file_num < 10 )
   {
      // Create file
      sprintf( ascii_name, "log%03d.bin", u16_file_num);
      if( nav_file_create( ascii_name ))
         return TRUE;               // Here, the file is created, closed and emptied
      // Error during creation then check error
      if( fs_g_status != FS_ERR_FILE_EXIST )
         return FALSE;              // Error FS (Disk full, directory full, ...)
      // The file exists then increment number in name
      u16_file_num++;
   }
   return FALSE;                    // Too many log files in current directory
}

5.2.4 File space allocation

This section is not mandatory but allows increasing the data logging bandwidth. For more explanation, see section 6.5 of the AVR114 application note.
Note: At the end of data logging, the remaining allocated memory (size allocated - final size) is freed up when the file is closed.

Bool datalogging_alloc_file_space(void)
{
   Fs_file_segment g_recorder_seg;

   // Open the file created in write mode
   if(!file_open(fopen_mode_w))
      return FALSE;

Figure 5-7.
/* datalogging_alloc_file_space() routine */

   // Define the size of segment to alloc (unit 512 B)
   // Note: you can alloc more in case you don't know the total size
   g_recorder_seg.u16_size = FILE_ALLOC_SIZE;
   // Alloc in FAT a cluster list equal or inferior to segment size
   if(!file_write( &g_recorder_seg ))
   {
      file_close();
      return FALSE;
   }
   // If you want then you can check the minimum size allocated
   if( g_recorder_seg.u16_size < FILE_ALLOC_SIZE_MIN )
   {
      file_close();
      nav_file_del(false);
      return FALSE;
   }
   // Close/open file to reset size
   // Note: This sequence doesn't remove the previous FAT allocation
   file_close();                    // Closes file
   if(!file_open(fopen_mode_w))     // Opens file in write mode and forces the size to 0
      return FALSE;
   return TRUE;                     //** File open and FAT allocated
}

5.2.5 File filling

file_write_buf() is the best routine to fill a file. Using a multiple of 512 B as buffer size and current file position will give optimal speed performance, because the memory interface uses blocks of 512 B. Buffers (2 * 512 B) are filled by the timer0 interrupt routine. The maximum data logging bandwidth on ATmega32U4 at 8 MHz is 18 KB/s. This value has been measured with the following example.

Figure 5-8. datalogging_file_write_sector() routine

Bool datalogging_file_write_sector(void)
{
   //!!!! Note :
   // if the written buffer size is a multiple of 512 B
   // and if the current file position is a multiple of 512 B
   // then the "file_write_buf()" routine is very efficient.
   if( g_b_buf_full[g_u8_cur_buf] )
   {
      if(!file_write_buf( &g_data_buf[g_u8_cur_buf*fs_size_of_sector], FS_SIZE_OF_SECTOR ) )
         return FALSE;              // Error write
      g_b_buf_full[g_u8_cur_buf] = FALSE;   // Now wait new buffer
      g_u8_cur_buf++;
      if( NB_DATA_BUF == g_u8_cur_buf )
         g_u8_cur_buf = 0;
   }
   return TRUE;
}
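The ping-pong scheme described in 5.1 and 5.2.5 (two 512 B buffers filled by the timer interrupt and drained by the data logging task) can be exercised on a host PC with the small C model below. The fake ISR and fake disk are my own stand-ins for illustration, not firmware code, although the names mirror the application note.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NB_DATA_BUF        2
#define FS_SIZE_OF_SECTOR  512

static uint8_t  g_data_buf[NB_DATA_BUF * FS_SIZE_OF_SECTOR];
static uint8_t  g_b_buf_full[NB_DATA_BUF];
static uint8_t  g_u8_fill_buf = 0;     /* buffer the sampling "ISR" fills */
static uint16_t g_u16_fill_pos = 0;
static uint8_t  g_u8_cur_buf = 0;      /* buffer the write task drains    */

/* Stands in for the 8 KHz timer interrupt storing one sample. */
static void sample_isr(uint8_t value)
{
    g_data_buf[g_u8_fill_buf * FS_SIZE_OF_SECTOR + g_u16_fill_pos++] = value;
    if (FS_SIZE_OF_SECTOR == g_u16_fill_pos) {
        g_b_buf_full[g_u8_fill_buf] = 1;               /* hand over to task */
        g_u8_fill_buf = (g_u8_fill_buf + 1) % NB_DATA_BUF;
        g_u16_fill_pos = 0;
    }
}

/* Stands in for datalogging_file_write_sector(): drain one full buffer. */
static int write_task(uint8_t *fake_disk)
{
    if (!g_b_buf_full[g_u8_cur_buf])
        return 0;                                      /* nothing to do */
    memcpy(fake_disk, &g_data_buf[g_u8_cur_buf * FS_SIZE_OF_SECTOR],
           FS_SIZE_OF_SECTOR);
    g_b_buf_full[g_u8_cur_buf] = 0;
    g_u8_cur_buf = (g_u8_cur_buf + 1) % NB_DATA_BUF;
    return 1;
}
```

Because the ISR only ever touches the fill buffer and the task only the drained one, the interrupt can keep sampling while a full sector is being written, which is what makes the 512 B aligned writes efficient.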
https://docplayer.net/12000290-Avr115-data-logging-with-atmel-file-system-on-atmega32u4-microcontrollers-application-note-1-introduction-atmel.html
CC-MAIN-2021-21
en
refinedweb
Class UndeclaredThrowableException

- java.lang.Object
- java.lang.Throwable
- java.lang.Exception
- java.lang.RuntimeException
- java.lang.reflect.UndeclaredThrowableException

All Implemented Interfaces: Serializable

public class UndeclaredThrowableException extends RuntimeException

- Since: 1.3
- See Also: InvocationHandler, Serialized Form

Constructor Summary

Method Summary

Methods declared in class java.lang.Throwable:
addSuppressed, fillInStackTrace, getLocalizedMessage, getMessage, getStackTrace, getSuppressed, initCause, printStackTrace, printStackTrace, printStackTrace, setStackTrace, toString

Methods declared in class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Constructor Detail

UndeclaredThrowableException
public UndeclaredThrowableException(Throwable undeclaredThrowable)
Constructs an UndeclaredThrowableException with the specified Throwable.
- Parameters: undeclaredThrowable - the undeclared checked exception that was thrown

UndeclaredThrowableException
public UndeclaredThrowableException(Throwable undeclaredThrowable, String s)
Constructs an UndeclaredThrowableException with the specified Throwable and a detail message.
- Parameters: undeclaredThrowable - the undeclared checked exception that was thrown; s - the detail message

Method Detail

getUndeclaredThrowable
public Throwable getUndeclaredThrowable()
Returns the Throwable instance wrapped in this UndeclaredThrowableException, which may be null.

getCause
public Throwable getCause()
Returns the cause of this exception (the Throwable instance wrapped in this UndeclaredThrowableException, which may be null).
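A short, self-contained demonstration of the situation this class covers: a dynamic proxy whose invocation handler throws a checked exception that the interface method does not declare, so the proxy wraps it in an UndeclaredThrowableException. The Greeter interface, the helper class and the message text are made up for the example.

```java
import java.io.IOException;
import java.lang.reflect.Proxy;
import java.lang.reflect.UndeclaredThrowableException;

// Hypothetical interface: greet() declares no checked exceptions.
interface Greeter {
    String greet();
}

class UndeclaredDemo {
    // Provokes the wrapping and returns the caught cause, so the
    // behaviour documented above can be observed directly.
    static Throwable provoke() {
        Greeter g = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (proxy, method, args) -> {
                    // A checked exception NOT declared by Greeter.greet()
                    throw new IOException("disk failure");
                });
        try {
            g.greet();
            return null;
        } catch (UndeclaredThrowableException e) {
            return e.getCause(); // the wrapped IOException
        }
    }
}
```

If the handler had thrown a RuntimeException or an exception type declared by greet(), no wrapping would occur; the exception would propagate as-is.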
https://docs.w3cub.com/openjdk~11/java.base/java/lang/reflect/undeclaredthrowableexception
CC-MAIN-2021-21
en
refinedweb
First solution in Clear category for Fizz Buzz by anushachaturvedi7

# Your optional code here
# You can import some modules or create additional functions

def checkio(number: int) -> str:
    if number % 3 == 0 and number % 5 == 0:  # Check divisibility by 3 and 5
        return "Fizz Buzz"
    elif number % 3 == 0:  # Check divisibility by 3
        return "Fizz"
    elif number % 5 == 0:  # Check divisibility by 5
        return "Buzz"
    return str(number)  # Convert integer to string

Feb. 18, 2020
https://py.checkio.org/mission/fizz-buzz/publications/anushachaturvedi7/python-3/first/share/244ffdf9c7a1de86ce7d16d2500f668a/
CC-MAIN-2021-21
en
refinedweb
CASSANDRA SUMMIT 2015
Mario Lazaro, September 24th 2015, #CassandraSummit2015
Multi-Region Cassandra in AWS

WHOAMI
Mario Cerdan Lazaro, Big Data Engineer
- Born and raised in Spain
- Joined GumGum 18 months ago
- About a year and a half of experience with Cassandra

GumGum
- #5 Ad Platform in the U.S
- 10B impressions / month
- 2,000 brand-safe premium publisher partners
- 1B+ global unique visitors
- Daily inventory: Impressions processed - 213M
- Monthly Image Impressions processed - 2.6B
- 123 employees in seven offices

Agenda
- Old cluster
- International Expansion
- Challenges
- Testing
- Modus Operandi
- Tips
- Questions & Answers

OLD C* CLUSTER - MARCH 2015
- 25 Classic nodes cluster
- 1 Region / 1 Rack hosted in AWS EC2 US East
- Version 2.0.8
- Datastax CQL driver
- GumGum's metadata including visitors, images, pages, and ad performance
- Usage: realtime data access and analytics (MR jobs)

OLD C* CLUSTER - REALTIME USE CASE
- Billions of rows
- Heavy read workload - 60/40
- TTLs everywhere - Tombstones
- Heavy and critical use of counters - RTB
- Read Latency constraints (total execution time ~50 ms)

OLD C* CLUSTER - ANALYTICS USE CASE
- Daily ETL jobs to extract / join data from C*
- Hadoop MR jobs
- AdHoc queries with Presto

INTERNATIONAL EXPANSION
First Steps
- Start C* test datacenters in US East & EU West and test how C* multi region works in AWS
- Run capacity/performance tests. We expect 3x more traffic in 2015 Q4
First thoughts
- Use AWS Virtual Private Cloud (VPC)
- Cassandra & VPC present some connectivity challenges
- Replicate entire data with same number of replicas
TOO GOOD TO BE TRUE ...

CHALLENGES
- Your application needs to connect to C* using private IPs
- Custom EC2 translator:

/**
 * Implementation of {@link AddressTranslater} used by the driver that
 * translates external IPs to internal IPs.
 *
 * @author Mario <mario@gumgum.com>
 */
public class Ec2ClassicTranslater implements AddressTranslater {
    private static final Logger LOGGER = LoggerFactory.getLogger(Ec2ClassicTranslater.class);
    private ClusterService clusterService;
    private Cluster cluster;
    private List<Instance> publicDnss;

    @PostConstruct
    public void build() {
        publicDnss = clusterService.getInstances(cluster);
    }

    /**
     * Translates a Cassandra {@code rpc_address} to another address if necessary.
     *
     * @param address the address of a node as returned by Cassandra.
     * @return {@code address} translated IP address of the source.
     */
    public InetSocketAddress translate(InetSocketAddress address) {
        for (final Instance server : publicDnss) {
            if (server.getPublicIpAddress().equals(address.getHostString())) {
                LOGGER.info("IP address: {} translated to {}", address.getHostString(), server.getPrivateIpAddress());
                return new InetSocketAddress(server.getPrivateIpAddress(), address.getPort());
            }
        }
        return null;
    }

    public void setClusterService(ClusterService clusterService) {
        this.clusterService = clusterService;
    }

    public void setCluster(Cluster cluster) {
        this.cluster = cluster;
    }
}

CHALLENGES - Datastax Java Driver Load Balancing
- Multiple choices
- DCAware + TokenAware
- DCAware + TokenAware + ?

CHALLENGES - Zone Aware Connection
- Webapps in 3 different AZs: 1A, 1B, and 1C
- C* datacenter spanning 3 AZs with 3 replicas (AZ layout: 1A 1B 1C 1B 1B)
- We added it!
- Rack/AZ awareness added to TokenAware Policy

CHALLENGES - Third Datacenter: Analytics
- Do not impact realtime data access
- Spark on top of Cassandra - Spark-Cassandra Datastax connector
- Replicate specific keyspaces
- Fewer nodes with larger disk space
- Settings are different - Ex: Bloom filter chance
- Cassandra-only DC for realtime; Cassandra + Spark DC for analytics

CHALLENGES - Upgrade from 2.0.8 to 2.1.5
- Counters implementation is buggy in pre-2.1 versions
- "My code never has bugs. It just develops random unexpected features"

CHALLENGES - To choose, or not to choose VNodes. That is the question. (M. Lazaro, 1990 - 2500)
- Previous DC using Classic Nodes: works with MR jobs; complexity for adding/removing nodes; manually managed token ranges
- New DCs will use VNodes: Apache Spark + Spark Cassandra Datastax connector; easy to add/remove new nodes as traffic increases

TESTING
- Testing requires creating and modifying many C* nodes
- Creating and configuring a C* cluster is a time-consuming / repetitive task
- Create a fully automated process for creating/modifying/destroying Cassandra clusters with Ansible

# Ansible settings for provisioning the EC2 instance
---
ec2_instance_type: r3.2xlarge
ec2_count:
  - 0 # How many in us-east-1a ?
  - 7 # How many in us-east-1b ?
ec2_vpc_subnet:
  - undefined
  - subnet-c51241b2
  - undefined
  - subnet-80f085d9
  - subnet-f9138cd2
ec2_sg:
  - va-ops
  - va-cassandra-realtime-private
  - va-cassandra-realtime-public-1
  - va-cassandra-realtime-public-2
  - va-cassandra-realtime-public-3

TESTING - Performance
- Performance tests using new Cassandra 2.1 Stress Tool:
- Recreate GumGum metadata / schemas
- Recreate workload and make it 3 times bigger
- Try to find limits / saturate clients

# Keyspace Name
keyspace: stresscql
#keyspace_definition: |
#  CREATE KEYSPACE stresscql WITH replication = {'class':
#  'NetworkTopologyStrategy', 'us-eastus-sandbox':3,'eu-westeu-sandbox':3 }

### Column Distribution Specifications ###
columnspec:
  - name: visitor_id
    size: gaussian(32..32) #domain names are relatively short
    population: uniform(1..999M) #10M possible domains to pick from
  - name: bidder_code
    cluster: fixed(5)
  - name: bluekai_category_id
  - name: bidder_custom
    size: fixed(32)
  - name: bidder_id
    size: fixed(32)
  - name: bluekai_id
    size: fixed(32)
  - name: dt_pd
  - name: rt_exp_dt
  - name: rt_opt_out

### Batch Ratio Distribution Specifications ###
insert:
  partitions: fixed(1) # Our partition key is the visitor_id so only insert one per batch
  select: fixed(1)/5   # We have 5 bidder_code per domain so 1/5 will allow 1 bidder_code per batch
  batchtype: UNLOGGED  # Unlogged batches

#
# A list of queries you wish to run against the schema
#
queries:
  getvisitor:
    cql: SELECT bidder_code, bluekai_category_id, bidder_custom, bidder_id, bluekai_id, dt_pd, rt_exp_dt, rt_opt_out FROM partners WHERE visitor_id = ?
    fields: samerow

TESTING - Performance
- Main worry: latency and replication overseas
- Use LOCAL_X consistency levels in your client
- Only one C* node will contact one C* node in a different DC for sending replicas/mutations

TESTING - Instance type
- Test all kinds of instance types.
We decided to go with r3.2xlarge machines for our cluster:
- 60 GB RAM
- 8 Cores
- 160GB Ephemeral SSD Storage for commit logs and saved caches
- RAID 0 over 4 SSD EBS Volumes for data
- Performance / Cost and GumGum's use case make r3.2xlarge the best option
- Disclosure: the I2 instance family is the best if you can afford it

TESTING - Upgrade
- Upgrade C* Datacenter from 2.0.8 to 2.1.5
- Both versions can cohabit in the same DC
- New settings and features tried:
  - DateTieredCompactionStrategy: Compaction for Time Series Data
  - Incremental repairs
  - Counters new architecture

MODUS OPERANDI - Sum up
- From: One cluster / One DC in US East
- To: One cluster / Two DCs in US East and one DC in EU West

MODUS OPERANDI - Second step
- Upgrade old datacenter from 2.0.8 to 2.1.5
- nodetool upgradesstables (multiple nodes at a time)
- Not possible to rebuild a 2.1.X C* node from a 2.0.X C* datacenter:

WARN [Thread-12683] 2015-06-17 10:17:22,845 IncomingTcpConnection.java:91 - UnknownColumnFamilyException reading from socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find cfId=XXX

MODUS OPERANDI - Third step
- Start EU West and new US East DCs within the same cluster
- Replication factor in new DCs: 0
- Use dc_suffix to differentiate new Virginia DC from old one
- Clients do not talk to new DCs; only C* knows they exist
- Replication factor to 3 on all keyspaces except analytics (1)
- Start receiving new data
- nodetool rebuild <old-datacenter> for old data

MODUS OPERANDI (diagram: RF 3 -> RF 3:0:0:0 -> RF 3:3:3:1; clients on old DC; rebuild in US East Realtime, EU West Realtime, and US East Analytics)

From 39d8f76d9cae11b4db405f5a002e2a4f6f764b1d Mon Sep 17 00:00:00 2001
From: mario <mario@gumgum.com>
Date: Wed, 17 Jun 2015 14:21:32 -0700
Subject: [PATCH] AT-3576 Start using new Cassandra realtime cluster
---
 src/main/java/com/gumgum/cassandra/Client.java     | 30 ++++------------------
 .../com/gumgum/cassandra/Ec2ClassicTranslater.java | 30 ++++++++++++++--------
 src/main/java/com/gumgum/cluster/Cluster.java      |  3 ++-
 .../resources/applicationContext-cassandra.xml     | 13 ++++------
 src/main/resources/dev.properties                  |  2 +-
 src/main/resources/eu-west-1.prod.properties       |  3 +++
 src/main/resources/prod.properties                 |  3 +--
 src/main/resources/us-east-1.prod.properties       |  3 +++
 .../CassandraAdPerformanceDaoImplTest.java         |  2 --
 .../asset/cassandra/CassandraImageDaoImplTest.java |  2 --
 .../CassandraExactDuplicatesDaoTest.java           |  2 --
 .../com/gumgum/page/CassandraPageDoaImplTest.java  |  2 --
 .../cassandra/CassandraVisitorDaoImplTest.java     |  2 --
 13 files changed, 39 insertions(+), 58 deletions(-)

Start using new Cassandra DCs

MODUS OPERANDI (diagram: RF 0:3:3:1; clients move to US East Realtime, EU West Realtime, US East Analytics; then RF 3:3:3:1 -> RF 3:3:1 and the old DC is decommissioned)

TIPS - Automated Maintenance
Maintenance in a multi-region C* cluster:
- Ansible + Cassandra maintenance keyspace + email report = zero human intervention!
CREATE TABLE maintenance.history (
    dc text,
    op text,
    ts timestamp,
    ip text,
    PRIMARY KEY ((dc, op), ts)
) WITH CLUSTERING ORDER BY (ts ASC)
    AND bloom_filter_fp_chance=0.010000
    AND caching='{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment=''
    AND dclocal_read_repair_chance=0.100000
    AND gc_grace_seconds=864000
    AND read_repair_chance=0.000000
    AND compaction={'class': 'SizeTieredCompactionStrategy'}
    AND compression={'sstable_compression': 'LZ4Compressor'};

CREATE INDEX history_kscf_idx ON maintenance.history (kscf);

gumgum@ip-10-233-133-65:/opt/scripts/production/groovy$ groovy CassandraMaintenanceCheck.groovy -dc us-east-va-realtime -op compaction -e mario@gumgum.com

TIPS - Spark
- Number of workers above number of total C* nodes in analytics
- Each worker uses: 1/4 of the cores of each instance; 1/3 of the total available RAM of each instance
- Cassandra-Spark connector: spanBy, .joinWithCassandraTable(:x, :y)
- spark.cassandra.output.batch.size.bytes, spark.cassandra.output.concurrent.writes

val conf = new SparkConf()
    .set("spark.cassandra.connection.host", cassandraNodes)
    .set("spark.cassandra.connection.local_dc", "us-east-va-analytics")
    .set("spark.cassandra.connection.factory", "com.gumgum.spark.bluekai.DirectLinkConnectionFactory")
    .set("spark.driver.memory","4g")
    .setAppName("Cassandra presidential candidates app")

- Create "translator" if using EC2MultiRegionSnitch
- spark.cassandra.connection.factory since C* is in EU West

US West Datacenter! (diagram: EU West DC, US East DC, Analytics DC, US West DC)

Q&A
GumGum is hiring!
Cassandra Summit 2015 - GumGum
By mario2
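The core idea behind the Ec2ClassicTranslater shown earlier (mapping a node's public EC2 address back to its private address) can be sketched without any driver or AWS dependency as a plain lookup table. This is an illustrative standalone sketch, not GumGum's code: the IPs are made up, and the real class obtains the instance list from AWS and implements the Datastax driver's AddressTranslater interface.

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

public class AddressTranslationSketch {

    // Public-to-private IP mapping, which the real translator derives
    // from EC2 instance metadata (addresses here are invented).
    private final Map<String, String> publicToPrivate = new HashMap<>();

    public AddressTranslationSketch() {
        publicToPrivate.put("54.10.0.1", "10.0.0.1");
        publicToPrivate.put("54.10.0.2", "10.0.0.2");
    }

    // Same contract as the translate() method in the slides: return the
    // translated address, or null when the node's public IP is unknown.
    public InetSocketAddress translate(InetSocketAddress address) {
        String privateIp = publicToPrivate.get(address.getHostString());
        if (privateIp == null) {
            return null;
        }
        return new InetSocketAddress(privateIp, address.getPort());
    }

    public static void main(String[] args) {
        AddressTranslationSketch t = new AddressTranslationSketch();
        System.out.println(t.translate(new InetSocketAddress("54.10.0.1", 9042)).getHostString());
    }
}
```

The driver-facing version plugs an object with this shape into the cluster configuration so that contact points advertised by their public IPs are dialed on their private ones.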
https://slides.com/mario2/cassandra-summit-2015-gumgum
CC-MAIN-2021-21
en
refinedweb
Learning C#: How to Master an Object Oriented Programming Language

If you've worked with any C-style language, C# will come as second nature to you as a programmer. C# is extremely similar to C++ and especially Java. Java programmers will have no problem learning C#, and for a native C# programmer, moving on to languages such as Java for Android development will be just a matter of learning simple syntax. C# is a part of the Microsoft .NET library. At first, when .NET was introduced, C# was not as popular as its fellow .NET language, VB.NET. .NET was introduced as the next programming language after Classic ASP and Visual Basic, so most developers flocked to VB.NET as the natural next language to learn. Now, however, MVC C# is the next framework, and it's been a catalyst for C#'s popularity. If you want to learn C#, you need to know a few basics: frameworks, language syntax, and programming style.

Learning Object Oriented Languages and C#

Object Oriented versus Linear Programming Languages

The first item you need to know is object oriented languages versus the linear execution steps of older languages. Object oriented languages aren't a new concept. C++ is an object oriented language. However, many of the older languages were linear. Linear languages run one page from start to finish. There are no compartmentalizing objects, which makes linear languages messier than object oriented languages. Linear languages only allow you to import code from other files, but this code does not need to have any organization or logic flow. C# is a true object oriented language. At first, you'll probably get frustrated. Understanding object oriented languages is difficult for most people, especially if you are used to older languages. Object oriented languages use a concept of "classes." These classes represent parts of your code. For instance, if you have a program about a car, you map out the parts of the car as classes. You'd have a class for the engine, the interior, the exterior and maybe some classes about the dashboard and passengers. The complexity of your classes is dependent on the complexity of your car.

The flow of an object oriented language is completely different from a linear language. When a linear code file executes, the compiler runs through the code line-by-line. With object oriented classes, you call class methods, properties and events at any point in your code. When you call a method, the execution process jumps to the corresponding class and returns to the next line of execution. The C# language is written with Visual Studio, so you can step through your class code to see the flow of execution.

To Get Started

Microsoft offers the Visual Studio software for free on the company's website. You'll need the .NET framework installed on your computer, but if you run Windows, you probably have the framework requirements. Visual Studio installs all the necessary software to get started with C#, including the C# compiler and the .NET framework if you don't have it. A few advantages of Visual Studio will help you get started with the language. First, Visual Studio has an excellent debugger that works with website code, web and Windows services code, and class libraries. You can step through your code to more easily find bugs and errors. Second, Visual Studio has an excellent interface that includes color-coded identification for different code elements such as classes (light blue), primitive data types (dark blue), and strings (red). Visual Studio is flexible and allows you to change these color codes, but they are well known in the industry as the default colors. IntelliSense is probably Visual Studio's best feature. IntelliSense tries to "guess" what you want to type next, so you don't have to fully type out all of your code. Start typing a C# method or property and IntelliSense lets you press Tab to finish the syntax without fully typing the code.
Start learning Visual Studio and C# today.

Designing Applications and Understanding Classes

Classes are the main component for any C# or object oriented language. Classes are also the biggest hurdle for most people. Classes represent parts of your code, and as you display windows or views in your application, you call these parts as needed. For instance, using the car scenario, you might have a class that describes the engine. The engine can be on or off, and the engine also gives the car the ability to move forward. When you want to show the user a car moving forward, you probably need to call the engine class and its methods that move the car forward. In another window, you might want to show the car moving backwards. You would again call the engine class, except this time you'd call the method that moves the car in reverse. This is one of the advantages of C#, and of all object oriented languages for that matter. You only need to create one engine class, and then this class can be called in various parts of your code. With linear code, you need to retype the same code in the execution file. With C# and object oriented code, you simply call the class and execute the parts of the class you need to represent the program's action.

The classes you create are usually determined when you design the application. Designing applications takes some time to learn, because if you create a poor design, it can make engineering the application more difficult. You might even need to re-code several parts of your program if it is poorly designed. The basic idea of design is to put yourself in the user's shoes. What would you want out of the application? After you've figured out the basic functionality for the program, you design the classes. These classes usually entwine with your database design as well, but database design is another learning obstacle. While the classes represent your program parts, the database syncs with the classes to store the information.

For most C# programmers, a group of people determine the application's functionality, which makes it easier for the programmer. You take the functional design and turn it into classes. For instance, the functional requirements will tell you that the program is a car and the car needs to move forward, backward, make turns and turn off and on. You then take these functional requirements and use them to design your classes. In this example, you'd use the functional requirements to create methods in your engine class. The methods would represent each car action, including the forward and back motion and turning the car off and on.

Building Applications with C#

You have several options when you learn C#, which makes it one of the leading languages to learn for people who plan to write a wide range of applications. Probably the most popular type of application you will eventually build is a web application. Web applications are usually written in MVC C#, but older styles such as web forms are still common. C# is also a valuable language for writing services. Web services are applications that allow external users to call methods over the web. For instance, Twitter, Facebook and Salesforce all have web services. They are usually referred to as an API. You can write these APIs in C# and publish them to your website. Windows services are small programs that run on servers or desktops. C# is also used to write these services that run in the machine's background and execute code on a scheduled basis. You can build and deploy all of these applications using Visual Studio, which compiles and publishes your app without any manual code copying or moving files to the target machine. You first need to know the basics, so get started with C# fundamentals.

Learning the C# and Object Oriented Programming Style

With each job you have, you'll be asked to follow coding guidelines. Most guidelines are universal among development shops.
The standards make it easier for other programmers to maintain your C# code after you're finished. To learn basic C# coding style, take note of how the syntax is formatted and presented when you watch programming videos. For instance, camel case is common for variables. Camel case is a format where the first letter is lower case, and each word following the first variable word is upper case. Classes always have upper case for each word in the class definition. One common issue that most programmers face is understanding that C# (and any C-derived language, for that matter) is case sensitive. When you create a variable named "myvariable," this is an entirely different variable from "MyVariable" or "myVariable." If you get case sensitivity wrong, you wind up creating logic errors in your code. With Visual Studio, IntelliSense will prompt you for the correct variable syntax, which is one benefit of using C# and Visual Studio. Learn practical C# coding styles and object oriented syntax.

Understanding the .NET Framework

The hardest part about C# is learning the .NET framework. The .NET framework is a large collection of libraries provided by Microsoft when you code in the C# language. Just like a large library, you don't know where to find certain functions, classes and code you need to complete a project, so you have to look up these parts of your code. For instance, if you want to work with .NET's XML library, you have to find its namespace. A namespace is a group of C# .NET libraries that encompass a group of methods. You add these namespaces to the top of your code, so you can use the library functions. There is no way to find namespaces other than through Google or experience. Experienced C# coders will remember most namespaces and add them to their code. As a student or new C# coder, you'll probably have to use Google. You can also purchase books that give you an overview and reference for the main .NET library namespaces.

The .NET framework is huge, and you aren't expected to know it by heart, even when you code in C# for years. Most developers know that you'll need to look up namespaces during the day, but it helps to get your feet wet with the popular libraries.

Practice Makes Perfect

While learning C# is time consuming, the best way to learn more quickly and more efficiently is to practice. If you walk away from the language and stop practicing, you'll find yourself learning all of the basics all over again. Just like any human language, you also need to practice when perfecting a machine language. You can accomplish practicing using a number of methods. You can program small applications from ideas you come up with. As you program them, you come across certain hurdles that you solve, and this problem-solving helps you understand the language and how to fix certain bugs. Videos are also a great way to keep your language skills up-to-date. Videos can teach you anything from the basics to more advanced techniques. Videos are also a good option when you do walk away from the language for too long and need to brush up on the basics. Finally, walking through steps while learning (such as Udemy.com videos) and performing the syntax on your own will help you understand how to work with C# and its libraries. You can't just watch videos and learn a language. You need to practice. Install Visual Studio when you watch the videos and walk through the steps with the instructor. This is much more beneficial than just watching the videos. C# is a valuable language to learn, but it's also fun! C# also gives you an advantage when you want to learn other languages in the future such as C, C++ or Java. You can create a wide range of applications when you know how to code in C#, so you will have an invaluable asset on your resume during your job hunting.

Updated by Jennifer Marsh
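The engine-class design the article describes can be sketched in a few lines. Since the article notes that C# is extremely close to Java, the hypothetical sketch below is written in Java; the class and method names (Engine, moveForward, moveBackward) are invented for illustration, and the same shape carries over to C# almost token for token.

```java
// One Engine class, reused from multiple "windows"/views, as in the
// article's car example. All names here are made up for illustration.
public class Engine {

    private boolean running = false;

    public void turnOn()  { running = true; }
    public void turnOff() { running = false; }

    public String moveForward() {
        return running ? "car moves forward" : "engine is off";
    }

    public String moveBackward() {
        return running ? "car moves backward" : "engine is off";
    }

    public static void main(String[] args) {
        Engine engine = new Engine();    // create the class once...
        engine.turnOn();
        System.out.println(engine.moveForward());   // ...call it from a "forward" view
        System.out.println(engine.moveBackward());  // ...and again from a "reverse" view
    }
}
```

This is the reuse point the article makes: both views call methods on the same class instead of retyping the movement logic, as a linear program would.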
https://blog.udemy.com/learning-c-sharp/
CC-MAIN-2021-21
en
refinedweb
Callum Lerwick wrote: > On Sat, 2008-12-27 at 12:00 -0600, Eric Sandeen wrote: >> On a 256mb filesystem the journal will only be 32mb by default. Still a >> chunk of the fs, but not half! :) > > Hmmm, I think this has changed over the years, but it seems the recent > code looks like this: > > int ext2fs_default_journal_size(__u64 blocks) > { > if (blocks < 2048) > return -1; > if (blocks < 32768) > return (1024); > if (blocks < 256*1024) > return (4096); > if (blocks < 512*1024) > return (8192); > if (blocks < 1024*1024) > return (16384); > return 32768; > } > > It's based on block size. So on a 256mb filesystem, the block size > defaults to 1k, and you get an 8mb journal. You're right, I used 4k blocks in my napkin-sketch, oops :) -Eric
https://listman.redhat.com/archives/fedora-devel-list/2008-December/msg02865.html
CC-MAIN-2021-21
en
refinedweb
I have tried achieving a good delta time and fps counter over the last few days, read and watched a lot about it, but still can't seem to get it to work. Here is an example:

#include <iostream>
#include <SDL.h>
#include <SDL_image.h>

int main(int argc, char* argv[]) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        printf("error initializing SDL: %s\n", SDL_GetError());
    }
    SDL_Window* window = SDL_CreateWindow("Title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1000, 900, SDL_WINDOW_SHOWN);
    if (!window) {
        printf("error creating window: %s\n", SDL_GetError());
    }
    Uint32 renderFlags = SDL_RENDERER_ACCELERATED;
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, renderFlags);
    if (!renderer) {
        printf("error creating renderer");
        SDL_DestroyWindow(window);
        SDL_Quit();
    }
    SDL_Surface* surface = IMG_Load("dot.bmp");
    if (!surface) {
        printf("Error creating surface: %s\n", SDL_GetError());
        SDL_DestroyRenderer(renderer);
        SDL_DestroyWindow(window);
        SDL_Quit();
    }
    SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);
    if (!texture) {
        printf("error creating texture: %s\n", SDL_GetError());
        SDL_DestroyRenderer(renderer);
        SDL_DestroyWindow(window);
        SDL_Quit();
    }
    SDL_Rect dest;
    dest.x = 0;
    dest.y = 0;
    SDL_QueryTexture(texture, NULL, NULL, &dest.w, &dest.h);

    Uint64 NOW = SDL_GetPerformanceCounter();
    Uint64 LAST = 0;
    double deltaTime = 0;

    while (true) {
        LAST = NOW;
        NOW = SDL_GetPerformanceCounter();
        deltaTime = (double)((NOW - LAST) / (double)SDL_GetPerformanceFrequency());

        SDL_RenderClear(renderer);
        dest.x += 50 * deltaTime;
        dest.y += 50 * deltaTime;
        SDL_RenderCopy(renderer, texture, NULL, &dest);
        SDL_RenderPresent(renderer);

        std::cout << "Delta Time: " << deltaTime << std::endl;
        std::cout << "FPS: " << 60.0 - deltaTime << std::endl;
        SDL_Delay(1000.0f / (60.0 - deltaTime));
    }
    return 0;
}

I used the suggestion from this post: How to calculate delta time with SDL?
I print out "Delta Time" and "FPS" to the console and, while the deltaTime is slightly different each time, the FPS is a stable 60 (which is the same value that I use in SDL_Delay to calculate the delay in ms). But the test image is clearly not moving smoothly; it stutters and moves at inconsistent speed, and I can't understand why. Please help me understand what I am doing wrong. I just can't understand it even after looking through many examples.
https://proxies-free.com/tag/sdl/
CC-MAIN-2021-21
en
refinedweb
Area3DSeriesView Class Namespace: DevExpress.XtraCharts Assembly: DevExpress.XtraCharts.v19.1.dll Declaration public class Area3DSeriesView : Line3DSeriesView Public Class Area3DSeriesView Inherits Line3DSeriesView Remarks The Area3DSeriesView class provides functionality of a series view of the Area 3D type within a chart control. At the same time, the Area3DSeriesView class serves as a base for the StackedArea3DSeriesView and FullStackedArea3DSeriesView classes. In addition to the common view settings inherited from the base Line3DSeriesView class, the Area3DSeriesView class declares the Area 3D type specific settings, which allow you to define the depth of a slice that represents the 3D area series (via the Area3DSeriesView.AreaWidth property). Note that a particular view type can be defined for a series via its SeriesBase.View property. For more information on series views of the 3D Area type, please see the Area Chart topic. Examples The following example demonstrates how to create a ChartControl with a series of the Area3DSeries areaChart3D = new ChartControl(); // Add an area series to it. Series series1 = new Series("Series 1", ViewType.Area3D); // Add points to the series. series1.Points.Add(new SeriesPoint("A", 10)); series1.Points.Add(new SeriesPoint("B", 2)); series1.Points.Add(new SeriesPoint("C", 14)); series1.Points.Add(new SeriesPoint("D", 7)); // Add the series to the chart. areaChart3D.Series.Add(series1); // Customize the view-type-specific properties of the series. ((Area3DSeriesView)series1.View).AreaWidth = 5; // Access the diagram's options. ((XYDiagram3D)areaChart3D.Diagram).ZoomPercent = 110; // Add a title to the chart and hide the legend. ChartTitle chartTitle1 = new ChartTitle(); chartTitle1.Text = "3D Area Chart"; areaChart3D.Titles.Add(chartTitle1); areaChart3D.Legend.Visible = false; // Add the chart to the form. areaChart3D.Dock = DockStyle.Fill; this.Controls.Add(areaChart3D); }
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraCharts.Area3DSeriesView?v=19.1
CC-MAIN-2021-21
en
refinedweb
Provides the different DNA and RNA alphabet types. More... Provides the different DNA and RNA alphabet types. Nucleotide sequences are at the core of most bioinformatic data processing and while it is possible to represent them in a regular std::string, it makes sense to have specialised data structures in most cases. This sub-module offers multiple nucleotide alphabets that can be used with regular containers and ranges. Keep in mind, that while we think of "the nucleotide alphabet" as consisting of four bases, there are indeed more characters defined with different levels of ambiguity. Depending on your application it will make sense to preserve this ambiguity or to discard it to save space and/or optimise computations. SeqAn offers six distinct nucleotide alphabet types to accommodate for this. The specialised RNA alphabets are provided for convenience, however the DNA alphabets can handle being assigned a 'U' character, as well. See below for the details. Which alphabet to chose? As with all alphabets in SeqAn, none of the nucleotide alphabets can be directly converted to char or printed. You need to explicitly call seqan3::to_char to convert to char. The only exception is seqan3::debug_stream which does this conversion to char automatically. T and U are represented by the same rank and you cannot differentiate between them. The only difference between e.g. seqan3::dna4 and seqan3::rna4 is the output when calling to_char(). char. You need to explicitly call assign_charor use a literal (see below). std::views::transform. See our cookbook for an example. When assigning from char or converting from a larger nucleotide alphabet to a smaller one, loss of information can occur since obviously some bases are not available. 
When converting to seqan3::dna5 or seqan3::rna5, non-canonical bases (letters other than A, C, G, T, U) are converted to 'N' to preserve ambiguity at that position, while for seqan3::dna4 and seqan3::rna4 they are converted to the first of the possibilities they represent (because there is no letter 'N' to represent ambiguity). See the greyed out values in the table at the top for an overview of which conversions take place. char values that are none of the IUPAC symbols, e.g. 'P', are always converted to the equivalent of assigning 'N', i.e. they result in 'A' for seqan3::dna4 and seqan3::rna4, and in 'N' for the other alphabets. If the special char conversion of IUPAC characters to seqan3::dna4 is not your desired behavior, refer to our cookbook for an example of A custom dna4 alphabet that converts all unknown characters to A to change the conversion behavior. To avoid writing dna4{}.assign_char('C') every time, you may instead use the literal 'C'_dna4. All nucleotide types defined here have character literals and also string literals which return a vector of the respective type. The nucleotide submodule defines seqan3::nucleotide_alphabet which encompasses all the alphabets defined in the submodule and refines seqan3::alphabet. The only additional requirement is that their values can be complemented, see below. In the typical structure of DNA molecules (or double-stranded RNA), each nucleotide has a complement that it pairs with. To generate the complement value of a nucleotide letter, you can call an implementation of seqan3::nucleotide_alphabet::complement() on it. The only exception to this table is the seqan3::dna3bs alphabet. The complement for 'G' is defined as 'T' since 'C' and 'T' are treated as the same letters. However, it is not recommended to use the complement of seqan3::dna3bs but rather use the complement of another dna alphabet and afterwards transform it into seqan3::dna3bs. 
For the ambiguous letters, the complement is the (possibly also ambiguous) letter representing the variant of the individual complements.

Returns the complement of a nucleotide object nucl, e.g. 'C' for 'G'. This is a function object; invoke it with the parameter(s) specified above. It acts as a wrapper and looks for three possible implementations (in this order):

1. A static member function complement(your_type const a) of the class seqan3::custom::alphabet<your_type>.
2. A free function complement(your_type const a) in the namespace of your type (or as friend).
3. A member function called complement().

Functions are only considered for one of the above cases if they are marked noexcept (constexpr is not required, but recommended) and if the returned type is your_type. Every nucleotide alphabet type must provide one of the above. This is a customisation point (see Customisation). To specify the behaviour for your own alphabet type, simply provide one of the three functions specified above.
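The conversion and complement rules described above are easy to illustrate outside the library. The sketch below is NOT the SeqAn API — the helper names to_dna4_char, to_dna5_char and dna4_complement are invented for this illustration — it just encodes the documented rules: dna5 preserves ambiguity as 'N', dna4 collapses each IUPAC letter to the first base it can represent (and non-IUPAC characters behave like 'N'), and complement pairs A↔T and C↔G.

```cpp
#include <cassert>
#include <cctype>

// dna4 has no 'N': ambiguous IUPAC letters collapse to the first base they
// can represent, and any non-IUPAC character behaves like 'N' (i.e. 'A').
inline char to_dna4_char(char c) {
    switch (std::toupper(static_cast<unsigned char>(c))) {
        case 'A': case 'M': case 'R': case 'W':
        case 'D': case 'H': case 'V': case 'N': return 'A';
        case 'C': case 'S': case 'Y': case 'B': return 'C';
        case 'G': case 'K': return 'G';
        case 'T': case 'U': return 'T';   // T and U share one rank
        default:  return 'A';             // e.g. 'P' -> like 'N' -> 'A'
    }
}

// dna5 preserves ambiguity: every non-canonical letter becomes 'N'.
inline char to_dna5_char(char c) {
    switch (std::toupper(static_cast<unsigned char>(c))) {
        case 'A': return 'A';
        case 'C': return 'C';
        case 'G': return 'G';
        case 'T': case 'U': return 'T';
        default:  return 'N';
    }
}

// Complement for the four canonical bases (A<->T, C<->G).
inline char dna4_complement(char c) {
    switch (c) {
        case 'A': return 'T';
        case 'T': return 'A';
        case 'C': return 'G';
        case 'G': return 'C';
        default:  return c;
    }
}
```

In real SeqAn3 code you would instead assign via assign_char or a literal such as 'C'_dna4 and call seqan3::complement; the table logic above only mirrors the documented behaviour.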
https://docs.seqan.de/seqan/3-master-user/group__nucleotide.html
Executing second window causes program to get stuck

Hey guys,

Some of this is a generic C++ problem but some of it also seems to be Qt. Basically I have the MainWindow that is there by default and I created a second window with its own class called "ProcDialog". I want this window to pop up when a button is pressed in the file menu of my MainWindow, which I accomplished. This is how I have the window pop up:

#include "procdialog.h"
...
void MainWindow::on_actionStart_Calibration_triggered()
{
    ProcDialog CalibPD;
    CalibPD.setModal(true);
    CalibPD.exec();
    ProcDialog::fnStatusWindowUpdate();
}

As you can see, under the ProcDialog class I have a function called "fnStatusWindowUpdate". I have this function declared as static under the public section of the class. The problem is that I am trying to use this second window as a processing status window that reports progress as the program continues, but what keeps happening is that the program seems to get stuck on the line where the second window is created. I want the program to pop up the second window and then immediately run that function. But what happens is the window pops up with "CalibPD.exec();" and the line right after (the function) does NOT run until I close the second window that popped up. I am not entirely sure what happened, because this wasn't a problem before when I first created the window and I don't remember changing anything that would cause this behavior. The function itself seems to be OK. This is what it does (for now it just has testing filler; it will do more later):

void ProcDialog::fnStatusWindowUpdate()
{
    qDebug() << "Working";
}

Once the second window is closed, "Working" appears in the Application Output tab. So what I need to know (if anyone can figure this thing out) is how to have it so that after the window pops up the program just keeps running as normal.
I don't know why it gets stuck on "CalibPD.exec();" and requires the window to be closed to continue. Thank you for your time.

EDIT: I tried using "CalibPD.show();" instead, which does allow the program to continue, but then the second window appears entirely blank with nothing that I added appearing on it.

@oblivioncth said:

Hi. CalibPD.exec(); is blocking — it's an event loop, so it stays in there (it must). You can use show() to avoid this. BUT your code has a classic bug if you use show:

void MainWindow::on_actionStart_Calibration_triggered()
{
    ProcDialog CalibPD;
    CalibPD.setModal(true);
    CalibPD.show();
    ProcDialog::fnStatusWindowUpdate();
} // <<< here CalibPD is deleted

So to use show() you must new it:

void MainWindow::on_actionStart_Calibration_triggered()
{
    ProcDialog *CalibPD = new ProcDialog(this);
    CalibPD->show();
    ProcDialog::fnStatusWindowUpdate();
}

Make sure to delete it again; you can set CalibPD->setAttribute( Qt::WA_DeleteOnClose, true ); to make Qt delete it when closed.

Works flawlessly. I had seen on many other posts people talking about making sure objects aren't destroyed before you use them again, but I didn't have the experience here to see that was the problem. You're a life saver. Thanks again mrjj.

@oblivioncth You are most welcome. And no worries, you will do the show() trick from time to time. I do :)
The issue is that the contents of the Proc Dialog pop-up window do stick around when ProcDialog::fnStatusWindowUpdate(); is called, but are destroyed (the window goes blank) once AR_Main(); gets called. If I comment AR_Main(); out the window contents are NOT destroyed so it seems to be a particular issue with calling AR_Main(); and not just the function MainWindow::on_actionStart_Calibration_triggered() ending. @oblivioncth said: hi, np AR_Main(); so what does this function do ? There should be no reason that CalibPD should be affected. (the window goes blank) It sounds to me that perhaps you make endless loop ? in AR_Main(); It is a giant function that does a lot of image processing with OpenCV, so its hard to summarize. I did figure out of more things though that might help. So when the program hits the line: CalibPD->show(); The window pops up and is blank. I determined this by setting a break point just before it. When I say blank I just mean its a window with the header bar at the top but the body is just white with none of the window objects I setup. Therefore, the window is still obviously blank while AR_Main is running. I tried stepping through the program to determine when the contents of the window actually appear but stepping past everything only got me into the dissembler and then finally to: int main(int argc, char *argv[]) { QApplication a(argc, argv); MainWindow w; w.show(); return a.exec(); <- Won't go past this } and wont let me step past "return a.exec();". The window contents only show up when I click "continue" in the debugger so I am not sure what is actually causing them to show up. Maybe it is when the program idles some kind of update routine is called? I do remember reading under the documentation for "show" under QDialog that using that method would require some kind of update function to be called now and then. Could that have something to do with it? 
Basically it seems that the window objects show up sometime after the program idles and I need them to basically appear and be intereactable by the time that "CalibPD->show();" is called. If it helps I could perhaps make a quick video demonstrating the issue. @oblivioncth said: return a.exec(); This is the application main loop or event loop. It runs until application is quit. Think of it as Qt main function allowing signals and events to work. if CalibPD->show(); does display as expected if you dont call AR_Main(); i assume it loops / use lots of cpu. And since its called sort while CalibPD is still displaying then i assume CalibPD never really got the needed time to fully "paint" try to insert qApp->processEvents(); Right after CalibPD->show(); Just to clarify, I was wrong earlier. I thought that it worked without AR_Main being called but really what was happening is that without AR_Main the program would run through and the contents would eventually be painted. The reason it wasn't working with AR_Main is that there is a line in AR_Main that freezes the program (it was a press enter to continue prompt, I am porting a console app over to this GUI). So when the AR_Main was called the program never got to a point where Qt would paint the window, but without calling it the window would be painted due to the wait for enter prompt never happening. So the issue was actually there all along. Regardless, adding that command did allow the screen to be painted even while AR_Main was running. I may have to use that command every time I want to update the text edit box i have on that window, but that is ok. I am going to leave this open for the moment because I have really in my gut I might run into another small issue or two while manipulating this window and I want to have it fully working before I close the thread again haha. I only have a couple more things to add so it shouldn't take too long to find out. You did solve my immediate problem so thanks again. 
I think that event update command is what I was referring to in my last post. @oblivioncth Hi ok, so it was not a heavy loop but an input. :) When you go from console to event based GUI , its important NOT to make large loops/massive calculations as then the whole GUI will appear frozen. ( a.exec(); is never allowed to run) You can indeed often fix this with qApp->processEvents(); to allow Qt to function but its not a fix all issues. If the AR_Main will du heavy stuff, you might need to put in its own thread to keep your mainwindow/dialogs responsive. @oblivioncth well, Just keep in mind that if u make huge loops, the main window and dialogs will stop looking alive :) I have definitely noticed that while debugging haha. Ok so while you definitely solved the issue of the window freezing, the reason I had some of these function calls in the first place were part of a larger problem. I currently have these two windows, and each of them is in their on class like I previously described. However, the issue is that manipulating these windows usually uses the ui->UIMEMBER->UIFUNCTION method, but ui's scope is only within the .cpp file of each window respectively. ui for mainwindow is different than ui in ProcDialog (obviously). I need to be able to use these functions on these members from .cpp files other than ProcDialog.cpp and MainWindow.cpp. The reason for a lot of what you just helped me with is that I basically need to run: ui->pteCalibProcWindow->appendPlainText(qsMesg); //Where qsMesg is a QString while within AR_Main.cpp. Obviously AR_Main.cpp doesn't know what "ui" is so the way I attacked this problem was with what you were helping me with. The approach was what I am used to doing for situations like this, where you prototype a function in a header file and then include that header in the other source file where you want to use that function. 
So what I did was make the function: void ProcDialog::fnStatusWindowUpdate(QString qsMesg) { ui->PshBtnCalibFinish->setEnabled(true); } within ProcDialog and I prototyped like so: public: ... void fnStatusWindowUpdate(QString qsMesg); In other projects I could then just call the function in any .cpp that I included "procdialog.h" in but since the function is within I class I thought that just calling it via "ProcDialog::fnStatusWindowUpdate()" would work. But while I figured that should do the trick (I've made functions in other source files available this way in C before when not dealing with classes or the -> method) I run into a paradoxical problem. Just doing the above, the compiler throws a "illegal call of a non-static member function" when I try to call fnStatusWindowUpdate() in AR_Main. But if I make the function static, that error goes away but then the compiler says: error: C2227: left of '->PshBtnCalibFinish' must point to class/struct/union/generic type error: C2227: left of '->setEnabled' must point to class/struct/union/generic type While I don't fully understand the -> architecture it is clear that making a function static interferes with using the -> reference. So it seems like the way I approached the issue won't work. Is there any simple way to manipulate my QPlainTextEdit that is on my ProcDialog window from AR_main.cpp? I know that this is something that is not Qt specific, but I have never had to do any C++ codding that require this complex of a program structure until using Qt. hi - "ProcDialog::fnStatusWindowUpdate()" would work. To call a function like that requires its to be a static function and its not normally needed unless for certain use cases. So that is what "illegal call of a non-static member function" comes from. To call a function defined in a class, you need an instance of that class Myclass a; << an instance. a.print(); or Myclass *a=new Myclass; a->print(); notice that non new syntax uses "." 
and if pointer we use "->" .print() vs ->print() - Is there any simple way to manipulate my QPlainTextEdit that is on my ProcDialog window from AR_main.cpp? Is there anything Qt in AR_main.cpp? Normally you would use slot and signals for such task but if all of AR_main.cpp is "foreign" then you cannot easy send signals. You could give AR_main a pointer to ur dialog AR_main( dialogpointer ) and AR_main could call public functions in this instance. Not pretty but i fear it will not be your biggest issue :) There is something magic about forums. I swore I tried something just like what you were saying before and it didn't work lol, but now it did!. I must have had a thing or two wrong. I first just tried to create an arbitrary member of ProcDialog and use that to call the function. It would run but the QPlainTextEdit woudn't show anything. So then I tried chaning AR_Main to be: int AR_Main(ProcDialog *TEST) and when I called it from mainwindow.cpp I did: AR_Main(CalibPD); Then finally, in AR_Main I wrote: TEST->fnStatusWindowUpdate("Sample string"); and it did exactly what I wanted. Thank you for being patient with me and dealing with the fact I have some holes in my knowledge for C++. I can't say this will happen here but I started off just as leachy on another form, but eventually became a well known moderator lol. So trust me, I am learning this stuff as people help me with me it. It just takes some time :). I try to make up for the fact I keep asking questions by at least making my posts clear and well formatted, and so that they don't just sound like "Helps givez me codes". Cheers. @oblivioncth said: Ok, its shaping up , super :) "Helps givez me codes". We do help those too but good questions often get better answers. Also its clear that you do try to fix it first yourself, so some holes in c++ is no issue. Cheers
https://forum.qt.io/topic/65552/executing-second-window-causes-program-to-get-stuck
CC-MAIN-2021-21
en
refinedweb
Streaming with Rails 4 Free JavaScript Book! Write powerful, clean and maintainable JavaScript. RRP $11.95 What is Streaming ? Streaming has been around in Rails since 3.2, but it has been limited to template streaming. Rails 4 comes with much more mature, real time streaming fuctionality. Essentially, this means Rails is now able to handle I/O objects natively and send data to the client in real-time. Streaming and Live are two different modules written under ActionController. Streaming comes included by default, whereas Live needs to be defined explicitly in the controller subclass. The main streaming api uses the Fiber class of Ruby (available in 1.9.2+). Fiber provides the building blocks of thread-like concurrency in ruby. Fiber invoked threads can be paused and resumed at will by the programmer, rather than being inherently preemptive. Template Streaming Streaming inverts the regular Rails process of rendering the layout and template. By default, Rails renders template first and then the layout. The first method it runs is yield, and loads up the template. Then, the assets and layouts are rendered. Consider a query-intensive view, such as a system-wide timeline of multiple classes, like so: class TimelineController def index @users = User.all @tickets = Ticket.all @attachments = Attachment.all end end In this case, streaming seems to be a good fit. In a typical Rails scenario, this page takes longer to load than usual because it has to retrieve all the data first. Let’s add streaming: class TimelineController def index @users = User.all @tickets = Ticket.all @attachments = Attachment.all render stream: true end end The Rails method render stream: true will lazily load the queries and allow them to run after the assets and layout have been rendered. Streaming only works with templates and not any other forms (such as json or xml). This adds a clever technique to make the application prioritize templates based on the type of page and content. 
Passing the Stuff in Between Streaming changes the method of rendering the template and layout. This brings forth a logical issue: Templates making use of instance variables. Since the database calls have not happened when the templates are rendered, references to instance variables will fail. Hence, in order to load attributes like title or meta we need to use content_for instead of the regular yield method. yield, however, still works for the body. Previously, our method looked like this: <%= yield :title %> It will now look like this : <%= content_for :title, "My Awesome Title" %> Going Live with the Live API Live is a special module included in ActionController class. It enables Rails to open and close a stream explicitly. Let’s create a simple app and see how to include this and access it from the outside. We are looking at concepts of live streaming and concurrency, and WEBrick does not play well with such things. We will, as a result, use Puma for handling the concurrency and threads in our app. Add Puma to the Gemfile and bundle. gem "puma" :~/testapp$ bundle install Puma integrates well with Rails, so as soon as you run `rails s` (server restart required if you are already running it) Puma boots up on the same port number as WEBrick. :~/testapp$ rails s => Booting Puma => Rails 4.0.0 application starting in development on => Run `rails server -h` for more startup options => Ctrl-C to shutdown server Puma 2.3.0 starting... * Min threads: 0, max threads: 16 * Environment: development * Listening on tcp://0.0.0.0:3000 Let’s quickly generate a controller for sending out messages. :~/testapp$ rails g controller messaging Also add the basic method to stream out messages to standard out. 
class MessagingController < ApplicationController
  include ActionController::Live

  def send_message
    response.headers['Content-Type'] = 'text/event-stream'
    10.times {
      response.stream.write "This is a test Message\n"
      sleep 1
    }
    response.stream.close
  end
end

and a route in routes.rb

get 'messaging' => 'messaging#send_message'

We can access this method via curl as follows:

~/testapp$ curl -i
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-UA-Compatible: chrome=1
Content-Type: text/event-stream
Cache-Control: no-cache
Set-Cookie: request_method=GET; path=/
X-Request-Id: 68c6b7c7-4f5f-46cc-9923-95778033eee7
X-Runtime: 0.846080
Transfer-Encoding: chunked

This is a test message
This is a test message
This is a test message
This is a test message

When we make a call on the method send_message, Puma initiates a new thread and handles the data streaming for a single client in this thread. The default Puma configuration allows 16 concurrent threads, which means 16 clients. Of course, this can be increased, but not without some memory overhead.

Let's build a form and see if we can send some data to our view:

def index
end

def send_message
  response.headers['Content-Type'] = 'text/event-stream'
  10.times {
    response.stream.write "#{params[:message]}\n"
    sleep 1
  }
  response.stream.close
end

Create a form to send the data to the stream.

<%= form_tag messaging_path, :method => 'get' do %>
  <%= text_field_tag :message, params[:message] %>
  <%= submit_tag "Post Message" %>
<% end %>

And a route to make the call.

root 'messaging#index'
get 'messaging' => 'messaging#send_message', :as => 'messaging'

As soon as you type the message and press "Post Message", the browser receives the stream response as a downloadable text file which contains the message logged 10 times. Here, however, the stream does not know where to send the data or in what format. Thus, it writes to a text file on the server.
You can also check it by sending the params via curl. :~: 382bbf75-7d32-47c4-a767-576ec59cc364 X-Runtime: 0.055470 Transfer-Encoding: chunked awesome awesome Server Side Events (SSEs) HTML5 introduced a method called server side events (SSEs). SSE is a method available in the browser, that recognizes and fires an event whenever the server sends the data. You can use SSE in conjunction with the Live API to achieve full-duplex communication. By default Rails provides a one-way communication process by writing the stream to the client when data is available. However, if we can add SSEs, we can enable events and responses, thus making it two-way. A simple SSE looks like the following : require 'json' module ServerSide class SSE def initialize io @io = io end def write object, options = {} options.each do |k,v| @io.write "#{k}: #{v}n" end @io.write "data: #{object}nn" end def close @io.close end end end This module assigns the I/O stream object to a hash and converts it into a key-value pair so that it is easy to read, store, and send it back in JSON format. You can now wrap your stream object inside the SSE class. First, include your SSE module inside your controller. Now, the opening and closing of connections are handled by the SSE module. Also, if not terminated explicitly, the loop will go on infinitely and connection will be open forever, so we add the ensure clause. 
require 'server_side/sse'

class MessagingController < ApplicationController
  include ActionController::Live

  def index
  end

  def stream
    response.headers['Content-Type'] = 'text/event-stream'
    sse = ServerSide::SSE.new(response.stream)
    begin
      loop do
        sse.write({ :message => "#{params[:message]}" })
        sleep 1
      end
    rescue IOError
    ensure
      sse.close
    end
  end
end

This is what the response looks like:

:~: b922a2eb-9358-429b-b1bb-015421ab8526
X-Runtime: 0.067414
Transfer-Encoding: chunked

data: {:message=>"awesome"}

data: {:message=>"awesome"}

Gotchas

There are a few gotchas (there always are…)

- All the streams have to be closed, else they will be open forever.
- You will have to make sure your code is threadsafe, as the controller always spawns a new thread when the method is called.
- After the first commit of the response, the headers cannot be rewritten.

Conclusion

This is a feature many people are looking forward to, because it will significantly improve the performance of Rails apps (template streaming) and also poses strong competition to Node.js (Live). There are folks already benchmarking the differences, but I feel it's just the beginning and it will take time (read: further releases) for the feature to mature. For now, it's a good start and it's exciting to get these features in Rails.
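Because the SSE wrapper only formats what it writes to an IO object, its wire format can be checked in isolation without a running Rails server. The sketch below reuses the same ServerSide::SSE shape shown in the article but writes to a StringIO instead of a live response.stream, so you can see the `key: value` header lines and the mandatory `data:` line terminated by a blank line.

```ruby
require 'stringio'

# Same shape as the ServerSide::SSE class above, writing to any IO object.
module ServerSide
  class SSE
    def initialize(io)
      @io = io
    end

    # Emits optional "key: value" lines (e.g. event:, id:), then the
    # "data: ..." line followed by the blank line that ends the event.
    def write(object, options = {})
      options.each { |k, v| @io.write "#{k}: #{v}\n" }
      @io.write "data: #{object}\n\n"
    end

    def close
      @io.close
    end
  end
end

io = StringIO.new
sse = ServerSide::SSE.new(io)
sse.write({ message: 'awesome' }, event: 'chat')
```

After the call, io.string holds one complete event: an "event: chat" line, a "data: ..." line carrying the hash, and a trailing blank line — exactly what an EventSource in the browser expects.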
https://www.sitepoint.com/streaming-with-rails-4/
*Django authentication system* 1、 Django default user authentication system Django authentication provides a user authentication system with two functions of authentication and authorization: the storage table auth uses_ user For reference, click here Django user authentication system deals with user accounts, groups, permissions and cookie based user sessions. - Django authentication system handles both authentication and authorization - Authentication: to verify whether a user can be used for account login. - Authorization: authorization determines what an authenticated user is allowed to do. - Django authentication system includes the following contents: Main modules: from django.contrib import auth //Contains the core of the authentication framework and its default modelfrom django.contrib import contenttypes //Is the Django content type system, which allows permissions to be associated with models you create. Module details: from django.contrib import auth //Make sure that each of your Django models is created with four default permissions: add, modify, delete, and viewfrom django.contrib.auth.models import User //User objectfrom django.contrib.auth import authenticate //Authenticate authenticate userfrom django.contrib.auth.models import Group //A general method of classifying users Permission Operation of user object: django.contrib.auth . models.User ”Object has two many to many fields: groupsand user_permissions It can be done through user_permissionsProperty to assign permissions to user, or through permissionsProperty assigned to group Operation: myuser.groups.set([group_list]) myuser.groups.add(group, group, ...) myuser.groups.remove(group, group, ...) myuser.groups.clear() myuser.user_permissions.set([permission_list]) myuser.user_permissions.add(permission, permission, ...) myuser.user_permissions.remove(permission, permission, ...) 
myuser.user_permissions.clear() Let’s take a look at the basics 2、 Implementation of user authentication system Several methods are mainly used - create_ User create user - Authenticate authentication login - Login remembers the login status of the user - Logout log out - Is_ Authenticated determines whether the user is logged in login_ Required decorator to determine whether the user is logged in 1. Create user: The main attributes of the user object are username, password, email, first_ name, last_ name 1.1 common create user from django.contrib.auth.models import User user = User.objects.create_user('yym', '[email protected]', 'yympassword') ~Create super user directive Python manage.py createsuperuser –username=yym –email=[email protected] ~ 1.2 change password from django.contrib.auth.models import User u = User.objects.get(username='yym') u.set_password('new password') u.save() ~Change password instruction Python manage.py changepasswordusername ~ 2. Verify users use authenticate()To authenticate the user. It uses usernameand passwordAs a parameter, each authentication backend is checked. If the back-end validation is valid, a user object is returned. If the PermissionDeniedError, will return None from django.contrib.auth import authenticate user = authenticate(username='john', password='secret') if user is not None: # A backend authenticated the credentials else: # No backend authenticated the credentials 3. 
Authority When installed_ Apps set django.contrib.auth It will ensure that each of your Django models is created with four default permissions: add, modify, delete, and view 3.1 create permissions from car.models import UseCar from django.contrib.auth.models import Permission from django.contrib.contenttypes.models import ContentType #Create the rights to issue orders for the vehicle model content_type = ContentType.objects.get_for_model(UseCar) permission = Permission.objects.create( codename='can_publish', name='Can Publish Posts', content_type=content_type, ) 3.2 permission cache The first time you need to get a permission check on a user object, ModelBackendWill cache their permissions from django.contrib.auth.models import Permission, User from django.contrib.contenttypes.models import ContentType from django.shortcuts import get_object_or_404 from car.models import UseCar def user_gains_perms(request, user_id): user = get_object_or_404(User, pk=user_id) # any permission check will cache the current set of permissions user.has_perm('car.change_usecar') content_type = ContentType.objects.get_for_model(UseCar) permission = Permission.objects.get( codename='change_usecar', content_type=content_type, ) user.user_permissions.add(permission) # Checking the cached permission set user.has_perm('car.change_usecar') # False # Request new instance of User # Be aware that user.refresh_from_db() won't clear the cache. user = get_object_or_404(User, pk=user_id) # Permission cache is repopulated from the database user.has_perm('car.change_usecar') # True ... 4. Authentication of web request 4.1 verification of web requests Django uses sessions and middleware to hook the authentication system to the request object They are provided in every request request.userProperty. If no user is currently logged in, this property will be set to AnonymousUserOtherwise, it will be set to Userexample. 
use userProperties of is_authenticatedTo distinguish whether a user has been authenticated use userProperties of is_anonymousTo distinguish between user and anonymoususer objects if request.user.is_authenticated: user ... else: pass 4.2 user login If you want to attach the authenticated user to the current session, you will pass the login()Function completion from django.contrib.auth import authenticate, login def my_view(request): username = request.POST['username'] password = request.POST['password'] user = authenticate(request, username=username, password=password) //verificationif user is not None: login(request, user) //Sign in# Redirect to a success page. ... else: # Return an 'invalid login' error message. ... 4.3 user logout Delete the authenticated user from the current session logout()Function completion from django.contrib.auth import logout def logout_view(request): logout(request) The following problems are often encountered in actual project development (these problems will be discussed in detail later) - Since the default user can’t meet our needs in actual development, we usually inherit the user table to expand and pay attention to the password plaintext when creating users after the expansion. - In reality, due to some shortcomings of session, token based authentication mechanism will be used in general projects. - As for the server, it must store the sessions of all online users, which takes up a lot of resources (CPU, memory), and seriously affects the performance of the server - The server is extended to cluster, but there is also the problem of distributed session - Django’s built-in permission mechanism cannot meet the needs of the project, which will expand the permission setting Schematic diagram of session mechanism: Schematic diagram of token mechanism: This work adoptsCC agreementThe author and the link to this article must be indicated in the reprint
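The token mechanism mentioned above can be sketched with nothing but the Python standard library: the server signs a payload with a secret key and later verifies the signature, instead of looking anything up in a server-side session store. This is a simplified illustration, not Django's actual signing API — the names make_token/verify_token and the SECRET_KEY value are invented here; in a real project you would use django.core.signing or a JWT library.

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"change-me"  # hypothetical secret; per-project in practice


def make_token(payload: dict) -> str:
    """Serialise the payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + sig).decode()


def verify_token(token: str):
    """Return the payload if the signature matches, else None."""
    body, _, sig = token.encode().rpartition(b".")
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered, or signed with a different key
    return json.loads(base64.urlsafe_b64decode(body))


token = make_token({"user": "yym"})
```

The server keeps no per-user state: any node holding SECRET_KEY can verify the token, which is exactly why this approach sidesteps the distributed-session problem noted above.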
https://developpaper.com/basic-process-principle-of-python-django-02-authentication-system/
RadarAxisYLabelItem Class Represents an individual axis label item of the Y-axis in the Radar Series Views. Namespace: DevExpress.XtraCharts Assembly: DevExpress.XtraCharts.v19.1.dll Declaration public class RadarAxisYLabelItem : AxisLabelItemBase, IAxisLabelLayout Public Class RadarAxisYLabelItem Inherits AxisLabelItemBase Implements IAxisLabelLayout Remarks The RadarAxisYLabelItem class contains settings that define the functionality of an individual axis label item for the Y-axis of radar series view types. Axis label items are access the properties of the associated axis itself. For more information on axis label items, refer to Axis Labels.
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraCharts.RadarAxisYLabelItem?v=19.1
CC-MAIN-2021-21
en
refinedweb
There are two different methods of transmitting USB data between your microcontroller board and your PC. They are called "USB Stacks". They are layers of code that handle all of the protocols for transmitting data whether you are using it to upload your program to the board, receiving data via the serial monitor or serial plotter, or talking back to your computer emulating a mouse, keyboard or other device. Traditional Arduino 8-bit boards all use the original Arduino stack. Newer boards such as M0 and M4 based on the SAMD21 and SAMD51 have the option of using either the Arduino stack or a different version called the TinyUSB Stack. Still other boards such as the nRF52840 based boards use only TinyUSB and it is likely that upcoming boards such as the ESP32-S2 will continue to use only TinyUSB. This is primarily because TinyUSB is the underlying architecture for implementing CircuitPython on these boards. If you are using an M0 or M4 board you select which stack you want to use in the Tools menu of the Arduino IDE as shown below. The image shows the tools menu of the Arduino IDE and we have selected an Adafruit Feather M0 Express board. Here you have a choice between using the Arduino stack or the TinyUSB stack. However in the image below, we have configured for an Adafruit Feather nRF52840 Express. As you can see there is no "USB Stack" option. What you cannot see is that this particular board only uses TinyUSB. If you try to #include it will not find the proper library because is not supported under TinyUSB. For the M0 and M4 boards you can simply choose to select the Arduino stack and there's no problem. However the TinyUSB stack also has many other features that might be useful to you. Among them are the ability to use WebUSB and to use the onboard flash chip of your board as a mass storage device. This essentially turns your Feather board into a flash drive where you can drag-and-drop files. We will not be covering those capabilities in this tutorial. 
Of course if you're using an nRF52840-based board you don't have a choice: you have to use TinyUSB.

As mentioned previously, the traditional way to control the mouse or keyboard uses the following include files:

#include <HID.h>
#include <Mouse.h>
#include <Keyboard.h>

You should erase those lines and replace them with:

#include <TinyUSB_Mouse_and_Keyboard.h>

This will automatically detect whether you are using the Arduino Stack or the TinyUSB Stack. If you are using the original Arduino Stack it will simply do the necessary include files for you. And if you're using the TinyUSB Stack it will instead use its own code that works exactly like the originals. Note that there is no way to separate Mouse and Keyboard inclusion in our system. It was much easier to implement both at once rather than implementing them separately because of the way TinyUSB implements its HID functions. Theoretically, when you compile your code, if you did not make any reference to Keyboard and only to Mouse, the linking loader will eliminate the Keyboard code. And vice versa if you use only Keyboard and not Mouse. Combining these into a single library saved us a lot of headaches.

If you have existing code that uses Mouse.h or Keyboard.h or both, you should make the changes noted above and give it a try. If you are using an M0 or M4 based board, you will have to set Tools->USB Stack to "TinyUSB". In fact, try switching back and forth between the two stacks and recompiling. You should see the same results using either stack. If you are using the nRF52840 processor, you do not need to select the TinyUSB option.

While developing and testing this library, we discovered that occasionally it makes a difference when you call the Mouse.begin() or Keyboard.begin() methods relative to the Serial.begin(…) method. Sometimes your computer would get confused as to how your USB was operating. Was it a mouse?
Was it a keyboard? Was it a serial device? We had inconsistent results. Our best results came if you did your Mouse.begin() and/or Keyboard.begin() before doing Serial.begin(…). No such restriction is necessary when using the BLE52 version of the library. It only affects the TinyUSB version. In the next section, we will describe the BLE52 library followed by a series of three demonstration examples.
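Putting this together, a minimal sketch might look like the following. This is an illustrative example, not code from the tutorial; it assumes the TinyUSB_Mouse_and_Keyboard library is installed and a compatible board and stack are selected:

```cpp
#include <TinyUSB_Mouse_and_Keyboard.h>

void setup() {
  // Call the HID begin() methods before Serial.begin() so the host
  // enumerates the device consistently (see the note above).
  Mouse.begin();
  Keyboard.begin();
  Serial.begin(115200);
}

void loop() {
  // Nudge the pointer and type a character once per second.
  Mouse.move(5, 0);
  Keyboard.write('a');
  delay(1000);
}
```

On an M0 or M4 board the same sketch should also build with the Arduino stack selected, since the header falls back to the standard Mouse.h and Keyboard.h includes.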
https://learn.adafruit.com/mouse-and-keyboard-control-using-tinyusb-and-ble/tinyusb-mouse-and-keyboard-usage
CC-MAIN-2021-21
en
refinedweb
imports not properly cached under pypy3

TLDR: originally this started out as an issue about strptime, but it turns out code like the following is the cause:

    import time

    def smuggler():
        """Secretly imports things"""
        import datetime

    NUM = 1000 * 1000
    t0 = time.time()
    for _ in range(NUM):
        smuggler()
    t1 = time.time()
    print("Smuggled {} modules in {}s".format(NUM, t1 - t0))

Running this gives us:

    $ pypy importer.py
    Smuggled 1000000 modules in 0.0068838596344s
    $ pypy3 importer.py
    Smuggled 1000000 modules in 10.808286428451538s

Original strptime benchmark:

    from datetime import datetime
    from random import randint
    from time import time

    NUMBER = 1000 * 1000

    def randdate():
        y = str(randint(1900, 2100))
        m = str(randint(1, 12)).zfill(2)
        d = str(randint(1, 28)).zfill(2)
        return "{}-{}-{}T00:00:00".format(y, m, d)

    print("Building dates")
    dates = [randdate() for _ in range(NUMBER)]
    print("Done - proceeding to parse")
    while True:
        t0 = time()
        for d in dates:
            datetime.strptime(d, "%Y-%m-%dT%H:%M:%S")
        t1 = time()
        print("Took {}s ({}/s)".format(t1 - t0, int(NUMBER / (t1 - t0))))

On my system it produces the following eventual numbers:

    pypy3.7-7.3.4  Took 12.74s (78474/s)
    pypy2.7-7.3.3  Took 1.60s (623199/s)
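The expectation behind the pypy2 number - that a repeated function-local import is little more than a lookup in the interpreter-wide module cache - can be demonstrated directly. This sketch illustrates the caching mechanism; it is not code from the ticket:

```python
import sys

def smuggler():
    """Secretly imports things."""
    import datetime  # after the first call, this resolves via sys.modules

# The first call populates the interpreter-wide module cache...
smuggler()
cached = sys.modules["datetime"]

# ...so a later import statement just binds the already-cached module
# object, which is why a per-call import is expected to be cheap.
import datetime
assert datetime is cached
```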
https://foss.heptapod.net/pypy/pypy/-/issues/3431
CC-MAIN-2021-21
en
refinedweb
Unit testing is a technique that completely replaces debugging. If debugging is required, the design is bad. Let's say I'm a bad imperative procedural programmer, and this is my Java code:

class FileUtils {
  public static Iterable<String> readWords(File f) throws IOException {
    String text = new String(
      Files.readAllBytes(f.toPath()),
      StandardCharsets.UTF_8
    );
    Set<String> words = new HashSet<>();
    for (String word : text.split(" ")) {
      words.add(word);
    }
    return words;
  }
}

This static utility method reads file content and then finds all the unique words in it. Pretty simple. However, if it doesn't work, what do we do? Let's say this is the file:

We know what we are, but know not what we may be.

From it, we get this list of words:

"We"
"know"
"what"
"we"
"are,\n"
"but"
"not"
"may"
"be\n"

Now that doesn't look right to me … so what is the next step? Either the file reading doesn't work correctly or the split is broken. Let's debug, right? Let's give it a file through an input and go step by step, tracing and watching the variables. We'll find the bug and fix it. But when a similar problem shows up, we'll have to debug again!

And that's what unit testing is supposed to prevent. We're supposed to create a unit test once, in which the problem is reproduced. Then we fix the problem and make sure the test passes. That's how we save our investments in problem solving. We won't fix it again, because it won't happen again. Our test will prevent it from happening.

However, all this will work only if it's easy to create a unit test. If it's difficult, I'll be too lazy to do it. I will just debug and fix the problem. In this particular example, creating a test is a rather expensive procedure. What I mean is the complexity of the unit test will be rather high. We have to create a temporary file, fill it with data, run the method, and check the results. To find out what's going on and where the bug is, I'll have to create a number of tests.
To avoid code duplication, I'll also have to create some supplementary utilities to help me create that temporary file and fill it with data. That's a lot of work. Well, maybe not "a lot," but way more than a few minutes of debugging. Thus, if you perceive debugging to be faster and easier, think about the quality of your code. I bet it has a lot of opportunities for refactoring, just like the code from the example above.

Here is how I would modify it. First of all, I would turn it into a class, because utility static methods are a bad practice:

class Words implements Iterable<String> {
  private final File file;
  Words(File src) {
    this.file = src;
  }
  @Override
  public Iterator<String> iterator() {
    String text;
    try {
      text = new String(
        Files.readAllBytes(this.file.toPath()),
        StandardCharsets.UTF_8
      );
    } catch (IOException ex) {
      throw new UncheckedIOException(ex);
    }
    Set<String> words = new HashSet<>();
    for (String word : text.split(" ")) {
      words.add(word);
    }
    return words.iterator();
  }
}

It looks better already, but the complexity is still there. Next, I would break it down into smaller classes:

class Text {
  private final File file;
  Text(File src) {
    this.file = src;
  }
  @Override
  public String toString() {
    try {
      return new String(
        Files.readAllBytes(this.file.toPath()),
        StandardCharsets.UTF_8
      );
    } catch (IOException ex) {
      throw new UncheckedIOException(ex);
    }
  }
}

class Words implements Iterable<String> {
  private final String text;
  Words(String txt) {
    this.text = txt;
  }
  @Override
  public Iterator<String> iterator() {
    Set<String> words = new HashSet<>();
    for (String word : this.text.split(" ")) {
      words.add(word);
    }
    return words.iterator();
  }
}

What do you think now? Writing a test for the Words class is a pretty trivial task:

import org.junit.Test;
import static org.hamcrest.MatcherAssert.*;
import static org.hamcrest.Matchers.*;

public class WordsTest {
  @Test
  public void parsesSimpleText() {
    assertThat(
      new Words("How are you?"),
      hasItems("How", "are", "you")
    );
  }
}

How much time did that take? Less than a minute. We don't need to create a temporary file and load it with data, because class Words doesn't do anything with files.
It just parses the incoming string and finds the unique words in it. Now it's easy to fix, since the test is small and we can easily create more tests; for example:

import org.junit.Test;
import static org.hamcrest.MatcherAssert.*;
import static org.hamcrest.Matchers.*;

public class WordsTest {
  @Test
  public void parsesSimpleText() {
    assertThat(
      new Words("How are you?"),
      hasItems("How", "are", "you")
    );
  }
  @Test
  public void parsesMultipleLines() {
    assertThat(
      new Words("first line\nsecond line\n"),
      hasItems("first", "second", "line")
    );
  }
}

My point is that debugging is necessary when the amount of time to write a unit test is significantly more than the time it takes to click those Trace-In/Trace-Out buttons. And it's logical. We all are lazy and want fast and easy solutions. But debugging burns time and wastes energy. It helps us find problems but doesn't help prevent them from reappearing.

Debugging is needed when our code is procedural and algorithmic, when the code is all about how the goal should be achieved instead of what the goal is. See the examples above again. The first static method is all about how we read the file, parse it, and find words. It's even named readWords() (a verb). To the contrary, the second example is about what will be achieved. It's either the Text of the file or Words of the text (both are nouns). I believe there is no place for debugging in clean object-oriented programming. Only unit testing!
https://www.yegor256.com/2016/02/09/are-you-still-debugging.html
CC-MAIN-2018-39
en
refinedweb
Description

nxs provides built-in handling for list and dictionary types. Other types of structures can be registered by adding command prefixes to $set and $get that provide the same semantics as the builtin types.

I struggled for a while to find some way to pass the extracted values back up through the recursive calls to nxs, and eventually came up with this strategy:

    catch {uplevel $nsetlevel [list ::tailcall lindex $struct]}

This takes advantage of a quirk of the implementation of tailcall, described on the uplevel page, where tailcall doesn't itself tear down the current level and replace it, but just calls return in the current level after arranging for it to be replaced upon return. It's combined with catch to suppress the return call arranged by tailcall. It's a bit hacky, but it works, and the alternatives weren't that appealing.

PYK 2015-09-23: Update: Now that the procedure has been divided into nset and nset_internal, an alternative to catch {tailcall ...} would be to set the result in the local scope of nset.

nxs is included in ycl struct, along with a test suite that provides examples.

To extract the third item from the fifth list in the second list:

    nxs get $data 1 4 2
    #or
    nxs get $data {1 4 2}

To extract the first through third and fifth through seventh items from the fifth list in the second list:

    nxs get $data 1 4 2 {{0 2} {4 6}}
    #or
    nxs get $data {1 4 2 {{0 2} {4 6}}}

To look up a name in the third dictionary in the second list of dictionaries:

    nxs get $data l 0 2 d name
    nxs nset data l 0 2 d name

To nset the same name:

    nxs nset data l 0 2 d = name Vincentio

To nset the third item in a deeply nested list:

    nxs nset data l 4 1 l = 2 newvalue

To replace the third item in a deeply nested list with three items that are expanded (i.e., the list has more elements than it originally did) into the list:

    nxs nset data l 4 1 l = 2 value1 value2 value3

To prepend multiple items to a deeply-nested list:

    nxs nset data l 4 1 l = -1 value1 value2 value3

To append three values
to a deeply-nested list: nxs nset data l 4 1 l = + value1 value2 valu3To unset the fourth item in a deeply-nested list nxs nset data l 4 1 l - 3To unset range ranges of items in a deeply-nested list nxs nset data l 4 1 l - {2 5} {7 10} Code editA more recent version of this code may be available in ycl struct #! /bin/env tclsh if 0 { Use {ycl ns dupensemble} to duplicate and specialize this namespace . To add handlers for a new structure, choose an unused name add to $set and $unset command prefixes conforming to the semantics of the built-in handlers. } # When args is empty , set nothing , return the indicated indices . # When args is a list containing only the empty string , set the specified # items to the empty string # When keys is empty , operate on the primary value variable set { d {apply {{op name keys args} { upvar 1 $name var # If anything is to be set , there should , at the very least , be one # value in $args if {[llength $args]} { set res {} if {[llength $args] % 2 && [llength $keys]} { set args [list [lindex $keys end] {*}$args] set keys [lreplace $keys[set keys {}] end end] } foreach {key val} $args { dict set var {*}$keys $key $val[::set val {}] dict set res {*}$keys $key [dict get $var {*}$keys $key] } } else { set res [dict get $var {*}$keys] } if {[info exists outer]} { dict set outer {*}$keys $var set var $outer } return $res }}} l {apply {{op name keys args} { upvar 1 $name var set keycount [llength $keys] set valscount [llength $args] set i 0 set lastval {} if {[llength $args]} { foreach key $keys val $args { if {$i >= $valscount} { set val $lastval } else { set lastval $val } if {[llength $key] == 2} { lassign $key[set key {}] firstkey lastkey } else { set firstkey $key set lastkey $key } if {$i == $keycount -1} { set val [list $val {*}[lrange $args[set args {}] $i+1 end]] if {$key eq {+}} { lappend var {*}$val } else { set var [lreplace $var[set var {}] $firstkey $lastkey {*}$val] } } else { if {$key eq {+}} { lappend var $val } set var 
[lreplace $var[set var {}] $firstkey $lastkey $val] } incr i if {$i >= $keycount} break } return $var } else { if {[llength $keys]} { foreach key $keys { if {[llength $key] == 2} { lassign $key[set key {}] firstkey lastkey lappend res [lrange $var $firstkey $lastkey] } else { lappend res [lindex $var $key] } } if {[llength $res] == 1} { set res [lindex $res[set res {}] 0] } return $res } # No changes were made, so return nothing } }}} } variable unset { d {apply {{op name keys} { upvar 1 $name var foreach key $keys { dict unset var $key } return $var }}} l {apply {{op name indices} { set res {} upvar 1 $name var foreach index $indices { # Make sure to return the result as a list if a range was provided, # or as single value if an index was provided if {[llength $index] == 2} { lassign $index[set index {}] first last lappend res [lrange $var $first $last] } else { lappend res [lindex $var $index] set first $index set last $index } set var [lreplace $var[set var {}] $first $last] } if {[llength $res] == 1} { set res [lindex $res[set res {}] end] } return $res }}} } proc nget {struct args} { nset struct {*}$args } variable doc::nset { description { Set and retrieve values in a nested heterogeneous structure . } args { synopsis { nset VARNAME ?KEYTYPE KEYS ARG ...? nset VARNMAE ?KEYTYPE KEYS ARG ...? OPERATOR ?ARG ...? } args { Each key type is followed by a sequence keys indicating which nodes to traverse , with the final key being itself a list of keys indicating which items to select . If there is only one argument between key types, that argument is interpreted as if its items had occurred as individual key arguments . If a KEYTYPTE is followed by an OPERATOR , subsequent arguments are processed as defined for that operator before the results are returned . builtin key types d dictionary operators = Values in the nested dictionary indicated by KEYS are replaced by ARGS , which are alternating keys and values . 
If there is an odd number of ARGS , the last item in KEYS becoms the first ARG . - Each ARG is a key to unset in the nested dictionary indicated by KEYS . l list operators = Arguments at even indices are ranges , while arguments at odd indices are sets of values . Each range is either a single index in which case the following argument is a simple value , or two indices indicating the first and last item to operate on, and the following argument is a list of values . For eaach index in a range , one value is consumed from the corresponding set of values , and placed in the list at that index . If the set of values is larger than the range of indices , remaining values are placed in the list after the last index in the range , increasing the size of the list by the number of additional values. Each range is calculated against the original list, before the first change is made, and is adjusted as changes occur so that it always refers to indices relative to the original list. - Each argument is an index to unset . If the argument is a two-items list , it is a range , as described for [lrange] . 
} } } proc nset {name args} { set nsetlevel -1 upvar 1 $name struct nset_internal struct {*}$args } proc nset_internal {name args} { upvar 1 nsetlevel nsetlevel incr nsetlevel variable set variable unset upvar 1 $name struct set length [llength $args] set args [lassign $args[set args {}] type keys] if {[info exists struct]} { set prevstruct $struct } if {$keys in {= -}} { set op $keys set args [lassign $args[set args {}] keys] switch $op { - { set res [ {*}[dict get $unset $type] $op struct [list $keys {*}$args]] } = { set res [{*}[dict get $set $type] $op struct $keys {*}$args] } } catch {uplevel $nsetlevel [list ::tailcall lindex $res]} } elseif {$args == 1} { tailcall [lindex [info level 0] 0] $name $type $keys = {*}$args } else { while {[llength $args] > 0 && [lindex $args 0] ni {= -} && ![dict exists $set [lindex $args 0]]} { # Must be another key set args [lassign $args[set args {}] key] lappend keys $key } if {[llength $keys] > 1} { set keys [lassign $keys[set keys {}] key1] # Expand key here so that in the future , multiple branches can # be manipulated . set struct [{*}[dict get $set $type] {} prevstruct {*}$key1] #Reduce the reference count of the Tcl_Obj behind $struct {*}[dict get $set $type] {} prevstruct {*}$key1 set oldnsetlevel $nsetlevel set nsetlevel -1 set struct2 [[lindex [info level 0] 0] struct $type $keys {*}$args] set nsetlevel $oldnsetlevel catch {uplevel $nsetlevel [list ::tailcall lindex $struct2]} {*}[dict get $set $type] {} prevstruct $key1 $struct set struct $prevstruct[set prevstruct $struct; list] } else { set struct [{*}[dict get $set $type] {} prevstruct $keys] #Reduce the reference count of the Tcl_Obj behind $struct {*}[dict get $set $type] {} prevstruct $keys {} if {[llength $args]} { [lindex [info level 0] 0] struct {*}$args } else { catch {uplevel $nsetlevel [list ::tailcall lindex $struct]} } {*}[dict get $set $type] {} prevstruct $keys $struct set struct $prevstruct[set prevstruct $struct; list] } } incr nsetlevel -1 return }
http://wiki.tcl.tk/41950
CC-MAIN-2018-39
en
refinedweb
Hello All,

I want to reduce the number of db calls and decided to refactor my code to load all the related stuff in one query using 'join fetch'. The first problem was with Hibernate being unable to load List collections (multiple bags), so I've converted them to HashSets. But it seems like it doesn't work 100% for Sets either. I have something similar to this:

@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class BaseObj {
    @Id
    @GeneratedValue
    private Long id;
}

@Entity
@Name("bigObj")
public class BigObj extends BaseObj {
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "big", fetch = FetchType.LAZY)
    private Set<SomeItem> items;
}

@Entity
public class SomeItem {
    @Id
    @GeneratedValue
    private Long id;
    @ManyToOne
    private BigObj big;
}

The following query (a BigObj with id 1 exists in the database):

    select distinct o from BigObj o join fetch o.items where o.id = 1

loads the BigObj only if the items Set is not empty; if I remove the 'join fetch o.items', the BigObj is loaded. Not sure if this is the right place to ask why, but probably a lot of seamsters have some experience with 'fetch join' and somebody has the time to confirm whether it is supposed to work or not.

Use LEFT JOIN FETCH instead. The query the way you have written it will be using an inner join, and so will return no results if the set is empty.

it worked, thanks a lot Stuart!
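Spelled out against the entities in the question, the outer-join form of the query would be (an untested sketch of the suggested fix):

```sql
select distinct o from BigObj o left join fetch o.items where o.id = 1
```

With left join fetch, the BigObj row is returned even when its items collection is empty; the collection is simply initialized as an empty Set instead of the whole result disappearing.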
https://developer.jboss.org/thread/189991
CC-MAIN-2018-39
en
refinedweb
Hi! Could you please help me to understand if it's possible (I'm a newbie to Seam): I have a bean like:

@Name("myAction")
@Scope(ScopeType.CONVERSATION)
public class MyAction {
    private MyDTO myDto;

    public void setMyDto()...
    public MyDTO getMyDto()...

    @Create
    @Override
    public void onCreate() {
        myDto = new MyDTO(); //it needs to be created manually
    }
}

then MyDTO:

public class MyDTO {
    private String name;
    public void setName...
    public String getName...
}

and view:

<h:form>
    <ui:define name="...">Name:</ui:define>
    <h:inputText .../>
    <s:link .../>
</h:form>

Although I can see that the view inputs are getting filled with initial values from the myDto, I can't get them propagated back to the bean on form submit (the Next link). So, the question is whether it's possible at all, and if so, then how?

Best regards, Eugene.

Hi, the s:link and s:button don't submit any value. Change to an h:button or its ajax-family member and it will probably work.

Leo

P.S. Try to get rid of DTO's, which is a pattern no longer needed in a normal POJO Seam application. Use the classes directly or make use of much simpler Home objects. If you're bridging to Spring, download the free Seam in Action chapter at Manning.com, which explains very well how to cooperate with Spring.

thank you, it did the trick (you probably meant h:commandButton). Regarding DTO's - I wish I could get rid of them but just can't because of legacy issues.
https://developer.jboss.org/thread/192889
CC-MAIN-2018-39
en
refinedweb
Hi,

Another release of the Compaq Hotplug PCI driver is available against
2.4.10-pre12 is at:

a full changelog at:

since the last release:
  - forward ported to 2.4.10-pre12
  - cleaned up the portions of the patch that touched the pci core
    kernel code. The patch against those files is now smaller and less
    intrusive.
  - pci core only exports symbols needed by the hotplug pci drivers if
    CONFIG_HOTPLUG is enabled
  - Compaq driver cleanups, with more global symbols removed, and a
    common namespace for the driver.
  - Compaq controller specific /proc interface has been moved to the
    proper /proc/drivers location.
  - lots of testing with different pci device types.

Again, the old Compaq tool will not work with this version of the
driver. An updated version must be downloaded from the cvs tree at
sf.net at:

The current generic hotplug_pci interface is based on a controller
model. This will be changed to a slot based model, which will enable the
userspace interface code to be much cleaner, and models what the pci
hotplug spec recommends. Any comments on this are appreciated.

thanks,

greg k-h

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at
read the FAQ at
https://lkml.org/lkml/2001/9/20/197
CC-MAIN-2018-39
en
refinedweb
Improve your app's ranking in Google Search, from content creation through performance analysis.

Analyze with Search Console

Use the Search Analytics report in the Search Console to analyze and improve your app's search performance. Filter and group data by categories such as search query, pages, or country. Using the Search Appearance filter, you can also analyze the performance of the Install button for your app. Read more about how to use the Search Analytics Report in our Help Center.

Analyze with Search referrals

You can use the referrer information from links to your app that come from Google Search for other analytics tools. For example, our codelab on tracking referrals to your app shows how you might use search referrals in combination with Google Analytics for overall app analysis. To build your own solution for tracking Search traffic to your app, you can pass a test android.intent.extra.REFERRER_NAME extra to your app using the Android Debug Bridge. The following example command demonstrates how to do this, assuming your app's package name is package_name and your app URL is:

    adb shell am start -a android.intent.action.VIEW -c android.intent.category.BROWSABLE -e android.intent.extra.REFERRER_NAME android-app://com.google.android.googlequicksearchbox/https/ -d com.examplepetstore.android

This test simulates opening an HTTP URL in your app and passing in referrer information specifying that the traffic came from the Google app.

Extract Referrer Information

The com.google.firebase.appindexing.AndroidAppUri class helps with extracting referrer URIs. An Intent extra provides the referrer information for your HTTP URL with the following key: android.intent.extra.REFERRER_NAME.
The following examples show referrer values from various sources:

- Chrome:
- Google app: android-app://com.google.android.googlequicksearchbox/https/
- Googlebot: android-app://com.google.appcrawler
- Google AdsBot App Crawler: android-app://com.google.adscrawler

The following code snippet shows how to extract referrer information from Search.

public class MeasureActivity extends AppCompatActivity {

    @Override
    public Uri getReferrer() {
        // There is a built in function available from SDK>=22
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP_MR1) {
            return super.getReferrer();
        }
        Intent intent = getIntent();
        Uri referrer = (Uri) intent.getExtras().get("android.intent.extra.REFERRER");
        if (referrer != null) {
            return referrer;
        }
        String referrer_name = intent.getStringExtra("android.intent.extra.REFERRER_NAME");
        if (referrer_name != null) {
            try {
                return Uri.parse(referrer_name);
            } catch (ParseException e) {
                // ...
            }
        }
        return null;
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // ...
        Uri referrer = getReferrer();
        if (referrer != null) {
            if (referrer.getScheme().equals("http") || referrer.getScheme().equals("https")) {
                // App was opened from a browser
                // host will contain the host path (e.g.)
                String host = referrer.getHost();
                // Add analytics code below to track this click from web Search
                // ...
            } else if (referrer.getScheme().equals("android-app")) {
                // App was opened from another app
                AndroidAppUri appUri = AndroidAppUri.newAndroidAppUri(referrer);
                String referrerPackage = appUri.getPackageName();
                if ("com.google.android.googlequicksearchbox".equals(referrerPackage)) {
                    // App was opened from the Google app
                    // host will contain the host path (e.g.)
                    String host = appUri.getDeepLinkUri().getHost();
                    // Add analytics code below to track this click from the Google app
                    // ...
                } else if ("com.google.appcrawler".equals(referrerPackage)) {
                    // Make sure this is not being counted as part of app usage
                    // ...
                }
            }
        }
        // ...
    }
}

Create Good Web & Mobile Content

You can enhance your app's ranking by providing high-quality content in both your app and your associated website. This is because our systems analyze the association between the two properties to determine ranking for both web and mobile Search results. Specifically, keep in mind the following:

- Make sure your associated site meets our design and content guidelines.
- Follow the same practices recommended in our Mobile SEO guide.
- Make sure you follow the recommended practices for your app as described in Search Console for Apps on our Help Center.
https://firebase.google.com/docs/app-indexing/android/measure?hl=ru
CC-MAIN-2018-39
en
refinedweb
Text that is to be rendered as part of an SVG document fragment is specified using the ‘text’ element. The text within a ‘text’ element can be rendered pre-formatted, auto-wrapped within a content area, or along a path. The section Text layout gives an introduction to text layout. It is followed by sections covering content areas and the algorithm for laying out text within a content area. The specialized layout rules corresponding to text that is pre-formatted, auto-wrapped, and on a path are then addressed in individual sections.

Rules for text layout in SVG 1.1 are mostly defined within the SVG 1.1 specification. The rules mirror to a large extent those found in CSS. In SVG 2, the dependence on CSS is more explicit. In practice the resulting layout is the same. The change to directly relying on CSS specifications simplifies the SVG specification while making it more obvious that rendering agents can use the same code to render both text in HTML and in SVG. In particular, SVG 2 auto-wrapped text is based on CSS text layout.

SVG's ‘text’ elements are rendered like other graphics elements. Thus, coordinate system transformations, painting, clipping and masking features apply to ‘text’ elements in the same way as they apply to shapes such as paths and rectangles.

SVG text supports advanced typographic features and international text processing needs. Multi-language SVG content is possible by substituting different text strings based on the user's preferred language. The characters to be drawn are expressed as character data ([xml], section 2.4) inside the ‘text’ element.

For accessibility reasons, it is recommended that text that is included in a document have appropriate semantic markup to indicate its function. For example, a text element that provides a visible label for part of a diagram should have an ‘id’ that is referenced by an ‘aria-labelledby’ attribute on the relevant group or path element. See SVG accessibility guidelines for more information.
Essentially, a Unicode character. A character may be a control instruction (such as a tab, carriage return, or line feed), a renderable mark (letter, digit, punctuation or other symbol), or a modifier (such as a combining accent). If support for CSS generated-content text is introduced in the future, it would be included in the array of addressable characters. Although various CSS 3 text layout specs use the term, none currently establishes a formal definition. The link is therefore to CSS 2.1, and an issue has been filed with the CSS WG. The start and end sides of a line or part of a line of text (relevant, for example, to how the text-anchor property is applied). It is determined by the direction property. (Note: the ordering of characters in a line of text is primarily controlled by the Unicode bidi algorithm and not the inline-base direction.) A font consists of a collection of glyphs together with other information (collectively, the font tables) necessary to use those glyphs to present characters on some visual medium. The combination of the collection of glyphs and the font tables is called the font data. A font may supply substitution and positioning tables that can be used by a formatter (text shaper) to re-order, combine and position a sequence of glyphs to form one or more composite glyphs. The combining may be as simple as a ligature, or as complex as an Indic syllable which combines, usually with some re-ordering, multiple consonants and vowel glyphs. The tables may be language dependent, allowing the use of language-appropriate letter forms. When a glyph, simple or composite, represents an indivisible unit for typesetting purposes, it is known as a typographic character. Ligatures are an important feature of advanced text layout. Some ligatures are discretionary while others (e.g. in Arabic) are required.
The following explicit rules apply to ligature formation: Ligature formation should not be enabled when characters are in different DOM text nodes; thus, characters separated by markup should not use ligatures. Ligature formation should not be enabled when characters are in different text chunks. Discretionary ligatures should not be used when the spacing between two characters is not the same as the default space (e.g. when letter-spacing has a non-default value, or text-align has a value of justify and text-justify has a value of distribute). (See CSS Text Module Level 3, ([css-text-3]). SVG attributes such as ‘dx’, ‘textLength’, and ‘spacing’ (in ‘textPath’) that may reposition typographic characters do not break discretionary ligatures. If discretionary ligatures are not desired they can be turned off by using the font-variant-ligatures property. For OpenType fonts, discretionary ligatures include those enabled by the liga, clig, dlig, hlig, and cala features; required ligatures are found in the rlig feature. Proper text rendering may depend on using the same font as used during authoring. For this reason SVG requires support for downloadable fonts as defined in the Font Resources section of the CSS Fonts Module. In particular, support for the Web Open Font Format [WOFF] is required. New in SVG 2, WOFF allows authors to provide the fonts needed to properly render their content. This includes ensuring that the fonts have the proper OpenType tables to support complex scripts, discretionary ligatures, swashes, old-style numbers, and so on. WOFF also allows the fonts to be compressed, subsetted, and include licensing information. Glyph selection and positioning is normally handled according to the rules of CSS. In some cases, however, the final layout of text in SVG requires knowledge of the geometry properties of individual glyphs. The geometric font characteristics are expressed in a coordinate system based on the EM box. 
(The EM is a relative measure of the height of the glyphs in the font.) The box 1 EM high and 1 EM wide is called the design space. This space is given geometric coordinates by sub-dividing the EM into a number of units per em. Most often, the (0,0) point in this coordinate system is positioned on the left edge of the EM box, but not at the bottom left corner. The Y coordinate of the bottom of a roman capital letter is usually zero. The descenders on lowercase roman letters have negative coordinate values. An 'M' inside an Em box (blue square). The 'M' sits on the baseline (blue line). The origin of the coordinate system is shown by the small black circle. Within an OpenType font ([OPENTYPE]), the number of units per em is given in the font data. Glyphs are positioned relative to a particular point on each glyph known as the alignment point. For horizontal writing-modes, the glyphs' alignment points are vertically aligned while for vertical writing-modes, they are horizontally aligned. The position of the alignment point depends on the script. For example, Western glyphs are aligned at the alphabetic baseline. Example baselines (red lines) in three different scripts. From left to right: alphabetic, hanging, ideographic. The EM box is shown in blue for the ideographic script. As glyphs are sequentially placed along a baseline, the alignment point of a glyph is typically positioned at the current text position (some properties such as vertical-align may alter the positioning). After each glyph is placed, the current text position is advanced by the glyph's advance value (typically the width for horizontal text or height for vertical text) with any correction for kerning or other spacing adjustment as well as for new lines in pre-formatted or auto-wrapped text. The initial and final current text positions are used for alignment (e.g. when the text-anchor value is either 'middle' or 'end'). The glyph's advance is needed when placing text along a path. Example of font metrics.
The blue boxes show the geometric boxes for the three glyphs. The labeled small circles show the current text position before glyph placement. The small square shows the final current text position after placing the last glyph. Note that the left side of the 'a' glyph's box is not aligned with the right side of the 'V' glyph's box due to kerning. If a glyph does not provide explicit advance values corresponding to the current glyph orientation, then an appropriate approximation should be used. For vertical text, a suggested approximation is the em size. The initial current text position is established by the ‘x’ and ‘y’ attributes on the ‘text’ element or first rendered ‘tspan’ element for pre-formatted text, or auto-wrapped text when the content area is determined by the inline-size property. For other auto-wrapped text, the initial current text position is determined by the position of the first rendered glyph after applying the CSS line wrapping algorithm. Some fonts may not have values for the baseline tables. Heuristics are suggested for approximating the baseline tables in CSS Inline Layout Module Level 3 [css-inline-3] when a given font does not supply baseline tables. When a different font (or change in font size) is specified in the middle of a run of text, the dominant baseline determines the baseline used to align glyphs in the new font (new size) to those in the previous font. The dominant-baseline property is used to set the dominant baseline. Alignment of an object relative to its parent is determined by the alignment baseline. It is normally the same baseline as the dominant baseline but by using the shorthand vertical-align property (preferred) or the longhand alignment-baseline another baseline can be chosen. The dominant baseline can be temporarily shifted (as needed for superscripts or subscripts) by using either the shorthand vertical-align property (preferred) or the longhand baseline-shift property.
Note that shifts can be nested, each shift added to the previous shift. Examples of using the 'vertical-align' property. Left: 'vertical-align:mathematical' ('alignment-baseline:mathematical') is applied to the ‘tspan’ containing '[z]'. The light-blue line shows the position of the mathematical baseline. Right: 'vertical-align:super' ('baseline-shift:super') applied to the ‘tspan’ containing '2'. The light-blue lines indicate the shift in baseline. In the inline-base direction, the position of the alignment point is on the start-edge of the glyph. Additional information on baselines can be found in the CSS Inline Layout Module Level 3 specification. [css-inline-3] (Also see: CSS Writing Modes Level 3 specification. [css-writing-modes-3]) A single line of text is laid out inside a line box. Multi-line text is produced by stacking these boxes. The height of a line box is determined by finding the maximum ascent and the maximum descent of all the glyphs in a line of text after applying the effect of the line-height property. The width of a line box is normally the width of the containing text block. In SVG, when the containing text block does not have a fixed geometry (as with pre-formatted text), the line box tightly wraps the glyph boxes within the box. Example of determining the height of a line box. First each glyph box (small light-blue boxes) is extended vertically above and below according to the line-height property. In this case the line-height property is 125%. The larger glyphs have a font-size of 96px so their extra height is 24px (25% of 96px). The extra height is evenly divided above and below resulting in the red boxes. (For clarity, all glyphs in the same inline element have been grouped together). The final line box (large light-blue box) is then found using the maximum extents of the red boxes above and below the baseline. In order to support various international writing systems, line boxes may be oriented in a horizontal or vertical direction.
Text within a vertical line box flows from top to bottom. Text within a horizontal line box may flow left-to-right (e.g., modern Latin scripts), right-to-left (e.g., Hebrew or Arabic), or a mixture of left-to-right and right-to-left (bidirectional text). The processing model for bidirectional text is as follows: While kerning or ligature processing might be font-specific, the preferred model is that kerning and ligature processing occurs between combinations of characters or glyphs after the characters have been re-ordered. The orientation of line boxes as well as the direction in which they are stacked (block-flow direction) is determined by the writing-mode property. For horizontal text (writing-mode value horizontal-tb) line boxes are stacked from top to bottom. For vertical text, line boxes are stacked from right-to-left (writing-mode value vertical-rl) or left-to-right (writing-mode value vertical-lr). The ‘text’ element defines a graphics element consisting of text. The ‘tspan’ element, within a ‘text’ or another ‘tspan’ element, allows one to switch the style and/or adjust the position of the rendered text inside the ‘tspan’ element relative to the parent element. The character data within the ‘text’ and ‘tspan’ elements, along with relevant attributes and properties, and character-to-glyph mapping tables within the font itself, define the glyphs to be rendered. The attributes and properties on the ‘text’ and ‘tspan’ elements indicate such things as the writing direction, font specification, and painting attributes which describe how exactly to render the characters. Subsequent sections of this chapter describe the relevant text-specific attributes and properties. Since ‘text’ and ‘tspan’ elements are rendered using the same rendering methods as other graphics elements, all of the same painting features that apply to shapes such as paths and rectangles also apply to ‘text’ and ‘tspan’ elements, except for markers.
In addition, coordinate system transformations, clipping, and masking can be applied to the ‘text’ element as a whole. In CSS terms, the ‘text’ element acts as a block element. The ‘tspan’, ‘textPath’, and ‘a’ elements that are descended from text content elements act as inline elements in all cases, even when different effects are applied to different ‘tspan’ or ‘textPath’ elements within the same ‘text’ element. The ‘text’ element renders its first glyph (after bidirectionality reordering) at the initial current text position (with possible adjustments due to the value of the text-anchor property or the text-align property). For pre-formatted text and for auto-wrapped text where the content area is determined by the inline-size property, the initial current text position is determined by the ‘x’ and ‘y’ values of the ‘text’ or ‘tspan’ element which contains the first rendered character. For auto-wrapped text in a shape or text on a path see the Auto-wrapped text or Text on a path sections, respectively, to determine the initial current text position. The text string Hello, out there! is rendered onto the canvas using the Verdana font family with the glyphs filled with the color blue. <?xml version="1.0" standalone="no"?> <svg width="10cm" height="3cm" viewBox="0 0 1000 300" xmlns="http://www.w3.org/2000/svg" version="1.1"> <text x="250" y="180" font-family="Verdana" font-size="55" fill="blue"> Hello, out there! </text> </svg> A ‘tspan’ is used to change the styling of the word not. <?xml version="1.0" standalone="no"?> <svg width="10cm" height="3cm" viewBox="0 0 1000 300" xmlns="http://www.w3.org/2000/svg" version="1.1"> <g font-family="Verdana" font-size="64"> <text x="160" y="180" fill="blue"> You are <tspan font-weight="bold" fill="red">not</tspan> a banana. </text> </g> </svg> Two ‘tspan’ elements are repositioned horizontally and vertically using the ‘dx’ and ‘dy’ attributes. Because all the text is within a single ‘text’ element, a user will be able to select through all the text and copy it to the system clipboard in user agents that support text selection and clipboard operations.
<?xml version="1.0" standalone="no"?> <svg width="10cm" height="3cm" viewBox="0 0 1000 300" xmlns="http://www.w3.org/2000/svg" version="1.1"> <g font-family="Verdana" font-size="64"> <text x="100" y="180" fill="blue"> But you <tspan dx="2em" dy="-50" font-weight="bold" fill="red"> are </tspan> <tspan dy="100"> a peach! </tspan> </text> </g> </svg> This decision was reversed. See GitHub Issue 210. CSS/HTML does not allow transforms on inline elements and no renderer supports transforms on the ‘a’ element when inline (in both SVG and HTML). If a single <length> is provided, then the value represents the new absolute X (Y) coordinate for the current text position for rendering the glyphs that correspond to the first character within this element or any of its descendants. If a comma- or space-separated list of n <length>s is provided, then the values represent new absolute X (Y) coordinates for the current text position for rendering the glyphs corresponding to each of the first n addressable characters within this element or any of its descendants. If more <length>s are provided than characters, then the extra <length>s will have no effect on glyph positioning. If more characters exist than <length>s, or if the attribute is not specified on a ‘tspan’, then for each additional character: In SVG 2, the ‘text’ and ‘tspan’ ‘x’ and ‘y’ attributes are not presentation attributes and cannot be set via CSS. This may change in a future version of SVG. If a single <length> is provided, this value represents the new relative X (Y) coordinate for the current text position for rendering the glyphs corresponding to the first character within this element or any of its descendants. The current text position is shifted along the x-axis (y-axis) of the current user coordinate system by <length> before the first character's glyphs are rendered.
If a comma- or space-separated list of n <length>s is provided, then the values represent incremental shifts along the x-axis (y-axis) for the current text position before rendering the glyphs corresponding to the first n addressable characters within this element or any of its descendants. Thus, before the glyphs are rendered corresponding to each character, the current text position resulting from drawing the glyphs for the previous character within the current ‘text’ element is shifted along the x-axis (y-axis) of the current user coordinate system by <length>. If more <length>s are provided than characters, then any extra <length>s will have no effect on glyph positioning. If more characters exist than <length>s, or if the attribute is not specified, then for each additional character: The supplemental rotation, in degrees, about the current text position that will be applied to all of the glyphs corresponding to each character within this element. If a comma- or space-separated list of <number>s is provided, then the first <number> represents the supplemental rotation for the glyphs corresponding to the first character within this element or any of its descendants, the second <number> represents the supplemental rotation for the glyphs that correspond to the second character, and so on. If more <number>s are provided than there are characters, then the extra <number>s will be ignored. If more characters are provided than <number>s, then for each of these extra characters the rotation value specified by the last number must be used. If the attribute is not specified and if an ancestor of a ‘tspan’ element specifies a supplemental rotation for a given character via a ‘rotate’ attribute (nearest ancestor has precedence), then the given supplemental rotation is applied to the given character. 
If there are more characters than <number>s specified in the ancestor's ‘rotate’ attribute, then for each of these extra characters the rotation value specified by the last number must be used. This supplemental rotation has no impact on the rules by which current text position is modified as glyphs get rendered and is supplemental to any rotation due to text on a path and to text-orientation, glyph-orientation-horizontal, or glyph-orientation-vertical. The author's computation of the total sum of all of the advance values that correspond to character data within this element, including the advance value on the glyph (horizontal or vertical), the effect of properties letter-spacing and word-spacing and adjustments due to attributes ‘dx’ and ‘dy’ on this ‘text’ or ‘tspan’ element or any descendants. This value is used to calibrate the user agent's own calculations with that of the author. The purpose of this attribute is to allow the author to achieve exact alignment, in visual rendering order after any bidirectional reordering, for the first and last rendered glyphs that correspond to this element; thus, for the last rendered character (in visual rendering order after any bidirectional reordering), any supplemental inter-character spacing beyond normal glyph advances is ignored (in most cases) when the user agent determines the appropriate amount to expand/compress the text string to fit within a length of ‘textLength’. If attribute ‘textLength’ is specified on a given element and also specified on an ancestor, the adjustments on all character data within this element are controlled by the value of ‘textLength’ on this element exclusively, with the possible side-effect that the adjustment ratio for the contents of this element might be different than the adjustment ratio used for other content that shares the same ancestor.
The user agent must assume that the total of the advance values for the other content within that ancestor is the difference between the advance value on that ancestor and the advance value for this element. This attribute is not intended for use to obtain effects such as shrinking or expanding text. A negative value is an error (see Error processing). The ‘textLength’ attribute is only applied when the wrapping area is not defined by the shape-inside or the inline-size properties. It is also not applied for any ‘text’ or ‘tspan’ element that has forced line breaks (due to a white-space value of pre or pre-line). If the attribute is not specified anywhere within a ‘text’ element, the effect is as if the author's computation exactly matched the value calculated by the user agent; thus, no advance adjustments are made. The user agent is required to achieve correct start and end positions for the text strings, but the locations of intermediate glyphs are not predictable because user agents might employ advanced algorithms to stretch or compress text strings in order to balance correct start and end positioning with optimal typography. Note that, for a text string that contains n characters, the adjustments to the advance values often occur only for n−1 characters (see description of attribute ‘textLength’), whereas stretching or compressing of the glyphs will be applied to all n characters. The ‘x’, ‘y’, ‘dx’, ‘dy’, and ‘rotate’ attributes on the ‘text’ and ‘tspan’ elements are useful in high-end typography scenarios where individual glyphs require exact placement. These attributes are useful for minor positioning adjustments between characters or for major positioning adjustments, such as moving a section of text to a new location to achieve the visual effect of a new line of text (compatible with SVG 1.1).
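The cumulative effect of the per-character positioning rules above (an ‘x’ list value resets the current text position, a ‘dx’ value shifts it before the glyph is rendered, and the last ‘rotate’ value repeats for any remaining characters) can be sketched in a simplified one-axis model. This is not part of the specification; the function, its fixed per-character advance, and the one-glyph-per-character simplification are illustrative assumptions:

```python
def resolve_positions(text, advance, x=(), dx=(), rotate=()):
    """Sketch of per-character position resolution along one axis.

    x      : absolute positions; each listed value resets the current text
             position for the corresponding character (extra values ignored).
    dx     : relative shifts applied before rendering each character.
    rotate : per-character rotations; the last value repeats for any
             remaining characters, per the 'rotate' attribute rules.
    """
    positions, rotations = [], []
    current = 0.0
    for i, _ch in enumerate(text):
        if i < len(x):           # absolute position overrides current position
            current = x[i]
        if i < len(dx):          # relative shift before rendering this glyph
            current += dx[i]
        positions.append(current)
        # last rotate value repeats; no rotate list means no rotation
        rotations.append(rotate[min(i, len(rotate) - 1)] if rotate else 0)
        current += advance       # advance the current text position
    return positions, rotations
```

For example, with four characters, a fixed advance of 10, `x=(100,)` and `rotate=(-30, 0, 30)`, the characters land at 100, 110, 120, 130 and the fourth character reuses the last rotation value, 30.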
Note that the ‘x’, ‘y’, ‘dx’, ‘dy’, and ‘rotate’ attributes are ignored for auto-wrapped text (except for the initial current text position when the content area is specified by the inline-size property). It was decided at the 2015 Sydney F2F that 'dx', 'dy', and 'rotate' would be ignored for auto-wrapped text. (Technically, it is not difficult to apply them but it was not seen as being really useful.) If ‘x’, ‘y’, ‘dx’, or ‘dy’ attribute values are meant to correspond to a particular font processed by a particular set of viewing software, and either of these requirements is not met, then the text might display with poor quality. The following additional rules apply to attributes ‘x’, ‘y’, ‘dx’, ‘dy’, and ‘rotate’ when they contain a list of numbers: <tspan dx="11 12 13 14 15 0 21 22 23 0 31 32 33 34 35 36">Latin and Hebrew</tspan> Example tspan04 uses the ‘rotate’ attribute on the ‘tspan’ element to rotate the glyphs to be rendered. This example shows a single text string in a ‘tspan’ element that contains more characters than the number of values specified in the ‘rotate’ attribute. In this case the last value specified in the ‘rotate’ attribute of the ‘tspan’ must be applied to the remaining characters in the string. <?xml version="1.0" standalone="no"?> <svg width="10cm" height="3cm" viewBox="0 0 1000 300" xmlns="http://www.w3.org/2000/svg" version="1.1"> <desc> Example tspan04 - The number of rotate values is less than the number of characters in the string. </desc> <text font-family="Verdana" font-size="55" fill="blue"> <tspan x="250" y="150" rotate="-30,0,30"> Hello, out there </tspan> </text> <!-- Show outline of viewport using 'rect' element --> <rect x="1" y="1" width="998" height="298" fill="none" stroke="blue" stroke-width="2" /> </svg> Example tspan04 Example tspan05 specifies the ‘rotate’ attribute on the ‘text’ element and on all but one of the child ‘tspan’ elements to rotate the glyphs to be rendered. The example demonstrates the propagation of the ‘rotate’ attribute.
<?xml version="1.0" standalone="no"?> <svg width="100%" height="100%" viewBox="0 0 500 120" xmlns="http://www.w3.org/2000/svg" version="1.1"> <desc> Example tspan05 - propagation of rotation values to nested tspan elements. </desc> <text id="parent" font-family="Arial, sans-serif" font-size="32" fill="red" x="40" y="40" rotate="5,15,25,35,45,55"> Not <tspan id="child1" rotate="-10,-20,-30,-40" fill="orange"> all characters <tspan id="child2" rotate="70,60,50,40,30,20,10" fill="yellow"> in <tspan id="child3"> the </tspan> </tspan> <tspan id="child4" fill="orange" x="40" y="90"> text </tspan> have a </tspan> <tspan id="child5" rotate="-10" fill="blue"> specified </tspan> rotation </text> <!-- Show outline of viewport using 'rect' element --> <rect x="1" y="1" width="498" height="118" fill="none" stroke="blue" stroke-width="2" /> </svg> Example tspan05 Rotation of red text inside the ‘text’ element: Rotation of the orange text inside the "child1" ‘tspan’ element: Rotation of the yellow text inside the "child2" ‘tspan’ element: Rotation of the blue text inside the "child5" ‘tspan’ element: The following diagram illustrates how the rotation values propagate to ‘tspan’ elements nested within a ‘text’ element: This section gives a short overview of SVG text layout. It is followed by sections that cover different aspects of text layout in more detail. Text layout in SVG is a multi-stage process that takes as input a ‘text’ element subtree and its property values and produces a sequence of glyphs to render and their positions in each text content element's coordinate system. First, a ‘text’ element and its descendants are laid out inside a content area or wrapping area according to CSS, as if the ‘text’ were a block element and any ‘tspan’, ‘textPath’, and ‘a’ descendants were inline elements. This layout takes into account all paragraph level and font related CSS properties described in this chapter.
The content area may be explicitly declared by setting the inline-size property, or by setting the shape-inside property that defines or references an SVG shape. If a content area is not declared, it defaults to a rectangle of infinite width and height. Second, any positioning given by ‘x’, ‘y’, ‘dx’ and ‘dy’ attributes is applied to the resulting glyph positions from the CSS layout process. The rules for which transforms are allowed depend on whether the content area was explicitly declared or not. If not explicitly declared, the rules define the layout of pre-formatted text. If declared, the rules define the layout of auto-wrapped text. Third, the effect of the text-anchor property is applied if necessary. Finally, layout of glyphs for any ‘textPath’ elements is performed, converting pre-formatted text to text-on-a-path. Examples of the different types of text layout: An example of multi-line pre-formatted text. <svg xmlns="http://www.w3.org/2000/svg"> <text x="20" y="45" style="font: 24px sans-serif;"> Example of multi-line, <tspan x="20" y="75">pre-formatted text.</tspan> </text> </svg> An example of auto-wrapped text. <svg xmlns="http://www.w3.org/2000/svg"> <text x="20" y="45" style="font: 24px sans-serif; inline-size: 200px;"> Example of text auto-wrapped.</text> </svg> Auto-wrapped text. The inline-size property defines a rectangular content area of infinite height (shown in light blue). An example of text on a path. <svg xmlns="http://www.w3.org/2000/svg"> <text style="font: 24px sans-serif;"> <textPath href="#MyPath">Text on a path.</textPath> </text> </svg> SVG 2 introduces the ability to automatically wrap text inside a rectangle or other shape by specifying a content area. The design of SVG wrapped text is motivated by the desire that SVG text wrapping be as compatible as possible with text wrapping in CSS in order that renderers that support CSS text wrapping can implement SVG text wrapping easily (but without requiring non-HTML compatible SVG renderers to implement HTML). There are several differences between SVG and CSS text wrapping.
The most important is that in SVG, a content area must be explicitly provided as SVG does not have an automatic finite (or semi-finite) content area (provided in CSS by the box model). Another difference is that SVG does not have the <p></p> and <br/> elements which create line breaks. Instead, SVG relies on the pre and pre-line values of white-space to provide line breaks. SVG wrapped text also allows a content-creation tool to provide a natural fallback for SVG 1.1 renderers that do not support wrapped text (by use of ‘x’ and ‘y’ attributes in the ‘text’ and ‘tspan’ elements, which are ignored by SVG 2 renderers for auto-wrapped text). SVG's text layout options are designed to cover most general use cases. If more complex layout is required (bulleted lists, tables, etc.), text can be rendered in another XML namespace such as XHTML [HTML] embedded inline within a ‘foreignObject’ element. A content area is defined by specifying in a ‘text’ element an inline-size property, or a shape-inside property that defines or references an SVG shape. If no content area is provided, the content area defaults to a rectangle of infinite width and height (see the pre-formatted text section). If both an inline-size property and a shape-inside property with value other than 'none' are given, the shape-inside property is used. Wrapped text is laid out in a wrapping area. The wrapping area is normally the same as the content area. When the content area is defined using the shape-inside property, the wrapping area may be smaller due to the presence of a shape-subtract property and/or a shape-padding property. The shape-subtract property (along with the shape-margin property) defines a wrapping context. The wrapping area is found by insetting the content area by the shape-padding distance, and then subtracting the wrapping context. 
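For the special case of axis-aligned rectangles, the wrapping-area construction just described (inset the content area by the shape-padding distance, then subtract the wrapping context, itself grown by shape-margin) can be sketched as follows. The rectangle helpers and the per-line span model are illustrative assumptions, not part of the specification; real implementations operate on arbitrary shapes:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def inset(self, d):
        # shrink on all sides by d (models shape-padding)
        return Rect(self.x + d, self.y + d, self.w - 2 * d, self.h - 2 * d)

    def outset(self, d):
        # grow on all sides by d (models shape-margin)
        return Rect(self.x - d, self.y - d, self.w + 2 * d, self.h + 2 * d)

def line_spans(content, padding, exclusions, margins, y):
    """Horizontal spans available for a line at height y, after insetting the
    content area by `padding` and subtracting each exclusion grown by its margin."""
    area = content.inset(padding)
    if not (area.y <= y <= area.y + area.h):
        return []                      # outside the wrapping area entirely
    spans = [(area.x, area.x + area.w)]
    for excl, m in zip(exclusions, margins):
        e = excl.outset(m)
        if not (e.y <= y <= e.y + e.h):
            continue                   # this exclusion does not cut this line
        out = []
        for a, b in spans:
            # keep the parts of (a, b) lying outside [e.x, e.x + e.w]
            if e.x > a:
                out.append((a, min(b, e.x)))
            if e.x + e.w < b:
                out.append((max(a, e.x + e.w), b))
        spans = [(a, b) for a, b in out if b > a]
    return spans
```

For example, a 100×100 content area with 10px of padding and a central exclusion 40..60 grown by a 5px margin leaves two spans, (10, 35) and (65, 90), for a line crossing the exclusion.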
Once a wrapping area is defined, the text is laid out inside the wrapping area according to the rules of CSS (respecting any special rules given in this section). Constructing equivalent wrapping areas in SVG and HTML. The text inside the wrapping areas is rendered the same in both cases. Defining a wrapping area in SVG. The ‘text’ element has both a shape-inside property and a shape-subtract property. The shape-inside property references a circle that defines a content area (dotted purple line). The shape-subtract property referencing two semicircles defines a wrapping context (dotted green line) which when subtracted from the content area results in the wrapping area (light blue line). Defining a wrapping area in HTML. A wrapper <div> contains two float <div>s. The wrapper <div> defines a rectangular region (solid purple line). Its shape-inside property defines a content area within the <div> (dotted purple line). The two other <div>s define two floats, one on the left (solid green line) and the right (solid pink line). The floats are rectangular in shape. Each float has a shape-outside property which defines the wrapping context for each float (dotted green and pink lines). The combined wrapping context is subtracted from the content area to define the wrapping area (light blue line). 'extent' added by resolution from February 12th, 2015. 'extent' replaces the 'width' and 'height' attributes, added by resolution from June 27th, 2013. Replaced by 'inline-size' presentation attribute per resolution from Linkoping F2F, June 11, 2015. The inline-size property allows one to set the wrapping area to a rectangular shape. The computed value of the property sets the width of the rectangle for horizontal text and the height of the rectangle for vertical text. The other dimension (height for horizontal text, width for vertical text) is of infinite length. A value of zero disables the creation of a wrapping area.
The initial current text position is taken from the ‘x’ and ‘y’ attributes of the ‘text’ element (or first child ‘tspan’ element if the attributes are not given on the ‘text’ element). For left-to-right text, the initial current text position is at the left of the rectangle. For right-to-left text it is at the right of the rectangle. For vertical text, the initial current text position is at the top of the rectangle. The rectangle (wrapping area) is then anchored according to the text-anchor property using the edges of the wrapping area to determine the start, middle, and end positions. The inline-size property method to wrap text is an extension to pre-formatted SVG text where the author simply gives a limit to the width or height of the block of text; thus the ‘x’ and ‘y’ attributes, along with the direction and text-anchor properties, are used to position the first line of text. If full justification is needed, the shape-inside property should be used to create the wrapping area. An example of using inline-size for wrapping horizontal text. <svg xmlns="http://www.w3.org/2000/svg" width="300" height="100" viewBox="0 0 300 100"> <text x="50" y="30" style="font: 20px sans-serif; inline-size: 200px"> This text wraps at 200 pixels. </text> </svg> Horizontal text wrapping. The light-blue lines indicate the limits of the content area. Note that the content area is of infinite height. The red dot shows the initial current text position. An example of using inline-size for wrapping right to left horizontal text. <svg xmlns="http://www.w3.org/2000/svg" width="300" height="100" viewBox="0 0 300 100"> <text x="250" y="30" style="font: 20px 'PakType Naqsh'; inline-size: 200px; direction: rtl;"> هذا النص يلتف في 200 بكسل.</text> </svg> Horizontal text wrapping for right to left text. The light-blue lines indicate the limits of the content area. Note that the content area is of infinite height. The red dot shows the initial current text position. Some browsers may not render this SVG 1.1 figure correctly.
Batik and Firefox seem to get it right. Bug filed against Chrome. An example of using inline-size for wrapping vertical text. <svg xmlns="http://www.w3.org/2000/svg" width="100" height="300" viewBox="0 0 100 300"> <text x="62.5" y="25" inline-size="250" style="font: 25px sans-serif; writing-mode: vertical-rl;">テキストは10文字後に折り返されます。</text> </svg> Vertical text wrapping. The light-blue lines indicate the limits of the content area. Note that the content area is of infinite width. The red dot shows the initial current text position. This SVG 1.1 image doesn't work in Firefox, even nightly. Firefox does not support the presentation attribute 'writing-mode'. Bug filed against Firefox. An example of using inline-size for wrapping horizontal text, anchored in the middle. <svg xmlns="http://www.w3.org/2000/svg" width="300" height="100" viewBox="0 0 300 100"> <text x="50" y="30" style="font: 20px sans-serif; inline-size: 200px; text-anchor: middle"> This text wraps at 200 pixels. </text> </svg> Horizontal text wrapping. The light-blue lines indicate the limits of the content area. The text is anchored in the middle. The red dot shows the initial current text position. The shape-inside property allows one to set the content area to a CSS basic shape or to an SVG shape. In CSS/HTML shape-inside applies to block-level elements and absolute and percentage values are defined relative to the block-level element. In SVG absolute and percentage values are defined relative to the current local coordinate system and the ‘viewBox’. An example of using a CSS basic-shape for wrapping horizontal text. <svg xmlns="http://www.w3.org/2000/svg" width="300" height="300" viewBox="0 0 300 300"> <text style="font: 20px/25px sans-serif; text-align: center; shape-inside: circle(120px at 150px 150px);"> Lorem ipsum dolor sit amet, consectetuer adipiscing elit...</text> </svg> Horizontal text wrapping inside a CSS circle shape. The light-blue circle indicates the limit of the content area. An example of using a reference to an SVG shape for wrapping horizontal text.
<svg xmlns="" width="300" height="100" viewBox="0 0 300 100"> <defs> <rect id="wrap" x="50" y="10" width="200" height="80"/> </defs> <text style="font: 20px sans-serif; shape-inside: url(#wrap);"> This text wraps in a rectangle.</text> </svg> Horizontal text wrapping inside an SVG rectangle shape. The light-blue lines indicate the limits of the content area. The CSS values of 'outside-shape', 'shape-box', and 'display' are invalid for SVG. SVG allows the shape-inside property to have a list of shapes. Each shape defines an independent content area. Text is first laid out in the content area of the first shape. If the text overflows the first shape, the overflow text is laid out in the next shape until all text is laid out or no more shapes are available. The effect is similar to CSS columns, except that the columns can have arbitrary shapes. It is recommended that an overflow shape be provided to ensure the accessibility of all text in cases; for example, if a user increases the font size. Except as noted, see the CSS Shapes Module Level 2 for the definition of 'shape-inside'. [css-shapes-2] 'shape-inside' was removed when the CSS Exclusions and Shapes Module was split into separate Exclusions and Shapes modules. At the Tokyo joint SVG/CSS F2F meeting, it was agreed that it would reappear in CSS Shapes Module Level 2. The shape-subtract property allows one to exclude part of the content area from the wrapping area. The excluded area is the addition of all the areas defined in a list of CSS basic shapes and/or SVG shapes. It was resolved at the 2016 Sydney F2F that 'shape-subtract' should be uses instead of 'shape-outside' due to the different behavior required. ('shape-outside' reduces the area of an exclusion.) Absolute and percentage values are defined relative to the current local coordinate system and the ‘viewBox’. An example of using shape-subtract.. 
<svg xmlns="http://www.w3.org/2000/svg" width="450" height="300" viewBox="0 0 450 300"> <rect id="rect1" x="25" y="25" width="225" height="175" fill="white" stroke="black"/> <rect id="rect2" x="200" y="125" width="225" height="150" fill="white" stroke="black" style="shape-margin:25px;"/> <text style="shape-inside:url(#rect1); shape-subtract:url(#rect2); shape-padding:25px; font-family:DejaVu Sans; font-size:12px; text-align:justify; line-height:110%">Lorem ipsum ...</text> <text style="shape-inside:url(#rect2); shape-padding:25px; font-family:DejaVu Sans; font-size:12px; text-align:justify; line-height:110%">Lorem ipsum ...</text> </svg> Horizontal text wrapping inside two overlapping rectangles using shape-subtract as well as shape-inside, shape-padding and shape-margin. The black rectangles show the content areas. The inner blue lines show the wrapping areas. The shape-image-threshold property defines the alpha channel threshold used to extract the shape using an image. A value of 0.5 means that the shape will enclose all the pixels that are more than 50% opaque. For the purposes of SVG, this property applies to ‘text’ elements. Except as noted, see the CSS Shapes Module Level 1 for the definition of 'shape-image-threshold'. [css-shapes-1] The shape-margin property adds a margin to a shape referenced with shape-subtract. It defines a new shape where every point is the specified distance from the original shape. This property takes on positive values only. Except as noted, see the CSS Shapes Module Level 1 for the definition of 'shape-margin'. [css-shapes-1] The shape-padding property can be used to offset the inline flow content wrapping on the inside of elements. Offsets created by the ‘shape-padding’ property are offset from the content area of the element. This property takes on positive values only. An example of using shape-padding.
<svg xmlns="http://www.w3.org/2000/svg" width="300" height="300" viewBox="0 0 300 300"> <circle id="circle" cx="150" cy="150" r="125" fill="none" stroke="black"/> <text style="shape-inside: url(#circle); shape-padding: 25px; font: 18px DejaVu Sans; text-align: justify; line-height: 110%;">This is an example of wrapped text in SVG 2! There should be 25 pixel padding around the text. The text is justified on both sides. It looks good!</text> </svg> Horizontal text wrapping inside a circle with a shape-padding. The outer black circle shows the content area. The inner blue circle shows the wrapping area. This image is a PNG. Figure out how to make a good SVG. Note: Chrome supports 'textLength' on 'tspan' but Firefox does not. Except as noted, see the CSS Shapes Module Level 2 for the definition of 'shape-padding'. Text layout begins by passing to a CSS-based text renderer the content of the ‘text’ element, which includes text data along with styling information and a description of one or more shapes to be filled. The ‘text’ element is treated as a block element and its descendant ‘tspan’, ‘textPath’ and ‘a’ elements are treated as inline elements. The CSS renderer returns a set of typographic characters with their positions resulting from laying out the text as if the text were absolutely positioned. A typographic character may contain more than one glyph. It is assumed here that the relative positioning of the glyphs inside a typographic character is encapsulated by the typographic character and it is not user controllable. Once a content area has been defined, the following algorithm is used to determine the typographic characters and their positions for a given ‘text’ element: SVG allows the shape-inside property to reference more than one shape. Each shape should be filled in turn until there is no more text or no more shapes.
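The fill-in-turn overflow behavior for a shape list can be sketched as follows. This is a hypothetical model, not part of any SVG API: each shape's geometry is reduced here to a fixed line-box capacity, whereas a real layout engine would measure each line box against the shape's actual geometry.

```python
def distribute_lines(lines, shape_capacities):
    """Lay line boxes into a sequence of shapes, spilling overflow into
    the next shape, as described for shape-inside with a shape list.
    shape_capacities[i] is the number of line boxes shape i can hold
    (a deliberate simplification of real shape geometry)."""
    placed = []            # list of (shape_index, line)
    shape = 0
    used = 0
    for line in lines:
        # advance to the next shape once the current one is full
        while shape < len(shape_capacities) and used >= shape_capacities[shape]:
            shape += 1
            used = 0
        if shape == len(shape_capacities):
            break          # no more shapes: remaining text is not rendered
        placed.append((shape, line))
        used += 1
    return placed
```

With two shapes holding two lines each, a third line spills into the second shape, mirroring the column-like behavior described above.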
This means that text is laid out with the box width set to the inline-size value if writing-mode is horizontal-tb, or with the box height set to the inline-size value otherwise. A number of CSS properties have no or limited effect on SVG text layout: This ensures that the ‘text’ element is treated as if it were a block element. This ensures that ‘tspan’, ‘textPath’ and ‘a’ elements are treated as if they were inline elements. Note: the transform property has no effect on inline elements. This ensures that graphics and metadata elements inside a ‘text’ element do not render. Various SVG attributes and properties may reposition the typographic characters depending on how the content area is defined: The following SVG text layout algorithm returns output information about each character in the DOM in the ‘text’ element's subtree. That information includes: The arrays given in the SVG attributes ‘x’, ‘y’, ‘dx’, ‘dy’, and ‘rotate’ are indexed by addressable characters. However, repositioning is applied to typographic characters. If a typographic character corresponds to more than one character (e.g. a ligature), only the array values corresponding to the first character are used in positioning the typographic character. Array values corresponding to other characters in the typographic character are skipped (for ‘x’ and ‘y’), are accumulated and applied to the next typographic character (for ‘dx’ and ‘dy’), or if it is the last value in the array, applied to the following typographic characters (for ‘rotate’). This ensures, for example, that attribute values are applied to the same characters regardless of whether or not a font has a particular ligature. The SVG specific text layout algorithm is as follows: This will be a single line of text unless the white-space property causes line breaks. 
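The skip/accumulate/carry-forward rules for the positioning arrays above can be sketched in Python. This is a simplified illustrative model, not the full algorithm: `chars_per_typo` and the tuple output are invented names, and only ‘x’, ‘dx’ and ‘rotate’ are handled (‘y’ and ‘dy’ behave like ‘x’ and ‘dx’).

```python
def apply_char_attrs(chars_per_typo, x=(), dx=(), rotate=()):
    """Map per-addressable-character 'x', 'dx', 'rotate' arrays onto
    typographic characters (a ligature covers several characters).
    Rules sketched from the text: 'x' uses only the first character's
    value; skipped 'dx' values accumulate onto the next typographic
    character; the last 'rotate' value applies to all later characters."""
    out, idx, carry = [], 0, 0.0
    for n in chars_per_typo:
        cx = x[idx] if idx < len(x) else None
        d = carry + (dx[idx] if idx < len(dx) else 0.0)
        rot = rotate[min(idx, len(rotate) - 1)] if rotate else 0.0
        # dx values for characters 2..n of a ligature carry forward
        carry = sum(dx[j] for j in range(idx + 1, idx + n) if j < len(dx))
        out.append((cx, d, rot))
        idx += n
    return out
```

For a two-character ligature followed by a plain character, the ligature's second ‘x’ value (200) is skipped while its ‘dx’ value (2) is carried onto the next typographic character, and the last ‘rotate’ value persists.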
For each array element with index i in result: Since there is collapsible white space not addressable by glyph positioning attributes in the following ‘text’ element (with a standard font), the "B" glyph will be placed at x=300. <text x="100 200 300"> A B </text> This is because the white space before the "A", and all but one white space character between the "A" and "B", is collapsed away or trimmed. This ensures chunks shifted by text-anchor do not span multiple lines. Position adjustments (e.g. values in an ‘x’ attribute) specified by a node apply to all characters in that node including characters in the node's descendants. Adjustments specified in descendant nodes, however, override adjustments from ancestor nodes. This section resolves which adjustments are to be applied to which characters. It also directly sets the rotate coordinate of result. This flag will allow ‘y’ (‘x’) attribute values to be ignored for horizontal (vertical) text inside ‘textPath’ elements. A recursive procedure that takes as input a node and whose steps are as follows: i is an index of addressable characters in the node; j is an index of all characters in the node. This loop applies the ‘x’, ‘y’, ‘dx’, ‘dy’ and ‘rotate’ attributes to the content inside node. Setting the flag to false ensures that ‘x’ and ‘y’ attributes set in a ‘text’ element don't create an anchored chunk in a ‘textPath’ element when they should not. The ‘x’ attribute is ignored for vertical text on a path. The ‘y’ attribute is ignored for horizontal text on a path. A ‘textPath’ element always creates an anchored chunk. The ‘dx’ and ‘dy’ adjustments are applied before adjustments due to the ‘textLength’ attribute while the ‘x’, ‘y’ and ‘rotate’ adjustments are applied after. A recursive procedure that takes as input a node and whose steps are as follows: Child nodes are adjusted before parent nodes.
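The effect of white-space collapsing on attribute indexing, as in the "A B" example above, can be modeled with a hypothetical helper (not part of any SVG API):

```python
def assign_x(text, xs):
    """Pair addressable characters with 'x' attribute values after
    white-space collapsing (as for white-space: normal): leading and
    trailing white space is trimmed and interior runs collapse to a
    single space. Hypothetical helper for illustration only."""
    collapsed = " ".join(text.split())   # trim + collapse runs of white space
    return list(zip(collapsed, xs))
```

For the content "  A  B " with x="100 200 300", the "A" takes 100, the single remaining space takes 200, and the "B" lands at 300, matching the example.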
This loop finds the left-(top-) most and right-(bottom-) most extents of the typographic characters within the node and checks for forced line breaks. User agents are required to shift the last typographic character in the node by delta, in the positive x direction if the "horizontal" flag is true and if direction is ltr, in the negative x direction if the "horizontal" flag is true and direction is rtl, or in the positive y direction otherwise. User agents are free to adjust intermediate typographic characters for optimal typography. The next steps indicate one way to adjust typographic characters when the value of ‘lengthAdjust’ is spacing. Each resolved descendant node is treated as if it were a single typographic character in this context. This loop applies ‘x’ and ‘y’ values, and ensures that text-anchor chunks do not start in the middle of a typographic character. This loops over each anchored chunk. This loop finds the left-(top-) most and right-(bottom-) most extents of the typographic character within the anchored chunk. Here we perform the text anchoring. Here we apply ‘textPath’ positioning. The user agent is free to make any additional adjustments to mid necessary to ensure high quality typesetting due to a ‘spacing’ value of 'auto' or a ‘method’ value of 'stretch'. This implements the special wrapping criteria for single closed subpaths. Set mid = mid mod length. This sets the starting point for rendering any characters that occur after a ‘textPath’ element to the end of the path. This option corresponds to basic SVG 1.1 text layout. This is the default text layout method and is used in the absence of an explicitly defined content area. It is also used as a first step in laying out text on a path (with slightly modified rules). In this layout method, no automatic line breaking or word wrapping is done. Nominally, the text is rendered as a single line inside a rectangular content area of infinite width and height.
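The per-chunk anchoring step in the algorithm above can be sketched as follows. This is a simplification: only horizontal text is handled, and the anchor point is taken to be x = 0 rather than the real initial current text position.

```python
def anchor_shift(chunk_min, chunk_max, anchor, direction="ltr"):
    """Shift applied to every typographic character of an anchored
    chunk so the chunk aligns on its anchor point (taken as x = 0).
    chunk_min/chunk_max are the left-most/right-most extents found by
    the loop over the anchored chunk."""
    if anchor == "start":
        edge = chunk_min if direction == "ltr" else chunk_max
    elif anchor == "middle":
        edge = (chunk_min + chunk_max) / 2
    elif anchor == "end":
        edge = chunk_max if direction == "ltr" else chunk_min
    else:
        raise ValueError("unknown text-anchor value")
    return -edge
```

A chunk spanning 0..100 is left alone for 'start', shifted by -50 for 'middle', and by -100 for 'end'; with direction rtl the roles of 'start' and 'end' swap.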
Multiple lines of text can be obtained by precomputing line breaks and using one of the following methods: New in SVG 2. The following properties do not apply to pre-formatted text: text-align, text-align-last, line-break, word-break, hyphens, word-wrap, and overflow-wrap. Multi-line pre-formatted text may be created by using the white-space values pre or pre-line. In these cases, a line-feed or carriage return is preserved as a forced line break which creates a new line box. The line boxes are stacked following the rules of CSS. After text is laid out according to the basic CSS text layout rules, typographic characters can be repositioned using SVG specific rules. Absolute repositioning can be prescribed by giving absolute coordinates in the ‘x’ and ‘y’ attributes or by forced line breaks. Absolute repositioning may be influenced by the text-anchor property. Relative repositioning can be prescribed by giving relative coordinates in the ‘dx’ and ‘dy’ attributes. The typographic characters may be arbitrarily rotated by giving a list of values in the ‘rotate’ attribute. Absolute repositioning (including any adjustment due to the text-anchor property) is done before relative repositioning and rotation. Text is automatically wrapped when a content area is specified in the ‘text’ element. The content area defines the outermost container for wrapping text. A wrapping context (set of exclusion areas) may also be given. The actual wrapping area is defined by subtracting the wrapping context from the content area. The wrapping context may also be reduced by the value of the shape-padding property. The effective area of an exclusion may be enlarged by the value of the shape-margin property. In the case where the content area is defined by the inline-size property, the ‘x’ and ‘y’ attributes corresponding to the first rendered typographic character define the initial current text position. 
When the content area is inside a shape, the initial current text position is determined from the position of the first rendered typographic character after laying out the first line box inside the shape. Except when used to determine the initial current text position, all ‘x’ and ‘y’ values are ignored on ‘text’ and ‘tspan’ elements in auto-wrapped text. The attributes ‘x’ and ‘y’ can provide a natural fallback mechanism for SVG 1.1 renderers for wrapped text. Content producers may wish to pre-layout text by breaking up lines into ‘tspan’ elements with ‘x’ and ‘y’ attributes. Then, for example, if a fallback font is used to render the text, an SVG 2 renderer will ignore the ‘x’ and ‘y’ attributes and reflow the text using the font metrics of the fallback font. An SVG 1.1 renderer will use the ‘x’ and ‘y’ attributes in rendering the text, which will usually result in readable text even if the rendering doesn't match the shape. Many of the text wrapping examples in this section rely on this mechanism to render text in browsers that have not implemented text wrapping. The following examples illustrate a few issues with laying out text in a shape. Given an arbitrarily shaped wrapping area, the first line box may not fit flush against the top (or side for vertical text). In this case, the first line box is shifted until it fits. In CSS, the edge of a shape is treated as a series of 1 pixel × 1 pixel floats. A future CSS specification may define a line grid that could be used to control the position of the first line of text to allow alignment of text between different wrapping areas. The top line box (small light-blue rectangle), consisting of the smallest possible block of text, is moved down until the line box fits inside the wrapping area (light-blue path). Note that the line box includes the effect of the line-height property, here set to 1.25. The red rectangles tightly wrap the glyphs in each line box.
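The "shifted until it fits" behavior for the first line box can be sketched for a circular wrapping area. This is a crude approximation with a fixed step size (an invented parameter); real engines treat the shape edge as 1 pixel by 1 pixel floats rather than testing the circle analytically.

```python
import math

def first_line_y(radius, line_width, line_height, step=1.0):
    """Move a line box of line_width down from the top of a circular
    wrapping area (centered at the origin) until it fits; return the
    y of the box's top edge, or None if nothing fits."""
    y = -radius                      # start at the top of the circle
    while y + line_height <= radius:
        # half-width of the circle at the box's top and bottom edges
        half_top = math.sqrt(max(radius ** 2 - y ** 2, 0.0))
        half_bot = math.sqrt(max(radius ** 2 - (y + line_height) ** 2, 0.0))
        if 2 * min(half_top, half_bot) >= line_width:
            return y                 # the box fits here
        y += step
    return None                      # shape too small for this line
```

For a circle of radius 100 and a 100-unit-wide line box, the first line lands well below the circle's top, exactly the shifting effect shown in the figure above.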
This appears to be different from the SVG 1.2 draft in which the top of the wrapping area served as the origin of a line grid. The first line was moved down by the line height until it fit inside the shape. Given an arbitrarily shaped wrapping area, a single line of text might be broken into more than one part. In this case, a line box for each part is created. The height of all line boxes in a single line of text must be the same (ensuring all parts have the same baseline). This height is calculated by looking at all glyphs in the line of text. This default behavior was agreed to at the CSS/SVG joint working group meeting in Sydney 2016. A future CSS specification may allow one to control which parts of a shape broken into different parts are filled (e.g., fill only the rightmost parts, fill only the leftmost parts, etc.). The top line is split into two line boxes (light-blue rectangles); text in each line box is centered inside the box (due to 'text-align:center'). SVG can place text along a path defined either by a ‘path’ element or the path equivalent of a basic shape. This is specified by including text within a ‘textPath’ element that has either an ‘href’ attribute with a URL reference pointing to a ‘path’ element or basic shape, or by specifying a value for the ‘path’ attribute that specifies the path data directly. The ability to place text along a basic shape is new in SVG 2. Placing text along a basic shape was resolved at the Sydney (2015) meeting. Directly using a 'd' attribute to specify the path was added to SVG 2. The 'd' attribute was renamed to 'path' by decision at the London (2016) editor's meeting, at the same time 'd' was promoted to being a presentation attribute. Text on a path is conceptually like a single line of pre-formatted text that is then transformed to follow the path. Except as indicated, all the properties that apply to pre-formatted text apply to text on a path.
Both the ‘path’ attribute and the ‘href’ attribute specify a path along which the typographic characters will be rendered. If both attributes are specified on a ‘textPath’ element, the path that is used must follow the following order of precedence: If the ‘path’ attribute contains an error, the ‘href’ attribute must be used. An offset from the start of the path for the initial current text position, calculated using the user agent's distance along the path algorithm, after converting the path to the ‘textPath’ element's coordinate system. If a <length> other than a percentage is given, then the ‘startOffset’ represents a distance along the path measured in the current user coordinate system for the ‘textPath’ element. If a percentage is given, then the ‘startOffset’ represents a percentage distance along the entire path. Thus, startOffset="0%" indicates the start point of the path and startOffset="100%" indicates the end point of the path. Negative values and values larger than the path length (e.g. 150%) are allowed. Limiting values to the range 0%-100% prevents easily creating effects like text moving along the path. Any typographic characters with mid-points that are not on the path are not rendered. Rendering for different values of the ‘startOffset’ attribute. From top to bottom: default value, 50%, -50%. The bottom path should show only "path." on the left side of the path. Chrome and Safari both do not handle offsets outside the range 0% to 100%. Chrome bug For paths consisting of a single closed subpath (including an equivalent path for a basic shape), typographic characters are rendered along one complete circuit of the path. The text is aligned as determined by the text-anchor property to a position along the path set by the ‘startOffset’ attribute. For the start (end) value, the text is rendered from the start (end) of the line until the initial position along the path is reached again. 
For the middle, the text is rendered from the middle point in both directions until a point on the path an equal distance in both directions from the initial position on the path is reached. Rendering for text on a path referencing a ‘circle’ with different values of the ‘startOffset’ attribute. Left: 0. Right: 75% or -25%. The red circle marks the beginning of the path (after the canonical decomposition of the circle into a path). Rendering for text on a path referencing a ‘circle’ with different values of the text-anchor property. Left: 'start'. Middle: 'middle'. Right: 'end'. The red circles mark the beginning of the path (after the canonical decomposition of the circle into a path). The blue square marks the reference point for the start of rendering (shifted from the path start by a ‘startOffset’ value of 75%). The gray arrow(s) show the direction of typographic character placement and the point at which typographic character placement stops. The arrow(s) would be reversed if the direction property has a value of rtl. Rendering all glyphs was agreed to for basic shapes at the Sydney (2015) meeting (but missing in minutes). Limiting the wrapping to a path with a single closed sub-path and to one loop, effected by the 'startOffset' attribute, agreed to at the London (2016) Editor's Meeting. Indicates the method by which text should be rendered along the path. A value of align indicates that the typographic character should be rendered using simple 2×3 matrix transformations such that there is no stretching/warping of the typographic characters. Typically, supplemental rotation, scaling and translation transformations are done for each typographic character to be rendered. As a result, with align, in fonts where the typographic characters are designed to be connected (e.g., cursive fonts), the connections may not align properly when text is rendered along a path.
A value of stretch indicates that the typographic character outlines will be converted into paths, and then all end points and control points will be adjusted to be along the perpendicular vectors from the path, thereby stretching and possibly warping the glyphs. With this approach, connected typographic characters, such as in cursive scripts, will maintain their connections. (Non-vertical straight path segments should be converted to Bézier curves in such a way that horizontal straight paths have an (approximately) constant offset from the path along which the typographic characters are rendered.) Indicates how the user agent should determine the spacing between typographic characters that are to be rendered along a path. A value of exact indicates that the typographic characters should be rendered exactly according to the spacing rules as specified in Text on a path layout rules. A value of auto indicates that the user agent should use text-on-a-path layout algorithms to adjust the spacing between typographic characters in order to achieve visually appealing results. Determines the side of the path the text is placed on (relative to the path direction). Specifying a value of right effectively reverses the path. Rendering for text on a path referencing a ‘circle’ with different values of the ‘side’ attribute. Left: left. Right: right. Added in SVG 2 to allow text either inside or outside closed subpaths and basic shapes (e.g. rectangles, circles, and ellipses). Adding 'side' was resolved at the Sydney (2015) meeting. A path data string describing the path onto which the typographic characters will be rendered. An empty string indicates that there is no path data for the element. This means that the text within the ‘textPath’ does not render or contribute to the bounding box of the ‘text’ element. If the attribute is not specified, the path specified with ‘href’ is used instead. 
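The resolution of ‘startOffset’ described earlier, including percentages and out-of-range values on a single closed subpath, can be sketched as follows (a hypothetical helper; the path-length measurement itself is assumed to be done elsewhere):

```python
def resolve_start_offset(value, path_length, closed=False):
    """Resolve a startOffset value to a distance along the path.
    Percentages are fractions of the whole path length; other values
    are user units. For a path consisting of a single closed subpath
    the offset wraps (mod path length), so -25% and 75% start at the
    same point on the path."""
    if isinstance(value, str) and value.endswith("%"):
        dist = float(value[:-1]) / 100.0 * path_length
    else:
        dist = float(value)
    return dist % path_length if closed else dist
```

On a closed path of length 400, both "75%" and "-25%" resolve to 300, matching the figure where those two values produce the same rendering.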
A URL reference to the ‘path’ element or basic shape element onto which the glyphs will be rendered, if no ‘path’ attribute is provided. If the attribute is used, and the <url> is an invalid reference (e.g., no such element exists, or the referenced element is not a ‘path’ element or basic shape), then the ‘textPath’ element is in error and its entire contents shall not be rendered by the user agent. Refer to the common handling defined for URL reference attributes and deprecated XLink attributes. The path data coordinates within the referenced ‘path’ element or basic shape element are assumed to be in the same coordinate system as the current ‘text’ element, not in the coordinate system where the ‘path’ element is defined. The transform attribute on the referenced ‘path’ element or basic shape element represents a supplemental transformation relative to the current user coordinate system for the current ‘text’ element, including any adjustments to the current user coordinate system due to a possible transform property on the current ‘text’ element.
For example, the following fragment of SVG content: <svg xmlns="http://www.w3.org/2000/svg"> <g transform="translate(25,25)"> <defs> <path id="path1" transform="scale(2)" d="M0,10 L40,20 80,10" fill="none" stroke="red"/> </defs> </g> <text transform="rotate(45)"> <textPath href="#path1">Text on a path</textPath> </text> </svg> should have the same effect as the following: <svg xmlns="http://www.w3.org/2000/svg"> <g transform="rotate(45)"> <defs> <path id="path1" transform="scale(2)" d="M0,10 L40,20 80,10" fill="none" stroke="red"/> </defs> <text> <textPath href="#path1">Text on a path</textPath> </text> </g> </svg> and be equivalent to: <svg xmlns="http://www.w3.org/2000/svg"> <text transform="rotate(45)"> <textPath path="M0,20 L80,40 160,20" >Text on a path</textPath> </text> </svg> Note that the transform="translate(25,25)" has no effect on the ‘textPath’ element, whereas the transform="rotate(45)" applies to both the ‘text’ and the use of the ‘path’ element as the referenced shape for text on a path. Further note that the transform="scale(2)" scales the path (equivalent to multiplying every coordinate by 2 for simple linear paths), but does not scale the text placed along the path. Example toap01 provides a simple example of text on a path. <desc>Example toap01 - simple text on a path</desc> <use href="#MyPath" fill="none" stroke="red" /> <text font- <textPath href="#MyPath"> We go up, then we go down, then up again </textPath> </text> <!-- Show outline of viewport using 'rect' element --> <rect x="1" y="1" width="998" height="298" fill="none" stroke="blue" stroke- </svg> Example toap01 View this example as SVG (SVG-enabled browsers only) Example toap02 shows how ‘tspan’ elements can be included within ‘textPath’ elements to adjust styling attributes and adjust the current text position before rendering a particular glyph. The first occurrence of the word "up" is filled with the color red. Attribute ‘dy’ is used to lift the word "up" from the baseline.
The ‘x’, ‘y’, ‘dx’, ‘dy’, and ‘rotate’ attributes can only be specified on ‘text’ and ‘tspan’ elements (but may affect the rendering of glyphs in text on a path). <desc>Example toap02 - tspan within textPath</desc> <use href="#MyPath" fill="none" stroke="red" /> <text font- <textPath href="#MyPath"> We go <tspan dy="-30" fill="red" > up </tspan> <tspan dy="30"> , </tspan> then we go down, then up again </textPath> </text> <!-- Show outline of viewport using 'rect' element --> <rect x="1" y="1" width="998" height="298" fill="none" stroke="blue" stroke- </svg> Example toap02 View this example as SVG (SVG-enabled browsers only) Example toap03 demonstrates the use of the ‘startOffset’ attribute on the ‘textPath’ element to specify the start position of the text string as a particular position along the path. Notice that glyphs that fall off the end of the path are not rendered. <desc>Example toap03 - text on a path with startOffset attribute</desc> <use href="#MyPath" fill="none" stroke="red" /> <text font- <textPath href="#MyPath" startOffset="80%"> We go up, then we go down, then up again </textPath> </text> <!-- Show outline of viewport using 'rect' element --> <rect x="1" y="1" width="998" height="298" fill="none" stroke="blue" stroke- </svg> Example toap03 View this example as SVG (SVG-enabled browsers only) Conceptually, for text on a path the target path is stretched out into either a horizontal or vertical straight line segment. For horizontal text layout flows, the path is stretched out into a hypothetical horizontal line segment such that the start of the path is mapped to the left of the line segment. For vertical text layout flows, the path is stretched out into a hypothetical vertical line segment such that the start of the path is mapped to the top of the line segment. The standard text layout rules are applied to the hypothetical straight line segment and the result is mapped back onto the target path. Vertical and bidirectional text layout rules also apply to text on a path.
The orientation of each glyph along a path is determined individually. For horizontal text layout flows, the default orientation (the up direction) for a given glyph is along the vector that starts at the intersection point on the path to which the glyph is attached and which points in the direction 90 degrees counter-clockwise from the angle of the curve at the intersection point. For vertical text layout flows, the default orientation for a given glyph is along the vector that starts at the intersection point on the path to which the glyph is attached and which points in the direction 180 degrees from the angle of the curve at the intersection point. Default glyph orientation along a path. Left: horizontal text. Right: vertical text. Example toap04 will be used to illustrate the particular layout rules for text on a path that supplement the basic text layout rules for straight line horizontal or vertical text. <?xml version="1.0" standalone="no"?> <svg width="12cm" height="3.6cm" viewBox="0 0 1000 300" version="1.1" xmlns="http://www.w3.org/2000/svg"> <defs> <path id="MyPath" d="M 100 125 C 150 125 250 175 300 175 C 350 175 450 125 500 125 C 550 125 650 175 700 175 C 750 175 850 125 900 125" /> </defs> <desc>Example toap04 - text on a path layout rules</desc> <use href="#MyPath" fill="none" stroke="red" /> <text font- <textPath href="#MyPath"> Choose shame or get war </textPath> </text> <!-- Show outline of viewport using 'rect' element --> <rect x="1" y="1" width="998" height="298" fill="none" stroke="blue" stroke- </svg> Example toap04 View this example as SVG (SVG-enabled browsers only) The following picture does an initial zoom in on the first glyph in the ‘text’ element. The small dot above shows the point at which the glyph is attached to the path. The box around the glyph shows the glyph is rotated such that its horizontal axis is parallel to the tangent of the curve at the point at which the glyph is attached to the path.
The box also shows the glyph's charwidth (i.e., the amount by which the current text position advances horizontally when the glyph is drawn using horizontal text layout). The next picture zooms in further to demonstrate the detailed layout rules. For left-to-right horizontal text layout along a path (i.e., when the glyph orientation is perpendicular to the inline-base direction), the layout rules are as follows: Comparable rules are used for top-to-bottom vertical text layout along a path (i.e., when the glyph orientation is parallel with the inline-base direction), the layout rules are as follows: In the calculations above, if either the startpoint-on-the-path or the endpoint-on-the-path is off the end of the path, then extend the path beyond its end points with a straight line that is parallel to the tangent of the path at its end point so that the midpoint-on-the-path can still be calculated. When the inline-base direction is horizontal, then any ‘x’ attributes on ‘text’ or ‘tspan’ elements represent new absolute offsets along the path, thus providing explicit new values for startpoint-on-the-path. Any ‘y’ attributes on ‘text’ or ‘tspan’ elements are ignored. When the inline-base direction is vertical, then any ‘y’ attributes on ‘text’ or ‘tspan’ elements represent new absolute offsets along the path, thus providing explicit new values for startpoint-on-the-path. Any ‘x’ attributes on ‘text’ or ‘tspan’ elements are ignored. After positioning all characters within the ‘textPath’, the current text position is set to the end point of the path, after adjusting for the ‘startOffset’ in the case of paths that are a single closed loop. In other words, text that follows a ‘textPath’ element (but is still inside a ‘text’ element) that does not have explicit positioning information (‘x’ and ‘y’ attributes) is positioned from the end of the path. The starting point for text after the ‘textPath’ element without explicit positioning information is at the end of the path (red dot).
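Computing the point-on-the-path and tangent angle for a glyph's midpoint-on-the-path can be sketched for a polyline path. This is an illustrative helper under stated assumptions: the path is a polyline of distinct points (Bézier segments would need flattening first), and the last segment is extended in a straight line when the distance runs past the end, per the rule above.

```python
import math

def point_and_angle(points, dist):
    """Return (x, y, angle) at distance 'dist' along a polyline.
    The glyph's midpoint-on-the-path sits at (x, y) and the glyph is
    rotated by 'angle' (radians) so its horizontal axis follows the
    tangent. Past the end, the last segment is extended."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if dist <= seg or (x1, y1) == points[-1]:
            t = dist / seg           # t > 1 extends beyond the end point
            return (x0 + t * (x1 - x0),
                    y0 + t * (y1 - y0),
                    math.atan2(y1 - y0, x1 - x0))
        dist -= seg
    raise ValueError("path needs at least two points")
```

On an L-shaped polyline, a distance of 15 lands halfway up the vertical leg with a 90-degree tangent, and a distance of 25 continues past the end along the extended final segment.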
The end of the path was chosen over the position of the last character as it is more predictable for authors (not depending on font, etc.). Decided at the London (2016) editor's meeting. See also GitHub Issue 84. A ‘text’ element is rendered in one or more chunks. Each chunk (as produced by the text layout algorithm) is rendered, one after the other, in document order. Each rendered chunk, which consists of one or more glyphs, is filled and stroked as if it were a single path. This means that all glyphs in the chunk should be filled, then all glyphs stroked at once, or the reverse according to the value of the paint-order property. For the purposes of painting, a ‘text’ element has zero, one, or more equivalent paths, one for each chunk. Each equivalent path consists of one subpath per glyph within that chunk. Since the fill-rule property does not apply to SVG text elements, the specific order of the subpaths within the equivalent path does not matter. The specific position of the start of each subpath, and the direction that the path progresses around the glyph shape, is not defined. However, user agents should be consistent for a given font and glyph. This means that dashed strokes on text may not place the dash pattern at the same positions across different implementations. CSS offers a multitude of properties for styling text. In general, only two sets of properties are applicable to SVG: those that determine which glyphs are to be rendered (font-family, font-style, etc.) and those that apply at the paragraph level (direction, writing-mode, line-height, letter-spacing, etc.). The list of CSS properties that must be supported on SVG text elements can be found in the Styling chapter. text-align, text-align-last, text-justify, text-indent, line-break, word-break, hyphens, word-wrap, overflow-wrap.
In interactive modes, the ::selection pseudo-element must also be supported. Other CSS properties that affect text layout and rendering may also be supported by an SVG user agent; their effect should be taken into account as part of the CSS text layout step of the overall SVG text layout process. For example, while SVG 2 does not require support for the text-combine-upright property, its behavior in an SVG context should be obvious. A number of CSS properties must not have any effect on SVG text elements: Additionally, the ::before and ::after generated content pseudo-elements must not apply to SVG text elements. A future specification may introduce support for the ::before and ::after generated content pseudo-elements; authors should not rely on them being ignored. This section covers properties that are not covered elsewhere in this specification and that are specific to SVG. The text-anchor property is used to align (start-, middle- or end-alignment) a string of pre-formatted text or auto-wrapped text where the wrapping area is determined from the inline-size property relative to a given point. It is not applicable to other types of auto-wrapped text; see instead text-align. For multi-line text, the alignment takes place for each line. Each text chunk has an initial current text position, which is determined by (depending on context) the application of the ‘x’ and ‘y’ attributes on the ‘text’ element, any ‘x’ or ‘y’ attribute values on a ‘tspan’ element assigned explicitly to the first rendered character in a text chunk, or the determination of the initial current text position for a ‘textPath’ element. Each text chunk also has a final current text position, which is the current text position after placing the last glyph in the text chunk. The positions are determined before applying the text-anchor property. Values have the following meanings: An example of using text-anchor on multi-line text.
<svg xmlns="http://www.w3.org/2000/svg" width="300" height="100" viewBox="0 0 300 100"> <text x="150" y="30" style="font: 20px sans-serif; white-space: pre-line; text-anchor: middle;"> This multi-line text is anchored to the middle.</text> </svg> The preserved line-feed creates two text chunks, each anchored independently. Another example of using text-anchor on multi-line text. <svg xmlns="http://www.w3.org/2000/svg" width="200" height="150" viewBox="0 0 200 150"> <path d="m 100,0 0,150" style="fill:none;stroke:#add8e6"/> <text x="100 100 100" y="50 95 140" style="font-size: 42px; text-anchor: middle">I❤SVG</text> </svg> The text is divided into three text chunks (due to the three coordinates in the ‘x’ and ‘y’ attributes). Each text chunk is independently anchored. The ‘glyph-orientation-horizontal’ property has been removed in SVG 2. The ‘glyph-orientation-vertical’ property applies only to vertical text. It has been obsoleted in SVG 2 and partially replaced by the text-orientation property of CSS Writing Modes Level 3. The following SVG 1.1 values must still be supported by aliasing the property as a shorthand to text-orientation as follows: auto aliases to mixed, 0deg aliases to upright, and 90deg aliases to sideways. Any other values must be treated as invalid. The ‘kerning’ property has been removed in SVG 2. SVG 1.1 uses the 'kerning' property to determine if the font kerning tables should be used to adjust inter-glyph spacing. It also allows a <length> value which, if given, is added to the spacing between glyphs (supplemental to any from the letter-spacing property). This property is replaced in SVG 2 by the CSS font-kerning property, which solely controls turning on/off the use of the font kerning tables. Chrome's UseCounter data showed a use of 0.01% in 2014. See GitHub issue 80. This section covers CSS properties that are not covered elsewhere in this specification and have either SVG-specific adaptations or are significantly altered from SVG 1.1. CSS Font Module Level 3 changes the meaning of the 'font-variant' property from that defined by CSS 2.1.
It has been repurposed (and its functionality greatly expanded) as a shorthand for selecting font variants from within a single font. SVG 2 requires all font-variant subproperties to be implemented (e.g. font-variant-ligatures). SVG uses the line-height property to determine the amount of leading space which is added between lines in multi-line text (both for horizontal and vertical text). It is not applicable to text on a path. Except for the additional information provided here, the normative definition of the line-height property is in the CSS 2.1 specification ([CSS2]). The CSS Inline Module Level 3 may update the definition of 'line-height'. This property sets the block-flow direction; or, in other words, the direction in which lines of text are stacked. As a consequence it also determines if the text has a horizontal or vertical orientation. SVG 2 references CSS Writing Modes Level 3 for the definition of the 'writing-mode' property. That specification introduces new values for the property. The SVG 1.1 values are obsolete but must still be supported by converting the specified values to computed values as follows: lr, lr-tb, rl and rl-tb compute to horizontal-tb, while tb and tb-rl compute to vertical-rl. In SVG 1.0, this property could be interpreted as also setting the inline-base direction, leading to confusion about its role relative to the direction property. SVG 1.1 was a bit more specific about the role of direction (e.g. that direction set the reference point for the text-anchor property) but still was open to interpretation. The fact that neither SVG 1.0 nor SVG 1.1 allowed multi-line text added to the confusion. Except for the additional information provided here, the normative definition of the writing-mode property is in CSS Writing Modes Level 3 ([css-writing-modes-3]). The direction property specifies the inline-base direction of a ‘text’ or ‘tspan’ element. It defines the start and end points of a line of text as used by the text-anchor and inline-size properties.
It also may affect the direction in which characters are positioned if the unicode-bidi property's value is either embed or bidi-override. The direction property applies only to glyphs oriented perpendicular to the inline-base direction, which includes the usual case of horizontally-oriented Latin or Arabic text and the case of narrow-cell Latin or Arabic characters rotated 90 degrees clockwise relative to a top-to-bottom inline-base direction. Reviewers, please take special care to ensure this agrees with CSS3 Writing modes. Except for the additional information provided here, the normative definition of the direction property is in CSS Writing Modes Level 3 ([css-writing-modes-3]). When authoring documents that predominantly use right-to-left languages, it is recommended that the direction property be specified on the outermost svg element, and allow that direction to inherit to all text elements, as in the following example (which may be used as a template): <svg xmlns="http://www.w3.org/2000/svg" width="100%" height="100%" viewBox="0 0 600 72" direction="rtl" xml:lang="fa"> <title direction="ltr" xml:lang="en">Right-to-left Text</title> <desc direction="ltr" xml:lang="en"> A simple example for using the 'direction' property in documents that predominantly use right-to-left languages. </desc> <text x="300" y="50" text-anchor="middle">داستان SVG 1.1 SE طولا ني است.</text> </svg> Example View this example as SVG (SVG-enabled browsers only) Below is another example, where implicit bidi reordering is not sufficient: <?xml version="1.0" encoding="utf-8"?> <svg xmlns="http://www.w3.org/2000/svg" width="100%" height="100%" viewBox="0 0 600 72" direction="rtl" xml:lang="he"> <title direction="ltr" xml:lang="en">Right-to-left Text</title> <desc direction="ltr" xml:lang="en"> An example for using the 'direction' and 'unicode-bidi' properties in documents that predominantly use right-to-left languages. </desc> <text x="300" y="50" text-anchor="middle">כתובת MAC:‏ <tspan direction="ltr" unicode-bidi="embed">00-24-AF-2A-55-FC</tspan> </text> </svg> Example View this example as SVG (SVG-enabled browsers only) This property is defined in the CSS Line Layout Module 3 specification. See 'dominant-baseline'.
[css-inline-3] SVG 2 introduces some changes to the definition of this property. In particular: SVG uses the value of the dominant-baseline property to align glyphs relative to the ‘x’ and ‘y’ attributes. For the text-orientation value sideways, the auto value for dominant-baseline is alphabetic; however, for backwards compatibility, the glyphs should be aligned to the ‘x’ and ‘y’ attributes using the value central. We are interested in any actual use where one would prefer the old behavior. The SVG 1.1 definition of the dominant-baseline property was derived from the XSL specification. (See XSL 'dominant-baseline'.) This property is defined in the CSS Line Layout Module 3 specification. See 'alignment-baseline'. [css-inline-3] The vertical-align property shorthand should be preferred in new content. SVG 2 introduces some changes to the definition of this property. In particular: the values 'auto', 'before-edge', and 'after-edge' have been removed. For backwards compatibility, 'text-before-edge' should be mapped to 'text-top' and 'text-after-edge' should be mapped to 'text-bottom'. Neither 'text-before-edge' nor 'text-after-edge' should be used with the vertical-align property. This property is defined in the CSS Line Layout Module 3 specification. See 'baseline-shift'. [css-inline-3] The vertical-align property shorthand should be preferred in new content. The SVG 1.1 initial value 'baseline' has been removed from SVG 2. User agents may support this value as computing to '0' if necessary to support legacy SVG content. The 'baseline' value was removed with the conversion of 'vertical-align' to a shorthand for 'alignment-baseline' and 'baseline-shift' as it is also a value for 'alignment-baseline' and it is redundant with a length value of '0'. 'baseline-shift' is important for aligning subscripts and superscripts (Inkscape relies on it for this purpose). It remains in the CSS Inline Layout Module Level 3. 
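The backwards-compatibility note above, where the removed 'baseline' keyword computes to '0', can be sketched as a tiny value-mapping function. This is a non-normative illustration; the function name is ours, not the specification's:

```python
def compute_baseline_shift(specified):
    """Map a specified baseline-shift value to its computed value,
    treating the legacy SVG 1.1 keyword 'baseline' as '0'."""
    return "0" if specified == "baseline" else specified

print(compute_baseline_shift("baseline"))  # -> 0
print(compute_baseline_shift("sub"))       # -> sub
```

Any other specified value (a keyword such as sub or super, a length, or a percentage) passes through unchanged to the normal CSS computation.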
'vertical-align' is a shorthand for changing multiple properties at once, including 'baseline-shift'. SVG 2 removes percentage values from the letter-spacing property. Except as noted, see CSS Text Level 3 for the definition of the letter-spacing property ([css-text-3]). Percentage values based on the SVG viewport are not seen as useful. This brings the definition of 'letter-spacing' in line with CSS. SVG 2 changes the meaning of percentage values for the word-spacing property. In SVG 1.1, percentages define additional spacing as a percentage of the SVG viewport size. In SVG 2, following CSS Text Level 3, percentages define additional spacing as a percentage of the affected character's width. Except as noted, see CSS Text Level 3 for the definition of the word-spacing property ([css-text-3]). Percentage values based on the SVG viewport are not seen as useful. This brings the definition of 'word-spacing' in line with CSS. New in SVG 2. Added to allow user agents to handle text strings that overflow a predefined wrapping area in a more useful way. Aligns SVG and HTML/CSS text processing. See the CSS3 UI specification for the definition of 'text-overflow'. [css-ui-3] SVG uses the text-overflow property to control how text content block elements render when text overflows line boxes as, for example, can happen when the white-space property has the value nowrap. The property does not apply to pre-formatted text or text-on-a-path. In SVG, text-overflow has an effect if there is a validly specified wrapping area, regardless of the computed value of the overflow property on the text content block element. If the text-overflow property has the value ellipsis then, if the text that is to be rendered overflows the wrapping area, an ellipsis is rendered such that it fits within the given area. For the purposes of rendering, the ellipsis is treated as if it replaced the characters at the point where it is inserted.
If the text-overflow property has the value clip then any text that overflows the wrapping area is clipped. Characters may be partially rendered. Any other value for text-overflow is treated as if it wasn't specified. Note that the effect of text-overflow is purely visual; the ellipsis itself does not become part of the DOM. For all the DOM methods it is as if text-overflow was not applied, and as if the wrapping area did not constrain the text. The following example shows the use of text-overflow. The top line shows text as it would normally be rendered (text overflows due to white-space value pre and is displayed due to overflow value visible). The middle line shows text with text-overflow value clip, and the bottom line shows text with text-overflow value ellipsis. <svg xmlns="http://www.w3.org/2000/svg" width="300" height="150" viewBox="0 0 300 150"> <style> text { font: 25px sans-serif; white-space: pre; } path { fill: none; stroke: #add8e6; } </style> <path d="m 50,0 0,150"/> <path d="m 200,0 0,150"/> <text x="50" y="35" inline-size="150">SVG is awesome</text> <text x="50" y="85" inline-size="150" style="text-overflow: clip">SVG is awesome</text> <text x="50" y="135" inline-size="150" style="text-overflow: ellipsis">SVG is awesome</text> </svg> The text-overflow property used on text elements, the bottom line showing text with an ellipsis applied. It has been argued that this property is useless. It would be of more use if coupled with a mechanism that would expose the hidden text (tool-tip on hovering over ellipses?). The text-overflow property only deals with text that overflows off the end of a line. It does not deal with text that overflows off the end of the wrapping area. New in SVG 2. Added white-space to allow a more useful way to control white-space handling. Aligns SVG and HTML/CSS text processing. ‘xml:space’ deprecated in new content, retained for backwards compatibility. Rendering of white space in SVG 2 is controlled by the white-space property.
This specifies two things: how white space inside the element is collapsed, and whether lines may wrap at white space. Values and their meanings are defined in CSS Text Module Level 3. [css-text-3] An example of using the white-space value pre-line. <svg xmlns="http://www.w3.org/2000/svg"> <text x="150" y="30" style="font: 20px IPAMincho; writing-mode: vertical-rl; white-space: pre-line;"></text> </svg> Example of multi-line vertical text with line breaks. The text is from the Japanese poem Iroha. The lines are broken at traditional places. For compatibility, SVG 2 also supports the XML attribute ‘xml:space’ to specify the handling of white space characters within a given ‘text’ element's character data. New content should not use ‘xml:space’ but should instead use the white-space property. This section should be simplified to limit the discussion of xml:space and instead define it in the user agent style sheet using the white-space property. The CSS Text 4 specification's preserve-spaces value for the 'white-space-collapse' property is intended to match xml:space=preserve. (fantasai agreed to add an appropriate value for white-space to match SVG 1.1's odd xml:space="preserve" behavior.) Note that any child element of a ‘text’ element may also have an ‘xml:space’ attribute which will apply to that child element's text content. The SVG user agent has special processing rules associated with this attribute as described below. These are behaviors that occur subsequent to XML parsing [XML] and any construction of a DOM. ‘xml:space’ is an inheritable attribute which can have one of two values: default and preserve. Under the preserve white space policy, the string "a   b" (three spaces between "a" and "b") will produce a larger separation between "a" and "b" than "a b" (one space between "a" and "b"). The following example illustrates that line indentation can be important when using xml:space="default". The fragment below shows two pairs of similar ‘text’ elements, all using xml:space="default". The first pair of ‘text’ elements shows the effect of indented character data.
The attribute xml:space="default" in the first ‘text’ element instructs the user agent to: remove all newline characters; then convert all tab characters into space characters; then strip off all leading and trailing space characters; and then consolidate all contiguous space characters into a single space. The second pair of ‘text’ elements above shows the effect of non-indented character data. The attribute xml:space="default" in the third ‘text’ element instructs the user agent to perform the same processing steps. Note that XML parsers are required to convert the standard representations for a newline indicator (e.g., the literal two-character sequence "U+000D U+000A", CARRIAGE-RETURN LINE-FEED, or the stand-alone literals U+000D or U+000A) into the single character U+000A before passing character data to the application. Thus, each newline in SVG will be represented by the single character U+000A, no matter what representation for newlines might have been used in the original resource. (See XML end-of-line handling.) Any features in the SVG language or the SVG DOM that are based on character position number, such as the ‘x’, ‘y’, ‘dx’, ‘dy’ and ‘rotate’ attributes on the ‘text’ and ‘tspan’ elements, refer to character positions after the white space processing described here has been applied. Note that a glyph corresponding to a white-space character should only be displayed as a visible but blank space, even if the glyph itself happens to be non-blank. See display of unsupported characters [UNICODE]. The ‘xml:space’ attribute is: Animatable: no. Older, SVG 1.1 content will use ‘xml:space’. New content, and older content that is being reworked, will use white-space and remove any existing ‘xml:space’. However, user agents may come across content which uses both methods on the same element. If the white-space property is set on any element, then the value of ‘xml:space’ is ignored. Text in SVG can be decorated with an underline, overline, and/or strike-through. The position and style of the decoration is determined respectively by the text-decoration-line and text-decoration-style properties, or by the text-decoration shorthand property as defined in the Line Decoration section of the CSS Text Decoration Module Level 3 ([css-text-decor-3]) specification.
The fill and stroke of the decoration are given by the text-decoration-fill and text-decoration-stroke properties. If a color value is specified either by the text-decoration-color property or by the text-decoration shorthand, and no text-decoration-fill property is specified, it is interpreted as if the text-decoration-fill property were specified with that color value. If the fill or stroke of the text decoration are not explicitly specified (via text-decoration, text-decoration-color, text-decoration-fill, or text-decoration-stroke), they are given by the fill and stroke of the text at the point where the text decoration is declared (see example below). The text-decoration-line and text-decoration-style properties are new in SVG 2. The SVG 1.1/CSS 2.1 text-decoration property is transformed in a backwards compatible way to a shorthand for these properties. text-decoration-fill and text-decoration-stroke are SVG-specific properties which may be added to a future level of the CSS Text Decoration specification. The order in which the text and decorations are drawn is defined by the Painting Order of Text Decorations section of CSS Text Decoration Module Level 3. The paint order of the text decoration itself (fill/stroke) is determined by the value of the paint-order property at the point where the text decoration is declared. Example textdecoration01 provides examples for text-decoration. The first line of text has no value for text-decoration, so the initial value of text-decoration:none is used. The second line shows text-decoration:line-through. The third line shows text-decoration:underline. The fourth line illustrates the rule whereby decorations are rendered using the same fill and stroke properties as are present on the element for which the text-decoration is specified.
Since text-decoration is specified on the ‘text’ element, all text within the ‘text’ element has its underline rendered with the same fill and stroke properties as exist on the ‘text’ element (i.e., blue fill, red stroke), even though the various words have different fill and stroke property values. However, the word "different" explicitly specifies a value for text-decoration; thus, its underline is rendered using the fill and stroke properties of the ‘tspan’ element that surrounds the word "different" (i.e., yellow fill, darkgreen stroke): <?xml version="1.0" standalone="no"?> <svg width="12cm" height="4cm" viewBox="0 0 1200 400" xmlns="http://www.w3.org/2000/svg" version="1.1"> <desc>Example textdecoration01 - behavior of 'text-decoration' property</desc> <rect x="1" y="1" width="1198" height="398" fill="none" stroke="blue" stroke-width="2"/> <g font-size="60" fill="blue" stroke="red" stroke-width="1"> <text x="100" y="75">Normal text</text> <text x="100" y="165" text-decoration="line-through">Text with line-through</text> <text x="100" y="255" text-decoration="underline">Underlined text</text> <text x="100" y="345" text-decoration="underline"> <tspan>One </tspan> <tspan fill="yellow" stroke="purple">word </tspan> <tspan fill="yellow" stroke="black">has </tspan> <tspan fill="yellow" stroke="darkgreen" text-decoration="underline">different </tspan> <tspan fill="yellow" stroke="blue">underlining</tspan> </text> </g> </svg> Example textdecoration01 View this example as SVG (SVG-enabled browsers only) The CSS working group agreed to the SVG specification of the 'text-decoration-fill' and 'text-decoration-stroke' properties at the joint CSS/SVG 2014 TPAC meeting. They again endorsed the use of these properties at the joint 2015 Sydney meeting. They expressed their intention to extend CSS text decoration to include these properties at the same time they allow 'fill' and 'stroke' properties on text.
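The decoration paint resolution described above, where an explicit text-decoration-fill wins over a text-decoration-color, which in turn wins over the fill of the text where the decoration is declared, can be sketched as follows. This is a non-normative illustration and the function name is ours:

```python
def decoration_fill(decoration_fill_value, decoration_color, text_fill):
    """Resolve the fill used to paint a text decoration.

    An explicit text-decoration-fill takes precedence; otherwise a color
    from text-decoration-color (or the text-decoration shorthand) is used
    as if it were text-decoration-fill; otherwise the decoration inherits
    the fill of the text where the decoration is declared."""
    if decoration_fill_value is not None:
        return decoration_fill_value
    if decoration_color is not None:
        return decoration_color
    return text_fill

# In example textdecoration01, the underline declared on the 'text'
# element picks up that element's blue fill:
print(decoration_fill(None, None, "blue"))  # blue
```

The same resolution applies independently to the stroke, with text-decoration-stroke in place of text-decoration-fill.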
Conforming SVG viewers on systems which have the capacity for text selection (e.g., systems which are equipped with a pointer device such as a mouse) and which have system clipboards for copy/paste operations are required to support text selection and to apply styles for the ::selection pseudo-class. As the pointer is moved during the text selection process, the end glyph for the text selection operation is the glyph within the same ‘text’ element whose glyph cell is closest to the pointer. All characters within the ‘text’ element whose position within the ‘text’ element is between the start of the selection and the end of the selection are selected. For bidirectional text, the user agent must support text selection in logical order, which will result in discontinuous highlighting of glyphs due to the bidirectional reordering of characters. User agents can additionally support text selection in visual rendering order, in which case the user agent is required to make appropriate adjustments to copy only the visually selected characters to the clipboard. SVG authors and SVG generators should order their text strings to facilitate properly ordered text selection within SVG viewing applications such as Web browsers; in other words, the DOM order of the text should match the natural reading order of the text. The SVGTextContentElement interface is implemented by elements that support rendering child text content. For the methods on this interface that refer to an index to a character or a number of characters, these references are to be interpreted as an index to a UTF-16 code unit or a number of UTF-16 code units, respectively. This is for consistency with DOM Level 2 Core, where methods on the CharacterData interface use UTF-16 code units as indexes and counts within the character data. Thus for example, if the text content of a ‘text’ element is a single non-BMP character, such as U+10000, then invoking getNumberOfChars on that element will return 2 since there are two UTF-16 code units (the surrogate pair) used to represent that one character.
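The surrogate-pair behavior described above can be illustrated with a short sketch; counting UTF-16 code units via Python's utf-16-le encoding is just one convenient way to do it:

```python
def number_of_chars(text):
    """Count UTF-16 code units, mirroring how getNumberOfChars counts
    the addressable characters of a text content element."""
    return len(text.encode("utf-16-le")) // 2

print(number_of_chars("A"))           # 1: a BMP character is one code unit
print(number_of_chars("\U00010000"))  # 2: U+10000 is a surrogate pair
```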
[Exposed=Window] interface SVGTextContentElement : SVGGraphicsElement { // lengthAdjust Types const unsigned short LENGTHADJUST_UNKNOWN = 0; const unsigned short LENGTHADJUST_SPACING = 1; const unsigned short LENGTHADJUST_SPACINGANDGLYPHS = 2; [SameObject] readonly attribute SVGAnimatedLength textLength; [SameObject] readonly attribute SVGAnimatedEnumeration lengthAdjust; long getNumberOfChars(); float getComputedTextLength(); float getSubStringLength(unsigned long charnum, unsigned long nchars); DOMPoint getStartPositionOfChar(unsigned long charnum); DOMPoint getEndPositionOfChar(unsigned long charnum); DOMRect getExtentOfChar(unsigned long charnum); float getRotationOfChar(unsigned long charnum); long getCharNumAtPosition(optional DOMPointInit point); void selectSubString(unsigned long charnum, unsigned long nchars); }; The numeric length adjustment type constants defined on SVGTextContentElement are used to represent the keyword values that the ‘lengthAdjust’ attribute can take. Their meanings are as follows: The textLength IDL attribute reflects the ‘textLength’ content attribute. The lengthAdjust IDL attribute reflects the ‘lengthAdjust’ content attribute. The numeric type values for ‘lengthAdjust’ are as described above in the numeric length adjust type constant table. The getNumberOfChars method returns the total number of addressable characters available for rendering within the current element, regardless of whether they will be rendered. When getNumberOfChars() is called, the following steps are run: The getComputedTextLength method is used to compute a "length" for the text within the element. When getComputedTextLength() is called, the following steps are run: The getSubStringLength method is used to compute the formatted text advance distance for a substring of text within the element. 
When getSubStringLength(charnum, nchars) is called, the following steps are run: This means that, for example, if there is a ligature that is only partly included in the substring, then the advance of the typographic character and any subsequent letter-spacing or word-spacing space will be assigned to the first character's text length. Previous versions of SVG required that this method and getComputedTextLength also include positioning adjustments in the inline direction due to ‘dx’ or ‘dy’ on child elements, so that the returned value would be equivalent to the user agent's calculation for ‘textLength’. However, it was poorly specified, poorly implemented, and of dubious benefit, so has been simplified to match implementations. Change to text length methods resolved at August 2015 Paris face-to-face. To find the typographic character for a character at index index within an element element, the following steps are run: The getStartPositionOfChar method is used to get the position of a typographic character after text layout has been performed. When getStartPositionOfChar(charnum) is called, the following steps are run: The getEndPositionOfChar method is used to get the trailing position of a typographic character after text layout has been performed. When getEndPositionOfChar(charnum) is called, the following steps are run: The getExtentOfChar method is used to compute a tight bounding box of the glyph cell that corresponds to a given typographic character. When getExtentOfChar(charnum) is called, the following steps are run: The getRotationOfChar method is used to get the rotation of a typographic character. When getRotationOfChar(charnum) is called, the following steps are run: The getCharNumAtPosition method is used to find which character caused a text glyph to be rendered at a given position in the coordinate system.
Because the relationship between characters and glyphs is not one-to-one, only the first character of the relevant typographic character is returned. When getCharNumAtPosition(point) is called, the following steps are run: The selectSubString method is used to select text within the element. When selectSubString(charnum, nchars) is called, the following steps are run: Selects a substring of the text in this element, beginning at character index charnum and extending forwards nchars characters. The following steps must be followed when this method is called: Ignoring the argument checking and exception throwing, this is equivalent to performing the following: var selection = document.getSelection(); selection.removeAllRanges(); var range = new Range(); range.setStart(textContentElement, charnum); range.setEnd(textContentElement, charnum + nchars); selection.addRange(range); This method is deprecated, as it duplicates functionality from the Selection API. The SVGTextPositioningElement interface is implemented by elements that support attributes that position individual text glyphs. It is inherited by SVGTextElement and SVGTSpanElement. [Exposed=Window] interface SVGTextPositioningElement : SVGTextContentElement { [SameObject] readonly attribute SVGAnimatedLengthList x; [SameObject] readonly attribute SVGAnimatedLengthList y; [SameObject] readonly attribute SVGAnimatedLengthList dx; [SameObject] readonly attribute SVGAnimatedLengthList dy; [SameObject] readonly attribute SVGAnimatedNumberList rotate; }; The x, y, dx, dy and rotate IDL attributes reflect the ‘x’, ‘y’, ‘dx’, ‘dy’ and ‘rotate’ content attributes, respectively. An SVGTextElement object represents a ‘text’ element in the DOM. [Exposed=Window] interface SVGTextElement : SVGTextPositioningElement { }; An SVGTSpanElement object represents a ‘tspan’ element in the DOM.
[Exposed=Window] interface SVGTSpanElement : SVGTextPositioningElement { }; An SVGTextPathElement object represents a ‘textPath’ element in the DOM. [Exposed=Window] interface SVGTextPathElement : SVGTextContentElement { // textPath Method Types const unsigned short TEXTPATH_METHODTYPE_UNKNOWN = 0; const unsigned short TEXTPATH_METHODTYPE_ALIGN = 1; const unsigned short TEXTPATH_METHODTYPE_STRETCH = 2; // textPath Spacing Types const unsigned short TEXTPATH_SPACINGTYPE_UNKNOWN = 0; const unsigned short TEXTPATH_SPACINGTYPE_AUTO = 1; const unsigned short TEXTPATH_SPACINGTYPE_EXACT = 2; [SameObject] readonly attribute SVGAnimatedLength startOffset; [SameObject] readonly attribute SVGAnimatedEnumeration method; [SameObject] readonly attribute SVGAnimatedEnumeration spacing; }; SVGTextPathElement includes SVGURIReference; The numeric method type constants defined on SVGTextPathElement are used to represent the keyword values that the ‘method’ attribute can take. Their meanings are as follows: The numeric spacing type constants defined on SVGTextPathElement are used to represent the keyword values that the ‘spacing’ attribute can take. Their meanings are as follows: The startOffset IDL attribute reflects the ‘startOffset’ content attribute. The method IDL attribute reflects the ‘method’ content attribute. The numeric type values for ‘method’ are as described above in the numeric method type constant table. The spacing IDL attribute reflects the ‘spacing’ content attribute. The numeric type values for ‘spacing’ are as described above in the numeric spacing type constant table.
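As a non-normative recap of the interfaces above, the correspondence between the content attribute keywords and the reflected numeric constants can be written out as plain lookup tables (the table layout is ours; unknown keywords map to the *_UNKNOWN constant, 0):

```python
# Keyword-to-constant mappings for SVGTextContentElement.lengthAdjust and
# SVGTextPathElement.method / .spacing, per the IDL constants above.
LENGTH_ADJUST = {"spacing": 1, "spacingAndGlyphs": 2}
TEXTPATH_METHOD = {"align": 1, "stretch": 2}
TEXTPATH_SPACING = {"auto": 1, "exact": 2}

def to_constant(table, keyword):
    return table.get(keyword, 0)  # 0 is the *_UNKNOWN constant

print(to_constant(LENGTH_ADJUST, "spacingAndGlyphs"))  # 2
print(to_constant(TEXTPATH_METHOD, "stretch"))         # 2
print(to_constant(TEXTPATH_SPACING, "bogus"))          # 0
```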
https://www.w3.org/TR/2018/CR-SVG2-20180807/text.html
CC-MAIN-2018-39
en
refinedweb
MediaFactory Video Smoothing, edduddiee, Mar 29, 2010 7:24 AM I currently have a MediaFactory which creates my media elements. However, this returns a generic MediaElement rather than an instance of a VideoElement. What I want to be able to do is set smoothing to true on the elements returned, but I can't seem to find a way to gain access to the VideoElement properties. Any ideas on how I would do this? 1. Re: MediaFactory Video Smoothing, Edwin van Rijkom, Mar 29, 2010 7:29 AM (in response to edduddiee) var media:MediaElement = factory.createMediaElement(...); var video:VideoElement = media as VideoElement; if (video != null) { video.smoothing = true; } 2. Re: MediaFactory Video Smoothing, edduddiee, Mar 29, 2010 8:25 AM (in response to edduddiee) When trying that, the `video` variable is always set to null so it never enters the if statement. If it helps, this is the code I am using to create the media element, where `url` is a string to a m4v file: var element:MediaElement = factory.createMediaElement(new URLResource(url)); var video:VideoElement = element as VideoElement; 3. Re: MediaFactory Video Smoothing, bringrags, Mar 29, 2010 9:25 AM (in response to edduddiee) If you're not getting a VideoElement back from createMediaElement then that probably means that the URL passed in is not a video URL (or doesn't appear to be one). Does the result come back as null, or as a different type of MediaElement? What's the URL that you're passing in, does it have a file extension (assuming it's HTTP-based)? If not (or if it has a non-video extension), you might need to indicate that it's a video by setting resource.mediaType to MediaType.VIDEO. 4. Re: MediaFactory Video Smoothing, edduddiee, Mar 29, 2010 10:36 AM (in response to bringrags) I am using this with a m4v manifest file. It would seem this returns a M4VElement rather than a VideoElement like I was expecting. Sorry for the confusion. In this case how would I go about getting the video produced by the m4v smoothed?
If it helps, the m4v file will sometimes have a single progressive file and at other times have a list of dynamic streaming items. 5. Re: MediaFactory Video Smoothing, bringrags, Mar 29, 2010 10:49 AM (in response to edduddiee) I assume you mean F4MElement (rather than M4VElement). The F4MLoader uses a MediaFactory to create VideoElements. So one way to fix this would be to have F4MLoader work with a MediaFactory which always generates VideoElements that have smoothing enabled. First, you'd need to create a subclass of MediaFactory that generates smoothed VideoElements: public class SmoothedMediaFactory extends MediaFactory { ... // The init() method gets invoked by the constructor. See DefaultMediaFactory.as for a similar approach. private function init():void { netLoader = new NetLoader(); addItem ( new MediaFactoryItem ( "my.custom.elements.video" , netLoader.canHandleResource , function():MediaElement { var videoElement:VideoElement = new VideoElement(null, netLoader); videoElement.smoothing = true; return videoElement; } ) ); } ... } Then, use this MediaFactory with the F4MLoader that's passed to your F4MElement: var f4mLoader:F4MLoader = new F4MLoader(new SmoothedMediaFactory()); mediaPlayer.media = new F4MElement(resource, f4mLoader); You could migrate the above two lines into a MediaFactoryItem that gets added to a MediaFactory, so that someone could work directly with a MediaFactory to create smoothed VideoElements (i.e. without having to know about F4MLoader, F4MElement, etc.). 6. Re: MediaFactory Video Smoothing, Syberkitten, May 13, 2013 7:07 AM (in response to bringrags)
when i do try to implement your solution as is, i simpy get an error saying Stream not Found (media error) Is there no way to pass a scope function into the init process of the media factory and get access to its objects, video element amongs them, or something like that? any help is appreciated 7. Re: MediaFactory Video SmoothingSyberkitten May 16, 2013 8:44 AM (in response to Syberkitten) found a good solution in this article: basically the idea is listen to a trait_add event, check if the MediaElementEvent is of type MediaElementEvent.TRAIT_ADD and if the traitType == MediaTraitType.DISPLAY_OBJECT then simply smooth the display object. code sample:
What is multitenancy?

Multitenancy is a software architecture where a single software instance can serve multiple, distinct user groups. Software-as-a-service (SaaS) offerings are an example of multitenant architecture. In cloud computing, multitenancy can also refer to shared hosting, in which server resources are divided among different customers. Multitenancy is the opposite of single tenancy, when a software instance or computer system has 1 exclusive user or group of users.

From timesharing to SaaS

The idea of multitenancy has been around for decades. In the 1960s, universities with powerful, expensive mainframes developed timesharing software that allowed multiple users to access the computer at essentially the same time. That idea never really went away, and today the concept of multitenancy is what makes cloud computing possible.

A public cloud takes a pool of shared resources—processing power and memory—and divides it among multiple tenants. The workloads of each tenant remain isolated, even if they happen to run on the same physical machine or group of machines.

If we take the same idea a step further and apply it to software architecture, we arrive at the modern concept of SaaS. A SaaS provider runs a single instance of an application and offers access to individual customers. Each user's data remains isolated, even though they're accessing the same software as every other user.

When referring to a container orchestration platform such as Kubernetes, the term multitenancy usually means a single cluster that serves multiple projects. The cluster is configured so each project runs isolated from the others.

Benefits of multitenant architecture

Multitenancy has a whole array of advantages, which are evident in the popularity of cloud computing.

Multitenancy can save money. Computing is cheaper at scale, and multitenancy allows resources to be consolidated and allocated efficiently.
For an individual user, paying for access to a cloud service or a SaaS application is often more cost-effective than running single-tenant hardware and software.

Multitenancy enables flexibility. If you've invested in your own hardware and software, it might reach capacity during times of high demand or sit idle during times of slow demand. A multitenant cloud, on the other hand, can allocate a pool of resources to the users who need it, as their needs scale up and down. As a customer of a public cloud provider, you can access extra capacity when you need it, and not pay for it when you don't.

Multitenancy can be more efficient. Multitenancy reduces the need for individual users to manage infrastructure and handle updates and maintenance. Individual tenants can rely on a central cloud provider, rather than their own teams, to handle those routine chores.

When you might prefer single-tenant

Despite the advantages of multitenancy, there are use cases that are better suited for single-tenant computer systems. Chief among them: applications involving highly sensitive data. Public cloud environments and SaaS products are designed to isolate workloads and data, and have a strong record of working as designed. But in controlled tests, researchers have discovered vulnerabilities that could, at least theoretically, allow cross-tenant attacks in cloud environments.

In practice these risks are relatively small. Shared tenancy vulnerabilities are rare and require high levels of sophistication, according to a 2020 report on cloud vulnerabilities from the U.S. National Security Agency. As of the NSA's report, there had been no documented cross-tenant attacks on any major public cloud provider. The NSA considers these risks smaller than the risks from poor access control and misconfigurations.

Multitenant environments in Linux

Anyone setting up a multitenant environment will face the choice of isolating the environments using virtual machines (VMs) or containers.
With VMs, a hypervisor spins up guest machines that each have their own operating system as well as applications and dependencies. The hypervisor also makes sure users are isolated from each other.

Compared to VMs, containers offer a more lightweight, flexible, and easier-to-scale model. Containers simplify multitenancy deployments by deploying multiple applications on a single host, using the kernel and the container runtime to spin up each container. In contrast to VMs, each of which includes its own kernel, applications running in containers share a kernel, even across multiple tenants. In Linux®, namespaces make it possible for several containers to use the same resource at the same time without creating a conflict. Securing a container is the same as securing any running process.

When using Kubernetes for container orchestration, it's possible to set up multitenant environments using a single Kubernetes cluster. It's possible to separate tenants into their own namespaces, and create policies that enforce tenant isolation.

The tools you need for cloud computing

Stay lightweight and run your Linux containers with an optimized, minimal-footprint operating system. A cloud infrastructure that runs off standard hardware—letting you deploy the private cloud tools you need, when you need them, all from 1 place. Develop, deploy, and manage your containers—anywhere, at any scale.
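The Linux namespace mechanism mentioned above is visible on any ordinary Linux host: every process runs inside a set of namespaces, and a container runtime gives each tenant's containers their own copies of (some of) them. A quick way to see this for yourself, using only standard procfs paths (nothing here is specific to any vendor or product):

```shell
# List the namespaces the current shell belongs to. A containerized
# process would show different namespace IDs for some of these entries.
ls /proc/self/ns

# Each entry is a symlink encoding the namespace type and its ID.
# Two processes share a mount namespace exactly when these IDs match.
readlink /proc/self/ns/mnt
```

Comparing `readlink /proc/self/ns/mnt` output between a host shell and a shell inside a container is a simple way to confirm that the two are in fact isolated from each other.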
Provided by: libpcre2-dev_10.32-5_amd64

NAME
       PCRE2 - Perl-compatible regular expressions (revised API)

SYNOPSIS
       #include <pcre2.h>

       void pcre2_code_free(pcre2_code *code);

DESCRIPTION
       If code is NULL, this function does nothing. Otherwise, code must
       point to a compiled pattern. This function frees its memory, including
       any memory used by the JIT compiler. If the compiled pattern was
       created by a call to pcre2_code_copy_with_tables(), the memory for the
       character tables is also freed.

       There is a complete description of the PCRE2 native API in the
       pcre2api page and a description of the POSIX API in the pcre2posix
       page.
Metering

Configuring and using Metering in OpenShift Container Platform

Abstract

Chapter 1. About Metering

1.1. Metering overview

1.1.1. Metering resources

Metering has many resources which can be used to manage the deployment and installation of metering, as well as the reporting functionality metering provides. Metering is managed using the following CustomResourceDefinitions (CRDs):

Chapter 2. Installing metering

Review the following sections before installing metering into your cluster.

To get started installing metering, first install the Metering Operator from OperatorHub. Next, configure your instance of metering by creating a CustomResource, referred to here as your MeteringConfig. Installing the Metering Operator creates a default MeteringConfig that you can modify using the examples in the documentation. After creating your MeteringConfig, install the metering stack. Last, verify your installation.

2.1. Prerequisites

Metering requires the following components:

- A StorageClass for dynamic volume provisioning. Metering supports a number of different storage solutions.
- 4GB memory and 4 CPU cores available cluster capacity and at least one node with 2 CPU cores and 2GB memory capacity available. The minimum resources needed for the largest single Pod installed by metering are 2GB of memory and 2 CPU cores.
- Memory and CPU consumption may often be lower, but will spike when running reports, or collecting data for larger clusters.

2.2. Installing the Metering Operator

You can install metering by deploying the Metering Operator. The Metering Operator creates and manages the components of the metering stack.

You cannot create a Project starting with openshift- using the web console or by using the oc new-project command in the CLI.

2.2.1. Installing metering using the web console

You can use the OpenShift Container Platform web console to install the Metering Operator.
Procedure

Create a namespace object YAML file for the Metering Operator with the oc create -f <file-name>.yaml command. You must use the CLI to create the namespace. For example, metering-namespace.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-metering 1
  annotations:
    openshift.io/node-selector: "" 2
  labels:
    openshift.io/cluster-monitoring: "true"

- In the OpenShift Container Platform web console, click Operators → OperatorHub. Filter for metering to find the Metering Operator.
- Click the Metering card, review the package description, and then click Install.
- Select an Update Channel, Installation Mode, and Approval Strategy.
- Click Subscribe.
- Verify that the Metering Operator is installed by switching to the Operators → Installed Operators page. The Metering Operator has a Status of Succeeded when the installation is complete.

Note: It might take several minutes for the Metering Operator to appear.

- Click Metering on the Installed Operators page for Operator Details. From the Details page you can create different resources related to metering.

To complete the metering installation, create a MeteringConfig resource to configure metering and install the components of the metering stack.

2.2.2. Installing metering using the CLI

You can use the OpenShift Container Platform CLI to install the Metering Operator.

Procedure

Create a namespace object YAML file for the Metering Operator. You must use the CLI to create the namespace. For example, metering-namespace.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-metering 1
  annotations:
    openshift.io/node-selector: "" 2
  labels:
    openshift.io/cluster-monitoring: "true"

Create the namespace object:

$ oc create -f <file-name>.yaml

For example:

$ oc create -f openshift-metering.yaml

Create the OperatorGroup object YAML file.
For example, metering-og:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-metering 1
  namespace: openshift-metering 2
spec:
  targetNamespaces:
  - openshift-metering

Create a Subscription object YAML file to subscribe a namespace to the Metering Operator. This object targets the most recently released version in the redhat-operators CatalogSource. For example, metering-sub.yaml:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metering-ocp 1
  namespace: openshift-metering 2
spec:
  channel: "4.4" 3
  source: "redhat-operators" 4
  sourceNamespace: "openshift-marketplace"
  name: "metering-ocp"
  installPlanApproval: "Automatic" 5

- 1 - The name is arbitrary.
- 2 - You must specify the openshift-metering namespace.
- 3 - Specify 4.4 as the channel.
- 4 - Specify the redhat-operators CatalogSource, which contains the metering-ocp package manifests. If your OpenShift Container Platform is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
- 5 - Specify "Automatic" install plan approval.

2.3. Installing the metering stack

After adding the Metering Operator to your cluster you can install the components of metering by installing the metering stack.

Prerequisites

- Review the configuration options.

Create a MeteringConfig resource. You can begin the following process to generate a default MeteringConfig, then use the examples in the documentation to modify this default file for your specific installation. Review the following topics to create your MeteringConfig resource:

- For configuration options, review About configuring metering.
- At a minimum, you need to configure persistent storage and configure the Hive metastore.

There can only be one MeteringConfig resource in the openshift-metering namespace. Any other configuration is not supported.
Procedure

- From the web console, ensure you are on the Operator Details page for the Metering Operator in the openshift-metering project. You can navigate to this page by clicking Operators → Installed Operators, then selecting the Metering Operator.
- Under Provided APIs, click Create Instance on the Metering Configuration card. This opens a YAML editor with the default MeteringConfig file where you can define your configuration.

Note: For example configuration files and all supported configuration options, review the configuring metering documentation.

- Enter your MeteringConfig into the YAML editor and click Create. The MeteringConfig resource begins to create the necessary resources for your metering stack. You can now move on to verifying your installation.

2.4. Verifying the metering installation

You can verify the metering installation by performing any of the following checks:

Check the Metering Operator ClusterServiceVersion (CSV) for the metering version. This can be done through either the web console or CLI.

Procedure (UI)

- Navigate to Operators → Installed Operators in the openshift-metering namespace.
- Click Metering Operator.
- Click Subscription for Subscription Details.
- Check the Installed Version.

Procedure (CLI)

Check the Metering Operator CSV in the openshift-metering namespace:

$ oc --namespace openshift-metering get csv

In the following example, the 4.4 Metering Operator installation is successful:

NAME                                           DISPLAY                  VERSION                 REPLACES   PHASE
elasticsearch-operator.4.4.0-202006231303.p0   Elasticsearch Operator   4.4.0-202006231303.p0              Succeeded
metering-operator.v4.4.0                       Metering                 4.4.0                              Succeeded

Check that all required Pods in the openshift-metering namespace are created. This can be done through either the web console or CLI.

Note: Many Pods rely on other components to function before they themselves can be considered ready. Some Pods may restart if other Pods take too long to start. This is to be expected during the Metering Operator installation.
Procedure (UI)

- Navigate to Workloads → Pods in the metering namespace and verify that Pods are being created. This can take several minutes after installing the metering stack.

Procedure (CLI)

Check that all required Pods in the openshift-metering namespace are created:

$ oc -n openshift-metering get pods

The output shows that all Pods are created, in the READY column:

NAME                                  READY   STATUS    RESTARTS   AGE
hive-metastore-0                      2/2     Running   0          3m28s
hive-server-0                         3/3     Running   0          3m28s
metering-operator-68dd64cfb6-2k7d9    2/2     Running   0          5m17s
presto-coordinator-0                  2/2     Running   0          3m9s
reporting-operator-5588964bf8-x2tkn   2/2     Running   0          2m40s

Verify that the ReportDataSources are beginning to import data, indicated by a valid timestamp in the EARLIEST METRIC column. This might take several minutes. Filter out the "-raw" ReportDataSources, which do not import data:

$ oc get reportdatasources -n openshift-metering | grep -v raw

NAME                                     EARLIEST METRIC        NEWEST METRIC          IMPORT START           IMPORT END             LAST IMPORT TIME       AGE
node-allocatable-cpu-cores               2019-08-05T16:52:00Z   2019-08-05T18:52:00Z   2019-08-05T16:52:00Z   2019-08-05T18:52:00Z   2019-08-05T18:54:45Z   9m50s
node-allocatable-memory-bytes            2019-08-05T16:51:00Z   2019-08-05T18:51:00Z   2019-08-05T16:51:00Z   2019-08-05T18:51:00Z   2019-08-05T18:54:45Z   9m50s
node-capacity-cpu-cores                  2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:39Z   9m50s
node-capacity-memory-bytes               2019-08-05T16:52:00Z   2019-08-05T18:41:00Z   2019-08-05T16:52:00Z   2019-08-05T18:41:00Z   2019-08-05T18:54:44Z   9m50s
persistentvolumeclaim-capacity-bytes     2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:43Z   9m50s
persistentvolumeclaim-phase              2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:28Z   9m50s
persistentvolumeclaim-request-bytes      2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:34Z   9m50s
persistentvolumeclaim-usage-bytes        2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:36Z   9m49s
pod-limit-cpu-cores                      2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:26Z   9m49s
pod-limit-memory-bytes                   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T18:54:30Z   9m49s
pod-persistentvolumeclaim-request-info   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T18:54:37Z   9m49s
pod-request-cpu-cores                    2019-08-05T16:51:00Z   2019-08-05T18:18:00Z   2019-08-05T16:51:00Z   2019-08-05T18:18:00Z   2019-08-05T18:54:24Z   9m49s
pod-request-memory-bytes                 2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T18:54:32Z   9m49s
pod-usage-cpu-cores                      2019-08-05T16:52:00Z   2019-08-05T17:57:00Z   2019-08-05T16:52:00Z   2019-08-05T17:57:00Z   2019-08-05T18:54:10Z   9m49s
pod-usage-memory-bytes                   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T18:54:20Z   9m49s

After all Pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster.

2.5. Additional resources

- For more information on configuration steps and available storage platforms, see Configuring persistent storage.
- For the steps to configure Hive, see Configuring the Hive metastore.

Chapter 3. Configuring metering

3.1. About configuring metering

3.2. Common configuration options

3.2.1. Resource requests and limits

You can adjust the CPU, memory, or storage resource requests and/or limits for pods and volumes. The default-resource-limits.yaml below provides an example of setting resource requests and limits for each component.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      resources:
        limits:
          cpu: 1
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 100Mi
  presto:
    spec:
      coordinator:
        resources:
          limits:
            cpu: 4
            memory: 4Gi
          requests:
            cpu: 2
            memory: 2Gi
      worker:
        replicas: 0
        resources:
          limits:
            cpu: 8
            memory: 8Gi
          requests:
            cpu: 4
            memory: 2Gi
  hive:
    spec:
      metastore:
        resources:
          limits:
            cpu: 4
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 650Mi
        storage:
          class: null
          create: true
          size: 5Gi
      server:
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi

3.2.2. Node selectors

You can run the metering components on specific sets of nodes. Set the nodeSelector on a metering component to control where the component is scheduled. The node-selectors.yaml file below provides an example of setting node selectors for each component.

Add the openshift.io/node-selector: "" namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand Pods. Specify "" as the annotation value.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      nodeSelector:
        "node-role.kubernetes.io/infra": "" 1
  presto:
    spec:
      coordinator:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 2
      worker:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 3
  hive:
    spec:
      metastore:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 4
      server:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" 5

Note: When the openshift.io/node-selector annotation is set on the project, the value is used in preference to the value of the spec.defaultNodeSelector field in the cluster-wide Scheduler object.
Verification

You can verify the metering node selectors by performing any of the following checks:

Verify that all Pods for metering are correctly scheduled on the IP of the node that is configured in the MeteringConfig custom resource:

Procedure

Check all pods in the openshift-metering namespace:

$ oc --namespace openshift-metering get pods -o wide

The output shows the NODE and corresponding IP for each Pod running in the openshift-metering namespace:

NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE                                         NOMINATED NODE   READINESS GATES
hive-metastore-0                      1/2     Running   0          4m33s   10.129.2.26   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
hive-server-0                         2/3     Running   0          4m21s   10.128.2.26   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
metering-operator-964b4fb55-4p699     2/2     Running   0          7h30m   10.131.0.33   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
nfs-server                            1/1     Running   0          7h30m   10.129.2.24   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
presto-coordinator-0                  2/2     Running   0          4m8s    10.131.0.35   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
reporting-operator-869b854c78-8g2x5   1/2     Running   0          7h27m   10.128.2.25   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>

Compare the nodes in the openshift-metering namespace to each node NAME in your cluster:

$ oc get nodes

NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-147-106.us-east-2.compute.internal   Ready    master   14h   v1.18.3+6025c28
ip-10-0-150-175.us-east-2.compute.internal   Ready    worker   14h   v1.18.3+6025c28
ip-10-0-175-23.us-east-2.compute.internal    Ready    master   14h   v1.18.3+6025c28
ip-10-0-189-6.us-east-2.compute.internal     Ready    worker   14h   v1.18.3+6025c28
ip-10-0-205-158.us-east-2.compute.internal   Ready    master   14h   v1.18.3+6025c28
ip-10-0-210-167.us-east-2.compute.internal   Ready    worker   14h   v1.18.3+6025c28

Verify that the node selector configuration in the MeteringConfig custom resource does not interfere with the cluster-wide node selector configuration such that no metering operand Pods are scheduled.
Procedure

Check the cluster-wide Scheduler object for the spec.defaultNodeSelector field, which shows where Pods are scheduled by default:

$ oc get schedulers.config.openshift.io cluster -o yaml

3.3. Configuring persistent storage

Metering requires persistent storage to persist data collected by the metering-operator and to store the results of reports. A number of different storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.

3.3.1. Storing data in Amazon S3

Metering can use an existing Amazon S3 bucket or create a bucket for storage.

Metering does not manage or delete any S3 bucket data. When uninstalling metering, any S3 buckets used to store metering data must be manually cleaned up.

To use Amazon S3 for storage, edit the spec.storage section in the example s3-storage.yaml file below.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3"
      s3:
        bucket: "bucketname/path/" 1
        region: "us-west-1" 2
        secretName: "my-aws-secret" 3
        # Set to false if you want to provide an existing bucket, instead of
        # having metering create the bucket on your behalf.
        createBucket: true 4

- 1 - Specify the name of the bucket where you would like to store your data. You may optionally specify the path within the bucket.
- 2 - Specify the region of your bucket.
- 3 - The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the examples that follow for more details.
- 4 - Set this field to false if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have CreateBucket permissions.

Use the example secret below as a template. The values of the aws-access-key-id and aws-secret-access-key must be base64 encoded.
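As a quick sanity check on the encoding (the placeholder values dGVzdAo= and c2VjcmV0Cg== used in the example secrets in this document are simply the strings test and secret, each with a trailing newline), you can base64-encode values on the command line:

```shell
# Encode secret values by hand. Note that echo appends a newline, which
# becomes part of the encoded value; use printf (or echo -n) to avoid that.
echo test | base64            # dGVzdAo=  (encodes "test\n")
printf '%s' test | base64     # dGVzdA==  (encodes "test" only)
echo secret | base64          # c2VjcmV0Cg==
```

Note that the oc create secret --from-literal form shown in this section performs this encoding for you, so plain (unencoded) values should be passed to it.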
apiVersion: v1
kind: Secret
metadata:
  name: your-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="

You can use the following command to create the secret. This command automatically base64 encodes your aws-access-key-id and aws-secret-access-key values.

oc create secret -n openshift-metering generic your-aws-secret --from-literal=aws-access-key-id=your-access-key --from-literal=aws-secret-access-key=your-secret-key

If you left spec.storage.hive.s3.createBucket set to true, or unset, then you should use the aws/read-write-create.json file below, which contains permissions for creating and deleting buckets.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:HeadBucket",
                "s3:ListBucket",
                "s3:CreateBucket",
                "s3:DeleteBucket",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::operator-metering-data/*",
                "arn:aws:s3:::operator-metering-data"
            ]
        }
    ]
}

3.3.2. Storing data in S3-compatible storage

To use S3-compatible storage such as Noobaa, edit the spec.storage section in the example s3-compatible-storage.yaml file below.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3Compatible"
      s3Compatible:
        bucket: "bucketname" 1
        endpoint: "" 2
        secretName: "my-aws-secret" 3

Use the example secret below as a template.

apiVersion: v1
kind: Secret
metadata:
  name: your-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="

3.3.3. Storing data in Microsoft Azure

To store data in Azure blob storage you must use an existing container. Edit the spec.storage section in the example azure-blob-storage.yaml file below.
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "azure"
      azure:
        container: "bucket1" 1
        secretName: "my-azure-secret" 2
        rootDirectory: "/testDir" 3

Use the example secret below as a template.

apiVersion: v1
kind: Secret
metadata:
  name: your-azure-secret
data:
  azure-storage-account-name: "dGVzdAo="
  azure-secret-access-key: "c2VjcmV0Cg=="

You can use the following command to create the secret.

oc create secret -n openshift-metering generic your-azure-secret --from-literal=azure-storage-account-name=your-storage-account-name --from-literal=azure-secret-access-key=your-secret-key

3.3.4. Storing data in Google Cloud Storage

To store your data in Google Cloud Storage you must use an existing bucket. Edit the spec.storage section in the example gcs-storage.yaml file below.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "gcs"
      gcs:
        bucket: "metering-gcs/test1" 1
        secretName: "my-gcs-secret" 2

Use the example secret below as a template:

apiVersion: v1
kind: Secret
metadata:
  name: your-gcs-secret
data:
  gcs-service-account.json: "c2VjcmV0Cg=="

You can use the following command to create the secret.

oc create secret -n openshift-metering generic your-gcs-secret --from-file gcs-service-account.json=/path/to/your/service-account-key.json

3.4. Configuring the Hive metastore

Hive metastore is responsible for storing all the metadata about the database tables created in Presto and Hive. By default, the metastore stores this information in a local embedded Derby database in a PersistentVolume attached to the pod.

Generally, the default configuration of the Hive metastore works for small clusters, but users may wish to improve performance or move storage requirements out of cluster by using a dedicated SQL database for storing the Hive metastore data.

3.4.1. Configuring PersistentVolumes

By default, Hive requires one PersistentVolume to operate.

hive-metastore-db-data is the main PersistentVolumeClaim (PVC) required by default. This PVC is used by the Hive metastore to store metadata about tables, such as table name, columns, and location. Hive metastore is used by Presto and the Hive server to look up table metadata when processing queries. You remove this requirement by using MySQL or PostgreSQL for the Hive metastore database.

To install, Hive metastore requires that dynamic volume provisioning be enabled via a StorageClass, a persistent volume of the correct size must be manually pre-created, or that you use a pre-existing MySQL or PostgreSQL database.

3.4.1.1. Configuring the storage class for Hive metastore

To configure and specify a StorageClass for the hive-metastore-db-data PVC, specify the StorageClass in your MeteringConfig. An example storage section is included in the metastore-storage.yaml file below.

spec:
  hive:
    spec:
      metastore:
        storage:
          create: true
          class: null 1
          size: "5Gi"

3.4.1.2. Configuring the volume sizes for the Hive Metastore

Use the metastore-storage.yaml file below as a template.

spec:
  hive:
    spec:
      metastore:
        storage:
          create: true
          class: null
          size: "5Gi" 1

3.4.2. Use MySQL or PostgreSQL for the Hive metastore

The default installation of metering configures Hive to use an embedded Java database called Derby. This is unsuited for larger environments and can be replaced with either a MySQL or PostgreSQL database. Use the following example configuration files if your deployment requires a MySQL or PostgreSQL database for Hive.

There are 4 configuration options you can use to control the database used by Hive metastore: url, driver, username, and password.

Use the example configuration file below to use a MySQL database for Hive:

spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:mysql://mysql.example.com:3306/hive_metastore"
          driver: "com.mysql.jdbc.Driver"
          username: "REPLACEME"
          password: "REPLACEME"

You can pass additional JDBC parameters using the spec.hive.config.url.
For more details see the MySQL Connector/J documentation.

Use the example configuration file below to use a PostgreSQL database for Hive:

spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "jdbc:postgresql://postgresql.example.com:5432/hive_metastore"
          driver: "org.postgresql.Driver"
          username: "REPLACEME"
          password: "REPLACEME"

You can pass additional JDBC parameters using the URL. For more details see the PostgreSQL JDBC driver documentation.

3.5. Configuring the reporting-operator

The reporting-operator is responsible for collecting data from Prometheus, storing the metrics in Presto, running report queries against Presto, and exposing their results via an HTTP API. Configuring the Operator is primarily done through your MeteringConfig file.

3.5.1. Prometheus connection

When you install metering on OpenShift Container Platform, Prometheus is available at its default in-cluster service address. To secure the connection to Prometheus, the default metering installation uses the OpenShift Container Platform certificate authority. If your Prometheus instance uses a different CA, the CA can be injected through a ConfigMap. See the following example.

spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          certificateAuthority:
            useServiceAccountCA: false
            configMap:
              enabled: true
              create: true
              name: reporting-operator-certificate-authority-config
              filename: "internal-ca.crt"
              value: |
                -----BEGIN CERTIFICATE-----
                (snip)
                -----END CERTIFICATE-----

Alternatively, to use the system certificate authorities for publicly valid certificates, set both useServiceAccountCA and configMap.enabled to false.

The reporting-operator can also be configured to use a specified bearer token to authenticate with Prometheus. See the following example.

spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          metricsImporter:
            auth:
              useServiceAccountToken: false
              tokenSecret:
                enabled: true
                create: true
                value: "abc-123"

3.5.2. Exposing the reporting API

On OpenShift Container Platform the default metering installation automatically exposes a Route, making the reporting API available. This provides the following features:

- Automatic DNS
- Automatic TLS based on the cluster CA

Also, the default installation makes it possible to use the OpenShift service for serving certificates to protect the reporting API with TLS. The OpenShift OAuth proxy is deployed as a side-car container for reporting-operator, which protects the reporting API with authentication.

3.5.2.1. Using OpenShift Authentication

By default, the reporting API is secured with TLS and authentication. This is done by configuring the reporting-operator to deploy a pod containing both the reporting-operator's container and a sidecar container running the OpenShift auth-proxy.

In order to access the reporting API, the metering operator exposes a route. Once that route has been installed, you can run the following command to get the route's hostname.

METERING_ROUTE_HOSTNAME=$(oc -n openshift-metering get routes metering -o json | jq -r '.status.ingress[].host')

Next, set up authentication using either a service account token or basic authentication with a username/password.

3.5.2.1.1. Authenticate using a service account token

With this method, you use the token in the reporting Operator's service account, and pass that bearer token to the Authorization header in the following command:

TOKEN=$(oc -n openshift-metering serviceaccounts get-token reporting-operator)
curl -H "Authorization: Bearer $TOKEN" -k "[Report Name]&namespace=openshift-metering&format=[Format]"

Be sure to replace the name=[Report Name] and format=[Format] parameters in the URL above. The format parameter can be json, csv, or tabular.

3.5.2.1.2. Authenticate using a username and password

You can do basic authentication using a username and password combination, which is specified in the contents of an htpasswd file.
By default, we create a secret containing empty htpasswd data. You can, however, configure the reporting-operator.spec.authProxy.htpasswd.data and reporting-operator.spec.authProxy.htpasswd.createSecret keys to use this method.

Once you have specified the above in your MeteringConfig, you can run the following command:

curl -u testuser:password123 -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]"

Be sure to replace testuser:password123 with a valid username and password combination.

3.5.2.2. Manually Configuring Authentication

In order to manually configure, or disable OAuth in the reporting-operator, you must set spec.tls.enabled: false in your MeteringConfig. This also disables all TLS/authentication between the reporting-operator, presto, and hive. You would need to manually configure these resources yourself.

Authentication can be enabled by configuring the following options. Enabling authentication configures the reporting-operator pod to run the OpenShift auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting-operator API isn't exposed directly, but instead is proxied via the auth-proxy sidecar container.

- reporting-operator.spec.authProxy.enabled
- reporting-operator.spec.authProxy.cookie.createSecret
- reporting-operator.spec.authProxy.cookie.seed

You need to set reporting-operator.spec.authProxy.enabled and reporting-operator.spec.authProxy.cookie.createSecret to true and reporting-operator.spec.authProxy.cookie.seed to a 32-character random string.

You can generate a 32-character random string using the following command.

$ openssl rand -base64 32 | head -c32; echo

3.5.2.2.1. Token authentication

When the following options are set to true, authentication using a bearer token is enabled for the reporting REST API. Bearer tokens can come from serviceAccounts or users.
- reporting-operator.spec.authProxy.subjectAccessReview.enabled - reporting-operator.spec.authProxy.delegateURLs.enabled When authentication is enabled, the Bearer token used to query the reporting API of the user or serviceAccount must be granted access using one of the following roles: - report-exporter - reporting-admin - reporting-viewer - metering-admin - metering-viewer The metering-operator is capable of creating RoleBindings for you, granting these permissions by specifying a list of subjects in the spec.permissions section. For an example, see the following advanced-auth.yaml example configuration. apiVersion: metering.openshift.io/v1 kind: MeteringConfig metadata: name: "operator-metering" spec: permissions: # anyone in the "metering-admins" group can create, update, delete, etc any # metering.openshift.io resources in the namespace. # This also grants permissions to get query report results from the reporting REST API. meteringAdmins: - kind: Group name: metering-admins # Same as above except read only access and for the metering-viewers group. meteringViewers: - kind: Group name: metering-viewers # the default serviceaccount in the namespace "my-custom-ns" can: # create, update, delete, etc reports. # This also gives permissions query the results from the reporting REST API. reportingAdmins: - kind: ServiceAccount name: default namespace: my-custom-ns # anyone in the group reporting-readers can get, list, watch reports, and # query report results from the reporting REST API. reportingViewers: - kind: Group name: reporting-readers # anyone in the group cluster-admins can query report results # from the reporting REST API. So can the user bob-from-accounting. reportExporters: - kind: Group name: cluster-admins - kind: User name: bob-from-accounting reporting-operator: spec: authProxy: # htpasswd.data can contain htpasswd file contents for allowing auth # using a static list of usernames and their password hashes. 
# # username is 'testuser' password is 'password123' # generated htpasswdData using: `htpasswd -nb -s testuser password123` # htpasswd: # data: | # testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc= # # change REPLACEME to the output of your htpasswd command htpasswd: data: | REPLACEME Alternatively, you can use any role which has rules granting get permissions to reports/export. This means get access to the export sub-resource of the Report resources in the namespace of the reporting-operator. For example: admin and cluster-admin. By default, the reporting-operator and metering-operator serviceAccounts both have these permissions, and their tokens can be used for authentication. 3.5.2.2.2. Basic authentication (username/password) For basic authentication you can supply a username and password in reporting-operator.spec.authProxy.htpasswd.data. The username and password must be the same format as those found in an htpasswd file. When set, you can use HTTP basic authentication to provide your username and password that has a corresponding entry in the htpasswdData contents. 3.6. Configure AWS billing correlation Metering can correlate cluster usage information with AWS detailed billing information, attaching a dollar amount to resource usage. For clusters running in EC2, you can enable this by modifying the example aws-billing.yaml file below. apiVersion: metering.openshift.io/v1 kind: MeteringConfig metadata: name: "operator-metering" spec: openshift-reporting: spec: awsBillingReportDataSource: enabled: true # Replace these with where your AWS billing reports are # stored in S3. bucket: "<your-aws-cost-report-bucket>" 1 prefix: "<path/to/report>" region: "<your-buckets-region>" reporting-operator: spec: config: aws: secretName: "<your-aws-secret>" 2 presto: spec: config: aws: secretName: "<your-aws-secret>" 3 hive: spec: config: aws: secretName: "<your-aws-secret>" 4 To enable AWS billing correlation, first ensure the AWS Cost and Usage Reports are enabled. 
For more information, see Turning on the AWS Cost and Usage Report in the AWS documentation. - 1 - Update the bucket, prefix, and region to the location of your AWS Detailed billing report. - 2 3 4 - All secretNamefields should be set to the name of a secret in the metering namespace containing AWS credentials in the data.aws-access-key-idand data.aws-secret-access-keyfields. See the example secret file below for more details. apiVersion: v1 kind: Secret metadata: name: <your-aws-secret> data: aws-access-key-id: "dGVzdAo=" aws-secret-access-key: "c2VjcmV0Cg==" To store data in S3,/*", 1 "arn:aws:s3:::operator-metering-data" 2 ] } ] } { /*", 3 "arn:aws:s3:::operator-metering-data" 4 ] } ] } This can be done either pre-installation or post-installation. Disabling it post-installation can cause errors in the reporting-operator. Chapter 4. Reports 4.1. About Reports. 4.1.1. Reports. 4.1.1.1. Example Report with a Schedule The following example Report will contain information on every Pod’s CPU requests, and will run 4.1.1.2. Example Report without a Schedule (Run-Once) The following example Report will contain" 4.1.1.3. query: 4.1.1.4. schedule ... 4.1.1.4.1. periodis an integer value between 0-23. minuteis an integer value between 0-59. secondis an integer value between 0-59. dayOfWeekis a string value that expects the day of the week (spelled out). dayOfMonthis an integer value between 1-31. For cron periods, normal cron expressions are valid: expression: "*/5 * * * *" 4.1.1.5. reportingStart To support running a Report against existing data, you can set the spec.reportingStart field to a RFC3339 timestamp to tell the Report to run according to its schedule starting from reportingStart rather than the current create a report with the following values: apiVersion: metering.openshift.io/v1 kind: Report metadata: name: pod-cpu-request-hourly spec: query: "pod-cpu-request" schedule: period: "hourly" reportingStart: "2019-01-01T00:00:00Z" 4.1.1.6. 
reportingEnd

If a Report has a reportingEnd, the last period in the schedule will be shortened to end at the specified reportingEnd time. If left unset, then the Report will run forever, or until a reportingEnd is set on the Report. For example, if you wanted…

4.1.1.7. runImmediately

When runImmediately is set to true, the report will be run immediately. This behavior ensures that the report is immediately processed and queued without requiring additional scheduling parameters. When runImmediately is set to true, you must set a reportingEnd and reportingStart value.

4.1.1.8. inputs

4.1.1.9. Roll-up Reports

4.1.1.9.1. Report Status

The execution of a scheduled report can be tracked using its status field. Any errors occurring during the preparation of a report will be recorded here.

The status field of a Report currently has two fields:

- conditions: Conditions is a list of conditions, each of which have a type, status, reason, and message field. Possible values of a condition's type field are Running and Failure, indicating the current state of the scheduled report. The reason indicates why its condition is in its current state, with the status being either true, false, or unknown. The message provides a human-readable explanation of why the condition is in the current state. For detailed information on the reason values see pkg/apis/metering/v1/util/report_util.go.
- lastReportTime: Indicates the time Metering has collected data up to.

4.2. Storage Locations

A StorageLocation is a custom resource that configures where data will be stored by the reporting-operator. This includes the data collected from Prometheus, and the results produced by generating a Report custom resource.

You only need to configure a StorageLocation if you want to store data in multiple locations, like multiple S3 buckets or both S3 and HDFS, or if you wish to access a database in Hive/Presto that was not created by metering.
For most users this is not a requirement, and the documentation on configuring metering is sufficient to configure all necessary storage components.

4.2.1. StorageLocation examples

This first example is what the built-in local storage option looks like. It is configured to use Hive, and by default data is stored wherever Hive is configured to use storage (HDFS, S3, or a ReadWriteMany PVC).

Local storage example

apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: hive
  labels:
    operator-metering: "true"
spec:
  hive: 1
    databaseName: metering 2
    unmanagedDatabase: false 3

- 1 - If the hive section is present, then the StorageLocation will be configured to store data in Presto by creating the table using Hive server. Only databaseName and unmanagedDatabase are required fields.
- 2 - The name of the database within hive.
- 3 - If true, then this StorageLocation will not be actively managed, and the databaseName is expected to already exist in Hive. If false, this will cause the reporting-operator to create the database in Hive.

The next example uses an AWS S3 bucket for storage. The prefix is appended to the bucket name when constructing the path to use.

Remote storage example

apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-s3-storage
  labels:
    operator-metering: "true"
spec:
  hive:
    databaseName: example_s3_storage
    unmanagedDatabase: false
    location: "s3a://bucket-name/path/within/bucket" 1

There are some additional optional fields that can be specified in the hive section:

- (optional) defaultTableProperties: Contains configuration options for creating tables using Hive.
- (optional) fileFormat: The file format used for storing files in the filesystem. See the Hive Documentation on File Storage Format for a list of options and more details.
- (optional) rowFormat: Controls the Hive row format. This controls how Hive serializes and deserializes rows. See the Hive Documentation on Row Formats and SerDe for more details.
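The location value used in the remote storage example above is simply the bucket name and the prefix joined under the s3a:// scheme, which can be assembled like this (bucket and prefix values are the illustrative ones from the example):

```shell
# Assemble an s3a location string from a bucket name and a key prefix
bucket="bucket-name"
prefix="path/within/bucket"
location="s3a://${bucket}/${prefix}"
echo "$location"
# → s3a://bucket-name/path/within/bucket
```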
4.2.2. Default StorageLocation

If an annotation storagelocation.metering.openshift.io/is-default exists and is set to true on a StorageLocation resource, then that resource becomes the default storage resource. Any components with a storage configuration option where StorageLocation is not specified will use the default storage resource. There can only be one default storage resource. If more than one resource with the annotation exists, an error will be logged and the operator will consider there to be no default.

apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-s3-storage
  labels:
    operator-metering: "true"
  annotations:
    storagelocation.metering.openshift.io/is-default: "true"
spec:
  hive:
    databaseName: example_s3_storage
    unmanagedDatabase: false
    location: "s3a://bucket-name/path/within/bucket"

Chapter 5. Using Metering

Prerequisites

- Install Metering
- Review the details about the available options that can be configured for a Report and how they function.

5.1. Writing Reports

Writing a Report is the way to process and analyze data using Metering.

To write a Report, you must define a Report resource in a YAML file, specify the required parameters, and create it in the openshift-metering namespace by using oc.

Prerequisites

- Metering is installed.

Procedure

Change to the openshift-metering project:

$ oc project openshift-metering

Create a Report resource as a YAML file:

Create a YAML file with the following content:

apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request-2019 1
  namespace: openshift-metering
spec:
  reportingStart: '2019-01-01T00:00:00Z'
  reportingEnd: '2019-12-30T23:59:59Z'
  query: namespace-cpu-request 2
  runImmediately: true 3

- 2 - The query specifies the ReportQuery used to generate the Report. Change this based on what you want to report on. For a list of options, run oc get reportqueries | grep -v raw.
- 1 - Use a descriptive name about what the Report does for metadata.name.
A good name is the query, and the schedule or period you used.

- 3 - Set runImmediately to true for it to run with whatever data is available, or set it to false if you want it to wait for reportingEnd to pass.

Run the following command to create the Report:

$ oc create -f <file-name>.yaml
report.metering.openshift.io/namespace-cpu-request-2019 created

You can list Reports and their Running status with the following command:

$ oc get reports
NAME                         QUERY                   SCHEDULE   RUNNING    FAILED   LAST REPORT TIME       AGE
namespace-cpu-request-2019   namespace-cpu-request              Finished            2019-12-30T23:59:59Z   26s

5.2. Viewing Report results

Viewing a Report's results involves querying the reporting-api Route and authenticating to the API using your OpenShift Container Platform credentials. Reports can be retrieved as JSON, CSV, or Tabular formats.

Prerequisites

- Metering is installed.
- To access Report results, you must either be a cluster administrator, or you need to be granted access using the report-exporter role in the openshift-metering namespace.
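The procedure that follows assembles the request URL from the route hostname, the report name, and the output format. As a local sketch of just the URL construction (the hostname below is hypothetical; in a real cluster it comes from oc get routes metering):

```shell
# Hypothetical route host; on a cluster this would be:
#   meteringRoute="$(oc get routes metering -o jsonpath='{.spec.host}')"
meteringRoute="metering-openshift-metering.apps.example.com"
reportName="namespace-cpu-request-2019"
reportFormat="csv"
url="https://${meteringRoute}/api/v1/reports/get?name=${reportName}&namespace=openshift-metering&format=${reportFormat}"
echo "$url"
```

The same URL is what curl is pointed at in the procedure below, with a Bearer token added in the Authorization header.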
Procedure

Change to the openshift-metering project:

$ oc project openshift-metering

Query the reporting API for results:

Get the route to the reporting-api:

$ meteringRoute="$(oc get routes metering -o jsonpath='{.spec.host}')"
$ echo "$meteringRoute"

Get the token of your current user to be used in the request:

$ token="$(oc whoami -t)"

To get the results, use curl to make a request to the reporting API for your report:

$ reportName=namespace-cpu-request-2019 1
$ reportFormat=csv 2
$ curl --insecure -H "Authorization: Bearer ${token}" "https://${meteringRoute}/api/v1/reports/get?name=${reportName}&namespace=openshift-metering&format=$reportFormat"

The response should look similar to the following (example output is with reportName=namespace-cpu-request-2019 and reportFormat=csv):

period_start,period_end,namespace,pod_request_cpu_core_seconds
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-apiserver,11745.000000
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-apiserver-operator,261.000000
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-authentication,522.000000
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-authentication-operator,261.000000
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cloud-credential-operator,261.000000
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-machine-approver,261.000000
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-node-tuning-operator,3385.800000
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-samples-operator,261.000000
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-cluster-version,522.000000
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-console,522.000000
2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-console-operator,261.000000
2019-01-01 00:00:00 +0000 UTC,2019-12-30
23:59:59 +0000 UTC,openshift-controller-manager,7830.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-controller-manager-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-dns,34372.800000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-dns-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-etcd,23490.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-image-registry,5993.400000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-ingress,5220.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-ingress-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver,12528.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager,8613.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-machine-api,1305.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-machine-config-operator,9637.800000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-metering,19575.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-monitoring,6256.800000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-network-operator,261.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-sdn,94503.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-service-ca,783.000000 2019-01-01 00:00:00 +0000 UTC,2019-12-30 23:59:59 +0000 UTC,openshift-service-ca-operator,261.000000 Chapter 6. 
Examples of using metering

Use the following example Reports to get started measuring capacity, usage, and utilization in your cluster. These examples showcase the various types of reports metering offers, along with a selection of the predefined queries.

Prerequisites

- Install Metering
- Review the details about writing and viewing reports.

6.1. Measure cluster capacity hourly and daily

The following Report demonstrates how to measure cluster capacity both hourly and daily. The daily Report works by aggregating the hourly Report's results.

The following report measures cluster CPU capacity every hour.

Hourly CPU capacity by cluster example

apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: cluster-cpu-capacity-hourly
spec:
  query: "cluster-cpu-capacity"
  schedule:
    period: "hourly" 1

The following report aggregates the hourly data into a daily report.

Daily CPU capacity by cluster example

apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: cluster-cpu-capacity-daily 1
spec:
  query: "cluster-cpu-capacity" 2
  inputs: 3
  - name: ClusterCpuCapacityReportName
    value: cluster-cpu-capacity-hourly
  schedule:
    period: "daily"

- 1 - To stay organized, remember to change the name of your Report if you change any of the other values.
- 2 - You can also measure cluster-memory-capacity. Remember to update the query in the associated hourly report as well.
- 3 - The inputs section configures this report to aggregate the hourly report. Specifically, value: cluster-cpu-capacity-hourly is the name of the hourly report that gets aggregated.

6.2. Measure cluster usage with a one-time Report

The following Report measures cluster usage from a specific starting date forward. The Report only runs once, after you save it and apply it.
CPU usage by cluster example

apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: cluster-cpu-usage-2019 1
spec:
  reportingStart: '2019-01-01T00:00:00Z' 2
  reportingEnd: '2019-12-30T23:59:59Z'
  query: cluster-cpu-usage 3
  runImmediately: true 4

- 1 - To stay organized, remember to change the name of your Report if you change any of the other values.
- 2 - Configures the Report to start using data from the reportingStart timestamp until the reportingEnd timestamp.
- 3 - Adjust your query here. You can also measure cluster usage with the cluster-memory-usage query.
- 4 - This tells the Report to run immediately after saving it and applying it.

6.3. Measure cluster utilization using cron expressions

You can also use cron expressions when configuring the period of your reports. The following report measures cluster utilization by looking at CPU utilization from 9am-5pm every weekday.

Weekday CPU utilization by cluster example

apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: cluster-cpu-utilization-weekdays 1
spec:
  query: "cluster-cpu-utilization" 2
  schedule:
    period: "cron"
    expression: 0 0 * * 1-5 3

Chapter 7. Troubleshooting and debugging metering

Use the following sections to help troubleshoot and debug specific issues with metering. In addition to the information in this section, be sure to review the following topics:

7.1. Troubleshooting metering

A common issue with metering is Pods failing to start. Pods might fail to start due to lack of resources or if they have a dependency on a resource that does not exist, such as a StorageClass or Secret.

7.1.1. Not enough compute resources

A common issue when installing or running metering is lack of compute resources. Ensure that metering is allocated the minimum resource requirements described in the installation prerequisites.
To determine if the issue is with resources or scheduling, follow the troubleshooting instructions included in the Kubernetes document Managing Compute Resources for Containers.

7.1.2. StorageClass not configured

Metering requires that a default StorageClass be configured for dynamic provisioning.

See the documentation on configuring metering for information on how to check if there are any StorageClasses configured for the cluster, how to set the default, and how to configure metering to use a StorageClass other than the default.

7.1.3. Secret not configured correctly

A common issue with metering is providing the incorrect secret when configuring your persistent storage. Be sure to review the example configuration files and create your secret according to the guidelines for your storage provider.

7.2. Debugging metering

Debugging metering is much easier when you interact directly with the various components. The sections below detail how you can connect and query Presto and Hive as well as view the dashboards of the Presto and HDFS components.

All of the commands in this section assume you have installed metering through OperatorHub in the openshift-metering namespace.

7.2.1. Get reporting operator logs

The command below will follow the logs of the reporting-operator.

$ oc -n openshift-metering logs -f "$(oc -n openshift-metering get pods -l app=reporting-operator -o name | cut -c 5-)" -c reporting-operator

7.2.2. Query Presto using presto-cli

The following command opens an interactive presto-cli session where you can query Presto. This session runs in the same container as Presto and launches an additional Java instance, which can create memory limits for the Pod. If this occurs, you should increase the memory request and limits of the Presto Pod. By default, Presto is configured to communicate using TLS.
You must run the following command to run Presto queries:

$ oc -n openshift-metering exec -it "$(oc -n openshift-metering get pods -l app=presto,presto=coordinator -o name | cut -d/ -f2)" -- /usr/local/bin/presto-cli --server --catalog hive --schema default --user root --keystore-path /opt/presto/tls/keystore.pem

Once you run this command, a prompt appears where you can run queries. Use the show tables from metering; query to view the list of tables:

$ presto:default> show tables from metering;
 Table
 …
Query 20190503_175727_00107_3venm, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:02 [32 rows, 2.23KB] [19 rows/s, 1.37KB/s]
presto:default>

7.2.3. Query Hive using beeline

The following command opens an interactive beeline session where you can query Hive. This session runs in the same container as Hive and launches an additional Java instance, which can create memory limits for the Pod. If this occurs, you should increase the memory request and limits of the Hive Pod.

$ oc -n openshift-metering exec -it $(oc -n openshift-metering get pods -l app=hive,hive=server -o name | cut -d/ -f2) -c hiveserver2 -- beeline -u 'jdbc:hive2://127.0.0.1:10000/default;auth=noSasl'

Once you run this command, a prompt appears where you can run queries. Use the show tables; query to view the list of tables:

$ 0: jdbc:hive2://127.0.0.1:10000/default> show tables from metering;
+----------------------------------------------------+
|                      tab_name                      |
+----------------------------------------------------+
…
… rows selected (13.101 seconds)
0: jdbc:hive2://127.0.0.1:10000/default>

7.2.4. Port-forward to the Hive web UI

Run the following command:

$ oc -n openshift-metering port-forward hive-server-0 10002

You can now open http://127.0.0.1:10002 in your browser window to view the Hive web interface.

7.2.5. Port-forward to hdfs

To the namenode:

$ oc -n openshift-metering port-forward hdfs-namenode-0 9870

You can now open http://127.0.0.1:9870 in your browser window to view the HDFS web interface.
To the first datanode: $ oc -n openshift-metering port-forward hdfs-datanode-0 9864 To check other datanodes, run the above command, replacing hdfs-datanode-0 with the Pod you want to view information on. 7.2.6. Metering Ansible Operator Metering uses the Ansible Operator to watch and reconcile resources in a cluster environment. When debugging a failed metering installation, it can be helpful to view the Ansible logs or status of your MeteringConfig custom resource. 7.2.6.1. Accessing ansible logs In the default installation, the metering Operator is deployed as a Pod. In this case, we can check the logs of the ansible container within this Pod: $ oc -n openshift-metering logs $(oc -n openshift-metering get pods -l app=metering-operator -o name | cut -d/ -f2) -c ansible Alternatively, you can view the logs of the Operator container (replace -c ansible with -c operator) for condensed output. 7.2.6.2. Checking the MeteringConfig Status It can be helpful to view the .status field of your MeteringConfig custom resource to debug any recent failures. You can do this with the following command: $ oc -n openshift-metering get meteringconfig operator-metering -o json | jq '.status' Chapter 8. Uninstalling Metering You can remove metering from your OpenShift Container Platform cluster. 8.1. Uninstalling metering from OpenShift Container Platform You can remove metering from your cluster. Prerequisites - Metering must be installed. Procedure To remove metering: - In the OpenShift Container Platform web console, click Operators → Installed Operators. - Find the Metering Operator and click the menu. Select Uninstall Operator. - In the dialog box, click Remove to uninstall metering. Metering does not manage or delete any S3 bucket data. If you used Amazon S3 for storage, any S3 buckets used to store metering data must be manually cleaned up.
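Since metering does not delete S3 data on uninstall, leftover report data has to be cleaned up out of band. As a hedged sketch (the bucket name is the illustrative one from the policy examples earlier; the AWS CLI command is only printed here rather than executed, and requires valid AWS credentials to run for real):

```shell
# Illustrative bucket name from the earlier S3 policy examples
bucket="operator-metering-data"
# Print the cleanup command instead of running it; remove the echo to execute
echo "aws s3 rm s3://${bucket} --recursive"
```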
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/html-single/metering/index
Rhino objects in Python This guide provides an overview of RhinoScriptSyntax Object Geometry in Python. Objects Rhino can create and manipulate a number of objects, including points, point clouds, curves, surfaces, B-reps, meshes, lights, annotations, and references. Each object in the Rhino document is identified by a globally unique identifier, or GUID, that is generated and assigned to objects when they are created. Object identifiers are saved in the 3dm file, so an object’s identifier will be the same between editing sessions. To view an object’s unique identifier, use Rhino’s Properties command. For convenience, RhinoScriptSyntax returns object identifiers in the form of a string. For example, an object’s identifier will look something like the following: F6E01514-3264-4598-8A07-A58BFE739C38 The majority of RhinoScriptSyntax’s object manipulation methods require one or more object identifiers to be acquired before the method can be executed. Geometry and Attributes Rhino objects consist of two components: the object’s geometry and the object’s attributes. The types of geometry support by Rhino include points, point clouds, curves, surfaces, polysurfaces, extrusions, meshes, annotations and dimensions. The attributes of an object include such properties as color, layer, linetype, render material, and group membership, amongst others. Example The following example uses Object IDs to create reference geometry: import rhinoscriptsyntax as rs startPoint = [1.0, 2.0, 0.0] endPoint = [4.0, 5.0, 0.0] line1 = [startPoint, endPoint] line1ID = rs.AddLine(line1[0],line1[1]) # Adds a line to the Rhino Document and returns an ObjectID startPoint2 = [1.0, 4.0, 0.0] endPoint2 = [4.0, 2.0, 0.0] line2 = [startPoint2, endPoint2] line2ID = rs.AddLine(line2[0],line2[1]) # Returns another ObjectID int1 = rs.LineLineIntersection(line1ID,line2ID) # passing the ObjectIDs to the function.
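Because Rhino identifiers are plain GUID strings, Python's standard uuid module can parse, normalize, and compare them without rhinoscriptsyntax (the GUID below is the example identifier from the text above):

```python
import uuid

# Example identifier string from the text above
obj_id = "F6E01514-3264-4598-8A07-A58BFE739C38"

# uuid.UUID parses the string and normalizes it, so identifiers obtained
# from different sources (upper/lower case) can be compared reliably
parsed = uuid.UUID(obj_id)
print(parsed)                                # normalized, lowercase form
print(parsed == uuid.UUID(obj_id.lower()))   # comparisons ignore case
```

This is handy when matching object identifiers returned by rhinoscriptsyntax against identifiers stored elsewhere, such as in a text file or database.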
https://developer.rhino3d.com/wip/guides/rhinopython/python-rhinoscriptsyntax-objects/
. Figure 1.2. Project permissions -: Figure 2.1. Add view - From Git: Use this option to import an existing codebase in a Git repository to create, build, and deploy an application on OpenShift Container Platform. - Container Image: Use existing images from an image stream or registry to deploy it on to OpenShift Container Platform. -. Serverless options in the Developer perspective are displayed only if the OpenShift Serverless Operator is installed in your cluster.. Figure 2.2. Import from Git -. -: - Routingis used. - Build and Deployment Configuration -. - Scaling. -.4 cluster. - The etcd Operator already installed cluster-wide by an administrator. Procedure - ClusterServiceVersions (CSVs). CSVs are used to launch and manage the software provided by the Operator.Tip You can get this list from the CLI using: $ oc get csv On the Installed Operators page, click Copied, and then click the etcd Operator to view more details and available actions: Figure 2.3. etcd Operator overview As shown under Provided APIs, this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdClusterresource). These objects work similar to the built-in native Kubernetes ones, such as Deploymentsor ReplicaSets, but contain logic specific to managing etcd. Create a new etcd cluster: - In the etcd Cluster API box, click Create New. - The next screen allows you to make any modifications to the minimal starting template of an EtcdClusterobject,. Figure 2.4. etcd Operator resources Verify that a Kubernetes service has been created that allows you to access the database from other Pods in your project. All users with the editrole. Container Platform automatically detects whether the a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your BuildConfig to access the repository. 2.3.1.3. Build strategy detection in the OpenShift Container Platform. 
Container Platform, use the -f|--file argument. For example: $ oc new-app -f examples/sample-app/application-template-stibuild.json 2.3.3.1. Template Parameters=- 2.3.4. Modifying application creation. Table 2.2. new-app output objects 2.3.4.1. Specifying environment variables=- Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument. 2.3.4.2. Specifying build environment variables=- 2.3.3.4.4. Viewing the output without creation Container Platform objects. To output new-app artifacts to a file, edit them, then create them: $ oc new-app \ -o yaml > myapp.yaml $ vi myapp.yaml $ oc create -f myapp.yaml 2.3.3.4.6. Creating objects in a different project Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument: $ oc new-app -n myproject 2.3.4.7. Creating multiple objects If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator. 2.3.4.8. Grouping images and source in a single Pod 2. 2.4. Figure 2.5. Application topology The application resource name is appended with indicators for the different types of resource objects as follows: - DC: DeploymentConfigs - D: Deployment - SS: StatefulSet - DS: Daemonset 2.4.2..3. Scaling application pods and checking builds and routes. 2.4.4. Grouping multiple components within an application. Prerequisites - Ensure that you have created and deployed a Node.js application on OpenShift Container Platform using the Developer perspective. Procedureadded to the Labels section in the Overview Panel. Figure 2.6. Application grouping. 2.4.5. Connecting components within an application and across applications. 2.4 2.7. 
Connector -and Value = nodejs-exannotation added to the service. Similarly you can create other applications and components and establish connections between them. Figure 2.8. Connecting multiple applications 2.4.5.2. Creating a binding connection between components Service Binding. Currently, a few specific Operators like the etcd and the PostgresSQL Database Operator’s service instances are bind.is created and the Service Binding Operator controller injects the DB connection information into the application Deploymentas environment variables. After the request is successful, the application is redeployed and the connection is established. Figure 2.9. Binding connector 2.4. Editing applications You can edit the configuration and the source code of the application you create using the Topology view.. 2.5. 2.5.2. 2.10. 2.11. Edit and redeploy application 2.6. Working with Helm charts using the Developer perspective 2.6.1. Understanding Helm Helm. 2.6.1.1. Key features create Helm releases. 2.6.2. Installing Helm charts You can use the Developer perspective to create Helm releases from the Helm charts provided in the Developer Catalog. Prerequisites - You have logged in to the web console and have switched to the Developer perspective. Procedure - In the Developer perspective, navigate to the Add view and select the From Catalog option to see the Developer Catalog. In the filters listed on the left, under Type, select the Helm Charts filter to see the available Helm charts. Figure 2.12. Helm charts in developer catalog -. 2.7. Deleting applications You can delete applications created in your project. 2.7.1. Deleting applications using the Developer perspective. Chapter 3. Service brokers 3.1. Installing Service Catalog Service Catalog is deprecated in OpenShift Container Platform 4. Equivalent and better functionality is present in the Operator Framework and Operator Lifecycle Manager (OLM).. 3.1.1. About Service Catalog. 
Service Catalog is not installed by default in OpenShift Container Platform 4. 3.1.2. Installing Service Catalog. Procedure Enable Service Catalog’s API server. Use the following command to edit Service Catalog’s API server resource. $ oc edit servicecatalogapiservers Under spec, set the managementState field to Managed: spec: logLevel: Normal managementState: Managed Save the file to apply the changes. The Operator installs Service Catalog’s API server component. As of OpenShift Container Platform 4, this component is installed into the openshift-service-catalog-apiserver namespace. Enable Service Catalog’s controller manager. Use the following command to edit Service Catalog’s controller manager resource. $ oc edit servicecatalogcontrollermanagers Under spec, set the managementState field to Managed: spec: logLevel: Normal managementState: Managed Save the file to apply the changes. The Operator installs Service Catalog’s controller manager component. As of OpenShift Container Platform 4, this component is installed into the openshift-service-catalog-controller-manager namespace. - Verify that the installation is completed successfully by checking that Service Catalog appears in the left navigation of the web console.
Procedure Using an account with cluster-admin privileges, edit the servicecatalogcontrollermanagers resource: $ oc edit servicecatalogcontrollermanagers - Change the managementState parameter from Managed back to the default Removed, and save your changes. Edit the servicecatalogapiservers resource: $ oc edit servicecatalogapiservers - Change the managementState parameter from Managed back to the default Removed, and save your changes. - Verify that the removal is completed successfully by checking that Service Catalog is removed from the left navigation of the web console. 3.3. Installing the Template Service Broker You can install the Template Service Broker to gain. 3.3.1. About the Template Service Broker The Template Service Broker gives. 3.3.2. Installing the Template Service Broker Operator Prerequisites - Service Catalog is installed. Procedure The following procedure installs the Template Service Broker Operator using the web console. Create a namespace. - Using the Administrator perspective, navigate in the web console to Administration → Namespaces and click Create Namespace. Enter the following: openshift-template-service-broker in the Name field. Note The namespace must start with openshift-. openshift.io/cluster-monitoring=true in the Labels field - Click Create. 3.3.3. Starting the Template Service Broker After you have installed the Template Service Broker Operator, you can start the Template Service Broker using the following procedure. Prerequisites - Service Catalog is installed. - The Template Service Broker Operator is installed. Procedure - Using the Administrator perspective, navigate in the web console to Operators → Installed Operators and select the openshift-template-service-broker project. - Select the Template Service Broker Operator. - Under Provided APIs, click Create Instance for Template Service Broker. - Review the default YAML and click Create.
Verify that the Template Service Broker starts correctly by checking that the template applications are available. Note - To check from the web console, navigate to Service Catalog → Broker Management → Service Classes to view the list of template application service classes. To check from the CLI: $ oc get ClusterServiceClasses -n openshift-template-service-broker Service. 3.4. Uninstalling the Template Service Broker You can uninstall the Template Service Broker if you no longer require. Chapter 4. Deployments 4.1. Understanding Deployments and DeploymentConfigs Deployments and DeploymentConfigs in OpenShift Container Platform. A deployment strategy is a way to change or upgrade an application. The aim is to make the change without downtime in a way that the user barely notices the improvements. Because the end user usually accesses the application through a route handled by a router, the deployment strategy can focus on DeploymentConfig features or routing features. Strategies that focus on the DeploymentConfig impact all routes that use the application. Strategies that use router features target individual routes. Many deployment strategies are supported through the DeploymentConfig, and some additional strategies are supported through router features. DeploymentConfig strategies are discussed in this section. Choosing a deployment strategy Consider the following when choosing a deployment strategy: - Long-running connections must be handled gracefully. - Database conversions can be complex and must be done and rolled back along with the application. - If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition. - You must have the infrastructure to do this. - If you have a non-isolated test environment, you can break both new and old versions. A deployment strategy uses readiness checks to determine if a new Pod is ready for use. If a readiness check fails, the. 
A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted. When to use a Rolling deployment: -. have to take into account resource quota and can accept partial unavailability, use maxUnavailable. will be automatically rolled back. The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a Custom deployment or using a blue-green deployment strategy. 4.3.1.2. Creating a Rolling deployment Rolling deployments are the default type in OpenShift Container Platform. You can create a Rolling deployment using the CLI. Procedureimage. Scale the DeploymentConfig. When using the CLI, the following command shows how many Pods are on version 1 and how many are on version 2. In the web console, the Pods are progressively added to v2 and removed from v1: $ oc describe dc deployment-example During the deployment process, the new ReplicationController To start a rolling deployment to upgrade an application: - In the Topology view of the Developer perspective, click on the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy. In the Actions drop-down menu, select Start Rollout to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one. Figure 4.1. Rolling update. When to use a Recreate deployment: -.and click Save. - In the Topology view, select the node to see the Overview tab in the side panel. The Update Strategy is now set to Recreate. Use the Actions drop-down menu to select Start Rollout to start an update using the Recreate strategy. 
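The Rolling and Recreate strategies described above are configured on the DeploymentConfig itself. The following is only a sketch of the relevant fields — the field names follow the DeploymentConfig API, and the replica count, percentages, and timeout here are illustrative, not values from this document:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: deployment-example
spec:
  replicas: 3
  strategy:
    type: Rolling          # use "Recreate" for the Recreate strategy
    rollingParams:
      maxSurge: 25%        # extra pods allowed above the replica count during a rollout
      maxUnavailable: 25%  # pods that may be unavailable during a rollout
      timeoutSeconds: 600
```

As the text notes, prefer maxSurge when you cannot accept any unavailability, and maxUnavailable when you must stay within a resource quota.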
The Recreate strategy first terminates Pods for the older version of the application and then spins up Pods for the new version. Figure 4.2. Recreate update 4.3.4. Custom strategy The Custom strategy allows you to provide your own deployment behavior. Example Custom strategy definition Container Platform provides the following environment variables to the deployment process: The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user. Container Platform API or the Kubernetes API the container that executes the strategy can use the service account token available inside the container for authentication.. 4.4.1. Proxy shards and traffic splitting. 4.4.2. N-1 compatibility. 4.4.3. Graceful termination. 4.4.4. Blue-green deployments..5.1. Load balancing for A/B testing. Procedure to adjust the number of Pods to handle the anticipated loads. ...flag. - Ephemeral storage requests and limits apply only if you enabled the ephemeral storage technology preview. This feature is disabled by default. 5.1.3. Quota enforcement. 5.1.4. Requests versus limits. can exist in the project. openshift-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: "10" 1 compute-resources.yaml -. besteffort.yaml apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: "1" 1 scopes: - BestEffort 2 compute-resources-long-running.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: "4" 1 limits.cpu: "4" 2 limits.memory: "2Gi" 3 limits.ephemeral-storage: "4Gi" 4 scopes: - NotTerminatingonds >=0. For example, this quota would charge for build or deployer pods, but not long running pods like a web server or database. storage-consumption.yaml is set to 0, it means bronze storage class cannot request storage. 
- 7 - Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0, it means bronze storage class cannot create claims. -fcounter is correct: # oc describe quota gpu-quota -n nvidia Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1 Attempt to create a second GPU pod in the nvidianamespace.. 5.1.7. Viewing a quota You can view usage statistics related to any hard limits defined in a project’s quota by navigating in the web console to the project’s Quota page. You can also use the CLI to view quota details. Procedure Command-line Interface (CLI), commonly known as.: v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: "10" secrets: "20" selector: annotations: null labels: matchLabels: name: frontend project and application metrics using the Developer perspective The Monitoring view in the Developer perspective provides options to monitor your project or application metrics, such as CPU, memory, and bandwidth usage, and network related information. Prerequisites - You have logged in to the web console and have switched to the Developer perspective. - You have created and deployed applications on OpenShift Container Platform. 6.1. Monitoring your project metrics After you create applications in your project and deploy them, you can use the Developer perspective in the web console to see the metrics for your project. Procedure. Figure 6.1. Monitoring dashboard Use the following options to see further details: -. Figure. Figure 6. 6.2. Monitoring your application metrics After you create applications in your project and deploy them, you can use the Topology view in the Developer perspective to see the metrics for your application. Procedure - In the Topology view, click the application to see the application details in the right panel. 
Click the Monitoring tab to see the warning events for the application; graphs for CPU, memory, and bandwidth usage; and all the events for the application. Figure 6.4. Monitoring application metrics - Click any of the charts to go to the Metrics tab to see the detailed metrics for the application. - Click View monitoring dashboard to see the monitoring dashboard for that application. Chapter 7. Monitoring application health In software systems, components can become unhealthy due to transient issues such as temporary connectivity loss, configuration errors, or problems with external dependencies. OpenShift Container Platform applications have a number of options to detect and handle unhealthy containers. ..... 8. Idling applicationsConfigs. The action of idling an application involves idling all associated resources. 8.1. Idling applications. 8.1.1. Idling a single service Procedure To idle a single service, run: $ oc idle <service> 8. 8.. 9.2. Pruning groups To prune groups records from an external provider, administrators can run the following command: $ oc adm prune groups \ --sync-config=path/to/sync/config [<options>] Table 9.3. Pruning deployments In order to prune deployments that are no longer required by the system due to age and status, administrators can run the following command: $ oc adm prune deployments [<options>] Table 9.2. Prune deployments CLI configuration 9.4. Pruning builds In order to prune builds that are no longer required by the system due to age and status, administrators can run the following command: $ oc adm prune builds [<options>] Table 9.3. Prune builds CLI configuration Developers can enable automatic build pruning by modifying their Build Configuration. Additional resourcesThan: 3600000000000 4 resources: {} 5 affinity: {} 6 nodeSelector: {} 7 tolerations: [] 8 successfulJobsHistoryLimit: 3 9 failedJobsHistoryLimit: 3 10 status: observedGeneration: 2 11 conditions: 12 - true. 
- 3 keepTagRevisions: The number of revisions per tag to keep. This is an optional field, default is 3. The initial value is 3. - 4 keepYoungerThan: Retain images younger than this duration in nanoseconds. This is an optional field, default is 3600000000000(60 minutes). - 5 resources: Standard Pod resource requests and limits. This is an optional field. - 6 affinity: Standard Pod affinity. This is an optional field. - 7 nodeSelector: Standard Pod node selector. This is an optional field. - 8 tolerations: Standard Pod tolerations. This is an optional field. - 9 successfulJobsHistoryLimit: The maximum number of successful jobs to retain. Must be >= 1to ensure metrics are reported. This is an optional field, default is 3. The initial value is 3. - 10 failedJobsHistoryLimit: The maximum number of failed jobs to retain. Must be >= 1to ensure metrics are reported. This is an optional field, default is 3. The initial value is 3. - 11 observedGeneration: The generation observed by the Operator. - 12. 9.6. Manually pruning images The Pruning Custom Resource enables automatic image pruning. However, administrators can manually prune images that are no longer required by the system due to age, status, or exceed limits by running the following command: $ oc adm prune images <options> To manually prune images, you must first log in to the CLI as a user with an access token. The user must also have the cluster role of system:image-pruner or greater (for example, cluster-admin). Pruning images removes data from the integrated registry unless --prune-registry=false is used. Pruning images with the --namespace flag does not remove images, only imagestreams.. oc adm prune images operations require a route for your registry. Registry routes are not created by default. The Prune images CLI configuration options table describes the options you can use with the oc adm prune images <options> command. Table 9.4. 
Prune images CLI configuration options - Imagestreams created less than --keep-younger-thanminutes ago - Running Pods - Pending Pods - ReplicationControllers - Deployments - DeploymentConfigs - ReplicaSets - Build Configurations - Builds --keep-tag-revisionsmost recent items in stream.status.tags[].items That are exceeding the smallest limit defined in the same project and are not currently referenced by any: - Running Pods - Pending Pods - ReplicationControllers - Deployments - DeploymentConfigs - ReplicaSets - Build Configurations -. 9.6.2. Running the image prune operation Procedure To see what a pruning operation would delete: Keeping up to three tag revisions, and keeping resources (images, imagestreams,..Caution. 10. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace that makes it easy to discover and access certified software for container-based environments that run on public clouds and on-premises. 10.1. Red Hat Marketplace features Cluster administrators can use the Red Hat Marketplace to manage software on OpenShift Container Platform, give developers self-service access to deploy application instances, and correlate application usage against a quota. 10.1.1. Connect OpenShift Container Platform clusters to the Marketplace. 10.1.2. Install applications Cluster administrators can install Marketplace applications from within OperatorHub in OpenShift Container Platform, or from the Marketplace web application. You can access installed applications from the web console by clicking Operators > Installed Operators. 10.1.3. Deploy applications from different perspectives You can deploy Marketplace applications from the web console’s Administrator and Developer perspectives. The Developer perspective. The Administrator perspective Cluster administrators can access Operator installation and application usage information from the Administrator perspective. 
They can also launch application instances by browsing custom resource definitions (CRDs) in the Installed Operators list.
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/html-single/applications/index
A native plugin is a C++ dynamically loaded library that provides one or more plugin implementations to MarkLogic. This chapter covers how to create, install, and manage native plugins. A native plugin is a dynamically linked library that contains one or more UDF (User Defined Function) implementations. When you package and deploy a native plugin in the expected way, MarkLogic distributes your code across the cluster and makes it available for execution through specific extension points. The UDF interfaces define the extension points that can take advantage of a native plugin. MarkLogic currently supports the following UDFs: The implementation requirement for each UDF varies, but they all use the native plugin mechanism for packaging, deployment, and versioning. Native plugins are deployed as dynamically loaded libraries that MarkLogic Server loads on demand when referenced by an application. The User-Defined Functions (UDFs) implemented by a native plugin are identified by the relative path to the plugin and the name of the UDF. For a list of the supported kinds of UDFs, see What is a Native Plugin?. When you install a native plugin library, MarkLogic Server stores it in the Extensions database. If the MarkLogic Server instance in which you install the plugin is part of a cluster, your plugin library is automatically propagated to all the nodes in the cluster. There can be a short delay between installing a plugin and having the new version available. MarkLogic Server only checks for changes in plugin state about once per second. Once a change is detected, the plugin is copied to hosts with an older version. In addition, each host has a local cache from which to load the native library, and the cache cannot be updated while a plugin is in use. Once the plugin cache starts refreshing, operations that try to use a plugin are retried until the cache update completes. MarkLogic Server loads plugins on demand.
A native plugin library is not dynamically loaded until the first time an application calls a UDF implemented by the plugin. A plugin can only be loaded or unloaded when no plugins are in use on a host. Native plugins run in the same process context as the MarkLogic Server core, so you must compile and link your library in a manner compatible with the MarkLogic Server executable. Follow these basic steps to build your library: compile your code with the -fPIC option. The sample plugin in marklogic_dir/Samples/NativePlugins includes a Makefile usable with GNU make on all supported platforms. Use this makefile as the basis for building your own plugins, as it includes all the required compiler options. The makefile builds a shared library, generates a manifest, and zips up the library and manifest into an install package. The makefile is easily customized for your own plugin by changing a few make variables at the beginning of the file: PLUGIN_NAME = sampleplugin PLUGIN_VERSION = 0.1 PLUGIN_PROVIDER = MarkLogic PLUGIN_DESCRIPTION = Example native plugin PLUGIN_SRCS = \ SamplePlugin.cpp The table below shows the compiler and standard library versions used to build MarkLogic Server. You must build your native plugin with compatible tools. You must package a native plugin into a zip file to install it. The installation zip file must contain: your plugin library, which implements one or more UDF interfaces (such as marklogic::AggregateUDF) and the registration function marklogicPlugin, and a manifest file named manifest.xml. See The Plugin Manifest. Including dependent libraries in your plugin zip file gives you explicit control over which library versions are used by your plugin and ensures the dependent libraries are available to all nodes in the cluster in which the plugin is installed. The following example creates the plugin package sampleplugin.zip from the plugin implementation, libsampleplugin.so, a dependent library, libdep.so, and the plugin manifest.
$ zip sampleplugin.zip libsampleplugin.so libdep.so manifest.xml If the plugin contents are organized into subdirectories, include the subdirectories in the paths in the manifest. For example, if the plugin components are organized as follows in the zip file: $ unzip -l sampleplugin.zip Archive: sampleplugin.zip Length Date Time Name -------- ---- ---- ---- 28261 06-28-12 12:54 libsampleplugin.so 334 06-28-12 12:54 manifest.xml 0 06-28-12 12:54 deps/ 28261 06-28-12 12:54 deps/libdep.so -------- ------- 56856 4 files Then manifest.xml for this plugin must include deps/ in the dependent library path: <?xml version="1.0" encoding="UTF-8"?> <plugin xmlns=""> <name>sampleplugin-name</name> <id>sampleplugin-id</id> ... <native> <path>libsampleplugin.so</path> <dependency>deps/libdep1.so</dependency> </native> </plugin> After packaging your native plugin as described in Packaging a Native Plugin, install or update your plugin using the XQuery function plugin:install-from-zip or the Server-Side JavaScript function plugin:installFromZip. For example, the following code installs a native plugin contained in the file /space/plugins/sampleplugin.zip. The relative plugin path in the Extensions directory is native. If the plugin was already installed on MarkLogic Server, the new version replaces the old. An installed plugin is identified by its path. The path is of the form scope/plugin-id, where scope is the first parameter to plugin:install-from-zip, and plugin-id is the ID in the <id/> element of the plugin manifest. For example, if the manifest for the above plugin contains <id>sampleplugin-id</id>, then the path is native/sampleplugin-id. The plugin zip file can be anywhere on the filesystem when you install it, as long as the file is readable by MarkLogic Server. The installation process deploys your plugin to the Extensions database and creates a local on-disk cache inside your MarkLogic Server directory.
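The install call described above can be sketched in XQuery; this mirrors the plugin:install-from-zip invocation used elsewhere in this chapter, with the path from the example:

```
plugin:install-from-zip("native",
  xdmp:document-get("/space/plugins/sampleplugin.zip")/node())
```

The first parameter is the scope ("native" here), and the second is the zip document read from the server's filesystem.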
Installing or updating a native plugin on any host in a MarkLogic Server cluster updates the plugin for the whole cluster. However, the new or updated plugin may not be available immediately. For details, see How MarkLogic Server Manages Native Plugins. To uninstall a native plugin, call the XQuery function plugin:uninstall or the Server-Side JavaScript function plugin.uninstall. In the first parameter, pass the scope with which you installed the plugin. In the second parameter, pass the plugin ID (the <id/> in the manifest). For example: plugin:uninstall("native", "sampleplugin-id") The plugin is removed from the Extensions database and unloaded from memory on all nodes in the cluster. There can be a slight delay before the plugin is uninstalled on all hosts. For details, see How MarkLogic Server Manages Native Plugins. When you install a native plugin, it becomes available for use. The plugin is loaded on demand. When a plugin is loaded, MarkLogic Server uses a registration handshake to cache details about the plugin, such as the version and what UDFs the plugin implements. Every C++ native plugin library must implement an extern "C" function called marklogicPlugin to perform this load-time registration. The function interface is: using namespace marklogic; extern "C" void marklogicPlugin(Registry& r) {...} When MarkLogic Server loads your plugin library, it calls marklogicPlugin so your plugin can register itself. The exact requirements for registration depend on the interfaces implemented by your plugin, but must include at least the following: a call to marklogic::Registry::version, and a call to a marklogic::Registry registration method. For example, Registry::registerAggregate for implementations of marklogic::AggregateUDF. Declare marklogicPlugin as required by your platform to make it accessible outside your library. For example, on Microsoft Windows, include the extended attribute dllexport in your declaration: extern "C" __declspec(dllexport) void marklogicPlugin(Registry& r)...
For example, the following code registers two AggregateUDF implementations. For a complete example, see marklogic_dir/Samples/NativePlugins. #include "MarkLogic.h" using namespace marklogic; class Variance : public AggregateUDF {...}; class MedianTest : public AggregateUDF {...}; extern "C" void marklogicPlugin(Registry& r) { r.version(); r.registerAggregate<Variance>("variance"); r.registerAggregate<MedianTest>("median-test"); } Your implementation of the registration function marklogicPlugin must include a call to marklogic::Registry::version to register your plugin version. MarkLogic Server uses this information to maintain plugin version consistency across a cluster. When you deploy a new plugin version, both the old and new versions of the plugin can be present in the cluster for a short time. If MarkLogic Server detects this state when your plugin is used, MarkLogic Server reports XDMP-BADPLUGINVERSION and retries the operation until the plugin versions synchronize. Calling Registry::version with no arguments uses a default version constructed from the compilation date and time (__DATE__ and __TIME__). This ensures the version number changes every time you compile your plugin. The following example uses the default version number: extern "C" void marklogicPlugin(Registry& r) { r.version(); ... } You can override this behavior by passing an explicit version to Registry::version. The version must be a numeric value. For example: extern "C" void marklogicPlugin(Registry& r) { r.version(1); ... } The MarkLogic Server native plugin API (marklogic_dir/include/MarkLogic.h) is also versioned. You cannot compile your plugin library against one version of the API and deploy it to a MarkLogic Server instance running a different version. If MarkLogic Server detects this mismatch, an XDMP-BADAPIVERSION error occurs.
Using the Admin Interface or the xdmp:host-status function, you can monitor which native plugin libraries are loaded into MarkLogic Server, as well as their versions and UDF capabilities. Native plugin libraries are demand-loaded when an application uses one of the UDFs implemented by the plugin. Plugins that are installed but not yet loaded will not appear in the host status. To monitor loaded plugins using the Admin Interface: To examine loaded plugins programmatically, open Query Console and run a query similar to the following: You will see output similar to the following if there are plugins loaded. The XQuery code emits XML. The JavaScript code emits a JavaScript object (pretty-printed as JSON by Query Console). This output is the result of installing and loading the sample plugin in MARKLOGIC_DIR/Samples/NativePlugin, which implements several aggregate UDFs (max, min, etc.), a lexer UDF, and a stemmer UDF. A native plugin zip file must include a manifest file called manifest.xml. The manifest file must contain the plugin name, plugin id, and a <native> element for each native plugin implementation library in the zip file. The manifest file can also include optional metadata such as provider and plugin description. For full details, see the schema in MARKLOGIC_INSTALL_DIR/Config/plugin.xsd. Paths to the plugin library and dependent libraries must be relative. You can use the same manifest on multiple platforms by specifying the native plugin library without a file extension or, on Unix, lib prefix. If this is the case, then MarkLogic Server forms the library name in a platform-specific fashion, as shown below: on Windows, a .dll extension is appended; on Linux, a lib prefix and a .so extension are added; on Mac OS X, a lib prefix and a .dylib extension are added. The following example is the manifest for a native plugin with the ID sampleplugin-id, implemented by the shared library libsampleplugin.so.
<?xml version="1.0" encoding="UTF-8"?> <plugin xmlns=""> <name>sampleplugin-name</name> <id>sampleplugin-id</id> <version>1.0</version> <provider-name>MarkLogic</provider-name> <description>Example native plugin</description> <native> <path>libsampleplugin.so</path> </native> </plugin> If the plugin package includes dependent libraries, list them in the <native> element. For example: <?xml version="1.0" encoding="UTF-8"?> <plugin xmlns=""> <name>sampleplugin-name</name> ... <native> <path>libsampleplugin.so</path> <dependency>libdep1.so</dependency> <dependency>libdep2.so</dependency> </native> </plugin> Administering (installing, updating, or uninstalling) a native plugin requires the appropriate privilege, or the application-plugin-registrar role. Loading and running a native plugin can be controlled in two ways: The native-plugin privilege () enables the use of all native plugins. Alternatively, you can create a plugin-specific privilege to enable users to use a specific plugin. The plugin-path is the same plugin library path you use when invoking the plugin. For example, if you install the following plugin and its manifest specifies the plugin path as sampleplugin, then the plugin-specific privilege would be based on that path. plugin:install-from-zip("native", xdmp:document-get("/space/udf/sampleplugin.zip")/node()) The plugin-specific privilege is not pre-defined for you. You must create it. However, MarkLogic Server will honor it if it is present. You can explore a sample native plugin through the source code and makefile in MARKLOGIC_DIR/Samples/NativePlugins. This example implements several kinds of UDF. The sample Makefile will lead you through compiling, linking, and packaging the native plugin. The README.txt provides instructions for installing and exercising the plugin library.
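The platform-specific library naming described in the manifest section can be expressed as a small helper. This is a sketch only — plugin_library_name is a hypothetical shell function for illustration, not part of any MarkLogic tooling:

```shell
# Hypothetical helper mirroring how MarkLogic Server forms the platform-specific
# library file name when the manifest omits the extension (and, on Unix, the lib prefix).
plugin_library_name() {
  base="$1"   # library name as written in the manifest, e.g. "sampleplugin"
  os="$2"     # target platform: windows, linux, or macos
  case "$os" in
    windows) printf '%s.dll\n' "$base" ;;      # Windows appends a .dll extension
    linux)   printf 'lib%s.so\n' "$base" ;;    # Linux adds a lib prefix and a .so extension
    macos)   printf 'lib%s.dylib\n' "$base" ;; # Mac OS X adds a lib prefix and a .dylib extension
    *)       echo "unknown platform: $os" >&2; return 1 ;;
  esac
}
```

For example, plugin_library_name sampleplugin linux prints libsampleplugin.so, matching the library name used in the manifest examples above.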
http://docs.marklogic.com/guide/app-dev/native-plugins
CC-MAIN-2020-34
en
refinedweb
SoIntensityExtremaQuantification engine computes basic statistics of an image.

#include <ImageViz/Engines/ImageAnalysis/Statistics/SoIntensityExtremaQuantification.h>

The SoIntensityExtremaQuantification engine computes the minimum, the maximum and the mean value of the intensities of an image.

See also: SoIntensityStatisticsQuantification.

Members:
- Constructor.
- Compute mode: selects the compute mode (2D, 3D, or AUTO); use enum ComputeMode. Default is MODE_AUTO.
- Input image: the input image. Default value is NULL. Supported types include: grayscale, binary, label, or color image.
- Output: the output measure result. Default value is NULL.
https://developer.openinventor.com/refmans/9.9/RefManCpp/class_so_intensity_extrema_quantification.html
HTTP or HTTPS settings are configured by using a command-line tool. At the minimum you will want to configure a URL registration, and add a firewall exception for the URL your service will be using.

The tool used to configure HTTP settings depends on the operating system the computer is running. When running Windows Server 2003 or Windows XP, use the HttpCfg.exe tool. Windows Server 2003 automatically installs this tool. When running Windows XP, you can download the tool at Windows XP Service Pack 2 Support Tools. For more information, see Httpcfg Overview. When running Windows Vista or Windows 7, you configure these settings with the Netsh.exe tool.

Running Windows XP or Server 2003

Running Windows Vista, Windows Server 2008 R2 or Windows 7

If you are running on Windows Vista, Windows Server 2008 R2 or Windows 7, use the Netsh.exe tool. The following shows an example of using this command.

netsh http add urlacl url= user=DOMAIN\user

This command adds a URL reservation for the specified URL namespace for the DOMAIN\user account. For more information on using the netsh command, type "netsh http add urlacl" at a command prompt and press Enter.

Configuring a Firewall Exception

When self-hosting a WCF service that communicates over HTTP, an exception must be added to the firewall configuration to allow inbound connections using a particular URL. For more information, see Open a port in Windows Firewall (Windows 7).

Configuring SSL Certificates

Configuring the IP Listen List

Running Windows XP or Server 2003

Use the httpcfg tool to modify the IP Listen List, as shown in the following example. The Windows Support Tools documentation explains the syntax for the httpcfg.exe tool.

httpcfg.exe set iplisten -i 0.0.0.0:8000

Running Windows Vista or Windows 7

Use the netsh tool to modify the IP Listen List, as shown in the following example.
netsh http add iplisten ipaddress=0.0.0.0:8000

Other Configuration Settings

When using WSDualHttpBinding, the client connection uses defaults that are compatible with namespace reservations and the Windows Firewall.

Issues Specific to Windows XP

See Also
- WSDualHttpBinding
- How to: Configure a Port with an SSL Certificate
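As an aside (not part of the original article; the helper name, the http://+: strong-wildcard URL, and the DOMAIN\user placeholder are assumptions for illustration), the per-OS commands in this article can be collected into one small helper so the right tool is picked for the target system:

```python
def self_host_http_commands(port, account=r"DOMAIN\user", legacy=False):
    """Return the commands to reserve a URL and add an IP listen entry for `port`.

    legacy=False -> netsh (Vista / Windows 7 / Server 2008 R2)
    legacy=True  -> httpcfg (XP / Server 2003); httpcfg takes an ACL string
                    rather than an account name, so only the iplisten step is shown.
    """
    url = f"http://+:{port}/"  # '+' is the strong-wildcard host
    if legacy:
        return [f"httpcfg.exe set iplisten -i 0.0.0.0:{port}"]
    return [
        f"netsh http add urlacl url={url} user={account}",
        f"netsh http add iplisten ipaddress=0.0.0.0:{port}",
    ]
```

The returned strings would still need to be run in an elevated command prompt on the Windows machine itself.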
https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/configuring-http-and-https
I often find myself writing or consuming API's which require the caller to specify some sort of options. I've seen numerous ways to specify those options, but I've yet to find one that I really like. Let's work with an example throughout the rest of this post. Imagine we're working on an application which has some notion of a User. We're now about to create a function to output a User to a Stream. We want to provide two options:

- Should we include the User's email address?
- Should we include the User's phone number?

I can imagine several ways of designing this API. First, we could use a couple of bool parameters, like this:

DisplayUser(user, Console.Out, true, false);

Next, we could use a flags enum to represent the options, like this:

DisplayUser(user, Console.Out, DisplayUserOptions.Email | DisplayUserOptions.PhoneNumber);

Finally, we could create a struct which contains two bools instead of the enum, like so:

struct DisplayUserOptions
{
    public bool Email;
    public bool PhoneNumber;
}

I don't like passing the two bools, because I find reading the callsite of a method designed like this to be difficult, because you don't know what the parameters mean anymore. Does this display the email address or the phone number? I can't remember anymore. I know that parameter help can tell you the names of the parameters, but that doesn't help when I'm doing a code review in windiff, since windiff doesn't have parameter help tool-tips yet. On my current team, we have a convention that when we call a method like this, we put the parameter name in a comment, so that we can see it, like:

DisplayUser(user, Console.Out, /* displayEmail */ true, /* displayPhoneNumber */ false);

However, in general, I don't like to rely on conventions that force people to put comments in a certain style to make the code readable. If possible, I'd rather design the API in such a way that it has to be readable.

The second option is using enum flags to represent the options. My problem with this approach is that it is hard to get right, both for the caller of the API, and the implementer. This example is sort of trivial, but I can remember a time when I was dealing with a bit-field that contained 31 unique bit values that could be set and unset independently.
Getting all of the ~ and | and &'s just right was very hard, and once it was done, it was hard to figure out what it was trying to do. A final reason that I don't like enums is that in practice I often find that there are more behaviors that I want to be able to add to the options which isn't possible with enums. For example, it might be a requirement that two of the options are mutually exclusive. It's difficult to ensure that this is enforced with an enum.

Finally, we have the option of using a struct, which addresses both of the concerns above. You can write a call like:

DisplayUserOptions options = new DisplayUserOptions();
options.Email = true;
options.PhoneNumber = false;
DisplayUser(user, Console.Out, options);

Well, that's certainly clear. It also makes it easier to understand control flow that sets or clears the options, and to understand conditions based on them. It also gives a place to add that behavior: I can add methods to the struct, make the fields into properties which have setters that do validation, etc. The problem with this approach is that in simple cases like the above, it is much more verbose than either of the other two alternatives. However, I recently realized that in C# 3.0, we can take advantage of Object Initializers to make the simple case simple again:

DisplayUser(user, Console.Out, new DisplayUserOptions { Email = true, PhoneNumber = false });

It's almost like having named parameters in C# 🙂 What do you think? Which alternative do you prefer? Do you have another one that I haven't thought of?

I tend to use the second option quite a lot myself when regularly passing around more than one value between methods, or when the parameters to a function get quite large. The third form does look rather nice when I only need to set one or two properties such as in the example, but any more than about three and I'd probably stick with the second form.

I like the third form, it's like named parameters from VB, but with the ability to easily move it out of being in-line when the number of parameters gets to be too many.

The first option with a set of boolean parameters is a bad idea, not just for readability, but for extensibility.
If you add additional options in a later revision, you would have to change the method signature, which would result in a breaking change to your API (internal or external, it's still undesirable). As for options 2 and 3, using an Enum would be compatible with .NET 1.0, 1.1, 2.0, and 3.0, but object initializers would only work with .NET 3.5. Having said that, I think that using an object with the appropriate fields is better than an Enum, since you could abstract the initialization into a function to improve readability (I'm thinking of your 31 options scenario here, which would also make the object initializers convention formatting a challenge).

I pretty much agree with your take on this. The first option, bools, doesn't really have any benefits, but does have a lot of drawbacks. Enums are nice for relatively simple situations (like your example), while using a settings object (be it a class or a struct) works well for more complex scenarios. I frequently use *Options structures with nullable types representing optional parameters.

With the anonymous types feature of C# 3.0, can't

DisplayUser(user, Console.Out, new DisplayUserOptions { Email = true, PhoneNumber = false });

be shortened to:

DisplayUser(user, Console.Out, new { Email = true, PhoneNumber = false });

Seeing as though the DisplayUserOptions type's only purpose is to provide named arguments to the DisplayUser function, it doesn't seem important to keep the class name there. Then again, you could argue this reduces the clarity of the function call for later programmers, but, assuming DisplayUserOptions is a sealed class, the documentation for the DisplayUser function should make it pretty clear what the type of the third argument is.

Great use of object initializers. Will be even nicer when C# supports initializing readonly structures with that kind of syntax…
The reason is that the method needs to *take* a named type, and there isn’t a way to do that. Anonymous types don’t unify to named types, even if they have the same set of property names. Down with premature optimization! Up with readability & obvious correctness. I’ve created too many bugs in my career because of bit twiddling. That was a vote for the structs of bools. I prefer the following: DisplayUserOptions options = new DisplayUserOptions(); options.Email = true; options.PhoneNumber = false; DisplayUser(user, Console.Out, options); However, I do not care for Object Initializers in C# 3. To me, the verbosity of the two are the same because they compile into the exact same code. It’s nothing but syntactic sugar. There are also several advantages to not using object initializers: 1) Can more freely add comments 2) Easier to debug The boolean option is the worst in regard to both readability and (perhaps more importantly) extensibility. Hi Craig, I understand that Object Initializers compile to the same IL, and so their performance and semantics are identical, but that doesn’t mean they are equally verbose. By verbosity, I mean, "how much code does it take to express the idea." C# has several other constructs which are purely syntactic sugar, but which are nonetheless very useful. Some examples are: 1. Loop constructs like while/do/for. These are all just a series of goto’s. 2. the "using" statement, which exands to a try/finally with a Dispose call. 3. the "foreach" statement, which expands to a try/finally, a call to GetEnumerator and a while loop. 4. Anonymous methods, which expand to a display class plus method. 5. Lambda’s which are equivalent to anonymous methods but with less verbose syntax. 6. Query Expressions. 7. etc. Just because something is "syntactic sugar" doesn’t mean that it shouldn’t be used if it makes your code easier to read. 
Regarding adding comments, I agree that an object initializer inline in a call is not a good structure for adding comments. If I needed to, I would probably apply the "introduce explaining variable" refactoring to end up with an object initializer assigned to a local:

var options = new DisplayUserOptions {
    // Free to add comments here.
    PhoneNumber = false
};
DisplayUser(user, Console.Out, options);

Regarding debugging, this is a good point. It's too bad that the VS debugger doesn't support expression level debugging, so that you could set a breakpoint and step on each of the lines of an object initializer. I hope that capability gets added in a future version of the debugger. However, for options type structs like these, I usually find that the expressions are simple enough that I don't need to break on individual ones.

I vote for structs as well 🙂 I also find it a good pattern to derive the struct name by adding 'Options' to whatever name the method has.

I don't like _any_ of those options. Personally, I think the clearest option would be an enum with all the possibilities, such as, say:

enum DisplayUserOptions { None, Email, PhoneNumber, EmailAndPhoneNumber }

Now you don't have to worry about passing in DisplayUserOptions.Email | DisplayUserOptions.PhoneNumber which sounds rather unclear when read aloud ("Email OR PhoneNumber", when you really mean the customer has "Email AND PhoneNumber").

Kirill, my naming convention varies. If the options are related to a class (like MessageBoxOptions in the BCL), I would rather make it a nested type named "Options". That way I have less to rename when I refactor. In my example I was using a pseudo-visitor pattern, and so the expectation is that there might be a bunch of "DisplayFoo" calls, where each Foo might want a different set of options.
In this case, I didn't have a good name for the options type.

Hi Kyralessa, I agree that having a "combination value" in the enum is probably better than the straight enum, but it still has three drawbacks in my opinion:
1. The implementation of "DisplayUser" becomes a little bit more complex, because you need to check combinations of flags. However this is a minor issue from an API design point of view, since it affects the implementor, instead of the caller.
2. It results in a combinatorial explosion of the possible values. For example, in my case of 31 possible flag values, your enum would need to have 2^31 different values specified explicitly, which can't be good 🙂
3. There is still no logical home for all of the other behaviors and validation associated with it. In my experience, that behavior is almost always _there_, it's just hard to _see_ unless you're looking for it, and you have a better place to put it.

I generally follow the guidance in the .NET Framework Design Guidelines – but there is a case, I think, for reviewing those given the new syntactic forms that C# 3.0 enables.

I have been using bools in simpler situations and enums in slightly more complex ones. But I must admit that the situations I had had to code for never had 2^31 possibilities. I would think that anytime it gets more complex, use a class and assign it the responsibility of figuring out what's right and what's not. Someone suggested enumerating all possibilities in an enum rather than burdening the consumer with providing a|b when what you mean is a and b. The disadvantage that I see is that even though you have all the valid possibilities explicitly stated, it won't stop someone from passing yourenum.a|yourenum.b & yourenum.c or any other combination to your API. How's that going to be handled? Therefore, IMO:
- bools in academic/example situations
- enums in slightly complex ones
- classes in more complex ones
seems to be a solution to me.
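As an aside (not from the thread): the combination-value and masking headaches debated above are exactly what dedicated flag types in other languages absorb. Here is the DisplayUserOptions idea sketched with Python's enum.Flag, where membership tests replace the manual (value & FLAG) == FLAG checks:

```python
from enum import Flag, auto

class DisplayUserOptions(Flag):
    NONE = 0
    EMAIL = auto()         # 1
    PHONE_NUMBER = auto()  # 2

def display_user(user, options):
    parts = [user["name"]]
    # No ~, | or & bookkeeping on the implementer's side:
    if DisplayUserOptions.EMAIL in options:
        parts.append(user["email"])
    if DisplayUserOptions.PHONE_NUMBER in options:
        parts.append(user["phone"])
    return " ".join(parts)

user = {"name": "Ada", "email": "ada@example.com", "phone": "555-0100"}
line = display_user(user, DisplayUserOptions.EMAIL | DisplayUserOptions.PHONE_NUMBER)
```

Combined values still exist at the call site (EMAIL | PHONE_NUMBER), but the combinatorial explosion of explicitly named members goes away.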
I'd like to see syntax like this:

public class Class1
{
    public bool GetSomething(int iParam, string sParam)
    {
        return true;
    }
}

Which is then called like this (analogous to the new way of calling constructors in 3.0):

Class1 c = new Class1();
bool ret = c.GetSomething(iParam = 0, sParam = "you lose");

Is this what an earlier commenter referred to as "named parameters"?

Kenneth, yes, that's named parameters.

I like the enum option. Regardless of what you do, when it comes to 31 different options to a single method, it's gonna get complicated. Besides, if you use the struct option, the code to set all 31 fields of a struct will be more than the code to AND and OR 31 enums.

Tundey, I agree that the case of having 31 different options is a pathological case. And while it may end up being more code, I think that the code for a struct based solution is easier to understand, and so I tend to prefer it, even though it might be a little bit more verbose.

Interesting, I'd start by thinking of the consumers of my method. Is it something any programmer can use? If so, then I'd probably create more than one method, rather than try anything flash. I'd think about the likely uses of the process and create methods that have an easy set of parameters. A good API should cater for what a consumer wants to do. Rather than force the consumer to pass parameters, give the consumer a method that does what they want in most situations. E.g. DisplayUser, DisplayUserAndEmail, DisplayUserAndPhone and DisplayUserWithEmailAndPhone. Sure, you get long method names, but the use of the method is clear in its name. Plus, it gives the implementer of the methods the ability to do what they like. Separate routines for all of the public methods? Fine. Can consolidate the public methods into one routine? Again fine. If you and you alone are the consumer of your methods, then you reap what you sow. In the hypothetical example of many possible parameters: my knee-jerk response is that your object model is not right.
I've struggled to come up with an example of complexity that would need a heavily parameterised method. The only thing I can think of is something we are talking about at work, allowing users to specify the combination of columns they want to see in a report. The combinations quickly get out of hand and are next to impossible for a single method to deal with. In this case, forget an all-seeing method that can deal with the choices; break down the choices into chunks and have a different method process each chunk.

Shadders, I pretty much agree with all of your points, and I frequently do use your strategy of having multiple named methods. This does suffer from the combinatorial explosion problem, however. I'll admit that 31 options isn't a good example, but even if you have 6 or so independent options, that's still a lot of methods to maintain. The real life example where I had 31 was for determining what things to include in completion lists. The options were things like: include static, include instance, include non-public, include properties/events/methods/types/namespaces, etc, but you're right that it isn't a very good way of structuring the code.

Kevin, good point about using a nested type for Options! This starts looking somewhat like a Memento to me! Others: of course, it's not like one option is the right way and two others are wrong – you can totally use any approach depending on your situation. It is thinking about picking the best approach and being explicit about your decision – that's what makes for good code. Also, that's where the Introduce Parameter Object refactoring might come in handy. Finally, here's a post about 'named arguments' in C#.

I myself use a combination of options 2 and 3, depending on the situation. It's sure that option 1 isn't viable if there is more than one parameter, and if I am sure that the function won't be extended.
However, I think that using explicit combination values in enums is not a good idea when there are more than 3 or 4 values (2^4), because it tends to complicate things from the implementer perspective, and from the caller perspective (should I look for enum.option1ANDoption2ANDoption3 or option2ANDoption1ANDoption3, etc.). OK, this can be solved by using a convention. For the function name option, it suffers (in my point of view) from the inability to dissociate option declaration from function call. In fact, there is a chance that it will ultimately only move the issue.

if (checkBoxEmail.Checked && checkBoxName.Checked) { DisplayEmailANDMail(user); }
elseif….
elseif….
elseif….

I prefer

DisplayUserOptions options;
if (checkBoxEmail.Checked) { options.Email = true; }
if (checkBoxName.Checked) { options.Name = true; }
DisplayUser(user, options);

I find it easier (even easier if we want to add an option to display the user's phone number).

I use techniques 1 and 2 with the same concerns. There will be no proper solution unless named parameters are made part of C# – something I've long wished for. So that's my vote: "Technique 4" of a better future world: C# named parameters.

I prefer using the enum with params, so the call would be something like this:

DisplayUser(user, Console.Out, DisplayUserOptions.Email, DisplayUserOptions.PhoneNumber);

This way the caller doesn't have to know bitwise operations (for a small list of options). You still may use bitwise operations to make the code more readable with a very large list of options.
Following is an example of code using Enums/params/Extension methods. Using this extension method (which could be improved with generics):

public static class DisplayUserOptionsExtension
{
    public static Boolean In(this DisplayUserOptions option, params DisplayUserOptions[] options)
    {
        DisplayUserOptions selectedOptions = DisplayUserOptions.None;
        foreach (DisplayUserOptions selectedOption in options)
        {
            selectedOptions |= selectedOption;
        }
        return (selectedOptions & option) == option;
    }
}

We can use code like this to call the method:

DisplayUser(user, Console.Out, DisplayUserOptions.Email, DisplayUserOptions.PhoneNumber);

And the implementation of the method is not difficult either with the extension method:

public void DisplayUser(User user, TextWriter textWriter, params DisplayUserOptions[] options)
{
    if (DisplayUserOptions.Email.In(options))
    {
        // print e-mail
    }
}

I tend to agree with Ricardo. This sort of dependency injection keeps the complexity of the parameters with the parameters. This object can grow to deal with changes without affecting the calling and called function(s). I think that wherever possible complexity and/or functionality should be encapsulated. Of course this approach introduces complexity in terms of implementation but, as Shadders has pointed out, we reap what we sow. I often find myself in both consumer and provider roles. I am lazy. So I tend to work overly hard to minimize the work I will have to do in the future. I think of this as an investment which has the largest return in terms of extensibility and containment of possible defects.

Riccardo, I hadn't considered the use of params enums. It's very interesting, I'll have to think about it more. Although the consumer of Ricardo's solution has a very easy syntax, it looks like this comes with a performance penalty – constructing a temporary object for a single boolean check.
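As a side note (not from the thread; the option tokens are invented for illustration), Riccardo's params-array trick maps directly onto variadic arguments in Python, with the same caller-friendly shape and the same hidden cost of building a temporary sequence per call:

```python
EMAIL = "email"
PHONE_NUMBER = "phone_number"

def display_user(user, *options):
    """Accept any number of option tokens, like `params DisplayUserOptions[]`."""
    parts = [user["name"]]
    if EMAIL in options:          # plays the role of the In() extension method
        parts.append(user["email"])
    if PHONE_NUMBER in options:
        parts.append(user["phone"])
    return " ".join(parts)

user = {"name": "Ada", "email": "ada@example.com", "phone": "555-0100"}
line = display_user(user, EMAIL, PHONE_NUMBER)  # caller never touches bitwise ops
```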
C++ used to use constructs like this:

union DisplayOptions {
    struct {
        bool Email : 1;
        bool PhoneNumber : 1;
    };
    int BitFlags;
};

…which, although a little ugly to define, gave performance + readability. Is there no C# equivalent?

I can think of 1.5 alternatives. The 0.5 is to use bools, but moderating the downside of Foo(true) by supplying a couple of consts:

const bool kShowMail = true;
const bool kHideMail = false;

The true alternative (and the pattern I typically use) is using non-bitfield enums:

enum DisplayEmailOption { Show, Hide }
enum DisplayPhoneOption { Show, Hide }

Type-safe, and expandable both to other things (DisplayAddressOption) as well as values:

enum DisplayEmailOption { Show, Hide, ShowErrorOnly }

Of the three options given, I'd go for (d) None of the above. For the simple case, I'd just create two different, well-named methods instead of having the boolean parameter. For more complex cases, I create a MethodObject to capture the configuration and required execution. To make things readable, I'll often use a static method on a utility class plus some method chaining. For example:

Display.User(user).WithEmail().On(stream);
Display.User(user).WithPhone().WithAddress().On(stream);

Requires a little bit of setup work, but is very clean to read – plus IntelliSense can help by showing what's valid.

Hey Bevan, thanks for the comment. Your second approach looks a lot like the Fluent Interface approach that I mention in my follow-up post. I agree that it's a great way too, since you can create an API that really flows together well.

I generally prefer enums; I have never found bitwise operations particularly complicated or burdensome, and the ease of use and readability on the consumer side is a huge plus.
The only time I would move to using an options class would be if, as you pointed out in your post, there was the possibility of complex interactions between options, which is generally not very common; and even then, often this can be overcome by factoring your options into two or three enums where the individual options are addressing a common issue. I have often thought that the Reflection APIs could have done much better in this area.

None of the above work well with Web services, where you need the caller and callee to be as independent as possible. Your client may be several versions behind the server, which is an extreme version of late binding, I guess. Also, callers will likely be using a different programming language. So I vote for ordinary bitmaps (not even enums).

@Pete: .NET enums map directly onto a bitmap for purposes of exposing the method to external callers. Why not use the enum from within .NET to make your life easier?

I prefer the enums anytime. Once you get the hang of how bitmasking works it's all very easy. What I usually do is use the enum as a method argument and then just declare local boolean variables for each of the enum items and initialize them with the bitmask values. For example, if my enum has three items called READ=1, WRITE=2, and APPEND=4:

bool read = ((byte)(enumParam & myEnum.READ)) > 0;
bool write = ((byte)(enumParam & myEnum.WRITE)) > 0;
bool append = ((byte)(enumParam & myEnum.APPEND)) > 0;

If there are too many enum items or each value is only used once then I'd probably do the bit-masking in-place rather than declaring local variables. I would probably be casting to int or long or whatever base type my enum needs to accommodate all enum items.
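The READ/WRITE/APPEND masking above is plain integer arithmetic, so it is easy to sanity-check outside C#; a Python sketch mirroring the comment's values (illustrative only, not from the thread):

```python
READ, WRITE, APPEND = 1, 2, 4  # each flag occupies its own bit

def unpack_flags(param):
    """Declare one boolean per flag from the combined bitmask, as described above."""
    read = (param & READ) == READ
    write = (param & WRITE) == WRITE
    append = (param & APPEND) == APPEND
    return read, write, append

flags = unpack_flags(READ | APPEND)
```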
If you use the technique #1 then if you add a parameter you will need to create a separate method with the new parameters and keep the old one intact for compatiblity. If you use #3 you can add more fields to the structure without breaking older versions unless the new field is required on your logic which wouldn’t be the case if you’re trying to keep backwards compat. Now the best method to use in a webservice IMO would be the bitmasking one (#2). Of course, it won’t work out of the box with web services. You would have at least 2 choices: 1) You could create a custom serializer/deserializer that serializes enums as their base type (an integer type) or serialize all items that are ORed together as a delimetered list, I guess the later would make more sense since that way the client won’t need the codes for each item. 2) Simply make the parameter an integer type and pass the items ORed together and then casted to int as follows "(int)(myEnum.ITEM1 | myEnum.ITEM2)". You can then AND the parameter value with each of the enum items to determine if the item has been ORed with the parameter value. Bevan: Functionally I don’t see any problems with your approach but I don’t think it’s that readable. Functionally that’s exactly the same as using properties since a property is just a method internally like C++ getter or setter methods. However .NET’s OOP model uses properties for this purpose, so a .NET programmer will expect that to be done with properties so that will be rather confusing for a .NET programmer. If you must use methods for any reason you might want to use the std. convention used in cpp and other languages for getter and setters (use a get or set prefix, the set takes in a value and the get returns it. A few questions: 1) How can you tell if the WithEmail() method has already been called for an object? 2) Do you have a WithoutEmail() method to clear that flag? Whatever the answer is, properties make more sense to me. 
I just realized that here I'm discussing the use of methods (used as getters or setters) vs. using properties in .NET, which functionally is the same thing, but this post is talking about method arguments vs. method calls to set the arguments, and there is a huge functional difference in that case.
#1: You cannot use methods to provide values for another method unless you make the variable global, which, unless you really need it global for another reason, is a very bad idea.
#2: Even if the method is in fact an instance method and you do need to save the parameter globally for the instance, it's still a bad idea to use a method call just to set a value that could be set on the next call without the need of allocating and deallocating a stack frame for that method.

@Fernan: Why do you say that enums won't work out of the box with web services? An enum is just an integer type with predefined constants and some additional syntax checking. For purposes of exposing your method to a webservice, it's the exact same thing.

Enums for the win!

While no size fits all, when the initialization comes to more than a few items, I like the structs. There is nothing to say that bools (which I like) and enums (which I also like) cannot live together in these structs, depending on what is being initialized. (Say one item might be a method of formatting the text.) When using the structs, we have the option of using object initializers, or to pre-build the struct and fill it member by member before the call using it. I also do not like the idea of having a handful of functions to use depending on how I want to initialize. I believe that the fewer the functions in a struct/class, the easier it is to maintain and understand. I find when I browse a struct/class that has a large function base my eyes start to glaze over. Thank you for posting this. I find it a good thing to go back and question how we do the common tasks, and I appreciate seeing how others deal with these.
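As an aside (a sketch with invented names, not code from the thread): Bevan's chained Display.User(…).WithEmail().On(stream) style, and the "has WithEmail already been called?" question it drew, can be prototyped in a few lines. Each setter returns self, which is what makes the calls chain, and a plain attribute answers the already-called question:

```python
import io

class UserDisplayBuilder:
    def __init__(self, user):
        self._user = user
        self.with_email_set = False   # inspectable: answers "was WithEmail called?"
        self.with_phone_set = False

    def with_email(self):
        self.with_email_set = True
        return self  # returning self is what makes the calls chain

    def with_phone(self):
        self.with_phone_set = True
        return self

    def on(self, stream):
        parts = [self._user["name"]]
        if self.with_email_set:
            parts.append(self._user["email"])
        if self.with_phone_set:
            parts.append(self._user["phone"])
        stream.write(" ".join(parts))

class Display:
    @staticmethod
    def user(user):
        return UserDisplayBuilder(user)
```

Usage mirrors the thread's example: Display.user(user).with_email().on(buffer).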
Up to now, I have only used the enum approach, but structs may be a good alternative if you have many options. But then again: I think "too many options to use an enum" is a big hint to think about the design again. Mutually exclusive options fall into this category as well: why not make two methods instead? In case you go for the complex option parameter, another struct approach might be handy, to be able to start with an enum first and migrate to a struct later:

struct DisplayUserOptions
{
    static readonly DisplayUserOptions Email = new DisplayUserOptions(1);
    static readonly DisplayUserOptions PhoneNumber = new DisplayUserOptions(2);
    private DisplayUserOptions(int bits) { … }
    // overload &, | operators
    …
}

(I did not try this in real code, just a thought…)

It all depends on the complexity. Nothing else. The complexity determines the best suitable method. A boolean parameter would be used in conjunction with a method that is named in such a way that it is obvious what the boolean parameter does:

MyObject.Reset(true);
MyObject.Reset(CheckBoxClearSettings.Checked);

The next step of specifying options is the enum, naturally. When options are exclusive, the enum IS best suited for the option.

MyReport.Export(ExportTo.PDF);
MyReport.Export(ExportTo.File, fileName);

The other situation is the example used in this topic. We're now speaking of "Flags":

DisplayUser(user, DisplayUserFlag.EMail | DisplayUserFlag.Name);

This however would only be feasible with a limited number of flags, where there is no interdependency between flag bits. The next level is using a class. This way obviously offers the most flexible way of specifying parameters.

public class MyMethodDisplayOptions
{
    public bool ShowEmail;
    public bool ShowAddress;
    public bool ShowCity;
}

enum ExportTo { PDF, Text, Printer, …… }

public class MyMethodOptions
{
    public MyMethodDisplayOptions DisplayOptions { get…. }
    public ExportTo ExportTo { get…. }
}

The goal is to avoid giving a method incredibly long parameter lists.
It doesn't read well; code doesn't look smooth. It may take some effort to define the parameter support objects, but it is very readable. It also offers the possibility to embed intelligence into the options classes, where they can adjust options when other options are specified. (Mutual exclusiveness can be enforced, etc.) Even exceptions can be raised by the options classes. So… in the end, there's no "best" way by itself. I use all three ways of specifying options. It's just a matter of picking the *right* way for a given situation. The thing to do is to determine how you can keep the method parameter lists short, where option names, enums or classes are self-explanatory. Happy coding 🙂

This is another vote for method objects; although the technique is a little obscure at the moment, it's got a lot going for it in terms of discoverability, readability & flexibility. That said, it does take a lot to set up, so unless they're really needed, I'd go with named parameters if they're available – although no projects I'm on can use them at the moment.

How about the "move method to another class" refactoring? Create a separate renderer or view class for each domain entity / target output combination.

public class UserTextStreamRenderer : UserRenderer
{
    public User UserData { get; set; }
    public bool ShowEmail { get; set; }
    public bool ShowPhone { get; set; }
    // other options…
    public TextWriter TargetStream { get; set; }
    public override void Render() { Render(TargetStream); }
    public void Render(TextWriter stream);
}

public abstract class UserRenderer
{
    public abstract void Render();
}

To clarify, I would only use the separate renderer class as I proposed if there were many rendering options. In the case of two options I'd probably use separate methods, but they would be named based on the purpose of the rendering rather than listing the fields that get rendered in the method name.
More things to do in the code snippet I posted…

1) Move common options/properties to the base UserRenderer (out of UserTextStreamRenderer).
2) Add an IsValid() method. Based on the application, this method may return a collection of validation messages instead of a simple bool.
3) If applicable, the Render method should throw an exception if it is called with invalid options, because the caller should have verified the options with IsValid before calling Render.

I have not used this pattern before, but only because I really have not had the need for many options to control a single method. I'd be interested to see an example where this was actually required in a real-world application.
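The "flags" style of option passing discussed above is not specific to C#. As a language-neutral illustration, here is a minimal sketch of the same combinable-options pattern using Python's `enum.Flag` (the `DisplayUserFlag` names and the `user` dict are invented for the example):

```python
from enum import Flag, auto

class DisplayUserFlag(Flag):
    NAME = auto()
    EMAIL = auto()
    PHONE = auto()

def display_user(user, flags):
    # Each flag independently switches one field on; flags combine with |.
    parts = []
    if DisplayUserFlag.NAME in flags:
        parts.append(user["name"])
    if DisplayUserFlag.EMAIL in flags:
        parts.append(user["email"])
    if DisplayUserFlag.PHONE in flags:
        parts.append(user["phone"])
    return ", ".join(parts)
```

Because `Flag` already overloads the bitwise operators, `display_user(user, DisplayUserFlag.NAME | DisplayUserFlag.EMAIL)` reads just like the C# call, with no hand-rolled operator overloads.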
https://blogs.msdn.microsoft.com/kevinpilchbisson/2007/11/30/coding-styles-bools-structs-or-enums/
CC-MAIN-2017-26
en
refinedweb
Here is the modified script that uses an action script instead of a share script. (@cvp What is the "start workflow" step in "pythonista script"? Do I have to add webbrowser.open("workflow://") at the end?)

from PIL import Image, ImageOps
import ui, clipboard
import webbrowser

def main():
    img = clipboard.get_image()
    if not img:
        print('No input image')
        return
    if not img.mode.startswith('RGB'):
        img = img.convert('RGB')
    gray_img = ImageOps.grayscale(img)
    clipboard.set_image(gray_img)

main()
webbrowser.open('workflow://')

import time
import appex
import clipboard
import webbrowser
from objc_util import ObjCClass, nsurl

UIApplication = ObjCClass('UIApplication')

url_scheme = 'workflow://x-callback-url/run-workflow?name=your_file&input=clipboard&x-success=pythonista://'
if appex.is_running_extension():
    app = UIApplication.sharedApplication()
    app.openURL_(nsurl(url_scheme))
else:
    webbrowser.open(url_scheme)
# assume your Workflow writes xxx in the clipboard to warn it is finished
text = clipboard.get()
while text != 'xxx':
    text = clipboard.get()
    time.sleep(0.3)
Once again I would like to thank you for your helpful comments and I will look at your "x-callback-url" some time later.

Before I knew Pythonista, I wrote a lot of Workflows, but since I use Pythonista, I've rewritten them all in Python, with only one exception for Workflows saving or getting a file to/from iCloud Drive, because Pythonista does not provide this functionality.

Would it be possible to interop with iCloud Drive using objc_util? Or with something like

I don't think so, I had already seen this import but it does not cover the iCloud Drive storage, only iCloud services.

To reveal a little more of the picture... ... My idea was to create a graphing Pythonista capability so Workflow might pass a CSV (file) into Pythonista and then do something with the graph (image) returned. In essence I wanted to add graphing to Workflow. I would expect Pythonista would be a good way to do it. Would Editorial do this better than Pythonista?

@cvp Please check the File Storage (Ubiquity) section of the page: You can access documents stored in your iCloud account by using the files property's dir method

@MartinPacker, I am not really sure about Editorial vs. Pythonista for the graphing; they could be equivalent. But my guess would be that Pythonista would be the superset of functionality. I just really wanted to mention the 'Pie Chart Demo.py' in the Examples/Plotting dir in Pythonista. Great demo of how to easily create a pie chart with some nice features. It's so easy to understand. Then I am pretty sure if you use a ui.ImageContext to draw into, you could return that image for the clipboard. Possibly matplotlib gives a more direct way. Just interesting

@ccc Hi, I had checked this page but they only support iCloud file storage, that is to say Apple apps' storage, like Notes, but not the iCloud Drive storage of all other apps, which the Workflow app does. In Workflow, you can save a file to the iCloud Drive of another app, like Documents, FileBrowser, Pages, Numbers etc...
@ccc It's exactly like what you can manually do in Pythonista by: edit → select a file → add to iCloud Drive. But automatically.

All this talk about iCloud illustrates one of the reasons why I'm not looking for a "write to file" solution. Clipboard would be fine, though "trashing" the clipboard should only be done if necessary. It seems that it is. Something for y'all to pick the bits out of... A reply from @WorkflowHQ on Twitter contained the following... "Also regarding sending images, Workflow can decode base64 encoded images." Does this help us here? Can we encode images as base64?
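Yes - base64 is in the Python standard library, so encoding is trivial. A minimal sketch (in Pythonista you would typically obtain the PNG bytes by saving a PIL image into an io.BytesIO buffer; plain bytes are used here to keep the sketch self-contained):

```python
import base64

def bytes_to_base64(data: bytes) -> str:
    # base64-encode raw bytes (e.g. a PNG) into an ASCII-safe string
    return base64.b64encode(data).decode('ascii')

def base64_to_bytes(text: str) -> bytes:
    # the inverse, as Workflow would do on its end
    return base64.b64decode(text)
```

The resulting string can be placed on the clipboard or passed through an x-callback-url parameter, and Workflow's base64-decoding support turns it back into an image.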
https://forum.omz-software.com/topic/3590/passing-an-image-back-to-workflow
For a homework assignment I need to write a function that computes the average value of an array of floating-point data: double average(double* a, int a_size) In the function, use a pointer variable, and not an integer index, to traverse the array elements. The code below is what I have so far but I don't know what I can do to make it better. Any help or comments will be appreciated! Dennis

#include <iostream>
using namespace std;

double average(double* a, int a_size)
{
    double total = 0.0;
    // traverse with a pointer, as the assignment requires
    for (double* p = a; p < a + a_size; ++p)
    {
        total += *p;
    }
    return total / a_size;
}

int main()
{
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/348092/help-improving-function
Ok, on the suggestion of Blackroot I am doing a Risk contest. I have everything completed, but before I post I wanted to get some feedback first on the player interface. Basically, I want to make sure there aren't any other member functions you would need to be able to easily create an AI for it. Other suggestions are fine as well. Here are player_interface.h and helpers.h.

Code:
#ifndef PLAYER_INTERFACE_H
#define PLAYER_INTERFACE_H
#include "helpers.h"
#include "player.h"

class PlayerInterface
{
    friend class Player;
public:
    //virtual methods must be overridden
    virtual std::string getName()const =0;
    // This is used by the framework to get the preferred name
    // for your AI player. You should return a string containing the name of your player.
    // If you duplicate an existing name, extra letters will be tacked onto your name
    // to make it unique.

    virtual void init()=0;
    // This method is called after all players have been added and the
    // players have been "shuffled" to start choosing countries.
    // Please note that the play order changes again, after all initial
    // armies are placed, before play begins.
    // You can use it for whatever initialization your player needs to do that can't
    // be done in the constructor.

    virtual std::string chooseCountry()=0;
    // This method is called when there are unclaimed countries available to claim.
    // You should return a string containing the name of the unclaimed country you wish to
    // claim. If you return an invalid name (spelling/case or already claimed) one will
    // be chosen for you at random. For comparison purposes, it's recommended you use the
    // country names as defined in helpers.h. There are several arrays, enums etc. at your
    // disposal.

    virtual std::string placeArmy ()=0;
    // This function is called in 2 circumstances:
    // 1st: at the beginning, after all countries have been claimed,
    // this function is called to place your remaining armies on
    // countries you own.
    // 2nd: this function is called at the beginning of your turn when you have new armies
    // to place.
    // Again, you should return a string containing the name of the country you wish to place the army on.
    // One army is placed at a time. Invalid names will result in an army being randomly placed in
    // a country you control.

    virtual std::vector<Card> redeemCards()=0;
    // This method is called if you are in possession of cards that can be redeemed
    // (either a matched set - 3 like cards, an unmatched set - 3 unlike cards, or
    // 2 cards and a wild card). If you wish to redeem your set, return a std::vector<Card>
    // to the framework with the 3 cards from your hand you wish to redeem. You are not
    // required to redeem your cards unless you have 5 or more cards. If you do not wish to redeem,
    // just return an empty vector.
    // If you return an invalid set, and you hold less than 5 cards, no action will be taken.
    // If you hold 5 or more cards, a valid set will be chosen for you.
    // Also note that the order the cards are entered into the vector is important:
    // if the country on the card matches a country you control, you receive a bonus of 2 armies
    // on that country (they will be placed automatically). You can only receive 1 bonus of 2
    // armies per trade-in. Therefore, if more than one of your cards has a country that
    // you control and you want the 2 armies to go on a particular country,
    // put that country first into your vector.

    virtual Action getAction()=0;
    // This method gets your actions for your turn. There is no set turn length as long
    // as you have armies to attack with. Therefore you determine, most of the time, when
    // your turn is over by submitting the appropriate action. (See helpers.h for actions.)
    // Basically you complete an Action struct with your action and submit it to the framework.
    // If you submit an invalid action, it will be treated as an end of turn without fortification.
    virtual int defendCountry(Action A)=0;
    // When you are on the defending end of an attack, you have the choice
    // of defending with 1 or 2 armies (1 or 2 dice); return your choice here.
    // If you return a number greater than 2, or if you only have 1 army in the country
    // and return 2, then 1 will be used.

    virtual int captureCountry(Action A)=0;
    // If you defeat all the armies in a country, you must now claim that country
    // by moving armies into it from the attacking country.
    // You must move at minimum the same number of armies that you attacked with, and at maximum 1 less
    // than the number of armies you have in the attacking country.
    // Return the number you wish to move. An invalid number results in the minimum move.

    virtual void notify(Action A)=0;
    // After every action a notification is sent to all of the players
    // showing the results. Invalid submissions will have been corrected
    // so that the actual results are shown here.
    // This is information only; use as you see fit.

    //non-virtual methods must NOT!!! be overridden
    std::vector<std::string> getCountriesByPlayer(std::string owner_name);
    // returns a vector of the names of the countries controlled
    // by owner_name

    std::vector<std::string> getAdjacent(std::string country_name);
    // returns a vector of the names of the countries touching/adjacent to country_name;
    // useful for deciding attacks/fortifications etc.

    std::string me();
    // Since the framework may change your player name to guarantee uniqueness,
    // use this function whenever you need to supply your own player name.
    // ex: std::vector<std::string> myCountries = getCountriesByPlayer(me());

    size_t getArmyCount(std::string country_name);
    // returns the number of armies in country_name

    std::string getOwner(std::string country_name);
    // returns the owner of country_name

    std::vector<std::string> getPlayers();
    // returns a vector of the names of the players in their current order.
    // The order is set twice: once before countries are chosen
    // and once before play begins.

private:
    Player* mpPlayer;
    // This is of no concern to you and you should not try to use this, but since it's here:
    // this is a pointer to the real Player class where most of the player info is stored.
    // The Player class also acts as a go-between for the player_interface (this class)
    // and the main game class. It handles all the validations so the game class only
    // gets good inputs and doesn't have to do a lot of error checking.
};
#endif

Code:
#ifndef HELPERS_H
#define HELPERS_H
#include <string>

enum CARD_TYPE {INFANTRY, CAVALRY, ARTILLERY, WILD};

const std::string CONTINENTS[] = {"North America","Europe","Asia","Africa","Australia","South America"};
const std::string COUNTRY_NAMES[] = {"};
enum COUNTRY_INDEX {, Wild, Bad_Index };

const COUNTRY_INDEX NORTH_AMERICA[] = {Alaska, Alberta, Central_America, Eastern_United_States, Greenland, Northwest_Territory, Ontario, Quebec, Western_United_States};
const COUNTRY_INDEX EUROPE[] = {Great_Britain, Iceland, Northern_Europe, Scandinavia, Southern_Europe, Ukraine, Western_Europe};
const COUNTRY_INDEX ASIA[] = {Afghanistan, China, India, Irkutsk, Japan, Kamchatka, Middle_East, Mongolia, Siam, Siberia, Ural, Yakutsk};
const COUNTRY_INDEX AFRICA[] = {Congo, East_Africa, Egypt, Madagascar, North_Africa, South_Africa};
const COUNTRY_INDEX AUSTRALIA[] = {Eastern_Australia, Indonesia, New_Guinea, Western_Australia};
const COUNTRY_INDEX SOUTH_AMERICA[] = {Argentina, Brazil, Peru, Venezuela};

const size_t NORTH_AMERICA_SIZE = 9;
const size_t NORTH_AMERICA_AWARD = 5;
const size_t EUROPE_SIZE = 7;
const size_t EUROPE_AWARD = 5;
const size_t ASIA_SIZE = 12;
const size_t ASIA_AWARD = 7;
const size_t AFRICA_SIZE = 6;
const size_t AFRICA_AWARD = 3;
const size_t AUSTRALIA_SIZE = 4;
const size_t AUSTRALIA_AWARD = 2;
const size_t SOUTH_AMERICA_SIZE = 4;
const size_t SOUTH_AMERICA_AWARD = 2;
const size_t COUNTRIES_SIZE = 42;

struct Action
{
    enum ACTION {ATTACK, FORTIFY, ENDTURN, CAPTURE, ADD, REMOVE, PLACEMENT};
    // ATTACK  - attack ToCountry from FromCountry with n armies. n = Armies, with value 1, 2 or 3
    // FORTIFY - move n armies from FromCountry to ToCountry and end the turn.
    //           n = Armies, with value 1 to AllInCountry-1
    // ENDTURN - ends the turn without fortifying position. Other values are ignored.
    ACTION action; // one of the above
    std::string FromCountry;
    std::string ToCountry;
    unsigned Armies; // 1, 2 or 3 for attack; n-1 for fortify, with n being the number of armies
                     // in FromCountry

    //RESULTS (to be filled in by framework, not player)
    int AttackerLosses;
    int DefenderLosses;
    bool CountryCaptured;
    std::string PlayerName;
    std::string FromPlayerName;
    std::string ToPlayerName;
};

struct Card
{
    Card(CARD_TYPE t, COUNTRY_INDEX c):type(t),country(c){}
    bool operator==(const Card& card)const{return this->type == card.type;}
    bool operator!=(const Card& card)const{return !((*this) == card);}
    bool operator< (const Card& card)const{return (int)type < (int)card.type;}
    CARD_TYPE type;
    COUNTRY_INDEX country;
};

// COUNTRY_INDEX getCountryIndex(std::string Name);
//...removed for posting purposes... not relevant anyway
bool isAdjacent(std::string c1, std::string c2);

#endif
https://cboard.cprogramming.com/contests-board/76317-code-review-upcoming-contest-printable-thread.html
Deploy a website using git in Ubuntu - Conrad Kramer

First, install git:

sudo apt-get install git

Next, put your website under version control with git:

cd /srv/www/helloworld
git init
git add application.py
git commit -m 'First commit'

A gitignore file is also a good idea. For a Flask app, put the following in .gitignore:

env
*.pyc

and add it to the repository:

git add .gitignore
git commit -m 'Added .gitignore file'

Now we are going to create a bare 'hub' repository that will act as an intermediary between the active deployment folder and the rest of the world.

sudo mkdir -p /srv/git
sudo chown -R $USER:$GROUP /srv/git
cd /srv/git
git clone --bare /srv/www/helloworld
cd helloworld.git
git remote rm origin

Next, add the hub as a remote for the deployment repository:

cd /srv/www/helloworld
git remote add hub /srv/git/helloworld.git

Put the following in /srv/git/helloworld.git/hooks/post-update:

#!/bin/sh
echo
echo "Pulling changes into deployment repository"
echo
git --git-dir /srv/www/helloworld/.git --work-tree /srv/www/helloworld pull hub master

and make sure that it is executable:

chmod 755 /srv/git/helloworld.git/hooks/post-update

Put the following in /srv/www/helloworld/.git/hooks/post-commit:

#!/bin/sh
echo
echo "Pushing changes to hub"
echo
git push hub

and make it executable:

chmod 755 /srv/www/helloworld/.git/hooks/post-commit

Since the hooks also need to reload uWSGI with root privileges, we build a small setuid helper. Install a compiler first:

sudo apt-get install build-essential

Next, put the following in ~/reload_uwsgi.c:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>

int main() {
    setuid(0);
    system("/sbin/initctl reload uwsgi");
    return 0;
}

Now to compile, install and configure the binary:

cd ~
sudo gcc reload_uwsgi.c -o /usr/local/bin/reload_uwsgi
rm reload_uwsgi.c
sudo chown root:root /usr/local/bin/reload_uwsgi
sudo chmod 4755 /usr/local/bin/reload_uwsgi

And lastly add this executable to our git hooks, which should look like the following:

/srv/git/helloworld.git/hooks/post-update:

#!/bin/sh
echo
echo "Pulling changes into deployment repository"
echo
git --git-dir /srv/www/helloworld/.git --work-tree /srv/www/helloworld pull hub master
/usr/local/bin/reload_uwsgi

/srv/www/helloworld/.git/hooks/post-commit:

#!/bin/sh
echo
echo "Pushing changes to hub"
echo
git push hub
/usr/local/bin/reload_uwsgi

Testing it out

Clone the repository on your local machine:

git clone user@server:/srv/git/helloworld.git

Make some changes, commit them to master, and push them to the server:

vim application.py
git commit -am 'Made some changes!'
git push origin master

and voila! The change should be reflected on the website.

Discuss on Hacker News here
https://jaytaylor.com/notes/node/1355261924000.html
Microcontroller Programming » Where is the NerdKit's button hooked up on the breadboard?

I'm trying to make a simple program where, when the button is pressed, an LED is turned on and off, and I'm not sure where the button is placed. Any help is appreciated.

Hi SamGiambalvo,

That is a great exercise you have in mind. Certainly a good way to start getting familiar with things. Where you put the button is pretty much up to you, as long as the rows you connect the push button to are all not being used. You need to wire up your pushbutton such that the microcontroller can detect when you push the button. It is up to you which pin you do this on; you just have to make the code reflect the correct pin. Our digital calipers DRO has a great explanation of push buttons, both in the text and in the video. Take a look through that and see if you understand how the push button is wired up. Let us know if you have any questions.

Note: the push button leads are a bit large, but we usually just bend them a bit and force them into the breadboard. It actually makes for a fairly snug fit.

Humberto

Humberto,

Thanks for the reply, but I just can't seem to get it working. Pin C of the button is connected to MCU 28, NO is connected to MCU 26, and NC is connected to MCU 25. When I run it, the LED immediately blinks and then turns off. Here is my code:

#define F_CPU 14745600
#include <avr/io.h>
#include <inttypes.h>
#include "../libnerdkits/delay.h"

int main() {
  DDRC |= (1<<PC4);
  PORTC |= (1<<PC5);
  if((PINC & (1<<PC5)) == 0) {
    PORTC |= (1<<PC4);
    delay_ms(1000);
    PORTC &= ~(1<<PC4);
  } else {
    PORTC &= ~(1<<PC4);
  }
  return 0;
}

Thanks again for your help,
SamGiambalvo

Hi Sam,

It looks like you are more or less on the right track with the code, but you don't really understand what is happening with the button. In the end, your MCU can only detect a voltage change on a pin.
You want to wire up your push button such that pushing the button changes the voltage on the pin. You have the pull-up resistor turned on on PC5; that is a great start. This means that PC5 will be high unless it is otherwise pulled low. You should connect your push button with the NO (normally open) lead connected to your input pin (PC5 in this case), the C (common) lead to GND, and the NC (normally closed) lead unconnected. This way, when you push the button, the NO and C leads become connected electrically, pulling the voltage on PC5 low. All this is explained in the DRO video tutorial I pointed you to earlier.
http://www.nerdkits.com/forum/thread/732/
Alpine size and Docker Layers — I am designing the containerization of applications at my company. As I look at containers I see a lot of

I'm trying to run a Flask app with Celery (worker + beat) on Docker Alpine using docker-compose. I want it

I have a private docker registry in k8 in the default namespace with tls at. By doing local port

I am quite new to bash (barely any experience at all) and I need some help with a bash script.

i'm in charge to migrate a legacy project from pip to pipenv. When the build is done, the dependencies does

Working with Aspose Word (v18.8.0) to convert Document to PDF. Code is working fine inside Visual Studio when i try

When I use curl --head to test my website, it return the server information. I followed this tutorial to

Looks like current version of alpine 3.9 is supporting python 3.8, but I am using python 3.6 and can't upgrade

So I have setup a alpine Linux docker ssh git server that runs CMD ["/usr/sbin/sshd", "-D"] and I'm trying to

I have a dockerfile for node js code as below FROM node:10.14-alpine as buildcontainer COPY source-code/config /home/app/config COPY source-code/src /home/app/src
https://dockerquestions.com/category/alpine/
$\newcommand{\bs}[1]{\boldsymbol{#1}} \newcommand{\argmax}[1]{\underset{#1}{\operatorname{arg\,max}}} \newcommand{\given}{\,|\,}$

Previously, I talked about trees, AdaBoost, and gradient boosting. To complete the series, I will talk about the two remaining popular methods, called bagging and random forest. Boosting, bagging, and random forest are examples of ensemble learning, where the decision is made not from one, but from a collection of - sometimes weak - classifiers. Personally, I am a huge fan of ensemble learning because it uses a similar idea as the Central Limit Theorem (CLT), which relies on the fact that the standard deviation (i.e. uncertainty) of the sample mean decreases at a rate proportional to $1/\sqrt{n}$, where $n$ is the sample size. This is analogous to having a committee, where each committee member is a classifier. In most cases, decisions made by a large committee of experts are usually more reliable and consistent than those made by a single person.

To be consistent with the previous posts, we will focus on classification trees first. Suppose that for a classification problem the response variable $Y$ takes on values $1,2,...,K$. The basic idea behind fitting a classification tree is to follow a top-down, greedy approach that successively splits the feature space, where each binary split of a terminal node is made to minimize the weighted Gini impurity measure: $$ w_1G_1 + w_2 G_2, $$ where $w_1$ and $w_2$ are weights associated with the two rectangles resulting from a binary split and $G_m$ is the Gini index defined by $$ G_m = \sum_{k=1}^K \hat{p}_{mk}(1-\hat{p}_{mk}), $$ where $\hat{p}_{mk}$ is the proportion of class $k$ in the terminal rectangle $R_m$.

There is a plethora of advantages to using trees. For example, the binary decision-making process has more resemblance to human decision-making. Also, there are not many "formulas" involved, making it more interpretable to non-experts than, say, linear regression. Unfortunately, a single tree usually results in low flexibility in fitting the data. Also, overfitting is an issue if the tree is split too many times.
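The Gini criterion above is cheap to compute; a minimal sketch (the function names are mine, not from any library):

```python
import numpy as np

def gini(labels):
    # G_m = sum_k p_k (1 - p_k) for the class proportions in one rectangle
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p * (1 - p)))

def weighted_gini(left, right):
    # w_1 G_1 + w_2 G_2, with weights proportional to rectangle sizes
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)
```

A pure node scores 0 and a 50/50 two-class node scores 0.5, so a split is chosen to drive the weighted sum down.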
If we had access to many training sets, then by the CLT the variance problem could be fixed by averaging the predictions of a tree fit to each set, but we generally do not have access to multiple training sets. Instead, we can use the bootstrap, by repeatedly sampling from the training data with replacement. Suppose we generate $N$ bootstrapped training sets. We then fit a classification tree on the $i$th bootstrapped training set to get a prediction $\hat{f}_{i}(\bs{x})$. Then we form a set containing all $N$ predictions from all bootstrapped samples. The final prediction is made by taking the majority vote, i.e.,
$$ \widehat{f}_{\text{bag}}(\bs{x}) = \argmax{k} \sum_{i=1}^N \left[I\left(\hat{f}_{i}\left(\bs{x}\right)= k\right)\right]\tag{1} $$
For a regression problem, the final prediction is simply the average of all predicted values. Another benefit of bagging is that the test error can be estimated right out of the bag - pun intended. The reason is that, on average, each bootstrapped sample will use only around 63% of the observations. The most straightforward way to see that is by Monte Carlo simulation. We implement it as follows.
import numpy as np

# Draw N random numbers from 1,...,N with replacement
def draw(N):
    boot = np.random.randint(1, N+1, N)
    return len(np.unique(boot))/N

# Draw N random numbers 10000 times
def Monte_Carlo(N, num_trials=10000):
    proportions = []
    for i in range(num_trials):
        proportions.append(draw(N))
    mean = np.mean(proportions)
    std = np.std(proportions)
    print('N:', N)
    print('Mean:', mean)
    print('Confidence Interval: ({}, {})'.format(mean-1.96*std, mean+1.96*std))
    print('--------')

# Run simulation for different sample sizes
for N in [5, 10, 100, 1000, 10000, 100000]:
    Monte_Carlo(N)

N: 5
Mean: 0.66938
Confidence Interval: (0.3894468769706593, 0.9493131230293407)
--------
N: 10
Mean: 0.6511899999999998
Confidence Interval: (0.455086305210126, 0.8472936947898737)
--------
N: 100
Mean: 0.6335280000000001
Confidence Interval: (0.5724497575679719, 0.6946062424320283)
--------
N: 1000
Mean: 0.6323075
Confidence Interval: (0.6130252092919436, 0.6515897907080564)
--------
N: 10000
Mean: 0.6321464299999999
Confidence Interval: (0.6259728110435051, 0.6383200489564947)
--------
N: 100000
Mean: 0.6321254230000001
Confidence Interval: (0.6301649792855756, 0.6340858667144246)
--------

Based on the results, as long as we have more than 100 data points, there will most likely be around $100\%-63\%=37\%$ of the original data points untouched by the bootstrap sample. But as I was writing the simulation, one thing struck me: it looks like the expected value is getting close to $1-e^{-1} \approx 0.63212$. Incredible! This suggests that a closed-form solution exists. To find it, let $N$ denote the total number of samples in the dataset. Also, denote by the random variable $X$ the proportion of distinct elements drawn.
Define the Bernoulli random variables $Y_i$ such that
$$ Y_i = \begin{cases} 1 & \text{ if the $i$th observation is drawn } \\ 0 & \text{ otherwise,} \end{cases} \quad \text{ for } i=1,...,N $$
We can now cleverly express $X$ as a combination of the $Y_i$'s, and using the linearity property of the expected value, we have
$$ X = \frac{1}{N}\sum_{i=1}^N Y_i \implies E(X) = E(Y_i). $$
For each $i$, we can compute the expected value of $Y_i$ as
$$ \begin{aligned} E(Y_i) &= P(Y_i=1) \\ &= 1 - P(Y_i=0) \\ &= 1 - P(\text{$i$th observation is not drawn}) \\ &= 1 - \left(\frac{N-1}{N}\right)^N \\ &= 1- \left( 1- \frac{1}{N}\right)^N \overset{N\to \infty}{\to} 1-e^{-1} \end{aligned} $$
So for each bagged tree $\hat{f}_{j}$, we let $O_j$ be the set of the roughly 37% of the data that are not used. The set $O_j$ is also referred to as the out-of-bag (OOB) samples for the $j$th tree. Now for training examples $\bs{x}^{(1)}, \bs{x}^{(2)}, ..., \bs{x}^{(N)}$, we can predict each one using only the trees for which that sample is out-of-bag, i.e. $\bs{x}^{(i)} \in O_j$. That is
$$ \hat{f}_{\text{CV}}(\bs{x}^{(i)}) = \argmax{k} \sum_{\{j\,:\,\bs{x}^{(i)}\in O_j\}} \left[I\left(\hat{f}_{j}\left(\bs{x}^{(i)}\right)= k\right)\right] $$
Given training labels $y^{(1)},y^{(2)},..., y^{(N)}$, we can conveniently compute the misclassification rate as an estimate for the test error:
$$ L = \frac{1}{N}\sum_{i=1}^N I\left(y^{(i)} \neq \hat{f}_{\text{CV}}(\bs{x}^{(i)})\right). $$
If we go a step back and find the probabilities that each training example belongs to any of the $K$ classes, we can use the multinomial cross entropy as the error measure. We will be testing on the Iris dataset. The following is data preparation.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
data = iris['data']
Y = iris['target']
print('Training data has dimension', data.shape)
print('Training label has dimension', Y.shape)

Training data has dimension (150, 4)
Training label has dimension (150,)

For demonstration purposes, let's use principal component analysis to reduce the dimension of the dataset to 2.

from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Function that standardizes the data
def standardize(data):
    scaler = StandardScaler()
    scaler.fit(data)
    scaled_data = scaler.transform(data)
    return scaled_data

# Define PCA transform function
def pca_transform(data, dimension=2):
    scaled_data = standardize(data)
    pca = PCA(n_components = dimension)
    pca.fit(scaled_data)
    x_pca = pca.transform(scaled_data)
    return x_pca, pca.explained_variance_ratio_

# PCA plot
X, explained_var = pca_transform(data)
print('Explained variance from Principal Component 1:', explained_var[0])
print('Explained variance from Principal Component 2:', explained_var[1])
plt.scatter(X[:,0], X[:,1], c=Y, cmap=plt.cm.Spectral, edgecolors='k')
plt.show()

Explained variance from Principal Component 1: 0.7277045209380135
Explained variance from Principal Component 2: 0.2303052326768062

Doing a PCA here is a good idea since the first two principal components capture about 95% of the total variance. Plus we can visualize the dataset in 2D! Throughout the rest of the post, a helper plot_decision_boundary(model, X, Y) is used to draw each fitted model's decision regions over the two components. We are ready to fit a bagging classifier. Here we will fit 100 bootstrapped trees and compute the OOB score discussed earlier.
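First, a sketch of what the plot_decision_boundary helper could look like, using a standard mesh-grid approach (the step size, padding, and colormap here are my choices, not necessarily the original implementation):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_decision_boundary(model, X, Y, step=0.02):
    # Evaluate the fitted model on a dense grid covering the data,
    # then shade each predicted class region and overlay the points.
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, step),
                         np.arange(y_min, y_max, step))
    Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.4)
    plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Spectral, edgecolors='k')
    plt.show()
```

Any object with a predict method over 2-D points works here, which is why the same call serves the bagged trees, the SVM, and logistic regression below.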
from sklearn.ensemble import BaggingClassifier

bag = BaggingClassifier(oob_score=True, n_estimators=100)
bag.fit(X, Y)
print('OOB score is', bag.oob_score_)
plot_decision_boundary(bag, X, Y)

OOB score is 0.9133333333333333

For comparison, let's look at the decision boundary from a Support Vector Machine, which is clearly a better choice for this dataset than bagged trees.

from sklearn.svm import SVC

svc = SVC()
svc.fit(X, Y)
plot_decision_boundary(svc, X, Y)

Let's check logistic regression.

from sklearn.linear_model import LogisticRegression

glm = LogisticRegression()
glm.fit(X, Y)
plot_decision_boundary(glm, X, Y)

One problem with using decision trees is overfitting. As seen from the decision boundary, the bagging classifier demonstrates slight overfitting, even though Iris is a very well behaved dataset. Random forest is an upgrade to bagged trees that works by decorrelating the trees. We know from the Central Limit Theorem that by averaging uncorrelated random variables, the variance goes down! The idea behind random forest is as follows: grow bootstrapped trees exactly as in bagging, but at each split consider only a random subset of the features (typically around $\sqrt{p}$ out of $p$ predictors), so that a few strong predictors cannot dominate every tree and the trees become less correlated. Let's do a GridSearch before fitting the model.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {'n_estimators': [10, 20, 30, 50],
              'max_depth': [1, 5, 10, 15, 20, None],
              'max_features': [0.5, 'auto', 'log2']}
grid = GridSearchCV(RandomForestClassifier(), param_grid, refit=True, verbose=0)
grid = grid.fit(X, Y)
print(grid.best_params_)
plot_decision_boundary(grid.best_estimator_, X, Y)

{'max_depth': 20, 'max_features': 'log2', 'n_estimators': 20}

Clearly, for a nice dataset like Iris, ensemble learning is not needed. A simple 2-split decision tree gets the job done very well after principal component analysis.
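As a footnote, the majority vote of equation (1) is easy to implement directly; a minimal sketch over an array of per-tree class predictions (the (n_trees, n_samples) layout is my convention):

```python
import numpy as np

def bagged_vote(tree_predictions):
    # tree_predictions: (n_trees, n_samples) array of integer class labels.
    # For each sample (column), count the votes and return the winning class.
    preds = np.asarray(tree_predictions)
    n_classes = preds.max() + 1
    return np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes).argmax(),
        axis=0, arr=preds)
```

For example, with three trees predicting [0, 1, 2], [0, 1, 1], and [1, 1, 2] on three samples, the vote returns [0, 1, 2].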
https://elanding.xyz/blog/2019/Ensemble.html
CC-MAIN-2020-05
en
refinedweb
#include <stdlib.h>

int system(const char *str);

The system() function passes the string pointed to by str as a command to the command processor of the operating system. If system() is called with a null pointer, it will return nonzero if a command processor is present, and zero otherwise. (Programs executed in unhosted environments will not have access to a command processor.) For all other cases, the return value of system() is implementation-defined, but typically, zero is returned if the command was successfully executed and a nonzero return value indicates an error. A related function is exit().
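As a quick illustration of the null-pointer probe described above, here is a minimal sketch (the function name shell_available is mine, not from the standard):

```c
#include <stdlib.h>

/* system(NULL) does not run any command; it only reports whether a
   command processor is available (nonzero) or not (zero). */
int shell_available(void) {
    return system(NULL);
}
```

In a hosted environment with a shell, shell_available() returns a nonzero value; in an unhosted environment it returns 0, and passing an actual command string would then be pointless.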
https://flylib.com/books/en/3.13.1.343/1/
CC-MAIN-2020-05
en
refinedweb
dockerfile 0.1.0

A library for manipulating Dockerfiles.

Usage

A simple usage example (save returns a Future, so main is marked async here so that await compiles):

import 'package:dockerfile/dockerfile.dart';

main() async {
  var dockerfile = new Dockerfile();
  dockerfile.from('google/dart', tag: dartVersion);
  dockerfile.run('pub', args: ['build']);
  dockerfile.add(somePath, otherPath);
  await dockerfile.save(saveDirectory);
}

Features and bugs

Please file feature requests and bugs at the issue tracker.

Changelog

0.1.0
- Dart 2

0.0.4
- tidied a few things

0.0.3
- Fix analyzer warnings

0.0.2
- tidy dependency versions

0.0.1
- Initial version, created by Stagehand

// Copyright (c) 2015, Anders Holmgren. All rights reserved. Use of this source code
// is governed by a BSD-style license that can be found in the LICENSE file.
library dockerfile.example;

main() {}

Use this package as a library

1. Depend on it. Add this to your package's pubspec.yaml file:

dependencies:
  dockerfile:

2. Import it:

import 'package:dockerfile/dockerfile.dart';

We analyzed this package on Jan 17, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
- Dart: 2.7.0
- pana: 0.13.4

Health suggestions

Fix lib/src/dockerfile.dart. (-7.71 points)
Analysis of lib/src/dockerfile.dart reported 16 hints, including:
- line 25 col 19: Unnecessary new keyword.
- line 28 col 50: Use = to separate a named parameter from its default value.
- line 29 col 19: Unnecessary new keyword.
- line 32 col 53: Use = to separate a named parameter from its default value.
- line 38 col 19: Unnecessary new keyword.

Fix lib/src/docker_command.dart. (-1 points)
Analysis of lib/src/docker_command.dart reported 2 hints:
- line 21 col 13: Unnecessary new keyword.
- line 97 col 30: Use = to separate a named parameter from its default value.

Maintenance suggestions

Package is getting outdated. (-29.04 points)
The package was last published 67 weeks ago.
https://pub.dev/packages/dockerfile
CC-MAIN-2020-05
en
refinedweb
Source: Deep Learning on Medium

Parts list

Here's the basic list of things we'll need to create.

- input data — what is getting encoded and decoded?
- an encoding function — there needs to be a network that takes an input and encodes it.
- a decoding function — there needs to be a network that takes the encoded input and decodes it.
- loss function — the autoencoder is good when the output of the decoded version is very close to the original input data (loss is small), and bad when the decoded version looks nothing like the original input.

The Approach

The simplest autoencoder looks something like this: x → h → r, where the function f(x) results in h, and the function g(h) results in r. We'll be using neural networks, so we don't need to calculate the actual functions.

Logically, step 1 will be to get some data. We'll grab MNIST from the Keras dataset library. It's comprised of 60,000 training examples and 10,000 test examples of digits 0–9. Next, we'll do some basic data preparation so that we can feed it into our neural network as our input set, x.

Then in step 2, we'll build the basic neural network model that gives us hidden layer h from x.

- We'll put together a single dense hidden layer that takes in x as input with a ReLU activation layer.
- Next, we'll pass the output of this layer into another dense layer, and run the output through a sigmoid activation layer.

Once we have a model, we'll be able to train it in step 3, and then in step 4, we'll visualize the output.

Let's put it together. First, let's not forget the necessary imports to help us create our neural network (keras), do standard matrix mathematics (numpy), and plot our data (matplotlib). We'll call this step 0.

# Importing modules to create our layers and model.
from keras.layers import Input, Dense
from keras.models import Model

# Importing standard utils
import numpy as np
import matplotlib.pyplot as plt

Step 1. Import our data, and do some basic data preparation.
Since we're not going to use labels here, we only care about the x values.

from keras.datasets import mnist

(train_xs, _), (test_xs, _) = mnist.load_data()

Next, we'll normalize them between 0 and 1. Since they're greyscale images, with values between 0 and 255, we'll represent the input as float32s and divide by 255. This means if the value is 255, it'll be normalized to 255.0/255.0 or 1.0, and so on and so forth.

# Note the '.' after the 255, this is correct for the type we're dealing with.
# It means do not interpret 255 as an integer.
train_xs = train_xs.astype('float32') / 255.
test_xs = test_xs.astype('float32') / 255.

Now think about this: we have images that are 28 x 28, with values between 0 and 1, and we want to pass them into a neural network layer as an input vector. What should we do? We could use a convolutional neural network, but in this simple case, we'll just use a dense layer. So how do we feed it in? We'll flatten each image into a single-dimensional vector of 784 x 1 values (28 x 28).

train_xs = train_xs.reshape(len(train_xs), np.prod(train_xs.shape[1:]))
test_xs = test_xs.reshape(len(test_xs), np.prod(test_xs.shape[1:]))

Step 2. Let's put together a basic network. We're simply going to create an encoding network and a decoding network. We'll put them together into a model called the autoencoder below. We'll also decrease the size of the encoding so we can get some of that data compression. Here we'll use 32 to keep it simple.

# Defining the level of compression of the hidden layer. Basically, as the
# input is passed through the encoding layer, it will come out smaller if you
# want it to find salient features. If I chose 784, there would be a
# compression factor of 1, or nothing.
encoding_dim = 32

input_img = Input(shape=(784,))

# This is the size of the output. We want to generate 28 x 28 pictures in the
# end, so this is the size we're looking for.
output_dim = 784

encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(output_dim, activation='sigmoid')(encoded)

Now create a model that accepts input_img as inputs and outputs the decoder layer. Then compile the model, in this case with adadelta as the optimizer and binary_crossentropy as the loss.

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

Step 3. Our model is ready to train. You'll be able to run this without a GPU; it doesn't take long. We'll call fit on the autoencoder model we created, passing in the x values for both the inputs and outputs, for 50 epochs, with a relatively large batch size (256). This will help it train somewhat quickly. We'll enable shuffle to prevent homogeneous data in each batch, and then we'll use the test values as validation data.

autoencoder.fit(train_xs, train_xs,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(test_xs, test_xs))

That's it. Autoencoder done. You'll see it should have a loss of about 0.69, meaning that the reconstruction we've created generally represents the input fairly well. But can't we take a look at it for ourselves?

Step 4. For this, we'll do some inference to grab our reconstructions from our input data, and then we'll display them with matplotlib. For this we want to use the predict method. Here's the thought process: take our test inputs, run them through autoencoder.predict, then show the originals and the reconstructions.

# Run your predictions and store them in a decoded_images list.
decoded_images = autoencoder.predict(test_xs)

Here's how you get that image above:

# We'll plot 10 images.
n = 10
plt.figure(figsize=(16, 3))
for i in range(n):
    # Show the originals
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(test_xs[i].reshape(28, 28))
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Show the reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_images[i].reshape(28, 28))
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

plt.show()
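The normalize-and-flatten preparation from step 1 can be sanity-checked in plain Python with a tiny 2 x 2 stand-in "image" (no Keras needed; the toy pixel values here are mine):

```python
# A 2x2 "image" with greyscale values 0..255 stands in for a 28x28 MNIST digit.
img = [[0, 128],
       [255, 64]]

# Normalize to [0, 1] and flatten row by row, as reshape(len(x), 784) does.
flat = [px / 255.0 for row in img for px in row]

# Undo the flattening to recover the original image shape.
side = len(img)
restored = [flat[i * side:(i + 1) * side] for i in range(side)]
print(flat)
print(restored)
```

The round trip is lossless: flattening only changes the shape, never the pixel values, which is why the decoder can be asked to reproduce the flattened input directly.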
https://mc.ai/autoencoder-neural-networks-what-and-how/
CC-MAIN-2020-05
en
refinedweb
I'm transferring MATLAB's imresize code to Python. Here is the Python version, using scipy.misc.imresize:

from scipy.misc import imresize
import numpy as np

dtest = np.array(([1,2,3],[4,5,6],[7,8,9]))
scale = 1.4
dim = imresize(dtest, 1/scale)

And the MATLAB version:

dtest = [1,2,3; 4,5,6; 7,8,9];
scale = 1.4;
dim = imresize(dtest, 1/scale);

The scipy.misc.imresize function is a bit odd for me. For one thing, this is what happens when I specify the sample 2D image you provided to a scipy.misc.imresize call on this image with a scale of 1.0. Ideally, it should give you the same image, but what we get is this (in IPython):

In [35]: from scipy.misc import imresize

In [36]: import numpy as np

In [37]: dtest = np.array(([1,2,3],[4,5,6],[7,8,9]))

In [38]: out = imresize(dtest, 1.0)

In [39]: out
Out[39]:
array([[  0,  32,  64],
       [ 96, 127, 159],
       [191, 223, 255]], dtype=uint8)

Not only does it change the type of the output to uint8, but it scales the values as well. For one thing, it looks like it makes the maximum value of the image equal to 255 and the minimum value equal to 0. MATLAB's imresize does not do this, and it resizes an image in the way we expect:

>> dtest = [1,2,3;4,5,6;7,8,9];
>> out = imresize(dtest, 1)

out =

     1     2     3
     4     5     6
     7     8     9

However, you need to be cognizant that MATLAB performs the resizing with anti-aliasing enabled by default. I'm not sure what scipy.misc.imresize does here, but I'll bet that there is no anti-aliasing enabled. As such, I probably would not use scipy.misc.imresize. The closest thing to what you want is either OpenCV's resize function or scikit-image's resize function. Both of these have no anti-aliasing. If you want to make both Python and MATLAB match each other, use the bilinear interpolation method. imresize uses bicubic interpolation by default, and I know for a fact that MATLAB uses custom kernels to do so, so it will be much more difficult to match their outputs.
See this post for some more informative results: MATLAB vs C++ vs OpenCV - imresize

For the best results, don't specify a scale; specify a target output size to reproduce results. MATLAB, OpenCV and scikit-image act differently from each other when given a floating point scale. I did some experiments, and by specifying a floating point size I was unable to get the results to match. Besides which, scikit-image does not support taking in a scale factor. As such, 1/scale in your case is close to a 2 x 2 output size, and so here's what you would do in MATLAB:

>> dtest = [1,2,3;4,5,6;7,8,9];
>> out = imresize(dtest, [2,2], 'bilinear', 'AntiAliasing', false)

out =

    2.0000    3.5000
    6.5000    8.0000

With Python OpenCV:

In [93]: import numpy as np

In [94]: import cv2

In [95]: dtest = np.array(([1,2,3],[4,5,6],[7,8,9]), dtype='float')

In [96]: out = cv2.resize(dtest, (2,2))

In [97]: out
Out[97]:
array([[ 2. ,  3.5],
       [ 6.5,  8. ]])

With scikit-image:

In [100]: from skimage.transform import resize

In [101]: dtest = np.array(([1,2,3],[4,5,6],[7,8,9]), dtype='uint8')

In [102]: out = resize(dtest, (2,2), order=1, preserve_range=True)

In [103]: out
Out[103]:
array([[ 2. ,  3.5],
       [ 6.5,  8. ]])
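To see why all three tools agree on [[2, 3.5], [6.5, 8]], here is a hand-rolled bilinear resize using half-pixel centers (the coordinate convention OpenCV uses). This is an illustrative sketch, not any library's actual code:

```python
# Bilinear resize with half-pixel centers, written out by hand for a
# 3x3 -> 2x2 case. Pure Python, no libraries needed.
src = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]

def bilinear_resize(src, out_h, out_w):
    in_h, in_w = len(src), len(src[0])
    sy, sx = in_h / out_h, in_w / out_w
    out = []
    for i in range(out_h):
        row = []
        # Map the output pixel center back to source coordinates.
        y = (i + 0.5) * sy - 0.5
        y0 = max(0, min(in_h - 2, int(y)))
        fy = y - y0
        for j in range(out_w):
            x = (j + 0.5) * sx - 0.5
            x0 = max(0, min(in_w - 2, int(x)))
            fx = x - x0
            # Interpolate horizontally on two rows, then vertically.
            top = src[y0][x0] * (1 - fx) + src[y0][x0 + 1] * fx
            bot = src[y0 + 1][x0] * (1 - fx) + src[y0 + 1][x0 + 1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

print(bilinear_resize(src, 2, 2))   # [[2.0, 3.5], [6.5, 8.0]]
```

Because the source values are linear in row and column, bilinear interpolation reproduces them exactly, so any library using this convention lands on the same 2 x 2 answer.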
https://codedump.io/share/xw6TsW7PMzUE/1/how-to-use-matlab39s-imresize-in-python
CC-MAIN-2017-43
en
refinedweb
I'm still exploring REST, node.js and web development in general. What I found out is that XMLHttpRequest is mostly (if not always) used with AJAX. As I learned, AJAX stands for Asynchronous JavaScript and XML. So my question is: should I be using XMLHttpRequest in my node.js project only when I want to do asynchronous parts on my webpage? Or does node.js HTTP also support asynchronous JavaScript? How can I balance the use of HTTP and XMLHttpRequest (or AJAX) well, so that my REST API stuff doesn't get too messy?

P.S. I kind of don't want to use AJAX because of XML. I have heard that XML is much heavier in data than JSON and isn't worth using anymore. Is it true? What would you recommend me to do?

Non-async on node? You're trying to build an endpoint API, so all the other cases of not using async should be thrown out the window. As soon as you have a single piece of non-async code in your Node.js project, it will freeze the entire process until it is complete. Remember, Node.js runs a single thread (theoretically), which means all the other concurrent users are going to get frozen. That's one way to make people really upset.

Say, for instance, you need to read a file from your Node.js server on a GET request from a client (let's say a browser). You want to make it a callback/promise; never do non-async with an API server, there is just no reason not to (in your case). Example below:

import * as express from "express";
import * as fs from 'fs';

let app = express();

app.get('/getFileInfo', function(req, res) {
    fs.readFile('filePath', 'UTF-8', function(err, data) {
        if (err) {
            console.log(err);
            res.json({error: err});
        } else {
            res.json({data: data});
        }
    });
});

// users will freeze while the file is read until it is done reading
app.get('/nonasync', function(req, res) {
    let data = fs.readFileSync('path', 'utf-8');
    res.json({data: data});
});

The exact same idea applies to your web browser.
If you are not going to do something async in the browser's JavaScript, the entire web application will be unresponsive, because it runs in the same manner: it has one main loop, and unless work happens in callbacks/promises/observables, the website will freeze.

AJAX is a much neater/nicer way to implement POST/GET/PUT/DELETE/GET:id against a server than a raw XMLHttpRequest. Both of these have the option to send and receive JSON, not only XML. AJAX (via a library) is safer due to handling browser compatibility differences, as XMLHttpRequest has some limitations in IE and Safari, I believe.

NOTE: if you're not using a framework with node.js, you should; it helps keep your endpoints neat and testable, and lets you pass the project on to others without them having to learn the way you implemented your req/res structure.

There are some frameworks for node, and some frameworks for web browsers you might like:

- Angular 2 (my preference, as I'm from a MEAN stack)
- ReactJS (created by big blue Facebook)
- KnockoutJS (simple and easy)

All the browser frameworks have their own implementations of RESTful APIs, but more are leaning towards observable objects.
https://codedump.io/share/g5wxnnzfEaDr/1/should-i-use-http-or-xmlhttprequest-on-nodejs-when
CC-MAIN-2017-43
en
refinedweb
An object stores its state in variables.

Definition 1. A variable is an item of data named by an identifier.

You must explicitly provide a name and a type for each variable you want to use in your program. The variable's name must be a legal identifier -- an unlimited series of Unicode characters that begins with a letter. You use the variable name to refer to the data that the variable contains. The variable's type determines what values it can hold and what operations can be performed on it. To give a variable a type and a name, you write a variable declaration, which generally looks like this:

type name

In addition to the name and type that you explicitly give a variable, a variable has scope. The section of code where the variable's simple name can be used is the variable's scope. The variable's scope is determined implicitly by the location of the variable declaration, that is, where the declaration appears in relation to other code elements.

The MaxVariablesDemo program, shown below, declares eight variables of different types within its main method. (The declarations and print statements were garbled in the source; they are reconstructed here from the program's output shown below.)

public class MaxVariablesDemo {
    public static void main(String args[]) {

        // integers
        byte largestByte = Byte.MAX_VALUE;
        short largestShort = Short.MAX_VALUE;
        int largestInteger = Integer.MAX_VALUE;
        long largestLong = Long.MAX_VALUE;

        // real numbers
        float largestFloat = Float.MAX_VALUE;
        double largestDouble = Double.MAX_VALUE;

        // other primitive types
        char aChar = 'S';
        boolean aBoolean = true;

        // display them all
        System.out.println("The largest byte value is " + largestByte);
        System.out.println("The largest short value is " + largestShort);
        System.out.println("The largest integer value is " + largestInteger);
        System.out.println("The largest long value is " + largestLong);
        System.out.println("The largest float value is " + largestFloat);
        System.out.println("The largest double value is " + largestDouble);
        if (Character.isUpperCase(aChar)) {
            System.out.println("The character " + aChar + " is upper case.");
        } else {
            System.out.println("The character " + aChar + " is lower case.");
        }
        System.out.println("The value of aBoolean is " + aBoolean);
    }
}

The largest byte value is 127
The largest short value is 32767
The largest integer value is 2147483647
The largest long value is 9223372036854775807
The largest float value is 3.40282e+38
The largest double value is 1.79769e+308
The character S is upper case.
The value of aBoolean is true

Every variable must have a data type. A variable's data type determines the values that the variable can contain and the operations that can be performed on it. For example, in the MaxVariablesDemo program, the declaration int largestInteger declares that largestInteger has an integer data type (int). Integers can contain only integral values (both positive and negative). You can perform arithmetic operations, such as addition, on integer variables.
The Java programming language has two categories of data types: primitive and reference. A variable of primitive type contains a value. The primitive data types, along with their sizes, are listed in Table 1.

Table 1: Primitive data types

- byte: 8-bit integer
- short: 16-bit integer
- int: 32-bit integer
- long: 64-bit integer
- float: 32-bit single-precision floating point
- double: 64-bit double-precision floating point
- char: 16-bit Unicode character
- boolean: true or false

For example:

int anInt = 4;

The digit 4 is a literal integer value; each of the other primitive types has its own forms of literal values as well.

Arrays, classes, and interfaces are reference types. The value of a reference-type variable, in contrast to that of a primitive type, is a reference to (an address of) the value or set of values represented by the variable. A reference is called a pointer, or a memory address, in other languages. The Java programming language does not support the explicit use of addresses like other languages do. You use the variable's name instead.

A program refers to a variable's value by the variable's name. For example, when it displays the value of the largestByte variable, the MaxVariablesDemo program uses the name largestByte. A name, such as largestByte, that's composed of a single identifier is called a simple name. Simple names are in contrast to qualified names, which a class uses to refer to a member variable that's in another object or class.

In the Java programming language, the following must hold true for a simple name: it must be a legal identifier, it must not be a keyword, a boolean literal (true or false), or the reserved word null, and it must be unique within its scope.

By convention, variable names begin with a lowercase letter, and class names begin with an uppercase letter. If a variable name consists of more than one word, the words are joined together, and each word after the first begins with an uppercase letter, like this: isVisible. The underscore character (_) is acceptable anywhere in a name, but by convention is used only to separate words in constants (because constants are all caps by convention and thus cannot be case-delimited).

A variable's scope is the region of a program within which the variable can be referred to by its simple name. Secondarily, scope also determines when the system creates and destroys memory for the variable.
Scope is distinct from visibility, which applies only to member variables and determines whether the variable can be used from outside of the class within which it is declared. Visibility is set with an access modifier.

The location of the variable declaration within your program establishes its scope and places it into one of four categories: member variable scope, local variable scope, parameter scope, and exception-handler parameter scope.

A member variable is a member of a class or an object. It is declared within a class but outside of any method or constructor. A member variable's scope is the entire declaration of the class. However, the declaration of a member needs to appear before it is used when the use is in a member initialisation expression.

You declare local variables within a block of code. In general, the scope of a local variable extends from its declaration to the end of the code block in which it was declared. In MaxVariablesDemo, all of the variables declared within the main method are local variables. The scope of each variable in that program extends from the declaration of the variable to the end of the main method -- indicated by the first right curly bracket } in the program code.

Parameters are formal arguments to methods or constructors and are used to pass values into methods and constructors. The scope of a parameter is the entire method or constructor for which it is a parameter.

Exception-handler parameters are similar to parameters but are arguments to an exception handler rather than to a method or a constructor. The scope of an exception-handler parameter is the code block between { and } that follows a catch statement.

Consider the following code sample:

if (...) {
    int i = 17;
    ...
}
System.out.println("The value of i = " + i);  // error

The final line won't compile because the local variable i is out of scope. The scope of i is the block of code between { and }. The i variable does not exist anymore after the closing }.
Either the variable declaration needs to be moved outside of the if statement block, or the println method call needs to be moved into the if statement block.

Local variables and member variables can be initialised with an assignment statement when they're declared. The data type of the variable must match the data type of the value assigned to it. The MaxVariablesDemo program provides initial values for all its local variables when they are declared. The local variable declarations from that program follow, with the initialisation code after each = sign:

// integers
byte largestByte = Byte.MAX_VALUE;
short largestShort = Short.MAX_VALUE;
int largestInteger = Integer.MAX_VALUE;
long largestLong = Long.MAX_VALUE;

// real numbers
float largestFloat = Float.MAX_VALUE;
double largestDouble = Double.MAX_VALUE;

// other primitive types
char aChar = 'S';
boolean aBoolean = true;

Parameters and exception-handler parameters cannot be initialised in this way. The value for a parameter is set by the caller.

You can declare a variable in any scope to be final. The value of a final variable cannot change after it has been initialised. Such variables are similar to constants in other programming languages. To declare a final variable, use the final keyword in the variable declaration before the type:

final int aFinalVar = 0;

The previous statement declares a final variable and initialises it, all at once. Subsequent attempts to assign a value to aFinalVar result in a compiler error. You may, if necessary, defer initialisation of a final local variable. Simply declare the local variable and initialise it later, like this:

final int blankfinal;
. . .
blankfinal = 0;

A final local variable that has been declared but not yet initialised is called a blank final. Again, once a final local variable has been initialised, it cannot be set, and any later attempts to assign a value to blankfinal result in a compile-time error.

When you declare a variable, you explicitly set the variable's name and data type. The Java programming language has two categories of data types: primitive and reference. A variable of primitive type contains a value. All of the primitive data types, along with their sizes, are shown in Table 1.
The location of a variable declaration implicitly sets the variable's scope, which determines what section of code may refer to the variable by its simple name. There are four categories of scope: member variable scope, local variable scope, parameter scope, and exception-handler parameter scope.

You can provide an initial value for a variable within its declaration by using the assignment operator (=). You can declare a variable as final. The value of a final variable cannot change after it's been initialised.

by Anatoliy Malyarenko
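The scope and final rules above can be tied together in one compact sketch (the class and method names are illustrative, not from the tutorial):

```java
public class ScopeDemo {
    // member variable: in scope for the entire class declaration
    static int callCount = 0;

    static int largestOf(int a, int b) {    // a, b: parameter scope
        callCount = callCount + 1;
        final int larger;                    // a "blank final" local variable
        if (a > b) {
            larger = a;                      // initialised exactly once
        } else {
            larger = b;
        }
        // larger = 0;  // would not compile: final already initialised
        return larger;                       // larger's scope ends with the method
    }

    public static void main(String[] args) {
        System.out.println(largestOf(3, 7));
    }
}
```

The blank final larger is legal because every path through the if/else assigns it exactly once before use; a second assignment anywhere would be a compile-time error.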
http://javafaq.nu/java-article1047.html
CC-MAIN-2017-43
en
refinedweb
Recently I was trying to manage my docker-compose service configuration (namely docker-compose.yml) with ruamel.yaml. Starting from:

version: '2'
services:
    srv1:
        image: alpine
        container_name: srv1
        volumes:
          - some-volume:/some/path
    srv2:
        image: alpine
        container_name: srv2
        volumes_from:
          - some-volume
volumes:
    some-volume:

I want to comment out the srv2 service so that I get:

version: '2'
services:
    srv1:
        image: alpine
        container_name: srv1
        volumes:
          - some-volume:/some/path
    #srv2:
    #    image: alpine
    #    container_name: srv2
    #    volumes_from:
    #      - some-volume
volumes:
    some-volume:

How can I do this?

If srv2 is a key that is unique among all of the mappings in your YAML, then the "easy" way is to loop over the lines, test if the stripped version of a line starts with srv2:, note the number of leading spaces, and comment out that line and the following lines until you reach a line that has the same or fewer leading spaces. The advantage of doing that, apart from being simple and fast, is that it can deal with irregular indentation (as in your example: 4 positions before srv1 and 6 before some-volume).

Doing this using ruamel.yaml is possible as well, but less straightforward. You have to know that when round-trip loading, ruamel.yaml normally attaches a comment to the last structure (mapping/sequence) that has been processed, and as a consequence commenting out srv1 in your example works completely differently from commenting out srv2 (i.e. the first key-value pair, if commented out, differs from all the other key-value pairs).
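The line-based "easy way" described above can be sketched in a few lines of plain Python (the function name is mine):

```python
def comment_out_key(yaml_text, key):
    """Comment out the block starting at the line whose stripped form begins
    with '<key>:', up to the next line at the same or lower indentation."""
    out, commenting, indent = [], False, 0
    for line in yaml_text.splitlines(True):
        stripped = line.lstrip()
        cur = len(line) - len(stripped)
        if commenting and stripped and cur <= indent:
            commenting = False          # left the block: stop commenting
        if not commenting and stripped.startswith(key + ':'):
            commenting, indent = True, cur
        out.append('#' + line if commenting and stripped else line)
    return ''.join(out)

example = (
    "version: '2'\n"
    "services:\n"
    "    srv1:\n"
    "        image: alpine\n"
    "    srv2:\n"
    "        image: alpine\n"
    "volumes:\n"
    "    some-volume:\n"
)
result = comment_out_key(example, 'srv2')
print(result)
```

This relies on the key being unique in the file, as noted above, and it works regardless of how (ir)regular the indentation is, because only relative indentation depth is compared.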
If you normalize your expected output to a four-position indent and add a comment before srv1 for analysis purposes, then load that, you can search for where the comment ends up:

from ruamel.yaml.util import load_yaml_guess_indent

yaml_str = """\
version: '2'
services:
    #a
    #b
    srv1:
        image: alpine
        container_name: srv1
        volumes:
        - some-volume:/some/path
    #srv2:
    #    image: alpine
    #    container_name: srv2
    #    volumes_from:
    #    - some-volume
volumes:
    some-volume:
"""

data, indent, block_seq_indent = load_yaml_guess_indent(yaml_str)
print('indent', indent, block_seq_indent)
c0 = data['services'].ca
print('c0:', c0)
c0_0 = c0.comment[1][0]
print('c0_0:', repr(c0_0.value), c0_0.start_mark.column)
c1 = data['services']['srv1']['volumes'].ca
print('c1:', c1)
c1_0 = c1.end[0]
print('c1_0:', repr(c1_0.value), c1_0.start_mark.column)

which prints:

indent 4 0
c0: Comment(comment=[None, [CommentToken(), CommentToken()]], items={})
c0_0: '#a\n' 4
c1: Comment(comment=[None, None], items={}, end=[CommentToken(), CommentToken(), CommentToken(), CommentToken(), CommentToken()])
c1_0: '#srv2:\n' 4

So you "only" have to create the first type of comment (c0) if you comment out the first key-value pair, and you have to create the other (c1) if you comment out any other key-value pair. The start_mark is a StreamMark (from ruamel/yaml/error.py), and the only important attribute of that instance when creating comments is column. Fortunately this is made slightly easier than shown above, as it is not necessary to attach the comments to the "end" of the value of volumes; attaching them to the end of the value of srv1 has the same effect. In the following, comment_block expects a list of keys that is the path to the element to be commented out.
import sys
from copy import deepcopy
from ruamel.yaml import round_trip_dump
from ruamel.yaml.util import load_yaml_guess_indent
from ruamel.yaml.error import StreamMark
from ruamel.yaml.tokens import CommentToken

yaml_str = """\
version: '2'
services:
    srv1:
        image: alpine
        container_name: srv1
        volumes:
        - some-volume:/some/path
    srv2:
        image: alpine
        container_name: srv2   # second container
        volumes_from:
        - some-volume
volumes:
    some-volume:
"""

def comment_block(d, key_index_list, ind, bsi):
    parent = d
    for ki in key_index_list[:-1]:
        parent = parent[ki]
    # don't just pop the value for key_index_list[-1]; that way you lose comments
    # in the original YAML. Instead, deepcopy and delete what is not needed
    data = deepcopy(parent)
    keys = list(data.keys())
    found = False
    previous_key = None
    for key in keys:
        if key != key_index_list[-1]:
            if not found:
                previous_key = key
            del data[key]
        else:
            found = True
    # now delete the key and its value
    del parent[key_index_list[-1]]
    if previous_key is None:
        if parent.ca.comment is None:
            parent.ca.comment = [None, []]
        comment_list = parent.ca.comment[1]
    else:
        comment_list = parent[previous_key].ca.end = []
        parent[previous_key].ca.comment = [None, None]
    # start_mark can be the same for all lines; only the column attribute is used
    start_mark = StreamMark(None, None, None, ind * (len(key_index_list) - 1))
    for line in round_trip_dump(data, indent=ind, block_seq_indent=bsi).splitlines(True):
        comment_list.append(CommentToken('#' + line, start_mark, None))

for srv in ['srv1', 'srv2']:
    data, indent, block_seq_indent = load_yaml_guess_indent(yaml_str)
    comment_block(data, ['services', srv], ind=indent, bsi=block_seq_indent)
    round_trip_dump(data, sys.stdout, indent=indent,
                    block_seq_indent=block_seq_indent,
                    explicit_end=True)

which prints:

version: '2'
services:
    #srv1:
    #    image: alpine
    #    container_name: srv1
    #    volumes:
    #    - some-volume:/some/path
    srv2:
        image: alpine
        container_name: srv2   # second container
        volumes_from:
        - some-volume
volumes:
    some-volume:
...
version: '2'
services:
    srv1:
        image: alpine
        container_name: srv1
        volumes:
        - some-volume:/some/path
    #srv2:
    #    image: alpine
    #    container_name: srv2   # second container
    #    volumes_from:
    #    - some-volume
volumes:
    some-volume:
...

(The explicit_end=True is not necessary; it is used here to get some demarcation between the two YAML dumps automatically.)

Removing the comments this way can be done as well. Recursively search the comment attributes (.ca) for a commented-out candidate (maybe giving some hints on where to start). Strip the leading # from the comments and concatenate, then round_trip_load. Based on the column of the comments, you can determine where to attach the uncommented key-value pair.
https://codedump.io/share/MEwsp6UnZ3xD/1/how-to-comment-out-a-yaml-section-using-ruamelyaml
CC-MAIN-2017-43
en
refinedweb
I just ran across a post by John Baez pointing to an article by Alan Frieze on random minimum spanning trees.

Here's the problem.

- Create a complete graph with n nodes, i.e. connect every node to every other node.
- Assign each edge a uniform random weight between 0 and 1.
- Find the minimum spanning tree.
- Add up the weights of the edges in the minimum spanning tree.

The surprise is that as n goes to infinity, the expected value of the process above converges to the Riemann zeta function at 3, i.e.

ζ(3) = 1/1³ + 1/2³ + 1/3³ + …

Incidentally, there are closed-form expressions for the Riemann zeta function at positive even integers. For example, ζ(2) = π² / 6. But no closed-form expressions have been found for odd integers.

Simulation

Here's a little Python code to play with this.

import networkx as nx
from random import random

N = 1000
G = nx.Graph()
for i in range(N):
    for j in range(i+1, N):
        G.add_edge(i, j, weight=random())

T = nx.minimum_spanning_tree(G)
edges = T.edges(data=True)

print( sum([e[2]["weight"] for e in edges]) )

When I ran this, I got 1.2307, close to ζ(3) = 1.20205…

I ran this again, putting the code above inside a loop, averaging the results of 100 simulations, and got 1.19701. That is, the distance between my simulation result and ζ(3) went from 0.03 to 0.003.

There are two reasons I wouldn't get exactly ζ(3). First, I'm only running a finite number of simulations (100), so I'm not computing the expected value exactly, but only approximating it. (Probably. As in PAC: probably approximately correct.) Second, I'm using a finite graph, of size 1000, not taking a limit as graph size goes to infinity.

My limited results above suggest that the first reason accounts for most of the difference between simulation and theory. Running 100 replications cut the error down by a factor of 10. This is exactly what you'd expect from the central limit theorem.
This suggests that for graphs as small as 1000 nodes, the expected value is close to the asymptotic value. You could experiment with this, increasing the graph size and increasing the number of replications. But be patient. It takes a while for each replication to run.

Generalization

The paper by Frieze considers more than the uniform distribution. You can use any non-negative distribution with finite variance whose cumulative distribution function F is differentiable at zero. The more general result replaces ζ(3) with ζ(3) / F′(0).

We could, for example, replace the uniform distribution on weights with an exponential distribution. In this case the distribution function is 1 − exp(−x), and its derivative at the origin is 1, so our simulation should still produce approximately ζ(3). And indeed it does. When I took the average of 100 runs with exponential weights, I got a value of 1.2065.

There's a little subtlety around using the derivative of the distribution at 0 rather than the density at 0. The derivative of the distribution (CDF) is the density (PDF), so why not just say density? One reason would be to allow the most general probability distributions, but a more immediate reason is that we're up against a discontinuity at the origin. We're looking at non-negative distributions, so the density has to be zero to the left of the origin. When we say the derivative of the distribution at 0, we really mean the derivative at zero of a smooth extension of the distribution. For example, the exponential distribution has density 0 for negative x and density exp(−x) for non-negative x. Strictly speaking, the CDF of this distribution is 1 − exp(−x) for non-negative x and 0 for negative x. The left and right derivatives are different, so the derivative doesn't exist. By saying the distribution function is simply 1 − exp(−x) for all x, we've used a smooth extension from the non-negative reals to all reals.
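The same experiment can be run without networkx, using only the standard library. The sketch below implements Prim's algorithm directly (names are mine; this is a simplification of the simulation above, not its original code):

```python
import random

def random_mst_weight(n, rng):
    """Total MST weight of a complete graph on n nodes with
    uniform(0, 1) edge weights, via Prim's algorithm."""
    # Draw all pairwise edge weights up front.
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = rng.random()
    in_tree = [False] * n
    dist = [float('inf')] * n   # cheapest edge connecting each node to the tree
    dist[0] = 0.0
    total = 0.0
    for _ in range(n):
        # Pick the cheapest node not yet in the tree.
        u = min((v for v in range(n) if not in_tree[v]), key=dist.__getitem__)
        in_tree[u] = True
        total += dist[u]
        for v in range(n):
            if not in_tree[v] and w[u][v] < dist[v]:
                dist[v] = w[u][v]
    return total

rng = random.Random(0)
print(random_mst_weight(300, rng))   # roughly zeta(3) ~ 1.202
```

Even at n = 300 a single run lands close to ζ(3), consistent with the observation above that finite-size effects are already small at this scale.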
2 thoughts on "Random minimum spanning trees"

What version of python did you use to run this program sketch?

Python 3.5.2
https://www.johndcook.com/blog/2017/08/09/random-minimum-spanning-trees/
CC-MAIN-2017-43
en
refinedweb
Re: Help needed: sending complex structures

--- In soaplite@y..., Paul Kulchenko <paulclinger@y...> wrote:

Hi, Adrian!

> I have a similar service that doesn't have the Exchange value, and
> have fewer return values, and that worked perfectly. I suspect the
> schema xsi:type="types:Exchanges" have something to do with it.

You're right. The way SOAP::Lite works now is that for an unknown type,
it tries to parse complex data types as described in SOAP spec, and
since there is no information on how to process an unknown simple type,
it complains and stops processing.

I won't tell you what to do for your current version, but in the new
version (should be released today/tomorrow) you can do:

package EncodedTypes;

sub as_TickDirection { $_[1] }
sub as_Exchanges { $_[1] }

package main;

.......

$soap->deserializer->xmlschemas->{''}
  = 'EncodedTypes';

So, you bind the xmlnamespace to a class that will process datatypes in
that namespace. You can handle both complex and simple datatypes, but
you don't need to do it for complex ones as long as the default decoding
is ok for you. I specified handlers for two datatypes: TickDirection
and Exchanges, and just return the value. Any processing can apply; you
can return a complex datastructure or do whatever you want.

The current version will also look for handlers in the SOAP::Serializer class
if no separate classes are specified, so you can also put them in
SOAP::Serializer (or an inherited class), but I'd rather keep them
separate, esp. because when SOAP::Lite gets full XML Schema support,
you won't need to change your code to get the new functionality (I hope :)).

Watch for a new version and let me know how it works for you. I will
also include this code as an example if you don't mind :).

Best wishes, Paul.

--- adrian@c... wrote:
> Hi Paul,
>
> Thanks! That worked perfectly. I have come across a second
> problem, however; can you please advise on what I should do?
> I have called a web service, and by a tunneling application I can see
> I get the results back, but when I call:
>
> $s->WebService($var1);
>
> Perl terminates, with the error message being the result of the
> [rest of the quoted message truncated]
https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/657?var=1
CC-MAIN-2017-43
en
refinedweb
Code::Blocks User forums => Help => Topic started by: Speculi on August 23, 2007, 06:42:03 pm

Title: CB can't find Header included in a Header
Post by: Speculi on August 23, 2007, 06:42:03 pm

I tried to compile an Ogre3D Project with CB and I tried to compile the CB Plugins from SVN; both give me the same error. CB can't find the Header Files which are included in another Header File. I'm not that experienced with C++ and not sure if it is a CB-related problem or if there are mistakes in the Code, so here is an example.

C:\OgreSDK\include is in my search path
in C:\OgreSDK\samples\example.cpp there is OIS.h included:

#include <OIS\OIS.h>

No Problem this far, but in OIS.h there are some Headers included like:

#include "OISmouse.h"

CB can't find those files, but all the OIS Headers are in the same directory. I thought CB should look for these files in the same directory as the file where they were included?

I got the same problem when I wanted to compile CB from source. CB itself compiled fine, but when I tried to compile the Plugins, CB couldn't find the needed Headers.

I'm running CB under Windows XP and wx2.4.8.0 with MinGW.
http://forums.codeblocks.org/index.php?action=printpage;topic=6759.0
CC-MAIN-2019-51
en
refinedweb
Asynchronous Task Execution Using Redis and Spring Boot

In this article, we are going to learn how to use Spring Boot 2.x and Redis to execute asynchronous tasks, with the final code demonstrating the steps described in this post.

You may also like: Spring and Threads: Async

Spring/Spring Boot

Spring is the most popular framework available for Java application development. As such, Spring has one of the largest open-source communities. Besides that, Spring provides extensive and up-to-date documentation that covers the inner workings of the framework and sample projects on their blog, and there are 100K+ questions on StackOverflow.

In the early days, Spring only supported XML-based configuration, and because of that, it was prone to much criticism. Later, Spring introduced annotation-based configuration, which changed everything. Spring 3.0 was the first version that supported the annotation-based configuration. In 2014, Spring Boot 1.0 was released, completely changing how we look at the Spring framework ecosystem. A more detailed timeline can be found here.

Redis

Redis is one of the most popular NoSQL in-memory databases. Redis supports different types of data structures, e.g. Set, Hash table, List, and simple key-value pairs, to name a few. The latency of a Redis call is sub-millisecond, and it supports replica sets; that makes it even more attractive to the developer community.
Why Asynchronous Task Execution

A typical API call consists of five things:

- Execute one or more database (RDBMS/NoSQL) queries
- Perform one or more operations on some cache system (in-memory, distributed, etc.)
- Do some computation (it could be some data crunching or math)
- Call some other service(s) (internal/external)
- Schedule one or more tasks to be executed at a later time or immediately, but in the background

A task can be scheduled at a later time for many reasons. For example, an invoice must be generated 7 days after the order creation or shipment. Similarly, email notifications need not be sent immediately, so we can delay them.

With these real-world examples in mind, sometimes we need to execute tasks asynchronously to reduce API response time. For example, if we delete 1K+ records at once in the same API call, the API response time would certainly increase. To reduce the response time, we can run a task in the background that deletes those records.
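The last item above, running work in the background, is the heart of the idea: hand the slow work to another thread and return the API response immediately. Stripped of Spring and Rqueue, the pattern looks like this (an illustrative sketch with made-up names, not Rqueue's API):

```java
import java.util.concurrent.CompletableFuture;

class BackgroundWorkSketch {
    // Simulates an API handler: kick off the slow work asynchronously and
    // return a response without waiting for it. A real handler would ignore
    // the returned future; we keep it here so the behavior can be observed.
    static CompletableFuture<Void> handle(Runnable slowWork) {
        CompletableFuture<Void> pending = CompletableFuture.runAsync(slowWork);
        // ... build and return the HTTP response here, before slowWork finishes ...
        return pending;
    }
}
```

Queue-based systems like Rqueue take the same idea further by persisting the task, so it survives restarts instead of living only on a thread.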
As it's not scalable, we would need some scalable system that can execute tasks at a given time or interval without any performance problems. There are many ways to scale in this way, like run tasks in a batched fashion or operate tasks on a particular subset of users/regions. Another way could be to run a specific task at a given time without depending on other tasks, like serverless function. A delayed queue can be used in such cases where as soon as the timer reaches the scheduled time a job would be triggered. There’re many queuing systems/software available but very few of them provides this feature, like SQS which provides a delay of 15 minutes, not an arbitrary delay like 7 hours or 7 days, etc. Rqueue Rqueue is a message broker built for the spring framework that stores data in Redis and provides a mechanism to execute a task at any arbitrary delay. Rqueue is backed by Redis since Redis has some advantages over the widely used queuing systems like Kafka, SQS. In most of the web applications backend, Redis is used to store either cache data or for some other purpose. In today's world, 8.4% web applications are using the Redis database. Generally, for a queue, we use either Kafka/SQS or some other systems these systems bring an additional overhead in different dimensions e.g money which can be reduced to zero using Rqueue and Redis. Apart from the cost if we use Kafka then we need to do infrastructure setup, maintenance i.e. more ops, as most of the applications are already using Redis so we won’t have ops overhead, in fact, same Redis server/cluster can be used with Rqueue. Rqueue supports an arbitrary delay Message Delivery Rqueue guarantee at-least-once message delivery as long data is not lost in the database. Read about it more at Introducing Rqueue Tools we will need: 1. Any IDE 2. Gradle 3. Java 4. Redis We're going to use Spring Boot for simplicity. We'll create a Gradle project from the Spring Boot initializer at. For dependencies, we will need: 1. 
Spring Data Redis 2. Spring Web 3. Lombok and any others. The directory/folder structure is shown below: We’re going to use the Rqueue library to execute any tasks with any arbitrary delay. Rqueue is a Spring-based asynchronous task executor, that can execute tasks at any delay, it’s built upon the Spring messaging library and backed by Redis. We’ll add the dependency of Rqueue spring boot starter with com.github.sonus21:rqueue-spring-boot-starter:1.2-RELEASE dependencies { implementation 'org.springframework.boot:spring-boot-starter-data-redis' implementation 'org.springframework.boot:spring-boot-starter-web' implementation 'com.github.sonus21:rqueue-spring-boot-starter:1.2-RELEASE' compileOnly 'org.projectlombok:lombok' annotationProcessor 'org.projectlombok:lombok' providedRuntime 'org.springframework.boot:spring-boot-starter-tomcat' testImplementation('org.springframework.boot:spring-boot-starter-test') { exclude group: 'org.junit.vintage', module: 'junit-vintage-engine' } } We need to enable Redis Spring Boot features. For testing purposes, we will enable WEB MVC as well. Update the application file as: @SpringBootApplication @EnableRedisRepositories @EnableWebMvc public class AsynchronousTaskExecutorApplication { public static void main(String[] args) { SpringApplication.run(AsynchronousTaskExecutorApplication.class, args); } } Adding tasks using Rqueue is very simple. We need to annotate a method with RqueueListener. The RqueuListener annotation has a few fields that can be set based on the use case. For delayed tasks, we need to set delayedQueue="true" and value must be provided; otherwise, it'll be ignored. The value is the name of a given queue. Set deadLetterQueue to push tasks to another queue. Otherwise, the task will be discarded on failure. We can also set how many times a task should be retried using the numRetries field. 
Create a Java file named MessageListener and add some methods to execute tasks:

@Component
@Slf4j
public class MessageListener {
  @RqueueListener(value = "${email.queue.name}") (1)
  public void sendEmail(Email email) {
    log.info("Email {}", email);
  }

  @RqueueListener(delayedQueue = "true", value = "${invoice.queue.name}") (2)
  public void generateInvoice(Invoice invoice) {
    log.info("Invoice {}", invoice);
  }
}

We will need Email and Invoice classes to store email and invoice data, respectively. For simplicity, the classes only have a handful of fields.

Invoice.java:

import lombok.Data;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Invoice {
  private String id;
  private String type;
}

Email.java:

import lombok.Data;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class Email {
  private String email;
  private String subject;
  private String content;
}
@RestController @RequiredArgsConstructor(onConstructor = @__(@Autowired)) @Slf4j public class Controller { private @NonNull RqueueMessageSender rqueueMessageSender; @Value("${email.queue.name}") private String emailQueueName; @Value("${invoice.queue.name}") private String invoiceQueueName; @Value("${invoice.queue.delay}") private Long invoiceDelay; @GetMapping("email") public String sendEmail( @RequestParam String email, @RequestParam String subject, @RequestParam String content) { log.info("Sending email"); rqueueMessageSender.put(emailQueueName, new Email(email, subject, content)); return "Please check your inbox!"; } @GetMapping("invoice") public String generateInvoice(@RequestParam String id, @RequestParam String type) { log.info("Generate invoice"); rqueueMessageSender.put(invoiceQueueName, new Invoice(id, type), invoiceDelay); return "Invoice would be generated in " + invoiceDelay + " milliseconds"; } } Add the following in application.properties file: Now, we can run this application. Once the application starts successfully, you can browse this link. In the log, we can see the email task is being executed in the background: Below is invoice scheduling after 30 seconds: Conclusion We can now schedule tasks using Rqueue without much boiler code! We made important considerations when configuring and using the Rqueue library. One important thing to keep in mind is: Whether a task is a delayed task or not, by default, it's assumed that tasks need to be executed as soon as possible. The complete code for this post can be found at my GitHub repo. If you found this post helpful, please share it with your friends and colleagues, and don't forget to give it a thumbs up! Further Reading Spring Boot: Creating Asynchronous Methods Using @Async Annotation Spring and Threads: Async Distributed Tasks Execution and Scheduling in Java, Powered by Redis Published at DZone with permission of Sonu Kumar . See the original article here. 
Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/asynchronous-task-executor-using-redis-and-spring
CC-MAIN-2019-51
en
refinedweb
JavaIME

Turn your Sublime text into a java completion text editor.

Details

Installs
- Total 30K
- Win 17K
- OS X 6K
- Linux 7K

Readme
- Source
- raw.githubusercontent.com

Java Import Made Easy (JavaIME)

About

This Sublime text package makes it easy to import virtually all Java packages, gives method completion suggestions for Java methods, and lets you easily instantiate objects and create action listener methods.

Check out the Intellekt plugin. It is meant to support various languages including Java and was originally meant to be a replacement for JavaIME. It provides intellisense and code documentation features for Java and other languages.

Screenshot

How to Use

You can use a lot of JavaIME features in the following ways.

Example Import

Note you can also press ENTER rather than TAB

To import the java.util.Collections package, type COLLECTIONS (in uppercase). Suggestions should already be popping up; then press TAB to select. The package will be automatically imported.

Auto-Complete listener interfaces

To show a listener interface completion, type L- followed by the listener interface name. e.g. L-Focus TAB should show FocusListener

Class Instance

Type I-class name e.g. I-Scanner should show

```java
Scanner input = new Scanner(System.in);
```

#### Listener Events

Type E- followed by event and press tab. e.g. E-act then press TAB should show

```java
public void actionPerformed(ActionEvent e){
    // contents
}
```

Try Catch

You can also press try then TAB to get

```java
try {

} catch (Exception err) {
    System.out.println(err.getMessage());
}
```

Methods

Methods pop up by default so you can just press TAB or ENTER.

Static properties

Type S- followed by method name. e.g.
Type s-magenta then TAB to get MAGENTA

Class declaration

- pc -> generates a public class declaration
- pcm -> generates the public class declaration with a main method
- pcc -> generates the public class declaration with a constructor
- pccm -> generates the public class declaration with a main method and constructor
- pcfx -> generates a public class declaration for JavaFX and a main method.

Example

Assume we have a file named Test.java

Typing pc then TAB or ENTER should produce

```java
public class Test {

}
```

Typing pcc then TAB or ENTER should produce

```java
public class Test {

    // Constructor
    public Test(){

    }
}
```

Typing pcm then TAB or ENTER should produce

```java
public class Test {

    public static void main(String[] args){

    }
}
```

Typing pcfx then TAB or ENTER should produce

```java
public class Test extends Application{

    @Override
    public void start(Stage primaryStage) throws Exception {
        // Contents
    }

    public static void main(String[] args){
        launch(args);
    }
}
```

… and so on!

Installation

Using Package Control: search for JavaIME and click it. The package will be installed and ready for use.

Using Git

Locate your Sublime Text 2 Packages directory by using the menu item Preferences -> Browse Packages.... While inside the Packages directory, clone the theme repository using the command below:

git clone

Download Manually

- Download the files using the GitHub .zip download option
- Unzip the files
- Copy the folder to your Sublime Text Packages directory

Contributing

All contributions are welcome. Fork me on GitHub and create a pull request. Any suggestions or bugs, please let me know.

License

© 2015 Taiwo Kareem | taiwo.kareem36@gmail.com. Read license.txt

Acknowledgements

I'd first like to say a very big thank you to God my creator. Without him, this wouldn't be possible.
https://packagecontrol.io/packages/JavaIME
CC-MAIN-2019-51
en
refinedweb
23 December 2017   7 comments   Python

This is an update to an old blog post from 2006 called Fastest way to uniquify a list in Python. But this time, for Python 3.6.

Why? Because Python 3.6 preserves the order when inserting keys to a dictionary. How? Because the way dicts are implemented in 3.6 is different, and as an implementation detail the order gets preserved. Then, in Python 3.7, which isn't released at the time of writing, that order preserving is guaranteed.

Anyway, Raymond Hettinger just shared a neat little way to uniqify a list. I thought I'd update my old post from 2006 to add list(dict.fromkeys('abracadabra')).

Reminder, there are two ways to uniqify a list. Order preserving and not order preserving. For example, the unique letters in peter are p, e, t, r in their "original order". As opposed to t, e, p, r.

def f1(seq):
    # Raymond Hettinger
    hash_ = {}
    [hash_.__setitem__(x, 1) for x in seq]
    return hash_.keys()

def f3(seq):
    # Not order preserving
    keys = {}
    for e in seq:
        keys[e] = 1
    return keys.keys()

def f5(seq, idfun=None):
    # Alex Martelli ******* order preserving
    if idfun is None:
        def idfun(x): return x
    seen = {}
    result = []
    for item in seq:
        marker = idfun(item)
        # in old Python versions:
        # if seen.has_key(marker)
        # but in new ones:
        if marker not in seen:
            seen[marker] = 1
            result.append(item)
    return result

def f5b(seq, idfun=None):
    # Alex Martelli ******* order preserving
    if idfun is None:
        def idfun(x): return x
    seen = {}
    result = []
    for item in seq:
        marker = idfun(item)
        if marker in seen:
            continue
        seen[marker] = 1
        result.append(item)
    return result

def f7(seq):
    # Not order preserving
    return list(set(seq))

def f8(seq):
    # Dave Kirby
    # Order preserving
    seen = set()
    return [x for x in seq if x not in seen and not seen.add(x)]

def f9(seq):
    # Not order preserving, even in Py >=3.6
    return {}.fromkeys(seq).keys()

def f10(seq, idfun=None):
    # Andrew Dalke
    # Order preserving
    return list(_f10(seq, idfun))

def _f10(seq, idfun=None):
    seen = set()
    if idfun is None:
        for x in seq:
            if x in seen:
                continue
            seen.add(x)
            yield x
    else:
        for x in seq:
            x = idfun(x)
            if x in seen:
                continue
            seen.add(x)
            yield x

def f11(seq):
    # f10 but simpler
    # Order preserving
    return list(_f10(seq))

def f12(seq):
    # Raymond Hettinger
    return list(dict.fromkeys(seq))

FUNCTION  ORDER PRESERVING  MEAN   MEDIAN
f12       yes               111.0  112.2
f8        yes               266.3  266.4
f10       yes               304.0  299.1
f11       yes               314.3  312.9
f5        yes               486.8  479.7
f5b       yes               494.7  498.0
f7        no                 95.8   95.1
f9        no                100.8  100.9
f3        no                143.7  142.2
f1        no                406.4  408.4

The fastest way to uniqify a list of hashable objects (basically immutable things) is:

list(set(seq))

And the fastest way, if the order is important, is:

list(dict.fromkeys(seq))

Now we know.

Follow @peterbe on Twitter

What is this mysterious sequence "seq" that you're testing with? Can you please show the *whole* benchmark code?

See the link in the top to that gist.github.com
I don't see why it wouldn't, and it did preserve order in some tests I just did, including your script (after fixing the "dict_keys" issue by wrapping "list(...)" around it). - In the comments under Raymond's answer he also very strongly suggested using *min*: "Use the min() rather than the average of the timings. That is a recommendation from me, from Tim Peters, and from Guido van Rossum. The fastest time represents the best an algorithm can perform when the caches are loaded and the system isn't busy with other tasks. All the timings are noisy -- the fastest time is the least noisy. It is easy to show that the fastest timings are the most reproducible and therefore the most useful when timing two different implementations." Nice analysis. f7 is now order preserving because it's backed by dictionaries, which are now going to be order preserving. Sorry, make that will be... when 3.7 comes out. :) Since Python 3.7, insertion-order dict’s are part of the language, as per the “official verdict” by BDFL:.
https://www-origin.peterbe.com/plog/fastest-way-to-uniquify-a-list-in-python-3.6
CC-MAIN-2019-51
en
refinedweb
ALM-9 Failed to Restart Containerized Application Description This alarm is reported when a pod fails to start containerized applications. Pod: In Kubernetes, pods are the smallest unit of creation, scheduling, and deployment. A pod is a group of relatively tightly coupled containers. Pods are always co-located and run in a shared application context. Containers within a pod share a namespace, IP address, port space, and volume. Attribute Parameters Impact on the System Related functions may be unavailable. System Actions The pod keeps restarting. Possible Causes The container startup command configured for the pod is incorrect. As a result, the containers cannot run properly. Procedure - Obtain the name of the pod that fails to be started. - Use a browser to log in to the FusionStage OM zone console. - Log in to ManageOne Maintenance Portal. - Login address: for accessing the homepage of ManageOne Maintenance Portal:31943, for example,. - The default username is admin, and the default password is Huawei12#$. - On the O&M Maps page, click the FusionStage link under Quick Links to go to the FusionStage OM zone console. - Choose Application Operations > Application Operations from the main menu. - In the navigation pane on the left, choose Alarm Center > Alarm List and query the alarm by setting query criteria. - Click to expand the alarm information. Record the values of name and namespace in Location Info, that is, podname and namespace. - Use PuTTY to log in to the manage_lb1_ip node. The default username is paas, and the default password is QAZ2wsx@123!. - Run the following command and enter the password of the root user to switch to the root user: su - root Default password: QAZ2wsx@123! - Run the following command to obtain the IP address of the node on which the pod runs: kubectl get pod podname -n namespace -oyaml | grep -i hostip: In the preceding command, podname is the instance name obtained in 1, and namespace is the namespace obtained in 1. 
Log in to the node using SSH. - Search for the error information based on the pod name and correct the container startup configuration. - Run the following commands to view the kubelet log: cd /var/paas/sys/log/kubernetes/ vi kubelet.log Press the / key, enter the name of the pod, and then press Enter for search. If the following content in bold is displayed, the container startup command fails to be executed: I0113 14:19:29.497459 70092 docker_manager.go:2703] checking backoff for container "container1" in pod "nginx-run-1869532261-29px2" I0113 14:19:29.497620 70092 docker_manager.go:2717] Back-off 20s restarting failed container=container1 pod=nginx-run-1869532261-29px2_testns(b01b9e9c-f829-11e7-aa58-286ed488d1d4) E0113 14:19:29.497673 70092 pod_workers.go:226] Error syncing pod nginx-run-1869532261-29px2-b01b9e9c-f829-11e7-aa58-286ed488d1d4, skipping: failed to "StartContainer" for "container1" with CrashLoopBackOff: "Back-off 20s restarting failed container=container1 pod=nginx-run-1869532261-29px2_testns(b01b9e9c-f829-11e7-aa58-286ed488d1d4) - Run the following command to query the container ID: docker ps -a - Run the following command to check the specific error information: docker logs containerID containerID is the container ID obtained in 5.b.The following information in bold indicates that the startup script does not exist: # docker logs 128acfd300c8 container_linux.go:247: starting container process caused "exec: \"bash /tmp/test.sh\": stat bash /tmp/test.sh: no such file or directory" - Correct the container startup command based on error information, delete and deploy the application again, and then check whether the alarm is cleared. In the case of the above error, you need to specify the correct directory for the startup script or put the startup script in the directory. - Contact technical support for assistance. Alarm Clearing This alarm is automatically cleared after the fault is rectified. Related Information None
https://support.huawei.com/enterprise/en/doc/EDOC1100062365/5fb51853/alm-9-failed-to-restart-containerized-application
CC-MAIN-2019-51
en
refinedweb
Contains options that specify how a control layout is stored to, and restored from a storage (a stream or string). Namespace: DevExpress.Web.ASPxPivotGrid Assembly: DevExpress.Web.ASPxPivotGrid.v19.2.dll public class PivotGridWebOptionsLayout : PivotGridOptionsLayout Public Class PivotGridWebOptionsLayout Inherits PivotGridOptionsLayout The ASPxPivotGrid.OptionsLayout property of the PivotGridWebOptionsLayout type allows you to control which of the control's options should be stored to, and restored from a storage (a stream or string). It also determines what to do with fields that exist in the layout being restored, but do not exist in the current control layout (as well as with fields that exist in the current control layout, but do not exist in the layout being restored).
https://docs.devexpress.com/AspNet/DevExpress.Web.ASPxPivotGrid.PivotGridWebOptionsLayout
CC-MAIN-2019-51
en
refinedweb
When using ESXCLI, the output is formatted using a "default" formatter based on the type of data being displayed. However, you can easily modify the output by using one of the three supported formatters: xml, csv and keyvalue or even leverage some internal ones mentioned by Steve Jin here. When working with some of the ESXCLI 'storage' namespaces, such as listing all the devices on an ESXi host, the output can be quite verbose as seen in the example below: Usually for such a command, you are interested in a couple of specific properties and I bet you are probably spend a good amount of time scrolling up and down, I know I do. One useful option that is not very well documented (will be filing a bug for this) is the --format-param options which goes in-conjunction with the csv formatter. I always forget the syntax and can never find it when I Google for it so I am documenting this for myself but I think this would also be useful for others to know about. The --format-param option allows you to specify specific property fields you care about. If we use the our ESXCLI example above, what I really care about are the following for each device: - Display Name - Is Local - Is SSD Using the following command, we can then extract only those fields we care about: If we now look at our output, we can easily see that we have 5 devices on our ESXi host and I can quickly see the Display Name of our device, whether it is a local device seen by ESXi and if it is an SSD. I find this filtering mechanism especially handy during troubleshooting or when you need to quickly identify a device for configuration. Thanks for the comment!
https://www.virtuallyghetto.com/2014/04/quick-tip-esxcli-csv-format-param-options.html
CC-MAIN-2019-51
en
refinedweb
PDF generation. Yawn. But, every enterprise application has an “export to PDF” feature. There are obstacles to overcome when generating PDFs from Azure Web Apps and Functions. The first obstacle is the sandbox Azure uses to execute code. You can read about the sandbox in the “Azure Web App sandbox” documentation. This article explicitly calls out PDF generation as a potential problem. The sandbox prevents an app from using most of the kernel’s graphics API, which many PDF generators use either directly or indirectly. The sandbox document also lists a few PDF generators known to work in the sandbox. I’m sure the list is not exhaustive, (a quick web search will also find solutions using Node), but one library listed indirectly is wkhtmltopdf (open source, LGPLv3). The wkhtmltopdf library is interesting because the library is a cross platform library. A solution built with .NET Core and wkhtmltopdf should work on Windows, Linux, or Mac. For this experiment I used the Azure Functions 2.0 runtime, which is still in beta and has a few shortcomings. However, the ability to use precompiled projects and build on .NET Core are both appealing features for v2. To work with the wkhtmltopdf library from .NET Core I used the DinkToPdf wrapper. This package hides all the P/Invoke messiness, and has friendly options to control margins, headers, page size, etc. All an app needs to do is feed a string of HTML to a Dink converter, and the converter will return a byte array of PDF bits. Here’s an HTTP triggered function that takes a URL to convert and returns the bytes as application/pdf. 
using DinkToPdf;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using System;
using System.Net.Http;
using System.Threading.Tasks;
using IPdfConverter = DinkToPdf.Contracts.IConverter;

namespace PdfConverterYawnSigh
{
    public static class HtmlToPdf
    {
        [FunctionName("HtmlToPdf")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "post")] ConvertPdfRequest request,
            TraceWriter log)
        {
            log.Info($"Converting {request.Url} to PDF");

            var html = await FetchHtml(request.Url);
            var pdfBytes = BuildPdf(html);
            var response = BuildResponse(pdfBytes);
            return response;
        }

        private static FileContentResult BuildResponse(byte[] pdfBytes)
        {
            return new FileContentResult(pdfBytes, "application/pdf");
        }

        private static byte[] BuildPdf(string html)
        {
            return pdfConverter.Convert(new HtmlToPdfDocument()
            {
                Objects =
                {
                    new ObjectSettings { HtmlContent = html }
                }
            });
        }

        private static async Task<string> FetchHtml(string url)
        {
            var response = await httpClient.GetAsync(url);
            if (!response.IsSuccessStatusCode)
            {
                throw new InvalidOperationException($"FetchHtml failed {response.StatusCode} : {response.ReasonPhrase}");
            }
            return await response.Content.ReadAsStringAsync();
        }

        static HttpClient httpClient = new HttpClient();

        static IPdfConverter pdfConverter = new SynchronizedConverter(new PdfTools());
    }
}

Notice the converter class has the name SynchronizedConverter. The word synchronized is a clue that the converter is single threaded. Although the library can buffer conversion requests until a thread is free to process those requests, it would be safer to trigger the function with a message queue to avoid losing conversion requests in case of a restart. You should also know that the function will not execute successfully in a consumption plan. You'll need to use a Basic or higher app service plan in Azure.
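The ConvertPdfRequest type the function binds to isn't shown in the listing. A minimal version is easy to infer from the request.Url usage above, though this is an assumed shape rather than the post's original definition:

```csharp
// Assumed request body: { "url": "https://example.com" }.
// Only the Url property is used by the HtmlToPdf function above.
public class ConvertPdfRequest
{
    public string Url { get; set; }
}
```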
To deploy the application you’ll need to include the wkhtmltopdf native binaries. You can build the binary you need from source, or download the binaries from various places, including the DinkToPdf repository. Function apps currently only support .NET Core on Windows in a 32-bit process, so use the 32-bit dll. I added the binary to my function app project and set the build action “Copy to Output Directory”. As we are about to see, the 32 bit address space is not a problem. To see how the function performs, I created a single instance of the lowest standard app service plan (S1 – single CPU). For requests pointing to 18KB of HTML, the function produces a PDF in under 3 seconds regularly, although 20 seconds isn’t abnormal either. Even the simplest functions on the v2 runtime have a high standard deviation for the average response time. Hopefully the base performance characteristics improve when v2 is out of beta. Using a single threaded component like wkhtmltopdf in server-side code is generally a situation to avoid. To see what happens with concurrent users I ran some load tests for 5 minutes starting with 1 user. Every 30 seconds the test added another user up to a maximum of 10 concurrent users. The function consistently works well up to 5 concurrent requests, at which point the average response time is ~30 seconds. By the time the test reaches 7 concurrent users the function would consistently generate HTTP 502 errors for a subset of requests. Here are the results from one test run. The Y axis labels are for the average response time (in seconds). Looking at metrics for the app service plan in Azure, you can see the CPU pegged at 100% for most of the test time. With no headroom left for other apps, you’d want to give this function a dedicated plan. I wouldn’t consider this solution viable for a system whose sole purpose is generating large number of PDF files all day, but for small workloads the function is workable. 
Much would depend on the amount of HTML in the conversion. In my experience the real headaches with PDFs come down to formatting. HTML to PDF conversions always look like they’ve been constructed by a drunken type-setter using a misaligned printing press, unless you control the HTML and craft the markup specifically for conversion.
https://odetocode.com/blogs/scott/archive/2018/02/14/pdf-generation-in-azure-functions-v2.aspx
CC-MAIN-2019-51
en
refinedweb
hi,

Does the Turbo C++ compiler conform with ANSI standards? Because I am using the Turbo C++ 3.0 compiler, but it doesn't support the type "bool" and namespaces.

You probably have an older version that doesn't support the new ISO standard.

-Prelude
My best code is written with the delete key.

A valid question would be, "does any compiler fully support the standard?", I've yet to find one.

Wave upon wave of demented avengers march cheerfully out of obscurity unto the dream.

Well... in fact most do. You can include parameters that will make them compile fully ANSI/ISO, like the '-ansi -pedantic' for gcc I learned on this forum some time ago. (Btw, I didn't take the '-pedantic' pun lightly.) But, unfortunately, when you do that you are in for a nasty surprise. No matter how tidy your code is, the fact is that the libs that come with those compilers aren't fully compliant. I had the standard lib issuing errors all over when I tried the above on mingw. So, instead of having a fully portable language like it could be described (and it was intended), we have yet another spaghetti one, simply because library designers aren't as "pedantic" as they should be.

Regards,
Mario Figueiredo

Using Borland C++ Builder 5
Read the Tao of Programming
This advise was brought to you by the Comitee for a Service Packless World
https://cboard.cprogramming.com/cplusplus-programming/18645-cplusplus-standards.html
CC-MAIN-2017-39
en
refinedweb
In the previous article we looked at how backpressure works with akka-streams. Akka also uses akka-streams as the base for akka-http, with which you can easily create an HTTP server and client, where each request is processed by a stream materialized using actors. The latest release of akka-http also supports websockets. In this article we'll show you how you can create a websockets server using akka-http. We'll show and explain the following subjects:

- Respond to messages using a simple flow created with the flow API.
- Respond to messages with a flow graph, created with the flow graph DSL.
- Proactively push messages to the client by introducing an additional source to the flow.
- Create a custom publisher from an Akka Actor.

This article became a bit longer than initially planned while writing it. I'll write a follow-up in a couple of weeks showing that backpressure and rate control also work with websockets, so watch for that one.

The source files for this article can be found in the following Github Gists:

So first let's look at how to set up the basic skeleton of the application.

Getting started

Let's start by looking at the dependencies we need. For all the examples we use the following simple sbt file:

name := "akka-http-websockets"

version := "1.0"

scalaVersion := "2.11.6"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-stream-experimental" % "1.0-RC2",
  "com.typesafe.akka" %% "akka-http-core-experimental" % "1.0-RC2",
  "com.typesafe.play" %% "play-json" % "2.3.4"
)

To make pattern matching on incoming websocket requests easier, we use a small custom extractor:

object WSRequest {

  def unapply(req: HttpRequest): Option[HttpRequest] = {
    if (req.header[UpgradeToWebsocket].isDefined) {
      req.header[UpgradeToWebsocket] match {
        case Some(upgrade) => Some(req)
        case None => None
      }
    } else None
  }
}

/**
 * Simple websocket server using akka-http and akka-streams.
 *
 * Note that about 600 messages get queued up in the send buffer (on mac, 146988 is the default socket buffer)
 */
object WSServer extends App {

  // required actorsystem and flow materializer
  implicit val system = ActorSystem("websockets")
  implicit val fm = ActorFlowMaterializer()

  // setup the actors for the stats
  // router: will keep a list of connected actorpublishers, to inform them about new stats.
  // vmactor: will start sending messages to the router, which will pass them on to any
  //          connected routee
  val router: ActorRef = system.actorOf(Props[RouterActor], "router")
  val vmactor: ActorRef = system.actorOf(Props(classOf[VMActor], router, 2 seconds, 20 milliseconds))

  // Bind to an HTTP port and handle incoming messages.
  // With the custom extractor we're always certain the header contains
  // the correct upgrade message.
  // We can pass in socket options to tune the buffer behavior,
  // e.g. options = List(Inet.SO.SendBufferSize(100))
  val binding = Http().bindAndHandleSync({
    case WSRequest(req@HttpRequest(GET, Uri.Path("/simple"), _, _, _)) => handleWith(req, Flows.reverseFlow)
    case WSRequest(req@HttpRequest(GET, Uri.Path("/echo"), _, _, _)) => handleWith(req, Flows.echoFlow)
    case WSRequest(req@HttpRequest(GET, Uri.Path("/graph"), _, _, _)) => handleWith(req, Flows.graphFlow)
    case WSRequest(req@HttpRequest(GET, Uri.Path("/graphWithSource"), _, _, _)) => handleWith(req, Flows.graphFlowWithExtraSource)
    case WSRequest(req@HttpRequest(GET, Uri.Path("/stats"), _, _, _)) => handleWith(req, Flows.graphFlowWithStats(router))
    case _: HttpRequest => HttpResponse(400, entity = "Invalid websocket request")
  }, interface = "localhost", port = 9001)

  // binding is a future, we assume it's ready within a second or timeout
  try {
    Await.result(binding, 1 second)
    println("Server online at")
  } catch {
    case exc: TimeoutException =>
      println("Server took too long to startup, shutting down")
      system.shutdown()
  }

  /**
   * Simple helper function, that connects a flow to a specific websocket upgrade request
   */
  def handleWith(req: HttpRequest, flow: Flow[Message, Message, Unit]) =
    req.header[UpgradeToWebsocket].get.handleMessages(flow)
}

Echo flow

Before we look at the flow, let's look a bit closer at what our handler functions require. The signature of "req.header[UpgradeToWebsocket].get.handleMessages" looks like this:

def handleMessages(handlerFlow: Flow[Message, Message, Any],
                   subprotocol: Option[String] = None)(implicit mat: FlowMaterializer): HttpResponse

As you can see, this function requires a Flow with an open input that accepts a Message and an open output that also produces a Message. Akka-streams will attach the created websocket as a Source and pass any messages sent from the client into this flow. Akka-streams will also use the same websocket as a Sink and pass the resulting messages from this flow to it. The result of this function is an HttpResponse that will be sent to the client. Now let's look at the echo flow. For this flow we defined the following case:

case WSRequest(req@HttpRequest(GET, Uri.Path("/echo"), _, _, _)) => handleWith(req, Flows.echoFlow)

def echoFlow: Flow[Message, Message, Unit] = Flow[Message]

The flow configured at this endpoint responds with the same text as was entered. We didn't really do anything with this message, so let's add some custom functionality to the flow and see what happens.

SimpleFlow

When we connect a client to "/simple", Flows.reverseFlow is used to handle incoming websocket messages:

case WSRequest(req@HttpRequest(GET, Uri.Path("/simple"), _, _, _)) => handleWith(req, Flows.reverseFlow)

def reverseFlow: Flow[Message, Message, Unit] = {
  Flow[Message].map {
    case TextMessage.Strict(txt) => TextMessage.Strict(txt.reverse)
    case _ => TextMessage.Strict("Not supported message type")
  }
}

Anything you enter here is returned to the client, but reversed. So far we've only created very simple flows, using the flow API directly.
If you've already looked a bit closer at akka-streams, you probably know that there is an alternative way of creating flows: you can also use the Graph DSL, as we'll show in the next example.

The Graph flow

With a graph flow it is very easy to create more complex message processing graphs. In the following sample we'll show you how you can use a couple of standard flow constructs to easily process and filter incoming messages. This sample will be run when we access the server on the following endpoint:

case WSRequest(req@HttpRequest(GET, Uri.Path("/graph"), _, _, _)) => handleWith(req, Flows.graphFlow)

/**
 * Flow which uses a graph to process the incoming message.
 *
 *                         compute
 * collect ~> broadcast ~> compute ~> zip ~> map
 *                         compute
 *
 * We broadcast the message to three map functions, we
 * then zip them all up, and map them to the response
 * message which we return.
 *
 * @return
 */
def graphFlow: Flow[Message, Message, Unit] = {
  Flow() { implicit b =>
    import FlowGraph.Implicits._

    val collect = b.add(Flow[Message].collect[String]({
      case TextMessage.Strict(txt) => txt
    }))

    // setup the components of the flow
    val compute1 = b.add(Flow[String].map(_ + ":1"))
    val compute2 = b.add(Flow[String].map(_ + ":2"))
    val compute3 = b.add(Flow[String].map(_ + ":3"))
    val broadcast = b.add(Broadcast[String](3))
    val zip = b.add(ZipWith[String, String, String, String]((s1, s2, s3) => s1 + s2 + s3))
    val mapToMessage = b.add(Flow[String].map[TextMessage](TextMessage.Strict))

    // now we build up the flow
               broadcast ~> compute1 ~> zip.in0
    collect ~> broadcast ~> compute2 ~> zip.in1
               broadcast ~> compute3 ~> zip.in2
                                        zip.out ~> mapToMessage

    (collect.inlet, mapToMessage.outlet)
  }
}

At this point we've only responded to messages from the client, but didn't push anything to the client from the server proactively. In the following sample, we'll introduce an additional source that can push messages to the client regardless of whether the client requested it.
Pushing messages to the client

One of the patterns that are matched uses the Flows.graphFlowWithExtraSource flow:

case WSRequest(req@HttpRequest(GET, Uri.Path("/graphWithSource"), _, _, _)) => handleWith(req, Flows.graphFlowWithExtraSource)

/**
 * When the flow is materialized we don't really just have to respond with a single
 * message. Any message that is produced from the flow gets sent to the client. This
 * means we can also attach an additional source to the flow and use that to push
 * messages to the client.
 *
 * So this flow looks like this:
 *
 *        in ~> filter ~> merge
 * newSource ~>           merge ~> map
 *
 * This flow filters out the incoming messages, and the merge will only see messages
 * from our new flow. All these messages get sent to the connected websocket.
 *
 * @return
 */
def graphFlowWithExtraSource: Flow[Message, Message, Unit] = {
  Flow() { implicit b =>
    import FlowGraph.Implicits._

    // Graph elements we'll use
    val merge = b.add(Merge[Int](2))
    val filter = b.add(Flow[Int].filter(_ => false))

    // convert to int so we can connect to merge
    val mapMsgToInt = b.add(Flow[Message].map[Int] { msg => -1 })
    val mapIntToMsg = b.add(Flow[Int].map[Message](x =>
      TextMessage.Strict(":" + randomPrintableString(200) + ":" + x.toString)))
    val log = b.add(Flow[Int].map[Int](x => { println(x); x }))

    // source we want to use to send messages to the connected websocket sink
    val rangeSource = b.add(Source(1 to 2000))

    // connect the graph
    mapMsgToInt ~> filter ~> merge // this part of the merge will never provide msgs
    rangeSource ~> log ~> merge ~> mapIntToMsg

    // expose ports
    (mapMsgToInt.inlet, mapIntToMsg.outlet)
  }
}

When we connect a client to this flow, we should see 2000 messages being pushed to the client as fast as the client can process them. Which is exactly what happens: as soon as the connection is created, 2000 messages are pushed to the client. Any messages sent from the client are ignored, as you can see in the following screenshot:

We've also added a small logging step to the flow.
This will just print out all numbers from 1 to 2000, to give us an idea how everything is running. At this point we've only used the standard components provided by akka-streams. In the next section we're going to create a custom publisher that pushes VM information, such as memory usage, to a websocket client.

Pushing messages to the client with a custom publisher

We'll need to take a couple of steps before we can get this to work correctly, and this will involve creating a couple of actors:

- We'll need an actor that forms the source of our stream. For this we'll use an actor that, together with a scheduler, sends VMStat messages at a configured interval.
- In akka-streams you can't connect a new subscriber to a running publisher. To work around this we'll have the actor from step 1 send its messages to a router. This router will then broadcast the messages further to an actor that can inject them into a flow.
- Finally we need the actor that connects the messages to the akka flow. For this we create an actor for each websocket request, which acts as a publisher and passes on messages received from the router into the flow.

Let's start with the first one.

The VMActor

The VMActor is a simple actor which, when started, sends a message every period to the provided actorRef, like this:

class VMActor(router: ActorRef, delay: FiniteDuration, interval: FiniteDuration) extends Actor {

  import scala.concurrent.ExecutionContext.Implicits.global

  context.system.scheduler.schedule(delay, interval) {
    val json = Json.obj("stats" -> getStats.map(el => el._1 -> el._2))
    router ! Json.prettyPrint(json)
  }

  override def receive: Actor.Receive = {
    case _ => // just ignore any messages
  }

  def getStats: Map[String, Long] = {
    val baseStats = Map[String, Long](
      "count.procs" -> Runtime.getRuntime.availableProcessors(),
      "count.mem.free" -> Runtime.getRuntime.freeMemory(),
      "count.mem.maxMemory" -> Runtime.getRuntime.maxMemory(),
      "count.mem.totalMemory" -> Runtime.getRuntime.totalMemory()
    )

    val roots = File.listRoots()
    val totalSpaceMap = roots.map(root => s"count.fs.total.${root.getAbsolutePath}" -> root.getTotalSpace) toMap
    val freeSpaceMap = roots.map(root => s"count.fs.free.${root.getAbsolutePath}" -> root.getFreeSpace) toMap
    val usuableSpaceMap = roots.map(root => s"count.fs.usuable.${root.getAbsolutePath}" -> root.getUsableSpace) toMap

    baseStats ++ totalSpaceMap ++ freeSpaceMap ++ usuableSpaceMap
  }
}

The router

For the router I initially wanted to use the standard BroadcastGroup router, but that router is immutable and doesn't really allow dynamically adding new routees. So for this use case we create a very simple alternative router:

val router: ActorRef = system.actorOf(Props[RouterActor], "router")
val vmactor: ActorRef = system.actorOf(Props(classOf[VMActor], router, 2 seconds, 20 milliseconds))

class RouterActor extends Actor {
  var routees = Set[Routee]()

  def receive = {
    case ar: AddRoutee => routees = routees + ar.routee
    case rr: RemoveRoutee => routees = routees - rr.routee
    case msg => routees.foreach(_.send(msg, sender))
  }
}

VMStatsPublisher

Akka-streams provides an ActorPublisher[T] trait which you must mix into your actors so that they can be used as a publisher inside a flow. Before we look at the implementation of this actor, let's first look at the flow that uses it:

/**
 * Creates a flow which uses the provided source as additional input. This complete scenario
 * works like this:
 * 1. When the actor is created it registers itself with a router.
 * 2. The VMActor sends messages at an interval to the router.
 * 3. The router next sends the message to this source which injects it into the flow.
 */
def graphFlowWithStats(router: ActorRef): Flow[Message, Message, Unit] = {
  Flow() { implicit b =>
    import FlowGraph.Implicits._

    // create an actor source
    val source = Source.actorPublisher[String](Props(classOf[VMStatsPublisher], router))

    // Graph elements we'll use
    val merge = b.add(Merge[String](2))
    val filter = b.add(Flow[String].filter(_ => false))

    // convert to string so we can connect to merge
    val mapMsgToString = b.add(Flow[Message].map[String] { msg => "" })
    val mapStringToMsg = b.add(Flow[String].map[Message](x => TextMessage.Strict(x)))

    val statsSource = b.add(source)

    // connect the graph
    mapMsgToString ~> filter ~> merge // this part of the merge will never provide msgs
    statsSource ~> merge ~> mapStringToMsg

    // expose ports
    (mapMsgToString.inlet, mapStringToMsg.outlet)
  }
}

The interesting part is the source, which is created from our custom publisher actor:

val source = Source.actorPublisher[String](Props(classOf[VMStatsPublisher], router))

/**
 * For now a very simple actor, which keeps a separate buffer
 * for each subscriber. This could be rewritten to store the
 * vmstats in an actor somewhere centrally and pull them from there.
 *
 * Based on the standard publisher example from the akka docs.
 */
class VMStatsPublisher(router: ActorRef) extends ActorPublisher[String] {

  case class QueueUpdated()

  import akka.stream.actor.ActorPublisherMessage._
  import scala.collection.mutable

  val MaxBufferSize = 50
  val queue = mutable.Queue[String]()

  var queueUpdated = false

  // on startup, register with the router so we receive the vm stats
  // (reconstructed: this part of the listing was garbled in extraction)
  override def preStart(): Unit = {
    router ! AddRoutee(ActorRefRoutee(self))
  }

  // deregister when this publisher stops
  override def postStop(): Unit = {
    router ! RemoveRoutee(ActorRefRoutee(self))
  }

  def receive = {
    // receive new stats, add them to the queue, and quickly exit.
    case stats: String =>
      // remove the oldest one from the queue and add a new one
      if (queue.size == MaxBufferSize) queue.dequeue()
      queue += stats
      if (!queueUpdated) {
        queueUpdated = true
        self ! QueueUpdated
      }

    // we receive this message if there are new items in the
    // queue. If we have a demand for messages send the requested
    // demand.
    case QueueUpdated => deliver()

    // the connected subscriber requests n messages; we don't need
    // to explicitly check the amount, we use the totalDemand property for this
    case Request(amount) => deliver()

    // subscriber stops, so we stop ourselves.
    case Cancel => context.stop(self)
  }

  /**
   * Deliver the message to the subscriber. In the case of websockets over TCP, note
   * that even if we have a slow consumer, we won't notice that immediately. First the
   * buffers will fill up before we get feedback.
   * (body reconstructed: drain the queue as far as the subscriber's demand allows)
   */
  def deliver(): Unit = {
    if (queueUpdated) queueUpdated = false
    while (totalDemand > 0 && queue.nonEmpty) {
      onNext(queue.dequeue())
    }
  }
}

Cool, right! There are a couple of other topics to explore regarding websockets and akka-streams, most importantly backpressure. I'll create a separate article on that one in the next couple of weeks to show that slow websocket clients trigger backpressure with akka-streams.
https://dzone.com/articles/create-reactive-websocket
CC-MAIN-2017-39
en
refinedweb
shots

pull down the entire Internet into a single animated gif.

description

by leveraging waybackpack — a python program that pulls down the entire Wayback Machine archive for a given URL — shots goes one step further, by grabbing screenshots out of each of the archived pages, filtering out visually similar pages and blank pages, and ultimately creating a filmstrip of the website over time, as well as an animated gif that shows how the website evolved over time.

sample

install

pip install waybackpack
npm i shots -S

usage

import shots from 'shots';

shots({ dest: 'resources/shots', site: 'amazon.com' });

the shots function will return a Promise that'll resolve once an animated gif of the site's history, along with a side-by-side static filmstrip image, are generated in resources/shots/output as 1024x768.gif and 1024x768.png respectively. you can specify different options.

api

the shots api is exported as a single shots(options) function that returns a Promise. its options are outlined below.

options

there are several options, described next.

options.dest

directory used to store all wayback machine archive pages, their screenshots, the diffs between those screenshots, and your glorious output gifs. defaults to a temporary directory. note that you'll get that path back from the shots promise, e.g:

shots().then(dest => {
  // ...
})

options.concurrency

concurrency level used throughout the lib. determines how many screenshots are being taken at any given time, or how many diffs are being computed, etc.

options.pageres

options merged with defaults shown below and passed to pageres. only 9999x9999-formatted sizes are supported (e.g: don't use 'iphone 5s').

{
  "crop": true,
  "scale": 1,
  "sizes": ["1024x768"]
}

options.sites

a site (or any url, really) that you want to work with. can also be an array of sites.

options.site

alias for options.sites.

options.tolerance

number between 0 and 100, where 100 means every screenshot will be considered different, whereas 0 means every screenshot will be considered the same. only "duplicate" screenshots (within the tolerated range) will be used when building the gif and filmstrip image.

steps

note that shots has a long runtime, due to the nature of the task it performs. be prepared to wait a few minutes until the gif is finally written to disk. the following steps happen in series. the tasks in each step are executed concurrently where possible.

- runs waybackpack for every provided options.site, starting at the last timestamp that can be found in the ${dest}/pages directory to save time
- takes screenshots of every archive page, except for pages we have existing screenshots for at ${dest}/screenshots
- computes difference between every screenshot and the previous ones
- screenshots considered to be the same according to tolerance are discarded
- screenshots considered to be noise (e.g: failed page loads) are discarded
- creates the filmstrip
- creates the gif

debugging and logging

if you want to print debugging statements, shots uses debug, so you can do DEBUG=shots node app and you'll see tons of debug information pop into your screen.

license
http://www.shellsec.com/news/16623.html
CC-MAIN-2017-39
en
refinedweb
django-helpscout 0.5.0

Help Scout integration for Django.

Introduction

If you are using Help Scout to handle support tickets from your users for your Django web application, you can use Help Scout's custom app feature to provide information on the user, such as the following:

This project provides a Django app which allows you to integrate a Custom App into your Django web application and easily customize the output HTML.

Installation

You can install this library via pip:

pip install django-helpscout

Once installed, add django_helpscout to your INSTALLED_APPS:

INSTALLED_APPS = (
    ...,
    'django_helpscout',
    ...,
)

Getting Started

A Django view is provided to make it easy for you to get started. First, add the view to your urls.py:

from django_helpscout.views import helpscout_user

urlpatterns = patterns('',
    # Your URL definitions
    url(r'^helpscout-user/$', helpscout_user),
)

Once done, deploy your web application to production, point your Help Scout custom app URL to the URL you have configured above, and you should see a simple HTML output on Help Scout with the user's username and date joined.

Customizing the HTML Output

You will most likely want to customize the HTML output to add in additional information related to the user. This library provides an easy way for you to override the templates that are used. In your templates folder, create the following structure:

templates/
|- django_helpscout
   |- 404.html
   |- helpscout.html

Details on the two templates:

- 404.html: Used when a user with the given email address is not found
- helpscout.html: Used when a user is found

By adding your own templates and effectively overriding the library's built-in templates, you can customize the output to suit your needs.

Further Customizations

You might want to use select_related to prefetch related models for a particular user, or you might have complicated processing involved when loading a user.
A helper decorator is available if you wish to use your own views. The decorator helps you deal with verifying Help Scout's signature when a request is made from their side. You can use the decorator in the following manner:

from django_helpscout.helpers import helpscout_request

# your view
@helpscout_request
def load_user_for_helpscout(request):
    ...

History

0.5.0 (2014-08-06)

- PyPI release.

0.0.1 (2014-08-01)

- Initial release on GitHub.

- Author: Victor Neo
- Keywords: Django-Helpscout, Django, Help Scout
- License: Apache License V2
- Categories
- Package Index Owner: victorneo
- DOAP record: django-helpscout-0.5.0.xml
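Going back to the signature verification that a decorator like helpscout_request performs: the general pattern can be sketched in plain Python. Everything below is illustrative only; the exact header name, hash algorithm, and encoding used by Help Scout and django-helpscout are assumptions here, not the library's actual internals.

```python
import base64
import hashlib
import hmac

def signature_is_valid(secret: bytes, body: bytes, signature: str) -> bool:
    """Return True when `signature` matches an HMAC-SHA1 digest of `body`.

    Sketch of the general webhook-signature pattern; the real scheme may
    differ in algorithm and encoding, so check Help Scout's documentation.
    """
    digest = hmac.new(secret, body, hashlib.sha1).digest()
    expected = base64.b64encode(digest).decode("ascii")
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature)

# hypothetical request body and key, for illustration
body = b'{"customer-email": "a@b.com"}'
good = base64.b64encode(hmac.new(b"secret-key", body, hashlib.sha1).digest()).decode("ascii")
print(signature_is_valid(b"secret-key", body, good))  # True
print(signature_is_valid(b"wrong-key", body, good))   # False
```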
https://pypi.python.org/pypi/django-helpscout/0.5.0
CC-MAIN-2017-39
en
refinedweb
This is the mail archive of the gcc-help@gcc.gnu.org mailing list for the GCC project.

Hi all,

I'm working on cross compilers for AROS, a free operating system. We have patches for gcc and g++/libstdc++ for 3.4, and just for gcc (gcc-core specifically) for 4.0.0. I've been trying to put together an updated patch including g++/libstdc++ for 4.2 and beyond, using the previous work as a guide. While I still have a way to go, I did get a basic setup working using 4.0.3. My patches applied more-or-less as-is to 4.1.2, and to 4.2.1. I'm now seeing issues with 4.2.1 where it can't seem to find the C++ headers, but it works fine with 4.0 and 4.1.

configure is run like so:

----------
$ PATH=$PATH:$(pwd)/../usr40/bin ../gcc-4.0.3/configure --prefix=$(pwd)/../usr40 --target=i386-pc-aros --enable-sjlj-exceptions --enable-long-long --with-headers=/home/rob/code/aros/git/src/bin/linux-i386/AROS/Development/include --with-libs=/home/rob/code/aros/git/src/bin/linux-i386/AROS/Development/lib
----------

I'm compiling a simple "hello world" program:

----------
#include <iostream>

using namespace std;

int main()
{
    cout << "hello" << endl;
    return 0;
}
----------

----------
$ PATH=$PATH:$(pwd)/usr40/bin i386-pc-aros-g++ -o hello hello.cpp -L $(pwd)/usr40/lib
$ PATH=$PATH:$(pwd)/usr41/bin i386-pc-aros-g++ -o hello hello.cpp -L $(pwd)/usr41/lib
$ PATH=$PATH:$(pwd)/usr42/bin i386-pc-aros-g++ -o hello hello.cpp -L $(pwd)/usr42/lib
hello.cpp:1:20: error: iostream: No such file or directory
hello.cpp: In function 'int main()':
hello.cpp:6: error: 'cout' was not declared in this scope
hello.cpp:6: error: 'endl' was not declared in this scope
----------

Some strace output:

----------
$ PATH=$PATH:$(pwd)/usr40/bin strace -f i386-pc-aros-g++ -o hello hello.cpp -L $(pwd)/usr40/lib 2>&1 | grep iostream
[pid 12681] read(3, "#include <iostream>\n\nusing names"..., 101) = 101
[pid 12681] stat64("/home/rob/code/aros/gcc/usr40/bin/../lib/gcc/i386-pc-aros/4.0.3/../../../../include/c++/4.0.3/iostream.gch", 0xbf9cab94) = -1 ENOENT (No such file or directory)
[pid 12681] open("/home/rob/code/aros/gcc/usr40/bin/../lib/gcc/i386-pc-aros/4.0.3/../../../../include/c++/4.0.3/iostream", O_RDONLY|O_NOCTTY) = 4
[pid 12681] read(4, "// Standard iostream objects -*-"..., 2958) = 2958
----------

----------
$ PATH=$PATH:$(pwd)/usr41/bin strace -f i386-pc-aros-g++ -o hello hello.cpp -L $(pwd)/usr41/lib 2>&1 | grep iostream
[pid 12696] read(3, "#include <iostream>\n\nusing names"..., 101) = 101
[pid 12696] stat64("/home/rob/code/aros/gcc/usr41/bin/../lib/gcc/i386-pc-aros/4.1.2/../../../../include/c++/4.1.2/iostream.gch", 0xbf85c214) = -1 ENOENT (No such file or directory)
[pid 12696] open("/home/rob/code/aros/gcc/usr41/bin/../lib/gcc/i386-pc-aros/4.1.2/../../../../include/c++/4.1.2/iostream", O_RDONLY|O_NOCTTY) = 4
[pid 12696] read(4, "// Standard iostream objects -*-"..., 2962) = 2962
----------

----------
$ PATH=$PATH:$(pwd)/usr42/bin strace -f i386-pc-aros-g++ -o hello hello.cpp -L $(pwd)/usr42/lib 2>&1 | grep iostream
[pid 12711] read(3, "#include <iostream>\n\nusing names"..., 101) = 101
[pid 12711] stat64("/home/rob/code/aros/gcc/usr42/bin/../lib/gcc/i386-pc-aros/4.2.1/include/iostream.gch", 0xbf989b24) = -1 ENOENT (No such file or directory)
[pid 12711] open("/home/rob/code/aros/gcc/usr42/bin/../lib/gcc/i386-pc-aros/4.2.1/include/iostream", O_RDONLY|O_NOCTTY) = -1 ENOENT (No such file or directory)
[pid 12711] stat64("/home/rob/code/aros/gcc/usr42/bin/../lib/gcc/i386-pc-aros/4.2.1/../../../../i386-pc-aros/sys-include/iostream.gch", 0xbf989b24) = -1 ENOENT (No such file or directory)
[pid 12711] open("/home/rob/code/aros/gcc/usr42/bin/../lib/gcc/i386-pc-aros/4.2.1/../../../../i386-pc-aros/sys-include/iostream", O_RDONLY|O_NOCTTY) = -1 ENOENT (No such file or directory)
[pid 12711] write(2, "iostream: No such file or direct"..., 35iostream: No such file or directory) = 35
----------

As you can see, 4.2.1 doesn't seem to be looking in usr42/include/c++/4.2.1 for iostream and other headers. I don't think this has anything to do with my changes, but I don't know enough about GCC to be sure. I've really just been trying to build it and fix errors as they pop up.

FYI, my patches (extremely rough) are here:

Can anyone offer some pointers on where to start looking?

Cheers,
Rob.
http://gcc.gnu.org/ml/gcc-help/2007-10/msg00104.html
CC-MAIN-2017-39
en
refinedweb
In programming, a function is a piece of code that has a name, and we can execute it using that name. In C, functions can accept arguments and return a value. As usual, let's start by giving an example. You have already created a function - the main(). Remember? It was a special case and it is always the start of any program.

int main(void) {
    ...
    return 0;
}

In this example, int is the return type, main is the name of the function, and the body is the block between the braces.

Functions in C consist of two parts:
- a prototype
- a body

Also, in C functions need to be:
- declared
- defined

Let's take a look at each of these concepts!

The prototype of a function in C consists of its return type, its name, and its list of parameters:

<return type> <function name>([list of parameters])

The prototype identifies a function. As you will see later in the section "overloading", we can create several functions with the same name if they accept different arguments. So, to uniquely identify a function we need its entire prototype. We need to write the prototype twice - when we declare the function and when we define it.

The return type could be any valid data type, including void. When you declare that a function returns a value, you must return a value of that type when the function finishes. If you don't want to return a value, declare the function to be void. To return a value, use the return keyword inside the body.

We use the name to execute the body of the function. When we do that, we say that we are calling that function.

The body is just like any other block of code in C. You can perform the actions that you need and use the return statement to end the execution of the function and return a value. Note that if you declare the function to return a value, all paths in the body must return a value. Take a look at the next example. It seems that this innocent little function is OK, but it is not. Can you find the problem?

int returnMax(int num1, int num2) {
    if(num1 > num2)
        return num1;
    if(num2 > num1)
        return num2;
}

What happens if the two numbers are equal? Neither the first, nor the second if will be true, so our function will not return anything.
To correct this we need to handle all possible cases. We can do that in several ways; I chose to use an else statement to return num2 if the first condition is false. After all, if num1 is not greater we can safely return num2, because either it is bigger and we will return the correct result, or the numbers are equal and it doesn't matter which one we return.

int returnMax(int num1, int num2) {
    if(num1 > num2)
        return num1;
    else
        return num2;
}

You can also end the execution of a void function by calling return, followed by a semicolon, without specifying a return value:

void someFunction() {
    ...some code
    if(some condition)
        return;
    ...some more code
}

In this case, when the condition of the if statement is met, the return statement will end the execution of this function and the "...some more code" part will not be executed.

In most programming languages there is a difference between declaring a function and defining it. Note that in C we need to both declare and define a function before we can use it. This is an important note if you come from a language like Java or C#, where you don't need to do the declaration.

Declaration means that we declare that this function exists. In C, we declare a function by writing down its prototype. Usually we put all function declarations in one place at the top of the current file, just below the preprocessor directives like #include. Usually we keep all declarations (functions and variables) in the header files, and the source file includes the headers that it needs.

void printHello();

Defining a function means giving the function a body. We do that in the source file.

void printMessage() {
    printf("Farewell, you traveller through the land of functions!\n");
    printf("May the forces of evil get confused on the way to your house!");
}

A function can be called only from within another function. To do that, write down the name of the function that you want to call, followed by parentheses.
Here is an example where we call "printMessage()":

int main(void) {
    printMessage();
    return 0;
}

We have already called a function that accepts arguments - printf(), so this is nothing new for you. As an argument we can use any expression that returns a value:

printf("This is a string literal.");

float price = basePrice + basePrice * VAT;
updatePrice(price);

goDistance(v0 * t0 + (a * t * t) / 2);

What will happen here is that first the expression is calculated and the final result is passed as an argument to the function.

Note: When you call a function, you must supply the correct arguments. If the function is declared to accept one int and one float argument, then you need to provide the same number of arguments, of the same types, in the same order. We use a comma to separate the arguments. Here is one example:

#include <stdio.h>

int findMax(int num1, int num2);

int main(void) {
    int num1, num2, max;
    scanf("%d%d", &num1, &num2);
    max = findMax(num1, num2);
    printf("The bigger number is %d.", max);
    return 0;
}

int findMax(int num1, int num2) {
    int result = num1;
    if(num2 > num1)
        result = num2;
    return result;
}

In the example above we call findMax(), which finds and returns the bigger of two numbers. As you see, we can catch and use the returned value. In this case we assign it to a new variable - max.

When you call a function in C with arguments, two different things may happen: the argument can be passed by value (the function receives a copy of the data), or you can pass a pointer to the data, which allows the function to modify the original variable.

What actually happens when we call a function? Let's explain this, using the example with findMax() above. As you know, the program starts from main() and it starts to execute line by line. We say that the program control is in main(). The execution reaches a function call. Next, the program will pause the execution of the current function and start to execute the called function line by line. So in our example, main() will be paused and the control will be transferred to scanf(). It will read the numbers that we input in the console and then return control to main().
It will resume its execution and continue with the next line - the call to the findMax() function. At this moment the control is transferred to findMax(); it will execute and then return the control back to main(). When the main function returns, our program finishes and control is returned to whoever started the program. Since this is a console application, the control will be returned to the console and the user can start another application.

The idea behind functions is to make the code more simple and reusable. For this reason, a function should be focused on doing one thing. Avoid big, complex functions that do several tasks. It is better to separate that into several functions. This has huge advantages: a small function is easier to read, easier to test, and easier to reuse.

Naming should follow the variable naming practices that we discussed before.
http://www.c-programming-simple-steps.com/functions-in-c.html
CC-MAIN-2017-39
en
refinedweb
Hi Eric,

On Fri, Sep 1, 2017 at 11:29 PM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
> Nice work staying on top of this!  Keeping up with discussion is
> arguably much harder than actually writing the PEP. :)  I have some
> comments in-line below.

Thanks!

> -eric
>
> On Fri, Sep 1, 2017 at 5:02 PM, Yury Selivanov <yselivanov.ml at gmail.com> wrote:
>> [snip]
>>
>> Abstract
>> ========
>>
>> [snip]
>>
>> Rationale
>> =========
>>
>> [snip]
>>
>> Goals
>> =====
>
> I still think that the Abstract, Rationale, and Goals sections should
> be clear that a major component of this proposal is lookup via chained
> contexts.  Without such clarity it may not be apparent that chained
> lookup is not strictly necessary to achieve the stated goals (i.e. an
> async-compatible TLS replacement).  This matters because the chaining
> introduces extra non-trivial complexity.

Let me walk you through the history of PEP 550 to explain how we arrived at having "chained lookups".

If we didn't have generators in Python, the first version of PEP 550 ([1]) would have been a perfect solution.

First let's recap how all versions of PEP 550 reason about generators:

* generators can be paused/resumed at any time by code they cannot control;

* thus any context changes in a generator should be isolated, and visible only to the generator and code it runs.

PEP 550 v1 had excellent performance characteristics, a specification that was easy to fully understand, and a relatively straightforward implementation. It also had *no* chained lookups, as there was only one thread-specific collection of context values.

PEP 550 states that v1 was rejected because of the following reason: "The fundamental limitation that caused a complete redesign of the first version was that it was not possible to implement an iterator that would interact with the EC in the same way as generators".
I believe it's no longer true: I think that we could use the same trick [2] we use in the current version of PEP 550. But that doesn't really matter. The fundamental problem of PEP 550 v1 was that generators could not see EC updates *while being iterated*. Let's look at the following:

    def gen():
        yield some_decimal_calculation()
        yield some_decimal_calculation()
        yield some_decimal_calculation()

    with decimal_context_0:
        g = gen()

    with decimal_context_1:
        next(g)

    with decimal_context_2:
        next(g)

In the above example, with PEP 550 v1, generator 'g' would snapshot the execution context at the point where it was created. So all calls to "some_decimal_calculation()" would be made with "decimal_context_0".

We could easily change the semantics to snapshot the EC at the point of the first iteration, but that would only make "some_decimal_calculation()" always be called with "decimal_context_1".

In this email [3], Nathaniel explained that there are use cases where seeing updates in the surrounding Execution Context matters in some situations. One case is his idea to implement special timeouts handling in his async framework Trio. Another one is backwards compatibility: currently, if you use a "threading.local()", you should be able to see updates in it while the generator is being iterated. With PEP 550 v1, you couldn't.

To fix this problem, in later versions of PEP 550, we transformed the EC into a stack of Logical Contexts. Every generator has its own LC, which contains changes to the EC local to that generator. Having a stack naturally made it a requirement to have "chained" lookups. That's how we fixed the above example!

Therefore, while the current version of PEP 550 is more complex than v1, it has *proper* support of generators. It provides strong guarantees w.r.t. context isolation, and at the same time maintains their ability to see the full outer context while being iterated.

[..]
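The backwards-compatibility behavior described above can be reproduced today with the decimal module, whose active context lives in thread-local state: each next() call sees whatever context is active at that moment, not the one active at creation time. A small sketch (this is current CPython behavior, not PEP 550 itself):

```python
import decimal

def gen():
    # Each division uses whatever decimal context is active
    # at the moment next() is called - not at creation time.
    yield decimal.Decimal(1) / decimal.Decimal(7)
    yield decimal.Decimal(1) / decimal.Decimal(7)

g = gen()

with decimal.localcontext() as ctx:
    ctx.prec = 3
    first = next(g)       # computed with precision 3

with decimal.localcontext() as ctx:
    ctx.prec = 6
    second = next(g)      # computed with precision 6

print(first, second)      # 0.143 0.142857
```

This is exactly the "see updates while being iterated" property that PEP 550 v1 lost and that the stack of Logical Contexts restores.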
> Speaking of which, I have plans for the near-to-middle future that
> involve making use of the PEP 550 functionality in a way that is quite
> similar to decimal.  However, it sounds like the implementation of
> such (namespace) contexts under PEP 550 is much more complex than it
> is with threading.local (where subclassing made it easy).  It would be
> helpful to have some direction in the PEP on how to port to PEP 550
> from threading.local.  It would be even better if the PEP included the
> addition of a contextlib.Context or contextvars.Context class (or
> NamespaceContext or ContextNamespace or ...). :)  However, I recognize
> that may be out of scope for this PEP.

Currently we are focused on refining the fundamental low-level APIs and data structures, and the scope we cover in the PEP is already huge. Adding high-level APIs is much easier, so yeah, I'd discuss them if/after the PEP is accepted.

Thanks!
Yury

[1]
[2]
[3]
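The "stack of Logical Contexts" with chained lookups that the thread describes can be sketched with collections.ChainMap — an illustration only, not the actual PEP 550 data structure:

```python
from collections import ChainMap

outer_lc = {"decimal_ctx": "context_1"}   # the surrounding Logical Context
gen_lc = {}                               # the generator's own LC, empty at first
ec = ChainMap(gen_lc, outer_lc)           # lookups try the generator's LC first

assert ec["decimal_ctx"] == "context_1"   # an empty LC is transparent

outer_lc["decimal_ctx"] = "context_2"     # an update in the outer context...
assert ec["decimal_ctx"] == "context_2"   # ...is visible while iterating

gen_lc["decimal_ctx"] = "local"           # a change inside the generator...
assert ec["decimal_ctx"] == "local"       # ...shadows the outer value
assert outer_lc["decimal_ctx"] == "context_2"  # ...but stays isolated
```

This captures both guarantees at once: the generator's writes are isolated to its own mapping, while its reads still chain out to see updates in the surrounding context.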
https://mail.python.org/pipermail/python-dev/2017-September/149139.html
CC-MAIN-2017-39
en
refinedweb
Hi All,

I am trying to generate client classes (Stub and Callback) to access a remote WS using Axis2 inside Eclipse 3 (Helios), with a WSDL supplied by a partner as the source. The problem is that for some objects the generated code does not use the right converters, as in this sample:

//...
while (!reader.isStartElement() && !reader.isEndElement())
    reader.next();

if (reader.isStartElement()
        && new javax.xml.namespace.QName(" /xml/service", "Header").equals(reader.getName())) {
    java.lang.String content = reader.getElementText();
    // error: the next line is not properly generated (undefined method)
    object.setHeader(org.apache.axis2.databinding.utils.ConverterUtil.convertToHeader_type0(content));
    reader.next();
} // End of if for expected property start element
else {
    // A start element we are not expecting indicates an invalid parameter was passed
    throw new org.apache.axis2.databinding.ADBException("Unexpected subelement " + reader.getLocalName());
}
//...

I made one attempt, replacing ConverterUtil with Header.Factory.parse(reader), to make the source compilable, but I guess this is not the correct thing to do. Another weird thing is that when using Axis 1, the classes are generated correctly.

How can I fix that? Could the third-party WSDL be wrong?

Thanks in advance,
José Renato
http://mail-archives.apache.org/mod_mbox/axis-java-user/201101.mbox/%3CAANLkTi=phVeSkWFupn-RrBar93rZeBCRmZvoGyri7key@mail.gmail.com%3E
CC-MAIN-2017-39
en
refinedweb
Remove Delivery and Mode

As submitted, the WS-Eventing specification defines the wse:Subscribe request as follows:

<s:Envelope ...>
  <s:Header ...>
    <wsa:Action>
    </wsa:Action>
    ...
  </s:Header>
  <s:Body ...>
    <wse:Subscribe ...>
      <wse:EndTo> endpoint-reference </wse:EndTo> ?
      <wse:Delivery Mode="xs:anyURI"? ...> xs:any * </wse:Delivery>
      <wse:Expires>[xs:dateTime | xs:duration]</wse:Expires> ?
      <wse:Filter Dialect="xs:anyURI"? ...> xs:any * </wse:Filter> ?
      xs:any *
    </wse:Subscribe>
  </s:Body>
</s:Envelope>

Note that the ellipses (i.e. "…") indicate points of extensibility that allow additional attribute content.

The /wse:Subscribe/wse:Delivery/@Mode attribute is described as follows:

  The delivery mode to be used for notification messages sent in relation to this subscription. Implied value is "", which indicates that Push Mode delivery should be used. See Section 1.2 Delivery Modes for details. If the event source does not support the requested delivery mode, the request MUST fail, and the event source MAY generate a wse:DeliveryModeRequestedUnavailable fault indicating that the requested delivery mode is not supported.

Section 1.2 Delivery Modes describes the concept of modes:

  In order to support this broad variety of event delivery requirements, this specification introduces an abstraction called a Delivery Mode. This concept is used as an extension point, so that event sources and event consumers may freely create new delivery mechanisms that are tailored to their specific requirements.

It is clear, then, that the Mode attribute is a named extension point that is intended to indicate the use of Notification delivery mechanisms outside those defined by WS-Eventing (i.e. Push). The following sections provide various arguments against the use of the wse:Mode attribute and constitute the case for its removal and the removal of the wse:Delivery wrapper from the WS-Eventing specification.
The arguments within this page relate to issues 6432 and 6692 in the W3C WS-RA Working Group.

1. Mode Violates WS-* Design Principles

One of the core architectural principles of WS-* is the idea of "decentralized composition". WS-* technologies and specifications can be developed independently to solve specific problems, then combined as needed to provide solutions with synergistic properties. For example, one could combine WS-AtomicTransaction and WS-ReliableMessaging to create a solution that leveraged the automated transmission retry facilities of WS-RM to minimize the number of transactions aborted due to network errors. The Mode attribute is an anti-pattern for this form of composition because every combination of technologies that it can support must be described a priori and assigned a unique URI.

2. Mode is Not Scalable

The value of the Mode attribute is a single URI. The use of any single value to express all the combinations of a number of different options is inherently non-scalable. For example, suppose there is an extension that defines a "Pause" operation that will temporarily halt the transmission of Notifications for a Subscription. Further suppose that there are two additional options to the basic Pause function; an auto-pause feature that pauses the Subscription after every N Notifications and a feature that requests that the Event Source persist any Notifications that were produced while the Subscription was paused. To cover all the combinations of this feature you would need 4 separate Mode URIs (pause, pause-with-auto-pause, pause-with-persist, and pause-with-persist-and-auto-pause). If you added another sub-feature or option you would need 8 Mode URIs.

Contrast this with the standard WS-* pattern of indicating extensions via discrete elements or attributes. The request to enable the "Pause" extension could be indicated via the presence of the "foo:Pause" element in the wse:Subscribe request. The "foo:Pause" element could be defined to include two optional boolean attributes, "autoPause" and "persist". This "foo:Pause" extension element could occur alongside other extension elements without the need to define unique URIs that describe such combinations.

3. Mode is Redundant

The use of a named element or attribute as an extension point does not impart any qualities beyond those that apply to "anonymous" extension points (i.e. those that are defined by the use of unnamed xs:any and xs:anyAttribute within XML Schema). For example, the following element:

<ns:Foo>
  <ns:SomeStuff> some stuff </ns:SomeStuff>
  <ns:Extension> xs:any * </ns:Extension> ?
</ns:Foo>

is no more extensible than:

<ns:Foo>
  <ns:SomeStuff> some stuff </ns:SomeStuff>
  xs:any *
</ns:Foo>

The latter is, in fact, more efficient because extensions can simply be added to the ns:Foo element without the need for the surrounding ns:Extension element. The ns:Extension element doesn't provide any additional semantics, since any element that is not defined in the schema for ns:Foo is, by definition, an extension. Either of the above examples is preferable to:

<ns:Foo>
  <ns:SomeStuff> some stuff </ns:SomeStuff>
  <ns:Extension> xs:any * </ns:Extension> ?
  xs:any *
</ns:Foo>

for the following reasons:

- Implementations that consume the ns:Foo element must check two locations for the presence of extensions; both inside the optional ns:Extension element and at the end of the ns:Foo element (after the ns:SomeStuff element in cases where the ns:Extension is not present, or after the ns:Extension element in cases where it is). Checking two locations is less efficient and more error prone than checking a single location.

- Developers that wish to extend the ns:Foo element must decide whether to add the extension as a child element of ns:Extension or of ns:Foo. If the specification for ns:Foo defines differences in the processing of extensions between these two points (e.g. differences in the behavior for non-understood extensions), such differences must be clearly described or it is likely that developers will pick the wrong point for their extension.

A single extension point is more efficient in terms of both the developers' time and the actual runtime costs.

It should be noted that the wse:Subscribe message is analogous to our final example above, except that wse:Subscribe has several more layers of extensibility including SOAP Headers, the wse:Subscribe element, the wse:Delivery element, the extensibility of the various wsa:EndpointReferences, and the Mode attribute. The following sections show how these multiple layers of extensibility could be removed in a manner that still allows the wse:Subscribe message to support the use cases that are supported by the Member Submission. We will use the following definition of wse:Subscribe for these examples:

<wse:Subscribe ...>
  <wse:NotifyTo> endpoint-reference </wse:NotifyTo>
  <wse:EndTo> endpoint-reference </wse:EndTo> ?
  <wse:Expires>[xs:dateTime | xs:duration]</wse:Expires> ?
  <wse:Filter Dialect="xs:anyURI"? ...> xs:any * </wse:Filter> ?
  ...
</wse:Subscribe>

3.1 WSMAN PushWithAck

The WS-Management v1.0.0.a specification defines a WS-Eventing delivery mode extension called "PushWithAck". This extension is described as follows:

  With this mode, each SOAP message has only one event, but each event is acknowledged before another may be sent. The service MUST queue all undelivered events for the subscription and only deliver each new event after the previous one has been acknowledged.

A request for a subscription with "PushWithAck" mode uses the wse:Delivery element with the corresponding Mode URI. Using the new definition of our wse:Subscribe element, this request might look like this:

<s:Envelope ...>
  <s:Header>
    ...
    <wsman:UseNotifyAcks ... />
  </s:Header>
  <s:Body>
    <wse:Subscribe wsman:...>
      <wse:NotifyTo>
        <wsa:Address></wsa:Address>
      </wse:NotifyTo>
    </wse:Subscribe>
  </s:Body>
</s:Envelope>

In this example we chose to extend the wse:Subscribe request with an attribute of type xs:boolean.
This is because there are no parameters or other modifiers for this extension; it is either on or off. If there were parameters, such as the expected acknowledgment interval, etc., this extension would have taken the form of an added child element within wse:Subscribe. Note the use of the wsman:UseNotifyAcks SOAP Header. This header is used to conform to the extension semantics of the Mode attribute, wherein extensions that are not understood must generate a fault and fail to create a subscription. There are cases in which the Subscriber may want to omit this header. We will cover such cases in a later section.

An even more literal translation of this extension might look like the following:

<s:Envelope ...>
  <s:Header>
    ...
  </s:Header>
  <s:Body>
    <wse:Subscribe>
      <wse:NotifyTo>
        <wsa:Address></wsa:Address>
      </wse:NotifyTo>
      <wse06:Delivery Mode="..." />
    </wse:Subscribe>
  </s:Body>
</s:Envelope>

The above example actually preserves the use of the wse:Delivery element and its @Mode attribute in their original, WS-Eventing 200603 namespace, but treats them as extensions in our proposed uniform extension model. The purpose of this is to allow existing implementations to support both WS-Eventing 1.0 and 200603 using a common codebase.

The above examples illustrate a literal translation of the semantics of the existing WS-Management 1.0 "PushWithAck" extension using the proposed uniform extension model. In general, the DMTF and other organizations should be encouraged to re-use existing web service standards where appropriate rather than inventing idiosyncratic alternatives for the same functionality. In this case this would entail the use of WS-ReliableMessaging to provide the acknowledgment framework and retry semantics.
A Subscribe request that includes the indication that Notifications should be sent using WS-RM might look like the following:

<s:Envelope ...>
  <s:Header>
    ...
    <wsman:UseEPRPolicy ... />
  </s:Header>
  <s:Body>
    <wse:Subscribe>
      <wse:NotifyTo>
        <wsa:Address></wsa:Address>
        <wsa:Metadata>
          <wsp:Policy>
            <wsrmp:RMAssertion>
              <wsp:Policy>
                <wsrmp:DeliveryAssurance>
                  <wsp:Policy>
                    <wsrmp:ExactlyOnce/>
                    <wsrmp:InOrder/>
                  </wsp:Policy>
                </wsrmp:DeliveryAssurance>
              </wsp:Policy>
            </wsrmp:RMAssertion>
          </wsp:Policy>
        </wsa:Metadata>
      </wse:NotifyTo>
    </wse:Subscribe>
  </s:Body>
</s:Envelope>

Although this may look a bit intimidating, note that the majority of the extension material lies within the NotifyTo EPR. There is an implicit architecture underlying this in which there are separate WS-RM components that can understand and parse this policy. The wsman:UseEPRPolicy header serves as an indication to the Event Source that this Subscribe request contains policies within one or more of its EPRs. The semantics may be defined such that the Event Source must understand and be capable of complying with the contained policies or it must reject the Subscribe request (again, we're not trying to present a full-fledged design on the composition of WS-RM and WS-Eventing, we're just trying to show the kinds of extensions that are possible within our uniform model).

3.2 WSMAN Pull

The WS-Management v1.0.0.a specification defines a WS-Eventing delivery mode extension called "Pull". This extension is described as follows:

  WS-Management defines a custom event delivery mode, "pull mode", which allows an event source to maintain a logical queue of event messages received by enumeration.
A request for a subscription with "Pull" mode uses the wse:Delivery element with the corresponding Mode URI. As defined by WS-Management, a successful wse:SubscribeResponse for such a request must be extended with a WS-Enumeration wsen:EnumerationContext, as shown in the following example:

<s:Envelope ...>
  <s:Header>
    ...
  </s:Header>
  <s:Body>
    <wse:SubscribeResponse>
      <wse:SubscriptionManager>
        <wsa:Address></wsa:Address>
      </wse:SubscriptionManager>
      <wse:Expires>P0Y0M0DT30H0M0S</wse:Expires>
      <wsen:EnumerationContext>uuid:c8e50110-2a13-11de-8c30-0800200c9a66</wsen:EnumerationContext>
    </wse:SubscribeResponse>
  </s:Body>
</s:Envelope>

3.3 WSMAN Batched

The WS-Management v1.0.0.a specification also defines a "Batched" delivery mode in which multiple events may be delivered in a single SOAP message. A Subscribe request using this mode can bound the batches with wsman:MaxElements and wsman:MaxTime:

      ...
      <wsman:MaxElements>6</wsman:MaxElements>
      <wsman:MaxTime>PT10M</wsman:MaxTime>
    </wse:Delivery>
  </wse:Subscribe>
  </s:Body>
</s:Envelope>
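To underline the scalability argument from section 2: under the proposed model, independently defined extensions simply compose as sibling elements, with no need to mint a Mode URI for each combination. A hypothetical request combining the WS-Management batching limits with the (also hypothetical) foo:Pause extension might look like this:

```
<wse:Subscribe>
  <wse:NotifyTo>
    <wsa:Address> ... </wsa:Address>
  </wse:NotifyTo>
  <wsman:MaxElements>6</wsman:MaxElements>
  <wsman:MaxTime>PT10M</wsman:MaxTime>
  <foo:Pause autoPause="true" persist="true"/>
</wse:Subscribe>
```

Adding a third extension to this request is just one more sibling element; under the Mode approach it would double the number of required URIs.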
https://www.w3.org/2002/ws/ra/wiki/Remove_Delivery_and_Mode
CC-MAIN-2017-39
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

hawick_circuits

template <typename Graph, typename Visitor, typename VertexIndexMap>
void hawick_circuits(Graph const& graph, Visitor visitor,
                     VertexIndexMap const& vim = get(vertex_index, graph));

template <typename Graph, typename Visitor, typename VertexIndexMap>
void hawick_unique_circuits(Graph const& graph, Visitor visitor,
                            VertexIndexMap const& vim = get(vertex_index, graph));

Enumerate all the elementary circuits in a directed multigraph. Specifically, self-loops and redundant circuits caused by parallel edges are enumerated too. hawick_unique_circuits may be used if redundant circuits caused by parallel edges are not desired. The algorithm is described in detail in.

#include <boost/graph/hawick_circuits.hpp>

IN: Graph const& graph
  The graph on which the algorithm is to be performed. It must be a model of the VertexListGraph and AdjacencyGraph concepts.

IN: Visitor visitor
  The visitor that will be notified on each circuit found by the algorithm. The visitor.cycle(circuit, graph) expression must be valid, with circuit being a const-reference to a random access sequence of vertex_descriptors. For example, if a circuit u -> v -> w -> u exists in the graph, the visitor will be called with a sequence consisting of (u, v, w).

IN: VertexIndexMap const& vim = get(vertex_index, graph)
  A model of the ReadablePropertyMap concept mapping each vertex_descriptor to an integer in the range [0, num_vertices(graph)). It defaults to using the vertex index map provided by the graph.
http://www.boost.org/doc/libs/1_63_0/libs/graph/doc/hawick_circuits.html
CC-MAIN-2017-39
en
refinedweb