Hi, I've recently installed mailman and got it to do *almost* what I want. I need to make it available for our users on different mail domains, so that there is no 'namespace' clash (i.e. list at domain1.com is different from list at domain2.com). I have managed to get most of the way there by embedding the domain name into the internal list name (i.e. list(@domain.com) would be known to mailman as 'domain-com-list') and then munging the mail aliases so that the wrapper gets the correct parameters when mail is sent to list at domain.com. The general settings page is modified to show the 'visible' list name (list) in the prefix and hostname options. The General options screen states that changing the 'public name' of the list will cause bad things to happen, as it will be advertised as the email address. This is exactly what I want to happen. Unfortunately, it isn't happening: instead of subscribers being told that the address of the list is 'list at domain.com', they are told 'domain-com-list at domain.com'. Can anyone give me some pointers to what I should be doing? I don't mind hacking code, but I don't know Python (although it shouldn't be that difficult to pick up). Regards,
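The alias munging described above can be sketched concretely. The following is a hypothetical illustration only: the wrapper path and the exact alias format are assumptions (Sendmail/Postfix-style aliases), not Mailman's actual code.

```python
# Hypothetical sketch of the alias munging described above.
# internal_name() embeds the mail domain into the Mailman list name
# ('list' at domain1.com -> 'domain1-com-list'); alias_lines() emits
# alias entries that hand the munged name to the Mailman wrapper.
# The wrapper path is an assumption, not a verified Mailman default.

def internal_name(local_part, domain):
    """Embed the mail domain into the internal list name."""
    return "%s-%s" % (domain.replace(".", "-"), local_part)

def alias_lines(local_part, domain,
                wrapper="/home/mailman/mail/wrapper"):
    name = internal_name(local_part, domain)
    return [
        '%s: "|%s post %s"' % (local_part, wrapper, name),
        '%s-request: "|%s mailcmd %s"' % (local_part, wrapper, name),
    ]

print("\n".join(alias_lines("list", "domain1.com")))
```

With a per-domain aliases file, mail to list at domain1.com and list at domain2.com would be delivered to distinct internal lists even though the visible local part is the same.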
https://mail.python.org/pipermail/mailman-users/2000-January/003449.html
Details - Type: Bug - Status: Patch Available - Priority: Minor - Resolution: Unresolved - Affects Version/s: 1.4.0 - Fix Version/s: None - Component/s: None - Labels: Description: The default serializer for the ElasticSearch sink (ElasticSearchLogStashEventSerializer) duplicates fields that are mapped to default logstash fields, for instance timestamp, source, and host. Those appear both as logstash fields ("@timestamp", "@source_host", etc.) and as fields under "@fields" ("@fields.timestamp", "@fields.host"). When inserting a field from the headers as a logstash system field, it should be removed from the dictionary so it doesn't get written again under the "@fields" field. Issue Links - depends upon FLUME-2099 Add serializer to ElasticSearchSink for new logstash v1 format - Open Hi Dib, v1 will certainly make the schema less noisy, but this issue is not due to the v0 schema; it just seems like a bug in the sink. The serializer creates a map of headers, extracts some fields from this map and sets them as top-level fields, and then goes over all the items in the map and adds them under the "@fields" field. So the items that were extracted before and were already added as logstash fields are added again under "@fields". This is redundant. Items from the map that were added should be removed from the map before the generic pass so they won't appear twice. Hope I managed to be clear. If I get to it I'll try to attach a code fix (still trying to understand the procedure of submitting code to an Apache project...). Hi Rotem, My apologies for misinterpreting your bug post. I also started on the Flume commit process very recently. > Typically, you start off with assigning the ticket to yourself. 
> Make changes in your local flume copy
> Test the changes
> Create a diff file of the patch once you have tested the changes
> Go to reviewboard, upload the diff file, tag the bug with the JIRA id and the Flume group
> Upload the patch to the JIRA ticket and mark the ticket as patch-available
> Once someone reviews the patch and marks it ok to ship, commit your patch to the appropriate branches in git; in this case I think it should be trunk and the Flume-1.5 branch
You can find more details here () and here (). Hope this helps, - Dib Hi Rotem, Correction about the Flume commit process outline I posted in the previous comment: ----- CORRECTION (last step in the Flume commit process comment above) ----- > Flume committer / contributor reviews the patch; marks it ok to ship; commits your patch to the appropriate branches in git, in this case I think it should be trunk and the Flume-1.5 branch in git. You WON'T be pushing the code from your local branch; someone with committer privilege can push the code. Thanks Rotem for the patch. Looks fine to me. Now please wait for a flume contributor / committer to review it. Meanwhile, I downloaded the diff file from reviewboard and attached it to the JIRA ticket as per the flume patch submission process. Hope you won't mind me uploading the patch to the JIRA. Also please assign the ticket to yourself, Rotem, and mark the JIRA ticket as patch available to make sure the flume community knows about the patch. Best, - dib Dib Ghosh - Unfortunately, the author himself has to attach the patch to the jira, since that is what grants the ASF the license to include the patch in ASF projects. Rotem Hermon - Please attach the patch to this jira Hari Shreedharan - My apologies for the slip up. Attached the patch. How do I assign the ticket to myself? I don't seem to have a way to set the assignee. Is there something else needed? Assigned to you. You should now be able to assign jiras to yourself, Rotem Rotem Hermon you need ticket assignment rights on JIRA to do it. 
Also, mark the JIRA as patch available once you are given the JIRA permission. Not a problem, sir. It is required so that the required license/copyright etc. are granted to ASF. Thanks, Hari Hi Rotem, This issue is due to the v0 logstash json schema used by Flume. Internally Flume's ElasticSearchSink adds @source and @source_host to mimic the v0 logstash format. This should be resolved with migration to the v1 json schema of Logstash. There is an open bug request on Flume for this one () and logstash documentation about the v0 schema problem here (). To quote the issue from the logstash bug list: "The current logstash json schema has a few problems: It uses two namespacing techniques when only one is needed ("@" prefixing, like "@source", and a "@fields" object for another namespace). @source_host and @source_path duplicate @source." I am also linking your ticket to Flume-2099. Hope this helps,
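The fix being discussed (remove headers from the map as they are promoted to top-level logstash fields, so the generic pass over the map cannot add them again under "@fields") can be sketched in a few lines. This is a language-neutral illustration in Python, not the actual Java serializer; the field mapping is assumed from the v0 names mentioned in the ticket.

```python
# Sketch of the proposed fix, not the actual Java serializer.
# Headers promoted to top-level logstash v0 fields are pop()ed out of
# the map, so the generic pass that fills "@fields" cannot duplicate them.

LOGSTASH_V0_FIELDS = {
    "timestamp": "@timestamp",
    "source": "@source",
    "host": "@source_host",
}

def serialize(headers, body):
    remaining = dict(headers)            # work on a copy of the header map
    event = {"@message": body}
    for header_key, logstash_key in LOGSTASH_V0_FIELDS.items():
        if header_key in remaining:
            # pop() both reads and removes the entry, so it cannot be
            # re-added under "@fields" below.
            event[logstash_key] = remaining.pop(header_key)
    event["@fields"] = remaining         # only the headers not promoted
    return event

event = serialize({"timestamp": "1385000000", "host": "web1", "app": "x"}, "hello")
```

The buggy behaviour described in the ticket corresponds to iterating over the original map instead of `remaining`, which writes `timestamp` and `host` a second time under "@fields".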
https://issues.apache.org/jira/browse/FLUME-2220
Cwd - get pathname of current working directory

    use Cwd;
    $dir = cwd;

    use Cwd;
    $dir = getcwd;

    use Cwd;
    $dir = fastcwd;

    use Cwd;
    $dir = fastgetcwd;

    use Cwd 'chdir';
    chdir "/tmp";
    print $ENV{'PWD'};

    use Cwd 'abs_path';     # aka realpath()
    print abs_path($ENV{'PWD'});

    use Cwd 'fast_abs_path';
    print fast_abs_path($ENV{'PWD'});

This module provides functions for determining the pathname of the current working directory. By default, it exports the functions cwd(), getcwd(), fastcwd(), and fastgetcwd() into the caller's namespace. Each of these functions is called without arguments and returns the absolute path of the current working directory. It is recommended that cwd (or another *cwd() function) be used in all code to ensure portability, because cwd() is the most natural and safe form for the current architecture. For most systems it is identical to `pwd` (but without the trailing line terminator). The fastgetcwd() function is provided as a synonym for fastcwd(). The abs_path() function takes a single argument and returns the absolute pathname for that argument. It uses the same algorithm as getcwd(); in fact, getcwd() is abs_path("."). Symbolic links and relative-path components ("." and "..") are resolved to return the canonical pathname, just like realpath(3). This function is also callable as realpath(). The fast_abs_path() function looks the same as abs_path() but runs faster and, like fastcwd(), is more dangerous.
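The canonicalisation described for abs_path() (resolving symbolic links and "." / ".." components) is the realpath(3) semantics. For illustration only, the same behaviour is available in Python as os.path.realpath; this is an analogue of what abs_path() does, not part of the Cwd module.

```python
# Illustration of realpath(3)-style canonicalisation, the same
# semantics Cwd's abs_path() provides in Perl.
import os
import tempfile

base = tempfile.mkdtemp()
sub = os.path.join(base, "sub")
os.mkdir(sub)

# "sub/.." collapses back to the canonical form of the base directory.
assert os.path.realpath(os.path.join(sub, "..")) == os.path.realpath(base)

# A symlink pointing at sub resolves to sub's canonical pathname.
link = os.path.join(base, "link")
os.symlink(sub, link)
assert os.path.realpath(link) == os.path.realpath(sub)
```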
http://search.cpan.org/~gsar/perl/lib/Cwd.pm
UIFlow Desktop 1.0.17, Custom Block Problem? Is there a bug in the UIFlow Desktop version or am I doing something wrong? I'm unable to create a working custom block, and I can't reload a created block for editing. I'm able to create custom blocks with the Block Maker. Everything seems to work fine, and I can save (Download) the new custom block. Newly created blocks can be opened in UIFlow, but the namespace (group name) has no blocks in it. The blocks that were created do not appear and can't be inserted into Blockly. Attempting to edit a previously created block in the Block Maker also doesn't work. I can click on Open .m5b, and the file selection window pops up. I select a block I created, but Block Maker does not import the file's Block Settings. Whatever block design was open at that time remains unchanged. I'm pretty sure the problem is in the desktop version of the Block Maker. Here's why: I found a .m5b file on the internet called MakerCloud1.m5b and downloaded it. In UIFlow the MakerCloud1 group name now has blocks in it that I can move onto my workspace. But Block Maker doesn't import those block designs when I open the .m5b file. So it looks to me like the desktop Block Maker can't read and can't write correct files. Can anyone please offer help on how to fix this in my project? I tried this with the online version of UIFlow, and am able to open MakerCloud1.m5b but not the custom blocks I created with the desktop version. Using the online UIFlow Block Maker, I was able to create, save (download), and open for edit a new block. I was also able to open the new custom block for insertion into the Blockly workspace. This is in fact a bug in the desktop version of Block Maker. The desktop version hasn't been updated in ages and won't be until UIFlow 2.0 comes out.
https://forum.m5stack.com/topic/4061/uiflow-desktop-1-0-17-custom-block-problem/4
On Fri, Feb 24, 2012 at 04:20:47PM +0200, Uoti Urpala wrote:
> Roger Leigh wrote:
> > I certainly don't think it's fair for fairly niche platforms to hold
> > back Linux indefinitely. There is a high cost on maintainers to
> > support these platforms, and it would be an ideal situation if
> > systemd or upstart were sufficiently portable to run on them, even
> > if they didn't support all the Linux-specific features they offer.

Here's a trivial one from schroot:

#if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
      (this->get_device()).is_character()
#else
      (this->get_device()).is_block()
#endif

Disc devices are character on FreeBSD, rather than block. So we work around that fact. Big deal. Once that's done, it works perfectly on both platforms. Taking the stance that there must never, ever, be any platform-specific hacks is both extreme and counter-productive. If you want your software to be usable outside your own world, you have to make some compromises. By mandating Linux-specific features, systemd may be painting itself into a corner, given that they might be removed or replaced with something better in the future. If they are present, by all means use the facility. But don't create the seed of a future maintenance nightmare. This doesn't just affect systemd, it affects the kernel indefinitely, since we can't break working systems by removing the feature. This applies to all use of Linux-specific APIs: don't be too clever--it will come back to bite somewhere down the line. Asking this sort of question about systemd isn't stupid--there are long-term issues which could arise from short-sighted decisions by the systemd developers in the short term. It would greatly ease its adoption if they would be given due consideration. And this includes portability to other systems just as much as it does to future versions of Linux. I would love Debian to adopt systemd (or an equivalent). But I really can't see that happening without a change in attitude by upstream. 
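The "use the facility if it's present, but don't require it" approach argued for above is ordinary feature detection. A small Python sketch, chosen for brevity (the schroot example does the same thing at the C preprocessor level):

```python
# Feature-detect a Linux-specific API and fall back to the portable one.
# os.pipe2() wraps the Linux-specific pipe2(2); os.pipe() is POSIX.
import os

def make_pipe():
    if hasattr(os, "pipe2"):               # facility present: use it
        return os.pipe2(os.O_CLOEXEC)      # atomically close-on-exec
    r, w = os.pipe()                       # portable fallback
    for fd in (r, w):
        os.set_inheritable(fd, False)      # emulate O_CLOEXEC after the fact
    return r, w

r, w = make_pipe()
os.write(w, b"ok")
```

Either branch yields a working, non-inheritable pipe; the program degrades gracefully instead of refusing to run where the Linux-only call is absent.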
Portability to other systems is really just a special case of portability to future versions of the Linux kernel. Things can and will change, and systemd needs to make sure it doesn't break horribly when that happens. Regards, Roger

-- 
.''`.  Roger Leigh
: :' : Debian GNU/Linux
`. `'  schroot and sbuild
  `-   GPG Public Key  F33D 281D 470A B443 6756 147C 07B3 C8BC 4083 E800
https://lists.debian.org/debian-devel/2012/02/msg01076.html
Codiform, by Geoffrey Wiseman. SSH Whitelists on AWS with awswl: If you have EC2 instances on AWS, it is common for them to be layered behind firewalls implemented with VPC Security Groups. That means that if you need to access these servers directly, you may not be able to unless you take measures to make that happen.<br /><br />In an enterprise AWS account, there are lots of good solutions to this problem. Firstly, if you are far enough down the containerization path, you may argue that directly accessing the instances via SSH is to be avoided, that you should only be building and deploying containers. Alternately, if you do need to access these servers, you can likely do so with a VPN, either hardware or software.<br /><br />However, for a smaller AWS account, like a small project or small business, these solutions may be more complicated than you desire. I find myself needing to access small AWS accounts from a variety of places, as I move around a fair bit meeting with clients and working, and I need to be able to access EC2 instances on AWS while I do, so I found myself wanting a tool that would allow me to quickly add and remove my current external IP address or particular networks (expressed through CIDR blocks) to an AWS security group.<br /><br />So I built a little open-source tool. Since the most-popular AWS client library is <a href="">boto</a>, and since python is a reasonable choice for a simple cross-platform cli tool, I built it in Python. I called it awswl (aws whitelist) and I've been refining it, adding tests, documentation, making sure it works with both python2 and python3. 
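The core of such a tool is a single authorize/revoke call against the security group. A hedged sketch of the general shape follows; this is not awswl's actual code, the names are invented, and the client object is passed in (e.g. a boto3 EC2 client) rather than imported so the helpers stay library-agnostic:

```python
# Hypothetical sketch of whitelisting SSH for a CIDR block on an EC2
# security group. Not awswl's actual implementation.

def ssh_permission(cidr):
    """Build the EC2 IpPermissions entry for SSH (tcp/22) from a CIDR."""
    return [{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": cidr}],
    }]

def allow_ssh(ec2, group_id, cidr):
    # ec2 is an EC2 client, e.g. boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=group_id, IpPermissions=ssh_permission(cidr))

def revoke_ssh(ec2, group_id, cidr):
    ec2.revoke_security_group_ingress(
        GroupId=group_id, IpPermissions=ssh_permission(cidr))
```

Whitelisting your current external address then amounts to discovering it (for example via a what's-my-ip service) and calling allow_ssh with it as a /32 CIDR.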
Now it's finally ready to release to the wild.<br /><br />You can find it <a href="">on pypi</a> if you want to install it, <a href="">on GitHub</a> if you want to read the source or contribute, and you can browse the documentation on either one.Geoffrey Wiseman Add Taxes to Invoice on New Freshbooks I migrated to the "New Freshbooks" almost two years back, I think -- and for the most part it went fairly smoothly. The design is a little cleaner, although there are a few features of the old system that aren't present.<br /><br />Initially, when finding a feature gap, I assumed it was coming. Some of those features, however, still haven't come, and I'm tired of waiting. In particular, the ability to add taxes automatically to all the lines of an invoice is a real thorn in my side, because for some of my clients I have very detailed timesheet invoices.<br /><br />On the latest example of this, I had 87 lines. Going through 87 lines one by one and clicking "Add Taxes", then a checkbox, then save is ... repetitive, boring, and just the sort of work that computers are great at and humans are less great at. And yet I've done this on a whole ton of invoices because there wasn't an automatic way to do this on the new FreshBooks.<br /><br />But no more. 
I gave up waiting for FreshBooks to do it and wrote a little piece of JavaScript that I can run with Tampermonkey:<br /><br /><pre>// ==UserScript==
// @name         Add HST to FreshBooks Invoice
// @namespace
// @version      1.0
// @description  Go through a FreshBooks Invoice and add HST to each line
// @author       Geoffrey Wiseman
// @match        *
// @grant        none
// @run-at       context-menu
// ==/UserScript==
(function() {
    'use strict';
    let popovers = document.querySelectorAll("div.js-tax-picker-popover");
    for( var popover of popovers ) {
        let hst = popover.querySelectorAll("td.js-taxes-popover-checkbox")[1];
        hst.querySelector("input").click();
        popover.querySelector("button.button-primary").click();
    }
    alert( "Done adding HST to Invoice." );
})();
</pre><br />Now I just edit an invoice, right click, select "Tamper Monkey" and then "Add HST to FreshBooks Invoice" and wait for it to run. Way, way better than three clicks per line.<br /><br />If you have the same problem, feel free to adopt my solution. You might need to adjust the index of the popover checkbox if you don't want to add the second-defined tax (I used to have PST and GST; now I just have HST) -- but other than that it should probably work for you.Geoffrey Wiseman ServiceWorkers coming to Safari If ServiceWorkers are <a href="">coming to Safari for macOS and iOS</a>, it looks like progressive web applications will be pretty plausible <a href="">across all major modern browsers</a> -- both desktop and mobile. You'll still have a problem with older browsers (and thus older machines that haven't updated), but this can still get you a single codebase across desktop and mobile, which is appealing to a lot of companies. 
It'll be interesting to see how much of an impact this has on React Native traction.Geoffrey Wiseman Firewalld on CentOS 7 with Ansible I've been working on the configuration of a new server with Ansible; this client has CentOS servers, so I'm configuring it for the current release of CentOS 7.x, which comes with <a href="">Firewalld</a>. All of the existing playbooks are set up for iptables, which was used in earlier versions of CentOS.<br /><br />I thought about rolling it back to iptables, but I decided to try using firewalld first. I hit some problems:<br /><br /><ul><li>Firewalld port forwarding only supports remote traffic:</li><ul><li>If I want to run Tomcat and use the firewall to forward from port 80 to the Tomcat port, then accessing it locally will not trigger these port forwarding rules.</li><li>Fortunately, you can work around that with a "direct" rule.</li></ul><li>The ansible <a href="">firewalld module</a> seems immature:</li><ul><li>flagged as 'preview'</li><li>doesn't support <a href="">port forwarding</a></li><li>doesn't support <a href="">direct rules</a></li><li>You can work around that by invoking `firewall-cmd` using the command module.</li></ul><li>Not easy to use `firewall-cmd` in an idempotent way.</li><ul><li>Firewalld is configured with commands, somewhat like iptables. You can run these commands using the command module.</li><li>It's not that easy to invoke these commands in Ansible in a way that lets you be properly idempotent -- only run this command or mark it changed if something has changed. As a result, these will run every time, and if you have a handler to restart the firewalld service, that will also trigger every time.</li></ul></ul><div>What I ended up doing is configuring firewalld with commands, then looking at the configuration files that result, and instead of having the ansible playbook trigger these commands, I have the playbook copy these files into place. 
File copying is something that is easier to do in an idempotent way than command invocation, so I can configure Ansible to copy configuration files into place (<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">/etc/firewalld/direct.xml</span>, <span style="font-family: Courier New, Courier, monospace; font-size: x-small;">/etc/firewalld/zones/public.xml</span>), and then restart firewalld if the files have changed.</div><div><br /></div><div>That seems to be reasonably happy.</div>Geoffrey Wiseman Android Localization to Supported Languages When you're working with an Android app that will be used in a variety of markets, localization is an important feature to support the different countries, date formats, number and currency formats that are used around the world. Android has deep support for localization which you can <a href="">read about</a> in great detail. <br /><br />In iOS, in order to add support for a language, you <a href="">add languages to your project</a> properties in an explicit way. In Android, adding support for a language is more implicit -- you simply start localizing resources. This is easy to use, but it creates a problem where you might have partial support for a language in place but not be ready to take that support live. Android doesn't offer a means of explicitly listing which languages you wish customers to have access to. This can lead to situations where you have to keep localization resources on a branch or out of source control or otherwise limit how those files are managed. That's viable, but it isn't a great fit for every workflow.<br /><br />On a recent client project, I did a little work with Gradle to make supported languages more explicit. 
I added a list of supported languages in the <span style="font-family: "courier new" , "courier" , monospace;">build.gradle</span>, used <a href="">resource shrinking</a> with <span style="font-family: "courier new" , "courier" , monospace;">resConfigs</span> and exposed the languages to Java in <a href="">BuildConfig</a> using <span style="font-family: "courier new" , "courier" , monospace;">buildConfigField</span>:<br /><br /><pre class="code">// Approved Languages
def approvedLanguages = [ "en", "ru" ]
resConfigs approvedLanguages
buildConfigField "String[]", "APPROVED_LANGUAGES", "{\"" + approvedLanguages.join("\",\"") + "\"}"
</pre><br />By exposing the approved languages to Java, it was possible to use it in localization code, for instance to only localize dates to languages that are in the supported language list:<br /><pre class="code">public class LocaleUtils {
    public static boolean isSupported(Locale locale) {
        for (String language : BuildConfig.APPROVED_LANGUAGES) {
            if (language.equals(locale.getLanguage())) return true;
        }
        return false;
    }

    public static String localizeLocalDate(LocalDate localDate, String format) {
        Locale locale = getDefaultSupportedLocale();
        DateTimeFormatter formatter = DateTimeFormat.forPattern(format).withLocale(locale);
        return localDate.toString(formatter);
    }
}
</pre><br />With this in place, we can start working on a new locale, integrating translated strings and graphics in the main line of development, and know that they won't be exposed to customers until we're ready. 
With a little work, we could also use a different list of supported languages on debug and production builds.Geoffrey Wiseman Mutable, Hashable and Collections<p>Swift sets have similar behaviour to Java sets when it comes to mutable objects: if you have a mutable object, wherein mutation of the object can result in a change to the hash value, that can lead to surprising behaviour. The collection may store the object in a bucket based on the hash it had at the time of insertion. If you modify the object, the hash value changes, but it doesn't move from one bucket to another. This can mean that a set may report that the object is not present even when it is.</p><p>I made a little playground (<a href="">raw view</a>, <a href="">compressed form for download</a>) to demonstrate this issue, but in essence:</p><pre>let mi = MutableInt(1)
let mis: Set = [mi]
mis.contains(mi) // true
mi.value = 2
mis.contains(mi) // false
</pre><p>The same object is present in the same set for both contains calls, but because the hash value has changed, the set can no longer find it.</p>Geoffrey Wiseman Key Verification and Ansible I've been using <a href="">Ansible</a> to configure some instances on Amazon Web Services with a client, for two reasons: <ul><li>To make it repeatable in the future, if we want to configure an instance again, or another instance.</li><li>Because I will be repeating it immediately now, making more than one server to live behind a load balancer, all meant to be configured identically.</li></ul>When you connect to a host for the first time over SSH, you are asked to verify the host key. When you use Ansible against a host for the first time, the same thing happens. 
If you are connecting to multiple hosts at the same time, <a href="">bad things happen</a> -- you get multiple prompts to verify the host key, and responding to them doesn't seem to work:<br /><br /><pre>$ ansible servergroup -m ping
The authenticity of host '10.0.2.161 (<no hostip for proxy command>)' can't be established.
ECDSA key fingerprint is SHA256:hEdMy3XKWV/zWobmSuwf+b6oI9xt4cYJzM1eAa2T8Ak.
Are you sure you want to continue connecting (yes/no)? Please type 'yes' or 'no': yes
Please type 'yes' or 'no': yes
Please type 'yes' or 'no': yes
Please type 'yes' or 'no': yes
Please type 'yes' or 'no': yes
Please type 'yes' or 'no': ^CProcess WorkerProcess-3:
Process WorkerProcess-2:
Traceback (most recent call last):
[ERROR]: User interrupted execution
</pre><br /> While I was pleased to read <a href="">the toroid.org post</a> about bugs filed, it doesn't seem to be fixed yet.<br /><br />Happily, as long as you connect to a single host at a time, everything is fine, and after that you can connect to the group: <pre>$ ansible server-one -m ping
server-one | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
</pre>That's fine for a couple of servers, but it wouldn't be fun with a large cluster. If there's a better way, I haven't discovered it yet.Geoffrey Wiseman Ligature Fonts in IntelliJ IntelliJ IDEA 2016.2 was released with improved support for programming fonts with ligatures. I tried a few of them to see how I felt about them. I do some Scala in IntelliJ, so I used Scala examples to compare how I felt about things.<br /><br /><b>Hasklig</b><br /><a href="">Hasklig</a> initially felt like the right font for me. It's based on <a href="">Source Code Pro</a>, which I like, although it's never been my favourite programming font. 
It's very round, and a bit wider than I normally choose, but legible, and the ligatures look nice.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />However, I do like the idea of having a ligature for "!=", and Hasklig's limited ligature set doesn't include one for this.<br /><br /><b>Fira Code</b><br /><a href="">Fira Code</a> looks a reasonable amount like Hasklig, really. It's a little narrower, a little less round. I found it very slightly less legible at first, but it was a close second place. Notice that it does include a not-equals ligature.<div class="separator" style="clear: both; text-align: left;">Although I started with Hasklig, I have since switched to Fira. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><b>Monoid</b><br /><a href="">Monoid</a> is big. It's the only one that I'm showing here in 12pt instead of 13pt, and even that feels bigger than the others. It's pretty legible, but as a result the weight feels a little too light to me. Feels like coding in Helvetica Neue Ultra-Light.<br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br /><b>Pragmata Pro</b><br /><a href="">Pragmata Pro</a> is narrow and tall. The wideness of Source Code Pro is one of the reasons I stayed away from it at first, but Pragmata feels almost cramped to me. I like the idea of fitting a lot of text in, but it feels less legible as a result of its narrowness, so I'm not sold.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div>The fact that it costs €199 for the complete family does not endear me to it either. 
I understand making fonts is a lot of work, but many programmers aren't going to bother with a custom font at all, and making it that expensive pretty much rules out the odds of it being an impulse purchase. Some way of trialing the font in an IDE would be nice.<br /><br />If I felt this were the right font for me, I suppose I would probably purchase it anyway, but at that price I'd want to be sold right away, and I'm not. If it were cheap, maybe I would have impulse-purchased and given it a longer trial.<br /><br />Pragmata does have a lot of nice features, so if you do feel so inclined, it's still worth looking at.<br /><br /><b>My Choice</b><br />To be honest, these are all pretty decent fonts, and if you're interested in having programming ligatures, I urge you to try some of them out and see which one fits for you.<br /><br />For me, I think I'm going to try <i>Fira Code</i> for a while. If Hasklig or Source Code Pro adds more ligatures, I might consider going back. I'll also be watching for other ligature programming fonts (Inconsolata, for instance?).<br /><br />Have any suggestions I should try?<br /><br /><br />Geoffrey Wiseman NSDecimalNumber and unsignedIntegerValue So, if you have an <span style="font-family: "courier new" , "courier" , monospace;">NSDecimalNumber</span> and you want to get the unsigned integer equivalent of it, you'd do <span style="font-family: "courier new" , "courier" , monospace;">[nsdn unsignedIntegerValue]</span>, right?<br /><br />Wrong. 
Or, at least, risky.<br /><br />I hit a bug in <span style="font-family: "courier new" , "courier" , monospace;">NSDecimalNumber</span> this week, and I thought I'd share it as a playground, which you can <a href="">download</a> or <a href="">read</a>, but I'll summarize here.<br /><br />Essentially, if you have an <span style="font-family: "courier new" , "courier" , monospace;">NSDecimalNumber</span> containing a real number with a large mantissa and you call <span style="font-family: "courier new" , "courier" , monospace;">unsignedIntegerValue</span> on it, you may get zero instead of the nearest integer. I assume it's a bug, rather than a design decision, but it's definitely risky to rely on <span style="font-family: "courier new" , "courier" , monospace;">unsignedIntegerValue</span> here. If you convert the value yourself (e.g. call <span style="font-family: "courier new" , "courier" , monospace;">floor(nsdn.doubleValue)</span>) you'll be much better off.<br /><br />I've filed a <a href="">radar</a>.Geoffrey Wiseman App Transport Security failures in Safari One of my client projects is an iOS application which, like many iOS applications, has an embedded web view, which it uses in this case to display help content from the web. 
Because this app targets iOS 7.x as a baseline, it uses <span style="font-family: Courier New, Courier, monospace;">UIWebView</span> instead of, say, <span style="font-family: Courier New, Courier, monospace;">SFSafariViewController</span>.<br /><div><br /></div><div>The help content is part of the company's website, and the web page footer has social media links, and some of these links, if you clicked on them, wouldn't load.</div><div><br /></div><div>Initially, I thought this might be because of the way <span style="font-family: Courier New, Courier, monospace;">UIWebView</span> has handled such links in the past, so I built a <span style="font-family: Courier New, Courier, monospace;">UIWebViewDelegate</span> that would inject a little bit of JavaScript to remove the targets. It didn't work. So I changed the JavaScript to set the target to <span style="font-family: Courier New, Courier, monospace;">"_self"</span>. Still didn't work:<br /><br /></div><pre>- (void)webViewDidFinishLoad:(UIWebView *)webView {
    NSString *injectedJavascript =
        @"function AppPrefix_Injected_suppressBlankTargets() {\n"
         " var links = document.getElementsByTagName('a');\n"
         " var hrefs = [];\n"
         " for( index = 0; index < links.length; index++ ) {\n"
         "   var link = links.item( index );\n"
         "   if( link.getAttribute( 'target' ) == '_blank' ) {\n"
         "     link.setAttribute( 'target', '_self' );\n"
         "     hrefs.push( link.outerHTML );\n"
         "   }\n"
         " }\n"
         " return hrefs.join();\n"
         "}\n"
         "AppPrefix_Injected_suppressBlankTargets();";
    NSString *result = [webView stringByEvaluatingJavaScriptFromString:injectedJavascript];
    DLog( @"Injected javascript to suppress blank targets on links: %@", result );
}
</pre><div><br />Having failed on two assumptions, I realized that I should verify my assumptions instead, and implemented <span style="font-family: Courier New, Courier, monospace;">webView:didFailLoadWithError</span>, which quickly showed me that the actual problem was App Transport Security.</div><div><br
/></div><div>The social media links? Some of them are HTTP rather than HTTPS, and those ones won't load because App Transport Security doesn't like it. That left options.</div><div><br /></div><div><b>Disable App Transport Security?</b></div><div>I don't like going this route until I have to. I don't want to whitelist certain sites because the social media links might lead to other sites which would also need to be whitelisted and then eventually you just end up disabling app transport security anyway.</div><div><br /></div><div><b>Open ATS Failures in Safari</b></div><div>What I ended up doing instead is listening for errors (<span style="font-family: Courier New, Courier, monospace;">webView:didFailLoadWithError</span>) and then check the error. If it was an app transport security failure, pull the URL out of <span style="font-family: Courier New, Courier, monospace;">NSError</span>'s <span style="font-family: Courier New, Courier, monospace;">userInfo</span> and then open that link in mobile safari instead.</div><div><div><br /></div></div><div>It's fairly straightforward. Write a <span style="font-family: Courier New, Courier, monospace;">UIWebViewDelegate</span> like this:<br /><br /></div><pre>- (void)webView:(UIWebView *)webView didFailLoadWithError:(NSError *)error { NSURL *failedUrl = [self parseATSError:error]; if( failedUrl != nil ) { DLog( "ATS Failure, opening in Safari: %@", failedUrl ); [[UIApplication sharedApplication] openURL:failedUrl]; } else { DLog( "Failed to load, with non-ATS error: %@", error ); } } - (NSURL *)parseATSError:(NSError*)error { if( error == nil ) return nil; if( ![error.domain isEqualToString:NSURLErrorDomain] ) return nil; if( error.code != -1022 ) return nil; NSString *url = error.userInfo[ NSURLErrorFailingURLStringErrorKey ]; return [NSURL URLWithString:url]; } </pre><div><br />This seems to work. 
So if you're using a <span style="font-family: Courier New, Courier, monospace;">UIWebView</span> and running afoul of App Transport Security, maybe this will help you.</div>Geoffrey Wiseman Release Joy After what felt like an eternal delay, one of my client's apps has finally gone live in the App Store. It is always nice when something you write goes live, but somehow the app store process, painful though it may be, makes the final victory of "Hey, there's my App, live in the App Store" feel that much sweeter.<br /><br />I've already seen some press for the app (and related wearables), so it's already somewhat promising, but it'll be interesting to watch it grow.<br />Geoffrey Wiseman Fabric / Crashlytics ADD in Images Remember how <a href="">I commented</a> that Crashlytics' daily email was so focused on the daily delta that it lacked a sense of the overall trends?<br /><br />Here are two daily emails back to back.
October 30th:<br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="100" src="" width="320" /></a></div><br /><br />October 31st:<br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="100" src="" width="320" /></a></div><br /><br />App name pixelated because I haven't cleared sharing that information.<br /><br />These kinds of transitions, from "growing like crazy" to "growth slows" and back, are a daily occurrence, which totally obscures any real trends about how many users are actually using your app.Geoffrey Wiseman Google Plugin Still Not Ready for Eclipse 4.5? Eclipse Mars / v4.5 was <a href="">released June 24, 2015</a>. The Google Plugin for Eclipse 4.5 has not yet been released. If you go to the Google Plugin for Eclipse <a href="">quickstart page</a>, you can see that it's still showing links for Eclipse 4.4, 4.3 and 4.2 / 3.8.<br /><br />In essence, if you'd like to use GWT and you want to use the latest Eclipse, there's no officially supported option nearly four months later.<br /><br />If you do a little searching, you'll find a few threads on the subject:<br /><br /><ul><li>Some threads recommend just using the Eclipse 4.4 plugin (<a href="">Google group for the Google Plugin</a>).</li><li>Other threads (<a href="">stackoverflow</a>) suggest that the 4.4 plugin doesn't work well with Eclipse 4.5.</li><li>And of course, some threads just got no answer (<a href="">devquestion</a>).</li></ul><div>A friend tried Eclipse 4.5 and the Google Plugin for Eclipse 4.4 together and hit a NoClassDefFound error for Java2HTMLEntityReader.
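When you hit a NoClassDefFoundError like this one, it often pays to check which jar a class is actually being loaded from, since version conflicts between bundled and plugin jars are a common cause. Here is a small stdlib-only sketch of that technique (the class and method names below are my own, not part of any Eclipse or Google tooling):

```java
import java.security.CodeSource;

// Quick classpath forensics: when a NoClassDefFoundError or other
// LinkageError shows up, print where a class is actually being loaded
// from, and compare the jar paths between the working and broken setups.
class WhichJar {
    static String locationOf(Class<?> cls) {
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        // Bootstrap/platform classes report no code source at all.
        return src == null ? "(bootstrap/platform class loader)"
                           : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // In a real debugging session you would pass the conflicting class,
        // e.g. the one obtained via Class.forName(...) for the failing type.
        System.out.println(locationOf(String.class));
        System.out.println(locationOf(WhichJar.class));
    }
}
```

Running this inside each Eclipse installation (or any JVM) shows which jar or directory supplied the class, which usually makes a duplicated or stale plugin jar obvious.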
A quick search found that it was a <a href="">bug</a>, but that raises further questions about a <a href="">plugin fork</a>, which looks like it might have been around <a href="">since 2013</a>. It looks like there's <a href="">a lot of work underway</a>.</div><div><br /></div><div>Ultimately, between this and some of the <a href="">posts about GWT 2.8</a>, this adds to my long-growing sense that GWT is stagnating. It hasn't descended into a terrible state yet by any means. It's quite usable for existing projects. Still, I wouldn't recommend it for a new project, and I'd recommend that people using it on existing projects start thinking about the long-term plan for their projects and consider a plan to move away from GWT at some point and how to manage that transition.</div><br />Geoffrey Wiseman Fabric / Crashlytics Growth ADD Crashlytics / Fabric has a daily email which gives you a sense of recent crashes, and also some basic stats. It's useful, but they don't seem to account for the fair amount of day-to-day variance, or attempt to smooth it out to give you a sense of real trends. Instead, everything is about yesterday's change:<br /><br /><ul><li><b>Day N</b>:</li><ul><li>New sessions up 20%!</li><li>Your app is on fire!</li><li>Seek investment immediately!</li></ul><li><b>Day N+1</b>:</li><ul><li>New sessions down 15%!</li><li>Your app is beginning to stagnate!</li><li>Oh noes!</li></ul></ul><div><b>UPDATE</b>:<br />I <a href="">captured images</a> from two back-to-back emails as a good example.</div>Geoffrey Wiseman for Android Documentation in Dash Android developers: are you using <a href="">Kapeli's Dash</a> to read Android docs on a Mac? Want to be able to search for inherited methods without hitting "expand all" first?
Request it!<br /><br /><blockquote class="twitter-tweet" lang="en"><div dir="ltr" lang="en"><a href="">@geoffreywiseman</a> I might change it to auto-expand depending on how many users request it. So far you're the first one.</div>— Kapeli (@kapeli) <a href="">September 24, 2015</a></blockquote><script async="" charset="utf-8" src="//platform.twitter.com/widgets.js"></script>Geoffrey Wiseman Integration between MailChimp and Mandrill I'm surprised by how limited the integration is between <a href="">MailChimp</a> and <a href="">Mandrill</a>. Given that Mandrill is a product developed by MailChimp, I was expecting the integration between the two to be pretty strong, and it really doesn't seem to be.<br /><div><br /></div><div>They're not the same use case, I admit. MailChimp is more of a broadcast / marketing email platform, with elements like campaigns to consider, whereas Mandrill is a transactional email service, for personalized delivery of email messages like notifications from an application. I'm not surprised that they have somewhat different feature sets as a result.</div><div><br /></div><div>I was a little surprised to discover that Mandrill and MailChimp have totally different account systems, and that while you can link your MailChimp account to your Mandrill account, it's not like having a single account with multiple products/services.</div><div><br /></div><div>The more you look, the deeper the differences get, and the more surprised I was.</div><div><br /></div><div>For instance, look at templates. MailChimp allows you to define email templates that you might send to a mailing list. Think of something like a sale email, where you might configure a template and then merge in a few specifics as you're sending to each person.
MailChimp has a sophisticated WYSIWYG editor that you can use to define these email templates.</div><div><br /></div><div>Mandrill also has templates. You might want to configure a template for the welcome email for your application in Mandrill rather than having the application encode all the details of what's in the email. This means that later, you could come back in and modify the template without changing the application.</div><div><br /></div><div>However, Mandrill doesn't have a WYSIWYG editor. If you wanted to have someone on your growth hacking team change the email, they'll have to know HTML to do it. That may not be the end of the world, depending on your company structure, but it was a surprising inconsistency for me, and a bit of a missed opportunity.</div><div><br /></div><div>There are integrations between the two. For instance, you can actually create an HTML template in MailChimp using their editor and then send the template over to Mandrill. But when you do so, you don't seem to have the option of choosing which template to replace, or to make use of Mandrill's template versioning. And if you make use of Merge Codes (think personalized fields) in MailChimp's designer, some, but not all, of those codes are supported in Mandrill, but neither MailChimp nor Mandrill seems to mind if you make use of them and then send the template from MailChimp to Mandrill. Those codes won't do their job anymore, but neither system tells you that, from what I can see.</div><div><br /></div><div>I just think MailChimp has missed an opportunity, both to integrate these two systems more tightly, encouraging MailChimp customers to consider Mandrill and vice versa, and also, frankly, to improve the experience of both systems by leveraging the work done by the other. Why doesn't MailChimp offer their WYSIWYG designer in the Mandrill interface? Can MailChimp templates be sent to Mandrill to become alternate versions of existing templates rather than replacing them?
If not, why not?</div>Geoffrey Wiseman Mock HTTP Servers I am doing some HTTP-based integration work for a client on a GWT-based project (which is why I wanted to know <a href="">which HttpClient version to use</a>). I'm still waiting on access to the third-party system, but I've already written and tested the code to generate the XML request and parse the XML response, so I wanted to take a step farther and write the code that would connect that with HTTP:<br /><br /><ul><li>Generate XML using the XML writer/generator</li><li>Send the XML over HTTP</li><li>Wait for the HTTP response, check it (status code, etc.)</li><li>Parse the HTTP response's body using the XML parser/reader</li></ul><br />But if I were going to do that, I wanted to test it, and testing it requires something to listen for HTTP requests (or to mock out the HTTP client, but this is already a pretty thin layer). I don't have access to the real service yet, so I can't write an integration test and, to be honest, I tend to try not to rely on integration tests. So basically I want something that can pretend to be an HTTP server of some kind but that has an API that makes writing test code convenient. I went looking for candidates to easily test an HTTP-based client in Java.<br /><br />When I'm picking tools in other platforms, I often consider the popularity of the dependency as well as other signs of its maturity. In Java, it's hard to get good numbers for such things. First of all, while there are some dependency management systems for Java, they're not universal or nearly-universal like they are on other platforms. Maven's central repository is probably the closest thing to that, but even then, there are lots of projects that download their dependencies directly. And, sadly, Maven doesn't publish stats about their dependencies, stats you could use to gauge how many other people are using and trusting a particular library vs.
another.<br /><br />However, after looking at some of the alternatives, I decided to start with <a href="">mocky.io</a>. That worked pretty well, but I don't really love using an external server for a simple test. That test wouldn't run offline, wouldn't run if the internet was down, or if mocky.io was having problems. It's basically too much of an integration test, and not even an integration test against the real service. Still, it worked well, and it's a reasonable place to start if you just want to fire up a simple test without adding new dependencies to your project.<br /><br />I looked at alternatives to replace it. <a href="">WireMock</a> has a reputation for being very capable and having a good API, but perhaps not kept as up-to-date as one might like, and a little heavy-weight. I also looked at <a href="">MockServer</a>, which looks very capable but also looks pretty heavy. My requirements for now are super-simple, so I ended up using <a href="">okhttp's MockWebServer</a>, one of Square's open-source projects. It's very simple, reasonably light, very fast, and has enough capability for my simple needs. I like it.<br /><br />There are lots of alternatives, from something custom (Netty, RxNetty, Spray) to other competitive libraries (<a href="">jailer</a>, <a href="">mock-http-server</a>) and services (<a href="">Sandbox</a>), but for the moment, MockWebServer does the job for me.<br /><br />Geoffrey Wiseman GWT / HttpClient Version Compatibility If you're using GWT and Apache HttpClient in a single project, it can be useful to know which version of HttpClient ships with which version of GWT, because GWT includes HttpClient directly in its jar, and conflicting versions can be a bit of a pain.<br /><br />I can't find that information published anywhere else, so I'm publishing it here. Basically, look in the <a href="">dev/build.xml</a> file for the tagged version of GWT that you care about.
Since I've already done that, here's the result for several of the most recent releases:<br /><br /><table><thead><tr><th>GWT Version</th><th>HttpClient Version</th></tr></thead><tbody><tr><td>2.7</td><td>4.3.1</td></tr><tr><td>2.6.x</td><td>4.3.1</td></tr><tr><td>2.5.x</td><td>4.1.2</td></tr><tr><td>2.4</td><td>4.0.1</td></tr></tbody></table><br />I hope this will be useful for someone else.Geoffrey Wiseman Web Assembly is Backed by the Right People There have been a few attempts over the years to take a more revolutionary approach to pushing the web forward. Google's Dart and Microsoft's TypeScript are some examples of these. While these have been interesting to skim, I have never followed them too closely because they lacked the kind of support they would need to succeed. JavaScript alternatives can compile to JavaScript, but without being built into the browser, they were never going to replace JavaScript.<br /><br />Yesterday, news of the Web Assembly initiative broke:<br /><ul><li>TechCrunch: <a href="">Google, Microsoft, Mozilla And Others Team Up To Launch WebAssembly, A New Binary Format For The Web</a></li><li>Ars Technica: <a href="">The Web is getting its bytecode: WebAssembly</a></li></ul>It's far too early to decide whether or not this will work out; there's very little detail to go on at this point. However, I was pleased to see that the group of companies putting some support behind it seems to be the right group. If Apple, Mozilla, Microsoft and Google all put their weight behind an initiative to improve the web, there's a much better chance of it succeeding than if one, or none, of them does.<br /><br />Now, "weight" is a subjective term, and it wouldn't be hard to imagine one or all of these companies not putting enough force behind this to make it a success.
But if they really do all commit significant resources to this kind of effort, there's a real chance that this could be a moment in the history of the web that we look back on as a turning point.<br /><br />It's still too early to have much of an opinion, but I'm interested to see where this goes.<br />Geoffrey Wiseman JavaScript Testing Frameworks On a recent <a href="">Clarity</a> call, one of the topics we discussed was JavaScript testing frameworks.<br /><br />It’s a complex topic, more complex than it is on many other platforms, because JavaScript testing frameworks are pretty granular – there are lots of pieces and they can be used together or independently.<br /><br />There are:<br /><ul><li>Test runners</li><li>Unit testing frameworks</li><li>Integration testing frameworks</li><li>Assertion frameworks</li><li>Mocking frameworks</li></ul>Some tools are very specifically just one of these. Other tools cover more than one category or blur the lines between several of these.<br /><br />If you’ve used some of these, you might have a preference based on what feels right for you. If you haven't, you might use a different criterion: popularity.<br /><br />If you don’t have a preference, picking the most popular test framework is useful for a few reasons:<br /><ul><li>You are crowdsourcing the research. If most people are using a particular framework, there’s often a reason for that.
It might not be the best framework, or the up-and-coming framework, but it’s probably mature and reasonably solid.</li><li>If you have an issue, you probably won’t be the first person to have that issue, and finding answers to your problems is often easier.</li><li>When hiring or outsourcing, it’s a lot easier to find someone with experience with the popular frameworks than the niche frameworks.</li></ul><div>Of course, using the popular frameworks isn't likely to give you a competitive advantage, but it's up to you to decide when it's worth seeking that kind of advantage. I'm not sure that JavaScript testing frameworks is the first place I'd seek it.</div><h2>Gauging Popularity</h2>Of course, if you’re trying to assess the popularity of a tool, there’s a bunch of different ways you could do that, including:<br /><ul><li><a href="">npm</a> statistics</li><li><a href="">stack overflow</a> questions</li><li>job trends (e.g. <a href="">indeed</a>)</li><li>google <a href="">search trends</a></li></ul><h2>Candidates</h2>What are some of the primary candidates for JavaScript and testing?<br /><h3>QUnit</h3><a href="">QUnit</a> is a unit testing framework that was originally part of jQuery.<br /><br />Popularity: <br /><ul><li><em>NPM</em>: ~20k downloads in the last month. Often over 500 downloads per day.</li><li><em>Job Trends</em>: QUnit was initially a strong contender against Jasmine until early 2013, after which it has stayed steady while Jasmine has quadrupled. In early 2014, Mocha also surpassed it and has now doubled.</li><li><em>Stack Overflow</em>: a distant third in total questions, and rarely top of the trending tags for JavaScript test frameworks.</li></ul><h3>Jasmine</h3><a href="">Jasmine</a> is a behaviour-driven test framework, intended to be a fairly complete system on its own – it comes with assertions, spies, mocking.<br /><br />Popularity: <br /><ul><li><em>NPM</em>: Almost 300k downloads in the last month.
Was hitting ~1k/day in 2014, regularly over 5k/day in Q1 2015 and now often over 10k/day.</li><li><em>Job Trends</em>: Jasmine is the leading JavaScript testing framework, although Mocha has been making steady progress, and is probably the strongest alternative.</li><li><em>Stack Overflow</em>: Jasmine is the leader in tag trends and total questions, but the data is close and varies wildly.</li><li><em>Google</em>: Jasmine got the early lead and has held it. Mocha is making some headway.</li></ul><h3>Mocha</h3>Test framework designed with an extension model in mind.<br /><br />Popularity:<br /><ul><li><em>NPM</em>: Mocha seems to dominate NPM downloads, with 2M downloads in the last month, and many days over 50k downloads/day this year.</li><li><em>Job Trends</em>: Mocha is a relative newcomer, but has already surpassed QUnit and is making headway on Jasmine. Still, Jasmine is featured in twice as many job postings as Mocha, so it’s still the underdog.</li><li><em>Stack Overflow</em>: Half of Jasmine’s total questions, but pretty competitive on tag trends.</li><li><em>Google</em>: Mocha got the latest start on the search trends, but has already surpassed QUnit. It’s not clear if it’s making headway on Jasmine on search trends.</li><li><em>Stackshare</em>: Mocha handily beats Jasmine <a href="">on Stackshare</a>. I’m not sure if people are more happy to share that they’re using Mocha (e.g. aspirational) or if the kinds of people on Stackshare are more likely to use Mocha (e.g. selection bias).</li></ul>Mocha is clearly a credible alternative to Jasmine, and seems better suited to people who want to mix and match pieces of the framework rather than relying on a single complete solution. I feel like Mocha is ‘in fashion’ or ‘trending’ right now. Still, I don’t think you’ll go that far wrong with either Mocha or Jasmine.<br /><h3>Karma</h3><a href="">Karma</a> is a test runner, popularized by AngularJS.
It is test framework agnostic, and works with QUnit, Jasmine and Mocha.<br /><br />Popularity:<br /><ul><li><em>NPM</em>: Almost 1M downloads in the last month and regularly over 30k downloads/day this year.</li><li><em>Job Trends</em>: Karma is the only JavaScript test runner with meaningful penetration on job postings that I could find, although it’s certainly less common than any of the test frameworks.</li><li><em>Stack Overflow</em>: Vastly ahead of all the other JavaScript test runners on total questions.</li><li><em>Google</em>: Karma’s pretty recent on Google search trends, but it’s already competitive with Mocha in terms of search.</li></ul><h3>Chai</h3><a href="">Chai</a> is an assertion framework, but one that has an extension model of its own, allowing yet another layer of library to be layered on top of it.<br /><br />Popularity:<br /><ul><li><em>NPM</em>: ~850k downloads in the last month and regularly over 20k downloads per day. Chai is about twice as popular as Should, but they’re in the same league.</li><li><em>Job Trends</em>: Chai is the only assertion framework to have meaningful results on job trends.
Of course, it doesn’t help that its two strongest competitors, expect.js and should.js, have more generic names that make searching a little harder.</li><li><em>Stack Overflow</em>: Doesn’t seem to place on tag trends, but well ahead of should.js on total questions.</li><li><em>Google</em>: Up alongside QUnit, but below Mocha and Jasmine.</li></ul><h3>Should.js</h3><a href="">Should</a> is a BDD-style assertion framework for node and browsers.<br /><br />Popularity:<br /><ul><li><em>NPM</em>: Almost 400k downloads in the last month, and regularly over 10k/day downloads this year.</li><li><em>Job Trends</em>: Although it’s harder to search for “javascript” and “should” and get useful results, neither “should.js” nor “shouldjs” seems to show up on significant numbers of job posts.</li><li><em>Stack Overflow</em>: Well behind Chai on total questions.</li><li><em>Google</em>: Hard to get clean results, but “should.js” is clearly below Chai.</li></ul><h3>Others</h3>There are, of course, countless others. Some of them are specific to a particular platform, but many are simply alternatives to those listed above, or another tool related to testing but not fitting within the broad categorization above.<br /><br />I’ve tried to cover the most popular set, but there are lots of others on the long tail of popularity, such as:<br /><ul><li><a href="">Expect</a> is another BDD-style assertion framework, built by Automattic, for node.js and browser testing.
At about 200 downloads/day and <10k downloads in the last month, it’s clearly far less popular than Should and Chai.</li><li><a href="">Wallaby</a> is another test runner.</li><li><a href="">NodeUnit</a> is a test framework built on node.js’s assertions.</li><li><a href="">PhantomJS</a> is a headless WebKit with scriptable JavaScript API which can be used for testing.</li><li><a href="">JSTestDriver</a> is another test runner.</li><li><a href="">Cucumber.js</a> is a behaviour-driven development (BDD) framework for JavaScript.</li><li><a href="">Dalek</a> is more of an end-to-end framework, with a runner, assertions, browser support and more.</li><li><a href="">Atomus</a> is a test framework for client-side tests in a node environment by simulating a browser.</li><li><a href="">Testem</a> is another JavaScript test runner. At 200k NPM installs in the last month, it’s probably the most credible alternative to Karma.</li><li><a href="">Sinon</a> adds spies, stubs and mocks, and is often used with Chai.</li><li><a href="">Protractor</a> is an end-to-end testing tool for AngularJS.</li><li>Although code coverage tools aren’t specific to testing, they are often used in conjunction with testing. Some of the candidates here are:</li><ul><li><a href="">Istanbul</a></li><li><a href="">Blanket</a></li><li><a href="">JSCover</a></li><li><a href="">Saga</a></li></ul><li><a href="">Nightwatch</a> is an end-to-end testing framework that works with Selenium and node.js.</li><li><a href="">Jest</a> is layered on top of Jasmine with the goals of making Jasmine that much easier to use (“painless”).</li><li><a href="">Chutzpah</a> is another JavaScript test runner that seems to be built with the Microsoft stack at least partially in mind.</li></ul><h2>What Should I Use?</h2>That’s ultimately your decision, but if popularity is your primary criterion, then <em>Karma</em> with either <em>Jasmine</em> or <em>Mocha</em> & <em>Chai</em> are clearly the leaders. 
<br /><br />If you want to talk about this or something else, <a href="">call me on Clarity</a>, or <a href="">get in touch</a>.Geoffrey Wiseman Setting Up a CITS eApp Notification Feed I'm in the late stages of wrapping up a project to add a <a href="">CITS</a> eApp Notification feed as an integration between an insurance client of mine and a vendor's system (<a href="">bluesun's WealthServ</a>), and I thought it might be helpful to look back over the effort and share things that I learned anew or, at least, things I received fresh reminders of along the way.<br><br>If you haven't encountered some of these terms elsewhere, let me explain:<br><br><ul><li><i><a href="">CITS</a></i> is the Canadian Insurance Transaction Standardization effort by <a href="">CLIEDIS</a>.</li><li><i><a href="">CLIEDIS</a></i> is the Canadian Life Insurance EDI Standards organization.</li><li><i>eApp Notification feed</i> is a feed to inform vendors of an eApplication submission. Basically, it carries core information about a new application from one organization to another.</li></ul><br>For instance, an insurance company might want to send information about new policies in the application process to, say, <a href="">WealthServ</a> or <a href="">VirtGate</a> so that brokers could log in and see their policy, and get updates and commission information.<br><div><br></div>I've been taking notes in an outline to write up, and the outline is now big enough that I'm sure I'm going to need more than one post. So I'm starting here with a brief description of the project and some of the terms, and then I'll follow that with a series of posts covering many of these topics:<br><br><ul><li>Integration Projects and Models</li><li>Generic Data Models</li><li>Monitoring and Logging</li><li>Security</li></ul><br>That's not an exhaustive list, but it's a start.
More to come.<br><br>Geoffrey Wiseman Git Tower 2 Recommendation: Cautious Buy (Update: Buy) I've been using <a href="">Git Tower</a> for a couple of years now, and I'm happy to recommend v1.x as a solid GUI Git client for OS X. There are a few things I'd like to see improved, but the basics are quite good, and it's been relatively bug-free for me (although after doing a Twitter search, I can see some people had some problems with v1.x that I didn't).<br><br>Since Git Tower v2.0 was released, I've installed a trial and I've been using it on and off in parallel with Git Tower v1.x. In general, I like it. I think it's a fine evolution of Git Tower, I think the UI changes are mostly improvements, and I'm looking forward to using it full-time.<br><br>Having said that, I don't think it's ready for prime-time use, at least not for me. I've reported a few bugs already while using the trial, and I'm <a href="">not the only one</a>. And if you read over the <a href="">mentions that @gittower has on Twitter</a>, I think that's the general sentiment: looks good, but needs a bit more attention.<br><br>There have been a few point releases already (v2.0.3 at the time of this writing) and I expect there will be more. Basically, I'd say it looks like a promising upgrade, but if you're in no rush, it wouldn't hurt to wait a little longer. If you're curious, then download the trial in the meantime. If you really want to use Tower 2, you probably can, but accept there might be a few issues along the way.<div><br></div><div><b>Update</b>: I have been using Git Tower 2 for months now and almost all the issues have been worked out. It still isn't perfect, but I don't think the remaining issues warrant much caution.
I would say "buy".</div>Geoffrey Wiseman XSS Vulnerability in Desktop Application TweetDeck's recent <a href="">XSS vulnerability</a> highlights one of the downsides to using web technologies to build a desktop app.<br /><br />Of course, no app is immune to vulnerabilities, and using desktop application platforms certainly doesn't prevent all vulnerabilities. But TweetDeck's vulnerability exists in part because the content it displays is delivered in the same form as its user interface.<br /><br />A desktop application that failed to properly control web content that it was displaying would still expose the web container in which the content was running, but that wouldn't make it vulnerable to manipulating the application controls. A badly secured web view to display a tweet within a native application with native controls for displaying tweets would still make it hard to, say, trigger the application to <a href="">retweet the content automatically</a>.<br /><br />Geoffrey Wiseman Moo v2.0 Released I put the finishing touches on the second release of one of my open-source projects, Moo, last night. I'd been updating the documentation for a few weeks now to get that in sync with all the code changes that have been taking place. Now that it's finally done, I've released Moo v2.0 up to the Maven central repository.<br /><br />Moo v2.0 is the first release that was driven more by requests made by users of Moo than by my own needs. If you're using Moo and you'd like to make a request, get in touch over email, twitter, or GitHub, and I'll see what I can do.<br /><br />For those of you already using Moo, I'd love to hear some thoughts about how Moo v2.0 works for you and what enhancements you still need.<br /><br />If you're not using Moo, it's a relatively unintrusive way of mapping one object graph to another in Java.
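To give a feel for what "mapping one object graph to another" means, here is a tiny reflection-based sketch. To be clear, this is not Moo's API (Moo handles nested graphs, renames and collections, as its documentation describes); the class and field names below are purely illustrative:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;

// A toy mapper: copy identically-named fields from a source object onto a
// fresh instance of the destination class. Libraries like Moo do far more;
// this only shows the basic flat-field case.
class TinyMapper {
    static <T> T map(Object source, Class<T> destType) throws Exception {
        Constructor<T> ctor = destType.getDeclaredConstructor();
        ctor.setAccessible(true);
        T dest = ctor.newInstance();
        for (Field destField : destType.getDeclaredFields()) {
            try {
                Field srcField = source.getClass().getDeclaredField(destField.getName());
                srcField.setAccessible(true);
                destField.setAccessible(true);
                destField.set(dest, srcField.get(source));
            } catch (NoSuchFieldException ignored) {
                // The destination declares a field the source lacks; skip it.
            }
        }
        return dest;
    }

    // Hypothetical domain class and DTO, purely for demonstration.
    static class Account { String name = "example"; int internalId = 42; }
    static class AccountDto { String name; }

    public static void main(String[] args) throws Exception {
        AccountDto dto = map(new Account(), AccountDto.class);
        System.out.println(dto.name); // prints "example"
    }
}
```

The point of this style of mapping is that the DTO only declares the fields it wants, and the mapper leaves the rest of the domain object (here, `internalId`) behind.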
Check out the <a href="">website</a>, the <a href="">release notes</a> or the <a href="">wiki</a> to learn more.<br /><br />Geoffrey Wiseman Cheatsheet for git-svn One of my clients uses Subversion for source-control, and since I spend a lot of time in git anyway, I have been using <span style="font-family: Courier New, Courier, monospace;"><a href="">git-svn</a></span> to interact with their subversion repository. I like being able to commit locally even when I don't have a connection to their server, and then push the commits up to subversion when I do have a connection.<br /><br />However, while <span style="font-family: Courier New, Courier, monospace;">git-svn</span> lets me use some of the same tools that I use elsewhere, like Git Tower, the syntax for <span style="font-family: Courier New, Courier, monospace;">git-svn</span> isn't exactly the same as either the <span style="font-family: Courier New, Courier, monospace;">git</span> or <span style="font-family: Courier New, Courier, monospace;">svn</span> clients, and it's not a perfect abstraction, which means that I occasionally have trouble remembering the exact syntax or the way to do something like find out which git commit corresponds to a subversion revision number.<br /><br />Since I've been using <a href="">Dash</a> as a documentation browser as well, and since <a href="">Dash 2.0</a> introduced cheat sheets, it seemed like I could save myself some time by writing a <a href="">git-svn cheatsheet</a> for Dash, which I did, and since it might save someone else some time, I contributed it to the Dash <a href="">cheatsheet repo</a> (using a <a href="">pull request</a>).<br /><br />It's available now, so if you ever use <span style="font-family: Courier New, Courier, monospace;">git-svn</span> and you use Dash, you should be able to download it within the app.
And since the repository is open, you can even submit a pull request for changes to my cheatsheet if there are things you'd like to add to it (or, if you prefer, you can let me know, and perhaps I'll make the change). Geoffrey Wiseman
http://feeds.feedburner.com/codiform
CC-MAIN-2021-04
refinedweb
9,556
51.99
Vib Looks like a product question. Use a Map to store the parent-child relations and construct the tree:

public class TreeNode {
    public String data;
    public TreeNode parent;
    public TreeNode left;
    public TreeNode right;

    public TreeNode(String d) {
        data = d;
        parent = null;
        left = null;
        right = null;
    }
}

public TreeNode convertToTree(List<String> pairs) {
    HashMap<String, TreeNode> map = new HashMap<String, TreeNode>();
    for (String pair : pairs) {
        String child = pair.split(",")[0].trim();
        String parent = pair.split(",")[1].trim();
        TreeNode ctn = map.get(child);
        if (ctn == null) {
            ctn = new TreeNode(child);
            map.put(child, ctn);
        }
        TreeNode ptn = map.get(parent);
        if (ptn == null) {
            ptn = new TreeNode(parent);
            map.put(parent, ptn);
        }
        if (ptn.left == null) {
            ptn.left = ctn;
        } else if (ptn.right == null) {
            ptn.right = ctn;
        } else {
            return null; // a node with more than two children: invalid binary tree
        }
        ctn.parent = ptn;
    }
    // the root is the one node that never got a parent
    for (TreeNode node : map.values()) {
        if (node.parent == null) {
            return node;
        }
    }
    return null;
}

preprocess all characters say "a" to "z" and create a map (index) <Character, number of bits set>. Now you just look up the map to get the answer. sorry, found that the api is internally implemented with offset and count mechanism only, exposed API takes begin and end indexes. In Java, the String substring method takes beginIndex and endIndex; what you described above is C++. Guess interviewer's brand is johnny walker. Can't we just store both lastname -> firstname and firstname -> lastname in the same map and do it with one comparison? you see any issue? This doesn't guarantee the order. One way to implement is to keep three condition variables like con12, con23 and con31 and make threads 1 and 2 wait and notify on con12, make threads 2 and 3 wait and notify on con23 and so on... Horse moves in L shape, accordingly your get_next_pos function changes. Rest looks fine.
but that can be handled by removing from the right. if(removed < n) { number.substring(0, number.length()-1 - (n - removed)); }

bool matchPattern(const char* str, const char* pat) {
    if (*str == '\0' && *pat == '\0') {
        return true;
    }
    if (*pat == '#' && *(pat + 1) == '\0' && *str == '\0') {
        return true;
    }
    if (*str == *pat) {
        return matchPattern(str + 1, pat + 1);
    }
    if (*pat == '#') {
        return matchPattern(str, pat + 1) || (*str != '\0' && matchPattern(str + 1, pat));
    }
    return false;
}

Is it BT or BST? if it is BST, construction is as follows: 1. make the first element the root 2. find index i where a[i] > a[0] 3. root->left = constructTree with nodes less than i 4. root->right = constructTree with nodes from i+1 If this is what the interviewer is expecting in an MS interview, he can as well ask for a word from a crossword puzzle, which would be a better question. All the product companies you mentioned would look for the same skills as the US ones, since they are US based :). Start and give the first one and try to complete a full loop with any. You would have a better idea about what they are expecting (also a better chance of cracking the next).
It does that by iterating over the array of drivers that have registered with it and calls the acceptsURL(url) method on each driver in the array, effectively asking the driver to tell it whether or not it can handle the JDBC URL Question is not clear. why the this object only. You can call other non synchronized methods when one thread is executing synchronized methods right ? I don't you should consider number of processors when designing. It should work seamlessly for single / multi processor. The idea about keeping the hashmap of Pubs and Subs is sound but do not see point in keeping k queues. Clarifying the question, Given an unbalanced tree, how would you replace each node's value with the sum of all the values in its subtree that are smaller than the node's value ? You also need to pad the array from the multiplication with second number onwards with zeros to get the correct sum result. Ex: 123 46 ----- 738 492"0" -------- 5658 -------- Yes, this is the method i think interviewer is expecting, especially with constraint that you shouldn't be creating the extra Node inside your function. Agree with @Skor. You can simply keep updating the value in loop and if redCounter is odd then increment the start point by 1 and output the bestVal. Otherwise print the bestVal Not sure we need this, Every time you delete the node, you are decrementing the count. So cant we just adjust the last node pointer to previous one. 2. if even point to the next node When we compose a new mail in gmail, it simply displays the emails with matching prefixes. For this purpose trie is a good solution. i too think that was the intention of the question careercup + linkdin- Vib February 02, 2019
https://careercup.com/user?id=5076156636200960
CC-MAIN-2020-50
refinedweb
922
65.83
freitasm: You mean static IP address? Dochart:freitasm: You mean static IP address? Can’t you still get a public ip address (dynamic ip address which changes a few times each month) if you want to opt out of CGNAT without having to pay for a static ip. P.S: sorry if I’m mixing up the terminology. it appears 2D are statically assigning, likely it's one or the other... would make no sense for them to build out whole new functionality.. just another thing to maintain otherwise. #include <std_disclaimer> Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
https://www.geekzone.co.nz/forums.asp?forumid=85&topicid=255661&page_no=24
CC-MAIN-2020-05
refinedweb
112
66.33
Summary

Posts data to a URL.

Syntax

#include <npapi.h>

NPError NPN_PostURL(NPP instance, const char *url, const char *target, uint32 len, const char *buf, NPBool file);

Parameters

The function has the following parameters:

- instance - Pointer to the current plug-in instance.
- url - URL of the request, specified by the plug-in.
- target - Display target, specified by the plug-in. If null, pass the new stream back to the current plug-in instance regardless of MIME type. For values, see NPN_GetURL.
- len - Length of the buffer buf.
- buf - Path to a local temporary file or data buffer that contains the data to post. The temporary file is deleted after use. Data in a buffer cannot be posted for a protocol that requires a header.
- file - A boolean value that specifies whether to post a file. Values:
  - true: Post the file whose path is specified in buf, then delete the file.
  - false: Post the raw data in buf.

Returns

- If successful, the function returns NPERR_NO_ERROR.
- If unsuccessful, the plug-in is not loaded and the function returns an error code. For possible values, see Error Codes.

Description

NPN_PostURL works similarly to NPN_GetURL, but in reverse:

- NPN_GetURL reads data from the URL and either displays it in the target window or delivers it to the plug-in.
- NPN_PostURL writes data from a file or buffer to the URL and either displays the server's response in the target window or delivers it to the plug-in.

If the target parameter is null, the new stream is passed to the plug-in regardless of MIME type. When you use NPN_PostURL to send data to the server, you can handle the response in several different ways by specifying different target parameters.

- If target is null, the server response is sent back to the plug-in. You can get the data and save it in a file or use it in a program.
- If you specify _current, _self, or _top, the response data is written to the same plug-in window and the plug-in is unloaded.
- If you specify _new or _blank, the response data is written to a new browser window.

You can also write the response data to a frame by specifying the frame name as the target parameter.

For HTTP URLs only, the browser resolves this method as the HTTP server method POST, which transmits data to the server. The data to post can be contained either in a local temporary file or a new memory buffer.

- To post to a temporary file, set the flag file to true, the buffer buf to the path name string for a file, and len to the length of the path string. The file-type URL prefix "file://" is optional.

MS Windows and Mac OS: If a file is posted with any protocol other than FTP, the file must be text with Unix-style line breaks ('\n' separators only).

- To post data from a memory buffer, set the flag file to false, the buffer buf to the data to post, and len to the length of the buffer.

Possible URL types include HTTP (similar to an HTML form submission), mail (sending mail), news (posting a news article), and FTP (upload a file). Plug-ins can use this function to post form data to CGI scripts using HTTP or upload files to a remote server using FTP.

You cannot use NPN_PostURL to specify headers (even a blank line) in a memory buffer. To do this, use NPN_PostURLNotify.

NPN_PostURL is typically asynchronous: it returns immediately and only later handles the request. For this reason, you may find it useful to call NPN_PostURLNotify instead; this function notifies your plug-in upon successful or unsuccessful completion of the request.

See Also

NPN_GetURL, NPN_GetURLNotify, NPN_PostURLNotify, NPP
https://developer.mozilla.org/en-US/docs/NPN_PostURL$revision/192710
CC-MAIN-2015-27
refinedweb
626
64.61
I installed the Gosmore engine as it says on that web page (i built an apache server), but it did not work. i am not very sure about my operation. Gosmore can not find the Pak file. asked 25 Mar '11, 19:30 monument edited 25 Mar '11, 19:40 Did you see the description of gosmores CGI interface in the wiki? answered 25 Mar '11, 19:36 petschge Yes, i read it, but i am not very clear. when i installed the engine, i just read the page All that tells you is "install gosmore" and "make a pak file available". The other site tells you how you actually can "talk" to gosmore in a program. thank you for your guidance, but Gosmore shows that it did not find the pak file, is there anything wrong with my operation? I don't know. You didn't really tell us how and where you installed gosmore and which pak file you used and where it is stored. i want to upload the pictures, but...it did not show the pictures, all red X..... By the way, this page is an example in linux, is there any help for windows, i got the gosmore for windows.... For all I know the CGI interface of apache works the same whether on linux or on windows. Firstly note that I forked YOURS to make the Osm.org Routing Demo. You can grab my latest tarball. My changes include support for IE, restricting jQuery to its own namespace, making routes dragable and many bug fixes. The best way to install gosmore is to follow the instructions on the Gosmore wiki page, i.e. svn co, configure and make. The "make install" step is not necessary. Then rebuild with an extract and test it e.g. QUERY_STRING="flat=45.303213&flon=-63.304713&tlat=44.6890011&tlon=-63.8092375&fast=1&v=motorcar" ./gosmore Although there are scripts for rebuilding the complete planet (in 2 sections), it is strongly advised that you use the shared infrastructure.
answered 28 Mar '11, 15:14
https://help.openstreetmap.org/questions/4096/how-to-make-the-gosmore-work-in-my-own-website
CC-MAIN-2019-47
refinedweb
445
75.3
#include <line.h> Inheritance diagram for aeLine: Definition at line 35 of file line.h. Create a new line object with no name. Naturally you won't be able to find the line from the engine by its name. Create a new line object given a name. Use this if you want to find the object from the objectpool at a later time. [virtual] Draw the line. After you have added the object to the engine's objectpool, aeEngine::Render() (which you should call upon aeevProcessFrame event) will call this method to draw the object. You should not need to call this yourself. Implements aeObject.
http://aeengine.sourceforge.net/documentation/online/pubapi/classaeLine.html
CC-MAIN-2017-13
refinedweb
107
77.03
In this tutorial we are describing Runnable Thread with an example.

Runnable Thread :

Implementing the Runnable interface is an easy way to create a thread. You need to implement a single method called run(). Its syntax is as follows -

public void run( )

Now you can put your code inside the run() method. run() can call other methods too. After creating a class that implements the Runnable interface, you can create an object of type Thread from that class. There are many constructors defined by Thread, like - Thread(), Thread(String threadName), Thread(Runnable threadOb, String threadName) etc. After creating a new Thread, call the start() method, which is declared in Thread.

Example : In this example we are creating a thread by using the Runnable interface.

class RunnableThread implements Runnable {
    Thread thread;

    public RunnableThread() {
    }

    public RunnableThread(String threadName) {
        /* Creating new thread */
        thread = new Thread(this, threadName);
        System.out.println(thread.getName());
        /* Starting thread */
        thread.start();
    }

    public void run() {
        /* Display info about current thread */
        System.out.println(Thread.currentThread());
    }
}

public class RunnableThreadExample {
    public static void main(String[] args) {
        Thread th1 = new Thread(new RunnableThread(), "thread1");
        Thread th2 = new Thread(new RunnableThread(), "thread2");
        RunnableThread th3 = new RunnableThread("thread3");
        /* Threads start */
        th1.start();
        th2.start();
        try {
            /* delay for a second */
            Thread.sleep(1000);
        } catch (InterruptedException e) {
        }
        /* Display info about current thread */
        System.out.println(Thread.currentThread());
    }
}

Output :

thread3
Thread[thread3,5,main]
Thread[thread2,5,main]
Thread[thread1,5,main]
Thread[main,5,main]
https://www.roseindia.net/tutorial/java/core/RunnableThread.html
CC-MAIN-2022-33
refinedweb
256
51.44
Murugesh wrote:
>Hi all,
>I'm a newbie to python.I need to login to a webpage after supplying
>usename and password.
>
>import urllib
>sock = urllib.urlopen("")
>htmlSource = sock.read()
>sock.close()
>print htmlSource
>
>In the above code how can i supply username and password to that URL.
>Thanks for you time.

xop-pc.main.com is not an existing site. Can you tell me what kind of authentication method it is using? If that is the "basic authentication" (defined in the standard HTTP protocol) then you need to read this:

Basically, you need to subclass URLopener or FancyURLopener, and override its "prompt_user_passwd" method. That method will then be called whenever the server needs authentication. Here is a template for you (untested):

from urllib import FancyURLopener

class MyOpener(FancyURLopener):
    def prompt_user_passwd(self, host, realm):
        return ('myusername', 'mypassword')

opener = MyOpener({})
f = opener.open("")
try:
    html_source = f.read()
finally:
    f.close()

Best,

Les
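For what it's worth, basic authentication just sends a base64-encoded Authorization header, so you can see exactly what those credentials turn into on the wire (using the placeholder username/password from the template):

```python
import base64

# Build the Authorization header that HTTP basic auth sends over the wire.
# "myusername"/"mypassword" are the placeholder credentials from the template above.
creds = base64.b64encode(b"myusername:mypassword").decode("ascii")
header = "Basic " + creds
print(header)  # Basic bXl1c2VybmFtZTpteXBhc3N3b3Jk
```

This is also why basic auth should only be used over a secure connection: the header is encoded, not encrypted.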
https://mail.python.org/pipermail/python-list/2005-October/342729.html
CC-MAIN-2014-10
refinedweb
149
53.27
Encoder Tester - Sin/Cos and Incremental Quadrature I broke my original encoder tester, and so it's time to make another one. Ryan from Everything Bends shows how to make a simple interpolator that turns Sin/Cos signals into incremental ones here: So I decided to add that feature to be able to test Sin/Cos encoders as well. This time, I will be making a custom PCB. The original encoder tester used the Encoder library by PJRC: and it works very well, so I will stick with that. I am going to use a Teensy 2.0, since that runs on 5V and is made by PJRC as well. It has 4 interrupt pins (5,6,7,8), just enough for 2 encoder channels. And according to this: it has a SPI channel on pins 0,1,2,3. So I can just use a SPI OLED screen, as before. I chose a 1.3" OLED screen. They are plentiful and cheap on eBay: There is some guidance as to how to connect and program here: and here:. And I am choosing the same Wurth 615008140621 RJ45 connector I have used for the Differential Encoder Shield for KFLOP/SnapAMP and the HEDL Encoder to RJ45 Adapter Board. Power will come from the Teensy USB port, and the whole board will operate on 5V. I sent the board off to be made by OSHPark. It is a shared project, in case you want to make your own: The boards came back, and the only immediate problem is that the mounting holes on the OLED do not match the holes that were meant for it exactly. But one mated up well enough to put a screw through and the pins on J5 provide additional support. Unless there are other problems, I don't see a need to redo the boards. To test the OLED, only the Teensy 2.0 and the OLED need to be populated, so that is a good starting point. I used the U8x8lib to drive it. The example comes with a large sample of constructors:
//U8X8_SSD1306_128X64_NONAME_4W_HW_SPI u8x8(/* cs=*/ 12, /* dc=*/ 4, /* reset=*/ 6); // Arduboy 10 (Production, Kickstarter Edition)
//U8X8_SSD1306_128X64_NONAME_4W_HW_SPI u8x8(/* cs=*/ 10, /* dc=*/ 9, /* reset=*/ 8);
//U8X8_SSD1306_128X64_NONAME_3W_SW_SPI u8x8(/* clock=*/ 1, /* data=*/ 2, /* cs=*/ 0, /* reset=*/ 4);
//U8X8_SSD1306_128X64_NONAME_HW_I2C u8x8(/* reset=*/ U8X8_PIN_NONE);
//U8X8_SSD1306_128X64_NONAME_SW_I2C u8x8(/* clock=*/ 2, /* data=*/ 0, /* reset=*/ U8X8_PIN_NONE); // Digispark ATTiny85
//U8X8_SSD1306_128X64_NONAME_SW_I2C u8x8(/* clock=*/ SCL, /* data=*/ SDA, /* reset=*/ U8X8_PIN_NONE); // OLEDs without Reset of ...

and it took a little playing around to find which one worked for my OLED. Apparently many identically looking variations are available.... This works as a test, using hardware SPI to drive the display:

#include <Arduino.h>
#include <U8x8lib.h>

#ifdef U8X8_HAVE_HW_SPI
#include <SPI.h>
#endif

U8X8_SH1106_128X64_WINSTAR_4W_HW_SPI u8x8(/* cs=*/ 0, /* dc=*/ 20, /* reset=*/ 4);

void setup(void) {
  u8x8.begin();
  u8x8.setPowerSave(0);
}

void loop(void) {
  u8x8.setFont(u8x8_font_chroma48medium8_r);
  u8x8.drawString(2,4,"Encoder Test!");
  delay(2000);
}

Next, we simply need to instantiate two instances of Paul Stoffregen's encoder library. Then we just repeatedly read and display the values.

// lots of good info at;
// uses Encoder Library by Paul Stoffregen <paul@pjrc.com>

#include <Arduino.h>
#include <U8x8lib.h>
#include <Encoder.h>

#ifdef U8X8_HAVE_HW_SPI
#include <SPI.h>
#endif

U8X8_SH1106_128X64_WINSTAR_4W_HW_SPI u8x8;

// Change these two numbers to the pins connected to your encoder.
// Best Performance: both pins have interrupt capability
// Good Performance: only the first pin has interrupt capability
// Low Performance: neither pin has interrupt capability
Encoder myEncoder1(5, 6);
Encoder myEncoder2(7, 8);

void setup(void) {
  u8x8.begin();
  u8x8.setPowerSave(0);
  u8x8.setFont(u8x8_font_chroma48medium8_r);
  u8x8.clear();
  u8x8.drawString(2,0,"Sin/Cos");
  u8x8.drawString(2,1,"Encoder");
  u8x8.drawString(2,2,"Position:");
  u8x8.drawString(2,4,"Incremental");
  u8x8.drawString(2,5,"Encoder");
  u8x8.drawString(2,6,"Position:");
}

void loop(void) {
  char str[80];
  long newPosition = myEncoder1.read();
  sprintf(str, "%ld", newPosition);
  u8x8.clearLine(3);
  u8x8.drawString(2,3,str);
  Serial.print("Sin/Cos: ");
  Serial.print(newPosition);
  newPosition = myEncoder2.read();
  sprintf(str, "%ld", newPosition);
  u8x8.clearLine(7);
  u8x8.drawString(2,7,str);
  Serial.print("Incremental: ");
  Serial.println(newPosition);
  delay(20);
}

Paul states an approximate maximum signal rate of 100 kHz. For a 0.0001 inch per step linear encoder, this would be a 10 in/sec maximum move rate. I find that I lose steps at speeds much lower than that, probably due to the signal quality of the connection I made. But when moving the encoder slowly, the tester works fine. Here's a video of the test:
https://www.sites.google.com/site/janbeck/encoder-tester---sin-cos-and-incremental-quadrature
CC-MAIN-2019-51
refinedweb
738
66.74
Something I get asked by most customers starting out with ASP.NET MVC is how they should package, group, factor, and reuse their UI components. We're all used to thinking about User Controls, Custom Controls, and other Web Forms approaches. But rest assured, MVC provides a wealth of options. The thoughts below describe how I see each of these options being put to best use. That doesn't mean the guidelines below are completely concrete (or indeed complete) – but they should get you thinking along the right lines initially. If you've thoughts to add please do shout up! Rendering extensions are Extension Methods applied to the HtmlHelper class so that they can be used in views to output clean HTML. They are best used for packaging reusable pieces of HTML found across a site. Often they're reusable across multiple web sites. To summarise they; · Output simple HTML · Focus on small portions of HTML · Do not usually contain or use business logic · Do not contain layout for entities · Are often reusable across multiple web sites A good example might be a rendering extension to output a textbox and matching label, or perhaps a button with some specific attributes set. I do not see Action Filters as a form of UI composition on their own (although the p&p guys did come up with an interesting way of using them in combination with Partial Views). However, they can be really useful for applying behaviour that has a UI focus. A great example of this is the ShowNothingIfNotAuthorizedAttribute class in the Web Client Guidance RI. It looks like this; public class ShowNothingIfNotAuthorizedAttribute : AuthorizeAttribute { protected override void HandleUnauthorizedRequest( AuthorizationContext filterContext) { filterContext.Result = new EmptyResult(); } } I think this is pretty self explanatory – if the user isn't authorized, they see no content. And this works if the current view is included in a parent view using RenderAction. 
I would not suggest that you use this approach to build HTML though; it should be reusable behaviour applied to multiple actions, that is agnostic of the content being generated. In summary they; · Do not output HTML without delegating to a view · May reference some business logic, but be careful applying this to many views (it can lead to thousands of unintended database hits) · Do not contain layout Partial Views and Html.RenderPartial should be used for creating HTML content that meets the following criteria; · The HTML is more complex or larger than makes sense for a rendering extension (the visual editing of a partial view is a great help for maintenance purposes here) · It is reused across the web site · It does not contain reused business logic – just layout · It does not contain layout for entities · It may contain other composition techniques – like templates or Html.RenderAction. A good example is perhaps defining how a list of some form should be displayed. The list and individual items themselves would probably use templates (see below), but the surrounding layout and chrome would fit well in a partial view. In truth, the dividing line between a partial view and RenderAction is more about the design of your controller action. If it passes all the data the partial view needs, and there is no need to split out behaviour for fetching that data, a partial view makes sense. Templates are new in MVC 2 and allow you to use syntax such as the following; <%= Html.DisplayFor(m => m.Artist)%> This outputs HTML according to the type of the argument. This type could be simple (e.g. string, integer, etc), or it could be a business-specific entity (e.g. Song, User, ShoppingCart, and so on). They can be customised by dropping partial views into the DisplayTemplates or EditorTemplates subfolders for a controller. 
This means these basically follow the same rules as for partial views, except that they do contain layout for entities (note: I mean entities specific to your web site here; call them what you like... domain entities, view models, business entities, etc, depending upon your architecture). · The HTML is more complex or larger than makes sense for a rendering extension · It does contain layout for a specific breed of entity or type · It may make some limited use of other composition techniques Using RenderAction is different to RenderPartial in that it executes a controller action and then inserts the HTML output into the parent view. This means there is one very strong distinction between the two approaches – RenderAction should be used when behaviour as well as layout must be reused. In summary, use it when; · HTML is more complex or larger than makes sense for a rendering extension · It does contain both reused business logic / behaviour and layout · It may make use of other composition techniques A good example of this might be rendering a shopping basket. You do not want all the actions responsible for views that include a shopping basket to have to retrieve the basket content, pass it in the view model, and use RenderPartial to pass it to the basket partial view. Instead, it makes much more sense to call out from the host view to a ShoppingBasket controller's Display action. This also makes it much easier if the content should be rendered using a different view depending upon business logic – perhaps a ShoppingCart.aspx view or a NotLoggingIn.aspx view. RenderAction is also handy when composing views across modules. This is called out in the docs for the Web Client Guidance (check the "UI Composition" topic). 
An example of this in action is the incorporation of "My Friends Top Songs" in the home page for the Reference Implementation using this syntax; <% Html.RenderAction("FriendsTopSongs", "TopSongs", new {area = "TopSongs"});%> The logic for what to render in this list and the data access to get the list is found at target route. Finally, patterns & practices came up with an interesting way of making part of a view extensible – and that is UI Extensions. The idea here is that a view defines a location that can be extended by other components, probably in different modules. It does this by iterating through a collection of extensions and rendering them with code something like the following; <div class="options"> <% foreach (SongLinkMetadata songAction in song.SongActions) { Html.RenderPartial("SongAction", songAction); } %> </div> This is taken from "~/Views/Search/Results.aspx" in the Web Client Guidance Reference Implementation. The MyLibrary module then registers a type called AddToLibrarySongActionProvider that populates this region. In this case, · The HTML layout is fixed by the extension point, not the extensions · The logic to provide these extensions exists in decoupled code, or another module Well this post is way longer than I wanted it to be, but I hope it explains the basics of your options. The aim is really to introduce the concepts, and then to encourage you to investigate the Web Client Guidance to learn more and see these approaches in action. Do remember what I said – these are not hard and fast rules, but they should get you started. I have also avoided the debate about what does and does not adhere to the MVC pattern – for example RenderAction is seen by some as breaking the rules. Make your own mind up!
http://blogs.msdn.com/b/simonince/archive/2010/02/02/packaging-ui-components-in-mvc.aspx
CC-MAIN-2013-20
refinedweb
1,203
57.81
{- |
   Module      : Control.Arrow.SP
   Copyright   : (c) 2007 by Shawn Garbett and Peter Simons
   License     : BSD3

   Maintainer  : simons@cryp.to
   Stability   : provisional
   Portability : portable

   A continuation-based monadic stream processor implemented as an 'Arrow'.

   References:

   * John Hughes, \"/Generalising Monads to Arrows/\": <>

   * Magnus Carlsson, Thomas Hallgren, \"Fudgets--Purely Functional Processes
     with applications to Graphical User Interfaces\": <>
-}

module Control.Arrow.SP
  ( SP(..), runSP, mapSP
  , module Control.Arrow
  )
  where

import Prelude hiding ( id, (.) )
import Control.Category
import Control.Arrow
import Control.Monad ( liftM )

-- |A generic stream processor.
data SP m i o
  = Put o (SP m i o)
  | Get (i -> SP m i o)
  | Block (m (SP m i o))

instance Monad m => Category (SP m) where
  id = Get (\x -> Put x id)
  (Get sp2)   . (Put i sp1) = sp1 >>> sp2 i
  (Put o sp2) . sp1         = Put o (sp1 >>> sp2)
  (Get sp2)   . (Get sp1)   = Get (\i -> sp1 i >>> Get sp2)
  (Block spm) . sp          = Block (liftM (sp >>>) spm)
  sp          . (Block spm) = Block (liftM (>>> sp) spm)

instance Monad m => Arrow (SP m) where
  arr f = Get (\x -> Put (f x) (arr f))
  first = bypass empty
    where
      bypass :: Monad m => Queue c -> SP m a b -> SP m (a,c) (b,c)
      bypass q (Get f)     = Get (\(a,c) -> bypass (push c q) (f a))
      bypass q (Block spm) = Block (liftM (bypass q) spm)
      bypass q (Put c sp)  =
        case pop q of
          Just (c', q') -> Put (c,c') (bypass q' sp)
          Nothing       -> Get (\(_,d) -> Put (c,d) (bypass q sp))

-- ArrowZero just waits in a state getting input forever.
instance Monad m => ArrowZero (SP m) where
  zeroArrow = Get (\_ -> zeroArrow)

-- ArrowPlus allows running in parallel, output merged into
-- a single stream.
instance Monad m => ArrowPlus (SP m) where
  Put o sp1 <+> sp2       = Put o (sp1 <+> sp2)
  sp1       <+> Put o sp2 = Put o (sp1 <+> sp2)
  Get sp1   <+> Get sp2   = Get (\i -> sp1 i <+> sp2 i)
  sp1       <+> Block spm = Block (liftM (sp1 <+>) spm)
  Block spm <+> sp2       = Block (liftM (<+> sp2) spm)

-- Left messages pass through like a conduit. Right messages
-- are processed by the SP.
instance Monad m => ArrowChoice (SP m) where
  left (Put c sp)  = Put (Left c) (left sp)
  left (Block spm) = Block (liftM left spm)
  left (Get f)     = Get (either (left . f) (\b -> Put (Right b) (left (Get f))))

-- A feedback loop where a SP can examine it's own output.
instance Monad m => ArrowLoop (SP m) where
  loop sp = loop' empty sp
    where
      loop' :: Monad m => Queue c -> SP m (a,c) (b,c) -> SP m a b
      loop' q (Block spm)     = Block (liftM (loop' q) spm)
      loop' q (Put (a,b) sp') = Put a (loop' (push b q) sp')
      loop' q (Get sp')       =
        case pop q of
          Just (i, q') -> Get (\x -> loop' q' (sp' (x,i)))
          Nothing      -> Block (fail "invalid attempt to consume empty SP feedback loop")

-- |Evaluate a stream processor.
runSP :: Monad m => SP m () () -> m ()
runSP (Block spm) = spm >>= runSP
runSP (Put () f)  = runSP f
runSP (Get _)     = return ()

-- |Use a monadic transformer to map a stream.
mapSP :: (Monad m) => (i -> m o) -> SP m i o
mapSP f = Get (\i -> Block (f i >>= \o -> return (Put o (mapSP f))))

----- Helper Functions -----------------------------------------------

data Queue a = Queue [a] [a]

empty :: Queue a
empty = Queue [] []

push :: a -> Queue a -> Queue a
push x (Queue o i) = Queue o (x:i)

pop :: Queue a -> Maybe (a, Queue a)
pop (Queue (o:os) i) = Just (o, Queue os i)
pop (Queue [] [])    = Nothing
pop (Queue [] i)     = pop (Queue (reverse i) [])
http://hackage.haskell.org/package/streamproc-1.6.2/docs/src/Control-Arrow-SP.html
The following are the types of logs produced by the operation of Domino:

- Domino execution logs
- Domino application logs
- Blob storage logs

All Domino services output their logs using the standard Kubernetes logging architecture. Relevant logs are printed to stdout or stderr as indicated, and are captured by Kubernetes. For example, to look at your front end logs you can do the following:

1. List all namespaces to find the name of your platform namespace:

   kubectl get namespace

2. List all the pods in your platform namespace to find the name of a front end pod. You will likely have more than one front end pod:

   kubectl get pods -n <namespace for your platform nodes>

3. Print the logs for one of your front end pods:

   kubectl logs <pod name of your front end pod> -n <namespace for your platform nodes> -c nucleus-front

Pods in this namespace correspond to ephemeral pods hosting user work. Each pod contains a user-defined environment container, whose logs are described previously as Execution logs. There are additional supporting containers in those pods, and their logs might contain additional information on any errors or behavior seen with specific Domino executions.

Domino advises that you aggregate and keep at least 30 days of logs to facilitate debugging. These logs can be harvested with a variety of Kubernetes log aggregation utilities.

You can enable audit logging for several events. Audit logging for models has been improved in the 4.6.1 release. These are the major model events that are logged when triggered through the Domino UI:

- New model create
- New model version publish
- Model version stop / start
- Model archived
- Model collaborator add / change / remove
- Model settings change

After you enable audit logging, messages are written to Application logs. Other log targets, such as a Syslog server or Mixpanel, require additional configuration. Contact support@dominodatalab.com for assistance enabling, accessing, and processing audit logs.
Details for audit logging of data interactions can be found in Tracking and auditing data interactions in Domino.
https://admin.dominodatalab.com/en/5.0/admin_guide/a9e507/domino-application-logging/
Specific Tables/Projects

Contents

- Specific Tables/Projects
- Component Groups of J0(N)(R) and J1(N)(R)
- Cuspidal Subgroup
- Discriminants of Hecke Algebra
- Compute a table of semisimplifications of reducible representations of elliptic curves
- Dimensions of modular forms spaces
- Compute the exact torsion subgroup of J0(N) for as many N as possible
- Characteristic polynomial of T2 on level 1 modular forms
- Characteristic polys of many Tp on level 1
- Arithmetic data about every weight 2 newform on Gamma0(N) for all N<5135 (and many more up to 7248)
- Systems of Hecke Eigenvalues: q-expansions of Newforms
- Eigenforms on the Supersingular Basis
- Elliptic curve tables
- Congruence modulus and modular degree for elliptic curves
- j-invariants of CM elliptic curves
- Pari table of Optimal elliptic curves
- Optimal quotients whose torsion isn't generated by (0)-(oo)
- Data About Abelian Varieties Af Attached to Modular Forms
- The odd part of the intersection matrix of J0(N)
- Weierstrass point data
- Rationals part of the special values of the L-functions

The misc tables are listed here: The Harvard URL is the best, since it has none of the cgi-bin script dynamic data.

Component Groups of J0(N)(R) and J1(N)(R)

URL: and. The second page has much more extensive data and a conjecture.

Page explaining the algorithm

- New Code: This function computes the J_0(N) real component groups.

def f(N):
    M = ModularSymbols(N).cuspidal_subspace()
    d = M.dimension()//2
    S = matrix(GF(2), 2*d, 2*d, M.star_involution().matrix().list()) - 1
    return 2^(S.nullity()-d)

For J_1(N) it is:

def f(N):
    M = ModularSymbols(Gamma1(N)).cuspidal_subspace()
    d = M.dimension()//2
    S = matrix(GF(2), 2*d, 2*d, M.star_involution().matrix().list()) - 1
    return 2^(S.nullity()-d)

Future extension: one could replace Gamma1(N) by GammaH(N,...). One could also do the new subspace. And note Frank's conjecture:

Conjecture: Let m = (#odd prime factors of N) + (1 if N = 0 mod 8, else 0).
Then the component group is isomorphic to (Z/2Z)^f, where f = 2^m - 1.

The above conjecture is wrong, but the following matches our data (up to level N<=2723):

Conjecture (Boothby-Stein): Let m = (#odd prime factors of N) - (1 if N != 0 mod 8, else 0). Then the component group is isomorphic to (Z/2Z)^f, where f = 2^m - 1, unless N = 1, 2, 4, in which case the component group is trivial.

Soroosh -- the prime level case is known. See Calegari and Emerton () which *just* cites Agashe and Merel ( -- page 12).

The following worksheet has code for computing the action of Atkin-Lehner on the component group:

IDEA! Page 174 of Ling-Oesterle has a Corollary 1 with a 'very similar formula'! It's about the number of rational 2-torsion points on J0(N) in the Shimura subgroup, and it analyzes this using Atkin-Lehner operators. Since the real component group has to do with real torsion, this formula may actually give a lower bound too. Here's the paper:

Cuspidal Subgroup

Computing the structure of the cuspidal subgroup of J0(N) and J1(N) (say).

URL: (the displayed formula is backwards at the top)

- New Sage code:

def cuspidal_subgroup_J0(N):
    J = J0(N)
    I = J.cuspidal_subgroup().invariants()
    # maybe pickle J
    return I

def cuspidal_subgroup_J1(N):
    J = J1(N)
    I = J.cuspidal_subgroup().invariants()
    # maybe pickle J
    return I

BUT WAIT -- isn't there an a priori formula for this structure/order? Yes -- Ligozat, but not really -- that gives only the rational cuspidal subgroup.

The algorithm is explained in the cuspidal subgroup section of. A preliminary implementation of this algorithm is now ticket #6925.

Anyway, I'm computing a few of these here, as a test of the modular symbols code, etc., since this is easy:

Soroosh and I also thought about cuspidal torsion and had some ideas which we recorded here:

They reference this not-finished-paper:

Discriminants of Hecke Algebra

Computation of discriminants of various Hecke algebras.
Amazingly, it seems that there is no "discriminants of Hecke algebras" implementation in Sage! Here is a straightforward algorithm:

1. The input is the level N.
2. Choose a random vector v in the space M of cuspidal modular symbols of level N.
3. Compute the Sturm bound B.
4. Compute the products T_1(v), ..., T_B(v), and find a basis b_i for the ZZ-module they span.
5. Find Hecke operators S_1, ..., S_n such that S_i(v) = b_i. (This is linear algebra -- inverting a matrix and a matrix multiply.)
6. Compute the determinant det( Trace(S_i * S_j) ). That is the discriminant.

This also gives a basis for the Hecke algebra, which is very useful for lots of things.

Note: See for very slow code for computing a basis for the Hecke algebra. Faster code is up at

Here is a more complicated algorithm, but it might suck because of hidden denseness!

1. The input is the level N.
2. If N is divisible by a prime p^3 and X_0(N/p^3) has positive genus, then the discriminant is 0, as one can see by taking images of forms of level N/p^3. I think the above is an if and only if condition for when the discriminant is 0. See, I think, Coleman-Voloch.
3. The actual algorithm now. Find a random Hecke operator t such that the charpoly of t has nonzero discriminant.
4. Choose a random vector v in the space of cuspidal modular symbols.
5. Let B be the Sturm bound. Compute the images T_n(v) for n up to the Sturm bound.

Compute a table of semisimplifications of reducible representations of elliptic curves

Ralph Greenberg asked for a specific example of an elliptic curve with a certain representation, and Soroosh and William found it. In order to do this, we developed a (mostly) efficient algorithm for computing the two characters eps and psi that define the semisimplification of an elliptic curve's Galois representation. This project is to fully implement the algorithm, then run it on curves in the Cremona database and all primes for which the Galois representation is reducible.
There is relevant code here: and

In fact, one can use the algorithm mentioned above to compute the semisimplification for any modular abelian variety! It would be good to do this for, say, every J0 modabvar of level up to say 3200 (since we have an ap table up that far):

Dimensions of modular forms spaces

Currently has a couple of tables with a kludgy and completely broken interface. These tables are nicer:.

I think a static table that can do Gamma0, Gamma1, and character for all levels up to 100000 and weight 2 would be good to have. But its value would only be in having it easily usable, since there is no value in asking for an individual space. Anyway, compute the data. It would in fact be a good idea.

Also, for each character, we should compute the dimensions of the modular and eisenstein spaces, and the new cuspidal and p-new cuspidal subspaces for each p dividing the level. The following session illustrates that this would in fact be quite valuable to have pre-computed in a table:

sage: G = DirichletGroup(21000)
sage: time C = G.galois_orbits()
Time: CPU 2.21 s, Wall: 2.52 s
sage: time z = [(e[0], dimension_cusp_forms(e[0], 2)) for e in C]
Time: CPU 8.86 s, Wall: 9.79 s

I (=William) started a small calculation going of dimensions of spaces with character for weights <= 16, and for each space computing the dimension of the cuspidal, eisenstein, new, and p-new parts. It's here:

Compute the exact torsion subgroup of J0(N) for as many N as possible

See for some work in this direction by Stein and Yazdani.

Characteristic polynomial of T2 on level 1 modular forms

See this table: I have a straightforward algorithm to recompute these, which I'm running here:

Characteristic polys of many Tp on level 1

Next, see this table, which gives "Characteristic polynomials of T2, T3, ..., T997 for k<=128". That exact range can likely be easily done using modular symbols.
There is another algorithm, which uses the matrix of T_2 (which I'm computing and caching above!) and which can compute the charpolys of many other T_p. It's described here:, along with a Magma implementation (which needs to be ported to Sage). Thus it might be nice to implement this and run it, and get say all T_{p,k} for p, k \leq 1000.

Arithmetic data about every weight 2 newform on Gamma0(N) for all N<5135 (and many more up to 7248)

This table is challenging to replicate/extend, but easy to move over. It would likely be better to continue our aplist computations, etc. up to 10000 (they only went to 3200 so far), and also take our saved decompositions and compute other data. Anyway, this table is more of a challenge.

Systems of Hecke Eigenvalues: q-expansions of Newforms

This is the point of these tables from last summer, which are much more comprehensive. Also, Tom Boothby has made some tables of traces of these.

Eigenforms on the Supersingular Basis

I made, I think, for a paper with Merel. A close analogue of the above table could likely be extended/recomputed using the SupersingularModule code. However, the first problem is:

sage: X = SupersingularModule(97)
sage: X.decomposition()
Traceback (most recent call last):
...
NotImplementedError

So some love and care would be needed to compute these decompositions. But this should be done. And then we would have pretty similar data to that table. It would be straightforward (I think) to get the same data once the decompositions are computed.
Elliptic curve tables

I think the following can be safely ignored since they are already easily available from Cremona nowadays:

# [NEW] PARI Tables of Cremona Elliptic Curves
# [NEW] Isogeny class matrices for elliptic curves of conductor <= 40000
# [NEW] Lratios L(E,1)/Omega_E for elliptic curves of conductor <= 40000

This can be ignored:

# Armand Brumer and Oisin McGuinness's table of elliptic curves of prime conductor less than 10^8 (or try the local mirror)

This should be made officially part of our database in some way, possibly including better wrapping in Sage:

# The Stein-Watkins database of elliptic curves of conductor up to 10^8 and of prime conductor up to 10^10.

The following table was "fun", but I don't think anybody ever used it, so maybe forget it:

Congruence modulus and modular degree for elliptic curves

It would be very valuable and possibly challenging to recompute and extend to level 1000 the data here:. There's an interesting conjecture there as well, which could possibly be refined.
Here's how to do the computation:

sage: E = EllipticCurve('54a')
sage: E.modular_degree()  # trivial
6
sage: E.congruence_number()  # hard in general
6

I actually started a computation of the same data that goes a bit further than the old table here:

However, quite a few levels up to 1000 remain:

[558, 592, 594, 624, 650, 657, 672, 702, 704, 720, 738, 744, 756, 758, 759, 760, 762, 763, 765, 766, 768, 770, 774, 775, 776, 777, 780, 781, 782, 784, 786, 790, 791, 792, 793, 794, 795, 797, 798, 799, 800, 801, 802, 804, 805, 806, 807, 808, 810, 811, 812, 813, 814, 815, 816, 817, 819, 822, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 836, 840, 842, 843, 845, 846, 847, 848, 849, 850, 851, 854, 855, 856, 858, 861, 862, 864, 866, 867, 869, 870, 871, 872, 873, 874, 876, 880, 882, 885, 886, 888, 890, 891, 892, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 909, 910, 912, 913, 914, 915, 916, 918, 920, 921, 922, 923, 924, 925, 926, 927, 928, 930, 931, 933, 934, 935, 936, 938, 939, 940, 942, 943, 944, 946, 948, 950, 954, 955, 956, 957, 960, 962, 964, 965, 966, 968, 969, 970, 972, 973, 974, 975, 976, 978, 979, 980, 981, 982, 984, 985, 986, 987, 988, 989, 990, 994, 995, 996, 997, 999]

It would also be very valuable if we computed the congruence modulus and modular degree modulus (I don't know what the good term for this is) for elliptic curves. This data might provide some insight on the discrepancy between the two invariants.

j-invariants of CM elliptic curves

The table as is here is silly:

However, a generalization of it to give CM j-invariants over various number fields would be nice. It would probably be easy to collate such data if one could find it. I think David Kohel might have such data on his site (?).

Pari table of Optimal elliptic curves

Pointless -- get rid of -- was only useful when I used PARI a lot.

Optimal quotients whose torsion isn't generated by (0)-(oo)

This should be redone, but in more generality and systematically.
To do this calculation, basically all that is needed that is nontrivial is a way to compute the order of the image of (0)-(oo) efficiently. That is best done numerically. The idea of the algorithm:

1. Compute L(E,1) and \Omega_E as floating point real numbers.
2. If the period lattice is rectangular, let \omega = \Omega_E / 2; otherwise, let \omega = \Omega_E.
3. The order of the image of (0)-(\infty) is equal to the denominator of the rational number L(E,1)/\omega. (The reason is that the image of (0)-(\infty) can be interpreted as a period integral, and so can L(E,1), and they're the same, basically.)

Here's code:

def order_of_zero_inf(E):
    R = E.lseries().L_ratio()
    if E.period_lattice().is_rectangular():
        R /= 2
    return R.denominator()

Note that this algorithm is far better than the one I ran to make the above table (), which I think was a modular symbols algorithm. I'm running the above for Cremona's tables here:

Data About Abelian Varieties Af Attached to Modular Forms

This has a bunch of data about abelian varieties A_f. Some is easy and some is hard to compute. This table took a long time to compute, I think. It will be important to recompute this data and much more. The following data should go in the same table:

This is another similar table:

And of course this entire table is similar: (broken usually!) data available...; it's in a ZODB.

This is also something for the same table = higher dimensional gen. of Cremona:

The odd part of the intersection matrix of J0(N)

The above data would be very good to have to high levels. It gives the combinatorial "graph structure" of J_0(N). (Sourav San Gupta's final project in my course was related.)
Here is code to compute the (odd part of) the intersection matrix:

def f(N, k=2):
    S = ModularSymbols(N, k, sign=1).cuspidal_subspace().new_subspace()
    D = S.decomposition()
    n = len(D)
    A = matrix(ZZ, n)
    for i in range(n):
        for j in range(i, n):
            A[i,j] = odd_part(ZZ(D[i].intersection_number(D[j]).numerator()))
            A[j,i] = A[i,j]
    return A

(To get the exact value, not just the odd part, get rid of sign=1 and odd_part above, at the least.)

It would be nice to run the above for N \leq 1000 and k=2. It would also be nice to gather some data for higher weight.

Weierstrass point data

and

The data could be just copied over, but it would be good to compute it. I think the algorithm just involves computing a basis of q-expansions and looking at it.

Rationals part of the special values of the L-functions

Level 1:

Higher level:

This would likely be very good to recompute and extend, but it requires implementing an L-ratio method on modular symbols spaces A (for speed), which for some reason I still haven't done. This involves:

1. Compute the modular symbols e_i = X^i Y^{k-2-i} \{0,\infty\} as elements of the ambient space, for each i = 0, 1, \ldots, k-2.
2. Define a function that computes the sparse action of T_n on a sparse vector, and apply it to e, which will be very sparse. This function will be built on the function _hecke_image_of_ith_basis_element.
3. Use it to compute T_n(e) for n \leq B, where B is the Sturm bound.
4. Compute the image of the T_n(e) under the rational_period_mapping() associated to our modular symbols factor A (this is the first time A actually appears).
5. Compute the ZZ-module V spanned by the T_n(e).
6. Compute the integral structure on A, take the subspace that is the +1 eigenspace for the * involution, and take the image W of that subspace under the rational period mapping.
7. The L-ratio is the lattice index [W:V].

As a shortcut, first compute the image of e under the rational period mapping -- if it is 0, we know all T_n(e) map to 0, so the L-ratio is 0.
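The last step above is a lattice index [W:V]. For full-rank lattices in Q^n this is the ratio of covolumes: a positive integer when V is a sublattice of W, and a rational number in the generalized sense used here. A toy illustration with hypothetical 2x2 basis matrices (not actual period lattices):

```python
from fractions import Fraction

def det2(M):
    # Determinant of a 2x2 matrix given as [[a, b], [c, d]].
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def lattice_index(W, V):
    # [W : V] for full-rank lattices in Q^2 given by basis matrices
    # (rows are basis vectors): the ratio covol(V) / covol(W).
    return abs(Fraction(det2(V)) / Fraction(det2(W)))

# W = Z^2; V is the sublattice spanned by (2, 0) and (1, 3), of index 6.
W = [[1, 0], [0, 1]]
V = [[2, 0], [1, 3]]
print(lattice_index(W, V))  # 6
```

When the two lattices are swapped, the same formula returns the generalized (rational) index 1/6.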
https://wiki.sagemath.org/days17/projects/presagedays/discussion
06 October 2009 19:48 [Source: ICIS news]

TORONTO (ICIS news)--BASF Coatings has suspended short-time work at its production hub in Munster, Germany, where it employs a staff of 2,250, due to improved orders, it said on Tuesday.

However, while orders have improved, BASF's coatings business was still below last year's levels and the outlook remained uncertain, spokesman Michael Golek said.

The company was receiving orders, but customers had shifted to placing smaller order sizes, instead of large volumes, he said. In coatings, producing and handling smaller-size orders required about the same staff levels as working big volume orders, he added.

BASF Coatings's application for government-subsidised short-time work lasts through January, and the company could not rule out that some workers may again need to be put on the programme in coming months, Golek said.

The company started introducing short-time work in February 2009, initially putting almost 2,000 workers on short-time work. The number varied from month to month. In September, 900 workers were still affected, Golek said.

Many German chemicals producers resorted to short-time work this year. The programme enabled them to cut working hours while maintaining staff levels during the economic crisis.
http://www.icis.com/Articles/2009/10/06/9253196/basf-coatings-lifts-short-time-work-in-germany.html
Just trying all possibilities for the first two numbers and checking whether the rest fits.

import itertools

class Solution:
    def isAdditiveNumber(self, num):
        n = len(num)
        for i, j in itertools.combinations(range(1, n), 2):
            a, b = num[:i], num[i:j]
            # reject a second number with leading zeros
            if b != str(int(b)):
                continue
            while j < n:
                c = str(int(a) + int(b))
                if not num.startswith(c, j):
                    break
                j += len(c)
                a, b = b, c
            if j == n:
                return True
        return False

I'm not sure, but I think "0011" as input is valid, because it is not explicitly mentioned in the problem description. ref:

Not all strings are valid. Do you also think "four plus two" is valid input, representing 6? Your "0011" is probably supposed to represent 11, whose normal string representation is "11". Not "0011" or "eleven" or "one more than ten" or whatever one could imagine.
But if I go backwards, I know where the third-last number ends. Which is a lot more helpful, because adding/subtracting digit by digit is simpler from right to left.

I think in Python we can compare a+b with 2**31-1, but that sounds redundant since Python doesn't have an overflow problem... In a language like C, on overflow, a+b < 0. And the problem becomes how to do big-number addition in C with string input/output (there may be a separate LeetCode problem for this.)

Oh, I misunderstood you. I thought you were already talking about "big numbers". I didn't even realize the minor a+b overflow :-). Yeah, I agree in Python it doesn't make any sense to handle the a+b overflow, as Python does that anyway. It could make sense to use our own digit-by-digit addition or subtraction, though, if the numbers can get very big. It might be more efficient because one test can stop after just one digit, whereas turning substrings into ints and then working with those... already spends much time on turning the entire substrings into ints.

@kidd9_ I do check b for leading zeros, and a doesn't need to be checked because then the input would have leading zeros, and that would be invalid, see the previous comments. Or at least it used to be invalid. I see the problem text has been changed in the meantime (in my opinion it's worse now, and I don't intend to adapt my solution).

Clear code! But I have to say that the use of itertools.combinations is not efficient. For example, i cannot be greater than 2 when a string of 5 characters is given, but the combination (3,4) is still generated and tested.

I modified the fifth line to be

if a != str(int(a)) or b != str(int(b)):

to pass the test case '0235813', which should be false.
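On the big-number tangent above: in a language without arbitrary-precision integers, the a + b step can be done directly on the decimal strings, right to left with a carry. A rough sketch (in Python for brevity, though the thread's concern is C):

```python
def add_decimal_strings(a, b):
    # Add two non-negative decimal strings digit by digit,
    # right to left, carrying as in grade-school addition.
    i, j, carry = len(a) - 1, len(b) - 1, 0
    digits = []
    while i >= 0 or j >= 0 or carry:
        s = carry
        if i >= 0:
            s += ord(a[i]) - ord('0')
            i -= 1
        if j >= 0:
            s += ord(b[j]) - ord('0')
            j -= 1
        carry, d = divmod(s, 10)
        digits.append(chr(d + ord('0')))
    return ''.join(reversed(digits))

print(add_decimal_strings("199", "3"))  # 202
```

The same right-to-left loop also allows an early mismatch test against the rest of the input string, one digit at a time, without ever converting whole substrings to ints.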
https://discuss.leetcode.com/topic/29845/python-solution
Large Scale Geospatial Visualization with Deck.gl, Mapbox-gl and Vue.js

Musthaq Ahamad

Geospatial visualization and analytics can open up lots of opportunities for any company which collects location data. More than any external data, internal data can help much more in growing your products. Understanding the patterns, affluence, and much more can help you form an effective marketing, distribution, or utilization strategy. We all have no doubt that data is the driving force of growth in startups, but most of the time location data ends up as just another column in your CSV files.

Maps can add an invaluable component of location context to your data. They help you understand the where of your data, which otherwise ends up as just latitude and longitude columns. Seeing things on a map gives much more valuable information about how your entities move and interact with your on-ground assets. Let's see how we can build beautiful large-scale visualizations on the web using Vue and deck.gl.

What is Deck.gl?

Deck.gl is Uber's open-source visualization framework. It helps to build high-performance GPU-powered visualizations on the web. It is built to handle large-scale datasets without much performance trouble. Deck.gl is part of Uber's open-source visualization framework suite vis.gl.

Deck.gl follows a reactive paradigm, which makes it extremely easy to integrate with modern UI development libraries and frameworks. The vis.gl suite comes with a React.js wrapper, but we'll be using the @deck.gl/core sub-module, which doesn't have a React dependency, and will be integrating it with Vue.js components.

Installing Dependencies

We'll be using Mapbox-gl for rendering maps and Deck.gl for visualizations in an existing Vue.js application.
Use the following command inside a bootstrapped Vue.js application to install the dependencies.

$ npm install --save mapbox-gl @deck.gl/core @deck.gl/layers

Working with Deck.gl and Mapbox-gl

There are two main ways we can use Deck.gl with Mapbox-gl in a Vue.js application.

- By using Deck.gl layers as custom Mapbox layers
- By using Mapbox as a base map and overlaying the Deck.gl canvas

We'll discuss how we can build an app with both of these methods.

Using Mapbox's custom layers

The @deck.gl/mapbox sub-module helps us create Deck.gl layers that can be used as custom Mapbox layers. It's the easiest way to work with both libraries, but it comes with some known limitations. This particular method is still experimental and can cause unexpected bugs. It is not recommended if you have layers that need frequent updates/rerendering. By using this method we can tap into the full power of Mapbox's visualizations and interleave Deck.gl layers with Mapbox layers to create beautiful visualizations. We can simply create a Mapbox instance in a component and add the deck.gl layer as a custom layer.

1. Creating the map

We can use the mapbox-gl library to quickly add a map inside our component.

<template>
  <div class="container">
    <div id="map" ref="map"></div>
  </div>
</template>

<script>
import mapboxgl from "mapbox-gl";

export default {
  mounted() {
    // create the Mapbox instance once the component is mounted
    this.map = new mapboxgl.Map({
      container: this.$refs.map,
    });
  },
};
</script>

<style lang="scss">
.map-container {
  width: 100%;
  height: 100%;
  position: relative;
  overflow: hidden;
}
</style>

2. Attaching the deck.gl MapBox Layer

Using the @deck.gl/mapbox module we can create a custom Mapbox layer and include a deck.gl layer within.
Once you add them both, the component should look like this, and you are ready to go!

<template>
  <div class="container">
    <div id="map" ref="map"></div>
  </div>
</template>

<script>
import mapboxgl from "mapbox-gl";
import { GeoJsonLayer } from "@deck.gl/layers";
import { MapboxLayer } from "@deck.gl/mapbox";

export default {
  mounted() {
    // create the Mapbox instance, then attach the deck.gl layer
    this.map = new mapboxgl.Map({
      container: this.$refs.map,
    });
    this.loadLayer();
  },
  methods: {
    loadLayer() {
      // create a new MapboxLayer of type GeoJsonLayer
      const layer = new MapboxLayer({
        id: "geojson-layer",
        type: GeoJsonLayer,
        data: this.mapData,
        filled: true,
        lineWidthScale: 20,
        lineWidthMinPixels: 2,
        getFillColor: d => [245, 133, 5, 0],
        getLineColor: d => [245, 245, 245],
        getLineWidth: 1,
      });
      // add the layer to the map
      this.map.addLayer(layer);
    },
  },
};
</script>

<style lang="scss">
.map-container {
  width: 100%;
  height: 100%;
  position: relative;
  overflow: hidden;
}
</style>

Using MapBox as base map and Overlaying Deck.gl

In this method of using deck.gl, we use MapBox as just a base map to render the map, and deck.gl for visualization and interactivity. We give full interactivity control to deck.gl, so that every zoom, pan, and tilt happening on the deck.gl canvas will be reflected on the base map. By far, this is the most robust implementation of deck.gl we can use in production.

1. Setting up the template
Connecting MapBox-gl and Deck.gl Instances Next, we need to initialize both the map and deck.gl instance in the component and connect the interactivity. We can use the mounted hook to initialize both of them and assign them to a non-reactive variable for future use-cases. import { Deck } from "@deck.gl/core"; import mapboxgl from "mapbox-gl"; export default { data() { return { viewState: { latitude: 100.01, longitude: 100.01, zoom: 12, pitch: 0, bearing: 0 } } }, created() { this.map = null; this.deck = null; }, mounted() { // creating the map this.map = new mapboxgl.Map({ accessToken: this.accessToken, container: this.$refs.map, interactive: false, style: this.mapStyle || "mapbox://styles/haxzie/ck0aryyna2lwq1crp7fwpm5vz", center: [this.viewState.longitude, this.viewState.latitude], zoom: this.viewState.zoom, pitch: this.viewState.pitch, bearing: this.viewState.bearing, }); // creating the deck.gl instance this.deck = new Deck({ canvas: this.$refs.canvas, width: "100%", height: "100%", initialViewState: this.viewState, controller: true, // change the map's viewstate whenever the view state of deck.gl changes onViewStateChange: ({ viewState }) => { this.map.jumpTo({ center: [viewState.longitude, viewState.latitude], zoom: viewState.zoom, bearing: viewState.bearing, pitch: viewState.pitch, }); }, }); } } 3. Creating and Rendering Layers Since deck.gl has an internal interactivity built-in, we can simply set the layer props of the deck.gl instance and it'll render the layers efficiently. We need to trigger this rerender by using deck.setProps({}) whenever the layer's data is being changed. The below example illustrates well how to achieve this. import { PathLayer } from "@deck.gl/layers"; export default { data() { return { // all your data properties pathData: [ { path: [[100, 10], [200, 30]...], color: [255, 255, 255, 50] }, ... 
] // some geo data } }, computed: { // a reactive property which creates the layer objects whenever the data is changed getLayers() { const paths = new PathLayer({ id: "path-layer", data: this.pathData, widthScale: 20, widthMinPixels: 2, getPath: d => d.path, getColor: d => d.color, getWidth: d => 1 }); return [paths] } }, methods: { renderLayers(layers) { // setting the layers to deck.gl props this.deck.setProps({ layers }) } }, watch: { // whenever the layer data is changed and new layers are created, // rerender the layers getLayers(layers) { this.renderLayers(layers); } } } You can even abstract this method to be just used for rendering and make it a separate deck.gl wrapper component. Once you have this component ready, you can compute the layers outside the component and pass it as props to your deck.gl wrapper component. You can learn more about deck.gl and it's APIs at deck.gl Love reading about GeoSpatial visualizations? Stay tuned for more in-depth articles about how you can use deck.gl in your applications in production. published on haxzie.com Are Technical Interviews a good measure of software engineering ability? Sarah Mei @sarahmei Why do we keep doing whiteboard interviews, even though... Hi, quite interesting I haven't use Mapbox yet, I have used mainly Leaflet and ArcGiS JS What would be the strong point of using Deck.gl with Mapbox compared to the others? I am too working in Geospatial :) Deck.gl's MapView is designed to sync perfectly with the camera of Mapbox, at every zoom level and rotation angle. From the official documentation, we can get to know that: As I mentioned in the post, there should not be much issues while using any other Map providers as basemap, if you are following the second method. Since the second method (using the basemap method) gives the control of map's viewstate over to Deck.gl and the basemap is only used to render the map, this should work with any map libraries. 
You can learn more about this from here Interesting, especially if it can correctly integrate with other providers. The visuals are really stunning! I will remember this library when I have cases of specifics layers not supported Awesome. We have came across a lot of libraries when it comes to geospatial visualisation. Only DeckGL could cut the cake for large-scale datasets. Will be shortly writing about other alternative libraries which could help visualize large-scale datasets 🙌🏼 Great, I would love to read about those. It is always interesting to know about alternatives to basics, mostly when customers ask for really specifics behaviors not covered by the generics API, and the last time I had to work on a case like that, we faced a huuuge problem of performance and memory management.
A ScapeGoat Tree is a self-balancing Binary Search Tree like the AVL Tree, Red-Black Tree, Splay Tree, etc.

- Search time is O(Log n) in the worst case. Time taken by deletion and insertion is amortized O(Log n).
- The balancing idea is to make sure that nodes are α-size-balanced. α-size-balanced means the sizes of the left and right subtrees are at most α * (size of node). The idea is based on the fact that if a node is α-weight-balanced, then it is also height-balanced: height <= log1/α(size) + 1
- Unlike other self-balancing BSTs, a ScapeGoat Tree doesn't require extra space per node. For example, Red-Black Tree nodes are required to store a color. In the implementation of the ScapeGoat Tree below, we only have left, right and parent pointers in the Node class. The parent pointer is used for simplicity of implementation and can be avoided.

Insertion (assuming α = 2/3): To insert value x in a ScapeGoat Tree:

- Create a new node u and insert x using the BST insert algorithm.
- If the depth of u is greater than log3/2(n), where n is the number of nodes in the tree, then we need to rebalance the tree. To do that, we use the steps below to find a scapegoat:
  - Walk up from u until we reach a node w with size(w) > (2/3)*size(w.parent). This node is the scapegoat.
  - Rebuild the subtree rooted at w.parent.

What does rebuilding the subtree mean? In rebuilding, we simply convert the subtree to the most balanced BST possible. We first store the inorder traversal of the BST in an array, then we build a new BST from the array by recursively dividing it into two halves.

       60                           50
      /                           /    \
    40                          42      58
      \         Rebuild        /  \    /  \
       50    ---------->     40    47 55    60
      /  \
    47    55
   /        \
  42         58

Below is a C++ implementation of the insert operation on a ScapeGoat Tree.
// C++ program to implement insertion in
// ScapeGoat Tree
#include <bits/stdc++.h>
using namespace std;

// Utility function to get value of log32(n)
static int const log32(int n)
{
    double const log23 = 2.4663034623764317;
    return (int)ceil(log23 * log(n));
}

// A ScapeGoat Tree node
class Node
{
public:
    Node *left, *right, *parent;
    float value;
    Node()
    {
        value = 0;
        left = right = parent = NULL;
    }
    Node(float v)
    {
        value = v;
        left = right = parent = NULL;
    }
};

// This function stores inorder traversal
// of tree rooted with ptr in an array arr[]
int storeInArray(Node *ptr, Node *arr[], int i)
{
    if (ptr == NULL)
        return i;
    i = storeInArray(ptr->left, arr, i);
    arr[i++] = ptr;
    return storeInArray(ptr->right, arr, i);
}

// Class to represent a ScapeGoat Tree
class SGTree
{
private:
    Node *root;
    int n; // Number of nodes in Tree

public:
    void preorder(Node *);
    int size(Node *);
    bool insert(float x);
    void rebuildTree(Node *u);
    SGTree() { root = NULL; n = 0; }
    void preorder() { preorder(root); }

    // Function to build tree with balanced nodes
    Node *buildBalancedFromArray(Node **a, int i, int n);

    // Height at which element is to be added
    int BSTInsertAndFindDepth(Node *u);
};

// Preorder traversal of the tree
void SGTree::preorder(Node *node)
{
    if (node != NULL)
    {
        cout << node->value << " ";
        preorder(node->left);
        preorder(node->right);
    }
}

// To count number of nodes in the tree
int SGTree::size(Node *node)
{
    if (node == NULL)
        return 0;
    return 1 + size(node->left) + size(node->right);
}

// To insert new element in the tree
bool SGTree::insert(float x)
{
    // Create a new node
    Node *node = new Node(x);

    // Perform BST insertion and find depth of
    // the inserted node.
    int h = BSTInsertAndFindDepth(node);

    // If tree becomes unbalanced
    if (h > log32(n))
    {
        // Find Scapegoat
        Node *p = node->parent;
        while (3 * size(p) <= 2 * size(p->parent))
            p = p->parent;

        // Rebuild tree rooted under scapegoat
        rebuildTree(p->parent);
    }

    return h >= 0;
}

// Function to rebuild tree from node u. This
// function basically uses storeInArray() to
// first store inorder traversal of BST rooted
// with u in an array.
// Then it converts the array to the most
// balanced BST possible using buildBalancedFromArray()
void SGTree::rebuildTree(Node *u)
{
    int n = size(u);
    Node *p = u->parent;
    Node **a = new Node *[n];
    storeInArray(u, a, 0);
    if (p == NULL)
    {
        root = buildBalancedFromArray(a, 0, n);
        root->parent = NULL;
    }
    else if (p->right == u)
    {
        p->right = buildBalancedFromArray(a, 0, n);
        p->right->parent = p;
    }
    else
    {
        p->left = buildBalancedFromArray(a, 0, n);
        p->left->parent = p;
    }
    delete[] a; // free the temporary array (the nodes themselves are reused)
}

// Function to build tree with balanced nodes
Node *SGTree::buildBalancedFromArray(Node **a, int i, int n)
{
    if (n == 0)
        return NULL;
    int m = n / 2;

    // Now a[i+m] becomes the root of the new
    // subtree built from a[i],...,a[i+n-1]

    // elements a[i],...,a[i+m-1] get stored
    // in the left subtree
    a[i + m]->left = buildBalancedFromArray(a, i, m);
    if (a[i + m]->left != NULL)
        a[i + m]->left->parent = a[i + m];

    // elements a[i+m+1],...,a[i+n-1] get stored
    // in the right subtree
    a[i + m]->right = buildBalancedFromArray(a, i + m + 1, n - m - 1);
    if (a[i + m]->right != NULL)
        a[i + m]->right->parent = a[i + m];

    return a[i + m];
}

// Performs standard BST insert and returns
// depth of the inserted node.
int SGTree::BSTInsertAndFindDepth(Node *u)
{
    // If tree is empty
    Node *w = root;
    if (w == NULL)
    {
        root = u;
        n++;
        return 0;
    }

    // While the node is not inserted
    // or a node with the same key exists.
    bool done = false;
    int d = 0;
    do
    {
        if (u->value < w->value)
        {
            if (w->left == NULL)
            {
                w->left = u;
                u->parent = w;
                done = true;
            }
            else
                w = w->left;
        }
        else if (u->value > w->value)
        {
            if (w->right == NULL)
            {
                w->right = u;
                u->parent = w;
                done = true;
            }
            else
                w = w->right;
        }
        else
            return -1;
        d++;
    } while (!done);
    n++;
    return d;
}

// Driver code
int main()
{
    SGTree sgt;
    sgt.insert(7);
    sgt.insert(6);
    sgt.insert(3);
    sgt.insert(1);
    sgt.insert(0);
    sgt.insert(8);
    sgt.insert(9);
    sgt.insert(4);
    sgt.insert(5);
    sgt.insert(2);
    sgt.insert(3.5);
    printf("Preorder traversal of the"
           " constructed ScapeGoat tree is \n");
    sgt.preorder();
    return 0;
}

Output:

Preorder traversal of the constructed ScapeGoat tree is
7 6 3 1 0 2 4 3.5 5 8 9

A scapegoat tree with 10 nodes and height 5:

          7
         / \
        6   8
       /     \
      5       9
     /
    2
   / \
  1   4
 /   /
0   3

Let's insert 3.5 into the scapegoat tree above. Initially the tree has n = 10 nodes and height 5, and d = 5 does not exceed log3/2(10). After the insertion the new node sits at depth d = 6, and since d > log3/2(n), i.e. 6 > log3/2(11), we have to find a scapegoat in order to fix the excessive height.

- Now we find a scapegoat. We start with the newly added node 3.5 and check whether size(3.5)/size(3) > 2/3. Since size(3.5) = 1 and size(3) = 2, size(3.5)/size(3) = 1/2, which is less than 2/3. So this is not the scapegoat, and we move up.
- Now, size(4)/size(2) = 3/6. Since size(4) = 3 and size(2) = 6, 3/6 is still less than 2/3, which does not fulfill the scapegoat condition, so we again move up.
- Now, size(2)/size(5) = 6/7. Since size(2) = 6 and size(5) = 7, and 6/7 > 2/3, this fulfills the scapegoat condition, so we stop here. Hence node 5 is the scapegoat.

Finally, after finding the scapegoat, the rebuild takes place on the subtree rooted at the scapegoat, i.e., at 5.

Comparison with other self-balancing BSTs

Red-Black and AVL: Time complexity of search, insert and delete is O(Log n).

Splay Tree: Worst case time complexities of search, insert and delete are O(n), but the amortized time complexity of these operations is O(Log n).
ScapeGoat Tree: Like the Splay Tree, it is easy to implement and has a worst-case search time complexity of O(Log n). The worst-case and amortized time complexities of insert and delete are the same as for the Splay Tree.

This article is contributed by Rahul Aggarwal and Sahil Chhabra (akku).
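The insert-and-rebuild recipe above is compact enough to sketch in Python. The following is an illustrative port of the idea (my own addition, not part of the original article); like the C++ version it recomputes size() on the fly instead of caching it, so it favors clarity over speed:

```python
import math

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def size(node):
    # subtree size, recomputed on demand (like the C++ version)
    return 0 if node is None else 1 + size(node.left) + size(node.right)

def flatten(node, out):
    # inorder traversal into a list, like storeInArray()
    if node is not None:
        flatten(node.left, out)
        out.append(node)
        flatten(node.right, out)
    return out

def build_balanced(nodes, lo, hi):
    # rebuild the most balanced BST possible from the sorted nodes[lo:hi]
    if lo >= hi:
        return None
    mid = (lo + hi) // 2
    root = nodes[mid]
    root.left = build_balanced(nodes, lo, mid)
    root.right = build_balanced(nodes, mid + 1, hi)
    return root

class ScapegoatTree:
    ALPHA = 2 / 3

    def __init__(self):
        self.root = None
        self.n = 0

    def insert(self, value):
        self.n += 1
        new = Node(value)
        if self.root is None:
            self.root = new
            return
        # standard BST insert, remembering the root-to-node path
        path = []
        cur = self.root
        while cur is not None:
            path.append(cur)
            cur = cur.left if value < cur.value else cur.right
        if value < path[-1].value:
            path[-1].left = new
        else:
            path[-1].right = new
        path.append(new)
        if len(path) - 1 > math.log(self.n, 3 / 2):  # tree became too deep
            self._rebuild_at_scapegoat(path)

    def _rebuild_at_scapegoat(self, path):
        # walk up from the new node until size(w) > alpha * size(w.parent);
        # then rebuild the subtree rooted at w.parent, the article's recipe
        for i in range(len(path) - 1, 0, -1):
            w, parent = path[i], path[i - 1]
            if size(w) > self.ALPHA * size(parent):
                nodes = flatten(parent, [])
                new_root = build_balanced(nodes, 0, len(nodes))
                if i >= 2:
                    grand = path[i - 2]
                    if grand.left is parent:
                        grand.left = new_root
                    else:
                        grand.right = new_root
                else:
                    self.root = new_root
                return
        # fallback (should not normally trigger with alpha = 2/3)
        self.root = build_balanced(flatten(self.root, []), 0, self.n)
```

Because no balance information is stored in the nodes, the only extra cost is the occasional flatten-and-rebuild, which is what gives the structure its amortized O(Log n) insert.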
While updating my VsVim editor extensions for Beta2 [1] I got hit by a change in the way F# exposes discriminated unions in metadata. My extension consists of a core F# component with a corresponding set of unit tests written in C#. It's mostly API level testing and as such I use a lot of F# generated types in my C# test assembly.

In Beta1, all information which could be extracted from a discriminated union was immediately available on the value. The underlying type representation was less than desirable, but these details were hidden by type inference and the very accessible API. The type wasn't perfect because, given a particular instance, only the subset of the properties relevant to the union value type were valid. All others threw exceptions. But code using these methods and properties flowed very well. For instance, take the following F# definition:

type ActionKind =
    | Mouse = 1
    | Keyboard = 2

type ActionResult =
    | Complete of (ActionKind * int)
    | Error of string
    | NeedMore of (char -> ActionResult)

The use case in C# was quite simple:

[TestMethod]
public void TestActionBeta1()
{
    var res = GetResult();
    Assert.IsTrue(res.IsComplete());
    Assert.AreEqual(ActionKind.Mouse, res.Complete1.Item1);
    Assert.AreEqual(42, res.Complete1.Item2);
}

Notice how no type information is necessary and the code flows quite naturally. C# type inference works great here and allows me to do what I need to do without fussing around with little stuff. The type in this case is a detail I don't need to know about. It simply adds no value.

Discriminated unions in Beta2 changed substantially in this area. Instead of generating the set of all values on the exposed type, there is now an inner type generated for every discriminated union value, and the properties relevant to that union value are stored on the inner type. The outer type now contains only properties to determine which type of value it is (certainly an upgrade from methods!) [2]

For instance, in the case of ActionResult there are 3 generated inner classes: Complete, Error and NeedMore. Each one contains a single property Item which holds the associated value(s). This means that to get to the value portion, a cast to the inner type must be inserted! Let's take a look at how the above test code has to change to deal with the Beta2 generation of ActionResult.

[TestMethod]
public void TestActionBeta2()
{
    var res = GetResult();
    Assert.IsTrue(res.IsComplete);
    Assert.AreEqual(ActionKind.Mouse, ((ActionResult.Complete)res).Item.Item1);
    Assert.AreEqual(42, ((ActionResult.Complete)res).Item.Item2);
}

Notice the explicit casts which must be added to access the values. This makes it impossible to rely solely on C# type inference. I must now understand the underlying type structure of discriminated unions in order to use them. This extra cast adds no real value to my code.

My C# test assembly has literally hundreds of test cases which use this pattern on F# types. I didn't know the return type of every method and found myself hitting "Goto Def" on a lot of "var" instances to discover the static type, going back to the original file and inserting the cast. It was a tedious and slow process. Eventually I settled on a different solution. For every type I exposed in F# I added a set of extension methods in the form AsXXX, where XXX represents the name of the generated inner types. For example:

public static ActionResult.Complete AsComplete(this ActionResult res)
{
    return (ActionResult.Complete)res;
}

The advantage of this approach is twofold:

- Removes the need to explicitly name types in code and hence gets back the advantages of type inference
- I can now use "." on any of the values and let Intellisense help me find the appropriate method to use

This extension method allows me to get closer to the Beta1 style code:

[TestMethod]
public void TestActionBeta2()
{
    var res = GetResult();
    Assert.IsTrue(res.IsComplete);
    Assert.AreEqual(ActionKind.Mouse, res.AsComplete().Item.Item1);
    Assert.AreEqual(42, res.AsComplete().Item.Item2);
}

With these methods and a quick series of Find / Replace calls, I was back in business.

[1] It's coming I promise!
[2] It also contains a handy set of factory methods for generating values but it's not relevant to this discussion.

Any reason not to just use dynamic?

[TestMethod]
public void TestActionBeta2()
{
    dynamic res = GetResult();
    Assert.IsTrue(res.IsComplete);
    Assert.AreEqual(ActionKind.Mouse, res.Itm.Item1);
    Assert.AreEqual(42, res.Item.Item2);
}

Seems easier, and it's not like you get much Intellisense benefit in this scenario. Though speaking of Intellisense, it would stop the idiotic typos. But in unit tests you'd find them pretty quickly, so not a big deal.

@ShuggyCoUk, In general I avoid dynamic when a static type solution is readily available. I definitely could have fixed this problem with dynamic. But once I had the AsXXX extension method fix in place, switching to that over dynamic required just about the same amount of work. Also, it's my unit test code so I like to keep it as safe as possible. No fun debugging random failures in the actual test code as opposed to the production code.

Fair enough, depends how many unions you make I guess. Looks like something to try T4 on, though if you aren't comfortable with T4 then I wouldn't think it was a net win.

it will accept x values, but will deny y. So is exclusive.
The Allegro GUI is a very flexible and easy to use user interface. But, as we all know, it doesn't look very pleasing. For your normal set of tools this doesn't really matter. But if you want to use the GUI inside your game, it's a different story. The GUI has to look modern and fit the theme of the game. Most of the time, this means that we re-implement many widgets already existing in the Allegro GUI.

On the other hand, it's very easy to change the look of the widgets. You just need to write your own dialog_proc function, handle only the MSG_DRAW event, and route all other messages to the default dialog proc.

int my_button_proc(int msg, DIALOG *d, int c)
{
    int ret = D_O_K;
    if (msg == MSG_DRAW) {
        /* All drawing code here */
    }
    else {
        ret = d_button_proc(msg, d, c);
    }
    return ret;
}

While this is not that hard, it's still a repetitive task. So, why not write a set of dialog functions which let us change the look of the GUI in a very easy way? Easy as in exchanging some bitmaps and maybe adjusting some INI file settings? Yeah, why not? So I decided to do exactly this: write a set of truly skinnable dialog functions, so I can change the look of the GUI by just replacing some image files.

[screenshot] A screenshot using an Aqua style theme.

Overloaded Dialog Functions

I've prefixed all of my functions with "lex_" to avoid namespace clutter (LEX is the 3 letter entry I normally use for hiscores). The following functions have been overloaded:

- lex_button_proc
- lex_slider_proc
- lex_check_proc
- lex_radio_proc
- lex_edit_proc
- lex_list_proc

What's new? I added a few extra features to the button and listbox.

lex_button_proc

You can specify a callback function for the button as dp2 which will be called once you click the button.
You can use this callback to check the dialog content before it is closed. You can also change the return type of the button, say from D_CLOSE (close dialog) to D_O_K (do nothing), to ensure that the dialog is not closed. The prototype for the button callback function is:

int button_callback(int id)

The submitted id is the d1 member of the dialog struct. Set it to NULL if you don't want to use a callback function.

I also created one new function

- lex_dialog_proc

which can be used to create a movable dialog. If you want to create a moveable dialog box, just place this function as the first dialog proc in your DIALOG structure like this:

DIALOG the_dialog[] = {
    /* (dialog proc)   (x)  (y)  (w)  (h) (fg) (bg) (key) (flags) (d1) (d2) (dp)      (dp2) (dp3) */
    { lex_dialog_proc, 100, 100, 440, 200, 0,  -1,  0,    0,      0,   0,   "Dialog", NULL, NULL },
    /* Dialog box content goes here */
}

The dp member is used for the dialog title. The remainder works as you would expect from the normal Allegro functions. To use it, simply use the lex_ prefixed dialog functions.

I've also added a slightly enhanced version of do_dialog() called (be prepared for a surprise...) lex_do_dialog() which allows you to double buffer your UI. Check the test.c sample to see how it works.

If you want to create new skins, have a look at the provided aqua.skin file... it's heavily commented, so you should be able to create your own skins. If you have done skins of your own, please drop me a mail. You can download it from the (fresh, ugly but functional) lexgui homepage: [url]

Tell me what you think... is this something you might want to use?

--
There are no stupid questions, but there are a lot of inquisitive idiots.

Looks nice. I'll have to give that a go. Any chance you'll write a dlg plugin for your addon?

Hmmm... your test doesn't update properly, it seems the objects sit in their own loop, so I'm guessing you're still using some of the original gui code... and it's horribly slow if I turn your Buffer into a sub bitmap of the screen and comment out the blit.

-- Spellcaster: It does look very nice indeed Tom: That slowdown is inevitable when anti-aliasing or doing translucency

what aliasing or translucency? I have a 900MHz Tbird. It shouldn't be that slow... I mean it was really slow. As if I was running Windows 98 on a 486@90MHz; the only thing that I know that goes that slow on this machine is TuxRacer with hardware OpenGL turned off (with all the extra effects turned on, like shadows, trails, more aliasing, more detail, etc...)
and its horribly slow if i turn your Buffer into a sub bitmap of the screen and comment out the blit" -- Spellcaster: It does look very nice indeed Tom: That slowdown is inevitable when anti-aliasing or doing translucency what aliasing or translucency? I have a 900Mhtz Tbird. It shouldn't be that slow... I mean it was really slow. as if I was running windows 98 on a 486@90Mhtz, the only thing that i know that goes that slow on this machine is TuxRacer with Hardware OpenGL turned off. (with all the extra effects turned on, like shadows, trails, more aliasing, more detail, etc...) The symptoms you describe are are sure sign that there is some blending going on. No matter how fast your cpu is reading from video memory can be horribly slow. I dont think there is. But I haven't checked the code. I think the custom update_dialog is doing something funky casue the dialog is dragable. Hm... not really. Not unless you you actually drag the dialog.What exactly is your problem?I have nbo problems at all speedwise. What have you tried to do, what have you changed? Ahm... why would you want to turn the buffer into a sub bitmap? If you want to do blit to the screen, just use the normal do_dialog. Use my function if you don't want to blit to the screen... that's the whole idea of that extended do_dialog... I was just playing with your 'test' code. I executed it without changing a thing, and if I click a button, I don't get to see the button pressed image cause the drawing is halted till I let go of the mouse button, but by that time the button has reverted to the 'not pressed' image. The list box does the same, it doesn't draw till you stop scrolling, and when draging the dialog no objects update, but the draging is SLOOOOOWWW, slow enough to see the individual blits that update the background image. I turned the buffer into a sub bitmap to remove the extra blit. i just looked at it... 
dragging the dialog makes the whole dialog white while dragging, and the scroll bar moves while you drag it, but it goes to the place where you let go of the button. this is from running gui.exe. i have an Athlon XP 1.1GHz w/ GeForce2.

haven't looked at the code, lemme see what's going on.

Hi all(egorites) :) Great one SC... it really looks cool... if I may say, u have CAST-A-SPELL ;D... pls send me ur magic wand... ;D

Thomas Fjellstrom: Any chance you'll write a dlg plugin for your addon?

I second that thought, SC, any chance of that... will really help lazy bones like me ;).. will luv it..

Regards
Pradeepto

--
I vote for Pradeepto. - Richard Phipps
Hey; Pradeepto's alive! - 23yrold3yrold

Very cool ;D. Will that work with the stuff posted here concerning GUI, DLG, AGUP, and a link to the GUI Clinic tutorial on the AGDN site by Daniel Harmon? I hope so, because I want to start making good GUIs in the games that I plan on making (Tetris (sorry, I know there are a lot of T clones but I have to do it just once), Super Mario Bros clone, Breakout clone, Galaga clone, Pac Man clone, Mrs. Pac Man clone (don't want to be sexist ;)), Tic-Tac-Toe, etc.). My curiosity and ambition have just skyrocketed ;D. Thanks in advance for any and all help you can provide.

Ok, first of all: All of the problems mentioned come from my lex_do_dialog(). If you use the normal do_dialog() that won't happen. On the other hand, if you use the normal do_dialog(), you don't have double buffering. It seems like the Allegro dialog functions really like to do stuff in loops, and that they don't have callback functions. The solution is simple: I need to handle the according functions in my dialog handlers, which should be mainly cut-and-paste. Expect a new version pretty soon now.

Will that work with the stuff posted here concerning GUI, DLG, AGUP, and a link to the GUI Clinic tutorial on the AGDN site by Daniel Harmon?
Let's start to work through that list step by step:

a) GUI/DLG related stuff: You can simply create a dialog using the normal gui_procs and simply put a lex_ in front of them to change them to my code.

b) Not sure what the questions are regarding AGUP and the GUI clinic tutorial? I'm also not sure if there's interest in that kind of functionality (skinnable GUI procs).

Right now I've got these things on my list (most important things first):

Next release:
- Fix the double buffering code
- Make the skin loading more robust

Other things on the list:
- Add a filechooser
- Additional skins
- Create a new framework, so you can have modal dialogs.

Spellcaster: Oooooh pretty!
Pradeepto: I'm dumb! But... I want DLG to show me my pretty dialogs when I make them

It seems like the allegro dialog functions really like to do stuff in loops, and that they don't have callback functions.

I think that when they do something in a loop, they broadcast a MSG_IDLE to the whole dialog... this could be the callback you need! But it might not be easy to differentiate it from a regular MSG_IDLE.

Perhaps it can be merged in with AGUP, since that has Win32, GTK, and QNX Photon themes.

---
Bob
[ -- All my signature links are 404 -- ]

I'd love to see that, lexgui as an AGUP theme. And maybe that Aqua thing (by gnudutch? sorry, I forget) too.

Perhaps it can be merged in with AGUP, since that has Win32, GTK, and QNX Photon themes.

Hm... as I said, that code can switch "themes" dynamically. So it's not really efficient to throw it together with AGUP. In fact, I started working on that since I found the "themes" in AGUP pretty disappointing. Hm... so you guys want more themes? Give me half an hour... It's 11:21 (local time here), let's see how long it will take to create a win32 theme.

Cool. But please not the Clown theme from XP... shudder

OOOhhhh.. A BeOS theme would be cool too.

Ok, it took a bit longer (had to eat dinner in between ;-) ) and also needed to change the code slightly.
[screenshot] That's a plain and boring Win98 screen. Time needed just to create the skin from scratch: 35 minutes. I'll upload the changed package in the evening; this should contain a better double buffering strategy, also.

Just like to say, nice initiative to create a skinnable GUI, it's looking great thus far. And a question about the scroller on the listbox in the demo program.. it doesn't follow the mouse when it is dragged. Is it supposed to do that?

---
BEER: It's not just for breakfast anymore.

And a question about the scroller on the listbox in the demo program.. it doesn't follow the mouse when it is dragged. Is it supposed to do that?

Nope, that's a problem with the double buffer repaint code. It will be fixed this evening.

Great, can't wait.

Spellcaster: Thanks, but that was only the answer to my first question. What about my second question? The question was: Here is a lame image I made in M$ Paint to illustrate what I'm asking:

that image doesn't show up

I made a web page with it on there. The URL is:

Thanks in advance for any and all help you can offer ;D.

Hm... I haven't touched the menu stuff yet. But it shouldn't be too hard. So, you want to be able to specify a BITMAP that is placed at the side of a menu? Similar to the Windows text if you open the start menu? If this is what you want, it shouldn't be too hard to do. (I've no idea right now, because I've not used the Allegro menus yet.) I'll add it to my "todo" list together with skinnable menu support.
What is form-data and how to send it from SAP Cloud Platform Integration (CPI)

Disclaimer

This blog is meant for documentation/educational purposes only. It is the result of our findings and no guarantee that the same solution will work in all cases. I was informed that multipart sending/form-data is not officially supported by SAP Cloud Integration.

Introduction

Our team solved an interesting challenge involving SAP Cloud Platform Integration and an API expecting to receive form-data today. We faced an API which accepted a POST request using the Postman tool but would always throw an error when 'exactly the same content' (we will learn more about that later) was sent from the CPI. Because the POST request needed to send form-data, this strange behaviour made us dig into the formal definition of form-data by W3C. We also found some very nice blogs like this one by Pieterjan, who explains form-data from a slightly different angle.

In this blog, we will explain the concept of form-data, how to send it from Postman and how to send it from the Cloud Platform to another system.

What is form-data anyway?

Form-data originates from HTML forms, which take user input and send it through the browser to a web server. The same technology can be used to send data between applications other than browsers. It comes in two flavours: application/x-www-form-urlencoded and multipart/form-data. Both are used to send key-value mappings. In HTML forms, a field is represented as a mapping of the name of the field to the content. Urlencoded form-data is the standard way but not sufficient for sending large quantities of binary data, also known as files. For more information see the W3C definition.

Two ways of sending form-data

The rest of this blog will talk about the content type multipart/form-data. Don't be scared by the "multipart" - it also works if your request contains only one part. First, we will explore how to send the form-data through Postman.
This is easily done with the below settings. The body is set to form-data and key-value pairs can be added. Note that it is possible to add simple strings, formatted data like JSON and also files as values. Postman takes care of the rest behind the scenes.

To recreate this request in the SAP Cloud Platform Integration, we first recreated it in Postman using the raw content type. This is needed as the CPI does not support form-data directly but has only the basic capabilities to alter header and body. The information we needed to enter was gathered from the W3C definition in combination with Postman's handy feature to generate code from a request by clicking on Code (in the top right corner of the screenshot above). The details will follow in the next section.

Header and body

Two things are important in the generated code to create the form-data: the header Content-Type and the body formatting.

Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

The header labels the content as form-data and introduces the boundary, which is a user-chosen string. Postman generates this one for us. Don't worry about the multipart, this is used for all form data, even if it contains only one part like in our examples.

Content-Disposition: form-data; name="formElement"

ExampleValue
------WebKitFormBoundary7MA4YWxkTrZu0gW--

The body declares form-data again but also gives the data a name; this is the key in the Postman example in the previous section. We also see our "ExampleValue", which is the value in said example. Please note that the boundary in the body has two more dashes ("--") at the beginning than the boundary declared in the header. This is the rule per the definition. More key-value pairs could be added in the body here, but let's keep it simple for now. The very last boundary also needs to have two dashes at the end (compared again to the boundary in the header).
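To make the boundary and line-ending rules concrete, here is a small Python sketch (my own illustration, not part of the original blog) that assembles such a body by hand. Note the extra dashes on the delimiters and the CRLF line separators required by the specification:

```python
def build_form_data(fields, boundary="cpi"):
    """Assemble a multipart/form-data body from a dict of key/value pairs.

    Per the specification, each part starts with --boundary, the closing
    delimiter is --boundary--, and lines are separated by CRLF.
    """
    lines = []
    for name, value in fields.items():
        lines.append("--" + boundary)
        lines.append('Content-Disposition: form-data; name="%s"' % name)
        lines.append("")  # blank line separates the part headers from the value
        lines.append(value)
    lines.append("--" + boundary + "--")
    body = "\r\n".join(lines) + "\r\n"
    content_type = "multipart/form-data; boundary=" + boundary
    return content_type, body

content_type, body = build_form_data({"formElement": "ExampleValue"})
print(content_type)
print(body)
```

The same structure (boundary in the header, parts separated by the boundary with leading dashes, doubled dashes on the final delimiter) is what we will build by hand in the CPI Content Modifier below.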
To the Cloud Platform Integration

In the CPI you would now go ahead and create the header with the value (note the much simpler but still working boundary):

Content-Type: multipart/form-data; boundary=cpi

and the body:

Content-Disposition: form-data; name="formElement"

ExampleValue
--cpi--

Normally our story would end here. The CPI sends out the well-formatted form-data and we are happy.

The anomaly

In our case, the API we were trying to talk to would accept the form-data from Postman either way, raw or as form-data. If we copied the exact same header and body which worked in Postman to the CPI, we'd get an HTTP 400 error "Unable to parse multipart body". This happens, for example, when you leave out the newline between Content-Disposition and the ExampleValue in the example above. After observing different payloads over and over again, we tried not to create the form-data in the CPI but to send it there from Postman and route it through to the API. Normally for our use-case we need to send a SOAP message, but as nothing was working this seemed like something we could give a try. And it was successful. With the message from Postman routed through the CPI, the API accepted the request.

So we checked what the difference was between the request we created "by hand" in the CPI (see the chapter To the Cloud Platform Integration) and the one coming from Postman. Even comparing them side by side in the CPI tracing view showed no difference. We also created an Iflow that stripped away all headers but the Content-Type, but with no luck. Finally, we compared the two payloads in Notepad++, which can show hidden characters like line ending characters (View -> Show Symbol -> Show All Characters). It seems Postman on Windows creates the "correct" line ending characters and the CPI creates UNIX line endings, which our API could not handle. Turns out it wasn't 'exactly the same data'.
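The kind of invisible difference we were chasing is easy to demonstrate (illustrative Python, not from the blog): two payloads that look identical when printed can still differ in their line endings, and repr() exposes the hidden characters much like Notepad++'s "Show All Characters" view:

```python
windows_payload = 'Content-Disposition: form-data; name="formElement"\r\n\r\nExampleValue'
unix_payload = 'Content-Disposition: form-data; name="formElement"\n\nExampleValue'

print(windows_payload == unix_payload)  # False

# repr() makes the CR characters visible, so the difference between
# CRLF ("\r\n") and bare LF ("\n") line endings becomes obvious
print(repr(windows_payload))
print(repr(unix_payload))
```

A byte-level or repr-level comparison like this would have saved us the detour through routing the Postman request and diffing payloads by eye.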
You can find it below should you ever run into this kind of trouble. import com.sap.gateway.ip.core.customdev.util.Message; import java.util.HashMap; def Message processData(Message message) { //Body def body = message.getBody(String); body = body.replaceAll("\n", "\r\n"); body = """--cpi\r\nContent-Disposition: form-data; name="envelope"\r\n\r\n\r\n""" + body + """\r\n\r\n--cpi--""" message.setBody(body); return message; } Conclusion In this blog we explained what form-data is, showed the different forms and how to send it through Postman. Then we moved to the Cloud Platform Integration and sent form-data from there. In the end we showcased an error with an API that threw an error on UNIX line endings and how to fix it. Thank you for reading! Please let me know if you faced similar problems or if you have further questions regarding form-data requests or other CPI topics. Hi Matti Thanks for sharing this. Few months back, there was an interesting question that came up in the forums that had to post form-data to Leonardo's MFLS OCR service. Forming the form-data part with Content Modifier did not work in that case because the content was binary, and the Content Modifier would corrupt the content because it would undergo conversion to string. Here's my solution for that case using Groovy Script for that case, which could cater for both binary or text content. Might be of interest to you 🙂 Best regards, Eng Swee Hi Eng Swee, while searching for an answer for the anomaly described in this blog, I came across your answer on that particular problem! Naive that I was at that point I thought it can't be that complicated - having to compare files on a binary level. Turns out I was wrong and we had to dig quite a bit into the file/encoding level here. I will keep this in mind going forward and use some parts of your script for convenient multipart sending 🙂 Thank you for your comment. 
Best, Matti Hi Matti Nevertheless, your blog highlights some of the challenges and what to look for in form-data interfaces, and that is really useful 🙂 The whole idea of comparing the payload with Postman is a useful troubleshooting approach. Another approach that I would recommend (which I used myself in the above case) was to route the message through a proxy that captures the details of the HTTP call. You can find out more in the following blog - that has been a life saver many times round. Regards Eng Swee Hi Matti, I encountered very much exactly same issue early this year like you experienced in this blog. I was trying to post zip file inside multipart form-data to Ariba Network server from CPI. The content produced by my groovy script looks exactly same as same one from PostMan but server always returned "invalid content length". I suspected HTTP adapter might work in a different way than I thought. In the end, my work around solution is to post form and binary content through groovy script like below and it works. Kind regards, Nick Hi Nick, thank you for your comment! That's a strange error, glad you could fix it - the only thing is that doing HTTP calls from groovy is really discouraged... Best regards Matti Hello Nick, This is really interesting, because I'm running into the exact same problem; the Content-Length not defined error. I've tried nearly everything, but with no success. My current script is very similar to the one Eng Swee posted in the other blog, but I'm really curious in understand your approach. At the moment I'm testing your script and I'm running into the following error: java.lang.NoSuchMethodException: No signature of method: sun.net. is applicable for argument types: (org.apache.camel.converter.stream.InputStreamCache) values: [org.apache.camel.converter.stream.InputStreamCache@144c18fd] Possible solutions: write([B), write(int), write(int), write(int), wait(), wait(long) I'm using SFTP Adapter to pickup the file to send to Ariba. 
The only thing in the iflow is the Script Process. Can you kindly give your opinion on this matter? Thank you in advance. João Hi João, Based on error message, the type of input parameter in below code is unacceptable. The type of variable zipContent in my script is byte array and it's the output of combination of form + binary content (zip file). If you want to discuss further, then create a question post and I'll answer in the new post. Kind regards, Nick Hello Nick, I've created another question. It's on this URL: Thank you in advance for providing your insights on this topic. Kind regards, João Gomes Hello Matti, Thank you so much for the blog.. I have similar kind of requirement. However, I need to pass as below with the following Body in form-data. Kindly help on the same. Hi Prashanth, you can probably just use the OAuth Adapter, Type: Client Credentials to do this type of POST request. If it needs to be form-data, you can refer to my guide above and add your key value pairs accordingly. In Postman you can show the final request by clicking on "Code" next to the Send Button or "</>" in the newest version of Postman. Best regards Matti Hello Matti, Thank you so much for the reply. I have tried as below(attached Screenshot). Please correct me if the following parameters are incorrect. Currently, I am facing 400 bad request error. Kindly help. Content-Modifier-Header Content Modifier- Body Regards, Prashanth Bharadwaj.Ch Hello Matti, Thank you so much for the blog. I have similar kind of requirement, but it does not work: I am facing 400 bad request error. Kindly your help please. Regards, C.Buriticá Great work Matti Leydecker Code for sending file in form data. Reference: Regards, Sunil
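As an aside to the line-ending issue discussed in the post above, here is a minimal Python sketch (my own illustration, not CPI code; it reuses the boundary and field names from the blog's example) of a multipart/form-data body built with the CRLF line endings the HTTP specification expects:

```python
# Build a minimal multipart/form-data body by hand.
# RFC 2046 requires CRLF ("\r\n") line endings around the boundary
# and part headers -- bare "\n" is exactly what broke the API
# described in the post above.
BOUNDARY = "cpi"  # mirrors the blog's example boundary

def build_form_data(field_name, value):
    lines = [
        f"--{BOUNDARY}",
        f'Content-Disposition: form-data; name="{field_name}"',
        "",            # blank line separates part headers from the value
        value,
        f"--{BOUNDARY}--",
        "",
    ]
    return "\r\n".join(lines)

body = build_form_data("formElement", "ExampleValue")
print(body.count("\r\n"))  # every line break is CRLF, none are bare LF
```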
https://blogs.sap.com/2019/11/14/what-is-form-data-and-how-to-send-it-from-sap-cloud-platform-integration-cpi/
What will we cover in this Tutorial

If you are starting from scratch with NumPy and do not know what ndarray is, then you should read this tutorial first.

- How to do arithmetic with ndarray.
- Slicing and indexing of ndarray with 1 dimension.
- Slicing and indexing of ndarray with 2 dimensions.

Arithmetics with NumPy

An amazing feature of ndarrays is that you do not need to write for-loops for simple operations.

import numpy as np

a1 = np.array([[1., 2., 3.], [3., 2., 1.]])
a2 = np.array([[4., 5., 6.], [6., 5., 4.]])
print(a1)
print(a2)
print(a2 - a1)
print(a1*a2)
print(1/a1)
print(a2**0.5)

This looks too good to be true. Right? The output is as you would expect.

[[1. 2. 3.]
 [3. 2. 1.]]
[[4. 5. 6.]
 [6. 5. 4.]]
[[3. 3. 3.]
 [3. 3. 3.]]
[[ 4. 10. 18.]
 [18. 10.  4.]]
[[1.         0.5        0.33333333]
 [0.33333333 0.5        1.        ]]
[[2.         2.23606798 2.44948974]
 [2.44948974 2.23606798 2.        ]]

Then you understand why everyone is so madly in love with NumPy. This type of "batch" operation is called vectorization. You can also make comparisons.

import numpy as np

a1 = np.array([[1., 2., 3.], [6., 5., 4.]])
a2 = np.array([[4., 5., 6.], [3., 4., 5.]])
print(a1 < a2)

Which gives what you expect.

[[ True  True  True]
 [False False  True]]

At least I hope you would expect the above.

Slicing and basic indexing

If you are familiar with Python lists, then this should not surprise you.

import numpy as np

a = np.arange(10)
print(a)
print(a[5])
print(a[2:5])

You guessed it.

[0 1 2 3 4 5 6 7 8 9]
5
[2 3 4]

But this might surprise you a bit.

import numpy as np

a = np.arange(10)
print(a)
a[4:7] = 10
print(a)

Resulting in.

[0 1 2 3 4 5 6 7 8 9]
[ 0  1  2  3 10 10 10  7  8  9]

That is quite a surprise. You can take a "view" of it (also called a slice), as the following example shows.

import numpy as np

a = np.arange(10)
print(a)
a_slice = a[4:7]
print(a_slice)
a_slice[0:1] = 30
print(a_slice)
print(a)

Resulting in.
[0 1 2 3 4 5 6 7 8 9]
[4 5 6]
[30  5  6]
[ 0  1  2  3 30  5  6  7  8  9]

Slicing and indexing in 2 dimensions

First off, this seems similar to Python lists.

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(a[1])
print(a[2][2])
print(a[2, 2])

Maybe the last statement is surprising, but it does the same as the one above it. That is, the effect of a[2][2] is the same as that of a[2, 2].

[4 5 6]
9
9

Slicing the above ndarray is done by rows.

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(a[:2])

Which results in the following.

[[1 2 3]
 [4 5 6]]

It is a bit more advanced to slice it as follows.

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(a[:2, 1:])

Resulting in.

[[2 3]
 [5 6]]

It might not be clear that the second slice makes fully vertical slices, which is illustrated by the following example.

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(a[:, :1])

This will most likely surprise you. Right?

[[1]
 [4]
 [7]]

Makes sense, right?
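Building on the view behavior shown above, here is one more small sketch worth knowing (my own addition for illustration): a basic slice is a view into the same memory, so use .copy() whenever you want an independent array.

```python
import numpy as np

a = np.arange(10)

# A basic slice is a view: writing through it changes the original.
view = a[4:7]
view[0] = 99
print(a[4])       # the original array sees the change

# .copy() gives an independent array: the original is untouched.
b = np.arange(10)
independent = b[4:7].copy()
independent[0] = 99
print(b[4])       # still 4
```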
https://www.learnpythonwithrune.org/master-the-numpy-basics/
-----Original Message----- From: Christian Seifert <cs5b@yahoo.com> To: tomcat-user@jakarta.apache.org <tomcat-user@jakarta.apache.org> Date: Monday, April 02, 2001 1:45 PM Subject: Re: Namespace clarification >tomcat puts together the classpath dynamically and you >have no way of influencing the order it does so. so >the sax parser might me loaded before xerces. >one way to get around this is to edit the tomcat.bat >and change the dynamically setting of the classpath to >statically setting the classpath and taking care that >xerces.jar is at the beginning. >you might also want to check out the cocoon project as >this provides the functionality of what you are trying >to do. it also allows you to create XSP pages (hybrid >of JSP and XML)....the url is xml.apache.org. > >hope this helps > >christian > >--- "Prasanna.N" <prasanna@indofuji.soft.net> wrote: >> Hi, >> I am using TOMCAT 3.2. I am trying to transform an >> xml >> doc with XSL to generate a HTML. When i write a >> simple Java Class and use >> xerces, xalan it works just fine. When written in >> JSP using JAVA, When i call >> the page i am getting the following error >> >> "Namespace not supported by SAX Parser". >> >> I think Tomcat uses SAX Parser. How to come around >> this problem. I am using >> namespaces. Is there any option, to plug in XERCES >> parser with TOMCAT, if so >> how to do it. Since XERCES supports namespaces could >> this be a option to come >> around this problem? Please help. >> >> Regards, >> Prasan >> >> Prasanna N >> Indo-Fuji Information Technologies Pvt. Ltd. >> No: 484, II Cross >> 25th Main, II Stage >> BTM Layout, Bangalore 560 076. >> Tel : 91-80-6784122/33/44/55 >> > > >__________________________________________________ >Do You Yahoo!? >Get email at your own domain with Yahoo! Mail. 
> Hi prassan i am also from bangalore and am looking for a job do you think that i could forward my resume to you.i am from iit delhi and have 18 months of work exp with TCS Shall i forward my resume to you. YOURS SINCERELY Shivakanth
http://mail-archives.apache.org/mod_mbox/tomcat-users/200104.mbox/%3C003d01c0bb63$81a05a20$02000003@v4l2z0%3E
Hello to all, welcome to therichpost.com. In this post, I will tell you, Angular 9 Material Carousel Slider.

Guys, here is the updated Material Carousel for Angular 12: Angular 11 Material Carousel (the same works for Angular 12), and if you are new then you must check the below two links:

Here are the basic steps for Angular 9 Material Carousel Slider, please follow carefully:

1. Here are the commands you need to run in your command prompt to get the Angular Material modules:

ng add @angular/material
npm i @ngmodule/material-carousel

2. Here is the code you need to add to your app.module.ts file:

//...
import { MatCarouselModule } from '@ngmodule/material-carousel';

@NgModule({
  // ...
  imports: [
    // ...
    MatCarouselModule.forRoot(),
    // ...
  ]
})
export class AppModule {}

3. Here is the code you need to add to your app.component.ts file:

...
import { MatCarousel, MatCarouselComponent } from '@ngmodule/material-carousel';

export class AppComponent {
  ...
  // Slider Images
  slides = [{'image': ''}, {'image': ''}, {'image': ''}, {'image': ''}, {'image': ''}];
}

4. Here is the code you need to add to your app.component.html file:

<h1 style="text-align:center;">Angular 9 Material Carousel Slider</h1>
<mat-carousel>
  <mat-carousel-slide #matCarouselSlide *ngFor="let slide of slides" [image]="slide.image"></mat-carousel-slide>
</mat-carousel>

Now we are done, friends. If you have any kind of query or suggestion, please let me know. Thank you.

ERROR in The target entry-point "@ngmodule/material-carousel" has missing dependencies:
- @angular/core
- @angular/platform-browser
- @angular/animations
- @angular/cdk/a11y
- @angular/common
- rxjs
- rxjs/operators
- @angular/material/button
- @angular/material/icon
- @angular/material/core
- @angular/compiler/src/core
im getting this error while adding it to app module

Please try this also:
npm install --save @angular/material
npm install --save @angular/animations

I don't see anything on my screen. Totally blank. Did I do something wrong? Do we have to do anything else?
After cli update and angular 10 came and I also going to update this and thank you your reply. Thank you and it worked well. Thank you karen. how we can add more than 1 item on one slide? for example, if I want to have a multi-items slider Try this: Hi, how can I change slider transition animation? Hi, you want to change the slider animate timings? I have issue with the size of my carousel It’s within a router outlet and it doesn’t display correctly, the size is too big and overwhelms the navigator. As a consequence the arrow on the right is not displayed. The other issue is that the image used in the carousel are too bigs sometimes and they don’t fully display. You can do this with custom styles and for images, you can decrease the image size. Thanks You can only display 1920×480 size image ? yes How can I display text over the slide?? HI, I will share the working post link. Thanks Please check this: Hi, it this material carousel support click event? for example, after clicks image 1 and will navigate to another page. Appreciated your help! Yes and this link will help you to add custom button on every slider. Thanks Hi, thanks for your reply. By the way, which link i can refer. I will update you soon. Hi I am not getting the arrows and indicators are not middle of the carousel. Could you help me Hi, you are not getting material carousel slider like my example? I am not getting the icons “->” and “<-". The slide indicators are not in the middle of page. thank you! Is this written for v10 or 9? Yes this is for both. Thanks i have saved the image to the base64 string format in database but on retrieval i am unable to bind it to the carousel. my code of carousel is above line of code does not work but when i simply write out of the this code shows the image on the screen. Please help. Hi, can you please send me your image src or you can email me on therichposts@gmail.com Thanks Hello. I’m getting a word “back and forward arrow” instead of an icon, Any thoughts? 
Yes, you can do with custom script code. Thanks I’m a newbie to programming. Do you have an example? I will update you and if you are subscribed then you will get noticed. Thanks. Click bell icon bottom left. Done You need to import below into your app.module.ts file: import {MatIconModule} from ‘@angular/material/icon’; Thanks I did that and nothing happens 🙂 Thanks for your reply 🙂 Am I missing anything? 1).how can i use proportion=25 in laptop view and proportion=80 in mobile view ? 2).and I am getting the following warning- Warning: Entry point ‘@ngmodule/material-carousel’ contains deep imports into ‘C:/Users/teja/Desktop/jgd_courses-sample/jgd/node_modules/@angular/compiler/src/core’. This is probably not a problem, but may cause the compilation of entry points to be out of order. 3).and error in console: ERROR Error: ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked. Previous value: ‘undefined’. Current value: ‘[object Object]’. at throwErrorIfNoChangesMode (core.js:5467) at bindingUpdated (core.js:13112) at Module.ɵɵproperty (core.js:13907) at MatCarouselComponent_li_4_Template (ngmodule-material-carousel.js:55) at executeTemplate (core.js:7447) at refreshView (core.js:7316) at refreshEmbeddedViews (core.js:8408) at refreshView (core.js:7340) at refreshComponent (core.js:8454) at refreshChildComponents (core.js:7109) please resolve these three isuues I need to see your code. Dear Ajay, How can I increase the size of the carousel. Like a full page carousel slider. Please guide. Thank You Hi and this is already full width slider. Thanks I want this slider to cover full height of the page. Okay means full page slider? Yes, Also if you can provide a proper easy to.underatand angular flex tutorial, that will be a great help. The carousel is inside router outlet. I am sharing the code, please help. 
app.component.html

app.component.css

mat-sidenav-container, mat-sidenav-content, mat-sidenav {
  height: 100%;
}
mat-sidenav {
  width: 250px;
}

welcome.component.html

app-routing.module.ts

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { AppComponent } from 'src/app/app.component';
import { WelcomeComponent } from 'src/app/welcome/welcome.component';

const routes: Routes = [
  { path: '', component: WelcomeComponent }
];

@NgModule({
  declarations: [],
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
  providers: []
})
export class AppRoutingModule { }

Don't worry, I will update you soon.

Ok thanks

Thank you so much, it worked well. I appreciate your work.

Thank you 🙂

Hello! Thanks for the tutorial. I'm struggling right now because I am trying to build a carousel of Material Cards in Angular 10, but I can't find anything. I'd like to know if it is possible to use cards instead of images for this carousel. Thanks in advance. Kind regards, Costantino

Welcome, and I will update you on this.

Is it possible to add a router-link or click event on the images of the slider? If yes, then please tell me how?

Yes, and I will update you on this. Thanks

There is no output event like (drag)=drageslide($event)

I did not get your query?

- I make a click on an element, but when I drag the slide I need to prevent the click event.
- How to show 3.5 items in a slide.

Okay, I will update you. Thanks.

How can I change the arrow icons to other mat icons??

With CSS you can do that.

My error:

Error: src/app/product/product.component.html:53:1 - error TS2322: Type 'string' is not assignable to type 'number'.
53 interval="5000"
   ~~~~~~~~~~~~~~~
src/app/product/product.component.ts:15:18
15   templateUrl: './product.component.html',
     ~~~~~~~~~~~~~~~~~~~~~~~~~~
Error occurs in the template of component ProductComponent.
src/app/product/product.component.html:56:1 - error TS2322: Type 'string' is not assignable to type 'number'.
56 proportion="25"
   ~~~~~~~~~~~~~~~
src/app/product/product.component.ts:15:18
15   templateUrl: './product.component.html',
     ~~~~~~~~~~~~~~~~~~~~~~~~~~
Error occurs in the template of component ProductComponent.
src/app/product/product.component.html:57:1 - error TS2322: Type 'string' is not assignable to type 'number'.
57 slides="5"
   ~~~~~~~~~~
src/app/product/product.component.ts:15:18
15   templateUrl: './product.component.html',
     ~~~~~~~~~~~~~~~~~~~~~~~~~~
Error occurs in the template of component ProductComponent.

My English is not good. I am looking forward to hearing from you. Thanks!

Hi, the images are not displayed. Any clue?

You can add new images.

First of all, thank you very much for the very good explanation. I have a question: on page load, can I jump to a specific slide? Like, I have 10 images but I want to slide to the 3rd one on page load. I hope you got my point.

Yes, I got your point and I will update you on this. Thanks.

I need to show 4 images at a time. Any solution, please?

Okay, and I will update you. Thanks.
https://therichpost.com/angular-9-material-carousel-slider/
Introduction

Services in Android are components which run in the background. They do not have any user interface. One application can start a service, and the service can keep running in the background even if the user switches to another application. There are two types of services, namely Unbound Service and Bound Service.

Unbound Service is a kind of service which runs in the background indefinitely, even if the activity which started it ends. Bound Service is a kind of service which runs only for the lifespan of the activity which started it.

In this article we are going to see a step-by-step procedure for how to create an unbound service.

also read:
- How to set up android development environment?
- How to write AndroidManifest.xml?
- How to create a new activity in Android?

Android Development Tools

The software tools required for this example are as below:
- 1. Java Development Kit (JDK 5 or higher)
- 2. Android Software Developer's Kit (SDK)
- 3. Eclipse IDE, version 3.4 or higher (preferably Galileo)
- 4. Android Developer Tool (ADT)

How to create an Android service class?

Step 1: Create an Android Application in Eclipse as below:
Go to File->New->Other->Android->Android Project
Provide a project name and select a build target. Provide the application name, package name, activity name, and min SDK version, and click Finish.

Step 2: Create a folder named raw inside the res folder of your Android application. Copy an MP3 file to this folder and rename it without any extension, with all the letters in lowercase. Check whether a resource ID is generated for it in the R.java file of your Android application.

Step 3: Right click on the Android project created->New->Class
Provide a class name. Click on the Browse button near the superclass option, type Service in the search box, select the Service class from the package android.app and click Finish. Now a class which extends the Service class will be generated in your Android application.
Step 4: Edit the class and modify the code similar to the one below:

import android.app.Service;
import android.content.Intent;
import android.media.MediaPlayer;
import android.os.IBinder;
import android.util.Log;
import android.widget.Toast;

public class MyService extends Service {

    private static final String TAG = "MyService";
    MediaPlayer player;

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    @Override
    public void onCreate() {
        Toast.makeText(this, "My Service Created", Toast.LENGTH_LONG).show();
        Log.d(TAG, "onCreate");
        player = MediaPlayer.create(this, R.raw.sound_file_1);
        player.setLooping(false); // Set looping
    }

    @Override
    public void onDestroy() {
        Toast.makeText(this, "My Service Stopped", Toast.LENGTH_LONG).show();
        Log.d(TAG, "onDestroy");
        player.stop();
    }

    @Override
    public void onStart(Intent intent, int startid) {
        Toast.makeText(this, "My Service Started", Toast.LENGTH_LONG).show();
        Log.d(TAG, "onStart");
        player.start();
    }
}

In the above code, the onCreate method is overridden to initialize the MediaPlayer with the file you have pasted in the raw folder. The onDestroy method is overridden to stop the player. The onStart method is overridden to start the player.

Creating Layout

Step 1: Go to the res folder of your project->layout. Here you can see an xml file; edit the file.
Otherwise, right click on the folder and create a new layout file (Go to New->Other->Android XML File->select Layout and give a name).

Step 2: In the layout file add two buttons, name them start and stop, and set a background image for the screen.

The layout code is as below:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <TextView
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="Services Demo" />
    <Button
        android:id="@+id/buttonStart"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Start"></Button>
    <Button
        android:id="@+id/buttonStop"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Stop"></Button>
</LinearLayout>

Updating the Activity Class

Step 1: Update the Activity class as below:

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;

public class ServicesDemo extends Activity implements OnClickListener {

    private static final String TAG = "ServicesDemo";
    Button buttonStart, buttonStop;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        buttonStart = (Button) findViewById(R.id.buttonStart);
        buttonStop = (Button) findViewById(R.id.buttonStop);
        buttonStart.setOnClickListener(this);
        buttonStop.setOnClickListener(this);
    }

    public void onClick(View src) {
        switch (src.getId()) {
        case R.id.buttonStart:
            Log.d(TAG, "onClick: starting service");
            startService(new Intent(this, MyService.class));
            break;
        case R.id.buttonStop:
            Log.d(TAG, "onClick: stopping service");
            stopService(new Intent(this, MyService.class));
            break;
        }
    }
}

In the activity class above, the onCreate method is overridden: the layout you have created is set as the main screen, and the event listeners for the buttons are set. The onClick method is defined to start the service and stop the service based on the input event. The startService method is used to start the service in unbound mode, i.e., the service will continuously run without regard for the activity which initiated it.
The stopService method is used to stop the service from the current activity, even if the current activity is restarted.

Output of the Application

Given below is the output for the service application in the emulator:

This output shows that your media player started in the background and starts playing. You can move to other activities, and you can still hear the music playing in the background. To stop the service, either navigate back to your service activity and click the stop button, in which case you will get the output as below and your service will be stopped. Another way is to leave the service running until the song ends, in which case the service will use the stopSelf method to stop itself after the timeout period.

Conclusion

Service is a basic building block of Android. Services can be bound or unbound based on the coding methodology followed. An unbound service can be used to run a separate task which should be executed independently of the activity.

also read:
- How to set up android development environment?
- How to write AndroidManifest.xml?
- How to create a new activity in Android?
http://javabeat.net/creating-services-in-android/
CC-MAIN-2016-40
refinedweb
1,095
51.44
_lwp_cond_reltimedwait(2) - get process, process group, and parent process IDs #include <unistd.h> pid_t getpid(void); pid_t getpgrp(void); pid_t getppid(void); pid_t getpgid(pid_t pid); The getpid() function returns the process ID of the calling process. The getpgrp() function returns the process group ID of the calling process. The getppid() function returns the parent process ID of the calling process. The getpgid() function returns the process group ID of the process whose process ID is equal to pid, or the process group ID of the calling process, if pid is equal to 0. The getpid(), getpgrp(), and getppid() functions are always successful and no return value is reserved to indicate an error. Upon successful completion, getpgid() returns the process group ID. Otherwise, getpgid() returns (pid_t)-1 and sets errno to indicate the error. The getpgid() function will fail if: The process whose process ID is equal to pid is not in the same session as the calling process, and the implementation does not allow access to the process group ID of that process from the calling process. There is no process with a process ID equal to pid. The getpgid() function may fail if: The value of the pid argument is invalid. See attributes(5) for descriptions of the following attributes: Intro(2), exec(2), fork(2), getsid(2), setpgid(2), setpgrp(2), setsid(2), signal(3C), attributes(5), standards(5)
http://docs.oracle.com/cd/E19963-01/html/821-1463/getppid-2.html
CC-MAIN-2016-26
refinedweb
232
70.33
-------- The link to the old httperf page wasn't working anymore. I updated it and pointed it to the new page at HP. Here's a link to a PDF version of a paper on httperf written by David Mosberger and Tai Jin: "httperf -- a tool for measuring Web server performance". Also, openload is now OpenWebLoad, and I updated the link to its new home page. -------- In this post, I'll show how I conducted a series of performance tests against a Web site, with the goal of estimating how many concurrent users it can support and what the response time is. I used a variety of tools that measure several variables related to HTTP performance. - httperf is a benchmarking tool that measures the HTTP request throughput of a web server. The way it achieves this is by sending requests to the server at a fixed rate and measuring the rate at which replies arrive. Running the test several times and with monotonically increasing request rates, one can see the reply rate level off when the server becomes saturated, i.e., when it is operating at its full capacity. - autobench is a Perl wrapper around httperf. It runs httperf a number of times against a Web server, increasing the number of requested connections per second on each iteration, and extracts the significant data from the httperf output, delivering a CSV format file which can be imported directly into a spreadsheet for analysis/graphing. - openload is a load testing tool for Web applications. It simulates a number of concurrent users and it measures transactions per second (a transaction is a completed request to the Web server) and response time. I ran a series of autobench/httperf and openload tests against a Web site I'll call site2 in the following discussion (site2 is a beta version of a site I'll call site1). For comparison purposes, I also ran similar tests against site1 and against. The machine I ran the tests from is a Red Hat 9 Linux server co-located in downtown Los Angeles. 
I won't go into details about installing httperf, autobench and openload, since the installation process is standard (configure/make/make install or rpm -i). Here is an example of running httperf against: # httperf --server= --rate=10 --num-conns=500 httperf --client=0/1 --server= --port=80 --uri=/ --rate=10 --send-buffer=4096 --recv-buffer=16384 --num-conns=500 --num-calls=1 Maximum connect burst length: 1 Total: connections 500 requests 500 replies 500 test-duration 50.354 s Connection rate: 9.9 conn/s (100.7 ms/conn, <=8 concurrent connections) Connection time [ms]: min 449.7 avg 465.1 max 2856.6 median 451.5 stddev 132.1 Connection time [ms]: connect 74.1 Connection length [replies/conn]: 1.000 Request rate: 9.9 req/s (100.7 ms/req) Request size [B]: 65.0 Reply rate [replies/s]: min 9.2 avg 9.9 max 10.0 stddev 0.3 (10 samples) Reply time [ms]: response 88.1 transfer 302.9 Reply size [B]: header 274.0 content 54744.0 footer 2.0 (total 55020.0) Reply status: 1xx=0 2xx=500 3xx=0 4xx=0 5xx=0 CPU time [s]: user 15.65 system 34.65 (user 31.1% system 68.8% total 99.9%) Net I/O: 534.1 KB/s (4.4*10^6 bps) Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0 Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0 The 3 arguments I specified on the command line are: - server: the name or IP address of your Web site (you can also specify a particular URL via the --uri argument) - rate: specifies the number of HTTP requests/second sent to the Web server -- indicates the number of concurrent clients accessing the server - num-conns: specifies how many total HTTP connections will be made during the test run -- this is a cumulative number, so the higher the number of connections, the longer the test run Autobench is a simple Perl script that facilitates multiple runs of httperf and automatically increases the HTTP request rate. Configuration of autobench can be achieved for example by means of the ~/.autobench.conf file. 
Here is how my file looks like: # Autobench Configuration File # host1, host2 # The hostnames of the servers under test # Eg. host1 = iis.test.com # host2 = apache.test.com host1 = testhost1 host2 = testhost2 # uri1, uri2 # The URI to test (relative to the document root). For a fair comparison # the files should be identical (although the paths to them may differ on the # different hosts) uri1 = / uri2 = / # port1, port2 # The port number on which the servers are listening port1 = 80 port2 = 80 # low_rate, high_rate, rate_step # The 'rate' is the number of number of connections to open per second. # A series of tests will be conducted, starting at low rate, # increasing by rate step, and finishing at high_rate. # The default settings test at rates of 20,30,40,50...180,190,200 low_rate = 10 high_rate = 50 rate_step = 10 # num_conn, num_call # num_conn is the total number of connections to make during a test # num_call is the number of requests per connection # The product of num_call and rate is the the approximate number of # requests per second that will be attempted. num_conn = 200 #num_call = 10 num_call = 1 # timeout sets the maximimum time (in seconds) that httperf will wait # for replies from the web server. If the timeout is exceeded, the # reply concerned is counted as an error. timeout = 60 # output_fmt # sets the output type - may be either "csv", or "tsv"; output_fmt = csv ## Config for distributed autobench (autobench_admin) # clients # comma separated list of the hostnames and portnumbers for the # autobench clients. No whitespace can appear before or after the commas. # clients = bench1.foo.com:4600,bench2.foo.com:4600,bench3.foo.com:4600 clients = localhost:4600 The only variable I usually tweak from one test run to another is num_conn, which I set to the desired number of total HTTP connections to the server for that test run. In the example file above it is set to 200. 
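The ~/.autobench.conf format above is just "key = value" lines with "#" comments, so it is easy to read programmatically. A small sketch (my own, not autobench code) that parses a few of those keys and reproduces the rule stated in the file's comments, that num_call times rate approximates the attempted requests per second:

```python
# Minimal parser for autobench-style "key = value" config lines.
def parse_conf(text):
    conf = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if "=" in line:
            key, value = line.split("=", 1)
            conf[key.strip()] = value.strip()
    return conf

sample = """
# num_conn is the total number of connections to make during a test
num_conn = 200
num_call = 1
low_rate = 10
high_rate = 50
rate_step = 10
"""
conf = parse_conf(sample)
# Per the config file's comments: num_call * rate approximates req/sec.
print(int(conf["num_call"]) * int(conf["high_rate"]))  # 50
```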
I changed the default num_call value from 10 to 1 (num_call specifies the number of HTTP requests per connection; I like to set it to 1 to keep things simple). I started my test runs with low_rate set to 10, high_rate set to 50 and rate_step set to 10. What this means is that autobench will run httperf 5 times, starting with 10 requests/sec and going up to 50 requests/sec in increments of 10.

When running the following command line...

# autobench --single_host --host1= --file=example.com.csv

...I got this output and this CSV file.

Here is a graph generated via Excel from the CSV file obtained when running autobench against for a different test run, with 500 total HTTP connections (the CSV file is here):

A few things to note about this typical autobench run:
- I chose example.com as an example of how an "ideal" Web site should behave
- the demanded request rate (in requests/second) starts at 10 and goes up to 50 in increments of 5 (x-axis)
- for each given request rate, the client machine makes 500 connections to the Web site
- the achieved request rate and the connection rate correspond to the demanded request rate
- the average and maximum reply rates are roughly equal to the demanded request rate
- the response time is almost constant, around 100 msec
- there are no HTTP errors

What this all means is that the example.com Web site is able to easily handle up to 50 req/sec. The fact that the achieved request rate and the connection rate increase linearly from 10 to 50 also means that the client machine running the test is not the bottleneck. If the demanded request rate were increased to hundreds of req/sec, then the client will not be able to keep up with the demanded requests and it will become the bottleneck itself. In these types of situations, one would need to use several clients in parallel in order to bombard the server with as many HTTP requests as it can handle. However, the client machine I am using is sufficient for request rates lower than 50 req/sec.
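Autobench's rate stepping is easy to emulate in a small wrapper script of your own. A rough sketch in Python: the flag names follow the httperf invocation shown earlier, and the whole thing is an illustration, not a drop-in replacement for autobench:

```python
import subprocess  # only needed if you uncomment the run below

def rate_schedule(low_rate, high_rate, rate_step):
    """The request rates autobench would try: low_rate, low_rate+step, ..., high_rate."""
    return list(range(low_rate, high_rate + 1, rate_step))

def build_commands(server, num_conn, low_rate=10, high_rate=50, rate_step=10):
    """One httperf command line per demanded request rate (not executed here)."""
    return [
        ["httperf", "--server", server, "--rate", str(rate),
         "--num-conns", str(num_conn), "--num-calls", "1"]
        for rate in rate_schedule(low_rate, high_rate, rate_step)
    ]

commands = build_commands("testhost1", num_conn=200)
for cmd in commands:
    print(" ".join(cmd))
    # To actually run each test and capture its report:
    # result = subprocess.run(cmd, capture_output=True, text=True)
```

With low_rate=10, high_rate=50 and rate_step=10 this produces the same five httperf runs described above.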
Here is an autobench report for site1 (the CSV file is here):

Some things to note about this autobench run:
- I specified only 200 connections per run, so that the server would not be over-taxed
- the achieved request rate and the connection rate increase linearly with the demanded request rate, but then level off around 40
- there is a drop at 45 req/sec which is probably due to the server being temporarily overloaded
- the average and maximum reply rates also increase linearly, then level off around 39 replies/sec
- the response time is not plotted, but it also increases linearly from 93 ms to around 660 ms

To verify that 39 is indeed the maximum reply rate that can be achieved by the Web server, I ran another autobench test starting at 10 req/sec and going up to 100 req/sec in increments of 10 (the CSV file is here):

Observations:
- the reply rate does level off around 39 replies/sec and actually drops to around 34 replies/sec when the request rate is 100
- the response time (not plotted) increases linearly from 97 ms to around 1.7 sec

Here is an autobench report for site2 (the CSV file is here):

Some things to note about this autobench run:
- the achieved request rate and the connection rate do not increase with the demanded request rate; instead, they are both almost constant, hovering around 6 req/sec
- the average reply rate also stays relatively constant at around 6 replies/sec, while the maximum reply rate varies between 5 and 17
- there is a dramatic increase in response time (not plotted) from 6 seconds to more than 18 seconds

Some things to note about this autobench run:
- the achieved request rate and the connection rate increase linearly with the demanded request rate from 1 to 6, then level off around 6
- the average reply rate is almost identical to the connection rate and also levels off around 6
- the maximum reply rate levels off around 8
- the response time (not plotted) increases from 226 ms to 4.8 seconds

Finally, here are the results of a test
run that uses the openload tool in order to measure transactions per second (equivalent to httperf's reply rate) and response time (the CSV file is here):

Some notes:
- the transaction rate levels off, as expected, around 6 transactions/sec
- the average response time levels off around 7 seconds, but the maximum response time varies considerably from 3 to around 20 seconds, reaching up to 30 seconds

Conclusion

The tools I described are easy to install and run. The httperf request/reply throughput measurements in particular prove to be very helpful in pinpointing HTTP bottlenecks. When they are corroborated with measurements from openload, an overall picture emerges that is very useful in assessing HTTP performance numbers such as concurrent users and response time.

Update

I got 2 very un-civil comments from the same Anonymous Coward-type poster. This poster called my blog entry "amateurish" and "recklessly insane" among other things. One slightly more constructive point made by AC is a question: why did I use these "outdated" tools and not other tools such as The Grinder, OpenSTA and JMeter? The answer is simple: I wanted to use command-line-driven, lightweight tools that can be deployed on any server, with no need for GUIs and distributed installations. If I were to test a large-scale Web application, I would certainly look into the heavy-duty tools mentioned by the AC. But the purpose of my post was to show how to conduct a very simple experiment that can still furnish important results and offer a good overall picture about a Web site's behavior under moderate load.

29 comments:

I recommend you look at Apache Flood: Unlike some of the other modern tools, it is completely driven by the command line.

Thanks, I'll check out Apache Flood as soon as I get a chance, it seems promising.

Let me add that none of the "modern" tools is particularly easy to use to test varying loads as you have done here.
Neither JMeter nor Grinder make it very easy to look at how response time varies with request rate (without manually varying the request rate or going through some very un-obvious gyrations with the tools). As you indicate, the suite of tools you are using falls a bit short for testing web applications (parsing server response for forms etc) but it is great for getting quick raw performance numbers. Thanks for the helpful post!

FYI: openload has been renamed to openwebload. Apparently some company (opendemand.com) has trademarked the name openload :( It is now available from

Nice article,
-Pelle Johnsen, developer of openload

Thank you for the examples, very helpful!
-- Lisa Crispin

Just found your blog, and enjoyed reading this article. Have you any idea what causes the consistent quirk at 45 requests per second? It's present on all of your graphs that get to the 45 mark - some are more obvious than others. Any idea what could be causing the throughput reduction (if that's what it is)? It seems odd that both the example.com webserver and yours should consistently hiccup at the same point. Thanks for posting the article, Kind regards. Andrew

Andrew -- not sure what's going on when the number of concurrent users reaches the magical 45; I suspect the Linux client where I was running these tests gets in a funky state at that point. It's definitely on the client side.
Grig

The new URL for httperf is here:

Hi, I am a freshman for httperf/autobench. Thanks for your nice article. But I have a doubtful point. In your example of "autobench report for site1 (200 connections per run)", you explained the response time increases linearly from 93 ms to around 660 ms; also in your example "autobench report #2 for site1 (200 connections per run)", the response time increases linearly from 97 ms to around 1.7 sec, etc. I wanna know how you got these figures (how to calculate them), thanks!!
regards, ye

Well, I see now, from the csv files, right ^0^ But it's not easy to be found from the graphic chart, right?
The link to the detailed analysis, at no longer works. FYI.

Have you tried funkload?

I looked briefly at funkload, but it had a lot of pre-requisites for its installation, and its configuration seemed kind of complicated, so I chose to go with simpler tools. I haven't abandoned it altogether though, so I may go back to it at some point. Do you have any pointers to tutorials/howtos about it?
Grig

I've done some simple benchmarks with httperf some time ago for: pylons, lighttpd vs cherokee and django :)

Riklaunim -- very nice! You should get your blog aggregated into Planet Python, so other Pythonistas can benefit from your findings.

I will concede that funkload is rather tedious to get into but on the other hand I don't think it's any harder than any of the tools you discuss here. Don't get me wrong, I don't praise funkload, but it has worked quite well for me. I don't have an example at hand but I have discussed it in the CherryPy book as my example of load testing (not pushing to get the book BTW, just telling ;)) In any case I really enjoy your articles and this one in particular. Well done. I actually didn't know some of these tools.

"The fact that the achieved request rate and the connection rate increase linearly from 10 to 50 also means that the client machine running the test is not the bottleneck." I only see the demanded request rate increase linearly on the ideal webserver test. Does it mean that the rest of the tests must be executed on several clients? Are they valid?
PD: I guess I'm wrong.. I hope I'm wrong

anyone know how to set the HTTP headers with httperf? I need to set "Content-type: text/xml", but can't seem to find any documentation on it. thanks :)

austin -- I don't think you can set HTTP headers with httperf. You can use other tools though, among them twill; you can get twill from.
Quick example for adding XMLHttpRequest headers:

import twill
b = twill.get_browser()
b._browser.addheaders += [('X-Requested-With', 'XMLHttpRequest')]

I am very happy to find your blog. I am trying my first performance project and have no idea how to do it. I am trying to use this tool -- Silk Performer, if my company purchases it, because I am not good at using the open source tools as you are. I do have a main question on how to select scenarios for load tests. I have been doing functional QA for almost 10 years. I know that test scenarios in performance should be very different than in functional tests. But how should I select scenarios? My company's product is a web application. So would this kind of scenario be ok: for example, on this blog website, to do load tests, a scenario could be posting a blog entry. Another scenario would be post a blog, post a comment, edit a comment -- which one would be a valid scenario for performance tests? Really appreciate your answers.

Thanks for consolidating the open source tools. It's a steep learning curve but I really appreciate the compilation here. :)

information u provided is more than enough for starters like me

Hi, I have just spent some time playing around with httperf and found it quite simple and easy to use. Stumbled upon your interesting and informative blog when searching for more information on httperf. I am looking for some option with httperf that will allow one to run a test for a specified length of time, i.e. some option that will allow one to run a test (a set of http post/request) with (say) 10 connections for 30 minutes. It would be great if you could throw some light.
-- Layla

Layla -- I'm not sure how to run httperf for a specified period of time. But one solution I see is to write your own script that sits in a loop and calls httperf repeatedly with as many connections as you want. At the end of the loop, you time it, and if the total running time is less than what you need, you go through the loop again. Would that work?
Grig

HTTP headers can be added by using something similar to the following with httperf.

Command:
httperf --server server.example.com --add-header="content-type: text/xml\n" --wsesslog 1,2,httperf-test

In file httperf-test:
/testurl method=POST contents='data here'

Hope this helps.

I've used Grinder many times, JMeter a few times; whilst both are useful, there are some significant reasons to choose httperf. Both Grinder and JMeter suffer from the same flaw. A slow server will act as a gate restricting the test client from sending more requests until those pending have been processed. This means that these tools won't effectively simulate an overload situation. Httperf is one of the few load generation tools that don't have this restriction, and this is why the results gathered from httperf are more realistic than those gathered with JMeter or Grinder.

As a note: httperf only collects reply rate samples once every 5 seconds. If your server is faster than that, you'll get 0s (zeros, i'm adding that for SEO, I wasted an hour figuring this out and hope google reindexes your site) if your server is too fast. Boost num_conns and/or num_calls to get results. (p.s. thanks)

hi, i'm a newbie with httperf. I'm using httperf to measure sctp. i wanna ask about this command i'm using:
[root@localhost httperf.tcp]# /usr/local/httperf-tcp/bin/httperf --server 192.168.4.2 --uri /test1.pdf --port 80 --num-conns 10
and i put the file in /usr/local/httperf-tcp/bin/www
am i right, please help me...

hi, how to get the tool itself? can you post a link to download it
Sherif Amer.
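The timed-loop workaround suggested in one of the replies above (re-running httperf until a target wall-clock duration has elapsed) can be sketched like this; the `runner` parameter is a hypothetical hook added here so the loop can be exercised without httperf installed:

```python
import subprocess
import time

def run_for_duration(cmd, duration_s, runner=subprocess.run):
    """Re-run `cmd` until at least `duration_s` seconds have elapsed.

    `runner` defaults to subprocess.run; it is injectable so the loop
    logic can be tested with a stub instead of a real httperf binary."""
    start = time.monotonic()
    runs = 0
    while time.monotonic() - start < duration_s:
        runner(cmd)
        runs += 1
    return runs

# Example (not executed here): keep a 10-connection test going for 30 minutes.
# run_for_duration(["httperf", "--server", "testhost1", "--num-conns", "10"], 30 * 60)
```

Note that the total run may overshoot the target slightly, since an in-flight httperf invocation is always allowed to finish.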
http://agiletesting.blogspot.com/2005/04/http-performance-testing-with-httperf.html?showComment=1172522280000
Tell us what's happening:

This works fine, I got the tests to work. However, replacing line 6 with

<p>The current date is: {Date()} </p>

and line 21 with

<CurrentDate date='{this.props.date}'/>

also works fine. Both pass the tests and render the desired result. So I'm just a bit confused about why this works. I think it's because I'm rendering Date() inside the CurrentDate component, which then gets rendered by the Calendar? I think the way I did it is the correct one: set the prop inside the parent element (Calendar) as a prop to the CurrentDate element in JSX and then do the {props.date} in the child element, right? Just help me understand the logic please, because it works in both cases and I'm confused.

Your code so far

const CurrentDate = (props) => {
  return (
    <div>
      { /* change code below this line */ }
      <p>The current date is: {props.date} </p>
      { /* change code above this line */ }
    </div>
  );
};

class Calendar extends React.Component {
  constructor(props) {
    super(props);
  }
  render() {
    return (
      <div>
        <h3>What date is it?</h3>
        { /* change code below this line */ }
        <CurrentDate date={Date()}/>
        { /* change code above this line */ }
      </div>
    );
  }
};

Your browser information:

User Agent is: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.92 Safari/537.36.

Link to the challenge:
https://www.freecodecamp.org/forum/t/react-pass-props-to-a-stateless-functional-component/227861
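The core of the question above is *when* an expression gets evaluated: JSX braces evaluate the expression in the parent and pass the result down, while quoting it passes a literal string. React aside, that distinction can be sketched in plain Python with hypothetical stand-ins for the two components:

```python
import datetime

def current_date(props):
    """Plays the role of the CurrentDate functional component: it only
    sees whatever value the parent already put in the props."""
    return "The current date is: " + props["date"]

def calendar_render():
    """Plays the role of Calendar.render: the parent evaluates the
    expression first and passes the *result* down as a prop."""
    return current_date({"date": str(datetime.date(2018, 9, 24))})

# Braces in JSX (date={...}) work like this call: the expression is evaluated.
print(calendar_render())  # The current date is: 2018-09-24

# Quotes in JSX (date='{...}') pass the text itself, not the evaluated value:
print(current_date({"date": "{this.props.date}"}))
```

So the props-based version is the intended pattern: the parent owns the data and the child just renders whatever it receives.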
The objective of this post is to explain how to add mDNS address resolving to an ESP8266 HTTP server.

Introduction

The objective of this post is to explain how to add mDNS address resolving to an ESP8266 HTTP server, using the ESP8266 libraries for the Arduino IDE. mDNS is a protocol that allows the resolution of locally defined names to IPs without the need for dedicated infrastructure (such as a DNS server) [1]. The protocol works over multicast UDP [2].

The big advantage of mDNS is that we don't need to know the IP address assigned to the ESP8266 to access the HTTP webserver running on it. On top of that, we don't need to deploy a dedicated server just to do the resolution of names into IPs. So, we can just define that the ESP8266 will be listening on something like "myesp.local" and we can access the server in a web browser by typing that name instead of having to know the IP address. Besides this enhancement, the HTTP web server will work the same as it did before.

In order for this example to work, the machine that is accessing the ESP8266 web server also needs to support mDNS. Otherwise, it won't be able to send the query needed to receive the resolved IP. So, if you are working on a Windows machine, you need to install Bonjour, and if on a Linux machine, you need to install Avahi [3]. I've tested the code with success on a Windows 8.1 machine with Bonjour installed.

The code

Most of the code will be similar to the one explained in this previous post about setting a simple HTTP server on the ESP8266. So, we start by including the libraries necessary to connect to a WiFi network and to set up the HTTP server.

#include <ESP8266WiFi.h>
#include <ESP8266WebServer.h>

Additionally, we will include the library needed for all the mDNS functionality. You can check the implementation here.

#include <ESP8266mDNS.h>

As usual, we will initialize our web server object listening on port 80 (default HTTP port).
ESP8266WebServer server(80);

Since most of the code is similar to what we have been doing in the previous tutorials, we will focus on the mDNS part. So, to start the mDNS resolver, we just need to call the begin method on an extern variable called MDNS. This MDNS extern variable is an object of class MDNSResponder, which makes available all the functionality needed for the resolution of addresses. Nevertheless, the good part is that we don't need to know the low level details, since they are handled for us in an easy to use interface. Below is the call to the mentioned begin method. We pass as argument the host name that will be used in the URL. Just take in consideration that the host name can't have a length greater than 63 characters [4].

MDNS.begin("myesp");

The complete setup function is shown below. We first start by connecting to a WiFi network and only then start the mDNS functionality. Since the begin method returns false when some problem occurs, we check that value to infer if everything started fine. We then associate a handling function called handleRoot (we will define it later) to the "/" path, so it is executed when a HTTP request is done to that URL. Finally, we call the begin method on the server object to start the HTTP server.

void setup(){
  Serial.begin(115200); //Start serial connection

  WiFi.begin("YourNetworkName", "YourNetworkPassword"); //Connect to WiFi network
  while (WiFi.status() != WL_CONNECTED) { //Wait for the connection to the WiFi network
    delay(500);
    Serial.print(".");
  }

  if (MDNS.begin("myesp")) { //Start mDNS
    Serial.println("MDNS started");
  }

  server.on("/", handleRoot); //Associate handler function to path
  server.begin();             //Start server
  Serial.println("HTTP server started");
}

To handle the actual incoming HTTP requests, we need to call the handleClient method on the server object, in the main loop function.

void loop(){
  server.handleClient();
}

Finally, we declare the handling function for the "/" path.
In this case, we will just return a hello message to the client.

void handleRoot() {
  server.send(200, "text/plain", "Hello resolved by mDNS!");
}

Testing the code

To test if everything is working correctly, just open a web browser and type:

You should get the output message shown in figure 1, which we defined in the handling function.

Figure 1 – ESP8266 web server response.

Note that the .local is implicitly added for us and it shouldn't be specified in the begin method we called earlier.

Related posts
- ESP8266: Setting a simple HTTP webserver
- ESP8266: uploading code from Arduino IDE
- ESP8266: Connecting to a WiFi Network

References
[1]
[2]
[3]
[4]

Technical details
ESP8266 libraries: v2.3.0

Nice article. :)

Thanks :) I'm planning on exploring mDNS a little bit more soon.

That'd be great!

This method of locating an esp device using "hostname.local" works with Windows or Linux computers, but if we want to establish a network of ESP8266 devices, they are not able to recognize other devices by these mDNS names. Any solution for this?

Hi! I haven't tried it yet with multiple ESPs, but I was able to recognize a service exposed by another device, the LinkIt Smart, from an ESP8266, using mDNS. If I find a solution, I will make a post explaining it.

Pingback: LinkIt Smart Duo: Configuring mDNS services | techtutorialsx
Pingback: ESP8266: Query LinkIt Smart mDNS services | techtutorialsx
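For poking at the resolver from a desktop machine without relying on the OS resolver (Bonjour/Avahi), one can also hand-craft an mDNS query over multicast UDP. A minimal sketch in Python using only the standard library; the packet layout follows the DNS wire format, and the actual send is left commented out since it needs a live network with the ESP8266 attached:

```python
import socket
import struct

MDNS_ADDR, MDNS_PORT = "224.0.0.251", 5353  # well-known mDNS multicast group

def encode_name(name):
    """Encode a dotted name ('myesp.local') as DNS labels."""
    out = b""
    for label in name.split("."):
        out += struct.pack("B", len(label)) + label.encode("ascii")
    return out + b"\x00"  # root label terminates the name

def build_query(name, qtype=1):  # qtype 1 = A record
    """A one-question DNS query; mDNS uses id 0 and flags 0."""
    header = struct.pack(">HHHHHH", 0, 0, 1, 0, 0, 0)  # id, flags, QD=1, AN/NS/AR=0
    question = encode_name(name) + struct.pack(">HH", qtype, 1)  # qclass 1 = IN
    return header + question

packet = build_query("myesp.local")

# To actually send it and wait for the ESP8266's answer:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.settimeout(2)
# sock.sendto(packet, (MDNS_ADDR, MDNS_PORT))
# reply, addr = sock.recvfrom(1024)  # the A record lives in `reply`
```

Parsing the answer back out of `reply` is left as an exercise; in practice a library such as python-zeroconf does all of this for you.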
https://techtutorialsx.com/2016/11/20/esp8266-webserver-resolving-an-address-with-mdns/
Today we released a prototype of a C# feature called "nullable reference types", which is intended to help you find and fix most of your null-related bugs before they blow up at runtime. We would love for you to install the prototype and try it out on your code! (Or maybe a copy of it! 😄) Your feedback is going to help us get the feature exactly right before we officially release it. Read on for an in-depth discussion of the design and rationale, and scroll to the end for instructions on how to get started!

The billion-dollar mistake

Tony Hoare, one of the absolute giants of computer science and recipient of the Turing Award, invented the null reference! It's crazy these days to think that something as foundational and ubiquitous was invented, but there it is. Many years later in a talk, Sir Tony actually apologized, calling it his "billion-dollar mistake". There's general agreement that Tony is actually low-balling the cost here. How many null reference exceptions have you gotten over the years? How many of them were in production code that was already tested and shipped? And how much extra effort did it take to verify your code and chase down potential problems to avoid even more of them?!

What can be done?

There are some programming languages, such as F#, that don't have null references or at least push them to the periphery of the programming experience. One popular approach instead uses option types to express that a value is either None or Some(T) for a given reference type T. Any access to the T value itself is then protected behind a pattern matching operation to see if it is there. The developer is forced, in essence, to "do a null check" before they can get at the value and start dereferencing it.

But that's not how it works in C#. And here's the problem: We're not going to add another kind of nulls to C#. And we're not going to add another way of checking for those nulls before you access a value. Imagine what a dog's breakfast that would be!
If we are to do something about the problem in C#, it has to be in the context of existing nulls and existing null checks. It has to be in a way that can help you find bugs in existing code without forcing you to rewrite everything.

Step one: expressing intent

The first major problem is that C# does not let you express your intent: is this variable, parameter, field, property, result etc. supposed to be null or not? In other words, is null part of the domain, or is it to be avoided. We want to add such expressiveness. Either:

- A reference is not supposed to be null. In that case it is alright to dereference it, but you should not assign null to it.
- A reference is welcome to be null. In that case it is alright to assign null to it, but you should not dereference it without first checking that it isn't currently null.

Reference types today occupy an unfortunate middle ground where both null assignment and unchecked dereferencing are encouraged.

Naively, this suggests that we add two new kinds of reference types: "safely nonnullable" reference types (maybe written string!) and "safely nullable" reference types (maybe written string?) in addition to the current, unhappy reference types. We're not going to do that. If that's how we went about it, you'd only get safe nullable behavior going forward, as you start adding these annotations. Any existing code would benefit not at all. I guess you could push your source code into the future by adding a Roslyn analyzer that would complain at you for every "legacy" reference type string in your code that you haven't yet added ? or ! to. But that would lead to a sea of warnings until you're done. And once you are, your code would look like it's swearing at you, with punctuation? on! every? declaration!

In a certain weird way we want something that's more intrusive in the beginning (complains about current code) and less intrusive in the long run (requires fewer changes to existing code).
This can be achieved if instead we add only one new "safe" kind of reference type, and then reinterpret existing reference types as being the other "safe" kind. More specifically, we think that the default meaning of unannotated reference types such as string should be non-nullable reference types, for a couple of reasons. Here's what it looks like:

class Person
{
    public string FirstName;   // Not null
    public string? MiddleName; // May be null
    public string LastName;    // Not null
}

This class is now able to express the intent that everyone has a first and a last name, but only some people have a middle name. Thus we get to the reason we call this language feature "nullable reference types": Those are the ones that get added to the language. The nonnullable ones are already there, at least syntactically.

Step two: enforcing behavior

A consequence of this design choice is that any enforcement will add new warnings or errors to existing code! That seems like a breaking change, and a really bad idea, until you realize that part of the purpose of this feature is to find bugs in existing code. If it can't find new problems with old code, then it isn't worth its salt! So we want it to complain about your existing code. But not obnoxiously. Here's how we are going to try to strike that balance:

- All enforcement of null behavior will be in the form of warnings, not errors. As always, you can choose to run with warnings as errors, but that is up to you.
- There's a compiler switch to turn these new warnings on or off. You'll only get them when you turn it on, so you can still compile your old code with no change.
- The warnings will recognize existing ways of checking for null, and not force you to change your code where you are already diligently doing so.
- There is no semantic impact of the nullability annotations, other than the warnings. They don't affect overload resolution or runtime behavior, and generate the same IL output code.
They only affect type inference insofar as it passes them through and keeps track of them in order for the right warnings to occur on the other end.
- There is no guaranteed null safety, even if you react to and eliminate all the warnings. There are many holes in the analysis by necessity, and also some by choice.

To that last point: Sometimes a warning is the "correct" thing to do, but would fire all the time on existing code, even when it is actually written in a null safe way. In such cases we will err on the side of convenience, not correctness. We cannot be yielding a "sea of warnings" on existing code: too many people would just turn the warnings back off and never benefit from it.

Once the annotations are in the language, it is possible that folks who want more safety and less convenience can add their own analyzers to juice up the aggressiveness of the warnings. Or maybe we add an "Extreme" mode to the compiler itself for the hardliners.

In light of these design tenets, let's look at the specific places we will start to yield warnings when the feature is turned on.

Avoiding dereferencing of nulls

First let's look at how we would deal with the use of the new nullable reference types. The design goal here is that if you mark some reference types as nullable, but you are already doing a good job of checking them for null before dereferencing, then you shouldn't get any warnings. This means that the compiler needs to recognize you doing a good job. The way it can do that is through a flow analysis of the consuming code, similar to what it currently does for definite assignment. More specifically, for certain "tracked variables" it will keep an eye on their "null state" throughout the source code (either "not null" or "may be null"). If an assignment happens, or if a check is made, that can affect the null state in subsequent code. If the variable is dereferenced at a place in the source code where its null state is "may be null", then a warning is given.
void M(string? ns) // ns is nullable
{
    WriteLine(ns.Length); // WARNING: may be null
    if (ns != null)
    {
        WriteLine(ns.Length); // ok, not null here
    }
    if (ns == null)
    {
        return; // not null after this
    }
    WriteLine(ns.Length); // ok, not null here
    ns = null; // null again!
    WriteLine(ns.Length); // WARNING: may be null
}

In the example you can see how the null state of ns is affected by checks, assignments and control flow.

Which variables should be tracked? Parameters and locals for sure. There can be more of a discussion around fields and properties in "dotted chains" like x.y.z or this.x, or even a field x where the this. is implicit. We think such fields and properties should also be tracked, so that they can be "absolved" when they have been checked for null:

void M(Person p)
{
    if (p.MiddleName != null)
    {
        WriteLine(p.MiddleName.Length); // ok
    }
}
on an expression tells the compiler that, despite what it thinks, it shouldn’t worry about that expression being null. Avoiding nulls So far, the warnings were about protecting nulls in nullable references from being dereferenced. The other side of the coin is to avoid having nulls at all in the nonnullable references. There are a couple of ways null values can come into existence, and most of them are worth warning about, whereas a couple of them would cause another “sea of warnings” that is better to avoid: - Assigning or passing null to a non-nullable reference type. That is pretty egregious, right? As a general rule we should warn on that (though there are surprising counterarguments to some cases, still under debate). - Assigning or passing a nullable reference type to a nonnullable one. That’s almost the same as 1, except you don’t know that the value is null – you only suspect it. But that’s good enough for a warning. - A defaultexpression of a nonnullable reference type. again, that is similar to 1, and should yield a warning. - Creating an array with a nonnullable element type, as in new string[10]. Clearly there are nulls being made here – lots of them! But a warning here would be very harsh. Lots of existing code would need to be changed – a large percentage of the worlds existing array creations! Also, there isn’t a really good work around. This seems like one we should just let go. - Using the default constructor of a struct that has a field of nonnullable reference type. This one is sneaky, since the default constructor (which zeroes out the struct) can even be implicitly used in many places. Probably better not to warn, or else many existing struct types would be rendered useless. - Leaving a nonnullable field of a newly constructed object null after construction. This we can do something about! Let’s check to see that every constructor assigns to every field whose type is nonnullable, or else yield a warning. 
Here are examples of all of the above:

void M(Person p)
{
    p.FirstName = null;          // 1 WARNING: it's null
    p.LastName = p.MiddleName;   // 2 WARNING: may be null
    string s = default(string);  // 3 WARNING: it's null
    string[] a = new string[10]; // 4 ok: too common
}

struct PersonHandle
{
    public Person person;        // 5 ok: too common
}

class Person
{
    public string FirstName;     // 6 WARNING: uninitialized
    public string? MiddleName;
    public string LastName;      // 6 WARNING: uninitialized
}

Once again, there will be cases where you know better than the compiler that either a) that thing being assigned isn’t actually null, or b) it is null but it doesn’t actually matter right here. And again you can use the ! operator to tell the compiler who’s boss:

void M(Person p)
{
    p.FirstName = null!;         // ok, you asked for it!
    p.LastName = p.MiddleName!;  // ok, you handle it!
}

A day in the life of a null hunter

When you turn the feature on for existing code, everything will be nonnullable by default. That’s probably not a bad default, as we’ve mentioned, but there will likely be places where you should add some ?s. Luckily, the warnings are going to help you find those places. In the beginning, almost every warning is going to be of the “avoid nulls” kind. All these warnings represent a place where either:

- you are putting a null where it doesn’t belong, and you should fix it – you just found a bug! – or
- the nonnullable variable involved should actually be changed to be nullable, and you should fix that.

Of course as you start adding ? to declarations that should be allowed to be null, you will start seeing a different kind of warning, where other parts of your existing code are not written to respect that nullable intent, and do not properly check for nulls before dereferencing. That nullable intent was probably always there but was inexpressible in the code before. So this is a pretty nice story, as long as you are just working with your own source code.
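As an aside, a constructor that assigns every non-nullable field is the usual way to resolve warning 6 from the examples above. A sketch of that fix (my own illustration, not from the post):

```csharp
// Hypothetical fix for warning 6: every non-nullable field is assigned in
// the constructor, so no nulls survive construction.
class Person
{
    public string FirstName;    // ok: assigned below
    public string? MiddleName;  // nullable by intent; no assignment required
    public string LastName;     // ok: assigned below

    public Person(string firstName, string lastName, string? middleName = null)
    {
        FirstName = firstName;
        LastName = lastName;
        MiddleName = middleName;
    }
}
```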
The warnings drive quality and confidence through your source base, and when you’re done, your code is in a much better state. But of course you’ll be depending on libraries. Those libraries are unlikely to add nullable annotations at exactly the same time as you. If they do so before you turn the feature on, then great: once you turn it on you will start getting useful warnings from their annotations as well as from your own.

If they add annotations after you, however, then the situation is more annoying. Before they do, you will “wrongly” interpret some of their inputs and outputs as non-null. You’ll get warnings you didn’t “deserve”, and miss warnings you should have had. You may have to use ! in a few places, because you really do know better. After the library owners get around to adding ?s to their signatures, updating to their new version may “break” you in the sense that you now get new and different warnings from before – though at least they’ll be the right warnings this time. It’ll be worth fixing them, and you may also remove some of those !s you temporarily added before.

We spent a large amount of time thinking about mechanisms that could lessen the “blow” of this situation. But at the end of the day we think it’s probably not worth it. We base this in part on the experience from TypeScript, which added a similar feature recently. It shows that in practice, those inconveniences are quite manageable, and in no way inhibitive to adoption. They are certainly not worth the weight of a lot of extra “mechanism” to bridge you over in the interim. The right thing to do if an API you use has not added ?s in the right places is to push its owners to get it done, or even contribute the ?s yourself.

Become a null hunter today!

Please install the prototype and try it out in VS!
Go to github.com/dotnet/csharplang/wiki/Nullable-Reference-Types-Preview for instructions on how to install and give feedback, as well as a list of known issues and frequently asked questions. Like all other C# language features, nullable reference types are being designed in the open here: github.com/dotnet/csharplang. We look forward to walking the last nullable mile with you, and getting to a well-tuned, gentle and useful null-chasing feature with your help!

Thank you, and happy hunting!

Mads Torgersen, Lead Designer of C#

Join the conversation

Does this apply also to properties? So how do we initialize non-nullable reference properties with { get; set; }?

The initialization requirement applies to fields, including the hidden underlying fields of auto-properties. You can initialize an auto-property in a constructor or with an initializer (string Name { get; set; } = "JohnDoe";).

Will creating a constructor that assigns to non-nullable fields / props remove the warning of creating a prop without a default value? I’m talking about example six above: You want to ensure that First and Last Name are always set, but don’t want to set arbitrary defaults.

I’d assume that if you have such a constructor you won’t get warnings. However, if you had a second constructor that didn’t assign values to all non-nullable properties, you’d get a warning _in that constructor_. In general I’d assume the warnings are shown in each affected constructor or, if you don’t have one, on the class declaration itself.
Enumerated same lazy sequence ten times? It’s a warning – doesn’t matter! We can drag you to the river, but we cannot force you to drink. 😉 Seriously, we just have to be gentle here, and let people and teams adopt this in their own time. You cannot please everyone! Retrofitting null checks into a mainstream language is an unprecedented challenge. Maintaining popularity of C# is more important than perfection I guess. Just please make an awesome easy to use webassembly toolchain next! Might want a more fine grained Warnings As Errors; i.e so this can be toggled on as an error, without switching everything on as an error That’d little bit of as slippery slope though – you end up with a middle ground of some warnings that are more than other warnings but less than errors. Worse still this could change from project to project (and even dev environment) so you end up with different “versions” of C# across a team. Yes please! Yes please. Before optional arguments were available in C#, passing null to a reference type was the only way. You may have run some stats on github projects but I’d expect this to generate tons of warnings. It’s basically a change of conventions, almost a breaking change. Not a bad change but a big deal in my humble opinion. This should have something akin to “Option Strict” where you can set it to generate errors instead of warnings. In that environment, it’s easy to add the “warnings as errors” flag to the compiler on the build server. 🙂 I think this is a great example of a solution in search of a problem. I know that’s a bit harsh, and yes nullable problems still happen. They always will happen because we can’t always align and enforce the existence of data with the operations defined in code. Even if you eliminated all nulls, the default or un-set values would be the problem source instead of nulls. It’s just transferring the stated “billion dollar” problem. But we already have glorious constructs such as the null coalesce (??) 
and safe navigation operator (?.). Complexity grows incrementally, and thus C# is approaching an overly complex syntax with all these new techniques. This one, instead of being a concrete implementation of something, is a syntactical expression of intent.

What’s so off-putting to me in reading this is that the value of default(string) will not change. Why wouldn’t it? I mean I understand from the compiler side that underneath it’s the same type. But shouldn’t the default value now be string.Empty? This makes it harder for us to teach the language because in the case of primitives adding a ? changes the underlying type, while in the case of strings and classes it’s merely an annotation for the compiler. Harder to teach means harder to learn, and I don’t think the return is there on the investment.

A very well worded response, and I agree with all of it.

I agree with you. This seems very convoluted. Null coalesce (??) and safe navigation operator (?.) were great additions. I’m not convinced that we need anything else.

Just because we have a problem and a solution doesn’t necessarily mean that the solution is worth it, of course. Part of building and sharing an early prototype is to understand the tradeoffs, shortcomings and obstacles before committing to whether and what we ship.

WRT default(string): we absolutely cannot change the meaning of that. While the warnings we add are a “breaking change” in the mild sense that they can occur on code that used to compile cleanly, changing the runtime semantics of an existing construct would be a much worse break: existing code starts behaving differently! That can be extremely insidious, and we try very hard to avoid ever doing that! Better for you to get a warning on your “default(string)” and decide for yourself to replace it with “string.Empty”.

Agreed that the semantic differences between “int?” and “string?” are unfortunate. We’ve decided to live with it, as the lesser of evils.
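To make that int?/string? difference concrete (my own sketch, not from the reply): int? is the distinct runtime type Nullable<int>, while string? compiles down to plain string plus compiler-visible metadata.

```csharp
// Sketch of the asymmetry between nullable value and reference types.
int? ni = 5;        // Nullable<int>: a genuinely different type, with HasValue/Value
string? ns = "hi";  // still System.String at runtime; the '?' lives only in metadata

Console.WriteLine(ni.HasValue);        // the Nullable<int> wrapper is real
Console.WriteLine(ns.GetType().Name);  // "String": there is no wrapper type
```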
We do discuss adding the flow analysis to nullable VALUE types as well, and letting you dereference those when known not to be null. This and other similar things would help narrow the gap.

The proper response is to hard fork C#. We should get a clean C# that uses whatever nice things we want to borrow from TypeScript and F#. Non-nullability from the ground up would be one basic feature. Call the new language something brilliant like C#++. The nice thing is that if you guys don’t go too crazy with the new language, you can keep it in sync with new features added to C#. Moreover, since they are both .NET languages, they would be interoperable. Lastly, it would be easier to do this and port existing C# to C#++ than to deal with all of these contrivances… We really do need a way of starting fresh with an awesomer C# without breaking old code… Another great example would be to have C#++ use F#-style async/await implementation, since improvements to the async/await concept were thought of after C# already shipped its async/await (e.g. it is now known how to implement async/await without allocation of Task classes). Regards,

We already have a better c#. It’s called f# 😉

Disagree on that, F# and C# are 2 totally different things.

So why don’t you and Mads Torgersen and others who are not mentally able to do null checks go with F# and leave C# the fuck alone?

I was always thinking of doing something like that, but never got the time to, so sad 🙁

C#++?? I’m sorry, no. That’s just too odd. It should be C##.

If default(string) was string.Empty we would indeed be in a worse place, as you even brought up at the beginning of your post – we’d have default values instead of nulls and _two_ kinds of mistakenly uninitialized data to bite us in two different kinds of ways. Also, it doesn’t generalize to all reference types and that’s a non-starter. It’s not simply moving the billion dollar problem around if you address the warnings _properly_.
That would only happen if you opt to coalesce all nulls to default values without doing sufficient investigation of the resulting semantic change you’ve made to each occurrence. It’s no different from that angle than if you’d been passing all your parameters around as `object` all this time and began to tighten up your static type safety. Of course you now have to address the possibilities that you’ve been passing mismatching types without knowing it. Do you address that by using safe casts with a fallback to a default instance of the correct type? Of course not. You fix that by doing the hard thing and requiring the callers to only pass instances of the correct type, which possibly means fixing the callers of the callers, etc.

Looking at my code base today I can point to dozens of places where I’ve fixed null reference exceptions. So no, I don’t have to look far to find the problem this is supposed to solve.

Yeah, we had a null reference exception stop our entire business like what, two weeks ago. QC missed it; a warning would not have. This is probably the most helpful change I’ve ever seen.

Also, Scott Hunter ran into an NRE while presenting live at Connect today. I thought of the people in this thread. :’D

This actually simplifies the language. It makes value types and reference types behave the same way, and gives the ? suffix the same meaning in both: it adds null to the set of allowed values. The ?? and ?. become easier to learn and use because it (for the first time) becomes possible for the IDE to tell the user when they need to be used or not. Also, even though the value of default(string) has not changed, its type has: it’s now string?. The typical use for default is in generics, so default(T) is of type T?, e.g. specifying the default value of a parameter and (for the first time) being absolutely clear that null is an allowed value and must be handled by the implementer.

This seems all right and good, and thoughtfully, well, thought through.
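A sketch of that default(T) point (my own example, not from the comment; note that in the shipped language, writing T? on an unconstrained type parameter only became legal in C# 9):

```csharp
using System.Collections.Generic;

static class Sequence
{
    // Illustrative: default for an unconstrained T is naturally of type T?,
    // which tells callers that null is a possible result for reference types.
    public static T? FirstOrDefault<T>(IEnumerable<T> source)
    {
        foreach (var item in source)
            return item;
        return default; // null when T is a reference type, hence the T? return type
    }
}
```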
The only thing that strikes me is how do you work with tooling such as PostSharp? I have a weaver/aspect on all my projects that does null-checking for me and throws errors accordingly. The code isn’t there, of course, as it is weaved in at compile time. How will this new day and era work with my projects and setup?

From an IL point of view (the output of compilation), the only change that a tool like PostSharp will see is that some signatures will have attributes on them signifying nullability. They can then choose whether it’s important for them to take them into account or not. Yes, like pretty much any other language feature, downstream tools have to deal with it.

As for the weaving in of null-checking operations, that’s probably still useful. If I offer an API with nonnullable parameters, it can still be called with null by someone who decides not to follow the rules (or just has an older compiler). So on public boundaries, you likely still need to defend yourself.

Ah, thank you for taking the time to reply and for all your great replies (as usual!) in this thread. In hindsight I now see I should have been a little more specific in my worry. My primary concern is about tooling in the sense of warnings. If I already have a tool (PostSharp) that handles this problem domain for me, am I still going to be bombarded with warnings in the IDE about these issues? Obviously, it would be ideal for the tooling to be able to interact with one another to provide a synchronous/harmonious experience.

Specific worries are good! 🙂 I see what you mean now. This is a bit of a general problem with code rewriting: What if the rewrite solves a problem in the original code? Can we make the warnings/errors go away? This feature is probably no better or worse than others in that context. That would be a nice problem to try to solve orthogonally sometime!

OK cool. I am assuming that in the worst-case scenario, we can mute these warnings like we can with general analysis/warnings today.
If not, that would be nice to have. 😉

The default will be non-null!? That seems totally bonkers to me. This seems like something you’d want to opt in to, not out of. Well, I’ll try it on a few projects and see what happens, I guess.

The consequence of that would be that everything unannotated would be assumed to maybe be null, and you’d get warnings on all unprotected dereferencing. It seems to me that this would be MORE intrusive. But I absolutely think both models can work. The arguments for what we chose are in the post.

I must agree with the OP here. If this feature had been worked on for C# 1.0, then non-null as the default would have made sense. It is far too late in the game to make non-null the default now. Additionally, to address “the too many warnings on unannotated types” point, I don’t think we should have those warnings implemented to begin with. We don’t have them for nullable types right now (int?, double?, etc) and those work great without them. Why have them now for unannotated types? They wouldn’t even detect problems in half of the cases as mentioned in this post anyway.

If it’s a problem for you then you can turn off the warnings. This will be a huge help, and a removal of a big stumbling block in the language. On nullable value types today, we don’t have *warnings* for dereferencing, we have *errors*.

The team has definitely made the right choices in the balance of crossing the language into the non-nullable realm, which it needs to in order to survive against Swift and Kotlin, which have been designed around the new paradigm. We’ve had non-null preconditions in our C# codebases for the past 8 years – cannot wait for this to finally be done at the language level and for the .NET ecosystem to begin making the large leap ahead. Screeching to what is old and resisting change always leads to failure. I was at RIM/Blackberry – I’ve seen it first-hand.
.NET badly needs to cross this chasm, and from having deep experience with both Kotlin and Swift I can tell you their choices are spot on.

I agree. C# badly needs the concepts from Kotlin and Swift becoming first class citizens in the language.

I’m not concerned with old C# libraries that are already compiled that don’t deal with this (ironically VB.NET doesn’t deal with it, so you still have the same problem, right?). What I am concerned with is compiling my current libraries and future libraries to be a lot more NRE robust. We need a way to represent the fact that there’s no value, without using Null. Perhaps deprecate the “null” keyword in favor of nil? with wrapped null_safe blocks?

Best addition to C# in the last 10 years. Thanks! (Disclaimer: I work for Microsoft. Still personal opinion)

Nah, lambdas are. (Disclaimer: I work for Microsoft, too)

Nah, expression-bodied members. (Disclaimer: I work on the C# language design team 😉 ) Almost ten years (wow!) since we got them.

Yes, I thought it was odd too when C# started in the days after Databases. They had a DBNull construct being //something// that represents no value, instead of null, which is an empty pointer that points nowhere. The C# language should’ve had null and nil and reserved the null keyword for pointer access type of code. Possibly even only within unsafe blocks? nil then becomes a keyword that indicates no value is present, which is totally different. Perhaps C# should look closely into Swift and Kotlin as somebody else pointed out.

For one, I completely agree with you!! Though now I may have to get rid of my geeky t-shirt with the null reference error message on it… Thanks Mads and team for the hard work, keep it up! A few questions, if you don’t mind:

– In the “extreme” scenario, would you be able to get warnings even for arrays or collections somehow?
I guess string?[] is obvious, but could List<string> vs List<string?> be working as it could be imagined, or is that considered like a dependent library as you mentioned?

– How does multithreading fit into all of this? A value might change between checking and dereferencing a value, does this complicate things at all (obviously with TypeScript there were no such issues)?

– Also, for having neatly organized code, you might have to duplicate it sometimes, which I’m not a fan of (I still prefer DRY code), could this have any relation to the previously discussed topic of primary constructors (relating to point 6 in your post for assigning all relevant fields in the constructors)? Thanks!

Sorry, some content got stripped: In my first question I mean a list of nullable and nonnullable strings.

I didn’t say much about generics. The current plan is to allow unconstrained generics to be constructed with nullable type arguments (and to add a few extra constraints such as ‘object’ and ‘class?’). Assigning between “List-of-string” and “List-of-string?” will yield a warning both ways, because both can lead to exposure to null dereference.

Creating a List-of-string does not have the same problem as creating a string[], because the list will have 0 elements, whereas the array will have a number of null elements in it. The IMPLEMENTATION of the List-of-T generic type may have an array inside, which will start out with nulls inside. It may need to use “!” inside to convince the compiler that this is alright. But it protects the user from ever accessing null elements of the array, because (in a List-of-string) null will never be assigned to elements of the array that represent current elements of the list.

There’s a List-of-T constructor that creates a list of N elements, so it is also not safe from uninitialized elements.

That just sets the initial capacity. The length is still zero until you add elements.
That constructor just reserves the internal memory for the case when you know how many elements you’ll need. It doesn’t make them publicly visible. This throws, for example:

var list = new List<string>(10);
var first = list[0]; // out of bounds exception

You’ve answered me already on Twitter but for posterity it might be useful to answer here as well 🙂

1. How will this work in terms of the relationship between IL and the compiler? Will nullability be represented at runtime somehow in IL, or will this be a compile-time “erased” thing?
2. Will this feature be back-ported to the types and methods that are exposed in the BCL?

Cheers

For posterity, then, here goes!

1. The compiler adds attributes to member signatures to represent where things are nullable. That’s the only representation in the compiled output.
2. One way or another, yes, that’s the plan. It may be that we hack the reference assemblies rather than modify the source code itself, or some other trick we haven’t thought of yet.

Is that even possible without breaking everyone? Like if you change the signature of an interface, this will result in errors, not warnings, for whoever implements it. Or if you change the signature of a commonly inherited class like Stream, that’s going to break any override in the child classes. Or will nullable and non-nullable types be treated as having compatible signatures?

My (limited) understanding is that this is simply going to be attribute metadata which the C# compiler will understand and treat differently. It won’t be encoded into the type system directly, and therefore won’t need a complete rewrite of the BCL. I do wonder, though, how one could safely know which return values from the BCL are nullable and which aren’t.

But that means that a pre-nullable-references, already-compiled assembly will not break. But what about your current project, where you are implementing an interface that returns a nullable reference but your implementation doesn’t return a nullable reference?
Will Visual Studio not treat that mismatch in signature as an error? Or will it be simply a warning too?

To be honest, I find this a bad design choice. Language should not care about perceived errors users may have. But even then this approach has a few problems:

1. There are many more cases when you expect nulls than when you don’t. Why not mark those and leave the default case unchanged? I’m not looking forward to changing a bunch of completely sound code “for safety” or messing with warnings because someone on the design team made this choice and is pushing his agenda.
2. Marking non-nullable or nullable intent should be semantic, not syntactic. Syntax has to do with the inner structure of the language, and here it’s not changed. Moreover, it’s confusing with Nullable<T> for value types. Adding NotNull or Null attributes would be a much better (and more readable) choice. There are even issues on GitHub asking for exactly that.
3. Subjectively, question marks everywhere look horrendous.

All in all, the compiler should help me write good code, not force me to adhere to unnecessary (and not even guaranteed!) safety rules and/or someone’s ideas of how the type system should work. This would have been an ok choice if it was done when the language was created, but not at this point.

“Language should not care about perceived errors users may have.”: We always had warnings to help people with probable bugs, on top of errors when things really don’t make sense.

1. Do you think there are more places that are intended to be null than not? Our impression is the opposite. It’s hard to tell on existing code bases precisely BECAUSE people don’t have a way of expressing that intent. But it probably varies from code base to code base. If you overwhelmingly have nullable intent (and want to use this feature, which, remember, is entirely opt-in!), then yes, you have some initial annotation work to do. But once you do, it will help you find all those places where you forget to check for null.

2.
I’m sorry, I don’t understand the syntactic/semantic distinction here: do you mean it should not have its OWN syntax?

Agreed about the differences with nullable value types. On the one hand it’s nice that you can reuse some of the intuition and syntax, but on the other it may lead to the wrong expectations where the two features differ. We are aware of this, but currently think of it as the lesser evil.

Just my two cents here, coming from the perspective of using F# on a daily basis, but having worked for years beforehand primarily with C#. Although the approach to nullability is different in F# to what’s proposed here, I can say that on a standard LOB codebase, we find that the vast majority of the time having non-nullable as the default and having to opt in to nullability is the most common scenario. In fact I only realised this after working in F# for a reasonable amount of time – it’s simply nothing that I considered when working in C#, since as Mads points out, there’s no real way of specifying it. Non-nullable by default is the way to go.

I agree, non-nullable should be the default behavior. Why? Because applications work that way: validation of input (where nullables might be mixed with non-nullable) first, then the safe haven of verified references. Not the other way around.

Cannot agree more.
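That validate-first pattern might look like this in practice (my own sketch, not from the comments): nullable types at the boundary, non-nullable ones in the verified interior.

```csharp
using System;

static class Boundary
{
    // Boundary: input from the outside world may be null.
    static void HandleInput(string? rawName)
    {
        if (rawName == null)
        {
            Console.WriteLine("No name supplied.");
            return;
        }
        Greet(rawName); // flow analysis knows rawName is non-null here
    }

    // Interior: the contract says callers have already dealt with null.
    static void Greet(string name) => Console.WriteLine($"Hello, {name}!");
}
```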
Right now all our reference typing can give us is `string | null` and `ICanPerformX | null`. There is no purity, no way to express that I need an `ICanPerformX` and nothing else, not a `null`, not a `ICanPerformY`. Conflating `ICanPerformX` with `null` is every bit as useless as conflating `ICanPerformX` with `ICanPerformY`. If you need to add on the possibility of receiving a `null`, which is inherently a special case not conforming to the contract of `ICanPerformX`, that should be opt-in. You’re effectively asking for one of two incompatible contracts (`ICanPerformX` and `null`) rather than one which brings with it the need to switch behavior depending on the contract. It should be opt-in because you’re inherently raising the complexity. I have to say, I run into null as more of a special case and I would dread having to opt out of nullability. For the code I’ve seen, opt-out not opt-in would result in punctuation absolutely everywhere. It also fits the mental model we already use with nullable value types, so there’s that. I do not agree with the idea that null is suspicious. Null is no different than an integer defaulting to 0. This is just the “language of computer science.” Yes, trying to access methods/properties/etc of a null object will cause an exception. But that’s where training and education are important. I have met a lot of developers whose only qualification to do their job is “they like computers.” A professional developer does not have trouble with null. Where null is different from an integer defaulting to 0 is that it breaks the contract of the type and 0 does not. When we ask for a parameter of type ICanPerformXin today’s C#, we’re only able to ask for effectively ICanPerformX | null. Thinking that nullfulfills the contract ICanPerformXis just as mistaken as thinking that ICanPerformYfulfills the contract ICanPerformX. In comparison, 0 fulfills the contract `int` in every way. 
I’m super happy that we’re getting tools that allow us to finally express, effectively, ICanPerformX rather than ICanPerformX | null. And I’m just as happy that the tools warn when you’re lying by passing a potential null in a place that requires an ICanPerformX contract.

Anecdotal statistics and statistics I’m taking on authority both disagree with your assertion that professional developers do not have trouble with inadvertently treating null as though it fulfills a contract.

Finally, the progression of software language features over the years is all about making things easier and taking mental load off of the developer. A good example is using a regular for loop instead of foreach. It isn’t that much harder to use the regular for loop, so why bother? Well, you might have an off-by-one error, you might have nested loops and use the wrong increment variable. Sure, senior developers will make those mistakes less often than junior ones, but it still happens, and can take time to track down. Why should I need to bother with initializing, incrementing, and comparing an index when all I need to do is work with each value in a list? It is noise to the problem I’m solving and an opportunity for mistakes that just doesn’t need to be there.

This is no different. Sure, a good developer knows a reference can be null and that they should check in appropriate spots. But here’s a question. Do you have a specific null check in every single function in your codebase that takes in reference parameters? Or do you often have cases where the caller is your own code and you know the value was checked for null already? In those cases, you don’t need a specific null check in the called function. If you add one anyway, it is adding clutter to the code. But not adding it means there is more mental load. What if you’re modifying a function someone else wrote? Should they have added a null check? Now you need to go up through the callers and make sure one won’t be passed in.
Or maybe you need to call an existing function in your own code or even a library. Do you know if it can handle null? Maybe you’re lucky and it has documentation saying it can throw a null arg exception. But most won’t want to go that far for regular functions that aren’t being called by the outside world. Have you ever navigated to the definition of a function you’re about to call to see if it handles null?

All of that intent is helped by having something like this in place. By marking the parameter as non-nullable, you know right away that it isn’t doing its own check and expects that to have been done by the caller. If that isn’t done, there is a warning. Now you can easily do a null check at the outer boundaries of your code and have everything inside not have to worry about it. If your particular function is called from a new area where there might be a null value, you get a warning. You either need to modify the caller to check for null or change the signature of the function and put a check there. Otherwise you might assume the function checked for null and have a bug, or else you have to spend time looking at the function (and perhaps more time if it sends that argument down into yet another function).

Or in Redux you construct actions that are dispatched later. Instead of having an action take a bunch of nullable properties or an “any” property just called value, you can specify different types. If you got an AddFoo action, you know the foo property on it isn’t null. There would have been a warning while constructing it. TypeScript works this way and it is quite helpful. In fact its type system allows other handy things, like specifying a type can be Foo[] | LOADING | NOT_AUTHORIZED. The UI can treat the different states in different ways easily. Instead of trying something like null means unauthorized and empty array means still loading. But then oops, an empty array might be valid and we want to display a message for that.
Anything that makes intentions explicit is helpful IMO. I’d argue that integer shouldn’t default to 0 and should require an assignment. It’d be too hard to change right now in C# but it does make some sense. The only reason int defaults to 0 and string doesn’t default to empty string is an implementation detail. Strings can be large and we don’t want to have to make a copy in memory every time we assign it to another variable or pass it to a function. So it is treated as a reference type in C# but other languages do it differently. Awesome addition! I’ve been pushing for it for a long time () but now we have a brave and smart solution. I think C# 8 is going to be memorable like C# 3 was with LINQ! Unfortunately, once you start decorating your entity properties with ?, null-propagation becomes mandatory… but doesn’t work on Database Queries!! Example: public class Person { public string? Name; } class Program { static IQueryable<Person> Persons { get; set; } static void Main(string[] args) { //An expression tree lambda may not contain a null propagating operator. var a = Persons.Select(b => b.Name?.Length); } } Pleeease fix this. Even if generating the same expression as `Persons.Select(b => (int?)b.Name.Length);` to avoid breaking LINQ providers. This problem deserves more attention. The lack of LINQ support for ?? and ?. is going to be a big deal with nullable emergence types around. *reference (ty autocorrect) I think ?? works in expression trees, because it’s from C# 2.0 The problem is with all the new stuff after C# 3.0, like dynamic, async await, or pattern matching Admittedly, I don’t see too much use for async await or even pattern matching when translating to SQL. Dynamic could be quite useful, but ?. now with non-nullable reference types is going to be a must! If not… how are we supposed to work with Entity Framework? The implementation problem is that adding new expression types will break queries that use them till the LINQ provider gets support.
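A sketch of the limitation being reported here (`Person` is the type from the comment above; the error text is what the compiler produces today for `?.` inside an expression tree):

```csharp
#nullable enable
using System;
using System.Linq.Expressions;

public class Person { public string? Name; }

public static class Demo
{
    public static void Main()
    {
        // Rejected today:
        // Expression<Func<Person, int?>> bad = p => p.Name?.Length;
        //   error CS8072: An expression tree lambda may not contain a null propagating operator.

        // Equivalent form that LINQ providers can already translate:
        Expression<Func<Person, int?>> ok =
            p => p.Name == null ? (int?)null : p.Name.Length;

        var f = ok.Compile();
        Console.WriteLine(f(new Person { Name = "Ada" })); // 3
        Console.WriteLine(f(new Person()));                // empty line (null)
    }
}
```

The commenter’s ask is essentially that the compiler lower `?.` into the conditional form above when it appears in an expression tree, so providers see nothing new.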
This point is well taken. Thanks! Yes great point! A very brave step towards the correct direction. Kudos, this is gonna make other languages jealous. A more powerful pattern matching would be next if I am not mistaken. Oh and speaking of pattern matching how do these nullable types stack up with existing pattern matching syntax? They’ll work well together. Patterns are one of many ways to learn whether a variable is null or not. The prototype doesn’t fully implement this behavior. Also, in C# 8.0 we plan to have recursive patterns. All recursive patterns do a null check. One simple form will do nothing but: “expr is {} x”. This is a degenerate form of the property pattern, where we check no properties (hence the empty “{}”). We do a null check, however, and if not null, put the value of the expr into x and return true. What other languages, pray tell? Most languages have opted out of the null problem a long time ago. Most languages *perhaps*, but not the languages of most *users*. 😉 Long awaited! Wonderful work! Will `public string MyString {get; set;}` emit a null parameter check for the setter? No. We do not change the code we emit. But the person trying to set a null into MyString will get a warning. It’d be great if there was a straightforward way to convert warnings to build failures for specific cases, like this one. All warnings as failures is egregiously annoying. Maybe some kind of #pragma statement that says this warning is a failure, or, converse to the ability to mark a warning as ignored, the ability to mark it as a failure (this might be simpler at the assembly level instead of grammar level) You can list the specific warnings you want to be failures in C#. Or you can list the specific warnings you don’t want to be failures using the WarningsNotAsErrors tag.
<WarnAsErrors>CSwhatever</WarnAsErrors> or the /warnaserrors 1594 or whatever command line. What is the difference between that approach and Resharper’s CanBeNullAttribute? It works for people who don’t have Resharper? 1. The syntax is more concise 2. It’s checked by the compiler itself 3. It works in constructed types, such as string?[] and List<string?> 4. It is tracked by type inference 5. It is shown in hovertips and IntelliSense 6. It is standardized for everyone 7. Warnings can be silenced by “!” Etc… Thanks! IMO the 4th point is most significant. Point 3 can be achieved with the ItemCanBeNull attribute. There is no ItemItemCanBeNull though for something like Task<Task<string[]>> Is your intention to revisit all .NET Framework standard libraries and to switch reference types to nullable reference types when they may return null? See my question above and Mads’ answer. Mads, thank you for the comprehensive post. I’ve been following the basics of this on Github for the past…well…I guess since it’s been first discussed. Two questions, though. 1) ReSharper from JetBrains provides a set of annotation attributes amongst them NotNullAttribute and CanBeNullAttribute which ReSharper uses to provide a meaningful null-analysis. Are you in contact with JetBrains to make sure that the compiler and ReSharper aren’t going to start fighting over the user’s attention to detail? 2) Custom tools…I’ve got about 6000 argument-not-null-checks embedded in my code via utility methods (was surprised there’s not more of them but apparently my API surface is tighter than I had thought). There’s long since been a discussion about using post-compile steps to inject the argument checks and the NotNull ReSharper Annotations by default / unless the developer already specified it the other way around. The problem with the post-compile step though, is compile time performance.
It just takes time and can be more efficient to just add it once manually (or via tooling) to the code than to have a tool run on every compilation. Now that we have Roslyn, it would be so, so, so (sorry the emphasis) great to just use it as a compiler pipeline and hook up my own transformations into the compiler run. That way, I would only have to pay for the actual tree manipulation, not the assembly building etc. Lastly, I applaud the tightrope act you are performing here. Personally, I’d love to just turn the null-checks up to eleven and opt out of non-nullability on a case-by-case basis. Best regards, Michael Great news! Made my day, I’ll definitely try it. Yes Yes Yes! I currently use ReSharper’s [NotNull] and [CanBeNull] attributes on method parameters, properties and sometimes even fields. I cannot overstate how many NullReferenceExceptions that has already prevented. It also makes working with APIs so much more convenient. If I see a method with a string? parameter I know it’s safe to pass it a potentially null argument. If I have a non-nullable parameter in a private helper method, I’m confident I don’t have to null-check it and anyone who would call it with null would get a warning. Will there be attributes to help the compiler understand common patterns? For example, consider a method like this: bool TryGet(out string? result) The return value indicates whether result is null or not. It would be great if the compiler could understand this as well and not create any warnings. ReSharper provides a ContractAnnotation for this: Essentially you put a [ContractAnnotation(“=>true,result:notnull; =>false,result:null”)] on the TryGet method and the compiler can interpret it. There are various null-related patterns where something like this would be useful, for instance a Guard.AgainstNull(myArgument) that throws when myArgument is null, meaning the compiler doesn’t have to create warnings after the call.
With ReSharper, a [ContractAnnotation(“arg:null=>halt”)] expresses this. I think this is a really great step forward, thanks! Nulls are nasty This looks great. PS. “alright” is not a word. Hmm… 😀 Definition of alright: all right First Known Use: 1865 But then I think the only advantageous or realistic philosophy of language is descriptivism, not prescriptivism. =) However, “aggresiveness” is a typo. @Mads Well done, @jnm2. So it is. Fixed! 😀 This is looking very good. A nice feature, with simple syntax, that will bring more safety to C# code, similar to F#. This is a bad move. Coming from the C++ world, I always understood perfectly well what a reference type meant. Just like a pointer can be null, a reference type can be null. If you want to wrap a reference type in a NonNullable<T> and also add some syntactic sugar to the language, great! That would be an explicit contract for a reference type. Assigning a reference type to null means it doesn’t point to any instance of that object. This is not a hard concept. Why are you breaking our code with this? Just like you added Nullable<T> to wrap value types to allow them to be nullable…you could create a NonNullable<T> for reference types. Give it some syntax like string! foo = “some string”; The string! means not nullable….come on…. Why should nullability semantics be coupled to how data is stored in memory or how they are passed as arguments to other functions? It was a mistake to do this originally. The sooner C# moves away from this the better. I disagree that it was a mistake. Regardless, the language is mature and now this would change how code is interpreted. No longer can we look at the code and know how reference types are handled unless we know it was compiled with this option. This is like redefining what a reference type is now. Why redefine it?
It is what it is and if the problem is that we need to ensure via some contract that a reference type can’t be nullable (which doesn’t make sense to me anyway since it is a pointer to an object in memory not the object itself), then we *add* that to the language not redefine what a reference type means. A non-nullable reference doesn’t even make sense. Maybe they should have stuck with C++ semantics but they didn’t. It’s fine the way it is…just add something to the language that gives a clear indication that the reference type isn’t nullable. Easy. I agree that what is known in C# as a reference is in fact a POINTER. If I were an advocate of conspiracy theories I would say that the term “reference” was chosen just to make C# look better than C++ (for marketing purposes). Look how C++ distinguishes what can be null (pointers) from what CANNOT be null: // Equivalent of current C# code looks in C++ like this: vector<T*>* pointerToListOfPointers; // C# should really find a way to express THIS! vector<T> nonNullListOfNonNullObjects; // or sth potentially more doable in C# – a real reference vector<T&> nonNullListOfReferencesToNonNullObjects; Going back to the proposition. Warnings are not a solution. IMHO really great developers have already found some ways to tackle nulls in an effective way (attributes, contracts, etc. + deliberately taking care of using them). They do not need such a suboptimal solution as this and will not switch to it. Other developers that suffer from nulls every day will still not be freed from “The billion-dollar mistake” because the proposition does not solve the problem entirely. Sorry! This forum editor has eaten my code 🙂
In C++ we already have not_null. See here: Example from the linked URL: int length(const char* p); // we must assume that p can be nullptr int length(not_null<const char*> p); // better: we can assume that p cannot be nullptr Why is this added to the language in A.D. 2017? Hoare “invented” null in the 1960s, but didn’t force you to reinvent it in a language made almost forty years later. You had every chance in the world to fix this as part of the language and not as a patch on the language. Man, I wish I had your hindsight! 😉 Isn’t forty years of hindsight enough? Apparently 40 years is enough, because they’re addressing it now! Blame that Anders guy. As a teacher, one of my main goals is to make students understand the fundamental difference between Value and Reference types. The current solution exposed here will be a mess. In my experience, you first understand Value and Reference types with a simple rule: Value types are manipulated by value and as such cannot be null. References are “naturally” nullable. Based on this: – You want a nullable value? Suffix it with a ? – You want a non-nullable reference? Suffix it with a ! And yes, this EXHIBITS the very different nature of these two beasts. And this is a good thing. I would worry about teaching that as the difference between reference and value types. You should illustrate how they behave differently when passing them as arguments – one is a *reference* to a value. The other is a *copy* of the value. That’s the only difference that counts. Languages like F# (for example) have support for both reference and value types. But neither are nullable. The fact that reference types are nullable in C# should best be thought of as a coincidence – not an explicit effect. Olivier, totally agree!
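The copy-versus-reference point in the reply above can be shown in a few lines (illustrative types, not from the post):

```csharp
using System;

struct PointValue { public int X; }  // value type: the callee gets a copy
class  PointRef   { public int X; }  // reference type: the callee shares the instance

static class Demo
{
    static void Mutate(PointValue v, PointRef r)
    {
        v.X = 99; // changes a private copy; the caller never sees this
        r.X = 99; // changes the shared instance; the caller sees it
    }

    static void Main()
    {
        var v = new PointValue { X = 1 };
        var r = new PointRef   { X = 1 };
        Mutate(v, r);
        Console.WriteLine(v.X); // 1
        Console.WriteLine(r.X); // 99
    }
}
```

Nullability is a separate axis: under the proposal, `PointRef` stays non-nullable while `PointRef?` opts into null, without changing this passing behavior at all.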
This is a good step in the right direction, assuming devs pay attention to the warnings. I’d rather see some information, such as the property name, in a NullReferenceException that is thrown at runtime if a member is dereferenced on a null object. Yes, it is better to handle issues at compile time rather than runtime; however, NullReferenceExceptions thrown at runtime without information on precisely what is null are more of a pain point for me. Visual Studio now has better support for showing you what was null when a null reference exception happens in the debugger. That doesn’t help you in production, but then again, if you turn on nullable reference types you won’t *have* so many in production. 🙂 This approach feels uncharacteristic for C#; like it’s a band-aid that covers up the problem imperfectly rather than a comprehensive solution. Like it has the same sort of “gotcha” warts that would have existed if generics had been implemented using Java-style type erasure by the compiler rather than drilling native support for them down into the runtime itself; or if Nullable<T> hadn’t been given support in the runtime and was just done via compiler magic only. Holes in the warning system because certain constructs are “too common” are a pretty big red flag that this isn’t the right approach; and aside from the mushy areas around arrays that C# inherited from Java, C# just doesn’t *do* holes in its type enforcement system. Seeing all the negative comments I just want to add that I think you are doing the right thing; null is absolutely the exception and non-null should be the default. Thought experiment: Code review this method…assume that Foo is a reference type… public void PrintFooName(Foo foo) { Console.WriteLine(foo.Name); } Now depending on how the compiler is set, Foo will either be a real reference or a non-nullable reference but we don’t know that unless we know what the compiler is doing.
Now if the default behavior becomes the only behavior, then we’ll have to ask which version of the compiler was used before we know the behavior of this code. However if we keep the behavior as it is now, we know that this method needs a null check regardless of the compiler being used. Now let’s rewrite it to support non-nullable reference types with an *explicit* syntax. public void PrintFooName(Foo! foo) { Console.WriteLine(foo.Name); } Now we know several things about this code. 1. It is using new syntax which implies a newer compiler. 2. It is explicit that the foo parameter is not nullable (because of the “!” or we could use “&”, e.g. Foo& to be closer to the C++ concept of a reference) so there is no need for a null check. This would not result in broken code, confusion, and changing the behavior of such a fundamental concept as a reference type in C#….I can’t believe this is even being considered. Changing the meaning (not the behaviour) will create some pain at the beginning, but TypeScript has already gone through it, and now it is ok. I prefer to suffer a little bit for some time than having to write Dictionary<string!, List<string!>!>! for the rest of my life. This change does not change the behavior – there aren’t two compilers, there aren’t emitted code differences. There are just warnings and no warnings, so everything you are saying is wrong. Pete, it is my understanding that initially this will be a compiler flag to enable. So right there, if you compile the code above with the compiler flag enabled, the behavior does change in that you’re guaranteed that foo can’t be null so no null reference checking is required. But if you compile with today’s compiler/options, it can be null. But you can’t know that by looking at the code alone. That’s my point. There’s no obvious way to know that this code behaves a certain way because we don’t know how it is compiled. But does the behavior of the compiled code actually change?
My impression is that this is _not_ a change to the actual type system, just what warnings the compiler throws. So the compiler might behave differently depending on the compiler version/options while the resulting code behaves the same way regardless, but that’s always been true. There is no guarantee that the parameter is not null, just a strong likelihood if everyone is playing by the rules. Jim, as the others said, this doesn’t actually change the code compiled at all. If you saw this function in an older project, you’d assume it won’t be called with null since it has no check. If you aren’t sure, you’d travel up the stack to callers to see if it is checked and if there is a bug in the code. If you want more information without doing that legwork yourself, you compile with the flag enabled. Now it says “FooCreator calls PrintFooName but its code paths don’t ensure it is non-null by that time”. So you should probably fix the bug… It isn’t about knowing what version it was compiled with, it is about you compiling it with the warnings on and seeing if you’ve made a mistake. I “think” for external libraries there will be some kind of typing information exported you can use? If not, when using external libraries you don’t control you probably should put a null check at the point of using their return value and now you’ll know the rest of your code is safe. I think it all makes more sense if you know TypeScript. TypeScript just outputs regular JavaScript and has no special runtime checks. If an external library has a type definition file saying a function returns something with interface IFoo but it doesn’t actually return that, your code could blow up. But that isn’t any different from the docs of the library saying it has property Name on the returned object but it doesn’t. Most of the time it all works fine and saves a lot of time in preventing bugs and additional documentation.
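To make the library scenario concrete, here is one way to consume an un-annotated API under the new rules (`LegacyLib` is a made-up stand-in for any assembly without nullability information):

```csharp
#nullable enable
using System;

static class LegacyLib  // stand-in for an old, un-annotated library
{
    public static string GetUserName() => "sam";
}

static class Consumer
{
    static void Greet(string name) => Console.WriteLine($"Hello, {name}");

    public static void Main()
    {
        // Defensive option: treat the result as string? and check once at the call site.
        string? raw = LegacyLib.GetUserName();
        if (raw == null) { Console.WriteLine("<anonymous>"); return; }
        Greet(raw); // everything past the check is known non-null

        // Trusting option: assert non-null with ! and silence the warning,
        // if experience says the library never returns null.
        Greet(LegacyLib.GetUserName()!);
    }
}
```

The `!` operator documents the assumption in the code itself, which is exactly the “express what you know about the library” request from the thread above.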
I’m puzzled about the developers that have commented about not seeing the need for this feature. I’m extremely enthusiastic about this feature in general. My team has been using Code Contracts since I introduced it 5-6 years ago, and a big part of the need that Code Contracts fills is around clarifying and enforcing nullability (via parameter checks, documentation, code clarity, etc). I’ve been quite frustrated with CodeContracts being abandoned in the last 18 months, particularly the lack of support in VS 2017 and no official statement on the project, though it is “in the box” for VS 2015. I spent more time than I’d like figuring out how to make Code Contracts work in VS 2017 tooling, but one ominous obstacle is that ccrewrite crashes on portable PDBs. This means any portable PDBs in dependencies will crash ccrewrite. While this feature doesn’t replace all the benefits of CodeContracts, it replaces the primary benefit (in theory, I haven’t used it yet) in a clearer and more productive way. If Roslyn adds support for code generation () IMO the CC functionality could be effectively replaced by library authors. The one complaint I have about the language design described above is the use of the `!` suffix operator to mean “not nullable”. I find the `?` operator to be natural for inline null checks and `??` for null alternatives, but I don’t find `!` to be a natural way to assert/communicate not nullable. I will read it as “not” at least until it becomes mainstream and I figure out a way to not dislike it. I don’t have a better suggestion, sorry. Correcting my post since there doesn’t seem to be a way to edit it: Nullable reference types DO NOT replace the primary benefit of Code Contracts, but they do provide a better model for the 70% (maybe even 80%) use case. My objection is redefining how reference types work by default. I’m totally onboard with having compiler-supported non-nullable reference types but as an explicit syntax whatever that may be.
In C++ reference variables were like this Foo& so maybe use the & but the idea is to not break code nor change the behavior of such a fundamental thing as reference types. I know where you’re coming from, but this seems like an ok compromise for now for me. I think it is pretty safe to say that the vast majority of code should be non-nullable with certain exceptions needed. That points toward non-null being the default and nullable being extra syntax. The trouble is not just needing a ton of ! everywhere to specify non-null but now you also have three kinds in the system. Unspecified, specified nullable and specified non-nullable. If you get to the point in your code where everything is either ! or ? and you enforce needing them specified, then it’d make sense to just get rid of the ! again and we’re back where we started. But actually changing the underlying IL in some way to break existing code would be a huge deal. Maybe it can happen eventually if this takes off, but would be too invasive right now. So this solution helps the syntax to make nullability clearer and helps detect a lot of bugs without such an invasive change at the IL level. Forget pragmas, I want a VStudio option to disable this micro-managing nonsense permanently. I’m sort of appalled any professional developer has such a hard time dealing with nulls that they’d applaud this weirdness. I have a higher regard for C# than any other language I’ve used in the past 40 years, but I’m starting to dread the recent explosion of line-noise syntax, and nothing is worse than mandatory new syntax that isn’t even functionally useful. It’s mixing design-time code analysis into the language syntax itself. Ridiculous. Somebody asked a good question earlier — does Microsoft intend to fix the entire .NET codebase? How about all the gajillions of Azure, SQL Server, Office, and other SDKs? What about the thousands and thousands of MSDN Library samples and articles, etc etc etc?
Surely if the dreaded null is so threatening and dangerous, it would be terribly irresponsible of you to foist this eyesore on our codebases without addressing your own lack of explicitly stated intent. I also see this being a maintenance problem. When you have a large code base there’s a lot of different people working on it. Some people know the code very well and others barely at all. People make changes without knowing the impact of their change, compounded by the absence of unit testing, no documentation, no mandatory code reviews, everyone being short on time. Not all teams are able to follow recommended practices, depending on what management’s focus is – is this a software company, or does the software exist only to facilitate the business in making money in some other industry? Good engineering requires budgets that permit it. I think what will happen here is someone will start modifying the code in ways where what was not null before, can be null now, or vice versa, but no one will have the resources to go through the code base to make sure all the ? and ! remain correct. I understand that the purpose of this feature is so that the compiler will alert you to what is breaking due to nullity, but it does not happen automatically – a person still has to express their intent, that intent can and will change, and it is not necessarily the case that the person making the change understands what is correct. I have been stumped many times when reading code written by someone else and I can’t figure out what they were thinking. And I can’t ask them because they work somewhere else now. Honestly, I feel this thinking kind of misses the point. It makes it easier to deal with larger codebases instead of harder. Right now there can be no intent specified in the code, so you have to rely on outside things. 
Currently if a given function uses an argument of a reference type and has no null check, your only way to know if this is a bug is to check up the stack of all the callers. But with the changes it’d give a warning saying “foo class that called your function might be passing a null to it, so you should check for null before calling it”. Currently you may know your function is safe to not need an internal null check due to the callers being safe. But what if someone on another team calls it and isn’t safe? Or they may not know if the function can be called with null. So now they need to look at the body of the function to see if it accepts null (and maybe it delegates off to another call, so you have to go deeper). As for libraries, BCL, etc. I think if it doesn’t have some specific null annotation file for it (like what TypeScript does), you could treat those as always nullable and work from there. If you know the external library never returns null, you could use the !, otherwise you would check for null as always. It’d just encourage you to check at the first point that your code expects there not to be null. I believe this new behaviour should be a project property and not an IDE setting. Because ultimately it is your project which either by default handles variables as nullable or non-nullable and not your entire IDE. This way we keep the current codebase intact but both new projects and old projects can utilize that non-null enforcement if explicitly desired. I voted for the feature, but now reading the comments I am not sure the implementation is what we (who voted) wanted. Sorry for this comment as I did not help with a proposal. What about runtime checking? From what I understood it is a compile time feature. If it is compile time only it will not prevent runtime NPEs when exposing APIs through REST on an MVC application for example.
In that aspect I miss code contracts that in my point of view should be integrated in the language syntax (see Spec Sharp) with its ability to toggle runtime enforcement. There is no runtime change (except for attributes on method declarations). Ooh, I missed that there is no runtime enforcement. I think that’s a major problem, and it doesn’t make sense to me, which is why it didn’t occur to me to ask specifically about that. If a non-nullable property/field can be set to null (eg using reflection or serialization) then you’ll get NPEs, and all the code expects that the field can’t/won’t be null. Matt Warren: Are you sure that there is no runtime enforcement? I suppose I could test it out, but that just doesn’t make any sense… Great stuff! I was wondering how you would solve the problem with 3rd-party libraries that are not “nullable reference aware”. And I’m surprised that your response is basically “Fix your external library or push the owner of this library to fix it.” I understand that your experience with TypeScript shows that it’s not as big a deal as it seems to be but in TypeScript/JavaScript it’s a bit easier: 1. The community embraces open-source more. It’s easier to ask the owner to fix it or provide a fix yourself. 2. Even if you didn’t write this library you can always edit the source or, more likely, .d.ts files with type definitions 3. If you use some ancient library that no one is supporting you probably wrote type definitions yourself or didn’t bother and just typed everything as “any” And what option do I have when I use some ancient library that I only have a .dll for? I know which arguments or return types can be null and which can’t. I know it from experience working with this library. Can a consumer of this library (which can be some other .dll library) express this additional information about nullability somehow? Or do I need to use “!” everywhere in my consuming code when I use this “not null-aware library”?
You can also put a #pragma warning disable Your point about what’s different in the TypeScript ecosystem is certainly valid. Just because the approach worked out there, doesn’t *necessarily* mean it can work for us. I love you all and I hope you all win the lottery the day after this lands in the official compiler. So this feature looks like a regular code analyzer (or maybe code contracts static analysis?) with the only difference that this one has syntax sugar for annotation. It’s good to have such a tool, but… It’s far from the same as having burned-into-code preconditions support. I can’t rely on the fact that some other code which is compiled without respecting these annotations won’t try to pass nulls to my code. I anyway have to manually put null checks into all public APIs. How about at least adding a compiler switch to enable preconditions generation for public APIs based on this non-nullability support? I think this is a good thing to get done and I like the approach. Any idea when this will be included in Entity Framework 6? Our programs rely so much on EF that I will not start on this, before the API generates classes and relations correctly according to the new way, when we use the edmx designer. Ofc the int? and stuff is already there, but the missing part is when relations to objects might be null, because they are connected to a nullable field. Hope it makes sense. Cheers. I was initially concerned about how nullable reference types end up implemented. I must say that I like this approach and especially that non-nullable is the default. I only hope that this will be included in VB.NET as well! @Peter, A feature like this will have a huge impact on the .NET platform and all three of our languages will have work to do to take advantage of it.
There is significant overlap between the VB and C# language design teams and we started to review what this feature would look like in VB but decided to first release this preview in C# and get as much feedback on the design before putting in more design work on the VB side. The idea being, we’ll no doubt need to make changes based on the feedback we get on the prototype so we should make sure the design is in a really good place before VB-ifying it. So you can help us get it into VB by trying it in C# and telling us what you love and what you hate so we can end up with the best design for both languages 🙂 -ADG Let’s see how all this evolves. I’m looking forward to it. So far I think this is an unnecessary complication added to a well-established domain. Seems like the new feature now comes to rescue me from evil nulls after 15 years of dealing with them in C#… I’m sad that my favorite, innovative language got this after ‘Swift’ Please include ‘HasValue’ as per nullable value types Or maybe HasReference or whatever… How would you envisage null.HasValue working? It would effectively have to be a new property on Object that always returned true and would have to be accessed via objectUnderTest?.HasValue. It could be implemented as an extension method on object 🙂 public static bool HasValue(this object? o) => o != null; I think it is a valid line of thought to try to minimize the differences between nullable reference types and nullable value types. Whether this particular method is a good idea, I am not sure. I don’t like extension methods that handle null arguments, because it breaks the instance method “illusion”. Yes, that thought struck me too. Consistency is good. Could it be the other way around? For example Person! means a non-nullable type while Person should remain nullable? Basically I appreciate this as an improvement of the expression of intent of the code, in particular for new code.
However, the loopholes, where no warning is created even though there is a chance that a reference may be set to null, might produce hard-to-find errors in new code. A clean way of treating existing code when introducing this concept would be to change all reference declarations R to R?. Yes, changing the code was never something a compiler was allowed or even able to do. However: this way the semantics of the code would, in fact, be untouched. Which I would prefer to changing the semantics of an existing code base (it is a little bit as if laws were retrofitted). One little favor (most likely you thought about that anyway) – you want to allow this:

string[] a = new string[10]; // 4 ok: too common

Please be so nice as to flag any usage like a[i] with appropriate warnings. OK – if that string[] a is the return value of some function, and not all places in the array have been set, that might be a difficult-to-find loophole.

> Please be so nice as to flag any usage like a[i] with appropriate warnings.

Yes, in essence I wonder if such arrays could automatically be treated as nullable reference types?

Yeah, it'd be nice if there was more tooling around arrays. Like if var ages = int[3]; would give a warning that it should be int?[3]; instead (or at least treat the var as int?[3] afterwards). But var ages = [12, 23, 34] or List's ToArray() method could actually be int[3], since it knows it has to be ok. Anything in between, I'm guessing, is too hard for the compiler to detect that all items were filled in later and the array is now safe.

(repost – my original comment didn't seem to appear)

I know how I'm going to test this feature. I'm going to turn off the warning and never turn it on. Once again C# is getting a feature of limited value that was "requested" by the community. It seems like the C# team has completely lost focus in this new "design by committee" world. We're getting features of limited usefulness to everyday code. This feature, in particular, doesn't solve anything.
I'm still going to write the exact same code, except now I have to write more code to fix the compiler warnings. Let's look at a couple of examples.

public void Foo ( string value ) { }

Getting a warning here solves nothing. I'm still checking for null. Why? Because the users of this code are using one of the many .NET languages that don't understand this non-null ref concept. So I have to code for that. Here's another one.

public class SomeType { public string SomeProperty { get; set; } }

void Foo ( SomeType value ) { var str = value?.SomeProperty; }

I have to check for null here as well. Why? Because the object may have been created by some other language or by serialization (which doesn't require a constructor call). Other than adding more warnings that I have to resolve, this feature solves absolutely nothing.

I would like to request a new language feature – it's called immutable. This feature basically dismisses the language design committee and locks the C# repo for a while so people can actually use the language without it continually changing out from under them. Fast iterations aren't always good, and this is one of those cases. The new features are being added so fast that they are half-baked. The language, which was a simple replacement for C++, has become a nightmare to work in. It's quickly becoming as bad as C++ to learn and use. I used to be able to know what was available by knowing the VS version. Now I have to know whether you have update 3 or 4 installed for language features. Try teaching this in a classroom which is at VS 2017 update 1 while the students at home are running VS 2017 update 4. I know because I'm in that situation right now.

"Users of this code are using one of the many .NET languages that don't understand". As the post stated, this improvement is for C#. Blaming C# for the lack of support in other languages just makes no sense. F# is very strong in immutability, leaving VB. Honestly, not a very strong case.
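For what it's worth, the announcement describes flow analysis that tracks explicit null checks, so the manual guard the commenter describes both protects against callers from other languages and silences the later warnings. A sketch under the preview's rules (exact warning behavior may change before release):

```csharp
using System;

public class Widget
{
    // 'string?' says callers may pass null; the runtime guard is still
    // needed for VB/F#/serialization callers that ignore the annotations.
    public void Foo(string? value)
    {
        if (value == null)
            throw new ArgumentNullException(nameof(value));

        // After the check, flow analysis treats 'value' as non-null,
        // so this dereference produces no warning.
        Console.WriteLine(value.Length);
    }
}
```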
Eh, quite a lot of people aren't writing public APIs that might be consumed by people in different .NET languages, etc. If you have a large enterprise application, most of that code is in-house and using the same language, but might be used by different teams. It'd be very handy to know that a method created by team C doesn't handle null, so I should take care of that before calling it. Otherwise I have to go into the method and see how exactly it is using the value. It might even be passing it down to other things that now you have to look at. Vice versa, if a particular method has no null check in it, you have no idea if that is a bug or not without looking at all the callers, and perhaps their callers. It'd be overkill to have every method of every class in your codebase that accepts a reference type check for null before using it. Now you have more confidence about whether your method in the bowels of the system is being called with null or not, and you know a bug hasn't been introduced by team B also using the method but not checking for null as the previous caller did. Sure, you still have to check at the edges of your application and when working with external libraries, but where it helps is within your own inner code, IMO.

Actually, thinking about it, shouldn't it be a tri-state thing (nullable, non-nullable, undefined)? Any newly compiled assembly would have to be either nullable or non-nullable (per the syntax suggested). However, any legacy assembly where the nullable attribute is not set would be treated as undefined. And you do not generate warnings if you are consuming an undefined type. This would allow backward compatibility, and not pollute solutions with warnings that you can't do anything about when they are consuming a legacy assembly.

That would mirror Kotlin's "platform types" for Java interop. Seems to work well there.
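The tri-state idea above, expressed as purely hypothetical compiler behavior — the preview actually implements only the two-state model, and "undefined" here is the commenter's proposed third state for legacy assemblies:

```csharp
// Declarations in a newly compiled, nullable-aware assembly:
string  name;     // non-nullable: warning if null is assigned
string? nickname; // nullable: warning if dereferenced without a check

// References coming from a legacy assembly (no nullable attribute present)
// would be "undefined": no warnings in either direction, mirroring
// Kotlin's platform types for Java interop.
```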
This is explicitly what Mads said they don't want to do: they don't want all declarations to be written as "string", "string!" or "string?".

I'm not too afraid of adding a simple "!" non-nullable operator, because of the preferred explicitness. We already have the "var" keyword for type inference to allow retaining "pretty"-looking code when the type is explicitly known. So, that would be my preferred change for C#. Their very clever system of warnings, et al., to treat it as a two-state problem has too many potential issues. And it ensures that we need to go back and modify old code anyway. If we have to go back and modify old code anyway, why not have the non-nullability be opt-in?

No, I am not suggesting new code should be allowed to have three states. New code should be only nullable or non-nullable, so there is no need for a third notation. Just do not interpret a legacy assembly where the nullable flag hasn't been defined as non-nullable. Just interpret it as a third state that doesn't generate warnings.

Thinking about it, legacy assemblies will be around for a while. Not just because people don't update them, but also because, if you create a library, you don't really want to target the very latest version of the framework; otherwise you make it incompatible with whoever is targeting an earlier version (think of some big projects where changing the targeted version is a big deal).

This is not out of the question. We looked at approaches like that. We decided to try to live without them, because they do bring complexity with them. It is absolutely possible – easy, even – to identify "pre-nullable" assemblies: just have the compiler add an attribute to the new "post-nullable" ones, and check if it's there. We could then have a third state that's "classic", "legacy", "oblivious", "dangerous", or whatever we want to call it, and interpret all "old" assemblies as trafficking in that. What do we want to do with legacy APIs, though?
Avoid all warnings on the grounds that they don't express their intent, so it wouldn't be appropriate? Or warn *everywhere*, precisely because all the references are null-unsafe, and you want to defensively guard against both nulls going in and nulls coming out? I can see arguments for both answers. So far we're happy to see if we can just dodge this whole question! 🙂 But we may have to revisit it.

It would be nice to be able to identify post-nullable assemblies that are known to be clean: the ones that only reference other clean assemblies or perform all the right checks when referencing legacy ones. Then, after adoption increases, a move to errors instead of warnings might be possible.

I already posted this as a reply to a reply to a comment… so I'm posting it here as a top-level comment, with some additional thoughts.

The proper response is a hard fork of C# that's everything we've always wanted: call the new language something brilliant like C#++. I'll use C#++ for now. We should get a clean C#++ that uses whatever nice things we want to borrow from TypeScript, F#, Kotlin, etc. Non-nullability from the ground up would be one basic feature. Another great example would be to have C#++ use an F#-style async/await implementation, since improvements to the async/await concept were thought of after C# had already shipped its async/await implementation (e.g. it is now known how to implement async/await without allocation of Task classes). Or even keep the current implementation and add an alternative flavor that's more efficient. We've already got all those async methods written returning Task objects… But that's still a great place for some kind of automated converter to step in (I'm sure JetBrains would put a converter like that in ReSharper, for instance). Lastly, it would be easier to do this and port existing C# to C#++ than to deal with all of these contrivances forced onto C#.
We really do need a way of starting fresh with an awesomer flavor of C# without breaking old code or doing these kinds of incredible logistical backflips. Implementing amazingly complicated features like this new non-nullable reference type contrivance is pushing off the real solution. Other communities introduce new languages all the time to deal with situations exactly like non-nullability. I believe we are way past time for a C#++ to hit the scene. The nice thing is that if you guys don't go too crazy with the new language, you can keep it in sync with new features added to C#. Moreover, since they are both .NET languages, they would be interoperable. C#++ code could call C# code, etc. And we could follow TypeScript's lead and have an externalizable metadata provider to wrap C# legacy assemblies with the metadata to make the C#++ compiler happy. For example, if old code ensures non-nullability, we could either add [NonNullable] attributes to our old C# as appropriate or have these defined in external XML metadata.

This is the real solution. And it will generate a huge consulting/career opportunity for developers to bring legacy code into the new, safer age of programming. And companies are the winners: getting rid of generalized nullability is way past due. Companies are already rewriting entire codebases and libraries into new languages just to get benefits like that. So why not provide them a C# successor that requires very few code changes to accomplish just that? That's a big win for MS: it solves the core issues, keeps current developers on .NET, and provides a new stable foundation for marketing .NET (with C#++) as a viable solution for companies looking to address those core safety issues without jumping to another platform (e.g. Kotlin). Regards,

For the record, in C#++, my preferred outcome would be to have non-nullability be the default. And for structs and references, nullability can be imparted with the "?" modifier.
If we cannot get the momentum to create C#++, my preferred fix for C# is to have an explicit non-nullability operator "!". Yes, it means we have to rewrite old code to opt in. But the current solution requires a potentially massive code rewrite anyway. The existence of the "var" keyword allows clean-looking code where the explicit type can be inferred. So we don't necessarily have to see "?" and "!" after every variable use — just the declarations. Moreover, we could have an automated, one-time fixup for old code that turns every naked reference declaration into a "?" declaration and then makes the default declaration non-nullable. In that automated fixup scenario, we'd need to do things like turn default(string) into default(string?). Lastly, we would then just extend the current Nullable semantics to apply to nullable reference and struct types. I am *pretty* sure that this would all be syntactic sugar and that no CLR changes would be required (I believe the required CLR capabilities were added when Nullable was added for structs). Moreover, the automated fixup for C# like what I described above could be two-way: if a developer selects C#vnext in the drop-down, it runs the converter forward, and if the developer selects C#legacy it would run the converter backwards. The conversion logic is deterministic in both directions, so that is very doable. That's my 2 cents: let developers opt in to non-nullability as appropriate and make the syntax for nullable types the same for structs and references. (And have an automated way to fix up old code. Everyone likes wizards, right?)

Thinking about this some more: implementing my preferred design on C# might require a CLR change :/ I'm not an expert on CIL and I'm not sure of the exact changes that were added to support nullable structs. But I can perceive that it might require a new CLR to enforce non-nullable reference types at runtime: e.g.
non-nullable can convert implicitly to nullable (an unchecked conversion), but nullable cannot implicitly convert to non-nullable (an explicit checked conversion would be required). If the existing Convert and ConvertChecked CIL instructions could be used as-is with this language design, then I think it could be done without changing the CLR. If not… the CLR would have to be changed too. However, a change like this to Convert and ConvertChecked in the CLR would make designing languages with explicit nullability for the CLR easier. Since F# has explicit nullability and it runs on the current CLR, I'm guessing the current CIL instructions were sufficient to do so. However, maybe they did some inefficient CIL emission hackery to stay within the current CLR version. If that is the case, adding this feature to the CLR would make languages like F# more effective at runtime too. So, it would be worth it. Moreover, this could be handled at the JIT level without modifying the CLR: leave CIL the way it is (e.g. use whatever mechanism for emitting CIL for non-nullable reference types that F# uses), and just have the JIT compiler recognize the pattern for reference checking and output efficient runtime code. I'm *guessing* that F#'s non-nullability feature is already done this way and that this JIT optimization (or something like it) is already implemented (either that, or the CLR already had the requisite support to efficiently achieve F#'s implementation).

The path you are suggesting will lead to a Python 2/3 scenario, where massive amounts of existing code just break by failing to compile. Or you are proposing that all the corporations embark on a massive "Upgrade to C#++" project for no apparent business value. And at some point, people will also ask for improvements to your "C#++" language, but since your repository is locked down, you now need to create C##++. No thanks to any of these outcomes.
F#'s non-nullable implementation is all at the type-system level and done at compile time. There's no runtime checking to stop you e.g. using reflection to "beat" the system, AFAIK. It works remarkably well – I can't remember the last time we got a null reference exception from F# code itself (it's still possible to get one from BCL and C# libraries).

I think you overestimate the willingness of companies to rewrite old code bases to get new language features. In 37 years I have not come across one company willing to rewrite a code base to take advantage of new language features. There always had to be an extremely compelling business reason to do a rewrite. My most recent experience with this was with a company whose entire product was written in a combination of VB6, C++/COM and C# 1.1. They refused to move forward, and for no other reason than "it's worked." The year was 2017.

Who says they have to upgrade to C# 8? I've contributed to OSS projects that still have the version locked at C# 6 and even C# 5, and it is 2017. But that's their choice.

Swift? Have a look at Swift, which manages it pretty well, I think. And at the same time, if you could add the let keyword (i.e. something like var const), make the parameter names part of method signatures, and allow partial definitions in different assemblies (i.e. extensions), that would be great 🙂.

I'm sad about C#. The default for nullability is plainly crazy.

Have you worked in a language that has non-null as the default? If so, which one – and why didn't you like it?

No language in use has non-null as the default. Maybe some freakish languages such as Kotlin and Swift… and I just don't give a damned s. about them, because we are talking about C#: a practical language with a HUGE codebase that will become broken after this change. Seriously, guys, are null reference accesses such a big issue? I have never been worried about that kind of bug. I can sort them out without help.
Yes, some help would be appreciated, but what you propose is to turn all conventions upside down. That's crazy.

The codebase will not get broken, because these are warnings only.

I hate warnings in my code.

Most languages that support non-nullability have it on by default. There's even one in .NET that works this way: F#. I wouldn't call that (or Swift or Kotlin) "freakish". This is a big change for someone who has only ever used C/C++/Java/C#, because it's changing something you take for granted. I've tried both for years now – non-null by default is way better.

I love the C# language and I respect you guys a lot, but there are things I just can't wrap my head around. Validating arguments is the first thing we do in methods:

public void Foo(string bar, int baz)
{
    if (bar == null) throw new ArgumentNullException(nameof(bar));
    if (baz < 20) throw new ArgumentOutOfRangeException(nameof(baz), "…");
}

You said there will be no runtime checking, so the compiler of Visual Basic or any other .NET language (even older versions of C#) without this feature can still pass null (or Nothing) here, right? If this is true and I'll still need to do my runtime checking anyway, I fail to see just how useful it is to have non-nullable types. Won't this cause more bugs if we call something non-nullable when we can still get null instances of it, like in the array example? People already know that instances of reference types can be null. They know they need to sanitize their arguments before using them; it's CS 101. But now you're giving them these so-called non-nullable types which can actually be null. "Reference types that can't be null but actually can" is a concept that is hard to learn, hard to teach, and adds way more complexity to the language than it needs to. I believe the lack of argument validation is the underlying problem and NREs are just a symptom of it. In my opinion, Code Contracts (or Spec#) was headed in a much better direction to solve it.
There are things people need to understand in order to write good, maintainable code, and reference semantics are one of them. I wish these things were simpler, but sometimes failing fast and learning something from it is better than not failing. I really find it counter-intuitive to hide the problem under the carpet where it lurks, instead of teaching people that reference types can be null and arguments need to be sanitized before use.

CS 101 – I think you nail the point exactly. All the requests they are getting are from folks that never made it that far. The reason they perceive more requests for this type of thing is that professional developers, esp. on the MS stack, spend their days creating solutions, not surfing GitHub and Stack Overflow for code. Basically the Windows division is calling the shots; the theory is that the app should be a single page, with "no lines" or other distractions, and shall be created by the semi-skilled masses, with no more investment in WPF/real .NET. UWP is the future: .NET Core/Standard, XAML, Xamarin Forms for Mac and Linux. Period. It's "all about" the "learn by delivering" – throw garbage at the wall, get user feedback, rinse and repeat scrumness – and now this. The points you've made are just the tip of the iceberg. It's either #define CheckForNulls, which might be useful, or sabotage, IMHO.

It would be nice, and conceptually simpler, if we actually could have "reference types that can't be null". That is not a viable option, especially when added to an existing language. We have to settle for "reference types that shouldn't be null". That, I think, is a pretty accurate one-liner to describe what I call "non-nullable" reference types above. Because people can still (knowingly or by accident) pass null to public APIs that don't "want" it, those APIs probably still need to do argument checking. The value here is not so much to the author of the API, but to the caller: instead of getting a runtime exception, they can get a compile-time warning.
We've talked about a syntactic shorthand for doing the argument null checking. As a strawman syntax:

public string GetFullName(Person p!) { … }

could be a shorthand for

public string GetFullName(Person p)
{
    if (p is null) throw new ArgumentNullException(nameof(p));
    …
}

Mads, I do like this idea of being able to manually specify the edges of code where they meet the outside with an easy-to-see syntax. Some people might get carried away (or misunderstand) and use it everywhere, but hopefully not. I'd guess theoretically there could also be a slower compile mode for debugging where all non-null parameters get this runtime check?

Mads, and everybody else on the C# language design team, I want to congratulate you on the proposed "nullable reference types" approach for C# 8. I wrote the original Visual Studio User Voice request for such a feature in 2011 []. After six years of careful evaluation, you have finally found a sensible, well-balanced, and pragmatic way to immensely improve the current mess with null references in C#. I want to suggest two things for the implementation of this feature.

1) I have worked for many years in large, mixed C#/F# projects (and also VB.NET in the past). As the leading .NET language, C# has a special responsibility. Please make sure that, on the IL level, nullable reference types are implemented in a way that can be well integrated with other languages. This should not be too hard, as the major .NET languages are all maintained by Microsoft itself. What a great opportunity to work together! For instance, F# has non-nullability by default; therefore, a C# project referencing an F# assembly should take this into consideration. Maybe we need an assembly-level attribute that testifies, regardless of language, whether an assembly was compiled with [CSharpNullabilityStrictness] or not. Even better, it should be called [CliNullabilityStrictness] and become part of the CLI specification.
2) Please add the flow analysis to nullable value types as well, to keep the language symmetric.

Thanks, and keep up the good work!

+1 to this.

How would something like the code below be parsed?

void Foo(string bar) {
    string? x
    var c = x!=bar;
}

We need to preserve the meaning that it already has in C# today, so it would be a not-equals expression, and yield an error that x is read without being initialized. Using "expr!" to the left of a simple assignment (or in general as an "l-value" – a target of assignment) is not useful or meaningful, and should probably be disallowed.

Parser would probably error with "; expected" on line 2.

Finally! I had proposed this idea many times! I am glad you are doing it the right way!

void M(Person p)
{
    WriteLine(p.MiddleName.Length);  // WARNING: may be null
    WriteLine(p.MiddleName!.Length); // ok, you know best!
}

I think you should use: MiddleName?.Length

`MiddleName?.Length` would not be ok (it already has meaning today). That would return an `int?`, not an `int`. The `!` operator is so you can say that you want this to be the exact same as p.MiddleName.Length, just with the compiler silencing the warning in an easy manner.

Many thanks for a very interesting post. I'm a huge fan of C# and have welcomed all the improvements over the years, but I'm not so sure about this one. I agree with other comments that this change is uncharacteristically flawed. I for one am quite happy that references can be null, and am used to the pattern of checking for null arguments. One specific point I'd like to check: if we are to encourage library vendors to change their API signatures, they will understandably want to be sure that they are not imposing a specific compiler version on their clients. Will a client who uses a compiler that doesn't understand the new syntax work with a library compiled with the new syntax? I'm guessing yes, as it is just a new attribute, but I thought it worth checking. Many thanks

Wonderfully lucid post!
I fully support your design principles for this feature and I'm eagerly looking forward to using it. Having used ReSharper's null attributes for almost 10 years, I can attest to the benefit of having nullability analysis baked into the design-time environment. Over the years it has identified many issues in my code, saving the cost of countless runtime crashes. A NullReferenceException thrown from operational code is often tricky to track down, particularly when it occurs on a line with deeply chained method calls, or within a legacy code base with little input validation and insufficient precondition checks upon entering methods. A mistakenly assigned null in these circumstances can propagate far from the point of assignment, making it costly to detect and fix. Fantastic feature. Thank you!

Will this be added to VB.Net too? (please, please, please)

This is amazing – when do you expect to release?

This is a great approach! Thank you guys for involving the community to try to get the best result and great acceptance by developers.

Better still, just add Code Contracts as a first-class citizen. And don't change the default way that code works. If you want to make a value non-nullable then do something like string!, instead of breaking all of the existing code just to force your own bias on the world.

It doesn't actually break any existing code, though. The only output change at the IL level is adding an attribute, which any code would ignore by default. This purely gives optional feedback during compilation about which code is safer and what should have more null checks. It also helps give developers documentation of the code without having to actually look at the body of a method.
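To illustrate the reply's point that the only IL-level change is an attribute: conceptually, a nullable annotation compiles down to the plain reference type plus metadata, so compilers that predate the feature keep working. The attribute name below is illustrative only; the actual encoding used by the prototype may differ:

```csharp
// What you write with the feature enabled:
public string? MiddleName { get; set; }

// Roughly what lands in the assembly: the CLR type is still 'string',
// with a marker attribute that older compilers simply ignore.
[NullableAnnotation] // hypothetical name, for illustration
public string MiddleName { get; set; }
```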
While I understand that null reference types are a large enough issue that a language-specific approach is useful, I also believe an approach based on Code Contracts would be much better overall, as it would create a framework for a larger set of static analyses and warnings without requiring syntactic sugar (even if such sugar is very useful in the nullability case). Examples of additional analysis and possible warnings are indexers that require a >= 0 value but still have an argument type of a signed integer. Or non-empty or non-blank strings. And so on. The existing Code Contracts tool does need a lot of work, but it already had the notion of contract assemblies to annotate third-party code to prevent warnings, and there are techniques to infer constraints based on if-throw patterns and the like. I understand that Code Contracts is tricky, because it has been used for static annotation as well as run-time code rewriting to enforce constraints. Many people see the second usage as paramount. But the two usages are completely independent; one does not demand the other. All an analysis framework based on Code Contracts would need is an "assume reference types are non-nullable" option to generate appropriate warnings and contract suggestions, as Code Contracts already tries to infer and suggest contracts during its static analysis. At the very least, the flow analysis should respect Code Contract annotations and their usage in terms of "being smart" about null references.

Yeah, I do see where you're coming from. I haven't had the chance to use Code Contracts much, so it's a bit of a blind spot.

This is a place where it'd be nice to have some enhancements to the type system. Like defining a constructor/validation function that outputs an int of custom type PositiveInt, which can still be a normal integer under the hood but can only be created through that function, so you know it is safe to use later on.
And maybe a custom default value, or a prohibition on using it for things like empty arrays being initialized. I mean, technically you can work around all this by wrapping the value in a class of course, but that is not so great for efficiency, depending on the application. Same problem with structs not being able to hide the parameterless constructor. Hopefully they can expand things later on, but I do think the null issue is probably the most important one to start with.

You can't even do the "wrapping the value in a class" trick for this particular issue, since any class could be null.

Very welcome improvements! Will guard-clause libraries that provide null assertions be supported? For example, the popular Shouldly () or Ensure.That () provide guard assertion code that is often put at the top of the function, like:

EnsureArg.IsNotNullOrWhiteSpace(myString, nameof(myString));

Will the compiler be able to determine that myString would not be null hereafter? If so, would that be done through code analysis or through attributes?

Congrats — this is a much-needed, and long-overdue, C# ability. A few thoughts:

1) ReferenceEquals seems to complain (CS8600) when used to check for nullness, whether explicitly using "null" as an argument or passing a T? as an argument. I think it is a smoother migration path if ReferenceEquals did not treat this as an error by default.
2) While it is critical to consider the migration path of existing code, C# should also consider "green field" projects, where nulls are "forbidden" from the start.
3) Reading the announcements, it was not clear that nullness can be specified in an interface.
4) A mechanism to define nullness for dusty-deck code would be nice, similar to what can be done for Code Contracts.
5) For the documented "Known Issues", could you add some verbiage on each item about where the language is headed?
6) Will using ? with a generic type parameter work whether or not the actual type is a reference or value/struct?
7) Is there a supported way to specify nullness in a where clause?
8) What are the planned semantics of nullness with dynamic classes?
9) Maybe give some thought to allowing events to be null even without use of ?
10) Use of Predicate in an interface definition seems to cause problems (CS0535), as it appears to be mishandled as Func.
11) My personal belief is that ?. should only be used for invoking events or in finally/dispose handlers. Any chance Roslyn could add some compiler switches along these lines?
12) Are the nullness rules and inferences defined for volatile and ThreadStatic storage? What about unsafe as well?

I have been using ReSharper "strict" for years, and it is nice to see C# also adopting this critical functionality. So again, congratulations 🙂

byte[] nullArray = null;

"Cannot convert null to non-nullable reference"

byte[]? nullArray = null;

This gives a compile error. How can I express that an array can be null? There is no way to get rid of this warning.

public void LogError(string msg, [CanBeNull] Exception e);
LogError("error", null);

"Cannot convert null to non-nullable reference"

public void LogError(string msg, Exception? e);

This gives a compile error. How do I indicate that e is allowed to be null?

Here is an example of the kinds of constructs you referenced. This builds and runs (on my machine 🙂 ) using VS2017 Enterprise 15.5.0 Preview 4.0, with the nullable package referenced at the top of this post. I am using NCrunch, ReSharper, GhostDoc, and several other add-ons, and building this as a .NET 4.7.1 console app. Other notes: Build->Advanced->LanguageVersion is "C# latest minor version (latest)", the ReSharper project C# language level is "Experimental", and I am not treating any warnings as errors. There are numerous "false negative" IntelliSense errors flagged (as warned in known issues). But the app does compile and run.
#region usings
using System;
using System.Collections.Generic;
using System.Linq;
using Platform.Annotations; // my ReSharper annotation project
#endregion

namespace NullableReferenceA
{
    public interface ILogErrorApi
    {
        void LogError([NotNull] string msg, [CanBeNull] Exception? e);
    }

    public class LogErrorApi : ILogErrorApi
    {
        [CanBeNull]
        public static byte[]? NullArray([CanBeNull] IEnumerable<byte>? nullable)
        {
            byte[]? result = null;
            if (!ReferenceEquals(null, nullable))
            {
                result = nullable.ToArray();
            }
            return result;
        }

        public void LogError(string msg, Exception? e)
        {
            Console.WriteLine($"{msg}: {e}");
        }
    }

    internal class Program
    {
        private static void Main([NotNull] [ItemNotNull] [UsedImplicitly] string[] args)
        {
            var logger = new LogErrorApi();
            logger.LogError("first", null);
            logger.LogError("second", new InvalidOperationException("third"));
            foreach (var sample in new[]
            {
                LogErrorApi.NullArray(null),
                LogErrorApi.NullArray(new byte[] { 0x1, 0x2 })
            })
            {
                logger.LogError("is" + (ReferenceEquals(null, sample) ? "" : " not") + " null", null);
            }
        }
    }
}

Here is sample output:

first:
second: System.InvalidOperationException: third
is null:
is not null:

NOTE — You could also configure the project to ignore warnings CS8600 and CS8604 to silence the nullable-related warnings. Also, be careful with ReSharper reformat code — it does not always understand the new syntax, and can really mangle things.

I noticed a formatting issue after I posted — the IEnumerable type specification should have the generic type argument of byte. There may be other things lost in transmission, but I can zip & ship if you are interested.
The first "null reference" I fixed was an Error 429 in a 5-year-old VB4/5/6 program (can't remember the details, just the pain). Before that I had dealt with "wild pointers" (actually un-initialised) in C. Fast forward a few years: I joined a team with a BIG VC++ codebase, and I was given the task of turning on GS – stack check. I spent the next month cleaning some of the 8000-odd warnings.

Reading some of the comments, it feels like people are totally focussed on their specific pain point, and have (justifiable) concern about how features like this will affect THEIR particular class/API/method. We (and MS) have to look at the bigger picture: "you can't please all of the people all of the time".

Nulls are a fact of life in every major codebase. A well-designed SQL Server database will have null columns – DateOfBirth should most likely be non-null, DateOfDeath should be null – unless you work at a headstone manufacturer. These Person properties will need to emerge from the data layer and be used/tested in code in one layer or another. (It would be useful if SQL Server and C# could agree on what the date is…)

I do agree with some of the comments about the gradual increase in complexity in C#. K&R's original book described a clean, simple C language, then came C++, and now I struggle to read some of the MACRO-ridden, pragma-infested, uncommented code offered as "examples".

One suggestion I'd offer is, rather than turning this option on with a compiler switch, allow us to decorate a class / assembly with some [NullCheck] attribute, so we can fix the new warnings incrementally, rather than with a Big Bang. If it means taking something from ReSharper, fine.

(Repost – my update didn't seem to appear.) I'm aware this won't be popular; it's perhaps tangential to the language changes.
System.ComponentModel.DataAnnotations (10+ years old now) allows us to decorate classes and properties with attributes like [Required], so:

[MetadataType(typeof(PersonMetaData))]
public class Person
{
    public string GivenName { get; set; }
    public string FamilyName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public DateTime? DateOfDeath { get; set; }
    public string Email { get; set; }
    public string WebSite { get; set; }
    public string Notes { get; set; }
}

public class PersonMetaData
{
    [Required, Display(Description = "First Name in western cultures", Name = "Given Name", Prompt = "etc")]
    public string GivenName { get; set; }

    [Required]
    public string FamilyName { get; set; }

    [Required, Range(typeof(DateTime), "", "")]
    public DateTime DateOfBirth { get; set; }

    public DateTime? DateOfDeath { get; set; }

    public string Email { get; set; }

    [Url]
    public string WebSite { get; set; }

    [MaxLength(2000), Display(Prompt = "watermark", Name = "label", GroupName = "validation group", Description = "tooltip"), UIHint("MultilineText")]
    public string Notes { get; set; }
}

The attributes are read by controls like asp:DynamicEntity and asp:DynamicControl (yes, WebForms!) to create a UI that formats input, validates, and checks not just for null, but also range. And it's localised by resources. If the Person is passed to an API method, then a ValidationContext can do the checks and raise exceptions. How about:

public bool Save([Required] Person person)
{
    var results = new List<ValidationResult>();
    var context = new ValidationContext(person, null, null);
    bool result = Validator.TryValidateObject(person, context, results, true);
    return result;
}

It's more typing (4 characters – "[Re" followed by ctrl-tab) but it's readable, no language change is required, the results are predictable, it can be added to existing code, and there is no punctuation soup like "! ? * ??" – fine when you're fresh but not so cool after an 8-hour coding binge.

For the Nullable aspects, this is very similar to ReSharper's [NotNull/CanBeNull/ItemNotNull/ItemCanBeNull].
They are pure static/compile-time specifications that are tracked through flow analysis, with ReSharper displaying messages when something may be violated. As to the representation aspects, that can already be done via domain-defined data types. There are cyclic shifts in "everything must be a domain" vs "everything must be a primitive", and unfortunately we seem to currently be in the primitive phase. Also, I'm not sure if the overhead associated with enforcing DataAttributes is acceptable if it is done every time an assignment is made to a decorated property. But it is worth exploring (and you could probably prototype a generic pattern to do the checking on each assignment without too much trouble). C# would still have to change to allow attributes on anonymous declarations, casts, locals, etc. (basically, anywhere a type can be specified, the attributes should be allowed also).

SQL Server and C# do agree on what the date is when you use the more modern Date or DateTime2. And the default value of that is 0001-01-01 (DateTime.MinValue) – which you can use in the database too, and then you don't have null values!

I think this is a fantastic idea and the implementation makes complete sense. I will try it out…

My remark about dates was slightly tongue-in-cheek, but to expand on that: SQL Server has 6 date/time/datetime datatypes of varying precision. Such a fixed-point type is useful for storing a date, but less useful for indexing. In my (slightly morbid) example, retrieving living people is simply dealt with if DateOfDeath is null. To search on values where it's not DateTime.MinValue requires a decent index, maybe on a field with a resolution of 100 nanoseconds. Again, taking the bigger picture: when marketing want to do a mailshot to living prospects, if the select is slow because of the NOT DateTime.MinValue bit, they will say just select everyone, and a few grieving spouses will get the letter.

I just don't like using magic numbers to mean "not set".
I worked on one system where an integer customer Id was set in code, and negative values represented something special. Unfortunately, the system was popular, and there was a risk of the value rolling over… Remember, my 10 cents above was about a feature that's been in the framework (not the C# language) for at least 10 years. It may not be perfect, but it works well. I haven't benchmarked the difference between decorating with a [Required] attribute and coding a null check/raise exception. Maybe over the Christmas break 🙂

Will the compiler respect an if statement that contains string.IsNullOrEmpty(str)?

This is definitely a bad invention, and if it can be turned off, I will do it for sure in every project!

Default non-nullability vs nullability: a tough choice indeed, and a matter of personal preference. I would vote for default nullability, though. Considering:

1. Added value
2. Introduced problems
3. Added complexity to the language
4. Elegance
5. Intuitiveness
6. Compatibility with existing code bases and libraries

In general, I would prefer a compiler to be 100% right about a subset of my code, rather than being sometimes right about all of it. By "default nullability" I mean the "Person! p" syntax that would ensure for 100% that p is never null.

Person![] p = new Person![10] // compile error
Person![] p = new Person![1] { new Person() } // ok

public void Do(Person! p)
{
    // implicit null check and null argument exception for public methods, no check for privates
}

struct A { public Person! p; } // compile error
class B { public Person! p; public B(Person! p) { this.p = p; } } // ok if B is internal, error if B is a public class in a library

We could decide that public fields cannot be non-nullable due to multithreaded scenarios, and for public properties throw in case of access to an object that was not fully initialized.

default(B!)
– provide a way to define the default member of a non-nullable class, similar to string.Empty. For example:

class B { public static readonly B! default = new B(default(Person!)); }

default(string!) == string.Empty // ok

**** ADDED VALUE ****

default nullability – no added value for existing code bases; 100% proof of null safety where it is provably possible.

default non-nullability – will catch a subset of bugs on existing code bases.

**** INTRODUCED PROBLEMS ****

default nullability – none I can think of.

default non-nullability – is likely to introduce a set of bugs where the null check will be omitted, due to the expressed intent of non-nullability, which will prove to be wrong.

**** ADDED COMPLEXITY TO THE LANGUAGE ****

default nullability – introduces new syntax, makes the code uglier.

default non-nullability – in general, non-nullability enforces new concepts and a mindset upon you. You can't choose not to deal with it one way or another. Either you will have to ignore specific warnings, or ignore the new intent you are putting into the code.
1. The notion of nullable references is confusing – since they are always suspected to be nullable.
2. The notion of "somewhat non-nullable" – as a programmer you will have to know when you can trust this non-nullability and when not. Instead of the compiler taking these concerns away from you, it brings them in by default.
3. Confusingly different semantics from nullable value types.

**** ELEGANCE ****

default nullability – "?" adds nullability to value types, "!" adds non-nullability to reference types; seems like an elegant dichotomy.

default non-nullability – elegant but fake.

**** INTUITIVENESS ****

default nullability – expression of the intent is intuitive; however, the interpretation of code correctness is not.

default non-nullability – expression of the intent is intuitive; code correctness is ensured by the compiler.
Also, the compiler helps you learn and understand the limitations of the language by preventing you from using non-nullable types in cases where they cannot be proved to be non-nullable. This will, if not prevent you from making mistakes, at least make you aware of the cases you might not have thought of.

**** COMPATIBILITY WITH EXISTING CODE BASES AND LIBRARIES ****

default nullability – seamless, since this concept did not exist in previous versions of the libraries, and it behaves the same way in the new versions.

default non-nullability – confusing; you need to know the version of the compiler in order to understand the intent of the code. Nowhere in the code is it claimed that non-nullability is assumed.

Thanks for the great job of moving the language forward; I hope you reconsider this decision.

I also prefer support for P!, as it opens the door to "P! – expected to never be null", "P? – expected that it may be null sometime", and "P – no idea when or why it will be null or not null". Maybe there is a way to also support "once/deferred-readonly" semantics (i.e. may be null until assigned, then never null until Dispose()). There also is a semantic/intention difference between "P[10]", "P![10]", "P[10]!", and "P![10]!". This is similar to the [NotNull] and [ItemNotNull] support in ReSharper. This gets more precise as C# approaches support for the A68 triple-ref technique.

As to defaults, I could see introducing a mechanism to define the "default" value for a reference type, similar to how default constructors and field/property initializers are supported. I strongly prefer a default of not-null for API elements (return types, parameters, properties). I do not think there is serious risk of "default not-null" introducing a new set of run-time bugs — static flow analysis does a very good job of detecting null status. Of course there are ways around this analysis (just as using reflection can defeat a lot of safety).
And a new project option (or even project type) tailored to nullness is worth considering — similar to "unsafe", ClsCompliant, …

Thanks for your comment ) I don't quite understand the benefit of having three states:

Person? p; // intentionally nullable
Person p;  // no intent
Person! p; // intentionally non-nullable

It seems excessive and confusing to me, as well as "eventually non-nullable". Can you provide an example where this could be useful?

"I do not think there is serious risk of 'default not-null' introducing a new set of run-time bugs"

Can't agree. If you still need to guard all references, even non-nullable ones, then what's the point of non-nullability? If you start avoiding guards, then in all the cases where the compiler cannot guarantee non-nullability and does not provide warnings, you will have bugs. In the examples provided by Mads:

void M(Person p)
{
    if (p.MiddleName != null)
    {
        p.ResetAllFields();             // can't detect change
        WriteLine(p.MiddleName.Length); // ok
    }
}

In other words, you need to be smarter than the compiler; in that case, what do you need the compiler for? In a paradoxical way, it seems to me this could be yet another billion-dollar mistake, since it will give coders confidence to avoid null checks.

Sava, you're exactly correct. The value proposition is not a technical one; it's quite obvious. Unfortunately there is a new system of opinion-based design principles. The anti-pattern here is basically that it is better to continue executing in some undefined state in a poorly designed system than to crash. It's a check-box "feature", like "Open Source". Don't worry though; whoever is behind this is in for a rude awakening. They aren't in the position to dictate anything anymore; it's not 1992.

A benefit to supporting all 3 (!, ?, and no intent) is that it makes things more explicit. As you jump between different solutions and projects, it is easy to "forget" what semantics are applied to a project (i.e. does it default a missing specifier to nullable or not nullable?). Using ! and ?
clarify things. The ReSharper analogy here is optimistic vs pessimistic analysis. I always prefer pessimistic, as it reports more potential issues during static analysis.

The examples which ignore volatility are not helpful. Since an async action can modify MiddleName, even this is unsafe:

if (p.MiddleName != null)
    WriteLine(p.MiddleName.Length); // unsafe, even though the post says "ok"

The correct way to code this is to use a local temporary variable to hold the value of p.MiddleName:

var temp = p.MiddleName;
if (temp != null) …

In general, any code that has multiple chained dereferences (p.MiddleName.Length) should be avoided. Proper use of [Pure] (and a default of "treat unspecified as impure") could be used to have the flow analysis treat the ResetAllFields() method as something that modifies the state of p. And again, for clarity, both [Pure] and [Impure] should be supported.

Migrating to never-null is not trivial, and the ripple effect can be a shock. But proper flow analysis can take a lot of the uncertainty out of things, especially coupled with properly expressive architecture rules (such as Pure/Impure). Maybe by C# 10.0 the crossover point may be reached where the majority of code written is null-safe (we also need a uniform taxonomy covering the models and rules etc.).

After giving it some more thought, I like this interpretation:

Person? p; // Optional concept, null being a valid value to be assigned.
Person p;  // Initially null, but eventually expected to become and remain not null. Cannot be assigned null or a nullable (I would make it a compile error and not a warning).
Person! p; // Compiler ensures that the variable is initialized as not null and remains not null. Can still throw an exception, but throws it as early as possible.

Makes the most sense to me.

That's a great feature. Glad to see this kind of change in C#. Keep it up!

How do you plan to handle multi-threaded code?
if (ns != null)
{
    // context switch – another thread could null ns
    WriteLine(ns.Length); // there should be a warning here
}

The compiler does not give any guarantees about null reference safety, and does not take that concern away from you. The intent of non-nullability is confusing because it is simply not true. All types are initially null, and the compiler is good with that. You should still write safe code and put null checks everywhere; the compiler will warn you in certain use cases, and in others it won't.

I don't think writing null checks "everywhere" is needed. If we ignore the malicious case of someone using reflection to get around the nullableness, then it is mainly an issue when assigning to something declared to be non-nullable. Static analysis can recognize that code is safe (i.e. ensure it does not assign a null to something not nullable), and raise an error for code that it cannot determine is safe. Concurrent access may still require the usual safeguards, but nullableness does not really contribute to this — the types of coding required will still work with the nullableness feature. I disagree that all types are initially null, but there definitely is a mindshift associated with "never dereference a null" design.

C:\Roslyn_Nullable_References_Preview_11152017>dir
 Volume in drive C is Win10
 Volume Serial Number is 2236-DFF3

 Directory of C:\Roslyn_Nullable_References_Preview_11152017

12/07/2017  04:01 PM    <DIR>          .
12/07/2017  04:01 PM    <DIR>          ..
11/14/2017  11:07 PM                96 install.bat
12/07/2017  04:01 PM    <DIR>          tools
11/14/2017  11:07 PM                98 uninstall.bat
12/07/2017  04:01 PM    <DIR>          vsix
               2 File(s)            194 bytes
               4 Dir(s)  103,129,927,680 bytes free

C:\Roslyn_Nullable_References_Preview_11152017>install
Using VS Instance 1862b88f at "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise"
Installing all Roslyn VSIXes
Installing vsix\Roslyn.Compilers.Extension.vsix

Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.VisualStudio.Threading, Version=15.3.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified. ---> System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.VisualStudio.Threading, Version=15.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified. ---> System.IO.FileNotFoundException: Could not load file or assembly ':\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\PrivateAssemblies\Microsoft.VisualStudio.Threading.dll'
   at VsixExpInstaller.Program.c__DisplayClass16_2.g__RunProgram1()
   at VsixExpInstaller.Program.Main(String[] args)
System.Management.Automation.RuntimeException:
   at Exec-CommandCore, C:\Roslyn_Nullable_References_Preview_11152017\tools\utils.ps1: line 39
   at Exec-Console, C:\Roslyn_Nullable_References_Preview_11152017\tools\utils.ps1: line 71
   at Deploy-VsixViaTool, C:\Roslyn_Nullable_References_Preview_11152017\tools\install.ps1: line 29
   at , C:\Roslyn_Nullable_References_Preview_11152017\tools\install.ps1: line 41

C:\Roslyn_Nullable_References_Preview_11152017>

The preview version of VS should be the only one installed.

So I read a lot of these comments, and even though I initially was like "WTF?", I understand your approach and intent… and it is a good idea overall.
However, as I understand from this portion: "The feature design envisions a separate compiler switch to enable warnings related to nullability," this is implemented as a compiler switch. So my question is: why did you not make that behaviour derive from a project property? That would pretty much handle a lot of the mentioned problems. Simply add a property "references are not null" (or whatever); if this is checked, then you launch the compiler in no-null mode; leave it unchecked, and the compiler works as it worked before (and if a project does not have that property, then it is treated as unchecked, so no issues with old projects here).

The reason for this is that the compiler mode used should actually be defined by the project, and not by the workstation or IDE environment. When I write code I know what compiler behaviour I expect, and I should get the very same results 10 years later when I compile my project again. Please do not force another generation of programmers to use outdated Visual Studios just so they can compile their legacy assemblies.

Also, eventually, with this approach you could actually enforce a true no-null state without breaking anyone's code – so string s = default(string) being string.Empty and not null.

As another point – for consistency's sake, please add .HasValue and .Value for all ? (nullable) variables, even if they are of a reference type.
I have so many questions that I am not even sure where to begin… For one, how does this play with optional arguments that are of a generic type? For instance, let’s say I have the following method: T GetValueOrDefault(string key, T defaultValue = default(T)) Obviously since T is unconstrained, it can be either a value or a reference type. Does this generate a compiler warning? And if so, how would I write this method “properly”? I suppose I could write it like this: T GetValueOrDefault(string key, T? defaultValue = default(T)) But doesn’t that imply that T must be a reference type? Perhaps I am reading too much into it. And why does “string s = default(string)” generate a warning? If we are going all the way, then shouldn’t “default(string)” translate to “string.Empty”? You could use the following syntax to default for nullable reference types: “string? s = default(string?)” I have a lot more questions, so I should probably just download the preview and try it for myself… Perhaps my opinion will change after I see it in action, but for now, I really believe that going the other way, that is, requiring that non-nullable reference types be explicitly defined, is the proper way to do things. If we were creating a new language, then sure, the “string?” syntax would be ok. In my opinion, it’s just too significant of a change to make this late in the game. It looks like an awkward solution to far-fetched issue. First, people having problems with NullReferenceException should realize that they get this exception not because C# is bad language, but because they are incompetent developers. Second, even if you want to make thoose teenagers happy, give them some new class / extension method / attribute / whatever else, but do not touch the language. There is no excuse for changes in syntax unless you are sure that this feature cannot be implemented at BCL level. 
Third, you selected the worst way to implement this feature, because it will be a breaking change for all developers, not an option for rare F# fans.

Completely agree. Null checking is tedious, I'll give them that. However, it's not a real issue. The null-conditional operator they added in C# 6.0 made it much easier to do null checking (and therefore avoid NullReferenceExceptions), and it's a change that is universally loved. In my opinion, those are the types of changes they should continue to make — not major breaking changes like this!

They claim to have received a lot of requests for this feature, but from whom? I can't help but think that the requests are mostly coming from C# newbies who recently migrated from another language and don't yet understand how classes function in C#. Personally, I've never heard anyone complain about NullReferenceExceptions in C#.

Overall, I think C# has improved a lot over the past few years, and I've been a big fan of many of their recent changes, but not this one… Like you said, breaking changes, especially of this magnitude, are always a bad idea, but doubly so when there are alternative ways to accomplish your goal without breaking anything! If they actually move forward with this change, then I'm going to be very concerned with the future of C# — which is sad, because I love this language.

While I still prefer language support for ! (non-nullable) and ? (nullable), this is easily one of the most value-enhancing C# changes since the introduction of generics. ReSharper NotNull/CanBeNull attributes are an intermediate step, but language changes to allow specifying nullability wherever a type can be specified are critical (and cannot be done solely via attributes). While "?." is reasonable for Dispose() and event handling, it is a wart outside of those areas — and a fairly good indication of risky code.
Similarly, other language changes (lambdas and anonymous methods, to name a couple) more often than not simply indicate that "this code is poorly tested". Reference nullability is a significant step towards safer code, as well as "designed" code instead of keyboard banging.

I agree that it has value, I just disagree with how they are choosing to implement it. I too would favor a "!" (non-nullable) and "?" (nullable) syntax. However, hypothetically, if we were rebuilding C# from scratch, then I would actually favor their approach. It's just that, at this point in the language's history, this is far too significant of a breaking change. Only warnings for now, sure, but I worry that the team may eventually decide to enforce it at the compiler level. In which case, I would be left with the following choices:

1. Update a TON of legacy code.
2. Opt out of all future C# versions.

I would prefer neither.

As far as the "?." operator goes, I disagree that it is an indicator of code smell — at least, when it is used properly. I find that it is a great QOL enhancement, especially for LINQ queries. Personally, I love the functionality that lambda expressions enable (e.g. Tasks, LINQ queries), but I have seen them greatly abused by some people, so they are a love/hate for me. As far as anonymous methods go… I couldn't agree more.

Very cool!
https://blogs.msdn.microsoft.com/dotnet/2017/11/15/nullable-reference-types-in-csharp/
Reallocating Memory

You may need to expand or shrink your pointer storage space after you have allocated memory to it. The void *realloc(void *ptr, size_t size) function deallocates the old object pointed to by ptr and returns a pointer to an object that has the size specified by size. ptr is the pointer to a memory block previously allocated with malloc, calloc or realloc (or a null pointer) to be reallocated. The maximal possible contents of the original memory are preserved. If the new size is larger, any additional memory beyond the old size is uninitialized. If the new size is smaller, the contents of the shrunken part are lost. If ptr is NULL, a new block is allocated and a pointer to it is returned by the function.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *p = malloc(10 * sizeof *p);
    if (NULL == p)
    {
        perror("malloc() failed");
        return EXIT_FAILURE;
    }

    p[0] = 42;
    p[9] = 15;

    /* Reallocate array to a larger size, storing the result into a
     * temporary pointer in case realloc() fails. */
    {
        int *temporary = realloc(p, 1000000 * sizeof *temporary);

        /* realloc() failed, the original allocation was not free'd yet. */
        if (NULL == temporary)
        {
            perror("realloc() failed");
            free(p); /* Clean up. */
            return EXIT_FAILURE;
        }
        p = temporary;
    }

    /* From here on, the array can be used with the new size it was
     * realloc'ed to, until it is free'd. */

    /* The values of p[0] to p[9] are preserved, so this will print:
       42 15 */
    printf("%d %d\n", p[0], p[9]);

    free(p);
    return EXIT_SUCCESS;
}

The reallocated object may or may not have the same address as *p. Therefore it is important to capture the return value from realloc, which contains the new address if the call is successful. Make sure you assign the return value of realloc to a temporary instead of the original p. realloc will return NULL in case of any failure, which would overwrite the pointer. This would lose your data and create a memory leak.
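The temporary-pointer discipline described above can be factored out so callers never lose the original block when realloc fails. The sketch below is illustrative, not part of the standard library; the names grow_block and append_suffix are our own inventions.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper (not a standard function): attempt to resize ptr
 * to new_size. Returns the (possibly moved) block on success, or NULL
 * on failure; on failure the original block is untouched and still
 * owned by the caller, who can free it or keep using it at the old size. */
static char *grow_block(char *ptr, size_t new_size)
{
    return realloc(ptr, new_size); /* NULL on failure, ptr stays valid */
}

/* Demo: build "hello world" by growing a buffer that initially only
 * holds "hello". Returns a malloc'd string for the caller to free,
 * or NULL on allocation failure. */
static char *append_suffix(void)
{
    char *s = malloc(6);
    if (s == NULL)
        return NULL;
    strcpy(s, "hello");             /* 5 chars + NUL fit in 6 bytes */

    char *bigger = grow_block(s, 12);
    if (bigger == NULL)
    {
        free(s);  /* realloc failed: s is still valid, so free it here */
        return NULL;
    }
    s = bigger;   /* only overwrite s once realloc has succeeded */
    strcat(s, " world");            /* 11 chars + NUL fit in 12 bytes */
    return s;
}
```

A caller would check the returned pointer for NULL and free it when done, exactly as with the main example above.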
https://essential-c.programming-books.io/reallocating-memory-eda3203fa7ba4134857ae4c0750cb9f6
"Message Addressing Property is not present" on client side

The WSDL of my Web service (document style) contains a UsingAddressing property. The client sends a well-formed request containing To, Action, and ReplyTo, but the Web service sends a response (as expected and well-formed) without these headers. Thus the client stack tells me:

WARNING: A required header representing a Message Addressing Property is not present. Problem header: {}Action
com.sun.xml.ws.addressing.model.MapRequiredException
    at com.sun.xml.ws.addressing.WsaTube.checkCardinality(WsaTube.java:222)
    at com.sun.xml.ws.addressing.WsaTube.validateInboundHeaders(WsaTube.java:141)
    at com.sun.xml.ws.addressing.WsaClientTube.processResponse(WsaClientTube.java:81)
    at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:605)
    at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:554)
    at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:539)
    at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:436)
    at com.sun.xml.ws.client.Stub.process(Stub.java:248)
    at com.sun.xml.ws.client.sei.SEIStub.doProcess(SEIStub.java:135)
    at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:109)
    at com.sun.xml.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:89)
    at com.sun.xml.ws.client.sei.SEIStub.invoke(SEIStub.java:118)

What is going wrong here?

> Ok. Looks like your STS with this WSDL is not exactly the WS-Trust based STS, right?
> Is it a Provider based Web service?

Yes, it is a Provider based Web service.

> Why don't you use a "trust" STS for this purpose? Any requirement that can't be met?
> Just curious.

Sorry, I don't know what the actual reason for this architectural decision was.

> You may try to add the attribute wsap10:Action="ssionTokenService/requestAdmissionTokenCollection"
> in the input and output in the operation in the portType.

Has no effect.

For a Provider-based service, you need to have the annotations @RespectBinding and @BindingType to enable Addressing.
where:

import javax.xml.ws.BindingType;
import javax.xml.ws.RespectBinding;

> For a Provider-based service, you need to have the annotations
> @RespectBinding
> @BindingType
> to enable Addressing, where:
> import javax.xml.ws.BindingType;
> import javax.xml.ws.RespectBinding;

Thanks for your help, it's working now. @RespectBinding is necessary and sufficient to get it running. I got the same positive result when using @Addressing instead of @RespectBinding. Since I have one Web service that does not require this annotation (its WSDL employs the standard WS-Trust RST / RSTR message) and another that requires it, I have willy-nilly to analyse the structural differences between the WSDL files.

Now I can analyse the structural differences between the WSDL files.

Can you attach your WSDL here? Most probably, you might not have specified these headers "to be present" in the incoming operation-level policy in the WSDL.

Here it is.

You are trying your Trust sample with GlassFish, right? So, can you put the following properties under And can you attach the server-side message log to us?

By looking into your WSDL, I have a question:
– Does your Trust application involve more than one STS? I am asking this question, as I see you have used the IssuedToken assertion in the STS WSDL also.

I see your attached WSDL has EncryptedElements. Right now, this is not supported in the streaming security implementation, but we do support it in the DOM-based implementation. To enable this implementation, you have to add the following assertions in your client and server policy files:

Server side policy assertion:

Client side policy assertion:

> You are trying your Trust sample with GlassFish, right?

Tomcat 5.5.23.

> And can you attach the server-side message log to us?

Attached.

> By looking into your WSDL, I have a question:
> – Does your Trust application involve more than one STS? I am asking this question, as I see you have used the IssuedToken assertion in the STS WSDL also.

Yes.
It is a complex scenario with a lot of SAML assertions involved. Don't ask me why this cannot be handled by using only one assertion.

What does the server side log say? Do you see some validation error in the server logs? There was a bug that when the server throws a Security Validation related Fault, the client sees this kind of an error instead of the real SOAPFault, but that was fixed a few days ago, so if you are using the latest builds then you should not see such a problem.

Ok. Looks like your STS with this WSDL is not exactly the WS-Trust based STS, right? Is it a Provider based Web service? Why don't you use a "trust" STS for this purpose? Any requirement that can't be met? Just curious.

You may try to add the attribute wsap10:Action="" in the input and output in the operation in the portType.
https://www.java.net/node/670341
Adding AWS Resources to a Cloud Gem

Cloud Canvas cloud gems can be used out of the box, without interacting with the code and the Cloud Gem Framework that powers them. However, you might be interested in modifying existing cloud gems and creating your own cloud gems, possibly for distribution to others. If so, you might need to add support for additional AWS CloudFormation types beyond the ones natively supported by Cloud Canvas. This topic provides information on how to do that.

Cloud Canvas gems provide AWS CloudFormation templates that specify the AWS resources that the gem requires. AWS CloudFormation templates support AWS resource types, which are prefixed with AWS::. They also support AWS::CloudFormation::CustomResource custom resource types, or any resource type prefixed with Custom::.

The template file for a cloud gem is located at lumberyard_version\dev\Gems\gem_name\AWS\resource_template.json. The resource_template.json file is a specialized version of an AWS CloudFormation template that provides metadata specific to Cloud Canvas. While AWS CloudFormation templates support a large catalog of AWS resource types, the templates for Cloud Canvas gems are more limited in scope. When creating a Cloud Canvas gem, you have the following options:

- Use one of the subset of types in the AWS:: namespace that are directly supported by Cloud Canvas (no additional work required).
- Add support for AWS:: types not already supported by Cloud Canvas.
- Add your own Custom::* resource types that execute custom Lambda function code when instances of that type are created, modified, or deleted. Custom resources are a good way to integrate your in-house services and access AWS services that are not directly supported.
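To make the last option concrete, a Custom:: resource is declared in resource_template.json like any other CloudFormation resource, with a ServiceToken property pointing at the Lambda function that implements it. A hypothetical fragment follows; aside from the standard ServiceToken mechanism, all resource and property names here are invented for illustration:

```json
{
    "Resources": {
        "MyCustomThing": {
            "Type": "Custom::MyService",
            "Properties": {
                "ServiceToken": { "Fn::GetAtt": [ "MyServiceHandler", "Arn" ] },
                "Endpoint": "https://example.internal/api"
            }
        }
    }
}
```

Here MyServiceHandler stands for a Lambda function defined elsewhere in the template; CloudFormation invokes it whenever an instance of Custom::MyService is created, updated, or deleted.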
https://docs.aws.amazon.com/lumberyard/latest/userguide/cloud-canvas-cgf-adding-aws-resources.html
FYI: I am more accustomed to python, where a variable that is initialized within a function has scope local to that function. I do not believe that javascript has a function like the python globals() and locals() functions, but it would be great if it did. Aside from that, might someone know of any utilities that might produce a list of global variables? Or pointers to discussion regarding best practices? IOWS: I'm looking for a way to trouble-shoot and look for variables that were not defined in a function with the 'var' keyword.

thanks
tim

I don't think there's a simple way to do this, but a couple of points:

1) Global vars are stored within the scope of the window object. However, since window is somewhat 'special' in comparison to other objects, it doesn't behave the same. Thus, the following returns 0, not 1.

Code:
var myglobalvar = "hi, there!";
alert(window.length);

...even though...

Code:
alert(window.myglobalvar);

...will find the variable you created.

2) You could theoretically do what you're asking by recursively traversing a script and discounting any vars found that are properties of objects (i.e. local variables, since in JS functions and objects are effectively the same thing). You can read a function's properties via arguments.callee.

3) A good script these days operates within a namespace, where all data is a sub-element of that namespace, so there shouldn't be any global variables. If a var needs to be read by another function it can either be passed to it or, better, hardwired onto one of the functions via funcname.varname = value, so it can be accessed from anywhere.

Not sure if this helps, but hey.

Your point 3) is well taken. I have written a script to programmatically parse a legacy "non-namespaced" javascript file into a list of variables, then create an output document with writeln statements in try/catch constructs.
Does a pretty good job of catching the "stray variables". I must play with your window object approach a bit. That could also be edifying.

thanks
tim
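The namespace approach from point 3) can be sketched like this (a minimal illustration; the names are invented for the example):

```javascript
// One global name; everything else hangs off it instead of
// polluting the window object with stray variables.
var MyApp = {
    settings: { greeting: "hi, there!" },
    greet: function (name) {
        return this.settings.greeting + " " + name;
    }
};

// "Hardwiring" a value onto a function, as described above,
// so another function can read it without a global:
MyApp.greet.callCount = 0;

console.log(MyApp.greet("world")); // → hi, there! world
```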
http://www.webdeveloper.com/forum/showthread.php?216565-IE-refusing-to-render-object-passed-via-.innerHTML&goto=nextoldest
/* This program is written by T S Pradeep Kumar on 28th Sep 2010
   This is to display Hello World to the display unit */
#include <stdio.h> //including the standard IO functions like printf(), scanf()

int main()
{
    printf("Hello World \n");
    return 0;
}

- stdio.h is the library header which contains printf(), scanf() and other standard IO functions, so before we use any of these functions we need to include it in our C program. Most compilers will take stdio.h automatically, even if we don't include it in our program.
- int main() {} is the main function.
- printf("Hello World \n"); — printf() is the function which prints to the display unit, and here it is displaying Hello World. Whatever is inside the double quotes will be displayed as-is on the monitor, except the format specifiers or format codes (%d, %f, %c).
- return 0; is the final statement, which returns the integer value 0 (this is just to make the compiler happy).

Here in the above program you can see there are semicolons (;). Every C statement must end with a semicolon. If a statement is not completed, then there need not be a semicolon.
https://www.nsnam.com/2010/09/sample-c-program.html
When a page is to be mapped executable for userspace, we can presume that the icache doesn't contain anything valid for its address range but we cannot be sure that its content has been written back from the dcache to L2 or memory further out. If the icache fills from those memories, ie. does not fill from the dcache, then we need to ensure that content has been flushed from the dcache. This was being done for lowmem pages but not for highmem pages. Fix this by mapping the page & flushing its content from the dcache in __flush_icache_page, before the page is provided to userland. Signed-off-by: Paul Burton <paul.burton@xxxxxxxxxx> --- arch/mips/mm/cache.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c index 3f159ca..734cb2f 100644 --- a/arch/mips/mm/cache.c +++ b/arch/mips/mm/cache.c @@ -16,6 +16,7 @@ #include <linux/mm.h> #include <asm/cacheflush.h> +#include <asm/highmem.h> #include <asm/processor.h> #include <asm/cpu.h> #include <asm/cpu-features.h> @@ -124,10 +125,14 @@ void __flush_icache_page(struct vm_area_struct *vma, struct page *page) unsigned long addr; if (PageHighMem(page)) - return; + addr = (unsigned long)kmap_atomic(page); + else + addr = (unsigned long)page_address(page); - addr = (unsigned long) page_address(page); flush_data_cache_page(addr); + + if (PageHighMem(page)) + __kunmap_atomic((void *)addr); } EXPORT_SYMBOL_GPL(__flush_icache_page);
http://lkml.iu.edu/hypermail/linux/kernel/1602.3/00131.html
When you have already learned how to make search requests in the Splunk GUI, it may be nice to figure out how to do the same from your own scripts using the Splunk REST API. It's really easy!

Ok, we have a Splunk SIEM account:

user="user"
pass="Password123"

And we want to execute this search request:

search='search index="index_nessus" host="192.168.56.50"'

First of all we need to get the ID of our search request (make sure that port 8089 is open!):

curl -u $user:$pass -k -d search="$search"

<?xml version="1.0" encoding="UTF-8"?>
<response>
<sid>1490878950.3029</sid>
</response>

Now, having this ID, we can check whether the results of this search request are available:

curl -u $user:$pass -k

This command will return a huge xml. We need to figure out whether the search is finished or not:

curl -s -u $user:$pass -k | grep "dispatchState"

<s:key name="dispatchState">DONE</s:key>

When dispatchState is DONE, we can get the results:

curl -u $user:$pass -k --get -d output_mode=csv

This command will return 200 lines of text like this:

...
"index_nessus~11~C780C931-CD4E-4CE4-ACEC-6240AF49DAB2","11:3468150",1490703404,"{""scan_group"":""vm_moscow"", ""plugin_output"":""Port 135/tcp was found to be open"", ""protocol"":""tcp"", ""severity"":""0"", ""script_version"":""$Revision: 1.73 $"", ""risk_factor"":""None"", ""solution"":""n/a"", ""plugin_modification_date"":""2017/03/27"", ""pluginName"":""Netstat Portscanner (WMI)"", ""agent"":[""windows""], ""pluginFamily"":""Port scanners"", ""synopsis"":""The list of open ports could be retrieved via netstat."", ""pluginID"":""34220"", ""plugin_name"":""Netstat Portscanner (WMI)"", ""fname"":""wmi_netstat.nbin"", ""plugin_publication_date"":""2008/09/16"", ""plugin_type"":""local"", ""svc_name"":""epmap"", ""port"":""135"", ""description"":""Using the WMI interface, it was possible to get the open ports by running the netstat command remotely.""}",14,"192.168.56.101"
...
And if we use “| table ” operator, in results we also get great parsable csv table. Therefore, you do not need to mess around with the creation of reports, as in the case of searching in Splunk graphical interface. Upd. 04.10.2018 Recently I made some python scripts to automate Splunk searches via API. This functionality can be very useful to debug the searches for dashboards. Or you even can get some events from Splunk and put them back with some different data, for example if some connector was broken for a day. Let’s say we want to get all events from ‘important_index’ for 10/04/2018. First of all, we need to get ID for the search: import requests search='index="important_index" earliest=10/04/2018:0:0:0 latest=10/04/2018:23:59:00' data = {'search': search, 'max_count':'10000000'} response = requests.post('', data=data, auth=(splunk_user, splunk_user_password), verify=False) It will return an xml, so we can parse it with Python etree module: import xml.etree.ElementTree as ET root = ET.fromstring(response.text) for tag in root: job_id = tag.text Now we should check the status of the job until it is ‘DONE’: import xml.etree.ElementTree as ET def get_dispatch_state(xml_text): root = ET.fromstring(xml_text) dispatchState = "" for tag in root: if "content" in tag.tag: for tag2 in tag: for tag3 in tag2: if tag3.attrib['name'] == "dispatchState": dispatchState = tag3.text return (dispatchState) status = "UNKNOWN" while status != "DONE": response = requests.post('' + job_id, auth=(splunk_user, splunk_user_password), verify=False) status = get_dispatch_state(response.text) print(status) time.sleep(5) Output: PARSING DONE And finally we should repetitive results of the job. 
We can get no more than 50000 events in one request, so it will be necessary to make several requests and use an offset:

import json

data = {'output_mode': 'json'}
offset = 0
results_fullness = False
events = list()
while not results_fullness:
    response = requests.get('' + job_id +
                            '/results?count=50000&offset=' + str(offset),
                            data=data,
                            auth=(splunk_user, splunk_user_password), verify=False)
    response_data = json.loads(response.text)
    print("Count: " + str(len(response_data['results'])))
    events += response_data['results']
    if len(response_data['results']) == 0:
        # This means that we got all the results
        results_fullness = True
    else:
        offset += 50000
print("All: " + str(len(events)))

Output:

Count: 50000
Count: 2751
Count: 0
All: 52751

Now we will have all the events in the events variable.
https://avleonov.com/2017/04/03/making-splunk-searches-using-rest-api/
ftw.dashboard.portlets.favourites

A favourite Portlet

Project Description

Overview

ftw.dashboard.portlets.favourites provides a favourites portlet for your dashboard. The favourite portlet shows links for all your favourites in your Home Folder. In the edit mode you also have the possibility to remove a single favourite. Additionally it implements the site action "add to favorites", which adds the current section to your favourites folder.

Install

- Add ftw.dashboard.portlets.favourites to your buildout configuration:

[instance]
eggs =
    ftw.dashboard.portlets.favourites

- Run buildout
- Install ftw.dashboard.portlets.favourites in portal_setup

Links

- Package repository:
- Issue tracker:
- Package on pypi:
- Continuous integration:

This package is copyright by 4teamwork. ftw.dashboard.portlets.favourites is licensed under GNU General Public License, version 2.

Changelog

3.1.1 (2013-08-15)

- Do not use move cursor on empty favourite items. [Julian Infanger]
- Fixed link creation for users without Modify portal content permission on the favourite folder. [phgross]

3.1 (2013-04-16)

- Added migration upgrade step for old favourite portlets to the new implementation (3.0). [phgross]
- Fixed UnicodeEncodeError in AddFavourite, which happens when adding a Dexterity item with a non-ascii title to the Favourites. [phgross]
- Replace jq by $. [mathias.leimgruber]
- Updated German translations. [phabegger]

3.0 (2013-01-25)

- Fully refactored. Favourites fully configurable via the portlet. [eschmutz]

2.0.1 (2012-03-05)

- Added some French translations. [ttschanz]
- Add IFavouritesLocation adapter for customizing the favourites location. [jone]
- Add missing German translations. [jone]
- Add to favourites: do not use title_or_id, for Dexterity support. [jone]
- Translate portlet title in "plone" domain. [jone]
- Fix messages in "add to favourites" script, so that translations work. [jone]
- Added some missing German translations. [jone]

2.0 (2010-10-26)

- Only Plone 4 compatible
- Fixed namespaces in setup.py [phgross]
- Added the addToFavorites script (now uses links instead of the deprecated favorite type), plus some other changes for Plone 4 support [phgross]
- Removed inline css on portlet [fsprenger]
- Added new, removed unused translations [phgross]

1.1

- Removed the addToFavourites script. Now using the standard Plone Favorites and their script. [phgross]
- Added Brazilian Portuguese translation. [lucmult]
- Improve management of the Favorites folder; it can have any of these names: favourites, favorites, Favourites, Favorites, to integrate better with the default Plone action. [lucmult]

1.0

- Initial release

Current Release

ftw.dashboard.portlets.favourites 3.1.1

Released Aug 15, 2013 — tested with Plone 4.1, Plone 4.2

Get ftw.dashboard.portlets.favourites for all platforms

- ftw.dashboard.portlets.favourites-3.1.1.zip
- If you are using Plone 3.2 or higher, you probably want to install this product with buildout. See our tutorial on installing add-on products with buildout for more information.
http://plone.org/products/ftw.dashboard.portlets.favourites/
Hey, I'm building a simple little program for Minecraft to reduce the repetitiveness, and to help me along I'm using GetPixel. It's slow, but I only need to grab about 6-12 pixels. The problem is, I can get the pixel data if I capture from the entire desktop display, but as soon as I tell it to grab from the active window (which goes MUCH faster) it doesn't return any colour values. Any idea why that is? I'd rather not have to use GetDIBits or program in Java or anything, since I want to keep it fairly KISS. If it can't be done, then... ah well, I'll just grab from the entire desktop; speed isn't too much of an issue, but it'll be a pain to keep the window in the same place. Any help would be awesome, thanks. (Since people always want to see code, here's the code I'm using — simplified, but compilable.)

Code:
#include <Windows.h>
#include <iostream>
using namespace std;

// vvv Ignore this, it's just used for retrieving the handle of a window without giving the full window name vvv
struct results
{
    char* text;
    UINT (__stdcall*GET)(HWND,LPSTR,UINT);
    HWND hRet;
};

BOOL WINAPI StruEnumProc( HWND hwnd, results* stru )
{
    char loc_buf[128];
    stru->GET( hwnd, loc_buf, 127 );
    if( strstr( loc_buf, stru->text ) )
    {
        stru->hRet = hwnd;
        return FALSE;
    }
    return TRUE;
}

HWND FindWindowTitleContains( char* text )
{
    results res = { text, (UINT (__stdcall *)(HWND,LPSTR,UINT))GetWindowTextA, 0 };
    EnumWindows( (WNDENUMPROC)StruEnumProc, (LPARAM)&res );
    return res.hRet;
}
// ^^^^^ Ignore this ^^^^^^^^^^

// just some global variables
int colour;

int main()
{
    HWND handle = FindWindowTitleContains( "Minecraft" );
    HDC hdca = GetDC( handle ); // make 'handle' 'NULL' to grab from the entire desktop
    while(true)
    {
        colour = GetPixel(hdca, 100, 100);
        cout << colour << '\r';
    }
    return 0;
}
http://forums.codeguru.com/printthread.php?t=537553&pp=15&page=1
Save UI data and load UI data from .txt file

On 29/03/2017 at 07:04, xxxxxxxx wrote:

Hi guys,

I am trying to save UI data to a .txt file and load the UI settings back from the .txt file into the UI. For example: I have four GeDialog.AddEditText gadgets with different names in each one, and I have a Save button; I press it and it saves right to the .txt file, which is good. Now I have a Load button, but how do you load the .txt file and read it, so it can fill in the four GeDialog.AddEditText gadgets with the data from the .txt file?

This is how I am saving the .txt file, and it works, but how do I go about loading it back when I open it?

def Save_UI_Settings_Data(self):
    ChrName = self.GetString(self.ChrName)
    Save_file = p.join(p.split(__file__)[0], ChrName + '_Data.txt')
    doc = c4d.documents.GetActiveDocument()
    SubN1 = self.GetString(self.UI_SN_1)
    SubN2 = self.GetString(self.UI_SN_2)
    SubN3 = self.GetString(self.UI_SN_3)
    SubN4 = self.GetString(self.UI_SN_4)
    if ChrName:
        with open(Save_file, 'a') as type_file:
            type_file.write(p.split('Slot1')[1] + ';' + SubN1 + '\n')
            type_file.write(p.split('Slot2')[1] + ';' + SubN2 + '\n')
            type_file.write(p.split('Slot3')[1] + ';' + SubN3 + '\n')
            type_file.write(p.split('Slot4')[1] + ';' + SubN4 + '\n')
    return

Note! This is new to me when saving data for a plugin, so please help with a tip or an example on this that I can find online.

Cheers,
Ashton

On 30/03/2017 at 08:01, xxxxxxxx wrote:

Hi Ashton,

to be honest I don't understand the question. I mean, you already write the contents of string widgets into a file; where's the problem with the opposite?

On 30/03/2017 at 08:37, xxxxxxxx wrote:

Hi Andreas,

I am saying I don't know how to do the opposite, which is the loading. It's like this: I have a button that is for opening your saved settings; you open it with an open file dialog, you select the file (which is the .txt), and it loads the UI settings that you have in the .txt. So how do you make it read the .txt and load the settings into the UI? Hope you understand what I am saying.
On 31/03/2017 at 07:58, xxxxxxxx wrote:

Hi Ashton,

in our C++ docs we do have a manual on GeDialog, which might shed some more light.

In order to set the values of string gadgets, GeDialog.SetString() is the function to use. Depending on your needs, InitValues() could be a good place to read the file and set these values, but it could of course also be done in reaction to a button press in GeDialog.Command().

Reading and writing files with standard Python functions is a bit beyond the scope of MAXON's SDK Team, but I'm sure the official Python docs can help here; see for example Chapter 7.2 Reading and Writing Files.
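Putting the pointers above together: loading is the mirror image of saving. Read the chosen file, split each "SlotN;value" line that Save_UI_Settings_Data wrote, and push each value into its gadget with GeDialog.SetString(), e.g. from InitValues() or from the Load button's branch in Command(). A sketch of the parsing half follows; the slot names and gadget IDs come from the asker's code, and the commented-out c4d calls are untested assumptions:

```python
def parse_settings(text):
    # Turn "Slot1;value" lines back into a dict, mirroring the save format.
    values = {}
    for line in text.splitlines():
        if ';' in line:
            slot, value = line.split(';', 1)
            values[slot] = value
    return values

# Inside the dialog, after reading the selected file into `data`:
#
#     values = parse_settings(data)
#     for slot, gadget in (('Slot1', self.UI_SN_1), ('Slot2', self.UI_SN_2),
#                          ('Slot3', self.UI_SN_3), ('Slot4', self.UI_SN_4)):
#         self.SetString(gadget, values.get(slot, ''))

print(parse_settings("Slot1;Ashton\nSlot2;Rig"))
# → {'Slot1': 'Ashton', 'Slot2': 'Rig'}
```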
https://plugincafe.maxon.net/topic/10039/13508_save-ui-data-and-load-ui-data-from-txt-file
Shutting down GlassFish remotely

We run a lot of tests of Metro on Hudson with GlassFish, but there's one common problem we had. Namely, test jobs often abort in the middle, leaving an application server running behind. This not only wastes memory, but it also wreaks havoc on successive builds that attempt to start the server on the same port. Obviously you can shut down a server if you have access to the asadmin script, but for programs like Hudson, having that sort of dependency is problematic.

I know Tomcat has a very easy way to shut down as long as you know the port number: you just need to send a magic word. So I started to wonder if I could do something similar for GlassFish. I did a bit of research, and I found that doing this is fairly easy with JMX. The following is the entire program:

public class Main {
    public static void main(String[] args) throws Exception {
        // shutdown("localhost:8686", "admin", "adminadmin");
        if(args.length!=3) {
            System.err.println("Usage: java -jar shutdown-gf.jar <hostAndPort> <adminUserName> <adminPassword>");
            System.err.println("  e.g., java -jar shutdown-gf.jar localhost:8686 admin adminadmin");
            System.exit(-1);
        }
        shutdown(args[0],args[1],args[2]);
        System.exit(0);
    }

    private static void shutdown(String hostAndPort, String username, String password) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://"+hostAndPort+"/jmxrmi");
        Map<String,Object> envs = new HashMap<String,Object>();
        envs.put(JMXConnector.CREDENTIALS,new String[]{username, password});
        MBeanServerConnection con = JMXConnectorFactory.connect(url,envs).getMBeanServerConnection();
        try {
            con.invoke(new ObjectName("amx:j2eeType=J2EEServer,name=server"),"stop",new Object[0],new String[0]);
        } catch (UnmarshalException e) {
            if(e.getCause() instanceof EOFException) {
                // to be expected, as the above would shut down the server.
            } else {
                throw e;
            }
        }
    }
}

For this to work you need to know three things.
The hostname/IP where GlassFish runs, the TCP port of JMX (normally 8686, but asadmin start-domain will tell you), and the user name and password of the domain administrator.

I hope to integrate this into a Hudson plugin that Rama and I have been writing, but that will be a topic for another day.

Comment by nosuch, 2011-05-25 01:18:

Kohsuke, thanks for this, it really helped a lot! Just want to know if you could help. After I shut the server down with the "stop" command, the J2EEServer's mbean state goes to 3, but I have no idea what 3 means. Where or how could I get a description of this? Searching the web I found that -1 is failed, 0 is stopped, and 1 is running, but could not find anything on what 3 is. Any help would be greatly appreciated.

Winston
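On the state numbers: the J2EE Management spec (JSR-77) defines its own values for StateManageable mbeans, which differ from the numbers found above. If the AMX J2EEServer mbean follows that spec — an assumption worth verifying against your server's documentation — the mapping would be:

```java
import java.util.Map;

public class J2eeStates {
    // JSR-77 StateManageable state values. This mapping is an assumption
    // here, not taken from the post; verify it against your server's docs.
    static final Map<Integer, String> STATES = Map.of(
            0, "STARTING",
            1, "RUNNING",
            2, "STOPPING",
            3, "STOPPED",
            4, "FAILED");

    static String name(int state) {
        return STATES.getOrDefault(state, "UNKNOWN");
    }

    public static void main(String[] args) {
        System.out.println(name(3)); // → STOPPED under this mapping
    }
}
```

Under this reading, a state of 3 after invoking "stop" would simply mean the server reached STOPPED.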
http://weblogs.java.net/blog/2007/10/25/shutting-down-glassfish-remotely
This checklist is meant for GDAL committers that review a contribution (e.g. a new GDAL or OGR driver) before committing it. Contributors are invited to read it as well, to make sure they have taken the necessary steps to make their contribution more easily integrated.

Reminder: the gist of the work is supposed to be done by the contributor, not by the reviewer. A contributor not willing to fix his work and take into account reviewer's remarks will probably not be a long-term asset for the project.

Code review checks

- All source files (.h, .c, .cpp, .py, etc...) should have a header with copyright attribution and the text of the GDAL X/MIT license.
- The contributor should have the right to contribute his work. This can be done by requesting him to send a public email to the gdal-dev mailing list mentioning he has the right to contribute his work under the X/MIT license.
- Pay attention if the driver depends on a third-party library. If it is the case, the compilation of the driver must be made conditional on the presence of the library. Drivers should try to re-use existing third-party library dependencies as much as possible, e.g. Expat for SAX XML parsing.
- The patch should be against trunk.
- For an OGR driver, check that the Open() method of the driver (often delegated to an Open() method of the datasource) is selective enough (i.e. it will not accept data files that are not meant for the driver), and robust enough (it will not crash for small variations w.r.t. content that it would recognize). Check that it can deal with unusual filenames. For a GDAL driver, similar checks, as well as for the optional Identify() method.
- Functions and methods should have a selective enough namespace ("GDAL" or "OGR" prefix, or use of C++ namespace) to avoid symbol collision.
- Developer guidelines defined in RFC8 should be followed.
- Check that the code is portable enough:
  - at the time of writing, no C++11 features;
  - independent of host endianness.
  - Use of CPL macros to do byte-swapping.
  - Be careful about the use of the "long" datatype, especially when reading/writing serialized data. It is 32-bit wide on gcc/clang 32-bit and MSVC 32/64-bit, but 64-bit wide on gcc/clang 64-bit.
- Ideally, the code should use CPL infrastructure when available, for example the VSI*L API for file I/O.
- Check that the contribution includes Unix and Windows makefiles.
- Check that the contribution includes documentation (frmt_XXXX.html, drv_XXXX.html, as well as links from the main format pages frmts/formats_list.html or ogr/ogrsf_frmts/ogr_formats.html). Documentation should, at minimum, say some words about the format handled by the driver and, when relevant, describe the particular syntax for the connection string, creation options and configuration options, link to the format description, and mention needed third-party libraries.
- Ideally, the contribution should include a Python script to add to the autotest suite, and one or more small samples (each < 20 KB) appropriate to go to autotest/gdrivers/data or autotest/ogr/data.
- If the driver depends on an optional third-party library, the autotest script should gracefully skip the tests when the driver is not available at run-time.

Compile-time checks

- The code should be as compilation-warning-free as possible.
- You can directly test Linux and Windows compilation with the GDAL Vagrant VM.

Run-time checks

- For an OGR driver, compile the test_ogrsf utility (cd apps; make test_ogrsf) and run sample files with it.
- Run the provided autotest scripts, natively and, if you can cope with the noise due to Python-related false positives, under Valgrind to detect memory use errors and leaks.
- Run the utilities (gdalinfo, gdal_translate, ogr2ogr) under Valgrind.
Post-commit checks

- Once committed, trunk is compiled on Linux, Mac OS X and cross-compiled mingw 32- and 64-bit Travis-CI instances, so you can check that the new code does not break those environments.
- Coverity Scan static analysis is run weekly on Tuesday, 22:00 MDT. Developers/maintainers can request access on the GDAL Project page.

Last modified on Aug 10, 2015 8:13:06 AM
https://trac.osgeo.org/gdal/wiki/ReviewerChecklist
>>>>> "J. - J< ? DenyHosts-2.6 Index: common/Makefile.common =================================================================== RCS file: /cvs/extras/common/Makefile.common,v retrieving revision 1.58 diff -u -r1.58 Makefile.common --- common/Makefile.common 18 May 2007 18:25:32 -0000 1.58 +++ common/Makefile.common 29 May 2007 22:44:18 -0000 @@ -139,7 +139,7 @@ PREP_ARCHES = $(addprefix prep-,$(ARCHES)) ## list all our bogus targets -.PHONY :: $(ARCHES) sources uploadsource upload export check build-check plague koji build test-srpm srpm tag force-tag verrel new clean patch prep compile install-short compile-short FORCE local +.PHONY :: $(ARCHES) sources uploadsource upload export check build-check plague koji build test-srpm srpm tag force-tag verrel new clean patch prep compile install-short compile-short FORCE local chain-restart chain-queue chain-go # The TARGETS define is meant for local module targets that should be # made in addition to the SOURCEFILES whenever needed @@ -393,6 +393,20 @@ build: plague endif +chain-restart: + @rm -f ~/.koji/chain-builds + @echo "Queue of chained builds cleared." + +chain-queue: build-check $(COMMON_DIR)/branches + @if [ ! -x "$(BUILD_CLIENT)" ]; then echo "Must have koji installed - see"; exit 1; fi + @echo -n 'cvs://cvs.fedoraproject.org/cvs/pkgs?$(CVS_REPOSITORY)#$(TAG) : ' >> ~/.koji/chain-builds + @echo "Chained build queued." + +chain-go: + @if [ ! -s ~/.koji/chain-builds ]; then echo "Must run chain-queue first to queue a chained build"; exit 1; fi + @$(BUILD_CLIENT) chain-build $(BUILD_FLAGS) $(TARGET) `cat ~/.koji/chain-builds` + @rm -f ~/.koji/chain-builds + # "make new | less" to see what has changed since the last tag was assigned new: - cvs diff -u -r$$(cvs log Makefile 2>/dev/null | awk '/^symbolic names:$$/ {getline; sub(/^[ \t]*/, "") ; sub (/:.*$$/, ""); print; exit 0}')
https://www.redhat.com/archives/fedora-devel-list/2007-May/msg01868.html
DP first attempts to interpret all names (which can include wildcard characters such as `*' and `&') in the context of devices, resolving to composite and atomic devices if they are registered with Device Access. Failing that, DP attempts to resolve the exact (non-wildcarded) names via normal Channel Access mechanisms. Names which are not resolvable in either namespace cause DP to continue to periodically interrogate the Process Variable (PV) namespace via Channel Access (CA)'s ca_search() mechanism.

FIGURE 1. - DP Namespace

Display pages can also be built up using drag-and-drop from other programs, such as MEDM. Process variable names can be dragged from active MEDM screens and dropped into desired positions on display pages, with the usual name resolution described above taking place. (Similarly, text can be dragged and dropped in the "Append:" text field, but this mechanism requires focus in the text field and a subsequent carriage return, hence direct dropping in the page is recommended.)

Alternatively, a ".dp" file can be built up using an ordinary text editor, with one device name or wildcard string per line. Pages thus constructed can then be saved by invoking File -> Save As... and entering the desired file name. The recommended suffix for such "device files" is ".dp". DP files generated with any of the above mechanisms can subsequently be read into DP by selecting the File -> Open... menu entry. The user is then presented with a file selection dialog box which allows the selection of the file of interest to open.

Rows can be deleted either by dragging to the scissors icon at the lower right corner of the window, or by selecting them (via MB1) and then cutting them from the page via the Edit -> Cut menu entry. To clear the entire page, the user can select either the File -> New menu entry or the Edit -> Clear menu entry.
When namespace entries are to be added to the page by text entry in the "Append" text field, they are added to the end of the page if no rows are selected. If a row or rows are selected, new entries are appended after the last selected row. Note that the output format is PostScript, so destination printers should be PostScript-capable. Also note that for file directed output, a unique name based on the date and time is automatically generated, but can be overridden by the user.
http://www.aps.anl.gov/epics/EpicsDocumentation/ExtensionsManuals/ParamDisp/dp.html
can be reproduced on both mono 3.4 and 3.6.

The following program causes the C# compiler to crash:

using System;
using System.Threading.Tasks;

class MainClass
{
    public static void Main(string[] args)
    {
        EnqueueTask(async delegate {
            await Task.Yield();
            return 0;
        });
    }

    static Task<T> EnqueueTask<T>(Func<Task<T>> func)
    {
        return null;
    }
}

The compiler error is "Error CS0584: Internal compiler error: Object reference not set to an instance of an object".

The same code works fine on Microsoft's compiler. The problem seems to be the usage of the "async delegate" syntax. No error is reported when the "delegate" keyword is changed to the lambda syntax "() =>". It also seems to be related to the use of generics; no error is reported if the type parameter "T" is removed from the EnqueueTask method.

*** This bug has been marked as a duplicate of bug 20849 ***

Already fixed in mono master
https://bugzilla.xamarin.com/21/21808/bug.html
Sudo
Revision as of 16:16, 29 June 2013

sudo ("substitute user do") allows a system administrator to delegate authority to give certain users (or groups of users) the ability to run some (or all) commands as root or another user while providing an audit trail of the commands and their arguments.

Rationale

Sudo is an alternative to su for running commands as root. Unlike su, which launches a root shell that allows all further commands root access, sudo instead grants temporary privilege escalation to a single command. By enabling root privileges only when needed, sudo usage reduces the likelihood that a typo or a bug in an invoked command will ruin the system. Sudo can also be used to run commands as other users; additionally, sudo logs all commands and failed access attempts for security auditing.

Installation

Install the sudo package, available in the official repositories:

# pacman -S sudo

To begin using sudo as a non-privileged user, it must be properly configured, so make sure you read the configuration section.

Usage

With sudo, users can prefix commands with sudo to run them with superuser (or other) privileges. For example, to use pacman:

$ sudo pacman -Syu

See the sudo manual for more information.

Configuration

View current settings

Run sudo -ll to print out the current sudo configuration.

Using visudo

The configuration file for sudo is /etc/sudoers. It should always be edited with the visudo command. visudo locks the sudoers file, saves edits to a temporary file, and checks that file's grammar before copying it to /etc/sudoers.

Warning: The sudoers file must always be free of syntax errors! Any error makes sudo unusable. Always edit it with visudo to prevent errors.
The default editor for visudo is vi. It will be used unless you set either the VISUAL or EDITOR environment variable (checked in that order) to the desired editor, e.g. vim. The command is run as root:

# VISUAL="/usr/bin/vim -p -X" visudo

You can permanently change the setting system-wide to e.g. vim by appending:

export VISUAL="/usr/bin/vim -p -X"

to your ~/.bashrc file. Note that this won't take effect for already-running shells. To change the editor of choice permanently only for visudo, add the following lines to /etc/sudoers, where vim is your preferred editor:

# Reset environment by default
Defaults env_reset
# Set default EDITOR to vim, and do not allow visudo to use EDITOR/VISUAL.
Defaults editor="/usr/bin/vim -p -X", !env_editor

Example Entries

To allow a user to gain full root privileges when he/she precedes a command with sudo, add the following line:

USER_NAME ALL=(ALL) ALL

To allow a user to run all commands as any user but only on the machine with hostname HOST_NAME:

USER_NAME HOST_NAME=(ALL) ALL

To allow members of group wheel sudo access:

%wheel ALL=(ALL) ALL

To disable asking for a password for user USER_NAME:

Defaults:USER_NAME !authenticate

Enable explicitly defined commands only for user USER_NAME on host HOST_NAME:

USER_NAME HOST_NAME=/usr/bin/halt,/usr/bin/poweroff,/usr/bin/reboot,/usr/bin/pacman -Syu

Note: Make sure to comment out the %wheel line above if your user is in this group, otherwise that line will still grant full sudo access.

Enable explicitly defined commands only for user USER_NAME on host HOST_NAME without password:

USER_NAME HOST_NAME= NOPASSWD: /usr/bin/halt,/usr/bin/poweroff,/usr/bin/reboot,/usr/bin/pacman -Syu

A detailed sudoers example can be found here. Otherwise, see the sudoers manual for detailed information.

Sudoers default file permissions

The owner and group for the sudoers file must both be 0. The file permissions must be set to 0440. These permissions are set by default, but if you accidentally change them, they should be changed back immediately or sudo will fail.
# chown -c root:root /etc/sudoers
# chmod -c 0440 /etc/sudoers

Password cache timeout

Users may wish to change the default timeout before the cached password expires. This is accomplished with the timestamp_timeout option in /etc/sudoers, which is in minutes. Set the timeout to 20 minutes:

Defaults:USER_NAME timestamp_timeout=20

Tips and tricks

File example

This example is especially helpful for those using terminal multiplexers like screen, tmux, or ratpoison, and those using sudo from scripts/cronjobs:

/etc/sudoers

Cmnd_Alias WHEELER = /usr/bin/lsof, /bin/nice, /bin/ps, /usr/bin/top, /usr/local/bin/nano, /usr/bin/ss, /usr/bin/locate, /usr/bin/find, /usr/bin/rsync
Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/nice, /usr/bin/ionice, /usr/bin/top, /usr/bin/kill, /usr/bin/killall, /usr/bin/ps, /usr/bin/pkill
Cmnd_Alias EDITS = /usr/bin/vim, /usr/bin/nano, /usr/bin/cat, /usr/bin/vi
Cmnd_Alias ARCHLINUX = /usr/bin/gparted, /usr/bin/pacman

root ALL = (ALL) ALL
USER_NAME ALL = (ALL) ALL, NOPASSWD: WHEELER, NOPASSWD: PROCESSES, NOPASSWD: ARCHLINUX, NOPASSWD: EDITS

Defaults !requiretty, !tty_tickets, !umask
Defaults visiblepw, path_info, insults, lecture=always
Defaults loglinelen = 0, logfile =/var/log/sudo.log, log_year, log_host, syslog=auth
Defaults mailto=webmaster@foobar.com, mail_badpass, mail_no_user, mail_no_perms
Defaults passwd_tries = 8, passwd_timeout = 1
Defaults env_reset, always_set_home, set_home, set_logname
Defaults !env_editor, editor="/usr/bin/vim:/usr/bin/vi:/usr/bin/nano"
Defaults timestamp_timeout=360
Defaults passprompt="Sudo invoked by [%u] on [%H] - Cmd run as %U - Password for user %p:"

Enabling Tab-completion in Bash

Tab-completion, by default, will not work when a user is initially added to the sudoers file.
For example, normally john only needs to type:

$ fire<Tab>

and the shell will complete the command for him as:

$ firefox

If, however, john is added to the sudoers file and he types:

$ sudo fire<Tab>

the shell will do nothing.

To enable Tab-completion with sudo, install the bash-completion package from the official repositories. See Bash#Auto-completion for more information. Alternatively, add the following to your ~/.bashrc:

complete -cf sudo

Run X11 apps using sudo

To allow sudo to start graphical applications in X11, you need to add:

Defaults env_keep += "HOME"

to visudo.

Disable per-terminal sudo

If you are annoyed by sudo's defaults that require you to enter your password every time you open a new terminal, disable tty_tickets:

Defaults !tty_tickets

Environment variables

If you have a lot of environment variables, or you export your proxy settings via export http_proxy="...", when using sudo these variables do not get passed to the root account unless you run sudo with the -E option.

$ sudo -E pacman -Syu

The recommended way of preserving environment variables is to append them to env_keep:

Defaults env_keep += "http_proxy"

To make your shell aliases work with sudo, add the following to your ~/.bashrc or /etc/bash.bashrc:

alias sudo='sudo '

Insults

Users can configure sudo to display clever insults when an incorrect password is entered instead of printing the default "wrong password" message. Find the Defaults line in /etc/sudoers and append "insults" after a comma to existing options. The final result might look like this:

#Defaults specification
Defaults insults

To test, type sudo -K to end the current session and let sudo ask for the password again.

Root password

Users can configure sudo to ask for the root password instead of the user password by adding "rootpw" to the Defaults line in /etc/sudoers:

Defaults timestamp_timeout=0,rootpw

Disable root login

With sudo installed and configured, users may wish to disable the root login.
Without root, attackers must first guess a user name configured as a sudoer as well as the user password. The account can be locked via passwd:

# passwd -l root

A similar command unlocks root:

$ sudo passwd -u root

Alternatively, edit /etc/shadow and replace root's password hash with "!", which disables root's password login. To make kdesu use sudo instead of su, add the following to ~/.kde4/share/config/kdesurc:

[super-user-command]
super-user-command=sudo

PolicyKit

When disabling the root account, it is necessary to change the PolicyKit configuration for local authorization to reflect that. The default is to ask for the root password, so that must be changed. With polkit-1, this can be achieved by editing /etc/polkit-1/localauthority.conf.d/50-localauthority.conf so that

AdminIdentities=unix-user:0

is replaced with something else, depending on the system configuration. It can be a list of users and groups, for example:

AdminIdentities=unix-group:wheel

or

AdminIdentities=unix-user:me;unix-user:mom;unix-group:wheel

For more information, see man pklocalauthority.

NetworkManager

Even with the above PolicyKit configuration you still need to configure a policy for NetworkManager. This is documented on the NetworkManager page of this wiki.

Troubleshooting

SSH TTY Issues

SSH does not allocate a tty by default when running a remote command. Without a tty, sudo cannot disable echo when prompting for a password. You can use ssh's -tt option to force it to allocate a tty (or -t twice). The Defaults option requiretty only allows the user to run sudo if they have a tty.

# Disable "ssh hostname sudo <cmd>", because it will show the password in clear text. You have to run "ssh -t hostname sudo <cmd>".
#
#Defaults requiretty

Display User Privileges

You can find out what privileges a particular user has with the following command:

$ sudo -lU yourusername

Or view your own with:

$ sudo -l

Matching Defaults entries for yourusername on this host: loglinelen=0, logfile=/var/log/sudo.log, log_year, syslog=auth, mailto=sqpt.webmaster@gmail.com, mail_badpass, mail_no_user, mail_no_perms, env_reset, always_set_home, tty_tickets, lecture=always, pwfeedback, rootpw, set_home

User yourusername may run the following commands on this host:
    (ALL) ALL
    (ALL) NOPASSWD: /usr/bin/lsof, /bin/nice, /usr/bin/ss, /usr/bin/su, /usr/bin/locate, /usr/bin/find, /usr/bin/rsync, /usr/bin/strace,
    (ALL) /bin/nice, /bin/kill, /usr/bin/nice, /usr/bin/ionice, /usr/bin/top, /usr/bin/kill, /usr/bin/killall, /usr/bin/ps, /usr/bin/pkill
    (ALL) /usr/bin/gparted, /usr/bin/pacman
    (ALL) /usr/local/bin/synergyc, /usr/local/bin/synergys
    (ALL) /usr/bin/vim, /usr/bin/nano, /usr/bin/cat
    (root) NOPASSWD: /usr/local/bin/synergyc

Permissive Umask

Sudo will union the user's umask value with its own umask (which defaults to 0022). This prevents sudo from creating files with more open permissions than the user's umask allows. While this is a sane default if no custom umask is in use, this can lead to situations where a utility run by sudo may create files with different permissions than if run by root directly. If errors arise from this, sudo provides a means to fix the umask, even if the desired umask is more permissive than the umask that the user has specified. Adding this (using visudo) will override sudo's default behavior:

Defaults umask = 0022
Defaults umask_override

This sets sudo's umask to root's default umask (0022) and overrides the default behavior, always using the indicated umask regardless of what umask the user has set.

Defaults Skeleton

At this link you can find a list of all the options available to use with the Defaults command in /etc/sudoers.
The same list is reproduced right below in a format optimized for copying and pasting it in your sudoers files and then making changes.

#Defaults always_set_home
# always_set_home: If enabled, sudo will set the HOME environment variable to the home directory of the target user (which is root unless the -u option is used). This effectively means that the -H op
# always_set_home is only effective for configurations where either env_reset is disabled or HOME is present in the env_keep list. This flag is off by default.
#Defaults authenticate
# authenticate: If set, users must authenticate themselves via a password (or other means of authentication) before they may run commands. This default may be overridden via the PASSWD and NOPASSWD t
#Defaults closefrom_override
# closefrom_override: If set, the user may use sudo's -C option which overrides the default starting point at which sudo begins closing open file descriptors. This flag is off by default.
#Defaults compress_io
# compress_io: If set, and sudo is configured to log a command's input or output, the I/O logs will be compressed using zlib. This flag is on by default when sudo is compiled with zlib support.
#Defaults env_editor
# env_editor: If set, visudo will use the value of the EDITOR or VISUAL environment variables before falling back on the default editor list. Note that this may create a security hole as it allows th
# separated list of editors in the editor variable. visudo will then only use the EDITOR or VISUAL if they match a value specified in editor. This flag is on by default.
#Defaults env_reset
# env_reset: If set, sudo will run the command in a minimal environment containing the TERM, PATH, HOME, MAIL, SHELL, LOGNAME, USER, USERNAME and SUDO_* variables. Any variables in the caller's envir
# in the file specified by the env_file option (if any). The default contents of the env_keep and env_check lists are displayed when sudo is run by root with the -V option. If the secure_path opti
# default.
#Defaults fast_glob
# fast_glob: Normally, sudo uses the glob(3) function to do shell-style globbing when matching path names. However, since it accesses the file system, glob(3) can take a long time to complete for som
# (automounted). The fast_glob option causes sudo to use the fnmatch(3) function, which does not access the file system to do its matching. The disadvantage of fast_glob is that it is unable to ma
# names that include globbing characters are used with the negation operator, '!', as such rules can be trivially bypassed. As such, this option should not be used when sudoers contains rules that
#Defaults fqdn
# fqdn: Set this flag if you want to put fully qualified host names in the sudoers file. I.e., instead of myhost you would use myhost.mydomain.edu. You may still use the short form if you wish (and
# sudo unusable if DNS stops working (for example if the machine is not plugged into the network). Also note that you must use the host's official name as DNS knows it. That is, you may not use a
# all aliases from DNS. If your machine's host name (as returned by the hostname command) is already fully qualified you shouldn't need to set fqdn. This flag is off by default.
#Defaults ignore_dot
# ignore_dot: If set, sudo will ignore '.' or '' (current dir) in the PATH environment variable; the PATH itself is not modified. This flag is off by default.
#Defaults ignore_local_sudoers
# ignore_local_sudoers: If set via LDAP, parsing of /etc/sudoers will be skipped. This is intended for Enterprises that wish to prevent the usage of local sudoers files so that only LDAP is used. Th
# present, /etc/sudoers does not even need to exist. Since this option tells sudo how to behave when no specific LDAP entries have been matched, this sudoOption is only meaningful for the cn=default
#Defaults insults
# insults: If set, sudo will insult users when they enter an incorrect password. This flag is off by default.
#Defaults log_host
# log_host: If set, the host name will be logged in the (non-syslog) sudo log file. This flag is off by default.
#Defaults log_input
# log_input: If set, sudo will run the command in a pseudo tty and log all user input. If the standard input is not connected to the user's tty, due to I/O redirection or because the command is part
# Input is logged to the directory specified by the iolog_dir option (/var/log/sudo-io by default) using a unique session ID that is included in the normal sudo log line, prefixed with TSID=. The i
# Note that user input may contain sensitive information such as passwords (even if they are not echoed to the screen), which will be stored in the log file unencrypted. In most cases, logging the
#Defaults log_output
# log_output: If set, sudo will run the command in a pseudo tty and log all output that is sent to the screen, similar to the script(1) command. If the standard output or standard error is not connec
# is also captured and stored in separate log files.
# Output is logged to the directory specified by the iolog_dir option (/var/log/sudo-io by default) using a unique session ID that is included in the normal sudo log line, prefixed with TSID=. The
# Output logs may be viewed with the sudoreplay(8) utility, which can also be used to list or search the available logs.
#Defaults log_year
# log_year: If set, the four-digit year will be logged in the (non-syslog) sudo log file. This flag is off by default.
#Defaults long_otp_prompt
# long_otp_prompt: When validating with a One Time Password (OTP) scheme such as S/Key or OPIE, a two-line prompt is used to make it easier to cut and paste the challenge to a local window. It's not
#Defaults mail_always
# mail_always: Send mail to the mailto user every time a user runs sudo. This flag is off by default.
#Defaults mail_badpass
# mail_badpass: Send mail to the mailto user if the user running sudo does not enter the correct password. This flag is off by default.
#Defaults mail_no_host
# mail_no_host: If set, mail will be sent to the mailto user if the invoking user exists in the sudoers file, but is not allowed to run commands on the current host. This flag is off by default.
#Defaults mail_no_perms
# mail_no_perms: If set, mail will be sent to the mailto user if the invoking user is allowed to use sudo but the command they are trying is not listed in their sudoers file entry or is explicitly den
#Defaults mail_no_user
# mail_no_user: If set, mail will be sent to the mailto user if the invoking user is not in the sudoers file. This flag is on by default.
#Defaults noexec
# noexec: If set, all commands run via sudo will behave as if the NOEXEC tag has been set, unless overridden by a EXEC tag. See the description of NOEXEC and EXEC below as well as the "PREVENTING SHE
#Defaults path_info
# path_info: Normally, sudo will tell the user when a command could not be found in their PATH environment variable. Some sites may wish to disable this as it could be used to gather information on t
# the executable is simply not in the user's PATH, sudo will tell the user that they are not allowed to run it, which can be confusing. This flag is on by default.
#Defaults passprompt_override
# passprompt_override: The password prompt specified by passprompt will normally only be used if the password prompt provided by systems such as PAM matches the string "Password:". If passprompt_over
#Defaults preserve_groups
# preserve_groups: By default, sudo will initialize the group vector to the list of groups the target user is in. When preserve_groups is set, the user's existing group vector is left unaltered. The
# default.
#Defaults pwfeedback
# pwfeedback: By default, sudo reads the password like most other Unix programs, by turning off echo until the user hits the return (or enter) key. Some users become confused by this as it appears to
# the user presses a key. Note that this does have a security impact as an onlooker may be able to determine the length of the password being entered. This flag is off by default.
#Defaults requiretty
# requiretty: If set, sudo will only run when the user is logged in to a real tty. When this flag is set, sudo can only be run from a login session and not via other means such as cron(8) or cgi-bin
#Defaults root_sudo
# root_sudo: If set, root is allowed to run sudo too. Disabling this prevents users from "chaining" sudo commands to get a root shell by doing something like "sudo sudo /bin/sh". Note, however, that
# real additional security; it exists purely for historical reasons. This flag is on by default.
#Defaults rootpw
# rootpw: If set, sudo will prompt for the root password instead of the password of the invoking user. This flag is off by default.
#Defaults runaspw
# runaspw: If set, sudo will prompt for the password of the user defined by the runas_default option (defaults to root) instead of the password of the invoking user. This flag is off by default.
#Defaults set_home
# set_home: If enabled and sudo is invoked with the -s option the HOME environment variable will be set to the home directory of the target user (which is root unless the -u option is used). This eff
# is enabled, so set_home is only effective for configurations where either env_reset is disabled or HOME is present in the env_keep list. This flag is off by default.
#Defaults set_logname
# set_logname: Normally, sudo will set the LOGNAME, USER and USERNAME environment variables to the name of the target user (usually root unless the -u option is given). However, since some programs (
# may be desirable to change this behavior. This can be done by negating the set_logname option. Note that if the env_reset option has not been disabled, entries in the env_keep list will override
#Defaults set_utmp
# set_utmp: When enabled, sudo will create an entry in the utmp (or utmpx) file when a pseudo-tty is allocated. A pseudo-tty is allocated by sudo when the log_input, log_output or use_pty flags are e
# the tty, time, type and pid fields updated. This flag is on by default.
#Defaults setenv
# setenv: Allow the user to disable the env_reset option from the command line via the -E option. Additionally, environment variables set via the command line are not subject to the restrictions impo
# variables in this manner. This flag is off by default.
#Defaults shell_noargs
# shell_noargs: If set and sudo is invoked with no arguments it acts as if the -s option had been given. That is, it runs a shell as root (the shell is determined by the SHELL environment variable if
# is off by default.
#Defaults stay_setuid
# stay_setuid: Normally, when sudo executes a command the real and effective UIDs are set to the target user (root by default). This option changes that behavior such that the real UID is left as the
# systems that disable some potentially dangerous functionality when a program is run setuid. This option is only effective on systems with either the setreuid() or setresuid() function. This flag
#Defaults targetpw
# targetpw: If set, sudo will prompt for the password of the user specified by the -u option (defaults to root) instead of the password of the invoking user. In addition, the timestamp file name will
# passwd database as an argument to the -u option. This flag is off by default.
#Defaults tty_tickets
# tty_tickets: If set, users must authenticate on a per-tty basis. With this flag enabled, sudo will use a file named for the tty the user is logged in on in the user's time stamp directory. If disa
#Defaults umask_override
# umask_override: If set, sudo will set the umask as specified by sudoers without modification. This makes it possible to specify a more permissive umask in sudoers than the user's own umask and matc
# user's umask and what is specified in sudoers. This flag is off by default.
#Defaults use_pty
# use_pty: If set, sudo will run the command in a pseudo-pty even if no I/O logging is being done. A malicious program run under sudo could conceivably fork a background process that retains to the u
# that impossible. This flag is off by default.
#Defaults utmp_runas
# utmp_runas: If set, sudo will store the name of the runas user when updating the utmp (or utmpx) file. By default, sudo stores the name of the invoking user. This flag is off by default.
#Defaults visiblepw
# visiblepw: By default, sudo will refuse to run if the user must enter a password but it is not possible to disable echo on the terminal. If the visiblepw flag is set, sudo will prompt for a passwor
# somehost sudo ls" since rsh(1) does not allocate a tty. This flag is off by default.
#Defaults closefrom
# closefrom: Before it executes a command, sudo will close all open file descriptors other than standard input, standard output and standard error (ie: file descriptors 0-2). The closefrom option can
#Defaults passwd_tries
# passwd_tries: The number of tries a user gets to enter his/her password before sudo logs the failure and exits. The default is 3.
#Defaults loglinelen
# loglinelen: Number of characters per line for the file log. This value is used to decide when to wrap lines for nicer log files. This has no effect on the syslog log file, only the file log. The
#Defaults passwd_timeout
# passwd_timeout: Number of minutes before the sudo password prompt times out, or 0 for no timeout. The timeout may include a fractional component if minute granularity is insufficient, for example 2
#Defaults timestamp_timeout
# timestamp_timeout: Number of minutes that can elapse before sudo will ask for a passwd again. The timeout may include a fractional component if minute granularity is insufficient, for example 2.5.
# timestamp will never expire. This can be used to allow users to create or delete their own timestamps via sudo -v and sudo -k respectively.
#Defaults umask
# umask: Umask to use when running the command. Negate this option or set it to 0777 to preserve the user's umask. The actual umask that is used will be the union of the user's umask and the value o
# running a command. Note on systems that use PAM, the default PAM configuration may specify its own umask which will override the value set in sudoers.
#Defaults badpass_message
# badpass_message: Message that is displayed if a user enters an incorrect password. The default is Sorry, try again. unless insults are enabled.
#Defaults editor
# editor: A colon (':') separated list of editors allowed to be used with visudo. visudo will choose the editor that matches the user's EDITOR environment variable if possible, or the first editor in
#Defaults iolog_dir
# iolog_dir: The top-level directory to use when constructing the path name for the input/output log directory. Only used if the log_input or log_output options are enabled or when the LOG_INPUT or L
# directory. The default is "/var/log/sudo-io".
# The following percent (`%') escape sequences are supported:
# %{runas_user} - expanded to the login name of the user the command will be run as (e.g. root)
# %{runas_group} - expanded to the group name of the user the command will be run as (e.g. wheel)
# %{hostname} - expanded to the local host name without the domain name
# %{command} - expanded to the base name of the command being run
# In addition, any escape sequences supported by the system's strftime() function will be expanded.
# To include a literal `%' character, the string `%%' should be used.
#Defaults iolog_file
# iolog_file: The path name, relative to iolog_dir, in which to store input/output logs when the log_input or log_output options are enabled or when the LOG_INPUT or LOG_OUTPUT tags are present for a
#() function.
#Defaults mailsub
# mailsub: Subject of the mail sent to the mailto user. The escape %h will expand to the host name of the machine. Default is *** SECURITY information for %h ***.
#Defaults noexec_file
# noexec_file: This option is no longer supported. The path to the noexec file should now be set in the /etc/sudo.conf file.
#Defaults passprompt
# passprompt: The default prompt to use when asking for a password; can be overridden via the -p option or the SUDO_PROMPT environment variable. The following percent (`%') escape sequences are suppo
# Password:.
#Defaults runas_default
# runas_default: The default user to run commands as if the -u option is not specified on the command line. This defaults to root.
#Defaults syslog_badpri
# syslog_badpri: Syslog priority to use when user authenticates unsuccessfully. Defaults to alert.
# The following syslog priorities are supported: alert, crit, debug, emerg, err, info, notice, and warning.
#Defaults syslog_goodpri
# syslog_goodpri: Syslog priority to use when user authenticates successfully. Defaults to notice.
# See syslog_badpri for the list of supported syslog priorities.
#Defaults sudoers_locale
# sudoers_locale: Locale to use when parsing the sudoers file, logging commands, and sending email. Note that changing the locale may affect how sudoers is interpreted. Defaults to "C".
#Defaults timestampdir
# timestampdir: The directory in which sudo stores its timestamp files. The default is /var/db/sudo.
#Defaults timestampowner
# timestampowner: The owner of the timestamp directory and the timestamps stored therein. The default is root.
#Defaults env_file
# env_file: The env_file option specifies the fully qualified path to a file containing variables to be set in the environment of the program being run. Entries in this file should either be of the f
# quotes. Variables in this file are subject to other sudo environment settings such as env_keep and env_check.
#Defaults exempt_group
# exempt_group: Users in this group are exempt from password and PATH requirements. The group name specified should not include a % prefix. This is not set by default.
#Defaults group_plugin
# group_plugin: A string containing a sudoers group plugin with optional arguments. This can be used to implement support for the nonunix_group syntax described earlier. The string should consist of
# configuration arguments the plugin requires. These arguments (if any) will be passed to the plugin's initialization function. If arguments are present, the string must be enclosed in double quot
# For example, given /etc/sudo-group, a group file in Unix group format, the sample group plugin can be used:
# Defaults group_plugin="sample_group.so /etc/sudo-group"
# For more information see sudo_plugin(5).
#Defaults lecture
#.
#Defaults lecture_file
# lecture_file: Path to a file containing an alternate sudo lecture that will be used in place of the standard lecture if the named file exists. By default, sudo uses a built-in lecture.
#Defaults listpw
#.
#Defaults logfile
# logfile: Path to the sudo log file (not the syslog log file). Setting a path turns on logging to a file; negating this option turns it off. By default, sudo logs via syslog.
#Defaults mailerflags
# mailerflags: Flags to use when invoking mailer. Defaults to -t.
#Defaults mailerpath
# mailerpath: Path to mail program used to send warning mail. Defaults to the path to sendmail found at configure time.
#Defaults mailfrom
# mailfrom: Address to use for the "from" address when sending warning and error mail. The address should be enclosed in double quotes (") to protect against sudo interpreting the @ sign. Defaults t
#Defaults mailto
# mailto: Address to send warning and error mail to. The address should be enclosed in double quotes (") to protect against sudo interpreting the @ sign. Defaults to root.
#Defaults secure_path
# secure_path: Path used for every command run from sudo. If you don't trust the people running sudo to have a sane PATH environment variable you may want to use this. Another use is if you want to
# option are not affected by secure_path. This option is not set by default.
#Defaults syslog
#.
#Defaults verifypw
#.
#Defaults env_check
# env_check: Environment variables to be removed from the user's environment if the variable's value contains % or / characters. This can be used to guard against printf-style format vulnerabilities
# value without double-quotes. The list can be replaced, added to, deleted from, or disabled by using the =, +=, -=, and ! operators respectively. Regardless of whether the env_reset option is ena
# they pass the aforementioned check. The default list of environment variables to check is displayed when sudo is run by root with the -V option.
#Defaults env_delete
# env_delete: Environment variables to be removed from the user's environment when the env_reset option is not in effect. The argument may be a double-quoted, space-separated list or a single value w
# +=, -=, and ! operators respectively. The default list of environment variables to remove is displayed when sudo is run by root with the -V option. Note that many operating systems will remove p
#Defaults env_keep
# env_keep: Environment variables to be preserved in the user's environment when the env_reset option is in effect. This allows fine-grained control over the environment sudo-spawned processes will r
# quotes. The list can be replaced, added to, deleted from, or disabled by using the =, +=, -=, and ! operators respectively. The default list of variables to keep is displayed when sudo is run by
# root with the -V option.
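As a rough illustration of the "Permissive Umask" behaviour described earlier, the following Python sketch computes the umask sudo would effectively apply. The function name and the simplified model are mine, not part of sudo: by default the result is the union (bitwise OR) of the user's umask and sudo's own, while umask_override uses the sudoers value alone.

```python
# Simplified model of sudo's umask handling (see the "Permissive Umask"
# section): the effective umask is the union of the user's umask and
# sudo's umask, so sudo never creates files more permissive than the
# invoking user's umask allows.
SUDO_DEFAULT_UMASK = 0o022

def effective_umask(user_umask, sudo_umask=SUDO_DEFAULT_UMASK, umask_override=False):
    """Return the umask the spawned command would run with."""
    if umask_override:
        # With "Defaults umask_override", the sudoers umask is used as-is,
        # even if it is more permissive than the user's umask.
        return sudo_umask
    # Union of masked-out permission bits = bitwise OR of the two masks.
    return user_umask | sudo_umask

print(oct(effective_umask(0o077)))                       # user's stricter mask wins
print(oct(effective_umask(0o002)))                       # union falls back to 0o022
print(oct(effective_umask(0o077, umask_override=True)))  # override ignores user's mask
```

This mirrors why a restrictive personal umask (e.g. 0077) carries over into files created under sudo unless umask_override is configured.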
https://wiki.archlinux.org/index.php?title=Sudo&diff=prev&oldid=264706
Write a function to find if a given string C is an interleaving of two other given strings A and B.

Interleaving: C is said to be an interleaving of A and B if it contains all characters of A and B, and the order of the characters within each individual string (A and B) is preserved.

Examples

a) Input strings:
A : AB
B : CD
C : ACBD
Output : True

b) Input strings:
A : AB
B : CD
C : ADBC
Output : False; even though all characters are present, the order of B's characters is changed (D appears before C).

Time complexity: O(m+n), where m and n are the lengths of the two strings.

Algorithm

Given strings A, B and C, check whether C is an interleaving of A and B:

1. Pick the characters of C one by one.
2. If the current character of C matches neither the current character of A nor the current character of B, return False.
3. If it matches the current character of A, advance A by one character; likewise, if it matches the current character of B, advance B.
4. Continue this process until every character of C has been consumed.
5. If every character of C matched, and both A and B have been fully consumed (so the length of C equals the length of A plus the length of B), return True.

C++ Program

#include <bits/stdc++.h>
using namespace std;

// Main function to check
int main() {
    const char *A = "CBD";
    const char *B = "AGH";
    const char *C = "ACGHBD";

    // For all characters in C
    while (*C != 0) {
        // If the character is equal to the current character in A
        if (*A == *C) {
            A++; // Advance A to its next character
        }
        // If the character is equal to the current character in B
        else if (*B == *C) {
            B++; // Advance B to its next character
        }
        // If the character matches neither A nor B
        else {
            cout << "not an interleaving" << endl;
            return 0; // return false
        }
        C++; // Advance C to its next character
    }

    // After finishing all characters in C, if characters are
    // still left in A or B, it is not an interleaving
    if (*A || *B) {
        cout << "not an interleaving" << endl;
        return 0;
    }

    // Else return true
    cout << "ACGHBD is an interleaving of CBD and AGH" << endl;
    return 0;
}
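The same greedy scan can be packaged as a reusable function so both examples above can be checked programmatically. This is a sketch; the function name isInterleaving is ours, not from the article:

```cpp
#include <string>

// Greedy interleaving check: walk C once, consuming a matching character
// from the front of A or B at each step.
bool isInterleaving(const std::string &a, const std::string &b, const std::string &c) {
    size_t i = 0, j = 0; // positions in a and b
    for (char ch : c) {
        if (i < a.size() && a[i] == ch) {
            ++i; // consume a character of A
        } else if (j < b.size() && b[j] == ch) {
            ++j; // consume a character of B
        } else {
            return false; // matches neither string
        }
    }
    // A, B and C must all be fully consumed
    return i == a.size() && j == b.size();
}
```

For example, isInterleaving("AB", "CD", "ACBD") is true while isInterleaving("AB", "CD", "ADBC") is false. One caveat: the greedy scan can report a false negative when A and B begin with the same characters (for instance, C = "XXZXXY" is a valid interleaving of A = "XXY" and B = "XXZ", yet the scan rejects it), so a dynamic-programming check is needed for full generality.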
https://www.tutorialcup.com/interview/string/string-interleaving-two-strings-not.htm
This is a method of randomly sampling n items from a set of M items, with equal probability, where M >= n and M, the number of items, is unknown until the end. This means that the equal-probability sampling should be maintained for all successive items > n as they become available (although the content of successive samples can change).
- The algorithm
- Select the first n items as the sample as they become available;
- For the i-th item where i > n, have a random chance of n/i of keeping it. If failing this chance, the sample remains the same. If not, have it randomly (1/n) replace one of the previously selected n items of the sample.
- Repeat #2 for any subsequent items.
- The Task
- Create a function s_of_n_creator that, given n, the maximum sample size, returns a function s_of_n that takes one parameter, item.
- Function s_of_n, when called with successive items, returns an equi-weighted random sample of up to n of its items so far, each time it is called, calculated using Knuth's Algorithm S.
- Test your functions by printing and showing the frequency of occurrences of the selected digits from 100,000 repetitions of:
- Use the s_of_n_creator with n == 3 to generate an s_of_n.
- Call s_of_n with each of the digits 0 to 9 in order, keeping the returned three digits of its random sampling from its last call with argument item=9.
Note: A class taking n and generating a callable instance/function might also be used.
- Reference - The Art of Computer Programming, Vol 2, 3.4.2 p.142 #include <iostream> #include <functional> #include <vector> #include <cstdlib> #include <ctime> template <typename T> std::function<std::vector<T>(T)> s_of_n_creator(int n) { std::vector<T> sample; int i = 0; return [=](T item) mutable { i++; if (i <= n) { sample.push_back(item); } else if (std::rand() % i < n) { sample[std::rand() % n] = item; } return sample; }; } int main() { std::srand(std::time(NULL)); int bin[10] = {0}; for (int trial = 0; trial < 100000; trial++) { auto s_of_n = s_of_n_creator<int>(3); std::vector<int> sample; for (int i = 0; i < 10; i++) sample = s_of_n(i); for (int s : sample) bin[s]++; } for (int x : bin) std::cout << x << std::endl; return 0; } - Output: 30052 29740 30197 30223 29857 29688 30095 29803 30098 30247 Content is available under GNU Free Documentation License 1.2.
https://tfetimes.com/c-knuths-algorithm-s/
use MyMaths; my $l = MyMaths->new(1.2); my $r = MyMaths->new(3.4); print "A: ", $l + $r, "\n"; use myint; print "B: ", $l + $r, "\n"; { no myint; print "C: ", $l + $r, "\n"; } print "D: ", $l + $r, "\n"; no myint; print "E: ", $l + $r, "\n"; to give the output A: 4.6 B: 4 C: 4.6 D: 4 E: 4.6 i.e., where use myint; is in effect, addition operations are forced to integer, whereas by default they are not, with the default behaviour being restored via no myint; The minimal implementation of the package MyMaths would be something like this: package MyMaths; use warnings; use strict; use myint(); use overload '+' => sub { my ($l, $r) = @_; # Pass 1 to check up one call level from here if (myint::in_effect(1)) { int($$l) + int($$r); } else { $$l + $$r; } }; sub new { my ($class, $value) = @_; bless \$value, $class; } 1; Note how we load the user pragma myint with an empty list () to prevent its import being called. The interaction with the Perl compilation happens inside package myint: package myint; use strict; use warnings; sub import { $^H{"myint/in_effect"} = 1; } sub unimport { $^H{"myint/in_effect"} = 0; } sub in_effect { my $level = shift // 0; my $hinthash = (caller($level))[10]; return $hinthash->{"myint/in_effect"}; } 1; As pragmata are implemented as modules, like any other module, use myint; becomes BEGIN { require myint; myint->import(); } and no myint; is BEGIN { require myint; myint->unimport(); } Hence the import and unimport routines are called at compile time for the user's code. User pragmata store their state by writing to the magical hash %^H, hence these two routines manipulate it. The state information in %^H is stored in the optree, and can be retrieved read-only at runtime with caller(), at index 10 of the list of returned results. In the example pragma, retrieval is encapsulated into the routine in_effect(), which takes as parameter the number of call frames to go up to find the value of the pragma in the user's script. 
This uses caller() to determine the value of $^H{"myint/in_effect"} when each line of the user's script was called, and therefore provides the correct semantics in the subroutine implementing the overloaded addition.

There is only a single %^H, but arbitrarily many modules that want to use its scoping semantics. To avoid stepping on each other's toes, they need to be sure to use different keys in the hash. It is therefore conventional for a module to use only keys that begin with the module's name (the name of its main package) and a "/" character. After this module-identifying prefix, the rest of the key is entirely up to the module: it may include any characters whatsoever. For example, a module Foo::Bar should use keys such as Foo::Bar/baz and Foo::Bar/$%/_!. Modules following this convention all play nicely with each other.

The Perl core uses a handful of keys in %^H which do not follow this convention, because they predate it. Keys that follow the convention won't conflict with the core's historical keys.

Don't attempt to store references to data structures as integers which are retrieved via caller and converted back, as this will not be threadsafe. Accesses would be to the structure without locking (which is not safe for Perl's scalars), and either the structure has to leak, or it has to be freed when its creating thread terminates, which may be before the optree referencing it is deleted, if other threads outlive it.
http://search.cpan.org/dist/perl-5.17.5/pod/perlpragma.pod
On Fri, Oct 11, 2002 at 11:44:15AM -0400, Nathan Hawkins wrote: > Done. The attached patch is against cvs checkout from this morning. I > left out the arch specific patches to glibc itself. They're still pretty > large. Baring any typos, this should be committed now. I'm a little curious why: ifneq ($(DEB_HOST_GNU_SYSTEM),linux) echo "slibdir = /lib" >> $(objdir)/configparms echo "rootsbindir = /sbin" >> $(objdir)/configparms echo "sysconfdir = /etc" >> $(objdir)/configparms endif isn't done by the Linux archs. At first glance they seem like the right options for there too. -- learning from failures is nice in theory... but in practice, it sucks :) - Wolfgang Jaehrling
https://lists.debian.org/debian-glibc/2002/10/msg00129.html
Securing Models

On a report server, report models are used as data sources for both creating and using ad hoc reports. You can secure report models in three ways: through the report server folder namespace, through model item security, and through database security. Because security for report models is multi-layered, a user who can view a model in the folder hierarchy might encounter other restrictions that impose limits on how that model is used at design time and run time. The ability to use a model as a report data source depends on the following factors:

Role-based security on a model (that is, the ability to view a model in the report server folder hierarchy).

Role-based security on the report that uses the model as a data source. If a user cannot access the report, he or she might not be able to view the data that the model provides (in Reporting Services, data from a model is viewable only in reports; third-party applications can expose model data in other ways).

Security on items within the model.

Database security at the view, table, or column level.

As with all items that are stored on a report server, you can define item-level role assignments that determine whether a user can view or manage a report model. Users who have permission to view a model can see it in the report server folder hierarchy, read a limited amount of information about the model in the General properties page (for example, when it was created or modified), and query the model by clicking through links in any ad hoc report that uses the model as a data source. Users who have permission to manage a model can delete, rename, and update the model. Typically, model management tasks also require the ability to publish new models, but the ability to do that is actually conveyed through role assignments on folders, where the folder role assignment determines whether users can add items to it.
Users who have permission to view a published model cannot open it directly to view its contents or download it to the file system. At run time, all interaction with the report model is through the report that uses it. Model item security allows you to control access to specific parts of a model. To configure model item security, use SQL Server Management Studio. After you enable model item security, you can create role assignments on specific nodes in the model namespace. For more information, see Model Item Security Page (Report Manager). A report model namespace is represented as a hierarchical structure that includes a root node, entities, model roles, and fields. It also includes folders and perspectives that you can use to organize (but not secure) model items. When you view the model in Management Studio, you can browse the hierarchical structure and specify role assignments at different levels. You can specify role assignments on the root node of a report model to control access to the entire model, or on parts of a model to vary access permissions on selected branches. As with report server folder namespace security, the model namespace supports inherited security for items lower in the tree structure. Model item security is off by default. When model item security is not enabled, all permissions for viewing the data that the model represents are determined through role assignments on the model and report in the report server folder hierarchy. Model item security is transparent to the user. If a user does not have access to a particular branch of the model hierarchy, that portion of the model is not presented to the user in the report. It cannot be used for data exploration, nor can it return data in a report. With model item security, the report server modifies the query that is sent to the data source to exclude any portion of the model that is off limits to the user. Database security provides the third layer of security in a model-driven report. 
If you restrict access to tables or columns, the database will return an access denied error for all unauthorized access. If you include in your model any tables or columns that are subject to database security, a database error message will be returned if a user accesses a model item that maps to a table or column that he or she is not authorized to view. While database security at the table or column level is necessary in some scenarios, it is important to consider how it affects ad hoc report navigation. A user who gets a database error message while navigating a report must retrace his or her steps to get back to the portion of the model to which he or she has access.

Use Management Studio to secure these parts of a model:

root node
folders
entities
model roles (where the term "role" refers to the relationship between entities)
fields

You cannot secure perspectives as a whole, but you can secure the model items within the perspective. Security is inherited based on the model item's security. For example, if the model item can only be accessed by administrators within the model, then the model item can only be accessed by administrators when it appears in the perspective. Report model security is separate from security you define on the report server folder hierarchy and at the system level. The root node of a model is not accessed or secured through the folder hierarchy.

As an alternative to restricting access through role assignments, you can use the Hidden property to prevent users from seeing portions of a model. If you do not want any users to see a model item, change the Hidden property for the item to true in Model Designer. Hiding an item does not remove it from model calculations or relationships. For example, if you hide a field that is used in an expression, the field is still used in the expression even if users cannot see it. Hiding an item hides it for all users.
If you want to vary visibility and access by user or group, use role assignments rather than the Hidden property to secure the item.

You can secure items in a model from Report Manager. To secure model items, the report model must be deployed on the report server.

1. In Report Manager, navigate to the folder that contains the model.
2. Hover your mouse over the name of the model, then click the arrow to open the menu and select Security.
3. In the Model Properties page, click Model Item Security.
4. Select the Secure individual model items independently of this model checkbox.
5. Select the root node. A role assignment is required on the root node.
6. Click Assign read permission to the following users and groups.
7. Type the list of users or groups separated by a semicolon (;).
8. Click Apply.
9. Navigate to the next entity, relationship, field, or folder that you want to secure.
10. Repeat steps 6 through 8.
https://technet.microsoft.com/en-us/library/ms156505(v=sql.105).aspx
Creating the Azure Storage Account

The first thing we need to do is to create an Azure Storage account in the Azure Portal. Once logged into the Portal, you'll want to click on the big green plus sign for New in the top left. Next, you'll want to select Data + Storage and then Storage to get to the configuration blade for your new storage account. Here, you'll want to enter a unique name for your Azure Storage account. Note that this name must be globally unique and must be a valid URL. You'll also want to make sure that the physical location you select is closest to the consumers of your data as costs can increase based on the proximity of the consumer to the storage region. You can leave the rest of the information with the defaults and then click the Create button. Azure will grind away for a bit to create your storage account as you watch an animation on your home screen. There's a much more in-depth article specifically on Azure Storage Accounts at the Azure site that you may find interesting, though don't be alarmed that the screenshots there differ from what I have here or what you might actually encounter on the Azure Portal itself. The Azure folks have been tweaking the look of the Azure Preview Portal pretty regularly.

Eventually, the Azure Storage account will be created and you will be presented with the dashboard page for your new storage account.

Adding the Container

In Azure Blob Storage, each blob must live inside a Container. A Container is just a way to group blobs and is used as part of the URL that is created for each blob. An Azure Storage account can contain unlimited Containers and each Container can contain unlimited blobs. So, let's add a Container so we'll have somewhere to store our images. In the Summary area, you want to now click on the Containers link to show your containers for this storage account and then click the white plus icon just below the Containers header. In the Add a Container blade, enter a name for your container and select Blob and click OK.
Once your container is created, it will be displayed in the list of containers. Copy the URL that is created for your container from the URL column. Let's go ahead and copy that into our web.config file as we'll soon need it.

Configuration Settings

Open the Visual Studio solution we created in the last Azure Bit so we can wire up saving of our UploadedImage into our newly created Azure Storage account. While you've still got that URL in your clipboard, let's paste that into the appSettings of our web.config file as ImageRootPath. Also, go ahead and add an appSetting for your Container name as we will need that as well.

<appSettings>
  <add key="ImageRootPath" value="" />
  <add key="ImagesContainer" value="images" />
</appSettings>

Since we already have web.config open, let's go ahead and grab the connection string for our storage account and add that to our connectionStrings in web.config. In the main dashboard for your storage account, you'll want to click the All Settings link and then select Keys so that you can see the various key settings for your storage account including the connection strings. You should be seeing something that looks roughly like the below. Note that I've masked some of my super secret keys in this screenshot. It's very important that you guard your keys as they can be used to gain unfettered access to your storage accounts if they are compromised. You can always regenerate new keys by using the buttons just under the Manage keys header in the Azure Portal if you find that your keys have been compromised. Also, there are various rotation strategies you can employ to automate the exchanging of primary and secondary keys, but that is beyond the scope of this series.
Now copy the value given for Primary Connection String and add this to the connectionStrings section of web.config:

<connectionStrings>
  <add name="BlobStorageConnectionString"
       connectionString="DefaultEndpointsProtocol=https;AccountName=imagemanipulator;AccountKey=XXXXXXXXXXXXX" />
</connectionStrings>

Next, we'll update ImageService to grab the values we just placed in web.config and assign these to private fields in ImageService. In addition, now that we have these values, we can construct and assign the URL for our UploadedImage in the CreateUploadedImage method.

public class ImageService : IImageService
{
    private readonly string _imageRootPath;
    private readonly string _containerName;
    private readonly string _blobStorageConnectionString;

    public ImageService()
    {
        _imageRootPath = ConfigurationManager.AppSettings["ImageRootPath"];
        _containerName = ConfigurationManager.AppSettings["ImagesContainer"];
        _blobStorageConnectionString = ConfigurationManager.ConnectionStrings["BlobStorageConnectionString"].ConnectionString;
    }

    public async Task<UploadedImage> CreateUploadedImage(HttpPostedFileBase file)
    {
        if ((file != null) && (file.ContentLength > 0) && !string.IsNullOrEmpty(file.FileName))
        {
            byte[] fileBytes = new byte[file.ContentLength];
            await file.InputStream.ReadAsync(fileBytes, 0, Convert.ToInt32(file.ContentLength));

            return new UploadedImage
            {
                ContentType = file.ContentType,
                Data = fileBytes,
                Name = file.FileName,
                Url = string.Format("{0}/{1}", _imageRootPath, file.FileName)
            };
        }
        return null;
    }
}

Wiring the Image Service to Upload

For this next section, we will need to bring in some additional packages via NuGet. To do this, right-click on the web project and select Manage NuGet Packages, then search for WindowsAzure.Storage and click Install. This will bring in all of the required NuGet packages needed for interaction with Azure Storage, so be sure to click "I Accept" when prompted.
The flow for saving our image to Azure Blob Storage is that we first need to get a reference to our Storage Account and use that to create a reference to our Container. Once we have our Container, we configure it to allow public access and we use it to get a reference to a block of memory for our blob. Lastly, we tell the block that we are about to insert an image and finally we kick off the upload of the image to our Container using the name we generated earlier. Let's now see what that looks like in some (heavily-commented) code.

First, add the new method to the IImageService interface:

public interface IImageService
{
    Task<UploadedImage> CreateUploadedImage(HttpPostedFileBase file);
    Task AddImageToBlobStorageAsync(UploadedImage image);
}

And now, implement the new method in ImageService:

public async Task AddImageToBlobStorageAsync(UploadedImage image)
{
    // get the container reference
    var container = GetImagesBlobContainer();

    // using the container reference, get a block blob reference and set its type
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(image.Name);
    blockBlob.Properties.ContentType = image.ContentType;

    // finally, upload the image into blob storage using the block blob reference
    var fileBytes = image.Data;
    await blockBlob.UploadFromByteArrayAsync(fileBytes, 0, fileBytes.Length);
}

private CloudBlobContainer GetImagesBlobContainer()
{
    // use the connection string to get the storage account
    var storageAccount = CloudStorageAccount.Parse(_blobStorageConnectionString);

    // using the storage account, create the blob client
    var blobClient = storageAccount.CreateCloudBlobClient();

    // finally, using the blob client, get a reference to our container
    var container = blobClient.GetContainerReference(_containerName);

    // if we had not created the container in the portal, this would automatically create it for us at run time
    container.CreateIfNotExists();

    // by default, blobs are private and would require your access key to download.
    // You can allow public access to the blobs by making the container public.
    container.SetPermissions(
        new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });

    return container;
}

One thing to note about the public access we set on our Container (since I know that the PublicAccess = BlobContainerPublicAccessType.Blob might make some folks nervous). Containers are created by default with private access. We are updating our Container to public to allow read-only anonymous access of our images. We will still need our private access keys in order to delete or edit images in the Container.

The last step for the upload process is to actually call the AddImageToBlobStorageAsync method from our HomeController's Upload action method.

[HttpPost]
public async Task<ActionResult> Upload(FormCollection formCollection)
{
    if (Request != null)
    {
        HttpPostedFileBase file = Request.Files["uploadedFile"];
        var uploadedImage = await _imageService.CreateUploadedImage(file);
        await _imageService.AddImageToBlobStorageAsync(uploadedImage);
    }
    return View("Index");
}

At this point, you should be able to run the application and upload an image and it should save to Azure Blob Storage and you should see it appear in your web page. You can right-click on the image and choose Inspect Element or View Properties (depending on your browser) to see that the image is actually being served from your Azure Storage account.

Debugging with Azure Storage Explorer

Now that you have the site working and saving to Azure Blob Storage, I'll point you to one of my favorite debugging tools that I use when I am working with Azure Storage. There are several tools that exist for viewing and manipulating the contents of Azure Storage accounts, but my favorite is Azure Storage Explorer. Once you install Azure Storage Explorer, you'll want to click on the Add Account button in the top menu.
To get the values needed for the Add Storage Account dialog, you'll want to return to the All Settings blade for your storage account in the Azure Portal. The first two settings are the ones you want for creating a new account in Azure Storage Explorer. Here's the mapping:

Once you have the storage account configured in Azure Storage Explorer, you can view the contents of your container(s), like this:

You can double-click on any of the items and the View Blob dialog will open, showing you the Properties Tab with everything you could possibly want to know about your blob item; it even allows you to change most of the properties directly from the interface. If you then click the Content Tab, select Image, and then click the View button, you can see your image:

Let's Get it to the Cloud

As a final step, we want to publish all of this new Azure Image Manipulator goodness that we've created in these first two Azure Bits back out to our Azure website. You do this by right-clicking on the web project in Solution Explorer and choosing "Publish..". Once Visual Studio completes the publishing process, you should be able to run your Azure website and upload an image to Azure Blob Storage just the same as when you were running against localhost earlier.

Now that we have our original image safely stored away in Azure Blob Storage, we need to give notice that our image is ready for processing. We'll do this by placing a message in an Azure Queue. In the next Azure Bit, I'll walk through setting up an Azure Queue and inserting a message into the queue signaling that our image is ready for manipulation.

Did you miss the first Azure Bit? You can read Azure Bits #1 – Up and Running at the Wintellect DevCenter.
https://www.wintellect.com/azure-bits-2-saving-the-image-to-azure-blob-storage/
How to use Matlab with WEKA

Matlab is a great environment for prototyping algorithms; WEKA is a great environment for setting up and running experiments. Here is how to use them together. (Of course, you need to already have Matlab, WEKA, and Java installed!)

First, you or your system administrator will need to download and install Stefan Muller's JMatLink. This very nice piece of work allows one to use the Matlab engine directly from Java. The JMatLink site contains makefiles for Windows and Solaris. Here is the makefile I used to build JMatLink on Linux.

Once you've built JMatLink, try it out by running the included TestGui app (e.g. "java TestGui"). When you start TestGui, you should see the Matlab splash screen pop up. Once in TestGui, the first thing you need to do is open your connection to Matlab by clicking on the "Open" button. When you open a Matlab instance, you should see the Matlab splash screen pop up. Once the Matlab instance has started up, you can try typing commands into the text area and then hitting the "Eval(String)" button. A good one to try is "peaks", which will pop up a graph of a surface.

Once you've verified that TestGui is working, you can try integrating Matlab into WEKA. If WEKA is not already installed on your system, you or your system administrator will need to download and install it from here. Before you can start adding your Matlab stuff to WEKA, you'll need to understand the WEKA class hierarchy and design patterns (detailed in the documentation). Once you've done that, here is the basic procedure for incorporating a Matlab script into WEKA:
import JMatLink; You'll also need, within the body of your routine, to create a JMatLink communication object: private JMatLink engine; At the point in your program where you want to use matlab, you'll need to instantiate the JMatLink object: engine = new JMatLink(); and then open it: engine.engOpen(); To actually use matlab on data that you are using in WEKA, you'll need to send an array to the matlab engine: engine.engPutArray("array",array); You can then operate on the array, just like you were in matlab: engine.engEvalString("training_set=array(:,1:4)'"); engine.engEvalString("result = kmeans(training_set,"+m_NumClusters+")"); (Note that m_NumClusters is a variable in the WEKA code.) And then you can retrieve your result from matlab: double[][] result = engine.engGetArray("result"); And close the connection to matlab (Remember that each instance uses a matlab license!): engine.engClose(); That's it! If you have a specific problem (and you're a LANS member), please feel free to email me and I'll help you out.
http://www.lans.ece.utexas.edu/userguides/matlab-weka.html
tgext.debugbar 0.2.4

Provides a debug toolbar for TurboGears2

About Debug Toolbar

tgext.debugbar provides a Debug Toolbar for the TurboGears2 framework. Exposed sections are:

- Controller and Rendering time reporting
- Controller Profiling
- Request Parameters, Headers, Attributes and Environ
- SQLAlchemy Queries reporting and timing
- Explain and Show result of performed SQLAlchemy queries
- List mounted controllers, their path and exposed methods
- Log Messages

Installing

tgext.debugbar can be installed either from PyPI or from Bitbucket:

easy_install tgext.debugbar

should just work for most users.

Using it with Pluggables

Like any other pluggable extension, the debugbar can be activated through the pluggables interface inside your app_cfg.py:

from tgext.pluggable import plug
plug(base_config, 'tgext.debugbar')

The debugbar will then check for the debug config option, disabling itself when it is false.

Using it without Pluggables

While the pluggables interface makes it convenient to pass options to the debugbar, you might want to avoid using it for various reasons. In such cases you can enable the debugbar by adding the following lines to your project's app_cfg.py:

from tgext.debugbar import enable_debugbar
enable_debugbar(base_config)
Inventing Mode The DebugBar provides the inventing mode, such feature is inspired by the Inventing On Principle to speed up experimenting and prototyping with your website. Whenever the inventing mode is enable your web page will automatically update when you change it, being it a controller, template or css change. The inventing mode can be enabled by passing the inventing=True option to the plug call which activates the debugbar. If you want to disable inventing mode for CSS files, you can enable the inventing mode and then pass the inventing_css=False option. - Author: Alessandro Molina - Keywords: turbogears2.widgets - License: MIT - Categories - Package Index Owner: amol - Package Index Maintainer: pedersen - DOAP record: tgext.debugbar-0.2.4.xml
https://pypi.python.org/pypi/tgext.debugbar/0.2.4
Ok - newbie and not an experienced coder by any stretch of the imagination. I'm trying to create "back" and "next" buttons for a dynamic page. I'm following the tutorial at: to the letter! I've changed the dataset ID and the "link to item" value to the field key and end up with this javascript:

import {local} from "wix-storage";

const linkField = "link-Artwork-Images-title"; // replace this value

$w.onReady(function () {
    $w("#Artwork-Images").onReady(() => {
        const numberOfItems = $w("#Artwork-Images").getTotalCount();
        $w("#Artwork-Images").getItems(0, numberOfItems)
            .then((result) => {
                const dynamicPageURLs = result.items.map(item => item[linkField]);
                local.setItem('dynamicPageURLs', dynamicPageURLs);
            })
            .catch((err) => {
                console.log(err.code, err.message);
            });
    });
});

However, when I copy & paste this code into the page tab of the Code panel I keep getting the following error message: "ESLint failed to validate as an error occurred unknown character at postition 9" ??? Can anyone assist please as this is only the 1st stage in the process?

PS - if by "position 9" it means line 9, this is the code which appears on that line:

$w("#Artwork-Images").onReady(() => {

Hi, is '#Artwork-Images' the name of your dataset or database?
Liran.

Hi Liran - the database name is "#Artwork-Images" - when I click onto the dataset settings on the index page the dataset name is coming up as "#Artwork-Images dataset" - is that why I'm getting the error message?

Could be... Please open the properties panel (tool -> properties panel), then select the Dataset, then change its id accordingly or copy it from there to the code.
Liran.

I've just done that. Am now getting an error message on Line 15 stating: "local" is undefined ?
Ignore last comment Liran - I've managed to solve that part - thanks - I'm moving onto the next step On the next step in the code panel I am now getting the following error messages: "#previous" is not a valid selector name and "#back" is not a valid selector name (both the buttons on the dynamic item page have been named this and both are in lower case)? If I ignore the error messages and preview the item page I am getting: Loading the code for the Artwork-Images (Title) page. To debug this code, open chiye.js in Developer Tools.TypeError: $w(...).disable is not a functionWix code SDK Warning: The alt parameter that is passed to the alt method cannot be set to null or undefined. Ok - I have managed to sort out the code and get it to work - however, what I actually want is code which allows the user to select a "back" or "previous" button and be re-directed BACK to the URL they just came from - NOT the previous item in a dataset. Is this possible please? Oh... Then why is the browser back button not good enough? Liran. I really wanted to link it to a simple "X" on the dynamic image page - so that as soon as the item page is closed they are taken back to the index page they came from - this must be possible with code? I have managed to get the previous and next buttons to work but I am getting several error messages on the page code even though it is working which reads: "To debug this code, open chiye.js in Developer Tools.Wix code SDK Warning: The text parameter that is passed to the text method cannot be set to null or undefined. " This warning means you're assigning null or undefined to a text (could be $w('#text1').text =...). Regarding the 'X' button. I assume you have different index pages, so you'll need to 'remember' the page they came from. You can try using wixStorage for that. Liran. I do have different index pages and need to "remember" the page. Is there a tutorial anywhere on how to use wixStorage for this purpose? will a "history.back" script work? 
Hi, Unfortunately we do not have a tutorial that combines wix-window and wix-storage. However you can find a tutorial that uses wix-storage to communicate between pages here, which will probably guide you in the right direction. To get the current page url simply call: It is not possible to directly access the History object with Wix Code. Thanks. Will take a look.
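The approach the staff suggest (remembering the referring index page with wix-storage, then navigating back to it) can be sketched in plain JavaScript. This is a simulation of the pattern only: in real Velo/Corvid code the storage would come from import {session} from 'wix-storage' and the URLs from wixLocation.url and wixLocation.to(); the function names here are illustrative.

```javascript
// A Map stands in for wix-storage's session storage in this simulation.
const session = new Map();

// Call from each index page's onReady, passing the current wixLocation.url.
function rememberCurrentPage(url) {
  session.set('cameFrom', url);
}

// Call from the dynamic item page's "X" button handler, then navigate
// with wixLocation.to(backUrl('/')) in real Velo code.
function backUrl(fallback) {
  return session.get('cameFrom') || fallback;
}
```

The fallback covers visitors who land on the item page directly, without going through an index page first.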
https://www.wix.com/corvid/forum/community-discussion/create-previous-and-next-buttons-for-a-dynamic-item-page
This page is likely outdated (last edited on 20 Dec 2009). Visit the new documentation for updated content.

System.Messaging

At this point Mono has alpha support for the System.Messaging namespace; currently (as of release 2.4.2.1) Mono uses AMQP through RabbitMQ to partially implement the messaging APIs.

Getting Started

To get started with System.Messaging, first you will need an AMQP implementation that will provide the messaging implementation. While in theory any AMQP implementation should work, only RabbitMQ has been tested so far; binaries are available for Windows and various flavours of GNU/Linux.

To implement a System.Messaging client, you will need to reference the following 4 dlls, available in the 2.4.2.1 release of Mono:

- System.Messaging.dll (the MS compatible API)
- Mono.Messaging.dll (the Mono Messaging SPI)
- Mono.Messaging.RabbitMQ.dll (bindings from Mono.Messaging to the RabbitMQ server)
- RabbitMQ.Client.dll (the RabbitMQ supplied .NET client library)

Simply implement a System.Messaging client as you would against the MS APIs. When running the client it is necessary to define the MONO_MESSAGING_PROVIDER environment variable. It should be set to:

Mono.Messaging.RabbitMQ.RabbitMQMessagingProvider,Mono.Messaging.RabbitMQ

which is the full class name of the messaging provider class that is the entry point for the implementation of the Mono.Messaging SPI. In coming versions of Mono the value for the environment variable will be shortened.

Due to a bug (which will be fixed in a later release), you will also need to add the mono assemblies to the MONO_PATH environment variable, e.g.:

MONO_PATH=/usr/lib/mono/2.0

Example code:

using System;
using System.Messaging;

namespace messagingexample
{
    class MainClass
    {
        public static void Main (string[] args)
        {
            // '.'        Represents connecting to localhost.
            // 'private$' Indicates a private queue, not really needed for
            //            mono/rabbitmq, but is supported for MS compatibility.
            // 'testq'    Is the name of the queue.
            string path = @".\private$\testq";
            MessageQueue queue = MessageQueue.Exists (path)
                ? new MessageQueue (path)
                : MessageQueue.Create (path);
            queue.Formatter = new BinaryMessageFormatter ();

            while (true) {
                Console.Write ("Please enter a message (empty line to exit): ");
                string input = Console.ReadLine ();
                if (input != null && input.Length > 0) {
                    Message m = new Message (input);
                    queue.Send (m);
                    Message response = queue.Receive ();
                    Console.WriteLine ("Received: {0}", response.Body);
                } else {
                    Console.WriteLine ("Exiting...");
                    break;
                }
            }
        }
    }
}

What’s Supported?

The Microsoft System.Messaging APIs (for better or worse) weren’t designed to be a generic messaging API, unlike something like JMS. Therefore some parts of the System.Messaging API are difficult to implement, as they tie too closely to the MSMQ server to be reimplemented in a generic fashion. Some of the key areas that can’t be supported are:

- MessageQueue.ReadHandle
- MessageQueue.WriteHandle
- MessageQueue security and permissions support.
- MessageEnumerators are mostly supported, but some of the transactional remove methods can’t be implemented.

However most of the key aspects are implemented, including send/receive, async send/receive, transactions and persistence. The current implementation doesn’t support authentication against the RabbitMQ server, so it relies on using the default guest account.

AMQP version 0.8 (currently supported by RabbitMQ) does not have support for queue discovery, i.e. looking up/finding queues by name. AMQP uses a ‘declare’ command which creates the queue if it doesn’t exist. To support the MessageQueue.Exists("queue name") method, the Mono/RabbitMQ integration keeps a local cache of the queues that have been created. So the first time MessageQueue.Exists is called for a specific queue after starting, it will always return false until MessageQueue.Create is called, at which point the name of the queue is added to the cache.
What’s In Progress

- Moving the Mono.Messaging.RabbitMQ to .NET 2.0 APIs to support System.Configuration.
- Use System.Configuration to support authentication and allow different queues to map to different users.
- Performance and Load testing.

Getting Involved

What’s most needed right now is users and testers. We really need a picture of what Mono users most want to see from its implementation of System.Messaging. Grab the latest release, try it out and raise bugs against the parts that don’t work.
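Pulling the environment setup from the Getting Started section together, a launch script might look like this (the client binary name is illustrative, not from the page):

```shell
# Point Mono.Messaging at the RabbitMQ provider and fix up MONO_PATH,
# as described in Getting Started, then launch the client.
export MONO_MESSAGING_PROVIDER="Mono.Messaging.RabbitMQ.RabbitMQMessagingProvider,Mono.Messaging.RabbitMQ"
export MONO_PATH=/usr/lib/mono/2.0
# mono messagingexample.exe   # hypothetical client binary built from the example
```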
https://www.mono-project.com/archived/systemmessaging/
StringKit is a new, simple and fast way to investigate and modify strings in Swift – the next level of string manipulation.

Installation

Requirements

- iOS 9.0+ | macOS 10.11+ | tvOS 9.0+ | watchOS 2.0+ | Linux
- Xcode 8.1+
- Swift 3.1+

Manual

- Download the ZIP archive.
- Add the Source folder or the StringKit.framework file to your project.
- That's it. :]

Dependency Managers

CocoaPods

- Add pod 'StringKit', '~> 0.9.0' to your Podfile.
- Run pod update -> StringKit should be installed now.
- You are finished! You can work with the new .workspace file now. :]

Carthage

- Create a Cartfile in your project directory.
- Add github "rainerniemann/StringKit" ~> 0.9.0 to your Cartfile.
- Go to your project directory and run carthage update --platform iOS in your terminal (for iOS).
- Open the output folder with open carthage and drag and drop the StringKit.framework file into your project.
- You are done. :]

Swift Package Manager

1. Create a Package.swift file in your project directory.
2. Add the following (or just the dependency) to your Package.swift file:

import PackageDescription

let package = Package(name: "YOUR_APPLICATIONS_NAME", targets: [], dependencies: [
    .Package(url: "", versions: Version(0,9,0) ... Version(0,9,0))
])
Latest podspec { "name": "StringKit", "version": "0.9.0", "summary": "StringKit is a new, simple and fast way to investigate and modify strings in Swift - The next level of string manipulation.", "description": "StringKit is a new, simple and fast way to investigate and modify strings in Swift - The next level of string manipulation.", "license": { "type": "MIT", "file": "LICENSE.md" }, "homepage": "", "authors": { "Rainer Niemann": "[email protected]" }, "social_media_url": "", "documentation_url": "", "platforms": { "ios": "9.0", "osx": "10.11", "tvos": "9.0", "watchos": "2.0" }, "source": { "git": "", "tag": "0.9.0" }, "source_files": "Sources/*.swift", "ios": { "vendored_frameworks": "Frameworks/iOS/StringKit.framework" }, "requires_arc": true, "pod_target_xcconfig": { "SWIFT_VERSION": "3.1" }, "pushed_with_swift_version": "3.1" } Fri, 25 Aug 2017 00:00:04 +0000
https://tryexcept.com/articles/cocoapod/stringkit
Have you seen this unofficial Java API for Google Translator? So, I was thinking that it could be interesting to mix this stuff with the Java Speech API (JSAPI). JSAPI was divided to support two things: speech recognizers and synthesizers. Speech synthesis is the process of generating human speech from written text for a specific language. Speech recognition is the process of converting human speech to words/commands. This converted text can be used or interpreted in different ways (an interesting and simple definition from this article). It seems that there are not too many open source projects that care about the recognizer part. I have found a very interesting one called Sphinx, but did not have time to try it yet.

I was thinking how cool it would be to have open source software that makes a talk between two different people possible: you say something in one language, it translates it to another language and says it. Has anybody seen anything like that? Non-commercial? For VoIP?

So, I worked on part of a demo, but only on the synthesizer part; the text input is manual. I used FreeTTS for that. Basically this piece of code gets a word input in Portuguese, translates it to English and then says the word.
package speech;

import com.google.api.translate.Language;
import com.google.api.translate.Translate;
import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Main {
    public static void main(String[] args) {
        VoiceManager voiceManager = VoiceManager.getInstance();
        Voice voice = voiceManager.getVoice("kevin16");
        voice.allocate();
        String text = null;
        do {
            try {
                System.out.println("Type a word in Portuguese and listen to it in English -> ");
                text = new BufferedReader(new InputStreamReader(System.in)).readLine();
                voice.speak(Translate.translate(text, Language.PORTUGESE, Language.ENGLISH));
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        } while (!text.equals("!quit"));
        voice.deallocate();
        System.exit(0);
    }
}

If you got excited about working on an open source project like that, go ahead and write something! I can guarantee your fun!

Really cool idea! I don't know of any app doing that, but I would be interested in contributing to such a project! Lots of possibilities, but I wonder how efficient this can be, because automatic translation fails on grammar syntax ...
Posted by: alois on November 11, 2008 at 07:13 AM

Hello alois, I wish I had time right now to start a project like that. Anyway, if anyone gets interested, I can help in some way too. We would definitely find problems with the grammar syntax, but we can still start something... Cheers!
Posted by: brunogh on November 14, 2008 at 05:21 AM

Hai, I tried this freetts example, but the audio file is not detected. Tell me how I can overcome this problem....
Posted by: chandrasekar85 on February 16, 2009 at 03:46 AM

Hello, please make sure you have set it up right. Otherwise try its users list.
Posted by: brunogh on February 16, 2009 at 04:20 AM
http://weblogs.java.net/blog/brunogh/archive/2008/11/playing_with_tr.html
import java.util.Random;

public class flip {
    int head = 0;
    int tail = 0;

    public static void main (String[] args){
        double start = 0;
        double end = 1;
        double random = new Random().nextDouble();
        double result = start + (random * (end - start));
        System.out.println(result);
        if (result < 0.5){
            System.out.println("head");
        } else {
            // nextDouble() is always < 1.0, so everything from 0.5 up is a tail
            System.out.println("tail");
        }
    }
}

Currently I am trying to make the coin flip 10 times and display how many times heads and tails come up. However, I can't use any loops, so I know I have to call my flip logic 10 times. I just don't know how to lay that out properly and make all the data from the 10 method calls print out. I know I've got to store the counts in my field variables and then have a System.out.println print out the numbers in heads and tails. Any help please? Thank you.
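A minimal sketch of the approach described above: counters kept in fields, the flip logic in its own method, and ten explicit calls instead of a loop. The class and method names here are illustrative, not from the original post.

```java
import java.util.Random;

public class FlipDemo {
    static int heads = 0;
    static int tails = 0;
    static final Random rng = new Random();

    // one coin flip: below 0.5 counts as heads, otherwise tails
    static void flipOnce() {
        if (rng.nextDouble() < 0.5) {
            heads++;
        } else {
            tails++;
        }
    }

    // ten flips without a loop: the method is simply called ten times
    static int[] runTenFlips() {
        heads = 0;
        tails = 0;   // reset so repeated runs start fresh
        flipOnce(); flipOnce(); flipOnce(); flipOnce(); flipOnce();
        flipOnce(); flipOnce(); flipOnce(); flipOnce(); flipOnce();
        return new int[] { heads, tails };
    }

    public static void main(String[] args) {
        int[] counts = runTenFlips();
        System.out.println("heads: " + counts[0] + ", tails: " + counts[1]);
    }
}
```

However the randomness falls, the two counters always sum to 10.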
http://www.gamedev.net/topic/632604-virtual-coin-flip/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
Caching

When working with AiiDA, you might sometimes re-run calculations which were already successfully executed. Because this can waste a lot of computational resources, you can enable AiiDA to cache calculations, which means that it will re-use existing calculations if a calculation with the same inputs is submitted again.

When a calculation is cached, a copy of the original calculation is created. This copy will keep the input links of the new calculation. The outputs of the original calculation are also copied, and linked to the new calculation. This allows the new calculation to be a separate Node in the provenance graph and, critically, preserves the acyclicity of the graph.

Caching is also implemented for Data nodes. This is not very useful in practice (yet), but is an easy way to show how the caching mechanism works:

In [1]: from __future__ import print_function

In [2]: from aiida.orm.data.str import Str

In [3]: n1 = Str('test string')

In [4]: n1.store()
Out[4]: u'test string'

In [5]: n2 = Str('test string')

In [6]: n2.store(use_cache=True)
Out[6]: u'test string'

In [7]: print('UUID of n1:', n1.uuid)
UUID of n1: 956109e1-4382-4240-a711-2a4f3b522122

In [8]: print('n2 is cached from:', n2.get_extra('_aiida_cached_from'))
n2 is cached from: 956109e1-4382-4240-a711-2a4f3b522122

As you can see, passing use_cache=True to the store method enables using the cache. The fact that n2 was created from n1 is stored in the _aiida_cached_from extra of n2. When running a JobCalculation through the Process interface, you cannot directly set the use_cache flag when the calculation node is stored internally. Instead, you can pass the _use_cache flag to the run or submit method.

Caching is not implemented for workchains and workfunctions. Unlike calculations, they can not only create new data nodes, but also return existing ones. When copying a cached workchain, it is not clear which node should be returned without actually running the workchain.
This is explained in more detail in the section Caching and the Provenance Graph.

Configuration

Of course, using caching would be quite tedious if you had to set use_cache manually everywhere. To fix this, the default for use_cache can be set in the .aiida/cache_config.yml file. You can specify a global default, or enable / disable caching for specific calculation or data classes. An example configuration file might look like this:

profile-name:
    default: False
    enabled:
        - aiida.orm.calculation.job.simpleplugins.templatereplacer.TemplatereplacerCalculation
        - aiida.orm.data.str.Str
    disabled:
        - aiida.orm.data.float.Float

This means that caching is enabled for TemplatereplacerCalculation and Str, and disabled for all other classes. In this example, manually disabling aiida.orm.data.float.Float is actually not needed, since the default: False configuration means that caching is disabled for all classes unless it is manually enabled. Note also that the fully qualified class import name (e.g., aiida.orm.data.str.Str) must be given, not just the class name (Str). This is to avoid accidentally matching classes with the same name. You can get this name by combining the module name and class name, or (usually) from the string representation of the class:

In [9]: Str.__module__ + '.' + Str.__name__
Out[9]: 'aiida.orm.data.str.Str'

In [10]: str(Str)
Out[10]: "<class 'aiida.orm.data.str.Str'>"

Note that this is not the same as the type string stored in the database.

How are cached nodes matched?

To determine whether a given node is identical to an existing one, a hash of the content of the node is created. If a node of the same class with the same hash already exists in the database, this is considered a cache match. You can manually check the hash of a given node with the .get_hash() method. Once a node is stored in the database, its hash is stored in the _aiida_hash extra, and this is used to find matching nodes.
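The lookup described above (store a hash of the node content, then match new nodes against it) can be illustrated with a small self-contained sketch. This is not AiiDA code: real node hashes also cover module versions, repository contents and, for calculations, the input hashes, and the names below are invented for the example.

```python
import hashlib
import json

def node_hash(attributes):
    # Hash a node's content; sorting keys makes the hash
    # independent of attribute ordering.
    payload = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

_cache = {}  # stands in for the _aiida_hash lookup in the database

def store(attributes, use_cache=False):
    """Store a 'node'; with use_cache=True, reuse an equivalent stored node."""
    h = node_hash(attributes)
    if use_cache and h in _cache:
        return _cache[h], True           # cache hit: equivalent node exists
    node = {"attributes": attributes, "hash": h}
    _cache[h] = node
    return node, False
```

Storing the same content twice with use_cache=True then yields the already-stored node, mirroring the Str example above.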
By default, this hash is created from:

- all attributes of a node, except the _updatable_attributes
- the __version__ of the module which defines the node class
- the content of the repository folder of the node
- the UUID of the computer, if the node has one

In the case of calculations, the hashes of the inputs are also included. When developing calculation and data classes, there are some methods you can use to determine how the hash is created:

- To ignore specific attributes, a Node subclass can have a _hash_ignored_attributes attribute. This is a list of attribute names which are ignored when creating the hash.
- For calculations, the _hash_ignored_inputs attribute lists inputs that should be ignored when creating the hash.
- To add things which should be considered in the hash, you can override the _get_objects_to_hash method. Note that doing so overrides the behavior described above, so you should make sure to use the super() method.
- Pass a keyword argument to .get_hash. These are passed on to aiida.common.hashing.make_hash. For example, the ignored_folder_content keyword is used by the JobCalculation to ignore the raw_input subfolder of its repository folder.

Additionally, there are two methods you can use to disable caching for particular nodes:

- The _is_valid_cache() method determines whether a particular node can be used as a cache. This is used for example to disable caching from failed calculations.
- Node classes have a _cacheable attribute, which can be set to False to completely switch off caching for nodes of that class. This avoids performing queries for the hash altogether.

There are two ways in which the hash match can go wrong: false negatives, where two nodes should have the same hash but do not, or false positives, where two different nodes have the same hash. It is important to understand that false negatives are highly preferable, because they only increase the runtime of your calculations, as if caching was disabled.
False positives however can break the logic of your calculations. Be mindful of this when modifying the caching behaviour of your calculation and data classes.

What to do when caching is used when it shouldn't

In general, the caching mechanism should trigger only when the output of a calculation will be exactly the same as if it is run again. However, there might be some edge cases where this is not true. For example, if the parser is in a different python module than the calculation, the version number used in the hash will not change when the parser is updated. While the "correct" solution to this problem is to increase the version number of a calculation when the behavior of its parser changes, there might still be cases (e.g. during development) when you manually want to stop a particular node from being cached. In such cases, you can follow these steps to disable caching:

1. If you suspect that a node has been cached in error, check that it has a _aiida_cached_from extra. If that's not the case, it is not a problem of caching.
2. Get all nodes which match your node, and clear their hash:

   for n in node.get_all_same_nodes():
       n.clear_hash()

3. Run your calculation again. Now it should not use caching.

If you instead think that there is a bug in the AiiDA implementation, please open an issue (with enough information to be able to reproduce the error, otherwise it is hard for us to help you) in the AiiDA GitHub repository.

Caching and the Provenance Graph

The goal of the caching mechanism is to speed up AiiDA calculations by re-using duplicate calculations. However, the resulting provenance graph should be exactly the same as if caching was disabled. This has important consequences on the kind of caching operations that are possible.

The provenance graph consists of nodes describing data, calculations and workchains, and links describing the relationship between these nodes. We have seen that the hash of a node is used to determine whether two nodes are equivalent.
To successfully use a cached node however, we also need to know how the new node should be linked to its parents and children. In the case of a plain data node, this is simple: copying a data node from an equivalent node should not change its links, so we just need to preserve the links which this new node already has.

For calculations, the situation is a bit more complex: the node can have inputs and creates new data nodes as outputs. Again, the new node needs to keep its existing links. For the outputs, the calculation needs to create a copy of each node and link these as its outputs. This makes it look as if the calculation had produced these outputs itself, without caching.

Finally, workchains can create links not only to nodes which they create themselves, but also to nodes created by a calculation that they called, or even their ancestors. This is where caching becomes impossible. Consider the following example (using workfunctions for simplicity):

from aiida.orm.data.int import Int
from aiida.work.workfunction import workfunction

@workfunction
def select(a, b):
    return b

d = Int(1)

r1 = select(d, d)
r2 = select(Int(1), Int(1))

The two select workfunctions have the same inputs as far as their hashes go. However, the first call uses the same input node twice, while the second one has two different inputs. If the second call should be cached from the first one, it is not clear which of the two input nodes should be returned. While this example might seem contrived, the conclusion is valid more generally: because workchains can return nodes from their history, they cannot be cached. Since even two equivalent workchains (with the same inputs) can have a different history, there is no way to deduce which links should be created on the new workchain without actually running it.

Overall, this limitation is acceptable: the runtime of AiiDA workchains is usually dominated by time spent inside expensive calculations.
Since these can be avoided with the caching mechanism, caching still improves the runtime a lot and reduces the required computer resources.
http://aiida-core.readthedocs.io/en/latest/concepts/caching.html
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)

Description of problem:
A binary compiled on ia32 with shared libraries doesn't work on ia64 Red Hat Enterprise Linux 3. The sample is below:

#include <errno.h>
extern int errno;
main()
{
    char *syscmd="mkdir '/tmp/xxxx'" ;
    int sysrc = 0;

    printf("command = %s\n",syscmd);
    sysrc = system(syscmd);
    printf("system = %d\n",sysrc);
    printf("errno = %d %s\n",errno,strerror(errno));
}

Version-Release number of selected component (if applicable): glibc-2.3.2-95.3

How reproducible: Always

Steps to Reproduce:
1. Compile the sample on ia32 with shared libraries
2. Execute the binary on ia64
3.

Actual Results:
command = mkdir '/tmp/xxxx'
system = -1
errno = 14 Bad address

Expected Results:
command = mkdir '/tmp/xxxx'
system = 0
errno = 0

Additional info:
The same binary works fine on RH72 for IA64. It is also fine if the binary is compiled statically on ia32. The source file works properly on ia32 when compiled for ia32, or on ia64 when compiled for ia64. Is this a bug in /emul/ia32-linux/lib/*.so ?

*** This bug has been marked as a duplicate of 107116 ***

also your code is broken:
#include <errno.h>
extern int errno;
is very very invalid

> also your code is broken:
> #include <errno.h>
> extern int errno;
> is very very invalid

The program validity is not essential. We just tell you that the "system()" system call doesn't work on RHAS3 for IA64, although this works on RH72 for IA64.

Changed to 'CLOSED' state since 'RESOLVED' has been deprecated.
https://bugzilla.redhat.com/show_bug.cgi?id=111132
Your jQuery Selector Context Can Be A jQuery Object

This is just a quick post to clear up any confusion over what kind of objects can be used as a context when performing a jQuery selector execution. Sometimes, when reviewing jQuery code, I see people make data type conversions in their context usage:

$( "div", myStuff[ 0 ] )

Here, you can see that the programmer is converting their "myStuff" jQuery collection into a single DOM node to be used as a selection context. This might make sense if the myStuff collection contained nodes within which you did not want to search; but more often than not, this data type conversion is entirely arbitrary. And, more than that, it is unnecessary, and inefficient. If you look at the documentation for the $() method, you will see that the context argument can be a DOM element, a document, or a jQuery object. And, if you check the actual source code, you will see that the use of the context:

$( selector, context )

... is actually turned into a find() method internally:

$( context ).find( selector )

To optimize this execution, jQuery actually checks to see if the given context is a jQuery object; if it is, it performs the find() method call directly on the given context. If the given context is a DOM element, however, it first converts it to a jQuery object and then executes the find() method. So, actually, if you extract a DOM node as the context from a given jQuery object, you're slowing down execution, forcing jQuery to re-package the DOM node as a jQuery object.

When using a context, I personally find the find() method to be easier to read; but, if you're going to use the context argument in the $() method, just remember that it can be a jQuery object; there's no need to pass in a DOM element if you don't already have one.
Reader Comments

Hi Ben,

But Brandon Aaron says it the other way. Check this:

I've tried his samples and I've tried the same in my project too. I found that selectors perform better when the context is a DOM node rather than a jQuery object. Well, I'm not sure if jQuery 1.4.2 has any changes in context execution. I'm using 1.3.2 in my project.
Whenever possible, I like my code to be explicit about what the context type is, because there is some overhead in jQuery when it comes to initializing new jQuery objects. If you don't have to initialize a new object, don't do it.

@Dan, I agree regarding context.find(). To me, that's the most readable way to do it; and, since the majority of the time (for me) my context is already a jQuery collection, it's what makes the most sense performance-wise. jQuery 1.4.2 seems to use this approach when possible over the raw DOM element - but you still get the extra method call involved.

There is one case in which you need a DOM node as the "context" argument: when you're using it for the .live() method. Otherwise, you're right.

@Karl, Is this for when it uses delegation?

@Ben, Yeah. By default, .live() binds events to document and uses event delegation to see if $(event.target).closest(selector) exists before executing your function. As of jQuery 1.4, you can specify a DOM element to bind to instead of document, which limits the possible elements that trigger the event. More info here: (hope that makes sense. I'm typing quickly while at work thinking about a bunch of other stuff. :) )

@Karl, Ah, gotcha; yes you are making sense :)

My feeling is that using a jQuery object as the context argument only hinders readability, and the performance will be slightly worse (however negligible) because of that extra internal conversion. I sometimes see $("expr", cachedjQueryObject) being interpreted as $("expr").andSelf(), rather than $(cachedjQueryObject).find("expr"). The latter issues a command and is more procedural/self-documenting. I tend to use the context parameter only when inside an anonymous callback function where the context will be "this". IMO, $("span", this) is just as readable as $(this).find(), but that could just be my syntax highlighting :)

be made, you need to pass an actual DOM reference as the second parameter. Passing the jQuery object itself - e.g.
$('#context') - still requires a search of the entire document for the reference. Searching the entire document completely negates the point of passing a context.' So until 1.4, it was necessary (unless Cody was wrong).

Cheers,
Colin

@Eric,

I tend to prefer the $( ... ).find() approach; but that might just be because I typically have a jQuery collection to work with at the time. As you are saying though, when in a callback, I will also use the "this" reference as the context node and it works quite nicely.

@Colin,

I'll have to take a look at the jQuery source code for 1.4.2 to be sure; I'll get back to you on that matter. As far as what Cody Lindley was saying, I think perhaps what he meant was that you simply don't want to be performing jQuery look-ups as *part* of the context definition. Meaning, if you have a jQuery collection already (and therefore have DOM nodes within it), then you can use it; but don't use a jQuery collection if you have to look up the nodes to define the context. I could be misunderstanding, but I think about it like this - imagine I have some callback that passes in a jQuery collection:

    function doSomething( myCollection ){
        return( $( ".target", myCollection ) );
    }

... here, I am using the given jQuery collection (myCollection) as the context. I believe this is OK since the jQuery collection has already searched the DOM and collected the given DOM nodes. However, I think what Cody's saying is that you want to avoid things like this:

    function doSomething(){
        return( $( ".target", $("#parent") ) );
    }

... since jQuery had to perform two searches - one for Parent and one for Target. If you have that kind of a setup, then you can probably just roll it into one search:

    $( "#parent .target" );

... this way, Sizzle can really optimize the lookup.

specifically after reading this book.
Cheers,
Colin

@Colin,

I just took a look in the jQuery 1.4.2 development (non-minified) version and the init() method which handles the jQuery collection creation looks like this (minus a LOT of code) ~Line 159:

    // HANDLE: $(expr, $(...))
    } else if ( !context || context.jquery ) {
        . . .
        return (context || rootjQuery).find( selector );

    // HANDLE: $(expr, context)
    // (which is just equivalent to:
    // $(context).find(expr)
    } else {
        . . .
        return jQuery( context ).find( selector );
    }

It looks like the jQuery library still handles both versions. In the end, if the context object is not a jQuery object, you can see that internally, the code is still converting the context DOM node into a jQuery object. So, whether you pass in a DOM node or a jQuery collection object as context, you still end up with a jQuery collection. It looks like both ways are equally performant - it just depends on what you are going to do with the context object after you pass it in as a context. Meaning, if you are going to use it externally as a jQuery object, you might as well convert it externally.

You know a way to pass many contexts? Tks

    contextFactory = function(selector) {
        var ctx = $(selector);
        return function(selector) {
            return $(selector, ctx);
        };
    };

    $popup = contextFactory(".jsPopupRegister");
    $popup("a")

... more readable IMO
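The branch quoted from jQuery's init() boils down to one small dispatch rule: if the context already looks like a jQuery object (it carries a jquery property), reuse it; otherwise wrap it first. Here is a stripped-down sketch of that rule in plain JavaScript - no real jQuery involved, and wrap/find are stand-ins invented for this illustration:

    // Minimal sketch of the $(selector, context) dispatch discussed above.
    // A "jquery" property marks an already-wrapped collection, as in the real library.
    function wrap(node) {
        return {
            jquery: "sketch",            // marker property, like jQuery's version string
            node: node,
            find: function (sel) {
                return "find(" + sel + ") in " + this.node;
            }
        };
    }

    function $(selector, context) {
        // Already a jQuery-like object? Use it directly (the 1.4-style fast path).
        // Otherwise wrap the raw node first, then delegate to find().
        var ctx = (context && context.jquery) ? context : wrap(context);
        return ctx.find(selector);
    }

    var domNode = "#parent";                   // stand-in for a raw DOM node
    console.log($(".target", domNode));        // wraps first, then finds
    console.log($(".target", wrap(domNode)));  // reuses the existing wrapper

Both calls end up in the same find() - which is the thread's conclusion: a DOM node and a jQuery collection are interchangeable as the context argument, the only difference being who does the wrapping.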
http://www.bennadel.com/blog/1876-your-jquery-selector-context-can-be-a-jquery-object.htm?_rewrite
Issue 179: Tabbed Layout does not scale for >50 windows

Comment #3 by andrea.rossato:

While I wonder how long it took the xmonad algorithm complexity calculation team to lay out the math their O-notated conclusions are based on (actually I'd really love to see that math too), still I'm sure the decoration module rewrite group formed within the xmonad developer team is going to take ages, literally, to reach an agreement with the tabbed module maintaining team on how to prototype a new decoration algorithm, given the constraints set out by the ACC team. Which is unfortunate for the casual xmonad user who needs to open 50-100 windows on the same workspace, quite typical for this kind of geek.

Sadly I cannot be of any help. Still I was a bit amazed by the fact that iterating over such a short list of integers could lead to such a slowdown - and even though I'm running a version of tabbed which has not been bloated with all the bottom, top, always, sometimes code, still I can reproduce your problem. So I decided to spend half an hour and did some math too. You know, not that scientifically sound kind of proof a real haskeller would be able to do, but just some stupid old-fashioned debugging, something even a LUA coder would be able to do:

1. I created a simple function to print out the time, with picoseconds (requires System.Time and System.IO):

    printNow :: String -> X ()
    printNow s = do
        t <- io (toCalendarTime =<< getClockTime)
        io $ hPutStrLn stderr $ s ++ " - " ++ show (ctSec t) ++ "." ++ show (ctPicosec t)

2. Then I wrote a small layout modifier:

    data Debug a = Debug String deriving (Show, Read)

    instance LayoutModifier Debug Window where
        redoLayout (Debug s) _ _ wrs = do
            printNow s
            return (wrs, Nothing)

3.
In .xmonad/xmonad.hs I wrote:

    main = xmonad defaultConfig { layoutHook = ModifiedLayout (Debug "tabbed") myTab }

    myTab = decoration shrinkText defaultTheme Tabbed
                (ModifiedLayout (Debug "Simplest") Simplest)

I also called printNow at the very end of XMonad.Operations.windows. So we print the time:

a. right after Simplest returned
b. right after myTab returned
c. at the end of the windows call.

Below there are the numbers of a few runs with enough tabs to slow everything down. I don't know if they are meaningful; this is why I'm posting them here. If they indeed are, then they are also a bit puzzling. If someone comes up with an idea on how to isolate the problem, please drop me a line.

Historically I can say that from 0.4 up to 0.6 no changes have been made to tabbed and related code. The text size calculation code has been there from the very beginning of a tabbed layout (0.3 for sure) without changes. Anyway I think there's room for further investigation before calling for a rewrite (a tabbed layout with O(n) is something I'd really like to read, but I don't know if in that far future I'll still remember how to read Haskell).
The numbers:

    Simplest - 57.187047000000
    tabbed - 57.272167000000
    Operations - 57.356136000000
    Simplest - 57.356837000000
    tabbed - 57.422710000000
    Operations - 57.523055000000
    Simplest - 57.523762000000
    tabbed - 57.599250000000
    Operations - 57.686736000000
    Simplest - 59.65497000000
    tabbed - 59.119980000000
    Operations - 59.141826000000
    Simplest - 1.672190000000
    tabbed - 1.723279000000
    Operations - 1.745065000000
    Simplest - 3.152586000000
    tabbed - 3.205879000000
    Operations - 3.227333000000
    Simplest - 5.839898000000
    tabbed - 5.889285000000
    Operations - 5.913992000000
    Simplest - 5.914665000000
    tabbed - 5.969013000000
    Operations - 6.12130000000
    Simplest - 6.12793000000
    tabbed - 6.67688000000
    Operations - 6.97815000000
    Simplest - 6.98518000000
    tabbed - 6.225578000000
    Operations - 6.263437000000
    Simplest - 6.264132000000
    tabbed - 6.319227000000
    Operations - 6.356627000000
    Simplest - 6.357312000000
    tabbed - 6.411805000000
    Operations - 6.451679000000
    Simplest - 7.712861000000
    tabbed - 7.763174000000
    Operations - 7.805636000000
    Simplest - 7.806320000000
    tabbed - 7.860956000000
    Operations - 7.905911000000
    Simplest - 7.906581000000
    tabbed - 7.962566000000
    Operations - 8.23426000000

if you want more you know how to do it.

--
You received this message because you are listed in the owner or CC fields of this issue, or because you starred this issue. You may adjust your issue notification preferences at:
http://article.gmane.org/gmane.comp.lang.haskell.xmonad/5337
Type: Posts; User: Lolilolight.

Hi! The mesa driver is the only driver which works (without having a black screen when I try to log on). But this driver only supports OpenGL 3.0 and not version 4.0 and more (I don't know what's...

I've tried this :

    const std::string perPixLightingFragmentShader =
        "#version 130 \n"
        "uniform sampler2D normalMap;"
        "uniform vec3 resolution;"
        ...

Hi, I'm trying to set up a shader for normal mapping in a 2D context. The following image shows what I'm trying to do: [attachment 1370] The normal map contains the normals of my tiles, and their...

Maybe you can loop on all the triangles of your surface and use the "raytracing" algorithm. (This is how I do it.)

Ok, now I'm trying to bind textures but I have a strange result. :o Here is the code :

    #include <GL/glew.h>
    #include <GL/gl.h>
    #include <SFML/Window.hpp>
    #include <iostream>

Woooowwwww, I didn't know that. I've read the tutorials on your website but I think I must have missed something. Ok, so I suppose that I have to use triangle strips instead. But I'm...
I've found the solution: we just have to separate the layout qualifiers with a comma :

    "#version 330 core \n"
    "layout(origin_upper_left, pixel_center_integer) in vec4 gl_FragCoord;"
    ...

Hi! I want to change the origin lower left to the origin upper left corner like it's explained in the documentation : But I have an...

O_O I've tried the source code of the first triangle from the tutorial, and even if OpenGL returns me some errors, it's displaying something, so...

This function returns 1 as value, which corresponds to the core profile :

    GLint profile;
    glGetIntegerv(GLX_CONTEXT_PROFILE_MASK_ARB, &profile);
    std::cout<<"profile : "<<profile;
    ...

Haha! It seems that X11 doesn't want to create me a CORE profile; I have OpenGL errors when I call OpenGL functions.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include...

No, even if I put the source code into a separated file, the version 330 of my shaders doesn't compile.

I've tried this :

    int attributes[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, static_cast<int>(m_settings.majorVersion),
        ...

Mmm..., I have a core profile; if I try to create a compatibility profile it fails to create the OpenGL context. And I load my shader from the memory so I need the \n, otherwise the text is...

Mmm.......... I think that driver doesn't support OpenGL 3.3 very well. OpenGL returns me an invalid error while generating the VAO. And the compilation of the shaders fails :

    const...
https://www.opengl.org/discussion_boards/search.php?s=c0d44075a2b45a0a3d92ce635e931230&searchid=1275998
crap! I knew it was something like that... must have skipped that part in the docs 8)

And damn... it doesn't seem to work as expected... I can't get info on anything because I get permission denied errors... which sucks - I was hoping I'd be able to use this interface as a little AUR plugin to "check current version" or something like that... could be an automatic way to flag things out of date... /me shrugs

Hey phrakture, try using a dictionary for the login username & password

    >>> import xmlrpclib
    >>> srv=xmlrpclib.Server('')
    >>> srv.login({'username':'tmaynard','password':'XXXXXXXXXX'})
    {'Lifetime': 600, 'API Version': '1.03', 'SID': '0535d169f82838bbbb8e9c7f785c0282'}
    >>>

Happy Hacking
--Todd

so... I came across this little gem: freshmeat has an xmlrpc interface... which should allow things (like the AUR, hint hint) to query freshmeat's version of a package... however... it doesn't like me...

    import xmlrpclib
    srv=xmlrpclib.Server('')
    srv.login('phrakture','XXXXXXXXXXX')

produces:

    xmlrpclib.Fault: <Fault 10: 'Login incorrect'>

... which is totally wrong because my login is valid... no ideas... can anyone get this to login?

for the record you want to do something like so:

    session=srv.login('user','pass')
    print srv.fetch_branch_list(session['SID'],'some_project_name')
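The fix Todd describes - passing a single dict instead of two positional strings - changes what actually goes over the wire: XML-RPC marshals a dict as a named struct, so the server receives username/password by name instead of two anonymous strings. A quick way to see the difference without any network access (sketched with Python 3's xmlrpc.client, the modern name of xmlrpclib; the freshmeat endpoint itself is long gone) is to marshal both call styles and compare the XML:

    # Compare how XML-RPC marshals positional arguments vs. a single struct.
    # Illustration only - the freshmeat XML-RPC service no longer exists.
    import xmlrpc.client

    # phrakture's original attempt: two positional string parameters
    positional = xmlrpc.client.dumps(("phrakture", "secret"), "login")

    # Todd's working version: one struct (dict) parameter
    struct = xmlrpc.client.dumps(
        ({"username": "phrakture", "password": "secret"},), "login"
    )

    print("<struct>" in positional)  # the server never sees named fields here
    print("<struct>" in struct)      # here it receives username/password by name

Since the freshmeat API expected a struct with named members, the positional form marshalled into something the server could not match against a login, hence the "Login incorrect" fault.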
https://bbs.archlinux.org/extern.php?action=feed&tid=14199&type=atom
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- USING AND EXAMPLES
- Changing logging level
- Changing default level
- Changing per-output level
- Setting default per-output level
- Enabling/disabling output
- Changing log level of cron scripts
- Changing log file name/location
- Changing other output parameters
- Logging to syslog
- Logging to directory
- Multiple outputs
- Changing level of certain module(s)
- Only displaying log from certain module(s)
- Displaying category name
- Displaying location name
- Preventing logging level to be changed from outside the script
- Debugging
- FUNCTIONS
- PATTERN STYLES
- ENVIRONMENT
- Turning on/off logging
- Setting general level
- Setting per-output level
- Setting per-category level
- Setting per-output, per-category level
- Controlling extra fields to log
- Force-enable or disable color
- Turn on Log::Any::App's debugging
- Turn on showing elapsed time in screen
- Filtering
- Per-output filtering
- Extra things to log
- Why?
- What's the benefit of using Log::Any::App?
- And what's the benefit of using Log::Any?
- Do I need Log::Any::App if I am writing modules?
- Why use Log4perl?
- Are you coupling adapter with Log::Any (thus defeating Log::Any's purpose)?
- How do I create extra logger objects?
- My needs are not met by the simple configuration system of Log::Any::App!
- BUGS/TODOS
- ROAD TO 1.0
- SEE ALSO
- SOURCE
- BUGS
- AUTHOR

NAME

Log::Any::App - An easy way to use Log::Any in applications

VERSION

version 0.08

SYNOPSIS

Most of the time you only need to do this:

    # in your script.pl
    use Log::Any::App '$log';
    $log->warn("blah ...");
    if ($log->is_debug) { ... }

    # or, in command line
    % perl -MLog::Any::App -MModuleThatUsesLogAny -e'...'
Here's the default logging that Log::Any::App sets up for you:

    Condition                       screen  file               syslog  dir
    --------------------------------+-------+------------------+-------+----
    -e (one-liners)                 y       -                  -       -
    Scripts running as normal user  y       ~/NAME.log         -       -
    Scripts running as root         y       /var/log/NAME.log  -       -
    Daemons                         -       y                  y       -

You can customize level from outside the script, using environment variables or command-line options (won't interfere with command-line processing modules like Getopt::Long etc):

    % DEBUG=1 script.pl
    % LOG_LEVEL=trace script.pl
    % script.pl --verbose

And to customize other stuff:

    use Log::Any::App '$log',
        -syslog => 1, # turn on syslog logging explicitly
        -screen => 0, # turn off screen logging explicitly
        -file   => {path=>'/foo/bar', max_size=>'10M', histories=>10};
                      # customize file logging

For more customization like categories, per-category level, per-output level, multiple outputs, string patterns, etc see "USING AND EXAMPLES" and init().

DESCRIPTION

To use Log::Any::App you need to be sold on the idea of Log::Any first, so please do a read up on that first. The goal of Log::Any::App is to provide developers an easy and concise way to add logging to their *applications*. That is, instead of modules; modules remain using Log::Any to produce logs. Applications can upgrade to full Log4perl later when necessary, although in my experience, they usually don't.

With Log::Any::App, you can replace this code in your application:

    use Log::Any '$log';
    use Log::Any::Adapter;
    use Log::Log4perl;
    my $log4perl_config = '
      some long multiline config...';
    Log::Log4perl->init(\$log4perl_config);
    Log::Any::Adapter->set('Log4perl');

with just this:

    use Log::Any::App '$log'; # plus some other options when necessary

Most of the time you don't need to configure anything as Log::Any::App will construct the most appropriate default Log4perl configuration for your application.
USING AND EXAMPLES

To use Log::Any::App, just do:

    use Log::Any::App '$log';

or from the command line:

    % perl -MLog::Any::App -MModuleThatUsesLogAny -e ...

This will send logs to screen as well as file (unless -e scripts, which only log to screen). Default log file is ~/$SCRIPT_NAME.log, or /var/log/$SCRIPT_NAME.log if script is running as root. Default level is 'warn'.

The 'use Log::Any::App' statement can be issued before or after the modules that use Log::Any, it doesn't matter. Logging will be initialized in the INIT phase by Log::Any::App. You are not required to import '$log', and don't need to if you do not produce logs in your application (only in the modules).

Changing logging level

Since one of the most commonly tweaked logging settings is level (for example: increasing level when debugging problems), Log::Any::App provides several mechanisms to change log level, either from the script or from outside the script, for your convenience. Below are the mechanisms, ordered from highest priority:

    import argument (inside the script)
    command line arguments (outside the script)
    environment variables (outside the script)
    level flag files (outside the script)
    variables in 'main' package (inside the script)

These mechanisms are explained in more details in the documentation for the init() function. But below are some examples.

To change level from inside the script:

    use Log::Any::App '$log', -level => 'debug';

This is useful if you want a fixed level that cannot be overridden by other mechanisms (since setting level using import argument has the highest priority). But oftentimes what you want is changing level without modifying the script itself.
Thereby, just write:

    use Log::Any::App '$log';

and then you can use environment variables to change level:

    TRACE=1 script.pl;          # setting level to trace
    DEBUG=1 script.pl;          # setting level to debug
    VERBOSE=1 script.pl;        # setting level to info
    QUIET=1 script.pl;          # setting level to error
    LOG_LEVEL=trace script.pl;  # setting a specific log level

or command-line options:

    script.pl --trace
    script.pl --debug
    script.pl --verbose
    script.pl --quiet
    script.pl --log_level=debug;  # '--log-level debug' will also do

Regarding command-line options: Log::Any::App won't consume the command-line options from @ARGV and thus won't interfere with command-line processing modules like Getopt::Long or App::Options. If you use a command-line processing module and plan to use command-line options to set level, you might want to define these level options, or your command-line processing module will complain about unknown options.

Changing default level

The default log level is 'warn'. To change the default level, you can use 'main' package variables (since they have the lowest priority):

    use Log::Any::App '$log';
    BEGIN { our $Log_Level = 'info' } # be more verbose by default

Then you will still be able to use level flag files or environment variables or command-line options to override this setting.

Changing per-output level

Logging level can also be specified on a per-output level. For example, if you want your script to be chatty on the screen but still log to file at the default 'warn' level:

    SCREEN_VERBOSE=1 script.pl
    SCREEN_DEBUG=1 script.pl
    SCREEN_TRACE=1 script.pl
    SCREEN_LOG_LEVEL=info script.pl

    script.pl --screen_verbose
    script.pl --screen-debug
    script.pl --screen-trace=1
    script.pl --screen-log-level=info

Similarly, to set only file level, use FILE_VERBOSE, FILE_LOG_LEVEL, --file-trace, and so on.
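The priority order described earlier (import argument, then command-line options, then environment variables, then level flag files, then 'main' package variables, then the built-in 'warn' default) amounts to a first-defined-wins lookup. A minimal sketch of that resolution logic, written in Python purely for illustration - Log::Any::App itself is Perl, and the function name resolve_level here is made up:

    # Illustrative sketch of Log::Any::App's level-resolution precedence.
    # All names are hypothetical; the real implementation is Perl.

    def resolve_level(import_arg=None, cmdline=None, env=None,
                      flag_file=None, main_var=None, default="warn"):
        """Return the first defined level, searched from highest priority down."""
        for source in (import_arg, cmdline, env, flag_file, main_var):
            if source is not None:
                return source
        return default

    # -level => 'debug' in the script beats everything else:
    print(resolve_level(import_arg="debug", cmdline="trace"))  # debug
    # With no import argument, --trace on the command line wins over DEBUG=1:
    print(resolve_level(cmdline="trace", env="debug"))         # trace
    # Nothing set anywhere: fall back to the built-in default:
    print(resolve_level())                                     # warn

This is also why -level in the import list "cannot be overridden": it is checked before every external mechanism.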
Setting default per-output level

As with setting default level, you can also set default level on a per-output basis:

    use Log::Any::App '$log';
    BEGIN {
        our $Screen_Log_Level = 'off';
        our $File_Quiet = 1; # setting file level to 'error'
        # and so on
    }

If a per-output level is not specified, it will default to the general log level.

Enabling/disabling output

To disable a certain output, you can do this:

    use Log::Any::App '$log', -file => 0;

or:

    use Log::Any::App '$log', -screen => {level=>'off'};

and this won't allow the output to be re-enabled from outside the script. However if you do this:

    use Log::Any::App;
    BEGIN { our $Screen_Log_Level = 'off' }

then by default screen logging is turned off but you will be able to override the screen log level using level flag files or environment variables or command-line options (SCREEN_DEBUG, --screen-verbose, and so on).

Changing log level of cron scripts

Environment variables and command-line options allow changing log level without modifying the script. But for scripts specified in crontab, they still require changing crontab entries, e.g.:

    # turn on debugging
    */5 * * * * DEBUG=1 foo

    # be silent
    */5 * * * * bar --quiet

Another mechanism, level flag file, is useful in this case. By doing:

    $ echo debug > ~/foo.log_level
    # touch /etc/bar.QUIET

you can also change log levels without modifying your crontab.

Changing log file name/location

By default Log::Any::App will log to file to ~/$NAME.log (or /var/log/$NAME.log if script is running as root), where $NAME is taken from the basename of $0. But this can be changed using:

    use Log::Any::App '$log', -name => 'myprog';

Or, using custom path:

    use Log::Any::App '$log', -file => '/path/to/file';

Changing other output parameters

Each output argument can accept a hashref to specify various options.
For example:

    use Log::Any::App '$log',
        -screen => {color=>0}, # never use color
        -file   => {path=>'/var/log/foo',
                    max_size=>'10M',
                    histories=>10,
                   },

For all the available options of each output, see the init() function.

Logging to syslog

Logging to syslog is enabled by default if your script looks like or declares that it is a daemon, e.g.:

    use Net::Daemon; # this indicates your program is a daemon
    use Log::Any::App; # syslog logging will be turned on by default

    use Log::Any::App -daemon => 1; # script declares that it is a daemon

    # idem
    package main;
    our $IS_DAEMON = 1;

But if you are certain you don't want syslog logging:

    use Log::Any::App -syslog => 0;

Logging to directory

This is done using Log::Dispatch::Dir, where each log message is logged to a different file in a specified directory. By default logging to dir is not turned on; to turn it on:

    use Log::Any::App '$log', -dir => 1;

For all the available options of directory output, see the init() function.

Multiple outputs

Each output argument can accept an arrayref to specify more than one output. For example, below is code to log to three files:

    use Log::Any::App '$log',
        -file => [1, # default, to ~/$NAME.log or /var/log/$NAME.log
                  "/var/log/log1",
                  {path=>"/var/log/debug_foo", category=>'Foo', level=>'debug'}];

Changing level of certain module(s)

Suppose you want to shut up logs from modules Foo, Bar::Baz, and Qux (and their submodules as well, e.g. Foo::Alpha, Bar::Baz::Beta::Gamma) because they are too noisy:

    use Log::Any::App '$log',
        -category_level => { Foo => 'off', 'Bar::Baz' => 'off', Qux => 'off' };

or (same thing):

    use Log::Any::App '$log',
        -category_alias => { -noisy => [qw/Foo Bar::Baz Qux/] },
        -category_level => { -noisy => 'off' };

You can even specify this on a per-output basis.
Suppose you only want to shut up the noisy modules on the screen, but not in the file:

    use Log::Any::App '$log',
        -category_alias => { -noisy => [qw/Foo Bar::Baz Qux/] },
        -screen => { category_level => { -noisy => 'off' } };

Or perhaps, you want to shut up the noisy modules everywhere, except on the screen:

    use Log::Any::App '$log',
        -category_alias => { -noisy => [qw/Foo Bar::Baz Qux/] },
        -category_level => { -noisy => 'off' },
        -syslog => 1, # uses general -category_level
        -file   => "/var/log/foo", # uses general -category_level
        -screen => { category_level => {} }; # overrides general -category_level

You can also do this from outside the script using environment variables, which is more flexible. Encode the data structure using JSON:

    % LOG_SHOW_CATEGORY=1 \
      LOG_CATEGORY_ALIAS='{"-noisy":["Foo","Bar::Baz","Qux"]}' \
      LOG_CATEGORY_LEVEL='{"-noisy":"off"}' script.pl ...

Only displaying log from certain module(s)

Use a combination of LOG_LEVEL and LOG_CATEGORY_LEVEL. For example:

    % LOG_LEVEL=off LOG_CATEGORY_LEVEL='{"Foo.Bar":"trace", "Baz":"info"}' \
      script.pl ...

Displaying category name

    % LOG_SHOW_CATEGORY=1 script.pl ...

Now instead of:

    [25] Starting baz ritual ...

log messages will be prefixed with the category:

    [cat Foo.Bar][25] Starting baz ritual ...

Displaying location name

    % LOG_SHOW_LOCATION=1 script.pl ...

Now log messages will be prefixed with location (function/file/line number) information:

    [loc Foo::Bar lib/Foo/Bar.pm (12)][25] Starting baz ritual ...

Preventing logging level to be changed from outside the script

Sometimes, for security/audit reasons, you don't want to allow the script caller to change the logging level. As explained previously, you can use the 'level' import argument (the highest priority of level-setting):

    use Log::Any::App '$log', -level => 'debug'; # always use debug level

TODO: Allow something like 'debug+' to allow other mechanisms to *increase* the level but not decrease it.
Or 'debug-' to allow other mechanisms to decrease the level but not increase it. And finally 'debug,trace' to specify allowable levels (is this necessary?)

Debugging

To see the Log4perl configuration that is generated by Log::Any::App and how it came to be, set environment LOGANYAPP_DEBUG to true.

FUNCTIONS

None is exported.

init(\@args)

This is the actual function that implements the setup and configuration of logging. You normally need not call this function explicitly (but see below); it will be called once in an INIT block. In fact, when you do:

    use Log::Any::App 'a', 'b', 'c';

it is actually passed as:

    init(['a', 'b', 'c']);

You will need to call init() manually if you require Log::Any::App at runtime, in which case it is too late to run the INIT block. If you want to run Log::Any::App in the runtime phase, do this:

    require Log::Any::App;
    Log::Any::App::init(['a', 'b', 'c']);

Arguments to init can be one or more of:

- -log => BOOL

Whether to do log at all. Default is from the LOG environment variable, or 1. This option is only to allow users to disable Log::Any::App (thus speeding up startup by avoiding loading Log4perl, etc) by passing LOG=0 environment when running programs. However, if you explicitly set this option to 1, Log::Any::App cannot be disabled this way.

- -init => BOOL

Whether to call Log::Log4perl->init() after setting up the Log4perl configuration. Default is true. You can set this to false, and you can initialize Log4perl yourself (but then there's not much point in using this module, right?)

- -name => STRING

Change the program name. Default is taken from $0.

- -level_flag_paths => ARRAY OF STRING

Edit level flag file locations. The default is [$homedir, "/etc"].

- -daemon => BOOL

Declare that the script is a daemon. Default is no. Aside from this, to declare that your script is a daemon you can also set $main::IS_DAEMON to true.
- -category_alias => {ALIAS=>CATEGORY, ...}

Create category aliases so the ALIAS can be used in place of real categories in each output's category specification. For example, instead of doing this:

    init(
        -file   => [category=>[qw/Foo Bar Baz/], ...],
        -screen => [category=>[qw/Foo Bar Baz/]],
    );

you can do this instead:

    init(
        -category_alias => {-fbb => [qw/Foo Bar Baz/]},
        -file   => [category=>'-fbb', ...],
        -screen => [category=>'-fbb', ...],
    );

You can also specify this from the environment variable LOG_CATEGORY_ALIAS using JSON encoding, e.g.

    LOG_CATEGORY_ALIAS='{"-fbb":["Foo","Bar","Baz"]}'

- -category_level => {CATEGORY=>LEVEL, ...}

Specify per-category level. Categories not mentioned in this will use the general level (-level). This can be used to increase or decrease logging on certain categories/modules. You can also specify this from the environment variable LOG_CATEGORY_LEVEL using JSON encoding, e.g.

    LOG_CATEGORY_LEVEL='{"-fbb":"off"}'

- -level => 'trace'|'debug'|'info'|'warn'|'error'|'fatal'|'off'

Specify log level for all outputs. Each output can override this value. The default log level is determined as follows:

Search in command-line options. If App::Options is present, these keys are checked in %App::options: log_level, trace (if true then level is trace), debug (if true then level is debug), verbose (if true then level is info), quiet (if true then level is error). Otherwise, it will try to scrape @ARGV for the presence of --log-level, --trace, --debug, --verbose, or --quiet (this usually works because Log::Any::App does this in the INIT phase, before you call Getopt::Long's GetOptions() or the like).

Search in environment variables. Otherwise, it will look for environment variables: LOG_LEVEL, QUIET, VERBOSE, DEBUG, TRACE.

Search in level flag files. Otherwise, it will look for the existence of files with one of these names: $NAME.QUIET, $NAME.VERBOSE, $NAME.TRACE, $NAME.DEBUG, or the content of $NAME.log_level, in ~ or /etc.

Search in main package variables.
Otherwise, it will try to search for package variables in the main namespace with names like $Log_Level or $LOG_LEVEL or $log_level, $Quiet or $QUIET or $quiet, $Verbose or $VERBOSE or $verbose, $Trace or $TRACE or $trace, $Debug or $DEBUG or $debug.

If everything fails, it defaults to 'warn'.

- -filter_text => STR

Only show log lines matching STR. Default from the LOG_FILTER_TEXT environment.

- -filter_no_text => STR

Only show log lines not matching STR. Default from the LOG_FILTER_NO_TEXT environment.

- -filter_citext => STR

Only show log lines matching STR (case insensitive). Default from the LOG_FILTER_CITEXT environment.

- -filter_no_citext => STR

Only show log lines not matching STR (case insensitive). Default from the LOG_FILTER_NO_CITEXT environment.

- -filter_re => RE

Only show log lines matching regex pattern RE. Default from the LOG_FILTER_RE environment.

- -filter_no_re => RE

Only show log lines not matching regex pattern RE. Default from the LOG_FILTER_NO_RE environment.

- -file => 0 | 1|yes|true | PATH | {opts} | [{opts}, ...]

Specify output to one or more files, using Log::Dispatch::FileWriteRotate.

If the argument is a false boolean value, file logging will be turned off. If the argument is a true value that matches /^(1|yes|true)$/i, file logging will be turned on with the default path, etc. If the argument is another scalar value then it is assumed to be a path.
If the argument is a hashref, then the keys of the hashref must be one of: level, path, max_size (maximum size before rotating, in bytes, 0 means unlimited or never rotate), histories (number of old files to keep, excluding the current file), suffix (will be passed to Log::Dispatch::FileWriteRotate's constructor), period (will be passed to Log::Dispatch::FileWriteRotate's constructor), buffer_size (will be passed to Log::Dispatch::FileWriteRotate's constructor), category (a string or ref to array of strings), category_level (a hashref, similar to -category_level), pattern_style (see "PATTERN STYLES"), pattern (Log4perl pattern), filter_text, filter_no_text, filter_citext, filter_no_citext, filter_re, filter_no_re.

If the argument is an arrayref, it is assumed to be specifying multiple files, with each element of the array as a hashref.

How Log::Any::App determines defaults for file logging:

If the program is a one-liner script specified using "perl -e", the default is no file logging. Otherwise file logging is turned on.

If the program runs as root, the default path is /var/log/$NAME.log, where $NAME is taken from $0 (or -name). Otherwise the default path is ~/$NAME.log. Intermediate directories will be made with File::Path.

If the specified path ends with a slash (e.g. "/my/log/"), it is assumed to be a directory and the final file path is the directory appended with $NAME.log.

Default rotating behaviour is no rotate (max_size = 0).

Default level for file is the same as the global level set by -level. But App::Options, command line, environment, level flag file, and package variables in main are also searched first (for FILE_LOG_LEVEL, FILE_TRACE, FILE_DEBUG, FILE_VERBOSE, FILE_QUIET, and the similars).

You can also specify category level from the environment variable FILE_LOG_CATEGORY_LEVEL.

- -dir => 0 | 1|yes|true | PATH | {opts} | [{opts}, ...]

Log messages using Log::Dispatch::Dir. Each message is logged into separate files in the directory. Useful for dumping content (e.g.
  HTML, network dumps, or temporary results). If the argument is a false boolean value, dir logging will be turned off. If the argument is a true value that matches /^(1|yes|true)$/i, dir logging will be turned on with default path, etc. If the argument is another scalar value then it is assumed to be a directory path.

  If the argument is a hashref, then the keys of the hashref must be one of: level, path, max_size (maximum total size of files before deleting older files, in bytes, 0 means unlimited), max_age (maximum age of files to keep, in seconds, undef means unlimited), histories (number of old files to keep, excluding the current file), category, category_level (a hashref, similar to -category_level), pattern_style (see "PATTERN STYLES"), pattern (Log4perl pattern), filename_pattern (pattern of file name), filter_text, filter_no_text, filter_citext, filter_no_citext, filter_re, filter_no_re.

  If the argument is an arrayref, it is assumed to be specifying multiple directories, with each element of the array as a hashref.

  How Log::Any::App determines defaults for dir logging:

  Directory logging is by default turned off. You have to explicitly turn it on.

  If the program runs as root, the default path is /var/log/$NAME/, where $NAME is taken from $0. Otherwise the default path is ~/log/$NAME/. Intermediate directories will be created with File::Path. Program name can be changed using -name.

  Default rotating parameters are: histories=1000, max_size=0, max_age=undef.

  Default level for dir logging is the same as the global level set by -level. But App::options, command line, environment, level flag file, and package variables in main are also searched first (for DIR_LOG_LEVEL, DIR_TRACE, DIR_DEBUG, DIR_VERBOSE, DIR_QUIET, and the similars).

  You can also specify category level from environment DIR_LOG_CATEGORY_LEVEL.

- -screen => 0 | 1|yes|true | {opts}

  Log messages using Log::Log4perl::Appender::ScreenColoredLevels.
  If the argument is a false boolean value, screen logging will be turned off. If the argument is a true value that matches /^(1|yes|true)$/i, screen logging will be turned on with default settings.

  If the argument is a hashref, then the keys of the hashref must be one of: color (default is true, set to 0 to turn off color), stderr (default is true, set to 0 to log to stdout instead), level, category, category_level (a hashref, similar to -category_level), pattern_style (see "PATTERN STYLES"), pattern (Log4perl string pattern), filter_text, filter_no_text, filter_citext, filter_no_citext, filter_re, filter_no_re.

  How Log::Any::App determines defaults for screen logging:

  Screen logging is turned on by default.

  Default level for screen logging is the same as the global level set by -level. But App::options, command line, environment, level flag file, and package variables in main are also searched first (for SCREEN_LOG_LEVEL, SCREEN_TRACE, SCREEN_DEBUG, SCREEN_VERBOSE, SCREEN_QUIET, and the similars).

  Color can also be turned on/off using the environment variable COLOR (if the color argument is not set).

  You can also specify category level from environment SCREEN_LOG_CATEGORY_LEVEL.

- -syslog => 0 | 1|yes|true | {opts}

  Log messages using Log::Dispatch::Syslog.

  If the argument is a false boolean value, syslog logging will be turned off. If the argument is a true value that matches /^(1|yes|true)$/i, syslog logging will be turned on with default level, ident, etc.

  If the argument is a hashref, then the keys of the hashref must be one of: level, ident, facility, category, category_level (a hashref, similar to -category_level), pattern_style (see "PATTERN STYLES"), pattern (Log4perl pattern), filter_text, filter_no_text, filter_citext, filter_no_citext, filter_re, filter_no_re.
  How Log::Any::App determines defaults for syslog logging:

  If a program is a daemon (determined by detecting modules like Net::Server or Proc::PID::File, or by checking if -daemon or $main::IS_DAEMON is true) then syslog logging is turned on by default and facility is set to daemon, otherwise the default is off.

  Ident is the program's name by default ($0, or -name).

  Default level for syslog logging is the same as the global level set by -level. But App::options, command line, environment, level flag file, and package variables in main are also searched first (for SYSLOG_LOG_LEVEL, SYSLOG_TRACE, SYSLOG_DEBUG, SYSLOG_VERBOSE, SYSLOG_QUIET, and the similars).

  You can also specify category level from environment SYSLOG_LOG_CATEGORY_LEVEL.

- -dump => BOOL

  If set to true then Log::Any::App will dump the generated Log4perl config. Useful for debugging the logging.

PATTERN STYLES

Log::Any::App provides some styles for Log4perl patterns. You can specify pattern_style instead of directly specifying pattern. Example:

 use Log::Any::App -screen => {pattern_style=>"script_long"};

 Name          Description                        Example output
 ----          -----------                        --------------
 plain         The message, the whole message,    Message
               and nothing but the message.
               Used by dir logging.
               Equivalent to pattern: '%m'

 plain_nl      Message plus newline. The default  Message
               for screen without
               LOG_ELAPSED_TIME_IN_SCREEN.
               Equivalent to pattern: '%m%n'

 script_short  For scripts that run for a short   [234] Message
               time (a few seconds). Shows just
               the number of milliseconds. This
               is the default for screen under
               LOG_ELAPSED_TIME_IN_SCREEN.
               Equivalent to pattern: '[%r] %m%n'

 script_long   Scripts that will run for a        [2010-04-22 18:01:02] Message
               while (more than a few seconds).
               Shows date/time.
               Equivalent to pattern: '[%d] %m%n'

 daemon        For typical daemons. Shows PID     [pid 1234] [2010-04-22 18:01:02] Message
               and date/time. This is the
               default for file logging.
               Equivalent to pattern:
               '[pid %P] [%d] %m%n'

 syslog        Style suitable for syslog          [pid 1234] Message
               logging.
               Equivalent to pattern:
               '[pid %P] %m'

For each of the above there are also cat_XXX (e.g. cat_script_long) which are the same as XXX but with [cat %c] in front of the pattern. It is used mainly to show categories and then filter by categories. You can turn on picking a default pattern style with category prefix using the environment variable LOG_SHOW_CATEGORY.

And for each of the above there are also loc_XXX (e.g. loc_syslog) which are the same as XXX but with [loc %l] in front of the pattern. It is used to show calling location (file, function/method, and line number). You can turn on picking a default pattern style with location prefix using the environment variable LOG_SHOW_LOCATION.

If you have a favorite pattern style, please do share them.

ENVIRONMENT

Below is a summary of the environment variables used.

 Turning on/off logging
   LOG (bool)

 Setting general level
   TRACE (bool)     set general level to trace
   DEBUG (bool)     set general level to debug
   VERBOSE (bool)   set general level to info
   QUIET (bool)     set general level to error (turn off warnings)
   LOG_LEVEL (str)

 Setting per-output level
   FILE_TRACE, FILE_DEBUG, FILE_VERBOSE, FILE_QUIET, FILE_LOG_LEVEL
   SCREEN_TRACE and so on
   DIR_TRACE and so on
   SYSLOG_TRACE and so on

 Setting per-category level
   LOG_CATEGORY_LEVEL (hash, json)
   LOG_CATEGORY_ALIAS (hash, json)

 Setting per-output, per-category level
   FILE_LOG_CATEGORY_LEVEL
   SCREEN_LOG_CATEGORY_LEVEL and so on

 Controlling extra fields to log
   LOG_SHOW_LOCATION
   LOG_SHOW_CATEGORY

 Force-enable or disable color
   COLOR (bool)

 Turn on Log::Any::App's debugging
   LOGANYAPP_DEBUG (bool)

 Turn on showing elapsed time in screen
   LOG_ELAPSED_TIME_IN_SCREEN (bool)

 Filtering
   LOG_FILTER_TEXT (str)
   LOG_FILTER_NO_TEXT (str)
   LOG_FILTER_CITEXT (str)
   LOG_FILTER_NO_CITEXT (str)
   LOG_FILTER_RE (str)
   LOG_FILTER_NO_RE (str)

 Per-output filtering
   {FILE,DIR,SCREEN,SYSLOG}_LOG_FILTER_TEXT (str)
   and so on

 Extra things to log
   LOG_ENV (bool)

If LOG_ENV is set to 1, Log::Any::App will dump environment variables at the start of the program. Useful for debugging e.g. CGI or git hook scripts. You might also want to look at Log::Any::Adapter::Core::Patch::UseDataDump to make the dump more readable. Logging will be done under category main and at level trace.

Why?

I initially wrote Log::Any::App because I'm sick of having to parse command-line options to set log level like --verbose, --log-level=debug for every script. Also, before using Log::Any I previously used Log4perl directly, and modules which produce logs using Log4perl cannot be directly use'd in one-liners without Log4perl complaining about uninitialized configuration or some such. Thus, I like Log::Any's default null adapter and want to settle on using Log::Any for any kind of logging. Log::Any::App makes it easy to output Log::Any logs in your scripts and even one-liners.

What's the benefit of using Log::Any::App?

You get all the benefits of Log::Any, as what Log::Any::App does is just wrap Log::Any and Log4perl with some nice defaults. It provides you with an easy way to consume Log::Any logs and customize level/some other options via various ways.

And what's the benefit of using Log::Any?

This is better described in the Log::Any documentation itself, but in short: Log::Any frees your module users to use whatever logging framework they want. It increases the reusability of your modules.

Do I need Log::Any::App if I am writing modules?

No, if you write modules just use Log::Any.

Why use Log4perl?

Log::Any::App uses the Log4perl adapter to display the logs because it is mature, flexible, and featureful. The other alternative adapter is Log::Dispatch, but you can use Log::Dispatch::* output modules in Log4perl and (currently) not vice versa. Other adapters might be considered in the future; for now I'm fairly satisfied with Log4perl. It does have a slightly heavy startup cost for my taste, but it is still bearable.
Are you coupling an adapter with Log::Any (thus defeating Log::Any's purpose)?

No, producing logs is still done with Log::Any as usual and not tied to Log4perl in any way. Your modules, as explained above, only 'use Log::Any' and do not depend on Log::Any::App at all. Should portions of your application code get refactored into modules later, you don't need to change the logging part. And if your application becomes more complex and Log::Any::App doesn't suffice your custom logging needs anymore, you can just replace the 'use Log::Any::App' line with something more adequate.

How do I create extra logger objects?

The usual way as with Log::Any:

 my $other_log = Log::Any->get_logger(category => $category);

My needs are not met by the simple configuration system of Log::Any::App!

You can use the Log4perl adapter directly and write your own Log4perl configuration (or even other adapters). Log::Any::App is meant for quick and simple logging output needs anyway (but do tell me if your logging output needs are reasonably simple and should be supported by Log::Any::App).

BUGS/TODOS

Need to provide appropriate defaults for Windows/other OS.

ROAD TO 1.0

Here are some planned changes/development before 1.0 is reached. There might be some incompatibilities; please read this section carefully.

Everything is configurable via environment/command-line/option file

As I love specifying log options from the environment, I will make every init() option configurable from outside the script (environment/command-line/control file). Of course, init() arguments still take precedence for authors that do not want some/all options to be overridden from outside.

Reorganization of command-line/environment names

Aside from the handy and short TRACE (--trace), DEBUG, VERBOSE, QUIET, all the other environment names will be put under the LOG_ prefix. This means FILE_LOG_LEVEL will be changed to LOG_FILE_LEVEL, and so on. SCREEN_VERBOSE will be changed to VERBOSE_SCREEN.
This is meant to reduce "pollution" of the environment variables namespace.

The log option file (option file for short, previously "flag file") will be searched in <PROG>.log_options. Its content is in JSON and will become init() arguments. For example:

 {"file": 1, "screen":{"level":"trace"}}

or more akin to init() (both will be supported):

 ["-file", 1, "-screen", {"level":"trace"}]

Possible reorganization of package variable names

To be more strict and reduce confusion, case variations might not be searched.

Pluggable backend

This is actually the main motivator to reach 1.0 and all these changes. Backends will be put in Log::Any::App::Backend::Log4perl, and so on.

Pluggable output

Probably split to Log::Any::App::Output::file, and so on. Each output needs its backend support.

App::Options support will probably be dropped

I no longer use App::Options these days, and I don't know of any Log::Any::App user who does.

Probably some hooks to allow for more flexibility. For example, if the user wants to parse or detect levels/log file paths/etc. from some custom logic.

SEE ALSO

Log::Any and Log::Log4perl

Some alternative logging modules: Log::Dispatchouli (based on Log::Dispatch), Log::Fast, Log::Log4perl::Tiny. Really, there are 7,451 of them (roughly one third of CPAN) at the time of this writing.
https://metacpan.org/pod/release/SHARYANTO/Alt-Log-Any-App-FWR-0.08/lib/Log/Any/App.pm
Hi, On Wed, Apr 22, 2015 at 4:48 AM, Chris <chris...@gmx.at> wrote: > Hi all, > > I need so share a list of strings between some objects during a session; > Within the session, the list of strings will be deleted based on specific > requests. > > Currently, I store them in the page but the disadvantage of this approach > is that I have to delegate the list to each sub(sub)component. > You can create a helper class: PageHelper#getMyList(Page page) { return ((MyPage) page).getMyList();} And use it in any component: MyList myList = PageHelper.getMyList(getPage()); > > Would it be good practice to store this directly in the wicket session? > How to do this? > Yes. Just create a property/getter/setter in MySession and then use it: ((MySession) Session.get()).getMyList(); Make sure you synchronize the access to the list! > > A second approach would be to store it in the user object and inject this > object in the individual components. > This is also an option. It could be a session scoped bean, or a provider... > > Could you give me a recommendation? > > Thanks, Chris > > > --------------------------------------------------------------------- > To unsubscribe, e-mail: users-unsubscr...@wicket.apache.org > For additional commands, e-mail: users-h...@wicket.apache.org > >
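The synchronization caveat in the reply ("Make sure you synchronize the access to the list!") can be sketched in plain Java. This is a framework-free illustration, not Wicket's actual Session class; MySession and getMyList are the hypothetical names from the thread, and removeMatching stands in for the "deleted based on specific requests" behavior the original poster described:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for a Wicket session subclass. The list is
// wrapped with Collections.synchronizedList so individual adds and
// removes are thread-safe; compound operations take the lock explicitly.
class MySession {
    private final List<String> myList =
            Collections.synchronizedList(new ArrayList<>());

    public List<String> getMyList() {
        return myList;
    }

    public void removeMatching(String prefix) {
        // Iterate-and-remove must hold the list's lock to be safe
        // against concurrent requests touching the same session.
        synchronized (myList) {
            myList.removeIf(s -> s.startsWith(prefix));
        }
    }
}
```

In a real Wicket application the components would then reach the list via ((MySession) Session.get()).getMyList(), exactly as shown in the reply.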
https://www.mail-archive.com/users@wicket.apache.org/msg86783.html
craigmcc 00/07/10 09:14:40 Modified: proposals/catalina/src/share/org/apache/tomcat/connector RequestStream.java Log: Add byte-array versions of the read() method. Revision Changes Path 1.4 +53 .3 retrieving revision 1.4 diff -u -r1.3 -r1.4 --- RequestStream.java 2000/05/22 04:57:22 1.3 +++ RequestStream.java 2000/07/10 16:14:39 1.4 @@ -1,7 +1,7 @@ /* - * $Header: /home/cvs/jakarta-tomcat/proposals/catalina/src/share/org/apache/tomcat/connector/RequestStream.java,v 1.3 2000/05/22 04:57:22 remm Exp $ - * $Revision: 1.3 $ - * $Date: 2000/05/22 04:57:22 $ + * $Header: /home/cvs/jakarta-tomcat/proposals/catalina/src/share/org/apache/tomcat/connector/RequestStream.java,v 1.4 2000/07/10 16:14:39 craigmcc Exp $ + * $Revision: 1.4 $ + * $Date: 2000/07/10 16:14:39 $ * * ==================================================================== * @@ -79,7 +79,7 @@ * not reading more than that many bytes on the underlying stream. * * @author Craig R. McClanahan - * @version $Revision: 1.3 $ $Date: 2000/05/22 04:57:22 $ + * @version $Revision: 1.4 $ $Date: 2000/07/10 16:14:39 $ */ public class RequestStream @@ -189,6 +189,55 @@ if (b >= 0) count++; return (b); + + } + + + /** + * Read some number of bytes from the input stream, and store them + * into the buffer array b. The number of bytes actually read is + * returned as an integer. This method blocks until input data is + * available, end of file is detected, or an exception is thrown. + * + * @param b The buffer into which the data is read + * + * @exception IOException if an input/output error occurs + */ + public int read(byte b[]) throws IOException { + + return (read(b, 0, b.length)); + + } + + + /** + * Read up to <code>len</code> bytes of data from the input stream + * into an array of bytes. An attempt is made to read as many as + * <code>len</code> bytes, but a smaller number may be read, + * possibly zero. The number of bytes actually read is returned as + * an integer. 
This method blocks until input data is available, + * end of file is detected, or an exception is thrown. + * + * @param b The buffer into which the data is read + * @param off The start offset into array <code>b</code> at which + * the data is written + * @param len The maximum number of bytes to read + * + * @exception IOException if an input/output error occurs + */ + public int read(byte b[], int off, int len) throws IOException { + + int toRead = len; + if (length > 0) { + if (count >= length) + return (-1); + if ((count + len) > length) + toRead = length - count; + } + int actuallyRead = super.read(b, off, toRead); + if (actuallyRead >= 0) + count += actuallyRead; + return (actuallyRead); }
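The length-capping logic this patch adds in read(byte b[], int off, int len) can be reproduced standalone as a small filter stream. The sketch below is illustrative, not Tomcat's actual RequestStream (the BoundedStream class name and constructor are invented); it mirrors the patch's toRead/count bookkeeping so the stream never consumes more than the declared content length:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative re-implementation of the patch's bounded read: trim each
// request to the declared length, and report end-of-file once the limit
// has been consumed.
class BoundedStream extends FilterInputStream {
    private final int length; // declared content length; 0 = unlimited
    private int count;        // bytes read so far

    BoundedStream(InputStream in, int length) {
        super(in);
        this.length = length;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int toRead = len;
        if (length > 0) {
            if (count >= length)
                return -1;               // limit already consumed
            if (count + len > length)
                toRead = length - count; // trim the request to the limit
        }
        int actuallyRead = super.read(b, off, toRead);
        if (actuallyRead >= 0)
            count += actuallyRead;
        return actuallyRead;
    }
}
```

Note how, as in the patch, the three-argument overload carries all the logic; a read(byte[]) convenience overload would simply delegate to it with offset 0 and the full buffer length.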
http://mail-archives.apache.org/mod_mbox/tomcat-dev/200007.mbox/%3C20000710161440.76850.qmail@locus.apache.org%3E
Act! Web Components

Act! Web Components are standards-based components that you can use to make your application look and feel like Act!. Built by the Act! development team and used in the Act! product, these components will continue to be maintained and expanded on as the product grows.

Interested in contributing or need to file a bug? Check out our Github page!

As easy as HTML

An Act! Web Component is just an HTML element. You can use it anywhere you can use HTML!

 <awc-button>Button</awc-button>

Configure with attributes

Act! Web Components can be configured with attributes in plain HTML.

 <awc-button outlined>Outlined Button</awc-button>

Declarative rendering

Act! Web Components can be used with declarative rendering libraries like Angular, React, Vue, and lit-html.

 import { html, render } from 'lit-html';

 const name = "lit-html";

 render(html`
   <h2>This is an <awc-button></h2>
   <awc-button>A Button</awc-button>
 `, document.body);

This is an <awc-button>

Now let's take a look at the available Act! Web Components.
https://plugindeveloper.actops.com/web-components/overview/
Creating Your App

Welcome to the Ember Tutorial! This tutorial is meant to introduce basic Ember concepts while creating a professional looking application. If you get stuck at any point during the tutorial feel free to visit for a working example of the completed app.

Ember CLI, Ember's command line interface, provides a standard project structure, a set of development tools, and an addon system. This allows Ember developers to focus on building apps rather than building the support structures that make them run. From your command line, a quick ember --help shows the commands Ember CLI provides. For more information on a specific command, type ember help <command-name>.

Creating a New App

To create a new project using Ember CLI, use the new command. In preparation for the tutorial in the next section, you can make an app called super-rentals:

 ember new super-rentals

Directory Structure

The new command generates a project structure with the following files and directories:

Let's take a look at the folders and files Ember CLI generates.

app: This is where folders and files for models, components, routes, templates and styles are stored. The majority of your coding on an Ember project happens in this folder.

bower_components / bower.json: Bower is a dependency management tool. It is used in Ember CLI to manage front-end plugins and component dependencies (HTML, CSS, JavaScript, etc). All Bower components are installed in the bower_components directory. If we open bower.json, we see the list of dependencies that are installed automatically, including Ember, Ember CLI Shims, and QUnit (for testing). If we add additional front-end dependencies, such as Bootstrap, we will see them listed here, and added to the bower_components directory.

config: The config directory contains the environment.js where you can configure settings for your app.

dist: When we build our app for deployment, the output files will be created here.
node_modules / package.json: This directory and file are from npm. npm is the package manager for Node.js. Ember is built with Node and uses a variety of Node.js modules for operation. The package.json file maintains the list of current npm dependencies for the app. Any Ember CLI add-ons you install will also show up here. Packages listed in package.json are installed in the node_modules directory.

public: This directory contains assets such as images and fonts.

vendor: This directory is where front-end dependencies (such as JavaScript or CSS) that are not managed by Bower go.

tests / testem.js: Automated tests for our app go in the tests folder, and Ember CLI's test runner testem is configured in testem.js.

tmp: Ember CLI temporary files live here.

ember-cli-build.js: This file describes how Ember CLI should build our app.

ES6 Modules

If you take a look at app/router.js, you'll notice some syntax that may be unfamiliar to you. Ember CLI uses ECMAScript 2015 (ES2015 for short, previously known as ES6) modules to organize application code. For example, the line import Ember from 'ember'; gives us access to the actual Ember.js library as the variable Ember. And the import config from './config/environment'; line gives us access to our app's configuration data as the variable config. const is a way to declare a read-only variable, making sure it is not accidentally reassigned elsewhere. At the end of the file, export default Router; makes the Router variable defined in this file available to other parts of the app.

Upgrading Ember

Before continuing to the tutorial, make sure that you have the most recent versions of Ember and Ember Data installed. If the version of ember in bower.json is lower than the version number in the upper-left corner of these Guides, update the version number in bower.json and then run bower install. Similarly, if the version of ember-data in package.json is lower, update the version number and then run npm install.
The Development Server

Once we have a new project in place, we can confirm everything is working by starting the Ember development server:

 ember server

or, for short:

 ember s

If we navigate to http://localhost:4200, we'll see the default welcome screen. Once we add our own app/templates/application.hbs file, the welcome screen will be replaced with our own content.
https://guides.emberjs.com/v2.9.0/tutorial/ember-cli/
Summary: Learn to use the ClickOnce deployment technology to deploy Microsoft Office-based solutions built with Visual Studio 2008 Professional. Note that this article was previously published under the title "Deploying Solutions for 2007 Office System with ClickOnce Using Visual Studio Tools for the Office System (3.0)" (20 printed pages)

Robert Green, MCW Technologies, LLC

Published: October 2007

Updated: November 2008

Applies to: 2007 Microsoft Office system, Microsoft Visual Studio 2008 Professional

Contents

 Publishing Solutions Using ClickOnce
 Creating the Sample Application
 Deploying Solutions with the Publish Wizard
 Installing Microsoft Office Solutions
 Publishing Options
 Updating Solutions
 ClickOnce Security Checks
 Conclusion
 About the Author
 Additional Resources

Client applications, such as those using Windows Forms or Windows Presentation Foundation, provide a rich user experience and also provide access to local resources such as storage and printing. They have offline support so users can install applications once and use them anywhere. Solutions built using Microsoft Visual Studio 2008 Professional are client applications with the user interface (UI) provided by familiar applications, such as Microsoft Office Word 2007 or Microsoft Office Excel 2007.

ClickOnce is a deployment technology that allows you to create self-updating Windows-based applications that can be installed and run with minimal user interaction. ClickOnce brings the benefits of Web-based deployment to client applications. In Visual Studio 2008 Professional, you can now deploy solutions for the 2007 Microsoft Office system using ClickOnce.

Like any applications that use the Microsoft .NET Framework, after you create and test a solution in your development environment, you need to deploy it to users. If you make changes to the solution, you need to deploy those updates.
The questions you need to ask and answer to deploy a Microsoft Office solution are no different from those you would ask and answer to deploy a Windows client application. What do users need on their computers to run the solution? How will you deploy the solution to users? How will you deploy updates?

This article introduces using ClickOnce to deploy solutions for the 2007 Office system. The exercises in this article use document-level solutions. The techniques work the same for application-level solutions.

Publishing Solutions Using ClickOnce

A ClickOnce application is any Windows Presentation Foundation, Windows Forms, console application, or Microsoft Office solution published using ClickOnce technology. You can publish a ClickOnce application in three different ways: from a Web page, from a network file share, or from media such as a DVD. After a ClickOnce application is installed, it can run locally even when the computer is offline, or it can be run in an online-only mode without permanently installing anything on the computer. For more information, see Choosing a ClickOnce Deployment Strategy.

When you publish a solution, you must specify a publishing location. This is where Visual Studio copies the solution files necessary for installing the solution. By default, the publishing location is also the installation path. It is from this location that users install the solution. You can specify an installation path that differs from the publishing location. For example:

- You can publish to a Web site or network share and have the users install from there.
- You can publish to a staging Web site and then copy the files to another Web site for installation.
- You can publish to a network share to test installations and then have users install from a Web site or build CDs for installation.

The Microsoft Office solution contains the following components:

- The Microsoft Office document, if you create a document-level solution.
- The customization assembly and any assemblies upon which it relies.
A deployment manifest file. This identifies the location and the current version of the solution. An application manifest file. This identifies the name, version, and location of the customization assembly. There are two mechanisms for publishing applications from within Visual Studio. You can publish by using the Publish Wizard or by using the Publish page of the Project Designer. The Publish Wizard prompts you for the basic information required to publish a solution. You can use the Publish page to set additional options and customize the publishing process. In this exercise, you create a simple Microsoft Office solution to use in the remainder of this article. Start Visual Studio. If you use Windows Vista, make sure you are running Visual Studio as administrator so that you have permission to publish to a Web server. Create a Microsoft Office Word 2007 Template solution named ClickOnce Demo. Open the file ThisDocument.vb. Add the following code to the ThisDocument_Startup method: Try ' Insert the date into the document. Me.Content.InsertAfter(DateTime.Today.ToShortDateString()) ' Go to the end of the line and add two paragraph returns. With Me.Application.Selection .EndKey(Word.WdUnits.wdLine) .TypeParagraph() .TypeParagraph() End With Catch MessageBox.Show( _ "The date could not be added to the document.") End Try Run the solution and confirm the customization adds the date to the document and adds two lines below the date. Exit Word 2007. In this exercise, you deploy the Microsoft Office solution by using the Publish Wizard. In the next exercise, you deploy an update to the solution. To publish the solution, in Solution Explorer, right-click the project name and then click Publish. This displays the Publish Wizard dialog box. Type the location where you want to publish the solution. You can specify a URL, as shown in Figure 1, a disk path, a local folder, or a network share. Click Next. Type the location from which users will install the solution. 
This can be a Web site or network share, as shown in Figure 2. The Wizard sets the default installation path to be the same as the publish location. You can change this if you want.

Click Finish. Visual Studio builds the solution before publishing it. Visual Studio displays the message "Publish succeeded" in the lower left after it finishes.

In Windows Explorer, navigate to the folder %systemdrive%\inetpub\wwwroot\ClickOnce Demo. You should see contents similar to those shown in Figure 3.

Visual Studio copied the template to the publishing location and created the file ClickOnce Demo.vsto. This file is a copy of the deployment manifest file. The deployment manifest file is an XML file that describes the solution deployment and tracks the current version number. The Visual Studio Tools for the Office system runtime queries the deployment manifest file to determine which version of the application manifest file to download.

Right-click the ClickOnce Demo.vsto file and select Open With…. In the Open With… dialog box, select Notepad from the Other Programs list and clear Always use the selected program to open this kind of file. Click OK to view the deployment manifest. Ordinarily, there should be no need to open the manifest file. In this exercise, you are simply viewing the contents as a way to better understand the published solutions.

Ensure the check box Always use the selected program to open this kind of file is not selected. If you associate .vsto files with Notepad by default, ClickOnce functionality will not launch the Visual Studio Tools for Office runtime to begin the installation or to update functionality.

You should see the following XML, which identifies to the runtime the location of the application manifest file.
 <dependency>
   <dependentAssembly
     dependencyType="install"
     codebase="ClickOnce Demo_1_0_0_0\ClickOnce Demo.dll.manifest"
     size="11704">

In Windows Explorer, navigate to the folder Application Files\ClickOnce Demo_1_0_0_0. This folder contains the template and the customization assembly. The customization assembly has a .deploy file name extension. The folder also contains the file ClickOnce Demo.dll.manifest. This is the application manifest file, an XML file that provides the runtime with the information it needs to load and update customization assemblies. The folder also contains a copy of the deployment manifest.

In Notepad, open the file ClickOnce Demo.dll.manifest. The following code example indicates to the runtime to retrieve the 1.0.0.0 version of the customization assembly from the ClickOnce cache.

 <dependency>
   <dependentAssembly
     dependencyType="install"
     allowDelayedBinding="true"
     codebase="ClickOnce Demo.dll"
     size="18944">
     <assemblyIdentity
       name="ClickOnce Demo"
       version="1.0.0.0"
       language="neutral"
       processorArchitecture="msil" />

Installing Microsoft Office Solutions

You can install a Microsoft Office solution from a Web site, a network share, or physical media such as a DVD. This section walks you through installing and uninstalling solutions. You can launch the installation in a number of ways:

- By opening the document.
- By executing the .vsto file.
- By running the Setup program.

For more advanced deployment scenarios, you can run the Visual Studio Tools for Office installer, %commonprogramfiles%\microsoft shared\VSTO\9.0\VSTOInstaller.exe, and pass the path of the deployment manifest as an argument.

When you open a document, the Visual Studio Tools for the Office system runtime downloads the deployment manifest file. In this example, the file is ClickOnce Demo.vsto. If the customization is new or updated, the runtime executes the .vsto file using the execution engine component of Visual Studio Tools for the Office system.
You can also execute the .vsto file yourself to install the solution. The Web server does not recognize the .vsto file type by default. If you published the solution to a Web site, first you must add a MIME type to the Web server. In Control Panel, click Administrative Tools. Click Internet Information Services Manager, and then double-click MIME Types. In the Actions pane, click Add. In the File Name Extension box, type .vsto. In the MIME Type text box, type application/x-ms-vsto, and then click OK.

From the publish location, copy ClickOnce Demo.dotx to another location, for example the desktop. Ensure the template is not currently open in Visual Studio. Open the template. The Microsoft Office Customization Installer dialog box appears. The installer first downloads the deployment manifest file, ClickOnce Demo.vsto. Next, the installer checks if the publisher is recognized and trusted. It is not, so the installer displays the dialog box shown in Figure 5. Click Install. The Visual Studio Tools for the Office system runtime downloads the customization. After the installation completes, click Close. The document should display the date. The insertion point should be two lines below that.

In Control Panel, click Programs and Features. You should see the ClickOnce Demo application, as shown in Figure 6. Uninstall ClickOnce Demo.

Open the browser. In the address bar, type: Demo/ClickOnce demo.vsto. You should see the same dialog boxes as in the previous steps. Install the solution. Open the Word 2007 template.

In this exercise, you publish the solution to a network share and install it from that location. Create a ClickOnce Demo folder on a computer on your network and share it. You can use your computer if there is no network available. Return to Visual Studio. In Solution Explorer, right-click the project name, and then click Publish. Type \\computer_name\ClickOnce Demo as the publish location. Click Next. Select From a UNC path or file share.
Type \\computer_name\ClickOnce Demo as the publishing location. Click Finish. In Windows Explorer, navigate to the shared ClickOnce Demo folder. You should see contents similar to those you saw previously. By default, each time you publish a solution, Visual Studio increments the revision part of the solution's version number. The solution you just published is therefore version 1.0.0.1.

Copy ClickOnce Demo.dotx from the publish location and overwrite the previous version. The new version of ClickOnce Demo.dotx downloads the latest deployment manifest file from the Web site location. Install the solution by using one of the installation methods. The Microsoft Office Customization Installer dialog box appears, as shown in Figure 7. Notice that the installer now retrieves the deployment manifest file from the network share. Install the customization as before. Confirm that the customization runs. Exit Word 2007, and then uninstall ClickOnce Demo.

In Windows Explorer, navigate to the shared folder. Type \\computer_name\ClickOnce Demo and then click OK. You should see the Microsoft Office Customization Installer. Follow the prompts to install the customization. Open the document and confirm the customization runs. Exit Word 2007 and uninstall ClickOnce Demo.

In the ClickOnce Demo folder, double-click the setup.exe file. You should see the Microsoft Office Customization Installer. Open the document and confirm that the customization runs.

In this exercise, you saw three ways you can install an Office solution: You can open the document. You can execute the deployment manifest file. You can run the Setup program. In this example, all three of these methods download the deployment manifest file and install the customization.
However, the Microsoft Office-based solution you built in Visual Studio will run only if the computer has the following software installed: Microsoft .NET Framework 3.5 Visual Studio Tools for Office runtime Appropriate Microsoft Office application, including the primary interop assemblies The Setup program installs the .NET Framework and the Visual Studio Tools for Office runtime if they are not present. These are downloaded from the Microsoft Download Center, or you can specify the location for these prerequisites. The next section demonstrates this. The Publish Wizard asks you for two pieces of information: the publish location and installation path. If you are publishing to a Web site, you must specify the installation path, which by default is the publish location. If you are publishing to a disk path or file share and you want the publish location and installation path to differ, you must specify the installation path. If you accept the default or leave the installation path blank, the publish location and install path remain the same. You can use the Publish Location page of the Project Designer to specify a broader range of options and settings for publishing. This section walks through using the Publish Location page to specify additional publishing options. In Visual Studio, in the Project Designer, navigate to the Publish Location page, as shown in Figure 8. On the Publish Location page, you can set or change the publishing and installation folders. You can also select the publish language for the solution. If you choose a language other than English, you must ensure your users installed the Visual Studio Tools for the Office system Language Package or that you add it to the solution’s prerequisites so the Setup program installs it. For more information, see Microsoft Visual Studio 2005 Tools for the Microsoft Office System (VSTO2005) Language Package. Automatically increment revision with each release is checked by default. 
With this option, the revision portion of the version increases each time you publish the solution. Figure 8 shows that the next version is 1.0.0.2. We recommend that you increment the version number each time you publish. If you add significant functionality or resolve a large number of issues, you may want to increment the minor portion rather than the revision portion. Click Prerequisites to view the solution’s prerequisites, as shown in Figure 9. Microsoft Office solutions you build with Visual Studio 2008 require both the Microsoft .NET Framework 3.5 and the runtime for Visual Studio Tools for the Office system. By default, when you publish a solution, Visual Studio creates a Setup.exe file. Because this executable file uses Windows Installer technology to install prerequisites, Windows Installer 3.1 is another prerequisite. You can specify additional prerequisites, such as SQL Server 2005 Express Edition, if your solution requires them. By default, ClickOnce downloads required components from the vendor’s Web site. You can instruct ClickOnce to install them from a different location. Locations can include the application installation path, a Web site, an intranet site, a network share, or a folder on the user’s computer. As you see, a user can easily install a published solution. However, installing the solution is only part of the value of ClickOnce. Another benefit is automatic application updates. The solution periodically reads its deployment manifest file to see if updates to the customization are available. If updates are available, the Visual Studio Tools for the Office system runtime downloads the new version of the customization. To control the updating behavior of an application, in the Project Designer, on the Publish Location page, click Updates. This displays the Customization Updates dialog box, as shown in Figure 10. By default, the runtime for Visual Studio Tools for the Office system checks for updates every seven days. 
This minimizes the performance penalty of checking for updates. If you have five Word 2007 add-ins installed, for example, and each add-in checks each time it loads, you must wait for each add-in to check for updates each time you start Word 2007. The runtime checks for updates by downloading a new version of the deployment manifest file and checking if it refers to a new version of the application manifest file. You can change this behavior by changing the interval, by turning off automatic checking, or by choosing to check each time the customization runs. When you install a customization, the installer records the updating behavior. If the behavior changes, the Visual Studio Tools for the Office system runtime records that change the next time it downloads an update. Suppose you publish and install a customization that uses the default update interval of seven days. Two days later, you make a change to the customization. You also change the updating behavior to check every time. You then republish. Five days later, the runtime for Visual Studio Tools for the Office system checks for an update, discovers one exists, and downloads it. At that time, the runtime records the updating behavior change in the registry. The next time the user runs the customization, the runtime checks for updates and downloads as necessary. If you want to change the update behavior immediately, you can have your users reinstall the customization. On the Publish Location page of the Project Designer, click Updates. Select Check every time the customization runs, and then click OK. Navigate to the ClickOnce Demo folder and delete the existing contents. In Visual Studio, change the Revision portion of the version number to 0. You are now republishing the initial version of the solution and, in effect, starting over. Click Publish Now to publish the application again. Notice the Revision portion of the version number is now 1. Uninstall the customization. 
Navigate to the publishing folder in Windows Explorer. Reinstall the customization by double-clicking Setup.exe or ClickOnce Demo.vsto. Reinstalling is necessary because the currently installed version uses the default update interval of seven days. Recopy the document file, overwriting the current version.

Modify the ThisDocument_Startup method to display the date using a different format.

Me.Content.InsertAfter(DateTime.Today.ToLongDateString())

Republish the solution by clicking Publish Now in the Publish page of the Project Designer. Open the document and confirm that the date appears in the longer format.

In Windows Explorer, navigate to the Application Files folder in the publishing location. You should see two folders, one for each version of the solution. These folders contain version-specific copies of the files you saw before.

In Windows Explorer, navigate back to the publishing location folder. In Notepad, open the file ClickOnce Demo.vsto. You should see the following XML, which identifies the application manifest file location.

<dependency>
  <dependentAssembly dependencyType="install" codebase="Application Files\ClickOnce Demo_1_0_0_1\ClickOnce Demo.dll.manifest" size="11628">

The Visual Studio Tools for the Office system runtime downloads the updated customization. The new customization runs and you should see the date in the longer format.

Automatic updating is a powerful feature of ClickOnce. You simply publish a new version of the customization. When users open the document, they automatically receive and install the update. Each time you publish a solution, Visual Studio updates the publishing location and installation path. The top-level folder contains the current deployment manifest file and the deployment manifest files for each published version of the solution. There are folders corresponding to each version and these contain the document customization and application manifest file for that version of the solution.
Keeping these versions of the solutions makes it easy for you to roll back an update. When you open the document, the Visual Studio Tools for the Office system runtime downloads ClickOnce Demo.vsto. This is a copy of the most recent deployment manifest file and indicates which version of the customization to load. In the previous exercise, you published version 1.0.0.1. The deployment manifest file indicates that version. To load a different version of the customization, you can replace the version in the publishing location or installation path with a previous version.

In Windows Explorer, navigate to the folder Application Files\ClickOnce Demo_1_0_0_0. Make a copy of the file ClickOnce Demo_1_0_0_0.vsto. Overwrite the ClickOnce Demo.vsto file with the one you just copied. You should see the dialog box shown in Figure 8. The Visual Studio Tools for the Office system runtime downloads the previous customization. You should see the date in the shorter format.

To return to the 1.0.0.1 version of the customization, repeat the previous steps but make a copy of the deployment manifest in the Application Files\ClickOnce Demo_1_0_0_1 folder.

When a user installs or updates a Microsoft Office solution, the Visual Studio Tools for the Office system runtime performs the following series of security checks to determine if the solution is trusted:

Does the document reside on the user's computer or in a trusted folder? If it does, the runtime proceeds to the next check. If it does not, the solution does not load.

Does the application manifest file identify whether the solution requests FullTrust permission? This is the default. If it does, the runtime proceeds to the next check. If it does not, the solution does not install or update.

Is the deployment manifest file in the Internet Explorer Restricted Sites zone? If it is not, the runtime proceeds to the next check. If it is, the solution does not install or update.
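Scripted, the rollback above is nothing more than copying the versioned deployment manifest over the current one. A sketch in Python (the solution name and folder layout follow the exercise; the function name is my own):

```python
import os
import shutil

def rollback(publish_dir, version):
    """Point ClickOnce at an older version by overwriting the current
    deployment manifest with the versioned copy, e.g. version='1.0.0.0'."""
    tag = "ClickOnce Demo_" + version.replace(".", "_")
    src = os.path.join(publish_dir, "Application Files", tag, tag + ".vsto")
    dst = os.path.join(publish_dir, "ClickOnce Demo.vsto")
    shutil.copyfile(src, dst)  # the runtime now loads the older customization
    return dst
```

The next time a user opens the document, the runtime reads the overwritten manifest and downloads whichever version it names.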
Is the deployment manifest file signed with an explicitly untrusted certificate? If it is not, the runtime proceeds to the next check. If it is, the solution does not install or update.

Is the deployment manifest file signed with a trusted publisher certificate? If it is, the solution installs or updates with no further security checks. If it is not, the runtime proceeds to the next check.

Is user prompting (asking the user if he or she trusts the installation) disabled? If it is, the solution does not install or update. If it is not, the runtime proceeds to the next check.

Is the solution in the inclusion list? If it is, the solution installs or updates with no further security checks. If it is not, the runtime proceeds to the next check.

When prompted, did the user choose to install the solution? If the user did, the solution installs or updates. If the user did not, the solution does not install or update.

The following sections explore in more detail the ClickOnce trust prompt, trusted certificates, and the inclusion list. When you installed the Microsoft Office solution in the first exercise, you saw a dialog box that informed you that the publisher could not be verified. This dialog box is the ClickOnce trust prompt. The trust prompt asked you if you wanted to install the customization.

As a best practice, you should not ask users to decide whether they trust the installation. The typical user may not have the knowledge required to make the correct decision. Does the user know whether they should trust it? You can remove the users from the trust decision in two ways. You can sign the ClickOnce manifest file with a trusted publisher certificate or you can add the solution to the inclusion list ahead of time. Both of these techniques allow the installation to proceed without user prompting. There is also a middle ground between providing users with no information and removing them from the decision.
You can sign the deployment manifest file with a certificate that has a known identity. The ClickOnce trust prompt then changes to reflect the known identity of the publisher. You trust software if you know who published it and you trust that publisher. A digital certificate provides proof of identity. It can also provide proof of trust. To publish a solution by using ClickOnce, you must sign the application and deployment manifest files with a digital certificate. In the Project Designer, on the Signing tab, shown in Figure 12, notice that Sign the ClickOnce manifest is selected. Notice also that the solution contains the file ClickOnce Demo_TemporaryKey.pfx. The .pfx file is a personal certificate file. Visual Studio creates this when you create the project. Visual Studio 2005 creates the .pfx file when you build the solution, unless you specify a certificate. The certificate contains a public key and a private key. Visual Studio uses these to apply a digital signature to the manifest file. The digital signature provides you with proof no one has altered the manifest files after publishing. ClickOnce uses the digital signature to verify the publisher and the permission level of the solution. The default certificate is for development purposes, not production, and is therefore not a trusted certificate. It provides no proof of identity or trust. That is why the runtime for Visual Studio Tools for the Office system prompted you for permission to install the solution. A digital certificate provides proof of identity if it is installed in the Trusted Root Certification Authority certificate store and has not expired or been revoked. A digital certificate provides proof of trust if it provides proof of identity and is installed in the Trusted Publishers certificate store on the user’s computer. You can control user prompting and require trusted certificates. 
To do this, add the following registry key: \HKLM\Software\Microsoft\.NETFramework\Security\TrustManager\PromptingLevel. Modify the registry at your own risk. If you corrupt the registry, you might have to reinstall the operating system to fix the problem. Microsoft cannot guarantee that registry problems can be solved. Then add the following string values for each of the security zones: MyComputer LocalIntranet Internet TrustedSites UntrustedSites Set the value of each security zone to one of the following: Enabled. The installation prompts the user if the solution is not signed with a trusted certificate. This is the default for the MyComputer, LocalIntranet, and TrustedSites zones. AuthenticodeRequired. The user cannot install the solution if it is not signed with a trusted root authority certificate. The installation prompts the user if the solution is not signed with a trusted publisher certificate. This is the default for the Internet zone. Disabled. The user cannot install the solution if it is not signed with a trusted publisher certificate. This is the default for the UntrustedSites zone. If you want to prevent users from making any trust decisions, use Disabled as the value for each security zone. If you sign a Microsoft Office solution with a trusted certificate and you install that certificate on the user’s computer, the solution always installs and updates with no need for user prompting. As an alternative to using a trusted certificate, you can add the solution to the inclusion list. The inclusion list is a list of trusted solutions. Visual Studio Tools for the Office system uses the registry to maintain this list. When a user chooses to trust a solution using the ClickOnce trust prompt, Visual Studio Tools for the Office system adds the solution to the inclusion list. Subsequent updates do not require user prompting, as the solution is in the list of trusted solutions. 
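Collected into a registry script, the defaults described above look like this (a sketch built from the values in the text; merge at your own risk):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\.NETFramework\Security\TrustManager\PromptingLevel]
"MyComputer"="Enabled"
"LocalIntranet"="Enabled"
"Internet"="AuthenticodeRequired"
"TrustedSites"="Enabled"
"UntrustedSites"="Disabled"
```

Setting every zone to Disabled, as the text notes, removes users from trust decisions entirely.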
You can manually add a solution to the inclusion list, or remove a solution, by using the UserInclusionList class. This class is in the Microsoft.VisualStudio.Tools.Applications.Runtime.Security namespace. In this exercise, you add a solution to the inclusion list and then install it without prompting.

To start, create a Microsoft Office solution. Create a Word 2007 Template solution named ClickOnce Demo 2. Add the following to the ThisDocument_Startup method. Publish this solution.

Next, add the solution to the inclusion list. Create a Windows Forms Application solution. Add a reference to Microsoft.VisualStudio.Tools.Office.Runtime. Add two text boxes to the form. The second text box should be multiline. Add a button to the form. Set the Text property to Add.

Add the following code to the Form1.vb code file.

Imports Microsoft.VisualStudio.Tools.Office.Runtime.Security

Add the following declarations.

Private solutionLocation As Uri
Private entry As AddInSecurityEntry

Add the following code to the button's Click event handler.

solutionLocation = New Uri(TextBox1.Text)
entry = New AddInSecurityEntry(solutionLocation, TextBox2.Text)
UserInclusionList.Add(entry)

Run the application. In the first text box, type the location of the deployment manifest. For example, type: Demo 2/ClickOnce Demo 2.vsto or type: \\computer_name\ClickOnce Demo 2\ClickOnce Demo 2.vsto

Navigate to the publish location, and then open the deployment manifest file. Locate the RSAKeyValue element. Copy the contents of that element, including all child elements. Paste the XML into the second text box on the form, as shown in the following example.

<RSAKeyValue>
  <Modulus>xS7LuQGcYZYijaXJzuYRiqpHud3XYBsXZv8ELmbUzYA0Pc410pmUF0XAnGOkkEDH7GZHCfGKmdKdT4j7m56iQooLQuGoJSpaoqiTV0loVDPfByY+7yvtm424znaj2SkJ0ro4adeF9TOY30EON5pDo4E/bzTr1c+ToYofA0maVo8=</Modulus>
  <Exponent>AQAB</Exponent>
</RSAKeyValue>

Click Add. Finally, you install the solution without user prompting. Return to the publishing location.
From the publish location, copy ClickOnce Demo 2.dotx to another location, for example to the desktop. Open the Word template. The runtime for Visual Studio Tools for the Office system should download and install the customization without prompting you for a trust decision.

An add-in or customization for Visual Studio Tools for the Office system needs permission to run. If you build Microsoft Office 2003 solutions with any version of Visual Studio Tools for the Office system, or if you build 2007 Office system solutions by using Microsoft Visual Studio 2005 Tools for the 2007 Microsoft Office system (Second Edition), you have to grant full trust permission to the assembly. By default, these solutions do not have permission to execute. This is a source of confusion and difficulty for developers working with Visual Studio Tools for the Office system. There are documented ways to make this easier. However, they require extra steps and not everyone is fully aware of them.

As you see in this article, Visual Studio 2008 Professional enables you to use ClickOnce to deploy solutions for the 2007 Office system. ClickOnce, by default, gives Microsoft Office solutions full trust permissions. The extra step that has caused issues in the past is now built in. This is a welcome enhancement to the product. ClickOnce is easy to use, but it also provides several layers of control. You can determine which of the techniques you have seen here to use for your solutions and customers.

About the author: ...C#, and Microsoft Windows Workflow Foundation. Before joining MCW, Robert worked at Microsoft as the Product Manager for Visual Studio Tools for the Office system.

For more information, see the following resources:

Visual Studio Developer Center
Office Development with Visual Studio Developer Portal
ClickOnce
ClickOnce Deployment
http://msdn.microsoft.com/en-us/library/bb821233.aspx
pmix_put man page

PMIx_Put — Push a value into the client's namespace

Synopsis

#include <pmix.h>

pmix_status_t PMIx_Put(pmix_scope_t scope, const char key[], pmix_value_t *val);

Arguments

scope : Defines a scope for data "put" by PMI per the following:

- (a) PMI_LOCAL - the data is intended only for other application processes on the same node. Data marked in this way will not be included in data packages sent to remote requestors
- (b) PMI_REMOTE - the data is intended solely for application processes on remote nodes. Data marked in this way will not be shared with other processes on the same node
- (c) PMI_GLOBAL - the data is to be shared with all other requesting processes, regardless of location

key : String key identifying the information. This can be either one of the PMIx defined attributes, or a user-defined value

val : Pointer to a pmix_value_t structure containing the data to be pushed along with the type of the provided data.

Description

Push a value into the client's namespace. The client library will cache the information locally until PMIx_Commit is called. The provided scope value is passed to the local PMIx server, which will distribute the data as directed.

Return Value

Returns PMIX_SUCCESS on success. On error, a negative value corresponding to a PMIx errno is returned.

Errors

PMIx errno values are defined in pmix_common.h.

Notes

See pmix_common.h for the definition of the pmix_value_t structure.

See Also

PMIx_Constants(7), PMIx_Structures(7)

Authors

PMIx.
https://www.mankier.com/3/pmix_put
Mercurial > dropbear: libtommath/bn_mp_prime_fermat.c @ 457:e430a26064ee (DROPBEAR_0.50, "Make dropbearkey only generate 1024 bit keys")

#include <tommath.h>
#ifdef BN_MP_PRIME_FERMAT

/* performs one Fermat test.
 *
 * If "a" were prime then b**a == b (mod a) since the order of
 * the multiplicative sub-group would be phi(a) = a-1.  That means
 * it would be the same as b**(a mod (a-1)) == b**1 == b (mod a).
 *
 * Sets result to 1 if the congruence holds, or zero otherwise.
 */
int mp_prime_fermat (mp_int * a, mp_int * b, int *result)
{
  mp_int  t;
  int     err;

  /* default to composite  */
  *result = MP_NO;

  /* ensure b > 1 */
  if (mp_cmp_d(b, 1) != MP_GT) {
     return MP_VAL;
  }

  /* init t */
  if ((err = mp_init (&t)) != MP_OKAY) {
    return err;
  }

  /* compute t = b**a mod a */
  if ((err = mp_exptmod (b, a, a, &t)) != MP_OKAY) {
    goto LBL_T;
  }

  /* is it equal to b? */
  if (mp_cmp (&t, b) == MP_EQ) {
    *result = MP_YES;
  }

  err = MP_OKAY;
LBL_T:mp_clear (&t);
  return err;
}
#endif

/* $Source: /cvs/libtom/libtommath/bn_mp_prime_fermat.c,v $ */
/* $Revision: 1.3 $ */
/* $Date: 2006/03/31 14:18:44 $ */
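The same congruence check is easy to restate in Python, where the built-in three-argument pow does the modular exponentiation that mp_exptmod performs above (a sketch; names are my own):

```python
def fermat_test(a, b):
    """One Fermat test: True if b**a == b (mod a), i.e. 'a' may be prime.

    Like mp_prime_fermat, a passing result does not prove primality:
    some composites satisfy the congruence (Fermat pseudoprimes).
    """
    if b <= 1:                  # mirrors the MP_VAL check: ensure b > 1
        raise ValueError("base must be > 1")
    return pow(b, a, a) == b    # t = b**a mod a; compare with b
```

Note the caveat in the C comment applies here too: 341 = 11 * 31 passes with base 2 even though it is composite.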
https://hg.ucc.asn.au/dropbear/file/e430a26064ee/libtommath/bn_mp_prime_fermat.c
The Stack class represents a last-in, first-out (LIFO) stack of objects. It supports two main operations, push and pop: we can push (insert) items onto the stack and pop (retrieve) items from it. Stack is implemented as a circular buffer. Because it follows the LIFO strategy, items come back out in the reverse of the order in which they were pushed; the last element inserted is the first one returned when you retrieve items from the stack. As elements are added to a stack, the capacity is automatically increased as required through reallocation.

Stack Constructors:

1. Stack(): Initializes a new instance of the Stack class that is empty and has the default initial capacity.
2. Stack(ICollection): Initializes a new instance of the Stack class that contains elements copied from the specified collection.
3. Stack(Int32): Initializes a new instance of the Stack class that is empty and has the specified initial capacity or the default initial capacity, whichever is greater.

Stack Properties:

1. Count: Gets the number of elements contained in the Stack.
2. IsSynchronized: Gets a value indicating whether access to the Stack is synchronized (thread safe).

Stack Methods: Some of the important methods of the Stack class are:

1. Clear(): Removes all objects from the Stack.
2. Clone(): Creates a shallow copy of the Stack.
3. Contains(Object): Determines whether an element is in the Stack.
4. Peek(): Returns the object at the top of the Stack without removing it.
5. Pop(): Removes and returns the object at the top of the Stack.
6. Push(Object): Inserts an object at the top of the Stack.

Advantages of Stack:

1. As elements are added to the stack, the capacity of a stack is automatically increased as required through reallocation.
2. Stack accepts null as a valid value and allows duplicate elements.
3. A synchronized (thread-safe) wrapper of the Stack class is also available.
For example, we create an instance of the Stack class with the default initial capacity by using its default constructor and insert (Push) three values into the stack. After inserting the elements, we retrieve them.

using System;
using System.Collections;

namespace StackEx
{
    class Program
    {
        static void Main(string[] args)
        {
            Stack st = new Stack();

            // Inserting (Push) values into the stack
            st.Push("Hi");
            st.Push("Hello");
            st.Push("Everyone");

            Console.WriteLine(@"Let us try to use stack in the program ");
            Console.WriteLine();
            Console.WriteLine("Count: {0}", st.Count);
            Console.WriteLine();
            Console.WriteLine("Values:");
            Console.WriteLine();
            PrintValues(st);
            Console.ReadKey();
        }

        // Enumerating the stack returns values in LIFO (pop) order
        public static void PrintValues(IEnumerable MyCollection)
        {
            foreach (Object obj in MyCollection)
            {
                Console.Write("{0}", obj);
                Console.WriteLine();
            }
        }
    }
}

Output:

Let us try to use stack in the program
Count: 3
Values:
Everyone
Hello
Hi
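For comparison, the same LIFO behavior can be sketched outside .NET; in Python a plain list plays the role of the stack (illustrative only, not part of the article's C# example):

```python
stack = []

# Push three values, as in the C# program above
stack.append("Hi")
stack.append("Hello")
stack.append("Everyone")

print("Count:", len(stack))

# Peek looks at the top without removing it
top = stack[-1]           # "Everyone"

# Pop returns items in reverse insertion order (last in, first out)
values = [stack.pop() for _ in range(len(stack))]
print(values)
```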
https://www.mindstick.com/Articles/11977/stack-class-in-c-sharp
If you are developing a Silverlight RIA Service Application and you need to expose your GET or POST methods via API calls that could be consumed by third-party apps, then you can very easily utilize the ASP.Net Web API for that. In today's article we will see how to configure an existing Silverlight RIA Services enabled app to use Web APIs.

To summarize, the following steps are needed to create a RIA enabled Silverlight App hosted in an ASP.Net Web App.

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

namespace SLRiaWebAPiDemo.Web
{
    public class GroupsController : ApiController
    {
        MRM_LatestEntities _context = new MRM_LatestEntities();
        //List<Group> groups = new List<Group>();

        // GET api/<controller>
        public IEnumerable<Group> GetAllGroups()
        {
            return _context.Groups;
            //return new string[] { "value1", "value2" };
        }

        // GET api/<controller>/5
        public Group GetGroupById(int id)
        {
            return _context.Groups.Where(x => x.Id == id).FirstOrDefault();
            //return "value";
        }

        // POST api/<controller>
        public void Post([FromBody]string value)
        {
        }

        // PUT api/<controller>/5
        public void Put(int id, [FromBody]string value)
        {
        }

        // DELETE api/<controller>/5
        public void Delete(int id)
        {
        }
    }
}

Finally we need to tell our app to route to this controller when the URL is looking for it. In order to do this we need to add the controller mappings in the configuration of our app.

public class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
            //defaults: new { controller = "Test", action = "Get", id = "" }
        );
    }
}

using System;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        WebApiConfig.Register(GlobalConfiguration.Configuration);
    }
}

That's all we need. Put a breakpoint at the GetAllGroups() method in the Web API controller and hit F5.
When the app is up and running, change the browser URL to the route mapped for the controller (with the route template above, that is the api/groups path) and hit Enter. You will see that the breakpoint is hit. Continue debugging and you will see the JSON data in the browser.

Conclusion

This was a very basic example that shows how you can configure your Silverlight RIA Service or ASP.Net app to use the Web API without actually getting started with a full blown MVC project. You can create multiple APIs and release them to be used by your clients.

Supreet is a strange guy. Whether it is his interaction with the machines or people, he excels at both. A fanatic developer who has a fad for new technology. He is also well renowned for his latest gadgets and love for t...
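Once the route is in place, any client that speaks HTTP and JSON can consume it. A sketch of the client side in Python (the payload below is invented for illustration; a real call would fetch it from the api/groups URL):

```python
import json

# Example response body such as GET api/groups might return
payload = '[{"Id": 1, "Name": "Admins"}, {"Id": 2, "Name": "Users"}]'

groups = json.loads(payload)
for group in groups:
    print(group["Id"], group["Name"])

# Picking one group by id, like GET api/groups/2
group2 = next(g for g in groups if g["Id"] == 2)
```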
http://www.c-sharpcorner.com/UploadFile/2450ca/configuring-silverlight-ria-services-app-to-use-Asp-Net-web/
#include <OldTimer.H> A simple class for keeping track of elapsed time. CH_XD::OldTimer provides the ability to measure the passage of wall-clock time within units of code, and relate multiple measurements in a hierarchical manner. A timer can be started, stopped, reset and printed out. Multiple timers can be instantiated and related to each other hierarchically: a timer instance may have a parent and children. The hierarchical relationship is relevant only when printing out the summary report. In parallel, the timers on each proc operate independently. Only the summary operations cause communication between procs. Construct an unnamed timer that has no relation to any other instance. Construct a named timer and add it to a table.
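To make the start/stop/report life cycle concrete, here is a minimal analogue in Python (names and API are invented for illustration and unrelated to Chombo's actual class):

```python
import time

class Timer:
    """A wall-clock timer that can be started, stopped, reset and
    printed, with optional parent/child nesting for the report."""

    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.elapsed = 0.0
        self._start = None
        if parent is not None:
            parent.children.append(self)

    def start(self):
        self._start = time.perf_counter()

    def stop(self):
        self.elapsed += time.perf_counter() - self._start
        self._start = None

    def reset(self):
        self.elapsed = 0.0
        self._start = None

    def report(self, indent=0):
        # As with OldTimer, the hierarchy matters only when printing
        lines = ["%s%s: %.6f s" % ("  " * indent, self.name, self.elapsed)]
        for child in self.children:
            lines.extend(child.report(indent + 1))
        return lines
```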
http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.2/classOldTimer.html
NAME
    mktemp - make a unique filename

SYNOPSIS
    #include <stdlib.h>

    char *mktemp(char *template);

DESCRIPTION
    The mktemp() function replaces the contents of the string pointed to by template by a unique filename and returns template. The application must initialise template to be a filename with six trailing 'X's; mktemp() replaces each 'X' with a single byte character from the portable filename character set.

RETURN VALUE
    The mktemp() function returns the pointer template. If a unique name cannot be created, template points to a null string.

ERRORS
    No errors are defined.

EXAMPLES
    None.

APPLICATION USAGE
    Between the time a pathname is created and the file opened, it is possible for some other process to create a file with the same name. The mkstemp() function avoids this problem. For portability with previous versions of this document, tmpnam() is preferred over this function.

FUTURE DIRECTIONS
    None.

SEE ALSO
    mkstemp(), tmpfile(), tmpnam(), <stdlib.h>.
http://pubs.opengroup.org/onlinepubs/007908775/xsh/mktemp.html
Opened 3 years ago

Closed 3 years ago

#19565 closed Bug (invalid)

FloatField object returns string when its value is set from string

Description

When a float field is set from a string value representing a float, it is correctly saved (the string is converted to a float). But the field itself returns a string in this case (even after save). This is pretty awful, because there is no error when the field is saved... and then suddenly you get a string value from a field that you expect to return a float. If you access the same row via a different query, it - correctly - returns a float instead.

Expected behaviour - the field should consistently return a float.

Some code presenting it. I am also attaching a zipped full Django project with only this model and test.

model:

    class TestModel(models.Model):
        test_field = FloatField(null=True)

test:

    class TestFloatFieldAsString(TestCase):
        def test_float_field_as_string(self):
            model1 = TestModel(test_field=12.4)
            model1.save()
            print model1.test_field + 1.0
            model2 = TestModel(test_field='12.4')
            model2.save()
            try:
                print model2.test_field + 1.0
                self.fail("Expect failure here")
            except TypeError as exc:
                print "expected:" + str(exc)
            model3 = TestModel.objects.get(id=model2.id)
            print model3.test_field + 1.0

Attachments (1)

Change History (5)

Changed 3 years ago by jarek@…

comment:1 Changed 3 years ago by anonymous

- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset

The above test prints:

    13.4
    expected:cannot concatenate 'str' and 'float' objects
    13.5

comment:2 Changed 3 years ago by anonymous

Another interesting effect is having 13.5 returned where we expected 13.4, but this one is likely caused by floating-point arithmetic precision.

comment:3 Changed 3 years ago by psjinx@…

If I call clean_fields() then m.test_field returns a float value; however, if I call m.save() then m.test_field returns a string. It means the clean_fields() method is not called while saving.
    In [14]: m = TestModel(test_field='12.4')
    In [15]: m.clean_fields()
    In [16]: m.test_field
    Out[16]: 12.4
    In [17]: m = TestModel(test_field='12.4')
    In [18]: m.save()
    In [19]: m.test_field
    Out[19]: '12.4'

comment:4 Changed 3 years ago by kmtracey

- Resolution set to invalid
- Status changed from new to closed

This is much like #12401, only for a different type of field. When a model object is created, the field value isn't converted from string (or whatever is passed in the constructor) to the "correct" type for the field. As noted there, this is a long-standing design decision and cannot be changed at this point without significant backward incompatibility.

Attachment: full Django project with test that shows the problem
https://code.djangoproject.com/ticket/19565
Version control, work item tracking, project management, and build management

Brian White
Microsoft Corporation
October 2005

Applies to: Microsoft Team Foundation Server, Microsoft Visual Studio Team System

Summary: This article looks at the features of Team Foundation Server, a component of the Visual Studio Team System. (9 printed pages)

Note: This content originally appeared in the January 2005 edition of .NET Developer's Journal. It is reproduced here with permission of the publisher.

Contents

Introduction
Team Projects
Version Control and Work Item Tracking
Build Management and Testing
Project Management
Project Reporting and Metrics
Conclusion

Introduction

Do you find yourself tired after sitting in meetings all day? Do you have feelings of uselessness after weeks of bug fixing? Do you often find yourself feeling lonely, as if you've lost touch with your team and the status of your project? Are you frustrated by frequent fire drills and the sense that your project is in utter chaos? Are you hopelessly lost in a sea of changing policies and procedures with no end in sight? If you answered yes to any of these questions, you may be clinically depressed and require extensive psychological counseling. Or maybe, just maybe, you are using the wrong software life-cycle tools. As part of the Visual Studio Team System 2005 release, Microsoft will introduce a new server product designed specifically to improve software team collaboration and increase individual productivity. This new product is called Team Foundation Server (TFS). This article discusses the key objects created and managed by software teams, why the integration between these objects is so important, and how Team Foundation Server can help your team be more successful.

Team Projects

Software development is a team activity now more than ever. When you think about software teams, what are the key objects that come to mind? First and foremost is the simple notion of a project.
Not project in the Visual Studio solution sense of the word, but rather a set of individuals with shared goals to be achieved in a desired time. Team Foundation Server calls this a team project. In life-cycle tools today, this is mainly just a concept and not a physical object at all. You'll find tools either have no notion of this fundamental concept, or they have several different notions that are not consistent. To deliver tight integrations, a software life-cycle tool suite must have a common, shared notion of a team project. Also, an actual object is needed that is managed with persistent data, so that automation, relationships, and communication are possible. Team projects are central to everything Team Foundation Server provides. Team projects are surfaced in several places, primarily inside Visual Studio in the Team Explorer and through the Team Project Web site (based on Windows Sharepoint Services). Figure 1 shows the team explorer inside Visual Studio, with the team project Web site opened as a document window. Figure 1. Team Explorer Every team follows different processes and procedures. You may want your Web site to look a certain way, you may have specific guidance for how development gets done, you may have your own templates for documents, and you may want to organize your documents using a specific hierarchy. If your team is larger, you may also have special roles for team members and specific permissions assigned to these roles. Team Foundation Server provides a completely customizable way to define all of the data surrounding a team project and encapsulate this information into a methodology template. When a new team project is created, Team Foundation Server uses the methodology information to automatically configure the team project, including creating the team project Web site. Methodologies can be exported, customized, shared, loaded, and even bought and sold by third parties. 
With Team Foundation Server, you can create new team projects quickly without having to do hours or days of setup work in various tools using various techniques. To help get you started, Microsoft will provide two methodologies for use with Team Foundation Server: MSF Agile and MSF Formal. MSF Agile draws on the best practices of the Agile community and the Microsoft Solutions Framework to deliver a lightweight, easy-to-use methodology geared toward small teams who must move quickly. MSF Formal is targeting larger teams who require more control over their development processes. MSF Formal is designed to help teams achieve CMMI Level 3 maturity. At the heart of software development is the edit/compile/unit test/debug cycle. Surrounding this is usually a team build (often nightly), integration and system testing, and bug submission and resolution. The key objects being created and managed here are source code, work items, builds, and tests. You'll see the relationship between these objects offers significant integration and automation opportunities that Team Foundation exploits. Source code is another obvious object that must be tracked and managed for software teams. Software configuration management/version control tools are widely seen as a necessary part of any reasonably sized software team. Team Foundation Server includes a new version control solution that supports many advanced SCM features such as namespace versioning, automatic check in, and changes sets. Team Foundation Server also includes some new and innovative SCM features such as putting work on hold, sharing a change with another developer before check-in, and integrated check-in policies (see Figure 2). Figure 2. Pending Check-ins Note Team Foundation Server's Version Control will not replace Microsoft's Visual Source Safe (VSS). VSS will continue to be developed and supported for smaller teams (< 5 people) requiring only version control capabilities. 
Team Foundation Server's Version Control is targeted at teams of five or more that require more advanced version control features and integrations with other life-cycle tools.

Work items are another major object that teams use to prioritize and track work. Work comes in many forms: bugs, tasks, features, requirements, change requests, issues, and so forth. Most teams today need to deal with several different tools to make sure they are getting a clear picture of everything that needs to be done. Team Foundation Server has the flexibility to manage all of these types of objects as different work item types, but tracks them in one place, allowing for reporting and queries across work item types. All work items include things such as a description, assigned user, and states for tracking progress. However, with Team Foundation Server, work item types are completely customizable, which allows your team to define the fields, rules, state model, and form layout that make sense for your team.

The real power of Team Foundation Server comes not from separately managing the source code and work items, but rather from their integration and the automation that can be done with this relationship. By simply enabling a check-in policy, software teams can require that developers specify one or more work items whenever they perform a check-in. This is easy to do by having work items readily available in an integrated Pending Check-Ins window (see Figure 2). Checking the box next to the appropriate work item links the "why" of a change to the "what." The benefits to productivity and the automation possibilities of this simple link have yet to be fully realized in tools today. One example of the automation benefits is the support of automated state transitions on work items. In the past, developers would check in their code, and then go to their bug-tracking tool to transition one or more bugs into the next state ("Ready for Test" or "Resolved," for example).
With Team Foundation, a developer can simply indicate in the pending check-in dialog box which "check-in action" they want to occur after the check-in has finished. If they select the "resolve" action, the work items will be automatically transitioned to the next state. This reduces work for the developer, increases the flow of changes through the life cycle, and makes for a cleaner hand-off between the developer and tester. Testers who receive this work item to verify can easily access and review the source code changes by following the link recorded in the work item. Another object that is central to a software development team is the "build." Similar to the concept of project, builds are often treated as second-class citizens in life-cycle tools, and most companies are left to develop their own in-house build systems. A build is simply an object that identifies the manufactured software product. The source code and build scripts are the raw materials; the manufacturing process begins when a build is started; and the resulting libraries, executables, and other supporting files are the result. The whole build process does not really stop there because after the freshly constructed product drops off the assembly line, it must go through testing to ascertain its quality. Typically, a set of smoke tests is performed. These tests exercise the most basic or core functionality of the product to determine its overall health. Think of it in terms of going to see the doctor. Your doctor always checks your weight and blood pressure, listens to your heart, and pokes around in your ears and throat. Once general health has been established, more detailed testing can be performed. Team Foundation Server provides team build capabilities that elevate "build" to a first class object. Build configurations are defined within Visual Studio (see Figure 3) and can be executed on a scheduled basis or initiated on demand. 
Team Foundation Server utilizes the MS Build platform, which allows for maximum flexibility in customizing the build process to your environment. Figure 3. Build Configuration By utilizing the test automation capabilities provided in Visual Studio Team Edition for Software Testers, the smoke tests can also be automatically executed after the compile/linking steps of the build have been completed. These tests can include not only functional tests on the built application, but also static analysis testing on the source code to identify coding issues such as security holes or memory leaks. Builds can be executed on any build machine. However, all of the build information is collected and stored in the Team Foundation Server database and is available to team members both inside Visual Studio or on the team project Web site. Again, it is the integrations where the real power of Team Foundation Server comes into its own. The build report, which is automatically generated, contains not only the status of the build and where testers can go to find the results, but also the work items that went into the build. Through automation and the simple link between source code and work items, Team Foundation Server can trace back from the build to the source code changes and produce a list of the work items that were added since the last build. One final piece of integration automatically updates to the work item field "Integrated In Build" so that build information is available from the work items themselves. This is extremely helpful to both developers and testers. Testers now have a clear roadmap of what new capabilities or resolved bugs they can validate for any given build. Developers can also see when their changes got incorporated into the team build and verify the fixes themselves. This is particularly important for large teams where it may take a day or two for a change to go through several levels of integration and get into the "official" build. 
I also want to draw your attention to the link between tests and work items. We all know that testers find bugs and submit those bugs into a bug-tracking tool. The problem today is that it is often hard for the tester to include enough information in the bug to allow the developer to reproduce the problem. Thus, developers will either mark the bug as "not able to reproduce," or they need to go to the tester's office to try and debug the situation on the tester's machine. With Team Foundation Server, testers can submit bugs (or other work item types) directly from a failed test case in Visual Studio. Work items submitted in this way capture all of the testing information, automatically making it much easier for the tester to enter a work item. Information about the failed test is now readily available to the developer. He or she can review the test failure information and rerun the specific test quickly. Additionally, in the case of performance and load testing, a developer can step through the test run and look at various performance counters (some of which the tester may never have looked at personally). Storing and communicating load test run information allows performance investigations to go pretty deep before the developer needs to rerun the test. The treatment of source code, work items, builds, and tests as first class objects in the system is critical for any life-cycle tool. Team Foundation Server goes one step further and focuses on the integrations between these objects and delivers on the automation possibilities these integrations provide. This delivers improved productivity for both developers and testers, and significantly reduces the communications issues between individuals performing these roles on a software team. Of course a software team would not be complete without the individuals who do the work of defining what the product should do and managing the teams to deliver on time and on budget. 
Team Foundation Server has been designed to improve the collaboration, communication, and management of software development teams. There are several capabilities for both the project leader and the requirements definer (a.k.a., analyst, program manager, or product manager). Central to these roles is the concept of the team project discussed previously. With both the team project and methodology, Team Foundation Server already delivers a set of work item types and a team project Web site populated with project templates and process guidance customized for your team and your organization. But what objects other than work items are interesting to the project manager/analysts? One that comes to mind is a project plan. A close second is specifications or requirements. Many software development efforts start with a high-level vision document and an Excel spreadsheet of key work items, including things like priority, development effort estimate, and a brief description. These could take the form of functional requirements, scenarios, or quality of service requirements. In any case, you can usually find an Excel spreadsheet being used on projects to track these important items, issues, or other things. Team Foundation leverages the power of Excel for its ease of entry and tracking, but integrates the locally stored XLS files more formally into the overall development environment by linking them directly to work items in the Team Foundation Server database. Analysts using Team Foundation Server can start editing in Excel and then "publish" their lists to Team Foundation Server, creating new work items that can be tracked. They can continue to use Excel to update these items by synchronizing their spreadsheets with the existing work items database. Communicating work items and keeping lists up to date suddenly becomes much easier and more practical. It is also possible to create new spreadsheets from data in the work item tracking system. 
Say you need a quick graph for a presentation you are doing in an hour. With Team Foundation Server, you can create a query for all the Priority 1 Bugs and Tasks that are active on Team Projects A and B. From this query, you can select "Open in Microsoft Excel" and then build a graph quickly based on this data. The possibilities are endless, and the support for the Excel scenario has been well designed and thought out. For example, you can take work items offline by exporting them from Team Foundation Server to Excel and making changes while you are on the go. Edit owners, change priorities, and add new items. Then when you connect back up, all this data is synchronized to the work item database. You might be asking yourself, "What if others have made conflicting changes to the same work items while I am off-line?" The answer is that an easy-to-use compare/merge utility will help you resolve any of these conflicts. Figure 4 shows you the project queries in the Team Explorer and the work item results list inside Visual Studio. Note the "Open in Microsoft Excel" button at the top of the results grid. Figure 5 shows the same work items as they appear in Excel. Note the toolbar that supports getting a different set of work items or synchronizing the current work items with the Team Foundation database. Figure 4. Work Items in Visual Studio Figure 5. Work Items in Excel Project managers may be less worried about lists and requirements than they are about schedules. To support the project manager, Team Foundation Server provides a similar integration with Microsoft Project. A project manager can break down a project plan, schedule and assign tasks, and then "publish" these into the work item database, creating new work items of type "task" or whatever work item type makes sense for your organization. 
Just like Excel, there is bidirectional synchronization, which allows project managers to work offline in MS Project and take advantage of MS Project's scheduling and resource leveling services to manage their projects. How nice it would be if developers could just enter updates in Visual Studio, and when they synchronized, they would have that information updated in their project plans. Team Foundation Server makes this possible. What if you wanted to organize tasks by project iterations and also by product structure? With Team Foundation Server, work items can be categorized both by iteration and product structure. In MS project you can quickly change groupings to see both views, allowing you to switch between an overall project view and a view focused on the current iteration. One of the main reasons projects fail is project managers lose track of the real status of their projects. Unintegrated project plans quickly drift out of date and tracking bugs (typically the only real-world data available on which to make decisions) is inadequate. Decisions can become flawed due to this lack of information. Team Foundation Server keeps the project plan alive and makes it easier for team members to help the project manager keep things up to date. Imagine your joy at seeing active bug counts dropping lower and lower. But what if you are not aware that test coverage is also falling while source code changes are continuing to increase? These trends would indicate a problem in testing that would not be caught if you were only focused on bug trends. The problem today is that this kind of information is not readily available, and when it is, it's usually only accessible in separate tools or independent reports. Team Foundation Server provides cross-discipline integrated reporting, which gives you a more realistic view of the health of a software project. The Team Foundation Server warehouse is based on SQL Reporting Services. 
The warehouse collects data on all work items and their state, source code changes and churn, builds and build results, and test cases and test case coverage. Figure 8 shows a test effectiveness report by build. For each build, you can see the results of the automated tests, the amount of code changes that came into that build, and the code coverage of the tests. Watching these trends in addition to bug count can be incredibly helpful in identifying other project problems that need to be addressed. You may have other data sources that you use to evaluate project progress. To support collection and reporting of external data, the Team Foundation Server warehouse is fully extensible, with an adapter architecture that can draw data on a periodic basis from other data sources in your organization. With Team Foundation Server reporting, you now have access to cross-domain reports, giving you much better visibility into the progress of the project. These reports can be run from within Visual Studio or from the team project Web site. Access and availability of these reports is another way Team Foundation improves the communication between you and your team.

Conclusion

Team Foundation Server is a software life-cycle tool designed specifically to improve software development team collaboration and increase productivity. Team Foundation Server delivers version control, work item tracking, project management, and build management capabilities. Above all, it focuses on the linkage between the central objects a software team manages, and on the unique integration and automation possibilities these links provide. More information can be found at.

About the author: director of enterprise change management product strategy for IBM Software Group; author of Software Configuration Management Strategies and Rational ClearCase: A Practical Introduction (The Addison-Wesley Object Technology Series).
http://msdn.microsoft.com/en-us/library/ms364061(VS.80).aspx
CS 33 Intro Computer Systems (Doeppner)
C Primer
Fall 2016

Contents

1 Introduction
  1.1 C vs. Java
2 Functions
  2.1 The main() Function
  2.2 Calling Functions
  2.3 Function Prototypes
  2.4 Other Function Notes
3 Data Types
  3.1 Arrays
    3.1.1 Multi-dimensional Arrays
  3.2 Strings
  3.3 structs
4 Loops and Conditionals
5 #include Statements
6 printf()
  6.1 An Example - Hello CS33
7 Assert
8 Compiling Code with gcc
9 Running Your Code

1 Introduction

This document will cover everything that you need to know about C for the first CS33 lab.

1.1 C vs. Java

Unlike Java, an object-oriented programming language, C is a procedural language. This means that C has no classes or objects. As a result, the organization of your C programs will be different than that of your Java programs. Instead of designing your programs around the different types of objects involved as in Java, you will design your programs around the tasks you need to complete. This document will attempt to highlight the major similarities and differences between C and Java as concepts are covered.

2 Functions

Where Java has methods, C has functions. The purpose of each C function is to perform a task when called. However, while Java methods belong to a particular object or class, C functions do not, standing on their own. Java's static methods are a reasonable, if imperfect, comparison to functions, as static methods do not belong to a particular object.

2.1 The main() Function

Much as every Java program must contain a method with the header

    public static void main(String[] args)

each C program must have a function with the header

    int main(int argc, char **argv)

This function serves as the entry point into your program; it is the first code called upon execution. Unlike its Java counterpart, the C main() function is of type int rather than void. This means that it should return an integer value.
This value is generally used to let the system know if the program ran successfully, and if not, what error occurred. A return value of 0 indicates the program ran successfully. Additionally, C's main() function takes in two arguments instead of one. The notation char ** will be explained below, but you can think of the second argument of the C function as corresponding roughly to the String array parameter of the Java main() method. argv[0] is the name of the command used to execute the program, while argv[1] is the first command-line argument, argv[2] is the second command-line argument, and so on. The int argument in the C function is the length of argv. So if the user runs the program with ./mycprogram test.txt then argc will be 2, argv[0] will be ./mycprogram and argv[1] will be test.txt.

2.2 Calling Functions

Calling a function in C is similar to calling a method in Java. Since there are no classes or objects, only the name of the function and the values of its parameters are needed. To call a function foo() with the parameter 0, you would use

    foo(0);

One major difference between C and Java is that in C, you may call only those functions that have been declared above the function call in your code. This restriction may seem arbitrary, but it makes code compilation easier. Therefore, the following code is invalid:

    void foo() {
        bar();
    }

    void bar() {
        /* ... */
    }

It is possible, however, to get around this restriction with something called function prototypes.

2.3 Function Prototypes

Function prototypes alert the compiler to the functions that you intend to use in a file. A function prototype consists of the function header followed by a semicolon. For example, the following is a function prototype:

    int foo(char bar);

The above line would allow all functions below it in your code to call foo(), even though foo()'s function body has not yet been given. Therefore, the following code fixes the error of the previous section's example.
    void bar();

    void foo() {
        bar();
    }

    void bar() {
        /* ... */
    }

If you use a function prototype, you must include a body for that function later in your code. It is probably simplest to include at the top of your file prototypes for each of the functions that you use so that you need not worry about the order in which your functions are declared.

2.4 Other Function Notes

Function overloading, having two functions with the same name but different parameter types, is not allowed in C, even though it is in Java. Since C has no classes or objects, private, protected, and public are not keywords.

3 Data Types

The primitive data types in C are int, char, short (int), long (int), double and float. Note in particular that there is no boolean type. The int type is used in its place. All operations that normally would work on a boolean, like !, work on ints in C. A function that would return true or false should instead return an integer value, with 0 corresponding to false. It is also worth noting that, unlike Java, C will not automatically initialize the values of variables within functions. In Java, a newly created variable of type int will have an initial value of 0. In C, the initial value of such a variable could be anything. These data types are passed by value in functions. You will learn how to pass them by reference later.

3.1 Arrays

Arrays in C are somewhat different than arrays in Java. This document will cover how to create and use arrays as local variables on the stack. There is another way to create arrays, dynamic memory allocation, which you will learn about later. To create a one-dimensional array of 5 ints called intarr, one would use the following code:

    int intarr[5];

In the C99 standard, you can also declare a variable-length array whose size is determined at runtime.
    int n = 5;
    int intarr[n];

If we want to create an array with specific initial values, we can use the following code:

    int intarr[5] = {1, 2, 3, 4, 5};

Variable-length arrays cannot be initialized in this manner. You can access elements in an array as you would in Java. However, unlike Java, C does not perform bounds checking on array accesses. That is, if intarr is a one-dimensional array of length 5, both intarr[-1] and intarr[5] are valid C expressions, even though they are outside the array. Therefore, be very careful when you access arrays to make sure you are accessing the elements you really mean to access. Reading or writing to indices outside of the array could cause your program to have unexpected behavior. It is also noteworthy that C arrays are not objects and so have no length field. You will need to keep track of the length of an array yourself.

While the elements of a C array can be modified, the array variable itself is immutable. For instance, the following code is invalid:

    int arr[3] = {5, 10, 15};
    int brr[3];
    brr = arr;

However, an array variable can be assigned to a pointer whose pointee type is the array element type. The pointer can then be used in the same manner as the array.

    int arr[3] = {5, 10, 15};
    int *aptr = arr;  // aptr and arr now refer to the same array
    aptr[0] = 20;     // Set the first element of arr to 20

The syntax for passing arrays in C is different than it would be in Java. The following is a valid method header in Java:

    public static int getmax(int[] arr)

In C, the function header would be:

    int getmax(int *arr)

or [1]:

    int getmax(int arr[])

C arrays are passed as pointers, so, as in Java, modifying the contents of an array passed as a parameter will modify the original array. Unlike Java, arrays created inside a function call are deleted when the function returns, so returning a C array created inside a function will not behave as you would like.
To get around this, one approach is to pass in an array as an argument to a function of return type void that sets or modifies the array's values as desired.

(Caution: a parameter of the form type arr[] is semantically identical to a parameter of the form type *arr. This means, for example, that while sizeof() applied to a local array will yield the total size in memory of the array, sizeof() applied to an array parameter will yield the size of a pointer.)

Multi-dimensional Arrays

C also allows for multi-dimensional arrays. Such arrays are declared using a similar syntax to one-dimensional arrays. For example, to declare and initialize a 2x3 array of ints, you would write:

int arr[2][3] = {{0, 1, 2}, {3, 4, 5}};

Note that this syntax does not allow you to create ragged arrays, where different rows have different lengths. Passing multi-dimensional arrays to functions requires a little more care than one-dimensional arrays. In order to correctly access elements of a multi-dimensional array, C must know all dimensions of the array, except the first. For instance, for a function foo() that accepts an mx3 int array, for any m, you would write:

void foo(int[][3])

In C99, you can have variable-length array parameters, but you still must specify all dimensions except the first. (The reason for this has to do with how C handles arrays in memory. It's a bit complicated for this section, but ask a TA if you're curious.) For instance, the following function accepts a three-dimensional int array whose second and third dimensions are specified by parameters n and m:

void foo(int n, int m, int[][n][m])

Syntactically, C treats an n-dimensional array as a one-dimensional array of fixed-size (n-1)-dimensional arrays. The following code declares a 10x12 array of chars and creates a pointer to it:

char arr[10][12];
char (*arr_ptr)[12] = arr;

3.2 Strings

C has no explicit string type. However, a char array can be thought of as a string, which explains why the char ** argument in main() is like a String array.
The null character, \0, is used to mark the end of a string, and such a string is said to be null-terminated. C supports string literals within double quotes, such as "Hello World!", and treats them as a null-terminated char array. C strings are not as flexible as Java Strings, however. For example, you may not combine two strings using the + operator. For the first lab, you will not need to create string variables, just print them. Later in this document is information about how to print strings. Information about how to create and manipulate strings will be presented later in the course.

3.3 structs

A struct is a feature of the C language that stores multiple variables, called fields, at once. One can think of them as Java objects without methods. To define a struct whose type is Foo with two fields, an int x and a char y, you can use the following code above the functions in your .c file:

struct Foo {
    int x;
    char y;
};

You can then create a Foo named bar with the following code:

struct Foo bar;

Note that the type is actually struct Foo. To avoid needing to type struct every time you create a Foo, you can employ the typedef keyword. As an alternative to the above code for defining and creating a Foo, you can use the below code instead.

typedef struct {
    int x;
    char y;
} Foo;

Foo bar;

To access the field x in bar, our new Foo, we can use bar.x. Note that, like primitives, structs are passed by value. We will cover a way to avoid this later in the course.

4 Loops and Conditionals

Loops and conditionals are similar in C and in Java. Like Java, C has for loops, while loops, do-while loops, and if statements. Since C does not have boolean variables, the conditions for these statements are instead ints. In C, any non-zero value signifies true, while zero signifies false. In other words, the following is valid code.
if (a + 2) {
    printf("this prints if a is not -2");
} else {
    printf("this prints only if a is -2");
}

(A note on structs from the previous section: if you pass a struct by value, each of its fields is copied. A function that edits a struct which was passed to it by value does not edit the original struct but rather the copy, which has no effect beyond the scope of the function. The alternative to passing by value is passing by reference using what are called pointers, which we will learn about later.)

5 #include Statements

You will sometimes wish to employ functions from the C standard library. These functions can perform tasks such as printing to the terminal, manipulating strings, and allocating arrays whose size is not known at compile time. Unlike Java, where the java.lang package is automatically imported for you, C does not automatically include any functions. You must include libraries yourself to use these functions. The C equivalent of Java's import statement is #include. Suppose that you want to use a function in stdio.h, which contains functions that handle input and output for your program. To do so, you must ensure that the top of your source file contains the line

#include <stdio.h>

More libraries will be introduced as they are needed.

6 printf()

The function printf() is a library function in stdio.h. It is used to output text, somewhat like Java's System.out.print(). To output a string constant like "Welcome to CS33!" on its own line, you would use the following code:

printf("Welcome to CS33!\n");

Suppose, now, that we wanted to generalize this message for any CS course. We have an integer variable coursenum; if coursenum equals 32, we'd want to print "Welcome to CS32!". In Java, we could use a print statement like

System.out.print("Welcome to CS" + coursenum + "!\n");

However, C strings cannot be appended like that. Instead, printf() allows us a different way to complete this task.
We can use the statement:

printf("Welcome to CS%d!\n", coursenum);

The %d signifies that we want to print the value of a base 10 integer that is given in the function arguments. You may find the following options useful:

%d prints a base 10 integer, as explained above
%f prints a floating point number
%u prints an unsigned base 10 integer
%s prints a null-terminated string
%c prints the character associated with the provided ASCII value

One may include as many of these symbols in the same printf() statement as one likes. Assuming a and b are variables of type int, then the following code will print their values.

printf("The value of a is %d and the value of b is %d.\n", a, b);

Note that, if a is an int, Java permits System.out.print(a); while in C, printf(a); is a syntax error, since a is not a string. (In this respect, it would be more apt to compare printf() to Java's System.out.printf().)

6.1 An Example - Hello CS33

The following simple program prints "Hello World!" to the terminal on its own line, followed by "Welcome to CS33!" on the next line.

#include <stdio.h>

int main(int argc, char **argv) {
    int coursenum;
    coursenum = 33;
    printf("Hello World!\nWelcome to CS%d!\n", coursenum);
    return 0;
}

7 Assert

Assert statements form a very useful tool for testing and debugging C programs. assert() in C is like a function provided by the header file assert.h. assert() takes a single argument, which is a boolean expression. If the expression evaluates to a non-zero value, then assert does nothing; if the expression evaluates to 0, then assert terminates execution of your program, and prints the statement that failed and the line number at which the code failed to stderr. In short, it asserts that the state of your program exhibits a particular characteristic. Assert statements are so useful since, unlike other code errors or ways to debug code, they can inform you exactly where and when your code started to function incorrectly.
If you have assertions in your program and wish to turn them off, you can insert #define NDEBUG in your program above #include <assert.h> or compile with the option -DNDEBUG. (Not precisely a function, in fact: assert() is really a macro. This is how it can provide line number and code information.)

8 Compiling Code with gcc

We will use gcc as our C compiler. This will take your .c source files and find errors in them. If there are no errors, it will compile them into an executable binary file. First, you should go to the folder with your source code. If your project consists of the single file hello.c and you wish your executable file to be called hello, you would run the command

gcc hello.c -o hello

You can compile multiple files in a similar way. If in addition to hello.c you have world.c, you would run the command

gcc hello.c world.c -o hello

Adding the -Wall flag will show warnings, if any.

9 Running Your Code

Once you have created an executable binary, you may run it from the command line. Continuing our example from above, once we've created the binary hello, we can run the program by going to the folder with the binary and running the command

./hello

This will execute the binary. Note that if you change your code, you will need to recompile it to update the binary.
the power supply is off. EEPROM memory comes in handy when we need to store calibration values, remember program state before powering off (or power failure), or store constants in EEPROM memory when you are short of program memory space, especially when using smaller AVRs. Think of a simple security system – EEPROM is the ideal place to store lock combinations, code sequences, and passwords. AVR datasheets claim that EEPROM can withstand at least 100000 write/erase cycles.

Accessing AVR EEPROM memory

Standard C functions don't understand how one or another memory is accessed, so reading and writing EEPROM has to be done by following a special logical system. Simple EEPROM byte read and write operations have to be done via special registers. The ATmega328 has dedicated EEPROM address (EEAR), data (EEDR), and control (EECR) registers. The EEPROM writing process is controlled via the EECR register. To avoid failures, a specific sequence has to be followed; there are nice read and write examples in the datasheet, so there is no need to repeat them here.

Let's do a more interesting thing – use an interrupt to write to EEPROM. As you may know, the ATmega328 has one dedicated interrupt, EE_READY_vect, that may be set to occur whenever EEPROM becomes ready for writing or reading. Usually, you would have to poll for the EEPE bit to become zero in a loop – this requires active processing power. Interrupt-driven EEPROM writing may be more efficient, especially when memory blocks have to be accessed. Let's write a simple interrupt-driven AVR EEPROM writing routine that takes bytes from a message buffer and writes them to EEPROM memory until the buffer is empty. The same interrupt source can be used for reading EEPROM.

Using the EEPROM library from AVRLibC

We discussed how you could include the EEPROM library into our source code:

#include <avr/eeprom.h>

This opens access to several useful routines that allow us to read/write/update bytes, words, double words, floats, and even memory blocks. Let's write the byte value 100 to EEPROM address location 0x3FF and read it back to some variable.
Same expressions can be used if you need to write a word, double word, or float. Just use one of the proper functions, like eeprom_read_word().

Working with AVR EEPROM memory blocks

We have discussed one way of dealing with EEPROM memory blocks – using interrupt-driven writing from the buffer. But if you don't want to write your own routine, you can use one of the library's standard functions, such as eeprom_read_block() and eeprom_write_block(), which operate on blocks of usual data types like uint8_t. The good thing is that these functions are universal, allowing them to deal with any data type. Note that the buffer pointers passed to them need to be cast to void pointers, like (void *)readblock, so they meet the function requirements – this cast doesn't affect the pointer value itself.

So far our real concern has been reading and writing to hard-coded EEPROM addresses. This is OK in some cases, but if we need to store lots of EEPROM data, it becomes impractical. Why not define an eeprom variable whose address would be allocated automatically by the compiler? This is where the EEMEM attribute comes in. Again, there is no magic in it. It simply means that variables with this attribute will be allocated in the .eeprom memory section. Let's allocate one byte in EEPROM memory with an initial zero value. The compiler recognizes these values and creates a separate .eep file along with the .hex. When the chip is being flashed, the .eep file also has to be uploaded to set the initial EEPROM values. Otherwise, your program will not find the initial values it expects.
Java may be the first programming language that springs to mind when you think about Android, but you don't have to use Java for Android development. You can write Android apps in a number of different programming languages, including C#, Lua, C/C++, JavaScript, Scala, and Clojure, but there's one alternative programming language in particular that's been getting a ton of attention since this year's Google I/O. (To learn more about Java, get a free introductory Java course at Make Android Apps.) As of the 3.0 Preview, Android Studio ships with Kotlin support built-in, so creating an Android project that understands Kotlin code is now as easy as selecting a checkbox in Android Studio's project creation wizard. This news has generated plenty of buzz, and has also sparked a bit of a Kotlin vs Java debate. Chances are you've been reading lots of positive things about Kotlin recently, but if you do make the switch from Java to Kotlin, then what exactly are you gaining? What features does Kotlin have that Java doesn't, and vice versa? In this article, we're going to be looking at all the major differences between Kotlin and Java, including a few features that you'll be sacrificing if you do make the move to Kotlin.

Kotlin offers more succinct code – with no findViewByIds

If you compare a Kotlin class and a Java class that are performing the same work, then the Kotlin class will generally be much more concise, but there's one area in particular where Kotlin can seriously reduce the amount of boilerplate code you need to write: findViewByIds. Kotlin Android Extensions allow you to import a reference to a View into your Activity file, at which point you'll be able to work with that View as though it was part of the Activity. The result? You'll never have to write another findViewById method again!
Before you can use these extensions, you'll need to add an extra plugin to your module-level build.gradle file (apply plugin: 'kotlin-android-extensions'), but after that you're ready to start importing Views. For example, if your activity_main.xml file contained a TextView with the ID textView, then you'd add the following to your Activity:

import kotlinx.android.synthetic.main.activity_main.textView

You can then access this TextView using just its ID:

textView.setText("Hello World")

This is much more succinct than the Java equivalent:

TextView text = (TextView) findViewById(R.id.textView);
text.setText("Hello World");

Kotlin is null safe by default

NullPointerExceptions are a huge source of frustration for Java developers. Java allows you to assign null to any variable, but if you try to use an object reference that has a null value, then brace yourself to encounter a NullPointerException! In Kotlin, all types are non-nullable (unable to hold a null value) by default. If you try to assign or return null in your Kotlin code, then it'll fail at compile-time, so neither of the following lines will compile:

val name: String = null
fun getName() : String = null

If you really want to assign a null value to a variable in Kotlin, then you'll need to explicitly mark that variable as nullable, by adding a question mark after the type:

val number: Int? = null

This makes it almost impossible to encounter NullPointerExceptions in Kotlin – in fact, if you do encounter this exception, then chances are it's because you explicitly asked Kotlin to throw one, or the NullPointerException is originating from external Java code.

Extension functions

Kotlin gives you the ability to extend a class with new functionality, without having to modify the original class.
You create an extension function by prefixing the name of the class you want to extend (such as 'String') to the name of the function you're creating ('styleString'), for example:

fun String.styleString(): String {
    // Style the string and then return it
}

You can then call this function on instances of the extended class, via the . notation, as if it were part of that class:

myString.styleString()

Coroutines are first-class citizens

Whenever you initiate a long-running operation, such as network I/O or CPU-intensive work, the calling thread is blocked until the operation completes. Since Android is single-threaded by default, as soon as you block the main thread your app's UI is going to freeze, and it'll remain unresponsive until the operation completes. In Java, the solution has traditionally been to create a background thread where you can perform this intensive or long-running work, but managing multiple threads can lead to complex, error-prone code, and creating a new thread is an expensive operation. While you can create additional threads in Kotlin, you can also use coroutines. Coroutines perform long-running and intensive tasks by suspending execution at a certain point without blocking the thread, and then resuming this function at a later point, possibly on another thread. This allows you to create non-blocking asynchronous code that looks synchronous, and is therefore more clear, concise and human-readable. Coroutines are also stackless, so they have a lower memory usage compared to threads, and they open the door to additional styles of asynchronous non-blocking programming, such as async/await.

There are no checked exceptions

Kotlin does not have checked exceptions, so you don't need to catch or declare any exceptions.
Whether this is something that draws you to Kotlin, or makes you want to stick with Java, will depend on your opinion of checked exceptions, as this is a feature that divides the developer community. If you're sick of try/catch blocks cluttering up your Java code, then you're going to be happy with this omission; however, if you find that checked exceptions encourage you to think about error recovery and ultimately push you towards creating more robust code, then you're more likely to see this as an area where Java has the edge over Kotlin.

Native support for delegation

Kotlin, unlike Java, supports the 'composition over inheritance' design pattern, via first-class delegation (sometimes known as implicit delegation). Delegation is where a receiving object delegates operations to a second delegate object, which is a helper object with the original context. Kotlin's class delegation is an alternative to inheritance that makes it possible to use multiple inheritance. Meanwhile, Kotlin's delegated properties help prevent the duplication of code, for example if you need to reuse the same code for multiple properties' getters and setters, then you can extract this code into a delegated property. The property delegate needs to define the getValue operator function and, optionally, the setValue operator:

class Delegate {
    operator fun getValue(...) {
        ...
    }

    operator fun setValue(...) {
        ...
    }
}

Then, when you're creating a property, you can declare that the getter and setter functions for this particular property are handled by another class:

class MyClass {
    var property: String by Delegate()
}

Data classes

It's not unusual for a project to have multiple classes that do nothing but hold data. In Java, you'll find yourself writing lots of boilerplate code for these classes, even though the classes themselves have very little functionality.
Typically, you’ll need to define a constructor, fields to store the data, getter and setter functions for each field, plus hashCode(), equals() and toString() functions. In Kotlin, if you include the ‘data’ keyword in your class definition, then the compiler will perform all of this work for you, including generating all the necessary getters and setters:

data class Date(var month: String, var day: Int)

Smart casts

In Java, you often have to check type and then cast an object in situations where it’s already clear that the object can be cast. Kotlin’s smart casts can handle these redundant casts for you, so you don’t need to cast inside a statement if you’ve already checked it with Kotlin’s ‘is’ operator. For example, the compiler knows that the following cast is safe:

if (hello is String) {
    printString(hello)
}

Support for constructors

Unlike Java, a Kotlin class can have a primary constructor and one or more secondary constructors, which you create by including them in your class declaration:

class MainActivity constructor(firstName: String) {
}

No support for implicit widening conversions

Kotlin doesn’t support implicit widening conversions for numbers, so smaller types aren’t implicitly converted to bigger types. In Kotlin, if you want to assign a value of type Byte to an Int variable, then you’ll need to perform an explicit conversion, whereas Java has support for implicit conversions.

Annotation processing libraries with Kotlin

Kotlin supports all existing Java frameworks and libraries, including advanced frameworks that rely on annotation processing, although some Java libraries are already providing Kotlin extensions, such as RxKotlin. If you do want to use a Java library that relies on annotation processing, then adding it to your Kotlin project is slightly different, as you’ll need to specify the dependency using the kotlin-kapt plugin, and then use the Kotlin Annotation processing tool (kapt) instead of annotationProcessor.
For example:

// Apply the plugin
apply plugin: 'kotlin-kapt'

// Add the respective dependencies using the kapt configuration
dependencies {
    kapt "com.google.dagger:dagger-compiler:$dagger-version"
    ...
}

Interchangeability with Java

When debating whether to use Kotlin or Java for Android development, you should be aware that there’s a third option: use both. Despite all the differences between the two languages, Java and Kotlin are 100% interoperable. You can call Kotlin code from Java, and you can call Java code from Kotlin, so it’s possible to have Kotlin and Java classes side-by-side within the same project, and everything will still compile.

This flexibility to move between the two languages is useful when you’re getting started with Kotlin, as it allows you to introduce Kotlin into an existing project incrementally, but you may also prefer to use both languages on a permanent basis. For example, there may be certain features that you prefer to write in Kotlin, and certain features that you find easier to write in Java.
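To make that interoperability concrete, here’s a minimal sketch (the file and function names are my own, not from the article): a top-level Kotlin function compiles to a static method on a generated class named after the file, which plain Java code can call directly.

```kotlin
// Greeting.kt - a top-level Kotlin function
fun greet(name: String): String = "Hello, $name"

// From Java, the same function is visible as a static method on the
// generated class "GreetingKt":
//
//   String message = GreetingKt.greet("world");
```

If you want a friendlier Java-facing name than the default `GreetingKt`, Kotlin lets you override it with the `@file:JvmName` annotation.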
You can run any Java file through a Kotlin converter by Control-clicking the file and selecting ‘Code > Convert Java File to Kotlin File.’

Wrapping up

As you can see, there are lots of good reasons to prefer Kotlin to Java; however, there are a couple of areas where Java has the upper hand. The Kotlin vs. Java debate likely won’t be settled anytime soon, with both languages having their own merits. So, are you going to be making the switch to Kotlin, or do you feel that Java is still the best option for Android development? Let us know in the comments!
The web has traveled a long way to support full-duplex (or two-way) communication between a client and server. This is the prime intention of the WebSocket protocol: to provide persistent real-time communication between the client and the server over a single TCP socket connection.

The WebSocket protocol has only two agendas: 1) to open up a handshake, and 2) to help the data transfer. Once the server and client both have their handshakes in, they can send data to each other with less overhead at will.

WebSocket communication takes place over a single TCP socket using either WS (port 80) or WSS (port 443) protocol. Almost every browser except Opera Mini provides admirable support for WebSockets at the time of writing, as per Can I Use.

The story so far

Historically, creating web apps that needed real-time data (like gaming or chat apps) required an abuse of HTTP protocol to establish bidirectional data transfer. There were multiple methods used to achieve real-time capabilities, but none of them were as efficient as WebSockets. HTTP polling, HTTP streaming, Comet, SSE — they all had their own drawbacks.

HTTP polling

The very first attempt to solve the problem was by polling the server at regular intervals. The HTTP long polling lifecycle is as follows:

1. The client sends out a request and keeps waiting for a response.
2. The server defers its response until there’s a change, update, or timeout. The request stayed “hanging” until the server had something to return to the client.
3. When there’s some change or update on the server end, it sends a response back to the client.
4. The client sends a new long poll request to listen to the next set of changes.

There were a lot of loopholes in long polling — header overhead, latency, timeouts, caching, and so on.

HTTP streaming

This mechanism saved the pain of network latency because the initial request is kept open indefinitely. The request is never terminated, even after the server pushes the data.
The first three lifecycle methods of HTTP streaming are the same as in HTTP polling. When the response is sent back to the client, however, the request is never terminated; the server keeps the connection open and sends new updates whenever there’s a change.

Server-sent events (SSE)

With SSE, the server pushes data to the client. A chat or gaming application cannot completely rely on SSE. The perfect use case for SSE would be, e.g., the Facebook News Feed: whenever new posts come in, the server pushes them to the timeline. SSE is sent over traditional HTTP and has restrictions on the number of open connections.

These methods were not just inefficient, the code that went into them also made developers tired.

Why WebSocket is the prince that was promised

WebSockets are designed to supersede the existing bidirectional communication technologies. The existing methods described above are neither reliable nor efficient when it comes to full-duplex real-time communications. WebSockets are similar to SSE but also triumph in taking messages back from the client to the server. Connection restrictions are no longer an issue since data is served over a single TCP socket connection.

Practical tutorial

As mentioned in the introduction, the WebSocket protocol has only two agendas. Let’s see how WebSockets fulfills those agendas. To do that, I’m going to spin off a Node.js server and connect it to a client built with React.js.

Agenda 1: WebSocket establishes a handshake between server and client

Creating a handshake at the server level

We can make use of a single port to spin off the HTTP server and the WebSocket server. Once the HTTP server is created, we tie the WebSocket server to the HTTP port. Once the WebSocket server is created, we need to accept the handshake on receiving the request from the client. I maintain all the connected clients as an object in my code with a unique user-id on receiving their request from the browser.
So, what happens when the connection is accepted? While sending the regular HTTP request to establish a connection, in the request headers, the client sends *Sec-WebSocket-Key*. The server encodes and hashes this value and adds a predefined GUID. It echoes the generated value in the *Sec-WebSocket-Accept* in the server-sent handshake.

Once the request is accepted in the server (after necessary validations in production), the handshake is fulfilled with status code 101. If you see anything other than status code 101 in the browser, the WebSocket upgrade has failed, and the normal HTTP semantics will be followed. The *Sec-WebSocket-Accept* header field indicates whether the server is willing to accept the connection or not. Also, if the response lacks an *Upgrade* header field, or the *Upgrade* does not equal websocket, it means the WebSocket connection has failed.

The successful server handshake looks like this:

HTTP GET ws://127.0.0.1:8000/ 101 Switching Protocols
Connection: Upgrade
Sec-WebSocket-Accept: Nn/XHq0wK1oO5RTtriEWwR4F7Zw=
Upgrade: websocket

Creating a handshake at the client level

At the client level, I’m using the same WebSocket package we are using in the server to establish the connection with the server (the WebSocket API in Web IDL is being standardized by the W3C). As soon as the request is accepted by the server, we will see WebSocket Client Connected on the browser console. Here’s the initial scaffold to create the connection to the server:

import React, { Component } from 'react';
import { w3cwebsocket as W3CWebSocket } from "websocket";

const client = new W3CWebSocket('ws://127.0.0.1:8000');

class App extends Component {
  componentWillMount() {
    client.onopen = () => {
      console.log('WebSocket Client Connected');
    };
    client.onmessage = (message) => {
      console.log(message);
    };
  }

  render() {
    return (
      <div>
        Practical Intro To WebSockets.
      </div>
    );
  }
}

export default App;

The following headers are sent by the client to establish the handshake:

HTTP GET ws://127.0.0.1:8000/ 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: vISxbQhM64Vzcr/CD7WHnw==
Origin:
Sec-WebSocket-Version: 13

Now that the client and server are connected with mutual handshakes, the WebSocket connection can transmit messages as it receives them, thereby fulfilling the second agenda of the WebSocket protocol.

Agenda 2: Real-time message transmission

I’m going to code a basic real-time document editor where users can join together and edit a document. I’m tracking two events:

- User activities: Every time a user joins or leaves, I broadcast the message to all the other connected clients.
- Content changes: Every time content in the editor is changed, it is broadcast to all the other connected clients.

The protocol allows us to send and receive messages as binary data or UTF-8 (N.B., transmitting and converting UTF-8 has less overhead).

Understanding and implementing WebSockets is very easy as long as we have a good understanding of the socket events: onopen, onclose, and onmessage. The terminologies are the same on both the client and the server side.

Sending and listening to messages on the client side

From the client, when a new user joins in or when content changes, we trigger a message to the server using client.send to take the new information to the server.
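The call itself is just client.send with a JSON payload; here is a sketch (the handler and field names are my assumptions, but the message type string matches the 'contentchange' type the client listener checks for):

```javascript
// Sketch: tell the server the editor content changed.
// Handler and field names are assumptions; the message "type" matches
// the types handled elsewhere in the article ('userevent'/'contentchange').
function sendContentChange(client, username, editorContent) {
  client.send(JSON.stringify({
    type: 'contentchange',
    username: username,
    content: editorContent,
  }));
}
```

Because everything goes over one socket, tagging each message with a `type` field is how the receiving side knows which kind of event it is handling.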
And listening to messages from the server is pretty simple:

componentWillMount() {
  client.onopen = () => {
    console.log('WebSocket Client Connected');
  };
  client.onmessage = (message) => {
    const dataFromServer = JSON.parse(message.data);
    const stateToChange = {};
    if (dataFromServer.type === "userevent") {
      stateToChange.currentUsers = Object.values(dataFromServer.data.users);
    } else if (dataFromServer.type === "contentchange") {
      stateToChange.text = dataFromServer.data.editorContent || contentDefaultMessage;
    }
    stateToChange.userActivity = dataFromServer.data.userActivity;
    this.setState({
      ...stateToChange
    });
  };
}

Sending and listening to messages on the server side

On the server, we simply have to catch the incoming message and broadcast it to all the clients connected to the WebSocket. And this is one of the differences between the infamous Socket.IO and WebSocket: with WebSockets we need to manually send the message to all clients. Socket.IO is a full-fledged library, so it handles that on its own.

What happens when the browser is closed? In that case, the WebSocket invokes the close event, which allows us to write the logic to terminate the current user’s connection. In my code, I broadcast a message to the remaining users when a user leaves the document:

connection.on('close', function(connection) {
  console.log((new Date()) + " Peer " + userID + " disconnected.");
  const json = { type: typesDef.USER_EVENT };
  userActivity.push(`${users[userID].username} left the document`);
  json.data = { users, userActivity };
  delete clients[userID];
  delete users[userID];
  sendMessage(JSON.stringify(json));
});

The source code for this application is in my repo on GitHub.

Conclusion

WebSockets are one of the most interesting and convenient ways to achieve real-time capabilities in an application. They give us a lot of flexibility to leverage full-duplex communications.
I’d strongly suggest working with WebSockets before trying out Socket.IO and other available libraries. Happy coding! 🙂
Article I describes building a simple search engine that crawls the file system from a specified folder, indexing all HTML (or other types of) documents. A basic design and object model was developed, as well as a query/results page which you can see here.

This second article in the series discusses replacing the 'file system crawler' with a 'web spider' to search and catalog a website by following the links in the HTML. The challenges involved include downloading pages over HTTP, parsing the HTML for links, and handling redirects and character encodings - all discussed below.

The design from Article I remains unchanged... A Catalog contains a collection of Words, and each Word contains a reference to every File that it appears in. ... the object model is the same too... What has changed is the way the Catalog is populated. Instead of looping through folders in the file system [SearcharooCrawler.aspx - obsolete], the spider follows the links it finds in each page. We've also changed the name of the search page to SearcharooToo.aspx so you can use it side-by-side with the old one. Spidering involves requesting each page over HTTP, recognizing the type of document that has been returned, determining what character set/encoding was used (for Text and HTML documents), etc. - basically a mini-browser!

We'll start small, and attempt to build a passable spider using C#... To get something working quickly, let's just try to download the 'start page' - say, the root page of the local machine (Step 2 - downloading pages). Here is the simplest possible code to get the contents of an HTML page from a website (localhost in this case):

using System.Net;
/*...*/
string url = ""; // just for testing
WebClient browser = new WebClient();
UTF8Encoding enc = new UTF8Encoding();
string fileContents = enc.GetString(browser.DownloadData(url));

Listing 1 - Simplest way to download an HTML document

There are problems with this code - for a start, WebClient's interface has no simple way for the code to query what the page's final URL or character encoding actually was. Finding the links in the downloaded page is the next step:

// Create ArrayLists to hold the links we find...
ArrayList linkLocal = new ArrayList();
ArrayList linkExternal = new ArrayList();

// Dodgy Regex will find *some* links
foreach (Match match in Regex.Matches(htmlData
    , @"(?<=<(a|area)\s+href="").*?(?=""\s*/?>)"
    , RegexOptions.IgnoreCase|RegexOptions.ExplicitCapture))
{
    // Regex matches from opening "quote
    link = match.Value;
    // find first space (ie no spaces in Url)
    int spacePos = link.IndexOf(' ');
    // or first closing quote (NO single quotes)
    int quotePos = link.IndexOf('"');
    int chopPos = (quotePos<spacePos?quotePos:spacePos);
    if (chopPos > 0) {
        // chop the link at whichever comes first: quote or space
        link = link.Substring(0,chopPos);
    }
    if ( (link.Length > 8) && (link.Substring(0, 7).ToLower() == "http://") ) {
        // Assumes all links beginning with http:// are _external_
        linkExternal.Add(link) ;
    } else {
        // otherwise they're "relative"/internal links
        // so we concatenate the base URL
        link = startingUrl + link;
        linkLocal.Add(link);
    }
} // end looping through Matches of the 'link' pattern in the HTML data

Listing 2 - Simplest way to find links in a page

As with the first cut of page-downloading, there are a number of problems with this code. Firstly, the Regular Expression used to find the links is *very* restrictive (only double-quoted href attributes, with href immediately following the tag name, will match); secondly, the 'external link' test just looks at the start of the URL rather than checking the server name against the target server. Despite the bugs, testing against tailored HTML pages, this code will successfully parse the links into the linkLocal ArrayList, ready for processing -- coupling that list of URLs with the code to download URLs, we can effectively 'spider' a website! The basic code is shown below - comments show where additional code is required, either from the listings above or in Article I.
protected void Page_Load (object sender, System.EventArgs e) {
    /* The initial function call */
    startingPageUrl = ""; // Get from web.config
    parseUrl (startingPageUrl, new UTF8Encoding(), new WebClient() );
}

/* This is called recursively for EVERY link we find */
public void parseUrl (string url, UTF8Encoding enc, WebClient browser) {
    if (visited.Contains(url)) {
        // Url already spidered, skip and go to next link
        Response.Write ("<br><font size=-2> "+ url +" already spidered</font>");
    } else {
        // Add this URL to the 'visited' list, so we'll
        // skip it if we come across it again
        visited.Add(url);
        string fileContents = enc.GetString (browser.DownloadData(url)); // from Listing 1

        // ### Pseudo-code ###
        // 1. Find links in the downloaded page
        //    (add to linkLocal ArrayList - code in Listing 2)
        // 2. Extract <TITLE> and <META> Description,
        //    Keywords (as Version 1 Listing 4)
        // 3. Remove all HTML and whitespace (as Version 1)
        // 4. Convert words to string array,
        //    and add to catalog (as Version 1 Listing 7)
        // 5. If any links were found, recursively call this page
        if (null != pmd.LocalLinks)
        foreach (object link in pmd.LocalLinks) {
            parseUrl (Convert.ToString(link), enc, browser);
        }
    }
}

Listing 3 - Combining the link parsing and page downloading code

Review the three fundamental tasks for a search spider, and you can see we've developed enough code to build it: the WebClient download and link-finding Regex in Listings 1 and 2, plus the recursion in Listing 3. Although the example above is picky about what links it will find, it will work to 'spider' and then search a website! FYI, the 'alpha version' of the code is available in the ZIP file along with the completed code for this article. The remainder of this article discusses the changes required to fix all the "problems" in the alpha version.

The alpha code fails to follow relative links (e.g., "../") and absolute references (e.g., starting with "/"). The code would need to do something like this:
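One way to handle relative and absolute references (not necessarily the article's exact code) is to lean on System.Uri, which already implements relative-reference resolution; the page URL and href values below are hypothetical:

```csharp
using System;

class LinkResolver
{
    // Sketch: resolve relative/absolute hrefs against the page they came
    // from; System.Uri understands "../", "/rooted" and full URLs.
    static void Main()
    {
        Uri baseUri = new Uri("http://localhost/folder/page.html"); // hypothetical page
        string[] hrefs = { "News.htm", "../up.htm", "/rooted.htm", "http://other.com/x.htm" };

        foreach (string href in hrefs)
        {
            Uri resolved = new Uri(baseUri, href);
            // compare host names, rather than just "starts with http://"
            bool isExternal = resolved.Host != baseUri.Host;
            Console.WriteLine(resolved.AbsoluteUri + (isExternal ? "  [external]" : ""));
        }
    }
}
```

Comparing `Host` values also fixes the alpha version's naive "starts with http://" test for external links.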
Following relative links is made even more difficult because the WebClient class, while it enabled us to quickly get the spider up-and-running, is pretty dumb. It does not expose all the properties and methods required to properly emulate a web browser's behavior... It is capable of following redirects issued by a server, but it has no simple interface to communicate to the calling code exactly what URL it ended up requesting.

The HttpWebRequest and HttpWebResponse classes provide a much more powerful interface for HTTP communication. HttpWebRequest has a number of useful properties, including:

- AllowAutoRedirect - configurable!
- MaximumAutomaticRedirections - redirection can be limited to prevent 'infinite loops' in naughty pages.
- UserAgent - set to "Mozilla/6.0 (MSIE 6.0; Windows NT 5.1; Searcharoo.NET Robot)" (see Problem 5 below).
- KeepAlive - efficient use of connections.
- Timeout - configurable based on the expected performance of the target website.

Assuming all web pages use ASCII will result in many pages being 'indexed' as garbage, because the bytes will be converted into 'random'-looking characters rather than the text they actually represent. The HttpWebResponse has another advantage over WebClient: it's easier to access HTTP server headers such as the ContentType and ContentEncoding. This enables the following code to be written:

if (webresponse.ContentEncoding != String.Empty) {
    // Use the HttpHeader Content-Type
    // in preference to the one set in META
    htmldoc.Encoding = webresponse.ContentEncoding;
} else if (htmldoc.Encoding == String.Empty) {
    // TODO: if still no encoding determined,
    // try to readline the stream until we
    // find either * META Content-Type
    // or * </head> (ie. stop looking for META)
    htmldoc.Encoding = "utf-8"; // default
}
//
System.IO.StreamReader stream = new System.IO.StreamReader
    (webresponse.GetResponseStream(), Encoding.GetEncoding(htmldoc.Encoding) );
// we *may* have been redirected...
// and we want the *final* URL
htmldoc.Uri = webresponse.ResponseUri;
htmldoc.Length = webresponse.ContentLength;
htmldoc.All = stream.ReadToEnd ();
stream.Close();

Listing 4 - Check the HTTP Content Encoding and use the correct Encoding class to process the Byte[] Array returned from the server

Elsewhere in the code, we use the ContentType to parse out the MIME-type of the data, so that we can ignore images and stylesheets (and, for this version, Word, PDF, ZIP and other file types).

When building the alpha code, I implemented the simplest Regular Expression I could find to locate links in a string - (?<=<(a|area)\s+href=").*?(?="\s*/?>). The problem is that it is far too dumb to find the majority of links. Regular Expressions can be very powerful, and clearly a more complex expression was required. Not being an expert in this area, I turned to Google and eventually Matt Bourne, who posted a couple of very useful Regex patterns, which resulted in the following code:

// Original Regex, just found <a href=""> links; and was "broken"
// by spaces, out-of-order attributes, etc
// @"(?<=<(a|area)\s+href="").*?(?=""\s*/?>)"
foreach (Match match in Regex.Matches(htmlData,
        @"<(a|area)\s*(?:(?:\b\w+\b\s*(?:=\s*(?:""[^""]*""|'[^']*'" +
        @"|[^""'<> ]+)\s*)?)*)/?\s*>",
        RegexOptions.IgnoreCase|RegexOptions.ExplicitCapture)) {
    link = String.Empty;
    // Loop through the attribute/value pairs inside the tag
    foreach (Match submatch in Regex.Matches(match.Value.ToString(),
            @"(?<name>\b\w+\b)\s*=\s*(""(?<value>[^""]*)""" +
            @"|'(?<value>[^']*)'|(?<value>" +
            @"[^""'<> \s]+)\s*)+",
            RegexOptions.IgnoreCase|RegexOptions.ExplicitCapture)) {
        // we're only interested in the href attribute
        // (although in future maybe index the 'alt'/'title'?)
        if ("href" == submatch.Groups[1].ToString().ToLower() ) {
            link = submatch.Groups[2].ToString();
            break;
        }
    }
    /* check for internal/external link and supported scheme, then add to ArrayList */
} // foreach

Listing 5 - The more powerful Regex matches the entire tag (from < to >), including the tag name and all attributes. The Match.Value for each match could be any of the link samples shown earlier.
<a href='News.htm'>
<a href=News.htm>
<a class="cssLink" href="News.htm">
<area shape="rect" coords="0,0,110,20" href="News.htm">
<area href='News.htm' shape="rect" coords="0,0,110,20">

The second Regex then matches each attribute/value pair in turn:

href='News.htm'
href=News.htm
class="cssLink" href="News.htm"
shape="rect" coords="0,0,110,20" href="News.htm"
href='News.htm' shape="rect" coords="0,0,110,20"

...from which we extract the href attribute, which becomes a link for us to process. The combination of these two Regular Expressions makes the link parsing a lot more robust.

The alpha has very rudimentary META tag handling - so primitive that it accidentally assumed <META NAME="" CONTENT=""> instead of the correct <META HTTP-EQUIV="" CONTENT=""> format. We process the META tags to extract the Description and Keywords for this document, and to obey the ROBOTS tag so that our spider behaves nicely when presented with content that should not be indexed.

Using a variation of the Regular Expressions from Problem 4, the code parses out the META tags as required, adds Keywords and Description to the indexed content, and stores the Description for display on the Search Results page.
string metaKey = String.Empty, metaValue = String.Empty;
foreach (Match metamatch in Regex.Matches (htmlData,
        @"<meta\s*(?:(?:\b(\w|-)+\b\s*(?:=\s*(?:""[^""]*""|'" +
        @"[^']*'|[^""'<> ]+)\s*)?)*)/?\s*>",
        RegexOptions.IgnoreCase|RegexOptions.ExplicitCapture)) {
    metaKey = String.Empty;
    metaValue = String.Empty;
    // Loop through the attribute/value pairs inside the tag
    foreach (Match submetamatch in Regex.Matches(metamatch.Value.ToString(),
            @"(?<name>\b(\w|-)+\b)\s*=\s*(""(?<value>" +
            @"[^""]*)""|'(?<value>[^']*)'|(?<value>[^""'<> ]+)\s*)+",
            RegexOptions.IgnoreCase|RegexOptions.ExplicitCapture)) {
        if ("http-equiv" == submetamatch.Groups[1].ToString().ToLower() ) {
            metaKey = submetamatch.Groups[2].ToString();
        }
        if ( ("name" == submetamatch.Groups[1].ToString().ToLower() )
            && (metaKey == String.Empty) ) {
            // if already set, HTTP-EQUIV overrides NAME
            metaKey = submetamatch.Groups[2].ToString();
        }
        if ("content" == submetamatch.Groups[1].ToString().ToLower() ) {
            metaValue = submetamatch.Groups[2].ToString();
        }
    }
    switch (metaKey.ToLower()) {
        case "description":
            htmldoc.Description = metaValue;
            break;
        case "keywords":
        case "keyword":
            htmldoc.Keywords = metaValue;
            break;
        case "robots":
        case "robot":
            htmldoc.SetRobotDirective (metaValue);
            break;
    }
}

Listing 6 - Parsing META tags is a two-step process, because we have to check the 'name'/'http-equiv' so that we know what the content relates to!

It also obeys the ROBOTS NOINDEX and NOFOLLOW directives if they appear in the META tags (you can read more about the Robot Exclusion Protocol as it relates to META tags; note that we have not implemented support for the robots.txt file which sits in the root of a website).

Screenshot 1 - The title of each page is displayed as it is spidered. We're using the CIA World FactBook as test data.

Once the catalog is built, you are ready to search. All the hard work was done in Article 1 - this code is repeated for your information...
Article 1 Listing 8 - the Search method of the Catalog object

We have not modified any of the Search objects in the diagram at the start of this article, in an effort to show how data encapsulation allows you to change both the way you collect data (i.e., from file system crawling to website spidering) and the way you present data.

The first step in supporting multiple search terms is to 'parse' the query typed by the user. This means trimming whitespace from around the query and compressing whitespace between the query terms; we then Split the query into an Array[] of words and Trim any punctuation from around each term.

searchterm = Request.QueryString["searchfor"].ToString().Trim(' ');
Regex r = new Regex(@"\s+");             // remove all whitespace
searchterm = r.Replace(searchterm, " "); // to a single space
searchTermA = searchterm.Split(' ');     // then split
for (int i = 0; i < searchTermA.Length; i++) {
    // array of search terms
    searchTermA[i] = searchTermA[i].Trim
        (' ', '?','\"', ',', '\'', ';', ':', '.', '(', ')').ToLower();
}

Listing 7 - Parsing the query typed by the user into an array of search terms

// Array of arrays of results that match ONE of the search criteria
Hashtable[] searchResultsArrayArray = new Hashtable[searchTermA.Length];
// finalResultsArray is populated with pages
// that *match* ALL the search criteria
HybridDictionary finalResultsArray = new HybridDictionary();
// Html output string
string matches = "";
bool botherToFindMatches = true;
int indexOfShortestResultSet = -1, lengthOfShortestResultSet = -1;
for (int i = 0; i < searchTermA.Length; i++) {
    // ask the Catalog for the files containing this term
    searchResultsArrayArray[i] = m_catalog.Search (searchTermA[i].ToString());
    if (null == searchResultsArrayArray[i]) {
        matches += searchTermA[i] + " <font color=gray>(not found)</font> ";
        // if *any one* of the terms isn't found,
        // there won't be a 'set' of matches
        botherToFindMatches = false;
    } else {
        int resultsInThisSet = searchResultsArrayArray[i].Count;
        matches += "<a href=\"?searchfor=" + searchTermA[i] + "\">"
            + searchTermA[i] + "</a> <font color=gray>("
            + resultsInThisSet + ")</font> ";
        if ( (lengthOfShortestResultSet == -1)
          || (lengthOfShortestResultSet > resultsInThisSet) ) {
            indexOfShortestResultSet = i;
            lengthOfShortestResultSet = resultsInThisSet;
        }
    }
}

Listing 8 - Find the results for each of the terms individually
Diagram 1 - Finding the intersection of the result sets for each word involves traversing the 'array of arrays'.

// Find the common files from the array of arrays of documents
// matching ONE of the criteria
if (botherToFindMatches) { // all words have *some* matches
    // loop through the *shortest* resultset
    int c = indexOfShortestResultSet;
    Hashtable searchResultsArray = searchResultsArrayArray[c];
    if (null != searchResultsArray)
    foreach (object foundInFile in searchResultsArray) {
        // for each file in the *shortest* result set
        DictionaryEntry fo = (DictionaryEntry)foundInFile;
        // find matching files in the other resultsets
        int matchcount=0, totalcount=0, weight=0;
        for (int cx = 0; cx < searchResultsArrayArray.Length; cx++) {
            // keep track, so we can compare at the end (if term is in ALL)
            totalcount += (cx+1);
            if (cx == c) { // current resultset
                // implicitly matches in the current resultset
                matchcount += (cx+1);
                // sum the weighting
                weight += (int)fo.Value;
            } else {
                Hashtable searchResultsArrayx = searchResultsArrayArray[cx];
                if (null != searchResultsArrayx)
                foreach (object foundInFilex in searchResultsArrayx) {
                    // for each file in the result set
                    DictionaryEntry fox = (DictionaryEntry)foundInFilex;
                    if (fo.Key == fox.Key) { // see if it matches
                        // and if it matches, track the matchcount
                        matchcount += (cx+1);
                        // and weighting; then break out of loop, since
                        weight += (int)fox.Value;
                        break; // no need to keep looking through this resultset
                    }
                } // foreach
            } // if
        } // for
        if ( (matchcount>0) && (matchcount == totalcount) ) {
            // was matched in each Array
            // set the final 'weight'
            // to the sum of individual document matches
            fo.Value = weight;
            if ( !finalResultsArray.Contains (fo.Key) )
                finalResultsArray.Add ( fo.Key, fo);
        } // if
    } // foreach
} // if

Listing 9 - Finding the sub-set of documents that contain every word in the query. There're three nested loops in there - I never said this was efficient!
The algorithm described above performs a boolean AND query on all the words in the query: only pages matching ALL the terms are returned.

Screenshot 2 - The Search input page has minor changes, including the filename to SearcharooToo.aspx!

Screenshot 3 - You can refine your search, see the number of matches for each search term, view the time taken to perform the search and, most importantly, see the documents containing all the words in your query!

Copy the files to your website and away you go! However, that means you accept all the default settings, such as crawling from the website root, and a five second timeout when downloading pages. To change those defaults, you need to add some settings to web.config:

<appSettings>
    <!--website to spider-->
    <add key="Searcharoo_VirtualRoot" value="" />
    <!--5 second timeout when downloading-->
    <add key="Searcharoo_RequestTimeout" value="5" />
    <!--Max pages to index-->
    <add key="Searcharoo_RecursionLimit" value="200" />
</appSettings>

The first time the search page is accessed, the spider will run and return to the search page when complete.

SearcharooSpider.aspx greatly increases the utility of Searcharoo, because you can now index your static and dynamic (e.g., database-generated) pages to allow visitors to search your site. That means you could use it with products like Microsoft Content Management Server (CMS), which does not expose its content-database directly.

The two remaining (major) problems with Searcharoo will (hopefully) be examined in more detail in the next articles in this series, on CodeProject and at ConceptDevelopment.NET...

You'll notice the two ASPX pages use the src="Searcharoo.cs" @Page attribute to share the common object model without compiling to an assembly, with the page-specific code 'inline' in <script runat="server"> tags (similar to ASP 3.0). To build in Visual Studio .NET, you'll have to set up a Project and add the CS file and the two ASPX files (you can move the <script> code into the code-behind if you like), then compile.
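As an aside, the AND-matching of Listing 9 is much easier to express today with generics and LINQ, which didn't exist when this article was written; a sketch of the same idea (not the article's code):

```csharp
using System.Collections.Generic;
using System.Linq;

static class AndMatcher
{
    // resultsPerTerm: one dictionary of (file -> weight) per search term
    static Dictionary<string, int> MatchAll(List<Dictionary<string, int>> resultsPerTerm)
    {
        // Start from the smallest set, as Listing 9 does, then keep only
        // the files present in every other set, summing the weights
        var smallest = resultsPerTerm.OrderBy(r => r.Count).First();
        return smallest.Keys
            .Where(file => resultsPerTerm.All(r => r.ContainsKey(file)))
            .ToDictionary(file => file, file => resultsPerTerm.Sum(r => r[file]));
    }
}
```

Starting from the smallest result set is the same optimization the article uses: every file in the final intersection must appear there, so it is the cheapest set to iterate.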
http://www.codeproject.com/KB/IP/Spideroo.aspx
ioctl_getfsmap man page

ioctl_getfsmap — retrieve the physical layout of the filesystem

Synopsis

    #include <sys/ioctl.h>
    #include <linux/fs.h>
    #include <linux/fsmap.h>

    int ioctl(int fd, FS_IOC_GETFSMAP, struct fsmap_head *arg);

Description

This ioctl(2) operation retrieves physical extent mappings for a filesystem. This information can be used to discover which files are mapped to a physical block, examine free space, or find known bad blocks, among other things.

The sole argument to this operation should be a pointer to a single struct fsmap_head:

    struct fsmap {
        __u32 fmr_device;      /* Device ID */
        __u32 fmr_flags;       /* Mapping flags */
        __u64 fmr_physical;    /* Device offset of segment */
        __u64 fmr_owner;       /* Owner ID */
        __u64 fmr_offset;      /* File offset of segment */
        __u64 fmr_length;      /* Length of segment */
        __u64 fmr_reserved[3]; /* Must be zero */
    };

    struct fsmap_head {
        __u32 fmh_iflags;      /* Control flags */
        __u32 fmh_oflags;      /* Output flags */
        __u32 fmh_count;       /* # of entries in array incl. input */
        __u32 fmh_entries;     /* # of entries filled in (output) */
        __u64 fmh_reserved[6]; /* Must be zero */
        struct fsmap fmh_keys[2]; /* Low and high keys for the mapping search */
        struct fsmap fmh_recs[];  /* Returned records */
    };

The two fmh_keys array elements specify the lowest and highest reverse-mapping key for which the application would like physical mapping information. A reverse mapping key consists of the tuple (device, block, owner, offset). The owner and offset fields are part of the key because some filesystems support sharing physical blocks between multiple files and therefore may return multiple mappings for a given physical block.

Filesystem mappings are copied into the fmh_recs array, which immediately follows the header data.

Fields of struct fsmap_head

The fmh_iflags field is a bit mask passed to the kernel to alter the output. No flags are currently defined, so the caller must set this value to zero.
The fmh_oflags field is a bit mask of flags set by the kernel concerning the returned mappings. If FMH_OF_DEV_T is set, then the fmr_device field represents a dev_t structure containing the major and minor numbers of the block device.

The fmh_count field contains the number of elements in the array being passed to the kernel. If this value is 0, fmh_entries will be set to the number of records that would have been returned had the array been large enough; no mapping information will be returned.

The fmh_entries field contains the number of elements in the fmh_recs array that contain useful information.

The fmh_reserved fields must be set to zero.

Keys

The two key records in fsmap_head.fmh_keys specify the lowest and highest extent records in the keyspace that the caller wants returned. A filesystem that can share blocks between files likely requires the tuple (device, physical, owner, offset, flags) to uniquely index any filesystem mapping record. Classic non-sharing filesystems might be able to identify any record with only (device, physical, flags). For example, if the low key is set to (8:0, 36864, 0, 0, 0), the filesystem will only return records for extents starting at or above 36 KiB on disk. If the high key is set to (8:0, 1048576, 0, 0, 0), only records below 1 MiB will be returned. The format of fmr_device in the keys must match the format of the same field in the output records, as defined below.

By convention, the field fsmap_head.fmh_keys[0] must contain the low key and fsmap_head.fmh_keys[1] must contain the high key for the request. For convenience, if fmr_length is set in the low key, it will be added to fmr_block or fmr_offset as appropriate. The caller can take advantage of this subtlety to set up subsequent calls by copying fsmap_head.fmh_recs[fsmap_head.fmh_entries - 1] into the low key. The function fsmap_advance (defined in linux/fsmap.h) provides this functionality.
Fields of struct fsmap

The fmr_device field uniquely identifies the underlying storage device. If the FMH_OF_DEV_T flag is set in the header's fmh_oflags field, this field contains a dev_t from which major and minor numbers can be extracted. If the flag is not set, this field contains a value that must be unique for each unique storage device.

The fmr_physical field contains the disk address of the extent in bytes.

The fmr_owner field contains the owner of the extent. This is an inode number unless FMR_OF_SPECIAL_OWNER is set in the fmr_flags field, in which case the value is determined by the filesystem. See the section below about owner values for more details.

The fmr_offset field contains the logical address in the mapping record in bytes. This field has no meaning if the FMR_OF_SPECIAL_OWNER or FMR_OF_EXTENT_MAP flags are set in fmr_flags.

The fmr_length field contains the length of the extent in bytes.

The fmr_flags field is a bit mask of extent state flags, including the FMR_OF_SPECIAL_OWNER and FMR_OF_EXTENT_MAP flags mentioned above.

Return Value

On success, zero is returned. On error, -1 is returned, and errno is set to indicate the error.

Errors

- EINVAL A nonzero value was passed in one of the fields that must be zero.
- ENOMEM Insufficient memory to process the request.
- EOPNOTSUPP The filesystem does not support this command.
- EUCLEAN The filesystem metadata is corrupt and needs repair.

Versions

The FS_IOC_GETFSMAP operation first appeared in Linux 4.12.

Conforming to

This API is Linux-specific. Not all filesystems support it.

Example

See io/fsmap.c in the xfsprogs distribution for a sample program.

See Also

ioctl(2)

Colophon

This page is part of release 4.16 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

Referenced By

ioctl(2).
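As a rough illustration, the sizes implied by the declarations above can be reproduced from Python with the struct module. The packing below is an assumption derived from the field lists (two __u32 followed by seven __u64 for struct fsmap; four __u32 and six __u64 for the header); the actual FS_IOC_GETFSMAP request number would come from <linux/fsmap.h> and is not computed here:

```python
import struct

# struct fsmap: fmr_device, fmr_flags (two __u32), then fmr_physical,
# fmr_owner, fmr_offset, fmr_length and fmr_reserved[3] (seven __u64).
FSMAP_FMT = "=2I7Q"
# struct fsmap_head fixed part: fmh_iflags..fmh_entries (four __u32)
# plus fmh_reserved[6] (six __u64); the two fmh_keys records follow.
FSMAP_HEAD_FMT = "=4I6Q"

FSMAP_SIZE = struct.calcsize(FSMAP_FMT)                        # 64 bytes
HEAD_SIZE = struct.calcsize(FSMAP_HEAD_FMT) + 2 * FSMAP_SIZE   # 192 bytes

def pack_head(low_key, high_key):
    """Pack an fsmap_head with fmh_count = 0, which per the text above
    just asks the kernel how many records would have been returned."""
    head = struct.pack(FSMAP_HEAD_FMT, 0, 0, 0, 0, *([0] * 6))
    return (head
            + struct.pack(FSMAP_FMT, *low_key)
            + struct.pack(FSMAP_FMT, *high_key))

low = (0, 0) + (0,) * 7                                  # start of keyspace
high = (0xFFFFFFFF, 0xFFFFFFFF) + (2**64 - 1,) * 7       # end of keyspace
buf = pack_head(low, high)
print(len(buf))  # 192
```

A caller would then hand a (larger, fmh_count > 0) buffer of this shape to fcntl.ioctl on an open filesystem file descriptor.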
https://www.mankier.com/2/ioctl_getfsmap
A library is a file containing compiled code from various object files stuffed into a single file. It may contain a group of functions that are used in a particular context. For example, the 'pthread' library is used when thread related functions are to be used in the program. Broadly, a library (or Program Library) can be of two types:

- Static libraries, whose code is copied into each program at link time.
- Shared (or dynamic) libraries, whose code is loaded at run time and shared by all the programs that use it.

With shared libraries, the size of programs (using a shared library) and the memory footprint can be kept low as a lot of code is kept common in form of a shared library. Shared libraries provide modularity to the development environment as the library code can be changed, modified and recompiled without having to re-compile the applications that use this library. For example, for any change in the pthread library code, no change is required in the programs using the pthread shared library.

A shared library can be accessed through different names:

- Name used by linker ('lib' followed by the library name, followed by '.so'. For example libpthread.so)
- Fully qualified name or soname ('lib' followed by the library name, followed by '.so', followed by '.' and a version number. For example: libpthread.so.1)
- Real name ('lib' followed by the library name, followed by '.so', followed by '.' and a version number, followed by a '.' and a minor number, followed by a '.' and a release number. The release number is optional. For example, libpthread.so.1.1)

A version number is changed for a shared library when the changes done in the code make the shared library incompatible with the previous version. For example, if a function is completely removed then a new version of the library is required. A minor number is changed in case there is a modification in the code that does not make the shared library incompatible with the previous version being used. For example, a small bug fix won't break the compatibility of the existing shared library, so only the minor number is changed while the version remains the same.
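The three naming conventions are mechanical enough to sketch in a few lines (purely illustrative):

```python
# Tiny sketch of the three shared-library naming conventions above.
def linker_name(lib):                 # name used by the linker
    return "lib%s.so" % lib

def soname(lib, version):             # fully qualified name
    return "lib%s.so.%d" % (lib, version)

def real_name(lib, version, minor, release=None):
    name = "lib%s.so.%d.%d" % (lib, version, minor)
    if release is not None:           # the release number is optional
        name += ".%d" % release
    return name

print(linker_name("pthread"))      # libpthread.so
print(soname("pthread", 1))        # libpthread.so.1
print(real_name("pthread", 1, 1))  # libpthread.so.1.1
```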
Now, one may wonder why so many names for a shared library? Well, these naming conventions help multiple versions of same shared library to co-exist in a system. The programs linking with the shared library do not need to take care about the latest version of the shared library installed in the system. Once the latest version of the shared library is installed successfully, all the programs automatically start linking to the latest version. The name used by linker is usually a symbolic link to the fully qualified soname which in turn is a symbolic link to the real name. Placement in File System There are mainly three standard locations in the filesystem where a library can be placed. - /lib - /usr/lib - /usr/local/lib We will go by the Filesystem Hierarchy standards(FHS) here. According to the FHS standards, All the libraries which are loaded at start up and running in the root filesystem are kept in /lib. While the libraries that are used by system internally are stored at /usr/lib. These libraries are not meant to be directly used by users or shell scripts. There is a third location /usr/local/lib( though it is not defined in the latest version of FHS ). If it exists, it contains all the libraries that are not part of standard distribution. These non-standard libraries are the one’s which you download and could be possibly buggy. Using ldconfig Once a shared library is created, copy the shared library to directory in which you want the library to reside (for example /usr/local/lib or /usr/lib). Now, run ldconfig command in this directory. What does ldconfig do? You remember that we discussed earlier that a linker name for shared library is a symbolic link to the fully qualified soname which in turn is a symbolic link to the real name. Well, this command does exactly the same. When you run an ELF executable, by default the loader is run first. The loader itself is a shared object file /lib/ld-linux.so.X where ‘X’ is a version number. 
This loader in turn finds and loads all the shared libraries on which our program depends. All the directories that are searched by the loader in order to find the libraries is stored in /etc/ld.so.conf. Searching all the directories specified in /etc/ld.so.conf file can be time consuming so every time ldconfig command is run, it sets up the required symbolic links and then creates a cache in file /etc/ld.so.cache where all the information required for executable is written. Reading information from cache is very less time consuming. The catch here is that ldconfig command needs to be run every-time a shared library is added or removed. So on start-up the program uses /etc/ld.so.cache to load the libraries it requires. Using Non Standard Library Locations When using non standard library locations. One of the following three steps could be carried out : Add the path to /etc/ld.so.conf file. This file contains paths to all the directories in which the library is searched by the loader. This file could sometime contain a single line like : include /etc/ld.so.conf.d/*.conf In that case, just create a conf file in the same directory. You can directly add a directory to cache by using the following command : ldconfig -n [non standard directory path containing shared library] Note that this is a temporary change and will be lost once the system is rebooted. Update the environment variable LD_LIBRARY_PATH to point to your directory containing the shared library. Loader will use the paths mentioned in this environment variable to resolve dependencies. Note that on some Unix systems the name of the environment variable could differ. Note: On a related topic, as we explained earlier, there are four main stages through which a source code passes in order to finally become an executable. Example (How to Create a Shared Library) Lets take a simple practical example to see how we can create and use shared libraries. 
The following is the piece of code (shared.c) that we want to put in a shared library:

    #include "shared.h"

    unsigned int add(unsigned int a, unsigned int b)
    {
        printf("\n Inside add()\n");
        return (a+b);
    }

shared.h looks like:

    #include <stdio.h>

    extern unsigned int add(unsigned int a, unsigned int b);

Let's first make shared.c into a shared library.

1. Run the following two commands to create a shared library:

    gcc -c -Wall -Werror -fPIC shared.c
    gcc -shared -o libshared.so shared.o

The first command compiles the code shared.c into position independent code which is required for a shared library. The second command actually creates a shared library with the name 'libshared.so'.

2. Here is the code of the program that uses the shared library function 'add()':

    #include <stdio.h>
    #include "shared.h"

    int main(void)
    {
        unsigned int a = 1;
        unsigned int b = 2;
        unsigned int result = 0;

        result = add(a, b);

        printf("\n The result is [%u]\n", result);
        return 0;
    }

3. Next, run the following command:

    gcc -L/home/himanshu/practice/ -Wall main.c -o main -lshared

This command compiles the main.c code and tells gcc to link the code with the shared library libshared.so (by using the flag -l) and also tells the location of the shared file (by using the flag -L).

4. Now, export the path where the newly created shared library is kept by using the following command:

    export LD_LIBRARY_PATH=/home/himanshu/practice:$LD_LIBRARY_PATH

The above command exports the path to the environment variable 'LD_LIBRARY_PATH'.

5. Now run the executable 'main':

    # ./main

     Inside add()

     The result is [3]

So we see that the shared library was loaded and the add function inside it was executed.
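A shared library like this can also be loaded at run time rather than link time. Below is a minimal sketch using Python's ctypes, shown with the standard math library libm as a stand-in since it is present on any glibc-based Linux system; loading the libshared.so built above would look the same (ctypes.CDLL("./libshared.so")):

```python
import ctypes

# Load a shared library at runtime by its soname, as discussed earlier.
# libm is used here only because it exists on any glibc-based Linux box;
# the libshared.so built in the steps above would load identically.
libm = ctypes.CDLL("libm.so.6")

# Declare the C signature so ctypes converts arguments correctly.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

This is the same dynamic-loading mechanism dlopen()/dlsym() expose in C: the loader resolves the soname through the symlink chain and cache described above.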
http://www.thegeekstuff.com/2012/06/linux-shared-libraries/
We are planning a cross platform application that has certain 3D features which we want to realise with Unity 3D. The major part of the application however consists of many forms and traditional UI, not 'game UI'. What options are there to create enterprise application UI, in a cross platform manner (deploying to iOS and Android in particular) in combination with Unity3D viewports? Is there any way to build the traditional part of the application with HTML5 / CSS for example and deploy the whole thing to iOS as well as Android? Or are there other cross platform technologies that can be married with Unity 3D? Any suggestions?

Answer by Sisso · Jan 21, 2014 at 04:00 PM

NGUI is almost the default UI for unity3d. It is very powerful with many components and features, but you will still have a lot of work to do by yourself. There are many games with "game ui" containing complex classic elements like tables, charts and tabs. I think that you can customize the native code (iOS app and Android Activity) to display unity3d in a native frame with your native UI, or call unity3d only when you want. I know that there exist some HTML/JavaScript frameworks that build Android/iOS applications. If you learn how to integrate with native code, you will probably be able to integrate with unity3d. Anyway, if you are not willing to implement your UI using NGUI (or one like it), I think that unity3d could be a bad choice for your app.

Answer by tanoshimi · Jan 21, 2014 at 07:50 PM

I'm not necessarily recommending it, but I believe that the most popular choice for "Enterprise UI" is Autodesk ScaleForm.
https://answers.unity.com/questions/622104/cross-platform-enterprise-ui-with-unity3d.html
Spiders¶

Spiders are classes which define how a certain site (or group of sites) will be scraped, including how to perform the crawl (ie. follow links) and how to extract structured data from their pages (ie. scraping items). In other words, Spiders are the place where you define the custom behaviour for crawling and parsing pages for a particular site (or, in some cases, a group of sites). The crawl starts from the URLs specified in start_urls, with the parse method as the callback function for the Requests. In the callback function, you parse the response (web page) and return either Item objects, Request objects, or an iterable of both.

Spiders can receive arguments in their constructors:

    class MySpider(BaseSpider):
        name = 'myspider'

        def __init__(self, category=None, *args, **kwargs):
            super(MySpider, self).__init__(*args, **kwargs)
            self.start_urls = ['' % category]
            # ...

Spider arguments can also be passed through the Scrapyd schedule.json API. See the Scrapyd documentation.

BaseSpider¶

- class scrapy.spider.BaseSpider¶

- name¶ A string which defines the name for this spider. A common practice is to name the spider after the domain it crawls, with or without the TLD. So, for example, a spider that crawls mywebsite.com would often be called mywebsite.

- allowed_domains¶ An optional list of strings containing domains that this spider is allowed to crawl. Requests for URLs not belonging to the domain names specified in this list won't be followed if OffsiteMiddleware is enabled.

- start_requests()¶

      def start_requests(self):
          return ...

BaseSpider example¶

CrawlSpider¶

- class scrapy.contrib.spiders.CrawlSpider¶ Apart from the attributes inherited from BaseSpider (that you must specify), this class supports a new attribute:

- rules¶ Which is a list of one (or more) Rule objects.

Rule¶

- class scrapy.contrib.spiders.Rule¶

XMLFeedSpider¶

- iterator¶ It is highly recommended to use the iternodes iterator for performance reasons, since the xml and html iterators generate the whole DOM at once in order to parse it.
  - 'html' - an iterator which uses HtmlXPathSelector. Keep in mind this uses DOM parsing and must load all DOM in memory which could be a problem for big feeds
  - 'xml' - an iterator which uses XmlXPathSelector.

- namespaces¶ A list of (prefix, uri) tuples; prefix and uri will be used to automatically register namespaces using the register_namespace() method.
You can then specify nodes with namespaces in the itertag attribute.

- parse_node(response, node)¶ This method is called for the nodes matching the provided tag name (itertag), receiving the response and an XPathSelector for each node. Overriding this method is mandatory. Otherwise, your spider won't work. This method must return either an Item object, a Request object, or an iterable containing any of them.

For example, a parse_node implementation might look like:

    from scrapy import log
    from scrapy.item import Item

    def parse_node(self, response, node):
        log.msg('Hi, this is a <%s> node!: %s' % (self.itertag, ''.join(node.extract())))
        item = Item()
        item['id'] = node.select('@id').extract()
        item['name'] = node.select('name').extract()
        item['description'] = node.select('description').extract()
        return item
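The rule-based link following that CrawlSpider automates can be sketched without Scrapy at all: extract candidate links from a page and keep only those matching an allow pattern. This is an illustration of the idea only — the real link extractors handle base URLs, deduplication, and much more:

```python
import re

# Illustrative sketch of what a CrawlSpider Rule's link extractor does:
# pull hrefs out of a page and keep only those matching an allow pattern.
HREF_RE = re.compile(r'href="([^"]+)"')

def extract_links(html, allow=r".*"):
    allow_re = re.compile(allow)
    return [url for url in HREF_RE.findall(html) if allow_re.search(url)]

page = '<a href="/category.php?id=1">c</a> <a href="/item.php?id=7">i</a>'
print(extract_links(page, allow=r"item\.php"))  # ['/item.php?id=7']
```

In a real spider, the surviving URLs would be turned into new Requests with the Rule's callback attached.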
http://doc.scrapy.org/en/0.18/topics/spiders.html
Sharing modules in Angular/TypeScript/Ionic — Working Procedure

- Sharing modules in Angular/TypeScript (tsconfig.json-based; code only):
- One of the critical pieces:
- GitHub for sample code:
- Providers aren't NgModules, so we need a wrapper:
- Adding a single instance of the Service instead of a separate copy each time we import. This also covers how we reference an instance of the Service.

Results from adapting MobiLoc to use the Front-end-common module

The tickets were LE-79, LE-80, and LE-82

Summary:

- Add ComponentsModule which is defined in FEC to the app.module.ts
- Added the watch. and webpack.config.js alternatives that permit bringing in the FEC code (along with changes to tsconfig.json and package.json that were required)
- Upon app initialization, check Auth service to see if we need to show Registration page.
- May not be a problem for newer modules, but the older MobiLoc module needed this line to be added to the index.html:

      <script src="build/vendor.js"></script>

- Add required plugins to the package.json

      "cordova-plugin-ionic-webview": {},
      "cordova-plugin-customurlscheme": {
          "URL_SCHEME": "com.clueride",
          "ANDROID_SCHEME": "com.clueride",
          "ANDROID_HOST": "clueride.auth0.com",
          "ANDROID_PATHPREFIX": "/cordova/com.clueride/callback"
      },
      "cordova-plugin-safariviewcontroller": {},  // Maybe?

- Add Callback to app.component.ts

      import Auth0Cordova from "@auth0/cordova";
      . . .
      /* Handles the return to the app after logging in at external site. */
      (<any>window).handleOpenURL = (url) => {
          Auth0Cordova.onRedirectUri(url);
      }

Ticket LE-82 covered a case specific to the Leaflet library (specifically, the CSS). Follow the README.md that covers the markers component. I had thought this was broken by making the changes above, but instead, what had happened was I had blown away the node_modules and this wiped out my copy of the CSS => SCSS file.
For when we go to an NPM-based approach: - Sharing modules in Angular/TypeScript (NPM-based with 'link'): At a high level, the Ionic steps for building are set aside for the shared module, and replaced with steps for just turning the TypeScript into JavaScript and providing a 'dist' directory that contains the npm-supported artifacts — prior to any webpack. That module is then brought through the full Ionic build as if it were any other NPM module — when the apps are built. One of the problems was the HTML files were ignored. Not good when I wanted to share pages.
http://bikehighways.wikidot.com/module-sharing-ionic2
Memory Leaks and Java Code

When you aren't using objects, but they aren't touched by GC, a memory leak happens. Here are six ways memory leaks happen to look for and avoid.

Java implicitly reclaims memory by GC (a daemon thread). GC periodically checks if there is any object which is unreachable or, to be precise, has no reference pointing to that object. If so, GC reclaims the newly-available memory. Now the question is: should we worry about memory leaks, or does Java handle them for us? Pay attention to the definition: An object is eligible for garbage collection when it is unreachable (unused), and no living thread can reach it. So if an object which is not used in an application unintentionally has references, it is not eligible for garbage collection, and is a potential memory leak. GC takes care of unreachable objects, but can't determine unused objects. Unused objects depend on application logic, so a programmer must pay attention to the business code. Silly mistakes silently grow up to be a monster. Memory leaks can occur in many ways; I will look at some examples.

Example 1: Autoboxing

    package com.example.memoryleak;
    public class Adder {
        public long addIncremental(long l) {
            Long sum = 0L;
            sum = sum + l;
            return sum;
        }
        public static void main(String[] args) {
            Adder adder = new Adder();
            for (long i = 0; i < 1000; i++) {
                adder.addIncremental(i);
            }
        }
    }

Can you spot the memory leak? Here I made a mistake. Instead of taking the primitive long for the sum, I took the Long (wrapper class), which is the cause of the memory leak. Due to auto-boxing, sum=sum+l; creates a new object in every iteration, so 1000 unnecessary objects will be created. Please avoid mixing and matching between primitive and wrapper classes. Try to use primitives as much as you can.
Example 2: Using Cache

    package com.example.memoryleak;
    import java.util.HashMap;
    import java.util.Map;
    public class Cache {
        private Map<String,String> map = new HashMap<String,String>();
        public void initCache() {
            map.put("Anil", "Work as Engineer");
            map.put("Shamik", "Work as Java Engineer");
            map.put("Ram", "Work as Doctor");
        }
        public Map<String,String> getCache() {
            return map;
        }
        public void forEachDisplay() {
            for (String key : map.keySet()) {
                String val = map.get(key);
                System.out.println(key + " :: " + val);
            }
        }
        public static void main(String[] args) {
            Cache cache = new Cache();
            cache.initCache();
            cache.forEachDisplay();
        }
    }

Here, a memory leak occurs due to the internal map data structure. This class displays the employee values from the cache. Once those are displayed, there is no need to store those elements in the cache. We forgot to clear the cache, so although the objects in the cache are not required anymore by the application, they can't be GCed, as the map holds a strong reference to them. So when you're using your own cache, don't forget to clear it if items in the cache are no longer required. Alternatively, you can initialize the cache with a WeakHashMap. The beauty of WeakHashMap is that if a key is not referenced by any other objects, then that entry becomes eligible for GC. There is a lot to say about WeakHashMap, but I will discuss it in another article. Use it with caution: if you want to reuse the values stored in the cache, it may be that a key is not referenced by any other object, so the entry will be GCed and that value magically disappears.

Example 3: Closing Connections

    try {
        Connection con = DriverManager.getConnection();
        // ...
        con.close();
    } catch (Exception ex) {
    }

In the above example, we close the connection (a costly resource) in the try block, so in the case of an exception, the connection will not be closed. This creates a memory leak, as the connection is never returned to the pool. Please always put any closing stuff in the finally block.
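The WeakHashMap behaviour described above — entries disappearing once their key loses its last outside reference — has a close analogue in Python's weakref module, which makes the trade-off easy to demonstrate (an illustration in Python, not the article's Java):

```python
import weakref

class Entry:
    def __init__(self, text):
        self.text = text

# Like WeakHashMap: the cache does not keep its values alive by itself.
cache = weakref.WeakValueDictionary()

strong = Entry("Work as Engineer")
cache["Anil"] = strong
print("Anil" in cache)   # True: a strong reference still exists

del strong               # drop the last strong reference
print("Anil" in cache)   # False: the entry 'magically disappears'
```

The same caution applies as in the article: if nothing else holds a reference, the cached value can vanish between a put and a later get.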
Example 4: Using CustomKey

    package com.example.memoryleak;
    import java.util.HashMap;
    import java.util.Map;
    public class CustomKey {
        public CustomKey(String name) {
            this.name = name;
        }
        private String name;
        public static void main(String[] args) {
            Map<CustomKey,String> map = new HashMap<CustomKey,String>();
            map.put(new CustomKey("Shamik"), "Shamik Mitra");
            String val = map.get(new CustomKey("Shamik"));
            System.out.println("Missing equals and hashcode so value is not accessible from Map " + val);
        }
    }

As in CustomKey we forgot to provide equals() and hashcode() implementations, a key and value stored in the map can't be retrieved later, as the map get() method checks hashcode() and equals(). But this entry is not able to be GCed, as the map has a reference to it, while the application can't access it. Definitely a memory leak. So when you make your custom key, always provide equals() and hashcode() implementations.

Example 5: Mutable Custom Key

    package com.example.memoryleak;
    import java.util.HashMap;
    import java.util.Map;
    public class MutableCustomKey {
        public MutableCustomKey(String name) {
            this.name = name;
        }
        private String name;
        public String getName() {
            return name;
        }
        public void setName(String name) {
            this.name = name;
        }
        @Override
        public int hashCode() {
            return (name == null) ? 0 : name.hashCode();
        }
        @Override
        public boolean equals(Object obj) {
            if (this == obj)
                return true;
            if (obj == null || getClass() != obj.getClass())
                return false;
            MutableCustomKey other = (MutableCustomKey) obj;
            if (name == null) {
                if (other.name != null)
                    return false;
            } else if (!name.equals(other.name))
                return false;
            return true;
        }
        public static void main(String[] args) {
            MutableCustomKey key = new MutableCustomKey("Shamik");
            Map<MutableCustomKey,String> map = new HashMap<MutableCustomKey,String>();
            map.put(key, "Shamik Mitra");
            MutableCustomKey refKey = new MutableCustomKey("Shamik");
            String val = map.get(refKey);
            System.out.println("Value Found " + val);
            key.setName("Bubun");
            String val1 = map.get(refKey);
            System.out.println("Due to MutableKey value not found " + val1);
        }
    }

Although here we provided equals() and hashcode() for the custom key, we made it mutable unintentionally after storing it into the map. If its property is changed, then that entry will never be found by the application, but the map holds a reference, so a memory leak happens. Always make your custom key immutable.

Example 6: Internal Data Structure

    package com.example.memoryleak;
    public class Stack {
        private int maxSize;
        private int[] stackArray;
        private int pointer;
        public Stack(int s) {
            maxSize = s;
            stackArray = new int[maxSize];
            pointer = -1;
        }
        public void push(int j) {
            stackArray[++pointer] = j;
        }
        public int pop() {
            return stackArray[pointer--];
        }
        public int peek() {
            return stackArray[pointer];
        }
        public boolean isEmpty() {
            return (pointer == -1);
        }
        public boolean isFull() {
            return (pointer == maxSize - 1);
        }
        public static void main(String[] args) {
            Stack stack = new Stack(1000);
            for (int i = 0; i < 1000; i++) {
                stack.push(i);
            }
            for (int i = 0; i < 1000; i++) {
                int element = stack.pop();
                System.out.println("Popped element is " + element);
            }
        }
    }

Here we face a tricky problem when the Stack first grows then shrinks. Actually, it is due to the internal implementation. Stack internally holds an array, but from an application perspective, the active portion of the Stack is where the pointer is pointing. So when the Stack grows to 1000, internally the array cells are filled up with elements, but afterwards when we pop all the elements, the pointer comes back down, so according to the application the Stack is empty — yet the internal array still contains all the popped references. In Java, we call these obsolete references. An obsolete reference is a reference which can't be dereferenced. These references can't be GCed, as the array holds those elements, but they are unnecessary after they are popped. To fix it, we need to clear each slot (set it to null for object arrays) when the pop action occurs, so those objects are able to be GCed:

    public int pop() {
        int size = pointer--;
        int element = stackArray[size];
        stackArray[size] = 0; // clear the obsolete slot (null for object arrays)
        return element;
    }

Safety Measure for Preventing Memory Leaks:

Published at DZone with permission of Shamik Mitra, DZone MVB.
https://dzone.com/articles/memory-leak-andjava-code
I'm pretty sure new throwing an exception is platform-independent. Following that, you can install an exception handler with the effect of doing what you are suggesting (writing to file). Although, I think you would be hard pressed to find a system today that doesn't already transparently support this with virtual memory.

Wow, I just tested it with this code:

    #include <iostream>
    using namespace std;

    int main()
    {
        long long int *test = new long long int[100000000000];
        test[99999999999] = 10;
        cout << test[99999999999];
        return 0;
    }

And it performed without a problem. Considering that I only have 4 gigs of RAM on my machine, can I assume that my compiler is transparently supporting the whole file-writing thing? (if this works then my mass1venum dll (I wrote it as an excercise) will be able to be a lot simpler)

Indeed, pretty much all modern operating systems have good mechanisms to deal with memory and use the hard-drive if necessary. Of course, there will always be a chance that you run out of memory (whether the OS doesn't want to take up more HDD memory or because you run out of HDD space). So, there is really no need for you to write code that does this type of thing, and the OS will do this much better than you can ever hope to. For instance, the OS will swap memory between the RAM and HDD such that your program is always working on RAM memory (the chunks of memory not currently used are basically sleeping on the HDD). This is a kind of mechanism you would have a really hard time doing yourself. As for the new operator, the C++ standard prescribes that it must throw a bad_alloc exception if you run out of memory, so that is entirely platform independent.
If you want the "return NULL" behaviour, you must use the no-throw new operator, as in:

    new(nothrow) int[1024];

If you are going to be working with large chunks of data, it might be a good idea to consider a container like std::deque, which stores the complete array as a number of big chunks of data, as opposed to one contiguous array. This will generally be less demanding for the OS because the OS won't have to make space for one huge chunk of memory. But, of course, the memory won't be contiguous, so it might not be appropriate for your application.
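The std::deque suggestion — many moderate-sized chunks instead of one contiguous block — can be illustrated with a small Python sketch (conceptual only; Python objects are not raw memory, so this only shows the indexing scheme):

```python
# Conceptual sketch of the std::deque idea: store a large logical array
# as many fixed-size chunks instead of one contiguous allocation.
CHUNK = 4096

class ChunkedArray:
    def __init__(self, size):
        self.chunks = [bytearray(min(CHUNK, size - off))
                       for off in range(0, size, CHUNK)]

    def __setitem__(self, i, value):
        self.chunks[i // CHUNK][i % CHUNK] = value

    def __getitem__(self, i):
        return self.chunks[i // CHUNK][i % CHUNK]

arr = ChunkedArray(10_000)
arr[9_999] = 10
print(arr[9_999])  # 10
```

Each element is located by dividing the logical index into a chunk number and an offset — exactly the bookkeeping a deque-like container does so that no single huge allocation is ever requested.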
https://www.daniweb.com/software-development/cpp/threads/414483/checking-memory-bounds
Playing With Apache Storm On Docker - Like A Boss

Usama Ashraf

This article is not the ultimate guide to Storm, nor is it meant to be. Storm's pretty huge, and just one long read probably can't do it justice anyway. Of course, any additions, feedback or constructive criticism will be greatly appreciated.

OK, now that that's out of the way, let's see what we'll be covering:

- The necessity of Storm, the 'why' of it, what it is and what it isn't
- A bird's eye view of how it works
- What a Storm topology roughly looks like in code (Java)
- Setting up and playing with a production-worthy Storm cluster on Docker
- A few words on message processing reliability

I'm also assuming that you're at least somewhat familiar with Docker and containerization.

Continuous streams of data are ubiquitous, and becoming even more so with the increasing number of IoT devices being used. Of course, this data is stored, processed and analyzed to provide predictive, actionable results. But petabytes take long to analyze, even with Hadoop (as good as MapReduce may be) or Spark (a remedy to the limitations of MapReduce).

Secondly, very often we don't need to deduce patterns over long periods of time. Of the petabytes of incoming data collected over months, at any given moment we might not need to take all of it into account, just a real-time snapshot. Perhaps we don't need to know the longest trending hashtag over five years, just the one right now.

This is what Storm is built for: to accept tons of data coming in extremely fast, possibly from various sources, analyze it, and publish real-time updates to a UI or some other place, without storing any of it itself.

How It Works

The architecture of Storm can be compared to a network of roads connecting a set of checkpoints. Traffic begins at a certain checkpoint (called a spout) and passes through other checkpoints (called bolts).
The traffic is, of course, the stream of data that is retrieved by the spout (from a data source, a public API for example) and routed to various bolts where the data is filtered, sanitized, aggregated, analyzed, and sent to a UI for people to view, or to any other target. The network of spouts and bolts is called a topology, and the data flows in the form of tuples (lists of values that may have different types).

One important thing to talk about is the direction of the data traffic. Conventionally, we would have one or multiple spouts reading the data from an API, a Kafka topic or some other queuing system. The data would then flow one-way to one or multiple bolts, which may forward it to other bolts and so on. Bolts may publish the analyzed data to a UI or to another bolt. But the traffic is almost always unidirectional, like a DAG. Although it is certainly possible to make cycles, we're unlikely to need such a convoluted topology.

Installing a Storm release involves a number of steps, which you're free to follow on your machine. But later on I'll be using Docker containers for a Storm cluster deployment, and the images will take care of setting up everything we need.

Some Code

While Storm does offer support for other languages, most topologies are written in Java since it's the most efficient option we have.

A very basic spout, that just emits random digits, may look like this:

```java
public class RandomDigitSpout extends BaseRichSpout {
  // To output tuples from spout to the next stage bolt
  SpoutOutputCollector collector;

  public void nextTuple() {
    int randomDigit = ThreadLocalRandom.current().nextInt(0, 10);
    // Emit the digit to the next stage bolt
    collector.emit(new Values(randomDigit));
  }

  public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    // Tell Storm the schema of the output tuple for this spout.
    // It consists of a single column called 'random-digit'.
    outputFieldsDeclarer.declare(new Fields("random-digit"));
  }
}
```

And a simple bolt that takes in the stream of random digits and just emits the even ones:

```java
public class EvenDigitBolt extends BaseRichBolt {
  // To output tuples from this bolt to the next bolt.
  OutputCollector collector;

  public void execute(Tuple tuple) {
    // Get the 1st column 'random-digit' from the tuple
    int randomDigit = tuple.getInt(0);
    if (randomDigit % 2 == 0) {
      collector.emit(new Values(randomDigit));
    }
  }

  public void declareOutputFields(OutputFieldsDeclarer declarer) {
    // Tell Storm the schema of the output tuple for this bolt.
    // It consists of a single column called 'even-digit'
    declarer.declare(new Fields("even-digit"));
  }
}
```

Another simple bolt that'll receive the filtered stream from EvenDigitBolt, multiply each even digit by 10 and emit it forward:

```java
public class MultiplyByTenBolt extends BaseRichBolt {
  OutputCollector collector;

  public void execute(Tuple tuple) {
    // Get 'even-digit' from the tuple.
    int evenDigit = tuple.getInt(0);
    collector.emit(new Values(evenDigit * 10));
  }

  public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("even-digit-multiplied-by-ten"));
  }
}
```

Putting them together to form our topology:

```java
package packagename;
// ...

public class OurSimpleTopology {
  public static void main(String[] args) throws Exception {
    // Create the topology
    TopologyBuilder builder = new TopologyBuilder();

    // Attach the random digit spout to the topology.
    // Use just 1 thread for the spout.
    builder.setSpout("random-digit-spout", new RandomDigitSpout());

    // Connect the even digit bolt to our spout.
    // The bolt will use 2 threads and the digits will be randomly
    // shuffled/distributed among the 2 threads.
    // The third parameter is formally called the parallelism hint.
    builder.setBolt("even-digit-bolt", new EvenDigitBolt(), 2)
           .shuffleGrouping("random-digit-spout");

    // Connect the multiply-by-10 bolt to our even digit bolt.
    // This bolt will use 4 threads, among which data from the
    // even digit bolt will be shuffled/distributed randomly.
    builder.setBolt("multiplied-by-ten-bolt", new MultiplyByTenBolt(), 4)
           .shuffleGrouping("even-digit-bolt");

    // Create a configuration object.
    Config conf = new Config();

    // The number of independent JVM processes this topology will use.
    conf.setNumWorkers(2);

    // Submit our topology with the configuration.
    StormSubmitter.submitTopology("our-simple-topology", conf, builder.createTopology());
  }
}
```

Parallelism In Storm Topologies

Fully understanding parallelism in Storm can be daunting, at least in my experience. A topology requires at least one process to operate on (obviously). Within this process, we can parallelize the execution of our spouts and bolts using threads. In our example, RandomDigitSpout will launch just one thread, and the data spewed from that thread will be distributed among 2 threads of the EvenDigitBolt.

But the way this distribution happens, referred to as the stream grouping, can be important. For example, you may have a stream of temperature recordings from two cities, where the tuples emitted by the spout look like this:

```
// City name, temperature, time of recording
("Atlanta", 94, "2018-05-11 23:14")
("New York City", 75, "2018-05-11 23:15")
("New York City", 76, "2018-05-11 23:16")
("Atlanta", 96, "2018-05-11 23:15")
("New York City", 77, "2018-05-11 23:17")
("Atlanta", 95, "2018-05-11 23:16")
("New York City", 76, "2018-05-11 23:18")
```

Suppose we're attaching just one bolt whose job is to calculate the changing average temperature of each city. If we can reasonably expect that in any given time interval we'll get roughly an equal number of tuples from both cities, it would make sense to dedicate 2 threads to our bolt and send the data for Atlanta to one of them and New York to the other.
A fields grouping would serve our purpose; it partitions data among the threads by the value of the field specified in the grouping:

```java
// The tuples with the same city name will go to the same thread.
builder.setBolt("avg-temp-bolt", new AvgTempBolt(), 2)
       .fieldsGrouping("temp-spout", new Fields("city_name"));
```

And of course there are other types of groupings as well. For most cases, though, the grouping probably won't matter much and you can just shuffle the data and throw it among the bolt threads randomly (shuffle grouping).

Now there's another important component to this: the number of worker processes that our topology will run on. The total number of threads that we specified will then be equally divided among the worker processes. So, in our example random digit topology, we had 1 spout thread, 2 even-digit bolt threads and 4 multiply-by-ten bolt threads (7 total). Each of the 2 worker processes would be responsible for running 2 multiply-by-ten bolt threads and 1 even-digit bolt thread, and one of the processes will run the 1 spout thread. Of course, the 2 worker processes will also have their main threads, which in turn launch the spout and bolt threads. So all in all we'll have 9 threads. These are collectively called executors.

It's important to realize that if you set a spout's parallelism hint > 1 (i.e. multiple executors), you can end up emitting the same data several times. Say the spout reads from the public Twitter stream API and uses two executors. That means that the bolts receiving the data from the spout will get the same tweet twice. It is only after the spout emits the tuples that data parallelism comes into play, i.e. the tuples get divided among the bolts according to the specified stream grouping.

Running multiple workers on a single node would be fairly pointless. Later, however, we'll use a proper, distributed, multi-node cluster and see how workers are divided on different nodes.
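Under the hood, a fields grouping is essentially hash partitioning on the grouping field: hash the value, take it modulo the number of bolt executors, and the same city always reaches the same thread. A rough sketch of the idea (the class and method names here are illustrative, not Storm's internals):

```java
import java.util.Objects;

public class FieldsGroupingSketch {
    // Pick a target executor index for a given grouping-field value.
    static int targetExecutor(String cityName, int numExecutors) {
        return Math.floorMod(Objects.hashCode(cityName), numExecutors);
    }

    public static void main(String[] args) {
        int a1 = targetExecutor("Atlanta", 2);
        int a2 = targetExecutor("Atlanta", 2);
        // The same key must always land on the same thread.
        if (a1 != a2) throw new AssertionError("same key must map to same executor");
        System.out.println("Atlanta -> executor " + a1
                + ", NYC -> executor " + targetExecutor("New York City", 2));
    }
}
```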
Building Our Topology

Here's the directory structure I suggest:

```
yourproject/
    pom.xml
    src/
        jvm/
            packagename/
                RandomDigitSpout.java
                EvenDigitBolt.java
                MultiplyByTenBolt.java
                OurSimpleTopology.java
```

Maven is commonly used for building Storm topologies, and it requires a pom.xml file (the POM) that defines various configuration details, project dependencies etc. Getting into the nitty-gritty of the POM would probably be overkill here.

- First, we'll run mvn clean inside yourproject to clear any compiled files we may have, making sure to compile each module from scratch.
- Then mvn package to compile our code and package it in an executable JAR file, inside a newly created target folder. This might take quite a few minutes the first time, especially if your topology has many dependencies.
- To submit our topology:

```
storm jar target/packagename-{version number}.jar packagename.OurSimpleTopology
```

Hopefully, by now the gap between concept and code in Storm has been somewhat bridged. However, no serious Storm deployment will be a single topology instance running on one server.

What A Storm Cluster Looks Like

To take full advantage of Storm's scalability and fault-tolerance, any production-grade topology would be submitted to a cluster of machines.

Storm distributions are installed on the master node (Nimbus) and all the slave nodes (Supervisors). The master node runs the Storm Nimbus daemon and the Storm UI. The slave nodes run the Storm Supervisor daemons. A Zookeeper daemon on a separate node is used for coordination among the master node and the slave nodes. Zookeeper, by the way, is only used for cluster management and never any kind of message passing. It's not like spouts and bolts are sending data to each other through it or anything like that. The Nimbus daemon finds available Supervisors via ZooKeeper, to which the Supervisor daemons register themselves. It also handles other managerial tasks, some of which will become clear shortly.
The Storm UI is a web interface used to manage the state of our cluster. We'll get to this later.

Our topology is submitted to the Nimbus daemon on the master node and then distributed among the worker processes running on the slave/supervisor nodes. Because of Zookeeper, it doesn't matter how many slave/supervisor nodes you run initially, as you can always seamlessly add more and Storm will automatically integrate them into the cluster.

Whenever we start a Supervisor, it allocates a certain number of worker processes (that we can configure), which can then be used by the submitted topology. So in the image above there are a total of 5 allocated workers. Remember this line:

```java
conf.setNumWorkers(5)
```

This means that the topology will try to use a total of 5 workers. And since our two Supervisor nodes have a total of 5 allocated workers, each of the 5 allocated worker processes will run one instance of the topology. If we had done:

```java
conf.setNumWorkers(4)
```

then one worker process would have remained idle/unused. If the number of specified workers was 6 and the total allocated workers were 5, then because of the limitation only 5 actual topology workers would've been functional.

Before we set this all up using Docker, a few important things to keep in mind regarding fault-tolerance:

- If any worker on any slave node dies, the Supervisor daemon will have it restarted. If restarting repeatedly fails, the worker will be reassigned to another machine.
- If an entire slave node dies, its share of the work will be given to another supervisor/slave node.
- If the Nimbus goes down, the workers will remain unaffected. However, until the Nimbus is restored, workers won't be reassigned to other slave nodes if, say, their node crashes.
- The Nimbus and Supervisors are themselves stateless, but with Zookeeper, some state information is stored so that things can begin where they were left off if a node crashes or a daemon dies unexpectedly.
- Nimbus, Supervisor and Zookeeper daemons are all fail-fast. This means that they themselves are not very tolerant to unexpected errors, and will shut down if they encounter one. For this reason they have to be run under supervision using a watchdog program that monitors them constantly and restarts them automatically if they ever crash. Supervisord is probably the most popular option for this (not to be confused with the Storm Supervisor daemon).

Note: In most Storm clusters, the Nimbus itself is never deployed as a single instance but as a cluster. If this fault-tolerance is not incorporated and our sole Nimbus goes down, we'll lose the ability to submit new topologies, gracefully kill running topologies, reassign work to other Supervisor nodes if one crashes, etc. For simplicity, our illustrative cluster will use a single instance. Similarly, the Zookeeper is very often deployed as a cluster, but we'll use just one.

Dockerizing The Cluster

Launching individual containers and all that goes along with them can be cumbersome, so I prefer to use Docker Compose.

We'll be going with one Zookeeper node, one Nimbus node and one Supervisor node initially. They'll be defined as Compose services, all corresponding to one container each at the beginning. Later on, I'll use Compose scaling to add another Supervisor node (container).

Here's our entire code and the project structure:

```
zookeeper/
    Dockerfile
storm-nimbus/
    Dockerfile
    storm.yaml
    code/
        pom.xml
        src/
            jvm/
                coincident_hashtags/
                    ExclamationTopology.java
storm-supervisor/
    Dockerfile
    storm.yaml
docker-compose.yml
```

And our docker-compose.yml:

```yaml
version: '3.2'

services:
  zookeeper:
    build: ./zookeeper
    # Keep it running.
    tty: true

  storm-nimbus:
    build: ./storm-nimbus
    # Run this service after 'zookeeper' and make 'zookeeper' reference.
    links:
      - zookeeper
    tty: true
    # Map port 8080 of the host machine to 8080 of the container,
    # to access the Storm UI from our host machine.
    ports:
      - 8080:8080
    volumes:
      - './storm-nimbus:/theproject'

  storm-supervisor:
    build: ./storm-supervisor
    links:
      - zookeeper
      - storm-nimbus
    tty: true

# Host volume used to store our code on the master node (Nimbus).
volumes:
  storm-nimbus:
```

Feel free to explore the Dockerfiles. They basically just install the dependencies (Java 8, Storm, Maven, Zookeeper etc.) on the relevant containers. The storm.yaml files override certain default configurations for the Storm installations. The line ADD storm.yaml /conf inside the Nimbus and Supervisor Dockerfiles puts them inside the containers, where Storm can read them.

storm-nimbus/storm.yaml:

```yaml
# The Nimbus needs to know where the Zookeeper is. This specifies the list of the
# hosts in the Zookeeper cluster. We're using just one node, of course.
# 'zookeeper' is the Docker Compose network reference.
storm.zookeeper.servers:
  - "zookeeper"
```

storm-supervisor/storm.yaml:

```yaml
# Telling the Supervisor where the Zookeeper is.
storm.zookeeper.servers:
  - "zookeeper"

# The worker nodes need to know which machine(s) are the candidate of master
# in order to download the topology jars.
nimbus.seeds: ["storm-nimbus"]

# For each Supervisor, we configure how many workers run on that machine.
# Each worker uses a single port for receiving messages, and this setting
# defines which ports are open for use. We define four ports here, so Storm will
# allocate up to four workers to run on this node.
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
```

These options are adequate for our cluster. The more curious can check out all the default configurations here.

Run docker-compose up at the project root.
After all the images have been built and all the services started, open a new terminal, type docker ps and you'll see something like this:

Starting The Nimbus

Let's SSH into the Nimbus container using its name:

```
docker exec -it coincidenthashtagswithapachestorm_storm-nimbus_1 bash
```

And then start the Nimbus daemon:

```
storm nimbus
```

Starting The Storm UI

Similarly, open another terminal, SSH into the Nimbus again and launch the UI:

```
storm ui
```

Go to localhost:8080 in your browser and you'll see a nice overview of our cluster:

The Free slots in the Cluster Summary indicate how many total workers (on all Supervisor nodes) are available and waiting for a topology to consume them. Used Slots indicate how many of the total are currently busy with a topology. Since we haven't launched any Supervisors yet, they're both zero. We'll get to Executors and Tasks later. Also, as we can see, no topologies have been submitted yet.

Starting A Supervisor Node

SSH into the one Supervisor container and launch the Supervisor daemon:

```
docker exec -it coincidenthashtagswithapachestorm_storm-supervisor_1 bash
storm supervisor
```

Now let's go refresh our UI:

Note: Any changes in our cluster may take a few seconds to reflect on the UI.

We have a new running Supervisor, which comes with four allocated workers. These four workers are the result of specifying four ports in our storm.yaml for the Supervisor node. Of course, they're all free (four Free slots).

Let's submit a topology to the Nimbus and put 'em to work.

Submitting A Topology To The Nimbus

SSH into the Nimbus in a new terminal. I've written the Dockerfile so that we land in our working (landing) directory /theproject. Inside this is code, where our topology resides.

Our topology is pretty simple. It uses a spout that generates random words and a bolt that just appends three exclamation marks (!!!) to the words. Two of these bolts are added back-to-back, and so at the end of the stream we'll get words with six exclamation marks.
It also specifies that it needs three workers (conf.setNumWorkers(3)):

```java
public static void main(String[] args) throws Exception {
  TopologyBuilder builder = new TopologyBuilder();

  builder.setSpout("word", new TestWordSpout(), 10);
  builder.setBolt("exclaim1", new ExclamationBolt(), 3).shuffleGrouping("word");
  builder.setBolt("exclaim2", new ExclamationBolt(), 2).shuffleGrouping("exclaim1");

  Config conf = new Config();
  // Turn on debugging mode
  conf.setDebug(true);
  conf.setNumWorkers(3);

  StormSubmitter.submitTopology("exclamation-topology", conf, builder.createTopology());
}
```

Build and submit it:

```
cd code
mvn clean
mvn package
storm jar target/coincident-hashtags-1.2.1.jar coincident_hashtags.ExclamationTopology
```

After the topology has been submitted successfully, refresh the UI:

As soon as we submitted the topology, the Zookeeper was notified. The Zookeeper in turn notified the Supervisor to download the code from the Nimbus. We now see our topology along with its three occupied workers, leaving just one free. And 10 word spout threads + 3 exclaim1 bolt threads + 2 exclaim2 bolt threads + the 3 main threads from the workers = a total of 18 executors.

And you might've noticed something new: tasks.

WTF Are Tasks

Tasks are another concept in Storm's parallelism. But don't sweat it: a task is just an instance of a spout or bolt that an executor uses; it's what actually does the processing. By default the number of tasks is equal to the number of executors. In rare cases you might need each executor to instantiate more tasks:

```java
// Each of the two executors (threads) of this bolt will instantiate
// two objects of this bolt (total 4 bolt objects/tasks).
builder.setBolt("even-digit-bolt", new EvenDigitBolt(), 2)
       .setNumTasks(4)
       .shuffleGrouping("random-digit-spout");
```

This is a shortcoming on my part, but I can't think of a good use case where we'd need multiple tasks per executor.
Maybe if we were adding some parallelism ourselves, like spawning a new thread within the bolt to handle a long-running task, then the main executor thread wouldn't block and would be able to continue processing using the other bolt task. However, this can make our topology hard to understand. If anyone knows of scenarios where the performance gain from multiple tasks outweighs the added complexity, please post a comment.

Anyways, returning from that slight detour, let's see an overview of our topology. Click on the name under Topology Summary and scroll down to Worker Resources:

We can clearly see the division of our executors (threads) among the 3 workers. And of course all 3 workers are on the same, single Supervisor node we're running.

Now, let's scale out!

Add Another Supervisor

From the project root, let's add another Supervisor node/container:

```
docker-compose scale storm-supervisor=2
```

SSH into the new container:

```
docker exec -it coincidenthashtagswithapachestorm_storm-supervisor_2 bash
```

And fire up:

```
storm supervisor
```

If you refresh the UI, you'll see that we've successfully added another Supervisor and four more workers (a total of 8 workers/slots). To really take advantage of the new Supervisor, let's increase the topology's workers:

- First, kill the running topology: storm kill exclamation-topology
- Change this line to: conf.setNumWorkers(6)
- Change the project version number in your pom.xml. Try using a proper scheme, like semantic versioning. I'll just stick with 1.2.1.
- Rebuild the topology: mvn package
- Resubmit it: storm jar target/coincident-hashtags-1.2.1.jar coincident_hashtags.ExclamationTopology

Reload the UI:

You can now see the new Supervisor and the 6 busy workers out of a total of 8 available ones. Also important to note is that the 6 busy ones have been equally divided among the two Supervisors. Again, click the topology name and scroll down.
We see two unique Supervisor IDs, both running on different nodes, with all our executors pretty evenly divided among them. This is great.

But Storm comes with another nifty way of doing so while the topology is running: rebalancing. On the Nimbus we'd run:

```
storm rebalance exclamation-topology -n 6
```

to go from 3 to 6 workers, or, to change the number of executors for a particular component:

```
storm rebalance exclamation-topology -e even-digit-bolt=3
```

Reliable Message Processing

One question we haven't tackled yet is what happens if a bolt fails to process a tuple. Storm provides us a mechanism by which the originating spout (specifically, the task) can replay the failed tuple. This processing guarantee doesn't just happen by itself; it's a conscious design choice, and it does add latency.

Spouts send out tuples to bolts, which emit tuples derived from the input tuples to other bolts and so on. That one original tuple spurs an entire tree of tuples. If any child tuple, so to speak, of the original one fails, then any remedial steps (rollbacks etc.) may well have to be taken at multiple bolts. That could get pretty hairy, so what Storm does is allow the original tuple to be emitted again right from the source (the spout). Consequently, any operations performed by bolts that are a function of the incoming tuples should be idempotent.

A tuple is considered "fully processed" when every tuple in its tree has been processed, and every tuple has to be explicitly acknowledged by the bolts. However, that's not all: there's another thing to be done explicitly, which is to maintain a link between the original tuple and its child tuples. Storm will then be able to trace the origin of the child tuples and thus be able to replay the original tuple. This is called anchoring, and it has been done in our exclamation bolt:

```java
// ExclamationBolt

// 'tuple' is the original one received from the test word spout.
// It's been anchored to/with the tuple going out.
_collector.emit(tuple, new Values(exclamatedWord.toString()));

// Explicitly acknowledge that the tuple has been processed.
_collector.ack(tuple);
```

The ack call will result in the ack method on the spout being called, if it has been implemented. So, say you're reading the tuple data from some queue, and you can only take it off the queue if the tuple has been fully processed: the ack method is where you'd do that. You can also emit tuples without anchoring:

```java
_collector.emit(new Values(exclamatedWord.toString()))
```

and forgo reliability.

A tuple can fail in two ways:

i) A bolt dies and the tuple times out, or it times out for some other reason. The timeout is 30 seconds by default and can be changed using config.put(Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS, 60).

ii) The fail method is explicitly called on the tuple in a bolt: _collector.fail(tuple). You may do this in case of an exception.

In both these cases, the fail method on the spout will be called, if it is implemented. And if we want the tuple to be replayed, it would have to be done explicitly in the fail method by calling emit, just like in nextTuple(). When tracking tuples, every one has to be acked or failed; otherwise the topology will eventually run out of memory.

It's also important to know that you have to do all of this yourself when writing custom spouts and bolts. But the Storm core can help. For example, a bolt implementing BaseBasicBolt does acking automatically, and built-in spouts for popular data sources like Kafka take care of the queuing and replay logic after acknowledgment and failure.

Parting Shots

Designing a Storm topology or cluster is always about tweaking the various knobs we have and settling where the result seems optimal. There are a few things that'll help in this process, like using a configuration file to read parallelism hints, the number of workers etc., so you don't have to edit and recompile your code repeatedly.
Define your bolts logically, one per indivisible task, and keep them light and efficient. Similarly, your spouts' nextTuple() methods should be optimized.

Use the Storm UI effectively. By default it doesn't show us the complete picture, only 5% of the total tuples emitted. To monitor all of them, use config.setStatsSampleRate(1.0d). Keep an eye on the Acks and Latency values for individual bolts and topologies via the UI; that's what you want to look at when turning the knobs.

Though I did not know Apache Storm, I enjoyed reading your post about it! I like that you explained the topology in a clean way with a really good example of bolts and spouts. This helped me figure out that Apache Storm would fit very well in my upcoming project. So thanks for the clear examples and the good explanation!

Thanks Robin! Hope it served as a nice intro, and good luck on your project.

Great article, thank you.
https://dev.to/usamaashraf/playing-with-apache-storm-on-docker---like-a-boss-4bgb
Hi all,

I'm working on a university senior design team to implement an acoustic feedback cancellation device using the Teensy 4.0 and audio shield. At this point we'd like to use the CMSIS-DSP library, which has an easy-to-use normalized LMS adaptive filter. We've successfully got our Teensy 4.0 modules up and running processing audio, but need to close the gap between the under-the-hood operations of the audio library and the CMSIS-DSP library.

Our main challenge at this point is figuring out how to access the packets of data that stream between Teensy audio objects. It's very easy to connect audio library objects to other audio library objects, but is there a way to store these packets in a buffer instead of sending them straight to another processing or output object? If this is possible, we could "intercept" the packets with our normalized LMS filter, do some processing, and then send the result to the audio output object.

The code below is just a basic sketch that passes audio between the line-in and line-out.
https://forum.pjrc.com/threads/59171-Accessing-128-bit-sample-packets?s=d08912d784ed0087f897b4c7c88232bb
A mini framework of image crawlers

This package is a mini framework of web crawlers. With its modular design, it can be extended conveniently, avoiding some troublesome problems like exception handling, thread scheduling and communication. It also provides built-in crawlers for popular image sites like Flickr and search engines such as Google, Bing and Baidu. If you want to add your own crawlers to the built-ins, pull requests are welcome.

Requires Python 2.7+ or 3.4+ (recommended).

Using the built-in crawlers is very simple:

```python
from icrawler.builtin import GoogleImageCrawler

google_crawler = GoogleImageCrawler(parser_threads=2, downloader_threads=4,
                                    storage={'root_dir': 'your_image_dir'})
google_crawler.crawl(keyword='sunny', max_num=1000,
                     date_min=None, date_max=None,
                     min_size=(200, 200), max_size=None)
```

Writing your own crawlers with this framework is also convenient; see the tutorials.

A crawler consists of 3 main components (Feeder, Parser and Downloader), connected with each other by FIFO queues. The workflow is shown in the following figure. The feeder, parser and downloader are all thread pools, so you can specify the number of threads they use.
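The feeder-to-parser-to-downloader pipeline can be sketched with plain queue.Queue objects and threads. This is only an illustration of the architecture described above, not icrawler's actual API (the run_pipeline name and the fake URLs are made up):

```python
import queue
import threading

def run_pipeline(pages):
    url_q, task_q, results = queue.Queue(), queue.Queue(), []

    def feeder():
        # Feeder: push page URLs into the first FIFO queue.
        for p in pages:
            url_q.put(p)
        url_q.put(None)  # sentinel: no more pages

    def parser():
        # Parser: turn each page into image-download tasks.
        while (p := url_q.get()) is not None:
            for i in range(2):  # pretend each page lists 2 images
                task_q.put(f"{p}/img{i}.jpg")
        task_q.put(None)

    def downloader():
        # Downloader: a real one would fetch and store each file here.
        while (t := task_q.get()) is not None:
            results.append(t)

    threads = [threading.Thread(target=f) for f in (feeder, parser, downloader)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run_pipeline(["page1", "page2"]))
```

Each stage runs in its own thread (icrawler uses thread pools per stage), and the sentinel value shuts the pipeline down cleanly.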
https://pypi.org/project/icrawler/
This follows up on the old post "Hello world for bare metal ARM using QEMU".

The ARM926 is able to detect and manage a certain number of exceptions, for example:

- "undefined" exception: when the core tries to execute an instruction that is not mapped in the instruction set
- data abort: when the system bus reports an error while trying to access one of the peripherals
- IRQ: the most common exception is the arrival of an interrupt request (IRQ) from a peripheral.

When an exception occurs, the ARM926 core changes its operating "mode" and jumps to the beginning of the memory, with an offset from address 0 that depends on the exception. For example, when an "undefined" exception happens, the core jumps to address 4. By placing instructions at those addresses, it is possible to manage these exceptions with custom functions. The common way is to place a "jump" at each of those addresses, creating an "exception vector table" (where the vector is the jump instruction).

The following is an assembly file called "vectors.S" that shows an example of a vector table and reset handler, which contains the minimal initialization that needs to be done before jumping to C code.

.text
.code 32

.global vectors_start
.global vectors_end

vectors_start:
    LDR PC, reset_handler_addr
    LDR PC, undef_handler_addr
    LDR PC, swi_handler_addr
    LDR PC, prefetch_abort_handler_addr
    LDR PC, data_abort_handler_addr
    B .
    LDR PC, irq_handler_addr
    LDR PC, fiq_handler_addr

reset_handler_addr: .word reset_handler
undef_handler_addr: .word undef_handler
swi_handler_addr: .word swi_handler
prefetch_abort_handler_addr: .word prefetch_abort_handler
data_abort_handler_addr: .word data_abort_handler
irq_handler_addr: .word irq_handler
fiq_handler_addr: .word fiq_handler

vectors_end:

reset_handler:
    /* set Supervisor stack */
    LDR sp, =stack_top
    /* copy vector table to address 0 */
    BL copy_vectors
    /* get Program Status Register */
    MRS r0, cpsr
    /* go in IRQ mode */
    BIC r1, r0, #0x1F
    ORR r1, r1, #0x12
    MSR cpsr, r1
    /* set IRQ stack */
    LDR sp, =irq_stack_top
    /* Enable IRQs */
    BIC r0, r0, #0x80
    /* go back in Supervisor mode */
    MSR cpsr, r0
    /* jump to main */
    BL main
    B .

.end

Some details about the assembly code:

The vector table must be placed at address 0, but when the program is executed, it is possible that it is not loaded at the beginning of the memory. For example QEMU loads the binary code at address 0x10000. For this reason the vector table needs to be copied before being useful. This is done by using global symbols to mark the beginning (vectors_start) and end (vectors_end) of the vectors area, and then using a function (copy_vectors, implemented below in the C code) to copy it to the correct location.

When the core receives an exception it changes operating mode, and this means (among other things) that it uses a different stack pointer. For this reason, before enabling interrupts it is necessary to configure the stack for the modes that we intend to use. The operating mode can be changed manually by accessing the Program Status Register (cpsr), which must also be used to enable IRQs. More information is in the ARM9EJ-S Technical Reference Manual.

I want to use the UART as a simple peripheral that uses IRQ to function. The Versatile manual indicates that an Interrupt Controller is used to manage the various IRQs.
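The BIC/ORR sequence in reset_handler can be checked with ordinary integer arithmetic. The Python sketch below models those bit operations; the mode numbers (0x13 = Supervisor, 0x12 = IRQ) and the I bit (0x80) come from the ARM architecture documentation, while the 0xD3 starting value is just a typical post-reset example (Supervisor mode, IRQ and FIQ disabled).

```python
# Model of the CPSR bit twiddling done in reset_handler.
# The mode field is cpsr[4:0]; bit 7 (0x80) is the I bit, which
# disables IRQs when set.

MODE_MASK = 0x1F
MODE_SVC = 0x13
MODE_IRQ = 0x12
I_BIT = 0x80

cpsr = 0xD3  # typical value after reset: SVC mode, IRQ+FIQ disabled

# BIC r1, r0, #0x1F ; ORR r1, r1, #0x12  -> switch to IRQ mode
irq_cpsr = (cpsr & ~MODE_MASK) | MODE_IRQ
assert irq_cpsr & MODE_MASK == MODE_IRQ

# BIC r0, r0, #0x80  -> clear the I bit so IRQs are enabled
enabled_cpsr = cpsr & ~I_BIT
assert enabled_cpsr & I_BIT == 0
# the mode bits are untouched by clearing I
assert enabled_cpsr & MODE_MASK == MODE_SVC

print(hex(irq_cpsr), hex(enabled_cpsr))
```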
The following manuals are needed to understand what to do:

- RealView Platform Baseboard for ARM926EJ-S User Guide
- PrimeCell UART (PL011) Technical Reference Manual
- PrimeCell Vectored Interrupt Controller (PL190) Technical Reference Manual

In order to enable the IRQ for the UART, the interrupt must be enabled in three places:

- The IRQs must be enabled in the ARM cpsr
- The UART IRQ must be enabled in the Interrupt Controller
- The interrupt generation must be enabled in the UART registers for the chosen events

I will use the "RX" event to fire an interrupt that manages the arrival of a byte from the UART, and then send it back as a sort of "echo". The following "test.c" file shows how to do it:

#include <stdint.h>

#define UART0_BASE_ADDR 0x101f1000
#define UART0_DR (*((volatile uint32_t *)(UART0_BASE_ADDR + 0x000)))
#define UART0_IMSC (*((volatile uint32_t *)(UART0_BASE_ADDR + 0x038)))

#define VIC_BASE_ADDR 0x10140000
#define VIC_INTENABLE (*((volatile uint32_t *)(VIC_BASE_ADDR + 0x010)))

void __attribute__((interrupt)) irq_handler() {
    /* echo the received character + 1 */
    UART0_DR = UART0_DR + 1;
}

/* all other handlers are infinite loops */
void __attribute__((interrupt)) undef_handler(void) { for(;;); }
void __attribute__((interrupt)) swi_handler(void) { for(;;); }
void __attribute__((interrupt)) prefetch_abort_handler(void) { for(;;); }
void __attribute__((interrupt)) data_abort_handler(void) { for(;;); }
void __attribute__((interrupt)) fiq_handler(void) { for(;;); }

void copy_vectors(void) {
    extern uint32_t vectors_start;
    extern uint32_t vectors_end;
    uint32_t *vectors_src = &vectors_start;
    uint32_t *vectors_dst = (uint32_t *)0;
    while(vectors_src < &vectors_end)
        *vectors_dst++ = *vectors_src++;
}

void main(void) {
    /* enable UART0 IRQ */
    VIC_INTENABLE = 1<<12;
    /* enable RXIM interrupt */
    UART0_IMSC = 1<<4;
    for(;;);
}

The main code enables the interrupt and then waits forever.
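As a quick aside, the life cycle of the RX interrupt that irq_handler relies on can be modeled in a few lines of plain Python. This toy class is not QEMU's or the real PL011's implementation; it only illustrates the documented behavior that the RX interrupt stays pending until the data register is read.

```python
# Toy model of the PL011 RX interrupt (not the real peripheral):
# receiving a byte asserts the IRQ, and the IRQ is dropped once
# the data register has been read and the RX FIFO is empty.

class ToyPL011:
    def __init__(self):
        self.rx_fifo = []
        self.rx_irq = False

    def receive(self, byte):
        # a character arrives on the wire
        self.rx_fifo.append(byte)
        self.rx_irq = True

    def read_dr(self):
        # reading DR pops the FIFO and drops the IRQ when empty
        byte = self.rx_fifo.pop(0)
        if not self.rx_fifo:
            self.rx_irq = False
        return byte

    def write_dr(self, byte):
        # transmitting has no effect on the RX interrupt
        pass

uart = ToyPL011()
uart.receive(ord('x'))

# A handler that only writes leaves the IRQ pending.
uart.write_dr(ord('a'))
assert uart.rx_irq

# A handler that reads DR (like the echo handler) clears it.
c = uart.read_dr()
uart.write_dr(c + 1)
assert not uart.rx_irq
print('irq cleared:', not uart.rx_irq)
```

This is exactly why the echo handler, which reads UART0_DR, is called once per received character instead of looping forever.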
When a character is received from the UART, the IRQ is fired and the irq_handler function is called, transmitting back the modified character.

In order to create the complete binary code, we need a linker script that is aware of the memory map of the system. In our case QEMU loads the code at address 0x10000. The following is the linker script "test.ld" that is used to link the complete program:

ENTRY(vectors_start)
SECTIONS
{
    . = 0x10000;
    .text : {
        vectors.o
        *(.text .rodata)
    }
    .data : { *(.data) }
    .bss : { *(.bss) }
    . = ALIGN(8);
    . = . + 0x1000; /* 4kB of stack memory */
    stack_top = .;
    . = . + 0x1000; /* 4kB of irq stack memory */
    irq_stack_top = .;
}

To compile the program I used the CodeSourcery bare metal toolchain, but the commands can be adapted to work with other GCC toolchains such as the Emdebian ones. The commands are the following:

arm-none-eabi-gcc -mcpu=arm926ej-s -c -o test.o test.c
arm-none-eabi-gcc -mcpu=arm926ej-s -c -o vectors.o vectors.S
arm-none-eabi-gcc -T test.ld test.o vectors.o -o test
arm-none-eabi-objcopy -O binary test test.bin

This creates a "test.bin" binary file that contains our code. To simulate the program, the command to launch QEMU is the following:

qemu-system-arm -M versatilepb -serial stdio -kernel test.bin

The "-serial stdio" option will redirect the terminal input/output to the emulated UART that we want to test. If you type some letters in the terminal where you launched the command, you will see them echoed back to you, modified by the interrupt handler.

Possible next steps from here are:

- Managing different sources of interrupt from the same peripheral
- Managing IRQs from different peripherals
- Dynamically remapping the exception handlers
- Fully using the features of the Vectored Interrupt Controller
- Enabling nested interrupts

Here is a guide that contains much information about it: Building bare metal ARM with GNU [html]

冀博 2012/07/25
Thanks. I learn you from much things.
linux player 2012/08/12
This article is great. Thank you very much for sharing it. I am trying to use qemu for full board emulation. I wonder if you have any idea how to emulate GPIO input signals. Thanks in advance! Jim

Balau 2012/08/13
I don't think QEMU is able to emulate GPIO connections to the external world. In particular, as you can see from the GPIO emulation source code, there's a "FIXME: Implement input interrupts" line.

Manu 2012/08/21
Thanks for the nice post. In test.c, line 36 (UART0_IMSC = 1<<4) is mentioned as enabling the interrupts. But when I checked the PrimeCell UART (PL011) Technical Reference Manual, UART0_IMSC is the interrupt MASK register. So setting the value to 1 will mask the interrupts, isn't it?

Balau 2012/08/21
The mask is a set of bits that is put in bitwise AND with the interrupt source. If a bit of the mask is 0, then the corresponding interrupt is "masked out". If a bit of the mask is 1, then the corresponding interrupt can be propagated to the interrupt controller. I agree that "to mask an interrupt" usually means blocking it, so it may not be intuitive.

pavan kumar.m 2012/09/09
An "arm-none-eabi-gcc: command not found" error is showing, so please help.

Balau 2012/09/09
To compile the program I used the CodeSourcery bare metal toolchain, but now it's available only by registration, here: Try Sourcery CodeBench. Otherwise, as an example, you could use the Emdebian or Linaro toolchains, with a similar command line but some additional options to make it work:

arm-linux-gnueabi-gcc -mcpu=arm926ej-s -g -c -o test.o test.c
arm-linux-gnueabi-gcc -mcpu=arm926ej-s -g -c -o vectors.o vectors.S
arm-linux-gnueabi-gcc -T test.ld -nostdlib -Xlinker --build-id=none test.o vectors.o -o test
arm-linux-gnueabi-objcopy -O binary test test.bin

In any case you have to install both an ARM toolchain and qemu-system-arm to make my example work. The way to install them depends on your Linux distribution.
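Balau's explanation of the IMSC mask can be verified with a couple of lines of arithmetic. The bit positions (RXIM at bit 4, TXIM at bit 5) follow the PL011 manual; the rest is an illustrative sketch.

```python
# The PL011 puts its raw interrupt status in bitwise AND with IMSC:
# only interrupts whose mask bit is 1 propagate to the controller.
RXIM = 1 << 4   # receive interrupt bit, as in test.c
TXIM = 1 << 5   # transmit interrupt bit

imsc = RXIM                  # only RX enabled, as in test.c
raw_status = RXIM | TXIM     # both events pending at the peripheral

masked = raw_status & imsc
assert masked == RXIM        # only the RX interrupt propagates
print(bin(masked))
```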
marmottus 2015/03/10
Hi Balau, I just tried to change the irq_handler to print something else than the entered character+1, and my program went into an infinite loop of calls to irq_handler. I am using this function and now the screen is filled with 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.....'.

void __attribute__((interrupt("IRQ"))) irq_handler(void)
{
    UART0_DR = 'a';
}

Any idea why I get this behaviour? Thank you 🙂

Balau 2015/03/12
In the PL011 technical reference manual there's an explanation of the RX interrupt, where they say that the interrupt is cleared by reading the data or by clearing the interrupt (with the UARTICR register). In my case the interrupt is cleared because I read the data register; in your case you don't do it, so the RX interrupt is still pending and irq_handler is called again. Note that QEMU might not implement exactly the behavior of the ARM PL011 UART peripheral. If you want you can take a look at QEMU's source code.

marmottus 2015/03/13
Thanks a lot, I've read the documentation of the PL011. It indeed works when we access the DR register, but when I cleared the interrupt via the ICR, the interrupt never appeared again. I guess QEMU didn't implement it completely, but at least I get the desired effect by accessing the DR, and I can return to the normal state afterwards without staying in IRQ mode. Thank you ☺

Michael Rupp 2015/05/31
I'm receiving some linker errors when trying to link the object files. I copied the text of the files verbatim. I'm running Windows 7. Could I be missing a library?
c:/program files (x86)/gnu tools arm embedded/4.9 2015q1/bin/../lib/gcc/arm-none-eabi/4.9.3/../../../../arm-none-eabi/lib/crt0.o: In function `_start':
(.text+0x104): undefined reference to `__bss_start__'
(.text+0x108): undefined reference to `__bss_end__'
c:/program files (x86)/gnu tools arm embedded/4.9 2015q1/bin/../lib/gcc/arm-none-eabi/4.9.3/../../../../arm-none-eabi/lib\libc.a(lib_a-exit.o): In function `exit':
exit.c:(.text.exit+0x2c): undefined reference to `_exit'
collect2.exe: error: ld returned 1 exit status

Michael Rupp 2015/05/31
I was able to fix my linker errors above. I found a solution here. It had to do with adding the bss start and end symbols to the linker script. Here is the script I re-wrote:

ENTRY(vectors_start)
SECTIONS
{
    . = 0x10000;
    .text : { vectors.o *(.text .rodata) }
    .data : { *(.data) }
    .bss : {
        __bss_start__ = .;
        *(.bss)
        __bss_end__ = .;
        _exit = .;
    }
    . = ALIGN(8);
    . = . + 0x1000; /* 4kB of stack memory */
    stack_top = .;
    . = . + 0x1000;
    irq_stack_top = .;
}

It may just be an issue with the gcc toolchain for Windows, so this may only be a problem linking with that version of the toolchain.

Michael Rupp 2015/05/31
FYI the "bss_start" was truncated when I posted above. Make sure you add the underscores per the reference page.

Balau 2015/06/01
Yes, the linker script and startup assembly file are usually tightly coupled with the toolchain that is used. I see you are using GCC ARM Embedded (which is a very solid choice for the time being), so the script, assembly and command line can vary from what I wrote above (which was for the CodeSourcery toolchain). You could have also tried one of "-nostdlib", "-nodefaultlibs" or "-nostartfiles" in the linking step. Also, adding "-Xlinker -Map=main.map" can generate a map file with useful information about what the linking step is doing, even if it fails.
Michael Rupp 2015/06/02
Thanks for the linker flags. I will definitely keep those in mind. Now I will look into creating a simple menu and CLI using what I have learned here.

safa 2016/04/11
hello balau, have u any idea how to configure and compile qemu on windows? im using mingw64 and cygwin but i'm facing lots of problems relating to package dependencies

Balau 2016/04/11
I would first try the precompiled binaries from here or here, links that I found in the QEMU wiki. Then I would try to ask them, because they managed to compile it.

juanma2268 2016/11/26
Hi Balau, I was able to compile the solution using Linaro and the "-nostdlib -Xlinker --build-id=none" workaround. But when I launch the emulator I get into the QEMU console instead of having the modified echoed strings:

QEMU 2.0.0 monitor - type 'help' for more information
(qemu) pulseaudio: set_sink_input_volume() failed
pulseaudio: Reason: Invalid argument
pulseaudio: set_sink_input_mute() failed
pulseaudio: Reason: Invalid argument
a
unknown command: 'a'
(qemu) aaa
unknown command: 'aaa'
(qemu) bbbbb
unknown command: 'bbbbb'

Any insights would be highly appreciated. Thanks!

Balau 2016/12/04
Try also this example on serial ports: I don't think it is related to the guest program that you are running, and more to do with launching QEMU and its options.
https://balau82.wordpress.com/2012/04/15/arm926-interrupts-in-qemu/
/10/2011 at 16:35, xxxxxxxx wrote:
I need an example how to do the exact same thing as the default Python Generator Object but as an ObjectGenerator PyPlugin. That is, return a default Cube. No extra settings or anything. I've copied the "DoubleCircle" py example as I want to make a spline generator. Have my own ID for it. I have it all working in a PythonGenerator but can't for the life of me get it to return anything in a Plugin object Generator! So to get the basics of Object Generators (I've only been doing Tag plugins till now) it would be of great help to see a working example of returning a default object.
Cheers
Lennart

On 13/10/2011 at 18:51, xxxxxxxx wrote:
Here's a very simple example of an object plugin that generates a rectangle spline with a couple of size controls:
-ScottA

On 13/10/2011 at 19:51, xxxxxxxx wrote:
Thanks Scott. I'll look into it a bit more but I'm confused as your setup generates a child object. What I need is the Generator to be the intended result itself, just as the PythonGenerator.

On 13/10/2011 at 20:09, xxxxxxxx wrote:
I'm not sure if this is what you want or not. But if you remove or comment out this line:

ret.InsertUnder(op)

Then the only object that will be created will be the generator object. Like if you used the Python Generator object. I'm not very experienced with object plugins myself.

On 13/10/2011 at 22:09, xxxxxxxx wrote:
The Python Generator object evaluates the code within c4d.plugins.ObjectData.GetVirtualObjects(). You just have to copy paste your code from the Python Generator to your Python Plugin, as if the GetVirtualObjects() method was your main function.

import c4d
from c4d.plugins import ObjectData

class MyObjectPlugin(ObjectData):

    def GetVirtualObjects(self, op, hh):
        """ Overrides c4d.plugins.ObjectData.GetVirtualObjects() """
        # paste your code here, for example:
        return c4d.BaseObject(c4d.Ocube)

Cheers,

On 14/10/2011 at 17:12, xxxxxxxx wrote:
Thanks, Hm.
I can use GetVirtualObjects() and I now get a visible object. If I create a BaseObject like Ocube it is recognized by e.g. a Cloner. However, if I create an Osplineobject or SplineObject (and set its points where I want them) I also get a visible object, but it is not seen as a spline by a Cloner or an AlignToSpline tag. It is recognized by a SweepNurb. Running MakeEditable or CurrentStateToObject I do get the correct spline in return.

OK, so looking at the Py_DoubleCircle example, it does not use GetVirtualObjects() but GetContour() instead. And it is seen by a Cloner etc. But what is it in that example that makes that happen? When I use my "functional" spline code I don't get any errors but no spline is generated. Doing MakeEditable I get a Null Object in return. What is it in GetContour that is important? My plugin is based on the Py example and all I do is create a simple spline with no settings at all, no Draw, Handles or anything.

On 14/10/2011 at 17:18, xxxxxxxx wrote:
Oh, the reason why I'm trying the Object Plugin way is that the setup is running using the Python Generator Object and it has the same issue: a returned spline is not seen by Cloners, AlignToSpline etc.

On 15/10/2011 at 10:20, xxxxxxxx wrote:
is it? put in PyGenerator and make editable

import c4d
#Welcome to the world of Python

def main():
    spl = c4d.SplineObject(2, 0)
    spl.SetPoint(0, c4d.Vector(0.0))
    spl.SetPoint(1, c4d.Vector(0.0, 1000.0, 0.0))
    spl.Message(c4d.MSG_UPDATE)
    return spl

On 15/10/2011 at 14:52, xxxxxxxx wrote:
Hi Ilya. I need to have the PyGenerator seen as a spline by a Cloner etc. -without- making it editable, the same way as a cube generated in the PyGenerator is seen by a Cloner. The PyDoubleCircle example manages that and I can't figure out why.

On 15/10/2011 at 16:22, xxxxxxxx wrote:
In C++ we need to use the "virtual SplineObject* GetContour()" method as part of the class. If it's left out, the spline does not even draw. This does not seem to be the case in Python.
Per the C++ SDK, GetContour() is used for generating splines. AFAIK it's not needed to generate other types of objects. This is just a wild guess, but it looks like the way the Python version is ported from C++ is allowing us to generate splines in Python without using GetContour(). Which might be a bad practice. My guess is that this is not the "proper" way to build a spline generator, and will cause things like the Align-to-Spline tag not to work with it. This is just a wild guess based on my experience with it in C++.

On 16/10/2011 at 02:56, xxxxxxxx wrote:
Hello Lennart. I tried to put a spl-pygen object; the Cloner draws it normally, without returning a base object (make editable) as described. With more complex code, the PyGen failed at drawing the object if it is in a Cloner/MoGraph object. Also try Nux95's DrawApi helpers, thanks a lot to Niklas! Niklas, can you make a sample with DrawLines if it's possible? I applied it in a sample project a while ago... but lost it....

On 16/10/2011 at 09:43, xxxxxxxx wrote:
Maybe I need to be even more clear.

Make a PyGenerator generating a spline, i.e.

return c4d.BaseObject(c4d.Osplinecircle)

Then put the PyGenerator in a Cloner as Object source: it does not allow me to clone stuff on the PyGenerator, as the Cloner does not see the PyGenerator as a spline. Put the PyDoubleCircle in the Cloner instead as Object source and I can clone. The issue really comes down to this: I can't see what in the PyDoubleCircle example makes it work. (Other than it's using GetContour())

On 16/10/2011 at 10:22, xxxxxxxx wrote:
seems it's not possible, unless you optionally add new functions presenting the PyGen like a spline object... like a checkbox with a GetContour dependency

On 16/10/2011 at 18:42, xxxxxxxx wrote:
Update. I've actually got it working (as a plugin), stripping down the PyDoubleCircle example bit by bit. I'll post the barebone working code this week with a new question, as it became a new issue how to handle updating it.
My guess is it's about GetCache(), Get/SetDirty(), all new territory for me. Thanks till later!
Lennart

On 24/10/2011 at 07:59, xxxxxxxx wrote:
Hello. There is an interesting and unstable (need to glue to the matrix of the object) test

On 24/10/2011 at 09:04, xxxxxxxx wrote:
That's a good twist, adding the Cloner link via the script, nice. Don't need to do the CSTO either really.

On 24/10/2011 at 09:18, xxxxxxxx wrote:
"Don't need to do the CSTO either really."
You're right! Thanks a lot. I played with M. Zabiegly's pointcloud sample (big thanks for his xpresso tutorials), trying to get the cache from procedural objects (Lennart, I made the same 3d fashion sample as you did) and tried to apply the same CSTO. For CSTO and additional py-snippets I want to say thanks to Sebastien, Niklas, Scott...

On 26/10/2011 at 08:40, xxxxxxxx wrote:
I have a question: does this function replicate GetContour() - BaseObject.GetRealSpline?

On 26/10/2011 at 11:14, xxxxxxxx wrote:
Not really, I'm afraid. But I've come a long way linking Cloners, MoSplines etc. from within the plugin as per the example you showed earlier. This way basically any virtual object within the generator can be picked.
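The GetCache()/GetDirty() machinery mentioned here boils down to a dirty-count caching pattern: rebuild the generator's output only when an input's dirty counter has changed since the last build. The following is a generic Python sketch of that pattern, not the real Cinema 4D API; the class and method names are invented for illustration.

```python
# Generic sketch of dirty-count caching (hypothetical names, not
# the real c4d API): the generator keeps its last output and the
# dirty counter it was built from, and rebuilds only on change.

class CachingGenerator:
    def __init__(self):
        self._cache = None
        self._last_dirty = -1
        self.rebuilds = 0

    def get_virtual_objects(self, input_dirty, build):
        # inputs unchanged since last build: reuse the cache
        if self._cache is not None and input_dirty == self._last_dirty:
            return self._cache
        self._last_dirty = input_dirty
        self._cache = build()
        self.rebuilds += 1
        return self._cache

gen = CachingGenerator()
gen.get_virtual_objects(0, lambda: 'spline-v0')
gen.get_virtual_objects(0, lambda: 'spline-v0')        # cache hit
out = gen.get_virtual_objects(1, lambda: 'spline-v1')  # input edited
print(gen.rebuilds, out)
```

In the real API the dirty counter would come from something like the input object's dirty state, and the build step would be the expensive spline construction.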
https://plugincafe.maxon.net/topic/6058/6220_basic-pygenerator-plugin-please

On 18/11/2011 at 09:20, xxxxxxxx wrote:

Originally posted by xxxxxxxx
"Also try Nux95's DrawApi helpers, thanks a lot to Niklas! Niklas, can you make a sample with DrawLines if it's possible? I applied it in a sample project a while ago... but lost it...."

I'm sorry, haven't ever read this post before. Can you explain more what 'ya need?
PS: My latest achievement in the DrawApi module. (Animations)
CC-MAIN-2021-39
refinedweb
1,397
74.08