SYNOPSIS
#include <nng/nng.h>
#include <nng/supplemental/http/http.h>

int nng_http_handler_collect_body(nng_http_handler *handler, bool want, size_t maxsz);
DESCRIPTION
The nng_http_handler_collect_body() function causes the handler to collect any request body that was submitted with the request and attach it to the nng_http_req before the handler is called. The handler can subsequently retrieve the data from the request with the nng_http_req_get_data() function.
The collection is enabled if want is true. The amount of data that the client may send is limited by the value of maxsz. If the client attempts to send more than maxsz bytes, the request will be terminated with a 400 “Bad Request” status.
To allow an unlimited size, use (size_t)-1 for maxsz. A maxsz of 0 can be used to prevent the client from passing any data.
The built-in handlers for files, directories, and static data set maxsz to zero by default. For other handlers, this capability is enabled by default with a maxsz of 1 megabyte.
RETURN VALUES
This function returns 0 on success, and non-zero otherwise.
PostgreSQL 9.1 released
Posted Sep 12, 2011 16:36 UTC (Mon) by cmorgan (guest, #71980) [Link]
PostgreSQL 9.1 released
Posted Sep 13, 2011 1:24 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]
PostgreSQL 9.1 released
Posted Sep 13, 2011 11:06 UTC (Tue) by Lennie (guest, #49641) [Link]
In all other cases your data is lost (well, it will truncate the table on startup).
PostgreSQL 9.1 released
Posted Sep 13, 2011 17:20 UTC (Tue) by dlang (subscriber, #313) [Link]
with unlogged tables in postgres the table does not need to be held in memory at all times.
in both cases you do not have safety if the database crashes, but there are many cases where this is acceptable (a database of currently logged in sessions and associated data for example, if you crash people just login again and it's not a major problem)
It would be good to see benchmarks of this, I would expect that for the trivial queries that memcache can handle it will beat postgres, the question is by how much.
remember that unlogged tables still have all the power of postgres, they aren't just a key-value pair like memcached.
PostgreSQL 9.1 released
Posted Sep 13, 2011 19:22 UTC (Tue) by Lennie (guest, #49641) [Link]
But maybe it is good to have more choice.
There is a lot of choice at the moment.
PostgreSQL 9.1 released
Posted Sep 13, 2011 22:07 UTC (Tue) by dlang (subscriber, #313) [Link]
instead this is a way to create temporary tables that you can re-generate easily and make them faster by doing away with some of the safety.
you still have all the power of SQL, including transaction related features. the only thing that you are giving up is safety in the face of a server crash. If you have a table that you can either reload from disk, or recreate from other data, getting a speed increase in using it can be worth the hassle of recreating it if the system crashes.
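dlang's point about keeping full SQL power while giving up crash safety can be sketched with SQLite, which exposes a similar durability-for-speed knob (this is only an analogy to unlogged tables, not Postgres itself; the sessions table is made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Trade crash safety for speed, roughly what an unlogged table buys you:
# no durable journal, but the full SQL feature set is still there.
db.execute("PRAGMA journal_mode = MEMORY")
db.execute("PRAGMA synchronous = OFF")
db.execute("CREATE TABLE sessions (user_id INTEGER, token TEXT)")
db.execute("INSERT INTO sessions VALUES (1, 'abc123')")
# Joins, transactions and indexes all still work on this table.
count = db.execute("SELECT count(*) FROM sessions").fetchone()[0]
print(count)  # 1
```

If the process dies, the data is gone, which is exactly the deal described above: acceptable for session caches, unacceptable for the ledger.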
SEPostgres
Posted Sep 12, 2011 16:52 UTC (Mon) by dpquigl (guest, #52852) [Link]
There are many really cool uses for SEPostgres that are not government related at all. I had met with Stephen Frost (Postgres Security Guy) and a bunch of the other Postgres developers in the last year to help make Kaigai's case for SEPostgres acceptance. In those meetings we came up with use cases like PCI compliance (credit card processing) and HIPAA compliance as possible commercial applications of this technology. There are even use cases for this technology in web applications. Kaigai's work on SELinux+ for Apache allows him to control database access based on individual sites (look up Kaigai's LAPP work). That usage is pretty cool because it means a vulnerability in one web application won't allow it to start reading tables of another web application or start creating tables and inserting data.
SEPostgres
Posted Sep 12, 2011 18:05 UTC (Mon) by Pc5Y9sbv (guest, #41328) [Link]
SEPostgres
Posted Sep 12, 2011 19:31 UTC (Mon) by dpquigl (guest, #52852) [Link]
SEPostgres
Posted Sep 12, 2011 22:30 UTC (Mon) by SEJeff (subscriber, #51588) [Link]
Remember, security is like an onion. It works best in layers and tastes pretty good fried.
SEPostgres
Posted Sep 12, 2011 22:50 UTC (Mon) by nix (subscriber, #2304) [Link]
The more I look at that analogy the better it gets.
SEPostgres
Posted Sep 12, 2011 23:09 UTC (Mon) by jamesmrh2 (guest, #31680) [Link]
I thought they were good for warding off evil.
SEPostgres
Posted Sep 12, 2011 23:27 UTC (Mon) by Pc5Y9sbv (guest, #41328) [Link]
The "owner" of the database (or namespace), who would set RBAC policies, would be the sys op. The sys op would also be the one creating the SEPostgres policies. I think this is an important pragmatic fact.
SEPostgres
Posted Sep 13, 2011 12:49 UTC (Tue) by SEJeff (subscriber, #51588) [Link]
Hi, I think you're still completely missing the point of MAC. MAC does *not* replace DAC. MAC complements DAC. With SELinux in a full MLS strict enforcing mode, you can pretty much eliminate the concept of "root" or "sys op" entirely. What if someone in "rogue webapp A" manages to exploit a flaw and install a shell on your webserver and somehow gets root privileges with said shell? With SELinux that "root" shell could be as good as worthless and would still disallow access to data from "secure webapp B" in the proper enforcing mode. Please do take some time and try to understand the subtle differences between DAC and MAC. It takes awhile to wrap your head around.
You're thinking far too coarse, where MAC lets you do things much much more fine-grained. As an SELinux fan, I also use POSIX ACLs quite a bit for scary permissioning problems. Does that mean that ACLs should replace standard DAC? Nope. It should be used as a means to complement and further secure what already exists.
DAC vs. MAC in SEPostgres
Posted Sep 13, 2011 15:47 UTC (Tue) by Pc5Y9sbv (guest, #41328) [Link]
I don't think "discretionary" vs. "mandatory" is a binary distinction when we're talking about fine-grained database resources not managed by the kernel. I still don't see how Postgres RBAC is any less mandatory than SEPostgres when it comes to isolating databases, schema, or tables between application tenants.
If I can subvert an application (database client process) I am not able to override the RBAC nor the SEPostgres policies being enforced on database resources. But, if I can subvert all per-site instances of the multi-tenant application, i.e. due to an application exploit rather than a site-specific configuration flaw, I can break down the per-site isolation to access whichever tenant's data I want, simply by controlling the right database client instance that already has privileges granted by RBAC and SEPostgres. Also, if I manage to subvert the database manager process, it seems I would be equally able to suppress the RBAC or the SEPostgres enforcement within that process to access database resources.
What am I missing here?
DAC vs. MAC in SEPostgres
Posted Sep 13, 2011 16:13 UTC (Tue) by dpquigl (guest, #52852) [Link]
DAC vs. MAC in SEPostgres
Posted Sep 13, 2011 16:46 UTC (Tue) by raven667 (subscriber, #5198) [Link]
Just to be clear, for a sysadmin managing an SELinux system it isn't that hard to work with and so it very likely provides a useful protection that is worth the effort. For the sysadmin who just needs to tweak policies or write rules for home-grown software I did not find it nearly as difficult as one would be led to believe. I think more effort has been spent in complaining about it than in learning how to use it.
DAC vs. MAC in SEPostgres
Posted Sep 13, 2011 16:19 UTC (Tue) by dpquigl (guest, #52852) [Link]
DAC vs. MAC in SEPostgres
Posted Sep 13, 2011 17:24 UTC (Tue) by Pc5Y9sbv (guest, #41328) [Link]
If I attack the application, I might subvert just one instance, for example by hijacking the web client that has site-level "super user" privileges to access data I shouldn't see or by using a code exploit that depends on invalid/unsafe data stored in that application instance. Neither RBAC nor SEPostgres helps here, because it still looks like the application code accessing its normal database content.
If I find an exploit in the web application code, not depending on the site-specific data content, I can repeat the attack on whichever site I want, and I can also access whatever data that Postgres would normally supply to that application instance. This is no different than if the "site" instances were each on physically separate servers with completely independent Postgres servers too. We see these sorts of exploits published all the time, due to the many layers of libraries and frameworks used for convenience on the web.
To do fine-grained restrictions, you would need to assign roles and/or context to each remote web user. This is a bootstrapping problem of getting from Internet-originated TCP stream containing an HTTP request to a restricted context suitable for evaluating the request. This means trusting Apache httpd and whatever web application components are involved in authentication and context establishment, much as we trust sshd today in establishing SE-Linux context for remote users.
To make SEPostgres "more mandatory", it would require pushing the access enforcement back into the kernel, e.g. by applying different SE-Linux contexts to different filesystem objects storing portions of the data, and having the query engine run in a more limited context for each client/query and making it tolerate refusals to access some elements of the data store. For example, each table is a separate file that could be protected to different levels, but you'd have to change the storage format to support column or row-level protections. You could then have the query engine act as if denied data does not exist. But this would probably require relaxing certain data integrity constraints, since they require a unified, global view of the existing data to validate them. But, you are still trusting parts of Postgres to act like sshd in establishing the right process context for clients.
DAC vs. MAC in SEPostgres
Posted Sep 13, 2011 19:45 UTC (Tue) by dpquigl (guest, #52852) [Link]
Your 5th paragraph is an interesting idea but would require major rearchitecting of the database to do that. The idea at the base of it though is: should the database server be in the TCB? If you can have it be in the TCB then having it act as the object manager is acceptable. I don't think however that we're trusting Postgres to establish the process context. In this case the context is still coming from the kernel. Postgres uses a function called getpeercon which has the kernel return the context of the user on the other end of the connecting socket. In the case of a CLI program it's the program's context; in the case of Apache it's the worker thread context. From there Postgres uses a call in libselinux that takes this context and the label on the database object in question and gives it to the kernel to make an access control decision. After that it is up to Postgres to enforce the decision.
Your design makes me think that you've worked on MLS databases before (I'm actually wondering who you are) since it would allow you to do separation of different sensitivity levels and compartments. There are people who currently try to do that with Postgres and what they have encountered is that once you start developing the lattice you get a ton of servers each with their own contexts. Instead by making Postgres be the object manager and assigning labels to the database objects themselves it allows more relevant decisions to be made. Like whether this Top Secret trusted procedure can be executed by a secret level process. Or another example that a secret process can run a trusted procedure that has access to TS data because the procedure is known to scrub it down to a S level. The idea is that by virtue of being a database Postgres has better knowledge about the objects in itself. It definitely is better suited to make decisions than the kernel is. So we make it what's called a userspace object manager. The kernel still gets to make the access control decision however the userspace object manager enforces it.
The reason the kernel can still make these decisions is because of the Flask architecture, which separates the policy from the mechanism. Inside Flask the policy exists as a set of masks which say: if I'm given identifier a and it wants to act on an object of a certain type with identifier b, here are the permissions it has (which is a bitmask). It knows nothing about what the object classes are for the user space components. It just knows that user space asked it for a decision based on 3 integers and a bitmask.
SEPostgres
Posted Sep 13, 2011 15:59 UTC (Tue) by foom (subscriber, #14868) [Link]
No, if someone in "rogue webapp A" manages to exploit a flaw to get root, *that same flaw* can be used to become "super-root", bypassing all SELinux access control. SELinux *does not* protect you if the attacker has a kernel exploit.
SEPostgres
Posted Sep 13, 2011 16:03 UTC (Tue) by SEJeff (subscriber, #51588) [Link]
See the linked post for an example of why "root" is not so powerful in a full MLS SELinux environment.
SEPostgres
Posted Sep 13, 2011 1:29 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]
(PS: and yes, I have actual military experience)
SEPostgres
Posted Sep 13, 2011 4:57 UTC (Tue) by dpquigl (guest, #52852) [Link]
1) In the entire history of SELinux there probably haven't even been 80 people that have worked on it.
2) If you take the R&D dollars put into SELinux I guarantee companies such as Microsoft and Sun have spent far more on their security products.
3) I'd like a real example of this rather than baseless claims.
4) Again examples instead of baseless claims.
There will always be people who don't like SELinux for some reason or another. However considering SELinux stopped exploitation of a KVM vulnerability in May of this year I believe it actually is working when it is needed the most. The presentation at blackhat states you need a kernel level privesc to bypass SELinux in addition to the KVM vulnerability. Which definitely raises the bar (although not to an impossible level).
SEPostgres
Posted Sep 13, 2011 12:45 UTC (Tue) by SEJeff (subscriber, #51588) [Link]
Lets see how many people have worked on SELinux via our friend git log...
jeff@omniscience:~/git/linux-2.6/security$ (master) git log --pretty="%an" selinux/ | tr 'A-Z' 'a-z' | sort -u | wc -l
133
That shows 133 separate patch authors to code in the security/selinux/ subdirectory. That doesn't even begin to count the number of NSA analysts who worked on the design, a painstaking and > 1 person effort no doubt. If I manually remove a few duplicates (middle initials), that still leaves 128 separate patch authors.
I also think it is amusing (#2) that with all of the R&D MSFT has put into Windows, they have exactly 0 shipping MAC solutions. Linux has 4 integrated upstream I can think of without looking.
1.) SELinux
2.) AppArmor
3.) SMACK
4.) Tomoyo
Sun at least had the Trusted Solaris Extensions, which didn't catch on as much as the competing Linux-based solutions did. There is also TrustedBSD but still no MAC in windows.
Pot, meet kettle, you're both black. Please provide examples instead of baseless claims :)
SEPostgres
Posted Sep 13, 2011 14:33 UTC (Tue) by dpquigl (guest, #52852) [Link]
Something that people don't realize is that the actual core group of developers over the development of SELinux has never been that large. People seem to think there is a huge group of people at the agency working on it when in reality it's just a handful of people. The initial design for type enforcement was done by one person a long time ago and the design for Flask and SELinux was done by two people at the agency (my former bosses). The initial implementation for SELinux was done by one man (Steve Smalley).
It is true that Windows does not have a MAC implementation but I didn't say they did. However with Vista they did introduce a mandatory integrity system, and lucky for Microsoft they hired the creator of AppArmor onto their security team, so maybe they will come up with one. You're right, TSOL and Solaris TX would have been better comparisons. It's a shame that Sun took a step back with TX and moved to zone based labeling instead of doing fine-grained labeling again like in TSOL. I was told that they are fixing that in Solaris TX, however the last I heard it wasn't done yet.
SEPostgres
Posted Sep 13, 2011 15:21 UTC (Tue) by PaXTeam (guest, #24616) [Link]
no, it didn't, unless there's some magic capability that prevents use-after-free bugs from being exploited in which case i'm sure many other projects would like to hear about it and make use of it ;).
> [...]I believe it actually is working when it is needed the most.
this is just spreading a false sense of security. as a sidenote, MAC wasn't even designed for exploit prevention, it's no wonder it can't do much about exploits.
> The presentation at blackhat states you need a kernel level privesc to bypass SELinux [...]
that's a tautology, you obviously need a kernel bug to bypass a kernel feature. whether an attacker *wants* to go there is independent of his ability to exploit the kvm bug itself which already gives him plenty of privileges he didn't otherwise have, despite SELinux and whatnot.
SEPostgres
Posted Sep 14, 2011 1:32 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]
And in the end, SELinux doesn't really matter that much. New local kernel exploits are discovered at least several times a year if not monthly. And SELinux can't do a thing against them (except make the life of the admin harder during investigation and cleanup - been there, done that).
Just look at the recent hacks of kernel.org and linux.com - had they used SE* products? Nope, because they are way too complex. And probably would have been useless in the end.
Can we do better? Probably not. Linux fundamentally is not secure, and can't be made secure until it's rewritten in a safe language. That's not going to happen soon (if ever).
Ditto for PostgreSQL. Can SELinux be used to make it really secure? I doubt it - there's no way to really isolate security roles. However, ability to attach labels to database objects really rocks - I can use it in my application to do fast per-user views.
SEPostgres
Posted Sep 14, 2011 22:25 UTC (Wed) by cmccabe (guest, #60281) [Link]
Higher-level languages are not immune to security problems. Just look at the huge number of Javascript cross-site scripting exploits, SQL injection attacks, and insecure shell scripts out there.
A microkernel architecture would help a little bit, but again, it's not a cure-all. For example, if you somehow compromise the filesystem component of a microkernel, you can probably simply add another root user to /etc/passwd and reboot. If you can modify the ACPI bytecode, you can get root that way. If you have access to the PCI bus, you can probably modify memory anywhere, which is again equivalent to root.
What it comes down to is that security is hard. Careful use of static analysis can probably blunt the sharpest edges of C, and various sandboxing technologies can help for higher level code, but there are no magic bullets.
SEPostgres
Posted Sep 14, 2011 23:54 UTC (Wed) by nix (subscriber, #2304) [Link]
Security: never gonna happen. So, in brief, we're all doomed.
SEPostgres
Posted Sep 15, 2011 1:16 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]
I'm a security pessimist. Right now there's no way to secure Linux, in practice almost every vulnerability in almost any software can in principle be exploited to gain full root access.
But that shouldn't be so! We have safe languages which IMMEDIATELY limit the number of attacks. Even the worst SQL injection attack won't be able to give the attacker root access and install a backdoor in your system, if the database and kernel are written in a safe language.
Secure hardware is another issue, but we already have IOMMU which should help.
SEPostgres
Posted Sep 15, 2011 4:58 UTC (Thu) by dpquigl (guest, #52852) [Link]
SEPostgres
Posted Sep 15, 2011 12:52 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]
Additionally, JVM's trust model and Code Access Security in CLR are braindead and should die.
SEPostgres
Posted Sep 15, 2011 18:58 UTC (Thu) by dlang (subscriber, #313) [Link]
SEPostgres
Posted Sep 15, 2011 20:30 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]
There will be problems with insecurely JIT-ed machine code, but I believe they can also be solved.
SEPostgres
Posted Sep 22, 2011 4:53 UTC (Thu) by cmccabe (guest, #60281) [Link]
But lets assume that "this time is different" and you really succeed in rewriting absolutely everything in ${LANGUAGE}. Well, once you have this perfect operating system (we'll assume it's bug-free, despite being written by humans), running on perfect hardware which somehow exists, you'll still get hacked.
Why? Because you'll give a login to someone who has a password sniffer installed on his computer. Or put his password on a post-it note near the monitor. Or who uses the same login for multiple accounts, one of which gets hacked. Or who uses a password that can be guessed. Or who you never should have trusted in the first place. Or any one of the million ways that your security can be breached that have nothing to do with what language your operating system is written in.
SEPostgres
Posted Sep 15, 2011 7:32 UTC (Thu) by dlang (subscriber, #313) [Link]
SEPostgres
Posted Sep 22, 2011 4:43 UTC (Thu) by cmccabe (guest, #60281) [Link]
But "most secure we have" does not equal "magic bullet." If you've ever written some Java code that throws an exception when the user gives you bad data, and then not caught that exception-- congratulations, your server now has a denial-of-service vulnerability. If you've ever not checked the credentials of a user before serving them data-- congratulations, there's another security hole.
If you like Java, you should check out Google Go, which is also statically typed, but actually fixes a lot of the problems that Java has (in my opinion.)
[1] Well, actually you can load classes dynamically in Java. But from what I understand, this ability can be locked down when necessary.
SEPostgres
Posted Sep 13, 2011 4:56 UTC (Tue) by jberkus (subscriber, #55561) [Link]
PostgreSQL 9.1 released
Posted Sep 13, 2011 4:41 UTC (Tue) by butlerm (guest, #13312) [Link]
CREATE OR REPLACE VIEW account_entry AS
SELECT * FROM z_account_entry a
WHERE a.owner = package.current_owner
WITH CHECK OPTION;
This allows a secure multi-tenant database (e.g. for software as a service applications) with an arbitrary number of record owners, each of whom cannot see or modify the rows that belong to any other owner, nor inadvertently give them away. No need for a separate database for each tenant. Just a simple implementation of row level security.
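butlerm's per-owner view can be imitated with SQLite to see the effect. SQLite has no per-session variables like package.current_owner, so a stand-in current_owner() SQL function is registered here; the table and column names follow the example above:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE z_account_entry (id INTEGER, owner TEXT, amount REAL);
    INSERT INTO z_account_entry VALUES (1, 'alice', 10.0), (2, 'bob', 20.0);
""")
# Stand-in for package.current_owner: register a SQL-callable function.
db.create_function("current_owner", 0, lambda: "alice")
db.execute("""
    CREATE VIEW account_entry AS
    SELECT * FROM z_account_entry WHERE owner = current_owner()
""")
# The view only ever exposes the current owner's rows.
rows = db.execute("SELECT id, owner FROM account_entry").fetchall()
print(rows)  # [(1, 'alice')]
```

Each tenant queries account_entry and simply never sees the other tenants' rows; the base table z_account_entry stays hidden behind the view.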
PostgreSQL 9.1 released
Posted Sep 13, 2011 4:55 UTC (Tue) by jberkus (subscriber, #55561) [Link]
See also the Veil project for a framework built around this concept:
Backslash escaped quotes
Posted Sep 13, 2011 19:08 UTC (Tue) by Richard_J_Neill (subscriber, #23093) [Link]
Applications which use backslashes to escape quotes in strings will break.
For example, at the psql prompt, type:
SELECT 'I\'m singly quoted';
This used to work; now it will fail. I'm not certain whether or not this allows a different class of SQL injection attack.
Incidentally, PostgreSQL is, according to the SQL spec, correct: the following are the "right" ways to do it:
SELECT 'I''m singly quoted';
SELECT E'I\'m singly quoted';
However, it seems a little risky to change such a well-established default.
Backslash escaped quotes
Posted Sep 13, 2011 19:16 UTC (Tue) by Richard_J_Neill (subscriber, #23093) [Link]
$sql = "SELECT 'Some \'$quoted\' text' ...";
where $quoted comes from user input, having been sanitised with addslashes()
is now vulnerable, if the user input is, for example:
";DROP TABLE students --"
This is built up as:
$sql = "SELECT 'Some \';DROP TABLE students --\' text' ...";
and if the backslashes aren't interpreted as escapes, we have all sorts of fun...
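The robust way out of this escaping mess, whichever way the backslash default goes, is to stop splicing user input into SQL strings and let the driver bind parameters. A sketch with Python's sqlite3; any parameterized API behaves the same way:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE students (name TEXT)")

hostile = "';DROP TABLE students --"
# A placeholder hands quoting to the driver, so quote/backslash
# escaping rules can never change the shape of the statement.
db.execute("INSERT INTO students (name) VALUES (?)", (hostile,))

names = [row[0] for row in db.execute("SELECT name FROM students")]
print(names)  # ["';DROP TABLE students --"]
```

The hostile string ends up stored verbatim as data; no second statement ever runs, and the application no longer cares whether backslashes are escapes.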
Backslash escaped quotes
Posted Sep 15, 2011 8:44 UTC (Thu) by rleigh (guest, #14622) [Link]
Regards,
Roger
PostgreSQL 9.1 released
Posted Sep 14, 2011 6:51 UTC (Wed) by paxillus (guest, #79451) [Link]
UPDATE accounts SET (contact_last_name, contact_first_name) =
(SELECT last_name, first_name FROM salesmen
WHERE salesmen.id = accounts.sales_id);
This is not currently implemented; the source must be a list of independent expressions."
This is what I would like to see soon ...
PostgreSQL 9.1 released
Posted Sep 15, 2011 10:42 UTC (Thu) by brunowolff (guest, #71160) [Link]
PostgreSQL 9.1 released
Posted Sep 15, 2011 10:49 UTC (Thu) by andresfreund (subscriber, #69562) [Link]
UPDATE accounts
   SET contact_last_name = salesmen.last_name,
       contact_first_name = salesmen.first_name
  FROM salesmen
 WHERE salesmen.id = accounts.sales_id;
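That UPDATE ... FROM form can be tried out with SQLite as well, which adopted the same syntax in version 3.33; the sample tables and data here are made up:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE salesmen (id INTEGER, first_name TEXT, last_name TEXT);
    CREATE TABLE accounts (sales_id INTEGER,
                           contact_first_name TEXT, contact_last_name TEXT);
    INSERT INTO salesmen VALUES (1, 'Ada', 'Lovelace');
    INSERT INTO accounts VALUES (1, NULL, NULL);
""")
# One statement copies both columns across, joining on the sales id.
db.execute("""
    UPDATE accounts
       SET contact_first_name = salesmen.first_name,
           contact_last_name  = salesmen.last_name
      FROM salesmen
     WHERE salesmen.id = accounts.sales_id
""")
row = db.execute(
    "SELECT contact_first_name, contact_last_name FROM accounts"
).fetchone()
print(row)  # ('Ada', 'Lovelace')
```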
Ok so symfony created a new mime component to create emails, and a new mailer component to send those. Let's bundle them up and see what we end up with!
If you prefer to skip to the end, just install the bundle we made and enjoy!
Why?
Why we started this: we made a lot of symfony apps, and in every app we built a new mail system; awesome they were. But after the third or fourth we knew the basics. And, as in every project, we forgot to make a bundle and kept copy-pasting code, missing stuff or re-creating the same methods over and over again. So let's make a bundle that can be used and re-used.
This article will be a setup of the core of the bundle, so expect code.
History
So first up, a bit of history in my words. Probably none of it is exactly true, but I like it. The guys (guys as in plural men and women of course) from symfony used an old package, SwiftMailer, and ended up maintaining it. This could be cleaner and nicer, so they rebuilt it as the mime component, decoupling the sending into the mailer component. While making the components they set things up to be fancier and add some magic. Awesome, now let's get into the code.
Setup
This article talks about the setup: opinionated, and the way we would set things up. We have ideas and opinions, and if you differ from those, you will hopefully still learn a thing or two to get you on your own way.
But the basic setup is:
- make it easy to set up emails in your symfony application.
- make use of a database, so you can manage and your clients can tinker.
- generate emails using inlined CSS styles and the Inky email templating language.
- add a way to test and see what we are actually doing
Let's go
You can install the bundle with composer in your symfony application. This will set up everything you need to start emailing.
composer require disjfa/mail-bundle
In the basic setup you can just add your own email. An email should have a name, a subject and content. You do need one .env variable named DISJFA_MAIL_FROM for the from email address.
Create an email
So now you need to create your own email. There is an interface for that! If you implement the interface, the system will auto-register the email and you are done. Let's check it.
<?php

namespace Disjfa\MailBundle\Mail;

interface MailInterface
{
    public function getName();

    public function getSubject();

    public function getContent();
}
So your class will look something like this.
<?php

namespace ...;

use Disjfa\MailBundle\Mail\MailInterface;

class ExampleMail implements MailInterface
{
    public function getName(): string
    {
        return 'app.example';
    }

    public function getSubject(): string
    {
        return 'subject';
    }

    public function getContent(): string
    {
        return 'content';
    }
}
This will just generate an email with the subject subject and the content content. You can extend the class and autowire a Twig environment, or a translator, into it to make life easier. There is an example in the example folder.
This will generate the default email when sent.
The original email must use curly braces, which will be parsed and used for parameters. So in the subject and the content it should look like:
This is the email {{ email }}
When you do create a template using twig, please note that you cannot add twig tags directly. So escape them!
This is the email {{ '{{' }} email {{ '}}' }}
This will be read and checked for parameters. Only parameters present in the original content will be remembered in the system. For now we cannot implement fancy things like loops and other extras. The idea is to pre-render the data you want to use and set it up as 'simple' parameters.
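The "only parameters present in the original content are remembered" behaviour boils down to scanning the template for placeholders. A rough sketch of such a scan (illustrative only; this is not the bundle's actual parser):

```python
import re

# Matches {{ name }} style placeholders, with optional whitespace.
PLACEHOLDER = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def extract_params(template):
    """Collect the distinct {{ name }} placeholders used in a template."""
    return sorted(set(PLACEHOLDER.findall(template)))

params = extract_params("This is the email {{ email }} for {{ name }}")
print(params)  # ['email', 'name']
```

Anything not found by such a scan simply is not a known parameter, which is why loops and other template logic are out of scope.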
Designing emails
So there is a thing called Foundation for Emails. Those guys made a thing for making hard email structures easy to set up, and there is a twig extension for that. Also, somebody made a way to inline CSS styling so we can make styling emails a bit easier on the eye. There is a twig extension for that too.
So they made life wonderful for people who do not like emails. If you create an email that looks like this:
<!-- a simplified example of the Inky syntax -->
<container>
  <row>
    <columns>This is the email {{ email }}.</columns>
  </row>
</container>
It will be rendered and generated into something like this:
a garbage pile of table markup and data structure, which will look awesome in email clients!
We did not render the last item here; it will be generated. We will let you check that in the console.
Cool, email setup. Send them.
Next up, it is up to you to send the emails. You have to tell your application to send an email which you named. But we have made that simple. Let's check that out.
<?php

use Disjfa\MailBundle\Mail\MailFactory;
use Disjfa\MailBundle\Mail\MailService;

function myFunction(MailFactory $mailFactory, MailService $mailService)
{
    $mail = $mailFactory->findByName('app.example');

    $mailService->send($mail, [
        'param1' => 'value',
        'param2' => 'value',
    ], 'info@example.com');
}
And done. Add this in a function you use and inject the correct data to send them. You can place this wherever you like. You can even set up the mailer to send the emails async.
Checking the emails
For checking the emails there is a command!
bin/console disjfa:mail:preview-mail
This will result in a question, and list all the emails found in the application.
Please select an email [0] disjfa_mail.example [1] ... your email setup
Next up, what to do?
What to do? [0] preview [1] preview raw [2] send email
Preview will just dump the base html. Preview raw will ask for all the parameters in the email and render it out, dumping a pile of table markup and data structure, which should look awesome in an email application. The last option will also ask for an email address and send out a test.
Nice! Now you can create and test the emails.
Edit them
So now we have set up the emails and are able to make new email templates to use in your app. Next you want to tweak them; they are never right for everyone, and everyone has their own opinion. So let's just edit them.
You can add the routes in a
config/routes/disjfa_mail.yaml file.
disjfa_mail:
    resource: '@DisjfaMailBundle/Controller/'
    type: annotation
    prefix: '/admin'
In this example the routes are annotated and mapped to the /admin prefix, just as an example. This should end up in your application. Now you can open up the location, for example on localhost.
Here is an overview of all emails in the system again! Cool. Here you can edit or preview them. If you edit one of them there will be subject and content edit area. And from the original html all the parameters are rendered and added as reference for your new template. Check it out in action.
As for the templates
For the template. There is just a basic layout adding some bootstrap and there is some clipboardjs added so it can be used. If you want to add the templates in your own you can just override the existing
layout.html.twig. You can just add your own in
templates/bundles/DisjfaMailBundle/layout.html.twig, just keep in mind to add the blocks needed.
The same is for the base email, but in this path
templates/bundles/DisjfaMailBundle/mail/email.html.twig. Again, keep in mind the original blocks to implement. But here you can also just add your logo and other data you want to add.
Cool, now what?
Now what? You can add your own and add custom emails in your applications. Send them as you like! Lets check what to do.
- Create a class implementing
MailInterface.
- Add some templates as needed.
- Add some translations as needed.
- Create a place in your application to send the emails.
And done. You just created your first email and you did not even need to know much html to get them sending. It sounded like a lot, but in the end it was easy.
Now get on your way and create some awesome emails, and don't forget to send them!
Photo by Mathyas Kurmann on Unsplash
Discussion (0) | https://dev.to/disjfa/mailing-with-symfony-what-can-we-do-1nl1 | CC-MAIN-2021-43 | refinedweb | 1,392 | 75.81 |
I thought I’d give an example on how to use the ObjectDataSource in a webpage.
A typical example could be listing states etc. This is fairly static data so it may be unnecessary to query a database for each request.
However, in this example we’ll use the Authors and their Books as an example.
This example will use and ObjectDataSource as a DataSource. The object will be returned by projecting Authors and Books into a new class via a LINQ query.
I use LINQ to Objects here but a database connection could be used as well.
So, let’s get down to it.
- Create a new WebSite, using Location: Filesystem works fine.
- Right click the added App_Code folder, Add New Item -> Class, name it BookShop.cs and replace the code with this.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
public class BookShop
{
List<Author> Authors = new List<Author>();
List<Book> Books = new List<Book>();
public BookShop()
{
CreateBooksAndAuthors();
}
// This will return all Authors to be used by the dropdown.
public List<Author> GetAuthors()
{
return Authors;
// Constructs a new list of BookAuthorItem for provided authorId and returns this to be used by the GridView.
public List<BookAuthorItem> GetBooksByAuthorId(string authorId)
var q = from b in Books
where b.AuthorId.Equals(authorId)
select new BookAuthorItem
{
BookId = b.BookId,
BookName = b.BookName,
AuthorName = Authors.First(x => x.AuthorId.Equals(authorId)).AuthorName,
AuthorAlias = Authors.First(x => x.AuthorId.Equals(authorId)).AuthorAlias
};
return q.ToList();
public class Author
public string AuthorId { get; set; }
public string AuthorName { get; set; }
public string AuthorAlias { get; set; }
public class Book
public string BookId { get; set; }
public string BookName { get; set; }
// We will project the query result into this class and return it to the calling client.
public class BookAuthorItem
private void CreateBooksAndAuthors()
// Create some books and authors.
Authors.Add(new Author { AuthorId = "1", AuthorName = "Steve Stevenson", AuthorAlias = "The great Steve" });
Authors.Add(new Author { AuthorId = "2", AuthorName = "Tom Tomson", AuthorAlias = "T.T." });
Authors.Add(new Author { AuthorId = "3", AuthorName = "Eric Ericson", AuthorAlias = "The Swede" });
Authors.Add(new Author { AuthorId = "4", AuthorName = "Paul Paulson" });
Books.Add(new Book { BookId = "1", BookName = "The great book", AuthorId = "1" });
Books.Add(new Book { BookId = "2", BookName = "Once upon a time", AuthorId = "1" });
Books.Add(new Book { BookId = "3", BookName = "Learning is doing", AuthorId = "2" });
Books.Add(new Book { BookId = "4", BookName = "The not so great book", AuthorId = "3" });
Books.Add(new Book { BookId = "5", BookName = "Summer is good", AuthorId = "3" });
Books.Add(new Book { BookId = "6", BookName = "Ah, you again!", AuthorId = "4" });
Books.Add(new Book { BookId = "7", BookName = "Book about cats", AuthorId = "4" });
Books.Add(new Book { BookId = "8", BookName = "My story", AuthorId = "4" });
}
- Now on the Default.aspx page, drop a DropDownList from the toolbox onto it.
- Select "Choose Data Source...".
- Select "<New Data Source>" from the "Select a datasource" dropdown.
- Select Object in the "Choose a Data Source Type" dialog, and then OK.
- Select BookShop in the "Choose a Business" dialog, and then Next.
- Select "GetAuthors" from the "Choose a method" dropdown, and then Finish.
- Select AuthorName as the data field to display in the DropDownList, select AuthorId as the datafield for the value of the DropDownList, and OK.
- Select the "Enable AutoPostback" checkbox.
- Now on the Default.aspx page, drop a GridView from the toolbox (Data section) onto it.
- Select "<New Data Source>" from the "Choose Data Source..." list
- Select "GetBooksByAuthorId" from the "Choose a method" dropdown, and then Next.
- Select "Control" in the "Parameter Source" dropdown and "DropDownList1" as the ControlID in the "Define Parameters" dialog, and Finish.
- That is all, your Default.aspx page should now have the following in the body of the source.
<body>
<form id="form1" runat="server">
<asp:DropDownList
</asp:DropDownList>
<asp:ObjectDataSource
</asp:ObjectDataSource>
<asp:GridView
<Columns>
<asp:BoundField
<asp:BoundField
<asp:BoundField
<asp:BoundField
</Columns>
</asp:GridView>
<asp:ObjectDataSource
<SelectParameters>
<asp:ControlParameter
</SelectParameters>
</form>
</body>
- Run the application, you should now be able to change the content of the GridView by selecting different authors with the DropDownList.
"ObjectDataSource Class"
"Language-Integrated Query (LINQ) - Projection Operations"
great! Thanks! | http://blogs.msdn.com/b/spike/archive/2010/01/15/how-to-use-the-objectdatasource-as-a-datasource-simple-example.aspx | CC-MAIN-2014-52 | refinedweb | 682 | 51.65 |
Around the turn of the year, I wrote a small command line tool, snipster which I then released on both GitHub and PyPI.
Unfortunately, the release to PyPI went less than great. Basically you could download it fine but you couldn't run it. In the end, there were actually two problems, both of which I will address.
Keep in mind, this tutorial is only for the part relating to the app being a command line tool. For all the general stuff, please refer to another source, e.g. this amazing page.
Tell PyPI That Your App Is Supposed to Be Called From the Command Line
The first issue that I ran into is that I didn't specify in the
setup.py file that my app had a cli and where the main method was. You need to set the following:
packages = ['snipster'], entry_points = { 'console_scripts': ['snipster=snipster.__main__:main'], }
In
packages you need to state the name of your package. Setuptools (which I used for my app) has an in-built function to find it for you,
find_packages(where="snipster"), I didn't use it though because my setting works as well and don't fix what isn't broken, right?
entry_points was the part that broke for me. Whatever I tried, I got the same error:
`ModuleNotFoundError: No module named 'snipster'`
This is very much related to problem number 2 though.
Make Sure That All the Imports Are Correct
My app worked perfectly fine on my local machine and even installing the app manually (downloading from GitHub and extracting) worked fine. However when installing from pypi it just wouldn't work. It took me literally 5 months to figure it out.
And what it comes down to is import paths. My mistake was not including the package name in the
from statement. So to fix this, my code went from this:
from Snippet import Snippet from SnippetList import showSnippetList, lookupSnippetPath from Sourcer import sourceSnippets
to this:
from snipster.Snippet import Snippet from snipster.SnippetList import showSnippetList, lookupSnippetPath from snipster.Sourcer import sourceSnippets
Finally: Make Sure the App Actually Runs
Now that everything seems like it's fixed, we need to check that everything actually works. Because I had to do this a lot while trying to fix the app, I wrote two handy scripts:
testInit.sh
python setup.py sdist bdist_wheel pip install dist/snipster-py-1.0.3.tar.gz echo " " echo " " echo " " echo " " snipster
testClean.sh
pip uninstall snipster-py rm -r build rm -r dist rm -r snipster_py.egg-info
Now, if everything went well the
snipster command would print the help page on the cli, meaning that python found the entry point.
And there you go. Those were my two big issues when releasing the app. Really quick fixes but oh so hard to find! | https://sophieau.com/article/pypi-release/ | CC-MAIN-2019-39 | refinedweb | 471 | 74.39 |
Back to C# basics: Difference between "=>" and "{ get; } =" for properties
I recently realized, the difference between
=> and
{ get; } = for properties might not be as known as everybody thinks, based on code I saw multiple times.
Here’s an example code.
public class C { public Foo A { get; } = new Foo(); public Foo B => new Foo(); }
Is it the same or is it not? The answer is, it’s not the same. The
A property is property with getter only (aka read only or immutable property). When
C instance is created a new instance of
Foo is assigned to the property and will be returned from now on. The
B property defines also only getter, but this time the getter contains the
new Foo(); as it’s body, aka returning new instance of
Foo every time you access
B.
Putting it into barebone C#, it would look like this.
public class C { readonly Foo _a = new Foo(); public Foo A { get { return _a; } } public Foo B { get { return new Foo(); } } }
Makes sense? | https://www.tabsoverspaces.com/233844-back-to-csharp-basics-difference-between-and-get-for-properties?utm_source=feed | CC-MAIN-2021-31 | refinedweb | 170 | 69.52 |
I'm unsure if I've encapsulated properly, and also, How can i use my methods and constructors in the main?? I get errors as I can't figure out how to create a new object using the main method, nor am I able to access the methods I've written.. Netbeans tells me i can't access non-static methods from a static context...so I'm at a loss of how to properly set up the permissions.
Here are the Project "requirements"
This project needs to be created as ONE Class file that will have the main method. When you create a new project in Netbeans you will come to a window shown below. Set the name of the main class as mentioned in the project description!
Create a MAIN class Ledger (The class should be encapsulated) that will record the sales for a store. It will have the attributes
• sale—an array of double values that are the amounts of all sales
• salesMade—the number of sales so far
• maxSales—the maximum number of sales that can be recorded
Following methods should be added to the class
• Ledger(max)—a constructor that sets the maximum number of sales to max
• addSale(d)—adds a sale, to the array, whose value is d
• getNumberOfSales—returns the number of sales made
• getTotalSales—returns the total value of the sales
• getAverageSale()—returns the average value of all the sales
• getCountAbove(v)—returns the number of sales that exceeded v in value
• A main method that will test this class. The main methods should be included in this class and not as a separate file. The main method should ask the user to enter a particular sale (call the methods to enter the sale after performing important illegal checks and checking if the array has an empty space) and then ask the user if the sales have finished through a loop. After the sales have been finished, show the following outputs: number of sales made, total sales, average sales and number of sales greater than a user entered sale price!
Rules:
1) Make sure that your program checks for illegal entries.
2) The program should be commented appropriately for easy understanding.
3) You class name should be “Ledger”. After you are finished with your programs (building and running it), please rename the file using the following format:
The class file should be named:
YourFullName_Project_5_Ledger.java
An example of the file name, if I were to write the program, would be RohitDua_Project_5_Ledger.java
/* * To change this template, choose Tools | Templates * and open the template in the editor. */ package project5.Main; import javax.swing.*; /** * * @author Azeem */ public class Ledger { private int maxSales; private int salesMade; private double[] sales; Ledger(int x) { maxSales = x; sales = new double[maxSales]; }//constructor sets the max sales to the max value as defined by user. double[] addSale(double d) { double[] ans = new double[getSales().length+1]; System.arraycopy(getSales(), 0, ans, 0, getSales().length); ans[ans.length - 1] = d; return ans; }//copies existing sales and adds new element public boolean trufal(char yesno) { boolean yes = true; if(yesno == 'y'||yesno=='Y'){yes = true;} if(yesno == 'n'||yesno=='N'){yes = false;} return yes; }//currently this method is not used /** * @return the maxSales */ public int getMaxSales() { return maxSales; } /** * @param maxSales the maxSales to set */ public void setMaxSales(int maxSales) { this.maxSales = maxSales; } /** * @return the salesMade */ public int getSalesMade() { return salesMade; } /** * @param salesMade the salesMade to set */ public void setSalesMade(int salesMade) { this.salesMade = salesMade; } /** * @return the sales */ public double[] getSales() { return sales; } /** * @param sales the sales to set */ public void setSales(double[] sales) { this.sales = sales; } /** * Program asks if user wants to enter sales. * if yes, continuously loop until the user sets sales as finished, i.e. do while continue = true. * program loops through the sales to be recorded * during loop, user enters double value for array object for each sale which is error checked. * method addsale returns new array with old values plus new value * for each new sale input, copy src = previous new array from index 0 to end, and place in new array of +1length at index 0. * * during loop, new array at last index = user input double * when loop ends, the array is saved. 
* * getnumberofSales returns the amount of sales made (# times looped through) * gettotal sales returns the sum of all the values in the final array. * getAverageSale returns the mean of all the values in the finaly array. * getCountabove returns all values in final array that are greater than userinput V * */ public static void main(String[] args) { int Continue = JOptionPane.showConfirmDialog(null, "Would you like to begin entering sales?", "Welcome to Ledger!", JOptionPane.YES_NO_OPTION); do { double input = Double.parseDouble(JOptionPane.showInputDialog("Please enter the sales amount:")); Ledger.setMaxSales(); Continue = JOptionPane.showConfirmDialog(null, "Would you like to continue entering sales?", "Continue?", JOptionPane.YES_NO_OPTION); //System.out.println("Continue = " + Continue); }while(Continue == JOptionPane.YES_OPTION); // TODO code application logic here } }
This post has been edited by greatestone4eva: 10 May 2009 - 12:01 PM | http://www.dreamincode.net/forums/topic/104395-single-class-with-constructor-methods-and-main-method/ | CC-MAIN-2013-20 | refinedweb | 834 | 61.46 |
On Thu, 30 Sep 2004, Jeff Garzik wrote:> > > > +static inline void __iomem *ns_ioaddr(struct net_device *dev)> > +{> > + return (void __iomem *) dev->base_addr;> > +}> > +> > hmmmm. Since dev->base_addr gets exported to userspace, I don't think > it's that quick/easy to change.Hmm? This maintains the _exact_ old semantics, ie we do exactly what it used to do before. The inline function doesn't save the value off anywhere, it's really just a nicer way to do a cast in _one_ place rather than all over the world. Also, it ends up resulting in just _one_ place that knows where to get the base address, instead of several places in pretty much every function in the whole driver ;-P> Wouldn't it be better to just phase out the base of dev->base_addr > completely? I tend to prefer adding a "void __iomem *regs" to struct > netdev_private, and ignore dev->base_addr completely.Yes. I didn't want to change actual behaviour in a driver that I can't even test, so I went for the semantically 100% equivalent cleanup patch instead that just changes the syntax and gets rid of the warnings.But that's the other advantage of the ns_ioaddr() accessor function: somebody who does have the hw can now phase out "base_addr", and justchange that one one-liner function, and you can now get the base addressfrom anywhere you like ;) Linus-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2004/9/30/157 | CC-MAIN-2017-51 | refinedweb | 259 | 61.8 |
Introduction:
GridView is a new data bound control introduced by Microsoft in Visual Studio.NET 2005. Most of the operations like sorting, paging and selecting item from the GridView is already built in and you can use it through the design view. In this article I will explain that how you can select single as well as all the checkboxes which are inside the GridView control.
Selecting Checkboxes inside the GridView Control:
GridView has CheckboxField column which maps the checkbox to a field in the database. In this article we won't be using that, we will make a checkbox in a template column. Simply add a asp:checkbox control in the item template of the GridView control. If you are working with Datagrid control and want the same functionality than please check out my article Selecting Checkboxes inside the Datagrid control.
The html code looks something like this:
Now in the button click event write this code:
StringBuilder str = new StringBuilder();
// Select the checkboxes from the GridView control
for (int i = 0; i < GridView1.Rows.Count; i++)
{
GridViewRow row = GridView1.Rows[i];
bool isChecked = ((CheckBox) row.FindControl("chkSelect")).Checked;
if (isChecked)
// Column 2 is the name column
str.Append(GridView1.Rows[i].Cells[2].Text);
}
}
// prints out the result
Response.Write(str.ToString());
The code above just iterates through the GridView and selects the checked checkboxes. Later it appends the selected value to a StringBuilder object. In order to use StringBuilder you will need to add the System.Text namespace.
Making a CheckAll functionality:
To add a check all functionality in the GridView simply add a html checkbox to the header template of the checkbox column.
SelectAllCheckboxes JavaScript method:<elm.length;i++)
if(elm[i].type=="checkbox" && elm[i].id!=theBox.id)
//elm[i].click();
if(elm[i].checked!=xState)
elm[i].click();
//elm[i].checked=xState;
</script>
This is it. I hope you like the article, happy coding! | http://www.gridviewguy.com/Articles/81_Selecting_Checkboxes_inside_GridView_Control.aspx | crawl-001 | refinedweb | 320 | 56.66 |
The short, quick story of how we finally found something we liked after hours of uncomfortable typing.
Well we have Python sessions every week and we have this one person whose space-bar was not working or working when you don't expect it and it was shop closed for most of the time.
Once luck hit really hard and for some three weeks we forgot about it until the spacebar decided to make a comeback. The permanent solution the person was using was a virtual keyboard. He would position the mouse pointer on the keyboard's spacebar before typing and instead of pressing the spacebar, he would right click on the laptop's mouse pad, clicking the virtual spacebar. It's no fun typing i can tell you and no fun coding, for sure! We'll add in doing git push with it, it was really &^&^%*^%$%$£.
It was exactly after our practical git session that i started asking myself what we were teaching if we could not solve this with Python. I reached out to my keys and my first intuition was PyAuto Gui. And yes, the person was leaving in 5 minutes, he was collecting his things ... I had to get it right, right now!
I was lost in PyAutoGUI's keys docs, i decided to explore new options right there, out in the wild blue. I saw a package, keyboard gloups, should have thought of it XD. With the import antigravity joke flashing through my mind, i was off to the docs page which ... was a .md file. Euhh the example showed only one use case, i was digging, ploughing, rummaging ... I was ctrl+f brute searching until i came up with an infinite loop, event detection and key send. Cool! The api was really great as you intuitively guessed keys' names. Well done keyboard devs 🎉.
import keyboard # using module keyboard while True: # making a loop try: # used try so that if user pressed other than the given key error will not be shown if keyboard.is_pressed('ctrl+alt+s'): print('pressed') keyboard.write(' ') except: pass
Instead of sending the file, i virtual-keyboard typed the program in the person's laptop, i had no time to setup that crazy mouse setup for space bar. I went in quick, mobile messaging experience helped. Then we had to think out of combinations. On my PC, ctrl+s was working fine, but it was the popular save command and it was hitting hard on his pc. We tried z+x but it was typing those letters. Finally we settled on ctrl+alt+s.
Then about deployment, win8 links py files automatically. One click on the file and it was running. Phew we tested and the person liked it very much as he was tired of mousing around. The snake finally swallowed the mouse [1].
Do you have a story of how a short snippet ate away a bugging problem? Share it below!
( [1] Mike Driscoll has a Mouse Vs Python blog)
Images from unsplash
Discussion (0) | https://dev.to/abdurrahmaanj/the-funny-story-of-how-we-fixed-the-spacebar-with-9-lines-in-9-minutes-2f58 | CC-MAIN-2021-39 | refinedweb | 507 | 73.37 |
DEBSOURCES
sources / highwayhash / 0~git20200803.9490b14
Strong (well-distributed and unpredictable) hashes:
* Portable implementation of
[SipHash]()
* HighwayHash, a 5x faster SIMD hash with [security
claims]()
## Quick Start
To build on a Linux or Mac platform, simply run `make`. For Windows, we provide
a Visual Studio 2015 project in the `msvc` subdirectory.
Run `benchmark` for speed measurements. `sip_hash_test` and `highwayhash_test`
ensure the implementations return known-good values for a given set of inputs.
64-bit SipHash for any CPU:
#include "highwayhash/sip_hash.h"
using namespace highwayhash;
const HH_U64 key2[2] HH_ALIGNAS(16) = {1234, 5678};
char in[8] = {1};
return SipHash(key2, in, 8);
64, 128 or 256 bit HighwayHash for the CPU determined by compiler flags:
#include "highwayhash/highwayhash.h"
using namespace highwayhash;
const HHKey key HH_ALIGNAS(32) = {1, 2, 3, 4};
char in[8] = {1};
HHResult64 result; // or HHResult128 or HHResult256
HHStateT<HH_TARGET> state(key);
HighwayHashT(&state, in, 8, &result);
64, 128 or 256 bit HighwayHash for the CPU on which we're currently running:
#include "highwayhash/highwayhash_target.h"
#include "highwayhash/instruction_sets.h"
using namespace highwayhash;
const HHKey key HH_ALIGNAS(32) = {1, 2, 3, 4};
char in[8] = {1};
HHResult64 result; // or HHResult128 or HHResult256
InstructionSets::Run<HighwayHash>(key, in, 8, &result);
C-callable 64-bit HighwayHash for the CPU on which we're currently running:
#include "highwayhash/c_bindings.h"
const uint64_t key[4] = {1, 2, 3, 4};
char in[8] = {1};
return HighwayHash64(key, in, 8);
Printing a 256-bit result in a hexadecimal format similar to sha1sum:
```c++
HHResult256 result;
printf("%016"PRIx64"%016"PRIx64"%016"PRIx64"%016"PRIx64"\n",
       result[3], result[2], result[1], result[0]);
```
## Introduction
Hash functions are widely used, so it is desirable to increase their speed and
security. This package provides two 'strong' (well-distributed and
unpredictable) hash functions: a faster version of SipHash, and an even faster
algorithm we call HighwayHash.
SipHash is a fast but 'cryptographically strong' pseudo-random function by
Aumasson and Bernstein [].
HighwayHash is a new way of mixing inputs which may inspire new
cryptographically strong hashes. Large inputs are processed at a rate of 0.24
cycles per byte, and latency remains low even for small inputs. HighwayHash is
faster than SipHash for all input sizes, with 5 times higher throughput at 1
KiB. We discuss design choices and provide statistical analysis and preliminary
cryptanalysis in the accompanying paper.
## Applications
Unlike prior strong hashes, these functions are fast enough to be recommended
as safer replacements for weak hashes in many applications. The additional CPU
cost appears affordable, based on profiling data indicating C++ hash functions
account for less than 0.25% of CPU usage.
Hash-based selection of random subsets is useful for A/B experiments and similar
applications. Such random generators are idempotent (repeatable and
deterministic), which is helpful for parallel algorithms and testing. To avoid
bias, it is important that the hash function be unpredictable and
indistinguishable from a uniform random generator. We have verified the bit
distribution and avalanche properties of SipHash and HighwayHash.
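The idempotent-selection property can be sketched as follows. This is a hypothetical helper, not part of the library; the stand-in mixer substitutes for a call to `HighwayHash64` from `c_bindings.h`, which is what a real service would use so that assignments are unpredictable without the seed.

```cpp
#include <cstdint>
#include <string>

// Stand-in 64-bit mixer (a splitmix64-style finalizer), used here only so the
// sketch is self-contained; replace with HighwayHash64(key, data, size).
uint64_t Mix64(uint64_t x) {
  x += 0x9E3779B97F4A7C15ull;
  x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ull;
  x = (x ^ (x >> 27)) * 0x94D049BB133111EBull;
  return x ^ (x >> 31);
}

uint64_t HashId(const std::string& id, uint64_t seed) {
  uint64_t h = seed;
  for (unsigned char c : id) h = Mix64(h ^ c);
  return h;
}

// Deterministically assigns an id to one of `arms` experiment arms: the same
// id and seed always yield the same arm, with no per-user state to store.
uint32_t ExperimentArm(const std::string& id, uint64_t seed, uint32_t arms) {
  return (uint32_t)(HashId(id, seed) % arms);
}
```

A new experiment simply uses a fresh seed, which reshuffles all users independently of previous experiments.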
64-bit hashes are also useful for authenticating short-lived messages such as
network/RPC packets. This requires that the hash function withstand
differential, length extension and other attacks. We have published a formal
security analysis for HighwayHash. New cryptanalysis tools may still need to be
developed for further analysis.
Strong hashes are also important parts of methods for protecting hash tables
against unacceptable worst-case behavior and denial of service attacks
(see "hash flooding" below).
128 and 256-bit hashes can be useful for verifying data integrity (checksums).
## SipHash
Our SipHash implementation is a fast and portable drop-in replacement for
the reference C code. Outputs are identical for the given test cases (messages
between 0 and 63 bytes).
Interestingly, it is about twice as fast as a SIMD implementation using SSE4.1
(). This is presumably due to the lack of SIMD bit rotate
instructions prior to AVX-512.
SipHash13 is a faster but weaker variant with one mixing round per update and
three during finalization.
We also provide a data-parallel 'tree hash' variant that enables efficient SIMD
while retaining safety guarantees. This is about twice as fast as SipHash, but
does not return the same results.
## HighwayHash
We have devised a new way of mixing inputs with SIMD multiply and permute
instructions. The multiplications are 32x32 -> 64 bits and therefore infeasible
to reverse. Permuting equalizes the distribution of the resulting bytes.
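A much-simplified scalar sketch of the multiply step (illustrative only, not the actual HighwayHash state update):

```cpp
#include <cstdint>

// Illustrative only -- NOT the real HighwayHash update. One lane's low 32 bits
// multiply another lane's high 32 bits (a 32x32 -> 64 bit product), and the
// full 64-bit product is folded back into the state; inverting this would
// require recovering both factors from their product.
struct MiniState {
  uint64_t v0 = 0x243F6A8885A308D3ull;  // arbitrary nonzero starting state
  uint64_t v1 = 0x13198A2E03707344ull;

  void Update(uint64_t packet) {
    v1 += packet;
    v0 ^= (uint64_t)(uint32_t)v1 * (v0 >> 32);
    v1 ^= (uint64_t)(uint32_t)v0 * (v1 >> 32);
  }

  uint64_t Digest() const { return v0 ^ v1; }
};
```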
The internal state is quite large (1024 bits) but fits within SIMD registers.
Due to limitations of the AVX2 instruction set, the registers are partitioned
into two 512-bit halves that remain independent until the reduce phase. The
algorithm outputs 64 bit digests or up to 256 bits at no extra cost.
In addition to high throughput, the algorithm is designed for low finalization
cost. The result is more than twice as fast as SipTreeHash.
We also provide an SSE4.1 version (80% as fast for large inputs and 95% as fast
for short inputs), an implementation for VSX on POWER and a portable version
(10% as fast). A third-party ARM implementation is referenced below.
Statistical analyses and preliminary cryptanalysis are given in the accompanying
paper.
## Versioning and stability
Now that 21 months have elapsed since their initial release, we have declared
all (64/128/256 bit) variants of HighwayHash frozen, i.e. unchanging forever.
SipHash and HighwayHash are 'fingerprint functions' whose input -> hash
mapping will not change. This is important for applications that write hashes to
persistent storage.
## Speed measurements
To measure the CPU cost of a hash function, we can either create an artificial
'microbenchmark' (easier to control, but probably not representative of the
actual runtime), or insert instrumentation directly into an application (risks
influencing the results through observer overhead). We provide novel variants of
both approaches that mitigate their respective disadvantages.
profiler.h uses software write-combining to stream program traces to memory
with minimal overhead. These can be analyzed offline, or when memory is full,
to learn how much time was spent in each (possibly nested) zone.
nanobenchmark.h enables cycle-accurate measurements of very short functions.
It uses CPU fences and robust statistics to minimize variability, and also
avoids unrealistic branch prediction effects.
We compile the 64-bit C++ implementations with a patched GCC 4.9 and run on a
single idle core of a Xeon E5-2690 v3 clocked at 2.6 GHz. CPU cost is measured
as cycles per byte for various input sizes:
Algorithm | 8 | 31 | 32 | 63 | 64 | 1024
---------------- | ----- | ---- | ---- | ---- | ---- | ----
HighwayHashAVX2 | 7.34 | 1.81 | 1.71 | 1.04 | 0.95 | 0.24
HighwayHashSSE41 | 8.00 | 2.11 | 1.75 | 1.13 | 0.96 | 0.30
SipTreeHash | 16.51 | 4.57 | 4.09 | 2.22 | 2.29 | 0.57
SipTreeHash13 | 12.33 | 3.47 | 3.06 | 1.68 | 1.63 | 0.33
SipHash | 8.13 | 2.58 | 2.73 | 1.87 | 1.93 | 1.26
SipHash13 | 6.96 | 2.09 | 2.12 | 1.32 | 1.33 | 0.68
SipTreeHash is slower than SipHash for small inputs because it processes blocks
of 32 bytes. AVX2 and SSE4.1 HighwayHash are faster than SipHash for all input
sizes due to their highly optimized handling of partial vectors.
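As a sanity check on the table, cycles per byte convert directly to throughput at the stated 2.6 GHz clock (bytes/s = clock / cycles-per-byte):

```cpp
// Throughput implied by a cycles-per-byte figure at a given clock rate, e.g.
// 0.24 cpb at 2.6 GHz is about 10.8 GB/s for large-input HighwayHashAVX2,
// versus roughly 2.1 GB/s for SipHash at 1.26 cpb.
double ThroughputGBps(double clock_ghz, double cycles_per_byte) {
  return clock_ghz / cycles_per_byte;  // GHz / (cycles/byte) = Gbytes/s
}
```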
Note that previous measurements included the initialization of their input,
which dramatically increased timings especially for small inputs.
## CPU requirements
SipTreeHash(13) requires an AVX2-capable CPU (e.g. Haswell). HighwayHash
includes a dispatcher that chooses the implementation (AVX2, SSE4.1, VSX or
portable) at runtime, as well as a directly callable function template that can
only run on the CPU for which it was built. SipHash(13) and
ScalarSipTreeHash(13) have no particular CPU requirements.
### AVX2 vs SSE4
When both AVX2 and SSE4 are available, the decision whether to use AVX2 is
non-obvious. AVX2 vectors are twice as wide, but require a higher power license
(integer multiplications count as 'heavy' instructions) and can thus reduce the
clock frequency of the core or entire socket(!) on Haswell systems. This
partially explains the observed 1.25x (not 2x) speedup over SSE4. Moreover, it
is inadvisable to only sporadically use AVX2 instructions because there is also
a ~56K cycle warmup period during which AVX2 operations are slower, and Haswell
can even stall during this period. Thus, we recommend avoiding AVX2 for
infrequent hashing if the rest of the application is also not using AVX2. For
any input larger than 1 MiB, it is probably worthwhile to enable AVX2.
### SIMD implementations
Our x86 implementations use custom vector classes with overloaded operators
(e.g. `const V4x64U a = b + c`) for type-safety and improved readability vs.
compiler intrinsics (e.g. `const __m256i a = _mm256_add_epi64(b, c)`).
The VSX implementation uses built-in vector types alongside Altivec intrinsics.
A high-performance third-party ARM implementation is mentioned below.
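The wrapper idea can be sketched portably. The real `V4x64U` holds an `__m256i` and forwards `operator+` to `_mm256_add_epi64`; this stand-in uses plain lanes so it runs anywhere:

```cpp
#include <cstdint>

// Portable stand-in for the idea behind V4x64U: a 4-lane vector of uint64_t
// with overloaded operators, so call sites read as `a = b + c` instead of
// spelling out intrinsics. Lane additions wrap modulo 2^64, matching the
// semantics of _mm256_add_epi64.
struct V4x64U {
  uint64_t lane[4];
  V4x64U operator+(const V4x64U& other) const {
    V4x64U r;
    for (int i = 0; i < 4; ++i) r.lane[i] = lane[i] + other.lane[i];
    return r;
  }
  V4x64U operator^(const V4x64U& other) const {
    V4x64U r;
    for (int i = 0; i < 4; ++i) r.lane[i] = lane[i] ^ other.lane[i];
    return r;
  }
};
```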
### Dispatch
Our instruction_sets dispatcher avoids running newer instructions on older CPUs
that do not support them. However, intrinsics, and therefore also any vector
classes that use them, require (on GCC < 4.9 or Clang < 3.9) a compiler flag
that also allows the compiler to generate code for that CPU. This means the
intrinsics must be placed in separate translation units that are compiled with
the required flags. It is important that these source files and their headers
not define any inline functions, because that might break the one definition
rule and cause crashes.
To minimize dispatch overhead when hashes are computed often (e.g. in a loop),
we can inline the hash function into its caller using templates. The dispatch
overhead will only be paid once (e.g. before the loop). The template mechanism
also avoids duplicating code in each CPU-specific implementation.
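A rough sketch of this pattern (the names and the stand-in hash are illustrative, not the actual highwayhash API): each CPU-specific implementation is a functor type, and a function template lets the compiler inline its body into the caller's loop, so the dispatch decision is paid once, outside the loop:

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative per-CPU implementation; a real dispatcher would also have
// AVX2/SSE4 functors. FNV-1a is used here only as a simple stand-in hash.
struct PortableImpl {
  uint64_t operator()(const char* bytes, std::size_t size) const {
    uint64_t h = 14695981039346656037ULL;  // FNV-1a offset basis
    for (std::size_t i = 0; i < size; ++i) {
      h ^= static_cast<unsigned char>(bytes[i]);
      h *= 1099511628211ULL;  // FNV-1a prime
    }
    return h;
  }
};

// The loop body is written once and instantiated per implementation, so no
// code is duplicated by hand for each CPU target; the per-item call can be
// fully inlined because HashImpl is known at compile time.
template <class HashImpl>
uint64_t HashAll(const char* const* items, const std::size_t* sizes,
                 std::size_t n) {
  const HashImpl hash;  // dispatch decision happens before the loop
  uint64_t combined = 0;
  for (std::size_t i = 0; i < n; ++i) {
    combined ^= hash(items[i], sizes[i]);
  }
  return combined;
}
```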
## Defending against hash flooding
To mitigate hash flooding attacks, we need to take both the hash function and
the data structure into account.
We wish to defend (web) services that utilize hash sets/maps against
denial-of-service attacks. Such data structures assign attacker-controlled
input messages `m` to a hash table bin `b` by computing the hash `H(s, m)`
using a hash function `H` seeded by `s`, and mapping it to a bin with some
narrowing function `b = R(h)`, discussed below.
Attackers may attempt to trigger 'flooding' (excessive work in insertions or
lookups) by finding multiple `m` that map to the same bin. If the attacker has
local access, they can do far worse, so we assume the attacker can only issue
remote requests. If the attacker is able to send large numbers of requests,
they can already deny service, so we need only ensure the attacker's cost is
sufficiently large compared to the service's provisioning.
If the hash function is 'weak', attackers can easily generate 'hash collisions'
(inputs mapping to the same hash values) that are independent of the seed. In
other words, certain input messages will cause collisions regardless of the seed
value. The author of SipHash has published C++ programs to generate such
'universal (key-independent) multicollisions' for CityHash and Murmur. Similar
'differential' attacks are likely possible for any hash function consisting only
of reversible operations (e.g. addition/multiplication/rotation) with a constant
operand. `n` requests with such inputs cause `n^2` work for an unprotected hash
table, which is unacceptable.
By contrast, 'strong' hashes such as SipHash or HighwayHash require infeasible
attacker effort to find a hash collision (an expected 2^32 guesses of `m` per
the birthday paradox) or recover the seed (2^63 requests). These security claims
assume the seed is secret. It is reasonable to suppose `s` is initially unknown
to attackers, e.g. generated on startup or even per-connection. A timing attack
by Wool/Bar-Yosef recovers 13-bit seeds by testing all 8K possibilities using
millions of requests, which takes several days (even assuming unrealistic 150 us
round-trip times). It appears infeasible to recover 64-bit seeds in this way.
However, attackers are only looking for multiple `m` mapping to the same bin
rather than identical hash values. We assume they know or are able to discover
the hash table size `p`. It is common to choose `p = 2^i` to enable an efficient
`R(h) := h & (p - 1)`, which simply retains the lower hash bits. It may be
easier for attackers to compute partial collisions where only the lower `i` bits
match. This can be prevented by choosing a prime `p` so that `R(h) := h % p`
incorporates all hash bits. The costly modulo operation can be avoided by
multiplying with the inverse. An interesting alternative
suggested by Kyoung Jae Seo chooses a random subset of the `h` bits. Such an `R`
function can be computed in just 3 cycles using PEXT from the BMI2 instruction
set. This is expected to defend against SAT-solver attacks on the hash bits at a
slightly lower cost than the multiplicative inverse method, and still allows
power-of-two table sizes.
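The narrowing functions `R(h)` discussed above can be sketched as follows (the helper names are ours; the bit-subset variant is shown as a portable loop, since the single-instruction PEXT version requires BMI2 hardware):

```cpp
#include <cstdint>

// Power-of-two table size: keeps only the low i bits of the hash.
inline uint64_t BinMask(uint64_t h, uint64_t p_pow2) {
  return h & (p_pow2 - 1);
}

// Prime table size: every hash bit influences the bin, at the cost of a
// modulo (which implementations typically replace by multiplication with a
// precomputed inverse of p).
inline uint64_t BinMod(uint64_t h, uint64_t p_prime) {
  return h % p_prime;
}

// Random-subset selection: gather the hash bits selected by 'mask'. On
// BMI2 hardware this is a single PEXT instruction (_pext_u64); the loop
// below is a portable equivalent.
inline uint64_t BinSubset(uint64_t h, uint64_t mask) {
  uint64_t bin = 0;
  int out = 0;
  for (int bit = 0; bit < 64; ++bit) {
    if (mask & (1ULL << bit)) {
      bin |= ((h >> bit) & 1ULL) << out;
      ++out;
    }
  }
  return bin;
}
```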
Summary thus far: given a strong hash function and secret seed, it appears
infeasible for attackers to generate hash collisions because `s` and/or `R` are
unknown. However, they can still observe the timings of data structure
operations for various `m`. With typical table sizes of 2^10 to 2^17 entries,
attackers can detect some 'bin collisions' (inputs mapping to the same bin).
Although this will be costly for the attacker, they can then send many instances
of such inputs, so we need to limit the resulting work for our data structure.
Hash tables with separate chaining typically store bin entries in a linked list,
so worst-case inputs lead to unacceptable linear-time lookup cost. We instead
seek optimal asymptotic worst-case complexity for each operation (insertion,
deletion and lookups), which is a constant factor times the logarithm of the
data structure size. This naturally leads to a tree-like data structure for each
bin. The Java8 HashMap only replaces its linked list with trees when needed.
This leads to additional cost and complexity for deciding whether a bin is a
list or tree.
Our first proposal (suggested by Github user funny-falcon) avoids this overhead
by always storing one tree per bin. It may also be worthwhile to store the first
entry directly in the bin, which avoids allocating any tree nodes in the common
case where bins are sparsely populated. What kind of tree should be used?
Given that SipHash and HighwayHash provide high-quality randomness, a simple
non-balancing binary search tree could perform reasonably well, depending on
the expected attack surface. [Wikipedia says]()
> After a long intermixed sequence of random insertion and deletion, the
> expected height of the tree approaches square root of the number of keys, √n,
> which grows much faster than log n.
While `O(√n)` is much larger than `O(log n)`, it is still much smaller than `O(n)`.
It will also complicate the timing attack, since the operation time on a
collided bin grows more slowly.
If stronger safety guarantees are needed, then a balanced tree should be used.
Scapegoat and splay trees only offer amortized complexity guarantees, whereas
treaps require an entropy source and have higher constant factors in practice.
Self-balancing structures such as 2-3 or red-black trees require additional
bookkeeping information. We can hope to reduce rebalancing cost by realizing
that the output bits of strong `H` functions are uniformly distributed. When
using them as keys instead of the original message `m`, recent relaxed balancing
schemes such as left-leaning red-black or weak AVL trees may require fewer tree
rotations to maintain their invariants. Note that `H` already determines the
bin, so we should only use the remaining bits. 64-bit hashes are likely
sufficient for this purpose, and HighwayHash generates up to 256 bits. It seems
unlikely that attackers can craft inputs resulting in worst cases for both the
bin index and tree key without being able to generate hash collisions, which
would contradict the security claims of strong hashes. Even if they succeed, the
relaxed tree balancing still guarantees an upper bound on height and therefore
the worst-case operation cost. For the AVL variant, the constant factors are
slightly lower than for red-black trees.
The second proposed approach uses augmented/de-amortized cuckoo hash tables.
These guarantee worst-case `log n` bounds for all
operations, but only if the hash function is 'indistinguishable from random'
(uniformly distributed regardless of the input distribution), which is claimed
for SipHash and HighwayHash but certainly not for weak hashes.
Both alternatives retain good average case performance and defend against
flooding by limiting the amount of extra work an attacker can cause. The first
approach guarantees an upper bound of `log n` additional work even if the hash
function is compromised.
In summary, a strong hash function is not, by itself, sufficient to protect a
chained hash table from flooding attacks. However, strong hash functions are
important parts of two schemes for preventing denial of service. Using weak hash
functions can slightly accelerate the best-case and average-case performance of
a service, but at the risk of greatly reduced attack costs and worst-case
performance.
## Third-party implementations / bindings
Thanks to Damian Gryski and Frank Wessels for making us aware of these
third-party implementations or bindings. Please feel free to get in touch or
raise an issue and we'll add yours as well.
By | Language | URL
--- | --- | ---
Damian Gryski | Go and x64 assembly |
Lovell Fuller | node.js bindings |
Vinzent Steinberg | Rust bindings |
Frank Wessels & Andreas Auernhammer | Go and ARM assembly |
Phil Demetriou | Python 3 bindings |
> **_NOTE:_** For highwayhash-cffi, please note an [issue]()
has been reported ([merge request]()).
## Modules
### Hashes
* c_bindings.h declares C-callable versions of SipHash/HighwayHash.
* sip_hash.cc is the compatible implementation of SipHash, and also provides
the final reduction for sip_tree_hash.
* sip_tree_hash.cc is the faster but incompatible SIMD j-lanes tree hash.
* scalar_sip_tree_hash.cc is a non-SIMD version.
* state_helpers.h simplifies the implementation of the SipHash variants.
* highwayhash.h is our new, fast hash function.
* hh_{avx2,sse41,vsx,portable}.h are its various implementations.
* highwayhash_target.h chooses the best available implementation at runtime.
### Infrastructure
* arch_specific.h offers byte swapping and CPUID detection.
* compiler_specific.h defines some compiler-dependent language extensions.
* data_parallel.h provides a C++11 ThreadPool and PerThread (similar to
OpenMP).
* instruction_sets.h and targets.h enable efficient CPU-specific dispatching.
* nanobenchmark.h measures elapsed times with < 1 cycle variability.
* os_specific.h sets thread affinity and priority for benchmarking.
* profiler.h is a low-overhead, deterministic hierarchical profiler.
* tsc_timer.h obtains high-resolution timestamps without CPU reordering.
* vector256.h and vector128.h contain wrapper classes for AVX2 and SSE4.1.
By Jan Wassenberg <jan.wassenberg@gmail.com> and Jyrki Alakuijala
<jyrki.alakuijala@gmail.com>, updated 2018-10-02
This is not an official Google product. | https://sources.debian.org/src/highwayhash/0~git20200803.9490b14-2/README.md/ | CC-MAIN-2021-39 | refinedweb | 3,117 | 55.84 |
Two possible ways, using different (optional, but highly recommended)
built-in modules:
(1) Using module struct, you can convert almost any binary data (in
your host's format and byte order) to Python. In this case it would
be:
import struct
(Integer,) = struct.unpack('l', f.read(4))
This is the appropriate method if you are reading something like a
header structure; the format string can specify a list of types
describing the header, e.g. 'hhl' would be two shorts and a long,
converted to a triple of three Python integers.
Notes:
(a) struct.unpack() always returns a tuple, hence the funny left-hand
side of the assignment
(b) the choice of a format letter is machine dependent -- 'l' is a
machine 'long' (usually 4 bytes but can be 8 e.g. on DEC alpha or HP
snake); 'i' is a machine 'int' (usually 4 bytes but can be 2 e.g. on
Mac or PC)
(c) there is currently no way to specify an alternate byte order
(2) Using module array, you can efficiently read arrays of simple data
types:
import array
a = array.array('l')
a.read(f, 1)
Integer = a[0]
This is the appropriate method if you want to read many items of the
same type, e.g. to read 1000 integers in one fell swoop, you would
use a.read(f, 1000). The same remark about the choice of format
letter applies; but there is a method to byte swap an entire array
(a.byteswap()) -- currently undocumented.
--Guido van Rossum, CWI, Amsterdam <Guido.van.Rossum@cwi.nl> | http://www.python.org/search/hypermail/python-1993/0393.html | crawl-002 | refinedweb | 264 | 61.16 |
qptrqueue.3qt man page
QPtrQueue — Template class that provides a queue
Synopsis
#include <qptrqueue.h>
Public Members
QPtrQueue ()
QPtrQueue ( const QPtrQueue<type> & queue )
~QPtrQueue ()
QPtrQueue<type> & operator= ( const QPtrQueue<type> & queue )
bool autoDelete () const
void setAutoDelete ( bool enable )
uint count () const
bool isEmpty () const
void enqueue ( const type * d )
type * dequeue ()
bool remove ()
void clear ()
type * head () const
operator type * () const
type * current () const
Protected Members
virtual QDataStream & read ( QDataStream & s, QPtrCollection::Item & item )
virtual QDataStream & write ( QDataStream & s, QPtrCollection::Item item ) const
Description
The QPtrQueue class is a template class that provides a queue.
Member Function Documentation
QPtrQueue::QPtrQueue ()
Creates an empty queue with autoDelete() set to FALSE.
QPtrQueue::QPtrQueue ( const QPtrQueue<type> & queue )
Creates a queue from queue.
Only the pointers are copied; the items are not. The autoDelete() flag is set to FALSE.
QPtrQueue::~QPtrQueue ()
Destroys the queue. Items in the queue are deleted if autoDelete() is TRUE.
bool QPtrQueue::autoDelete () const
Returns the setting of the auto-delete option. The default is FALSE.
See also setAutoDelete().
void QPtrQueue::clear ()
Removes all items from the queue, and deletes them if autoDelete() is TRUE.
See also remove().
uint QPtrQueue::count () const
Returns the number of items in the queue.
See also isEmpty().
type * QPtrQueue::current () const
Returns a pointer to the head item in the queue. The queue is not changed. Returns 0 if the queue is empty.
See also dequeue() and isEmpty().
type * QPtrQueue::dequeue ()
Takes the head item from the queue and returns a pointer to it. Returns 0 if the queue is empty.
See also enqueue() and count().
void QPtrQueue::enqueue ( const type * d )
Adds item d to the tail of the queue.
See also count() and dequeue().
type * QPtrQueue::head () const
Returns a pointer to the head item in the queue. The queue is not changed. Returns 0 if the queue is empty.
See also dequeue() and isEmpty().
bool QPtrQueue::isEmpty () const
Returns TRUE if the queue is empty; otherwise returns FALSE.
See also count(), dequeue(), and head().
QPtrQueue::operator type * () const
Returns a pointer to the head item in the queue. The queue is not changed. Returns 0 if the queue is empty.
See also dequeue() and isEmpty().
QPtrQueue<type> & QPtrQueue::operator= ( const QPtrQueue<type> & queue )
Assigns queue to this queue and returns a reference to this queue. Only the pointers are copied; the items are not.
QDataStream & QPtrQueue::read ( QDataStream & s, QPtrCollection::Item & item ) [virtual protected]
Reads a queue item, item, from the stream s and returns a reference to the stream.
The default implementation sets item to 0.
See also write().
bool QPtrQueue::remove ()
Removes the head item from the queue, and returns TRUE if there was an item, i.e. the queue wasn't empty; otherwise returns FALSE.
The item is deleted if autoDelete() is TRUE.
See also head(), isEmpty(), and dequeue().
QDataStream & QPtrQueue::write ( QDataStream & s, QPtrCollection::Item item ) const [virtual protected]
Writes a queue item, item, to the stream s and returns a reference to the stream.
The default implementation does nothing.
See also read().
If you find errors in this manual page, please report them to qt-bugs@trolltech.com. Please include the name of the manual page (qptrqueue.3qt) and the Qt version (3.3.8).
Referenced By
The man page QPtrQueue.3qt(3) is an alias of qptrqueue.3qt(3). | https://www.mankier.com/3/qptrqueue.3qt | CC-MAIN-2017-39 | refinedweb | 508 | 69.99 |
This short article explains how to create a cryptographic checksum (a hash) with the help of the hashlib++ library. hashlib++ is a simple, easy-to-use library for creating a cryptographic checksum, called a "hash". The library is written in plain C++, and should work with every compiler and platform. hashlib++ is released under the BSD-license and is therefore free software.
hashlib++ provides so-called "wrappers" for each supported hash function, which simplify the creation of the relevant hash. Instead of implementing the full algorithm for the hash function, you only have to instantiate the desired wrapper and call a member function like getHashFromString() or getHashFromFile().
getHashFromString()
getHashFromFile()
After downloading the small library from the project's website, you have to include the base class "hashwrapper.h" and the header file of the wrapper you want to use:
#include "hashwrapper.h"
#include "sha1wrapper.h"
#include "md5wrapper.h"
After that, you can create wrapper objects:
hashwrapper *md5 = new md5wrapper();
hashwrapper *sha1 = new sha1wrapper();
Once a wrapper has been instantiated, you can basically call the member functions getHashFromFile() and getHashFromString() to create a hash from a file or string.
std::string mytexthash = md5->getHashFromString("Hello World");
std::string myfilehash = md5->getHashFromFile("README.TXT");
And that's all! When you are done, remember to delete the wrapper objects:
delete md5;
delete sha1;
Now you can add the corresponding *.cpp files (for MD5, for example: md5.cpp and md5wrapper.cpp) to your project and start compiling.
I have attached libtest.cpp, which is a full example of how to use hashlib++.
Today on the podcast, Wes talks with Mike Milinkovich. Mike is the executive director of the Eclipse Foundation.
Key Takeaways
- Java EE, unlike Java SE, has always been a multi-vendor ecosystem. It made sense for everyone for Oracle to invite their partners to be involved in the governance of the specification for Java EE for it to continue moving forward. This is the reason for moving Java EE into the Eclipse Foundation as Jakarta EE.
- The current plan is for the Eclipse Foundation to get a copyright license to evolve the text of the specification and not a license to the trademarks of Java EE itself.
- The javax namespace must remain as is. For it to be evolved, a different namespace must be used.
- The javax namespace is a trademark of Oracle. Because of this, there are quality controls that Oracle required for its evolution. Ultimately because of those controls, the Eclipse Foundation felt it was better to branch javax into a different namespace and evolve it separately solely under Jakarta EE governance.
- Jakarta EE 8 is targeted to be released around Oracle Code ONE. Jakarta EE 8 will be exactly the same as Java EE 8. The only difference is it will be licensed from Jakarta, not Oracle, and only requires membership in the Working Group.
- Beyond EE 8, the release cycle, the plan for moving the javax namespace (and keeping compatibility with both the old javax namespace and the new namespace), and new specifications for inclusion into Jakarta EE are still active areas of discussion.
- Unrelated to the discussion of Jakarta EE (but discussed in the same board meeting), an attempt to bundle OpenJ9 with the Eclipse IDE failed because of licensing restrictions around a certified Java Runtime. OpenJ9 is certified when acquired through an IBM channel, but not when downloaded directly for us.
Show Notes
What has happened since you stood on stage in 2017 announcing the move of JavaEE to Eclipse foundation?
- 01:45 There’s two parts of the story – the first, what happened before the announcement on the stage, and the second is what’s happened since.
- 01:55 Part of the reason why the move happened in the first place is that JavaEE has always been a multi-vendor ecosystem.
- 02:00 WebSphere and JBoss are just as big a name as WebLogic, for example.
- 02:05 There’s always been multiple implementations, and it’s always been a pretty vibrant ecosystem in terms of products making pretty big bucks in this space.
- 02:15 Oracle realised with the move to cloud and the other things in their organisation, that inviting their partners to participate in the future direction was better for them.
- 02:35 Since the move was announced, a lot of really good things have happened.
- 02:40 GlassFish is now Eclipse GlassFish – millions of lines of code have been moved over.
- 02:50 Another piece of big news is that Oracle, for the first time in its history, open-sourced the JavaEE Testing Compatibility Kit (TCK).
- 03:00 GlassFish has been open-sourced for a while, but the TCKs were never open-sourced until they were transferred over to the Eclipse Foundation.
- 03:10 A lot of good engineering work has happened in order to move them over to the Eclipse Foundation.
- 03:20 In parallel with doing that, over about a year, negotiations happened between Oracle and the Eclipse Foundation about the legal issues over moving the specifications developed under the JCP to the Eclipse Foundation.
- 03:45 One of the aspects of trademark law is that when you license a trademark, it’s completely normal to have quality requirements to go with that trademark license.
- 03:50 If you’re going to license Kleenex, it’s got to be “this” good.
- 04:05 When we got into the details of what that would mean for JavaEE, it got very complicated, and we got into a corner where we couldn’t find an agreement that worked for both of us.
- 04:15 Then we tried Plan B, which is when we don’t get a license for the trademarks, but just get the copyright license for the specifications.
- 04:30 That’s where we are today.
There was also a good 18 months where there wasn’t much movement, right?
- 04:40 One of those things where there was no visible movement, but there was a lot of internal discussions on what was going to happen.
- 04:50 There was also getting those companies up on stage in 2017 – there was a lot of planning about making that happen as well.
So with the specs, you can make your own implementations, right?
- 05:10 It’s not about the implementations - it’s about the specs themselves.
- 05:15 We have to go out to IBM and RedHat and SAP and Pivotal that have contributed over the years to license their contributions.
- 05:25 It’s a license to evolve the text of those specifications over time.
- 05:30 The goal is that Servlet needs to live and breathe, we need a license to be able to do that.
What kind of license?
- 05:40 It’s a copyright license - it says that you have the right to copy, distributed, sub-license, create derivatives works - all of those appears in this document.
JavaEE is Servlets, JSF, EL, JaxRS, XML, DI, JPA, JMS …
- 06:15 One of the key points is that it’s easy to say “I don’t use JavaEE any more” – however, these specifications are fundamental to many cloud infrastructure and applications.
- 06:45 It’s trite to say JavaEE is dead – the APIs are still very much used, and there are also going to be legacy applications running in JavaEE servers for 20 years.
- 07:10 It might not be your first choice for building a new application, but if you like getting paid, or your bank not loosing your money, you’re probably using JavaEE every day.
- 07:40 Our goal is to take all of these specs and adding additional specs.
- 07:50 There are millions of developers that know Java, thousands of corporations that have deep Java skills in their organisations.
- 08:00 We want to leverage those skills and institutional knowledge in those corporations to create the right platform for the next generation of applications written in Java.
- 08:10 This is about the future of the platform, not the past.
What was being asked that couldn’t be met for reusing the javax namespace?
- 08:30 Fundamentally the issue was that Oracle considers the javax namespace to be a trademark of Oracle.
- 08:35 They wanted elements of control over the evolution of things inside the javax namespace that we did not want to give up.
- 08:55 The Eclipse Foundation is a not-for-profit 501(c)6 organisation, so we have legal requirements to be vendor neutral, and to act in the best interests of the industry, and in particular not for any one vendor or technology.
- 09:10 There were some things that despite good intentions on both sides, we could not come to an agreement on.
What does it actually mean?
- 09:30 We can actually use the javax namespace, we just can’t change it.
- 09:35 There are some additional requirements; for as long as we are using the javax namespace, there are conditions that it can only use a certified JavaSE runtime.
- 09:40 This is a good segue into how we see this evolving in the future.
- 09:45 The first thing we need to do is to create a Jakarta-branded specification that replaces JavaEE 8.
- 09:55 JakartaEE 8 is going to be exactly the same as JavaEE 8, including the javax namespace.
- 10:10 The first release of the specifications that come from the JakartaEE namespace are going to be exactly the same as JavaEE.
- 10:15 If you’re a developer and listening to this, you might be wondering why bother if you’re doing the same thing?
- 10:25 The answer is that the IP rules are completely different under Jakarta than under Java.
- 10:30 Under the JavaEE process, if you want to put the coffee cup logo on your project, you have to sign a license with Oracle.
- 10:40 Typically those JavaEE licenses are in the millions of dollars per year.
- 10:55 Under the Jakarta case, the requirement is that you have to be a member of the JakartaEE working group.
- 11:00 The TCK and specification licenses are far more liberal than the licenses that were provided under the JCP.
- 11:10 From a technical point of view, it’s exactly the same – but from a legal and business perspective, there are huge advantages for JakartaEE 8 to exist for the sake of the ecosystem.
- 11:25 What I predict will happen is that the JakartaEE 8 platform is going to be used by many companies for many years, like a long term support.
- 11:35 We’re seeing the same thing in the JavaSE world, where many companies are sticking with JavaSE 8, which is the one before the module system was put in.
- 11:45 It’s what they know, it’s the Java they know, so they are planning on running on JavaSE 8 for many years – I think it’s possible that the same thing will happen for JakartaEE 8.
- 11:55 One of the good things is that all of the companies involved are economically motivated to switch to this new brand the day that it’s available.
- 12:10 I think you’re going to see a very rapid switch of the whole ecosystem over to the JakartaEE branding.
The other problem was that the Eclipse IDE be running on a JVM certified by Oracle rather than other runtimes?
- 13:00 We are a vendor-neutral open-source foundation.
- 13:05 Our goal is that the only restrictions we want to apply to our projects is those that arise from OSI-approved open-source licenses.
- 13:15 Obviously the GPL has a different set of constraints than the BSD license.
- 13:20 Strictly speaking this is unrelated since we’re talking about Jakarta, since we’re talking about the Eclipse IDE, but it was in the same board meeting so it appeared in the same minutes.
- 13:30 Oracle wanted to insist that if we shipped a Java runtime from Eclipse that it had to be from Oracle or one of its licensees.
- 13:40 Obviously we had problems with that and couldn’t agree to it.
Why were Jakarta EE and Eclipse IDE mentioned together?
- 13:50 They weren’t – it just so happened that these two topics both appeared in the same board meeting.
- 13:55 They got conflated in people’s minds, but if you read the meeting minutes, they were separate conversations.
So why was Oracle in a position to say that they wanted the Eclipse IDE to run with an Oracle JDK?
- 14:10 The first thing to understand is that the Eclipse IDE has been around for 18 years, and it’s never shipped with a runtime.
- 14:25 So we had an idea to make it easier for our users and adopters to be able to ship a Java runtime with the IDE.
- 14:35 Originally they were suggesting a runtime based on Eclipse OpenJ9, which is the open-source project at Eclipse that builds a Java virtual machine.
- 14:45 IBM certify and use that JVM for their products, but it’s not certified if you just download it and use it from Eclipse.
- 14:55 That dilemma was what caught Oracle’s attention, and they said if you want to call it Java then it has to be certified.
- 15:10 It’s not preventing us from doing anything that we’ve been doing for many years.
- 15:15 It is preventing us from doing something that we thing would make the lives of our Eclipse IDE users easier.
- 15:25 We have a lot of IDE users that use Eclipse for C++ or for PHP, and those folks don’t necessarily have a JRE installed on their laptop - those were the kind of users we were targeting.
When are we going to see JakartaEE 8?
- 15:50 JakartaEE 8 is going to be exactly the same as JavaEE 8, so from a binary compatibility and API point of view, it's going to be using the javax namespace.
- 16:00 You’ll be able to take your existing applications and move them over really quickly, and we expect all of the vendors to jump on the JakartaEE branding right away.
- 16:10 That’s the first release; the timetable for getting that out is before Oracle CodeOne in September 2019.
- 16:30 There’s a surprising amount of work that needs to happen to allow us to ship exactly the same specs.
- 16:35 We have to run each one of the specs through the new JakartaEE spec process.
- 16:40 Every one of those we have to create the projects, get the scope statements right, get the documents and copyright holders to allow us to change the documents, run it through approval process.
What is the JakartaEE spec process?
- 17:05 Part of this journey included creating a spec process for the Eclipse Foundation from scratch.
- 17:15 If you go back a year and a half ago, the Eclipse Foundation did not do specs.
- 17:20 We created a spec process that is an overlay on top of our development process.
- 17:25 The Eclipse Foundation is similar to the Apache Software Foundation, in that we have a particular governance model that governs how projects operate.
- 17:35 They have a lifecycle, they go through incubation, there’s meritocracy rules about how committers are added, that sort of stuff.
- 17:40 We have an Eclipse Way (which is somewhat similar to the Apache Way).
- 17:45 The idea for the spec process was to create a well-grounded professional specification process that is an overlay on top of how we run open-source projects.
- 18:00 We know how to run open-source projects: we’re good at those, so what are the additional things we need to add in order to create specifications?
- 18:10 One of the things that most developers might not realise is that spec processes are significantly different to open-source processes.
- 18:20 For example, we know that if you contribute to an open-source project, you are making contributions under the license that’s being used by the open-source project.
- 18:25 Most modern open-sources licences have a royalty-free patent grant on them.
- 18:40 The patent grants on specifications are far broader, in the sense that our entire patent portfolio, whether it reads on contributions we made to the spec or not, is licensed to all independent implementations of the final specification.
- 18:55 That’s a much bigger and broader patent license than you typically get on open-source projects.
- 19:00 As a result, companies that join spec processes want to see a little bit more formality, because they’re contributing potentially great swathes of their patent portfolio to the specification.
- 19:20 That’s why we think of spec processes as being slow and cumbersome; it’s because they’re designed to be somewhat slow and cumbersome.
- 19:25 To the people who live in this world, that’s a feature, not a bug.
- 19:30 That’s what we did in the spec process: when do those licenses vest; how does someone join a spec project; how do we get companies to sign up agreements; how might their patents be used.
- 19:50 It’s basically building this toolkit of formality on top of our development process.
- 20:00 That said: our spec processes are run as close to an Eclipse Foundation open-source project as we possibly can.
- 20:05 Compared to the JCP, which is the predecessor, it’s going to be a much more open and agile process that was happening with the JCP.
What does the JakartaEE release process look forward after 8?
- 20:30 There’s a medium-term topic and a longer-term topic.
- 20:35 The medium-term topic is: what do we do for the first release after JakartaEE 8 is done?
- 20:40 The content of JakartaEE 9 is a vigorous debate on the (jakarta-dev-)platform mailing list right now.
- 20:50 There are two schools of thought: one is the big-bang, let’s take the namespace of javax and switch it to jakarta all at once.
- 21:00 There are some technical things that become possible, such as using class loaders that will automatically map javax namespace to jakarta namespace so you can get binary compatibility.
- 21:15 The incremental approach is to replace the namespace on an incremental basis when we want to update the spec for a package.
- 21:25 We haven’t even decided the content of the next release at that level of granularity, let alone talking about new specs or anything like that.
- 21:35 We purposefully and explicitly said that we are going to let this debate go until sometime in mid-June, and we’re going to make a decision and go forward.
- 21:45 Until we make that decision, we don’t have a date for JakartaEE 9.
- 21:55 Let’s say that we do a release for JakartaEE 9 and the only thing that we do is this mechanical translation of the namespace - we could probably crank that out in 8 weeks.
- 22:05 If we decide to add new specs or if we do an incremental approach, that would take longer.
- 22:10 Looking further down the line, once we get the JakartaEE 9 release out the door, what are we going to do about a standard release train cadence, whether we’re going to follow a release train - these are questions we aren’t going to answer until sometime in 2020.
- 23:05 The one thing we haven’t talked about yet, which is important: in parallel with these activities, MicroProfile continues to move quickly.
- 23:15 MicroProfile has a release cadence of 3 spec releases per year.
- 23:20 They just released MicroProfile 3.0, so they are going to move their specs forward quickly: innovation continues to happen while we are sorting out the Jakarta topics.
What is the distinction between MicroProfile and JakartaEE?
- 23:40 Think of MicroProfile as being a distinct overlay on top of pieces of JavaEE.
- 23:50 MicroProfile has a whole set of specifications - 16 - and they are related to the kinds of APIs and technologies needed to deliver micro-service based applications written in Java.
- 24:10 It uses a couple of specs from JavaEE: CDI, JaxRS and a few others.
- 24:20 They pulled a couple of specs from JavaEE, added a bunch of stuff on top that’s useful for micro-services, and that’s what MicroProfile is - so it’s distinct from JavaEE.
What’s the long term goal for the Eclipse Foundation?
- 24:45 The Eclipse Foundation is not a company: we don’t have a particular agenda here.
- 24:50 What we’re good at is getting companies - often direct competitors - to collaborate on building technology platforms.
- 25:00 We have IBM, Oracle, RedHat, Fujitsu, Tomitribe, Payara - all of the existing incumbents in the JavaEE space involved in this.
- 25:10 Our goal with both JakartaEE and MicroProfile is to help those companies define a multi-vendor platform for cloud-native Java.
- 25:25 There are millions of developers who know this technology, and thousands of companies who have deep institutional memory, knowledge and skills on building Java applications - they want to modernise and move it forward into the cloud.
- 25:40 We feel that the technologies that are being developed at the Eclipse Foundation are going to define the future for Java development in the cloud.
- 25:50 I would strongly encourage developers to learn more about the specs and the projects that are implementing them and to engage and participate - the more developers we have, the better the end result is going to be for everyone involved. | https://www.infoq.com/podcasts/milinkovich-jakarta-ee/?topicPageSponsorship=c1246725-b0a7-43a6-9ef9-68102c8d48e1&itm_source=podcasts_about_java&itm_medium=link&itm_campaign=java | CC-MAIN-2020-34 | refinedweb | 3,409 | 66.98 |
Why would anybody want to simulate hardware when developing a device driver? This article lays out the problem and proposes an approach to solve it. Part 1 of this series provides a broader understanding of the issues and implementation details.
You can apply these methods and strategies to many operating systems and hardware architectures. These strategies work for Linux®, VxWorks, and Windows® NT/2000 operating systems on IBM PowerPC® 405GP, Intel x86®, MIPS, and Motorola PPC architectures. This article series focuses on Linux on x86.
This article describes how to debug interrupts and the Interrupt Service Routine (ISR) in a systematic manner, and gives detailed explanations and algorithms that let you step through the source on all possible paths and flows of the ISR. These techniques are helpful across all the ISR mechanisms and their combinations, such as slow interrupts, fast interrupts, tasklets, bottom halves, and so on. Finally, this article discusses the hardware and software environment you need to achieve these objectives and run the test cases.
A scenario for simulation
This article helps device driver developers test the interrupt service routine as much as possible by simulating the various interrupts. Following a successful implementation of this simulation technique, you can also perform a Functional Verification Test (FVT) that may involve the device driver, application programming interface (API), and the application.
Consider a hypothetical device driver. Suppose your device driver must be written from scratch, and you do not have the actual hardware while developing this device driver. Your driver is complex: it could be used by multi-threaded applications. The driver will perform hardware register accesses and advanced programming by making use of
mmap(), for example.
Your device is going to generate different types of interrupts -- multiple and nested interrupts at the same time -- and that leads to the design and implementation of a complex Interrupt Service Routine (ISR). Your driver will perform some data manipulation based on a sequence of interrupts. This particular device driver is meant for an embedded system where you may not have a very sophisticated debugging environment. This device driver should perform some diagnostics of the device itself. And finally, the driver is tightly coupled with various APIs and applications, and you may have to debug a device driver where interrupt losses and out-of-sequence interrupts happen.
This is a bit more complex than regular porting work.
The requirements
You will need two different systems for the strategies described here.
The first setup: Development/host machine
The first setup requires any Linux distribution and the device for which you are writing the device driver. You will also be applying a patch (extra routines) to your device driver. These extra routines are required only for this interrupt simulation.
You will also have to write a kernel thread that will generate the various interrupts. As part of implementation, we may break this thread into a few support routines. I explain this in detail below.
The second setup: Test machine
The second setup requires the kernel debugger-enabled kernel.
To enable the kernel debugger, you'll need the kernel debugger patches. There are two well-known kernel debuggers available. I have chosen kgdb instead of kdb, because kgdb lets you view the C source code.
Source-level debugging of ISR is the main objective here. You will need the device for which you are writing the driver and a null-modem (serial) cable for remote debugging. You need to apply the kgdb patch on the kernel, build an image with kernel debugging support, then run that kernel on the test machine.
You will control the test machine from the development/host machine through a serial port. Once you are in debug mode, the target machine's kernel stops. Even the jiffies (the kernel's timer-tick counter) are not altered, and this lets you debug the Interrupt Service Routine.
More setup caveats
Tip: Take extra care for drivers that include features like TTL (Time to Live), such as connection-oriented networking device drivers.
You will have complete control over raising interrupts (simulated) in the example described throughout this article.
When you need to debug any particular interrupt (one interrupt at a time), you will use the debugger-enabled setup. When you run the rigorous test (sequence of interrupts), you will use the first setup that does not require kernel debugger support. A combination of both these setups will give you the best result.
Before proceeding further, I will describe an
ioctl interface that you will be using in the two approaches.
The ioctl interface
A new
ioctl command should be added to the device driver to control the interrupt simulation from the test application. This
ioctl can be used in the FVT test application code. This
ioctl interface is meant for our hypothetical driver. Actual implementation would depend on the device and the driver.
In our example, part of the interrupt handling is carried out at the application level and part is in the driver. To achieve this you need application threads and kernel threads. The kernel thread and the application thread will handshake with each other.
This special purpose
ioctl interface will be able to control the interrupt generation sequence and the number of interrupts to be generated through the test application.
I will discuss two different sets of interrupts, normal interrupts and error interrupts.
A sophisticated way to have more control over raising interrupts and testing the ISR is to follow a two-tier architecture, having a special
ioctl function that would give the user application the freedom to raise a particular interrupt or sequence of interrupts at specified timings and the
ioctl implementation in the kernel land. In this approach, you could have more control over interrupt generation. You have to set the appropriate fields and then pass the same to the special
ioctl that would in turn either raise the interrupts or signal the kernel thread to raise the interrupts.
The ioctl structure
Listing 1 demonstrates the structure of
ioctl.
Listing 1. The ioctl structure
struct simulation_struct {
    struct interrupt_type EventsArray[MAX_INTR_TYPE];
    unsigned iteration_count;
    unsigned num_events;
};
The explanation of this code goes like so:
- EventsArray[MAX_INTR_TYPE] is an array of struct interrupt_type, as defined based on the device and the different types of interrupts.
- iteration_count is the count that controls the number of iterations of the interrupt simulation.
- If it is 0, raise all MAX_INTR_TYPE interrupts one by one (preprogrammed in the interrupt simulation module).
- If it is 1, raise the normal interrupts in sequence only once.
- If it is greater than 1 and less than MAX_COUNT, raise the normal interrupts in sequence iteration_count times.
- If it is MAGIC_NUMBER, get the data/interrupt register values from the structure passed and generate the interrupts as per the structure values. In this case, num_events will give the number of interrupts to be generated.

MAX_INTR_TYPE, MAX_COUNT, and MAGIC_NUMBER should be defined by you based on your needs and the actual hardware. As a rule of thumb, MAX_COUNT should always be less than MAGIC_NUMBER.

- num_events is the number of valid entries in EventsArray. The minimum value is 1 and the maximum value is MAX_INTR_TYPE. num_events interrupts will be generated as per the input passed to EventsArray.
The ioctl command
Listing 2. The ioctl command
ioctl name:     INTR_SIMULATE
Input:          Pointer to struct simulation_struct
Function type:  New feature
The
ioctl command works like so:
- If the interrupt simulation status flag is set, return
EBUSY. This scenario arises when interrupt simulation is already in progress.
- Check the initialization status of the driver. If the status is not good, then return the appropriate error code.
- Copy the contents of arg (a pointer to struct simulation_struct) to the global structure. Make sure this copying happens inside the critical section by holding a spinlock. Note: The kernel thread will read the global structure, and interrupts will be generated based on the elements in the global structure. The spinlock is needed here, because the kernel thread will be running independently and will access the global structure.
- Set the interrupt simulation status flag to indicate that interrupt simulation is in progress.
- Wait until the interrupts are generated.
- Once the interrupts are generated, reset the interrupt simulation status flag and return success. This gives control back to the application.
The
ioctl command returns the following codes:
- On successful execution, it returns 0.
- Otherwise, it returns the appropriate error status flag.
keventd should run to create a new kernel thread.
The strategies
The three main strategies I would use for interrupt and hardware simulation are:
- Software-generated IRQ
- Using the kernel debugger
- Using the polling thread
Each strategy requires that a kernel thread is run.
Strategy 1: Software-generated IRQ
The primary aim of this approach is to simulate interrupts and test whether the ISR handles all possible interrupts. You can automate this activity and simulate the conditions so that the ISR would be invoked as in an actual runtime environment.
The kernel thread that we implement will raise the interrupt (the software-generated IRQ) for our device driver (and not for our card) by making use of the
INT assembly instruction.
Before the interrupt is raised, all other pre-requisites (setting up any address, data, etc.) for that interrupt should be handled in the kernel thread. Once the interrupt has been raised, the driver's ISR will be called. In the ISR you will not read the actual device's registers; instead you will have to read values from the local variables that are assigned by the kernel thread (simulation module). You should actually duplicate the device's registers. Before you raise the interrupt in the kernel thread, you will have to set the bit/mask values on these local registers.
Depending on your device driver, you may need to have a copy of the buffers, if the device has any. This depends on the implementation and the device. Since you have already set the (simulated) register values appropriately, the ISR will process normally as if it was a real interrupt. You might notice that this is more of a hardware simulation and not just interrupt simulation.
This approach requires some changes in the ISR. The code that accesses the actual card registers should now be changed to access the local variables that mimic the device's registers.
To achieve this mapping, you may include conditional compilation
#ifdef in places where you access the registers. To limit the number of
#ifdefs in the ISR, you should
#define all the registers and keep all these
#defines in a separate header file. On top of these register
#define macros, you should also define another macro that dictates whether the ISR will run in simulation mode or in the original interrupt context. For example:
Listing 3. Example conditional compilation
#ifdef INTR_SIMULATION                     // Only for interrupt simulation
#define PCIINTRSTATUS local_pciintrstatus  // Access the local variable
#else                                      // Actual Interrupt
#define PCIINTRSTATUS Dev->DataStruct.ulIpcIntrStatus
#endif
In the implementation of the driver, wherever you access the device's registers you have to use these
#defined macros instead of using the structure variables directly. This also provides more clarity, because you have avoided the multiple indirections of union or structure variables.
You may extend this
#ifdef technique to the API and to the applications so that you can link this interrupt-simulated driver to those modules to perform levels of FVT testing. For the unit testing of this interrupt-simulated driver (unit testing of ISR), you can have your own test application that will simulate some functionality of the APIs and applications that are part of the overall system.
Code changes for Strategy 1
The following code changes are necessary to use software-generated IRQs:
- All the registers being accessed need to be defined (
#define).
- A separate
ioctlcommand should be introduced to have control over interrupt simulation (see the section on the
ioctlinterface).
- A separate kernel thread should be written to raise the interrupt. This kernel thread will get registered during
open in the device driver, on successful registration (request_irq) of the ISR.
Use the kernel API
kernel_thread to register this kernel thread in this pseudo code:
Listing 4. Starting the kernel thread
#ifdef INTR_SIMULATION
//
// Start the Kernel Thread
//
start_kthread( raise_intr_thread, &raise_intr );
#endif // end of INTR_SIMULATION
The function
start_kthread launches the thread by calling kernel API
kernel_thread.
This kernel thread should be destroyed in the
close of the device driver in this pseudo code:
Listing 5. Stopping the kernel thread
#ifdef INTR_SIMULATION
//
// Stop the Kernel Thread
//
stop_kthread( raise_intr_thread );
#endif // end of INTR_SIMULATION
These parts of the code (kernel thread registration and destruction) should again be within the
#define INTR_SIMULATION conditional compilation block.
- A test application should be written to handle these interrupts. This test application should simulate some part of the functionality of the APIs and applications to handle the raised interrupts. In our example, the test application will spawn threads and wait (blocked) for interrupts to release the threads. This blocking functionality is achieved by making use of interruptible_sleep_on with a mutually exclusive lock inside the driver's ioctl function. Whenever an interrupt occurs, one thread will be woken up (wake_up_interruptible) and resume execution based on the interrupt.
- The special ioctl function INTR_SIMULATE needs to be called to simulate the interrupts.
Strategy 2: Using the kernel debugger
The primary aim of this approach is to step through the source code of the tasklet and/or the bottom half which services the interrupts. Since you step through the kernel in this approach, you will not have the exact timing sequence. As mentioned earlier, extra care needs to be taken for device drivers like this that involve features like TTL (in this case one that uses connection-oriented networking device drivers).
This strategy lets you examine the device driver's complete code flow on a per-interrupt basis. This approach could be used along with the first strategy and you can use this approach to test the driver with the actual target setup.
This strategy requires the kernel thread to raise the interrupt so that the device's ISR will get called. You will have to place a break point in the tasklet or bottom half. The kernel will stop at this point when the ISR schedules the tasklet/bottom half. Once the break point is hit, you can step through the source and view or modify the variables.
In this strategy, you'll access the device's register the same way as in Strategy 1 -- using local register variables. If the device and the target architecture permit, you could access the device's register through the debugger.
By effectively making use of the kernel debugger, you can reduce the work of the kernel thread that was described earlier. With this approach, you could simulate the various conditions, sequence, and variables. While you are in the tasklet, you will be able to modify the (local) register values at debug time and be able to step through all the paths and flow of the source code.
Required code changes for Strategy 2
All the code changes that are required for Strategy 1 also apply to Strategy 2. However, some of the initialization and prerequisite code in the kernel thread will not be required, because you will be able to achieve those initializations during the debugging session itself.
You can decide whether to implement everything in the source code or to change the parameters during runtime using the debugger. You will not need a larger number of threads, since this approach runs on a per-interrupt basis.
Strategy 3: Using the polling thread
This approach is designed to rigorously test the tasklet/bottom-half code. In this approach you will not raise the interrupt. You can test all the interrupt sequences (out of sequence) by using a polling technique. This approach may also be used in conjunction with the kernel debugger (Strategy 2).
You will need two kernel threads for the implementation of this strategy. The first one is similar to the kernel thread mentioned in the previous strategies except that it will not raise the interrupt. However, you will change the local register variables, and once you finish the initialization/prerequisites for a particular interrupt, you will indicate that fact to the second kernel thread (the polling thread).
The polling thread waits for the signal from the first thread. It could keep polling for the signal (change) to occur or it could just sleep. Once it gets the signal, it schedules the tasklet/bottom half (software interrupt). The tasklet/bottom half executes in the same context as when an interrupt occurs.
It is important to note that these tasklets/bottom halves will run close to the interrupt context (software interrupt). However, the polling thread will run in a normal process context.
Required code changes for Strategy 3
The following code changes are necessary to use the polling strategy:
- All the registers being accessed need to be defined (
#define).
- You will need two kernel threads in this approach.
- The first thread is similar to the one defined for Strategy 1, but it will not raise the interrupt. All other initializations for the interrupts should be carried out in this thread.
- You will need another separate polling thread, which will get notified by the first thread when to schedule a tasklet.
- You will need to use an interprocess communication (IPC) mechanism between these two threads.
- These kernel threads should be destroyed in the close of the device driver. These portions of the code (kernel thread registration and destruction) will again be in the #define INTR_SIMULATION conditional compilation.
Note: If you do not enable this conditional compilation flag, you will get the release version of the driver object that will be used in the target environment.
- The test application will not require much change. It will simulate some parts of the functionality of the APIs and applications to handle the raised interrupts. This test application will spawn threads and will keep waiting (blocked). The blocking functionality is achieved by making use of interruptible_sleep_on inside the driver's ioctl function. Whenever any interrupt occurs, these threads will be woken up (wake_up_interruptible) and resume execution based on the interrupt.
Note: Whenever we schedule the tasklets, blocked threads will start waking up and continue processing. Extra care must be taken not to infinitely block the kernel.
Designing kernel threads and test applications
The kernel thread(s) will be initialized in the
open entry point of the driver, provided
request_irq succeeds on successful registration of the interrupt service routine, for instance.
These threads will be destroyed in
close. The code to initialize and destroy the threads will be under
#ifdef INTR_SIMULATION, so that under normal compilation this code will not affect the release version of the driver object.
In this section, I'll examine
- two threads (an interrupt and polling thread),
- a test application, and
- test cases.
Interrupt thread
This thread keeps generating all possible interrupts based on the following algorithm:
Listing 6. Interrupt thread algorithm
1. Read the global interrupt simulation structure. If iteration_count is 0:
   1.1. For each and every iteration,
        1.1.1. Set the particular interrupt status bit.
        1.1.2. Do any other preparation, if required.
        1.1.3. If compiler option is polling mode (#ifdef POLLING),
               intimate interrupt status register change to the polling thread.
        1.1.4. Else, raise the card's interrupt by calling the INT mnemonic.
        1.1.5. Delay the thread.
   1.2. Continue until MAX_INTR_TYPE iterations (all MAX_INTR_TYPE possible
        interrupts once).

   Tip: You may use cpu_raise_irq or cpu_raise_softirq instead of using the
   INT mnemonic to make it portable between platforms, but make sure you are
   taking care of enabling and disabling interrupts in the proper place and
   sequence.

2. If iteration_count is 1, raise the normal interrupts (not the error
   interrupts) in their sequence.
   2.1. For each and every iteration,
        2.1.1. Set the particular interrupt status bit in the normal
               interrupts sequence.
        2.1.2. Do any other preparation, if required.
        2.1.3. If compiler option is polling mode (#ifdef POLLING),
               intimate interrupt status register change to the polling thread.
        2.1.4. Else, raise the device's interrupt by calling the INT mnemonic.
        2.1.5. Delay the thread.
   2.2. Continue until iteration MAX_NORMAL_INTR_TYPE (all
        MAX_NORMAL_INTR_TYPE normal interrupts once).

3. If iteration_count is greater than 1 and less than MAX_COUNT, raise the
   normal interrupts in sequence iteration_count times.
   3.1. For each and every iteration,
        3.1.1. Set the particular interrupt status bit in the normal
               interrupts sequence.
        3.1.2. Do any other preparation, if required.
        3.1.3. If compiler option is polling mode (#ifdef POLLING),
               intimate interrupt status register change to the polling thread
               for all the MAX_NORMAL_INTR_TYPE normal sequence interrupts
               (loop of MAX_NORMAL_INTR_TYPE iterations, 1 per interrupt).
        3.1.4. Else, raise the card's interrupt by calling the INT mnemonic
               for all the MAX_NORMAL_INTR_TYPE normal sequence interrupts
               (loop of MAX_NORMAL_INTR_TYPE iterations, 1 per interrupt).
        3.1.5. Delay the thread.
   3.2. Continue until iteration equals iteration_count.

4. If the iteration_count is MAGIC_NUMBER, get the interrupt register values
   from the structure passed and generate the interrupts as per the structure
   values. In this case num_events will give the number of interrupts to be
   generated.
   4.1. For each and every iteration,
        4.1.1. Set the particular interrupt status bit as per the input passed.
        4.1.2. Do any other preparation, if required.
        4.1.3. If compiler option is polling mode (#ifdef POLLING),
               intimate interrupt status register change to the polling thread
               as per the input passed.
        4.1.4. Else, raise the card's interrupt by calling the INT mnemonic
               as per the input passed.
        4.1.5. Delay the thread.
   4.2. Continue until iteration equals num_events.
Note: To start with, you may go for a one-second delay. Then you can tune the loop so that you will be generating as many interrupts as in the case of the original system.
Polling thread
A few things to remember about a polling thread:
- This thread will keep polling whether or not any change happens in the local interrupt status register in a loop.
- If there is any change in the status register, it means an interrupt has occurred.
- If there is an interrupt, schedule the tasklet using
tasklet_schedule.
- Continue the earlier-mentioned tasks.
A test application
This test application will inherit some part of code from the APIs and applications that make use of the driver. Here are seven steps to enabling the test application:
- In the main program, spawn the required number of threads.
- Issue an ioctl that will be blocked (interruptible_sleep_on) inside the driver/kernel.
- Fill the input structure for
ioctl INTR_SIMULATE.
- Issue
ioctl INTR_SIMULATE.
- Whenever an interrupt wakes up the thread, process the interrupt the same way the actual API and application process it.
- Register the sequence number, interrupt nature, and thread attributes to the main program.
- The main program keeps track of the information provided in step 6 and monitors whether any out of sequence or interrupt loss happens.
This is one of the crucial tests that you could carry out using this hardware simulation technique.
Enabling the test cases
The following steps illustrate how to enable the test cases.
Listing 7. Enabling the test cases
1. Raise all possible (MAX_INTR_TYPE) interrupts and check whether they are
   getting handled appropriately in the driver.
   1.1. Use printk statements to check whether the appropriate interrupt
        handling steps are getting executed.
   1.2. Use the /proc entry registered for our device driver.
   1.3. Use the kernel debugger kgdb and check whether the appropriate
        interrupt handling steps are getting executed.
2. Raise all normal sequence interrupts (MAX_NORMAL_INTR_TYPE interrupts) and
   check whether they are getting handled appropriately in the driver. Some
   device drivers need to handle a series of interrupts before they
   collectively perform some task.
3. Check to see if any interrupt is getting lost.
   3.1. In the test application, when the thread gets woken up, check for the
        interrupt ID (sequence number).
   3.2. Check whether the interrupt that we have simulated is getting captured
        in the test application. This is a test for thread wake-up.
Resources
Learn
- In Part 1 of this two-part series, "Debugging simulated hardware on Linux, Part 1: Interrupts and Interrupt Service Routine" (developerWorks, November 2005), learn about strategies and implementation details that you can apply to interrupt simulation, including the prerequisites, hardware, software setup, and test cases for testing the Interrupt Service Routine (ISR).
- "Smashing performance with OProfile" (developerWorks, October 2003) introduces a tool that can help you identify issues such as loop unrolling, poor cache utilization, inefficient type conversion and redundant operations, and branch mispredictions.
- Understanding the Linux Kernel, Third Edition (O'Reilly, November 2005) provides | http://www.ibm.com/developerworks/linux/library/l-hardsim/index.html | CC-MAIN-2016-44 | refinedweb | 4,093 | 54.63 |
What is Scala’s Nothing type for?
What is “Nothing”?
Nothing is a subtype of all types, also called the bottom type.
As the Scala official doc says,
Nothing is a subtype of all types. That means,
Nothing is a subtype of
Int and also is a subtype of
String and
List[User].
When I saw the definition for the first time, I was confused; I read it again and again, but I couldn't figure out what it meant. A subtype of
Int,
String, and
List[User]. Is there such a magical value?
Of course, the Scala doc answers this quite natural question.
There is no value that has type Nothing
“There is no value that has type Nothing”. Again, I read it out again and again, but it definitely says there is no value like that. It’s weird. If there’s no such value, what is
Nothing type for?
… In fact,
Nothing is definitely an important part of Scala's type system.
Type for expression which doesn’t return a value
Assume you have a method that returns
Int or throws an exception.
def oneOrThrow(num: Int): Int =
if (num == 1) num
else throw new Exception(s"$num is not 1")
The
if expression should be an
Int. In the positive case,
num is apparently
Int, but how about
else case? If you add type annotation to the both cases, that should look like this to return
Int:
def oneOrThrow(num: Int): Int =
if (num == 1) (num: Int)
else (throw new Exception(s"$num is not 1")): Int
Actually we can compile this without an error!
throw new Exception(s"$num is not 1")'s type can be
Int. In the same way, it should be legitimate to use
throw where any possible type is required.
// These snipets can be compiled without an error.
val int: Int = throw new Exception("fake Int")
val string: String = throw new Exception("fake String")
val maybeUser: Option[User] = throw new Exception("fake User")
def equalsOrFail[A](l: A, r: A): A =
if (l == r) l
else throw new Exception(s"$l is not $r")
Here, the Scala compiler treats
throw expressions as
Nothing type. Do you remember?
Nothing is a subtype of all types so it can be an
Int,
String, and
A.
throw doesn’t return an concrete value, but it should be any type. Here,
Nothing seems to be a perfect choice. Thanks to it, Scala type checker can treat
throw like any other expression.
Stub
Another usage of
Nothing is
???.
??? is useful when developing an outline of features without considering their implementation detail.
// TODO: Implement them later.
def resolveAuthor(authorId: AuthorId): Future[User] = ???
def storeAuthor(author: Author): Future[Unit] = ???

def updateAuthorName(authorId: AuthorId, name: AuthorName): Future[Unit] =
for {
author <- resolveAuthor(authorId)
_ <- storeAuthor(author.updateName(name))
} yield ()
Quite useful. Anyway, an interesting and beautiful fact about
??? is that it’s not special syntax like
throw, but just a method defined in
Predef with type
Nothing.
def ??? : Nothing = throw new NotImplementedError
Thanks to the flexibility of
Nothing, we can use
??? as a stub of any required type.
Empty objects for higher-kinded types
The last example I introduce is for empty objects like
Nil or
None. Higher-kinded types like
Option and
List MAY contain
A values in them. At the same time, they MAY NOT contain a value, and such empty values are declared as
Nil or
None in Scala.
You might have used
None as type
Option[Int] or you might have used
Nil as
List[User]. Again,
Nothing helps this flexibility.
sealed abstract class Option[+A]

final case class Some[+A](value: A) extends Option[A] { ... }

case object None extends Option[Nothing] { ... }
Here,
Nothing is used as type parameter
+A of
Option[+A]. This
+ variance annotation (covariance) is important to let
Option[B] replace
Option[A] when
B is a subtype of
A. As I mentioned many times in this post,
Nothing is a subtype of all types. Therefore
Option[Nothing] is a subtype of
Option[Int] and
Option[User] and all possible
Option[A] types. There is no value of
Nothing, but it’s not a problem because both
Nil and
None are empty.
Conclusion
At first glance,
Nothing is weird and useless. However, its ability to stand in for any type is quite powerful in Scala’s type system. I introduced some usages which I think are beautiful. You might not be aware of
Nothing in daily programming, but it definitely makes Scala more elegant.
Writing Better Software for Cool USB Hardware
I sometimes jokingly call the Logitech io2 pen the "poor man's TabletPC." It's a digital pen that uses real ink. The magic is the tiny camera sensor next to the ink tip that reads the absolute position of the pen on specially printed paper. The technology is licensed by Logitech from Anoto. Here's the details from Anoto's own web site:
"The Anoto pattern consists of small dots with a nominal spacing of 0.3 mm (0.01 inch). These dots are slightly displaced from a grid structure to form the proprietary Anoto pattern. When writing with a digital pen on a paper printed with the pattern, digital snapshots of the pattern are taken with a rate of more than 50 per second. Every snapshot contains enough information to make a calculation of the exact position of the pen. The intelligence in the paper, derived from the pattern, makes it possible to perform operations by just ticking a box with the pen, e.g. Store, Send, To Do, Address, etc."
To the naked eye the paper looks slightly off white. You don't even notice the dots. You write with the slightly oversized—but no overly so—pen on what looks like regular paper. You can buy the custom notebooks and Post-its for as little as $5 from Logitech directly or at office supply stores. I like the Black n' Red notebook, myself.
You can use their software to store and manipulate the digital ink files, save them as images, export to Word or OneNote, as well as run OCR (optical character recognition) on the pages.
The pen by itself is a pretty slick gadget, but why does it deserve a Some Assembly Required article? Because it has a full and supported .NET-based SDK! The Logitech io2 Software Plug-in Toolkit (FAQ) enables folks to extend this pen in some pretty unbelievable ways. Check out this case study on how the pen was used at the recent G8 summit to enable non-technies to do some pretty amazing collaboration.
Creating a plug-in using the io2 SDK is a little complex if you were to do it from scratch. However, when you install the SDK it adds a new "Logitech io2 Plug-in" project type to the File > New Project dialog within Visual Studio. It also includes default plug-in templates in both C# and VB. I started with the default C# plug-in. You just derive your plug-in from Logitech.Plugins.AbstractPlugin.
Visual C#
using System;
using System.Diagnostics;
using Microsoft.Win32;
using System.IO;
using Logitech.PlugIns;
using Logitech.Pen.Ink.Sdk;
namespace Hanselman.BlogJetPlugIn
{
public class PlugIn : AbstractPlugIn
{
public PlugIn(IPlatform platform) : base(platform){}
public override void Init(){base.Init();}
public override void Dispose(){}
public override void SelectionChanged()
{
base.SelectionChanged ();
}
public override object Run(IAction action,
ISelection selection, object[] parameters)
{
//My plugin will go here!
return null;
}
}
}
My vision is that I'll write up a blog post using the digital pen and paper, dock the pen, then from the Logitech organizer software select a menu my plug-in will add like "BlogJet this page..." and the digital ink would be transfered to BlogJet and then to my blog running DasBlog. Seems complex, but the flow is very simple. Write paper, dock, click, blogged.
BlogJet is a small and elegant offline blogging tool. I say offline because BlogJet is a
smart client that you install and run on your system, offline if you like. You can post when you're back online. I like BlogJet because it includes spell checking, image resizing and, more importantly, it supports posting to dozens of blogging engines. I use
DasBlog for my blogging engine, and since DasBlog supports the
standard MetaWeblog API for posting, I can use BlogJet to post to my blog without using a browser. You don't need to worry about the details of MetaWeblog or Blogger or the other blogging APIs; it's all handled
for you. BlogJet also supports uploading images, which isn't supported directly by MetaWeblog, by uploading them in parallel via FTP. All in all, it fit my needs perfectly.
BlogJet supports two ways of automating posting. You can use their -blogthis command line switch and include the contents you want posted in the clipboard, or you can write your content out to a temp directory as an HTML file and call BlogJet with that file as a parameter. It will automatically convert references to images into blog-relative references. This will make integration with the pen easy, as the Pen SDK supports getting bitmaps of the digital ink.
As an aside, I took the BlogJet "little red man" icon and overlaid a Logitech io2 Pen and used this as my Toolbar icon to expose my functionality within the Logitech Organizer.
Everything happens in the Run method. The action is passed in as an
IAction that would allow me to do different things based on context if I cared.
ISelection holds just that, the selected documents. I need to pass the selected items into a helper method that will write out a temporary HTML document and separate images for each page. I needed to get the installed location of the BlogJet software
from the registry (I found this by poking around.) Then I start a new process passing in BlogJet.exe's location surrounded by quotes, as well as the full path to the temporary HTML file that contains
<img> tags pointing to each of the pages.
Visual C#
public override object Run(IAction action,
ISelection selection, object[] parameters)
{
try
{
WebPageMaker webPageMaker = new WebPageMaker(Platform);
string htmlFileUrl = webPageMaker.Make(selection.SelectedItems);
RegistryKey key = Registry.CurrentUser.OpenSubKey(
@"Software\DiFolders Software\BlogJet");
string BlogJetPath = key.GetValue("InstDir") as string;
BlogJetPath = "\"" + Path.Combine(BlogJetPath,"BlogJet.exe") + "\"";
ProcessStartInfo startInfo = new ProcessStartInfo(BlogJetPath);
startInfo.Arguments = "\"" + htmlFileUrl + "\"";
Platform.LogMessage(false,startInfo.FileName);
startInfo.UseShellExecute = true;
Process.Start(startInfo);
}
catch(PlugInTemplateException pe)
{
Platform.LogMessage(true, pe.Message);
}
catch(Exception e)
{
Platform.LogMessage(true, e.Message);
}
return null;
}
Here's a bit of a simplification of what is done in WebPageMaker. You can take a look at the source for more details.
Visual C#
StreamWriter writer = File.CreateText(htmlFilePath);
writer.WriteLine("<p>");
foreach(ISelectedItem selectedItem in selectedItems)
{
IDocument document = (IDocument)selectedItem.Object;
if(!document.IncludesStrokes) DocumentSerializer.Load(document);
for (int pageIndex = 0; pageIndex < document.Pages.Count; pageIndex++)
{
string imagePath = Path.GetFileNameWithoutExtension(
    document.Name) + pageIndex.ToString("000") + ".jpg";
Regex reg = new Regex("[^a-zA-Z0-9. ]");
imagePath = reg.Replace(imagePath, String.Empty);
string pageFilePath = Path.Combine(Path.GetTempPath(), imagePath);
Bitmap bitmapImage = ImageGeneratorFactory.Create(
    document.Pages[pageIndex]).GetBitmap();
bitmapImage.Save(pageFilePath, ImageFormat.Jpeg);
writer.WriteLine(string.Format(@"<img src=""{0}"">", imagePath));
}
}
writer.WriteLine("</p>");
writer.Close();
This little chunk of code creates a text file using a StreamWriter and spins through the selected items. Then it spins through each page within the document and generates an image in a temporary file that is a picture of the digital ink.
There is a little hack in there as BlogJet has some trouble with Unicode characters within a filename. They seem to get double UrlEncoded, so I just strip out everything that's not A to Z, a number or a period.
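The stripping hack described above is easy to state in any language: drop every character outside the allowed set. A sketch of just the filter (written in C++ rather than the article's C#, and with a function name of my own choosing, purely for illustration):

```cpp
#include <cassert>
#include <string>

// Keep only ASCII letters, digits, '.' and space, mirroring the
// article's regex [^a-zA-Z0-9. ] replaced with the empty string.
// Multi-byte (non-ASCII) characters fall outside every range test,
// so they are stripped as well.
std::string strip_unsafe(const std::string& name) {
    std::string out;
    for (char c : name) {
        bool ok = (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
                  (c >= '0' && c <= '9') || c == '.' || c == ' ';
        if (ok) out += c;
    }
    return out;
}
```

The trade-off is the same as in the article: distinct input names can collide after stripping, which is acceptable here because the names already carry a unique numeric page suffix.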
Logitech was serious when they called this a Plug-in Toolkit. A finished plug-in doesn't show up in just one place—it's plugged into the complete io2 pen experience. I personally didn't expect this level of integration. I figured I'd get a right-click menu, or a toolbar button. Once my action was defined in the Run method within my code, I had to create a Plugin.xml file that defined all the places I wanted my new action to be available. For example, when you right-click on a .pen file within your My IO Documents folder, there's an "IO Actions" menu. I'd like to be in there.
Instead of messing around with Explorer and the complexities of shell integration, the SDK lets me add a menu item by adding these lines to my Plugin.xml file.
<Extension Point="Menus" ApplicationId="Logitech.Pen.ShellExtension">
<ObjectContribution
ObjectClass="Logitech.Pen.Ink.Sdk.IDocument">
<Action Id="Hanselman.BlogJetPlugIn.Menus6.Send"
DisabledLabel = "%BlogJetPlugInDisabledLabel"
Label="%BlogJetPlugInLabel"
Description="%BlogJetPlugInDescription"
Path = "Menus/ContextMenu/BlogJetPlugIn"
Index = "7"/>
</ObjectContribution>
</Extension>
There's also a Plugin.resx that keeps all my localized text. That's what the strings that start with % refer to. The Pen SDK will automatically pull text out of the .resx.
Since your plug-in assembly will run in-process in not only Explorer but also the Logitech Organizer software, debugging could be a little tricky. Logitech makes this much easier by including a "Plug-ins Test Platform" that will host your plug-in, provide toolbars, menus and even includes sample pages to test with.
The latest version of the io2 Software added a new feature called IOTags. Basically, while you're writing some text, you add a circle with a letter inside, and then draw a vertical line to encompass the significant text. I figured that could be useful to me as well, so I added a few more lines to my Plugin.xml file that would automatically blog text marked with the "B" IOTag.
<Extension Point="Automation">
<ObjectContribution
ObjectClass="Logitech.Pen.Ink.Sdk.IDocument">
<Action
Id = "Hanselman.BlogJetPlugIn.Automation.Send"
Description = "%BlogJetPlugInAutomationDescription">
<Data>
<IoTags FieldSeparator=";">
<IoTag Tag="B">
<Field Id="Subject" Type="Text" />
</IoTag>
</IoTags>
</Data>
</Action>
</ObjectContribution>
</Extension>
At this point I can blog using the pen by writing a post on the digital paper, marking it with the "B" IOTag, and docking the pen.
The IOTag is nice because it implies immediate action. The blog post occurs immediately after docking the pen.
I got deep into this project before I started hosting my plug-in within Logitech's framework. I had been happily working in Visual Studio 2005 and the .NET Framework 2.0. I hadn't used any 2.0 libraries in this simple project, but I was using all the new shiny features of the new Visual Studio IDE. But, tragedy struck when I realized that the current (as of this writing) Logitech io2 SDK isn't compatible with .NET 2.0. Would I have to switch back to Visual Studio 2003? Such a thing was unthinkable!
MSBUILD to the rescue! Visual Studio 2005 uses a powerful and extensible build mechanism to handle compiles called MSBUILD. MSBUILD uses a default XML configuration file called a "targets" file. The system is so extensible that I could use a custom .targets file to target the .NET 1.1 compiler with the 2005 IDE. This means my development environment had all the new bells and whistles of Visual Studio 2005 while maintaining compatibility with this .NET 1.1 version of the Logitech SDK. See Gustavo Guerra's write-up for more details on this custom target file.
Note While MSBUILD is extensible for a reason, my way is not a currently (as of this writing) supported way of doing things. That means, if it doesn't work for you, you don't get to complain! The good news is that Microsoft officially announced on November 9th, 2005 that they would be creating the MSBuild Everett Environment (MSBEE). I will update my sample to use the official technique as soon as it is available. I'm very excited about this because it makes the whole Visual Studio 2005 story that much more compelling.
The result of all this can be seen on my blog at
There are things that could be extended, added, and improved on with this project. Here are some ideas to get you started:
Have fun and have no fear when faced with the words: Some Assembly Required!
I didn't understand what this topic is about. I reached this page from Google searching for "C# code for digital pen". I want to write code in C# so doctors can write eye information like "+0.75+1.00*180" with a digital pen on a picture control and can save, erase and print it, but before that, the program should recognize the text and store it as a string in a SQL database. Can you help me?
Does it work on an LCD display? How does it work? Where can I purchase this product?
I was just looking for something like this and it sounds great. I already have the pen, but the functionality available isn't enough.
My only problem now is that the SDK has been removed (or just moved) from the Logitech site and I can't find it anywhere.
Do you still have it, or do you know where I can find it?
Cheers
@SaraX you can use the built in tablet support in the operating system to do this. If you want your application to have Ink support, there is an Ink object in .Net too
I need to use the pen to write in a text box, for example. Can I use this code?
great idea..thanks for sharing it.
07 June 2012 13:26 [Source: ICIS news]
LONDON (ICIS)--Crude oil futures rebounded on Thursday, gaining more than $1.00/bbl on the back of a successful bond auction in Spain and a surprise interest rate cut from China’s central bank.
By 12:00 GMT, the front-month July NYMEX WTI contract had touched an intra-day high at $86.52/bbl, a gain of $1.50/bbl. The contract then edged a little lower to trade around $86.35/bbl.
At the same time, the front-month July ICE Brent contract touched an intra-day high at $102.21/bbl, a gain of $1.57/bbl. It then weakened to trade around $102.05/bbl.
In China, the People’s Bank of China cut its benchmark lending and deposit rates by a quarter point on Thursday, in the hope of stimulating investment and demand in the country.
In Europe, Spain’s sovereign bond auction was successful after the country sold more than €2bn worth of 10-year bonds. However, it was forced to pay higher yields to attract investors.
I have created code for a log. It is intended to provide the user with a recent
history of various events, primarily used in data-gathering applications where one
might be curious how a particular transaction completed. In this case, the log
need not be permanent nor saved to a file.
*i have to show the status of a running process in the same line of listbox.*
For this I used the code below inside the for loop. The problem is that each time it
enters the for loop the list box gets updated and there is flickering in the listbox. I need to
avoid this flickering, and I also can't include any delay in the for loop since the
process shouldn't be delayed. Here listbox1 is the list box.
namespace WindowsFormsApplication1
{
    class sample
    {
        int frame_cnt = 0;
        int sl = listBox1.Items.Count;
        for (int i = 0; i < 928; i++)
        {
            int percent = (int)(((double)(frame_cnt) / (double)(928)) * 100) + 0;
            listbox1.Items.Add(percent.ToString());
            listBox1.Items.RemoveAt(sl - 1);
            frame_cnt++;
        }
    }
}
Hi,
I have a problem with using branch and hint histories which I got out of the cell simulator. I was able to get to a branch which is mispredicted (ran spu-gdb on ps3 and disas'd to the address) .. but I dont really know what to do now ;] because I have no idea how to "link" this branch in asm with a branch in my code ..
thanks for any hint
michal
ps. i am finishing my diploma thesis now, a part of it was implementing a ray-tracer with different acc. structures on Cell (currently BIH and KDT) ... let's say that me not being a linux guru (I am a game programmer, a windows person usually :]) really made me bang my head against the wall a lot in the last few months ;] lots of times I felt that the documentation just presumed that people are linux experts (this ps is not about the branch histories in particular, this is just my first post here, so I felt a need to say this :])
Topic: branch histories (pinned)
Posted 2008-12-29T04:31:59Z | Updated 2011-03-06T09:51:16Z by mikee111
Re: branch histories (2008-12-29T07:23:16Z)
I've managed to get help from my roomie, who happens to know his way around *nix .. anyhow, through gdb disas and info line I can look at which branch is where in code ... now, what's weird is this
my histories look like this:
I am wondering, shouldn't there always be a branch for every hint?
on line 109 you can find
line 109 hbrr 0x00f4c: 0x00f8c 0x00c68 9716 0 9716
there's a branch there (on f8c), but it is not listed in branch history .... instead on line 75 is 0x00d80, which does not point to a line with a branch (which is kinda confusing for me, why is there a wrong address) ... but the stats correspond to the while cycle happening on f8c
line 75 brhnz 0x00d80: 9716 3928 5788 0 3928
(well I know that hbrr cannot be a hint for brhnz .. but that's the only thing I can pair it with) ...
anybody encountered a hint for a branch that was not in the branch history? when can this happen? anything in docs I've overlooked?
thanks
michal
Re: branch histories (2008-12-30T08:04:06Z)
well, according to surrounding posts everybody's busy with actually starting the simulator (or running it) .. so, can anyone at least point me to a place in the documentation (or anywhere on the internet) regarding branch and hint histories dumped from systemsim? I was unable to find any docs or at least a FAQ ..
thanks a lot :]
i wrote an x86 raytracer, ported it on cell, then tweaked the software caches, then put some branch hints in, then wrote a branchless version of my algos, then put in async cache access, now I want to tweak branch hints a little so I have something more to write about in my thesis.. it's still dead slow though and all those optimizations did not really help (branchless will spent all those saved cycles on more computation it has to do, and so on) ... you have to admire those guys who wrote that fully fledged RT on Cell, no idea how they did that :] well .. after its all written I'll post it here so you guys can rip my Cell results apart ;]
michal
Re: branch histories (2008-12-30T12:49:49Z)
In particular, slides 50-51 and 110-113 may be helpful.
Mike Kistler
Re: branch histories (2008-12-30T16:13:41Z)
what is bothering me is a glitch of some sorts which is happening and I described it in my second post
basically there's a hint instruction that is not paired with a branch (the one on line 109) and there should be one for the branch on line 75 (i put it there) and there isn't ..
so I guess that those 9716 stalls equal the 9716 hbrr instructions that execute in vain, not having a valid branch to hint ... and I have NO idea why this is happening .. it kinda ruins my statistics, as I don't know how to interpret this :(
thanks
michal
Re: branch histories (2009-01-01T11:36:10Z)
This is curious. I think it might be helpful to see the instructions near this hint ... say from 0xf00 to 0x1000. It would be even better if you could post enough of your code to be able to recreate this problem, but I know sometimes this is not possible or easy to do.
Mike Kistler
Re: branch histories (2009-01-01T23:54:46Z)
i've forgotten to include
#define likely(_c) __builtin_expect((_c), 1)
#define unlikely(_c) __builtin_expect((_c), 0)
though its pretty selfexplanatory :]
....
the situation is thus this:
hbrr 0x00f4c: 0x00f8c 0x00c68 9716 0 9716
0x00000f4c hbrr 0xf8c,0xc68
0x00000f8c brnz $80,0xc68
this hint is indeed in asm and points to a branch that does exist in the code (but does not appear in branch histories for some reason) and it is the while cycle line
Line 695 of "../bih.cpp" starts at address 0xf8c and ends at 0xf90
695 while(likely(indexStack != 0 && !currNode.isLeaf()))
I've no idea why there are 9716 stalls, when this hint looks perfectly ok to me ... (and as it's branch is not listed in branch histories, I can't really know how does the branch itself perform)
what does appear in branch histories is this branch
brhnz 0x00d80: 9716 3928 5788 0 3928
0x00000d80 brhnz $3,0xda0
but
Line 719 of "../bih.cpp" starts at address 0xd7c and ends at 0xd84
719 bool secondTest = !SIMD_IS_FALSE(valid_iimax_t) && !SIMD_IS_FALSE(valid_iimax_s);
there's no branch on line 719, and
Line 724 of "../bih.cpp" starts at address 0xda0 <_ZN7BIH_SPE33Trace4PacketBranchlessWCacheAsyncER9RayPacket+952>
and ends at 0xda4 <_ZN7BIH_SPE33Trace4PacketBranchlessWCacheAsyncER9RayPacket+956>.
724 indexStack += firstTest;
there's nothing to jump to on line 724 either (no loop start, line just after a loop/if or something like that)
i've also checked the other frequented branch in branch histories
brnz 0x00ebc: 9716 7591 2125 0 7591
Line 163 of "../cache/cache-4way.h" starts at address 0xeb0 and ends at 0xec0
159 static inline int
160 __cache_line_lookup (unsigned int set, unsigned int ea)
161 {
162 // return (CACHELINE_ISTAG(set, ea) ? (int)set : -1);
163 if( unlikely( !CACHELINE_ISTAG(set, ea) ) )
164 return -1;
165 else
166 return (int)set;
167 }
(originally there was likely and no negation inside; I've tried what happens if there's unlikely there, but nothing changed, no hint)
this is cache code and yes, there's a branch there .. but it was supposed to be hinted, and it is not ... (the hint does not appear in hint histories, I've checked the code before 0x00ebc manually and there's no hint hinting this branch either)
considering the code, if necessary I can upload my complete cell code and pinpoint the problematic locations. it is a raytracer, hence I would have to include some testing scene also. anyhow, I have a few that are smaller than 1MB, so that should not pose a problem :]
thanks a lot and have a happy new year! :] .. i am off writing my thesis text, kinda fed up with coding ;]
michal
Re: branch histories (2009-01-03T04:23:59Z)
Now that we know there is a branch a 0xf8c, I would be suspicious that the hint does not appear early enough to avoid a stall when the branch is reached. The CBE Handbook (Section 24.3.3) says that:
An HBR instruction should be placed at least 11 cycles followed by four instruction pairs
before the branch instructions being hinted by the HBR instruction.
Looking at the assembler, it looks like the hbrr might be a little too late. It is very close -- depending on when the cycle counting actually starts and stops, it could be right on or it could be one cycle off. And the hint history seems to imply a one cycle stall each time the branch is encountered, so it seems possible that the hint is just one cycle to late. Of course, it could also be the case that the simulator could be off by a cycle and inserts a stall when there really should not be one.
I don't know why 0xf8c does not show up in the branch history.
Regarding line 719 ... you say there is no branch on that line, but I think you need to look at the disassembly for that line to know for sure. Did you do that?
If it's not a problem, it would be great if you could post enough of your code so that we can compile and run it ... that will be the best way to uncover what is going on here.
Mike Kistler
Re: branch histories (2009-01-03T12:45:38Z)
b] 719: well, I looked at the disassembly paired with that line, and there is one ... but I don't see why it is compiled like this, because line 719 looks like this:
bool secondTest = !SIMD_IS_FALSE(valid_iimax_t) && !SIMD_IS_FALSE(valid_iimax_s);
where #define SIMD_IS_FALSE(a) (a == sse_zero_i)
this is supposed to be one of the precomputed bools to avoid having branches :]
anyways, I am gonna pack my code together and post it here as soon as possible
Re: branch histories (2009-01-03T13:41:39Z)
small FAQ
a] settings.h are set as I had it when stats I showed were made (BIH structure used, branchless code, asyn cache, simulator mode, debug mode)
normally, cell works as a TCP server for x86 image client application, with debug mode it will raytrace a scene and stop
eclipse generated makefiles are included (my toolchain path is /opt/cell/toolchain/ .. you will have to change the path to code though, as it is set to /home/mikee/dip_cell/)
b] I've also included my testing setup with tlc file with my few added lines at the end for the cell simulator and shell script to run the simulator (just to pack it all)
c] there's a testing scene included (f15 fighter jet ;], f15.obj). As scene path is now hardcoded, you have to change the path in ppu/main.cpp:line 31 depending on where you actually have the scene
d] code we were talking about here is in function Trace4PacketBranchlessWAsyncCaches in bih.cpp: lines 667 to 784, with the profiled code between 697 and 740 (this function will be used for hierarchy traversal when settings.h as they are now are used)
Re: branch histories (2009-01-03T13:43:16Z)
Re: branch histories (2009-01-03T20:09:54Z)
we came to the conclusion that the branch on line 719 (in the code there's no branch there) is made by optimizations and there's no hint for it because it's generated by g++ in some form of loop decomposition into an outer and inner loop (there's a hint for the outer loop, which I put there, but not for the inner)
is this a possible answer to what's happening?
I've compiled my code with -Os -finline-functions and ran it in the simulator, the same thing happened (apparent inconsistency between debug info and real code) .. so I've tried -O0, this was OK as far as debug info to code mapping is concerned, but of course there were a lot of branches and links and also the while hint had disappeared
I've tried to check man pages for spu-g++ and spu-gdb, only to find that these were identical to g++ and gdb mans, so I don't really have an idea how spu-g++ fools with those hints ...
Re: branch histories (2009-01-03T23:30:34Z)
while(hint(x)) { }
to
if(hint(x)) do { } while(hint(x));
I've gained 0.13 CPI and saved some 150K cycles. I was suspicious of this, so I checked if the image is still raytraced correctly, and apparently it is :]
here's the new code and stats from simulator
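The rewrite above is plain loop rotation: the entry test is hoisted into an if, and the loop body becomes a bottom-tested do/while whose backward branch can be hinted independently of the entry test. A generic C++ sketch of just the control-flow change (the likely()/hint() macros from the thread are omitted, and the function names here are mine, for illustration only):

```cpp
#include <cassert>

// Top-tested while loop: loop entry and loop continuation share
// the same forward-conditional structure.
int sum_while(const int* a, int n) {
    int s = 0;
    int i = 0;
    while (i < n) {
        s += a[i];
        ++i;
    }
    return s;
}

// Rotated form: the entry test is a separate 'if', and the loop body
// becomes a bottom-tested do/while whose single backward branch can
// be hinted (e.g. with hbrr on the SPU) as almost-always taken.
int sum_rotated(const int* a, int n) {
    int s = 0;
    int i = 0;
    if (i < n) {
        do {
            s += a[i];
            ++i;
        } while (i < n);
    }
    return s;
}
```

Both functions compute the same sum; only the branch structure differs, which is what allows the inner loop's backward branch to get its own hint.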
Re: branch histories (2009-01-03T23:49:29Z)
hbrr 0x014bc: 0x01500 0x011e0 9716 0 0
no stalls, and 0x01500 is the address of the do { } while loop, so this is basically what I wanted to achieve (to hint the inner branch) .. though there's still no 0x01500 branch in the branch histories ..
brnz 0x0142c: 9716 7591 2125 0 7591
unfortunately no hint for cache access (this branch) ...
Re: branch histories (2009-01-04T00:18:20Z)
Now the interesting question is whether the simulator is correct in its interpretation of the required distance from hint to branch. Have you tried running this on actual hardware and measuring the performance? If so, is it consistent with what is shown by the simulator?
Mike Kistler
Re: branch histories (2009-01-04T14:08:53Z)
Regarding performance, I have remote access to a PS3 (placed at my faculty's building) with user rights only, and I was advised that for performance counters (like branch/hint stats, pipeline stats and so on) to work correctly the kernel has to be patched and you have to have root rights of course (which I don't have). I can use the SPE timer though, so I'll get the timings as soon as possible.
Re: branch histories (2011-03-06T09:51:16Z)
It's been two years now, but well, someone may still find it interesting ... I am still doing ray tracing, but not on Cell :) | https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014179536&ps=100 | CC-MAIN-2015-48 | refinedweb | 2,399 | 68.5 |
Utility functions for use by authentication GUI widgets or standalone apps. More...
#include <qgsauthguiutils.h>
Utility functions for use by authentication GUI widgets or standalone apps.
Definition at line 29 of file qgsauthguiutils.h.
Clear all cached authentication configs for session.
Definition at line 158 of file qgsauthguiutils.cpp.
Clear the currently cached master password (not its hash in database)
Definition at line 92 of file qgsauthguiutils.cpp.
Completely clear out the authentication database (configs and master password)
Definition at line 195 of file qgsauthguiutils.cpp.
Color a widget via a stylesheet depending on whether a file path is found.
Definition at line 238 of file qgsauthguiutils.cpp.
Open file dialog for auth associated widgets.
Definition at line 252 of file qgsauthguiutils.cpp.
Green color representing a valid, trusted, etc. certificate.
Definition at line 30 of file qgsauthguiutils.cpp.
Green text stylesheet representing a valid, trusted, etc. certificate.
Definition at line 50 of file qgsauthguiutils.cpp.
Verify the authentication system is active, else notify user.
Definition at line 65 of file qgsauthguiutils.cpp.
Orange color representing loaded component, but not stored in database.
Definition at line 35 of file qgsauthguiutils.cpp.
Orange text stylesheet representing loaded component, but not stored in database.
Definition at line 55 of file qgsauthguiutils.cpp.
Red color representing invalid, untrusted, etc. certificate.
Definition at line 40 of file qgsauthguiutils.cpp.
Red text stylesheet representing invalid, untrusted, etc. certificate.
Definition at line 60 of file qgsauthguiutils.cpp.
Remove all authentication configs.
Definition at line 168 of file qgsauthguiutils.cpp.
Reset the cached master password, updating its hash in the authentication database and resetting all existing configs to use it.
Definition at line 114 of file qgsauthguiutils.cpp.
Sets the cached master password (and verifies it if its hash is in authentication database)
Definition at line 77 of file qgsauthguiutils.cpp.
Yellow color representing caution regarding action.
Definition at line 45 of file qgsauthguiutils.cpp. | https://api.qgis.org/2.12/classQgsAuthGuiUtils.html | CC-MAIN-2020-34 | refinedweb | 315 | 53.58 |
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
I just whipped this up for a customer who requested that a call to terminate() print some information about the exception which caused it, as the SunPro C++ compiler apparently does. It prints the demangled name of the exception type and, for objects derived from exception, the result of what(). An awkward thing about this is that currently it needs to look in the libsupc++ and gcc source directories for the EH unwinder headers. Another problem is that __cxa_demangle isn't currently usable in general because it's under the GPL.

My question is, how much of this should go into the distribution? I see several (largely orthogonal) options:

1) Nothing. Let anyone who wants this functionality reinvent it themselves. This is complicated by its reliance on ABI internals.

2) Install unwind-cxx.h and unwind.h so that users can get at them without keeping GCC source trees around.

3) Add current_exception_type to libsupc++, perhaps to the ABI specification. This would avoid the need to look at the unwinder headers.

4) Add verbose_terminate_handler to libsupc++ or libstdc++ and document it; people would still need to activate it with set_terminate.

Thoughts?

Jason
// verbose terminate_handler for IA64/GNU V3 C++ ABI
// Copyright (C) 2001 Red Hat, Inc.

#include <exception>
#include <stdlib.h>
#include <stdio.h>
#include <cxxabi.h>
// This file is found in the libstdc++-v3/libsupc++ source directory, and
// it includes unwind.h from the gcc source directory.
#include <unwind-cxx.h>

using namespace std;
using namespace abi;

// Returns the type_info for the currently handled exception [15.3/8], or
// null if there is none.
type_info *current_exception_type ()
{
  __cxa_eh_globals *globals = __cxa_get_globals ();
  __cxa_exception *header = globals->caughtExceptions;
  if (header)
    return header->exceptionType;
  else
    return 0;
}

void verbose_terminate_handler ()
{
  // Make sure there was an exception; terminate is also called for an
  // attempt to rethrow when there is no suitable exception.
  type_info *t = current_exception_type ();
  if (t)
    {
      char const *name = t->name ();
      // Note that "name" is the mangled name.
      {
        int status;
        // __cxa_demangle is currently disabled in cp-demangle.c, and
        // subject to the GPL (Boo!).
        char *dem = __cxa_demangle (name, 0, 0, &status);

        printf ("terminate called after throwing a `%s'\n",
                status == 0 ? dem : name);

        if (status == 0)
          free (dem);
      }

      // If the exception is derived from std::exception, we can give more
      // information.
      try { throw; }
      catch (exception &exc)
        { printf (" what(): %s\n", exc.what()); }
      catch (...) { }
    }
  else
    printf ("terminate called without an active exception\n");

  abort ();
}

// EXAMPLE -- cut here

#include <stdexcept>

int main ()
{
  std::set_terminate (verbose_terminate_handler);
  //throw;
  //throw 1;
  throw std::out_of_range ("not really");
}
People are just as bad at determining whether an existing sequence is random. Look closely at the following two images.
Which one do you think looks more randomly distributed? If you read the article that the images link to, you already know that most people say the first image looks more random because the points are more smoothly distributed. This is exactly the property that makes it less random than the second image, which has more instances of points in tight clusters. A real set of random data points wouldn't be as evenly spaced as the first image.
Computers are much better than people, but by no means perfect, at generating sequences of random numbers. A number or sequence is said to be random if it is generated by a non-deterministic and unpredictable process. Since computers are inherently deterministic machines, this means that no computer can ever algorithmically generate a sequence that is truly random. They can come very close, though, with the help of a class of algorithms known as pseudo-random number generators (PRNG).
For a programmer writing a PRNG, or software that relies on one, testing its randomness reveals a unique problem. Unit testing software compares the output of a function to an expected value. If the output is expected to be unpredictable, you have a testing dilemma.
To make matters worse, there is no way even theoretically to prove that a sequence was generated randomly. Luckily there are statistical methods that are useful for revealing when a sequence is not random. Let's take a look at a simple PRNG, then use a common statistical method, the Monte Carlo method, to compare its effectiveness to a standard software library implementation.
A Naive PRNG
One of the oldest algorithmic methods for generating pseudorandom numbers is called the linear congruential method. Here's a simple example implementation:
public class NaiveRandom {

    private int x;          // seed
    private int m = 134456; // modulus
    private int a = 8121;   // multiplier
    private int c = 28411;  // increment

    public NaiveRandom()
    {
        x = 0;
    }

    public NaiveRandom(int seed)
    {
        this.x = seed;
    }

    public double next()
    {
        x = (x * a + c) % m;
        return x / (double)m;
    }
}
The original pseudocode and constants in the code above come from an example given in Statistical Mechanics by Werner Krauth. As Krauth explains in his book, these values are good for studying the algorithm, but not great if you need really random values. In the following section we'll see how it compares to Java's built-in PRNG.
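One property of this naive generator that is easy to verify directly is its period. The constants above satisfy the classic Hull-Dobell full-period conditions, so the state repeats only after every one of the m = 134,456 possible values has been produced. A quick Python sketch (mine, mirroring the Java update rule) confirms it:

```python
# Mirror of NaiveRandom's update rule: x = (x * a + c) % m
a, c, m = 8121, 28411, 134456

x, period = 0, 0
while True:
    x = (x * a + c) % m
    period += 1
    if x == 0:       # back at the starting state, so the sequence now repeats
        break

print(period)        # 134456: a full period, but still tiny by modern standards
```

This is the difference the conclusion below alludes to: a generator using the same method but a much larger modulus can run far longer before its sequence repeats.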
The Monte Carlo Pi Program
Take a unit circle inscribed in a square.
The area of a circle is given by the formula
A = πr²
Since the radius is 1, the area is equal to π. Since the diameter of the circle and the length of a side of the square are both 2, the area of the square is 4. The ratio (which we'll call ρ) of the area of the circle to the area of the square is
ρ = π / 4 = 0.785398164
If we select a random sample of points from the square, the proportion of those points lying inside the circle should be equal to ρ. We can multiply this proportion by 4 for comparison to a library value of π. The closer the random number generator is to truly random, the closer our approximation should be to the actual value of π.
We can simplify the calculations necessary for the test if we only concern ourselves with one quadrant of the square. This is possible because it's the proportion of points inside the circle to those inside the square that matters. The proportion of such points is the same in each quadrant as it is in the whole. If we say that the center of the circle is at point (0, 0), then we need only generate random points (x, y) where x and y are both between 0.0 and 1.0. This is the standard output range for most PRNGs, like Java's Random class. Points that lie within the circle will be those that obey the inequality
x² + y² ≤ 1
Here is the Java code for a Monte Carlo Pi simulation.
import java.util.Date;
import java.util.Random;

public class MonteCarloPi {

    public static void main(String[] args) {
        // seed for NaiveRandom
        Date now = new Date();
        int seconds = (int)now.getTime();

        // create random number generators
        NaiveRandom nrand = new NaiveRandom(seconds);
        Random rand = new Random();

        // total number of sample points to take
        int numPoints = 10000;

        int inNaiveCircle = 0;
        double xn, yn, zn;
        // xn and yn will be the random point
        // zn will be the calculated distance to the center

        int inRandCircle = 0;
        double xr, yr, zr;
        // xr and yr will be the random point
        // zr will be the calculated distance to the center

        for (int i = 0; i < numPoints; ++i)
        {
            xn = nrand.next();
            yn = nrand.next();

            xr = rand.nextDouble();
            yr = rand.nextDouble();

            zn = (xn * xn) + (yn * yn);
            if (zn <= 1.0)
                inNaiveCircle++;

            zr = (xr * xr) + (yr * yr);
            if (zr <= 1.0)
                inRandCircle++;
        }

        // calculate the Pi approximations
        double naivePi = approxPi(inNaiveCircle, numPoints);
        double randomPi = approxPi(inRandCircle, numPoints);

        // calculate the % error
        double naiveError = calcError(naivePi);
        double randomError = calcError(randomPi);

        System.out.println("Naive Pi Approximation: " +
            naivePi + ", Error: " + naiveError);
        System.out.println("Random Pi Approximation: " +
            randomPi + ", Error: " + randomError);
    }

    static double approxPi(int inCircle, int totalPoints)
    {
        return (double)inCircle / totalPoints * 4.0;
    }

    static double calcError(double pi)
    {
        return (pi - Math.PI) / Math.PI * 100;
    }
}
Results
Run the simulator several times and you will see that Java's built-in PRNG seems to outperform the naive implementation, but not by much. Neither performs particularly well, but Monte Carlo simulations are only expected to be close, not exact. Here are my results after ten runs of the simulation.
At least a small part of the imprecision of these results can be attributed to the fact that I only took 10,000 random points in each sample. A rule of thumb in Monte Carlo simulations is that for every 100X increase in data points, you'll get a 10X increase in the precision of your results. Since I took only 10,000 random points (100 * 100), I only got 2 digits of precision, with the third digit fluctuating.
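That rule of thumb is the familiar one-over-square-root-of-N error scaling of Monte Carlo estimates: multiplying the number of points by 100 shrinks the typical error by a factor of sqrt(100) = 10. A compact Python version of the same experiment (my sketch, not the article's code) makes it easy to try different sample sizes:

```python
import math
import random

def approx_pi(num_points, rng):
    # Count random points that fall inside the unit quarter-circle,
    # exactly as in the Java program above.
    inside = sum(1 for _ in range(num_points)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / num_points

rng = random.Random(1)
for n in (10000, 1000000):
    est = approx_pi(n, rng)
    print(n, est, abs(est - math.pi))   # the error shrinks roughly like 1/sqrt(n)
```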
Try increasing the number of random points to 1,000,000 and you should see that the third digit remains fixed over several runs of the program. The really interesting thing revealed is that our naive implementation continues to perform nearly as well as Java's built-in PRNG. This leads us to the conclusion (which can be easily verified) that Java's PRNG uses the same linear congruential method for generating random numbers as our naive implementation. The only significant difference between the two is that Java's implementation will be able to generate much longer sequences of random numbers before it starts to repeat itself.
Further Investigation
As I mentioned before, there are many statistical tests that can be used to measure the relative randomness of a sequence. I've only covered one of those tests in detail, but there are libraries available like the Diehard Battery of Tests of Randomness and ENT that include a wide variety of tests, and guidelines for interpreting them. If your application depends on randomness, I recommend evaluating your random number generator with one of these test suites.
13 comments:
How does SecureRandom do?
Tim,
Good suggestion. After 10 runs with 10,000 points each, the average errors(%) were:
NaiveRandom = 0.3555
JavaRandom = 0.3580
SecureRandom = 0.0041
I also did 10 runs with 1,000,000 points and got average errors of:
NaiveRandom = 0.0203
JavaRandom = 0.0300
SecureRandom = -0.0010
I wish I had thought of including SecureRandom before. I was peripherally aware that it existed, but having never used it for anything I completely forgot about it. :(
There are ways to generate truly random numbers using white noise from someone's sound card. I have heard of this method in various places; it's said to be a pretty good source of entropy. What are your thoughts on this?
Sam152,
Good question. There are very good source of entropy in your hardware, but I think they may be overkill for most applications. The obvious drawback is that hardware sources of randomness are inherently slower than software solutions, but if you're considering a white noise source I assume you've already accepted that.
The way the white noise sources that I've read of work is that you tune a radio to an unused frequency and point the radio at your microphone. Your soundcard digitizes the white noise and uses it as a source of random numbers. This seems very foolproof, but there is an ingenious attack vector. If an attacker discovers what radio frequency you're using as your source, they can begin broadcasting non-random "noise" on that frequency, influencing the sequence of random numbers generated in your application.
Now you may be saying that that is a completely unreasonable attack, and that no one would ever go to those lengths to crack your application. My point is that if you're not that paranoid, then a cryptographically secure pseudo-random number generator (a software solution) should be secure enough.
On the other hand, maybe you're writing an online poker site and you really are that paranoid (and rightfully so). If that's the case, I'd say test the hell out of whatever source you use to make sure it's random enough for your particular application.
That's a great insight. Really loving your blog by the way.
Keep it up.
Sam152,
Thanks for reading.
Interesting article, and I think I will write my own soon too on a similar subject.
As for hardware random number generators: They are not really too slow. Sure, if you use your sound card or another low-yield source of entropy you will be pretty limited. Also because you are misusing a piece of hardware that wasn;t designed to do what you try. Remember that you only get one or two bits of noise per sample which isn't exactly ideal.
There is specialized hardware for that sort of thing which can yield much greater speeds, up to above 100 MiB/s. Those things usually come as PCI cards.
The main problem with naïve hardware random number generation (like from the sound card or a webcam; but in general for all things concerning RNGs built by people with little understanding [yes, that includes me]) is that the results aren't optimal and in some cases even disastrously bad.
Case in point: Noise from a webcam or sound card (or any other wire) is biased. Depending on the temperature of the circuit. So your numbers will follow a different distribution in summer than in winter. Not ideal.
Much thought concerning hardware random number generators goes in exactly this topic: How to eliminate bias. Usually it's necessary to post-process the numbers obtained to get a uniform distribution (which is most of the time the desired outcome and can be transformed into other distributions, such as normal or gamma distributions easily).
I found a nice overview of some hardware RNGs along with some or their stats (such as speed) here. I don't think it's exhaustive and probably there are even faster devices by now. The prices usually also forbid using them for personal purposes :-)
Some people also came up with ingenious ways of generating numbers such as the dice generator and its (very) big brother, the Dice-O-Matic mark II. Of course, you only build such a beast if you need many dice rolls.
Although I'm not exactly sure on the effect of wear on the dice. Casinos for example are obliged to renew dice every now and then, and casino dice also have no rounded edges, to minimize skewing the distribution. This device is therefore probably not usable for, say, an online poker site (they have very stringent requirements for the random numbers they use).
On another note, the debate of PRNG vs. »true« RNG is also pretty pointless at times, as the debating people tend to ignore a very important fact about random number generation: Different uses have different requirements.
»True« random numbers are suitable for things like key generation, seeding cryptographically-secure PRNGs, &c.
Pseudo-random numbers are a must for simulations, where you need the ability to exactly reproduce the results of a simulation run. Also, since your results may be just an artifact of the PRNG used, be sure to repeat the experiment with at least one other PRNG of a different »family« (linear congruential generators, for example, are nearly unsuitable for simulations, and you shouldn't do the experiment with both the Mersenne Twister and a WELL generator, since they use a very similar underlying algorithm [in fact, WELL is an improvement on MT19937 by the original authors]).
And then there are quasi-random numbers. The very first picture in this article details such numbers. They are designed not to be »random« but rather well-distributed. As such they are very suitable for Monte-Carlo simulations where a uniform coverage of the space is desirable without »clumps« or »holes« as seen in the second picture of the article.
So for your Pi example a quasi-random number generator might actually be the option yielding the best results.
I will be picky and assert that neither of the data sets depicted are "more random" than the other. This is like saying that 13 is more random than 100. Ignoring the fact that they represent finite samples from theoretical distributions, what you are really talking about are differences in distribution and what quasi-random number types call discrepancy.
Will,
The first thing I asked about the two images was, "Which one do you think looks more randomly distributed?" (Emphasis added.)
That's quite a bit different than asking if 13 is more random than 100, actually. One single data point can't be measured for randomness. A set of points certainly can.
I skipped over a lot of the fundamental tests that you would normally do in order to get to the Monte Carlo Pi program that I wanted to present. You'd want to do range, mean, variance, and bucket tests first, but I thought this program was a neat trick. If you really want to be picky, it would probably be easier to talk about these glaring omissions on my part, rather than split hairs about the difference between the phrases "more random" and "more randomly distributed."
Bill, thanks for sharing this. I wrote a small fiddle to test various implementations of PRNG for JavaScript. Sharing it here, just in case someone would want to play with it :)
I think I would use white noise combined with a PRNG. I would pick the ones & zeros out of the ambient noise in a room. Even if someone could control the noise in my room, the numbers would picked out in such away that they would never be able to influence the noise enough. If you needed more speed, this could be combined (e.g. like as the seed for handful of numbers) with a faster PRNG. Even using "slow" methods, a computer will make quick work of even the largest numbers.
Hi Bill,
I stumbled upon this post after writing something very similar on my own blog, and I was amused to recognized you SO username. Thanks for your very clear article.
I'd like to comment on a few points you made:
The obvious drawback is that hardware sources of randomness are inherently slower than software solutions, but if you're considering a white noise source I assume you've already accepted that.
Intel is currently launching a chip that might change this. Here's an article about this; basically, they are using coupled inverters forced into an inconsistent state, and measuring how their wave function collapses to produce truly (quantum) random numbers, at a fascinating rate.
Also, I'd like to add to the debate launched by Will Dwinnell. I'm only speaking from my own understanding here, so please do correct me if I make mistakes, but the general ideas should be correct.
Deciding whether a single data point is random is at the core of the technique advised by the NIST for testing PRNGs. Given a sequence and a transformation (sum all bits, approximate pi, ...), you assess how likely a perfectly random source was to yield the same result (they call that the P-value). Finally, you reject the sequence as non-random if its p-value is lower than a predefined threshold, say 1% -- that is, if the probability of randomly obtaining the same result was below 0.01.
Finally, you repeat the experiment for many sequences, and verify that the rejection threshold is acceptably close to 0.01.
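The simplest of those tests, the monobit (frequency) test, makes the idea concrete: it converts the distance between the observed count of ones and n/2 into a p-value. Here is a sketch of the published formula (my own code, not the NIST reference implementation):

```python
import math

def monobit_p_value(bits):
    # NIST SP 800-22 frequency test: map bits to +/-1, sum them,
    # and compute p = erfc(|S_n| / sqrt(2 * n)).
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

print(monobit_p_value([0, 1] * 500))   # perfectly balanced: p = 1.0
print(monobit_p_value([1] * 1000))     # all ones: p is vanishingly small
```

A sequence is then rejected when its p-value falls below the chosen threshold, e.g. 0.01.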
So by many tests, even asking which image was more random would have been perfectly acceptable ;)
I've written an article about testing pseudo-random number generators, though my hosting provider has some trouble these days. I would be honoured if you could perhaps have a look, and tell me what you think.
Again, thanks for this excellent post. You have a very nice writing style.
Cheers,
Clément.
I'm wondering why there is such a difference in error while computing PI using SecureRandom. It's just 10,000 points right? Wouldn't that mean that SecureRandom is more stratified? I mean more evenly distributed, which doesn't necessarily mean it's more random? Like quasi-random number generators, for example.
Introduction
Now that I’ve got your attention with a provocative title I suppose I had better come clean and clarify some details:
A few years back I bought a cookery book written by a particularly annoying TV Chef. The title of the book was “30 minute meals”. My assumption (not unreasonably I don’t think) was that here would be some quick easy meals we could make during the week. The reality is that, although it is physically possible to make these meals in 30 minutes, that is only possible if:
- you have access to a good kitchen with lots of equipment
- you are able to prep veg at a professional’s speed
- you are prepared to dash through the recipe at break-neck speed, leaving your kitchen looking like the scene of a terrorist attack afterwards
Whilst initially being annoyed at being misled, it’s now one of my favourite cook books, here’s why: Whilst the recipes take longer than 30 minutes for any normal person, these are still some tasty, and quite complex meals that can be made up in a surprisingly short time frame if you’re smart about it.
So what’s all this got to do with software?
In a similar manner to the 30 minute claim, 10 minutes, while physically possible comes with a few caveats:
- This is the fastest I’ve managed to complete this process not the average
- The React App in question is a simple ‘hello world’ example
- I added in just a few simple tests
- I was already signed up to the services that I needed
- I knew exactly the steps I needed to do and in what order
However, like the cook book, the overall point here is not that everyone should be able to deploy an app in this time-frame but rather to demonstrate that you can put together, test and deploy a scalable, public facing application, complete with continuous integration in a shorter time-frame than you might expect using readily available tools. Still interested? Let’s take a look how.
The Prep
The “ingredients” that you will need to create your app are:
- VSCode or your favourite text editor
- Git
- Node
- We’ll be using GitHub as a repository so you’ll need a GitHub account
- We’ll use AWS Amplify to deploy our code so you’ll need an AWS account
Method
1. Git init
The first thing we are going to need is a git repo to work in. Head on over to GitHub and click the button to create a new repository. Give it a name such as “hello-world” and fill in any other details you might want. Grab the url and in your local git terminal do a
git clone.
2. Create React App
Next we need some code. The easiest way I know of getting started with a new React App is by using create-react-app, so that’s what we’ll use. Back to the terminal,
cd into the repo we just created in step 1, and run
npx create-react-app hello-world
Wait for the process to complete, then as a last step we’re going to move the files out of the created directory and up one level to our root directory just to keep things tidy:
mv hello-world/* hello-world/.* . && rmdir hello-world
Awesome now we should have a basic React app. Let’s run an
npm i to install our dependencies and then
npm run start to check everything is working. If everything works your default browser should open with the app running on
localhost:3000. Let’s stage and commit the new files, then do a
git push to set up our first commit to the repo.
3. Continuous Integration
Next we’re going to edit our GitHub repo and make a few changes that will allow our code to be built and tested each time we make a Pull Request into the master branch.
Head to your repository main page and in the toolbar at the top click on “Actions”.
GitHub should present you with a few default Actions based on the code we’ve pushed to the repo that you can choose from, go to the Node.js one and click “Set up this workflow”.
That will generate a .yml file for you that will execute an
npm ci, an
npm run build and a
npm run test each time we push to master or create a pull request (PR) from master. Click the “Start Commit” button and then “Commit new file” to commit the .yml directly to master.
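For reference, the generated workflow looks roughly like this; the exact template GitHub offers changes over time, so treat it as a sketch rather than the literal file:

```yaml
name: Node.js CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x, 12.x, 14.x]
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm run build --if-present
      - run: npm test
```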
If you go back to your “Code” tab you’ll notice a yellow marker next to the commit. This indicates a build is in progress on the commit we just made, wait for it to go green so you can be confident all is well.
Last thing we’re going to do is insist the build passes before a PR is merged. Go into the settings tab for the repository and click “Branches”. Type
master into the “Branch name pattern” text box and then check “Require status checks to pass before merging” and check the box for each node version (10.x, 12.x, 14.x). Hit “Create” then “Save Changes”.
4. Write some JSX
Ok now we’ve got our repository sorted out we need some code, create a new branch, name it what you like. Check it out and then we’re going to alter the
app.jsx file to be a basic hello world button and message, something like this:
import React, { useState } from 'react';
import './App.css';

function App() {
  const [ show, setShow ] = useState();
  return (
    <div className="App">
      <button
        className="hello-button"
        onClick={() => setShow(!show)}
      >{show ? 'Reset' : 'Say Hello'}</button>
      {show && (<h1 className="hello-message">Hello World</h1>)}
    </div>
  );
}

export default App;
commit that but don’t push just yet.
5. Test it
Next lets write some tests. We’re going to use Enzyme here to shallow render our App component so lets add it to the package.json
npm install --save enzyme enzyme-adapter-react-16 react-test-renderer
Next we need to update
src/setupTests.js so that our app imports it:
import { configure } from 'enzyme'; import Adapter from 'enzyme-adapter-react-16'; configure({ adapter: new Adapter() });
And last we are going to write a few simple tests around our component a bit like this:
import React from 'react'; import { shallow } from 'enzyme'; import App from './App'; test('should not show the message on mount', () => { const wrapper = shallow(<App />); expect(wrapper.find('.hello-message').exists()).toBeFalsy() }); test('should show the correct button text on mount', () => { const wrapper = shallow(<App />); const button = wrapper.find('.hello-button').first(); expect(button.text()).toEqual('Say Hello'); }); test('should change the button text on click', () => { const wrapper = shallow(<App />); const button = wrapper.find('.hello-button').first(); button.simulate('click'); expect( wrapper.find('.hello-button').first().text()).toEqual('Reset'); }); test('should show the message on button click', () => { const wrapper = shallow(<App />); let button = wrapper.find('.hello-button').first(); button.simulate('click'); expect(wrapper.find('.hello-message').exists()).toBeTruthy() });
Great now run
npm run test to make sure they all pass, then commit the tests with an appropriate message.
Once you’ve pushed the code, create a Pull Request in GitHub, you’ll notice that once you create the PR the build will be triggered and you’ll be blocked from merging until it’s complete. Once it’s passed we can merge and delete the branch, at this point we are nearly done!
Dependabot
A big issue with long running projects can be keeping your dependencies up to date, it’s easy to fall behind and this can cause a lot of pain when you do eventually get round to updating. We’re going to side-step all that pain by using dependabot to keep things up-to-date for us. First step is to sign up with your GitHub account following the onscreen instructions. Once you’ve done that we need to add a repo. Click “Select Repos to add” and add the hello-world repo.
That’s it, dependabot will create a new PR each time a version needs to be updated and will manage rebasing all out of date PRs it creates. We can go into the settings and set PRs to be auto-merged, which means that if (and only if) the build passes on GitHub, the update PR will be automatically merged for us. All in all this means that we can leave the repository untouched and all non-breaking updates will be handled automatically. When we come back to work on our project we just need to do an
npm install and handle any PRs which have breaking changes to be all up-to-date again. That’s a lot of work handled for us in a couple of clicks.
Deploy
Last step then, we’ve got a working app with tests and a CI pipeline, the only thing we have left to do is deploy. For this we’re going to use AWS Amplify. You’ll need an AWS account, once you have that search for Amplify under Services and navigate to the AWS Amplify Console. Click on “Connect App” and then choose the GitHub Option.
Follow the on-screen steps to connect your GitHub account and then choose your hello-world repo and select the
master branch when asked to add a repository branch.
Click next, you’ll be asked to configure your build settings but we don’t need any environmental variables here and the default is fine so let’s just click next again. We are asked to review our settings then click “Save and Deploy”!
That is it, AWS will show you a timeline of your app building and deploying, when all the steps are green you can click the link and be navigated to your deployed Hello World React app, or go there on your phone just to prove to yourself it’s live! Amplify will automatically pick up any new commit to the master branch and re-deploy a new version of the code for us. This means that if we make a new branch and add a feature, when we create a pull request, the CI pipeline will run the build and tests, when they pass we can merge our code, once the code is merged a new version will be deployed with no extra steps required by us.
Conclusion
We’ve gone from nothing, not even a git repository, to a tested React application complete with continuous integration, automated dependency updates and automated deployment to a public URL in (in my opinion) surprisingly few steps, using tools that are mostly free to access and handle all the boiler plate configuration for you. Let’s have a last look at the tools used here:
- create-react-app: free to use. With a single step it will handle all the boilerplate of setting up a modern React app with a Jest test framework. We can extend this easily to add in things like Scss and TypeScript if we wanted.
- GitHub: free to use. GitHub provides us with a remote repository for our code and crucially, provides the ability to set up a CI server that will build and test our code each time we commit, blocking failing builds from being merged to master. All this with minimum configuration.
- dependabot: free to use. Dependabot keeps all our dependencies up-to-date, creating, maintaining and merging non-breaking pull requests. With just a few clicks we no longer need to worry about versions of packages getting stale and the security vulnerabilities which that can invite.
With just the tools listed above, even a developer with little to no experience of front end development and CI tools can create a scalable, production ready front end React application in a short amount of time.
AWS Amplify is the only product here that is not free, it is however pretty low cost at low levels of usage, if you’re simply deploying this example to try it out it should cost next to nothing, although you will want to delete it once you’re done to prevent the cost from accumulating. If you’re looking to deploy a small project I’ve personally found it to be pretty good value.
Hopefully what I’ve demonstrated here is how quickly and easily you can create and deploy an application using these tools, even if you don’t have a lot of knowledge or experience. I’ve used React as create-react-app is a powerful way to spin up a new project quickly but if you already have an application and want to add in CI and / or to deploy it then the process will be even quicker.
So that’s it, zero to deployed in less than 10 minutes… or at the very least in less time than you might expect. | https://blog.scottlogic.com/2020/09/03/create-test-integrate-and-deploy-a-react-app-in-under-10-minutes.html | CC-MAIN-2021-21 | refinedweb | 2,169 | 64.64 |
Tips for Scripting Java with Jython
In Jython, functions, methods, modules, and classes are all first class objects, which means they can be both passed to and returned from a function. This ability allows for some common programming tasks to be managed quite easily.
For example, suppose you have a complicated resource that needs to be opened, used, and closed in a variety of places, which might cause you to retype the open and close logic frequently. A simple example is a file. By passing the useful function as a parameter you can manage the resource in one place:
def fileLinesDo(fileName, lineFunc):
    file = open(fileName, 'r')
    result = [lineFunc(each) for each in file.readlines()]
    file.close()
    return result
The key here is that you don't have to rewrite the open and close statements each time you use the file, you can just call fileLinesDo. This is a trivial point for files, but if we added error checking or had a more complicated resource, it becomes very useful.
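As a quick usage sketch (the file name and contents below are invented for the example, and the helper is repeated so the snippet runs on its own), any one-argument function can be handed to fileLinesDo:

```python
import os
import tempfile

def fileLinesDo(fileName, lineFunc):
    file = open(fileName, 'r')
    result = [lineFunc(each) for each in file.readlines()]
    file.close()
    return result

# Create a small throwaway file so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), 'lines.txt')
f = open(path, 'w')
f.write('first line\nsecond\n')
f.close()

# Pass len as the per-line function: no open/close logic at the call site.
lengths = fileLinesDo(path, len)
print(lengths)  # each length includes the trailing newline character
```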
In Java, the legality of a method call such as obj.method() is calculated at compile time. Although this ensures that the method will exist at runtime, in practice it leads to a lot of time and effort spent on typecasts. It also creates lines of code such as this, where you know the line will be legal at runtime, but you still have to convince the compiler.
MyClass instance = (MyClass)list.get(2);
In Jython, the legality of obj.method() is determined at runtime based on the runtime type of the obj variable. In addition to removing the need for a typecast, this has several subtle benefits. Most importantly, it encourages reuse by making it easier to use an object of a different type in the existing code. This eliminates the need to define Java-style interfaces; instead you can just define the methods directly.
It also allows for old code to be used in ways not imagined when the code was written. Additionally, it makes Jython code much easier to unit test than Java code (and it also makes it easy to unit test Java code from Jython). In Jython, you can easily create a simple "mock object" that trivially responds to the methods of a hard- or expensive-to-create instance (such as a GUI or database), and use that to test your code.
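A small sketch of that mock-object idea in plain Python (the display class and its setText method are invented for illustration): because legality is checked at runtime, any object with the right method works, with no interface declaration needed.

```python
def renderStatus(display, count):
    # Works with anything that has a setText method; no interface required.
    display.setText('%d reviews' % count)

class MockDisplay:
    """Stands in for an expensive GUI widget during a unit test."""
    def __init__(self):
        self.text = None

    def setText(self, value):
        self.text = value

mock = MockDisplay()
renderStatus(mock, 3)
print(mock.text)  # -> 3 reviews
```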
Jython allows operator overloading in the form of specially named methods that can be defined on any object, and which are called by the interpreter, if they exist, when the corresponding core language function or operator is invoked.
Some of these are quite similar to Java functionality, such as the special method __str__, which is called when the object needs to be printed, and is equivalent to the Java toString(). Others, such as the __add__ method, are similar to operator overloading in C++. You can allow your instances to be accessible using array-like syntax using the __getitem__ and __setitem__ methods.
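These special methods behave the same way in plain Python; a minimal, invented Playlist class shows __str__, __add__, and __getitem__ working together:

```python
class Playlist:
    def __init__(self, songs):
        self.songs = list(songs)

    def __str__(self):
        # Called by str()/print, like Java's toString().
        return 'Playlist of %d songs' % len(self.songs)

    def __add__(self, other):
        # Called for the + operator.
        return Playlist(self.songs + other.songs)

    def __getitem__(self, index):
        # Allows array-like access: playlist[0]
        return self.songs[index]

a = Playlist(['Intro'])
b = Playlist(['Outro'])
combined = a + b
print(str(combined))  # -> Playlist of 2 songs
print(combined[1])    # -> Outro
```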
Other special methods allow an amazing amount of flexibility for your objects. The special method __getattr__ is called on any failed attribute lookup, and can be used, for example, to return a default value, or automatically redirect to a proxy object, or call a getter method. The special method __setattr__ is called on attribute assignment, and can be used, for example, to verify values, or to trivially broadcast changes to other interested objects. There is even a special method __call__, which allows your instances to be treated like functions.
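Here is a compact sketch of two of those hooks (the class is invented for the example): __getattr__ supplies a default for any missing attribute, and __call__ lets the instance be used like a function.

```python
class Defaulting:
    def __getattr__(self, name):
        # Only invoked when normal attribute lookup fails.
        return '<missing: %s>' % name

    def __call__(self, x):
        # Lets the instance be called like a function.
        return x * 2

obj = Defaulting()
print(obj.anything)  # -> <missing: anything>
print(obj(21))       # -> 42
```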
Most of the features discussed above are features of the Python language, but Jython also has features that allow nearly transparent usage of existing Java code. Java packages can be imported into Jython as though they were Jython modules, and Java objects can be created using Jython object creation syntax. For the most part, you can use the Java objects in your Jython program exactly as though they were Jython objects (as shown in tip #1).
Jython classes can inherit from Java classes (Jython allows multiple inheritance of Jython classes, however no more than one Java class can be in the inheritance tree of a Jython object). The inheritance rules work essentially as you would expect -- the parent class Java methods are called if they are not overridden in the child class. If the parent class method is overloaded, Jython chooses the correct version of the method based on the signature of the arguments at runtime.
Jython uses introspection on Java objects to allow access to those objects in a more normal Python coding style. Python coders rarely use simple get and set methods, preferring to access the variable directly. Of course, Java style tends to encourage get and set methods. Jython introspection allows those calls to be made automatically. When Jython sees a line of code such as:
x = javaObj.attribute
it automatically generates a Java call of the method javaObj.getAttribute(), if such a method exists. Similarly, an assignment to javaObj.attribute is converted to a call to javaObj.setAttribute.
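Jython performs this mapping for Java beans internally, but the idea can be approximated in pure Python, with __getattr__ and __setattr__ redirecting attribute access to getter and setter methods (the bean class below is invented for the example):

```python
class BeanProxy:
    """Expose getX()/setX() pairs as plain attributes, roughly as Jython does."""
    def __init__(self, bean):
        # Write directly to __dict__ to avoid triggering __setattr__.
        self.__dict__['bean'] = bean

    def __getattr__(self, name):
        getter = getattr(self.bean, 'get' + name.capitalize())
        return getter()

    def __setattr__(self, name, value):
        setter = getattr(self.bean, 'set' + name.capitalize())
        setter(value)

class FakeBean:
    def __init__(self):
        self._title = 'untitled'
    def getTitle(self):
        return self._title
    def setTitle(self, value):
        self._title = value

proxy = BeanProxy(FakeBean())
proxy.title = 'Son of the Morning'  # calls setTitle(...) behind the scenes
print(proxy.title)                  # calls getTitle()
```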
In addition, set methods can be automatically triggered from constructors by using a keyword argument. For example:
javax.swing.JProgressBar(stringPainted=1, foreground=color.green)
automatically invokes the setStringPainted() and setForeground() methods of the new JProgressBar instance -- even though the normal Java constructor for JProgressBar does not take arguments for these properties.
If the set method itself takes an instance argument, Jython lets you implicitly create an object of the needed class by placing the arguments to the constructor sequentially in a list on the right side of the assignment (technically, the structure in Jython is called a tuple, and is an immutable list). For example:
button.preferredSize = (300, 50)
calls the Java code
button.setPreferredSize(new Dimension(300, 50))
Jython also automatically performs introspection on event listener registration patterns and interfaces. When a Java class allows listeners to be registered for a bean event, Jython searches for the listener interface associated with that event.
When an instance of the class is created, Jython adds an additional attribute for each method defined by the interface. Since Jython has first class functions, you can directly assign a Jython function to those attributes, and Jython calls that function when the event is triggered. This allows you to replace Java inner classes with ordinary methods for responding to events.
The following Java code uses inner classes to terminate the program in response to a button push:
JButton close = new JButton("Close Me");
close.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent evt) {
        java.lang.System.exit(0);
    }
});
The Jython equivalent uses Jython introspection to separate the function definition from the widget creation. It creates a separate function, terminateProgram(), and passes that to the JButton to be called when the action is performed. This code assumes that the JButton is defined inside a class.
close = swing.JButton("Close Me", actionPerformed=self.terminateProgram)

def terminateProgram(self, event):
    java.lang.System.exit(0)
The combination of property and event introspection makes Java libraries extremely easy to use from Jython. Swing code, for example, is significantly easier to read and manage because of the use of the Jython shortcuts.
The easiest way to include Jython functionality into an existing Java program is to embed a Jython interpreter directly into the Java code. The Jython interpreter is a Java object, and an instance of the interpreter can be used within a Java program to evaluate Jython code. The Java program can interact with the Jython interpreter, and pass data back and forth between the two.
The embedded interpreter can be used to manage various kinds of customization of the parent program. For example, you could write a system management framework that allows users to define responses to system events in Jython. Property files could be written as live Jython modules. You could create programmatic filters with all the power of the Python language to enhance, for example, an email client. You could use Jython for macro functionality in nearly any kind of program, or as a mechanism to separate game logic from a game engine.
This is a very straightforward way of getting the benefits of Jython on an existing Java project. In fact, Jython servlets are written in this manner, with a pre-existing Java servlet which reads the Jython servlet in an embedded interpreter.
I hope this list has encouraged you to seek out Jython and try it on your JVM-based projects. For more information get a copy of Jython Essentials. Jython can be downloaded from Jython.org.
Noel Rappin
has a Ph.D. in computer science from the Georgia Institute of Technology, where his research included methods for teaching Object-Oriented Programming and Design. He has extensive production experience in both Java and Python.
15 May 2012
A general understanding of JavaScript will help you make the most of this article.
Sophisticated JavaScript applications can be found all over the place these days. As these applications become more and more complex, it's no longer acceptable to have a long chain of jQuery callback statements, or even distinct functions called at various points through your application. This has led to JavaScript developers learning what traditional software programmers have known for decades: organization and efficiency are important and can make the difference between an application that performs great and one that doesn't.
One of the most commonly used architecture patterns to achieve this organization and efficiency is known as Model View Controller (or MVC). This pattern encourages developers to separate distinct parts of their application into pieces that are more manageable. Rather than having a function that makes a call directly to the database, you create a Model to manage that for you. Instead of having an HTML file sprinkled with output and logic statements, a simple template, or View, allows you to streamline your display code. Finally a Controller manages the flow of your application, helping the various bits and pieces talk to each other more efficiently. Using this pattern in your application makes it easier to add new functionality.
As part of the recent explosion of Internet-based software development, a dizzying array of MVC frameworks with names like Ember.js, Backbone.js, Knockout.js, Spine.js, Batman.js, and Angular.js have emerged. Written in JavaScript and designed for JavaScript development, these libraries have filled the void between beginner and intermediate developers on one side, and hardcore programmers on the other. They offer various features and functionality that will suit different developers of varying skill levels based on their needs.
In this tutorial you'll become more familiar with the basics of Ember.js as you build a working Twitter timeline viewer.
Ember.js (under that name) is one of the newest members of the JavaScript framework pack. It evolved out of a project called SproutCore, created originally in 2007 and used heavily by Apple for various web applications including MobileMe. At emberjs.com, Ember is described as "a JavaScript framework for creating ambitious web applications that eliminates boilerplate and provide a standard application architecture." It comes tightly integrated with a templating engine known as Handlebars, which gives Ember one of its most powerful features: two-way data-binding. Ember also offers other features such as state management (is a user logged out or logged in), auto-updating templates (when the underlying data changes so does your UI), and computed properties (firstName + lastName = fullName). Ember is already a powerful player after a solid year's worth of development.
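Computed properties (the firstName + lastName = fullName example) can be sketched without Ember using a plain JavaScript getter. This only illustrates the idea, not Ember's actual API, and the names are invented:

```javascript
const person = {
  firstName: 'Ada',
  lastName: 'Lovelace',
  // Recomputed on every read, so it always reflects its parts.
  get fullName() {
    return this.firstName + ' ' + this.lastName;
  }
};

console.log(person.fullName); // -> Ada Lovelace
person.lastName = 'Byron';
console.log(person.fullName); // -> Ada Byron
```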
Ember has only one dependency—jQuery. The boilerplate HTML setup for an Ember application should look something like the code below. Note that both jQuery and Ember are being pulled from a CDN (content delivery network). This speeds up your users' page load if they have already downloaded these files as a result of earlier visits to other websites that require them.
<html>
<head>
  <script src=""></script>
  <script src=""></script>
  <script src="js/app.js"></script>
</head>
<body>
</body>
</html>
Before you proceed with this tutorial it would probably be a good idea to more clearly define MVC. The concept has been around since 1979 and since that time a number of different variations on the pattern have emerged. The most common flow usually goes something like this:

1. The user interacts with the View, for example by clicking a button or submitting a form.
2. The Controller receives that input and decides how to handle it.
3. The Controller updates the Model, or asks it for data.
4. When the Model's data changes, the View is notified.
5. The View re-renders to reflect the new state of the Model.
Understanding how the MVC pattern works can make your application flow more easily. And, because code is split into distinct pieces, it's easier for teams of developers to work together without interfering with each other.
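That request/response loop can be sketched in a few lines of framework-free JavaScript; the names here are invented purely to illustrate the separation of responsibilities:

```javascript
// Model: owns the data and nothing else.
const model = {
  songs: ['Son of the Morning'],
  add(title) { this.songs.push(title); }
};

// View: turns model data into output; no business logic.
const view = {
  render(songs) { return 'Songs: ' + songs.join(', '); }
};

// Controller: receives the "user action" and coordinates the other two.
const controller = {
  addSong(title) {
    model.add(title);                 // update the Model
    return view.render(model.songs);  // hand fresh data to the View
  }
};

const output = controller.addSong('Endseekers');
console.log(output); // -> Songs: Son of the Morning, Endseekers
```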
JavaScript is a flexible and powerful language but it also has its shortcomings. Out of the box it doesn't offer the sort of functionality that lends itself to MVC style development. So Ember has extended the base language with a slew of extras. When building your Ember application there are four main pieces that you'll be working with: Application, Model, View, and Controller. The following sections review each of these pieces.
Every Ember application requires an instance of Ember.Application. It's the basis for the entire rest of your code, and provides useful functionality as well as a namespace (a way of grouping the rest of the pieces of your app). Defining an Ember application is simple:
Songs = Ember.Application.create({ mixmaster: 'Andy' });
This code defines an application named
Songs with a property named
mixmaster set to
Andy . You can call your application whatever you like, but Ember requires the variable name to begin with a capital letter so that the binding system can find it. There are additional built-in options that can be added when creating your application, and you can add any arbitrary property or method as well, but the main one beginning users might care about is the
ready() method. This works exactly like jQuery's
document.ready() block and can be implemented in the following manner:
Songs = Ember.Application.create({
  mixmaster: 'Andy',
  totalReviews: 0,
  ready: function(){
    alert('Ember sings helloooooooooo!');
  }
});
An application is nothing without data. Ember helps developers manage this data in a structured way using Models. In addition to holding data, Ember Models also model the data within them. In other words, if you wanted to store information about your MP3 collection, your Model might contain a title property, an artist property, a genre property, and so on. That Model might look something like this:
Songs.Song = Ember.Object.extend({
  title: null,
  artist: null,
  genre: null,
  listens: 0
});
There are a few things to note about these lines. Songs is the name of the application, while Song is the name of the Model. The title, artist, and genre properties will obviously be filled in later, and so are marked null (or nothing). The listens property defaults to 0 and its value will increase as you listen to your music collection.
Now that the Song model is in place, you can add your first song. You used extend to initialize the Song model, but you'll use create to add an instance of it. Here's what that looks like:
mySong = Song.create({
  title: 'Son of the Morning',
  artist: 'Oh, Sleeper',
  genre: 'Screamo'
});
Notice that the variable doesn't begin with an upper case letter; that's because it's an instance of the Song model. The new song also isn't within the Songs namespace. You'll almost never create an instance of a Model within your application. You're certainly welcome to do so, but generally you'd place each instance of a Model within a larger collection of similar objects such as an ArrayController (more on that later).
In an Ember application, or any MVC-style application, a View is something the user can see and interact with. You define an inline template by adding raw HTML directly to the page. This template will be contained within script tags. You add it to the page wherever you want your content to appear.
<script type="text/x-handlebars">
  Hello <b>{{Songs.mixmaster}}</b>
</script>
Notice that the script tag has a type of text/x-handlebars. This gives Ember something to grab on to when it loads up the page. Any HTML contained within this script tag is automatically prepared by Ember for use in your application. Placing these lines of code within your application will display the following text:
Hello <b>Andy</b>
Before moving on, take a peek under the hood. In your browser, right-click the bold text and inspect it using the browser's dev tools. You might notice some extra elements. In order to know which part of your HTML to update when an underlying property changes, Handlebars will insert marker elements with a unique ID; for example:
<b>
  <script id="metamorph-0-start" type="text/x-placeholder"></script>
  Andy
  <script id="metamorph-0-end" type="text/x-placeholder"></script>
</b>
You can also define a View directly in JavaScript, and then display it on the page by using a view helper. Ember has generic views that create simple div tags in your application, but it also comes prepackaged with a set of views for building basic controls such as text inputs, check boxes, and select lists. You start by defining a simple TextArea View within your JavaScript file.
Songs.ReviewTextArea = Ember.TextArea.extend({ placeholder: 'Enter your review' });
Then display it on the page by referencing the path to the variable containing the view, prefaced by the word view. Running the following code in your browser displays a TextArea field with placeholder text of "Enter your review". You can also specify rows and cols as additional properties in your definition.
<script type="text/x-handlebars">
  {{view Songs.ReviewTextArea}}
</script>
By now you're probably wondering what the {{ and }} in the code stand for, so this is a perfect time to talk about Handlebars, also known as mustaches. Turn your head sideways and you'll see why they're called Handlebars, pard'ner. Handlebars is a templating engine that lets developers mix vanilla HTML and Handlebars expressions, resulting in rendered HTML. An expression begins with {{ and ends with }}. As discussed previously, all templates must be placed within script tags with a type of text/x-handlebars.
By default, any value displayed within a Handlebars expression is bound to the underlying property. That means that if the value changes because of some other action within the application, the value displayed to the user will update as well. Consider the following code:
<script type="text/x-handlebars">
  My songs have {{Songs.totalReviews}} reviews.
</script>
When your application first initializes the user would see the following text.
My songs have 0 reviews.
But through the magic of data bindings, that value would change in real time as additional reviews were added by updating Songs.totalReviews.
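Under the hood, a binding like this boils down to "notify interested parties when a value changes." Here is a simplified, one-directional sketch in framework-free JavaScript (Ember's real bindings are two-way and considerably more involved; all names below are invented):

```javascript
function observable(initial) {
  let value = initial;
  const listeners = [];
  return {
    get() { return value; },
    set(next) {
      value = next;
      listeners.forEach(fn => fn(next)); // push the change to every binding
    },
    subscribe(fn) { listeners.push(fn); }
  };
}

const totalReviews = observable(0);
let rendered = '';
// Stands in for the Handlebars template re-render.
totalReviews.subscribe(n => { rendered = 'My songs have ' + n + ' reviews.'; });

totalReviews.set(3);
console.log(rendered); // -> My songs have 3 reviews.
```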
Handlebars also supports flow control through the use of {{#if}} and {{else}}. These elements let you conditionalize your templates based on values in your application. You could change the previous example to display an alternate message to the user when there are no reviews:
<script type="text/x-handlebars">
  {{#if Songs.totalReviews}}
    Read all my reviews!
  {{else}}
    There are no reviews right now.
  {{/if}}
</script>
If at any point in the life of the application the Songs.totalReviews value changes, the view will update and display the other part of the message. It's also worth noting that the # and / symbols are merely there to tell Handlebars that this particular view helper has a closing part.
Earlier, the Model was defined as a way to enable developers to manage data. That's true, but only in a very narrow way. A Model only contains data about a single thing; for example, a song (but not songs) or a person (but not people). When you want to manage multiple pieces of the same type of data you need a Controller. With Ember you can use an ArrayController to manage sets of songs, people, widgets, or whatever. Each ArrayController has a built-in content property that is used to store data. This data can be simple strings or complex values such as arrays or objects. Additionally, ArrayControllers can contain functions that are used to interact with the data contained within them. What might an ArrayController for your Song collection look like?
Songs.songsController = Ember.ArrayController.create({
  content: [],
  init: function(){
    // create an instance of the Song model
    var song = Songs.Song.create({
      title: 'Son of the Morning',
      artist: 'Oh, Sleeper',
      genre: 'Screamo'
    });
    this.pushObject(song);
  }
});
The init function isn't required, but comes in handy as it will be triggered as soon as songsController is ready. It could be used to populate the controller with existing data, and in this case you'll use it to add a single song to the Controller to illustrate Ember's data-binding. Add the previous ArrayController definition and the following inline template and run the code in your browser:
<script type="text/x-handlebars">
  {{#each Songs.songsController}}
    <h3>{{title}}</h3>
    <p>{{artist}} - {{genre}}</p>
  {{/each}}
</script>
The Handlebars each helper receives a path to a set of data, and then loops over it. Everything inside the matching each blocks will be displayed on the page for every item in the controller. Notice that you're not providing a path directly to the content array, because as far as Ember is concerned the controller is the array. The resulting HTML output looks like this:
<h3>Son of the Morning</h3>
<p>Oh, Sleeper - Screamo</p>
At this point, you should have a good understanding of what Ember is and what it can do. You should also understand each of the pieces that enable Ember to work its magic: Application, Model, View, and Controller. It's time to put that knowledge to use in writing a real, working application. You're going to skip the industry standard "todo app" and move on to something near and dear to many: Twitter. In the rest of this tutorial you will be building a Twitter timeline viewer. Before writing any code, it might be useful to see the final result.
Using the boilerplate HTML page from the beginning of the article you'll first build out the base HTML. Copy and paste the following code into a new HTML file named index.html. You'll need to reference the CSS file found in the sample files for this article. The sample files also contain a starting point for this project so feel free to use that as well.
<!doctype html>
<html>
<head>
  <title>Tweets</title>
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <link rel="stylesheet" href="styles.css">
  <script src=""></script>
  <script src=""></script>
  <script src="app.js"></script>
</head>
<body>
  <script type="text/x-handlebars">
    <div id="frm">
      <b>Load Tweets for: </b>
    </div>
    <div id="content">
      <div id="recent">
        <h3>Recent Users</h3>
      </div>
      <div id="tweets">
        <h3>Tweets</h3>
      </div>
    </div>
  </script>
</body>
</html>
You can see there are three parts to this application: an input field, which allows users to input a Twitter username, the timeline viewer, which displays the selected Twitter users' tweets, and a recent users list, which will store previous searches.
The search box will appear at the top of the page, the recent users in a column to the left, and the tweets themselves will have the majority of the page on the right side.
Next, create another file named app.js and add the following content. These comments helps you keep your code organized. Load this page up in your browser and make sure there are no errors.
/**************************
 * Application
 **************************/

/**************************
 * Models
 **************************/

/**************************
 * Views
 **************************/

/**************************
 * Controllers
 **************************/
The first thing you'll need to do is to initialize your application. Directly under the comment block labeled Application, place the following code:
App = Em.Application.create();
Notice that instead of saying Ember.Application, this line says Em.Application. The Ember team added this handy shortcut to reduce typing by allowing you to use "Em" in any place where you might use "Ember".
Next you'll add the TextInput view and the submit button. Directly under the comment block labeled "Views" add the following code:
App.SearchTextField = Em.TextField.extend({
  insertNewline: function(){
    App.tweetsController.loadTweets();
  }
});
This block starts by using the App namespace, then extends one of Ember's prepackaged Views, the TextField. In addition to allowing arbitrary properties and functions within Views, Ember also has built-in helper functions available for use. That's what the insertNewline() function is; it executes whenever the user presses the Enter/Return key on the keyboard while the cursor is within the input box.
Now that the TextField View is defined, you'll add the corresponding view helper code to the HTML file. Switch to index.html and add the following code directly after the line that reads "Load Tweets for". Remember that anything within {{ and }} is a template and will be used by Ember to output data. Additionally, any template beginning with the word view refers to a View that has been defined within your JavaScript code.
{{view App.SearchTextField placeholder="Twitter username" valueBinding="App.tweetsController.username"}}
<button {{action "loadTweets" target="App.tweetsController"}}>Go!</button>
This portion of the template contains a view helper, and a button tag with an {{action}} helper. The TextField View, SearchTextField, begins with an attribute that is built into HTML5 text input fields: placeholder text. If the field is empty, the text within the placeholder attribute will be placed into the input field. When someone begins typing, the value goes away. Ember enables developers to use any standard HTML5 attributes within its built-in views.

The second attribute highlights the magic of Ember's data bindings. Ember uses a set of conventions to help it determine what you're trying to accomplish. Any attribute in a view (either within a template, or in a JavaScript file) that ends with the word "Binding" (note the capital letter) automatically sets up a binding for the attribute that precedes it. In this case Ember is binding the value of App.tweetsController.username to the input field's value attribute. Anytime the contents of the variable change, the value contained within the input field will update automatically, and vice versa.
The {{action}} helper makes it easier to add functionality to input-driven elements. It has two options: the action name and the target. Taken together they form a "path" to a function contained within an Ember object. In the case of the above button, the "path" would be App.tweetsController.loadTweets(), the same function called when a user presses the Enter key within the text field. Load index.html in your browser and click the submit button, or press the Enter key within the input field. If you're viewing the browser console you'll see an error. This is because App.tweetsController is not yet defined.
Now would be a good time to define App.tweetsController. Add the following code after the Controllers comment block in app.js. The code below should be familiar to you: namespace, ArrayController, content array; it's all there. This time, though, you'll be adding an arbitrary property (username) and a function (loadTweets). After adding the ArrayController, reload your browser. Type a word in the input box and then click the button. You'll get an alert box that echoes the word you typed. Feel free to remove the alert line at any time. You'll also see an error indicating that the addUser method is not defined.
App.tweetsController = Em.ArrayController.create({
  content: [],
  username: '',
  loadTweets: function() {
    var me = this;
    var username = me.get("username");
    alert(username);
    if ( username ) {
      var url = '';
      url += '?screen_name=%@&callback=?'.fmt(me.get("username"));
      // push username to recent user array
      App.recentUsersController.addUser(username);
    }
  }
});
Take a closer look at the loadTweets function definition; it has some unfamiliar bits. The first line sets a scope for the rest of the function. By definition, the scope, or this, for all Ember objects is the current function, in this case App.tweetsController. However, you'll be adding more functionality to the loadTweets function later in this tutorial. Setting the current scope now helps Ember understand the context you're using.
As I noted previously, Ember offers a number of helper functions to make writing applications easier, and these include get() and set(). These two functions are built into every Ember object and provide quick access to any property or function. The next line uses the scope of the current object, App.tweetsController, and then calls the get() function, passing in the name of the property that you wish to get a value for. You might be curious about where the value of username is coming from to begin with. Remember that Ember's data bindings are bidirectional. This means that as soon as you type a value into the input field, the valueBinding attribute of the input field view updates the App.tweetsController object with a value.
After the username has been retrieved, a test is run to make sure it's not empty. At the moment there are only two statements within the if block, but that will change later. The first statement sets the URL to Twitter's JSON file for a user. You might not immediately notice anything special about this until you look closer and see %@ and the .fmt() at the end. The .fmt() function performs a handy string replacement with the %@ as the marker. Since the design of the application calls for storing a running list of searches, you'll have to somehow store your search term. The final line performs that function, pushing the username value into the App.recentUsersController ArrayController. Since this object doesn't exist yet, running the code will result in an error.
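If you're curious what .fmt() does, it is essentially positional string substitution. A rough stand-in (not Ember's actual implementation) might look like this:

```javascript
// Replace each %@ with the next argument, roughly like Ember's String.fmt().
function fmt(template, ...args) {
  let i = 0;
  return template.replace(/%@/g, () => String(args[i++]));
}

const url = fmt(
  '?screen_name=%@&callback=?',
  'emberjs'
);
console.log(url); // -> ...?screen_name=emberjs&callback=?
```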
In this next section you'll create the object used to store recent searches. Take the following code and add it after the App.tweetsController object.
App.recentUsersController = Em.ArrayController.create({
  content: [],
  addUser: function(name) {
    if ( this.contains(name) ) this.removeObject(name);
    this.pushObject(name);
  },
  removeUser: function(view){
    this.removeObject(view.context);
  },
  searchAgain: function(view){
    App.tweetsController.set('username', view.context);
    App.tweetsController.loadTweets();
  },
  reverse: function(){
    return this.toArray().reverse();
  }.property('@each')
});
You're already familiar with creating an ArrayController and adding an empty content array, but this object has a few new elements, starting with the addUser function. This will check the existing array (this) using a built-in Ember function named contains(). If it finds a result it removes it by using the ArrayController's removeObject() function. This function has an opposite named pushObject(), which is used to add individual objects to the content array. Both functions also have pluralized versions that handle multiple objects: pushObjects() and removeObjects(). This code first removes an existing term before adding it so that the same search term isn't displayed more than once. Since you already know how to remove an object from the content array, the only new element in the removeUser() function is the argument. When a function is called using the {{action}} helper, Ember implicitly passes in a reference to the current view. In the case of App.recentUsersController, the view has a context that is essentially the item that is currently being iterated over. This context is used to remove the selected item from the array.
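The remove-then-push behavior of addUser is easy to see with a plain JavaScript array standing in for the ArrayController; the helper below is illustrative only, not Ember code:

```javascript
// Plain-array sketch of addUser: drop any existing copy, then append,
// so the most recent search always sits at the end with no duplicates.
function addUser(list, name) {
  var idx = list.indexOf(name);  // stands in for Ember's contains()
  if (idx !== -1) {
    list.splice(idx, 1);         // stands in for removeObject()
  }
  list.push(name);               // stands in for pushObject()
  return list;
}

var recent = [];
addUser(recent, "alice");
addUser(recent, "bob");
addUser(recent, "alice");        // "alice" moves to the end, no duplicate
console.log(recent);             // → [ 'bob', 'alice' ]
```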
The searchAgain() function also receives the current view as an argument. When a user clicks a previously searched term, this function populates App.tweetsController.username with the selected username, then triggers the loadTweets() function, offering a single-click view for previous searches.
By default, Ember displays contents to the page in ascending order: the first item in the array is displayed first, the second item next, and so on. The design of this application calls for displaying recent searches in descending order. This means that the array must be reversed. While this isn't built-in functionality you can see how easy it is to add.
The reverse() function first converts the Ember content array into a plain vanilla array using the Ember toArray() function, reverses it, and then returns it. What makes it possible to use this function as a data source is the property() function tacked on at the end. The property() function takes a comma-delimited list of properties required by the specified function. In this case the property() function is implicitly using the content array itself, addressing each element within that array using the @each dependent key. You'll see how to implement the reverse() function in the next section.
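One detail behind the toArray() step: Array.prototype.reverse() mutates an array in place, so reversing a copy leaves the original content untouched. A quick sketch, using slice() as a stand-in for Ember's toArray():

```javascript
// Reversing a copy keeps the original array (the content) intact.
var content = ["first", "second", "third"];
var reversed = content.slice().reverse(); // slice() stands in for toArray()

console.log(reversed); // → [ 'third', 'second', 'first' ]
console.log(content);  // → [ 'first', 'second', 'third' ] (unchanged)
```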
Now that you're storing your previous searches, it's time to display them on the page. Copy the following template and add it after the h3 tag labeled Recent Users.
<ol>
  {{#each App.recentUsersController.reverse}}
  <li>
    <a href="#" title="view again" {{action "searchAgain" target="App.recentUsersController"}}>{{this}}</a> -
    <a href="#" title="remove" {{action "removeUser" target="App.recentUsersController"}}>X</a>
  </li>
  {{/each}}
</ol>
You should be familiar with all of this code at this point. The each block points at the content array, and the HTML contained within it will be applied for every item within App.recentUsersController. It's not necessary to explicitly point to the content array, but in this case the code points to the reverse function, which provides the data in reverse order. The {{action}} helper lets users click on each anchor tag and trigger the indicated function. The only element that might not be familiar is {{this}}. When iterating over a content array, Ember keeps a reference to the current item in the {{this}} variable. Because the value of each item is only a string, you can directly output the value of the current item using {{this}}. Clicking a Twitter username will load that user's tweets again, while clicking the X will remove that user from recentUsersController.
Saving search terms is good, but how about actually performing the search? Next you'll be adding the pieces that will retrieve the JSON packet from Twitter and display it to the page. Take the following Ember Model and add it directly after the comment block labeled Model. Remember that Ember Models are a blueprint for the data they will contain.
App.Tweet = Em.Object.extend({
  avatar: null,
  screen_name: null,
  text: null,
  date: null
});
In app.js locate the line that reads App.recentUsersController.addUser(username); and add the following code directly after it:
$.getJSON(url, function(data){
  me.set('content', []);
  $(data).each(function(index, value){
    var t = App.Tweet.create({
      avatar: value.user.profile_image_url,
      screen_name: value.user.screen_name,
      text: value.text,
      date: value.created_at
    });
    me.pushObject(t);
  });
});
If you've used jQuery before you might have used the .get() function to retrieve data. The .getJSON() function does the same thing except it expects a JSON packet as a result. In addition it takes the returned JSON string and converts it into executable JavaScript code for you. Once the data has been retrieved, the content array is emptied, removing all existing tweets. The next line takes the packet of data and wraps it in a jQuery object so that the .each() method can loop over the resulting tweets. Within the each block a copy of the Tweet Model is populated with data, and then pushed into the ArrayController.
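Stripped of jQuery and Ember, the mapping step looks like this. The sample payload below is hypothetical, but its shape mirrors the fields the loop reads from Twitter's JSON:

```javascript
// Hypothetical sample of the JSON shape the loop consumes.
var data = [
  {
    user: { profile_image_url: "http://example.com/a.png", screen_name: "sampleuser" },
    text: "Hello from the sketch",
    created_at: "Mon Sep 24 03:35:21 +0000 2012"
  }
];

// Map each raw tweet into the four fields the App.Tweet model defines.
var content = [];
data.forEach(function (value) {
  content.push({
    avatar: value.user.profile_image_url,
    screen_name: value.user.screen_name,
    text: value.text,
    date: value.created_at
  });
});

console.log(content[0].screen_name); // → sampleuser
```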
Finally, you'll need to add the following display code to index.html. Copy and paste it directly after the h3 tag labeled Tweets.
<ul>
  {{#each App.tweetsController}}
  <li>
    <img {{bindAttr src="avatar"}} />
    <span>{{date}}</span>
    <h3>{{screen_name}}</h3>
    <p>{{text}}</p>
  </li>
  {{/each}}
</ul>
Ember makes it easy to output data to the page using plain {{Handlebars}}, but there's a catch. Remember how Ember wraps outputted values in script tags? That's not an option when you're working with HTML attributes. So Ember provides the {{bindAttr}} helper. Any attribute placed within this helper will output as normal, but still retain bindings. Go ahead and run your application now. Input a username and watch the Tweets fly in.
In this article you learned the basics of Ember.js functionality. You learned how Ember implements MVC using its Models, Views, Controllers, and of course the Application object. You created templates with view helpers and action helpers using Handlebars. You learned how to create blueprints for your data using Models, store that data in collection sets with Controllers, and display the data to the page using Views. Finally, you used Ember to build an entire application with data bindings, computed properties, and auto-updating templates. Your mother would be so proud of you!
For further reading on Ember, check out a few of the following links: | https://www.adobe.com/devnet/html5/articles/flame-on-a-beginners-guide-to-emberjs.html | CC-MAIN-2014-23 | refinedweb | 4,628 | 56.35 |
Windows Presentation Foundation (WPF) provides a bunch of controls that are “data aware” – you can simply bind them against any data source – Xml, CLR class, etc.
I tried extending one such control - ItemsControl - which serves as the base class for controls that display a bunch of items - ListBox, TabControl, ComboBox, etc. - to add functionality that will allow N items to be displayed at a time, and then loop over the remaining items at a specified frequency. The cool part is that as far as a designer is concerned, all the implementation is hidden - all the designer has to do is to use the control on the design surface of Interactive Designer, drag-n-drop data onto the control, and then worry about the presentation of data. In addition, one can specify the number of items to display at once and the refresh frequency.
Feel free to download the sample and use the control. Source available here.
Yeah i got an "invalid XAML" error message as well...
Invalid XAML code if I try to open any Visual Studio 2008 Beta 2 created Page.xaml in Expression Blend 2 August Preview.
Replace the namespace declarations with the new ones
xmlns=""
xmlns:x=""
xmlns:c=""
Replace the reference to FabrikamThrobbingItemsDemo with the new format:
"clr-namespace:FabrikamThrobbingItemsDemo" | http://blogs.msdn.com/b/unnir/archive/2006/01/23/516606.aspx | CC-MAIN-2013-20 | refinedweb | 238 | 60.35 |
Summary
The MapFrame object is a page layout element that is used to display the contents of a map on a layout. It also provides access to page size and positioning, basic navigation methods, and export options.
Discussion
A MapFrame is a Layout element that displays the geographic information added to a Map. It also controls the size and positioning of itself on a layout. More than one MapFrame can reference the same Map.
The listElements function on the Layout object can be used to return MapFrame objects. Providing an element_type of MAPFRAME_ELEMENT allows you to return only MapFrame elements. You can also further refine your filter by providing a wildcard. It is important to uniquely name each map frame so it can be easily referenced using its name property.
Once a MapFrame is referenced, you can get its associated map using the map property, which would allow you to manage layers on a map, get to a map's bookmarks, and more. Even more important is that you can change the Map object a MapFrame is referencing. You can switch maps of the same dimension or different dimensions. If you change the dimension of a map, you must first change the type property on the MapViewer object. For example, if the Camera mode is MAP (for 2D), you must change the mode to LOCAL or GLOBAL (for 3D). Once this is done, you can change the map from 2D to 3D.
The camera property returns a reference to the Camera object. The Camera controls the location and viewing positions of the data being displayed within a map frame. It controls items like scale and extent for 2D maps and camera position information for 3D maps.
It is important to understand the navigation methods and how they work between map (2D) and scene (3D) map frames. Anytime a 2D map, bookmark, or method that uses an extent is applied to a 3D map frame, the result will be a planimetric view of the data. The getLayerExtent, panToExtent, and zoomToAllLayers will always result in a planimetric view. The zoomToBookmark method maintains a 3D view when working between 3D map frames, but once a 2D bookmark is used on a 3D map frame or the other way around, the result will be a planimetric view of the map frame's data.
The elementPositionX and elementPositionY parameters are based on the element's anchor position.
Methods
exportToAIX (out_aix, {resolution})

exportToGIF (out_gif, {resolution}, {world_file}, {gif_color_mode})

GIF files are a legacy raster format for use on the web. GIFs cannot contain more than 256 colors (8 bits per pixel), which, along with optional lossless RLE or LZW compression, makes them smaller than other file formats. Like PNGs, GIF files can also define a transparent color. GIFs can be generated with an accompanying world file for use as georeferenced raster data.

exportToJPEG (out_jpg, {resolution}, {world_file})

JPEG files can be generated with an accompanying world file for use as georeferenced raster data, although PNG is usually a superior format for map images.

exportToPDF (out_pdf, {resolution})

exportToPNG (out_png, {resolution}, {world_file}, {color_mode}, {embed_color_profile})

PNG is a versatile raster format that can display in web browsers.
panToExtent (extent)

This method is perfect for situations where the MapFrame scale should not change but the location should. Rather than setting the extent and then having to reset the scale each time, panToExtent maintains the scale and centers the current map frame on the new extent.
zoomToAllLayers ({selection_only}, {symbolized_extent})
If zoomToAllLayers is used on a MapFrame with a global or local scene, the result will be a planimetric view.
Code sample
The following script will set the extent of a map frame named Yosemite National Park on a layout named Main Attractions at Yosemite National Park to match the extent of a layer named Ranger Stations.
import arcpy
aprx = arcpy.mp.ArcGISProject(r"C:\Projects\YosemiteNP\Yosemite.aprx")
m = aprx.listMaps("Yose*")[0]
lyr = m.listLayers("Ranger Stations")[0]
lyt = aprx.listLayouts("Main Attr*")[0]
mf = lyt.listElements("mapframe_element", "Yosemite National Park")[0]
mf.camera.setExtent(mf.getLayerExtent(lyr, False, True))
aprx.saveACopy(r"C:\Projects\YosemiteNP\Yosemite_Updated.aprx")
del aprx
The following script demonstrates how to change a map frame that is referencing a 2D map to reference a 3D map. First, the script imports two documents into a blank project. Next, it references the appropriate maps and map frames. Finally, it changes the Camera type property to GLOBAL before changing the map property from a 2D map to a 3D map.
import arcpy
aprx = arcpy.mp.ArcGISProject(r"C:\Projects\Blank.aprx")

#Import documents into project
aprx.importDocument(r"C:\Projects\YosemiteNP\Documents\Yosemite.mxd")
aprx.importDocument(r"C:\Projects\YosemiteNP\Documents\Yosemite_3DViews.3dd")

#Reference maps
m_scenic = aprx.listMaps("Globe layers")[0]

#Reference Layout and map frames
lyt = aprx.listLayouts()[0]
mf_inset1 = lyt.listElements("MapFrame_Element", "Inset1")[0]
mf_inset2 = lyt.listElements("MapFrame_Element", "Inset2")[0]

#Convert inset maps into Globe Views
mf_inset1.camera.type = "GLOBAL"
mf_inset1.map = m_scenic
mf_inset2.camera.type = "GLOBAL"
mf_inset2.map = m_scenic

aprx.saveACopy(r"C:\Projects\YosemiteNP\Yosemite.aprx")
jw schultz writes:
> 't exclusive to the /proc/*/exe. It applies to all symlinks in
> /proc/$pid.

Guessing:

1. readlink and follow permissions were not distinct
2. symlink following in /proc wasn't done normally
3. therefore, readlink implied access to the target's data

Rather than fix #1 or #2, readlink() got restricted.

> As near as i can tell it seems to be a
> functional-equivalency carryover from 2.2. It isn't causing
> much harm but i do wonder if this is intentional and if so,
> why. I'm at a loss to see why refusing to allow non-owners
> to identify a process's cwd, exe, and root would be
> desireable. The only other things we refuse are mem, fd/
> and eviron, the reasons for which are obvious and the
> restrictions are per-file rather than as a class.

Being able to readlink() in the fd directory is much less
revealing than the content of the maps file. IMHO both of
them should be restricted, but the "maps" file matters more.
Just look at this:

    cat /proc/1/maps

This all looks completely mixed up. Users SHOULD NOT be able
to read /proc/1/maps, but SHOULD be able to readlink at least
any /proc/1/* symlink that has meaning in the current namespace.
(so not if the observer is in a chroot environment)
ICEnroll4::EnumAlgs method
[This method is no longer available for use as of Windows Server 2008 and Windows Vista.]
The EnumAlgs method retrieves the IDs of cryptographic algorithms in a given algorithm class that are supported by the current cryptographic service provider (CSP). This method was first defined in the ICEnroll3 interface.
Syntax
Parameters
- dwIndex [in]
Specifies the ordinal position of the algorithm whose ID will be retrieved. Specify zero for the first algorithm.
- algClass [in]
A cryptographic algorithm class. The IDs returned by this method will be in the specified class. Specify one of the following:
- ALG_CLASS_HASH
- ALG_CLASS_KEY_EXCHANGE
- ALG_CLASS_MSG_ENCRYPT
- ALG_CLASS_DATA_ENCRYPT
- ALG_CLASS_SIGNATURE
- pdwAlgID [out]
A pointer to a variable to receive a cryptographic algorithm ID that is supported by the current CSP.
Return value
C++
The return value is an HRESULT. A value of S_OK indicates success. When there are no more algorithms to enumerate, the value ERROR_NO_MORE_ITEMS is returned.
VB
A cryptographic algorithm ID which is supported by the current CSP. When there are no more algorithms to enumerate, the value ERROR_NO_MORE_ITEMS is returned.
Remarks
For algorithm ID and class constants used by this method, see Wincrypt.h.
Examples
#include <windows.h>
#include <stdio.h>
#include <Xenroll.h>

DWORD dwAlgID;
DWORD dwIndex;
BSTR bstrAlgName = NULL;
HRESULT hr, hr2;

// Loop through the AlgIDs.
dwIndex = 0;
while ( TRUE )
{
    // Enumerate the alg IDs for a specific class.
    hr = pEnroll->EnumAlgs(dwIndex, ALG_CLASS_SIGNATURE, &dwAlgID);
    if ( S_OK != hr )
    {
        break;
    }

    // Do something with the AlgID.
    // For example, retrieve the corresponding name.
    hr2 = pEnroll->GetAlgName( dwAlgID, &bstrAlgName );
    if ( FAILED( hr2 ) )
        printf("Failed GetAlgName [%x]\n", hr2);
    else
        printf("AlgID: %d Name: %S\n", dwAlgID, bstrAlgName );

    // Reuse the BSTR variable in the next iteration.
    if ( NULL != bstrAlgName )
    {
        SysFreeString( bstrAlgName );
        bstrAlgName = NULL;
    }

    // Increment the index.
    dwIndex++;
}
When implementing DNS on your network, you need to choose at least one server to be responsible for maintaining your domain. This is referred to as your primary name server, and it gets all the information about the zones it is responsible for from local files. Any changes you make to your domain are made on this server.
Many networks also have at least one more server as a backup, or secondary name server. If something happens to your primary server, this machine can continue to service requests. The secondary server gets its information from the primary server's zone file. When this exchange of information takes place, it is referred to as a zone transfer.
A third type of server is called a caching-only server. A cache is part of a computer's memory that keeps frequently requested data ready to be accessed. As a caching-only server, it responds to queries from clients on the local network for name resolution requests. It queries other DNS servers for information about domains and computers that offer services such as Web and FTP. When it receives information from other DNS servers, it stores that information in its cache in case a request for that information is made again.
Caching-only servers are used by client computers on the local network to resolve names. Other DNS servers on the Internet will not know about them and therefore will not query them. This is desirable if you want to distribute the load your servers are put under. A caching-only server is also simple to maintain, if for instance you have a remote site where client computers need name resolution services and nothing more.
The cache is preconfigured with the IP addresses of nine root-level DNS servers. If this computer has access to the Internet via a router, it is ready to work. Client computers could include the IP address of this DNS server in their search order list, and this DNS server would begin to service requests by contacting other DNS servers and automatically adding entries to its cache.
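The answer-from-cache-or-ask-upstream behavior can be sketched with a toy resolver. Everything here — the names, the fixed one-hour TTL, the stubbed upstream lookup — is illustrative, not a real DNS implementation:

```javascript
// Toy caching resolver: answer from the cache while an entry is fresh,
// otherwise consult the (stubbed) upstream server and cache the result.
function makeCache(upstreamLookup, now) {
  var cache = {};
  return function resolve(name) {
    var entry = cache[name];
    if (entry && entry.expires > now()) {
      return entry.address;             // cache hit
    }
    var address = upstreamLookup(name); // cache miss: query another DNS server
    cache[name] = { address: address, expires: now() + 3600 * 1000 };
    return address;
  };
}

var queries = 0;
var resolve = makeCache(function (name) { queries++; return "192.0.2.1"; },
                        Date.now);
resolve("www.example.com");
resolve("www.example.com"); // second lookup is served from the cache
console.log(queries); // → 1
```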
By the Way
DNS must be implemented as a service or daemon running on the DNS server machine. Windows servers have a DNS service, though some Microsoft admins prefer to use third-party DNS implementations. The Unix world has a number of DNS implementation options, but the most popular choice is Berkeley Internet Name Domain (BIND).
A group of DNS hosts in a collective configuration with a common set of DNS servers is called a zone. On simple networks, a zone might represent a complete DNS domain. For instance, the domain punyisp.com might be treated as a single zone for purposes of DNS configuration. On more complex networks, the DNS configuration for a subdomain is sometimes delegated to another zone that serves the subdomain. Zone delegation lets administrators with more immediate knowledge of a subnetwork manage the DNS configuration for that subnetwork. For instance, the DNS administrators for the domain cocacola.com might delegate the DNS configuration of the subdomain dallas.cocacola.com to a zone controlled by the DNS administrators in the Dallas office, who have closer watch on hosts in dallas.cocacola.com.
You might ask, "What's the difference between a zone and a domain?" It is important to note that, aside from the subtle semantic difference (a domain is a subdivision of the namespace and a zone is a collection of hosts), the concepts of a zone and a domain are not exactly parallel. As you read this section, keep the following facts in mind:
Membership in a subdomain implies membership in the parent domain. For instance, a host in dallas.cocacola.com is also part of cocacola.com. By contrast, if the zone for dallas.cocacola.com is delegated, a host in dallas.cocacola.com is not part of the cocacola.com zone.
If a subdomain is not specifically delegated, it does not require a separate zone and is simply included with the zone file for the parent domain.
The details of how to delegate a DNS zone depend upon the DNS server application. For now, the important thing to remember is that a zone represents a collective configuration for a group of DNS servers and hosts, and DNS administrators can optionally delegate portions of the namespace to other zones for administrative efficiency.
As the previous section stated, a DNS zone is an administrative unit representing a collection of computers inhabiting a portion of the DNS namespace. The DNS configuration for a zone is stored in a zone file. DNS servers refer to the information in the zone file when responding to queries and initiating requests. A zone file is a text file with a standardized structure. The contents of the zone file consists of multiple resource records. A resource record is a one-line statement providing a chunk of useful information about the DNS configuration. Some common types of resource records include the following:
SOA—
SOA stands for Start of Authority. The SOA record designates the authoritative name server for the zone.
NS—
NS stands for Name Server. The NS record designates a name server for the zone. A zone may have several name servers (and, hence, several NS records) but only one SOA record for the authoritative name server.
A—
An A record maps a DNS name to an IP address.
PTR—
A PTR record maps an IP address to a DNS name.
CNAME—
CNAME is short for canonical name. A CNAME record maps an alias to the actual hostname represented by an A record.
Thus, the zone file tells the DNS server:
The authoritative DNS server for the zone
The DNS servers (authoritative and non-authoritative) in the zone
The DNS-name-to-IP-address mappings for hosts within the zone
Aliases (alternative names) for hosts within the zone
Other resource record types provide information on topics such as mail servers (MX records), IP-to-DNS-name mappings (PTR records), and well-known services (WKS records). A sample zone file looks something like this:
@ IN SOA boris.cocacola.com. hostmaster.cocacola.com. (
201.9 ; serial number incremented with each
; file update
;
3600 ; refresh time (in seconds)
1800 ; retry time (in seconds)
4000000 ; expiration time (in weeks)
3600) ; minimum TTL
IN NS horace.cocacola.com.
IN NS boris.cocacola.com.
;
; Host to IP address mappings
;
localhost IN A 127.0.0.1
chuck IN A 181.21.23.4
amy IN A 181.21.23.5
darrah IN A 181.21.23.6
joe IN A 181.21.23.7
bill IN A 181.21.23.8
;
; Aliases
;
ap IN CNAME amy
db IN CNAME darrah
bu IN CNAME bill
Note that the SOA record includes several parameters governing the process of updating the secondary DNS servers with the master copy of the zone data on the primary DNS server. In addition to a serial number representing the version number of the zone file itself, there are parameters that represent the following:
Refresh time—
The time interval at which secondary DNS servers should query the primary server for an update of zone information.
Retry time—
The time to wait before trying again if a zone update is unsuccessful.
Expiration time—
The upper limit for how long the secondary name servers should retain a record without a refresh.
Minimum Time-to-Live (TTL)—
The default time-to-live for exported zone records.
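At each refresh interval, the decision a secondary server makes boils down to comparing its cached serial number against the primary's. A sketch — real servers use serial-number arithmetic with wraparound, so the plain ">" below is a deliberate simplification:

```javascript
// Simplified refresh check: transfer the zone only when the primary
// reports a newer serial number than the locally cached copy.
function needsZoneTransfer(localSerial, primarySerial) {
  return primarySerial > localSerial;
}

console.log(needsZoneTransfer(201, 202)); // → true: zone changed, pull a copy
console.log(needsZoneTransfer(202, 202)); // → false: cached zone is current
```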
The rightmost term of the SOA record is actually the email address for the person with responsibility for the zone. Replace the first period with an @ sign to form the email address.
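That first-period-to-@ conversion is mechanical enough to sketch in a couple of lines of JavaScript (illustrative only):

```javascript
// Turn an SOA responsible-person name into an email address: drop the
// trailing root dot, then replace only the first remaining dot with "@".
function soaToEmail(rname) {
  var name = rname.replace(/\.$/, ""); // "hostmaster.cocacola.com." → no root dot
  return name.replace(".", "@");       // String.replace swaps the first match only
}

console.log(soaToEmail("hostmaster.cocacola.com.")); // → hostmaster@cocacola.com
```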
The preceding example is, of course, the simplest of zone files. Larger files might include hundreds of address records and other less common record types representing other aspects of the configuration. The name of the zone file, and in some cases the format, can vary depending upon the DNS server software. This example is based on the popular BIND (Berkeley Internet Name Domain), the most common name server on the Internet.
It is worth remembering, also, that the honored practice of configuring services by manipulating text files is fading from favor. Many DNS server applications provide a user interface that hides the details of the zone file from the reader.
Dynamic DNS (described later in this chapter) provides yet another layer of separation from the details of the configuration.
Another type of zone file necessary for DNS name resolution is the reverse lookup file. This file is used when a client provides an IP address and requests the corresponding hostname. In IP addresses, the leftmost portion is general, and the rightmost portion is specific. However, in domain names the opposite is true: The left portion is specific, and the right portion, such as com or edu, is general. To create a reverse lookup zone file you must reverse the order of the network address so the general and specific portions follow the same pattern used within domain names. For example, the zone for the 192.59.66.0 network would have the name 66.59.192.in-addr.arpa.
Every resource record in this file always has the host ID followed by .in-addr.arpa. The in-addr portion stands for inverse address, and the arpa portion is another top level domain and is a holdover from the original ARPAnet that preceded the Internet.
Class A and B networks have shorter reverse lookup zone names due to the fact that they contain fewer network bits. For example, in the Class A network 43.0.0.0, the reverse lookup zone must have the name 43.in-addr.arpa. In the Class B network 172.58.0.0, the reverse lookup zone must have the name 58.172.in-addr.arpa.
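The reversal rule is simple enough to sketch in a few lines of JavaScript (the function name is illustrative):

```javascript
// Build the in-addr.arpa reverse-lookup zone name from the network octets
// of an address: host octets are omitted, and the order is reversed.
function reverseZoneName(networkOctets) {
  return networkOctets.slice().reverse().join(".") + ".in-addr.arpa";
}

console.log(reverseZoneName([192, 59, 66])); // Class C → 66.59.192.in-addr.arpa
console.log(reverseZoneName([172, 58]));     // Class B → 58.172.in-addr.arpa
console.log(reverseZoneName([43]));          // Class A → 43.in-addr.arpa
```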
You can use any network utility that supports name resolution to test whether your network is resolving names properly. A Web browser, an FTP client, a Telnet client, or the Ping utility can tell you whether your computer is succeeding with name resolution. If you can connect to a resource using its IP address but you cannot connect to the resource using a hostname or FQDN, there is a good chance the problem is a name resolution problem.
If your computer uses a hosts file and also uses DNS, keep in mind that you need to disable or rename the hosts file temporarily when you test DNS. Otherwise it will not be easy to determine whether the name was resolved through the hosts file or DNS. The following section describes how to use Ping to test DNS. A later section describes the NSLookup utility, which provides a number of DNS configuration and troubleshooting features.
The simple and useful Ping utility is a good candidate for testing your DNS configuration. Ping sends a signal to another computer and waits for a reply. If a reply arrives, you know that the two computers are connected. If you know the IP address of a remote computer, you can ping the computer by IP address:
ping 198.1.14.2
If this command succeeds, you know your computer can connect to the remote computer by IP address.
Now try to ping the remote computer by DNS name:
ping williepc.remotenet.com
If you can ping the remote computer by IP address but not by DNS name, you might have a name resolution problem. If you can ping by DNS name, name resolution is working properly.
You'll learn more about Ping in Hour 13, "Connectivity Utilities."
The NSLookup utility enables you to query DNS servers and view information such as their resource records, and it is useful when troubleshooting DNS problems. The NSLookup utility operates in two modes:
Batch mode—
In Batch mode, you start NSLookup and provide input parameters. NSLookup performs the functions requested by the input parameters, displays the results, and then terminates.
Interactive mode—
In Interactive mode, you start NSLookup without supplying input parameters. NSLookup then prompts you for parameters. When you enter the parameters, NSLookup performs the requested actions, displays the results, returns to a prompt, and waits for the next set of parameters. Most administrators use Interactive mode because it is more convenient when performing a series of actions.
NSLookup has an extensive list of options. A few basic options covered here give you a feel for how NSLookup works.
To run NSLookup in Interactive mode, enter the name nslookup from a command prompt.
As shown in Figure 11.7, each NSLookup response starts with the name and IP address of the DNS server that NSLookup is currently using, for example
Default Server: dnsserver.Lastingimpressions.com
Address: 192.59.66.200
>
The chevron character (>) is NSLookup's prompt.
NSLookup has about 15 settings that you can change to affect how NSLookup operates. A few of the most commonly used settings are listed here:
? and help—
These commands are used to view a list of all NSLookup commands.
server—
This command specifies which DNS server to query.
ls—
This command is used to list the names in a domain, as shown near the middle of Figure 11.7.
ls -a—
This command lists canonical names and aliases in a domain, as shown in Figure 11.7.
ls -d—
This command lists all resource records, as shown near the bottom of Figure 11.7.
set all—
This command displays the current value of all settings.
NSLookup is not restricted to viewing information from your DNS server; you can view information from virtually any DNS server. If you have an Internet service provider (ISP), you should have IP addresses for at least two DNS servers.
NSLookup can use either IP addresses or domain names. You can switch NSLookup to another DNS server by entering the server command followed by either the IP address or the FQDN. For instance, to connect NSLookup to the E root server, you can enter server 192.203.230.10. Then you can enter virtually any domain name, such as samspublishing.com, and see the IP addresses registered for that domain name. Be aware that most commercial DNS servers and root servers will refuse ls commands because they can generate a tremendous amount of traffic and might pose a security leak. | http://www.yaldex.com/tcp_ip/0672325659_ch11lev1sec5.html | CC-MAIN-2017-17 | refinedweb | 2,373 | 63.59 |
#include <sp_instr.h>
sp_instr_set_case_expr is used in the "simple CASE" implementation to evaluate and store the CASE-expression in the runtime context.
Update all instructions with the given label in the backpatch list to the specified instruction pointer.
Reimplemented from sp_lex_branch_instr.
Execute core function of instruction after all preparations (e.g. setting of proper LEX, saving part of the thread context).
Implements sp_lex_instr.
Callback function which is called after the statement query string is successfully parsed, and the thread context has not been switched to the outer context. The thread context contains new LEX-object corresponding to the parsed query string.
Reimplemented from sp_lex_instr.
Mark this instruction as reachable during optimization and return the index to the next instruction. Jump instruction will add their destination to the leads list.
Reimplemented from sp_lex_branch_instr.
Inform the instruction that it has been moved during optimization. Most instructions will simply update their index, but jump instructions must also take care of their destination pointers. Forward jumps get pushed to the backpatch list 'ibp'.
Reimplemented from sp_lex_branch_instr.
Update the destination; used by the SP-instruction-optimizer.
Reimplemented from sp_lex_branch_instr. | http://mingxinglai.com/mysql56-annotation/classsp__instr__set__case__expr.html | CC-MAIN-2019-18 | refinedweb | 184 | 52.36 |
User talk:Noluv4u
From Uncyclopedia, the content-free encyclopedia
edit Welcome!
Hello, Noluv4u, and welcome to Uncyclopedia! Thank you for registering an account.oluv4 18:27, October 16, 2009 (UTC)
edit Oreos
Hey, I just noticed your changes to HowTo:Subsist on an all-OREO® diet for the rest of your life, including the addition of a new section about Oriole cookies and a move to the Oriole cookie namespace. While, in most cases, it would make sense to move an article from one location to another if its name is misleading or no longer quantitative of whatever the article is about, moving it simply because an out-of-place sub-section was added is not as legitimate a reason. If you want to make an article about Oriole cookies, it would make more sense to start one from scratch, rather than altering an article which is about a completely different topic.
I reverted your changes and simply made Oriole cookies into its own article, however, it probably isn't long enough to hold its own. I added a stub tag, but you might consider expanding the article when you have the time. User:KneeChee27/sig2 21:52, November 14, 2009 (UTC)
edit I just realized...
When your username is phonetically spoken aloud, it sounds like "no lube for you." I believe we should make this a new policy on Uncyclopedia.
Also, good job categorizing your template. Love,
It is supposed to be "no love for you." My wife (also an Uncylopedia editor) and I joke about this phrase a lot, but with our names.
- I copied the text from another template, and I am working on modifying it. I have done this a lot on Wikipedia when making templates. Noluv4u 00:45, September 17, 2010 (UTC)
- Your wife edits Uncyc, too? Women? On Uncyc? GASP. Love,
edit Yom Kippur
Nice start on the article, made me laugh a few times. I hope you'll develop it:52, September 18, 2010 (UTC)
- Indeed I plan to. I started working on it just hours before Yom Kippur (intentionally), then Yom Kippur came. Now, Yom Kippur is over, and I plan to continue. Noluv4u 03:43, September 19, 2010 (UTC)
- Excellent, I'll look forward to reading it once it's done. Shavua to:02, September 19, 2010 (UTC)
edit Gluten
Are you the same guy who started this article before remembering your old user name? I had reverted the first attempt to add this to {{Mommy's medicine}}, as it didn't seem to be going anywhere. Your collaborator is a Gluten Nazi and I would let her take the lead for a while; or at the least, don't lard the article up with initial quotations, cheers! Spıke Ѧ 00:08 19-May-14 | http://uncyclopedia.wikia.com/wiki/User_talk:Noluv4u | CC-MAIN-2015-18 | refinedweb | 464 | 62.88 |
April 15, 2019 Single Round Match 755 Editorials
SRM 755 was held on April 15, 2019. Thanks to misof for setting the problems and writing the editorials.
OneHandScheduling
We need to determine whether the time intervals given in the input are disjoint. If they are, Misof can do everything, if they aren’t, he cannot.
One possible solution is to look at each pair of intervals and check whether they overlap. Depending on the implementation, this may be tricky, as there are multiple different ways in which two intervals may overlap. Probably the easiest way is the following: suppose we have two closed intervals [a,b] and [c,d]. The earliest time at which both of them started is max(a,c), the latest time at which neither has ended is min(b,d). Clearly, there is an overlap if and only if max(a,c) <= min(b,d).
Another, less error-prone solution is to simply look at all integer times between 0 and 10^6, inclusive. If each of them belongs to at most one of the given intervals, they obviously have to be disjoint, and if you find a time that belongs to two or more intervals, you know that the intervals are not disjoint.
public String checkSchedule(int[] tStart, int[] tEnd) { for (int t = 0; t <= 1 _000_000; ++t) { int inside = 0; for (int i = 0; i < tStart.length; ++i) if (tStart[i] <= t && t <= tEnd[i]) ++inside; } if (inside > 1) return "impossible"; } return "possible"; }
OneHandSort
There are many different ways to sort the given sequence. The simplest one is probably the one where you consider each slot on the shelf from the left to the right. If the slot contains the correct element, you do nothing. Otherwise, you put the current element from that slot to slot N, then you move the correct element to the current (now empty) slot, and then you return the arbitrary element from slot N to the empty slot you just created.
def sortShelf(target): N = len(target) target.append(-1) # add the empty slot answer = [] for n in range(N): if n == target[n]: continue correct = target.index(n) answer.append( n ) answer.append( correct ) answer.append( N ) target[correct] = target[n] target[n] = n return answer
OneHandSort2
In this problem we have the same setting as in OneHandSort, but there are two notable differences. First, we need to actually generate a huge input, and second, we need to solve it in an optimal number of moves.
In order to generate the input, we need an efficient way to answer the query “what is the smallest unused number that is greater than or equal to x?”. In order to do this, we can use an ordered set data structure, such as “set” in C++ or “TreeSet” in Java. We will store all unused numbers in this data structure, and remove them as we assign them to the input array. Whenever we get a query, we simply use the corresponding method of the data structure — e.g., “lower_bound” in C++ or “ceiling” in Java.
In order to solve the input optimally, note that what we have is a permutation of numbers 0 through N-1. For each permutation there is a unique way to split that permutation into cycles. (E.g., if element in slot 3 wants to go to slot 7, element in slot 7 wants to go to slot 42, and element in slot 42 wants to go to slot 3, this is one cycle.)
If we have a cycle of length 1, we don’t have to do anything: this is an element that is in its correct slot.
For any other cycle, note that when we move one of its elements for the first time, we cannot put it into the correct slot, as it is currently occupied. Hence, for a cycle of length x we need at least x+1 moves.
On the other hand, it’s easy to solve a cycle of length x in exactly x+1 moves: just move any one of its elements into the empty slot N, then do exactly x-1 moves in which you move the element that belongs to the currently empty slot into that slot (thereby freeing a slot for another element) and finally return the element from slot N into its correct slot that is now empty.
In the above example of a cycle, we would move “element 7” (meaning “the element that belongs into the slot 7”) from slot 3 to slot N, then element 3 from slot 42 to slot 3, then element 42 from slot 7 to slot 42, and finally element 7 from slot N to slot 7.
Hence, once we have the permutation, we split it into cycles and then compute the answer by looking at their lengths.
If we use the algorithm described above, the permutation can be generated in O(n log n) time. We can easily decompose a permutation into cycles in O(n) time, so the total time complexity remains O(n log n).
public int minMoves(int N, int[] targetPrefix, int a, int b) { TreeSet<Integer> unused = new TreeSet<Integer>(); for (int n=0; n<N; ++n) unused.add(n); int[] target = new int[N]; for (int n=0; n<targetPrefix.length; ++n) { target[n] = targetPrefix[n]; unused.remove( target[n] ); } for (int n=targetPrefix.length; n<N; ++n) { long nextll = target[n-1]; nextll = (nextll * a + b) % N; int next = (int)nextll; Integer tmp = unused.ceiling(next); if (tmp == null) tmp = unused.ceiling(0); target[n] = tmp; unused.remove( target[n] ); } boolean[] seen = new boolean[N]; for (int n=0; n<N; ++n) seen[n] = false; int answer = 0; for (int n=0; n<N; ++n) if (!seen[n]) { int cycleLength = 1; seen[n] = true; int where = n; while (true) { where = target[where]; if (seen[where]) break; seen[where] = true; ++cycleLength; } if (cycleLength > 1) answer += cycleLength + 1; } return answer; }
OneHandRoadPainting
This problem is solvable greedily — but note that not all greedy solutions work.
A correct solution can be made using the following observations:
- There is always an optimal solution that consists of multiple trips such that in each trip Misof begins in his home, takes the paint, walks somewhere, turns around, and on his way home he paints some segments. (It doesn’t make sense to walk across the same section twice in the same direction. Whatever you do during the second pass you could do during the first pass. It doesn’t matter whether you paint on your way there or on your way back.)
- Consider the point farthest away from the home that needs to be painted. This point has to be painted in some trip, which means that it has to be visited in some trip. Now comes the greedy observation: in that trip, we can paint the paintPerBrush meters of the road that need painting and are farthest away from the home.
In order to prove the greedy observation, we can do a switching argument. Suppose you have an optimal solution S. If it has a trip that matches our greedy observation, we are done. Otherwise, consider the trip that visits the farthest point in S. While this trip uses fewer paint than paintPerBrush, take some painting away from some other trip and assign it to this trip. Clearly this doesn’t change the value of the solution, so we still have an optimal solution, but now the trip that visits the farthest point uses all available paint. Now, if it still doesn’t match our greedy observation, there has to be a segment (or a collection of them) closer to home that we do paint during this trip, and a segment (or a collection of them) farther from home that we don’t. Find any trip that paints anything in the second part and swap it for something in the first part. This can never worsen our solution, because our special trip remains of the same length, and on the other trip we swapped a segment for some other segment closer to home. Thus, we can change any optimal solution into one where our greedy observation works.
All that remains is to implement the greedy strategy in an efficient way. The only catch is that we cannot simulate one trip at a time: in the worst case, there can be up to 2*10^9 trips.
Thus, we do the following. Look at the last segment that needs to be painted. If it requires more than one trip using the greedy strategy, do all the trips at once, except for the last one. More precisely, we will do (segmentLength div paintPerBrush) trips. The total distances traveled during these trips form an arithmetic series and we can sum them up easily using a formula. This leaves us with the case where the length of the last segment is strictly smaller than paintPerBrush. We handle it by simulating one trip. In this trip we consider the segments from the back to the front. As long as we have enough paint to paint the current segment, we paint it and forget about it. Eventually, we will either run out of segments (in which case we are done) or we will run out of paint (in which case we’ll use the last remaining paint on our brush to paint the tail of the currently active segment).
This gives us a solution that runs in O(number of segments).
public long fastest(int[] dStart, int[] dEnd, int paintPerBrush) { long answer = 0; int active = dStart.length - 1; while (active >= 0) { long fullRuns = (dEnd[active] - dStart[active]) / paintPerBrush; if (fullRuns > 0) { answer += (2L*dEnd[active] - (fullRuns-1)*paintPerBrush) * fullRuns; dEnd[active] -= fullRuns * paintPerBrush; } if (dStart[active] == dEnd[active]) { --active; continue; } answer += 2L*dEnd[active]; long paintRemaining = paintPerBrush; while (true) { long paintNeeded = dEnd[active] - dStart[active]; if (paintNeeded <= paintRemaining) { paintRemaining -= paintNeeded; --active; if (active == -1) break; } else { dEnd[active] -= paintRemaining; break; } } if (active == -1) break; } return answer; }
DejaVu
The first thing we do is that we go through the movie and build a data structure that maps each scene to the indices in the movie at which it occurs.
Consider an array A as long as the movie. Set all its elements to 0. Then, for each scene, set the element that corresponds to its second occurrence (if it exists) to +1, and its third occurrence (if it exists) to -1. The key observation is that the number of deja vus in the first X scenes of the movie is the sum of the first X elements of this sequence.
If we want to start watching at the beginning and we are looking for the optimal moment when to stop watching, we want to find the largest among all the prefix sums of the array A. We could answer this easily in linear time, but instead of doing that we will use a data structure. The reason why we do so will become apparent in a moment.
In particular, the data structure we’ll use will be a simple interval tree (a.k.a. range tree or tournament tree) built on top of the array A. Each inner node of this tree represents some contiguous segment of A. In each inner node we will store two values: the sum of that segment, and the largest of all prefix sums for that segment only.
These values are easy to propagate along the tree. For any inner node, its sum is the sum of the sums of its children, and its largest prefix sum is either the largest prefix sum of the left child, or the sum of the left child plus the largest prefix sum of the right child.
In order to find the best end for the movie, we simply look into the root of the tree and report the maximum prefix sum found there.
Why did we choose this data structure? Well, duh, because it’s easy to update. Now that we know the optimal end, consider what happens if we shift the beginning of the movie one scene to the right. That is, the scene M[0] stopped being in the movie. What do we have to change in the array A? It turns out that we only need to update three cells: the ones corresponding to the second, third, and fourth occurrence of that same scene in the movie. Now they become the first, second, and third occurrence, and as such their contribution to the number of deja vus changes from +1, -1, 0 to 0, +1, -1. Thus, we can make these three updates (or fewer, if the scene doesn’t have that many occurrences) and after each of them we propagate the changes up the interval tree.
In this way, we can implement the operation “discard the first scene and recompute the best end” in O(log n) time, making the full solution run in O(n log n).
int[][] sum; int[][] max_psum; void update(int level, int index) { sum[level][index] = sum[level+1][2*index] + sum[level+1][2*index+1]; max_psum[level][index] = Math.max( max_psum[level+1][2*index], sum[level+1][2*index] + max_psum[level+1][2*index+1] ); } void iset(int level, int index, int value) { sum[level][index] = value; max_psum[level][index] = value; while (level > 0) { --level; index /= 2; update(level,index); } } public int mostDejaVus(int N, int seed, int R) { // omitted: generate the array M as described in the statement int depth = 18; sum = new int[depth+1][]; for (int l=0; l<=depth; ++l) sum[l] = new int[1<<l]; max_psum = new int[depth+1][]; for (int l=0; l<=depth; ++l) max_psum[l] = new int[1<<l]; HashMap<Integer, ArrayList<Integer> > occurrences = new HashMap<Integer, ArrayList<Integer> >(); for (int n=0; n<N; ++n) { if (!occurrences.containsKey( M[n] )) { occurrences.put( M[n], new ArrayList<Integer>() ); } occurrences.get( M[n] ).add(n); } HashMap<Integer, Integer> offsets = new HashMap<Integer, Integer>(); for (int scene : occurrences.keySet()) offsets.put( scene, 0 ); for (int n=0; n<(1<<depth); ++n) iset(depth,n,0); for (int scene : occurrences.keySet()) { if (occurrences.get(scene).size() >= 2) { iset(depth,occurrences.get(scene).get(1),+1); } if (occurrences.get(scene).size() >= 3) { iset(depth,occurrences.get(scene).get(2),-1); } } int answer = 0; for (int start=0; start<N; ++start) { if (max_psum[0][0] > answer) answer = max_psum[0][0]; int scene = M[start]; int off = offsets.get(scene); if (occurrences.get(scene).size() >= 2+off) { iset(depth,occurrences.get(scene).get(1+off),0); } if (occurrences.get(scene).size() >= 3+off) { iset(depth,occurrences.get(scene).get(2+off),+1); } if (occurrences.get(scene).size() >= 4+off) { iset(depth,occurrences.get(scene).get(3+off),-1); } offsets.put(scene,off+1); } return answer; }
misof | https://www.topcoder.com/single-round-match-755-editorials/ | CC-MAIN-2019-43 | refinedweb | 2,477 | 69.62 |
Django CMS polls plugin
Project Description
Why?
There is no established Polls plugin for DjangoCMS. Yes, cmsplugin-poll exists, but it’s latest update was at 2013 and looks like it is abandoned. Personaly I want a simple plugin, that is up to date and support latest Django and DjangoCMS. So this one could be at the spot.
Requirements
It works fine and tested under Python 2.7. The following libraries are required
- Django >= 1.5
- django-cms >= 3.0 (we recommend to use Django CMS 3.0 and higher, contact us if you need prior CMS versions supports and have some issues)
Installation
$ pip install cmsplugin-polls
Update your settings.py
INSTALLED_APPS = [ # django contrib and django cms apps 'cmsplugin_polls', ]
Do not forget to include URLs to urls.py (namespace is important)
urlpatterns = patterns('', url(r'^polls/', include('cmsplugin_polls.urls', namespace='polls')), url(r'^', include('cms.urls')), )
And to migrate your database
django-admin.py migrate captcha cmsplugin_polls
Roadmap
- AJAX submiting out-of-box
- Python 3 support
Changelog
The changelog can be found at repo’s release notes
Contributing
Fork the repo, create a feature branch then send me pull request. Feel free to create new issues or contact me via email.
Translation
You could also help me to translate cmsplugin-polls to your native language with Transifex
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/cmsplugin-polls/ | CC-MAIN-2018-17 | refinedweb | 243 | 59.19 |
(This patch series contains revisions of the patches from and a few more.)The following series adds a "cgroup.procs" file to each cgroup that reportsunique tgids rather than pids, which can also be written to for moving allthreads in a threadgroup at once.Patch #5 introduces a new rwsem that must be taken for reading in the fork()path, and patch #6 reveals potential for a race when forking in certainsubsystems before the subsystem's attach() function is called, the naivesolution to which would be holding on to the fork rwsem until after the attachloop. Suggestions for alternative approaches or tweaks to the current approachare welcome; one potential fix is to make the fork rwsem per-threadgroup,which will involve adding a field to task_struct, thereby drastically reducingcontention when a write to the procs file is in progress.This patch series was written at the same time as Li Zefan's pid namespacebugfix patch (from ), and contains a similarbut finer-grained fix for the same bug. These patches can either be rewrittento be applied on top of Li's patch, or be applied as they are with Li's patchreversed.---Ben Blum (6): Lets ss->can_attach and ss->attach do whole threadgroups at a time Makes procs file writable to move all threads by tgid at once Changes css_set freeing mechanism to be under RCU Quick vmalloc vs kmalloc fix to the case where array size is too large Ensures correct concurrent opening/reading of pidlists across pid namespaces Adds a read-only "procs" file similar to "tasks" that shows only unique tgids Documentation/cgroups/cgroups.txt | 12 - include/linux/cgroup.h | 58 ++- kernel/cgroup.c | 816 ++++++++++++++++++++++++++++++------- kernel/cgroup_freezer.c | 15 + kernel/cpuset.c | 65 ++- kernel/fork.c | 2 kernel/ns_cgroup.c | 16 + kernel/sched.c | 37 ++ mm/memcontrol.c | 3 security/device_cgroup.c | 3 10 files changed, 843 insertions(+), 184 deletions(-) | https://lkml.org/lkml/2009/7/23/330 | CC-MAIN-2015-32 | refinedweb | 313 | 64.51 |
A Python implementation of [JSON Web Token draft 01]().
This is Mozilla’s fork of [PyJWT]() which adds RSA algorithms, fixes some timing attacks, and makes a few other adjustments. It is used in projects such as [webpay]().
Install the module with [pip]() or something similar:
pip install PyJWT-mozilla
This install step will also install/compile [M2Crypto]() so you will need swig for this. You can get it with a package manager like:
brew install swig
Alternatively you can probably find a binary package for M2Crypto with something like this:
sudo apt-get install python-m2crypto
import jwt jwt.encode({“some”: “payload”}, “secret”)
Note the resulting JWT will not be encrypted, but verifiable with a secret key.
jwt.decode(“someJWTstring”, “secret”)
If the secret is wrong, it will raise a jwt.DecodeError telling you as such. You can still get at the payload by setting the verify argument to false.
jwt.decode(“someJWTstring”, verify=False)
The JWT spec supports several algorithms for cryptographic signing. This library currently supports:
Change the algorithm with by setting it in encode:
jwt.encode({“some”: “payload”}, “secret”, “HS512”)
Install the project in a [virtualenv]() (or wherever) by typing this from the root:
python setup.py develop
Run the tests like this:
python tests/test_jwt.py. | https://pypi.org/project/PyJWT-mozilla/ | CC-MAIN-2017-09 | refinedweb | 211 | 56.45 |
Proper definition of embedded jboss
Scott Stark Nov 16, 2007 12:38 PM
So after fumbling around with a proper definition of what embedded JBoss is, it's clear we need to define it in terms of API, configuration, and integration plugin artifacts. We also need to get the current embedded subproject refactored. What are the use cases for which there need to be unit tests in the server codebase?
1. Re: Proper definition of embedded jboss
Bill Burke Nov 19, 2007 1:01 PM (in response to Scott Stark)
Use cases in order of importance (in my opinion):
1. Junit testing "outside" of the application server. This means making it as easy as possible in IDEs to right-click on a set of unit tests and just run them from the IDE without any special IDE plugins.
2. Small lightweight SE apps that can boot quickly and in which the app developer controls directory structure and bootstrapping.
3. Tomcat integration and abstraction. Ability to provide JBoss projects a la carte to Tomcat.
4. App server abstraction. Framework to provide other JBoss projects with an easy, common way to plug into other application servers.
If you look at the embedded project, there's really not much code. A lot of it is to work around old JMX Microcontainer issues, or issues with Tomcat. This codebase should be getting smaller, faster, and simpler over time as we convert older JBoss projects to use the new kernel.
API:
There are only a few exposed APIs. Specifically, a number of ease-of-use wrappers were written around the MainDeployer to provide users with a programmatic API to this service:
org.jboss.embedded.Bootstrap
org.jboss.embedded.DeploymentGroup
There are also VFS APIs that should be exposed and supported, specifically:
org.jboss.virtual.plugins.context.vfs.AssembledContextFactory;
org.jboss.virtual.plugins.context.vfs.AssembledDirectory;
CONFIGURATION:
Configuration should be no different than JBoss AS, except that maybe we don't want to ship with the profile service or management layers. (Also, see refactoring comments below, specifically point #7.)
TESTING:
I also worked on an extension to the JBoss Test framework that could take existing JBoss unit tests and run them within embedded. This worked by subclassing existing JUnit tests and running them with the embedded container. Of course, you couldn't work with tests that required non-jar file formats or special classloading features, as embedded is supposed to work in the environment's classloader. Many of these tests should be converted so that we are sure various JBoss components do not require special app server configurations.
WHAT REFACTORINGS SHOULD BE DONE:
The embedded project should be a very small project. If the rest of JBoss is designed correctly, the embedded project should really just be a packaging and integration project, integration being whatever bridges need to be built to the environments we want to run in.
IMO, most of the refactorings that should be done are not within embedded.
1. Gut the requirement for special protocol handlers for URLs (e.g. "vfsfile:", "resource:"). What sucks about the JDK is that once the URL Handler factory is set, you can never unset it. This required me to hack Tomcat to allow our deployment system to work.
2. For the VFS provide default behavior for when it doesn't recognize a URL protocol. Like, for instance, following the rule of if the URL ends in "/", then abort, if not, treat the VFS root as a JAR.
3. Allow injectable MBeanServer. Currently, we have a hard dependency on the JBoss JMX implementation, mainly because we put JBoss-specific MBeans within private JMX namespaces (like the MBeanRegistry).
4. Move old services over to the new kernel and beans.xml format wherever possible.
5. Improve bootstrap times. Previously 20% of embedded boot time was within all the JAXB shit you guys do. Would be cool if we could precalculate and persist models at build time, rather than at runtime.
6. Refactor Seam so that it is a JBoss Kernel deployment component. This is going to become more and more important as we get to the deployment model we want to get to. That is, a model where you just throw a bunch of shit in your WEB-INF/lib or classpath and everything just works.
7. Seam is an interesting model in that everything is turned on by default (they have to use this model because all their services are defined using annotations). To turn things off you explicitly turn them off within a configuration file. If you want to tweak a service's default configuration, you again have to do this in a specific central configuration file. WHY IS THIS A GOOD THING? Well, if JBoss MC allowed this, the Embedded packaging could be just everything JARred up in one jar that is put on the user's classpath. No exposed configuration files or directory structure. If the user wants to turn off/configure things, they just add a bit of configuration in the areas they are used to tweaking.
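The JDK limitation behind point 1 is easy to demonstrate: URL.setURLStreamHandlerFactory may be called at most once per JVM, and any later attempt (whether by us, by Tomcat, or by anyone else) fails with an Error. A minimal standalone sketch; the no-op factory here is just for illustration (it resolves no protocols, so the JDK keeps falling back to its built-in handlers):

```java
import java.net.URL;

public class FactoryOnce {
    /**
     * Tries to install a URLStreamHandlerFactory twice and reports whether
     * the second attempt failed. The no-op factory handles no protocols;
     * returning null makes the JDK fall back to its built-in handlers.
     */
    public static boolean secondCallFails() {
        try {
            URL.setURLStreamHandlerFactory(protocol -> null);
        } catch (Error alreadySet) {
            // A factory was already installed earlier in this JVM; fine here.
        }
        try {
            URL.setURLStreamHandlerFactory(protocol -> null);
            return false; // would mean the JDK allowed a second factory
        } catch (Error e) {
            return true;  // "factory already defined" -- the slot is write-once
        }
    }

    public static void main(String[] args) {
        // prints: second set attempt failed: true
        System.out.println("second set attempt failed: " + secondCallFails());
    }
}
```

This write-once slot is exactly why a container that grabs the factory for "vfsfile:" and friends can collide with a host environment that also wants to install one.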
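The fallback rule proposed in point 2 is trivial to state in code. This is only a sketch of the heuristic, not actual JBoss VFS API (the class and method names are made up); the demo uses "file:" URLs so it compiles without a custom protocol handler, but the rule is meant for URLs whose protocol the VFS doesn't recognize:

```java
import java.net.MalformedURLException;
import java.net.URL;

public class VfsFallback {
    /**
     * Proposed default for unrecognized URL protocols: a trailing slash
     * means "directory" (give up), anything else is assumed to be a JAR.
     */
    public static boolean treatAsJar(URL url) {
        return !url.getPath().endsWith("/");
    }

    public static void main(String[] args) throws MalformedURLException {
        System.out.println(treatAsJar(new URL("file:/lib/foo.jar"))); // true  -> open as JAR
        System.out.println(treatAsJar(new URL("file:/deploy/")));     // false -> directory, abort
    }
}
```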
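The build-time precomputation idea from point 5 boils down to "parse once, serialize the model, deserialize at boot". A generic sketch of the pattern (the map stands in for whatever metadata model the parse produces; this is not JBossXB/JAXB code):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;

public class PrebuiltModel {
    // Stands in for an expensive parse of XML descriptors into a metadata model.
    public static HashMap<String, String> parseDescriptors() {
        HashMap<String, String> model = new HashMap<>();
        model.put("TransactionManager", "enabled");
        return model;
    }

    public static void main(String[] args) throws Exception {
        File cache = File.createTempFile("model", ".ser");
        cache.deleteOnExit();

        // Build time: do the slow parse once and persist the result.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(cache))) {
            out.writeObject(parseDescriptors());
        }

        // Boot time: load the persisted model instead of re-parsing.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(cache))) {
            @SuppressWarnings("unchecked")
            HashMap<String, String> model = (HashMap<String, String>) in.readObject();
            System.out.println(model.get("TransactionManager")); // prints "enabled"
        }
    }
}
```

Whether plain Java serialization is the right persistence format is a separate question; the point is only that the parsing cost moves from every boot to a single build step.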
INTEGRATION PLUGINS:
* SE (already discussed)
* Junit (already discussed)
* Tomcat. Please read the issues I encountered in the embedded wiki. I got a lot of shit from the Seam division for how I did the Tomcat integration. Some of it was unavoidable (specifically, the URL Handler junk); other parts, specifically the JNDI hacks, could have been avoided, but the consequence would have been changes throughout the JBoss AS codebase, changes I was unwilling to do. The whole idea of embedded is that we shouldn't have to have a whole team dedicated to maintaining it, because it would be a thin packaging around the AS distribution. The less code that's in embedded, the better it is.
* Other app servers. If done correctly, JBoss AS should be reusable in other application server environments. We should be able to use the environment's TM, JNDI, etc. services. But I think this should be a LOW PRIORITY. Given resource constraints, we'd need at least one dedicated person on embedded to be able to make this a reality.
Well, that's all I can think of right now....
2. Re: Proper definition of embedded jboss
Carlo de Wolf Nov 20, 2007 2:46 AM (in response to Scott Stark)
Embedded must have its own codebase. It should not reside in AS trunk. This will allow it to have an independent release/life cycle.
Thus it won't be released with active snapshots (1), we can continue development post AS release (2), and it won't fall on its ass when major refactoring takes place (3).
1: See JBoss Embedded Beta2 release
2: See Embedded EJB 3 RCs
3: See Embedded EJB 3 RC 9
It should then become apparent what it takes for AS to be a component in a consuming product (and thus defining the EAP roadmap).
As for what tests should be in AS codebase: all the SPI requirements put down by Bill (under Refactoring) must be tested before AS is delivered.
3. Re: Proper definition of embedded jboss
Viet Nov 20, 2007 12:21 PM (in response to Scott Stark)
Bill,
if you want to be useful (for the JBoss Portal project), you should provide us something similar to dysoweb based on JBoss MC.
4. Re: Proper definition of embedded jboss
Bill Burke Nov 20, 2007 1:36 PM (in response to Scott Stark)
5. Re: Proper definition of embedded jboss
Viet Nov 20, 2007 1:46 PM (in response to Scott Stark)
I see what you mean; the sentence "This is going to become more and more important as we get to the deployment model we want to get to. That is, a model where you just throw a bunch of shit in your WEB-INF/lib or classpath and everything just works." confused me. I assumed it would work across deployments.
6. Re: Proper definition of embedded jboss
Mark Newton Nov 26, 2007 12:35 PM (in response to Scott Stark)
Hi Bill, Scott,
Having looked at the Embedded JBoss documentation in the wiki, and based on my current work on the JBoss Microcontainer User Guide, I think that we could benefit from a change in name for the project.
Given that Embedded JBoss is all about using JBoss enterprise services and our EJB3 container in different runtime environments (by allowing them to be loaded by different classloaders), how about:
JBoss Reloaded
This carries on the tradition of naming releases after the Matrix and hints at the fact that you are 'reloading' JBoss services through different classloaders. In addition, it starts with the prefix 'JBoss' and does not lend itself to being used as a noun, which is helpful since Embedded JBoss is really a way of packaging up the microcontainer with some integration code so that it can be used in other runtimes. In this sense it isn't a product as such.
The JBoss Reloaded project would therefore contain distributions and docs to explain how to configure JBoss services and the EJB3 container in Tomcat, GlassFish, standalone Java SE apps, etc...
I already have a part in the JBoss Microcontainer User Guide called 'Integration' where I intend to explain how the microcontainer is used within JBoss AS. We could also add in here explanations of how to use it in Tomcat and GlassFish if necessary.
I ultimately would like to develop a single story with which to explain our technology all the way from JBoss Microcontainer through to JBoss AS. Seen in this light the JBoss Reloaded project would sit nicely in the middle, demonstrating uses of our technology that gradually become more and more integrated with different runtimes until we arrive at JBoss AS 5 which is the best of the bunch :)
what do you think?
Mark Newton
7. Re: Proper definition of embedded jboss
Scott Stark Nov 26, 2007 3:05 PM (in response to Scott Stark)
I don't so much care about the name as the breaking out of the project with required integration code and docs. That makes sense.
8. Re: Proper definition of embedded jboss
Max Rydahl Andersen Nov 28, 2007 9:50 AM (in response to Scott Stark)
We are depending rather heavily on embedded JBoss in the context of Seam and being able to run tests from a plain unit test, so I'm very interested in this discussion, and I can provide you with the perfect environment to test it in: JBoss Tools.
In here we simply run the TestNG Seam unit test with the basic classpath and everything seems to work fine with it (except for the outstanding issue with MC failing when there is a manifest.mf and ejb-jar.xml in an exploded dir).
9. Re: Proper definition of embedded jboss
Scott Stark Nov 28, 2007 1:49 PM (in response to Scott Stark)
Point us to a page that describes how to set this up and run this.
10. Re: Proper definition of embedded jbossMax Rydahl Andersen Nov 28, 2007 3:42 PM (in response to Scott Stark)
How to install JBoss Tools (nightly builds are best until we GA):
How to use TestNG setup in jboss tools (requires to install the testng plugin from testng.org):
11. Re: Proper definition of embedded jbossScott Stark Nov 28, 2007 6:09 PM (in response to Scott Stark)
Ok, I'll have to try this on my laptop as I have not been able to get eclipse 3.3.x to run on my rhel4 server.
12. Re: Proper definition of embedded jbossMax Rydahl Andersen Nov 28, 2007 6:22 PM (in response to Scott Stark)
hmm - weird.
what happens with eclipse 3.3 on rhel 4?
13. Re: Proper definition of embedded jbossScott Stark Nov 28, 2007 6:34 PM (in response to Scott Stark)
Some swt library problem:
[starksm@succubus forums]$ ~/java/eclipse-europa/eclipse !SESSION 2007-11-28 15:33:18.219 -----------------------------------------------eclipse.buildId=I20070625-1500 java.version=1.5.0_11 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=linux, ARCH=x86, WS=gtk, NL=en_US Framework arguments: org.eclipse.platform Command-line arguments: -os linux -ws gtk -arch x86 -consoleLog org.eclipse.platform !ENTRY org.eclipse.osgi 4 0 2007-11-28 15:33:19.132 !MESSAGE Application error !STACK 1 java.lang.UnsatisfiedLinkError: no swt-gtk-3346 or swt-gtk in swt.library.path, java.library.path or the jar file at org.eclipse.swt.internal.Library.loadLibrary(Library.java:219) at org.eclipse.swt.internal.Library.loadLibrary(Library.java:151) at org.eclipse.swt.internal.C.<clinit>(C.java:21) at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:63) at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:54) at org.eclipse.swt.widgets.Display.<clinit>(Display.java:128) at org.eclipse.ui.internal.Workbench.createDisplay(Workbench.java:482) at org.eclipse.ui.PlatformUI.createDisplay(PlatformUI.java:161) at org.eclipse.ui.internal.ide.application.IDEApplication.createDisplay(IDEApplication.java:133) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:86):504) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:443) at org.eclipse.equinox.launcher.Main.run(Main.java:1169) at org.eclipse.equinox.launcher.Main.main(Main.java:1144) !ENTRY org.eclipse.osgi 2 0 2007-11-28 15:33:19.258 !MESSAGE The following is a complete list of bundles which are not resolved, see the prior log entry for the root cause if it exists: !SUBENTRY 1 org.eclipse.osgi 2 0 2007-11-28 15:33:19.258 !MESSAGE Bundle update@plugins/org.eclipse.jdt.compiler.apt_1.0.0.v20070510-2000.jar [55] was not resolved. !SUBENTRY 2 org.eclipse.jdt.compiler.apt 2 0 2007-11-28 15:33:19.258 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.tool_0.0.0. 
!SUBENTRY 1 org.eclipse.osgi 2 0 2007-11-28 15:33:19.259 !MESSAGE Bundle update@plugins/org.eclipse.jdt.apt.pluggable.core_1.0.0.v20070529-2100.jar [176] was not resolved. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2007-11-28 15:33:19.259 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.tool_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2007-11-28 15:33:19.259 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.dispatch_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2007-11-28 15:33:19.259 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.model_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2007-11-28 15:33:19.260 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.util_0.0.0. !SUBENTRY 1 org.eclipse.osgi 2 0 2007-11-28 15:33:19.260 !MESSAGE Bundle update@plugins/org.eclipse.jdt.compiler.tool_1.0.0.v_771.jar [180] was not resolved. *** glibc detected *** double free or corruption (!prev): 0x080706e8 *** Aborted
14. Re: Proper definition of embedded jbossMax Rydahl Andersen Nov 28, 2007 6:40 PM (in response to Scott Stark)
is that on a 64 bit machine ?
then you need Sun Java 2 Standard Edition 5.0 Update 11 for Linux x86_64
anyway I recommend you use a 32bit java anyway so you can use the visual editor which we don't have a 64 bit bundle for yet. | https://developer.jboss.org/thread/127532 | CC-MAIN-2017-34 | refinedweb | 2,556 | 56.66 |
The numpy module in python consists of so many interesting functions. One such fascinating and time-saving method is the numpy vstack() function. Many times we want to stack different arrays into one array without losing the value. And that too in one line of code. So, to solve this problem, there are two functions available in numpy vstack() and hstack(). Here ‘v’ means ‘Vertical,’ and ‘h’ means ‘Horizontal.’
In this particular article, we will discuss in-depth the Numpy vstack() function. The numpy.vstack() function in Python is used to stack or pile the sequence of input arrays vertically (row-wise) and make them a single array. You can use vstack() very effectively up to three-dimensional arrays. Enough talk now; let’s move directly to the usage and examples from the basics.
Syntax
numpy.vstack(tup)
Parameters
Note
We need only one argument for this function: ‘tup.’ Tup is known as a tuple containing arrays to be stacked. This parameter is a required parameter, and we have to mandatory pass a value.
Return Value
Stacked Array: The array (nd-array) formed by stacking the passed arrays.
Examples to Simplify Numpy Vstack
Now, we have seen the syntax, required parameters, and return value of the function numpy stack. Let’s move to the examples section. Here we will start from the very basic case and after that, we will increase the level of examples gradually.
Example 1: Basic Case to Learn the Working of Numpy Vstack
In this example 1, we will simply initialize, declare two numpy arrays and then make their vertical stack using vstack function.
import numpy as np x = np.array([0, 1, 2]) print ("First Input array : \n", x) y = np.array([3, 4, 5]) print ("Second Input array : \n", y) res = np.vstack((x,y)) print ("Vertically stacked array:\n ", res)
Output:
First Input array : [0 1 2] Second Input array : [3 4 5] Vertically stacked array: [[0 1 2] [3 4 5]]
Explanation:
In the above example, we stacked two numpy arrays vertically (row-wise). Firstly we imported the numpy module. Following the import, we initialized, declared, and stored two numpy arrays in variable ‘x and y’. After that, with the np.vstack() function, we piled or stacked the two 1-D numpy arrays. Here please note that the stack will be done vertically (row-wisestack). Also, both the arrays must have the same shape along all but the first axis.
Example 2: Combining Three 1-D Arrays Vertically Using numpy.vstack function
Let’s move to the second example here we will take three 1-D arrays and combine them into one single array.
import numpy as np x = np.array([0, 1]) print ("First Input array : \n", x) y = np.array([2, 3]) print ("Second Input array : \n", y) z = np.array([4, 5]) print ("Third Input array : \n", z) res = np.vstack((x, y, z)) print ("Vertically stacked array:\n ", res)
Output:
[[0 1] [2 3] [4 5]]
Explanation
In the above example we have done all the things similar to the example 1 except adding one extra array. In the example 1 we can see there are two arrays. But in this example we have used three arrays ‘x, y, z’. And with the help of np.vstack() we joined them together row-wise (vertically).
Example 3: Combining 2-D Numpy Arrays With Numpy.vstack
import numpy as np x = np.array([[0, 1], [2, 3]]) print ("First Input array : \n", x) y = np.array([[4, 5], [6, 7]]) print ("Second Input array : \n", y) res = np.vstack((x, y)) print ("Vertically stacked array:\n ", res)
Output:
[[0 1] [2 3] [4 5] [6 7]]
Explanation:
In the above example, we have initialized and declared two 2-D arrays. And we have stored them in two variables, ‘x,y’ respectively. After storing the variables in two different arrays, we used the function to join the two 2-D arrays and make them one single 2-d array. Here we need to make sure that the shape of both the input arrays should be the same. If the shapes are different, then we will get a value error.
Example 4: Stacking 3-D Numpy Array using vstack Function
import numpy as np x = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) print ("First Input array : \n", x) y = np.array([[[9, 10], [11, 12]], [[13, 14], [15, 16]]]) print ("Second Input array : \n", y) res = np.vstack((x, y)) print ("Vertically stacked array:\n ", res)
Output:
First Input array : [[[1 2] [3 4]] [[5 6] [7 8]]] Second Input array : [[[ 9 10] [11 12]] [[13 14] [15 16]]] Vertically stacked array: [[[ 1 2] [ 3 4]] [[ 5 6] [ 7 8]] [[ 9 10] [11 12]] [[13 14] [15 16]]]
Explanation
We can use this function for stacking or combining a 3-D array vertically (row-wise). Instead of a 1-D array or a 2-D array in the above example, we have declared and initialized two 3-D arrays. After initializing, we have stored them in two variables, ‘x and y’ respectively. Following the storing part, we have used the function to stack the 3-D array in a vertical manner (row-wise).
Note: The shape of the input arrays should be same.
Can We Combine Numpy Arrays with Different Shapes Using Vstack
The simple one word answer is No. Let’ prove it through one of the example.
import numpy as np x = np.array([0, 1]) print ("First Input array : \n", x) y = np.array([3, 4, 5]) print ("Second Input array : \n", y) res = np.vstack((x,y)) print ("Vertically stacked array:\n ", res)
Output:
ValueError: all the input array dimensions except for the concatenation axis must match exactly
Explanation:
In the above case we get a value error. Here firstly we have imported the required module. After that, we have initialized two arrays and stored them in two different variables. Here the point to be noted is that in the variable ‘x’ the array has two elements. But in the variable ‘y’ the array has three elements. So, we can see the shape of both the arrays is not the same. Which is the basic requirement, while working with this function. That’s why we get a value error.
Difference Between Np.Vstack() and Np.Concatenate()
NumPy concatenate is similar to a more flexible model of np.vstack. NumPy concatenate also unites together NumPy arrays, but it might combine arrays collectively either vertically or even horizontally. So NumPy concatenate gets the capacity to unite arrays together like np.vstack plus np.hstack. How np.concatenate acts depends on how you utilize the axis parameter from the syntax.
Difference Between numpy vstack() and hstack()
NumPy hstack and NumPy vstack are alike because they both unite NumPy arrays together. The significant distinction is that np.hstack unites NumPy arrays horizontally and np. vstack unites arrays vertically.
Aside from that however, the syntax and behavior is quite similar.
Do the Number of Columns and Rows Needs to Be Same?
Rows: No, if you use NumPy vstack, the input arrays may have a different number of rows.
Columns: If you use NumPy vstack, the input arrays have to possess exactly the identical amount of columns.
Conclusion
In this article, we have learned, different facets like syntax, functioning, and cases of this vstack in detail. Numpy.vstack() is a function that helps to pile the input sequence vertically so as to produce one stacked array. It can be useful when we want to stack different arrays into one row-wise (vertically). We can use this function up to nd-arrays but it’s recommended to use it till
3-D arrays.
However, if you have any doubts or questions do let me know in the comment section below. I will try to help you as soon as possible.
Happy Pythoning! | https://www.pythonpool.com/numpy-vstack/ | CC-MAIN-2021-43 | refinedweb | 1,319 | 66.03 |
Type: Posts; User: b721991
Yes the Jframe does not display is the overall problem. When I run it on netbeans. The Jframe will not diplay, it just says build succesful and does nothing else
This one has the updated code
I thought I didn't successfully post the other thread, and I made an error with the code formating so it doesn't look right. The problem is when I compile the code, the applet will never show up.
Is there any issues with this code that can be fixed or made better?
[code]
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.border.TitledBorder;
...
Is everything running smoothly with this code, can some please review I get errors when I try to run the program. I have attached the code, as well at the picture. 30641
import java.awt.*;... | http://forums.codeguru.com/search.php?s=8caf6c66ace0682d0a59bbd0c819b95d&searchid=8139721 | CC-MAIN-2015-48 | refinedweb | 144 | 81.73 |
This tutorial depends on step-7.
This is a rather short example which only shows some aspects of using higher order mappings. By mapping we mean the transformation between the unit cell (i.e. the unit line, square, or cube) to the cells in real space. In all the previous examples, we have implicitly used linear or d-linear mappings; you will not have noticed this at all, since this is what happens if you do not do anything special. However, if your domain has curved boundaries, there are cases where the piecewise linear approximation of the boundary (i.e. by straight line segments) is not sufficient, and you want that your computational domain is an approximation to the real domain using curved boundaries as well. If the boundary approximation uses piecewise quadratic parabolas to approximate the true boundary, then we say that this is a quadratic or \(Q_2\) approximation. If we use piecewise graphs of cubic polynomials, then this is a \(Q_3\) approximation, and so on.
For some differential equations, it is known that piecewise linear approximations of the boundary, i.e. \(Q_1\) mappings, are not sufficient if the boundary of the exact domain is curved. Examples are the biharmonic equation using \(C^1\) elements, or the Euler equations of gas dynamics on domains with curved reflective boundaries. In these cases, it is necessary to compute the integrals using a higher order mapping. If we do not use such a higher order mapping, the order of approximation of the boundary dominates the order of convergence of the entire numerical scheme, irrespective of the order of convergence of the discretization in the interior of the domain.
Rather than demonstrating the use of higher order mappings with one of these more complicated examples, we do only a brief computation: calculating the value of \(\pi=3.141592653589793238462643\ldots\) by two different methods.
The first method uses a triangulated approximation of the circle with unit radius and integrates a unit magnitude constant function ( \(f = 1\)) over it. Of course, if the domain were the exact unit circle, then the area would be \(\pi\), but since we only use an approximation by piecewise polynomial segments, the value of the area we integrate over is not exactly \(\pi\). However, it is known that as we refine the triangulation, a \(Q_p\) mapping approximates the boundary with an order \(h^{p+1}\), where \(h\) is the mesh size. We will check the values of the computed area of the circle and their convergence towards \(\pi\) under mesh refinement for different mappings. We will also find a convergence behavior that is surprising at first, but has a good explanation.
The second method works similarly, but this time does not use the area of the triangulated unit circle, but rather its perimeter. \(\pi\) is then approximated by half of the perimeter, as we choose the radius equal to one.
The first of the following include files are probably well-known by now and need no further explanation.
This include file is new. Even if we are not solving a PDE in this tutorial, we want to use a dummy finite element with zero degrees of freedoms provided by the FE_Nothing class.
The following header file is also new: in it, we declare the MappingQ class which we will use for polynomial mappings of arbitrary order:
And this again is C++:
The last step is as in previous programs:
Now, as we want to compute the value of \(\pi\), we have to compare to something. These are the first few digits of \(\pi\), which we define beforehand for later use. Since we would like to compute the difference between two numbers which are quite accurate, with the accuracy of the computed approximation to \(\pi\) being in the range of the number of digits which a double variable can hold, we rather declare the reference value as a
long double and give it a number of extra digits:
Then, the first task will be to generate some output. Since this program is so small, we do not employ object oriented techniques in it and do not declare classes (although, of course, we use the object oriented features of the library). Rather, we just pack the functionality into separate functions. We make these functions templates on the number of space dimensions to conform to usual practice when using deal.II, although we will only use them for two space dimensions and throw an exception when attempted to use for any other spatial dimension.
The first of these functions just generates a triangulation of a circle (hyperball) and outputs the \(Q_p\) mapping of its cells for different values of
p. Then, we refine the grid once and do so again.
So first generate a coarse triangulation of the circle and associate a suitable boundary description to it. By default, GridGenerator::hyper_ball attaches a SphericalManifold to the boundary (and uses FlatManifold for the interior) so we simply call that function and move on:
Then alternate between generating output on the current mesh for \(Q_1\), \(Q_2\), and \(Q_3\) mappings, and (at the end of the loop body) refining the mesh once globally.
For this, first set up an object describing the mapping. This is done using the MappingQ class, which takes as argument to the constructor the polynomial degree which it shall use.
As a side note, for a piecewise linear mapping, you could give a value of
1 to the constructor of MappingQ, but there is also a class MappingQ1 that achieves the same effect. Historically, it did a lot of things in a simpler way than MappingQ but is today just a wrapper around the latter. It is, however, still the class that is used implicitly in many places of the library if you do not specify another mapping explicitly.
In order to actually write out the present grid with this mapping, we set up an object which we will use for output. We will generate Gnuplot output, which consists of a set of lines describing the mapped triangulation. By default, only one line is drawn for each face of the triangulation, but since we want to explicitly see the effect of the mapping, we want to have the faces in more detail. This can be done by passing the output object a structure which contains some flags. In the present case, since Gnuplot can only draw straight lines, we output a number of additional points on the faces so that each face is drawn by 30 small lines instead of only one. This is sufficient to give us the impression of seeing a curved line, rather than a set of straight lines.
Finally, generate a filename and a file for output:
Then write out the triangulation to this file. The last argument of the function is a pointer to a mapping object. This argument has a default value, and if no value is given a simple MappingQ1 object is taken, which we briefly described above. This would then result in a piecewise linear approximation of the true boundary in the output.
At the end of the loop, refine the mesh globally.
Now we proceed with the main part of the code, the approximation of \(\pi\). The area of a circle is of course given by \(\pi r^2\), so having a circle of radius 1, the area represents just the number that is searched for. The numerical computation of the area is performed by integrating the constant function of value 1 over the whole computational domain, i.e. by computing the areas \(\int_K 1 dx=\int_{\hat K} 1 \ \textrm{det}\ J(\hat x) d\hat x \approx \sum_i \textrm{det} \ J(\hat x_i)w(\hat x_i)\), where the sum extends over all quadrature points on all active cells in the triangulation, with \(w(x_i)\) being the weight of quadrature point \(x_i\). The integrals on each cell are approximated by numerical quadrature, hence the only additional ingredient we need is to set up a FEValues object that provides the corresponding
JxW values of each cell. (Note that
JxW is meant to abbreviate Jacobian determinant times weight; since in numerical quadrature the two factors always occur at the same places, we only offer the combined quantity, rather than two separate ones.) We note that here we won't use the FEValues object in its original purpose, i.e. for the computation of values of basis functions of a specific finite element at certain quadrature points. Rather, we use it only to gain the
JxW at the quadrature points, irrespective of the (dummy) finite element we will give to the constructor of the FEValues object. The actual finite element given to the FEValues object is not used at all, so we could give any.
For the numerical quadrature on all cells we employ a quadrature rule of sufficiently high degree. We choose QGauss that is of order 8 (4 points), to be sure that the errors due to numerical quadrature are of higher order than the order (maximal 6) that will occur due to the order of the approximation of the boundary, i.e. the order of the mappings employed. Note that the integrand, the Jacobian determinant, is not a polynomial function (rather, it is a rational one), so we do not use Gauss quadrature in order to get the exact value of the integral as done often in finite element computations, but could as well have used any quadrature formula of like order instead.
Now start by looping over polynomial mapping degrees=1..4:
First generate the triangulation, the boundary and the mapping object as already seen.
We now create a finite element. Unlike the rest of the example programs, we do not actually need to do any computations with shape functions; we only need the
JxW values from an FEValues object. Hence we use the special finite element class FE_Nothing which has exactly zero degrees of freedom per cell (as the name implies, the local basis on each cell is the empty set). A more typical usage of FE_Nothing is shown in step-46.
Likewise, we need to create a DoFHandler object. We do not actually use it, but it will provide us with
active_cell_iterators that are needed to reinitialize the FEValues object on each cell of the triangulation.
Now we set up the FEValues object, giving the Mapping, the dummy finite element and the quadrature object to the constructor, together with the update flags asking for the
JxW values at the quadrature points only. This tells the FEValues object that it needs not compute other quantities upon calling the
reinit function, thus saving computation time.
The most important difference in the construction of the FEValues object compared to previous example programs is that we pass a mapping object as first argument, which is to be used in the computation of the mapping from unit to real cell. In previous examples, this argument was omitted, resulting in the implicit use of an object of type MappingQ1.
We employ an object of the ConvergenceTable class to store all important data like the approximated values for \(\pi\) and the error with respect to the true value of \(\pi\). We will also use functions provided by the ConvergenceTable class to compute convergence rates of the approximations to \(\pi\).
Now we loop over several refinement steps of the triangulation.
In this loop we first add the number of active cells of the current triangulation to the table. This function automatically creates a table column with superscription
cells, in case this column was not created before.
Then we distribute the degrees of freedom for the dummy finite element. Strictly speaking we do not need this function call in our special case but we call it to make the DoFHandler happy – otherwise it would throw an assertion in the FEValues::reinit function below.
We define the variable area as
long double like we did for the
pi variable before.
Now we loop over all cells, reinitialize the FEValues object for each cell, and add up all the
JxW values for this cell to
area...
...and store the resulting area values and the errors in the table. We need a static cast to double as there is no add_value(string, long double) function implemented. Note that this also concerns the second call as the
fabs function in the
std namespace is overloaded on its argument types, so there exists a version taking and returning a
long double, in contrast to the global namespace where only one such function is declared (which takes and returns a double).
We want to compute the convergence rates of the
error column. Therefore we need to omit the other columns from the convergence rate evaluation before calling
evaluate_all_convergence_rates
Finally we set the precision and scientific mode for output of some of the quantities...
...and write the whole table to std::cout.
The following, second function also computes an approximation of \(\pi\) but this time via the perimeter \(2\pi r\) of the domain instead of the area. This function is only a variation of the previous function. So we will mainly give documentation for the differences.
We take the same order of quadrature but this time a
dim-1 dimensional quadrature as we will integrate over (boundary) lines rather than over cells.
We loop over all degrees, create the triangulation, the boundary, the mapping, the dummy finite element and the DoFHandler object as seen before.
Then we create a FEFaceValues object instead of a FEValues object as in the previous function. Again, we pass a mapping as first argument.
Now we run over all cells and over all faces of each cell. Only the contributions of the
JxW values on boundary faces are added to the long double variable
perimeter.
We reinit the FEFaceValues object with the cell iterator and the number of the face.
Then store the evaluated values in the table...
...and end this function as we did in the previous one:
The following main function just calls the above functions in the order of their appearance. Apart from this, it looks just like the main functions of previous tutorial programs.
The program performs two tasks, the first being to generate a visualization of the mapped domain, the second to compute pi by the two methods described. Let us first take a look at the generated graphics. They are generated in Gnuplot format, and can be viewed with the commands
or using one of the other filenames. The second line makes sure that the aspect ratio of the generated output is actually 1:1, i.e. a circle is drawn as a circle on your screen, rather than as an ellipse. The third line switches off the key in the graphic, as that will only print information (the filename) which is not that important right now. Similarly, the fourth and fifth disable tick marks. The plot is then generated with a specific line width ("lw", here set to 4) and line type ("lt", here chosen by saying that the line should be drawn using the RGB color "black").
The following table shows the triangulated computational domain for \(Q_1\), \(Q_2\), and \(Q_3\) mappings, for the original coarse grid (left), and a once uniformly refined grid (right).
These pictures show the obvious advantage of higher order mappings: they approximate the true boundary quite well also on rather coarse meshes. To demonstrate this a little further, here is part of the upper right quarter circle of the coarse meshes with \(Q_2\) and \(Q_3\) mappings, where the dashed red line marks the actual circle:
Obviously the quadratic mapping approximates the boundary quite well, while for the cubic mapping the difference between approximated domain and true one is hardly visible already for the coarse grid. You can also see that the mapping only changes something at the outer boundaries of the triangulation. In the interior, all lines are still represented by linear functions, resulting in additional computations only on cells at the boundary. Higher order mappings are therefore usually not noticeably slower than lower order ones, because the additional computations are only performed on a small subset of all cells.
The second purpose of the program was to compute the value of pi to good accuracy. This is the output of this part of the program:
One of the immediate observations from the output is that in all cases the values converge quickly to the true value of \(\pi=3.141592653589793238462643\). Note that for the \(Q_4\) mapping, we are already in the regime of roundoff errors and the convergence rate levels off, which is already quite a lot. However, also note that for the \(Q_1\) mapping, even on the finest grid the accuracy is significantly worse than on the coarse grid for a \(Q_3\) mapping!
The last column of the output shows the convergence order, in powers of the mesh width \(h\). In the introduction, we had stated that the convergence order for a \(Q_p\) mapping should be \(h^{p+1}\). However, in the example shown, the order is rather \(h^{2p}\)! This at first surprising fact is explained by the properties of the \(Q_p\) mapping. At order p, it uses support points that are based on the p+1 point Gauss-Lobatto quadrature rule that selects the support points in such a way that the quadrature rule converges at order 2p. Even though these points are here only used for interpolation of a pth order polynomial, we get a superconvergence effect when numerically evaluating the integral, resulting in the observed high order of convergence. (This effect is also discussed in detail in the following publication: A. Bonito, A. Demlow, and J. Owen: "A priori error estimates for finite element approximations to eigenvalues and eigenfunctions of the Laplace-Beltrami operator", submitted, 2018.) | https://dealii.org/developer/doxygen/deal.II/step_10.html | CC-MAIN-2021-04 | refinedweb | 2,965 | 57.81 |
AJAX IM
The requirements for Ajax IM are PHP 5.2 and MySQL 5.0.
For more detail...
AJAX IM
Ajax IM ("Asynchronous JavaScript and XML Instant Messenger")
Hi - Struts
Hi friends,
Is MySQL a must for Struts, or is it not necessary? I know it is possible to run Struts using Oracle 10g... please reply fast; it's very urgent.
Thanks. Hi Soniya,
We can use Oracle too with Struts.
how to connect to database in php using mysql
how to connect to database in php using mysql  How can I connect to a database in PHP using MySQL?
How to connect mysql with jsp
How to connect mysql with jsp  How can I connect JSP with MySQL while using Apache Tomcat, with JDBC? - JDBC
how to connect mysql with JDBC  I have created three tables in the MySQL database; I have to connect to them now using JDBC. Can you please suggest?
public NewJFrame() {
    initComponents();
    try {
        Connection con = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/test", "root", "admin");
    } catch (Exception e) {
        ...
    }
}
Thanks, Rajanikant
Hi friend, to connect to MySQL using JDBC...
to all Experts.. please help me. im using a Jcreator. 3. Write a program that asks the user to enter a word. The program will then repeat word...;
int times;
. . . .
times = inputString.length()
Hi
how to connect j2me program with mysql using servlet?
how to connect j2me program with mysql using servlet? my program...();
String userid=connect(user.toLowerCase().trim(), pwd.toLowerCase().trim... String connect(String user,String pwd){
String db="mobileapp
Hi - Struts
Hi Hi Friends,
I want to installed tomcat5.0 version please... please help me. its very urgent Hi friend,
Some points to be remember... it in production
* Complete server monitoring using JMX and the manager web
can't connect to MySQL Server(10060) - JDBC
created a DB in the Hosting MySQL server and now i want to create a table By using MySQL front end. Can we connect to the MySQL server using the I.P address...can't connect to MySQL Server(10060) Hii Sir,
I am working
Connect JSP with mysql
Connect JSP with mysql
...;
This query creates database 'usermaster' in
Mysql.
Connect JSP with mysql :
Now in the following jsp code, you will see
how to connect
Internationalization using struts - Struts
Internationalization using struts Hi, I want to develop a web application in Hindi language using Struts. I have an small idea... to convert hindi characters into suitable types for struts. I struck here please
using captcha with Struts - Struts
using captcha with Struts Hello everybody: I'm developping a web application using Struts framework, and i would like to use captcha Hi friend,Java Captcha in Struts 2 Application :
struts
struts Hi,... please help me out how to store image in database(mysql) using struts
hi
hi sir i've a project on railway reservation... i need to connect netbeans and mysql with the help of jdbc driver... then i need to design the frame... in my frame should reach mysql and should get saved in a database which we've
hi
hi sir i've a project on railway reservation... i need to connect netbeans and mysql with the help of jdbc driver... then i need to design the frame... in my frame should reach mysql and should get saved in a database which we've
MYSql with struts
MYSql with struts How to write insert and update query in struts by using mysql database?
Please visit the following link:
HI.
HI. hi,plz send me the code for me using search button bind the data from data base in dropdownlist
hi.......
hi....... i've a project on railway reservation... i need to connect netbeans and mysql with the help of jdbc driver... then i need to design... enter in my frame should reach mysql and should get saved in a database which we've
hi
jquery alert dialog box using struts2 how to develop jquery alert dialog box using struts2
hi
hi add any two numbers using bitwise operatorsprint("code sample
hi
hi servlet program to retrieve data from database using session object
how to access the messagresource.proprties filevalues in normal class file using struts - Struts
using struts i want to access the below username and password in class...(Exception e){
System.out.println("unnable to connect"+e);
}return...();
System.out.println("connected");
}
}
Hi Friend,
Try
update in db ....so with out using javascript
...only html,java,servlets,db
hi
update in db ....so with out using javascript
...only html,java,db should.. I want upload the image using jsp. When i browse the file then pass that file to another jsp it was going on perfect. But when i read that image file, we cant read that file.it return -1.
and my console message
Connect JSP with mysql
Connect JSP with mysql
... you how to connect to
MySQL database from your JSP code. First, you need... database in my sql command prompt)
2. Connect JSP with mysql:
cannot connect to database - JDBC
cannot connect to database Iam using eclipse in my system ,when connecting the database mysql version 5.0 to the eclipse iam getting an error as ""Creating connection to mysql has encountered a problem.Could not connect to mysql to MS Acces wothout using ODBC but JDBC - JDBC
Connect to MS Acces wothout using ODBC but JDBC Hi,
I want to connect my MS Access using JDBC but not ODBC.
Please help me out.
Thanks
need help....how to connect and disconnect multiple databases(databases created in mysql) using java and my sql
need help....how to connect and disconnect multiple databases(databases created in mysql) using java and my sql i am working on a project on deadlock in distributed transactions , and in that i am using my sql in java i need
Using Network Address To Connect to a Database
to connect a MySql database with your application over a
network then you must load... is "root"
An example of using network address to connect to database...
.style1 {
text-align: center;
}
How To Use Network Address To Connect
Connecting to MySQL
();
bds.setDriverClassName("com.mysql.jdbc.Driver");
bds.setUrl("jdbc:mysql...("");
try {
System.out.println("Trying to connect!!"...-dbcp.jar, commons-pool.jar, j2ee.jar and
mysql-connector-java-5.1.7-bin.jar
Using radio button in struts - Struts
Using radio button in struts Hello to all ,
I have a big problem... single selection). Hi friend,
Please give full details and full source code to solve the problem :
For more information on radio in Struts
how to connect the database using hibernet through servlet/jsp through form
how to connect the database using hibernet through servlet/jsp through form plz give me the reply
Hi Friend,
Please visit the following link:
Hope
using tiles without struts
using tiles without struts Hi
I am trying to make an application using tiles 2.0.
Description of my web.xml is as follows:
tiles
org.apache.tiles.web.startup.TilesServlet
Connecting to remote mysql server using jdbc.
Connecting to remote mysql server using jdbc. How to Connect to remote mysql server using jdbc
connect sql server 2005 using php
connect sql server 2005 using php how to connect sql server 2005 using php program. how mssql_connect will work
how to connect client to server using Sockets
how to connect client to server using Sockets how to connect client to server using Sockets
how to connect SQL Server 2005 using php
how to connect SQL Server 2005 using php i need to connect SQL Server 2005 using php. how can i connect . how to use mssql_connect function
struts
struts shopping cart project in struts with oracle database connection shopping cart project in struts with oracle database connection
Have a look at the following link:
Struts Shopping Cart using MySQL
mysql
mysql How can install My sql drivers using a jar File
Hi you need to download the mysql-connector jar file for connecting java program from mysql database.......
Hi friend,
MySQL is open source database can not connect to database in servlet - JSP-Servlet
i can not connect to database in servlet Hi
I am following... offer . Hi friend,
Code to connect to database in servlet... table using Statement",but typing the following url(
struts
struts Hi
how struts flows from jsp page to databae and also using validation ?
Thanks
Kalins Naik
How to save excel sheet into mysql database using blob or clob
How to save excel sheet into mysql database using blob or clob Hi All,
I am new to java and i need to upload excel sheet to mysql,
please suggest me the steps to do this, i am able to connect to the database, from there i don't
shopping cart using sstruts - Struts
shopping cart using sstruts Hi,
This is question i asked ,u send....
--------------
I need the example programs for shopping cart using struts... it immediately.
Regards,
Valarmathi
Answers
Hi Friend,
Please visit
How to connect
How to connect how to connect to a remote host using jsp? We need to update files to the database of the application on the local machine from the updated database on our webpage
struts
struts hi
i would like to have a ready example of struts using "action class,DAO,and services" for understanding.so please guide for the same.
thanks Please visit the following link:
Struts Tutorials
How can i use Facebook connect button
article or blogspot.
Currently im exploring this one using this test blogspot.... Im gonna use this "Connect to facebook" feature to my blog which im gonna use...How can i use Facebook connect button Please to meet you all guys
I
struts
struts hi
i would like to have a ready example of struts using"action class,DAO,and services"
so please help me
Grouping Of Test cases by using struts
Grouping Of Test cases by using struts Hi,
I can able to create, save, delete and execute test cases in my application. But i want to group those test cases. Give me the flow for coding
PHP MySQL Connect to a Database
PHP MySQL Connect to a Database:
In WAMP software stack, php and mysql are bundled together and we can use it
with so much ease. To connect with mysql server we can use two ways one
how to insert and retrieve an image from mysql using java - Java Beginners
how to insert and retrieve an image from mysql using java how to insert and retrieve an image from mysql using java? Hi friend,
Code to insert image using java :
import java.sql.*;
import java.io.*;
class
struts
struts hi
Before asking question, i would like to thank you... technologies like servlets, jsp,and struts.
i am doing one struts application where i doing validations by using DynaVAlidatorForm
in that have different fields
Fixed Value check using struts validator framework
Fixed Value check using struts validator framework Hi All,
can anyone tell me how to use struts validator framework for fixed value check.
eg. country='India';
Thanks in advance
Using the Validator framework in struts
Using the Validator framework in struts What is the Benefits of Using the Validator framework in struts
hi sir - Java Beginners
netbeans,
and how to connect with database by using the netbeans ,plz provide the details sir,
Thanks for ur coporation sir Hi Friend...hi sir Hi,sir,i am try in netbeans for to develop the swings,plz
how insert multiple images in mysql database with use of struts 1.3.8 or java method, with single bean,or using array
how insert multiple images in mysql database with use of struts 1.3.8 or java method, with single bean,or using array i am using netbeans 7.0 ,with struts 1.3.8 and i want to insert multiple images in mysql database ,with use
pls provide steps to connect mysql in flex - XML
pls provide steps to connect mysql in flex hi ,PLS SUGGEST ME I... THE FUNCTIONS.PHP THERE .OR ANY OTHERWAY .HOW FLEX CONNECT WITH WAMP .IF SO PLS...;Hi friend,
For read more information on flex visit to :
http
create a form using struts
create a form using struts How can I create a form for inputting text and uploading image using
Hi.... - Java Beginners
Hi.... Hi Friends,
Thanks for reply can send me sample of code using ur idea...
First part i have completed but i want to how to go... me its very urgent....
Hi Friend,
Plz give full details | http://roseindia.net/tutorialhelp/comment/12914 | CC-MAIN-2014-41 | refinedweb | 2,087 | 71.24 |
… and a hearty welcome to our regular GNOME bloggers! Hopefully you’ve noticed that we’ve had a bit of an upgrade over here… No more NewsBruiser — we’re running with WordPress MU.
What does that mean for you?
- Rocking user experience, built on the foundations of everybody’s favourite blogging tool, WordPress.
- Multiple blogs: Yes, you can run more than just one blog! You may want one for yourself, one for your project. Do be nice though, there’s only one namespace here.
- Multiple authors: You can invite existing users to participate on your blog — perfect for developer teams to contribute to a project blog. There are a few different user levels, so you can act as ‘editor’ to a team of contributors, give core maintainers administrative access, etc.
- Some cool themes to start you off, including the beautifully stylesheetable Sandbox. You can apply your own CSS to any theme using the ‘Custom CSS’ page. Make your blog yours.
- Customisable sidebar widgets, and a few cool widget plugins to play with. Stick a Twitter or Flickr feed in your sidebar, let everyone know where you’re at.
- Spam protection thanks to the Bad Behavior plugin, enabled by default on every blog, and auto-closing comments after 21 days of inactivity. We’re going to wait and see how bad the spam might get before implementing other measures.
- All the mod cons you’d expect from WordPress: XMLRPC, APP, feeds, GUI editor, blogrolls, great management of pages, categories and comments.
- Seamless migration from NewsBruiser, and thorough redirection to a much better URL scheme. Not only have we maintained your Google-juice, we’ve enhanced it!
- Easy sign-up: Anyone with a gnome.org, gtk.org or gimp.org email address can join!
Enjoy! | https://blogs.gnome.org/blog/category/general/page/2/ | CC-MAIN-2016-22 | refinedweb | 293 | 67.15 |
Resuming processes revisited - Richard Evans, Jun 29, 2012 10:48 AM
Firstly apologies for creating a new thread on an oft-discussed topic. This is partly because I want to make sure I get up-to-date info and partly because there is a new question in here.
JBPM 5.3
I have a number of processes with 100% async service handlers. Many of the processes have handlers that poll a back-end for data, driven by a timer (delay=30s).
I am trying to resume after halt. By default I use a kSession created in a spring context using the <drools:> namespace. As the config is fairly complex I use this to help me restore the session as follows. If I restore the session, I don't use the default session
kSession = JPAKnowledgeService.loadStatefulKnowledgeSession(id,
kSession.getKnowledgeBase(),
kSession.getSessionConfiguration(),
kSession.getEnvironment());
//?kSession.signalEvent("Trigger", null); //Seen this in one post?
Q1. Is there any reason I should NOT initialise from a prev ksession like this? I don’t want to have to create env and config in code as well as spring config
Q2. Should processes resume without further action (they don’t, even if not timers)
Q3. Should timers resume in 5.3? I know that this has been a subject of hot debate in the past.
Q4. If not, any suggestions on how to
- - EITHER resume processes stuck on timers
- - OR implement polling without timers (e.g. halt processes awaiting an a signal from an externally scheduled call to kSession.signalEvent())
Thanks in anticipation.
1. Re: Resuming processes revisited - Mauricio Salatino, Jun 29, 2012 11:02 AM (in response to Richard Evans)
Hi Richard,
A1: it looks ok, are you having any trouble with that?
A2: the process will not be automatically resumed.. unless you keep the session alive
A3: I'm not up to date with those discussions, but I can imagine how to create the mechanisms for "long running timers"
A4: a) you can use an external Quartz scheduler with a handler that loads the session and does something with it, b) same as 'a', I think that you got it right, just get an external source
that notifies you when the timer is due, then reload the session and do something like signal an event or insert a fact.
Cheers
2. Re: Resuming processes revisited - Richard Evans, Jun 29, 2012 6:54 PM (in response to Mauricio Salatino)
Hello Mauricio,
A1+ Only that processes are not resumed
A2+ What does "keep the session alive" mean? My scenario is: loads of processes running fine, tin fails, restart on new tin some time later, start container, loadStatefulKnowledgeSession..... then what?
A3 & 4+ I'll look further into these. The quartz (or equivalent) is not my problem. I'm a learner on jBPM so I don't know if I can a) stop a process to wait for a signal and then b) handle a signal to move on to the next state & resume the process.
3. Re: Resuming processes revisited - Maciej Swiderski, Jul 2, 2012 9:03 AM (in response to Richard Evans)
Regarding an alternative approach to timers - why not utilize intermediate signal catch events? That is a kind of wait state until some external trigger moves it further - using ksession.signalEvent....
HTH
4. Re: Resuming processes revisited - Richard Evans, Jul 4, 2012 12:51 PM (in response to Maciej Swiderski)
Hello again. Thanks for your reply. Sorry I was drawn into something else for a couple of days...
Let me try to understand. I understand how I can have something in my process that fields a signal. I am not sure how I get that to move my process forward only if it is in certain places in the flow (where I used to have the timers).
Consider a flow like this:
[Call external API1 SR] ==> (timer1) ==> [Get API1Response] ==>[Call extenal API2] ==> (timer2) ==> [Get API2Response] ==> (END)
If the timers don't trigger when I resume the session, can I replace this with something like ...
[Call external API1 SR] ==> (SignalHandler) ==> [Get API1Response] ==>[Call extenal API2] ==> (SignalHandler) ==> [Get API2Response] ==> (END)
... and have an external signal generator that will cause any (SignalHandler) node to resume execution at the next step? I have lots of processes with many timers so I'd not want to have a separate signal to replace each timer in each process. I obviously don't want the signal to divert the line of execution unless it was at the handler.
Regards,
Richard
5. Re: Resuming processes revisited - Maciej Swiderski, Jul 4, 2012 1:43 PM (in response to Richard Evans)
Yes, you can do what you described with signal events, and in my opinion it is more suitable than timers. In fact both timer and signal are events (in your case intermediate catch events), so only the event definition needs to change.
Signals have types (signalRef in BPMN2) that are used to distinguish between different signals, so you could have different types for given places in your process. You can also pass data when signaling, so it could be an alternative way to send back the response from the external API if your signal generator has it. You can either signal via the session, which will signal all process instances awaiting the given type of signal, or signal a dedicated process instance.
HTH
6. Re: Resuming processes revisited - Richard Evans, Jul 10, 2012 2:08 PM (in response to Maciej Swiderski)
Me again.
I was really looking forward to picking this up again but .... not good news. My external timer events are not getting handled.
I am still digging but I thought I'd float the problem in case anyone can volunteer advice.
The first time I call kSession.signalEvent I see
- DefaultSignalManager.addEventListener is called OK
- DefaultSignalManager.removeEventListener is called before
- DefaultSignalManager.internalSignalEvent is gets called but by then no registered listeners
- DefaultSignalManager.removeEventListener is called some more
All of this is down below kSession.signalEvent. When removeEventListener is called, the stack tells me that this is inside a commit. I have NO transaction running when I call kSession.signalEvent.
The second time I see no addEventListeners calls.
Any ideas?
Thanks for your continuing help.
Richard
7. Re: Resuming processes revisited - Maciej Swiderski, Jul 11, 2012 7:14 AM (in response to Richard Evans)
I am not sure I follow you on what you're trying to do now. Do you use timers and then would like to trigger them manually? If so, that is not what I had in mind. I suggested to use signal event definitions )with corresponding type) instead of timers. Then as soon as you enter the signal node events will be registered.
In addition what could simplify discussion here is to have an example (process) with test case if possible.
HTH
8. Re: Resuming processes revisited - Richard Evans, Jul 11, 2012 9:10 AM (in response to Maciej Swiderski)
I am generating events periodically. Sorry for the bad choice of terms. Every so often I call kSession.signalEvent("xxx", null).
The process looks like this.
[Call external API1 SR] ==> (SignalEvent) ==> [Get API1Response] ==>[Call extenal API2] ==> (SignalEvent) ==> [Get API2Response] ==> (END)
Interestingly I have found that the code works if I enumerate the process instances and iterate around calling kSession.signalEvent ("xxx", null, processInstanceId).
(Yep - if I get stuck again I will abstract a bit of code.)
Richard | https://developer.jboss.org/message/747481 | CC-MAIN-2017-09 | refinedweb | 1,211 | 63.8 |
Opened 11 years ago
Closed 7 years ago
#5497 closed Bug (worksforme)
OneToOneField limit_choices_to filters parent set in admin
Description
Setting limit_choices_to on a OneToOneField filters the queryset displayed in admin for the parent field.
Seeing this in current SVN (6304)
Example models:
from django.db import models

class Place(models.Model):
    name = models.CharField(max_length=50)

    class Admin:
        pass

    def __unicode__(self):
        return u"%s the place" % self.name

class Restaurant(models.Model):
    place = models.OneToOneField(Place, limit_choices_to={'name': 'My Place'})
    serves_hot_dogs = models.BooleanField()
    serves_pizza = models.BooleanField()

    class Admin:
        pass

    def __unicode__(self):
        return u"%s the restaurant" % self.place.name
Change History (9)
comment:1 Changed 11 years ago by
comment:2 Changed 11 years ago by
comment:3 Changed 10 years ago by
I'm a bit confused -- limit_choices_to is supposed to limit the choices... what did you expect to happen?
comment:4 Changed 10 years ago by
From the example above, I would expect limit_choices_to to limit the Place choices in the Restaurant model.

The unexpected behavior comes when viewing Place objects in admin. It only displays objects that match the limit_choices_to that is set in Restaurant.
I haven't checked whether this still occurs in trunk.
comment:5 Changed 10 years ago by
Please do check if this still applies on trunk and close the ticket if it doesn't.
comment:6 Changed 10 years ago by
confirmed in revision 7238
comment:7 Changed 10 years ago by
Does anyone know if this occurs in newforms-admin? This seems like a problem with the admin filtering, not necessarily the model/queryset.
To further clarify, in the above example, only places with the name 'My Place' will be displayed in admin. | https://code.djangoproject.com/ticket/5497 | CC-MAIN-2018-17 | refinedweb | 285 | 57.16 |
We migrated our NameNodes from low-configuration to high-configuration machines last week. First, we imported the current directory, including the fsimage and editlog files, from the original Active NameNode to the new Active NameNode and started the new NameNode; then we changed the configuration of all datanodes and restarted them, after which they sent block reports to the new NameNodes at once and sent heartbeats after that.
Everything seemed perfect, but after we restarted the ResourceManager, most of the users complained that their jobs couldn't be executed because of a permission problem.
We use ACLs in our clusters, and after the migration we found that most of the directories and files which had no ACLs set before now had ACL properties. That is the reason why users could not execute their jobs. So we had to change most file permissions to a+r and directory permissions to a+rx to make sure the jobs could be executed.
After investigating this problem for some days, I found there is a bug in FSEditLog.java. The ThreadLocal variable cache in FSEditLog doesn't set the proper value in the logMkdir and logOpenFile functions. Here is the code of logMkdir:
public void logMkDir(String path, INode newNode) {
  PermissionStatus permissions = newNode.getPermissionStatus();
  MkdirOp op = MkdirOp.getInstance(cache.get())
    .setInodeId(newNode.getId())
    .setPath(path)
    .setTimestamp(newNode.getModificationTime())
    .setPermissionStatus(permissions);
  AclFeature f = newNode.getAclFeature();
  if (f != null) {
    op.setAclEntries(AclStorage.readINodeLogicalAcl(newNode));  // only set when the inode has ACLs
  }
  logEdit(op);
}
For example, if we mkdir with ACLs through one handler (a thread, in effect), we set the ACL entries on the op taken from the cache. After that, if we mkdir without any ACL settings through the same handler, the ACL entries in the cached op are still the ones from the last call that set ACLs, and because the new node has no AclFeature, we never get a chance to clear them. The edit log is then wrong and records the wrong ACLs. After the Standby NameNode loads the edit logs from the JournalNodes and applies them to memory, then saves the namespace and transfers the wrong fsimage to the Active NameNode, all the fsimages become wrong. The only solution is to save the namespace from the Active NameNode; then you can get the right fsimage.
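The failure mode described above can be reduced to a small model. The sketch below is illustrative Python, not Hadoop code (all names are made up); it shows how a cached, reusable op object whose ACL field is never cleared leaks the previous caller's ACL entries into the next record:

```python
# Illustrative model of the stale-cache bug (not Hadoop code).

class CachedOp:
    def __init__(self):
        self.path = None
        self.acl_entries = None

    def reset(self):
        self.path = None
        # Bug: acl_entries is NOT cleared here.
        return self

_cached_op = CachedOp()  # stands in for the per-thread cache in FSEditLog

def log_mkdir(path, acls=None):
    op = _cached_op.reset()
    op.path = path
    if acls is not None:      # mirrors `if (f != null)` in logMkDir
        op.acl_entries = acls
    return (op.path, op.acl_entries)  # what gets written to the edit log

print(log_mkdir("/with-acls", acls=["user:alice:rwx"]))
# -> ('/with-acls', ['user:alice:rwx'])
print(log_mkdir("/plain"))
# -> ('/plain', ['user:alice:rwx'])  stale ACLs leak into the second record
```

Clearing acl_entries in reset(), or setting the field unconditionally on every call, removes the leak.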
Hey all,
Erm given the problem
I've tried to write the program source code for this one but I've been getting a lot of errors which I can't fix. If any of you can tell me what's wrong with my program and fix it for me I'd really appreciate it. My source code thus far is as follows:
#include <iostream.h>
#include <stdio.h>
#include <ctype.h>
using namespace std;

int main()
{
    int i, h[80] = {0}, words = 1;
    char s[15];
    cout << "Enter the String: ";
    gets(s);
    for (i = 0; s[i] != '\0'; i++)
    {
        if (s[i] == ' ')
            words++;
        h[tolower(s[i]) - 'a']++;
    }
    cout << "\n" << words << " words";
    for (i = 0; i < 26; i++)
        if (h[i] != 0)
            cout << "\n" << h[i] << " " << (char)(i + 97);
    system("pause");
    return 0;
}
/* limitations
   - will give wrong output for a statement with more than one space between words
   - wrong output for leading and trailing spaces
*/
I seriously cannot understand what's wrong with this code. If you guys can fix it it'd be really nice. Also, please please please give me comments along the way, otherwise I'll get lost halfway through. | https://www.daniweb.com/programming/software-development/threads/72429/i-can-t-fix-identify-the-errors-with-this-program | CC-MAIN-2017-04 | refinedweb | 184 | 82.24 |
x += x++;
Other:
public class Order : Record<string, int> {
    public Order(string item, int qty) : base(item, qty) { }
    public string Item { get { return state.Item1; } }
    public int Quantity { get { return state.Item2; } }
}

public class Customer : Record<string, IEnumerable<Order>> {
    public Customer(string name, IEnumerable<Order> orders) : base(name, orders) { }
    public string Name { get { return state.Item1; } }
    public IEnumerable<Order> Orders { get { return state.Item2; } }
}
By the way, there's another reason your hash-code algorithm is unwise: Assuming the component hash-codes are uniformly distributed over the integers from 0 to 2^32-1 (thus interpreting the signed integers as unsigned integers here, for simplicity), yours will not be: a binary OR never lowers the number of "on" bits; each bit of your result hash code has only a 25% chance to be off - which lowers the entropy of the overall hash code to around 26 bits, as opposed to 32 bits.
Finally, tuples of tuples will hash particularly badly; and so will tuples with higher than 2-arity.
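The bias described in this comment is easy to check numerically. The following Python sketch (illustrative, not from the original post) combines pairs of uniformly random 32-bit values with OR and measures the fraction of 1-bits, which comes out near 0.75 instead of the 0.5 you would want from a well-mixed hash:

```python
# Measure the 1-bit bias of OR-combined "hash codes".
import random

random.seed(0)

N = 100_000
ones = 0
for _ in range(N):
    h = random.getrandbits(32) | random.getrandbits(32)  # OR-combined hash
    ones += bin(h).count("1")

fraction = ones / (32 * N)
print(fraction)  # close to 0.75: a result bit is 0 only when both inputs are 0
```

Replacing the OR with XOR keeps each result bit at a 50% chance of being set, which is why XOR (or a multiply-and-add scheme) is the usual way to combine component hashes.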
Eamon: you are right. It is bad. As I said before, I was lazy.
Wow, a little more and we'll finally have the easy-to-use record construct of Pascal in the 1980s. Just declare a Record type and you're set. I have (seriously) often wished for this in C# instead of cumbersome, separate struct or class def's.
This is a really excellent introduction. I think I am beginning to understand what functional code is after reading your posts and code. But I have some questions about this:
For GetHashCode(), why don't you try "xor" rather than "or"? I think "xor" will do a better job than "or".
For the code overall, I found TypeUnion<T> is really useless, since I'll never have the chance to use a member union!
Other posts in the series:
- Part I - Background
- Part II - Tuples
- Part III - Records
- Part IV - Type Unions
Introduction to spring boot ide
A Spring Boot IDE enables us to create and work with standalone Spring applications using the default configuration. Spring Initializr is a web application used to generate Spring Boot projects; to use it we need to specify the necessary configuration for our project. The IDE provides a project wizard that is integrated with the Spring Initializr API. There are multiple IDE tools available, such as Spring Tool Suite, IntelliJ, Eclipse, and NetBeans.
What is spring boot ide?
- It is an open-source project which provides a set of plugins for the Eclipse IDE.
- These plugins make the Eclipse IDE aware of the framework, so the IDE understands our Spring Boot project from the perspective of the framework and provides additional features for it.
- The IDE helps us implement projects more easily and conveniently.
- The IDE provides various wizards for creating a project, and we can also start the project using Spring Boot.
- The IDE provides Spring Boot-specific refactoring support for the config files, as well as graphical visualization of our beans and their dependencies.
- Below are the IDEs available to develop Spring Boot applications.
- Intellij IDE
- Spring tool suite
- Eclipse
- Netbeans spring boot plugin
- IntelliJ IDEA is a very good development environment for Spring Boot. It is available in the Community Edition, which means we can easily implement our project using IntelliJ.
- IntelliJ is also available in the Ultimate Edition, which comes with additional features, but to use the Ultimate Edition we need to purchase it.
- IntelliJ IDEA 2018 comes with some new features; using this version we can connect directly to the Spring Initializr service and easily create the skeleton of our Spring Boot project.
- The Ultimate Edition of IntelliJ also lets us debug and test our application more easily.
- The Spring Tools can be used with various coding environments like Eclipse; we can also use them in Visual Studio Code and lightweight code editors such as Atom.
- NetBeans has a well-featured plugin for Spring Boot project development.
- NetBeans provides a list of features for implementing a Spring Boot project, including a new-project wizard for Maven projects.
- NetBeans also provides an enhanced properties-file editor and Spring Boot file templates for our project.
- Using NetBeans we can easily edit the pom.xml files of our Spring Boot project.
- To use Spring Boot support in NetBeans we first need to install the plugin; after installing it we can use it in our project.
Getting Started guide
- Using Spring Tool Suite, Eclipse IDE, or NetBeans IDE we can implement a Spring Boot project.
- We need the following software to implement the project using an IDE:
- JDK 8 or later
- Spring tool suite IDE, eclipse IDE or Netbeans IDE
- We can use the Spring Tool Suite IDE to develop Spring Boot applications. It provides a ready-to-use environment to debug, deploy and run applications.
- To use the Spring Tool Suite IDE we first need to install the Spring Tool Suite. After installing it we launch it; at launch time we create the workspace for our project.
- In the above example, we have created a workspace named Spring-Boot at the desktop location. After launching the Spring Tool Suite IDE it will look as follows.
Create a New Project
Below are the steps to create a Spring Boot project using the Spring Tool Suite.
- Create project template using spring initializer and give the following name to the project metadata.
In the step below we have provided the project group name as com.example, the artifact name as spring-boot-ide, the project name as spring-boot-ide, jar packaging, and Java version 11.
Group – com.example
Artifact name – spring-boot-ide
Name – spring-boot-ide
Description – Project of spring-boot-ide
Package name – com.example.spring-boot-ide
Packaging – Jar
Java – 11
Dependencies – spring web
- After generating the project, extract the files and open the project using the Spring Tool Suite –
In this step we extract the project template and open it in the Spring Tool Suite IDE.
- After opening the project using the Spring Tool Suite, check the project and its files –
Creation of a Spring Application
Below are the steps to create a new Spring application using the Spring Tool Suite IDE.
- Add the dependency
Code –
<dependency>                                            <!-- start of dependency tag -->
    <groupId>org.springframework.boot</groupId>         <!-- start and end of groupId tag -->
    <artifactId>spring-boot-starter-web</artifactId>    <!-- start and end of artifactId tag -->
</dependency>                                           <!-- end of dependency tag -->
- Create source file for application
Code –
public class spring_ide {
private String short_message;
public void setMessage(String short_message){
this.short_message = short_message;
}
public void getMessage (){
System.out.println ("Spring boot ide : " + short_message);
}
}
- Create main java file for ide application –
Code –
public class spring_main {
public static void main /* main method for spring boot ide application */ (String[] args) {
System.out.println("Project of spring boot IDE" );
}
}
- Run the application –
Conclusion – spring boot ide
It will enable us to create working with the standalone spring application. ide is nothing but an open-source project which provided a set of plugins for ide. Intellij IDE is a good development environment for spring boot, Intellij ide is available in the community and ultimate edition.
Recommended Articles
This is a guide to spring boot ide. Here we discuss the step to create a project using the spring tool suite along with the codes. You may also have a look at the following articles to learn more – | https://www.educba.com/spring-boot-ide/?source=leftnav | CC-MAIN-2022-40 | refinedweb | 984 | 62.07 |
Interviewees often come across questions about the difference between for loop and while loop. In most cases, these differences are at the practical level as they are both guided by the same conditional go-to statement. This means that they check for a test condition before entering the loop’s code block.
In this article, we will learn about the differences between for loop and while loop.
Let us start with a brief introduction to loops in programming.
What are the loops?
In Java and C++ programming languages, there are different statements for iteration. These are the while loop, for loop, and the do-while loop. These loops allow any given set of instructions to be executed repeatedly until a specific condition is true. The loop terminates as soon as the state is false.
For loop vs. While loop
The following comparison chart depicts the difference between for and while loops:
What is 'for' loop?
In Java and other related object-oriented programming languages, there are two different kinds of for loop – traditional and ‘for each.’ The syntax of the commonly found for loop statement is as follows:
for(initialization; condition; iteration){ //body of for loop }
The loop controlling variable of the for loop is initialized only once through a single execution. This takes place during the first iteration of the for a loop.
The condition about the for loop is executed on each instance of the loop being iterated.
Whenever the loop command is executed, the initialization condition is completed before the condition is checked. In case the state is real, the commands given in the body of the for loop are executed.
After that, the iteration statement comes into force and is executed. Once this stage is over, the iteration condition is checked all over again with the intent to decipher if the for loop has to iterate further or terminate.
The iteration and initialization statements in Java may comprise of multiple statements separated by commas.
Example of for loop:
#include <stdio.h> int main () { int x; for( x = 1; x < 10; x = x + 1 ){ printf("Value: %dn", x); } return 0; } Out Put: Value: 1 Value: 2 Value: 3 Value: 4 Value: 5 Value: 6 Value: 7 Value: 8 Value: 9
What is 'while' loop?
The while loop is the most fundamental of all loops present in Java and C++, and its working is the same in both the languages. The while loop is declared as follows:
while (condition) { statements; //body of loop }
The while loop assesses the condition initially; post that, it executes the statements until the conditions specified in the while loop returns a ‘false.’
The conditions related to the while loop may be in the form of any boolean expression. The condition is true if a non-zero value is returned and becomes false in case zero is returned.
The loop repeats itself in case the condition becomes true. In case the condition becomes false, then the next line of the code, which is immediately after the iteration command, gets executed.
In the while loop, the body loop or statements may either be in the form of a block of statements, a clear statement, or just a single statement.
Example of While Loop:
#include <stdio.h> int main () { int x = 1; /* Execute While Loop */ while( x < 10 ) { printf("Value: %dn", x); x++; } return 0; } Value: 1 Value: 2 Value: 3 Value: 4 Value: 5 Value: 6 Value: 7 Value: 8 Value: 9
A key difference between while and for loop
- When it comes to the definition of the conditions present in the iteration statements, they are usually predefined in case of for loop in C. On the other hand. The conditions are open-ended in the while loop in C.
- In the case of the for loop, the condition checking, initialization, and all increments or decrements of iteration variables are done explicitly. These acts are carried out within the syntax of the loop itself. On the other hand, the conditions can be checked and initialized only within the syntax of the loop in case of the while loop. This is an essential difference between these two loops in C.
- In case the condition command is absent in the for loop, the loop will end up iterating countless times. On the other hand, any failure to add the condition command in the while loop would result in compilation errors.
Conclusion
The different points of difference between the for loop and while loop make it easy for programmers to consider their correct usage in Java and C++. The for loop is best used for loops wherein initialization and increment form single statements and tend to be logically related. Use this when you know the number of times the loop will run.
On the contrary, use while loop when you don’t know how many times the loop will execute. It is also a good choice when obtaining user input and reading the contents of a file into a variable. | https://www.stechies.com/difference-between-while-loop/ | CC-MAIN-2022-21 | refinedweb | 832 | 61.46 |
Created on 2009-04-18 18:37 by bquinlan, last changed 2011-10-21 20:34 by bquinlan. This issue is now closed.
...in seconds-based library functions (e.g. time.sleep) and calculations
(e.g. distance = velocity * ?).
Please include a proper description of your problem, and a patch
description when you post a patch.
I did add a patch description: "Adds a datetime.total_seconds attribute"
- is that unclear?
The idea is that you should be able to extract the total number of
seconds in the duration i.e.
>>> dt = datetime.timedelta(seconds=1234567.89)
>>> dt.total_seconds
1234567.89
I saw the patch description as well, but usually you put that
description, and perhaps a motivation as well, in the comment. That way
it's easier for people to directly see what an issue is about.
OK, a bit on motivation:
1. datetime.timedelta instances are a convenient way of representing
durations
2. datetime.timedelta instances cannot be conveniently used in many
calculations e.g. calculating distance based on velocity and time
3. datetime.timedelta instances cannot be conveniently used in many
library functions e.g. time.sleep(), urllib2.urlopen(timeout=)
I propose to fix that by adding a timedelta.total_seconds attribute that
equals:
timedelta.days * 3600 * 24 + timedelta.seconds + timedelta.microseconds
/ 100000.0
The addition looks quite legitimate to me.
The only thing is that it may be better as a method (total_seconds())
rather than an attribute, given the other APIs in the datetime module.
Also, the patch lacks some unit tests.
Sorry for the last comment about unit tests, they are here actually :-)
Attached is a patch that implements .total_seconds as an instance method
Given the timing, I fear this will have to wait for 3.2.
The patch is committed in r76529 (trunk) and r76530 (py3k). Thank you!
A late note: this would be redundant if the oft-requested division of
timedeltas were implemented: t.total_seconds could then be spelt
t/timedelta(seconds=1)
with the advantage that there would then be a natural way to spell
t.total_days or t.total_hours as well.
That should be t.total_seconds(), of course.
> A late note: this would be redundant if the oft-requested division of
> timedeltas were implemented: t.total_seconds could then be spelt
>
> t/timedelta(seconds=1)
It would be less obvious, though.
Sorry for commenting on a closed issue but I just bumped into a problem requiring a total_minute() method I ended up implementing by doing some raw math by hand.
Would it be a reasonable addition?
If so I'll open a separate issue.
What about
def total_minutes(td):
return td / datetime.timedelta(minutes=1)
?
You'll probably get more traction if you file a new bug. | https://bugs.python.org/issue5788 | CC-MAIN-2018-05 | refinedweb | 452 | 68.97 |
Writing Unit tests for C/C++ with the Microsoft Unit Testing Framework for C++
Note
This article applies to Visual Studio 2015. If you're looking for Visual Studio 2017 documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2017. Download it here.
Use the Native Test Project template to create a separate Visual Studio project for your tests.
The project contains some sample test code.
Make the DLL accessible to the test project:
#includea
.hfile that contains declarations of the DLL’s externally-accessible functions.
The
.hfilecontainsmacro,.
Walkthrough: Developing an unmanaged DLL with Test Explorer.
Verify that the tests run in Test Explorer:
Insert some test code:
TEST_METHOD(TestMethod1) { Assert::AreEqual(1,1); }
Notice that the
Assertclass:
// Find the square root of a number. double CRootFinder::SquareRoot(double v) { return 0.0; }:
#include "..\RootFinder\RootFinder.h"
Add a basic test that uses the exported function:
TEST_METHOD(BasicTest) { CRootFinder rooter; Assert::AreEqual( // Expected value: 0.0, // Actual value: rooter.SquareRoot(0.0), // Tolerance: 0.01, // Message: L"Basic test failed", // Line number - used if there is no PDB file: LINE_INFO()); }
Build the solution.
The new test appears in Test Explorer._METHOD(RangeTest) { CRootFinder rooter; for (double v = 1e-6; v < 1e6; v = v * 3.2) { double actual = rooter.SquareRoot(v*v); Assert::AreEqual(v, actual, v/1000); } }.
Build the solution, and then in Test Explorer, choose Run All.
The new test fails.
Tip
Verify that each test fails immediately after you have written it. This helps you avoid the easy mistake of writing a test that never fails.
Enhance the code under test so that the new test passes:
#include <math.h> ... double CRootFinder::SquareRoot(double v) { double result = v; double diff = v; while (diff > result/1000) { double oldResult = result; result = result - (result*result - v)/(2*result); diff = abs (oldResult - result); } return result; }
Build the solution and then in Test Explorer, choose Run All.
Both tests pass.
Tip
Develop code by adding tests one at a time. Make sure that all the tests pass after each iteration.:
#include <stdexcept> ... double CRootFinder::SquareRoot(double v) { // Validate parameter: if (v < 0.0) { throw std::out_of_range("Can't do square roots of negatives"); }
All tests now pass.
Tip
If individual tests have no dependencies that prevent them from being run in any order, turn on parallel test execution with the
toggle button on the toolbar. This can noticeably reduce the time taken to run all the tests.
Refactor the code without changing tests
Simplify the central calculation in the SquareRoot function:
// old code: // result = result - (result*result - v)/(2*result); // new code: result = (result + v/result)/2.0;
Build the solution and choose Run All, to make sure that you have not introduced an error.
Tip
A good set of unit tests gives confidence that you have not introduced bugs when you change the code.
Keep refactoring separate from other changes.
Next steps.
See Also
Adding unit tests to existing C++ applications
Using Microsoft.VisualStudio.TestTools.CppUnitTestFramework
An Overview of Managed/Unmanaged Code Interoperability
Debugging Native Code
Walkthrough: Creating and Using a Dynamic Link Library (C++)
Importing and Exporting | https://docs.microsoft.com/en-us/visualstudio/test/writing-unit-tests-for-c-cpp-with-the-microsoft-unit-testing-framework-for-cpp?view=vs-2015 | CC-MAIN-2018-43 | refinedweb | 522 | 58.99 |
Let me know when you guys have finalized any changes to aclinux.h, and I will update this file in the base ACPICA code.Bob>-----Original Message----->From: Justin P. Mattock [mailto:justinmattock@gmail.com]>Sent: Thursday, December 10, 2009 2:46 PM>To: Alexey Starikovskiy>Cc: Pavel Machek; Xiaotian Feng; lenb@kernel.org; Lin, Ming M; Moore,>Robert; linux-acpi@vger.kernel.org; linux-kernel@vger.kernel.org>Subject: Re: [PATCH] ACPICA: don't cond_resched() when irq_disabled or>in_atomic> update>the kernel, and then changed>aclinux.h to the above post.>>I'm am not seeing this warning message>upon wake-up.>but with the acpi merge stuff with>acpi_walk_namespace seems to break nvidia>(nvidia's problem now)>>there is also some thing where the machine>takes a good 30 secs or so to wake up>(not sure if this is from the updated patch)>in dmesg I see:>>platform microcode: firmware requesting intel-ucode/06-17-0a>firmware microcode: parent mocrocode should not be sleeping.>>I'm thinking I need something in /lib/firmare>>Justin P. Mattock--To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2009/12/10/394 | CC-MAIN-2016-50 | refinedweb | 207 | 50.12 |
This site uses strictly necessary cookies. More Information
I'm very new to coding in general and I am using C# I'm trying to make it so when you press W the sprite (which is in a 2D space) moves up I'm getting an error CS1525 unexpected symbol "transform"
using UnityEngine;
using System.Collections;
public class PlayerHandler : MonoBehaviour
{
float speed = 1.0f;
// Initialization
void Start ()
{
}
// Update
void Update ()
{
if (Input.getkey(KeyCode.W)
transform(Vector3.forward * speed * Time.deltaTime);
}
}
you need to get the position (transform.position), modify it and then set it to the new value.
something like:
if (Input.Get$$anonymous$$ey($$anonymous$$eyCode.W)
{
var newPosition = transform.position;
newPosition += (Vector3.forward * speed * Time.deltaTime);
transform.position = newPosition;
}
pay special attention to the function names too - you most likely got an error with your code.
the unity docs can usually help solve these issues...waiting for some kind soul to help you out here is going to slow you down ;)
Answer by maccabbe
·
Dec 26, 2015 at 05:37 PM
If you are just starting out with Unity we suggest checking out the learn section, as there exists several tutorials, documentation and live training sessions. Specifically, if you need to learn about programming then please go through the scripting tutorials.
In this case the error you got is because of the line
if (Input.getkey(KeyCode.W)
Where the second parenthesis are not closed. Your computer then continues to read the next line, expecting it to finish the true/false condition and realizes that the symbol transform is not being used as part of the conditional hence the error unexpected symbol "transform".
Then the next line also has an error as you use an object as a function when as explained above you should be adjusting the position of the transform. An alternative to setting position would be to use Transform.Translate.
So your code should be
if (Input.getkey(KeyCode.W))
transform.Translate(Vector3.forward * speed * rotate my player with the gamepad right stick ?
0
Answers
how to use right stc on gamepad like Input.mousePosition
0
Answers
GameObject detect LineRenderer colliding?
2
Answers
delay "after mouseclick"
0
Answers
[C#] How to make this code toggleable
3
Answers
EnterpriseSocial Q&A | https://answers.unity.com/questions/1117269/error-cs1525-unexpected-symbol-how-do-i-fix-this-a.html | CC-MAIN-2021-49 | refinedweb | 376 | 57.67 |
Those of us who have played Counter-Strike, or have watched the Pwned.nl movie understand how annoying [myg0t] members can be. They hack, they ruin your fun, and are just plain aggrivating. For this reason, I've being scripting an admin mod plugin for a server I play at. Because this is the only forum I know of that would probably be able to help without flaming me down. Those of you who have programmed in C will find Small rather familiar in terms of syntax.
Here's the code so far:
If someone could read through it, and point at any mistakes(minus the pseudocode at the end), I would be very thankful.If someone could read through it, and point at any mistakes(minus the pseudocode at the end), I would be very thankful.Code:
/******************************
This plugin is made to kick and give 1 hour minute bans to people wearing the myg0t tag. I am not responsible for anything that may happen by using this plugin.
Please distrubute plugin source code with the .amx file, so server owners may
customize it as they wish. This plugin is written for the =D|2= Digital Riot Clan Server.
Special thanks to Infected2506 for helping me write this
*******************************/
/*The #include's for the plugin*/
#include <core>
#include <console>
#include <string>
#include <plugin>
#include <admin>
#include <adminlib>
/*some constants and crap*/
new g_Version[]="1.0" /* the plugin version number... obviously */
new str_[]("[myg0t]")
/*some events*/
public plugin_init()
{
plugin_registerinfo("Plugin to scan playernames for the infamous [myg0t] tag, and ban
them for a full hour");
return PLUGIN_CONTINUE;
}
/* Handling the strings for the names(thanks Infected for your help */
public plugin_connect(HLUserName, HLIP, UserIndex) {
new strName[MAX_NAME_LENGTH];
convert_string(HLUserName, strName, MAX_NAME_LENGTH);
}
/* the following's some pseudo code that is soon to be converted to Small. If anyone has
a knowledgeable understanding here, that would be great */
strsr "HLUserName"
if(mygot == HLUserName)
then(admin_ban HLUserName 60) | http://www.antionline.com/printthread.php?t=258563&pp=10&page=1 | CC-MAIN-2017-09 | refinedweb | 322 | 61.97 |
Aside from the languages that we offer client libraries for, the Google My Business API can also be used in many other languages. This page provides information and an example of how to use the API in Python.
Before you work with the Google My Business API, you need to register your application and obtain OAuth 2.0 credentials. See Basic setup for details on how to work with the Google My Business API.
System requirements
The following are the system requirements needed to work with the Google My Business API in Python. For more details, see Basic setup.
- Operating systems:
- Linus
- Mac OS X
- Windows
- Python 2.7, 3.4 or higher
Install the required libraries
The easiest way to use the Google My Business API in Python is with the Google APIs Python library, and the OAuth2 Python library To install these libraries, run the following:
$ pip install --upgrade google-api-python-client $ pip install oauth2client
Get your client secrets
An OAuth 2.0 client ID must be created before you proceed with the following steps:
- Open the Credentials page in the API Console.
- Click the name of a client ID to view that ID.
- Click Download JSON.
- Create a new file called
client_secrets.jsonin the same directory as your Python file and add to it the contents of the JSON file that you downloaded in step 3.
Download the discovery document
Go to the Samples page, right
click Download discovery document, and select Save Link As. Then, save
the file as
myBusiness_discovery.json in the same directory as your
Python file.
Call the API
Use the built in
sample_tools utility of the Google APIs Python client
to build an API service from the discovery document that you downloaded,
and authenticate the user with OAuth. Now the user has access to their
My Business account.
The following example returns a list of accounts accessible to the user and the locations contained in the first account:
from googleapiclient import sample_tools from googleapiclient.http import build_http discovery_doc = "gmb_discovery.json" def main(argv): # Use the discovery doc to build a service that we can use to make # MyBusiness API calls, and authenticate the user so we can access their # account service, flags = sample_tools.init(argv, "mybusiness", "v4", __doc__, __file__, scope="", discovery_filename=discovery_doc) # Get the list of accounts the authenticated user has access to output = service.accounts().list().execute() print("List of Accounts:\n") print(json.dumps(output, indent=2) + "\n") firstAccount = output["accounts"][0]["name"] # Get the list of locations for the first account in the list print("List of Locations for Account " + firstAccount) locationsList = service.accounts().locations().list(parent=firstAccount).execute() print(json.dumps(locationsList, indent=2)) if __name__ == "__main__": main(sys.argv) | https://developers.google.com/my-business/content/python?hl=ru | CC-MAIN-2019-35 | refinedweb | 453 | 54.73 |
Coding: Passing booleans into methods
In a post I wrote a couple of days ago about understanding the context of a piece of code before criticising it, one of the examples that I used of a time when it seems fine to break a rule was passing a boolean into a method to determine whether or not to show an editable version of a control on the page.
Chatting with Nick about this yesterday it became clear to me that I’ve missed one important reason why you’d not want to pass a boolean into a method.
The first reason I hate passing booleans around is that it usually means we are controlling the path code should take inside a method rather than just calling the appropriate method ourself.
The following type code is not that unusual to see:
public void SomeMethod(bool someBoolean) { if(someBoolean) { // doThis } else { // doThat } }
The client of this method knows what it wants to happen so why not just have two methods, like so:
public void DoThis() { }
public void DoThat() { }
In the specific case I was referring to in the post we had a HtmlHelper (ASP.NET MVC) method called DropDownOrReadOnly which either rendered a drop down with options for a user to select or just displayed the option they had previously selected if they were an existing user.
The boolean in this case was a property on the model which indicated whether or not the user had the ability to change these options or not.
It was therefore a case of doing an if statement in the aspx page or inside the helper. Initially we went for putting it in the aspx page but they started to look so messy we moved it into the helper.
Now what I totally didn’t see in this example until Nick pointed it out is that where we are passing in a boolean to this method, what we really want is an object which defines a strategy for how we render the control - we can delegate the decision for whether to display a drop down or read only version of the control.
Instead of passing in a boolean we could end up with something like this:
public abstract class EditMode { public static readonly EditMode Editable = new Editable(); public static readonly EditMode ReadOnly = new ReadOnly(); public abstract void RenderFieldWith(HtmlHelper htmlHelper); }
public class Editable : EditMode { public override void RenderFieldWith(HtmlHelper htmlHelper) { htmlHelper.Label(...); } }
public class ReadOnly : EditMode { public override void RenderFieldWith(HtmlHelper htmlHelper) { htmlHelper.DropDownList(...); } }
We’ve added the 'Label' method to HtmlHelper as an extension method for the sake of the above example. I’m sure the API for EditMode can be done better but that’s the basic idea.
We could then use it like this:
public static class HtmlHelperExtensions { public static void DropDownOrReadOnly(this HtmlHelper htmlHelper, EditMode editMode) { editMode.Render(htmlHelper); } }
Again I’ve simplified the API to show the idea of delegating responsibility for how we render the control to the EditMode. Nick has written more about this idea in a post about refactoring to the law of demeter.
The final reason that passing booleans around is not a great idea is that when you read the code it’s not immediately obvious what’s going on - the API is not expressible at all.
If we compare
HtmlHelper.DropDownOrReadOnly(true)
with
HtmlHelper.DropDownOrReadOnly(EditMode.ReadOnly)
I think it’s clear that with the second approach it’s much easier for someone coming into the code to understand what is going. | https://www.markhneedham.com/blog/2009/04/08/coding-passing-booleans-into-methods/ | CC-MAIN-2021-21 | refinedweb | 587 | 51.52 |
- Using Create New File, create a new finxter.sol file
Well done. Your file – empty so far – is ready for more actions!
In this article, we’ll add (i) license identifier, (ii) pragma, (iii) another file via import, and (iv) add some comments.
The actual smart contract code is out of scope here but check out other Finxter tutorials – there is plenty of that.
License Identifier
I know you are eager to get to the meat, but before you jump there, bear with me. The so-called SPDX license identifier is the first element you need to jot down.
What the heck is that? SPDX, or the Software Package Data Exchange, is an international, open standard for communicating software information including licenses or copyrights.
Being a standard means that many companies and organizations have agreed to do some things in a certain way. And Solidity has also adopted that standard.
Why bother, you ask?
Well, your code will be transparent in a blockchain and that transparency triggers copyright issues. The SPDX identifier hence allows you to specify what you allow others to do with your code. And vice versa, you learn what you can do with other people’s code too.
An example comment line with an identifier would be:
// SPDX-License-Identifier: MIT
What this means is:
👩⚖️ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, [etc etc…]
The SPDX license list has over 450 license identifiers!
But don’t worry too much now, we are here to learn Solidity, not the legal twists. So for now, take my word and use MIT as your default one. Or if you do not want to specify a license or if the source code is not open-source, please use the special value UNLICENSED.
🛑 Caution here folks – UNLICENSE (without ‘d’ at the end) is a completely different license! Open door type of one. It offers free and unencumbered software released into the public domain.
By the way, a good rule of thumb is to use the OSI-approved ones when browsing. “Only” about 120+ options for consideration.
Does that identifier do anything technically to your code? No, it won’t break how it works. After all, it is a comment.
But from Solidity >=0.6.8 (so 0.6.8 and any higher), that comment must be part of your code. Otherwise, expect a warning from your compiler.
The compiler checks that an identifier exists but does not know whether it is a valid one (i.e., whether it appears on the SPDX list). Starting in Solidity 0.8.8, it checks for multiple SPDX license identifiers next to each other and validates them. It still allows you to play with it a bit, though 🙂
In 0.8.7 you could easily get away with some crazy identifier.
// SPDX-License-Identifier: cheating_on_you
pragma solidity 0.8.7;
From 0.8.8 onwards it actually starts to pay attention and throws errors. Note the red error icon on the left in line 1.
Same happens when you add multiple licenses inappropriately.
This one below goes unnoticed though – compiler says OK.
// SPDX-License-Identifier: cheatingonyou
pragma solidity 0.8.15;
Since SPDX info is a comment, it is recognized by the compiler anywhere in the file at the file level, but for clarity put it at the very top of the file.
Lastly, the identifier will become part of your metadata once it’s compiled. And that is machine-readable so others will find it easy to query.
Take action.
1. Add the license-related comment to your code (
finxter.sol)
// SPDX-License-Identifier: MIT
Pragmas
The second important keyword is
pragma. It comes in several shapes and forms – as a version pragma, ABI Coder pragma or experimental pragma.
⭐ Note: A pragma directive is always local to a source file. So you must add it to all your files if you want to enable it in your whole project.
Remember also that if you import another file, the pragma from that file does not automatically apply to the importing file.
1. Version pragma defines for which versions of Solidity the code is written.
In the example:
pragma solidity >= 0.8.7;
we can expect that no compiler on version 0.8.7 or higher will throw any pragma errors.
Other examples – for illustration and education – how to define pragma versions:
pragma solidity 0.6.8;            // single instance
pragma solidity >= 0.6.8;         // 0.6.8 and any above
pragma solidity ^0.6.8;           // 0.6.8 and any above but less than 0.7.0
pragma solidity 0.6.8 ^0.7.5;     // single instance AND any above 0.7.5 but less than 0.8 -> this AND condition cannot be met here
pragma solidity 0.8.1 || ^0.8.10; // one instance OR any from 0.8.10 but less than 0.9.0
For practical reasons, the version pragma is the only type you should really care about when you start your Solidity journey.
2. ABI Coder pragma
As per Solidity documentation, you have two options to choose from:
pragma abicoder v1;
pragma abicoder v2;
However, as of Solidity 0.8.0 the ABI coder v2 is activated by default, so for a rookie like you and me there is nothing to worry about anymore.
With version 0.8.0+ you can already enjoy the benefits of working more effectively with arrays and structs. These are just some of the affected data types, but explaining them goes way beyond this tutorial.
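As a hedged illustration (the contract and its names are invented for this sketch): returning a struct from a public function is one of the things that used to require the experimental ABI encoder v2 and now works out of the box on 0.8.0+:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// No "pragma abicoder v2;" needed; it is the default from 0.8.0 on.
contract StructReturner {
    struct Person {
        string name;
        uint age;
    }

    Person[] private people;

    function add(string memory name, uint age) public {
        people.push(Person(name, age));
    }

    // Returning a struct over the ABI used to need ABIEncoderV2.
    function get(uint index) public view returns (Person memory) {
        return people[index];
    }
}
```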
And no need to call this pragma additionally, as you had to do in the past, e.g.:
// SPDX-License-Identifier: GPL-3.0
pragma solidity ^0.4.16;
pragma experimental ABIEncoderV2;
3. Experimental pragma
Getting here is a risky business so better do not try it yourself at home 🙂
Solidity might be offering features that are – as labeled – experimental.
So if you have the technical appetite and skills and want to play, or to showcase to your potential clients, or whatever the purpose, go ahead. But if, again, you are still early in the game, just park it for now.
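For curiosity only, this is what experimental pragmas look like; ABIEncoderV2 and SMTChecker are two that have existed in past compiler versions:

```solidity
pragma experimental ABIEncoderV2; // now the default behavior, kept only for legacy code
pragma experimental SMTChecker;   // switches on the built-in formal verification module
```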
Take Action.
1. Add a version pragma directive to your code
pragma solidity ^0.8.15;
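If you followed both Take Action steps so far, your finxter.sol should look roughly like this:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.15;
```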
Importing other Source Files
You can import files in Solidity. That sounds obvious but let’s say this upfront.
Importing other files is important since you can break down your code into multiple files, which makes it more modular, easier to manage and control, and – best of all – re-usable.
The simplest way to import is using this line of code:
import "filename";
In our quick Remix exercise, imagine we have another file called “
helloWorld.sol” located in the same directory. In order to import it to our
finxter.sol file, one would use:
import "./helloWorld.sol";
Note: a Python-style import "helloWorld.sol" (without the ./ relative-path prefix) would not work here.
For education purposes and in very simple implementations, this is the shortest way to import. Its disadvantage is that it pollutes the namespace by importing all global symbols from the imported file into the current global scope.
That approach also carries the risk of pulling in symbols that were themselves imported into the file we are importing. That file can contain symbols imported from yet another file, and so on. Such chained importing may lead to confusion about where the symbols come from and where they are actually defined.
Solidity recommends using a variant, which may look more complex at first. But it only adds one new global symbol to the namespace, here
symbolName, whose members are symbols from the imported file.
Makes sense?
import * as symbolName from "filename";
The best approach however would be to import relevant symbols explicitly.
So for instance, if the imported file “
helloWorld.sol” would have a contract named “
sayHello”, one could use only that. Rule of thumb here: import only the things you will use.
import {something} from "filename";
Take action:
1. Add a new file named “
helloWorld.sol” that contains this code
// SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract sayHello { // empty contract }
2. In the “
finxter.sol” file, add import
import {sayHello} from "./helloWorld.sol";
Commenting the code is possible in the two following ways:
1. Regular comments
1.1 Single-line comment, e.g.
// this is a single-line regular comment
1.2 Multi-line comment, e.g.
/* This comment spans many lines */
2. NatSpec comments
NatSpec stands for Ethereum Natural Language Specification. It is a special form of comments to provide rich documentation for functions, return variables, and more.
The recommendation is that Solidity contracts are fully annotated using NatSpec for all public interfaces (everything in the ABI).
Use NatSpecs comments directly above function declarations or statements.e.g.
2.1 Single-line
/// single-line NatSpec comment
2.2 Multi-line
/** multi-line NatSpec comment */
CODE example
// SPDX-License-Identifier: GPL-3.0 pragma solidity >=0.4.16 <0.9.0; /// @author The Solidity Team /// @title A simple storage example contract SimpleStorage { uint storedData; /// Store `x`. /// @param x the new value to store /// @dev stores the number in the state variable `storedData` function set(uint x) public { storedData = x; } /** Return the stored value. @dev retrieves the value of the state variable `storedData` @return the stored value */ function get() public view returns (uint) { return storedData; } }
Take action:
1. Add a regular multi-line comment to your
finxter.sol file
/* This tutorial comes from finxter.com */
Reference: This article is based on some contents from the documentation. Check out this awesome resource too!. | https://blog.finxter.com/layout-of-a-solidity-source-file/ | CC-MAIN-2022-33 | refinedweb | 1,592 | 59.5 |
Octber 24th, 2021
Written by Juliette Woodrow, Brahm Capoor, Nick Parlante, Anna Mistele, and John Dalloul
This week in section you will gain practice with string parsing, building lists, drawing, and the main function. These problems are meant to prepare you for homework 5. Solutions will be posted at the end of the week. There are more problems on this than we expect you to get through in section. Feel free to use the other ones as practice! If you have any questions, post on Ed or email Juliette!
s[end]
We have gotten a lot of questions about why s[len(s)] causes an eror but using len(s) as the right index in a slice does not cause an error. For eaxmple:
>>>>> at = s.find('@') >>> end = len(s) >>> s[end] Traceback (most recent call last): File "stdin", line 1, in module IndexError: string index out of range >>> >>> s[at + 1:end] 'python'
What is going on here? There are two reasons why this happens.
Reason 1: UBNI
Reason 2: Slice Garbage
>>>>> len(s) 6 >>> s[2:5] 'tho' >>> s[2:6] 'thon' >>> s[2:46789] 'thon'
Implement a function, find_time(str) that takes in a string and, if it contains one, returns a time written in the string. Times will be of the format "XX:XX" and you don't need to include any AM or PM designations. For example, given the input string "Let's go to the movies at 09:30 tomorrow" your function should return "09:30". If there are no times in the string, return the empty string.
Implement a function, parse_out_hashtag(s), which takes in a string representing a single tweet and returns the hashtag within the tweet. For this problem, each tweet will have only one hashtag. A Hashtag can be defined as a string of 1 or more alphanumeric characters immadiately following a "#" character. A single hashtag ends at the first non-alphanumeric character following the '#'. For example, parse_out_hashtag('I am going to #wearmymask everywhere') would return 'wearmymask' and parse_out_hashtag('what is #ResX?') would return 'ResX'.
Here are some drawings for looping over the word following the '#' character (also known as an octothorpe).
Before:
After:
Implement a function, find_price(s, currency) that takes in a string and a currency symbol and returns an integer price if that is mentioned in the string s. We want to look through a line and find the place where a price is mentioned. You can assume that the currency symbol specified will only show up once in the line provided. To find the price, locate the currency symbol and then find all the digits AFTER the symbol (not all currencies have the symbol come first--think about how we could adjust our code if the symbol came last, instead). Stop reading once the last digit has been read. Return the price as an integer (without the currency symbol).
For example, find_price("Une cramique au chocolat coûte €3", "€") would return 3.
Implement a function, exclaim_word(s), which takes in a string and returns the exclamatory word in that string. Consider an exclamatory word the "word" substring of one or more alphabetic chars which are immediately to the left of the '!' in s. For example: exclaim_word('x123hello!cs106a') would return 'hello' and exclaim_word(32happy!day') would return 'happy'. If there are no exclamatory words, return the empty string.
Implement a function, parse_phone_number(s), which takes in a string that has a ten digit phone number somewhere in it and returns a list of the number in two parts: the area code and the rest of the number. The area code is the part of the phone number up until the first '-' character. The string can have text before or after the phone number. You may assume the following: that the only digits in the string will be part of the phone number. If there are any digits in the string, they will make up a complete phone number. If there is no phone number in the string, return an empty list. There may be '-' outside of the phone number. For example, parse_phone_number('Zoom is too much-call me instead: 212-225-9876') would return ['212', '2259876'], parse_phone_number('so call-me beep-me if you wanna reach me-at 650-555-5555 ') would return ['650', '5555555'], and parse_phone_number('The weekend is not long enough') would return [].
You have been hired by Stanford to create a digital version of their new flag that they are desiging to welcome all students back to campus in the fall. They have given you the following description of the flag:
To accomplish this task, decompose into a function called draw_stanford_flag(canvas, left, top, width, height, num_stripes) where left should be the starting x value of the top left corner of this flag on the canvas and top should be the starting y value of the top left corner of this flag on the canvas.
You should also decompose into a function called draw_spider_patch(canvas, left, top, width, height, n) where left should be the starting x value of the top left corner of this spider patch on the canvas and top should be the starting y value of the top left corner of this spider patch on the canvas. n is the number of lines to draw and is guaranteed to be 2 or more.
Once you have that working, write a function called draw_flag(canvas, width, height, num_stripes, n) which draws the three flags at their locations specified above. Below is an example of what one call to draw_flag(canvas, 600, 600, 9, 4) should look like. You can ignore the black outline on parts of the image.
Your job is to write a program that emulates the 3 calculator functions shown below:
You may assume that you are provided with a main function that takes as a parameter the list of arguments typed in the console, as below:You may assume that you are provided with a main function that takes as a parameter the list of arguments typed in the console, as below:
$ python3 calculator.py -square 42 1764 # prints the square of the number passed in $ python3 calculator.py -exp 2 10 1024 # prints the first number raised to the power of the second number $ python3 calculator.py -add 1 2 3 4 5 15 # prints the sum of all the numbers typed in
Thus, your job is to decompose and implement the main function so that your program produces the sample output above.Thus, your job is to decompose and implement the main function so that your program produces the sample output above.
import sys def main(args): # your code here pass if __name__ == "__main__": main(sys.argv)
Notes | https://web.stanford.edu/class/cs106a/section-w2021/section5/section5.html | CC-MAIN-2021-49 | refinedweb | 1,121 | 68.81 |
How to Access SQLite Database in Android For Debugging?
A software library that provides a relational database management system is called SQLite. It allows the users to interact with RDBMS. The lite in SQLite denotes the lightweight when it comes to setup, database administration, and resources required. In SQLite, a database is stored in a single file — a property that differentiates it from other database engines. The SQLite code is open source and available publicly. It can be used for any purpose and free of cost. It’s the most widely deployed database around the globe and includes very high-profile projects such as Whatsapp. It is a very compact library. The library size depends on the target platform and compiler settings, but it can be less than 600KiB even after enabling all the features. SQLite’s response to memory allocation failures and disk I/O errors is very graceful. All the transactions in SQLite follow ACID properties, even if it is interrupted by system crashes or even power failures. All the transactions are verified by automated tests. SQLite has the following major features: self-contained, serverless, zero-configuration, transactional.
Advantages of SQLite
1. Self Contained
The self-contained property of SQLite means that it requires minimum support from the OS(operating system) or any external library. This enables SQLite to be usable in any type of environment, especially in embedded devices like Android, iPhone, game consoles, etc. ANSI-C is used to develop SQLite. Its source code is available as a big sqlite3.c and its header file sqlite3.h. If anyone wants to enable the use of SQLite in their app, they just need to drop these files in the project and compile them in the code.
2. Serverless
RDBMS such as MySQL, PostgreSQL requires a separate server process to operate. The applications that need to access the database server uses TCP/IP protocol to send and receive requests. This is termed Client/server architecture. The following diagram explains the RDBMS client/server architecture:
Whereas SQLite does not work in this manner, it doesn’t require a server to run. In SQLite database is integrated with the application that accesses it. The application directly interacts with the database, reading and writing from the database files stored on the disk. The following diagram explains the SQLite server-less architecture:
3. Zero configuration
For using SQLite, no installation is required due to its serverless architecture. No server process is present that needs to be configured, started, and stopped. Also, SQLite does not use any configuration files.
4. Transactional
All the transactions in SQLite are completely ACID-compliant. It means all queries and changes are Atomic, Consistent, Isolated, and Durable. All the changes within a transaction either take place completely or not at all even when an unexpected situation like application crash, power failure, or operating system crash occurs.
Methods to Access SQLite Database in Android for Debugging
Method 1: Stetho
It is a simple library from Facebook, which can be used to easily debug network calls. While using Stetho, there is no need to add logs during the development phase and remove them while releasing. The debugger is not needed. In 3 easy steps, we are able to get all the network calls on our Chrome browser. All network requests getting fired from the application, such as latency, fire sequence, request params, response data. In Stetho, there are many more features apart from network inspection.
Step 1: Add dependency in build.gradle
implementation 'com.facebook.stetho:stetho:1.5.1' implementation 'com.facebook.stetho:stetho-ok implementation 'com.facebook.stetho:stetho-urlconnection:1.5.1'
Javascript Console can also be added
implementation 'com.facebook.stetho:stetho-js-rhino:1.5.1'
Step 2: Initialize it in the application class with one line of code
public class MyApplication extends Application { public void onCreate() { super.OnCreate(); if(BuildConfig.DEBUG) { Stetho.initializeWithDefaults(this) } } }
Register the class in AndroidManifest.xml
<manifest xmlns:android=" ...> <application android:name="MyApplication" ...> </application> </manifest>
Step 3: Enable network inspection
If OkHttp library at 3.x release is being used, then one can use the interceptors system to add data to the existing stack, that too automatically. This is the most straightforward way to enable network inspection.
OkHttpClient.Builder() .addNetworkInterceptor(StethoInterceptor()) .build()
Interceptors are the mechanism used to monitor, retry and rewrite network calls. The setup to inspect every call arising from the app is complete, now the steps to see them in Chrome browser remain.
Step 4: Setup Chrome DevTool to inspect
Client or server protocol provided by the Stetho software provides for the application, are used to implement the integration of Chrome DevTools. Now open Chrome and connect the device which is having the app.
Enter chrome://inspect in the URL bar of Chrome.
Hit enter and the connected devices will be displayed.
All the devices with USB debugging options will be shown here, also all the apps on the device with stetho integrated. Click on inspect below the app name, a new window opens and it gives an overview of all the services in a list format with all the details. Any duplicate request being fired will also be displayed. Clicking on any single request shows three different sections – Headers, preview, and response. Click on each one to view the individual section.
Step 5: Database inspection with Stetho
When the pop window is opened after clicking on inspecting, select resources at the top. Many options will appear on the left side, select Web SQL from it, it’s a dropdown where all the app databases are seen. By clicking on any table, it shows the structure with all the data inside it. There is also absolutely no need to export or import things. Queries to check the result can also be executed. Click on the .db file and an editor will open up on the right side. Write queries depending on the requirements.
Method 2: Using Shell
- Enable Developer option in Android device
- Enable USB debugging
- Connect your device via USB
- When prompted ‘Allow USB debugging. Click OK.
Unix Shell can be used to run a variety of commands on a device. Android Debug Bridge(adb) provides access to it. Open ADB’s shell from the command prompt by running ADB shell. This shell can be used to copy the database out of the applications directory and paste it to any location on the device.
hell@device:/ $ mkdir /sdcard/data/ shell@device:/ $ run-as <com.your.package> shell@device:/data/data/com.your.package $ cp databases/<database>.db /sdcard/data/ shell@device:/data/data/com.your.package $ exit shell@device:/ $ exit
Now the database file is accessible from “/sdcard/data/”. One can use ADB pull to pull the file to the localhost:
adb pull /sdcard/data/<database>.db
Similarly, you can use ADB push to push a file to the device and therefore update the database. Simply execute the steps in reverse.
Open the database
A copy of the application’s database is created. Now an SQLite compatible database explorer is required to open the database. SQLite database browser is one of the best tools for browsing and editing SQLite database files. It’s open-source software available for Windows, macOS, and Linux.
Method 3: Using Android Debug Database (ADD)
Tasks done by Android debug database:
- See all the databases.
- See all the data in the shared preferences used in the application.
- Run any SQL query on the given database to update and delete the data.
- Directly edit the database values.
- Directly edit shared preferences.
- Directly add a row in the database.
- Directly add a key value in the shared preferences.
- Delete database rows and shared preferences.
- Search in the data.
- Sort data.
- Export Database.
- Check the Database version.
- Download database
Prerequisite:
Before implementing the Android debug database library in the app, a few things need to be taken care of
- Android device and laptop should be on the same network. (LAN or Wifi)
- If using a mobile phone over USB, execute the following command in the terminal
adb forward tcp:8080 tcp:8080
If you need to use any other port than 8080, make the following changes in the build.gradle file
debug { resValue("string", "PORT_NUMBER", "8081") }
Using Android Debug Database library
Add the following dependency to start using the Android debug database in the app. Add this dependency in build.gradle file.
dependencies { ... debugImplementation 'com.amitshekhar.android:debug-db:1.0.6' ... }
Now run the application, you will see a similar entry to one given below in the logcat.
D/DebugDB: Open in your browser
To get the debug address URL from the code, just call the below method:
DebugDB.getAddressLog();
Open the URL in the browser and an interface will open up, showing all the details related to the databases and shared preferences.
Add. Delete or edit operations on the values present in the table can be done easily just by clicking on the add, delete or edit button respectively
Any database operation can also be done using the SQL query, just type the query in the query section and the Android debug database will do the rest.
Getting Debug Address URL in Toast
Debug address can also be displayed using the Toast message. It can be done using the following code., to debug a custom database file,) { } } }
Disadvantage of SQLite
SQLite’s signature feature, which makes it very much different from others, is its portability. Unfortunately, it makes it a poor choice when many different users are updating the same table simultaneously. It’s a golden rule, to maintain the integrity of data, only one user can write to the file at a time. It also requires more work to ensure the security of private data due to features that make SQLite accessible. SQLite is quite different from other database systems, it limits many advanced features that are offered by other relational database systems. SQLite does not validate data types. SQLite allows users to store data of any type into any column, whereas many other database software would reject data that does not conform to a table’s schema.
SQLite creates schemas, which limit the type of data in each column, but it does not enforce them. The example below shows that the id column wants to store integers, the name column wants to store text, and the age column wants to store integers:
CREATE TABLE celebs ( id INTEGER, name TEXT, age INTEGER );
SQLite will never reject values of the different data types. We may insert the wrong data types in the columns. Storing different data types in the same column is unacceptable behavior that can lead to unfixable errors, so it’s required to be consistent about the data that goes in a particular column, even though SQLite will not enforce it. | https://www.geeksforgeeks.org/how-to-access-sqlite-database-in-android-for-debugging/?ref=rp | CC-MAIN-2022-21 | refinedweb | 1,800 | 56.76 |
You can not select more than 25 topics Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.
2.6 KiB
2.6 KiB
lethe README
lethe is a Python module for git-based snapshotting.
lethe is intended as a mechanism for creating commits outside
the standard git branching/tagging workflows. It is meant to enable
additional use-cases without disrupting the standard workflows.
Use cases include:
- Short-lived:
- On-disk undo log
- Syncing work-in-progress between computers before it's ready
- Long-lived:
- lab notebook: Recording the code / configuration state that resulted in a given output
- incremental backup: Space-efficient time-based backups of a codebase
Usage
Creating a commit from the command line
$ cd path/to/repo $ lethe 122d058e375274a186c407f28602c3b14a2cab95
This effectively snapshots the current state of the repository (as would be seen by
git add --all) and creates a new commit (
122d058e375274a186c407f28602c3b14a2cab95)
which points to it. The current branch and index are not changed.
Flags:
-p my_parent_refis used to provide "parent" refs which become the parents of the created commit. If a parent ref is a symbolic ref, both the provided ref and the ref it points to are used as parents. If not present, defaults to
-p HEAD.
-t ref/lethe/my_target_refis used to provide "target" refs which will be created/updated to point to the created commit. If not present, defaults to adding an entry of the form
-t refs/lethe/my_branchfor each parent ref of the form
refs/heads/my_branch, and
-t refs/lethe/my/refpathfor non-head refs of the form
refs/my/refpath. All provided parent refs and any dereferenced parent refs are used to generate default target refs. If any of the target refs already exist, the commits they point to become parents of the created commit.
-m "my message"sets the commit message for the snapshot. By default, "snapshot " is used.
-r path/to/repocan be provided to specify a repository outside of the current working directory.
$ cd path/to/repo $ git branch * master $ lethe
is equivalent to
lethe -r path/to/repo -p HEAD
or
lethe -r path/to/repo -p HEAD -p refs/heads/master -t refs/lethe/HEAD -t refs/lethe/master
Creating a commit programmatically
import lethe REPO = '/path/to/repo' commit_sha = lethe.snap(cwd=REPO) tree_sha = lethe.get_tree(commit_sha, cwd=REPO) print('Created new commit with hash ' + commit_sha + ' aka refs/lethe/HEAD') print('Code (tree) state is ' + tree_sha)
Installation
Requirements:
- python 3 (written and tested with 3.6)
- git (accessible on the system
PATH)
Install with pip:
pip3 install lethe | https://mpxd.net/code/jan/lethe/src/branch/master/README.md | CC-MAIN-2022-05 | refinedweb | 434 | 53.21 |
Qt GUI C++ Classes
The Qt GUI module provides the basic enablers for graphical applications written with Qt. More...
Classes
Detailed Description
The Qt GUI module provides classes for windowing system integration, event handling, OpenGL and OpenGL ES integration, 2D graphics, imaging, fonts and typography. These classes are used internally by Qt's user interface technologies and can also be used directly, for instance to write applications using low-level OpenGL ES graphics APIs.
To include the definitions of the module's classes, use the following directive:
#include <QtGui>
If you use qmake to build your projects, Qt GUI is included by default. To disable Qt GUI, add the following line to your
.pro file:
QT -=. | http://doc.qt.io/qt-5/qtgui-module.html | CC-MAIN-2017-34 | refinedweb | 116 | 52.39 |
first_name taken from incorrect attribute 'cn'
Bug Description
A user's name was bing displayed as "First Last Last" after login with LDAP credentials.
Looking at "ldap/security.py" shows that first_name is being taken from "cn" attribute,
http://
"The 'cn' ('commonName' in X.500) attribute type contains names of an
object. Each name is one value of this multi-valued attribute. If
the object corresponds to a person, it is typically the person's full
name."
The appropriate attribute is "givenName":
"The 'givenName' attribute type contains name strings that are the
part of a person's name that is not their surname."
I suggest adding "givenName" to the "all_attrs" list, and substituting
"givenName" for "cn" in the "first_name" assignment.
class PersonLDAPPerso
all_attrs = (
'sn', # surname
'cn', # common name
def __init__(self, *args, **kw):
def oneline(self, t):
if not t:
return u''
return u' '.join(
def update(self):
# 'first_name': self.oneline(
})
}) | https://bugs.launchpad.net/schooltool.ldap/+bug/1508147 | CC-MAIN-2019-43 | refinedweb | 151 | 57.27 |
Adding Text To Speech
Introduction
Certainly one of the most popular Activities available is Speak, which takes the words you type in and speaks them out loud, at the same time displaying a cartoon face that seems to be speaking the words. You might be surprised to learn how little of the code in that Activity is used to get the words spoken. If your Activity could benefit from having words spoken out loud (the possibilities for educational Activities and games are definitely there) this chapter will teach you how to make it happen.
We Have Ways To Make You Talk
A couple of ways, actually, and neither one is that painful. They are:
- Running the espeak program directly
- Using the gstreamer espeak plugin
Both approaches have their advantages. The first one is the one used by Speak. (Technically, Speak uses the gstreamer plugin if it is available, and otherwise executes espeak directly. For what Speak is doing, the gstreamer plugin isn't really needed.) Executing espeak is definitely the simplest method, and may be suitable for your own Activity. Its big advantage is that you do not need to have the gstreamer plugin installed. If your Activity needs to run on something other than the latest version of Sugar, this will be something to consider.
The gstreamer plugin is what is used by Read Etexts to do text to speech with highlighting. For this application we needed to be able to do things that are not possible by just running espeak. For example:
- We needed to be able to pause and resume speech, because the Activity needs to speak a whole page worth of text, not just simple phrases.
- We needed to highlight the words being spoken as they are spoken.
You might think that you could achieve these objectives by running espeak on one word at a time. If you do, don't feel bad because I thought that too. On a fast computer it sounds really awful, like HAL 9000 developing a stutter towards the end of being deactivated. On the XO no sounds came out at all.
Originally Read Etexts used speech-dispatcher to do what the gstreamer plugin does. The developers of that program were very helpful in getting the highlighting in Read Etexts working, but speech-dispatcher needed to be configured before you could use it, which was an issue for us. (There is more than one kind of text-to-speech software available, and speech-dispatcher supports most of them. This makes configuration files inevitable.) Aleksey Lim of Sugar Labs came up with the idea of using a gstreamer plugin and was the one who wrote it. He also rewrote much of Read Etexts so it would use the plugin if it was available, use speech-dispatcher if not, and would not support speech if neither was available.
Running espeak Directly
You can run the espeak program from the terminal to try out its options. To see what options are available for espeak you can use the man command:
man espeak
This will give you a manual page describing how to run the program and what options are available. The parts of the man page that are most interesting to us are these:
NAME
       espeak - A multi-lingual software speech synthesizer.

SYNOPSIS
       espeak [options] [<words>]

DESCRIPTION
       espeak is a software speech synthesizer for English, and some other
       languages.

OPTIONS
       -p <integer>
              Pitch adjustment, 0 to 99, default is 50

       -s <integer>
              Speed in words per minute, default is 160

       -v <voice name>
              Use voice file of this name from espeak-data/voices

       --voices[=<language code>]
              Lists the available voices. If =<language code> is present
              then only those voices which are suitable for that language
              are listed.
Let's try out some of these options. First let's get a list of voices:

espeak --voices

The output is a long table with one voice per line, giving each voice's language code and name, and it continues for many more lines than we could usefully reproduce here.
Now that we know the names of the voices, we can try them out. How about English with a French accent?
espeak "Your mother was a hamster and your father \ smelled of elderberries." -v fr
Let's try experimenting with rate and pitch:
espeak "I'm sorry, Dave. I'm afraid I can't \ do that." -s 120 -p 30
The next thing to do is to write some Python code to run espeak. Here is a short program adapted from the code in Speak:
import re
import subprocess

PITCH_MAX = 99
RATE_MAX = 99
PITCH_DEFAULT = PITCH_MAX/2
RATE_DEFAULT = RATE_MAX/3

def speak(text, rate=RATE_DEFAULT, pitch=PITCH_DEFAULT, voice="default"):
    # espeak uses 80 to 370
    rate = 80 + (370-80) * int(rate) / 100
    subprocess.call(["espeak", "-p", str(pitch),
                     "-s", str(rate), "-v", voice, text],
                    stdout=subprocess.PIPE)

def voices():
    out = []
    result = subprocess.Popen(["espeak", "--voices"],
                              stdout=subprocess.PIPE).communicate()[0]
    for line in result.split('\n'):
        m = re.match(
            r'\s*\d+\s+([\w-]+)\s+([MF])\s+([\w_-]+)\s+(.+)', line)
        if not m:
            continue
        language, gender, name, stuff = m.groups()
        if stuff.startswith('mb/') or \
           name in ('en-rhotic', 'english_rp', 'english_wmids'):
            # these voices don't produce sound
            continue
        out.append((language, name))
    return out

def main():
    print voices()
    speak("I'm afraid I can't do that, Dave.")
    speak("Your mother was a hamster, and your father " +
          "smelled of elderberries!", 30, 60, "fr")

if __name__ == "__main__":
    main()
In the Git repository in the directory Adding_TTS this file is named espeak.py. Load this file into Eric and do Run Script from the Start menu to run it. In addition to hearing speech you should see this text:
[('af', 'afrikaans'), ('bs', 'bosnian'), ('ca', 'catalan'), ('cs', 'czech'), ('cy', 'welsh-test'), ('de', 'german'), ('el', 'greek'), ('en', 'default'), ('en-sc', 'en-scottish'), ('en-uk', 'english'), ('en-uk-north', 'lancashire'), ('en-us', 'english-us'), ('en-wi', 'en-westindies'), ('eo', 'esperanto'), ('es', 'spanish'), ('es-la', 'spanish-latin-american'), ('fi', 'finnish'), ('fr', 'french'), ('fr-be', 'french'), ('grc', 'greek-ancient'), ('hi', 'hindi-test'), ('hr', 'croatian'), ('hu', 'hungarian'), ('hy', 'armenian'), ('hy', 'armenian-west'), ('id', 'indonesian-test'), ('is', 'icelandic-test'), ('it', 'italian'), ('ku', 'kurdish'), ('la', 'latin'), ('lv', 'latvian'), ('mk', 'macedonian-test'), ('nl', 'dutch-test'), ('no', 'norwegian-test'), ('pl', 'polish'), ('pt', 'brazil'), ('pt-pt', 'portugal'), ('ro', 'romanian'), ('ru', 'russian_test'), ('sk', 'slovak'), ('sq', 'albanian'), ('sr', 'serbian'), ('sv', 'swedish'), ('sw', 'swahihi-test'), ('ta', 'tamil'), ('tr', 'turkish'), ('vi', 'vietnam-test'), ('zh', 'Mandarin'), ('zh-yue', 'cantonese-test')]
The voices() function returns a list of voices as one tuple per voice, and eliminates voices from the list that espeak cannot produce on its own. This list of tuples can be used to populate a drop down list.
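To make that tuple extraction concrete, here is the same regular expression voices() uses, applied to one representative line of --voices output. The sample line is hardcoded here so the sketch runs even on a machine without espeak installed:

```python
import re

# One representative line of `espeak --voices` output (hardcoded for the demo)
sample = " 5  af             M  afrikaans       af"

# Same pattern as in voices(): priority, language, gender, voice name, the rest
m = re.match(r'\s*\d+\s+([\w-]+)\s+([MF])\s+([\w_-]+)\s+(.+)', sample)
language, gender, name, rest = m.groups()
print((language, name))   # -> ('af', 'afrikaans')
```

Lines that don't fit the pattern, such as the table header, fail the re.match test, which is why voices() can skip them with a simple if not m: continue.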
The speak() function adjusts the value of rate so you can input a value between 0 and 99 rather than between 80 to 370. speak() is more complex in the Speak Activity than what we have here because in that Activity it monitors the spoken audio and generates mouth movements based on the amplitude of the voice. Making the face move is most of what the Speak Activity does, and since we aren't doing that we need very little code to make our Activity speak.
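That rate conversion is just a linear mapping from the friendlier 0 to 99 scale onto espeak's native 80 to 370 words per minute. Pulled out into a function of its own (restated here with // so the arithmetic stays whole-number under Python 3 as well), the endpoints are easy to check:

```python
def espeak_rate(rate):
    # Same arithmetic as inside speak(): 0 maps to espeak's slowest
    # rate (80 wpm) and 99 maps close to its fastest (370 wpm).
    return 80 + (370 - 80) * int(rate) // 100

print(espeak_rate(0))    # -> 80
print(espeak_rate(33))   # -> 175 (roughly the RATE_DEFAULT setting)
print(espeak_rate(99))   # -> 367
```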
You can use import espeak to include this file in your own Activities.
Using The gstreamer espeak Plugin
The gstreamer espeak plugin can be installed in Fedora 10 or later using Add/Remove Software.
When you have this done you should be able to download the Read Etexts Activity (the real one, not the simplified version we're using for the book) from ASLO and try out the Speech tab. You should do that now. It will look something like this:
The book used in the earlier screenshots of this manual was Pride and Prejudice by Jane Austen. To balance things out the rest of the screen shots will be using The Innocents Abroad by Mark Twain.
Gstreamer is a framework for multimedia. If you've watched videos on the web you are familiar with the concept of streaming media. Instead of downloading a whole song or a whole movie clip and then playing it, streaming means the downloading and the playing happen at the same time, with the downloading just a bit behind the playing. There are many different kinds of media files: MP3's, DivX, WMV, Real Media, and so on. For every kind of media file Gstreamer has a plugin.
Gstreamer makes use of a concept called pipelining. The idea is that the output of one program can become the input to another. One way to handle that situation is to put the output of the first program into a temporary file and have the second program read it. This would mean that the first program would have to finish running before the second one could begin. What if you could have both programs run at the same time and have the second program read the data as the first one wrote it out? It's possible, and the mechanism for getting data from one program to the other is called a pipe. A collection of programs joined together in this way is called a pipeline. The program that feeds data into a pipe is called a source, and the program that takes the data out of the pipe is called a sink.
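The idea is easy to demonstrate outside of GStreamer using nothing but the standard library: here the source and sink are two child processes that run at the same time, joined by an operating-system pipe (the sink simply sorts whatever the source produces):

```python
# One process writes into a pipe while another reads from it: both
# run at the same time and no temporary file is involved.
import subprocess, sys

producer = subprocess.Popen(
    [sys.executable, '-c', "print('one'); print('two'); print('three')"],
    stdout=subprocess.PIPE)
consumer = subprocess.Popen(
    [sys.executable, '-c',
     "import sys; sys.stdout.write(''.join(sorted(sys.stdin)))"],
    stdin=producer.stdout, stdout=subprocess.PIPE)
producer.stdout.close()     # only the consumer holds the read end now
out, _ = consumer.communicate()
print(out.decode())         # the sink sorted what the source produced
```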
The gstreamer espeak plugin uses a simple pipe: text goes into espeak at one end and sound comes out the other and is sent to your soundcard. You might think that doesn't sound much different from what we were doing before, but it is. When you just run espeak the program has to load itself into memory, speak the text you give it into the sound card, then unload itself. This is one of the reasons you can't just use espeak a word at a time to achieve speech with highlighted words. There is a short lag while the program is loading. It isn't that noticeable if you give espeak a complete phrase or sentence to speak, but if it happens for every word it is very noticeable. Using the gstreamer plugin we can have espeak loaded into memory all the time, just waiting for us to send some words into its input pipe. It will speak them and then wait for the next batch.
Since we can control what goes into the pipe it is possible to pause and resume speech.
The example we'll use here is a version of Read Etexts again, but instead of the Activity we're going to modify the standalone version. There is nothing special about the gstreamer plugin that makes it only work with Activities. Any Python program can use it. I'm only including Text to Speech as a topic in this manual because every Sugar installation includes espeak and many Activities could find it useful.
There is a file in our Git repository named speech.py; the part of it that prepares text for word highlighting looks like this:
import gst

def prepare_highlighting(label_text):
    "Splits the text into words, recording each word's offsets."
    word_tuples = []
    i = 0
    omitted = [' ', '\n', '*', '+', '/', '\\']
    omitted_chars = set(omitted)
    while i < len(label_text):
        if label_text[i] not in omitted_chars:
            word_begin = i
            j = i
            while j < len(label_text) and \
                    label_text[j] not in omitted_chars:
                j = j + 1
            word_end = j
            i = j
            word_t = (word_begin, word_end, \
                label_text[word_begin: word_end].strip())
            if word_t[2] != u'\r':
                word_tuples.append(word_t)
        i = i + 1
    return word_tuples

def add_word_marks(word_tuples):
    "Adds a mark between each word of text."
    i = 0
    marked_up_text = '<speak> '
    while i < len(word_tuples):
        word_t = word_tuples[i]
        marked_up_text = marked_up_text + \
            '<mark name="' + str(i) + '"/>' + word_t[2]
        i = i + 1
    return marked_up_text + '</speak>'
There is another file named ReadEtextsTTS.py which looks like this:
import sys
import os
import zipfile
import pygtk
import gtk
import getopt
import pango
import gobject
import time
import speech

speech_supported = True
try:
    import gst
    gst.element_factory_make('espeak')
    print 'speech supported!'
except Exception, e:
    speech_supported = False
    print 'speech not supported!'

page = 0
PAGE_SIZE = 45

class ReadEtextsActivity():
    def __init__(self):
        "The entry point to the Activity"
        speech.highlight_cb = self.highlight_next_word
        # print speech.voices()

    def highlight_next_word(self, word_count):
        if word_count < ...

    # ...

        if keyname == 'plus':
            self.font_increase()
            return True
        if keyname == 'minus':
            self.font_decrease()
        if keyname == 'KP_Right':
            self.page_next()
            return True
        if keyname == 'Page_Up' or keyname == 'KP_Up':
            self.page_previous()
            return True
        if keyname == 'KP_Left':
            self.page_previous()
            return True
        if keyname == 'Page_Down' or keyname == 'KP_Down':
            self.page_next()
            return True
        if keyname == 'Up':
            self.scroll_up()
            return True
        if keyname == ...

    def delete_cb(self, widget, event, data=None):
        speech.stop()
        return False

    def destroy_cb(self, widget, data=None):
        speech.stop()
        gtk.main_quit()

    def main(self, file_path):
        self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
        self.window.connect("delete_event", self.delete_cb)
        self.window.connect("destroy", self.destroy_cb)
        self.window.set_title("Read Etexts Activity")
        self.window.set_size_request(800, 600)
        # ...

if __name__ == "__main__":
    try:
        opts, args = getopt.getopt(sys.argv[1:], "")
        ReadEtextsActivity().main(args[0])
    except getopt.error, msg:
        print msg
        print "This program has no options"
        sys.exit(2)
The program ReadEtextsTTS has only a few changes to enable it for speech. The first one checks for the existence of the gstreamer plugin:
speech_supported = True
try:
    import gst
    gst.element_factory_make('espeak')
    print 'speech supported!'
except Exception, e:
    speech_supported = False
    print 'speech not supported!'
This code detects whether the plugin is installed by attempting to import the Python library associated with it, named "gst". If the import fails it throws an Exception and we catch that Exception and use it to set a variable named speech_supported to False. We can check the value of this variable in other places in the program to make a program that works with Text To Speech if it is available and without it if it is not. Making a program work in different environments by doing these kinds of checks is called degrading gracefully. Catching exceptions on imports is a common technique in Python to achieve this. If you want your Activity to run on older versions of Sugar you may find yourself using it.
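As a minimal illustration of the pattern (not code from the Activity — `try_import` is an invented helper):

```python
# Graceful degradation: probe for an optional module and fall back
# quietly if it is missing.
def try_import(name):
    try:
        __import__(name)
        return True
    except ImportError:
        return False

print(try_import('json'))            # True: part of the stdlib
print(try_import('no_such_module'))  # False: feature gets disabled
```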
The next bit of code we're going to look at highlights a word in the textview and scrolls the textview to keep the highlighted word visible.
class ReadEtextsActivity():
    def __init__(self):
        "The entry point to the Activity"
        speech.highlight_cb = self.highlight_next_word
        # print speech.voices()

    def highlight_next_word(self, word_count):
        if word_count < ...
In the __init__() method we assign our highlight_next_word() method to the variable highlight_cb in speech.py. This gives speech.py a way to call that method every time a new word in the textview needs to be highlighted.
The next line will print the list of tuples containing Voice names to the terminal if you uncomment it. We aren't letting the user change voices in this application but it would not be difficult to add that feature.
The code for the method that highlights the words follows. What it does is look in a list of tuples that contain the starting and ending offsets of every word in the textview's text buffer. The caller of this method passes in a word number (for instance the first word in the buffer is word 0, the second is word 1, and so on). The method looks up that entry in the list, gets its starting and ending offsets, removes any previous highlighting, then highlights the new text. In addition to that it figures out what fraction of the total number of words the current word is and scrolls the textviewer enough to make sure that word is visible.
Of course this method works best on pages without many blank lines, which fortunately is most pages. It does not work so well on title pages. An experienced programmer could probably come up with a more elegant and reliable way of doing this scrolling. Let me know what you come up with.
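A stripped-down version of that proportional-scrolling arithmetic (illustrative only — the Activity does this through a GTK adjustment object, and `scroll_position` is an invented name):

```python
# Map "which word are we on" to a scrollbar position: the word's
# index as a fraction of the total, scaled to the scrollable range.
def scroll_position(word_index, total_words, upper, page_size):
    fraction = float(word_index) / max(total_words, 1)
    return fraction * (upper - page_size)

print(scroll_position(50, 100, 1000.0, 200.0))   # 400.0: halfway down
```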
Further down we see the code that gets the keystrokes the user enters and does speech-related things with them:
As you can see, the functions we're calling are all in the file speech.py that we imported. You don't have to fully understand how these functions work to make use of them in your own Activities. Notice that the code as written prevents the user from changing pitch or rate while speech is in progress. Notice also that there are two different methods in speech.py for doing speech. play() is the method for doing text to speech with word highlighting. say() is for saying short phrases produced by the user interface, in this case "Pitch adjusted" and "Rate adjusted". Of course if you put code like this in your Activity you would use the _() function so these phrases could be translated into other languages.
There is one more bit of code we need to do text to speech with highlighting: we need to prepare the words to be spoken and highlighted in the textviewer.
The beginning of this method reads a page's worth of text into a string called label_text and puts it into the textview's buffer. The last two lines split the text into words, leaving in punctuation, and put each word and its beginning and ending offsets into a tuple. The tuples are added to a List.
speech.add_word_marks() converts the words in the List to a document in SSML (Speech Synthesis Markup Language) format. SSML is a standard for adding tags (sort of like the tags used to make web pages) to text to tell speech software what to do with the text. We're just using a very small part of this standard to produce a marked up document with a mark between each word, like this:
<speak> <mark name="0"/>The<mark name="1"/>quick<mark name="2"/>brown<mark name="3"/>fox<mark name="4"/>jumps </speak>
When espeak reads this file it will do a callback into our program every time it reads one of the mark tags. The callback will contain the number of the word in the word_tuples List which it will get from the name attribute of the mark tag. In this way the method being called will know which word to highlight. The advantage of using the mark name rather than just highlighting the next word in the textviewer is that if espeak should fail to do one of the callbacks the highlighting won't be thrown out of sync. This was a problem with speech-dispatcher.
A callback is just what it sounds like. When one program calls another program it can pass in a function or method of its own that it wants the second program to call when something happens.
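A toy version of the difference between the two approaches — indexing by the mark's name versus just advancing a counter:

```python
# Indexing by mark *name* stays correct even if espeak drops a
# callback; "highlight the next word" would drift out of sync.
words = ['The', 'quick', 'brown', 'fox', 'jumps']
by_name, by_next = [], []
next_index = 0

for name in ['0', '1', '3', '4']:      # callback for mark 2 was lost
    by_name.append(words[int(name)])   # uses the mark's name
    by_next.append(words[next_index])  # just advances a counter
    next_index += 1

print(by_name)   # ['The', 'quick', 'fox', 'jumps'] -- still aligned
print(by_next)   # ['The', 'quick', 'brown', 'fox'] -- now one behind
```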
To try out the new program run
./ReadEtextsTTS.py bookfile
from the Terminal. You can adjust pitch and rate up and down using the keys 7, 8, 9, and 0 on the top row of the keyboard. It should say "Pitch Adjusted" or "Rate Adjusted" when you do that. You can start, pause, and resume speech with highlighting by using the End key on the keypad. (On the XO laptop the "game" keys are mapped to what is the numeric keypad on a normal keyboard. This makes these keys handy for use when the XO is folded into tablet mode and the keyboard is not available). You cannot change pitch or rate while speech is in progress. Attempts to do that will be ignored. The program in action looks like this:
That brings us to the end of the topic of Text to Speech. If you'd like to see more, the Git repository for this book has a few more sample programs that use the gstreamer espeak plugin. These examples were created by the author of the plugin and demonstrate some other ways you can use it. There's even a "choir" program that demonstrates multiple voices speaking at the same time.
fclose() prototype
int fclose(FILE* stream);
The fclose() function takes a single argument, a file stream which is to be closed. All the data that are buffered but not written are flushed to the OS and all unread buffered data are discarded.
Even if the operation fails, the stream is no longer associated with the file. If the file pointer is used after fclose() is executed, the behaviour is undefined.
It is defined in the <cstdio> header file.
fclose() Parameters
stream: The file stream to close.
fclose() Return value
The fclose() function returns:
- Zero on success.
- EOF on failure.
Example: How fclose() function works
#include <iostream>
#include <cstdio>
#include <cstdlib>
using namespace std;

int main() {
    FILE *fp;
    fp = fopen("file.txt", "w");
    char str[20] = "Hello World!";
    if (fp == NULL) {
        cout << "Error opening file";
        exit(1);
    }
    fprintf(fp, "%s", str);
    fclose(fp);
    cout << "File closed successfully";
    return 0;
}
When you run the program, the output will be:
File closed successfully | https://www.programiz.com/cpp-programming/library-function/cstdio/fclose | CC-MAIN-2020-16 | refinedweb | 157 | 74.79 |
ACCESS
access - check user's permissions for a file
#include <unistd.h>

int access(const char *pathname, int mode);
access checks whether the process would be allowed to read, write or test for existence of the file (or other file system object) whose name is pathname. If pathname is a symbolic link, the permissions of the file referred to by the link are tested. Only the access bits are checked, not the file type or contents, so a file may be found to be "executable" and yet the execve(2) call will still fail.
On success (all requested permissions granted), zero is returned. On error (at least one bit in mode asked for a permission that is denied, or some other error occurred), -1 is returned, and errno is set appropriately.
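From Python, os.access() exposes the same check (same real-uid semantics, same caveats about check-then-use races):

```python
# os.access wraps access(2); F_OK tests existence, R_OK/W_OK/X_OK
# test permissions against the process's *real* uid and gid.
import os, tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
print(os.access(path, os.F_OK))               # True: the file exists
print(os.access(path + '.missing', os.F_OK))  # False: it does not
os.unlink(path)
```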
stat(2), open(2), chmod(2), chown(2), setuid(2), setgid(2)
Hi guys,
GITHUB:
I reworked my deferred engine playground basically from scratch. It started out as a real fast way to make some models appear on screen, but now it has a bit more structure.
What it is: It's a scene setup which enables programmers to quickly try out new shaders in a deferred environment, with a relatively easy way to import models with physically based materials.
This is the main draw function. As you can see it's pretty structured and easy to change the rendering or add some shaders.
Speaking of shaders and materials - it's really easy to have special code for certain materials. My GBuffer contains the following data:
(2 x RGBA + 1x R)
So you can simply set the material type of the helmet visor to an arbitrary number and manipulate pixels with this material type.An example would be the helmets below which feature a spooky skull hologram made with a simple shader trick.
importing and using objects and lights is really straight forward as well.
public class Assets
{
public Model SkullTest;
public MaterialEffect SkullTestMaterial;
public void Load(ContentManager content)
{
SkullTest = content.Load<Model>("Art/test/skull");
SkullTestMaterial = CreateMaterial(Color.White, 0,0,
albedoMap: content.Load<Texture2D>("Art/test/skull_albedo"),
normalMap: content.Load<Texture2D>("Art/test/skull_normal"),
roughnessMap: content.Load<Texture2D>("Art/test/skull_roughness"),
metallicMap: content.Load<Texture2D>("Art/test/skull_metalness"));
}
}
And then to use in engine
public class MainLogic
{
....
public void Initialize(Assets assets)
{
Camera = new Camera(position: new Vector3(-80, 0, -10), lookat: new Vector3(1, 0, -10));
....
AddEntity(_assets.SkullModel, _assets.SkullTestMaterial, new Vector3(0, 0, -10), -Math.PI/2, 0, 0, 1);
}
}
Culling etc. is handled in a pretty robust way, so you basically don't need to worry about that.
Adding point lights is similarly easy (I have ditched spot and directional lights for now, they are easy to implement)
public void Initialize(Assets assets)
{
myLight = AddPointLight(position: new Vector3(2, 2, -20), radius: 50, color: Color.Wheat, intensity: 20, castShadows: true, shadowResolution: 1024);
}
I am pretty proud of my frustum culling and shadow optimizations, which i did last night. In a static scene shadows cost almost nothing in terms of performance, which should be expected.
I use cubeMaps for the shadows of my point lights. They are not optimized to the end but have some neat stuff, like for example they will only update the faces that have changing geometry. So if an object moves next to a static light source only 1 or 2 projections are redrawn instead of all 6.
Typically game scenes don't have millions of polygons around each light source, but with the Stanford dragon and the Sponza atrium, which are very heavy in terms of polycount, the cost for shadowmap generation is a bit inflated.
Nevertheless I can run 10 moving, fully updating lights with pretty big radii, at around 80Hz, which feels pretty good. These are soft VSM shadows by the way (not blurred)
Another thing that makes life easier is a working debug console with Autofill and suggestions
Plus the ability to drag and resize the window to your hearts content
So yeah, just a pet project of mine. Basically redone last night and i like to write about stuff.
I am thinking of making this public if there is demand. It's obviously still a bit hacky and not perfectly documented (and probably implemented) but it could prove useful maybe.
Anyways - ideas what to implement next?
Yes please This is awesome!! I'm really interested in getting into graphics programming and I'm taking a graphics programming class this semester. Would love to do some PBR experiments with this.
By the way I'm working on getting glsl support in MG which opens the path to support shader stages besides vertex and pixel shaders in case you're interested in that
I think I speak for a lot of people when I say anything with shaders would be received with open arms! I have made a few attempts to get into it, but not managed to compile anything that works using hlsl. -There's always some damn version conflict or outdated this or that, or the moon is out of alignment...
Looks like it might be a cool learning example for a lot of people, to say the least!
Can someone give this a try? I think everything should be included
In case you get it to start you can use F1 to change render modes, the key above TAB for console (different in each language) and L to spawn new lights. move the existing light with arrow keys
Apart from that I implemented some form of SSAO. It sucks still, but it works, yay.
Hi kosmonautgames,
some files are missing:
defaultfont.spritefont
stormtrooper.fbx
Stormtrooper_Albedo.png
Stormtrooper_Metalness.png
Stormtrooper_Normal.png
Stormtrooper_Roughness.png
alright thanks, I commited the change. There is currently a lot of trial/error stuff going on, but it should run fine
Now the build succeeds
What "trial/error stuff" do you mean?
If possible, can you explain how the "spooky skull hologram" works?
I removed it from the main renderer since it's not really relevant, but the idea is pretty simple.
Draw the skulls to another rendertarget. Set the materialtype of the helmet glass to 2.
Then in the deferred compose shader (where everything is merged into the final image) we look what material we are rendering. If the mat_type is 2 then we read the skull-rendertarget at the position, distort it or do some other effect, and add it to the output.
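The per-pixel idea, sketched in Python purely for illustration (the real work happens in the HLSL compose shader; material id 2 is the helmet-glass value mentioned above, and the function and field names here are invented):

```python
# Per pixel: branch on the G-buffer material id; for the visor
# material (id 2 here) add in the hologram render target's sample.
def compose(pixel, hologram_rt):
    color = pixel['lit']                   # regular deferred result
    if pixel['mat_id'] == 2:               # "project hologram" material
        h = hologram_rt[pixel['xy']]       # could be blurred/distorted
        color = tuple(c + e for c, e in zip(color, h))
    return color

px = {'lit': (20, 20, 20), 'mat_id': 2, 'xy': (0, 0)}
print(compose(px, {(0, 0): (10, 50, 10)}))  # (30, 70, 30)
```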
Man, this is really fantastic! I'd love to use something like this to play around with shaders!
You should readd it
Would love to see this in the renderer
Finally tried it, runs without problems, and looks really nice! The code is much bigger than I expected, it will take a solid effort to get through, but it looks like it's structured in a way that makes it possible to follow.
Thanks so far for sharing this!
I cant seem to add the skull holograms....I can find the skull model, but I cant find the material map files. (???) Are they included?
Edit: So I can't find the materials for the skull, but I can add a blank generic material, so the skull just comes out white....But the skull is drawn like an object in front of the helmet. not as a hologram on the visor. It looks like this:
Due to popular request I reimplemented the feature!
Yay!
I also implemented highly experimental screen space emissive materials
Plus some SSAO!
Let's talk briefly about the "holograms"
With F1 you can cycle rendertargets, you can find the hologram rendertarget at 8th position or so.
You can change the rendering style with the console command g_HologramUseGauss true or not. If true it will use a gaussian blur to read from the hologram map.
In the assets I added a new material
hologramMaterial = CreateMaterial(Color.White, 0.2f, 1, null, null, null, null, null, MaterialEffect.MaterialTypes.Hologram, 1);
If you have a material with the type hologram it will only get rendered to the hologram texture.
To have materials project the hologram you need to set the materialtype to MaterialEffect.MaterialTypes.ProjectHologram.
In the assets class, where I load all my assets you can find the function ProcessHelmets(), where I assign specific material values to the submeshes of the helmet. You can see the visor uses ProjectHologram, while some outlines use Hologram, so they get incorporated.
Screen space emissive materials are inherently flawed but were a fun experiment. Currently the limitation is that it doesn't work with submeshes, basically it's only good with a model with only 1 submesh. Change the objects material to have a type of MaterialEffect.MaterialTypes.Emissive to make it work.
g_Emissive ... can change some properties.
new Console variables for ssao are also available, just type in ssao and see what's there.
Submeshes are now properly supported.
Hi,
Do you know a good tool to generate PBR textures from albedos and normals, other than Substance which isn't free? I'm struggling with Photoshop and lose a lot of time as I'm not an artist.
Other than that, impossible to get it working: it fails on loading effects, but I have built them with monogame PT and rebuilt in VS. I'm using the latest dev version of MG, Win 7 + GT 620.
I'll post the exact message of the error when I arrive home in about 1h.
there is minor problems Vector truncation etc. in shaders, which makes it throw warnings and not build (on the first try). I should fix that stuff. Regardless, it builds after the first fail, it ignores these warnings then, for me at least.
I'm not using the newest Monogame builds since they don't work for me - I can't load any effects, even if they contain almost nothing or are the base monogame ones. I've wasted countless hours on that, and there are several threads here that about that. Oh well.
I can't recommend texturing programs apart from Substance Painter, I haven't worked with anything else.
It builds under VS and PT, but when I start the app under VS, i get this error:
Yeah, that's the same problem I have when I use newer MonoGame dev builds. I work on 3.6.0.187.
And many others.
I can't say anything about that, I have no clue how the problem can be fixed. I can't even build the Monogame source without getting the SharpDX exception for the base monogame shaders.
EDIT: Oh it seems there are some fixes, I need to try it out.
EDIT2: I reinstalled the newest monogame dev build but it still doesn't work.
Thank you! The pictures look awesome!
Yesterday I fiddled around until I could load the sponza model in my little deferred rendering engine. I think some of the normals of the sponza model aren't right, for example the ones of the floor. I vaguely remember correcting them in Blender, when I used it in my renderer.
So far I had no luck using the corrected version from the XNA 4.0 project, since it is in FBX format, but not the right one to use with Monogame. Unfortunately I couldn't find the old *.blend file for re-exporting to a newer version of the FBX format.
It would be awesome to have a nice high quality test model like sponza, where loading is not so tedious.
My version works ok, it has some holes, but most of the normal stuff is good, I'm positive.
It's .obj, but that's not an issue obviously. The real issue is to have all the materials loaded, since the default monogame only loads diffuse.
Unfortunately I have troubles creating my own custom effects for importing since I can't compile MonoGame source. It worked with an older version of the source, but I just stuck to the default for this one. | http://community.monogame.net/t/deferred-engine-playground-download/8180 | CC-MAIN-2020-10 | refinedweb | 1,848 | 64 |
A file class with some useful methods for tag manipulation.
#include <tfilestream.h>
Detailed Description
A file class with some useful methods for tag manipulation.
This class is a basic file class with some methods that are particularly useful for tag editors. It has methods to take advantage of ByteVector and a binary search method for finding patterns in a file.
Constructor & Destructor Documentation
Destroys this FileStream instance.
Member Function Documentation
Returns the buffer size that is used for internal buffering.
Reset the end-of-file and error flags on the file.
Reimplemented from TagLib::IOStream.
Insert data at position start in the file overwriting replace bytes of the original content.
- Note
- This method is slow since it requires rewriting all of the file after the insertion point.
Implements TagLib::IOStream.
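The semantics can be sketched on an in-memory buffer (Python for illustration only — the real method works on the file itself, in blocks, which is why it is slow on large files; the `insert` helper below just mirrors the documented signature):

```python
# insert(data, start, replace): keep the first `start` bytes, drop
# `replace` bytes, splice in `data`, keep the rest. On a real file
# everything after `start` has to be rewritten, hence the "slow" note.
def insert(buf, data, start=0, replace=0):
    return buf[:start] + data + buf[start + replace:]

print(insert(b'XXXXaudio-data', b'TAGv2', start=0, replace=4))
# b'TAGv2audio-data'
```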
Since the file can currently only be opened as an argument to the constructor (sort-of by design), this returns true if that open succeeded.
Implements TagLib::IOStream.
Returns the length of the file.
Implements TagLib::IOStream.
Returns the file name in the local file system encoding.
Implements TagLib::IOStream.
Reads a block of size length at the current get pointer.
Implements TagLib::IOStream.
Returns true if the file is read only (or if the file can not be opened).
Implements TagLib::IOStream.
Removes a block of the file starting at start and continuing for length bytes.
- Note
- This method is slow since it involves rewriting all of the file after the removed portion.
Implements TagLib::IOStream.
Move the I/O pointer to offset in the file from position p. This defaults to seeking from the beginning of the file.
Implements TagLib::IOStream.
Returns the current offset within the file.
Implements TagLib::IOStream.
Truncates the file to length.
Implements TagLib::IOStream.
Attempts to write the block data at the current get pointer. If the file is currently only opened read only – i.e. readOnly() returns true – this attempts to reopen the file in read/write mode.
- Note
- This should be used instead of using the streaming output operator for a ByteVector. And even this function is significantly slower than doing output with a char[].
Implements TagLib::IOStream.
The documentation for this class was generated from the following file: | http://taglib.github.io/api/classTagLib_1_1FileStream.html | CC-MAIN-2015-48 | refinedweb | 370 | 66.74 |
04 November 2011 10:47 [Source: ICIS news]
SINGAPORE (ICIS)--January LLDPE futures, the most actively traded contract on the Dalian Commodity Exchange (DCE), closed at yuan (CNY) 9,780/tonne ($1,540/tonne) on Friday, up by CNY220/tonne from Thursday's settlement price.
About 1.4m January contracts totalling about 3.4m tonnes were traded on Friday, according to DCE data.
A strong rebound of LLDPE prices in the domestic physical market also boosted investor sentiment, futures brokers said.
Imported LLDPE was selling at CNY9,350-9,800/tonne EXWH (ex-warehouse) this week, up by CNY100-800/tonne from last week, according to Chemease, an ICIS service.
I've heard a lot about entity component systems for MMO and data-intensive games, but i haven't heard anything about a specific system for tycoon or simulation games. Do you guys know any pattern or technique that's helpful on building a tycoon game, which is heavily based on time scales and ticks?
Programming technique for a Tycoon game?
#1 Members - Reputation: 675
Posted 04 January 2014 - 05:39 PM
#2 Members - Reputation: 675
Posted 07 January 2014 - 07:39 PM
No one? Do you guys think that an ECS variant would suffice? With each component being a data type for looping ticks?
#3 Prime Members - Reputation: 1535
Posted 07 January 2014 - 08:54 PM
It really depends on the situation, and it is something you need to plan ahead of time. It is also a skill you will develop over time. I provided code examples down below.
Example 1: Suppose you were tasked to program a Wizard to be a NPC.
You can do this in the programming sense as in Scenario 1 below, but you will see that Wizard inherits from two classes, NPC and Sprite, which means it is of type NPC and Sprite together. While it will work in the English-speaking sense, coding-style wise it will just create a bunch of class extensions for an object. Scenario 1 can cause a ripple effect on your Wizard class when you decide to change the code for NPC that you don't want your Wizard to inherit from.
Scenario 1: <---Bad Design for NPC design
public class Wizard extends NPC { }

public class NPC extends Sprite { }
Why do the above: when you can code like Scenario 2 down below. Here if you perform code design on the NPC class, it has no effect on you Wizard class because Wizard does not inherit from NPC. Here Wizard is a Sprite but NPC has a Sprite. Inheritance uses "is a" relationship but Composition uses "has a" relationship.
Scenario 2: <---Better Choice for NPC design
public class NPC implements Talkable {
    private Sprite sprite;

    public NPC(Sprite sprite) {
        this.sprite = sprite;
    }
}

public class Wizard extends Sprite {
    public Wizard(double x, double y) {
        super(x, y);
    }
}

public class Game {
    NPC npc = new NPC(new Wizard(0, 0));
}
There are appropriate cases where a class can have a mixed of both inheritance and composition in the works at the same time. There are also some classes that use only inheritance. Some class that use inheritance but will be later treated as a component.
You generally use composition if you want the class (for example: the Sora class, which is the class for the main character) to be made up of other classes you have already written (like the animated functionality to move in 4 directions). However, the animation functionality can be using inheritance. Sora can also be using inheritance to inherit the position properties of the Sprite class. So in that sense, Sora uses both inheritance and composition at the same time.
Code from my game:
This code uses inheritance and composition at the same time.
public class Sora extends Sprite {
    // soraRunRight uses inheritance internally but is treated as a component.
    private Animation soraRunRight;
    private Animation soraRunLeft;

    public Sora(double x, double y) {
        super(x, y);
    }
}

This code only uses inheritance, because my animations need to inherit the same asset loading, interval assignment and drawing algorithm:

public class soraRunRight extends Animation { }
public class soraRunLeft extends Animation { }
To answer your question, in terms of ticks, you should just pass the tick as a parameter of a method like this:
public void update(long milliseconds)
{
}
The question you pose is still vague. What technique or pattern for building a tycoon game? You should name a specific feature from the tycoon game that you need help designing; that would make the question much clearer.
Edited by warnexus, 07 January 2014 - 08:58 PM. | http://www.gamedev.net/topic/651893-programming-technique-for-a-tycoon-game/ | CC-MAIN-2016-40 | refinedweb | 652 | 60.55 |
XML::Filter::Sort - SAX filter for sorting elements in XML
use XML::Filter::Sort;
use XML::SAX::Machines qw( :all );

my $sorter = XML::Filter::Sort->new(
  Record => 'person',
  Keys   => [
    [ 'lastname',  'alpha', 'asc'  ],
    [ 'firstname', 'alpha', 'asc'  ],
    [ '@age',      'num',   'desc' ],
  ],
);

my $filter = Pipeline( $sorter => \*STDOUT );

$filter->parse_file(\*STDIN);
Or from the command line:
xmlsort
This module is a SAX filter for sorting 'records' in XML documents (including documents larger than available memory). The xmlsort utility which is included with this distribution can be used to sort an XML file from the command line without writing Perl code (see perldoc xmlsort).
These examples assume that you will create an XML::Filter::Sort object and use it in a SAX::Machines pipeline (as in the synopsis above). Of course you could use the object directly by hooking up to a SAX generator and a SAX handler but such details are omitted from the sample code.
When you create an XML::Filter::Sort object (with the new() method), you must use the 'Record' option to identify which elements you want sorted. The simplest way to do this is to simply use the element name, eg:
my $sorter = XML::Filter::Sort->new( Record => 'colour' );
Which could be used to transform this XML:
<options>
  <colour>red</colour>
  <colour>green</colour>
  <colour>blue</colour>
</options>
to this:
<options>
  <colour>blue</colour>
  <colour>green</colour>
  <colour>red</colour>
</options>
You can define a more specific path to the record by adding a prefix of element names separated by forward slashes, eg:
my $sorter = XML::Filter::Sort->new( Record => 'hair/colour' );
which would only sort <colour> elements contained directly within a <hair> element (and would therefore leave our sample document above unchanged). A path which starts with a slash is an 'absolute' path and must specify all intervening elements from the root element to the record elements.
A record element may contain other elements. The order of the record elements may be changed by the sorting process but the order of any child elements within them will not.
The default sort uses the full text of each 'record' element and uses an alphabetic comparison. You can use the 'Keys' option to specify a list of elements within each record whose text content should be used as sort keys. You can also use this option to specify whether the keys should be compared alphabetically or numerically and whether the resulting order should be ascending or descending, eg:
my $sorter = XML::Filter::Sort->new(
  Record => 'person',
  Keys   => [
    [ 'lastname',  'alpha', 'asc'  ],
    [ 'firstname', 'alpha', 'asc'  ],
    [ '@age',      'num',   'desc' ],
  ]
);
Given this record ...
<person age='35'>
  <firstname>Aardvark</firstname>
  <lastname>Zebedee</lastname>
</person>
The above code would use 'Zebedee' as the first (primary) sort key, 'Aardvark' as the second sort key and the number 35 as the third sort key. In this case, records with the same first and last name would be sorted from oldest to youngest.
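The multi-key comparison described above maps directly onto a stable sort with a composite key. An illustrative sketch (in Python rather than Perl; negating the numeric key gives the descending order):

```python
people = [
    {"firstname": "Aardvark", "lastname": "Zebedee", "age": 35},
    {"firstname": "Aardvark", "lastname": "Zebedee", "age": 40},
    {"firstname": "Bee",      "lastname": "Zebedee", "age": 20},
]

# lastname (alpha, asc), firstname (alpha, asc), age (num, desc):
# negating the numeric key flips its direction.
people.sort(key=lambda p: (p["lastname"], p["firstname"], -p["age"]))

print([p["age"] for p in people])  # -> [40, 35, 20]
```

The dictionaries stand in for the XML records; only the ordering logic is being demonstrated.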
As with the 'record' path, it is possible to specify a path to the sort key elements (or attributes). To make a path relative to the record element itself, use './' at the start of the path.
A simple path string defining which elements should be treated as 'records' to be sorted (see "PATH SYNTAX"). Elements which do not match this path will not be altered by the filter. Elements which do match this path will be re-ordered depending on their contents and the value of the Keys option.
When a record element is re-ordered, it takes its leading whitespace with it.
Only lists of contiguous record elements will be sorted. A list of records which has a 'foreign body' (a non-record element, non-whitespace text, a comment or a processing instruction) between two elements will be treated as two separate lists and each will be sorted in isolation of the other.
This option specifies which parts of the records should be used as sort keys. The first form uses a list-of-lists syntax. Each key is defined using a list of three elements:

1. a path to the element (or attribute) whose text content will supply the key value
2. a comparator: 'alpha' for alphabetic comparison, 'num' for numeric comparison, or a subroutine reference (this item is optional and defaults to 'alpha')
3. a sort direction: 'asc' for ascending or 'desc' for descending
You may prefer to define the Keys using a delimited string rather than a list of lists. Keys in the string should be separated by either newlines or semicolons and the components of a key should be separated by whitespace or commas. It is not possible to define a subroutine reference comparator using the string syntax.
Enabling this option will make sort comparisons case-insensitive (rather than the default case-sensitive).
The sort key values for each record will be the text content of the child elements specified using the Keys option (above). If you enable this option, leading and trailing whitespace will be stripped from the keys and each internal run of spaces will be collapsed to a single space. The default value for this option is off for efficiency.
Note: The contents of the record are not affected by this setting - merely the copy of the data that is used in the sort comparisons.
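The normalisation rule (strip leading and trailing whitespace, collapse internal runs) can be sketched in one line; this is an illustrative Python rendering, not the module's code:

```python
def normalise_key_space(key):
    # str.split() with no argument splits on any whitespace run and
    # drops leading/trailing whitespace; joining with single spaces
    # therefore collapses each internal run to one space.
    return " ".join(key.split())

print(normalise_key_space("  John\t  Doe \n"))  # -> 'John Doe'
```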
You can also supply your own custom 'fix-ups' by passing this option a reference to a subroutine. The subroutine will be called once for each record and will be passed a list of the key values for the record. The routine must return the same number of elements each time it is called, but this may be less than the number of values passed to it. You might use this option to combine multiple key values into one (eg: using sprintf).
Note: You can enable both the NormaliseKeySpace and the KeyFilterSub options - space normalisation will occur first.
This option serves two purposes: it enables disk buffering rather than the default memory buffering and it allows you to specify where on disk the data should be buffered. Disk buffering will be slower than memory buffering, so don't ask for it if you don't need it. For more details, see "IMPLEMENTATION".
Note: It is safe to specify the same temporary directory path for multiple instances since each will create a uniquely named subdirectory (and clean it up afterwards).
The disk buffering mode actually sorts chunks of records in memory before saving them to disk. The default chunk size is 10 megabytes. You can use this option to specify an alternative chunk size (in bytes) which is more attuned to your available resources (more is better). A suffix of 'K' or 'M' is recognised as kilobytes or megabytes respectively.
If you have not enabled disk buffering (using 'TempDir'), the MaxMem option has no effect. Attempting to sort a large document using only memory buffering may result in Perl dying with an 'out of memory' error.
If your SAX parser can do validation and generates ignorable_whitespace() events, you can enable this option to discard these events. If you leave this option at its default value (implying you want the whitespace), the events will be translated to characters() events.
A simple element path syntax is used in two places:
In each case you can use just an element name, or a list of element names separated by forward slashes. eg:
Record => 'ul/li', Keys => 'name'
If a 'Record' path begins with a '/' then it will be anchored at the document root. If a 'Keys' path begins with './' then it is anchored at the current record element. Unanchored paths can match at any level.
A 'Keys' path can include an attribute name prefixed with an '@' symbol, eg:
Keys => './@href'
Each element or attribute name can include a namespace URI prefix in curly braces, eg:
Record => '{}li'
If you do not include a namespace prefix, all elements with the specified name will be matched, regardless of any namespace URI association they might have.
If you include an empty namespace prefix (eg: '{}li') then only records which do not have a namespace association will be matched.
In order to arrange records into sorted order, this module uses buffering. It does not need to buffer the whole document, but for any sequence of records within a document, all records must be buffered. Unless you specify otherwise, the records will be buffered in memory. The memory requirements are similar to DOM implementations - 10 to 50 times the character count of the source XML. If your documents are so large that you would not process them with a DOM parser then you should enable disk buffering.
If you enable disk buffering, sequences of records will be assembled into 'chunks' of approximately 10 megabytes (this value is configurable). Each chunk will be sorted and saved to disk. At the end of the record sequence, all the sorted chunks will be merged and written out as SAX events.
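The chunk-then-merge strategy is classic external sorting. A minimal in-memory sketch of the idea (illustrative Python; the real disk buffering streams the sorted chunks from temporary files rather than keeping them in lists):

```python
import heapq

def external_sort(records, chunk_size, key):
    # Sort fixed-size chunks independently (stand-ins for the sorted
    # temporary files written to TempDir) ...
    chunks = [sorted(records[i:i + chunk_size], key=key)
              for i in range(0, len(records), chunk_size)]
    # ... then lazily merge the sorted chunks into one ordered stream.
    return list(heapq.merge(*chunks, key=key))

print(external_sort([5, 1, 4, 2, 3, 0], 2, key=lambda r: r))
# -> [0, 1, 2, 3, 4, 5]
```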
The memory buffering mode represents each record as an XML::Filter::Sort::Buffer object and uses XML::Filter::Sort::BufferMgr objects to manage the buffers. For details of the internals, see XML::Filter::Sort::BufferMgr.
The disk buffering mode represents each record as an XML::Filter::Sort::DiskBuffer object and uses XML::Filter::Sort::DiskBufferMgr objects to manage the buffers. For details of the internals, see XML::Filter::Sort::DiskBufferMgr.
ignorable_whitespace() events shouldn't be translated to normal characters() events - perhaps in a later release they won't be.
XML::Filter::Sort requires XML::SAX::Base and plays nicely with XML::SAX::Machines.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/dist/XML-Filter-Sort/lib/XML/Filter/Sort.pm | CC-MAIN-2013-20 | refinedweb | 1,566 | 59.64 |
Could you please give a hint on how to run an existing hoc file that includes some GUI (a ses file) using Python? I have PyNEURON installed and run my hoc file with
Code: Select all
from neuron import *
h('load_file("mosinit.hoc")')
Code: Select all
In [6]: h('load_file("mosinit.hoc")')
NEURON: Couldn't find: nrngui.hoc in mosinit.hoc near line 1
 load_file("nrngui.hoc")
                        ^
        0
NEURON: lambda_f not declared at the top level in CA1template.hoc near line 212
 external lambda_f
                  ^
        xopen("CA1template.hoc")
        execute1("{xopen("CA1template.hoc")}")
        load_file("CA1template.hoc")
        xopen("mosinit.hoc")
        and others
        0
NEURON: syntax error in CA1template.hoc near line 214
 forsec all { nseg = int((L/(0.1*lambda_f(100))+.9)/2)*2 + 1 }
                                                              ^
        xopen("mosinit.hoc")
        execute1("{xopen("mosinit.hoc")}")
        load_file("mosinit.hoc")
        0
Out[6]: 1
Code: Select all
load_file("nrngui.hoc")
load_file("CA1template.hoc")
Code: Select all
load_file("session.ses")
I basically need Python to plot the output of NEURON code, and run/control simulations on the top level. How do I do it right?
Thank you! | https://www.neuron.yale.edu/phpBB/viewtopic.php?p=10333 | CC-MAIN-2020-16 | refinedweb | 177 | 54.18 |
egison
Programming language with non-linear pattern-matching against non-free data
Module documentation for 4.1.2
- Language
- Language.Egison
- Language.Egison.AST
- Language.Egison.CmdOptions
- Language.Egison.Completion
- Language.Egison.Core
- Language.Egison.Data
- Language.Egison.Desugar
- Language.Egison.Eval
- Language.Egison.EvalState
- Language.Egison.IExpr
- Language.Egison.MList
- Language.Egison.Match
- Language.Egison.Math
- Language.Egison.MathOutput
- Language.Egison.Parser
- Language.Egison.Pretty
- Language.Egison.PrettyMath
- Language.Egison.Primitives
- Language.Egison.RState
- Language.Egison.Tensor

We can use non-linear pattern matching for non-free data types in Egison. A non-free data type is a data type whose data have no canonical form, i.e. no single standard way to represent an object. For example, multisets are non-free data types because the multiset {a,b,b} has two other syntactically different representations: {b,a,b} and {b,b,a}.
def twinPrimes := matchAll primes as list integer with
  | _ ++ $p :: #(p + 2) :: _ -> (p, p + 2)

take 8 twinPrimes
-- [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
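For readers unfamiliar with Egison's matchAll, the same computation written imperatively (an illustrative Python sketch, not part of the package):

```python
from itertools import count, islice

def primes():
    # Simple unbounded prime generator (trial division; fine for a demo).
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

def twin_primes():
    # Yield consecutive primes that differ by exactly 2.
    prev = None
    for p in primes():
        if prev is not None and p - prev == 2:
            yield (prev, p)
        prev = p

print(list(islice(twin_primes(), 8)))
# -> [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```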
Poker Hands
The following code is a program that determines poker-hands written in Egison. All hands are expressed in a single pattern.
The following is a program to solve the travelling salesman problem in a single pattern-matching expression.

def graph := multiset (string, multiset (string, integer))

-- Egison as a computer algebra system: the quadratic formula
> def t := fst (qF' 1 1 (-1))
> qF' 1 (-t) 1
((-1 + sqrt 5 + sqrt 2 * sqrt (-5 - sqrt 5)) / 4, (-1 + sqrt 5 - sqrt 2 * sqrt (-5 - sqrt 5)) / 4)

-- Differential geometry on the sphere
def x := [| θ, φ |]

def X := [| r * (sin θ) * (cos φ) -- x
          , r * (sin θ) * (sin φ) -- y
          , r * (cos θ)           -- z
          |]

def e_i_j := (∂/∂ X_j x~i)

-- Metric tensors
def g_i_j := generateTensor (\x y -> V.* e_x_# e_y_#) [2, 2]
def g~i~j := M.inverse g_#_#

g_#_# -- [| [| r^2, 0 |], [| 0, r^2 * (sin θ)^2 |] |]_#_#
g~#~# -- [| [| 1 / r^2, 0 |], [| 0, 1 / (r^2 * (sin θ)^2) |] |]~#~#

def x := [| θ, φ |]
def g_i_j := [| [| r^2, 0 |], [| 0, r^2 * (sin θ)^2 |] |]_i_j
def g~i~j := [| [| 1 / r^2, 0 |], [| 0, 1 / (r^2 * (sin θ)^2) |] |]~i~j

-- Christoffel symbols
def Γ_j_l_k := (1 / 2) * (∂/∂ g_j_l x~k + ∂/∂ g_j_k x~l - ∂/∂ g_k_l x~j)
def Γ~i_k_l := withSymbols [j] g~i~j . Γ_j_l_k

-- Exterior derivative
def d %t := !(flip ∂/∂) x t

-- Wedge product
infixl expression 7 ∧
def (∧) %x %y := x !. y

-- Connection form
def ω~i_j := Γ~i_j_#

-- Curvature form
Installation
Installation guide is available on our website.
If you are a beginner of Egison, it would be better to install
egison-tutorial as well.
We also have online interpreter and online tutorial. Enjoy!
Notes for Developers
You can build Egison as follows:
$ stack init
$ stack build --fast
For testing, see test/README.md.
Changes
Changelog
Latest
4.1.0
New Features
- Enabled user-defined infixes for expressions and patterns:
- Allowed let expressions to decompose data. Unlike match expressions (of Egison), this does not require matchers and the decomposition pattern is limited.
> let (x :: _) := [1, 2, 3] in x
1
> let (x :: _) := [] in x
Primitive data pattern match failed
stack trace: <stdin>
- Enabled data decomposition at lambda arguments.
> (\(x, _) -> x) (1, 2)
1
- Implemented partial application.
> let add x y := x + y in map (add 1) [1, 2, 3]
[2, 3, 4]
- Huge speedup in mathematical programs:
- Reimplemented math normalization, which was originally implemented in Egison, to the interpreter in Haskell.
- Implemented lazy evaluation on tensor elements.
- Added new syntax for symmetric / anti-symmetric tensors.
Backward-incompatible Changes
- Changed the syntax to start definitions with the def keyword.
def x := 1
- io was previously defined as a syntactic construct, but it has been changed into a primitive function. Namely, users will need to wrap the arguments to io in parentheses, or insert $ after io.
-- Invalid
io isEOF ()

-- OK
io (isEOF ())
io $ isEOF ()
Miscellaneous
- Added a command line option --no-normalize to turn off math normalization implemented in the standard math library.
- Revived TSV input options:
- Deprecated redefine.
4.0.3
- Renamed f.pi into pi.
4.0.1
- Fixed a bug of not-patterns inside sequential patterns.
- Deprecated procedure (replace it with an anonymous function).
4.0.0
- Enabled the Haskell-like new syntax by default. | https://www.stackage.org/lts-18.6/package/egison-4.1.2 | CC-MAIN-2022-40 | refinedweb | 714 | 54.42 |
A shared library is a library that is loaded into physical memory only once and reused by multiple processes via virtual memory. Generally, shared libraries are .so files (or .dll files on Windows).
Why shared libraries?
- They reduce memory consumption if used by more than one process, and they reduce the size of the executable.
- They make developing applications easier: a small change in the implementation of a library function doesn't require users to recompile and relink their application code every time. You only need to relink if you make incompatible changes, such as adding arguments to a call or changing the size of a struct.
Let us see how to create a shared library on Linux. We use the following source code files for this post.
calc_mean.h
#ifndef calc_mean_h__
#define calc_mean_h__

double mean(double, double);

#endif // calc_mean_h__

calc_mean.c
double mean(double a, double b) { return (a+b) / 2; }

main.c - We are including our shared library in this application.
#include <stdio.h>
#include "calc_mean.h"

int main(int argc, char* argv[])
{
    double v1, v2, m;

    v1 = 5.2;
    v2 = 7.9;

    m = mean(v1, v2);

    printf("The mean of %3.2f and %3.2f is %3.2f\n", v1, v2, m);

    return 0;
}
1. Creating Object File with Position Independent Code
All the code that goes into a shared library needs to be position independent. We can make gcc emit position-independent code by passing it one of the command-line switches -fpic or -fPIC (the former is preferred, unless the modules have grown so large that the relocatable code table is simply too small in which case the compiler will emit an error message, and you have to use -fPIC).
First we will create object files for all the .c files that go into the shared library.
gcc -c -fPIC calc_mean.c -o calc_mean.o

Above we are compiling calc_mean.c with the -fPIC option and generating the calc_mean.o object file.

2. Creating the Shared Library

gcc -shared -o libcalc_mean.so calc_mean.o

The above command, on success, produces a shared library named "libcalc_mean.so".
- -shared: Produces a shared object which can then be linked with other objects to form an executable.
3. Using the Shared Library
Now let us link the created shared library with our application. Compile main.c as shown below
$ gcc -o test main.c -lcalc_mean
/usr/bin/ld: cannot find -lcalc_mean
collect2: ld returned 1 exit status

The linker doesn't know where to find libcalc_mean. But why?
GCC has a list of places to look by default for shared libraries, but our directory is not in that list. Bingo that's the reason compilation failed at linking level.
Now we need to tell GCC where to find libcalc_mean.so. We will do that with the -L option.
gcc -o test main.c -lcalc_mean -L/home/cf/slib
- The -l option tells the compiler to look for a file named libsomething.so. The "something" is specified by the argument immediately following "-l", i.e. -lcalc_mean looks for libcalc_mean.so.
- The -L option tells the compiler where to find the library. The path to the directory containing the shared libraries follows "-L". If no "-L" is specified, the compiler will search the usual locations. "-L." means looking for the shared libraries in the current directory, and "-L/home/cf/slib" means looking for them under the "/home/cf/slib" path. You can specify as many "-l" and "-L" options as you like.
mv libcalc_mean.so /home/cf/slib

Now compile main.c. It will succeed and create an executable named "test".
Let us check whether the executable can locate our shared library at run time, as shown below.
ldd executablename
$ ldd test
    linux-gate.so.1 => (0x00332000)
    libcalc_mean.so => not found
    libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0x006aa000)
    /lib/ld-linux.so.2 (0x00db9000)

You can see that the runtime linker cannot find our shared library libcalc_mean.so.
Basically libraries are present in /usr/lib amongst other places. Static libraries (.a suffix) are incorporated into the binary at link time, whereas dynamic ones (.so suffix) are referenced by location.
Check PART 2 of this article to understand further.
We dont need to declare the header file "calc_mean.h" in the main.c file because we are any how linking the object file dynamically.
I tried your idea, produces some random value.
"calc_mean.h" file just contains the declaration for the "mean" function which is included into "main.c" file.
I dont say it is mandatory but good coding practice to declare a function before we use. Offcourse still we can successfully compile the code but come across warnings.
Amazing article.Cleanly written and well explained. I thoroughly enjoyed reading it.
Header file needs to be inculded.. else it will take default to ints...
will .so (or dlls) have to supplied..its for your compiler that works before your program execution...
Rajat Paliwal
I am very much happy to read your blog. The explanation is crystal clear . Thanks for sharing the knowledge .
The explanation is crystal clear. Thanks for sharing the info .
Cleaned up the post with latest information and unwanted stuff
Thank you very much. I searched the web few days for this kind of article. Easy to understand and well described. Thank you!
That is great!
If you contrasted it with the command to statically the same, it would have been a complete tutorial. I found after some trial and error that this works.
gcc -o static_main.out main.c -L/home/cf/slib calc_mean.o
Thanks for the quick and effective tutorial.
Good explanation. Very clear. That's what it makes for better understanding. Thank you.
good tutorial
Thats the clever method to explain. I found what I was looking in other tutorial, which actually missing in them. very nice
I found what I was looking in other tutorial, which actually missing in them. very nice
Well explained tutorial ...
Well done! Simple and direct. What is, how to create and how to use.
Thanks for this - it says what must be known without any "smalltalk".
A question to the nerds:
Is it possible to use an initialization routine which is only called once when the library is called first time? And how can this be recognized?
Thank You!
Hi,
Just a little question. I tried to use this and but I'm stuck with another error. Even when I add the lib path and the include path it just says it can't find the header file. How can I fix this? Hope somebody could help.
Thanks for the tutorial.
I have 1 question...when we create our shared library before moving it to the desired location where does it go...Is it the pwd or some other location..plz answer?? | http://codingfreak.blogspot.com/2009/12/creating-and-using-shared-libraries-in.html | CC-MAIN-2017-34 | refinedweb | 1,114 | 69.68 |
How to: Write a parallel_for_each Loop
This example shows how to use the concurrency::parallel_for_each algorithm to compute the count of prime numbers in a std::array object in parallel.
Example

// parallel-count-primes.cpp
// compile with: /EHsc
#include <windows.h>
#include <ppl.h>
#include <array>
#include <numeric>
#include <iostream>

using namespace concurrency;
using namespace std;

// Calls the provided work function and returns the number of milliseconds
// that it takes to call that function.
template <class Function>
__int64 time_call(Function&& f)
{
   __int64 begin = GetTickCount();
   f();
   return GetTickCount() - begin;
}

// Determines whether the input value is prime.
bool is_prime(int n)
{
   if (n < 2)
      return false;
   for (int i = 2; i < n; ++i)
   {
      if ((n % i) == 0)
         return false;
   }
   return true;
}

int wmain()
{
   // Create an array object that contains 200000 integers.
   array<int, 200000> a;

   // Initialize the array such that a[i] == i.
   iota(begin(a), end(a), 0);

   // Count the prime elements in parallel: each thread accumulates into
   // its own thread-local count, and the counts are combined at the end.
   combinable<size_t> counts;
   __int64 elapsed = time_call([&]
   {
      parallel_for_each(begin(a), end(a), [&](int n)
      {
         if (is_prime(n))
            counts.local()++;
      });
   });
   size_t prime_count = counts.combine(plus<size_t>());

   wcout << L"found " << prime_count
         << L" prime numbers in " << elapsed << L" ms." << endl;
}
Compiling the Code
To compile the code, copy it and then paste it in a Visual Studio project, or paste it in a file that is named parallel-count-primes.cpp and then run the following command in a Visual Studio Command Prompt window.
cl.exe /EHsc parallel-count-primes.cpp
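The shape of this sample (each worker keeps a private counter, and the counters are combined at the end) is language-neutral. A rough Python sketch of the same idea (illustrative only, not part of the original sample; threads are used for brevity):

```python
from concurrent.futures import ThreadPoolExecutor

def is_prime(n):
    # Trial division up to sqrt(n); enough for a demonstration.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def parallel_prime_count(limit, workers=4):
    # Split [0, limit) into one contiguous slice per worker; each worker
    # accumulates a private count (like combinable<T>.local()), and the
    # per-worker counts are combined at the end.
    step = (limit + workers - 1) // workers
    slices = [range(i, min(i + step, limit)) for i in range(0, limit, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        counts = pool.map(lambda r: sum(map(is_prime, r)), slices)
    return sum(counts)

print(parallel_prime_count(100))  # -> 25 primes below 100
```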
Robust Programming.
See also
Parallel Algorithms
parallel_for_each Function
Object and collection initializers are a C# 3.0 feature that allows you to create objects and collections of objects in an expression context instead of in a statement context. Sometimes this is called in-line initialization of objects.
When doing FP, we want to create tuples (next topic), and we have to use object and collection initializers in order to do this.
It is very convenient and much more readable to create object collections/graphs/trees using object initializers.
There are a few important terms to understand for this discussion. If we define these terms, it will make it much easier to talk about object initialization.
An object graph is a number of C# objects that are tied together in some significant way. For example, you might create a Rectangle object, which contains multiple Point objects:
public class Point
{
private int x, y;
public int X { get { return x; } set { x = value; } }
public int Y { get { return y; } set { y = value; } }
}
public class Rectangle
{
private Point p1 = new Point();
private Point p2 = new Point();
public Point P1 { get { return p1; } }
public Point P2 { get { return p2; } }
}
You would typically create an object graph as follows:
Rectangle r = new Rectangle();
r.P1.X = 1;
r.P1.Y = 2;
r.P2.X = 3;
r.P2.Y = 4;
An important idea to understand here is the difference between statement context and expression context. A statement context is one where the compiler expects and can parse a statement. An expression context is much more limited. As an example, you can pass an expression as an argument to a method, but you are not allowed to write a complete statement where an argument is expected.
One of the most important characteristics of object initializers is that it allows you to create an entire object hierarchy in an expression context instead of a statement context.
The above object hierarchy could only be initialized in a context where statements are allowed, such as in the body of a method, or maybe when initializing a member variable.
An expression context can be found anywhere in the language where you are allowed to write an expression. If you need to pass an object hierarchy as an argument to a method, previously, you needed to create the objects, and then pass the root object to the method. In contrast, with C# 3.0 you can new up the objects in-line as a parameter to the method.
The best way to understand object initializers is to start with the simplest, easiest case and progress from there.
If we have a class Person, as follows:
public class Person
{
string name;
int age;
bool canCode;
public string Name
{
get { return name; }
set { name = value; }
}
public int Age
{
get { return age; }
set { age = value; }
}
public bool CanCode
{
get { return canCode; }
set { canCode = value; }
}
}
Then C# 3.0 allows us to create and initialize an object of type person with the following syntax:
new Person {
Name = "John Doe", Age = 31, CanCode = true
}
This is a pattern that is semantically equivalent to:
Person value = new Person();
value.Name = "John Doe";
value.Age = 31;
value.CanCode = true;
You can use the above object initializer in an expression context. In the following example, the object is initialized, and then passed as an argument to a method:
PrintPerson(
new Person {
Name = "John Doe", Age = 31, CanCode = true
}
);
In the above example, the code called the Person constructor that didn't have arguments. Of course, because no constructors were defined, there was a default constructor created by the compiler without arguments. You could have included the parentheses when calling the constructor, as follows:
new Person() {
Because the code is calling the constructor that takes no arguments, you can eliminate the parentheses.
If there were a constructor defined that took the name as an argument, you could call that constructor in the object initializer, as follows:
new Person("John Doe") {
Age = 31, CanCode = true
Sometimes an object contains other objects. This is a very common pattern.
You can initialize the above Rectangle class with its embedded Point objects as follows:
new Rectangle {
P1 = {
X = 0,
Y = 1
},
P2 = {
X = 2,
Y = 3
One point to make here - you don't need to use the new operator on the two points because they were created when the Rectangle object was created.
Using C# 3.0, you can initialize a collection with elements for the newly created collection. A collection initializer consists of a sequence of element initializers, enclosed by braces and separated by commas. Each element initializer specifies an element to be added to the collection object being initialized.
You can initialize a collection of integers like this:
List<int> listInt = new List<int> {
2, 4, 6, 8
};
foreach (int i in listInt)
Console.WriteLine(i);
This outputs:
2
4
6
8
You can also initialize a collection of objects. Using the Point class defined earlier in this topic, you can initialize a collection as follows:
List<Point> listPoints = new List<Point> {
new Point {
X = 1,
Y = 2
X = 20,
Y = 40
foreach (Point p in listPoints)
Console.WriteLine("{0}:{1}", p.X, p.Y);
1:2
20:40
If you would like to receive an email when updates are made to this post, please register here
RSS
Trademarks |
Privacy Statement | http://blogs.msdn.com/ericwhite/pages/Object-and-Collection-Initializers.aspx | crawl-002 | refinedweb | 886 | 59.33 |
to remember
django.db.backends.postgresql.
There is now no question that psycopg2 is the database driver to use when interacting with Postgres from Python. However, the old name will continue to work, for backwards compatibility.
JSONField
The release of Django 1.8 brought us built-in support for Postgres' useful
HSTORE data type, via
HStoreField, but the release of Django 1.9 has us even more excited. Django now ships with full support for Postgres' indexable
JSONB datatype, as
JSONField.
Here's an example of what
JSONField allows us to easily do within Django models:
Profile.objects.create( name='Arthur Dent', info={ 'residences': [ {'name': 'Earth', 'type': 'planet', 'status': 'destroyed'}, {'name': 'Heart of Gold', 'type': 'ship'} ] } )
Before, without
JSONField, this simple metadata would require a dedicated
residences table, with a
status column that would rarely be utilized. With
JSONB, you have total flexibility of the shape of your data, without over-burdening your database schema.
You can query the data, which can be fully indexed by Postgres, just as easily:
>>> Profile.objects.filter(info__residences__contains={'name': 'Earth'})
[<Profile: Arthur Dent>]
JSONB is available on Postgres versions 9.4 and above, on all Heroku Postgres plans.
Database Transaction Commit Hooks
Django's database innards have a new hook available for executing a callable after a database transaction has been committed successfully: on_commit(). The code originates from the django-transaction-hooks project, which has been fully integrated into Django proper.
Here's an example of using on_commit() with WebHooks:
from datetime import datetime

import requests
from django.db import connection

def alert_webservice():
    """Alerts webservice of database transaction events."""
    # Prepare WebHook payload.
    time = datetime.utcnow().isoformat()
    data = {'event': {'type': 'transaction', 'time': time}}

    # Send the payload to the webservice.
    r = requests.post('', data=data)

    # Ensure that request was successful.
    assert r.ok

# Register alert_webservice transaction hook.
connection.on_commit(alert_webservice)
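Conceptually, the machinery is just a queue of callables flushed when the transaction commits. A toy sketch of that idea (illustrative only; Django's actual implementation also handles rollbacks and savepoints):

```python
class Connection:
    """Toy connection that runs registered hooks after commit."""

    def __init__(self):
        self._hooks = []

    def on_commit(self, func):
        # Defer func until the current "transaction" commits.
        self._hooks.append(func)

    def commit(self):
        # Flush the queue; each hook runs exactly once, post-commit.
        hooks, self._hooks = self._hooks, []
        for func in hooks:
            func()

conn = Connection()
events = []
conn.on_commit(lambda: events.append("committed"))
conn.commit()
print(events)  # -> ['committed']
```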
Deploy Django 1.9 on Heroku Today
As always, Django 1.9 is fully supported by Heroku Python. Click the deploy button below to instantly deploy a free instance of a Django 1.9 application to your Heroku account, ready for hacking. | https://blog.heroku.com/django_1_9_s_improvements_for_postgres | CC-MAIN-2018-17 | refinedweb | 346 | 50.73 |
I'm attempting problem 7 on Project Euler. It is asking for the 10,001th prime number. Now, in my head I have a good idea on how to tackle this one systematically, but I'm having a hard time translating this into code. My method for finding this prime number is this (please tell me if there are any flaws in my logic!):
Any number that is not prime can be broken up into smaller prime numbers. Thus, if some number n cannot be divided evenly into any of the prime numbers smaller than it, then n must be a prime number. With that being said, one could identify the 10,001th prime number by starting at 2, adding 1 (3), and checking to see if this number can be divided evenly by 2. Since 3 cannot be divided evenly by 2, it is also prime and can be added to the "prime list". Then the next number, 4, could be checked by dividing it by 2 and 3. Since it can be divided by at least one of these numbers (2), then it is not prime. Then 5 would be checked, and since it cannot be evenly divided by 2 or 3, it is prime and would be added to the list. This process would continue until 10001 prime numbers are present in the list, and the final one would be the answer.
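For reference, the procedure described here (grow a list of primes and trial-divide each new candidate against it) can be sketched compactly. This is an illustrative Python rendering of the stated logic, not the poster's Java:

```python
def nth_prime(n):
    # A candidate is prime iff no previously found prime divides it evenly.
    primes = [2]
    candidate = 3
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes[-1]

print(nth_prime(6))  # -> 13, the 6th prime
```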
I know that was probably an unnecessary explanation, but I wanted to give a clear explanation of my intention behind writing my code. So..here it is:
package find.a.prime.number;

import java.util.Scanner;

public class FindAPrimeNumber {

    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int[] numberOfprimes = new int[10001];
        numberOfprimes[0] = 2;
        int test = 3;
        int numberIndices = 0;
        int b = 0;
        int a = 0;

        while (numberIndices <= 10001) {
            while (b <= numberIndices) {
                if (test % numberOfprimes[b] != 0) {
                    if (b == numberIndices) {
                        numberOfprimes[b] = a;
                        numberIndices = numberIndices + 1;
                    }
                    b = b + 1;
                } else {
                    test = test + 1;
                    break;
                }
            }
            b = 1;
        }
        System.out.println("The 10,001st prime number is " + test);
    }
}
Alright, so here are my two questions..
1) Am I even approaching this correctly? If not, feel free to nudge me in the right direction. And by nudge, I do not mean "give me the answer". I'd like to figure this out on my own.
2) When I compile this program, it says that I can't "divide by zero" on the line "if(test%numberOfprimes[b] != 0){". Why is it doing that? When does numberOfprimes[b] ever equal zero?
Thanks in advance for taking your time reading this! And again, try not to judge me too harshly! I just started learning this a couple days ago, so I'm still very much a novice..
-Jacob
This post has been edited by jacobsnakob94: 02 October 2011 - 07:00 PM | https://www.dreamincode.net/forums/topic/249568-question-about-finding-prime-numbers-java/ | CC-MAIN-2018-26 | refinedweb | 476 | 73.47 |
I've written a simple program that output's my name, course, and a time \ date stamp.
Having researched the code to do so, I've used the includes for both the ctime and time.h libraries.
I leveraged localtime_s and asctime_s to actually convert to string etc., and the program runs fine when I step through it in Visual Studio.
However, we're working with CGI and this program will ultimately be called up from the cgi-bin directory of a web server. The first thing
that needs to be done after copying it to the webserver is to compile it. That's where my problem occurs.
Using the console command: g++ lab5a.cpp -o lab5a.exe the file is supposed to compile and be converted to a .exe.
However, instead I receive errors:
localtime_s was not declared in this scope
asctime_s was not declared in this scope
I "believe" it's because of the included libraries, but not sure.
They are:
Here's the actual timestamp block of code:Here's the actual timestamp block of code:HTML Code:
#include <iostream>
#include <ctime>
#include <time.h>
Of course, I go on to error check and complete the block, but you get the picture.Of course, I go on to error check and complete the block, but you get the picture.HTML Code:
//Begin date and time instructions
time_t curTime;
struct tm locTime;
const int TimeStrLen = 26;
char timeStr[ TimeStrLen ];
if ( ( -1 != time( &curTime ) ) // Seconds since 01-01-1970
&& ( 0 == localtime_s( &locTime, &curTime ) ) // Convert to local time
&& ( 0 == asctime_s( timeStr, TimeStrLen, &locTime ) ) // Convert to string
)
Can somebody shed light on the errors and what might stop this code from compiling.
All that I've read points to deprecated code, specifically in the _s lines. | http://forums.codeguru.com/printthread.php?t=535843&pp=15&page=1 | CC-MAIN-2016-18 | refinedweb | 295 | 71.95 |
Lenses, also known as functional references, are a powerful way of looking at, constructing, and using functions on complex data types. They're also, unfortunately, a very new and complex subject making them challenging to learn. This tutorial intends to help lay out the basics of lensing.
I'm here assuming that you're familiar with moderate complexity Haskell. Truly, understanding the use of lenses isn't terribly difficult, but the phrasing, type errors, and use of Template Haskell can be confusing. I suggest below that if you do not recognize a snippet of Haskell entirely, try to read through it. Again, the lens concept isn't challenging, even if it looks that way at first
The first complexity of lensing is that there are a variety of libraries offering "functional references" or "lenses", some of which are compatible and some of which aren't. These include
data-accessor,
fclabels,
lenses,
data-lens, and
lens. The newest, largest, and most active library is
lens offering the
Control.Lens module and the remainder of this article uses it.
import Control.Lens
Feel free to take a look at the Haddock documentation, but beware it's quite dense, terse, and challenging. Lenses are a simple concept, but also very general.
What is a lens anyhow?
At its simplest, a lens is a value representing maps between a complex type and one of its constituents. This map works both ways—we can get or "access" the constituent and set or "mutate" it. For this reason, you can think of lenses as Haskell's "getters" and "setters", but we shall see that they are far more powerful.
data Arc = Arc { _degree :: Int, _minute :: Int, _second :: Int } data Location = Location { _latitude :: Arc, _longitude :: Arc } -- This is a TH splice, it just creates some functions for us automatically based on the record functions in 'Location'. We'll describe them in more detail below. $(makeLenses ''Location)
Here we generate some lenses automatically for a record type
Location using Template Haskell. Lenses are easy to write on your own, but we'll treat them as black boxes for a while.
The underscores in the record names
_degree,
_minute, etc. are a
Control.Lensconvention for generating TH.
The result of this TH splice is the creation of two lenses, one corresponding to each field of the record.
latitude :: Lens' Location Arc longitude :: Lens' Location Arc
If you're following along in GHC, though, you'll get a bit of a surprise already. The inferred types of these lenses are quite exotic, like
latitude :: Functor f => (Arc -> f Arc) -> Location -> f Location, betraying that
Lens'is simply a strange
typesynonym. We'll understand this in much more depth later, but at first it's required to just remember the simpler type
Lens'.
We can use lenses as both getters and setters on
Location types.
getLatitude :: Location -> Arc getLatitude = view latitude setLatitude :: Arc -> Location -> Location setLatitude = set latitude
Which is so simple it almost makes us wonder why we ever bothered with the whole lens concept! After all, we can already write getters and setters using record syntax.
getLatitudeR :: Location -> Arc getLatitudeR (Location { _latitude = lat }) = lat setLatitudeR :: Arc -> Location -> Location setLatitudeR lat loc = loc { _latitude = lat }
so all we've bought so far with lenses is the ability to wrap these two functions up into a single value. More power is to come, but this intuition is a great first step. In fact, it's the second way we'll see to build lenses, using the function
lens :: (c -> a) -> (c -> a -> c) -> Lens' c a which takes a getter and setter and combines them into a lens!
Using
lens we can see how getters and setters turn into lenses and even note a law of lenses
lens getLatitudeR (flip setLatitudeR) === -- we can replace the getters and setters with their lens versions lens getLatitude (flip setLatitude) === -- which have these definitions lens (view latitude) (flip $ set latitude) === -- which is identical to latitude -- OR, for all lenses, l l == lens (view l) (flip $ set l)
First joys of abstraction
So what exactly do we buy, wrapping "getters" and "setters" up together? Well, for one, we can forgo record syntax (for better or worse) and export just the lenses instead of the record functions if we like. For another, we can have other kinds of combinators to operate on these lenses for affecting the "focal" record values.
For instance, modification is immediately a combinator,
over (and this is built in to the library itself)
modifyLatitude :: (Arc -> Arc) -> (Location -> Location) modifyLatitude f = latitude `over` f -- which wraps the motif modifyLatitude :: (Arc -> Arc) -> (Location -> Location) modifyLatitude f lat = setLatitude (f $ getLatitude lat)
So,
over allows us lift a function between the getter and the setter, to create a function which modifies just a tiny part of the greater whole. Really,
over is nothing special—we've trivially built it from
getLatitude and
setLatitude, but you can begin to see the difference in thought. All of these various update/accessor functions have been rolled into a single value,
We can thus think of a lens as focusing in on a smaller part of a larger object.
That intution is powerful.
Building telescopes
So now that we have a basic understanding of lenses, let's build some more.
$(makeLenses ''Arc) getDegreeOfLat :: Location -> Int getDegreeOfLat = view degree . view latitude setDegreeOfLat :: Int -> Location -> Location setDegreeOfLat = over latitude . set degree
Perfect! We can compose our getter and setter functions to dive more deeply. We could even combine these deeper, "more focused" lenses to form a new lens.
degreeOfLat'Manually :: Lens' Location Int degreeOfLat'Manually = lens getDegreeOfLat (flip setDegreeOfLat)
But this is getting a little out of control. Haskell is all about having mind-sized chunks of computational value which we combine meaningfully. Is there a way to directly combine lenses? Yes, and the method may come as a surprise
degreeOfLat = latitude . degree -- well, that was easy!
Lenses, with their weird underlying type involving
foralls and functions of
Functors compose... much like functions do! The only difference is you read the composition right-to-left instead of the usual left-to-right function chaining. This can be a little confusing for a functional programmer, but if you squint it looks a lot like nested property referencing on an object in an OO language.
As an aside, while all of the lens libraries appreciate and allow for easy composition of lenses, only certain representations can be combined using
(.)and it was quite a breakthrough to discover this. The details will be fleshed out later.
You might think of these as placing two (real world) lenses in series—together they refine the optics to focus more and more deeply into the subject.
(.) :: Lens' a b -> Lens' b c -> Lens' a c
Other kinds of optics
If lens composition gives us telescopes, can we build other kinds of optical machinery?
Let's look at other basic ways of composing Haskell types. Perhaps the two most essential methods are pairs and eithers, i.e.
(,) and
Either.
Lenses can be combined in ways analogous to the first two—you can link two lenses to operate in parallel using
alongside (forming a pair of glasses, perhaps)
latitude `alongside` longitude :: Lens' (Location, Location) (Arc, Arc)
and you can link two lenses such that either the first or the second is used
choosing degree minute :: Lens' (Either Arc Arc) Int
which you might think of as "teeing" two beams of lensed light together—like a beam splitter run in reverse. It lets us take lenses which focus from different locations but into the same type and combine then.
Postscript
If this were truly all there were to lenses it might be enough to find a place in your toolkit. They provide a new abstraction—the idea of holding on to a value that's focused on a constituent of a larger type—and a meaningful algebra for combining (via pairs and eithers, products and coproducts), composing, and modifying these values. They subsume record syntax while minimizing book-keeping on getters and setters. They even include a cute syntax throwing lenses
over functions.
But this is just one module of the more than 40 included in the library---indeed, nothing in this article besides the unnecessary
makeLenses Template Haskell tricks exists outside of
Control.Lens.Lens.
Furthermore, the same trick that allows us to compose lenses via
(.) unlocks a very general methodology for thinking about mapping and traversing over data structures that can be taken much further. Later we'll see how to use lenses within monads, to build powerful roundtrip transformations, abstract across cons'able or index'able data structures, or even to subsume zippers and generics.
We'll also see how the underlying Rank 2 structure of the lens allows for all of this functionality to be easily composed.
So, until next time—cheers!
Thanks to Serge Le Huitouze for edits. | https://www.schoolofhaskell.com/school/to-infinity-and-beyond/pick-of-the-week/basic-lensing | CC-MAIN-2016-50 | refinedweb | 1,482 | 52.09 |
> On Fri, Aug 20, 2010 at 10:48 PM, Wu Fengguang <fengguang.wu@intel.com> wrote:> > On Fri, Aug 20, 2010 at 05:31:29PM +0800, Michael Rubin wrote:> >>?> > LOL. I know about these counters. This goes back and forth a lot.> The reason we don't want to use this interface is several fold.Please don't use LOL if you want to get good discuttion. afaict, Wu havedeep knowledge in this area. However all kernel-developer don't know allkernel knob.> >.In nowadays, many distro mount debugfs at boot time. so, can you pleaseelaborate you worried risk? even though we have namespace.> 3) Full system counters are easier to handle the juggling of removable> storage where these numbers will appear and disappear due to being> dynamic.> > The goal is to get a full view of the system writeback behaviour not a> "kinda got it-oops maybe not" view.I bet nobody oppose this point :)--To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2010/8/23/458 | CC-MAIN-2017-22 | refinedweb | 184 | 77.13 |
Value ConvertersThe first thing we should ask is; what is Value Converter(s)?Well, to say it in simple words it is a Converter that converts value based on input as Source and provides output as Target.Hope the following figure make things easier to understand:As you can see from the above image we have both way arrows in between Source and Target. That means: the value converter not only designed to take input from the target and modify the source, it also works when source data is updated on target. As you are familiar with two way bindings in WPF.We will straight away create a WPF Application and create simple converters to understand it.Creating WPF ApplicationFire up Visual Studio 2008 and Create a WPF Application, name it as ConvertersInWPF.Now let's think what simple converter we can use! Let's say we will create one or more Converters. So we will keep inside a folder:Now add a class to the Converter Folder, name it as StatusToColor.csHere's the idea, we will have a list of Requests and we will display the status of the request in colors.Let's have the following entity class that represent Request structure.As you see in above code display we have an Enum as Status and we have 5 values inside of it; such as: Submitted, Assigned, InProgress, Resolved, and Closed.Now based on above information we will modify our StatusToColor.csImplement IValueConverter in the above class.As the namespace is not referred, we have to type it upto end and after resolving you would see the following namespace to be used.Now we have to implement the methods in this Interface. So resolve the interface again.Select the Implement interface option: the following methods would be defined as follows:As we discussed in the beginning that the Source and Target can convert each other so that's why we have two methods defined. Convert would do conversion from Source -> Target and ConvertBack would do conversion from Target -> Source.For the time being we will proceed with Convert method. 
First we will define the Enum here again.We can use the same name but for understanding I am giving a different name.Remove the throw new Exception(); and replace with following code.The above code display for converting and returning is a very simple logic to convert.Now for user's knowledge about the colr use we would put the colors with some information.Now we have bind the converter in the XAML, for that first thing we have to do is to use the namespace.Now we have customized the DataGrid, to have a TextBoxColumn and a TemplateColumn to display the Status as defined Color.Now for the Rectangle in TemplateColumn we will fill using the Converter:We are good to go, and run the application. But before that let's have some sample data to display.You would see the colors are reflected as per the data given.Hope this article helps.
Visit in for latest articles.Facebook Page:
©2015
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/uploadfile/dpatra/value-converter-in-wpf-part-i/ | CC-MAIN-2015-27 | refinedweb | 524 | 65.83 |
A few of my UnitTests have a Sleep that is defined in a loop. I want to profile not only each iteration of the test, but the overall time for all iterations, in order to show any non linear scaling. For example, if I profile the "Overall", it includes the time for the sleep. I can use
Stopwatch
[TestMethod]
public void TestMethod1()
{
TestContext.BeginTimer("Overall");
for (int i = 0; i < 5; i++)
{
TestContext.BeginTimer("Per");
doAction();
TestContext.EndTimer("Per");
Sleep(1000);
}
TestContext.EndTimer("Overall");
}
Well, I had a similar problem. I wanted to report some extra data/reports/counters from my tests in the final test result like Visual Studio does and I found a solution.
First, this cannot be done with the way you are trying. There is no direct link between the Load Test and the Unit Test where the TestContext exists.
Second, you have to understand how visual studio creates the reports. It collects data from the performance counters of the OS. You can edit these counters, remove those you don't want and add others you want.
The load test configuration has two basic sections regarding the counters. These are:
The
Counter Sets. These are sets of counters, for example
agent which is added by default. If you open this counter set you will see that it collects counters such as Memory, Processor, PhysicalDisk e.t.c. So, at the end of the test you can see all these data from all your agents. If you you want to add more counters to this counter set you can double click on it (from the load test editor, see picture below) and select
Add Counters. This will open a window with all the counters of your system and select those you want.
The
Counter Set Mappings. Here you associate the counters sets with your machines. By default the
[CONTROLLER MACHINE] and
[AGENT MACHINES] are added with some default counter sets. This means that all the counters contained in the counter sets which are mapped to the
[CONTROLLER MACHINE] will be gathered from your controller machine. The same applies for all your agents.
You can add more counters sets and more machines. By right clicking on the
Counter Set Mappings -->
Manage Counter Sets... a new window opens as below:
As you can see, I have added an extra machine with name
db_1. This is the computer name of the machine and it must be at the same domain with the controller in order to have access to it and collect counters. I have also tagged it as
database server and selected the
sql counter set (default for sql counters but you can edit it and add any counter you want). Now every time this load test is executed, the controller will go to a machine with computer name db_1 and collect data which will be reported at the final test results.
Ok, after this (big) introduction it's time to see how to add your data into the final test results. In order to do this you must create your own custom performance counters. This means that a new Performance Counter Category must be created in the machines you need to collect these data. In your case, in all of your agents because this is where the UnitTests are executed.
After you have created the counters in the agents, you can edit the
Agents counter set as shown above and select your extra custom counters.
Here is a sample code on how to do this.
First create the performance counters to all your agents. Run this code only once on every agent machine (or you can add it in a load test plugin):
void CreateCounter() { if (PerformanceCounterCategory.Exists("MyCounters")) { PerformanceCounterCategory.Delete("MyCounters"); } //Create the Counters collection and add your custom counters CounterCreationDataCollection counters = new CounterCreationDataCollection(); // The name of the counter is Delay counters.Add(new CounterCreationData("Delay", "Keeps the actual delay", PerformanceCounterType.AverageCount64)); // .... Add the rest counters // Create the custom counter category PerformanceCounterCategory.Create("MyCounters", "Custom Performance Counters", PerformanceCounterCategoryType.MultiInstance, counters); }
And here the code of your test:
[TestClass] public class UnitTest1 { PerformanceCounter OverallDelay; PerformanceCounter PerDelay; [ClassInitialize] public static void ClassInitialize(TestContext TestContext) { // Create the instances of the counters for the current test // Initialize it here so it will created only once for this test class OverallDelay= new PerformanceCounter("MyCounters", "Delay", "Overall", false)); PerDelay= new PerformanceCounter("MyCounters", "Delay", "Per", false)); // .... Add the rest counters instances } [ClassCleanup] public void CleanUp() { // Reset the counters and remove the counter instances OverallDelay.RawValue = 0; OverallDelay.EndInit(); OverallDelay.RemoveInstance(); OverallDelay.Dispose(); PerDelay.RawValue = 0; PerDelay.EndInit(); PerDelay.RemoveInstance(); PerDelay.Dispose(); } [TestMethod] public void TestMethod1() { // Use stopwatch to keep track of the the delay Stopwatch overall = new Stopwatch(); Stopwatch per = new Stopwatch(); overall.Start(); for (int i = 0; i < 5; i++) { per.Start(); doAction(); per.Stop(); // Update the "Per" instance of the "Delay" counter for each doAction on every test PerDelay.Incerement(per.ElapsedMilliseconds); Sleep(1000); per.Reset(); } overall.Stop(); // Update the "Overall" instance of the "Delay" counter on every test OverallDelay.Incerement(overall.ElapsedMilliseconds); } }
Now, when your tests are executed, they will report to the counter their data. At the end of the load test you will be able to see the counter in every agent machine and add it to the graphs. It will be reported with MIN, MAX and AVG values.
I hope I helped. :) | https://codedump.io/share/2MddSq4SBMDi/1/can-i-create-a-custom-testcontext-timer-for-unittestloadtest-in-visual-studio | CC-MAIN-2017-04 | refinedweb | 894 | 56.66 |
RationalWiki:Noticeboard
Helios 02:18, 22 May 2007 (CDT):
- RationalWiki's first userbox is here.
- 2nd one here. olliegrind 06:51, 22 May 2007 (CDT)
OK, I have over a hundred of the templates (without names!) stashed in my sandbox - does anyone know an efficient way to upload them? Editing each one, one at a time is tedious! humanbe in 12:51, 22 May 2007 (CDT)
- I dont no but I think Linus would be the best person to ask, he's our tech guy =) --Helios-talk to me 13:12, 22 May 2007 (CDT)
- Well, the overall best way is to click the "move" button at the top of each page, and use the wizard to move it all, create redirects automatically, and generally be cool. --Linus M. 18:04, 22 May 2007 (CDT)
- Wow, that won't work with that. Ah well, we'd better simply C&P them. --Linus M. 18:13, 22 May 2007 (CDT)
- Yeah, been doing it. It looks imposing at first, but then I realized 80% of those UXBs will never be used here, or, if someone wants them, they can make them themselves. But anyway what is the point of putting the "noticeboard" on the front page? The link was enough, wasn't it? It just looks... wrong. humanbe in 18:36, 22 May 2007 (CDT)
I'm uploading my Alaska article. Helios-talk to me 13:39, 22 May 2007 (CDT)
Clean up job for a sysop[edit]
Who wants the unenviable job of moving all the articles about CP to the CP namespace, and all the essays to the essay namespace, and deleting the redirects? Tmtoulouse 19:50, 22 May 2007 (CDT)
-
- I'll do a few, just for the power trip, but then off to work.-AmesG 19:58, 22 May 2007 (CDT)
- Trent, some of the "Essays" just got moved out of the essay space, non? Is there a list of these things somewhere? How did we get enough articles on CP and essays for there to be a "lot" of work already? Point me and I'll do what I can to help. And... at a higher meta-level of sysopness, can you improve the "namespace" article, give it a big highlight for people, make it easy to figure out exactly how to get an article title right? Tanks! And airplanes! humanbe in 20:00, 22 May 2007 (CDT)
- They got moved because there was no such thing as those name spaces yet, I had to move them out so when I created the namespace nothing bad happened to them. Now that I created the namespaces I have to put them back in. Basically atleast. Tmtoulouse 20:07, 22 May 2007 (CDT)
Non-funny vandalism[edit]
User:1234 went on a small spree of non-funny vandalism. This consisted of a series of page moves, apparently intended to hide Main Page, its content-containing template, and its talk page. There were also 8 pages created named Owned through Owned8; 7 of them just contained the word 'Owned'; the 8th was a redirect to Main+Page, which contained photographs of doggies. The doggies page was also used to replace RationalWiki:Site support.
Per my understanding of the emerging community consensus, I did the following:
- blocked User:1234 for 24 hours
- Undid the page moves
- Saved a copy of the doggies page at RationalWiki:Site support/Vandalism/1234
- Deleted the Owned* pages
- Deleted the various nonsense-named redirects left behind by the page moves
If any sysop disagrees with any of those actions, I won't be upset if they're undone. And if any non-sysop disagrees, lemme know and we can talk about it. And if the community thinks I overstepped the consensus or my bounds, speak up about that too. Not that anyone could stop ya. --jtltalk 02:31, 30 May 2007 (CDT)
- He was a busy little chappie, wasn't he? looks like you dun good to me.--Bob_M (talk) 03:19, 30 May 2007 (CDT)
- In his(?) defense, the dog is adorable. But when faced with this sort of "cyber terrorism", it's clear what we must do now. We must disable the Move and Upload functions for non-sysop! And then we must disable registrations! And we must block all new accounts if our gut says there's something fishy going on. And all of this will make our site "grow rapidly"! After all, this worked so extremely well for CP! ;) --Sid 12:15, 30 May 2007 (CDT)
- 95% of right thinking people know conservatives use vandalism as a tactic to make their point. They are also deceitful. ɱ@δ ɱ!ɳHello?/I did this! 15:23, 30 May 2007 (CDT)
Maybe funny vandalism[edit]
New user JtI (that last letter is a capital EYE) moved by User page and User talk page. I'm less sure that's not funny, so feel I'm on less solid grounds blocking, so I've only blocked for 2 hours. Other than that, same as above --jtltalk 02:42, 30 May 2007 (CDT)
- OMGz, batten down the hatches! Lock down the site! Ban all suspicious users! Write articles about evil, deceitful vandals messing up our furniture! hehe humanbe in 12:11, 30 May 2007 (CDT)
Listen, Buddy, we know who you are and IT STOPS HERE! We have filed a case, ID# IJ429K421K/215-ad-421/20932-01 and have given your IP to the FBI (who for some reason sounded like a 12 year old girl, but no matter). You will be PUNISHED! --PalMD-yada yada 12:31, 30 May 2007 (CDT)
- Maybe we should build a virtual pillory? Flippin;-)
Anyone willing to ASCII art it? --ויִכִּ נתֶּרֶפּרֶתֵּר שְׁלֹום!
Update...it turns out that the report I filed was with the cable company. My cable is back on. The real case number is 758dsa258fjtr598advd/23--PalMD-yada yada 12:41, 30 May 2007 (CDT)
We read it while
this ticked.--PalMD-yada yada 13:35, 30 May 2007 (CDT)
-
- Maybe we should use this as a study on how conservative vandals work, because it obviously is a conservative vandal. Conservative, particularly conservative vandals must have a strong desire for attention. Conservatives are almost always more aggressive than liberals here, as conservatives insist on the last word, insist on continuing debate long after it has become tiresome, and repeat complaints after they have been rejected. You know that sort of stuff. Sterile 13:40, 30 May 2007 (CDT) | https://rationalwiki.org/wiki/RationalWiki:Noticeboard | CC-MAIN-2020-05 | refinedweb | 1,091 | 80.92 |
Hi, I am writing a game. Three card brag. The rules of the game are
1.Each player is dealt three cards.
2.The hand is then assigned a rank based on the cards recieved
3. The hand Ranks are 0,1,2,3
4. the hand ranks are then compared and the highest hand rank from that game is the winner, other wise it's a draw.
I'm almost finished with this game, but im having trouble figuring out how to tally each players total game points. the game points are just the values from the hand ranks added to each other from each game. the score should automatically update once the user clicks on the button to start a new game.another trick to it is that the winner of the game automatically gets a point just for winning.
so an example of what im trying to do.
first game:
player 1 has a hand rank of 3//winner
player 1 Total Score:0
player 2 has a hand rank of 2
player 2 Total score: 0
second game:
player 1 has a hand rank of 2//winner
player 1 Total Score:7
player 2 has a hand rank of 1
player 2 Total Score :2
how would i go about doing this, i have to catch the value of the handRank and add it to the last value of the handRank everytime but how would i do this? through an array?? with a for loop running? I don't know, I'm lost. Any suggestions would be appreciated. here is my player.java code for you guys. if you need to see the rest of my code. let me know.
public class Player { private String name=""; private int cardsInHandForGame; private int cardsInHand = 0; private int totalGamePoints; private Card[] hand = new Card[3]; private int handRank; public Player() { totalGamePoints = 0; handRank = 0; for(int i = 0; i < cardsInHandForGame; ++i) { hand[i] = new Card(); } } //Name public void setName(String n) { name = n; } public String getName( ) { return name; } public void setCardsInHandForGame(int numberOfCards) { cardsInHandForGame = numberOfCards; } //Hand Rank public int getHandRank() { return handRank; } //Total Game Points public int getTotalPoints() { return totalGamePoints; } public String toString() { String s =""; ////write this=============================== return s; } //returns the player's name, current hand rank, and total pts public String getNameHandRankAndPoints() { String s = "your hand rank is: "+ handRank /*+"your total score is: " + totalGamePoints*/; //this is where i should output the score!! return s; } //setCard Is called when a card is dealt to the player. //The Frame will call this, passing a Card c, //setting it into the player's hand (at the current //cardsInHand position), increment the number of cards //in the player's hand (cardsInHand++). //When the player has the maximum number of cards for the game, //the player determines the hand rank and assigns points to his score. public void setCard(Card c) { //put the card in the player hand hand[cardsInHand] = c; //increment hand index cardsInHand++; if(cardsInHand == (cardsInHandForGame)) { cardsInHand = 0; determineHandRank(); } } //performs the logic needed to determine the winning hand level //assigns value (0,1,2,3) to handRank. 
//winning hand points are assigned to totalGamePoints public int determineHandRank( ) { //you write this, assign value to handRank, add to points //handRank; holds the current hand's winning rank value //0 = not a winning hand //1 = lowest level, J, Q, K //2 = mid-level, 2 or 3 of any suit or rank //3 = high-level, 3-3's or an Ace //If 3 3's or an Ace in Hand[0], Hand[1], or Hand[2], then Rank = 3 if((hand[0].getRank()== hand[1].getRank() && hand[1].getRank() == hand[2].getRank()) || (hand[0].getRank()== 14 || hand[1].getRank() == 14 || hand[2].getRank() == 14 )) { handRank = 3; } else if((hand[0].getRank() == hand[1].getRank()) || (hand[0].getRank() == hand[2].getRank()) || (hand[1].getRank() == hand[2].getRank())) { handRank = 2; } else if((hand[0].getRank() >10 && hand[0].getRank() < 14) || (hand[1].getRank() >10 && hand[1].getRank() < 14 )|| (hand[2].getRank() >10 && hand[2].getRank() < 14 )) { handRank = 1; } else { handRank = 0; } return handRank; //If 2 or 3 and suit or rank in Hand[0], Hand[1], or Hand[2], then Rank = 2 //If 1 Jack Queen or King in Hand[0], Hand[1], or Hand[2], then Rank = 1 //else = 0 } //If the player has won the hand, give'em an extra point //Note: this will be called from the Frame! public void creditGameWin( ) { totalGamePoints++; } } | https://www.daniweb.com/programming/software-development/threads/426989/suggestions-on-figuring-out-how-sum-the-game-points-card-game | CC-MAIN-2017-51 | refinedweb | 738 | 70.94 |
Opened 9 years ago
Closed 7 years ago
Last modified 7 years ago
#11124 closed (fixed)
ModelAdmin's has_change_permission() and has_delete_permission() methods don't examine their obj parameter
Description
ModelAdmin's
has_change_permission(self, request, obj=None) and
has_delete_permission(self, request, obj=None) don't examine their obj parameters; they always check the permission of the request user on the class registered with the ModelAdmin instance.
As a result, their docstrings, e.g.

    """
    Returns True if the given request has permission to change the given
    Django model instance.

    If `obj` is None, this should return True if the given request has
    permission to change *any* object of the given type.
    """

are slightly misleading.
This applies to trunk and 1.0.x
Attachments (1)
Change History (12)
comment:1 Changed 9 years ago by
comment:2 Changed 9 years ago by
Changed 9 years ago by
comment:3 Changed 9 years ago by
comment:4 Changed 8 years ago by
comment:5 Changed 8 years ago by
I guess the correct way (after #12462 got applied) would be something like this:
def has_change_permission(self, request, user, obj=None):
    opts = self.opts
    perm = opts.app_label + '.' + opts.get_change_permission()
    if obj is None:
        return request.user.has_perm(perm)
    else:
        return request.user.has_perm(perm, obj)
comment:6 Changed 8 years ago by
I should also notice, that we would need the
if obj is None
clause as long as there are backends which are allowed to not support obj as param…
comment:7 Changed 8 years ago by
Okay ignore my previous two posts, it was late yesterday :/ This is the current situation:
The ModelBackend kicks in even if an obj is passed in, which is kind of suboptimal I guess, as according to the docs backends will return False if they don't support row-level perms.

After that got fixed (#12462) the ModelBackend will return False always for obj != None. This means that as soon as a user updates his ModelAdmin to supply obj to the backends he will have to use a row-level backend (those who don't want to use row-level perms could write a dummy backend which passes the check to the ModelBackend but without passing the obj to the ModelBackend).
comment:8 follow-up: 9 Changed 7 years ago by
New Django user here, I ran into this problem. The proper fix to this, IMHO, is to use some logic like this:
if the user object's has_perm supports two arguments:
    return request.user.has_perm(perm, obj)
else:
    return request.user.has_perm(perm)
The key is a test to figure out if the user object supports the two-argument version of perm.
We could test it by trying to call the two-argument version, and if an exception is thrown, reverting to the one-argument version. This of course would require checking to make sure the exception was thrown by our initial call, re-raising exceptions that come from user code.
But we don't have to be that complicated. There is a built-in Python module to solve this problem, the inspect module. In particular, inspect.getargs. So I am thinking something like this:
args, varargs, varkw = inspect.getargs(user.has_perm)
if len(args) >= 2 or varargs is not None:
    return request.user.has_perm(perm, obj)
else:
    return request.user.has_perm(perm)
The only concern is whether inspect is portable to all the Python environments Django must support (e.g. Jython). But that is why testing frameworks were invented, we just need to write a test with a single-argument-only backend and see if the test fails in any supported environment :)
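For what it's worth, the detection idea is easy to demonstrate in present-day Python — note that inspect.getargs/getargspec have since been superseded by inspect.signature, and the two backend classes below are made up purely to exercise the logic:

```python
import inspect

# Hypothetical backends, just to exercise the signature detection.
class OneArgBackend:
    def has_perm(self, perm):
        return perm == "app.change_widget"

class TwoArgBackend:
    def has_perm(self, perm, obj=None):
        return obj is not None

def check_perm(user, perm, obj=None):
    # Look at the bound method's signature to see whether it can accept obj.
    params = inspect.signature(user.has_perm).parameters
    accepts_obj = len(params) >= 2 or any(
        p.kind is inspect.Parameter.VAR_POSITIONAL for p in params.values()
    )
    if obj is not None and accepts_obj:
        return user.has_perm(perm, obj)
    return user.has_perm(perm)

print(check_perm(OneArgBackend(), "app.change_widget"))            # True
print(check_perm(TwoArgBackend(), "app.change_widget", object()))  # True
```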
The really proper fix to this is to deprecate the one-argument call and unconditionally call the two-argument version. It should be put on the list for Django 2.0 or the next release with backwards-compatibility-breaking changes.
comment:9 Changed 7 years ago by
The really proper fix to this is to deprecate the one-argument call and unconditionally call the two-argument version. It should be put on the list for Django 2.0 or the next release with backwards-compatibility-breaking changes.
That's already handled by the current code (see: and). Actually I don't see what your "fix" is supposed to fix; the user currently supports an optional obj parameter for all methods (at least those where it makes sense).
Now that I see the code, I realize these methods are placeholders for user-provided sub-classes to implement per-model-instance checks.
Will upload a patch for the docstrings (to make clear these methods are intended to be overridden for users with advanced needs, just like is done in other ModelAdmin methods' docstrings, and to make clear the obj argument isn't used in the base implementation) shortly.
A month ago I released CodeBetter.Canvas, a simple application with equally simple goals:
- Provide developers with a starting point for new ASP.NET MVC projects
- Provide developers with a learning tool for oft-talked about tools/patterns.
Yesterday I released an updated version of CodeBetter.Canvas to Google Code. You can check it out at. There are a number of new things in this release, including a revamped repository model, the introduction of a model binding framework (these end up doing a lot of heavy lifting), an improved validation framework, and ideas on how to do paged lists.
Enjoy.
What do you think of using OnAuthorization instead of OnActionExecuting? That way it will work with AuthorizeAttribute.
Any chance of you implementing this?
It would make dropping in a different container (StructureMap) a lot easier.
@Roberto @Karl
My current solution, and this is very early days, is to copy the rules from the domain model to the view (or in our case REST contract) model. The way we’re doing it is essentially 3 steps:
1) Take the information that specifies how to map from the view model to the domain model
2) Ask the domain model for the list of rules its applying
3) Use the outputs from 1 & 2 to work out how to apply the rules to the view model instead of the domain model, so basically you're "mapping" the rules.
The secret sauce is PostSharp; essentially you attribute your view model using something like this:
[MapValidationRulesUsing(RuleProviderType = "Security.Domain.GroupRulesProvider, Security.Domain")]
public class GroupContract : IValidatableContract
MapValidationRulesUsing is a PostSharp attribute and its responsible for running the three steps I’ve described then applying the rules to the view model. It does this by copying the rules over and mapping them, so maybe our StringNotEmptyRule was applied to Customer.Name.First in the domain and its copied to run on CustomerViewModel.FirstName. Its more difficult with attributes but I’m working on that.
* Circular Dependency *
However it's far from perfect; notice that RuleProviderType is a string.
Why?
Well, take the concrete example above: GroupContract is in a project called ServiceContract, and GroupRulesProvider/Group are in a project called Service. The Service project references the ServiceContracts one, which is fine.
However we now have a dependency from the ServiceContract to the Service project (obviously). How can we copy rules from a (domain) class in the Service project to a project in ServiceContract using PostSharp if ServiceContract needs to build before Service does?
It would work fine if PostSharp ran after the entire solution had built, but it doesn't. So if I clean the solution and build, then when PostSharp tries to run on the ServiceContract project it will fall over, because the Service project wouldn't yet exist.
My “solution” is to have PostSharp run as a post-build task on the Service project, telling it to run against ServiceContract. I've not tried this yet, but the author of P# has indicated it should work.
I’m not sure if this is a hack too far yet….
Roberto:
I’ve struggled with that exact issue. I end up duplicating my validation attributes, which is a huge NO-NO. It's one of the reasons I made the Credentials a component, so that you could use it as a ViewModel without dragging in the entire User Model.

My only thought on this (so far) is to use a Fluent Validation framework (there’s at least 1 really good one for .NET) and use it to properly manage duplicate validation rules.
We’ll see…
Oops... please disregard the last line of that comment! Apparently it needed some additional editing.
@Karl,
What I meant was, how would you deal with validation when you have a ViewModel pattern in place?
Context:
I assume that we can call your Credential and User entities the Model. In most sample apps I have seen recently, the Models don't get pushed out to the View; there is always a middle man, the ViewModel, that represents the specific data that your View needs. In the Canvas, the attributes for validation are assigned to the model, and I agree that this is the right place for them, but if we follow the ViewModel pattern those attributes would not be accessible to the View.
I assume that we can write a property in our ViewModel that would translate/expose the validation rules in the Model to the ViewModel.
Regards,
Roberto.-
@Andre:
I might write some documentation, but the code is meant to be bare enough to be understandable. I can see how some of the concepts might be very obscure depending on your experience. For example, the indirection introduced by a Dependency Injection framework might make some things seem awfully magical. And of course, if you aren’t at all familiar with NHibernate, all of that might be pretty strange.
Was there something in particular I should focus for documentation/helping people get started?
@Roberto:
Not sure I’m following you. Isn’t that what I have? Attributes on domain models which generate javascript validation code?
Hi Karl,
Have you thought about working in an implementation of the ViewModel pattern into the Canvas. I would love to see a working example on how to handle mapping validations that are assigned as attributes to your model to a viewmodel for a specific controller.
Regards,
Roberto.-
Hi Karl
I managed to install TortoiseSVN, thanks for the tip. Is there any documentation on how to use your project or just to understand it? I’m working on a project that I want to put in MVC and I’m in need of some guidance and some good patterns that, I think, your project will provide.
Thanks in advance
Andre:
Google code uses Subversion. You’ll need a subversion client. You can find a list at (most are free).
Most people use TortoiseSVN (free) – it integrates with the Windows Shell. If that's a bit too much for you, try one of the stand-alone clients on that page. Once installed, you simply do a checkout from:
This is a great opportunity for you to learn about Subversion!
Hi
I would like to see and use your project. I've never worked with Google Code, so how can I download your project to my computer?
Thanks
Gary:
Validation has been moved to the binders; it feels better (largely because I don't mind feeding an IUserRepository into a binder as much as I do into a model). It also gives me that sense of trusting my code – once an entity is past the user input boundary, the system doesn't have to keep checking validity at each level. On the flip side, you get far worse re-use.
All that to say that it isn’t being used. I should have taken it out. Or maybe have the base model binder check for it and call it – in the case where validation does make sense at the entity level.
Karl, I’ve just taken a few minutes to look at your project, so please forgive me if I'm missing something, but I don't see how you're using the IValidate interface. In your other example the User entity inherited directly from it, but that is not the case here.

Am I missing something obvious?
Karl – great idea in keeping it close to a blank canvas… I checked out the code and for me it does make it easier to learn the patterns you are espousing. Timely too… I’ve been looking for good repository examples using Fluent NH. Thanks. | http://codebetter.com/karlseguin/2009/05/25/revisiting-codebetter-canvas/ | crawl-003 | refinedweb | 1,289 | 62.38 |
Hi,
I can't seem to understand why this is not working as expected. The output from the two "cout" loops is different. Can someone please explain why?
Code:

    #include <iostream>
    #include <vector>
    using namespace std;

    int main()
    {
        vector<int> vIntA, vIntB;
        for (int a = 0; a < 6; a++)
        {
            for (int b = 0; b < 5; b++)
            {
                vIntA.push_back(a);
                vIntB.push_back(b);
                cout << "a: " << a << ", b: " << b << "\n";
            }
        }

        for (int a = 0; a < 6; a++)
        {
            for (int b = 0; b < 5; b++)
                cout << "a: " << a << ", vIntA: " << vIntA.at(a)
                     << ", vIntB: " << vIntB.at(b) << "\n";
        }

        return 0;
    }

1st cout:
-a & b work as intended.
2nd cout:
-a & b are OK, as is vIntB. But vIntA is incorrect: mostly zeros until it accesses the 5th element, where it's 1's.
What am I missing here? | https://cboard.cprogramming.com/cplusplus-programming/149272-nested-loop-vector-issue.html | CC-MAIN-2017-26 | refinedweb | 135 | 77.23 |
I'd like to bump this topic one more time. After more testing using
static revision numbers, I am still not able to use Ivy namespaces to
map revision numbers between repositories. There's no problem with
organisations or modules.
This time I used static revision numbers instead of expressions, in
order to guarantee a 1:1 relationship. For example, trying to map
rev=1.0.0.1 to rev=1.0.0-final. It kept failing with an error similar
to below, "inconsistent module descriptor". It claims it "found" a
revision of 1.0.0-final inside ivy-1.0.0.1.xml (which is NOT contained in
the file), although it expected a revision of 1.0.0.1 (which truly IS in
the ivy-1.0.0.1.xml file).
So does namespace mapping actually work for revisions? Is there a test
case for this?
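Just to sanity-check the rule itself: the truncation the <src>/<dest> pair below describes is reproducible with plain java.util.regex, so the pattern and the $r1.$r2.$r3 substitution are at least internally consistent. (mapRev is my name for it; this sketch says nothing about how Ivy applies the rule.)

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RevMap {
    // Mirrors <src rev="(.+)\.(.+)\.(.+)\.(.+)"/> -> <dest rev="$r1.$r2.$r3"/>
    static String mapRev(String rev) {
        Matcher m = Pattern.compile("(.+)\\.(.+)\\.(.+)\\.(.+)").matcher(rev);
        if (!m.matches()) {
            return rev; // rule does not apply; revision passes through unchanged
        }
        return m.group(1) + "." + m.group(2) + "." + m.group(3);
    }

    public static void main(String[] args) {
        System.out.println(mapRev("1.0.0.1")); // 1.0.0
        System.out.println(mapRev("2.3.4"));   // 2.3.4 (only three groups, unchanged)
    }
}
```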
> -----Original Message-----
> From: Brown, Carlton [mailto:Carlton.Brown@compucredit.com]
> Sent: Friday, April 04, 2008 1:52 PM
> To: ivy-user@ant.apache.org
> Subject: Namespace problems
>
> I'm getting errors I don't understand while using install
> with namespace
> and I'd appreciate any insight.
>
> Briefly, my intention is to copy foo-module-1.0.0.0 from an
> RC repository into a final repository with the version
> changed to 1.0.0 (truncating the final number).
>
> My rename rule looks like this:
>
> <rule>
> <fromsystem>
> <!-- Space holder, because although there will never
> be a fromsystem copy, Ivy still throws NPE if we don't
> include this xml element -->
> </fromsystem>
> <tosystem>
> <src rev="(.+)\.(.+)\.(.+)\.(.+)"/>
>
> <dest rev="$r1\.$r2\.$r3"/>
>
> </tosystem>
> </rule>
>
> First, a question... clearly, the <fromsystem> is required
> because I get a NullPointerException if it is not defined.
> But am I wrong to think that <fromsystem> is unnecessary if I
> know that I will never install
> *from* the system, always *to* it?
>
> Second, regarding the strange error:
> [ivy:install] ERROR: rc-fs: bad revision found in
> C:\artifact-repositories\rc-repo\myorg\foo-module\1.0.0.0\ivy.xml:
> [ivy:install] java.text.ParseException: inconsistent module
> descriptor file found in
> 'C:\artifact-repositories\rc-repo\myorg\foo-module\1.0.0.0\ivy
> .xml': bad
> revision: expected='1.0.0.0' found='1.0.0';
>
> I don't understand the reasons for this error, because
> naturally the original ivy file should not contain the
> modified revision number.
> Even more odd, it seems to me that the semantic sense of the error is
> reversed. The so-called 'expected' revision 1.0.0.0 is definitely
> found in the 1.0.0.0\ivy.xml file. The so-called 'found' revision of
> 1.0.0 is, of course, not found in that any file (since 1.0.0
> is the new
> revision number to be installed). So basically I'm totally
> confused as
> to what went wrong.
>
> I'd really appreciate help understanding this, I beat my head
> against it
> for several hours and came no closer to understanding it. I emulated
> the tutorial as much as possible, but it seems I am having no luck.
>
> Thanks,
> Carlton
>
>
>
The previous article in this series, Introduction to Play 2 for Java, introduced the Play 2 Framework, demonstrated how to set up a Play environment, and presented a simple Hello, World application. Here I expand upon that foundation to show you how to build a typical web application using Play’s Scala Templates and how Play implements a domain-driven design. We build a simple Widget management application that presents a list of widgets and allows the user to add a new widget, update an existing widget, and delete a widget.
Domain-Driven Design
If you come from a Java EE or from a Spring background, then you’re probably familiar with separating persistence logic from domain objects: A domain object contains the object’s properties, and a separate repository object contains logic for persisting domain objects to and from external storage, such as a database. Play implements things a little differently: The domain object not only encapsulates the object’s properties, but it also defines persistence methods. Play does not force you to implement your applications like this, but if you want your Play application to be consistent with other Play applications, then it is considered a best practice.
Domain objects should be stored, by default, in a “models” package and follow the structure shown in listing 1.
Listing 1. Structure of a Play domain object
public class Widget {
    // Fields are public; getters and setters will be automatically generated
    // and used, e.g. id = "a" would be translated to setId("a") by Play
    public String id;
    public String name;

    // Finders are part of the domain object, but are static
    public static Widget findById( String id ) { ... }
    public static List<Widget> findAll() { ... }

    // Update is part of the domain object and updates this instance with a
    // command like JPA.em().merge(this);
    public void update() { ... }

    // Save is part of the domain object and inserts a new instance with a
    // command like JPA.em().persist(this);
    public void save() { ... }

    // Delete is part of the domain object and deletes the current instance
    // with a command like JPA.em().remove(this);
    public void delete() { ... }
}
Play domain objects define all their fields as public and then Play implicitly wraps assignment calls with set() methods. It makes development a little easier, but it requires a little faith on your part to trust Play will protect your fields for you.
All query methods are defined in the domain object, but they are static, so they do not require an object instance to execute them. You’ll typically see a findAll() that returns a list, findByXX() that finds a single object, and so forth.
Finally, methods that operate on the object instance are defined as instance methods: save() inserts a new object into the data store, update() updates an existing object in the data store, and delete() removes an object from the data store.
For our example we’ll bypass using a database and instead store the “cache” of data objects in memory as a static array in the Widget class itself. Note that this is not a recommended approach, but considering we’re learning Play and not JPA/Hibernate, this simplifies our implementation. Listing 2 shows the source code for the Widget class.
Listing 2. Widget.java
package models;

import java.util.List;
import java.util.ArrayList;

public class Widget {
    public String id;
    public String name;
    public String description;

    public Widget() {
    }

    public Widget(String id, String name, String description) {
        this.id = id;
        this.name = name;
        this.description = description;
    }

    public static Widget findById( String id ) {
        for( Widget widget : widgets ) {
            if( widget.id.equals( id ) ) {
                return widget;
            }
        }
        return null;
    }

    public static List<Widget> findAll() {
        return widgets;
    }

    public void save() {
        widgets.add( this );
    }

    public void update() {
        for( Widget widget : widgets ) {
            if( widget.id.equals( id ) ) {
                widget.name = name;
                widget.description = description;
            }
        }
    }

    public void delete() {
        widgets.remove( this );
    }

    private static List<Widget> widgets;

    static {
        widgets = new ArrayList<Widget>();
        widgets.add( new Widget( "1", "Widget 1", "My Widget 1" ) );
        widgets.add( new Widget( "2", "Widget 2", "My Widget 2" ) );
        widgets.add( new Widget( "3", "Widget 3", "My Widget 3" ) );
        widgets.add( new Widget( "4", "Widget 4", "My Widget 4" ) );
    }
}
The Widget class defines three instance variables: id, name, and description. Because a Widget is a generic term for a “thing,” its attributes are not important, so these will suffice. The bottom of Listing 2 defines a static ArrayList named widgets and a static code block that initializes it with some sample values. The other methods operate on this static object: findAll() returns the widgets list; findById() searches through all widgets for a matching one; save() adds the object to the widgets list; update() finds the widget with the matching ID and updates its fields; and delete() removes the object from the widgets list. Replace these method implementations with your database or NoSQL persistence methods and your domain object will be complete.
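Because persistence lives on the domain object itself, the whole lifecycle can be exercised without any web machinery. Here is a condensed, runnable sketch of Listing 2's in-memory store — the demo class and the shortened method bodies are mine, for illustration only:

```java
import java.util.ArrayList;
import java.util.List;

// Condensed stand-in for Listing 2's Widget, kept self-contained so the
// active-record-style methods can be exercised directly.
class Widget {
    public String id, name, description;
    private static final List<Widget> widgets = new ArrayList<>();

    Widget(String id, String name, String description) {
        this.id = id; this.name = name; this.description = description;
    }

    static Widget findById(String id) {
        for (Widget w : widgets) if (w.id.equals(id)) return w;
        return null;
    }

    static List<Widget> findAll() { return widgets; }

    void save() { widgets.add(this); }

    void delete() { widgets.remove(this); }
}

public class WidgetDemo {
    public static void main(String[] args) {
        new Widget("1", "Widget 1", "My Widget 1").save();
        new Widget("2", "Widget 2", "My Widget 2").save();
        System.out.println(Widget.findAll().size());      // 2
        Widget w = Widget.findById("2");
        System.out.println(w.name);                        // Widget 2
        w.delete();
        System.out.println(Widget.findById("2") == null);  // true
    }
}
```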
Scala Templates
With our domain object defined, let’s turn our attention to the application workflow. We’re going to create controller actions for the following routes:
- GET /widgets: Returns a list of widgets
- POST /widgets: Creates a new widget
- GET /widget/id: Returns a specific widget
- POST /widget-update/id: Updates a widget
- DELETE /widget/id: Deletes a specific widget
Let’s start by reviewing the GET /widgets URI, which returns a list of widgets. We need to add a new entry in the routes file:
GET /widgets controllers.WidgetController.list()
This route maps a call to GET /widgets to the WidgetController’s list() action method. The list() method is simple: It retrieves the list of Widgets by calling Widget.findAll() (that we created in the previous section) and sends that to our list template:
public static Result list() {
    return ok( list.render( Widget.findAll()) );
}
The list template (list.scala.html) accepts a list of Widgets and renders them in HTML, which is shown in Listing 3.
Listing 3. list.scala.html
@( widgets: List[Widget] )

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
<h2>Widgets</h2>

<table>
    <thead><th>ID</th><th>Name</th><th>Description</th></thead>
    <tbody>
    @for( widget <- widgets ) {
        <tr>
            <td><a href="@routes.WidgetController.details(widget.id)">@widget.id</a></td>
            <td>@widget.name</td>
            <td>@widget.description</td>
        </tr>
    }
    </tbody>
</table>
</body>
</html>
The first line in Listing 3 tells Play that the template requires a List of Widgets and names it “widgets” on the page. The page builds a table that shows the Widget’s id, name, and description fields. The Scala notation for iterating over a collection is the @for command. The iteration logic is backward from Java’s notation but it reads: For all widget instances in the widgets collection do this.
@for( widget <- widgets )
Now we have a widget variable in the body of the for loop that we can access by prefixing it with an at symbol (@):
- @widget.id: Returns the widget’s ID
- @widget.name: Returns the widget’s name
- @widget.description: Returns the widget’s description
We also added a link to the id that invokes the details action, which we review next. Note that rather than using the URI of the details action, we reference it through its routes value:
@routes.WidgetController.details(widget.id)
The details action accepts the ID of the Widget to display, loads the Widget from the Widget.findById() method, fills in a Form, and renders that form using the details template:
private static Form<Widget> widgetForm = Form.form( Widget.class );

public static Result details( String id ) {
    Widget widget = Widget.findById( id );
    if( widget == null ) {
        return notFound( "No widget found with id: " + id );
    }

    // Create a filled form with the contents of the widget
    Form<Widget> filledForm = widgetForm.fill( widget );

    // Return an HTTP 200 OK with the form rendered by the details template
    return ok( details.render( filledForm ) );
}
The WidgetController class defines a Play form (Form<Widget>) object, which will be used to both render an existing Widget object as well as serve as a mechanism for the user to POST a Widget to the controller to create a new Widget object. (The full WidgetController is shown below.) The details() method queries for a Widget and, if it is found, then it fills the form by invoking the widget form’s fill() method, renders the form using the details template, and returns an HTTP OK response. If the Widget is not found, then it returns an HTTP 404 Not Found response by invoking the notFound() method.
The routes file maps the following URI to the details action:
GET /widget/:id controllers.WidgetController.details(id : String)
We use the HTTP GET verb for the URI /widget/:id and map that to the details() method, which accepts the ID as a String.
Listing 4 shows the source code for the details template.
Listing 4. details.scala.html
@(widgetForm: Form[Widget])

@import helper._

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
<h2>Widget</h2>

@helper.form( action = routes.WidgetController.update() ) {
    @helper.inputText( widgetForm( "id" ), '_label -> "ID" )
    @helper.inputText( widgetForm( "name" ), '_label -> "Name" )
    @helper.inputText( widgetForm( "description" ), '_label -> "Description" )
    <input type="submit" value="Update">
}
</body>
</html>
The details template accepts a Form[Widget] and assigns it to the widgetForm variable. It uses a helper class to manage the form, so it imports helper._ (Scala notation of import helper.*). The helper class creates a form using its form() method and passes the action URI to which to POST the form body to, which in this case is the routes.WidgetController.update() action. The body of the form is wrapped in braces and inside the form you can see additional helper methods for creating and populating form elements. The inputText() method creates a form text field; it accepts the form field as its first parameter and the label as its second parameter. If the form has values in it, as it does in this case, then the value of the field is set in the form. Finally, the submit button is used to submit the form to update() action method. The resultant HTML for viewing “Widget 1” is the following:
<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
<h2>Widget</h2>

<form action="/widget-update" method="POST" >
    <dl class=" " id="id_field">
        <dt><label for="id">ID</label></dt>
        <dd>
            <input type="text" id="id" name="id" value="1" >
        </dd>
        <dd class="info">Required</dd>
    </dl>
    <dl class=" " id="name_field">
        <dt><label for="name">Name</label></dt>
        <dd>
            <input type="text" id="name" name="name" value="Widget 1" >
        </dd>
        <dd class="info">Required</dd>
    </dl>
    <dl class=" " id="description_field">
        <dt><label for="description">Description</label></dt>
        <dd>
            <input type="text" id="description" name="description" value="My Widget 1" >
        </dd>
    </dl>
    <input type="submit" value="Update">
</form>
</body>
</html>
The next thing that we might want to do is add a new Widget to our list. Let’s augment our homepage (the list template) to add a form to the bottom of the page that allows our users to add new Widgets. As we did in the previous section, we need to create a Form object and pass it to the template, but this time the Form object should not be populated:
private static Form<Widget> widgetForm = Form.form( Widget.class );

public static Result list() {
    return ok( list.render( Widget.findAll(), widgetForm ) );
}
We already created the widgetForm in the previous example, but it is shown here again for completeness. When we render the list template, we’re going to pass the list of all Widgets as well as the unpopulated widgetForm. Listing 5 adds the new form to the list template.
Listing 5. list.scala.html
@( widgets: List[Widget], widgetForm: Form[Widget] )

@import helper._

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title></title>
</head>
<body>
<h2>Widgets</h2>

<table>
    <thead><th>ID</th><th>Name</th><th>Description</th></thead>
    <tbody>
    @for( widget <- widgets ) {
        <tr>
            <td><a href="@routes.WidgetController.details(widget.id)">@widget.id</a></td>
            <td>@widget.name</td>
            <td>@widget.description</td>
        </tr>
    }
    </tbody>
</table>

<h2>Add New Widget</h2>

@helper.form( action = routes.WidgetController.save() ) {
    @helper.inputText( widgetForm( "id" ), '_label -> "ID" )
    @helper.inputText( widgetForm( "name" ), '_label -> "Name" )
    @helper.inputText( widgetForm( "description" ), '_label -> "Description" )
    <input type="submit" value="Add">
}
</body>
</html>
Listing 5 updates the expected parameters to include both a List of Widgets as well as a Widget Form and then it imports the form helper classes. The bottom of Listing 5 creates the form using the helper.form() method, with the action directed to the routes.WidgetController.save() action. This form looks a whole lot like the form in Listing 4.
To complete this example, we need to add one more feature: a delete link. Deleting web resources is accomplished by using the DELETE HTTP method, which we cannot simply invoke by adding a link or a form. Instead we need to make the call using JavaScript. Let’s add a new delete method to our routes file:
DELETE /widget/:id controllers.WidgetController.delete(id : String)
When the DELETE HTTP verb is passed to the /widget/id URI, then the WidgetController’s delete() action will be invoked:
public static Result delete( String id ) {
    Widget widget = Widget.findById( id );
    if( widget == null ) {
        return notFound( "No widget found with id: " + id );
    }

    widget.delete();

    return redirect( routes.WidgetController.list() );
}
The delete() method finds the Widget with the specified ID and, if it is found, it deletes it by invoking the Widget’s delete() method; if the Widget is not found, then it returns a notFound() response (404) with an error message. Finally, the method redirects the caller to the WidgetController’s list() action.
Listing 6 shows the final version of our list.scala.html template that includes the new delete button.
Listing 6. list.scala.html (final)
@( widgets: List[Widget], widgetForm: Form[Widget] )

@import helper._

@main( "Widgets" ) {

<h2>Widgets</h2>

<script>
    function del(url) {
        $.ajax({
            url: url,
            type: 'DELETE',
            success: function(results) {
                // Refresh the page
                location.reload();
            }
        });
    }
</script>

<table>
    <thead><th>ID</th><th>Name</th><th>Description</th><th>Delete</th></thead>
    <tbody>
    @for( widget <- widgets ) {
        <tr>
            <td><a href="@routes.WidgetController.details(widget.id)"> @widget.id </a></td>
            <td>@widget.name</td>
            <td>@widget.description</td>
            <td><a href="#" onclick="javascript:del('@routes.WidgetController.delete(widget.id)')">Delete</a></td>
        </tr>
    }
    </tbody>
</table>

<h2>Add New Widget</h2>

@helper.form( action = routes.WidgetController.save() ) {
    @helper.inputText( widgetForm( "id" ), '_label -> "ID" )
    @helper.inputText( widgetForm( "name" ), '_label -> "Name" )
    @helper.inputText( widgetForm( "description" ), '_label -> "Description" )
    <input type="submit" value="Add">
}

}
Listing 6 may look a little strange compared to our previous listings, primarily because it is lacking the HTML headers and footers and the body is now wrapped in a main() method. When you created your project, Play created a main.scala.html file for you that accepts a title String and content as Html. Listing 7 shows the contents of the main.scala.html template.
Listing 7. main.scala.html
@(title: String)(content: Html)

<!DOCTYPE html>
<html>
<head>
    <title>@title</title>
    <link rel="stylesheet" media="screen" href="@routes.Assets.at("stylesheets/main.css")">
    <script src="@routes.Assets.at("javascripts/jquery-1.9.0.min.js")" type="text/javascript"></script>
</head>
<body>
@content
</body>
</html>
The title is pasted in as the <head> <title> element and the content is pasted inside the <body> of the document. Furthermore, the header imports JQuery for us, which you’ll find in the public/javascripts folder – we need JQuery to simplify our Ajax delete call. If you want to maintain a consistent look-and-feel to your pages, you should add styling information, menus, and other common resources to the main template and then wrap your other templates in a call to main().
Listing 6 adds a delete link to each Widget with the following line:
<td><a href="#" onclick="javascript:del('@routes.WidgetController.delete(widget.id)')">Delete</a></td>
We invoke the del() JavaScript method, which was shown in Listing 6, passing it the route to the WidgetController’s delete() action and the id of the Widget to delete. The del() method uses JQuery’s ajax() method to invoke the specified URL with the specified type (HTTP DELETE verb) and a success method to invoke upon completion (reload the page).
When you’ve completed your code, launch your application using the play command from your application’s home directory and execute run from the play shell. Open a browser to the following URL and take it for a spin:
You can download the source code for this article here.
Summary
The Play Framework is not a traditional Java web framework and actually requires us to think about developing web applications differently. It runs in its own JVM, not inside a Servlet container, and it supports instant redeployment of applications without a build cycle. When building Play applications you are required to think in terms of HTTP and not in terms of Java.
The “Introduction to Play 2 for Java” article presented an overview of Play, showed how to set up a Play environment, and then built a Hello, World application. Here we built a more complicated Play application that manages CRUD (create, read, update, and delete) operations for a Widget, which uses Play’s domain-driven paradigm, and that better utilizes Play’s Scala templates.
The final article, Integrating Play with Akka, integrates Play with Akka to realize the true power of asynchronous messaging and to show how to suspend a request while waiting for a response so that we can support more simultaneous requests than a traditional Java web application. | http://www.informit.com/articles/article.aspx?p=2223715 | CC-MAIN-2015-22 | refinedweb | 2,783 | 53.61 |
AxKit::App::TABOO::XSP::Category - Category management tag library for TABOO
Add the category: namespace to your XSP
<xsp:page> tag,
e.g.:
<xsp:page
Add this taglib to AxKit (via httpd.conf or .htaccess):
AxAddXSPTaglib AxKit::App::TABOO::XSP::Category
This XSP taglib provides two tags to retrieve a structured XML fragment with all information of a single category or all categories of a certain type.
Apache::AxKit::Language::XSP::SimpleTaglib has been used to write this taglib.
<get-category
This tag will replace itself with some structured XML containing all fields of categories of type
foo. It relates to the TABOO Data object AxKit::App::TABOO::Data::Category, and calls on that to do the hard work.
The root element of the returned object is
cat:categories and each category is wrapped in an element
cat:category and contains
catname and
name.
<get-categories
This tag will replace itself with some structured XML containing all categories of type
foo. It relates to the TABOO Data object AxKit::App::TABOO::Data::Plurals::Categories, and calls on that to do the hard work. See the documentation of that class to see the available types. If a boolean
onlycontent attribute (or child element) is set, it will check if there are articles or stories in the
categ category types, and return only those.
The root element of the returned object is
categories and each category is wrapped in an element (surprise!)
category. The type will also be available in an attribute called
type, and ordered alphabetically by name.
<store/>
It will take whatever data it finds in the Apache::Request object held by AxKit, and hand it to a new AxKit::App::TABOO::Data::Article object, which will use whatever data it finds useful. It will not store anything unless the user is logged in and authenticated with an authorization level. It will perform different sanity checks and throw exceptions if the user tries to add data it is not authorized to do.
Finally, the Data object is instructed to save itself.
<exists catname="foo"/>
This tag will check if a category allready exists. It is a boolean tag, which has child elements
<true> and
<false>. It takes a catname, which may be given as an attribute or a child element named
catname, and if the category is found in the data store, the contents of
<true> child element is included, otherwise, the contents of
<false> is included.
See AxKit::App::TABOO. | http://search.cpan.org/dist/AxKit-App-TABOO/lib/AxKit/App/TABOO/XSP/Category.pm | crawl-003 | refinedweb | 411 | 52.49 |
This is the classic printer style, with infinite paper and a lovely noise during
printing. They are also fairly simple to operate - you can just write text
directly to
/dev/lp (or
/dev/usb/lp9 in my case) and it’ll print it out.
Slightly more sophisticated instructions can be written to them with ANSI escape
sequences, just like a terminal. They can also be rigged up to CUPS, then you
can use something like
man -t 5 scdoc to produce printouts like this:
Plugging the printer into Linux and writing out pages isn’t much for hack value, however. What I really wanted to make was something resembling an old-school TTY - teletypewriter. So I wrote some glue code in Golang, and soon enough I had a shell:
The glue code I wrote for this is fairly straightforward. In the simplest form,
it spins up a pty (pseudo-terminal), runs
/bin/sh in it, and writes the pty
output into the line printer device. For those unaware, a pseudo-terminal is the
key piece of software infrastructure for running interactive text applications.
Applications which want to do things like print colored text, move
the cursor around and draw a TUI, and so on, will open
/dev/tty to open the
current TTY device. For most applications used today, this is a
“pseudo-terminal”, or pty, which is a terminal emulated in userspace - i.e. by
your terminal emulator. However, your terminal emulator is emulating a
terminal - the control sequences applications send to these are
backwards-compatible with 50 years of computing history. Interfaces like these
are the namesake of the TTY.
Visual terminals came onto the scene later on, and in the classic computing tradition, the old hands complained that it was less useful - you could no longer write notes on your backlog, tear off a page and hand it to a colleague, or white-out mistakes. Early visual terminals could also be plugged directly into a line printer, and you could configure them to echo to the printer or print out a screenfull of text at a time. A distinct advantage of visual terminals is not having to deal with so much bloody paper, a problem that I’ve become acutely familiar with in the past few days1.
Getting back to the glue code, I chose Golang because setting up a TTY is a bit of a hassle in C, but in Golang it’s pretty straightforward. There is a serial port and in theory I could have plugged it in and spawned a getty on the resulting serial device - but (1) it’d be write-only, so not especially interactive without hardware hacks, and (2) I didn’t feel like digging out my serial cables. So:
import "git.sr.ht/~sircmpwn/pty" // fork of github.com/kr/pty // ... winsize := pty.Winsize{ Cols: 160, Rows: 24, } cmd := exec.Command("/bin/sh") cmd.Env = append(os.Environ(), "TERM=lp", fmt.Sprintf("COLUMNS=%d", 180)) tty, err := pty.StartWithSize(cmd, &winsize)
P.S. We’re going to dive through the code in detail now. If you just want more cool videos of this in action, skip to the bottom.
I set the TERM environment variable to
lp, for line printer, which doesn’t
really exist but prevents most applications from trying anything too tricksy
with their escape codes. The
tty variable here is an
io.ReadWriter whose
output is sent to the printer and whose input is sourced from wherever, in my
case from the stdin of this process2.
For a little more quality-of-life, I looked up Epson’s proprietary ANSI escape sequences and found out that you can tell the printer to feed back and forth in 216th” increments with the j and J escape sequences. The following code will feed 2.5” out, then back in:
f.Write([]byte("\x1BJ\xD8\x1BJ\xD8\x1BJ\x6C")) f.Write([]byte("\x1Bj\xD8\x1Bj\xD8\x1Bj\x6C"))
Which happens to be the perfect amount to move the last-written line up out of the printer for the user to read, then back in to be written to some more. A little bit of timing logic in a goroutine manages the transition between “spool out so the user can read the output” and “spool in to write some more output”:
func lpmgr(in chan (interface{}), out chan ([]byte)) { // TODO: Runtime configurable option? Discover printers? dunno f, err := os.OpenFile("/dev/usb/lp9", os.O_RDWR, 0755) if err != nil { panic(err) } feed := false f.Write([]byte("\n\n\n\r")) timeout := 250 * time.Millisecond for { select { case <-in: // Increase the timeout after input timeout = 1 * time.Second case data := <-out: if feed { f.Write([]byte("\x1Bj\xD8\x1Bj\xD8\x1Bj\x6C")) feed = false } f.Write(lptl(data)) case <-time.After(timeout): timeout = 200 * time.Millisecond if !feed { feed = true f.Write([]byte("\x1BJ\xD8\x1BJ\xD8\x1BJ\x6C")) } } } }
lptl is a work-in-progress thing which tweaks the outgoing data for some
quality-of-life changes, like changing backspace to ^H. Then, the main event
loop looks something like this:
inch := make(chan (interface{})) outch := make(chan ([]byte)) go lpmgr(inch, outch) inbuf := make([]byte, 4096) go func() { for { n, err := os.Stdin.Read(inbuf) if err != nil { panic(err) } tty.Write(inbuf[:n]) inch <- nil } }() outbuf := make([]byte, 4096) for { n, err := tty.Read(outbuf) if err != nil { panic(err) } b := make([]byte, n) copy(b, outbuf[:n]) outch <- b }
The tty will echo characters written to it, so we just write to it from stdin and increase the form feed timeout closer to the user’s input so that it’s not constantly feeding in and out as you write. The resulting system is pretty pleasant to use! I spent about hour working on improvements to it on a live stream. You can watch the system in action on the archive here:
If you were a fly on the wall when Unix was written, it would have looked a lot like this. And remember: ed is the standard text editor.
?
Don’t worry, I recycled it all. ↩
In the future I want to make this use libinput or something, or eventually make a kernel module which lets you pair a USB keyboard with a line printer to make a TTY directly. Or maybe a little microcontroller which translates a USB keyboard into serial TX and forwards RX to the printer. Possibilities! | https://drewdevault.com/2019/10/30/Line-printer-shell-hack.html | CC-MAIN-2019-47 | refinedweb | 1,072 | 63.39 |
Create the database class
The final task in our tutorial is to create the QuotesDbHelper class, which lets us access the SQL database where our quote data is stored. This class uses Qt functions and classes to access the file system on the device directly. It also uses Cascades data access APIs (such as the SqlDataAccess class) with Qt SQL APIs (such as the QSqlDatabase class) to read and update our SQL database.
In this section, we won't go into detail about all of the classes and functions that are used in QuotesDbHelper. Several comments are included in the code samples to help guide you through the code. If you'd like to learn more about accessing the file system and storing data, see File system access and Data management.
Create the QuotesDbHelper class
In the src folder of your project, create a class called QuotesDbHelper. Similar to when you created the QuotesApp class, don't worry about adding method stubs. We start by implementing our functions in the .cpp file, so open the QuotesDbHelper.cpp file.
At the top of the file, we include the associated header file, QuotesDbHelper.h, and use the bb::data namespace. The class constructor is empty. The class destructor performs a couple of clean-up operations on our SQL databases to make sure that our app frees its resources correctly. The mDb object is an instance of the QSqlDatabase class, which we use to interact with the database in the file system.
#include "quotesdbhelper.h" using namespace bb::data; QuotesDbHelper::QuotesDbHelper() { } QuotesDbHelper::~QuotesDbHelper() { if (mDb.isOpen()) { QSqlDatabase::removeDatabase(mDbNameWithPath); mDb.removeDatabase("QSQLITE"); } }
Implement the copy function
Depending on the location of files in your project, your app might have read-only access to the files, or it might have read-write access. For example, for a file that's included in the assets folder of your project, your app typically has read-only access to this file. In our app, we placed our database file, quotes.db, in the assets/sql folder. Because we need to have read-write access to the database file to insert, update, or delete quote records, we create a function called copyDbToDataFolder(). This function copies our database file from the assets folder to the data folder. After the file is copied to the data folder, our app has full read-write access to the file.
bool QuotesDbHelper::copyDbToDataFolder(const QString databaseName) { // First, we check to see if the file already exists in the // data folder (that is, the file was copied already) QString dataFolder = QDir::homePath(); QString newFileName = dataFolder + "/" + databaseName; QFile newFile(newFileName); if (!newFile.exists()) { // If the file is not already in the data folder, we copy // it from the assets folder (read-only) to the data folder // (read-write) QString appFolder(QDir::homePath()); appFolder.chop(4); QString originalFileName = appFolder + "app/native/assets/sql/" + databaseName; QFile originalFile(originalFileName); if (originalFile.exists()) { return originalFile.copy(newFileName); } else { qDebug() << "Failed to copy file, database file does not exist."; return false; } } return true; }
Implement the load function
Next, we implement the loadDataBase() function. We copy the database file to the data folder using the copyDbToDataFolder() function above, and then set up an SqlDataAccess object that points to the database file. This object lets us execute SQL queries, such as SELECT, on the database. We call execute() to retrieve all of the entries in the specified table (in our case, the quotes table), and we store them in a QVariantList. We also make sure that no errors have occurred.
QVariantList QuotesDbHelper::loadDataBase(const QString databaseName, const QString table) { QVariantList sqlData; if (copyDbToDataFolder(databaseName)) { // Load database entries using an SqlDataAccess object into a // QVariantList, which can be used in a GroupDataModel to // display a sorted list mDbNameWithPath = "data/" + databaseName; // Set up an SqlDataAccess object SqlDataAccess sqlDataAccess(mDbNameWithPath); // Set a query to obtain all entries in the table and load into // our QVariantList sqlData = sqlDataAccess.execute("select * from " + table) .value<QVariantList>(); if (sqlDataAccess.hasError()) { DataAccessError err = sqlDataAccess.error(); qWarning() << "SQL error: type=" << err.errorType() << ": " << err.errorMessage(); return sqlData; }
We can use our SqlDataAccess object to read entries from the database file, but we need to set up another database connection (using QSqlDatabase) to allow us to insert, update, and delete database entries. By using QSqlDatabase to set up another connection, we won't conflict with the connection that's already set up using SqlDataAccess. We make sure that this second connection was created successfully, and then we open the database using this connection.
mDb = QSqlDatabase::addDatabase("QSQLITE", "database_helper_connection"); mDb.setDatabaseName(mDbNameWithPath); if (!mDb.isValid()) { qWarning() << "Could not set database name, probably due to an invalid driver."; return sqlData; } bool success = mDb.open(); if (!success) { qWarning() << "Could not open database."; return sqlData; } // Store the name of the table (used in the insert/update/delete // functions) mTable = table; } return sqlData; }
Implement the delete function
Our next function, deleteById(), simply deletes a record by using the DELETE query. To perform the actual deletion, we pass the query to another function, queryDatabase(), which we'll implement a little later on.
bool QuotesDbHelper::deleteById(QVariant id) { // Query for deleting an entry in the table if (id.canConvert(QVariant::String)) { QString query = "DELETE FROM " + mTable + " WHERE id=" + id.toString(); return queryDatabase(query); } qWarning() << "Failed to delete item with id: " << id; return false; }
Implement the insert and update functions
The two functions that write to the database, insert() and update(), both have a similar structure. We use the prepare() function of the QSqlQuery class to prepare our query. This function makes it easier to construct and prepare an SQL query, especially when the query is complex. For example, a single quotation mark (') inside a double quotation mark (") is difficult to handle if you don't bind your values using the prepare() function. Here's how to implement the insert() function:
QVariant QuotesDbHelper::insert(QVariantMap map) { QSqlQuery sqlQuery(mDb); sqlQuery.prepare("INSERT INTO " + mTable + " (firstname, lastname, quote)" "VALUES(:firstName, :lastName, :quote)"); sqlQuery.bindValue(":firstName", map["firstname"]); sqlQuery.bindValue(":lastName", map["lastname"]); sqlQuery.bindValue(":quote", map["quote"]); sqlQuery.exec(); QSqlError err = sqlQuery.lastError(); if (err.isValid()) { qWarning() << "SQL reported an error : " << err.text(); } return sqlQuery.lastInsertId(); }
Here's how we construct the update() function:
bool QuotesDbHelper::update(QVariantMap map) { QSqlQuery sqlQuery(mDb); sqlQuery.prepare("UPDATE " + mTable + " SET firstname=:firstName, lastname=:lastName, quote=:quote WHERE id=:id"); sqlQuery.bindValue(":firstName", map["firstname"]); sqlQuery.bindValue(":lastName", map["lastname"]); sqlQuery.bindValue(":quote", map["quote"]); sqlQuery.bindValue(":id", map["id"].toString()); sqlQuery.exec(); QSqlError err = sqlQuery.lastError(); if (!err.isValid()) { return true; } qWarning() << "SQL reported an error : " << err.text(); return false; }
Implement the query function
We have one last function to implement, queryDatabase(). This function performs the actual deletion for the deleteById() function above.
bool QuotesDbHelper::queryDatabase(const QString query) { // Execute the query QSqlQuery sqlQuery(query, mDb); QSqlError err = sqlQuery.lastError(); if (err.isValid()) { qWarning() << "SQL reported an error for query: " << query << " error: " << mDb.lastError().text(); return false; } return true; }
Complete the QuotesDbHelper header file
Now that we've finished the function implementations, we need to complete the associated QuotesDbHelper.h file. Open the QuotesDbHelper.h file in your project's src folder. Similar to the QuotesApp.h file, the contents of this file are straightforward.
#ifndef _QUOTESDBHELPER_H_ #define _QUOTESDBHELPER_H_ #include <QtSql/QtSql> #include <bb/data/SqlDataAccess> using namespace bb::data; class QuotesDbHelper { public: QuotesDbHelper(); ~QuotesDbHelper(); QVariantList loadDataBase(const QString databaseName, const QString table); bool deleteById(QVariant id); QVariant insert(QVariantMap map); bool update(QVariantMap map); private: bool copyDbToDataFolder(const QString databaseName); bool queryDatabase(const QString query); QSqlDatabase mDb; QString mTable; QString mDbNameWithPath; }; #endif
Add the data library to your project
The code for our Quotes app is complete, but there's one additional thing that we need to do. To use classes in the bb::data namespace, we need to add the appropriate library to our project. You can add additional libraries in the .pro file that's included in the root folder of the project.
Open the .pro file in your project. This file should have the same name as the project itself. In this file, add the following line below the CONFIG line:
LIBS += -lbbdata
Change the visual style of your project
Depending on the device that your app is run on, the default visual style can change. To make sure that your app looks good on all devices, we need to set the visual style of our project so that the text of the quote shows up in the quote bubble. If we don't set the visual style to bright, the quote text doesn't appear in the bubble on devices that use a dark style. Go ahead and change the visual style to bright in the bar-descriptor.xml file of your project. For more information, see Setting the visual style.
That's it! Build and run the project to see the final result.
The latest version of the Quotes sample app includes some new features for API level 10.3.0, such as design units. If you want to look at the complete source code for the updated sample app, you can download the entire project and import it into the Momentics IDE.
Last modified: 2015-03-31
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/documentation/ui/lists/lists_create_database_class.html | CC-MAIN-2015-18 | refinedweb | 1,545 | 57.27 |
Project 0
PART A
PartA
PART B
PART A
PartA
PART B
GDE Error: Unable to load profile settings
<pre lang="ACTIONSCRIPT">
package {
import flash.display.Graphics;
import flash.display.MovieClip;
public class Sine1 extends MovieClip {
public var m:MovieClip;
public function Sine1():void {
m = new MovieClip();
m.graphics.lineStyle(0, 0×000000);
addChild(m);
for (var i:Number = 0; i < 90; i++) {
drawSine(3, 38 + 5.75 * i, -27, .02, 0.00064);
}
}
public function drawSine(sinX:Number, sinY:Number,
ampl:Number, step:Number, p:Number):void {
m.graphics.moveTo(sinX, sinY);
for (var i:Number = 0, j:Number = 9.3; i < 600; i += step, j += p) {
m.graphics.lineTo(sinX + i, sinY + ampl * Math.sin(i/j));
}
}
}
}
</pre>
This was fun! I used Processing to generate the sinusoidal lines and then cropped and scaled in Adobe Illustrator. Here’s my final rendition
And, here’s my code. I haven’t taken calculus for a few years now, so I based a lot of the sin math off of the trigg tutorial on the Processing website, located here.
PART A – Noll Sinusoids
The PDF file:
The code:
PART B – Tennis for Two
The link to applet:
The link to code: (Copy & Paste txt into processing environment, save the file, and run it!!!)
Part-B
Pong Main Code:
Part A:
code:
————————————
Part B:
I originally had text in the background displaying the score, which used a font I downloaded from online, but it caused a lot of issues when uploading. The zip file containing the text version is still in my uploaded folder if you would like to view it.
Also, maybe I’m just being stupid, but it wouldn’t let me embed the game with iframe. Did anyone else have these problems?.
import processing.pdf.*;
size (650,600,PDF,”project0.pdf”);
smooth();
background(255);
strokeWeight(.6);
int Amplitude = 30;
int yDistance = 70;
for(int i=0; i<90; i++){
for(int x=0; x<600; x++){
float period = map(x, 0, 599, 60, 180);
float y1 = Amplitude * (-sin(x*2*PI/period)) + yDistance;
float y2 = Amplitude * (-sin((x+1)*2*PI/period)) + yDistance;
line(x, y1, x+1, y2);
}
yDistance += 5;
}
My version of pong tracks the players progress by changing the color of the background each time the player catches the ball with his paddle
click on the link to play:
////BALL
//ball velocity x
float velX=(int)random(4,10);
//ball velocity y
float velY=(int)random(4,10);
//ball pos x
float ballPosX = random(100, 700);
//ball pos y
int ballPosY = 20;
int bgd=255;
////PADDLE
//paddle pos x
int padPosX = 400;
////SETUP
void setup(){
size(800, 800);
}
////DRAW
void draw(){
background(bgd);
ballPosY += velY;
ballPosX += velX;
fill(238, 58, 140);
ellipse(ballPosX, ballPosY, 20, 20);
fill(93, 252, 10);
rect(padPosX, 780, 80, 10);
//update position of paddle
padPosX = mouseX;
//update position of ball & check to see if ball is touching padding
if(((ballPosY) >= 760) && ((ballPosX) < (padPosX+80)) && ((ballPosX) > (padPosX)) ){ //if ball hits paddle in the middle
ballPosY = 759;
bgd -= 20;
velY = -1 * velY;
velX = velX + (int)(((padPosX+40)-ballPosX) / 10);
}
//check to see if ball is touching boundaries
else if(ballPosY < 0){ //if ball hits top boundary
velY = -1 * velY;
}
else if(ballPosX < 30 || ballPosX > 770){ //if ball hits side boundaries
velX = -1 * velX;
}
//check to see if ball misses paddle
else if(ballPosY > 840){ //if ball misses paddle
ballPosX = random(100,700);
ballPosY = 20;
velX = random(4,10);
velY = random(4,10);
delay(500);
bgd=255;
}
}
A. Ninety Parallel Sinusoids
B. Pong
Sinuoids
Pong (1.0)
This version uses keyboard navigation (Up and Down keys).
Go see Pong (1.0).
View source (1.1)
View source (1.0)
Changes
Written in Math
Latex Version
PDF rendered by Apple Grapher:
Written in Javascript (with help from rightjs) for HTML5 canvas.
Kuan-Ju’s Crazy Pong is super crazy. Every time when the ball hits the paddle, the ball moves faster and the paddle gets shorter. The color of background changes when the ball bounces the paddle, which will disturb you and drive you crazy. Let’s see how long you can stay alive..
Part A: HTML, Source code
Part B: HTML, Source code
Part A – Noll Knockoff
Part B – Pong
It doesn’t seem to like this whole embed thing, so here’s the link to the index page:
Pong (In Stunning Terminal Green)
Both projects can be view at: <–sin wave <–Pong
So, I realized that Noll used a black and white printer and monitor to display his piece… He really had color in there
Here’s the ugly loop, unabridged:
for (int j = 0; j < 90; j++){ noFill(); stroke(r, g, bee); beginShape(); float b = .125; for (int i = 0; i < width+xOffset; i++){ float x = i-xOffset; float c = PI; float y = yOffset + topPadding + (amplitude*sin(radians(i)/b)); b += .00054; curveVertex(x,y); } endShape(); r-=(252/90); g-=(164/90); yOffset += 5; }
And now for pong:
since the iFrame thing is wack.
Like everyone else, my iframes didn’t seem to work so here are links:
davidyen_project0a
sinusoids.pdf
Here’s a link to my single player Pong game.
I used Processing to develop this simple program. I got a lot of help from the “Bounce” example provided by Processing here because I have never made an object bounce using processing before.
Part A
Not sure if the iframes are working for me, if not the links should still work.
Part B
caryn-project-0 | http://golancourses.net/2010spring/category/project-0/ | CC-MAIN-2014-15 | refinedweb | 915 | 68.4 |
InvokeSource
Since: BlackBerry 10.0.0
#include <bb/system/InvokeSource>
To link against this class, add the following line to your .pro file: LIBS += -lbbsystem
The InvokeSource class represents the entity making an invocation request on a target.
Overview
Public Functions Index
Public Functions
Creates a new InvokeSource object.
BlackBerry 10.0.0
Creates a copy of an existing InvokeSource object.
BlackBerry 10.0.0
Destructor.
BlackBerry 10.0.0
unsigned int
Returns the primary group ID of the process making the invocation request.
The group ID of the source process.
BlackBerry 10.0.0
QString
Returns the install ID assigned to the process making the request.
The install ID of the source process.
BlackBerry 10.0.0
InvokeSource &
Copies the data of an existing InvokeSource object to this object.
The InvokeSource instance.
BlackBerry 10.0.0
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/cascades/bb__system__invokesource.html | CC-MAIN-2017-17 | refinedweb | 153 | 53.17 |
Opened 8 years ago
Closed 6 years ago
#14615 closed Bug (fixed)
Related objects manager returns related objects with null FKs for unsaved instances
Description
Let's say we have 2 models (one refer to another):
class User(Model): pass class MyObject(Model): user = ForeignKey(User, null=True, blank=True)
In this case ORM works wrong in this case:
User().myobject_set.all() # that would return all MyObjects that have user=None
So None/null value is supposed to be a valid foreign key between objects with is obviously not. Only if foreign key is not null - then it should be used.
I use a simple workaround that may be useful for fixing the issue:
User().myobject_set.exclude(user=None).all()
Attachments (2)
Change History (17)
comment:1 follow-up: 2 Changed 8 years ago by
comment:2 Changed 8 years ago by
I don't think the
User().myobject_set.all()syntax you are proposing is at all intuitive.
MyObject.objects.filter(user=None)makes much more sense as a query returning "all Myobjects with user=None".
That's exact point I meant.
The meaning of
user.myobject_set is to return all
MyObjects that are related to this
user. So if
user.id is
None then
user.myobject_set should return NO objects at all. Right now it works wrong - it returns all
MyObjects with
user=None. That should be fixed.
For getting
MyObjects with
user=None developers must use
MyObject.objects.filter(user=None)
comment:3 Changed 8 years ago by
comment:4 Changed 8 years ago by
Interesting... this bug (I think it's a bug) manifests itself only when you use the reverse relationship on an unsaved instance of the model, specifically
User().my_objects.
All the same, I can confirm the problem using the OP's models and a test case that looks like this:
from django.test import TestCase from testapp.models import User, MyObject class MyTestCase(TestCase): def test_reverse_related(self): MyObject.objects.create() u = User.objects.create() # This passes self.assertQuerysetEqual(u.myobject_set.all(), []) # This one fails self.assertQuerysetEqual(User().myobject_set.all(), [])
comment:5 Changed 8 years ago by
Discussed on mailing list -
Changed 8 years ago by
patch - throw exception rather than do a query for unsaved instances
comment:6 Changed 8 years ago by
My idea was to fix this in a very straight way - change ForeignKey behavior (the piece of code that generates many_set method) - so it doesn't take into account null.
Let me explain a bit more. If foreign key is null - then there's no link between objects. So *_set should return EMPTY set disregarding DB state.
Using object state (saved/not saved) makes this thing much more complicated and have unpredicted not covered cases. If user changed object PK and didn't save that into DB - that's his decision, we should not keep the state.
Another edge case is null PK (user used custom field as PK) - that is simply covered by initial code, but not by object state.
comment:7 Changed 8 years ago by
It is precisely things like a null PK that your solution doesn't cover. For foreign key values, we don't know whether 'None' means 'No value has been set' or 'database NULL'. If we'd thought about this at the beginning we might have used separate values to indicate those two, but it is too late now.
Even without that problem, we don't want to fix this by silently returning an empty set of objects. It is always nonsense to ask the DB for the related objects of an unsaved object. So every time someone does that, it is a logical error in their program. Making sure that such code produces an empty set of results is not actually helpful.
comment:8 Changed 8 years ago by
comment:9 Changed 8 years ago by
comment:10 Changed 8 years ago by
Changed 8 years ago by
updated patch
comment:11 Changed 8 years ago by
Applied the patch from Victor and it work
comment:12 Changed 7 years ago by
Milestone 1.3 deleted
comment:13 Changed 7 years ago by
I don't think this is ready for checkin, because we never came to a conclusion about Carl's comments here:
So we are still on DDN really, because the two alternatives seem to be:
1) Use instance._state.adding to detect 'unset' primary keys (controversial)
2) Admit that we can't cope with this situation, and call the bug a developer error in which they get a silly answer because they've asked a silly question.
comment:14 Changed 7 years ago by
According to wikipedia and this discussion:
...having a NULL primary key is not allowed. SQLite allows it, but doesn't guarantee it will moving forward. Therefore, we should not program for that edge case. Instead, we can detect unsaved instances using the existing 'pk is None' idiom.
(If we make a decision that NULL PKs *are* allowed, there are lots of bit of code that probably need updating, and that should be done separately - although that seems unlikely).
The instance._state.adding flag was needed for the edge case of doing uniqueness validation for explicitly set CharField PKs for new objects. That is not needed here, or wanted - if the user creates a Model instance with an explicitly set PK which corresponds to data in the database, we expect that doing the related lookups should return the records related to that PK e.g. we expect User(id=1).myobject_set.all() to return MyObject.objects.filter(user_id=1), even though the User instance was not saved.
Apologies for the confusion my patch added.
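For illustration, the 'pk is None' guard settled on above can be sketched without Django at all — FakeModel, UnsavedInstanceError, and related_set below are hypothetical stand-ins, not Django's actual API:

```python
class UnsavedInstanceError(ValueError):
    """Raised when related objects are requested for a never-saved row."""

class FakeModel:
    def __init__(self, pk=None):
        self.pk = pk  # None means "never saved", mirroring the Django idiom

    def related_set(self):
        # Alternative 2 from the discussion: asking for the related objects
        # of an unsaved instance is a logical error, so refuse loudly rather
        # than silently return an empty set.
        if self.pk is None:
            raise UnsavedInstanceError(
                "cannot fetch related objects of an unsaved instance")
        return [("row-for-pk", self.pk)]  # stand-in for a real FK query

try:
    FakeModel().related_set()
except UnsavedInstanceError as exc:
    print(exc)  # cannot fetch related objects of an unsaved instance

# An explicitly set PK is honored, matching User(id=1).myobject_set.all():
print(FakeModel(pk=1).related_set())  # [('row-for-pk', 1)]
```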
comment:15 Changed 6 years ago by
Given Luke's last comment, I believe this is a duplicate of #18153 which I fixed in 3190abcd75b1fcd660353da4001885ef82cbc596.
I don't think the User().myobject_set.all() syntax you are proposing is at all intuitive. MyObject.objects.filter(user=None) makes much more sense as a query returning "all MyObjects with user=None".
I'm trying to create a DynamoDB trigger using DynamoDB Streams and AWS Lambda, but I am not very familiar with AWS services yet, so I don't know how to read and process a DynamoDB Stream event in Java 8.
Essentially, what I want to do is create a record in table B whenever a record is created in table A.
Could any of you please point me to a code or post that handles this use case in Java?
Well this code worked for me. You can use it to receive and process DynamoDB events in a Lambda function -
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent.DynamodbStreamRecord;

public class Handler implements RequestHandler<DynamodbEvent, Void> {

    @Override
    public Void handleRequest(DynamodbEvent dynamodbEvent, Context context) {
        for (DynamodbStreamRecord record : dynamodbEvent.getRecords()) {
            if (record == null) {
                continue;
            }
            // record.getDynamodb() exposes the stream payload
            // (keys plus new/old item images); write to table B here
            // using the DynamoDB Java API.
        }
        return null;
    }
}
When you create your Lambda, add the stream from table A as your event source, and that's it.
Jan Kaliszewski wrote:
> [originally from python-list at python.org,
> crossposted to python-ideas at python.org]
>
> 04-09-2009 o 00:46:01 Ken Newton <krnewton at gmail.com> wrote:
>
>> I) + '}'
> [snip]
>
> I find the idea interesting and close to my own needs in many
> situations, if I could alter it a bit.
>
> Of course, we always can use an empty class ('class MyStruct: pass')
> or simply use a dict... But both methods are inconvenient in some
> ways.
>
> In the case of dict we are confined -- even when we need static
> access -- to mapping notation (obj['member']) which is less
> convenient and (what's more important) more error-prone than
> attribute dot-notation.
>
> In the case of an empty class/object we can use convenient attr
> dot-notation but dynamic access is less natural...
>
> IMHO there could be -- in the collections module or even as a built-in
> factory function -- something (somehow) similar to namedtuple, but
> mutable and more dict-like. I'm less focused on nesting such
> structures, and more on making them namespace-like objects with
> convenience-and-today-usage features. Please consider the code:
>
>
> class AttrDict(dict):  # (or maybe from OrderedDict)
>     "It's only a model. (Shhh!)"
>
>     def __getattr__(self, name):
>         if name.startswith('_'):
>             raise AttributeError("AttrDict's key can't "
>                                  "start with underscore")
>         else:
>             return self[name]
>
>     def __setattr__(self, name, value):
>         self[name] = value
>
>     def __delattr__(self, name):
>         del self[name]
>
>     def __repr__(self):
>         return '{0}({1})'.format(self.__class__.__name__,
>                                  dict.__repr__(self))
>
>     def __str__(self):
>         return self._as_str()
>
>     def _gen_format(self, indwidth, indstate):
>         indst = indstate * ' '
>         ind = (indstate + indwidth) * ' '
>         yield ('\n' + indst + '{' if indstate else '{')
>         for key, val in self.items():
>             valstr = (str(val) if not isinstance(val, AttrDict)
>                       else val._as_str(indwidth, indstate + indwidth))
>             yield '{ind}{key}: {valstr}'.format(ind=ind, key=key,
>                                                 valstr=valstr)
>         yield indst + '}'
>
>     def _as_str(self, indwidth=4, indstate=0):
>         return '\n'.join(self._gen_format(indwidth, indstate))
>
>     def _as_dict(self):
>         return dict.copy(self)
>
>
> # Test code:
> if __name__ == '__main__':
>     struct = AttrDict()
>     struct.first = 1
>     struct.second = 2.0
>     struct.third = '3rd'
>     struct.fourth = [4]
>     print(struct)
>     # output:
>     # {
>     #     'second': 2.0
>     #     'fourth': [4]
>     #     'third': '3rd'
>     #     'first': 1
>     # }
>
>     del struct.fourth
>
>     print(repr(struct))
>     # output:
>     # AttrDict({'second': 2.0, 'third': '3rd', 'first': 1})
>
>     print(struct.first)  # (static access)
>     # output:
>     # 1
>
>     for x in ('first', 'second', 'third'):
>         print(struct[x])  # (dynamic access)
>     # output:
>     # 1
>     # 2.0
>     # 3rd
>
>     struct.sub = AttrDict(a=1, b=2, c=89)
>     print(struct._as_dict())
>     # output:
>     # {'second': 2.0, 'sub': AttrDict({'a': 1, 'c': 89, 'b': 2}),\
>     #  'third': '3rd', 'first': 1}
>
>     print(struct._as_str(8))
>     # output:
>     # {
>     #         second: 2.0
>     #         sub:
>     #         {
>     #                 a: 1
>     #                 c: 89
>     #                 b: 2
>     #         }
>     #         third: 3rd
>     #         first: 1
>     # }
>
>
> What do you think about it?
>
> Cheers,
> *j

I like both suggestions. The dot notation is simpler than the dictionary
one, in many cases. struct is perhaps a name to avoid, as it is a
standard module. The result has similarities to the Pascal Record.

Colin W.
Developers for Python -- the fifth most popular programming language according to the Tiobe index -- now have several new features for performing asynchronous operations, matrix math, type hinting, and other functions.
Much of the functionality added to the newly released Python 3.5 echoes that of other popular languages, but that's not to say Python has been heavily influenced by outside forces. Rather, most of the additions to Python 3.5 are meant to complement or enhance the work Python programmers are already doing.
Coroutines (async/await)
Coroutines, the single biggest addition to Python 3.5, provide a native way to perform asynchronous programming. A function labeled with the async keyword -- which suspends its execution until another condition is met -- becomes a coroutine:

async def read_data(db):
    data = await db.fetch('SELECT ...')
A function like this would wait for data to come in from the db.fetch function (also an async function), but wouldn't block the execution of other functions.
Python has had the ability to run code asynchronously, but the syntax has been derided as "un-Pythonic" -- not in line with the language's espoused behaviors. The async/await syntax is meant to remedy that complaint, and it provides features similar to what languages like Go already have.
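A minimal runnable sketch of the idea, with asyncio.sleep standing in for the I/O wait performed by a call like db.fetch above (asyncio.run is the modern entry point, added in 3.7; on 3.5 itself you would use loop.run_until_complete):

```python
import asyncio

async def fetch(query, delay):
    # Stand-in for a database call: suspends this coroutine
    # without blocking other tasks on the event loop.
    await asyncio.sleep(delay)
    return 'result of {}'.format(query)

async def main():
    # Both "queries" wait concurrently, so the total wall time
    # is about 0.05s, not 0.10s.
    return await asyncio.gather(fetch('SELECT 1', 0.05),
                                fetch('SELECT 2', 0.05))

print(asyncio.run(main()))  # ['result of SELECT 1', 'result of SELECT 2']
```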
Type hinting
Type hinting, another long-awaited addition to Python, provides an option for annotating variables (including function arguments) to indicate the type of variable in use. For example:
def greet(name: str) -> str:
    return 'Hello there, {}'.format(name)
For this function, the name variable is a string, as is the value returned by the function. In such cases, no type checking is performed at runtime; instead, it's optionally performed by a third-party code analyzer.
Guido van Rossum, author of the Python language, has been a proponent of type hinting, but he has been equally adamant that the language not become statically typed in the manner of C or Go. The purpose here, as explained in the feature proposal document, is to make Python more amenable to offline analysis and "(perhaps, in some contexts) code generation utilizing type information."
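The hints are ordinary objects attached to the function for tools to inspect; a quick way to confirm that the interpreter itself enforces nothing at runtime:

```python
def greet(name: str) -> str:
    return 'Hello there, {}'.format(name)

# The annotations are stored on the function object for analyzers...
print(greet.__annotations__)  # {'name': <class 'str'>, 'return': <class 'str'>}

# ...but passing the "wrong" type raises no error at runtime:
print(greet(42))  # Hello there, 42
```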
The @ matrix multiplication operator
Math and science applications are a big part of Python's use cases, but matrix multiplication, typically performed in those jobs, doesn't have a standard operator in Python. The various math and science libraries available for Python implement matrix multiplication, but not consistently.
Version 3.5's @ infix operator provides a common syntax for matrix math, with the promise of more concise and readable code (a common aspiration for Python). A future release of the NumPy math-and-stats package will also support @ for matrix multiplication at high speed.
According to the design document for the feature, "@ will be used frequently. In fact, evidence suggests it may be used more frequently than // or the bitwise operators."
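Under the hood, a @ b simply dispatches to the new __matmul__ special method, so any class can opt in. A toy pure-Python illustration (deliberately unoptimized; real workloads belong to libraries like NumPy):

```python
class Matrix:
    def __init__(self, rows):
        self.rows = rows

    def __matmul__(self, other):
        # Classic row-by-column product: entry (i, j) is the dot product
        # of row i of self with column j of other.
        return Matrix([[sum(a * b for a, b in zip(row, col))
                        for col in zip(*other.rows)]
                       for row in self.rows])

a = Matrix([[1, 2], [3, 4]])
b = Matrix([[5, 6], [7, 8]])
print((a @ b).rows)  # [[19, 22], [43, 50]]
```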
All of Python's future development is concentrated in the 3.x branch. While the 2.x branch is more broadly used, it is less versatile, and the tide has been slow in turning. Some key elements of the Python ecosystem have made the jump -- for example, the vast majority of the most popular Python packages are now 3.x compatible. | http://www.infoworld.com/article/2969654/application-development/python-35-builds-on-strengths-and-borrows-from-go.html | CC-MAIN-2017-17 | refinedweb | 535 | 53.1 |