Microsoft Word is a word processor developed by Microsoft, initially released in 1983. It is one of the most widely used word processors in the world, and its user base continues to grow. Microsoft regularly updates Word with major improvements and better accessibility.
Many users face a problem where they cannot access a Word file. The error states that they don’t have enough privileges to view the content. This error can pop up on a number of occasions: when you transfer files from another computer, when you update your Word client, or when you encrypt files.
Solution 1: Changing File Permissions
In most cases, your account doesn’t have ownership of the file, so it cannot view or read it. We can try granting access to your account by opening the file’s security settings and changing the permissions.
- Right-click on the Word document, select Properties, and navigate to the Security tab. Then click on Advanced present at the bottom of the window.
- If you see another window with a “Continue” button and an administrator logo right beside it, click it so you can access the new window. Once in the Permissions tab, click on “Add”.
- Once in the Permission Entry window, click on “Select a principal” at the top of the window.
- You might not know the exact name of the user you are trying to add, so click on “Advanced” at the bottom left of the window so we can add the user from a list.
- Now we can start searching for our user/group and select it to grant the permission accordingly. Click “Find Now” and the list of all users will populate in the space at the bottom.
- Search the list for “Authenticated Users” and after selecting it, press Ok.
- Now the user/group will be populated automatically in the object name field. Press Ok to proceed.
- Make sure that all the checkboxes are checked, so that all the permissions are granted (Full control, Modify, Read and execute, Read, Write, etc.).
- Press Ok and Apply in the permission windows to apply the changes and exit. Now try opening the file by double-clicking it. Hopefully, it will open without any problems.
Solution 2: Changing Deny Permissions
It is also possible that the file you are trying to access has its control denied to all users in its security properties. This usually happens when you transfer files in bulk from one computer to another.
- Right-click on the Word document and select Properties.
- Once in the Properties window, navigate to the Security tab.
- If you see a tick in the Deny column in front of a user or group, it means that access is being denied to that group.
- Click on the “Edit” button to change the permissions.
- Once in the permissions window, check the “Allow” box for Full Control. All the Deny ticks will be removed automatically. Press Apply to save changes and exit.
- Try opening the Word document again. It is possible that you will require a restart.
Solution 3: Removing Properties and Personal Information
Word tends to automatically save personal information in the file information section, such as the author’s name, date modified, etc. There were many cases where users reported that removing this information solved the problem for them and they were able to successfully open the file.
- Right-click on the Word document and select Properties.
- Navigate to the details tab and click the option which states “Remove Properties and Personal Information”.
- Check the option which says “Create a copy with all possible properties removed” and press Ok.
This will automatically create a copy in the current directory with all the attributes removed. You can also select several files at once and perform this operation by opening Properties and selecting the Details tab.
Solution 4: Checking Anti-Virus Exceptions
Many antivirus programs have a feature where they automatically protect folders (such as My Documents) and cause access problems like the one we are facing. You should head over to your antivirus settings and check the protected list to see if the file you are accessing is in a folder which is protected.
All antivirus programs are different, so we cannot list all the methods here. For example, Panda Cloud Antivirus has a data protection feature where folders tend to be added to the “Protected” list automatically. Make sure that the folder isn’t protected, restart your computer, and try accessing the document again.
Solution 5: Copying all the Documents to Another Drive
You can also try copying all the existing documents to another hard drive altogether and check if the problem still persists. In many cases the document couldn’t be opened when it was on one hard drive/SSD but opened perfectly when it was copied to another drive or computer.
Open the external/hard drive and manually select all the files to be copied to another location. Right-click and select Copy. Now navigate to an accessible location on your computer, create a new folder, and paste all the contents.
Solution 6: Changing “Inherit from parent entries” Option
Inherit from parent entries is an option in Windows which is enabled by default. It helps with ease of access and makes a lot of things simpler if you are working regularly with Word. However, it might also be the cause of the problem we are facing. We can try disabling it and check if this brings any change.
- Right-click on the Word document and select Properties.
- Select the Security tab and click on “Advanced” near the bottom of the window.
- Near the bottom of the window, click the button labeled “Disable inheritance”.
- Press Apply to save changes and exit. Now try accessing the file again.
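For readers comfortable with the command line, the permission changes from Solutions 1, 2 and 6 can also be sketched with the built-in icacls and takeown tools. The file path below is just an example; adjust it and the user name to your case, and run the commands from an elevated Command Prompt:

```batch
:: Solution 1: grant Authenticated Users full control of the document
icacls "C:\Docs\report.docx" /grant "Authenticated Users":F

:: Solution 2: remove any explicit Deny entries for a user (adjust the name)
icacls "C:\Docs\report.docx" /remove:d "SomeUser"

:: If ownership itself is the problem, take ownership of the file first
takeown /f "C:\Docs\report.docx"

:: Solution 6: disable inheritance but keep the current permission entries
icacls "C:\Docs\report.docx" /inheritance:d
```

These commands change real permissions, so double-check the path before running them.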
Visual Studio 2010: Generating sequence diagrams on the fly
One nice feature that Visual Studio 2010 provides is generating sequence diagrams on the fly. Just point at a method and choose the diagram generation option from the context menu. In this posting I will show you how to generate sequence diagrams on the fly.
To keep the example illustrative and simple I will use simple code. Let’s suppose we have an ASP.NET MVC application with the following controller.
public class PriceEnquiryController : Controller
{
// ... some properties and methods ...
public ActionResult Index()
{
var model = new PriceEnquiryListModel();
model.PriceEnquiries = PriceEnquiryRepository.ListPriceEnquiries();
return View(model);
}
}
Let’s say we want to generate a sequence diagram for this method. All we have to do is right-click the method, select “Generate Sequence Diagram …” and modify some options.
You can select what kinds of calls you want to include and exclude. Setting these options, you can make the diagram reflect only those parts of your code that you really need to visualize.
If you look at the image, you can see that you can also set the call depth. With more complex methods this may be very helpful, because too large a call depth results in a huge diagram with too much information on it.
When we are done with the options, we click the “OK” button and Visual Studio 2010 generates the sequence diagram.
This diagram is pretty simple and primitive, but you can also generate more complex diagrams. Just set the options you need and let Visual Studio 2010 generate – it works like a charm. Let’s hope we get more diagram types when the first stable version of Visual Studio 2010 is released.
- NAME
- VERSION
- DESCRIPTION
- BUT WHY?
- NOTE ABOUT THE EXAMPLES
- RAW CGI.pm EXAMPLES
- PSGI/Plack
- Mojolicious
- Dancer2
- Catalyst
- Others
- Dependency Handling
- SEE ALSO
- AUTHOR INFORMATION
NAME
CGI::Alternatives - Documentation for alternative solutions to CGI.pm
VERSION
0.10
DESCRIPTION
This module doesn't do anything, it exists solely to document alternatives to the CGI.pm module.
BUT WHY?
CGI.pm hasn't been considered good practice for many years, and there have been alternatives available for web development in perl for a long time. Despite this there are still some perl developers that will recommend the use of CGI.pm for web development and prototyping. The two main arguments for the use of CGI.pm, often given by those developers, are no longer true:
1) "CGI.pm is a core module so you don't have to install anything extra." This is now incorrect: CGI.pm was removed from the perl core in version 5.22, so it has to be installed from CPAN like any other module.
If you are doing any serious web development you are going to have to use external dependencies, DBI is not in the core for example.
2) "CGI.pm scripts are shorter and simpler than alternative implementations." Again, not true and the following examples will show that.
NOTE ABOUT THE EXAMPLES
All of the following are functionally identical. They display a very simple form with one text input box. When the form is submitted, it is redisplayed with the original input displayed below the input box.
This example may be trivial, but that is the point. The frameworks shown here feature a great deal of functionality for dealing with other parts of your application and dealing with that in a maintainable way, with full separation of concerns and easy testing.
All the examples are commented, where I feel it is necessary to highlight the differences between the implementations; however I do not explain the details of the frameworks - I would be duplicating the frameworks' docs if I did that, so have a look at the links provided and investigate further.
All of the examples in this documentation can be found within the examples/ directory within this distribution. If you want to run them you will need to install the necessary CPAN modules, these are not included as dependencies in this distribution.
RAW CGI.pm EXAMPLES
This is the base script that will be re-implemented using the other frameworks
There are two versions - one that uses the HTML generation functions of CGI.pm and one that uses Template Toolkit. This is where we get into the first issue with CGI.pm - poor separation of concerns. CGI.pm (and cgi-lib.pl) existed years before template engines were available in perl. As a consequence, to make the generation of HTML easier, functions were added to output HTML directly from the scripts themselves. In doing this you immediately increase the maintenance burden, as any changes required to the HTML need to be done within the scripts. You can't just hand a template to the web-designers and allow them to work their magic. Don't mix the business logic and the presentation layer. Just don't.
CGI.pm With Inline HTML Functions
A simple example with a form, using the html generation functions of CGI.pm. Please don't use these functions, I am merely showing them here for comparison reasons.
#!/usr/bin/env perl

# most CGI.pm scripts i encounter don't use strict or warnings.
# please don't omit these, you are asking for a world of pain
# somewhere down the line if you choose to develop sans strict
use strict;
use warnings;

use CGI qw/ -utf8 /;

my $cgi = CGI->new;
my $res = $cgi->param( 'user_input' );

my $out = $cgi->header(
    -type    => 'text/html',
    -charset => 'utf-8',
);

# html output functions. at best this is a lesson in obfuscation
# at worst it is an unmaintainable nightmare (and i'm using
# relatively clean perl code and a very very simple example here)
$out .= $cgi->start_html( "An Example Form" );

$out .= $cgi->start_form(
    -method => "post",
    -action => "/example_form",
);

$out .= $cgi->p(
    "Say something: ",
    $cgi->textfield( -name => 'user_input' ),
    $cgi->br,
    ( $res ? ( $cgi->br, "You wrote: $res" ) : () ),
    $cgi->br,
    $cgi->br,
    $cgi->submit,
);

$out .= $cgi->end_form;
$out .= $cgi->end_html;

print $out;
CGI.pm Using Template Toolkit
I'm including this example to show that it is easy to move the html generation out of the raw CGI.pm script and into a template for better separation of concerns.
#!/usr/bin/env perl

# most CGI.pm scripts i encounter don't use strict or warnings.
# please don't omit these, you are asking for a world of pain
# somewhere down the line if you choose to develop sans strict
use strict;
use warnings;

use FindBin qw/ $Script $Bin /;
use Template;
use CGI qw/ -utf8 /;

# necessary objects
my $cgi = CGI->new;
my $tt  = Template->new({
    INCLUDE_PATH => "$Bin/templates",
});

# the user input
my $res = $cgi->param( 'user_input' );

# we're using TT but we *still* need to print the Content-Type header
# we can't put that in the template because we need it to be reusable
# by the various other frameworks
my $out = $cgi->header(
    -type    => 'text/html',
    -charset => 'utf-8',
);

# TT will append the output to the passed referenced SCALAR
$tt->process(
    "example_form.html.tt",
    {
        result => $res,
    },
    \$out,
) or die $tt->error;

print $out;
The Template File
Here's a key point - this template file will be re-used by all the following framework examples with absolutely no modifications. We can move between the frameworks without having to do any porting of the HTML because it has been divorced from the controller code. What did i say? Separation of concerns: win.
<html>
    <meta charset="utf-8">
    <head>An Example Form</head>
    <body>
        <form action="/example_form" method="post">
            <p>
                Say something: <input name="user_input" type="text" /><br />
                [% IF result %]
                    <br />You wrote: [% result %]
                [% END %]
                <br />
                <br />
                <input type="submit" />
            </p>
        </form>
    </body>
</html>
One important point to make is the action is /example_form, so the CGI.pm scripts above would have to be called example_form or the webserver would have to be setup to redirect routes to /example_form to whatever the cgi script is called (cgi.pl and cgi_tt.pl in the examples/ directory)
Note that I have used Template::Toolkit here, another excellent template engine is Text::Xslate. I would avoid Mason(2) and HTML::Template. Please don't write your own template engine.
PSGI/Plack
PSGI is an interface between Perl web applications and web servers, and Plack is a Perl module and toolkit that contains PSGI middleware, helpers and adapters to web servers.
Plack is a collection of building blocks to create web applications, ranging from quick & easy scripts, to the foundations of building larger frameworks.
#!/usr/bin/env perl

use strict;
use warnings;
use feature qw/ state /;

use FindBin qw/ $Bin /;
use Template;
use Plack::Request;
use Plack::Response;

my $app = sub {

    my $req = Plack::Request->new( shift );
    my $res = Plack::Response->new( 200 );

    state $tt = Template->new({
        INCLUDE_PATH => "$Bin/templates",
    });

    my $out;

    $tt->process(
        "example_form.html.tt",
        {
            result => $req->parameters->{'user_input'},
        },
        \$out,
    ) or die $tt->error;

    $res->body( $out );
    $res->finalize;
};
To run this script:
plackup examples/plack_psgi.pl
That makes the script (the "app") available at http://*:5000
Mojolicious
CPAN:
Repo:
Mojolicious is a feature rich modern web framework, with no non-core dependencies. It is incredibly easy to get a web app up and running with Mojolicious.
Mojolicious Lite App
Note that we are using the TtRenderer plugin here, as by default Mojolicious uses its own .ep format
#!/usr/bin/env perl

# automatically enables "strict", "warnings", "utf8" and perl 5.10 features
use Mojolicious::Lite;
use Mojolicious::Plugin::TtRenderer;

# automatically render *.html.tt templates
plugin 'tt_renderer';

any '/example_form' => sub {
    my ( $self ) = @_;
    $self->stash( result => $self->param( 'user_input' ) );
};

app->start;
To run this script (and all the following Mojolicious examples):
morbo examples/mojolicious_lite.pl
That makes the page available at http://*:3000/example_form
Mojolicious Full App
#!/usr/bin/env perl

# in reality this would be in a separate file
package ExampleApp;

# automatically enables "strict", "warnings", "utf8" and perl 5.10 features
use Mojo::Base qw( Mojolicious );

sub startup {
    my ( $self ) = @_;
    $self->plugin( 'tt_renderer' );
    $self->routes->any('/example_form')
        ->to('ExampleController#example_form');
}

# in reality this would be in a separate file
package ExampleApp::ExampleController;

use Mojo::Base 'Mojolicious::Controller';

sub example_form {
    my ( $self ) = @_;
    $self->stash( result => $self->param( 'user_input' ) );
    $self->render( 'example_form' );
}

# in reality this would be in a separate file
package main;

use strict;
use warnings;

use Mojolicious::Commands;
Mojolicious::Commands->start_app( 'ExampleApp' );
This is a "full fat" version of the app in Mojolicious, as stated in the comments you would split the packages out into separate files in the real thing. Run using:
morbo examples/mojolicious.pl
Mojolicious Lite App Wrapping The CGI.pm Script(s)
#!/usr/bin/env perl

# automatically enables "strict", "warnings", "utf8" and Perl 5.10 features
use Mojolicious::Lite;
use Mojolicious::Plugin::CGI;
use FindBin qw/$Bin/;

plugin CGI => [ '/example_form' => "examples/cgi_tt.pl" ];

app->start;
This is an interesting example - we can wrap the existing CGI.pm scripts with Mojolicious and then add new routes to the Mojolicious app - this gives us a migration path. There is one thing to consider - if you are serving your cgi scripts using a persistent webserver (e.g. mod_perl) then you will see a performance hit, because Mojolicious::Plugin::CGI will exec the cgi script for each request. Run using:
morbo examples/mojolicious_lite_plugin_cgi.pl
Dancer2
CPAN:
Repo:
Dancer2 is a rewrite of Dancer; they share a lot in common, but I would recommend Dancer2 as it solved some issues with Dancer.
#!/usr/bin/env perl

# automatically enables strict and warnings
use Dancer2;

any [ 'get','post' ] => '/example_form' => sub {
    template 'example_form.html.tt', {
        'result' => params->{'user_input'}
    };
};

start;
Honestly that's just beautiful. The above example can be run with:
perl examples/dancer2.pl
That makes the page available at http://*:3000/example_form
Catalyst
CPAN:
Repo: git://git.shadowcat.co.uk/catagits/Catalyst-Runtime.git
Catalyst is one of the older web frameworks in perl, but is still very popular, actively maintained, and feature rich. It has a heavier dependency list than the above frameworks, but this should not be taken as a negative point.
Catalyst is slightly more involved in that you have to set up your entire app as the first step, which involves running:
catalyst.pl example_form
Which will create the various directories and scripts for building/running your app. You then need to add the necessary controllers, views, and templates. This has all been done automatically through the use of the helper scripts that come with Catalyst. The important bit, the actual example code, is just this in the examples/example_form/lib/example_form/Controller/Root.pm controller:
package example_form::Controller::Root;

# automatically enables strict and warnings
use Moose;
use namespace::autoclean;

BEGIN { extends 'Catalyst::Controller' }

__PACKAGE__->config(namespace => '');

sub example_form : Local {
    my ( $self,$c ) = @_;

    $c->stash(
        template => 'example_form.html.tt',
        result   => $c->req->params->{user_input},
    );
}

sub end : ActionClass('RenderView') {}

__PACKAGE__->meta->make_immutable;

1;
Then running the server:
perl examples/example_form/script/example_form_server.pl
Again makes the page available at http://*:3000/example_form
Others
The three (four) examples above are the "big three", currently very popular with great communities and support. There are other frameworks available:
Dependency Handling
This is a whole other topic, but given CGI.pm is no longer in the perl core you would have to install it anyway. It would be a good idea to do this the right way from the beginning. I'm not going to go into this in detail here; there are many good sources of information on the web. Here are some links to get you started:
Managing perl:
Managing perl modules:
SEE ALSO
Task::Kensho - A Glimpse at an Enlightened Perl
AUTHOR INFORMATION
Lee Johnson -
leejo@cpan.org
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. If you would like to contribute documentation please raise an issue / pull request: | https://metacpan.org/pod/release/LEEJO/CGI-Alternatives-0.10/lib/CGI/Alternatives.pm | CC-MAIN-2017-04 | refinedweb | 1,982 | 54.02 |
Working with Singletons

Singleton is one of the most well-known and most hated design patterns amongst developers. It is very easy to implement a basic version of the singleton pattern (probably why it’s abused so much). In this article, we’ll take a look at what singletons are and how to best implement them in JavaScript.
There are times when you need to have only one instance of a class and no more. It could be some kind of resource manager, one that maintains I/O ports of your application or some global lookup for values. That’s where singletons come in.
Singletons are used to create an instance of a class if it does not exist or else return the reference of the existing one. In other words, Singletons are created exactly once during the runtime of the application in the global scope.
You might ask, why use singletons in a language with global variables? They don’t seem very different from global variables (or static ones), and most regard them as “glorified globals”. JavaScript in particular blurs that distinction quite a bit, because the following code…
var Alligator = {
  color: "green",
  getColor: function() {
    console.log(this.color);
  }
};
Is technically a singleton object, since it’s an object literal - which means that the object with that name is unique throughout the application (since it can’t be redeclared).
This seems to have a lot in common with global variables in JavaScript as well. So what’s the difference?
- For starters, global variables are lexically scoped whereas singletons are not, meaning that if there is another variable with the same name as the global variable inside a programming block, then that local reference is given priority; a singleton, being effectively static in its declaration, cannot have its reference redeclared like that.
- The value of a singleton is modified through methods.
- The singleton is not freed until the termination of the program, which is likely not the case for a global variable.
An interesting advantage of a singleton is that it’s thread-safe. While that feature is not really applicable to JavaScript, it comes in handy in languages like C++. This is just a case to prove the point that it’s not really weird to go for singletons even in a language that supports global variables.
There are scenarios where singletons are handy. Some applications of singletons are logger objects or configuration settings classes.
A quick way to declare a singleton would be:
// Declare them like this
var SingletonInstance = {
  method1: function () {
    // ...
  },
  method2: function () {
    // ...
  }
};

// and use them as such
console.log(SingletonInstance.method1());
console.log(SingletonInstance.method2());
While this may be the easy way, it’s not necessarily the best. Another way would be to use a factory class that allows us to create the singleton once.
var SingletonFactory = (function(){
  function SingletonClass() {
    // ...
  }
  var instance;
  return {
    getInstance: function(){
      // check if instance is available
      if (!instance) {
        instance = new SingletonClass();
        delete instance.constructor; // or set it to null
      }
      return instance;
    }
  };
})();
This is better than the last example because the class definition is private and the constructor is deleted after the first instance creation, which helps us prevent duplicate singletons in the program. But the above approach looks a lot like the factory pattern.
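To check that a factory really hands out only one instance, here is a quick standalone sketch. It restates a minimal version of the factory pattern described above; the createdAt property is made up purely for illustration:

```javascript
var SingletonFactory = (function () {
  // private class definition, hidden inside the closure
  function SingletonClass() {
    this.createdAt = Date.now();
  }
  var instance;
  return {
    getInstance: function () {
      // lazily create the instance on first use
      if (!instance) {
        instance = new SingletonClass();
      }
      return instance;
    }
  };
})();

var first = SingletonFactory.getInstance();
var second = SingletonFactory.getInstance();

console.log(first === second); // true – both variables point at the same object
```

Because the closure keeps `instance` private, there is no way for calling code to construct a second SingletonClass directly.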
Perhaps the cleanest approach is to use a combination of ES6 classes, const and Object.freeze():

class Singleton {
  constructor() {
    // ...
  }
  method1() {
    // ...
  }
  method2() {
    // ...
  }
}

const singletonInstance = new Singleton();
Object.freeze(singletonInstance);
We can go a little further and write this singleton in a module and then export it with the ES6 export functionality.
export default singletonInstance;
Then use that singleton by importing it:
import mySingleton from './path-to-my-singleton-definition.js';

mySingleton.method1(); // Now use your singleton
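To see what Object.freeze() actually buys you, here is a small sketch. The Config class, its mode property and greet() method are made up for the example:

```javascript
class Config {
  constructor() {
    this.mode = "production";
  }
  greet() {
    return "mode: " + this.mode;
  }
}

const config = new Config();
Object.freeze(config);

// Writes to a frozen object fail silently in sloppy mode
// and throw a TypeError in strict mode, so guard with try/catch.
try {
  config.mode = "debug";
  config.newProp = 42;
} catch (e) {
  // expected under "use strict"
}

console.log(config.greet());          // "mode: production" – the write was ignored
console.log(Object.isFrozen(config)); // true
```

Without the freeze, any consumer could mutate the shared instance and silently change global state for everyone else, which is exactly the testing headache singletons are criticized for.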
So take your pick, find which approach works best for your application and puts readability first.
Conclusion
It’s quite likely that you will run into overwhelming online literature on how singletons are bad for object-oriented designs. The idea is to use singletons where they don’t affect the state of the application, because if you fail to follow that, testing goes right out the window. This severely limits their usage in big applications. Most developers agree that global state is bad, but some love singletons too much to look at their bad sides, while others go to extreme lengths to avoid using global state. Whether singletons are good or bad, understanding this fundamental design pattern and adding it to your programming toolkit is always a wise idea.
Further Reading
Check out this blog post by Misko Hevery for more insight into the global state issue with singletons. | https://www.digitalocean.com/community/tutorials/js-js-singletons | CC-MAIN-2020-34 | refinedweb | 762 | 54.22 |
I see ya back to playing my brah. Cool hit me back up in discord man.
McNubberson, Registered Member (member for 6 years, 8 months; last active Tue, Sep 1, 2020)
Aug 31, 2020: McNubberson posted a message on "FRESH REALM 18+ | looking for active people" (Posted in: Minecraft Realms)
McNubberson posted a message on "AOGX Be Realm'n" (Posted in: Minecraft Realms)
Welcome to the community man. Good to have you here.
McNubberson posted a message on "AOGX Be Realm'n" (Posted in: Minecraft Realms)
Here's something different. The AOGX have a realm. Yes, that AOGX from good old CS days. No longer pwning noobs, or necro spraying bodies, we're old now. It's just me and the infamous Tbone currently, and we are looking for the more mature crowd to hang out with. (Us being, you know, older now, I'm close to 40, who knew.) Of course vanilla survival, those of the female persuasion are of course welcome because if anyone knows chicks rock at gaming, it is us. Even if you think you may not be close enough in age, I recommend at least trying. I even reached out to a nearly 70 year old and a 25 year old because the interview was so good. Voice is a huge plus because well, typing can stink while gaming and we love to chat. Hit me up!
AOGX is a community for friends, not a clan. We are looking for some new friends, who happen to game and play minecraft.
Reach me here on the forum or this post if it is the only way you can. I say this only because I don't just sit here. I do check the forum regularly though so no pressure.
Skype: RawSlade
Discord: RawSlade#4112
McNubberson posted a message on "Easy Difficulty is Too Hard" (Posted in: Suggestions)
I can kinda relate to this since I'm a bit of a n00b even after four-ish years of playing the game, mostly because I do a lot of stupid stuff like jumping into a group of monsters thinking I can kill them all.
That being said, I think I'm the exception to the rule. I also find that people who think the game should be easier tend to say it's too hard for their kids. That's fair since Normal and Hard exist to make the game harder, so why shouldn't Easy be easy enough for the lowest age demographic the game appeals to, namely 10-year-olds?
Alright so what about these changes?
Eh... Maybe on its own, but I wouldn't do that in addition to all the other additions.
Seems reasonable.
Agreed.
Hmmm maybe? I haven't had a lot of experience with drowned yet, but I do think the fact they spawn in any water with a light level of 7 sounds annoying.
Again, haven't had a lot of experience with these guys yet so I'll have to take your word for it.
Food depletion in general is kinda ridiculous, so you're not alone on this one.
Now, in defense of Minecraft as it currently stands, I don't think even in Easy mode monsters should be incompetent. But I do think they do deserve a little bit of a nerf.
Mostly Support.
I agree 95% with this response. The only reason I do not 100%, is I have dealt with the drowned a lot, and I can see the viewpoint you may have with this. I have witnessed my daughter get slaughtered over and over with these things.
As said, nerf, not incompetent. I personally cannot test to the difficulty of easy because I started out on normal, and have played hard for years. I do see a lot of what you said though through my kids.
Posted in: Suggestions
This is very true. Like most of the popular fishing games of all time, involve you in an underwater view watching yourself reeling in the lure, and seeing the fish as you do. I mean, really? WoW fishing is ok in the sense you never know exactly what you are going to get and you really have to hunt for the good stuff, but that's pools of water. Lack of fishing freedom. FFXV fishing was cool because of the fishing upgrades to catch big game fish, and the fact it fit in the cooking buff system along with time of day fish. But on a whole, lacked freedom and hid behind a linear path game system. The only decent fishing is fishing sims, and they still have many flaws.
I think if (Opinion here) MC added variety (of catchable fish), it would further nerf drops in afk farms and add a (little) bit of spice to actual fishing. Again this is clearly opinion because there are people who do not touch fishing period so adding more does nothing for them.
I love fishing in RL and I do dig most of the fishing in MC. In fact it is my preferred food in MC. Sometimes I do fantasize about how I wish there were fish traps, river nets, ocean nets, and deep water throw lines, so I could go off and do other things while waiting for food, but it is not implemented, and probably never will be. In a real scenario you find water then food to start with, then worry about shelter and then scout. It is a little backwards here. It's make sure I have wood and a shelter, then find food, and I do not need water except in my case to catch my food. For the fishing here to not "suck," it will never happen because it's all based on opinion. It can never be agreed upon.
Posted in: Suggestions
Mojang debunked this reasoning with the introduction of the crafting book and advancements. They ensure a player completely detached from the "community" will understand how you are supposed to play from the game's explanations alone. So should there be an "afk fish farm" tutorial? Regular mob spawners are simple. You got the spawner, it shouldn't be too hard to figure out some basic farms. But afk fishing relies on inside knowledge on how block ticks work... or looking up the wiki.
Which proves my point. After you have the eyes, where do they go? Crafting recipes do not tell you they go in a strong hold. Just in ender chests. Which according to everyone who uses "late game" as a viable term, is KEY to the game. How do you find out? The answer is the same as afk fish farms no matter how you look at it. Look it up, Wiki, YouTube, ask someone, stumble upon it on your own. Etc. So no, it's not "Debunked." Also with it becoming a feature, I believe it already is.
Posted in: Suggestions
I don't need to. It's a choice. Just like I ignored AGTRigomortis' very next comment after our posts. Granted it was not addressed to me because he quoted you, and most likely thinks by you saying there is a lack of balance, that you needed to argue your point. Doesn't matter, we both chose to move on.
As far as some quasi survival/creative mode. The only thing you have to do in survival is get food, and depending on your difficulty setting, that is not even true. So is it really a true survival game? My kids when playing by themselves, use survival as a challenging creative map.
Posted in: Suggestions
If everyone loves these farms, why not make them legit features in the game.
How about a new, underwater dungeon, home to a “fishing spawner” which provides fish and treasure.
You cannot expect a new player to know that placing a bunch of trapdoors and pressure plates in a very certain configuration will grant the AFK fishing technique. So if it "doesn't matter," as everyone so claims-
Make it legit.
Anyone that new isn't going to know how you all suggest playing the game either.
I mean, how do you know about afk fish farms?
The likelihood is small. For that to happen, the person would have had no prior contact with MC at all: they know no one that plays, or they play only with people who have never played before. The odds of them learning are quite high simply because of how many people know. I mean, they have to learn how to make eyes of ender, but once they do... where do those go? The answers are the same any way you answer the "how do they learn where they go" question.
We have suggested "actual ways of fixing it" multiple times throughout this argument. In my first post, for example, I suggested bait items be required in order to fish. This is beyond the scope of the current suggestion, however, which simply suggests an end to AFK Fish Farms in general.
I was suggesting in a different thread.
This was in general.
They have been suggested before, without the arguments. I'd say see fishg's thread, but I was notified you already had because I've already said generally, I like it. I hope it polishes up. Also, if they cannot take it, maybe the internet is a bad place.
"You miss 100% of the shots you don't take."
I've been debated, mocked, and still kept on trucking. All I did was defend Mojang's decision to keep it all these years. As much as I detest Iron farms, I'll go to bat for them too.
Here is my suggestion:
However, the simple fact I've realized is that the quickest and most controversial suggestions get the most feedback, while smaller yet simple suggestions get far less, and larger, complex suggestions are rarely read in their entirety. But I encourage you to read mine or any other fishing overhaul (there are a few older ones; do a search on the forum) and reply to that. I agree that this endless debate is going nowhere.
I've been there mate. Said it was generally a good idea. I just didn't like it had some focus on the afk fishing aspect, because then that would be the focal point, and detract from a really good idea.
McNubberson posted a message on Base Raids (Posted in: Suggestions). Quote from Agtrigormortis»
"."
Apparently some people have forgotten this; that is very evident from the replies from the OP of this thread.
The suggestion that was made was similar to this, except it required you to go hunt for an extremely rare item in another dimension, just to avoid the annoyance. This doesn't appear to be an idea most players would get behind, I certainly wouldn't, friends on my server wouldn't, neither did some others replying here or another forum.
Why did you quote me when what you said clearly has nothing to do with what I said?
I mean, how did this thread even become a thing? It was a blanket statement saying it was bad. I remember when suggestions here had to have an outlined idea for implementing or fixing things, followed by ideas and/or steps to improve it. It mitigated most arguing and kept the discussion on the actual suggestion.
McNubberson posted a message on Base Raids (Posted in: Suggestions)
I would actually love the idea of more/better spawned patrols, or a forced-raid adaptation. Take some redstone (and some other materials) and a nautilus, for instance, and make a 5-minute warning horn or something. That way those that want to know about a forced raid would have a little time to prepare. Obviously, if you do not want the warning, do not make it. I would love to combine a defending game feature along with surviving. Maybe that should be a new game period... hmmm. Up until recently, our base was considered a village (we had placed some villagers deep in its bowels), and that was the only way we could force a raid on our base.
Nearly 7 years ago, 1.4.2 dropped and the primitive AFK fishing machine was born. People were calling foul because a person could gain XP fast while not being anywhere near their computer for long periods of time. Essentially doing nothing to get something. It was Minecraft news that shook multiple forums. No one cared that they were getting fish. Obvious, I know.
Nearly 6 years ago, 1.7.2 dropped and a massive change happened: treasure was added, including enchanted books. The masses went at it with "You should only be able to find (saddles, name tags, enchanted books, enchanted rods) in actual treasure chests." However, a lot of people realized they also added junk to up the random gamble that fishing was. Also, the drop rates of fish and junk were high compared to the rest. I believe they still are, just not the same numbers.
Nearly 3 and 1/2 years ago, 1.9 dropped and Mending became a thing. AFK fish farms could now run indefinitely, and the new book became exploited through other farms: instead of fitting the aim of extending the use of tools, they became virtually indestructible. The outcry was loud and long. Some of those same people then, I imagine, are still on about it today.
The same arguments being made today have been made before. You want this to change? I suggest new material. I also suggest proposing actual ways of fixing it, rather than just arguing whether it is an exploit, or whether you like it or not. Also, trying to prove the worthiness of an item to a person who sees no value in that particular item isn't making a point. If Mending weren't backed by exploited XP farms or farmables, its value would be low, since its intended feature is over-time repairs. You know what item is rare and worthwhile to me? A book with Protection IV. I am sure someone out there gets it more than I do. Luckily, it's a common enchant when enchanting a piece of armor directly on the table. Otherwise, I'd have way more Mending than Protection enchants.
McNubberson posted a message on fishing question (Posted in: Discussion)
Certain angles while in a boat cause the catch to clip the boat (while reeling) and drop. The same thing can happen depending on height, edges, and angle.
- Apr 12, 2019: McNubberson posted a message on Minecraft 1.14 Pre-Release 1 and 2 (Posted in: News). Quote from allyourbasesaregone»
Oh boy, you guys sure are fun...
Since when was fitting through 1.5 block areas a thing anyway? The smallest I could get through was between two closed trapdoors on opposite short sides of a door frame. Is that new with this update?
Yes, it is new.
- May 9, 2018: McNubberson posted a message on Minecraft 1.13 Snapshot 18W19A (Posted in: News).
- Jan 13, 2018: McNubberson posted a message on Minecraft 1.13 Snapshot 18W02A (Posted in: News)
1.13 is mostly technical changes anyway, so unless you do a lot of work with command blocks (or just commands in the console), there won't be a whole lot of difference from 1.12.2. Except now modding will be way easier, and there's new wood stuff, and many other things... The "Update Aquatic" with all the fancy new mobs and ocean features that everyone's waiting for is (probably) going to be 1.14.
1.13
Additions
General
Data packs
- Like resource packs, but for loot tables, advancements, functions, structures, recipes and tags.
- Used by placing them into the datapacks folder of a world.
- Data packs are .zip files or folders, with a pack.mcmeta in the root. See: Tutorials/Creating a resource pack#pack.mcmeta. The packs are located in (world)/datapacks/.
- Structures will load from (world)/generated/structures/(namespace)/(file).nbt before checking data packs.
- However, this directory should not be used to distribute structures. Instead, move these files into data packs.
- Reloadable using /reload.
- Structure: pack.mcmeta, data folder, structures, recipes and tags.
- Items, blocks and functions can be "tagged" with an ID.
- Block tags can be used when testing for blocks in the world.
- Items tags can be used when testing for items in inventories.
- Function tags can be used when calling functions using commands or advancements.
- Functions tagged in minecraft:tick will run every tick at the beginning of the tick.
- Functions tagged in minecraft:load will run once after a (re)load.
- Tags are created using data packs in data/(namespace)/tags/blocks, data/(namespace)/tags/items, and data/(namespace)/tags/functions.
- When overriding a tag from a different data pack, you can choose to replace or append.
- By default all tags append if another data pack created the tag.
- Adding "replace": true to a tag makes it replace instead of append.
- You can, for example, add #foo:bar in a tag value list to reference another tag called foo:bar.
- Self referencing is not possible.
- There are 10 default tags for both items and blocks: minecraft:buttons, minecraft:carpets, minecraft:doors, minecraft:logs, minecraft:planks, minecraft:saplings, minecraft:stone_bricks, minecraft:wooden_buttons, minecraft:wooden_doors and minecraft:wool
- There are 2 extra default tags for blocks that don't have an equivalent for items: minecraft:flower_pots and minecraft:enderman_holdable.
- Advancement item predicates now support item tags.
Loot tables
- Added the set_name function to loot tables.
Options
- FS (Fullscreen) Resolution
- Is used to change the resolution.
- An option in chat settings to toggle automatic command suggestions (defaults on, otherwise hit tab to bring them up).
- Options when editing a world to make a backup and open the backups folder.
Blocks
- Trapdoors, buttons and pressure plates made from all six types of wood.
- A pumpkin block, without the face. The previous pumpkin block has been renamed "Carved Pumpkin".
- Right-clicking a pumpkin block with shears will turn it into a carved pumpkin and make it spit out 4 pumpkin seeds.
Commands
General
- A command UI when typing commands in the chat.
- Different components of commands will be displayed in different colors.
- Errors will be displayed in red without having to run the command.
- An nbt argument in target selectors.
- A new command parsing library known as brigadier.
Coordinates
- is the amount of blocks in the specified direction.
Specific commands
/data
- A command that allows the player to get, merge, and remove entity and block nbt data
- /data get block <pos> [<path>] [<scale>]
- Will return the NBT data from the block at pos as its result (if a path is specified). A path can be specified to only retrieve that nbt data, but this is limited to numeric tags. An optional scale can be provided to scale the number retrieved.
- /data get entity <target> [<path>] [<scale>]
- Will return the NBT data from one target entity as its result (if a path is specified). A path can be specified to only retrieve that nbt data, but this is limited to numeric tags. An optional scale can be provided to scale the number retrieved.
- /data merge block <pos> <nbt>
- Will merge the block nbt data at pos with the specified nbt data.
- /data merge entity <target> <nbt>
- Will merge the entity nbt data from target with the specified nbt data. Merging player nbt data is not allowed.
- /data remove block <pos> <path>
- Will remove nbt data at path from the block at pos.
- /data remove entity <target> <path>
- Will remove nbt data at path from one target entity. Removing player nbt data is not allowed.
- Data paths look like this: foo.bar[0]."A [crazy name]".baz.
- foo.bar means foo's child called bar.
- foo[0] means element 0 of foo.
- "quoted strings" may be used if a name of a key needs to be escaped.
- Examples of old commands:
- /entitydata <target> {} is now /data get entity <target>
- /blockdata <pos> <nbt> is now /data merge block <pos> <nbt>
- Examples of new functionalities:
/datapack
- A command to control loaded data packs.
- Has the following subcommands:
- enable <name> - will enable the specific pack.
- disable <name> - will disable the specific pack.
- list [available|enabled] - will list all data packs, or only the available/enabled ones.
- Data packs are enabled by default, but if you disable it
/teleport
- Added facing.
- /teleport [<targets>] (<location>|<destination>) facing (<facingEntity>|<facingLocation>)
- Will rotate an entity to face either an entity or a location.
/time
Items
- An item form for bark blocks for all six types of wood.
- An item form for smooth quartz, smooth red sandstone, smooth sandstone, and smooth stone.
- An item form for red and brown mushroom blocks and mushroom stems.
- A debug stick to cycle between different block states.
- Left clicking cycles through states; right clicking cycles through values. Shift clicking will cycle through the states or values in reverse order.
- A model for petrified oak slab - the old wood slab that acts like a stone slab.
Changes
General
The "flattening"
- Numeric block metadata completely phased out in favor of block states.
- Split, merged, created, deleted, and renamed a lot of blocks, blockstates and items.
- Blocks and items previously differing because of damage value have gotten their own id, for example white_wool instead of wool:0
- Damage has been moved to the tag tag and is only used by tools and armor; maps use a map tag.
- Files and commands no longer use data or set_data.
- Structures do not run an upgrade path for this.
- To update your structures, load them all in 1.12, then update to 1.13 and save all structures again.
Recipes
- Custom recipes are now loaded from data packs in data/(namespace)/recipes/(name).json
- Turning off the vanilla data pack will also remove all recipes.
- Recipes can now refer to a tag instead of an item.
Creative Inventory
- Because of The "flattening", certain blocks and items have been moved around in their respective groups, for example the Purpur block is now after Obsidian.
- Mushroom Blocks, farmland and grass path are added to the inventory, under the Decoration Blocks group. Additionally, blank firework rockets are added to the Miscellaneous group.
Death Messages
- Added death message for when the player is blown up by a bed in the nether or end
- "Player was killed by [Intentional Game Design]"
- Clicking on "[Intentional Game Design]" opens a link to MCPE-28723
Statistics
- area now.
Blocks
- The upper limit of the block ID has disappeared.
- Blocks which used to have no bottom texture (like repeaters, comparators, torches, etc.) now have a bottom texture, not including redstone wire.
- Flicking a lever on now displays redstone particles.
- Pumpkins and fence gates no longer require a block below them.
- Silverfish-Infested blocks will now break instantly, no matter the tool.
- Bark can now be crafted. 4 logs in a square yield 3 bark.
- Multiple vines facing different directions, including on the bottom of blocks, can now be placed in the same block space.
Entities
Item frames
- Item frames can now be put on floors and ceilings.
Paintings
- Paintings now use a namespaced ID for their motive.
Commands
General
- Commands and functions are much faster and more efficient.
- Most commands are now more case-sensitive. Lowercase is preferable wherever possible.
- For example, this is no longer allowed: /scoreboard ObJeCtIvEs ...
- The output signal of a command block used to be its "success count", but now is its "result".
- Changed all custom names (blocks, items, entities, block entities) to translatable text components.
- Server commands (functions, console, rcon) now run from world spawn in the overworld, instead of at 0,0,0.
- Errors during a command are now a nicer error message (with a tool tip for more info).
NBT
- Thrower and Owner nbt keys of item entities are no longer strings but are instead compounds with two longs named L and M.
- owner nbt keys of snowballs, eggs and ender pearls are no longer strings but are instead compounds with two longs named L and M.
Command UI
- A new prototype for the command UI.
Functions
- Functions are now completely parsed and cached on load.
- This means if a command is incorrect for any reason, the player will know about it on load.
Specific Commands
/advancement
- Removed /advancement test in favor of entity selectors.
/blockdata
/clear
- The syntax of /clear has changed.
- /clear [<target>] [<item>] [<data>] [<count>] [<nbt>] will become /clear [<target>] [<item>] [<count>]
- See the item argument type for more details.
/clone
- The syntax of /clone has been changed.
- /clone <begin> <end> <destination> filtered [force|move|normal] [<block>] [<data>] will become /clone <begin> <end> <destination> filtered [<block>] [force|move|normal]
- /clone <begin> <end> <destination> [replace|masked] [force|move|normal] [<block>] [<data>] will become /clone <begin> <end> <destination> [replace|masked] [force|move|normal]
/defaultgamemode and /gamemode
- Now only accepts string IDs, not shorthand or numeric.
- /gamemode 2 will become /gamemode adventure
- /defaultgamemode sp is now /defaultgamemode spectator
/difficulty
- /difficulty [<value>] now only accepts string IDs, not shorthand or numeric.
- /difficulty 2 is now /difficulty normal
- /difficulty p is now /difficulty peaceful
- You can now query for the current difficulty by using /difficulty without any arguments.
/effect
- The syntax of /effect has been split off, to avoid ambiguity.
- /effect <entity> <effect> is now /effect give <entity> <effect>
- /effect <entity> clear is now /effect clear <entity> [<effect>]
- Giving an effect will now fail if it didn't actually do anything.
- Some mobs are immune (for example an ender dragon).
- Stronger existing effects prevent new weaker ones.
/enchant
- Removed in favor of /modifyitem.
/entitydata
/execute
- The syntax of /execute has been split off.
- but nothing else.
- /execute align <axes> <chained command> executes a command after aligning the current position to the block grid (rounding down), <axes> is any combination of x y and z (for example: x,xz,zyx and yz).
- Examples:
- x=-1.8,y=2.3,z=5.9 using x will become x=-2,y=2.3,z=5.9
- x=2.4,y=-1.1,z=3.8 using yxz will become x=2,y=-2,z=3
- /execute in (overworld|the_end|the_nether) <chained command> executes a command in the given dimension.
- /execute store (result|success) lets you store the result or success of a command somewhere:
- The value will be stored when the full command has finished executing.
- If a command isn't successful (success is 0), result will always be set to 0.
- It will be made clear what the expected result of each command is.
- /execute store (result|success) score <name> <objective> <chained command>
- The value is stored into the scoreboard under <name> and <objective>.
- The objective must exist, but unlike with /stats you don't need to set an initial value for <name>.
- /execute store (result|success) block <pos> <path> (byte|double|float|int|long|short) <scale> <chained command>
- The value is stored in the nbt data at path of the block at pos as a byte, double, float, int, long, or short.
- /execute store (result|success) entity <target> <path> (byte|double|float|int|long|short) <scale> <chained command>
- The value is stored in the nbt data at path of one target entity as a byte, double, float, int, long, or short.
- Like /data, /execute store can't modify player nbt. Nbt inside the tag key of items in the player's Inventory or EnderItems is an exception and can be modified by /execute store.[3]
- Data paths look like this: foo.bar[0]."A [crazy name]".baz.
- foo.bar means foo's child called bar.
- foo[0] means element 0 of foo.
- "quoted strings" may be used if a name of a key needs to be escaped.
- Examples:
- /execute store success score @a foo run say hi
- /execute as @e[type=pig] at @s store success entity @s Saddle byte 1 if entity @p[distance=..5]
- You can chain all sub-commands together.
- After every sub-command you need to write another sub-command.
- When you're done with chaining sub-commands, run lets you write the actual command to be executed.
- / is no longer allowed before the command.
- /execute as somebody at somebody run say hi
- Example of old commands:
- /execute @e ~ ~ ~ detect ~ ~ ~ stone 0 say Stone! is now /execute as @e at @s if block ~ ~ ~ stone run say Stone!
- /execute @e ~ ~ ~ detect ~ ~ ~ grass 0 summon pig is now /execute at @e if block ~ ~ ~ grass_block run summon pig
- /execute @e ~ ~ ~ say Hello! is now /execute as @e run say Hello!
/experience
/fill
- The syntax of /fill has been changed.
- /fill <begin> <end> <block> <data> replace [<replaceBlock>] [<replaceData>] is now /fill <begin> <end> <block> replace [<filter>]
- /fill <begin> <end> <block> [<data>] [destroy|hollow|keep|outline|replace] [<nbt>] is now /fill <begin> <end> <block> [destroy|hollow|keep|outline|replace]
/function
- /function no longer accepts [if|unless] <entity> arguments.
/gamerule
- /gamerule no longer accepts unknown rules ("custom gamerules").
- You can use functions or scoreboards as replacements, with no loss of functionality.
- Existing custom gamerules will just not be accessible. Only built-in rules will be available.
- Values to /gamerule are now type checked (giving a string if it wants an int is a very obvious error).
- Removed the gameLoopFunction gamerule in favor of functions tagged in minecraft:tick.
/give
- The syntax of /give has changed.
- /give <players> <item> [<count>] [<data>] [<nbt>] is now /give <players> <item> [<count>]
- See the item argument type for more details.
/kill
- A target is now mandatory
/locate
- The y-coordinate is now returned as 64 instead of ?.
- The result of the command, used by /execute store, will be the absolute distance to the structure.
/particle
- The <params> argument has been removed, instead the parameters for particles like block can be specified right after the <name> argument using the new block argument.
- Particle names have been changed.
/playsound
- Will Tab ↹ auto-complete custom sound events.
/replaceitem
- The syntax of /replaceitem has changed.
- /replaceitem block <pos> <slot> <item> [<count>] [<data>] [<nbt>] is now /replaceitem block <pos> <slot> <item> [<count>]
- /replaceitem entity <target> <slot> <item> [<count>] [<data>] [<nbt>] is now /replaceitem entity <target> <slot> <item> [<count>]
- See the item argument type for more details.
- The slot argument no longer requires the slot. prefix.
- For example, slot.hotbar.1 is now hotbar.1
/scoreboard
- /scoreboard had [<dataTag>] removed from its commands in favor of the nbt argument in entity selectors.
- /scoreboard players tag and /scoreboard teams removed. Replaced by /tag and /team respectively.
- /scoreboard players test removed in favor of /execute (if|unless) score, entity selectors and /scoreboard players get <target> <objective>.
/setblock
- The syntax of /setblock has changed.
- /setblock <pos> <block> [<data>] [<mode>] [<nbt>] is now /setblock <pos> <block> [<mode>]
- See the block argument type for more details.
/stats
- Removed. Now part of /execute.
- The new /execute one isn't a direct replacement, the behavior has changed:
- It's now per-command, instead of per-entity or per-block.
- There's only result and success, which covers all the old stat types.
/stopsound
- * can now be used instead of source to stop all sounds with a certain name, across all sources.
/tag
- Replaces /scoreboard players tag.
- Keeps the same syntax.
- /tag <players> add <tag> to add <tag> to <players>.
- /tag <players> remove <tag> to remove <tag> from <players>.
- /tag <players> list to list all tags on players.
/team
- Replaces /scoreboard teams.
- Keeps the same syntax.
- /team add <team> [<displayname>]
- /team empty <team>
- /team join <team> [<members>]
- /team leave [<members>]
- /team list [<team>]
- /team option <team> <option> <value>
/testfor, /testforblock and /testforblocks
/toggledownfall
- Removed. It was always used to stop the rain, only to frustrate you a minute later when it's raining again.
- Use /weather.
/tp and /teleport
- /tp is now an alias of /teleport (much like /w, /msg and /tell).
- /teleport has been simplified a bit, to avoid ambiguity.
- ] will teleport you to that position facing an entity's feet or eyes (default feet).
- Teleporting to an entity in another dimension is now allowed.
/trigger
/weather
- If you don't specify a time, it now defaults to 5 minutes (previously random).
Argument Types
Target selectors
- More error handling has been introduced.
- Arguments may now be quoted.
- Things like limit=0, level=-10, gamemode=purple are not allowed.
- There's no longer a "min" and "max" separate values, we instead support ranges.
- level=10 is level 10
- level=10..12 is level 10, 11 or 12
- level=5.. is anything level 5 or above
- level=..15 is anything level 15 or below
- The arcane shorthand names have been renamed.
- m -> gamemode
- l or lm -> level
- r or rm -> distance
- rx or rxm -> x_rotation
- ry or rym -> y_rotation
- c -> limit
- x, y, z, distance, x_rotation, y_rotation are now doubles and allow values like 12.34
- x and z are no longer center-corrected.
- This means x=0 no longer equates to x=0.5.
- gamemode (previously m) no longer allows numerical or shorthand IDs.
- limit (was c) no longer allows negative values.
- Use sort=furthest instead.
- The name argument now supports spaces (as long as it's quoted).
- Multiple of the same argument in target selectors is now possible.
- You can test for advancements with advancements={foo=true,bar=false,custom:something={criterion=true}}
- true for "they completed the advancement", false for "they have not completed the advancement"
- Alternatively, pass a block of specific criteria to test for (again, true/false)
Blocks
- Wherever a <block>, optionally [.
Items
- Wherever an <item>, optionally [<data>].
Mobs
Horse
- The model has been changed to be more consistent with other mobs.[4]
- Some animations like opening its mouth when grazing have been removed from the model as well.
Structures
Witch Huts
- Now generates with a mushroom in the flower pot.
- Previously, it was completely empty.
Other
Controls
- The name of keybindings now describes the actual key (e.g. 'LBUTTON' -> 'Left Button', 'BACKSLASH' -> '\')
Options
- Removed 3D Anaglyph completely
Resource packs
- Updated format number
- The default resource pack can now be moved up and down on the resource pack selection screen.
Planned additions
Commands
/modifyitem
Unconfirmed features
These features are not confirmed for 1.13, but they were mentioned or showcased by developers during development. Main article: Mentioned features
- The ability to change biome dependent colors (such as foliage, water, and the sky) without needing mods.[6]
- Ability for the recipe book to show smelting recipes.[7]
- Recipe book design might be changed.[9][10]
- /save-all, /save-on and /save-off will be replaced by /save, /save enable and /save disable.[11]
- /ban, /ban-ip, /pardon and /pardon-ip will be replaced by other commands.[12]
Fixes
149 issues fixed
LWJGL 2-related issues
- MC-1519 – Key gets stuck when toggling fullscreen
- MC-3643 – CTRL /
- MC-80282 – On Linux in fullscreen, character sometimes cannot fully turn around
- MC-81818 – When resizing the window, you may end up spinning around
From released versions before 1.13
- MC-1511 – Anvil can be placed in certain blocks
- MC-1685 – Unable to write in a new blank Book and Quill after renaming it in an anvil
- MC-2666 – Corner Cobblestone Wall Has Incorrect Collision Box
- MC-5024 – Reticle/Crosshair not centered on the screen
- MC-5037 – Riding a pig / horse with a cape causes it to not behave as expected
- MC-5305 – Burning arrows in ground are not extinguished by rain
- MC-5694 – High efficiency tools / fast mining destroys some blocks client-side only
- MC-12000 – The hit-box of corner fences isn't the same as the collision-box!
- MC-19966 – Fully grown pumpkin stems attach to pumpkin, even if another stem is already attached
- MC-26739 – Doors won't update with redstone
- MC-31100 – /setblock does not update blocks needing support and certain powered states
- MC-31222 – Crash on pressing the inventory close key and an item manipulation key at the same time in large chests
- MC-31346 – When you light a cobblestone wall, it turns into a corner piece for a second
- MC-32539 – You can write on both the server name and the server address input field at the same time
- MC-32972 – /summon accepts arguments that it will ignore
- MC-33710 – Snow & Iron Golem, Ender Dragon, Illusioner, Giant, and Wither not in Mob statistics
- MC-34365 – Triple Chest - Triple Bug
- MC returns "cannot be found" for invalid entities
- MC-70188 – Some blocks cannot be placed facing a wall with /setblock
- MC-77570 – "and" doesn't get translated when listing entities
- MC-79255 – Using /trigger first command not giving any feedback with invalid argument
- MC-101113 – /playsound command is not validating arguments correctly
- MC-106681 – Scoreboard teams leave doesn't work if first player fails
- MC-107145 – Entity kill stat objectives using old/incorrect entity names
- MC-107359 – You can replace loot tables and advancements, but not structure files
- MC-108756 – Dungeons generating triple chests
- MC-109591 – Detecting the block states not saved in meta data does not work
- MC-109659 – The observer only detected upgrade top of the door if opened/closed with energy (button, lever, etc.), but not with your hand.
- MC-109799 – Observer don't power when update and push by piston at the same time
- MC-110566 – Failed /scoreboard players operation can
- MC-112693 – Scoreboard team colors use raw § formatting
- MC-…992 – Right clicking a command block minecart opens GUI and uses held item
- MC-113347 – Rails rotate when moved
- MC-…32 – Bed particles cause Z-fighting
- MC-117933 – /clone command
- MC-118408 – Torches and redstone torches cannot be placed on top of a Jack o'Lantern but can be placed on pumpkin
- and /replaceitem, but can be used in /setblock, /fill, /execute detect and /testforblock
- MC-122085 – Generating server icon leaks encoded data buffer
- MC-123708 – clearCustomName() and hasDisplayName() inconsistent
Private issues
- Oct 27, 2016: McNubberson posted a message on Snapshot 16w43a Ready For Testing! (Posted in: News)
I was happy to see the fishing stuff fixed. I do a lot of it in Minecraft, and assumed it was meant to be in there, because it's been in the official releases. I am sure I am not the only one; since this took so long, surely it wasn't because they couldn't find the bugs.
Bam back to fishing!
Also, happy about the redstone, everything should no longer be broken.
The first step to utilizing the logging module is to import it into our application. We can do this using the following line of code:

import logging
Next, we can obtain our logger object using the getLogger(name) method. This method takes in an optional, single input parameter called name that represents the name of the logger. Calling getLogger() with the same name value returns the same logger object. If we give no name, the root logger is returned.
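To make this identity guarantee concrete, here is a tiny demo (the logger name "app" is just an arbitrary example, not part of the exercise):

```python
import logging

a = logging.getLogger("app")
b = logging.getLogger("app")
print(a is b)  # True: the same name returns the very same logger object

root = logging.getLogger()  # no name given
print(root.name)            # the root logger, whose name is "root"
```

Because loggers are shared this way, any module that calls getLogger with the same name configures and uses one common object.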
We recommend using the built-in Python variable __name__ for this value since it will return the current module's name. This reduces the chance of accidentally reusing a logger name and retrieving the wrong logger object.
Here, we set our logger object to a variable called logger below. We will use it later on in this exercise.

logger = logging.getLogger(__name__)
Next, we need to inform the logger where we want our logged messages to go. To do this, we will use something called a handler. For Python, we will use the logging module's StreamHandler class. This class handles where our logged messages will be output. The StreamHandler class takes in an optional, single input called stream. If we do not supply this value, the logs are directed to sys.stderr. Since we want our output directed to the console, we should supply sys.stdout as the stream value. Note that we must import the sys library to reference stdout as the stream value.

import sys
stream_handler = logging.StreamHandler(sys.stdout)
Finally, we can now add our stream handler object to the logger. The
logging module provides a method called
addHandler(hdlr) that adds a specific handler to the logger object. The
hdlr input represents the handler object to add, which in our example is the
StreamHandler object.
logger.addHandler(stream_handler)
Instructions
Import the
logging module and the
sys module into our script.py file.
Create a new logger by setting it to a variable called
logger. The name of the logger should be the module’s name.
Add a handler to the logger to direct the logged messages to the console. First, create a
StreamHandler object called
stream_handler. Then, add
stream_handler to your
logger object.
Note that no output will show in the console just yet until we actually log a message. | https://www.codecademy.com/courses/seds-software-engineering-in-python-i/lessons/python-logging/exercises/creating-a-logger | CC-MAIN-2022-40 | refinedweb | 385 | 67.04 |
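Putting the three steps together, a minimal script.py might look like this (a warning-level message is used here only because the default threshold already lets warnings through; levels are covered later):

```python
import logging
import sys

# Obtain a module-level logger; __name__ keeps the name unique per module
logger = logging.getLogger(__name__)

# Handler that sends log records to the console (stdout) instead of
# the default sys.stderr
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)

# The first visible output: a WARNING passes the default threshold
logger.warning("Something might be wrong")
```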
Continuing from the previous section, Exception handling in C++ ...
Catalog
1. What happens when an exception is thrown in the main() function
2. What happens when an exception is thrown in a destructor
3. Exception specification of functions
4. Analysis of Dynamic Memory Request Results
5. New usage of new keywords
1. What happens when an exception is thrown in the main() function
As you know from the previous section, Logical analysis of throw exception, when an exception is thrown it propagates up the function call stack. If the exception is caught during that process, the program runs normally. But what happens if the exception is still not caught in the main() function? (The program crashes, though the exact result varies slightly depending on the compiler.)
Throw an exception in main() function
#include <iostream>
#include <cstdlib>

using namespace std;

class Test
{
public:
    Test()
    {
        cout << "Test()" << endl;
    }

    ~Test()
    {
        cout << "~Test()" << endl;
    }
};

int main()
{
    cout << "main() begin..." << endl;

    static Test t;

    throw 1;

    cout << "main() end..." << endl;

    return 0;
}
Any way to speed up image upload to Dropbox (or alternative)?
Hi --
I'm trying to automate some data collection across various app-enabled activities (e.g. meditation) to a central Google Sheets spreadsheet, using Dropbox to collect any associated images. This is implemented via a Pythonista app triggered via the share sheet. For example, after a meditation, I share the final summary view with the script, and it logs some details about the meditation to Google Forms (which ends up in Google Sheets) and saves any image passed to the share sheet to Dropbox. A URL to the uploaded image is included in the Google Forms submission.
My challenge is that the Dropbox image upload is slow enough to really disrupt my flow, so I'm trying to see if there is an obvious way to speed it up. In particular, it looks like I have to explicitly write out the image to a BytesIO stream, even if it is already in the desired format (PNG) -- see uploader code below. Is there a faster way to do this? For example, is there a way to perform these operations in parallel / async, i.e. grab the shared link while the image uploads in the background?
Thanks in advance!
def upload_image_to_dropbox(image, pathname_in_dropbox):
    with io.BytesIO() as output:
        # this seems necessary even if image.format is already "PNG"
        image.save(output, format="PNG")
        contents = output.getvalue()
    dbfile = dbx.files_upload(contents, pathname_in_dropbox,
                              dropbox.files.WriteMode.add, mute=True)  # takes a long time
    dbfile_url = dbx.sharing_create_shared_link(pathname_in_dropbox,
                                                short_url=False, pending_upload=None)
    link_to_image = dbfile_url.url.replace("?dl=0", "?raw=1")
    return link_to_image
@felciano, have not seen your whole code, but I would expect that you can skip saving the data, especially if it is already PNG, by using
appex.get_image_data() and giving that to the upload. I expect this to potentially have an even larger impact, since in the background the UIImage-to-PIL conversion is also not necessary.
If your images are actually HEIC, I would suggest the following, again to skip the PIL conversion:
img = appex.get_image(image_type='ui')
content = img.to_png()
As for the actual upload, speed is dictated by the image size, network connection (with upload speeds being typically slower than downloads) and dropbox rate limiting. Some ideas:
- If you do not need the full resolution image, resize before upload.
- Give the whole upload/url retrieval process to another thread to complete. (But not sure how well this works with an appex.)
- Store the images and possible other info in a directory, and only trigger uploading them all at the end of your flow. Then you can also use threads or asyncio to send them in parallel, if supported by the dropbox API.
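A rough sketch of the second idea above (hand the upload to another thread). All names are made up for illustration, and a dummy function stands in for the real Dropbox upload/shared-link calls:

```python
import threading

def upload_and_link(contents, pathname, results):
    # Placeholder for dbx.files_upload(...) followed by
    # dbx.sharing_create_shared_link(...); stores the shared URL.
    results[pathname] = "https://example.com" + pathname

results = {}
worker = threading.Thread(
    target=upload_and_link,
    args=(b"png-bytes", "/img/summary.png", results),
)
worker.start()

# ...submit the quick Google Forms entry here while the upload runs...

worker.join()  # wait only at the very end, when the link is needed
link = results["/img/summary.png"]
```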
Do you really need png (versus jpg)? Back when I was experimenting with real-time display of images, I found that saving PIL images as png (even to bytesio) was something like 50x slower than full-quality jpg. Jpg will be much smaller than png too, and since your bottleneck is likely the network (or Dropbox API rate limiting if you are trying to upload tons of files), smaller size will help too.
I'd suggest adding some logging calls at key points in your code to see how long the different ops take.
Also, if you are traversing through a ui.Image at all, and doing lots of images, you will need to wrap your code in a
with objc_util.autoreleasepool():
    # code to generate images here
That will actually slow things down ever so slightly, but keeps you from running out of memory. Be sure you close your bytesio objects, too.
Thanks for the suggestions @mikael. I actually started playing around with
appex and
get_image_data, but was finding inconsistent data values there. In particular,
get_image_data often had a Boolean as value rather than actual data, whereas
get_image contained a
PIL.PngImagePlugin.PngImageFile. This seems different than the documented behavior from the appex documentation, but I can't tell if that is a Pythonista issue or whether Pythonista just passes through the data from the calling app, in which case that is where the issue might lie.
The source of the image (i.e. the app from which I’m triggering the share sheet) is CalmApp, in case that matters.
Forgot to reply to close this out: using the
get_image version as suggested by @mikael did the trick, at least in terms of saving a second or so in runtime (which is noticeable). Thanks all!
Subject: Re: [boost] [1.44] Beta progress?
From: Robert Ramey (ramey_at_[hidden])
Date: 2010-07-26 01:56:36
Matthias Troyer wrote:
> On 25 Jul 2010, at 10:28, Robert Ramey wrote:
>
>> Matthias Troyer wrote:
>>>
>>> Then please demonstrate how to implement an archive that actually
>>> does anything sensible and supports pointers, etc. without depending
>>> on what you call implementation details. The only way is
>>> implementing all the functionality from scratch.
>>
>> Here's what to do:
>>
>> a) derive from common archive instead of binary_archive.
>
> I have one more question in addition to my previous comment:
>
> common_oarchive is in namespace archive::detail while
> basic_binary_oarchive is in the top namespace archive.
> Do I understand you correctly that deriving from
> archive::detail::common_oarchive is safe and not considered depending
> on implementation details, while deriving from
> archive::basic_binary_oarchive is not?
>
> I can easily change all the Boost.MPI archives to use
> archive::detail::common_oarchive where they now use
> archive::basic_binary_oarchive (although this will not solve the
> issue we have right now).
I can see where this would be confusing. Let me indicate what I mean
by a few of the terms being used.
archive concept - minimal concept which states the function interface
that an archive class has to support. Doesn't say anything specific
about the semantics or functionality.
From this I made some models of this concept. These models
implemented behavior that was deemed useful. I factored the
implementation of common functionality in a few different places:
base_archive - library code
common_archive - library and types needed to implement the
desired functionality.
interface -
iserialization - common code for user's types.
Now when I thought of "user", I was thinking of someone
who just an archive already built. I didn't really think of
a person making a new archive as a "user". Truth is, I was
just factoring code.
I put all this stuff in "detail" namespace because I didn't think it
was interesting to users. And of course it isn't documented
like other stuff is and one might change it because after all
it's a "detail".
And the stuff in "detail" has it's own "public" and
"implemention detail" aspects. For an archive developer
who wants to develope an archive with functionality
similar to the existing ones, it's not a detail. He wants to know
that the public functions aren't going to change.
As I've said - I just never thought about this. On
the other hand, I don't think the "detail" interface
has changed very much (if at all) over time. I can't
honestly say I know this - because as I've said -
I never thought about it. I suspect that it hasn't
changed much because we haven't had much
if any breakage originating in this area.
So - back to our problem.
I had thought the the source of the issue was coupling
mpi_archive/skeleton to binary_archive implementation. That's
why I thought deriving from common_archive would
If I'm wrong about the above then deriving
mpi_archive from common_archive won't help - though
it would probably be a good idea.
If the only problem is that version_type eliminated
some operations that mpi_archive depended on all
integer types having (STRONG_TYPEDEF),
this can also be worked out one way or the other
without too much difficulty.
If all the above is true - this shouldn't be so hard
to address. Given that we've had so much
difficulty with this, it's possible that one of the
above is not true.
Finally, you've indicated that an archive writer needs
to know the list of internal types in the archive
and that they'll never change. This would suggest
to me that perhaps a separate section in the
documentation describing the "common
archive implementation" (text_archive, etc)distinct from
other "sample implementations" (trivial archive, simple_log_archive
etc.)
a description of the functionality of these archives.
Basically this supplies the missing "semantics" left
undefined by the archive concept. Basically
it would list this functionaly, pointers, tracking
versioning, etc.
common - this implemented special types
used internally and their interface. We can
discuss whether these types should have the
rich interface permited by STRONG_TYPEDEF
or a narrow one which is useful for catching
coding errors.
What is still missing here?
Robert Ramey
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2010/07/169299.php | CC-MAIN-2021-43 | refinedweb | 732 | 65.42 |
After explaining how to use U1DB to store simple bits of information in Ubuntu SDK apps, and saying that that caters for 80% of my need for data storage, I should explain the other thing I do, which is to store dynamic data; documents created from user data.
To understand more about how to retrieve data from U1DB through Indexes and Queries, you can read the core U1DB documentation. Its code examples are for the Python implementation, and QML works differently for creating documents (as we’ve seen, QML is declarative; there’s no code to write, you just describe a document and it all works), but indexing and querying documents have the same underlying philosophy regardless of implementation, and the core docs explain what an index is, what a query is, and how they work.
First, a simple code example.
import QtQuick 2.0
import Ubuntu.Components 0.1
import U1db 1.0 as U1db
import Ubuntu.Components.ListItems 0.1 as ListItem

MainView {
    width: units.gu(40)
    height: units.gu(71)

    /* ----------------------------------------------------
       Set up the U1DB database
       Declare a named document
       ---------------------------------------------------- */
    U1db.Database { id: db; path: "simpleu1dbdemo2.u1db" }
    U1db.Index {
        database: db
        id: by_type
        /* You have to specify in the index all fields you want to retrieve
           The query should return the whole document, not just indexed fields */
        expression: ["things.type", "things.placename"]
    }
    U1db.Query {
        id: places
        index: by_type
        query: ["*", "*"]
    }

    Page {
        title: "U1DB ListModel"

        Column {
            id: col
            width: parent.width
            spacing: units.gu(1)

            Label {
                width: parent.width
                text: "Enter a place to add to list"
                horizontalAlignment: Text.AlignHCenter
            }
            Rectangle {
                id: ta
                width: parent.width - units.gu(2)
                color: "white"
                height: inp.height * 2
                anchors.horizontalCenter: parent.horizontalCenter
                radius: 5

                TextInput {
                    id: inp
                    width: parent.width - units.gu(2)
                    anchors.centerIn: parent
                    onAccepted: inp.adddoc()
                    function adddoc() {
                        /* Indexes do not work on top-level fields.
                           So put everything in the document in a dict called "things"
                           so that they're not top-level fields any more. */
                        db.putDoc({things: {type: "place", placename: inp.text}})
                        inp.text = ""
                    }
                }
            }
            Button {
                text: "Add"
                width: ta.width
                anchors.horizontalCenter: parent.horizontalCenter
                onClicked: inp.adddoc()
            }
        }

        ListView {
            anchors.top: col.bottom
            anchors.bottom: parent.bottom
            width: parent.width
            model: places
            clip: true
            delegate: ListItem.Standard {
                text: model.contents.placename
                control: Button {
                    text: "x"
                    width: units.gu(4)
                    onClicked: {
                        /* To delete a document, you currently have to set its
                           contents to empty string. There will be db.delete_doc
                           eventually. */
                        db.putDoc("", model.docId);
                    }
                }
            }
        }
    }
}
You type in a place name and say “Add”; it gets added to the list. The list is stored in U1DB, so it persists; close the app and open it again and you still have your place names. Click a place to delete it.
This covers almost all the remaining stuff that I need to do with data storage. There are a few outstanding bugs still with using U1DB from QML, which I’ve annotated in the source above, and at the moment you have to work around those bugs by doing things that you ought not to have to; once they’re fixed, this becomes more intuitive to use. | http://www.kryogenix.org/days/2014/01/23/using-u1db-in-listviews-in-ubuntu-sdk-apps/ | CC-MAIN-2016-18 | refinedweb | 516 | 59.6 |
Quick Guide
From Nemerle Homepage
This is a small guide to nearly all Nemerle features, especially for people coming from C#:
Variables
- mutable a = 5;: mutable variables can be changed later
- def b = 4;: normal variables cannot be changed
In all variables, type inference works
- def c : float = 4;: a type annotation
Lists
- def nil = [];: the empty list is []
- def numbers = [1, 2, 3];: generate a list with those values
- def more_numbers = 4 :: numbers; : :: adds a new item to the list
List comprehensions
- $[ (x, y) | x in list1, y in list2, x < y]: get all pairs on the lists for which the condition is true.
- $[1, 3 .. 8]: ranges, from 1 to 8 step 2 (3-1)
- $[1 .. 5]: range from 1 to 5 step 1
- $[ (x, y) | x in [1 .. 3], y in [2, 4 .. 10] ]: generate all pairs
Useful methods on lists
- Length, Head, Tail, Last, Nth
- FirstN: return first N elements of the list
- ChopFirstN: return the list without the first N elements
- LastN: return the last N elements
- Reverse
- Remove: removes an element
- Contains
- Iter (f : 'a -> void): executes the function in all elements
- Map (f : 'a -> 'b): returns a new list['b], executing the function in all elements of the list['a]
- Group (f : 'a * 'a -> int): return a list of lists, grouping the elements whose function returns the same value
- FoldLeft, FoldRight: reduces the list to a single value by applying the function recursively, starting from an initial accumulator value and traversing the list left-to-right or right-to-left, respectively
- ForAll (f : 'a -> bool): returns true if all elements return true
- Exists (f : 'a -> bool): returns true if at least one application of the function in the elements returns true
- Find (pred : 'a -> bool): finds the first element whose predicate is true
- Filter (pred : 'a -> bool): returns a new list containing all elements whose predicate is true
- Sort (cmp : 'a * 'a -> int): sorts the list based on the method. The reason underlying the fact that the function returns an int is that CompareTo functions in .NET return an int
- RemoveDuplicates
- ToArray
Most of the BCL collection types have extended counterparts in Nemerle library, that add methods such as Iter, Map or Fold, for example, Hashtable (that extends Dictionary) or LinkedList.
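A quick sketch of a few of these methods in use (illustrative only; exact overloads may differ between library versions):

```nemerle
def xs = [4, 1, 3, 2];
def doubled = xs.Map (x => x * 2);                  // [8, 2, 6, 4]
def evens   = xs.Filter (x => x % 2 == 0);          // [4, 2]
def total   = xs.FoldLeft (0, (x, acc) => x + acc); // 10
def sorted  = xs.Sort ((a, b) => a.CompareTo (b));  // [1, 2, 3, 4]
```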
Arrays
- def a = array(3);: specifies the number of elements
- def b = array[1, 2, 3]: specifies the elements
Tuples
- def t = (1, "one");: a tuple is a set of values which no name to recognise them
- def fst = t[0];: use 0-based index to get the items
Nemerle.Utility.Pair
This class contains three methods that work with pairs (tuples of two elements):
- First and Second retrieve the first or second element of the tuple, respectively
- Swap exchanges both elements of the pair
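For example (a sketch, assuming the Nemerle.Utility namespace is opened):

```nemerle
def p = (1, "one");
def f = Pair.First (p);   // 1
def s = Pair.Second (p);  // "one"
def w = Pair.Swap (p);    // ("one", 1)
```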
Decisions
- when (condition) code: execute code if condition is true
- unless (condition) code: execute code if condition is false
- if (condition) code1 else code2: execute code1 if condition is true, code2 if it is false
Loops
- for (mutable i = 0; i < 10; i++) code: for loop, as in C
- foreach (i in $[0..9]) code like the above, using range
- foreach (i in $[9,8..0]) code reverse order
- while (condition) code: execute code while condition is true
- do code while (condition): the same as above, but checks the condition at the end of the code, so it is always run at least once
- foreach (x in list) code: execute the code for every member x of the enumerable collection list
- foreach (x when x != null in list) code: like the above but only for not null members
- repeat (10) code: repeats the code the specified number of times
Exceptions
- throw ArgumentException();: throws a new exception of the desired type
- Syntax for try-catch-finally handler resembles the one from C#, although catch handlers use a pattern matching-like syntax:
try {
  code
}
catch {
  | e is ArgumentNullException => ...
  | e is OverflowException => ...
  | e is MyException when e.Reason == Reason.Warn => ...
  | _ => ...
}
finally {
  // Finally code is always executed,
  // no matter whether an exception was thrown or not
}
Variants
- Create a variant with empty options and options with extra information
variant Volume { | Max | Min | Other { v : int } }
- def v1 = Volume.Max ();: an empty constructor is used for empty options
- def v2 = Volume.Other (5);: the constructor for non-empty elements gets all fields as parameters, in the order they were written in code
- You need to include [XmlSerializable] on top of the variant to make it serializable via XML
Enums
As in C#, are just a good face for ints:
enum Seasons { | Spring | Autumn | Winter | Summer }
Nullable types
Value-types (structs) cannot be null (that's a platform requirement). However, sometimes you need nullable integers or booleans.
- def t : int? = 3;: declare a new nullable int with a value
- def t : int? = null;: declare a new nullable int with null value
- when (t != null) {...}: boolean expressions are overloaded to check for values on nullable types. This is the same as saying when (t.HasValue) {...}
- a = b ?? -1;: a gets the wrapped value of b if is isn't null. Otherwise, it gets the value after the two question marks
Pattern Matching
Literal matching
- Numbers:
match (value) { | 1 => "one" | 2 => "two" | _ => "more" }
- Strings:
match (value) { | "one" => 1 | "two" => 2 | _ => 0 }
- Enums:
match (value) { | Seasons.Spring | Seasons.Summer => "hot" | _ => "cold" }
List matching
match (some_list) {
  | [42, 42] => "two forty-two"
  | [42, _] => "forty-two on first position of two-element list"
  | [_, 42] => "forty-two on second position of two-element list"
  | 42 :: _ => "forty-two on first position"
  | _ :: 42 :: _ => "forty-two on second position"
  | [] => "an empty list!"
  | _ => "another list"
}
Variable pattern
Binds the variable names:
def display (l) {
  match (l) {
    | head :: tail =>
      Write ($ "$head, ");
      display (tail)
    | [] => WriteLine ("end")
  }
}
Tuple pattern
match (tuple) {
  | ( 42, _ ) => "42 on first position"
  | ( _, 42 ) => "42 on second position"
  | ( x, y ) => $"( $x, $y )"
}
Type check
Checks if the value is of given type, binding the new value with the new type
def check (o : object) {
  match (o) {
    | i is int => $"An int: $i"
    | s is string => $"A string: $(s.ToUpper())"
    | _ => "Object of another type"
  }
}
Record pattern
Binds on field or properties
def check_class (c) {
  match (c) {
    | MyClass where (foo = 0) => true
    | _ => false
  }
}
If the type is known to the compiler, the where clause is not needed
def check_class (c) {
  match (c) {
    | (foo = 0) => true
    | _ => false
  }
}
as pattern
Binds a value matched with an identifier
variant Foo {
  | A { x : int; mutable y : string; }
  | B
}

match (some_foo ()) {
  | A (3, _) as a => a.y = "three";
  | _ => {}
}
Type hint pattern
Used to explicitly declare types where the compiler cannot infer it
def foo (l) {
  | (s : string) :: _ => s [0]
  | [] => '?'
}
with clause
Used to specify a default value in cases where we need to match both a small and a big structure with the same code.
def foo (_) {
  | [x] with y = 3
  | [x, y] => x * y
  | _ => 42
}
Regular Expressions match
regexp match (str) {
  | "a+.*" => printf ("a\n");
  | @"(?<num : int>\d+)-\w+" => printf ("%d\n", num + 3);
  | "(?<name>(Ala|Kasia))? ma kota" =>
    match (name) {
      | Some (n) => printf ("%s\n", n)
      | None => printf ("noname?\n")
    }
  | _ => printf ("default\n");
}
you must add a reference to Nemerle.Text to use this functionality (using Nemerle.Text)
Yield
- Works as in C#, used for enumerating sequences:
Range (from : int, to : int) : IEnumerable[int] {
  for (mutable i = from; i <= to; i++)
    yield i;
}
Imperative programming and Blocks
To use return, break and continue we need to open the Nemerle.Imperative namespace:
- return x;: cuts the execution of the function, method or property, stablishing x as the return value
- continue;: on a loop, continues with the next iteration
- break;: on a loop, cuts the execution of the loop and continues with the following code
Blocks are a set of instructions, preceded with an identifier and a colon. The result of the block is the last value computed, except if the name of the block is used,along with the return value, to jump out the block.
def x = foo: {
  when (some_cond)
    foo (3); // if some_cond is true, the block will return
  qux ();
  42         // else it will return 42
}
Functions
- Functions can be declared inside methods, and type inference works for them:
public Greet (people : list[string]) : void {
  def say_hello (s) {
    System.Console.WriteLine ($"Hello $s");
  }
  foreach (person in people)
    say_hello (person);
}
- compute (f : int * int -> int, x : int, y : int) { ... }: functions can be passes as parameters, whose types are divided by * and its return type is told after ->
- compute ( fun (x, y) { x + y }, 3, 4): anonymous functions can be created inline, just preceding them with fun and its list of parameters
- compute ( (x, y) => x + y, 3, 4): anonymous functions have multiple forms
- def addFive = compute (fun (x, y) { x + y }, _, 5): partial application, substitutes one or more parameters, then yielding another function with a smaller set of parameters
- def addFive = _ + 5;: partial application substituting the parameter with _
- def name = _.Name;: you can also use this type of partial application to access members
- lambda x -> Console.WriteLine (x): an easier construct to create a function with just one parameter
String formatting
- def s = $"The number is $i";: insert the value of the variable i where $i is placed
- def s = $"$x + $y = $(x+y)";: $(...) can be used to make calculations or access members
print Functions
- print (value), sprint, fprint: substitute variable names with $ notation in the string and write the result to the Console, a string, or a TextWriter, respectively
- printf (value), sprintf, fprintf: as above, but with formatting
- scanf (format), sscanf, fscanf: returns a string extracted from console, string or TextReader that keeps the specified format
Type conversions
- def s = n :> int;: cast may fail with an InvalidCastException
- def s = n : int;: a safe cast that cannot fail; only allowed for conversions to supertypes
Namespaces
- namespace NS { ... }: declares a new namespace
- using System;: open the namespace, that is, makes all members inside it not to requiere full qualifying. It also opens namespace inside it (different with C#)
- using System.Console;: open the type, making its methods visible
- using C = System.Console;: create an alias to refer the type or namespace
Classes and Modules
- class Example { ... }: creates a class Example
- module Example { ... }: creates a module Example, that is, a class with all members static
Interfaces
Defines a set of public methods an adhering class must implement
interface IExample {
  Method () : void;
}

class Example : IExample {
  public Method () : void { ... }
}
Accessibility
- public: everyone can access
- internal: only classes in the current assembly (DLL or EXE) can access
- private only current type can access
- protected: access is limited to current type and its derived types
- protected internal means protected or internal - access limited to derived types or types in the current assembly
Modifiers
- static: no instance is needed for accessing
- mutable: if not set for field, they are read-only
- volatile: only for fields, it means that the field has to be always read from the memory, and written instantly. Useful for multithreaded programs
- extern: used on methods, along with DllImport attributes to call out to native code
- partial: only on type definitions, it means that a given type definition is split across several files
Constructors
- Take this as its name:
class Example {
  mutable Name : string;

  public this (name : string) {
    Name = name;
  }
}
- def n = Example ("serras");: no new keyword is used to create new objects
Static Constructors
Executed once per type. No parameters.
class Example { static public this() { ... } }
[Record] macro
Generates a constructor which assigns a value to every field:
[Record]
class Point {
  public x : int;
  public y : int;
}
is the same as:
class Point {
  public x : int;
  public y : int;

  public this (x : int, y : int) {
    this.x = x;
    this.y = y
  }
}
Inheritance
- class Human : Mammal { ... }: class Human inherits from Mammal, or implements Mammal interface
Modifiers
- abstract: the method contains no actual implementation, that must be provided in child classes
- override: redefinition of a virtual or abstract method
- virtual: the most derived method will always be called
- new: allows name redefinition in nested or derived classes
- sealed: no derivation or redefinition is possible in derived classes
Parameters
- method (x : int, y : int, z : bool = false) { ... }: default parameters
- method (i : ref int) { ... }: passes the parameter by reference, that is, changing the actual value
- method (i : out int) { ... }: specifies an out parameter, used for returning values
- Values passed as ref or out parameters must be decorated with the ref or out keyword
- Write (method (3, z = true, y = 1));: parameter names can be used, after unnamed ones
Properties
public Property : string {
  get { property }
  set { property = value }
}
Property Accessors
- [Accessor (Sum)] mutable sum : int;: generate a public property with name Sum, getting the value from the field sum, with just a getter
- [Accessor (Sum, flags=Internal)] mutable sum : int;: change the accessibility
- [Accessor (Sum, flags=WantSetter)] mutable sum : int;: generate both getter and setter
- [Accessor] mutable sum_of_sam : int;: property name used, capitalized and underscores removed : SumOfSam
Flag Accessors
For setting individual bits of enumerations fields, making boolean propeties:
[System.Flags]
enum States {
  | Working  = 0x0001
  | Married  = 0x0002
  | Graduate = 0x0004
}

[FlagAccessor (Working, Married, Graduate, flags=WantSetter)]
mutable state : States;
Indexers
class Table {
  public Item [row : int, column : int] {
    get { ... }
    set { ... }
  }
}
Operator Overloading
[Record]
class Vector {
  [Accessor] x : double;
  [Accessor] y : double;
  [Accessor] z : double;

  // + operator
  public static @+ (v1 : Vector, v2 : Vector) : Vector {
    Vector (v1.X + v2.X, v1.Y + v2.Y, v1.Z + v2.Z)
  }

  // implicit cast operator
  public static @: (p : Point) : Vector {
    Vector (p.X, p.Y, p.Z)
  }

  // explicit cast operator
  public static @:> (p : double) : Vector {
    Vector (p, 0.0, 0.0)
  }
}
The : operator will be used by the compiler to automatically convert one value to another when needed, for example when passing an object to a method expecting a different type. The cast :> must be explicitly written by the user, as in def x = A() :> B;
Delegates and Events
- delegate Foo (_ : int, _ : string) : void;: creates a new delegate type
- def f1 = Foo (method);: creates a delegate instance containing a reference to a method or function
- def f2 = Foo ( fun (i, s) { ... } );: creates a delegate containing an anonymous function
- event MyEvent : Foo;: creates an event field of delegate type Foo
- Foo += method;: adds a method to the delegate
- Foo -= method;: remove a method from the delegate
- Foo (2, "two");: invokes a delegate, that is, calls all the methods in it, in the order they were added, passing these parameters
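Putting the pieces together: a hypothetical Button class raising an event (all names here are made up for illustration):

```nemerle
delegate Notify (msg : string) : void;

class Button {
  public event Clicked : Notify;

  public Press () : void {
    // raise the event if anyone subscribed
    when (Clicked != null)
      Clicked ("pressed");
  }
}

def b = Button ();
b.Clicked += fun (msg) { System.Console.WriteLine (msg) };
b.Press (); // prints "pressed"
```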
Genericity
- A generic type:
class Example[T] {
  mutable item : T;
  public foo [Z] () : void { }
}
- Generic functions, variants and methods use the same syntax
- def x = Example[int].foo.[string]();: for rare cases where generic parameter cannot be inferred
Constraints
- class Example['a] where 'a : IComparable['a] { ... }: 'a must implement or inherit the class or interface stated
- class Example['a] where 'a : class {...}: 'a must be a reference type
- class Example['a] where 'a : struct {...}: 'a must be a value type
- class Example['a] where 'a : enum {...}: 'a must be an enumeration type
- class Example['a] where 'a : new () {...}: 'a must implement a parameterless constructor. That's the only way you can create instances of classes defined by type parameters
Co- and contra-variance
- class Example[+T] { ... }: makes the argument type covariant, that is, you can assign, to a variable of type Example[T] an instance of Example[U] if U subclasses T (for example, you could assign an instance of Example[string] to an Example[object] variable). Covariant type parameters can only be used as return types.
- class Example[-T] { ... }: makes the argument type contravariant, that is, you can assign, to a variable of type Example[T] an instance of Example[U] if T subclasses U (for example, you could assign an instance of Example[object] to an Example[string] variable). Contravariant type parameters can only be used as argument types or in generic interfaces.
Extension methods
- You can declare an extension method by adding this to the first parameter:
namespace ExtensionExample {
  class ExtensionClass {
    public static PlusOne (this i : int) { i + 1 }
  }
}
- You need to open the namespace to use the extension method as member methods:
using ExtensionExample;

def n = 3;
def m = n.PlusOne (); // that will call the extension method
Design by contract
All types needed are included in the Nemerle.Assertions namespace. By default, if contract is violated a Nemerle.AssertionException is thrown. You can change it by means of otherwise
getfoo (i : int) : int
  requires i >= 0 && i < 5
  otherwise throw System.ArgumentOutOfRangeException ()
{ ... }
Preconditions
- On a method:
class String { public Substring (startIdx : int) : string requires startIdx >= 0 && startIdx <= this.Length { ... } }
- On a parameter:
ConnectTrees (requires (!tree1.Cyclic ()) tree1 : Graph, requires (!tree2.Cyclic ()) tree2 : Graph, e : Edge) : Graph { ... }
Postconditions
public Clear () : void ensures this.IsEmpty { ... }
Class invariants
- Assign class invariants:
class Vector [T] invariant position >= 0 && position <= arr.Length { ... }
- Use expose to change the invariants
expose (this) { x += 3; y += 3; }
Lazy evaluation
- def l = lazy (MethodWithBigCost ());: declares a variable whose value will be retrieved only when needed
- method ([Lazy] x : int, y : int) { ... }: the parameter will only be fetched in rare cases
- The field Next will only be evaluated when requested, because or its LazyValue type:
class InfList { public Val : int; public Next : LazyValue [InfList]; public this (v : int) { Val = v; Next = lazy (InfList (v + 1)); } }
Late Binding
Late expressions are executed dynamically at runtime, which makes it perfect for uses such as COM. No strict checks are done, so you must take care of calling the correct method on the correct instance and so on.
- late (expr) or late expr: executes a valid Nemerle expression with late binding.
- nolate (expr) or nolate expr: allows executing a expression as if it wasn't late in a late binding environment.
- late obj.Length: the Length property or field is retrieved at runtime, so you could use it over strings, lists, arrays... without worrying of the type. Of course, if the object does not contain a Length member, an exception will be thrown.
- late o.Name.[1]: calls the default indexer on o.Name. You must be aware that calling o.Name.Chars[1] calls the indexer named Chars on o.Name.
Aliases
Type alias
- type int = System.Int32;: establishes an alias for the System.Int32 type. int can be used anywhere in code with the exact same meaning as System.Int32.
Alias macro
[Alias (F2, F3 ())] public static F1 () : int { System.Random ().Next () }
This code generates a F2 property and a F3 method with no arguments:
public F2 : int { get { System.Random.Next () } } public F3 () : int { System.Random.Next () }
[Alias (Hd, Head2 (), Head3 (l))] public static Head (l : list ['a]) : 'a { match (l) { ... } }
Arguments not specified as treated as this. Method arguments must match the ones from the method being aliased, but their order can be changed. The previous code generates:
public Hd : 'a { get { def l = this; match (l) { ... } } } public Head2 () : int { def l = this; match (l) { ... } } public static Head3 (l) : int { match (l) { ... } }
Operators
- a = b;: assignment. In Nemerle, this operator doesn't return a value, so multiple assignments are not allowed
- Compound assignment is allowed: a += b;
- +: number addition, string concatenation
- -, *, /, %: number substraction, multiplication, division and modulo
- a <-> b;: swap, exchanges values of a and b
- ++, --: adds or substract 1 to the value
- +, -
Logical Operators
- ==, !=: equal, not equal
- x.Equals(y): compare any two objects. x and y may have different types. Types are allowed to provide their own override of the Equals method, by default, it checks by reference in reference types and by value in value types (including enums)
- >, <, >=, <=: greater than, less than, greater or equal to, less or equal to
- &&: short-circuiting and
- ||: short-circuiting or
- !: not
You can also use and, or, not if you open the Nemerle.English namespace
Bit Operators
- &, |, ^: bit-level and, or and xor operations
- %&&, %||, %^^: bit-level and, or and xor, returning true if the result is non-zero
- >>, <<: right and left bitwise shift
Checked and unchecked contexts
- unchecked { ... }: any number operation that goes beyond the limits of the type will silently go into an overflow
- checked { ... }: operations overflowing will throw a OverflowException
Some things inherited from C#
- using (resource) { ... }: the resource must implement the IDisposable interface. This block assures the resource is correctly disposed even though an exception is thrown
- lock (object) { ... }: for multithreaded application. Assures that no other code executes this block if the block object is being used
Logging
All macros are inside the Nemerle.Logging namespace:
- [assembly: LogFunction (DoLog)]: specifies a logging function
- [assembly: LogFunction (DEBUG => log4net_category.Debug, TRACE => log4net_category.Info)]: specifies different logging functions for different compilation flags
- log (VERB, "example", 2): calls the logging function if the compilation flag is set, with those parameters
- whenlogging (VERB) { code }: executes the code only if the compilation flag is set
- [assembly: LogFlag (VERB, true)]: sets the compilation flag for VERB to true
- [assembly: LogCondition (EnableLogging), LogFlag (DEBUG, true)]: add a condition that will be cheked each time the log function is called
- [assembly: LogFormat (PrependFlag)]: prepend logging flag to each message
Assertions
- assert (condition, "message");: if condition is false, a AssertionException will be thrown, with the actual line and column number in the code file
Profiling macros
- [assembly: ProfSetup]: add initial setup for profiling
- [Profile] foo () : int { ...}: tell the profile to include this method
- [ProfDump] Dump () : void { }: each call to this method will show the results of the profiling
SQL macros
- [ConfigureConnection (connectionString, name)]: applied to a class, tells the compiler about the connections used later in other SQL macros
- ExecuteNonQuery ("INSERT INTO employee VALUES ('John', 'Boo')", connection);: executes the query returning no results, via the specified connection
- def count = ExecuteScalar ("SELECT COUNT FROM employee WHERE firstname = $myparm", connection);: retrieves just one result, using the specified connection. See you can use the $ notation to substite variables
- Execute a code for every returned result, binding the column names to variables. Beware that the code is just another parameter, so you need to end parenthesis after them
ExecuteReaderLoop ("SELECT * FROM employee WHERE firstname = $myparm", dbcon, { Nemerle.IO.printf ("Name: %s %s\n", firstname, lastname) });
Concurrency macros
All this macros are in Nemerle.Concurrency namespace
- async { ... }: You can execute any block of code asynchronously:
- Additionally, you can create a method that will always execute asynchronously: async Read (s : string) { ... }
Chords
Chords are sets of methods that only return a value when some exact amount of them are called in some order. This example states very well both the syntax and uses of chords:
class Buffer [T] { [ChordMember] public Put (msg : T) : void; public Get () : T chord { | Put => msg } }
More examples can be found at the SVN repository
Nemerle Standard Library
Collection classes
All in Nemerle.Collections namespace:
- ICollection['a]: extends .NET ICollection by adding contract for mapping, folding, itering...
- Hashtable['a, 'b]: extends System.Collections.Generic.Dictionary[K, V] by adding Fold, Map and Iter methods. Hastable saves items with an unique key
- Heap: saves a list of objects, allowing only to extract the first removing (ExtractFirst) or not removing (Top) it. Like usual, allows Map, Iter and Fold operations.
- LinkedList['a]: extends .NET generic LinkedList, adding some useful, list-like methods
- Queue['a]: like always, extends .NET Queue['a] by adding the useful list-like methods like Iter, Fold, Map... Queues are data structures which only allow to add or remove items at the end or the start of them
- Set['a]: an implementation of mathematical sets. It allows all normal operations on list plus:
- Sum: adds two sets yielding only one replica of each duplicated element
- Substract: returns all elements of the first set that are not in the second one
- Intersect: return the elements that appear on both sets
- Xor: return the elements that appear in one of the sets, but not in both
- Stack['a]: extends .NET Stack, a class which allows to add or remove items only on the top of them
- Tree contains an implentation of Red-Black trees, an useful structure
- RList (Random Access List) is a purely functional data structure
Array and String extensions
This methods live in Nemerle.Utility namespace, in NArray and NString classes, but can be used in arrays and string in code in a normal way. The methods adds functionality à la list for these two types.
Nemerle.IO
Contains helper functions for handling an input stream: ReadIntDigits, ReadRealDigits, ReadString, ReadChar, ConsumeWhiteSpace and CheckInput.
Option
The type option['a] allows to save values that can be null. option.None tells that it has no value. option.Some (value) saves a value. You can also use the nullable types (discussed above), but they are limited to valuetypes. | http://nemerle.org/Quick_Guide | crawl-001 | refinedweb | 4,056 | 54.46 |
How To Convert Image To Matrix Using Python
In this tutorial, we are going to learn how to convert an image to the matrix in Python. Before we get into our problem, basic ideas should be made clear to all.
What is Image Processing In Python
- Image Processing in Python is a technique or method through which data of Image can be retrieved in the form of numbers.
- This is done so because at last, the work we want through the process will be executed with the computers.
- The libraries which are commonly used for this are NUMPY, MATPLOTLIB and PILLOW.
How to implement the Image Processing Technique to our motive
- As we all know that there are various libraries and modules which can be integrated with Python.
- Here, We will be using PILLOW and NUMPY because these libraries are easier to understand and less sophisticated.
Convert Image To Matrix in Python
- Import Image module from PILLOW library of Python as PIL.
- Import array module from NUMPY library of Python.
- These two libraries are for Image extraction from the source file and defining the dimensions of the matrix.
Now, let us code to implement it.
from PIL import Image from numpy import array im_1 = Image.open(r"C:\Users\CHITRANSH PANT\Desktop\New Chrome Logo.jpg") ar = array(im_1) ar
The output from the above code, as follows.
array([[[146, 166, 177], [177, 197, 208], [143, 163, 174], …, [177, 197, 208], [146, 166, 177], [176, 196, 207]], [[176, 196, 207], [178, 198, 209], [176, 196, 207], …, [175, 195, 206], [170, 190, 201], [168, 188, 199]], [[142, 162, 173], [177, 197, 208], [143, 163, 174], …, [177, 197, 208], [142, 162, 173], [176, 196, 207]], …, [[176, 196, 207], [176, 196, 207], [173, 193, 204], …, [176, 196, 207], [177, 197, 208], [173, 193, 204]], [[138, 158, 169], [171, 191, 202], [150, 170, 181], …, [173, 193, 204], [145, 165, 176], [176, 196, 207]], [[177, 197, 208], [179, 199, 210], [176, 196, 207], …, [166, 186, 197], [172, 192, 203], [173, 193, 204]]], dtype=uint8)
Here I am providing you all with the Image so that you can take it as your example.
Also read: | https://www.codespeedy.com/how-to-convert-image-to-matrix-in-python/ | CC-MAIN-2019-43 | refinedweb | 359 | 73.31 |
When working with implicit-encoded dependent function types, such as
scalaz.Unapply and numerous Shapeless operations, you’d frequently
like to acquire instances of those functions to see what types get
calculated for them.
For example,
++ on Shapeless
HLists is driven by
Prepend:
def ++[S <: HList](suffix : S)(implicit prepend : Prepend[L, S]) : prepend.Out = prepend(l, suffix)
So given some
HLists, we can expect to be able to combine them in a
couple ways. First, by using the syntax function above, and then by
acquiring a value of
prepend’s type directly and invoking it, just
as in the body of the above function.
import shapeless._, ops.hlist._ import scalaz._, std.string._, std.tuple._, syntax.applicative._ scala> val ohi = 1 :: "hi" :: HNil ohi: shapeless.::[Int,shapeless.::[String,shapeless.HNil]] = 1 :: hi :: HNil scala> ohi ++ ohi res0: shapeless.::[Int,shapeless.::[String,shapeless.::[Int,shapeless.::[String,shapeless.HNil]]]] = 1 :: hi :: 1 :: hi :: HNil scala> val ohipohi = implicitly[Prepend[String :: Int :: HNil, String :: Int :: HNil]] ohipohi: shapeless.ops.hlist.Prepend[ shapeless.::[String,shapeless.::[Int,shapeless.HNil]], shapeless.::[String,shapeless.::[Int,shapeless.HNil]]] = shapeless.ops.hlist$Prepend$$anon$58@13399e98 scala> ohipohi(ohi, ohi) res3: ohipohi.Out = 1 :: hi :: 1 :: hi :: HNil
Back over in Scalaz, for purposes of an
Applicative instance,
(String, Int) selects its second type parameter. Just as the
To*OpsUnapply functions acquire
Unapply instances to do their
work:
implicit def ToApplicativeOpsUnapply[FA](v: FA)(implicit F0: Unapply[Applicative, FA]) = new ApplicativeOps[F0.M,F0.A](F0(v))(F0.TC)
We can acquire an instance and use it.
scala> val t2ap = implicitly[Unapply[Applicative, (String, Int)]] t2ap: scalaz.Unapply[scalaz.Applicative,(String, Int)] = scalaz.Unapply_0$$anon$13@18214797 scala> t2ap.TC.point(42) res5: t2ap.M[Int] = ("",42)
Now let’s get that first element out of that tuple we got by calling
point.
scala> res5._1 <console>:31: error: value _1 is not a member of t2ap.M[Int] res5._1 ^
Uh, huh? Let’s try adding the
HLists we got from
ohipohi before.
cala> res3 ++ res3 <console>:32: error: could not find implicit value for parameter prepend: shapeless.ops.hlist.Prepend[ohipohi.Out,ohipohi.Out] res3 ++ res3 ^
The clue is in the type report in the above: path-dependent type
members of
t2ap and
ohipohi appear. That wouldn’t be a problem,
normally, as we know what they are, but they’re existential to
Scala.
scala> implicitly[t2ap.M[Int] =:= (String, Int)] <console>:30: error: Cannot prove that t2ap.M[Int] =:= (String, Int). implicitly[t2ap.M[Int] =:= (String, Int)] ^
implicitlyonly gives what you ask for
The explanation lies with the
implicitly calls we made to acquire
the specific dependent functions we wanted to use. Let’s look at the
definition of
implicitly and see if it can enlighten:
def implicitly[T](implicit e: T): T
In other words,
implicitly returns exactly what you asked for,
type-wise. Recall the inferred type of
ohipohi when it was defined:
ohipohi: shapeless.ops.hlist.Prepend[ shapeless.::[String,shapeless.::[Int,shapeless.HNil]], shapeless.::[String,shapeless.::[Int,shapeless.HNil]]]
Not coincidentally, this is the exact type we gave as a type
parameter to
implicitly. What’s important is that
Out, the type
member of
Prepend that determines its result type, is existential in
both cases.
In other words, the rule of
implicitly is “you asked for it, you got
it”.
implicitly
The answer here is to simulate the weird way in which dependent method
types, like
++ and
ToApplicativeOpsUnapply, can pass through extra
type information about their implicit parameters that would otherwise
be lost. We do this by reinventing
implicitly.
The first try is obvious: follow the comment in the
Predef.scala
source and give
implicitly a singleton type result.
def implicitly2[T <: AnyRef](implicit e: T): T with e.type = e scala> val ohipohi2 = implicitly2[Prepend[Int :: String :: HNil, Int :: String :: HNil]] ohipohi2: shapeless.ops.hlist.Prepend[ shapeless.::[Int,shapeless.::[String,shapeless.HNil]], shapeless.::[Int,shapeless.::[String,shapeless.HNil]]] with e.type = shapeless.ops.hlist$Prepend$$anon$58@4abe65da scala> ohipohi2(ohi, ohi) res9: ohipohi2.Out = 1 :: hi :: 1 :: hi :: HNil scala> res9 ++ res9 <console>:33: error: could not find implicit value for parameter prepend: shapeless.ops.hlist.Prepend[ohipohi2.Out,ohipohi2.Out] res9 ++ res9 ^
Not quite good enough.
implicitly
I think it’s strange that the above doesn’t work, but we can deal with it by being a little more specific.
def implicitlyDepFn[T <: DepFn2[_,_]](implicit e: T) : T {type Out = e.Out} = e scala> val ohipohi3 = implicitlyDepFn[Prepend[Int :: String :: HNil, Int :: String :: HNil]] ohipohi3: shapeless.ops.hlist.Prepend[ shapeless.::[Int,shapeless.::[String,shapeless.HNil]], shapeless.::[Int,shapeless.::[String,shapeless.HNil]]]{ type Out = shapeless.::[Int,shapeless.::[String, shapeless.::[Int,shapeless.::[String,shapeless.HNil]]]] } = shapeless.ops.hlist$Prepend$$anon$58@7306572f scala> ohipohi3(ohi, ohi) res11: ohipohi3.Out = 1 :: hi :: 1 :: hi :: HNil scala> res11 ++ res11 res12: shapeless.::[Int,shapeless.::[String,shapeless.::[Int,shapeless.::[String, shapeless.::[Int,shapeless.::[String,shapeless.::[Int,shapeless.::[String, shapeless.HNil]]]]]]]] = 1 :: hi :: 1 :: hi :: 1 :: hi :: 1 :: hi :: HNil
Now that’s more like it. The trick is in the return type of
implicitlyDepFn, which includes the structural refinement
{type Out
= e.Out}.
Again, it’s weird that this structural refinement isn’t subsumed by
the return type
e.type from
implicitly2’s definition, but I’m not
sure it’s wrong, either, given the ephemeral nature of type stability.
Thankfully, most of the evidence for dependent function types in
Shapeless extends from the
DepFn* traits, so you only need one of
these special
implicitly variants for each, rather than one for each
individual dependent function type you wish to acquire instances of in
this way.
Unapply
We can similarly acquire instances of
scalaz.Unapply conveniently.
I believe this function will be supplied with Scalaz 7.0.6, and it is
already included in the 7.1 development branch,
so you will be able to write
Unapply[TC, type] to get instances as
with plain typeclass lookup in Scalaz, but it’s easy enough to define
yourself.
def unap[TC[_[_]], MA](implicit U: Unapply[TC, MA]): U.type { type M[A] = U.M[A] type A = U.A } = U scala> val t2ap2 = unap[Applicative, (String, Int)] t2ap2: U.type{type M[A] = (String, A); type A = Int} = scalaz.Unapply_0$$anon$13@3adb9933 scala> t2ap2.TC.point(42) res13: (String, Int) = ("",42) scala> res13._1 res14: String = ""
This article was tested with Scala 2.10.3, Scalaz 7.0.5, and Shapeless 2.0.0-M1.
Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.Back to blog | https://typelevel.org/blog/2014/01/18/implicitly_existential.html | CC-MAIN-2019-13 | refinedweb | 1,108 | 50.73 |
Using C# attributes to Add Special Behavior to the Unity’s Inspector Window
A C# attribute is a mark that can be placed before a variable within a script to add some special behavior. There are a lot of attributes created to interact with the UnityEngine and UnityEditor namespaces.
This example is focused on some attributes that help us to add features to the inspector window. So when we create a GameObject and attach a script with some variables to it that’s what we see in the inspector window.
Adding C# attributes:
The syntax to add a C# attribute is place the attribute name before the variable declaration between brackets:
[Attribute]public int variable;
- HideInIspector: allows us to hide a public variable from the inspector.
e.g.[HideInInspector]public int score = 0;
- SerializeField: forces Unity to serialize a private field and show it in the inspector, very useful when you want to keep a variable private but allow the designer to modify the value.
e.g.[SerializeField]private int _shield = 3;
- Min: set the min value for a float or int variable.
e.g.[Min(3)]private float _jumpForce = 8f;
- Range: define a range of values for an int or float variable.
e.g.[Range(0,5)]private float _speed = 5f;
- Header: allows to put a header above some fields in the inspector.
e.g.[Header(“Health Settings”)]
Adding all of them we can have a result like this in the inspector.
Unity offers more attributes that you can use for the inspector and others functionality, below I left you a link with all about attributes in the Unity docs. | https://rusbenguzman.medium.com/using-c-attributes-to-add-special-behavior-to-the-unitys-inspector-window-2c0809603eb1?source=post_internal_links---------4---------------------------- | CC-MAIN-2021-31 | refinedweb | 271 | 56.89 |
Last updated on October 21st, 2017 |
Are you sure the type of data you’re sending from your Ionic app is the one that’s storing in your Firebase database?
It turns out that for me, it wasn’t.
I was checking out a revenue tracking app I build for my wife, she organizes events, and I was convinced that all my programming was on point, little did I know that some of the ticket prices were storing in Firebase as
string instead of number =/
That lead me to dig a little deeper into form validation with Ionic 2, and not only that but to look on how to validate the server side with Firebase, to make sure things were storing as I thought.
By the end of this post, you’ll learn how to validate your data with both Ionic Framework and Firebase.
You’ll do this in 3 steps:
- STEP #1: You’ll do client-side validation in the Ionic Form.
- STEP #2: You’ll add an extra layer of safety with TypeScript’s type declaration.
- STEP #3: You’ll validate the data server-side with Firebase
Make sure to get the code directly from Github so you can follow along with the post:.
The first thing you’ll do is create your app and initialize Firebase, that’s out of the scope of this post (mainly because I’m tired of copy/pasting it).
If you don’t know how to do that you can read about it here first.
After your app is ready to roll, I want you to create a provider to handle the data (yeah, we’re using a provider even tho is only one function)
Open your terminal and create it like this:
$ ionic generate provider Firebase
The Ionic CLI v3+ will auto-magically import and initialize the provider for you, so everything is ready to start with the Form validation.
Step #1: Create and Validate the form in Ionic
We’re going to do something simple here. We’re creating a form that will take three inputs, a song’s name, its artist’s name, and the user’s age to make sure the user is over 18yo.
Go to
home.ts and import angular’s form modules:
import { FormBuilder, FormGroup, Validators } from '@angular/forms';
We’ll use
FormBuilder to create the form, so go ahead and inject it into the controller.
public addSongForm:FormGroup; constructor(public navCtrl: NavController, public formBuilder: FormBuilder, public firebaseData: FirebaseProvider) {...}
Now we’re going to be initializing the form and declaring the inputs it’s going to have.
this.addSongForm = formBuilder.group({ songName: ['', Validators.compose([Validators.required, Validators.maxLength(45)])], artistName: ['', Validators.compose([Validators.required, Validators.minLength(2)])], userAge: ['', Validators.compose([Validators.required])] });
Let’s go through a bit of theory about what we just saw.
Angular forms module comes with a lot of pre-built goodies, one of those goodies is the Validators module, that module comes with pre-configured validators like
required,
minLength, and
maxLength.
Without you doing any extra work, the Validators module is checking that the
songName input won’t have more than 45 characters, or that the artist’s name needs at least 2 characters, or that all of the fields are required.
The cool thing tho is that we can kick it up a notch and create our validators
For example, I want to validate that users are over 18 years old, so I’m going to be requiring the user to fill the age field and validating that field is over 18.
I know there are probably 10 better ways to do this, but remember, that’s not the point 😛
We’re going to create a validator that takes the age and makes sure it’s a number greater than or equal to 18.
For that I want you to create a folder called
validators inside your
src folder, and create a file called
age.ts
Open up
age.ts and let’s start creating our validator
The first thing you’ll do in that file is to import the module we’ll need:
import { FormControl } from '@angular/forms';
Then create and export the class, I’m gonna call it
AgeValidator:
export class AgeValidator {...}
And inside the class, we’ll create a method called
isValid:
static isValid(control: FormControl): any {...}
Now inside that method we’ll verify the age:
if (control.value >= 18){ return null; } return {"notOldEnough": true};
If the value it’s evaluating is greater than or equal to 18, it’s going to return null, but if it’s not, it will return that object.
Now that the validator is ready, go ahead and import it in
home.ts:
import { AgeValidator } from '../../validators/age';
And add it to the
userAge field initialization inside the constructor:
this.addSongForm = formBuilder.group({ userAge: ['', Validators.compose([Validators.required, AgeValidator.isValid])] });
The View Form
Now it’s time to go to
home.html and start creating the form, first, delete everything inside the
<ion-content></ion-content> tags.
And create a form there:
<form [formGroup]="addSongForm" (submit)="addSong()" novalidate></form>
The form is going to have a few things:
[formGroup]="addSongForm"is the name (and initialization in the ts file) we’re giving the form.
(submit)="addSong()"is telling Ionic that when this form is submitted it needs to run the
addSong()function.
novalidatetells the browser to turn validation off, that way we handle the validation with the form modules.
After the form is created it’s time to add our first input. First we’ll create the input:
<ion-item> <ion-label stacked>Song Name</ion-label> <ion-input </ion-input> </ion-item>
Then, we’ll show an error message if the form isn’t valid, so right after that input add a paragraph with the error message:
<ion-item <p> The song's name is required to be under 45 characters. </p> </ion-item>
We’re setting up the error message to hide, and only show if:
- The form field isn’t valid.
AND
- The form field is
dirty(this just means the user already added value to it)
Let’s also add a CSS class to show a small red line if the field isn’t valid (you know, nothing says form errors like red lines)
<ion-input [class.invalid]="!addSongForm.controls.songName.valid && addSongForm.controls.songName.dirty"> </ion-input>
That right there adds a CSS class called
invalid if the form isn’t valid and has a value inside.
By the way, that’s one line of CSS
.invalid { border-bottom: 1px solid #FF6153; }
In the end, the entire input should look like this:
<ion-item> <ion-label stacked>Song Name</ion-label> <ion-input </ion-input> </ion-item> <ion-item <p> The song's name is required to be under 45 characters. </p> </ion-item>
Now repeat this process 2 times to get the artist’s name:
<ion-item> <ion-label stacked>Artist Name</ion-label> <ion-input </ion-input> </ion-item> <ion-item <p> The artist's name has to be at least 2 characters long. </p> </ion-item>
And to get the user’s age:
<ion-item> <ion-label stacked>How old are you?</ion-label> <ion-input </ion-input> </ion-item> <ion-item <p> You must be 18 or older to use this app. </p> </ion-item>
And finally you’ll add a submit button:
<button ion-button block Add Song </button>
Let’s take it a step forward and disable the button until it’s valid:
<button ion-button block Add Song </button>
And there you have complete form validation working with an Ionic 2 app.
And for many apps, that’s it, that’s all the validation they offer, and that’s OK, kind of.
But we’re going to be taking things to a different level, and we’re going to work on having our form data validated in multiple ways to avoid weird surprises.
So we’ll add 2 extra layers
Step #2: Add TypeScript type declaration
For the type declarations, you’ll start working on your
FirebaseData provider, to send the data to Firebase.
Go ahead and in the
firebase-data.ts file import Firebase:
import firebase from 'firebase';
And then just create the function to push the new song to the database:
saveSong(songName, artistName, userAge) { return firebase.database().ref('songs') .push({ songName, artistName, userAge }); }
That’s a regular
.push() function to add objects to a Firebase list, one cool thing I learned in ES6 for Everyone is that if the object properties and values have the same name you can just type them once, so:
.push({ songName: songName, artistName: artistName, userAge: userAge });
Becomes:
.push({ songName, artistName, userAge });
And now, add the type declarations, just as easy as:
saveSong(songName: string, artistName: string, userAge: number) {...}
That tells TypeScript that the song’s name has to be a string, the artist’s name has to be a string, and the user’s age needs to be a number.
Now in your
home.ts file just create the
addSong() function to send the data to the provider, it should be something like this:
addSong(){ if (!this.addSongForm.valid){ console.log("Nice try!"); } else { this.firebaseData.saveSong(this.addSongForm.value.songName, this.addSongForm.value.artistName, parseFloat(this.addSongForm.value.userAge)).then( () => { this.addSongForm.reset(); }); } }
If the form isn’t valid, don’t do anything, and if it is, then sends the data to the provider, I’m resetting all the input values after it’s saved.
See? We just added a small extra integrity layer for our data with little work.
And now it’s time to do the server side validation to make things EXTRA safe!
Step #3: Add server side data validation
For example, if somehow you managed to send “23” instead of 23 you’ll be sending a string instead of a number, and Firebase will just store it.
Then if you need to do some operations with that age, you’ll start getting errors.
Firebase provides a very detailed security language, where you can set who can see what.
For example, you can set your songs to be read only by the user who store it, or that only admins can save new songs, etc.
But what a lot of people don’t know, is that the Security Rules also let you write validation rules, where you can specify what kind of data you’re going to be storing (You can get ULTRA specific there)
To edit them, go to security rules:
You can find them in your Firebase console, right next to the database:
Then we’ll start adding the rules, to the simple read/write rules you have there, and you’ll add a validation rule to the songs node:
"songs": { "$songId": { ".validate": "" } }
Inside that validate property we’ll be adding our rules.
Firebase has a few variables ready to use, like
data,
newData, and
now.
We’ll be using
newData since it’s the variable that refers to new data being save in our database.
If you want a list of all the variables and their explanation you can check Firebase official docs.
The first thing we need to make sure is that every new song saved has the three properties, the song’s name, the artist’s name and the user’s age.
For that we’ll use the
.hasChildren() property:
"songs": { "$songId": { ".validate": "newData.hasChildren(['songName', 'artistName', 'userAge'])" } }
In that line, we’re telling the Firebase database that every new song we save needs to have three children, one child called
songName, another one called
artistName and a third one called
userAge.
Let’s start validating the song’s name first, remember that we set 2 rules for that field 1) it has to be a string and 2) it has to be less than 45 characters.
So let’s take care of the first one, making sure it’s a string, for that you’ll append another rule to the validate property:
newData.child('songName').isString()
That will make sure that when a new song is added, its property
songName needs to be a string.
You can also append another rule and tell it to check its length and make sure it’s under 45 characters
newData.child('songName').val().length <= 45
Now let’s do the same with the
artistName property:
newData.child('artistName').isString()
And
newData.child('songName').val().length > 1
And lastly we’ll need to validate the
userAge field:
newData.child('userAge').isNumber() && newData.child('userAge').val() > 17
In the end, the security rules should look like this:
{ "rules": { ".read": true, ".write": true, "songs": { "$songId": { ".validate": "newData.hasChildren(['songName', 'artistName', 'userAge']) && newData.child('songName').isString() && newData.child('songName').val().length <= 45 && newData.child('artistName').isString() && newData.child('songName').val().length > 1 && newData.child('userAge').isNumber() && newData.child('userAge').val() > 17" } } } }
That will make sure that you’re always storing the right data if by any chance you send a string instead of a number for the user’s age, then the
.push() method is going to throw a Permission Denied error.
And there you have it, you now have a complete validation flow, starting from validating your inputs in your app and moving all the way to do server-side validation with Firebase.
Go ahead and have a cookie, you deserve it, you just:
- Used angular form modules to create validation in your form.
- Used TypeScript to add validation layer.
- and used Firebase security rules to do server-side validation. | https://javebratt.com/validate-forms-ionic-firebase/ | CC-MAIN-2018-05 | refinedweb | 2,253 | 59.03 |
Given two strings s1 and s2, where s1 contains wild card characters and s2 is a normal string. Write a function that will return true if both the given strings match
The following wildcard characters are allowed
‘*’ = Matches with zero or more instances of any character or set of characters
Example :
“Pro*ing” will be matched with “Programming”
‘?’ = Matched with any one character
Example :
“Pro?ing” will be matched witth “Proking”
Example
INPUT :
s1 = “Pro?gr*”
s2 = “Programming”
OUTPUT:
TRUE
Algorithm
1. If there are characters after s1 and no characters after ‘*’ in s2 then return false
2. If the string s1 contains ‘?’ or current characters of both strings match, then do a recursive call to the remaining part of s1 and s2 ie, s1+1, s2+1
3. If there is ‘*’, then there are two possibilities
a. we consider current character of s2 ie, charMatching(s1+1, s2)
b. we ignore the current character of s2 ie, charMatching(s1,s2+1)
C++ Program
#include <bits/stdc++.h> using namespace std; bool charMatching(char *s1, char * s2) { // If we reach at the end of both strings, we are done if (*s1 == '\0' && *s2 == '\0') return true; // Make sure that the characters after '*' are present // in s2 string. if (*s1 == '*' && *(s1+1) != '\0' && *s2 == '\0') return false; // If the s1 string contains '?', or current characters // of both strings match if (*s1 == '?' || *s1 == *s2) return charMatching(s1+1, s2+1); // If there is *, then there are two possibilities // We consider current character of s2 string or We ignore current character of s2 string. if (*s1 == '*') return charMatching(s1+1, s2) || charMatching(s1, s2+1); return false; } int main() { char s1[] = "Prog*ing"; char s2[] = "Programming"; if(charMatching(s1, s2)) { cout<<"TRUE"<<endl; } else { cout<<"FALSE"<<endl; } return 0; }
Try It | https://www.tutorialcup.com/interview/string/wildcard-character-matching.htm | CC-MAIN-2021-49 | refinedweb | 298 | 72.26 |
[SOLVED] (Beginner) Very simple XML Parser
Hi! I'm trying to use the SAX parser for XML in Qt. I've got some problems to implement this... It's very simple, only using a command console for the moment:
MyHandler.h :
@#include <QtXml/QXmlDefaultHandler>
class MyHandler : public QXmlDefaultHandler
{
public:
bool readFile(const QString &fileName);
protected:
bool startElement(const QString &namespaceURI,
const QString &localName,
const QString &qName,
const QXmlAttributes &attributes);
bool endElement(const QString &namespaceURI,
const QString &localName,
const QString &qName);
bool characters(const QString &str);
bool fatalError(const QXmlParseException &exception);
};
#endif
@
MyHandler.cpp :
@#include <QtDebug>
#include "MyHandler.h"
bool MyHandler::readFile(const QString &fileName)
{
QFile file(fileName);
QXmlInputSource inputSource(&file);
QXmlSimpleReader reader;
reader.setContentHandler(this);
reader.setErrorHandler(this);
return reader.parse(inputSource);
}
bool MyHandler::startElement(const QString & /* namespaceURI /,
const QString & / localName */,
const QString &qName,
const QXmlAttributes &attributes)
{
qDebug() << "Start of element " << qName;
for (int i=0; i<attributes.length(); i++)
qDebug() << " " << attributes.qName(i) << "=" << attributes.value(i);
return true;
}
bool MyHandler::endElement(const QString & /* namespaceURI /,
const QString & / localName */,
const QString &qName)
{
qDebug() << "End of element " << qName;
return true;
}
bool MyHandler::fatalError(const QXmlParseException &exception)
{
qDebug() << "Parse error at line " << exception.lineNumber()
<< ", " << "column " << exception.columnNumber() << ": ";
return false;
}
@
and main.cpp :
@#include "MyHandler.h"
int main(int argc, char *argv[])
{
MyHandler handler;
handler.readFile("MyFile.xml");
return 0;
}
@
But, I've got 48 errors (!) about "undefined reference in function QXmlContentHandler". Maybe it's a problem with the "include"?
Thanks a lot!
This sounds more like a linker error message.
Including the required header (.h) file is one thing, but you will also have to link your binary against the library file (.lib) that actually implements those functions that are declared in the header file!
Do you link your project against the "QtXml4.lib" library?
Thanks for your reply. It's great. I don't have any QtXml4.lib bit I've got QtXml4.dll in a lib directory.
I've tried in MyProject.pro :
@LIBS += "C:\QtSDK\Desktop\Qt\4.7.4\mingw\lib\QtXml4.dll"
@ but the error is now: File not found
- mlong Moderators
Did you add the line
@
QT += xml
@
to your .pro file?
Nope, you can't link DLL files directly.
There always is a .lib file (called an "import library") that corresponds to the DLL file.
The linker needs the .lib file, the resulting EXE file then needs the .dll at runtime.
[quote author="mlong" date="1336412783"]Did you add the line
@
QT += xml
@
to your .pro file?
[/quote]
That's it! I forgot that. here's my .pro :
@QT += xml
CONFIG += console
HEADERS = MyHandler.h
SOURCES = main.cpp
MyHandler.cpp@
But the error is now: "collect2: ld returned 1 exit status - File not Found" What's missing?
Looks like you told the linker to add the required .lib file, but it's missing from your system.
Is there a "QtXml4.lib" file on your system? Should be in your "Qt\lib" folder...
Thanks for your help! No, it's not here. I've searched the entire drive. I have only the .dll version. where can I find this file?
How did you install Qt?
Everything you need to build Qt apps is included in the Qt SDK download:
Be sure to install the following package:
Development Tools > Desktop Qt > Qt 4.8.1 (Desktop) > MinGW -or- MSVC 2008/2010
(BTW: You shouldn't think about the .lib and .dll files as different "versions". The .lib file is the required import library for the .dll file. The .lib is needed to build the executable, the .dll will be needed when you run it!)
The .lib is in the right directory (4.8.1) now, thanks! I didn't know tha there was this update.
I've got problems for the Path setup, Is it different now?
I've tried:
in User variables :
@PATH : C:\QtSDK\Desktop\Qt\4.8.1\msvc2010\bin;C:\QtSDK\Desktop\Qt\4.7.4\mingw\bin@
and
@QTDIR : C:\QtSDK\Desktop\Qt\4.8.1@
and
@QMAKESPEC : win32-g++@ (don't know whether it's used)
like I've seen:
But I can't build:
@The program has unexpectedly finished.
C:\test\test2\debug\test2.exe exited with code -1073741511@ | https://forum.qt.io/topic/16537/solved-beginner-very-simple-xml-parser | CC-MAIN-2018-30 | refinedweb | 698 | 62.85 |
Solidity Language¶
The Solidity language supported by Solang aims to be compatible with the latest Ethereum Foundation Solidity Compiler, version 0.7 with some caveats.
Note
Where differences exist between different targets or the Ethereum Foundation Solidity compiler, this is noted in boxes like these.
Brief Language Status¶
As with any new project, bugs are possible. Please report any issues you may find to github.
Differences:
- libraries are always statically linked into the contract code
- Solang generates WebAssembly or BPF rather than EVM. This means that the
assembly {} statement using EVM instructions is not supported
Features unique to Solang:

- Solang can target different blockchains, and some features depend on the target. For example, Parity Substrate uses a different ABI encoding and allows constructors to be overloaded.
- Events can be declared outside of contracts
- Base contracts can be declared in any order
- There is a
print() function for debugging
- Strings can be formatted with python-style format strings, which is useful for debugging:
print("x = {}".format(x));
Solidity Source File Structure¶
A single Solidity source file may define multiple contracts. A contract is defined
with the
contract keyword, following by the contract name and then the definition
of the contract in between curly braces
{ and
}.
contract A { /// foo simply returns true function foo() public returns (bool) { return true; } } contract B { /// bar simply returns false function bar() public returns (bool) { return false; } }
When compiling this, Solang will output contract code for both A and B, irrespective of the name of the source file. Although multiple contracts may be defined in one Solidity source file, it might be convenient to define a single contract per file, named after the file.
Imports¶
The
import directive is used to import from other Solidity files. This can be useful to
keep a single definition in one file, which can be used in multiple other files. Solidity imports
are somewhat similar to JavaScript ES6, however there is no export statement, or default export.
The following can be imported:
- global constants
- struct definitions
- enums definitions
- event definitions
- global functions
- contracts, including abstract contract, libraries, and interfaces
There are a few different flavours of import. You can specify if you want everything imported, or a just a select few. You can also rename the imports. The following directive imports only foo and bar:
import {foo, bar} from "defines.sol";
Solang will look for the file defines.sol in the same directory as the current file. You can specify
more directories to search with the
--importpath commandline option.
Just like with ES6,
import is hoisted to the top and both foo and bar are usable
even before the
import statement. It is also possible to import everything from
defines.sol by leaving the list out. Note that this is different from ES6, which would import nothing
with this syntax.
import "defines.sol";
Everything defined in defines.sol is now usable in your Solidity file. However, if something with the same name is defined in defines.sol and also in the current file, you will get a warning. Note that it is legal to import the same file more than once.
It is also possible to rename an import. In this case, only type foo will be imported, and bar will be imported as baz. This is useful if you have already have a bar and you want to avoid a naming conflict.
import {bar as baz,foo} from "defines.sol";
Rather than renaming individual imports, it is also possible to make all the types in a file available under a special import object. In this case, the bar defined in defines.sol is now visible as defs.bar, and foo as defs.foo. As long as there is no previous type defs, there can be no naming conflict.
import "defines.sol" as defs;
This also has a slightly more baroque syntax, which does exactly the same.
import * as defs from "defines.sol";
Pragmas¶
A pragma value is a special directive to the compiler. It has a name, and a value. The name is an identifier and the value is any text terminated by a semicolon ;. Solang parses pragmas but does not recognise any.
Often, Solidity source files start with a
pragma solidity which specifies the Ethereum
Foundation Solidity compiler version which is permitted to compile this code. Solang does
not follow the Ethereum Foundation Solidity compiler version numbering scheme, so these
pragma statements are silently ignored. There is no need for a
pragma solidity statement
when using Solang.
pragma solidity >=0.4.0 <0.4.8; pragma experimental ABIEncoderV2;
The ABIEncoderV2 pragma is not needed with Solang; structures can always be ABI encoded or decoded. All other pragma statements are ignored, but generate warnings.
Types¶
The following primitive types are supported.
Integer Types¶
uint
- This represents a single unsigned integer, 256 bits wide. Values can be, for example,
0,
102,
0xdeadcafe, or
1000_000_000_000_000.
uint64,
uint32,
uint16,
uint8
- These represent shorter single unsigned integers of the given width. These widths are most efficient and should be used whenever possible.
uintN
- These represent shorter single unsigned integers of width
N.
N can be anything between 8 and 256 bits and a multiple of 8, e.g.
uint24.
int
- This represents a single signed integer, 256 bits wide. Values can be, for example,
-102,
0,
102or
-0xdead_cafe.
int64,
int32,
int16,
int8
- These represent shorter single signed integers of the given width. These widths are most efficient and should be used whenever possible.
intN
- These represent shorter single signed integers of width
N.
N can be anything between 8 and 256 bits and a multiple of 8, e.g.
int128.
Underscores
_ are allowed in numbers, as long as the number does not start with
an underscore.
1_000 is allowed but
_1000 is not. Similarly
0xffff_0000 is fine, but
0x_f is not.
Scientific notation is supported, e.g.
1e6 is one million. Only integer values
are supported.
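As a small sketch (the contract and constant names are illustrative), underscores and scientific notation can be combined freely in integer literals:

```solidity
contract literals {
    // underscores only aid readability; they do not change the value
    uint64 constant billion = 1_000_000_000;

    // scientific notation: 5e3 is 5 * 10**3 = 5000
    uint32 constant five_thousand = 5e3;

    function check() public pure returns (bool) {
        return billion == 1e9 && five_thousand == 5_000;
    }
}
```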
Assigning values which cannot fit into the type gives a compiler error. For example:
uint8 foo = 300;
The largest value an
uint8 can hold is (2 8) - 1 = 255. So, the compiler says:
literal 300 is too large to fit into type ‘uint8’
Tip
When using integers, whenever possible use the
int64,
int32 or
uint64,
uint32 types.
The Solidity language has its origins in the Ethereum Virtual Machine (EVM), which has support for 256 bit arithmetic. Most common CPUs like x86_64 do not implement arithmetic for such large types, and any EVM virtual machine implementation has to do bigint calculations, which are expensive.
WebAssembly and BPF do not support 256 bit arithmetic either. As a result, Solang has to emulate larger types with many instructions, resulting in larger contract code and higher gas cost.
Fixed Length byte arrays¶
Solidity has a primitive type unique to the language. It is a fixed-length byte array of 1 to 32
bytes, declared with bytes followed by the array length, for example:
bytes32,
bytes24,
bytes8, or
bytes1.
byte is an alias for
bytes1, so
byte is an array of 1 element. The arrays can be initialized with either a hex string or
a text string.
bytes4 foo = "ABCD"; bytes4 bar = hex"41_42_43_44";
The ascii value for
A is 41 in hexadecimal. So, in this case, foo and bar
are initialized to the same value. Underscores are allowed in hex strings; they exist for
readability. If the string is shorter than the type, it is padded with zeros. For example:
bytes6 foo = "AB" "CD"; bytes5 bar = hex"41";
String literals can be concatenated like they can in C or C++. Here the types are longer than
the initializers; this means they are padded at the end with zeros. foo will contain the following
bytes in hexadecimal
41 42 43 44 00 00 and bar will be
41 00 00 00 00.
These types can be used with all the bitwise operators,
~,
|,
&,
^,
<<, and
>>. When these operators are used, the type behaves like an unsigned integer type. In this case
think of the type not as an array but as a long number. For example, it is possible to shift by one bit:
bytes2 foo = hex"0101" << 1; // foo is 02 02
Since this is an array type, it is possible to read array elements too. They are indexed from zero. It is not permitted to set array elements; the value of a bytesN type can only be changed by setting the entire array value.
bytes6 wake_code = "heotymeo"; bytes1 second_letter = wake_code[1]; // second_letter is "e"
The length can be read using the
.length member variable. Since this is a fixed size array, this
is always the length of the type itself.
bytes32 hash; assert(hash.length == 32); byte b; assert(b.length == 1);
Address and Address Payable Type¶
The
address type holds the address of an account. The length of an
address type depends on
the target being compiled for. On ewasm, an address is 20 bytes. Substrate has an address length
of 32 bytes. The format of an address literal depends on what target you are building for. On ewasm,
Ethereum addresses can be specified with a particular hexadecimal number.
address foo = 0xE9430d8C01C4E4Bb33E44fd7748942085D82fC91;
The hexadecimal string has to have 40 characters, and not contain any underscores.
The capitalization, i.e. whether
a to
f values are capitalized, is important.
It is defined in
EIP-55. For example,
when compiling:
address foo = 0xe9430d8C01C4E4Bb33E44fd7748942085D82fC91;
Since the hexadecimal string is 40 characters without underscores, and the string does not match the EIP-55 encoding, the compiler will refuse to compile it. To make this a regular hexadecimal number, rather than an address literal, add some leading zeros or some underscores. To fix the address literal, copy the address from the compiler error message:
error: address literal has incorrect checksum, expected ‘0xE9430d8C01C4E4Bb33E44fd7748942085D82fC91’
Substrate or Solana addresses are base58 encoded, not hexadecimal. An address literal can be specified with
the special syntax
address"<account>".
address foo = address"5GBWmgdFAMqm8ZgAHGobqDqX6tjLxJhv53ygjNtaaAn3sjeZ";
An address can be payable or not. A payable address can be used with the
.send() and .transfer() functions, and the
selfdestruct(address payable recipient) function. A non-payable address or contract can be cast to an
address payable
using the
payable() cast, like so:
address payable addr = payable(this);
address cannot be used in any arithmetic or bitwise operations. However, it can be cast to and from
bytes types and integer types. The
== and
!= operators work for comparing two address types.
address foo = address(0);
Note
The type name
address payable cannot be used as a cast in the Ethereum Foundation Solidity compiler,
and the cast must be
payable instead. This is
apparently due to a limitation in their parser.
Solang’s generated parser has no such limitation and allows
address payable to be used as a cast,
but allows
payable to be used as a cast as well, for compatibility reasons.
Note
Substrate can be compiled with a different type for Address. If you need support for a different length than the default, please get in touch.
Enums¶
Solidity enum types need a definition which lists the possible values they can hold. An enum
has a type name and a list of unique values. Enum types can be used in public functions, but the value
is represented as a
uint8 in the ABI.
contract enum_example { enum Weekday { Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday } function is_weekend(Weekday day) public pure returns (bool) { return (day == Weekday.Saturday || day == Weekday.Sunday); } }
An enum can be converted to and from integer, but this requires an explicit cast. The value of an enum is numbered from 0, like in C and Rust.
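A minimal sketch of such casts (the contract name is illustrative):

```solidity
contract enum_cast {
    enum suit { club, diamonds, hearts, spades }

    function test() public pure {
        // enum to integer: hearts is the third value, numbered from 0
        uint8 n = uint8(suit.hearts);
        assert(n == 2);

        // integer to enum: 3 maps to the fourth value
        suit s = suit(3);
        assert(s == suit.spades);
    }
}
```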
If an enum is declared in another contract, the type can be referred to as contractname.typename. The individual enum values are contractname.typename.value. The enum declaration does not have to appear in a contract, in which case it can be used without the contract name prefix.
enum planets { Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune } contract timeofday { enum time { Night, Day, Dawn, Dusk } } contract stargazing { function look_for(timeofday.time when) public returns (planets[]) { if (when == timeofday.time.Dawn || when == timeofday.time.Dusk) { planets[] x = new planets[](2); x[0] = planets.Mercury; x[1] = planets.Venus; return x; } else if (when == timeofday.time.Night) { planets[] x = new planets[](5); x[0] = planets.Mars; x[1] = planets.Jupiter; x[2] = planets.Saturn; x[3] = planets.Uranus; x[4] = planets.Neptune; return x; } else { planets[] x = new planets[](1); x[0] = planets.Earth; return x; } } }
Struct Type¶
A struct is a composite type of several other types. It is used to group related items together.
contract deck { enum suit { club, diamonds, hearts, spades } enum value { two, three, four, five, six, seven, eight, nine, ten, jack, queen, king, ace } struct card { value v; suit s; } function score(card c) public returns (uint32 score) { if (c.s == suit.hearts) { if (c.v == value.ace) { score = 14; } if (c.v == value.king) { score = 13; } if (c.v == value.queen) { score = 12; } if (c.v == value.jack) { score = 11; } } // all others score 0 } }
A struct has one or more fields, each with a unique name. Structs can be function arguments and return values. Structs can contain other structs. There is a struct literal syntax to create a struct with all the fields set.
contract deck { enum suit { club, diamonds, hearts, spades } enum value { two, three, four, five, six, seven, eight, nine, ten, jack, queen, king, ace } struct card { value v; suit s; } card card1 = card(value.two, suit.club); card card2 = card({s: suit.club, v: value.two}); // This function does a lot of copying function set_card1(card c) public returns (card previous) { previous = card1; card1 = c; } }
The two contract storage variables
card1 and
card2 have initializers using struct literals. Struct
literals can either set fields by their position, or field name. In either syntax, all the fields must
be specified. When specifying structs fields by position, the order of the fields must match with the
struct definition. When fields are specified by name, the order is not important.
Struct definitions from other contracts can be used by referring to them with the contractname. prefix. Struct definitions can appear outside of contract definitions, in which case they can be used in any contract without the prefix.
struct user { string name; bool active; } contract auth { function authenticate(string name, db.users storage users) public returns (bool) { // ... } } contract db { struct users { user[] field1; int32 count; } }
The users struct contains an array of user, which is another struct. The users struct is defined in contract db, and can be used in another contract with the type name db.users. Astute readers may have noticed that the db.users struct is used before it is declared. In Solidity, types can always be used before their declaration, or before they are imported.
Structs can be contract storage variables. Structs in contract storage can be assigned to structs in memory and vice versa, like in the set_card1() function. Copying structs between storage and memory is expensive; code has to be generated for each field and executed.
In the set_card1() function above:

- The function argument
c has to be ABI decoded (1 copy + decoding overhead)
- The
card1 struct has to be loaded from contract storage (1 copy + contract storage overhead)
- The
c struct has to be stored into contract storage (1 copy + contract storage overhead)
- The
previous struct has to be ABI encoded (1 copy + encoding overhead)
Note that struct variables are references. When contract struct variables or normal struct variables are passed around, just the memory address or storage slot is passed around internally. This makes it very cheap, but it does mean that if a called function modifies the struct, then this is visible in the caller as well.
contract foo { struct bar { bytes32 f1; bytes32 f2; bytes32 f3; bytes32 f4; } function f(bar b) public { b.f4 = "foobar"; } function example() public { bar bar1; // bar1 is passed by reference; just its pointer is passed f(bar1); assert(bar1.f4 == "foobar"); } }
Note
In the Ethereum Foundation Solidity compiler, you need to add
pragma experimental ABIEncoderV2;
to use structs as return values or function arguments in public functions. The default ABI encoder
of Solang can handle structs, so there is no need for this pragma. The Solang compiler ignores
this pragma if present.
Fixed Length Arrays¶
Arrays can be declared by adding [length] to the type name, where length is a constant expression. Any type can be made into an array, including arrays themselves (also known as arrays of arrays). For example:
contract foo { /// In a vote with 11 voters, do the ayes have it? function f(bool[11] votes) public pure returns (bool) { uint32 i; uint32 ayes = 0; for (i=0; i<votes.length; i++) { if (votes[i]) { ayes += 1; } } // votes.length is odd; integer truncation means that 11 / 2 = 5 return ayes > votes.length / 2; } }
Note the length of the array can be read with the
.length member. The length is readonly.
Arrays can be initialized with an array literal. For example:
contract primes { function primenumber(uint32 n) public pure returns (uint64) { uint64[10] primes = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ]; return primes[n]; } }
Any array subscript which is out of bounds (either an negative array index, or an index past the
last element) will cause a runtime exception. In this example, calling
primenumber(10) will
fail; the first prime number is indexed by 0, and the last by 9.
Arrays are passed by reference. If you modify the array in another function, those changes will be reflected in the current function. For example:
contract reference { function set_2(int8[4] a) pure private { a[2] = 102; } function foo() private { int8[4] val = [ int8(1), 2, 3, 4 ]; set_2(val); // val was passed by reference, so was modified assert(val[2] == 102); } }
Note
In Solidity, an fixed array of 32 bytes (or smaller) can be declared as
bytes32 or
uint8[32]. In the Ethereum ABI encoding, an
int8[32] is encoded using
32 × 32 = 1024 bytes. This is because the Ethereum ABI encoding pads each primitive to
32 bytes. However, since
bytes32 is a primitive in itself, this will only be 32
bytes when ABI encoded.
In Substrate, the SCALE encoding uses 32 bytes for both types.
Dynamic Length Arrays¶
Dynamic length arrays are useful when you do not know in advance how long your arrays
will need to be. They are declared by adding
[] to your type. How they can be used depends
on whether they are contract storage variables or stored in memory.
Memory dynamic arrays must be allocated with
new before they can be used. The
new
expression requires a single unsigned integer argument. The length can be read using the
length member variable. Once created, the length of the array cannot be changed.
contract dynamicarray { function test(uint32 size) public { int64[] memory a = new int64[](size); for (uint32 i = 0; i < size; i++) { a[i] = 1 << i; } assert(a.length == size); } }
Note
There is experimental support for push() and pop() on memory arrays.
Storage dynamic arrays do not have to be allocated. By default, they have a
length of zero and elements can be added and removed using the
push() and
pop()
methods.
contract s { int64[] a; function test() public { // push takes a single argument with the item to be added a.push(128); // push with no arguments adds 0 a.push(); // now we have two elements in our array, 128 and 0 assert(a.length == 2); a[0] |= 64; // pop removes the last element a.pop(); // you can assign the return value of pop int64 v = a.pop(); assert(v == 192); } }
Calling the method
pop() on an empty array is an error and contract execution will abort,
just like when you access an element beyond the end of an array.
push() without any arguments returns a storage reference. This is only available for types
that support storage references (see below).
contract example { struct user { address who; uint32 hitcount; } user[] foo; function test() public { // foo.push() creates an empty entry and returns a reference to it user storage x = foo.push(); x.who = address(1); x.hitcount = 1; } }
Depending on the array element,
pop() can be costly. It has to first copy the element to
memory, and then clear storage.
String¶
Strings can be initialized with a string literal or a hex literal. Strings can be concatenated and compared, and formatted using .format(); no other operations are allowed on strings.
contract example { function test1(string s) public returns (bool) { string str = "Hello, " + s + "!"; return (str == "Hello, World!"); } function test2(string s, int64 n) public returns (string res) { res = "Hello, {}! #{}".format(s, n); } }
Strings can be cast to bytes. This cast has no runtime cost, since both types use the same underlying data structure.
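For example, the byte length of a string can be read by casting it to bytes first; a sketch (names are illustrative):

```solidity
contract strlen {
    function byte_length(string s) public pure returns (uint32) {
        // the cast reuses the same underlying data, so this is cheap
        return uint32(bytes(s).length);
    }
}
```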
Note
The Ethereum Foundation Solidity compiler does not allow unicode characters in string literals,
unless it is prefixed with unicode, e.g.
unicode"€" . For compatibility, Solang also
accepts the unicode prefix. Solang always allows unicode characters in strings.
Dynamic Length Bytes¶
The
bytes datatype is a dynamic length array of bytes. It can be created with
the
new operator, or from an string or hex initializer. Unlike the
string type,
it is possible to index the
bytes datatype like an array.
contract b { function test() public { bytes a = hex"0000_00fa"; bytes b = new bytes(4); b[3] = hex"fa"; assert(a == b); } }
If the
bytes variable is a storage variable, there is a
push() and
pop()
method available to add and remove bytes from the array. Array elements in a
memory
bytes can be modified, but no elements can be removed or added, in other
words,
push() and
pop() are not available when
bytes is stored in memory.
A
string type can be cast to
bytes. This way, the string can be modified or
characters can be read. Note this will access the string by byte, not character, so
any non-ascii characters will need special handling.
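A sketch of reading a single byte (the contract name is illustrative; this only works as expected for ascii text):

```solidity
contract first_byte {
    function starts_with_A(string s) public pure returns (bool) {
        bytes b = bytes(s);
        // indexing works per byte, not per character
        return b.length > 0 && b[0] == "A";
    }
}
```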
A dynamic array of bytes can use the type
bytes or
byte[]. The latter
stores each byte in an individual storage slot, while the former stores the
entire string in a single storage slot, when possible. Additionally a
string
can be cast to
bytes but not to
byte[].
Mappings¶
Mappings are a dictionary type, or associative arrays. Mappings have a number of limitations:
- they have to be in contract storage, not memory
- they are not iterable
- the key cannot be a
struct, array, or another mapping.
Mappings are declared with
mapping(keytype => valuetype), for example:
contract b { struct user { bool exists; address addr; } mapping(string => user) users; function add(string name, address addr) public { // assigning to a storage variable creates a reference user storage s = users[name]; s.exists = true; s.addr = addr; } function get(string name) public view returns (bool, address) { // assigning to a memory variable creates a copy user s = users[name]; return (s.exists, s.addr); } function rm(string name) public { delete users[name]; } }
Tip
When assigning multiple members in a struct in a mapping, it is better to create a storage variable as a reference to the struct, and then assign to the reference. The add() function above could have been written as:
function add(string name, address addr) public { users[name].exists = true; users[name].addr = addr; }
Here the storage slot for the struct is calculated twice, which includes an expensive keccak256 calculation.
If you access a non-existing key in a mapping, all the fields of the value will read as zero. So, it
is common practice to have a boolean field called
exists. Since mappings are not iterable,
it is not possible to do a
delete on a mapping, but an entry can be deleted.
Note
Solidity takes the keccak 256 hash of the key and the storage slot, and simply uses that to find the entry. There are no hash collision chains. This scheme is simple and avoids “hash flooding” attacks where the attacker chooses data which hashes to the same hash collision chain, making the hash table very slow; it will behave like a linked list.
In order to implement mappings in memory, a new scheme must be found which avoids this attack. Usually this is done with SipHash, but this cannot be used in smart contracts since there is no place to store secrets. Collision chains are needed since memory has a much smaller address space than the 256 bit storage slots.
Any suggestions for solving this are very welcome!
Contract Types¶
In Solidity, other smart contracts can be called and created. So, there is a type to hold the address of a contract. This is in fact simply the address of the contract, with some syntactic sugar for calling functions on it.
A contract can be created with the new statement, followed by the name of the contract. The arguments to the constructor must be provided.
contract child { function announce() public { print("Greetings from child contract"); } } contract creator { function test() public { child c = new child(); c.announce(); } }
Since child does not have a constructor, no arguments are needed for the new statement. The variable c is of the contract type child, which simply holds its address. Functions can be called on this type. The contract type can be cast to and from address, provided an explicit cast is used.
The expression
this evaluates to the current contract, which can be cast to
address or
address payable.
contract example { function get_address() public returns (address) { return address(this); } }
Function Types¶
Function types are references to functions. You can use function types to pass functions
for callbacks, for example. Function types come in two flavours,
internal and
external.
An internal function is a reference to a function in the same contract or one of its base contracts.
An external function is a reference to a public or external function on any contract.
When declaring a function type, you must specify the parameter types, return types, mutability, and whether it is external or internal. The parameters and return types cannot have names.
contract ft { function test() public { // reference to an internal function with two arguments, returning bool // with the default mutability (i.e. cannot be payable) function(int32, bool) internal returns (bool) x; // the local function func1 can be assigned to this type; mutability // can be more restrictive than the type. x = func1; // now you can call func1 via the x bool res = x(102, false); // reference to an internal function with no return values, must be pure function(int32 arg1, bool arg2) internal pure y; // Does not compile: wrong number of return types and mutability // is not compatible. y = func1; } function func1(int32 arg, bool arg2) view internal returns (bool) { return false; } }
If the
internal or
external keyword is omitted, the type defaults to internal.
Just like any other type, a function type can be a function argument, function return type, or a contract storage variable. Internal function types cannot be used in public function parameters or return types.
An external function type is a reference to a function in a particular contract. It stores the address of the contract and the function selector. An internal function type only stores the function reference. When assigning a value to an external function type, the contract and function must be specified, by using a function on a particular contract instance.
contract ft { function test(paffling p) public { // this.callback can be used as an external function type value p.set_callback(this.callback); } function callback(int32 count, string foo) public { // ... } } contract paffling { // the first visibility "external" is for the function type, the second "internal" is // for the callback variables function(int32, string) external internal callback; function set_callback(function(int32, string) external c) public { callback = c; } function piffle() public { callback(1, "paffled"); } }
Storage References¶
Parameters, return types, and variables can be declared storage references by adding storage after the type name. This means that the variable holds a reference to a particular contract storage variable.
contract felix { enum Felines { None, Lynx, Felis, Puma, Catopuma } Felines[100] group_a; Felines[100] group_b; function count_pumas(Felines[100] storage cats) private returns (uint32) { uint32 count = 0; uint32 i = 0; for (i = 0; i < cats.length; i++) { if (cats[i] == Felines.Puma) { ++count; } } return count; } function all_pumas() public returns (uint32) { Felines[100] storage ref = group_a; uint32 total = count_pumas(ref); ref = group_b; total += count_pumas(ref); return total; } }
Functions which have either storage parameters or return types cannot be public; when a function is called via the ABI encoder/decoder, it is not possible to pass references, just values. However, it is possible to use storage reference variables in public functions, as demonstrated in the function all_pumas().
Expressions¶
Solidity resembles the C family of languages. Expressions can use the following operators.
Arithmetic operators¶
The binary operators
-,
+,
*,
/,
%, and
** are supported, and also
in the assignment form
-=,
+=,
*=,
/=, and
%=. There is a
unary operator
-.
uint32 fahrenheit = celsius * 9 / 5 + 32;
Parentheses can be used too, of course:
uint32 celsius = (fahrenheit - 32) * 5 / 9;
Operators can also come in the assignment form.
balance += 10;
The exponentiation (or power) operator ** raises a number to a power, i.e. x ** y computes x to the power y. This can only be done for unsigned types.
uint64 thousand = 1000; uint64 billion = thousand ** 3;
Overflow checking is limited to types of 64 bits and smaller, and is only generated if the --math-overflow command line argument is specified. No overflow checking is generated in unchecked blocks, like so:
contract foo { function f(int64 n) public { unchecked { int64 j = n - 1; } } }
Warning
Overflow checking for types larger than
int64 (e.g.
uint128) is not implemented yet.
Bitwise operators¶
The
|,
&,
^ operators are supported, as are the shift operators
<<
and
>>. These are also available in the assignment form
|=,
&=,
^=,
<<=, and
>>=. Lastly there is a unary operator
~ to
invert all the bits in a value.
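As an illustration (the bit layout here is invented for the example), these operators can be used to pack and unpack small bit fields:

```solidity
contract bits {
    // pack 4-bit red, green and blue components into a single value
    function pack(uint16 r, uint16 g, uint16 b) public pure returns (uint16) {
        return (r << 8) | (g << 4) | b;
    }

    // extract the green component again with a shift and a mask
    function green(uint16 rgb) public pure returns (uint16) {
        return (rgb >> 4) & 0xf;
    }

    // flip every bit of the 12-bit colour
    function invert(uint16 rgb) public pure returns (uint16) {
        return ~rgb & 0xfff;
    }
}
```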
Logical operators¶
The logical operators
||,
&&, and
! are supported. The
|| and
&&
short-circuit. For example:
bool foo = x > 0 || bar();
bar() will not be called if the left hand expression evaluates to true, i.e. x is greater than 0. If x is 0, then bar() will be called and the result of the || will be the return value of bar(). Similarly, the right hand expression of && will not be evaluated if the left hand expression evaluates to false; in this case, whatever the outcome of the right hand expression, the && will result in false.
bool foo = x > 0 && bar();
Now
bar() will only be called if x is greater than 0. If x is 0 then the
&&
will result in false, irrespective of what bar() would return, so bar() is not
called at all. The expression elides execution of the right hand side, which is also
called short-circuit.
Conditional operator¶
The ternary conditional operator
? : is supported:
uint64 abs = foo > 0 ? foo : -foo;
Comparison operators¶
It is also possible to compare values. For this, the >=, >, ==, !=, <, and <= operators are supported. This is useful for conditionals.
The result of a comparison operator can be assigned to a bool. For example:
bool even = (value % 2) == 0;
It is not allowed to assign an integer to a bool; an explicit comparison is needed to turn it into a bool.
Increment and Decrement operators¶
The post-increment and pre-increment operators are implemented as you would expect. So, a++ evaluates to the value of a before incrementing, and ++a evaluates to the value of a after incrementing.
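A short sketch of the difference (the variable names are illustrative; note that after is a reserved word, hence the trailing underscore):

```solidity
contract counter {
    function test() public pure returns (uint32, uint32) {
        uint32 a = 5;
        // post-increment: before receives the old value 5, a becomes 6
        uint32 before = a++;
        // pre-increment: a is incremented to 7 first, so after_ is 7
        uint32 after_ = ++a;
        return (before, after_);
    }
}
```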
this¶
The keyword
this evaluates to the current contract. The type of this is the type of the
current contract. It can be cast to
address or
address payable using a cast.
contract kadowari { function nomi() public { kadowari c = this; address a = address(this); } }
Function calls made via this are function calls through the external call mechanism; i.e. they have to serialize and deserialize the arguments and have the external call overhead. In addition, this only works with public functions.
contract kadowari { function nomi() public { this.nokogiri(102); } function nokogiri(int a) public { // ... } }
type(..) operators¶
For integer values, the minimum and maximum values the types can hold are available using the
type(...).min and
type(...).max operators. For unsigned integers,
type(..).min
will always be 0.
contract example { int16 stored; function func(int x) public { if (x < type(int16).min || x > type(int16).max) { revert("value will not fit"); } stored = int16(x); } }
The EIP-165 interface value can be retrieved using the
syntax
type(...).interfaceId. This is only permitted on interfaces. The interfaceId is simply
a bitwise XOR of all function selectors in the interface. This makes it possible to uniquely identify
an interface at runtime, which can be used to write a supportsInterface() function as described
in the EIP.
The contract code for a contract, i.e. the binary WebAssembly or BPF, can be retrieved using the
type(c).creationCode and
type(c).runtimeCode fields, as
bytes. In Ethereum,
the constructor code is in the
creationCode WebAssembly and all the functions are in
the
runtimeCode WebAssembly or BPF. Parity Substrate has a single WebAssembly code for both,
so both fields will evaluate to the same value.
contract example { function test() public { bytes runtime = type(other).runtimeCode; } } contract other { bool foo; }
Note
type().creationCode and
type().runtimeCode are compile time constants.
It is not possible to access the code for the current contract. If this were possible, then the contract code would need to contain itself as a constant array, which would result in a contract of infinite size.
Ether and time units¶
Any decimal numeric literal constant can have a unit denomination. For example, 10 minutes will evaluate to 600, i.e. the constant will be multiplied by the unit's multiplier. The time units seconds, minutes, hours, days, and weeks are available, as are Ethereum currency denominations such as wei and ether.
Note that
ether,
wei and the other Ethereum currency denominations are available when not
compiling for Ethereum, but they will produce warnings.
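For example (a small sketch; the constant names are made up for illustration, and the multiplication happens at compile time):

```solidity
contract units {
    // 10 minutes is the literal 10 multiplied by 60, i.e. 600 seconds
    uint64 constant timeout = 10 minutes;

    // 1 ether is the literal 1 multiplied by 10**18, i.e. the value in wei
    uint constant fee = 1 ether;
}
```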
Casting¶
Solidity is very strict about the sign of operations, and whether an assignment can truncate a value. You can force the compiler to accept truncations or differences in sign by adding a cast.
Some examples:
function abs(int bar) public returns (int64) { if (bar > 0) { return bar; } else { return -bar; } }
The compiler will say:
implicit conversion would truncate from int256 to int64
You can work around this by adding a cast to the returned expression: return int64(bar);. However, it would be much nicer if the return type matched the argument type. Instead, implement multiple overloaded abs() functions, so that there is an abs() for each type.
It is allowed to cast from a
bytes type to
int or
uint (or vice versa), only if the length
of the type is the same. This requires an explicit cast.
bytes4 selector = "ABCD"; uint32 selector_as_uint = uint32(selector);
If the length also needs to change, then another cast is needed to adjust the length. Truncation and extension work differently for integer and bytes types. Integers pad with zeros on the left when extending, and keep the rightmost (least significant) part when truncating. bytes types pad on the right when extending, and keep the leftmost bytes when truncating. For example:
bytes4 start = "ABCD"; uint64 start1 = uint64(uint32(start)); // first cast to int, then extend as int: start1 = 0x41424344 uint64 start2 = uint64(bytes8(start)); // first extend as bytes, then cast to int: start2 = 0x4142434400000000
A similar example for truncation:
uint32 start = 0xdead_cafe; bytes2 start1 = bytes2(uint16(start)); // first truncate as int, then cast: start1 = hex"cafe" bytes2 start2 = bytes2(bytes4(start)); // first cast, then truncate as bytes: start2 = hex"dead"
Since byte is an array of one byte, a conversion from byte to uint8 requires a cast.
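A small sketch (the function name is made up for illustration), where indexing a bytes value yields a single byte that must be explicitly cast:

```solidity
contract cast {
    function first_byte(bytes data) public pure returns (uint8) {
        // data[0] is a byte; an explicit cast turns it into uint8
        return uint8(data[0]);
    }
}
```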
Statements¶
In functions, you can declare variables in code blocks. If the name is the same as an existing function, enum type, or another variable, then the compiler will generate a warning as the original item is no longer accessible.
contract test { uint foo = 102; uint bar; function foobar() private { // AVOID: this shadows the contract storage variable foo uint foo = 5; } }
Scoping rules apply as you would expect, so if you declare a variable in a block, then it is not accessible outside that block. For example:
function foo() public { // new block is introduced with { and ends with } { uint a; a = 102; } // ERROR: a is out of scope uint b = a + 5; }
If statement¶
Conditional execution of a block can be achieved using an
if (condition) { } statement. The
condition must evaluate to a
bool value.
function foo(uint32 n) private { if (n > 10) { // do something } // ERROR: unlike C integers can not be used as a condition if (n) { // ... } }
The statements enclosed by
{ and
} (commonly known as a block) are executed only if
the condition evaluates to true.
While statement¶
Repeated execution of a block can be achieved using while. Its syntax is similar to if, however the block is repeatedly executed until the condition evaluates to false.
If the condition is not true on first execution, then the loop is never executed:
function foo(uint n) private { while (n >= 10) { n -= 9; } }
It is possible to terminate execution of the while statement by using the break statement. Execution will continue at the next statement in the function. Alternatively, continue will cease execution of the current iteration of the block, but repeat the loop if the condition still holds:
function foo(uint n) private { while (n >= 10) { n--; if (n >= 100) { // do not execute the if statement below, but loop again continue; } if (bar(n)) { // cease execution of this while loop and jump to the "n = 102" statement break; } } n = 102; }
Do While statement¶
A do { ... } while (condition); statement is much like while (condition) { ... }, except that the condition is evaluated after executing the block. This means that the block is executed at least once, which is not true for while statements:
function foo(uint n) private { do { n--; if (n >= 100) { // do not execute the if statement below, but loop again continue; } if (bar(n)) { // cease execution of this while loop and jump to the "n = 102" statement break; } } while (n > 10); n = 102; }
For statements¶
For loops are like while loops with added syntactic sugar. To execute a loop, we often need to declare a loop variable, set its initial value, have a loop condition, and then adjust the loop variable for the next loop iteration.
For example, to loop from 0 to 1000 by steps of 100:
function foo() private { for (uint i = 0; i <= 1000; i += 100) { // ... } }
The declaration uint i = 0 can be omitted if no new variable needs to be declared, and similarly the post increment i += 100 can be omitted if not necessary. The loop condition must evaluate to a boolean, or it can be omitted completely. If it is omitted, the block must contain a break or return statement, else execution will repeat infinitely (or until all gas is spent):
function foo(uint n) private { // all three omitted for (;;) { // there must be a way out if (n == 0) { break; } } }
Destructuring Statement¶
The destructuring statement can be used for making function calls to functions that have multiple return values. The list can contain either:
- The name of an existing variable. The type must match the type of the return value.
- A new variable declaration with a type. Again, the type must match the type of the return value.
- Empty; this return value is ignored and not accessible.
contract destructure { function func() internal returns (bool, int32, string) { return (true, 5, "abcd"); } function test() public { string s; (bool b, , s) = func(); } }
The right hand side may also be a list of expressions. This can be useful for swapping values, for example.
function test() public { (int32 a, int32 b, int32 c) = (1, 2, 3); (b, , a) = (a, 5, b); }
The right hand side of a destructure may contain the ternary conditional operator. The number of elements in both branches of the conditional must match the left hand side of the destructure statement.
function test(bool cond) public { (int32 a, int32 b, int32 c) = cond ? (1, 2, 3) : (4, 5, 6); }
Try Catch Statement¶
Sometimes execution gets reverted due to a
revert() or
require(). These types of problems
usually cause the entire transaction to be aborted. However, it is possible to catch
some of these problems and continue execution.
This is only possible for contract instantiation through new, and external function calls.
An internal function cannot be called from a try catch statement. Not all problems can be handled,
for example, out of gas cannot be caught. The
revert() and
require() builtins may
be passed a reason code, which can be inspected using the
catch Error(string) syntax.
contract aborting { constructor() { revert("bar"); } } contract runner { function test() public { try new aborting() returns (aborting a) { // new succeeded; a holds a reference to the new contract } catch Error(string x) { if (x == "bar") { // "bar" revert or require was executed } } catch (bytes raw) { // if no error string could be decoded, we end up here with the raw data } } }
The same statement can be used for calling external functions. The
returns (...)
part must match the return types for the function. If no name is provided, that
return value is not accessible.
contract aborting { function abort() public returns (int32, bool) { revert("bar"); } } contract runner { function test() public { aborting abort = new aborting(); try abort.abort() returns (int32 a, bool b) { // call succeeded; return values are in a and b } catch Error(string x) { if (x == "bar") { // "bar" reason code was provided through revert() or require() } } catch (bytes raw) { // if no error string could be decoded, we end up here with the raw data } } }
There is an alternate syntax which avoids the ABI decoding by leaving the catch Error(…) out. This might be useful when no error string is expected, and it will generate shorter code.
contract aborting { function abort() public returns (int32, bool) { revert("bar"); } } contract runner { function test() public { aborting abort = new aborting(); try abort.abort() returns (int32 a, bool b) { // call succeeded; return values are in a and b } catch (bytes raw) { // call failed with raw error in raw } } }
Functions¶
A function can be declared inside a contract, in which case it has access to the contract's storage variables, other contract functions, etc. Functions can also be declared outside a contract.
/// get_initial_bound is called from the constructor function get_initial_bound() returns (uint value) { value = 102; } contract foo { uint bound = get_initial_bound(); /** set bound for get with bound */ function set_bound(uint _bound) public { bound = _bound; } /// Clamp a value within a bound. /// The bound can be set with set_bound(). function get_with_bound(uint value) view public returns (uint) { if (value < bound) { return value; } else { return bound; } } }
Functions can have any number of arguments. Function arguments may have names; if they do not have names then they cannot be used in the function body, but they will be present in the public interface.
The return values may have names, as demonstrated in the get_initial_bound() function. When all of the return values have names, then the return statement is no longer required at the end of the function body. Instead of returning the values provided in a return statement, the values of the return variables at the end of the function are returned. It is still possible to explicitly return some values with a return statement.
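A short sketch of named return values (the contract and function names are made up for illustration); since both return values are named, no return statement is needed:

```solidity
contract stats {
    // the values held by min and max at the end of the body are returned
    function minmax(uint a, uint b) public pure returns (uint min, uint max) {
        if (a < b) {
            min = a;
            max = b;
        } else {
            min = b;
            max = a;
        }
    }
}
```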
Functions which are declared
public will be present in the ABI and are callable
externally. If a function is declared
private then it is not callable externally,
but it can be called from within the contract. If a function is defined outside a
contract, then it cannot have a visibility specifier (e.g.
public).
Any DocComment before a function will be included in the ABI. Currently only Substrate supports documentation in the ABI.
Arguments passing and return values¶
Function arguments can be passed either by position or by name. When they are passed by name, arguments can be in any order. However, functions with anonymous arguments (arguments without names) cannot be called this way.
contract foo { function bar(uint32 x, bool y) public returns (uint32) { if (y) { return 2; } return 3; } function test() public { uint32 a = bar(102, false); a = bar({ y: true, x: 302 }); } }
If the function has a single return value, this can be assigned to a variable. If the function has multiple return values, these can be assigned using the Destructuring Statement assignment statement:
contract foo { function bar1(uint32 x, bool y) public returns (address, bytes32) { return (address(3), hex"01020304"); } function bar2(uint32 x, bool y) public returns (bool) { return !y; } function test() public { (address f1, bytes32 f2) = bar1(102, false); bool f3 = bar2({x: 255, y: true}); } }
It is also possible to call functions on other contracts, which is also known as calling external functions. The called function must be declared public. Calling external functions requires ABI encoding the arguments, and ABI decoding the return values. This is much more costly than an internal function call.
contract foo { function bar1(uint32 x, bool y) public returns (address, bytes32) { return (address(3), hex"01020304"); } function bar2(uint32 x, bool y) public returns (bool) { return !y; } } contract bar { function test(foo f) public { (address f1, bytes32 f2) = f.bar1(102, false); bool f3 = f.bar2({x: 255, y: true}); } }
The syntax for an external call is the same as for an internal call, except that it must be made on a contract type variable. Any error in an external call can be handled with the Try Catch Statement.
Passing value and gas with external calls¶
For external calls, value can be sent along with the call. The callee must be
payable. Likewise, a gas limit can be set.
contract foo { function bar() public { other o = new other(); o.feh{value: 102, gas: 5000}(102); } } contract other { function feh(uint32 x) public payable { // ... } }
Note
The gas cannot be set on Solana for external calls.
State mutability¶
Some functions only read contract storage (also known as state), and others may write contract storage. Functions that do not write state can be executed off-chain. Off-chain execution is faster, does not require write access, and does not need any balance.
Functions that do not write state come in two flavours: view and pure. pure functions may not read state, while view functions may read state.
Functions that do write state come in two flavours: payable and non-payable, the default. Functions that are not intended to receive any value should not be marked payable. The compiler will check that each call does not include any value, and there are runtime checks as well, which cause the function to be reverted if value is sent.
A constructor can be marked
payable, in which case value can be passed with the
constructor.
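As a sketch, the four mutability flavours can be declared like this (the contract and function names are made up for illustration):

```solidity
contract mutability {
    int64 counter;

    // pure: reads no contract state
    function square(int64 x) public pure returns (int64) {
        return x * x;
    }

    // view: reads state but does not write it
    function get() public view returns (int64) {
        return counter;
    }

    // non-payable (the default): writes state, rejects any value sent
    function increment() public {
        counter += 1;
    }

    // payable: writes state and may receive value
    function donate() public payable {
        counter += 1;
    }
}
```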
Note
If value is sent to a non-payable function on Parity Substrate, the call will be reverted. However there is no refund performed, so value will remain with the callee.
payable on constructors is not enforced on Parity Substrate. Funds are needed
for storage rent and there is a minimum deposit needed for the contract. As a result,
constructors always receive value on Parity Substrate.
Function overloading¶
Multiple functions with the same name can be declared, as long as the arguments are different in at least one of two ways:
- The number of arguments must be different
- The type of at least one of the arguments is different
A function cannot be overloaded by changing the return types or number of returned values. Here is an example of an overloaded function:
contract shape { int64 bar; function abs(int val) public returns (int) { if (val >= 0) { return val; } else { return -val; } } function abs(int64 val) public returns (int64) { if (val >= 0) { return val; } else { return -val; } } function foo(int64 x) public { bar = abs(x); } }
In the function foo, abs() is called with an
int64 so the second implementation
of the function abs() is called.
Function Modifiers¶
Function modifiers are used to check pre-conditions or post-conditions for a function call. First a
new modifier must be declared which looks much like a function, but uses the
modifier
keyword rather than
function.
contract example { address owner; modifier only_owner() { require(msg.sender == owner); _; // insert post conditions here } function foo() only_owner public { // ... } }
The function foo can only be run by the owner of the contract, else the require() in its modifier will fail. The special symbol _; will be replaced by the body of the function. In fact, if you specify _; twice, the function will execute twice, which might not be a good idea.
A modifier cannot have visibility (e.g.
public) or mutability (e.g.
view) specified,
since a modifier is never externally callable. Modifiers can only be used by attaching them
to functions.
A modifier can have arguments, just like regular functions. Here, if the price is less than 50, foo() itself will never be executed, and execution will return to the caller with nothing done, since _; is not reached in the modifier and as a result foo() is never executed.
contract example { modifier check_price(int64 price) { if (price >= 50) { _; } } function foo(int64 price) check_price(price) public { // ... } }
Multiple modifiers can be applied to a single function. The modifiers are executed in the order in which they are specified on the function declaration. Execution continues to the next modifier when _; is reached. In this example, the only_owner modifier is run first, and if that reaches _;, then check_price is executed. The body of function foo() is only reached once check_price reaches _;.
contract example { address owner; // a modifier with no arguments does not need "()" in its declaration modifier only_owner { require(msg.sender == owner); _; } modifier check_price(int64 price) { if (price >= 50) { _; } } function foo(int64 price) only_owner check_price(price) public { // ... } }
Modifiers can be inherited, or declared virtual in a base contract and then overridden, exactly like functions can be.
contract base { address owner; modifier only_owner { require(msg.sender == owner); _; } modifier check_price(int64 price) virtual { if (price >= 10) { _; } } } contract example is base { modifier check_price(int64 price) override { if (price >= 50) { _; } } function foo(int64 price) only_owner check_price(price) public { // ... } }
Calling an external function using
call()¶
If you call a function on a contract, then the function selector and any arguments are ABI encoded for you, and any return values are decoded. Sometimes it is useful to call a function without ABI encoding the arguments.
You can call a contract directly by using the
call() method on the address type.
This takes a single argument, which should be the ABI encoded arguments. The return
values are a
boolean which indicates success if true, and the ABI encoded
return value in
bytes.
contract a { function test() public { b v = new b(); // the following four lines are equivalent to "uint32 res = v.foo(3,5);" // Note that the signature is only hashed and not parsed. So, ensure that the // arguments are of the correct type. bytes data = abi.encodeWithSignature("foo(uint32,uint32)", uint32(3), uint32(5)); (bool success, bytes rawresult) = address(v).call(data); assert(success == true); uint32 res = abi.decode(rawresult, (uint32)); assert(res == 8); } } contract b { function foo(uint32 a, uint32 b) public returns (uint32) { return a + b; } }
Any value or gas limit can be specified for the external call. Note that no check is done to see
if the called function is
payable, since the compiler does not know what function you are
calling.
function test(address foo, bytes rawcalldata) public { (bool success, bytes rawresult) = foo.call{value: 102, gas: 1000}(rawcalldata); }
Note
ewasm also supports
staticcall() and
delegatecall() on the address type. These
call types are not supported on Parity Substrate.
fallback() and receive() function¶
When a function is called externally, either via a transaction or when one contract calls a function on another contract, the correct function is dispatched based on the function selector in the raw encoded ABI call data. If there is no match, the call reverts, unless there is a fallback() or receive() function defined.
If the call comes with value, then receive() is executed, otherwise fallback() is executed. This is made clear in the declarations; receive() must be declared payable, and fallback() must not be declared payable. If a call is made with value and no receive() function is defined, then the call reverts; likewise, if a call is made without value and no fallback() is defined, then the call also reverts. Both functions must be declared external.
contract test { uint32 bar; function foo(uint32 x) public { bar = x; } fallback() external { // execute if function selector does not match "foo(uint32)" and no value sent } receive() payable external { // execute if function selector does not match "foo(uint32)" and value sent } }
Constants¶
Constants can be declared at the global level or at the contract level, just like contract storage variables. They do not use any contract storage and cannot be modified. The variable must have an initializer, which must be a constant expression. It is not allowed to call functions or read variables in the initializer:
string constant greeting = "Hello, World!"; contract ethereum { uint constant byzantium_block = 4_370_000; }
Contract Storage¶
Any variables declared at the contract level (so not declared in a function or constructor) automatically become contract storage. Contract storage is maintained on chain, so these variables retain their values between calls. They are declared like this:
contract hitcount { uint public counter = 1; function hit() public { counter++; } }
The counter is maintained for each deployed hitcount contract. When the contract is deployed, the contract storage is set to 1. Contract storage variables do not need an initializer; when one is not present, the variable is initialized to 0, or false if it is a bool.
Immutable Variables¶
A variable can be declared immutable. This means that it may only be modified in a constructor, and not in any other function or modifier.
contract foo { uint public immutable bar; constructor(uint v) { bar = v; } function hit() public { // this is not permitted bar++; } }
This is purely a compiler syntax feature; the generated code is exactly the same.
Accessor Functions¶
Any contract storage variable which is declared public automatically gets an accessor function. This function has the same name as the variable. So, in the example above, the value of counter can be retrieved by calling a function called counter, which returns uint.
If the type is either an array or a mapping, the key or array indices become arguments to the accessor function.
contract ethereum { // As a public mapping, this creates an accessor function called balances, which takes // an address as an argument, and returns an uint mapping(address => uint) public balances; // A public array takes the index as an uint argument and returns the element, // in this case string. string[] public users; }
The accessor function may override a method on a base contract by specifying
override. The base function
must be virtual and have the same signature as the accessor. The
override keyword only affects the
accessor function, so it can only be used in combination with public variables and cannot be used to
override a variable in the base contract.
contract foo is bar { int public override baz; } contract bar { function baz() public virtual returns (int) { return 512; } }
How to clear Contract Storage¶
Any contract storage variable can have its underlying contract storage cleared with the delete operator. This can be done on any type: a simple integer, an array element, or the entire array itself. Contract storage has to be cleared one slot (i.e. primitive) at a time, so if there are many primitives, this can be costly.
contract s { struct user { address f1; int[] list; } user[1000] users; function clear() public { // delete has to iterate over 1000 users, and for each of those clear the // f1 field, read the length of the list, and iterate over each of those delete users; } }
Events¶
In Solidity, contracts can emit events that signal that changes have occurred. For example, a Solidity contract could emit a Deposit event, or BetPlaced in a poker game. These events are stored in the blockchain transaction log, so they become part of the permanent record. From Solidity’s perspective, you can emit events but you cannot access events on the chain.
Once those events are added to the chain, an off-chain application can listen for them. For example, the Web3.js interface has a subscribe() function. Another example is Hyperledger Burrow, which has a vent command that listens for events and inserts them into a Postgres database.
An event has two parts. First, there is a limited set of topics. Usually there are no more than 3 topics, and each of those has a fixed length of 32 bytes. They are there so that an application listening for events can easily filter for particular types of events, without needing to do any decoding. There is also a data section of variable length bytes, which is ABI encoded. To decode this part, the ABI for the event must be known.
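For example, an off-chain listener can filter raw logs on the first topic with a plain byte comparison, no ABI decoding required. A Python sketch with made-up topic values and a made-up log structure:

```python
# Hypothetical raw logs: fixed-length 32-byte topics plus ABI-encoded data.
DEPOSIT_TOPIC = b"\x11" * 32   # stand-in for keccak256("Deposit(address,uint256)")
BET_TOPIC = b"\x22" * 32       # stand-in for another event's signature hash

logs = [
    {"topics": [DEPOSIT_TOPIC, b"\xaa" * 32], "data": b"\x00" * 32},
    {"topics": [BET_TOPIC], "data": b"\x01" * 32},
    {"topics": [DEPOSIT_TOPIC, b"\xbb" * 32], "data": b"\x02" * 32},
]

def filter_by_topic(logs, topic):
    # Topics are fixed-size, so filtering is a byte comparison; the
    # variable-length data section never needs to be decoded here.
    return [log for log in logs if log["topics"] and log["topics"][0] == topic]

deposits = filter_by_topic(logs, DEPOSIT_TOPIC)
print(len(deposits))  # 2
```

Decoding the data section would then require the event's ABI, as described above.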
From Solidity's perspective, an event has a name and zero or more fields. The fields can either be indexed or not. indexed fields are stored as topics, so there can only be a limited number of indexed fields. The other fields are stored in the data section of the event. The event name does not need to be unique; just like functions, events can be overloaded as long as the fields are of different types, or the event has a different number of arguments.
In Parity Substrate, the topic fields are always the hash of the value of the field. Ethereum only hashes fields which do not fit in 32 bytes. Since a cryptographic hash is used, it is only possible to compare a topic against a known value.
An event can be declared in a contract, or outside.
event CounterpartySigned (
    address indexed party,
    address counter_party,
    uint contract_no
);

contract Signer {
    function sign(address counter_party, uint contract_no) internal {
        emit CounterpartySigned(address(this), counter_party, contract_no);
    }
}
Like function calls, the emit statement can have the fields specified by position, or by field name. Using field names rather than position may be useful in case the event name is overloaded, since the field names make it clearer which exact event is being emitted.
event UserModified(
    address user,
    string name
) anonymous;

event UserModified(
    address user,
    uint64 groupid
);

contract user {
    function set_name(string name) public {
        emit UserModified({ user: msg.sender, name: name });
    }

    function set_groupid(uint64 id) public {
        emit UserModified({ user: msg.sender, groupid: id });
    }
}
In the transaction log, the first topic of an event is the keccak256 hash of the signature of the event. The signature is the event name, followed by the field types in a comma-separated list in parentheses. So the first topic for the second UserModified event would be the keccak256 hash of UserModified(address,uint64). You can leave this topic out by declaring the event anonymous. This makes the event slightly smaller (32 bytes less) and makes it possible to have 4 indexed fields rather than 3.
Constructors and contract instantiation¶
When a contract is deployed, the contract storage is initialized to the initializer values provided, and any constructor is called. A constructor is not required for a contract. A constructor is defined like so:
contract mycontract {
    uint foo;

    constructor(uint foo_value) {
        foo = foo_value;
    }
}
A constructor does not have a name and may have any number of arguments. If a constructor has arguments, then they must be supplied when the contract is deployed.
If a contract is expected to receive value on instantiation, the constructor should be declared payable.
Note
Parity Substrate allows multiple constructors to be defined, which ewasm does not. So, when building for Substrate, multiple constructors can be defined as long as their argument lists differ (i.e. they are overloaded).
When the contract is deployed in the Polkadot UI, the user can select the constructor to be used.
Instantiation using new¶
Contracts can be created using the new keyword. The contract that is being created might have constructor arguments, which need to be provided.
contract hatchling {
    string name;

    constructor(string id) {
        require(id != "", "name must be provided");
        name = id;
    }
}

contract adult {
    function test() public {
        hatchling h = new hatchling("luna");
    }
}
The constructor might fail for various reasons; for example, the require() might fail here. This can be handled using a try-catch statement, otherwise errors cause the transaction to fail.
Sending value to the new contract¶
It is possible to send value to the new contract. This can be done with the {value: 500} syntax, like so:
contract hatchling {
    string name;

    constructor(string id) payable {
        require(id != "", "name must be provided");
        name = id;
    }
}

contract adult {
    function test() public {
        hatchling h = new hatchling{value: 500}("luna");
    }
}
The constructor should be declared payable for this to work.
Note
If no value is specified, then on Parity Substrate the minimum balance (also known as the existential deposit) is sent.
Setting the salt, gas, and space for the new contract¶
Note
ewasm does not yet provide a method for setting the salt or gas for the new contract, so these values are ignored.
Note
The gas or salt cannot be set on Solana. However, when creating a contract on Solana, the size of the new account can be set using space:.
When a new contract is created, the address for the new contract is a hash of the input (the constructor arguments) to the new contract, so a contract cannot be created twice with the same input. This is why a salt is concatenated to the input. The salt is either a random value, or it can be explicitly set using the {salt: 2} syntax. A constant salt removes the need for runtime random generation; however, creating a contract twice with the same salt and arguments will fail. The salt is of type uint256.
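As an illustration of why the salt matters, here is a Python sketch of deterministic address derivation. Everything here is a stand-in: sha256 replaces the chain's actual hash, and the input layout is made up for the example.

```python
import hashlib

def derive_address(constructor_args: bytes, salt: int) -> str:
    # Illustration only: real chains use their own hash function and input layout.
    payload = constructor_args + salt.to_bytes(32, "big")  # salt is uint256-sized
    return hashlib.sha256(payload).hexdigest()[:40]

a = derive_address(b"luna", salt=2)
b = derive_address(b"luna", salt=2)
c = derive_address(b"luna", salt=3)

assert a == b  # same input + same salt -> same address (a second create would fail)
assert a != c  # changing the salt yields a fresh address for the same input
```

This is why a random salt lets the same constructor arguments be deployed repeatedly, while a constant salt pins each (salt, arguments) pair to a single deployment.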
If gas is specified, this limits the amount of gas the constructor for the new contract can use. gas is a uint64.
contract hatchling {
    string name;

    constructor(string id) payable {
        require(id != "", "name must be provided");
        name = id;
    }
}

contract adult {
    function test() public {
        hatchling h = new hatchling{salt: 0, gas: 10000}("luna");
    }
}
When creating a contract on Solana, the size of the new account can be specified using space:. By default, the new account is created with a size of 1 kilobyte (1024 bytes) plus the size required for any fixed-size fields. When you specify space, this is the space in addition to the fixed-size fields. So, if you specify space: 0, then there is no space for any dynamically allocated fields.
contract hatchling {
    string name;

    constructor(string id) payable {
        require(id != "", "name must be provided");
        name = id;
    }
}

contract adult {
    function test() public {
        hatchling h = new hatchling{space: 10240}("luna");
    }
}
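The sizing rule above can be expressed as simple arithmetic. In this sketch the 64-byte fixed-field size is a made-up example value:

```python
def account_size(fixed_fields, space=None):
    # space: extra room for dynamically allocated fields, on top of fixed fields.
    dynamic = 1024 if space is None else space  # default is 1 kilobyte
    return fixed_fields + dynamic

print(account_size(fixed_fields=64))               # 1088: 64 fixed + 1024 default
print(account_size(fixed_fields=64, space=0))      # 64: no room for dynamic fields
print(account_size(fixed_fields=64, space=10240))  # 10304
```

So space controls only the dynamic portion; the fixed-size fields are always accounted for separately.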
Base contracts, abstract contracts and interfaces¶
Solidity contracts support object-oriented programming. The style of Solidity is somewhat similar to C++, but there are many differences. In Solidity we are dealing with contracts, not classes.
Specifying base contracts¶
To inherit from another contract, you have to specify it as a base contract. Multiple contracts can be specified here.
contract a is b, c {
    constructor() {}
}

contract b {
    int foo;

    function func2() public {}

    constructor() {}
}

contract c {
    int bar;

    constructor() {}

    function func1() public {}
}
In this case, contract a inherits from both b and c. Both func1() and func2() are visible in contract a, and will be part of its public interface if they are declared public or external. In addition, the contract storage variables foo and bar are also available in a.
Inheriting contracts is recursive; this means that if you inherit a contract, you also inherit everything that that contract inherits. In this example, contract a inherits b directly, and inherits c through b. This means that contract b also has a variable bar.
contract a is b {
    constructor() {}
}

contract b is c {
    int foo;

    function func2() public {}

    constructor() {}
}

contract c {
    int bar;

    constructor() {}

    function func1() public {}
}
Virtual Functions¶
When inheriting from a base contract, it is possible to override a function with a newer function with the same name. For this to be possible, the base contract must have specified the function as virtual. The inheriting contract must then specify the same function with the same name, arguments and return values, and add the override keyword.
contract a is b {
    function func(int a) override public returns (int) {
        return a + 11;
    }
}

contract b {
    function func(int a) virtual public returns (int) {
        return a + 10;
    }
}
If the function is present in more than one base contract, the override attribute must list all the base contracts it is overriding.
contract a is b, c {
    function func(int a) override(b, c) public returns (int) {
        return a + 11;
    }
}

contract b {
    function func(int a) virtual public returns (int) {
        return a + 10;
    }
}

contract c {
    function func(int a) virtual public returns (int) {
        return a + 5;
    }
}
Calling function in base contract¶
When a virtual function is called, the dispatch is virtual. If the function being called is overridden in another contract, then the overriding function is called. For example:
contract a is b {
    function baz() public returns (uint64) {
        return foo();
    }

    function foo() internal override returns (uint64) {
        return 2;
    }
}

contract b {
    function foo() internal virtual returns (uint64) {
        return 1;
    }

    function bar() internal returns (uint64) {
        // since foo() is virtual, this is a virtual dispatch call
        // when foo is called and b is a base contract of a, then foo in
        // contract a will be called; foo will return 2.
        return foo();
    }

    function bar2() internal returns (uint64) {
        // this explicitly says "call foo of contract b", and dispatch is not virtual
        return b.foo();
    }
}
Rather than specifying the base contract, you can use super as the contract to call the base contract function.
contract a is b {
    function baz() public returns (uint64) {
        // this will return 1
        return super.foo();
    }

    function foo() internal override returns (uint64) {
        return 2;
    }
}

contract b {
    function foo() internal virtual returns (uint64) {
        return 1;
    }
}
If there are multiple base contracts which define the same function, the function of the first base contract is called.
contract a is b1, b2 {
    function baz() public returns (uint64) {
        // this will return 100
        return super.foo();
    }

    function foo() internal override(b1, b2) returns (uint64) {
        return 2;
    }
}

contract b1 {
    function foo() internal virtual returns (uint64) {
        return 100;
    }
}

contract b2 {
    function foo() internal virtual returns (uint64) {
        return 200;
    }
}
Specifying constructor arguments¶
If a contract inherits from another contract, then when it is instantiated or deployed, the constructors for its inherited contracts are called. The constructor arguments can be specified on the base contract itself.
contract a is b(1) {
    constructor() {}
}

contract b is c(2) {
    int foo;

    function func2(int i) public {}

    constructor() {}
}

contract c {
    int bar;

    constructor(int32 j) {}

    function func1() public {}
}
When a is deployed, the constructor for c is executed first, then b, and lastly a. When the constructor arguments are specified on the base contract, the values must be constant. It is also possible to specify the base arguments on the constructor of the inheriting contract. Now we have access to the constructor arguments, which means we can have runtime-defined arguments to the inheriting constructors.
contract a is b {
    constructor(int i) b(i+2) {}
}

contract b is c {
    int foo;

    function func2() public {}

    constructor(int j) c(j+3) {}
}

contract c {
    int bar;

    constructor(int32 k) {}

    function func1() public {}
}
The execution is not entirely intuitive in this case. When contract a is deployed with an int argument of 10, first the constructor argument of contract b is calculated: 10+2, and that value is used as an argument to constructor b. Constructor b calculates the argument for constructor c to be: 12+3. Now, with all the arguments for all the constructors established, constructor c is executed with argument 15, then constructor b with argument 12, and lastly constructor a with the original argument 10.
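The evaluation order described above can be traced with a small Python sketch; the contract names and values follow the example:

```python
def deploy_a(i):
    order = []
    # Arguments are derived first, top-down, before any constructor body runs:
    arg_b = i + 2        # a passes b(i+2)
    arg_c = arg_b + 3    # b passes c(j+3)
    # ...then the constructor bodies execute base-first: c, then b, then a.
    order.append(("c", arg_c))
    order.append(("b", arg_b))
    order.append(("a", i))
    return order

print(deploy_a(10))  # [('c', 15), ('b', 12), ('a', 10)]
```

Running it with 10 reproduces the 15, 12, 10 sequence from the text.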
Abstract Contracts¶
An abstract contract is one that cannot be instantiated, but it can be used as a base for another contract, which can be instantiated. A contract can be abstract because the functions it defines do not have a body, for example:
abstract contract a {
    function func2() virtual public;
}
This contract cannot be instantiated, since there is no body or implementation for func2. Another contract can define this contract as a base contract and override func2 with a body.

Another reason why a contract must be abstract is missing constructor arguments. In this case, if we were to instantiate contract a we would not know what the constructor arguments to its base b would have to be. Note that contract c does inherit from a and can specify the arguments for b on its constructor, even though c does not directly inherit b (but does indirectly).
abstract contract a is b {
    constructor() {}
}

contract b {
    constructor(int j) {}
}

contract c is a {
    constructor(int k) b(k*2) {}
}
Interfaces¶
An interface is a contract sugar type with restrictions. This type cannot be instantiated; it can only define the function prototypes for a contract. This is useful as a generic interface.
interface operator {
    function op1(int32 a, int32 b) external returns (int32);
    function op2(int32 a, int32 b) external returns (int32);
}

contract ferqu {
    operator op;

    constructor(bool do_adds) {
        if (do_adds) {
            op = new m1();
        } else {
            op = new m2();
        }
    }

    function x(int32 b) public returns (int32) {
        return op.op1(102, b);
    }
}

contract m1 is operator {
    function op1(int32 a, int32 b) public override returns (int32) {
        return a + b;
    }

    function op2(int32 a, int32 b) public override returns (int32) {
        return a - b;
    }
}

contract m2 is operator {
    function op1(int32 a, int32 b) public override returns (int32) {
        return a * b;
    }

    function op2(int32 a, int32 b) public override returns (int32) {
        return a / b;
    }
}
- Interfaces can only have other interfaces as a base contract
- All functions must have external visibility
- No constructor can be declared
- No contract storage variables can exist (however constants are allowed)
- No function can have a body or implementation
Libraries¶
Libraries are a special type of contract which can be reused in multiple contracts. Functions declared in a library can be called with the library.function() syntax. When the library has been imported or declared, any contract can use its functions simply by using its name.
contract test {
    function foo(uint64 x) public pure returns (uint64) {
        return ints.max(x, 65536);
    }
}

library ints {
    function max(uint64 a, uint64 b) public pure returns (uint64) {
        return a > b ? a : b;
    }
}
When writing libraries there are restrictions compared to contracts:
- A library cannot have constructors, fallback functions, or receive functions
- A library cannot have base contracts
- A library cannot be a base contract
- A library cannot have virtual or override functions
- A library cannot have payable functions
Note
When using the Ethereum Foundation Solidity compiler, libraries are a special contract type and library functions are called using delegatecall. Parity Substrate has no delegatecall functionality, so Solang statically links the library calls into your contract code. This makes for larger contract code; however, it reduces the call overhead and makes it possible to do compiler optimizations across library and contract code.
Library Using For¶
Libraries can be used as method calls on variables. The type of the variable needs to be bound to the library, and the type of the first parameter of the library function must match the type of the variable.
contract test {
    using lib for int32[100];

    int32[100] bar;

    function foo() public {
        bar.set(10, 571);
    }
}

library lib {
    function set(int32[100] storage a, uint index, int32 val) internal {
        a[index] = val;
    }
}
The syntax using library for Type; binds the library to the type. This must be specified within the contract. It binds library lib to any variable with type int32[100]. As a result, any method call on a variable of type int32[100] will be matched to library lib. For the call to match, the first argument of the function must match the variable; note that here, bar is a storage variable, since all contract variables are implicitly storage.
There is an alternative syntax using library for *; which binds the library functions to any variable that matches according to these rules.
Sending and receiving value¶
Value in Solidity is represented by a uint128.
Note
Parity Substrate can be compiled with a different type for T::Balance. If you need support for a different type, please raise an issue.
Checking your balance¶
The balance of a contract can be checked with address.balance, so your own balance is address(this).balance.
Note
Parity Substrate cannot check the balance for contracts other than the current one. If you need to check the balance of another contract, then add a balance function to that contract like the one below, and call that function instead.
function balance() public returns (uint128) {
    return address(this).balance;
}
Creating contracts with an initial value¶
You can specify the value you want to be deposited in the new contract by specifying {value: 100 ether} before the constructor arguments. This is explained in sending value to the new contract.
Sending value with an external call¶
You can specify the value you want to be sent along with the function call by specifying {value: 100 ether} before the function arguments. This is explained in passing value and gas with external calls.
Sending value using send() and transfer()¶
The send() and transfer() functions are available as methods on an address payable variable. The single argument is the amount of value you would like to send. The difference between the two functions is what happens in the failure case: transfer() will revert the current call, while send() returns a bool which will be false.
In order for the receiving contract to receive the value, it needs a receive() function; see fallback() and receive() function.
Here is an example:
contract A {
    B other;

    constructor() {
        other = new B();

        bool complete = payable(other).send(100);

        if (!complete) {
            // oops
        }

        // if the following fails, our transaction will fail
        payable(other).transfer(100);
    }
}

contract B {
    receive() payable external {
        // ..
    }
}
Note
On Substrate, this uses the seal_transfer() mechanism rather than seal_call(), since this does not come with gas overhead. This means the receive() function is not required in the receiving contract, and it will not be called even if it is present. If you want the receive() function to be called, use address.call{value: 100}("") instead.
Builtin Functions and Variables¶
The Solidity language has a number of built-in variables and functions which give access to the chain environment or pre-defined functions. Some of these functions will be different on different chains.
Block and transaction¶
These functions and variables give access to block properties like the block number, and transaction properties like gas used and value sent.
blockhash(uint64 block) returns (bytes32)¶
Returns the blockhash for a particular block. This is not possible for the current block, or for any block other than the most recent 256. Do not use this as a source of randomness unless you know what you are doing.
Note
This function is not available on Parity Substrate. When using Parity Substrate, use random() as a source of random data.
random(bytes subject) returns (bytes32)¶
Returns random bytes based on the subject. The same subject for the same transaction will return the same random bytes, so the result is deterministic. The chain has a max_subject_len, and if the subject exceeds that, the transaction will be aborted.
Note
This function is only available on Parity Substrate.
msg properties¶
- uint128
msg.value
- The amount of value sent with a transaction, or 0 if no value was sent.
- bytes
msg.data
- The raw ABI encoded arguments passed to the current call.
- bytes4
msg.sig
- Function selector from the ABI encoded calldata, i.e. the first four bytes. This might be 0 if no function selector was present. In Ethereum, constructor calls do not have function selectors, but in Parity Substrate they do.
- address
msg.sender
- The sender of the current call. This is either the address of the contract that called the current contract, or the address that started the transaction if it called the current contract directly.
tx properties¶
- uint128
tx.gasprice
- The price of one unit of gas. This field cannot be used on Parity Substrate, the explanation is in the warning box below.
- uint128
tx.gasprice(uint64 gas)
- The total price of gas units of gas.
Warning
On Parity Substrate, the cost of one gas unit may not be an exact whole round value. In fact, if the gas price is less than 1, it may round down to 0, giving the incorrect appearance that gas is free. Therefore, avoid the tx.gasprice member in favour of the function tx.gasprice(uint64 gas).

To avoid rounding errors, pass the total amount of gas into tx.gasprice(uint64 gas) rather than doing arithmetic on the result. For example, replace this bad example:
// BAD example
uint128 cost = num_items * tx.gasprice(gas_per_item);
with:
uint128 cost = tx.gasprice(num_items * gas_per_item);
Note this function is not available on the Ethereum Foundation Solidity compiler.
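To see why the second form is safer, here is the rounding worked through in integer arithmetic. The sub-unit price of 0.59 balance units per gas is made up for the example:

```python
# Hypothetical price of 0.59 balance units per gas, kept as integer math.
def gasprice(gas):
    return gas * 59 // 100  # integer division rounds down

gas_per_item = 10
num_items = 7

bad = num_items * gasprice(gas_per_item)   # 7 * 5 = 35 (rounded down per item)
good = gasprice(num_items * gas_per_item)  # 70 * 59 // 100 = 41

print(bad, good)  # 35 41
```

The per-item rounding loses value on every multiplication, while pricing the total gas rounds only once.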
- address
tx.origin
- The address that started this transaction. Not available on Parity Substrate.
block properties¶
Some block properties are always available:
- uint64
block.number
- The current block number.
- uint64
block.timestamp
- The time in unix epoch, i.e. seconds since the beginning of 1970.
Do not use either of these two fields as a source of randomness unless you know what you are doing.
The other block properties depend on which chain is being used.
Parity Substrate¶
- uint128
block.tombstone_deposit
- The amount needed for a tombstone. Without it, contracts will disappear completely if the balance runs out.
- uint128
block.minimum_deposit
- The minimum amount needed to create a contract. This does not include storage rent.
Error handling¶
assert(bool)¶
Assert takes a boolean argument. If that evaluates to false, execution is aborted.
contract c {
    constructor(int x) {
        assert(x > 0);
    }
}
revert() or revert(string)¶
revert aborts execution of the current contract and returns to the caller. revert() can be called with no arguments, or with a single string argument, which is called the ReasonCode. This function can be called at any point, either in a constructor or in a function.
If the caller is another contract, it can use the ReasonCode in a try-catch statement.
contract x {
    constructor(address foobar) {
        if (foobar == address(0)) {
            revert("foobar must be a valid address");
        }
    }
}
require(bool) or require(bool, string)¶
This function is used to check that a condition holds true, or abort execution otherwise. So, if the first bool argument is true, this function does nothing; however, if the bool argument is false, then execution is aborted. There is an optional second string argument, called the ReasonCode, which can be used by the caller to identify what the problem is.
contract x {
    constructor(address foobar) {
        require(foobar != address(0), "foobar must be a valid address");
    }
}
ABI encoding and decoding¶
The ABI encoding depends on the target being compiled for. Substrate uses the SCALE Codec and ewasm uses Ethereum ABI encoding.
abi.decode(bytes, (type-list))¶
This function decodes the first argument and returns the decoded fields. type-list is a comma-separated list of types. If multiple values are decoded, then a destructure statement must be used.
uint64 foo = abi.decode(bar, (uint64));
(uint64 foo1, bool foo2) = abi.decode(bar, (uint64, bool));
If the arguments cannot be decoded, contract execution will abort. This can happen if the encoded length is too short, for example.
abi.encode(…)¶
ABI encodes the arguments to bytes. Any number of arguments can be provided.
uint16 x = 241;
bytes foo = abi.encode(x);
On Substrate, foo will be hex"f100". On Ethereum this will be hex"00000000000000000000000000000000000000000000000000000000000000f1".
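Both encodings of 241 can be reproduced with Python's standard library: struct for the SCALE-style little-endian integer, and int.to_bytes for the 32-byte big-endian Ethereum ABI word.

```python
import struct

x = 241

# SCALE (Substrate): fixed-width little-endian, no padding
scale = struct.pack("<H", x)  # uint16
print(scale.hex())  # f100

# Ethereum ABI: every value is padded to a 32-byte big-endian word
eth = x.to_bytes(32, "big")
print(eth.hex())  # "00" * 31 + "f1"
```

This matches the two hex strings shown above.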
abi.encodeWithSelector(bytes4 selector, …)¶
ABI encodes the arguments with the function selector first. After the selector, any number of arguments can be provided.
bytes foo = abi.encodeWithSelector(hex"01020304", uint16(0xff00), "ABCD");
On Substrate, foo will be hex"0403020100ff". On Ethereum this will be hex"01020304000000000000000000000000000000000000000000000000000000000000ff00".
abi.encodeWithSignature(string signature, …)¶
ABI encodes the arguments with the bytes4 hash of the signature first. After the signature, any number of arguments can be provided. This is equivalent to abi.encodeWithSelector(bytes4(keccak256(signature)), ...).
bytes foo = abi.encodeWithSignature("test2(uint64)", uint64(257));
On Substrate, foo will be hex"296dacf0_0101_0000__0000_0000". On Ethereum this will be hex"296dacf0_0000000000000000000000000000000000000000000000000000000000000101".
abi.encodePacked(…)¶
ABI encodes the arguments to bytes. Any number of arguments can be provided. The packed encoding only encodes the raw data, not the lengths of strings and arrays. For example, when encoding a string only the string bytes will be encoded, not the length. It is not possible to decode packed encoding.
bytes foo = abi.encodePacked(uint16(0xff00), "ABCD");
On Substrate, foo will be hex"00ff41424344". On Ethereum this will be hex"ff0041424344".
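The packed outputs above can be reproduced byte-for-byte in Python, which makes the "raw data, no lengths" rule concrete: the integer contributes exactly two bytes and the string contributes exactly its UTF-8 bytes, with no length prefix anywhere.

```python
import struct

value = 0xFF00
s = "ABCD"

# Substrate packed: little-endian integer bytes + raw string bytes, no length prefix
substrate = struct.pack("<H", value) + s.encode()
print(substrate.hex())  # 00ff41424344

# Ethereum packed: big-endian integer bytes + raw string bytes, no length prefix
ethereum = struct.pack(">H", value) + s.encode()
print(ethereum.hex())  # ff0041424344
```

Because the length information is discarded, there is no way to split the bytes back into fields, which is why packed encoding cannot be decoded.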
Cryptography¶
blake2_128(bytes)¶
This returns the bytes16 blake2_128 hash of the bytes.
Note
This function is only available on Parity Substrate.
Mathematical¶
addmod(uint x, uint y, uint k) returns (uint)¶
Adds x to y and returns the result modulo k. The intermediate sum x + y is computed with arbitrary precision, so it will not overflow.
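A quick Python sketch of these semantics, mirroring the standard Solidity addmod builtin (verify against your target's documentation):

```python
def addmod(x, y, k):
    # Python ints are arbitrary precision, so x + y cannot overflow here,
    # mirroring the builtin's guarantee on the intermediate sum.
    return (x + y) % k

M = 2**256  # one past the largest uint256 value
print(addmod(10, 20, 7))      # 2
print(addmod(M - 1, 5, 10))   # the true sum M + 4 is reduced mod 10 -> 0
```

The second call is the interesting case: x + y exceeds the uint256 range, yet the result is still correct because the sum never wraps before the modulo is applied.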
Miscellaneous¶
print(string)¶
print() takes a string argument.
contract c {
    constructor() {
        print("Hello, world!");
    }
}
Note
print() is not available with the Ethereum Foundation Solidity compiler.
When using Substrate, this function is only available on development chains. If you use this function on a production chain, the contract will fail to load.
When using ewasm, the function is only available on hera when compiled with debugging.
selfdestruct(address payable recipient)¶
The selfdestruct() function causes the current contract to be deleted, and any remaining balance to be sent to recipient. This function does not return, as the contract no longer exists.
String formatting using "{}".format()¶
Sometimes it is useful to convert an integer to a string, e.g. for debugging purposes. There is a format builtin function for this, which is a method on string literals. Each {} in the string will be replaced with the value of an argument to format().
function foo(int arg1, bool arg2) public {
    print("foo entry arg1:{} arg2:{}".format(arg1, arg2));
}
Assuming arg1 is 5355 and arg2 is true, the output to the log will be foo entry arg1:5355 arg2:true.
The types accepted by format are bool, uint, int (any size, e.g. int128 or uint64), address, bytes (fixed and dynamic), and string. Enums are also supported, but will print the ordinal value of the enum. The uint and int types can have a format specifier. This allows you to convert to hexadecimal {:x} or binary {:b}, rather than decimal. No other types have a format specifier. To include a literal { or }, replace it with {{ or }}.
function foo(int arg1, uint arg2) public {
    // print arg1 in hex, and arg2 in binary
    print("foo entry {{arg1:{:x},arg2:{:b}}}".format(arg1, arg2));
}
Assuming arg1 is 512 and arg2 is 196, the output to the log will be foo entry {arg1:0x200,arg2:0b11000100}.
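Python's format mini-language offers equivalent hex and binary conversions, which reproduces the example output exactly (note that Python needs the # flag to add the 0x/0b prefixes shown above):

```python
arg1, arg2 = 512, 196

# {{ and }} produce literal braces; {:#x} and {:#b} are prefixed hex/binary
line = "foo entry {{arg1:{:#x},arg2:{:#b}}}".format(arg1, arg2)
print(line)  # foo entry {arg1:0x200,arg2:0b11000100}
```

This is a convenient way to sanity-check what a given Solidity format string should produce before spending gas on it.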
Warning
Each time you call format(), some specialized code is generated to format the string at runtime. This requires loops and so on to do the conversion. When formatting integers into decimals, types larger than 64 bits require expensive division. Be mindful that this will increase the gas cost, and larger values will incur a higher gas cost. Alternatively, use a hexadecimal {:x} format specifier to reduce the cost.
singleton - invokestatic - multiple threads
We made a thousand singletonI classes (I = 0 .. 999).
public class singletonI extends abstractclass {
private static singletonI instance;
private singletonI(){}
public int getValue(){
body;
}
public static singletonI getInstance() {
if (instance == null) {
instance = new singletonI();
}
return instance;
}
}
Each class has a getValue method. The body of the getValue method can be either complex or simple. A simple body means that the method returns the value 10 (return 10;). A complex body is one that uses other classes' getValue methods to calculate a complex value, which it subsequently returns (for example return singletonI.getInstance().getValue() + singletonI.getInstance().getValue();).
We also made one large singleton class in which we store all the methods of the thousand separate classes. The complex getValue methods invoke other getValue methods, which are now stored in the same class.
Could someone explain to me why the large-class alternative is faster than the thousand-classes alternative? If we take 10 threads, the difference between the two alternatives is even bigger.
java version "1.4.2_11"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_11-b06)
Java HotSpot(TM) Client VM (build 1.4.2_11-b06, mixed mode)
1.) First of all, if you use multiple threads then your program is broken anyway: your getInstance() method is NOT THREAD SAFE! The most scalable approach I know of is to use ThreadLocal in conjunction with synchronized; google a bit for more information.
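For reference, a lazy and thread-safe alternative is the initialization-on-demand holder idiom, which relies on the JVM's class-initialization guarantees instead of explicit locking. A sketch, trimmed to a single class (the class name and getValue body are placeholders):

```java
// Initialization-on-demand holder idiom: the JVM guarantees the nested holder
// class is initialized exactly once, on first use, with no explicit locking.
class Singleton {
    private Singleton() {}

    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        // Triggers Holder's class initialization the first time only.
        return Holder.INSTANCE;
    }

    public int getValue() {
        return 10;
    }

    public static void main(String[] args) {
        // The same instance is returned from any thread.
        System.out.println(Singleton.getInstance() == Singleton.getInstance());
    }
}
```

Unlike the null-check in the original code, this cannot create two instances under concurrent calls, and it adds no synchronization cost after initialization.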
2.) To be honest I don't know; what you told us here is a bit vague. If you have several small instance classes you have more overhead of course because:
- You have more code which is executed (getInstance, null-check, memory copies and so on)
- I don't know much about the generated code, but it could be that the single large class fits better into the caches.
3.) Use an up-to-date version of Java, run for a long time, and use the Java server runtime.
lg Clemens | https://www.java.net/node/657813 | CC-MAIN-2015-35 | refinedweb | 319 | 64 |
Beam DataFrames overview
The Apache Beam Python SDK provides a DataFrame API for working with pandas-like DataFrame objects. The feature lets you convert a PCollection to a DataFrame and then interact with the DataFrame using the standard methods available on the pandas DataFrame API. The DataFrame API is built on top of the pandas implementation, and pandas DataFrame methods are invoked on subsets of the datasets in parallel. The big difference between Beam DataFrames and pandas DataFrames is that operations are deferred by the Beam API, to support the Beam parallel processing model. (To learn more about differences between the DataFrame implementations, see Differences from pandas.)
You can think of Beam DataFrames as a domain-specific language (DSL) for Beam pipelines. Similar to Beam SQL, DataFrames is a DSL built into the Beam Python SDK. Using this DSL, you can create pipelines without referencing standard Beam constructs like ParDo or CombinePerKey.
The Beam DataFrame API is intended to provide access to a familiar programming interface within a Beam pipeline. In some cases, the DataFrame API can also improve pipeline efficiency by deferring to the highly efficient, vectorized pandas implementation.
What is a DataFrame?
If you're new to pandas DataFrames, you can get started by reading 10 minutes to pandas, which shows you how to import and work with the pandas package. pandas is an open-source Python library for data manipulation and analysis. It provides data structures that simplify working with relational or labeled data. One of these data structures is the DataFrame, which contains two-dimensional tabular data and provides labeled rows and columns for the data.
Using DataFrames
To use Beam DataFrames, you need to install Apache Beam version 2.26.0 or higher (for complete setup instructions, see the Apache Beam Python SDK Quickstart) and pandas version 1.0 or higher. You can use DataFrames as shown in the following example, which reads New York City taxi data from a CSV file, performs a grouped aggregation, and writes the output back to CSV:
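A sketch of such a pipeline might look like the following; the file paths are placeholders, and the exact aggregation is an assumption based on the passenger_count and DOLocationID columns mentioned below (running it requires apache_beam to be installed):

```python
import apache_beam as beam
from apache_beam.dataframe.io import read_csv

input_path = "path/to/nyc_taxi.csv"   # placeholder path
output_path = "path/to/output"        # placeholder path

with beam.Pipeline() as p:
    # read_csv returns a deferred Beam DataFrame, not a pandas DataFrame.
    rides = p | read_csv(input_path)

    # Grouped aggregation: sum passenger counts per drop-off location.
    # Nothing executes until the pipeline runs.
    agg = rides.groupby("DOLocationID").passenger_count.sum()
    agg.to_csv(output_path)
```

The DataFrame operations here are deferred and only execute when the pipeline runs, which is the key difference from working with pandas directly.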
pandas is able to infer column names from the first row of the CSV data, which is where passenger_count and DOLocationID come from.
In this example, the only traditional Beam type is the Pipeline instance. Otherwise the example is written completely with the DataFrame API. This is possible because the Beam DataFrame API includes its own IO operations (for example, read_csv and to_csv) based on the pandas native implementations. read_* and to_* operations support file patterns and any Beam-compatible file system. The grouping is accomplished with a group-by-key, and arbitrary pandas operations (in this case, sum) can be applied before the final write that occurs with to_csv.
The Beam DataFrame API aims to be compatible with the native pandas implementation, with a few caveats detailed below in Differences from standard pandas.
Embedding DataFrames in a pipeline
To use the DataFrames API in a larger pipeline, you can convert a PCollection to a DataFrame, process the DataFrame, and then convert the DataFrame back to a PCollection. In order to convert a PCollection to a DataFrame and back, you have to use PCollections that have schemas attached. A PCollection with a schema attached is also referred to as a schema-aware PCollection. To learn more about attaching a schema to a PCollection, see Creating schemas.
Here’s an example that creates a schema-aware PCollection, converts it to a DataFrame using to_dataframe, processes the DataFrame, and then converts the DataFrame back to a PCollection using to_pcollection:

from apache_beam.dataframe.convert import to_dataframe
from apache_beam.dataframe.convert import to_pcollection

...

# Read the text file[pattern] into a PCollection.
lines = p | 'Read' >> ReadFromText(known_args.input)

words = (
    lines
    | 'Split' >> beam.FlatMap(
        lambda line: re.findall(r'[\w]+', line)).with_output_types(str)
    # Map to Row objects to generate a schema suitable for conversion
    # to a dataframe.
    | 'ToRows' >> beam.Map(lambda word: beam.Row(word=word)))

df = to_dataframe(words)
df['count'] = 1
counted = df.groupby('word').sum()
counted.to_csv(known_args.output)

# Deferred DataFrames can also be converted back to schema'd PCollections
counted_pc = to_pcollection(counted, include_indexes=True)

# Do something with counted_pc
...
You can find the full wordcount example on GitHub, along with other example DataFrame pipelines.
DataframeTransform is similar to SqlTransform from the Beam SQL DSL. Where SqlTransform translates a SQL query to a PTransform, DataframeTransform is a PTransform that applies a function that takes and returns DataFrames. A DataframeTransform can be particularly useful if you have a stand-alone function that can be called both on Beam and on ordinary pandas DataFrames. DataframeTransform can accept and return multiple PCollections by name and by keyword.
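The examples from the original page aren't reproduced in this extract. As an illustration of such a stand-alone function (a sketch: only the commented-out line involves Beam, and its exact wiring is an assumption based on the description above), the same function can be exercised eagerly with pandas:

```python
import pandas as pd

def count_words(df):
    # Works on an ordinary pandas DataFrame and, in principle,
    # unchanged on a deferred Beam DataFrame.
    df = df.assign(count=1)
    return df.groupby("word").sum()

# Eager pandas call:
result = count_words(pd.DataFrame({"word": ["a", "b", "a"]}))
print(result)

# Deferred Beam call (assumed wiring; requires apache_beam):
#   counted = words_pcoll | DataframeTransform(count_words)
```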
Preface
Sega Genesis Development Kit (SGDK) is a free set of tools needed to create a game for the Sega Megadrive console. This GDK contains a compiler of C code, and a set of libraries we need.
The purpose of this tutorial is to create the simplest application, “Hello World”. But first, you need to prepare the tools.
Preparing the tools.
First, download the latest version of SGDK, from the official GitHub repository (Link).
Then, move the contents of the archive to the C:/SGDK folder.
Now, you need to create a system variable. To do this, right-click (RMB) the “This computer” icon and select “Properties” from the list.
Next, go to “Advanced System Settings” and click the “Environment Variables” button
And create a variable GDK_WIN that points to the SGDK folder.
The variable is created. Don’t forget to click “OK” to confirm the action.
Writing the code.
Download an empty project from MEGA.
And move it anywhere. Then, open the main.c file in the src folder.
#include <genesis.h>

int main()
{
    while(1)
    {
        VDP_waitVSync();
    }
    return (0);
}
It already contains the necessary skeleton; let's analyze it.
#include <genesis.h>
Here, we have connected a library with the necessary functions.
int main()
The entry point to the program.
VDP_waitVSync();
This function waits for the vertical sync, i.e. for the current frame to finish rendering.
while(1)
{
    VDP_waitVSync();
}
The while(1) loop repeats this operation an infinite number of times.
Now, you need to add the text “Hello World” to the screen. To do this, add the following command at the beginning of the main block.
VDP_drawText("Hello World", 1,1);
Let’s analyze the syntax.
VDP_drawText("message", x_tile, y_tile);

- message – the text displayed on the screen.
- x_tile – the x position, on the tile grid.
- y_tile – the y position, on the tile grid.
Tile is an image of 8×8 pixels.
That is, in our case:

VDP_drawText("Hello World", 1, 1);

displays the message “Hello World” at tile position (1, 1), i.e. at the pixel coordinates:

- x = 8
- y = 8
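Converting tile coordinates to pixel coordinates is just a multiplication by the 8-pixel tile size. A quick check of that arithmetic (plain Python, purely for illustration; this is not SGDK code):

```python
TILE_SIZE = 8  # an SGDK tile is 8x8 pixels

def tile_to_pixel(tx, ty):
    # Convert tile-grid coordinates to screen pixel coordinates.
    return tx * TILE_SIZE, ty * TILE_SIZE

print(tile_to_pixel(1, 1))  # (8, 8)
```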
Now, open compile.bat to compile the application.
After compilation, a rom.bin file should appear in the out folder of the project. Run it in an emulator.
Final result.
| https://under-prog.ru/en/sgdk-creating-a-hello-world-application/ | CC-MAIN-2022-27 | refinedweb | 355 | 69.07 |
Custom drawing in 2D
Why?
Godot has nodes to draw sprites, polygons, particles, and all sorts of stuff. For most cases, this is enough. If there's no node to draw something specific you need, you can make any 2D node (for example, Control or Node2D based) draw custom commands.
But...
Custom drawing manually in a node is really useful. Here are some examples why:
Drawing shapes or logic that is not handled by nodes (example: making a node that draws a circle, an image with trails, a special kind of animated polygon, etc).
Visualizations that are not very compatible with nodes (example: a tetris board). The tetris example uses a custom draw function to draw the blocks.
Drawing a large number of simple objects. Custom drawing avoids the overhead of using nodes which makes it less memory intensive and potentially faster.
Making a custom UI control. There are plenty of controls available, but it's easy to run into the need to make a new, custom one.
OK, how?
Add a script to any CanvasItem derived node, like Control or Node2D. Then override the _draw() function.

extends Node2D

func _draw():
    # Your draw commands here
    pass

public override void _Draw()
{
    // Your draw commands here
}
Draw commands are described in the CanvasItem class reference. There are plenty of them.
Updating
The _draw() function is only called once, and then the draw commands are cached and remembered, so further calls are unnecessary.

If re-drawing is required because a state or something else changed, simply call CanvasItem.update() in that same node and a new _draw() call will happen.

Here is a little more complex example, a texture variable that will be redrawn if modified:

extends Node2D

export (Texture) var texture setget _set_texture

func _set_texture(value):
    # If the texture variable is modified externally,
    # this callback is called.
    texture = value  # Texture was changed.
    update()  # Update the node's visual representation.

func _draw():
    draw_texture(texture, Vector2())

public class CustomNode2D : Node2D
{
    private Texture _texture;

    public Texture Texture
    {
        get { return _texture; }
        set
        {
            _texture = value;
            Update();
        }
    }

    public override void _Draw()
    {
        DrawTexture(_texture, new Vector2());
    }
}
In some cases, it may be desired to draw every frame. For this, call update() from the _process() callback, like this:

extends Node2D

func _draw():
    # Your draw commands here
    pass

func _process(delta):
    update()

public class CustomNode2D : Node2D
{
    public override void _Draw()
    {
        // Your draw commands here
    }

    public override void _Process(float delta)
    {
        Update();
    }
}
An example: drawing circular arcs
We will now use the custom drawing functionality of the Godot Engine to draw something that Godot doesn't provide functions for. As an example, Godot provides a draw_circle() function that draws a whole circle. However, what about drawing a portion of a circle? You will have to code a function to perform this and draw it yourself.
Arc function
An arc is defined by its support circle parameters, that is, the center position and the radius. The arc itself is then defined by the angle it starts from and the angle at which it stops. These are the 4 arguments that we have to provide to our drawing function. We'll also provide the color value, so we can draw the arc in different colors if we wish.
Basically, drawing a shape on the screen requires it to be decomposed into a certain number of points linked from one to the next. As you can imagine, the more points your shape is made of, the smoother it will appear, but the heavier it will also be in terms of processing cost. In general, if your shape is huge (or in 3D, close to the camera), it will require more points to be drawn without it being angular-looking. On the contrary, if your shape is small (or in 3D, far from the camera), you may decrease its number of points to save processing costs; this is known as Level of Detail (LOD). In our example, we will simply use a fixed number of points, no matter the radius.
func draw_circle_arc(center, radius, angle_from, angle_to, color):
    var nb_points = 32
    var points_arc = PackedVector2Array()

    for i in range(nb_points + 1):
        var angle_point = deg2rad(angle_from + i * (angle_to - angle_from) / nb_points - 90)
        points_arc.push_back(center + Vector2(cos(angle_point), sin(angle_point)) * radius)

    for index_point in range(nb_points):
        draw_line(points_arc[index_point], points_arc[index_point + 1], color)

public void DrawCircleArc(Vector2 center, float radius, float angleFrom, float angleTo, Color color)
{
    int nbPoints = 32;
    var pointsArc = new Vector2[nbPoints];

    for (int i = 0; i < nbPoints; ++i)
    {
        float anglePoint = Mathf.Deg2Rad(angleFrom + i * (angleTo - angleFrom) / nbPoints - 90f);
        pointsArc[i] = center + new Vector2(Mathf.Cos(anglePoint), Mathf.Sin(anglePoint)) * radius;
    }

    for (int i = 0; i < nbPoints - 1; ++i)
        DrawLine(pointsArc[i], pointsArc[i + 1], color);
}
Remember the number of points our shape has to be decomposed into? We fixed this number in the nb_points variable to a value of 32. Then, we initialize an empty PackedVector2Array, which is simply an array of Vector2s.
The next step consists of computing the actual positions of these 32 points that compose an arc. This is done in the first for-loop: we iterate over the number of points for which we want to compute the positions, plus one to include the last point. We first determine the angle of each point, between the starting and ending angles.
The reason why each angle is decreased by 90° is that we will compute 2D positions out of each angle using trigonometry (you know, cosine and sine stuff...). However, cos() and sin() use radians, not degrees. The angle of 0° (0 radian) starts at 3 o'clock, although we want to start counting at 12 o'clock. So we decrease each angle by 90° in order to start counting from 12 o'clock.

The actual position of a point located on a circle at angle angle (in radians) is given by Vector2(cos(angle), sin(angle)). Since cos() and sin() return values between -1 and 1, the position is located on a circle of radius 1. To have this position on our support circle, which has a radius of radius, we simply need to multiply the position by radius. Finally, we need to position our support circle at the center position, which is performed by adding it to our Vector2 value.

Then, we insert the point in the PackedVector2Array which was previously defined.
Now, we need to actually draw our points. As you can imagine, we will not simply draw our 32 points: we need to draw everything that is between each of them. We could have computed every point ourselves using the previous method, and drew it one by one. But this is too complicated and inefficient (except if explicitly needed), so we simply draw lines between each pair of points. Unless the radius of our support circle is big, the length of each line between a pair of points will never be long enough to see them. If that were to happen, we would simply need to increase the number of points.
Draw the arc on the screen
We now have a function that draws stuff on the screen; it is time to call it inside the _draw() function:

func _draw():
    var center = Vector2(200, 200)
    var radius = 80
    var angle_from = 75
    var angle_to = 195
    var color = Color(1.0, 0.0, 0.0)

    draw_circle_arc(center, radius, angle_from, angle_to, color)

public override void _Draw()
{
    var center = new Vector2(200, 200);
    float radius = 80;
    float angleFrom = 75;
    float angleTo = 195;
    var color = new Color(1, 0, 0);

    DrawCircleArc(center, radius, angleFrom, angleTo, color);
}
Result:
Arc polygon function
We can take this a step further and not only write a function that draws the plain portion of the disc defined by the arc, but also its shape. The method is exactly the same as before, except that we draw a polygon instead of lines:
func draw_circle_arc_poly(center, radius, angle_from, angle_to, color):
    var nb_points = 32
    var points_arc = PackedVector2Array()
    points_arc.push_back(center)
    var colors = PackedColorArray([color])

    for i in range(nb_points + 1):
        var angle_point = deg2rad(angle_from + i * (angle_to - angle_from) / nb_points - 90)
        points_arc.push_back(center + Vector2(cos(angle_point), sin(angle_point)) * radius)

    draw_polygon(points_arc, colors)

public void DrawCircleArcPoly(Vector2 center, float radius, float angleFrom, float angleTo, Color color)
{
    int nbPoints = 32;
    var pointsArc = new Vector2[nbPoints + 1];
    pointsArc[0] = center;
    var colors = new Color[] { color };

    for (int i = 0; i < nbPoints; ++i)
    {
        float anglePoint = Mathf.Deg2Rad(angleFrom + i * (angleTo - angleFrom) / nbPoints - 90);
        pointsArc[i + 1] = center + new Vector2(Mathf.Cos(anglePoint), Mathf.Sin(anglePoint)) * radius;
    }

    DrawPolygon(pointsArc, colors);
}
Dynamic custom drawing
All right, we are now able to draw custom stuff on the screen. However, it is static; let's make this shape turn around the center. The solution to do this is simply to change the angle_from and angle_to values over time. For our example, we will simply increment them by 50. This increment value has to remain constant or else the rotation speed will change accordingly.
First, we have to make both angle_from and angle_to variables global at the top of our script. Also note that you can store them in other nodes and access them using get_node().

extends Node2D

var rotation_angle = 50
var angle_from = 75
var angle_to = 195

public class CustomNode2D : Node2D
{
    private float _rotationAngle = 50;
    private float _angleFrom = 75;
    private float _angleTo = 195;
}
We make these values change in the _process(delta) function.
We also increment our angle_from and angle_to values here. However, we must not forget to wrap() the resulting values between 0 and 360°! That is, if the angle is 361°, then it is actually 1°. If you don't wrap these values, the script will work correctly, but the angle values will grow bigger and bigger over time until they reach the maximum integer value Godot can manage (2^31 - 1). When this happens, Godot may crash or produce unexpected behavior.

Finally, we must not forget to call the update() function, which automatically calls _draw(). This way, you can control when you want to refresh the frame.

func _process(delta):
    angle_from += rotation_angle
    angle_to += rotation_angle

    # We only wrap angles when both of them are bigger than 360.
    if angle_from > 360 and angle_to > 360:
        angle_from = wrapf(angle_from, 0, 360)
        angle_to = wrapf(angle_to, 0, 360)

    update()

private float Wrap(float value, float minVal, float maxVal)
{
    float f1 = value - minVal;
    float f2 = maxVal - minVal;
    return (f1 % f2) + minVal;
}

public override void _Process(float delta)
{
    _angleFrom += _rotationAngle;
    _angleTo += _rotationAngle;

    // We only wrap angles when both of them are bigger than 360.
    if (_angleFrom > 360 && _angleTo > 360)
    {
        _angleFrom = Wrap(_angleFrom, 0, 360);
        _angleTo = Wrap(_angleTo, 0, 360);
    }

    Update();
}
Also, don't forget to modify the _draw() function to make use of these variables:

func _draw():
    var center = Vector2(200, 200)
    var radius = 80
    var color = Color(1.0, 0.0, 0.0)

    draw_circle_arc(center, radius, angle_from, angle_to, color)

public override void _Draw()
{
    var center = new Vector2(200, 200);
    float radius = 80;
    var color = new Color(1, 0, 0);

    DrawCircleArc(center, radius, _angleFrom, _angleTo, color);
}
Let's run! It works, but the arc is rotating insanely fast! What's wrong?
The reason is that your GPU is actually displaying the frames as fast as it can. We need to "normalize" the drawing by this speed; to achieve that, we have to make use of the delta parameter of the _process() function. delta contains the time elapsed between the two last rendered frames. It is generally small (about 0.0003 seconds, but this depends on your hardware), so using delta to control your drawing ensures that your program runs at the same speed on everybody's hardware.

In our case, we simply need to multiply our rotation_angle variable by delta in the _process() function. This way, our 2 angles will be increased by a much smaller value, which directly depends on the rendering speed.

func _process(delta):
    angle_from += rotation_angle * delta
    angle_to += rotation_angle * delta

    # We only wrap angles when both of them are bigger than 360.
    if angle_from > 360 and angle_to > 360:
        angle_from = wrapf(angle_from, 0, 360)
        angle_to = wrapf(angle_to, 0, 360)

    update()

public override void _Process(float delta)
{
    _angleFrom += _rotationAngle * delta;
    _angleTo += _rotationAngle * delta;

    // We only wrap angles when both of them are bigger than 360.
    if (_angleFrom > 360 && _angleTo > 360)
    {
        _angleFrom = Wrap(_angleFrom, 0, 360);
        _angleTo = Wrap(_angleTo, 0, 360);
    }

    Update();
}
Let's run again! This time, the rotation displays fine!
Tools
Drawing your own nodes might also be desired while running them in the editor to use as a preview or visualization of some feature or behavior.
Remember to use the "tool" keyword at the top of the script (check the GDScript reference if you forgot what this does).
From: Andy Little (andy_at_[hidden])
Date: 2005-10-02 09:42:53
"Reece Dunn" <msclrhd_at_[hidden]> wrote
> Andy Little wrote:
>>"Reece Dunn" <msclrhd_at_[hidden]> wrote
>> > I have, as well as many others, have made attempts into this brave new
>>world
>> > ;).
I have been looking through your Boost.GUI library , (but only using VC7.1 ... I
should delve into X...whatever)
I am aware of the problem that windows applications use winMain as a
program-entry-point. However in VC7.1 it is possible to change this in the Linker >
advanced > entry point box. If you use the entry point mainCRTStartup then main
will be used as a program entry point. (Then you can change winMain to main in
code. I'm not too sure about the way arguments are handled yet but it would be
easy to preprocess these before main of course) This solves one problem! I
discovered this while reading the VFC documentation. It works fine.
FWIW I am thinking of working on your Boost.Gui. One essential difference I
would like to make is to try to use the same types on each platform, so for
example window sizes would all be in either integers or floats. Ideally they
should be templated on each type. What do you think?
I might go further and add units (maybe defaulting to pixels)
typedef gui::rect<px,int> rect1;
typedef gui::rect<pc,double> rect2;
typedef gui::rect<mm,int> rect3;
IMO from experience this is very powerful and makes coding much easier.
Alternatively / as well it might be good to set the transform (units, where (0,0)
is, whether +Y is up or down, etc.) per window.
BTW, on the issue of size and point: the problem with the operations specified is
that assuming:
point p1,p2,p3
operations such as:
point result_point = (p1 +p2 +p3)/3;
are legal mathematical constructs (whereas (p1+p2)/3 isn't, of course). That's why
I'd prefer to go with an entity called vect which stands in for both size and
point. (You could make point safe with some runtime overhead I guess)
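To illustrate the point with a throwaway sketch (plain Python, not proposed library code): averaging points is an affine combination whose weights sum to 1, which is why (p1 + p2 + p3)/3 is a meaningful point while (p1 + p2)/3, with weights summing to 2/3, is not:

```python
def centroid(points):
    # Affine combination with equal weights 1/n each (weights sum to 1).
    n = len(points)
    return (sum(p[0] for p in points) / n,
            sum(p[1] for p in points) / n)

print(centroid([(0, 0), (3, 0), (0, 3)]))  # (1.0, 1.0)
```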
[cut]
> I agree that the library should be designed on the abstract level, but it
> should also be possible to pass a rectangle object to GetClientRect (or the
> equivalent in Mac, PalmOS, etc.) without having to write a lot of
> platform-dependant code yourself.
Ok but because you will inevitably be using the platform specific API , this
will usually be easy to achieve.
> That is, I want it to be easy to interact with Win32, XWindows or Mac code
> /in a way that is simple for the user/.
Maybe as a plugin?
> I would also like the option of using native control/component types (or at
> least in terms of L&F and usability) as well as a cross platform L&F that
> Java can produce. Skinnable components at compile time (allowing a run-time
> skin option).
[cut]
Custom L&F is obviously essential. Model-view-controller, Java AWT/Swing and
VFC (mentioned in docs but not read by me yet) are worth looking at regarding
L&F.
As far as compile time entities is concerned I think they could be very useful
e.g :
namespace gui{ namespace meta{

// colors: default param for e.g. num valid bits
typedef rgb_color_c<int,255,0,0> red;
// taken from SVG
struct in;
struct mm;
struct cm;
struct px;
struct pt;
struct pc;
// or vect ;-)
template<typename X, typename Y, typename Units>
struct point;
template <typename TLpoint, typename BRpoint, typename Units>
struct rect;
// resource handles:
template < typename ResourceTag,typename ResourceInfo>
struct handle;
typedef handle<menu,MyMenu> my_menu;
}} // gui::meta
Might even be possible to use this for resource scripts...
> The Win32 API can use either wide character or narrow versions. Using the
> narrow versions on WinNT and above can cause performance issues with the
> narrow -> wide -> narrow string conversions done by the OS. Therefore, it
> would be useful to have something like a gui::string that is std::string on
> narrow character platforms (Win32 built with *A API variants, XWindows,
> etc.) and std::wstring on wide character platforms (Win32 built with *W API
> variants).
OK, but even in a word-processing application the amount of text on screen at a
time cannot be that big. Is conversion speed that critical?
> This also has an impact on platform Unicode support. The "native conversion"
> I was referring to was that you cannot do:
>
> std::wstring str( "foo" );
Yes that's a problem! Ideally I think there should be a Unicode string rather
than wstring (VFC and Java have this!); meanwhile basic_string<char> is fairly
portable. Because there are no console output characteristics defined for wchar,
one could say that non-character output is actually part of the graphics
module. The graphics module should be seen as a module with few dependencies so
various modules can be plugged in.
> at the moment. std::basic_string< native-character-type > should be the
> string type of preference.
As you may guess by now I would want the ability to have the same semantics for
a type for every platform defined.
It is easy then for the user to define native_character_type as a typedef.
regards
Andy Little
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2005/10/94645.php | CC-MAIN-2021-21 | refinedweb | 885 | 63.9 |
Subject: [Boost-announce] [boost] [TTI] Review Result
From: Joel falcou (joel.falcou_at_[hidden])
Date: 2011-08-02 09:06:02
This is the result summary for the Type Traits Introspection library by
Edward Diener.
The review ran from July 1st to July 17th and generated a large amount of very interesting design discussion. The library received 5 positive reviews and a few positive comments besides.
**The verdict is that, the TTI Library is ACCEPTED into Boost.**
However, a few comments and concerns have been raised but are
rather simple changes to perform and I am sure Edward will
take these comments into account before releasing the library.
Such comments include:
* The main comment is the fact that Boost.TTI forces the
definitions of its meta-functions into the boost::tti namespace.
Concerns have been raised and requests made that the macros
not specify any namespace and let users bring the generated
meta-functions into a namespace of their own liking.
* The number of macros handling various corner cases has been
raised as potentially too high. Effort to streamline the macro
based interface has been requested.
* Support for more compilers is needed. Some issues on ICC have been
raised but are currently being looked at (or fixed).
* Naming scheme of the macro and of the generated meta-functions
have to be refined to be 1/ more clear 2/ avoid some non-standard
name starting with __
* Some more detailed and entry-level examples may be beneficial
to the library
* Pre-made macros for classical typedefs like value_type or other std
components have been proposed
Thanks to all the participants and to Edward for this review.
_______________________________________________ | http://lists.boost.org/boost-announce/2011/08/0328.php | CC-MAIN-2014-52 | refinedweb | 281 | 60.75 |
Once an MC password has been imported by Empathy, it calls UpdateParameters({}, ('password')) to unset it, but this doesn't remove it from gnome-keyring as expected.
Looks like it's supposed to get removed in mcd-account-manager-default:_keyring_commit() but is not because the key is not in amd->secrets at this point.
I wrote a stupid script to help me debugging this.
(pasting it here because for some reason bz doesn't let me attach it)
import gobject
import dbus
import dbus.glib
import sys
BUS = 'org.freedesktop.Telepathy.AccountManager'
PATH = '/org/freedesktop/Telepathy/Account/gabble/jabber/cassidy_2dtest1_40jabber_2ebelnet_2ebe0'
IFACE = 'org.freedesktop.Telepathy.Account'
bus = dbus.SessionBus()
iface = dbus.Interface(bus.get_object(BUS, PATH), IFACE)
if sys.argv[1] == 'set':
    print iface.UpdateParameters({'password': 'XXX'}, ())
else:
    print iface.UpdateParameters({}, ('password',))
I can confirm that this issue still exists with MC 1:5.12.0-2, Empathy 3.4.2.1-1 and gnome-keyring 3.4.1-3.
When I log in, two passwords for my Collabora account show up in Seahorse:
• One, labelled “Telepathy password”, is the old, wrong one.
• The other, labelled “Instant messaging password”, is the new, right one.
The former shows up in `mc-tool show`.
Hitting Clear and unticking [ ] Remember password in Empathy removes the new, correct password from the keyring, and stops the old password from appearing in `mc-tool show`. Disabling and re-enabling the account throws up a password prompt. I enter the correct password there, and bingo! I'm online. But this only lasts for this session. If I log out and in again, MC picks the old password back up again.
I guess one fix for this bug would be to remove Gnome Keyring support from MC (bug 32578), but *something* would have to clear up the old password entries.
(In reply to comment #1)
> Looks like it's supposed to get removed in
> mcd-account-manager-default:_keyring_commit() but is not because the key is not
> in amd->secrets at this point.
Anything that's being deleted ought to be deleted from both places (keyfile *and* gnome-keyring), regardless of whether MC thinks it's secret.
Somehow I doubt it's actually that easy once you get into the bowels of MC account storage, but it ought to be possible...
Created attachment 66348 [details] [review]
Make the gnome-keyring test work again, with modern gnome-keyring
---
This is a prerequisite for having any sort of confidence that we've fixed this.
Created attachment 66359 [details] [review]
Default account backend: when deleting, always delete from both places
Our tracking of whether something is "secret" (in the gnome-keyring) is
pretty shaky, and in particular, we can forget that things are meant
to be "secret" sometimes (we lose that information when deleting
parameters).
Happily, when we're deleting things, it doesn't actually matter: the
right thing to do is clearly to delete from both locations, regardless
of where we think it ought to be.
Similarly, when we're setting a property to a new value, it's appropriate
to delete it from both locations, then put it in the right location
(only).
Created attachment 66360 [details] [review]
_keyring_commit: perform deletions for keys in removed, not in secrets
'removed' is essentially a set of (account, key) tuples that should
be deleted. What we were doing was:
foreach account in removed
    foreach key in secrets[account]
        delete (account, key)
which makes little sense - if we have param-password and
param-proxy-password and we want to unset Parameters['password'],
the current implementation would delete both. This commit changes it to:
foreach account in removed
    foreach key in removed[account]
        delete (account, key)
which has the advantage of actually making sense.
Created attachment 66361 [details] [review]
Default account backend: when deleting from the keyring, remove from secrets
Otherwise we'd just delete it, then (because it's still in secrets)
re-commit it!
Created attachment 66362 [details] [review]
Default account backend: when deleting passwords, delete the same thing we will look for
Deleting secrets with param="param-password" isn't a whole lot of use
when we save, and look up, param="password".
Created attachment 66363 [details] [review]
account-store-default: load the same names that MC would
This tool was previously not deleting "param-".
Created attachment 66364 [details] [review]
Test deletion of passwords from accounts
Created attachment 66365 [details] [review]
Default account backend: test that *all* passwords are deleted
All your patches are great!
Fixed in git for 5.13.1.
I'm leaving this open for the time being because this bug broke Empathy's gnome-keyring migration, so we should probably add a cleanup step which deletes any passwords that have a corresponding (post-migration) Empathy version.
Also, I haven't backported to 5.12 yet, but maybe we should consider it once this has had more testing.
(In reply to comment #14)
> Also, I haven't backported to 5.12 yet, but maybe we should consider it once
> this has had more testing.
Backported and released in 5.12.2.
(In reply to comment #14)
> I'm leaving this open for the time being because this bug broke Empathy's
> gnome-keyring migration, so we should probably add a cleanup step which deletes
> any passwords that have a corresponding (post-migration) Empathy version.
Still needed. I have most of a patch, but I still need to test it.
Created attachment 67272 [details] [review]
Default accounts backend: finish password migrations that Empathy 3.0 started
---
This gets rid of the fallout from this bug for people who upgraded to Empathy 3 while using MC < 5.12.2. I'd like to put it in 5.12.x. for those who prefer to review in cgit.
Comment on attachment 67272 [details] [review]
Default accounts backend: finish password migrations that Empathy 3.0 started
Review of attachment 67272 [details] [review]:
-----------------------------------------------------------------
Thanks for the patch. Looks good, although I have not tested it.
::: src/mcd-account-manager-default.c
@@ +320,5 @@
> +
> + DEBUG ("An Empathy 3.0 password migration wasn't finished "
> + "due to MC bugs. Finishing it now by deleting the "
> + "password for %s", account);
> +
Please add the fdo bug number to the debug output.
This bug is tracked as #687933 in the Debian BTS [1].
[1]
(In reply to comment #18)
> Please add the fdo bug number to the debug output.
Good point, I'll replace "MC bugs" with "fd.o #42088". I'm waiting for MC 1:5.12.1-3 to migrate into wheezy, and also for another Telepathy upstream developer to review the patch here; then I'll upload 1:5.12.1-4 to Debian.
For the record, this fd.o bug also represents Debian bug #686836, which is what caused passwords not to be deleted (actually more like three separate bugs, each of which on its own would have had those symptoms).
++
Fixed in 5.12.3 and 5.13.2. | https://bugs.freedesktop.org/show_bug.cgi?id=42088 | CC-MAIN-2016-22 | refinedweb | 1,154 | 64.2 |
Dim a LED With Raspberry Pi 3 and Python
Introduction: Dim a LED With Raspberry Pi 3 and Python
Hi ! I am going to show you how to dim a LED with Raspberry Pi.
Difficulty : EASY/BEGINNER.
Stay tuned for the next instructables!
Step 1: What Do We Need?
Hi ! I am going to show you how to dim a LED with Raspberry Pi.
Difficulty : EASY/BEGINNER.
First. You will need :
- Raspberry Pi
- 2 pcs Male/Female Connectors
- 1 pcs of a minimum of 100 OmH resistor.
- 1 pcs LED
- 1 pcs breadboard.
Step 2: Connect All Parts Together and Let's Have FUN.
Step 3: Write the Python Code.
To connect the LED with the Raspberry PI and to make it dim we need to write the software code.For this I used Python language.
import RPi.GPIO as GPIO import time GPIO.setwarnings(False) GPIO.setmode(GPIO.BOARD) GPIO.setup(7,GPIO.OUT) def dim() red_led = GPIO.PWM(7,100) red_led.start(0) pause_time = 0.010 for i in range(0,100+1): red_led.ChangeDutyCycle(i) time.sleep(pause_time) for i in range(100,-1,-1): red_led.ChangeDutyCycle(i) time.sleep(pause_time) GPIO.cleanup() dim()<br>
Great Intro to Raspberry Pi tutorial.
Thank you for your feedback! | http://www.instructables.com/id/Dim-a-LED-With-Raspberry-Pi-3-and-Python/ | CC-MAIN-2017-51 | refinedweb | 209 | 70.19 |
As more people start using Enterprise Library, a lot of people are starting to ask some great questions about the best way to manage this resource within their organizations. Even though we designed EntLib to be used on projects in large organizations, it’s probably fair to say that the installer and documentation is really focusing mainly on use on a single user’s workstation. This is something that we will look at improving in the next version, but in the meantime I know a lot of people are crying out for some advice on how to deal with issues such as strong naming, versioning and deployment. So I’ll have a go at describing some techniques that some customers are using successfully (in case you missed it, you should also check out the Enterprise Library Applied webcast where some real customers are describing their experiences). Of course, each organization is different and you may have a good reason for doing something different – but for most organizations, most of the time, hopefully this will make some sense:
Establish a centralized team that ‘owns’ and maintains Enterprise Library in your organization
Whether you have grand plans to extend and evolve the code, or whether you intend on using it pretty much as-is, it’s a good idea to establish a single centralized team that owns and maintains the codebase for your organization. This will give you organization-level control over the code and avoids the situation where many teams or developers make changes to the code resulting in inconsistent, fragmented versions of the codebase. The centralized version can, and should, still evolve, but it will be much easier to manage the code, add features and troubleshoot bugs if you keep the number of versions down to a minimum.
Strong-name and version the Enterprise Library assemblies
Since Enterprise Library is shipped as source code, it wasn’t feasible for us to strong-name the assemblies. However strong-naming (and the versioning that comes with the territory) is a really good idea, as it means you can determine the origin of the compiled code with confidence, and it adds additional runtime safety by ensuring that your applications only load code that you trust. Strong-naming is also a pre-requisite for installation into the GAC – so if you want to do this now, or think there is even a small chance you may want to do it later, then strong-naming is for you! Assuming you follow the previous tip and centrally maintain EntLib, strong-naming is easy. First, create a keypair (use: sn -k myorgskey.snk), and keep this key well guarded inside the central team. Then update the attributes in the AssemblyInfo.cs files for each project to refer to this key, or delete all of the attributes in these files and stick these in GlobalAssemblyInfo.cs (sorry, we should have left the attributes out of the AssemblyInfo.cs files in the first place to make this easier… next time…). Once you recompile, you’re strong-named! Now just make sure you have a consistent and predictable versioning policy so that your developers can understand what is going on each time you release a new version. I won’t discuss best practices for versioning here; hopefully you have some processes for versioning in your org already. If not, we can discuss this some other time!
Give your developers access to the source code and accept suggestions and improvements
So if you maintain EntLib centrally, there’s no need to give your developers access to the source code, right? Probably not. The source code is a great learning tool, and you should also encourage your developers to explore the code and come up with suggestions to improve it. So drop the source out of planes (we do!) – but make sure you have processes for accepting good changes into the official version, and try to prevent individual developers or teams to make random changes in their own versions (again, to avoid fragmentation and maintenance problems). And if you strong-name the official version, it will be easy to tell whether people are using the official version or something else.
Deploy Enterprise Library assemblies as a part of your applications
When it comes to deployment, you can choose to deploy a single shared copy of the EntLib assemblies to the GAC, or you can use private assemblies deployed with your applications. While there are pros and cons to both methods, in general I would generally recommend the latter, as it you can get fine-grained control over the assemblies loaded by each app without dealing with complex things like binding redirects and publisher policy. If you choose to do it this way, it probably doesn’t make sense to deploy Enterprise Library as a unit to your servers. Instead, just include whatever assemblies you need (which most likely won’t be all of them) in the deployment package. If you are using an MSI for deployment, this will run the EntLib assembly installers (to install the instrumentation) automatically. If not, you should run installutil over your assemblies after deployment.
Update (February 7th, 2006): Paul Linton pointed me to KB article 324519 which states that there are known issues with using strongly-named assemblies outside the GAC in ASP.NET 1.1 apps. So you should actually either strong-name AND GAC-deploy the assemblies, or leave then unsigned, private assemblies in ASP.NET 1.1 applications.
Once again, these tips aren’t for everyone – so if you have found other approaches that work well for your environment, feel free to comment and share your experiences with everyone else.
This posting is provided “AS IS” with no warranties, and confers no rights.
Deploying the assemblies with the app would be preferred, at least in my experience with sysadmins. For a while at my previous employer, the admins (both DBAs and System) had to migrate from old servers to new ones, and it is amazing the amount of cruft they had to deal with and the number of dependencies (undocumented of course) that they had to track down.
After all was said and done, though, things that used to take 2 hours to run completed in 5 minutes. And it wasn’t just from the hardware improvements!
Todd has a great blog post describing how to manage Enterprise Library in an organization. We only have 2 people on this project I lead and really need these guidelines. If you are in a larger organization or team that…
Errm … wouldn’t it be better to use the GAC? I’m retooling all of our web applications to use EL … isn’t it better to have all of them refer to the same assemblies in the GAC than for each one to have a copy … if we have 15 projects, that’s a lot of copies we shouldn’t need.
Also, I’ll probably have to extend the EL to cover our custom authentication scheme.
Of course the question I haven’t answered yet is, how do you get ASP.NET projects to reference assemblies in the GAC?
Thanks for any insights.
Hi Adam –
As I said, it depends on what you are doing, and I was generalizing. If you have many apps on the one machine, and you want to minimize the number of copies of shared components, that is what the GAC is good for. However this does sometimes complicate versioning when you want to upgrade different apps to use a new version at different times. But yes, there are definitely times when the GAC is the right way to go.
Regarding your question about ASP.NET apps referencing GAC assemblies, Visual Studio doesn’t (easily) let you reference assemblies from the GAC – you need to point to a copy of the assembly in the filesystem. However at runtime your application will look for the assembly in the GAC first, providing it can find a match with the right version/culture/public key token.
Tom
Best of the Blogs – Ian Smith reviews the best of the last week’s web development blogs so that you don’t have to. Week ending 10th April 2005. Whidby, Visual Studio Team System, Visual Studio 2003, Training, Productivity Enhancers, Front-End Development and more…
I have tried compiling on several machines, and I am unable to because two referenced components (nunit.framework and IBM.Data.DB2) can never be found (in all installations). Are these files missing from the download? And if I can’t compile, I can’t create my own strong named assembly so I am kind of at a dead end. Has anyone else had this problem?
Anoymous II:
The missing references don’t exist in the Enterprise Library package. However all is not lost – you don’t actually need these to successfully compile EntLib.
First, NUnit is only required if you are using a project configuration that includes the UNIT_TESTS directive. The default Debug and Release configurations don’t include this directive, so it should compile by default. If you do want to use the unit tests, you can download NUnit from.
Second, you only need IBM’s DB2 provider if you want to compile the EntLib DB2 plug-in and use the DAAB against DB2. If you want this you’ll need to get the DB2 provider from IBM. If you don’t want to do this, don’t compile the assembly. We don’t actually include the DB2 project in the EnterpriseLibrary.sln file, but it does exist in Data.sln.
Tom
Thanks for your reply. You are correct in that I don’t need or want NUnit or DB2. How do I compile as you suggest?
If I open src/Data.sln and try to compile in debug or release, it tells me the DB2 reference is missing, and I get a bunch of compiler errors like
"C:Program FilesMicrosoft Enterprise LibrarysrcDataDB2AssemblyInfo.cs(16): The type or namespace name ‘IBM’ could not be found (are you missing a using directive or an assembly reference?)"
How do I compile without these? I can’t create a strongly named assembly for my GAC without compiling, and I am not clear on what I need to do to get the compile to work. Thanks.
I think I got it. I just deleted that part of the solution. Then I added the SNK path to the AssemblyKeyFile() lines in the the three AssemblyInfo.cs files and compiled. I dropped all three .dll files into C:WINDOWSassembly, and the GAC took them, so I assume all is good.
Excellent post Tom and very timely for us. It’s always a challenge to manage common components but good suggestions here to follow especially around extending the libraries. Thanks!
Excellent post Tom and very timely for us. It’s always a challenge to manage common components but good suggestions here to follow especially around extending the libraries. Thanks!
I’m a newbie to the enterprise library but just discovered one little gotcha when strong naming the Configuration assembly specifically. The configuration file itself contains the full assembly descriptor of the Configuration assembly, including public key token. You must replace ‘PublicKeyToken=null’ with ‘PublicKeyToken=xxxxxxxxxxxx’ where xxxxxxx is of course the public key token of your now strongly named Configuration assembly. Failure to do so will result in a run-time assembly binding error and much confusion.
This is perhaps obvious to others but was not to me (at least not initially), so i offer it as a public service announcement to others who may hit the same issue.
What file specifically did you have to modify the PublicKeyToken=null to the key that of your component? I’m having difficulty determining which file is being referred to in the post.
I rebuilt the E/L, removing the AssemblyKeyFile lines from each AssemblyInfo.cs file and adding the key file link to the GlobalAssemblyInfo.cs file.
The project rebuilt fine (BuildLibrary.bat) and then I copied the assemblies into the bin folder (CopyAssemblies.bat).
With reference to the comment about the PublicKeyToken=null, I also found a problem here for projects I had previously configured with the E/L configuration file.
For such projects an exception is thrown by the config tool when trying to open the project. This is due to the config files referencing E/L assemblies before they were string named.
I edited the confid files for my projkects and updated any PublibKeyToken=null entries to the correct value for my E/L strong named assemblies and now they all open up fine.
(also, any new configs are made with the correct keys).
follow up to the question about my post and which file the PublicKeyToken entry is in: the configuration file i am referring to is the main application configuration file (ie. App.config for a WinForms app) plus any config files referenced by App.config (like dataConfiguration.config in my case).
The lines in App.config looks like:
<configSections>
<section name="enterpriselibrary.configurationSettings" type="Microsoft.Practices.EnterpriseLibrary.Configuration.ConfigurationManagerSectionHandler, Microsoft.Practices.EnterpriseLibrary.Configuration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0d095c9e5ddafc0d" />
</configSections>
Well I have been doing this for a while now (Using Enterprise Library Logging
and Exception… | https://blogs.msdn.microsoft.com/tomholl/2005/04/06/managing-enterprise-library-in-your-organization/ | CC-MAIN-2017-09 | refinedweb | 2,215 | 61.26 |
News
Video recording from 33rd Degree 2012
05 listopad 2012
See youtube playlist and expect new video every couple days.
Photos from 33rd Degree 2012
31 maj 2012
33rd Degree 2013
16 maj 2012
If you do not want to miss registration for 33rd Degree 2013, please sign up and we will notify you via email when registration starts.
You about 33rd Degree
26 marzec 2012
Like last year, in addition to your comments on twitter with #33degree tag, we will be collecting all blogposts (no censorship). If you wrote one, please send it to kontakt@dworld.pl. We will draw 1 free ticket for 33rd Degree 2013 among all blog posts authors.
Thanks for being part of 33rd Degree ...
Participant FAQ and links
18 marzec 2012
Links
Conference Book (to view on devices)
Conference Book (to print)
Con...
Speakers
Sessions
Twitter: From Ruby on Rails to the JVMRaffi Krikorian
Pointy haired bosses and pragmatic programmers: Facts and Fallacies of Software DevelopmentVenkat Subramaniam
Continuous Delivery Best PracticesKen Sipe
JavaFX 2.0 and Scala, Like Milk and CookiesStephen Chin
Economic Games in Software ProjectsMatthew McCullough
Workshop: GitMatthew McCullough
import continuous.delivery.*;Toomas Römer
Web SecurityKen Sipe
Smarter Testing with SpockLuke Daley
GEB - Very Groovy browser automationLuke Daley
Kotlin: A cool way to program for JVMAndrey Breslav
See More Talks >> | http://2012.33degree.org/index.html | CC-MAIN-2019-22 | refinedweb | 218 | 58.32 |
Java REPL or jshell is the new tool introduced in java 9. Today we will look into Java REPL basics and run some test programs in jshell interface.
Table of Contents
Java REPL
Let’s first try to understand why REPL support was added in Java, if it was that important then why in so late release.
As you know, Scala has become very popular to develop from small to large-scale applications because of it’s features and advantages. It supports multi-paradigm (Object-Oriented and Functional Programming) and REPL.
Oracle Corporation is trying to integrate most of Scala features into Java. They have already integrated some functional programming features as part of Java 8, such as lambda expressions.
Scala’s one of the best features is REPL (Read-Evaluate-Print-Loop). It’s a command line interface and Scala Interpreter to execute Scala programs. It’s very easy to use Scala REPL to learn basics of scala programming and even run small test code.
Because of Scala REPL and it’s benefits in reducing the learning curve and ease of running test code, Java REPL got introduced in java 9.
Java REPL – jshell
Java REPL application name is
jshell. JShell stands for Java Shell. jshell is an interactive tool to execute and evaluate java simple programs like variable declarations, statements, expressions, simple Programs etc.
Open command prompt and check java version to make sure you have java 9 or above, then only you can use jshell.
Since jshell don’t need any IDEs or extra editors to execute simple java programs, It’s very useful for beginners in core java and experts to use it to learn and evaluate new features and small test code.
Java REPL – jshell basics
We can access Java REPL by using
jshell command available as shown in below image.
Now, it’s time to execute few simple java examples to get the taste of java REPL tool.
Copypankaj:~ pankaj$ jshell | Welcome to JShell -- Version 9 | For an introduction type: /help intro jshell> jshell> System.out.println("Hello World"); Hello World jshell> String str = "Hello JournalDev Users" str ==> "Hello JournalDev Users" jshell> str str ==> "Hello JournalDev Users" jshell> System.out.println(str) Hello JournalDev Users jshell> int counter = 0 counter ==> 0 jshell> counter++ $6 ==> 0 jshell> counter counter ==> 1 jshell> counter+5 $8 ==> 6 jshell> counter counter ==> 1 jshell> counter=counter+5 counter ==> 6 jshell> counter counter ==> 6 jshell>
As shown in the above Java REPL examples, it’s very easy to develop “Hello World” program. We don’t need to define “public class” and public static void main(String[] args) method just to print one message.
NOTE: We don’t need to use “semicolons” for simple statements as shown in the above diagram.
Java REPL – execute class
We can also define and execute class methods in Java REPL shell.
Copyjshell> class Hello { ...> public static void sayHello() { ...> System.out.print("Hello"); ...> } ...> } | created class Hello jshell> Hello.sayHello() Hello jshell>
Java REPL – Help and Exit
To get jshell tool help section, use
/help command. To exit from jshell, use command
/exit.
Copyjshell> /help | Type a Java language expression, statement, or declaration. | Or type one of the following commands: | /list [<name or id>|-all|-start] | list the source you have typed | /edit <name or id> ... jshell> /exit | Goodbye pankaj:~ pankaj$
We can also use
Ctrl + D command to exit from jshell tool.
That’s all about Java REPL and jshell tool basics, read more at jshell – java shell.
Rajeev Ranjan says
how to set classpath of java 9 .
thank you
it’svery usefull
Vinay says
It’s very Nice…
Thank You.!!
Ramesh says
Cool Feature it’s lot helpful for beginners to learn concepts
vega says
Loved this tutorial. keep it up.
Anjaneya says
I would be interested in knowing how garbage collection would happen in JShell jvm processes.
Users might create huge variables and forget about them.. how would Java handle that intelligently?
Rambabu Posa says
Thanks for reading my tutorials.
Yes sure. I have already released Java SE 9 REPL Part-2. Wait for Part-3 post to answer your question.
Ram
Rachit says
Hey! When are you planning to publish part-3 for it. As I’ve the same doubt related to Garbage Collection?
Filip says
In what ways do you feel the jshell GC works differently than GC in the JVM? Jshell uses exactly the same GC algorithms as any other java application.
Balamurugan Guruswamy says
Nice Tutorial with hands-on images and well narrated steps.
Priya says
Thank you 🙂
Rakesh says
Very nice tutorial. Please postry more tutorials on java 9.
Suresh says
Mind-blowing tutorial on Java SE 9. explained very well. Please deliver some more tutorials on Java 9. I want to learn it before release. Thank you so much | https://www.journaldev.com/9879/java-repl-jshell | CC-MAIN-2019-30 | refinedweb | 801 | 66.23 |
from _ programmer? This blog post describes a way to do this, with a link to working code. The code is not very sophisticated: firstly, I am not a professional programmer and have no training in computer science; as a result, all I do is simple stuff. Secondly, I wrote the bulk of the code in a single day (ending with this blog post); I am sure it could be greatly improved upon.
However, like I say to beginners on various forums: the important thing is to first make your program do what you want it to.
Before I discuss the details of the code, I am going to make a rather long digression to explain my original motivation and what I found along the way.
Something special about Pattis's Karel the Robot
Anyone that has read more than a few posts on this blog is likely familiar with my Python implementations of Karel the Robot: a desktop version, named RUR-PLE, and a more modern and sophisticated web version, Reeborg's World. In what follows, I will give examples using code that could be readily executed in Reeborg's World. These are extremely simple ... but I give them to illustrate a very important point about what motivated me.
Reeborg is a faulty robot. It can move forward and turn left (by 90 degrees) as follows:
move() turn_left()
However, it cannot turn right directly. Nonetheless, one can define a new function to make it turn right by doing three consecutive left turns:
def turn_right(): turn_left() turn_left() turn_left()
Reeborg (like the original Karel) can make single decisions based on what it senses about its environment
if wall_in_front(): turn_left() else: move()or it can make repeated decisions in a similar way:
while not wall_in_front(): move()Using simple commands and conditions, beginners can learn the basics of programming using such an environment. Notice something important in the above code samples: no variables are used and there are no function arguments.
What if we wanted Reeborg to draw a square? If I were to use Guido van Robot, Python-like implementation, I would write the following
do 4: move turnleft
Once again, no variable nor any function arguments. However, if I wanted to do the same thing in Reeborg's World, at least up until a few days ago days ago, I would have needed to write:
for var in range(4): move() turn_left()
So much for the idea of not having variables nor function arguments .... I've always hated the "don't worry about it now" kind of statements made to students. However, the alternative is to explain a rather complicated expression (for beginners at this stage, especially young children who have never seen algebra and the idea of a variable) ... Wouldn't it be nice if
one could write instead:
repeat 4: move() turn_left()
for 4: move() turn_left()
So, I have a way to support a "repeat" keyword in Reeborg's World ... but it left me somewhat unsatisfied to have something like this not easily available elsewhere in the Python world.
Additional motivation
In one of his posts to python-ideas, Terry Jan Reddy mentioned a discussion on the idle-dev list about making idle friendlier to beginners. In one of his post, he mentioned the idea of having non-English keywords. This idea is not new. There already exists an unmaintained version with Chinese Keywords as well as a Lithuanian and Russion version. Maintaining a version based on a different language for keywords is surely not something simple ... nor I think it would be desirable. However, it might be possible to essentially achieve the same goal by using an approach I describe in the next section.
Even just adding new keywords can be quite difficult. For example, in this post, Eli Bendersky explains how one can add a new keyword to Python. "All" you ned to do is
- Modify the grammar to add the new keyword
- Modify the AST generation code; this requires a knowledge of C
- Compile the AST into bytecode
- Recompile the modified Python interpreter
Not exactly for the faint of heart...
A simpler way...
Using the unmodified standard Python interpreter, I have written some proof-of-concept code which works when importing modules that satisfy certain conditions. For example, if an imported module
contains as its first line
for __experimental__ import repeat_keyword
it will support constructs like
repeat 4: move() turn_left()
If instead, it has
from __experimental__ import function_keyword
then one will be able to write "function" instead of "lambda" (which has been identified as one of Python's warts by a few people including Raymond Hettinger.)
One can combine two transformations as follows:
from __experimental__ import repeat_keyword, function_keyword def experimental_syntax(): res = [] g = function x: x**2 repeat 3: res.append(g(2)) return res def normal_syntax(): res = [] g = lambda x: x**2 for i in range(3): res.append(g(2)) return res
and both functions will have exactly the same meaning. One caveat: this works only when a module is imported.
How does it work?
Each code transformer, such as repeat_keyword.py and function_keyword.py, contains a function named "transform_source_code"; this function takes an existing source code as input and returns a modified version of that code. These transformations can thus be chained easily, where the
output from one transformation is taken as the input from another.
The transformation happens when a module is imported using an import statement; this is an unfortunate limitation as one can not execute
python non_standard_module.pyfrom the command line and expect "non_standard_module.py" to be converted.
The magic occurs via an import hook. The code I have written uses the deprecated imp module. I have tried to figure out how to use the importlib module to accomplish the same thing ... but I failed. I would be greatful to anyone who could provide help with this.
Other potential usesIn addition to changing the syntax slightly to make it easier to (young) beginners, or to make it easier to understand to non-English speaker especially if they use a different character set, more experienced programmers might find this type of code transformation potentially useful.
When PEPs are written, they often contain small sample codes. When I read PEPs which are under discussion, I often wish I could try out to write and modify such code samples to see how they work. While not all proposed code could be made to work using the approach I have described, it might be possible in many cases. As an example, PEP 0465 -- A dedicated infix operator for matrix multiplication could almost certainly have been implemented as a proof-of-concept using the approach I used. I suspect that the existence of the appropriate code converter would often enrich the discussions about proposed changes to Python, and allow for a better evaluation of different alternatives.
A suggestion to come?...I'm not ready yet to bring another suggestion to the python-ideas list ... However ...
Python support the special "from __future__ import ..." construct to determine how it will interpret the rest of the code in that file. I think it would be useful if a similar kind of statement "from __experimental import ..." would also benefit from the same kind of special treatment so that it
would work in all instances, and not only when a module is imported. People could then share (and install via pip) special code importers and know that they would work in all situations, and not limited to special environments like Reeborg's World, or IDLE as suggested by T.J. Reddy and likely others.
However, a concrete suggestions along these lines will have to wait for another day...
2 comments:
An alternative way could be to implement a custom python codec and use them for your script. It's a bit hackier way to do some preprocessing in a "macro-style", and just requires to add something like "# encoding: mymacros" on top (or force it on your environment).
Felipe A. Hernandez: do you have a link to something like this actually working? I cannot see how it would/could work. | https://aroberge.blogspot.com/2015/10/from-experimental-import-somethingnew.html | CC-MAIN-2018-13 | refinedweb | 1,349 | 60.95 |
Leo Davidson observes that the hit-test code
HTOBJECT is defined, but it is not documented, and wonders
what's up.
#define HTOBJECT 19
The
HTOBJECT is another one of those features
that never got implemented.
The code does nothing and nobody uses it.
It was added back in Windows 95 for reasons lost to the
mists of time,
but when the reason for adding it vanished (maybe a feature got cut),
it was too late to remove it from the header file because that would
require renumbering
HTCLOSE and
HTHELP,
two values which were in widespread use already.
So the value just stayed in the header file,
taking up space but accomplishing nothing.
Why does removing it from the header file require renumbering other ones? I'm pretty sure I've seen gaps in #defines in Windows header files before.
Well then people would ask what happened to the missing numbers. For example, the window message #4, between WM_MOVE (0x0003) and WM_SIZE (0x0005). What happened to that? Was it a message that got cut before it was in a public header file?
I agree with John. If it is in the header as a #define then there should be no downside to removing it, especially since, as Ray says, the functionality was never released and never implemented.

Now, if it had been part of an enumerated data type declaration (e.g. typedef enum someType_e {…} someType; ) where it would change the values of members following it, I could see his point.
Personally, I believe the people who ask "what does this do?" outnumber those who ask "why the gap?" in these sorts of things.
@John somewhere, someone is probably using it for something completely unrelated.
#define TRUE HTOBJECT-19
Personally, I'd have removed it and not renumbered the other messages.
You could still remove it and make the values of each enumerant explicit.
@The MAZZTer That explains why it can't be removed _now_, not why it couldn't be removed at the stage of the Windows 95 SDK release that the post is talking about.
@The MAZZTer && Random832
It can be safely removed even with that example – I am pretty sure the compiler made that #define equal zero in the assembler. Remember, Microsoft is in business of backward compatibility from the compiled code point of view, not the source code.
You would be wrong there. Microsoft tends to retain source code compatibility as well. Breaking a build with an SDK update counts as a regression.
The MAZZTer makes a valid point about why you cannot remove the #define after it has shipped. I still don't see why it wasn't possible to remove the #define prior to a public release though.
@Danny: No. #define is for the preprocessor, and does a pretty simple replacement in this type of statement. It's a literal "replace HTOBJECT with the value 19" operation, and the assembler would never see it – it would see a literal 19 in places HTOBJECT had been, or absolutely nothing (including no zero) if HTOBJECT wasn't used anywhere, as there would have been nothing to replace.
@Danny: #define's are only automatically made 0 for the purposes of #if tests — if you say "#if SOME_MACRO" (or any expression involving SOME_MACRO) when SOME_MACRO is not defined, it is automatically given a value of 0 for that test. Stuff like "#define TRUE (HTOBJECT-19)" would still expand to code that gives errors about undefined identifiers or somesuch.
@Maurits: "enumerant". Great word, I love it. I'll slip it into casual conversation if I can!
This line could have been simply commented out, with an explanation.
Now, assuming that HTOBJECT was never used (for its intended purpose), could some future HTxxxxx constant use value 19 ?
@KenW & AdamR: You're missing @Danny's point. If you were to use #define TRUE HTOBJECT-19 in your code, then the compiler will treat that essentially as if you had written #define TRUE 0. The compiled code will continue to run regardless of whether are not HTOBJECT is removed (or renumbered) in future editions of the OS. It would only become a problem when you tried to recompile the code.
This is the opposite effect of renumbering the HTxxxxs, which would cause existed compiled code to stop running.
@Raymond: There is a school of thinking that says in this case a build break is a good thing. If the intended purpose was removed, then code that relied on that intended purpose should be updated to account for the missing functionality from the OS. A build break points this out to the developer that "Hey, that's something I need to look at and address"
And let's be honest – if the feature is removed from the OS, then any executable that relied on the function is broken, regardless of whether the SDK allows it to build or not.
But then again you originally said it was never implemented, and never used, so then why would any builds break in the first place?
Documenting it as "Not Used" would have been nice. The obvious way of going about implementing WM_HITTEST & WM_NCPAINT is going through the constant tables and making sure all values are handled. If this one can't occur it would have been nice to know that.
I think the point is, why risk breaking something that already works? Sure, it *shouldn't* break anything, but you never know with a system as big and complex as Windows. And there may be other code involved that wasn't mentioned here which, although it does nothing, might break if the define was removed… and then you'd have to make more changes… and then those changes have risks & consequences, too… and you can see where this leads.
Thanks for the answer!
For all the people arguing that it could/should have been removed, what's the harm in having it there? That it might cause idle curiosity or speculation isn't much of a reason to remove it.
And as Adam Rosenfield mentioned, the hole left in the numbering scheme would still leave cause for idle curiosity and speculation, so there's even less of an argument that than you might have thought at first.
A better reason might be that it's a needless intrusion on the global namespace, but even that rationale is pretty weak.
@James Curran: Never reuse number that has been assigned in previously released SDK of a product version that's still in use, even if you THINK noone is using it.
… full of sound and fury, signifying nothing.
@Ken, Adam and rest who didn't read my previous comment with enough attention. I was talking about compiled code, not preprocessor code, not source code, not interpreted code, not future to exist code that is different then the one currently is written inside a .exe. Raymond and it's entire department of compatibility is also in business of backward compatibility at that level as well, I doubt he cares if the app who used undocumented features who has to support in the next OS version really care in what programming language was written – he gets a compiled code to which start to attach debugger and debug the assembler that is in the memory. And before windows 95 any compiler would optimize that DEFINE to zero, so none of those applications would care if that define will get removed in current version of the Windows, will still work. That was my point, next time read more careful.
It could be reused on 64-bit windows. Simple ifdefs would do.
#define HTOBJECT_UNIMPLEMENTED_RESERVED 19 | https://blogs.msdn.microsoft.com/oldnewthing/20120711-00/?p=7153/ | CC-MAIN-2017-09 | refinedweb | 1,277 | 61.06 |
How to: Search for Objects, Definitions, and References (Symbols)
You can use Object Browser, Navigate To, Find Symbol, or Go to Definition to search for objects, definitions, or references (symbols) in a solution.
In the Object Browser, you can type a search string to filter the names of the symbols that are displayed in the objects pane for the current browsing scope. For example, the string MyObject would return "MyObject," "MyObjectTest" and "CMyObject."
You can use the Navigate To feature to search for a symbol or file in the source code.
To perform a more advanced search, you can use the Find Symbol tab of the Find and Replace window. Results are displayed in the Find Symbol Results window. For more information, see Find Symbol, Find and Replace Window and Find Symbol Results Window.
Note
Right-clicking a symbol in the objects pane of the Object Browser and then clicking Find All References also displays results in the Find Symbol Results window.
You can also search for the original definition of a code element by right-clicking an instance of the element in the editor and then clicking Go To Definition.
Searching for Symbols By Using Object Browser
When you type a search string in the Search box in Object Browser, just the current browsing scope is searched. Use the Browse list to select a browsing scope. For more information about how to scope searches, see How to: Modify the Browsing Scope in the Object Browser.
To search for symbols by using a search string in Object Browser
In Object Browser, in the Browse list, select a browsing scope.
In the Search box, type all or part of a symbol name to search for, or select one from the drop-down list.
Note
Searches are not case-sensitive.
Click Search.
The objects pane displays just those symbol names in the browsing scope that include the search string. The string is highlighted in every match.
To clear the results
In Object Browser, click the Clear Search button on the toolbar.
The objects pane now displays all the objects in the current browsing scope.
Searching for Symbols By Using Navigate To
Navigate To lets you find and navigate to a specific location in the solution, or explore elements in the solution. It helps you pick a good set of matching results from a query.
To search for symbols or files in Navigate To
On the Edit menu, click Navigate To.
In the upper box, type a search string.
Notice that results are displayed in the lower box as you type, and change as you type more. For example, if you type Click, then all symbol names that contain "Click" are displayed; however, if you add a space and the letter E, then the list is filtered to display just symbols that include "Click" and "E" (the space is treated as an and relationship).
The search results may include symbol definitions and file names in the solution, but does not include namespaces or local variables.
A search string can have multiple search terms, which must be separated by spaces. If a search term has an uppercase letter, the search for that term is case-sensitive; otherwise, the search is case-insensitive. File names are always case-insensitive for the first characters of the file name.
You can search for keywords that are contained in a symbol by using Camel casing and underscore characters to divide the symbol into keywords. For example, to find an "AddOrderHeader" symbol, you could search for "add", "order", "header", "order add", "AOH", and other combinations, as shown in the following table.
The Navigate To syntax does not support special logic or special characters such as these:
Wildcard matching
Boolean logic operators, including and, or, &, |
Regular expressions
You can double-click a result to view the definition in the code.
To make an enhanced query, use Find Symbol.
Searching for Symbols By Using Find Symbol
Use the Find Symbol tab of the Find and Replace window to locate lines in your project code where a symbol is defined, referenced, or used. In particular, Find Symbol is useful as follows:
Searching for the definition of a symbol or discovering multiple definitions of a symbol.
Searching for references to a symbol and instances where it is used in your code.
Exploring object hierarchies in referenced and external components so that you can learn about their functionality.
Using Find Symbol differs from finding text, as follows:
Find Symbol lets you limit the search scope to include only symbols.
Indicates any available definitions of the symbol and references to it in the Find Symbol Results window.
Lets you search in external components for which you do not have the source code.
To search for symbols by using a search string on the Find Symbol tab of the Find and Replace window
If any files that are to be searched are stored under source code control, check them out.
Switch any open documents to Source view.
On the Edit menu, click Find and Replace and then click Find Symbol.
In the Look in list, select one of the following search scopes.
All Components scopes the search to all available components, including the current solution, its referenced components, the .NET Framework class library, and any components that you have added by using Add Other Components.
.NET Framework scopes the search to just the .NET Framework class library.
My Solution scopes the search to just the symbol names that are defined or referenced in the open solution.
Custom Component Set (Object Browser) scopes the search to just the components that are currently added to the Custom Component Set search scope of the Object Browser.
Custom Component Set (Find Symbol) scopes the search to just the components that are currently added to the Custom Component Set search scope Find Symbol.
Click the Browse (...) button to display the Edit Custom Component Set Dialog Box, and then select a custom set of search folders. For more information, see How to: Modify the Browsing Scope in the Object Browser.
Under Find options, set the options you want, as follows:
Whole word filters the results so that only those that contain just the search string are displayed. For example, a search for MyObject would return "MyObject" but not "CMyObject" or "MyObjectC."
Prefix filters the results so that only those that begin with the search string are displayed. For example, a search for MyObject would return "MyObject" and "MyObjectTest" but not "CMyObject."
Substring returns all results that contain the search string. For example, a search for MyObject would return "MyObject", "MyObjectTest", and "CMyObject."
Together with any of these options, you can also select Match case. This option further filters the results so that only those that match the case of the search string are displayed. For example, a search for MyObject when Match case is selected would return "MyObject" but not "myobject" or "MYOBJECT".
Note
Find and Replace settings remain in effect from search to search. For more information, see Find Symbol, Find and Replace Window.
In the Find what box, enter the string or expression that you want to find.
Note
Wildcards and regular expressions cannot be used in Find Symbol searches.
Click Find All to begin the search.
The results are displayed in the Find Symbol Results Window. You can double-click a result to jump to its match in the code.
Navigating to a Definition
In the editor, when you want to search for the original definition of a code element, right-click the element and then click Go to Definition. You can search on elements such as members, types, variables, locals, and so forth.
Go to Definition uses compiler information to locate and display the original definition, even if your code uses overloads or type inferences, provided that the location is available to Visual Studio.
To search for the definition of a code element
In the editor, right-click the code element for which you want to find the definition, and then click Go To Definition.
The module in which the element is originally defined, if it is available, is displayed on a new tab in the editor. If the definition is not available, an error message is displayed.
See Also
Tasks
How to: Navigate in the Object Browser
Reference
Find Symbol Results Window
Other Resources
How to: Use Reference Highlighting | https://docs.microsoft.com/en-us/previous-versions/4sadchd3(v%3Dvs.110) | CC-MAIN-2019-26 | refinedweb | 1,392 | 61.77 |
SP 1.1 introduced an extremely valuable new capability: the ability to define your own JSP tags. You define how the tag, its attributes, and its body are interpreted, then group your tags into collections called tag libraries that can be used in any number of JSP files. The ability to define tag libraries in this way permits Java developers to boil down complex server-side behaviors into simple and easy-to-use elements that content developers can easily incorporate into their JSP pages.
Custom tags accomplish some of the same goals as beans that are accessed with jsp:useBean (see Chapter 13, "Using JavaBeans with JSP")encapsulating complex behaviors into simple and accessible forms. There are several differences, however. First, beans cannot manipulate JSP content; custom tags can. Second, complex operations can be reduced to a significantly simpler form with custom tags than with beans. Third, custom tags require quite a bit more work to set up than do beans. Fourth, beans are often defined in one servlet and then used in a different servlet or JSP page (see Chapter 15, "Integrating Servlets and JSP"), whereas custom tags usually define more self-contained behavior. Finally, custom tags are available only in JSP 1.1, but beans can be used in both JSP 1.0 and 1.1.
At the time this book went to press, no official release of Tomcat 3.0 properly supported custom tags, so the examples in this chapter use the beta version of Tomcat 3.1. Other than the support for custom tags and a few efficiency improvements and minor bug fixes, there is little difference in the behavior of the two versions. However, Tomcat 3.1 uses a slightly different directory structure, as summarized Table 14.1.
Table 14.1 Standard Tomcat Directories
14.1 The Components That Make Up a Tag Library rest of this section gives an overview of each of these components and the following sections give details on how to build these components for various different styles of tags.
The Tag Handler Class
When defining a new tag, your first task is to define a Java class that tells the system what to do when it sees the tag. This class must implement the javax.servlet.jsp.tagext.Tag interface. This is usually accomplished by extending the TagSupport or BodyTagSupport class. Listing 14.1 is an example of a simple tag that just inserts "Custom tag example (coreservlets.tags.ExampleTag)" into the JSP page wherever the corresponding tag is used. Don't worry about understanding the exact behavior of this class; that will be made clear in the next section. For now, just note that it is in the coreservlets.tags class and is called ExampleTag. Thus, with Tomcat 3.1, the class file would be in install_dir/webapps/ROOT/WEB-INF/classes/coreservlets/tags/ExampleTag.class.
Listing 14.1 ExampleTag.java
package coreservlets.tags; import javax.servlet.jsp.*; import javax.servlet.jsp.tagext.*; import java.io.*; /** Very simple JSP tag that just inserts a string * ("Custom tag example...") into the output. * The actual name of the tag is not defined here; * that is given by the Tag Library Descriptor (TLD) * file that is referenced by the taglib directive * in the JSP file. */ public class ExampleTag extends TagSupport { public int doStartTag() { try { JspWriter out = pageContext.getOut(); out.print("Custom tag example " + "(coreservlets.tags.ExampleTag)"); } catch(IOException ioe) { System.out.println("Error in ExampleTag: " + ioe); } return(SKIP_BODY); } }
The Tag Library Descriptor File
Once you have defined a tag handler, your next task is to identify the class to the server and to associate it with a particular XML tag name. This task is accomplished by means of a tag library descriptor file (in XML format) like the one shown in Listing 14.2. This file contains some fixed information, an arbitrary short name for your library, a short description, and a series of tag descriptions. The nonbold part of the listing is the same in virtually all tag library descriptors and can be copied verbatim from the source code archive at or from the Tomcat 3.1 standard examples (install_dir/webapps/examples/WEB-INF/jsp).
The format of tag descriptions will be described in later sections. For now, just note that the tag element defines the main name of the tag (really tag suffix, as will be seen shortly) and identifies the class that handles the tag. Since the tag handler class is in the coreservlets.tags package, the fully qualified class name of coreservlets.tags.ExampleTag is used. Note that this is a class name, not a URL or relative path name. The class can be installed anywhere on the server that beans or other supporting classes can be put. With Tomcat 3.1, the standard base location is install_dir/webapps/ROOT/WEB-INF/classes, so ExampleTag would be in install_dir/webapps/ROOT/WEB-INF/classes/coreservlets/tags. Although it is always a good idea to put your servlet classes in packages, a surprising feature of Tomcat 3.1 is that tag handlers are required to be in packages.
Listing 14.2 csajsp-taglib.tld
<>csajsp</shortname> <urn></urn> <info> A tag library from Core Servlets and JavaServer Pages,. </info> <tag> <name>example</name> <tagclass>coreservlets.tags.ExampleTag</tagclass> <info>Simplest example: inserts one line of output</info> <bodycontent>EMPTY</bodycontent> </tag> <!-- Other tags defined later... --> </taglib>
The JSP File
Once you have a tag handler implementation and a tag library description, you are ready to write a JSP file that makes use of the tag. Listing 14.3 gives an example. Somewhere before the first use of your tag, you need to use the taglib directive. This directive has the following form:
<%@ taglib uri="..." prefix="..." %>
The required uri attribute can be either an absolute or relative URL referring to a tag library descriptor file like the one shown in Listing 14.2. To complicate matters a little, however, Tomcat 3.1 uses a web.xml file that maps an absolute URL for a tag library descriptor to a file on the local system. I don't recommend that you use this approach, but you should be aware of it in case you look at the Apache examples and wonder why it works when they specify a nonexistent URL for the uri attribute of the taglib directive.
The prefix attribute, also required, specifies a prefix that will be used in front of whatever tag name the tag library descriptor defined. For example, if the TLD file defines a tag named tag1 and the prefix attribute has a value of test, the actual tag name would be test:tag1. This tag could be used in either of the following two ways, depending on whether it is defined to be a container that makes use of the tag body:
<test:tag1> Arbitrary JSP </test:tag1>
or just
<test:tag1 />
To illustrate, the descriptor file of Listing 14.2 is called csajsp-taglib.tld, and resides in the same directory as the JSP file shown in Listing 14.3. Thus, the taglib directive in the JSP file uses a simple relative URL giving just the filename, as shown below.
<%@ taglib uri="csajsp-taglib.tld" prefix="csajsp" %>
Furthermore, since the prefix attribute is csajsp (for Core Servlets and JavaServer Pages), the rest of the JSP page uses csajsp:example to refer to the example tag defined in the descriptor file. Figure 141 shows the result.
Listing 14.3 SimpleExample.jsp
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <%@ taglib </HEAD> <BODY> <H1><csajsp:example /></H1> <csajsp:example /> </BODY> </HTML>
Figure 141 Result of SimpleExample.jsp. | http://www.informit.com/articles/article.aspx?p=26119&seqNum=6 | CC-MAIN-2019-47 | refinedweb | 1,275 | 56.05 |
At this point, you're ready to take a look at all the defines, macros, data structures, and functions that you've created throughout the book. Moreover, I've put them all into a single pair of files: T3DLIB1.CPP|H. You can link these into your programs and use everything you've learned so far without hunting down all the code from the dozens of programs you've written.
In addition to all the stuff you've already written, I've created a few more 2D sprite functions to help facilitate 2D game programming. Actually, I used this stuff to create some of the demos at the end of the chapter, so you get to see that code clearly defined. Anyway, I'm not going to take too much time explaining everything here, but I'll give you enough to help you figure it out. Let's take a look at each code element one by one.
Thus far you have a fairly simple 2D engine going, as shown in Figure 8.54. Basically, the engine along with the additions I have made is now a 2D 8/16-bit color back buffered DirectX engine that has support for any resolution, along with clipping to the primary display surface, and has the ability to transparently operate in windowed mode.
To build an application using the library, you'll need to include T3DLIB1.CPP|H (from the CD) along with DDRAW.LIB (the DirectDraw library), WINMM.LIB (the Win32 multimedia library), and a main game program such as T3DCONSOLE2.CPP, which is based on the simpler T3DCONSOLE.CPP (you made this earlier in the book). Then you're ready to go.
Of course, you wrote a lot of code, and you're free to modify, use, abuse all this stuff, or even burn it if you like. I just thought you might like it all explained and put together in a couple of easy-to-use files.
Before we cover all the functionality and so forth of the engine, take a quick look at the latest incarnation of the console, version 2.0, which now has hooks and support for 8- or 16-bit mode, windowed or full screen. Basically, by changing a few #defines at the top of the file you can select 8- or 16-bit mode and a full-screen or windowed display. It's really cool! Here it is for your review:
// T3DCONSOLE2.CPP -
// Use this as a template for your applications if you wish
// you may want to change things like the resolution of the
// application, if it's windowed, the directinput devices
// that are acquired and so forth...
// currently the app creates a 640x480x16 windowed display
// hence, you must be in 16 bit color before running the application
// if you want fullscreen mode then simply change the WINDOWED_APP
// value in the #defines below to FALSE (0). Similarly, if
// you want another bitdepth, maybe 8-bit for 256 colors then
// change that in the call to DDraw_Init() in the function
// Game_Init() within this file.
// READ THIS!
// To compile make sure to include DDRAW.LIB, DSOUND.LIB,
// DINPUT.LIB, WINMM.LIB in the project link list, and of course
// the C++ source modules T3DLIB1.CPP,T3DLIB2.CPP, and T3DLIB3.CPP
// and the headers T3DLIB1.H,T3DLIB2.H, and T3DLIB3.H must
// be in the working directory of the compiler
// INCLUDES ///////////////////////////////////////////////
#define INITGUID // make sure all the COM interfaces are available
// instead of this you can include the .LIB file
// DXGUID.LIB
#include <dsound.h>
#include <dmksctrl.h>
#include <dmusici.h>
#include <dmusicc.h>
#include <dmusicf.h>
#include <dinput.h>
#include "T3DLIB1.h" // game library includes
#include "T3DLIB2.h"
#include "T3DLIB3.h"
// DEFINES ////////////////////////////////////////////////
// defines for windows interface
#define WINDOW_CLASS_NAME "WIN3DCLASS" // class name
#define WINDOW_TITLE "T3D Graphics Console Ver 2.0"
// GLOBALS ////////////////////////////////////////////////
WNDCLASSEX winclass; // this will hold the class we create
HWND hwnd; // generic window handle
MSG msg; // generic message
HDC hdc; // graphics device context
// first fill in the window class stucture
winclass.cbSize = sizeof(WNDCLASSEX);
winclass.style = CS_DBLCLKS | CS_OWNDC | CS_HREDRAW | CS_VREDRAW;
winclass.lpfnWndProc = WindowProc;
winclass.cbClsExtra = 0;
winclass.cbWndExtra = 0;
winclass.hInstance = hinstance;
winclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
winclass.hCursor = LoadCursor(NULL, IDC_ARROW);
winclass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
winclass.lpszMenuName = NULL;
winclass.lpszClassName = WINDOW_CLASS_NAME;
winclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
// save hinstance in global
main_instance = hinstance;
// register the window class
if (!RegisterClassEx(&winclass))
return(0);
// create the window
if (!(hwnd = CreateWindowEx(NULL,// extended style
WINDOW_CLASS_NAME, // class
WINDOW_TITLE, // title
(WINDOWED_APP ? (WS_OVERLAPPED | WS_SYSMENU | WS_VISIBLE) :
(WS_POPUP | WS_VISIBLE)),
0,0, // initial x,y
WINDOW_WIDTH,WINDOW_HEIGHT, // initial width, height
NULL, // handle to parent
NULL, // handle to menu
hinstance,// instance of this application
NULL))) // extra creation parms
return(0);
// save the window handle and instance in a global
main_window_handle = hwnd;
main_instance = hinstance;
// resize the window so that client is really width x height
if (WINDOWED_APP)
{
// now resize the window, so the client area is the
// actual size requested since there may be borders
// and controls if this is going to be a windowed app
// if the app is not windowed then it won't matter
RECT window_rect = {0,0,WINDOW_WIDTH,WINDOW_HEIGHT};
// make the call to adjust window_rect
AdjustWindowRectEx(&window_rect,
GetWindowStyle(main_window_handle),
GetMenu(main_window_handle) != NULL,
GetWindowExStyle(main_window_handle));
// save the global client offsets, they are needed in DDraw_Flip()
window_client_x0 = -window_rect.left;
window_client_y0 = -window_rect.top;
// now resize the window with a call to MoveWindow()
MoveWindow(main_window_handle,
CW_USEDEFAULT, // x position
CW_USEDEFAULT, // y position
window_rect.right - window_rect.left, // width
window_rect.bottom - window_rect.top, // height
TRUE);
// show the window, so there's no garbage on first render
ShowWindow(main_window_handle, SW_SHOW);
} // end if windowed
//
// T3D II GAME PROGRAMMING CONSOLE FUNCTIONS ////////////////
int Game_Init(void *parms)
{
// this function is where you do all the initialization
// for your game
// start up DirectDraw (replace the parms as you desire)
DDraw_Init(WINDOW_WIDTH, WINDOW_HEIGHT, WINDOW_BPP, WINDOWED_APP);
// initialize directinput
DInput_Init();
// acquire the keyboard
DInput_Init_Keyboard();
// add calls to acquire other directinput devices here...
// initialize directsound and directmusic
DSound_Init();
DMusic_Init();
// hide the mouse
ShowCursor(FALSE);
// seed random number generator
srand(Start_Clock());
// all your initialization code goes here...
// return success
return(1);
} // end Game_Init
///////////////////////////////////////////////////////////
int Game_Shutdown(void *parms)
{
// this function is where you shutdown your game and
// release all resources that you allocated
// shut everything down
// release all your resources created for the game here....
// now directsound
DSound_Stop_All_Sounds();
DSound_Delete_All_Sounds();
DSound_Shutdown();
// directmusic
DMusic_Delete_All_MIDI();
DMusic_Shutdown();
// shut down directinput
DInput_Release_Keyboard();
DInput_Shutdown();
// shutdown directdraw last
DDraw_Shutdown();
// return success
return(1);
} // end Game_Shutdown
//////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////
Basically, by controlling these #defines:

#define WINDOW_WIDTH      640   // size of the window/screen
#define WINDOW_HEIGHT     480
#define WINDOW_BPP        16    // bitdepth of window (8 or 16)
#define WINDOWED_APP      1     // 0 full screen, 1 windowed

you can select the screen resolution (for full screen modes), the bitdepth (for full screen modes), and if the display is windowed (windowed displays use the current desktop color depth and resolution, so all you control is the window size).
The engine has one header file, T3DLIB1.H, and within it are a number of #defines that the engine uses. Here they are for your reference:
// DEFINES ////////////////////////////////////////////////
// default screen values, these are all overriden by the
// call to DDraw_Init() and are just here to have something
// to set the globals to instead of constant values
#define SCREEN_WIDTH 640 // size of screen
#define SCREEN_HEIGHT 480
#define SCREEN_BPP 8 // bits per pixel
#define MAX_COLORS_PALETTE 256
#define DEFAULT_PALETTE_FILE "PALDATA2.PAL"
// used for selecting full screen/windowed mode
#define SCREEN_FULLSCREEN 0
#define SCREEN_WINDOWED 1
// bitmap defines
#define BITMAP_ID 0x4D42 // universal id for a bitmap
#define BITMAP_STATE_DEAD 0
#define BITMAP_STATE_ALIVE 1
#define BITMAP_STATE_DYING 2
#define BITMAP_ATTR_LOADED 128
#define BITMAP_EXTRACT_MODE_CELL 0
#define BITMAP_EXTRACT_MODE_ABS 1
// directdraw pixel format defines, used to help
// bitmap loader put data in proper format
#define DD_PIXEL_FORMAT8 8
#define DD_PIXEL_FORMAT555 15
#define DD_PIXEL_FORMAT565 16
#define DD_PIXEL_FORMAT888 24
#define DD_PIXEL_FORMATALPHA888 32
// defines for BOBs
#define BOB_STATE_DEAD 0 // this is a dead bob
#define BOB_STATE_ALIVE 1 // this is a live bob
#define BOB_STATE_DYING 2 // this bob is dying
#define BOB_STATE_ANIM_DONE 1 // done animation state
#define MAX_BOB_FRAMES 64 // maximum number of bob frames
#define MAX_BOB_ANIMATIONS 16 // maximum number of animation sequeces
#define BOB_ATTR_SINGLE_FRAME 1 // bob has single frame
#define BOB_ATTR_MULTI_FRAME 2 // bob has multiple frames
#define BOB_ATTR_MULTI_ANIM 4 // bob has multiple animations
#define BOB_ATTR_ANIM_ONE_SHOT 8 // bob will perform the animation once
#define BOB_ATTR_VISIBLE 16 // bob is visible
#define BOB_ATTR_BOUNCE 32 // bob bounces off edges
#define BOB_ATTR_WRAPAROUND 64 // bob wraps around edges
#define BOB_ATTR_LOADED 128 // the bob has been loaded
#define BOB_ATTR_CLONE 256 // the bob is a clone
// screen transition commands
#define SCREEN_DARKNESS 0 // fade to black
#define SCREEN_WHITENESS 1 // fade to white
#define SCREEN_SWIPE_X 2 // do a horizontal swipe
#define SCREEN_SWIPE_Y 3 // do a vertical swipe
#define SCREEN_DISOLVE 4 // a pixel disolve
#define SCREEN_SCRUNCH 5 // a square compression
#define SCREEN_BLUENESS 6 // fade to blue
#define SCREEN_REDNESS 7 // fade to red
#define SCREEN_GREENNESS 8 // fade to green
// defines for Blink_Colors
#define BLINKER_ADD 0 // add a light to database
#define BLINKER_DELETE 1 // delete a light from database
#define BLINKER_UPDATE 2 // update a light
#define BLINKER_RUN 3 // run normal
// pi defines
#define PI ((float)3.141592654f)
#define PI2 ((float)6.283185307f)
#define PI_DIV_2 ((float)1.570796327f)
#define PI_DIV_4 ((float)0.785398163f)
#define PI_INV ((float)0.318309886f)
// fixed point mathematics constants
#define FIXP16_SHIFT 16
#define FIXP16_MAG 65536
#define FIXP16_DP_MASK 0x0000ffff
#define FIXP16_WP_MASK 0xffff0000
#define FIXP16_ROUND_UP 0x00008000
You've seen all of these in one place or another.
Next are all the macros you've written thus far. Again, you've seen them all in one place or another, but here they are all at once:
// these read the keyboard asynchronously
#define KEY_DOWN(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 1 : 0)
#define KEY_UP(vk_code) ((GetAsyncKeyState(vk_code) & 0x8000) ? 0 : 1)
// this builds a 16 bit color value in 5.5.5 format (1-bit alpha mode)
#define _RGB16BIT555(r,g,b) ((b & 31) + ((g & 31) << 5) + ((r & 31) << 10))
// this builds a 16 bit color value in 5.6.5 format (green dominant mode)
#define _RGB16BIT565(r,g,b) ((b & 31) + ((g & 63) << 5) + ((r & 31) << 11))
// this builds a 24 bit color value in 8.8.8 format
#define _RGB24BIT(a,r,g,b) ((b) + ((g) << 8) + ((r) << 16) )
// this builds a 32 bit color value in A.8.8.8 format (8-bit alpha mode)
#define _RGB32BIT(a,r,g,b) ((b) + ((g) << 8) + ((r) << 16) + ((a) << 24))
// bit manipulation macros
#define SET_BIT(word,bit_flag) ((word)=((word) | (bit_flag)))
#define RESET_BIT(word,bit_flag) ((word)=((word) & (~bit_flag)))
// initializes a direct draw struct
// basically zeros it and sets the dwSize field
#define DDRAW_INIT_STRUCT(ddstruct) {memset(&ddstruct,0,sizeof(ddstruct)); \
                                     ddstruct.dwSize=sizeof(ddstruct); }
// used to compute the min and max of two expresions
#define MIN(a, b) (((a) < (b)) ? (a) : (b))
#define MAX(a, b) (((a) > (b)) ? (a) : (b))
// used for swapping algorithm
#define SWAP(a,b,t) {t=a; a=b; b=t;}
// some math macros
#define DEG_TO_RAD(ang) ((ang)*PI/180.0)
#define RAD_TO_DEG(rads) ((rads)*180.0/PI)
#define RAND_RANGE(x,y) ( (x) + (rand()%((y)-(x)+1)))
The next set of code elements includes the types and data structures that the engine uses. I'm going to list them all, but be warned that there are a couple you haven't seen yet that have to do with the Blitter Object Engine (BOB). To be consistent, let's take a look at everything at once:
// basic unsigned types
typedef unsigned short USHORT;
typedef unsigned short WORD;
typedef unsigned char UCHAR;
typedef unsigned char BYTE;
typedef unsigned int QUAD;
typedef unsigned int UINT;
// container structure for .BMP bitmap files
// the blitter object structure BOB
typedef struct BOB_TYP
{
int state; // the state of the object (general)
int anim_state; // an animation state variable, up to you
int attr; // attributes pertaining
// to the object (general)
float x,y; // position bitmap will be displayed at
float xv,yv; // velocity of object
int width, height; // the width and height of the bob
int width_fill; // internal, used to force 8*x wide surfaces
int counter_1; // general counters
int counter_2;
int max_count_1; // general threshold values;
int max_count_2;
int varsI[16]; // stack of 16 integers
float varsF[16]; // stack of 16 floats
int curr_frame; // current animation frame
int num_frames; // total number of animation frames
int curr_animation; // index of current animation
int anim_counter; // used to time animation transitions
int anim_index; // animation element index
int anim_count_max; // number of cycles before animation
int *animations[MAX_BOB_ANIMATIONS]; // animation sequences
LPDIRECTDRAWSURFACE7 images[MAX_BOB_FRAMES]; // the bitmap images
// DD surfaces
} BOB, *BOB_PTR;
// the simple bitmap image
typedef struct BITMAP_IMAGE_TYP
{
int state; // state of bitmap
int attr; // attributes of bitmap
int x,y; // position of bitmap
int width, height; // size of bitmap
int num_bytes; // total bytes of bitmap
UCHAR *buffer; // pixels of bitmap
} BITMAP_IMAGE, *BITMAP_IMAGE_PTR;
// blinking light structure
typedef struct BLINKER_TYP
{
// user sets these
int color_index; // index of color to blink
PALETTEENTRY on_color; // RGB value of "on" color
PALETTEENTRY off_color;// RGB value of "off" color
int on_time; // number of frames to keep "on"
int off_time; // number of frames to keep "off"
// internal member
int counter; // counter for state transitions
int state; // state of light, -1 off, 1 on, 0 dead
} BLINKER, *BLINKER_PTR;
// a 2D vertex
typedef struct VERTEX2DI_TYP
{
int x,y; // the vertex
} VERTEX2DI, *VERTEX2DI_PTR;
// 3x3 matrix /////////////////////////////////////////////
typedef struct MATRIX3X3_TYP
{
union
{
float M[3][3]; // array indexed data storage
// storage in row major form with explicit names
struct
{
float M00, M01, M02;
float M10, M11, M12;
float M20, M21, M22;
}; // end explicit names
}; // end union
} MATRIX3X3, *MATRIX3X3_PTR;
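The union trick above lets you address the same storage either as M[row][col] or by explicit name (M00, M11, and so on). Note that anonymous structs inside unions need C11 or a compiler extension, which MSVC of the era supported. A minimal sketch:

```c
typedef struct MATRIX3X3_TYP
{
    union
    {
        float M[3][3]; // array indexed data storage
        struct         // row-major storage with explicit names
        {
            float M00, M01, M02;
            float M10, M11, M12;
            float M20, M21, M22;
        };
    };
} MATRIX3X3;

// build a 3x3 identity via the named members; callers can read it
// back through the M[][] array because both alias the same storage
MATRIX3X3 identity3x3(void)
{
    MATRIX3X3 m = {0};
    m.M00 = 1.0f; m.M11 = 1.0f; m.M22 = 1.0f;
    return m;
}
```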
// 1x3 matrix /////////////////////////////////////////////
typedef struct MATRIX1X3_TYP
{
union
{
float M[3]; // array indexed data storage
// storage in row major form with explicit names
struct
{
float M00, M01, M02;
}; // end explicit names
}; // end union
} MATRIX1X3, *MATRIX1X3_PTR;
// 3x2 matrix /////////////////////////////////////////////
typedef struct MATRIX3X2_TYP
{
union
{
float M[3][2]; // array indexed data storage
// storage in row major form with explicit names
struct
{
float M00, M01;
float M10, M11;
float M20, M21;
}; // end explicit names
}; // end union
} MATRIX3X2, *MATRIX3X2_PTR;
// 1x2 matrix /////////////////////////////////////////////
typedef struct MATRIX1X2_TYP
{
union
{
float M[2]; // array indexed data storage
// storage in row major form with explicit names
struct
{
float M00, M01;
}; // end explicit names
}; // end union
} MATRIX1X2, *MATRIX1X2_PTR;
Not bad; nothing new, really. Basic types, all the bitmap stuff, polygon support, and a little matrix math.
You know that I like globals because they're so fast. Moreover, they're really appropriate for a lot of system-level variables (which a 2D/3D engine has a lot of). So here are the globals for the engine. Again, you've seen many of them, but the ones that look alien have comments, so read them:
extern FILE *fp_error; // general error file
extern char error_filename[80]; // error file name
// notice that interface 4.0 is used on a number of interfaces
extern LPDIRECTDRAW7 lpdd; // dd object
extern LPDIRECTDRAWSURFACE7 lpddsprimary; // dd primary surface
extern LPDIRECTDRAWSURFACE7 lpddsback; // dd back surface
extern LPDIRECTDRAWPALETTE lpddpal; // dd palette
extern LPDIRECTDRAWCLIPPER lpddclipper; // dd clipper for back surface
extern LPDIRECTDRAWCLIPPER lpddclipperwin; // dd clipper for window
extern PALETTEENTRY palette[256]; // color palette
extern PALETTEENTRY save_palette[256];// used to save palettes
extern DDSURFACEDESC2 ddsd; // a dd surface description struct
extern DDBLTFX ddbltfx; // used to fill
extern DDSCAPS2 ddscaps; // a dd surface capabilities struct
extern HRESULT ddrval; // result back from dd calls
extern UCHAR *primary_buffer; // primary video buffer
extern UCHAR *back_buffer; // secondary back buffer
extern int primary_lpitch; // memory line pitch
extern int back_lpitch; // memory line pitch
extern BITMAP_FILE bitmap8bit; // an 8-bit bitmap file
extern BITMAP_FILE bitmap16bit; // a 16 bit bitmap file
extern BITMAP_FILE bitmap24bit; // a 24 bit bitmap file
extern DWORD start_clock_count; // used for timing
extern int windowed_mode; // tracks if dd is windowed or not
// these define the general clipping rectangle for software clipping
extern int min_clip_x, // clipping rectangle
max_clip_x,
min_clip_y,
max_clip_y;
// these are overwritten globally by DD_Init()
extern int screen_width, // width of screen
screen_height, // height of screen
screen_bpp, // bits per pixel
screen_windowed; // is this a windowed app?
extern int dd_pixel_format; // default pixel format set by call
// to DDraw_Init
extern int window_client_x0; // used to track the starting
// (x,y) client area for
extern int window_client_y0; // for windowed mode dd operations
// storage for our lookup tables
extern float cos_look[361]; // 1 extra so we can store 0-360 inclusive
extern float sin_look[361]; // 1 extra so we can store 0-360 inclusive
// function ptr to RGB16 builder
extern USHORT (*RGB16Bit)(int r, int g, int b);
// root functions
extern USHORT RGB16Bit565(int r, int g, int b);
extern USHORT RGB16Bit555(int r, int g, int b);
Now that you've seen all the data support, let's take a look at all the DirectDraw support functions that we've written along with some additions that I have made for full 16-bit windowed support. Let's take a quick look at each function.
Function Prototype:
int DDraw_Init(int width, // width of display
int height, // height of display
int bpp, // bits per pixel
int windowed=0); // controls windowed
Purpose:
DDraw_Init() starts up and initializes DirectDraw. You can request any resolution and color depth, and you can select a windowed mode by setting windowed to 1. If you select a windowed mode, you must have previously created a window that is not full screen. DDraw_Init() must set up DirectDraw differently for windowed modes: the primary buffer is now the entire display, clipping must be performed to just the client area of the window, and of course you have no control over the screen resolution or color depth in a windowed mode, so bpp is ignored.
Additionally, this function does a lot for you: it loads a default palette (paldata2.pal) for 8-bit modes, sets up the clipping rectangle for both 8- and 16-bit modes (windowed or full screen), and in 16-bit modes determines the pixel format, tests whether it's 5.5.5 or 5.6.5, and based on this points the function pointer:
USHORT (*RGB16Bit)(int r, int g, int b) = NULL;
At either:
USHORT RGB16Bit565(int r, int g, int b)
{
// this function simply builds a 5.6.5 format 16 bit pixel
// assumes input is RGB 0-255 each channel
r>>=3; g>>=2; b>>=3;
return(_RGB16BIT565((r),(g),(b)));
} // end RGB16Bit565
//////////////////////////////////////////////////////////
USHORT RGB16Bit555(int r, int g, int b)
{
// this function simply builds a 5.5.5 format 16 bit pixel
// assumes input is RGB 0-255 each channel
r>>=3; g>>=3; b>>=3;
return(_RGB16BIT555((r),(g),(b)));
} // end RGB16Bit555
which allows you to make the call RGB16Bit(r,g,b) to create a properly formatted RGB word in 16-bit mode and not have to worry about the pixel format. Isn't that nice? And of course R,G,B all must be in the range 0–255. Also, DDraw_Init() is smart enough in windowed mode to set all the clipping up for you, so you don't have to do anything!
Returns TRUE if successful.
Example:
// put the system into 800x600 with 256 colors
DDraw_Init(800,600,8);
// put the system into a windowed mode
// with a window size of 640x480 and 16-bit color
DDraw_Init(640,480,16,1);
int DDraw_Shutdown(void);
DDraw_Shutdown() shuts down DirectDraw and releases all interfaces.
// in your system shutdown code you might put
DDraw_Shutdown();
LPDIRECTDRAWCLIPPER
DDraw_Attach_Clipper(
LPDIRECTDRAWSURFACE7 lpdds, // surface to attach to
int num_rects, // number of rects
LPRECT clip_list); // pointer to rects
DDraw_Attach_Clipper() attaches a clipper to the sent surface (the back buffer in most cases). In addition, you must send the number of rectangles in the clipping list and a pointer to the RECT list itself. Returns TRUE if successful.
// creates a clipping region the size of the screen
RECT clip_zone = {0,0,SCREEN_WIDTH-1, SCREEN_HEIGHT-1};
DDraw_Attach_Clipper(lpddsback, 1, &clip_zone);
LPDIRECTDRAWSURFACE7
DDraw_Create_Surface(int width, // width of surface
int height, // height of surface
int mem_flags, // control flags
USHORT color_key_value=0); // the color key
DDraw_Create_Surface() is used to create a generic offscreen DirectDraw surface in system memory, VRAM, or AGP memory. The default is DDSCAPS_OFFSCREENPLAIN. Any additional control flags are logically ORed with the default. They're the standard DirectDraw DDSCAPS* flags, such as DDSCAPS_SYSTEMMEMORY and DDSCAPS_VIDEOMEMORY for system memory and VRAM, respectively. Also, you may select a color key value; it currently defaults to 0. If the function is successful, it returns a pointer to the new surface. Otherwise, it returns NULL.
// let's create a 64x64 surface in VRAM
LPDIRECTDRAWSURFACE7 image =
DDraw_Create_Surface(64,64, DDSCAPS_VIDEOMEMORY);
int DDraw_Flip(void);
DDraw_Flip() simply flips the primary surface with the secondary surface for full screen modes, or in windowed mode copies the offscreen back buffer to the client area of the windowed display. The call waits until the flip can take place, so it may not return immediately. Returns TRUE if successful.
// flip em baby
DDraw_Flip();
int DDraw_Wait_For_Vsync(void);
DDraw_Wait_For_Vsync() waits until the next vertical blank period begins (when the raster hits the bottom of the screen). Returns TRUE if successful and FALSE if something really bad happened.
// wait 1/70th of sec
DDraw_Wait_For_Vsync();
int DDraw_Fill_Surface(LPDIRECTDRAWSURFACE7 lpdds, // surface to fill
int color, // color, index or RGB value
RECT *client=NULL); // rect to fill
DDraw_Fill_Surface() is used to fill a surface, or a rectangle within a surface, with a color. The color must be in the color depth format of the surface, such as a single byte in 256-color mode or an RGB descriptor in high-color modes. If you want to fill the entire surface, set client to NULL (its default); otherwise, you may send a RECT pointer as client to fill only a region of the surface. Returns TRUE if successful.
// fill the primary surface with color 0
DDraw_Fill_Surface(lpddsprimary,0);
UCHAR *DDraw_Lock_Surface(LPDIRECTDRAWSURFACE7 lpdds,int *lpitch);
DDraw_Lock_Surface() locks the sent surface (if possible) and returns a UCHAR pointer to it, along with updating the sent lpitch variable with the linear memory pitch of the surface. While the surface is locked, you can manipulate it and write pixels to it, but the blitter will be blocked, so remember to unlock the surface ASAP. In addition, after unlocking the surface, the memory pointer and pitch most likely become invalid and should not be used. DDraw_Lock_Surface() returns the non-NULL address of the surface memory if successful and NULL otherwise. Also, remember that in 16-bit mode there are 2 bytes per pixel, but the pointer returned is still a UCHAR *, and the value written into lpitch is in bytes, not pixels, so beware!
// holds the memory pitch
int lpitch = 0;
// let's lock the little 64x64 image we made
UCHAR *memory = DDraw_Lock_Surface(image, &lpitch);
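Because lpitch is in bytes, addressing a 16-bit pixel means casting the locked pointer to USHORT* and halving the pitch. A minimal sketch (Plot_Pixel16 here is illustrative, not an engine function):

```c
typedef unsigned char  UCHAR;
typedef unsigned short USHORT;

// write one 16-bit pixel into a locked surface whose pitch is in BYTES
void Plot_Pixel16(int x, int y, USHORT color,
                  UCHAR *video_buffer, int lpitch)
{
    USHORT *video = (USHORT *)video_buffer;
    // lpitch >> 1 converts the byte pitch into a pitch in 16-bit pixels
    video[x + y * (lpitch >> 1)] = color;
}
```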
int DDraw_Unlock_Surface(LPDIRECTDRAWSURFACE7 lpdds);
DDraw_Unlock_Surface() unlocks a surface previously locked with DDraw_Lock_Surface(). You need only send the pointer to the surface. Returns TRUE if successful.
// unlock the image surface
DDraw_Unlock_Surface(image);
Function Prototypes:
UCHAR *DDraw_Lock_Back_Surface(void);
UCHAR *DDraw_Lock_Primary_Surface(void);
These two functions are used to lock the primary and secondary rendering surfaces. In most cases you'll only be interested in locking the secondary surface in the double-buffered system, but the ability to lock the primary surface is there if you need it. Additionally, if you call DDraw_Lock_Primary_Surface(), the following globals become valid:
extern UCHAR *primary_buffer; // primary video buffer
extern int primary_lpitch; // memory line pitch
Then you're free to manipulate the surface memory as you want; however, the blitter will be blocked. Also, note that in windowed modes the primary buffer will point to the entire screen surface, not just your window, however, the secondary buffer will point to a rectangle exactly the size of the window's client area. Anyway, making the call to DDraw_Lock_Back_Surface() will lock the back buffer surface and validate the following globals:
extern UCHAR *back_buffer; // secondary back buffer
extern int back_lpitch; // memory line pitch
NOTE
Do not change any of these globals yourself; they're used to track state changes in the locking functions. Changing them yourself may make the engine go crazy.
// let lock the primary surface and write a pixel to the
// upper left hand corner
DDraw_Lock_Primary_Surface();
primary_buffer[0] = 100;
int DDraw_Unlock_Primary_Surface(void);
int DDraw_Unlock_Back_Surface(void);
These functions are used to unlock the primary or back buffer surfaces. If you try to unlock a surface that wasn't locked, there's no effect. Returns TRUE if successful.
// unlock the secondary back buffer
DDraw_Unlock_Back_Surface();
The next set of functions makes up the 2D polygon system. This is by no means advanced, fast, or cutting-edge, but just your work up to this point. The functions do the job. There are better ways to do all of this stuff, but that's why you're glued to the book, right? <BG> Also, some of the functions have both an 8-bit and 16-bit version; the 16-bit versions are usually denoted by an extra "16" concatenated on the function name.
Function Prototype(s):
void Draw_Triangle_2D(int x1,int y1, // triangle vertices
int x2,int y2,
int x3,int y3,
int color, // 8-bit color index
UCHAR *dest_buffer, // destination buffer
int mempitch); // memory pitch
// 16-bit version
void Draw_Triangle_2D16(int x1,int y1, // triangle vertices
int x2,int y2,
int x3,int y3,
int color, // 16-bit RGB color descriptor
UCHAR *dest_buffer, // destination buffer
int mempitch); // memory pitch
// fixed point high speed version, slightly less accurate
void Draw_TriangleFP_2D(int x1,int y1,
int x2,int y2,
int x3,int y3,
int color,
UCHAR *dest_buffer,
int mempitch);
Draw_Triangle_2D*() draws a filled triangle in the given memory buffer with the sent color. The triangle will be clipped to the current clipping region set in the globals, not by the DirectDraw clipper. This is because the function uses software, not the blitter, to draw lines. Note: Draw_TriangleFP_2D() does the exact same thing, but it uses fixed-point math internally, is slightly faster, and is slightly less accurate. Both functions return nothing.
// draw a triangle (100,10) (150,50) (50,60)
// with color index 50 in the back buffer surface
Draw_Triangle_2D(100,10,150,50,50,60,
50, // color index 50
back_buffer,
back_lpitch);
// same example, but in a 16-bit mode
// draw a triangle (100,10) (150,50) (50,60)
// with color RGB(255,0,0) in the back buffer surface
Draw_Triangle_2D16(100,10,150,50,50,60,
RGB16Bit(255,0,0),
back_buffer,
back_lpitch);
inline void Draw_QuadFP_2D(int x0,int y0, // vertices
int x1,int y1,
int x2,int y2,
int x3,int y3,
int color, // 8-bit color index
UCHAR *dest_buffer, // destination video buffer
int mempitch); // memory pitch of buffer
Draw_QuadFP_2D() draws the sent quadrilateral as a composition of two triangles. Returns nothing.
// draw a quadrilateral, note vertices must be ordered
// either in cw or ccw order
Draw_QuadFP_2D(0,0, 10,0, 15,20, 5,25,
100,
back_buffer, back_lpitch);
void Draw_Filled_Polygon2D(
POLYGON2D_PTR poly, // poly to render
UCHAR *vbuffer, // video buffer
int mempitch); // memory pitch
// 16-bit version
void Draw_Filled_Polygon2D16(
POLYGON2D_PTR poly, // poly to render
UCHAR *vbuffer, // video buffer
int mempitch); // memory pitch
Draw_Filled_Polygon2D*() draws a general filled polygon with n sides. The function simply takes the polygon to render, a pointer to the video buffer, and the pitch, and that's it! Although, the calling parameters are the exact same for the 8-bit and 16-bit versions, internally the functions call different rasterizers, thus you must use the correct call. Note: The function renders relative to the poly's origin (x0,y0), so make sure these are initialized. Returns nothing.
// draw a polygon in the primary buffer
Draw_Filled_Polygon2D(&poly,
primary_buffer,
primary_lpitch);
int Translate_Polygon2D(
POLYGON2D_PTR poly, // poly to translate
int dx, int dy); // translation factors
Translate_Polygon2D() translates the given polygon's origin (x0,y0). Note: The function does not transform or modify the actual vertices making up the polygon. Returns TRUE if successful.
// translate polygon 10,-5
Translate_Polygon2D(&poly, 10, -5);
int Rotate_Polygon2D(
POLYGON2D_PTR poly, // poly to rotate
int theta); // angle 0-359
Rotate_Polygon2D() rotates the sent polygon in a counterclockwise fashion about its origin. The angle must be an integer from 0–359. Returns TRUE if successful.
// rotate polygon 10 degrees
Rotate_Polygon2D(&poly, 10);
int Scale_Polygon2D(POLYGON2D_PTR poly, // poly to scale
float sx, float sy); // scale factors
Scale_Polygon2D() scales the sent polygon by scale factors sx and sy in the x- and y-axes, respectively. Returns TRUE if successful.
// scale the poly equally 2x
Scale_Polygon2D(&poly, 2,2);
This set of functions contains a few of everything; it's kind of a potpourri of graphics primitives. Nothing you haven't seen—at least I don't think so, but then I've had so much Snapple Raspberry that I'm freaking out and little purple mechanical spiders are crawling all over me! And once again, some of the functions have both an 8-bit and 16-bit version; the 16-bit versions are usually denoted by an extra "16" concatenated on the function name.
int Draw_Clip_Line(int x0,int y0, // starting point
int x1, int y1, // ending point
int color, // 8-bit color
UCHAR *dest_buffer, // video buffer
int lpitch); // memory pitch
// 16-bit version
int Draw_Clip_Line16(int x0,int y0, // starting point
int x1, int y1, // ending point
int color, // 16-bit RGB color
UCHAR *dest_buffer, // video buffer
int lpitch); // memory pitch
Draw_Clip_Line*() clips the sent line to the current clipping rectangle and then draws a line in the sent buffer in either 8 or 16-bit mode. Returns TRUE if successful.
// draw a line in the back buffer from (10,10) to (100,200)
// 8-bit call
Draw_Clip_Line(10,10,100,200,
5, // color 5
back_buffer,
back_lpitch);
int Clip_Line(int &x1,int &y1, // starting point
int &x2, int &y2); // ending point
Clip_Line() is for the most part internal, but you can call it to clip the sent line to the current clipping rectangle. Note that the function modifies the sent endpoints, so save them if you don't want this side effect. Also, the function does not draw anything; it only clips the endpoints. Returns TRUE if successful.
// clip the line defined by x1,y1 to x2,y2
Clip_Line(x1,y1,x2,y2);
int Draw_Line(int xo, int yo, // starting point
int x1,int y1, // ending point
int color, // 8-bit color index
UCHAR *vb_start, // video buffer
int lpitch); // memory pitch
// 16-bit version
int Draw_Line16(int xo, int yo, // starting point
int x1,int y1, // ending point
int color, // 16-bit RGB color
UCHAR *vb_start, // video buffer
int lpitch); // memory pitch
Draw_Line*() draws a line in 8 or 16 bit mode without any clipping, so make sure that the endpoints are within the display surface's valid coordinates. This function is slightly faster than the clipped version because the clipping operation is not needed. Returns TRUE if successful.
// draw a line in the back buffer from (10,10) to (100,200)
// in 16-bit mode
Draw_Line16(10,10,100,200,
RGB16Bit(0,255,0), // bright green
back_buffer,
back_lpitch);
inline int Draw_Pixel(int x, int y, // position of pixel
int color, // 8-bit color
UCHAR *video_buffer, // gee hmm?
int lpitch); // memory pitch
// 16-bit version
inline int Draw_Pixel16(int x, int y, // position of pixel
int color, // 16-bit RGB color
UCHAR *video_buffer, // gee hmm?
int lpitch); // memory pitch
Draw_Pixel() draws a single pixel on the display surface memory. In most cases, you won't create objects based on pixels because the overhead of the call itself takes more time than plotting the pixel. But if speed isn't your concern, the function does the job. At least it's inline! Returns TRUE if successful.
// draw a pixel in the center of the 640x480 screen
// 8-bit example
Draw_Pixel(320,240, 100, back_buffer, back_lpitch);
int Draw_Rectangle(int x1, int y1, // upper left corner
int x2, int y2, // lower right corner
int color, // color descriptor, index for
// 8-bit modes, RGB value for 16-bit
// modes
LPDIRECTDRAWSURFACE7 lpdds); // dd surface
Draw_Rectangle() draws a rectangle on the sent DirectDraw surface. This function works the same in either 8 or 16-bit mode since it's a pure DirectDraw call. Note that the surface must be unlocked for the call to work. Moreover, the function uses the blitter, so it's very fast. Returns TRUE if successful.
// fill the screen using the blitter
Draw_Rectangle(0,0,639,479,0,lpddsback);
void HLine(int x1,int x2, // start and end x points
int y, // row to draw on
int color, // 8-bit color
UCHAR *vbuffer, // video buffer
int lpitch); // memory pitch
// 16-bit version
void HLine16(int x1,int x2, // start and end x points
int y, // row to draw on
int color, // 16-bit RGB color
UCHAR *vbuffer, // video buffer
int lpitch); // memory pitch
HLine*() draws a horizontal line very quickly as compared to the general line drawing function. Works in both 8 and 16-bit modes. Returns nothing.
// draw a fast line from 10,100 to 100,100
// 8-bit mode
HLine(10,100,100,
20, back_buffer, back_lpitch);
void VLine(int y1,int y2, // start and end row
int x, // column to draw in
int color, // 8-bit color
UCHAR *vbuffer,// video buffer
int lpitch); // memory pitch
// 16-bit version
void VLine16(int y1,int y2, // start and end row
int x, // column to draw in
int color, // 16-bit RGB color
UCHAR *vbuffer,// video buffer
int lpitch); // memory pitch
VLine*() draws a fast vertical line. It's not as fast as HLine(), but it's faster than Draw_Line(), so use it if you know a line is going to be vertical in all cases. Returns nothing.
// draw a line from 320,0 to 320,479
// 16-bit version
VLine16(0,479,320,RGB16Bit(255,255,255),
primary_buffer,
primary_lpitch);
void Screen_Transition(int effect, // screen transition
UCHAR *vbuffer,// video buffer
int lpitch); // memory pitch
Screen_Transition() performs various in-memory screen transitions, as listed in the previous header information. Note that the transitions are destructive, so please save the image and/or palette if you need them after the transition. Additionally, the color manipulation transitions only work in 8-bit palettized modes; however, the screen swipes and scrunches work in either mode. Returns nothing.
// fade the primary display screen to black
// only works for 8-bit modes
Screen_Transition(SCREEN_DARKNESS, NULL, 0);
// scrunch the screen, works in 8/16 bit modes
Screen_Transition(SCREEN_SCRUNCH, NULL, 0);
int Draw_Text_GDI(char *text, // null terminated string
int x,int y, // position
COLORREF color, // general RGB color
LPDIRECTDRAWSURFACE7 lpdds); // dd surface
int Draw_Text_GDI(char *text, // null terminated string
int x,int y, // position
int color, // 8-bit color index
LPDIRECTDRAWSURFACE7 lpdds); // dd surface
Draw_Text_GDI() draws GDI text on the sent surface with the desired color and position. The function is overloaded to take both a COLORREF in the form of the Windows RGB() macro for 8- or 16-bit modes or a 256-color 8-bit color index for 256-color modes only. Note that the destination surface must be unlocked for the function to operate because it locks it momentarily to perform the text blitting with GDI. Returns TRUE if successful.
// draw text with color RGB(100,100,0);
// note this call would work in either
// 8 or 16-bit modes
Draw_Text_GDI("This is a test",100,50,
RGB(100,100,0),lpddsprimary);
// draw text with color index 33
// note this call would work ONLY in
// 8-bit modes
Draw_Text_GDI("This is a test",100,50,
33,lpddsprimary);
The math library thus far is almost nonexistent, but that will soon change once you get to the math section of the book. I'll pump your brain full of all kinds of fun mathematical information and functions. Until then, sip the sweet simplicity because it will be your last…
int Fast_Distance_2D(int x, int y);
Fast_Distance_2D() computes the distance from (0,0) to (x,y) using a fast approximation. Returns the distance within a 3.5 percent error, truncated to an integer.
int x1=100,y1=200; // object one
int x2=400,y2=150; // object two
// compute the distance between object one and two
int dist = Fast_Distance_2D(x1-x2, y1-y2);
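The usual trick behind functions like this is to replace the square root with adds and shifts. The exact coefficients in the engine may differ; this sketch shows one classic shift-based approximation:

```c
#include <stdlib.h>

#define MIN(a, b) (((a) < (b)) ? (a) : (b))

// approximate sqrt(x*x + y*y) without sqrt:
// dist ~= x + y - min/2 - min/4 + min/16, all done with shifts
int fast_distance_2d_sketch(int x, int y)
{
    x = abs(x); // fold into the first quadrant
    y = abs(y);
    int mn = MIN(x, y);
    return (x + y - (mn >> 1) - (mn >> 2) + (mn >> 4));
}
```

For a 3-4-5 triangle scaled up (300,400), the sketch returns 493 against the true 500, about 1.4 percent low.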
float Fast_Distance_3D(float x, float y, float z);
Fast_Distance_3D() computes the distance from (0,0,0) to (x,y,z) using a fast approximation. The function returns the distance within an 11 percent error.
// compute the distance from (0,0,0) to (100,200,300)
float dist = Fast_Distance_3D(100,200,300);
int Find_Bounding_Box_Poly2D(
POLYGON2D_PTR poly, // the polygon
float &min_x, float &max_x, // bounding box
float &min_y, float &max_y);
Find_Bounding_Box_Poly2D() computes the smallest rectangle that contains the sent polygon in poly. Returns TRUE if successful. Also, notice that the function takes parameters by reference.
POLYGON2D poly; // assume this is initialized
float min_x, max_x, min_y, max_y; // hold result
// find bounding box
Find_Bounding_Box_Poly2D(&poly,min_x,max_x,min_y,max_y);
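Internally this is just a min/max scan over the vertices. Here's a sketch over a hypothetical flat vertex array (the real function walks the POLYGON2D vertex list and offsets by the poly's origin):

```c
typedef struct { float x, y; } VERTEX2DF; // hypothetical vertex type

// scan n vertices and report the smallest enclosing axis-aligned box
void find_bounding_box(const VERTEX2DF *v, int n,
                       float *min_x, float *max_x,
                       float *min_y, float *max_y)
{
    *min_x = *max_x = v[0].x;
    *min_y = *max_y = v[0].y;
    for (int i = 1; i < n; i++)
    {
        if (v[i].x < *min_x) *min_x = v[i].x;
        if (v[i].x > *max_x) *max_x = v[i].x;
        if (v[i].y < *min_y) *min_y = v[i].y;
        if (v[i].y > *max_y) *max_y = v[i].y;
    }
}
```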
int Open_Error_File(char *filename);
Open_Error_File() opens a disk file that receives error messages sent by you via the Write_Error() function. Returns TRUE if successful.
// open a general error log
Open_Error_File("errors.log");
int Close_Error_File(void);
Close_Error_File() closes a previously opened error file. Basically, it shuts down the stream. If you call this and an error file is not open, nothing will happen. Returns TRUE if successful.
// close the error system, note no parameter needed
Close_Error_File();
int Write_Error(char *string, ...); // error formatting string
Write_Error() writes an error out to the previously opened error file. If there is no file open, the function returns a FALSE and there's no harm. Note that the function uses the variable parameter indicator, so you can use this function as you would printf(). Returns TRUE if successful.
// write out some stuff
Write_Error("\nSystem Starting...");
Write_Error("x-vel = %d, y-vel = %d", xvel, yvel);
The following function set makes up the BITMAP_IMAGE and BITMAP_FILE manipulation routines. There are functions to load 8-, 16-, 24-, and 32-bit bitmaps, as well as to extract images from them and create simple BITMAP_IMAGE objects (which are not DirectDraw surfaces). In addition, there's functionality to draw these images in both 8- and 16-bit modes, but there's no clipping support; you can modify the source yourself if you need clipping, or step up to the BOB objects described at the end of the section. And as usual, some of the functions have both an 8-bit and 16-bit version; the 16-bit versions are usually denoted by an extra "16" concatenated on the function name.
int Load_Bitmap_File(BITMAP_FILE_PTR bitmap, // bitmap file
char *filename); // disk .BMP file to load
Load_Bitmap_File() loads a .BMP bitmap file from disk into the sent BITMAP_FILE structure where you can manipulate it. The function loads 8-, 16-, and 24-bit bitmaps, as well as the palette information on 8-bit .BMP files. Returns TRUE if successful.
// let's load "andre.bmp" off disk
BITMAP_FILE bitmap_file;
Load_Bitmap_File(&bitmap_file, "andre.bmp");
int Unload_Bitmap_File(BITMAP_FILE_PTR bitmap);
// bitmap to close and unload
Unload_Bitmap_File() deallocates the memory associated with the image buffer of a loaded BITMAP_FILE. Call this function when you've copied the image bits and/or are done working with a particular bitmap. You can reuse the structure, but the memory must be freed first. Returns TRUE if successful.
// close the file we just opened
Unload_Bitmap_File(&bitmap_file);
int Create_Bitmap(BITMAP_IMAGE_PTR image, // bitmap image
int x, int y, // starting position
int width, int height, // size
int bpp=8); // bits per pixel, either 8 or 16
Create_Bitmap() creates either an 8- or 16-bit system memory bitmap at the given position with the given size. The bitmap is initially blank and is stored in the BITMAP_IMAGE image. The bitmap is not a DirectDraw surface, so there's no acceleration or clipping available. Returns TRUE if successful.
There's a big difference between a BITMAP_FILE and a BITMAP_IMAGE. A BITMAP_FILE is a disk .BMP file, whereas a BITMAP_IMAGE is a system memory object like a sprite that can be moved and drawn.
// let's create an 8-bit 64x64 bitmap image at (0,0)
BITMAP_IMAGE ship;
Create_Bitmap(&ship, 0,0, 64,64,8);
// and here's the same example in 16-bit mode
BITMAP_IMAGE ship;
Create_Bitmap(&ship, 0,0, 64,64,16);
int Destroy_Bitmap(BITMAP_IMAGE_PTR image); // bitmap image to destroy
Destroy_Bitmap() is used to release the memory allocated during the creation of a BITMAP_IMAGE object. You should call this function on your object when you're all done working with it—usually during the shutdown of the game, or if the object has been destroyed in a bloody battle. Returns TRUE if successful.
// destroy the previously created BITMAP_IMAGE
Destroy_Bitmap(&ship);
int Load_Image_Bitmap(
BITMAP_IMAGE_PTR image, // bitmap to store image in
BITMAP_FILE_PTR bitmap, // bitmap file object to load from
int cx,int cy, // coordinates where to scan (cell or abs)
int mode); // image scan mode: cell based or absolute
// 16-bit version
int Load_Image_Bitmap16(
BITMAP_IMAGE_PTR image, // bitmap to store image in
BITMAP_FILE_PTR bitmap, // bitmap file object to load from
int cx,int cy, // coordinates where to scan (cell or abs)
int mode); // image scan mode: cell based or absolute
#define BITMAP_EXTRACT_MODE_CELL 0
#define BITMAP_EXTRACT_MODE_ABS 1
Load_Image_Bitmap*() is used to scan an image from a previously loaded BITMAP_FILE object into the sent BITMAP_IMAGE storage area. This is how you get objects and image bits into a BITMAP_IMAGE. To use the function, you first must load a BITMAP_FILE and create the BITMAP_IMAGE. Then you make the call to scan an image of the same size out of the bitmap data stored in the BITMAP_FILE. There are two ways the function works, cell mode or absolute mode:
In cell mode, BITMAP_EXTRACT_MODE_CELL, the image is scanned making the assumption that all the images are in the .BMP file in a template that is some given size, mxn, with a 1-pixel border between each cell. The cells usually range from 8x8, 16x16, 32x32, 64x64, and so on. Take a look at TEMPLATE*.BMP on the CD; it contains a number of templates. Cell numbers range from left to right, top to bottom, and they start with (0,0).
The second mode of operation is absolute coordinate mode, BITMAP_EXTRACT_MODE_ABS. In this mode, the image is scanned at the exact coordinates sent in cx, cy. This method is good if you want to load your artwork with various-sized images on the same .BMP; hence, you can't template them.
Also, you must use the correct version of the function based on the bit depth you created the bitmap in and the bit depth of the image you are scanning. Therefore, use Load_Image_Bitmap() for 8-bit bitmaps and Load_Image_Bitmap16() for 16-bit images.
// assume the source bitmap .BMP file is 640x480 and
// has a 8x8 matrix of cells that are each 32x32
// then to load the 3rd cell to the right on the 2nd
// row (cell 2,1) in 8-bit mode, you would do this
// load in the .BMP file into memory
BITMAP_FILE bitmap_file;
Load_Bitmap_File(&bitmap_file,"images.bmp");
// initialize the bitmap
BITMAP_IMAGE ship;
Create_Bitmap(&ship, 0,0, 32,32,8);
// now scan out the data
Load_Image_Bitmap(&ship, &bitmap_file, 2,1,
BITMAP_EXTRACT_MODE_CELL);
// same example in 16-bit mode
// assume the source bitmap .BMP file is 640x480 and
// has a 8x8 matrix of cells that are each 32x32
// then to load the 3rd cell to the right on the 2nd
// row (cell 2,1) in 16-bit mode, you would do this
// load in the .BMP file into memory
BITMAP_FILE bitmap_file;
Load_Bitmap_File(&bitmap_file,"images24bit.bmp");
// initialize the bitmap
BITMAP_IMAGE ship;
Create_Bitmap(&ship, 0,0, 32,32,16);
// now scan out the data
Load_Image_Bitmap16(&ship, &bitmap_file, 2,1,
BITMAP_EXTRACT_MODE_CELL);
To load the exact same image, assuming it's still in the template, but using the absolute mode, you have to figure out the coordinates. Remember that there's a 1-pixel partitioning wall on each side of the image.
Load_Image_Bitmap16(&ship, &bitmap_file,
2*(32+1)+1,1*(32+1)+1,
BITMAP_EXTRACT_MODE_ABS);
int Draw_Bitmap(BITMAP_IMAGE_PTR source_bitmap, // bitmap to draw
UCHAR *dest_buffer, // video buffer
int lpitch, // memory pitch
int transparent); // transparency?
// 16-bit version
int Draw_Bitmap16(BITMAP_IMAGE_PTR source_bitmap, // bitmap to draw
UCHAR *dest_buffer, // video buffer
int lpitch, // memory pitch
int transparent); // transparency?
Draw_Bitmap*() draws the sent bitmap on the destination memory surface with or without transparency. If transparent is 1, transparency is enabled and any pixel with a color index of 0 will not be copied. Again, simply use the 16-bit version when you are working with 16-bit modes and bitmaps. Function returns TRUE if successful.
// draw our little ship on the back buffer
// 8-bit mode
Draw_Bitmap( &ship, back_buffer, back_lpitch, 1);
int Flip_Bitmap(UCHAR *image, // image bits to vertically flip
int bytes_per_line, // bytes per line
int height); // total rows or height
Flip_Bitmap() is usually called internally to flip upside-down .BMP files during loading to make them right-side up, but you might want to use it to flip an image yourself. The function does an in-memory flip and actually inverts the bitmap line by line, so your original sent data will be inverted. Watch out! Works on any bit depth since it works with bytes per line. Returns TRUE if successful.
// for fun flip the image bits of our little ship
Flip_Bitmap(ship->buffer, ship->width, ship_height);
int Scroll_Bitmap(BITMAP_IMAGE_PTR image, // bitmap to scroll
int dx, // amount to scroll on x-axis
int dy=0); // amount to scroll on y-axis
Scroll_Bitmap() is used to scroll a bitmap horizontally or vertically. The function works on both 8- and 16-bit bitmaps and determines their bit depth internally, so you need only call one function. To use the function simple call it with a pointer to the bitmap you want to scroll along with the x and y scrolling values in pixels. The values can be either positive or negative, positive values mean right and down, while negative values mean left and up. Additionally, you may scroll on both axis simulataneosly, or just one. It's up to you. Function returns TRUE if it was successful.
// scroll an image 2 pixels to the right
Scroll_Bitmap(&image, 2, 0);
int Copy_Bitmap(BITMAP_IMAGE_PTR dest_bitmap, // destination bitmap
int dest_x, int dest_y, // destination position
BITMAP_IMAGE_PTR source_bitmap, // source bitmap
int source_x, int source_y, // source position
int width, int height); // size of bitmap chunk to copy
Copy_Bitmap() is used to copy a rectangular region the source bitmap to the destination bitmap. The function internally scans the bitdepth of the source and destination, so the function work on either 8- or 16-bit modes with the same call. The bitmaps must of course be the same color depth though. To use the function simply send the destination bitmap, the point you want to copy the bitmap to, along with the source bitmap, the point you want to copy the bitmap from, and finally the width and height of the rectangle you want to copy. The function return TRUE if it's successful.
// copy a 100x100 rectangle from bitmap2 to bitmap1
// from the upper hand corner to the same
Copy_Bitmap(&bitmap1, 0,0,
&bitmap2, 0,0,
100,100);
The following functions make up the 256-color palette interface. These functions are only relevant if you have the display set for a 256-color mode—that is, 8-bit color. Additionally, when you start the system up in 8-bit mode via a call to DDraw_Init() it will load in a default palette off disk, or try to at least, the default palette files are palette1.pal, palette2.pal, and palette3.pal—currently palette2.pal is loaded.
int Set_Palette_Entry(
int color_index, // color index to change
LPPALETTEENTRY color); // the color
Set_Palette_Entry() is used to change a single color in the color palette. You simply send the color index 0..255, along with a pointer to PALETTEENTRY holding the color, and the update will occur on the next frame. In addition, this function updates the shadow palette. Note: This function is slow; if you need to update the entire palette, use Set_Palette(). Returns TRUE if successful and FALSE otherwise.
// set color 0 to black
PALETTEENTRY black = {0,0,0,PC_NOCOLLAPSE};
Set_Palette_Entry(0,&black);
int Get_Palette_Entry(
int color_index, // color index to retrieve
LPPALETTEENTRY color); // storage for color
Get_Palette_Entry() retrieves a palette entry from the current palette. However, the function is very fast because it retrieves the data from the RAM-based shadow palette. Hence, you can call this as much as you like because it doesn't disturb the hardware at all. However, if you make changes to the system palette by using Set_Palette_Entry() or Set_Palette(), the shadow palette won't be updated and the data retrieved may not be valid. Returns TRUE if successful and FALSE otherwise.
// let's get palette entry 100
PALETTEENTRY color;
Get_Palette_Entry(100,&color);
int Save_Palette_To_File(
char *filename, // filename to save at
LPPALETTEENTRY palette); // palette to save
Save_Palette_To_File() saves the sent palette data to an ASCII file on disk for later retrieval or processing. This function is very handy if you generate a palette on-the-fly and want to store it on disk. However, the function assumes that the pointer in the palette points to a 256-entry palette, so watch out! Returns TRUE if successful and FALSE otherwise.
PALETTEENTRY my_palette[256]; // assume this is built
// save the palette we made
// note file name can be anything, but I like *.pal
Save_Palette_To_file("/palettes/custom1.pal",my_palette);
int Load_Palette_From_File(
char *filename, // file to load from
LPPALETTEENTRY palette); // storage for palette
Load_Palette_From_File() is used to load a previously saved 256-color palette from disk via Save_Palette_To_File(). You simply send the filename along with storage for all 256 entries, and the palette is loaded from disk into the data structure. However, the function does not load the entries into the hardware palette; you must do this yourself with Set_Palette(). Returns TRUE if successful and FALSE otherwise.
// load the previously ksaved palette
PALETTEENTRY disk_palette[256];
Load_Palette_From_Disk("/palettes/custom1.pal",&disk_palette);
int Set_Palette(LPPALETTEENTRY set_palette);
// palette to load into hardware
Set_Palette() loads the sent palette data into the hardware and updates the shadow palette also. Returns TRUE if successful and FALSE otherwise.
// lets load the palette into the hardware
Set_Palette(disk_palette);
int Save_Palette(LPPALETTEENTRY sav_palette); // storage for palette
Save_Palette() scans the hardware palette out into sav_palette so that you can save it to disk or manipulate it. sav_palette must have enough storage for all 256 entries.
// retrieve the current DirectDraw hardware palette
PALETTEENTRY hardware_palette[256];
Save_Palette(hardware_palette);
int Rotate_Colors(int start_index, // starting index 0..255
int end_index); // ending index 0..255
Rotate_Colors() rotates a bank of colors in a cyclic manner in 8-bit modes. It manipulates the color palette hardware directly. Returns TRUE if successful and FALSE otherwise.
// rotate the entire palette
Rotate_Colors(0,255);
int Blink_Colors(int command, // blinker engine command
BLINKER_PTR new_light, // blinker data
int id); // id of blinker
Blink_Colors() is used to create asynchronous palette animation. The function is too long to explain here, so please refer to Chapter 7, "Advanced DirectDraw and Bitmapped Graphics," for a more in-depth description.
None
The next set of functions are just utility functions that I seem to use a lot, so I thought you might want to use them too.
DWORD Get_Clock(void);
Get_Clock() returns the current clock time in milliseconds since Windows was started.
// get the current tick count
DWORD start_time = Get_Clock();
DWORD Start_Clock(void);
Start_Clock() basically makes a call to Get_Clock() and stores the time in a global variable for you. Then you can call Wait_Clock(), which will wait for a certain number of milliseconds since your call to Start_Clock(). Returns the starting clock value at the time of the call.
// start the clock and set the global
Start_Clock();
DWORD Wait_Clock(DWORD count); // number of milliseconds to wait
Wait_Clock() simply waits the sent number of milliseconds since the call was made to Start_Clock(). Returns the current clock count at the time of the call. However, the function will not return until the time difference has elapsed.
// wait 30 milliseconds
Start_Clock();
// code...
Wait_Clock(30);
int Collision_Test(int x1, int y1, // upper lhs of obj1
int w1, int h1, // width, height of obj1
int x2, int y2, // upper lhs of obj2
int w2, int h2);// width, height of obj2
Collision_Test() basically performs an overlapping rectangle test on the two sent rectangles. The rectangles can represent whatever you like. You must send the upper-left-corner coordinates of each rectangle, along with its width and height. Returns TRUE if there is an overlap and FALSE if not.
// do these two BITMAP_IMAGE's overlap?
if (Collision_Test(ship1->x,ship1->y,ship1->width,ship1->height,
ship2->x,ship2->y,ship2->width,ship2->height))
{// hit
} // end if
int Color_Scan(int x1, int y1, // upper left of rect
int x2, int y2, // lower right of rect
UCHAR scan_start, // starting scan color
UCHAR scan_end, // ending scan color
UCHAR *scan_buffer, // memory to scan
int scan_lpitch); // linear memory pitch
// 16-bit version
int Color_Scan16(int x1, int y1, // upper left of rect
int x2, int y2, // lower right of rect
USHORT scan_start, // scan RGB value 1
USHORT scan_end, // scan RGB value 2
UCHAR *scan_buffer, // memory to scan
int scan_lpitch); // linear memory pitch
Color_Scan*() is another collision-detection algorithm that scans a rectangle for either a single 8-bit value or sequence of values in some continuous range in 8-bit modes or when used in 16-bit modes scans for up to 2 RGB values. You can use it to determine if a color index is present within some area. Returns TRUE if the color(s) was found.
// scan for colors in range from 122-124 inclusive in 8-bit mode
Color_Scan(10,10, 50, 50, 122,124, back_buffer, back_lpitch);
// scan for the RGB colors 10,30,40 and 100,0,12
Color_Scan(10,10, 50, 50, RGB16Bit(10,30,40), RGB16Bit(100,0,12),
back_buffer, back_lpitch); | http://www.yaldex.com/games-programming/0672323699_ch08lev1sec14.html | CC-MAIN-2016-50 | refinedweb | 8,741 | 56.69 |
The linked list is one of the most important concepts and data structures to learn while preparing for interviews. As we know a LinkedList allows only insertions and deletion at constant time, to get access to any random element we still need linear time. But this problem somewhat uses traversals too but in an interesting way.
In this problem, we are given a LinkedList (root node) and are asked to find the length of the cycle/loop that is present in the LinkedList. Let me explain this with an example :-
The above-Linked List has a loop that has 6 nodes in the loop/cycle. Hence, this shall return 6 as the answer.
Well, the very first task that I can observe is to determine whether the LinkedList contains a cycle or not. So, it shall follow from the same algorithm, and then we can try to think of any modifications to find its length also.
Approach #1
The first approach is based on maps. The idea is to store the address of the nodes as the key and their position as the values. So, when we traverse and insert into the map if we come across a node that points to an address that is already present in the map that means there is a cycle present in the LinkedList. From this, we can find the length by simply subtracting the current position from the position of the matched node as
position - map[currnode].
Algorithm
- Start traversing every node of the linked list present and maintain the position(incremented with every node) in the map.
- While inserting also check if that node is present in the map, that will mean we have come across that node and there is a cycle cause we are visiting the node again.
- If the node is not present in the map - increment the position counter and insert the current node address in the map.
- If the node is present in the map - then there is a cycle and we can find the number of nodes from
position - map[currnode]
Code Implementation
Find length of loop in linked list
#include
using namespace std; struct Node { int data; struct Node* next; Node(int num) { data = num; next = NULL; } }; int countNodesinLoop(struct Node* head) { struct Node* p = head; int pos = 0; unordered_map m; while (p != NULL) { if (m.find(p) == m.end()) { m[p] = pos; pos++; } else { return (pos - m[p]); } p = p->next; } return 0; } int main() { struct Node* head = new Node(1); head->next = new Node(2); head->next->next = new Node(3); head->next->next->next = new Node(4); head->next->next->next->next = new Node(5); head->next->next->next->next->next = new Node(6); head->next->next->next->next->next->next = head->next; cout << countNodesinLoop(head) << endl; return 0; }
Output: 5
Time Complexity: O(n), where n is the number of nodes.
Space complexity: O(n), for using a map
As you can see, this method uses some extra space, we have to try to think of something better to reduce the extra space at least.
Approach #2
The next idea is based on Floyd’s Cycle detection algorithm. The concept is to use two pointers, one fast-moving another slow-moving. Both the pointers traverse the linked list with different speeds and when they meet each other that means there’s a cycle present in the LinkedList. Save the address of this node and take a counter with 1 and start incrementing it while traversing the LinkedList again from the common point with another pointer. Once we reach the common pointer again, then we will have our number of nodes in cycle count in the pointer. We will return this count.
Algorithm
- Take two pointers, a fast pointer, and a slow pointer pointing to the head initially.
- Traverse both the pointers as
slowptr = slowptr->next(1 node at a time), and
fastptr = fastptr->next->next(2 nodes at a time).
- When slowptr == fastptr, the common point is the node for the head of the cycle.
- Fix one pointer to this node and take count = 0 and move the other pointer from the common point one by one in the linked list and increment the counter by 1 in each step
- When the other pointer reaches the common point then stop the iteration and return the count.
Code Implementation
Find the length of a loop in the linked list_2
#include
using namespace std; struct Node { int data; struct Node* next; }; int countNodes(struct Node *n) { int res = 1; struct Node *temp = n; while (temp->next != n) { res++; temp = temp->next; } return res; } int countNodesinLoop(struct Node *list) { struct Node *slow_p = list, *fast_p = list; while (slow_p && fast_p && fast_p->next) { slow_p = slow_p->next; fast_p = fast_p->next->next; if (slow_p == fast_p) return countNodes(slow_p); } return 0; } struct Node *newNode(int key) { struct Node *temp = new Node(); temp->data = key; temp->next = NULL; return temp; } int main() { struct Node *head = newNode(1); head->next = newNode(2); head->next->next = newNode(3); head->next->next->next = newNode(4); head->next->next->next->next = newNode(5); head->next->next->next->next->next = new Node(6); head->next->next->next->next->next->next = head->next; cout << countNodesinLoop(head) << endl; return 0; }
Output: 5
[forminator_quiz id="3414"]
Space complexity: O(1), no extra space is used.
So, in this blog, we have tried to explain how you can find the length of the cycle present in LinkedList. You can use any of these approaches, either by using a map or by using Floyd’s cycle detection algorithm although the second approach is the most efficient one and should be preferred. If you want to practice such problems check out PrepBytes - PrepBytes MyCode Contests | https://www.prepbytes.com/blog/linked-list/find-the-length-of-a-loop-in-the-linked-list/ | CC-MAIN-2022-21 | refinedweb | 954 | 71.28 |
view raw
I was a bit curious if I could do more work in a function after returning a result. Basically I'm making a site using the pyramid framework(which is simply coding in python) after I process the inputs I return variables to render the page but sometimes I want to do more work after I render the page.
For example, you come to my site and update your profile and all you care about is that its successful so I output a message saying 'success!' but after that done I want to take your update and update my activity logs of what your doing, update your friends activity streams, etc.. Right now I'm doing all that before I return the result status that you care about but I'm curious if I can do it after so users get their responses faster.
I have done multi-processing before and worst case I might just fork a thread to do this work but if there was a way to do work after a return statement then that would be simpler.
example:
def profile_update(inputs):
#take updates and update the database
return "it worked"
#do maintainence processing now..
No, unfortunately, once you hit the
return statement, you return from the function/method (either with or without a return value).
From the docs for return:
return leaves the current function call with the expression list (or None) as return value.
You may want to look into generator functions and the yield statement, this is a way to return a value from a function and continue processing and preparing another value to be returned when the function is called the next time. | https://codedump.io/share/GDtgOWWlJwxA/1/is-there-a-way-to-do-more-work-after-a-return-statement | CC-MAIN-2017-22 | refinedweb | 282 | 60.48 |
Data loading: MXNet recordIO¶
Overview¶
This example shows you how to use the data that is stored in the MXNet recordIO format with DALI.
Creating an Index¶
To use data that is stored in the recordIO format, we need to use the
MXNetReader operator. In addition to the arguments that are common to all readers, such as
random_shuffle, this operator takes the
path and
index_path arguments:
pathis the list of paths to recordIO files
index_pathis a list (of size 1) that contains the path to the index file. This file, with
.idxextension, is automatically created when you use MXNet’s
im2rec.pyutility; and can also be obtained from the recordIO file by using the
rec2idxutility that is included with DALI.
The
DALI_EXTRA_PATH environment variable should point to the location where data from DALI extra repository is downloaded.
Important: Ensure that you check out the correct release tag that corresponds to the installed version of DALI.
[1]:
from nvidia.dali.pipeline import Pipeline import nvidia.dali.fn as fn import nvidia.dali.types as types import numpy as np import os.path test_data_root = os.environ['DALI_EXTRA_PATH'] base = os.path.join(test_data_root, 'db', 'recordio') batch_size = 16 idx_files = [base + "/train.idx"] rec_files = [base + "/train.rec"]
Defining and running the pipeline¶
Define a simple pipeline that takes the images that are stored in the recordIO format, decodes them and prepares them for ingestion in DL framework.
Processing images involves cropping, normalizing, and
HWC ->
CHW conversion process.
[2]:
pipe = Pipeline(batch_size=batch_size, num_threads=4, device_id=0) with pipe: jpegs, labels = fn.mxnet_reader(path=rec_files, index_path=idx_files) images = fn.image_decoder(jpegs, device="mixed", output_type=types.RGB) output = fn.crop_mirror_normalize( images, dtype=types.FLOAT, crop=(224, 224), mean=[0., 0., 0.], std=[1., 1., 1.]) pipe.set_outputs(output, labels)
Let us now build and run") img_chw = image_batch.at(j) img_hwc = np.transpose(img_chw, (1,2,0))/255.0 plt.imshow(img_hwc)
[5]:
images, labels = pipe_out show_images(images.as_cpu()) | https://docs.nvidia.com/deeplearning/dali/master-user-guide/docs/examples/general/data_loading/dataloading_recordio.html | CC-MAIN-2021-04 | refinedweb | 321 | 52.76 |
I have a listing in a sub menu.I would like to display different captions for the different items, all which are dynamically obtained. Much the same way 'Open recent' works.
I've implemented a
def is_visible(self, index):
function to only show relevant menu items, and that works great.
How can I provide a custom caption?Is there a function along the lines of
has_caption(self, index):
that I could implement?
There's an API:
description(<args>) String Returns a description of the command with the given arguments. Used in the menu, if no caption is provided. Return None to get the default description.
So something like:
def description(self, *args):
return "DESCRIPTION"
Not sure exactly when it is called (once or each times ?)
I tried implementing a function like
but it is never called...Do I need to put something special in my .sublime-menu file to trigger this callback?I'm using sublime 2.
Tried it right now and it works:startup, version: 2221 windows x64 channel: stable
\Sublime Text 2\Packages\User\Main.sublime-menu
{
"id": "view",
"children":
{
"command": "example"
}
]
}
]
\Sublime Text 2\Packages\User\exemple.py[code]class ExampleCommand(sublime_plugin.TextCommand): def run(self, edit): print 'Hello'
def description(self, *args):
return "DESCRIPTION"[/code] | https://forum.sublimetext.com/t/dynamic-menu-caption/11331 | CC-MAIN-2016-18 | refinedweb | 209 | 60.72 |
59956/existing-bigquery-table-into-another-location-will-charged
You are not charged for copying a table, but you do incur charges for storing the new table and the table you copied.
For more information, see Copying an existing table.
You can stop an instance temporarily if ...READ MORE
You can create an instance from an ...READ MORE
To enable Cloud CDN for an existing ...READ MORE
Cloud Storage uses a flat namespace to ...READ MORE
To create a dataset:
Open the BigQuery web ...READ MORE
You can apply access controls during dataset creation by ...READ MORE
At a minimum, to update dataset properties, ...READ MORE
You can update a dataset's default table ...READ MORE
If you receive a permission error, an ...READ MORE
You are not charged for creating, updating, ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/59956/existing-bigquery-table-into-another-location-will-charged | CC-MAIN-2020-29 | refinedweb | 141 | 69.58 |
An application contains lots of lines which leave the stack empty. After these lines any code can be inserted, as long as it leaves the stack empty again. You can load some values onto the stack and take them off again without disturbing the application's flow.
Let's take a look at an assembly's IL Assembler Language code. Each method contains lines which put something onto the stack or take something off it. We cannot always say what exactly is on the stack when a specific line executes, so we should not change anything between two arbitrary lines. But there are some lines at which we do know what is on the stack.
Every method has to contain at least one ret instruction. When the runtime environment reaches a ret, the stack must contain the return value and nothing else. That means, at a ret instruction in a method returning an Int32, the stack contains exactly one Int32 value. We could store it in a local variable, insert some code leaving the stack empty, and then put the return value back onto the stack. Nobody would notice it at runtime. There are many more lines like that, for example the closing brackets of .try { and .catch { blocks (definitely empty stack!) or method calls (only the returned value of known type on the stack!). To keep the example simple, we are going to concentrate on void methods and ignore all the others. When a void method is left, the stack has to be empty, so we don't have to care about return values.
ret
Int32
.try {
.catch {
void
This is the IL Assembler Language code of a typical void Dispose() method:
void Dispose()
.method family hidebysig virtual instance void
        Dispose(bool disposing) cil managed
{
  // Code size 30 (0x1e)
  .maxstack 2
  IL_0000: ldarg.1
  IL_0001: brfalse.s IL_0016
  IL_0003: ldarg.0
  IL_0004: ldfld class [System]System.ComponentModel.Container
           PictureKey.frmMain::components
  IL_0009: brfalse.s IL_0016
  IL_000b: ldarg.0
  IL_000c: ldfld class [System]System.ComponentModel.Container
           PictureKey.frmMain::components
  IL_0011: callvirt instance void
           [System]System.ComponentModel.Container::Dispose()
  IL_0016: ldarg.0
  IL_0017: ldarg.1
  IL_0018: call instance void
           [System.Windows.Forms]System.Windows.Forms.Form::Dispose(bool)
  IL_001d: ret
}
So what will happen if we insert a new local variable and store a constant in it just before the method returns? Nothing at all, except for a tiny performance decrease.
.method family hidebysig virtual instance void
Dispose(bool disposing) cil managed
{
// Code size 39 (0x27)
.maxstack 2
.locals init (int32 V_0) //declare a new local variable
...
IL_001d: ldc.i4 0x74007a //load an int32 constant
IL_0022: stloc V_0 //store the constant in the local variable
IL_0026: ret
}
In C# the methods would look like this:
//Original
protected override void Dispose( bool disposing ) {
if( disposing ) {
if (components != null) {
components.Dispose();
}
}
base.Dispose( disposing );
}
//Version with hidden variable
protected override void Dispose( bool disposing ) {
int myvalue = 0;
if( disposing ) {
if (components != null) {
components.Dispose();
}
}
base.Dispose( disposing );
myvalue = 0x74007a;
}
We have just hidden four bytes in an application! The IL file will re-compile without errors, and if somebody de-compiles the new assembly, he can find the value 0x74007a.
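To convince ourselves that the constant really carries data, we can unpack it again. The following is just an illustrative sketch of mine, not part of the article's code; it interprets the Int32 from the listing above as little-endian UTF-16 bytes:

```csharp
using System;
using System.Text;

class UnpackDemo {
    static void Main() {
        int hidden = 0x74007a;                        // the constant from the listing
        // on a little-endian machine this yields {0x7A, 0x00, 0x74, 0x00}
        byte[] bytes = BitConverter.GetBytes(hidden);
        // ...which are two UTF-16 code units: 'z' (0x007A) and 't' (0x0074)
        string text = Encoding.Unicode.GetString(bytes);
        Console.WriteLine(text);                      // prints "zt"
    }
}
```

Whether those two characters are part of a longer message or just test data is up to whoever hid them; the point is that the bytes survive the round trip untouched.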
To make life harder for people who disassemble our application and look for useless variables, we can disguise the hidden values as forgotten debug output:
ldstr bytearray(65 00) //load an "A"
stloc mystringvalue //store it
.maxstack 2 //set the stack size to exclude runtime exceptions
ldstr "DEBUG - current value is: {0}"
ldloc mystringvalue //simulate forgotten debug code
call void [mscorlib]System.Console::WriteLine(string, object)
In order to stay invisible even in console applications, we should disguise the value as part of an operation instead. We can insert more local/instance/static variables to make it look as if the values were needed somewhere else:
.maxstack 2 //adjust stack size
ldc.i4 65 //load the "A"
ldloc myintvalue //load another local variable - declaration inserted above
add //65 + myintvalue
stsfld int32 NameSpace.ClassName::mystaticvalue
//remove the result from the stack
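Decompiled back to C#, the two disguised fragments above would look like perfectly harmless code, roughly like this (the class layout and the concrete value of myintvalue are made up for illustration):

```csharp
using System;

class DisguiseDemo {
    static int mystaticvalue;

    static void Main() {
        // the "forgotten debug output" disguise:
        // ldstr bytearray(65 00) / stloc / ldstr / ldloc / call WriteLine
        string mystringvalue = "A";
        Console.WriteLine("DEBUG - current value is: {0}", mystringvalue);

        // the "operation" disguise:
        // ldc.i4 65 / ldloc myintvalue / add / stsfld mystaticvalue
        int myintvalue = 10;
        mystaticvalue = 65 + myintvalue;
    }
}
```

In both cases the hidden byte 65 (an "A") is sitting in plain sight, but nothing marks it as suspicious.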
This example only demonstrates the basic principle of hiding values, so the plain version will be used:
ldc.i4 65
stloc myvalue
There is no need to insert two lines for each byte of the message. We can combine up to four bytes into one Int32 value, inserting only half a line per hidden byte. But first we have to know where to insert it at all.
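A minimal sketch of this packing step might look like the following; the helper names are mine, not the article's:

```csharp
using System;

class PackDemo {
    // combine up to four message bytes into one Int32 (low byte first)
    static int PackBytes(byte[] message, int offset) {
        int value = 0;
        for (int i = 0; i < 4 && offset + i < message.Length; i++) {
            value |= message[offset + i] << (8 * i);
        }
        return value;
    }

    // build the two IL lines that hide the packed value in a local variable
    static string[] BuildHideLines(int value, string localName) {
        return new string[] {
            String.Format("ldc.i4 0x{0:x}", value),
            String.Format("stloc {0}", localName)
        };
    }

    static void Main() {
        // "zt" as little-endian UTF-16 bytes
        byte[] message = { 0x7A, 0x00, 0x74, 0x00 };
        int packed = PackBytes(message, 0);
        foreach (string line in BuildHideLines(packed, "myvalue"))
            Console.WriteLine(line);
        // prints:
        // ldc.i4 0x74007a
        // stloc myvalue
    }
}
```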
Before editing the IL file, we have to call ILDAsm.exe to create it from the compiled assembly. Afterwards we call ILAsm.exe to re-assemble it. The interesting part is between these two steps: We must walk through the lines of IL Assembler Language code, finding the void methods, their last .locals init line, and one ret line. A message can contain more 4-byte blocks than there are void methods in the file, so we have to count the methods and calculate the number of bytes to hide in each of them. The method Analyse collects namespaces, classes and void methods:
.locals init
Analyse
/// <summary>Lists namespaces, classes and methods
/// with return type "void"</summary>
/// <param name="fileName">Name of the IL file to analyse</param>
/// <param name="namespaces">Returns the names of all namespaces
/// found in the file</param>
/// <param name="classes">Returns the names of all classes</param>
/// <param name="voidMethods">Returns the first lines of all method
/// signatures</param>
public void Analyse(String fileName,
out ArrayList namespaces, out ArrayList classes,
out ArrayList voidMethods){
//initialize return lists
namespaces = new ArrayList(); classes = new ArrayList();
voidMethods = new ArrayList();
//current method's header, or null if the method doesn't return "void"
String currentMethod = String.Empty;
//get the IL file line-by-line
String[] lines = ReadFile(fileName);
//loop over the lines of the IL file, fill lists
for(int indexLines=0; indexLines<lines.Length; indexLines++){
if(lines[indexLines].IndexOf(".namespace ") >= 0){
//found a namespace!
namespaces.Add( ProcessNamespace(lines[indexLines]) );
}
else if(lines[indexLines].IndexOf(".class ") >= 0){
//found a class!
classes.Add( ProcessClass(lines, ref indexLines) );
}
else if(lines[indexLines].IndexOf(".method ") >= 0){
//found a method!
currentMethod = ProcessMethod(lines, ref indexLines);
if(currentMethod != null){
//method returns void - add to the list of usable methods
voidMethods.Add(currentMethod);
}
}
}
}
Given the number of usable methods, we can calculate the number of bytes per method:
//length of Unicode string + 1 position for this length
//(it is hidden with the message)
float messageLength = txtMessage.Text.Length*2 +1;
//bytes to hide in each method, using only its first "ret" instruction
int bytesPerMethod = (int)Math.Ceiling( (messageLength /
(float)voidMethods.Count));
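A quick sanity check of that calculation, with made-up numbers: hiding the five-character message "Hello" in an assembly with three usable void methods gives a stream length of 5*2+1 = 11 bytes, so ceil(11/3) = 4 bytes have to go into each method:

```csharp
using System;

class CountDemo {
    static void Main() {
        string message = "Hello";
        int voidMethodCount = 3;  // assumed number of usable void methods
        // Unicode string length in bytes, plus one position for the length itself
        float messageLength = message.Length * 2 + 1;
        int bytesPerMethod =
            (int)Math.Ceiling(messageLength / (float)voidMethodCount);
        Console.WriteLine(bytesPerMethod);  // prints "4"
    }
}
```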
Now we are ready to begin. The method HideOrExtract uses the value of bytesPerMethod to insert the lines for one or more 4-byte blocks above the first ret instruction of each usable method.
HideOrExtract
bytesPerMethod
/// <summary>Hides or extracts a message in/from an IL file</summary>
/// <param name="fileNameIn">Name of the IL file</param>
/// <param name="fileNameOut">Name for the output file -
/// ignored if [hide] is false</param>
/// <param name="message">Message to hide, or empty stream to
/// store extracted message</param>
/// <param name="hide">true: hide [message]; false: extract
/// a message</param>
private void HideOrExtract(String fileNameIn, String fileNameOut,
Stream message, bool hide){
if(hide){
//open the destination file
FileStream streamOut = new FileStream(fileNameOut, FileMode.Create);
writer = new StreamWriter(streamOut);
}else{
//count of bytes hidden in each method is unknown,
//it will be the first value to extract from the file
bytesPerMethod = 0;
}
//read the source file
String[] lines = ReadFile(fileNameIn);
//no, we are not finished yet
bool isMessageComplete = false;
//loop over the lines
for(int indexLines=0; indexLines<lines.Length; indexLines++){
if(lines[indexLines].IndexOf(".method ") > 0){
//found a method!
if(hide){
//hide as many bytes as needed
isMessageComplete = ProcessMethodHide(lines,
ref indexLines, message);
}else{
//extract all bytes hidden in this method
isMessageComplete = ProcessMethodExtract(lines,
ref indexLines, message);
}
}else if(hide){
//the line does not belong to a useable method - just copy it
writer.WriteLine(lines[indexLines]);
}
if(isMessageComplete){
break; //nothing else to do
}
}
//close writer
if(writer != null){ writer.Close(); }
}
The method ProcessMethodHide copies the method's header, and checks if the return type is void. Then it looks for the last .locals init line. If no .locals init is found, the additional variable will be inserted at the beginning of the method. The hidden variable must be the last variable initialized in the method, because the compilers emitting IL Assembler Language often use slot numbers instead of names for local variables. Just imagine a desaster like that:
ProcessMethodHide
//a C# compiler emitted this code - it adds 5+2
//original C# code:
//int x = 5; int y = 2;
//mystaticval = x+y;
.locals init ([0] int32 x, [1] int32 y)
IL_0000: ldc.i4.5
IL_0001: stloc.0
IL_0002: ldc.i4.2
IL_0003: stloc.1
IL_0004: ldloc.0
IL_0005: ldloc.1
IL_0006: add
IL_0007: stsfld int32 Demo.Form1::mystaticval
IL_000c: ret
If we inserted an initialization at the beginning of the method, we could not re-assemble the code, because slot 0 is already in use by myvalue:
myvalue
.locals init (int32 myvalue)
.locals init ([0] int32 x, [1] int32 y) //Error!
IL_0000: ldc.i4.5
IL_0001: stloc.0
...
So the additional local variables has to be initialized after the last existing .locals init. ProcessMethodHide inserts a new local variable, jumps to the first ret line and inserts ldc.i4/stloc pairs. The first value being hidden is the size of the message stream - the extracting method needs this value in order to know when to stop. The last value hidden in the first method is the count of message-bytes per method. It has to be placed right above the ret line, because the extracting method has to find it without knowing how many lines to go back (because that depends on just this value).
/// <summary>Hides one or more bytes from the message stream
/// in the IL file</summary>
/// <param name="lines">Lines of the IL file</param>
/// <param name="indexLines">Current index in [lines]</param>
/// <param name="message">Stream containing the message</param>
/// <returns>true: last byte has been hidden; false:
/// more message-bytes waiting</returns>
private bool ProcessMethodHide(String[] lines, ref int indexLines,
Stream message){
bool isMessageComplete = false;
int currentMessageValue, //next message-byte to hide
positionInitLocals, //index of the last ".locals init" line
positionRet, //index of the "ret" line
positionStartOfMethodLine; //index of the method's first line
writer.WriteLine(lines[indexLines]); //copy first line
//ignore if not a "void"-method
if(lines[indexLines].IndexOf(" void ") > 0){
//found a method with return type "void"
//the stack will be empty at it's end,
//so we can insert whatever we like
indexLines++; //next line
//search start of method block, copy all skipped lines
int oldIndex = indexLines;
SeekStartOfBlock(lines, ref indexLines);
CopyBlock(lines, oldIndex, indexLines);
//now we are at the method's opening bracket
positionStartOfMethodLine = indexLines;
//go to first line of the method
indexLines++;
//get position of last ".locals init" and first "ret"
positionInitLocals = positionRet = 0;
SeekLastLocalsInit(lines, ref indexLines, ref positionInitLocals,
ref positionRet);
if(positionInitLocals == 0){
//no .locals - insert line at beginning of method
positionInitLocals = positionStartOfMethodLine;
}
//copy from start of method until last .locals,
//or nothing (if no .locals found)
CopyBlock(lines, positionStartOfMethodLine, positionInitLocals+1);
indexLines = positionInitLocals+1;
//insert local variable
writer.Write(writer.NewLine);
writer.WriteLine(".locals init (int32 myvalue)");
//copy rest of the method until the line before "ret"
CopyBlock(lines, indexLines, positionRet);
//next line is "ret" - nothing left to damage on the stack
indexLines = positionRet;
//insert ldc/stloc pairs for [bytesPerMethod] bytes
//from the message stream
//combine 4 bytes in one Int32
for(int n=0; n<bytesPerMethod; n+=4){
isMessageComplete = GetNextMessageValue(message,
out currentMessageValue);
writer.WriteLine("ldc.i4 "+currentMessageValue.ToString());
writer.WriteLine("stloc myvalue");
}
//bytesPerMethod must be last value in the first method
if(! isBytesPerMethodWritten){
writer.WriteLine("ldc.i4 "+bytesPerMethod.ToString());
writer.WriteLine("stloc myvalue");
isBytesPerMethodWritten = true;
}
//copy current line
writer.WriteLine(lines[indexLines]);
if(isMessageComplete){
//nothing read from the message stream, the message is complete
//copy rest of the source file
indexLines++;
CopyBlock(lines, indexLines, lines.Length-1);
}
}
return isMessageComplete;
}
The method ProcessMethodExtract looks for the first ret line. If the number of bytes hidden in each method is still unknown, it jumps two lines back and extracts the number from the ldc.i4 line, which had been inserted as the last value in the first method. Otherwise it jumps back two lines per expected ldc.i4/stloc-pair, extracts the 4-byte blocks and writes them to the message stream. If an ldc.i4 is not found where it should be, the method throws an exception. The second extracted value (after the number bytes per method) is the length of the following message. When the message stream has reached this expected length, the isMessageComplete flag is set, HideOrExtract returns, and the extracted message is displayed. Extracting works just like hiding in reverse direction.
ProcessMethodExtract
ldc.i4
isMessageComplete
Sure you'll have noticed that this application doesn't use a key file to distribute the message. An intermediate assembly contains less void methods than an intermediate sentence contains characters, so a distribution key as it is used in all preceeding articles would only mean pushing loads of additional nonsense-lines into a few methods, and that would be much too obvious.A key file for this application could specify how to disguise the values - debug output, operations, instance fields, additional methods, and so on. I'll add such a feature in future versions, if somebody is interested in it.. | http://www.codeproject.com/Articles/5499/Steganography-VI-Hiding-messages-in-NET-Assemblies | CC-MAIN-2017-17 | refinedweb | 2,229 | 55.34 |
Overriding hashCode() Equal() contract
Every Java object has two very important methods i.e. hashCode() and an equals() method. These methods are designed to be overridden according to their specific general contract. This article describes why and how to override the hashCode() method that preserves the contract of HashCode while using HashMap, HashSet or any Collection.
Contract For HashCode
The contract for hashCode says
If two objects are equal, then calling hashCode() on both objects must return the same value.
Now the question that should come into your mind is that; is it necessary that the above statement should always be true?
Consider the fact that we have provided a correct implementation of equal function for our class, then what would happen if we do not obey the above contract.
To answer the above question, let us consider the two situations,
- Objects that are equal but return different hashCodes
- Objects that are not equal but return the same hashCode
Objects that are equal but return different hashCodesObjects that are equal but return different hashCodes
What would happen if the two objects are equal but return different hashCodes? Your code would run perfectly fine. You will never come in trouble unless and until you have not stored your object in a collection like HashSet or HashMap. But when you do that, you might get strange problems at runtime.
To understand this better, you have to first understand how collection classes such as HashMap and HashSet work. These collections classes depend on the fact that the objects that you put as a key in them must obey the above contract. You will get strange and unpredictable results at runtime if you do not obey the contract and try to store them in a collection.
Consider an example of HashMap. When you store the values in HashMap, the values are actually stored in a set of buckets. Each of those buckets has been assigned a number which is use to identify it. When you put a value in the HashMap, it stores the data in one of those buckets. Which bucket is used depends on the hashCode that will return by your object. Let’s say, if hashCode() method return 49 for an object, then it gets stored into the bucket 49 in the HashMap.
Later when you try to check whether that collection contains element or not by invoking Contains(element) method, the HashMap first gets the hashCode of that “element “. Afterwards it will look into the bucket that corresponds with the hashCode. If the bucket is empty, then it means we are done and its return false which means the HashMap does not contain the element.
If there are one or more objects in the bucket, then it will compare “element” with all other elements in that bucket using your defined equal() function.Objects that are not equal but return the same hashCode
The hashCode contract does not say anything about the above statement. Therefore different objects might return the same hashCode value, but collections like HashMap will work inefficiently if different objects return the same hashCode value.
Why Buckets
The reason why bucket mechanism is used is its efficiency. You can imagine that if all the objects you put in the HashMap would be stored in to one big list, then you have to compare your input with all the objects in the list when you want to check if a particular element is in the Map. With the use of buckets, you will now campare only the elements of specific bucket and any bucket usually holds only a small portion of all the elements in the HashMap.
Overriding hashCode Method
Writing a good hashCode() method is always a tricky task for a new class.Return Fixed Value
You can implement your hashCode() method such that you always return a fix value, for example like this:
//bad performance
@Override
public int hashCode() {
return 1;
}
The above method satisfies all the requirements and is considered legal according to the hash code contract but it would not be very efficient. If this method is used, all objects will be stored in the same bucket i.e. bucket 1 and when you try to ensure whether the specific object is present in the collection, then it will always have to check the entire content of the collection.
On the other hand if you override the hashCode() method for your class and if the method breaks the contract then calling contains() method may return false for the element which is present in the collection but in a different bucket.Method From Effective Java
Joshua Bloch in Effective Java provides an good guidelines for generating an hashCode() value
1. Store some constant nonzero value; say 17, in an int variable called result.
2. For each significant field f in your object (each field taken into account by the equals( )), do the following
a. Compute an int hashCode c for the field:
i. If the field is a boolean, compute
c = (f ? 1 : 0).
ii. If the field is a byte, char, short, or int, compute c = (int) f.
iii. If the field is a long, compute c = (int) (f ^ (f >>> 32)).
iv. If the field is a float, compute c = Float.floatToIntBits(f).
v. If the field is a double, compute
long l = Double.doubleToLongBits(f),
c = (int)(l ^ (l >>> 32))
vi. If the field is an object reference then equals( ) calls equals( ) for this field. compute
c = f.hashCode()
vii. If the field is an array, treat it as if each element were a separate field.
That is, compute a hashCode for each significant element by applying above rules to each
element
b. Combine the hashCode c computed in step 2.a into result as follows:
result = 37 * result + c;
3. Return result.
4. Look at the resulting hashCode() and make sure that equal instances have equal hash codes.
public class HashTest {
private String field1;
private short field2;
----
@Override
public int hashCode() {
int result = 17;
result = 37*result + field1.hashCode();
result = 37*result + (int)field2;
return result;
}
}
You can see that a constant 37 is chosen. The purpose to choose this number is that it is a prime number. We can choose any other prime number. Using prime number the objects will be distributed better over the buckets. I encourage user to explore the topic further by checking out other resources.Apache HashCodeBuilder
Writing a good hashCode() method is not always easy. Since it can be difficult to implement hashCode() correctly, it would be helpful if we have some reusable implementations of these.
The Jakarta-Commons org.apache.commons.lang.builder package is providing a class named HashCodeBuilder which is designed to help implementing hashCode() method. Usually developers struggle hard with implementing hashCode() method and this class aims to simplify the process.
Here is how you would implement a hashCode algorithm for our above class
public class HashTest {
private String field1;
private short field2;
----
@Override
public int hashCode() {
return new HashCodeBuilder(83, 7)
.append(field1)
.append(field2)
.toHashCode();
}
}
Note that the two numbers for the constructor are simply two different, non-zero, odd numbers - these numbers help to avoid collisions in the hashCode value across objects.
If required, the superclass hashCode() can be added using appendSuper(int).
You can see how easy it is to override HashCode() using Apache HashCodeBuilder.
Mutable Object As Key
It is a general advice that you should use immutable object as a key in a Collection. HashCode work best when calculated from immutable data. If you use Mutable object as key and change the state of the object so that the hashCode changes, then the store object will be in the wrong bucket in the Collection
The most important thing you should consider while implementing hashCode() is that regardless of when this method is called, it should produces the same value for a particular object every time when it is called. If you have a scenario like the object produces one hashCode() value when it is put() in to a HaspMap and produces another value during a get(), in that case you would not be able to retrieve that object. Therefore, if you hashCode() depends on mutable data in the object, then made changing those data will surely produce a different key by generating a different hashCode().
Look at the example below
public class Employee {
private String name;
private int age;
public Employee() {
}
public Employee(String name, int age) {
this.name = name;
this.age = age;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
@Override
public boolean equals(Object obj) {
//Remember: Some Java gurus recommend you avoid using instanceof
if (obj instanceof Employee) {
Employee emp = (Employee)obj;
return (emp.name == name && emp.age == age);
}
return false;
}
@Override
public int hashCode() {
return name.length() + age;
}
public static void main(String[] args) {
Employee e = new Employee("muhammad", 24);
Map<Object, Object> m = new HashMap<Object, Object>();
m.put(e, "Muhammad Ali Khojaye");
// getting output
System.out.println(m.get(e));
e.name = "abid";
// it fails to get
System.out.println(m.get(e));
e.name = "amirrana";
// it fails again
System.out.println(m.get(new Employee("muhammad", 24)));
}
}
So you can see in the above examples that how we are getting some unpredictable results.
You can easily fix the above by overriding the hashCode() using either Joshu Recipe or using HashCodeBuilder class.
Here is an example,Joshu Recommendation
@OverrideUsing HashCodeBuilder
public int hashCode() {
int result = 17;
result = 37*result + name.hashCode();
result = 37*result + age;
return result;
}
@Override
public int hashCode() {
return new HashCodeBuilder(83, 7)
.append(name)
.append(age)
.toHashCode();
}
Another Example of Mutable Field as Key
Let consider the example
public class HashTest {
private int mutableField;
private final int immutableField;
public HashTest(int mutableField, int immutableField) {
this.mutableField = mutableField;
this.immutableField = immutableField;
}
public void setMutableField(int mutableField) {
this.mutableField = mutableField;
}
@Override
public boolean equals(Object o) {
if(o instanceof HashTest) {
return (mutableField == ((HashTest)o).mutableField)
&& (immutableField == ((HashTest)o).immutableField);
}else {
return false;
}
}
@Override
public int hashCode() {
int result = 17;
result = 37 * result + this.mutableField;
result = 37 * result + this.immutableField;
return result;
}
public static void main(String[] args) {
Set<HashTest> set = new HashSet<HashTest>();
HashTest obj = new HashTest(6622458, 626304);
set.add(obj);
System.out.println(set.contains(obj));
obj.setMutableField(3867602);
System.out.println(set.contains(obj));
}
}
After changing mutableField, the computed hashCode is no longer pointing to the old bucket and the contains() returns false.
We can tackle such situation using either of these methods
- Hashcode is best when calculated from immutable data; therefore ensure that only immutable object would be used as key with Collections.
- Implement the hashCode() using our first technique i.e. return a constant value but you must aware that it would kills all those advantage of bucket mechanism.
- If you need mutable fields included in the hashCode method then you can calculate and store the hash value when the object is created and whenever you update mutable field, you must first remove it from the collection(set/map) and then add it back to the collection after updating it.
References and More Information
- See more at:
Yair Ogen replied on Tue, 2009/10/27 - 2:19am
Florent Ramiere replied on Tue, 2009/10/27 - 3:44am
Slava Lo replied on Tue, 2009/10/27 - 5:22am
Is it a bug in Employee.equals method implementation when 'name' field is compared using reference equality ?
Should it really be:35. return (name.equals(emp.name) && emp.age == age);
?
Muhammad Khojaye replied on Tue, 2009/10/27 - 6:19am
in response to:
Slava Lo
Yes. It should compare with equal, otherwise
e = new Employee(new String("muhammad"), 24);
would fail to get.
Developer Dude replied on Tue, 2009/10/27 - 11:10am
Thanks. I have used EqualsBuilder and HashCodeBuilder for years, in part due to collections issues, but I had not thought about what happens when you change a mutable object that is in a collection..
I am not saying my way is optimum, but it works for the apps I write.
Unfortunately, most of the value objects I have seen written by others, including many that wind up in collections, rarely have equals() or hashcode() overriden, and if they do it is usually done improperly. It is sad that such basic best practices and principles that have been known for years are ignored (even unknown) while everyone tries to follow all the bleeding edge tech and methodologies/paradigms.
Aljoscha Rittner replied on Tue, 2009/10/27 - 12:32pm
In Netbeans we have an equals/hashcode wizard, too.
br, josh.
Alex Miller replied on Wed, 2009/10/28 - 8:29am
Amit Jagtap replied on Wed, 2009/11/04 - 4:20am
Hi,
Great article. Thanks for posting this.
To counter the - Objects that are not equal but return the same hashCode
I was looking at the way hashcode value is computed for String objects and found it uses an almost similar formula, the one described by Joshua Bloch in Effective Java. Can we take advantage of String class's hashcode implementation for computing our hashcode? Some thing like the following -
This check in addendum to the above discussed can help in circumventing the not-equal-same-hashcode situtation.I would like to hear your thoughts on this.
thanks
Piotr Powalowski replied on Wed, 2009/11/04 - 7:27am
Mohamed El-beltagy replied on Wed, 2009/11/04 - 8:12am
Reading the code above in the "Mutable Object As Key" section and Joshu reciepe for fixing it, I noticed that Joshu relys on the String.hasCode() implementation. So I decided to check if a reference to two different strings would generate the same hashCode (which is of course non logic). Here's the code I tried:
I printed both the string's hashCode and the hashCode implementation's result using Joshu receipe and, as expected, got different values. Of course, using any of these hashCodes for storing into a hash collection will mean storing two differenet items in the collection.
So, depending on a String's hashcode while calculating mutable objects in a hashed collection would break the contract anyway. Unless of course, the string we are using in our calculations would not change once initiated in our object.
BTW, depending on a string that might change in an equal comparison, would break the equality contract as well.
Thinking of Amit suggesstion, this.toString().hashCode(), we have three possible scenarios:
1- The same VM: would work fine.
2- Two different VMs (for example, serialization using RMI or clustered environment): the result would change as the default toString method returns the object's full class name appended with the its full name's hash code. The toString() default implementation, accourding to Java API documentation, is
And the hashCode() according to the Java doc says:So, based on the dependency on the internal address of the object and due to the fact that we are on two different VMs; default hashCode would generate different values. (Analytically speaking. I hope someone with real life practice would share his info with us and correct me if I'm wrong).
3- toString() method is overridden with custom impl. In this case, if the string returned is always the same, it would generate the same hash code, otherwise....
As a conclusion, IMHO and generally speaking, if the string we are using in our calculations would not change once initiated in our object; we are safe. Otherwise, do not depend on it.
Or in other words, to implement hashCode method always depend on unmodifiable state of the object.
Another solution could be to store the hash code once generated and always return it afterwords. Haven't thought of the implefications of such a solution, but I guss if the hashed collection would only use the hash code to select the bucket, then we are safe with this solution (as it still have to call the equals method to all keys in that bucket to decide if the two keys are equal).
From the Java API docs:
According to the last point, and as a response to the next case you mentioned (two unequal objects return the same hash code), it would improve the performance of hash tables. But that does not mean that it will behave in a wrong way (as the hashed collection would still call the equals method against all elements in that bucket).
Note: I did not try HashCodeBuilder from Apache
Frank Lemke replied on Sun, 2012/04/15 - 10:54am
Shwetank Sharma replied on Tue, 2013/05/28 - 3:49pm
hi Muhammad,
In Topic "Mutable object as key" , You describe that why mutable object are not working with HashMap, when we are changing object name its hashcode also changed.
But after that you provided a solution for above problem your statement:
You can easily fix the above by overriding the hashCode() using either Joshu Recipe or usingHashCodeBuilder class.
But by using both solution which you suggest, not providing solution, hash map get method is is returning null.
my opinion: HashMap is always work with immutable object otherwise it will be return null while we will use mutable object & changes its property.
Muhammad Khojaye replied on Thu, 2013/07/18 - 5:53pm
in response to:
Shwetank Sharma
Yes you are right and It mention in another example in the article "Another Example of Mutable Field as Key ". In order to work with Mutable field as key, you can calculate and store the hash value when the object is created and whenever you update mutable field, you must first remove it from the collection(set/map) and then add it back to the collection after updating it.
Kushal Bajaj replied on Tue, 2013/07/23 - 4:10am
the best explanation over www .thanks mate
Lokesh Gupta replied on Wed, 2013/07/24 - 2:13am
I am in great confusion now. How the default hashcode() method compute the hashCode for any object. Is it internal structure, or its state or its memory location. What's it?? | http://java.dzone.com/articles/java-hashing | CC-MAIN-2014-41 | refinedweb | 3,019 | 60.35 |
See also: IRC log
<dhull> scribe:dhull
Glen: Adding embedded WSDL is good, metadata is maybe dodgy
Chair: Metadata discussion is secondary
Marsh: different idea of how WSDL
material should be split amongst docs. Extensibility for WSDL
is core, actual extensions incorporating WSDL is WSDL.
... Or (MarcH) everything WSDL goes into WSDL.
Marsh: Would be nice to have
extensions to the core already in.
... It would also be nice to have just one conformance statement per doc. E.g., WSDL conformance covered solely in WSDL conformance doc.
... Should supporting service name etc. be part of core WSA and its schema? Yes.
GlenD: But if it's extensibility, it doesn't matter.
MarcH: If this is in metadata, we
don't need anything special for validation as it's open
content
... Extensibility is everywhere anyway.
Marsh: This is an example of
extensibility, so should be mentioned with extensions ...
... Or you could say it's WSDL, so it goes with WSDL. Depends on which word you emphasize.
MarcH, GlennD: Belongs more in the WSDL (Glen: It's more than just an example of extensibility. It's a full-on WSDL extension)
Anish: Also see it as outside core. It's not absolutely essential.
dhull: WSDL has special status in
WS arch.
... Otherwise lean toward "outside core"
marsh: Inconsistency in namespace of properties (?)
marcH: Noted
GlenD: You can validate without knowing what the data means.
Marsh: Strange to have a schema span specification docs
GlenD: Basic question of how to
modularize.
... Art, not science
Marsh: Still haven't heard compelling reason for split.
Umit: Is your problem basically that this is done in the wsa: namespace.
Marsh: Yes. They're defined in a totally separate document from the wsa: core
GlenD: No harm in having them sitting in the namespace
Marsh: No harm in having them in the core.
Paco: So, do we really want a schema just for this?
Marsh: Timing issues too, because WSDL 2.0 isn't done.
TonyR: You're anticipating split between WSDL1.1 and WSDL2.0 binding specs.
Marsh: Even more of an issue then. What goes in which spec?
MarcH: Does it matter?
... Happy to have everything in the same schema, specified in WSDL docs.
Anish: Core spec can refer to
WSDL1.1 binding spec, then later WSDL 2.0 (?)
... I could use non-SOAP or non-WSDL and still use core. So this is good (?)
Umit: Don't like the idea of same concept in two different namespaces.
(Anish: This == splitting among separate namespaces (?))
GlenD: WSDL 1.1 carries more extra stuff than WSDL2.0
Anish: Abstract properties are
independent of WSDL version.
... Don't need to invent three different namespaces. One ns for abstract properties. Concrete properties stay in their present ns.
Umit: Are you suggesting two different ns tags for WSDL1.1 and WSDL2.0
Anish: No. Not for abstract
properties. We already have different sections for the
bindings.
... No need to invent new ns.
Chair: Does anyone feel really strongly about this? Seems like editorial tasks.
Marsh: Not just editorial. May need new namespace, which is more than editorial. Implications for conformance.
Anish: Agree not purely editorial.
Marsh: Right now we're stuck in the middle. Need to choose.
<umit> +1 it is not editorial
Anish: Core and binding are
redundant now.
... By putting this in core, WSDL is not just another DL.
... Conformance cuts both ways. Easier for WSDL people, harder for non WSDL if it's together.
Marsh: If core is WSDL independent, would new NS be OK for shortcut properties (service name, selected interface)?
Anish: That's fine.
Marsh: OK with me, but then we need to be consistent: No WSDL in core at all.
+1 to consistent, no WSDL in core
Chair: Different ns from other WSDL stuff
Marsh: (reluctant) yes.
Umit: Observes that this is all a bit ironic (scribe didn't quite catch the drift)
Chair: Straw poll?
<umit> Today we have these two properties in the core regardless of Issue 24 and issue 26 anyway.
Chair: Acceptable to move WSDL machinery to WSDL doc, and to put version-independent WSDL stuff to new namespace?
TonyR: Relevant to WSDL1.1/2.0 split?
?: Looks orthogonal.
Marsh: Cancel straw poll.
Chair: Is there another proposal?
Marsh: Yes. Friendly amendment to Vinoski.
GlenD: If we move things back into core, then we need to move more stuff back in.
Chair: Poll: WSDL in WSDL doc or WSDL in core.
Core: 1, WSDL: 10, Abstain: 11
Omnes: Abstentions win!
RESOLUTION: Make sure that WSDL moves to WSDL, new ns for abstract properties.
Chair: Comments on embedded WSDL.
Anish: Really like it. Will this now be normative (in WSDL doc)? Prefer normative part of EPR-related 1.1 extensions, instead of (present) non-normative status.
Marsh: This is getting into
conformance.
... Is it important to keep names consistent with embedded WSDL?
Anish: It's all extensible. We don't prevent people from doing funky things.
Marsh: There's no MUST here.
Anish: If you inline WSDL you MUST do it this way.
Marsh: MUST what?
Anish: MUST put it in metadata.
GlenD: We say what it means IF it's in the metadata
Anish: Currently it's given as an example.
Marsh: Having example is helpful for interop but no MUST
GlenD: We do tie ourselves to
WSDL for this use case.
... Could also have gone with pure WSA.
Marsh: Not precluded.
Anish: Right now port is optional,
service is required. You may have multiple ways of getting at
the same definition (?)
... Can always provide non-normative examples with specific qnames.
Marsh: Right now no MUSTs.
... Would be good in an appendix.
Anish: Can do that. Here's an example, this specific problem solved this specific way. But inlining WSDL is broader.
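A hedged sketch of the kind of inlined-WSDL EPR under discussion — all element names and namespace URIs below are illustrative placeholders, not the group's agreed syntax:

```xml
<!-- Illustrative only: a fictitious EPR whose metadata carries both the
     "shortcut" service/interface properties and an inlined WSDL document.
     Every name and URI here is a placeholder. -->
<EndpointReference xmlns="http://example.org/ns/addressing"
                   xmlns:w="http://example.org/ns/wsdl"
                   xmlns:tns="http://example.org/stockquote">
  <Address>http://example.org/stockquote/service</Address>
  <Metadata>
    <!-- Shortcut properties: which service/port of the WSDL this EPR means -->
    <ServiceName PortName="QuotePort">tns:StockQuoteService</ServiceName>
    <!-- The inlined WSDL itself, sitting in the open (extensible) metadata;
         its service/port names would normally match the shortcuts above -->
    <w:definitions targetNamespace="http://example.org/stockquote">
      <!-- ... types, messages, portType, binding, service ... -->
    </w:definitions>
  </Metadata>
</EndpointReference>
```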
GlenD: In..ter.esting. What do
you do when you see inline WSDL with multiple endpoints. As
alternates?
... There are various ways to interpret (alternatives, fallback, etc.) Is this a WSDL issue or WSA issue?
Anish: Same issues arise outside WSA. Could solve it in WSDL. Could also solve it as special WSA marker.
Vinoski: Could spend a lifetime trying to specify this.
GlenD: So leave it up to context?
Vinoski: Yes. Leave it up to WSDL.
<dorchard> doesn't seem like a WSDL issue as embedded WSDL is different than standalone WSDL. embedding wsdl will constrain the embedded wsdl.
GlenD: +1
<dorchard> it's a ws-a issue..
(GlenD: +1 to vinoski)
Chair: Options: 1) Appendix, non norm. 2) Appendix, but normative?
Anish: Appendices are fine. We can have as many as we like.
Several: It's editorial.
MarcH: OK. I'll take a shot.
Marsh: MUST service/selected interface elements match embedded WSDL. Less pressing if it's all in a separated doc.
MarcH: Could say service/selected is pointing to alternative to embedded. Not recommending this.
Anish: May contain multiple service endpoints. Is that bad?
Vinoski: No reason to preclude.
GlenD: Do we still have selector for service qname? If so would not matter so much.
Anish: Right. Embedded is
additional information.
... E.g. for later
Glen: or naive WSDL processor.
Marsh: Pick whatever you want from embedded and other WSDL (?)
GlenD: Important to be able to indicate that a bit of extensibility is important (wsa:mustUnderstand).
Chair: Roy has an interesting take on this.
Anish: So do we say this means whatever you want it to mean?
Marsh: If you put in hints, they apply to the EPR (?)
GlenD: What if embedded conflicts with something I already know about?
Marsh: How do you reconcile.
Chair: We don't deal with that.
Umit: We can only say that embedded info is self-consistent. Can't go farther.
GlenD: But processors can decide to ignore embedded WSDL.
Paco: Incumbent on minter of EPR (umit: yes)
umit: Client is guaranteed to get consistent info.
GlenD: Guarantee how?
Umit: We require it.
GlenD: Too much to ask (?)
Chair: What if I later add an extension that updates metadata.
Umit: Extension can violate anything.
JeffM: Can't violate spec and still conform.
TonyR: Are you suggesting mustUnderstand?
GlenD: yes.
Chair: Not bringing that up now.
Umit: Need guarantee of consistency (?)
Marsh: What is harm if we don't have MUST (be consistent)?
Vinoski: Can't use EPR then. So what? Will there be policing action?
Marsh: And if so, will that be restrictive?
Anish: If I mint an EPR with a WSDL and the service doesn't implement it, is that an error?
Marsh: EPR is unusable.
Paco: WSDL places restrictions.
Embedded WSDL should be consistent (?)
... Don't want to preclude that. Better not to require consistency.
... May be additional contracts.
Chair: Related to issue 14.
Paco: Enforcing consistency may preclude scenarios we haven't thought of yet.
Anish: Independent of inlining of WSDL?
Paco: Yes.
Anish: Not specific to embedded WSDL.
Umit: Not comfortable with inconsistent WSDL (?)
marsh, Chair: Change MUST match in (non-normative!) text to SHOULD or should?
<pauld> discussion of relationship with i014: and i020
Anish: issue of what does it mean to have WSDL-related data in EPR.
Chair: Suggest changing MUST to should.
Anish: Thought this was normative.
Marsh: MUST be downgraded to SHOULD.
Paco: Is this requirement on EPR
minter?
... Clients shouldn't have to check consistency.
Marsh: Consumer's behavior if
didn't match is undefined. Burden is on minter.
... Non-normative appendix to core -> normative appendix to WSDL, s/MUST/SHOULD/
Chair: Proposal is <above> + move WSDL material from core to WSDL.
Anish: Current text has selected endpoint etc. outside metadata. Doesn't matter if we're moving.
Chair: Core right now is unstructured set of properties.
MarcH: WSDL document will
describe instances of Metadata.
... Metadata is a bag.
GlenD: What is functional difference between inside/outside metadata bag?
Chair: (-hat) If it's just for convenience, what do we say normatively about it (?)
Paco: Question is how do people feel about it.
GlenD: Feel it's silly, want to know what value it adds.
MarcH: Against metadata property or metadata syntax?
GlenD: Both. Would be weird to
have mismatch between the two.
... What semantic do we gain here?
Paco: Hopefully people know why it's there.
GlenD: Question is why separate bucket. Why not just put things in the EPR.
GlenD, Marsh: Metadata may have effects on what's on the wire (so not so distinct from properties?)
Paco: May not show on wire at all.
GlenD: Obviously a tricky subject, but what is value of distinction.
Marsh: Algorithm: If there is metadata about endpoint, put in metadata. If about EPR put outside.
Umit: Exactly.
Anish: It's the identifier.
Omnes: Anish, leave the room!
Chair: data/metadata is well-known tarpit. Can we focus on data model.
Marsh: Plan of record has redundant information (selected/service)
GlenD: Actually not. Metadata would be /descriptions/
Marsh: As it stands, selected/service would be in metadata.
GlenD, Paco: not good.
Paco: Ref props is also a container. One way is to say the interface/service are refprops (?)
Marsh: Two ways to resolve redundancy. (remove either part)
Glen: Either top level as properties, or in metadata.
Chair: Charter doesn't require either way (had thought it might)
GlenD: want one or the other but not both (for selected etc.)
Anish: This would appear in metadata only (per split of WSDL from core (?))
Chair: Data model has them as root properties.
marsh: 4 possibilities ...
Anish: If we're using only XML then why do we need abstract properties (why not just infoset).
<pauld>
<pauld> PDF:
Chair: Straw poll: 1) accept duplication 2)special-case WSDL properties so they can't be in metadata 3) WSDL metadata isn't expressed as properties, just in metadata. 4) ....
Anish: what's difference between abstract property and infoset.
Marsh: sort of mixed.
Chair: 4) No metadata property;
metadata children define their own top-level props
... 5) put WSDL props at top level, not in metadata bucket.
marsh: 5 is closer to status quo, that's about all there is to recommend it.
Chair: 6) WSDL metadata is a subproperty of the metadata property.
umit: What are we enabling by these different choices.
glenD: we had this discussion in WSDL, decided not to go there
Anish: Here we've decided this is XML 1.0. WSDL didn't. Why do we need extra layer?
GlenD: It made sense structurally in WSDL, may here. Infoset does not always map to same abstract structure.
Anish: Here abstract model contains some infoset. WSDL model is completely abstract
Marsh: is it?
Anish: maybe not
Marsh: property values may be XML in WSDL
Glen: Abstract model makes coding easier. Don't need to worry about serialization as XML.
Anish: Infoset isn't abstract enough for you?
Umit: right
GlenD: You have a point
Marsh: Abstraction is good for EPR -> message rules
Anish: Can do same with infoset.
Hugo: SOAP features involved, too.
Several: Can we get every possible issue in here while we're at it?
Chair: Straw poll to narrow discussion?
Omnes: OK
<enter the TAG>
1: 0
2: 0
3: 13
4: 5
6: 5
5: clarify
GlenD: 4 is good because it leads to getting rid of metadata
dhull: point
TAG: Summary of raison d'etre
<DanC_> TAG issue endPointRefs-47: WS-Addressing SOAP binding & app protocols
noah: There is a separate issue
(to TAG) of how are URI being used
... Beyond strict issue 47
<mnot>
(presentation by PaulD -- see link)
<DanC_> (the analogy to email from/to/reply-to is completely new to me, and explains quite a lot. is it in the WG charter? did I miss it?)
<DanC_> (aha... it was there... I missed it... "# Abstract properties to identify subsequent destinations in the message exchange, including: * the reply destination" -- )
<DanC_> +1 "I dunno what metadata is. it's just stuff"
<dorchard> I did a comparison of ws-md to ws-a at
<dorchard> Dan, specs that I can think of: WS-Reliability, WS-reliablemessaging, ws-eventing, ws-notification, ws-transfer, ws-resourceframework, ws-coordination, ws-enumeration, and more...
<DanC_> er... so how does ws-notification use ws-addressing? I only learned which end of ws-addressing is up 5 minutes ago. I don't know which end of ws-notification is up.
<DanC_> timeline slide... wonderful!
<dorchard> in the eventing systems, like ws-n and ws-e, there is a "subscribe" message that returns a subscription structure that contains an EPR. It may use "ReplyTo" + "FaultTo".
<dims> suppose someone subscribes to a certain topic, they tell the server where (using ws-addressing) to send notifications
<DanC_> ok, that's a good example. is that in a ws-addressing use cases doc? (please? ;-)
<dorchard> nah, no use cases or reqs doc.
<anish> There is also a comparison of WS-MessageDelivery and WS-Addressing at
<DanC_> ("Addressing Infoset" slide... hmm... looks a lot like RDF. I wonder if it can be converted with XSLT)
Stuart: Seems odd to have To: as a URI and reply-to: as an EPR
<DanC_> SW: on "Addressing Infoset" slide, it seems odd to have To as a URI but From as an endpointref.
<dorchard> Dan, it is in xml so....
GlenD: Interesting question which has been discussed at length.
Noah: Is this recursive?
Glend: No this is infoset of SOAP message
PaulD: Can use EPRS outside addressing.
<DanC_> (er... eek... as W3C/IETF liaison, I need to go tell the IETF we're reinventing email and get appropriate review. sigh.)
Plh: Also have refparams
<dorchard> dan, I'm not *quite* sure it is <email>
<DanC_> yeah, IETF doesn't have a monopoly on delivering messages
<dorchard> The RefPs (aka Properties/Parameters) is the key extension from URIs, ala cookies.
<anish> on the slide: s/Reference Properties/Reference Parameters
PaulD: Intentionally left in policies even if it's not "correct"
Noah: Examples?
GlenD: Object ID. To: URI is fronting a bunch of "thingies"
Several: Snickers and grins
Noah (?): Not sure that ID issue is nailed.
GlenD: Session ID, security info.
Noah: The expectation is I will get back to you with this, with the stuff you told me to echo
<DanC_> (aha! so end-point reference are kinda like closures.)
Omnes: Yes.
GlenD: and they're marked as ref props.
<dorchard> Dan, the ws-addressing authors didn't do a discrete interop event, but it has been used in multiple interop sessions, like ws-rm, ws-eventing, ws-notification, etc.
<anish> paul said identity, i think he meant identifier
Glend: Identically spelled EPRs in different contexts can have different effects.
Chair: As with URI. Sometimes lexical comparison is OK, sometimes not.
TimBL: The two are not the same. Closer to URI refs.
<DanC_> er... was that Noah? in any case, no one person speaks for the TAG here
Paco: ?
Stuart: Endpoint comparison is removed.
Omnes: yes
Dorchard: Common scenario is EPR
exists in context and protocol around it.
... E.g. subscription contains expiry etc along with EPR. Comparison is external to EPR.
... Another scenario: bags of EPRS, need to distinguish.
Roy: Security model is arch
issue
... Spoofing of addresses and return addresses.
<end presentation>
<DanC_> RF's security question... issue 4 seems relevant
<anish> security issue is at:
<DanC_> RM: see SOAP binding security considerations section
marsh, Chair: Security is still open
<DanC_> HH has a relevant action
<DanC_> 4. Security Considerations
Chair: There's concern that there are no examples of how to use WSA in a sentence.
Paul Cotton: WSI BSP would speak directly to this.
Unknown TAG member: Should be able to pick out threats and countermeasures from this.
<DanC_> "threats and countermeasures"
MarcH: We do have such a list.
MarcH: Should we provide minimum requirements for WSA.
marsh: Consumer /can/ treat EPRs as opaque, but doesn't have to. E.g., can discard headers that raise security issues.
Noah: Tricky bit is that I can send you an EPR, causing you to represent what I want to someone who trusts you.
GlenD: There should be trust
relationship between EPR provider and EPR destination.
... Marking ref prop soap headers makes them identifiable.
noah: If the destination doesn't know WSA, then it's defenseless.
Paco: Security section calls out possibility of signed EPRs
MarcH: Haven't tackled establishment of trust.
Stuart: Trust is red herring.
Chair: cuts security discussion short
DanC: What /have/ people done with EPR, how do you test it?
GlenD: There have been interops for eventing, WSRF, reliable messaging, which use WS-A (as of a year ago).
DanC: This is testing higher-level protocols
GlenD: Also testing WSA
GlenD, Chair: We do have open issue for test cases. Going slowly.
Jeffm: There hasn't been ongoing testing.
DanC: We encourage that you do.
TimBL: E.g. formulating issue as test case.
DanC: EPRs look like closures, maybe why they're not just URIs.
GlenD: Not written that way, but
it's not inaccurate.
... There are also policies as well as state/context.
<anish> wrt the security discussion, I think it is worth mentioning that one of the options that the WG considered was to put the reference parameters (cookies) in the wsa:To SOAP header
DanC: There appears to be a body of shared stories here. E.g. email analogy was new. (And as IETF liaison I have to tell them to review this)
<noah> Tag issue endPointRefs-47:
<DaveO> I posted a comment about mapping most of ws-a (action/replyto/messageid) to HTTP
<DaveO> it's at
Noah: Appears to call for exclusivity
DanC: It's a pain to put it both places
GlenD: Long-standing issue with SOAP (e.g. action)
Chair: Also issue of
physical/logical address.
... (so To: != physical address).
Plh: SMTP to send to list
<noah> Some quotes from Mark Baker emails:
<noah> And that, in a nutshell, is why I raised the issue... and also why I've
<noah> been so opposed to anything else premised on the concept of "protocol
<noah> independence".
<noah> ...and...
<noah> my proposal was just to remove redundant information, not
<noah> to change the meaning.
Dorchard: Some cases (reply-to:, fault-to:) are tricky w.r.t. qname -> URI mapping
<noah> So, I think Mark has fairly clearly stated that he wants the URI >only< at the HTTP level, not duplicated.
Dorchard: Issue is to make as full use of HTTP protocol as possible, to: is part of it.
Noah: Issue (MBaker) is
"protocol-independence is bad, just use HTTP, don't try to say
things at multiple levels"
... Or, is WSA making good use of the architecture of the web (?)
DanC: When is wsa:to used when it shouldn't be.
MarcH: It's mandatory.
DanC: Is it typical to use EPR for queries that should be GET.
MarcH: No one knows.
?: When is wsa:to being used to dispatch (?)
Dorchard: WSRF -- a good chunk of what they do is dispatch off of refprop.
DanC: Getting property values is obviously GET.
Dorchard: Almost no one is using GET for those sorts of things
DanC: Lots of damage, then.
Chair: Other issues 51, 52
<noah> Noah asks for clarification: Dan, are you asking whether WSA usage is discouraging people from using, for example, the SOAP WEBMETHOD=GET (which was added specifically at the TAG's request?)
<noah> I think Dan said: yes, effective paraphrase of my concern.
<DanC_> yes, that's my question, noah
<noah> Various: but that feature is rarely used anyway.
<noah> Dan: right, so LOTS of damage is being done.
TimBL: As HTTP matured,
flexibility and complication came from header structure
... One delight of building on XML is that it provides nesting and information hiding (?). Ironic that SOAP is using headers again.
... When you flag where headers came from, you need it as a top-level header. Doesn't seem to be clean, hard structure.
... Boundary between IP/TCP is clean. This boundary isn't.
<DaveO> To a certain extent, SOAP is about mushing the layers together...
TimBL: It would be nice to have clean layering. So you could say this EPR is used by this layer. Then parties know whether to pay attention or ignore (based on layer).
GlenD: EPRs specifically don't
require knowledge of what's going.
... WSDL policy/assertions say what to do and what you have to understand.
TimBL: Two completely different
architectures being put together.
... Don't get guarantees of opaque architecture.
<DanC_> (glen and timbl seem to have agreed on something that I missed. darn.)
GlenD: There are already systems that use SOAP headers.
<DaveO> Dan, they agreed that there is less layering, roughly a flattening into soap headers.
GlenD: So data becomes first-class SOAP headers to avoid having to rearchitect.
<DanC_> hmm
Noah: There is no standard for mustUnderstand, but can grandfather to mean, you must use these policies.
Jeffm: Mantra is you can only look at wire, not what generated what's on it. Makes it harder.
Paco: SOAP allows for this building on header (?)
<noah> I'd paraphrase what I said differently: soap has a very clear standard for mustUnderstand. One of the things you can do in your spec for understanding YOUR header is to say that it involves a set of hierarchical rules.
Jeffm: What if there's an interaction between (?)
<missed comments>
DaveO: Tim's point is valid: SOAP is flattening layers. Unlike TCP/IP. Bug or feature?
TBL: SOAP is clearly underneath everything else.
<noah> Thus, to get what TimBL wants, I claim the SOAP rec doesn't have to change. As Glen Daniels said, it's the current SOAP >implementations< and other Recommendations using SOAP to date which have chosen a single layer model.
TBL: No one seems to know further layering.
GlenD: Architecture is that things may be carried by protocol, maybe by SOAP (?)
PaulCotton: What has WSA done in document to address this TAG issue?
<DaveO> noah, I've tried unsuccessfully to get the semantics of 'mU' into other specs, like Atom. The pushback is generally that it's too hard to parse through the entire infoset...
PC: Can someone summarize why you don't consider this an issue.
Marsh: Do we have to bind all the way down to HTTP, or just bind to SOAP (SOAP bound to HTTP)
PC: Are you going to reference this binding
HenryThompson: Should stop at SOAP.
GlenD: SOAP has binding framework. These expose at top layer as abstract things. WSA does not so much say what SOAP should do. Says, we serialize to SOAP.
MarcH: Don't tie WSA abstract properties to SOAP binding abstract properties.
<TBL> (I am not happy with the transcriptions of my points. btw, and the log seems to be 404)
HenryT: A good thing?
Pls review logging. I'm barely keeping up.
Pls. accept apologies.
<DanC_> (timbl, I suggest you contact the chair about the record when we're done)
<DanC_> (I think dhull's doing as good a job as can be expected)
GlenD: There are cases where people expect to see requestURI at one place, wsa:to elsewhere, to: is abstract.
<DanC_> ah.
PC: What have you said back to
Mark on this? If WSA reaches down through SOAP, would you take
away a feature of SOAP (e.g. intermediaries).
... Isn't this the simplest answer? Issue 47 -> taking away a feature of SOAP. What did you say to this?
Marsh: Didn't accept issue. Mark wanted to take it to TAG anyway.
PC: Yes.
... Not wise for WSA to reach down under SOAP.
<timbl> (I don't expect any scribe to keep up with this discussion)
PaulC: TAG has neither agreed or disagreed with Mbaker.
PC: MB /is/ here!
DanC: 2/3 of this discussion
sounds like WSA, not TAG (unless you do it wrong ;-)
... No implementation experience with this?
Omnes: no.
<DaveO> time's just about out...
GlenD: Need to understand SOAP
binding, and WSA, and all layers. We're trying to write this in
a layered fashion allowing different bindings to plug in.
... E.g. SOAP/{HTTP,SMTP,JMS}
Chair: Issues are still open, actively discussed.
Danc: Please put hopes down as test cases.
Roy: If you use HTTP layer addressing, information is still there, can reconstitute. (So no harm in reaching under SOAP in this sense (?))
Paco: It's an optimization (?)
Roy: What's more optimal depends on software. SOAP/HTTP is not anything optimal.
Noah: You're still losing information. E.g. canonicalization/DSIG info.
?: Particularly end-end encryption.
<DanC_> (hopes to wit... the hopes around "specese" in hugo's proposal for issue... 51PaulKnight turning into code. Though since there isn't any such code yet, I'd be inclined to postpone that issue till the next version of ws-addressing, as I understand this WG to be chartered to standardize based on the deployed experience)
Roy: Duplicating doesn't have
this problem.
... Doesn't have position about SOAP (for polite company)
... Created GET binding for a reason, to ensure that Web remains web, so that people can still get information with URIs.
... Deviation will be dealt with severely as it damages web as information space.
... If you can do what you want w/o damaging web as info space, we're happy (I say)
<DanC_> (I think I took my turn(s))
<Zakim> DanC_, you wanted to ask for a couple of examples of who wants to layer on top of ws-addressing and to ask how much interoperability is expected... what's the test
DaveO: Impedance mismatch between HTTP and SOAP views. E.g. SOAP is arbitrary typed operations. Trying to bind WS flexibility into verb set is hard. Haven't got to middle ground, to be able to pull service-y things into web (vice/versa (?)) (?)
Roy: HTTP has no constraints on verbs.
PC: But this is least standardized area. Little effect in extending verbs.
GlenD: HTTP systems are not designed for extensibility.
Roy: Disagree. They're designed to be extensible in that way.
GlenD: SOAP service deployed in environment built for it. Big difference between plugging in CGI script and plugging in new verb.
<DanC_> (I disagree; I think there's a pretty square box around GET/PUT/POST/DELETE. there's a reason no others have been standardized)
Roy: Not necessarily.
s/not necessarily/no/?
<plh>
<plh> slides:
<mnot> Scribe: bob
proposals: 1) WSDL metadata isn't expressed as properties; only in the metadata property's infoset
2) No metadata property; metadata children have to define their own top-level properties
3) WSDL metadata is a subproperty of the metadata property
straw poll
preferred/can live with: 1-10/23; 2-5/14; 3-4/18
DavidO: appeals for logic in voting and constancy of positions
Chair: observes that proposal 1 seems the way to go
RESOLUTION: issue 26 closed with Jonathan's proposal
<Chair> summary:
<Chair> - Jonathan's proposal
<Chair> - all WSDL-related metadata, examples described in WSDL doc, in separate ns
<Chair> - embedded WSDL appendix is normative
<Chair> - MUSTs in appendix to SHOULDs
<Chair> - WSDL metadata isn't expressed as properties; only in the metadata property's infoset
RESOLUTION: issue 24 also closed due to the resolution of issue 26
Person who opened issue 24 agrees that the resolution to issue 26 satisfies the concern raised in issue 24
Hugo presents his proposal of mapping addressing properties to SOAP
straw poll indicates that more understanding around the table is required before informed decisions may be made
break
return from break
detailed discussion of Hugo's proposal
item 1 seems noncontroversial (anish suspending disbelief)
<dims> +1
<dims> +1
<dims> +1
straw poll indicates most can live with defining SOAP properties
straw poll indicates that addressing soap feature ought to be included in the SOAP binding document
straw poll of defining the SOAP properties can be lived with generally
only three preferred not to define the SOAP properties, so doing it carries in the straw poll
topic Hugo's proposal chapter 3
item 3 will be re-drafted
<hugo>
Three options on action mapping: a) always the same; b) SOAP action is the same as ADDR action, when SOAP action present; c) completely disconnected
option 2 is preferred 16/22; and will be probed in more detail
<scribe> ACTION: Hugo to write up an action proposal (ref 3. action mapping) for SOAP 1.1 and 1.2 [recorded in]
<hugo>
editors need to back port some redacted 3.2 information disclaiming EPR carried metadata and other information not describing comparison of EPRs | https://www.w3.org/2002/ws/addr/5/02/28-ws-addr-minutes.html | CC-MAIN-2016-36 | refinedweb | 5,067 | 68.67 |
Android and desktop Java library
(1) By Andrzej on 2021-09-11 09:14:24 [link] [source]
Is import android.database.sqlite accessible from the standard Android library? How can I use SQLite with Java on the desktop? I want to make a portable library that I can test on the desktop and use with an Android app. Do I need to make a wrapper around the two SQLite libraries?
(2) By Simon Slavin (slavin) on 2021-09-11 16:39:59 in reply to 1 [link] [source]
Android:
There is no clear agreement that one of these is definitely the best option for all SQLite development. You will need to do some reading and make some choices.
I don't know enough about Java to answer that part of your question.
(3) By Andrzej on 2021-09-11 19:46:05 in reply to 2 [link] [source]
Today I wrote a simple wrapper for both libraries and tested the same code on a laptop (8th-gen Core i3) and a Samsung A42.
I create 10,000 rows, each with a text column ("to jest text") and a float. Moreover I add two indices: unique on column 2, and non-unique on columns 1+2.
At first it took 22 seconds on the laptop; with everything in one transaction it takes only 150 ms. On the Samsung phone, 1000 ms.
Why? The phone's cores are slower, but the phone has more cores. Is it possible to run one transaction on more than one core in SQLite? Is further optimization possible in Java?
(4) By anonymous on 2021-09-12 09:51:21 in reply to 3 [link] [source]
Phone cores are slower, but phone has more cores. Is possible run one transaction on more than one core in SQLite.
Transaction is mostly an I/O operation. SQLite may perform sorting in parallel if you enable the option, but more CPU cores won't be able to speed up I/O. Indeed, I/O is so much slower than CPU that we had invented caches for our storage devices in memory. Also, memory is so much slower than the CPU that there are memory caches in the CPU, usually multiple levels of them.
(5) By Simon Slavin (slavin) on 2021-09-12 14:09:16 in reply to 3 [source]
I assume you mean that all 10,000 rows are part of the same transaction. It might go faster if you try 1,000 rows and use ten transactions. Or it might not.
SQLite's processing is extremely fast. Almost all the time taken by SQLite is about storage access. Don't think about cores and GHz, think about memory bus speed, and time taken to read and write a sector.
Your Samsung phone is writing to Flash memory. Your laptop is passing the data to a storage subsystem, which stores it in a RAM cache then says "job done" (also in the background writes the cache to the real storage). Flash memory is slower than RAM.
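The batching effect described above can be sketched quickly. This uses Python's built-in sqlite3 purely for brevity — the mechanism is identical from Java: in autocommit mode every INSERT gets its own transaction and its own sync to storage, while one enclosing transaction syncs once at COMMIT.

```python
import os
import sqlite3
import tempfile
import time

def insert_rows(path, n, one_transaction):
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (a TEXT, b REAL)")
    rows = [("to jest text", float(i)) for i in range(n)]
    start = time.perf_counter()
    if one_transaction:
        with conn:  # one BEGIN ... COMMIT around all inserts: one sync
            conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    else:
        for row in rows:  # one transaction (and one sync) per row
            conn.execute("INSERT INTO t VALUES (?, ?)", row)
            conn.commit()
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return elapsed, count

with tempfile.TemporaryDirectory() as d:
    slow_t, slow_n = insert_rows(os.path.join(d, "slow.db"), 1000, False)
    fast_t, fast_n = insert_rows(os.path.join(d, "fast.db"), 1000, True)
print(f"per-row commits: {slow_t:.3f}s, one transaction: {fast_t:.3f}s")
```

The absolute numbers depend entirely on the storage device, which is why the same code shows 150 ms on one machine and 1000 ms on another.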
(6) By Keith Medcalf (kmedcalf) on 2021-09-13 00:44:45 in reply to 3 [link] [source]
The fastest way to perform I/O is to not do it.
It would appear that "Samsung phone" does I/O 10 times slower than "laptop".
(7) By Keith Medcalf (kmedcalf) on 2021-09-13 00:55:36 in reply to 6 [link] [source]
A an aside, I have "samsung phone" and also "laptop". However in my case "samsung phone" I/O is slower than "laptop" by a factor of almost 1,000,000. | https://sqlite.org/forum/info/725a45707a43a8b9 | CC-MAIN-2022-27 | refinedweb | 571 | 81.63 |
Vapor OAuth is an OAuth2 Provider Library written for Vapor. You can integrate the library into your server to provide authorization for applications to connect to your APIs.
It follows both RFC 6749 and RFC6750 and there is an extensive test suite to make sure it adheres to the specification.
It also implements the RFC 7662 specification for Token Introspection, which is useful for microservices with a shared, central authorization server.
Vapor OAuth supports the standard grant types:
- Authorization Code
- Client Credentials
- Implicit Grant
- Password Credentials
For an excellent description on how the standard OAuth flows work, and what to expect when using and implementing them, have a look at.
Usage
Getting Started
Vapor OAuth can be added to your Vapor app with a simple provider. To get started, first add the library to your
Package.swift dependencies:
dependencies: [ ..., .package(url: "", from: "0.6.0") ]
Next import the library into where you set up your
Droplet:
import VaporOAuth
Then add the provider to your
Config:
try addProvider(VaporOAuth.Provider(codeManager: MyCodeManager(), tokenManager: MyTokenManager(), clientRetriever: MyClientRetriever(), authorizeHandler: MyAuthHandler(), userManager: MyUserManager(), validScopes: ["view_profile", "edit_profile"], resourceServerRetriever: MyResourceServerRetriever()))
To integrate the library, you need to set up a number of things, which implement the various protocols required:
CodeManager- this is responsible for generating and managing OAuth Codes. It is only required for the Authorization Code flow, so if you do not want to support this grant, you can leave out this parameter and use the default implementation
TokenManager- this is responsible for generating and managing Access and Refresh Tokens. You can either store these in memory, in Fluent, or with any backend.
ClientRetriever- this is responsible for getting all of the clients you want to support in your app. If you want to be able to dynamically add clients then you will need to make sure you can do that with your implementation. If you only want to support a set group of clients, you can use the
StaticClientRetrieverwhich is provided for you
AuthorizeHandler- this is responsible for allowing users to allow/deny authorization requests. See below for more details. If you do not want to support this grant type you can exclude this parameter and use the default implementation
UserManager- this is responsible for authenticating and getting users for the Password Credentials flow. If you do not want to support this flow, you can exclude this parameter and use the default implementation.
validScopes- this is an optional array of scopes that you wish to support in your system.
ResourceServerRetriever- this is only required if using the Token Introspection Endpoint and is what is used to authenticate resource servers trying to access the endpoint
Note that there are a number of default implementations for the different required protocols for Fluent in the Vapor OAuth Fluent package.
The Provider will then register endpoints for authorization and tokens at
/oauth/authorize and
/oauth/token
Protecting Endpoints
Vapor OAuth has a helper extension on
Request to allow you to easily protect your API routes. For instance, let's say that you want to ensure that one route is accessed only with tokens with the
profile scope, you can do:
try request.oauth.assertScopes(["profile"])
This will throw a 401 error if the token is not valid or does not contain the
profile scope. This is so common that there is a dedicated
OAuth2ScopeMiddleware for this behaviour. You just need to initialise this with an array of scopes that are required for that
protect group. If you initialise it with a
nil array, then it will just make sure that the token is valid.
You can also get the user with
try request.oauth.user().
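Putting the two together, a protected route group might be wired up roughly like this. This is a sketch only: the middleware's initializer label and the properties available on your user type depend on your setup, so treat the names below as assumptions rather than the library's exact API.

```swift
import Vapor
import VaporOAuth

let drop = try Droplet()

// Anything registered on this group requires a valid bearer token carrying
// the "profile" scope; an invalid or under-scoped token yields a 401.
// (The `requiredScopes:` label is assumed — check your version's initializer.)
let protected = drop.grouped(OAuth2ScopeMiddleware(requiredScopes: ["profile"]))

protected.get("me") { request in
    // Safe to call here: the middleware has already validated the token
    let user = try request.oauth.user()
    return "Hello, \(user.username)"  // assumes your user type has `username`
}
```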
Protecting Resource Servers With Remote Auth Server
If you have resource servers that are not the same server as the OAuth server that you wish to protect using the Token Introspection Endpoint, things are slightly different. See the Token Introspection section for more information.
Grant Types
Authorization Code Grant
The Authorization Code flow is the most common flow used with OAuth. It is what most web applications will use for authorization with an OAuth Resource Server. The basic outline of this grant type is:
- A client (another app) redirects a resource owner (a user that holds information with you) to your Vapor app.
- Your Vapor app then authenticates the user and asks the user whether they want to allow the client access to the scopes requested (think logging into something with your Facebook account - it's this method).
- If the user approves the application then the OAuth server redirects back to the client with an OAuth Code (that is typically valid for 60s or so)
- The client can then exchange that code for an access and refresh token
- The client can use the access token to make requests to the Resource Server (the OAuth server, or your web app)
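On the wire, with the endpoints the provider registers at /oauth/authorize and /oauth/token, the exchange looks roughly like the following. The client IDs, redirect URIs, and token values here are made up for illustration; the parameter names come from RFC 6749.

```
# Steps 1-2: client sends the resource owner to the authorization endpoint
GET /oauth/authorize?response_type=code&client_id=client-1
    &redirect_uri=https%3A%2F%2Fclient.example%2Fcallback
    &scope=view_profile&state=xyz

# Step 3: after the user approves, the server redirects back with a code
HTTP/1.1 302 Found
Location: https://client.example/callback?code=SplxlOBeZQQYbYS6WxSbIA&state=xyz

# Step 4: client exchanges the code for tokens
POST /oauth/token
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=SplxlOBeZQQYbYS6WxSbIA
    &redirect_uri=https%3A%2F%2Fclient.example%2Fcallback
    &client_id=client-1&client_secret=...
```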
Implementation Details
As well as implementing the Code Manager, Token Manager, and Client Retriever, the most important part to implement is the
AuthorizeHandler. Your authorize handler is responsible for letting the user decide whether they should let an application have access to their account. It should be clear and easy to understand what is going on and should be clear what the application is requesting access to.
It is your responsibility to ensure that the user is logged in and handling the case when they are not. An example implementation for the authorize handler may look something like:
func handleAuthorizationRequest(_ request: Request, authorizationGetRequestObject: AuthorizationGetRequestObject) throws -> ResponseRepresentable { guard request.auth.isAuthenticated(FluentOAuthUser.self) else { let redirectCookie = Cookie(name: "OAuthRedirect", value: request.uri.description) let response = Response(redirect: "/login") response.cookies.insert(redirectCookie) return response } var parameters = Node([:], in: nil) let client = clientRetriever.getClient(clientID: authorizationGetRequestObject.clientID) try parameters.set("csrf_token", authorizationGetRequestObject.csrfToken) try parameters.set("scopes", authorizationGetRequestObject.scopes) try parameters.set("client_name", client.clientName) try parameters.set("client_image", client.clientImage) try parameters.set("user", request.auth.user) return try view.make("authorizeApplication", parameters) }
You need to add the
SessionsMiddleware to your application for this flow to complete in order for the CSRF protection to work.
When submitting the authorize form back to Vapor OAuth, in the form data it must include:
applicationAuthorized- a boolean value to signify if the user allowed access to the client or not
csrfToken- the CSRF token supplied in the handler to protect against CSRF attacks
Implicit Grant
The Implicit Grant is almost identical to the Authorize Code flow, except instead of being redirected back with a code which you then exchange for a token, you get redirected back with the token in the fragment. It is up to the client (such as an iOS application) to then parse the token out of the redirect URI fragment.
This flow was designed for clients where you couldn't guarantee the security of the client secret, client-side apps, but has fallen out of favour recently and it is generally recommended to use the Authorization Code flow without a client secret instead.
Resource Owner Password Credentials Grant
The Password Credentials flow should only be used for first party applications, and Vapor OAuth mandates this. This flow allows the client to collect the username and password of the user and submit them directly to the OAuth server to get a token.
Note that if you are using the password flow, as per the specification, you must secure your endpoint against brute force attacks with rate limiting or generating alerts. The library will output a warning message to the console for any unauthorized attempts, which you can use for this purpose. The message is in the form of
LOGIN WARNING: Invalid login attempt for user <USERNAME>.
Client Credentials Grant
Client Credentials is a userless flow and is designed for servers accessing other servers without the need for a user. Access is granted based upon the authentication of the client requesting access.
Token Introspection
If running a microservices architecture it is useful to have a single server that handles authorization, which all the other resource servers query. To do this, you can use the Token Introspection Endpoint extension. In Vapor OAuth, this adds an endpoint you can post tokens tokens at
/oauth/token_info.
You can send a POST request to this endpoint with a single parameter,
token, which contains the OAuth token you want to check. If it is valid and active, then it will return a JSON payload, that looks similar to:
{ "active": true, "client_id": "ABDED0123456", "scope": "email profile", "exp": 1503445858, "user_id": "12345678", "username": "hansolo", "email_address": "hansolo@therebelalliance.com" }
If the token has expired or does not exist then it will simply return:
{ "active": false }
This endpoint is protected using HTTP Basic Authentication so you need to send an
Authorization: Basic abc header with the request. This will check the
ResourceServerRetriever for the username and password sent.
Note: as per the spec - the token introspection endpoint MUST be protected by HTTPS - this means the server must be behind a TLS certificate (commonly known as SSL). Vapor OAuth leaves this up to the integrating library to implement.
Protecting Endpoints
To protect resources on other servers with OAuth using the Token Introspection endpoint, you either need to use the
OAuth2TokenIntrospectionMiddleware on your routes that you want to protect, or you need to manually set up the
Helper object (the middleware does this for you). Both the middleware and helper setup require:
tokenIntrospectionEndpoint- the endpoint where the token can be validated
client- the
Droplet's client to send the token validation request with
resourceServerUsername- the username of the resource server
resourceServerPassword- the password of the resource server
Once either of these has been set up, you can then call
request.oauth.user() or
request.oauth.assertScopes() like normal.
Github
Help us keep the lights on
Dependencies
Used By
Total: 2
Releases
0.6.1 - Apr 12, 2018
Vapor OAuth 0.6.1
#6 - allow users passwords to be changed
0.6.0 - Sep 25, 2017
Vapor OAuth 0.6.0
- PR #5 add support for Swift 4
0.5.0 - Sep 25, 2017
0.4.0 - Aug 5, 2017
Vapor OAuth 0.4.0
This release contains some major tidy ups and improvements to the code quality (as well as making
swiftlint happy. As part of this, a couple of API changes have occurred:
- the
OAuthClientinitialiser has the ID parameter renamed back to
userIDfor clarity. Note that it is still
idunder the hood to aid with Fluent integration
- the
AuthorizationHandleris now called with an
AuthorizationRequestObject, which encapsulates all the information that you need, rather than passing everything in with 100 parameters
0.3.0 - Aug 3, 2017
Vapor OAuth 0.3.0
This release provides better integration with Fluent whilst still keeping it separate. The
userID property on
OAuthUser has now been renamed to
id and is now of type
Identifier. This will required some changes with integrating clients but should not cause any issues.
It does however, make integrating with Fluent far simpler and means there are no duplicated IDs in a database. | https://swiftpack.co/package/brokenhandsio/vapor-oauth | CC-MAIN-2019-43 | refinedweb | 1,828 | 51.07 |
LinuxQuestions.org
(
/questions/
)
-
Programming
(
)
- -
Undefined Reference
(
)
ChemicalBurn
02-11-2005 09:41 AM
Undefined Reference
Im using Suse 9.1 and Kdevelop 3.1. Im building on top of an already existing library for a software, some applications have been built using this same library and they work perfectly fine. But when I use the same templates and classes, upon declaring some types I get errors such as: main.o(.text+0x27b): In function `main':
: undefined reference to `InputReader::InputReader[in-charge](std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_ostream<char, std::char_traits<char> >&, std::basic_filebuf<char, std::char_traits<char> >*, int)'
main.o(.text+0x2bc): In function `main':
: undefined reference to `InputReader::~InputReader [in-charge]()'
Although some other types work fine. But I do have the header file "InputReader" included, and I declare and initialize the object just as it was declared and initialized in the working applications, but it still gives me this error.
Im hoping this has happened to someone before, because its been staring me in the face for days now. Can anyone help?
Thanks.
jim mcnamara
02-11-2005 02:07 PM
Do you have multiple namespaces? It looks like the link editor thinks InputReader is in another namespace.
ChemicalBurn
02-14-2005 03:01 AM
no, I only have one namespace. Well one namespace other than the namespace std...
But the namespace Im including does not appear in the namespace folder of my project, I'm including a namespace that is declared simply in the code that I am using.
Meaning, I have a whole library I'm using for graph and graph manipulation, and they are all in the name space Graph...
I guess I have to do something more, now that you mentioned it....
All times are GMT -5. The time now is
09:54 PM
. | http://www.linuxquestions.org/questions/programming-9/undefined-reference-288933-print/ | CC-MAIN-2016-40 | refinedweb | 315 | 56.35 |
User talk:Chrabros
Contents
- 1 multipolygon is a relation
- 2 levels of buildings
- 3 not rendered my mapnik
- 4 Translation
- 5 Key:addr translation
- 6 organization or organisation
- 7 buildingːpart=yes
- 8 reorganisation of this wiki's navigation
- 9 Your reversion of Cs:Tag:historic=cannon
- 10 Překlad
- 11 agricultural
- 12 Problem with status link
- 13 About your edit of Tag:waterway=water_point page
- 14 "ruined:" or "ruined"?
- 15 Template:Discouraged
- 16 Elbe-Labe-Meeting
multipolygon is a relation
I think you make a huge mistake by disabling relations in many tags. Relation multipolygon is obviously a relation. The icon "area" means a "closed way", in opposition to the simple, unclosed, linear way (like a highway) symbolized by the icon "way". That's all. --Pieren (talk) 13:58, 6 February 2014 (UTC)
- Geeee, I am really confused now. :-0 I have added onRelation=yes yesterday on some building= tags, which I have edited. Not really important ones, there was no content at all ususaly. But I have changed it on the main building=* page, and I have got reverted by Tordanik (Undo revision 987059: building is an area tag). So I have read the discussion on this page and some users stated that the relation multipolygon should be considered as Area and not Relation in this context which seems to make sense. So I have reverted the other changes. And now you tell me that it was wrong. :-(
- I was not able to find an exact definition of what onRelation really means. They have reverted my change of onNode=yes for building, which I think is perfectly OK. Do you know where I can find the proper way how to determine how to set these?
- What tags should have onRelation=yes (excluding multipolygon)? I do not know now.
- And BTW I think that you are wrong about "closed way" and "area". They are not the same thing. I was translating the Area page lately, so I rememeber, and the difference is quite clear. Though they are technically the same, from renderer's point of view "closed way" is just a way, but "area" is the way PLUS the filling inside. So the icon IMO means "on closed way" or "on area" depending of the context of the tag.
Chrabros (talk) 14:12, 6 February 2014 (UTC)
- I understand your confusion but these two guys are wrong. Have a look here : Elements#Area_.28closed_way_.29. The multipolygon is in "relation" section. Also the taginfo statistics are showing numbers for nodes, ways and relations. The multipolygons are counted in the relations element types. Again, these guys are wrong. --Pieren (talk) 14:51, 6 February 2014 (UTC)
- OK, both ways of thinking about it have its merits. But it seemes to me logical (as it did before) that multipolygon relation is mainly relation. Also just under need is Taginfo count which clearly indicates number of relations (including multipolygon), so it would be stupid to say onRelation=no. Another reason is that when user searches some feature in OSM, this should give him a hint, which elements he should searche to find all objects. Last but not least Pieren seems to have higher "rank" than Tordanik and he is able to explain his opinions, instead of just reverting. ;-) Conclusion: reverted my reverts ;-), returned onRelation=yes, learned my lesson.
The same is true for Pieren, by the way. Dropping a single link and vanishing is not a proper discussion.
- Sorry for over-zealously reverting your edit. It's just that this topic was already on Key:building's talk page, and you changed the page without taking that into account and without giving a edit comment explaining why. So at that moment it felt to me as if you were wilfully ignoring previous discussion and just trying to push your opinion, which made me annoyed about your edit. In fact, I guess now that you felt the same way about my revert, for similar reasons.
- I've now also explained my opinion in more detail on Talk:Key:building, so perhaps you are still willing to give my way of thinking a second chance and consider a revert of the revert of the revert. ;-) --Tordanik 16:50, 7 February 2014 (UTC)
- Hi Tordanik, thank you for your reaction. Actually I was not reading the Talk page before I did the change. I was translating pages into Czech and this has become kind of routine for me - the onRelation value is usually left unfilled, so when I see that this value is used and it makes sense (usually on multipolygons) then I just add it.
- With the onNode=yes I was too fast. I understand now, that the intention for onNode=no might be to persuade the users to do the extra work and map the building as area. I should have read the Talk page before. Though it might me nice to write on the building page, that area is preferred for the building and that node is just for "cannot do it better" case. Just a thought. Chrabros (talk) 06:12, 9 February 2014 (UTC)
- I think adding something to that extent to the Building page would be appropriate. Perhaps also drop a line in there that most applications only benefit from building areas.
- But regarding the multipolygon dispute, I'm disappointed that you did an "after discussion" edit without even reacting to my arguments at Talk:Key:building#onRelation.3F. --Tordanik 15:24, 10 February 2014 (UTC)
- Gee, sorry, I have not noticed that you wrote on the Talk:Key:building#onRelation.3F page. It seems that RSS does notify only about changes in main pages, not talk pages. It was not meant to be offensive. I tought that the debate about onRelation was over, and that onNode is remaining open question. I'll respond there. Chrabros (talk) 06:59, 11 February 2014 (UTC)
levels of buildings
I would suggest to not suggest the 3D-tags like building:levels. I guess this is not accepted by the community. And IMHO this is not useful on the buidling-way, because mappers will mess up the building:levels with the total number of levels of a house. I would like to introduce a new tag which gives the number of levels of a building in total. 10. 2. 2014, 02:18 Cracklinrain (talk)
- Well, first of all please learn how to sign your comments on this wiki, so I do not have to do it for you.
- Second, I am little bit surprised that you say that the building:levels=* is not accepted by the community. There are cca 2,500,000 uses of this tag! So it seems that it is widely accepted. Maybe you meant that it was not proposed and approved in official process. Could be. But the tag is documented and used. So if you plan to propose a new tagging schema, then fine, but rather include this tag as well as it is, otherwise the confusion will be even greater. :-) Chrabros (talk) 04:34, 10 February 2014 (UTC)
- This is not about new tagging schemas. building:levels=* is used for 3D rendering. In the context of the building page without mentioning roof:levels=* etc. the mentioning gets misleading. To add the Tag might be OK in regard to the usage statistics, but not without mentioning the related tags of that concept. So we might keep building:levels=*, but than we will have to add roof:levels=* to the suggested tags at least. But IMHO the mentioning of the building:levels=*-Tag is not useful at all. --Cracklinrain (talk) 20:15, 10 February 2014 (UTC)
- Well, you might be right that this tag might be used incorrectly. But this is true for many tags, if not all of them. But in case of building:levels=* is the risk IMO very low as the page about it is very nice, it even has an explanatory picture, which is rare, so I think that the risk is very low here. So it seems weird to me that we would try to hide information about this tag from the users, esp. when this tag is relatively widely used. Chrabros (talk) 06:52, 11 February 2014 (UT)
- Come on, Pieren! This time it was not me who did it. The mapnik note was not added by me. You have reverted a wrong edit.
- I have removed the =References= section as it breaks up the structure of MapFeatures pages. Find another guilty guy. ;-) Chrabros (talk) 16:12, 12 February 2014 (UTC)
Translation
I have no idea. You should contact one of the admins or submit a ticket. --Pieren (talk) 09:43, 17 February 2014 (UTC)
Key:addr translation
It seems you have accidentally overwritten the English version of Key:addr with the Czech content: [1]. Was that your intention? --Tordanik 21:39, 23 February 2014 (UTC)
- Well, I think that Czech is a beautiful language and everyone should try to learn a little bit of it. ;-) But in this case I was called off to have a dinner and I have pasted into a wrong window. :-( Thanks for pointing it out Tordanik.
- Chrabros (talk) 01:44, 24 February 2014 (UTC)
organization or organisation
Please note that the wiki, like the tags, is normally expressed in uk English, not US English for historical reasons. Thus this change is not required. --Pieren (talk) 13:10, 10 March 2014 (UTC)
buildingːpart=yes
I don't understand why you prefer adding "=yes" to "building:part" here[[2]]. It seems unnecessary to me. --Jgpacker (talk) 18:06, 19 March 2014 (UTC)
- Maybe it is unnecessary maybe it is not. But the bottom of the page clearly states its meaning. I am not defending the logic of this tag. What I am defending is a integrity of the description of the page which you have broken. part is yes and parts is number. Read the bottom of the page. Your edit resulted in a difference between top and bottom of the page. Chrabros (talk) 19:04, 19 March 2014 (UTC)
- Yeah, it's true. I missed the other "=yes"..
- I removed the "=yes" part because there are other possible values for building:part [[3]]. But it seems they aren't approoved values, so I guess it's okay to leave the "=yes" as it is in the page content, but it should be removed from the "Useful combination" list --Jgpacker (talk) 19:41, 19 March 2014 (UTC)
- And why it should be removed when it is recommended by the page and 95% of values is yes? I would rather leave it there. Chrabros (talk) 19:50, 19 March 2014 (UTC)
- Because there are other possible values when making a combination with building:levels. You say 95% of the values are "yes", but that's out of 83 292 (currently), meaning there are 4,160 instances of this tag with other values, which is not insignificant. --Jgpacker (talk) 20:07, 19 March 2014 (UTC)
- Update: Simple_3D_Buildings#Building_parts mentions there are other possible values for building:part=*: "If some parts of the building=* have different attributes (e.g., height), they can be modelled as additional areas, tagged with building:part=yes or building:part=type of building:part." --Jgpacker (talk) 01:04, 20 March 2014 (UTC)
- OK, maybe I am wrong here. I thought that this sentence
- For describing number of levels at parts of building we use building:parts=* at polygon of building and building:part=yes and building:levels=* at polygons of parts of buildings.
- means that the number of levels of building part goes to building:parts=* as a number. But there is obviously another page building:parts=* describing building:parts=* as "horizontal/vertical/mixed". So probably misread this sentence. What do you think?
- Anyway: what about adding this to the Useful combinations: building:part=yes and building:parts=horizontal/vertical/mixed
- and this to the text building:part=yes or any other value suitable for building=*?
- Chrabros (talk) 03:18, 20 March 2014 (UTC)
- Update: Funny. When I was translating this page to Czech I was reading the sentence the other way, so that horizontal/vertical/... is the content of parts tag. ;-) Chrabros (talk) 03:30, 20 March 2014 (UTC)
- Well, the funny thing is that the possible types of building:part aren't specified in it's page. It indeed gives the impression that it could be one type of the building=* tag, however it doesn't confirm that(on taginfo, there are such values as "base" and "column"). I'm not sure how much information is necessary in the wiki page and how much is supposed to be only linked, but I think the useful combination part could be only links. --Jgpacker (talk) 20:36, 22 March 2014 :18,:10, 23 May 2014 (UTC)
Your reversion of Cs:Tag:historic=cannon
Could you spell out your objections to Moresby’s version more clearly if you’re making a change like that?--Andrew (talk) 06:13, 17 April 2014 (UTC)
I believe that I already did so User_talk:Moresby#New_proposed_KeyDescription_Template. Chrabros (talk) 06:27, 17 April 2014 (UTC)
Překlad
Díky za další překlady stránek do češtiny! Je to opravdu hodně užitečné a to nejen pro nováčky, na mailinglist chodí jen málo lidí (a ještě méně lidí je ochotno prohrabovat se jeho historií). Kdysi jsem se pustil do překladu FAQ (ale skončil jsem v podstatě akorát u nadpisů), je to tady k volnému využití. --Jkjk (talk) 11:26, 19 April 2014 (UTC)
- Díky za tip, pomalu na tom dělám, ale je to teda nášup. Budu rád, když mi tu stránku Cs:FAQ případně zkontroluješ a opravíš.
- Chrabros (talk) 13:09, 10 June 2014 (UTC)
agricultural
Hi Chrabros, could you please have a look at the page (created by you) Key:agricultural. The text seems to be contradictory now (after someone else's edit). Someone mentioned this on DE talk:Key:agricultural. Thanks. I do not want to dive deeply into this topic now. --Aseerel4c26 (talk) 19:59, 16 January 2015 (UTC)
- Hi, I do not see the contradiction. It seems it is in sync with the original proposal, which I have linked to the page. Chrabros (talk) 15:44, 18 January 2015 (UTC)
- Thanks. Hmm, "Legal access restriction for agricultural motor vehicles" (so the key is about a vehicle type) vs. "which can but not necessarily is performed using agricultural vehicles" (so the key is about a traffic type/purpose). --Aseerel4c26 (talk) 17:41, 18 January 2015 (UTC)
- Well, key agricultural=yes/no/* is about vehicle type and access=agricultural is about traffic purpose. That is how I understand the proposal. Chrabros (talk) 18:54, 18 January 2015 (UTC)
- I have tried to make the difference more visible on the page. Would you try to explain it on German talk page? My written German is not that good. Chrabros (talk) 19:11, 18 January 2015 (UTC)
- Thank you, I have tried to translate. Please check it. :-) --Aseerel4c26 (talk) 22:47, 18 January 2015 (UTC)
Problem with status link
Actually accepted tags within one proposal doesn't mean that their previous usages are accepted. Well this is because our documentation and tools are tag centric rather proposal and tag centric.
- Specifically you added status link for this page but proposal says nothing about location=roof.
- Sadly there over 8000 values, if all of them were deprecated by proposal, then we can claim tag location=* as Approved.
Right now we can only mention that 4 tags are accepted as part of proposal X.
Here is similar situation with residential=*. See Prevalent usage section. Xxzme (talk) 17:53, 26 January 2015 (UTC)
About your edit of Tag:waterway=water_point page
Hi, you have added the sentence below in waterway=water_point itself:
* waterway=water_point Same feature for boat holding tanks.
Perhaps you meant another tag, but which tag would you describe about? --Mfuji (talk) 09:23, 9 January 2016 (UTC)
- Hi, it was not me. I have just corrected a typo "watger"->"water".
- The sentence was added by User:Brycenesbitt.
- I have asked him already what he meant by this change - see User_talk:Brycenesbitt#water_point.3F
- Chrabros (talk) 08:31, 11 January 2016 (UTC)
- waterway=water_point is the same as ammenity=water_point, except for boats. So ammenity on land, waterway when accessible to boats. Or use both ammenity and waterway tags for the rare station that serves both users. Brycenesbitt (talk) 18:09, 14 March 2016 (UTC)
"ruined:" or "ruined"?
[4] - it was not an accident that it was "ruined:". The motivation was that wiki-search would not find the prefix version so I tried it this way.. just one of many tries- It may warrant a fresh look but have a look at the previous discussion which was here: RicoZ (talk) 22:42, 15 March 2016 (UTC)
- Hello, well, if you want this key to be named ruined:=* then I suppose the page should be moved to "Key:ruined:" as well. Otherwise the language bar does not work.
I wonder why this paged is named without ":" when the other related Lifecycle pages are named abandoned:=*, demolished:=*, removed:=*, ... Wouldn't be better to name it ruined:=* instead of ruined=* as well? Chrabros (talk) 07:13, 16 March 2016 (UTC)
- I think it should be "ruined:" and moved to "Key:ruined:". We have tried several alternatives to see how search engines would find those prefixes and may have forgotten to rename some. What would be also very helpful is a template for namespace prefixes/postfixes. I have started something but got distracted before finishing. RicoZ (talk) 22:37, 16 March 2016 (UTC)
Template:Discouraged
Just a heads up that I have some questions on Template_Talk:Discouraged--Jojo4u (talk) 23:03, 30 May 2016 (UTC)
Elbe-Labe-Meeting
Have you seen Elbe-Labe-Meeting? Would be nice to meet in person to talk about the Taginfo/Taglists feature. Joto (talk) 09:36, 5 September 2016 (UTC) | http://wiki.openstreetmap.org/wiki/User_talk:Chrabros | CC-MAIN-2016-44 | refinedweb | 2,990 | 72.46 |
Getting Started with Shuttle Service Bus
Shuttle Service Bus is a free .NET open source software project that provides a new approach to developing message-oriented, event-driven (EDA) systems. Even though it is still in its infancy, it is already used in production systems.
Here are some key points:
- Developed with C# on .NET 3.5
- The core has no dependencies on third-party products or projects
- Supports command messages as well as event messages (Pub/Sub)
- Has integrated message distribution
- Includes an administration shell to ease operational requirements
- Makes extensive use of interfaces to facilitate replacing or extending functionality
- Fault-tolerance via automatic retries
Why Would I Use a Service Bus?
Although using a service bus requires somewhat of a paradigm shift, there are many advantages to going this route. By designing your system so that a service bus performs very specific functionality on a specific endpoint, you can focus on getting that bit of software working in isolation.
Such an endpoint can then be independently versioned and maintained if you have decoupled it to the required degree.
This basically means that you will end up with messages being sent between various components and that these messages are processed asynchronously. This results in a fire-and-forget scenario that requires some careful thought as to how the system works.
The resulting UX can be quite different from traditional implementations where all actions requested by a user are processed immediately.
An example may be something as simple as sending an e-mail. You may set up an endpoint that handles the relevant command message type. So, when you need to send an e-mail you use something like this:
Bus.Send(new SendEMailCommand
{
    From = "someone@from.com",
    To = "someone@to.com",
    Subject = "testing e-mail",
    Body = "Hello"
});
You may be wondering how this differs from simply sending the mail directly yourself:
- The message sending call is immediate and we do not wait for it to complete
- Since the requests are queued we avoid a bottleneck
- Should the e-mail sending fail, it will be retried automatically
- Given correct data we can be sure that the e-mail will be sent eventually; or, at the very least, manual action can be taken in the event of a problem
So our client code does not care how the endpoint actually sends the data. It may be using SMTP, or even some custom web-service.
How Does Shuttle Do Its Magic?
Shuttle Service Bus relies on two things:
- Messages, and
- Queues
The queuing infrastructure can be anything. You would, typically, want to use a real queuing technology such as MSMQ. Shuttle provides support for MSMQ and SQL Server table-based queues straight out-of-the-box. Should you wish to use anything else, it is a matter of implementing the relevant interfaces and you will be good to go. Shuttle makes use of a URI structure to represent the queues, e.g.:
- msmq://{machine}/{queue}
- sql://{connection-name}/{table}
To implement your own queue, you would simply pick a scheme and your queue implementation would decode the structure.
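Shuttle's own queue interfaces are not shown here, but the URI decoding a custom queue implementation would perform can be sketched with System.Uri. The scheme, host, and queue names below come from the examples above; everything else is illustrative:

```csharp
using System;

// Conceptual sketch: decoding a queue location URI such as
// msmq://{machine}/{queue} or sql://{connection-name}/{table}.
// System.Uri handles these generic URIs even for unregistered schemes.
public class QueueLocation
{
    public string Scheme;
    public string Host;
    public string Queue;

    public static QueueLocation Parse(string location)
    {
        var uri = new Uri(location);
        return new QueueLocation
        {
            Scheme = uri.Scheme,                  // e.g. "sql" -> which queue technology
            Host = uri.Host,                      // e.g. connection name or machine
            Queue = uri.AbsolutePath.Trim('/')    // e.g. table or queue name
        };
    }
}
```

A custom implementation would pick its own scheme and use the decoded parts to open the underlying queue.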
In order to host your endpoint, Shuttle makes use of a generic service host to simplify deployment. Implementing a new endpoint is a matter of configuring which queues to use and then starting up your service as shown in the following config file and code snippet:
<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="serviceBus" type="Shuttle.ESB.Core.ServiceBusSection, Shuttle.ESB.Core"/>
  </configSections>
  <serviceBus>
    <inbox
      workQueueUri="msmq://./inbox-work"
      journalQueueUri="msmq://./inbox-journal"
      errorQueueUri="msmq://./shuttle-error" />
  </serviceBus>
</configuration>

public class ServiceBusHost : IHost, IDisposable
{
    private IServiceBus bus;

    public void Dispose()
    {
        bus.Dispose();
    }

    public void Start()
    {
        bus = ServiceBus
            .Default()
            .Start();
    }
}
It is important to note that the generic host can run as either a console application or it can be installed as a service. This makes it particularly easy to debug your endpoint since you can specify the generic host to be the startup application when debugging within Visual Studio.
Once a particular message is retrieved from the inbox queue, the service bus will attempt to locate a handler and pass the message to the handler for processing. Messages that have no handler can either be moved to the error queue or discarded depending on the configuration specified.
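Conceptually, locating a handler is a dispatch from message type to handler. The sketch below is self-contained and hypothetical — the interface and class names are illustrative, not Shuttle's actual API — but it shows the behaviour described: a matching handler processes the message, and an unmatched message can be moved to the error queue or discarded:

```csharp
using System;
using System.Collections.Generic;

// Illustrative handler abstraction (not Shuttle's real interface).
public interface IHandler { void Process(object message); }

public class SendEMailCommand { public string To; public string Subject; }

public class SendEMailHandler : IHandler
{
    public void Process(object message)
    {
        var cmd = (SendEMailCommand)message;
        Console.WriteLine("Sending '" + cmd.Subject + "' to " + cmd.To);
    }
}

public static class Dispatcher
{
    // Map message type -> handler, the way a bus resolves handlers
    // for messages arriving on the inbox queue.
    static readonly Dictionary<Type, IHandler> handlers = new Dictionary<Type, IHandler>
    {
        { typeof(SendEMailCommand), new SendEMailHandler() }
    };

    public static bool TryDispatch(object message)
    {
        IHandler handler;
        if (!handlers.TryGetValue(message.GetType(), out handler))
        {
            // No handler found: move to the error queue or discard,
            // depending on configuration.
            return false;
        }
        handler.Process(message);
        return true;
    }
}
```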
Command vs. Event Messages
Since a command is an explicit request that a particular function be performed, it is sent to a single endpoint. This means that in order to send a message you need to know that an endpoint implements a specific behaviour. Therefore, commands result in a higher degree of behavioural coupling. When sending a message, the endpoint that the message should be sent to is configurable so that it may be changed at any time.
As an example you may have commands such as:
- SendEMailCommand
- ConvertDocumentCommand
- DeleteFileCommand
- CancelOrderCommand
Events, on the other hand, may have zero or more subscribers. Typically, an event would be defined since it is required in some business sense. So it stands to reason that there would be at least one subscriber, unless the event is defined in light of future requirements. When an event is published, each subscriber will receive a copy of the event message.
This differs from message distribution where a distributed message will be sent to only one worker inbox queue.
As an example you may have events such as:
- DocumentConvertedEvent
- FileDeletedEvent
- OrderCancelledEvent
Shuttle Service Bus allows you to specify headers for messages in order to add any necessary ad-hoc data to a message. There is also a correlation ID that may be used to group together related messages.
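The two delivery semantics described above — an event copied to every subscriber, versus a distributed message landing on exactly one worker inbox — can be sketched as follows (a conceptual illustration using in-memory queues, not Shuttle code):

```csharp
using System;
using System.Collections.Generic;

public static class DeliveryModes
{
    // Publish: every subscriber's inbox receives its own copy of the event.
    public static void Publish(object evt, List<Queue<object>> subscriberInboxes)
    {
        foreach (var inbox in subscriberInboxes)
            inbox.Enqueue(evt);
    }

    // Distribute: the message goes to exactly one worker inbox
    // (round-robin here for simplicity; Shuttle distributes based on
    // workers reporting available threads).
    static int next;
    public static void Distribute(object msg, List<Queue<object>> workerInboxes)
    {
        workerInboxes[next % workerInboxes.Count].Enqueue(msg);
        next++;
    }
}
```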
Scalability
Shuttle buys you quite a bit in terms of scalability since messages are queued and no immediate bottlenecks can arise. Even though endpoints may be configured to be multi-threaded, a particular endpoint can still fall behind when it receives an excessive number of messages. Each and every endpoint therefore has the built-in ability to distribute messages to another endpoint whenever that endpoint notifies the distributor that it has threads available to perform work. Getting this going is simply a matter of configuration.
Modules
Shuttle makes use of an observable pipeline structure. Events are registered in the pipeline in a specific order and observers may be registered in the pipeline to respond to particular events.
In keeping with this extensibility, you can register your own module implementations in Shuttle. These modules typically plug into specific pipelines by adding observers that respond to particular pipeline events. You can even add your own events to the pipeline upon its creation.
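Reduced to essentials, the observable pipeline idea looks like this (a conceptual sketch — Shuttle's real pipeline API differs in names and detail):

```csharp
using System;
using System.Collections.Generic;

// Events are registered in a specific order; observers subscribe to the
// events they care about and are invoked when the pipeline executes.
public class Pipeline
{
    readonly List<string> events = new List<string>();
    readonly Dictionary<string, List<Action>> observers = new Dictionary<string, List<Action>>();

    public void RegisterEvent(string name)
    {
        events.Add(name);
    }

    public void RegisterObserver(string eventName, Action handler)
    {
        if (!observers.ContainsKey(eventName))
            observers[eventName] = new List<Action>();
        observers[eventName].Add(handler);
    }

    // Raise each event in registration order, invoking its observers.
    public void Execute()
    {
        foreach (var name in events)
        {
            List<Action> handlers;
            if (observers.TryGetValue(name, out handlers))
                foreach (var h in handlers) h();
        }
    }
}
```

A module would then be a class that, given a pipeline, registers the observers (and possibly extra events) it needs.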
What about Dependency Injection and Logging?
Since Shuttle makes such extensive use of interfaces for decoupling you are free to plug in any DI or logging implementation you feel comfortable with. The default implementations do not rely on any third-party components but there is currently a Castle Windsor implementation for DI and a Log4Net implementation for logging that may, optionally, be used.
An Example of Shuttle in a Production Environment
Shuttle has been implemented at a large South African short-term insurance company with great success to replace an aging document indexing system.
Clients send in claim-related documents via e-mail that are received by the Lotus Domino e-mail system. These e-mails are then extracted by the FileNet Email Manager application and placed in the IBM FileNet Content Engine. The content engine is configured to respond to any newly committed documents that carry an e-mail classification. The content engine code that handles these arrivals writes out an XML file containing the relevant data.
From there, a Shuttle Content Engine endpoint picks up these XML files and an event is published indicating that new content has been placed in the content engine. The structure of this endpoint is such that it is not specific to the indexing process. It is therefore possible to re-use this endpoint to publish any new content. It is up to the content engine to simply write out the relevant XML files.
The Shuttle Indexing endpoint subscribes to the new content messages and as soon as one arrives it starts tracking all the documents that arrive for a specific e-mail since each e-mail has a unique identifier that is attached to the content engine metadata of each document. Once all the documents have been gathered for an e-mail, command messages are sent to the Shuttle Document Conversion endpoint to convert the HTML e-mail body and all JPG files to TIFF documents.
The document conversion endpoint is unaware of the indexing process and simply performs document conversions. Once a document has been converted, an event is published informing that a document conversion has succeeded or failed. Any system that requires document conversion would then subscribe to these event messages. In order to establish whether or not a message is intended for a particular system, the system requesting the conversion can make use of the correlation ID on the conversion request command and/or it can make use of name/value pair headers added to the outgoing conversion request command message. These headers are always attached to any related messages sent by Shuttle.
As soon as all the required document conversions are complete a command message would be sent to the Shuttle OvaFlo endpoint to create an indexing workflow instance in the IBM FileNet Process Engine. OvaFlo is a meta-workflow framework product developed by Ovations Group.
In order to perform the actual indexing, the users access the indexing web-based application. This application pulls the next available indexing workflow instance from OvaFlo and the related documents are displayed for classification. Each document is also linked to a specific claim and any other relevant indexing data is entered. It is also possible for users of the system to request that certain documents be converted to TIFF format. This is particularly useful when various documents that need to be individually classified have been placed in one file. Once the user is happy with the data the task is submitted for completion. This is handled asynchronously by firing off a command message allowing the user to immediately continue with the next indexing task.
In some instances, it was noticed that the work arriving from the web front-end queues up behind background system work. A particular example is that of document conversion, where all the required conversions from arriving e-mails were being handled before the requests from the front-end. We simply installed a separate Shuttle Document Conversion endpoint using the same compiled assemblies with the generic host, but we changed the configuration for this endpoint to use its own queues. The front-end then sends conversion requests to this priority endpoint and the conversions are handled in a timely manner. All the background conversions are sent to the existing endpoint.
So, in this instance Shuttle was used to glue together disparate systems. The system is much more stable than the previous one, having the required fault tolerance built in. The performance is excellent, with no backlog being created.
Conclusion
Shuttle provides you with another free option when implementing an ESB. The project is hosted on CodePlex:
Pros:
- New
- Highly extensible
- Free open-source software
- Administration shell
Cons:
- New
- Process state data has to be handled manually
In time, and with community support, the various implementations available can be extended to include more queue, DI, logging, and other options.
About The Author
Eben Roux has almost 20 years of experience in the professional arena as a developer, consultant, and architect within many industries and has provided strategies and solutions that have contributed to the successful implementation of various systems. He is owner of the free open-source Shuttle Service Bus project and believes firmly in the development of quality software that empowers users to get their job done.
Having come from a Visual Basic background, Eben first became a Microsoft Certified Professional in 1998 and has completed 3 Microsoft Certified Solution Developer certifications by 2003 (VB5, VB6, VB.NET). Since moving exclusively to C# development in 2007 he has focused on domain-driven design implemented within an event-driven architecture based on message-oriented middleware.
What Shuttle brings to the .Net service bus arena?
by
Eduardo Miranda
Because there are other frameworks that solve similar problems, a comparison between Shuttle and the best-known frameworks in the arena would be great.
Re: What Shuttle brings to the .Net service bus arena?
by
Michael Maier
What Shuttle brings to the .Net service bus arena?
by
Eben Roux
I have used NServiceBus 1.9 successfully on a previous project and it is still running in production (now 3 years later). The MassTransit architecture never really resonated with me. Shuttle is just over 2 years in the making and maybe MassTransit has changed somewhat. I decided to develop Shuttle shortly after NServiceBus went commercial.
The main idea behind Shuttle is to attempt to have a very clean, decoupled architecture to make it as extendable as possible without relying heavily on any third-party bits.
Single msmq and multiple subscribers
by
vavan p
Thank you.
Re: Single msmq and multiple subscribers
by
Eben Roux
Each endpoint has its own input queue. So each of your subscribers would receive a copy of a published message. But you cannot have multiple endpoints process the same input queue. An endpoint can specify a thread count to enable you to handle more than one message at a time.
Ok, this seems to be working rather well so far. I'm now using a
DefaultSecurityManager and a DefaultAccessManager with a customised
LoginModule (for password validation) and a custom PrincipalProvider (for
LDAP access).
I've also extended the servlet to wrap all resources in a custom
AclResource, which delegates to the original resource but takes care of
providing WebDAV's ACL properties in addition to those of the original
resource. For this, I've created several properties which evaluate their
values lazily, so that I don't have to build the entire ACL property when
it's not even requested by the client. These properties are also invisible
in allprop requests, as recommended by the DAV spec.
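The lazy-evaluation idea described above can be sketched in plain Java. The class and names below are hypothetical stand-ins, not Jackrabbit's actual DAV property interfaces; the point is only that the (possibly expensive) ACL value is computed the first time a client asks for it, then cached:

```java
import java.util.function.Supplier;

// A property whose value -- e.g. a computed ACL -- is only evaluated
// the first time a client actually requests it.
class LazyProperty<T> {
    private final Supplier<T> compute;
    private T value;
    private boolean evaluated;

    LazyProperty(Supplier<T> compute) {
        this.compute = compute;
    }

    synchronized T getValue() {
        if (!evaluated) {              // evaluate once, then cache
            value = compute.get();
            evaluated = true;
        }
        return value;
    }
}

public class LazyPropertyDemo {
    static int evaluations = 0;

    public static void main(String[] args) {
        LazyProperty<String> acl = new LazyProperty<>(() -> {
            evaluations++;             // stand-in for expensive ACL work
            return "<D:acl/>";
        });
        System.out.println("created, evaluations=" + evaluations);  // 0
        System.out.println(acl.getValue());
        acl.getValue();                // cached: no second evaluation
        System.out.println("evaluations=" + evaluations);           // 1
    }
}
```

An allprop request would simply skip `getValue()` entirely, so the expensive computation never runs unless the property is explicitly asked for.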
(I should note that evaluating all this stuff seems to take a lot of
typecasts on faith, i.e. casting interfaces to their Jackrabbit
implementation counterparts, so I expect to have some work to do when the
next version comes out and some of this changes)
So far, I've taken care of DAV:supported-privilege-set and DAV:acl. On the
server side, I take the privileges from the privilege registry, convert most
of them to their DAV counterparts (those that seem to be exact matches that
is) and use a "JCR:" namespace for the rest of them. This ensures that the
client sees the actual privileges used by the server. Setting these seems to
have the desired effect on the server side.
I'm now worrying about two things:
1) DAV:current-user-privilege-set should return the ACL for the current
user, the idea apparently being that regardless of whether the user is allowed to
read the resource's full ACL, he should at least have access to his own
privileges. But as far as I understand, I need JCR_READ_ACCESS_CONTROL
permission to read any part of the ACL. Does that mean that a user is either
allowed to read the full ACL or nothing, not even his own privileges?
2) I'm also trying to support the DAV:owner property, denoting the owner of
a resource. This will be a simple string property containing the principal's
qualified name*. Querying it should therefore be simple. Setting it should
be allowed either (1) only for the owner and the admins, or (2)
alternatively be controlled through a custom privilege "modify-owner". As
far as I can tell, I have to provide my own ACLProvider so I can take care
of compiling different permissions when DAV:owner is accessed (I have to
handle the SET_PROPERTY permission manually in the grants() method). Is this
the correct way to do this, or am I getting myself in too much trouble
because I missed a simpler way? Also, in case of (2), how would I go
about creating and registering a custom privilege - obviously, I'd have to
put it in the privilege registry, but where can I do that, and how would I
make it an aggregated privilege of JCR_ALL?
*) It's a bit of a hack, but for now I'm using the qualified LDAP name to
identify principals. The DAV spec says principals "should" be referenced by
a HTTP or HTTPS URL, and obviously this would allow any compliant DAV client
to browse the users, but I don't see a way to mirror the LDAP directory into
a JCR collection (certainly not within a short implementation time and with
good performance), so I have my client access the LDAP on its own and just
use the qualified names in DAV requests. Not portable for a generic
implementation, but good enough for us, for now at least.
(Next up, once this works, is versioning, which will probably mess up my
AclResource delegate and cause some more work there, too)
Thanks as always,
Marian.
--
Sent from the Jackrabbit - Users mailing list archive at Nabble.com. | http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200904.mbox/%3C22947463.post@talk.nabble.com%3E | CC-MAIN-2017-30 | refinedweb | 647 | 54.76 |
Type error in recursion
Consider the following function:
def f(n) :
    def retfunc(x) :
        return 1 if n == 0 else x*add(f(n-k) for k in (1..n))
    return retfunc
The test
w = f(9); print w, type(w)
gives
"function retfunc at 0x43d3de8" "type 'function'"
which looks fine to me. However an evaluation w(3) gives
TypeError: unsupported operand type(s) for +: 'int' and 'function'
How can I get around this error?
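Since Sage's add behaves like Python's sum here, the failure can be reproduced in plain Python: f(n-k) returns the inner function itself, so the generator feeds functions rather than numbers into the sum (a sketch, not Sage code):

```python
# Plain-Python reproduction of the bug: f() returns retfunc, so the
# generator inside sum() yields *functions*, and 0 + <function> fails.
def f(n):
    def retfunc(x):
        if n == 0:
            return 1
        return x * sum(f(n - k) for k in range(1, n + 1))
    return retfunc

w = f(1)
try:
    w(3)
except TypeError as e:
    # unsupported operand type(s) for +: 'int' and 'function'
    print("TypeError:", e)
```

The EDIT below sidesteps this by defining retfunc as a callable symbolic expression instead of a Python closure, so the recursive results can be combined symbolically.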
EDIT: One solution is, as indicated by DSM,
def f(n):
    retfunc(x) = 1 if n == 0 else add(x*f(n-k) for k in (1..n))
    return retfunc
What I am trying to do here? Let's see:
for i in [1..8] :
    [c[0] for c in expand(f(i)(x)).coeffs()]

[1]
[1, 1]
[1, 2, 1]
[1, 3, 3, 1]
[1, 4, 6, 4, 1]
[1, 5, 10, 10, 5, 1]
[1, 6, 15, 20, 15, 6, 1]
[1, 7, 21, 35, 35, 21, 7, 1]
1
Hi,
While trying a program from "C++ Primer 3rd Edition" by Stannley Lippman, I encountered with following two errors.
1. The book says that the header file fstream also includes the header iostream, so including just fstream will do. But g++ complained about cout, cin and cerr despite having fstream included and namespace std used properly. Does it mean that fstream does not include iostream, or is it specific to the g++ compiler?
2. I was trying to create and initialize an ofstream object as below
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

void do_something()
{
    string fname;
    cin >> fname;
    ofstream fout(fname);
}
g++ did not compile this and says that there is no matching call for the last statement. Isn't there a constructor in the class ofstream which accepts a string object as argument? | https://www.daniweb.com/programming/software-development/threads/227997/question-on-fstream-header-and-ofstream-constructor | CC-MAIN-2016-40 | refinedweb | 137 | 66.07 |
Feof
From cpwiki
Current revision as of 11:21, 4 November 2008
This page is about feof, the C function which tests a file stream for end-of-file status, which also applies to the C++ equivalent, eof.
int feof (FILE * stream);
Controlling file reading loops
While it is seemingly logical to keep reading a file until you reach the end of the file, the feof function does not lend itself to that ideally. It can only return true after EOF was set for the stream by another input function, such as fgets. This means that there is a lag between your loop's end and your function calls, which can lead to an undesirable result.
The problem rears itself consistently if one were to read an empty file:
#include <stdio.h>

int main ()
{
    FILE * fin = fopen("empty.txt", "r");
    if (fin != NULL)
    {
        char input[] = "garbage";
        while (!feof(fin))
        {
            fgets(input, sizeof input, fin);
            fputs(input, stdout);
        }
        fclose(fin);
    }
    return 0;
}
/** my output: garbage **/
Provided that the file is truly empty, prompt loop termination would have prevented "garbage" from being printed, or indeed any other program instruction from being followed. Fortunately, the interface for FILEs (defined in <stdio.h>) have correctly designed input functions which will return a unique value if reading from the stream had failed. Incorporating the reading step as the loop condition itself will produce the ideal outcome.
Fixing the example is left as an exercise for the reader. | http://sourceforge.net/apps/mediawiki/cpwiki/index.php?title=Feof&diff=225&oldid=224 | CC-MAIN-2014-15 | refinedweb | 241 | 62.07 |
Okay, so this isn't actually the code I was working on. This is an oversimplified code extract that produces the exact same error. Thus, I thought if I could learn why I am getting errors with the simplified code, then I could apply it to my actual code. Thanks for any help/advice in advance!
#include <stdio.h>
int main()
{
struct fruit
{
int apples;
int oranges;
int strawberries;
};
int x;
int y;
int z;
x = 1;
y = 2;
z = 3;
struct fruit apples = x;
struct fruit oranges = y;
struct fruit strawberries = 4;
printf("The value is %d or %d", fruit.apples,fruit.strawberries);
return 0;
}
First of all, you cannot initialize a
struct type variable with an
int value anyway, you have to use either a brace-enclosed initializer, or initialize each member explicitly.
That said,
struct fruit apples = x;
struct fruit oranges = y;
struct fruit strawberries = 4;
is not how you define a variable of type
struct fruit and access the members. The correct way would be
struct fruit f;
f.apples = x;
f.oranges = y;
f.strawberries = 4;
or, a more precise way,
struct fruit f = {x,y,4};
or, even cutting down the intermediate variables,
struct fruit f = {1,2,4}; | https://codedump.io/share/fwUMKzsBTwpl/1/error-in-initializing-structure-variables | CC-MAIN-2017-09 | refinedweb | 202 | 71.14 |
One simple safeguard is to mark the customized copy of the library with a sentinel variable at the top of the module:
IniTechReportGenLib = 1 # Mark our customizations
and then assert in the product code:
# Initect TPS Report system
import ReportGeneratorLib
assert ReportGeneratorLib.IniTechReportGenLib == 1
# .. rest of product code ..
It’s a very simple technique, but can save headaches later, after you’ve long forgotten about having made the changes to the libraries.
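A slightly more defensive variant of that assert is sketched below; getattr gives a clean failure message even when the marker is missing entirely. The module object here is a stand-in so the snippet runs by itself — in real code you would simply `import ReportGeneratorLib`:

```python
import types

# Stand-in for "import ReportGeneratorLib" -- in real code this is the
# customized module with IniTechReportGenLib = 1 at its top.
ReportGeneratorLib = types.ModuleType("ReportGeneratorLib")
ReportGeneratorLib.IniTechReportGenLib = 1

# Product code: fail fast if a stock, unpatched copy is on the path.
marker = getattr(ReportGeneratorLib, "IniTechReportGenLib", 0)
assert marker == 1, "ReportGeneratorLib is missing our customizations"
print("customized library confirmed")
```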
Nice technique. My only observation would be that you don't necessarily need to modify the production code, but you do need to add that assertion in the unit tests for the code that is using that library. You could for example add the assertion in the setup method of your unit test module(s), so that it's picked up by any unit test function/method that's exercising that code.
Grig
Brilliant! Why the !#$!@ haven't we been doing this all along at my company. *slaps forehead*
I do it both ways:
1. some stuff gets patched in source. I maintain either diffs or vendor-branches in Subversion
2. all of the rest of our monkeypatching is done from one module.
Great idea. My solution has been to rename the library file, which causes a lot of changes to import/include statements throughout the code. (Of course this can be done automatically.)
Introduction
Step 1: Prepare Your Device.
Usually.
Now the sketch that is super simple:
#include "OneWire.h" #include "DallasTemperature.h" #define ONE_WIRE_BUS_1 A0 OneWire ourWire1(ONE_WIRE_BUS_1); DallasTemperature sensor1(&ourWire1); #include "LiquidCrystal_I2C.h"
LiquidCrystal_I2C lcd(0x27,16,2); float RawValue =0;
void setup(){ lcd.init(); lcd.backlight(); sensor1.begin(); sensor1.setResolution(11); } void loop(){ sensor1.requestTemperatures(); float RawValue = sensor1.getTempCByIndex(0); lcd.setCursor(0,0); lcd.print("Sens. 1 "); lcd.print(RawValue, 1); }
As you can see, we use the DallasTemperature library and an LCD screen with an I2C connection.
In the setup we initialize the LCD and the sensor, and in the loop we simply request the temperature, store the value in the variable RawValue, and show it on the LCD.
If you want to keep it more simple, just use the serial monitor with the following sketch
#include "Wire.h"
#include "OneWire.h" #include "DallasTemperature.h" #define ONE_WIRE_BUS_1 A0 OneWire ourWire1(ONE_WIRE_BUS_1); DallasTemperature sensor1(&ourWire1);
float RawValue =0; void setup(){
delay(1000);
Serial.begin(9600);
sensor1.begin();
sensor1.setResolution(11);
} void loop(){ sensor1.requestTemperatures(); float RawValue = sensor1.getTempCByIndex(0); Serial.print("Sens. 1 "); Serial.println(RawValue, 1); }
Now follow me in the core of the project to calibrate the sensor.
Step 2: Two Point Calibration
Something to know first
To calibrate a thermo-sensor, you have to measure something of which you know the temperature. The simple way to do it at home is using boiling water and a bath of melting ice, also called a "triple-point" bath. In those cases we know that water boils at 100°C at sea level. Keep in mind that to make a precise measurement you should know your altitude and calculate the proper boiling temperature there.
To be honest you should check the atmospheric pressure and not the altitude. But that way is accurate enough.
The triple-point bath, or ice bath, is at the temperature at which water exists in the three states solid, liquid and gas; that temperature is 0.01°C. We will use, to simplify, 0°C.
Knowing the value that the sensor read and the value that should be, we can modify the raw value of the DS18B20 into something more correct.
NOTE: you could also use more temperatures to calibrate the sensor, just putting it in some other substance of which you know the boiling point, like ether (35°C), pentane (36.1°C), acetone (56°C) or ethanol (78.37°C), but those boiling substances produce highly flammable gasses! So don't do it!
Boiling Water:
Put some water in a pot and heat it up until it boils (bubbles of gas are developing and the water is agitating itself). Immerge your sensor where it does not touch anything but water. Wait a couple of minutes and read the lcd or the serial monitor.
The temperature should remain the same for at least one minute. If so, write that value down. That is your RawHigh value.
Triple-point bath:
Now take a big glass (you don't need anything huge, nor a pot) and fill it to the border with ice cubes. Try to use small ice cubes. Now fill 80% of the glass with cold water. Refill with ice if the level tries to go down.
Now put your sensor inside the water/ice mixture and wait one and a half minutes. Read the temperature, which should remain the same for at least 30 seconds. If so, write it down; that is your RawLow value.
Step 3: Use the Values You Get in the Right Way!
So, now you got some important values:
- RawHigh
- RawLow
- ReferenceHigh
- ReferenceLow
The reference values are obviously 99.9°C for the boiling water (at my altitude of 22m), and 0°C for the melting ice bath. Now calculate the ranges of those values:
- RawRange = RawHigh - RawLow
- ReferenceRange = ReferenceHigh - ReferenceLow
Now you're all set to use that sensor in any other project being sure that it will give you a right measurement. How? Using the value you got here in the project you will create with that sensor.
In your future project you'll have to use the values you read in this one and I suggest to do it using the same names I used here.
Declare the variables before the void setup() section just like this:
float RawHigh = 99.6;
float RawLow = 0.5;
float ReferenceHigh = 99.9;
float ReferenceLow = 0;
float RawRange = RawHigh - RawLow;
float ReferenceRange = ReferenceHigh - ReferenceLow;
Than, every time you will use the sensor, you can use the following formula to calculate the CorrectedValue:
float CorrectedValue = (((RawValue - RawLow) * ReferenceRange) / RawRange) + ReferenceLow;
RawValue is obviously the reading of the sensor.
That's it!
Now you know how to calibrate your DS18B20 sensor or any other sensor that you'll use! Have fun!
2 Comments
A very nice Instructible, and a practical guide for a two-point calibration based on reasonable approximations. For best results, all of the water used should be distilled water. That is for the boiling water, the ice cubes, and the water in the ice point container. Ideally the ice should be shaved, but finely crushed cubes are good for this approximation.
If you enjoy this, have you considered a career in measurement science?
Yes, you're right, the water should be distilled. I omitted that to have a simple approach. I should have also measure the air-pressure to determinate the precise boiling temperature.
I enjoyed that, like I'm enjoying prototyping and coding for Arduino, taking analog photographs and doing a lot more than any other 33-year-old guy, but I also have no degree and, on paper, I'm just a simple guy like any other.
I also wrote academic papers (published in an academic platform) about Fashion Business and Social Networking, I taught photography and I was for a long time a professional photographer in the fashion business in Italy.
But still, I got no official "higher education", so I'm "stuck" in doing only works that makes me live good enough to pursue my hobbies.
Personal satisfaction is something that I enjoy much more than a career! :-) | http://www.instructables.com/id/Calibration-of-DS18B20-Sensor-With-Arduino-UNO/ | CC-MAIN-2018-17 | refinedweb | 1,034 | 65.22 |
My.
Let's start:
1. Download or clone the full GitHub project at (it's the newest version)
2. From a terminal, go to the /nix/ folder and launch the script. On OS X it is run with:
Code: Select all
.:
Code: Select all
import sys
print sys.path
You will get a long list of paths.
6. Open a new file browser window and navigate to your downloaded GitHub project: /IfcOpenShell/src/ifcopenshell-python/ and copy the full folder /ifcopenshell/
7. Paste it inside /site-packages/ folder. Now you should have something like:
Code: Select all
/IfcOpenShell/build/Darwin/x86_64/build/ifcopenshell/python-2.7.10/ifcwrap/
10. Paste them inside /site-packages/ifcopenshell/
11. Check everything is in place:
Code: Select all
and test if everything is working:
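The original check snippet did not survive extraction; as a stand-in, this stdlib-only sketch (works on Python 2 and 3, assumes nothing about your build) reports whether the copied folder is visible from the interpreter's site-packages entries:

```python
import os
import sys

# List every site-packages directory on the path and report whether
# the copied ifcopenshell folder is present in each one.
candidates = [p for p in sys.path if p.endswith('site-packages')]
for p in candidates:
    pkg = os.path.join(p, 'ifcopenshell')
    print(pkg + ' -> ' + ('present' if os.path.isdir(pkg) else 'missing'))
```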
12.1 in the Python console write:
Code: Select all:
Code: Select all
house.FCStd
house.ifc
12.3 Open house.FCStd, select the root "Building" object and export it (File
Cheers
EDIT: Before you ask for them, here you have a temporal download link for binaries for some Python versions. I have packed them so that you could straightaway download and install them without having to compile anything. I guess, FreeCAD developers may want to download them and make them available somewhere over the official site or even start thinking about including it inside FreeCAD. | http://forum.freecadweb.org/viewtopic.php?f=23&t=17536&sid=91d8eb4dda4a0eac4241e5b10b82b36a | CC-MAIN-2017-17 | refinedweb | 228 | 67.15 |
Hi Philipp,

> Dominik, can you give a short overview what zope.generic can do?

[...]

> > If you take a closer look at this package and you will see that
> > each subpackage is well documented.
>
> Right. But that information isn't easily accessible. You have to go
> to zope.generic/trunk/src/zope/generic/foobar. That's FIVE directory
> layers down from zope.generic!

Yes, that's true, but if you install only one package from the
'zope.generic' collection you need to have that README.txt in this
package and nothing more than that.

> Another question I raise: Why are there subpackages in the first
> place? And what's the rationale for calling it 'generic'? It'd be
> like calling it 'zope.fast' or 'zope.easy': you get no idea about
> the contents of the package. Of course, I know why it's called
> 'zope.generic'. Because it's a package *collection*. I think package
> collections (e.g. like zope.app or zope.products which we used to
> have some years ago) are a mistake...

I'm not sure that is also right for the generic package. The generic
package is ONE concept which is implemented in different reusable
sub-packages which can also be used as single packages for just one
aspect of the generic concept.

> > The generic package collection is/was developed/maintained by
> > Dominik. The zope.generic sub-packages are very well layered and
> > have clear dependencies; that's the reason why they are contained
> > in a collection package. I'm not familiar with the state of the
> > 'generic' project right now. But it's a really cool concept.
>
> I'm not convinced that easily :). Perhaps sketching out what it
> really is and what its purpose is would help.

I think Dominik can explain the generic pattern. All I can say
quickly is that you can define new instances based only on schemas,
without a class implementation. It's a kind of ZClasses/Archetypes in
a clean Zope 3 style implementation. Generic means you can define
generic components which are initializable and configurable at
runtime. There is also a pattern for enhancing such generic instances
via plugins. All I can say, it's generic ;-)

> >> * For example, what does the 'z3c' namespace package stand for?
> >> Who's behind this stuff? And why does it sometimes use 'sandbox'
> >> or 'Sandbox' instead of or in parallel of 'trunk' for its main
> >> development branch?
>
> > The z3c top level package is a namespace for additional packages
> > which I and Bernd started to use at the SwissEasterSprint.
>
> Great. This should be written down somewhere. (I'm not blaming you
> for not having done this; we have no rule for this right now. I
> think we should have a rule, though).

No problem, I agree we should have a place for such info. The
README.txt files are not good enough for giving an overview because
you have to check out first or browse the really slow repos with a
browser. Tell me where "somewhere" is and I can write something ;-)

> >> - Do other people also think it'd be a good idea to come up with
> >> some repository guidelines? Stephan had a proposal about
> >> specifying package metadata and code maturity/quality, I think
> >> it's worth working towards easily accessible info like that. If
> >> others agree, then let's get started.
> >
> > Not started, just make progress in what already started with
> > Stephans proposal and the ZSCP implementation ;-)
> >
> > Yes, I agree. Perhaps you can check the proposal and see what we
> > did in . I guess there is more to implement but right now it's
> > working and very professional looking. Thanks to Kamal Gill for
> > the great design work!
>
> I'm not as much looking for a website that gathers all the ZSCP
> information as much as for guidelines that help us maintain a
> certain quality of documentation within the repository.

I think the proposal from Stephan should catch all your questions. If
not, feel free to define and propose them. I agree with having a
clearer process for all you are asking.

> >> - Should this be part of the Zope Foundation development process
> >> (which again seems to be worked out by the Zope Management
> >> Organization)? If so, I'll hereby volunteer to join such a
> >> committee and contribute my ideas (especially on package
> >> organization in the repository and the associated development
> >> process).
> >
> > I really hope that we adopt the process described in Stephans ZSCP
> > proposal.
> >
> > ?rev=66671&view=markup
> >
> > Would be cool if we could make progress on what Stephan started
> > with the proposal and your ideas.
>
> Yup. I'll reiterate over Stephan's proposal once again and provide
> my comments.
>
> Philipp

_______________________________________________
Zope3-dev mailing list
Zope3-dev@zope.org
Unsub:
In xHarbour the behavior of references stored in arrays is reversed in comparison to Clipper or Harbour.
In Clipper and Harbour, the VM executing code like:

   aVal[ 1 ] := 100

unconditionally clears the first item in the aVal array and assigns the value 100 to it. xHarbour checks if aVal[ 1 ] is a reference, and in such a case it resolves the reference and then assigns 100 to the destination item.

On access, the Clipper and Harbour VM executing code like:

   x := aVal[ 1 ]

copies to x the exact value stored in aVal[ 1 ]. xHarbour checks if aVal[ 1 ] is a reference, and in such a case it resolves the reference and then copies to x the value of the reference's destination item. It can be seen in code like:

   proc main
      local p1 := "A", p2 := "B", p3 := "C"
      ? p1, p2, p3
      p( { @p1, p2, @p3 } )
      ? p1, p2, p3

   proc p( aParams )
      local x1, x2, x3
      x1 := aParams[ 1 ]
      x2 := aParams[ 2 ]
      x3 := aParams[ 3 ]
      x1 := lower( x1 ) + "1"
      x2 := lower( x2 ) + "2"
      x3 := lower( x3 ) + "3"

Harbour and Clipper show:

   A B C
   a1 B c3

but xHarbour:

   A B C
   A B C

It's not Clipper compatible, so in some cases it may cause portability problems. For example, code like the above was used in Clipper as a workaround for the limited number of parameters (32 in Clipper), since it allows one to directly assign items of arrays returned by hb_aParams() and update the corresponding variables passed by reference (see functions with a variable number of parameters below). Anyhow, the fact that xHarbour does not have the '...' operator, which can respect existing references in passed parameters, and does not support named parameters in functions with a variable number of parameters, means that reverted references introduce a limitation. For example, it's not possible to write code like:

   func f( ... )
      local aParams := hb_aParams()
      if len( aParams ) == 1
         return f1( aParams[ 1 ] )
      elseif len( aParams ) == 2
         return f2( aParams[ 1 ], aParams[ 2 ] )
      elseif len( aParams ) >= 3
         return f3( aParams[ 1 ], aParams[ 2 ], aParams[ 3 ] )
      endif
      return 0

which will respect references in parameters passed to the f() function.
Has anyone followed this Tutorial lately?
I followed it and then downloaded his github example of just the front end
I got it almost fully working. I can post to the server with Postman. I can also manually add data to the table in VS through the backend project.
But I can not get the front-end Android app working. It just stores stuff locally; it never gets to the server, push or pull. But something that is weird is that when I terminate the app and run it again, it does not load any of the previous data. So it may not even be storing it locally either; that, or it's wiping it every time.
This is the code for the AzureService. All of the other code is the same as the project cited above.
namespace PillTrackerApp.Services
{
class AzureService
{
    public MobileServiceClient Client { get; set; } = null;

    private IMobileServiceSyncTable<Pill> pillTable;

    public static bool UseAuth { get; set; } = false;

    public async Task Initialize()
    {
        if (Client?.SyncContext?.IsInitialized ?? false)
            return;

        var appUrl = "";
        Client = new MobileServiceClient(appUrl);

        var path = "syncstore.db";
        //path = Path.Combine(MobileServiceClient.DefaultDatabasePath, path);
        var store = new MobileServiceSQLiteStore(path);
        store.DefineTable<Pill>();

        await Client.SyncContext.InitializeAsync(store);

        pillTable = Client.GetSyncTable<Pill>();
    }

    public async Task SyncPill()
    {
        try
        {
            if (!CrossConnectivity.Current.IsConnected)
                return;

            await pillTable.PullAsync("allPills", pillTable.CreateQuery());
            await Client.SyncContext.PushAsync();
        }
        catch (Exception ex)
        {
            Debug.WriteLine("Unable to sync pills, that is alright as we have offline capabilities: " + ex);
        }
    }

    public async Task<IEnumerable<Pill>> GetPills()
    {
        await Initialize();
        await SyncPill();
        return await pillTable.OrderBy(c => c.DateUtc).ToEnumerableAsync();
    }

    public async Task<Pill> AddPill(bool atHome, string location)
    {
        await Initialize();

        var pill = new Pill()
        {
            DateUtc = DateTime.UtcNow,
            TakenAtHome = atHome,
            OS = Device.RuntimePlatform,
            Location = location ?? string.Empty
        };

        await pillTable.InsertAsync(pill);
        await SyncPill();

        return pill;
    }
}
}
I have F9 these two lines
await pillTable.PullAsync("allPills", pillTable.CreateQuery());
await Client.SyncContext.PushAsync();
And it never seems to run them it just exits the try statement. I even ran then before the try statement and nothing, but no exception was thrown either...
I would love any insight! I did type it all by hand and I didn't copy and paste so I might of messed something up, but I cant find it if that was the issue
Answers
Is the question I am asking not clear/ confusing/ makes no sense? Sorry if it is.
In the time since the post I was able to get a different project on Android to talk with an easy table. But I need it to work with the project above that uses a c# backend.
Thanks! | https://forums.xamarin.com/discussion/comment/308021/ | CC-MAIN-2019-18 | refinedweb | 426 | 60.11 |
#include "NewOneWire.h"
So, you're saying I should declare all my library-specific variables in the .cpp file, instead of .h?
extern HardwareSerial Serial; // so it can be referenced in lots of places
// actually create the object: HardwareSerial Serial(&rx_buffer, &UBRRH, &UBRRL, &UCSRA, &UCSRB, &UDR, RXEN, TXEN, RXCIE, UDRE, U2X);
Why can't they fix the issues with the files?
And if I don't want them to be global to the entire sketch, I don't prototype them with extern in the header?
It would be ideal to be able to have the ISR call functions from inside a class. Is that possible?
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=92212.msg696138 | CC-MAIN-2017-09 | refinedweb | 140 | 73.37 |
I was updating a site from Django 1.5.x to 1.7 to take advantage of the new, built-in migrations, when I ran into this error:
django.core.exceptions.ImproperlyConfigured: Application labels aren't unique, duplicates: utils
It kind of made sense because I have a directory of common code that has a bunch of apps. I combine that code with my current project by symbolically linking to common_code in my site packages. Both my current project and common_code have apps called “utils”.
In settings.py I load both:
INSTALLED_APPS = [..., 'utils', 'common_code.utils', ...]
Prior to Django 1.7, that worked fine. But that no longer works in 1.7.
The solution is in the new Applications section of the docs. Applying it to my specific case, in my current project utils app, I added a file: apps.py. Here is the contents of apps.py:
from django.apps import AppConfig class MYUtils(AppConfig): name = 'utils' # Full Python path to the application label ='my_utils'
The docs mention something about adding code to utils.__init__ to avoid having to change your settings file. But in my case, it seemed simpler to change settings.py to:
INSTALLED_APPS = [..., 'utils.apps.MYUtils', 'common_code.utils', ...]
Hey , I did same as you explained .. getting now Import Error –
I thought path is not correct — but checked it twice for it and it is correct path only …
What should I do to resolve this problem
Sorry. There are so many ways to get paths wrong, its impossible for me to know why you are getting that error. Once you figure it out, please post it here to help others.
I’m building project with django-oscar. I have to modify the customer view by extending it.
When I do so then I get this error..”django.core.exceptions.ImproperlyConfigured: Application labels aren’t unique, duplicates: customer”.
Can you help to solve this?
Thanks
Sorry, I do not have time to dive into this. All I can say is the blog post is directly related to your problem. In my case the app that was a duplicate was utils. In your case its customer. | https://snakeycode.wordpress.com/2014/10/09/django-application-labels-arent-unique/ | CC-MAIN-2017-43 | refinedweb | 356 | 69.18 |
Get the highlights in your inbox every week.
3 features that debuted in Python 3.0 you should use now | Opensource.com
3 features that debuted in Python 3.0 you should use now
Explore some of the underutilized but still useful Python features.
Subscribe now
This is the first in a series of articles about features that first appeared in a version of Python 3.x. Python 3.0 was first released in 2008, and even though it has been out for a while, many of the features it introduced are underused and pretty cool. Here are three you should know about.
Keyword-only arguments
Python 3.0 first introduced the idea of keyword-only arguments. Before this, it was impossible to specify an API where some arguments could be passed in only via keywords. This is useful in functions with many arguments, some of which might be optional.
Consider a contrived example:
def show_arguments(base, extended=None, improved=None, augmented=None):
print("base is", base)
if extended is not None:
print("extended is", extended)
if improved is not None:
print("improved is", improved)
if augmented is not None:
print("augmented is", augmented)
When reading code that calls this function, it is sometimes hard to understand what is happening:
show_arguments("hello", "extra")
base is hello
extended is extra
show_arguments("hello", None, "extra")
base is hello
improved is extra
While it is possible to call this function with keyword arguments, it is not obvious that this is the best way. Instead, you can mark these arguments as keyword-only:
def show_arguments(base, *, extended=None, improved=None, augmented=None):
print("base is", base)
if extended is not None:
print("extended is", extended)
if improved is not None:
print("improved is", improved)
if augmented is not None:
print("augmented is", augmented)
Now, you can't pass in the extra arguments with positional arguments:
show_arguments("hello", "extra")
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-6000400c4441> in <module>
----> 1 show_arguments("hello", "extra")
TypeError: show_arguments() takes 1 positional argument but 2 were given
Valid calls to the function are much easier to predict:
show_arguments("hello", improved="extra")
base is hello
improved is extra
nonlocal
Sometimes, functional programming folks judge a language by how easy is it to write an accumulator. An accumulator is a function that, when called, returns the sum of all arguments sent to it so far.
The standard answer in Python before 3.0 was:
class _Accumulator:
def __init__(self):
self._so_far = 0
def __call__(self, arg):
self._so_far += arg
return self._so_far
def make_accumulator():
return _Accumulator()
While admittedly somewhat verbose, this does work:
acc = make_accumulator()
print("1", acc(1))
print("5", acc(5))
print("3", acc(3))
The output for this would be:
1 1
5 6
3 9
In Python 3.x, nonlocal can achieve the same behavior with significantly less code.
def make_accumulator():
so_far = 0
def accumulate(arg):
nonlocal so_far
so_far += arg
return so_far
return accumulate
While accumulators are contrived examples, the ability to use the
nonlocal keyword to have inner functions with state is a powerful tool.
Extended destructuringImagine you have a CSV file where each row consists of several elements:
- The first element is a year
- The second element is a month
- The other elements are the total articles published that month, one entry for each day
Note that the last element is total articles, not articles published per day. For example, a row can begin with:
2021,1,5,8,10
This means that in January 2021, five articles were published on the first day. On the second day, three more articles were published, bringing the total to 8. On the third day, two more articles were published.
Months can have 28, 30, or 31 days. How hard is it to extract the month, day, and total articles?
In versions of Python before 3.0, you might write something like:
year, month, total = row[0], row[1], row[-1]
This is correct, but it obscures the format. With extended destructuring, the same can be expressed this way:
year, month, *rest, total = row
This means that if the format ever changes to prefix a description, you can change the code to:
_, year, month, *rest, total = row
Without needing to add
1 to each of the indices.
What's next?
Python 3.0 and its later versions have been out for more than 12 years, but some of its features are underutilized. In the next article in this series, I'll look at three more of them. | https://opensource.com/article/21/5/python-30-features | CC-MAIN-2021-43 | refinedweb | 752 | 50.16 |
view raw
I'm exporting a factory function in ES6, which returns an object with properties.
When calling the factory, the object is created, but its values don't update.
Example:
// factory.js
let counter = 1;
let factory = () => {
let increment = function(){
counter++;
}
return { counter, increment };
}
export default factory;
// main.js
import factory from './factory';
let f = factory();
console.log(f.counter); // =>1
f.increment();
console.log(f.counter); // => stil 1, not 2?
In JavaScript primitive types are passed by value.
let counter = 1; let factory = () => { ... // The property `counter` in this object gets passed a value of the `counter` declared in the scope of `factory.js`. // It does not get a reference. return { counter, increment }; }
When you return the object from the
factory function, its property
counter is assigned a value from the
counter declared in the scope of
factory.js. This essentially means the
counter property on the object received a copy of the value – there is nothing linking the value of the
counter variable and the
counter property.
let counter = 1; let factory = () => { let increment = function () { // The `++` operates on the `counter` variable declared in the scope of `factory.js`. // There is no link between the value `counter` in the scope of `factory.js` and the `counter` as the property of the object returned. // As a result when the `counter` variable is incremented, the `counter` property remains the same as the value initially passed to it. counter++; }; };
When you increment the
counter, you are incrementing the value of the variable declared in the scope of
factory.js. The variable's value is a
Number therefore being a primitive. Primitive values are passed by value so there is no link between the variable
counter and the property
counter.
Hopefully all that makes sense. If you want to do some more reading on this idea of passing by value (compared to passing by reference) you can see the following StackOverflow questions:
At this point in time you might be asking how can I fix this?
Instead of incrementing
counter you need to increment
this.counter. Like this:
let increment = () => { this.counter++; };
This works because the context of
this is the function
factory. The
counter property is on the variable you assigned the result of calling
factory. | https://codedump.io/share/P6AIUteMfn75/1/es6-exported-object-values-not-updated | CC-MAIN-2017-22 | refinedweb | 374 | 66.03 |
I am trying to use a constant typedef returned from a function. I need help in getting a const char * returned from the function. A test case is shown below.
t1.h:
====
typedef char Char;
typedef Char * CharPtr;
const CharPtr getChar();
t1.c:
====
#include "t1.h"
#define GREETING
void main()
{
const char * p;
p = getChar();
}
const CharPtr getChar()
{
return GREETING;
}
Now when I compile this I get " Function cannot return a const qualified type" on both the prototype and the definition.
Now I know that a function can return a pointer to a constant object but not a constant object.
And I did find one description in my "C++ Primer" book stating that the const modifies the type of cstr. So that in this case getChar is returning "char * const" and not a "const char *" as expected.
But I don't know how to modify the typedef in order to get "const char *". Any help appreciated. | https://cboard.cprogramming.com/c-programming/7005-help-function-returns-typedef-const.html | CC-MAIN-2017-09 | refinedweb | 156 | 74.39 |
[SOLVED] QFrame subclass not keeping properties set in Qt Designer
I have a UI form which contains a set of QFrames. I need extra functionality so I created a QFrame subclass in my project and, using Qt Designer, I promoted them to rgbFrame (my QFrame subclass). The problem is that the frame disappears completely at run time. If I set stylesheet programatically to set background color I see a widget with no frame. Doing a complete styling finally gives me a visible QFrame but this is not what I want, I want it to be native and use the OS style and be able to control it like a normal QFrame. What am I doing wrong to make this happen? Here is my subclass:
@
#include <QObject>
#include <QFrame>
#include <QEvent>
class rgbFrame : public QFrame
{
Q_OBJECT
public:
rgbFrame(QWidget * parent = 0, Qt::WindowFlags f = 0): QFrame(parent, f){
}
signals:
void colorClick();
protected:
int counter;
int stop;
bool event ( QEvent * e ){
if(e->type()==QEvent::MouseButtonRelease){
emit colorClick();
counter++;
}
}
};@
SOLVED: I ended up overriding the paintEvent function but even that was not getting called. So I then installed an event filter and called paintevent from there which solved the problem:
@
rgbFrame::rgbFrame(QWidget * parent, Qt::WindowFlags f): QFrame(parent, f)
{
installEventFilter(this);
}
void rgbFrame::paintEvent(QPaintEvent *e)
{
QFrame::paintEvent(e); // pass event to base class
}
bool rgbFrame::eventFilter(QObject *o, QEvent *e)
{
if (e->type() == QEvent::Paint) {
paintEvent((QPaintEvent *)e);
}
return QFrame::eventFilter(o, e);
}
@
You forgot to call @return QFrame::event(e);@ in rgbFrame::event().
If you fix that, you shouldn't need to reimplement paintEvent or install an event filter anymore.
You could also only reimplement mouseReleaseEvent instead of event if you are only interested in that type of event.
Edit: added a return
What I didn't understand is why I had to do this in the first place, I had sub-classed other qobjects before without having to do this. But now that I look at it I think know what you mean:
Even though the class has paintEvent(), mouseMoveEvent(), mousePressEvent(), mouseReleaseEvent(), moveEvent() etc ... if a subclass re-implements Event() then all the event functions don't get called because they all go through event first.
To be more precise, event is the one calling all the "sub event functions" (paintEvent, mouseMoveEvent...), so you need to call the base class function if you don't handle the event or if you don't want to filter it out.
And since widgets can also use the sub event functions to act upon them, you should call the base class corresponding function for these too, when you don't handle the events (it is at least indicated in "QWidget::keyPressEvent documentation ": ).
yeah I have already done this with QWidget::keyPressEvent() and other sub-event functions but it just did not dawn on me that I was cutting of all sub events() by reimplementing event(), I simply considered that both the event and sub-event functions would get called. | https://forum.qt.io/topic/6652/solved-qframe-subclass-not-keeping-properties-set-in-qt-designer | CC-MAIN-2017-47 | refinedweb | 498 | 60.89 |
»
Developer Certification (SCJD/OCMJD)
Author
[URLyBird]Is my lock appraoch right?
Zhixiong Pan
Ranch Hand
Joined: Jan 25, 2006
Posts: 239
posted
Apr 11, 2006 03:51:00
0
Hi all,
The book() method in HotelBusiness will first call lock() in Data. In DB Layer I have a class LockManager. So lock() in Data will first call lock() in LockManager. Following is abstrac of code for LockManager. I can not convince myself, what about your idea?
public class LockManager{ HashMap lockMap; long lock(recNO){ while(true){ synchronized(lockMap){ check if the recNo already in lockMap; } if(already locked){ synchronized(this){ Thread.currentThread().wait(); } } else{ generate lockCookie; return lockCookie; } } }
[Andrew: Deleted methods - refer to
SCJD FAQ
for why.]
[ April 11, 2006: Message edited by: Andrew Monkhouse ]
SCJP 1.4 SCJD
Luc Feys
Greenhorn
Joined: Nov 21, 2005
Posts: 20
posted
Apr 11, 2006 04:36:00
0
Hello Zhixiong,
I might be wrong but I think there is a securiy hole in your lock() method.
Suppose two threads A and B.
Thread A executes the code protected by "synchronized(lockMap)" and finds that the record is not locked.
Then thread B jumps in and executes the same code protected by "synchronized(lockMap)" before thread A has the possibility to really lock the record. So thread B also thinks that the record is not locked.
So both threads will proceed in locking the record (but only the last one's cookie will be valid).
You should be able to
test
this easily. If you would add a "Thread.sleep" between the check of the map and the actual locking, I think you would get a
SecurityException
thrown by at least one thread.
You should find plenty of possible ways to solve this in the forum. But it is also very well explained in Andrew's book.
I hope this was of any help.
Luc
Mihai Radulescu
Ranch Hand
Joined: Sep 18, 2003
Posts: 918
I like...
posted
Apr 11, 2006 04:39:00
0
Hi Zhixiong,
Here are some points:
1.Why you wait on thread - this can create problems if you choose to use RMI(just search RMI- are a lot of topics with this theme)
2.Consider this scenario : a thread can snack after the sync block
synchronized(lockMap){ check if the recNo already in lockMap; }
and lock/release some records - this can alter the lockMap -> the condition "if(already locked)" does not guarantee that the record are really locked or not.
3.You can generate a lock cookie also if the record was not locked(see point 2).
4.You will get a IllegalMonitorException because you wait using the current thread and you notify using the actual instance(I presume is not the current thread)
5.You can use the contains() methods to check if a Map contains a key
6.Your isLocked never return false !! -this is not so nice.
Regards ,Mihai
SCJP, SCJD, SCWCD, OCPJBCD
Zhixiong Pan
Ranch Hand
Joined: Jan 25, 2006
Posts: 239
posted
Apr 11, 2006 06:33:00
0
Hi Mihai,
Thanks for your reply.According your points, I want to consult with you:
1. Cause RMI will produce threads itself instead of your working, so why not use wait. Are there any potential emergency? If there are, what thread blocking approach should be used instead?
2 and 3. I modify the lock as following:
long lock(recNO){ while(true){ synchronized(lockMap){ generate lockCookie; check if the recNo already in lockMap; if(not clocked) lockMap.add(new Integer(recNo),lockCookie); } if(already locked){ synchronized(this){ try{ this.wait(); } catch(InterruptedException e){} } } } return lockCookie; }
4. not quite clear.
5. You are right!
6. I think the call of isLocked() can only get ture return or get a
SecurityException
.
Mihai Radulescu
Ranch Hand
Joined: Sep 18, 2003
Posts: 918
I like...
posted
Apr 11, 2006 07:23:00
0
Hi Zxiong,
Right now I am pretty busy and I try to be fast:
1.You don't know how many threads are used by the RMI - is not a bad idea to have a lock for each thread but if you choose this solution you must be shore that you can manage/manipulate this locks easily. Back to your example you use the like look the current thread -
the actual current thread
so when yourelease your look you must be shore that you get the same (Thread)instance. Once more RMI does not guarantee that all your methods are running under the same thread. So if you lock with a lock object be shore that you release with the same lock (object).
2&3.A Thread can sneck between your while() block and if block - lets say that a thread waits in the spin condition and it pass this - we are exact after the while block - an other thread slide in and lock the record A(lets say) , the fist thread knows that the record A is not lock(it pass the spin condition) and ....bam. More simple can happen that your lock returns a lookCookie even if the record is not really lock.
4. The wait & notify(All) must be called on the same instance your previous code snippet you use(
currentTnerad
.wait() and
this
notifyAll())
5.A boolean method can be return true or false -
thats the boolean meaning
In improper to use an exception instead of false - this make your code a little bit hard to understand.
Regards ,Mihai
I agree. Here's the link:
subject: [URLyBird]Is my lock appraoch right?
Similar Threads
[URLyBird]Why dead lock?
[URLyBird]Is my lock manager class right?
Passed 360/400
my question about LockManager.java
design question: should the lock manager be synchronized whenever it's used in Data?
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/188171/java-developer-SCJD/certification/URLyBird-lock-appraoch | CC-MAIN-2015-22 | refinedweb | 972 | 73.07 |
-
should be: -
change: "...type the following line in a Unix shell window or ..."
to: "...type the following line in a Unix shell window (use Xalan) or ..."
note: where Xalan is in monospace font
Trying to load whitespace-pi.xml in firefox gives:
Error loading stylesheet: (null)
Modifying the file to point to whitespace.xsl seems to give the intended results.
<xsl:processing-instructionhref="new.css"
should be:
<xsl:processing-instructionhref="processing.css"
Note from the Author or Editor:Yes, this is correct. "new.css" should be "processing.css"
xalan name.xsl encoding.xsl
NOW READS:
xalan name.xml encoding.xsl
3rd line of example has "<xsl:output" in italics.
'lu {font-size: 16pt}' should be 'ul {font-size: 16pt}'
Justification. The CSS Stylesheet uses HTML element selectors to style the resulting transformed document and there is no 'lu' tag in HTML.
Note from the Author or Editor:This is correct. "lu" should be "ul"
"\___namespace 'xmlns' = "urn:wyeast-net:invoice'
NOW READS:
"\___namespace '' = "urn:wyeast-net:invoice'
The XML listing is precedeed with the wrong heading description (in italics) saying what it is. It should be 'Example 4-3. An XML list Candian provinces' not 'An XML list of contributors to XML 1.0'
Note from the Author or Editor:Correct. Should be:
Example 4-3. An XML list of Canadian provinces
Parent Forward
should be:
Parent Reverse
Description of floor(number) function is as follows:
Returns the largest integer (closest to positive infinity) that is not less than the argument.
The word "less" should be "more".
xalan -i 1 price.xml frag.xsl
NOW READS:
saxon price.xml frag.xsl
"xalan europe.xsl sort.xsl"
NOW READS:
"xalan europe.xml sort.xsl"
change: "The plain text, alphabetized result tree..."
to: "The plain text lexicographically alphabetized result tree..."
"So far, you have sorted nodes alphabetically. You can also..."
NOW READS:
"So far, you have sorted nodes alphabetically (actually, lexicographically). You can also..."
"...is supposed to determine the language system environment."
should be:
"...is supposed to determine the language from the system enviroment."
xalan -p kp 'es' un.xml kp.xsl
NOW READS:
xalan -p kp "'es'" un.xml kp.xsl
xalan -p cr 'Oregon' states.xml cross.xsl
NOW READS:
xalan -p cr "'Oregon'" states.xml cross.xsl
xalan africa.xsl comma.xsl
NOW READS:
xalan africa.xml comma.xsl
"Only instructions elements..."
NOW READS:
"Only instruction elements..."
"...you will get the same result with excludeonlit.xsl that you did with exclude.lit."
should be
"...you will get the same result with excludeonlit.xsl that you did with exclude.xsl."
"Java Version 1.4...comes standard with JAXP."
should be
"JAXP comes standard with Java Version 1.4..."
xls:exclude-result-prefixes
should be
xsl:excude-result-prefixes
© 2014, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://www.oreilly.com/catalog/errata.csp?isbn=9780596003272 | CC-MAIN-2014-41 | refinedweb | 485 | 63.36 |
In this article I would like to demonstrate how to extend the ListView control in the .NET Compact Framework. We will focus on enabling some of the ListView Extended Styles. If we take a look at the Windows Mobile 5.0 Pocket PC SDK we will see that there are certain features of ListView that aren’t provided by the .NET Compact Framework.
An example of the ListView extended styles is displaying gridlines around items and subitems, double buffering, and drawing a gradient background. These extended styles can be enabled in native code by using the ListView_SetExtendedListViewStyle macro or by sending LVM_SETEXTENDEDLISTVIEWSTYLE messages to the ListView.
Send Message
We will be using a lot of P/Invoking so let’s start with creating an internal static class called NativeMethods. We need a P/Invoke declaration for SendMessage(HWND, UINT, UINT, UINT).
internal static class NativeMethods
{
[DllImport(“coredll.dll”)]
public static extern uint SendMessage(IntPtr hwnd, uint msg, uint wparam, uint lparam);
}
Enabling and Disabling Extended Styles
Now that we have our SendMessage P/Invoke declaration in place, we can begin extending the ListView control. Let’s start off with creating a class called ListViewEx that inherits from ListView. We need to look into the native header files of the Pocket PC SDK to get the ListView Messages. For now we will only need LVM_[GET/SET]EXTENDEDLISTVIEWSTYLE message which will be the main focus of all the examples. I will declare my class as a partial class and create all the pieces one by one for each example. Let’s create a private method called SetStyle(), this method will enable/disable extended styles for the ListView
public partial class ListViewEx : ListView
{
private const uint LVM_FIRST = 0x1000;
private const uint LVM_SETEXTENDEDLISTVIEWSTYLE = LVM_FIRST + 54;
private const uint LVM_GETEXTENDEDLISTVIEWSTYLE = LVM_FIRST + 55;
private void SetStyle(uint style, bool enable)
{
uint currentStyle = NativeMethods.SendMessage(
Handle,
LVM_GETEXTENDEDLISTVIEWSTYLE,
0,
0);
if (enable)
NativeMethods.SendMessage(
Handle,
LVM_SETEXTENDEDLISTVIEWSTYLE,
0,
currentStyle | style);
else
NativeMethods.SendMessage(
Handle,
LVM_SETEXTENDEDLISTVIEWSTYLE,
0,
currentStyle & ~style);
}
}
Grid Lines
For my first example, let’s enable GridLines in the ListView control. We can do this by using LVS_EX_GRIDLINES. This displays gridlines around items and sub-items and is available only in conjunction with the Details mode.
public partial class ListViewEx : ListView
{
private const uint LVS_EX_GRIDLINES = 0x00000001;
private bool gridLines = false;
public bool GridLines
{
get { return gridLines; }
set
{
gridLines = value;
SetStyle(LVS_EX_GRIDLINES, gridLines);
}
}
}
What the code above did was add the LVS_EX_GRIDLINES style to the existing extended styles by using the SetStyle() helper method we first created.
An interesting discovery to this is that the Design Time attributes of the Compact Framework ListView control includes the GridLines property. Now that we created the property in the code, when we open the Visual Studio Properties Window for our ListViewEx we will notice that GridLines property we created falls immediately under the “Appearance” category and even includes a description 🙂
Double Buffering
Do you notice that when you populate a ListView control with a lot of items, the drawing flickers a lot when you scroll up and down the list? Although it is not in the Pocket PC documentation for Windows Mobile 5.0, the ListView actually has an extended style called LVS_EX_DOUBLEBUFFER. Enabling the LVS_EX_DOUBLEBUFFER solves the flickering issue and gives the user a more smooth scrolling experience.
public partial class ListViewEx : ListView
{
private const uint LVS_EX_DOUBLEBUFFER = 0x00010000;
private bool doubleBuffering = false;
public bool DoubleBuffering
{
get { return doubleBuffering; }
set
{
doubleBuffering = value;
SetStyle(LVS_EX_DOUBLEBUFFER, doubleBuffering);
}
}
}
Gradient Background
Another cool extended style is the LVS_EX_GRADIENT. This extended style draws a gradient background similar to the one found in Pocket Outlook. It uses the system colors and fades from right to left. But what is really cool about this is that this is done by the OS. All we had to do was enable the style.
public partial class ListViewEx : ListView
{
private const uint LVS_EX_GRADIENT = 0x20000000;
private bool gradient = false;
public bool Gradient
{
get { return gradient; }
set
{
gradient = value;
SetStyle(LVS_EX_GRADIENT, gradient);
}
}
}
If you want to look more into extended styles then I suggest you check out the Pocket PC Platform SDK documentation. There a few other extended styles that I did not discuss that might be useful for you. You can get the definitions in a file called commctrl.h in your Windows Mobile SDK “INCLUDE” directory.
8 thoughts on “ListView Extended Styles in .NETCF”
hi
i m new in CE development.
i m using sqlite in my application
sqlite version 060.
but when i m trying to open database it gives me an error cant find Pinvoke method..
error message is “Can’t find PInvoke DLL ‘SQLite.Interop.060.DLL’.”
i m using system.data.sqlite namespace……
i had also copied the same dll to my windows folder but not succeeded.
Please help me….
Hey Christian,
in the Gradient get Field you return “doubleBuffering” where it should be “gradient”.
Thanks for the code!!!
Hey Sebastian,
I’ll fix that immediately! Thanks for notifying me about it.
hi i am new mobile application development , will you give the full source of ListViewEx
Sure, I'll upload it to:
Who knows where to download XRumer 5.0 Palladium?
Help, please. All recommend this program to effectively advertise on the Internet, this is the best program!
i'm trying to override the onpaint for a TextBox inb CF 2.0
Unfortunately i read somewhere i should call SetStyle(ControlStyles.UserPaint) in my constructor in order to see my OnPaint override called.
SetStyle is not there in CF 2.0
Do you know the message to be sent for simulating SetStyle(ControlStyles.UserPaint) via core.dll in CF 2.0 ?
Thanx for the post and for any suggestion
Hi
Can you do the same but using VB.NET? Im trying to do that but i cant get it
Thanks | https://christianhelle.com/2008/10/listview-extended-styles-in-netcf/ | CC-MAIN-2021-04 | refinedweb | 964 | 64.41 |
Provided by: manpages-dev_5.01-1_all
NAME
addseverity - introduce new severity classes
SYNOPSIS
#include <fmtmsg.h> int addseverity(int severity, const char *s); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): addseverity(): Since glibc 2.19: _DEFAULT_SOURCE Glibc 2.19 and earlier: _SVID_SOURCE
DESCRIPTION
This function allows the introduction of new severity classes which can be addressed by the severity argument of the fmtmsg(3) function. By default, that function knows onlyaddseverity() │ Thread safety │ MT-Safe │ └──────────────┴───────────────┴─────────┘
CONFORMING TO
This function is not specified in the X/Open Portability Guide although the fmtmsg(3) function is. It is available on System V systems.
NOTES
New severity classes can also be added by setting the environment variable SEV_LEVEL.
SEE ALSO
fmtmsg(3)
COLOPHON
This page is part of release 5.01 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | http://manpages.ubuntu.com/manpages/eoan/man3/addseverity.3.html | CC-MAIN-2019-30 | refinedweb | 156 | 50.02 |
Gantry::Docs::Cookbook - Gantry How Tos
This document is set up like a cookbook, but all the recipes are fully implemented in Gantry::Samples. The first recipe explains how to run the samples.
You might also be interested in Gantry::Docs::FAQ which answers a different set of questions that are not covered in Gantry::Samples. Bigtop has its own Bigtop::Docs::Cookbook and full documentation suite, see Bigtop::Docs::TOC.
The questions are:
To run the samples, you need sqlite 3 and Gantry. Once those are in place just change to the samples directory of the Gantry distribution and type:
./app.server
The stand alone server will print a list of available URLs like this:
Available urls:
These locations will be mentioned again in the appropriate sections below.
You need three things to upload a file...
file_uploadmethod supplied by all engines.
Note that there is no special plugin to load. Gantry engines all know how to upload files and are happy to do so at any time.
Gantry::Samples::FileUpload provides an example. It has a single do_ method,
do_main, which uses
fileupload.tt supplied in the samples
html/templates sub directory.
Once the form validates,
do_main says:
my $upload = $self->file_upload( 'file' );
where file is the name of the form field containing the file name. This method returns a hash ref with thses keys:
A unique identifier for the file based on the current time and a random number.
Base name of user's file.
File suffix (e.g. txt).
Name of user's file including suffix.
Number of bytes in file.
Mime type of file.
The handle from which you read the file.
To see how to catch and store the file, see do_main in Gantry::Samples::FileUpload.
Note that Bigtop does not help with file uploads, since the actual upload is done by a single method call and the details of processing the received file vary too much for a generic scheme.
The short answer is: not like the samples. The samples are set up to run exclusively in a stand aloner server environment, so that people trying them can more easily run them.
For configuration best practice advice, see the Gantry::Docs::FAQ question "How should I configure an app?" It shows how we prefer to configure apps in a normal life cycle.
While you could authenticate users in a variety of ways, our prefered scheme is with a cookie. Gantry provides
Gantry::Plugins::AuthCookie to manage those cookies. To see a demonstration, run the samples and visit.
You need two pieces to make authentication work. First, you need to modify the config information. This is best done in its Bigtop file. Second, you need to set up a database to keep track of the users and their passwords. I'll explain the Bigtop changes first.
In the top level config block, where the engine statement is, add:
plugins AuthCookie;
This will make every controller in the application use the authentication plugin, but you still have to ask nicely for it to keep people out. By default, everyone is still allowed in to all pages. To keep people out, set config variables at that controller level. For example:
controller AuthCookie::SQLite::Closed { page_link_label `AuthCookie w/ SQLite Closed`; rel_location `authcookie/sqlite/closed`; config { auth_deny yes => no_accessor; auth_require `valid-user` => no_accessor; } }
This creates a controller which denies access unless the user is valid. The
auth_require `valid-user` syntax is meant to mimic Apache basic auth syntax.
This controller doesn't have any methods. Rather, it inherits them from this one:
controller AuthCookie { page_link_label AuthCookie; rel_location authcookie; method do_open is stub { } method do_closed is stub { } }
These stubs are filled inside
Gantry::Samples::AuthCookie. They are not particularly interesting, but here they are:
sub do_open { my ( $self ) = @_; return( "you're in" ); } # END do_open sub do_closed { my ( $self ) = @_; my @lines; push( @lines, "you're in: " . $self->user ); push( @lines, ht_br(), ht_br(), ht_a( ( $self->app_rootp . "/login?logout=1" ), ('logout ' . $self->user), ), ); return( join( "", @lines ) ); } # END do_closed
Some pages display differently for logged in users than they do for others. For instance the front page of perlmonks always shows the Seekers of Perl Wisdom section for anyone not logged in, but a page of the user's choice for those logged in. This requires looking at the cookie, to find out who is logged in, without denying access. These modules in the samples demonstrate this:
Gantry::Samples::AuthCookie::SQLite Gantry::Samples::TablePermissions Gantry::Samples::TablePermCRUD
They do this by setting:
auth_optional yes => no_accessor;
in their controller level config blocks. This sets the
user_row attribute of the site object, without keeping anyone out of the page for failure to have a valid cookie. Now that the config tells the auth cookie plugin where to look for user data and which pages to restrict, we must have database tables for that user data.
For
Gantry::Plugins::AuthCookie to work, you need to set up a database for it to use, or add tables to your app's existing database. Using a separate database is good in a corporate setting where users have various access to many different apps. Combining the auth tables into an existing database is better for self standing sites. The samples use a single database, so I'll show that approach first.
There are three essential columns in the user table needed for authentication: id, user name, and pass word. The id is for the benifit of the ORM. The other fields hold the user's credentials. The names of these columns is not fixed by the AuthCookie plugin. The defaults are ident and password. We'll see how to control the names the plugin uses below.
You are welcome to put additional information in the user rows. The user table from the samples has this schema (which was generated by bigtop):
CREATE TABLE user ( id INTEGER PRIMARY KEY AUTOINCREMENT, active INTEGER, username varchar, password varchar, email varchar, fname varchar, lname varchar, created datetime, modified datetime );
When you use authentication, with either auth_require or auth_optional, you can get the ORM row for the logged in user by calling
user_row on your Gantry site object. If auth is optional, and the user is not logged in, you will still get a user row, but it won't have data in it.
To tell the plugin about the table, the samples use these variables in the app level config block:
config { dbconn `dbi:SQLite:dbname=app.db` => no_accessor; auth_table user => no_accessor; auth_user_field username; #... }
To use a separate database for auth, set
auth_dbconn like dbconn, but pointing to the other database. We need to tell the plugin the name of our user table with the
auth_table config parameter. Since sqlite allows us to, we call it
user. Since our user names are stored in the
username column, and not in
ident, we must set the
auth_user_field config parameter to
username. To change the pass word column away from
password, we would use
auth_password_field, but we don't need to in this case.
That's all you need to set up cookie base user log-ins. If you want to further restrict pages to subsets of logged-in users, see the next question.
Once you have mastered authenticating users, it is usually a short step to wanting to divide them into groups with different access rights. For instance, a message board like slashdot needs special access for editors.
As with authentication, there are two parts to authorization. First, you need to change the config info (or code). Then, you need to include the groups and their member lists in the database. I'll take them in that order.
Note well, that the samples do not use group authorization at the controller level. They do use groups at the row level (see the next question). Thus, the tables and models are in place to handle groups. You could alter the examples as described below to convert the valid-user requirement into a group membership requirement.
To restrict the closed controller to a specific group, first change the
auth_require value from
valid-user to
group. Then, add
auth_groups with the name of the group whose members are allowed to reach the page.
controller AuthCookie::SQLite::Closed { page_link_label `AuthCookie w/ SQLite Closed`; rel_location `authcookie/sqlite/closed`; config { auth_deny yes => no_accessor; auth_require `group` => no_accessor; auth_groups `admin` => no_accessor; } }
Note that the value for the
auth_groups config statement may be a comma separated list of group names (with optional internal whitespace). Logged in users who are members of any listed group, will be allowed to access the page.
In addition to the user table described above, you need two other tables to make groups work. The first table names the groups. The second lists the members.
Here is the definition of the group table (it was generated by bigtop):
CREATE TABLE user_group ( id INTEGER PRIMARY KEY AUTOINCREMENT, ident varchar, description varchar, created datetime, modified datetime );
The short name of the group is
ident. This is the name you list in
auth_groups values. The usually longer
description is meant to be a more verbose and therefore understandable description of the group. Only the ident field is used by the AuthCookie plugin.
The group membership table represents a three way join between the user and user_group tables. (See "What is a three way join?" if you aren't familiar with the three way join concept.) Each row in this table links one user to one group in classic many-to-many fashion (again, bigtop generated the table layout):
CREATE TABLE user_user_group ( id INTEGER PRIMARY KEY AUTOINCREMENT, user INTEGER, user_group INTEGER );
If you generate these tables with bigtop, it will make controllers to manage them. You will almost surely want to secure those with group level authorization. You will probably need to add one user manually through the database command line tool to bootstrap the management process. The samples allow anyone to update the user information, a method suitable only for development, if ever there was one.
In order to answer this question, let's begin by talking about how the samples of this work. The Table Permissions controller represents a one table 'For Sale' bulletin board (so does the Table Permissions with Manual CRUD, see below). Note that this is not a fully functional site. It just demonstrates row level permissions.
Any user may visit the Table Permissions main listing and see all of the items for sale. But what else you may do there is governed by whether you are logged in and whether your logged in user is n the admin authorization group.
A three way join is a many-to-many relationship between two tables. For example, the relationship between authors and books. An author of one book likely has written other books. For each author, there are many books. But, in the other direction a good number of books have multiple authors. For each book there could be many authors.
Generally, you need a special extra table to hold the relationship:
+--------+ +------+ | author | | book | +--------+ +------+ ^ ^ \ / +-------------------+ | author_book | +-------------------+
Here the author_book table has only two (or three fields if you give it an id). Both are foriegn keys to the tables on the end of the relationship.
You need two things for easy use of a three way structure, once your SQL is in place. First, you need your model to understand it. Second, you need an easy way to perform CRUD on the join table rows.
Making the model understand your three way preferences is easy. Start with the two tables on the ends of the relationship as normal. Then add this to the bigtop file:
join_table author_book { joins author => book; }
You can do this with kickstart syntax when you make or augment the bigtop file:
bigtop -n BookStore 'author(name)<->book(title,year:int4)'
That will make the two regular tables and the joining table.
Once you have a three way structure, you can use Gantry::Utils::Threeway to manage the rows in the joining table (the one in the middle). You can do this from the controller for the table on either end of the many-to-many relationship, or for both of them.
To show this, I'll pull code from Gantry::Samples::User.
First, use the module:
use Gantry::Utils::Threeway;
Then provide a do_ method. The one in the user example manages group membership for users. In kickstart syntax, the relationship is user<->groups. The full method is:
#----------------------------------------------------------------- # $self->do_groups( ) #----------------------------------------------------------------- sub do_groups { my ( $self, $user_id ) = @_; my $threeway = Gantry::Utils::Threeway->new( { self => $self, primary_id => $user_id, primary_table => 'user', join_table => 'user_user_group', secondary_table => 'user_group', legend => 'Assign Groups to User' } ); $threeway->process(); } # END do_groups
All you have to do is construct the three way object and call process on it. This displays a form with a check box for each group. The current memberships are already checked. Clicking in the boxes and submitting the form updates them.
The keys needed by
new are:
The Gantry site object.
The value of the foreign key in the joining table that points to this controller.
The name of the controller's table.
The name of the joining table.
The name of the table on the other end of the many-to-many.
HTML fieldset legend around the form where new joining table rows are created from check box values.
If you want to access rows from the table on the other end of the many-to-many relationship, use the
many_to_many relationship in the model:
my @groups = $user->user_groups();
That will return an array of groups to which the current user belongs. You can turn that around through a group row:
my @members = $group->users();
If you need the rows from the joining table, use the
has_many relationship from the model:
my @joining_rows = $user->user_user_groups();
By far, the easiest way to create a three way joining structure is with a bigtop
join_table block as shown above. But you can do it yourself. In your author model, add calls like these:
__PACKAGE__->has_many( author_books => 'YourApp::Model::author_book', 'author' # your table name ); __PACKAGE__->many_to_many( books => 'author_books', # value matches the has many above 'book' # the other table name );
Then do the same in the book model. Finally, make sure you have a model for the joining rows with a normal foreign key
belongs_to for each of the end point tables:
__PACKAGE__->belongs_to( user => 'YourApp::Model::author' ); __PACKAGE__->belongs_to( user => 'YourApp::Model::book' );
There are two types or 'styles' of SOAP requests. Gantry can help with either, but it is better at the document style, so that is what I'll discuss here.
To see a sample of this approach, run the samples app.server in one shell and samples/bin/soap_client in another. Give the client a Farenheit temperature on the command line. You should see a SOAP request packet. The client will send that packet to the server immediately after printing it for you. Then, you should see a SOAP response packet with the temperature in Celcius.
Here's what you need to do to make your own server.
Make a controller. For instance, you could add this to your bigtop file:
controller SOAP { rel_location GantrySoapService; skip_test 1; plugins SOAP::Doc; method do_f2c is stub { } }
When you regenerate, you'll have a new SOAP.pm in which to place your code. It will inherit from a GEN module that uses the document style SOAP plugin. Then you'll have to fill in the code for the do_f2c routine. Bigtop made this stub:
#----------------------------------------------------------------- # $self->do_f2c( ) #----------------------------------------------------------------- sub do_f2c { my ( $self ) = @_; }
All we need to do is fill it in.
If all the SOAP request parameters are at the same level in their packet (a fairly common case), you can take advantage of the plugin's automated conversion of the SOAP packet into form parameters. If your SOAP packet has nested tags, you'll need to parse the XML with a module like
XML::LibXML or
XML::Twig. The sample's packets are simple.
Here is the finished routine (less comments offering advice on XML::LibXML):
1 sub do_f2c { 2 my ( $self ) = @_; 3 my $time = $self->soap_current_time(); 4 my $params = $self->params(); # easy way 5 6 my $f_temp = $params->{ farenheit }; 7 my $celcius = 5.0 * ( $f_temp - 32 ) / 9.0; 8 9 my $ret_struct = [ 10 { 11 GantrySoapServiceResponse => [ 12 { currentUTCTime => $time }, 13 { celcius => $celcius }, 14 ] 15 } 16 ]; 17 18 $self->soap_namespace_set( 19 '' 20 ); 21 22 return $self->soap_out( $ret_struct, 'internal', 'pretty' ); 23 } # END do_f2c
If you need to tell your client the UTC time of your response in valid SOAP time format, call
soap_current_time, as I did on line 3.
Since my server's SOAP requests are simple, I can call
params on line 5, just as I would to handle form parameters. The input parameter is in the
farenheit key (line 6). A grade school formula does the conversion on line 8.
Lines 9-16 build the structure of the return packet. The top level tag is
GantrySoapServiceResponse. Inside it will be a list of tags (order often matters to DTDs), one for the time, the other for the converted temperature.
To control the namespace of
GantrySoapServiceResponse and its children, I called
soap_namespace_set (line 18).
Finally, line 22 uses
soap_out to send the packet back to the client. It expects:
See the example above. If you need a empty tag like <empty />, use
{ empty => undef }
This must be a string chosen from 'prefix' or 'internal'. The default is prefix. This governs where the namespace is defined, and therefore has a cosmetic effect on the SOAP packet. A prefix namespace is defined in the SOAP Envelope tag where it is given the prefix tns. That prefix appears on all tags in the returned packet.
If you use internal instead, the namespace is defined in the top level tag:
<GantrySoapServiceResponse xmlns="">
Then the elements in the body of the response have no explicit namespace prefix.
If this has a true value, the resulting XML packet will have various whitespace added to it to improve human readability. No whitespace will be added anywhere that would affect parsing the result.
That's all there is to a document style SOAP server in Gantry.
There is a sample SOAP client in samples/bin/soapclient. There are three parts to a SOAP client: build the XML for the request, send that XML to the proper URL, parse the response (this list leaves out coming up with the data for the request). Gantry can help with the first one, but you need LWP or something similar for the other two.
Here is commentary on the soapclient sample.
se strict; use warnings; use lib qw( lib ../lib );
This lib directory makes sure that code comes from the samples or from distribution and not from installed locations. This is useful for developers working on the samples.
use LWP::UserAgent; use Gantry::Plugins::SOAP::Doc;
LWP will handle the actual http request/response. The SOAP::Doc plugin will make the XML to send.
my $f_temp = shift || 68; my $url = '';
These set the URL or the web service. You must be running the samples app.server on port 8080 of the local host for this to work.
my $site = { action_url => $url, post_to_url => $url, target_namespace => '', };
This structure will be passed to helper routines below. The namespace is not particularly important, but it services as documentation for users of the service.
my $request_args = [ { temperature => [ { farenheit => $f_temp }, ] }, ];
This structure is the data for the request. It has an outer XML tag called temperature. Inside that tag is one parameter for the remote method called farenheit. To add other parameters, add more hashes with keys for the parameters and legal values. If you need an empty tag, use { key_name => undef }
Which will generate:
<key_name />
Note that ensuring valid parameters is totally up to you.
my $soap_obj = Gantry::Plugins::SOAP::Doc->new( $site );
Instantiate a SOAP::Doc plugin object. If you happened to have a Gantry object with the SOAP plugin like a server, you could skip this step and just use the Gantry object as the invocant of
soap_out in the next step.
my $request_xml = $soap_obj->soap_out( $request_args, 'internal', 'pretty' );
Calling
soap_out on a SOAP::Doc plugin object (or a Gantry site object) returns a valid XML SOAP packet.
warn "request:\n$request_xml\n"; transact_via_xml( $site, $request_xml );
The packet is first printed for the user's benefit, then sent to the server.
The
transact_via_xml sub is not that interesting, but I'll include it for completeness. Mostly it makes sure to get everything aligned for proper LWP functioning.
sub transact_via_xml { my ( $site, $request_xml ) = @_; # make the request my $user_agent = LWP::UserAgent->new(); $user_agent->agent( 'Sunflower/1.0' ); my $request = HTTP::Request->new( POST => $site->{ post_to_url } ); $request->content_type( 'text/xml; charset=utf-8' ); $request->content_length( length $request_xml ); $request->header( 'Host' => $site->{host} ); $request->header( 'SoapAction' => $site->{ action_url } ); $request->content( $request_xml ); my $response = $user_agent->request( $request ); warn $response->content . "\n"; }
That's a complete stand alone client. You could do the same three steps in a Gantry controller to contact a foreign web service while serving a page request, depending on the service throughput and your users' patience.
When a user is adding a row to a database table, they often need to add more than one. Gantry provides a little feature to make this easier called 'Submit and Add Another'. If you turn it on, the user will see it as a button between 'Submit' and 'Cancel'. Clicking it will first validate the form. If it validates, the row will be created. But, instead of going back to a main listing, the user will be returned to the add form.
To turn on this feature in a manual form method, add:
submit_and_add_another => 1,
to the returned hash.
Bigtop and tentmaker can do this for you. Simply add:
submit_and_add_another => 1
to the
extra_keys for the form. Note that there is no specific keyword for this. You can set any key in the forms hash by adding it to
extra_keys. | http://search.cpan.org/dist/Gantry/lib/Gantry/Docs/Cookbook.pod | CC-MAIN-2016-30 | refinedweb | 3,657 | 63.9 |
Programming is often highly collaborative. In addition, our own code can quickly become difficult to understand when we return to it — sometimes only a few hours later! For these reasons, it’s often useful to leave notes in our code for ourselves or other developers.
As we write a C++ program, we can write comments in the code that the compiler will ignore as our program runs. These comments exist just for human readers.
Comments can explain what the code is doing, leave instructions for developers using the code, or add any other useful annotations.
There are two types of code comments in C++:
A single line comment will comment out a single line and is denoted with two forward slashes
//preceding it:// Prints "hi!" to the terminal std::cout << "hi!";
You can also use a single line comment after a line of code:std::cout << "hi!"; // Prints "hi!"
A multi-line comment will comment out multiple lines and is denoted with
/*to begin the comment, and
*/to end the comment:/* This is all commented. std::cout << "hi!"; None of this is going to run! */
Instructions
Let’s practice adding a comment.
Add a new line above
#include <iostream>.
Write a single line comment that says
Harry Potter.
Compile and execute spell.cpp using the terminal.
This checkpoint will pass after you compile and execute. | https://production.codecademy.com/courses/learn-c-plus-plus/lessons/cpp-compile-execute/exercises/comments | CC-MAIN-2021-04 | refinedweb | 225 | 66.94 |
Help please, due in 1 hour. Does not compile
Shawn Lorne
Greenhorn
Posts: 1
0
Hi all,
Kinda short on time. Error message: Sales2.main(Sales2.java:56)
Java Result: 1
Here is the code!
// Exercise 7.20 Solution: Sales2.java
// Program totals sales for salespeople and products.
import java.util.Scanner;
public class Sales2
{
public static void main( String[] args )
{
Scanner input = new Scanner( System.in );
// sales array holds data on number of each product sold
// by each salesperson
double[][] sales = new double[ 5 ][ 4 ]; // 5 salespeople, 4 products each person
System.out.print( "Enter salesperson number (-1 to end): " );
int person = input.nextInt(); // the salesperson index
while ( person != -1 )
{
// To do
// prompt to enter product number and save it as an integer
System.out.print("Please enter the product number: ");
int product = input.nextInt();
// promp to enter sales amont and save it as double
System.out.print("Please enter the sales amount: ");
double salesAmount = input.nextDouble();
// error-check the input number for the array boundary
//error check input
if (person >= 1 && person < 5 && product >=1 && product < 6 &&
salesAmount >= 0)
sales[product -1][person -1] =+ salesAmount;
else
System.out.println("Invalid Input!");
System.out.print("Enter sales person number(-1 to end):");
person = input.nextInt();
}//end while
//salesperson totals
double salesPersonTotal[] = new double[4];
//display data table
for (int column = 0; column < 4; column ++)
salesPersonTotal[column] = 0;
System.out.printf("%8s%14s%14s%14s%14s%10s\n",
"Product", "Salesperson 1", "Salesperson 2","Salesperson 3",
"Salesperson 4", "Total");
//Printing a person's sales of a product
for (int row = 0; row < 5; row ++)
{
double productTotal = 0.0;
System.out.printf("8%d%",(row +1) );
for ( int column = 0; column < 4; column ++)
{
System.out.printf("%14.2f", sales[row][column]);
productTotal += sales[row][column];
salesPersonTotal[column] += sales[row][column];
} //end for loop
}
System.out.printf("8%s", "Total");
for (int column = 0; column < 4; column ++)
System.out.printf("14.2%f", salesPersonTotal[column]);
System.out.println();
}// end calculations
} // end main
What am I missing??? The table does not want to show.
Edit: Forgot problem spec
7.20 (Total Sales) Use
Thus, each salesperson passes in between 0 and 5 sales slips per day. Assume that the information from all. Your tabular output should include these cross-totals to the right of the totaled rows and to the bottom of the totaled columns.
See attached program template and output sample.
0
Shawn Lorne wrote:Hi all,
Kinda short on time. Error message:
Well, looks like your time has expired, so there's not much point in replying; but for future reference:
1. UseCodeTags (←click).
2. EaseUp.
3. TellTheDetails.
In fact, a good read of the HowToAskQuestionsOnJavaRanch page would probably be in order.
And next time: Don't leave it so late.
Winston
0
First of all, don't use \n, use %n. That will result in \n on Linux and \r\n on Windows.: %%.
Shawn Lorne wrote: System.out.printf("8%d%",(row +1) );
...
System.out.printf("8%s", "Total");
...
System.out.printf("14.2%f", salesPersonTotal[column]);: %%.
SCJP 1.4 - SCJP 6 - SCWCD 5 - OCEEJBD 6 - OCEJPAD 6
How To Ask Questions How To Answer Questions
| http://www.coderanch.com/t/584930/java/java/due-hour-compile | CC-MAIN-2016-07 | refinedweb | 525 | 51.24 |
This chapter discusses advanced techniques in Pro*C/C++.
This section explains how the Pro*C/C++ Precompiler handles character host variables. There are four host variable character types: character arrays, pointers to strings, VARCHAR variables, and pointers to VARCHARs.
Do not confuse VARCHAR (a host variable data structure supplied by the precompiler) with VARCHAR2 (an Oracle internal datatype for variable-length character strings).
The CHAR_MAP precompiler command line option is available to specify the default mapping of char[n] and char host variables. The following Pro*C/C++ example illustrates the effect of each setting:
char ch_array[5];

strncpy(ch_array, "12345", 5);

/* char_map=charz is the default in Oracle7 and Oracle8 */
EXEC ORACLE OPTION (char_map=charz);

/* Select retrieves a string "AB" from the database */
EXEC SQL SELECT ... INTO :ch_array FROM ... WHERE ... ;

/* ch_array == { 'A', 'B', ' ', ' ', '\0' } */

strncpy(ch_array, "12345", 5);

EXEC ORACLE OPTION (char_map=string);

/* Select retrieves a string "AB" from the database */
EXEC SQL SELECT ... INTO :ch_array FROM ... WHERE ... ;

/* ch_array == { 'A', 'B', '\0', '4', '5' } */

strncpy(ch_array, "12345", 5);

EXEC ORACLE OPTION (char_map=charf);

/* Select retrieves a string "AB" from the database */
EXEC SQL SELECT ... INTO :ch_array FROM ... WHERE ... ;

/* ch_array == { 'A', 'B', ' ', ' ', ' ' } */
The DBMS and CHAR_MAP options determine how Pro*C/C++ treats data in character arrays and strings. These options allow your program to observe compatibility with ANSI fixed-length strings, or to maintain compatibility with previous releases of Oracle and Pro*C/C++ that use variable-length strings. See Chapter 10, "Precompiler Options" for a complete description of the DBMS and CHAR_MAP options.
The DBMS option affects character data both on input (from your host variables to the Oracle table) and on output (from an Oracle table to your host variables).
Character Array and the CHAR_MAP Option
The mapping of character arrays can also be set by the CHAR_MAP option independent of the DBMS option. DBMS=V7 or DBMS=V8 both use CHAR_MAP=CHARZ, which can be overridden by specifying either CHAR_MAP=VARCHAR2 or STRING or CHARF.
On input, the DBMS option determines the format that a host variable character array must have in your program. When CHAR_MAP=VARCHAR2, host variable character arrays must be blank-padded and should not be null-terminated. When DBMS=V7 or V8, character arrays must be null-terminated ('\0').
When the CHAR_MAP option is set to VARCHAR2, trailing blanks are removed up to the first non-blank character before the value is sent to the database. An uninitialized character array can contain null characters. To make sure that the nulls are not inserted into the table, you must blank-pad the character array to its full length. For example, if you execute the statements:
char emp_name[10];
...
strcpy(emp_name, "MILLER");    /* WRONG! Note no blank-padding */
EXEC SQL INSERT INTO emp (empno, ename, deptno)
    VALUES (1234, :emp_name, 20);
you will find that the string "MILLER" was inserted as "MILLER\0\0\0\0" (with four null bytes appended to it). This value does not meet the following search condition:
. . . WHERE ename = 'MILLER';
To INSERT the character array when CHAR_MAP is set to VARCHAR2, you should execute the statements
strncpy(emp_name, "MILLER    ", 10);    /* 4 trailing blanks */
EXEC SQL INSERT INTO emp (empno, ename, deptno)
    VALUES (1234, :emp_name, 20);
When DBMS=V7 or V8, input data in a character array must be null-terminated. So, make sure that your data ends with a null.
char emp_name[11];    /* Note: one greater than column size of 10 */
...
strcpy(emp_name, "MILLER");    /* No blank-padding required */
EXEC SQL INSERT INTO emp (empno, ename, deptno)
    VALUES (1234, :emp_name, 20);
The pointer must address a null-terminated buffer that is large enough to hold the input data. Your program must allocate enough memory to do this.
The following example illustrates all possible combinations of the effects of the CHAR_MAP option settings on the value retrieved from a database into a character array.
Assume a database table:

TABLE strdbase ( ..., strval VARCHAR2(6));
which contains the following strings in the column strval:
"" -- string of length 0 "AB" -- string of length 2 "KING" -- string of length 4 "QUEEN" -- string of length 5 "MILLER" -- string of length 6
In a Pro*C/C++ program, initialize the 5-character host array str with 'X' characters and use it to retrieve all the values in column strval:
char  str[5] = {'X', 'X', 'X', 'X', 'X'};
short str_ind;
...
EXEC SQL SELECT strval INTO :str:str_ind WHERE ... ;
with the following results for the array, str, and the indicator variable, str_ind, as CHAR_MAP is set to VARCHAR2, CHARF, CHARZ and STRING:
strval =     ""          "AB"        "KING"      "QUEEN"     "MILLER"
---------------------------------------------------------------------
VARCHAR2  "     " -1  "AB   "  0  "KING "  0  "QUEEN"  0  "MILLE"  6
CHARF     "XXXXX" -1  "AB   "  0  "KING "  0  "QUEEN"  0  "MILLE"  6
CHARZ     "    0" -1  "AB  0"  0  "KING0"  0  "QUEE0"  5  "MILL0"  6
STRING    "0XXXX" -1  "AB0XX"  0  "KING0"  0  "QUEE0"  5  "MILL0"  6
where 0 stands for the null character, '\0'.
On output, the DBMS and CHAR_MAP options determine the format that a host variable character array will have in your program. When CHAR_MAP=VARCHAR2, host variable character arrays are blank-padded up to the length of the array, but never null-terminated. When DBMS=V7 or V8 (or CHAR_MAP=CHARZ), character arrays are blank-padded, then null-terminated in the final position in the array.
Consider the following example of character output:
CREATE TABLE test_char (C_col CHAR(10), V_col VARCHAR2(10));
INSERT INTO test_char VALUES ('MILLER', 'KING');
A precompiler program to select from this table contains the following embedded SQL:
...
char name1[10];
char name2[10];
...
EXEC SQL SELECT C_col, V_col INTO :name1, :name2 FROM test_char;
If you precompile the program with CHAR_MAP=VARCHAR2, name1 will contain:
"MILLER####"
that is, the name "MILLER" followed by 4 blanks, with no null-termination. (If name1 had been declared with a size of 15, there would be 9 blanks following the name.)
name2 will contain:
"KING######" /* 6 trailing blanks */
If you precompile the program with DBMS=V7 or V8, name1 will contain:
"MILLER###\0" /* 3 trailing blanks, then a null-terminator */
that is, a string containing the name, blank-padded to the length of the column, followed by a null terminator. name2 will contain:
"KING#####\0"
In summary, if CHAR_MAP=VARCHAR2, the output from either a CHARACTER column or a VARCHAR2 column is blank-padded to the length of the host variable array. If DBMS=V7 or V8, the output string is always null-terminated.
The DBMS and CHAR_MAP options do not affect the way character data are output to a pointer host variable.
When you output data to a character pointer host variable, the pointer must point to a buffer large enough to hold the output from the table, plus one extra byte to hold a null terminator.
The precompiler runtime environment calls strlen() to determine the size of the output buffer, so make sure that the buffer does not contain any embedded nulls ('\0'). Fill allocated buffers with some value other than '\0', then null-terminate the buffer, before fetching the data.
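A small sketch of the recommended buffer preparation (make_out_buffer is a hypothetical helper, assuming size is at least 1):

```c
#include <string.h>
#include <stdlib.h>

/* Prepare a malloc'ed output buffer the way the text recommends:
   fill it with a non-null byte, then null-terminate, so a strlen()
   on the buffer sees the full usable length (size - 1). */
char *make_out_buffer(size_t size)
{
    char *buf = malloc(size);
    if (buf == NULL) return NULL;
    memset(buf, 'X', size - 1);   /* any value other than '\0' works */
    buf[size - 1] = '\0';
    return buf;
}
```

strlen() on the returned buffer yields size - 1, which is exactly the capacity the runtime should see.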
The following code fragment uses the columns and table defined in the previous section, and shows how to declare and SELECT into character pointer host variables:
...
char *p_name1;
char *p_name2;
...
p_name1 = (char *) malloc(11);
p_name2 = (char *) malloc(11);
strcpy(p_name1, "          ");
strcpy(p_name2, "0123456789");
EXEC SQL SELECT C_col, V_col INTO :p_name1, :p_name2 FROM test_char;
When the SELECT statement mentioned earlier is executed with any DBMS or CHAR_MAP setting, the value fetched is:
"MILLER####\0"    /* 4 trailing blanks and a null terminator */
"KING######\0"    /* 6 blanks and null */
The following example shows how VARCHAR host variables are declared:
VARCHAR emp_name1[10];    /* VARCHAR variable */
VARCHAR *emp_name2;       /* pointer to VARCHAR */
When you use a VARCHAR variable as an input host variable, your program need only place the desired string in the array member of the expanded VARCHAR declaration (emp_name1.arr in our example) and set the length member (emp_name1.len). There is no need to blank-pad the array. Exactly emp_name1.len characters are sent to Oracle, counting any blanks and nulls. In the following example, you set emp_name1.len to 8:
strcpy((char *)emp_name1.arr, "VAN HORN");
emp_name1.len = strlen((char *)emp_name1.arr);
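The expanded VARCHAR declaration can be sketched as an ordinary C struct. The layout below is a hand-written stand-in for the precompiler-generated code, so the member types are illustrative:

```c
#include <string.h>

/* Hand-written equivalent of the struct the precompiler generates
   for "VARCHAR emp_name1[10]". */
struct emp_name_t {
    unsigned short len;        /* current length of the value */
    unsigned char  arr[10];    /* string data, not null-terminated */
};

/* Set a VARCHAR from a C string: copy the bytes and set .len. */
void varchar_set(struct emp_name_t *v, const char *s)
{
    size_t n = strlen(s);
    if (n > sizeof(v->arr)) n = sizeof(v->arr);
    memcpy(v->arr, s, n);      /* no blank-padding is needed */
    v->len = (unsigned short)n;
}
```

Exactly .len bytes are significant; nothing in the layout marks the end of the string except that count.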
When you use a pointer to a VARCHAR as an input host variable, you must allocate enough memory for the expanded VARCHAR declaration. Then, you must place the desired string in the array member and set the length member, as shown in the following example:
emp_name2 = malloc(sizeof(short) + 10);    /* len + arr */
strcpy((char *)emp_name2->arr, "MILLER");
emp_name2->len = strlen((char *)emp_name2->arr);
Or, to make emp_name2 point to an existing VARCHAR (emp_name1 in this case), you could code the assignment
emp_name2 = &emp_name1;
then use the VARCHAR pointer in the usual way, as in
EXEC SQL INSERT INTO EMP (EMPNO, ENAME, DEPTNO) VALUES (:emp_number, :emp_name2, :dept_number);
When you use a VARCHAR variable as an output host variable, the program interface sets the length member but does not null-terminate the array member. As with character arrays, your program can null-terminate the arr member of a VARCHAR variable before passing it to a function such as printf() or strlen(). An example follows:
emp_name1.arr[emp_name1.len] = '\0';
printf("%s", emp_name1.arr);
Or, you can use the length member to limit the printing of the string, as in:
printf("%.*s", emp_name1.len, emp_name1.arr);
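The same length-limited formatting works with snprintf() when you need a C string rather than terminal output (varchar_to_cstr is a hypothetical helper, not part of SQLLIB):

```c
#include <stdio.h>
#include <string.h>

/* Format a non-null-terminated (len, arr) pair into a C string,
   using the "%.*s" precision trick from the text.  The precision
   keeps snprintf from reading past len bytes of arr. */
void varchar_to_cstr(char *dst, size_t dstsize,
                     const unsigned char *arr, unsigned short len)
{
    snprintf(dst, dstsize, "%.*s", (int)len, (const char *)arr);
}
```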
An advantage of VARCHAR variables over character arrays is that the length of the value returned by Oracle is available immediately. With character arrays, you might need to strip the trailing blanks yourself to get the actual length of the character string.
When you use a pointer to a VARCHAR as an output host variable, the program interface determines the variable's maximum length by checking the length member (emp_name2->len in our example). So, your program must set this member before every fetch. The fetch then sets the length member to the actual number of characters returned, as the following example shows:
emp_name2->len = 10;    /* Set maximum length of buffer. */
EXEC SQL SELECT ENAME INTO :emp_name2 FROM emp WHERE EMPNO = 7934;
printf("%d characters returned to emp_name2", emp_name2->len);
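The set-length-before-fetch protocol can be sketched with a simulated fetch. simulated_fetch stands in for the real program interface and is not an Oracle function:

```c
#include <string.h>

struct vc10 { unsigned short len; unsigned char arr[10]; };

/* The caller sets .len to the buffer maximum before the fetch;
   the fetch resets .len to the number of characters returned. */
void simulated_fetch(struct vc10 *out, const char *value)
{
    size_t n = strlen(value);
    if (n > out->len) n = out->len;      /* .len is the max on input */
    memcpy(out->arr, value, n);
    out->len = (unsigned short)n;        /* .len is actual on output */
}
```

Forgetting to reset .len before the next fetch would make the previous result's length act as the new maximum, which is why the text says to set it before every fetch.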
Pro*C/C++ allows fixed-width Unicode data (character set Unicode Standard Version 3.0, known simply as UCS-16) in host char variables. UCS-16 uses 2 bytes for each character, so it is an unsigned 2-byte datatype. SQL statement text in UCS-16 is not supported yet.
In the following example code, a host variable employee, of the Unicode type utext, is declared to be 20 Unicode characters long. A table emp is created containing the column ename, which is 60 bytes long, so that the column can hold 20 characters even with database character sets for Asian languages, where a multibyte character can occupy up to three bytes.
At run time, the datatype code of every host variable used in a SQL statement is passed to Oracle. Oracle uses the codes to convert between internal and external datatypes.
Before assigning a SELECTed column (or pseudocolumn) value to an output host variable, Oracle must convert the internal datatype of the source column to the datatype of the host variable. Likewise, before assigning or comparing the value of an input host variable to a column, Oracle must convert the external datatype of the host variable to the internal datatype of the target column.
Conversions between internal and external datatypes follow the usual data conversion rules. For example, you can convert a CHAR value of "1234" to a C short value. You cannot convert a CHAR value of "65543" (number too large) or "10F" (number not decimal) to a C short value. Likewise, you cannot convert a char[n] value that contains any alphabetic characters to a NUMBER value.
Datatype equivalencing lets you control the way Oracle interprets input data, and the way Oracle formats output data. It provides the ability to override the default external datatypes that the precompiler assigns. On a variable-by-variable basis, you can map (or make equivalent) supported C host variable datatypes to Oracle external datatypes. You can also map user-defined datatypes to Oracle external datatypes.
By default, the Pro*C/C++ Precompiler assigns a specific external datatype to every host variable.
Table 5-2 lists the default assignments:
With the VAR statement, you can override the default assignments by equivalencing host variables to Oracle external datatypes. The syntax you use is
EXEC SQL VAR host_variable IS type_name [ (length) ];
where host_variable is an input or output host variable (or host array) declared earlier, type_name is the name of a valid external datatype, and length is an integer literal specifying a valid length in bytes.
Host variable equivalencing is useful in several ways. For example, suppose you want to SELECT employee names from the EMP table, then pass them to a routine that expects null-terminated strings. You need not explicitly null-terminate the names. Simply equivalence a host variable to the STRING external datatype, as follows:
...
char emp_name[11];
EXEC SQL VAR emp_name IS STRING(11);
The length of the ENAME column in the EMP table is 10 characters, so you allot the new emp_name 11 characters to accommodate the null terminator. When you SELECT a value from the ENAME column into emp_name, the program interface null-terminates the value for you.
You can use any external datatype except NUMBER (use VARNUM instead, for example).
You can also map (or make equivalent) user-defined datatypes to Oracle external datatypes. First, define a new datatype structured like the external datatype that suits your needs. Then, map your new datatype to the external datatype using the TYPE statement.
With the TYPE statement, you can assign an Oracle external datatype to a whole class of host variables. The syntax you use is:
EXEC SQL TYPE user_type IS type_name [ (length) ] [REFERENCE];
Suppose you need a variable-length string datatype to hold graphics characters. First, declare a struct with a short length component followed by a 4000-byte data component. Second, use typedef to define a new datatype based on the struct. Then, equivalence your new user-defined datatype to the VARRAW external datatype, as shown in the following example:
struct screen
{
    short len;
    char buff[4000];
};
typedef struct screen graphics;

EXEC SQL TYPE graphics IS VARRAW(4000);

graphics crt;    /* host variable of type graphics */
...
You specify a length of 4000 bytes for the new graphics type because that is the maximum length of the data component in your struct. The precompiler allows for the len component (and any padding) when it sends the length to the Oracle server.
You can declare a user-defined type to be a pointer, either explicitly, as a pointer to a scalar or struct type, or implicitly, as an array, and use this type in an EXEC SQL TYPE statement. In this case, you must use the REFERENCE clause at the end of the statement, as shown in the following example:
typedef unsigned char *my_raw;

EXEC SQL TYPE my_raw IS VARRAW(4000) REFERENCE;

my_raw graphics_buffer;
...
graphics_buffer = (my_raw) malloc(4004);
In this example, you allocated additional memory over the type length (4000). This is necessary because the precompiler also returns the length (the size of a short), and can add padding after the length due to word alignment restrictions on your system. If you do not know the alignment practices on your system, make sure to allocate sufficient extra bytes for the length and padding (9 should usually be sufficient).
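offsetof() gives a portable way to account for the length member plus any alignment padding the compiler inserts before the data component. The struct below repeats the layout from the earlier example:

```c
#include <stddef.h>

/* The VARRAW-style layout from the text: a short length followed
   by the data bytes. */
struct screen {
    short len;
    char  buff[4000];
};

/* How many bytes precede the data component?  offsetof() accounts
   for both the short and any alignment padding, so it sizes the
   "extra" allocation the text describes without guessing. */
size_t varraw_header_size(void)
{
    return offsetof(struct screen, buff);
}
```

Allocating varraw_header_size() + 4000 bytes is therefore always sufficient for the buffer in the REFERENCE example, on any alignment regime.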
CHARF is a fixed-length character string. You can use this datatype in VAR and TYPE statements to equivalence C datatypes to the fixed-length SQL standard datatype CHAR, regardless of the setting of the DBMS or CHAR_MAP option.
When DBMS=V7 or V8, specifying the external datatype CHARACTER in a VAR or TYPE statement equivalences the C datatype to the fixed-length datatype CHAR (datatype code 96). However, when CHAR_MAP=VARCHAR2, the C datatype is equivalenced to the variable-length datatype VARCHAR2 (code 1).
Now, you can always equivalence C datatypes to the fixed-length SQL standard type CHARACTER by using the CHARF datatype in the VAR or TYPE statement. When you use CHARF, the equivalence is always made to the fixed-length character type, regardless of the setting of the DBMS or CHAR_MAP option.
You can code an EXEC SQL VAR ... or EXEC SQL TYPE ... statement anywhere in your program. These statements are treated as executable statements that change the datatype of any variable affected by them from the point that the TYPE or VAR statement was made to the end of the scope of the variable. If you precompile with MODE=ANSI, you must use Declare Sections. In this case, the TYPE or VAR statement must be in a Declare Section. See also "Large Objects (LOBs)".
Please read the introductory comments for an explanation of the program's purpose.
/***************************************************************
  sample4.pc

  This program demonstrates the use of type equivalencing using the
  LONG VARRAW external datatype.  In order to provide a useful
  example that is portable across different systems, the program
  inserts binary files into and retrieves them from the database.

  For example, suppose you have a file called 'hello' in the current
  directory.  You can create this file by compiling the following
  source code:

      #include <stdio.h>
      int main()
      {
          printf("Hello World!\n");
      }

  When this program is run, we get:

      $ hello
      Hello World!

  Here is some sample output from a run of sample4:

      $ sample4
      Connected.
      Do you want to create (or re-create) the EXECUTABLES table (y/n)? y
      EXECUTABLES table successfully dropped. Now creating new table...
      EXECUTABLES table created.
      ...
      --------------------  --------------
      Total Executables: 0
      ...
      Enter i, r, l, or q: i
      Enter the key under which you will insert this executable: hello
      Enter the filename to insert under key 'hello'. If the file is
      not in the current directory, enter the full path: hello
      Inserting file 'hello' under key 'hello'... Inserted.
      ...
      --------------------  --------------
      hello                 5508
      Total Executables: 1
      ...
      Enter i, r, l, or q: r
      Enter the key for the executable you wish to retrieve: hello
      Enter the file to write the executable stored under key hello
      into. If you don't want the file in the current directory,
      enter the full path: h1
      Retrieving executable stored under key 'hello' to file 'h1'...
      Retrieved.
 ***************************************************************/

...

    ok = 1;

    /* Connect to the database. */
    do_connect();

    printf("Do you want to create (or re-create) the EXECUTABLES table (y/n)? ");
    gets(reply);
    if ((reply[0] == 'y') || (reply[0] == 'Y'))
        create_table();

    /* Print the menu, and read in the user's selection. */
    print_menu();
    gets(reply);

    while (ok)
    {
        switch (reply[0])
        {
          case 'I': case 'i':
            /* User selected insert - get the key and file name. */
            printf("Enter the key under which you will insert this executable: ");
            ...
            break;
          case 'Q': case 'q':
            /* User selected quit - just end the loop. */
            ok = 0;
            break;
          default:
            /* Invalid selection. */
            printf("Invalid selection.\n");
            break;
        }
        if (ok)
        {
            /* Print the menu again. */
            print_menu();
            gets(reply);
        }
    }
    EXEC SQL COMMIT WORK RELEASE;
}

/* Connect to the database. */
void do_connect()
{
    /* Note this declaration: uid is a char * pointer, so Oracle
       will do a strlen() on it at runtime to determine the length. */
    char *uid = "scott/tiger";

    EXEC SQL WHENEVER SQLERROR DO sql_error("do_connect():CONNECT");
    EXEC SQL CONNECT :uid;

    printf("Connected.\n");
}

/* Creates the executables table. */
void create_table()
{
    /* We are going to check for errors ourselves for this statement. */
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL DROP TABLE EXECUTABLES;
    if (sqlca.sqlcode == 0)
    {
        printf("EXECUTABLES table successfully dropped. ");
        printf("Now creating new table...\n");
    }
    else if (sqlca.sqlcode == NON_EXISTENT)
    {
        printf("EXECUTABLES table does not exist. ");
        printf("Now creating new table...\n");
    }
    else
        sql_error("create_table()");

    /* Reset error handler. */
    EXEC SQL WHENEVER SQLERROR DO sql_error("create_table()");
    ...
}

...
    size_t message_length;

    /* Turn off the call to sql_error() to avoid a possible infinite loop. */
    EXEC SQL WHENEVER SQLERROR CONTINUE;

    printf("\nOracle error while executing %s!\n", routine);

    /* Use sqlglm() to get the full text of the error message. */
    buffer_size = sizeof(message_buffer);
    sqlglm(message_buffer, &buffer_size, &message_length);
    printf("%.*s\n", message_length, message_buffer);

    EXEC SQL ROLLBACK WORK RELEASE;
    exit(1);
}

/* Opens the binary file identified by 'filename' for writing, and
   copies the contents of ... */
...
    printf("... : %d\n", sqlca.sqlerrd[2]);
}

/* Prints the menu selections. */
void print_menu()
{
    printf("\nSample 4 Menu.  Would you like to:\n");
    printf("(I)nsert a new executable into the database\n");
    printf("(R)etrieve an executable from the database\n");
    printf("(L)ist the executables stored in the database\n");
    printf("(D)elete an executable from the database\n");
    printf("(Q)uit the program\n\n");
    printf("Enter i, r, l, or q: ");
}
Pro*C/C++ supports most C preprocessor directives. For example, you can define constants and macros with the #define directive, and include files such as sqlca.h with the #include directive.
The Pro*C/C++ preprocessor recognizes most C preprocessor commands, and effectively performs the required macro substitutions, file inclusions, and conditional source text inclusions or exclusions. The Pro*C/C++ preprocessor uses the values obtained from preprocessing to alter the source output text (the generated .c output file).
An example should clarify this point. Consider the following program fragment:
#include "my_header.h"
...
VARCHAR name[VC_LEN];         /* a Pro*C-supplied datatype */
char another_name[VC_LEN];    /* a pure C datatype */
...
Suppose the file my_header.h in the current directory contains, among other things, the line

#define VC_LEN 20

The precompiler reads the file my_header.h, and uses the defined value of VC_LEN (20) to declare the structure of name as VARCHAR[20].
char is a native C type, so the precompiler does not substitute 20 in the declaration of another_name[VC_LEN]. This does not matter, since the precompiler does not need to process declarations of C datatypes, even when they are used as host variables. It is left to the C compiler's preprocessor to actually include the file my_header.h and perform the substitution of 20 for VC_LEN in the declaration of another_name.
The preprocessor directives that Pro*C/C++ supports are:
Some C preprocessor directives are not used by the Pro*C/C++ preprocessor. Most of these directives are not relevant for the precompiler. For example, #pragma is a directive for the C compiler--the precompiler does not process it. The C preprocessor directives not processed by the precompiler are:
While your C compiler preprocessor may support these directives, Pro*C/C++ does not use them. You can use these directives in your Pro*C/C++ program if your compiler supports them, but only in C or C++ code, not in embedded SQL statements or in declarations of variables using datatypes supplied by the precompiler, such as VARCHAR.
Pro*C/C++ predefines a C preprocessor macro called ORA_PROC that you can use to avoid having the precompiler process unnecessary or irrelevant sections of code. Some applications include large header files, which provide information that is unnecessary when precompiling. By conditionally excluding such header files based on the ORA_PROC macro, the precompiler never reads the file.
The following example uses the ORA_PROC macro to exclude the irrelevant.h file:
#ifndef ORA_PROC
#include <irrelevant.h>
#endif
Because ORA_PROC is defined during precompilation, the irrelevant.h file is never included.
The ORA_PROC macro is available only to C preprocessor directives, such as #ifdef or #ifndef. The EXEC ORACLE conditional statements do not share a namespace with the C preprocessor macros, so the condition in the following example does not see the predefined ORA_PROC macro:
EXEC ORACLE IFNDEF ORA_PROC;
    <section of code to be ignored>
EXEC ORACLE ENDIF;
ORA_PROC, in this case, must be set using either the DEFINE option or an EXEC ORACLE DEFINE statement for this conditional code fragment to work properly.
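The precompile-versus-compile behavior of ORA_PROC can be mimicked with an ordinary macro check. SAW_IRRELEVANT_HEADER below is a stand-in flag rather than a real header:

```c
/* During precompilation, Pro*C/C++ defines ORA_PROC, so the guarded
   header would be skipped; during an ordinary C compilation it is
   not defined, so the header would be read. */
#ifdef ORA_PROC
#define SAW_IRRELEVANT_HEADER 0   /* precompiler pass: excluded */
#else
#define SAW_IRRELEVANT_HEADER 1   /* ordinary cc pass: included */
#endif

int saw_irrelevant_header(void)
{
    return SAW_IRRELEVANT_HEADER;
}
```

Compiled with a normal C compiler (where ORA_PROC is not defined), the function reports that the header would have been read.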
The Pro*C/C++ Precompiler for each system assumes a standard location for header files to be read by the preprocessor, such as sqlca.h, oraca.h, and sqlda.h. For example, on most UNIX systems, the standard location is $ORACLE_HOME/precomp/public. For the default location on your system, see your system-specific Oracle documentation. If header files that you need to include are not in the default location, you must use the INCLUDE= option, on the command line or as an EXEC ORACLE option.
To specify the location of system header files, such as stdio.h or iostream.h, where the location might be different from the one hard-coded into Pro*C/C++, use the SYS_INCLUDE precompiler option.
You can use the #define command to create named constants, and use them in place of "magic numbers" in your source code. You can use #defined constants for declarations that the precompiler requires, such as VARCHAR[const]. For example, instead of code with bugs, such as:
...
VARCHAR emp_name[10];
VARCHAR dept_loc[14];
...
/* much later in the code ... */
f42()
{
    /* did you remember the correct size? */
    VARCHAR new_dept_loc[10];
    ...
}
you can code:
#define ENAME_LEN 10
#define LOCATION_LEN 14

VARCHAR new_emp_name[ENAME_LEN];
...
/* much later in the code ... */
f42()
{
    VARCHAR new_dept_loc[LOCATION_LEN];
    ...
}
You can use preprocessor macros with arguments for objects that the precompiler must process, just as you can for C objects. For example:
#define ENAME_LEN 10
#define LOCATION_LEN 14
#define MAX(A,B) ((A) > (B) ? (A) : (B))
...
f43()
{
    /* need to declare a temporary variable to hold either an
       employee name or a department location */
    VARCHAR name_loc_temp[MAX(ENAME_LEN, LOCATION_LEN)];
    ...
}
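Because MAX expands to a constant expression, the array bound is fixed at translation time. The plain-C sketch below checks the resulting size:

```c
/* The same macro pattern, fully parenthesized so it expands safely
   inside an array-bound expression. */
#define ENAME_LEN    10
#define LOCATION_LEN 14
#define MAX(A,B) ((A) > (B) ? (A) : (B))

/* The compiler (like the precompiler) sees the expanded bound,
   that is, a 14-byte array. */
int name_loc_temp_size(void)
{
    char name_loc_temp[MAX(ENAME_LEN, LOCATION_LEN)];
    return (int)sizeof(name_loc_temp);
}
```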
You can use the #include, #ifdef and #endif preprocessor directives to conditionally include a file that the precompiler requires. For example:
#ifdef ORACLE_MODE
#include <sqlca.h>
#else
long SQLCODE;
#endif
There are restrictions on the use of the #define preprocessor directive in Pro*C/C++. You cannot use the #define directive to create symbolic constants for use in executable SQL statements. The following invalid example demonstrates this:
#define RESEARCH_DEPT 40
...
EXEC SQL SELECT empno, sal
    INTO :emp_number, :salary      /* host arrays */
    FROM emp
    WHERE deptno = RESEARCH_DEPT;  /* INVALID! */
The only declarative SQL statements where you can legally use a #defined macro are TYPE and VAR statements. So, for example, the following uses of a macro are legal in Pro*C/C++:
#define STR_LEN 40
...
typedef char asciiz[STR_LEN];
...
EXEC SQL TYPE asciiz IS STRING(STR_LEN) REFERENCE;
...
EXEC SQL VAR password IS STRING(STR_LEN);
The Pro*C/C++ preprocessor ignores the # and ## operators, so you cannot use them to create tokens that the precompiler must recognize. You can use these operators (if your compiler supports them) in pure C code that the precompiler does not have to process. Using the ## operator, as in the following example, is not valid:
#define MAKE_COL_NAME(A) col ## A
...
EXEC SQL SELECT MAKE_COL_NAME(1), MAKE_COL_NAME(2)
    INTO :x, :y
    FROM table1;
The example is incorrect because the precompiler ignores ##.
Because of the way the Pro*C/C++ preprocessor handles the #include directive, as described in the previous section, you cannot use the #include directive to include files that contain embedded SQL statements. You use #include to include files that contain purely declarative statements and directives; for example, #defines, and declarations of variables and structures required by the precompiler, such as in sqlca.h.
You can include the sqlca.h, oraca.h, and sqlda.h declaration header files in your Pro*C/C++ program using either the C/C++ preprocessor #include command, or the precompiler EXEC SQL INCLUDE command. For example, you use the following statement to include the SQL Communications Area structure (SQLCA) in your program with the EXEC SQL option:
EXEC SQL INCLUDE sqlca;
To include the SQLCA using the C/C++ preprocessor directive, add the following code:
#include <sqlca.h>
When you use the preprocessor #include directive, you must specify the file extension (such as .h).
When you precompile a file that contains a #include directive or an EXEC SQL INCLUDE statement, you have to tell the precompiler the location of all files to be included. You can use the INCLUDE= option, either in the command line, or in the system configuration file, or in the user configuration file.
The default location for standard preprocessor header files, such as sqlca.h, oraca.h, and sqlda.h, is preset in the precompiler. The location varies from system to system. See your system-specific Oracle documentation for the default location on your system.
When you compile the .c output file that Pro*C/C++ generates, you must use the option provided by your compiler and operating system to identify the location of included files.
For example, on most UNIX systems, you can compile the generated C source file using the command
cc -o progname -I$ORACLE_HOME/sqllib/public ... filename.c ...
On VAX/OPENVMS systems, you prepend the include directory path to the value in the logical VAXC$INCLUDE.
When you use an EXEC SQL INCLUDE statement in your program, the precompiler includes the source text in the output (.c) file. Therefore, you can have declarative and executable embedded SQL statements in a file that is included using EXEC SQL INCLUDE.
When you include a file using #include, the precompiler merely reads the file, and keeps track of #defined macros.
If you define macros on the C compiler's command line, you might also have to define these macros on the precompiler command line, depending on the requirements of your application. For example, if you compile with a UNIX command line such as
cc -DDEBUG ...
you should precompile using the DEFINE= option, namely
proc DEFINE=DEBUG ...
The location of all included files that need to be precompiled must be specified on the command line, or in a configuration file.
For example, if you are developing under UNIX, and your application includes files in the directory /home/project42/include, you must specify this directory both on the Pro*C/C++ command line and on the cc command line. You use commands like these:
proc iname=my_app.pc include=/home/project42/include ...
cc -I/home/project42/include ... my_app.c
or you include the appropriate macros in a makefile. For complete information about compiling and linking your Pro*C/C++ program, see your system-specific Oracle documentation.

Pro*C/C++ can also precompile header files. The header file to be precompiled must have a .h extension.
Assume that you have a header file called top.h. Then you can precompile it, specifying that HEADER=hdr:
proc HEADER=hdr INAME=top.h
Pro*C/C++ precompiles the given input file, top.h, and generates a new precompiled header file, top.hdr, in the same directory. The output file, top.hdr, can be moved to a directory that the #include statement will cause to be searched. If the DEFINE or INCLUDE options change, any precompiled header files must be re-created, and the Pro*C/C++ programs that use them re-precompiled.
Some port-specific symbols are predefined for you when the Pro*C/C++ precompiler is installed on your system.

Earlier releases of Pro*C/C++ did not allow the use of a numeric constant declaration in a constant expression. Pro*C/C++ now supports the use of numeric constant declarations anywhere that an ordinary numeric literal or macro is used, provided the macro expands to a numeric literal.
This is used primarily for declaring the sizes of arrays for bind variables to be used in a SQL statement.
In Pro*C/C++, normal C scoping rules are used to locate the declaration of a numeric constant, as the following example shows:
const int g = 30;    /* Global declaration to both function_1() and function_2() */

void function_1()
{
    const int a = 10;    /* Local declaration only to function_1() */
    char x[a];

    exec sql select ename into :x from emp where job = 'PRESIDENT';
}

void function_2()
{
    const int a = 20;    /* Local declaration only to function_2() */
    VARCHAR v[a];

    exec sql select ename into :v from emp where job = 'PRESIDENT';
}

void main()
{
    char m[g];    /* The global g */

    exec sql select ename into :m from emp where job = 'PRESIDENT';
}
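The scoping behavior can be checked in plain C99, where a const-sized array is a variable-length array whose size follows the same scoping rules (the EXEC SQL statements are omitted here):

```c
/* Each function's local const controls the size of its own array;
   the file-scope const is visible to all of them. */
static const int g = 30;   /* global to both functions below */

int local_buf_size_1(void)
{
    const int a = 10;      /* local declaration */
    char x[a];             /* VLA sized by the local constant */
    return (int)sizeof(x);
}

int global_buf_size(void)
{
    char m[g];             /* sized by the global constant */
    return (int)sizeof(m);
}
```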
A numeric constant declaration must use the const qualifier and must be initialized. The following rules must be kept in mind when declaring numeric constants in Pro*C/C++:
Any attempt to use an identifier that does not resolve to a constant declaration with a valid initializer is considered an error.
The following shows examples of what is not permitted and why:
int a;
int b = 10;
volatile c;
volatile d = 10;
const e;
const f = b;

VARCHAR v1[a];    /* No const qualifier, missing initializer */
VARCHAR v2[b];    /* No const qualifier */
VARCHAR v3[c];    /* Not a constant, missing initializer */
VARCHAR v4[d];    /* Not a constant */
VARCHAR v5[e];    /* Missing initializer */
VARCHAR v6[f];    /* Bad initializer: b is not a constant */

SQLLIB also provides functions that you can use, for example, to perform client-side DATE arithmetic, execute navigational operations on objects, and so on. These SQLLIB functions are described later. The following functions for cooperating with OCI are declared in the header file sql2oci.h:
SQLEnvGet(), to return a pointer to an OCI environment handle associated with a given SQLLIB runtime context. Used for both single and shared server environments.
SQLSvcCtxGet(), to return an OCI service context handle for a Pro*C/C++ database connection. Used for both single and shared server environments.
To embed OCI release 7 calls in your Pro*C/C++ program, take the following steps:

1. Declare an OCI Logon Data Area (LDA) in your program; the LDA structure is described with the OCI header file oci.h. For details, see the Oracle Call Interface Programmer's Guide for Release 7.
2. Connect to Oracle using EXEC SQL CONNECT, not the OCI orlon() or onblon() calls.
3. Call the SQLLIB function sqllda() to set up the LDA.
That way, the Pro*C/C++ Precompiler and the OCI "know" that they are working together. However, there is no sharing of Oracle cursors.
You need not worry about declaring the OCI Host Data Area (HDA) because the Oracle runtime library manages connections and maintains the HDA for you.

If you connect to multiple databases, you must call sqllda() with a different LDA immediately after each CONNECT. In the following example, you connect to two nondefault databases:

/* connect to first nondefault database */
EXEC SQL CONNECT :username IDENTIFIED BY :password
    AT DB_NAME1 USING :db_string1;
/* set up first LDA */
sqllda(&lda1);

/* connect to second nondefault database */
...

The AT clause names the nondefault nodes, so that later SQL statements can refer to the databases by name.
The names of SQLLIB functions are listed in Table 5-3. You can use these SQLLIB functions for both threaded and nonthreaded applications. Previously, for example,
sqlglm() was documented as the nonthreaded or default context version of this function, while
sqlglmt() was the threaded or nondefault context version, with context as the first argument. The names
sqlglm() and
sqlglmt() are still available. The new function
SQLErrorGetText() requires the same arguments as sqlglmt(). For nonthreaded or default context applications, pass the defined constant SQL_SINGLE_RCTX as the context. Table 5-3 lists all the SQLLIB public functions and their corresponding syntax. Cross-references to the nonthreaded or default-context usages are provided to help you find more complete descriptions.

Table 5-3 SQLLIB Public Functions -- New Names
The following OCI calls cannot be issued by an X/Open application: OCOM, OCON, OCOF, ONBLON, ORLON, OLON, OLOGOF.
For a discussion of how to use OCI Release 8 calls in Pro*C/C++, see also "Interface to OCI Release 8".
To get XA functionality, you must link the XA library to your X/Open application object modules. For instructions, see your system-specific Oracle documentation. | http://docs.oracle.com/cd/B10501_01/appdev.920/a97269/pc_05adv.htm | CC-MAIN-2014-15 | refinedweb | 5,726 | 53.61 |
I'd love to get Derby out - it adds a lot to the footprint.
I can help with the mapping of data structures to LDAP. As Emmanuel said
the best way is to embellish the page we already have in confluence perhaps
with a section on each data structure and we can write out the quartz schema
for it.
Looking initially at the DB schema you provided Kiran I can see this will be
pretty easy to do. For starters we will have a quartzTrigger ABSTRACT
objectClass with a quartzTriggerName attribute and a quartzTriggerGroup
attribute. This can be extended for quartzCronTrigger, quartzBlobTrigger,
quartzSimpleTrigger STRUCTURAL objectClasses. Under a trigger we can
subordinate the jobs the trigger has fired off. This is oversimplified I
know but these are the best steps to take to just knock this out.
It will not be that hard to do. Once we know the schema and namespace
organization it's pretty easy to do and I'm not very worried about the
schema design at all.
Alex
On Sat, Sep 27, 2008 at 12:05 PM, Emmanuel Lecharny <elecharny@gmail.com>wrote:
> Hi Kiran,
>
> first, I suggest that you create a page on the wiki where you put your
> thoughts about Quartz integration, as it will be more easy to look at the
> graphics. (when this page is modified, the dev list is notified)
>
> Kiran Ayyagari wrote:
>
>>
>> Here are some ideas that I have atm
>>
>> 1. Serialize the in-memory data structures used by quartz and reload them
>> after server startup.
>> (may require a separate partition to store)
>>
> This is clearly an option. Now, we have to express Quartz needed structure
> in a hierarchical way.
>
>>
>> 2. Use an embeddable RDBMS. In this case quartz can directly store data
>> using its inbuilt JdbcStore.
>> (This may be the last and least preferable option )
>>
> Yep. I would not favor this approach either, but we have to remain open
> minded ... (all in all, we already embed Derby in ADS to manage replication
> :)
>
>
> --
> --
> cordialement, regards,
> Emmanuel Lécharny
>
> directory.apache.org
>
>
> | http://mail-archives.apache.org/mod_mbox/directory-dev/200809.mbox/%3Ca32f6b020809270954j201fd9cfj33330d5cc751c207@mail.gmail.com%3E | CC-MAIN-2017-09 | refinedweb | 339 | 72.46 |
#include <snomasks.h>
Snomask manager handles routing of SNOMASK (usermode +s) messages to opers. Modules and the core can enable and disable snomask characters. If they do, then sending snomasks using these characters becomes possible.
Create a new SnomaskManager
Enable a snomask.
Called once per 5 seconds from the mainloop, this flushes any cached snotices. The caching works as follows: calls to WriteToSnoMask write to a cache; if the call is the same as the previous call, a count is incremented. If it is different, the previous message is sent normally via NOTICE (with the count, if > 1) and the new message is cached. This collapses repeats when the number of identical notices is not particularly significant, while still keeping notices going out.
Check whether a given character is an enabled (initialized) snomask. Valid snomask chars are lower- or uppercase letters and have a description. Snomasks are initialized with EnableSnomask().
Write to all users with a given snomask (sent globally)
Write to all users with a given snomask (sent globally)
Write to all users with a given snomask (local server only)
Write to all users with a given snomask (local server only) | http://www.inspircd.org/api/3.0/class_snomask_manager.html | CC-MAIN-2017-43 | refinedweb | 202 | 62.88 |
Platform
isPlatform
The `isPlatform` method can be used to test if your app is running on a certain platform:

    import { isPlatform } from '@ionic/vue';

    isPlatform('ios'); // returns true when running on an iOS device
Depending on the platform the user is on, `isPlatform` can return `true` for more than one platform name, since each platform value belongs to a hierarchy of platforms.
getPlatforms
The `getPlatforms` method can be used to determine which platforms your app is currently running on.

    import { getPlatforms } from '@ionic/vue';

    getPlatforms(); // returns ["iphone", "ios", "mobile", "mobileweb"] from an iPhone
Depending on what device you are on, `getPlatforms` can return multiple values. Each possible value is a hierarchy of platforms. For example, on an iPhone, it would return mobile, ios, and iphone.
Platforms
Below is a table listing all the possible platform values along with corresponding descriptions. | https://ionicframework.com/jp/docs/zh/vue/platform | CC-MAIN-2021-43 | refinedweb | 121 | 51.48 |
Mega Man Legends 2
Review by KeyBlade999
"The best seventy-five dollars I've ever spent."
~ Review in Short ~
Gameplay: An excellent, entertaining mix of classic MegaMan elements and those of RPGs and the first MegaMan Legends.
Story: Somewhat of a sequel to MegaMan Legends with the RPG stereotype of saving the world.
Graphics: Pretty good for the PlayStation era, though sub-par by today's standards. The PS2's texture smoothing helps.
Sound and Music: Fairly varied overall, with some sprinklings of tunes from the original MegaMan Legends and remixes.
Play Time: About one hour for a speed-play, though more like fifteen to twenty for a basic playthrough, and about forty for a comprehensive one.
Replayability: Quite high, as this game is exceptionally entertaining and comes with difficulty options for those needing challenge.
Recommendation: It's rather hard to say. Frankly, this game is one of the best I have ever had the opportunity to play - this much goes without saying. However, nowadays, getting a good copy of the game can run you over one hundred dollars on Amazon. Whether it is worth it or not is up to you. I, however, spent something like seventy-five dollars on it and do not regret this decision in the slightest infinitesimal way. My recommendation would be to buy it.
~ Review in Long ~
It has been around twenty to thirty years since Capcom began their MegaMan series on the NES. In that same amount of time, we've seen hundreds, thousands of other games get released. Few people now actually own (or at least use) their NES consoles. Nowadays, at the time of writing, we're waiting for the Nintendo Wii U from Nintendo, the next PlayStation (after the PS3) from Sony, and the next Xbox (after the 360) from Microsoft. NES's are now aged.
In the meantime, we've also seen a large number of MegaMan games come from Capcom. MegaMan Legends 2 is but one of several dozen MegaMan games. The Blue Bomber ran, all guns blazing, into this entry in 2000 for the PlayStation. MegaMan Legends as a series is one of the few RPG-style MegaMan games you'll ever see... and perhaps the best.
Is there one particular reason why? No, there are very many!
GAME HISTORY:
MegaMan as a series began on the NES console a few decades ago, and was developed by Capcom. They quickly released new MegaMan games in the series almost year after year, going to MegaMan 8 rather rapidly. During this time, various other MegaMan games were released, such as a large portion of the brutal MegaMan X series and the MegaMan Zero series. Other games came after the MegaMan Legends series, such as MegaMan 9, some time after MegaMan 8. The MegaMan Battle Network series became rather popular as well.
However, we're not here to talk about those games. MegaMan Legends, as a series, was a hallmark for MegaMan overall, as it launched the Blue Bomber into two new fields for him - RPGs and 3D. The original came out for the PlayStation in the late 1990s. MegaMan Legends is regarded by some to be a flop; by others, such as myself, a treasure. MegaMan Legends 2 perfected on the formulas presented throughout MegaMan and was released in 2000 for the PlayStation as well.
For some time, the series remained dormant. It was early in the 2010s that Capcom openly told everyone that they were, finally, planning to make a MegaMan Legends 3, for the Nintendo 3DS no less! Sadly, however, they soon cancelled the project due to a supposed lack of support, and little news has shown that the series will be revived any time soon.
GAMEPLAY: 10/10.
General Progression:
Progression through the game is much like you'd expect in most role-playing games, and it really takes cues from the first MegaMan Legends. You pretty much have a linear path to go through in the game - you'll go from dungeon to dungeon with a few intermediary sequences. You'll fight a fair few bosses.
It pretty much can be boiled right down to that. There isn't a whole lot of non-linearity in this game - you have a few sidequests and a few extra dungeons, but that's about it. You don't have to progress forward in this game until you want to, though, like you'd expect.
Battle System:
The battles in this game will all be in real time, much like you'd expect having playing MegaMan Legends before, or perhaps another game like Kingdom Hearts. In fact, the battles themselves can be most readily connected to Kingdom Hearts.
During the battles, it can be you (MegaMan) versus a large number of enemies - generally two or so, but I have seen up into the dozens of enemies. You have two basic weapons - your Buster Gun and a Special Weapon. The latter can be a number of various things, from a missile launcher to a mine to a laser. You also have the Lifter, which you can use to throw enemies and other objects around.
The enemies have a huge variety of attacks, from the simple tackle to the complexity of increasing the local gravity and tossing explosives at you. Thusly, the battles all offer some sort of challenge to you; even on the easiest difficulty, it will be hard to avoid being hurt pretty badly at some point. The enemies themselves don't actually have a lot of weak points you need to abuse like you'd expect out of an RPG - given that you lack elemental differentiation in your weaponry, this should be expected. However, that doesn't make the battles any less entertaining.
Unlike the typical RPG, though, battles don't earn you a type of "EXP." or "experience points". Rather, that is all done financially through shops and inventions. So, while it is possible to grind to an insurmountably high level of power early in the game, that level of power will be surpassed later on due to the staggered nature of the availability of items.
The Dungeons:
Those who played the first MegaMan Legends probably recall the stark simplicity of the dungeons. Oh, sure, the layouts were complex enough, but the in-game map solved that problem rather quickly. In the end, the first MegaMan Legends was a game in which you mainly just went through and shot at enemies. Very rarely, if ever, were you actually given a true test of your mental faculties over your ability to press the Square Button really fast.
Don't expect anything like that here, though. (Of course, the layouts are actually even more complex!) MegaMan Legends 2 has some rather annoying puzzles with the dungeons. You'll get started off pretty easily so you can get the hang of it, but, about a third of the way through the game, you'll be looking at needing to solve puzzles and keeping that health gauge up. A few of these are more like what you'd expect out of a puzzle game given a certain riddle, but a large number of them are based moreso on the actual physics of the game. For example, you'll be using buoyancy in water to create stepping stones to an upper area so you can continue, or luring Reaverbots into traps.
There are also a variety of elemental themes for the main dungeons of the game. Pretty much every one of these takes a cue from the classic MegaMan series. For example, you have a dungeon absolutely filled with lava, and another dungeon in which you are sometimes underwater. This can make for an interesting change versus MegaMan Legends, for you now have to deal with new types of physics, and new types of dangers. After all, lava burns, doesn't it? Isn't ice slippery?
You will be made open to new equipment to deal with these new things, which is perhaps my favorite thing about this game. The dungeons are only as hard as you make them. You can choose to grind for hours and hours, getting the perfect equipment and whatnot for a dungeon ... or you can decide to rush headlong and enjoy the challenge of going into a dungeon blindly.
Other Changes from the Original:
Perhaps the most spectacular is the introduction of various "statuses" MegaMan can obtain, which are used in lieu of the shield from MegaMan Legends. These statuses are quite well what you'd expect from a game such as Pokemon. You can have a burned status (in which you lose health over time), a paralyzed status (in which you lose speed, mobility, and jumping height), a Buster Leak (in which you pretty much can't attack), and a "freezer burn" status (basically being burned, and often paralyzed, simultaneously). These make the battle environment insurmountably more dangerous, as the lack of them in MegaMan Legends led to the idea that every fight can just be endured through. Granted, you can still do that here, but it is harder.
Like MegaMan Legends, this game does implement a fairly in-depth invention system. This system basically requires you to find random stuff and give it to Roll, MegaMan's dear friend and Spotter, so she can invent something. In this game, the system is more generalized into only making Special Weapons rather than a varying number of special, other items. She also can upgrade these weapons, but at a much higher and harder-to-obtain cost than in MegaMan Legends. As a bit of a comparison, the most expensive weapon in MegaMan Legends 2 is about 10.4 times more costly than the most expensive one in MegaMan Legends.
To further elaborate on the difficulty in obtaining such an exorbitant amount of money (in this game, Zenny), you must know of the next change. The refractors in this game, which can be picked up for Zenny, have a few additional types, but barely exceed the base values for the original, and enemies don't really drop much more than they did in the first game. So, in effect, you can say that the money systems are, at the core level, the same.
Next up on our list would have to be the difficulty system. To a point, there is no difficulty system. Rather, the game's difficulty is based on MegaMan's Digger's License, which shows just how good of a Digger he is. As his rank goes up, so does the game's difficulty. This generally results in increased enemy defenses, but also raises the amount of refractors the enemies drop. However, there is a bit of a difficulty system, in which you'll be allowed to start with a certain License rank. In turn, this can result in a super-easy or super-hard game by affecting what equipment MegaMan has, the refractor-Zenny conversion factors, enemy defenses, and more. Diggers' Licenses overall will also determine what extra dungeons you can enter.
Finally, equipment. My memory relates this back to the days of MegaMan X. In the original MegaMan Legends, you basically could choose your Buster Gun, Special Weapon, and whether to choose to wear a helmet, armor, and Jet Skates. In this game, though, you'll have the first options on this list. Additionally, you'll have a much larger variety of what to wear in helmets, shoes, and armors. Each of these has a number of effects - for example, your damage reduction can be different, or you may be able to avoid certain statuses. It is an integral part of the strategy in this game, now, to develop your equipment rather than turn it "on" or "off".
Other than those main five examples, the majority of the game is the same as MegaMan Legends except for the obvious (new items, new weapons, new enemies, and so on).
Sidequests:
What is an RPG without a few sidequests? MegaMan Legends did not really take this to much of a deep level like other games of its era (Final Fantasy VII, VIII, and IX), but they were okay. The sidequests for MegaMan Legends 2 are a few more in number, and often a lot deeper.
The majority of them actually have a common theme of MegaMan's "morality", so to speak. In other words, doing bad or good will affect some of the sidequests. For example, doing bad will close the doors of every single sidequest, except one, for which it is required. Sometimes, you'll be punished or rewarded for these quests by, for example, getting higher or lower shop prices or Special Weapon development costs.
The other quests are a bit more fun, and, depending on how you look at it, more rewarding. There are three main extra dungeons in this game (plus a few extremely minor ones); however, your access to these depends on your Digger's License ranking, which, in turn, can mean you need to do the tests for the licenses. Yes, a lot of the sidequests are intertwined together in such a manner. Other than those dungeons, though, you won't see much. There is a superb (and hard) quiz game you can play, and the races from MegaMan Legends return (in a more in-depth style), but little else is there.
STORY: 9/10.
This game takes place approximately one year after MegaMan's adventures on Kattlelox Island in the first MegaMan Legends...
For a while now, Bluecher, a friend of Professor Barrell's, has been thinking of the Forbidden Island that he once landed on thirty years before. He has amassed his fortune into making a ship to go to this supposedly impenetrable island. Professor Barrell, against his better judgment, also decides to go with him, likely to investigate the loss of his daughter - Roll's mother, Matilda - and his son-in-law.
Meanwhile, the pirate gang known as the Bonnes, as well as their rivals in Glyde's gang, have caught wind of this and allied together to get whatever treasure may be on the island. Roll and MegaMan are arbitrarily following the ship Bluecher made, when, suddenly, it is attacked by an unknown woman that is somehow registered as familiar to Roll!
The ship begins to plummet towards Forbidden Island, where the vortex will tear it to shreds, along with everyone inside. But Roll and MegaMan can't go in there - their ship is worse off. They, instead, decide to land on a nearby town and, there, they search to find a way to get on Forbidden Island.
And so, their next journey, unbeknownst to them, begins. Who is this woman that attacked Bluecher's ship, and why? Why is Forbidden Island forbidden? What is it hiding? And, perhaps most importantly, how is MegaMan intertwined in all of this? These questions and more will be answered...
GRAPHICS: 9/10.
The graphical quality of this game mostly depends on from which standpoint you view them from. For the purpose of this review, I compare the graphics to other games of the PlayStation era. By comparison with modern games, especially on the HD consoles (PS3, 360), you'll probably find the graphics in MegaMan Legends to be rather inferior. Texture smoothing on the PlayStation 2 is somewhat helpful in raising the quality by smoothing out the pixels, but it also has a tendency to just feel odd when looking at it.
Anyways, the graphics are fairly vibrant and varied. You'll have a huge number of environments to look at, from lava-filled dungeons to forests to underwater ruins, it's all pretty nice, and Capcom even put in some extra effects. For example, you can see bubbles in the underwater dungeons and wavering in the images on-screen.
Quality-wise, it is around par with other games of its time - notably, Final Fantasy VII, VIII, and IX. The graphics could use some work as far as smoothness goes, and some of the things you don't ever get to interact with are just a bit fuzzy. However, all in all, the graphics are pretty good.
SOUND AND MUSIC: 9/10.
There are a fair variety of sound effects in this game, the majority of which you probably would have heard in the first MegaMan Legends - a lot of explosions, zapping when damaged, laser shots, and the like. The sounds can be pretty overwhelming, and you'll be hearing them almost constantly in this game, sometimes to the point of drowning out the background music. Quality-wise, it's still good.
As far as the music goes, I don't particularly see a problem with it. I don't really pay attention to music much at all, but, I have to say, I kinda like the music. There are a fair number of varying themes that suit the situation. I especially remember the underwater ruins and the theme that implied a bunch of mystery for obvious reasons - after all, you try finding a 3D RPG with an underwater dungeon!
To put it briefly, the sound effects are good, though unlikely to have been unheard to players of the original MegaMan Legends. The background music is of a decent quality and the music itself is amazing. There are a scant few remixes of themes from the first MegaMan Legends, and even a few classical tracks in the game!
PLAY TIME: 10/10.
For a PlayStation game, I'd have to say that the amount of play you can get from MegaMan Legends 2 is rather spot-on for a few reasons. The biggest is the range of time it can take to play the game, and the next is that the game actually has longevity.
A basic playthrough of the game can be expected to last somewhere from ten to fifteen hours, especially for first-timers. However, the game can be sped-through easily in an hour - in one sitting, versus over a few days and weeks. On the opposite end of the spectrum, you'll find that the comprehensive playthroughs can last much longer, generally around forty to fifty hours if you already know your way around the game.
Definitely, this disparity can result from a large amount of extra content you unknowingly would do in a basic playthrough, or would do in a comprehensive one, but not do in a speed-run. Speed-runs are, in fact, the most challenging way to play this game if you try, because there is so much you would otherwise gain to aid yourself. Comprehensive playthroughs have a lot of content to trudge through, from several extra dungeons to raising millions and millions of Zenny to max out all of the Special Weapons.
This game obviously has a fair range of possibilities for you as far as longevity goes. You can have a game you play in one sitting, over the course of a week or two, or over the course of a few months - whichever you prefer, as it is mostly you who makes those decisions. Capcom only opened the door; it is up to you to step on through it.
REPLAYABILITY: 10/10.
There are a very, very few games that do not bore me with repeated playthroughs. I can count them on one hand - two are MegaMan Legends 1 and 2, and another is Dark Cloud 2. Given I've played well over two hundred games, that means a lot to me.
This game is just so entertaining. To some extent, the dungeons are randomized in their enemy formations to prevent a true step-by-step walkthrough, so you can't readily look up how to beat an enemy in a situation. The dungeons are practically unique for the RPG genre and present unique challenges. Overall, the game itself is wholly entertaining and also varied, the main factor in being able to replay a game.
Perhaps the next big factor would be the difficulty system. There are practically five different difficulty levels ranging from a game you can play through in about an hour to a game that is so hellishly difficult that it can take months to play through what would otherwise take a few days, just because of minor changes in Zenny conversions and enemy stats. Everyone will have, at some point, a difficulty available to them that is also perfect for them without creating boredom or over-challenging.
All in all, MegaMan Legends 2 provides perhaps one of the best replay values a game can get. It lets you do all of the choosing in how hard you want it to be; that way, you are able to make sure you get just the right amount of challenge without needing to throw a controller out the window. Simply put, amazing.
THE END. Overall score: 9.5/10.
In MegaMan Legends 2, you have perhaps one of the most treasured, maybe even the best, games of all time. The game is unique in its environments, in its challenges, and in its puzzles. It provides a massive level of entertainment, and a massive amount of it for those who choose to delve into the sidequests.
It provides an intriguing, albeit sadly cut-off, storyline. It can be played for an hour or fifty, and often repeatedly without a bit of boredom. The only shortcomings this game really has, in fact, are found in the technical limitations of the PlayStation era as far as graphics and sound go. Remember, this is a 2012 review, so those technologies have advanced much in the past dozen years.
Regardless of those extremely minor shortcomings, MegaMan Legends 2 is by far one of the best, if not the best, games I have ever played. I paid a rather large amount of money (even by modern standards) to get it - seventy-five dollars. By comparison, most games launch at sixty dollars, and that's the high end.
And yet, it has quickly shown to be the best money I have ever spent on any video game. I would, without reservation, without hesitation, recommend this game to absolutely anyone. It is practically the perfect game from my perspective - a perfect mixture of the classic MegaMan platforming elements, classic adventure game elements, and great RPG elements.
There is, in fact, a reason why it costs so much. It is just that good!
Reviewer's Score: 10/10 | Originally Posted: 11/01/12
Game Release: Mega Man Legends 2 (US, 10/24/00)
Would you recommend this Review? Yes No
Add Comment No Comment
Got Your Own Opinion?
You can submit your own review for this game using our Review Submission Form. | http://www.gamefaqs.com/ps/197897-mega-man-legends-2/reviews/review-152277 | CC-MAIN-2015-11 | refinedweb | 3,778 | 69.82 |
A reader pointed this out to me. It looks like the paragraph might be from "Mining the Social Web", which uses iPython notebooks. How about this rewrite:
The example source code for this book is maintained in a public github repository.
Add the URL? (There isn't one in the current paragraph.) Should there be more in this paragraph?
Immediately following the last "NOTE" box in this section, the text reads:
"The first code line in the file defines the package for the type, named intro."
This refers to the listing for upper1.scala. In the listing shown, however, the package is named introscala.
Note from the Author or Editor:Correct. Should be "... named introscala."
In code: // src/main/scala/progscala2/patternmatching/match-deep.sc
There is a missing right parenthesis at the line:
case Person("Alice", 25, Address(_, "Chicago", _) => println("Hi Alice!")
Note from the Author or Editor:Good catch. Should be:
case Person("Alice", 25, Address(_, "Chicago", _)) => println("Hi Alice!")
Now reads:
Revision History for the <em>First</em> Edition:
2014-11-25: First release
2015-03-27: Second release
Should read:
Revision History for the <strong>Second</strong> Edition:
2014-11-25: First release
2015-03-27: Second release
manage[R,T].apply() returns "Any", not "T".
The code example ignores the return value of "T", but the example would be more useful if it returned a type related to "T", like Option[T], Try[T], or Either.
Thanks! Nice book.
Note from the Author or Editor:There are a few improvement required:
1. "apply" should be declared to return "T".
2. The catch clause should rethrow the exception. Otherwise, apply doesn't type check.
The try clause does return a T, from the call to f(res.get).
Here is an improved version of the file:
// src/main/scala/progscala2/rounding/TryCatchArm.scala
package progscala2.rounding

import scala.language.reflectiveCalls
import scala.util.control.NonFatal

// DeanW (Dec. 21, 2015): Refined the implementation and the usage
// example below to more clearly indicate the handling of the returned
// object of type T.

object manage {
  def apply[R <: { def close():Unit }, T](resource: => R)(f: R => T): T = {
    var res: Option[R] = None
    try {
      res = Some(resource)   // Only reference "resource" once!!
      f(res.get)             // Return the T instance
    } catch {
      case NonFatal(ex) =>
        println(s"manage.apply(): Non fatal exception! $ex")
        throw ex
    } finally {
      if (res != None) {
        println(s"Closing resource...")
        res.get.close
      }
    }
  }
}

object TryCatchARM {
  /** Usage: scala rounding.TryCatch filename1 filename2 ... */
  def main(args: Array[String]) = {
    val sizes = args map (arg => returnFileLength(arg))
    println("Returned sizes: " + (sizes.mkString(", ")))
  }

  import scala.io.Source

  def returnFileLength(fileName: String): Int = {
    println() // Add a blank line for legibility
    manage(Source.fromFile(fileName)) { source =>
      val size = source.getLines.size
      println(s"file $fileName has $size lines")
      if (size > 200) throw new RuntimeException(s"Big file: $fileName!")
      size
    }
  }
}
No mistake in the book but in the projectcode from github.
After running the help and eclipse tasks, importing of the code examples in Eclipse didn't succeed because the colon in the project name (after "Second Edition") was not accepted. I had to remove it first in the XML.
Note from the Author or Editor:I changed the project name in the github repo's "build.sbt". It should now work.
scala Upper1
should instead be
scala -cp . Upper1
Note from the Author or Editor:This appears to be platform dependent, but it's a fine default.
"Because map takes a single function argument, where the function itself takes a single argument." is a clause. I think removing the leading "Because" will fix it.
Note from the Author or Editor:Yes, change "Because" to "The".
s/imaging/imagine/
Note from the Author or Editor:correct
The package in the scala code Shapes.scala is defined as
package progscala2.introscala.shapes
However in page 21 the code states
import progscala2.intro.shapes._
import progscala2.intro.shapes._
as you can see progscala2.intro != progscala2.introscala
As a result the code in the book will not work.
I apologize for selecting serious technical mistake when it may be a simple problem.
But for new people learning the language, it may be frustrating. :)
Thank you and more power.
Note from the Author or Editor:The line on page 21:
scala> import progscala2.intro.shapes._
should be
scala> import progscala2.introscala.shapes._
scala> import progscala2.introscala.shapes._
import progscala2.introscala.shapes._
scala> val p00 = new Point
p00: progscala2.introscala.shapes.Point = Point(0.0,0.0)
scala> val p20 = new Point(2.0)
p20: progscala2.introscala.shapes.Point = Point(2.0,0.0)
scala> val p20b = new Point(2.0)
p20b: progscala2.introscala.shapes.Point = Point(2.0,0.0)
scala> val p02 = new Point(y = 2.0)
p02: progscala2.introscala.shapes.Point = Point(0.0,2.0)
scala> p00 == p20
res0: Boolean = false
scala> p20 == p20b
res1: Boolean = true
Note from the Author or Editor:Correct. The initial import statement and the "echo" on the next line should have "progscala2." prefixes. I found some other mistakes like this that I'll email to the production team, as well.
s/to to/to do/
Note from the Author or Editor:correct.
The Person class example given on page 33 is correct.
But the code in the git repository says:
// src/main/scala/progscala2/typelessdomore/person.sc
says class Person(val mame: String, var age: Int)
note name is typed as mame, while later in the same file it is referred as name..
Note from the Author or Editor:Yes, it should be "name".
s/can want to/can/
Note from the Author or Editor:Doh! correct
"This function avoids the risk of throwing a MathError exception..."
'MathError' should probably be MatchError
"A MathError is only thrown..."
Same here
Note from the Author or Editor:Damn. It should be MatchError.
The text shows MathError when it should show MatchError
Note from the Author or Editor:yes.
The signature of onSuccess function is incorrect.
Book:
def onSuccess[U](func: (Try[T]) => U)( implicit executor: ExecutionContext): Unit
Scala SDK:
def
onSuccess[U](pf: PartialFunction[T, U])(implicit executor: ExecutionContext): Unit
Note from the Author or Editor:That's correct. I don't where I got the signature shown in the book.
The factorial example on page 42:
// src/main/scala/progscala2/typelessdomore/factorial.sc
def factorial(i: Int): Long = {
  def fact(i: Int, accumulator: Int): Long = {
    if (i <= 1) accumulator
    else fact(i - 1, i * accumulator)
  }
  fact(i, 1)
}
(0 to 5) foreach ( i => println(factorial(i)) )
As explained in the book (for the return type) "We used Long because factorials grow in size quickly".
However this won't work because the "fact" method accepts only Int which will truncate the Long to Int, resulting in negative factorials. For example run this example with range 0 to 20 ( (0 to 20) foreach ( i => println(factorial(i)) )
I think accumulator should be of type Long:
def factorial(i: Int): Long = {
  def fact(i: Int, accumulator: Long): Long = {
    if (i <= 1) accumulator
    else fact(i - 1, i * accumulator)
  }
  fact(i, 1)
}
This way it will work for range "0 to 20" (but not beyond that as it goes outside Long range :) )
Note from the Author or Editor:Ah, yes. You're right. the "accumulator" argument to "fact" should be "Long", not "Int".
On page 46, line 5 (excluding white spaces) of the formatted code example should be p.age and not p.lastName
Note from the Author or Editor:Correct. It should be "p.age". Copy, paste error...
In the section, "Methods with Multiple Argument Lists", in page 51, the author starts the last paragraph with "The third advantage ...".
Am I missing another advantage #2? I am assuming advantage #1 is being able to use the 'syntactic sugar' advantage of being able to use '{}' vs '()'.
Note from the Author or Editor:Yes, no explicit "second" advantage is mentioned. I've "renumbered" them to have a total of 3, not 4.
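The first advantage (the `{}` versus `()` syntactic sugar) is easy to see in a tiny sketch; the method and names below are hypothetical, not from the book:

```scala
// A second argument list holding a single function parameter lets the
// caller pass that function in braces, like a built-in control structure.
def applyTwice(x: Int)(f: Int => Int): Int = f(f(x))

val viaParens = applyTwice(5)(n => n + 1)   // ordinary parentheses
val viaBraces = applyTwice(5) { n =>        // braces read like a code block
  n + 1
}
assert(viaParens == 7 && viaBraces == 7)
```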
The last sentence in paragraph 3 reads as "Fortunately, Java 8 finally adds clojures to Java.". I think, this is what the author meant "Fortunately, Java 8 finally adds closures to Java."
Note from the Author or Editor:Doh! Yes, "closures" is the correct word.
s/three useful type/three useful types/ (add an 's' to "type").
Note from the Author or Editor:Correct.
whether or not the we process the files
should be
whether or not we process the files
Note from the Author or Editor:Yes, delete the "the".
"enhancment" should be "enhancement".
In the code snippet for apply there is a val/var error.
The resource is never closed, because the symbol 'res' is defined twice in different scopes:
var res: Option[R] = None // first time as mutable
val res = Some(resource) // second time as immutable in the try scope...
In the 'finally' scope, 'var res' is used, which is still 'None'.
Note from the Author or Editor:Correct. The "val res = Some... should not have the "val". Fixed in the next early-access release.
The example:
for {
x <- Seq(1, 2, 2.7, "one", "two", 'four)
} {
val str = x match {
case _: Int | _: Double => "a number: "+x
case "one"
=> "string one"
case _: String
=> "other string: "+x
case _
=> "unexpected value: " + x
}
println(str)
}
point to file // src/main/scala/progscala2/patternmatching/match-variable2.sc
Actually this code is in the file // src/main/scala/progscala2/patternmatching/match-variable3.sc
same needs to be updated in match-variable3.sc file in the comment.
Note from the Author or Editor:Correct. The comment should say match-variable3.sc
The case of "Alice" is missing last closing parentheses both in book and and git source i.e.
case Person("Alice", 25, Address(_, "Chicago", _) => println("Hi Alice!")
should be:
case Person("Alice", 25, Address(_, "Chicago", _) => println("Hi Alice!"))
Note from the Author or Editor:Actually the latest printing has it right. There's a missing paren before the "=>". It should be:
case Person("Alice", 25, Address(_, "Chicago", _)) => println("Hi Alice!")
(not after the println). I just pushed a fix to the github source, too.
The github source is missing the file src/main/scala/progscala2/patternmatching/infix.sc.
Note from the Author or Editor:This file was actually in src/main/scala/progscala2/rounding. I moved it to patternmatching in the Github repo, so no change to the book's text is required.
The $ is missing for title in println for MagazineExtractorRE i.e.
case MagazineExtractorRE(title, issue) => println(s"""Magazine "title", issue $issue""")
should be
case MagazineExtractorRE(title, issue) => println(s"""Magazine "$title", issue $issue""")
Note from the Author or Editor:Correct, the source file is missing the $ as shown. I've fixed the source in the Git repo.
This string "list double" is in result of code snippet page 132. But this string in not in code. It is copy past error.
Note from the Author or Editor:Correct. Will fix in final version.
You say:
==============
So, the following two expressions are equivalent:
<:<(A, B)
A <:< B
In toMap, the B is really a pair:
<:<(A, (T,U))
A <:< (T,U)
==============
It should actually be using square brackets since we are talking about the type, not an instance. This is my attempt to correct it:
==============
So, the following two expressions are equivalent:
<:<[A, B]
A <:< B
In toMap, the B is really a pair:
<:<[A, (T,U)]
A <:< (T,U)
==============
...use map for a a custom target...
Has a redundant 'a'
Note from the Author or Editor:Yes, there's a redundant "a".
The overridingConversion code could not be found
Note from the Author or Editor:The wrong source file was included here (i.e., the previous file was included in the text twice). There are only two changes, however:
1. The comment with the file name should end with .../implicit-conversions-resolution2.sc
2. There is a new line of text before the definition of "class O":
implicit def overridingConversion(s: String): Foo = Foo("Boo: "+s)
Just a suggestion: instead of using Int and String as the markers for your examples, consider using marker classes, like so, and also mentioning why using common classes like Int and String is not a good idea:
class IntMarker
class StringMarker
object M {
def m(seq: Seq[Int])(implicit i: IntMarker): Unit =
println(s"Seq[Int]: $seq")
def m(seq: Seq[String])(implicit i: StringMarker): Unit =
println(s"Seq[String]: $seq")
}
implicit val anIntMarker = new IntMarker
implicit val aStringMarker = new StringMarker
M.m(List(1,2,3))
M.m(List("one", "two", "three"))
Note from the Author or Editor:Good idea. I changed the example accordingly.
First, we won't hande arrays or nested objects, just "flat" JSON expressions like {"a": "A", "b": 123, "c": 3.14159}
May be - First, we won't HANDLE arrays or nested objects, .....
Note from the Author or Editor:Fixed.
The factorial-recur1.sc example in the book and github is missing "{".
def factorial(i: BigInt): BigInt =
should be
def factorial(i: BigInt): BigInt = {
As mentioned curried-func.sc is not in // src/main/scala/progscala2/fp/datastructs/curried-func.sc but in // src/main/scala/progscala2/fp/curry/curried-func.sc
Note from the Author or Editor:Correct. It's in the fp/curry location.
The example given:
List.empty[Int] optionReduce (_ + _)
should be
List.empty[Int] reduceOption (_ + _)
There is no method optionReduce on List.
Note from the Author or Editor:Yes, should be "reduceOption".
foldLeft and foldRight are mentioned, but the accompanying code sample implements reduceLeft and reduceRight.
Note from the Author or Editor:Correct. The text before the example should say reduceLeft and reduceRight
actual:
sealed abstract class Option[+T]
should be
sealed abstract class Option[+A]
as A is used in the methods signature
Note from the Author or Editor:Ah, good catch. Yes, should be "+A".
The comment mentions expr1, whereas the implementation mentions expr
Note from the Author or Editor:pg 223 in my PDF. Actually the 2nd and 3rd examples on the page have this issue. For consistency with the rest of the example, I would change the code, not the comments.
2nd example becomes: expr1 map { case pat => expr2 }
3rd example becomes: expr1 foreach { case pat => expr2 }
The first sentence mentions three distinct things, but the second paragraph starts off with "Both" as though covering only two things.
I suspect Validation was added after Either and Try but this particular text was not adjusted accordingly.
Note from the Author or Editor:Thanks for adding the extra detail. I found the issue. It's the 1st paragraph on page 239 in my PDF. Yes, instead of "both" starting the 2nd sentence, it should say "All three".
Following up on previous errata...
The paragraph begins with:
Either, Try, and Validation express through types a fuller picture of how the program actually behaves. Both say that a valid value ...
In this case I believe "Both say" should be replaced with something like "They say", since more than two subjects are involved.
Note from the Author or Editor:Thanks, confirmed after the added details in the previous bug report.
"also call" should be "also called"
Note from the Author or Editor:Already fixed in the latest printing.
The start of the fourth paragraph repeats what is stated in the third paragraph.
Note from the Author or Editor:Page 244 in my PDF. Correct. The third paragraph could be deleted completely:
If an object and a class have the same name and are defined in the same file, they are called companions.
The sample code for creating a Map is missing a value for the first entry:
Map("one" ->, "two" -> 2)
- which does not compile.
Note from the Author or Editor:1st paragraph on pg. 245 in my PDF. Yes, it should be "Map("one" -> 1, "two" -> 2)"
Hi,
is the last bullet point( number 4) correct? it states that
cp.value = new CSuper
is a correct assignment. OK.
How can that be correct if instead
cp.value = new C
is stated not to be correct? CSuper is a super type of C, which is a super type of CSub. So if 2 is wrong then also 4 should be NOT OK
I suspect bullet point 4 should be Not OK then.
Can you please confirm this?
Kind regards,
Mario Paniccia
Note from the Author or Editor:This is a hard concept to explain and I made a mistake here. You are correct that this line is not correct, for the same reason as in the previous 4 bullet points.
So this last #4 note should be the same as the previous #4 note: "Compilation error, because a CSuper instance can’t be substituted for a C instance."
There is also a typo in the explanation for #1, it should say "... declared type of cp, ..."
There appears to be markup in the text
<emphasis role="keep-together">contract</emphasis>]
Note from the Author or Editor:Wow! It's near the bottom of page 284 in my PDF. The word "contract" should be in italics/emphasis without the HTML tags.
The statement:
Null is implemented in the compiler as if it has the following declaration:
package scala
abstract final class Null extends AnyRef
is confusing. I understand the declaration is hypothetical, but it uses language keywords which have meanings that contradict the reality of the Null type. In particular, the hypothetical declaration says that Null's immediate superclass is AnyRef (it's not) and that, being abstract, it cannot have any instances (it does). I would avoid trying to come up with a technically correct hypothetical declaration (I tried :-) and simply say that the compiler creates the Null type as a class that extends every class that has AnyRef as its highest superclass, and a singleton object named null as the only instance of Null.
The hypothetical declaration of Nothing at the bottom of the page is similarly confusing.
Note from the Author or Editor:Yes, it's not intended to be a precise definition, as Scala doesn't provide a way to declare your own "bottom" types. I'll reword it in the 3rd edition, if there is one.
The fourth line of second paragraph:
attempts to use override val name = "foo"
I think it should be
override var name = "foo"
i.e. an attempt to use var to override a parameterless method will get an error
Note from the Author or Editor:Correct, it should be "var" not "val". There is also another typo in the first sentence of the same paragraph, "... writer method, override name_=, ..." There should be a "def" between "override" and "name".
the source code included is :
// src/main/scala/progscala2/basicoop/ValueClassPhoneNumber.sc
But it shoud be
// src/main/scala/progscala2/objectsystem/value-class-universal-traits.sc
Note from the Author or Editor:Fixed in the Github repo.
actual:
The one implementation of collection.mutable.Map is a hash-trie class collection.concurrent.TrieMap
I think the author meant:
The one implementation of collection.concurrent.Map is a hash-trie class collection.concurrent.TrieMap
Note from the Author or Editor:Good catch!
"Vector is implemented using as a"
Either using or as should be removed.
Note from the Author or Editor:Yes, wording needs cleaning up.
..."so if we map it to a a set"...
One of the "a"s is redundant.
Note from the Author or Editor:Ah yes. Thanks.
def equalFields(other: ProtectedClass1) =
(protectedField1 == other.protectedField1) &&
(protectedField1 == other.protectedField1) &&
(nested == other.nested)
should be
def equalFields(other: ProtectedClass1) =
(protectedField1 == other.protectedField1) &&
(protectedField2 == other.protectedField2) &&
(nested == other.nested)
The regular expression:
val blankRE = """^\s*#?\s*$""".r
is claimed to be: "a matcher for blank lines (or “comments,” lines where the first non- whitespace character is a #), which are ignored."
However any non-whitespace after a # will cause the match to fail.
I believe you intended something more like:
val blankRE = """^\s*(?:#.*)?\s*$""".r
Note from the Author or Editor:The proposed change to blankRE is what it should be.
(The page number is the printed page number in the PDF; it's the "physical" page 505.)
The sentence: "Use -encoding UTF8 if you use non-ASCII characters in names or the allowed symbols, such as ⇒ (Unicode \u21D2) instead of =>." should instead end with "... =>."
Broken link to sbt-web in
'By now you’ve installed SBT. If you do JVM-based web development, see also the new sbt-web project"
->
Note from the Author or Editor:The dollar signs "$$" shouldn't be at the end of the URL. I don't know how they got there.
In example for-validations-good-form.sc, the if condition in function validName should be
if (n.length > 0 && n.matches("""^\p{Alpha}+$""")) Success(List(key -> n))
instead of
if (n.length > 0 && n.matches("""^\p{Alpha}$""")) Success(List(key -> n))
The symbol "+" is missing.
Note from the Author or Editor:Yes, the + should be in the triple-quoted string: """^\p{Alpha}+$"""
First sentence: "... that manipulates programs, rather than data", should really read "that manipulates programs as data".
"... we'll focus on the most stable parts: runtime reflection..." Change "runtime" to "runtime and compile-time" (drop the hyphen if you prefer...)
" scala.reflect.api.Types.TypeRef and scala.reflect.api.Symbols.TypeSymbol" should be " scala.reflect.api.Types#TypeRef and scala.reflect.api.Symbols#TypeSymbol". That is, the last period in each name should be a # instead. The links work correctly, fortunately.
This example is actually nonsense for reasons I won't elaborate here. It would be best to remove it completely. I recommend the following changes.
1. Delete the paragraphs starting at "We saw in "More on Type Matching"..." through to the single-sentence paragraph "However, as mentioned, it's not accurate for Seq[Any] with mixed elements, the second to last example". on pages 520-521.
2. Move the example at the end of this section, mkArray.sc, in its place. Starting at the paragraph on pg. 522: "Another important usage of ClassTag ...", plus the mkArray.sc example, and finally the paragraph "It uses the Array.apply method for AnyRefs, which has a second argument list with a single implicit ClassTag argument". However, change the leading words, "Another important ..." to "An important usage..."
3. With this example moved, the last paragraphs will now be the ones I didn't mention that are unchanged:
"The compiler exploits the type information...
"Hence, ClassTags can't "resurrect" type information...
"ClassTag is actually a weaker version of ...
"Note that there are older types in the ...
<end of section>
(Actually, there is a correction for the last paragraph that I suggested in another bug report.)
"These types are being deprecated." Change to "These types will be deprecated eventually."
"... syntax expands the list into comma-separated values" should be "... syntax expands the list into a list of trees"
"Recall that we said that macros are a limited form of compiler plug-in" would be more accurately put, "Recall that we said that macros work like a limited form of compiler plug-in"
The line "import reflect.runtime.universe._" should be there. (It can cause problems in some cases). So, it should be removed, but the "1" bullet should be moved to the next line, "import scala.language.experimental.macros".
In the table 10-1 "Type variance annotations and their meanings" the entry for contravariance contradicts the definition I have seen elsewhere.
The book shows:
-T means Contravariant. E.g., X[Tsup] is a supertype of X[T].
I have read in other sources:
-T means Contravariant. E.g., X[Tsup] is a subtype of X[T]
Which one is right?
Note from the Author or Editor:Yikes. It should say "X[Tsup] is a subtype of X[T]"
(Sorry for lack of a page number: O'Reilly should start paginating their Kindle books [hint-hint].)
In Chapter 10, section "The == and != Methods":
```
Note
In Java, C++, and C#, the == operator tests for reference, not value equality. In contrast, Scala's == operator tests for value equality.
```
In C++, the == operator must be explicitly defined for user-defined types (i.e. classes). Implementers can define equality however they want, but typically they define it to provide value equality semantics. There is no default implementation that provides reference equality semantics.
Note from the Author or Editor:Remove reference to C++
© 2017, O’Reilly Media, Inc.
FHI-aims
Introduction
FHI-aims is an all-electron, full-potential density functional theory code using a numeric local-orbital basis set.
Running the Calculator
The default initialization command for the FHI-aims calculator is
- class ase.calculators.aims.Aims(restart=None, ignore_bad_restart_file=False, label='.', atoms=None, cubes=None, radmul=None, tier=None, aims_command=None, outfilename=None, **kwargs)
Construct the FHI-aims calculator.
The keyword arguments (kwargs) can be one of the ASE standard keywords: ‘xc’, ‘kpts’ and ‘smearing’ or any of FHI-aims’ native keywords.
Note
The behavior of command/run_command has been refactored in ASE X.X.X. It is now possible to independently specify the command used to call FHI-aims and the output file into which stdout is directed. In general, we replaced
<run_command> = <aims_command> + " > " + <outfilename>
That is, what used to be, e.g.,
>>> calc = Aims(run_command = "mpiexec -np 4 aims.x > aims.out")
can now be achieved with the two arguments
>>> calc = Aims(aims_command="mpiexec -np 4 aims.x",
...             outfilename="aims.out")
Backward compatibility, however, is provided. Also, the command actually used to run FHI-aims is dynamically updated (i.e., the “command” member variable). That is, e.g.,
>>> calc = Aims()
>>> print(calc.command)
aims.version.serial.x > aims.out
>>> calc.outfilename = "systemX.out"
>>> print(calc.command)
aims.version.serial.x > systemX.out
>>> calc.aims_command = "mpiexec -np 4 aims.version.scalapack.mpi"
>>> print(calc.command)
mpiexec -np 4 aims.version.scalapack.mpi > systemX.out
Arguments:
- cubes: AimsCube object
Cube file specification.
- radmul: int
Set radial multiplier for the basis set of all atomic species.
- tier: int or array of ints
Set basis set tier for all atomic species.
- aims_command: str
The full command as executed to run FHI-aims, without the redirection to stdout. For instance "mpiexec -np 4 aims.x". Note that this is not the same as "command" or "run_command". (Added in ASE X.X.X.)
- outfilename: str
The file (incl. path) to which stdout is redirected. Defaults to "aims.out". (Added in ASE X.X.X.)
- run_command: str, optional (default=None)
Same as "command", see FileIOCalculator documentation. (Deprecated in ASE X.X.X.)
- outfilename: str, optional (default=aims.out)
File into which the stdout of the FHI-aims run is piped. Note that this will only have an effect if the <run_command> does not already contain a '>' directive.
- plus_u: dict
For DFT+U. Adds a +U term to one specific shell of the species.
- kwargs: dict
Any of the base class arguments.
In order to run a calculation, you have to ensure that at least the
following
str variables are specified, either in the initialization
or as shell environment variables:
List of keywords
This is a non-exclusive list of keywords for the
control.in file
that can be addressed from within ASE. The meaning of these keywords is
exactly the same as in FHI-aims, please refer to its manual for help on
their use.
One thing that should be mentioned is that keywords with more than
one option have been implemented as tuples/lists, e.g.
k_grid=(12,12,12) or
relativistic=('atomic_zora','scalar').
In those cases, specifying a single string containing all the options is also possible.
None of the keywords have any default within ASE, but do check the defaults set by FHI-aims.
Example keywords describing the computational method used:
Note
Any argument can be changed after the initial construction of the calculator, simply by setting it with the method
>>> calc.set(keyword=value)
Volumetric Data Output
The class
- class ase.calculators.aims.AimsCube(origin=(0, 0, 0), edges=[(0.1, 0.0, 0.0), (0.0, 0.1, 0.0), (0.0, 0.0, 0.1)], points=(50, 50, 50), plots=None)
Object to ensure the output of cube files, can be attached to Aims object
parameters:
- origin, edges, points:
Same as in the FHI-aims output
- plots:
what to print, same names as in FHI-aims
describes an object that takes care of the volumetric output requests within FHI-aims. An object of this type can be attached to the main Aims() object as an option.
The possible arguments for AimsCube are:
The possible values for the entry of plots are discussed in detail in the FHI-aims manual, see below for an example.
Example
Here is an example of how to obtain the geometry of a water molecule,
assuming
ASE_AIMS_COMMAND and
AIMS_SPECIES_DIR are set:
ase/test/aims/H2O_aims.py.
from ase import Atoms
from ase.calculators.aims import Aims, AimsCube
from ase.optimize import QuasiNewton

water = Atoms('HOH', [(1, 0, 0), (0, 0, 0), (0, 1, 0)])

water_cube = AimsCube(points=(29, 29, 29),
                      plots=('total_density',
                             'delta_density',
                             'eigenstate 5',
                             'eigenstate 6'))

calc = Aims(xc='PBE',
            output=['dipole'],
            sc_accuracy_etot=1e-6,
            sc_accuracy_eev=1e-3,
            sc_accuracy_rho=1e-6,
            sc_accuracy_forces=1e-4,
            cubes=water_cube)

water.set_calculator(calc)

dynamics = QuasiNewton(water, trajectory='square_water.traj')
dynamics.run(fmax=0.01)
How do you build a scalable machine learning infrastructure?
There are a few critical elements when building a machine learning infrastructure. You need your machine learning infrastructure to be built for scalability, and to provide you with visibility so you can build plans on top of your existing stack. We'll first talk about the AI fabric comprising your compute resources and orchestration platforms like Kubernetes or OpenShift, and learn how to integrate it into your machine learning workflows. A machine learning infrastructure also requires solutions for data management and data version control, and should provide an ML workbench that gives data scientists a simple way to train models, work on their research, and optimize models and algorithms. The last component of a scalable machine learning infrastructure is an easy and intuitive way to deploy models to production. One of the biggest challenges today is that many models never make it to production because of hidden technical debt in the organization. Your machine learning infrastructure should be agnostic, and should integrate easily into your existing and future stack. It should be portable, utilize containers for simple deployments, and allow your data scientists to run experiments and workloads in one click. In the following sections we will dive into the main aspects of building a scalable machine learning infrastructure.
What are the biggest machine learning infrastructure challenges?
What are the biggest machine learning infrastructure business challenges?
How do you use MLOps best practices in your machine learning infrastructure?
What architecture can support machine learning at scale?
Now we will go over the steps to building an architecture that can support enterprise machine learning workloads at scale.
1. Containers
Containers are key to providing a flexible and portable machine learning infrastructure. With containers you can assign machine learning workloads to different compute resources. So GPUs, cloud GPUs, accelerators, any resource that you have can be assigned to each workload. Using containers can help distribute jobs on any of the resources that you have available. It is great for DevOps engineers because it provides a more portable and flexible way to manage workloads.
Containers help you define an environment and are also great for reproducible data science. You can launch the containers anywhere on any cloud-native technology: on-premise on a Kubernetes cluster, on bare metal simply using Docker, and on cloud resources, which have extensive support for all the different container formats. You can also operate orchestration platforms like OpenShift that make it easier for you to run and execute containers in the cluster.
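As a concrete, purely illustrative sketch of what "assigning a containerized workload to a resource" can look like, the snippet below composes a `docker run` command for a hypothetical GPU training job. The image name, mount paths, and environment variables are made-up placeholders, not references to any specific platform:

```python
def build_docker_run(image, script, gpus=0, mounts=None, env=None):
    """Compose a `docker run` command for a containerized training job."""
    cmd = ["docker", "run", "--rm"]
    if gpus:
        # Request GPU access via Docker's --gpus flag (NVIDIA runtime).
        cmd += ["--gpus", str(gpus)]
    for host_path, container_path in (mounts or {}).items():
        cmd += ["-v", f"{host_path}:{container_path}"]  # bind-mount data
    for key, value in (env or {}).items():
        cmd += ["-e", f"{key}={value}"]                 # pass hyperparameters
    cmd += [image, "python", script]
    return cmd

# Hypothetical image and paths, for illustration only.
cmd = build_docker_run(
    "registry.example.com/train:latest",
    "train.py",
    gpus=2,
    mounts={"/data": "/mnt/data"},
    env={"EPOCHS": "10"},
)
print(" ".join(cmd))
```

The same command list could be handed to a scheduler or executed directly; the point is that the environment, data, and resource requests travel with the job.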
2. Orchestration
When it comes to orchestration, you need to build something that is compute-resource agnostic. While Kubernetes is becoming the standard way of deploying and orchestrating machine learning, there are many flavors of Kubernetes: there is Rancher, there is OpenShift, there is vanilla Kubernetes. Even for small deployments, there are MicroK8s and Minikube. So when you're designing your own infrastructure, you need to decide what kind of orchestration platform you're aiming to support now and in the future. You need to be able to design the stack in a way that fits your existing infrastructure while considering future infrastructure needs.
Also, whatever infrastructure you’re designing, you need to be able to leverage all the compute resources that you already have in your enterprise. So, if you have a large Spark cluster, Hadoop environment or you have bare-metal servers that are not running on Kubernetes – like large CPU clusters – then you need to be able to support those as well. You need to build an infrastructure that can integrate to the Hadoop cluster, that can leverage Spark, that can leverage YARN, and can leverage all the technology that your organization has. Not only that, but additionally you should consider how to manage all your compute resources in one place for all your data scientists across the industry to access and use in one click.
3. Hybrid cloud multi cloud infrastructure
What are the benefits of a hybrid cloud infrastructure for machine learning? This is a big topic that could easily take on its own post. But specifically in machine learning, a hybrid cloud infrastructure is ideal because machine learning workloads are usually stateless. That means that you may run a machine learning training job for a day, or for two weeks, and then terminate the machine. As long as all the models and data are being stored, you can simply terminate the machine and forget about it. Hybrid cloud deployment for machine learning is unlike software in this way. In software you need to persist state and make sure the database is shared across the hybrid environment. For hybrid cloud machine learning, it's beneficial to control your resources in order to utilize the existing compute you already have. For example, let's say an organization has eight GPUs on-premise, and 10 data scientists. The organization would want to be able to utilize all eight GPUs, and only burst to the cloud when it reaches 100% utilization or allocation. Cloud bursting is an essential capability that allows organizations to increase on-prem utilization and also reduce cloud costs. Not only that, but cloud bursting allows data scientists to easily scale machine learning activities.
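The on-prem-first placement with prioritized bursting described above can be sketched as a toy scheduler. All pool names and capacities here are invented for illustration; a real system would read them from the cluster state:

```python
def place_jobs(num_jobs, on_prem_capacity, cloud_pools):
    """Assign jobs to on-prem GPUs first; burst to clouds in priority order."""
    placements = []
    free_on_prem = on_prem_capacity
    # cloud_pools: list of (name, capacity) in bursting priority order,
    # e.g. burst to GCP first, then AWS.
    clouds = [[name, cap] for name, cap in cloud_pools]
    for job in range(num_jobs):
        if free_on_prem > 0:
            placements.append((job, "on-prem"))
            free_on_prem -= 1
            continue
        for pool in clouds:
            if pool[1] > 0:
                placements.append((job, pool[0]))
                pool[1] -= 1
                break
        else:
            # Nothing free anywhere: hold the job in a queue.
            placements.append((job, "queued"))
    return placements

# 8 on-prem GPUs; burst to GCP before AWS when they fill up.
result = place_jobs(12, 8, [("gcp", 3), ("aws", 3)])
print(result)
```

Real schedulers track utilization over time rather than simple slot counts, but the priority ordering is the same idea.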
4. Agnostic & open infrastructure
Flexibility and being able to easily extend your base platform is critical, because machine learning is evolving extremely fast. So you need to design your machine learning infrastructure in a way that enables you to easily extend it. That means that if there is a new technology, a new operator, a new platform that you want to integrate, you can easily do that without reconfiguring your entire infrastructure. If there is one thing you take from this guide on machine learning infrastructure is to pick your technologies carefully, make sure it is agnostic, and built for scale. That way you can quickly adopt new technologies and operators as they evolve.
Second, if your infrastructure is agnostic, you also need to think about your interface with data scientists. If your interface is not intuitive, then you will miss the benefits of the new technology into your infrastructure. Remember, data scientists are not DevOps engineers or IT. Often they are PhD’s in math and don’t want to work with YAML files or namespaces or deployments etc. They want to do what they were hired to do which is to work on their models. So you need somehow be able to abstract the interface for data scientists, especially if you’re using Kubernetes while providing them the flexibility and control that they need. Meaning that if there are data scientists or DevOps on your team who want to get into the internals of Kubernetes, you need to be able to allow that as well. In the end, it is all about supporting your data science and engineering teams to make them better professionals.
How do you schedule jobs on each of the different interfaces?
What are the examples of different hybrid environments?
Different enterprises require different types of environments for their machine learning. Many machine learning teams are running on legacy systems, or have their own resources available. Some enterprises require highly secure and governed infrastructures, and some are extremely diversified for different types of workloads. We have never encountered 2 infrastructures the same. The reality is that all infrastructures should be built around the need of the enterprise, not to conform to the platform. Here are some real life examples of diverse and supported scenarios from our customers.
1. Simplifying a complex and heterogeneous IT stack
2. Increasing on-prem GPU utilization with hybrid and multi cloud setup
Another nice example is a hybrid multi cloud environment. This customer has two DGX-1 systems, both on-premise. Those are used to serve two different teams today. It is organized so that they get a pool of 16 GPUs that can be consumed by anyone on the team. In addition, they have connected their cloud resources from AWS and GCP. This infrastructure allows them to increase on-prem utilization, and then burst to AWS and GCP only once they've reached capacity. They can even prioritize the cloud bursting, so they first burst to GCP and then to AWS, or integrate spot or preemptible instances, which will save a lot of money.
3. NVIDIA multiple DGXs with cloud bursting to Azure
The ML infrastructure visibility checklist (10 Question you should ask as DS Manager)
When you have a diverse hybrid compute infrastructure for machine learning, one of the biggest challenges is managing the infrastructure. There are a few goals when managing your compute resources. One goal is to maximize utilization, and two is to maximize productivity of your data scientists. The number one infrastructure capability that can increase your utilization and productivity is with visibility. Visibility can help you make informed decisions about your infrastructure and machine learning workflow. Here are a few questions you should be asking yourself as a data science leader about building a visibility tool for your machine learning infrastructure.
1. What is the best way to track in real time so we know what is going on in the past, present and future?
2. What parameters do we need to track? Do you have data on the job, container, allocation and utilization?
3. Do you have a list of dependencies, or network configurations, or data, or storage configurations?
4. Are you able to see job logs such as what happened in the POD?
5. Do you have visibility into the container when the data scientist run this job?
6. Do you have system metrics? Can you see how much of the GPU is really utilized compared to what is really consumed by the user?
7. Do you have visibility into machine learning metrics and artifacts, so model weights, checkpoints etc?
8. Can you measure capacity? (ex. Do you know how many GPUs are connected to your cluster?)
9. Can you measure utilization? (ex. Do you know how much of the available GPU capacity is actually in use?)
10. Can you measure allocation? (ex. Do you know how many GPUs are assigned to jobs at this time?)
These questions should help guide you towards a more transparent machine learning infrastructure. Once you have visibility into your server metrics, you will be able to start improving your performance.
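The last three checklist items (capacity, allocation, utilization) reduce to simple arithmetic once the raw data exists. The per-GPU record format below is an assumption made for this example; a real visibility tool would pull these fields from the scheduler and from GPU telemetry such as NVML/DCGM:

```python
# Invented per-GPU records: whether a scheduler has assigned the GPU to a
# job ("allocated") and how busy the device actually is ("busy_pct").
gpus = [
    {"id": 0, "allocated": True,  "busy_pct": 92.0},
    {"id": 1, "allocated": True,  "busy_pct": 5.0},   # allocated but mostly idle
    {"id": 2, "allocated": False, "busy_pct": 0.0},
    {"id": 3, "allocated": False, "busy_pct": 0.0},
]

capacity = len(gpus)                                    # GPUs connected
allocation = sum(g["allocated"] for g in gpus) / capacity
utilization = sum(g["busy_pct"] for g in gpus) / (100.0 * capacity)

print(f"capacity={capacity}, allocation={allocation:.0%}, utilization={utilization:.1%}")
```

The gap between allocation and utilization is exactly the waste signal discussed below: GPU 1 is reserved but barely working.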
How to build an MLOps visibility tool
What actions can I take to improve machine learning server utilization?
Once you are able to track the capacity waste with a visibility tool, you can use this knowledge to educate your data scientists on better ways to use resources. Here are a few actions you can take to maximize your machine learning server utilization:
1. Stop jobs that aren’t working
In the data science workflow wasteful situations can occur. Monitor for jobs that are stuck or aren’t using any of the resources allocated. For instance, perhaps a data scientist forgot to shut down a Jupyter Notebook. With live server visibility you can stop waste at the time it occurs.
2. Data-driven utilization insights
Operations teams can use raw data to analyze the overall machine learning workflow by user, job, container etc. Once the data is being collected, you can dive deeper into all the jobs that are running in the platform and extract insights. For example you can build a report on how many models are used in the cloud.
3. Define the key questions for your use case
Just like any data analysis, you need to define what kind of information is important for you, and stakeholders to understand. You can see if users are not utilizing all their hardware resources, or identify patterns of workloads that underperform, and adjust your strategy accordingly.
How to Plan Ahead? (how to use data driven ML infrastructure and capacity planning)
What is the future of machine learning infrastructure?
How can I integrate an MLOps infrastructure quickly?
For example, you can simply load data, set up preprocessing using Spark in an on prem Hadoop cluster, run model training using GPUs on prem with cloud bursting enabled. We also support distributed training with PyTorch and TensorFlow, so this could also be great and useful. For example you can do the model training on-premise, a canary deployment in the cloud and do some A/B testing. | https://cnvrg.io/building-scalable-machine-learning-infrastructure/ | CC-MAIN-2021-43 | refinedweb | 2,060 | 53.61 |
Summary
Calculates the Transformed Soil Adjusted Vegetation Index (TSAVI) from a multiband raster object and returns a raster object with the index values.
Discussion
The Transformed Soil Adjusted Vegetation Index (TSAVI) method is a vegetation index that minimizes soil brightness influences by assuming the soil line has an arbitrary slope and intercept.
TSAVI = (s *(NIR - s * Red - a)) / (a * NIR + Red - a * s + X * (1 + s2))
For information about other multiband raster indexes, see the Band Arithmetic raster function.
The referenced raster dataset for the raster object is temporary. To make it permanent, you can call the raster object's save method.
Syntax
TSAVI (raster, {nir_band_id}, {red_band_id}, {s}, {a}, {X})
Code sample
Calculates the Transformed Soil Adjusted Vegetation Index for a Landsat 8 image.
import arcpy TSAVI_raster = arcpy.sa.TSAVI("Landsat8.tif",5,4,0.33,0.5,1.5) | https://pro.arcgis.com/en/pro-app/latest/arcpy/spatial-analyst/tsavi.htm | CC-MAIN-2021-49 | refinedweb | 140 | 53.1 |
03 August 2011 05:48 [Source: ICIS news]
By ?xml:namespace>
Offers from two Chinese producers advanced to $1,700-1,720/tonne FOB (free on board)
PET offers in Asian on Wednesday were up $50/tonne from the start of the week, while some sellers refrained from making offers given the ongoing instability in raw material costs. Market sources expect one PET producer will likely resume offers this afternoon at $1,720/tonne FOB NE (northeast) Asia.
Prices PET feedstocks paraxylene (PX), purified terephthalic acid (PTA) and monoethylene glycol (MEG) continued to rise.
Asian MEG prices advanced to $1,280-1,285/tonne CFR China on Tuesday from Friday’s close of $1,225-1,235/tonne CFR China. Glycol prices were spurred by supply concerns after a weekend blaze at a propylene pipeline at Taiwanese Formosa group’s petrochemical complex in Mailiao.
Co-feedstock PTA, meanwhile, jumped to $1,245-1,275/tonne CFR China by Tuesday from Friday’s close of $1,190-1,222/tonne CFR China, following a surge in PX prices on Monday.
On Friday, the August PX Asian Contract Price (ACP) was settled at $1,540/tonne CFR Asia, up by $140/tonne from the settlement reached for July.
PET makers such as
Selling ideas for PET were heard at $1,670-1,700/tonne FOB or higher on Wednesday. One producer may resume offer on Wednesday at $1,720/tonne FOB NE Asia.
“It is a difficult time to make either selling or buying decisions. We are not sure by how much we should increase prices [in order to cover costs],” said a northeast Asian PET producer.
The strong uptrend in raw material prices spurred successive price hikes by Asian PET producers in the past two weeks. But the speed of the raw material price increase presented further difficulty for PET producers who hesitated to raise prices by the same extent.
“Our feedstock costs have gone up to about $1,550/tonne based on spot PTA and MEG prices,” said a second northeast Asian PET maker.
Asian PET producers export within the region as well as to other markets worldwide, including the North and South Americas, Europe, the Commonwealth of Independent States (CIS), the Middle East,
Buyers in some of the key markets have been reluctant to accept the price increases, said several producers surveyed this week. Producers have declined bids below $1,660-1,680/tonne FOB and are concluding deals in increasingly smaller volumes, they said.
“Feedstock is a real headache for us. But bids are not going up as quickly and it’s becoming more difficult to sell our product,” the first northeast Asian producer said.
Other major Asian PET makers include
“Our customers understand that our costs are rising but they are also worried about committing to larger volumes at today’s high prices,” said the second PET producer. “We have to closely monitor feedstock prices every day. There is too much uncertainty ahead.”
( | http://www.icis.com/Articles/2011/08/03/9481993/asia-pet-offers-jump-50-70tonne-on-soaring-feedstock-costs.html | CC-MAIN-2014-52 | refinedweb | 496 | 59.13 |
uuid5
Inserts a UUID v5 into the target field.
There are two ways to use this function: with or without a namespace. Given this schema:
{ "type": "record", "name": "events", "fields": [ { "name": "input", "type": "string" }, { "name": "id_ns", "type": "string" } ] }
and a record such as:
{ "input" : "john smith", "id_ns": "02b317d3-7fec-421a-89c5-3ad0eb83c79e" }
There are two options for using this function:
uuid5(/input)
uuid5(/input, /id_ns)
The first option will generate a simple UUID v5 that does not use a namespace in the generation process. The second will take the value of the supplied record path and use it as the namespace.
Please note that the namespace must always be a valid UUID string. An empty string, another data type, etc. will result in an error. This is by design because the most common use case for UUID v5 is to uniquely identify records across data sets. | https://docs.cloudera.com/cfm/2.1.1/nifi-record-guide/topics/nifi-uuid5.html | CC-MAIN-2021-31 | refinedweb | 146 | 79.09 |
Reading an Excel Spreadsheet into an SQL database using ODBC
This is my first time to post...
I downloaded for Windows the latest version of QT today:
Qt Creator 3.1.1 (opensource)
Based on Qt 5.2.1 (MSVC 2010, 32 bit)
I have a windows application where I want to read an Excel spreadsheet into a SQL database (or create a database from a spreadsheet) using ODBC. However I am getting an error. I was wondering if anyone here has been around the block on this and can help me out. Here are my .pro file entries for accessing the ODBC plugin:
@
QT += core gui
sql
QTPLUGIN += gjpeg
qgif
qkrcodecs
qsqlodbc
@
Here is my source code for the window I am trying to create for this:
@
#include "spreadsheettodatabase.h"
#include "ui_spreadsheettodatabase.h"
#include <QApplication>
#include <QtPlugin>
#define QT_STATICPLUGIN 1
#define QT_DEBUG_PLUGINS 1
SpreadsheetToDatabase::SpreadsheetToDatabase(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::SpreadsheetToDatabase)
{
ui->setupUi(this); QString pathString = "c:\\Qt\\Tools\\QtCreator\\bin\\SpreadsheetToDatabase\\CCS_Fault_CodesY.xls"; QSqlDatabase db = QSqlDatabase::addDatabase("QODBC"); db.setDatabaseName("DRIVER={Microsoft Excel Driver(*.xls)};DBQ=" + QString("CCS_Fault_CodesY.xls")); if (db.open ()) { QSqlQuery query; query.exec ("select Component, Condition from [CCS_Fault_Codes]"); while (query.next ()) { QString ComponentStr = query.value(0).toString (); QString ConditionStr = query.value(1).toString (); QMessageBox::critical (0, ComponentStr, ConditionStr); } } else { QMessageBox::critical (0, QObject :: tr ("Database Error"), db.lastError().text()); return; }
@
My db.open() statement fails, and here is the error I receive:
[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified QODBC3: Unable to connect
Any ideas on what I am doing wrong??? I have tried using a path to the exact location of the spreadsheet, and I still got this same error, see QString define: pathString ….
[edit added missing coding tags @ SGaist]
I forgot to mention that I do see the qsqlodbc.dll file in the folder with all of the other plugin files, just where QT places them by default.
Also, I was following a supposed working example of how to do this, from this forum entry:
One last comment, in my main.cpp file, I have these lines of code:
#include <QApplication>
#include <QtPlugin>
#define QT_STATICPLUGIN 1
#define QT_DEBUG_PLUGINS 1
Q_IMPORT_PLUGIN(qjpeg)
Q_IMPORT_PLUGIN(qgif)
Q_IMPORT_PLUGIN(qkrcodecs)
Q_IMPORT_PLUGIN(qsqlodbc)
However, whenever I do not comment out the Q_IMPORT_PLUGIN statements, I get a compile error for each of these plugins similar to this one:
Undefined reference to qt_static_plugin_qjpet()
I think not being able to execute these statements and having to comment them out is what is causing my db.open() failure.
Hi just guessing but try adding an explicit path to you Excel file, i.e. instead of
@
db.setDatabaseName(“DRIVER={Microsoft Excel Driver(.xls)};DBQ=” + QString(“CCS_Fault_CodesY.xls”)); @
try
@db.setDatabaseName(“DRIVER={Microsoft Excel Driver(.xls)};DBQ=” + pathString); @
hskoglund,
Thank you for replying. As I mentioned I have already tried using an explicit path to my spreadsheet file, and got the same error. There are really two things going on here: 1) The excel spreadsheet file cannot be found, 2) The ODBC database file cannot be opened because the ODBC3 driver QT is looking for cannot be found.
Thanks again for commenting!!
jbomkamp
Hi, a bit tricky this one, but I managed to get a small Qt widget test app going:
First I created a simple Excel 2003 book1.xls file, having just an "A" column with some text in about 10 rows. I placed the file in C:\Temp.
Then an empty Widget app, inserted sql in the .pro file, and then changed mainwindow.cpp into this:
@
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "QSqlDatabase"
#include "QSqlQuery"
#include "QDebug"
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::MainWindow)
{
ui->setupUi(this);
QSqlDatabase db = QSqlDatabase::addDatabase("QODBC"); db.setDatabaseName("Driver={Microsoft Excel Driver (*.xls)};dbq=C:\\Temp\\Book1.xls"); if (db.open()) { QSqlQuery query; query.exec ("select * from [sheet1$]"); while (query.next ()) { qDebug() << query.value(0).toString(); } }
}
MainWindow::~MainWindow()
{
delete ui;
}
@
The rows of text are seen :-)
Note about the C:\Temp\ directory in .setDatabaseName: you can skip it and just type '...dbq=Book1.xls", then if you launch the app from QtCreator it will look for the book1.xls file in the main build directory (where release and debug are subdirectories), and if you launch the app from a CMD window, it will look for the book1.xls file in the same directory as the .exe file.
(EDIT: forget to mention, the text in the first row isn't shown, because ODBC thinks it's a fieldname.)
hskoglund,
Thank you for your effort reply. I noticed the only difference about the things that worked for you and the code that I have written and attempted to debug would be the case of the text of "Driver" and "dbq" in your setDatabaseName method. I tried making that change and my code received the same error.
Yesterday, I took known working sqlite code created for Linux and attempted to get it to run from this Windows QT version, and it also failed to find the sqlite driver, and could not work. My conclusion after all of this is that the version of QT that I downloaded is faulty, either in the compiler itself, or the plugins. I have read notes from people saying that they had downloaded versions of QT in which the plugins that were included were built with a different version of the compiler than what they downloaded, and the plugins didn't work. That is most likely what is the problem, I surmise. Because this effort has taken so long for me already, yesterday we decided upon a different solution than this one here for our database needs at this moment. I already implemented that solution. So, for now this attempt is going to the scrap heap. Thanks again for your help!!
Jbomkamp | https://forum.qt.io/topic/42450/reading-an-excel-spreadsheet-into-an-sql-database-using-odbc | CC-MAIN-2018-13 | refinedweb | 970 | 57.67 |
This will be a little story about a single piece of code: Why it got written, how it got used, what happened after the initial usage and where it is today. At the end, you’ll get the full source code and a brainteaser.
Prelude
In the year 2004, a long-term customer asked us to develop a little data charting software for the web. The task wasn’t very complicated, but there were two hidden challenges. The first challenge was the data source itself that could have outages for various reasons that each needed to be addressed differently. The second, more subtle challenge was a “message from the operator” that should be displayed, but without the comments. Failing to meet any of these challenges would put the project at risk of usability.
On a side note, when the project was finished, the greatest risk to its usability wasn’t these challenges, but some assumptions made by the developers that turned out wrong, without proper test coverage or documentation. But that’s fodder for another blog post.
Why it got written
When addressing the functionality of the “message from the operator”, we developed it in a test-first manner, as the specification was quite clear: Everything after the first comment sign (“#”) must never be displayed on the web. Soon, we discovered a serious flaw (let’s call it a bug) in the java.util.StringTokenizer class we used to break down the string. Whenever the comment sign was the first character of the string, it just got ignored. This behaviour is still present with today’s JDK and will not be fixed, as StringTokenizer is a legacy class now:
public class LeadingDelimiterBug { @Test public void ignoresLeadingDelimiter() throws Exception { StringTokenizer tokenizer = new StringTokenizer("#thisShouldn'tBeShown", "#"); assertEquals("", tokenizer.nextToken()); assertEquals("thisShouldn'tBeShown", tokenizer.nextToken()); }
String.split() wasn’t available in 2004, so we had to develop our own string partitioning functionality. It was my task and I named the class StringChunker. The class was born on a monday, 21.06.2004, coincidentally also the longest day of the year. I remember coding it until late in the night.
How it got used
The StringChunker class was developed test-first and suffered from feature creep early on. As it was planned as an utility class, I didn’t focus on the requirements at hand, but thought of “possibly needed functionality” and implemented those, too. The class soon had 9 member variables and over 250 lines of code. You could toggle between four different tokenizing modes like “ignore leading/trailing delimiters”, which ironically is exactly what the StringTokenizer does. The code was secured with tests that covered assumed use cases.
Despite the swiss army knife of string tokenizing that I created, the class only served to pick the comment apart from the payload of the operator’s message. If the special case of a leading comment sign would have been declared impossible (or ruled out beforehands), the StringTokenizer would have done the job just as good. Today, there is String.split() that handles the job decently:
public class LeadingDelimiterBug { @Test public void ignoresLeadingDelimiterWithSplit() throws Exception { String[] tokens = "#thisShouldn'tBeShown".split("\\#"); assertEquals("", tokens[0]); assertEquals("thisShouldn'tBeShown", tokens[1]); }
But the StringChunker in summer 2004 was the shiny new utility class for the job. It got included in the project and known to the developers.
What happened afterwards
The StringChunker was a success in the project and soon was adopted to virtually every other project in our company. Several bugs and quirks were found (despite the unit tests, there were edge cases) and fixed. This lead to a multitude of slightly different implementations over the years. If you want to know what version of the class you’re using, you need to look at the test that covers all bugfixes (or lacks them).
Whenever one of our developers had to chop a string, he instantly imported the StringChunker to the project. Not long after, the class got promoted to be part of our base library of classes that serves as the foundation for every new project. Now the StringChunker was available like every class of java.lang or java.util and got used like a commodity.
Where it is today
When you compare the initial implementation with today’s code, there really isn’t much difference. Some methods got rewritten to conform to our recent taste of style, but the core of the class still is a hopeless mess of 25-lines-methods and a mind-boggling amount of member variables and conditional statements. I’m still a little bit ashamed to be the creator of such a beast, even if it’s not the worst code I’ve ever written (or will write).
The test coverage of the class never reached 100%, it’s at 95% with some lines lacking a test. This will be the topic of the challenge at the end of this blog post. The test code never got enough love to be readable. It’s only a wall of text in its current state. We can do better than that now.
The class is so ubiquitous in our code base that more than a dozen other foundation classes rely on it. If you would delete the class in a project of ours, it would definitely fall apart somewhere crucial. This will be the most important point in the conclusion.
The source
If you want to have a look at the complete source of the StringChunker, you can download the zip archive containing the compileable sources from our download server. Please bear in mind that we give out the code for educational purpose only. You are free to adapt the work to suit your needs, though.
An open question
When you look at the test coverage, you’ll notice that some lines aren’t tested. We have an internal challenge for several years now if somebody is able to come up with a test that covers these lines. It might be possible that these lines aren’t logically reachable and should be deleted. Or our test harness still has holes. The really annoying aspect about this is that we cannot just delete the lines and see what happens. Most of our ancient projects lack extensive test coverages, and even if they are tested, there could be a critical test missing, allowing the project to pass the tests but fail in production. It’s just too dangerous a risk to take.
So the challenge to you is: Can you provide test cases that cover the remaining lines, thus pushing the test coverage to 100%? I’m very eager to see your solution.
Conclusion
The StringChunker class is a very important class in our toolset. It’s versatile and well tried. But it suffered from feature creep from the very first implementation. There are too many different operation modes combined in one class, violating the Single Responsibility Principle and agglomerating complexity. The test coverage isn’t perfect, leaving little but enough room for speculative functionality (behaviour you might employ, presumably unaware of the fact that it isn’t guaranteed by tests). And while the StringChunker code got micro-refactored (and improved) several times over the years, the test code has a bad case of code rot, leaving it in a state of paralysis. Before the production code is changed in any manner, the test code needs to be overhauled to be readable again.
If I should weight the advantages provided by this class to the disadvantages and risks, I would consider the StringChunker a legacy risk. It might even be a technical debt, now that String.split() is available. The major pain point is that this class is used way too often given its poor code quality. With every new usage, the direct or assumed cost of code change rises. And the code has to change to comply to our current quality standards.
Finale
This was my confession about “old code” in a blog post series that was started by Volker with his blog post “Old Code”. As a personal statement: I’m embarrassed. I can vividly remember the feeling of satisfaction when this beast was completed. I’m guilty of promoting the code as a solution to every use case that could easily be implemented with a StringTokenizer or a String.split(), just because it is available, too and it contains my genius. After reviewing the code, I hope the bigger genius lies within avoiding the class in the future. | https://schneide.blog/tag/old-code/ | CC-MAIN-2019-30 | refinedweb | 1,418 | 63.09 |
Using:(())
using Godot; using System; public class Node : Godot.Node {=(108, 108), relative=(26, 1), speed=(164.152496, 159.119843), pressure=(0), tilt=(0, 0) InputEventMouseButton : button_index=BUTTON_LEFT, pressed=true, position=(108, 107), button_mask=1, doubleclick=false InputEventMouseButton : button_index=BUTTON_LEFT, pressed=false, position=(108, 107), button_mask=0, doubleclick=false S F Alt InputEventMouseMotion : button_mask=0, position=(108, 107), relative=(0, -1), speed=(164.152496, 159.119843), pressure=(0), tilt=)
public override void _Input(InputEvent inputEvent) { if (inputEvent is InputEventMouseButton mouseEvent) { GD.Print("mouse button event at ", mouseEvent<<
Capturing actions!"); } }
Keyboard events.keycode == KEY_T: print("T was pressed")
public override void _Input(InputEvent inputEvent) { if (inputEvent is InputEventKey keyEvent && keyEvent.Pressed) { if ((KeyList)keyEvent.Keycode == KeyList.T) { GD.Print("T was pressed"); } } }
Tip
See @GlobalScope_KeyList for a list of keycode constants.
Warning
Due to keyboard ghosting, not all key inputs may be registered at a given time if you press too many keys at once. Due to their location on the keyboard, certain keys are more prone to ghosting than others. Some keyboards feature antighosting at a hardware level, but this feature is generally not present on low-end keyboards and laptop keyboards.
As a result, it's recommended to use a default keyboard layout that is designed to work well on a keyboard without antighosting. See this Gamedev Stack Exchange question for more information.
Keyboard modifiers.keycode == KEY_T: if event.shift: print("Shift+T was pressed") else: print("T was pressed")
public override void _Input(InputEvent inputEvent) { if (inputEvent is InputEventKey keyEvent && keyEvent.Pressed) { switch ((KeyList)keyEvent.Keycode) { case KeyList.T: GD.Print(keyEvent.Shift ? "Shift+T was pressed" : "T was pressed"); break; } } }
Tip
See @GlobalScope_KeyList for a list of keycode2D node:
extends Node var dragging = false var click_radius = 32 # Size of the sprite. func _input(event): if event is InputEventMouseButton and event.button_index == BUTTON_LEFT: if (event.position - $Sprite2D.position).length() < click_radius: # Start dragging if the click is on the sprite. if not dragging and event.pressed: dragging = true # Stop dragging if the button is released. if dragging and not event.pressed: dragging = false if event is InputEventMouseMotion and dragging: # While dragging, move the sprite with the mouse. $Sprite2D.position = event.position
using Godot; using System; public class Node2D : Godot.Node2D { private bool dragging = false; private int clickRadius = 32; // Size of the sprite. public override void _Input(InputEvent inputEvent) { Sprite2D sprite = GetNodeOrNull<Sprite2D>("Sprite2D"); = true; } } // Stop dragging if the button is released. if (dragging && !mouseEvent.Pressed) { dragging = false; } } else { if (inputEvent is InputEventMouseMotion motionEvent && dragging) { // While dragging, move the sprite with the mouse. sprite.Position = motionEvent. | https://docs.godotengine.org/en/latest/tutorials/inputs/input_examples.html | CC-MAIN-2022-40 | refinedweb | 426 | 58.89 |
arcgis.widgets module

MapView
class arcgis.widgets.MapView(**kwargs)

    Bases: ipywidgets.widgets.domwidget.DOMWidget
Mapping widget for Jupyter Notebook and JupyterLab.
Note

If the Jupyter Notebook server is running over HTTP, you need to configure your portal/organization to allow your host and port; otherwise you will run into CORS issues.
This can be accomplished by signing into your portal/organization in a browser, then navigating to:
Organization > Settings > Security > Allow origins > Add > (replace with the host/port you are running on)
add_layer(item, options=None)
Adds the specified layer or item to the map widget.
Warning

Calling MapView.add_layer() on an arcgis.raster.Raster instance has the following limitations:

- Local raster overlays do not persist beyond the notebook session on published web maps/web scenes – you would need to separately publish these local rasters.

- The entire raster image data is placed on the MapView’s canvas with no performance optimizations. This means no pyramids, no dynamic downsampling, etc. Please be mindful of the size of the local raster and your computer’s hardware limitations.

- Pixel values and projections are not guaranteed to be accurate, especially when the local raster’s Spatial Reference doesn’t reproject accurately to Web Mercator (what the MapView widget uses).
    # USAGE EXAMPLE: Add a feature layer with smart mapping renderer and
    # a definition expression to limit the features drawn.
    map1 = gis.map("Seattle, WA")
    map1.add_layer(wa_streets_feature_layer,
                   {'renderer': 'ClassedSizeRenderer',
                    'field_name': 'DistMiles',
                    'opacity': 0.75})
- property basemap

What basemap you would like to apply to the widget ('topo', 'national-geographic', etc.). See basemaps for a full list.

    # Usage example: Set the widget basemap equal to an item
    from arcgis.mapping import WebMap
    widget = gis.map()
    # Use basemap from another item as your own
    widget.basemap = webmap
    widget.basemap = tiled_map_service_item
    widget.basemap = image_layer_item
    widget.basemap = webmap2.basemap
    widget.basemap = 'national-geographic'
basemaps = ['dark-gray', 'dark-gray-vector', 'gray', 'gray-vector', 'hybrid', 'national-geographic', 'oceans', 'osm', 'satellite', 'streets', 'streets-navigation-vector', 'streets-night-vector', 'streets-relief-vector', 'streets-vector', 'terrain', 'topo', 'topo-vector']

A list of possible basemaps to set .basemap with.
clear_graphics()
Clear the graphics drawn on the map widget. Graphics are shapes drawn using the ‘draw()’ method.
display_message(msg)

Displays a message on the upper-right corner of the map widget. You can only send one message at a time; multiple messages don't show.
draw(shape, popup=None, symbol=None, attributes=None)

Draws a shape on the map widget. You can draw shapes from known geometries, coordinate pairs, or FeatureSet objects.
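As a hypothetical sketch of this call: the Esri-JSON-style symbol dictionary, the attribute names, and the popup keys below are illustrative assumptions rather than values taken from this reference, and the widget call is commented out because it requires a live map.

```python
# Point geometry and a simple marker symbol in Esri JSON style.
# All literal values here are illustrative assumptions.
point = {"x": -118.24, "y": 34.05, "spatialReference": {"wkid": 4326}}
symbol = {
    "type": "esriSMS",          # simple marker symbol
    "style": "esriSMSCircle",
    "color": [0, 112, 255, 255],
    "size": 10,
}
attributes = {"name": "Los Angeles"}

# Requires a live widget, e.g. m = gis.map("Los Angeles, CA"):
# m.draw(point, symbol=symbol, attributes=attributes,
#        popup={"title": "City", "content": attributes["name"]})
```

Consult the ArcGIS REST symbol/popup specification for the exact dictionary shapes your version expects.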
embed(output_in_cell=True, set_as_preview=True)

Embeds the current state of the map into the underlying notebook as an interactive HTML/JS/CSS element. This element will always display this 'snapshot' state of the map, regardless of any Python code run afterwards.
In all notebook outputs, each embedded HTML element will contain the entire map state and all relevant HTML wrapped in an <iframe> element. This means that the data for the embedded HTML element lives inside the notebook file itself, allowing for easy sharing of notebooks and generated HTML previews of notebooks.
Note
When this function is called with set_as_preview = True, the embedded HTML preview element will overwrite the static image preview from any previous MapView.take_screenshot(set_as_preview=True) call
Note
Any embedded maps must only reference publicly available data. The embedded map must also have access to the ArcGIS API for JavaScript CDN (js.arcgis.com) to load the necessary JavaScript components on the page.
- property end_time

datetime.datetime property. If time_mode == "time-window", represents the upper bound 'thumb' of the time slider. For all other time_mode values, not used.
export_to_html(path_to_file, title='Exported ArcGIS Map Widget', credentials_prompt=False)

Takes the current state of the map widget and exports it to a standalone HTML file that can be viewed in any web browser.

By default, only publicly viewable layers will be visible in any exported html map. Specify credentials_prompt=True to have a user be prompted for their credentials when opening the HTML page to view private content.
Warning
Follow best security practices when sharing any HTML page that prompts a user for a password.
Note
You cannot successfully authenticate if you open the HTML page in a browser locally (e.g., via a file:// path). The credentials prompt will only properly function if the page is served over an HTTP/HTTPS server.
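As a sketch of the call, assuming a live widget named `m` (the widget call itself is commented out since it requires a live map; the path handling is plain standard library):

```python
import os
import tempfile

# Write the standalone page to a writable location; the filename is arbitrary.
out_path = os.path.join(tempfile.gettempdir(), "exported_map.html")

# Requires a live widget, e.g. m = gis.map("Italy"):
# m.export_to_html(out_path,
#                  title="Italy Streets",
#                  credentials_prompt=False)  # True would prompt viewers to sign in
```

Remember to serve the resulting file over HTTP/HTTPS if credentials_prompt=True, per the note above.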
- property heading

For 3D mode, the compass heading of the camera in degrees. Heading is zero when north is the top of the screen. It increases as the view rotates clockwise. The angles are always normalized between 0 and 360 degrees. Note that you can NOT set heading in 2D mode; 2D mode uses the 'rotation' property.
hide_mode_switch

When set to 'True' will hide the 2D/3D switch button from the widget. Note that once the button is hidden, it cannot be made visible again: you have to create a new MapView instance.

jupyter_target

A readonly string that is either 'lab' or 'notebook': represents if this widget is drawn in a Jupyter Notebook environment or a JupyterLab environment.
- property layers

A list of the JSON representation of layers added to the map widget using the add_layer() method.
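A small sketch of inspecting those layer descriptions. The 'title' and 'url' keys are assumptions about the JSON shape; inspect your own widget's layers property to confirm what it contains.

```python
# Summarize the JSON layer descriptions exposed by the layers property.
def summarize_layers(layer_dicts):
    return ["{}: {}".format(d.get("title", "untitled"), d.get("url", "n/a"))
            for d in layer_dicts]

# With a live widget: summarize_layers(m.layers)
sample = [{"title": "Italy streets", "url": "https://example.com/FeatureServer/0"}]
print(summarize_layers(sample))
```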
legend

If set to True, will display a legend in the widget that will describe all layers added to the map. If set to False, will hide the legend. Default: False.
- property local_raster_file_format

String getter/setter. When calling map.add_layer(arcgis.raster.Raster()) for a local raster file, an intermediate image file must be written to disk in order to successfully display on the map. This file format can be one of the following:
mode

The string that specifies whether the map displays in '2D' mode (MapView) or '3D' mode (SceneView). Possible values: '2D', '3D'.

Note that you can also toggle between '2D' and '3D' mode by pressing the icon in the widget UI.
on_click(callback, remove=False)

Register a callback to execute when the map is clicked. The callback will be called with one argument, the clicked widget instance.
on_draw_end(callback, remove=False)

Register a callback to execute when something is drawn. The callback will be called with two arguments: the clicked widget instance and the geometry drawn.
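A sketch of a draw-end handler. The 'paths' layout used below is an assumption based on the Esri JSON polyline format, and the registration line is commented out because it requires a live widget.

```python
# The callback receives the widget and the drawn geometry.
def report_draw(widget, geometry):
    paths = geometry.get("paths", [])
    n_vertices = sum(len(p) for p in paths)
    summary = "drew {} path(s) with {} vertices".format(len(paths), n_vertices)
    print(summary)
    return summary

# Registration requires a live widget:
# m.on_draw_end(report_draw)

sample_geometry = {"paths": [[[0, 0], [1, 1], [2, 0]]]}
report_draw(None, sample_geometry)   # prints: drew 1 path(s) with 3 vertices
```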
print_service_url

Note

This property is obsolete as of >v1.6 of the Python API, since the underlying JavaScript code run during a take_screenshot() Python call has been changed to MapView.takeScreenshot() instead of calling a Print Service URL. Any value you set to this property will be ignored (2D screenshots will still be taken successfully).
remove_layers(layers=None)

Removes the layers added to the map widget. You can get the list of layers added to the widget by querying the 'layers' property.

- Returns
    True if the layer is successfully removed. Else, False.
- property rotation

For 2D mode, the clockwise rotation of due north in relation to the top of the view in degrees. Note that you can NOT set rotation in 3D mode; 3D mode uses the 'heading' property.
save(item_properties, mode=None, thumbnail=None, metadata=None, owner=None, folder=None)
Save the map widget object into a new web map Item or a new web scene item in your GIS.
Note
If you started out with a fresh map widget object, use this method to save it as a the webmap/webscene item in your GIS. If you started with a map widget object from an existing webmap/webscene object, calling this method will create a new item with your changes. If you want to update the existing item with your changes, call the update() method instead.
Note
Saving as a WebScene item only works in a Jupyter environment: the map must be visually displayed in the notebook before calling this method.
Key:Value Dictionary Options for Argument item_properties
URL 1: //02r3000000ms000000 :return:
Item object corresponding to the new web map Item created.
USAGE EXAMPLE: Save map widget as a new web map item in GIS map1 = gis.map("Italy") map1.add_layer(Italy_streets_item) map1.basemap = 'dark-gray' italy_streets_map = map1.save({'title':'Italy streets', 'snippet':'Arterial road network of Italy', 'tags':'streets, network, roads'})
- property
scale¶
The map scale at the center of the view. If set to X, the scale of the map would be 1:X.
For continuous values to apply and not get “snapped” to the closest level of detail, set mapview.snap_to_zoom = False.
- # Usage example: Sets the scale to 1:24000
map = gis.map() map.scale = 24000
- classmethod
set_js_cdn(js_cdn)¶
Call this function before the creation of any MapView object, and each instantiated object will use the specified js_cdn parameter as the ArcGIS API for JavaScript CDN URL instead of the default This functionality is necessary in disconnected environments if the portal you are connecting to doesn’t ship with the minimum necessary JavaScript API version.
You may not need to call this function to view the widget in disconnected environments: if your computer cannot reach js.arcgis.com, and you have a GIS() connection to a portal, the widget will automatically attempt to use that portal’s JS API that it ships with.
set_time_extent(start_time, end_time, interval=1, unit='milliseconds')¶
When time_slider = True, the time extent to display on the time slider.
- property
snap_to_zoom¶
When True, snap to the next level of detail when zooming in or out. When False, the zoom is continous. Only applies in 2D mode
- property
start_time¶
datetime.datetime property. If time_mode == “time-window”, represents the lower bound ‘thumb’ of the time slider. For all other time_mode values, represents the single thumb on the time slider.
Synchronizes the navigation from this MapView to another MapView instance so panning/zooming/navigating in one will update the other.
# USAGE EXAMPLE: link the navigation of two maps together from ipywidgets import HBox map1 = gis.map("Chicago, IL") map1.basemap = "gray" map2 = gis.map("Chicago, IL") map2.basemap = "dark-gray" map1.sync_navigation(map2) HBox([map1, map2])
tab_mode¶
This string property specifies the 'default' behavior of toggling a new window in a JupyterLab environment, whether that is called by pressing the icon in the widget UI, or by calling toggle_window_view() function without arguments.
Note that after a widget is ‘seperated’ from the notebook, you can drag it, split it, put it in a new tab, etc. See the JupyterLab guide pages for more information.
take_screenshot(output_in_cell=True, set_as_preview=True, file_path='')¶
Takes a screenshot of the current widget view. Only works in a Jupyter Notebook environment.
In all notebook outputs, each image will be encoded to a base64 data URI and wrapped in an HTML <img> tag, like <img src=”base64Str”>. This means that the data for the image lives inside the notebook file itself, allowing for easy sharing of notebooks and generated HTML previews of notebooks.
Note
This function acts asyncronously, meaning that the Python function will return right away, with the notebook outputs/files being written after an indeterminate amount of time. Avoid calling this function multiple times in a row if the asyncronous portion of the function hasn’t finished yet.
Note
When this function is called with set_as_preview = True, the static image preview will overwrite the embedded HTML element preview from any previous MapView.embed_html(set_as_preview=True) call
- property
tilt¶
For 3D mode, the tilt of the camera in degrees with respect to the surface as projected down from the camera position. Tilt is zero when looking straight down at the surface and 90 degrees when the camera is looking parallel to the surface. Note that you can NOT set tilt in 2D mode.
time_mode¶
String used for defining if the temporal data will be displayed cumulatively up to a point in time, a single instant in time, or within a time range.
Possible values: “instant”, “time-window”, “cumulative-from-start”, “cumulative-from-end”. Default: “time-window”
See for more info.
time_slider¶
If set to True, will display a time slider in the widget that will allow you to visualize temporal data for an applicable layer added to the map. Default: False.
toggle_window_view(title='ArcGIS Map', tab_mode=None)¶
In a JupyterLab environment, calling this function will separate the drawn map widget to a new window next to the open notebook, allowing you to move the widget it, split it, put it in a new tab, etc. If the widget is already seperated in a new window, calling this function will restore the widget to the notebook where it originated from. See the JupyterLab guide pages for more information.
Note that this functionality can also be achieved by pressing the icon in the widget UI.
Unsynchronizes connections made to other MapView instances made via my_mapview.sync_navigation(other_mapview).
update(mode=None, item_properties=None, thumbnail=None, metadata=None)¶
Updates the WebMap/Web Scene item that was used to create the MapWidget object. In addition, you can update other item properties, thumbnail and metadata.
Note
If you started out a MapView object from an existing webmap/webscene item, use this method to update the webmap/webscene item in your with your changes. If you started out with a fresh MapView object (without a webmap/webscene item), calling this method will raise a RuntimeError exception. If you want to save the map widget into a new item, call the save() method instead. For item_properties, pass in arguments for only the properties you want to be updated. All other properties will be untouched. For example, if you want to update only the item’s description, then only provide the description argument in item_properties.
Note
Saving as a WebScene item only works in a Jupyter environment: the map must be visually displayed in the notebook before calling this method.
Key:Value Dictionary Options for Argument item_properties
URL 1: //02r3000000ms000000 :return:
A boolean indicating success (True) or failure (False).
USAGE EXAMPLE: Interactively add a new layer and change the basemap of an existing web map. italy_streets_item = gis.content.search("Italy streets", "Web Map")[0] map1 = MapView(item = italy_streets_item) map1.add_layer(Italy_streets2) map1.basemap = 'dark-gray-vector' map1.update(thumbnail = './new_webmap.png')
- property
zoom¶
What level of zoom you want to apply: the higher the number, the more zoomed in you are. | https://developers.arcgis.com/python/api-reference/1.8.3/arcgis.widgets.html | CC-MAIN-2022-21 | refinedweb | 2,320 | 55.54 |
Welcome to this tutorial on tkinter GUI widgets. In this article, I will introduce you to all the Tkinter Widgets in brief and provide you with some simple code snippets to create the widgets. Towards the end of this article, you will be able to use all the code and build a mini-service using the GUI widgets.
Creating the Main Tkinter window
This step is necessary for any Tkinter GUI widget irrespective of its characteristics.
from tkinter import * root = Tk() #optional root.configure(background='yellow') root.geometry('600x800) #place any of your code lines here #end of code root.mainloop()
The code snippet shown above is a bare-bone structure which has to be used to define the tkinter GUI box before placing any widget on the panel.
In the code snippet, the
tkinter library has been imported and the
Tk() constructor has been instantiated using the
root object. Towards the end of the code, I have called this entire program using
root.mainloop().
The
root.configure() is used to add additional properties to your mainframe. In this example, I have used it to add the property of
background and the
root.geometry() ensures that the main frame is of the desired size specified. These are however optional to use.
Placing Tkinter GUI Widgets
Now that we have initialized the mainframe for Tkinter, we will have a look at the different widgets.
I will be introducing the most commonly used widgets which include a label, the button, a check button, an entry, a slider (which in Tkinter is called the scale), a list box, and a radio button.
Append the code snippets given below to the code for the main window.
1. Tkinter Label Widget
For the label widget here, we will define it using the
Label constructor itself. The label is going to go in the root main window and the text will say “Hey, welcome to this my GUI”.
Then we pack the label inside the window and we have provided an argument with pady to give us a little bit more space on the y-axis.
label=Label(root,text="Hey, welcome to this my GUI") label.pack(pady=10)
2. Tkinter Button Widget
The button will be placed on the same main window and is created using the Button() constructor. The text for the button will say “press button”. Notice that the text color is green. For that, we have assigned green to the foreground.
When the button is pressed we want to activate a function and we’re going to assign that function to the command argument. The name of the function is
button_trigger(). When the button is pressed, it’s going to activate this function and print a message that says “button pressed”.
We have packed the button into the main root window. So when we press this button it’s going to activate this function and it’s going to print the message in the console.
def button_trigerr(): print("Button Pressed") button = Button(root,text="press button", foreground="green", command=button_trigger) button.pack(pady=10)
3. Tkinter Check Button Widget
For the next example, we have the check button.
When we check this box or button it’s going to turn the background white, like turning on a light. Then if we uncheck it, it’s going to turn the background black like turning a light off. Let’s try it out.
def check_button_action(): print(check_button_var.get()) if check_button_var.get()==1: root.configure(background='white') else: root.configure(background='black') check_button_var = IntVar() check_button = tk.Checkbutton(root, text="on/off", variable=check_button_var, command= button_action) check_button.pack(pady=10)
So first, create the check button using
Checkbutton(). It’s going on the root main window. The text is “on/off”.
We have associated a variable with this check button and it is an Integer. The function that will be activated by the check button and is named button_action.
The check button has two default states which are 0 and 1 and those default states will be assigned to this variable here. This variable will keep track of the state of the check button and to get the state of the check button.
We just go ahead and reference the
variable.get(). If the state of the check button is 1, that is equivalent to the box being checked and we’re going to make the background of the window white.
If it is 0, we’re going to make the background of the root window black which gives us the effect of turning the light on or off. We have then packed this into the “main” frame with a pady of 10.
4. Tkinter Entry Widget
The entry widget allows us to type in text and transfers the text from the text box or entry to the console and displays the message on the console.
To create the entry widget, we’ve gone ahead and created a frame. To create the frame, we use Frame().
The frame is going to go in the main root window with a border width of 5 with a sunken effect. We reference the framed pack and that will pack the frame into the main window.
entry_frame = Frame(root, borderwidth=5, relief = SUNKEN) entry_frame.pack() text_box=Entry(entry_frame) text_box.pack() def get_entry_text(): print(text_box.get()) label.configure(text=text_box.get()) button = Button(entry_frame, text='get entry', command=get_entry_text) button.pack(pady=10)
We have then created our entry text box and the entry is going to go into the frame. We packed the entry into the frame. So the frame will go into the main window and the entry will go into the frame.
Then we go ahead and created a button that will transfer the text from the entry to the console. Now notice that our entry message is printed to the console, and also updated our label on the mainframe. To get the text we just use the get() method.
5. Tkinter Scale Widget
Next let’s go over the slider or scale widget here. So for this widget, let’s say that you have a restaurant bill and it’s $100 and you want to see how different tip amounts will affect the total bill.
We can use the slider for the tip and the entry box for the bill and then the label will show us the total bill. The label would show us the total bill.
slider_frame = Frame(root, borderwidth=5, relief=SUNKEN) slider_frame.pack(pady=10) def calculate_total_bill(value): print(value) if bill_amount_entry.get()!=' ': tip_percent=float(value) bill=float(bill_amount_entry.get()) tip_amount=tip_percent*bill text=f'(bill+tip_amount)' bill_with_tip.configure(text=text) slider = Scale(slider_frame, from_=0.00, to=1.0,orient=HORIZONTAL, length=400, tickinterval=0.1, resolution=0.01, command=calculate_total_bill) slider.pack()
Okay, so to create the scale we use Scale() and then here we put in all of our parameters or arguments for the entry text box. We created that above.
We have packed the slider,the entry text box and the label into our frame which will go inside the main window. We created the frame like we did in the last example.
For changes made by the slider, the
calculate_total_bill() will be activated. This function will basically take the text which is the bill amount from the entry box. Then it will take the tip percentage from the slider scale, apply the tip to the bill and then give us the total bill which will be displayed in the label.
6. Tkinter ListBox Widget
Next let’s go over the list box widget. So here we have our list box with five items. In this example, we’re just going to choose one of the items. Then we’re going to press the button and we want to transfer the text from the item to a label.
listbox_frame=Frame(root,borderwidth=5, relief=SUNKEN) listbox_frame.pack(pady=10) listbox=Listbox(listbox_frame) listbox.pack() listbox.insert(END,"one") for item in ["two","three","four","five"]: listbox.insert(END, item) def list_item_selected(): selection=listbox.curselection() if selection: print(listbox.get(selection[0])) list_box_label.configure(text=listbox.get(selection[0])) list_box_button = Button(listbox_frame,text='list item button', command=list_item_selected) list_box_button.pack()
To create the list box we use
Listbox(). We’re going to put the list box inside of a frame. After we have created our list box, we can go ahead and insert items into the list box.
If you’d like to insert several items, you can do it with a for Loop. Here we have created a button, when we press the button. It’s going to activate the
list_item_selected() created.
To access the selection, we reference the
listbox.curselection(). To make sure that we actually have something selected, we use
if selection: and if we have an item selected, we reference the listbox item and then we get the actual item.
The reason that we have used the square brackets with the zero is that the item is typically a single digit Tuple and this will give us just the digit. Then we want to go ahead and update our label with the item that we selected.
7. Tkinter RadioButton Widget
So for the last example, let’s go over radio buttons. Now depending on the radio button selected, we’re going to get an image displayed here. So here we have mountains, boating and camping.
I have created our three radio buttons. All of the radio buttons will be placed inside the main root window. For the text, we have gone ahead and assigned “mountains, boating and camping”.
All of the radio buttons are going to have a value associated with one variable and we have created that variable.
Label(root, text="choose icon") def radio_button_func(): print(rb_icon_var.get()) if(rb_icon_var.get())==1: radio_button_icon.configure(text='\u26F0') elif rb_icon_var_get()==2: radio_button_icon.configure(text='\u26F5') elif rb_icon_var_get()==3: radio_button_icon.configure(text='\u26FA') rb_icon_var=IntVar() Radiobutton(root,text="mountains",variable=rb_icon_var, value=1, command=radio_button_func).pack() Radiobutton(root,text="boating",variable=rb_icon_var, value=2, command=radio_button_func).pack() Radiobutton(root,text="camping",variable=rb_icon_var, value=3, command=radio_button_func).pack() radio_button_icon = Label(root, text=' ', font=("Helvetica",150)) radio_button_icon.pack()
In this case, since you can only click one radio button at a time, the value associated with either of the radio buttons, which we have assigned here 1, 2, or 3, will be assigned to the variable.
When a radio button is clicked, it is going to activate or call the “radio_button_func()”.
So if this first radio button is clicked for mountains, the value of 1 will be assigned to this variable and then we will get that value and test if it’s equal to 1.
And if it is equal to 1, we are going to use the Unicode text representation for mountains.
Conclusion
To quickly conclude we have gone across a few commonly used widgets and the usage of them are as follows:
- Label – Display text or messages
- Button – Used in toolbars, application windows, pop-up windows, and dialogue boxes
- Check button – Used to implement on-off selections.
- Entry Widget – Used to enter or display a single line of text
- Scale widget – Used instead of an Entry widget, when you want the user to input a bounded numerical value.
- List box – Used to display a list of alternatives.
- Radio button – Used as a way to offer many possible selections to the user but lets the user choose only one of them.
Do try out these various widgets and do let us know your favorite one in the comment section below!! | https://www.askpython.com/python/tkinter-gui-widgets | CC-MAIN-2020-50 | refinedweb | 1,934 | 57.47 |
Published to labs.mysql.com as part of the MySQL InnoDB Cluster 5.7.17 Preview 2 bundle.
A
--usercommand line option was added to define the user to run Router as. This option is required if Router is bootstrapped or started as a super user, such as root. This option is also defined as
userunder the
[DEFAULT]namespace. This option is not available on Windows.
In addition, the packaging scripts (Debian and RPM packages) now create a Router-specific system user named mysqlrouter that Router runs as by default. This account does not have shell access, and its home directory points to the directory where the default Router configuration file is stored. Previously, the user named mysql was used by default. (Bug #25070949)
No quorum did not cause the connections to be blocked. (Bug #25134206)
The
--helptext referred to a nonexistent option named "--master-key-path", instead of "--master-key-file". (Bug #25074305)
After dissolving a MySQL InnoDB cluster that was bootstrapped, bootstrapping to the old primary server and port would not function. (Bug #25069674)
On Linux, the default
keyring_pathpath included
/var/run, but because some Linux distributions mount
/var/run/to
tmpfs, this definition was lost when the host was restarted. Now,
/var/lib/is used on most systems. (Bug #25045182)
An existing configuration file with a missing [metadata_cache] section (including empty files) would cause --bootstrap to fail. (Bug #25045119)
Having multiple
metadata_cachedefinitions (with different section keys) would cause Router to unexpectedly exit. This error is now handled, and Router is closed with an error message. (Bug #24962552)
Routing to the default destination port for the x protocol (33060) did not function for standalone routing. (Bug #24955339)
X-Protocol routing treated errors from the server as handshake failures, which caused each invalid authentication request to increment the connect error counter. Now, it behaves like the classic protocol, so during the handshake when the server sends an error to the client (even if it Access Denied error), this is not considered a failed handshake. This is also how MySQL Server behaves. (Bug #24911725)
Metadata cache section's did not allow the optional section keys definitions. (Bug #24909259)
After performing a successful
--bootstrapoperation, immediately executing a second and failed bootstrap operation (against a different URI) could cause Router to not connect to the metadata cache for the first bootstrap configuration due to internal changes made by the second. (Bug #24902404)
--bootstrapnow sets
bind_address=0.0.0.0for each route in the generated Router configuration file, when before it did not set it and relied upon the
bind_addressdefault value of 127.0.0.1. In addition, the
--conf-bind-addresscommand line option was added to modify the
bind_addressvalue set by bootstrap. (Bug #24846715)
Bootstrapping router with the
--conf-use-socketsoption was not defining the
socketoption in the generated configuration file. (Bug #24842143)
After bootstrapping Router with the
--conf-skip-tcp
--conf-use-socketsoptions, neither MySQL Shell or the MySQL client could connect to Router. (Bug #24841281)
The keyring plugin is only loaded if either configured, or if there is a password involved in the configuration. Previously, Router would always load the plugin and then prompt for a password. (Bug #24840690)
The
--nameoption is now optional. (Bug #24807941)
Configuring the router for using more than one routing rule with UNIX domain sockets and no TCP ports would fail with a "duplicate IP or name found" configuration error. This made it impossible to configure R/W splitting using Unix sockets. (Bug #24799417)
Fixed compilation related warnings. (Bug #24701344)
Router was not able to connect (function) after stopping group replication on the primary node. This affected both read-only and read-write routing sections. (Bug #24659690)
Error logging for metadata connections and routed client connections were improved to be more descriptive, and they were changed to warnings instead of debug messages. (Bug #22010817) | https://dev.mysql.com/doc/relnotes/mysql-router/en/news-2-1-1.html | CC-MAIN-2020-24 | refinedweb | 639 | 55.34 |
def foo(li=[]): li.append(1) print(li) foo([2]) # Out: [2, 1] foo([3]) # Out: [3, 1]
This code behaves as expected, but what if we don't pass an argument?
foo() # Out: [1] As expected... foo() # Out: [1, 1] Not as expected...
This is because default arguments of functions and methods are evaluated at definition time rather than run time. So we only ever have a single instance of the
li list.
The way to get around it is to use only immutable types for default arguments:
def foo(li=None): if not li: li = [] li.append(1) print(li) foo() # Out: [1] foo() # Out: [1]
While an improvement and although
if not li correctly evaluates to
False, many other objects do as well, such as zero-length sequences. The following example arguments can cause unintended results:
x = [] foo(li=x) # Out: [1] foo(li="") # Out: [1] foo(li=0) # Out: [1]
The idiomatic approach is to directly check the argument against the
None object:
def foo(li=None): if li is None: li = [] li.append(1) print(li) foo() # Out: [1] | https://riptutorial.com/python/example/12258/mutable-default-argument | CC-MAIN-2021-31 | refinedweb | 184 | 72.05 |
WebStorm 2018.1 EAP, 181.3007: better support for dynamic imports, new in TypeScript support
The new WebStorm 2018.1 Early Preview build (181.3007.17) is now available
You can update via Toolbox App, or download the build here and install it beside your stable WebStorm version.
Download WebStorm 2018.1 EAP
Better Extract method refactoring
Extract method refactoring now works without any additional dialogs, so it no longer takes your attention away from the code.
You can still open a refactoring dialog if you press Alt-Cmd-M again. In the dialog you can see the list of function parameters and select whether the function will be defined as
function name() or
let name = function().
Improved support for dynamic imports with import()
If you are using dynamic imports in your JavaScript or TypeScript code, you’ll notice a whole bunch of improvements.
First, you’ll get code completion for the properties of the imported module, and you will be able to jump back to its definition with Cmd-click.
It doesn’t matter whether the imported file has default or named exports, WebStorm will correctly find the usages. And if you rename a named export of the module, its usages will be correctly updated in normal and dynamic imports.
New in TypeScript support
Adding a type guard
In TypeScript, there are new quick-fixes called “Enclose in type guard” and “Prefix with type guard” that are shown in cases when a used method is not available for one of the types in a union type.
To see the suggestions, just hit Alt-Enter when the cursor is on the highlighted code.
WebStorm will use
instanceof or discriminant property type guards when appropriate.
Completion for a list of parameters
Now when you call
super() in the class constructor, one of the suggested options will be a list of all the parameters available in the constructor of the parent class. This a small thing, but it can save you some time because you won’t have to select every parameter separately.
You will also see this suggestion if two methods have a similar signature and you call one method in another one.
Rename for property names in strings
In some cases, in TypeScript a property name can be used in a string. Now you can easily rename such properties using Rename refactoring – WebStorm will make sure that these usages are not forgotten.
Open in Terminal
For folders there’s now a new action Open in Terminal – it opens the integrated terminal on the folder’s path. The action is available via Find action or via the context menu of this folder. For files, it will go to the path of its parent folder.
WebStorm Team
5 Responses to WebStorm 2018.1 EAP, 181.3007: better support for dynamic imports, new in TypeScript support
Artem says:January 30, 2018
Thanks!
What name is for Typescript feature used in “Rename for property names in strings”?
About generics with field names.
Anton Lobov says:January 30, 2018
TypeScript feature is called “keyof”, or, more formally, “index type query”:.
Rename for property names in literals was inspired by this request:.
WebStorm now can also rename property declarations defined directly in type aliases and used via mapped types (), e.g.,
type Props = “name” | “surname”;
type Person = { [T in Props | “age”]: string};
var p: Person;
p.name; p.surname; p.age; // <– you can rename them
Artem says:January 31, 2018
Awesome, thanks!
Nugzar says:April 25, 2018
When I’m including component in simple way
import Component from ‘./my-component’;
Inside my-component.js I can press CTRL+click on export default and see where component includes.
But when I use I’m using react-loadable and include modules like this
const LoadableComponent = Loadable({
loader: () => import(‘./my-component’),
});
Inside my-component.js I can’t go to usage place by pressing CTRL+click on default. It shows that “No usages found”. But I’m importing it by react-loadable.
Can you fix this?
WebStorm 2018.1.2 Build #WS-181.4668.60
Ekaterina Prigara says:April 25, 2018
Hello,
There’s no way WebStorm can understand from the static analysis of this code that LoadableComponent is actually your Component. To resolve the component in this case we need to specifically support this pattern. Please submit a feature request about that on our issue tracker:
Thank you! | https://blog.jetbrains.com/webstorm/2018/01/webstorm-2018-1-eap-181-3007/ | CC-MAIN-2021-10 | refinedweb | 730 | 64.91 |
CodePlexProject Hosting for Open Source Software
Hi Everyone,
Here are some of the upcoming ASP .NET classes at WiZiQ, these are FREE public classes and anyone can join and learn the programming language:
1. 5 imp ASP.NET interview questions
Sunday, April 11, 2010 12:30 AM (EST)
2.5 imp .NET interview questions
Sunday, April 18, 2010 12:30 AM (EST)
3. 5 imp ADO.NET interview questions
Saturday, April 24, 2010 12:30 AM (EST)
4. 5 important .NET interview questions
Sunday, April 25, 2010 12:30 AM (EST)
Regards,
Sukhpreet
Thanks Sukhpreet for giving useful information.
If there will be more any .NET class.
Kindly tell me.
Thanks
Kishan Srivastava
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://blogengine.codeplex.com/discussions/208415 | CC-MAIN-2017-43 | refinedweb | 149 | 78.14 |
To express choosing between two alternatives, Scala
has a conditional expression
if-else.
It looks like a
if-else in Java, but is used for expressions, not statements.
Example:
def abs(x: Int) = if (x >= 0) x else -x
x >= 0 is a predicate, of type
Boolean.
Boolean expressions
b can be composed of
true false // Constants !b // Negation b && b // Conjunction b || b // Disjunction
and of the usual comparison operations:
e <= e, e >= e, e < e, e > e, e == e, e != e
Here are reduction rules for Boolean expressions (
e is an arbitrary expression):
!true --> false !false --> true true && e --> e false && e --> false true || e --> true false || e --> e
Note that
&& and
|| do not always need their right operand to be evaluated.
We say, these expressions use “short-circuit evaluation”.
We will define in this section a method
/** Calculates the square root of parameter x */ def sqrt(x: Double): Double = ...
The classical way to achieve this is by successive approximations using Newton's method.
To compute
sqrt(x):
y(let's pick
y = 1).
yand
x/y.
Example:
Estimation Quotient Mean 1 2 / 1 = 2 1.5 1.5 2 / 1.5 = 1.333 1.4167 1.4167 2 / 1.4167 = 1.4118 1.4142 1.4142 ... ...
First, we define a method which computes one iteration step:
def sqrtIter(guess: Double, x: Double): Double = if (isGoodEnough(guess, x)) guess else sqrtIter(improve(guess, x), x)
Note that
sqrtIter is recursive, its right-hand side calls itself.
Recursive methods need an explicit return type in Scala.
For non-recursive methods, the return type is optional.
Second, we define a method
improve to improve an estimate and a test to check for termination:
def improve(guess: Double, x: Double) = (guess + x / guess) / 2 def isGoodEnough(guess: Double, x: Double) = abs(guess * guess - x) < 0.001
Third, we define the
sqrt function:
def sqrt(x: Double) = sqrtIter(1.0, x)
You have seen simple elements of functional programing in Scala.
You have learned the difference between the call-by-name and call-by-value evaluation strategies.
You have learned a way to reason about program execution: reduce expressions using the substitution model.
Complete the following method definition that computes the factorial of a number:
def factorial(n: Int): Int = if (n == res0) res1 else factorial(n - res2) * n factorial(3) shouldBe 6 factorial(4) shouldBe 24 | https://www.scala-exercises.org/scala_tutorial/functional_loops | CC-MAIN-2017-39 | refinedweb | 395 | 55.84 |
Landing a job at Facebook is a dream for many developers around the globe. Facebook is one of the top tech companies in the world, with a workforce over 52,000 strong. Facebook is known for its growth-based company culture, fast promotion tracks, excellent benefits, and top salaries that few companies can match.
But competition is fierce, and with a swell of new hires, Facebook is on the lookout for the top candidates. Facebook focuses on your cultural fit, generalist knowledge, ability to build within constraints, and expert coding skills.
To help you prepare, today I will walk through everything you need to crack an Facebook interview, including coding questions and a step-by-step preparation guide.
Today we will go over:
To land a software engineering job at Facebook, you need to know what lies ahead. The more prepared you are, the more confident you will be. So, let’s break it down.
For a deeper dive into Facebook’s interview process, check out Coding Interviews’s free Facebook Coding Interview Guide.
Coding Questions: Facebook interview questions focus on generalist knowledge on algorithms, data structures, and time complexity. They also test on architecture and system design (even entry level).
Hiring Levels: Facebook normally hires at level E3 for entry level software roles with E9 behind the height of levels. E5 is considered an entry-level manager role.
Hiring Teams: Central hires for Oculus, Facebook Groups, and WhatsApp.
Programming languages: Facebook prefers most standard languages, including Java, C++, Python, Ruby, and Perl.
What’s different about Facebook interviews?
System design interview:
- At Facebook, you can expect these questions no matter what level you are interviewing for.
Structured interviewing:
- Facebook will pair you with interviewers who have either held the position you’re interviewing for or with individuals who work directly with the position you’re interviewing for.
Core values and your behavioral interview:
- Facebook interviewers will also evaluate your ability to embody their five core values: Move Fast, Be Bold, Focus on Impact, Be Open, and Build Social Value.
In this section, we’ll take a deep dive into the top 40 coding interview questions. We will discuss the answers and runtime complexities for the 15 questions you’re bound to see in an interview followed by the definitive list of 25 questions you’ll likely encounter.
Given an integer array, move all elements that are 0 to the left while maintaining the order of other elements in the array. The array has to be modified in-place. Try it yourself before reviewing the solution and explanation.
def move_zeros_to_left(A): #TODO: Write - Your - Code pass
Runtime complexity: Linear,
Memory Complexity: Constant,
Keep two markers:
read_index and
write_index and point them to the end of the array. Let’s take a look at an overview of the algorithm.
While moving
read_index towards the start of the array:
read_indexpoints to
0, skip.
read_indexpoints to a non-zero value, write the value at
read_indexto
write_indexand decrement
write_index.
write_indexand to the current position of
write_indexas well.
You are given an array (list) of interval pairs as input where each interval has a start and end timestamp. The input array is sorted by starting timestamps. You are required to merge overlapping intervals and return a new output array.
Consider the input array below. Intervals (1, 5), (3, 7), (4, 6), (6, 8) are overlapping so they should be merged to one big interval (1, 8). Similarly, intervals (10, 12) and (12, 15) are also overlapping and should be merged to (10, 15).
Try it yourself before reviewing the solution and explanation.
class Pair: def __init__(self, first, second): self.first = first self.second = second def merge_intervals(v): # write your code here result = [] return result
Runtime complexity: Linear,
Memory Complexity: Linear,
This problem can be solved in a simple linear scan algorithm. We know that input is sorted by starting timestamps. Here is the approach we are following:. Try it yourself before reviewing the solution and explanation.
def convert_to_linked_list(root): #TODO: Write - Your - Code return None
Runtime complexity: Linear,
Memory Complexity: Linear, .
Recursive solution has memory complexity as it will consume memory on the stack up to the height of binary tree
h. It will be for balanced trees and in the worst case can be .
In an in-order traversal, first the left sub-tree is traversed, then the root is visited, and finally the right sub-tree is traversed.
One simple way of solving this problem is to start with an empty doubly linked list. While doing the in-order traversal of the binary tree, keep inserting each element output into the doubly linked list.
But, if we look at the question carefully, the interviewer wants us to convert the binary tree to a doubly linked list in-place i.e. we should not create new nodes for the doubly linked list.
This problem can be solved recursively using a divide and conquer approach. Below is the algorithm specified.
Given the root of a binary tree, display the node values at each level. Node values for all levels should be displayed on separate lines. Let’s take a look at the below binary tree.
Level order traversal for this tree should look like:
def level_order_traversal(root): result = "" #TODO: Write - Your - Code return result
Runtime complexity: Linear,
Memory Complexity: Linear,.
Reverse the order of words in a given sentence (an array of characters). Take the “Hello World” string for example:
def reverse_words(sentence): # sentence here is an array of characters #TODO: Write - Your - Code return sentence
Here is how the solution works:
For more on string reversal, read my article Best practices for reversing a string in JavaScript, C++, and Python.
You are given a dictionary of words and a large input string. You have to find out whether the input string can be completely segmented into the words of a given dictionary. The following example elaborates on the problem further.
def can_segment_string(s, dictionary): #TODO: Write - Your - Code return False
Runtime complexity: Exponential, , if we only use recursion. With memoization, the runtime complexity of this solution can be improved to be polynomial, .
Memory Complexity: Polynomial,
You can solve this problem by segmenting the large string at each possible position to see if the string can be completely segmented to words in the dictionary. If you write the algorithm in steps it will be as follows:
n = length of input string for i = 0 to n - 1 first_word = substring (input string from index [0, i] ) second_word = substring (input string from index [i + 1, n - 1] ) if dictionary has first_word if second_word is in dictionary OR second_word is of zero length, then return true recursively call this method with second_word as input and return true if it can be segmented
The algorithm will compute two strings from scratch in each iteration of the loop. Worst case scenario, there would be a recursive call of the
second_word each time. This shoots the time complexity up to .
You can see that you may be computing the same substring multiple times, even if it doesn’t exist in the dictionary. This redundancy can be fixed by memoization, where you remember which substrings have already been solved.
To achieve memoization, you can store the
second string in a new set each time. This will reduce both time and memory complexities.
Given a list of daily stock prices (integers for simplicity), return the buy and sell prices for making the maximum profit.
We need to maximize the single buy/sell profit. If we can’t make any profit, we’ll try to minimize the loss. For the below examples, buy (orange) and sell (green) prices for making a maximum profit are highlighted.
def find_buy_sell_stock_prices(array): #TODO: Write - Your - Code return -1, -1 #Return a tuple with (high, low) price values
Runtime complexity: Linear,
Memory Complexity: Constant,
The values in the array represent the cost of a stock each day. As we can
buy and
sell the stock only once, we need to find the best
buy and
sell prices for which our profit is maximized (or loss is minimized) over a given span of time.
A naive solution, with runtime complexity of , is to find the maximum gain between each element and its succeeding elements.
There is a tricky linear solution to this problem that requires maintaining
current_buy_price (which is the smallest number seen so far),
current_profit, and
global_profit as we iterate through the entire array of stock prices.
At each iteration, we will compare the
current_profit with the
global_profit and update the
global_profit accordingly.
The basic algorithm is as follows:
current buy = stock_prices[0] global sell = stock_prices[1] global profit = global sell - current buy for i = 1 to stock_prices.length: current profit = stock_prices[i] - current buy if current profit is greater than global profit then update global profit to current profit and update global sell to stock_prices[i] if stock_prices[i] is less than current buy then update current buy to stock_prices[i] return global profit and global sell
Study the definitive list of 16 patterns for coding questions, based on similarities in the techniques needed to solve them. Once you’re familiar with a pattern, you’ll be able to solve dozens of problems with it.
Grokking the Coding Interview: Patterns for Coding Questions
Given a double,
x, and an integer,
n, write a function to calculate
x raised to the power
n. For example:
power (2, 5) = 32
power (3, 4) = 81
power (1.5, 3) = 3.375
power (2, -2) = 0.25
def power(x, n): #TODO: Write - Your - Code return x
Runtime complexity: Logarithmic,
Memory Complexity: Logarithmic,
A simple algorithm for this problem is to multiply
x by
n times. The time complexity of this algorithm would be . We can use the divide and conquer approach to solve this problem more efficiently.
In the dividing step, we keep dividing
n by
2 recursively until we reach the base case i.e.
n == 1
In the combining step, we get the result,
r, of the sub-problem and compute the result of the current problem using the two rules below:
We are given a set of integers and we have to find all the possible subsets of this set of integers.
def get_all_subsets(v, sets): #TODO: Write - Your - Code return sets
Runtime complexity: Exponential, , where is the number of integers in the given set
Memory Complexity: Constant,
There are several ways to solve this problem. We will discuss the one that is neat and easier to understand. We know that for a set of ‘n’ elements there are subsets. For example, a set with 3 elements will have 8 subsets. Here is the algorithm we will use:
n = size of given integer set subsets_count = 2^n for i = 0 to subsets_count form a subset using the value of 'i' as following: bits in number 'i' represent index of elements to choose from original set, if a specific bit is 1 choose that number from original set and add it to current subset, e.g. if i = 6 i.e 110 in binary means that 1st and 2nd elements in original array need to be picked. add current subset to list of all subsets
Note that the ordering of bits for picking integers from the set does not matter; picking integers from left to right would produce the same output as picking integers from right to left.
Given the root node of a directed graph, clone this graph by creating its deep copy so that the cloned graph has the same vertices and edges as the original graph.
Let’s look at the below graphs as an example. If the input graph is G = (V, E) where V is set of vertices and E is set of edges, then the output graph (cloned graph) G’ = (V’, E’) such that V = V’ and E = E’.
Note: We are assuming that all vertices are reachable from the root vertex. i.e. we have a connected graph.
class Node: def __init__(self, d): self.data = d self.neighbors = [] def clone(root): return None # return root
Runtime complexity: Linear
Memory Complexity: Logarithmic,
We use depth-first traversal and create a copy of each node while traversing the graph. To avoid getting stuck in cycles, we’ll use a hashtable to store each completed node and will not revisit nodes that exist in the hashtable.
The hashtable key will be a node in the original graph, and its value will be the corresponding node in the cloned graph.
Serialize a binary tree to a file and then deserialize it back to a tree so that the original and the deserialized trees are identical.
There is no restriction regarding the format of a serialized stream, therefore you can serialize it in any efficient format. However, after deserializing the tree from the stream, it should be exactly like the original tree. Consider the below tree as the input tree.
def serialize(node, stream): #TODO: Write - Your - Code return def deserialize(stream): #TODO: Write - Your - Code return None
Runtime complexity: Linear
Memory Complexity: Logarithmic,
There can be multiple approaches to serialize and deserialize the tree. One approach is to perform a depth-first traversal and serialize individual nodes to the stream. We’ll use a pre-order traversal here. We’ll also serialize some markers to represent a null pointer to help deserialize the tree.
Consider the below binary tree as an example. Markers (M*) have been added in this tree to represent null nodes. The number with each marker i.e. 1 in M1, 2 in M2, merely represents the relative position of a marker in the stream.
The serialized tree (pre-order traversal) from the above example would look like the below list.
When deserializing the tree we’ll again use the pre-order traversal and create a new node for every non-marker node. Encountering a marker indicates that it was a null node.
Given a sorted array of integers, return the low and high index of the given key. You must return -1 if the indexes are not found.
The array length can be in the millions with many duplicates.
In the following example, according to the the
key, the
low and
high indices would be:
key: 1,
low= 0 and
high= 0
key: 2,
low= 1 and
high= 1
key: 5,
low= 2 and
high= 9
key: 20,
low= 10 and
high= 10
For the testing of your code, the input array will be:
1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 6, 6
def find_low_index(arr, key): #TODO: Write - Your - Code return -2 def find_high_index(arr, key): #TODO: Write - Your - Code return -2
Runtime complexity: Logarithmic
Memory Complexity: Constant,
Linearly scanning the sorted array for
low and
high indices are highly inefficient since our array size can be in millions. Instead, we will use a slightly modified binary search to find the
low and
high indices of a given key.
We need to do binary search twice:
lowindex.
highindex.
Let’s look at the algorithm for finding the
low index:
lowand
highindices and calculate the
midindex.
midindex is less than the
key,
lowbecomes
mid + 1(to move towards the start of range).
key, the
highbecomes
mid - 1. Index at
lowremains the same.
lowis greater than
high,
lowwould be pointing to the first occurrence of the
key.
lowdoes not match the
key, return
-1.
Similarly, we can find the
high index by slightly modifying the above condition:
lowindex to
mid + 1when the element at
midindex is less than or equal to the
key.
highindex to
mid - 1when the element at
midis greater than the
key.
Search for a given number in a sorted array, with unique elements, that has been rotated by some arbitrary number. Return
-1 if the number does not exist.
Assume that the array does not contain duplicates
Below is an original array before rotation.
After performing rotation on this array 6 times it changes to:
The task is to find a given number in this array.
def binary_search_rotated(arr, key): #TODO: Write - Your - Code return -1
Runtime complexity: Logarithmic,
Memory Complexity: Logarithmic,
The solution is essentially a binary search but with some modifications. If we look at the array in the example closely, we notice that at least one half of the array is always sorted. We can use this property to our advantage.
If the number
n lies within the sorted half of the array, then our problem is a basic binary search. Otherwise, discard the sorted half and keep examining the unsorted half. Since we are partitioning the array in half at each step, this gives us runtime complexity.
StrStr(string search)
Now that you have a sense of what to expect from an interview and know what kinds of questions to expect, let’s learn some preparation strategies based on Facebook’s unique interview process.
The first thing you should do is update your resume to be metrics/deliverables driven. It’s also a good idea to show how the work you’ve done can translate into their five core values: Move fast, Be bold, Focus on impact, Be open, and Build social value.
I recommend at least three months of self-study to be successful. This includes choosing a programming language, reviewing the basics, and studying algorithms, data structures, system design, object-oriented programming, and more.
It’s important to practice coding using different tools:
For a robust, 12-week interview guide, check out our article, the 3 Month Coding Interview Preparation Bootcamp
The design interview usually doesn’t involve any coding, so you’ll need to learn how to answer these questions. This will be done on a whiteboard during the interview, so practice your designs by hand. Study up on system design and product design.
The best way to master system design questions is not by memorizing answers but by learning the anatomy of a system design question. You need to train yourself to think from the ground up while also considering scaling and requirements.
Pro tip: If you want to stand out in the system design interview, you’ll need to discuss how Machine Learning can be implemented in your design.
Facebook wants next-gen engineers, and they focus heavily on artificial intelligence. Consider brushing up on ML concepts and ML system design principles.
Once you get the basics down and progress through the interview prep roadmap, master the best practices.
When you practice, learn how to articulate your process out loud. Facebook cares a lot about how you think. When you code, explain your thought process as if another person were in the room.
You also want to start timing yourself to learn how to manage your time effectively. It’s always better to take time planning your answer than to just jump it with brute force.
Facebook cares that you fit with their company, so you need to be prepared to talk about yourself. For each of Facebook’s values, brainstorm how you fit and why these values matter to you.
You should also think about your 2 to 4 year career aspirations, interests, and strengths as an engineer, as they will likely come up in the interview.
To learn how to prepare, check out my article Behavioral Interviews: how to prepare and ace interview questions
Facebook values self-starters, so it’s important that you come prepared with questions for your interviewers. You’ll have time during every interview to ask your own questions. This is also an opportunity to determine if Facebook is a good fit for your lifestyle and needs.
Cracking the Facebook coding interview comes down to the time you spend preparing, such as practicing coding questions, studying behavioral interviews, and understanding Facebook’s company culture.
There is no golden ticket, but more preparation will surely make you a more confident and desirable candidate. The essential resources below will help you prepare and build confidence for Facebook interviews.
Keep learning and studying!
Join a community of 500,000 monthly readers. A free, bi-monthly email with a roundup of Educative's top articles and coding tips. | https://www.educative.io/blog/cracking-top-facebook-coding-interview-questions | CC-MAIN-2021-31 | refinedweb | 3,368 | 61.67 |
A couple of aspects of Active Directory (AD) that often seem confusing are sites and site links. The basic definition of these concepts seems simple enough. For example, a site is defined as a portion of your network that has high-bandwidth connectivity.
Typically this means a site is one or more IP subnets in a single location such as a campus, building, or floor. A site link, on the other hand, is a group of sites linked together by a router or some other WAN device. A typical site link joins only two sites together, but you can have site links with three or more sites on backbone networks that connect groups of buildings on a campus.
Also important to remember is that sites and site links represent a logical topology that is separate from the domain namespace of your AD deployment. In other words, one domain can contain multiple sites while one site can belong to several domains. And while domains are generally deployed to mirror the business or administrative structure of your organization, sites and site links are created to mirror the structure of the underlying physical network that supports your AD deployment.
The main reasons for using this second structure are to provide mechanisms for controlling the replication of AD information between domain controllers (DCs) in different locations and for enabling users to find and access resources that have proximity to where they are located. For example, the site topology of your network enables a user at a remote branch office to find and use a DC at his or her branch office to log on to the network instead of a DC at headquarters. The advantages of using a nearby domain controller this way include shorter latency for faster logons and reduced bandwidth usage of costly WAN connections.
So far so good, but what administrators often forget is that while sites and site links are supposed to mirror the underlying physical network for your company, they may not do this depending on how your sites and site links have been created.
For example, while a site link should normally represent a physical WAN link between two locations that are different sites, AD has no way of actually determining if this is the case, if there actually is a WAN link connecting the locations, what kind of WAN link it is, and so on.
So if you made a mistake in the way you designed your site topology, failed to create the appropriate site links, or did something else wrong when you designed and implemented your sites and site links, you may end up with replication problems, stale directories, and frustrated users.
One example of the typical kind of misconception that occurs concerning then ensure redundancy for replication purposes by first. Here's what you figure you'll do:
------ T1-Site-Link ------ Minneapolis-Site Fargo-Site ----- ISDN-Site-Link -----.
Instead of following the above approach, which on its face seemed logical enough, here's what you should do:
Minneapolis-Site ----- Minneapolis-Fargo-Site-Link ----- Fargo Site
In other words, even though you have two different WAN links between your two sites, you should only create a single site link joining the sites together and leave your WAN link redundancy to your access router, which can switch to ISDN when the T1 line fails.
While mapping your physical network to your site topology is a good thing, too much of a good thing can be bad. Sure, by disabling site-link transitivity you can create a site topology that (almost) exactly mirrors the way your actual network works, but the harder you try to make things match the more maintenance you're going to have to perform as the administrator to keep AD replication working optimally on your network.
It's best then to simply paint the broad outlines of your network when you create your sites and site links, and leave AD to work out the rest by creating the topology and connection objects it feels are necessary to ensure sufficient redundancy for AD replication to occur without problems.
And of course, simpler is always better. If you can get away using only one domain and one site for your company, do it -- if your primary WAN links are T1 or higher, it shouldn't be a problem.
Mitch Tulloch is the author of Windows 2000 Administration in a Nutshell, Windows Server 2003 in a Nutshell, and Windows Server Hacks.
Return to WindowsDevCenter.com. | http://archive.oreilly.com/lpt/a/4927 | CC-MAIN-2015-22 | refinedweb | 748 | 50.3 |
Sample thread program in java:
What is the output of below Java program?
class MyThreadClass extends Thread{
public void run(){
for(int i=0; i<10; i++){
System.out.println(i);
}
}
}
public class ThreadDemo3 {
public static void main(String[] args) {
MyThreadClass mtc = new MyThreadClass();
mtc.start();
}
}
it prints from 0 to 9
it prints from 0 to 10
It gives a compile time error in main method mtc.start(). Because There is no function with start() in our MyThreadClass.
run time exception and crash. for loop will run infinite number of times and crashes.
it prints from 0 to 9.
Note: Even though there is no start() function in our class, it is present in Thread class which we are extending in our class. start() function will internally call run() method of our MyThreadClass. So there is no error. It runs properly and prints from 0 to 9.
Back To Top | http://skillgun.com/question/3077/java/threads/sample-thread-program-in-java-what-is-the-output-of-below-java-program-class-mythreadclass-extends-thread-public-void-run-forint-i0-i10-i-systemoutprintlni-public-class-threaddemo3-public-static-void-mainstring-args-mythreadclass-mtc-new-mythreadclass-m | CC-MAIN-2016-44 | refinedweb | 151 | 85.79 |
I know of the schedule_function but I don't think that will work like this with minute data as it won't stop the algo from running just reset it periodically. Any suggestions?
I know of the schedule_function but I don't think that will work like this with minute data as it won't stop the algo from running just reset it periodically. Any suggestions?
This is a really easy way of doing it. It counts each minute and when
handle_data() has been called 5 times, it executes the logic.
Happy coding
EDIT: I forgot you should also have
if not get_open_orders(stock): before the
order_target_percent line.
minuteCounter is a variable which gets incremented every update (every minute) and whenever it exceeds 5 minutes, your logic runs and the variable is reset. The not get_open_orders(stock) makes sure that you don't send orders for a stock for which you already have orders sent for. That's a good precaution if you don't want to bother properly accounting for outstanding orders, because otherwise, if your existing orders and the new ones you send both get filled, you'll exceed your desired position.
Note that instead of the minute counter variable as suggested by James Jack, I'd just do
if get_datetime().minute % 5 != 0: # Only run when the minute is divisible by 5 . return # And don't execute the rest of this function stock = context.stocks prices = history(bar_count = 13, frequency = '1m', field = 'price') ....
It gets incremented automatically by quantopian? Also is there a way to log what time the orders took place just so I can double check that it's working?
Hi Ari,
If you're talking about my example code, the algo increments
context.minuteCounter without help from Quantopian.
If you're talking about Alex's suggestion of
if get_datetime().minute % 5 != 0 then there is no counter. Instead the current time is used, which is updated by Quantopian.
To log the current time you can use
print "the time is {}".format(get_datetime('US/Eastern').time())
Since we know that on a typical trading day there are 390 minutes, we can create
schedule_functions in a loop that fire very 5 minutes, if the day is cut short e.g. a holiday, or half day, then the extra
schedule_functions will never execute.Try this...
def initialize(context): for minute in range(0, 390, 5): schedule_function(trade, date_rules.every_day(), time_rules.market_open(minute).
The approaches presented here are actually only workarounds for a missing generalization of the API to arbitrary scheduling intervals between 1m and 1w. Are there any plans to improve the API in this regard?
total_minutes_pipeline = 6*60 + 5 for i in range(1, total_minutes_five_minute): if i % 5 == 0: schedule_function( five_minute, date_rules.every_day(), time_rules.market_open(minutes=i), True )
This is from the help docs. This sets the function, 'five_minute()' to run every 5 minutes.
I would not try to change how often 'handle_data' runs, rather define a new function scheduled every 5 minutes. | https://www.quantopian.com/posts/i-want-this-algorithm-to-only-run-handle-data-every-5-minutes-or-at-least-only-make-trades-every-five-minutes-how-can-i-do-this | CC-MAIN-2018-39 | refinedweb | 497 | 63.9 |
Lightweight Invisible CAPTCHA Validator Control
UPDATE: This code is now hosted in the Subkismet project on CodePlex..
- It requires Atlas and Atlas is pretty heavyweight.
- Atlas is pre-release right now.
- We’re waiting on a bug fix in Atlas to be implemented.
- It is not accessible as it doesn’t work if javascript is enabled..
- Easy to use. Only one assembly to reference.
- Is invisible.
- Works when javascript is disabled..
![ Accessible version of the Invisible CAPTCHA
control]().
I developed this control as part of the
Subtext.Web.Control.dll
assembly which is part of the Subtext
project, thus you can grab this assembly
from our Subversion repository..
[assembly: WebResource("YourNameSpace.InvisibleCaptcha.js", "text/javascript")]
You will also need to find the call to
Page.ClientScript.GetWebResourceUrl inside InvisibleCaptcha.cs and
change it to match the namespace specified in the
WebResource
attribute..
[Download InvisibleCaptcha here].
tags: CAPTCHA, Comment Spam, ASP.NET, Validator | http://haacked.com/archive/2006/09/26/lightweight_invisible_captcha_validator_control.aspx/ | CC-MAIN-2016-50 | refinedweb | 152 | 54.29 |
You can learn about NumPy in Python Programs with Outputs helped you to understand the language better.
Python Programming – Array Creation
There are several ways to create an array in NumPy like np. array, np. zeros, np. ones, etc. Each of them provides some flexibility. Syntax to create an array with important attributes of a Numpy object is
Syntax: NumPy.array (object, dtype=None,
copy=True, order=’K’, subok=False, ndmin=0)
Parameters:
Object: An array, any object exposing the array interface.
DType: Desired data type of array.
Copy: If true(default), then the object is copied.
Order ‘K’, ‘A’/C’/F’: Specify the memory layout of the array. If the object is not an array, the newly created array will be in C order (row-major) unless ‘F’ is specified, in which case it will be in Fortran order.
Subok: If True, then sub-classes will be passed- through, otherwise the returned array will be forced to be a base-class array,
ndmin: Specifies minimum dimensions of resultant
array.
Take a look at the following examples to understand better.
For example, to convert a list of numeric value into a one-dimensional Numpy array as shown in Figure 13.8.
import numpy as np
List = [1,2,3,4]
al = np.array(List)
print(al)
The above code written in Python is shown in Figure 13.9:
The simplest way to create an array in Numpy is to use Python List:
myPythonList = [1,9,8,3]
To convert Python list to a numpy array by using the object np.array:
numpy_array_from_list = np.array(myPythonList)
- Python Programming – Introduction To Numpy
- Python Programming – Array Attributes
- Python Programming – Scope and Module
Methods used to create Numpy Array
Different methods used to create a Numpy array are discussed in the following sections.
To display the contents of the list:
numpy_array_from_list
>>>
array( [1, 9, 8, 3] )
>>>
empty ( )
the empty ( ) function is used to create empty arrays or an uninitialized array of specified shape and dtype, in which you can store actual data as and when required. (See Figure 13.10). Note that empty ( ) created array with random garbage values.
Example
Demo of empty ( ) function.
zeros ( )
The function zeros ( ) takes the same attributes as empty ( ) and creates an array with specifies size and type but filled with zeros. Default dtype is float. (See Figure 13.11).
Example
Demo of zero ( ) function.
Let’s Try
ones ( )
The function ones ( ) takes the same attributes as empty( ), and creates an array with specified size and type but filled with ones. (See Figure 13.12).
Example
Demo of one ( ) function.
Let’s Try
copy ( ) All NumPy arrays come with the copy ( ) method. (See Figure 13.13).
Example
Demo of copy ( ) function. | https://pythonarray.com/python-programming-array-creation/ | CC-MAIN-2022-40 | refinedweb | 452 | 66.23 |
What Is a Contract?
In the context of this chapter, we will consider a contract to be any mechanism that requires a developer to comply with the specifications of an Application Programming Interface (API). Often, an API is referred to as a framework. The online dictionary Dictionary.com defines a contract as "an agreement between two or more parties for the doing or not doing of something specified—an agreement enforceable by law."
This is exactly what happens when a developer uses an API—with the project manager, business owner or industry standard providing the enforcement. When using contracts, the developer is required to comply with the rules defined in the framework. This includes issues like method names, number of parameters, and so on. In short, standards are created to facilitate good development practices.
Enforcement is vital because it is always possible for a developer to break a contract. Without enforcement, a rogue developer could decide to reinvent the wheel and write her own code rather than use the specification. What is an example of a contract? Let's assume that we want to create an application to draw shapes. Our goal is to draw every kind of shape represented in our current design, as well as ones that might be added later. There are two conditions we must adhere to.
First, we want all shapes to use the same syntax to draw themselves. For example, we want every shape implemented in our system to contain a method called draw(). Thus, seasoned developers implicitly know that to draw a shape you simply invoke the draw() method, regardless of what the shape happens to be. Theoretically, this reduces the amount of time spent fumbling through manuals and cuts down on syntax errors.
Second, remember that it is important that every class be responsible for its own actions. Thus, even though a class is required to provide a method called draw(), that class must provide its own implementation of the code. For example, the classes Circle and Rectangle both have a draw() method; however, the Circle class obviously has code to draw a circle, and as expected, the Rectangle class has code to draw a rectangle. When we ultimately create classes called Circle and Rectangle, which are subclasses of Shape, these classes must implement their own version of draw() (see Figure 8.3).
Figure 8.3 An abstract class hierarchy.
In this way, we have a Shape framework that is truly polymorphic. The draw() method can be invoked for every single shape in the system, and invoking it produces the correct drawing for whichever shape the object happens to be.
Let's look at some code to illustrate how Rectangle and Circle conform to the Shape contract. Here is the code for the Shape class:
public abstract class Shape { public abstract void draw(); // no implementation }
Note that the class does not provide any implementation for draw(); basically there is no code and this is what makes the method abstract (providing any code would make the method concrete). There are two reasons why there is no implementation. First, Shape does not know what to draw, so we could not implement the draw() method even if we wanted to.
Second, we want the subclasses to provide the implementation. Let's look at the Circle and Rectangle classes:
public class Circle extends Shape { public void draw() { System.out.println("Draw a Circle"); } } public class Rectangle extends Shape { public void draw() { System.out.println("Draw a Rectangle"); } }
Note that both Circle and Rectangle extend (that is, inherit from) Shape. Also notice that they provide the actual implementation (in this case, the implementation is obviously trivial). Here is where the contract comes in. If Circle inherits from Shape and fails to provide a draw() method, Circle won't even compile. Thus, Circle would fail to satisfy the contract with Shape. A project manager can require that programmers creating shapes for the application must inherit from Shape. By doing this, all shapes in the application will have a draw() method that performs in an expected manner.
Although the concept of abstract classes revolves around abstract methods, there is nothing stopping Shape from actually providing some implementation. (Remember that the definition for an abstract class is that it contains one or more abstract methods—this implies that an abstract class can also provide concrete methods.) For example, although Circle and Rectangle implement the draw() method differently, they share the same mechanism for setting the color of the shape. So, the Shape class can have a color attribute and a method to set the color. This setColor() method is an actual concrete implementation, and would be inherited by both Circle and Rectangle. The only methods that a subclass must implement are the ones that the superclass declares as abstract. These abstract methods are the contract.
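To make this mixed abstract/concrete design concrete, here is a minimal sketch; the color attribute and the getColor() accessor are assumptions added for illustration, not the book's listing:

```java
// Sketch: Shape mixes one abstract method (the contract) with a
// concrete setColor()/getColor() pair inherited by all subclasses.
abstract class Shape {
    private String color;

    public abstract void draw();       // abstract: every subclass must implement

    public void setColor(String c) {   // concrete: inherited as-is
        color = c;
    }

    public String getColor() {
        return color;
    }
}

class Circle extends Shape {
    public void draw() { System.out.println("Draw a Circle"); }
}

class Rectangle extends Shape {
    public void draw() { System.out.println("Draw a Rectangle"); }
}

public class Main {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(), new Rectangle() };
        for (Shape s : shapes) {
            s.setColor("red");   // shared concrete behavior
            s.draw();            // polymorphic dispatch on the contract method
        }
    }
}
```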
Some languages, such as C++, use only abstract classes to implement contracts; however, Java and .NET have another mechanism that implements a contract, called an interface.
Interfaces
Before defining an interface, it is interesting to note that C++ does not have a construct called an interface. For C++, an abstract class provides the functionality of an interface. The obvious question is this: If an abstract class can provide the same functionality as an interface, why do Java and .NET bother to provide this construct called an interface?
For one thing, C++ supports multiple inheritance, whereas Java and .NET do not. Although Java and .NET classes can inherit from only one parent class, they can implement many interfaces. Using more than one abstract class constitutes multiple inheritance; thus Java and .NET cannot go this route. Although this explanation might specify the need for Java and .NET interfaces, it does not really explain what an interface is. Let's explore what function an interface performs.
As with abstract classes, interfaces are a powerful way to enforce contracts for a framework. Before we get into any conceptual definitions, it's helpful to see an actual interface UML diagram and the corresponding code. Consider an interface called Nameable, as shown in Figure 8.4.
Figure 8.4 A UML diagram of a Java interface.
Note that Nameable is identified in the UML diagram as an interface, which distinguishes it from a regular class (abstract or not). Also note that the interface contains two methods, getName() and setName(). Here is the corresponding code:
public interface Nameable { public String getName(); public void setName(String aName); }
Note that an interface provides no implementation at all. As a result, any class that implements an interface must provide the implementation for all methods. For example, in Java, a class inherits from an abstract class, whereas a class implements an interface.
Tying It All Together
If both abstract classes and interfaces provide abstract methods, what is the real difference between the two? As we saw before, an abstract class provides both abstract and concrete methods, whereas an interface provides only abstract methods. Why is there such a difference?
Assume that we want to design a class that represents a dog, with the intent of adding more mammals later. The logical move would be to create an abstract class called Mammal.
Let's also create a class called Head that we will use in a composition relationship:, let's create a class called Dog that is a subclass of Mammal, implements Nameable, and has a Head object (see Figure 8.5).
Figure 8.5 A UML diagram of the sample code.
In a nutshell, Java and .NET build objects in three ways: inheritance, interfaces, and composition. Note the dashed line in Figure 8.5 that represents the interface. This example illustrates when you should use each of these constructs. When do you choose an abstract class? When do you choose an interface? When do you choose composition? Let's explore further.
You should be familiar with the following concepts:
- Dog is a Mammal, so the relationship is inheritance.
- Dog implements Nameable, so the relationship is an interface.
- Dog has a Head, so the relationship is composition.
The following code shows how you would incorporate an abstract class and an interface in the same class.
public class Dog extends Mammal implements Nameable { String name; Head head; public void makeNoise(){System.out.println("Bark");} public void setName (String aName) {name = aName;} public String getName () {return (name);} }
After looking at the UML diagram, you might wonder how these three mechanisms relate to one another; used together appropriately, inheritance, interfaces, and composition are key to a strong object-oriented design.
Although inheritance is a strict is-a relationship, an interface is not quite the same. Earlier, we saw that Mammal provided a concrete method called generateHeat(). Even though we do not know what kind of mammal we have, we do know that it generates heat, because every subclass inherits that concrete behavior from Mammal.
The Compiler Proof
Can we prove or disprove that interfaces have a true is-a relationship? In the case of Java (and this can also be done in C# or VB), we can let the compiler tell us. Consider the following code:
Dog D = new Dog(); Head H = D;
When this code is run through the compiler, the following error is produced:
Test.java:6: Incompatible type for Identifier. Can't convert Dog to Head. Head H = D;
Obviously, a dog is not a head. Not only do we know this, but the compiler agrees. However, as expected, the following code works just fine:
Dog D = new Dog(); Mammal M = D;
This is a true inheritance relationship, and it is not surprising that the compiler parses this code cleanly because a dog is a mammal.
Now we can perform the true test of the interface. Is an interface an actual is-a relationship? The compiler thinks so:
Dog D = new Dog(); Nameable N = D;
This code works fine. So, as far as the compiler is concerned, implementing an interface does establish a legitimate is-a relationship. Let's explore this concept in greater detail by providing an example. Suppose that, without a common contract, the Planet class might have code like this:
public class Planet { String planetName; public String getPlanetName() { return planetName; } }
Likewise, the Car class might have code like this:
public class Car { String carName; public String getCarName() { return carName; } }
And the Dog class might have code like this:
public class Dog { String dogName; public String getDogName() { return dogName; } }
The obvious issue is that each class has its own naming method, so users must remember three different method names for the same idea. To solve this, we can create an interface (we can use the Nameable interface that we used previously). The convention is that all classes must implement Nameable. In this way, the users only have to remember a single interface for all classes when it comes to naming conventions:
public interface Nameable { public String getName(); public void setName(String aName); }
The new classes, Planet, Car, and Dog, should each implement Nameable. Now we have a standard interface across all three classes, and we've used a contract to ensure that this is the case.
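The updated listings are omitted from this excerpt; a sketch of what the three classes might look like once they all implement Nameable (the field names are assumptions):

```java
interface Nameable {
    String getName();
    void setName(String aName);
}

class Planet implements Nameable {
    private String name;
    public String getName() { return name; }
    public void setName(String aName) { name = aName; }
}

class Car implements Nameable {
    private String name;
    public String getName() { return name; }
    public void setName(String aName) { name = aName; }
}

class Dog implements Nameable {
    private String name;
    public String getName() { return name; }
    public void setName(String aName) { name = aName; }
}

public class Main {
    public static void main(String[] args) {
        // One interface, three classes: callers only need to know Nameable.
        Nameable[] things = { new Planet(), new Car(), new Dog() };
        String[] names = { "Mars", "Mustang", "Rex" };
        for (int i = 0; i < things.length; i++) {
            things[i].setName(names[i]);
            System.out.println(things[i].getName());
        }
    }
}
```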
There is one little issue that you might have thought about. The idea of a contract is great as long as everyone plays by the rules, but what if some shady individual doesn't want to play by the rules? The bottom line is that there is nothing in the language to stop a developer from simply ignoring the contract; in that case, enforcement falls to people and process: the offender could be reprimanded, or even fired.
System Plug-in-Points
Basically, contracts are "plug-in points" into your code. Anyplace where you want to make parts of a system abstract, you can use a contract. Instead of coupling to objects of specific classes, you can connect to any object that implements the contract. You need to be aware of where contracts are useful; however, you can overuse them. You want to identify common features such as the Nameable interface, as discussed in this chapter. However, be aware that there is a trade-off when using contracts. They might make code reuse more of a reality, but they make things somewhat more complex. | http://www.informit.com/articles/article.aspx?p=1238839&seqNum=3 | CC-MAIN-2018-30 | refinedweb | 1,875 | 64 |
Hi everyone,
I’m pleased to hear that the move we’ve taken around SDN Subscriptions is resulting in some lively discussion and response. We firmly believe that this change is a win for our development community. This is a move we’ve been looking to make for quite some time, and I’m pleased that we’re now able to roll out what many of you have been requesting. Bottom line – it is important to continue to seek ways to make it easier for all of you to get your hands on the tools you need to innovate. We hope you agree this is a step in the right direction.
Here are the facts:
To purchase a license, please go to:
Here are the reasons:
You may be curious as to the reasons behind this reduction. Well, we launched the subscriptions program over a year ago, in September 2007. Since then, many of you have told us that the original pricing made the offerings out of reach for their budgets. Our goal here is to further expand the community of developers that have access to the tools they need to innovate. We’ve listened to your feedback and are pleased to announce that we’re taking steps to make the full NetWeaver platform much more accessible and affordable to our development community. The NetWeaver Development Subscription is still the same package with the same benefits and components as before. You will still receive the comprehensive NetWeaver platform, tools, resources, and knowledge (Virtual TechEd) along with bundled support and many great features.
Here is what this means for existing subscribers:
A couple of words to existing subscription customers. To provide compensation for the price difference, we have a special program for current customers who bought their subscriptions on or after September 1, 2008. All customers who either bought a new subscription or renewed their subscription on or after September 1, 2008, will receive a six-month extension. We are sending notifications to all customers who qualify for the extension.
Here is an update about country availability:
Today the subscriptions are still only available in Germany and in the US. Many of you have been asking at recent TechEds when we will offer the subscriptions in other countries, particularly in Canada, Brazil, the rest of Europe, and Australia. We are looking at various options and we really want to make the expansion happen as soon as possible; however, we cannot give you a date at this time.
Here is a note about the free downloads:
A final note regarding the free downloads. For those of you who do not need a license for commercial reasons, we will continue to offer several components of SAP NetWeaver as free download trials. We are working on a final and official download policy. Please keep in mind that the free software and tools are available for non-productive use only, whereas the SAP NetWeaver Subscription license provides commercialization rights, including the license to sell, demo, and market applications developed with an active subscription. Commercialization is key for developers and ISVs building commercial applications and solutions with SAP NetWeaver. The subscription also offers the ABAP™ programming language and Java development tools and resources, bundled support, including premium content, knowledge bases, and permissions to register ABAP namespaces. For a comparison between the free downloads and the subscriptions, please go to:
I read this blog with great pleasure :-) I think it’s great that SAP has chosen to re-price the subscription program, allowing more people to access it.
Peru is not listed on the soon to be available countries…No surprises…I might be the only one to buy it LOL However, I think it’s great that you’re planning to extend the limits of the program.
Thanks for this!
Greetings,
Blag.
Ex tax the price is £893
The US price is £775 + tax
What’s the US tax hit?
Strange numbers though … didn’t you ever hear of the psychology of €995?
Thanks for listening to the community … we appreciate it.
Nigel
The US price does not include tax because tax is state-based, depending on your shipping address.
That is even better.
Nigel
Ideally, SAP would benefit from making the SDN subscription much like ‘an offer you cannot refuse’, and that would be at or about $30–45/month, or approximately $500 p.a.
Congratulations on the current move.
Regards
/am
Can you please give an update on the timing of the SDN Subscription rollout in Canada? I am anxiously waiting to subscribe.
Any feedback would be greatly appreciated.
Regards,
SP
Currently we have no plans to expand the subscription program to other countries in 2009. We are looking into other options to provide the community with additional subscription program services. If you are an SAP partner, you could purchase a partner license through your local partner manager in Canada. We apologize for any inconvenience this may cause you. Regards,
Claudine
Regards
Marius
Unfortunately we are not going to roll the program out in S. Africa this year. From a legal standpoint, the license is supposed to be used in the country where it’s been purchased. You may consider purchasing a copy of the NW Development License (perpetual) through direct sales or through your local partner manager (if you are an SAP partner). Best,
Claudine | https://blogs.sap.com/2008/11/17/subscriptions-price-reductions/ | CC-MAIN-2018-13 | refinedweb | 869 | 61.16 |